35664, We remove the highly correlated features to avoid the problem of multicollinearity explained earlier
13485, Check versions
31347, Implementing Stacking
6731, OverallQual Vs SalePrice
43197, Ensemble two models: NN + LGBM
8278, Preparing Data for Modelling
36469, Images from ROLLS ROYCE ENGINEERING SPECIFICATION INDEX
12805, Load the Data
7583, For all SF we further add some of the outside area values
15367, Title feature
39187, LARCENY/THEFT is the crime category with the highest number of occurrences. Where are these occurrences most concentrated?
7998, Train SGD Regressor
41868, XGBoost baseline model
27334, Label plot
20171, Of the parameters below, we will be playing with Gamma and C, where
13085, Random Forest Classifier
3823, First we graph the population distribution of survivors for reference
497, We extract the first letter of the assigned cabin and then map it into a category
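For illustration, a minimal pandas sketch of this step (the Titanic-style Cabin column and the 'U' placeholder for unknown cabins are assumptions):

```python
import pandas as pd

# Toy frame with a Titanic-style 'Cabin' column such as 'C85' (assumed schema).
df = pd.DataFrame({"Cabin": ["C85", None, "E46", "C123"]})

# Take the first letter of the cabin as the deck; unknown cabins become 'U'.
df["Deck"] = df["Cabin"].str[0].fillna("U").astype("category")
print(df["Deck"].tolist())  # ['C', 'U', 'E', 'C']
```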
1305, Observations
35591, Transfer Learning
10598, Step 2 Know Your Data
27974, countplot
29440, The list of common places includes strings like "earth" or "Everywhere". That's because this field is the user's input, not automatically generated, and it is very dirty
11209, Use another Stacking function from mlxtend
142, Decision Tree Classifier
26056, Examining the Model
36787, Ambiguity
18473, CompetitionDistance
34975, Library
18028, Perform the gridsearch
12753, Inspired by such features we can add another FamilySize column and IsAlone column
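A minimal sketch of these two derived columns, assuming Titanic-style SibSp/Parch counts:

```python
import pandas as pd

df = pd.DataFrame({"SibSp": [1, 0, 3], "Parch": [0, 0, 2]})
df["FamilySize"] = df["SibSp"] + df["Parch"] + 1   # +1 counts the passenger
df["IsAlone"] = (df["FamilySize"] == 1).astype(int)
```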
26864, The box filter aims at replacing the old pixel value by the mean of all the pixels in its neighbourhood
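A small illustration of the idea with scipy (the toy image is an assumption):

```python
import numpy as np
from scipy.ndimage import uniform_filter

img = np.random.rand(8, 8)            # toy grayscale image
# 3x3 box filter: each pixel is replaced by the mean of its neighbourhood.
smoothed = uniform_filter(img, size=3)
```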
32983, Polynomial regression
24267, Observations from the Pearson analysis
9956, Random Forest
13659, Fare
4784, We can zoom in on the 10 most important variables according to the Pearson correlation coefficient and check the matrix
13669, SVM
12375, Box Plot of the sale price over the whole dataset
22284, GridSearchCV and RandomizedSearchCV are excellent tools for determining the best hyperparameters for your models. This can increase your model accuracy significantly
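A hedged sketch of how such a search typically looks (the estimator and parameter grid are assumptions, not the notebook's actual setup):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=0)
param_grid = {"n_estimators": [100, 300], "max_depth": [3, 6, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```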
26853, Model Training using Vowpal Wabbit Algorithm
4047, We can drop out the Passenger ID column
42231, Numerical columns within the dataset
16517, Let's use PyCaret
24273, Perceptron
3447, check that Age Title boxplot again
8337, Surprisingly only two features are dominant OverallQual and TotalSF
22749, Crimes by year month
12069, TotalBsmtSF
34507, Our param grid is set up as a dictionary so that GridSearch can take in and read the parameters
23720, Storing best hyperparameters for XGB Regressor into bestP DataFrame
35641, Kindly upvote the kernel if you like it
18352, SCALING VARIABLES
39123, With Random Forest
1717, Import Datasets
41592, User Features
9590, Finding missing values
21657, Using glob to generate a df from multiple files (duplicated; Trick 78)
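A minimal sketch of the trick, assuming a folder of same-schema CSVs (the path pattern is hypothetical):

```python
import glob
import pandas as pd

files = glob.glob("data/part_*.csv")            # hypothetical file pattern
df = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)
```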
10972, Linear regression L1 regularisation
12570, Now that we are ready with the pre-processed data, we start feeding the training data into machine learning models and making predictions on the test data
10211, First start with passenger s Age
31891, logistic Regression
7109, We use five models in the VotingClassifier, namely logistic regression, random forest, gradient boosting decision tree, support vector machine, and k-nearest neighbors
4572, We'll check again if we have filled all missing values
1934, Building remodelling years and age of house
11657, K Nearest Neigbors
18582, It's important that you have a working NVIDIA GPU set up
6738, LotFrontage Vs SalePrice
18236, This is a 5 layers Sequential Convolutional Neural Network for digits recognition trained on MNIST dataset
27658, In order to avoid the overfitting problem, we need to artificially expand our handwritten digit dataset
14253, Sibling brother sister stepbrother stepsister
28592, Fireplaces
29986, We'll use as the HDF5 interface and multiprocessing for image pre-processing
15662, xgboost
28731, Category s by items sold by day Crosstab
4036, Correlation with SalePrice
30763, Save data
24954, Embarked in Train set
35914, Callback
34656, Price in USD
33297, Housekeeping
11189, Elastic Net Regression
4280, Exploratory Data Analysis
25677, Text Analysis Character and Word counts
13794, Multi layer Perceptron
7236, Finding the optimum Alpha value using cross validation
23216, We need a black-box function to use Bayesian optimization
16431, Random Forest
15114, As our last step before modeling, we separate the combined dataset into its train and test versions once again
14512, Observations
3980, Describe the data
14258, Fare Range
26426, Implement new feature AgeCat
21795, Quadratic Discriminant Analysis
7034, Fence quality
26594, TASK 5 TRAIN THE MODEL PART A
2332, Take Your Position
2536, In this small part we isolate the outliers with an IsolationForest
20721, Neighborhood column
2822, Split the data into a training set and a validation set
28630, GarageYrBlt
18779, PoolQC: the data description says NA means No Pool. That makes sense given the huge ratio of missing values (99%) and that the majority of houses have no pool at all in general
2436, We can also take a look directly at what the most important coefficients are
20475, City registered not live city and not work city
13415, Cross Validation
15747, Cleaning Data
27000, Checking the improvement
14732, Evaluating the Model
14748, Evaluating Our RBF Model
42830, While the sample mean differences are more or less balanced around zero the sample variance differences are almost entirely on the negative side
16909, fillna Embarked
37832, Naive Bayes
6375, Find out the median value
15314, Let's assign an X value to all the NaN values
15247, Feature importance
26517, Cod prov Province Code distribution
28807, Prophet
14322, Sibsp
5319, We'll try to remove features with the lowest variance
30079, y hat consists of class probabilities
3065, Initial
37991, 1. Remarkable after the first epoch
40795, Categorical Features
22751, Function to Model SIR Framework
12158, Criterion
22138, GarageArea Size of garage in square feet
43126, Random forest
16842, Now there is only one missing value inside the Fare column in test data
721, drop these columns
39149, Load data into DataBunch
25864, Linear Model Prediction on test set
4328, Minors children age 18
3421, Families or groups traveling together may have had similar prefix codes for example
10173, Can also be used to analyze null values
15347, In case you wanna take a look at the data
20733, TotalBsmtSF column
15307, Training Decision Tree Model Again
34011, No outliers
19750, How well does our process work?
24520, There are some customers with seniority less than zero
14201, do some plots
16942, The feature importance is not exactly the same but it is close
30140, Callbacks Optional
25083, How about HOBBIES
31869, Class Balance on fly with XLA
34332, Preprocessing test data
40393, Define Model
11963, We now create a function that would predict each value depending on the weights set by us We can tune these weights however we want to get the least error possible
38685, Probability of melanoma with respect to Age
13419, Ensemble Modeling
42744, Clients with a short overdue period are the most numerous
28000, Needed Libraries
21346, Understanding the Model Better
30889, we compute predictions on the test data set and save them
22426, Understanding the figure subplots and axes in matplotlib
40692, Note that the highest demand is in certain hours; this is because in most metropolitan cities these are the peak office hours, so more people would be renting bikes. This is just one plausible reason
4466, Checking for missing values in dataset
6093, Outliers can affect a regression model by pulling our estimated regression line further away from the true population regression line
30326, For neutral samples use original texts as they are
4821, Log transformation is just a special case of the Box-Cox transformation; let us apply it to our prices data to make it more normal
28156, Import the BERT tokenizer, used to convert our text into tokens that correspond to BERT's vocabulary
14307, Analyze the Correlation between Features
10236, With this we are ready to get predicted values and submission file
10608, MachineLearning 102 Intermediate Working on Feature Set
36533, We now generate a list of pairs from each of the classes
35762, Reference: scikit-learn.org/stable/auto_examples/model_selection/plot_underfitting_overfitting.html
25417, Some plots
18063, Evaluate Model Function
3580, And treat the missing values as None, like the BsmtCond variable
28058, Categorical variables
27158, GarageType Garage location
21564, FamilySize
29186, Finalizing X and scaling train test separately
4137, Basic PyTorch NN model for Regression Analysis
36087, Inference
41363, Pool data is not so useful for us
29861, data element function
21838, One hot Encode Categorical Variables
23976, Model
39880, Drop Id and SalePrice column
27051, Malignant category
12434, Filling all null categorical and numerical features e features that are almost constant
42860, Train the model
20776, T SNE applied on Doc2Vec embedding
29470, Univariate analysis of feature word common
1779, ROC Curve
12745, let s deal with Fare and Embark
12937, Fare Column
12831, Some insight in Fare information
40178, Code from notebook
17561, Check Missing Values
34626, ConvNets
275, Prediction
22787, Barplot
24034, It is also interesting how price affects sales
11211, Check predictions: are they on the same scale as SalePrice in the train dataset?
10546, Random Forest Regressor
6669, Random Forest Classification
4606, Other Categorical Features
7006, Wood deck area in square feet
1518, Still, we have a lot of features to analyse here, so let's take the strongly correlated quantitative features from this dataset and analyse them one by one
20353, Our plot of train and validation loss looks good
41000, Basic Block
36779, Test Data
11278, Looking at the Name column: there is a title like Mr or Mrs within each, as well as some less common titles like "the Countess"
24515, This is much better than the severely imbalanced case but still not as good as training on the original dataset
34349, The embedding sizes we are going to use for each category
10599, Lets find out more about Training Set
11450, We use the Name feature to extract the Titles from the Name so that we can build a new feature out of that
15582, Gaussian Naive Bayes
27154, BsmtQual Evaluates the height of the basement
25082, compare the average daily sales per event
20806, SalePrice Distribution Visualization
26532, Submit
29544, we ll get Classification Report and Confusion Matrix
38041, check out the Probability of picking a house in the Neighborhood OldTown
28286, prepare target dataset too
37434, Check for bigrams in selected text
15997, Searching the best params for Random Forest
23991, For missing values in numerical cols we fillNa with
27067, Count keyword
41266, Alternatively you can use plt
22869, Classification Fully Connected Layer FC Layer
29716, Scatter plot
4168, Square root transformation
34876, Now we take care of common misspellings (American/British vocabulary) and replace a few modern words with "social media". For this task I use a multi-regex script I found some time ago on Stack Overflow
38133, We remove Name Cabin values
6798, Model
22143, YearBuilt Original construction date
30413, LSTM models
39235, BASIC PREPROCESSING
23219, Assume you find some kernel with different hyperparameters; then you want to check whether those values are better than yours or not
16458, Families with 2 to 4 members are more likely to survive
34445, Combine the data
37192, Data pre processing
41011, And here we are 11 passengers were added to a group
18903, Sex Feature
20339, Make the train validation split
25195, we come to the most important part of our notebook
6134, The air is electrifying
26874, pred gbc pd
16090, Parch vs Survival
4809, Since stacking gave us the best scores we would used that to get the predictions to be used to submit our scores
12752, As presented in the visualization below, the survival chance of a passenger with 1 or 2 siblings/spouses and 1, 2, or 3 parents/children is significantly higher than for a single passenger or a passenger with a large family
4682, Since there is no NA category in the Utilities feature, I preferred to fill the 2 missing values with the most common value rather than deleting the two rows. Information is precious: think, we have only 1459 train rows for 1460 predictions to do
4979, I limit my analysis to the letter of the Cabin
42132, Model Evaluation
27514, Reshape data
41835, Improve the performance
42525, Classification Report
12140, Training the model 2
22064, In the next step we calculate the fraction of words which are available in both our data and the spaCy model, as well as the fraction which is available in train and test text
12607, The ratio of survived and not-survived passengers for S and Q Embarked is similar, but passengers from C Embarked have higher chances of survival
29153, GarageQual GarageType GarageFinish GarageCond Fill with None
10345, we add up some numeric features with each other to create new features that make sense
18385, Try some more keywords, using a for loop to iterate over them
9663, Submission
14304, Creating the feature Age
13453, Age Missing Values
36696, Normalizing the data
9312, Again not quite what we want
12701, There's a reason I chose to look at Age after Name: I'm going to use the new Title feature to calculate the missing Age values, a technique called imputation
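One common form of this imputation, sketched under the assumption that a Title column has already been extracted from Name:

```python
import pandas as pd

df = pd.DataFrame({"Title": ["Mr", "Mr", "Miss", "Miss"],
                   "Age": [30.0, None, 18.0, None]})
# Fill each missing Age with the median Age of the passenger's Title group.
df["Age"] = df.groupby("Title")["Age"].transform(lambda s: s.fillna(s.median()))
```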
11253, Submission
7480, 3f Age
42405, Baseline based on Means on the same day F1 F28 Not Active
10815, Firstly let s check what categories could have an impact on Title by building correlation matrix
22508, After training our model, let's check its accuracy
38585, Splitting data into training part and testing part
892, Random Forest Classifier
5614, On merging them I found that the merging by CHAID is wrong since they only consider SalePrice
7823, Split data for training and Testing
5063, We start by plotting all numeric features according to the current pandas dtypes
6464, Imputer for the string categorical columns
31717, Tuning LightGBM
14851, SibSp
21638, Aggregation over timeseries resample
10510, Parch
14461, back to Evaluate the Model model eval
40047, start with resnet
3006, DBSCAN: Density-Based Spatial Clustering of Applications with Noise
33802, Polynomial Features
37163, Dataset: a Dataset wrapping tensors. Each sample will be retrieved by indexing the tensors along the first dimension
42223, now bring it all together
37321, Padding parameter selection
27436, Applying same Age imputation with respect to Pclass to ensure same logics in both datasets
20244, Ticket first letters and Cabin first letters are also needed
26922, First of all we need to load modules for linear algebra and data analysis as well as gensim
11753, Now that the skewed variables have been corrected, we are going to get dummy variables for all of the remaining categorical variables
6556, Survived People with no families with them likely survived
15174, Feature engineering
17749, 3rd class passengers skewed younger than 2nd class passengers who skewed younger than 1st class passengers
35744, Another look at the feature to output correlations
33410, Defining the architecture
31909, A prediction is an array of 10 numbers
14397, Map each Sex value to a numerical value
30772, Predictions are generated as usual
24582, Create dataloaders
2457, Removing quasi constant features
16511, KNN Classifier
28005, Hyperparatemers
31075, GarageYrBlt
3705, Regression
16926, Stacking Classifier
15186, Exporting survival predictions
3425, Excellent we ll make dummy variables for each TicketPrefix category
26653, it is safe to
42265, Target Variable
1995, Multivariable Analysis
40959, Filling in the missing values
36516, Family Size
15656, Passive Aggressive
40947, Features Checking
7654, Modelling
7206, Ensemble Average Predictions
37543, As I have already said, I will be using Digit Recognizer data for the CNN-based comparison, so first let's read the data
17011, According to the plot survival rate for people with missing Age is lower than for people that have age value
12114, Pred ML Evaluation
9664, Data exploration and visualization
11052, The pipeline log implementation is given below
28334, Analysis based on INCOME TYPE
41170, Look at how a normal case differs from a melanoma case. We look at some samples from the external data of Alex Shonenkov
30946, Training code step
29473, Distribution of the token sort ratio
31507, We segregate the data features into X dataframe and the target variable in the y dataframe
13746, Random Forest
29874, TTA
42564, First define 5 Fold cross validation
22136, GrLivArea Above grade ground living area square feet
2688, Tree Method random forest
13958, Columns
35465, Visualiza the skin cancer nevus
8397, let s use the selector in the new features
38926, Combination Vis
194, Elastic Net Regression
3540, Missing values for Categorical features in Bar chart Representation
32707, Creating an Embedding matrix
11899, ML
25957, Combine Product, Aisle, and Department info into a single DataFrame for better understanding
28331, Checking the Imbalance of Target Variable
36276, As the most frequent value in the train set is S, let's replace the null values with it
38738, This looks better as we can now guess the missing age values more accurately than before
20970, Accuracy of Model
12418, ML import with different models
13474, Using 80 20 Split for Cross Validation
5882, Deal with outliers
6194, Support Vector Machine using Linear kernel
114, fare group
29212, TotalBsmtBath Sum of
2272, Creating Categories
7504, Once weird values and outliers have been removed from the training dataset, it's time to deal with missing values and encoding
1916, Electrical
994, A simple method for imputing the missing values for categorical features is to use the mode; this is what we'll do for Embarked
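A minimal sketch of mode imputation (toy data, assumed column name):

```python
import pandas as pd

df = pd.DataFrame({"Embarked": ["S", "C", None, "S"]})
df["Embarked"] = df["Embarked"].fillna(df["Embarked"].mode()[0])  # fills with 'S'
```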
12243, NumPy basics
7291, Gradient Boosting
33585, Weighted Box Fusion
20629, Punctuations present in the whole dataset
17461, Convert categorical data to Numerical data for process
34463, Introduce Lags
34622, Time to check if our model is actually working
26632, Train the model
2763, Finding the categorical and numerical feature
15483, Choosing a Machine Learning Algorithm
41605, look at the 0th image predictions and prediction array
26575, When testing the model we also want to use a data generator to get images into the model but we do not want to transform them
31124, All data preparation process on test set
1322, Which features are categorical
32130, How to get the second largest value of an array when grouped by another array
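One possible NumPy answer to this exercise (the toy arrays are assumptions):

```python
import numpy as np

values = np.array([5, 9, 7, 3, 8, 1])
groups = np.array(["a", "a", "a", "b", "b", "b"])
# For each group, sort its values and take the second largest.
second_largest = {g: np.sort(values[groups == g])[-2] for g in np.unique(groups)}
# second largest per group: a -> 7, b -> 3
```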
40679, Model Reconstructions
33688, Leap year or not
19810, One Hot encoding Pros Cons
38655, Age
1986, We can check precision recall f1 score using classification report
29136, Datatype check
16088, Pclass Sex Embarked vs Survival
14252, SibSp Feature
26971, Transfer Model
31725, Target and Genders
6192, Logistic Regression
7565, First look with Pandas
13315, We can first display the first rows of the training dataset in order to have an overview of the parameters
29946, We can also save the features to later use for plotting feature importances
10967, Dummy encoding
24988, Using LASSO for feature selection
25211, Analyze Garage Area
42221, Last up: because the problem is multi-class classification, the network ends with a number of units matching the number of outcome possibilities, followed by a softmax activation, which is the most appropriate for single-label multi-class classification (sigmoid suits multi-label instead)
3349, Since Deck T is negligible, assign it to the most frequent deck value
8165, Boosting is what the kids want these days
30921, Load data back
16445, we are done with data cleaning part
7595, other numerical features
17592, First we ll separate categorical and numerical features in our data set
14547, Gender
37994, The frames Display Target Density and Target Probability
17606, Linear SVC
1168, There are three NaNs for PoolQC that have a PoolArea
11860, Blending
33229, Resnets
19800, MEDIAN Suitable for continuous data with outliers
6248, Fare
32585, The best object holds only the hyperparameters that returned the lowest loss in the objective function
3496, Fit the best model and produce a confusion matrix
3514, Final Model Comparison
13452, SUBMISSION FILE choosing Gradient Boosting
18542, How about categorical variables
40192, Countplot
25999, Dropping the Middle Man
24735, K Means Clustering
17951, Random Forest Model
37114, train this model using 5 fold stratified validation
20191, Continuous Features Data Distribution
1512, we ll try to find which features are strongly correlated with SalePrice
19014, Experiment 4
23300, Normalizing Data
15151, Embarked vs Survival
26065, t SNE
1213, Transforming $Y = \log(1 + X)$
953, Parameters
24967, Predict
27115, The mean and median values of MasVnrArea are significantly different
2209, Logistic Regression
21421, SalePrice
42838, Submitting the Test Predictions
31033, If you want to match repetitive characters to n numbers, change the return line in the rep function to grp 0 n
16451, Parch
36623, Building a Sequential Model
16911, Export whole data
7897, I explore the entropy to check whether the values can give good learning to the algorithm
989, let s do CatBoost
21732, I am now interested in checking how the item condition relates to the price
16273, Dtypes
18163, Combining the question1 and question2 columns into a single column
10992, Predict
5955, Exploratory data analysis
14643, extract information from the Cabin field by plucking the first character
30602, For each of these pairs of highly correlated variables we only want to remove one of the variables
25384, Visualize the output of convolutional and pooling layers
41201, fit this model ONLY on X train and y train as of now
27429, Imputing age by Pclass
18616, Missing Data
19958, Ticket
5596, Tuning the algorithm
28144, The list entity pairs contains all the subject object pairs from the Wikipedia sentences
33316, Submission
37668, Model compilation
39970, The younger passengers had a higher survival rate
9231, Test Model 75 15
17908, 177 missing ages in train set
28624, ExterQual
24048, Detecting missing values
17917, DATA TRAIN
32404, Convert Data to SQuAD style
22121, Took 5 mins for model to run
26549, The MNIST database is a large database of handwritten digits that is commonly used for training various image processing systems
42611, Optimisation 1
15667, Linear Regression SVC
2771, Observations
12361, MSZoning Identifies the general zoning classification of the sale
38846, Family houses are too high
856, In df_test, some values for Age and many values for Cabin are missing
5358, Display the relationship between 3 variables as a 3D mesh surface with solid lines
26896, Create Submission File for approach 7
22936, SibSp and Parch are two features that are closely interrelated
7240, Interpreting the Model With Shapley Values 2
2635, I couldn't find a way to hide the output of the cell below
27107, Of which, 38 in total are numerical features
17394, Embarked wise Survival probability
37370, Logistic Regression
18123, Splitting Training Dataframe prior to training ML algorithms using cross validation
30380, Add temporal features
22874, Model Evaluation
17347, Bagging Classifier
24592, Generate test predictions
8897, RIDGE
24853, Performance during training
25239, Submission
40476, Random Forest Classifier
35631, Moving in a random direction away from a digit in the 784 dimensional image space
3590, Statistical variables Ex min max mean std skewness kurtosis etc
9358, Create new feature for FamilySize which combines Parch and SibSp
23415, use the sigmoidal decay as the learning rate policy
19437, Building the Model
23952, In our dataset, excluding the dependent feature, we have 80 independent features. If we consider all 80 columns as independent features, our model accuracy may decrease; as the number of features increases, accuracy decreases. This is called the Curse of Dimensionality
21397, Some utility functions
30757, Learning Curve
19438, Creating Loss function optimizer and checkpoints
2740, A dendrogram plot is a tree diagram of missingness that reveals trends deeper than the pairwise ones visible in the correlation heatmap
4875, Gradient Boosting Regressor
34957, Clustering
2769, Distribution of Data
1933, Garage Area
3018, One of the other methods that we tried that did not work well in selecting the feature and improving the accuracy was Backward Elimination with Adjusted R squared
22391, Predict using the model
12286, The five-number summary (or 5-number summary for short) is a non-parametric data summarization technique
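A quick NumPy rendering of the five numbers (minimum, Q1, median, Q3, maximum):

```python
import numpy as np

data = np.random.randn(1000)
minimum, q1, median, q3, maximum = np.percentile(data, [0, 25, 50, 75, 100])
```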
11377, Decision Tree Regressor
7351, A good model is RandomForestRegressor
15701, Create categories for Age Fare
35372, Initialize loss and accuracy classes
4872, Split the data into Train and Test
10708, One hot encoding for Training data set
4735, A skewness value of 0 in the output denotes a symmetrical distribution
42424, Full Square Vs Price Doc
1707, Imputations Techniques for Time Series Problems
10434, I think we are good for now
32541, Missing Value
15930, Survived
17563, Exploratory Data Analysis
33023, Splitting
16505, Next step: in order to increase the precision and get more accuracy, I will be doing more feature engineering, such as trying to grab the title from the names, the cabin letter, and ticket information
13429, Implementing Neural Network
4927, LightGBM
37902, Top 10 Feature Importance Positive and Negative Role
27134, we can analyze the rest of the continuous numerical features
9653, Since Neighborhood and LotFrontage are highly correlated, we fill LotFrontage's NaNs using Neighborhood
33694, Data from M5 Forecasting Accuracy
24481, Training for 30 epochs strategy 2 improves validation score to 0
5826, Custom Implementation
17801, We also map Fare to 4 main fare segments and label them from 0 to 3
24724, Section 2 Supervised Learning Classification
10904, Check the Feature Importances returned by the Tuned Decision Tree
15696, Most passengers (68%) were traveling alone
43292, score 0
5561, Using KNN (K-Nearest Neighbors) with number of neighbors = 5 to impute missing values
36947, One Hot Encoding the Categorical Features
21072, Testing our technique on small corpus
38948, Putting the principles in practice
22354, Extracting the test data from the test
15303, K Fold Cross Validation
42352, Looks like no relation to frequency, hence using label encoding for it
29095, Create fake predictions
15016, Parents Children
19852, Combine discretisation with label ordering according to target
8363, There are several imputing techniques; we use random numbers from the range (mean - std, mean + std)
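A sketch of that technique on a toy series, assuming a uniform draw from the range (mean - std, mean + std):

```python
import numpy as np
import pandas as pd

s = pd.Series([22.0, 35.0, np.nan, 54.0, np.nan])
mean, std = s.mean(), s.std()
# One uniform draw per missing entry, from the range [mean - std, mean + std].
fill = np.random.uniform(mean - std, mean + std, size=s.isna().sum())
s.loc[s.isna()] = fill
```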
17401, We must understand the type of problem and solution requirement to narrow down to a select few models which we can evaluate
3788, GrLivArea/SalePrice and TotalBsmtSF/SalePrice are linearly related; those 2 continuous variables are both about area (square feet), and they are highly related to house SalePrice
3361, iloc returns a Pandas Series when one row is selected, and a Pandas DataFrame when multiple rows are selected or if any column in full is selected
7510, Data preparation
590, Load input data and combine the available features of the train and test data sets (test, of course, doesn't have the column that indicates survival)
21559, Age
9224, Visualize the Best Model Fit for RFC
28162, Predict and Evaluate on Holdout Set
4914, Our Feature Engineering begins with Handling Missing Data
40427, Leaderboard
30658, we have a nice and descriptive table
10990, Plot the correlation between labels and some features
13026, SibSp
15358, Advanced Uses of SHAP Values
24998, Adding together numeric and categorical columns
16779, Submission File Preparation
10639, How to use as feature in prediction model
24005, create a barchart to check the number of digits in each class
1982, We sum up the family members
10350, Transformation and Scaling
11978, After making some plots we found that we have some columns with low variance, so we decided to delete them
18657, Image Augmentation
23373, With the labels extracted from the data I now need the images loaded as numpy arrays
24968, V10 prediction
10497, So out of 891 examples only 342 (38%) survived and the rest all died
16033, For training the model we use the scikit-learn package
38964, Sample visualization of predictions
38702, Probability of melanoma with respect to Mean Color
32558, Sex
10120, Most people from Southamptom belong to class 3 that s why they have the lowest chances of survival
521, Seaborn Countplots
24306, start train
13287, Random Forest is one of the most popular models. Random forests (or random decision forests) are an ensemble learning method for classification, regression, and other tasks that operates by constructing a multitude of decision trees (n_estimators of 100 to 300) at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Reference: Wikipedia
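For reference, a minimal scikit-learn instance of this model (synthetic data and settings are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(rf, X, y, cv=5).mean())
```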
11044, Make predictions with trained models
7526, Although the IQR method suggests several outliers, for now I'm going to focus on the outliers whose removal is recommended by the dataset author
1842, Identify Types of Features
13763, Tune model using feature selection
16866, Tuning model
21689, What is the distribution of the categorical values across the samples?
30780, Make prediction using model trained previously
11540, Random Forest
22860, Plotting Ideas Number of unique items sold each month
15641, Feature Importance for random forest
22, GridSearchCV SVR
15793, There are a lot of missing values present in both datasets, which is not good for our model
27576, ps ind 15 x ps ind 06070898
30961, Some of these we do not need to tune, such as silent, objective, random_state, and n_jobs, and we use early stopping to determine perhaps the most important hyperparameter: the number of individual learners trained (n_estimators)
8999, I definitely want to drop the Utilities column
4198, Custom numerical mapping for ordinal categorical variables
36594, Train and predict
38903, Sunburst Chart
15616, Visualize Age Data
17429, get the s
40036, $L_{focal} = -\sum_{n}^{N} \sum_{k}^{2} \alpha_k \cdot t_{n,k} \cdot (1 - y_{n,k})^{\gamma} \cdot \log(y_{n,k})$
37462, Machine learning
6431, Reading the data
13931, Some modalities can be grouped
14604, Support vector machines
41700, Some appear to have weekend-like behaviour
28365, The Dependent Variable target
5496, Random Forest Regression
36668, Very interesting! Through just basic EDA we've been able to discover a trend: spam messages tend to have more characters
37660, Save predictions
41698, Minutes looks pretty promising
29680, Data Preprocessing
12901, Modeling
12916, View statistical properties
20176, Using all data from train and test files for Submission
4696, This transformation is important because it's the target feature
2305, Create calculated fields from other features 2 ways outlined below
11003, Parch
30350, I use a model from a marketing paper by Emmanuelle Le Nagard and Alexandre Steyer that attempts to reflect the social structure of a diffusion process
26509, To predict values from test data, the highest probability is picked from the one-hot vector, indicating which digit an image most likely is
4982, Cleaning
30542, Baseline Model
43021, Gradient Boosted Tree Classifier
32817, Show the influence of economic factors on housing prices
28081, Pipeline
30896, Plot the locations where the regionidcity values are missing
228, Model and Accuracy
42095, Here labels are the digits which we have to recognize
39971, More passengers with 4 family members were rescued
7494, 4f Sex Mapping
8308, Neighborhood vs mean SalePrice
24751, Transforming skewed feature variables
40844, Variable Description Identification and Correction
1774, Separating dependent and independent variables
2617, Model and Accuracy
4524, Stacking models
25961, Most Reordered Products
32121, How to filter a numpy array based on two or more conditions
3603, Distributions
13601, One hot Encoding
24300, Build Neural Network Model
32214, Add lag values for item cnt month for month city item
33515, Andorra
4225, Frequency Encoding
34653, Retrieving shop types
22651, Log Loss
21945, Python
20595, Data Interpretation and Visualization
38229, Benchmark
6551, Creating a function for bar charts
8030, Age
12118, We fix our skewness with the log function
2269, Ticket
9605, Family Size Details
33332, FEATURE 5 AVERAGE OF DAYS PAST DUE PER CUSTOMER
14866, Notice we only need the first letter of the deck to classify its level
13316, we can display the data type of each feature
31066, Data with new Features
42810, Prediction
15729, Random Search Training
17694, USING CHI-SQUARED METHOD
3303, correct SalePrice with a simple log transformation
38052, Effect of LandContour on SalePrice
42845, The Netherlands
2514, Logistic Regression
21618, Create one row for each item in a list: explode
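The pandas one-liner this heading refers to, on toy data:

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "tags": [["a", "b"], ["c"]]})
print(df.explode("tags"))   # one row per list element, id repeated
```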
25762, Before submitting run a check to make sure your test preds have the right format
959, Create a dataframe from the lists containing the feature importance data for easy plotting via the Plotly package
23245, We have 3 missing features
26756, Preprocessing
25084, How about HOUSEHOLD
20739, KitchenQual column
6944, Apply the visualization method t-SNE for clustering our data
40612, look at the straight average
6023, Parameters
2523, Random Forests
24532, Total number of products by segmentation
28912, It's usually a good idea to back up large tables of extracted and wrangled features before you join them onto another one; that way you can go back to it easily if you need to make changes to it
8776, Passengers with a high fare mostly had a better chance of survival
6308, Random Forest
2476, Training
23262, Categorical Categorical
9034, The easiest way I know to summarize the variables is by using the
21783, Below is the evaluation code for the model
20587, KNN
5473, Can we quantify the quality of the decisions within a Tree?
4568, get some info about the Target Variable
27406, Prepare Submissions
2418, That should take care of the last couple of missing values
37624, Loading the data
26210, We write a function to check if the number of unique image ids match the number of unique images in the folder
9364, Save wrangled training and test data to files
26179, Rearrange the prices from lowest to highest
31218, Data cleaning Preprocessing
42754, In order for Keras to know which features are going to be included in the Embedding layers, we need to create a list containing, for each feature, the corresponding numpy array (13 in total for us). The last element of the list will be our numerical features (173) plus the categorical features that we decided not to include in the Embedding (3), for a total of 176 distinct features
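A hedged sketch of building such an input list (column names and sizes are hypothetical, not the notebook's actual 13/173/3 split):

```python
import pandas as pd

df = pd.DataFrame({"shop_id": [0, 1], "item_id": [5, 7], "price": [9.5, 3.2]})
cat_cols = ["shop_id", "item_id"]          # one Embedding input per column
num_cols = ["price"]                       # numeric block goes last

X_inputs = [df[c].to_numpy() for c in cat_cols]
X_inputs.append(df[num_cols].to_numpy("float32"))
# model.fit(X_inputs, y, ...) with one Keras Input per list element
```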
42279, Show some image
25723, Hyper parameters Setting
38762, Decision Tree
28487, Reducing columns to type int8 if possible
36548, Exploring top features
15934, Embarked
11928, You can combine precision and recall into one score, which is called the F-score
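The usual definition is the harmonic mean of precision (P) and recall (R), F1 = 2PR / (P + R); a quick check with scikit-learn:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
p, r = precision_score(y_true, y_pred), recall_score(y_true, y_pred)
assert abs(f1_score(y_true, y_pred) - 2 * p * r / (p + r)) < 1e-12
```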
29420, This is just a fun part I loved this thing i found in one of the notebooks so i added it in mine
6050, MSSubClass: a lot of classes that I don't know the meaning of
20691, Develop Convolutional Neural Network Models
34296, This is the input image that will be used for the convnet visualization
31041, Only Numbers
28935, Oops, you want the hidden layer variables, right?
37072, Dropping Features
31224, Features with positive values and maximum value less than 10
29837, Display summary statistics for order data
41050, About Customer
40928, Sample Fold Generator
25410, This loss function works well, but we need to be careful with the log: the log of zero doesn't exist, so we treat log(0) as the lowest value (-inf) and afterwards use np.nan_to_num
21466, To create a submission file
9063, FireplaceQu
29882, Submission
37021, Prices of the first level of categories
27583, Adding basic features
35251, These plots can explain the distribution of jaccard score
1352, Convert the Fare feature to ordinal values based on the FareBand
28672, PavedDrive
40426, Run AutoML
43105, Label encoding
25827, Comparing to the Santander dataset
9286, There is a positive correlation between Fare and Survived, and a negative correlation between Pclass and Survived
22327, Removing Punctuations
20366, Visualising the MNIST Digit set on its own
30669, Start with simple sklearn models
30574, Function for Numeric Aggregations
22705, Training and Evaluation
32722, Zeros and ones are just dominating
11640, Adaboost
29889, Finish getting the data ready to fit
29782, Encoder
8697, Price varies with neighborhood
37149, MODEL EVALUATION
42414, Visualizing Datatypes of Variable
12479, Name and Title
41473, Plot a normalized cross tab for AgeFill and Survived
29066, Numeric features basic engineering
23061, The beauty of pytorch is its simplicity in defining the model
17781, create a Decision Tree model to predict missing Fare value
10603, Survived is Output Variable
35510, First part is finding missing values for each feature
41545, How did we do
17261, Age Feature
36058, Feature Weekday
14902, Sex Pclass Fare and Survived
12412, CentralAir
43399, Prepare natural and non natural fooling targets
1728, Plot Gender against Survived
40611, Anisotropy of the space
37646, Adding interest acc to manager id to test acc to train data
18573, Gender
39217, Do the training
29937, Correlations for Random Search
39129, $x = \frac{x - \bar{x}}{\sigma}$
17733, Prediction and submission
12522, For missing values
19047, Lets create a dataframe with a few features we want to explore more
38897, Predict
3363, One more thing to note: the higher the degree you go in polynomial features, the more features it creates, and you might end up breaking your code, so it is better to add a break for this model
21649, Convert continuous values to categorical: cut
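A small pd.cut sketch (the bin edges and labels are assumptions):

```python
import pandas as pd

ages = pd.Series([4, 17, 25, 40, 71])
bands = pd.cut(ages, bins=[0, 12, 18, 60, 120],
               labels=["child", "teen", "adult", "senior"])
```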
7400, BsmtFinSF1 is highly skewed and its distribution is not normal based on the Shapiro Wilk test
6863, Feature Engineer Dealing with 0 values
32667, The charts below illustrate the dependent variable distribution before and after log transformation
41610, Before we make the days-since-reorder dictionary, we need to address the NaN values in that column
9859, Total data table value counts
27893, Data Augmentation to prevent Overfitting
9736, Import Libraries and Settings
6467, Preprocessing pipeline that takes the raw data and outputs numerical input features that we can feed to any Machine Learning model we want
26882, Score for A2 16546
25212, Some Outliers after GarageArea of 1200
12384, Line Plots for all continuous data fields
13732, COMPARING TRAIN AND TEST DATA
9741, Pclass Ticket class
43291, Prepare file for submission
22127, Boosting in Depth
23648, Get OOF and CV score
16731, correlation
23183, Findings: RF, DT, ETC, and ABC in particular give some features no importance (zero importance). On the other hand, GBC gives all the features more or less importance; it doesn't give zero importance to any feature. These are the tree-based models that have a feature importances method by default. LR, KNN, and SVC don't have this method. In this problem SVC uses an rbf kernel (plotting feature importance is only possible for a linear kernel), so it's not possible to view the feature importance given by SVC. Though it's trickier, we will try to get the feature importance given by LR
35109, We can construct the following table
13986, Missing values Age
12386, Here in order to make the machine learning model I have taken the threshold to be 0
5296, There are 221 features due to the large number of dummy variables
25806, I think this high diversity should be accounted for when building our predictive model
20845, We ll back this up as well
33767, Setting the Random Seeds
31253, Scaling
11830, first find out the columns which have missing values
22614, All apps are installed
33481, Join data filter dates and clean missings
28016, There are also non-English features
22836, Before we do that let s find out more about the 5100 items in the test set
2423, Defining Training Test Sets
4332, Embarked
15839, Sex
4189, Important numeric variables OverallQual and GrLivArea
30674, tokenize the tweets
23428, Common words
867, third model
14486, now appending both the test and train frames first
20967, Adding Output Layer
2643, we have removed 9 rows
5241, Converting Some Categorical Variables to Numeric Ones
27309, Here 1 is black and 0 is white
5294, For the categorical features I transform them to dummy variables but I ll drop one column from each of them to avoid dummy variable trap
20495, External Image Embeddings 2019 comp data
42225, Callbacks
15970, SibSp Feature
31104, Observing next 2 variables
22505, Checking accuracy with the following
21951, CNN
11105, Heat map of highly correlated Features
35379, Predicting and Submitting
38669, Random Forest
30373, Data Augmentation
10392, Random Forest
11119, Calculate Training and Validation Accuracy for different number of features
23454, Month
34696, Lagging target variable
24155, The organizers informed us that not all images include wheat head bounding boxes
10917, Ensemble
20180, Predicting Test Data
19381, Missing value
28345, Analysis Based on EMERGENCYSTATE MODE
3255, Box plots Neighborhood
15387, It is commented out because I don't recommend this
16640, Train Test data splitting
15924, Load and check data
32851, Data preprocessing
28053, Alternatively we can select a few columns and inspect within Spark
5029, Nice! We've improved our train and test scores and reduced the difference between the two, suggesting we were dealing with overfitting
5983, Bagging Classifier
11709, We are doing feature engineering for train and test
16275, DMatrix
22615, 0 of apps are inactive
16128, Create Dummy Values
15657, Perceptron
7371, I join the columns Hometown and Home country so that all 3 tables have the same structure and can be later concatenated
16932, And while the class distribution among those who survived is approximately equal, most of the third-class passengers died
7724, The following features have very few missing values, so we'll impute them with the most frequent value
12844, Confusion Matrix
36207, Comparing Scores
40634, Keyword replace NaN with string
24670, Prediction on test data
28701, If you hadn t figured it out already our brick laying friend enticed us in earlier on in order to explain Stacking
18986, Doing the same things that I have done for countries
2652, People with family have a 20% higher chance of survival than people travelling alone
28093, Input data are 1-D; we reshape them into a 3-D matrix
38706, Correlation Matrix
31269, Rolling Average Price vs Time for each store
5301, Eliminating samples or features with missing values
12942, Do the same with test set
3410, extract the letter character and make it a new variable Deck
17934, If Pclass high probability of survival is high
13754, Graph distribution of Pclass w r t other variables fare age family size
25884, Histogram Plots of number of punctuations per each class 0 or 1
23119, Findings
2235, Outliers Analysis
42948, Some Predictions
17773, Survival rate increases considerably with the fare value
2397, Using make pipeline in a ML project
21471, For swapping words we just randomly select the index of a word before the last one and swap it with its successor in the sentence
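A minimal sketch of that augmentation step (whitespace tokenization is an assumption):

```python
import random

def swap_random_adjacent(sentence: str) -> str:
    """Swap a randomly chosen word (not the last) with its successor."""
    words = sentence.split()
    if len(words) < 2:
        return sentence
    i = random.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)
```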
34739, Applying T SNE non linear method to LSA reduced space
9076, we have rows with Wd Sdng HdBoard and HdBoard Wd Sdng
16756, Feature Engineering
8222, Replacing NaN values for categorical features
12152, on to training and prediction
20729, MasVnrArea column
19016, Predict test csv and submit to Kaggle
23618, Transform category features to dummies
27856, Fill missing values in test set
17863, Build the second level ensamble model
17664, Observations
25242, Square Root Transform
37866, Data Visualization
38714, Make sure TensorFlow is doing all of its operations on the GPU and not the CPU
29770, Submission
16643, SVM
35566, we add Faron s features
31752, Submission
8904, Blending the Models
18888, We are slowly getting our dataset ready
2784, Model Building
28885, In addition to the provided data, we will be using external datasets put together by participants in the Kaggle competition
28779, The number-of-words plot is really interesting: tweets with more than 25 words are very rare, and thus the number-of-words distribution is right-skewed
5847, Surprisingly, only 2 features are dominant: OverallQual and TotalSF. Instead of using all 77 features, maybe just using the top 30 features is good enough (dimensionality reduction, in a way)
18474, Before deciding how to treat this we know there are infinite ways of filling missing values
34970, Testing Random Forest Parameters
32692, Evaluate model accuracy on the validation dataset
16585, Let's check which Fare class survived, along with their title
648, It s Mr Osen and Mr Gustafsson on Ticket 7534
40460, Average Sale Price
112, I have yet to figure out how to best manage the ticket feature
34362, Temp Atemp and humidity are normally distributed
13662, Dropping
19147, Wrangling non numeric features
11862, Submission
5567, Big congratulations! Now we don't have any missing values or outliers in our data
15085, SVM
3957, Creating Extra Features
39207, Submitting Predictions to Kaggle
9824, Parch Number of Parents Children Aboard
25833, Target Counts
37084, implement soft voting ensemble in mlxtend
38696, Image Size
28385, Economic Impact of COVID 19 on India
2816, proc_df replaces categories with their numeric codes, handles missing continuous values, and splits the dependent variable into a separate variable; max_n_cat is to create dummy variables for the categorical columns with 10 or fewer categories
17428, Invert the Survived Value
491, Age Feature
9343, Run GridSearch optionally
14765, The score for the new training sample is very close to the original performance which is good
12711, SibSp Parch
22504, Using AdamOptimizer to minimize cross entropy You can also use GradientDescentOptimizer inplace of AdamOptimizer
40794, The char 38
21033, PPS matrix
9954, Train Test Split
35571, Sales broken down by time variables
36465, Images from ARVALIS Plant Institute 1
29123, Preparing data for Pytorch
5084, compare this to the predicted values
30762, Score features
23268, Age
4322, Age
37091, Substracting normality
7342, XGBoost
19887, Here we were able to generate a lag-one feature for our series
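The one-step lag in pandas, for reference:

```python
import pandas as pd

series = pd.DataFrame({"y": [10, 12, 15, 14]})
series["lag_1"] = series["y"].shift(1)   # previous time step; first row is NaN
```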
6129, Square feet
29604, Interestingly the most incorrect was an example that is incorrectly labelled in the dataset itself
11990, Firstly call kfold for cross validation
15605, Make first Submission
18353, linear regression
4898, criterion A function which measures the quality of a split
25441, Evaluating the Model
4177, Top coding important
15982, Embarked Feature
1724, CODING classes gather here
10686, Regularization Models
42393, Do stores sell different kinds of items? Nope, all stores have the same kind of items
21355, Model Summary
11515, Grid search for SVR
14780, Data points for SibSp 1 are very few; based on the survival ratio for SibSp, possible features can be
43377, Tensorboard Visuals
13916, It looks good so let s proceed with creating target and features array
14401, Map each Embarked value to a numerical value
5103, Feature visualization with Target variable
9898, Embarked
7031, Garage condition
4652, Quick Observations
4314, We could conclude first class passengers were given preference over the other classes given the time in the history of the event
2914, I Create a copy of datasets
38425, It works
4611, One thing to note is that the tree-based models took approximately 3 to 6 seconds to run using CPU only. That is quite concerning given that there are fewer than 1500 training samples. When we perform model selection and hyperparameter optimization, the running time is going to scale remarkably. It's better if we can somehow reduce the complexity of the models and save that precious training time; one way is to continue cutting down on the number of features
38592, Predictions on test set
18329, Fundamentally variables in the dataset can be subdivided into
14595, Fare Values
1976, Apply the estimator we got from parameter tuning of Random Forest
39212, Since we use a neural network we need to scale the features
3897, Box and whiskers plots
11292, Turn categorical variables from integers to strings
2048, RandomForest Model
9471, Titanic Data Report
11106, Remove multi colinear features
18996, Train
7722, Imputing Missing Values
36841, The only thing I changed about the model is the size of the LSTM and GRU
22958, Submit
36767, Compiling the Keras Model
5943, fit our training data into Model
8355, Investigate who were masters
26363, Fitting the Network
8972, convert values
21789, Correlation with TARGET
32837, We have successfully pre-trained, built, and trained our model
9052, Garage Area
34645, Creating Our Model
34632, We removed outlier data points
28171, Part Of Speech POS Tagging
15300, Decision Tree Model
33228, nbsp nbsp InceptionNets
42968, Testing Different Models
35167, Experiment Batch normalization
24758, Ridge Regression L2 Regularisation
7369, Just like with the previous DataFrame I add the column Class but assign it to 2
20470, Days employed distribution
3663, Transforming $Y = \log(1 + X)$
26973, Run Training and Validation Step
32055, After fitting the model plot the feature importance graph
9782, Train the Algorithms with Optimized Parameters
16725, Parch
2222, Type of Zoning
20341, Sidebar on Data Augmentation
32670, Identifying relevant outliers with regression assistance
6199, Gaussian NB
16949, It's time to train. We use the Adam optimizer and binary cross-entropy as the loss function
6948, Important features on cluster plot
9239, Descriptive statistics summary
38850, Generally, dropping columns is not advisable, but in our case the following columns have too many null values
38068, Unigrams Analysis
31560, Submission
9764, Check for Correlations
41666, Feature selection and engineering
20145, Model building time
7497, Calculate accuracy of each model
22634, Function to store output file as CSV
19632, Age distribution lines denote target groups
18682, Let's change our working directory to /kaggle/working and take a look at its contents
9737, Import Dataset
24324, Apply log1p to the skewed features then get dummies
40433, Submissions
25015, Resampling
7918, A basic DataImputer class to pre-clean the dataset; not used here, but it would be interesting to compare the performance of our approach with this simpler one (TODO)
8501, Missing Data Assessment
23808, Separate Cat and Num
20471, Days of registration distribution
243, Model and Accuracy
26692, Train model
32886, Normalizing features
36658, Erosion is the opposite of dilation: it scans for fits along the boundaries and strips a layer from the inner and outer boundaries of the shape
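A small OpenCV illustration of erosion (the toy square and 3x3 kernel are assumptions):

```python
import cv2
import numpy as np

img = np.zeros((9, 9), np.uint8)
img[2:7, 2:7] = 255                             # a white square
kernel = np.ones((3, 3), np.uint8)
eroded = cv2.erode(img, kernel, iterations=1)   # strips one pixel layer off the shape
```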
36638, Now we try Logistic Regression, another very popular classifier. It's a Generalized Linear Model: the target value is expected to be a linear combination of the input variables; for classification, a logistic (sigmoid) function is fitted on the data to describe the probability of an outcome at each trial. This model requires as input a set of hyperparameters that can't be learned by the model. Exhaustive Grid Search comes to the rescue: given a range for each parameter, it explores the hyperspace of the parameters within the boundaries set by these ranges and finds the values that maximise specific scoring functions
24581, Dataset class
3538, Missing values for all numeric features in Bar chart Representation
7760, Deleting more outliers
27559, Let's take a look at the dedups. Don't worry about the key; just take a look at what values are in the same key
13123, Using catplots
18030, Best model parameters
11019, Replacing the age bands with ordinal numbers just like we did in Sex and Salutation
9129, Set Fireplace Quality to 0 if there is no fireplace
21599, Create a bunch of new columns using a for loop and f-strings: df[f"{col}_new"]
28431, Item name correction
28363, Loading Data
30096, And here are some quick examples of the test data
7648, BsmtFinSF1 BsmtFinSF2 BsmtFullBath BsmtHalfBath BsmtUnfSF GarageCars GarageYrBlt LotFrontage MasVnrArea TotalBsmtSF are numerical
37412, We can keep only the features needed for 95% importance
20822, We turn state Holidays to booleans to make them more convenient for modeling
15819, Get Dummies to convert categorical data into Numerical data
14544, Numeric variables: PassengerId, Age, Fare, SibSp, Parch
38639, Importing Libraries
37202, LightGBM
19424, we need to add two new methods to our LightningTwitterModel class
31946, Make model XGBRegressor
34027, Sparse weather column
1160, We can say which is the best working model by looking at the MSE rates. The best working model is Support Vector Machine
40935, Hyper Param Searching RandomizedSearchCV
27829, Mean and std of the classes
36124, One-hot encoding will be done to encode the rest of the categorical features
18161, checking the values
41397, CNT CHILDREN and NAME FAMILY STATUS
27627, Let's create the word index
22483, Parallel Coordinates
31602, GENERATING THE OUTPUT FILE
490, Most used titles are Mr Miss Master Mrs
39078, Prediction Submission
33263, Callbacks
3744, One of the best ways to visualize null values is through a heatmap
231, Library and Data
7408, Take several variables for example
16274, Train Val Test Parity
30670, SGD Classifier
38457, quick tests
15378, Doctor Rev are not exactly Royalty but I tried matching age group wherever possible
4687, As I did for the basement features (with extraordinary logical thinking :p), I'll set the garage sizes to 0 for houses without a garage
2569, Confusion Matrix
8131, Correlation matrix
35806, Making the same adjustment on the test data for our submission
9661, Modeling and Predicting
16729, by age
27837, Architecture layers
26397, In the third class there are twice as many passengers as in the first and second class respectively
13034, PassengerID
28416, Variable Importances
11266, All of this means that the Age column needs to be treated slightly differently, as this is a continuous numerical column. One way to look at the distribution of values in a continuous numerical set is to use histograms. We can create two histograms to visually compare those that survived vs those who died across different age ranges
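A hedged matplotlib sketch of the two histograms (a Titanic-style `train` frame with Survived/Age columns is assumed):

```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, sharey=True, figsize=(10, 4))
train.loc[train["Survived"] == 1, "Age"].hist(bins=20, ax=axes[0])
axes[0].set_title("Survived")
train.loc[train["Survived"] == 0, "Age"].hist(bins=20, ax=axes[1])
axes[1].set_title("Died")
plt.show()
```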
33492, Andorra
17956, Logistic Regression
18651, Please note that there is a new value NA present in the test data set while it is not in train data
12315, It looks very similar to TotalBsmtSF
19713, Choosing a model
8134, In this part I keep the common columns between the test and the train and I delete the outliers in the train data
10539, merge the numerical categorical feature with final test c
40458, NeighborhoodOverallQualNCond
1784, Decision Tree
1417, Cabin vs Survived
22632, Here Android games are most negatively correlated with other variables of interest
39889, Ensemble
32530, Generating csv file for submission
40249, Ground Living Area
19957, Because of the low number of passengers that have a cabin, survival probabilities have a large standard deviation and we can't distinguish between the survival probabilities of passengers on the different decks
10132, Multi Layer Perceptron
27978, Check missing data for test train
18734, RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=None)
35054, Complexity graph of Solution 1
3310, Delete outliers
39020, Drop the rows with a null value
42467, Creating dummy variables
35140, Data preparation
32089, How to create a 1D array
31432, Incorrect prediction ($x_{start} > x_{end}$): we can check Jaccard $\leq 0$
19303, Data Preparation
4662, To Know The Relationship with numerical variable
36005, to create the submission
10086, Feature Engineering
38485, Reflexes to have
647, Almost 100
28775, Lets look at the distribution of tweets in the train set
32844, Evaluation Functions
25872, Number of tweets according to location top 20
15591, That took a really long time
27430, Taking log of Fare and Age
19797, Reading in the sample submission file
7287, DecisionTree Model
24460, Augmentations shonenkov is using in his kernel Training CV Melanoma Starter (cv-melanoma-starter)
7224, Garage Null Values
24039, Forming splits for random forest and knn
13507, Parameter Tuning
18261, the SHAP explanation
4020, BsmtQual Imputation
30944, Correlation Between Price and Other Features
32914, The function below allows stopping the model training when too many epochs pass without performance improvement
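The notebook's own function isn't shown here; one common form of the same idea is Keras's EarlyStopping callback:

```python
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss", patience=5,
                           restore_best_weights=True)
# model.fit(X, y, validation_split=0.2, epochs=100, callbacks=[early_stop])
```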
22163, Pipeline contVars taxes FeatureUnion intro
1756, Imputation for Test set age
41603, the model is most confident that the number on this image is a 7
4481, Logistic Regression
7341, SVC
11233, More feature processing work: get the title from the names and map it to a new feature; cut the fare and age into categories
34009, No outliers
28793, It s Time For WordClouds
5156, Regularised Linear Regression
41406, Delinquencies
38712, Prediction
18401, Unigrams
15185, Feature importance
2217, Splitting the Variables into Different Categories
20786, Bivariate Analysis
32262, Relationship between numerical values
1244, SalePrice the variable we re trying to predict
27654, Split training and valdiation set
15923, Importing Librarires
30675, Make attention mask embbeddings and updated features
36882, KNN
8695, CATEGORICAL FEATURES
27143, MSZoning Identifies the general zoning classification of the sale
12521, we need to correct the skewness of the price
12983, Droping PassengerId and Cabin
7488, we want to group the ages
5289, Data Preprocessing
12353, BsmtHalfBath Basement half bathrooms
30283, Recovery Count 50
35417, Calculate the Mean Absolute Error in Validation Data
18660, Compile Model
21672, Data part
38644, Deleting Unwanted Columns
27270, apply Gaussian Naive Bayes to our Santander data
43120, Prepare training and test dataset for ML
38059, Taking a glance into the data NaNs values basic stats distributions
18414, Test Set Labels
35327, Importing Various Modules
6704, Categorical Features
31314, Treating Missing Values
34335, Submission
18614, Model 3 XGBoost Model
22830, Extracting City
1012, Excellent now let s split our dataset back to test train
2294, Pandas whats the distribution of the data
19974, Keras models are trained on Numpy arrays of input data and labels
4659, to visulaization dependent variable which is target variable which is SalePrice
1107, K nearest neighbors
11827, Looking at the Bigger Picture
9345, Save predictions
5344, Display quantitative values of a categorical variable in an area funnel shape
10204, Lets take significant features as X and target variable as y
9878, I am going to use factor plot to visualize SipSp and Survived variables
16009, Feature importance
792, Submission
15181, Data analysis
20210, Compare Different Tree Sizes
2422, Creating Training Evaluating Validating and Testing ML Models
24332, we do some hyperparameters tuning
33566, Fill nans
17667, Feature Engineering Title Age left right of the boat and Sex
27480, Train the model
11480, FireplaceQu
6031, we don t have any missing value
28639, GarageCond
42226, Model fit prediction
5812, Modelling Feature Importances
39410, Age
14399, Both passengers with missing embarked were of Pclass 1 and paid Fare 80
13359, View shape of test set
8328, In this way we can convert strings to categorical values
11717, Naive Bayes
9480, Confusion Matrix
41297, Make prediction
24699, let s setup logging and the best model checkpointing
31558, test data normalization
7456, Missing values in Frame column in Test Dataset
2378, Save a model of pipeline using joblib
14530, Embarked
43299, Cross-Validation
31629, we get a feature vector for image
20775, Takeaways from the plot
36596, First we are importing the necessary libraries
17395, SibSp Siblings Spouse wise Survival probability
14096, Random Forest
16055, SibSp Parch vs Survived
34712, Mean over fixed category id and month
5887, Imputer
25293, Submission
27102, Import Libraries
3474, get a list of the non zero coefficients
15900, Submitting Predictions
20606, Skewness is positive so we can use medium to fill the missing values
29190, Gradient Boosting
34346, Pre processing Test Data
9408, Feature Eng: Extracting letters from tickets could be important
12249, Exploratory Data Analysis EDA
31765, To create the users x products count table, loop through the product ids and load the data into a sparse matrix; the column position encodes the product id, with positions listed in a dict
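(A minimal sketch of building such a sparse users x products count matrix with scipy; `n_users`, `user_products`, and `col_of` are hypothetical names for the number of users, the per-user lists of product ids, and the product-id-to-column dict.)

    from scipy.sparse import lil_matrix

    mat = lil_matrix((n_users, len(col_of)))      # rows: users, cols: products
    for user_idx, product_ids in enumerate(user_products):
        for pid in product_ids:
            mat[user_idx, col_of[pid]] += 1       # count each purchase
    mat = mat.tocsr()                             # efficient format for math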
37480, There are a lot of NLP and ML libraries out there which can take over the whole text-preprocessing process
39170, How to use a custom PyTorch architecture
11381, Final submission
2876, if we compare the feature importance list and drop list
15304, Feature Importance
11112, Drop features with more than 30 null values
29543, Both of them look pretty good
27748, Missing data for test
2138, Tune LightGBM
9021, Convert null variables which actually just mean that there is no basement
32997, Save Output
40392, Model CheckPoint
9639, we shall find the Top ten most correlated features to sale price
23302, drop 90 missing value ratio
13670, Decision Tree
19443, We proceed by fitting several simple neural network models using Keras and collect their accuracy
9327, Rare ordinal features
38047, Notice that the true mean is contained in our interval
38737, One thing to note here is that unlike the Master title there is no separate category for young female passengers
22617, Apps per device
10819, I would like to check what are correlated categories with Age
23126, Looks like on average if you pay more for your ticket you are more likely to survive plot histogram of survivors and victims fare together to validate our intuition
8167, Our predictions are now ready for submission
37988, Okay, not that exhilarating. I planned to train for 80 epochs; however, it stopped at the 12th epoch, reaching a training accuracy of 0
24940, Q Q plot of the initial feature
2765, Label Encoding All the Categorical variables
18282, DRAW CONCLUSION
31357, run
35853, Alright then lets shift the colors to binary and go to the model
16695, We can convert the categorical titles to ordinal
27099, I use focal loss rather than regular binary cross-entropy loss, as our data is imbalanced and focal loss can automatically down-weight easy samples in the training set
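(A minimal PyTorch sketch of binary focal loss; alpha=0.25 and gamma=2 are the usual defaults from the focal-loss paper, not necessarily the values used here.)

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        # Per-sample BCE, then down-weight easy examples by (1 - p_t)^gamma.
        bce = F.binary_cross_entropy_with_logits(logits, targets.float(),
                                                 reduction='none')
        p_t = torch.exp(-bce)                     # probability of the true class
        alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
        return (alpha_t * (1 - p_t) ** gamma * bce).mean()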
36823, Here we initialize the vectorizer with the CountVectorizer class, making sure to pass our tokenizer and stemmers as parameters, remove stop words, and lowercase all characters
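(A minimal sketch of that setup; the NLTK tokenizer and Porter stemmer are assumptions about what the custom tokenizer looks like.)

    from sklearn.feature_extraction.text import CountVectorizer
    from nltk.stem import PorterStemmer
    from nltk.tokenize import word_tokenize

    stemmer = PorterStemmer()

    def tokenize_and_stem(text):
        # Tokenize, then reduce each token to its stem.
        return [stemmer.stem(tok) for tok in word_tokenize(text)]

    vectorizer = CountVectorizer(tokenizer=tokenize_and_stem,
                                 stop_words='english',   # remove stop words
                                 lowercase=True)         # lowercase everything
    # X = vectorizer.fit_transform(train_df['text'])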
5285, In the next series of experiments we are going to train a number of LR models that use only the top X features as ranked by the RFE feature-importance (feature-selection) algorithm
39158, But we want the CNN to predict only one class
2194, Preparing data for prediction
15848, Age
28579, This will be a very important feature in my analysis due to its high correlation with SalePrice
31815, Define loss functions for all four outputs
5593, Split the data into training and validation sets
35924, Train the model using batch size of 32 and up to 30 epochs
3910, Handling columns with missing data
20774, Visualization of target variable via Bokeh
17892, Lets plot the new variable
10163, Subplots for iris datasets
26329, Try Random Forests
22045, We could potentially add more variables like this
29012, Have a look at our target variable
27435, Decision Tree
9333, Features with a large number of categories
41776, Weights
33280, Missing Values
9113, Location Location Location
36858, Distribution of labels
24230, This is the augmentation configuration we use for training and validation
27934, With the model compiled and the data sitting ready in the pipeline it is time to train the model
6703, Train Dataset
7304, Observation
28191, Tokenizing Words Sentences
5651, Create a PClass Fare category
24857, Check the model s performance for the beginning of April
29326, look at the loss column
20730, ExterQual column
3700, Training the model
17914, Boxplot Analysis of passengers ages in each class
15903, Missing Fare
11793, Spliting the train data
16682, The survival percentage
25992, Submission
1187, delete Utilities because of how unbalanced it is
13117, Model Comparison
1601, Correlations
29189, Residual Histogram to visualize error distribution
33299, Double Check
15607, FareBand feature
12278, Set XGB model the parameters were obtained from CV based on a Bayesian Optimization Process
43121, Scaling of features
12737, Model re training
39451, checking missing data in POS CASH balance
36734, Decision Tree Model
39887, Prediction with Ridge Model
15718, passengers were male
1863, Gradient Boosting
1626, Linear Regression with Lasso regularization L1 penalty
18090, Example augmentation pipeline
43064, check now the distribution of the mean value per column in the train dataset grouped by value of target
15590, Random Forest classifier
34369, Time for some parameter tuning
16260, A quick overview of the public leaderboard to get a feel for the competition
21034, compare the PPS matrix to the basic correlation matrix
6990, Year garage was built
32894, Output dataframe
21435, Further Tuning
15022, Sex
1839, Dealing with Zeros
156, Import datasets
7473, Immediately we notice something interesting in the count row: the number for Age is lower, which means that for some passengers no age is present in the table
12343, GarageQual Garage quality
34330, Modeling
17374, Fill missing values for each sub dataframe then combine all the sub dataframes into one
41669, The final Random Forrest model is instantiated with the optimal hyperparameter arguments derived in section and trained using the features selected in section
4102, MinMaxScale
37306, Term Frequency Inverse Document Frequency TF IDF
8824, Feature Engineering SibSp Parch IsAlone
14982, Extract Title from name column
16435, Embarked
21191, Initialize the parameters for an LL layer neural network
18838, Visualising the MNIST Digit set on its own
15785, K fold cross validation
3450, The T Deck looks like an outlier
14054, Model evaluation
5348, Display distribution of a numerical value in a sequential way
16053, Passenger Class Vs Survived
28288, prepare field vocabs for tokens in trainset
124, After Scaling
34464, Mean Encoding
16221, Now we identify the social status of each title
15818, Data Wrangling
27946, OOF Evaluation
249, Model and Accuracy
42472, Feature scaling
29518, delete keyword and location
7587, Indexes for the outliers are the same as for GrLivArea
20309, Here is how our clusters appear
7446, Read in the data
14600, We make the Survived columns as the features value
36269, Fare vs Embarked
1708, I won't go much into explaining the data, since I have done a lot of related work in my kernel titled
10750, Check the response variables
35322, Ensemble Predictions and submit
34926, Words features
23803, XGBoost SHAP Explain the Unexplainable
25914, Show orders visualization
2904, Importing Data
8320, Cross validating the model with the best hyperparameters on the train data and using it to predict the test data
6841, Models Stacking
15737, After looking at the plot, I think the Cabin feature is not important for predicting Survived, so I want to drop it
22161, Testing
28188, Now that we're all set up, it's time to actually build our model. We'll start by importing the LogisticRegression module and creating a LogisticRegression classifier object
16832, Finding Optimal Cutoff Point
40638, keyword feature
19874, Outlier Detection Removal using Scatter Plot
33360, Visualizing nulls values
33874, Light GBM
40464, Floor Sizes and Room Conditions
37449, We load the Distilbert pretained tokenizer and save it to directory
24766, XGBoost
12067, Numerical Features
412, Bagging Classifier
36733, Building models
17474, GradientBoosting
5850, XGBOOST
6355, Last thing to do before Machine Learning is to log transform the target as well as we did with the skewed features
17657, Observations
28549, Introduction
25350, Keywords
4991, Model Selection
20752, As the data is highly skewed, I decided to remove these three columns: 3SsnPorch, ScreenPorch, PoolArea
36968, ROC AUC Curve
21100, try to plot sales for every year to understand about seosaonal data
18255, Train top layers
21904, Naturally overall quality of the house is strongly correlated with the price
5113, Target variable distribution
8878, In order to handle NAs, we replace the nulls in columns of object dtype with the mode of the respective column, whereas for columns of integer or float dtype we replace the nulls with the median of the respective column
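(A minimal pandas sketch of that rule, assuming `df` is the dataframe being cleaned.)

    for col in df.columns:
        if df[col].dtype == 'object':
            df[col] = df[col].fillna(df[col].mode()[0])   # mode for object columns
        else:
            df[col] = df[col].fillna(df[col].median())    # median for numeric columns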
33565, Delete outliers incredibly large homes with low prices and drop SalePrice column
27871, Ensemble
27057, Write loop to find the ideal tree size max leaf nodes
38616, Generate submission
21155, With these parameters we achieve the RMSE 26 949
24268, Observations
19275, let s make a validation set we can use for local evaluation
25459, sort predictions to have the same order as the submission
19650, Benchmark predict gender from device model
36771, plot the accuracy vs no of epcohs and loss vs no of epochs curves for a better insight
29864, To work with pixel data intelligently use pixel array tag
31936, Prepare the data for use in CNN
23825, Taking Log Transformation
24265, Dropping features
36345, Compile and Train the Model
23580, Retrain the model with the optimal hyperparameters from the search
16617, Pclass
14331, This first function creates two separate columns a numeric column indicating the length of a passenger s Name field and a categorical column that extracts the passenger s title
19699, Fit the Model
27339, Model Preparation Libraries
40432, Save Leader Model
2471, Adding Family Survival
36204, here we have the dataset with all the predictions and targets
29851, Select and train a model
42368, After a lot of experimenting I went with the random forest model as it gave the best accuracy
41367, Sale Price NA Pave Grvl
26102, Random Forest Regressor
22754, let s examine a case where R0 is reduced to 3
24909, Confirmed COVID 19 Cases per day in Germany
13066, Feature X and label y selection normalization of data to give data zero mean and unit variance X submit is the data for final submission
23886, The correlation of the target variable with the given set of variables are low overall
22014, Run the next code cell to get the MAE for this approach
24405, Permutation Importance
26844, Hook to extract activations
35446, Augmentation
7642, log transform target variable
25345, we should evaluate the model on the test set
10838, These features have a good correlation with the target variable
22278, It s a good thing that we filled fare with the median value as there is an outlier
10757, Another way is to do it with apply
12165, Min Sample Split
6754, Checking Skewness for feature LowQualFinSF
33649, As I am happy with the input data I won t be making any further changes to it
21898, Something to speed things up a little
31843, Category dataset preprocessing
13088, Missing values
6936, Firstly drop most correlated features
19039, Specify the folder that contains the training images train and use fastai2 s method of accessing the DICOM files by using get dicom files
14641, There is a null value and it makes sense why There was no one in the dataset that was a female with a title as Ms
8718, Deterministic and Random Regression
29018, Fare
14312, Or You can also use the below code for writing the output
38709, Training Xgboost
31697, Categorical Data
1971, AdaBoost
27587, Trick we can even spell check words although this is very computationally expensive
16972, Apparently the passengers having more than 5 siblings are outliers, so we filter our rows and keep just those with SibSp less than or equal to 5
18588, The learning rate determines how quickly or how slowly you want to update the weights
12632, Feature Correlations
4694, Most of them have a positive skewness
13159, Scaling down features
21758, Looks like these are all new customers so replace accordingly
23233, You have also learned how to use pipelines in cross validation
11520, Kernel Ridge Regression scores
31552, Feature Engineering
40738, One Hot Encode
23900, Convert Text to Tokens
15046, Passengers with the title Mr have a very small survival rate; the rule of women and children first is obeyed
4261, Fillna for categorical variables
37372, Gaussian Naive Bayes
10633, Lets try to find out best parameters for SVC using GridSearch
33331, FEATURE 4 AVERAGE NUMBER OF TIMES DAYS PAST DUE HAS OCCURRED PER CUSTOMER
15634, Survival by Family Size and Gender
8412, Bathrooms Features
36385, For the comparison I m using 2 models whose hyper parameters have already been optimized
30118, create a new data object that includes this augmentation in the transforms
17770, Majority of the passengers were between 20 and 35 years old
28462, Columns regionidcity regionidneighborhood and regionidzip
36421, Features with mostly single value
16120, Random Forest
4631, Filtering null and not null values
15691, Passenger class vs Survived
14980, Creating new variable Family Size Alone
8238, Fixing missing values
15246, Logistic regression
32201, Clean item type
4817, Standard step Remove id columns from both data sets since we are not going to need when modeling
7667, What do we do in the combined data that contains less than 80 missing values
12200, However in the test set we don t have all those values
30766, they all score relatively close
7742, predict
36129, Visualization of data is an imperative aspect of data science
3563, Long tail formation to the right
36812, we take this POS tagged sentence and feed it to the nltk ne chunk method This method returns a nested Tree object so we display the content with namedEnt draw
706, fill the NAs with means
33579, Model Instantiation
13096, Box plots Outlier detection
10073, Extracting the first n (e.g. 6) models
34862, 177 entries in the set are null for the Age column; to save deleting or imputing so many rows of data, I exclude this from the features for now and include it later on
22766, Split the data into training and validation sets
11072, Feature Selection
10110, Fixing Missing Values
9274, Light GB model
40994, Filtering does not change the data but only selects a subset
9388, check outlier
11972, Dealing with categorical features
25021, That s better
23530, Accuracy and confusion matrix
17476, SVM
42624, Creating other useful columns
21891, Calculating the Attention Context Vector
7390, Below are the final lists of unmatched passengers from the Kaggle dataset kagg rest6 and from the Wikipedia datasets wiki rest6
25760, Apply
6452, we take an average of predictions from all the models and use it to make the final prediction
18374, For making the dummies command to work we need the categorical data in object or category type
9682, Make Predictions
3492, Random forest models have the added benefit of providing variable importance information
11028, Logistic Regression
10343, Correcting Features
3860, Filter out the outliers
37923, XG Boost
32878, XGBoost feature importance
11376, XGBRegressor
9893, We don t need Name feature any more
1957, Identifying Missing Value
13462, Now apply the same changes to the test data
18020, New dataframe Woman Child Group by Name features
23744, Decision Tree
28076, Ensemble Tree
7486, Group fares into 3 categories, weighted according to the survival rate
8482, ElasticNet
19749, Data preproccesing
27034, Total Number of images
42088, Lets quickly look at the predictions before delving deeper into the details
40760, Use the next code cell to label encode the data in X train and X valid
36653, Averaging is done by convolving an image with a normalized box filter: taking the mean of the pixels in the kernel area and replacing the central element with it
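(A minimal OpenCV sketch; the 5x5 kernel size and the file name are illustrative.)

    import cv2

    img = cv2.imread('input.png')
    # Each pixel becomes the mean of its 5x5 neighbourhood (normalized box filter).
    blurred = cv2.blur(img, (5, 5))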
33769, Reshape the Images
20320, The struggle is real
6916, MODELS
25319, Improve the performance
9931, I compared Linear Regression and XGBRegressor because are the only two models I worked with haha
23890, Bathroom Count
38559, The plots reveal that values among X0 X1 X2 X5 X6 and X8 are fairly distributed where as values in X3 is moderately distributed
32282, Display distribution of a continuous variable
23577, The Hyperband tuning algorithm uses adaptive resource allocation and early stopping to quickly converge on a high performing model
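(A minimal keras-tuner sketch of Hyperband; `build_model` is a hypothetical function that returns a compiled Keras model from a hyperparameter object, and the objective and budget values are illustrative.)

    import keras_tuner as kt

    tuner = kt.Hyperband(build_model,             # hp -> compiled Keras model
                         objective='val_accuracy',
                         max_epochs=30, factor=3)
    # tuner.search(X_train, y_train, validation_split=0.2)
    # best_model = tuner.get_best_models(1)[0]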
31215, Bivariate analysis scatter plots for target versus numerical attributes
3526, Visualisation of OverallQual TotalBsmtSF GrLivArea GarageArea FullBath YearBuilt YearRemodAdd features
13352, View shape of training set
3943, Missing values Imputation
21756, Empty columns for some ages REVIEW
7892, For Is alone and AgexClass the effect in the survival rate is not that clear
13530, Automatic FE
13372, Sex
24361, This is giving us an accuracy of 92
10196, predict for test data and generate submission file
42775, OneHot
32392, Simplified Meta Predictions
29400, BUILDYEAR CAN BE IN FUTURE TYPE OF PRODUCTS
18079, Similarly let s look at very small bounding boxes
21444, Data Summary
4263, Differences
13517, After creating new features we can drop useless columns that we won t use in the training process
32916, it s time to predict test dataset
9843, Hyperparameter Tuning
18362, Vizualise the Continous features Vs Demand Count
5261, we are ready to train one more set of RF models that use top 50 features selected by permutation method
8036, Checking the Correlation Matrix
8402, Exploratory Data Analysis EDA
37012, Best Selling Aisles over all Departments
4112, Fill the features more than 10 percent of the missing data with None Values
18262, try on a few more images
13416, AdaBoost Classifier Parameters tuning
23046, so now let's normalize our data
2997, Feature Engineering Creating New Features
28195, tokenize the sample text and filter the sentence by removing the stopwords from it
20412, There are two rows with null values in question2
42362, stacking base models
10829, We have 713 single tickets and 216 that belong to family members
43340, import train test split module from sklearn
2770, Finding the outliers
38632, let our trained CNN recognize the image and get the outputs from each layer
30147, correlation
15180, also quickly check Age s relation to Survival and investigate our Embarked Pclass theory
31732, Preprocess csv files
3784, Model Comparison
11922, K FOLD Validation
30850, Location for Occurrence of Larceny Theft
26769, Fit models
31427, Generate some correct true labels and arbitrary prediction lables
5652, Create a Family Size category
15989, K Nearest Neighbor
19575, shops analysis
21803, Feature importance
31133, Embarked feature
29747, The dimensions of the original train and test sets are as follows
903, Correlation Matrix
14696, Numeric Features
13164, Scaled features
11059, Age versus Fare
3267, Data Cleaning
18970, Display the contour and histogram of two continuous values
6918, Final model fit evaluation prediction
30773, Data cleansing to drop NA row
2634, SO let s repeat it all
32836, That s how i got to from
27839, Set other parameters
7769, Non linear SVM Regression
3024, Since area-related features are very important in determining house prices, we add a few more features, namely the total area of floors, bathrooms, and porch of each house, before we continue dropping these numeric columns
30835, Month Feature
39118, K Nearest Neighbor
37229, We can replace Quorans with Quora contributors
5693, Prepare submission file
12672, Dropping null values from train
28175, Lemmatization
33878, Stacking Regressor
40158, In this section we closely look at different levels of StoreType and how the main metric Sales is distributed among them
40455, HouseStyle
14299, A little more about the missing values
36346, Predictions
1880, Age
14386, Most of the passngers were of AgeGroup Young Adults and Adults
22367, Cleaning Data on Column Age and Fare
622, As suspected it is more likely to know the cabin of a passenger who survived
37403, Drop Correlated Variables
12429, with regex
7114, Dealing with outliers
14868, Where did the passengers come from
36987, Using xgboost the important variables are structured tax value dollar count followed by latitude and calculated finished square feet
31637, Reload Dataset
27636, area total calc is the most cross-correlated item. The larger the overall property, the more room there is for everything else, including tax
29072, Categorical features label encoding
19871, finally we were able to remove two outliers using this Z score approach
1755, One wonders what the point is of comparing distributions pre-imputation and post-imputation. Do they need to follow the same distribution? To me the answer is not obvious, but one intuitive explanation is as follows
33844, Number of distinct questions
41116, Year Build Vs Mean Error in each County
14423, Fill missing Cabin data with N
3564, Skewness: the longer the right tail, the more positive the skew
7909, Correlation matrix
37747, There we go: by optimizing the columns we've managed to reduce the memory usage in pandas from MB to MB, an impressive reduction
19971, VGG 16
27902, Read the Data
32769, Build the Network
30264, The F1 score favors classifiers that have similar precision and recall. This may not always be what we want: sometimes we want a model with greater precision, even if this, as a consequence, lowers the recall. We can visualize the precision/recall ratio in a line chart using the precision_recall_curve function from sklearn.metrics
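(A minimal sketch of plotting that trade-off; `y_true` and `y_scores` are assumed to hold the validation labels and the classifier's scores.)

    import matplotlib.pyplot as plt
    from sklearn.metrics import precision_recall_curve

    precisions, recalls, thresholds = precision_recall_curve(y_true, y_scores)
    # precisions/recalls have one more entry than thresholds, hence [:-1].
    plt.plot(thresholds, precisions[:-1], label='precision')
    plt.plot(thresholds, recalls[:-1], label='recall')
    plt.xlabel('decision threshold')
    plt.legend()
    plt.show()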
21434, Add Boolean columns
34737, Random 3 dimensions of the Latent Semantic Space
43349, Predicting from model
38480, Preprocessing
28464, Column censustractandblock
32119, How to insert values at random positions in an array
7305, Observation
13389, 537 people are travelling alone and 354 people are travelling with family
41937, Discriminator Training
23743, Gradient Booster Classifier
574, scores from GridSearchCV
13130, Survival by Embarked
10899, One Hot Encoding
36235, After analysing the model we make the final predictions and create the submission file
8252, Bivariate Analysis
12190, Transformers and classes
20763, so we divided train and test data
40704, CPU test
20938, Optimization
33078, Feature Engineering
11462, Feature Selection
13657, Binning Converting Numerical Age to Categorical Variable
40999, After maxpooling operation with stride of 2 again output from conv2d batchNorm Relu gets downsampled to half
34171, First trials
38471, Training dataset visualization
265, Preprocessing
43055, The first 100 values are displayed in the following cell; press Output to display the plots
13508, Model Assembly
27465, Data Visualisation
1222, Box cox transform for skewd numerical data
28181, The words dog cat and banana are all pretty common in English so they re part of the model s vocabulary and come with a vector
42465, ps car 13 and ps car 15
20559, Confusion matrix evaluation of the model on the validation set
32666, The dependent variable is right skewed
40199, Model Evaluation
15199, Embarked
4860, Getting Correlation between variables
2429, Random Forest Regressor
7509, Some categorical columns refer to quality or type and can be encoded into numeric values
24185, FastText does not understand contractions
2162, Learning curves allow us to diagnose if the model is overfitting or underfitting
16364, the Countess Sir Major Johnkheer Don Col Capt Rare Titles
5143, Rare labels
30182, The species are in string format; to convert them to one-hot encoding format, I first have to label them with the help of LabelEncoder
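(A minimal sketch of the two-step conversion; `species_strings` is a hypothetical array of class names.)

    from sklearn.preprocessing import LabelEncoder
    from tensorflow.keras.utils import to_categorical

    le = LabelEncoder()
    y_int = le.fit_transform(species_strings)   # strings -> integer labels
    y_onehot = to_categorical(y_int)            # integers -> one-hot rows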
67, Train Set
30836, Hour Feature
12498, we can put the training data through our transformer We split the data into a train and test set
12164, When the tree is deep we get nodes and leaves with a very small number of samples which are therefore not very informative
12417, use the correlation matrix to examine the top 10 features and eliminate similar ones
4884, The annot argument is needed here, as you also want the data value shown in each cell
20271, Unique categorical values per each category
7940, To improve further our prediction we can stack the different top regressors
1087, Load data
11915, I want to create different categories for family members
20791, DROP GarageArea
14138, K Nearest Neighbor
15956, Train set
29929, The only clear distinction is that the score decreases as the learning rate increases
42846, Russia
3417, We begin by chopping up the Name variable to give us Title LastName MaidenName Nickname and FirstName
14367, For female passengers, those with Parch 0 had the highest survival rate, but for males, Parch 1 had the highest survival rate
18442, Interpretation
35448, Evaluating
3780, Model 1
20462, Organization type
27335, Segregating Data
29948, Bayesian Optimization
16283, Unused columns
41735, This is where the magic happens
6756, Checking Skewness for feature PoolArea
9232, Scorecard Model 75 15
16235, For comparison of the different models, we initialize one list which stores the accuracy of all the models
32229, First things first we should always create a validation data set to evaluate our model and avoid overfitting
40746, Run the model
9973, The Polinominal idea
23189, RF's specificity score indicates it correctly predicts over 92% of the victims as victims. Comparing the recall score with specificity, it looks like our RF model is more accurate at predicting the negative class (victims, 0) than the positive class (survivors, 1)
17720, Since there are only two missing values in the train data's Embarked feature, they will be filled with the most frequent value
16784, Naive Bayes Classifier
13603, Similarly, once the encoder is fitted, we can transform the test set as well
33474, As a fraction of the total population of each country
25165, Basic Feature Extraction
31613, Random Forest
24507, Current distribution
32574, The number of leaves on the other hand is a discrete uniform distribution
27190, TF IDF
8369, Create dummy variables for categorical data
1776, Feature Scaling
39278, RAW FEATURES
27441, Evaluating
7467, Submission
34708, Other mean values
7420, Make sure that variables in test data have the same order as in training data
41595, Training the LGB model
10582, Evaluating ROC metrics
27387, I use a single validation set to keep it simple and fast
20405, Exploratory Data Analysis
30904, let s plot it on the map
14566, Let's convert our categorical data to numeric
4279, Remove Outliers
24588, Check how well the prediction went
22348, Bernoulli Classifier
20216, Encode the y train labels
11801, Observation All of them have positive skew
43273, Evaluating the performance of our model on the validation data
2249, Import Libraries
5804, Data is ready for training
14601, Logistic Regression
35429, use all our models to make predictions
4485, Support vector classifier using Linear kernel
17978, that we have titles
14485, Data-cleaning jobs are pending: (1) filling of null values (imputation); (3) outlier cleaning
14320, SEX
36303, Magic Weapon 4 All model Accuracy Score
7261, SibSp Feature
14588, We fixed all the missing information available in the dataset
16091, SibSp vs Survival
1037, Categorical features
40961, Importing ML Libraries
7936, LightGBM Regressor
2725, it s time to make predictions and store them in a csv file with corresponding Ids
3572, It is strange that FullBath is zero higher
41325, As a reference we train a single decision tree on all the pixel features and check what score we get
16999, Importing Libraries
21791, Comparison of classification models
14794, Logistic Regression Model
42100, Splitting Train dataset into training and validation dataset
4362, BsmtFinSF2 feature is not looking useful
3661, Quiring the data
7256, Make predictions
3824, We define our success in the binomial distribution as surviving, so the population proportion value is known in advance. In practice, when performing these tests, we would not have access to this value, as we are attempting to estimate it, but we use it as a reference to get an idea of the accuracy of our intervals
42017, Creating a new Dataframe with certain columns
21273, Test Data
31808, Vgg 16 model
33782, Examine the Distribution of the Target Column
3711, XGBOOST
19780, Gradient tuning
2519, Random Forests
26092, Loading the best model
13292, Bootstrap aggregating, also called bagging, is a machine-learning ensemble meta-algorithm designed to improve the stability and accuracy of machine-learning algorithms used in statistical classification and regression. It also reduces variance and helps to avoid overfitting. Although it is usually applied to decision tree methods, it can be used with any type of method. Bagging is a special case of the model averaging approach. Bagging leads to improvements for unstable procedures, which include, for example, artificial neural networks, classification and regression trees, and subset selection in linear regression. On the other hand, it can mildly degrade the performance of stable methods such as k-nearest neighbors. (Reference: Wikipedia)
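(For illustration, a minimal scikit-learn sketch of bagging decision trees; the estimator count is arbitrary.)

    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    # 100 trees, each fit on a bootstrap sample; predictions are aggregated by vote.
    bagging = BaggingClassifier(DecisionTreeClassifier(),
                                n_estimators=100, random_state=42)
    # bagging.fit(X_train, y_train)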
10625, One Hot encoding data
7343, Ensemble
1055, Linear regression
32928, Plotting model s accuracy
2383, Display the intercept coefficients for a liner model
22686, Remove unnecessary double quotes in names
18004, Machine learning algorithms need not be black boxes, especially with the availability of various tools today
9606, Embarkment Details
40965, Titanic Deep Learning Model
28547, Train
24383, Target Value SalePrice
35533, Based on the distribution of data let us remove some of the outliers
22198, Define RNN model
31668, Display Data
36392, As an alternative to the median value, use the correlation between categories
15938, Fill Embarked nan values of dataset set with S most frequent value
6476, Lets check corelation of target varible with other variables
3497, Classification report
10197, Other such important features are kitchen quality and garage quality and condition, etc., which are of object data type
27088, LDA
7968, Correcting by dropping features
15772, ticket
4244, The overall accuracy scored using our Artificial Neural Network can be viewed below
30619, The Survived variable is the label of the dataset we want to predict
11966, Calculating the percentage of null values in each column
8589, Here we have imported SimpleImputer, created an imputer with strategy median, created a dataframe of just the numerical features, applied fit on the numerical dataset, and transformed the dataset
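(A minimal sketch of those steps, assuming `df` holds the training dataframe.)

    from sklearn.impute import SimpleImputer

    imputer = SimpleImputer(strategy='median')
    num_df = df.select_dtypes(include='number')   # numerical features only
    imputer.fit(num_df)                           # learn per-column medians
    num_imputed = imputer.transform(num_df)       # fill the missing values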
1537, we ll fill in the missing values in the Age feature
19670, write a short function to visualize the 15 most important features
36600, Building the model
12947, Correlation of features with target
38627, test data is loaded and converted with pre learned DictVectorizer
7922, Check the skewed features other than SalePrice which was already corrected
2555, Well, the person who paid 500 pounds did survive. While the relationship is not linear, we do have a slightly higher chance of survival if we paid more than 100 pounds
42304, Training Validation Split
308, blend 50 50
29109, So the train and test directories have been created. Now here are a few things to consider
29943, Meta Machine Learning
5474, Below i m using the predict function from the tree interpreter package
9963, Sales by month s
25585, Null values research
17817, Precision
39462, Revolving loans: an arrangement which allows the loan amount to be withdrawn, repaid, and redrawn again in any manner and any number of times until the arrangement expires. Credit card loans and overdrafts are revolving loans, also called evergreen loans
36809, now we can turn this into a DataFrame for better visualization
38492, We note that there are missing values in the train set and the test set
20558, Training progress plot
27100, Visualizing Attention
31278, Modeling
34914, Count punctuation by
38572, Variance Threshold
12714, Machine Learning
19425, Alright we are almost there
34354, let s do some fitting
28067, Creating a new column and dropping a column
21775, TRAINING MODEL
34677, Taking a look at what s happening inside the top category
42169, MNIST dataset
10646, Those are some crazy numbers just people paid almost times total Fare compared to other people
17689, FARE AGE w r t SURVIVED
6474, Analysis Sales Price Target or Dependent Variable
5308, Assessing feature importance with random forests
39063, Wiki News FastText Embeddings
15198, Sex
21661, Select columns by dtype
739, Unskewing
11988, Explore a bit more of the variance each feature adds to the data
30920, Save for later analysis
23615, Treat missing values by mode of the column
17724, This explains why a high number of passengers who embarked from port S paid low fares, as this port is mostly used by Pclass 3
13576, It looks better and clearer now
24282, Observations
14739, Our KNN model is moderately sensitive and highly specific
74, Embarked feature
18662, Predict
10978, Comparison
14712, KERNEL SUPPORT VECTOR CLASSIFIER
2160, Extra
42453, Binary variables
9386, check outlier
10034, Missing value in test train data set are in same propotion and same column
29094, Load weights for WRMSSE calculations
20039, Where X is our input to hidden layer matrix computed using the function from the previous step and y is our training labels
17798, we replaced the missing Age values with the ones obtained from the approximations
6617, Creating Train Test Set
31789, Blending predictions
37373, Linear Support Vector Machine
36680, min_df: float in range [0.0, 1.0] or int, default
38048, Are house prices in OldTown really different from the House Prices of Other Neighborhoods
8449, Altogether there are 3 variables that are relevant with regards to the Age of a house YearBlt YearRemodAdd and YearSold
40283, Set up network architecture
42460, Interval variables
27144, Category 3 Overall Quality and Condition
3829, selection indexing and filtering
1961, Feature engineering
284, Name
14338, go to encoding some categorical features
22427, I recommend using the OOP approach to plotting in matplotlib, and I will be covering this way of doing things in the tutorial
8830, Divide df to train dataset and holdout for final testing purpose
38845, Gable supports for cold or temperate climates
1227, Splitting the data back to train and test
2057, create a dataframe to submit to the competition with our predictions of our model
7924, Apply dummy categories to the rest
19867, In this case mean is 29
32824, Test data preparation
38127, Most boarders were from Southampton
34230, To convert it we need to add our width and height to the respective x and y
4444, drop columns that missing percent is too high or unnecessary
40108, Sequence of states
42808, Because our regression model doesn't predict the briefly mentioned peaks, let's create a classification model for them
16516, XGBoost Classifier
3301, Bsmt According to the file fill with None or There are also some special examples below that should not be filled like this I have tried other filling methods but in the end I chose to fill with None or
34038, Describe Dataset
21277, Kaggle Submission
4312, Gender
23659, Rolling Average Price vs Time CA
30847, Visualize the Criminal Activity on the Map
41912, Exploring our dataset images size
38681, Categorize Mean Color
7477, 3c SibSp
24257, There are 4 types of titles
1869, Predict SalePrice for the Test Data
26713, first look at the distribution of prices across Categories
13383, Since Age and Fare do not have values less than 0
214, Libraries and Data
41015, We can understand the power of this approach now if we look at all the values that WCSurvived assumes
14533, Dropping the Name column which doesn t add up to our predictions
5186, The core principle of AdaBoost (Adaptive Boosting) is to fit a sequence of weak learners, i.e. models that are only slightly better than random guessing, such as small decision trees, on repeatedly modified versions of the data. The predictions from all of them are then combined through a weighted majority vote (or sum) to produce the final prediction. The data modifications at each so-called boosting iteration consist of applying weights to each of the N training samples. Initially those weights are all set to 1/N, so that the first step simply trains a weak learner on the original data. For each successive iteration, the sample weights are individually modified and the learning algorithm is reapplied to the reweighted data. At a given step, those training examples that were incorrectly predicted by the boosted model induced at the previous step have their weights increased, whereas the weights are decreased for those that were predicted correctly. As iterations proceed, examples that are difficult to predict receive ever-increasing influence, and each subsequent weak learner is thereby forced to concentrate on the examples that are missed by the previous ones in the sequence. (Reference: scikit-learn documentation, https://scikit-learn.org/stable/modules/ensemble.html#adaboost)
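(A minimal scikit-learn sketch of the algorithm just described; the stump depth, estimator count, and learning rate are illustrative.)

    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    # A sequence of decision stumps; each round re-weights the samples the
    # previous learners misclassified.
    ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                             n_estimators=200, learning_rate=0.5)
    # ada.fit(X_train, y_train)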
40925, Augmentation
21849, Training
41440, tf-idf is the acronym for Term Frequency-Inverse Document Frequency. It quantifies the importance of a particular word relative to the vocabulary of a collection of documents, or corpus. The metric depends on two factors
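(Those two factors are the term frequency tf(t, d) and the inverse document frequency idf(t), combined as tf-idf(t, d) = tf(t, d) * idf(t). A minimal scikit-learn sketch, with an illustrative vocabulary cap:)

    from sklearn.feature_extraction.text import TfidfVectorizer

    tfidf = TfidfVectorizer(max_features=10000)   # keep the 10k strongest terms
    # X = tfidf.fit_transform(corpus)             # sparse tf-idf matrix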
32172, REGULAR FEATURES EXAMPLE
2179, No significant differences can be found
34624, k Fold Cross Validation
38099, Submission
18315, lots of items in each category
20377, Predict the test data
25425, SLATMD Almost completely repeated Removed
19634, Class probabilities benchmark
37429, Number of unique words in tweets
1301, Observation
483, Reconcile feature types
27373, I impute the missing prices with the mean price of the category of that item
31298, Prepare Keras Data Generators
18443, Ensembling
5052, checking 4 more features of size
12244, Indexing
12009, Before moving to the next kernel, let's get the R-squared value for it
41054, Initial test using Gradient Boosting Regressor
41247, DEALING WITH THE NULL VALUES
137, Grid search on KNN classifier
6204, K Nearest Neigbhbors
20844, The authors also removed all instances where the store had zero sales (was closed)
22711, Plotting the loss in variance as we reduce number of components
29933, The next plot is learning rate and number of estimators versus the score
11636, Decision Tree
37885, Prediction from Linear Model
18923, Cross Validation
25221, Group the similar features related to a House Feature and analyze
15024, Age
1526, Sex Feature
12928, Age
18672, Make prediction for test data
41092, Certain Recovery South Korea Germany Iran Flattened the Curve
28785, Cleaning the Corpus
12688, Age
17933, Pclass
11218, Show adjustments
29030, Missing values imputation
4241, Genetic Algorithms using TPOT
21642, Fill missing values in time series data interpolate
29913, Score versus Iteration
10384, Imputing Categorical variables
31863, Building the new top
37177, Output the data to CSV for submission
14257, Family Size And Is Alone
18190, let's have a look at the product with the biggest demand of all time
29586, we ll actually plot the images
30174, we ll display these heatmaps a bunch of different ways overlaid on a map
15557, Filling out the missing fare
4761, Handling outliers is one of the most important tasks in EDA
27255, Train our First Level Models
38016, The part before the double underscore is the vectorizer name and the feature name goes after that
34788, During working days there is a high demand around the 7th hour and 17th hour
37026, Which brands are most expensive
4035, The four data points on the far right side of the graph are outliers based on the LotArea-SalePrice distribution, and we remove them from the dataset
10048, Predict for unsen data set
38001, Well
8015, A few columns' datatypes default to int64, but they are categorical in nature
25284, The cnn learner factory method helps you to automatically get a pretrained model from a given architecture with a custom head that is suitable for your data
8029, There are outliers for this variable, hence the median is preferred over the mean
13479, Final RF Submission
32220, Modelling
31600, EXTRACTING THE FEATURES FROM THE TWEET TEXT
842, Ridge
25655, The mean degree is about 2
7263, Embarked Feature
20287, Oldest person to survive is of age 85
31671, Determine Optimal Learning Rate
33498, Germany
2482, Exploratory Data Analysis EDA
11890, Naive Ba
12965, Pclass Age and Survived
19628, THE TEST DATASET
30378, How to Tell if the Model is Good
14839, At first look, the relationship between Age and Survived appears not to be very clear. We notice for sure that there is a peak corresponding to young passengers among those who survived, but apart from that, the rest is not very informative
1399, Age vs Survived
11012, We got the Salutations
15758, Let s take a more detailed look at what data is actually missing
37917, Model Comparison
14686, lets create some creative new features using the existing ones
26701, Plotting Sales Ratio across the 3 states
33868, Defining Cross Validation and Error Function
24423, Infrastructure Features
8269, Create TotalBath Feature
19335, Transforming testing data
2157, Preparing the data
37661, Conclusion
31367, submission
20161, Using from sklearn library
2786, Load Dataset
39041, complete this first part by looking at the averaged interest levels for group 0
7822, Ensemble by Voting
843, Lasso
12119, Dealing with missing data
24992, LASSO for numerical variables
24394, Model building
27660, Plot CNN model
30648, Voting Classifier
23852, training the whole dataset on selected parameters so as to avoid any data loss
14831, Train Test Split
33301, Modelling
41791, Confusion matrix
31506, Feature Selection using K Best
37656, Extract test data from zip file
37299, Visualization
42112, Seems fairly straightforward: just ID, text, and target fields. In addition, the train set is very decently sized; million records is probably enough for a decent text classifier
3558, Select a model
17535, Extract number from Cabin and create custom Room bands
13949, Numerical Variable
6310, Decision Tree
27362, Motivation: the item id is not ordinal, and if we one-hot encoded it we would fall into the curse of dimensionality; hence mean encoding is needed to avoid this trap
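(A minimal pandas sketch of mean encoding; the `train`/`test` frames and column names are assumptions.)

    # Mean of the target per item_id, computed on the training data only.
    item_means = train.groupby('item_id')['target'].mean()
    train['item_id_mean_enc'] = train['item_id'].map(item_means)
    # Unseen items in test fall back to the global target mean.
    test['item_id_mean_enc'] = (test['item_id'].map(item_means)
                                .fillna(train['target'].mean()))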
12792, Instantiating the class and Fitting the model
8885, Creating two more features for the total square footage of a house, which will be the sum of the basement, 1st floor, and 2nd floor
35476, Adaptive Thresholding
32733, Import libraries
3740, The correlation matrix may give us a understanding of which variables are important
23104, what does the value of skewness suggest
24944, Interactions
20076, Insight
34765, Generating predictions for Test set
24742, Yep two pretty clear outliers in the bottom right hand corner
48, Feature Scaling
25982, Bayesian optimization using Gaussian Processes (https://scikit-optimize.github.io/stable/modules/generated/skopt.gp_minimize.html)
2896, The reason why we are not applying the dummy function here is that when we apply it individually on the train and test data, it may lead to different feature-vector dimensions because of different numbers of unique values in the features. Confusing? Let's narrow it down with one simple example
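(A minimal sketch of the combined-encoding trick, assuming `train` and `test` are the two dataframes.)

    import pandas as pd

    # Encode both frames together so they get identical dummy columns,
    # then split back by the original row counts.
    combined = pd.get_dummies(pd.concat([train, test], axis=0))
    train_enc = combined.iloc[:len(train)]
    test_enc = combined.iloc[len(train):]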
23097, Findings: Looks like Cabin is an alphanumeric variable with 1014 missing observations. There are 187 categories in the Cabin variable. Since there are too many categories, we must process (i.e. reduce) the number of Cabin categories to check whether there is any association between Survived and Cabin
42364, Visualizing given dataset
12268, Preparing for modeling
21610, Filtering a df with multiple criteria using reduce
42904, Individuals
25261, Sample images of dataset
7270, Age Feature
39135, Hidden layer
36405, model prediction again
7429, I start from the original y train to train the random forest and then use the log transformed y train to train
3650, Combining SibSp and Parch to create FamilySize feature
25070, Time series data plot FOODS
4633, Dropping columns
21633, Filter a df with query and avoid intermediate variables
15783, Decision trees
28131, Building a Text Classification model
8537, Statistical Significance
12019, WOW, XGBoost performs very well; let's make predictions on the test data
10695, Cabin processing
29811, PreTrained Fasttext
14435, go to top of section eda
10231, Before predicting on the test set, we need to clean the test data set to make it equivalent to the training data set
27842, Fit the model
28136, Fitting our Model
4483, Quadratic Discriminant Analysis
1303, Zoomed Heat Map
23951, Filling the missing values
20749, OpenPorchSF column
23051, once our layers are added to the model we need to set up
20915, Feature creation
8970, Decks A, B, C, and T are all in first passenger class
28294, Set up a torch.nn.Embedding layer
36019, Aggregator
43398, Attacking the model
23384, The final part of the data generator class re shapes the bounding box labels
28012, TARGET DISTRIBUTION
43209, test evaluation and visualization
12477, Embarked
22247, Modeling
23460, Month
2298, Pandas Filtering and slicing your dataframe
10942, Structure of train data
36463, Plotting with and without bounding boxes for same images
1378, To finish the analysis I let s look the Sibsp and Parch variables
37101, Feature Extraction
17937, Age Fare
13833, check shape of training and test set
15604, Create initial predictions
18716, let s run fastai s learning rate finder
26458, GB Modelling
32823, Set Model for prediction
17900, Lets try SVC
20129, Concatenate all features
2706, For this section we use Pipelines which are a way to streamline a lot of the routine processes
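(A minimal sketch of such a pipeline; the scaler and classifier are placeholders for whatever steps the section chains together.)

    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    # fit/predict on the pipeline runs every step in order.
    pipe = Pipeline([('scale', StandardScaler()),
                     ('clf', LogisticRegression())])
    # pipe.fit(X_train, y_train); pipe.predict(X_test)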
24535, Number of products by customer type at the beginning of the month
38259, Target Distribution in Keywords
29888, The first time around I weight by time
42146, Sampling Layer
6266, As expected the survival rates of females were signficantly higher than males
22009, Investigating cardinality
17639, Basic Modelling
33321, Create the neural net model
34299, The 5th channel looks like it is detecting hair or the darker color with the image of the mole basically removed
7276, Embarked Feature
10273, That looks halfway decent
4686, And
209, Gaussian Process Classifier
27840, Data augmentation
31940, Error Analysis
42572, Let's have a look at the edge cases in which we are extremely certain about our predictions and are either wrong or right
27242, Lets display the confirmed cases and pandamic spread on a world map
6138, we just fill the last missing value with TA
42532, Converting the date column to datetime datatype so that we can extract day and month from it
9734, Final Imputation
31054, Mac Address
7310, Multivariate Analysis
20515, Another quick way to get a feel of the type of data is to plot a histogram for each numerical attribute
16072, Further Evaluation
23339, Ekush Numerals
17780, Imputation of missing data
36780, Building a Classifier
4156, One Hot Encoding OHE
5164, XGBoost
685, For the purpose of this demonstration we only use the file train
10679, Data Preprocessing and Data Cleaning
9867, SibSp Survived
33227, Find final Thresshold
1685, Relationship of a numerical feature with a categorical feature Relationshipofanumericalfeaturewithcategoricalfeature
13376, Age
1268, PART 1 Exploratory Data Analysis EDA 3
14179, As noted in the data analysis part 0 refers to male
8108, Age
28444, COLUMNS WITH NON VARIANCE
32207, Add item cnt month lag features
14559, The oldest passenger who survived was 80 years old, boarded from Southampton, and was in class 1
21567, Convert year and day of year into a single datetime column
12760, I count the number of data examples in each class in the target to determine which metric to use while evaluating performance
12141, Basic evaluation 2
42324, Transforming Train data and Test data into images labels
8014, Submit Test Set
32859, Feature item cnt distribution
8900, AdaBoost
29751, now plot the images
17257, Have a look at data shape
10164, Pair Plots
26753, The size of the image will be height x width x 3
33663, Converting the string field to datetime is mandatory before processing them
20588, Decision Tree Classifier
38680, Mean Color Used previously saved data
34514, Plot for a sanity check
38565, Linear Model Lasso
3468, we set up some cross validation schemes using the KFold function from sklearn
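(A minimal sketch of such a scheme; `X` is assumed to be a NumPy feature array, and the fold count is illustrative.)

    from sklearn.model_selection import KFold

    kf = KFold(n_splits=5, shuffle=True, random_state=42)
    for train_idx, val_idx in kf.split(X):
        X_tr, X_val = X[train_idx], X[val_idx]
        # fit a model on X_tr and score it on X_val here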
18349, A similar interpretation can be drawn for NA in Fence, FireplaceQu, BsmtCond, BsmtQual, BsmtExposure, BsmtFinType1, and BsmtFinType2. To deal with all these NA values, a simple imputation is enough
6277, We now have 5 meaningful categories
8022, Inspecting the Dataframe
41910, Balance the distribution based on the smallest set
29919, Distributions of Search Values
15731, Grid Search with Cross Validation
40079, We try to fit our data with KNN and check for the best number of neighbors
40828, Perform some feature engineering
27484, We use a very small validation set during training to save time in the kernel
26512, On the local environment we recommend saving training progress so it can be recovered for further training debugging or evaluation
19298, Data Visualization
29144, Furthermore we could also display a sorted list of all the features ranked by order of their importance from highest to lowest via the same plotly barplots as follows
20799, FEATURE ENGINEERING
22770, We first create a list called field, where the elements will be tuples of a string and a Field object
21246, Compile our DC GAN Model
21734, Make Predictions
7813, RMSLE on hold out is 0
923, For root mean squared error, smaller is better. Looks like Ridge is the best regression model, followed by SVR, GB, and XGB. Unfortunately, LR can't find any linear pattern, hence it performs worst and is discarded
29071, Text features
24478, Strategy 1 Simple CNN Architecture
21196, Linear backward
11551, Correlation of SalePrice with Numerical Features
29537, we re gonna scale them between 0 and 1
22281, Check Test DataFrame For Any Missing Values Too
17665, Correleation
2166, Bar plot gives us an estimate of central tendency for a numeric variable and an indication of the uncertainty around that estimate
21797, K Nearest Neighbors
14126, Embarked
21743, item cnt is correlated with transactions and year is highly correlated with date block num
41467, Plot a normalized cross tab for Embarked Val and Survived
29920, Even though the search domain extended from 0
922, Model Evaluation
15351, we can finally take some insights from our model
6049, Similar distributions for OverallCond in train and test
33881, XGBoost
1786, Submission for Random Forest Model
21531, look at the connections for the first 100 rows of negative responses
5323, Display boundaries and dense by lines using latitude and longitude
3183, Fast and Butchery
19332, Training for 50 epochs
27150, RoofStyle Type of roof
21917, For selecting the most important features, lasso regression is performed with various alpha values, and the optimal features are chosen according to the RMSE score of a Ridge model
36545, Random Forest Top Features
2710, We use lasso regression
24675, EVALUATING THE MODEL ON TEST SET
3906, Skewness
32057, Backward Feature Elimination Recursive Feature Elimination RFE
1712, Imputation using Linear Interpolation method
14326, Embarked
32534, Loading the weights
26286, Forward propagation with dropout
6543, categorical columns
38975, The embedding dictionary starts at 1, so nothing will be at index 0
19199, To make the visualization easy I separate the rest of the products in 3 groups with comparable number of customers
40054, take a look at the different target distributions
15440, produce the output file
22298, Not sure if information loss is worth it but experiment
21833, Split home data and test data into quantitative and qualitative features, to be imputed differently
26903, Score for A9 16011
14832, Simple Logistic Regression
21168, Defining cnn model
26296, Defining CNN Model
9169, This looks rough
6080, so we have titles
33712, FEATURE SEX
8119, Linear SVC
28791, Look at Unique Words in each Segment
28278, Fitting a new model with the tuned hyperparameters to the combined dataset
1353, And the test dataset
42036, Groupby count cmap
32756, Installment Payments
321, Submission
2210, Random Forest
26700, Total sales from each of the state
29111, Now that the data is in the correct file and folder structure, we feed it to a DataBunch object, which is used inside the fastai library to train the CNN Learner class
7502, Remove the two outliers on the bottom right corner
38191, We can clearly distinguish digits facing or shifted towards one of the edges
11874, DataFrame concatination and Y separation
39715, Word2Vec retrieves all unique words from all sub-lists of documents, thereby constructing the vocabulary
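(A minimal gensim sketch using the v4 API; `documents` is assumed to be a list of token lists.)

    from gensim.models import Word2Vec

    w2v = Word2Vec(sentences=documents, vector_size=100, min_count=1)
    print(len(w2v.wv))   # size of the constructed vocabulary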
35400, Dropping the cabin column as 50 values in it are NaN
32800, Extra Tree
39984, SalePrice vs GrLivArea
26932, A slightly modified preprocessing
1344, Create new feature combining existing features
28634, GarageCars
18380, RMSLE
30336, The first function prepares a random batch and the second one prepares a batch given its indices
26847, Third batch
10031, Submission
20564, Specify and Fit Model
35907, Visualizing Training Set
25832, Looking for text and target data only
38772, Inception
12649, Predictions
37038, Can the length of the description give us some information
34538, Fine Tuning the model for tweet classification
18557, The smallest group is honor passengers with royal kind titles
147, AdaBoost Classifier
4268, KitchenQual
41875, Building a voting classifier
27781, Defining the model
42522, It looks like the diversity of similar patterns present across multiple classes affects the performance of the classifier, although CNN is a robust architecture
39260, ANALYSIS OF TEMPORAL TRENDS
19441, Training the Model
36021, Classifier
33097, Gradient Boosting Regressor
39751, Support Vector Machine
35050, Preprocessing
15451, Split dataset back into train and test variables
4187, Data exploration
38051, Fit: if the significance value, that is the p-value associated with the chi-square statistic, is small, there is very strong evidence for rejecting the null hypothesis of no fit, which means a good fit
27439, PREDICTIONS ON SAMPLE DATA DT
33522, Further improve by auto cropping
11148, Look at some correlation values in a list format
3899, One Hot Encoding
21412, NOTE: Even though it is automatic, we can incorporate some manual features IF we know some domain-specific information
9894, Family Size
32426, Accuracy Function
35393, Generating Submission File
24174, Submission
33575, Configuration Settings
23759, Running the XGBRegressor Algorithm
16390, Cross Validation Plotting using CV set
16776, Feature Engineering
28296, plot training process
23374, With the ids split it s now time to load the images
2206, Train Test Split
2056, Great now that we have the optimal parameters for our Random Forest model we can build a new model with those parameters to fit and use on the test set
16020, Fare
6721, Selected HeatMap
5201, We fill NAN values in the rest of the columns using the mean values
42622, Clustering the data with these columns: Population Size, Tourism, Date FirstFatality, Date FirstConfirmedCase, Latitude, Longitude, Mean Age
1333, Creating new feature extracting from existing
19311, Evaluation prediction and analysis
29987, Pre processing from
16764, Titles
12823, How are the Age spread for travellers
20441, bureau balance
23220, Prediction with 5 prediction runs
27421, More num iterations
36590, Use all training data learning rate
3920, IQR
36635, We start with Naïve Bayes, one of the most basic classifiers. Here the Bayesian probability theorem is used to predict the classes, with the naïve assumption that the features are independent. In the sklearn library implementation of Gaussian Naïve Bayes, the likelihood of the features is Gaussian-shaped and its parameters are calculated with the maximum likelihood method
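A minimal sketch of that step, with toy data from make_classification standing in for the prepared training features and labels:

```python
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# toy stand-in for the prepared training features and labels
X_train, y_train = make_classification(n_samples=200, n_features=10, random_state=0)

gnb = GaussianNB()  # Gaussian likelihood, parameters fit by maximum likelihood
print(cross_val_score(gnb, X_train, y_train, cv=5).mean())
```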
4119, Optional Step Feature Engineering
25587, Function to transform a model into a model predicting the log of the target
31822, Random under sampling
5360, Display many types of plots in a single chart
642, we model a fare category Fare cat as an ordinal integer variable based on the logarithmic fare values
22439, Marginal Histogram
42545, Scatter plot of question pair character lengths where color indicates duplicates and the size the word share coefficient we ve calculated earlier
28854, Select One Time Series as an Example
42426, Floor Vs Price Doc
29011, Conclusions
13281, SVC is a method similar to SVM. It also builds on kernel functions, but is appropriate for unsupervised learning (Reference: Wikipedia, Support vector clustering)
314, Embarked
35443, Previewing
7821, Plot learning curve
35112, Model DenseNet 121
29175, Apply the boxcox1p transformation to all features with high absolute skewness
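A short sketch of this transform; the 0.75 skewness cutoff and lambda of 0.15 are common choices in House Prices kernels and are assumptions here, not values from the original:

```python
import numpy as np
import pandas as pd
from scipy.special import boxcox1p
from scipy.stats import skew

# toy right-skewed feature standing in for the real data
df = pd.DataFrame({"GrLivArea": np.random.lognormal(7, 0.4, 100)})

skewness = df.apply(lambda s: skew(s.dropna()))
skewed = skewness[skewness.abs() > 0.75].index  # 0.75 cutoff is an assumption

df[skewed] = boxcox1p(df[skewed], 0.15)  # lambda of 0.15 is also an assumption
```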
8475, Model Hyperparametrization
4787, it is time to combine our train and test sets since we need to preprocess it the same way so that we can feed it later into our model
11014, we can drop Name attribute without any loss
35941, SVC
37525, Embarked Sex Pclass Survived
3836, Groping and Aggregation
730, Heuristic but effective
21684, Obtaining the data
25990, Training Multi layer Perceptron Classifier
2764, Neighbourhood wise salesprice distribution
34076, Family Size
32925, let s add all the base features from the main loan table which don t need aggregation
7444, apply Gradient Boosting for regression and find the best parameter for GBR using GridSearchCV
24824, Output Prediction
6569, Since Embarked have only 2 missing value we are trying to fill with most common one
9181, Year Built
20086, Time Series Graph of Whole Company Sales
4648, Read Below for a quick learning note on how to combine multiple charts in Python using Seaborn
2128, With coefficients
15197, Title
23151, Findings: most of the passengers who survived and died were from cabin category X, but percentage-wise it's categories B, D and E that had an impressive chance of survival; people from cabin category X had just a 30% chance of survival
3523, Finding Correlation coefficients between numeric features and SalePrice
27610, Create Data Generators
34830, Analyse and list the column with the null values
26205, We can delete width and height columns because we do not need them it can be easily pulled out from the images itself
12127, In the basement case we have some data missing in places where there's actually a basement, but the vast majority of the NaNs are really due to no basement in the house
17965, Easily select the string columns using the select dtypes Previously a column could only be selected by using its name explicitly
27835, random_state in train_test_split ensures that the data is pseudo-randomly divided
26857, Data Exploration
8744, New Features
21572, Clean Object column with mixed data using regex
27049, Visualizing Images with Malignant lesions
19607, Univariate analysis of categorical data
12333, MiscFeature
37359, Parch vs Survived
14523, Embarked
11610, fill them up one by one
29947, Random Search
34107, Hospitals in Urben and Rural Areas
3669, Box cox transform for skewd numerical data
13217, The plot between Gender and the Target variable clearly suggests that more men suffered the fate of Jack from Titanic
569, lightgbm LGBM
40872, Optimize Support Vector Machine
36632, There are outliers; let's deal with them
9271, Dealing with missing values
14550, Age
18488, Associating 0, 1, 2, 3 with categorical variables like StoreType, Assortment, and StateHoliday would affect the bias of the algorithm
17697, LOGISTIC REGRESSION
22956, Load test data
24114, Gradient Boosting
27229, Create XGB model and training it for intermediate values
23912, New Bayesian Optimization
39845, BASE MODEL
38625, Observe Training History
28820, From a Monday peak of almost 10 to as low as 6
37690, Give it a simpler task: output 1 when the image is the digit 1, and output 0 otherwise
19813, Binary encoding creates fewer columns than one-hot encoding, so it is more memory-efficient. It also reduces the chance of dimensionality problems with higher-cardinality features
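A small illustration using the third-party category_encoders package (assumed installed); a 5-category column becomes ceil(log2(5)) = 3 binary columns instead of 5 one-hot columns:

```python
import pandas as pd
import category_encoders as ce  # third-party package, assumed installed

df = pd.DataFrame({"city": ["NY", "LA", "SF", "NY", "Boston"]})
encoded = ce.BinaryEncoder(cols=["city"]).fit_transform(df)
print(encoded)  # ceil(log2(n_categories)) binary columns instead of one per category
```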
39814, let s plot a random image along with it s label
20207, Converting cateogrical columns to Numeric
34680, Even though the volume is mostly determined by one set of categories the situation in case of revenue is slightly different
11071, Recover
1137, Model evaluation based on K fold cross validation using cross val score function
30907, One more step needed is converting the datatypes from mixed to numerical
33362, Visualization of outliers
9783, Ensemble Models
30068, Each data point consists of 784 values
24513, If we instead use a weighted random sampler with weights that are inverse of the counts of the labels we can get a relatively balanced distribution
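A minimal PyTorch sketch of that idea, with a toy labels tensor standing in for the real dataset labels:

```python
import torch
from torch.utils.data import WeightedRandomSampler

labels = torch.tensor([0, 0, 0, 0, 1, 2])      # toy, imbalanced labels
class_counts = torch.bincount(labels).float()  # samples per class
weights = 1.0 / class_counts[labels]           # inverse-frequency weight per sample

sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
# loader = DataLoader(dataset, batch_size=32, sampler=sampler)
```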
13487, Check for imbalance class problem
18623, Converting Features
36996, explore now the orders
21809, Submission
39136, Output Layer
29949, Feature Importances
5650, Create a Fare category
39291, Discard irrelevant features
3146, Model Building
22971, Train Model
15352, we can try permutation importance
20126, Installed apps features
12715, Round 1 Initial models
32871, Build test set
21354, Compile Model
33153, Instead of summing up the income we sum up the actual values of our predictions
8349, Survival by Class and Embarked
33784, Column Types
9127, Electrical
13554, Getting the Fare log
2444, Here is a quick estimation of the influence of categorical variables on SalePrice
20074, Insights
1414, FamilySize vs Survived
14474, So females survived more than males, since during the disaster women were sent off the ship first, and siblings too
42204, Modeling Data
4581, we test a few potential variables
37140, Setting up Dataloaders
9228, Neural Network Classdefinition
15640, Feature Correlation
28373, Lemmatization
6171, Ticket
23447, Holiday
21192, Implement the forward propagation module
668, Gradient Boosting
8754, Data Columns
1714, Multivariate feature imputation Multivariate imputation by chained equations MICE
34742, Fitting a t-SNE model to the dense embeddings and overlaying that with the target visuals, we get:
22597, Shops Items Cats features
12435, Lets deal with quasi constant features e features that are almost constant
14688, Model Creation
29525, Randomforest
3302, Utilities: there is no NoSeWa in the test set, so it has basically no effect on SalePrice; delete it
20062, Load the dataset
26739, Plotting sales over the month for the 3 categories
10882, Examine Missing Values
7117, Make Submit prediction
4728, top 5
11673, It looks like passengers that embarked in Southampton were less likely to survive
23208, Findings Bagging can t beat our best base learners
4141, Evaluating the model
6213, XG Boosting
18196, cut products into 10 quantiles by summary adjusted demand
40837, Using the same pre processing functions on the test data
12970, It is very clear that the distributions and median values of both men and women are almost similar. Therefore we cannot use the gender variable directly for filling missing values
40726, Training Performance
9389, Feature BsmtUnfSF Unfinished square feet of basement area
26719, We have 10 National and Religious events 6 Cultural Events and 3 Sporting events in a year
28998, Since the distribution is right-skewed, we take the log transformation and convert it to a normal distribution
6408, Null Values Treatment
20772, Quickly plotting a scatter plot of the first 3 dimensions of the latent semantic space just to get an initial feel for how the target variables are distributed
4336, Ticket
34365, Ideally I should have concatenated both dataframes and done all manipulations at the same time
6271, Cabin
39015, Display death by embarked city
16113, Define training and testing set
9703, Normalization
85, Fare Feature
36837, Make predictions
14575, Overview of the dataframes
20123, Prediction Submission
13724, FINAL CHECK OF CURRENT CORRELATION
32281, Display the distribution of multiple continuous variables
606, We start to use factorplots
5375, Feature Importance
34521, Relationships
16968, after we cleaned and encoded the combined data set we split them to the original train and test data
15166, Random Forest Classifier
14534, Model Building
43342, Reshape the Data
24041, Submission
8783, One hot encoding for class
17997, To combine the two, a feature named group size is derived by taking the maximum of the family size and the number of co-travelers of a passenger
9980, Relationship between these continuous features and SalePrice
20539, Support Vector Regressor
618, A 60 yr old 3rd class passenger without family on board
16485, Embarked Ship
24459, let s use dataloader from this awesome kernel Melanoma Starter of shonenkov and vizualize some augmentations
30878, Some of the categories appear to have a higher variation per day than others
28412, Visualize
14887, Sex
24936, But to some extent it protects against outliers
17601, Logistic Regression
35320, Fit the model
6726, YrSold Vs SalePrice
14979, Filling Cabin missing values of test set
4901, do a little exploration on the training file
432, LotFrontage: since the area of each street connected to the house property most likely has a similar area to other houses in its neighborhood, we can fill in missing values with the median LotFrontage of the neighborhood
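A small sketch of this neighborhood-median imputation on a toy frame:

```python
import numpy as np
import pandas as pd

# toy frame standing in for the combined housing data
all_data = pd.DataFrame({
    "Neighborhood": ["A", "A", "B", "B", "B"],
    "LotFrontage": [60.0, np.nan, 80.0, 85.0, np.nan],
})

# fill each missing LotFrontage with the median of its neighborhood
all_data["LotFrontage"] = all_data.groupby("Neighborhood")["LotFrontage"] \
    .transform(lambda s: s.fillna(s.median()))
```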
35060, Making predictions using Solution 2
11842, Lets look at all the Surface Area variables
19254, Daily sales by store
25050, Define Task
15431, let s one hot encode the passenger class
32862, Feature engineering
15129, Engineering Feature
24511, There is significantly less accuracy for the same set up
1019, And now we set our grid search algo
32302, Displays the locations of continents
34740, Visualising T SNE applied to LSA reduced space by changing Perplexity
3592, Normalization
7913, Features engineering
3565, So we must use Spearman correlation, because categorical variables are mixed in
34001, datetime
722, We got around the issue with NAs in the numerical data by replacing them with the medians
24419, Home State Material
12405, For the first step it is always a good idea to start with analyzing target in this case the SalePrice
29939, Correlations for Bayesian Optimization
41168, Model configuration
20140, Implementation of Model
16826, Model Building
15389, Have a look at the training data
23817, 70 numerical features and 8 categorical features
33604, pull some of the digits from our training set
3716, Taking mode for all similar features like BsmtCond
24717, EigenVector 8
37634, After every epoch of training we compute the validation loss and the validation accuracy
5980, XGBClassifier
14405, FareBand
15826, first for unscaled data
19928, joining train and test set
29550, concatenate train and test datasets
20375, Get the best parameters
30645, K Nearest Neighbors
20319, Alright so we re resizing our images from 512x512 to 150x150
18631, Target Variables distribution
21681, I've removed the real grid-search parameters here because it takes too long to run online
8081, Setup cross validation method
18466, quick glimpse at the data on hand
25913, Read orders
16678, look at all the columns in the data
7041, Lot configuration
12397, Reading test file
29037, Training set accuracy
10274, Lasso
15458, Pclass
21764, Zero Value for NaN in cod prov
32141, How to rank items in an array using numpy
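One common way is a double argsort, a sketch:

```python
import numpy as np

a = np.array([9, 4, 15, 0, 17])
ranks = a.argsort().argsort()  # rank of each element (0 = smallest)
print(ranks)                   # [2 1 3 0 4]
```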
6999, Types 1 and 2 finished square feet
962, Plotly Barplot of Average Feature Importances
7637, Stacking
11948, Transforming all the skewed values to log values to lessen the impact of outliers
25489, Training time
8889, We plan to remove the columns with a very low standard deviation in their values
11470, Decision Tree
29093, Functions for WRMSSE calculations
41117, An important note
34021, Count temp
23371, Above a scatterplot of test and train data for a very small XY window
6001, Impute train and test data
40260, Garage Area
30112, We're going to use a pre-trained model, that is, a model created by someone else to solve a different problem. Instead of building a model from scratch to solve a similar problem, we'll use a model trained on ImageNet (over a million images in 1,000 classes) as a starting point. The model is a Convolutional Neural Network (CNN), a type of neural network that builds state-of-the-art models for computer vision. We'll be learning all about CNNs during this course
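The notebook does this with the FastAI library; as a library-agnostic sketch of the same idea with torchvision (the 2-class head is an illustrative assumption):

```python
import torch.nn as nn
from torchvision import models

# load a CNN pre-trained on ImageNet (newer torchvision uses weights= instead)
model = models.resnet34(pretrained=True)
for p in model.parameters():
    p.requires_grad = False                    # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # new head; 2 classes is illustrative
```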
41176, We initialize the weights of the first two convolutions with ImageNet weights
28423, How many outdated items in test set
8591, Filling missing Embarked values with Most Frequent with SimpleImputer
200, Library and Data
9831, In this case I use the age provided in our dataset to create groups, to find out if the passenger was a child, youth, adult, etc.
1812, Deleting features
34969, Random Forest Classification
256, Libraries and Data
14215, Bar Chart for Categorical Features
24004, Submission
32147, How to convert a PIL image to numpy array
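For example (a blank in-memory image stands in for a real file):

```python
import numpy as np
from PIL import Image

img = Image.new("RGB", (28, 28))  # blank in-memory image instead of a real file
arr = np.asarray(img)             # PIL image -> numpy array
print(arr.shape)                  # (28, 28, 3)
```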
3718, Handle Remaining missing values
28309, Identifying Missing Value Present in Application Train Dataset
30536, Exploration of Prev Application
40410, Looks like there are some outliers in this feature
9130, Set Pool Quality to 0 if there is no pool
4431, It is clear from the graphs that the SalePrice feature is skewed to the left
25912, Read train order products
23813, Those columns contain Geographic Information
24452, working with test dataset
43054, Density plots of features
7437, I build a neural network model in TensorFlow with two densely connected hidden layers and an output layer that returns a single continuous value
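A minimal sketch of such a network; the layer widths and input size are illustrative assumptions:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),  # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),                     # hidden layer 2
    tf.keras.layers.Dense(1),                                         # single continuous output
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```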
14094, Decision Tree
28184, To further clean our text data we ll also want to create a custom transformer for removing initial and end spaces and converting text into lower case
30839, Districts most vulnerable to a Crime
21637, Aggregating by multiple columns using agg
2453, Remove constant features
1036, Numerical features
43250, Saving data trained with logistic regression
20274, Correlation between categorical variables
26535, checkout the network a bit first
1365, I start looking the type and informations of the datasets
3434, While we're at it, let's change the Sex variable so that male/female is encoded 0/1
33583, Load saved model
17924, Importing train test split
3907, Kurtosis
13165, GridSearchCV for Hyperparameters
4563, Using BoxCox Transformation
20891, Inception CNN
1748, However, there is a problem: all passengers of this Pclass/SibSp combination have missing values, so we cannot find any median value for that category. For simplicity's sake, we manually fill in the missing ages of those passengers with the median age of passengers belonging to the nearest Pclass/SibSp value
5477, From the provided columns we have to select a few columns as independent variables, or features, on which we can train our model
700, Furthermore we can check how well the target variables correspond with our predictions
474, Before we move forward, let's make a copy of the training set
43385, First of all we need to calculate the perturbations for each image in the test set
41102, Prediction using growth factor
1403, Missing values
23990, Fill the remaining columns as None
36593, LSTM models
20105, Item count mean by month item category for 1 lag
19835, Square root transformation
18882, Data imputation
26699, Lets look at the number of rows for each state
40302, There is a good difference between the 0 and 1 classes using the common unigram count variable
11733, let s plot the loss function measure on the training and validation sets
8306, Meta Classifier
18093, EfficientNetB5 456 456 3
15043, The Cabin name is highly related to Survived
4843, As soon as we have numerical variables, we check for skewness and correct it
29228, Elastic Net
12154, Behind the scenes
5989, Save the Id column
8019, It's categorical data
34448, Item HOBBIES priced at around dollars is the costliest item being sold at walmarts across California
6810, Random forest is a robust, practical algorithm based on decision trees. It almost always outperforms the two previous algorithms we saw. If you want to find out more about this model, williamkoehrsen/random-forest-simple-explanation-377895a60d2d is a good start
10376, Handle categorical features
36863, Logistic Regression
15749, We may take the average Fare of passengers with
572, Second Voting
17051, The optimal parameters found during cross-validation score lower than on the public leaderboard, meaning we are overfitting to the training set; increase n_neighbors
9860, Total data table s variables value counts index
32470, kd Model
15623, Ticket feature
13856, Name
35447, Training
8771, Survival by fare
26740, Impact of Events and SNAP days on sales
36919, Age and Sex
2687, Lasso Method
38787, Apply probability frequency threshold and reset below threshold points to old values rounded
27210, my presumption was true and most benign cases are diagnosed as nevus
8317, Combining the processed numerical and categorical features
37333, Final model
6496, RoofMatl ClyTile
42688, REMINDER: repaid = 0 and not repaid = 1
23410, And now let s define the entire model
3644, Too many values for the Cabin feature are missing
18168, TPU or GPU detection
42012, Copying and creating a new DataFrame
28740, test set df test
42139, Encoder
6110, Garages and cars
27145, OverallQual Rates the overall material and finish of the house
23899, It seems the tax amount is the most important variable, followed by structure tax value dollar count and land tax value dollar count
22795, Let us examine the shape of this dataset
12068, GrLivArea
28662, LandContour
38666, Logistic Regression
24810, Drop columns with too many 0 s
7751, I chose the features which are strongly correlated with SalePrice as the numerical attributes which will be used in the future model
31350, Usage
36292, Magic Weapon 1: scale our data and re-train the model
19523, We can increase or decrease the number of partitions with the repartition method
41990, To read a certain column
11132, We need to look at the log-y relationship, since that is what we will be predicting in the model
15682, VotingClassifier Ensemble
4612, we look at the features that we deemed necessary to be tested earlier
34227, let s start building our ground truth data
9509, Preview the top 5 and bottom 5 records from the test dataset
6596, Submission
5120, Feature Engineering only for text fields
21503, Look at the largest and the smallest images from both sets
7147, VISUAL EXPLORATORY DATA ANALYSIS
23617, Check category features of the dataset
29623, Feature engineering extract title from Name attribute
7255, Train model
24989, Ordinal encoding the categorical features before applying LASSO
30592, Missing Values
33354, Timeseries decomposition plots daily sales
42122, Model
11530, Elastic Net Regression
2928, In order not to slow down loading the kernel, I commented out the parameter grids I used and replaced them with parameter grids containing the best models
9901, Sex
3724, Train Random Forest Regressor
41633, Removing Stopwords
26747, Sales data of each item at a weekly level
8455, Defining Categorical and Boolean Data as unit8 types
37623, We use the variable device to indicate whether to use GPU or CPU
14862, Excellent, we have separated the passengers into female, male and child. This will be important later on because of the famous "women and children first" policy
32155, How to create a numpy array sequence given only the starting point length and the step
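One way, sketched with arbitrary start/length/step values:

```python
import numpy as np

start, length, step = 5, 10, 3
seq = start + step * np.arange(length)
print(seq)  # [ 5  8 11 14 17 20 23 26 29 32]
```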
22486, Bonus 2: how to create hyperlinks inside a document and change the color of text
29830, RNN
17050, KNN
20259, Assume we have the equation y = x²
3581, And treat the missing values as 0, like the BsmtFinSF1 variable
23592, Merge predictions
36923, Chances for Survival by Port Of Embarkation
10622, Looking at missing amount of data from Cabin
25266, Croping Images randomly for resizing
19326, Countplot for each of the 10 digits
4342, Summing up number of parents children and siblings across passengers
42339, Make Predictions
9403, We can easily plot the OLS regression line as follows using Python code
26760, Preprocess
20544, Light Gradient Boosting Regressor
6065, Decks porch fence
3280, Model Building
33022, The standard deviation is much lower now, good job
38201, Bayesian optimization works by constructing a posterior distribution of functions that best describes the function you want to optimize
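A minimal sketch using the bayes_opt package (assumed installed), with a toy objective in place of a real model's CV score:

```python
from bayes_opt import BayesianOptimization  # third-party package, assumed installed

def black_box(x, y):
    # toy objective; in practice this would return a model's CV score
    return -x ** 2 - (y - 1) ** 2 + 1

optimizer = BayesianOptimization(f=black_box,
                                 pbounds={"x": (-2, 2), "y": (-3, 3)},
                                 random_state=1)
optimizer.maximize(init_points=2, n_iter=10)
print(optimizer.max)  # best parameters found and their target value
```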
6876, XgBoost Regressor
40193, Visualizing data w r t each class
757, Simple Linear Classifier
36741, Make Predictions with Validation data
35082, Ploting the relation between the Variance and the Dimensions
14809, Find Missing Value
30837, Extracting Feature from the Address Feature
22932, For sex, there is not much more we can probe
42019, Creating a new DataFrame from existing rows that match a certain condition
42571, These functions are used to calculate the performance of our predictions on this competition's metric, the logarithmic loss
1029, Example of categorical features Neighborhood
22339, Text to Numeric Conversion
42410, Fill missing values
37024, How many different brands do we have
24680, Model Evaluation and Prediction
8564, Handling Missing Values
3986, Processing categoricals variables
21201, Visualizing and Analysis
30403, Train the model using data augmentation
22623, Define Network Architecture
3987, Make mean target encoding for categorical feature
42367, Modelling
39089, Prediction
43300, Cross Validation Feature Importance
41524, Scatter Plots
3810, Light GBM
3649, Grouping fares in Fare feature and assigning values based on their survival rate
8675, Importing the Modules and Loading the Dataset
16820, People from Cherbourg have more survivals than deaths
35861, 3rd Training (Last)
2302, Pandas Fill in NA s
8832, Splitting the dataset
35055, Accuracy obtained after training
20493, Test Image Embeddings
42171, View the corresponding labels
25744, Removing shape information
4556, LotFrontage: since the area of each street connected to the house property most likely has a similar area to other houses in its neighborhood, we can fill in missing values with the median LotFrontage of the neighborhood
5841, Checking Feature data distribution
34792, Linear Regression
15039, Does the Fare related to the Survived
42819, Creating Dataset
25047, Add to Cart Reorder ratio
22833, perform a similar train test analysis for item ids
15301, XGBoost Model
40963, Checking out other models Uncomment and Run
37854, Exploratory Data Analysis
31517, We determine the shape of the matrix and confirm that the number of rows of the input matrix X is the same as that of y
16889, Fare vs Survival
20923, Submit
20528, Imputing missing values
21349, Accumulate the image names
28814, Saving our Model
4260, Dataset Statistics
22331, Convert Emoji Into Words
2998, Creating Dummy Variables
9210, Distribution of Ticket Class to Survival
32181, Define optimizers
38288, MasVnrType and MasVnrArea not yet imputed completely
14335, Dropping a few columns which aren't useful for prediction
5670, Fill basic missing values for Fare feature
18563, size 7
34003, holiday
34709, Mean over all shops and all items
36753, Importing Various Modules
1654, Now let s have a look at Age
17916, ANALYSIS OF GENDER AND SURVIVAL STATUS
1678, Analysis of a numerical feature AnalysisofaNumericalFeature
5141, Categorical variables
2934, Apply the model on test dataset and submit on Kaggle
8583, Applying RandomForest
14657, The Married status and Pclass are important in determining the age of the person
16052, From which location did passengers board the Titanic? Does it matter, or is it more important that a passenger was on the Titanic at all, no matter where they boarded? As we know, at 2:20 a.m. on April 15, 1912, the British ocean liner Titanic sank into the North Atlantic Ocean
15311, Lets find out the Survival Rate
36382, CNN
16918, Confirm no missing data
9275, XG Boost model
11740, Just as expected the graph looks very similar to the first one we looked at
9014, Set Fence Quality to 0 if there is no fence
40948, Great
28858, Difference Transform
43320, predicting the values with the model
37218, First, does the embeddings index contain digits? Google News replaces numbers greater than 9 with # signs
151, Gaussian Process Classifier
21592, Remove a column and store it as a separate series
24440, LightGBM
32683, DataArgumentation
20124, Training Loop
9974, Heatmap
37417, We only need a few prinicipal components to account for the majority of variance in the data
26949, OBS 30 CNT SOCIAL CIRCLE
22491, Bonus7 Sankey plot in Python
27537, Display the distribution of a continous variable
6164, Age
43134, Fatalities 3 Worst Predicted
86, Here we could take the average of the Fare column to fill in the NaN value. However, for the sake of learning and practicing, we try something else: we can take the average of the values where Pclass is 3, Sex is male, and Embarked is S
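A small sketch of this conditional-mean fill on a toy frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({  # toy stand-in for the Titanic frame
    "Fare": [7.25, np.nan, 8.05, 7.9],
    "Pclass": [3, 3, 3, 3],
    "Sex": ["male"] * 4,
    "Embarked": ["S"] * 4,
})

mask = (df["Pclass"] == 3) & (df["Sex"] == "male") & (df["Embarked"] == "S")
df["Fare"] = df["Fare"].fillna(df.loc[mask, "Fare"].mean())
```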
6524, Since the numeric variables are skewed, we apply a log transformation to bring them closer to a normal distribution
1720, Either train
8577, Machine learning algorithms cannot run with categorical columns so we need to make them numerical
5368, Permutation Feature Importance
5556, Make Predictions
15059, Most Correlation
161, Our first step is to load and quickly explore our train dataset; the previous cell output helps us find the file names. To read the dataset we use the read_csv Pandas method. Now the previous listing of our files comes in handy, since we know the input to our method
22697, Let's do cross-validation
43382, Training the model
12399, Removal of null values
22834, There are a lot more items in the training set than there are in the test set
35922, Define the model using Keras and TensorFlow backend
496, I think the thought here is that individuals with recorded cabin numbers are of a higher financial class and therefore more likely to survive
31771, Pre trained model preparation
22942, With the numbers, there were several glitches in our algorithm
41195, check if there are any categorical columns left with missing values
42534, No need for the date column now, so dropping it
28300, Have a look at some performance metrics on the validation set
13515, Section 2 Modeling
24008, Train and validation
39674, We define our error as the squared difference between our true Y and our predicted Y, which gives us an absolute error metric
22528, Dropping columns and filling NA NaN values using the specified method
34763, Fitting Model on Total data
1747, Fill in missing values for Age
19431, Check NaN count in each column
18066, Logistic Regression Model
16893, New Feature haveCabin
207, Confusion Matrix
8055, submit
41790, Loss and accuracy
30136, Input are 2 Numpy array Let me briefly go over them
32591, Learning Rate Distribution
4373, We can generate one new feature using all these bathroom-related features
30595, We need to align the testing and training dataframes, i.e. match them up so they have exactly the same columns
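In pandas this can be done with DataFrame.align; a small sketch (set the label aside first so it isn't dropped):

```python
import pandas as pd

train = pd.DataFrame({"a": [1], "b": [2], "target": [0]})
test = pd.DataFrame({"a": [3], "c": [4]})

target = train.pop("target")                           # keep the label aside
train, test = train.align(test, join="inner", axis=1)  # keep only shared columns
```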
30004, test the model
11213, LightGBM
37621, score3 600
3826, We might require a sample size greater than 50
6217, GarageType Garage location
7804, Blend Models
24104, Applying Machine Learning algorithms
5376, A Quick View of the Data
36475, MissxMarisa is the most active user with 23 tweets, followed by ChineseLearn, erkagarcia and MiDesfileNegro
37301, Unigram
9712, try to interpret the coefficients
3587, Frequency Encoding
8588, Filling missing Age values with Median with SimpleImputer
15337, The Data that we are dropping from the dataset
11194, Additional testing
8462, Prepare Data to Select Features
29022, save the target column separately
6747, Multivariate Analysis
24932, Some tasks may require additional calendar features
3441, check the ages of the passengers with the female honorifics in Title Lady and the Countess
881, all data for submission
29924, In both search methods the gbdt and dart do much better than goss
16649, Overview of The Data
18495, A crucial validation
35687, LightBGM Hyperparameter tuning
20509, import the libraries
19058, Viewing a batch
6708, All columns have more than 1 unique value; no feature found with a single value
11086, Make sure to optimize parameters once at the start, and then once again at the end after all your data tweaks
39383, Label encode all categorical features
25216, Impute the numerical features and replace with a value of zero
43367, Viewing some example
30462, Preprocessing pipeline
22812, Dealing with Outliers
41165, Imports
4432, Way better
1896, RandomForest Model
1542, Embarked Feature
27294, Intervention by Hill function for SEIR model
29425, use the sample submission csv file as reference and fill the target column with our predictions
37169, practise on a simple decision tree
6374, Find out the mean value
13589, Calculating the class weights
10009, MSSubClass: NA most likely means no building class; we can replace missing values with "None"
3591, Make a categorical variable: 1 if present, 0 if absent
26678, Numerical features by label
8983, How do these categories affect price
32992, ROC curve
37052, Feature Engineering Or Data Preprocessing
9378, Ooh
29866, The dataset is highly imbalanced, with many samples for level 0 and very few for the rest of the levels
995, And now let's convert our categorical features to a model-ready type
19159, Submission
8118, SVM
10668, Assign model
17258, Describing training dataset
29014, Distribution of Age feature by Survived
40407, Bathrooms
22147, GIVING WEIGHTS TO EACH FEATURE S RESULT CV
22906, Findings
36062, Scale Data
35953, Filling missing values: Age, Fare, Embarked and Cabin have missing values. We'll take different approaches to fill them, but we'll drop the Cabin feature, because it's usually advised to drop features with a high share of missing values (over 50%)
20236, Fare
23644, Model
1384, Anatomy of a neural network
43091, check how many features we have now
6677, Bagged DecisionTree
30779, Using test csv to make another prediction
42902, MCA factor map (biplot): a global pattern within the data, with individual categories and variables
30317, If more than one candidate is available, we take only the first candidate and discard the rest for simplicity
41224, visualize one row of the train dataset
39912, As Kaggle kernels have a 1200-second limit, I have divided the prediction step
43215, Import libraries and fit the model
6734, TotalBsmtSF Vs SalePrice
34348, now let s read in the test data and generate our submission file
42452, Ordinal variables
43114, Use the next code cell to one hot encode the data in X train and X valid
865, Embarked and Sex
4876, Lasso
39113, Parch
22044, Since schools generally play an important role in house hunting, let us create some variables around schools
20669, We changed the type of MSSubClass to string so we can impute the median based on MSSubClass in MSZoning in the next step
35756, Keras earning rate and early stopping during training
24726, After creating new features we can drop useless columns that we won t use in the training process
14292, Start Diving into it
20734, Heating column
14584, There are no missing data in the Test information
5410, Well, I insist: if we input this feature without any preprocessing, it will cause overfitting, since people with 5 and 8 SibSp all survived, but there aren't many of them
32816, Macro-economic factors' influence on house pricing in Moscow
11655, Find better hyper parameters
32967, Passanger Class
8994, Missing values
15173, Fare
8615, Here I have printed the top 20 features which make a huge contribution to the target variable. Now I check whether any of these variables have null values; if there are any, I fill them with the most common values
15461, Ticket Number Remap
41653, Retrieval
18927, Relationship between numerical values
41289, Using the saved weights in a new model and checking predictions
41961, The words need to be encoded as integers or floating-point values for use as input to a machine learning algorithm; this is called feature extraction
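A minimal sketch of one such encoding, bag-of-words counts with scikit-learn:

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat", "the dog sat"]  # toy corpus
vec = CountVectorizer()
X = vec.fit_transform(docs)            # sparse matrix of token counts
print(vec.get_feature_names_out(), X.toarray())
```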
29180, Trick: combine BsmtFinSF1, BsmtFinSF2, 1stFlrSF and 2ndFlrSF
1768, Cabin
34234, For our DataLoaders we're going to want to use an ImageBlock for our input, the BBoxBlock and BBoxLblBlock for our outputs, and our custom get_items along with some get_y's
28777, In previous versions of this notebook I used the number of words in the selected text and main text, and the length of words in text and selected, as the main meta-features. But in the context of this competition, where we have to predict selected_text, which is a subset of text, more useful features to generate would be:
40405, idling is not me
26906, Score for A10 14810
42135, Apply model to test set and output predictions
37937, Before we start let s take a quick look at the data structure
23597, Define a tokenizer
14960, Impute missing embarked value
26788, Clustering NYC data
36429, Train Test Split
15218, Model CatBoost
2212, General Scores
20546, Base Models Scores
43267, Draw the decision tree
13857, We can replace many titles with a more common name or classify them as Rare
7085, People with a high Fare are more likely to survive, though most Fares are under 100
42855, Top words No disaster tweets
36397, EDA with describe
20731, Foundation column
24831, Here you can choose to run the subpart
6333, But before going any further, we start by cleaning the data of missing values. I set the threshold to 80% (red line): all columns with more than 80% missing values will be dropped
26855, F Score Graph
31430, Apply fresh JEL
26011, Just to try out as a traditional approach I am going to run a Correlation Heat Map for all the variables
20414, Here are some questions that have only a single word
41433, From the pricing point of view all the categories are pretty well distributed with no category with an extraordinary pricing point
4988, Tickets
5369, Recursive Feature Elimination
36336, Before the model is ready for training it needs a few more settings
24985, Removing numeric variables that have low correlation with target
11664, Adaboost
24737, Agglomerative Hierarchical Clustering
5956, Correlation Matrix and Heatmap
21619, Store NaN in an integer type with Int64 not int64
23367, Analysis after Model Training
32918, Try my model on random images 1 Dog 0 Cat
1941, nd Floor with SalePrice
1016, And let s print our accuracy estimated by our k Fold scheme
11750, Feature Engineering
15743, split data
11700, Experiment 2 Compare the custom BCE and TF built in BCE
26679, EXT SOURCE 2
14093, KNN Output File
41456, Feature Sex
39332, Text features for items df
25398, Submision
30858, Implementing a max pooling layer in TensorFlow is quite easy
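For instance, with tf.nn.max_pool2d on a random NHWC batch:

```python
import tensorflow as tf

x = tf.random.normal([1, 28, 28, 3])  # NHWC batch of images
pooled = tf.nn.max_pool2d(x, ksize=2, strides=2, padding="SAME")
print(pooled.shape)                   # (1, 14, 14, 3)
```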
27606, With the cleaned image, segmentation is performed, thereby dividing the image into four distinct regions: background, wall, left lung cavity, right lung cavity
13098, Modifying seaborn countplot to make it work with FacetGrid when all 3 arguments are used
3286, Train models again
12139, LGBMRegressor 2
6523, Temporal Variables
39183, Visualizations
5508, Dropping Columns
36815, Then we'll assign the test and training data URLs to variables, as well as filenames for each of those datasets
589, Load Python modules. The list of modules grows step by step as new functionality useful for this project is added. A module could be defined further down once it is needed, but I prefer to have them all in one place to keep an overview
7392, For the final dataset I drop the surname and name codes because they aren t needed anymore
18036, Boy
16218, Feature Engineering
7050, Exterior covering on house
18156, Normalize the Sparse Features
39693, Remove stop words, frequent words and rare words
10931, Importing Python Modules
29554, If we want to use torchtext, we should save the train, test and validation datasets into separate files
27994, DecisionTreeClassifier
27895, Building the ConvNet Model
40783, Because you changed the cost, you have to change backward propagation as well; all the gradients have to be computed with respect to this new cost
5563, Moment of truth
20711, Data is highly skewed
30924, Add our predictions along the original data
39401, Numeric Features Exploration
23757, Analysis of Temperature and Confrimed Cases via Plotly Graphs
40616, I created a quick imitation of train/test split in order to compare OOF ensembling methods
32005, Observations
29316, You have a higher chance of surviving with a first-class ticket than with a second- or third-class one
99, Passengers who traveled in big groups with parents/children had a lower survival rate than other passengers
1169, There are houses with garages that are detached, but that have NaNs for all other Garage variables
35201, In previous figures we mentioned visualizing the proof later
28986, Missing Value Analysis
23570, Visualize the model s training progress using the stats stored in the history object
6824, we ll add XGB and LightGBM to the StackedRegressor
30926, Question texts are not printed fully but shortened; let's fix that
41929, XGBoost Starter
18663, Make Submission
37453, We need to make each output the same length to feed it to the neural network
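A typical way to do this in Keras is pad_sequences; the maxlen here is an arbitrary illustration:

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

seqs = [[1, 2, 3], [4, 5], [6]]
padded = pad_sequences(seqs, maxlen=4, padding="post")  # zero-pad to length 4
print(padded)
```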
29787, Sample few test images
21361, Preparing the data and the model
27129, Sale Price
16448, Embarked
42606, Model
33778, Fitting the model
8666, CentralAir and Street are binary features, so we can encode them with 0 and 1
9797, After all these processes, we can split the training and test sets from each other for model testing
32625, Transforming tokens to a vector
36470, Images from University of Saskatchewan
3529, Box plot Neighborhood
19339, Reference Table
5387, Compare and Compound Classification Algorithms
26032, Preprocessing The Data
7605, for numerical features
5225, PCA
7901, Building the model for the test set
5977, Grid Search on Logistic Regression
34066, Pclass Survived
33677, Random font
33095, AdaBoostRegressor
35108, Quadratic Weighted Kappa
14892, Apply the correlation Heatmap
16396, Calculating Percentage
10346, Skewness and Kurtosis
8650, get a view of the actual data
8106, Sex
5690, Use Feature Importance to drop features that may not add value
9100, How does an unfinished floor affect the SalePrice
3002, Train Score is 0
26471, We use the training set as our data, which will be further split into train/test due to memory constraints
4801, For all the numeric features we have checked for skew and have log-transformed those features which have high skew (greater than our chosen threshold) to get a normal distribution
27254, Create a Dict data type to hold the parameters for our First Level Models
13847, But before going any further we start by cleaning the data from missing values
35341, MISCLASSIFIED PET IMAGES
30411, Train and predict
34283, checking for improvements in count over time
16036, Here is the public score so far
11744, analyze which variables are missing
37878, Top 10 Feature Importance Positive and Negative Role
4409, Indicator dummy Variables
38494, Let's take a look at the distribution of the target
893, Train again for all data and submit
30177, I thought of a sigmoidal function because China s data resembled a sigmoidal shape
28202, Named Entity Recognition
6361, The Lasso regression
28466, Column yearbuilt
8816, From the above information, we can fill the Age information with:
18003, Irrespective of whether we process the training and test sets together, after we decide on the hyperparameters we use the entire training set to retrain the model, and then use this model to predict the test cases
42242, Missing null values in numerical columns
3488, Classification report
12495, Building final model with best parameters and calculating on full train data with voting regressor
30355, Predict World With China Data
12276, Import required packages for Feature Selection Process
31228, Features with positive values and maximum value greater than 20
10463, Let's visualize an estimation of the probability density function to get a better understanding of what the values of each attribute look like
7930, Lasso model
35569, Visualizing the data for a single item
27226, Using ImageDataGenerator to bring in augmentaion in data
4517, Linear Regression
9735, Appendix
9362, Completing a categorical feature
13322, Observations
28168, In case you wish to keep only the tokenizer up and running, you can use the code below to disable the other pipeline components
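A small sketch of that, using spaCy v3's select_pipes (the model name is an assumption and must be installed):

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # model assumed installed
# temporarily disable everything, leaving only the tokenizer active
with nlp.select_pipes(disable=nlp.pipe_names):
    doc = nlp("Only the tokenizer runs here.")
print([t.text for t in doc])
```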
23949, if the percentage is greater than 0
7095, Models include
10756, create a new feature
22112, The next step is to guess what new features we need to introduce to make the model better
21512, Padding images
33992, RandomForest Regressor
553, SVC RandomizedSearchCV
22452, Dumbbell plot
12427, Using map
24549, In case of total products 1
20537, Linear Regression with Lasso regularization L1 penalty
27197, 3rd Step: Data Analysis and Some Visualization
4182, Fare
19824, Information Value
35633, Histogram of L2 distances from the digit eight
40155, Apparently this information is simply missing from the data
25353, Locations
24981, Extracting categorical and numeric columns
1291, Hyper Parameters Tuning 5 3
19762, Application of the cleanUp function
26744, Plotting sales of weekends before different event types
33491, Albania
18707, let s download the file and upload it to a bucket in Google Cloud Storage
34671, Average sales volume
42277, Check NA
3606, Another way to look at null values using a trick from
24126, Looks like some values from Keyword and Location columns are missing let s explore these columns
10747, Check missing values in features and impute
35070, I try again but this time I add a Lambda layer at the input of my NN
40259, Garage Cars
29426, fill the target column
15910, Age and Sex
41743, Error analysis
742, Final DFs
14966, Though there are more males on the ship, females' survival chances are higher than males'
15673, xgboost
21204, Parameters Initialization
19594, main cate id
38035, Ensemble Learning
34907, Fill it
274, Model
6176, More males are present than females, according to titles
5588, discretize Age feature
6076, Embarked
7981, We should drop the orange blocks
34714, Mean over all months
27651, We perform a grayscale normalization to reduce the effect of illumination differences
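Concretely, this is just a division by 255 (toy images below):

```python
import numpy as np

X = np.random.randint(0, 256, size=(4, 28, 28)).astype("float32")  # toy images
X = X / 255.0  # scale pixel intensities from [0, 255] to [0, 1]
```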
4521, Using Neural Nets
11843, Creating a TotalSF variable from TotalBsmtSF and 2ndFlrSF
37611, In order to fill the missing values effectively, you have to understand the data in the dataset; go through the data description
18083, The brightness of the images
4557, MasVnrArea NA most likely means no masonry veneer for these houses
15946, AgeGroup
21452, Numercial
16554, Dropping the unnecessary features
1296, Univariate Analysis
35931, Age Fare
14596, Ticket class
33795, Effect of Age on Repayment
16161, SVM
1071, visualization
13423, Baseline models
11318, Fare
38695, After Mean Encoding
31734, Save the files before continuing
23221, Prediction with 1 prediction run
40977, Another way to look at the data with groupby
3637, Merging Titles
32328, transaction date
28279, Generating the submission file
34920, Word length stats in tweets
10427, Showing Confusion Matrices
43122, Checking correlation between features
35810, Add test
11457, Age and Sex
15909, Further exploring the relationship between Pclass Sex Age and Survival
6425, PCA Dimensionality reduction
15679, Plot Area under ROC
12313, GarageArea is divided into 0 and non zero parts
13564, Mapping the titles
12812, now lets check missing values for test data
32193, We'll remove the obvious outliers in the dataset: the items that sold more than 1,000 in one day and the item with a price greater than 300,000
3935, The rest I just delete because those are correlated or not so important
26688, MEAN BUILDING SCORE TOTAL: the sum of all building AVG scores
12189, and we want the data to be scaled before going to our model
19161, Yes/no features for whether a category is present
41302, How many categorical predictors are there
40382, format_path_train and format_path_test merely take in an image name and return the path to the image
30775, Using KFold for cross validate
30080, Cleaning Data
6424, Data Scaling
26752, vil
23694, For some reason, using Matthews correlation
4248, Sherpa
29254, based on that and the definitions of each variable I fill the empty strings either with the most common value or create an unknown category based on what I think makes more sense
9792, Encoding nominal values using dummies
23538, For test data we have 28 000 entries
37279, Our functions for loading our embeddings
19054, Getting some quick batch tfms
11917, Embarked Column
7018, Evaluates the height of the basement
15779, Logistic Regression
18025, XGBoost model
38544, To visualize the tree we use graphviz
1028, Interesting: the overall quality, living area, basement area, garage cars and garage area have the highest correlation values with the sale price, which is logical: better quality and bigger area mean a higher price
36547, Take a look at the top n features of your choice
23922, Prediction and submission
8380, It s time to go
14805, Basic Data Analysis
23841, Taking X10 and X54
2333, Imbalanced Dependent Variable
33019, Preprocessing
32607, Numeric Features Exploration
20837, We ll do this for two more fields
8060, Various datatypes columns
10947, merge train and test data set
20851, In time series data cross validation is not random
42051, Change string to numbers by replacing values
16905, Conclusion
15784, Choose best model
39843, Quarter-wise resampling of the data
21374, Reshape
36262, The most important thing when plotting histograms: arrange the number of bins
39667, Splitting chosen image into inputs and targets
25052, Find best coefficients
37806, Elastic Net
34911, Count len of tweets
4995, Poking around
20059, Optuna is an automatic hyperparameter optimization software framework particularly designed for machine learning
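A minimal sketch close to Optuna's quickstart, with a toy objective in place of a model's CV error:

```python
import optuna

def objective(trial):
    # toy objective; in practice return a model's CV error for the sampled params
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```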
26840, Rides Per Year
43157, Convert the data
17411, calculate the mean of all the feature importances and store it as a new column in the feature importance dataframe
7761, Preparing Train Data for Prediction
21594, Read data from a PDF tabula py
41866, Below there is a cloud of words from alphabetical entities only, in the shape of a fire as well
7771, Random Forest
33713, FEATURE AGE
20825, In pandas you can add new columns to a dataframe by simply defining them
7315, From the following cells we can tell that the train and test data are split by PassengerId
13570, Creating the Family Size feature
22366, Create Column FamilySurvived FamilyDied
39414, Embarked
26379, Extract bounding box data
9041, drop these columns
14124, Age
29047, Removing the circular boundary to remove irregularities
37154, Now we choose a particular image which we use in a later sub-section to view the features learnt by the model
15957, Test set
33074, Data preprocessing
24765, Gradient Boosting
1201, Elastic Net L1 and L2 penalty
18816, LASSO Regression
22465, Pie chart
2573, Model and Accuracy
16027, Cabin
15644, Split data into test and training
42014, Creating a new column by copying an existing column
32111, How to print the full numpy array without truncating
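For example:

```python
import sys
import numpy as np

np.set_printoptions(threshold=sys.maxsize)  # never summarize with "..."
print(np.arange(2000))
```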
15078, PCA
8154, Valuation Models
11919, One Hot Encoding
158, EDA of training set
909, We log-transform the SalePrice for a clearer linear relationship
31514, Satisfied customers
4347, By observing the house features, we can conclude that the house's SalePrice is low with respect to all of its features
36540, Test
16337, k Nearest Neighbors
701, The first thing to notice about this dataset is that there are a fair few variables
24673, MODEL TRAINING
17420, As it is a classification model, we have many ways to fit it, but let's choose the best one
14008, Ensemble RF LR KNN by voting
7269, It s clear that the majority of people embarked in Southampton
29223, We dummy-code the remaining categorical variables
10338, First of all, let's start by replacing the missing values in both the training and the test set; we will be combining both datasets into one
24970, Model Inference
2913, OK, after several tests I decided how to do the feature extraction: I wrote a function that encapsulates all the others I used for this purpose
10, Pipelines
38945, Plan of action
5433, Pool
28449, COLUMNS WITH NO MISSING VALUES
23698, Make predictions and submit
31365, vectorizing
17981, Prediction
5924, The data is right skewed
24591, TRAINING THE MODEL
22335, If you want to know what exactly each tag specifies, use the nltk help function as below
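Presumably something like the following; upenn_tagset needs the "tagsets" resource downloaded first:

```python
import nltk

nltk.download("tagsets")      # one-time download of the tag documentation
nltk.help.upenn_tagset("NN")  # prints what the NN tag means, with examples
```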
17024, Hypothesis: there is some correlation between title and survival rate
32236, let s train our model several times depending on what we set epochs to be
2814, Combining the training and testing datasets to maintain consistency between the sets
42048, Lowercase the column headers
7762, Cross Validation
11754, First we are going to split all of our data back into our train and test data sets
64, predictions
28102, Separate the categorical and numerical columns
11725, Bagging classifier
20548, See the loss
4256, Nulls in testing set
6875, Linear Regression
12374, list_cate: the list of all the categorical data fields in the dataset
29968, Collecting texts for training corpus
13544, Exploring the data
7513, Prediction and submit
19090, Correlation
16075, ROC AUC Curve
6313, We have found the optimal feature set for each of the selected models. This is a very powerful exercise, and it is important to refine the dimensionality of the data for each individual model, as each estimator works with the data differently and hence produces different importances
39717, Exploring the trained Word2Vec model
34264, To use the day of the week we merge data from the calendar DF
7325, By fixing the values of Embarked and Pclass, we can plot a histogram of Fare; we should use the most common value to replace the NA value of Fare
3853, Feature AgeState
5889, The same indices are missing; it means that if the area is 0, those columns are missing
32781, Scheduler
18926, Data
6060, 1stFlrSF: well distributed, a couple of outliers
16384, S = 0, C = 1, Q = 2
13102, One hot Encoding and Standardization
23153, Findings: 93% of passengers with ticket category A died; over 64% survived from category P; over 57% survived from F; and just over 15% of passengers survived from ticket category W
26652, check if ad id can be used as index
38249, One might ask: how is this helpful?
6463, Pipeline for the numerical attributes
22209, Here is the commented out version for when I was working with 2 models instead of 3
25810, Evaluate Model
15562, we are ready for the real filling of all missing ages
23329, Visualize what the light pattern looks like on the photodetector array
30311, Import modules
8788, We use cross validation to evaluate your model
25853, Some other symbols
35384, SUPPORT VECTOR MACHINES
24261, Descriptive statistics
20932, Visualize the training history
29350, Babies are more likely to survive than any other age group
43259, Creating the rolling temp column
20346, and also plot some of the training images
9060, It is confusing to me that the minimum of this is 2
14722, Let's look at the distribution of titles for survivors vs. non-survivors
29559, Predictions
16387, Combining Titles and Sex only
24746, Exploration
17417, SibSp: number of siblings/spouses aboard the Titanic
35913, Compile the model
37547, PYTORCH MODEL
23809, Preprocess Pipeline
20442, credit card balance
32753, Function to Aggregate Stats at the Client Level
2644, Fill NaN or null values in data
12816, Gender Analysis
32509, Loading the weights
14364, Feature Desciption
29430, Make Predictions
6618, Function for Evaluation
22444, Diverging lines with text
22537, Creating the submission file
31234, Features with max value more than 20 and min values between 10 and 20
2757, Dropping the columns that have more than 30 of missing values
15250, Tune
2463, ANOVA F value For Feature Selection
11722, Passive Aggressive Classifier
15028, In our speculation when we explored the feature Sex, we thought that the death ratio of adult males would be much higher than that of adult females and children
31425, Case x start x end
8155, Submission
5498, XGBoost Regressor
22600, Price trend for the last six months
15401, It s a 3rd class passenger so let s fill the missing value with the mean fare for passengers of the same class
28037, Let's remove variables to have a clean workspace for the rest of the kernel
25508, CREATING SAMPLE CORPUS OF DOCUMENTS ie TEXTS
27338, One Hot Encoding
25803, Before continuing let s give them some fake identities based on the most common first and last names in the US
18653, Exploring categorical fields
31894, Performing 5-fold CV
16276, Defaults
31509, We create two dataframes
31724, EDA take a look
37191, Feature Engineering
1604, Conclusion
6845, Checking the Imbalance of Target Variable
11062, Embarkment versus survival
19060, Specify the architecure to be used
8729, Correlations
8039, Add new feature
40964, Saving the Submission File
12449, With the GarageYrBlt my first assumption is that it must be strongly correlated with the YearBuilt
22425, This plot is an example of the power of matplotlib. By the end of this kernel you will learn to do this and more advanced plots
5692, Predict the output
8594, df_train_cat is a sparse matrix; to convert it to a dense matrix, use toarray()
20768, applying these functions sequentially to the question text field
42660, The difference between the two matrices is sparse except for three specific feature combinations
43256, Renaming the count column to rentals
1898, KNeighbors Model
1865, Compare Models
28036, CONCLUSION ON TRADITIONAL METHODS
37679, Preparing the T5 Dataset
10773, First things first: we split the data back into training and testing sets, and we drop some unused features
4703, I defined this little and very helpful function to compute the root mean square error
18611, Model 1 SVM Linear Accuracy 79
28417, Simple bar plot of the feature importances to illustrate the disparities in the relative significance of the variables
19373, Transforming data to reduce skew
285, Sex
4795, log transform it and recheck
17980, Applying Classifier using sklearn wrapper
9383, fill the features of this house with appropriate values
28021, Logistic Regression
8575, Concatenating DataFrames
13336, Let's convert the title feature, which contains strings, to numerical values
13148, Family size as Survival Function
8818, IT'S CLEAR: ready for feature engineering, but we'll drop Ticket first
3604, Independent variables
436, BsmtQual, BsmtCond, BsmtExposure, BsmtFinType1 and BsmtFinType2: for all these categorical basement-related features, NaN means that there is no basement
14606, Naives Bayes
24866, Data Analysis
43005, Data cleanning drop duplicate columns
3420, Extracting and Combining Ticket Prefixes
37617, XGB Regressor
17730, The data reveals that some passengers had more than one cabin
10612, Selecting best method of imputation
25035, Seems like 0 and 1 are Saturday and Sunday, when orders are high; orders are low during Wednesday
11626, Model building
36213, Exporting the different submissions
22820, Nope The outlier is the only transaction
15613, Title Category
14293, Reading the dataset
36450, Callback
14511, Observations
38166, For TPOT everything needs to be float or int, so for example purposes we delete variables that are not
4197, Feature engineering
1539, Now that we've filled in the missing values at least somewhat accurately, it's time to map each age group to a numerical value
498, Fare Feature
14494, Random forest accuracy increased to
17404, kf stands for K-Folds cross-validator: kf = KFold(...) generates a K-Folds cross-validator, which provides train/test indices to split the data into train/test sets
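A small sketch of that usage:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # toy data
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for train_idx, test_idx in kf.split(X):
    X_tr, X_te = X[train_idx], X[test_idx]  # the indices select the folds
```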
35574, Rollout of items being sold
4585, BsmtFinType SF is more powerful than its creators in the case of type 1, but not so in type 2; this is strange, as we would expect the behavior to be consistent
14144, AdaBoost Classifier
2903, Stacked Regressor
36090, Histogram of successes and failure rates
38849, Family houses' selling prices are very high
6156, Stacking on test subset
7962, display the columns with the number of missing values
11898, This is a general method for handling outlier trends
11905, Ensemble and submission
37694, The error is smaller; measure accuracy and F1
6819, There are some outliers in this data as well, but we are not going to remove these either, because we don't want our data biased and our models affected by that bias
26002, The next step is to split enc_train into a training and validation set so that there's a way to assess model performance without touching the test data
40718, Model Using Keras
16699, do this for all the other classes as well using iteration
3623, First let s get a baseline for where we are without doing too much more clean up work
30853, Locations For Assault
8551, Other Numeric Features
542, Fill NaN with mean or mode
14572, Output
37040, This Keras function simplifies augmentation
2250, Getting the Data
10859, Checking the correlation of the numerical features
16452, Pclass
19695, Define Optimizer
40457, OverallQualNCond
42335, Here the summary of the model can be interpreted with each layer separately containing
14854, Plotting the survival rate by family size, it is clear that people who were alone had a lower chance of surviving than families of up to 4 members, while the survival rate drops for bigger families and ultimately becomes zero for very large ones
40267, Let's create new binary columns for Basement and Garage so we can analyze their distribution as well
1191, Overfitting prevention
3393, Feature Selection with Correlation
13341, Cabin: extracting information from this feature and converting it to numerical values
6674, In ensemble algorithms bagging methods form a class of algorithms which build several instances of a black box estimator on random subsets of the original training set and then aggregate their individual predictions to form a final prediction
17759, Clean Data
36083, Training
31050, Specific String Rows
3153, the error metric RMSE on the log of the sale prices
6125, Comfort houses by the way
33615, How many of each digit were misclassified
6286, I separate the predictor features of the training dataset from the Survived column which is what we want to predict
37766, That method is not recommended because we can easily overlook a variable that should be removed or remove a used variable by mistake
33043, Neural Network using nnet
24508, New distribution
6221, dereferencing numerical and categorical columns
41329, After compressing the 784 pixel features to 50 features we train the t SNE algorithm
1560, Upon first glance the relationship between age and survival appears to be a murky one at best
41341, Detect Delete Outliers
12611, replace missing values by median of Age
16770, A general rule is that the more features you have, the more likely your model will suffer from overfitting, and vice versa
9004, check for duplicate rows if there are any
2109, Moreover we have already transformed some categorical features into ordinal ones
31838, Creating the custom metric
28999, Scaling the data
14932, Roc Curve
15828, Logistic Regression
1732, Plot Embarked against Survived
42780, Turning labels into 1 hot encoding transforms the shape from 1 column to 10 columns
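A quick sketch of that transformation using Keras' to_categorical (assuming TensorFlow's bundled Keras):

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

labels = np.array([3, 0, 9])              # one column of digit labels
one_hot = to_categorical(labels, num_classes=10)
print(one_hot.shape)                      # (3, 10): one column per digit class
```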
4885, Fix data
31798, Real or Spurious Features
31284, ARIMA
4341, Test dataset
33708, LOADING DATASET
8798, combine both train and test data to save our time and energy
34523, visualize one of these new variables
40473, Final Column Drop
42246, Dealing with missing null values
12935, PassengerId 62 and 830 have missing embarked values Both have Passenger class 1 and fare 80
11344, Fare Binning
22253, SVM
23885, Univariate Analysis
8164, This is pretty straightforward, as we one-hot encode the input variables before splitting into training and test sets and scaling each individually
32482, The final piece of the puzzle is an evaluation metric to ascertain the performance of each model
12661, The dataset contains a lot of missing values
42351, Assumption is mostly correct
8659, Instructions
1026, have a look first at the correlation between numerical features and the target SalePrice in order to have a first idea of the connections between features
15343, Let's try using Random Forest. A Random Forest is an ensemble technique capable of performing both regression and classification tasks with the use of multiple decision trees and a technique called Bootstrap Aggregation, commonly known as bagging. The basic idea behind this is to combine multiple decision trees in determining the final output rather than relying on individual decision trees
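A minimal sketch of such a forest on synthetic data (all names are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each tree is trained on a bootstrap sample of the rows (bagging);
# the forest aggregates the trees' votes into the final prediction
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
print(rf.score(X_te, y_te))
```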
16596, Predict the outcome on Test data
31706, Cross Validation
41753, You have some problems with employment, but your external scores and annuity-to-income ratio are good, so we can give you a credit
13211, Our trees are very extensive and our model is probably overfitted; let's create a submission with test predictions and upload it to Kaggle to confirm
28242, Generating Ground truth
23405, Labels and Features are splitted
39879, Bottom right: two outliers
17524, Prepare submission file
20283, Pclass 1 survival chance is around 63%
13077, we deal with categorical data using dummy variables
32414, Convolutional Neural Network
23274, Sex
15916, After adjusting the surnames, group these true families together again
24895, To score higher in Kaggle competitions, the final model is usually derived from multiple previous models
782, Lasso
25018, Our plot function takes a threshold argument which we can use to plot certain structures such as all tissue or only the bones
2130, Tuning Ridge
7, focus on some of the categorical features with fewer missing values and replace them with the most frequently occurring value, which is the mode of that feature
25237, Sklearn Classifier Showdown
19544, Tokenization
37644, Find position and neighbourhood from latitude and longitude
14350, How many Passengers Survived
23832, But note that some features have the same correlations, which could indicate possible duplicate features; let's check them too
2992, Feature Engineering
22947, I also isolate the number value of the Cabin feature
4480, We first create these functions to allow us the ease of computing relevant scores and plotting of plots
28693, The distribution of the target variable is positively skewed, meaning the mode is less than the median, which in turn is less than the mean
42298, Here I followed the ensemble method to train the model and used the Keras Sequential API, where you just add one layer at a time, starting from the input
43046, Data exploration
11639, Extra Trees
21906, Dropping outliers in the highly correlated variables
18967, Display the density of two continuous variables faceted by many categories
16822, Feature Scaling
26705, Plotting Sales Ratio across the 3 categories
19413, As stated in the evaluation page
19515, Creating an Empty RDD
3594, Check accuracy for more alpha values
18244, merge both train and prop
1375, To complete this part I now work on Names
25881, Plotting of meta features vs each target class 0 or 1
3351, Bining Deck feature
31772, Accuracy
11422, Find
693, Now that the convergence is confirmed, let's calculate the confusion matrix of our model
20199, Removing Multicolinearity using VIF
20550, Predictions and Submission
32906, the
17887, Continuous Variables
39673, Specify our network structure, which takes in X and passes it through a number of layers
13889, Passenger s Tickets
32853, New dataset
34642, SVC Classifier
8869, Skewness Check in the Column to be Predicted
20058, Create a model
7208, Lets get to know what our dataset looks like
7571, Drop columns with lots of missing data
24484, Evaluate the model
22773, Declare Hyperparameters
36539, The target as well as the ID Code of a sample are 2 special variables
20272, Convert categorical string values to numeric values
36865, Random Forest Classifier
9211, Survivors by Salutation
25393, For the data augmentation we chose to
37344, Preprocessing
1361, This model uses a decision tree as a predictive model which maps features to conclusions about the target value
20884, Define early stopping callback
43361, Prediction and Evaluation
27012, Neural Networks
802, Final Tree
22776, Training the model
20632, We would also remove embedded special characters from the tweets for example earthquake should be replaced by earthquake
18720, unfreeze our model and run fastai s learning rate finder
30269, RandomForestClassifier performs clearly better
10138, Submission Files
15624, Ticket Type Feature
42863, We prepare Keras Inputs for the different features in the data
10614, Looking at dependencies of values for each Pclass code
11283, The RFECV selector returned only four columns
10798, Again, categories below 45 items are not representative
38195, Grid search is a way to find the best hyper parameters by evaluating all possible combinations of these
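A small illustration with scikit-learn's GridSearchCV (toy data; the grid itself is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# Every combination in the grid (3 x 2 = 6 candidates) is scored with 5-fold CV
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}
grid = GridSearchCV(SVC(), param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```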
603, we continue to examine these initial indications in more detail
42964, Name Feature
16985, Neural Networks using sklearn
4103, Split data train test
12722, Model re training
7003, Low quality finished square feet
6707, Checking missing values by below methods
26079, Function to draw the learning curve from the neural network's training history
13749, Convert Features
16514, Gradient Boosting Classifier
17454, Deep Learning in progress
42049, Replace spaces with underbar
10083, We shall preprocess all the data based on the training data only, not on the basis of the combined data, as that would bias the data towards the test set
12416, Draw a heatmap of the correlations
6587, K-Fold cross validation, a common type of cross validation, is performed by partitioning the original training data set into k equal subsets
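For instance, scikit-learn's cross_val_score performs this partitioning and scoring in one call (toy sketch):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# cv=5 partitions the data into 5 equal subsets; each subset is held out
# once for scoring while the model trains on the remaining 4
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores, scores.mean())
```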
36852, plot confusion matrix
18538, How are numeric features related to SalePrice
817, Plots of relation to target for all numerical features
15820, we don t need Passenger id Name Ticket Cabin Age bin Fare bin
1393, Explore the Data
1061, I tried values of alpha around 0
14730, we can split our training and testing data and fit our model
16549, make sure it s done
4232, Feature reduction
11654, Artificial Neural Network
237, Model and Accuracy
25391, A loss function to measure how good the network is
7001, Total square feet of basement area
32599, The final plot is just a bar chart of the boosting type
8498, LIME Locally Interpretable Model Agnostic Explanations
36428, Consolidation of Data
18747, Selecting
39153, Evaluation
9409, Cleaning encoding categoricals and simple imputation
8265, Checking for any Extra Missing Values
15496, Output Final Predictions
19135, Model 1 input 784 sigmoid 512 sigmoid 128 softmax output 10
9263, Looks linear, but two outliers are very evident
447, Since area related features are very important to determine house prices we add one more feature which is the total area of basement first and second floor areas of each house
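A minimal sketch of the derived feature, assuming the three Ames-style area columns exist in a frame `df`:

```python
import pandas as pd

df = pd.DataFrame({"TotalBsmtSF": [856, 1262],
                   "1stFlrSF":    [856, 1262],
                   "2ndFlrSF":    [854, 0]})

# Total living area: basement plus first and second floors
df["TotalSF"] = df["TotalBsmtSF"] + df["1stFlrSF"] + df["2ndFlrSF"]
```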
18981, Display more than one plot and arrange by row and columns with common x axis
22264, Gender Age Group Probability by Model Benchmark with CV
14655, Dropping Name
5449, Make a function for all features
42073, Each of the models are fitted on the Train set and evaluated using Test
4720, splitting the training data into training and validation set
1996, With 81 features, how could we possibly tell which feature is most related to house prices? Good thing we have a correlation matrix
8602, Handling the null values in both datasets
27470, Word Clouds
20966, Adding Input Layer and Hidden Layers
925, Optimize Ridge
13734, Confusion matrix and accuracy score
3706, KNN Regression
7135, Sex
6741, TotRmsAbvGrd Vs SalePrice
22071, Starting the training
27468, Tokenization
23396, pick an image and cycle through the layers making a prediction and visualising the features outputted by the layer
31570, ToTensor
11251, GradientBoostingRegressor, XGBRegressor, RandomForestRegressor, AdaBoostRegressor, RidgeCV, StackingCVRegressor
35534, concatenate both the training and test datasets into a single dataframe for ease of data cleaning and feature engineering
37107, Code in python
33695, I use the HOBBIES_1_234_CA_3_validation item for a few basic plots
10446, The numerical variables having correlation with SalePrice are dropped
9864, Pclass vs Survived
21143, Having some information about the target we choose 3 interesting numeric variables to analyse whether they are correlated in any way with our target
27031, If we test whether a sample is positive we can get another hypothetical test
32065, Principal Component Analysis PCA
30419, Define tokenizer
24862, Generating the submission file
14708, LOGISTIC REGRESSION
41577, Transfer learning refers to using a pretrained model on some other task for your own task
41130, Prophet Forecasting
38748, we can calculate the correlation
21998, Using PCA
35236, Text
35820, Train model
33460, DayOfWeek Open
24545, Again we exclude the dominant product
31119, Feature selection by xgb
3779, RandomForest
27918, XGBoost
4218, Data Transformation
39120, Perceptron
2377, Vectorize two text columns using ColumnTransformer
14914, Fare
6215, MiscFeature Miscellaneous feature not covered in other categories
13074, Submission
37075, Training Model
14418, Check for NULLs Duplicates
7799, Blend Models
41021, Make WCG predictions and submission to Kaggle
12566, We have 2 NaN values left, so we simply remove the two entries
31059, Positive look-ahead: succeeds if the passed non-consuming expression matches the forthcoming input
6495, linear models perform better than the others
8376, Feature Engineering
13524, Predictions
7808, Tune Models
33746, We can now fit the parameters
29977, Util class
14841, Fare
42867, Run random search over the search space
1643, Explore the size of the data
37015, Top 10 categories by number of product
34255, Test RMSE
4205, Inconsistent types
11543, Before starting our analysis, it is important to separate our data into training and testing sets
16936, The second stupid model is rich woman
41597, Build the model
14736, True Negative Rate
24808, Good, there are no columns like people's names or house names
13975, Create a new feature Family size from the features SibSp and Parch
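One common way to derive this in pandas (toy frame; the +1 counts the passenger themselves):

```python
import pandas as pd

df = pd.DataFrame({"SibSp": [1, 0, 3], "Parch": [0, 0, 2]})

df["FamilySize"] = df["SibSp"] + df["Parch"] + 1   # +1 for the passenger
df["IsAlone"] = (df["FamilySize"] == 1).astype(int)
```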
30890, Find the most complete column, which could be used as a reference to impute the missing values
15081, Logistic Regression
14812, Parch Survived
36887, dense 2
43252, Saving prediction trained with SVM
41410, DAYS EMPLOYED
14992, To check missing values
35423, plotting the training and validation accuracy plots
26542, Create Model
24395, Learning schedule
10987, Again let s check the correlation in our new features
42732, There are 60 combinations with correlation values larger than 0
8849, Instead of separating the data into a validation set we ll use KFolds cross validation
6188, Explanation of Violin Plot
31522, The optimal parameters can be extracted through the model
7859, The Finale Building the Output for Submission
24705, Optimise on metric
22382, Imputting missing values
36132, In this kernel we implement EfficientNetB7 a framework that was released by Google AI in 2019 which achieves much better accuracy and efficiency as compared to the previous Convolutional Neural Networks
20249, Drop the colums that are no longer needed
7492, Check out the survival rate for each port
8276, Box Cox Transformation
5611, Merge By CHAID
36917, How many Survived
20950, Setting network parameters
25780, Survival probability by family size
14161, In addition we can also derive the marital status of each member from the Name feature
28816, EDA
12188, The validation set is 20 of the full training set and it correctly reproduces the proportion of houses in each Neighborhood
20096, Modeling with Prophet
41600, Evaluate accuracy
7435, Randomized Search
17471, DecisionTree
19386, Split dataset
10342, The only missing values that are left are within SalePrice which is exactly the number of lignes in the test data
34972, Final Model
18358, One benefit of using a tree-based model is that it can tell us the best features it uses to split on. In this final section I am going to check whether selecting the top 1/3 of features can help improve the model any further
29694, Final Checking
29475, Featurization through weighted tf idf based word vectors
2303, Categorical vs Continuous
7283, Modelling
42644, Some other basic NLP techniques
14675, Submission Predictions
10084, If there is no garage there is a null value for all garage fields, so let's put NA in all such fields; there is one outlier which we convert, because if there is no garage there should be no GarageArea, which means it may be an artificial error (an error in data collection, etc.)
19278, That was a surprisingly large amount of 2 s
11729, Quadratic Discriminant Analysis
19554, Model Evaluation
5371, Filter Methods
36381, Visualizing Images
20114, Total sales trend
5673, Create a new Fare Category category variable from Fare feature
23483, Custom Preprocesser
493, we ll fill in the missing values in the Age feature
20445, POS CASH balance
41613, let s only take columns that we need for this analysis and put that data into a new table called reorder merged
30915, Embedding
958, I have not yet figured out how to assign and store the feature importances outright
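One possible way to do this (not necessarily the author's) is to key the importances by column name in a pandas Series:

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True, as_frame=True)
rf = RandomForestClassifier(random_state=0).fit(X, y)

# Keying the importances by column name makes them easy to store and sort
importances = pd.Series(rf.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```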
22027, Correlations
5290, Feature Engineering
24560, Distribution of products by segment
22140, 1stFlrSF First Floor square feet
21160, The scores and rank should match the public LB scores as of 31st May 2020 midnight
42527, We filled all the NaN values with the string 'nan'; this string could be anything
32312, Relation between Survival and Port of Embarkation
9627, Ridge
5667, First Name
12605, First check the impact of feature Sex on Survived
29027, Concatenate train test
2524, Voting Classifier
196, XGBoost
17553, First class people have more count of survival
32600, The Bayesian optimization spent many more iterations on the dart boosting type than would be expected from a uniform distribution
23265, Outlier Detection
28861, We wil use a sliding window of 90 days
23477, Since we know that the output should never be negative, replace all negative values with 0
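One way to apply this floor with NumPy (illustrative array):

```python
import numpy as np

preds = np.array([12.3, -0.7, 5.1, -2.0])
preds = np.clip(preds, 0, None)   # floor at 0, leave the upper end open
```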
8815, Filling missing values in Age
21905, The last two values are out of trend and are outliers
28622, MasVnrArea
12201, The normal OneHotEncoding or the standard get_dummies would create a dataset with only 3 columns, and when the model is called an error is caused by the mismatch in shape
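A common workaround (not necessarily the author's) is to reindex the test dummies to the training columns so the shapes match:

```python
import pandas as pd

train = pd.DataFrame({"color": ["red", "green", "blue"]})
test = pd.DataFrame({"color": ["red", "red"]})   # 'green' and 'blue' absent

train_ohe = pd.get_dummies(train)
test_ohe = pd.get_dummies(test)

# Fill the test frame's missing dummy columns with 0 so both shapes agree
test_ohe = test_ohe.reindex(columns=train_ohe.columns, fill_value=0)
```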
11060, Pclass versus survival
41113, The mean median and mode of a normal distribution are equal
19667, Feature Importances
37896, Feature Importance
28411, Generating submission File
34961, Pclass was not included in the passenger df as input to the K Means clustering model
10873, we split back comb2 as we are done pre processing
10039, Extract feature in Cabin variable
18702, pass in the new learner to DatasetFormatter
9773, But the competition's data provider does not give us the correct answers for the prediction
25797, This is more clear if one looks at the plots for the cumulative distributions
27953, Use the next code cell to label encode the data in X_train and X_valid
16823, Drop Already existing Columns
5440, i 3 Split at 3rd element
20858, All
28100, Prediction based on test data
40716, Visualizing the digits by plotting Images
27599, Dual LSTM Layers
25241, Box Cox Transform
29348, The plot confirms that 1st class passengers were more likely to survive than the other classes
35911, Prepare the model
17813, plot the features importance
24879, Fill in the Age for people with missing Age values
32757, Save All Newly Calculated Features
15798, Grouping Fare and creating a new column called FareGroup with their means by Pclass
1745, Embarked is a category variable
18, GridSearchCV Ridge
8493, The library to be used for plotting PDPs is called python partial dependence plot toolbox or simply PDPbox
14546, Survival
20200, Deal With Outliers
30959, Hyperparameter Tuning Implementation
22210, Creating Submission
16609, Separate dataset into train and test
37288, Our threshold was set to
6579, This distribution is left-skewed; to resolve this there are two possible approaches
27055, Make validation predictions and calculate mean absolute error
15298, Perceptrons Model
24231, Prepare generators for training and validation sets
28277, Combining the synthesized data with the actual training data
32693, have a look at the correlation matrix
9250, Encoding the data
1809, Imputing Missing Values
38420, You need to scale a data before train the model
10999, Lets hear what categorical variables tell us
11646, Suport Vector Machine
152, Voting Classifier
16867, Plot tree
4135, Visualizing Feature Correlations
2647, We can fill all the NaN values using fillna
18537, Check for missings
9641, We shall see how they affect the SalePrice through easy yet informative visualisations
43272, Importing the mean squared error function to evaluate our model's performance
22819, Once again using Google Translate, I found that this is an antivirus sold to 522 people, and the price is probably the cost of one installation times 522
7326, Replace the missing value of Cabin with U0
34300, The 28th channel appears to focus on the white coloring around the mole on the right side
4505, Multivariate analysis
31747, RandomVerticalFlip
13202, First let s split titanic dataset into train and test again
9484, Class Prediction Error
33672, Weekday
9202, Family Name Extraction
20972, Make Predictions for test data
4907, Generate the Correlation Matrix
4899, print our optimal hyperparameters set
23693, Create databunch
10237, The above models are giving good scores, but this can be improved; let's try K-Fold techniques to check model accuracy
1245, visualize some of the features in the dataset
26246, Sanity Check on 9 Images
38518, Exploring the selected text column
12719, Round 2 Feature selection
8584, Save the predictions
10092, We use lasso or elastic net in final predictions
41678, lets take a look at some of the resized images of Malignant Tumours
3950, Changing Types
41334, Make final predictions using our decision tree classifier with t SNE features and make a submission for Kaggle to measure our own validation against the public leaderboard
15788, We made an X_test earlier; let's predict values with that
37345, Add first layer
32580, Optimization Algorithm
29630, Logistic Regression
5357, Diplay relationship between 3 variables in bubble and two more variables affecting size and color
7825, GradientBoost
29923, Boosting Type
24037, Forming splits for catboost
24264, Observations here are the survivors
36135, Preparing the data
34029, TEST SET
9467, XGBoost
22506, Converting Dataframe into ndarray
23673, We want to train the autoencoder with as many images as possible
39731, We have two labels for Sex with females having a much higher survival rate
3994, you can use our trained model for forecasts
24747, Now make this data clearer by plotting it in a graph: enter barplot
26947, Removing that data point
4296, sklearn
26943, If the magic number is removed the distribution
33707, Color Gradient Gantt Chart
28352, Checking the Correlation Between The Features for Application Train Dataset
35089, Error Analysis Training Set
2656, drop columns which are not useful to us as of now
5001, Univariate Analysis
16278, Add some regularization
34449, Items Sold
29166, KitchenQual Fill with most frequent class
27167, Before that, we need to convert a few features' datatypes back to int from object, which we had changed at the start
1891, We can also use a GridSearch cross validation to find the optimal parameters for the model we choose to work with and use to predict on our testing set
23533, Discussion and some more examples
9167, I feel like I could merge 3 and 4 into a single category as Above 2
2403, Loading and Inspecting Data
14001, Children and women go first
13799, pandas profiling library
12428, without regex
13582, Dummies
32188, Among RMSprop, Adam, Adagrad, SGD, and Adadelta, the best optimizer for this particular model was Adadelta
5149, Numerical variables
40023, Probably the most interesting information is given by Rows Columns and Pixel Data
18375, Train Test Split
35526, In this part some hyperparameters were tunned and find the optimum ones for model building
42802, Exploration of target variable
28471, When do people usually buy houses
41483, Data Wrangling Summary
37483, Logistic Regression
17390, Pearson Correlation Heatmap
7471, First checking how the data looks like in both files
37618, Categorical Variables
268, Prediction and Accuracy
36765, Building a Convolutional Neural Network using Keras
31399, Evaluate the model
16085, Pclass vs Survival
1697, Detecting missing data visually using Missingno library
21265, In this section we are trying to find the learning rate setting up a pretrained model
27640, Numpy and pandas help with loading in and working with the data
26104, PCA
5152, Feature Scaling
8573, Applying a Function Over All Elements in a Column
3815, Clearly this is not a normal distribution, as the Titanic carried far fewer 10-20 year olds than 20-30 year olds
19328, Model Building Process
21319, Explore file properties 2016 and 2017
28160, Training loop
16493, Features Scaling
34752, Loading embeddings
22236, Feature Enineering
12888, we can plot this since we have extracted a categorical feature from a qualitative one
4469, Dealing with Cabin Missing values
15167, Neural Network
3391, Variable Variable name
34275, I want to check what is the accuracy in predicting the right interest level
41331, Now that the t-SNE features are created, we train a new model with the same hyperparameters
40799, Mean and median outcome group by group 1
2185, Learning curve
12275, Dealing with the NA values in the variables: some are set to 0 and some to the median, based on the txt descriptions
27207, Submission Part
23582, Testing our tuned model on the Test dataset
7141, Pclass
6768, XGBoost
35086, As the best Hyper parameter turns out to be C
29547, Punctuation
30522, Target Variable with respect to Walls Material Fondkappremont House Type
32323, Create final file for submission
42807, best score 0
42654, We determine the indices of the most important features
30255, Have a look at the most important predictors
131, AUC ROC Curve
11415, Use Case 7 Pricing of Books
17455, Test
28887, We ll be using the popular data manipulation framework pandas
16632, Missing values
15027, Young passengers get a higher Survival ratio and old passengers get a lower Survival ratio
41867, BIGRAMS
3458, we construct a random forest classifier object with them
24243, Feature Engineering Bi variate statistical analysis
9099, This could be a useful feature so let s keep it
19432, Check the number of unique values in each column
33499, Albania
30953, Take each model s probability mean as final ensemble prediction
21631, Explore a dataset with profiling
18318, according to google translate top values are
31431, find original Jaccard scores for the same data
11175, We choose the number of eigenvalues to calculate based on the previous chart; 20 looks like a good number, since the chart starts to roll off around 15 and almost hits the max at 20
23733, Feature Engineering
35058, I compiled this model using rmsprop optimization categorical crossentropy for loss measurement and accuracy as metrics measurement
4076, Relationship with numerical variables
23457, Season
27176, Gradient Boosting Regressor
41479, Final Data Preparation for Machine Learning
18525, In order to avoid the overfitting problem, we need to artificially expand our handwritten digit dataset
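A typical Keras sketch of such artificial expansion (random stand-in data; the transform ranges are illustrative):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

X = np.random.rand(32, 28, 28, 1)            # stand-in for MNIST digit images
y = np.random.randint(0, 10, 32)

# Small rotations, zooms and shifts produce plausible new digit variants
datagen = ImageDataGenerator(rotation_range=10, zoom_range=0.1,
                             width_shift_range=0.1, height_shift_range=0.1)
batches = datagen.flow(X, y, batch_size=16)  # yields perturbed copies endlessly
```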
8288, Stacking
35490, Visualizing training Curve
18755, Each CT Scan consists of multiple 2D slices which are provided in a DICOM format
29796, Building Skipgram WordVectors using gensim
28808, UCM
6278, Family Size
6354, Log transform skewed numeric features
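A common recipe for this, sketched on a toy frame (the 0.75 skew cutoff is an assumption, a frequently used threshold):

```python
import numpy as np
import pandas as pd
from scipy.stats import skew

df = pd.DataFrame({"LotArea": [8450, 9600, 11250, 215000],
                   "GrLivArea": [1710, 1262, 1786, 5642]})

skewness = df.apply(lambda s: skew(s.dropna()))
skewed_cols = skewness[skewness.abs() > 0.75].index
df[skewed_cols] = np.log1p(df[skewed_cols])   # log(1 + x) tolerates zeros
```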
40486, Extremely Randomized Trees
33793, The distribution looks to be much more in line with what we would expect and we also have created a new column to tell the model that these values were originally anomalous
646, investigate the Fare affair in more detail
35469, Visualize the skin cancer lichenoid keratosis
22854, Add month feature
21802, Model 3 Transform skewed data
27664, Predicting the Outputs
4541, scikit learn
6772, load train test dataset using Pandas
20583, Dropping unnecessary columns
23430, Data Cleaning
15586, This number looks more like something I could believe
36736, Random Forest Model
17857, Create the Out of Fold Predictions
5157, scaling dataset
17693, CONVERTING CATEGORICAL VALUES INTO NUMERICAL VALUES
17650, Gradient Boosting Decision Tree
24167, Build Model
40087, General Feature Preprocessing
15833, Split train and test
15864, Submission
13451, approximately similar to 79
24530, Total number of products by age
17962, pandas
6920, SUBMISSION
3874, How hot is a recently remodelled house
20608, Ticket
33873, Random Forest Regressor
42407, Visualize sample rows of the submission predictions
42888, Laboratory
37317, Data preprocessing
35470, Visualize the skin cancer solar lentigo
26416, The Titanic made three stops before it went on to cross the Atlantic: Southampton, Cherbourg, and Queenstown
1233, Pairplot for the most intresting parameters
18373, Create Dummy Variables
15239, Using sns
11353, Check relationship between numerical variables and Sale Price
20251, Machine Learning
16500, Gaussian Naive Bayes
29138, Correlation plots
43313, We can bin the Age and Fare
12166, Min sample leaf
31428, And apply the formula
21911, Missing Data
22635, Test Function for Modeling calculate RMSE for various tests
5970, I train the data with the following models
8257, Creating a Visualization of every feature with missing values
19172, Standardize
19533, Performing Joins on RDDs
11371, Import libraries
33352, Timeseries autocorrelation and partial autocorrelation plots weekly sales
15058, P.S. The old man with the NaN fare: what was his fate?
26890, Create Submission File for approach 5
24027, Since the amount of opened and closed shops differs over time, to separate seasonal effects I adjust item sales by the amount and size of shops that are open at any given point
22536, Confusion Matrix
37803, Random Sample Consensus RANSAC
11970, Categorical Features
32993, PCA
15491, Grid search
31938, set up a validation set to check the performance of the CNN
96, Facetgrid is a great way to visualize multiple variables and their relationships at once
8340, it is time to have fun with the stacked emsembling model
10769, Formatting for submission
7455, Missing values in Embarked column
34247, Normalize Data
7878, A higher mean fare improves the chance of survival
27446, This is very interesting: notice how 1, 2 and 3 are repeated twice in the platform field, once as floats and once as strings
13543, Summary of df test
28059, Similarly the first class passengers are more likely to survive than the second class
17885, Lets look at the data
38251, Submission
37354, Cabin vs Survived
17272, K Nearest Neighbour
42150, Encoder
29139, Correlation of integer features
10097, Divide the data into X and Y
19099, And that s the basics of Support Vector Machines
11446, Replace missing data for Embarked
7706, we fit the models on the test data and then submit it to the competition
4375, Something Interesting
31072, DATA CLEANING
28256, We shall look at distribution of target variable later
7036, The building class
22607, Producing lags brings a lot of nulls
3211, At last we have a lot of features regarding the outside porch in all its types
8239, Visualizing a Tree
23670, Holt linear
28228, Below we generate an image that contains only pixels drawn from a uniform distribution and compare it with the autoencoder-generated image
23458, Weather
17692, SURVIVAL vs INITIALS
42468, Creating interaction variables
28247, Visualization
20494, Train Image Embeddings
18750, Grouping and Aggregating
42397, How do SNAP days affect sales
14447, go to top of section engr2
40826, Some of the inferences that can be made
38935, By mean of both Random forest and Xgboost
35946, ADABoost
11456, The Cabin feature needs further investigation, but it looks like we might want to drop it from the dataset, since 77% of it is missing
24537, Number of products by customer s birth country in relation to the bank country
3455, Here s a correlation heatmap for the predictors in the Deck training set
15346, SUBMISSION FILE choosing Gradient Boosting
12638, Using the Mean for Age
16705, replace by ordinal values
32417, Parameters weights of the Convolutional Layer
8663, Instructions
3789, SalePrice
43295, Testing without the Day column
16615, Feature Fare
42951, SibSp Feature
36414, Box plots of continuous variables to find outliers
43033, Normalizing the Data
2280, Conclusion
31191, Using Cross validation
31319, Applying Log Transformation of the Target Variable
35602, Running
20581, Checking for missing values
39816, Clearly 255 is the range so we now use this value to scale our data
12245, Array math
35584, ML Model
37535, Importing Pytorch Libraries
33605, Check the frequency of each digit in our training data and how they are distributed
31730, Patients
31647, MODEL 3 Conv1D
37076, Cross validation Evaluating estimator performance
1424, Decision Tree Classifier
9696, check for the missing values first
21814, Longitude and Latitude
23978, switching to 224x224 size which is usually used for ResNet 50
28299, Apply model to validation again
14603, Random Forst Classifier
170, Fit the model to our train set. We use scikit-learn, our favourite library when it comes to machine learning, since it includes every standard regression, classification, and clustering algorithm. Deep learning is a different animal; if you are interested in this area, stay tuned
20821, As a structured data problem we necessarily have to go through all the cleaning and feature engineering even though we re using a neural network
34245, Plot The TS
6573, Sex
40041, Creating a hold out dataset
27746, Submission
36731, Save submission file
25648, Imputation
3803, Lasso
36122, Our numerical columns are set; now it's time to encode the categorical features
34516, Previous Applications
15978, Cabin Feature
35833, For handling the numerical missing values, we use the feature's mean
21726, Model evaluation
12366, Exterior2nd Exterior covering on house if more than one material
10964, Basement garage fireplaces and other features
31566, take a look at the dataset
3828, concate both train and test data
26053, The evaluation loop is similar to the training loop
15741, We have to focus on the categorical features, because ML algorithms cannot use them as-is: Pclass, Sex, Embarked, Title
8327, Essentially, object is string. There needs to be a way to convert strings to float or int, and this is where LabelEncoder kicks in
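A minimal LabelEncoder sketch (illustrative strings):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
encoded = le.fit_transform(["male", "female", "female", "male"])
print(encoded)      # [1 0 0 1]: each string mapped to an integer code
print(le.classes_)  # ['female' 'male']: the learned mapping
```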
43406, There are several commented out maximize lines that could be worth exploring The exact combination of parameters determines exploitation vs exploration It is tough to know which would work better without actually trying though in my hands exploitation with expected improvement usually works the best That s what the XGB BO maximize line below is specifying
20215, Reshaping
34406, Preprocessing
18992, Optional Dump assets to disk
18307, first item appearance feature
42041, Insert new column at a specific location
19436, Split into train test csv with highly correlated variables float removed
2123, These are 32768 different configurations to test on 5 folds
39267, 1st order features SPATIAL DISTRIBUTION
1349, Converting categorical feature to numeric
22594, Monthly sales
36524, Train Test Split
18635, From 2011, the number of customers becoming the first holder of a contract in the second six months is much higher than in the first six months of a calendar year, and it holds across all years after that; looks interesting to me from a business standpoint
30377, Model Evaluation
21848, Understanding the Model
16910, fillna Fare
24787, Text Data
12506, The best eta value in this case has emerged; add it to the parameters. Since this is the last parameter we're going to tune, we can also pay attention to the number of rounds the model took. If we don't want to use early stopping, we can set the number of rounds manually; essentially, num_boosting_rounds is the final parameter to tune, since it changes so much based on the other parameters that it's key to tune it last
11667, Feature selection
31939, Run the model using the train and validation datasets and capture histories to visualise the performance
39415, Where do we have NaN values
13328, let s store in a new binary column the rows where the Age value is missing
8418, Neighborhood
32071, Projections Based
8481, Gradient Boosting Regressor
2996, Fixing Skewness
6389, To check whether female passengers were more likely to survive we conduct
2099, Or of ExterQual
16238, The third classification method is Linear Support Vector Classifier
7297, Univariate Analysis
7288, KNN
14134, As the fare range increases the changes of survival increases
13082, Looking at out best features
20690, MLP for Regression
10159, Bar Plots
11336, Outlier Treatment
27982, skewness and kurtosis
15131, Name
16973, Great, we dropped all the outliers from the train dataset; now we have to locate those outliers and drop them from the target (Survived) to keep the train and target sets the same shape
8040, Similarly, for the test data we perform the same action
1356, We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals
20661, FILLING THE REMAINING MISSING DATA
41213, To start, first we log the train KernelNB as input data
36519, Ticket
8927, FireplaceQu
7805, Stack Models
19067, Load the sample submisson
6336, Before cleaning the data, we zoom in on the features with missing values; those missing values won't be treated equally
4595, SUMMARY For garage features we will
21587, Keep track of where your data is coming when you are using multiple sources
42629, Line plot with Date and Fatalities
6837, Dealing with the Skewness
3628, Features Porch Bathroom Square Footage
14261, Predictive Modeling
30978, we must slightly modify random search and grid search to write to this file every time
16587, Let's first check where the 1st class passengers came from
6210, Naive Bayes
1309, Observations
30914, Tokenize
25411, CROSS VALIDATION
11363, Functional data description says NA means typical
42613, begin by reading in the training data and taking a look at some sample images
33652, Split features and targets from the data
9429, Color the points using a second categorical variable Sex
13430, Determining the missing data
25649, Run the next code cell without changes to obtain the MAE for this approach
28297, look at best saved models
43265, Train the decision tree on the training data and have it tell us the answers
4170, BoxCox transformation
11682, you ll make predictions on your test set create a new column Survived and store your predictions in it
30633, Relationship between sex and survival rate
2448, Price Segments
17645, k Nearest Neighbors
12271, Linear Regression
32656, Visualization of distribution and correlation with the dependent variable BEFORE normalization
41360, Fireplace quality and sale price are correlated
41752, The predicted value is low, so we can give credit here
25377, In order to increase the training data, the original digits are transformed to reproduce the variations occurring when someone writes a digit
8688, INFERENCES
31299, Train Our Model With Cats Dogs Train splitted Data Set
28964, Continuous Variables
2425, Linear Regression Model
844, Elastic Net
26818, check the distribution of the mean of values per columns in the train and test datasets
29338, do one last ensemble using XG boost
33589, Test Time Augumentation
35817, Extra interaction features
38482, Convert data to a tensorflow dataset font
41370, All houses have all the utilities so this feature is useless for us
5514, SVM Classifier
18584, First look at cat pictures
1948, SalePrice Correlation
27264, Calculate priori probability P and P
4033, replace the skewed SalePrice column with the normalized log10 data which be used for the regression prediction model
7257, Data Analysis
108, title
11087, Missing Data
38701, Mean Color
6745, Box Plot
22227, Fully connected (tam bağlantı)
3501, As with the random forest model a gradient boosted classifier can provide us with information about variable importance
9118, Continuous Variables Distribution
40715, Frequency plot for the validation set
13395, There are 4 variables that need to be categorical encoded
34610, Prediction using XGB
15699, Embarkation point vs Survived
11436, Categorical Variables Get dummies
33709, FEATURE Passenger
38724, that we have changed the generator and discriminator let s compile them into a single model
17019, Cabin
22420, for the main event
20463, Education type of the client
41059, Training
40263, Full Bathrooms
31315, Fill Promo2SinceWeek with 0
8517, Test Codes
7657, For the other model parameters I referred to this
7850, First things first A Random Tree Regressor
1211, Quiring the data
28704, The new training dataset is 438 rows of predictions from the 8 algorithms we decided to use
8352, Class and age distribution
36821, We need to remove punctuation, lowercase the text, remove stop words, and stem words. Most of these steps can be performed directly by CountVectorizer if we pass the right parameter values. We can do this as follows
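A sketch of those options (one caveat: stemming is not built into CountVectorizer and would need a custom analyzer or preprocessor):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["The dogs are Running!", "A dog runs."]

# lowercase and stop-word removal are built-in; punctuation is dropped by
# the default token pattern; stemming would require a custom analyzer
vec = CountVectorizer(lowercase=True, stop_words="english")
X = vec.fit_transform(docs)
print(vec.get_feature_names_out())
```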
26285, Initialize the model s parameters
10666, K means Clustering
18094, The metric that is used for this competition is Quadratic Weighted Kappa
11901, Validation
35141, Initial CNN model
23, RandomizedSearchCV SVR
35792, Plot prediction vs actual for train for one last verification of the model
20111, Item count mean by month city item for 1 2 3 6 12 lag
4706, 3 of these models have selected features
32097, How to generate custom sequences in numpy without hardcoding
32505, Pre processing the features
1665, Good no missing values for Age now
2507, Fare Range
11433, Intersection of biOut and uniOut
16240, The fifth classification method is RandomForestClassifier
9583, Printing emojis with Python
25793, use One Hot Encoding for Categorical data
5427, Basement
28795, Modelling the Problem as NER
21382, In order to make the optimizer converge faster and closest to the global minimum of the loss function
30407, Mislabeled data
30414, Train and predict
17979, Our dataset is almost ready
27163, We have four year features in our dataset
21653, Merging datasets and check uniqueness
2405, Since there are so many columns to work with let s inspect the correlations to get a better idea of which columns correlate the most with the Sale Price of the house
19394, Organize data for visualization
16877, Embarked apparently affects survival rate but is it really
6417, SalePrice looks good now lets handle other numeric variables
30874, Predictions on test set
25860, Linear Model SGD Classifier
30252, Preset xgb parameters
33142, Don't forget that the latest model might not be the best; instead, the best is the one we saved as model
3792, Scatter Plot
13466, Exploration of Fare
17763, Load packages
19176, LGBM
12267, EDA
25269, Model Architecture Design
9762, Dataset Cleanup and Spliting
10632, SVC
15575, Feature preparation
25298, It is not finding the tips but the points with the greatest distance from the center; look at leaf 19
11813, after gaining a more in depth view of Sale Price we can conclude that
9733, I define an SVR classifier with parameters that have been tuned separately
6393, NULL VALUES
36789, Normalizing Text
11350, Output Submission
18432, Gradient Boosted Trees
13377, Remove redundant features
7873, introducing new features as Family size
25880, Creating Meta Features
25199, Data Augmentation
21932, Save cleaned datas
29595, Next up is the learning rate finder
35473, Visualize the skin cancer category 'unknown'
38290, Dealing with nulls in LotFrontage
10696, Embarked processing
9821, Each passenger Name value contains the title of the passenger which we can extract and discover
40696, NOW LET S TUNE A BIT
30621, According to Titanic Facts
41460, Plot survival rate by Sex and Pclass
9755, FamilySize
27925, Observe the reduction of the RMSE value; the iterations stop when there is no longer an improvement
13936, This feature is from this kernel
42556, We use the best estimator found by cross validation and retrain it using the best hyper parameters on the whole training set
29961, Combinations of TTA
42093, Importing necessary libraries
20393, Support Vector Machine Model
7645, GarageCars and GarageArea are highly correlated variables which makes sense since the number of cars that fit into the garage is proportional to the garage area
8299, AdaBoost Adaptive Boosting
20461, Ocupation of client
36138, check out the scores of KNeighborsClassifier and RandomForestClassifier without hyperparameter tuning
28340, Analysis based on FLAG OWN CAR
23044, we can do the same thing in item category
11078, Second Level Dataframe
32253, Library
5397, Here I look deeper to make sure, as there might be some special guests with special titles
24340, Aside from normal stacking, I also add the get_oof method, because later I'll combine features generated from stacking with the original features
1638, I m using ElasticNetCV estimator to choose best alpha and l1 ratio for my Elastic Net model
3916, Transform Categorical to numerical
11876, Dummies
940, Convert Formats
8343, Alright so far we have three models
33478, And finally to obtain the evolution of the disease we simply define the initial conditions and call the rk4 method
12389, Here in order to make the machine learning model I have taken the threshold to be 0
37815, Do people go for lengthy tweets in case of disaster
13051, we have the training and test datasets available and we can start training the model
5915, Testing on test data
17673, People who have their rooms on the right of the titanic have more chance to survive
16214, Importing Libraries
12272, Comparison of the all feature importance diagrams
27386, testing the impact
18333, Neighborhood
14444, go to top of section engr2
17006, Missing values Fare and Embarked
29165, Functional Fill with Typ
5280, Permutable Feature Importance
1275, so there are some misspelled Initials like Mlle or Mme that stand for Miss
13149, explore a IsAlone column created from FamilySize
12765, it is time to design the ML Pipeline
7475, 3a Sex
35238, We would know the proportion between text and selected text
22384, Prepraring data for prediction
1773, Dropping Useless Features
30469, Tune the parameters of a VotingClassifier
6867, Dealing with 2nd floor
213, Model and Accuracy
40470, One Hot Encoding
27061, It's clear that the rest of the texts are hidden because of the text length; I put the column in a list to check the first 5 texts
21188, How does logerror and abs vary with time
4298, Submit Predictions
6971, Random Forest
36652, Normalizing image arrays between 0 and 1 is very beneficial when training deep learning models because it helps the models converge and train faster
34045, Engineer a feature to indicate whether the crime was commited by day or by night
5364, Diplay charts with slider option
34730, Importing necessary modules
13032, Additional analysis
5174, Support Vector Machines are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference: Wikipedia
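A minimal SVC sketch on synthetic data (parameters are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# By default SVC is non-probabilistic; pass probability=True if
# calibrated class probabilities are needed
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.predict(X[:5]))
```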
4740, Missing value of each rows percentage
36372, Fit the Model
3791, SalePrice Distribution
18290, I also apply padding mostly to store X as an array
19949, There are 17 titles in the dataset; most of them are very rare, and we can group them into 4 categories
19599, Number of days in a month
25044, Department Distribution
3528, Box plot OverallQual
14905, Embarked Pclass and Fare vs Survived
31623, dumb classifier
38623, Creating Callbacks
15603, Create simple model
15306, Training Random Forest Model Again
2345, XGBoost eXtreme Gradient Boosting
36944, Target Encoding
23179, Model Selection
41999, Sorting columns multi columns in ascending order
17700, RANDOM FOREST
40311, Main part load train pred and blend
24495, Basic train test split at 20 test data 80 training data
7383, I merge two datasets based on WikiId
35178, Conclusion: this case is similar to batch normalization. The general behavior looks better when MaxPooling is not replaced by convolutional layers, but the long-term tendency is better for convolutions with strides. Given that our objective is to increase the final CNN accuracy, I decided to keep this replacement, which I verified to be better through submissions
965, There have been quite a few articles and Kaggle competition winner stories about the merits of having trained models that are more uncorrelated with one another producing better scores
10749, Combine train features and test features and create dummy variables for character features before runnning machine learning models
22421, I'll build a function that converts month-to-month differences into a meaningful label
8743, Label Encoding
34819, Label Encoding
23213, We ve made our submissions using different ensembles now compare their submission scores with our best base models submission scores
9594, Statistic charasteristic of only one feature Age
38667, K Nearest Neighbours
37292, Basic EDA
39094, The Flesch Reading Ease formula
33300, Correlation Matrix
32979, Features standarization
24056, Feature Generation
11968, we can remove PoolQC MiscFeature Alley Fence as these features contain more than 80 of null values
16917, Final data
7092, TPP 1 means highest probability to survive and TPP 3 means the lowest
24464, Lets begin remembering how GANs work
16627, Build Final Model Using Best Threshold
19520, Using Range to Create RDD
24943, Q Q plot after taking the logarithm
12882, It looks like most passengers were alone and passengers who were alone typically had lower survival rates
11708, Check the work
9693, Converting some numeric variables to categorical
39186, N mero de crimes por lacalidade
13226, GridSearchCV for XGBoost
24479, Training for 30 epochs this strategy achieves a validation score of 0
22755, We achieve this by setting gamma
23375, Visualise images
1065, ENSEMBLE METHODS
33026, Training the model
28328, Exmaine the installments payments dataset
29179, Look how much features are correlated by using the Correlation Plot
29456, submit your prediction of this model with
42783, Choose model architecture and compile
5094, Once the data is cleaned we can proceed to the cake in our party e
4971, Age and Fare bins
12257, Besides MSE when we only have one feature we can visually inspect the model
15088, K Nearest Neighbor
10585, Evaluating ROC metric
12157, Split the data to train and test sets
40078, Linear Regression
14520, Observations
10897, Label Encoding
1273, Feature Age 3 2 3
23266, Pclass
6946, check the quality of clustering with dendrogram and silhouette plot
38298, There are no null values present either
20948, One Hot Encoding
25454, Preparing data and generating their labels
43058, The next 100 values are displayed in the following cell; press Output to display the plots
9381, check any missing value present in TotalBsmtSF feature
19174, PCA unused
37212, Functions Compare Training and Embedding Vocabulary
34789, Clearly weekends and weekdays follow different patterns
12526, now we add the categorial missing values manually
12372, Verifying if all null values are removed
1849, Collinearity
37642, Log of Price as it is right skewed
30567, for the new variable we just made the number of previous loans at other institutions
30963, The boosting type and is unbalance domains are pretty simple because these are categorical variables
5022, Partition
34924, Text preprocessing with NLTK
29038, Testing set accuracy
6776, SibSp of siblings and spouse
19884, Date Related Features
6000, We create a function to perform imputation and at the same time drop the features causing multicollinearity
24434, Feature matrix and target
34422, we do a bigram analysis over the tweets
14268, K Nearest Neighbours score as we change the n_neighbors attribute
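One way to sweep k, sketched with cross-validation on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, random_state=0)

for k in range(1, 11):
    score = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
    print(k, round(score, 3))
```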
8241, Correlation Matrix
20742, GarageYrBlt column
57, DataSet Library Loading
4791, Base on the definition of these features give in the data description we impute the rest of the values to NA or 0 which signifies absence of that feature
32226, Great let s take another peek at our data
42566, define the neural network
33037, Random Forest Modeling
11791, Embarked Title
3022, We use the scipy function boxcox1p which computes the Box Cox transformation
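A small sketch of the call (the lambda value 0.15 is illustrative):

```python
import numpy as np
from scipy.special import boxcox1p

x = np.array([0.0, 1.0, 10.0, 100.0])
lam = 0.15
y = boxcox1p(x, lam)   # ((1 + x)**lam - 1) / lam, reducing to log1p(x) at lam = 0
```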
9645, Seems fine
34097, look into the spread of this COVID 19 disease over the period of time
31028, Year font
42806, Gradient boosted tree was tuned with randomized cross validation
13686, I have used method to get the StringMethods object over the Series and to split the strings around the separators
37548, Optimizer
4691, Non ordered
25174, Removing Uncleaned Questions
36894, from validation data
1009, it is the size of each family aboard
10305, We're up to 87% of the variance now, but the adjusted R-squared is not meaningful
4515, Categorical Values
30642, One hot encoding
7490, We want to combine SibSp and Parch into one variable
34631, Now let s find the outliers
21144, In this case let s use most common method correlation amtrix which presents the linear behaviour strength between factors
8938, Feature Importances
18358, I am going to train my model with the top 100 features derived from regression tree
22858, Training
11832, Lets find out all the categorical columns in our data
23727, Visualization
1836, Basement Finish Types
11648, Decision Tree
38891, Activation Function
23476, Predicting the output over test set
40967, A simple neural network; you can try your hand at changing the architecture
2873, Preprocessing
34453, CA 2
21836, Split Training Data into target sale price and explanatory vars
8483, Bayesian Ridge Regression
30372, Label Smoothing
20789, Find the highly correlated
5450, Test it with a different data set Single Decision Tree
36223, Final predictions and submission
11426, I delete MasVnrType, MasVnrArea, Electrical
28599, BldgType
31825, For ease of visualization let s create a small unbalanced sample dataset using the method
27341, Adam Optimizer it converges more efficiently
39059, Now that our model building is done, it might be a good idea to clean up some memory before we go to the next step
13046, Most of these tickets belong to category 1 2 3 S P C
16651, Age
24897, Bruh this is seriously unexpected
38842, After 2000, house construction peaked
34668, Seasonal decomposition
42213, add in the second convolutional layer same as before
35734, Since area related features are very important to determine house prices we add one more feature which is the total area of basement first and second floor areas of each house
42361, kfolds for cross validation
38510, Sentence length analysis
38508, Analyzing Text Statistics
6818, A skewness of zero or near zero indicates a symmetric distribution
146, Bagging Classifier is the ensemble method that involves manipulating the training set by resampling and running algorithms on it
18885, Check that we do not have any more missing values and proceed with feature engineering
15425, let s explore the family size as a potential feature
36281, encode with OneHotEncoder technique
16460, Lets make bins for age and fare also and visualize them
24276, Decision tree
21341, Loading Library and Dataset
26348, It s time to leave
14270, Random Forests
13765, Voting Classifier
7084, We can also check the relationship between Age and Survived
22058, How is len(selected_text) related to len(text) per sentiment
2539, With tf
1100, Create family size and category for family size
34841, Data Augmentation
30375, Adamax
41809, Build training model
34652, Retrieving cities from the shop names
42887, Testings
31015, We drop NA values
27595, Now we could just feed our model our word embeddings but we could also add the features we added during our initial exploration to improve performance
38678, Scaling Image Size
31281, Moving average
14955, Use only the most common titles
20662, Categorial data
15735, Create the Output File for Submission
9629, Xgboost
32841, The Model
24987, Features with low variance do a bad job of prediction
42020, Creating a new DataFrame with existing rows that match multiple conditions
40847, I would like to write 3 functions for different Plotly plots
13173, Let's take a look into our Ticket variable
16486, Data Visualisation for Data Preprocessing
40064, Data Processing Dependant Variables Filter Outliers
10609, html
15245, Correlation
34091, Manager Id
2176, The hypothesis regarding Embarked is that it doesn t influence the chances of survival
34165, Again we keep grouping
42767, SibSp Parch
42942, Defining a function for the augmentation procedure
41443, Below are the 10 tokens with the highest tf-idf scores, which include words specific enough that by looking at them we could guess the categories they belong to
21136, Ordering is finished
13680, We notice that Fare for Survived 0 is heavily skewed towards the lower end
42450, Below the number of variables per role and level are displayed
10275, Whoops! It didn't really reduce any from the ones I selected
9197, Embark
2686, Forward feature selection for Regression
19175, Setting up Dataframes
6572, Create new feature using title of Name
31810, Classifier layer
21643, Create DataFrames for testing
9959, Predictions
39133, Neural Network
37362, Family vs Survived
41368, Sale Price IR3 IR2 IR1 Reg
8507, Check for Regression Assumptions
15572, Ticket group size feature
14633, First let s import our models and helper functions
17772, Go to top font
17805, Family Size is then mapped to only 3 sizes, the first corresponding to the case when someone is alone
16490, I tried to fill the missing values based on my age columns for both datasets
43359, Training Test Split
17670, The function here allows creating age and sex classes and drops features that do not help me predict on the test dataset
18889, One hot encoding so that algorithm can understand
39072, A single line of code to set things up
35194, Yeah it is very skewed indeed
6585, We can calculate the coefficients of the features to validate our decision for feature creation
28414, Train Predict Models
28575, There are a large number of data points with this feature equal to 0
19068, We can delete the target column as we will be populating this with the probabilities
32762, We use grayscale normalization; the model works faster if the data lies in the interval [0,1] instead of [0,255]
39973, Prediction
3307, Use Ridge to find outliers
12891, To make it easier to plot I'll bin the fare based on the median values of fare at each Pclass
10876, Predicting and creating a submission file
40635, Drop location feature
12080, Overview the data
32396, Here we train our model; it takes a while, but at the end we'll have a strong model to make predictions
17574, Grid SearchCV
20163, Fitting train data and finding a score for test data to check model performance
9865, We want to make a table to understand the class effect on survival; for this we use the groupby method
23633, Now that our model building is done, it might be a good idea to clean up some memory before we go to the next step
29398, FIRST SET OF FEATURES
40165, We are finished with adding new variables to the data so now we can check the overall correlations by plotting the seaborn heatmap
5071, The Linear Regressor predicts negative values for SalePrice, and that crashes cross-validation scoring with our metric (R2 scoring would still work)
25181, Building a random model Finding worst case log loss
21402, Submission
42301, Here, among all the models, the best performing model is extracted to perform the prediction
23028, The Event Day flag and Sell Price don't have a very strong relationship
1219, Fill numerical data 0 or median
26383, implementation of the Backprop algorithm
3687, Prepare Data for Modelling
2493, Embarked Categorical Value
17706, EXTRACTING THE INITIALS OF THE PASSENGERS CONVERTING CATEGORICAL DATA TO NUMERICAL DATA
7773, Bagging
35823, Pandas
17649, Random Forest
5247, Separating Train and Test Sets Again
13540, Importing Datasets
40425, Let's identify the response column and save the column name as y
17679, It's time to predict
2920, Spot check Algorithms
23627, Here we explore two methods that are very simple to use and can give some good insights about model predictions These are
14921, Pearson s Residual Test
3505, Do our grid search for good parameters
4916, Handling Missing Values in Numerical Data
20431, Helper Functions
8731, Missing Data
27276, It's too slow; we can speed it up by binning the variable values
26430, Here I use seven fare categories
20234, Ticket
21654, Rename all columns with the same pattern
4394, Data Preperation Cleaning and Visualization
6549, Embarked Port of Embarkation
36745, Since the daysBeforeEvent feature is used as input for predicting after the model is trained, we separate the 28 days as daysBeforeEventTest
10906, Grid Search for Adaboost
38109, Converting string into numbers and imputing missing values
7440, Find all the numeric columns and replace the NaN values with 0
13728, Extracting and handling Title simultaneously dropping Name
36975, Permutation Importance
18737, Tokenization
2246, GradientBoostingRegressor
11398, We have filled in all our missing data. We deem Cabin a lost cause at this point and we won't investigate the Ticket column, so let's drop those here
3800, Simplified features
33240, Lets train our language model
40332, Encode tags
25271, Keras Callback Funcations
37400, we want to one hot encode the dataframes
22960, Examples of hair augmentation with OpenCV
4452, Done
9968, Removing Outliers
31361, Printing all the outputs of a cell
7768, Linear SVM Regression
42615, Setting up the network
26333, Data cleaning: in summary, we want to tokenize our text, then send it through a round of cleaning where we turn all characters to lower case and remove brackets, URLs, HTML tags, punctuation, numbers, etc. We'll also remove emojis from the text and remove common stopwords. This is a vital step in the bag-of-words linear model
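A minimal sketch of such a cleaning round follows; the regexes and the NLTK stopword list are illustrative approximations of the steps described, and emoji removal is left out for brevity.

```python
# A minimal sketch of the cleaning round described above; the regexes are
# illustrative, not this kernel's exact rules.
import re
import string
from nltk.corpus import stopwords  # assumes the NLTK stopwords corpus is available

STOPWORDS = set(stopwords.words("english"))

def clean_text(text):
    text = text.lower()
    text = re.sub(r"\[.*?\]", "", text)                 # bracketed text
    text = re.sub(r"https?://\S+|www\.\S+", "", text)   # URLs
    text = re.sub(r"<.*?>", "", text)                   # HTML tags
    text = re.sub(f"[{re.escape(string.punctuation)}]", "", text)  # punctuation
    text = re.sub(r"\d+", "", text)                     # numbers
    return " ".join(w for w in text.split() if w not in STOPWORDS)
```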
13155, Dropping features
25773, Sex
11274, One common way to engineer a feature is using a technique called binning Binning is when you take a continuous feature like the fare a passenger paid for their ticket and separate it out into several ranges or bins turning it into a categorical variable
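As a minimal illustration of that binning idea with pandas (assuming a DataFrame `df` with a `Fare` column; the quartile split and labels are illustrative):

```python
# A minimal sketch of binning a continuous Fare column into a categorical one;
# the number of bins and labels are illustrative.
import pandas as pd

df["FareBin"] = pd.qcut(df["Fare"], q=4, labels=["low", "mid", "high", "very_high"])
```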
37887, Evaluation
28323, Identifying the categorical and numerical variables
38108, Final Submission
40630, Reading Data
32182, Define Loss
9264, Removed 2 outliers
12414, YearBuilt
3376, Create your model
11884, Feature Engineering
40774, Helper Functions For Making Predictions
33292, Embarked Processor
168, We use a countplot which is a histogram in disguise across categorical variables instead of numerical ones
26891, Score for A5 16004
20575, Checking the missing values
4814, We both have numerical and categorical features
34398, Time Only Model
41933, We use the Leaky ReLU activation for the discriminator
43060, Check the distribution of the mean values per column in the train and test set
9679, Lower the learning rate
6656, Feature Importance by ExtraTreeClassifier
22128, The hyperparameters that can be tuned in the XGBoost model are
20403, For each cluster s centroid embedding collect the 20 nearest word embeddings
11644, Logistic Regression
38721, DCGANs are extremely similar to FCGANs
6128, Functional
21727, The top 20 categories account for 87% of the whole
40678, Let's try K = 36
5169, My upgrade: creating new features
36424, Ordinal Features
11216, Get Pre adjustment score for comparison
30417, Define PyTorch Dataset
40614, It took a while; now it's time to look at what goodness it gives us
29932, First up are reg_alpha and reg_lambda
10508, SibSp
27562, ps ind 03 x ps ind 06 09 bin
5919, Submission file
16110, create a new feature named IsAlone
34854, Oversampling
5065, step is to create new features by combining existing ones
13552, Crossing Embarked by Sex and Survived
39292, Export aggregated dataset
16562, Sex Pclass and Embarked
2247, StackingRegressor
9524, Submission
31221, MLP Evaluation
9031, If there is no Kitchen Quality value then set it to the average kitchen quality value
2682, Univariate roc auc for Classification
38015, that the model is fitted we can check it s most important features with the power of ELI5
11881, Data
28933, Fitting model with data
24785, Retrieve everything
22455, Density plot
3949, Imputing LotFrontage with median based on Neighborhood
24270, Observation
4827, There are a couple of them where a missing value indicates None, so we can just write a for loop for these columns
2155, One of my favourite definitions of startup belongs to Eric Ries: a startup is a human institution designed to create a new product or service under conditions of extreme uncertainty
1274, Filling Age NaN
43216, Import the necessary libraries for hyperparameter tuning, such as GridSearchCV, and metrics such as precision, recall and F1 score
28824, Fitting the prophet
7520, SHAP values for selected rows
3575, Finding Missing Values
33675, Timestamp
32840, convert all the values to int and export for submission
23526, The target for mislabelled tweets is recalculated
11428, Linear Imputation
25521, Train and Predict
28147, Let's take a look at the most frequent relations or predicates that we have just extracted
23059, When we have our data prepared we want to split it into two datasets: one to train our model and another to test its performance. And the best way to do that is using sklearn. We set up a test size, which is a standard value for this operation; usually we leave a fraction of the data for testing, which means the rest remains for training. It is also a good practice to set shuffle=True, as some datasets might have ordered data, so the model would learn to recognize 0s and 1s but won't have any idea that 2 exists, for example
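A minimal sketch of the split described above, assuming a feature matrix `X` and labels `y`; the 0.2 test size is a common choice, not necessarily this kernel's.

```python
# A minimal sketch of a shuffled train/test split with scikit-learn.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=42
)
```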
7253, Normalize data
3312, GBDT
9088, Look at the value counts of each category in MSSubClass
31301, Observe Prediction Time With Different Batch Size
28531, LotArea
16011, Survived
22593, Shops Cats Items preprocessing
27354, x
2667, Duplicated Features
19308, Evaluation prediction and analysis
26456, RF Predict training data for further evaluation
524, Default mode for seaborn barplots is to plot the mean value for the category
20342, There are 7 transforms; try them out on a single image
24286, Gathering data
6097, Holy skew transform
36400, EDA of test dataset
26698, look at the unique states in the sales dataset
42801, Variables like life_sq or kitch_sq are important in prediction, and because they are linked with full_sq it is better to fill missing values using a ratio of full_sq than the median or mean
15912, Passengers with the title Master are likely to be children; we can infer their missing ages as the mean age of Master
8297, Grid Search
6170, Cabin
24138, Nearest Neighbors Model
4627, Box and whisker plots for detecting outliers
11205, Stacking averaged Models Class
5810, Feature Engineering
11229, let s convert the features into category integers
19552, Train Validation Split
40060, Confusion matrix
34108, ICMR Testing centers
1829, Blending Models
28608, YearBuilt
18961, Display distribution of a continuous variable for multiple categories
21780, I would then define some helper functions
21451, Categorical
23122, Findings: as expected, Sex, Embarked and ticketProcessed have the weakest correlation with Age, which we could guess beforehand from the boxplot. Parch and familySize are moderately correlated with Age. nameProcessed, Pclass, Cabin and SibSp have the highest correlation with Age. But we are going to use only nameProcessed and Pclass to impute Age, since they have the strongest correlation with it; the tactic is to impute missing values of Age with the median age of similar rows according to nameProcessed and Pclass
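A minimal sketch of that imputation tactic, assuming a DataFrame `df` with `nameProcessed`, `Pclass` and `Age` columns:

```python
# Impute missing Age with the median Age of rows sharing the same
# nameProcessed and Pclass values.
df["Age"] = df["Age"].fillna(
    df.groupby(["nameProcessed", "Pclass"])["Age"].transform("median")
)
```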
5467, train another baseline RF model to get some baseline importance
20924, We can first use a built in object feature importances
10791, I wonder now about the Parch
8354, Survival rate by the title
2517, The accuracy of the KNN model changes as we change the value of the n_neighbors attribute (the default value is 5). Let's check the accuracies over various values of n_neighbors
34364, Let's convert our date feature to datetime and do it more succinctly
1785, Random Forest
36359, Submission
42164, Data Preprocessesing
21600, Select columns using f strings new in pandas
35920, Prepare Submission File
27883, For CA the SNAP days are the first 10 days of the month; for TX and WI, 10 days within the first 15 days of the month, but always in the same order
17543, Feature importance
5332, Visualize three way split of three variables differentiating with color
8566, Deleting a Row
42329, Reshaping train test set for making ready to go through modelling purpose
21036, After applying Grid Search we found the optimal n_components lies within a range; in this case we pick the mean of that range
12005, Elasticnet
20841, we want to drop the Store indices grouped together in the window function
18217, Part 3: Convolutional Neural Network
21475, Here you can notice that we have a problem: some of the generated data is very long compared to the original data
34822, Optimizers and Annealers
42734, The multivariate KDE plot for "not repay" is broader than the one for "repay"
19312, Evaluation prediction and analysis
13555, Ploting Fare Distribution
24121, Use macro model predictions to adjust XGBoost micro predictions
11182, Save Cleansed Data to disk
1883, We change Sex to binary as either 1 for female or 0 for male
13043, SibSp
29825, Trained FastText
25494, Label encoding
38198, Random Search looks like grid search, but instead of testing all possible combinations we choose only a limited number of combinations drawn randomly without replacement, and we evaluate our model on what we have drawn
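A minimal sketch of that idea with scikit-learn's RandomizedSearchCV; the estimator, parameter grid and n_iter below are illustrative assumptions, not this kernel's settings.

```python
# Randomized search: sample n_iter parameter combinations instead of trying all.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {
    "n_estimators": [100, 200, 500],
    "max_depth": [3, 5, 10, None],
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=5,   # only a limited number of randomly drawn combinations
    cv=5,
    random_state=0,
)
# search.fit(X_train, y_train); search.best_params_
```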
12109, Once again dealing with missed MSZoning feature
666, Random Forest
2164, Our initial approach to estimate Age missing values was to fill with a placeholder value
29372, Creating submission file
41034, Explore non WCG females
16788, Importing and Merging Data
2351, Gradient Boost Blend
6473, Evaluate Ensemble method
31778, This is just an example
957, Feature importances generated from the different classifiers
40202, Sample predicted Images
43036, Split the Data into Train and Validation Set
8931, MSZoning
16588, from C high number of 1st Pclass people Survived lets fill C in missing value
38566, Ridge Regression
36081, Model
12707, There are two missing values for Embarked; let's replace them with the most frequently occurring value
30632, Relationship between age and survival rate
32792, K fold CV with Out of Fold Prediction
11043, Train the models
6100, Build the Model
21220, parameter tuning for getting Global maximum and this is done by tuning the Learning rate
5597, OverallQual
11938, Transforming the target variable to log values so that the error is equally impactful to the low and high prices
20233, SibSp Parch
1800, Observations
24596, Again zeros and ones are not mixed up
35424, We use the convolutional neural network architecture to train the model; for this we need to modify our data as follows
984, Extra Blending made easy
29779, Adding Noise
4786, OverallQual and GarageCards have a positive correlation with the Sale Price
5060, How do neighborhood and zoning affect prices
14618, Station 4 Transforming and scaling the fare
10107, encode a categorical value in Test Data
580, 19 attributes have missing values; 5 are missing over 50% of all data
12420, We inspect 3 example ML models RandomForest KNN and LGBoost
12940, Do you have longer names
2414, Categorical Imputing
3494, Best score
22283, Prepare Data
38819, We set up the RandomForestClassifier as the estimator to use for Boruta
32132, How to find the most frequent value in a numpy array
10041, Determine outliers in dataset
6136, There are no other options
25419, I thus use the date column to group rows
4101, Data Merge
35688, Catboost
9928, I wrote the next lines when I was coding in after a previous study of the features
1093, Embarked
21240, create our submission file
22080, Body of the script; change the patient number to create another video
24012, Accuracy
11558, check how many features have we gotten rid of
43335, Here is the brief introduction about all the packages from their documentation page
32858, Checking for outliers
40076, Multi colinearity Categorical
9748, Embarked
19248, Fitter
7975, Quick completing and converting a numeric feature
42788, Word Cloud
625, Just about formally significant
31505, We may have to implement the same process for all the features to check how each feature correlates with the target variable which can be quite tedious given the number of features
15561, There is both a Miss and a Ms title; without much consideration, I'm joining the Ms-titled passengers to Miss
16539, convert the Sex column from categorical to Numerical use the map function for this
41415, Load initially explore data
18685, let s use a list comprehension to generate the labels
16341, Perceptron
7868, I explore both PClass and Sex in the same plot
19964, Feature importance of tree based classifiers
27841, Epochs and batch size
31288, Prophet appears to output very similar shaped predictions to ARIMA
23840, Taking X118 X314 and X315
4773, Logistic Regression
40832, With weather 1 2 and season 2 3 and working days the bicycle rental count is maximum
783, Ridge
36227, let s visualise the images in our dataset
8398, IMPLEMENTING TPOT
9411, Cleaning Filling NAs imputation font
34962, Exploratory Data Analysis
13331, Fare completing feature with its mean value div
7656, We use multiple regression models: Lasso, Elastic Net, Kernel Ridge, Gradient Boosting, XGBoost and LightGBM. To find out the best parameters for the models we can use GridSearchCV
42785, Submission
22146, Generate test predictions and preparation of the submission data file
39105, Printing keywords
35781, Compare different predictions to each other
5251, Creating the Feature Importance Dataframe
14264, Linear Support Vector Machine linear SVM
22577, 0s
729, Funnily enough, LotFrontage gets dropped. That'll teach us next time not to spend so much time on feature engineering. Joking aside, it looks like a lot of dummy variables get removed
28094, Label Encoding
21776, FINAL TEST
35375, Train model
8243, Grid Search CV
41697, Well, some places are visited at certain time periods for sure, but we can't do much else until we disentangle time
6279, As expected from our analysis of Parch and SibSp Family Size follows a similar pattern
17774, The best survival rate is for passengers embarked in Cherbourg the worst for passengers embarked in Southampton
15104, Exploratory Data Analysis
32178, compile data processing
13962, Pclass
13483, Reference
244, Library and Data
10774, Because I'm using an SVM model in my ensembling I need to apply some standard scaling to my data; the following code uses a scikit-learn transformer designed for this purpose and applies it to the data
10981, Testing to get best possible accuracy
3427, Drop the Cabin and Cabin M variables for now, since we'll focus on Deck instead
1561, SibSp
11532, Submission
5313, Model fitting
21548, Training and evaluation of the model
18303, Feature Matrix Creation
22180, RoBERTa span
20631, Lets visualize the top 10 stopwords and punctuations present in real and fake tweets
41939, Generator Training
16901, New Feature NameLen
24235, This is the augmentation configuration we use for testing: only rescaling
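A minimal sketch of such a test-time configuration, assuming the Keras ImageDataGenerator API:

```python
# Test-time generator: only rescale pixel values to [0, 1], no augmentation.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

test_datagen = ImageDataGenerator(rescale=1.0 / 255)
```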
7382, The matched values of WikiId are presented in the list wiki_id_match and are added as an extra column to the DataFrame kagg_rest4_corr, which is a copy of kagg_rest4
28, Weight Average
40314, Target
10896, Passenger Id and Ticket are just random variables and they do not give any intuition for their relation with the chances of survival of a passenger
14690, The Age variable is missing roughly 20% of its data
11254, Data Preprocessing
15527, Age
42211, Define model architecture
25942, LightGBM
34825, Model Fitting
14381, Replaced all available values with 1 and filled all the missing values with 0 in the new feature CabinBool
9702, Missing values in numeric features
22184, Remove low prices: anything below 3
16125, Comparing Models
22399, antiguedad
8003, Train Elastic Net
5992, So far
22517, Hence this technique is more like oversampling, but here we oversample BOTH classes rather than just one
17796, Extract Deck from Cabin
14799, Gradient Boosting Classifier
42372, Submission
6930, Missing values in column LotFrontage I fill with median values
28819, Mondays and Sundays show the best average sales, and Saturday is the weakest
21534, Scaling
7677, Numerical features
40308, also check the correlation between the three fields
8553, RANDOM FOREST ON TEST DATA
37363, Age features are very important
3776, Last Name
2175, The plot suggests that those who survived paid a higher fare
16557, Submission
26479, Create dataloader for mini batch training
24913, Time Series in China
16604, Continuous variables
27473, Bag of Words Countvectorizer
21138, make big cross over and combine
35784, Use a different function to calculate the ensembles
42773, Fill
3867, Categorical Data Pipeline
12061, Ordinal Variables
1198, Feature Selection
829, Checking correlation to SalePrice for the new numerical columns
33867, Preprocessing for Modelling
20184, Glimpse on the data
17764, Go to top font
39718, The embedded vectors for a specific token are stored in a KeyedVectors instance in model
8227, After shortlisting the variables I built a pair plot between the variables which have either a high positive correlation or a negative correlation
14764, Using 75 25 Split for Cross Validation
27005, Embeddings
5117, Variable Correlations Only Numerical Fields
20104, Item count mean by month shop for 1 2 3 6 12 lag
19649, Wrap the scoring in a function to try different values for prior weight
29945, First we need to format the data and extract the labels
14361, Feature Description
5056, Since most of the houses were built during the two decades before 2010, let's take this into account and analyze the actual age of the house in the year of sale
14806, Outlier Detection
33739, Set the threshold value for predicting bounding box
11272, The coef_ attribute returns a NumPy array of coefficients in the same order as the features that were used to fit the model
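A minimal sketch of pairing those coefficients with feature names, assuming a fitted scikit-learn linear regression `model` and a feature DataFrame `X`:

```python
# Align coefficients with the columns used to fit the model.
import pandas as pd

coefficients = pd.Series(model.coef_, index=X.columns).sort_values()
print(coefficients)
```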
15244, Generation
14738, It was that easy let s do our evaluations
16565, Checking for missing values
28546, Model
36631, How about price? Maybe there are some outliers and skewness
13701, we ll deal with the missing values
5043, get more detailed stats about the distribution
41607, Reshaping the data
27661, Evaluate the model
26872, apply the first layer filters to our selected image
31229, Features with values between 10 and 10
2278, K Fold Cross Validation
16916, Check feature correlations against each other
19387, Encode categorical features
508, Support Vector Machines
37813, Only a few tweets are missing keyword
28145, Relations Extraction
39213, Clean up
36577, Gender and age statistics
629, Similar to the known Cabin numbers what about the passengers for which we know the age
24446, Cleaning the text
28106, Let's try something different: we'll make a new category NA for every categorical column
8965, group by family
5082, We'll look at the predictions in detail
35829, If you want to delete any features this can also be done easily with pandas
8534, FEATURE SELECTION REGULARIZATION IN REGRESSION
16774, Prepare Submission Data
32681, A few specific aspects of this exercise demand special consideration
29852, Fine tune The model
20653, SCATTERPLOT TO ANALYSE RELATION BETWEEN EACH PARAMETER AND THE OUTPUT
24806, numeric data
21150, Important remark: this operation may be a bit surprising; we have to split our train data set to differentiate train and test for modeling. The so-called test set from the data doesn't contain the target, so we would not be able to evaluate on its basis
22708, Loading the Dataset
18254, Model
28396, Split into independent and dependent features
29167, Electrical Fill with the most frequent class
22343, CountVectorization
25460, write a submission file for each submodel
25378, With an annealer we can set a relatively large learning rate in the beginning to approach the global minimum fast, then reduce the learning rate by 10% every epoch to find the best parameters smoothly
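A minimal sketch of such an annealer in Keras, decaying the learning rate by 10% every epoch; the initial rate of 1e-3 is an illustrative assumption.

```python
# Schedule: lr(epoch) = 1e-3 * 0.9^epoch, i.e. a 10% reduction per epoch.
from tensorflow.keras.callbacks import LearningRateScheduler

annealer = LearningRateScheduler(lambda epoch: 1e-3 * 0.9 ** epoch)
# model.fit(..., callbacks=[annealer])
```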
18514, Load data
7436, Grid Search
19137, Plotting violin plots of weights to performm a sanity check
34839, Since the XGBRegressor model is the best model, let's use it for prediction
19590, shop and main cate id
20103, Item count mean by month item for 1 2 3 6 12 lag
439, Utilities: since this is categorical data and most of the values are of the same category, it's not going to affect the model, so we choose to drop it
1955, Training by XGBoost algorithm with default Parameters
26356, Feature Engineering
11547, A lot of the categorical variables have rare labels that appear very infrequently
36568, Prepare data
33198, Recompose pixel digits to image data
42858, Optimization
39287, PRICE FEATURES
19398, Save the model
16621, Modeling
34823, I have set the epoch to 10 for the purpose of kernel
15178, Data exploration
16744, voting
41622, combine training test data to save data cleaning preprocessing efforts
19585, date block num
1610, Target Encoding
26640, Submission file
25476, We can get a better sense for one of these examples by visualising the image and looking at the label
2940, check the missing values
482, Fill missing values
10214, It's quite evident that the number of male passengers is almost double that of female passengers
26319, A Random Forest is an ensemble technique capable of performing both regression and classification tasks with the use of multiple decision trees and a technique called Bootstrap Aggregation, commonly known as bagging
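As a minimal sketch of that ensemble with scikit-learn (hyperparameters are illustrative, not tuned):

```python
# A random forest: many decision trees fit on bootstrap samples, predictions
# aggregated by majority vote.
from sklearn.ensemble import RandomForestClassifier

forest = RandomForestClassifier(n_estimators=200, random_state=0)
# forest.fit(X_train, y_train); forest.predict(X_test)
```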
6152, Split train set into two subsets
19630, There is a small but still significant difference in the distribution of target 1
27281, Prepare for submission
14586, C Cherbourg Q Queenstown S Southampton
17929, Load Dataset
21551, for visualization
18470, Sales
26514, When we're happy with the outcome we read the test data from the test file
18724, get the filenames in the test folder
17361, Now they look nice
24410, Submission File
27455, Lowercase
18033, Everybody
27461, Stemming Lemmatizing
7441, Verify that there are no null values in the data set
27353, The diagnostics are not bad, so we are good and can continue
24051, Encoding nominal categorical features
32770, Evaluate The Model
24931, Date and time
16304, NOTE
28424, Outliers by price and sales volume
21084, ps_car_11_cat has the maximum number of unique values in the data set: 104
30900, This means the kNN could not find a suitable city id for that pair of latitude and longitude
33993, Logistic regression is good at classification, but when complexity increases the accuracy of the model decreases
139, Using RandomizedSearchCV
15844, Cabin
3275, Fill all other Basement features missing values with None or Zero
22352, Using xgboost XGBRegressor to train the data and predict loss values on the test subset
28578, TotalBsmtSF
5506, Creating a feature based on Sibsp and parch
11634, Suport Vector Machine
14080, Logistic Regression
34474, PCA
21903, If you got this far, please upvote
37305, Inverse Document Frequency
28105, Remove the SalePrice and assign it to Y
29142, Feature importance via Random Forest
19191, Predicting using the Simple Linear Regressor
7620, ElasticNet
10810, That's a lot of categories
26054, We're finally ready to train
19446, Model 2B
3997, measure the RMSE for sklearn
13045, Tickets are of 2 types here
36898, RMSprop
25647, Run the next code cell without changes to obtain the MAE for this approach
2153, Startups use pitches to sell their idea
30272, Applying web scrapping
17723, Individuals who embarked through port S paid the lowest fare
16903, Ticket
1021, Fantastic! Now let's insert these values into our Random Forest Classifier
41620, Good
26522, TSNE Visual Clustering
20057, Prepare data for model
40453, NeighborhoodBldgType
6161, Prediction for sklearn models
41871, Randomized search
6695, Submission
28430, Item name category feature additional feature
19062, Loading the test set
32501, Processing the Predictions
24476, Plot the first 9 images in the test set
25184, XGBoost
11185, Import libraries
23797, Balance Methology
1391, Give me your feedback: how can I improve this model?
42996, Logistic Regression with hyperparameter tuning
11394, find the missing fare in the data
16915, Check Each feature against Target label
38657, Gender Distribution
1149, we are going to pick some features for the model
38271, Firstly we define the vocabulary size as len(test); that means this system supports len(test) different words
27485, We only used a subset of the validation set during training to save time
23383, The next functions load an image and the corresponding bounding boxes depending on randomly picked image ids
27324, Learning schedule
15746, Cleaning Data
32266, Relationship between variables with respective to time Represent in horizonatal line
23252, We build our neural network model
13439, After exploring the features in the dataset
23453, Day
18285, Category and Number Lists have now been changed
10928, Draw a random graph with nodes and probability
26770, Detecting the best weights for blending
13858, so we have titles
22605, Months since the first sale for each shop item pair and for item only
11797, Missing values
6412, Check variation in the feature values
23416, And now we can train the model
42272, Make day, month and year columns to analyze whether there is a meaningful difference between the interest level values
20435, Encoding the text into tokens masks and segment flags
7911, These scatter plots give us some insights on a few outliers, e.g.
13276, Encoding categorical features
29015, Distribution of Pclass feature by Survived
10577, Machine learning in pyspark
11473, ANN (Artificial Neural Network)
41676, Resizing the Images
7226, Pool Quality Null Values
20477, now analize the application bureau train data
14592, Age Group
7931, Elastic Net
41269, Qualitative Colormap
12515, Whoa! The first feature we engineered did end up being pretty important. Way to go
11609, Take care missing data
18384, Here is how to extract keyword location text to replicate the structure of the dataset used in this competition
21182, Housing Prices
15371, Fare Feature
24800, CNN
12368, SaleType
42044, Replace object column to integer column
35061, Augmenting an image
6923, Make Submission
19736, Observation
35494, For example, let's look at the first sample's pixel values
23443, Heatmap of all the continuous values in the file
27767, Submission
8411, However, by making the year of construction of the garage an indicator of whether it is newer, it becomes easier to identify a pattern of separation
947, Tune Model with Feature Selection
12917, Types of Variables
8510, Feature Engineering
31167, title
35458, Visualize the Chunk of Melanoma Images
9158, This narrows down the features significantly
38690, Before Mean Encoding
15632, Survival by FareBand and Gender
20447, application train
29730, I need to give bounds for these parameters so that Bayesian optimization only searches inside the bounds
16177, we can safely drop the Name feature from training and testing datasets
37456, Fitting the model
23562, It looks like full_all is not the exact sum of the male and female populations
38297, We saw there are only numerical columns in this dataset
37636, It is always a good idea to tune the DataLoader's num_workers parameter. From the PyTorch documentation:
25952, Departments csv
29944, Implementation
35116, Creating Training Pipeline
22840, Ran out of memory when trying to create a dataframe from the January and February lists
26723, For WI
21527, take a look at the downsampling operations of a multilayer convolutional network
28266, we shall take a look at categorical features
27121, MasVnrType
11489, SaleType
6516, Outliers
15276, Importing Libraries
11496, Check Correlation
6989, Size of garage in car capacity
18108, After creating the submission I always check the format and the distribution of the test predictions
17675, Miss Mrs Master have a lot of chance to survive compared to the title Mr
208, Report
15248, Tune
15501, Age
9206, Gender Distribution by Ticket Class
20162, We are not passing parameters in this step to keep it simple, and will be using the default ones
30082, I identified passengers who traveled with their family or alone by looking at the counts of SibSp and Parch
35481, TPU detection
42940, Loading the data from feather files
38679, Categorize Image Size
36749, LSTM Model with Keras
30516, How does the Target vary with the Suite and Income Type of applicants
14325, Fare
35440, Reshape
29074, Categorical features mean encoding
35783, Stacking Averaged models Score
2529, XGBoost
9308, The error here is that our encoding forced an order among the categories: it is saying to our models that BrkFace < None < Stone, and this can be very harmful to both the performance and the explainability of the model, because that ordering does not make any sense
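A minimal sketch of avoiding that spurious ordering by one-hot encoding the column instead of label-encoding it, assuming a DataFrame `df` with a `MasVnrType` column:

```python
# One indicator column per category, so no artificial order is implied.
import pandas as pd

df = pd.get_dummies(df, columns=["MasVnrType"])
```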
32361, Missing Values
14662, Encoding Sex PClass column
17674, Categories of title are a good feature
20585, Applying Feature Scaling on Test Set
17793, Build families
38221, Some points are misplaced
42635, The Human Freedom Index
18400, Target
839, Creating Datasets for ML algorithms
20535, Define error metrics
14396, we can drop the name feature since it contains no more useful information
26948, Normalized information about building where the client lives
36051, Functions
20120, Hyperparameter Tuning with Optuna
7268, Embarked Feature
18677, Evaluating Ridge model on dev data
6545, I am going to plot a heatmap just for fun, so if you don't understand it, go ahead; even I don't understand the heatmap below
8466, Select Features by Recursive Feature Elimination
26916, Lets solve the problem of outliers
37468, EDA
21742, Highly skewed item_cnt and item_price
42936, Getting the list of names of the added features
2023, Stacked models
1969, KNN Classifier
42021, Concat Merging and creating a new data frame
6401, Preprocessing Train File
14172, Here we display the features with their corresponding importance values based on each model
26961, Anual Sales
42648, target 0 1
9749, Fill the Missing data
22160, Column extractor
32247, we fit for 2 epochs
13338, Another piece of information is the first letter of each ticket which again might be indicative of a certain attribute of the ticketholders or their rooms
6181, Data still evenly distributed
35064, Making predictions using Solution 3
15566, A feature for the cabin deck
11885, Logistic Regression
1395, Correlations between numerical variables and Survived aren t so high but it doesn t mean that the other features are not useful
24503, Generate the output
28946, try to optimize the hyperparamerts using Randomised search
6471, Try on the test data and make submission
4805, https://scikit-learn.org/stable/auto_examples/preprocessing/plot_all_scaling.html#sphx-glr-auto-examples-preprocessing-plot-all-scaling-py
34368, Modelling
25470, Define architecture
8432, Check and Input Basement Features Nulls
22601, Last month shop revenue trend
8730, Scatterplot with Correlated Variables
17727, The reason q=4 was used is because the data will be divided into quartiles
40865, Model Evaluation
35334, Data Augmentation to prevent Overfitting
12982, Sex
4454, Plotting and visualising the distributions of different variables
18884, Fare and Embarked are two columns where test and train differ in missing values
5322, Display heatmap by count
1967, Random Forest Classifier
30401, Defining the architecture
7727, BsmtFullBath, FullBath and BsmtHalfBath can be combined into a TotalBath feature, similar to TotalSF
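A minimal sketch of that combined feature, assuming a DataFrame `df` with those columns; counting half baths as 0.5 is a common convention, not necessarily this kernel's choice.

```python
# Combine the bath counts into one feature, weighting half baths by 0.5.
df["TotalBath"] = df["FullBath"] + df["BsmtFullBath"] + 0.5 * df["BsmtHalfBath"]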
41276, Transparency
36747, Here is the important part
11146, separate into lists for each data type
38756, Gaussian Naive Bayes
31397, Let's create our model; make sure to add a dropout value so that our model does not overfit
38782, Logs of model predicted prices
15732, Confusion Matrix
23120, Impute Age
25288, Now let's try with the optimal learning rates
36617, Plot FacetGrid using seaborn
2797, Deploy Trained Model on cloud
6435, Conditioning The Coolest Feature
16465, Feature Selection
27496, Shallow CNN with no regularization
7009, Value of miscellaneous feature
1837, 1st and 2nd Floor Area
31904, Compile the model
12684, Every column that is either an int or a float can be described
6322, Adaboost
5346, Display actuals as a scorecard with increase or decrease numbers
37053, Looks like Cabin is an alphanumeric variable with no special characters between letters and numbers
254, Library and Data
10916, Select features
20108, Item count mean by month sub item category for 1 lag
27048, Visualizing Images with benign lesions
6280, Sex
32559, Age
23118, Imputing Missing Variables
6655, The positively correlated attribute pairs are Age/Age bins, SibSp/Family size, Parch/Family size, Sex/Title, and Survived/Sex
13142, NameLength as Survival function
9768, these are the features I use in this run
20177, Converting Images to binary
10043, Encoder
32319, Predict the scores using KNearestNeighbors
3747, isnull
23407, Scaling
4625, Using histograms we can easily check the skew for each variable
40484, Gradient Boost Regressor
105, Hypothesis testing for Titanic
42946, Creating the submission file
3547, now get this awesome new function to work on other features
18376, Fit and Score Model
40334, Baseline
3909, If you take a look at the description of the variables you notice that this is actually not missing data; it means no item is found (no pool is found, for example), and so on for other variables, so we replace these values by None
35419, Training data: extracting from the dataframe, converting to a list/NumPy array, scaling into range
35905, We unzip train and test directory using zipfile library
9292, Statistical Modelling
23674, Split into training validation sets
32850, Data cleaning
17927, Interesting question: the test accuracy score is 79%
31535, 50% of values are close to the mean value, so we replace with the median
634, Travelling alone appears bad enough to be significant
35342, Making predictions on the Test Set
31845, Category dataset preprocessing
11067, Adding some categorical features
30833, Use the date feature to get the year month and hour features
29712, Visualizing distributions for numerical features is essential
6821, We are going to use regularized linear models to avoid the risk of overfitting
42966, I converted non ordinal categorical variables to dummy variables
4422, Make Submisison
11516, Random Forest Regressor
14099, SVC Output file
24669, Modeling
9005, Convert categorical nominal variables into multiple columns using One Hot Encoding
13252, extract title
42344, Convert categorical variables and process real test data
25800, Besides it looks like there is a strong correlation between the number of entries of the contributors in both datasets
8834, SUBMISSION
4540, Matplotlib
7933, Random Forrest Regressor
25858, BagOfWords Implementation
36228, Here we have used train_test_split from scikit-learn to split our training images and labels into train and test data. Since we have set our test size, our initial dataset will be split into training and test elements
26898, Using Cross validation
8967, sibsp parch
7537, Groups with a head count of more than 6 never survived in our train dataset, so let's group the sizes above 6 (that is, 7 and 10 members) together as 7
2077, Following best practices, let's calculate our own CV score that will be used as a reference
5834, Feature Engineering
15625, Passenger Class Feature
3199, Transformations
4190, We excluded from this list the Id column, which is a unique identifier for the houses
16377, Before combining features all features needs to be changed to numeric types
12633, Text Based Features
4353, make new feature of OverallCond OverallQual
21016, Bar Chart
26301, Test on test datasets
24859, Let's check how we are performing on the two groups
31230, Features with max value between 10 20 and min values between 0 10
2286, F Score
6860, Numeric Float Into the Battle Field
19069, Get the probabilities for the test set, create a list of the probabilities for each image in the test set, and convert each probability to a float
24723, PPS Predictive Power Score
18032, Errors Analysis
9080, I wonder if Condition1 and Condition2 are ever equal in values
27760, Tokenizing the text
31526, We repeat the same process as before
8837, The author also says "I would recommend removing any houses with more than 4000 square feet from the data set", so let's do that too
5507, Fare Category
10369, Categorical Variables
27958, One hot encoding
26732, Plotting monthly sales accross departments
17044, Frequency encoder
17944, Filling Data fare
4173, Domain knowledge discretisation
17894, Let's visualize the effect of covariates on the probability of survival
27263, Discrete variable
42889, Hospital Bed
1138, Model evaluation based on K fold cross validation using cross validate function
8884, The 5th feature that we make is the overall average rating of the house, to determine its price. We do that by taking the arithmetic average of OverallQual and OverallCond
18487, We still have StoreType, Assortment and StateHoliday as objects; we need to convert them to numerical categories
17829, Sex is the dominant feature followed by Pclass and Fare
3397, BsmtQual BsmtCond BsmtExposure BsmtFinType1 and BsmtFinType2 For all these categorical basement related features NaN means that there is no basement
32364, Scans by Anatom Site
19799, Mean Median Mode Imputation
37421, first take a look at the class distribution of sentiment label
4813, Dropping unnecessary columns
22750, Import Libraries
12636, We also define a function for removing outliers that will be used later on
26642, This table means that 3M docs have meta info
22387, GradientBoostingRegressor
37544, Data Preparation for Keras Model
11767, Our stacked model scored an impressive
42376, CNN
9603, Data Vizualization
4100, New columns
5909, AdaBoost
973, Set up our dataset preprocessing
4577, Fifth step Modelling
37830, Consider as cut off for consider probability as
22088, Import required libraries
18081, look at some examples of images with small areas covered by bounding boxes
36538, 200
6777, The Chart confirms Women more likely survivied than Men
8527, Basement Features Continued
37626, Here we split the train data to create a validation set
8077, The rest can be safely imputed with 0 since this means that the property is not present in the house
24796, Wavenet
26191, HyperTuning
40242, Before we do any kind of analysis we must check the kind of features we are working with and what percentage of those are null
10766, to tune all the hyper parameters
3525, SalePrice Correlation matrix
25436, Label Encoding
25834, Calculating and analyzing Char length of each text
38430, Submission
42463, ps car 12 and ps car 13
27123, If we look at the types of masonry veneer and their corresponding areas
17784, We apply the rule for extracting the title
39886, Scaling
33848, Basic Feature Extraction
24030, One more thing that we may notice from spikes is that sales count depends on days passed after release date
10217, Take Away Points
26336, Models
7544, Our data is ready; now it's time to use it for model building and prediction
33482, Observations
43283, Instantiate a new RandomForest
13047, Cabin
37197, Box Cox Transformation of highly skewed features
33277, Observations
13723, Perform OHE on Pclass new Sex Embarked and Title
10504, Fill the Age with its median; for a dataset with large outliers it is advisable to fill the null values with the median
4456, Comparing survival rates among different variables
23433, Removing Emojis
32514, Extracting VGG19 features for training and testing datasets
4550, Treating categorical and numerical features differently
21505, Image with the largest width from test set
11340, Family Size Feature
35101, Compute accuracy of model
10975, Top influencers
12145, Applying cross validation on all the algorithms 2
36667, As we continue our analysis we want to start thinking about the features we are going to be using
8125, Submission
42560, Function to search for best threshold regarding the F1 score given labels and predictions from the network
26559, Submission
16370, Categorizing the Age values
25463, and now to the Test Set
19528, Creating User Defined Functions UDF
31435, Before submitting run a check to make sure your test preds have the right format
423, Target Variable Transform
35650, Baseline models font
19192, Predicting using the DecisionTree Regressor
11040, Prepare our data for the pipeline
11315, Age
19453, First Hidden Layer
4629, Scatter Matrix Plot
22098, Avg Accuracy VS Number of Epochs Graph
7889, I would simulate the training using the new features
22077, Positive values in the column Diff Jaccard pred vs text mean our prediction is better than simply taking text as selected text
11622, See how each feature are correlated
33803, This creates a considerable number of new features
14634, let s write some functions to help clean our data
9730, One more variable of note is GarageCars
699, If we now select survivors based on the new probability threshold we finally obtain a nonzero number of survivors
24863, Just need to do this little trick to extract the relevant date and the forecastId and add that to the submission file
15257, As most of the records embarked at port S, fill the missing values for the two records with S
21660, Convert numbers stored as strings, with errors='coerce'
550, SVC features scaled
30761, Display distribution examples
15098, Additionally, from looking at the features it looks like we can just drop PassengerId from the dataset altogether, since it isn't really a helpful feature but rather simply a row identifier
4917, Handling Missing Values in Categorical Data
27492, SAVING THE TRAINED MODELS
6251, Embarked
3771, There is no missing value on this feature and already a numerical value
31434, Check on some random input data
23066, You should notice that your model training ran about twice as fast but the accuracy change was trivial
11539, GradientBoostingClassifier
18731, The same patterns appear to be present a dropoff in sales after substantial returns
34868, I define a function that checks the intersection between our vocabulary and the embeddings
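A minimal sketch of such a check, assuming `vocab` maps word to count and `embeddings` maps word to vector; it prints coverage and returns the out-of-vocabulary words.

```python
# Report what fraction of the vocabulary (and of all text) has an embedding.
def check_coverage(vocab, embeddings):
    known = [w for w in vocab if w in embeddings]
    n_known_text = sum(vocab[w] for w in known)
    n_total_text = sum(vocab.values())
    print(f"Embeddings found for {len(known) / len(vocab):.2%} of the vocab")
    print(f"Embeddings found for {n_known_text / n_total_text:.2%} of all text")
    oov = [w for w in vocab if w not in embeddings]
    return sorted(oov, key=vocab.get, reverse=True)
```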
22753, R for Covid is estimated to be between to Source Forbes Report covid coronavirus disease may be twice as contagious as we thought cbca
41089, We use StratifiedKFold to split our data into 7 folds
28851, GPU
12310, The higher the quality the better the selling price
18085, Plot the brightest images
16094, Heatmap of Correlation between different features
14181, Importing Libraries
14394, we can drop the Age feature from the dataset as we are using AgeGroup now
15450, We still have some categorical features which we must change to numerical so we use a Label Encoder
34461, WI 3
35471, Visualiza the skin cancer atypical melanocytic proliferation
507, Nearest Neighbors
26585, TASK 2 IMPORT LIBRARIES AND DATASET
2371, Evalutate LGBM Stacker
21424, LotFrontage
41254, MODEL TESTING EVALUATION SELECTION
21041, Related paper Topic Modeling and Network Visualization to
32394, Here we set config for our next steps
29742, we got 0
23681, we can instantiate the model and take a look at its summary
21598, Create new columns or overwrite using assing and set a title for the df
42331, Information of ReduceLROnPlateau can be obtained from here
30751, Hyperparameter tuning for best model
40045, Both have very similar target distributions now
40428, Ensemble Exploration
27192, XGBoost
40733, Visualizing All Intermediate Activation Layer
37619, For attributes which have fewer than 10 unique values, do one-hot encoding
7352, Correlation
37871, Distribution of dependent variable
9829, Creating Dummies Variables
28469, Plotting columns Latitude and Longitude
38065, NLP Features distribution Disaster Tweets vs Non Disaster Tweets
2409, GarageYrBlt MasVnrArea and MasVnrType all have a fairly decent amount of missing values
40487, Ensemble VotingRegessor
9815, Survivals Survived 1 or died 0
35308, Start training
37232, CNN Model
16038, We must fill Fare NaN value with median of Fare
4134, Scaling the Data
652, And that's quite expensive for a 3rd class ticket
30108, PATH is the path to your data if you use the recommended setup approaches from the lesson you won t need to change this
3699, Predict
8361, Embarked fill embarked with a major class
18121, Pipeline Validation
2352, Logistic Regression
30655, To speed up iteration I create a table grouped by keywords
12089, Fit the first part
18626, For Fare
27656, Normalization
11491, Exterior1st and Exterior2nd
37323, Select pooling layer pool size parameter
39992, Creating Dummy Variables
2897, Define Performance Metric
12403, Exporting output to csv
4194, The first plot is a histogram with the median SalePrice of each Neighborhood category
7966, Let's have fun now with Machine Learning and the regression algorithms
712, Perhaps we should impute medians, e.g.
28298, Load best model evaluated at validation set
1852, Box Cox Transform Suitable Variables
40792, Check if it s a class imblance problem
7309, Observation
8120, KNN
18628, Applying RandomForest
12694, Fare
26918, The two outliers are the one with 7
31820, Confusion matrix
14221, We create features (feature vectors) that make machine learning algorithms work
27417, More Training data
22660, Evaluation Functions
27549, Display interactive filter based on click over bar
1006, Let's bundle them
26758, Submission
9589, Viewing the tail of the data
20230, Age
13981, Map Title to numerical values
22616, Explore Apps
23317, Add previous shop item sales as feature Lag feature
32233, We also need a way to evaluate our model s performance for classifying the numbers
24141, XGBOOST
28506, We can make individual predictions for numbers to test whether our model is working or not
11800, Let's handle skew in all numerical features
29872, evaluate our model
26908, Fit the model
18023, Woman Child Group fates are connected to their class
35622, Define our student network
5849, Observations
18622, Data Preprocessing
11356, Log transformation of the target variable
16876, Embarked vs Survival
10825, I thought it would be less
16858, Person
10394, Gradient Boost Regressor GBM
22744, Model
26374, Fitting the network
7154, We already know our datasets' dimensions; now let's take a first look into our datasets' structure and possible missing values
35390, Training Fine Tuning and Validation
11536, Continuous Variables
41435, It will be more challenging to parse through this particular item since it's unstructured data
22598, Target lags
28789, The most common word was "I'm", so I removed it and took the data from the second row
5490, summarize what we have done till now
9058, GarageFinish
12143, Training the model 2
20824, Join weather state names
5341, Display quantitative values of a categorical variable in a funnel shape
16700, Binning data into different categories
17884, Deployment
29926, We can look at is_unbalance, a hyperparameter that tells LightGBM whether or not to treat the problem as unbalanced classification
10425, Logistic Regression
16186, We can create another feature called IsAlone
11486, MSZoning
2356, Radial Basis Function RBF
7828, Make predictions
5409, Obviously, most groups of females are more likely to survive
39214, A really simple neural network
6832, Categorical Features
35597, Greedy Ensemble
22223, Normalization
20130, Cross validation
27533, Display the variability of data and used on graphs to indicate the error
36068, Generate Output
18952, Display distribution of a continous variable
29457, Gaussian Naive Bayes
20591, Now we are done with all the models
8760, Survival By Sex
30851, Location for occurrence of Other Offenses
35827, The HeatMap visualization is from this kaggle kernel 5 on leaderboard data
10300, Initial Analysis and Feature Processing of Primary Training Data
1164, Feature Engineering
2401, Using ColumnTransformer to manipulate different columns
15618, Mother
1694, Get to know your dataset using Pandas Profiling span PandasProfiling
20413, now construct a few features like
17959, I choose Gradeint boosting model
28402, Simple Imputer
18339, Scatterplots are a good way to visualize relationships between quantitative variables
25274, Validation Accuracy
22356, Writing the ids and loss values to the submission
480, Before we go forward let's make a copy of the training set
36123, Ordinal category features: mapping from 0 to N
18284, Made conditions sets that were responses found in the data
34787, There is no line plot for weather 4 because there are only three data points for weather 4
23578, Before running the hyperparameter search define a callback to clear the training outputs at the end of every training step
16023, Name Ticket
15850, Fare
20695, Model Architecture Plot
38700, Probability of melanoma with respect to Anatom General Challege
1888, Feature Rescaling
1867, Find useful interactions
36426, Ordinal Mapping
8609, Perform One Hot Encoding to all nominal variables
36563, Ensemble Model
15156, Extracting titles from name
27833, Reshape
40734, Creating submisson
39003, Initializes parameters to build a neural network with tensorflow
40979, Data filtered out only for store 36
24117, Combining
13997, Create a new feature FamilySize from SibSp and Parch
91, It looks like
35181, Adversarial Validation
12254, Scikit learn Implementation
20291, Survival for women of class 1 and 2 is ALMOST 100
42065, Renaming Columns
4953, Notice that we are using cross validation technique with 5 folds
39840, Statistical Description
13444, Parch doesn't have a big effect on the number of survivors
17413, Correlation Heatmap of the Second Level Training set
27897, Summary of the Model
28572, BsmtFinType2
22915, A little improvement from the earlier model
36564, Blend Models
4509, Missing Values
4265, Exterior1st
43148, Visualizing images
4278, BsmtFinSF1 BsmtFinSF2 BsmtUnfSF TotalBsmtSF BsmtFullBath BsmtHalfBath
43151, Downloading the pretrained model
6399, HeatMap
5976, LogisticRegression
7234, Finding the optimum Alpha value using cross validation
32136, How to compute the row wise counts of all possible values in an array
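A minimal sketch of that trick, assuming a 2D integer array; the value range 0-4 is illustrative.

```python
# Count how often each possible value occurs in every row with np.bincount.
import numpy as np

arr = np.random.randint(0, 5, size=(4, 10))
row_counts = np.array([np.bincount(row, minlength=5) for row in arr])
print(row_counts)  # shape (4, 5): one count per value per row
```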
31710, Load the weights
9196, There is a way to split the string values easily, but the information gain from the 41 examples in comparison to the roughly 1000 missing values is pretty low
37774, Let's run the code under the control of the memory profiler
5994, Correlation Matrix SalePrice
35851, We refine the edges of the numbers to be clear using an unsharp filter from OpenCV
486, Defining our scoring metrics
38267, Removing all the emojis
38684, Age
6924, Separating the variables for applying the model
24223, Apply the Estimator which got from parameter tuning of Random Forest
4868, Filling NAs
5579, Xgboost Regressor
19837, Yeo Johnson Transformation
18695, save our model s weights
37984, Because I didn't find a GridSearchCV or RandomizedSearchCV that can be used with fit_generator, I have to define my own pseudo random search
21236, Deciding our evaluation metrics
32904, Example of preprocessing
16964, We fill the Age missing values with the median, 28
28320, Identifying the categorical and numerical variables
34421, Common words
41711, 3D plotting the scan
20798, LotFrontage is correlated to Neighborhood so we fill in the median based off of Neighborhood feature
9621, train test split
14608, ADABoost
6382, Bar charts are commonly used to visualize nominal data
6591, OOB error, also called the out-of-bag estimate, is used for measuring the prediction error in random forests
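A minimal sketch of obtaining that estimate with scikit-learn; oob_score=True requires bootstrap sampling, which is the default.

```python
# Each tree is evaluated on the samples it never saw during bootstrapping.
from sklearn.ensemble import RandomForestClassifier

forest = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
# forest.fit(X_train, y_train); print(forest.oob_score_)
```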
34755, Punctuations
24835, use k means
17967, Dedicated datatypes for booleans with missing value support data type with missing values support
733, We might have just gotten lucky by stumbling upon these two outliers
3963, Split the data into train and validation set
31839, Making it flexible
14321, Age
23811, Piping ML Methods
29907, Data Augmentation
33660, Create a submission file
19641, Some device models can belong to more than one brand
10914, Transform skewed numerical data
22870, Data Augmentation
15494, Dropout
8997, Categorical feature importance
29170, SaleType Fill with most frequent class
12830, Fare Analysis
40291, Actually it is another step that you can skip, but to make it clearer I am going to make it explicit that our classes are dog and cat
24254, Embarked
25729, Load datasets
8518, Linear Regression Model
23437, Baseline Model
26365, TSNE or UMAP Visualisations
16586, Finding out missing values in Embarked column
31889, Bagging Classifier
22638, Model 3 January Average 2013 2014 2015
2361, Class Predictions
26547, Fit the model
7483, Mapping the titles depending on survival rate
38618, Surprise again
24052, CatBoostEncoder replaces a categorical value with the average value of the target from the rows before it
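A minimal sketch with the category_encoders package (the toy column and target are made up):
```python
import pandas as pd
import category_encoders as ce  # pip install category_encoders

X = pd.DataFrame({"color": ["red", "blue", "red", "blue", "red"]})
y = pd.Series([1, 0, 1, 1, 0])

# Each row's encoding uses only target values from earlier rows, which limits
# target leakage compared with a plain mean (target) encoding.
encoder = ce.CatBoostEncoder(cols=["color"])
print(encoder.fit_transform(X, y))
```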
33785, now look at the number of unique entries in each of the object columns
4195, Other Important variables Garage variables
506, Logistic Regression
5615, Modeling
14578, Test data overview
1120, There are only 2 missing values for Embarked so we can just impute with the port where most people boarded
26537, it's time to put the network to the test (note: on Theano this can take 1-3 mins to compile)
886, Train again for all data and submit
13715, 2nd METHOD BASED ON PCLASS
39733, Something else that just stood out to me is that I'm not quite sure how important encoding variables is
40968, Dropping non useful column
27113, Missing values in test dataset
19162, Yes/no features for whether the brand name is present
29781, Original Images
25023, Zero centering
9604, Survival Details
10261, First the coding helper function stolen from
39964, Feature Engineering
32321, Now that I have checked the accuracy, precision and recall, I predict the test scores
1676, Now comes the cool part: we can calculate the accuracy metric ourselves. Do you remember the definition? Let me remind you
6281, Ticket
27059, Make Predictions for Test Data
9742, Sex
40448, Drop unused fields
1177, GarageYrBlt does not have any incongruencies
6804, There is 77% missing data in the Cabin column; that is usually way too much for the column to be exploitable, but as we have a small amount of data we still try to use it in feature engineering
11883, Visualizations
5317, Collinearity
12599, Random forest without hyper parameter optimisation
17367, We drop the irrelevant attributes for our task here
18725, There are 12500 files
10077, With tf
7817, Modeling
8787, The best score
26093, Generate output
43366, Countplot for Labels
18835, Calculating the Eigenvectors
27404, Training the Model
13861, Fare categorize it into 4 ranges
41396, FLAG OWN CAR and FLAG OWN REALTY
18534, The average sale price for which a house was sold was 180
3271, Updating Basement features
14614, Station 1 Recap gradient magnitudes
42800, Encoding categorical data to numerical
8001, Train Lasso Regression
11207, use different function to calculate ensembles
7939, Averaging models
29097, Score df for visualizations
16350, Feature selection
41017, We are not done yet as there are some family groups with all members in the test data
21810, Owner-occupied properties typically sell at random-looking prices
41327, As mentioned at the beginning of this kernel we first compress the data using Truncated Singular Value Decomposition
23272, Cabin
14524, Observations
28014, Because of the high-dimensionality problems we have encountered, by default it returns the matrix in sparse mode
15372, Building Machine Learning Model
3984, Drop columns with a lot of NaNs (more than 75%)
8144, These are the categorical features in the data that have missing values in them
35216, Jaccard Score is more about how exactly the predicted words match against actual words in a sentence
12567, that our training data is cleaned we do the same thing for the test data
24031, Overall sales trend looks to be declining so I calculate relative year size to adjust sales count
28860, Create Sequence
41965, Shops
1775, Train Test Split or Cross Validation
31078, LotFrontage
28678, CentralAir
3992, R 2
1946, Relationship with YearBuilt
41313, Imputation
40245, We can start by looking at features highly correlated with Sale Price
708, Some dummy variables exist in train but not test create them in the test set and set to zero
38150, DRIFT REMOVAL
16974, Correlation between our features
738, Fill with medians
40876, Here we go: it's kernel ridge that scores best on the leaderboard after optimization, followed by ridge and SVM; the XGB scores worst among the models
32822, Best parameters are searched by GridSearchCV on my Laptop
41465, Since the vast majority of passengers embarked in S 3 we assign the missing values in Embarked to S
33027, Evaluation
18687, The training set contains 20000 images and the validation set contains 5000 images
31222, Data loader
23504, Design CNN architecture
34337, Analyse Target Variable
10644, we can treat Age as a categorical feature
106, Now we have to understand that those two means are not the population mean. The population mean is a statistical term statisticians use to indicate the actual average of the entire group; the group can be any gathering of multiple numbers, such as animals, humans, plants, money or stocks. For example, to find the population mean age of Bulgaria we would have to account for every single person's age, which is almost impossible, and if we were to go that route there would be no point in doing statistics in the first place. Therefore we approach this problem using sample sets. The idea is that if we take multiple samples of the same population and put the means of those samples in a distribution, the distribution starts to look more like a normal distribution; the more samples we take and the more sample means we add, the closer the distribution gets to the population mean. This is where the Central Limit Theorem comes from. We will go more in depth on this topic later on
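A quick simulation of this idea, with a made-up, deliberately non-normal "population" of ages:
```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=30.0, size=1_000_000)  # skewed, not normal

# Means of many samples of size 100: their distribution approaches a normal
# distribution centred on the population mean (Central Limit Theorem).
sample_means = [rng.choice(population, size=100).mean() for _ in range(2000)]

print("population mean:", population.mean())
print("mean of sample means:", np.mean(sample_means))
```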
36628, let's check out our label, interest level
19535, Grouping on RDD
17048, Gradient Boosting
42935, Adding more features
35856, MODEL SUMMARY
29092, SW dataframe
25892, The Coleman Liau Index
12773, Data Visualization
17653, Out of Fold Predictions
20439, application test
40691, NOW LET'S SEE HOW COUNT VARIES WITH DIFFERENT FEATURES
5981, DecisionTree Classifier
36738, Building the best final model
675, We want to make sure that our classifiers are not overfitting random data features
41864, Bag of Words statistics
34736, First 3 dimensions of the Latent Semantic Space
34930, Get target
3226, Trendline with error based on pandas dataframe
40063, More Visualization on Numerical variables
33306, Learning Curves of the Models
27409, forward propagation
19815, Hashing Encoder
438, MSZoning The general zoning classification RL is by far the most common value we can fill in missing values with RL
41580, Data Augmentation to prevent Overfitting
633, Alone
7394, All datasets were saved as CSV files and can be easily added to your kernel from the page
6442, Replacing other missing data with the median for continuous variables and the mode for categorical variables
6439, Bivariate Analysis
34267, Create the sliding window data set
18583, In addition NVidia provides special accelerated functions for deep learning in a package called CuDNN
1126, The age distribution for survivors and deceased is actually very similar
40330, NLP based stuff
10716, Let's try to understand how survival varies with different features
14497, Reading and Understanding the Data
23651, Discrete wavelet transform
22447, Area chart
28275, Fitting a new model with the found hyperparameter values to the training data and making predictions on the test data
41693, The two dips in time in the training set are curious; if looking at counts per unit time, they might need to be normalised
16558, SibSp and Parch
15683, Hyper Tuned Ensemble Modelling
37016, Treemap of the categories
36701, Evaluate model with evaluate method
5133, Temporal variables
39871, GrLivArea
30113, As well as looking at the overall metrics it's also a good idea to look at examples of each of
14645, Store a copy of the train test data at this stage for workflow purposes
26771, Submit
29335, Basic NN
9887, 1st class passengers are older than 2nd class passengers
15029, Pclass
11195, XGBoost
37997, And here we have information on sell prices for all items in all stores by weeks
1098, Extract Cabin category information from the Cabin number
20479, Credit currency
12511, Fill them nan s
2373, Using custom and existing function in a ColumnTransformer
31249, Fare Age Binning
37491, 1d CNN Convolutional Neural Network
13129, VERDICT WITH BEAR GRYLLS
32889, 2nd level model as a linear regression
22228, Defining the optimization functions (Define the optimizer)
18977, Display values in table format with each column and the header in different colors
24769, Fit all models
22538, Data preparation
16623, Hold Out Validation
9262, 3 outliers removed
14404, In order to get rid of outliers and make data more usable
434, GarageYrBlt GarageArea and GarageCars Replacing missing data with 0 Since No garage no cars in such garage
6996, Linear feet of street connected to property
42264, I fill the empty strings either with the most common value or create an unknown category based on what I think makes more sense
9770, Multicollinearity is what we want to avoid when using dummy variables
1892, Defining Features in Training Test Set
3642, Filling in missing values in test set for Fare
3175, on a single core
36645, It's buggy and I don't know why
42818, Augmentations
16636, Datatypes
11492, Check every feature
9875, The incomes of passengers boarding from the Q port can be said to be very low
3479, Construct and fit the best elasticnet model
29226, we can prepare our data for modeling
14303, Creating new feature Title
9165, KitchenQual
9291, Impute missing or zero values to the Cabin variable
34613, t SNE
862, Passenger Class Survival rate decreases with Pclass
3654, Creating dummies for all features
12132, Create a cross validation strategy 2
21275, Metric
31533, Numerical Features
31108, Bad label cols are those columns whose values are not the same between the 2 datasets, in this case the training and testing sets
15996, Searching the best params for SVC
10800, I am not optimistic here as I don't believe that departure place could influence survival rate
2670, Above is the correlation heatmap of all the numerical features in the Paribas dataset. From this correlation map let's identify the features with higher correlation and exclude them. In the exercise we set a threshold of and exclude all the features with correlation more than
9059, TotRmsAbvGrd
16996, Plot tree
15645, AdaBoost
36334, Display the first 5 images from the training set and display the class name below each image
19940, Pclass
14441, go to top of section eda
7002, First and Second Floors square feet
6828, Correlations
26648, User features
17400, Correlation between columns
19269, CNN on train and validation
9308, Which is significantly worse for OLS and puts us in trouble if we have to explain the model to someone else
41654, General overview
20290, INSIGHTS
25440, Data Augmentation
13084, Decision Tree
39780, Custom Sprinkles
34833, Find correlation within the features and drop highly correlated features
42577, The code below repeats it 10 times, if you are not confident that this isn't a lucky shot
21835, Manually Encode A Couple of Variables
8855, Create the Explainer
13100, Data needs to be one hot encoded before applying machine learning models
23905, References
12106, Experimenting with ElasticNet
27180, Prediction on test dataset
37003, check the number of orders made by each costumer in the whole dataset
1045, Interesting! The outlier in the basement and first floor features is the same as the first outlier in ground living area: the outlier with index number
3405, Here I add a column of NaNs as placeholders for the Survived variable in the test dataset and then combine it with the train data into a single dataframe data
22006, This is a common problem that you ll encounter with real world data and there are many approaches to fixing this issue
15855, Label encoding non numeric features
18751, We can also use
13216, Performing basic visualization with the help of Seaborn
9280, Conditions to check if data is tidy
11625, all features are scaled and ready to use as model input
20324, we're ready for the network itself
15615, Create AgeBands
29917, Hyperparameter Values
12182, Converting the embarked feature
37194, Transforming some numerical variables that are really categorical
22610, Validation strategy: month 34 for the test set, month 33 for the validation set, and months 13-33 for the train set
19449, With a validation score of close to 98 we proceed to use this model to predict for the test set
24850, Only keeping 25 sequences where the number of cases stays at 0 as there were way too many of these samples in our dataset
38125, Name, Age, Sex, Ticket, Cabin, Embarked are all non-integer values; the rest are integer values
13707, so now let's generate our predictions based on the best estimator model
33859, Splitting into train and test set with 80 20 ratio
40000, Central Tendency
4667, Scatter plots between SalePrice and correlated variables
28159, Now that we have our model loaded, we need to grab the training hyperparameters from within the stored model
42840, China
19288, Train Valid split
40083, Categorical and Macro Features
43145, Check the Dimensions of images
39870, year
7827, Clean test dataset and use the same features we used in training the model
2201, doing the same thing to the fare column
29370, RANDOM FOREST
25466, Cost calculation
15700, Most passengers embarked in Southampton (73%)
11134, plot the poly fit data
31257, we build our Keras classifier that will be used for optimization
43258, Handling dates
16277, Zoom In
24877, let's try to compute the mean age of Names with titles who do not have missing ages
12087, Split train data into two parts
37648, Changing features to sparse matrix
39741, Fare
36857, Exploring the Data
34747, CREATING THE EMBEDDINGS USING KERAS EMBEDDING LAYER
22292, Set the parameters for the network
29104, Convolutional Neural Network
34285, exploring the correlations
7576, Barchart Correlation to SalePrice
41829, Evaluate table of single models
21248, Image data is reshaped to 3 dimensions: height 28px, width 28px, channel 1
16929, One third of the passengers are female
12988, Ensembling
811, Numerical and Categorical features
41455, Plot the cross tab
2563, predicting using h2o predict
24342, we extract the features generated from stacking then combine them with original features
801, Finding best tree depth with the help of Cross Validation
6601, Make Predictions
33794, Correlations
12050, XGBoost Regressor
8098, Fare vs Survival
22949, also visualize the count of each prefix in a bar graph
5260, As our next step we are going to train a set of RF models that utilize only the tiny subset of features selected from the model via embedded feature selection algorithm
40073, Label Encode Categorical
7855, Mutual Information Regression Metric for Feature Ranking
39391, Summarise findings
12769, Predictions return a probability between 0 and 1 for survived or not survived, so I take the argmax of the array to get the max index for each test example
39728, Family name
28388, Write a useful function
33866, Exploratory Data Analysis
13710, CORRELATION ANALYSIS
12415, Here I introduce a new and general method to handle non-numerical data with sklearn
32359, Training the CNN Model
12519, categorize the columns on basis of data types
38932, Applying Linear Regression
36258, first visualize null values in our training set on a graph
1950, Dealing with Missing Data
4782, Exploratory Data Analysis
27056, Define a function to get the Mean Absolute Error MAE
4710, Prediction
40012, Target distribution
7280, Embarked
12569, let's do the final step of data pre-processing: normalization. Scaling helps keep our data in the same range and prevents domination by a single feature with high values
10837, move on toward the predictors
16198, we model using Support Vector Machines, which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier (Reference: Wikipedia)
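A minimal scikit-learn sketch of such a classifier, on synthetic stand-in data rather than the actual features:
```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The (kernelized) SVM assigns each new sample to one of the two categories.
model = SVC(kernel="rbf", C=1.0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```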
468, Clustering
37171, Check the performance of the random forest
2317, Train Test Split how to grade your model
13442, Pclass People of higher socioeconomic class have more chance to survive
22285, Our param grid is set up as a dictionary so that GridSearch can take in and read the parameters
20839, set null values from elapsed field calculations to 0
25593, Rectified Linear Units: most Deep Learning applications right now make use of ReLU instead of logistic activation functions for Computer Vision, Speech Recognition, Deep Neural Networks, etc
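For reference, ReLU is just max(0, x); a tiny NumPy illustration:
```python
import numpy as np

def relu(x):
    # Pass positive values through unchanged, clamp negatives to zero.
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [0.  0.  0.  1.5]
```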
34400, Convert Pandas DataFrames to Numpy arrays
24653, img
37789, Network Structure
4031, OverallQual an important variable to look at
10966, Categorical data with few samples per bin
160, EDA of gender submission file
36042, When going through articles about COVID-19, the following factors were found to have an impact on a country's COVID-19 outbreak
27138, We focus now on categorical variables in our dataset
6173, Visualizations
29150, Visualize using Missingness Map
4878, Modeling
5702, GarageType GarageFinish GarageQual and GarageCond Replacing missing data with None
4486, K Nearest Neighbour
12026, check which model contributes how much to the ensemble
26665, bureau balance
9964, Lot Area by Price
6899, By creating Age subgroups we can examine the data even more
27445, Events
7904, Second model Logistic Regression
3535, Facet Grid Plot FirePlace QC vs SalePrice
34838, Model Prepration and Training
37106, there are 2 methods
37004, let's explore the items datasets
27006, f1 metric for Keras
13897, Data Visualization
4794, check the distribution of our dependent variable
32927, K Fold cross validation
28987, plot the features with missing values vs Sale Price to find some insights about the data
25405, CREATING WEIGHTS
19411, So the TL;DR of RoBERTa vs BERT is the following
5949, Data Loading and Description
41088, We will be using bert-base-uncased as our base model and add a linear layer with multi-sample dropout. This is based on the 8th place solution (discussion 100961) of the Jigsaw unintended bias in toxicity classification competition
2732, Example with decision tree
28374, Wordcloud
15669, KNN
31421, Evaluate the Model
16525, RandomForestClassifier
10046, Logistic Regression model
12742, In this dataset we have some missing values
23954, Applying Linear Regression Algorithm
40399, Submission for GroupKFold
20931, Run the session
42300, For the data augmentation I chose to
20887, Define batch size
5070, Less good
13574, Ticket is a very sparse feature with many values
17806, we also add one more feature, Class Age, calculated as Class x Age
28138, Evaluating our Model
12491, Building a pipeline for lgbm
38754, Logistic Regression
27103, Import Dataset
28009, Test Accuracy
21621, Use a local variable within a query in pandas using
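A minimal sketch of the trick, with a made-up DataFrame; the local variable is referenced with the @ prefix inside query():
```python
import pandas as pd

df = pd.DataFrame({"price": [5, 20, 15]})

threshold = 10
# @threshold pulls the local Python variable into the query expression.
print(df.query("price > @threshold"))
```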
21602, Calculate running count within groups using cumcount() + 1
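A minimal sketch, assuming a toy "shop" column:
```python
import pandas as pd

df = pd.DataFrame({"shop": ["A", "A", "B", "A", "B"]})

# cumcount() numbers rows within each group starting at 0; +1 starts at 1.
df["running_count"] = df.groupby("shop").cumcount() + 1
print(df)
```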
9044, There are about 24 features with correlation values 0
12048, Dropping a few variables
15529, There are 2 missing values in Embarked which can be found online
6720, Correlation Heat Map
3893, Variance and Standard Deviation
26578, Using our cat classifier
5952, Dealing with Missing values
42542, 3D t SNE embedding
1158, A Random Forest is an ensemble technique capable of performing both regression and classification tasks with the use of multiple decision trees and a technique called Bootstrap Aggregation, commonly known as bagging
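A minimal scikit-learn sketch of the idea on synthetic data; each tree sees a bootstrap sample of the rows and the forest aggregates their votes:
```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# bootstrap=True is the bagging part: every tree trains on a resampled copy
# of the data, and the forest votes over the 100 trees' predictions.
forest = RandomForestClassifier(n_estimators=100, bootstrap=True, random_state=0)
forest.fit(X, y)
print(forest.predict(X[:5]))
```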
39809, reshape the whole x train dataset
27758, The most frequent words
9892, categorize these 17 features
36467, Images from ARVALIS Plant Institute 3
6036, Create regression models and compare the accuracy to our best regressor
23988, we are merging train and test datasets so that we can handle noise and missing data in the dataset
11712, Prepare Train and Test
42894, Sigmoid function
40621, Let's drop the outliers and split into x and y
11662, Gradient Boost
41246, CORRELATION ANALYSIS
42518, Printing the shape of the Datasets
19525, Saving RDD to a Text File
27755, Removing emojis
19043, This is because these images are stored in YBR FULL 422 color space and this is stated in the following tag
14488, now dealing with the age values
42576, Calculating the absolute difference between the possible sums and the score we retrieved and sorting means that we find our labels in the first entry of sorted sum
39257, ANALYSIS BY SENIORITY LEVEL
35583, Similarity Measure
32752, Function to Calculate Missing Values
43331, Ensembling Predictions
20089, Periodicity
7388, After inspecting all the unmatched passengers from the Kaggle dataset the following manipulations should be performed to correct the mistakes
10690, Missing Value
31950, Train and predict
41311, Drop columns with missing values
19547, Generalization
11559, now use the reduced dataset to train a bunch of models cross validate and evaluate them and pick the best performing model
8693, NUMERIC FEATURES
29579, We calculate the mean and standard deviation of our data so we can normalize it
12776, Encode target labels with value between 0 and
4775, k Nearest Neighbour
14532, We can also drop the columns SibSp and Parch as we have derived a column Family size out of them
28103, Hist Plot
849, GaussianProcessRegressor
9882, Correlation Between Pclass Age Survived
31525, And our ROC curve score is
38587, We have to do the same thing for the test dataset
4349, The distribution of LotArea feature is highly skewed
24176, Time to actually start fitting a model
14818, Fill Missing Age Feature
20399, Loop over folds check performance for each
26355, Cabin and ticket graphs are very messy
21801, Model 2 Remove duplicated and constant columns
38830, Train the model on GPU
23753, Importing a dataset to analyze the temperature trends
10090, Scaling
22862, Suppose our data lies on a circular manifold in a 2 D structure like in the image below
32340, We had an understanding of important variables from the univariate analysis
17536, Determine boarding side by checking whether the Ticket number is odd
12380, In order to view the detailed plot of any feature, just replace x with the column of choice in line 2 of the box below
42310, Here we implement the log loss function for a given training and validation set
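A minimal sketch of binary log loss, assuming labels in {0, 1} and predicted probabilities:
```python
import numpy as np

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary cross-entropy averaged over samples."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.1, 0.8, 0.6])
print(log_loss(y_true, y_pred))  # ~0.236
```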
2341, Ensemble Methods for Decision Trees
13840, Target Distribution
35343, Generating a csv file to make predictions onto Kaggle
6576, Embarked
33483, Compute lags and trends
4045, Extract Title from Name
3388, Random Forest
3582, Filling a missing value with simply the mode risks bias
39245, Visualization
33453, Implementation of t SNE
39416, replace the NaN values in Age with the mean value
41947, You can view the animated training video here
39859, Inference
40387, Our next step is to decode the image using our functions defined earlier
32750, previous application
36850, get best score for GridSearchCV
4308, Missing data
39818, so we are almost done, but to process the image we need to tell our model that our images are grayscale, and we do it by adding a 1 to the shape of our dataset
15253, Submission
14763, Nearly all variables are significant at the 0
36798, Information Extraction
34525, To use interesting values we assign them to the variable and then specify the where primitives in the dfs call
16061, we can say that fare is correlated with Passenger class
29049, Gamma Correction
28343, Analysis based on WEEKDAY APPR PROCESS START
17018, From the graph it is visible that there is some pattern in probability of survival based on ticket type
16980, Support vector machine
19839, Equal Width Discretisation
13537, Gradient Boosting Classifier with HyperOpt tuning
31398, fit our model
7812, Finalize Blender and Predict test dataset
1546, compare the accuracies of each model
1912, LotFrontage
28393, Split the dataset raw features
39110, Embarked
33294, Gender Mapper
6241, See: the mortality rate percentage for solo females in Pclass 3 is much lower than the mortality rate percentage for non-solo females
3683, Data Cleansing
3973, Stacking: at this point we basically trained and predicted each model, so we can combine their predictions into a final predictions variable for submission
37788, One hot encoding: converting the label to one-hot encoded 10 categories, each for one number
11880, Submission
34253, The following parameters are provided to the net
16115, Logistic Regression
20226, Pclass
6648, Computation of Fare Range
11206, Stacking Averaged models Score
38768, Model Stacking
11933, Submission
2783, EDA
5413, Basically, except for people with a name length between 19 and 20, all survived; name length and Pclass have a positive correlation with survival
24381, Imports
10281, XGBoost
5006, Categorical Features
8899, XGBoost
13580, we might have information enough to think about the model structure
26790, Adding the cluster variable
32652, Numerical features with missing values are identified
19430, Check column dtypes
6137, Finally, time for the most important area in the entire house
41073, Contractions
19776, Defining the input shape
19662, That only gives us 884 features
14165, Here we deal with the null values in the Cabin column
10908, Check the Feature Importances returned by the Random Forest
12687, Pclass
762, Obtain the Hidden Representation
19264, CNN for Time Series Forecasting
42406, Baseline with Tree Ensemble
24439, Random Forest
8460, Create Degree 3 Polynomials Features
31023, Email
29168, Exterior1st Fill with most frequent class
14392, I am going to replace Unknown values in the AgeGroup feature of both train and test set
16574, Find out missing data
15478, We create a new feature IsAlone for passengers with FamilySize of 1
4329, Marital status could not be known from just their names
31418, Set Up a Logging Hook
31784, Selecting the best classic model for this dataset
7815, Combine train and test data for pre-processing
7418, I am not using LabelEncoder here because for most categorical variables their values are not in order
21351, Creating a Training Set Data
1413, FamilySize number of family members people travelling alone have a value of 1
7294, Gaussian NB
4812, information about the dataset
42104, Compiling the model
21215, split to training and validation sets
21568, Interactive plots out of the box in pandas
28017, CLASSIFICATION
40389, Data Augmentation as always is helpful in training Neural Networks
30694, We will sort the models' returns in descending order, comparing their gains
17708, Here is some basic preprocessing to get fast training and test datasets
33726, EXPLORATORY DATA ANALYSIS
43195, LightGBM Model
30384, Processed test set
2366, Output Submission Matrix for Experimental Stacking
10651, We have reached the point where there are no more columns which need preprocessing
17414, Second level learning model via XGBoost
9019, Since there are only 2 rows with null values in the GarageFinish, GarageQual, GarageCond, GarageArea and GarageCars variables in the testing data, I'll just set them to the average values, since there are so few rows with nulls anyway
19635, Phone brand and model data
43360, Model
5207, let's fit the model using Random Forest, ExtraTreesRegressor and support vector regressor ensemble learning, with the lasso regressor as meta-regressor
1383, Titanic survivors prediction
29229, Elastic Net is doing much better but still with quite large error rate
2156, First thoughts
37006, Most important Departments by number of products
30196, let's run MICE
9523, Evaluation
38629, Define MLP Regression Model and compile it against MAE loss
35899, I hope that this was intuitive enough. Now let's compile our model; we'll use the popular Adam optimizer, a cross-entropy loss function and an accuracy metric
17957, MLP Classifier
1943, Relationship with GrLivArea
13057, plot feature importance by various algorithms
35864, Make predictions
15998, We take the best model for predicting
14817, Embarked sex Fare Survived
43016, Gradient Boosted Tree Classifier
28568, BsmtFinType1
32020, Fill NaN values and apply one hot encoding
28028, SAVING THE MODEL
25906, Building text model
35096, promoted content csv
29764, Validation accuracy and loss
10545, Lasso
8654, The Age column contains numbers ranging from to. If you look at Kaggle's data page, it informs us that Age is fractional if the passenger is less than one. The other thing to note here is that there are values in this column, fewer than the rows we discovered that the train data set had earlier in this mission, which indicates we have some missing values
1406, check the distribution of the cabins in individual passenger classes
2756, Handling the Missing Values
5188, In pattern recognition, the k-Nearest Neighbors algorithm (k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). Reference: Wikipedia, nearest neighbors algorithm
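A minimal scikit-learn sketch on the iris data (a stand-in for the actual features):
```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each test sample gets the majority class among its k=5 nearest neighbours.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("test accuracy:", knn.score(X_test, y_test))
```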
38130, Parent Sibling that survived were
2689, We conclude that
24402, check now the distribution of the mean value per row in the train dataset grouped by value of target
28930, Read Write Locker Help
34455, CA 4
40798, Missing Value Imputation
19595, sub cate id
36916, Survived is the target variable we are trying to predict 0 or 1
21915, Transforming distribution of SalePrice to a normal distribution
14893, Visualization of Features
17906, Filling in null values with means of their respective columns
20857, Sample
18107, As mentioned earlier we use custom thresholds to optimize our score
17028, For all family names in the test and train set that are not in the overlap family dictionary we keep the family survival rate as the train dataset mean survival rate, and for those families that are in the dataset we set the family survival rate as the one we calculated in the overlap family dictionary
39983, SalePrice vs OverallQual
41091, I trained for all folds offline and selected the models from best folds to make predictions on test set
42353, Tokenize
31232, Features with max value between 10 20 and min values less than 20
34431, Visualizing the embeddings
279, Fare
23987, Storing the SalePrice column separately, as it is the Y label (target) that our model will learn to predict; not to be stored in X or features
38753, The y test is not provided in this dataset
8495, SHAP Values
22529, Replacing object value with an int value
20410, Number of occurrences of each question
7127, Ticket
21165, Reshape
41956, Tokenization is the first step in NLP
15583, 935 is way better than I expected
5985, Submit test predictions
31004, We must specify which data format convention Keras follows using the following line of code
35642, Data glimpse
20382, Remove unwanted words
24016, Evaluation
22937, analyze the SibSp and Parch data before we move on
3333, As we now have 2 Age features, one original with NaN and a second with imputed values, we drop the NaN column
12957, It can be said that missing values of Embarked variable can be filled by Cherbourg C
4096, There are a few records in which Fare is zero
13538, Comparison of 4 models including 3 new models
19064, Specify a test patient
10241, Go to Contents Menu
9204, Everyone who is travelling alone according to the Family feature will be set to 1 for the Number of Familymembers feature
34767, Pre Trained Model Link
11727, Gradient Boosting Classifier
29099, Only 1
13399, The accuracy score of top performing algorithms in descending order is given below
17955, Gaussian Naive Bayes
28915, Create features
8346, Average Age is 29 years and ticket price is 32
41330, Besides compressing data and making simple algorithms more effective on high dimensional data t SNE can also be used to create intuitive and beautiful visualizations of data
6826, Randomly Missing Data
27416, Tuning hidden layer size
18415, Preprocessed Datasets
13807, Completing a numerical continuous feature
2025, LightGBM
17662, visualize missing data
4359, BsmtFinSF1 Type 1 finished square feet
5920, I now want to drop similar columns by looking at the data, but I am unable to find an intersection between columns
10157, Line Plots
14733, Plot Confusion Matrix
42138, let's give the parameters that are going to be used by our NN
30001, Building the Model
41214, Just a note: we can get the same AUC score of by doing a sum on the log-transformed data before applying exp to it. We should have a clear understanding of what we have done: first a log transformation, and a sum in log space means a product in the original form
17685, SibSp Parch PLOTS W R T SURVIVED
16215, Reading the data
20912, Load best model
38564, Linear Regression
37913, Defining model
36999, Days of Orders in a week
16891, Cabin vs Survival
20487, Go to top
42374, Combine
42641, Predicting the ConfirmCases From Fatalities
15866, Validation set
18990, Build a vocabulary of token identifiers and prepare word embedding matrix
5013, Bonus Plots
15286, Creating categories based on Family size
14178, According to the Feature Selection graphs for random forest features of Sex and Title had the greatest influence
39167, How to use a predefined architecture from PyTorch
42328, train test split function
8042, Similarly, for the test data we perform the same action
20574, First we drop unnecessary columns because they do not contribute to final output
22116, Some features still look suspicious
10886, Correction in the data
26327, Concatenate all features
11256, The initial learners will now be specified and trained on the training data
4999, Looks like a good chunk of houses are in North Ames Collect Creek and Old Town with few houses in Bluestem Northpark Villa and Veenker
22060, Quick explanation
29450, Right now our glove dict vector is a dictionary with the keys being the words and the values being the embedding vectors
12768, everything is okay with the dataset, so I predict the output values for submission
13839, Analyze by pivoting features
18898, we use our best model to create output set Please note that here we are working with train X etc
31794, we define 3 shared head layers exactly the same types as used in original kernel
38675, Images Per Patient
23957, Let's Merge Train And Properties To Facilitate EDA
27652, Reshape
21318, Distribution of the target variable logerror
19583, Merge Shops Items Cats features
24122, Evaluating the Model
8717, Mean Substitution
11346, Selecting Features for Final Model
30769, Comparing meta learners
16443, Cabin
6390, Shape of Data
2450, The first 30 PCA components explain 75% of the variance
9236, Checking out the data
33310, Here we pruned half of the features we have; what is left are the more important features like
26839, How is ride count based on Month
26645, We have a problem here
6898, Some additional statistics for both data sets Average and Standard Deviation
20896, To generate new images, the existing images are rotated, shifted, zoomed and stretched at a certain angle. The generator supplies the model with newly generated images in real time. The steps_per_epoch parameter in fit_generator defines how often this should happen per epoch. The characteristics of the desired random changes for each image are defined here
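A minimal Keras sketch of such a generator; x_train, y_train and the batch size are assumptions:
```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# The characteristics of the random changes per image are configured here.
datagen = ImageDataGenerator(
    rotation_range=10,       # rotate up to 10 degrees
    width_shift_range=0.1,   # shift horizontally up to 10%
    height_shift_range=0.1,  # shift vertically up to 10%
    zoom_range=0.1,          # zoom in or out up to 10%
)

# Assuming x_train has shape (n, 28, 28, 1) and y_train is one-hot encoded,
# the generator feeds freshly augmented batches to the model in real time:
# model.fit(datagen.flow(x_train, y_train, batch_size=64), epochs=10)
```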
11423, Find
21925, Submit blended predictions according to various trial and error weightages
21097, Get an idea about the data structure
38579, Loading The Dataset
12053, Stepwise Regression
4780, Model Evaluation
18367, Checking Multicollinearity
26975, Save Model
19840, How does survival correlate with Age
18434, Training of the neural network
19193, Predicting using the Randomforest Regressor
11158, Alternate Method to calculate missing Zoning values neighborhood should be zoned the same most of the time
28962, Numerical variables Types
27265, Calculate conditional probability
26791, A tiny bit better, but let's check with more clusters
3181, Random Forest Cpu Sklearn
9816, Sex
21245, The Discriminator model
8740, Prepare Data for Training
14674, Gradient Boosting
34976, Data
38231, Feature Scaling
19302, Data Visualization
30260, Comparing with dumb classifier
37304, Term frequency
32215, Add average item price on to matrix df
39979, Our main focus is target variable which is SalePrice
11934, Having a look at the columns
7231, Converting Categorical variables into Numeric
41721, We now create the GAN where we combine the Generator and Discriminator
9818, Pclass: Passenger's class
23297, Data Cleaning
20095, Trend and seasonality look similar to the ones which we got by the traditional method
10172, HeatMap
13407, Classification Report
38636, And our second convolutional layer with 15 filters
34387, Months
12242, Classes
2898, Scaling
9281, Run descriptive statistics of object and numerical datatypes and finally transform datatypes accordingly
43160, Running the model
6651, Categorize Embarked to numeric variable
37709, One Hot encoding of labels span
6446, Label Encoding
40121, First I create an event sequence object converting the state object to event sequence
42779, Dataset looks fairly balanced
3966, Ridge Regression
9192, Cabin
37309, ML Modelling
14881, Categorical Variable
41345, Numerical Features
9834, Building Machine Learning Models
17341, XGB
42761, Pclass
34033, First cross validation of ridge model with alpha 1 and k 10
26419, The analysis reveals some more titles some of which are noble titles or indicate a higher social standing
17345, Random Forest
13491, Age
30872, The fit method returns a History object containing the training parameters, the list of epochs it went through and, most importantly, a dictionary containing the loss and extra metrics it measured at the end of each epoch on the training set and on the validation set
34526, One of the features is MEAN
21615, Named aggregation with multiple columns, passing tuples (new in pandas)
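A minimal sketch with a made-up DataFrame; each output column is named via a (column, function) tuple:
```python
import pandas as pd

df = pd.DataFrame({"shop": ["A", "A", "B"], "sales": [10, 20, 5]})

summary = df.groupby("shop").agg(
    total_sales=("sales", "sum"),   # output name = (input column, aggfunc)
    avg_sales=("sales", "mean"),
)
print(summary)
```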
19659, DFS with Default Primitives
11432, For YrSold, SalePrice, KitchenAbvGr, BsmtUnfSF, BsmtFullBath, Id, Fireplace, HalfBath it is hard to say there is a bivariate normal distribution
13497, Is Alone
9710, Elastic Net
42188, We can check it by printing the vector returned by the method
12347, BsmtCond Evaluates the general condition of the basement
23814, explore the latitude and longitude variable to begin with
21035, Insincere Questions Topic Modeling
10824, 0 means males
31220, Plot the feature densities comparing response 1 targets with response 0
24712, Show some images and their model reconstructions
7919, Transform dataset to only numeric values
39824, we have come a long way, congrats, we have almost completed our model
34539, Classifying the dataset into duplicate and non duplicate to explore the data
29217, Splitting The Data into Train and Test set
2191, Here OverallQual is highly correlated with the target feature SalePrice, at 82%
26085, I made a function to visualize the output
11478, Alley
13212, prune our tree during the training phase using the Gini criterion and max depth 3
23887, take the variables with high correlation values and then do some analysis on them
24858, We can first check whether one of the outputs is globally harder to predict than the other
43123, Random split of traning and test data
26036, load 25 images
31094, BsmtFullBath
27965, evaluation
34473, Make the submission
31005, There are two csv files that contain the data for the training set and the test set; when combined they form the MNIST dataset of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images
5653, Create Age Group category
2189, Correlation between train attributes
11635, Naive Bayes
41948, Losses and Optimizers
43338, split our dataset in Features and Target
37811, Quick Peak into the data
40922, Predict
31581, Continuous
27047, Visualising Images JPEG
42066, Separate key columns only for machine learning
6197, Submission 3 for SVC without Hyperparameter Optimisation
33480, Fit SIR parameters to real data
6102, Mean price is around 180k USD, the most expensive house is 775k USD and the cheapest is only 34.9k USD
30418, Define roBERTa base model
11010, Some of the features in the dataset may be clinging on for no good reason
41086, BERT expects three kinds of input input ids segment ids and input mask
26004, PHASE 1 BUSINESS UNDERSTANDING
42544, The distributions for normalized word share have some overlap on the far right hand side meaning there are quite a lot of questions with high word similarity but are both duplicates and non duplicates
10683, NO MISSING RATIO
18424, CV score of the basic average
4424, We can note that there are two outliers that can be really bad for the model as they have very large GrLivArea and low SalePrice
8097, Embarked vs Survival
10560, Handle Missing Data for continuous data
37551, Bonus Part Digit Recognizer
18226, Data Cleaning
13961, Visualization of Survived Target column
36118, Let's plot some columns to find ones with low variance so that we can delete them
7333, Normalized the fare
12606, We can say that female passengers have a higher probability of survival than male passengers
34533, We now do the same operation applied to the test set
36226, import our dataset: we read images one by one from the train dir and store them in the train images array; each image is read in grayscale and will be resized to 50x50
33245, Predicting and creating a submission file
27125, Electrical
6623, Creating Submission
38933, Applying Random Forest
18522, I used the Keras Sequential API where you have just to add one layer at a time starting from the input
30691, This happens because the h2o function
489, we'll extract titles from the Name feature
33685, Days left in year
28511, The categorical columns have following characteristics
12657, Fitting and Tuning an Algorithm
42346, I also tried Random Forest Regressor locally. With the number of estimators set to, it produced a prediction with MAE of. However, the running time is incredibly long and boring. For Gradient Boosting Regressor the accuracy is, with the learning rate set to
21076, Testing Data Is In The House
28321, identifying the missing values
39687, Remove Non-ASCII
21748, Splitting the data into train validation and test set
26077, Data Preparing
9341, Split training data into input X and output Y
30460, Preprocessing with nltk
637, Shared ticket
31911, Importing Python Modules
38958, Augmentations
5014, Here we're looking at sale price by neighborhood and color coding by zoning classification of the sale
19464, introspect a few correctly and wrongly classified images to get a better understanding of where the model fails, and hopefully take corrective measures to increase its accuracy
7684, We gather all the outliers index positions and drop them from the target dataset
25822, In this competition the train and test set are from different time periods and so let us use the last 1 year as validation set for building our models and rest as model development set
20689, MLP for Multiclass Classification
14440, Distribution Plots for FarePerPerson
12131, Splitting train data into train test 1
10265, Exploring categorical features
36859, get indexes of the first 10 occurrences for each number
7884, finally I explore new features for example a measure of Age x Class would give better insight of the survival rate
21197, Linear Activation backward
12045, The CentralAir variable needs transformation to a binary variable
38314, Few Examples after conversion
39272, Format and export data
7606, for categorical features
34226, We don't require as much for the labels as all of them are simply a string of wheat
36363, XGBRegressor Model
2975, Handling Missing Values
33990, Scatterplots for continuous predictors
9793, Imputation with MICE
1521, And don't forget the non-numerical features
34609, Extract test features using CNN model
23298, Missing Null Values
890, another decision tree with different parameters for max features max depth and min sample split
9941, Creating the category of the age section
5947, predict the test dataset
2877, if we compare our drop list and feature importance we find that the features life sq and kitch sq are common
31412, From the top 9 losses we can try the following
8309, Neighborhood vs mean SalePrice on houses with three bathrooms
23227, This preprocessing helps in getting a better accuracy for the model
481, Remove outliers
10893, this plot gives us some useful information such as children are more likely to survive followed by young and old
42117, The following LightGBM model is based on Peter's
5036, examine our target variable SalePrice and at the same time derive information from many corresponding features
11020, Dropping the Age Band feature
29089, sequence length weights
8706, LINEAR REGRESSION
3386, Logistic Regression Model
29177, Dealing with the Dependent Variable SalePrice
9342, Simple Network using Keras
33041, Gradient Boosted Model
18513, so it looks like my theory that Slice Position refers to the z position of the scan was correct
7461, Dropping Unnecessary columns
32246, the training data and testing data are both labeled datasets
1719, Data Types are Important for EDA
2637, In total, 335 people survived and 547 people died on the Titanic
26468, For our simple CNN we use 3 convolutional layers with all having the same filter size of
12706, Embarked
4084, analyse this to understand how to handle the missing data
33303, Cross Validate Models
24131, From the unique words data frame we can say that some words are repeated a lot and some very rarely; we only keep words which occur 20 times or more to keep the dimension of the Bag of Words model reasonable
40745, Apply a multiplier
4121, split the data set into train and test, which is now ready for preprocessing
32254, Data
25176, WordCloud
29798, Validate dimension of our word vector
24348, Split the train and the validation set for the fitting
17852, check the accuracy for the validation set
4989, Correlation of features
10518, Generate CSV file based on DecisionTree Classifier
34407, Predicting
25280, Prepare Data
27306, Get train and test data
2369, Logistic Regression
8725, Mean Sale Price by Neighborhood
32058, Forward Feature Selection
3404, Label Encoding
27903, Exploratory Data Analysis
7604, Preprocessing pipeline
42636, Number of tests per day by country
24682, let's define the SqueezeExcitation module
21508, Plot random images from the training set
19920, Here is a plot of 10 largest positive and negative coefficients for each gender age group
8508, Discrete Categorical Features Bivariate Analysis
23737, Replacing missing values in Age column with the median of Title group
12974, Name Title
23069, Helper functions
37672, Reading the data
42758, Train the network
15090, LightGBM
28728, Shops and item categories of the most expensive trades
33885, Bureau: loading, converting to numeric, dropping
16740, sex
37510, Customers in train test and by time
7881, In a first model I check how age correlates with the chance of survival, also in relation to the passenger Class
40378, Splitting the dataset according to GroupKFold
289, Cabin
27930, Shuffle: to make sure the model doesn't pick up anything from the order of the rows in the dataset, the top 100 rows of data will be shuffled per training step
17651, Adaboost
29105, GRU
25821, Price of the house could also be affected by the availability of other houses at the same time period
127, Accuracy
30886, Keep going
32961, There are some features with correlation values between and. We need to remove one feature from each such highly correlated feature pair
34633, We are going to fill the rows where wind speed is equal to zero
36884, SVM
1699, The plot appears blank wherever there are missing values
23309, Data loading
32755, Monthly Credit Data
16568, Transforming the data
40017, Insights
29784, Create and compile model
1517, Some of the features of our dataset are categorical
9772, Machine Learning
14219, The chart confirms that a person aboard with more than 2 parents or children was more likely to survive
29371, ROC curves typically feature true positive rate on the Y axis and false positive rate on the X axis
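A minimal scikit-learn sketch with made-up labels and scores; each (FPR, TPR) point corresponds to one decision threshold:
```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])  # predicted probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # X axis: FPR, Y axis: TPR
print("AUC:", roc_auc_score(y_true, y_score))
```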
30947, Predict the label and make submission file
31422, License
20031, Show feature importances
21028, After the installation which takes some time reload this browser page
35472, Visualize the skin cancer cafe au lait macule
37296, Removing Stopwords
5099, Collinearity means high intercorrelations among independent features
563, submission for ExtraTreesClassifier
8141, These are all the numerical features in our data
34228, And now we can add the rest of the data
38107, you can check if your model is trained well
40663, Calculating Jaccard similarity using NLTK Library
25851, Cleaning Text
31715, Tuning XGBoost
28647, PoolQC
37475, Stop words are words that appear very often in sentences but do not necessarily make up the meaning of the sentence
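A minimal NLTK sketch of removing them from a made-up sentence:
```python
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)
stop_words = set(stopwords.words("english"))

sentence = "this is a sentence about the meaning of stop words"
filtered = [w for w in sentence.split() if w not in stop_words]
print(filtered)  # ['sentence', 'meaning', 'stop', 'words']
```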
14961, Add family size column
16474, Let's concatenate both the data frames for Exploratory Data Analysis
13055, Gradient Boosting
19445, As it turns out, it does appear to be the case that the optimizer plays a crucial part in the validation score. In particular, the model which relies on Adam as its optimizer tends to perform better on average. Going forward we use Adam as our optimizer of choice
41252, CAN WE USE ALGORITHMS TO PREDICT THE AGE AND DECK OF THE MISSING VALUES
1796, SalePrice vs GarageArea
36659, There are also examples of compound operations in morphology
35381, Modeling
14883, Cabin
15128, FamilySize SibSp Parch
17731, Deal with categorical values
38725, The fashion MNIST database is more complex than the standard digit MNIST database
4816, Fitting the pipeline
32648, Variables starting with numbers are renamed properly for further calling as dataframe elements
17795, Identify families by surname
10664, Clustering
40050, As we are using a hold-out dataset to simulate what happens when there is an image group in test that is missed in train, we need to select the proper indices
4334, Cabin
40658, finally we want a single real valued metric for comparing our models and implementations
42040, Grouping with bins
18842, Linear Discriminant Analysis LDA
25299, This does not make ANY sense
38990, Training for neutral sentiment
4337, First class
2271, Embarked
15279, Survival based on Fare Embarked and Pclass
29865, get_pixels_hu function
15552, Random Forest
15048, FamilySize
14170, It is reasonable to remove the Cabin column now that we have extracted two new features from it Deck and Room
42762, Sex Correlation
31069, DATA SNEAK PEAK
38422, Train and valid accuracy are roughly at the same low point
27343, item id
39013, Print the death by sex
9251, Predictive Power Score
5583, The next step is deleting columns that will not be used in our models
6513, Discrete Variables
3369, Initialize the clients and move your data to GCS
21324, Data Pre Processing
19049, For some more EDA we need to convert the categorical features in the dataframe into numeric values
29, AdaBoost
5196, check if any of the columns contains NaNs or Nulls so that we can fill those values if they are insignificant or drop them
18748, We can also
14360, Treemap for the distribution of Survived and Non Survived Passengers with their Classes and Genders
8939, Prepare Submission
16253, Submit
1681, Analysis of a categorical feature
25721, Prediction
17473, XGBoost
14459, Picking the algorithm with the best Cross Validation Score
1795, SalePrice vs GrLivArea
36387, restart the process with a custom label encoding
3289, First, a k-fold target encoder class will be created for the train set
37942, Basically the picture is the same as for the confirmed cases
5403, make sure there is no more NA value
25838, Cleaning text
16454, From dataset description it was clear that Fare values are not skewed
7340, Logistic Regression
37501, SalePrice has strong correlations with GrLivArea, GarageArea, 1stFlrSF, TotalBsmtSF
38024, what are the sincere topics which the network strongly believes to be insincere
30943, Bedrooms Vs Bathrooms Vs Interest Level
41237, Building vectors
13679, Fare check fare next because it is closely correlated to Pclass
16885, Alone
12008, SVR with rbf kernel
19320, Model Architecture
28457, The remaining two columns are random in nature in terms of their values propertycountylandusecode and propertyzoningdesc
27460, Replace Elongated Words
978, AUC Curve
37010, Best Selling Departments number of Orders
21104, Data preprocessing: identify features. This means selecting only needed features and creating the proper dataset for the processing
42824, Engine
4438, Predict and save
20049, Some shops have the same name; let's check again based on the opening and closing dates of the store, and check on the test data
14993, To check correlation
1385, we need to choose a loss function and an optimizer
36678, ngram_range: tuple (min_n, max_n), default=(1, 1)
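A minimal sketch of how the parameter changes the vocabulary, using scikit-learn's CountVectorizer:
```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the quick brown fox"]

# ngram_range=(1, 2) keeps both unigrams and adjacent word pairs as tokens.
vec = CountVectorizer(ngram_range=(1, 2))
vec.fit(docs)
print(vec.get_feature_names_out())
# ['brown' 'brown fox' 'fox' 'quick' 'quick brown' 'the' 'the quick']
```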
14262, Split data for validation
40435, Scaling Minmax scaler
1040, Feature engineering
28150, Let's plot the network
9174, FullBath
21358, Testing Model on the Test Data
43152, Model Parameters
17434, C 0 best
26180, MS Zoning
9613, Strip Plot
7425, It is clear that the first 100 components contain nearly 100% of the variance
2187, Submit predictions
26529, Get Optimal Threshold
28223, Before we continue with implementing a deep convolutional autoencoder, let's free up some memory within our RAM
38848, Attached Car parking is high followed by Separate Garage
14310, preparing the test set for prediction
9089, How do these categories affect price
14755, Exploration of Age
6490, It is worth noting that there are only 7 different pool areas
21340, Implement XGBoost
17810, We start with a simple model with just few predictors
6519, Categorical Variables
14260, Dropping UnNeeded Features
15709, Fare Categories per Passenger Class vs Survived
29154, GarageArea GarageYrBlt GarageCars Fill with 0s
25016, Please note that when you apply this to save the new spacing Due to rounding this may be slightly off from the desired spacing
25640, Prediction
18015, Survived by gender
1154, Regression Evaluation Metrics
40053, At the moment I'm using this, as I found it difficult to train without more positive cases
38852, Imputation is done based on numerical and categorical features
73, Missing values in test set
8020, Pclass
20312, A first analysis of the clusters confirms the initial hypothesis that
42375, LSTM
12678, Model 2
20620, Support Vector Machine Classifier
22111, Preparing for sklearn
3569, GarageCars 4 is very strange
6214, K Nearest Neighbors
6782, Name
21514, Text Preprocessing
37529, 1st class passengers are older than 2nd, and 2nd class passengers are older than 3rd
28507, Submission for predicted model
31721, Train ANATOMY Variable
8129, There is a strong negative correlation between the selling price and Average quality of the material on the exterior Kitchen quality and the height of the basement
1722, The code below is useful to understand if you want to plot a barplot using Seaborn package
39083, Preparing data for Pytorch
15714, Family Members by Gender vs Survived
30132, We need a that points to the dataset
23271, Embarked
36201, Training and Validation
11716, K Nearest Neighbors
18257, Model loss graph
3032, All of the models individually achieved scores between 0
42412, Create submission
5049, We are now examining the relation of our target variable to other interesting correlated features and start with several numerical features of size
26666, credit card balance
29911, Distribution of Scores
27391, Tuning min data in leaf
12639, we just have to fill in the missing Age values
16824, Looking at Correlations
39721, given positive and negative words, we find the top 2 words that are similar to the positive words and opposite to the negative words
15348, Split data
12060, Non Ordinal Variables
41125, Standard Deviation of Absolute Logerror
12951, Categorical variable analysis
26251, The Model
10638, Finding Social Standing in the Title
15929, Fare
4702, Cross validate models
29324, probs = probs[:, 1]
2213, Predictions
19782, Setting the number of epochs
33308, Decision Trees
20527, Data Cleaning
42170, Check the values loaded
34731, Importing dataframes
24506, Creating imbalanced dataset
23940, Item Description Lengths
9227, Neural Network based Model and Prediction
19562, have a look at our config
42678, Improve the performance
15969, As we supposed, babies' survival rate is high, and surprisingly it's almost the same for female and male babies
32708, Creating an Embedding layer
5042, back to sale price
13755, Graph Distribution of Sex Variable
18062, Transforming the dataset
17666, Here we construct two functions
740, Removing outliers in sample
35387, Visualizing Test Set
16057, you get more confident that people who are around age 30 have more chance to survive
20547, Deep Neural Network with Dropout
26938, Inspect your predictions and actual values from validation data
14107, Heatmap
4317, Fare does not conclusively say if the ones who paid more were more likely to survive, although there are outliers that also need to be considered. Maybe looking at a slightly higher level, class, could help
28018, Multinomial NB
34542, Checking the word length of each sentence in both question columns. This helps us decide the sequence length during model training
9394, Domain Space
18901, Overall survival states
6026, You can change the null ratio in the parameters section
13511, Submission
11180, Getting the new train and test sets
24531, Sort the values in descending order to check which age groups contribute the most to the total number of products
40442, Plotting the accuracy of the training model
34385, Weekend Weekdays
5200, NOTE
21243, DC GAN with Fashion MNIST
20744, GarageArea column
42555, Prepare submission
42970, Creating Submission File
8948, Fixing Kitchen
40863, Model Building Evaluation
37025, Top 10 brands by number of products
39090, Submission
29215, Concat Categorical and numerical features
11374, Gradient Boosting Regressor
10903, Grid Search for Decision Trees
15983, We can replace the two missing values with the mode, that is, Southampton, and check if there are more missing values
29729, As data is loaded let s create the black box function for LightGBM to find parameters
3305, Missing value filling
8942, Fixing Alley
8681, NUMERIC FEATURES
13366, Missing values
9085, I also wonder if price is affected by whether the row was set to have 1 or 2 conditions
6504, Dataset for House Price Prediction is from below URL
8707, LASSO and tuning with GridSearchCV
43096, Submission
22277, Exploratory Analysis
6621, Gradient Boosting Regressor
8041, Name and Ticket Number are not important features for prediction
14610, Model comparison
25808, Create Model
28517, 1stFlrSF
20976, Pipeline
31729, Test Dataset Overview
7500, continue analyzing other columns that may be correlated to SalePrice
8802, Lets first impute the missing values in rows
22123, The Stacked model is slightly better when compared to the three base models
34156, Great we have the total sales for each month
16303, Finding relations between features and survival
42733, After looking at these 56 plots I found some combinations in which the distribution for repay and not-repay is a bit different
37570, Binary Variables
35136, Is there a surge of customers during SchoolHolidays
14160, From the Name feature we can extract other important features, such as the family name, to identify members of the same family
2977, Merging the training and testing dataset
33238, Write a classifier to predict two classes
32011, In the Cabin column the number of null values is very high: there are only 204 non-null values in 891 rows. We can drop the Cabin column from both train_X and test_X
35065, Solution 4 1 Convolutional layers with 16 feature maps
32698, Instead of single words, let's use bigrams and trigrams as tokens
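A small sketch of what that looks like with scikit-learn's CountVectorizer; the toy documents are made up:

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat", "the dog sat on the log"]
# ngram_range=(2, 3) tokenizes into bigrams and trigrams instead of unigrams
vec = CountVectorizer(ngram_range=(2, 3))
X = vec.fit_transform(docs)
print(vec.get_feature_names_out()[:5])
```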
21899, Execute
24809, We remove columns that are mostly 0; for non-categorical variables this is not helpful to our prediction
35874, We have an accuracy of 98%
11009, Heat map for correlation between features
42276, Drop features photos column
30341, Here we run iterations using different learning rates
6869, Final View
17893, I believe people with higher number of family members have better odds of Survival
15654, Logistic Regression
28124, Exploring the Keyword Column
16146, Title map
2728, GridSearchCV
41718, Get test set predictions and Create submission
42548, Parallel Coordinates
13246, Y value
21145, Even though the graph is huge this is not really informative
11177, Edit pca
30387, Example predictions on our validation set
30457, Starting with the original training data randomly sample rows to use as the source material for generating new fake tweets
11395, Since Mr
3827, survived column is missing in test df
12501, Tuning max depth and min child weight
2941, Dropping features with more than 50% missing values
16155, Modelling
18691, let s run fastai s learning rate finder
38954, Seed Everything
13592, Train Test Split
5088, The pure linear models Ridge Lasso and ElasticNet are highly correlated
23106, Findings: PassengerId is a unique identity number (a positive integer) assigned to each passenger
22500, Create variables with initializers along with their shapes, and make your logits with the equation
4921, Creating Dummy Columns for Label Encoding the Categorical Features
32986, Random forest regression
23412, Randomly take 20 of data for validation
37791, Start Training
1205, Gridsearching gave me optimal C and gamma for SVR
20398, Models evaluation
28660, LotShape
9420, Using matplotlib Libary
38927, Submissions
41531, what have I learnt Well the height is very similar for each digit
12457, Data conversion
28240, XGBoost
14625, Obviously our loss decreases faster; consequently we may reduce the iteration steps and/or increase the learning rate
19382, PoolQC has 1453 missing values, as most houses do not have a pool. The PoolArea associated with null values in PoolQC is 0, so this is not a valuable column and can be considered for dropping
30290, Forecasting
17381, Now we know the median age of the passenger classes
28115, Training Function
30464, Creation of a pipeline with the preprocessing pipeline
20102, Item count mean by month for 1 lag
10968, The variables that express an "I don't have this feature" state should not treat the 0 as a normal category
15858, Plotting feature correlations with a heat map
815, Missing values in test data
1262, Setup models
4026, Categorical or Numerical
37615, Building the model for training and evaluation
14613, Split into train and validation data
38749, convert this into a visualization for better comprehension
42118, The following Logistic Regression is based on Premvardhan s
4558, BsmtFinSF1 BsmtFinSF2 BsmtUnfSF TotalBsmtSF BsmtFullBath and BsmtHalfBath missing values are likely zero for having no basement
10440, Univariate analysis
34910, Text preprocess for counts
22952, Reshape data
17467, Name Title
26441, we can evaluate our trained classifier on the training dataset
20238, Embarked
40973, Tip For a String use 39
27626, use hard code to speed up
22386, RandomForestRegression
11804, USING Multiple Models for final prediction
30774, Input as X y for Linear Regression
14121, Bonus Geographic Plots
22698, test
29774, Classification part
15273, Submit
20717, Utilities column
12647, Generating a Submission File
5515, Knn
2570, Report
13144, Draw a barplot to visualize this same chart without the stretch on the x-axis due to the variable Name Length
4777, Decision Tree Classifier
41171, Train samples
38516, Distribution of top Bigrams
41831, Confusion matrix for single model
15091, Model Comparison
23204, implement soft voting ensemble in mlxtend
14863, Interesting: quite a few children in 3rd class and not so many in 1st. How about we create a distribution of the ages to get a more precise picture of who the passengers were
8597, Building Categorical Pipeline
33242, step to load the encoder previously trained
1528, SibSp Feature
18069, The number of train and test images
8769, Family Size
9334, If we make dummies out of this feature we end up with 25 columns, and this can make it harder for any model to learn the data. The fancy name for this is the curse of dimensionality, and from its name we don't need much to know that we don't want it
34041, Drop the Resolution Column
1555, First we can obtain useful information about the passenger s title
6710, Explore the Categorical Features
17870, Basic summary statistics about the numerical data
8657, Instructions
39143, Explore Data
28284, TorchText data loader fields preparation
20726, RoofMatl column
14785, Median Fare paid by those who survived is higher than those who did not survive
23992, We are going to transform skewed columns
9069, If the skewness is between -0.5 and 0.5, the data are fairly symmetrical
10045, Hyperparameter tuning
478, How good a predictor is GarageArea for SalePrice
30415, Main part load train pred and blend
33991, Correlations
18951, Display time range for labels with gradient
39754, Since I have to use scaled data for the SVM, and the polynomial features worked best for LR and SVM, I'll use the scaled poly set
22959, Augmenting hair with OpenCV
92, Fare and Survived
42794, Inference
13325, Sex: converting feature to numerical values
21533, Categorical Values to Numerical values
33468, The length of competition distances increases with decile classes
12214, Using the pipeline in GridSearch
2891, fill in the missing values
7046, Style of dwelling
13743, Creating the O P file
16743, base models
31744, Crop
26641, From doc id we can derive a lot of content features for the doc which should be features D
29909, Submitting Predictions to Kaggle
29698, lets take them through one of the kernels in first maxpooling layer
23074, Fill Age
35404, Spliting into train and test again
22347, Gaussian Classifier
1162, Statistical transformation
5575, Ridge Regression
28109, Encode the Categorical features
39190, There is a large number of incidents whose associated addresses contain the term "Block". We can create one more categorical feature from this
3154, Load the data
20246, we need some helper functions to group our categories
26569, Setting up the neural net
15041, Cabin type is related to the Fare price; some cabins like C, B, E, D have a higher price than others
27224, let s dive into the next section where we try to calculate the number of units processed by each station machine and also what s the failure rate for each station
7093, We can simply classify them by Sex Pclass
15794, The median is used instead of the mean so that the value is not swayed too much in either direction
22451, Slope chart
2685, Forward feature selection for Classification
6640, The arrow is pointing towards Base Rate
37311, Undersampling the imbalanced dataset
37454, We need to use the sentiment as a feature; to do this, encode it using LabelEncoder
614, We learn
20968, Compiling our Model
31296, Define Our Transfer Learning Network Model Consisting of 2 Layers
18643, We have 38 special values
3485, The best score in the grid search
7969, Creating new features by extracting from existing ones
29809, FastText Implementatation using gensim
1034, We isolate the missing values from the rest of the dataset to have a good idea of how to treat them
14305, Creating feature Fare
36221, RandomForestRegressor
16845, Adding features to our data
2161, Assessing model performance
11692, KNN Classifier
35951, First we ll separate categorical and numerical features in our data set
41002, why we need to downsample input
35589, Vertical Shift Range
38503, It'll be better if we can get a relative percentage instead of the count
687, The numeric and categorical features are separated, since different preprocessing strategies are applied to them
33353, Timeseries autocorrelation and partial autocorrelation plots monthly sales
39407, Target Value Survived
16859, Family
5104, Observations
10726, SibSp and Parch can be combined
27505, Check for Missing Value
38291, Seperating train and test data
41619, Lets start by learning from the past and normalizing the quantity column so we can compare departments to each other on a single graph
4450, Mean square error validation
37941, Fatalities
4305, Inference
18587, As well as looking at the overall metrics it s also a good idea to look at examples of each of
21837, also try transforming our target variable since it s not normally distributed
870, SibSp and Parch
9356, Completing a numerical continuous feature
8310, Neighborhood vs mean SalePrice on houses with larger garage areas
27431, Dummy Creation
19723, For each day
15671, Random Forest
11438, Further engineering is possible, especially between ordinal and numerical variables. If you find any meaningful relationship, use it, for example TotalBsmtSF and BsmtQual
39953, I tried numbers around alpha = 0
1564, One piece of potentially useful information is the number of characters in the Ticket column
4054, Getting Family Features
39234, Import raw data
266, Model
19132, And our ensemble is
8271, Create PorchSF Feature
15712, Gender Passenger Class Embarcation point vs Survived
7869, Explore the Parch and SibSp column
27527, Display heatmap of quantitative variables with a numerical variable
3402, Most common causes of outliers on a data set
8612, We have to re-transform the predicted sale prices back to their initial state
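Assuming the target was transformed with np.log1p before training (a common choice in this competition), np.expm1 undoes it; `model` and `X_test` are placeholder names:

```python
import numpy as np

preds_log = model.predict(X_test)   # predictions on the log1p scale
final_preds = np.expm1(preds_log)   # back to the original price scale
```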
42965, I dropped it because the Fare column reduces the success values of the algorithms
9585, Matplot Lib
15320, Calculating median values of Age by using Pclass and Embarked to fill up the missing values
11719, Nu Support Vector Classification
15542, FIRST MODEL LOGISTIC REGRESSION KAGGLE SCORE
7422, Check if training data and test data have the same numeric variables
40939, Calling Folds
6145, Split the distribution into four bins: 0-800, 800-1700, 1700-2900, 2900-max
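A sketch with pd.cut using exactly those edges; the column name GrLivArea is an assumption:

```python
import pandas as pd

bins = [0, 800, 1700, 2900, df["GrLivArea"].max()]
df["AreaBin"] = pd.cut(df["GrLivArea"], bins=bins,
                       labels=["0-800", "800-1700", "1700-2900", "2900-max"],
                       include_lowest=True)
```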
39295, PRICE FEATURES
30688, Clean the Training Set
16625, Plotting Learning Curve
29802, CBOW Continuous Bag of Words
16382, Checking Feature Importance
24662, New confirmed cases prediction
19409, In essence, a transformer model is a seq2seq self-supervised task that only uses self-attention. Voilà, easy
10587, Random Forest prediction with select features
10536, Drop first five columns
37635, We use the cross entropy loss and the Adam optimizer
4984, Create a dummy df of the categorical data
42425, Life Square Vs Price Doc
17938, SibSp, Parch
4006, Compare with a similar method from the sklearn library
31910, The model is most confident that this image is a T-shirt/top, per the corresponding entry in class_names
18958, Display the distribution of a continuous variable for different groups
2331, Visualization
16030, We convert Cabin to a numerical feature
4395, Exploratory Data Analysis
3610, Additional ways to look for relationships between variables
2009, Impute Missing Data and Clean Data
21126, First we look at variables with the lowest coefficient of variation, starting with YrSold
43193, Build Simple NN model Pytorch
14455, Go to Correlation section
37998, All info about a single item
2454, Using variance threshold from sklearn
9983, Columns like PoolQC, MiscFeature, Alley and Fence have more than 80% missing values; FireplaceQu and LotFrontage also have a significant amount of missing values
20093, Model Fitting Visualization
8253, Using the IQR Rule
32100, How to get the positions where elements of two arrays match
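One way to do it, via np.where on a boolean mask:

```python
import numpy as np

a = np.array([1, 2, 3, 2, 3, 4, 3, 4, 5, 6])
b = np.array([7, 2, 10, 2, 7, 4, 9, 4, 9, 8])
print(np.where(a == b)[0])  # positions where the elements match -> [1 3 5 7]
```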
8580, Feature Engineering
32194, Remove any rows from train where item_price is negative; these could be refunds
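A one-line sketch, assuming the sales frame is named `train` with an `item_price` column:

```python
# Drop rows with negative prices (likely refunds) before aggregating sales.
train = train[train["item_price"] >= 0]
```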
36891, CNN 2
504, Drop the features PassengerId Name Age SibSp Parch FamilySize and Ticket which won t be useful in prediction now
13976, Remove unnecessary columns
28754, Loading the model
2807, missingno
19853, Discretisation using decision trees
22024, Separate that into two plots
12326, GarageCars
15690, Percentage of total survived dead passengers
42045, Divide categories more specifically
8649, Use pandas
16821, There are 148 unique values for Cabin; this is not an important field to consider
31373, Rotate image
1855, Fit and Optimise Models
34051, let s get our dataset ready for training
35228, How much impact does have
16231, We call our model and give it the input and the output columns. Here the input column will be our normalized data and the output column is what we have to predict, i.e., Survived
11641, XGBoost
10892, Although this graph states that children aged 0-10 have a higher survival to non-survival ratio, it is not helping much in getting useful information
32875, Catboost feature importance
1637, Splitting to train and validation sets
37485, Random Forest
1845, Distribution of SalePrice in Discrete Numeric Features
5738, Ensemble prediction
12838, K Nearest Neighbor
38649, Binning
14205, For XGBoost, using CV to better tune the model
15592, Number of rows and columns
22963, Using it with tf data Dataset API
24428, Converting the labels into categorical features
16838, First things first
4895, Dataset is completely ready now
120, Pre Modeling Tasks
42823, Eval Function
27191, Rebalancing the Data
16524, KNeighbourClassifier
10657, Ensembling
29877, Import libraries
262, Model
26058, We can then plot the incorrectly predicted images along with how confident they were on the actual label and how confident they were at the incorrect label
32393, Machine Learning to Neural Networks
14090, Applying Logistic Regression Model
40009, Gender counts
22972, CAM Extraction Model
7128, Outlier Treatment
32643, Regression
38205, Evaluate
3025, There are no more skewed numerical variables
37977, Confusion Matrix
26660, Plotting the loss as epochs increase
38683, Plotting Barplot with number
8579, Checking if there are any missing values left
41561, we have to add our pca data and label to plot them over labels
6786, Embarked
2736, The
22248, use train test split to split our data for validation
18031, Features importance
38894, Backward Propagation with Dropout
31383, Basic Imports
21576, Rearrange columns in a df
39404, Imports
13313, XGBoost
1113, Automagic
40089, Benchmark Models
516, Imports Functions
25786, Embarked
42751, Label encode the categorical features
9757, Title
3288, We now apply the K-Fold Target Encoding technique on the Titanic competition data by encoding the Embarked feature of the dataset
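A minimal sketch of K-fold target encoding on Embarked, assuming a Titanic-style `train` frame with a binary Survived target:

```python
import pandas as pd
from sklearn.model_selection import KFold

train["Embarked_te"] = 0.0
global_mean = train["Survived"].mean()
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fit_idx, enc_idx in kf.split(train):
    # target means come from the other folds only, to avoid leakage
    means = train.iloc[fit_idx].groupby("Embarked")["Survived"].mean()
    enc = train.iloc[enc_idx]["Embarked"].map(means).fillna(global_mean)
    train.iloc[enc_idx, train.columns.get_loc("Embarked_te")] = enc.to_numpy()
```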
31082, FEATURE IMPORTANCE
22458, Distributed dot plot
32413, Building model
10862, checking for the outliers and dropping them
1014, Great! So now we can reproduce our intro to local validation (from the tutorial for beginners, part 2) with our new features
11392, check to ensure that the missing values were filled in
11967, Features with more than 50% None values
12608, Passengers from Pclass 3 have lesser chances of Survival while passengers from Pclass 1 have higher chances of survival
32522, Processing the Predictions
42423, Let Us Understand the Relationship Between the Top Contributing Features and Price Doc
12642, Family Size Feature
37329, With the two convolution-pooling modules of the current model and the final fully connected module, a total of three modules are selected for adding dropout
3167, Lasso
39280, TARGET ENCODED FEATURES
29738, the list of all parameters probed and their corresponding target values is available via the property LGB BO
2912, Feature Extraction
36552, Feature engineering
36128, Creating CNN model
33201, Compile and Evaluate Model
14429, Verifying that PassengerIDs 760 and 797 titles were correctly updated to Mrs
8163, SalePrice is skewed and deviates from a typical Gaussian distribution
7982, Merge FullBath and HalfBath
26071, we can plot the weights in the first layer of our model
16051, First let's check how many people are in each Embarked value and passenger class
36754, Loading the training and testing files
23889, Calculated finished square feet
12795, Conclusions
22462, Population pyramid
8228, Below pair plot is created with respect to the SalePrice with other variables
2242, Combine Attribute Class
25781, Ticket, like PassengerId, does not matter to the survival of passengers
29159, Alley Fill with None
9809, Bar Chart
15955, Assumptions
33347, Total sales and the variation on secondary axis
28588, Kitchen
5927, These are Outliers
35191, Feature Importance
12518, Check the columns which have fewer unique values
4069, Applying Different Machine Learning Models
10835, and feed a list of classifiers to find the best score
19728, Observation
17042, Label Encoder
25807, Dorothy Turner and Dorothy Lopez are rocking it Poor Dorothy Martinez instead should consider moving to another industry
20296, Changing Age Age Band
25510, The Keras Embedding layer requires all individual documents to be of the same length, hence we will pad the shorter documents with 0s for now. Therefore, in the Keras Embedding layer, the input length will be equal to the length (i.e., number of words) of the document with the maximum length
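A short sketch with Keras pad_sequences; the toy sequences are made up:

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

seqs = [[4, 10, 5], [2, 7], [6, 1, 9, 3, 8]]
maxlen = max(len(s) for s in seqs)          # length of the longest document
padded = pad_sequences(seqs, maxlen=maxlen, padding="post", value=0)
print(padded)   # shorter documents are padded with 0s
```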
971, Import whole classification
16673, RANDOM FOREST
32977, Lets scale the data
3955, Create YrBuiltAndRemod Feature
24258, Extracting deck from cabin
19345, Getting test data
3432, Fare values vary greatly depending on the accomodation class encoded by the variable Pclass
33341, Exploratory Data Analysis EDA
3805, ElasticNet hybrid of Lasso and Ridge
31061, Positive lookbehind succeeds if the passed non-consuming expression matches the input immediately preceding the current position
11490, KitchenQual
33473, Italy Spain UK and Singapore
18138, The two parameters below are worth playing with
26196, Multi label models for Planet dataset
22076, check where our non empty predictions were significantly worse than simply taking text Maybe there is a pattern
15010, Now that we know that sex is an important factor, note that there were more males than females onboard
18520, Split training and validation set
14827, Pclass
2413, These are the numerical features in the data that have missing values in them
6034, Converting categorical features to numerical
24, GridSearchCV Kernel Ridge
13729, Handling Pclass
12920, Here 0 stands for not survived and 1 stands for survived
23240, Linear Regression
37466, Necessary Libraries
2890, There are certain features with dtype int that should be object or string type; convert them
10702, Simple Neural network
28604, OverallCond
21941, Python
9148, I found the standard deviation of the BsmtFinSF2 in houses with basements
1105, Support Vector Machines
11110, List the Numerical features required
43339, If the value is 0 then its black
19719, Data Overview
17753, For roughly 100 parameter configurations it takes about a minute to compute the optimum configuration
5343, Display quantitative values of a categorical variable in an area funnel shape
5539, Split data
14070, Cabin
10135, Performance
19591, shop and sub cate id
34931, Truncated SVD for logistic regression
27751, Average word length in a tweet
37827, TF IDF
6462, DataframeSelector to select specific attributes from the DataFrame
31828, Random under sampling and over sampling with imbalanced learn
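A minimal sketch of both resamplers on synthetic data:

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import RandomOverSampler

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)
X_over, y_over = RandomOverSampler(random_state=0).fit_resample(X, y)
print(Counter(y), Counter(y_under), Counter(y_over))
```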
21163, visualizing the number of different labels in traing data
8316, Scaling the numeric columns
4490, OVERVIEW
41707, Back to time what if the accuracy varies as a function of hour of day
9272, No more null values
35640, Submition
18186, Sanity check
36049, Trying with XGBoost Algorithm
24341, First try it out. It's a bit slow to run this method since the process is quite complicated
8067, Heating and AC arrangements
2407, Imputing Real NaN Values
1097, Extract titles from passenger names
37727, Missing garbage value treatment
10926, Generating a complete graph with ten nodes
26388, Building Model and Fitting
1033, Before cleaning the data we zoom in on the features with missing values; those missing values won't be treated equally
36531, we can now look and maybe find the value of distance that is visually identical
36726, Set the hyper parameters for training
5608, Nonparametric Test w.r.t. Ordinal Variables
28074, we split the training data into the train and validation part
24896, Thing is that I kinda discovered the original outcome of test dataset p
20517, The 25 50 and 75 rows
15037, The fare for the old is much cheaper compared to the children and the young, actually
24354, For the data augmentation I chose to
16645, Decision Tree
20307, We can then execute a Principal Component Analysis to the obtained dataframe
16857, Ticket Class
6567, The Cabin value is mapped to a new feature with a numerical value; we can drop the Cabin column for the train set and test set
8336, Which features are important? Let a random forest regressor tell us about it
23518, About url function
32185, Show predictions
7082, first check whether the Age of each Title is reasonable
39159, This is exactly what we want
21022, Extract Keywords or searching for words in a corpus
21480, Build roBERTa Model
33721, Handling Missing Values of Embarked Feature by substituting the frequent occurring value S
23391, the bounding boxes with low confidence need removing
40777, Save Model
32780, Model
41232, apply Convolutional Neural Network CNN techniques on the original data
40135, Generating the submission file
14969, The survival rate of 1st class is higher than 60% irrespective of gender
20746, PavedDrive column
7332, Create a feature Group size; when the size is between 2 and 4, more people survived
10023, Averaging base models
41792, Examples of misclassified digits
38306, The much needed train test split for cross validation
35430, Firstly we make predictions with each model and then save them into lists; this creates 5 different prediction lists
1345, We can create another feature called IsAlone
16268, Interaction
33246, Correlation
18202, Same series of charts but for returns
1593, Embarked
5209, I make sure that the training and the test set columns are in the same order
6002, Check whether there are still missing values
29891, Fit the heteroskedasticity model
14, Models include
12337, LotFrontage
4306, Import libraries packages
18489, Correlation Analysis
11168, Want to add ordinal or int column for Year and Month this is the function to perform that task
23915, pip install pyspellchecker
11918, Dropping Values
19462, Evaluate the model
2561, PARAMETERS DETAILS
32873, Test set
2754, Data Exploration and Analysis
19878, Maximum Absolute Scaling
35821, Stacking didn t work for me
8255, Checking for Missing Values
20641, The above word cloud image gives a good picture of the most common words used in the real disaster tweets
42220, Before the final classification layer, a batch normalisation layer will be added
32101, How to extract all numbers between a given range from a numpy array
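One way, with a boolean mask:

```python
import numpy as np

a = np.arange(15)
print(a[(a >= 5) & (a <= 10)])  # elements between 5 and 10 -> [ 5  6  7  8  9 10]
```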
7011, General shape of property
26920, In case of positive skewness, a log transformation works
20381, Basic NLP Techniques
35401, Testing Dataset
26802, Submission
24350, I used the Keras Sequential API, where you just add one layer at a time, starting from the input
39743, Cabin
25870, Columns
14747, Best Parameters for Recall Score RBF Kernel C Gamma
23031, Discount Season Presumption
15252, Compare models
7602, SalePrice as target
16317, NOTE
18174, Here is a function for generating a test dataset
19948, The Name feature contains information on passenger s title
16595, Predict the values on Train data to check Accuracy Score
30607, The control slightly overfits because the training score is higher than the validation score
3524, Correlation Heat Map
11632, Logistic Regression
16131, Dividing the dataset into Train, Validation and Test
10080, There are some obvious correlations: GarageArea is highly correlated with GarageCars, GarageYrBlt and YearBuilt; OverallCond is correlated with YearBuilt; etc.
22916, Output data for Submission
17539, Training
40290, Everything is ok
42757, Prepare the data
9112, I definitley want to drop the Utilities column
4302, Inference
2174, The same logic applied to Pclass should work for Fare: higher fares, higher survival rate
42341, See the mean sd and median of response variable
38173, Pycaret
8595, Creating custom transformers
14436, Go to top of EDA section
17558, Similarly for SibSp column
33994, ANNs are able to better fit data containing non linearities
36740, Predict using our final model
304, train test split
37007, Most important Aisles in each Department by number of Products
19652, Benchmark predict gender age group from device model
10937, After data imputation there is not much change in the data distribution, so we use this method to fill in the test dataset
9610, Factor Plot
35177, Plot the model s performance
11466, Linear SVM
4275, LotFrontage
26050, We place our model and criterion on to the device by using the
16777, Logistic Regression
29427, Mind taking a sneak peek?
30979, To run these functions for 1000 iterations uncomment the cell below
7728, Combining YearBuilt and YearRemodAdd
15070, Class
15768, Embarked
18554, Age vs class vs gender
20531, Create new features
29705, Submit Predictions
18546, Data types non null values count
34302, Lets Visualize A Malignant Mole
24124, Additional Test With Noise
10100, Check the x
34000, Normal distribution with boxcox
25374, We can get a better sense for one of these examples by visualising the image and looking at the label
14276, Bagging
23422, Number of words in a tweet
42395, Do Sales Differ by Category
16237, The second classification method is GBTClassifier. We have to repeat almost the same steps as before, just changing the name of the model and pipeline, calling the GBTClassifier, and checking the accuracy of the model
10597, Sequential model with 3 dense layers
41087, Similarly for test set
11958, Creating all the models with the best hyperparameters
8146, Creating Training Evaluating Validating and Testing ML Models
3190, I simply do the following
63, Pytorch Training
37714, Time to test our work
38676, Categorize Number of Images Per Patient
32897, Taking the average of the two word scores
4762, Low range values are similar and not too far from 0
8063, Fireplaces Variable Factor
15559, Filling out the missing ages
39202, Designing Neural Network Architecture
41131, First things first
3880, All our feature transformation steps now live inside pipelines, namely the scaled numerical pipeline, categorical data pipeline, score data pipeline, and engineered feature pipeline. Combining all the features using a FeatureUnion gives the data our model will train on
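A toy two-branch FeatureUnion to illustrate the pattern; the real notebook unions its own four pipelines the same way:

```python
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

features = FeatureUnion([
    ("scaled", StandardScaler()),   # stand-in for the numerical pipeline
    ("pca", PCA(n_components=5)),   # stand-in for an engineered-feature branch
])
model = Pipeline([("features", features), ("reg", Ridge())])
```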
10042, One Hot Encoding
14548, Pclass
12825, Passenger Class (Pclass) analysis
34720, LGBM parameters estimated using Optuna
39450, checking missing data in application train
8334, Well it is right skewed
463, Scale numeric data by StandardScaler
13373, Pclass
3796, Missing Data
29478, Machine Learning models
19623, Data augmentation
38159, The only required parameters for H2O's AutoML are y and training_frame, plus max_runtime_secs, which lets us train AutoML for x amount of seconds, and/or max_models, which trains a maximum number of models
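A minimal sketch of that call; the file path and target column name are placeholders:

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()
train_hf = h2o.import_file("train.csv")          # hypothetical path
aml = H2OAutoML(max_runtime_secs=600, max_models=10, seed=1)
aml.train(y="target", training_frame=train_hf)   # "target" is a placeholder
print(aml.leaderboard)
```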
21817, Bedrooms
25958, First we should combine the Prior data with the Product dataset for some basic analysis
21739, As the data consumes a lot of memory, we downcast it
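A typical downcasting helper, sketched under the assumption that 64-bit columns dominate the frame:

```python
import pandas as pd

def downcast(df: pd.DataFrame) -> pd.DataFrame:
    # Shrink 64-bit numerics to the smallest dtype that still fits the values.
    for col in df.select_dtypes(include=["int64"]).columns:
        df[col] = pd.to_numeric(df[col], downcast="integer")
    for col in df.select_dtypes(include=["float64"]).columns:
        df[col] = pd.to_numeric(df[col], downcast="float")
    return df
```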
27068, again what about test data
6846, Types Of Features
310, Some Regression Visuals to help us understand the current state
17597, Mapping categorical features
21352, From Train Dividing X and Y
272, Model and Forecast
8325, This shows that for n_neighbors = 6 the accuracy is at its maximum
22928, With text there is another handy feature to create: the length
2772, Correlation plot for new dataset formed
1817, You can tell this is not the most efficient way to estimate the price of houses
35889, Make training data
38467, Aggregating by month and calculating fraction of missing values for each feature
24921, Model Building
18484, Since CompetitionDistance is a continuous variable, we first need to convert it into a categorical variable with 5 different bins
11518, lasso scores
28881, Decoder
14816, Embarked sex Pclass Survived
12300, Deck
5355, Display the relationship between 3 variables in lines
13656, Age
20947, Compute the number of labels
30855, Model training
14573, Import Python Libraries
35854, SO 10 classes One hot matrix encoding
5613, Mask Under Min Size
43301, Running the Model Without the Holiday Column
14911, Cabin
3892, Kurtosis
1926, Fence
12738, Round 4 Voting Classifier
662, Support Vector Machine
10443, We shall check the same for GrLivArea and TotalBsmtSF and apply the same procedure if these are not normally distributed
28744, Black color means that there is weak correlation between the two values
31267, Average smoothing
37079, Prediction Submission
12341, GarageType Garage location
28243, Submission file
21255, ItemNames
7928, For the linear regression models we use GridSearchCV to tune the hyperparameters and compute the best score
34039, Show Random rows
8312, Visualizing the distribution of some features with missing values
16269, the all important surv group of functions
9229, Initialize Neural Network
22689, Attention layer
21161, Shutdown h2o
14656, Age
38939, That is a very interesting plot
3812, Prediction
264, Library and Data
24014, Train
29222, According to the graph we plotted Cat14 15 17 18 19 20 21 22 29 30 32 33 34 35 42 43 45 46 47 48 51 54 55 56 57 58 59 60 61 62 63 64 65 67 68 69 70 74 77 78 85 can be categorized as noisy columns
9978, Numerical variables Types
18321, Create Validation Function
30764, Check the impact of removing uncorrelated features
28337, Analysis based HOUSING TYPE
43288, An R² of 1
18728, let s obtain our test set predictions
16957, Parent children
38764, Random Forest
29788, Denoising Cifar10 Data
6945, Clustering
19422, In what follows I introduce the TweetDataset
31663, Numerical Imputation
5278, Feature Importance Scores From Model and via RFE
24136, calcualte Model Accuracy Score and F1 score
11222, Find best cutoff and adjustment at low end
17789, Because she is traveling with a friend about the same age and with a Mrs title, we might want to set her as a Mrs as well
10845, 1593
18920, Splitting the Training Data
28418, New random forest with only ten most important variables
14731, Predict the label for the testing features
42988, Pair plot of features ctc min cwc min csc min token sort ratio
37784, Visualizing an image at the pixel level
12981, Pclass
26556, CNN Model
33044, This section of code gives fit accuracy on the training and test data for each run through
23299, Outliers
35441, To One Hot
24855, I only display what I call temporal inputs, as we're simply trying to get a feeling for how well our model is fitting the trends
4855, average all of them
4326, Parch Parents and Children count
32885, Clustering models
13970, Parch Children Parents
22118, Training Base Learners
5851, Submission
42266, Data Visualization
15045, Title
7323, Missing values on Embarked
10168, Regression Plots
1310, Observations
8522, Test Codes
36395, Download datasets
41342, Find
18364, Checking Multi Linear Regression Assumptions
12892, Revisiting Pclass
18142, Save the final prediction
24860, I am using the original training CSV file instead of the enriched dataset as it is up to date with the latest stats from this week
39885, One Hot Encoding
10450, Elastic Net Regression
8757, Check Null Data
14136, Scaling data
30930, How about most correct predictions
3856, categorical feature encoding
39238, Feature engineering cities
13293, ExtraTreesClassifier implements a meta-estimator that fits a number of randomized decision trees (a.k.a. extra-trees) on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The default values for the parameters controlling the size of the trees (e.g. max_depth, min_samples_leaf, etc.) lead to fully grown and unpruned trees, which can potentially be very large on some data sets. To reduce memory consumption, the complexity and size of the trees should be controlled by setting those parameter values. Reference: sklearn documentation, https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html
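A short example with the size-controlling parameters set explicitly, on synthetic data:

```python
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, random_state=0)
# Capping depth and leaf size keeps the otherwise fully grown trees small.
clf = ExtraTreesClassifier(n_estimators=100, max_depth=10,
                           min_samples_leaf=2, random_state=0)
clf.fit(X, y)
```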
12402, Predicting over test set
39286, RAW FEATURES
33679, Difference Month
24917, Cases Distribution with Rolling mean and standard deviation
14828, Sex
14629, Great, but it will be better if we plot this information
43296, Make predictions with the entire dataset
28394, Setting up models and grid search
16927, Some columns are treated as integer but should be categories
41840, Unigrams
25407, PREDICT
1968, Support Vector Machines
16262, let s take a quick look at the full distribution of scores
2512, Radial Support Vector Machines rbf SVM
35885, Combine data sources
7348, You can fix this with log transformation
34094, Created Date
15428, We construct our models using the following features: Sex, Pclass, Fare, Age and TravelledAlone
19316, Evaluation prediction and analysis
13862, Age
12843, Which is the best Model
39014, Parse shipping data to numbers
27050, Histograms
29867, look at an example image
5262, As a final step in our experiments we are going to train a set of RF models with the top 50 features selected by drop column method
134, Using the best parameters from the grid search
19668, The most important feature created by featuretools was the maximum number of days before current application that the client applied for a loan at another institution
2910, Fill the Embarked feature in the train set with 0
10131, Linear Discriminant Analysis
15065, Name Title
31410, Unfreeze the model and train a little bit more
2738, This bar chart gives you an idea of how many missing values there are in each column
23181, See! We've successfully managed to reproduce the same score that we achieved only after tuning hyperparameters. If we predict using these trained models we should get the best test accuracy possible out of those models, so let's predict using them
6061, KitchenQual 1 missing value in test
26253, Training Function
28680, Electrical
33616, Submission
15918, Combining Friends and Families as Groups
12404, In almost any situation id is an irrelevant feature to prediction
10436, we shall do the following
26214, Augmentation is an important technique to artificially boost your data size
27913, We have missed 3 features GarageYrBlt LotFrontage and MasVnrArea
31321, Reverse the Transformation
27873, Product Life Cycle
13044, Parch
5884, XGBoost
135, K Nearest Neighbor classifier KNN
37088, Now we have OOF predictions from the base (level-0) models and we can build the level-1 model. We have 4 base models (level-0 models), so we expect to get 5 columns in S_train and S_test. S_train will be our input feature set to train our meta-learner, and then predictions will be made on S_test after we train the meta-learner. This prediction on S_test is actually the prediction for our test set X_test. Before we train our meta-learner we can investigate S_train and S_test
9962, Conflict with the domain knowledge
29846, categorical data
35088, Confusion Matrix Training Set
32283, Display distribution of a continous variable in group box plot
562, ExtraTreesClassifier
37105, Discretization
22373, Modeling a tensorflow CNN
6647, Computation of Age Bins
5246, Final Check of The Data Before Feature Selection and ML Experiments
39168, inception v3 requires a tensor of size N x 3 x 299 x 299
2329, Predictions and Scoring Classifications
15515, The survival rate does change between different Embarked values
9010, Set Pool Quality to 0 if there is no pool
10253, Go to Contents Menu
8114, Logistic Regression
36495, Create corresponding train folds
26743, First let s look at impact of different types of events and then we look at specific events
16851, Lets take a quick look at all the categorical data present in our dataset
20202, Categorical Features Data correctness
24908, Confirmed COVID 19 cases per day in Spain
13457, Embarked Missing Values
3168, Just take an average of the XGBoost and Lasso predicitons
11193, maybe do a groupby to make this table more manageable and easier to read
1284, Converting String Values into Numeric 4 1
15030, Pclass 1 gets the highest survival ratio
2410, the features with a lot of missing values have been taken care of move on to the features with fewer missing values
18689, A couple of checks
1590, Missing Values
25036, The majority of the orders are made during the daytime
26902, Create Submission File for approach 9
4062, We just selected all the variables with more than 15% Pearson coefficient
37055, This column contains strings that further contain titles such as Mr, Mrs, Master, etc.
1552, Survived
10700, Encode Train Set
25343, configure the learning process and choose the suitable parameters
19383, Outliers
26856, Final Submission
24838, use Random Forest
40481, Linear Support Vector Machine
10659, Setup the model
1035, We split them to
26631, Split the train data in train and validation
18894, Random Forest parameter tuning and other models
28553, Outliers can be a Data Scientist's nightmare
4249, Features
12872, Now let s create a neural network
10920, Creating Edge
32676, Here the six primary regressors and the stacking regressor are fitted to the training set
41477, Plot a histogram of FamilySize
20030, Train with median price over bedrooms
13983, Handling missing values
16092, Age vs Survival
4638, Count of distinct categories in our variable. Here we have counted NaN values too, if any
23082, Few other checks
40171, Prophet also allows to and that s what we do here
9655, These values can offer more as categorical features than as numerical data, therefore we will be converting them to strings
3368, We can further join the index with the original dataframe to check where (on which features) our Naive Bayes and decision tree models are having issues; with further analysis we could tune our models better, but this is out of scope for this notebook
8444, Since we don't have other TenC entries and the pools don't coincide with any miscellaneous feature, we include the pools in the Misc Features and drop the pool columns after using them in the creation of other features
22673, Named Entity Recognition
6432, Selecting specific columns and slicing
27489, Reshape data and define mini batches
28879, Convert to Pytorch Tensors
18462, Try applying a Keras sequential model; this isn't a great model yet, just copied over from the source
33173, or
12869, One hot encoding
25309, Creating Data For Submission
16669, Feature Scaling Continuous Variables
8625, The Goal
1209, Ensemble 2 averaging
1545, Testing Different Models
16544, let s encode the Fare column into 5 categories
12410, Neighborhood
9369, Save predictions
34725, Top features description
37457, Make predictions on test
13296, The VotingClassifier with hard voting would classify the sample as class 1 based on the majority class label. Reference: sklearn documentation, https://scikit-learn.org/stable/modules/ensemble.html#voting-classifier
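A minimal hard-voting example on synthetic data; the three base estimators are arbitrary choices:

```python
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, random_state=0)
vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC())],
    voting="hard",   # each sample gets the majority class label
)
vote.fit(X, y)
```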
9884, Correlation Between Embarked Sex Fare Survived
9249, Standardizing the data
1958, Filling missing Values
4405, Confusion Matrix
6842, Evaluation Blending
10462, Diagonal Correlation Matrix
26627, Prepare the model
3422, Here is the number of passengers in each of the ticket prefix groups
1703, Finding reason for missing data using Dendrogram
40119, Calculating entropy
34246, SEED all
41757, Run the code cell below without changes
12987, Hyperparameter Tuning Grid Search Cross Validation
40042, let s pick the group of 480 rows and 640 columns or the group with 3456 rows and 5184 columns
5827, Visualising DT
41958, Removing links
21180, Getting Rich With Cryptocurrencies
28923, We can create a ModelData object directly from our data frame
11068, Categorical to one hot encoding
6972, XGBoost
16259, Leaderboard
3515, Now that you have a general idea about your data set, it's also a good idea to take a closer look at the data itself
3676, Pairplot for the most interesting parameters
26803, Save model
7412, Drop columns where there is a large percentage of missing data
36587, Use all training data learning rate LEARNING RATE
40093, Submissions
13834, Analyze by describing data
40450, Neighborhood
6247, And the following correlation matrix can give us guidance on how to fill the numerical columns with missing values
15670, DecisionTree with RandomizedSearch
4129, Handling Missing Values
27352, We should always run model diagnostics to investigate any unusual behavior
38723, we are going to make a discriminator
16755, Fare
12693, For this feature to work the train test index had to be kept as is
42795, build the model now
12419, Process the data with our 6 top chosen features
18419, let s check the xgboost CV score
30861, Load data
22190, Standard split the train test for validation and log the price
31105, Looking at all the different values in each column
10775, MODELLING MODELS PARAMETERS
39495, The function to plot the distribution of the categorical values horizontally
2398, Handle new data while using OneHotEncoder
21093, Logistic Regression model
9598, This way we can get information on all the people who travelled 1st class on the Titanic
6429, Random Forest Model
27084, We now repeat this process using LDA instead of LSA. LDA is a generative probabilistic process designed with the specific goal of uncovering latent topic structure in text corpora
21429, Explore Categorical Data
41270, Sequential Colormap
13150, Data Wrangling
38217, Classification Interpretation
15992, Extreme Gradient Boosting
12292, Calculating correlation matrix in python
1081, earlier we split the train set into categorical and numerical features
35859, 2nd Training
43155, Predictions
35329, In this section I have visualized the dataset with just some random images
15349, Train
5811, First separate train and test
6644, Survived and not survived by Pclass
19070, Data Exploration Analysis
7887, Men who were alone had a lower chance of survival
31095, BsmtHalfBath
35382, Try hyperopt
13125, If needed we can use hue too and get comparisons between 4 features
9035, Observations
702, As a starting point let s do a quick heatmap plot
10924, Labeling a node
20842, we ll merge these values onto the df
27772, Generate test predictions
38741, we drop all of the unnecessary rows and columns
4169, Exponential Transformation
35338, Evaluating the Model Performance
4165, Age
25855, Misspelt word typo
26799, Plotting prediction
13033, PassengerID No action required
5941, split the data into Training testing
7302, Observation
24115, SVM
38421, Using sub and validation sets from previous steps
38710, Plot Feature Importance
8366, Separate young and adult people
11627, Create baseline model
40836, Grid search gives the best accuracy for max_depth=8, min_child_weight=6, gamma=0
1754, Median imputation Comparing the KDE plot for the age of those who survived before imputation against the KDE plot for the age of those who perished after imputation
18969, Display the contour lines of a 2D numerical array z e interpolated lines of isovalues of z
9150, There is 1 row in the testing dataframe with no value for KitchenQual
24695, check update fn
11388, Looks like there were a lot of very small children on the Titanic and then a decent amount of people in their 20s and 30s
22429, If you are wondering why there are so many ways of doing things, the answer is in the matplotlib architecture
20687, Once connected we define a Model object and specify the input and output layers
21392, Building the Predictive Model and Iterate through Country Region wise
28755, Training
22135, OverallQual Rates the overall material and finish of the house
16133, Check Accuracy
33757, STEP Transforming Extracted Data
2416, These are the categorical features in the data that have missing values in them
60, Pytorch Logistic Regression Model
19754, Test predictions
14576, The items present in the train directory
9508, preview top 5 and bottom 5 records from training dataset
13488, EDA
37282, Here is where our first major trick occurred
6200, Multinomial NB
24129, Create a Corpus from the Text column. The Corpus is a simplified version of our text data that contains clean data. To create the Corpus we have to perform the following actions
6671, Confusion Matrix
36810, nltk
20576, Filling the missing values column by column using scikit learn
17846, we prepare the submission dataset and export it in the submission file
27104, Dataset Info
43336, Read train.csv and test.csv into their respective dataframes using the pandas read_csv function
32113, How to extract a particular column from 1D array of tuples
4448, Cross validation
43329, Training
19412, the text that helps extract the sentiment
16846, we dont need the SibSp and Parch variables
3328, Installing the newest release of Seaborn
17986, Within each gender surviving passengers have a higher median fare than those who did not survive
25646, Drop columns with missing values
26434, If there are only categorical features with no missing values, as is the case for the current feature selection, the features can be transformed by simply using the OneHotEncoder
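A minimal OneHotEncoder sketch; handle_unknown="ignore" keeps transform from failing on categories that only appear in the test set:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

X_cat = np.array([["red"], ["green"], ["blue"], ["green"]])
enc = OneHotEncoder(handle_unknown="ignore")
print(enc.fit_transform(X_cat).toarray())
```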
6489, Most finished garages have average quality
9687, EDA of continuous variables
32241, The tqdm module was introduced to me by one of my viewers; it's a really nice, pretty way to measure where you are in a process, rather than printing things out at intervals
8058, GridSearchCV
15952, Random Forest
11666, Artificial Neural Network
22968, Here is the batch version of the augmentation that flips the image vertically and horizontally
13625, Submission
26712, Price Data
36061, One Hot Encoding for categorical variables
4963, I create copies of the input data, as I modify it at the cleaning and feature engineering stages. For easy manipulation of the train and test sets I create a list with references to both actual dataframes
29875, Submission
27159, Category 11 Outdoors
21483, Kaggle Submission
43400, A small experiment: only take the sign of the gradient for our attack
21068, Using same technique to our Problem Statement
11493, More Feature Tuning
43393, So out of 16800 samples the model failed to predict around 1600
1007, Better convert them to numeric
3239, Importing the Libraries
25257, Data Visualization and EDA
11479, Fence
6428, Let's check RMSE, as this is used by the Kaggle competition to evaluate the model's predictive power
38752, we split our train and test datasets with respect to predictor and response variables
31556, Working with Test Data
42239, Assess correlations amongst attributes
29126, The model2 reaches almost 99%
41392, This gives a score on Kaggle of 0
6557, Correlation
10341, Based on the previous correlation heatmap, LotFrontage is highly correlated with LotArea and Neighborhood
7265, Fare Feature
16880, Feature Engineering
8690, CATEGORICAL FEATURES
12679, Predicting
10386, Visualizing the relationship between SalePrice and YearBuilt
34831, Find the % of missing values for each column
7702, let s view each model parameters in detail
26527, TF IDF Word and Character Grams Regular NLP
28167, You can use the below code to figure out the active pipeline components
4176, Top bottom zero coding
38896, Update parameter with Adam
2092, Nothing particularly shocking here
21083, Convert variables into category type
29455, RidgeClassifier
7515, Split the data into training and validation sets
1872, This data looks very messy. We're going to have to preprocess it before it's ready to be used in machine learning models
18498, Test our RF on the validation set
10573, Checking null values in Pyspark
16630, Submission
10136, Stacking is an ensemble learning technique that uses predictions from multiple models to build a new model, which is then used for making predictions on the test set. We pick some of the best-performing models to be the first layer of the stack, while XGB is set at layer 2 to make the final prediction. We use a package called vecstack to implement model stacking; it's actually very easy to use, and you can have a look at the documentation for more information
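A hedged sketch of vecstack's functional API; X_train, y_train, and X_test are assumed to exist, and the two base models here are arbitrary:

```python
from vecstack import stacking
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

models = [RandomForestClassifier(random_state=0),
          GradientBoostingClassifier(random_state=0)]
# Layer 1: out-of-fold predictions become the new feature matrices.
S_train, S_test = stacking(models, X_train, y_train, X_test,
                           regression=False, metric=accuracy_score,
                           n_folds=5, shuffle=True, random_state=0, verbose=0)
meta = XGBClassifier().fit(S_train, y_train)   # layer-2 model
final_pred = meta.predict(S_test)
```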
16763, Fare Feature
14267, K Nearest Neighbours KNN
36719, Again picture is more than word
24120, Run the macro model and make predictions
16713, Plot the values according to the number of neighbours
38941, lets try and pick out high low performers
2851, Model and Accuracy
31841, We categorize subtype of shops in
35843, For more about the parameters and details follow this link: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
16652, Pclass and Age have a high correlation, so I decided to group the data by Title and Pclass and fill the Age column with the median of each group
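A one-liner sketch of that group-wise median fill, assuming Title has already been extracted into the `train` frame:

```python
# Fill missing Age with the median Age of each (Title, Pclass) group.
train["Age"] = (train.groupby(["Title", "Pclass"])["Age"]
                     .transform(lambda s: s.fillna(s.median())))
```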
40454, Total Square Feet
90, Pclass and Survived
10054, Clustering Title variable
17527, let s train
40856, Numerical and Numerical Variable
32788, let s run the binary encoding function
21628, Pandas slicing loc and iloc 6 examples
7065, GridSearch for GBR
18497, The choice of the combinations of my hyperparameters is based on my past experience with machine learning projects I worked on. Most of the time the number of estimators should not exceed 100 trees, since the training set is big enough and the computational resources needed are very large; in this case we're only using a local computer with 16GB of RAM
35949, Ensemble Prediction
18829, StackedRegressor
21682, Nice score with a little effort
1988, Random Forest Classification
31766, The problem with using the sparse matrix is that it is gigantic: there are a lot of users and a lot of products
36883, Random Forest
28060, The number of siblings and the number of parents also play some role in their survival
39769, let s use our topic modeling code on this preprocessed DataFrame
24689, Load pretrained weights
37307, Building a Text Classification model
2506, Family Size 0 means that the passenger is alone. Clearly, if you are alone (family size 0) then the chances of survival are very low. For family size 4 the chances decrease too. This also looks to be an important feature for the model; let's examine this further
32305, I also want to check a hypothesis that while saving passengers minors were given preference over adults
7100, we add new feature MPPS
23537, We have 112 000 entries and 8 labels and we have 784 columns
11946, We now differentiate the categorical features and numerical features to perform the preprocessing accordingly
31067, IMPORTS
10989, drop these columns
2047, LinearSVC Model
3387, Feature Selection
37748, Technique 4 Random Row Selection
26823, check now the distribution of the standard deviation per row in the train dataset grouped by value of target pre
5979, GridSearch on AdaBoostClassifer
4873, Import libraries for Ensemble modeling
19435, Remove the latter of each pair of two highly correlated variables, e.g., remove v2 for the (v1, v2) pair
24949, Grid search
32802, BernoulliNB
34707, Cumulative shop revenue based on a particular item
20879, Normalization
13699, We'll then take just the first character and assign it to a new column named Deck, and take any numerical sequence right after this letter and assign it to room
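A sketch of that split with pandas string methods; Cabin values look like "C85" or "B57 B59", and the frame/column names are assumptions:

```python
df["Deck"] = df["Cabin"].str[0]   # first character is the deck letter
# Digits immediately after the leading letter become the room number.
df["room"] = df["Cabin"].str.extract(r"^[A-Za-z](\d+)", expand=False)
```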
220, Model and Accuracy
27880, Time lag in introduction of items
4583, All of these variables are ordinal so we can easily convert them to numerical features later
22946, take a look at the Cabin feature
33258, Submission
24362, After fitting on the training data you can apply the method on the test data
36847, python
26551, We separate out the input columns and the output columns
16849, we move on to the Name variable
42901, Scree plot
33780, Confusion Matrix
11681, Build a Decision Tree Classifier
24308, Confusion matrix
17027, There are 144 overlapping family names in train and test set
3506, Best score from the grid search
9876, We don t have any missing value in Embarked column
40706, GPU test
19343, Sklearn provides a very efficient tool for encoding the levels of a categorical feature into numeric values
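A tiny LabelEncoder example on made-up sentiment labels:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
print(le.fit_transform(["neutral", "positive", "negative", "neutral"]))
# -> [1 2 0 1]; le.classes_ maps the codes back to the original labels
```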
1114, Deployment
33079, For instance, for the Exterior1st feature we got 15 categories for the train set and 13 for the test set
593, Missing values
16868, Prediction
20192, Categorical Frequency Count Plot
30891, find the correlation between the missing values
17337, XGBoost
35491, Making Prediction
23217, Set a range for the hyperparameters; the library will select the best choice for these hyperparameters within this range
11524, Support Vector Regressor
13835, split training data into numeric and categorical data
29912, Keep in mind that random search ran for more iterations
14278, Boosting
6007, Categorical Data
14282, Handling categorical variables
38453, Replace proper nouns in sentence to related types
41057, I'm curious now what the group 1 feature looks like in terms of these first two principal components
8804, Replace the BsmtFinType2 based on BsmtFinSF2 by bucketing the BsmtFinSF2
29468, Analysis on few extracted features
34743, Training our own embedding layer in keras
20852, An even better option for picking a validation set is using the exact same length of time period as the test set uses; this is implemented here
43254, Campos
667, Extremely Randomised Trees
2184, Fit model for best feature combination
30931, Shortest and longest questions
19774, Early stopping
31052, Tags
607, We learn
33788, Back to Exploratory Data Analysis
31664, Voting Ensemble
32795, The following is the k fold function for XGB to generate OOF predictions this function is very much similar to its sklearn counter part
22844, By default pandas fills the dataframes with NaN
20254, After hyperparameter tuning
24874, Exploring names
10921, Displaying edges
14092, KNN
20560, Prediction and generation of submission file
23897, There are no visible patterns here either
1060, ElasticNet
16556, We have to head towards the modelling and submission part, so let's split the data into its initial train and test sets
30320, Train Val split
1215, Locating missing data
5660, The basic reason to combine Train and Test data is to get better insights during Feature Engineering
14061, Parch vs survived
2159, Ready
2950, Find out the best parameter values
19299, Data Transformation
37792, Visualize model behaviour
26635, Validation accuracy per class
30285, Active Count 50
41924, Looking at pictures again
42759, Pretty sure that the plot in this notebook will not look the same as my local plot
3738, Evaluate the model
16148, some age is missing
30912, Before submitting run a check to make sure your test preds have the right format
36790, Stemming
23238, Conversion of Categorical Variables
22620, Categories per devices
33567, Some more nan filling
19611, Ordinal variables
21131, define our luxurious interaction
9198, Salutation
5842, Let's look at the distribution of all of the features by plotting them
1763, Names
3730, Extract the target variable
12447, For the LotFrontage I m going to analyze the LotArea
27389, Tuning bagging fraction
11238, use XGB to do the training and prediction to compare with RF
31722, Test ANATOMY Variable
43097, Using keras tokenizer to tokenize the text and then do padding the sentences to 30 words
1408, I'm going to extract the title from each Name in the dataset
8456, Box cox transformation of highly skewed features
19150, we can apply these new tags on the actual Training and Test data sets that have been provided
42983, Preprocessing
39350, Convert to submission format
15021, Positive correlation
7753, it's time to delete outliers in our data, as they can degrade our model and predictions
3975, Train the model
39402, Categorical Features Exploration
34468, Save the data
40997, Convolution block
38148, Reading and preprocessing
27422, Number of teams by Date
29731, put all of them in a BayesianOptimization object
20628, Stopwords present in the whole dataset
36498, Basic Data Analysis
25438, Defining the model
26265, Exporting output to csv
1919, Basement Features
6970, Decision Tree
5578, Decision Tree Regressor
18742, Padding and sequencing
23326, Cross Validation
25375, I used the Keras Sequential API, where you just have to add one layer at a time, starting from the input
21613, Making plots with pandas
20665, FILLING VALUES IN LotFrontage
35460, Visualize the skin cancer at lower extremity
10647, Ticket
11688, Logistic Regression
24905, Confirmed COVID 19 cases per day in US
9516, Family
267, Compiling Model
39030, Choose the best algorithm the most accurate
38522, Sentiment Extraction using Bert
39318, Save model and test set
42263, There was an issue with the unicode character in A Coruña; I'll manually fix it
32596, Learning Rate Distribution
38895, Update Parameters
10067, It is very clear that survival is higher if the family size is between 1 and 5
33118, Model 4
26474, We make predictions on images collected from the test data using the architectures we have built thus far Custom CNN sec Feature Extraction sec and Fine Tuning sec
43010, Resampling techniques Undersample majority class
24993, LASSO for categorical variables
13361, Preview test set
20545, Stacking CV Regressor
23231, As the data in y_pred is 2-dimensional, we convert it to 1 dimension
32539, Data Visualization
20667, WE HAVE HANDLED ALL THE COLUMNS IN THE REQUIRED WAY IN THE TRAIN SET
10809, start from some analysis
30952, Predict with model emsemble
13404, I drop the least important feature, Alone, from the model, rebuild the model, and check its effect on accuracy
31925, The learning rate and the batch size are already tuned; the only one that remains is the number of epochs
29899, Let's plot the 10th label
28952, Preparing the data for output file
8928, Garage Features
29597, we ll finally use the range finder
43251, SVM
31742, LUV Color Space
36093, Load packages and data
27572, ps ind 16 18 bin
38673, Support Vector Classifier
5363, Display charts with button options
35848, And convert them to numpy for preprocessing
564, Gradient Boost Decision Tree GBDT
29522, Sorting values by word frequency
32689, The ImageDataGenerator fit method is used for feature normalization
18708, recreate our ImageDataBunch using the cleaned data frame
26657, Plotting the accuracy and loss as the number of epochs increases
28717, Mapping our dictionary
35906, First we one hot encode the categories for the purpose of data exploration
1308, Observations
31258, check our initial model
37061, Outliers Detection
4790, Similarly we impute the values for LotFrontage using Neighborhood and LotConfig as indicators
38943, Effect of StoreType Assortment on stores performance
18306, item price features
26311, Train Test Split
21606, Combine the output of an aggregation with the original df using transform
615, After studying the relations between the different features let s fill in a few missing values based on what we learned
29869, The Kaggle competition used Cohen's quadratically weighted kappa, so I have that here to compare
234, Model and Accuracy
25033, there are 206,209 customers in total
21222, we rotated some images by 10 degrees, zoomed some images by 10 percent, and shifted heights and widths to make sure we covered all the variations
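One way to express these augmentations, assuming Keras's ImageDataGenerator was the tool used:

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    datagen = ImageDataGenerator(
        rotation_range=10,       # rotate up to 10 degrees
        zoom_range=0.10,         # zoom up to 10 percent
        width_shift_range=0.1,   # horizontal shifts
        height_shift_range=0.1,  # vertical shifts
    )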
9402, look at the dataset concerning only the relationship between GrLivArea (the predictor variable) and SalePrice (the response variable); from now on I refer to GrLivArea as just house area for simplicity
26392, back to Table of Contents TOC
19134, ReLU with He Normal initialization
22510, Let's check the first 3 predictions made by the model
18361, Histogram
24674, FINE TUNING THE BEST MODEL
14621, To pass this station uncomment and run
20832, We add CompetitionMonthsOpen field limiting the maximum to 2 years to limit number of unique categories
14624, Station 6 Speed up and pass the gate
8026, Similarly For train data
18430, K Nearest Neighbors
15520, The main categories of Ticket are 1, 2, 3, P, S, and C, so I combine all the others into 4
21668, Inference
9413, Simple imputation
26199, Utility functions are stored here; they are useful, so feel free to add them to your arsenal
4952, let s split our data into training data and test data
36586, Use more training data and a lower learning rate
36822, Here we have our tokenizer which removes non letters and stems
34728, Define Sweep Configuration and Run
19295, Training Evaluation
10262, the coding
41747, MLP ReLU ADAM
28409, train with all
24010, Tensors
37070, Optional: Standardizing Fare
9376, BsmtExposure Refers to walkout or garden level walls
22533, Sex vs Survived
19598, Add more features
2391, Handling missing values
17910, Using dropna False inside value counts method enables us to include counts of null values when performing value counts
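For example:

    import pandas as pd

    s = pd.Series(["S", "C", None, "S"])
    print(s.value_counts(dropna=False))  # NaN now appears as its own row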
12247, Advanced
22917, we inspect the tweets whose predicted probability differs the most from the target outcome
35516, look closer to some categorical features
7005, Area of the garage
37804, Ridge Regression
8747, Model
27015, How is EfficientNet working
35621, Normalize and split data
25591, LightGBM
2460, SelectKBest
9637, Trying to understand the density value for Sale price
3066, Cabin
33203, Sample of Predictions
5559, Submit Predictions
1764, Creates Title as a new feature
28676, HeatingQC
19665, Visualize Distribution of Correlated Variables
40699, Converting Data into Tensors
43248, We got a very irregular graph and unsatisfactory accuracy, and the accuracy doesn't increase with increasing training data
18088, The most green images mostly contain the plants with very small spikes which are just starting to appear
27437, Taking log of both Age and Fare similar to train datasets
891, all data
204, Huber Regressor
41, Deck Where exactly were passenger on the ship
34634, We are going to apply that function
12654, Formatting for submission
24692, We finetune the model on GPU with AMP fp32 fp16 using nvidia apex package
16361, Removing Some Useless Columns
39890, tree 15303
11459, Pclass
21725, The next model, Random Forests, is one of the most popular
38230, Methodology
11487, Utilities
20623, I have written few functions to perform EDA
39386, Calculate median values for renta grouped by segmento and ind actividad cliente
20762, Done with all columns; now let's start with model building
15206, Process the cabin number: get the deck, the cabin digit number as distance from the ship's nose, and check whether the number is odd or even as the side of the ship's axis
39093, Average letters per word in a question
11519, Elastic Net Regression scores
36146, Train Validation Split
32168, DEFINING THE PROBABILITY OF MAKING A TRANSFER
5089, Skip ahead to the next chapter if you don t want to find new parameters
16746, Feature Correlation
13494, Deck
39277, Samples of seniority 0 are totally new in the catalogue
2494, Chances for Survival by Port Of Embarkation
7744, we need to know how many missing values we have in each field; if they are considerable, we should delete the field, as we don't have it for many instances and it will not be helpful
24891, Support Vector Classifier SVM
12285, For example
9393, Basic XGBRegressor scoring
20910, Training model
6984, Number of bathrooms
4443, let s concat train data and test data and save a copy of SalePrice and Id
16667, Feature Selecting
20803, Create PorchSF feature
28398, BarPlot
19524, With coalesce we cannot increase the partitions
5019, More Feature Engineering
20038, The next step is to compute our hidden layer to output weights
2055, We can improve the accuracy of our model by tuning the hyperparameters of our Random Forest model
7521, Submitting Predictions
13171, let s save the Survived column from train data set and join our train and test dataset into titanic dataset to deal with missing values and do EDA
23020, check the sales of event day
35752, Other ML algorithms, such as SVR, need StandardScaler
32314, Relation between Survival and Passenger Age Adult Status
32811, Level 3 ensemble
20227, Name
1053, we split them to train and test sets
10786, My intuition says that Pclass could have an impact on Survival as higher class may have better access to lifeboats
18565, Looks like they actually traveled alone; I correct that data
18350, REMOVING REDUNDANT FEATURES
5300, The simplest way to stack the five base models is to average their predictions. This method essentially gives each base model the same weight before combining them. Here I instead use a linear model with L2 regularization (Ridge) to find the optimal linear combination of the five base models; in doing so, the predictions from the five base models are fed to Ridge as five new features
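A minimal sketch of the idea, with random numbers standing in for the real out-of-fold base-model predictions:

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    base_preds = rng.random((100, 5))  # hypothetical OOF predictions of five base models
    y = rng.random(100)                # hypothetical target
    meta = Ridge(alpha=1.0).fit(base_preds, y)
    print(meta.coef_)  # learned weights replace the naive equal weighting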
6528, Scaling
41855, Lung segmentation
19306, Evaluation prediction and analysis
41007, we divide the passengers into three categories: men, women, and children
6630, Power of money
18368, Dropping Unwanted Features
16127, Extra Features Title
9863, Histogram Plot
26082, Run new model
39428, Modeling
32131, How to sort a 2D array by a column
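For example, sorting the rows by the first column:

    import numpy as np

    arr = np.array([[3, 9], [1, 5], [2, 7]])
    print(arr[arr[:, 0].argsort()])  # argsort on a column gives the row order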
7125, Sex vs survived
37897, Predictions
21427, BsmtFinSF2
16082, Survived column is not present in Test data
15381, See the HUGE difference! If we had used the average value without considering Parch, we would have gone horribly wrong
32240, Submission
10166, Swarm Plots
23047, so now we reshape the images into 3 dimensions
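A sketch assuming flattened 28x28 grayscale images, as in MNIST:

    import numpy as np

    X = np.zeros((42000, 784))    # hypothetical flattened images
    X = X.reshape(-1, 28, 28, 1)  # (samples, height, width, channels)
    print(X.shape)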
42551, Run cross validation with a few hyper parameters
4675, SalePrice analysis
5073, Problem fixed. Takeaway: visualizing the data is almost always super helpful
16986, Ensemble methods
3734, Split the dataset into training and validation
3708, Normalization
3274, Fill these BsmtQual null values with values of BsmtCond and others with None
15567, We need some cleanup as some cabins are numbered F Gxx
38004, We have information about sales of 3049 various items which belong to different categories and departments
5079, Fellow Kaggler massquantity suggests in the kernel 'All you need is PCA (LB 0.11421, top 4%)' that we can reduce the collinearity that we have in the data, and may even have increased with feature engineering, by applying PCA to the whole data set. I think it's a cool idea worth pursuing
8431, Check if all nulls of masonry veneer types are updated
10154, Example of a youtube video data
31687, Making predictions using the combined network
35855, A conv2d block CONV2D number of filters size of filters ReLU MAXPOOL2D
37787, Data Preparation
8581, Separate the datasets again
11434, Undoubtedly delete the RED point InterOut
12807, There is no Survived column here which is our target varible we are trying to predict
2820, creating a plot of the most relevant features
23596, Repeating PCA and making another plot of the first two principal components
1398, Fare vs Survived
26884, Create Submission File for approach 3
23306, Select algorithm and hyperparameters
3908, How to deal with missing data
38156, H2o
22294, Training the NN
5663, Extract Titles from Name feature and create a new column
7638, stack1
9987, Zoomed Heat Map
20457, Family status of client
38277, apply our trained model to the test data; but before that, we also have to convert the test-data text to padded sequences like we did earlier
42459, Categorical variables
6716, Distribution of Continous Numerical Features
34659, There is an entry with negative price
28237, Compile the model
16979, Cross Validation
30534, Exploration of Credit Card Balance
36050, Preprocessing
23484, N Grams and analyzer parameter
41996, Sorting columns w descending order
25447, Testing Datasets
16737, alone
11613, Feature engineering: create new useful features
39868, Area
1609, Title Is Married
25789, Great! We have done all the features
41489, Random Forest Prepare for Kaggle Submission
3533, ViolinPlot Functional vs SalePrice
33411, Train the model using data augmentation
13663, Class
27188, Semantic Analysis
17475, AdaBoostClassifier
32782, Training
32785, Submission
12601, Random Forest using hyperopt
34669, Cumulative sales volume
18436, Compare Models
10570, After training, now is the time to recommend top movies which the user might like
1056, Ridge regression
17661, Observations
3159, we re ready to set up the data frames
23731, Survival rate is highest for passengers who boarded at port C (55%), while it is lowest for passengers who boarded at port S (35%)
15635, Survival by Passenger Class and Family Size
42869, Train the production model
17819, We fit the model
5087, The more different the models that we choose for an ensemble the better it ll perform
43002, Understand the sample ratio
20747, WoodDeckSF column
37796, Training a Linear Regression Model
9415, Feature engineering: Family column
4994, High Level Overview
36250, Mimic the IDF function but penalize the words which have fairly high score otherwise and give a strong boost to the words which appear sporadically
30970, Normally in grid search we do not limit the number of evaluations
11284, Transforming target distribution
20121, Modeling with LightGBM using best parameter
15568, That looks better
4597, Kitchen
21173, Visualize CNN Layers
42311, Cross Validation
17522, Predict the output
1173, For the rest, we just use a loop to impute the value None
4125, Averaged base models score
36148, Define Optimizer and Annealer
8950, Fixing Fireplace
4381, fill missing value in FireplaceQu with NA
30605, Control
7905, Third model XGB Classifier
38993, Helper functions to sort the image files based on the numeric value in each file name
10751, Train Validation split on the training data and Build Ridge Regression
14867, Interesting to note: we have a T deck value there, which doesn't make sense; we can drop it with the following code
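Since the original snippet is not shown here, a minimal sketch assuming a 'Deck' column holding the extracted cabin letter:

    import pandas as pd

    df = pd.DataFrame({"Deck": ["A", "B", "T", "C"]})  # toy stand-in
    df = df[df["Deck"] != "T"].reset_index(drop=True)  # drop the lone T deck
    print(df)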
27290, Model with intervention
10215, Take Away Points
10512, Cabin contains a lot of Null values so I m going to drop it
42638, Correlation
14216, The chart confirms women were more likely to survive than men
34440, Prediction
24315, As is discussed in other kernels, the bottom-right two points with extremely large GrLivArea are likely to be outliers
20266, Correlation between continuous features
33505, Germany
35673, Changing Data Type
37540, Optimizers
18515, We have similar counts for the 10 digits
24659, We need to install pytorch lightning library
34630, We can say that people prefer the morning and evening times for renting bike
15488, Create neural network model
36224, This array contains all predictions
37734, Deeper understanding of subtypes
30395, Making model
29815, CBOW model
8246, Reading Data
31837, Example calculations
31791, Displaying some original test images
13536, Linear Regression
16921, Modelling Training
33338, Quick look at items df
11152, Look for any good correlations to use for imputation
10429, Merging solutions and submission
38279, 97% accuracy! That's not bad at all for only 2 dimensions
15329, First we are converting float to string for both the datasets namely test and train
31951, Main part load train pred and blend
24445, WordCloud
14133, Feature Engineering
28484, Number of houses built VS year
14415, Here we are getting 84% accuracy with the RandomForest classifier
11297, Define a method to carry out cross validation
40459, Age
36515, Name Title
7599, Boxplot SalePrice for MSZoning
32054, Random Forest
7239, Model Training 0
21395, We need to create PyTorch native Dataset so that we can train our model
15908, Tickets with the most Predictive power A5 PC
5837, From here lets call our train set as df
18340, EXPLORING QUALITATIVE VARIABLE
11290, Drop columns
37869, Data Modelling
27292, Cumsum signal
2497, Filling Embarked NaN
16717, Random Forests
24551, In case of total products 2
4683, For the Basement features we'll be a little bit smarter
33842, PandasProfiling
5406, About two-thirds of the people in Pclass 1 survived and only half of the people in Pclass 3 survived
39147, Display images
26367, Images the Network gets really wrong
26924, And check if the
35078, The prediction obtained by this solution yielded a score of 0
15555, Support vector classifier
42259, these are all new customers
14384, At First filling all the missing values with 0
41780, We have below an example of 60 digit images from this dataset
43373, Defining the tensorflow graph
9677, Feature fraction Bagging fraction Bagging frequency
41223, Separate out the predictor variables, i.e. the pixel values, and the label
11779, Outlier Treatment
36373, Submissions
34221, we have 49 images that aren t labelled
5236, Dropping Outliers From Training Set
22372, Separating the Data Set
18055, build up the XGBoost model; as I have not taken care of the ordering of the categorical variables, I need a larger tree ensemble, so let's start with 1000 trees
22629, We now explore categories
41257, Compile Your Model
6598, Create and fit the decision tree
2412, These are all the numerical features in our data
32638, Tweets
12824, what is an outlier?
28406, Create Model
15847, Name Length
42238, Bivariate analysis scatter plots for target versus numerical attributes
458, Fit the training dataset on every model
24941, Q Q plot after StandardScaler
25191, Normalization
16195, SOOO apparently adding the family size increases the accuracy on the training set, LOL; well, there's no cross-validation so idk
7259, Pclass Feature
27110, 65% is a big number that tells us there are a lot of missing values in the train dataset
10841, we are done with most of the feature analysis; let's begin with the feature engineering
8497, Advanced Uses of SHAP Values
15824, Standard scaler for test data
42067, Separating the length of the data to use it later
27272, Using normal distribution to estimate each feature
31419, Train the Model
4220, Target Encoding
41120, OverAll Average Absolute Log Error
31068, LOAD DATA
37159, ALL ACTIVATION LAYERS
14943, Final Data
22806, Educational Level Literate without Educational Level
15477, We create a new feature FamilySize that combines SibSp and Parch
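The usual construction, assuming the standard Titanic column names:

    import pandas as pd

    df = pd.DataFrame({"SibSp": [1, 0], "Parch": [0, 2]})  # toy stand-in
    df["FamilySize"] = df["SibSp"] + df["Parch"] + 1  # +1 counts the passenger themselves
    print(df)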
36149, Data Augmentation
2442, Categorical data
14801, XGBoost Model
14579, Delete Unwanted Columns
36729, Prediction on Test Set
31609, We were just training our model to predict 8
24239, Exploratory Data Analysis EDA Cleaning and Engineering features
30268, Using different classifier
31666, Random Forest Classifier
7862, Import some libraries for data exploration
14191, LogisticRegression
40849, Imputing Missing Variables
43032, XGBoost model for None
17655, Prediction
3306, dummies
9064, YearRemodAdd
12062, Train Test Split
1740, We first check the number of missing values for fare
32232, We need a goal for our model that we re building
34384, Hour of the Day
29792, Cosine Similarity
34344, Evaluate Each Model and Cross Validate Top Model
31603, Loading the MNIST data
19864, determine the outliers
33729, Install and import necessary libraries
157, EDA with Pandas Profiling
14781, Parch
20306, I want to find a possible clusters among the different customers and substitute single user id with the cluster to which they are assumed to belong
6553, Calculating the numbers of women and men who survived
6634, Feature Engineering
16358, Plotting Train vs Validation Curve
32954, It says the absolute difference more than 0
523, Distribution of Fare as a function of Pclass, Sex, and Survived
8716, Deletion
19173, Rotated features unused
10645, Fare
694, The precision score for the survived passengers is decent but not good enough for our specific problem that requires precision to be as high as possible
39178, still noise, but this time the noise is more intense: 3 extra letters instead of one on the left and a missing letter on the right
9479, Learning Curve
22401, Yup same people
42358, Remove punctuations special characters numbers
36230, After creating the model we compile it
38098, Modelling
25045, Department wise reorder ratio
20474, Region registered not live region and not work region
30934, Ridge, or L2, regression is the most commonly used method of regularization for problems which do not have a unique solution. It adds a penalty equivalent to the square of the magnitude of the coefficients. Unlike L1, it doesn't shrink some of the coefficients to zero; it shrinks the coefficients towards zero, but they never reach zero exactly
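A minimal illustration on synthetic data: the coefficients shrink towards zero as alpha grows, but never reach it exactly.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.random((50, 3))
    y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.standard_normal(50)
    for alpha in (0.1, 10.0, 1000.0):
        print(alpha, Ridge(alpha=alpha).fit(X, y).coef_)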
38474, Axis 1 As Feature
41914, As well as looking at the overall metrics it s also a good idea to look at examples of each of
22815, item 11373 was sold 2169 times at shop 12 on a single day in October
22409, There was an issue with the unicode character in A Coruña
37481, SpaCy tokenizes the text and uses it s internal model to create linguistic features
11151, try another imputation method
5938, we have to encode the categorical values
1090, A heat map of correlations may give us an understanding of which variables are important
7061, Same skewness analysis for target variable
39034, Plot in 2D
8652, Use DataFrame
1078, NOTE: I simply used the median to fill NA values; actually there is a lot to explore when you do feature engineering
7014, Rates the overall material and finish of the house
18722, save our model s weights
12659, Predict the Actual Test Data
9866, To understand the survival rate according to gender
11992, let s predict test values
21091, Split data set
1140, GridSearchCV evaluating using multiple scorers simultaneously
9706, Linear regression
18658, Test Data Images
9957, GBDT
39453, checking missing data in previous application
1565, Another piece of information is the first letter of each ticket which again might be indicative of a certain attribute of the ticketholders or their rooms
9486, Evaluate Model
24541, Total number of products by age group
20174, To verify lets pass the optimal parameters to Classifier and check the score
7433, XGBoost
20481, Duration of credit DAYS CREDIT
6698, Moving to the point plot
5957, Gender and Survived
34963, Dropping Columns that don t heavily Influence the Outcome
16599, Plot the distribution of each feature
8157, Correlation study
33459, Assortment
7765, Lasso Regression
1108, Gaussian Naive Bayes
34694, Lag everything
27975, hist
35183, Projection into 2 Dimensional PCA
713, Looks like a good thing to condition on
3964, Parameters
13716, Replacing the two missing values in Embarked with the most common value under this feature handling missing categorical data
43243, Testing
2384, Two types of Pipelines
9840, Decision Tree
37349, Lets check missing data
19429, Drop TARGET from train csv and combine train test csv
4228, Direct Method
2017, Modeling and Predictions
14936, we first need to study our data
23270, SibSp Parch Family Size
39881, There are 11 NAs
32784, Prediction with TTA
3013, This is to make sure that SalePrice values are distributed normally, using the function log1p, which applies log(1+x) to all elements of the column and fixes the skewness of the distribution
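The transform and its inverse:

    import numpy as np

    prices = np.array([100000.0, 150000.0, 755000.0])
    log_prices = np.log1p(prices)    # log(1 + x) reduces the right skew
    restored = np.expm1(log_prices)  # inverse transform for final predictions
    print(log_prices, restored)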
41033, Submission to Kaggle
29931, 3D Plots
3244, Creating 2 new variables for each data type
10404, View statistical properties of the data
17552, Check if any remaining null value for age
27635, Several of these columns have
38992, We will not use input test in this notebook, as we are not intending to submit; we split the entire training set into train, dev (cross-validation), and test sets later
39948, Lasso regression
26446, The high number to the left around 0 and to the right around 1 suggests that the classifier was quite certain in most of the predictions that turned out to be right
32882, Linear models
17971, Sex
20178, Reducing Dimensions using PCA
29384, ARCHITECTURE: 1st convolutional layer: number of filters n1 = 16, filter dimensions 5x5. We use the ReLU function as the activation for all layers except the output layer. We need to pass the dimensions of the input as input_shape only for the first layer
16503, Creating Data Frame of models scores
20449, credit card balance
28590, There is a clear positive correlation between SalePrice and the quality of the kitchen
22019, Explore the interaction between var15 age and var38
43131, Confirmed Cases 3 Best Predicted
20674, How to Confirm TensorFlow Is Installed
22778, Firstly creating Date column and dropping the unwanted column and reformatting the date column
7107, Gradient Boosting Decision Tree
1730, Plot Pclass against Survived
6083, Data Cleaning
34249, Plot The distribution
1603, Categorical Features
42030, qcut changes continuous values into ordinal groups based on quantiles, trying to keep the same number per range
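For example, on a handful of fare-like values:

    import pandas as pd

    fares = pd.Series([7.25, 8.05, 13.0, 26.55, 71.28, 512.33])
    # four quantile-based bins labelled 0..3, with roughly equal counts per bin
    print(pd.qcut(fares, 4, labels=False))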
21103, Data Cleaning
2953, Convert into Dataframe for final submission
41365, Sale Price FV RL RH RM C
32251, Alright so we made a couple mistakes but not too bad actually
5678, Print the average age based on their Title before filling the missing values
15466, Feature Pclass
32422, MNIST Classification using CNNs
41511, Logistic Regression
1363, Model evaluation
31755, Selected text is a subset of text
35682, Ensemble Algorithms sec
24837, use the decision tree
20225, Survived
32085, Figure 5 Distribution of the absolute correlation between log and our variables
36089, And individually
26366, The Journey of an Image Through the Network
36890, CNN 1
15236, Load data
7658, evaluate base models
38104, Building the Convolutional Neural Network
33281, Building the Feature Engineering Machine
5548, Check Data
28267, The most categories are present within Neighbourhood, followed by Exterior2nd and Exterior1st
35561, approximately 90% of the data
1294, We need to do some preprocessing to this test data set before we can feed that to the trained model
37533, Importing Common Libraries
12480, AgeGroup
39994, Blending of models
16900, New Feature Child
19374, Feature Selection Engineering
14398, Feature Embarked
5500, Imputing the missing values
16159, Random Forest
25656, Here are a few medium sized connected components
25894, Dale Chall Readability Score
32702, Cleaning data
22592, Several shops are duplicates of each other
41208, len prediction len GT
40157, No; if there's no Promo2 then there's no information about it
18973, Display the distribution of a multiple continous variable
28292, Intensive Ignite usage part
36077, Utils
13420, Submission
6840, Cost Function Cross Validation
18847, Grab a statistic summary of the training set
32675, Each of the six selected regression models is hereby submitted to scoring based on the Root Mean Squared Error metric
10572, For visualization before using visual library matplotlib seaborn we need to convert SparkDataframe to PandasDataFrame
12330, Missing Values
15453, Cabin
3419, grab the wives maiden last names too
1562, Parch
15269, Logistic Regression Algorithm
15880, To bag or not to bag
9989, Features engineering
8726, Sale Price and Living Area
31376, Scale image
18390, Create and submit the submission file
23413, Normalize data by subtracting the mean image
18158, Setting up models and grid search
14529, Sex
4427, check the numerical variables once again
27298, Global Forecast
4637, Look at the different values of distinct categories in our variable; this method lists any missing values (NaN) as well
28146, The pattern defined in the function tries to find the ROOT word or the main verb in the sentence
40451, NeighborhoodLotArea
35559, Parameters
10728, MODELS
31087, MasVnrArea font
22050, Checking for missing values
30967, Grid Search Implementation
29633, From our top 3 best-performing models, I'll try to combine them into a soft voting classifier to improve overall predictive performance
22012, One hot encoding
32354, Making model
18397, Run the next code cell without changes
14995, Not many passengers survived
2380, Use of stratify when performing classification problems
5240, Binning the Rare Category Values
14188, Creating Features and Labels
33485, Add country details
9705, Onehot encoding categories
27063, Tweets Locations
8107, Embarked
18956, Display distribution of a continous variable
39288, TARGET ENCODED FEATURES
1692, Describing Categorical and Numerical features separately DescribingCatAndNum
14807, Missing Value
41265, If you re a front end developer it may be easier to understand
13903, Train Test data Split
20952, View model summary
6565, Missing Data
6388, At first we have to prepare the data prepare vectors for female and male passengers
10344, Adding Features
28494, Create a model
2926, AdaBoost
39126, EXP
24516, Data cleaning
3742, EDA
14522, Family Size
16987, Boosting
31915, Normalizing Data
1964, Pairplots
32151, How to subtract a 1d array from a 2d array, where each item of the 1d array subtracts from the respective row
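For example:

    import numpy as np

    a_2d = np.array([[3, 3, 3], [4, 4, 4], [5, 5, 5]])
    b_1d = np.array([1, 2, 3])
    # adding an axis makes each element of b_1d broadcast across its own row
    print(a_2d - b_1d[:, None])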
12934, Embarked Column
6505, Explore Data
30081, I ve given a number to classify
15050, IsAlone
32759, Model
24978, lets train the model
14734, Compute Metrics from this Confusion Matrix
14375, Analyzing Feature Age
25462, Applying the model to your validation set
26895, Using Pipelines
10796, Naturally after Parch category let s check SibSp
42790, Observations
24433, Deal with Categorical features OneHotEncoding
8321, Creating output CSV file in the required format
22855, Add Holiday feature
21231, Define the learning rate
11306, Suppress Warnings
1330, Correlating categorical features
4380, check missing values in these two features
36526, Hyperparameter Tuning Grid Search Cross Validation
25588, Linear regression
30416, Define hyperparameters and load data
23303, Drop our target feature from training data SalePrice
13379, Imputation of missing values in Cabin
32955, We should steadily add correct samples to our submission286 in order to get 0
31884, labeling of columns which contain categorical data in object data type: Sex and Embarked
39805, start this notebook with importing all the necessary libraries we would need and the dataset
3245, Correlation matrix for numerical data
38637, We have a dropout layer whose dropping rate is set to 25% of inputs
7995, Analyse Labels
28025, MOST SIMILAR WORDS TO
41022, Here is a summary of which passengers are predicted to live or die from the previous prediction rules
6108, LotFrontage
6111, Nope
6423, Feature Engineering on Test Data Set
22920, look at our first feature Pclass
35206, Common observation from all residual plots: the main purpose of viewing residual plots is to analyze the variance of the error of the regressor. Here the points are not randomly dispersed around the horizontal axis, which indicates that a linear regression model is not appropriate for the data; a non-linear model, perhaps a tree-based one, is more appropriate
34173, Well
20433, Loading data
1176, also take a closer look at GarageYrBlt
5835, Splitting data back into train df and test df
9505, Know how to import your data
28282, Set up data store
31415, Or you can take advantage of the fastai TTA
6283, We have now finished feature engineering; I tried many different methods here, including creating polynomials and also interaction variables
7209, The dataset contains 81 columns and the SalePrice column tells us the price at which the house was sold
20783, Viewing Columns
14087, Training Data
13275, Features engineering FE
24822, Encoding Dataset
10565, Visualize Predicted vs Actual Sales Price
19065, Lets look at a prediction for the test patient
19259, We use the current timestep and the last 29 to forecast 90 days ahead
38155, Running predictions
4090, now we are dealing with the big boss
30975, We can also evaluate the best random search model on the test data
28872, RMSE
5891, Test data
25886, Histogram plots of number of chars in train and test sets
10153, We aren't able to add a legend, so let's use a different technique
37441, Couldn t find any evidence or relation between them in positive tweets
14984, Converting categorical variable Sex Embarked Title to numeric values for both the data sets
30456, Example usage
10656, Interpretation
27427, People were travelling alone or with 1 sibling/spouse at most
8225, Heat Map for the data
12481, Family Size
15190, Replacing Ages
1617, StratifiedKFold is used for stratifying the target variable
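A minimal sketch on synthetic data; each fold preserves the class ratio of y:

    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    X = np.zeros((100, 4))             # hypothetical features
    y = np.array([0] * 80 + [1] * 20)  # imbalanced target
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for train_idx, val_idx in skf.split(X, y):
        print(y[val_idx].mean())  # ~0.2 in every fold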
36096, Feature engineering: some feature engineering can always be done, even with text data
42956, The number of people with two or more siblings or spouses in the data is very low
29069, Numeric categorical interactions
23376, I ll also add a wrapper function to call this function multiple times for multiple images
27894, We'll be using the Sequential model from Keras to form the neural network; the Sequential model is used to construct simple models with a linear stack of layers
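A minimal sketch of the pattern; the layer sizes are illustrative, not the notebook's actual architecture:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    model = Sequential([
        Dense(64, activation="relu", input_shape=(10,)),  # input layer
        Dense(32, activation="relu"),                     # hidden layer
        Dense(1, activation="sigmoid"),                   # output layer
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])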
17537, Create FamilySize bands based on SibSp and Parch properties
12555, Random Forest
34394, Crime Coordinates
41858, Our target variable is binary and not well balanced but for now for simplicity I leave it as it is
35844, Prediction
17698, SUPPORT VECTOR CLASSIFIER
31642, Address
40068, Correlation Study Numerical Data
2832, Library and Data
29476, Merging all the extracted features
10192, Creating columns from each categorical feature value
19096, Data cleaning
14551, Children less than 5 years old were saved in large numbers
22527, Pandas Profiling
22037, impute the missing values with some value which is outside the range of values of the column, say 99
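For example, with 99 as the sentinel (any value outside the column's observed range works):

    import pandas as pd

    s = pd.Series([1.0, 2.0, None, 4.0])
    s_filled = s.fillna(99)  # 99 lies outside the observed range, so models can isolate it
    print(s_filled)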
18286, Pipeline Creation
25746, the shape of the foreground is enough: 94% of validation data are assigned the correct original size after 1 epoch
18270, GENERATING WORD CLOUD OF DUPLICATE AND NON-DUPLICATE QUESTION PAIRS; WE CAN OBSERVE THE MOST FREQUENTLY OCCURRING WORDS
10528, Feature Generation
6559, Explore Age distribution with kde plot
20253, We can look at the feature importances
32627, Building a Text Classification model
4661, Find out the Relationship with categorical feature
22156, Data read in and prep
846, DecisionTreeRegressor
8765, The graphs for survived and not survived are almost similar
18493, Test Set Adaptation
20846, Create features
28112, Predicting the Test Set
32197, Only keep a shop category if there are 5 or more shops of that category; the rest are grouped as other
4107, Basically, we don't know the test dataset's information, so we have to use the train dataset's info
5545, Submit Hyper Tuned Baseline Model
21757, For missing values in ind_nuevo we can fill in by looking at how many months of history these customers have
19860, finally here are the outliers
24382, Exploring the Data
18684, There are 25000 files
21997, 42000 images split into 189000 images
12686, Survived
14248, Children less than 5 years old were saved in large numbers
14142, Random Forest
16741, double check
12921, females have higher probability of survival than males
41412, There are so many features and so many possible angles from which we can analyze them
3459, We can estimate the performance of the model out of sample using cross validation
34537, Defining the model
24527, Number of products by activity index and sex
31840, Shops dataset preprocessing
2245, RandomForestRegressor
1368, Title grouped
12978, Family size
16292, Importing Data for train and test csv files
850, Comparison plot RMSE of all models
41482, Convert the DataFrame to a numpy array
14143, Bagging Classifier
123, Before Scaling
41563, PCA in 3 D
42050, Change floats to integer to save resources
12041, I m gonna write two functions to help me in imputing missing values in variables
25039, No re ordered products
8608, Perform label encoding to all ordinal variables
18681, extract the files in train
6312, Extra Trees
13132, VERDICT WITH BEAR GRYLLS
17342, Gaussian Process
1327, Analyze by pivoting features
36619, PCA Dimension Reduction
30643, Machine Learning Estimation and model evaluation
21728, Standard Prices and Outliers
36057, Feature Season
32108, How to print only 3 decimal places in python numpy array
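For example:

    import numpy as np

    np.set_printoptions(precision=3)  # display at most 3 decimal places
    print(np.random.random((2, 3)))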
23563, 720x1280x3 is the most common image size available in the train data, and 10 different sizes are available
32650, Numerical and categorical features are identified and segregated into two specific lists
12358, Masonry veneer type and masonry veneer area
16785, Neural Network
38278, Submitting our predictions
7191, Normalization
8598, Getting everything together using ColumnTransformer
3625, Another way to visualize housing prices by month and year
5524, Estimate missing Fare Data
6285, Hence in order for the output files later to be accepted by Kaggle I convert this to an integer
36535, It s often a good idea to pre process the images before doing anything with them
558, Decision Tree
4789, We impute the missing values for zone using neighborhood as an indicator
39056, Prediction
34444, In this case, what the melt function is doing is converting the sales dataframe from wide format to long format
26364, Confusion Matrix
23525, Majority voting
6991, Remodel date
7885, Fewer values hint at a higher survival mean; however, a crosstab would perhaps give clearer information
37225, Choose Embedding
13476, The score for the new training sample is very close to the original performance which is good
28077, Prediction
40100, Model
42932, Merging test and train
40927, Melanoma Samples Generator
4162, Weight of evidence
24553, In case of total products
1243, The Goal
7680, Outliers visualization
8488, SGD Regressor
8923, PoolQC
22714, Creating the image matrix for the dataset
17658, Observations
31311, Store 1 Analysis
673, With these optimised parameters let s have a look at the feature importance that this classifier gives us
28086, Cross validated stratified and shuffled
38555, just build a model using the mean number of rooms for the missing values
32154, How to compute the moving average of a numpy array
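One common approach uses np.convolve with a uniform kernel:

    import numpy as np

    x = np.array([8, 8, 3, 7, 7, 0, 4, 2, 5, 2], dtype=float)
    window = 3
    print(np.convolve(x, np.ones(window) / window, mode="valid"))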
12461, Feature Transformations
15056, Model
13475, References
5148, Temporal variables
32270, Relationship between variables with respective to date multiple series
19618, Blending
5946, encode categorical data
37820, Remove twitter handles
7625, num_iterations (default: 100); aliases: num_iteration, num_tree, num_trees, num_round, num_rounds
80, All the cabin names start with an English letter followed by multiple digits
35804, distribution of residual errors looks normal except for ET and SVR
3686, Feature Engineering
35840, To learn more about sklearn DT parameters, what they stand for, and how they influence the model, follow this link: scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html
41013, we found 80 woman child groups for a total of 230 passengers 171 women and 59 boys
18629, Applying Decision Tree
4641, Percentage of Missing Values in that variable
33790, That doesn't look right: the maximum value is about 1000 years
21125, In the previous part we used DataRaw
4908, have an Eagle s Eye View
5700, Imputing missing values
28477, New features based on TAX
23052, it's time to evaluate our model
2788, PyCaret s classification module is a supervised machine learning module which is used for classifying the elements into a binary group based on various techniques and algorithms
4797, Feature Engineering
40160, Clearly stores of type A
40430, Plotting the base learner contributions to the ensemble
23614, Bar plot of missing features
2096, Categorical features
30909, For this column propertyzoningdesc the information behind the code could refer to
16513, Random Forest Classifier
42841, South Korea
19959, It could mean that tickets sharing the same prefixes could be booked for cabins placed together
33143, we round the predictions, set them to integer, update the submission dataframe, and save it as CS Oof
2022, Since our lasso model performed the best we ll use it as a meta model
1038, Number of missing values per column
5472, More rooms, more square footage, more garage area: GarageCars, TotalBsmtSF (total basement sq ft), 1stFlrSF (1st floor sq ft)
29631, Support Vector Classifier with RBF kernel
24156, modify the annotation file for feasible use
36104, Attention layer
31374, Flip Image
43363, Feature importances
9632, Importing important libraries
16863, Fit the models
13977, Map Sex and Embarked to numerical values
37493, Submission
12815, Data Exploration and Visualization
35497, Implementing with Keras
37365, Fare Vs Survived
31835, Function to get proper weights and scales
15658, Random Forest
16464, Ticket
41321, Loading in files
18523, Set the optimizer and annealer
5021, Dummy Variables
24248, The best categories for age are
42868, Find the best epoch
16254, Scripts
4823, Standard approach missing data scaling imputating etc
1633, Standardizing numeric data
17533, Fill EmbarkedCode with the most frequent value and transform it to numerical categorical feature
7896, same for the FamilySize columns
29902, Lets create a simple model from Keras Sequential layer
34224, get y needs to return the coordinates then the label
29999, Plotting the scores of all models
15212, Model Keras MLP
10591, Gradient Boosting
12262, Categorical variables
23325, Item name Tfidf text feature
3008, Visualizing the data
24790, Metric
27021, Confusion Matrix
1906, OverallQual, GrLivArea, GarageCars, GarageArea, TotalBsmtSF, 1stFlrSF, FullBath, TotRmsAbvGrd, YearBuilt, and YearRemodAdd have a strong correlation with SalePrice
12065, Correlation Analysis
29808, Glove pretrained WordVector of a word
40618, And a simple neural net
10221, check the correlation coefficients between our features
32940, Create the vocabulary the word count and prepare the train X
153, Submit test predictions
894, SVM Classifier
11894, Outliers Handling
42608, Optimizer Settings
4474, Encoding Titles
23401, save the weights so that the model can be used in an inference notebook
12749, extract the title by using the quick-and-dirty split method, then add a column called Title
11488, Functional
4130, Using Imputer instead of fillna
17340, XGB
10257, Get rid of NaNs
36411, Statistical Distribution
26213, In object detection with bounding boxes it is always a good idea to randomly plot some images with their bounding boxes to check for any awry bounding box coordinates
13615, separate our features into ordinal categorical features and nominal categorical features then we
28951, Predicting the Survived label on the test set
7037, The general zoning classification
7585, Scatterplot SalePrice vs all SF
3527, Scatter plots between the most correlated variables
38790, Save
29838, Helper functions to the main association rules function
21177, Displaying output of layer 8
2167, Our hypothesis is that the higher the class the higher the chances of survival
21396, Create the model loss and optimizer
17559, Similarly for Parch column
9597, There were three classes on the ship: 1, 2, 3
32371, Loading Image Meta Features
2696, First of all, let's start by replacing the missing values in both the training and the test set; we'll be combining both datasets into one
11499, if you need scaling you can also use MinMaxScaler
26346, Clean and Edit Dataframes
23921, Model deep learning seq
4131, Some Plots to Visualize the effects of features on driving the Sale Price
28368, location
8032, There are outliers for this variable, hence the median is preferred over the mean
32076, Here
15589, Your submission scored 0
24139, Decision Tree Model
21821, Features
19601, prediction xgboost
28031, GLOVE INITIATIONS
39239, Analysis of opening periods
11696, Single Imputation for categoric columns
36634, To compare the performance and efficacy of each technique we use k-fold cross-validation. This technique randomly splits the training dataset into k subsets; while one subset is kept to test the model, the remaining k-1 sets are used to train it
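A minimal sketch on synthetic data, with a hypothetical logistic-regression estimator standing in for the real models:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((100, 5))
    y = rng.integers(0, 2, 100)
    kf = KFold(n_splits=5, shuffle=True, random_state=42)
    scores = cross_val_score(LogisticRegression(), X, y, cv=kf)
    print(scores.mean(), scores.std())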
1068, Fit the model on test data
2475, now our datasets look like this
35546, perform some feature selection like Lasso
2518, Gaussian Naive Bayes
19608, Bivariate analysis
4548, Finding Missing Values
23898, Hurray
40869, Optimize Ridge
30316, Start positions and end positions of selected texts in tokenized source texts
34261, Rolling windows
3304, Special example missing value filling
32598, Evolution of Search
8873, GarageCars and GarageArea are highly correlated with each other, and from the heatmap both are highly correlated with SalePrice; hence we remove GarageArea from our analysis, since it adds redundancy
24406, The values towards the top are the most important features and those towards the bottom matter least
4364, One outlier may be detected: TotalBsmtSF above 6000 while SalePrice is low
7423, I simply choose skewness as the criterion to select between standardization and Yeo-Johnson transformation, because I am not training my data on linear regression models
43356, Outliers
22775, loading the pretrained vectors into the embedding matrix
10985, let's move to the last step, which is importing the test data and applying the GBR algorithm
16444, instead of filling those values from their cluster, we assume that those people don't have a cabin
34050, Label Encoding of Category
6502, Submission
6791, KNN
25220, BsmtFullBath, FullBath, and BsmtHalfBath can be combined into a TotalBath, similar to TotalSF
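One common formulation (the 0.5 weight for half baths is a convention, not necessarily what this notebook used):

    import pandas as pd

    df = pd.DataFrame({"BsmtFullBath": [1], "FullBath": [2], "BsmtHalfBath": [1]})  # toy stand-in
    df["TotalBath"] = df["BsmtFullBath"] + df["FullBath"] + 0.5 * df["BsmtHalfBath"]
    print(df["TotalBath"])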
13425, Hyper parameter Tuning
5684, Create a new Age Category category variable from Age feature
25816, The distribution is right skewed
17032, Rare values in categorical variables tend to overfit models; this is especially true for tree models
21409, Show the top performing feature pairs
28812, Train our Model
26733, Plotting sales for each category in each of the state
22054, part of our task is to teach an algorithm the following relationship, which is not clearly understandable for humans
20850, We can now process our data
15187, I know there are missing values, but in order to better understand the data I create some segmentations and then compare them with the new graph without NaN
27165, Due to the strange behaviour in Year Sold, we take the difference of each feature with Year Sold
6911, Correlations and Heatmap
33875, ElasticNet
41936, As one might expect the output from the generator is basically random noise
37082, Findings: the predictions look quite similar for the 6 classifiers, except when DT is compared to the others. Create an ensemble with the base models RF, XGB, DT, KNN, and LR; this can be called a heterogeneous ensemble, since we have three tree-based, one kernel-based, and one linear model. We use the EnsembleVoteClassifier from mlxtend's classifier module for both hard and soft voting ensembles; the advantage is that it requires less code to plot decision regions, and I find it a bit faster than sklearn's voting classifier
21254, Binning to renta age and antiguedad
39197, Random Forest
42255, Repeat pre processing defined previously
27983, Prepare our data for our model
10258, Separating types of features
9885, Fill Missing Age Value
33445, XGBClassifier
6705, Test Dataset
21184, How many missing values are there in the training data? Some features have almost all their entries missing
27530, Display heatmap of multiple time series
9285, Joint Plots continous vs continous
5428, in the cases where the categorical variables are NaN, the numerical ones are 0
19338, we have 450k Question Sets consisting of 540K separate Questions
25854, Cleaning Text
40543, Prepare data
12032, Predictive Score
38582, Our cleaned text data
609, now from here it looks more like S is the interesting port, since survival is less probable there if you are a 3rd-class passenger
36088, Writing Submission File
11174, Do some PCA for the dataset to remove some of the collinearity
5132, Numerical Variables
7961, We ll divide the data into numerical and categorical data and verify their descriptive statistics
22368, Encode Categorical Data
19392, Create the final sentences
26858, Look at summary of numerical fields by using summary function
1184, I use the boxcox1p transformation here because I tried the log transform first but a lot of skew remained in the data
17668, I use the title to find missing values in the column Age
8383, Knowing the type of our data
11554, Engineering time features
13160, Scaled Features
11805, I m going to remove the Id Ticket Name and Cabin variables
43287, This command is equivalent to using the r2_score implemented by sklearn
18128, LGBM Regressor
28165, Linguistic annotations
23127, That's true: passengers who paid more for their fare mostly survived
17911, CHECKING FOR NULL VALUES IN ALL COLUMNS IN BOTH DATA TRAIN AND DATA TEST
9752, Reshape Dataset
4410, Using pd get dummies on an entire DataFrame
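For example, with illustrative column names; numeric columns pass through untouched while object columns are one-hot encoded:

    import pandas as pd

    df = pd.DataFrame({"Sex": ["male", "female"], "Embarked": ["S", "C"], "Age": [22, 38]})
    print(pd.get_dummies(df))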
12163, The deeper the tree is the better it fits the training data
5333, Display values of variables on logarithmic axes
8332, There are many
12764, Since I dropped some features from the training set, I have to drop the same features from the test set and do the same feature engineering steps
38562, Random Forest Regressor
22460, Dot box plot
15130, Check Survival Rate
17568, Logistic Regression
18972, Display the distribution of a continous variable with violin and boxplot
17680, SURVIVAL PERCENTAGE
34168, First let s take a look at the decomposition of the item price time series
22214, Dataset
3660, Importing and dropping ID
35835, Handling categorical features
19868, Above for first record with age 22 z score is 0
36407, We use k-nearest neighbours to fill in blanks for some of the variables that might be able to be filled in using geographically nearby properties
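A minimal sketch with sklearn's KNNImputer (an assumption; the notebook may have used a custom spatial lookup instead):

    import numpy as np
    from sklearn.impute import KNNImputer

    X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, 6.0]])
    imputer = KNNImputer(n_neighbors=2)  # fill each gap from the 2 nearest rows
    print(imputer.fit_transform(X))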
14745, Evaluate Again
1882, Feature Engineering
9483, AUC Curve
18881, There is a possibility that the test and train datasets do not have the same NaN values, especially if they are small
35582, Preparing vectors and BOW
41112, Basic Statistic using Pandas and Numpy
17012, the data is almost normal, so we fill missing values with the mean
28129, Bag of Words
12081, Pre process
36365, Using GridSearchCV to search for best hyper parameter of XGBoost
34338, Numerical Columns
17876, Data Preparation
33139, Build a simple sequential model in Keras with just a few lines
41156, FEATURE 2 NUMBER OF TYPES OF PAST LOANS PER CUSTOMER
24812, fill the missing values
8371, Random Forest
3173, transform pandas to cudf object
18639, We have 27734 missing values and the mean age is 40
5570, At this point we are dealing with the correlation matrix and scatter plots to choose the best features for our model. But these methods don't include any feedback to tell us whether our choices are right or not; all of them depend only on native statistical techniques
25392, Train the model using data augmentation
34034, when alpha = 250, RMSE is minimized
29409, We deal with the missing data a bit later
9853, Ensemble
33570, Box Cox Transformation of highly skewed features
9331, While by using the categorical variable GarageCars
2665, Quasi Constant Features
31010, we add a flatten layer that takes the output of the CNN, flattens it, and passes it as input to the Dense layers, which pass it to the output layer
36294, try on scaled data
4310, Summing up number of parents children and siblings across passengers
30834, Year Feature
13556, Categorical features by Fare
15642, Re-forecast predictions based on new features
28564, BsmtCond
12269, LGBM
33275, Applying Augmentation On Melanoma Classification
14295, Cleaning Dataset
3799, Merge multiple related or same-kind categorical features to create a new one
12299, FamilySize
40988, To make it horizontal, use the transpose function at the end
13549, Embarked Feature
27780, Label Encoding
31927, Confusion Matrix
26508, To evaluate network performance we use and to minimise it is used
31811, Calculate IoU
27931, Train model
18669, Define RNN model
28257, The nullity correlation heatmap below helps us identify whether missing values in one feature affect missing values in another
32860, Removing outliers
4029, look at columns with high correlation with SalePrice
17598, Correaltion and feature Importance
26328, Try xgboost
6931, if we drop all rows with NaN values, we drop only 8% of the data
31344, Prepare Data
33504, Italy
878, Correlation Matrix
31941, Make Predictions
30570, we simply merge with the training data as we did before
41074, Remove URLS and mentions
8068, Kitchen Quality
10671, Assign model
20455, Client gender
24169, Learning Curves
3773, Sex
14000, Save data
13683, Embarked Did the embarkation location play any part
26024, Label Encoding
20165, Case 3 GrayScale Dimensionality Reduction PCA
7910, Inspect the top correlated features values
18347, TACKLING NON-LINEARITY
15835, look at the median ages of passengers grouped by sex, class, and title
13682, We can glean that children had a high probability of Survival
14195, Comparing cross val score
36613, Compute standardization of data
18841, K Means Clustering to identify possible classes
15576, Features to normalize
43027, Hyper parameter tuning
40403, Test Paths
41166, How many samples for each class are there in the dataset
27776, Calculate the Mean Absolute Error in Validation Data
22622, The Model is trained in the following steps for each minibatch
25275, Test Time Augmentation
238, Library and Data
6483, There are several additional cases where a categorical variable is None; the relevant numerical variable should then be 0
18998, Categorical encoding
8895, LASSOCV
13660, Cabin
38032, Decision Trees
36804, We need an example to actually process so below is some text from Columbia s website
27060, Create a submission file
42190, ANN The complete example
32301, Display heatmap by count
35107, Mixup Data Generator
17465, Ticket
12820, as we know, at 2:20 a.m. on April 15, 1912, the British ocean liner Titanic sank into the North Atlantic Ocean
41778, Target and features
3171, Data Preparation
41569, Stage 6 Plotting the t SNE
7244, Explaining Instance 2 2 3 2
23055, after data augmentation it's making good predictions
9939, Duplicate Values
33442, RandomForestClassifier
20301, Converting String to Numeric
15468, Data Cleaning
22472, Timeseries decomposition plot
42371, Splitting data train test
21519, Creating a Dataframe of Loss And Accuracy
15068, Fare Group
22342, N Gram
40972, This way it can take a list of column names and perform an aggregation on all of the columns based on Store ID and the store's volume by using Name 1, Name 2
15980, We need to divide the room feature into groups
39198, Submission
33725, Binning on Age Feature and Fare feature
25891, Automated Readability Index
19088, Age violinplot based on Pclass Survived
12650, all we have to do is submit the outputs to the competition; the Random Forest model gives a higher test score than the k-NN model. If you've got this far, please give my notebook an upvote; it really helps
12352, BsmtFullBath Basement full bathrooms
10727, Feature Scaling
26554, Label Encoding
22790, look at a more practical example of housing prices problem to understand it better
31117, correlation between features
8021, Embarked
13498, Age
39702, Comparison between the original text and the lemmatized text
38413, Yup it works
47, Age Column
32469, Mortality Model
29219, Takeaways from the heatmap
1247, plot how SalePrice relates to some of the features in the dataset
16249, Globals
29051, Selective Gamma correction
15123, This finds the percentage of each Age class
32479, Combine Demographic Based and Memory Based Probabilities
12381, Converting ordered categorical fields to numbers
20443, installments payments
3503, look at the correlations between this new predictor set and Survived
39293, Samples of seniority 2 are made of items that have previously been sold in this very shop and so they are in the local catalogue of this shop for sure
31848, Monthly sales
5576, Lasso
23607, Evaluation Function
15549, Creating Dummy Variables for categorical data
31829, In the code below we'll resample the majority class
13018, Loading the data
30925, Count the error
35392, TTA Test Time Augmentation
861, Uncomment if you want to check this submission
10267, Exploring numeric features
20297, Family Size And Alone
14923, However, some machine learning models, such as XGBoost, treat the categorical features as numeric features
7539, Wait, Embarked also has 2 missing values, so let's fill them; but first we need to explore the Embarked column
41845, Bi directional LSTM with Glove embeddings
15315, Lets examine the types of classes that were present
11133, we look at a polynomial fit
33351, Manually calculate the Partial Autocorrelation
37801, Residual Histogram
22913, Findings
24873, Prepare the Test Data
11114, Handling Null Values
43298, Make predictions with the entire dataset
23758, Selecting the Columns for manipulation
3961, Perform BoxCox Transformation on skewed features
27542, Display heatmap by count
27089, Show first n important word in the topics
31852, Traget lags
14987, Creating a Function with multiple models
3769, Heatmap
16399, Dropping Unecessary Data
28306, Checking the root mean squared error for the RandomForestRegressor method
8302, XGBoost Extreme Gradient Boosting
21652, Avoid the series of lists TRAP
29773, We repeat this using the optimal model
18652, Numerical variables Vs Target variables
16461, Youngsters are more likely to survive
17604, Naive Bayes
37666, Model instantiation
6208, Other SVM s
39412, Parch
28730, Hummm
26912, Handle Missing values on test data
1704, Treating Missing values
39945, The metric RMSE
42001, Sort by a certain column's order, ascending
8823, with Child = 0, Young Adult = 1, Adult = 2, Old = 3, Veteran = 4
8874, Removing SalePrice column
13843, Correlating categorical and numerical features
40028, Insights
8490, Stacking the Models
40140, Prediction
39188, Update: there is a very weak correlation between the features, as can be seen below
40002, Our test set is missing three columns: diagnosis, benign_malignant, target
1058, We check next the important features that our model used to make predictions
18592, create a new data object that includes this augmentation in the transforms
37526, Embarked Sex Fare Survived
19858, The boundary using a multiple of the interquartile range coincides roughly with the boundary determined using the Gaussian distribution, in years
15469, Data Cleaning Extraction of Meaningful Data from Meaningless Data
26521, Distribution of product among different genders
2767, Description of new features
9948, Heatmaps
937, Meet and Greet Data
38766, XGBoost
27184, on the leaderboard Score
1515, Very interesting We found another strongly correlated value by cleaning up the data a bit
13490, Concat data
28850, Load Libraries
24732, Predictions
17566, Female passengers survived more than male passengers, i.e. women and children would have been the priority
39010, Here we have looked at the ensembling; however, the final score can be improved with better feature selection as well as feature engineering and hyperparameter tuning of the individual estimators
18686, create an ImageDataBunch object
9346, Predict the missing data in the Age feature
16690, Decisions Taken
14112, CountPlot
17744, However this mode would assign the NaN values the wrong value of C
30587, Aggregated Stats of Bureau Dataframe
786, ElasticNet
37482, spaCy can do a lot more but for now we are going to turn to sklearn to vectorize the lemma version of the sentences
4877, Light GBM
269, It's an important method for dimension reduction. It extracts a low-dimensional set of features from a high-dimensional data set with the motive of capturing as much information as possible; it also helps to visualise high-dimensional data, reduces noise, and makes other algorithms work better because we are injecting fewer inputs
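A minimal illustration of that idea with sklearn's PCA on synthetic data (the 95% variance threshold is an assumed, common choice):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(100, 20)  # 100 samples, 20 features

# Keep enough components to explain ~95% of the variance
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                      # fewer columns than the original
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained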
19372, Addressing outliers
25579, MSZoning
21135, These were quite simple let s look further
23125, Findings: the distributions of Fare between the different categories of Survived are distinct, with very little overlap, which makes it a comparatively strong predictor for Survived. This is borne out by the correlation value and a p-value small enough that we're confident the correlation is statistically significant. Survival is positively correlated with Fare, so the more you pay for the fare the better your chances of survival, which is quite evident from the box plot
9282, Carry out univariate and multivariate analysis using graphical and non-graphical (numbers representing the data) methods
38078, Lets take care of remaining columns with missing data
21021, Lets create a corpus of all the tweets in the training set
13337, Ticket: extracting information from this feature and converting it to numerical values
15722, K nearest Neighbors KNN
14939, Correlation
15702, The histogram tells us that most passengers have ages between 16 and 44
12095, DataFrame concatination and Y separation
8766, Handle NAN age
34744, Taking our sample text corpus
22282, One Hot Encoding one of the most useful techniques that a data scientist can know
42182, Train the model with fit method
39981, We fix this later
30923, Our predicted values
40338, We haven t shuffled test to we can just create the submission as follows
36020, Ensembler
28186, idf(W) = log(#documents / #documents containing W)
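A tiny worked example of that formula (the corpus is made up for illustration):

import math

docs = [['the', 'cat', 'sat'],
        ['the', 'dog', 'ran'],
        ['the', 'cat', 'ran']]

def idf(word, documents):
    containing = sum(1 for d in documents if word in d)
    return math.log(len(documents) / containing)

print(idf('the', docs))  # 0.0: appears everywhere, carries no information
print(idf('dog', docs))  # log(3/1): rare word, high weight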
6676, Bagged KNN
8980, Similarly, I noticed that the features Condition1 and Condition2 are dependent on one another, so I want to split their values into their own columns
26115, select numerical and categorical features
39156, Prediction
29617, Bivariate analysis
16060, Fare Vs Survived
26959, merge data into one dataframe and extract feature from date colomn
2976, We'll consider that when more than 20% of the data is missing we should delete the corresponding variable and pretend it never existed
8319, Choosing the regression model SVR with lowest RMSE and performing hyperparameter tuning using RandomizedSearchCV
31675, Data Preprocessing
33233, now the extracted features are stored in the variable resnet features
35131, adding an additional feature that records the no
20946, Import Keras layers
39823, let s call this image gen we created
41355, There is a correlation between BsmtCond and SalePrice
14855, To further summarize the previous trend as my final feature I created four groups for family size
15787, This is our X train
7358, Drop noisy features
11471, Gaussian Naive Bayes
39432, TensorFlow neural network, insurance claims, 0.268
31089, TotalBsmtSF
9173, I feel like that helped
21367, To find more noise you can run the following code to get random samples
4792, For features that can t be missing we have taken the mode and the default value as per data description
12621, create dummy variables
22904, WordCloud
40420, Word Clouds
4097, Encoding
29042, Randomly pick 5 images from each of the five labels
12677, Model 1
3643, Cabin
15907, Ticket short with Survival
33440, LGBMClassifier
14289, Draw the decision tree using graphviz
23721, we perform the same steps for Test Dataset as well
29545, Most common words
39240, Remove outlier shops
2546, Conclusion
15063, Import data
21924, XGBoost parameters from reference notebook were hypertuned using CV but it took long time to run
1306, Stripplot
2188, Visualising missing values
19444, Using a 4 layer neural network with
22856, Russian Ruble Price per month
24890, Random Forest
4666, most correlated features
9029, I took the average of BsmtFinType2 values within 1 SD around the BsmtFinSF2 value
3624, Create Binary Variables
20601, Name
19698, Data Augmentation
41701, Somewhat weird trends; it looks like some of the top places only opened for business part way through
4235, Random Search
9317, In summary, the lesson here is that we can't give an ordering to something that is not supposed to have one, as in the previous section. When we do, we should keep in mind that we are making an assumption when we put numerical values in our data; as such, we should keep track of our choices and how the model is reacting to them
38776, Submission
3277, Check again if we have any feature left with missing values
16618, Sex
23311, Previous Value Benchmark
23084, STEP 3 Model and save your prediction
9942, Creating features that the person is married or not and the family size with SibSp and parch column
6044, try Blending our Models
12319, Preprocessing
34429, Not Disaster
2720, Support vector regression
6527, Create Dummies
27269, When dealing with a continuous variable, how do you calculate the probability for a given value? The probability of any single exact value is zero
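The usual way out, and what Gaussian Naive Bayes does, is to replace the point probability with a class-conditional density, here Gaussian:

P(x \mid y) = \frac{1}{\sqrt{2\pi\sigma_y^2}} \exp\!\left(-\frac{(x-\mu_y)^2}{2\sigma_y^2}\right)

where \mu_y and \sigma_y are estimated from the training samples of class y.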
32874, Modeling the data
7661, output predictions
8287, Viewing Model Performance
1418, Mother vs Survived
20294, Fare
2230, Which Material Combination increased the Price of Houses
33719, We know that Cabin and Embarked have missing values
22449, Lollipop Chart
2808, Matrix
28959, examine numerical features in the train dataset
16837, Making predictions on the test set
21410, Create Test output
8547, Basement Features
10137, Predictions
3960, Skew Visualization
9958, Voting Classifier
15176, Age
1751, Comparing the KDE plot for the age of those who survived before imputation against the KDE plot for the age of those who survived after imputation
24418, Timestamp
14190, GaussianNB
30388, Making Data for the network
1778, Confusion Matrix
23462, Training set
21336, Feature Engineering
31538, GarageYrBlt
12907, Import Libraries
9434, Displot
15727, Random Forest Feature Importance
10533, Skewing the features
18957, Display distribution of a continuous variable
20754, Fence column
37322, Select the number of convolution layers
19805, Classic Encoders
15251, Feature importance
31353, Interactive Pandas plots
32958, Define accuracy metric
4684, Rows are precious for house price prediction, so fill the one SaleType missing value with the most common value of SaleType
37011, Best Selling Aisles in each Department number of Orders
22781, Point Type
5964, family size feature
42360, slicing back into train and test
26987, 80/20 train validation split
6865, Dealing with BsmtFinSF1, BsmtFinSF2, BsmtUnfSF and TotalBsmtSF; please notice the dependency
39326, Load Embeddings
24590, PIPELINE
11521, Gradient Boosting Regression scores
37980, Early Stopping in Keras
3314, Lasso
40005, Patient id counts
19066, The evaluation requirement in this competition is that for each image name in the test set you must predict the probability that the sample is malignant
21545, Basic CNN model
22978, Correlation based on week of month, faceted per DayOfWeek
2952, Predict SalePrices
32423, Understanding how the Network Works
34403, Create the model and compile it
38067, NLP Features distribution conclusions
39419, replace the NaN value in Fare in df test with the mean value
503, People who are alone are more likely to survive
17629, Title
33314, Plotting Decision Regions
38593, Final Predictions
24962, Kernel SVM
19258, Transform the data into a time series problem
39028, Learning curves
21431, Here train
15517, The change of Age as a function of Title, Fare bin or SibSp is quite significant, so I'll use them to guess the missing values
18392, Training of the neural net
20415, Feature word share
10372, Relation between categorical features and dependent variable
40136, Export to CSV and upload to GCS
22872, Model Definition
12676, Modelling
30361, Predict all province greater than 500
4274, Continuous Features Nulls
29567, Non correlated cols are removed
16836, Precision and recall tradeoff
17797, Estimate age
11969, treat categorical and numerical features seprately
32738, Encoding categorical features
25238, Barplots display
4382, Generate New Feature
35811, Shop features
20322, Section 3 Preprocessing our Data
9039, Much better. The data points are now fairly symmetrical and there aren't as many outliers on one particular tail
11994, plot the dependence contribution of features on sale price
29860, Accessing the data element by
14269, Gaussian Naive Bayes
17262, Cabin Feature
43333, Make Submission
13800, I also follow some old methods of data analysis to increase focus
27615, Test Data Preparation
34262, The lags and rolling windows created NaN values
33883, dropping features with small variance
27399, Convert images to tensors
24819, RobustScaler Definition
41659, From section 3
30916, Simple bidirectional LSTM with two fully connected layers. We add some dropout to the LSTM since even 2 epochs is enough to overfit
27380, working on the lags
32854, EDA
38760, Support Vector Machine SVM
39185, Number of crimes that occurred in each month and over the years recorded in the dataset
11506, Setting for K fold cross validation
11734, compare how the model performs on the test dataset
4399, do some feature engineering with the name variable
38687, After Mean Encoding
32412, Preparing data
36624, Model Performance Analysis
11822, Lets create a correlation matrix using heatmap
7445, Logistic Regression in Python
6252, Fare
4338, Fares in first class range widely, and a number of tickets were sold at a much higher price than the average. Could they have been sold at a higher price closer to the sail date?
7757, Pipeline for numerical features which applies an imputer to fill in the median of each feature for instances that don't have a value for that feature, and after that standardizes features using a standard scaler
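A minimal sketch of such a pipeline with sklearn (the toy array is illustrative):

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Median imputation followed by standardization, as described above
num_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler()),
])

X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, np.nan]])
print(num_pipeline.fit_transform(X))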
3872, How did we generate scores
22148, You could try to optimize those weights in some other way
25444, Training Dataset
22288, Check nulls
37411, There are now no 0 importance features left
10380, Let's make a plot to intuitively understand what's missing in our data and where it is missing
33254, Categorical features
4768, Replacing the missing values in the Embarked column with its most frequently occurring value, since there are only a few NAs present
20935, Plot an example
25799, As mentioned before fortunately these top contributors are the same for both datasets
42, How Big is your family
14223, Title Map
46, Convert Categorical variables into Numerical ones
14114, ViolinPlot
5310, Univariate feature selection
26000, Creating Columns
20838, We re going to set the active index to Date
10375, Data Pre processing
5, focus on numerical features with one missing value and replace them with 0
10682, NO MISSING RATIO
39195, Normalization
34284, multicollinearity
12660, Data Pre Processing
41781, The labels are coded as integers corresponding to the digit in the image
16018, This is the survival rate for each SibSp and Parch; how about joining them together?
41250, The features created below are known as indicator variables: raw data is transformed into a much more straightforward form of input, allowing the ML model to easily recognize these features and making the model more robust. In other words, we can also classify this as revealing insights
29897, Feature Standardization
23343, Callbacks
7852, The Random Tree Regressor A Terrible Model
28733, Understanding the revenue for each Shop Name
15901, Cabin
36630, Features contains a big list of features; is there anything unique? Can we build a relational database on these features?
27944, Create a Submission
422, One of the figures we may find interesting is the one between TotalBsmtSF and GrLiveArea
29389, Final neural network prediction model: the input layer is the flattening layer's output, i.e. the flattened vector; a hidden layer of n3 = 64 nodes with ReLU activation; an output layer with as many nodes as classes (10) and softmax activation
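A sketch of that architecture in Keras (the 28x28 input shape is an assumption, e.g. for MNIST-style images):

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense

model = Sequential([
    Flatten(input_shape=(28, 28)),        # the flattened vector
    Dense(64, activation='relu'),         # hidden layer, n3 = 64 nodes
    Dense(10, activation='softmax'),      # one node per class
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()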
31416, Training and Evaluating the CNN MNIST Classifier
14302, Creating a FamilySize feature corresponding to SibSp and Parch
18097, Here we use the auto cropping method with Ben s preprocessing as explained in
11418, Use case 10 US Store Sales Exploratory Data Analysis
830, columns and correlation before dropping
13274, Download datasets
40011, Image location
40014, Ages per patient
4178, Outliers in continuous variables
23310, Insert missing rows and aggregations
17043, Mean Encoder
555, submission for svc
35903, Submit
37558, How many files loaded
38547, Average Length of Sincere vs Insincere
4778, RandomForestClassifier
28101, Removing the not needed ID column
24797, CNN
3917, Correlation
5601, Garage
28782, I was not able to plot the KDE of Jaccard scores of neutral tweets for the same reason, thus I plot a distribution plot
1236, Importing learning libraries
425, Concatenate both train and test values
13070, Logistic Regression
7983, Merge BsmntFS and add Unfinished Fraction
14246, We saw that Sex and Class are important for survival, so let's check the survival rate with Sex and Pclass together
34065, Parch Survived
25206, Check the top 10 and Bottom 10 correlated and non corelate features respectively
14616, Station 3 Exotic impact
34968, We are going to get an age estimate for
5442, Tree Drawing function
13230, Feature Importance using Permutation Importance importance
14109, RegPlot
31926, The calculate-performance function calculates the accuracy and returns data about the model's false predictions, to visualize and analyze them
28062, Feature Engineering
13492, Others
35306, CNN
7080, we fill the missing Age according to their Title
902, Create 3 functions for different Plotly plots
31738, Class Imbalance
9592, Concat to get Pclass details
11018, We cannot process the age as a whole number
39189, Extracting features from the addresses
5888, Fitting for test data also
37706, Accuracy was 67% and now it is 90%
33728, Fit Model Using All Data
28614, Exterior
19715, Fit the model
17459, Family
12890, Fare
11888, Decision Tree
31308, Average number of customers across all the stores every day is 633
27365, working with shop name
34676, Sales distribution by item category
30374, Cosine Annealing
2268, Sex
39061, Glove Embeddings
11886, ADABOOST CLASSIFIER
15481, We leave out Name, Ticket and Cabin because their syntax brings no direct insight into the survival rate
17562, No missing values left so we can proceed further
2350, Light Gradient Boosting
2575, Model and Accuracy
31243, Kaggle Submission
14874, Looks like survival rates for the 3rd class are substantially lower. But maybe this effect is caused by the large number of men in the 3rd class, in combination with the women-and-children-first policy
11215, Plot prediction vs actual for train for one last verification of the model
26570, We need to know the shape of our images
39262, DISTRIBUTION OF SALES AMONG SHOPS THROUGH TIME
16527, Read the data, both train and test, and to make sure they are in the same format we concatenate them for the time being so that both undergo the same operations
12833, Feature Engineering
29007, CNN
41833, Confusion Matrix for ensemble models
36803, Using spaCy we'll load the built-in English tokenizer, tagger, parser, NER and word vectors. We indicate this with the parameter en
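A minimal sketch; note that older spaCy versions accepted the shorthand 'en', while newer ones expect a full model name such as en_core_web_sm (assumed installed here):

import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp("Columbia University is in New York City.")

print([(t.text, t.pos_) for t in doc])         # tokenizer + tagger
print([(e.text, e.label_) for e in doc.ents])  # named entities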
27116, let s examine categorical features
23075, Fill Cabin
27271, Calculate the mean and SD of the train data for each feature
1123, I ll also create categorical variables for Passenger Class Gender and Port Embarked
1358, In pattern recognition the k Nearest Neighbors algorithm is a non parametric method used for classification and regression
38624, Used it upto version 7
40018, Individual patient information
21120, First we look at some global measures of our data set
19455, In artificial neural networks the activation function of a node defines the output of that node given an input or set of inputs
30105, PREDICTIONS
20596, Count Plot for Features
14904, Embarked Fare and Pclass
12035, MSSubClass is not a numerical variable, so let's transform it into a categorical variable
23845, That looks great: the newly engineered features have outperformed the existing features in the RandomForest feature importance plot
20808, Dealing with Data for Modelling
9841, Which is the best Model
5175, Linear SVC is a method similar to SVM. It also builds on kernel functions; support vector clustering is the variant appropriate for unsupervised learning (Reference: Wikipedia, Support vector machine / Support vector clustering)
6688, Scatter Plot
22531, SibSp vs Survived
17270, Prepare data for train and test model
39311, XGBRegressor training for predictions
23509, There are 5 elements in the class
16523, It looks good
20919, Discount rate
15750, We may take Age median Age of passengers with same
5582, Afterward, the Salutation column should be factorized to fit in our future model
18906, SibSp Feature
6303, Support Vector Machine
29873, Optimize the Metric
8358, Correlation of the variables
27478, It would be possible to train the net on the original data with pixel values 0 to 255
10268, Observations
2527, AdaBoost Adaptive Boosting
17691, INITIALS AGE GROUP
11836, Additional Missing values in test data
51, Logistic Regression
27182, Final prediction with XGBoost on test dataset
1832, Columns with NaN Values
13534, Liner Regression with Autofeat from kernel Autofeat Regression feature engineering comments652596
33651, Final Submission
549, SVC features not scaled
43364, Submission
7052, Type of foundation
32504, Extracting VGG19 features for training and testing datasets
36644, Class 0 is fish and class 1 is no fish
23382, Random sized crop: the model needs to be able to detect a wheat head regardless of how close or far away the head is from the camera. To produce more zoom levels in the dataset, the crop method takes a portion of the image and zooms in, creating a new image with larger wheat heads
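One way to implement such a crop, sketched with albumentations (an assumption, not necessarily the author's library; the size ranges are illustrative for 1024x1024 wheat images):

import albumentations as A

transform = A.Compose(
    [A.RandomSizedCrop(min_max_height=(800, 1024),   # zoom range
                       height=1024, width=1024, p=0.5)],
    # keep the bounding boxes consistent with the crop
    bbox_params=A.BboxParams(format='pascal_voc', label_fields=['labels']),
)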
40805, Data Analyze by visualization Method
37000, Period of Reorders
32179, Make dataset
42652, Prediction and Submission
25804, Splendid
30707, define augmentations using albumentations library
10062, Correlation matrix
19576, Handle Duplicate Shops
39784, Whenever the logerror of one is zero the other is not This pattern is consistent with other nearest neighbors as well
5396, A smart way to impute Age; refer to the Dietanic prediction notebook
42047, Replacing column heads
14192, DecisionTreeClassifier
42971, Exploratory Data Analysis
35199, Ridge Regression L2 Penalty
41346, There are no missing values anymore
23874, Gradient boosted decision trees are among the most powerful and widely used models for supervised learning
4253, Ordinal Features
4730, We can use the info method to output some general information about the dataframe
1509, lets take a look at how the housing price is distributed
2902, GradientBoostingRegressor
27613, Vgg16
13309, K Nearest Neighbor
15867, Create Random Forest model
4634, Sorting columns
41037, build an ensemble again
43383, And we pass it to our class and call prepare
37943, Geographic Spread of Covid 19
34867, we import the embeddings we want to use in our model later
4325, SibSp: siblings and spouse count (brother, sister, husband, wife, etc.)
25718, Data Visualization
6421, Outlier Detection
28495, Train the model
34047, Create Hour Month and Year Columns
20543, XGBoost Regressor
19266, CNN LSTM for Time Series Forecasting
15380, all titles are in shape
36748, Creation of X train and y train
33650, Model Comparison
34345, Choose Algorithm and Fine Tune
1255, We use the scipy function boxcox1p which computes the Box Cox transformation
16152, Fare
24007, Lets visualize some train examples with it s labels
26571, we get to the model itself
23402, There also appears to be a weekly structure to the date y variable, although it isn't as cleanly visible. However, the class probabilities appear to swing much lower on a weekly basis
41466, Verify we do not have any more NaNs for Embarked Val
33718, FEATURE Embarked
16321, NOTE
41082, Roberta Base Model
12178, Passengers of higher classes have a better chance of survival
8988, I am also interested in how the number of stories affects Sale Price
38236, Pre print Analysis
4352, OverallQual Rates the overall material and finish of the house
16614, Feature Age
4700, We can predict the LotFrontage missing values with a R 2 of 0
559, submission for decision tree
17416, Producing the Submission file
32543, Feature Engineering is a technique by which we create new features that could potentially aid in predicting our target variable, which in this case is SalePrice. In this notebook we create additional features based on our domain knowledge of the housing features
2320, Fitting a regularized linear Model with k Folds
43128, Lets submit it
5365, Diplay charts with date range slider option
15988, Support Vector Classifier
10827, start from checking values in Ticket category
32563, Diff Sex
14679, Removing name, ticket number and passenger id, as they are not important features in predicting whether a passenger dies or survives
1780, Choosing Cross Validation Method
20949, Data Preprocessing
31705, Modelling
18749, Aggregating
16956, Sibling Spouse
40489, XGBoost
33858, Featurization through weighted tf idf based word vectors
39746, Modelling
13054, Random Forest Classifier
25575, OverallQual
19059, Specify the evaluation metric and check how many labels there are
34639, Data Visualisation
32703, function to remove emoji
38962, In order to select the best model epoch weights we need to create a valid competition metric
3915, Check for any remaining missing values
1207, Ensemble methods
12520, now we understand data with the help of visualization techniques
33471, COVID 19 global tendency excluding China
32158, Brilliant, isn't it? But we have a problem
3372, I just run those functions on my train
9329, From continuous to categorical
12173, Building Pipeline
21234, Set initial bias
33266, Prediction
16504, Model Evaluation
15726, Compare the chosen machine learning models
22814, And just to satisfy my curiosity: there is only one item under item category 71, and it is indeed present in the test set
29033, Save clean datasets
13741, AdaBoostClassifier
31636, PdDistrict
9225, Train the Random Forest
15419, let s have a look at the age of the passenger
43057, For example var 0 var 1 var 2 var 5 var 9 var 13 var 106 var 109 var 139 and many others
4626, Distplots are histplots with curves on them
22507, Below we are going to divide the data into mini-batches and iterate over them many times to approach the global minimum cost
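An illustrative mini-batch loop for a linear model (synthetic data; the learning rate and batch size are arbitrary choices, not the author's):

import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(1000, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + rng.randn(1000) * 0.1

w = np.zeros(3)
lr, batch_size, epochs = 0.1, 32, 10

for epoch in range(epochs):
    idx = rng.permutation(len(X))                  # reshuffle every epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        Xb, yb = X[batch], y[batch]
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(Xb)  # MSE gradient on the batch
        w -= lr * grad

print(w)  # should approach [1.5, -2.0, 0.5]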
216, BernoulliNB
3415, Crosstabs to check for correct totals
23229, We convert the data to a shape acceptable to our neural network, and y test and y train into numpy format
24849, Preparing the inputs and shuffling the dataframe to make sure we have a bit of everything in our training and validation set
30110, Our library assumes that you have train and valid directories
18904, Age Feature
13314, Solution
7104, k Nearest Neighbors
13736, KNN
13565, Ploting Title Distributions
1234, Preparing the data
4181, Age
22908, TF-IDF Count
29388, Flattening layer: we now add the flattening layer
32123, How to find the correlation between two columns of a numpy array
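For reference, one way to do it with np.corrcoef (toy array):

import numpy as np

rng = np.random.RandomState(0)
arr = np.arange(20).reshape(10, 2) + rng.rand(10, 2)

# Pearson correlation between column 0 and column 1
print(np.corrcoef(arr[:, 0], arr[:, 1])[0, 1])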
20609, Fare
3549, Embarkation
24816, LabelEncoder converts categorical values to numeric
39173, index 27470
604, We study the correlation of Age with Pclass using a violin plot which is also split between survived and not survived
23653, EDA
13973, Pclass vs Sex
8247, Inspecting Data
25205, Analyze Numeric features for correlations with Target variable
20221, Evaluate the model
33772, One Hot Encoding
19961, Simple modeling
3977, Examine model performance
15651, K Nearest Neighbors
18341, For this purpose we are going to use the Kruskal-Wallis test, also known as one-way ANOVA on ranks, which is a non-parametric test used to determine if there is a statistical difference between two or more groups
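A minimal sketch with scipy (the groups are made-up numbers, e.g. a numeric feature split by a categorical one):

from scipy.stats import kruskal

group_a = [20, 22, 25, 30, 28]
group_b = [35, 40, 38, 42, 37]
group_c = [21, 24, 26, 23, 27]

stat, p = kruskal(group_a, group_b, group_c)
print(stat, p)  # a small p-value suggests at least one group differs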
22445, Diverging dot plot
15386, I added a condition that a female/child should be a non-solo traveller
25671, Set predictions 1 to the model s predictions for the validation data
26195, Masking all except Lung and visualization
12197, We can use these transformers in many places, but the imputer should probably come before the others, as we don't want to deal with missing values again
1813, Creating Dummy Variables
40970, Looking at the average price for each store
29783, Decoder
4163, Variable Transformation
38955, Preparing the Data
1960, Both the test and train Age features contain more than 15% missing data, so we fill with the median
1342, replace Age with ordinals based on these bands
32401, Weighted Avg Bayesian Optimization
34691, Number of active days for a particular item shop pair
4727, let s try printing out column names using columns
27641, First we have to read the train and test data
19966, Ensemble modeling
36483, Now that we have a list of properties for each file we are going to look at, we create smaller files for each of them
4250, Continuous Features
25745, Alright now we can check whether the shape is enough to say which source the image comes from
32660, Categorical features with missing values are identified
5586, Detecting the missing data in the test dataset is done to get insight into which columns contain missing data
4160, Target Mean Encoding
31057, PAN
18099, Since we want to optimize the Quadratic Weighted Kappa score we can formulate this challenge as a regression problem
40701, Defining The Regression Model
16997, Prediction
15475, Imputation filling of empty cells of the Dataset
16582, Bravo Mrs
28852, Read Date
14175, I repeat the same process but with some columns being removed
18391, Definition of the seq2seq neuronal architecture
18022, combine ticket groups and name groups
8925, Alley
9101, and now I can drop MSSubClass and HouseStyle
17952, Support Vector Machine Model
16386, Sex 0 Titles 3 e
20801, Create TotalBath feature
419, Correlation Analysis
31439, A different type of model: Random forests
38240, make a report of all the data then we dive deep into each column to understand in depth
39065, Observations
27779, Reshape
38066, NLP Features distribution Train vs Test Analysis
33087, Putting everything together
40463, Heating Cooling
6022, First Import Data
39996, Blending with top kernels
41493, Import Libraries
32796, LightGBM K-fold OOF function
16834, Precision and Recall
33765, Plotting Random Images
9630, Neural Network
4845, Dummy Encoding
9347, Split data in to training set and to be predicted set
3232, Choropleth in Texas
12925, Pclass
13265, Statement Women all survived and men all died
18971, Display the distribution of a continuous variable
32363, Imputing Missing Data
34402, Reshape the data
20034, plot the first 5 images from the dataset to better visualize the data
28338, Analysis based on TYPE SUITE
7920, We're going to LabelEncode all quality features where the order matters, and the preceding columns
31240, Build roBERTa Model
16016, 177 null values
31683, Constructing a very simple encoder and decoder network
10771, Pre-processing Pipeline
18478, Findings
6675, Voting Classifier
16284, Built in plotting
6641, Total number of survived and not survived
20260, Linear Regression
36576, Out of curiosity, what are the labels associated with the hottest apps? No Pokémon Go yet
15643, Make revised submission
36556, At least one big Gaussian, with one or two small, very thin but long Gaussians
993, Remember what happens with missing values; obviously the actual numbers and percentages will be different since we have a concatenated dataset now
11799, LABEL ENCODING OF CATEGORICAL FEATURES
29807, Pretrain Glove
19055, However before we can create the DataBlock so that the images look real we need to create a new method so that PILBase takes into consideration the Photometric Interpretation
18965, Display the density of two continuous variables
34649, Parsing dates
10457, fill in the holes with the means on numerical attributes
7596, Visualizations for Categorical features
586, Blending Models
22716, Creating PCA models
6862, We drop the following columns because they are strongly related; between them I chose the one with the stronger relation to SalePrice
4818, When performing regression it is always important to aim for normality, that is: does our dependent variable follow a normal distribution?
37198, Building Model
15454, Embarked
20480, Credit type
27782, Initializing Optimizer
6672, Kernel SVM
35909, Convert data into dataframe
17998, I have noticed that using the feature Embarked does not improve my model so I discarded it as well
37047, Applications of lemmatization are
14110, Joint Plot
6939, Target
13363, Print variables containing missing values
6896, Looking at the Age distribution for men and women, it's clear that the average age for both was about 30
32351, Data for the Network
34401, Normalize the data
36370, Data Augmentation
37895, Best Score and Params
7489, We group the ages into 5 groups, weighted according to the survival rate
28688, SaleType
41048, Order By Days
16477, Class
33326, Save the model
15040, Cabin
15158, Lets predict the test data output using random forest classifier
15492, Optimization Algorithm
11095, Base Model
23358, Learning Curve
21193, Linear Activation Forward
23719, Extensive Hyper Parameter tuning using Hyperopt Library
20466, Credit distribution
39447, installments payments data
9513, Embarked
7979, Model evaluation
6595, The ROC AUC score is the corresponding score to the ROC AUC Curve
3952, Create TotalSF Feature
23377, Clean bounding boxes
28566, BsmtExposure
16438, Yup Values differ totally according to Pclass
8990, Yeah I ll keep this feature
24162, Build Validation Set
19008, Build the final model
42962, Title Feature
11058, Fare versus survival
35940, Hyperparameter Tuning
12874, Data goes in via the input layer; from the output layer we get the prediction of our neural network
14476, Now inference with respect to survivability and the age factor
40966, This step converts our dataframes into a form passable to the neural networks
19980, MLP with Batch Normalization on hidden Layers AdamOptimizer 2
13136, VERDICT WITH BEAR GRYLLS
19646, Age distributions by phone brand
9006, convert categorical ordinal variables to integers
4180, Age is quite Gaussian and Fare is skewed, so I use the Gaussian assumption for Age and the interquartile range for Fare
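A sketch of both boundary rules on synthetic data (the multipliers 1.5 and 3 are common defaults, assumed here):

import numpy as np

def iqr_bounds(x, k=1.5):
    # Outlier boundaries from the interquartile range
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def gaussian_bounds(x, k=3):
    # Outlier boundaries under a Gaussian assumption (mean +/- k * std)
    return x.mean() - k * x.std(), x.mean() + k * x.std()

rng = np.random.RandomState(0)
fare = rng.lognormal(3, 1, 1000)   # skewed, Fare-like
age = rng.normal(30, 12, 1000)     # roughly Gaussian, Age-like

print(iqr_bounds(fare))
print(gaussian_bounds(age))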
3865, Feeling a little lost
35872, Confusion matrix is a great way of visualising the performance or rather the shortcomings of our model
24298, Train Test Split
4923, Create the Folds for Our Cross Validation Model
38938, State Holidays
38626, Inference and Submission Generation
9857, The following code gives us the whole Sex column
32287, Display distribution of a continuous variable in population standard deviation boxplot
1791, Observation
13496, Family Size
4647, Combine Violin Plot Swarm Plot
15455, Family Family Size Family Name Family Survived
26074, Reading data
577, Compare feature importances
8103, Pclass
34861, I'm trying to find features here that influence the survivability of a passenger
9650, Let's find out the percentage of missing values, according to which imputation and data cleaning can take place further on
37701, find minimum of this function using gradient descent
24974, Ensemble of different models
14380, Let's check whether people with a cabin were more likely to survive or not
27277, Using more accurate kernel function we can achieve 0
15725, Random Forest
2721, Lasso regression
21384, We obtain 99 3 acuracy
1320, creating matrices for feature selection
26805, Augmentations
42961, It's time to separate the fare values into some logical groups, as well as filling in the single missing value in the test dataset
8976, This finishes the feature engineering and null values part
27937, use the model to classify each image
14913, Age
5125, Random Forest Classifier
38677, Image Size
16017, SibSp Parch
27118, As there is very little data available for these four columns, we can drop them
12042, Impute missing values using two functions which I wrote
26025, Data transformation is vital for any data that contains numeric variables, as they may have positive or negative skewness
12615, replace the NaN by mode
15194, Tuning Model using RandomizedSearchCV
27532, Display the variability of the data; used on graphs to indicate the error for different categories
7012, Type of utilities available
26824, check now the distribution of standard deviation per columns in the train and test datasets
14797, Hyperparameter tuning for Decision Tree
15598, Survival by Age Class and Gender
24773, Load Libraries Data Tokenizer
2895, Categorical Features Encoding
32077, Categorical Variable Summary Statistics
20818, We ll be using the popular data manipulation framework pandas
28303, Convert categorical string values to numeric values
21549, It's time to evaluate the model
7143, Logistic Regression
8237, Object to Category
20388, Decision Tree Model
28857, Split Data
30148, temp and atemp have high correlation and register and have too
20668, FILLING MISSING VALUES IN MSZoning
18334, OverallCond
17363, Stacking base learners
35753, Import libraries
38697, Probability of melanoma with respect to Image Size
3313, XGB
20079, Insights
25840, Emojis Text Cloud
35110, Additionally we also need to compute p e
34270, Training
14283, Applying model with default values
20356, we are ready to submit to Kaggle
416, Drop the Id column because we dont need it currently
41032, The fold precision of the ensemble is slightly better and lies in between the original precisions of the three models
27291, Fit the SEIR model to real data
17972, SibSp and Parch
18535, Examine the predictors
42778, Visualize the numbers
20876, At first we read in the data
16589, Label Encoding
16759, Families or Alone
6926, Preparando arquivo para submiss o ao desafio
39822, I am using the ImageDataGenerator function from Keras here for data augmentation, and have set the ranges for different features as below
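A sketch of such a setup; the specific ranges below are typical values, not the author's exact settings:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

image_gen = ImageDataGenerator(
    rotation_range=10,        # random rotations up to 10 degrees
    width_shift_range=0.1,    # horizontal shifts
    height_shift_range=0.1,   # vertical shifts
    zoom_range=0.1,           # random zoom
    horizontal_flip=True,
    rescale=1. / 255,         # scale pixels to [0, 1]
)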
41336, To check if our predictions make sense, we compare the target distribution of the training data with the distribution of our predictions
24795, RNN
23977, train with small image size first to get some rough approximation
29569, one hot encoding
33309, Feature Selection by Recursive Feature Elimination
25296, I consider this beginning part done
26859, Look at summary of numerical fields by using describe function
34520, Entities
15495, Make Predictions on Test Set
32602, Random Search on the Full Dataset
7004, Above grade living area square feet
39382, Dumping ult fec cli 1t and conyuemp
38542, It's important to notice that there is no sense in keeping high depth values
6095, We can do more investigation on other variable outliers at a later date
9601, Data types
23257, Passengers with 1-2 siblings/spouses have a higher chance of survival
34690, Borrowed from
34716, Model
8305, Model 2 Random Forest
33335, FEATURE 8 AVERAGE DRAWING PER CUSTOMER
28201, Chinking
32261, Grouped Bar
32946, Get features: len, common word count, percentage of common words, seqratio, similarity
635, Large Family
15650, Gradient Boosting
3629, Combine some features
40149, Missing values
4850, Gradient Boosting Regression
23253, Age and Cabin have a significant number of rows with missing values, while Fare and Embarked have only a few
14574, Importing the input csv
20252, I have used GridSearchCV for tuning my Random Forest Classifier
3801, Models
4729, lowest 5
7431, Basically, grid search considers all the combinations of parameters, so it takes longer than randomized search
35241, We need to see what sort of words are placed on top for each sentiment
1226, Deleting features
39839, Loading the dataset
7660, We need to apply np
13145, bin up using pd
21259, Drop date and items to get user information
30371, Inception Model
32336, log error changes based on this
25320, Run the code cell below without changes
27330, Looking into data
39999, At this stage I compare the two data sets according to Pclass
30599, The highest correlated variable with the target is a variable we created
2091, Get insights from the data
14255, There looks to be a large spread in the fares of passengers in Pclass 1, and this spread goes on decreasing as the class standard reduces
39418, get rid of columns with NaN in Embarked in df train
14516, Observations
30258, Model validation
1030, But before going any futher we start by cleaning the data from missing values
20728, MasVnrType column
7264, Age Feature
18292, Embedding Matrix
26926, make a list of sentences by merging the questions
11111, Explore Data
31559, Prediction
13750, Split data into training and test set
2970, Univariate analysis
15505, It looks like there are two types of tickets: number only, and letters + number
41251, ANALYZING THE OBSERVATIONS WITH MISSING DATA
43297, Testing without the Month column
41571, Importing Various Modules
12034, Fixing variables
10421, FE EDA
4397, Feature Scaling
10559, Create separte datasets for Continuous vs Categorical
1794, SalePrice vs OverallQual
18365, Checking Outliers
34323, Divide the labels according to label stratification
41338, Basic Relation Analysis
32794, XGBoost K-fold OOF function
18122, Data Prep for ML
3352, Same for Test dataset
18756, To visualise the slices we have to plot them
5342, Display quantitative values of a categorical variable for multiple terms in a stacked funnel shape
6195, Submission 2 for SVC without Hyperparameter Optimisation
18323, we try lightgbm with different parameters
16145, Name
43178, Simple CNN
24363, Certainly a very long right tail. Since our metric is Root Mean Squared Logarithmic Error, let us plot the log of the price_doc variable
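For reference, the metric's standard definition, with \hat{y}_i the prediction and y_i the actual value:

\mathrm{RMSLE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(\log(1+\hat{y}_i) - \log(1+y_i)\bigr)^2}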
19549, Simple Naive Bayes on TFIDF
10167, Violin Plots
33774, Compiling the Model
11749, There are still a few more missing values in different columns
30565, Kernel Density Estimate Plots
33750, Take a look at what we predicted on a test sample
38234, Conclusion
4468, In the following function, the medians of each group are used to replace missing values in Age based on their groups
6638, Modelling
16571, Evaluation of model with 3 classifiers
33239, predict the output on new images and check the outcome
19625, Finding null columns though PyCaret manages to fill null values
37393, Done; check what's going on with GrLivArea
29424, let s train our model
39315, Save model and test set
6796, Defining variables
11761, Understanding our Models
11608, From info we know that
15270, Gaussian Naive Bayes Algorithm
40879, All of the models are doing okay in terms of the bias-variance tradeoff, except kernel ridge, which shows a bit of high bias (or low variance) and hence underfits. Since the training and validation curves haven't yet converged, adding more instances might help for lasso, ridge, elastic net and SVM; for kernel ridge, increasing the model's complexity, perhaps by adding more features, might help
30612, Test Two
34695, Projected values: silly linear extrapolation to the next month
23394, Evaluate Model
34004, workingday
34324, Confirm that the labels of both training and validation sets are equally divided
37873, Train Test Split
27921, Underfitting and Overfitting
40191, Read Data
8772, We need to group Fare into bins of 50 dollars each
29795, Skip Gram
9725, Onto multivariate analysis
31585, Continuous Features Importance
3407, we ll look at a frequency table for the target variable Survived
32872, Replacing missing values
40310, Train and predict
37166, That is really powerful stuff In just 5 epochs we reach 0
24681, Submission csv Generation
40771, Build Model
19, RandomizedSearchCV Ridge
7028, Fireplace quality
5119, Missing Values Treatment
13792, Random Forest
43, Do you have longer names?
25464, Initializaion
27203, Sex
15249, Feature importance
30948, Data augmentation
35929, Pclass Sex
21747, Somewhere around item id 6000 we might have an outlier
6468, Get the labels
32766, We use what is called in-place data augmentation: a batch of images is input to the IDG, which transforms each image in the batch using random translations; at the end the transformed batch is returned to the calling function
25811, Prediction
21820, Non Numerical Features
12504, From the cross validation we're getting that the best subsample and colsample by tree values are both 1
13960, Number of missing values
10370, Continuous Variables
21504, Image with the smallest width from training set
15042, We could conclude that
38030, Performance Function
11538, Preprocessing
28874, Transformation
2673, MI for Classification
7124, Parch vs survived
3944, Numeric variables Imputation
25653, Use the next code cell to preprocess your test data. Make sure that you use a method that agrees with how you preprocessed the training and validation data, and set the preprocessed test features to final X test
3273, Fill these BsmtExposure null values using the values of BsmtQual, and the others with None
29984, Create output
462, Filling NAs in numeric columns with medians
38786, Find mode for each case
39029, Prediction
15003, Rather than calculating the percentage of passengers by hand, we can use the built-in parameter normalize=True within our value counts method call
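For example, on a toy target column:

import pandas as pd

survived = pd.Series([0, 1, 1, 0, 0])
print(survived.value_counts(normalize=True))  # fractions instead of counts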
6713, Explore the Numerical Features
25766, Pclass is an ID defining the ticket class
10413, Sum multiple features together to get overall totals
40075, Save the dataframe in case we use another model which doesn't require the steps necessary for a linear model
34319, I'm going to label the data so that in the train test split I won't have to use stratification
42662, For the difference of the missing value correlation matrices a striking pattern emerges
12198, Which can be tested very simply
8596, Building Numerical Pipeline
10513, We have a few null values in the test set's Age and Fare; let's fill them up
17682, PASSENGER CLASS SURVIVAL PERCENTAGE
16169, Analyze by visualizing data
42786, Spot checking some values
30573, None of the new variables have a significant correlation with the TARGET
8104, Name
8395, Adding some interesting values
37532, We need to pad our input so it has the same size of 512
31026, Number
16898, Nobles all survived
37405, Remove Missing Values
8828, Feature Engineering Scaling
39748, I m going to use a function from the model selection module in sklearn
5436, Fence
19981, MLP Dropout AdamOptimizer
38836, The model is able to classify almost all images
8399, TPOT is very user friendly, as it's similar to using scikit-learn's API
18974, Display the distributions of more than one categorical variable in a parallelized view
32609, Categorical Nominal Features Exploration
20644, CNN with word Embeddings
33231, Comparison of different architectures
4068, We can split our variables into training set, training labels and testing set
24274, Linear SVC
36017, Preprocessor
19423, And with the Dataset defined we can get the train and validation DataLoaders
43092, Model
29834, Input Dataset
3781, Model 2
12635, Removing Unfillable Values
38822, And we create a list of the feature names if we would like to use them at a later stage
39316, Import and process training set
3401, Meet the Outlier
19859, The boundary value using 3 times the interquartile range is a bit high relative to normal human life expectancy, particularly in the days of the Titanic
28719, Quantiles of continuous features and target
39777, That's a beginning. Let's work with predict proba to find the best threshold that optimizes our F1 score for insincere questions
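A sketch of such a threshold scan (y_val and probs stand in for validation labels and the positive-class column of predict_proba; the fake scores are purely illustrative):

import numpy as np
from sklearn.metrics import f1_score

rng = np.random.RandomState(0)
y_val = rng.randint(0, 2, 500)
probs = np.clip(y_val * 0.4 + rng.rand(500) * 0.6, 0, 1)  # fake scores

best_t, best_f1 = 0.5, 0.0
for t in np.arange(0.1, 0.9, 0.01):
    f1 = f1_score(y_val, (probs > t).astype(int))
    if f1 > best_f1:
        best_t, best_f1 = t, f1

print(best_t, best_f1)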
20534, Setup cross validation
27657, Model Definition
9726, start by looking at LotArea and its relation to LotFrontage
17799, More feature engineering
32912, These tools are generators
38847, Even though most of the houses are single-family, they contain 2-car garages
5856, Skewed variables
23718, Splitting into train and test DataFrames
3871, assign scores to our categorical data
3889, Mode
10574, Using the regex ([A-Za-z]+)\. we extract the initials from the Name: it looks for strings of letters between A-Z or a-z followed by a dot
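In pandas this is a one-liner (toy names shown):

import pandas as pd

names = pd.Series(['Braund, Mr. Owen Harris',
                   'Cumings, Mrs. John Bradley',
                   'Heikkinen, Miss. Laina'])

# Extract the first run of letters that is followed by a dot
titles = names.str.extract(r'([A-Za-z]+)\.', expand=False)
print(titles.tolist())  # ['Mr', 'Mrs', 'Miss']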
12954, Detection of outliers
36964, Now that we have a proper model, we can start evaluating its performance in a more accurate way
20613, Checking for null values if any
10593, Partial Dependence based on gradient boosting regressor
39075, Models
6107, Most common categorical features
19642, Brands and models popularity
9081, There are a lot. I next wondered if the order of the conditions mattered
26553, Reshape
3554, There are 3 features that correlate with Fare: Survived, Pclass and Family
35449, Confusion matrix
41221, No pattern for the eliminated features; the columns must have been shuffled
8126, EDA
6440, Missing Variable Treatment
35319, Data Augmentation
17464, Feature Engineering
10982, It's time to split the data: we take 10% for test and the rest for train
32987, Gradient boosting
21152, Alright let s fit GLM
42354, First, applying lemmatization
20067, Processing date column into convenient format
19833, Logarithmic Transformation
35452, A Few Words about the Dataset
3670, Encode categorical features; they can and should be replaced
16010, Submission
24663, Forecast preparation
41720, Define GAN architecture
31565, Note that we didn't convert to a numpy array yet
23997, Removing outliers. Read other kernels to understand how they were found
7355, Concatenate train and test
40288, submission file single model on effnet b4 using bceloss
23308, Make predictions
32891, Make predictions on test set using the 1st level models predictions as features
9193, Cabin and Pclass Distribution
27200, In this section we randomly fill the missing data in our age variable with values drawn between the mean of the age variable minus and plus its standard deviation
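One common variant of that trick, sketched on a toy column:

import numpy as np
import pandas as pd

df = pd.DataFrame({'Age': [22, 38, np.nan, 35, np.nan, 54]})

mean, std = df['Age'].mean(), df['Age'].std()
n_missing = df['Age'].isnull().sum()

# Draw random ages in [mean - std, mean + std] for the missing entries
rng = np.random.RandomState(0)
df.loc[df['Age'].isnull(), 'Age'] = rng.randint(int(mean - std),
                                                int(mean + std),
                                                size=n_missing)
print(df)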
39761, Maybe it is interesting to calculate the punctuation ratio
1266, Identify the best performing model
13915, Sex and Embarked are categorical values but they are in non numeric format
21566, Convert a wide DF into a long one
14251, Most of the passengers boarded from S
7767, Ransac
37206, Stacking Averaged models Score
24731, Plotting Feature Importances
12655, Some Final Encoding
5277, Modelling and Feature Selection Pre Requisite
21749, Simple Ensemble Learning
4724, Basic Analysis with Pandas
11379, Building the base models
41350, Condition evaluation is not correlated with price as clearly as quality, but we can still say that there is a positive relation between condition and price
7251, Load data
18959, Display distribution of a continous variable for different groups
35575, Sales by Store
14473, A pie chart of the percentage of each category of travelling people who survived
12671, Replacing null values by the mean of the values of the column data
6646, Categorize by Age bins PClass and Sex
34956, sex and embarked are categorical
15062, Import modules
24050, Encoding ordinal categorical features
35069, Of these two CNNs, the one with more feature maps yielded a higher test accuracy
16360, Using Bar Chart for Categorical Data
38213, Creating Keras model
8438, Alley, Fence and Miscellaneous Feature missing values treatment
42884, Map View
8492, Partial Dependence Plots
34075, Name Title
19170, Target encoding unused
22206, Evaluating Ridge model on dev data
4461, Correlation heatmap of all variables
3418, Married women are listed under their husband's full name, with their actual first names now in MaidenName
3891, Skewness
5093, Clean data
3940, Removing Outliers
11534, It is evident that male passengers are far less likely to survive than females
18752, Performing Joins
32400, Submissions
34340, Missing Values and Outliers
8342, we implement our last regressor which is SVR
41041, Final Submission
21961, Let's check the structure of the new dataset
36730, Download the prediction file
28483, Correlations
20802, Create YrBuiltAndRemod feature
11730, Scores
34416, Average word length in a tweet
33806, Domain Knowledge Features
34517, Previous Credit and Cash
42941, Logistic regression with the added features
13459, Based on my assessment of the missing values in the dataset I ll make the following changes to the data
20716, LandContour column
6146, Polynomial features
28496, Plot feature importance
30150, Divide predictors and outcomes And take logging outcomes to normalize
11476, PoolQC
30609, Based on these numbers the engineered features perform better than the control case
18683, The folders train and test contain the images
15911, Title Filling Missing Age
21770, The next columns with missing data I ll look at are features which are just a boolean indicator as to whether or not that product was owned that month
26533, SHAP
5350, Diplay multiple scorecard with bullet
31787, To submit pred test prediction and manually add real LB score in the next cell
34458, TX 3
23322, Mean encodings for item. Mean encoding doesn't work for me
41646, Cleanup Column Names
21740, There are some redundant values which we remove later
36368, predict test data
24664, Convert predicted new cases to total cases
9396, Optimization algorithm
520, Seaborn heatmaps
42391, Plotting Helper Functions
5446, Make a 1 level Decision Tree
31864, we fit this top model to the bottleneck training data we created
4621, The other 7 statistical values can be used for detecting outliers
18309, lag features from months ago
31084, RANDOM FOREST
21321, Data Exploration
3377, Batch Predict on your Test Dataset
26900, Score for A8 15950
38699, Anatom General Challenge
18260, SHAP Model explainability
5390, This is indeed an improvement over mean/median
28880, Encoder
42986, Word Clouds generated from duplicate pair question s text
30667, Drop all unnecessary columns
36828, And as always we need actually do the training so we call the fit method on our data
28587, Bedroom
4549, Removing variables that have greater than 70% missing values
41777, Show intermediate output values
2907, The Cabin feature is mostly missing values (75%), so I remove it, as it is difficult to guess which cabins certain people used
22779, divide the states based on 5 regions namely
13758, A higher proportion of young children survived; more survived than died
8779, Title
34093, There are also outliers in latitude longitude that need to be removed for plotting
14859, We are now ready to make our predictions by simply calling the predict method on the test data
18316, check if encoding item category is beneficial
3985, Categorical and numerical columns
20582, Filling missing values column by column
9660, Splitting data into train and test again
7545, Linear Classifier
3720, Feature Engineering by OneHotEncoding
28657, Land
8102, PassengerId
37368, Feature engineering
9738, Dataset Checking
28738, X train data
12296, Fare
14402, Feature Fare
11070, Normalise
16228, Spark ML Models
14185, Plotting Survived distributions
18472, A closer look at the Store Dataset
2909, Fill the Fare Feature in the test set with median
34749, Preparing data
35477, Otsu s Binarization
32922, Generate model predictions
4482, Linear Discriminant Analysis
12431, Using pandas string operations
32647, Functions were developed to assist with graphical analysis of specific dataset elements and with cross validation scoring of regression strategies
31420, Without logging each step; set steps=1000 to train the model longer but in a reasonable time to run this example
1525, Some Observations
15877, Or the best score
41293, Comparing the values under each category label
17609, Random Forest
35915, Train the model
25963, Percentage of orders containing no reordered product
22215, Augmentations
19399, Data Visualisation
29135, Target variable inspection
11323, PClass
15841, Age
22093, Create the Model
14965, Out of the total passengers travelling on the Titanic, only a fraction survived, not even half of the passengers
6062, Fireplaces and Garage
37981, Difference between fit and fit generator
18757, Segmentation of Lungs
25782, It is hard to tell whether this feature is useful for us or not; do some operations and find the meaningful data
12500, Untuned XGBoost Trained with Cross Validation
15470, Data Cleaning Conversion of Categorical Data to Numerical Data
30639, Family variable engineering
35791, Ensemble prediction
32933, Make a submission
40721, Training Performance
42960, I noticed that there are outliers in the Fare variable
22084, Visualization of our Data Sample Images
34797, Random Forest
37414, Test Full Dataset
12958, Filling missing values in Fare variable
3959, High Skewed Features
15294, Random Forest Model
38315, Code I used for word conversion; you need to install the googletrans package in your kernel
11347, Model Building
36476, Construction of private test set
1788, If you want to know more about why we split datasets into train and test, please check out this
27312, Model Compilation
28792, Positive Tweets
406, XGB
27520, Probabilities of predicted vs actual values
24892, K Nearest Neighbors
39969, Passengers with the most expensive ticket survived
16156, Cross Validation K fold
10889, There are 2 features in our dataset: SibSp gives information about siblings or spouses of the passenger onboard, and Parch gives information about parents and children of the passenger onboard. Both variables basically indicate the family information of a passenger, so we combine these 2 variables into one variable, family
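The combination itself is a one-liner; the +1 (counting the passenger themselves) is a common convention, assumed here:

import pandas as pd

df = pd.DataFrame({'SibSp': [1, 0, 3], 'Parch': [0, 0, 1]})
df['family'] = df['SibSp'] + df['Parch'] + 1
print(df)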
25595, Forming a Classifier
38645, Embarked
38718, In this step we combine the generator and the discriminator
34007, No outliers
3782, GradientBoostingClassifier
13501, Fare
7385, Inspecting the unmatched passengers
6604, Modelling
5931, Some of the non-numeric predictors are stored as numbers; convert them into strings
24060, We'll use the Tree-structured Parzen Estimator, which is a form of Bayesian Optimization
23409, In practice we can't use the identity function as f everywhere because of inconsistent dimensions, so when the input and output of H_l have different dimensionality we use a 2D convolution with a 1x1 kernel as the f function
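A minimal sketch of that projection shortcut in Keras (layer sizes are illustrative, not from the source):

from tensorflow.keras import layers, Input, Model

def residual_block(x, filters, strides=1):
    shortcut = x
    y = layers.Conv2D(filters, 3, strides=strides, padding='same',
                      activation='relu')(x)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    if strides != 1 or x.shape[-1] != filters:
        # Shapes differ: a 1x1 convolution adapts the shortcut
        shortcut = layers.Conv2D(filters, 1, strides=strides)(x)
    return layers.Activation('relu')(layers.Add()([y, shortcut]))

inputs = Input(shape=(32, 32, 16))
outputs = residual_block(inputs, filters=32, strides=2)
model = Model(inputs, outputs)
model.summary()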
3790, Statistics: in case of positive skewness, log transformations work well
24536, Number of products by Customer relation type at the beginning of the month
24303, Because we are using the categorical crossentropy loss, we need to convert y train and y test using a one-hot encoder
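For example, with Keras' to_categorical:

import numpy as np
from tensorflow.keras.utils import to_categorical

y_train = np.array([0, 3, 7])
print(to_categorical(y_train, num_classes=10))  # each label becomes a 10-dim one-hot vector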
7072, Since Fare is mainly related to Pclass we should check which class this person belongs to
9028, I found the standard deviation of the BsmtFinSF2 in houses with basements
37565, have a look at the data type of all the variables present in the dataset
2548, If you are single and male, the chances of survival were low: by historical accounts of the incident, women and children were the first to be rescued. Hence we create a solo-traveller column
20077, Insights
5492, Check the TotalBsmtSF mean and use it to fill the missing value
18699, Let's use the plot_top_losses method to examine the images with the biggest losses
3834, histogram and KDA plots
707, Test
32767, After Data Augmentation
27077, Preprocessing
32021, We should also convert the Embarked column to numerical form using one-hot encoding. First, let's check whether there are any null values
15099, The first thing I want to do is parse the Name column and extract the title from each person's name, so we can group the data according to a person's title. This allows us to more accurately estimate other features in the next few steps. Technically this is more of a feature-engineering step, but it helps with the data wrangling, so we include it here first
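A small illustrative regex for this extraction (assuming Titanic-style "Surname, Title. Given names" strings):

```python
import pandas as pd

names = pd.Series(["Braund, Mr. Owen Harris",
                   "Heikkinen, Miss. Laina"])
# The title is the word that ends with a period after the surname.
titles = names.str.extract(r" ([A-Za-z]+)\.", expand=False)
print(titles.tolist())  # ['Mr', 'Miss']
```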
8084, Get cross validation scores for each model
42993, Converting strings to numerics
4717, We'll need to convert categorical features to dummy variables using pandas; otherwise our machine learning algorithm won't be able to take those features directly as inputs
19950, Women and children first
14432, Go to top of section
32506, Defining the Model architecture
16455, Highly right skewed
42109, Confusion Matrix for better understanding of True positive and Negative
4273, The properties with None MasVnrType have 0 MasVnrArea
20308, I have plotted several pairs of components, looking for the one most suitable, in my opinion, for K-Means clustering
21067, Removing OOV
18228, Baseline Models
32547, A skewed target affects the overall performance of a machine learning model; one way to alleviate this is to apply a log transformation to the skewed target, in our case SalePrice, to reduce the skewness of the distribution
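A minimal sketch using np.log1p, which is safe at zero, with np.expm1 to invert the transform after prediction (toy values):

```python
import numpy as np

prices = np.array([0.0, 120000.0, 755000.0])   # toy sale prices
log_prices = np.log1p(prices)   # log(1 + x): defined at zero, reduces right skew
restored = np.expm1(log_prices) # invert the transform after predicting
```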
32105, How to reverse the rows of a 2D array
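One way to do this (a short numpy sketch):

```python
import numpy as np

arr = np.arange(9).reshape(3, 3)
print(arr[::-1])        # rows in reverse order
# equivalently: np.flipud(arr)
```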
6795, SVM
12452, it is time to deal with MasVnrArea
13993, Number of missing values
29985, The code in this kernel is almost entirely from the reference kernel
6087, We handle quantitative and qualitative features seperately
9944, A feature which categorizes the fare rates of the person
22434, Scatter plot
9990, Missing data
41782, Split Train Test
39348, PCA using Scikit Learn
13828, Making and Printing our predictions
15414, Intuitively one would expect a similar trend to that of passenger class when looking at ticket prices
33441, LinearSVC
6302, Now that we have prepared the chi-squared and extra-trees reduced datasets for estimators without the feature_importances_ attribute, we are ready to start modelling the selected algorithms
5553, Check Submission
4344, Data Analysis for one by one feature
29446, As we saw here, the distributions of the training and testing sets are very alike, and we can assume that these two indeed come from one dataset
8085, Fit the models
19919, Look at coefficients
34729, The run function is the main function
41601, Makes predictions
16151, Fill missing Embarked values with S
38693, Probability of melanoma with respect to Number of Image per Patient
8728, Sale Price and Year Built
3659, Looks like svc clf and gb clf are performing best with sgd clf forest clf and extra clf close behind it
20725, RoofStyle column
3928, KNN Classifier
2315, Method 1b: a quick-and-dirty way to get a 100% numerical matrix is to swap categories for numbers
42275, month
11298, Bayesian hyperparameter optimization. This code is more of an example, as it can take a long time with the gradient boosting models
13545, start exploring Age Column
21607, Use header and skiprows to get rid of bad data or empty rows while importing
32199, Cleaning Item Data
38026, Bonus: which insincere topics does the network strongly and correctly believe in?
16440, It's high. Thus any value derived statistically from the Age column alone can mislead the classifier
18111, A training and testing dataset are provided
32145, How to find the duplicate records in a numpy array
40085, One hot encoding
21232, Explore our data
3518, Visualising missing values for a sample of 250
37165, For fastai we can do something known as 1 cycle learning
35062, look at an example of data augmentation
37724, Removing low variance features
40091, Neural Networks
2874, Treatment
16884, New Features FamilyType Alone Small family Big family
42290, Save OOF Preds
26287, compute cost to compute the value of the cost JJ
29318, You are more likely to survive if you travel with 1 to 3 people; with 0 or more than three companions, you have a lower chance
31551, Since more than 50% of the values are missing, we replace them with NA
29333, Linear Regression
3807, Random Forest
26638, Prepare the submission
29034, Ditch unnecessary features
38735, Since there are many titles with very few counts we map them to main categories
3814, First we load a distribution of the data
16046, Lets check missing data
4172, Equal frequency discretisation with pandas qcut function
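A small pd.qcut sketch (toy fares; labels are illustrative):

```python
import pandas as pd

fares = pd.Series([7.25, 8.05, 13.0, 26.55, 71.28, 512.33])
# qcut builds quantile-based bins, so each bin holds ~the same number of rows.
bands = pd.qcut(fares, q=3, labels=["low", "mid", "high"])
print(bands.value_counts())
```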
18086, I would like to plot images with different dominant colors
14300, Let's understand the data
11047, Originally existing features
21617, Useful parameters when using pd read csv
11248, Method from the house price predictions notebook "Enhanced House Price Predictions" again
24757, Machine Learning
31748, Hair Removal
12923, As expected, females have a higher probability of survival (74%)
20929, We use the categorical cross entropy loss for training the model
12747, Title
36002, Plot Heatmap of correlation matrix
14141, Decision Tree
21605, Creating running totals with cumsum function
13365, Types of Variables
32525, Compiling the Model
35948, ExtraTree
21357, Plotting loss and accuracy for the model
6318, Linear Support Vector Machine
19818, Backward Difference
33093, ElasticNet regressor
42994, Random train test split 70 30
33084, Changing numericals to categoricals
39141, Predict test s labels
16773, TESTING FINAL MODEL
28389, Test different parameter values
2466, Backward Elimination
42614, we ll split the data into a training set and a cross validation set
885, Logistic Regression
9391, Dealing with skewed values is very important for some ML models, such as neural networks or linear regression, which are based on a normal-distribution hypothesis
9072, Much better: the data points are now fairly symmetrical, and there aren't as many outliers on one particular tail
38094, Here we are visualizing 20 random test images
43089, check the distribution of these new engineered features
23522, The 85 mislabelled tweets
43112, For large datasets with many rows one hot encoding can greatly expand the size of the dataset
33711, FEATURE Pclass
41624, begin by looking at the summary
15617, Lone Travellers Feature
1405, If it comes to Cabin variable I m gonna fill up NaN values with Unknown and get first letter from every Cabin in dataset
10133, XGBoost Extreme Gradient Boosting
31577, There are FAR fewer ones than zeros
32008, We don't have a serious imbalance in our dataset; the class distribution is approximately even
11261, Importing the dataset
7079, We ll create a function to filter the girls
8741, Missing Values
20224, submit
11976, We fill the garage's year built with the median build age of the home, and LotFrontage with 68
33665, Current Date Time TZ
4080, Correlation matrix heatmap style
3486, The parameters of the model with that score
42032, filtering small categories using nlargest
24283, Summary of most important features
8149, Linear Regression Model
32265, Relationship between variables with respective to time with different dot size and no dots
14720, Visualizing this Info
17575, Prediction
9435, Regplot
7235, Lasso Regression
36612, Visualize a single digit with an array
1425, Random Forest Classifier
16508, Data Cleaning
34855, Undersampled
8811, EXPLORATORY DATA ANALYSIS
35901, Great we have to convert our predictions to a proper format
12390, Training set
875, Fourth model (model 3): Age bin, SibSp, Parch and Fare bin
12031, Below we're doing the preprocessing stages; I didn't give details about these, so if you need them please see the reference
35600, Simulation
24278, Model summary
38820, We set up Boruta
37489, Simple RNN
9791, I had a hard time finding a solution for encoding nominal values with missing data
34102, Containment Zones
1579, Our last helper function drops any columns that haven t already been dropped
77, Here, in both the training set and the test set, the average fare closest to 80 occurs among the Embarked values where Pclass is 1
2258, Pclass
24792, SWA
3655, Separating features and target
17912, Having more than 5 siblings spouses was very rare
39758, visualize our meta features
42038, Groupby Crosstab
38487, Evaluation
30275, Some more analysis On Testing
18431, Random Forest
15990, Decision Tree
41192, There are many ways to fill missing values in numerical columns
26889, Include numerical columns Label Encoding
37018, So some categories are expensive but most are cheap
30325, Convert start end position into selected texts
24550, Let's plot which products were chosen as the only product, for cases where the total number of products in a single month is one
31103, Observation of First 5 Variables with SalePrice
8384, I start exploring the categorical object variables
9219, Visualize the Best Model Fit for KNN
7013, Slope of property
13072, XGBClassifier
10595, Piplelines with Gradient boosting
30927, Sort the items in decreasing order of error amount
985, Import Whole Regression
17838, Model with Title FamilySize Pclass features
42070, Using sklearn to try using standrad scaler
11726, AdaBoost classifier
1001, To do this we must use regular expressions, i.e. sequences of characters which define a search pattern
16661, Categorical Features
11998, calculating R squared value for ridge
28283, For local use cache processed texts
25835, Calculating and analyzing No of words in each text
24937, Another fairly popular option is MinMax Scaling which brings all the points within a predetermined interval
25577, GarageArea
6766, Random Forest Regressor
37697, Looks like numbers
23950, Label Encoding the categorical features
15259, Removing unnecessary variables
31649, Model 5 GRU Add
29733, It's time to call the function from the Bayesian optimization framework to maximize
16063, Feature engineering
20068, Calculating the amount of sales per a day
13418, Gradient Boost Parameters tuning
33760, Extract Dataset
14896, Parch and SibSp vs Age
40247, Sale Price
3452, The majority of 3rd class passengers would have been in steerage with no cabin designation
33289, Fare Imputer
10314, Create the ensemble
1366, To try a new approach to the data, I start the analysis with the Name column
16688, look at the relation between Age and Survival
20460, Income type of client
30087, Decision Tree Algorithm
19000, Deal with category imbalance
8557, We can use for slicing the dataframe
30348, Add active column
19550, Preparing Submission CSV
29625, Title feature: reviewing some interesting insights
2251, Data Exploration Analysis
12379, Since non-numerical categorical data in a dataset will be displayed in alphabetical order in the graphs, we need to provide a dictionary of orderings to override the default order
21956, creating some plots
40478, Logistic Regression
11678, Your first machine Learning Model
14687, Lets try Neural Networks to get a better classifier
16003, Titles
1348, Completing a categorical feature
19648, Benchmark predict gender from phone brand
3599, Ensemble
37542, The major difference in the code comes here
32728, XGBoost part
35605, Errors
34964, Separate the Title of the Individual
32613, Exploring the Target Column
6864, Dealing with 3SsnPorch ScreenPorch EnclosedPorch OpenPorchSF which is the Area outside the House
16912, Feature Selection
41364, There is a correlation
15146, Seperate dataset to train test set
35495, Normalization Reshape and Label Encoding
3837, crosstabs
28703, take a quick look at what these new datasets look like
4603, Miscellaneous
42967, Splitting the Training Data
15937, Age
30905, Examine the zoning codes used by three different counties in our data
14666, Train and Validation Split
14924, Building Machine Learning Models
28586, BsmtHalfBath BsmtFullBath HalfBath FullBath
35093, Creating a submission CSV file
29756, The dimensions of the processed train, validation and test sets are as follows
32527, Loading the weights
18065, LinearSVC Model
22603, Number of days in a month
19822, Define a binning function for continuous independent variables
37145, PREPROCESSING STEPS
40396, Fold 3
25582, LandContour
42330, Information of ImageDataGenerator can be obtained from here class
33700, Custom Range Date
25175, Analysing our extracted features
13825, Calculating accuracy score: (TP + TN) / (TP + TN + FP + FN)
19803, Replacement by Arbitrary Value
19907, Month
1917, Alley
20593, Loading Datasets
4804, Now split our training set further into train and test sets for validation
19566, In each row we have image name width and height of the image bbox coordinates and the source
21069, preprocessing Training Data
3469, L1 regularization can be used for variable selection as it shrinks the coefficients of some variables to 0
19365, Show scatter plots for each numerical attribute and correlation value
24677, CNN Architecture
27296, fit SEIR model on country
31736, Image shapes
9311, Here it is easy to argue that Ex > Gd > TA > Fa, because Excellent is better than Good, and so on
13069, Support Vector Machine
11557, First, reduce the number of features in the dataset by applying a feature-selection pipeline
27990, SHAP Values
22181, ALBERT
32532, Compiling the Model
34294, Evaluate Convnet
13426, Function to perform Grid Search with Cross Validation
20707, Submission
25580, Alley
18581, PATH is the path to your data; if you use the recommended setup approaches from the lesson, you won't need to change this
23641, RNN
20811, Here we create a num transformer and a cat transformer for imputing and hot encoding numerical and categorical values
27574, ps ind 15
7623, loss : {'ls', 'lad', 'huber', 'quantile'}, optional (default='ls')
18407, After cleaning the tweets, the GloVe and fastText embeddings are deleted and garbage-collected because they consume too much memory
695, Plot the precision-recall curve and find a point with a better precision
9398, Optimization
592, Together with the PassengerId which is just a running index and the indication whether this passenger survived or not we have the following information for each person
22922, Here it gets a little more complicated, as Name comes as string data from which we must mine the information
20109, Item count mean by month sub item category shop for 1 lag
24743, Target Variable
26923, And a little bit more of the linguistic tools We use a tokenization and a stop word dictionary for English
8219, Removing Outliers
15926, Age
10667, Plot the model
14814, Pclass Survived
15369, Sex Feature
3656, The basic approach for predictive modeling is as follows
2900, RandomForest Regressor
12946, Feature Scaling
37148, DATA AUGMENTATION
5531, Family Size Feature
18208, Post Process
27834, One hot encoding of label
15943, Sex
4420, Using random forest
3521, The Challenges of Your Data
22197, Here are some unused RMSE and RMSLE functions
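For reference, a hedged sketch of such functions (assuming non-negative targets for RMSLE):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

def rmse(y_true, y_pred):
    return np.sqrt(mean_squared_error(y_true, y_pred))

def rmsle(y_true, y_pred):
    # RMSE on log1p-transformed values; assumes non-negative targets.
    return rmse(np.log1p(y_true), np.log1p(y_pred))
```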
37095, Inconsistent types
1965, Model
38225, Category
28104, Pair Plot for all cols
24567, Total number of products by income
18026, NaN by feature
1230, Optional Box plot
7081, Exploratory Visualization
3797, According to the data description, the value NA means "None" for these categorical features. Fill NA with "None" for them
31113, Missing values
32329, Additional features
32466, Rate of Spread Model
3182, Random Forest Gpu Rapids
8736, Apply Log Transformation
24881, There was an empty Fare value in test
31606, Separate Features and Labels
14173, Grid search is used to tune hyperparameters to improve model performance
30281, India Data
1128, Exploration of Fare
23487, Creating a Baseline Model using TFIDF
38461, Output
26478, Augmentation
41122, Orange County Average Absolute Log error
14791, Is Alone
40315, Explore keywords
242, Library and Data
241, Model and Accuracy
13179, First of all let s take a look into our numerical variables correlation
34164, This plot says what we already figured out from the previous barplot
26427, In the previous analysis we found that family members of small families have a higher survival chance than singles or passengers from a big family
12162, Max Depth
12741, first take a look at the train dataset and then learn the correlations between Survived column and other columns
1777, Model Accuracy Score for Logistic Regression by using Train Test Split
12277, Start the code: drop some outliers. The outliers were detected with the statsmodels package in Python; we skip the details here
41860, Apparently tweets containing the keywords derailment, wreckage and debris are associated with true tweets
3164, Back to the main program we now apply the one hot encoding
13347, Random search allowed us to narrow down the range for each hyperparameter. Now that we know where to concentrate our search, we can explicitly specify every combination of settings to try. We do this with GridSearchCV, a method that, instead of sampling randomly from a distribution, evaluates all the combinations we define. To use grid search, we make another grid based on the best values provided by random search
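A minimal GridSearchCV sketch on toy data; the grid values are hypothetical stand-ins for whatever the random search suggested:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)  # toy data

# Hypothetical grid centered on values the random search pointed to.
param_grid = {"n_estimators": [50, 100], "max_depth": [5, 10]}
grid = GridSearchCV(RandomForestRegressor(random_state=0), param_grid,
                    cv=3, scoring="neg_mean_absolute_error")
grid.fit(X, y)
print(grid.best_params_)
```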
7682, We can safely remove those points
32805, Generate L1 output dataframe
31217, Exploring Categorical Columns
39175, index 27477
4841, NOTE: if we do not label-encode numerical variables BEFORE we apply dummy encoding, then these variables will never be encoded, since dummy encoding works only on categorical variables
21411, Run model on validation data and save output
24600, Generate test predictions
17393, Sex wise Survival probability
39011, Keras works with batches of images
5054, Interesting, but hard to decipher; bin the data into decades
9134, Set Fence Quality to 0 if there is no fence
8440, Final Check and Filling Nulls
2872, Identification
21119, As we want to apply the same data cleaning and preprocessing to both, we temporarily concatenate them
31728, Diagnosis and Target
36349, Predictions
13717, ALL MISSING VALUES HAVE BEEN HANDLED
36144, Reshape
37691, How to make predictions for the whole dataset: use matrix multiplication again
23387, I experimented with a few optimisers, including FTRL and SGD, but in the end Adam was the fastest and most reliable
35479, Scale Up Scale Down
1576, We fill the null values in the Embarked column with the most commonly occurring value, which is S
29013, Distribution of Sex feature by Survived
831, columns and correlation after dropping
7237, Final Prediction
21212, There are no missing values in the training data, and I checked the test data as well
35554, Let s Vote
27347, Arima AutoRegressive Integrated Moving Average
19255, Daily sales by item
12039, I'm putting 0 in GarageArea, GarageFinish, GarageType, GarageYrBlt and GarageCars where houses don't have a garage
21892, Plotting same metrics for each item category
21938, Spark
5457, Let's calculate the CIs for the rest of the samples and collect them in a table
26506, To prevent overfitting we apply dropout before the readout layer
41163, FEATURE 9 OVERDUE OVER DEBT RATIO
39841, Visualizing the data
32632, to evaluate a default configuration
24872, Creating neural networks is pretty simple in tensorflow
24700, freeze the first layers for the first epochs
16117, Linear SVM
35550, Stacking
9025, Great! Since there are only 2 and 3 missing values in these columns respectively, and their values have the same range, I just set the null values to the value of the other column
17877, Embarked
23547, Training the dataset
10823, Excellent
18654, Read the data
3989, Check data again
18632, Exploring Dates
4734, Estimate Skewness and Kurtosis
11121, List of selected Categorical Features
37461, Stemming operations
19426, And as before we add new methods to our LightningTwitterModel class
519, Exploratory Data Analysis
34326, Since we re gonna use two different types of multi input model with flow from directory I m using two generator
21257, Set Ratings Dictionary
27042, Age Distribution of patients
3466, Split into input and output dataframes and define a variable to hold the names of the features
16765, Mapping and removal of features
27321, Plotting model accuracy
27212, Preparing the Datasets
41559, Stage 3 Understanding and Applying PCA
43282, Evaluate the metric of the predictions generated on the training set
6667, Decision Tree Classification
27395, I get the best parameters using randomized grid search with 4 folds
19970, That fits our intuition from the other charts then
33569, Process some features with LabelEncoder
42546, Animation over average number of words
4645, Violin Plot
26407, The survival chance of a child with 2 or fewer siblings is much higher than for a child with 3 or more siblings
9215, Gaussian Distrbution for Fare
22107, Write TFRecords Test
37759, Technique 7: eliminate unnecessary loops by using itertools
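For example, itertools.product can replace two nested loops with one:

```python
import itertools

# One loop over itertools.product replaces two nested loops
# over parameter combinations.
for lr, depth in itertools.product([0.01, 0.1], [3, 5, 7]):
    print(lr, depth)
```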
10781, Remaining columns
42949, Sex Feature
22748, Crimes by year 2003 2015
21544, Our CNN model consists of a convolution followed by max pooling
36782, Classification
37749, Suppose we want to fetch a random sample of 10000 lines out of the total dataset
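One common trick (a sketch; the file name and total row count are assumptions) is a callable skiprows that keeps the header and each data row with probability k/N:

```python
import random
import pandas as pd

n_rows = 1_000_000   # assumed total number of data rows in the file
sample_size = 10_000
# Keep the header (i == 0); skip each data row with probability 1 - k/N.
df = pd.read_csv("train.csv",  # hypothetical file name
                 skiprows=lambda i: i > 0 and random.random() > sample_size / n_rows)
```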
6119, Veneer area
18327, Try Ensamble
26516, Age distribution
36870, Keras only input and output layer
1902, Evaluating Model Performances
28561, OverallQual
8981, Given our new features we can drop Condition1 and Condition2
11949, Preparing the test and train datasets
16064, Predictive Modeling
3398, MasVnrArea and MasVnrType NA most likely means no masonry veneer for these houses
30897, plot the location before we fill the missing value
1963, Correlation Between The Features
28685, Although this feature is a numeric feature it should really be a category
36210, Evaluating Ensemble Models
32176, FEATURE ENGINEERING
14286, Classification report analysis
27418, Less learning rate
38050, Chi Squared Goodness of fit Test
1193, To prevent overfitting, I'll also remove columns that have more than 97% of a single value (1 or 0) after dummy encoding
809, The target variable Distribution of SalePrice
30618, We save the statistics dataframe in a variable as we can use it at a later time
14821, Age is not correlated with Sex, but it is correlated with Parch, SibSp and Pclass
41217, The AUC on the public LB when using only these variables shows a very slight drop from using all features
24058, SHAP importance
28055, count acts differently in Pandas and Spark
6552, Survived: first-class passengers were more likely to survive
17622, Fare cleaning: fill the missing Fare value with the median Fare of the respective Pclass, as Fare is proportional to Pclass
17897, Encoding variables
24585, Run Training and Validation Step
14615, Station 2 Scaling the age
29555, Method for wrapping TabularDataset into iterator
35205, Increasing worsens the performance of Lasso
38319, Break the model
40974, Daily revenue summed up into monthly revenue for every store
19558, coco transforms
41349, House prices are normally correlated with overall evaluation
40270, Exterior Quality vs Sale Price
6566, Cabin
2829, Keep only the columns used on the training set to predict on the test set
12685, Visual Data Exploration
20928, We choose a 4 layered network consisting of 2 convolutional layers with weights and biases w1 b1 and w2 b2 followed by a fully connected hidden layer w3 b3 with HIDDEN hidden neurons and an output layer w4 b4 with 10 output nodes one hot encoding
7586, As for GrLivArea, there are also two outliers at the lower right for the all-SF feature
41713, Lung segmentation
36510, Embarked Sex Fare Survived
7811, Interpret LightGBM Model
15054, Categorical variables grouped into fewer categories could improve the correlation
32235, Launch our session. In TensorFlow you always launch a session to run computations and assign placeholder values
14464, Back to "Evaluate the Model"
20378, Write predictions to submit it
35087, Predicting values on training set
14846, Since we don t expect that a passenger s boarding point could change the chance of surviving we guess this is probably due to the higher proportion of first and second class passengers for those who came from Cherbourg rather than Queenstown and Southampton
11823, SalePrice Correlation Matrix using HeatMap
30463, Visualizing the most informative features
7442, Combining the two datasets and then doing One Hot Encoding on the combined dataset
15854, Namelength
35868, Splitting in training and validation set
7223, These houses do not have any fireplaces and FireplaceQu can be replaced with none
4644, Box Plot
14519, Age
30423, Load model
39834, Main part load train pred and blend
24818, predicting model and submit solution
16570, Voting Ensemble
19941, Embarked
32122, How to drop rows that contain a missing value from a numpy array
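A short numpy sketch:

```python
import numpy as np

a = np.array([[1.0, 2.0], [np.nan, 3.0], [4.0, 5.0]])
clean = a[~np.isnan(a).any(axis=1)]  # keep only rows without any NaN
print(clean)  # [[1. 2.] [4. 5.]]
```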
33145, Training the Map
17677, Now I m ready to predict test data set with xgboost algorithm
8023, We found a few missing values in a few of the columns
18219, Train 15 CNNs
8451, Drop the features with highest correlations to other Features
36513, 1st class passengers are older than 2nd and 2nd is older than 3rd class
1687, Correlation Heat Map CorrelationHeatMap
20957, Overfitting and Regularization
41975, To read bottom 5 lines
18761, After filtering there is still a lot of noise because of blood vessels
42140, Decoder
13599, we can fit and transform our data
19673, Align Train and Test Sets
11638, Gradient Boost
26964, which shop is the highest and lowest revenue
22799, look in detail the categorical features
35524, The last part of Feature Engineering is box cox transformation
16187, drop Parch SibSp and FamilySize features in favor of IsAlone
4081, SalePrice correlation matrix zoomed heatmap style
1218, Turn Nan to None
27075, We generate a histogram of headline word lengths and use part-of-speech tagging to understand the types of words used across the corpus. This requires first converting all headline strings to TextBlobs and calling the pos_tags method on each, yielding a list of tagged words for each headline
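A minimal TextBlob sketch of that step (the headline is made up; exact tags may vary, and the underlying NLTK corpora may need downloading first):

```python
from textblob import TextBlob

headline = "Stocks rally as markets rebound"
blob = TextBlob(headline)
print(len(headline.split()))  # word length of the headline
print(blob.pos_tags)          # e.g. [('Stocks', 'NNS'), ('rally', 'NN'), ...]
```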
24100, Dropping the rows which contain outlier values
26967, which item category is the most popular and the worst
18561, Family
6728, MoSold Vs SalePrice
24163, Training Samples
15069, Survived
35150, Plot the model s performance
8956, No NULL value is remaining
5046, To examine the relevant correlations more thoroughly, we plot SalePrice vs OverallQual. The latter is a categorical variable ranging from "Very Poor" to "Very Excellent". Rather than a scatter plot, we use box plots for categoricals
10650, Sex Pclass and Embarked
17686, FILL THE MISSING VALUES IN THE DATA
28713, Joining the tables to our train dataset
4689, Do you remember? You do
29474, Distribution of the fuzz ratio
41606, plot several images with their predictions
32468, Fatality Rate Model
40483, Random Forest Regressor
4954, we ll start with LASSO Regression
29996, XG Boost
26989, Predict
22117, Some points are far from the red line
41709, Resampling
40154, We have a few variables with missing values that we need to deal with
15014, Embarkation Location
26362, With the nvis object created let s make use of the NetworkVisualiser
4226, Mean Encoding
4542, random forest decision tree decision tree
4304, Inference
28238, Changed epoch from 1 to 10
5550, Fit Model
37064, Imputing Missing Variables
19447, The accuracies measured with the different learning rates are all similar. As there are no considerable gains from changing the learning rate, we stick with the default
13868, Dropping UnNeeded Features
31790, Prepare tools of the original kernel
12509, Transformations
16593, Importing RandomForestClassifier
33683, Add Days
17867, Submission ensamble
5061, let s have a look at sale types and conditions
3838, Pivot Tables
21557, Name
9104, This distribution looks skewed to the right
41340, Numerical Features
19604, Columns with missing values
9902, Drop Passenger Id and Cabin
39184, Percentage of incidents per address
14691, Naturally the wealthier passengers in the higher classes tend to be older
40628, Yep looks ok
35370, Create train and valid datasets
18243, Removing the Outliers
30752, Fixing max depth and min samples split
37894, Random Forest Regressor
23021, One Item Features Analysis
23934, Looks like the target distribution is concentrated in a narrow range, but there are still values beyond it
13890, There are 8 NaN tickets because they didn t have a number in them
18820, Gradient Boosting Regression
35871, Confusion Matrix
2884, From my perspective, a heatmap is the best way to look at the different correlations without going through all the trouble
38414, SGDClassifier
32330, Important Features selection
11844, Bath
14376, Most of the passengers were aged 17 to 40
31568, We can look at one batch of images and labels by extracting the data
25281, Display images
13829, we save the prediction in the following file
6866, Dealing with With FullBath HalfBath
42094, Importing Data
12625, Fill the missing values
41935, We use the TanH activation function for the output layer of the generator
7659, stack averaged models with a meta model
14428, Create a function replace_titles to update the titles, verifying the Sex of passengers with certain titles before replacement
19927, We detect 10 outliers
31639, Dates minute Encoding
30681, NN is the future
8130, Missing values
37630, In the example below I have used different arguments for the shift scale and rotate limits to make it easier to visualize what happens if we do not specify the border mode and value arguments
1615, Model
1543, Fare Feature
6763, OneHotEncoding
19640, Any models that can belong to different brands
6800, XGBoost
32114, How to convert a 1d array of tuples to a 2d numpy array
20075, Insights
125, Modeling the Data
1381, Preprocessing
7482, I decided to divide these into 5 categories: Miss, Mrs, Mr, Noble, Crew
8831, Balancing Dataset
7008, Pool area in square feet
9074, I wonder if Exterior1st and Exterior2nd are ever equal
27238, Trend by Country Region for the maximum cases
13470, Exploration of Gender Variable
36825, Here we re simply converting the features to an array for easier use
17961, Converting to Markdown
19730, Observation
6730, SaleCondition Vs SalePrice
12150, If we have a larger dataset we can e
25209, Analyze GrLivArea (above-grade (ground) living area square feet), the second-highest correlation with SalePrice
15473, Feature Fare
21164, Normalizing data
4510, Observations
34295, Visualize Activations Benign
17532, Complete Fare property and create custom Fare bands
16295, Describing training dataset
38980, test data
6742, CentralAir Vs SalePrice
36868, Multi Layer Perceptron
16232, It's time to call the pipeline. MLlib standardizes APIs for machine learning algorithms to make it easier to combine multiple algorithms into a single pipeline or workflow; we take all the commands we have called so far and add them to the pipeline
33042, The default boosted model produces around 84% accuracy, which isn't very good
9599, Finding out the Family Size
36106, Binary LSTM itself
1514, Now let's remove these 0 values and repeat the process of finding correlated values
1998, Well the most correlated feature to Sale Price is
6830, Categorical Features
27332, Checking for NULL values
19708, The two variables train_data and test_data will be used for storing the modified prepped data generated from the train and test images
7230, Log Transform the skewed features
15686, Checking missing values in more detail
13525, Remember All transformations that were done in the training dataset must be done in the test set
3888, Median
3813, Submission
32604, Bayesian Optimization on the Full Dataset
16559, Basically, the columns SibSp and Parch tell us whether the corresponding person was accompanied by anyone or not. We create a new column Is_alone, which tells us whether the person was accompanied (1) or not (0)
36371, Creating a CNN Model
39281, Feature analysis
16476, SEX
37220, Choose Embedding
20065, Normalizing category columns which have similar values
1557, Additionally looking at the relationship between the length of a name and survival rate appears to indicate that there is indeed a clear relationship
8372, Logistic Regression
21761, Some entries don t have the date they joined the company
38705, After Mean Encoding
33642, Missing Values
16907, fillna
35412, Attribute Bathrooms Bedrooms
8296, Random Forest
4366, 1stFlrSF First Floor square feet
6858, Dropping Columns
20709, Removing outlier
4939, Handling numerical missing data
24116, NN
15608, Embarked Feature
18242, Predictions for testing data
24404, check the distribution of the standard deviation of values per columns in the train and test datasets
32578, Example of Sampling from the Domain
34280, check for outliers
9654, Filling other missing categorical and numerical data with None and 0
36739, Reading Test CSV Data to Predict values
16624, Cross Validation
16266, Sex
42398, How do sales differ by store
1519, look at their distribution
1304, Observations
2389, RandomizedGridSearch
15296, K nearest Neighbors
32190, We can print some images from the test set along with the corresponding predictions to get a sense of how good our model really was
7566, describe for numerical features
19735, For each Department
16694, generate classes for different titles and divide them into different title classes
41233, Reshaping the data to the format Convolution layer expects
33996, Recurrent Neural Network RNN
23838, Taking X314 and X315
24442, Load Data
17046, Random Forest
41490, Evaluate Model Accuracy
38083, We normalize the train data and test data and we do this by dividing the data by 255
23535, Load data
37137, Multiclass Classification
5431, I would assume that the 690 just don t have a fireplace
19010, Predict on the submission test sample
9313, To give a simple ordering to these categories we can do the following
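For instance, a hedged sketch of such an ordinal mapping (the exact scale values are illustrative):

```python
import pandas as pd

quality = pd.Series(["Ex", "TA", "Gd", "Fa"])
order = {"Ex": 4, "Gd": 3, "TA": 2, "Fa": 1}   # Ex > Gd > TA > Fa
print(quality.map(order).tolist())             # [4, 2, 3, 1]
```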
17851, predict with the validation data
5534, Create SibSp Categories
18394, To select the best model out of the five, we define a function score_model below. This function returns the mean absolute error (MAE) on the validation set. Recall that the best model obtains the lowest MAE
31360, Pretty Printing
32086, Table 2: the 5 features that have the highest absolute correlation with the log target
24480, Strategy 2 Add data augmentation and a learning rate annealer
18005, Here Sex 1 implies male and Sex 0 implies female
43042, This is the fifth part of the algorithm mentioned below
11628, My baseline model will be LogisticRegression
5710, Exterior1st and Exterior2nd: again, both have only one missing value; we just substitute the most common string
31377, Translate image
23182, Feature Importance
1004, So what would you do here The first thought is to extract the salutation do you agree RegEx it is once again
34711, Mean over all items
40256, Total Rooms Above Ground
14273, Interpreting Confusion Matrix
1759, Code for finding outliers individually
30996, Adding a Relationship
32537, Generating csv file for submission
13737, Confusion matrix and accuracy score
41788, Check the output button for the training description
40642, Preprocessing
2435, Nice! The lasso performs even better, so we'll just use this one to predict on the test set
13094, Frequency distribution Categorical Variables
17049, The optimal parameters found during cross-validation score lower than on the public leaderboard, meaning we are overfitting to the training set; decrease max_depth and n_estimators to make the model more general
9868, Parch Survived
6402, Preprocessing Test File
16371, Checking Size of Family
31929, visualize random examples predictions from the test dataset
9669, We can say that we have a fairly clean dataset to provide to our classifier algorithm
19319, Make Predictions
15525, The names begin with certain titles Master Miss for children and Mr
838, StandardScaler
6218, BsmtQual Evaluates the height of the basement
27174, Feature Selection
37216, Much better
853, mean of best models
31832, Over sampling followed by under sampling
34250, The basic idea of time-series prediction with an RNN is to rearrange the data
10867, Fitting the vectorizer
38991, jaccard score on train data
2734, Reading the two datasets that are going to be used to demonstrate various methods of handling missing values
512, Gradient Boosting
25512, PARAMETERS OF THE EMBEDDING LAYER
23564, Animation
13886, Passengers Fare
4127, Stacking Averaged models Score
19817, Contrast Encoders
8789, Training with the whole dataset
33853, Analysing extracted features
1224, Observe the correction
40983, Grouping by multiple columns
15676, Plotting Learning Curves
3929, MLP Classifier
11472, Confusion Matrix
23619, Gini coeficient
35778, Calculate Metrics for model list
8662, Instructions
18435, Predictions
9478, This function ensembles the trained base estimators using the method defined in the method param
65, submission
21773, Based on that and the definitions of each variable, we fill the empty strings either with the most common value or create an "unknown" category, depending on what I think makes more sense
43006, The import from pyspark.sql.functions is needed for countDistinct
6441, More than 50% of the data is missing for PoolQC, MiscFeature, Alley and Fence
20829, we ll fill in missing values to avoid complications with NA s
23643, Augmentations
11023, People who embarked at port C have a better survival rate
235, Library and Data
12641, Name Feature
39409, Sex
19463, Plot the accuracy and loss metrics of the model
41590, MISCLASSIFIED IMAGES OF FLOWERS
282, Machine learning algorithms typically do not handle skewed features very well
9617, Feature Engineering
22189, The brand name data is sparse, missing over 600,000 values
537, Looks like there is a very strong correlation between survival rate and name length
21142, let s look at its moments
6104, Four features have more than 80% missing values
2170, Sex is one of the most discussed topics in Human history
7212, Let's check which features are correlated with our target variable SalePrice
26729, Plotting monthly sales time series across different stores
18954, Display distribution of a continous variable for two or more groups with Mean and Standard Deviation
28354, EDA of Bureau Data
40993, While, for example, an aggregate function reduces the DF, this function just transforms our DF
18601, Confusion matrix
38513, Again, the word count is more or less similar across positive, negative and neutral texts
42056, Displaying a pie chart and countplot together using matplotlib
40850, Usually we drop a variable if at least 40% of its values are missing
6868, Dealing with LowQualFinSF MiscVal
34953, Model accuracy assessment
3375, Select the Target in your dataset
18169, Data access
87, Age Feature
7111, We use logistic regression, k-nearest neighbors, support vector machine, gradient boosting and decision tree as first-level models, and random forest as the second-level model
42181, loss function is categorical crossentropy
31235, Features with max value more than 20 and min values less than 20
4113, Fill the columns with None or zero accordingly
24926, Images
28798, Predicting with the trained Model
13866, Mapping Feature
27214, Modelling
18492, I think it's always better, when working with decision-tree-based models, to have dummy variables instead of categoricals with integer levels, because integer levels alter the bias of the algorithm, which will favor a higher weight for categories like 4 and deprioritize levels like 1
21777, Training
22612, Randomized Search
36750, Take the last 14 days for this notebook in order to predict the first unknown day's sales
31413, Pack the test set as what we did for the training set
35884, Add time lagged features
6831, Numerical Features
21665, Prepare Dataset
7419, Remove additional variales and add missing variables in test data
1929, MSZoning
6893, And here is the rate of survival by class
15851, Cabin
32243, Convolutional Neural Network
22270, Age
20939, Augmentation
38206, Predict and Make Submission File
760, define the autoencoder model
26499, TensorFlow graph
5460, Which specific attributes lead to more uncertainty
4231, Data Manipulation
41202, check how our model behaves on the training data itself
29591, Defining the Model
4041, Categorical Numerical Variables
19183, And now we create our submission
34420, we move on to class 0
15705, We have created 8 age categories that follow the original distribution of Age values
12952, Numerical variable analysis
11502, The danger in label encoding is that your machine learning algorithm may learn to favor a over b
19808, look at another example feature like Embarkment
10730, Random forest classifier
3636, Extracting Title from the Name feature
8560, Replacing Values
26309, we need to handle missing data
7174, Now that we have converted these features, let's look at their correlation with our target using a correlation matrix plot
6744, OpenPorchSF Vs SalePrice
20323, Section 4 3D Convolutional Neural Network
40268, Feature Selection with categorical features can be a bit tricky as there are many proven ways to do so but no standard process
34235, And now we can build our DataLoaders and you re done
12459, XGBoost Regression
25948, Feature Engineering
24049, Imputing nominal categorical features
23379, I did some more manual inspection of the bad labels picked out of this code that I have not included in this notebook
18891, The very last step is scaling and data transformation
41528, by eye it looks as if there may be some issues distinguishing between 5 and 6 on occasion
2006, Everything looks fine here
33890, previous_application: loading, converting to numeric, dropping
31627, Train and predict
32465, Reach Model
5250, Feature Importance Scores From Model and via RFE
16592, Define Feture importance function
37665, Data visualization
10374, Test data
14408, deleting Survived copy feature because I made it just for EDA
36092, And how successfully
33743, We have to map the image vectors back to image
41425, Missing values in the macro economic data
35347, Model Building
9837, K Nearest Neighbor
31109, Finally we use the cat_col dataframe and good_label_cols for one-hot encoding; these will later be used for prediction
28405, And KERAS
23323, Number of months since the last sale of the shop/item pair (uses info from the past)
35707, Compare the r squared values
23650, Submit to Kaggle
40624, we can construct the pipeline and transform the training data ready to train the model
39425, get rid of Cabin for now
12754, Get dummies
14004, Random Forest br
23711, Run the next code cell to get the MAE for this approach
22924, now we have a column of titles such as Mr Mrs and Miss
6659, Drop unnecessary columns in train and test set before predictions
32937, Feature Engineering
28476, New features based on area adapted from THIS KERNEL additional features scriptVersionId 1379783
41578, BREAKING IT DOWN
14074, AgeGroup
41596, To verify that the data is in the correct format and that you re ready to build and train the network let s display the first 25 images from the training set and display the class name below each image
42345, Preliminary model selection try five different regressors at first
3912, MasVnrArea and MasVnrType NA no masonry veneer for these houses
36616, Calculate unit vectors U1 V1 and new coordinates
3268, move to handle missing values
41808, Image augmentation
3695, Data Pre processing
19912, Split train test
39151, There are many different types of CNNs. For now we only use one type, ResNets. They come in different sizes: there is resnet18, resnet34 and a few more. At the moment you just have to select the size
6059, Spaces and Rooms
32090, How to create a boolean array
7138, Fare
38272, Each word is just a sequence of characters, and obviously we cannot work with sequences of characters directly. Therefore we convert each word into an integer, and this integer is unique as long as we don't exceed the vocabulary size. It's not one-hot encoding; it's basically just the transformation from a list of words into a list of integer values
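A minimal sketch of this word-to-integer mapping with Keras' Tokenizer (num_words caps the vocabulary; the texts are toy examples):

```python
from tensorflow.keras.preprocessing.text import Tokenizer

texts = ["the cat sat", "the dog barked"]
tok = Tokenizer(num_words=10000)   # cap on the vocabulary size
tok.fit_on_texts(texts)            # assign each word a unique integer
print(tok.texts_to_sequences(texts))  # e.g. [[1, 2, 3], [1, 4, 5]]
```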
3767, Random Forest Classifier
22195, Define constants to use when define RNN model
32427, Training Function
8582, Applying Linear Regression
39169, To use a model pretrained on ImageNet pass pretrained True to the architecture like this
22470, Autocorrelation ACF and partial autocorrelation PACF plot
29708, final Inference and submission
3830, selection methods
27132, Discrete Numerical Analysis
8223, Working on integer related data
34447, Item Prices
13234, PassengerId string
6020, Submit
25978, Features columns
2430, Evaluation Our Models
20376, Build the final model with the best parameters
29321, Model 2
10156, Not effective, because the numbers are comparatively very large; apply a log transform to avoid that
20766, Text Processing
27466, Most common stopwords
28215, the same in plots
13560, Fare by Age
22703, Visualize the Transformations
18096, We visualize a random image from every label to get a general sense of the distinctive features that separate the classes
651, That s really cheap for a 1st class cabin
28270, We shall now take look at rest of the categorical features
31280, Naive approach
13910, Does age play a role
1265, Blend models and get predictions
10648, The Cabin feature is not as useful as we expected
31477, I trained the model on my local machine
34532, Putting it all Together
5194, Prediction
42102, Convutional Neural Network
39027, Tunning the hyperparameters
4815, Only numerical features left to feed our pipeline
41211, For acceleration, instead of calculating p(x_i|y) we now calculate p(bin of x_i|y) for every bin. To achieve that, we cut every continuous value of x_i into bins and map the continuous x_i to its bin probability p(bin of x_i|y). This is binning (Kernel Naive Bayes)
41005, Resnet class
22182, Bart
32771, Predict
15786, Confusion matrix
13757, Again, it is clear that females have a higher chance of survival
37090, Load our data
38774, Select model and predict test data
29151, Lot Frontage Fill with Median
23083, STEP 2 Features Engineering
6643, Survived and Non Survived male and female by PClass
33636, Descriptive statistics
10922, Displaying info on graph
21753, We use the same dataframe without further processing for Advanced Ensemble Learning so I save it to csv and use it in Part 2
12818, Embarked and Fare Analysis
34738, Last 3 dimensions of the Latent Semantic Space
4307, Training Dataset
12227, Read the data
5672, Check if there are any unexpected values
31305, Neural network with Keras
18893, eXtreme Gradient Boosting
9316, which one is better
31475, Model
16447, Age
19916, Here I am creating one lag for test data by just replacing subsequent column name
18132, SVR
13358, Types of Variables
25784, There are lots of unique values here, so take the first character of Cabin
17544, Submission
8903, Stacking
16313, Plotting some distribution plots of survival by sex
30183, After one hot encoding we have got 99 different species
10444, Not normally distributed so we shall apply logarithmic transformation
42547, Embedding with engineered features
21066, Removing Empty Documents
37889, Alpha
661, Nearest Neighbours
10637, We can extract some crucial information about passengers Age and Social Status from Title
20117, Fill null value with 0 for lag features
43303, Given that they hold a relatively small number of nulls, I just set them to "Unknown"
21375, Keras expects the training targets to be 10-dimensional vectors, since there are 10 nodes in our softmax layer
9433, Point Plot
26940, The importance of the missing values
29978, Augment Images using ImgAug
23228, There is variance in the dataset so we scale the data
13581, Encoding
43009, Normalize Imbalanced data
8711, XGBoost
7582, new feature sum of all Living SF areas
2547, Imputation with median for numeric variables and mode for character variables
20638, Bigrams
28565, As the condition of the basement improves the SalePrice also increases
40855, Bivariate Analysis
15621, Cabin feature
37115, total training time is 65 minutes
28281, Needed for generating important absent vectors in embeddings
36299, Decision Tree
43368, Normalisation
15619, Family Size Feature
9760, Because I used a hand-written dictionary to create the TitleGroup feature, there might be some titles which only exist in the test set and will be converted to NaN
8572, Looping Over a Column
33102, Evaluation of the models
30942, Visualizing Interest Level Vs Hour
23878, Properties 2016
23319, Add previous shop item price as feature Lag feature
36423, Numerical Features
41183, print list of numerical columns
32774, Separate Train Test data
42121, To be continued
9652, Assigning None to missing categorical values
2293, Pandas: what's the data row count?
37, Embarked Column
31785, The most interesting model is LGBM with the first draft score
3762, Ridge Regression
40626, We are now ready to train the various classifiers
2182, One standard way to enrich our set of features is to generate polynomials
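A small sklearn PolynomialFeatures sketch (degree 2, toy input):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[2.0, 3.0]])
poly = PolynomialFeatures(degree=2, include_bias=False)
# [x1, x2] -> [x1, x2, x1^2, x1*x2, x2^2]
print(poly.fit_transform(X))  # [[2. 3. 4. 6. 9.]]
```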
13891, To cleanup the data I am going to
20780, Reading Data
9708, Lasso
6427, Linear Regression Model
10119, Here it looks like passengers from Cherbourg had the highest chance of survival compared to Queenstown and Southampton
31997, get the validation sample predictions and also get the best threshold for F1 score
22668, Most Common Words
20243, In EDA we decided to use family size feature
10558, Find all categorical data
12619, use prefixes in the name to Create a new column Title
8037, Insights
24828, Feature Selection
35094, documents meta csv
12129, Encoding some features features 1
34545, Here I am training the dataset in batches, since the RAM cannot handle the entire dataset of 200-dimensional word vectors at once; I train 30,000 samples per iteration. A single epoch covers all samples in the dataset one batch after another, so one epoch takes len(dataset)/30,000 iterations
29621, Almost 70% of passengers embarked at S
40684, Show Model Accuracy as function of number of clusters used
19330, Building a Sequential Model
32932, Train the production model
27105, Our dataset consists of 1460 rows and 81 columns
27205, 5th Step Predictions
5881, Most of them are right skewed
15135, Embarked
38232, Building the final model
41655, Distribution of data for numerical features
27749, Target variable visualization
20115, Number of days after first sale
37880, Residual Plot
5470, We do note that the columns have changed as well as the order That means some of the fields have absorbed some of the smaller importances into their score
5253, Permutable Feature Importance
5516, Decision Tree
4755, The value zero doesn't allow us to do log transformations
31396, We should scale the values in the data so that the neural network can train better
41316, Generate test predictions
34534, Remove Features
399, SVC
29216, Separate Train and Targets
36282, We can extract the alphabetic characters by running a regular expression
40061, Submission
12398, Checking for null values
3178, SpeedUp Xgboost with GPU
37725, Feature correlation analysis
1324, Which features contain blank, null or empty values?
18764, After reading the 3D CT Scans we first segment the lungs and then generate the binary mask of nodule regions
38682, Min Max age of Patient
39137, We get the following output after applying softmax
30261, Confusion Matrix
3622, a Spearman correlation is conducted
36554, New feature importances
3990, Ooops
34848, EDA
19395, PCA transform the vectors into 2d
28753, Helper functions for image convertion and visualization
15400, There is one missing ticket fare value in test data
31252, Splitting Data Back into Train and Test
37479, In some applications it is essential to find out how similar two words or sentences are
32133, How to find the position of the first occurrence of a value greater than a given value
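A short numpy sketch using np.argmax on a boolean mask:

```python
import numpy as np

a = np.array([1, 3, 8, 4, 9])
idx = int(np.argmax(a > 5))  # first True -> index 2 (value 8)
# Caveat: argmax returns 0 when no element exceeds the threshold,
# so check (a > 5).any() first.
```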
24144, Prediction for Test Data and Submission
2731, Writing submission files
16468, Split the data into train and test sets
27085, However in order to properly compare LDA with LSA we again take this topic matrix and project it into two dimensions
104, Squaring the correlation not only makes all values positive but also amplifies the stronger relationships
19417, Before we start we can do some preliminary work
991, SHAP Values impact on model output
16509, first handle the missing values in our dataset
11285, Handling Missing Data
4840, Data Correlation
22644, Sales Mean
24748, A couple of features look severely depleted but the rest only suffer a few omissions which means imputing these blank variables certainly becomes an option
6096, We now begin an analysis on the normality of some of our very important features
2304, Best Practice Check and Convert the rest into categorical
16886, Small Family
33611, Fit our model to training features and corresponding labels
15447, visualize our new Deck feature
8844, The first thing we ll want to do is replace all missing values with some value
9017, If the there is no Garage just set GarageYrBlt to the average year that Garages are built
818, Outliers
21014, Matrix
1835, Remaining NaNs
39017, Segment the groups by age
31325, XGBOOST
9985, Skewness and kurtosis
7045, Type of dwelling
26801, Mislabeled data
8124, Bagging
37183, use stratified strategy
38307, Model building
7934, The pros of using the Lasso and RandomForest algorithms are the insight we gain from the coefficient weights and feature importances
4104, Set Artifial neural network and Learning
11449, Converting Fare from float to int64 using the astype function pandas provides
12609, check which features contain missing values
9626, Feature Selection Using Linear Regression
19291, Creating multilabels
8115, Guassian Naive Bayes
7312, Observation
40, Fare Column
31795, Below we construct another model using exactly the same layers
13127, Survival Percentage by Gender
23523, The list of all 309 strings after the cleaning
13107, K Nearest Neighbors
29188, Scatter plot of actual values vs predicted values HousingPrices train
41565, None of the kernel PCA variants gives a satisfactory separation; now let's try LDA
5102, we create a basic random forest classifier with 2000 estimators
1914, looks good to go
39044, These top 100 account for a good 20% of the whole training dataset
35331, Splitting into Training and Validation Sets
17821, plot feature importance
30866, Split training and validation data
33601, MaxPooling: the function of a pooling layer is to downsize the feature maps
1921, Fireplaces
16971, Interesting All the passengers with fares higher than 200 were in Pclass1
5028, LASSO Regression
983, Predictions
11324, Embarked
33080, Missing data
37209, Writing to submission
17973, Another helpful feature would be to check whether they were travelling alone or not
8576, Right join (or merge): this join returns all the columns that are in the right dataframe plus the common columns from the left dataframe
16952, PClass
12964, Age and Survived
6122, We just fill missing Zoning values with RL
17824, Here we plot the confusion matrix
3848, Feature Age
25168, Creating our very own stopword list after removing some stop words like how whom not etc that may be useful to differentiate between questions
6120, we replace missing Veneer area with O
27448, Categories
37174, Actual versus expected
11879, Predictions for submission
37141, Training Code
14202, Trying something a bit better
1157, Linear SVR
38502, Distribution of the Sentiment Column
12963, Pclass and Survived
21760, Missing Antiguedad min Antiguedad
8549, Masonry Features
8500, Create dtype lists
19336, Prediction Submission
22473, Multiple timeseries
20616, K nearest neighbor Classifier
24285, Import Libraries
11217, Find best cutoff and adjustment at high end
37451, we load the pretrained bert transformer layer
23919, Model on TFidf
11987, F statistic
6323, Extra Trees
17918, DATA TRAIN
2199, Gotta work a little bit in the Name column
16344, Comparing Models
33686, Days surpassed
11651, Extra Trees
12001, Lasso also acts as a regularized model, but unlike Ridge, lasso does not merely shrink unimportant features: it sets their coefficients to zero, so it also acts as a feature-selection model
42179, Defining the model
32140, How to create groud ids based on a given categorical variable
32517, Model 4 New Model From scratch
32763, From 1D vectors to 28 28 1
4882, You can always use value_counts to check; data visualization is just another option
24977, Train Predict and Save
17742, I wonder if the ticket number corresponds to port of embarkation
38815, WBF over TTA
13227, GridSearchCV for Random Forest Classifier
41445, we can reduce the dimension from 50 to 2 using t SNE
23828, Removing columns with zero ovariance
13567, Ploting Age Cat Distributions Fare
1175, As expected, the LotFrontage averages differ a lot per neighborhood, so let's impute with the median per neighborhood
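A minimal sketch of per-group median imputation with groupby/transform (toy data):

```python
import pandas as pd

df = pd.DataFrame({"Neighborhood": ["A", "A", "B", "B"],
                   "LotFrontage": [60.0, None, 80.0, 84.0]})
# Fill each missing LotFrontage with the median of its own neighborhood.
df["LotFrontage"] = (df.groupby("Neighborhood")["LotFrontage"]
                       .transform(lambda s: s.fillna(s.median())))
print(df["LotFrontage"].tolist())  # [60.0, 60.0, 80.0, 84.0]
```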
12170, Reading Inputs
15695, Number of siblings spouses aboard the Titanic vs Survived
23902, tell BERT to ignore attention on padded inputs
4967, Family Name
21223, let's run the model for 3 epochs; it easily takes 30 min
3027, Outlier removal is usually safe for outliers that are very visible in certain features
10627, we can drop and code Parch code
13846, before cleaning the data, we combine the training and test data in order to keep the same structure
40384, To understand what the function decode image does we use a sample filename to test it out
22405, tipodom Addres type
34958, Looks like 5 is the best number of clusters based on inertia
15430, let s scale the Fare and Age values to be in similar range to the rest of features
13841, Correlating numerical and ordinal features
38097, Feature Encoding
2794, Predict
1043, We are done with the cleaning and feature engineering
7192, let's create a new dataset only with normalized features, to be combined with the one hot encoded features later
20625, We have a balanced dataset which is good
13116, CatBoost
38583, Every text will be converted into machine-readable form
10144, Encode categorical features
20159, Having a look at pixel values frequency
36338, Evaluate accuracy
33329, Convert labels to categories
1890, To evaluate our model performance we can use the make scorer and accuracy score function from sklearn metrics
29366, SUPPORT VECTOR MACHINE
9970, Targets and features
36091, Discuss events
11633, K Nearest Neigbors
39171, When you are finished the CNN could look something like this
4331, Marital status
18396, Generate test predictions
42349, Retrain the model and generate the output file
30898, fill those values
319, Random Forest
25956, Training Data
1978, Data Cleaning
41366, Sale Price Pave Grvl
12078, Model Submission
22873, Model Training
38426, Improving network architecture
41438, look at some examples to see whether the tokenizer did a good job of cleaning up our descriptions
39422, get rid of the Name column for now
15397, Both are first class passengers that paid 80
26928, And then we can make two different corpora to train the model stemmed corpus and lemmatized corpus
10206, Lets check how much error the model is giving
15865, Random Forest
20389, Gradient Boosting Model
26403, We implement it here only for the training dataset to use it for further data analysis below
19416, Now that the model is defined, one trick is to check that it works by passing one input sample
1238, Defining models
32924, let s generate the features coming from the installments data
16573, Evaluation of model with the best classifier
3338, As is evident, Age is mostly correlated with Pclass, which helps us: this time I use the median age by Pclass and fill missing values using this correlation
1840, Distribution of SalePrice
3589, Feature Engineering
23404, Probability features
20056, Additional features
28702, Since Lasso performed the best after optimisation, I chose it to be the meta-model; all other models will be used as base estimators
1843, Distribution of SalePrice in Categorical Variables
26820, check the distribution of the standard deviation of values per columns in the train and test datasets
17529, Calculate average Age for each title and fill NaN values with it
11564, We have a few outliers that are pulling the distribution towards higher prices
14460, back to Evaluate the Model model eval
28071, Model building
27081, However, this does not provide a great point of comparison with other clustering algorithms. In order to properly contrast LSA with LDA, we instead use a dimensionality reduction technique called t-SNE, which will also serve to better illuminate the success of the clustering process
36591, CNN Model
18738, Lowercasing
26644, This means 2
2778, Ensembling Weighted average
18127, Gradient Boosting Regressor
27259, Create a DataFrame with our First Level features
4966, We can extract title and family names from the Name feature
37039, Is there a correlation between description length and price
13109, Decision Trees with Bagging
11920, Testing Machine Learning Models
2106, We continue with the top5 Lasso RandomForest XGBoost LGBoost Ridge
17901, Lets try Random Forest
22038, We have a timestamp variable in the dataset, and time could be an important factor in determining the price
14422, Use function age fillna to fill out NULLs for Age for training and test datasets
2580, Model and Accuracy
37719, Individual Fetaure Analysis
37905, Evaluation
2939, Transforming some Numerical Features into Categorical Featured
5502, Imputing Fare
42080, let's create a function which reads the image using the OpenCV library
42770, Cabin
40982, Allocating Daily Revenue values into custom sized buckets by specifying the bin boundaries
40164, To complete our preliminary data analysis we can add variables describing the period of time during which competition and promotion were opened
39817, But won't it affect the image? The same question was mine when I first thought about it, but no, it won't. Don't believe me? See for yourself
33455, Non Graphical Analysis
6886, Gender
6437, Saving the data The final task
20540, Kernel Ridge Regression
23113, Findings: looks like most of the passengers were single without family, followed by passengers who had a small family; fewer had medium families and fewer still had large families aboard
28311, Identifying the categorical and numerical variables
16787, Feature Engineering
30601, Collinear Variables
12761, I check the number of null values
15444, Create a new feature called FamSize which combines Parch and SibSp
22467, Bar chart
6202, Gradient Boosting Classifier
6785, Converting Numerical Age to Categorical Variable
22534, Embarked vs Survived
20860, RF
42084, Let's set aside 20% of the training data as a validation sample, which will be used to check how our DL model performs
11494, 1
5552, Add predictions to submission
5925, Handling skewed data by applying log(x + 1), since the plain log would raise an error if zeros are present
38512, Text word count analysis
18886, Feature engineering
32028, Fit the regressor
31047, Length of words
36255, Variable Name Description
42835, After training the model we plot the ROC curve on training data and evaluate the model by computing the training AUC and cross validation AUC
39988, Eliminating missing values
35444, CNN
7991, Impute
7999, Train Polynomial Regression
19901, Bottom 10 Sales by Shop and item Combination
10066, The SibSp and Parch variables give information about a passenger's family, hence we can add them up to obtain the family size of the passenger, including the passenger himself/herself
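A short sketch of the computation described above, with toy rows standing in for the Titanic data (column names SibSp/Parch as in the real dataset):

```python
import pandas as pd

# Toy rows standing in for the Titanic data
df = pd.DataFrame({"SibSp": [1, 0, 3], "Parch": [0, 0, 2]})
df["FamilySize"] = df["SibSp"] + df["Parch"] + 1  # +1 counts the passenger
df["IsAlone"] = (df["FamilySize"] == 1).astype(int)
print(df)
```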
18827, To make the two approaches comparable by using the same number of models, we simply average ENet, KRR and GBoost, then we add Lasso as the meta-model
39978, Looking for Missing Values
11243, Skewness
16721, age
26178, first take a look at the distrubution of house prices
22501, A placeholder is initialized only once, when the graph is run; basically, a placeholder is used to feed input into the neural network
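A minimal TF1-style sketch of that idea (written against the compat.v1 API so it also runs under TensorFlow 2; shapes are illustrative assumptions):

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# The placeholder holds no data at graph-build time; it is fed when the graph runs.
x = tf.placeholder(tf.float32, shape=[None, 784], name="input")
w = tf.Variable(tf.zeros([784, 10]))
logits = tf.matmul(x, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(logits, feed_dict={x: np.zeros((2, 784), dtype=np.float32)})
    print(out.shape)  # (2, 10)
```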
11439, Tune Parameter
565, submission for GradientBoostingClassifier
39753, List out all the best scores
39154, Plot the 9 images with the highest loss
19305, Evaluation prediction and analysis
21327, Fireplace
12458, Scikit learn Linear Regression
3210, we have half and full bathrooms
38123, Checkout the size of data
15421, let s check if age group had an effect of survival
6114, Garage cars and Garrage area next please
5419, Modeling font
23364, Preview of first 10 images
37168, Addressing problems with NaN in the data
2360, Sklearn Classification Voting Model Ensemble
39455, checking missing data in credit card balance
22126, Bagging
35902, Ready to submit
15217, This got on LB
22129, AdaBoost
23197, Findings: the predictions look quite similar for the 8 classifiers, except when DT is compared to the others. We create an ensemble with the base models RF, GBC, DT, KNN and LR; this can be called a heterogeneous ensemble, since we have three tree-based, one kernel-based and one linear model. We use the EnsembleVoteClassifier from the mlxtend module for both hard and soft voting ensembles; the advantage is that it requires less code to plot decision regions, and I find it a bit faster than sklearn's voting classifier
13110, Random Forests
8752, Create Output
40056, Personally I find it a bit easier to use weighted cross entropy loss but perhaps with tuning the hyperparameters properly the focal loss could be a good choice as well
4508, Data Cleaning
41873, Printing scores
12622, Model
26619, We are defining as well a function to process the data
7665, Clean and Edit Dataframes
16894, it turns out most of the people having Cabin recorded are from Pclass 1
23128, Age Survived
658, Based on the first look we define the input columns we ll be working with
8953, Fixing Fence
33880, Choosing the columns that we use as features
18369, Checking Autocorrelation
41576, Setting the Random Seeds
1225, Creating features from the data
27282, Base Model
19697, Epochs and Batch Size
16925, Rank all classifiers
39865, Sort out numeric columns
33253, Numerical features
15087, Extra Trees Classifier
30868, Building CNN architecture using keras sequential API
39310, Import and process training set
11821, SalePrice vs YearBuilt
42933, Removing data we no longer need
9785, From intuition 20 is a good threshold for now
14723, There is indeed a stark contrast: you were much less likely to survive if your title was Miss
917, Feature Scaling
19900, Top 10 Sales by Item
40144, also explore the possible correlations between features and simple high level properties of images without going into NN
32508, Training Model 2
29134, we can use the Missingno package, created by a resident Kaggler, which is a most useful and convenient tool for visualising missing values in a dataset, so check it out
37008, Most important Aisles over all Departments by number of Products
3241, Concatenating both the data set
18027, GridSearch Parameters
15775, Create new features
31557, doing the same for GarageYrBlt, LotFrontage and MasVnrArea as in the train data
7630, blend 1 gscv Ridge and gscv Lasso
12462, Remapping Categorical variables
25163, Checking and Removing Null Value
42222, With the model set up there s just one more step to add before all of the individual elements can be compiled and that s to add the Optimizer
20847, Now that we've engineered all our features, we need to convert them into input compatible with a neural network
15226, Handling Missing Values Imputation
17556, It's highly skewed, so we transform it towards a normal distribution
32043, AUC is a measure of the classifier skill considering all different thresholds
11464, Logistic Regression
18968, Display the contour lines of a 2D numerical array z, i.e. interpolated lines of isovalues of z
27555, Display interactive filter based on selection of dependent chart area
5669, Original Name
26110, Model Build Train Predict Submit
3586, Label Encoding
20977, Prediction
21167, split training data into training data and validation data
24685, All EfficientNet models can be defined using the following parametrization
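A hedged sketch of that parametrization, as found in the reference EfficientNet implementation (the (width, depth, resolution, dropout) coefficients below are quoted from memory; verify against the official repo before relying on them):

```python
# name: (width_coefficient, depth_coefficient, input_resolution, dropout_rate)
EFFICIENTNET_PARAMS = {
    "efficientnet-b0": (1.0, 1.0, 224, 0.2),
    "efficientnet-b1": (1.0, 1.1, 240, 0.2),
    "efficientnet-b2": (1.1, 1.2, 260, 0.3),
    "efficientnet-b3": (1.2, 1.4, 300, 0.3),
    "efficientnet-b4": (1.4, 1.8, 380, 0.4),
    "efficientnet-b5": (1.6, 2.2, 456, 0.4),
    "efficientnet-b6": (1.8, 2.6, 528, 0.5),
    "efficientnet-b7": (2.0, 3.1, 600, 0.5),
}
```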
30470, Ensemble multiple methods using VotingClassifier or VotingRegressor
39218, Run function
22699, Data Loader
7306, Observation
22403, is indrel which indicates
24701, BatchNorm momentum scheduler
25240, Log Transform
14256, Feature Engineering
31473, Train Validation split
1049, I didn't drop all the outliers, because dropping all of them led to a worse RMSE score
16696, We do not require names after this either so let s drop that data
670, eXtreme Gradient Boosting XGBoost
31255, Hyperparameter Tuning
20909, Data Augmentation
36761, Splitting into Training and Validation Sets
25684, SIRD Model
34928, WordCloud for tweets
31120, Even though there are no obvious characteristics in the time-derived columns, month and date still have relatively high importance, which is interesting
23440, Submission
4722, Import necessary libraries and files
11402, Looks like we can expect a model accuracy of about 80%
41359, Kitchen quality is an important feature because there is a clear correlation with sale price
26662, Check the data
14323, Parch
6410, check relation of these fields with the target variable
36795, Awesome stuff. But if we want to take it a step further, we can. We've previously learned what lemmas are; if you want to obtain the lemmas for a given synonym set, you can use the following method
15208, Fare processing
35606, Show all problem images
5538, Statistical Overview on final Features
19053, We now specify the y or output using ColReader and specify the target column in the csv file which in this case 0 denotes benign and 1 denotes malignant
22672, Word Cloud
38671, Gradient Boost
14820, 1st class passengers are older than 2nd class, and 2nd class are older than 3rd class
26256, Splitting data into train and test set
41420, Three classifiers are considered in this notebook Random forest classifier Support vector machine classifier and KNeighbors classifier
19846, Equal Frequency Discretisation
19534, Listing and Replication
42825, Sample
3690, Hybrid Models combine different models
10464, For a better distribution plot we can use the Seaborn package s distplot method which offers a smoothed histogram with a kernel density estimation plot as a single plot
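A quick sketch of such a plot. Note that distplot is deprecated in recent seaborn; histplot with kde=True is the modern equivalent, and the tips dataset is just a stand-in here:

```python
import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")            # stand-in dataset for illustration
sns.histplot(tips["total_bill"], kde=True)  # smoothed histogram + KDE overlay
plt.show()
```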
17818, Model with Sex Age Pclass Fare features
26796, Defining the architecture
20137, visualise some digits
29752, Test set images
36340, Define Neural Network Architecture
42083, we create a training batch now
421, Scatter plots between SalePrice and correlated variables
1838, Categorical Features with Meaningful Ordering
36422, Filling Missing Values
19700, Evaluate the Model
14194, SVC
14649, Number of rows and columns
4201, We can use one hot encoding to create as many columns with 0 and 1 as variable values
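A minimal sketch with pd.get_dummies, assuming a Titanic-style Embarked column for illustration:

```python
import pandas as pd

df = pd.DataFrame({"Embarked": ["S", "C", "Q", "S"]})
# Expands the categorical column into 0/1 indicator columns
print(pd.get_dummies(df, columns=["Embarked"]))
# -> Embarked_C, Embarked_Q, Embarked_S columns of 0/1 values
```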
29710, Before going any further we start by splitting the training data into a training set and a validation set
18801, Since area-related features are very important for determining house prices, we add one more feature: the total of the basement, first-floor and second-floor areas of each house
35117, create folds
28161, Training Evaluation
3953, Create TotalBath Feature
35427, Plotting the model metrics
15281, Survival by Gender and Age of passengers
11358, Missing Data
18736, 2096320414714
5843, Pandas DataFrame.corr is used to find the pairwise correlation of all columns in the dataframe. Any NA values are automatically excluded, and any non-numeric columns are ignored
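A tiny illustration of that behaviour (recent pandas requires numeric_only=True to silently drop non-numeric columns):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3, 4],
                   "b": [2, 4, 6, 8],
                   "c": ["x", "y", "x", "y"]})
print(df.corr(numeric_only=True))  # 'c' is dropped; corr(a, b) == 1.0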
23331, The errors are reasonable but the achieved accuracy is still a way off from the state of the art neural networks
9890, Name Title
1510, To do so, let's first list all the types of our data from our dataset and take only the numerical ones
13698, Start by isolating the rooms which have data
14685, Training on different models
17442, In the future, I may check for a better way to predict the age
14296, Writing the Output in csv file for Submission
34786, Count is highly correlated with Casual and Registered
38998, Normalize the flattened array by 255 since RGB values are between 0 and 255
32273, Go to TOC
34046, Encode DayOfWeek to Integer
10569, Evaluate the training
22400, That number again
34413, Before we begin with anything else let s check the class distribution
12503, Tune subsample and colsample
16335, Support Vector Machine SVM
9184, Month and Year the house was sold
10422, Preparing to modeling
26809, Net
8526, Basement Features
38650, Total Family onboard
38308, Accuracy with Logistic is quite good
16648, Getting Data
39196, Handling imbalanced classes
19913, XG Boost
27313, Training and Prediction
15183, ROC Curve and AUC
26422, Only about 12 of the Misses younger than 15 years have no parents/children on board
30680, lightGBM
39759, The second line of graphs is just a zoom into the interesting parts of the graphs on the first line
14059, Pclass vs survived
42543, Feature EDA
34032, Cross-validation is the best option for this dataset
1407, The unknown Cabin values will be set to C for the first class, D for the second class and G for the third class
14005, Logistic Regression
11129, More filtering of data to try
6851, now we analyse based on Parch (parents travelling with the passenger) and SibSp (siblings travelling with the passenger)
16491, ENCODING Categorical Data
30684, Make a special function for RNN
18536, There are two types of features
9673, Create a class to track all parameters and tuning process
9168, GarageScore
27127, The missing data may simply mean that no garage is present
43304, Now that there are no nulls in the categorical features, I can proceed to label encode them so they become numerical features, which allows me to put them through the regression algorithms
5697, Missing Data
1856, Ridge
30365, Looks fine; predict all provinces of the United States with confirmed cases greater than 500
13397, Modelling
40250, Same problem Same solution
1850, Normalise Numeric Features
20214, Normalization is an important step while training deep learning model
29325, There are many different columns now, so it will be difficult to view all of the data
12093, Log-transform the Y target value, as stated on the Kaggle evaluation page
16225, as S occurred most of the time, we fill the missing values with S
34864, Combining All Columns
28111, Predict the Sale Price using the best fit XGBBRegressor model
22237, Title
28567, As the amount of exposure increases, so does the typical SalePrice
23998, Turns out we have dropped just one column Compare the shape
26492, Use the next code cell to preprocess your test data Make sure that you use a method that agrees with how you preprocessed the training and validation data and set the preprocessed test features to final X test
32727, Features Interaction
42076, Submit
13312, Decision Tree
28718, I start by exploring our target, item_cnt_day, which refers to items sold, and the item price
23801, Omit low information variable
38062, Feature NLP Engineering
39045, consider now the average interest levels for these 100 buildings
22089, Convert Numpy Arrays to Tensors
27964, Version
8624, Modelling
3707, Ridge
3664, Locating missing data
19252, Basic EDA
11202, Calculate Metrics for model list
630, As we would expect intuitively, it appears that we are more likely to know someone's age if they survived the disaster
14768, Final RF Submission
36197, OK, so there is at most 6 months of data for customers with missing fecha_alta; it means we can consider those with missing fecha_alta as new customers
29610, We can then get the learned values of these filters the same way we did for our version of AlexNet and then plot them
8702, no null values remain now
2202, applying these functions to both columns
3473, As mentioned earlier L1 regularization shrinks some of the coefficients to exactly zero and in this way does variable selection
30381, Feature selection
6186, The number of passengers who survived, based on the number of siblings they had on the Titanic
25904, look at the important words used for classifying when target 1
3317, Observe the prediction effect of our model on the training set
19804, End of Distribution Imputation
31550, FireplaceQu PoolQC Fence MiscFeature
2336, Helper Functions
40112, Since we do have a lot of combinations, we need to manually assign a color palette
7986, Merge Lots
29130, Model2 Error
34252, Pytorch Tensors
15165, It s Time For The Machine To Learn
30965, As an example of a simple domain the num leaves is a uniform distribution
9754, FareRange
4848, Elastic Net Regression
23292, Highly correlated features
31346, Our param grid is set up as a dictionary so that GridSearch can take in and read the parameters
9043, Correlation between Target Variable and Features
39092, Average syllables per word in a question
7679, We are done with the cleaning and feature engineering
5187, Logistic Regression is a useful model to run early in the workflow Logistic regression measures the relationship between the categorical dependent variable feature and one or more independent variables features by estimating probabilities using a logistic function which is the cumulative logistic distribution Reference Wikipedia
17913, DATA VISUALIZATION
37825, Word cloud for Normal tweets
7851, Explained Variance as a Performance Metric
3668, Check all numerical and categorical features
15535, Separate labels y from data
14823, Family Size
31801, updated robustness test with Albumentations
32699, Random Forest
20027, How can we use this
38409, Here we have a multiclass classification problem
8458, Evaluate Apply Polynomials by Region Plots on the more Correlated Features
36876, Keras CNN model 1
4807, Modelling
24314, let's display the model errors
22519, Add also int label encoding for a train test pair implemented in the same manner
2510, Dropping UnNeeded Features
9587, Importing the data with Pandas
21172, plotting training and validation accuracy
34670, Cumulative revenue
5996, We found outliers in
25666, Cross validate the data so we can use a test set to check accuracy before submitting
25514, GETTING ENCODING FOR A PARTICULAR WORD IN A SPECIFIC DOCUMENT
31800, First let us test the first 10 test examples
32192, T SNE Lets code
21038, Topic Probability
8685, Linear regression is also based on the assumption of homoscedasticity (that the variance of errors is constant), and hence taking the log is a good idea to help ensure it. A bit scary, but simple
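A small sketch of the effect on synthetic skewed prices: log1p compresses the right tail, which tends to stabilise error variance for linear models.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.RandomState(0)
prices = rng.lognormal(mean=12, sigma=0.5, size=1000)  # synthetic skewed prices

print("skew before:", skew(prices))
print("skew after :", skew(np.log1p(prices)))  # much closer to 0
```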
27886, Roughly speaking this table is a summary of the previous three graphs
1944, Relationship with TotalBsmtSF
7070, I ll use the best models and put weight on their predictions
26734, Plotting sales over the week
27372, adding mean price and removing item price
31510, We sort the top 40 features
28612, Functional
428, MiscFeature Data documentation says NA means no misc feature
30938, Visualizing Outliers In Data
1210, Actual predictions for Kaggle
33634, A general overview of the data we need to work with
28941, The main reason why passengers who embarked from Southampton had a low survival rate was that most of them were 3rd class ticket holders
35625, Knowledge distillation
3453, assume the 693 3rd class passengers missing a Deck were in steerage
32070, Here n components decide the number of components in the transformed data
22078, Findings
1335, We can convert the categorical titles to ordinal
32481, Derive Recommendations
26970, Image Augmentation
8840, Prepare columns
32138, How to generate one hot encodings for an array in numpy
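One common NumPy idiom for this: index an identity matrix with the integer labels (a minimal sketch, with made-up labels).

```python
import numpy as np

labels = np.array([0, 2, 1, 2])
one_hot = np.eye(3)[labels]  # each row of eye(3) is a one-hot vector
print(one_hot)               # shape (4, 3)
```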
23823, y wrt ID of dataframe
34723, Submission
27202, Embarked
42002, Sort Columns by alphabetical order
31190, AUC ROC Curve
27371, working with the price
16291, Importing Libraries
28421, Simple graph
40682, look at another such feature distance from 10th most frequent template
29332, How do the predictions compare for the two random forests
27420, Less learning rate
29115, Our model was able to reach very high accuracy in just a few epochs
26300, Test on validation datasets
40802, Prediction
9966, Distribution of lot area
9348, Predict age
9728, Looking at the neighbourhood spreads of lot frontage there are some areas in the city of Ames that have a fairly distinct spread of LotFrontage values
4286, We don't have 2010 data for the whole year
12670, transformation
29363, Splitting the dataset
35172, Compile 10 times and get statistics
13167, let s load the datasets from competition inputs
467, t SNE
21274, Build roBERTa Model
42357, Removing https type symbol
32519, Train the Model
21524, We can now visualize this data
7292, Decision Tree
33512, Italy
8421, Test hypothesis of better feature Construction Area
37773, define a function that allocates large objects
39681, Display clustering boat
15263, Observation: people from class 1 have higher chances of survival, and people from class 3 have lower chances of survival
36629, How about bathrooms and bedrooms
13797, Acquire data
8758, Correlation Of Data
8324, the accuracy turns out to be with n neighbors
42273, day
36468, Images from ETH Zurich
21012, Initial Model Building with dabl
28313, Let's examine the bureau dataset
31546, BsmtQual BsmtCond BsmtExposure BsmtFinType1 BsmtFinType2
1421, I create dummy variables for all variables with categories using the function get dummies from pandas
39789, Lets look at distribution of logerrors with top 10 frequent propertyzoningdesc
12098, Replace NA in Numerical columns based on information from description
36023, Presets
12138, Basic evaluation 2
41950, We are importing libraries nltk numpy pandas and sklearn
27609, Creating the OHE vector for the labels
1873, Dealing with NaN Values Imputation
37625, Distribution of labels
18576, The wider fare distribution among passengers who embarked in Cherbourg
23665, Beta t EGARCH models
35685, XGBoost HyperParameter Tuning
11889, Random Forest
29127, Performance Comparison Erros For Each Model
17644, Hyperparameter Tuning
22715, Plotting the image matrix
21794, Linear Discriminant Analysis
14390, We have to map each of the title groups to a numerical value
14454, go to Correlation section corr corr
3795, Concat all data
1575, The following two functions extract the first letter of the Cabin column and its number respectively
11548, How many missing values
4247, Tune
15636, Fare Jointplot
36679, max_df: float in range [0.0, 1.0] or int, default=1.0
26292, Visualizing and Analysis
610, this is somewhat expected since it explains the difference between S and the other ports
31604, Exploratory data analysis
35498, Define Optimizer
40434, Onehot
40659, We now have everything we need for making some predictions
14875, From this data it looks like being a male or being in 3rd class were both not favourable for survival
27602, Fit Model Using All Data
35422, Defining Standard Neural Network
18564, group the Andersons, a family of size 7, by ticket number
18507, I ve read a random image let s view the information inside
10301, I m going to come clean here
28658, LotArea
28655, Condition1 Condition2
27172, Similarly for test dataset
41945, Here s how the generated images look after the 10th 50th 100th and 300th epochs of training
6521, Replace the missing values in train set with median since there are outliers
31007, Data Exploration
3729, Transform the gender feature
7743, Submit
9234, Execution on Testset with Neural Network
20732, a few columns need to be dropped
1259, Recreate training and test sets
41684, Make the submission
14263, Logistic Regression
12967, Embarked Sex Fare and Survived
26673, Numerical features
1583, take a brief look at our variable importance according to our random forest model
41103, Prediction for the next 15 days
24269, Predictive modelling cross validation hyperparameters and ensembling
38404, Just in case check the datasets info
29399, CORRECTIONS RULES FOR FULL SQ AND LIFE SQ APPLY TO TRAIN AND TEST
16978, Logistic regression
35754, choose which method to use: ensemble or stacked
2313, Model Prep Splitting DataFrames by Columns with 2 methods
41449, Latent Dirichlet Allocation
38928, MSSubClass Identifies the type of dwelling involved in the sale
40995, Clean both train and test dataframes
21858, Showing Samples
20502, Submit to Kaggle
409, KNeighbors
10387, Visualizing the relationship between Sale prices and Overall Quality
27742, Check for Class Imbalance
7735, Predictions
24468, Before EDA, let's group the features into categorical and non-categorical based on the number of unique values
35169, Plot the model s performance
31735, The Images
15345, Checking out the accuracy of our predictions
38652, Fare
34741, T SNE applied on Doc2Vec embedding
1141, GridSearchCV evaluating using multiple scorers RepeatedStratifiedKFold and pipeline for preprocessing simultaneously
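A runnable sketch of that combination on a stand-in sklearn dataset (the pipeline steps and grid values are illustrative assumptions):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Preprocessing and model in one pipeline, so scaling happens inside each fold
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
search = GridSearchCV(
    pipe,
    param_grid={"clf__C": [0.1, 1, 10]},
    scoring={"acc": "accuracy", "auc": "roc_auc"},  # multiple scorers at once
    refit="auc",  # which scorer picks the final model
    cv=cv,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```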
7542, No missing values; finally, let's do the type conversion
39140, Predictions
24416, Sale Type
13187, let's start with the Age variable; to address outlier problems I decided to divide passenger ages into 8 groups
19714, Compile the model
23884, check the number of Nulls in this new merged dataset
4467, Dealing with Age Missing values
40095, The model can also lemmatize assign parts of speech find dependencies and otherwise annotate text
30951, Train the model with cross validation
35768, Additional testing
10088, Skewness removal
8605, now that we have sorted out all the null values, we can proceed to create new features
38260, Number of words in a tweet
8072, NA s
37167, Encoding categorical features
35127, This data contains many outliers, but these might have been caused by a surge of customers during a festival or holiday, or by an effective promo
43015, Besides Logistic Regression therefore I used class CurveMetrics
12294, The place of embarkation is obviously connected with the fare and the tickets; from the value counts, S is more probable than the other two options
35095, Statistical Analysis Machine learning
5281, Drop Column Importance
13489, Outlier removal
15201, Correlations
9932, Finally, I decided to use Ridge regression because it performs better in this scenario
32477, We now have everything we need to perform memory based and demographic based CF
43073, Distribution of skew and kurtosis
20951, In Keras we can add the required types of layers through the add method
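A minimal sketch of that pattern (layer sizes are illustrative assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(layers.Dense(64, activation="relu", input_shape=(20,)))
model.add(layers.Dropout(0.3))
model.add(layers.Dense(1, activation="sigmoid"))
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```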
9881, Correlation Between Age Survived
33886, Aggregating Bureau balance features into the Bureau dataset
7144, Random Forest
1256, ML models have trouble recognizing more complex patterns so let s help our models out by creating a few features based on our intuition about the dataset e
36927, Pclass
21332, The remaining part
19149, Making a submission
705, remove the original categorical variables
43008, Check Null
11481, LotFrontage
23380, Data pipeline
16639, Preprocessing
15558, I fill this in with the median fare of the passengers for that
26498, Lastly we set aside data for validation
34028, New features Temp by weather
14336, We could keep the Cabin column in the dataset, but it has a lot of missing values, which may reduce model performance
42212, Next up, the first convolutional layer is added
24543, check number of products by channels
7802, Compare Models
42859, Callbacks list
38788, Data frames with new predictions
41783, CNN model Without Batch normalization
24238, A very first look into the data
24387, delete the columns that have more than 15 missing data
35132, The StoreType, Assortment and Season columns have char or string values; all of these need to be converted to numerical values
22807, Educational Level Below Primary
35812, Item features
18543, Loading libraries
7803, Create and Store Models in Variable
26290, nn model The general methodology to build a Neural Network is to
22611, Random Forest
22513, It takes a while; now it is time to look at what it gives us
3946, Imputing Garage Related Features which are numerical with 0 and None for categorical
31201, Decision Tree Classifier
35139, Using Linear Regression to predict Sales
28183, Let s create a custom tokenizer function using spaCy
13343, SibSp and Parch combining features
10256, Go to Contents Menu
32830, remove the text noises
27241, There are peaks in the average mortality rate trend due to China, Iran, the UK and the Netherlands, which drop down in about 15 days. The rise in Iran reached its maximum on Feb 18; however, this is the same time the outbreak started in Iran. One should be cautious here, as these numbers truly depend on the number of confirmed cases, which itself depends on how many tests were performed during that time. The average mortality rate in Italy and Spain is still rising, to 12%. Let's look at the mortality rate by the end of the training data date
32022, There are 2 null values in the Embarked column of train X
12320, OverallQual 4
1901, XGBoost Model
13139, Survival by Fare and Pclass
26184, Let's log-transform the data so that it becomes more linear
4445, deal with the rest of the missing value
12516, First try to understand the data
4632, Filling in missing values
11276, Another way to engineer features is by extracting data from text columns. Earlier we decided that the Name and Cabin columns weren't useful by themselves, but what if there is some data there we could extract? Take a look at a random sample of rows from those two columns
6568, The Age feature also has missing values; where Age is null, it will be calculated from the mean and standard deviation of Age
4144, let s enjoy the number of algorithms offered by sklearn
10381, Fixing the missing data
20764, let's use LGBM as well
5287, As a final step in our experiments we are going to train a set of LR models with the top features selected by drop column method
34916, Count numerals in tweets
34155, Not as enlightening as I expected The data is too noisy in this plot as it accounts for the variation of each day s sales
40084, Exploration numbers of categories and values for categorical features
9440, The missingno correlation heatmap measures nullity correlation how strongly the presence or absence of one variable affects the presence of another
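A tiny sketch of that heatmap on made-up data: +1 means two columns are always missing together, -1 means one is missing exactly when the other is present.

```python
import missingno as msno
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, np.nan, 3, np.nan],
                   "b": [1, np.nan, 3, np.nan],   # missing together with 'a'
                   "c": [np.nan, 2, np.nan, 4]})  # opposite pattern to 'a'
msno.heatmap(df)
```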
1848, Correlation Between Numeric Features
13446, Age: the young have more chance to survive
40261, Total Basement Surface Area
7977, Modeling
9701, Train test Split
11049, We now implement the data preprocessing and have a look at our processed data
9823, From the cabin number we can extract the first letter, which tells us about the placement of the cabin on the ship
39333, Text features for item categories df
41110, DataType Converting
42062, Using seaborn to display a distplot
28705, The new test dataset is 1459 rows of predictions from the 8 algorithms we decided to use
6636, Making some changes in the titles
40860, Feature Scaling
27243, To do Display Confirmed cases by Population
6454, Pipeline
2988, Lasso regression is a type of linear regression that uses shrinkage
41662, Categorical features
19951, Family size
22390, Plotting the training history to tune parameters
14105, center Boxplot with hue center
8935, Generating Validation Set
12564, First we remove the unwanted features from our data
34008, atemp
8705, Splitting into Training and Validation Sets
15089, CatBoost
12362, Kitchen Variables
29369, DECISION TREE
31917, Train Test Split
5105, Observations
39138, And since softmax is a probability distribution it sums up to 1
3242, Estimating the location for our target variable
23025, The price of some Household category is super expensive like over 100
13967, Embarked Survival probability
27526, Display heatmap of quantitative variables with a numerical variable as dense
38227, Address
12359, Masonry veneer type
583, Simple clustering
16116, Support Vector Machine SVM
29002, Only women who embarked in either Cherbourg or Southampton survived
28800, after the transformations, the p-value of the DF test is well within 5%; hence we can assume stationarity of the series
8846, For the most skewed column let s look at the distribution
22822, And there exists a value of 1
19004, Set up HyperParameter search
22231, We augment our data by taking different rotations of the data we have (Data Augmentation)
21442, Check up Missing Values
40673, Histogram plots of Number of characters
18078, plot some examples of large bounding boxes
8955, Fixing SaleType
41010, All these people have the same Pclass, Fare, Embarked and Ticket number, but two of them are not considered part of any group; this is what we are going to fix
23941, Word count
26885, Score for A3 16423
30716, Reshape image into 3 dimensions height 28px width 28px channels 1
18276, XGBoost
33684, Add Hours Minutes
9676, Tune min child sample weight and min data in leaf
1907, Heatmap
20942, Sample
10807, I think it looks obvious that low fares had less chance to survive
40169, The next step in our time series analysis is to review the Autocorrelation Function and Partial Autocorrelation Function plots
36991, let s identify which products are ordered the most
1870, Importing Libraries and Packages
17545, 2 entries of the Embarked column have null values
18552, Actually Mr Algernon Henry Barkworth was born on 4 June 1864
16049, Survived Count
27459, Remove Stopwords
26795, Adding dimensions for keras
35570, Merging the data with real dates
206, Model and Accuracy
24110, Data Preprocessing
27858, Uncomment below if you want the predictions on the test dataset
848, KNN Regressor
16848, The Sex variable
33520, For severity level 4 I feel that two examples here are difficult to spot on pic and pic
28038, PREPARING DATA FOR USING IN KERAS
17816, The accuracy for the validation is much better than the accuracy for the training set
10586, Visualizing AUC metrics
29404, CHECK STATE
14091, First Submission File
10676, Prediction
33682, Difference Minutes
42765, Embarked
37080, Different Ensemble Methods
14212, Data Dictionary
40695, Now that the data exploration, analysis, visualization and preprocessing have been done, we can move on to the modelling part
42130, Fine tune the complete model
8660, Instructions
28806, We ve correctly identified the order of the simulated process as ARMA 2 2
15626, Free Passage
20640, Lets plot word cloud for real and fake tweets
5160, Support Vector Machine
110, is alone
20209, Model Building
10932, Degree Centrality
3914, Utilities For this categorical feature all records are AllPub except for one NoSeWa and 2 NA
5245, Label Encoding The Categorical Variables
33437, even with topic modeling there are only stopwords; they will be removed for a better visualization of relevant words from the data
19643, Gender ratios by phone brand
13036, Name
460, Submission
19129, Blend
17996, In other cases the family members were not traveling on the same ticket
32705, tokenization
19584, Implement Lags
5286, we are ready to train one more set of LR models that use top features selected by permutation method
19829, James Stein
4061, We can confirm that if we have a Master, then the absolute value of the correlation will be significant, and we have already found a simple way to treat it
38027, Modeling
21526, Multilayer Network
28026, Or, in a way more related to the competition's challenges
16677, Read data into Pandas Dataframe
38823, Load the training and testing data
26461, Extracting Zip file sec
12496, Submission
16550, encode the age column into 4 parts
16637, Visual Exploration
12155, We can set up the same with two pipeline branches, for numerical and categorical data
17775, Go to top
31859, Select best features
4565, Splitting X into X train and X test
5392, To be more precise, we need to understand the meaning of each column
1658, A continuous feature here for a change right skewed with a peak close to zero
8682, FEATURES WITH MISSING VALUES
41052, Number Of Product Department
41851, First Patient empty lung
5111, Predictions
21676, Blender call
10865, Converting the categorical columns into numerical
33659, Define X and y
4643, Since we are working on a supervised ML problem, we should also look at the relationship between the dependent variable and the independent variables. In order to do that, let's add our dependent variable to this dataset
23473, Retraining the decision tree over the whole dataset for submission
25390, Train the model without data augmentation
3605, Missing Data
14920, Statistical Analysis and Feature Selection
23942, Category Name
5425, the missing values for the MasVnr
11302, Blending
7375, Preparing Wikipedia dataset for merge
5303, Handling categorical data
42077, Uninteresting half bathrooms
14776, Name
11321, SibSP
29689, Adam with default lr 1 and different weight decays
13071, Bernoulli Naive Bayes
9146, For basement exposure I just set it to the average value based on both the training and test set where HasBasement is true
236, Preprocessing
43394, that s the same pattern as for the fooling targets
20358, A low difference means the classifier is unsure. There is a Jupyter widget, PredictionsCorrector, which then serves up batches of the most unsure predictions for the human analyst (you) to confirm. It looks like this
6715, Find Continuous Numerical Features
23675, Reshape normalize
36567, Prediction
20168, Case 4 Binary Dimensionality Reduction PCA
4487, Decision Tree
18530, The most important errors are also the most intriguing
28116, Evaluation Functions
28250, Who accompanied the client when applying for the loan, and their repayment counts, are given below
21593, Convert a continuous variable to categorical with cut and qcut
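A quick sketch of the difference, on synthetic fares: pd.cut bins by value ranges (equal width here), pd.qcut bins by quantiles (roughly equal counts per bin).

```python
import numpy as np
import pandas as pd

fares = pd.Series(np.random.default_rng(0).exponential(30, size=100))
print(pd.cut(fares, bins=4).value_counts())  # equal-width bins, uneven counts
print(pd.qcut(fares, q=4).value_counts())    # ~25 observations per bin
```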
29703, You have two options; the simple one is to just take the best model from our grid search and make predictions on the test set
37901, Best Score and Parameters
16843, Feature Engineering
37739, Object types optimization using categoricals
39444, POS CASH balance data
27017, GroupKFold
34623, After the training it is a good practice to visualise how the cost curve evolved as the number of epochs increased
26685, impute missing values
12413, GarageCars GarageArea
35558, Parameters
2428, Decision Tree Regressor Model
11429, Rules of Thumb in Imputation
8420, Check the Dependent Variable SalePrice
35355, We have 28 x 28 = 784 pixels per image
34658, Price distribution
16861, Moving On
11178, The shape values are the number of columns in the PCA x the number of original columns
33657, Stratified KFold
12111, Saving DataFrame for next Steps
22048, build xgboost model using these variables and check the score
36134, Imports
18402, Bigrams
10122, Feature Engineering
42857, Model parameters
21950, Split training and validation set
25691, Medical Facility India Heatmap
2719, Ridge regression
21651, Creating toy df 3 methods
6148, for some gradient boosting machines, let's encode categorical string values to integer ones
24584, Create the Optimizer
5905, Q can k fold cross validation be applied to regression
21098, Visualize data: first, try to visualize some random samples extracted from the data. I'm using different methods that we can use to visualize data in tabular form
2177, Ups
5423, Alley
38263, also visualize the wordcloud
28198, Part of Speech Tagging
32649, Specific numerical features have a discrete nature
38224, there is no significant deviation in incident frequency throughout the week
29769, visualize the images from the validation set that were incorrectly classified
21579, Trick 86bis Named aggregations on multiple columns avoids multiindex
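A minimal sketch of that trick on made-up data: named aggregation keeps flat column names instead of producing a MultiIndex.

```python
import pandas as pd

df = pd.DataFrame({"shop": [1, 1, 2, 2], "price": [10, 20, 5, 15]})
out = df.groupby("shop").agg(
    mean_price=("price", "mean"),
    max_price=("price", "max"),
)
print(out.columns.tolist())  # plain ['mean_price', 'max_price'], no MultiIndex
```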
35487, Data shape
899, The Classifiers with best performance are Decision Tree Random Forest and SVC
22845, First let's insert the date_block_num feature for the test set, using the insert method of pandas to place this new column at a specific index
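A sketch of that call (toy rows; the value 34 follows the competition's convention that the test month is the next date_block_num, which is an assumption here):

```python
import pandas as pd

test = pd.DataFrame({"shop_id": [5, 5], "item_id": [5037, 5320]})
# insert() places the column at an explicit position instead of appending it
test.insert(loc=0, column="date_block_num", value=34)
print(test.columns.tolist())  # ['date_block_num', 'shop_id', 'item_id']
```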
11855, We right our wrongs move forward and repeat this process again and again
25011, Revenue
28518, TotalBsmtSF
16307, Below I have found how many males and females there are in each Pclass, then plotted a bar diagram with that information, and found that there are more males among the 3rd Pclass passengers
5405, And now let s dig deeper with whether these people survived or not
17751, I m setting the random state variable to prevent random fluctuations appearing significant
18214, Now we assemble our neural network
672, For each of these various classifiers we can have a closer look to improve their performance and understand their output
7720, Data Preprocessing and Cleaning
12531, Creating different Models
38076, Test prediction and submission
42205, Predicting Price
23307, Final Model
22414, The next columns with missing data I ll look at are features which are just a boolean indicator as to whether or not that product was owned that month
19261, Remove unwanted columns
16006, Removing remaining features
3831, the describe function is used for the details of all statistics
38641, Data Assessing
4863, Deleting the 2 outliers in bottom right
20958, Import Training data as Numpy array
40919, Prepare data for macro model
28177, Named Entity Recognition NER
34090, Combined Features
18275, LINEAR SVM WITH HYPERPARAMETER TUNING
11707, Now, after checking the data, let's go on to cleaning
22291, Building the graph
3991, Select columns with high correlation to drop
42013, Creating a new column
41174, callback Learning rate finder
4430, Target Variable
1989, XGBoost Classifier
32895, Now that we have our similarity matrix, let's calculate the pair match score
5078, Remember the outliers Dean De Cock warned us about Removing these now
34395, We have far too many points to plot so I ll try a different approach
23685, Sample digits
14981, As there is just 1 missing value of Fare in Test data set we can fill it using Mean or Median
21812, Prices
2103, Or use this plot to just investigate further the features we have analyzed before
3942, Find the total and percentage of missing values in dataset
27236, Train the model
5297, we can use sklearn.feature_selection.RFECV to eliminate the redundant features
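A runnable sketch on a stand-in sklearn dataset: RFECV recursively drops the weakest features and keeps the count that maximises the cross-validated score.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
selector = RFECV(LogisticRegression(max_iter=5000),
                 step=1, cv=5, scoring="accuracy")
selector.fit(X, y)
print("optimal number of features:", selector.n_features_)
print("kept mask:", selector.support_)  # boolean mask over the columns
```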
7549, Decision Tree
40197, Build CNN Model
22792, Data Fetching
9911, Get Model Score from Imputation
21535, Logistic Regression
32209, Add lag values of item cnt month for month item id
15161, Nowww Time For Some Plots Charts
8969, Cabin
20753, PoolQC column
24764, As well as scaling features again for the Kernel ridge regression I ve defined a few more parameters within this algorithm
38523, Preprocessing data for Bert
29746, Read the data
32962, Highly correlated features
6132, This house was built in 1940
5153, Feature Selection
24760, In addition to the Lasso model, I use a Pipeline to scale features
28205, We now check if there is some data missing
20782, Checking for Missing Values
14235, Find here an ensemble of 20 benchmark ML models, including tree classifiers, GBDTs, SVCs, Naive Bayes and LDA classifiers
15413, let s have a look if passenger class is a good predictor of survival
1987, Decision Tree Classification
8555, Describing the Data
1807, Observation
8715, Use no Age data at all
34098, Trends of Top 5 covid 19 affected countries and India
11382, Triggering the Engine
7221, Replace missing with most common value
827, Dropping all columns with weak correlation to SalePrice
8892, Actual vs Predicted
4276, GarageYrBlt
2915, I delete these features because I created the dummy variables
27302, How does individual product ownership evolve over time
37043, Test Time Augmentation
15192, Modeling
32903, Shape of the matrix
20489, Cash loan purpose
23875, This looks nice with some outliers at both the ends
35863, We have hit
35654, use the stacking classifier's predictions as the submission
3000, Modeling
19976, MLP with Sigmoid activation and SGDOptimizer
28528, 2ndFlrSF
20465, Total income distribution
5155, Identify the selected variables
10719, Believe me this is the most interesting thing i found
35635, build an autoencoder in Keras; it will have 3 hidden layers with 64, 2 and 64 units respectively
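A sketch of that 64-2-64 architecture with the Keras functional API (the input dimension of 784 is an assumption for illustration):

```python
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784  # assumed, e.g. flattened 28x28 images
inputs = keras.Input(shape=(input_dim,))
h = layers.Dense(64, activation="relu")(inputs)
code = layers.Dense(2, activation="relu")(h)   # 2-unit bottleneck
h = layers.Dense(64, activation="relu")(code)
outputs = layers.Dense(input_dim, activation="sigmoid")(h)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.summary()
```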
29449, In natural language processing, one of the best methods for text classification is using pretrained text
42268, bedrooms is a categorical column
22843, we need to merge our two dataframes
25395, Confusion matrix
13424, Stacking Ensemble
37020, Zoom on the second level
26342, Predictions and Submission
27457, Replace Negations with Antonyms
42074, Detecting the best weights for blending
12207, A pipeline for categorical features
11748, we have filled in all of the obvious missing values in our dataset as outlined by the data description
1631, Filling NAs and converting features
16567, Observation
28289, prepare batch generators for train part
6680, Extreme Gradient Boosting
35133, Are the promos effective
14751, Embarked Missing Values
25432, Final Dataset
28310, Examine application test data set
18988, Load the word embedding model
42752, In order to create our embedding model, we need to have a look at the cardinality of the categorical features
13197, let's drop the original Parch and SibSp variables
6662, Logistic Regression
22759, MODEL
15498, Looks like the main ones are Master Miss Mr Mrs
10526, Although their type is integer, we will treat them as categorical features in the next section, EDA on Categorical Features
14287, Confusion Matrix
24528, Most of the customers used only one product which is the current account
42822, Training Function
35610, Downloading
17964, pandas
21202, More Data
14590, Gender
28027, X Y Z
16583, Label Encoding Categorical Data
36295, Support Vector Machine SVM
5838, Let's now understand how the housing price (SalePrice) is distributed
12238, Dictionaries
36200, we ll use the padded token IDs to create the training and validation dataloaders
38841, Plotting by segments has revealed different patterns
2819, rf_feat_importance uses the feature_importances_ attribute of the RandomForestRegressor to return a dataframe with the columns and their importance in descending order
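A sketch of such a helper, assuming a fitted RandomForestRegressor m and the training DataFrame df (names hypothetical, matching the description above):

```python
import pandas as pd

def rf_feat_importance(m, df):
    """Columns paired with the fitted forest's importances, sorted descending."""
    return (pd.DataFrame({"cols": df.columns, "imp": m.feature_importances_})
              .sort_values("imp", ascending=False))
```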
26429, We already introduced these features in section 4
32159, In fact we can just do nothing
37884, Top 10 Feature Importance Positive and Negative Role
7697, ElasticNet
26015, I personally consider dealing with missing values very important, as it can significantly affect the size of the data from the ML model's perspective
36367, Final submission file
905, Linearity and Outliers Treatment
2126, While the performance on our test set is
12117, Submission
35106, we can split it into a training and validation set
15815, the survival percentage of women is higher than that of men
11999, let s check the feature importance for ridge regression
11979, create one new feature, the age of the house, for simplicity
27546, Go to TOC
16475, Data Visualisation of Entire Data Frame Training and Test Set
41795, Create our CSV file and submit to competition
2349, Build Full Model using best parameters
1645, Explore features properties
11514, Support Vector Regressor
32544, Binary Columns
10113, Here it s clear that people with more number of siblings spouse onboard had a lesser chance of survival
13267, Statement: everybody from the class cabins who embarked in Southampton (S) died
17931, Sex
1112, Feature importance selecting the optimal features in the model
2978, Missing data
36794, If we feed a word to the synsets method, the return value will be the class to which it belongs. For example, if we call the method on motorcycle
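A sketch of that call using NLTK's WordNet interface (requires the wordnet corpus download; the printed synsets are illustrative):

```python
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

syns = wn.synsets("motorcycle")
print(syns)                   # e.g. [Synset('motorcycle.n.01'), ...]
print(syns[0].lemma_names())  # lemmas belonging to the first synset
```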
13375, Embarked
15919, Survival of the Group
22392, iv Plot the test image which is being predicted
11080, Visualise the correlation between classifiers
5483, check missing values in our independent and dependent variables
36084, Evaluation
30102, Data Augmentation
3808, SVR
10496, look at Test data
41024, Non WCG passengers
40838, Using the XGBoost Regressor for predictions
873, Fare continuous numerical to 12 bins
17470, Ensemble Model
13186, Clearly, passengers from all classes could survive, but almost all passengers that died were from the second and third classes
14056, Loading Data
14882, Sex
14651, Create a column Cabin Status 0 where Cabin is NotAvailable and 1 otherwise
34024, Temp Atemp
14824, Small families have more chance to survive than large families
21677, Dimensionality reduction
812, List of features with missing values
10670, Plot the model
33646, Dealing with highly skewed variables
26680, EXT SOURCE 3
38558, Exploratory Data Analysis
6597, look at the first 3 corresponding target variables
1787, Describe the Datasets
12202, A pipeline for numeric features
7078, the PassengerId of the female Dr is 797
232, Model and Accuracy
33256, Encode
31567, we can use the DataLoader as an iterator by using the iter function
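A minimal sketch with random tensors: wrapping the DataLoader with iter() and pulling one batch with next() is handy for a quick shape check.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
loader = DataLoader(ds, batch_size=16, shuffle=True)

xb, yb = next(iter(loader))  # grab a single batch
print(xb.shape, yb.shape)    # torch.Size([16, 3]) torch.Size([16])
```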
28089, Create submission file
36786, Not sure what DT, JJ or any other tag is? Just try this in your Python shell
43353, Classification Matrix
25511, embed the words into vectors of 8 dimensions
5325, Display density of values with heatmap over latitude and longitude
24001, Here we compare the various models that we just created
31316, Merging Data
26538, visualize the training process plot attached separately in comments
24175, Pre-Processing
37699, A is 10 what else
33293, Deck Finder
35601, It is similar isn t it
40394, The code below is commented out as it was already defined earlier
43196, LGBM training
4598, SUMMARY
21413, Featuretools is an open-source Python library for automatically creating features out of a set of related tables, using a technique called deep feature synthesis. Automated feature engineering, like many topics in machine learning, is a complex subject built upon a foundation of simpler ideas. By going through these ideas one at a time, we can build up our understanding of featuretools, which will later allow us to get the most out of it
15134, Again, they behaved: ladies and the elderly first, even though they were in a desperate situation
40406, Work in progress
10960, Missing values
13519, Balancing Data
12096, Create columns to mark originally missed values
4153, Replacement by Arbitrary Value on Titanic dataset
2392, Examine each step of a Pipeline
16354, Look at feature importance
6058, Utilities is irrelevant; I can drop it
27945, Rankdata
29061, Kirsch filter
17786, There are male only titles Capt Col Don Jonkheer Major Master Mr Rev and Sir
33252, Imputation
21629, Convert from UTC to another timezone
21405, Building the Convolution Neural Network
27388, Tuning the number of leaves
11259, The final model will be created and trained on the predictions of the models on the validation data
6787, more than 50% of 1st class passengers embarked from S
19556, Lets predict on test dataset
28523, MasVnrArea
7944, have a look at the architecture and the number of parameters
22682, score
18760, Nodule Candidate Region of Interest Generation
33237, Prepare X and y from the dataset
33305, ROC S of the Models
17, RandomizedSearchCV Lasso
4419, Plotting the cross validation score
9436, Seaborn s jointplot displays a relationship between 2 variables as well as 1D profiles in the margins
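A quick sketch of such a plot; the tips dataset is just a stand-in here:

```python
import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")  # stand-in dataset for illustration
# Scatter of the two variables plus their 1D marginal distributions
sns.jointplot(data=tips, x="total_bill", y="tip", kind="scatter")
plt.show()
```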
2086, We implement a cleaning procedure that follows the documentation shortly but before even creating the evaluation environment we want to remove 2 outliers that the documentation recommends to remove
41394, NAME CONTRACT TYPE
4711, Combining predictions and submission
31114, delete columns with more than 20 missing
6324, Ensemble Voting
23929, This is working surprisingly well
23369, Some of the Incorrectly Predicted Classes
26677, Education Type
10109, Let's convert our predictions into a submission CSV
37919, Data Cleaning and transformation of test data is done using proper analysis with respect to other co factor variables
7584, The two new features are highly correlated
6346, One hot encoding
37437, Readability features
8541, KAGGLE SUBMISSION WITH TEST DATA
11074, Voting
17769, Of the total female passengers, 74% survived
5993, Correlation Matrix
8562, Finding the Minimum Maximum Sum Average and Count
18950, Display time range for labels
38907, Modelling the Tabular Data
22530, Pclass vs Survived
34603, PyTorch s style data loader defintion
21454, All Feature
22179, BERT Base Uncased span
28498, try SHAP
4774, Support Vector Machine
32971, fill the missing values in Embarked with the most frequent value in train set
4747, Categorical Features
240, Library and Data
21062, Prediction on Test Set
20792, We also have useless features so we also decide to drop the features below
6469, Evaluate non ensemble Baseline methods
42018, Creating a new Dataframe with certain columns and certain rows
41926, Feature analysis
23258, Parents Children have higher chance of Survival
37111, Sample code for regression problem
25522, How d We Do
6935, Corrplot
5044, How are numerical features correlated to SalePrice
138, Using best estimator from grid search using KNN
36236, Soft voting can be easily performed using a VotingClassifier, which retrains the model prototypes we specified before
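A runnable sketch on a stand-in dataset (base models are illustrative assumptions): with voting="soft", the classifier averages the predicted probabilities of the base models, so every estimator must support predict_proba.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=5000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
    voting="soft",  # averages predict_proba across base models
)
print(cross_val_score(vote, X, y, cv=5).mean())
```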
40141, optional Undeploy model
2240, Transforming our Data
12557, SVM
11746, There are also some numeric variables, rather than categorical ones, describing things that do not exist in certain houses
30881, Group data by month to visualize variance per month
3464, Final preparations for model building
5609, Kruskal-Wallis Test
13349, Model Submission
6586, we can evaluate our model to choose the best one for our problem
6932, Numerical data
15067, Age Group
18268, Basic Feature Extraction before cleaning
12240, Loops
42355, text cleaning
26649, Since this is a sample from page views
41534, Here the first principal component is responsible for all the variation
36793, Background
28285, Way to transform dataframe rows to torchtext examples
10105, check NULL values in Test dataset
18932, Relationship between variables with respect to time
10515, Now Apply Models
8367, Drop broaden variables
42728, find the float features which are highly correlated with target
8220, Integer-type variables: the missing values will be filled with median values
31651, Submission File
8761, Survival By Class
24958, Embarked in Train Test set
32994, Plot PCA reduced picture
11420, Use Case 12 India Blood Bank Visualisation
24432, Fitting the model
7908, The target column is skewed; therefore we need to transform it into a more normal distribution, since linear models perform better that way
22097, Average Loss VS Number of Epochs Graph
2379, Imputing missing values for categorical values
20072, Insights
15449, ML Predictions
4252, Nominal Features
6274, I locate each of the bins using
6581, Age Multiplied with Pclass
27287, Load populations of each country
21368, Scrap
40198, Accuracy and Loss Graphs
32080, Figure 4 Distribution of mode proportions across categorical variables
5564, Congratulations now we don t have any missing values in our data
31831, We ll use to resample the minority class
19061, We can look at the top losses
14912, of the Cabin data are null for both dataset
32577, Complete Bayesian Domain
3265, visualize relationship of features with SalePrice using Seaborn s Heatmap
20712, Street column
9000, Location Location Location
2261, First I drop PassengerId from the train set because it does not contribute to a persons survival probability
34419, First let's check tweets indicating real disasters
24186, Now let us deal with special characters
7387, To understand the issue I looked for the WikiId 1128 in the final DataFrame with all matched passengers merg all
19857, The upper boundary for Age is 73-74 years. The lower boundary is meaningless, as there can't be a negative age; this value could be generated due to the lack of normality of the data
26330, Predict on test data
22641, Model 5 Random Forest
36415, Countplot Discrete To find features with single values and remove them
6048, Type Quality and Condition
20158, Converting a 1D array to a 2D array using reshape (see the numpy reshape documentation) to plot and view grayscale images
29441, Keyword
35916, Plot the loss and accuracy curves
6027, I use the mode for categorical features and the median for numeric features, but you can change it to whatever you want
25954, Orders csv
11991, RMSLE
24026, After preprocessing is done I combine everything into one dataframe
37014, How many items categories do we have
32395, These functions below for reading labeled tfrecords
12338, Garage
25837, Looks more url counts in disaster Tweets
4726, At first i want to drop Id column
24391, Feature Engineering
24924, Here is an illustration of the process
30652, FATALITY
36662, From the scikit-learn documentation: scikit-learn.org/stable/modules/feature_extraction.html#text-feature-extraction
37484, K nearest neighbors
20070, Insights
33490, Germany
27936, Inference
402, XGBoost
13266, Statement All boys Master from the classes survived
35507, PREDICT TEST DATA
13113, SVM with RBF kernel
34973, accurate submission
4507, Zooming a little
43286, The R² is a metric that is already built in when we use the parameter oob_score=True; to access it, we just use the command below
30626, Parch is a numerical variable
35389, Define Model Architecture
2553, Using the title Master to create a column VIP
38089, For building Neural network I am using python keras library tensorflow backend
4552, GarageType GarageFinish GarageQual and GarageCond Replacing missing data with None as there may be no garage in the house
12149, We can tune the scoring by providing several parameters
33327, Test data preparation
10530, But this new TotalPorchSF feature is not linear with SalePrice, so we will not use it
11448, First I thought we have to delete the Cabin variable but then I found something interesting
1240, Validating and training each model
29553, PyTorch
2817, resplite the training and test set
1682, SpiderMan! Ah, that is some useful info: the OverallQual feature has 10 categories
37616, Random Forest Regressor
10194, Analyzing most important features
28994, find out the relationship between categorical variable and dependent feature SalesPrice
7687, Last thing to do before Machine Learning is to log transform the target as well as we did with the skewed features
4363, If we use this new feature we must remove the BsmtFinSF1 and BsmtFinSF2 features, as we have already used them
31098, TO MAKE CSV FILE FOR SUBMISSION
5318, Garage Area and Garage Cars as well as TotRms AbvGrd and Gr Liv Area are highly correlated
927, Optimize KernelRidge
13028, Ticket
1990, ANN
9417, GridSearch
33805, Several of the new variables have a greater correlation with the target than the original features
16366, Predicting Ages based on grouping by Sex and Pclass
32023, The most frequent class is S so we can fill null values with S
19615, Redundant features
8450, Check for any correlations between features
17886, Lets visualize the relationship between the target variable and independent variables
18596, Now that we have a good final layer trained, we can try fine-tuning the other layers
5242, Creating New Features
5123, Create baseline model
13245, Missing value fill
21845, If we unfold the RNN
3433, The passenger was a 3rd class ticket holder
37733, Predicting the probability of every customer is unhappy
35866, I will be using a sequential feed-forward model, i.e. the data flows from left to right; there are no feedback connections
28068, And drop the unwanted columns
37339, I did not find a voting function that performs a direct vote on the forecast (I do not know if one exists), so I wrote one of my own
29819, Embedded Matrix Keras Embedding layer
38087, Cross-validation is a technique used to measure accuracy and to visualize overfitting
15987, Logistic Regression
39289, Feature analysis
11400, And finally, in this simple kernel we won't investigate a passenger's name nor their PassengerId
13441, Fare when Fare value increase
3611, Looking at the relationships between qualitative variables and Sale Price
17523, Pseudo Labeling
2223, Neighborhoods
40877, Looks like GrLivArea is the the most positively correlated factor for SalePrice while MSZoning C all is the most inversely correlated factors for SalePrice for all the three models
16691, start by dropping unwanted features
30356, Predict World Without China Data
15744, cross validation
20282, Pclass
27634, The top cross correlated entries are
19978, For ReLU layers: if we sample weights from a normal distribution N(0, 2/n), we satisfy this condition (He initialization)
3770, Age
33710, FEATURE Survived
22326, Removing Additional Spaces
13505, Model
42792, Meta Features
7551, Random Forest
36222, Comparing the MAE of both models we can say that the XGBRegressor model works better we use this model for our final predictions
2948, Train Model 1
40190, Using my notebook
26357, Data Preprocessing for Model
40010, Age distributions
35867, Initiailising parameters
37800, Predictions from our Model
13118, Evaluation Metrics
4964, We can discard PassengerId since we assume the passengers are sorted at random
32238, now let s make some predictions with the test set
19654, Make a submission
36838, Utility functions
28875, Normlize
23352, Learning Curve
88, Gender and Survived
16923, ExtraTreeClassifier
19329, Encoding train labels
2932, Ensemble Feature Importances
32984, Support vector regression
2723, We obtain a score of 0
9895, sum up SibSp and Parch to get the family size
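A minimal pandas sketch of this feature (toy data; the +1 variant, which counts the passenger themselves, is a common choice but an assumption here):

    import pandas as pd

    df = pd.DataFrame({"SibSp": [1, 0, 3], "Parch": [0, 0, 2]})  # toy data

    # family size = siblings/spouses + parents/children (+1 for the passenger)
    df["FamilySize"] = df["SibSp"] + df["Parch"] + 1
    df["IsAlone"] = (df["FamilySize"] == 1).astype(int)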
1570, Embarked
30753, Fix max samples leaf
43316, we shall apply the algorithm and check the accuracy
10193, Looking for most relevant features
12058, Categorical Encoding
30825, MAP calculation
38534, Set the optimizer and annealer
27117, PoolQC MiscFeature Alley and Fence are the categorical features with more than 1000 missing values in the dataset
14484, The 25th percentile is 7, the 50th is 14, the 75th is 31, and the max is 512
36360, Probabilities can be identical for several values as pointed out by Commander
15925, Numerical values
33751, Appendix
14728, Feature Selection
12717, Round 1 complete and it is the Gradient Boosting algorithms that come out top
15093, ROC Curve and AUC Score
18200, a quick check if demand distribution changes week to week
9397, Objective function
29472, Analysing extracted features
26812, Baseline model
9306, The wrong way of handling dummies
3439, Aside from those special titles we have four categories Master Mr Miss and Mrs
7948, No sign of overfitting in the evolution of the validation vs. training loss
40955, Producing the Submission file
30710, Plotting a few images with and without augmentations
794, Our dataset is now much cleaner than before with only numerical values and potentially meaningful features
38081, Train and Test Matrices
14857, Modeling
41403, this means that only 278 loans have some other type
11661, Random forest
28538, I divide the dataset into 2 parts
26565, let s try to rotate the competition dataset systematically
18106, Since the test set is not that large, we will not be using a generator for making the final predictions on the test set
33787, Aligning Training and Testing Data
30569, We need to create new names for each of these columns
7136, Name Title
5499, Understanding the Survival Nature of Titanic
23734, Family Size Feature
24260, Visualising updated dataset
22664, Target Distribution
13712, More than 77% of the values in Cabin are missing. Since it is impossible to replace so many missing values without introducing errors, we remove the Cabin feature
10850, Split data into two parts for training and testing
3811, Stacked Models
36137, Preparing the data for Modeling
29009, Plotting the graph
26295, Labels are 10 digits numbers from 0 to 9
30577, First we one hot encode a dataframe with only the categorical columns
43056, The next 100 values are displayed in the following cell; press Output to display the plots
4567, Predicting test file data
6306, Multi Layer Perceptron
34909, Link Flag
13614, we apply the target map to the Honorifics feature
19669, We can calculate the number of top 100 features that were made by featuretools
12620, Assigning datatypes
9247, Percentage of Nulls calculation
28356, Analysis Based on CREDIT ACTIVE CREDIT CURRENCY CREDIT TYPE
31001, create additional variables which are simplification of some of the other variables
9582, Printing the version of the Python Modules used
33724, Mean Encoding feature
22437, Jittering with stripplot
13108, Decision Tree
9183, Sum of years since remodeling and built
28075, Linear model
18199, lets aggregate by week and short name now
13401, Feature Importance with Random Forest model
17712, Lets create train and test dataset and create holdout set for validation
16248, Library Settings
11173, Try dropping the lowest correlating columns
8898, RidgeCV
29983, Validation dataset prediction
33147, As expected, some of them are quite similar to specific digit archetypes, while others still contain generic and undefined shapes, either because they lie closer to a border region between different categorical clusters or simply because they need more training
1426, KNeighbors Classifier
18455, Set up the model
35413, Attribute Geographical information latitude longitude
40252, Total Basement Surface Area
12208, The pipeline for categorical features will then be:
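A sketch of what such a pipeline could look like with scikit-learn (the step names and strategies are illustrative):

    from sklearn.pipeline import Pipeline
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import OneHotEncoder

    # impute the most frequent category, then one-hot encode the result
    categorical_pipeline = Pipeline(steps=[
        ("imputer", SimpleImputer(strategy="most_frequent")),
        ("onehot", OneHotEncoder(handle_unknown="ignore")),
    ])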
4920, Box Cox Transformation on Skewed Features
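A minimal sketch of how this is often done with scipy's boxcox1p (the 0.75 skew threshold and lambda of 0.15 are common choices, assumed here; toy data):

    import pandas as pd
    from scipy.stats import skew
    from scipy.special import boxcox1p

    df = pd.DataFrame({"GrLivArea": [856.0, 1262.0, 1786.0, 4676.0]})  # toy data

    # transform only the features whose absolute skewness exceeds the threshold
    skewness = df.apply(lambda s: skew(s.dropna())).abs()
    for col in skewness[skewness > 0.75].index:
        df[col] = boxcox1p(df[col], 0.15)  # boxcox1p(x, lmbda) = Box-Cox of (1 + x)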
24679, Training
2921, Logistic Regression
31192, Grid Search on Logistic Regression
35626, Experment 2
18187, ROC curve
5505, Creating a new feature based on Age
22350, Support Vector Machine
22471, Cross correlation plot
10371, Outliers
10466, We can also estimate the PDF smoothly by convolving each datapoint with a kernel function via the Seaborn kdeplot method
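A minimal sketch of the kdeplot call (toy sample; seaborn's default kernel is Gaussian):

    import numpy as np
    import seaborn as sns
    import matplotlib.pyplot as plt

    data = np.random.normal(loc=0.0, scale=1.0, size=500)  # toy sample

    # each datapoint is convolved with a kernel, yielding a smooth PDF estimate
    sns.kdeplot(data, fill=True)
    plt.show()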
2093, we removed the skewness with that logarithm transformation in the previous section
9694, check missing values in numeric columns
31719, Comparing the models
1885, We can combine SibSp and Parch into one synthetic feature called family size which indicates the total number of family members on board for each member
9659, Fixing skewness: the skew function returns the unbiased skew over the requested axis, normalized by N. Skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean
25301, From these 2 binary morphology tests it is clear
23728, Almost 65% of the travellers are male and 35% are female
40482, Decision Tree
22357, BinaryEncoder
32472, Residual Analysis
8429, Masonry veneer
22008, Run the next code cell to get the MAE for this approach
41954, Remove numbers
10980, We notice there are two points far from the regression line; these two may affect the study and mislead the predictions, so the solution here is to remove them
29032, Save target separately for training data
15578, we have only seven titles
26239, Generating Pseudo Labels
10808, And now we are ready to start a prediction
6178, The distribution of age among the passengers and their count for particular number of age
8439, Back to the Past: a Garage Year Built of 2207
35815, Basic lag features
23041, Calendar Visualization
2973, Low Range SalePrice less than 143000
12668, I encountered an error when I attempted to predict classes for the test set because the Fare column contained empty values, i.e. NaN. This was simply resolved by adding an extra line of code to the existing pre-process function in the previous cell
33152, Evaluate on validation set
2053, XGBoost Model
36973, Summary Stats
13332, Name extracting information from this feature and converting it to numerical values div
30468, You can access a part of a pipeline using Python slicing
8390, importing necessary libraries
987, Set up our dataset preprocessing
37664, RAM Data augmentation
2930, The learning curve is a plot of prediction accuracy/error vs. the training set size, i.e. how much better the model gets at predicting the target as you increase the number of instances used to train it
4028, Correlation between the variables
37067, the tactic is to impute missing values of Age with the median age of similar rows according to Title and Pclass
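A minimal sketch of this tactic with a pandas groupby-transform (toy data, illustrative column names):

    import pandas as pd

    df = pd.DataFrame({
        "Title": ["Mr", "Mr", "Miss", "Miss"],
        "Pclass": [3, 3, 1, 1],
        "Age": [22.0, None, 30.0, None],
    })  # toy data

    # replace each missing Age with the median Age of rows sharing Title and Pclass
    df["Age"] = df.groupby(["Title", "Pclass"])["Age"].transform(
        lambda s: s.fillna(s.median())
    )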
17755, I ll tweak certain parameters one by one and repeat this process looking for an increase in mean validation score
11061, Pclass versus fare
32213, Add lag values for item cnt month for month city
24774, Training Data
39226, Brute force approach
2554, Exploring Fare and creating categories
29601, Defining Necessary functions to Train the model
10412, Check for skew in the sales price vs transform with log
35936, Name
18479, Now that we are done with clearing missing values, let's merge the two datasets
23894, YearBuilt
20387, Models Bulding
34836, Analysing and processing Categorical feature
36100, Load embeddings
35121, Model
18131, Ridge Regression
35416, Inspect your predictions and actual values from validation data
3835, scatter plot
6890, To look at the correlation between passenger class and survival statistics I would plot a countplot
35873, Checking validation loss and acc
25194, split our data into training and validation set and we are going to use the train test split function of sklearn library for this step
34103, Counts over the time
35513, Other features were created as categorical variables
28965, analyse the continuous values with data visualisation to understand the data distribution
6855, Correlation Between The Features
40335, We ll use LightGBM model boosting due to its natural strengths
9303, Lasso experiment regularization
37562, 2141 feature columns
39974, Submission
5454, Since every tree is different, the predictions for the same point vary as well
38481, Display some examples font
24367, It seems the variables are missing in groups
8714, Missing at Random
5540, Create basic SVC model
12179, Filling the Embarked feature missing values
20436, Model Build Train Predict Submit
29908, Adding Batch Normalization
16363, Lady, the Countess, Mme, Mrs, Ms, Mlle, Miss, Jonkheer, Don, Capt, Major, Col, Rev, Rare, Sir, Mr
10661, Interpretation
27227, Adam Optimizer
34104, Worst hit states in tree plot
35890, Imports
11853, One Hot Encoding
12000, making predictions on test data
24558, Products use occurencies by age
27206, And finally we can implement our models. I applied all of them here at the same time and sorted the accuracy scores of the models in a dataframe; you can implement them one by one
3184, Quite a lot. I just drop everything else, create dummies (because these models require that), and obtain the following
3972, Stacking
23747, Make Predictions
4079, Like all the pretty girls SalePrice enjoys OverallQual
24512, Currently the sampler is a random sampler
13845, Missing Data
29956, Item Description
5688, Drop columns that are not required
36031, Build ensembler and make CV scores
21765, A few values in ind_actividad_cliente are missing
28871, Inverse Difference transform
23186, RF (749) makes exactly the same correct predictions (true positives + true negatives) as GBC (749); hence RF and GBC have exactly the same accuracy score, as we saw when we calculated both models' accuracy scores
38028, separate input variables and target variable
38996, Image on the left Read and Resized Image
15241, Embarked
32573, We can visualize the learning rate by drawing 10000 samples from the distribution
1206, LightGBM
10699, Testing set
20047, there are outliers from data
7465, Using cross validation for more robust error measurement
28505, Compiling and Fitting the model
36880, Perceptron
3534, FactorPlot FirePlaceQC vs SalePrice
18709, create a new learner with data cleaned
24130, It's quite evident that some words occur very rarely in tweets, so we can remove these words from our corpus to decrease the dimension of the Bag-of-Words model
14106, center Pair Plot center
36384, Predictions
42736, A client can have several loans, so merging with the bureau data can explode the rows of application_train
13726, We know which columns to drop We drop them without further analysis
29364, LOGISTIC REGRESSION
8921, Preserve original train and test dataframe
33451, know try to identify best model
19561, Hydra config
9724, who is the winner
14272, Confusion Matrix
20597, Feature Relationships
23363, We perform grayscale normalization to reduce the effect of illumination differences; moreover, the CNN converges faster on [0,1] data than on [0,255] data
32347, Text lengths
43007, Data cleanning replace strange value in columns
35348, The linear model gives approx
15460, Ticket Number
17392, Pclass wise Survival probability
40651, Method 1 train on full and predict on test
35104, Resize Images
33671, Day
253, Model and Accuracy
9727, It may be useful to review the other variables in the context of the relationship between LotFrontage and sqrt(LotArea) in a scatterplot
36024, Model selection
30625, SibSp is a numerical variable
38056, Submission creating
21129, The situation is interesting
35803, Final stacked model
9273, MODEL FITTING
32429, Bonuses
20650, A standard model for document classification is to use an Embedding layer as input, followed by a one-dimensional convolutional neural network, a pooling layer, and then a prediction output layer
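A sketch of that architecture in Keras (the vocabulary size, filter counts and binary output are illustrative assumptions):

    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

    vocab_size = 10000  # hypothetical vocabulary size

    model = Sequential([
        Embedding(input_dim=vocab_size, output_dim=64),          # word embeddings as input
        Conv1D(filters=128, kernel_size=5, activation="relu"),   # 1D convolution
        GlobalMaxPooling1D(),                                    # pooling layer
        Dense(1, activation="sigmoid"),                          # prediction output layer
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])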
6888, Percentages of who survived: the men's survival is tragically a lot lower than the women's
30767, Comparing base learners
37486, XGBoost
29132, Parameter tuning
7946, We define also a callback to check the model after every epoch with ModelCheckpoint
3374, And next give it the path to where the relevant data is in GCS and import your data
11702, Pytorch Source Code
18253, Data generator
36565, Stack Models
36579, App usage by age and gender
19396, Some helpers for visualization
20602, Check the Titles that were extracted
12883, It's a little clearer now that if you were alone or in a family greater than 4, your chances of survival were lower
5095, Once the entity set is created, it is possible to generate new features using so-called feature primitives. A feature primitive is an operation applied to data to create a new feature; simple calculations can be stacked on top of each other to create complex features. Feature primitives fall into two categories
10583, Visualizing AUC metrics
19057, As the dataset is huge we can test the model by just training with 100 samples from the train dataset
20962, Feature Scaling
37541, Here we are using SGD optimizer for both the models just the syntax is different
27483, We train once with a smaller learning rate to ensure convergence
14347, Machine Learning k Nearest Neighbors
5116, Variable Types
30687, Import the test.csv file and assign a unique name, test_hex, to the H2OFrame
22933, Age is another simple feature to handle, but we can definitely derive some special features from it
33678, Difference Year
23515, There are 2 elements in the class
3209, Simple feature engineering
18012, Test set
23679, Decoder network
1312, Imputing missing values
43260, Separating the DataFrames
26996, Not a lot of contractions are known FastText knows none
38415, Are boosting machines better than simple ones
2135, target encoding always helps
1340, we iterate over Sex and Pclass to calculate guessed values of Age for the six combinations
35668, Removing outliers prevent our models performance from being affected by extreme values
26014, Well said. Everyone likes to work with data when it is clean and no painstaking effort is needed to clean and transform it
5655, As about 20% of Age values are NaN, instead of just filling them with the mean (or a mean based on their age group), we use GradientBoostingRegressor and LinearRegression to fill the missing values
35705, Then add this column from the previous investigation to the dataset: y = curve_fit_gla[2] + curve_fit_gla[1]*x_data + curve_fit_gla[0]*x_data**2
37320, Select the first layer kernel size parameter
23635, Glove Embeddings
19577, extract city from shops
32041, One of the cool properties of XGBoost is the built in function to plot feature importance If we want we can use semicolon to suppress the output other than the plot
21356, Fit Model
33763, Examine NaN Values
11891, SVM
36361, For use with xgboost we wrap it to get the right input and output
35129, Adding an additional feature that records the no
1770, There is one person on the boat deck in the T cabin and he is a 1st class passenger
23399, apply the typical post processing functions to the predictions
18776, let's first concatenate the train and test data in the same dataframe
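A minimal sketch with pd.concat (toy frames; the target is set aside first so only the features are stacked):

    import pandas as pd

    train = pd.DataFrame({"Age": [22, 38], "Survived": [0, 1]})  # toy data
    test = pd.DataFrame({"Age": [26, 35]})

    y = train.pop("Survived")          # keep the target aside
    n_train = len(train)               # remember where train ends
    full = pd.concat([train, test], axis=0, ignore_index=True)
    # later: train = full[:n_train], test = full[n_train:]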
11674, It looks like most passengers paid less than 100 for travelling with the Titanic
4240, We can now retrieve the set of best parameters identified and test our model using the best dictionary created during training. Some of the parameters have been stored in the best dictionary numerically using indices, therefore we first need to convert them back to strings before inputting them into our Random Forest
1000, have a look at the unique values of Cabin
41419, Save test set id features with scaling
26567, But SVD does not mirror the axes
31749, Additional information about the clusters
36216, Data Cleaning
11877, Pred ML Evaluation
6470, Tune the best non ensemble methods
12826, SibSp and Parch Analysis
14562, Age We can fill in the null values with the median for the most accuracy
42235, Skew of target column
14159, take a look at the correlation matrix to get a quick insight about the relationships between features
30660, Aftershocks, ruins, and body bags are the most fake topics on Twitter
22951, quickly visualize the data before wrapping up
20380, we only have text and target columns only
32081, Continuous Variable Summary Statistics
23927, read the predictions and make a submission
11649, Random forest
7505, Missings count in the train test set
10362, Making Predictions and Submission
18080, Area of bounding boxes per image
3584, When I look at the two situations, the distribution is similar from the 25th quantile to the 75th quantile, so it may be OK to fill with the median
38046, The 95% confidence interval defines a range of values that you can be 95% certain contains the population mean
17573, Gradient Boosting
38136, We now scale the data. Scaling is used to prevent a feature with a higher magnitude from dominating a feature with a lower magnitude
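A minimal sketch with scikit-learn's StandardScaler (toy arrays; fitting on train only is the usual precaution):

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    X_train = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])  # toy data
    X_test = np.array([[1.5, 300.0]])

    scaler = StandardScaler()
    X_train_scaled = scaler.fit_transform(X_train)  # fit statistics on train only
    X_test_scaled = scaler.transform(X_test)        # reuse them on test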
36355, Generate Predictions
22431, There is one more way to create these types of plots, and it is using gridspec
14644, let s work on the label encoding and one hot encoding for the categorical features in our dataset
37210, Functions Embedding Related Functions
41706, This highlights the variance looking suspiciously like streets
42755, go more specifically into this function
8480, XGBRegressor
1249, The SalePrice is skewed to the right
8486, Robust Regressor
7766, Elastic Net Regression
6209, Decision Trees
13232, Data exploration
860, Sex Female more likely to survive than male
18409, Cross validation
10847, Looking at Skewed Features
41574, Label encoding the Y array (i.e. Daisy=0, Rose=1, etc.), then one-hot encoding
28210, Define the model and the metrics
41661, We decide to deal with missing values as follows
11839, We have successfully addressed all the categorical missing values; let's move on to the numerical missing values
664, Bagging
25997, Pandafy a Spark DataFrame
1664, There is a significant difference in the groups medians
12255, We need to standardize our data in order to properly use SGD and optimize quickly
13533, From my kernel
22057, Distribution of number of words per sentiment in train data
17791, We successfully set the passenger as a Mrs
17950, Train
22954, Model
30661, Which topics are the most controversional
35504, Predict For Random Sample
32189, mini batches 128
3936, Another checkpoint
16718, Model evaluation
38784, Create assumed probability distributions
17633, Family
23210, Stacking Or Stacked Generalization
15913, Title Survival
22624, we should create our own network and train it
1536, It's clear that the majority of people embarked in Southampton
20693, Develop Recurrent Neural Network Models
7752, it s time to select useful categorical features for our model
30085, Support Vector Machine SVM Algorithm
25652, Run the next code cell to train and evaluate a random forest model
20186, Descriptive Analysis
38619, Yep, there are identical descriptions within the train data but with different dates of creation
36288, The dataset is completely ready now
20784, Distribution of Data
14838, Age
22047, Validation Methodology
4900, submit our solutions
10740, Import the raw data and check data
10909, Compare the performances of the tuned algorithms on our dataset
1716, Importing Libraries
36993, Do people usually reorder the same previously ordered products
11013, Lets map Salutations to ordinal numbers
25402, ONE HOT ENCODING
16162, Testing
16450, We found something interesting: Cherbourg port is very safe for females, while Queenstown and Southampton ports are very dangerous for males
26960, Monthly Sales and Revenue
23740, Feature Selection
23981, we load the required files
3576, Train vs Test
14567, We can drop Name and Ticket column
41587, I have tried to train the model from scratch
14137, Model building
21203, Large Learning Rate
17663, Fill missing values
37663, Data preprocessing
25422, Test and train date column comparaison
10630, RandomForestClassifier
1278, Filling Embarked NaN
37103, Recursive Feature Elimination
910, Ahh Showing linear relationship
4615, Is it possible to drop more columns and shorten running time even further
5100, The next step is to use linear models penalized with the L1 norm
17966, The usual string accessor methods now work can be used for data manipulation
9830, Adding New Features and Filling the missing values
16252, Plotting
31042, Boundaries
41411, There was a strange value, 365243; it could mean empty values or some error, so I replace it with zero
8428, Check if all nulls of the Garage features are imputed
35690, Training and Evaluation
10134, LightGBM Light Gradient Boosting
3894, Distribution plots
32549, Scaling of Data
4859, Load train and test data
10279, the model at least makes some logical sense
1529, Parch Feature
31092, GarageArea font
38181, Hyperopt
12721, It looks clear now that Embarked Cabin really aren t helping us out and therefore I am going to get rid of them
43351, In our yp e
36484, Code for Loading Embeddings
14627, Right then so now we know that most of people died the number of men are twice as many as the women most people belonged to the Ticket Class 3 did not travel with their siblings spouses parents children and embarked from Southampton
22461, Violin plot
36983, There are a few variables at the top of this graph without any correlation values; I guess they have only one unique value and hence no correlation value. Let's confirm the same
24296, Define Variables
36462, Close Analysis of a Single Image
23392, Wrap these post processing functions into one and output the predicted bounding boxes as a dictionary where the image id is the key
23080, Correlation matrix
1319, adding some additional features
8061, Heatmap
1663, It looks like there is a lot of explanatory power in Pclass. To keep the analysis intuitive, we use only Pclass to impute the missing Age values, since they have the highest correlation in absolute numbers
11714, RandomForest
7344, Make submission
17636, Sex mapping
15012, Option 2 is to use the Seaborn catplot which is a much faster way of visualizing the relationship between variables
29614, Univariate analysis
33204, Creating submission file
16050, Sex vs Survived
27403, Define the optimizer and loss functions
15995, Searching the best params for XGBoosting
19903, Grouping by Month Shop id and Item id
14664, Seperate Train and Test
32557, Source
14506, Imputing the missing values with mode
9940, Creating a feature with the titles of the name p
6751, Train set
20524, Another way to check for correlation between attributes is to use the
27998, Label Encoding
3319, Predict and submit
9711, Cross validation on Ridge regression
35100, events csv
32926, Random Forest model
4148, Imputation of Age variable
11150, LotFrontage Since the area of each street connected to the house property most likely have a similar area to other houses in its neighborhood we can fill in missing values by the median LotFrontage of the neighborhood
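A minimal sketch of this neighborhood-median fill (toy data):

    import pandas as pd

    df = pd.DataFrame({
        "Neighborhood": ["NAmes", "NAmes", "CollgCr", "CollgCr"],
        "LotFrontage": [70.0, None, 60.0, None],
    })  # toy data

    # fill each missing LotFrontage with the median of its neighborhood
    df["LotFrontage"] = df.groupby("Neighborhood")["LotFrontage"].transform(
        lambda s: s.fillna(s.median())
    )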
14453, go to Correlation section
14607, Decision Tree
35435, Creating Train and test dataset
3809, XGBoost
15986, Naive Bayes
2432, Remember how we transformed the Sale Price by taking a log of all the prices Well now we need to change that back to the original scale
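A minimal sketch, assuming the earlier transform was np.log1p (so np.expm1 is its exact inverse):

    import numpy as np

    y_log = np.log1p(np.array([143000.0, 250000.0]))  # the earlier log transform

    # expm1 inverts log1p, recovering prices on the original scale
    preds = np.expm1(y_log)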
1017, let s start our hyperparameter adventure in the Random Forest
8751, Final Fit
29566, Number of binary values in row
24768, CatBoost
35184, Projection into 3 Dimensional PCA
14135, Creating dummy variables
18976, Display values in table format
35802, Residuals Plot Actual vs Predicted for Train data test for normality of residual errors
41307, Preliminary investigation
20289, There is around a 55% chance of survival for those who boarded from port C
2571, Gaussian Process Classifier
2171, Age is the next variable in the list
32959, For all numerical features mean value is approx and standard deviation is approx
29957, Create the model
10701, Encode Train Test
35580, A simple submission
295, Missing values
39265, order feature ABSOLUTE TIME
16250, File Paths
32269, Go to TOC
29688, Loss function, which calculates the difference between the current output and the actual output. Here CrossEntropyLoss is used, which is common for multi-class classification problems
7801, Iteration 2 Setup with Preprocessing
42651, Build and train BERT model
24889, And time to scale features to get their values as less as possible
4695, After this log1p transformation, most of our features have smaller skewness
2204, Making several new features based on the size of the family
12355, BsmtFinSF2 Type 2 finished square feet
20429, Importing necessary modules
14725, We can expect that the one guy who paid way more for this than everybody else did survive
677, Ranking of models. I've borrowed that one straight from this very nice kernel, because it's a useful summary display of how our models perform
14525, Observations
35836, Checking Duplicates Rows
27148, Neighborhood Physical locations within Ames city limits
33456, Correlation
12023, Feature importance
29821, Pre Trained Word2Vec
26848, Fourth batch
12646, Random Forest
35318, CNN
5934, Applying box cox1 transformation
2363, Introduction to Receiver Operating Characteristic curve ROC
30856, Predicting
5315, Linear model fitting using Scikit learn
18848, FastAI Tabular Learner
18274, LOGISTIC REGRESSION TO FIND HYPERPARAMETER
27433, FINAL DATA
39254, Import data
6616, One Hot Encoding
13079, Feature Selection
15993, Best Model
27304, How does the total number of products owned across all accounts evolve over time
23800, One hot Encoding
38997, Reshape the input x_train_dev to a vector; currently x_train_dev is of shape (number of examples, image width, image height, number of color channels)
8448, Transform Years to Ages and Create Flags to New and Remod
17029, Outliers can shift the decision boundary for linear models significantly; that's why it is important to handle them
10210, Survived is the target variable, where survival is predicted in binary format, i.e. 0 for Not Survived and 1 for Survived
5432, Garages
19270, LSTM on train and validation
12739, Final model prediction submission
36246, Statistical Significance
28493, Use test subset for early stopping criterion
38484, Create model with TPU
42085, Let's create a data object using the really cool fastai API
40998, A Conv2d layer takes 3 input channels and generates 64 filter channels, i.e. feature maps
24161, Build Train Set
15297, Gaussian Naive Bayes Model
1059, Nice The most important feature is the new feature we created TotalArea
11983, let's check the scales of all numerical features to verify whether any scaling is required
23421, Number of characters in tweets
23058, to make our life little bit easier we transform our dataframe to have only two columns label and image where image is a numpy array of pixels
4688, Again here i set these missing values with the most common values
38216, Train
28873, we return our Selected time series into a data frame
17634, IsAlone
15939, Test Fare
38509, create three separate dataframes for positive neutral and negative sentiments
6658, Prediction Classification Algorithms
11914, creating different age bands
32589, To save the Trials object so it can be read in later for more training we can use the json format
14437, go to top of section eda
6620, Bagging Regressor
31118, delete duplicated features
24691, As we are interested to finetune the model to APTOS19 we replace the classification fully connected layer
4408, Variable transformations
34519, Applying Featuretools
36751, Here is again an important part
37186, Reading dataset
39388, Fill null values for cod_prov with the median
8034, Since it's a categorical datatype, we opt for the mode
8396, Ok now let s set our X and y values
14850, Cabin and Ticket
41865, Below I generate a wordcloud from the raw text, excluding stopwords, in the shape of fire
31395, Now that our data looks good, let's get ready to build our models
24241, Completing features
40485, AdaBoost
37097, Depending on the categorical variable, a missing value can mean None or Not Available
2490, so there are some misspelled Initials like Mlle or Mme that stand for Miss
17766, Go to top
10557, Merge Train and Test to evaluate ranges and missing values
517, some useful functions
2037, We change Sex to binary as either 1 for female or 0 for male
18293, Making a Synonym Dictionary
2543, The principal changes are here, in the lines for categorical features
8232, Model Building
14792, Pipeline
2760, Observations
39300, Discard irrelevant features
23400, Although only a submission to the competition provides a final score on how good the model is, I'll visualise each test image with its predicted boxes to get an idea of the model's quality
37786, Visualizing sample of the testing dataset
16771, Our random forest model predicts the same as before
24431, Building the CNN Model
29160, Fence Fill with None
31934, Submission
22659, Training Function
3281, Ridge Regression
29521, Data Cleaned
43362, Random Forest
24759, Lasso Regression L1 regularisation
24566, Total number of products by age
2650, SibSp of siblings spouses aboard the Titanic of passenger
20572, Many passengers are of age 15-40 yrs
32184, Train model
32804, LightGBM
13709, PRIMARY CONCLUSIONS DERIVED
3692, Exploratory Data Analysis EDA
42373, Word2vec Embeddings
5439, i 2 Calculate the purity score
28955, make a new data frame which contains all of the people info and add to it the profit info
16496, Random Forest
9419, Making predictions and outputting
11705, Check the Data
43374, Defining the accuracy function
27311, Model Architecture
13693, Next up, we encode all the categorical features
38855, Submission
1972, Linear Discriminant Analysis
42768, Fare Group and Fare Categorie
21252, Compiling Model
33343, From our very first and simple figure we can already extract very useful information
30910, LAR1 may refer to Los Angeles R-1 zoning
6177, Count of males and females aboard the titanic
30089, Evaluation Classification Models
22938, One last interesting feature I can create comes from the data documentation
27044, Distribution of Ages w r t gender
931, Optimize XGboost Regressor
16402, We use Imputer from sklearn for the median values
27434, MODEL DEVELOPMENT
13103, We need to impute values in Age
17683, EMBARKED SURVIVAL PERCENTAGE
19647, Age distributions by phone model
11728, Linear Discriminant Analysis
37328, Pool size Parameter selection of New Module pool layer
6159, Feature importances
26766, Submit To Kaggle
1397, SibSp vs Survived
8920, Correlation Matrix
20848, We re going to run on a sample
37182, now try to predict using random guess but stratified using dummy classifier
15124, Sex
43041, Looks like we have hit our target of 99%
14499, Numerical Features PassengerId Age Fare SibSp Parch
3225, Scatter Plot
26986, Compile model
7872, To explore better the relationship between these variables before featuring I create a first model
7222, Fireplace quality Null Values
19947, Feature engineering
32013, Now we can have a look at our train X and test X again
23660, Rolling Average Sales vs Time Wisconsin
12400, One hot encoding of all purely categorical columns
41191, The list of numerical and categorical columns might have changed after removing those columns
13689, Fare
40712, Loading test csv
16089, Embarked vs Survived
1650, Quite informative, well, apart from the NaNs; a good idea here would be to examine each feature separately, given that there are not too many
29812, Displaying FastText pretrained WordVector of a word
35555, Boosting
17746, all tickets starting with were priced at for passenger s traveling alone I set the passengers fare as
14721, This one is a bit more interesting
25171, Pre processing our Questions data Removing Stop Words Doing Stemming and more
10409, Plot sales price against Living Area sliced by Overall Quality
12887, Name
12840, Linear Support Vector Machine
23505, Compile and run the model
4528, Series Pandas
15077, Correlation
42332, Dropout is used to avoid overfitting by randomly disabling nodes during training
12021, Nice It also performs similar to xgboost
7740, Predict
11301, Averaging
30266, We can use custom threshold value to fine tune our classifier in precision recall space
26577, Training the model
31256, Making Prediction
26883, Include only numerical columns impute columns with missing values plus an extension to imputation
18994, Build the neural network
16882, People who are aboard alone have a low survival rate
2168, Our assumption is that people s title influences how they are treated
31900, Scale these values to a range of 0 to 1 before feeding them to the neural network model
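A minimal sketch for image pixels (toy array; 255 is the usual maximum for 8-bit grayscale values):

    import numpy as np

    X = np.random.randint(0, 256, size=(4, 28, 28)).astype("float32")  # toy images

    # pixel values lie in [0, 255]; dividing by 255 maps them into [0, 1]
    X /= 255.0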
41754, Approach 2: pd.read_csv and pd.to_sql chunk by chunk
26291, Predictions
7929, Ridge model
38716, FCGAN Implementation
28890, As a structured data problem we necessarily have to go through all the cleaning and feature engineering even though we re using a neural network
11741, we have a much better feel for a lot of the variables that have the largest impact on the Sale Price
3292, we need to create the Embarked Kfold Target Enc in the test dataset by using the following class
43204, Prediction
28503, Creating Label
37509, View data for a single customer
10169, Count Plots
5528, Create Age Band Categories
14884, we can see that all the Cabin types start with these characters
11199, BayesianRidge
24713, Show Model Variations around the Mean Image
9635, Checking out the number of Categorical Data and Numerical data and adding them up to find out the total feature types
24293, reshape it to 28x28
12388, Plotting the correlational matrix
7675, We already dealt with small numbers of missing values, and with values that can't be filled with 0, such as garage year built
40416, Looks very similar to the train set dates and so we are good to go
26009, There are also other significant variables that I missed during the earlier sneak peek, and that's the beauty of EDA. They are
13692, We begin by dropping the columns that we are not using
3238, Maps
13990, Drop Ticket PassengerId and Cabin columns
26460, GB Prediction for test dataset
22162, Pipeline taxes
21788, Var21 var36
32313, Relation between Survival and Family Members On board
21609, Apply mappings or functions to the whole df with applymap
7300, Bivariate Analysis
4716, Adding certain columns
741, Overfit Columns
35374, Load Model
41362, There is no correlation
10390, Preparing the Data to do Machine Learning
25385, Visualize the output of fully connected layer
11045, And now use our pipeline
37427, Which are the most common stopwords
9731, Model Building and Evaluation
34627, EDA Feature Engineering
23893, is the mean value with which we replaced the Null values
7546, logistic Regression
15728, Hyperparameter Tuning
23692, Define transforms
16227, As we have applied all the feature engineering steps, it's now time to separate our data back
29152, FireplaceQU Drop Feature
10973, Top influencers
41966, Sales Per Month Count
42658, The data set is characterized by a large proportion of missing values
888, KNN KNeighborsClassifier
25727, check our datasets
41072, Misspelled data
16257, Preprocessing
19355, Numerical columns within the dataset
18330, DISTRIBUTION OF TARGET VARIABLE
9221, Execution on Testset with KNN
27013, How is ResNet50 working
8800, lets impute the missing value
15116, we load the test set to the variable X test
6147, Features encoding
11085, ensemble
30, Stacking
20554, Function monitors and changes the learning rate
4594, GarageCond and GarageQual are highly correlated
37812, Tweets missing the keyword location
28010, Confusion Matrix
32142, How to rank items in a multidimensional array using numpy
14697, PassengerId can be removed from the dataset because it does not add any useful information in predicting a passenger s survival
28711, Take training data through graph and evaluate performance
19587, shop
35666, Useless features in predicting SalePrice
4136, define some helper functions which would be used repeatedly
14427, Title Description notes
36343, Compute the Network Error
17448, first split in x and y values
17831, Model with Sex Age Pclass Fare Parch SibSp FamilySize Title features
21787, Var3 var15 var38
4202, Modeling
7386, By manually inspecting each unmatched name from the Kaggle dataset and looking for it in the Wikipedia dataset I discovered several matching mistakes
21648, Select multiple rows and columns with loc
39850, Actual value vs Modelled Value
20071, Insights
21461, Moving to the second chapter
14709, LINEAR SUPPORT VECTOR CLASSIFIER
7044, Proximity to main road or railroad
41575, Splitting into Training and Validation Sets
18317, any useful words in shop names
16037, That's the accuracy of our model
21181, Preventing Infections
17783, Extract Title from Name
19875, Feature Scaling
4871, Generating Dummies
22094, Define Loss Function and Optimizer
41964, Item Category
25945, EDA and Feature Engneering
35350, now choose the best hyperparameters
12365, Exterior Variables
7094, Basic Modeling Evaluation
18332, Just for demonstration purposes: the positive skewness in the data can be mitigated by using a log transform
16727, title
19087, Age histogram based on Embarked Survived
8781, One hot encoding for sex
14266, Decision Tree
41244, Tensorflow Hub Universal Sentence Encoder LightGBM
6378, Find out variance
38456, Go through all questions and record the entity type of all words
38292, Define GridSearchCV for hypter parameter tuning
709, use a random forest, as this should partially remove the dependency on skew that we have with linear-regression-based modelling
13456, Cabin Missing Values
40943, Out of Fold Predictions
27741, There is no missing data
13959, Data type of each column
7926, Modeling
34674, Sales distributions by shop id
16831, An ROC curve demonstrates several things
16707, We can also store the values of each prediction model in a dictionary
40456, NeighborhoodHouseStyle
718, At this point I d thought it would be wise to try a different modelling technique
30402, Defining a class to get access about information after each update
29594, We apply the initialization by using the model s apply method
32180, Define Centernet model
17452, OK, let's run some C and epsilon tests
31025, Mention
28350, Analysis Based on EXter Source Types
20193, Levene's Test
36212, Preparing Evaluating Submissions
26965, Which shop is the most popular, and which the least
24706, Inference on test set
24718, Show Scatter plot of Leaf images as points in high dimentional space
1940, Overall Quality
23244, Many machine learning models allow some randomness in model training
17761, explain mean mode median
11552, Effect of year on SalePrice
8242, Model performance Visualization
40130, Function to build a model based on LeNet 5 architecture
28302, There are 116 categorical columns with non-numeric (alphanumeric) values; most machine learning algorithms don't work with non-numeric values
8464, Like before I excluded one by one of the features with the highest P value and run again until get only P values up to 0
22074, We got between 15% and 40% EMPTY PREDICTIONS. That's the core reason why the spaCy NER model does not perform well on this task; we can't score over 66.X on the Leaderboard so far
27472, Top Ngrams I am analysing Bigrams only
34059, Outlier Detection
24868, Fortunately, the imbalance isn't too bad for this dataset
6300, Extra Trees reduction
35138, Final Check
14345, Data Visualisation
21730, check out the outliers instead
22485, Bonus1 how to make simple lines to connect points in matplotlib
33656, Drop Unnecessary Columns
28173, Dependency Parsing
10126, These 4 graphs again reiterate the same fact in much more detail
27666, 3D CNN
3764, Lasso Regression
19468, let s check at the occurence of each class to be sure that there is no asymmetry in our data that can skew the algorithm
12064, Target Variable
7308, Observation
13790, Encoding
43135, Overall Model Performance
7860, We tweak the pre processing function from before to handle missing data better too
15653, LinearSVC
35500, Epochs and Batch Size
31782, Exploring correlation of features between the train set and target
26789, Naive variables
28113, The Model
8442, Include pool in the Miscellaneous features
12873, There are a total of 2017 parameters in our model, which we tune during backpropagation
33740, Lets plot some of our prediction
31259, start optimization process
41696, Exploring places and times
10498, Pclass
37947, If we want to predict a data point in the future currently only Country State and the date are known
3937, The sales price is right skewed
36166, Instead of simply removing duplicates, I'm taking into account only the last month for each customer
41053, Creation of X features and y targets based on the data sets
20280, Lets visualize this
34333, Prediction
28742, Training the model with the best parameters and predicting on X_test for the submission
33582, Test Dataset
7217, We can replace the missing Alley values with value None
16001, Embarked
28488, We just reduced the dataframe size from 57MB to 35MB
4085, Univariate analysis
13510, Score
9751, Re check for missing data
33798, The target 1 curve skews towards the younger end of the range
4024, Electrical
17785, verify the relationship between Title and Sex
24969, V11 prediction
21430, Explore NaN Values
20526, Prepare the Data for Machine Learning Algorithms
3480, Make predictions on the training set and construct a confusion matrix
6021, This is our submission data
20452, POS CASH balance
16671, Model Tuning
33833, Replacing With Mean Median Mode
23545, Set the optimizer and annealer
15606, Estimate missing Fare Data based on Embarkation
18068, Using the Model
8513, Fixing Skewness
4824, Plot the distribution of missing values
27596, Simple LSTM Model
11652, Adaboost
17930, Explore
10697, SibSp and Parch processing
36744, A 1 is assigned to the day before an event exists
2468, Random Forest Importance
2348, First Find Ideal Boosting Rounds
33150, MEMO: if you are predicting on your own moles, you don't need to rank the probability
26043, we ll define a DataLoader for each of the training validation test sets
21237, We can use callbacks to stop training when there are no improvements in our validation set predictions and this stops overfitting
24179, Stacking
16919, Confirm features of train test are the same
6560, Pclass
24869, Imputing is the process of dealing with missing values
27916, Impute by Strategy
26957, there is an outlier in the item_price and item_cnt_day columns
35858, now we reduce the learning rate by a lot and train the model for real
35672, Numerical features
33796, As the client gets older there is a negative linear relationship with the target meaning that as clients get older they tend to repay their loans on time more often
29791, Euclidean Distance
442, MSSubClass Na most likely means No building class We can replace missing values with None
36298, Look Accuracy on Training data lol
8050, feature engineering
5599, Bath
38506, Text Data Preprocessing
27995, CatBoostClassifier
1606, Age
31670, Define Helper Functions
4426, Some features are very highly skewed and this can negatively impact the model
12102, Dealing with LotFrontage
36595, Main part load train pred and blend
3337, Let's think more logically and find certain features which have a significant impact on the missing data; let's start with correlations and find which features are similar to Age
36846, when tuning a model it makes little sense to only track the change in CV score; we have to tune models on a local test set in order to get a valid estimate of how well they will perform on the leaderboard
3189, Follow the documentation
31737, DICOM Images
2664, there are 38 constant feature columns with same value in all the data out of 370 columns
15171, No survivors
21926, Test Score 12284
42959, It's clear that the majority of people embarked in Southampton
28308, Function for find out Numerical and categeical Variables
28079, we come to imputing missing age in the test data
16754, SibSP Parch
32521, Predictions
32657, Visualization of distribution and correlation with the dependent variable AFTER normalization p
4512, Considering number of missing values and its relationship with LotArea we ll drop it
814, Missing values in train data
1804, let s make sure that the target variable follows a normal distribution
12220, There is a big outlier in our prediction and a visible pattern in the residual plot both things that would require further investigation
28797, Training models for Positive and Negative tweets
16944, Tuned Random Forest
29465, Frequency of each question
13593, Build AdaBoost
29435, Importing Packages and our dataset
39021, Columns dissociation
19160, Basic feature engineering
23086, Here we can do an evaluation of our model
27164, Line plot that tells us the variation of each year with Sale Price
32520, Loading the weights
33891, agregating installments payments features into previous application dataset
17646, Logistic Regression
2562, Variable importance cross validation details and model stats
20810, We select every numerical column from X and the categorical columns with unique values under 30
10643, Methods to deal with Continuous Variables
440, Functional data description says NA means typical
14890, Embarked
32686, The python list containing the loaded data is converted into two numpy arrays one for features and one for labels
19672, Remove Low Importance Features
3562, The std is big
23901, let's pad the sequences to a fixed length
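A minimal sketch with Keras pad_sequences (the maxlen of 5 is illustrative):

    from tensorflow.keras.preprocessing.sequence import pad_sequences

    seqs = [[5, 12, 7], [3, 9]]  # toy tokenized sentences of unequal length

    # pad (or truncate) every sequence to the same fixed length
    padded = pad_sequences(seqs, maxlen=5, padding="post", truncating="post")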
21326, Pools Hot tubs
14240, Most of the first class passengers embarked at S
1121, Based on my assessment of the missing values in the dataset I ll make the following changes to the data
17762, Drop Useless Columns
11838, After careful examination of Utilities variable we can drop it
1661, A clever idea at this point is to ask our friend seaborn to provide a features correlation matrix
27915, One hot encode the categorical features and identify features that are failed to encode
15137, Cleaning
11523, LightGBM scores
27225, Creating a intermediate layer model to extract the data from my dense layer
14582, The diagram is right-skewed
17760, Missing Ratio of Columns
37922, Ridge
3064, Filling in the NaN values
900, For detailed variable descriptions, please check out the house-prices-advanced-regression-techniques data page
12349, BsmtFinType1 Rating of basement finished area
42955, The number of people with two or more children or parents is very low in the data
17371, Embarked S
16436, Fare
11647, Naive Bayes
38708, I get rid of some features for best LB score
19093, that's a pretty amazing correlation
4075, Final Submission
494, Embarked Feature
28651, Location
14457, go to top of section model
38960, As mentioned in the introduction we are using the default here
18995, Do the train val split
36054, Test Data
8544, FireplaceQu
1174, let s think about imputing the missing values in the numerical features
4470, We create a column for each cabin and insert the value 1 if the passenger belongs to that cabin and 0 otherwise. We only create columns for cabins A, B, C, D, E, F, G, and T, leaving out cabin U in order to prevent collinearity; passengers in cabin U have the value 0 for all of the cabin columns
7893, As Sex and Embarked are not numerical I do the pandas OneHotEncoder
26358, Model Building Baseline Validation Performance
1052, Machine Learning
2274, Creating new Features
18460, preprocess the question text
12656, Splitting up the Training Data
14062, Sex vs survived
31320, Decision Tree
13616, we fit clf cat to training dataset and apply it to transform
34409, Save as CSV
23754, A little editing with the datasets for Analyses and Prediction
6542, numerical columns
35157, Plot the model s performance
16983, Nearest neighbor classifier
6198, Decision Tree
11421, Find
532, Swarm and Violin plots
22512, Now comes the cool part
20101, Item count by month for lags 1, 2, 3, 6, 12
38447, Convert categorical columns stored with a numerical dtype to object type
21999, Highly compressed
9874, We can fill the Embarked data with the help of passenger class, or by looking at the fare data and which seaport passengers preferred to board from
8829, FINAL DATA PREPARATION
42011, Making a new empty DataFrame
2538, To use Tensorflow we need to transform our data in a special format
6295, Voting
32615, Exploring the keyword column
31768, Here are their order counts
6174, Shows the count of survived people
728, A strong score
27260, look at the correlation between the regressors
3905, there is one missing column, which is the y we want to predict
32799, Random Forest
5535, Create Parch Categories
32975, Let's fill the missing Ticket value in the train data with the median Ticket value, and the one missing Fare value in the test data with the median Fare in train
32866, Lag based features
33532, Not much difference in the distribution of average word length between tweets with target 1 and target 0
26368, Images that confuse the network
26904, Approach 10 A10
16432, Decision Tree
12122, Alley data description says NA means no alley access
14571, Random Forest font
24409, Feature Importance
5988, Combined Train and Test Data
4192, Important categorical variables MSSubClass and Neighborhood
7723, In these numerical features we ll impute NaN with zero because a missing values here means the house doesn t have that feature so it s zero
16664, Binning Continuous Features
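A minimal sketch with pd.cut (bin edges and labels are illustrative; pd.qcut would give quantile-based bins instead):

    import pandas as pd

    ages = pd.Series([4, 22, 38, 65])  # toy continuous feature

    # cut into labelled bands at fixed edges
    bands = pd.cut(ages, bins=[0, 12, 35, 60, 120],
                   labels=["child", "young", "adult", "senior"])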
19620, DATA PRE PROCESSING
29079, Price
27037, Exploring the Target column
4089, SalePrice is not normal
11484, MasVnrArea MasVnrType
6190, Feature Scaling
5199, now we get the label from the training set and put it in a seperate series then we drop it from the training set for future use
23200, Not so much. But considering the number of features we have, it's not too little either. Let's visualize our two-component transformed features in a scatter plot
13907, Roc Curve
10145, Picking the right model
184, Label Encoding
10575, We use PySpark DataFrame's na.fill to fill a value into a specific column
32839, Most of the plots for ConfirmedCases and Fatalities look like a degree 2 or sometimes a degree 4 polynomial
30903, analyse the regionidneighborhood
5923, Learning from others kernels
20270, There are 116 categorical columns with non-numeric (alphanumeric) values; most machine learning algorithms don't work with non-numeric values
1573, We combine the SibSp and Parch columns into a new variable that indicates family size and group the family size variable into three categories
41719, Import and load data
6793, Ramdom Forest
26473, In this example we be using the pretrained for fine tuning by doing the following
9695, EDA on Categorical features
36615, Calculate Eigen values and eigen vectors
38581, Whenever we have text data, we have to clean it to remove unnecessary symbols and stopwords
22432, But if you want to make not a regular n x n layout but something more sophisticated, you can only do it using a grid
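A minimal sketch of such an irregular layout with matplotlib's GridSpec (assuming that is the grid mechanism meant here):

```python
import matplotlib.pyplot as plt

fig = plt.figure()
gs = fig.add_gridspec(2, 3)
ax_top = fig.add_subplot(gs[0, :])    # spans the whole first row
ax_left = fig.add_subplot(gs[1, :2])  # two thirds of the second row
ax_right = fig.add_subplot(gs[1, 2])  # remaining third
plt.show()
```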
28869, Predict
14085, Test Data
18440, Set up the model
38554, Correlation Matrix
12101, Dealing with MSZoning
2011, Feature Transformation Engineering
39144, look at train
28643, OpenPorchSF EnclosedPorch 3SsnPorch ScreenPorch
2812, Reading the data
32316, in order to run SelectKBest I have converted values in Embarked and Sex to floats
3639, Treating Missing Values before performing further Feature Engineering
13368, Here 0 stands for not survived and 1 stands for survived
636, In the same way having a large family appears to be not good for survival
3974, Submission
5554, Submit entry
13966, Embarked
28307, Number of records and Features in the datasets
33786, Label Encoding and One Hot Encoding
22709, Splitting the target and predictor variables
9817, As predicted females have a much higher chance of survival than males
19726, Observation
29163, MSZoning Fill with the most frequent class
39124, With XGBClassifier
27171, There are two types of popular feature scaling methods
18585, Here is what the raw data looks like
37422, But our task is not to predict these labels but to predict the selected text which can help us figure out the sentiment
15459, Price distribution for Pclass
37879, Prediction from Linear Model
32969, Deck
16835, Using sklearn utilities for the same
21641, Copy data from Excel into pandas quick read clipboard
6905, Port of Embarkation
34035, Ridge regression by increasing alpha
26042, To double check we ve correctly replaced the training transforms we can view the same set of images and notice how they re more central and have a more standard orientation
15523, We can now get the prediction
28866, Decoder
18250, Unique values in dataset
36788, Bigram Models
4573, Fourth step A little bit of Feature Eng
536, New Feature NameLenBin
8954, Fixing Miscellaneous
40081, we fit the test
25734, In this problem we used softmax instead of sigmoid
882, fillna fill nan with mean values for that column
11053, ENSEMBLE Weighted Voting classifier
20586, Logistic Regression
5040, As expected the skewness is a positive value
23461, Removing unnecessary columns
34460, WI 2
1313, Fix Skewed features
42420, Finding Top Contributing features
27511, Extract our features and target
15101, For our next step we are going to assume that there is a relationship between a person's age and their title, since it makes sense that someone younger is more likely to be titled Miss rather than Mrs
26524, Visualize Logistic Regression Predictions that are MOST wrong with ELI5
22618, Plot the distribution of apps on the 75 of the devices
22803, Few Data Observations
2505, Family Size and Alone
37610, return_missing_col is a helper function to find the missing columns of a dataset easily
16022, We remove rows with Z-score > 3, which means Fare > 200
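A minimal sketch of Z-score filtering (the Fare column and the threshold of 3 follow the text above):

```python
import numpy as np
import pandas as pd
from scipy import stats

df = pd.DataFrame({"Fare": [7.25, 71.28, 8.05, 512.33, 26.55]})
z = np.abs(stats.zscore(df["Fare"]))  # standardized distance from the mean
df = df[z < 3]                        # drop rows more than 3 std away
```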
11720, Linear Support Vector Classification
19337, There they are. Now bring back the single unconnected questions to get our complete universe of question sets
22704, Network Structure
20257, Basic Math with Pytorch
31058, Format Country code Local Area Code Number
22450, Dot plot
11191, This model needs improvement run cross validate on it
12950, Stacked Classifier
41618, lets set up our plot again
9159, Correlation between Target Variable and Features
1400, Sex vs Survived
23390, Prediction post processing
13667, Random Forest
26620, Data exploration
13909, Number of males that survived vs number of males who did not survive
13541, First look at the data
17734, Looking for NaN s
39734, While I m at it I ll encode it and Sex as numeric using the map method
11815, visualise this data in in boxenplot violinplot and stripplot
36627, How many buildings and managers
16546, use the get dummy method of pandas to encode the Embarked column
23903, Get the sequence embedding
7027, Home functionality
41988, isnull().sum() / isna().sum() to check the counts of null values
21769, Drop nomprov column
19637, 529 device ids have duplicate entries in phone dataframe
26260, Plotting the training vs testing accuracy
37199, This model may be very sensitive to outliers
26934, And now we can obtain the features matrices
6481, Check Missing or Null Values in data
13364, View statistical properties of test set
36098, Save some memory
41567, n_components specifies the number of dimensions you want to reduce the dataset to
6091, GarageCars and GarageArea are like twins
5705, BsmtQual BsmtCond BsmtExposure BsmtFinType1 and BsmtFinType2 For all these categorical basement related features NaN means that there is no basement
42081, we need some labels too, so let's add y, which will be our label
23996, Here again, train and test are split back separately, as all data processing is now done
11706, Data Cleaning
31123, let s look at the top xgb chosen features
2387, Most important parameters of a LogisticRegression
12043, understand a data set variable after variable check basic statistics and drop a few outliers
36297, Magic Weapon2 Support Vector Machine with RBF kernel
9686, Observations
10571, EDA in Pyspark
27043, Visualising Age KDEs
29417, All the functions are below and quite basic
23180, Findings: among the classifiers, RF and GBC have the highest accuracy after tuning hyperparameters, and are perhaps worthy of further study on this classification problem. Hence we choose RF and GBC
27524, Library
24687, compare the number of parameters with some of ResNets
2214, ROC
20785, Univariate Analysis
24366, majority of them are numerical variables with 15 factor variables and 1 date variable
7121, Classes of some categorical variables
30887, We now have fixed all parameter values but searched only coarsely for the best number of estimators
33777, Data Augmentation
2999, Trying some nifty Feature Selection Techniques
27078, Thus we have our very high rank and sparse training data (a small document-term matrix) and can now actually implement a clustering algorithm. Our choice will be either Latent Semantic Analysis or Latent Dirichlet Allocation. Both take our document-term matrix as input and yield an n x N topic matrix as output, where N is the number of topic categories, which we supply as a parameter. For the moment we shall take this to be 5, like the number of categories
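A minimal LSA sketch on a toy corpus (the corpus and n_components=2 are illustrative; the notebook above would pass its own document-term matrix with n_components=5):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer

docs = ["cats and dogs", "dogs bark", "cats purr", "markets and stocks"]
dtm = CountVectorizer().fit_transform(docs)   # sparse document-term matrix
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_topics = lsa.fit_transform(dtm)           # one row of topic weights per doc
```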
3613, Independent variables
31292, Separating macro data depending on its period
39846, Actual value vs Modelled Value
23736, The code may look a little overwhelming but if you look in between the lines it simply extracts the title as it is formatted in the name column
7629, Linear Models
16543, As there is only one missing value we can easily fill it up by either mean or median of the column I am using mean here
21105, Our date block num column will be the sequence index, sales will be the value
9904, Train Test Split
22358, Frequency Encoding Count Encoding
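A minimal sketch of frequency/count encoding (the column name is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"city": ["A", "B", "A", "C", "A", "B"]})
counts = df["city"].value_counts()
df["city_count"] = df["city"].map(counts)           # raw counts
df["city_freq"] = df["city"].map(counts / len(df))  # relative frequencies
```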
1981, let s check that heat map again
40258, Ground Living Area
12877, Cool
37460, Remove Stopwords
20637, Unigrams
14538, Summary
34428, Real Disaster
18201, let's look at which products sell by week with interactive heatmaps; use our quantiles here
28108, Combining the highly correlated columns based on the distribution
7870, To get a better insight of the relationship of these features and the survival rate a general pairplot give some clues
25085, Event 2 vs sales FOODS
41976, To read bottom 10 lines
34724, Feature importances from LOFO
30880, Plot crime categories counted per year
19315, Evaluation prediction and analysis
20619, Gaussian NB Classifier
6736, SalePrice vs MasVnrArea
3934, Replace missing values with the most common value or the median
12969, Filling the missing values in Age variable
964, Correlation Heatmap of the Second Level Training set
31702, Transforming the Data
23435, Even if I'm not good at spelling, I can correct it with Python; I use pyspellchecker to do that
11950, Applying Standard Scaler to the numerical dataset
2234, Kurtosis
14972, Deleting columns which are of no use
36499, Outlier Detection
29156, BsmtQual BsmtCond BsmtExposure BsmtFinType1 and BsmtFinType2 Fill with None
4406, Permutation Importance
18018, Boy feature
9176, GarageFinish
24019, Train all
14906, Embarked S and C have higher survival rate than Embarked Q
9949, Bar graphs and CountPlots
31998, get the test set predictions as well and save them
26108, Helper Functions
9636, Checking out the shape of our training and testing dataset
21039, Topic Probability
33449, ExtraTreesClassifier
41748, MLP Batch Norm on hidden Layers AdamOptimizer 2
24842, Complex CNN
25812, Confusion Matrix
38405, Sklearn models
24947, Selection by modeling
33673, Week Number
26402, We already mentioned that most likely certain age groups might have a higher probability to survive
42531, One thing I have noticed is that the curves of a country's confirmed cases and fatalities look pretty similar. That makes sense: if confirmed cases fluctuate, deaths would also increase or decrease
23517, Data cleaning and preprocessing
41434, Brand Name
33361, The previous plot is kinda nice but we can do better
36509, Embarked Sex Plcass Survived
18211, The data values are between 0 and 255
20241, I have used two types of Imputer from sklearn
27384, the next cell takes a lot of time; you can skip it and run the one after it
28628, GarageType
22329, Removing Digits
10929, Undirected Network
32978, Machine Learning
597, Passengers with more than 3 children parents on board had low survival chances
16129, Convert Pandas Dataframe to H2O Frame
33274, Cutout data augmentation
2020, Here we stack the models to average their scores
38825, Split train valid 80 20
7296, Creating Submission File
25, RandomizedSearchCV Kernel Ridge
6619, Xgboost
27201, Fare
20814, Our next step is setting up the final model
5898, Label Encoder of Train and test
12486, Gradient Boosting Classifier
1235, Removing overfit
14327, Name
35079, The final submission is done using the model that yielded the best score in this notebook, that is, model 4 2. I decided to compile it using Adam as optimizer because it finds local minima faster
31297, Compile Our Transfer Learning Model
11560, Apparently the best model is XGBoost let s plot a boxplot to visualize the performances of the different models
43324, take a look at our data
20633, Lets create our own vocabulary
6105, combine two datasets and work with missing values faster
18221, Training Function
17549, lets analyze the available Age data
41946, We can visualize the training process by combining the sample images generated after each epoch into a video using OpenCV
3424, let s put all the prefixes with only one member into a single category called Unique
4471, Dealing with Embarked Missing values
30611, Over half of the top 100 features were made by us That should give us confidence that all the hard work we did was worthwhile
5986, Data Train
8496, SHAP values are calculated using the shap library, which can be installed easily from PyPI or conda
38686, Before Mean Encoding
13324, Wrangle, cleanse, and Prepare Data for Consumption
27469, Stopwords Removal
41680, Extract features from Images
27906, Reference We used Decision Tree Regressor A cool video on Decision Trees
21678, The n components variable here is crucial
22408, By now you ve probably noticed that this number keeps popping up
22846, Merging Shops Items Categories dataframes to add the city label category id main category and sub category feature
6629, Most of the people who died were from Passenger Class 3 irrespective of Gender
35371, Create train and valid dataloaders
556, KNN
4738, We'll consider that when more than 20 to 30% of the data is missing we should delete the corresponding variable and pretend it never existed, but we do that in the feature engineering part
1145, We have to convert all columns into numeric or categorical data
5529, Gender Categories
17259, Info of training data
2551, We have 5 categories: cut off at 16, then at 32, 48, and 64
3673, Optional Box plot
15467, Features SibSp and Parch
19433, Create a correlation matrix to check linearly highly correlated numeric float variables
30137, Creating Custom Model
5658, Basic info of Train data
14663, Standardize
4651, Basic Statistics
41124, Median of Absolute Logerror
7238, Data Preparation 0
35942, GradientBoosting
8607, We can then perform box cox transformation on the independent variables as well
2759, Numerical values distribution
22693, feature X contains 784 pixels (28 x 28)
101, Train info
14717, RANDOM FOREST
23560, These have their kitchen area larger than the total area of the house
27570, ps ind 14
41703, Nope not really
17030, All features have outliers
7734, Tutorials for Models
3922, this is a tricky function to remove outliers; I searched among various methods and this was the best one
28312, identifying the missing values
9979, relationship between these discrete features and Sale Price
27892, Split the train data into train having 20 000 images and test having 5 000 images
25396, Mislabeled data visualization
26451, knowing that our model is already quite good we are ready to make the final predictions
38711, Dimension Reduction PCA
32421, AlexNet
20090, Outlier
37490, LSTM
34698, Averaging the values based on the extrapolation over 1 2 and 11 months
13414, Specificity True Negative Rate
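For reference, the standard definition:

$$\text{Specificity} = \frac{TN}{TN + FP}$$

i.e. the fraction of actual negatives that the model correctly labels negative.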
12018, XGBoost is a state-of-the-art model: a faster implementation of gradient boosting with slight modifications
8053, single model
16839, Handling Missing Values
43035, One Hot Encode the Labels
1922, Having 2 fireplaces increases house price and fireplace of Excellent quality is a big plus
41062, Predict Test
22654, Threshold
2052, DecisionTree Model
4670, The train set is composed of 81 columns, so we will not explore all features but focus on the most important ones
31225, var 68 var 91 var 103 var 148 and var 161 have comparatively lower range of values
5959, Pclass and Survived
36626, Prediction Submission
31512, Feature Selection using Feature Importance
16672, KNN
15277, After saving the training and testing datasets in the train data and test data dataframes respectively, we'll replace the missing values in the Age and Fare columns with the average values of the respective columns
38060, Cleaning stage
22628, Making Predictions on Test data
6938, Final set of features
111, ticket
32779, Visualizing Augmentation
27230, Prediction for res
29765, Validation accuracy class
34607, Cross Validation
1221, Splitting the data into categorial and numerical features
6740, Observed Outliers and non linear relationship
10388, Visualizing the Distribution of SalePrice
4753, This tells us that OverallQual has a strong relationship with sale price: as quality increases, sale price increases exponentially. FullBath, TotRmsAbvGrd, and GarageCars also relate well to sale price
1131, Exploration of Traveling Alone vs With Family
26095, Generate bonus output
23677, Model construction
23931, How can we use Faster RCNN in this competition
18714, Trick to create an even better model
15480, We create a new feature Age Class to increase the number of features
24961, Feature Scaling
37536, Reading the data
431, FireplaceQu data description says NA means no fireplace
42963, we ll fill in the missing values in the Age feature
35229, How punctuation plays a part in this competition
16879, The age distribution between Survivors and victims are not very different
5687, Normalize Age and Fare
18071, Construct dataframe with all images
2237, Dealing with Missing Values
40921, Here s what the fit looks like in sample
13282, In pattern recognition, the k-Nearest Neighbors algorithm (k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). Reference: Wikipedia, k-nearest neighbors algorithm
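A minimal k-NN sketch on a toy dataset (the iris data stands in for the competition features):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5)  # majority vote among 5 neighbours
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))
```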
17435, Name
5605, Transformation
12530, Splitting back the train and test data
5665, Create Name Length
2552, explore gender wise survival too though we use column as it is
40627, Just to sanity check let s have a look at the predictions on the training data to check they look ok and are approximately on the right scale
41468, It appears those that embarked in location C 1 had the highest rate of survival
5568, Correlation matrix
17701, K NEAREST NEIGHBORS
8454, Excellent, we are on the right path, but
1522, lets plot some of them
35661, Correlation Matrix
8592, Standardising numerical features
2761, Just ensuring the distribution of the data before and after filling the missing values remains the same. It's always good to have the same distribution before and after imputing
12626, Transform the features into numerical values
4833, MSZoning since RL is the most common values we are going to use mode to impute it
40177, Import libraries
38573, Choosing a model
29588, We ll be normalizing our images by default from now on so we ll write a function that does it for us which we can use whenever we need to renormalize an image
7619, Lasso
33090, Defining usefull functions score cross validation
20969, Fitting the ANN to the Training set
11611, Cabin: since there are too many missing values, I remove it from the dataframe
25371, Reshape
10715, Running Machine leanring python Chunk in R
36657, First off dilation monitors hits or contrasts to the pixels and adds an extra layer of pixels to the inner and outer boundaries of the shapes
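A minimal OpenCV dilation sketch (the toy binary image and 3x3 kernel are illustrative):

```python
import cv2
import numpy as np

img = np.zeros((8, 8), dtype=np.uint8)
img[3:5, 3:5] = 255                    # small white square
kernel = np.ones((3, 3), np.uint8)
dilated = cv2.dilate(img, kernel, iterations=1)  # grows the square by one pixel layer
```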
1401, Pclass vs Survived
20451, previous applications
3487, Construct the best model and fit it to the training set
27479, The labels were given as integers between 0 and 9
6514, Continuous Variables
26332, Data Preprocessing
42850, no dis for no disaster tweets
18762, The plot 3d function plots the 3D numpy array of CT Scans
13894, Data Preparation
19188, we need to encode the categorical columns so that we can feed them to our model for prediction; LabelEncoder is used for all the columns at once
15570, I have no idea what cabin T is supposed to mean so I set this to unknown
9476, This function creates a model and scores it using Stratified Cross Validation
33808, Baseline
5239, Numeric Features to be Treated as Categories
29016, Distribution of Embarked feature by Survived
25825, now there should only be a handful of categories per feature
25747, to an accuracy that's much better than chance. It doesn't always guess zero either; for size it guesses mostly. That's bad, so we set out to remove the correlation between shape and class from the data
1063, We try other kind of regressors such as XGBRegressor and ExtraTreesRegressor
9624, Test DataSet
5860, TODO Intro for K NN model
31006, We then reshape the samples according to the TensorFlow convention, which we chose previously using K
1713, Advanced Imputation Techniques
22535, Training and testing
37641, Handling time
13578, On the Boat Deck there were 6 rooms labeled as T U W X Y Z but only the T cabin is present in the dataset
19268, MLP on train and validation
20217, Split the data into training and validation sets
35831, Handling the Missing Values
8578, Splitting the Data and Training the Model
42282, Load test data
9350, Cross validation
19828, LeaveOneOut
4475, I use the equation SibSp + Parch + 1 (oneself) to calculate the total family size that the person travelled with on board
22158, The first pipeline
587, Submission
3483, make predictions and write to csv
172, At last Final step let s make our submission
19357, Categorical columns within the dataset
19516, Creating RDD with Partitions
16285, First 5 trees
16875, Sex Pclass VS Survival
24904, Confirmed COVID 19 Cases by country
29390, The last thing that we need to do is compile our CNN model. This is where we specify the loss function (we use categorical cross-entropy), the optimizer (we use Adam, so that the model's learning rate is automatically optimized), and, for easy evaluation, the metric we choose is accuracy
18697, plot the confusion matrix
41823, For the data augmentation I chose to
38215, Submit
27317, Rescaling features to bring them down between 0 and 1
32202, Preprocessing
35496, Train Test Split
26528, Light Gradient Boosting Binary Classifier
38628, Import Keras
14346, Machine learning logistic regression
35830, Rescaling Normalization Standardization
16704, Completing a categorical feature
22004, Label encoding
31508, We use the ANOVA test to determine the feature importance
27009, Fitting
5112, Predictions
38717, The discriminator also uses the same format, except the number of neurons will be different
2925, Spot Check Ensemble Methods
20088, Trend in Time Series
40025, I have created a dataset that covers some simple image statistics for train and test set images
36082, Loss
37790, Creating Training Generator
17984, we check the survival rate among both genders within the three ticket classes
16510, Data Modeling
21414, Relationships betweeen the sets
12900, Before we start modeling make sure we have no missing values on our training and test set
41009, a great idea which definitely helps adding relatives to the groups we identified so far
11039, Encapsulate all our feature cleaning and engineering
8763, Survival by Age
17904, Videos
8957, Label Encoding categorical features
36994, 9 of ordered products are previously ordered by customers
25497, Use the next code cell to label encode the data in X train and X valid
15179, Correlation heatmap
1082, Defining cross val score function for both train and test sets separately
39182, Extraindo features a partir das coordenadas
34795, Ridge Regression
4472, Similar to Cabin we created 2 columns leaving out Embarked Q to prevent collinearity
28514, OverallQual
24572, Observations
38160, We can also print a leaderboard with the training time in milliseconds of each model and the time it takes each model to predict each row in milliseconds
35658, Numeric Features
6685, Correlation Heat Map
14822, Name Title
36218, There are a lot of missing values here
4909, See who makes the Best Couple
26305, We dropped some columns with low correlation to SalePrice
14593, Observations
35442, Validation data
38011, All info about a single state
11380, Averaging the base models
38890, Initialise Adam parameters
403, XGB
6005, We can sort by data type, but we have to be careful: not everything that looks numeric is numeric data. It may be numeric data that is actually categorical, so we will have to do some manual sorting again later
6240, It appears as if solo travellers are at high risk
41262, now apply this to the data set
32731, HyperOpt
32601, Applied to Full Dataset
35663, Scatterplot
17426, And how many survived
4972, For age we can create equally spaced bins but for fare it makes more sense to divide into parts with same amount of individuals
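A minimal sketch of both strategies (column names follow the Titanic data; bin counts are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"Age": [2, 21, 35, 50, 70],
                   "Fare": [7.3, 8.0, 15.5, 52.0, 512.3]})
df["AgeBin"] = pd.cut(df["Age"], bins=4)   # equally spaced intervals
df["FareBin"] = pd.qcut(df["Fare"], q=4)   # equal numbers of individuals per bin
```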
12059, Factorizing
9267, Numerical data help better decide which rows to drop
9330, If we use GarageArea we get
16176, We can convert the categorical titles to ordinal
11604, LightGBM
31893, we implement our LGB Naive Bayes classifier
25686, System with SOCIAL DISTANCING
1572, we impute the null values of the Age column by filling in the mean value of the passenger s corresponding title and class
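A minimal sketch of this group-wise imputation (the Title/Pclass/Age column names are assumptions):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Title": ["Mr", "Mr", "Miss", "Miss"],
                   "Pclass": [1, 1, 3, 3],
                   "Age": [40.0, np.nan, 18.0, np.nan]})
# Fill each missing age with the mean age of passengers sharing title and class.
df["Age"] = df.groupby(["Title", "Pclass"])["Age"].transform(
    lambda s: s.fillna(s.mean()))
```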
960, Interactive feature importances via Plotly scatterplots
41441, vz is a tfidf matrix where
31312, The data for Store 1 is available from 2013 01 01 to 2015 07 31
14391, NOTICE the values of last newly added feature Title
24910, Confirmed COVID 19 cases Per day in India
25856, The commented-out code above took too much time
5311, Recursive Feature Elimination
1104, Random Forests Model
12304, XGBoost
511, AdaBoost
16569, Ensembling
7560, Prediction Time
27237, Predict using the best model
37442, Same here
33454, Reconstruct Train and Test sets
2219, Right Skewed Distribution Summary
37563, 1157 feature columns
35436, Getting the Train and Test sets for model to operate on
14489, now both frames are filled up; we are left with Cabin
42055, Correlation matrix
36018, Encoder
13999, Preprocessed data
2716, For choosing the most optimal hyperparameters we perform grid search
22225, Label Encoding and separation of our data set into train and test
6835, Label Encoding Ordinal Features
27346, item cnt day
4514, Final Imputation
10724, Converting categorical to numeric
7015, Rates the overall condition of the house
34869, Ouch, only 24% of our vocabulary has embeddings, making 21% of our data more or less useless. Let's have a look and start improving. For this we can easily look at the top OOV words
7248, Explaining Instance 4 2 3 4
40992, The apply function can also be applied to the time series
12287, Empirical rule
4731, The describe function in pandas is very handy in getting various summary statistics
19856, calculate the boundaries outside which sit the outliers assuming Age follows a Gaussian distribution
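A minimal sketch, assuming the usual three-sigma rule for a Gaussian (about 99.7% of values lie within the mean plus or minus 3 standard deviations):

```python
import pandas as pd

age = pd.Series([22, 38, 26, 35, 80, 2, 27])
lower = age.mean() - 3 * age.std()
upper = age.mean() + 3 * age.std()
outliers = age[(age < lower) | (age > upper)]
```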
12473, Age
22871, Normalization
35459, Visualize the skin cancer at torso
1182, Factorization
1530, Age Feature
41197, check if there is still any categorical column
24305, because we want to reduce overfitting, I use the data augmentation technique
16178, Converting a categorical feature
6509, Numerical Variables
42108, Evaluating Model
20097, Making new dataframe for feature engineering
40747, Second Model
23621, LightGBM model
1577, We also fill in the one missing value of Fare in our test set with the mean value of Fare from the training set
3735, Get a baseline
42402, Sarima helper function
10245, Go to Contents Menu
34037, Modeling using Ridgecv
9325, Put the rare categories together by using the documentation
8610, now we have our dataset ready we fit several regression models and predict the housing sale prices
14695, Name and Ticket can be removed from the dataset as these features do not provide additional information about a passenger s liklihood of survival
548, Review k fold cross validation
25172, Lets look how pre processing changed our question text
31781, Exploring missing values
25295, we can try to project it again into polar space
9103, There are 3 rows where this is the case so I am going to assume the data collector accidentally forgot to calculate the MasVnrArea
29447, visualizing the target Distribution
37914, Performance
13546, Gender Column
31896, here is the prediction plot for var 108
27540, Display time range for labels with mean line
9899, Ticket
31921, Model Architecture
9186, perform linear regression on all the data at once
36257, look at target feature first
6907, In this case I decided to map each Embarked value to a numerical value instead of creating dummies
41472, Ensure AgeFill does not contain any missing values
15536, Create training and dev sets
38309, Dataset balancing
25258, Train and Test dataset
27643, I will be doing some simple preprocessing to start out, to improve the model quality and speed up training
21650, Reshape a MultiIndex df unstack
40074, One Hot Encode Categorical
27114, start filling these null values
6582, Add relative column
35159, Experiment Dense layer size
13791, Hyperparameter Tuning
17343, Adaboost
35090, Accuracy on Training Set
31996, compare the ground truth and CV DataFrames
28066, we create a function that imputes the age column with the average age of the group of people having the same name title as theirs
697, If we apply this threshold to the probabilities for our testing dataset we find out however that none of them pass the threshold
21171, plotting training and validation loss
32970, Lets count the missing values in train and test
42016, Creating a new Dataframe with certain rows
13221, Train test split
39977, Describing Dataset
17040, SibSp and Parch are correlated with family size; we drop SibSp and Parch as discussed in the chapter
34454, CA 3
41326, We write the top level of the decision tree to a
15802, Selecting the top 3 classifiers for model prediction
43147, look at how many bounding boxes do we have for each image
6068, Sell and Miscellaneous
30651, OMG we have missing values
821, List of categorical features and their unique values
15697, Number of parents children aboard the Titanic vs Survived
24772, Final predictions
26869, CNN model definition
18379, RMSE
5327, Display heatmap of quantitative variables with a numerical variable as dense
42370, Make model XGBRegressor
9279, Submission
23067, Command Center enabled features
17948, Encode FamilySize
26655, Transform features in float
5334, Display dendrogram with many variables
15025, Fill NA
32765, Split to train and test
5130, Create Submission File
34793, The variability between the actual values and the predicted values is higher
6450, Hyperparameter Tuning
30673, Try to apply DistilBERT
15020, Explore the Features
4262, MsZoning
34389, At this stage I m still a bit puzzled
18899, We use Pandas a Python library to read the train
4434, Making Predictions
41772, Label Nodes and Show Connections
20536, Linear Regression with Ridge regularization
16977, We first scale our data since some features such as Age and Fare have higher values than the rest of the features
3768, Magic Weapon 2 Xgboost Classifier
20485, Credit sum limit AMT CREDIT SUM LIMIT
35386, KNN
23611, Submission
28570, BsmtFinSF1
21575, Split names into first and last name
2193, Encoding str to int
43369, Reshaping Array
4926, XGBoost
32177, All of these variables have a bump of frequency that matches the rising of the probability of making a transfer
8762, Plot Ages
26106, It s an important and hard lesson for our classifier
38088, We can get a better sense for one of these examples by visualising the image and looking at the label
33819, This submission should score about 0
19519, Setting Name of RDD
1550, First let s take a look at the summary of all the data
40619, And a GBM
40668, Ngram Analysis
32915, On Kaggle the kernel can run for at least an hour I can t train the model here
12193, More often than not you need to create transformers that do nothing while fitting the data and do a lot of things when they transform it
27385, Thresh 60
915, Normality And Transformation of Distributions
515, Creating Submission
13863, Ticket
41488, Take the decision trees and run it on the test data
19655, we define each entity or table of data
39847, AutoCorrelation Plot
24990, As a sanity check, verify that the number of features in train_X numerical matches the total number of numerical features
6311, Adaboost
13056, Support Vector Machines
1861, KNeighbors
5086, I have looked at the features and found a possible error in the test data The house was sold BEFORE it was built
34414, Number of characters in tweets
9976, Finally, prepare the submission
13398, Predict accuracy with different algorithms
5330, Display heatmap of quantitative variables with a numerical variable as dense, with custom gradient; similar to heatmap
16598, Describing all the Categorical Features
29031, Convert categoricals to numerics
28949, We finalize the model by training it on the entire training dataset now
834, Correlation Matrix 2 All features with strong correlation to SalePrice
39230, Demonstration how it works
36656, And finally bilateral filtering utilizes Gaussian filtering twice in order to preserve edge detail while also effectively removing noise
12691, Family information
36783, Accuracy
6144, Living Area bins
11108, Scale values
32135, How to get the positions of top n values from a numpy array
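One common approach uses np.argpartition, which avoids a full sort:

```python
import numpy as np

a = np.array([3, 9, 1, 7, 5])
n = 2
idx = np.argpartition(a, -n)[-n:]    # positions of the top-n values, unordered
idx = idx[np.argsort(a[idx])[::-1]]  # optional: order by value, descending
print(idx, a[idx])
```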
8559, Selecting Rows Based on Conditionals
37464, Prediction and Submission
8759, Survival Count
7364, Final prediction blending
28853, Date List
38663, Bivariate Analysis
37374, Random Forest
41656, Distribution of data for categorical features
36277, corr is the highest correlation in absolute numbers for Fare so we ll use Pclass again to impute the missing values
40333, Prepare Universal sentence encoder embeddings
36261, It s Time to look at the Age column
26675, Days employed
37874, Standardization
25678, Conclusion
8045, Embarked
11859, Fit the models
35579, Sale Prices
42159, 99% of Pool Quality data is missing
5616, Etc 3D
465, Making dummy features for each categorical column
31290, Problem solved
16349, Create new features Title FamilySize IsAlone AgePclass NameLength HasCabin
7258, Data Visualization
40631, Exploratory Data Analysis
18721, We shall use slice as our sequence of learning rates
32663, Categorical values must be quantified into numerical equivalents to be properly processed by machine learning algorithms
7096, As some algorithms such as KNN and SVM are sensitive to the scaling of the data here we also apply standard scaling to the data
35943, LogisticRegression
21095, Receiver Operating Characteristics
12258, Inference
8503, Observations
10198, Lets convert them into numerical form
24725, Feature Engineering
20255, Prepare the submission file
34437, Download data
22040, the next important variables from EDA are floor and max floor
33291, Scaler
11549, drop columns where the percentage of missingness is greater than 90%
35307, Set optimizer and loss function
12743, visualize those missing values in a heatmap
4401, Splitting the Dataset
21205, Zero initialization
21020, The dataset is from the competition where our job is to create a ML model to predict whether the test set tweets belong to a disaster or not in the form of 1 or 0
10822, double check it
23234, Write a useful function
27022, Submission
9684, Loading data
19522, Repartitions and Coalesce
26751, image in 3 dimensions
19073, Visually analyzing
15800, Since categorical features have been created from the features present in the dataset taking only the categorical for training the models
21419, Train the model predict etc
3777, Training
32216, Add total shop revenue per month to matrix df
38283, Imputing data in Garage related columns
15581, Submission says Your submission scored which is not an improvement of your best score Keep trying
15796, Most of the Passengers aboard were alone
13422, Data Transformations
13147, Title as Survival Function
29927, Hyperparameters versus Iteration
1789, Missing Train values
10106, fill NULL values
11383, we re ready to load the data
7024, If Central air conditioning is present
24386, look at which values have what percentage of missing data
34347, Create Final Model and Predict
11405, Building Your Model
23712, Extensive Feature Selection using Wrapper Methods
32156, How to fill in missing dates in an irregular series of numpy dates
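A minimal sketch using a daily datetime64 range (the example dates are illustrative):

```python
import numpy as np

dates = np.array(['2020-01-01', '2020-01-04', '2020-01-06'], dtype='datetime64[D]')
full = np.arange(dates.min(), dates.max() + 1)  # every day in the span
missing = np.setdiff1d(full, dates)             # the dates that were absent
```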
1635, Obtaining standardized dataset
5092, Load data
29328, Random forrest
27140, MSSubClass identifies the type of dwelling involved in the sale
29406, let s import our datasets both train and test
12618, combine SibSp and Parch features to create new one FamilyMem
42661, as in the univariate case we analyze correlations between missing values in different features
21590, Count the number of words in a pandas series
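A minimal sketch with pandas string methods:

```python
import pandas as pd

s = pd.Series(["hello world", "one two three", ""])
word_counts = s.str.split().str.len()  # 2, 3, 0
```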
9753, AgeRange
3761, KNN Regression
1129, Exploration of Passenger Class
297, Correlations
31923, One of the hyperparameters to tune is the learning rate and the decay
32196, Clean up some shop names and add city and category to shops df
28722, Items Sold by Shop Name
34518, Installments Payments
11896, Data Engineering transform all non numerical data into numerical processable ones and fill missing values
38296, Predict the test data
6843, Submission
37877, Alpha
16126, Create Submission File to Kaggle
24234, Time to predict classification using the model on the test set
16778, Grid Search CV
31216, How are numeric features related to SalePrice
38674, Age
14084, We fill the missing Age data with the mean age of passenger of each class
27659, Model training
13550, fill na s in Embarked
13584, Modelling Pipeline of models to find the algo that best fit our problem
32556, Target
39172, And to try the created CNN use the following code
11621, Preprocessing
34399, Split the data and labels
1727, Plot variables against Survived
290, The letter prefix might provide useful information for the model
12884, I think this warrants some more investigation
11406, Checking Underfitting and Overfitting
17560, do all the transformation needed
42918, Categories
16062, Correlation Matrix
31615, A good way to measure the performance of a classifier is to look at the confusion matrix
3543, the mean age is 29
7123, SibSp vs survived
3774, Embarked
19450, We load the dataset and verify the dimensions of the training and testing sets
25826, Truncated SVD on categorical data
6522, Replace the missing values in test set with median since there are outliers
970, Read our files
15377, See how the mean age differs across Pclass
18680, Creating Submission
34101, Statewise cases
40954, Assignment of Variable
36133, Predictions
23980, Predictions
34756, Correcting spellings
25501, For large datasets with many rows one hot encoding can greatly expand the size of the dataset
35900, fit our model to the training set
41609, we create a dictionary for the second part of our analysis called department time data in a similar manner that we did before
27045, Distribution of Diagnosis
38080, In the train data and test data there are no missing values
37395, please
26647, we have 1
27512, Rescale features
23682, Train the VAE
7971, Completing a numerical continuous feature
18273, BUILDING A RANDOM MODEL AND FINDING THE WORST CASE LOG LOSS
31247, Family Size
40020, dicom images and the image names can be found in the train and test info as well
10792, Excellent
5305, Bringing features onto the same scale
22055, Distribution of sentiments in train and test data
43264, Instantiate an object called dt from the DecisionTreeRegressor class
29850, prepare the testset
845, SGDRegressor
18627, Creating new Features
12409, Utilities
14352, Analyzing Features
17356, The perceptron is an algorithm for supervised learning of binary classifiers
8854, Interpreting the Model With Shapely Values
9293, Regression of survival on Age
11482, GarageYrBlt GarageType GarageCars GarageArea GarageQual GarageCond GarageFinish
4968, We can pick categories for names with at least 2 individuals
24639, Similar to Simple Linear Regression there is an equation for multiple independent variables to predict a target variable The equation is as follows
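The standard form of that equation, for a target $y$ and independent variables $x_1, \dots, x_n$:

$$\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_n x_n$$

where $\beta_0$ is the intercept and each $\beta_i$ is the coefficient learned for feature $x_i$.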
12424, We will be using the same training data for Backward Feature Selection
33807, Visualize New Variables
3010, Transforming values
23049, OK, now let's split the training and validation sets
15228, Drop Correlated Features
3491, Make predictions on the test set and output csv
23514, There are 7 elements in the class
1733, Pclass vs Embarked
2152, Functions
8132, Resizing and viewing
20676, The three most common loss functions are
13320, Analyze by visualizing data
34293, Train Convnet
37831, Test against the Test Data
43380, Attack class
21059, Show Attention
2083, Upon adding the feature Surname the CV score again increases to 0
784, SVR
11386, More people died than survived in the training set
25031, Network Evaluation
3283, ElasticNet
11982, We will use one-hot encoding to encode the rest of the categorical features
4822, Feature engineering
41332, We again save the top level of the t SNE decision tree to a
31943, Looks good
41940, Training the Model
33643, Below I impute missing records
29740, Training LightGBM model
9096, I am also interested in how the number of stories affects SalePrice
42554, Precision Recall Curve
25589, XGBoost
26981, Normally we would prefer to map pixel values to 0 1 to avoid slowing down the learning process
27258, have a look at the feature importances across all of the ensemble models
14254, Fare Features
18024, Only families from Pclass 3 died together
15601, Survival by Age Number of parch and Gender
34640, Splitting Data
14667, PCA
16728, by survival
18928, Relationship between numerical values with a categorical field
13587, Best params
31044, Pick Sentence
23521, Equivalence classes with mislabelling
27508, First, by the normal fit method
22895, Train and test have the same set of keywords
2578, BernoulliNB
6193, Submission 1 for Logisitc Regression without Hyperparameter Tuning
32690, We now build a CNN; we try a simple one consisting of 5 conv layers, one dense layer, and one output layer
26513, After training is done it s good to check accuracy on data that wasn t used in training
3278, Feature Engineering and Transformation
32564, Diff anatom site general challenge
19883, Introduction
27284, Char TFIDF
33668, Time
24851, Splitting my dataset with 90 for training and 10 for validation
42127, Data generator
18524, In order to make the optimizer converge faster and closer to the global minimum of the loss function, I used an annealing method for the learning rate
22436, Scatter plot with linear regression line of best fit
20552, Data preprocessing
18921, Testing Different Models
8258, Checking each feature s type to know which value to impute later
18441, Compare Models
32566, Diff Age approx
9049, KitchenQual
26942, Are the distributions making sense
13380, Imputation of missing values in Embarked
35309, Make predictions
36665, Exploratory Data Analysis EDA
10743, Check the relationship between SalePrice and predictor variables
316, Exploring
13558, Fare mean by Embarked
24995, Selecting best categorical variables
42270, As bedrooms go from 1 to 2, interest level increases, since the number of low decreases while the number of medium and high increases
1757, In the following chunk of code I use code from a very experienced fellow Kaggler. On top of that, since coding is an integral part of data science and I am not experienced with it, I make an attempt to break the code into easy-to-understand chunks (experienced coders can skip the explanation)
26027, In order to help my model better understand the data it is going to face, I am converting all the encoded variables into dummy variables, aka one-hot encoding
26561, Put all together
37534, Importing Keras Libraries
16264, Cabin
21666, Load Pretrained Models
9977, There are 1460 instances of training data and 1459 of test data
19012, Experiment 2
27848, Top 20 3 gram words in sincere and insincere questions
12073, Data Cleaning
1790, Missing Train values
32965, Baseline Submission
22674, Getting Things Ready
15393, See which columns have missing values in test data
38785, Multiply by frequency distribution
29890, fit to the whole training data
13865, Cabin
36205, Evaluating the BERT Variations
26737, Plotting sales over the months across the 3 categories
33871, Ridge
2947, Convert Categorical features into Dummy Variables
13573, Ticket feature
6315, Gradient Boosting Classifier
2203, now the gender column getting rid of the categorical strings and making new dummy columns
495, Majority of people who survived embarked from S
28316, Examine the bureau balance DataSet
38110, Model Training
4748, First we try to find the datetime variables, then plot and analyze them
18251, Feature with only 1 or 2 values
1571, This first function creates two separate columns a numeric column indicating the length of a passenger s Name field and a categorical column that extracts the passenger s title
14002, The good correlation between Embarked and Survived makes me wonder
35135, Does competition distance matter
30759, Build the benchmark for feature importance
6400, Pairplot
28765, Splitting data using sklearn
3719, Checking for missing values if any
27413, nn model The general methodology to build a Neural Network is to
27039, Gender vs Target
42811, Adversarial validation unfinished
28530, HalfBath
33704, I select the top 100 most sold items
6530, Stack Models
5971, Classifier Comparison
15374, Submission File
42848, Rest of the World
30755, Fixing sub samples
4043, First, let's take a quick look at the table we have
19153, Select the right class weight
40298, Questions Exploration
39712, Compare the similarity between cat vs
2499, Parch
23294, MasVnrArea MasVnrType have the same MissingValueRatio 8 so probably these houses don t have a masonry veneer We ll fill the missing values with None
24355, Evaluate the model
31021, Find and convert emoji to text
10725, Lets drop and combine some features now
24755, Label encoding
10871, Build a Name dataframe
6436, Statistics: helping hands for a Data Scientist
7498, Create output
20692, Running the example loads the MNIST dataset then summarizes the default train and test datasets
10851, Import models
224, Decision Tree
10817, The data looks accurate
32838, We are done with most of the feature engineering part
16600, How many missing values are there in our dataset
35904, Load Required Packages
23232, So far you've learned how to build pipelines with scikit-learn. For instance, the pipeline below uses SimpleImputer to replace missing values in the data before using RandomForestRegressor to train a random forest model to make predictions. We set the number of trees in the random forest model with the n_estimators parameter, and setting random_state ensures reproducibility
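The pipeline described above, as a sketch:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline

my_pipeline = Pipeline(steps=[
    ('preprocessor', SimpleImputer()),  # fill missing values first
    ('model', RandomForestRegressor(n_estimators=50, random_state=0)),
])
# my_pipeline.fit(X_train, y_train); preds = my_pipeline.predict(X_valid)
```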
8538, Feature to Feature Correlation
15721, Logistic regression
38617, Just to be sure let s check if there are any duplicated descriptions
20804, We create extra features in order to categorize data with and without a feature
16633, Class distribution
2200, we need to impute the Age column, basically filling in the blanks; I'm filling the blanks based on the passenger class means
6212, Gradient Boosting Classifier
14385, I am grouping age with following categories
16472, Ensemble Modelling
15256, Fill Null Values for Embarked Feature in Train Dataset
14900, Pclass Fare vs Survived
16112, Feature Selection
11157, MSZoning The general zoning classification RL is by far the most common value we can fill in missing values with RL
18381, And now we start building a new smaller dataset from twitter from scratch
40418, Number of Photos
40645, 2D Visualization using TSNE
14782, Possible derived features
42128, Model
6749, Drop Features have more missing values
868, Age continuous numerical to 8 bins
20486, Comparison of interval values with TARGET 1 and TARGET 0
5159, Linear Regression
42865, We encode our features
10894, since we have imputed the missing age values again divide the age variable into three groups child young and old
37331, Select the parameters for the second dropout
34490, Lets get the answer for the test data
37409, remove the features that have zero importance
1179, Incorrect values
43328, Detect and Instantiate TPU Distribution Strategy
8656, So far we have identified three columns that may be useful for predicting survival
7116, Train the data
28786, Most Common words in our Target Selected Text
29557, For this task we try to use LSTM network
33854, Analysis of ctc min cwc min csc min token sort ratio using pair plot
12011, SVR with linear kernel
37692, we want to train our neuron This means we want to make predictions closer to the true values
27593, We can also include the oov_token parameter in the Keras Tokenizer object, so that when the tokenizer finds words in the test set that are out of vocabulary, instead of skipping them it includes them as a token of our choosing; we just need to ensure our token does not resemble any other word in our vocabulary. However, since we ultimately embed our words as GloVe vectors, this step is useless here. But if you were using your own embeddings it could prove useful, so I include it below
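A minimal sketch of the oov_token behaviour (the token string and sentences are illustrative):

```python
from tensorflow.keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(num_words=10000, oov_token="<OOV>")
tokenizer.fit_on_texts(["a flood is coming"])
# 'tornado' was never seen, so it maps to the <OOV> index instead of vanishing.
print(tokenizer.texts_to_sequences(["a tornado is coming"]))
```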
6889, Another way to calculate these using groupby
32206, Feature Engineering
22866, Input Image
10846, Check now whether any missing values remain
10844, separate the categorical features and numerical features
8655, Create the cut points and label names lists to split the Age column into six categories
5610, Minimum Size
9870, Pclass Survived
12713, Everything that I want to carry forward to the machine learning stage looks ready
10997, Age Cabin and Embarked are the features that have missing values in the Train dataset
35671, Categorical features
35330, Encoding the target as usual
15293, Building and Training the Model
10334, Removing Outliers
27381, RMSE 1
27481, Another important method to improve generalization is augmentation
21153, check what random forest can do in this case
1364, Libraries
1653, Another nice categorical feature here with 2 categories But wait in tutorial for beginners part 1 we found that Sex was a very strong explanatory variable after all
141, Support Vector Machines SVM
33747, Stack more CNN layers
15036, Is there any discount for the old man Possibly not
671, LightGBM
19972, A simple model with just a Softmax classifier
41076, Remove empty comments
8394, Creating a normalized entity to cross through our main interest table
28082, This is the in sample accuracy which is generally higher than the out sample accuracy
29405, SAVE TEST AND TRAIN AS CLEAN
22157, Splitting the dataset to train valid sets
7119, Loading Data
37637, we make predictions on the test data and create a CSV submission file
38750, Model Building Evaluation
32037, we have the best parameters and the best estimator, which gives the highest mean cross-validated score. But we don't have the threshold for our classifier. Instead of using the best estimator, we make a train/validation split on train_X, train_Y and train and validate a new classifier on this new split with the best parameters found in grid search. The reason to do this is to compute the threshold for our classifier
14503, Cabin
3324, Explanation
12513, get to work using LassoCV
37898, Residual Plot
15753, Removing Pclass 3 and Embarked S
8424, Some Observations Regarding Data Quality
14597, Observations
20203, Check the missing percentage of missing data and drop those columns
9094, I want to have a binary column for sharing a floor
38689, Probability of melanoma with respect to Sex
24345, Reshape
12281, See the importances of features
1618, Feature Importance
28563, SalePrice is clearly affected by BsmtQual, with better quality meaning a higher price
13369, females have higher probability of survival than males
35578, It appears that Walmarts are closed on Christmas day
13099, Survival among Passenger Class Sex and Embarked
21781, We do some simple math to find out which block we should train on
18101, Batch Normalization becomes unstable with small batch sizes and that is why we use layers instead
41932, Discriminator Network
35083, Applying PCA and Transforming x train scaler and x test scaler to x train pca and x test pca respectively
25003, Preprocessing test images
19407, Lets predict for testing dataset
20778, Fitting a T SNE model to the dense embeddings and overlaying that with the target visuals we get
7836, SQLite and SQLAlchemy
2132, Once again we use this information to iterate a few more times on the various configurations
8520, Ridge Regression
36829, we use the classifier to label the evaluation set we created earlier
36831, Retraining
317, Feature Engineering
6121, We need more different zones Milord
11529, Gradient Boosting Regression
34644, Random Forest Classifier
2924, Support Vector Machines
31348, Submission To CSV
28478, New features based on the address
31553, Data Normalization
28510, The numerical columns have the following characteristics
16829, Checking VIFs
8943, Fixing Utilities
8079, Creating Dummies
18240, Hyper parameters and network initialization
12092, Apply the model on the test data and create submission
32134, How to replace all values greater than a given value to a given cutoff
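One common numpy idiom for this:

```python
import numpy as np

a = np.array([1, 14, 3, 27, 8])
capped = np.where(a > 10, 10, a)  # or equivalently: np.clip(a, None, 10)
```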
8329, The white elements are NaN
3262, Creating a table of columns with maximum missing values
42789, Word Frequency plot of sincere insincere questions
35765, This model needs improvement run cross validate on it
21469, We cannot do our transformations blindly
31043, Search
18420, save the oof prediction for further use
3269, Get percentage of these missing values in features
33705, Transposing all 100 items into columns
7254, Model parameters
27539, Display time range for labels
2051, GaussianNB Model
41750, Hyper parameter tuning of Keras models using Sklearn
2129, And the following in fold and out of fold predictions
20213, Handle null values or missing values
32776, Ensemble
31555, Linear Regression
23411, load and preprocess the data to test our network s possibilities
7809, Blend Models
34339, Categorical Features
6994, Month Sold
25223, LotFrontage Linear feet of street connected to property
3956, Create PorchSF Feature
24317, first impute the missing values of LotFrontage based on the median of LotArea and Neighborhood
42261, some entries do not have the date they joined the company
41408, AMT CREDIT
14456, Train Test Split
22476, Stacked area chart
19002, Prepare learning rate shrinkage
28526, LotFrontage
19596, Trend Analysis
10872, Build df train and df test data frames
38507, clean text function applies a first round of text cleaning techniques
40071, Looking at the heat plot for cross correlation
17406, Parameters
935, Prepare Data for Consumption
18312, not much correlation here
41071, Data preparation
2545, For this part we explore the architecture with just one hidden layer with several units
30671, LinearSVM
4277, GarageArea GarageCars
32819, Data Engineering
43149, Preprocessing Data for Input to RetinaNet
3552, There are many more males than females on the Titanic, but what about the survivors? As you might have guessed already, the survival ratio is much greater for females than for males. Isn't that obvious? As we say, les femmes d'abord
25373, Split training and valdiation set
42853, WordCloud Most frequent keywords in the tweets
3379, From here you re pretty much done In the last piece of code below we ll simply generate a csv file in the format that we can use to submit to the competition namely with two columns only for ID and the predicted SalePrice
36757, Visualizing Number of Images for each Digit
16887, Big Family
40878, Learning Curves
4049, Cabin and Ticket Analysis
2087, since we want to evaluate our model in order to be able to say how good or bad it can be in certain situations we need to create our test set
31543, replacing with NA due to lack of subject
17740, As a side note it s interesting that deck E contained 1st 2nd and 3rd class cabins
5857, The last step is to analyse the predicted variable in this case the Sale Price
31792, Construct model Some hacks to get gradients
13428, Run iterations for all the trained baseline models
5309, Feature selector that removes all low variance features
29040, Class label distribution
29542, Checking the metrics now
3863, Bonus Significant parts of these helper libraries are currently under development to be included in sklearn library
25898, Printing keywords
5999, Result
19814, BaseN Encoder
9794, Since this is a regression problem
31783, Exploring correlation of features between the train and test sets
2700, Based on the previous correlation heatmap, LotFrontage is highly correlated with LotArea
10243, Go to Contents Menu
24794, Modules
27333, Data description
6746, Count Plot
2310, Sklearn How to scale fields
4048, We can also dummify the sex column convert the categorical data to zeros and ones
21604, Reading JSON from the web into a df
6294, Naive Bayes
12782, Neural Network Layers
23482, MIN DF and MAX DF parameter
11143, Compare the r squared values for different functions
13310, Gaussian Naive Bayes
16501, Perceptron
28855, Plot
9477, This function tunes the hyperparameters of a model and scores it using Stratified Cross Validation
543, Binning for Age and Fare; convert Title to numerical
9838, Gaussian Naive Bayes
1706, Imputations Techniques for non Time Series Problems
7020, Refers to walkout or garden level walls
22164, Finished with your model and push it to prod
19179, Training Summary
20922, evaluation
3537, Missing Value Analysis
11057, Age versus survival
21422, LotArea
724, the mode is SBrkr
26515, Appendix
3864, Numerical Features pipeline
4977, IsAlone and LargeFamily
32355, Fitting
14753, Additional Variables
42258, Calculating the Hash Shape Mode Length and Ratio of each image
10451, Kernel Ridge Regression
12273, Submission
41541, It does look better slightly anyway
25764, PassengerID
6760, Checking Skewness for feature 3SsnPorch
7763, Linear Regression
6546, top 40 correlated columns after data preprocessing
31918, Augmenting Data
18363, Visualise the categorical features
6767, Logistic Regression
27166, For this section we create a separate dataset
6232, Split training validation test data
4340, Third class
31763, Create the product list for each order with filtering in pandas
17864, We prepare as well the second level classifier
880, train test split
36398, Different options
29612, Dataset explanation
1607, Frequency Encoding
30917, Find best threshold on training data
22394, Data Cleaning
4636, EDA for Categorical Variables
28603, OverallQual
29824, PreTrained Glove
2474, Mapping SEX and cleaning data dropping garbage
26821, Check now the distribution of the mean value per row in the train dataset, grouped by value of target.
37302, Bi grams
26033, Now that we have defined our transforms, we can load the train and test data with the relevant transforms applied.
461, Same for the categorical data
4357, We can also generate one new feature using YearRemodAdd feature
22684, Train and predict
3652, Looks good
8671, CatBoost model
8347, Just a quick look on variables we are dealing with
15845, Embarked
27079, We start by experimenting with LSA. This is effectively just a truncated singular value decomposition of a very high-rank, sparse document-term matrix, with only the r = n_topics largest singular values preserved.
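As a minimal sketch of that LSA step (the toy corpus and n_topics=2 are illustrative assumptions, not the kernel's actual data):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["cats and dogs", "dogs chase cats", "stocks fell today", "markets and stocks"]

# Build a sparse document-term matrix, then keep only the
# r = n_topics largest singular values via truncated SVD.
dtm = TfidfVectorizer().fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0)
topics = lsa.fit_transform(dtm)
print(topics.shape)  # (4, 2): each document projected onto 2 latent topics
```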
33448, RandomForestClassifier
41589, CORRECTLY CLASSIFIED FLOWER IMAGES
25363, Simple Model Logistic Regression
41046, we select at random the clusters that form the validation set
2192, Imputting missing values
10458, Pearson correlation coefficient
32280, Display distribution of a continuous variable
26047, Training the Model
16058, Here only age is used, with no relation to family; each passenger is considered alone, with his/her age.
20384, As we know, some words are repeated so rarely in our tweets that we should remove them from our bag of words to reduce dimensionality as much as possible.
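One common way to do this pruning is the min_df parameter of a vectorizer; a small hedged sketch (the toy tweets are made up):

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "forest fire near la ronge",
    "there is a fire in the building",
    "people were asked to evacuate",
    "evacuate now fire approaching",
]

# min_df=2 drops words that appear in fewer than 2 documents,
# shrinking the bag-of-words vocabulary and the feature dimensionality.
vectorizer = CountVectorizer(min_df=2)
X = vectorizer.fit_transform(corpus)
print(vectorizer.get_feature_names_out())  # only the recurring words remain
```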
42471, With SelectFromModel we can specify which prefit classifier to use and what the threshold is for the feature importances
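A minimal sketch of that pattern, assuming a random forest as the prefit classifier and an illustrative threshold of 0.05:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# prefit=True reuses the already fitted classifier; threshold keeps
# only the features whose importance exceeds the given value.
selector = SelectFromModel(clf, threshold=0.05, prefit=True)
X_reduced = selector.transform(X)
print(X.shape, "->", X_reduced.shape)
```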
40062, Visualization 43 categorical 38 numerical columns
32205, Add shop items and categories data onto matrix df
39284, Export aggregated dataset
13705, Here again it appears as if no cabin data is correlated with lower Fare value and decks B and C are correlated with higher Fare values
15, Hyper parameter Tuning
1290, A confusion matrix is a table that is often used to describe the performance of a classification model
21772, Last columns with missing values
30386, Preparing submission
42958, Ticket Feature
1947, Finding the correlation between the different features
9521, Support Vector Machine SVM
34866, Let's populate the vocabulary and display the first 5 elements and their counts.
37682, Test if saved model works w sample inference
20197, Check variance
4407, Cross Validation CV
26030, MODELING AND EVALUATION
12363, Utilities
39501, Target variable
28859, Normlize Scale Data
30084, K Nearest Neighbour KNN Algorithm
13196, Introducing family aboard no child couple and features related to family size
2671, Here we identify which features are highly correlated with other variables; including these features does not add any additional information to the machine learning model, so ideally we should exclude them all. The threshold can be changed to any number based on the business scenario. Below we exclude all the correlated features.
16526, LogisticRegression
38550, have a glance at new features
27248, Feature selection
32815, Well, for the training score we manage to arrive at 0
36344, Implement Backward Propagation
11686, Bin numerical data. You want to bin the numerical data because you have a range of ages and fares; however, there might be fluctuations in those numbers that don't reflect patterns in the data, which might be noise. That's why you'll put people that are within a certain range of age or fare in the same bin. You can do this by using the pandas function qcut to bin your numerical data.
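For instance, a small sketch of qcut on a toy age series (the bin count and labels are illustrative):

```python
import pandas as pd

ages = pd.Series([2, 21, 24, 30, 35, 40, 52, 63, 75])

# qcut splits the values into quantile-based bins, so each bin
# holds roughly the same number of passengers.
age_bins = pd.qcut(ages, q=4, labels=["child", "young", "middle", "senior"])
print(pd.DataFrame({"age": ages, "bin": age_bins}))
```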
39692, Text Preprocessing
28369, text
34638, We use the test data
13311, Perceptron
38276, check our predictions
27083, All that remains is to plot the clustered questions; included are the top three words in each cluster.
40175, Ensemble Performance
11192, Optimize
5394, Most of the missing values are in Age and Cabin columns
2984, XGBoost
41919, Since we ve got a pretty good model at this point we might want to save it so we can load it again later without training it from scratch
39216, Now we have a log-loss calculation, so that we're not totally flying blind.
17714, We are going to use our single models in the first layer and LogisticRegressor as metalearner
34167, Time series analysis
42365, Building the Feature Engineering Machine
30778, Score of trained model
11715, Logistic Regression
12172, Split the training dataset to understand in and out sample performance
7303, Observation
195, Gradient Boosting Regression
8926, Fence
28823, An easy way to get started with time series analysis is
1782, We define a function called Acc_score that gives the mean of all cross-validated scores through K-Fold CV.
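A minimal sketch of what such a helper might look like (the name acc_score and the fold count are assumptions, not the author's exact code):

```python
from sklearn.model_selection import KFold, cross_val_score

def acc_score(model, X, y, n_splits=5):
    # Mean accuracy across K folds; shuffle for a more robust estimate.
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    return cross_val_score(model, X, y, scoring="accuracy", cv=cv).mean()
```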
40415, we have data from April to June 2016 in our train set
40200, Confusion Matrix
23037, Department Analysis
8676, The Features and the Target variable
26911, Build the final model with the best parameters
12342, GarageFinish Interior finish of the garage
2651, Interesting
10013, Box Cox Transformation of highly skewed features
7345, the categorical columns that are important to predict the SalePrice are ExterQual BsmtQual and KitchenQual
21258, Fine Tuning and Using SVD Collabrative Filtering algorithm using Scikit Suprise
35813, Date features
1200, Lasso Regression L1 penalty
18164, Checking the character in question
30865, Label encoding Target variable
41123, Ventura County Average Absolute Log error
32562, Diff Target
20326, Section 5 Concluding Remarks
21815, Latitude Only
42526, As there are many countries for which the province/state column is NaN, I decided to fill it with the country name when it is NaN.
41281, Perform automated EDA using visualizeR
27279, If DAYS_EMPLOYED is a large positive number, it means the client is unemployed. Maybe extract those with a dummy variable; applying for a loan while unemployed lowers your approval.
2755, Understanding the distribution of Missing Data
34229, Now that we have our actual data array, we need to make some adjustments.
17421, LogisticRegression
26385, The hello world dataset MNIST released in 1999 contains images of handwritten digits
4685, Here also
41480, Drop the columns we won t use
3602, Optionally look at column descriptions
4290, TotalSquareFootage
3567, The higher the quality the better the selling price
24134, we can calulate F1 score for all models
16144, how titanic sank
36676, Tuning the vectorizer
22433, When you create a GridSpec, like 3 x 3, to place a plot into this grid you must use index slicing to define its span.
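For example, with matplotlib's GridSpec (an illustrative 3 x 3 layout):

```python
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

fig = plt.figure(figsize=(8, 6))
gs = GridSpec(3, 3, figure=fig)

ax1 = fig.add_subplot(gs[0, :])    # spans the whole first row
ax2 = fig.add_subplot(gs[1:, 0])   # spans rows 1-2 in column 0
ax3 = fig.add_subplot(gs[1:, 1:])  # remaining 2x2 block
plt.show()
```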
9036, If the skewness is between -0.5 and 0.5, the data are fairly symmetrical.
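This rule of thumb can be checked with scipy.stats.skew; a quick illustrative sketch:

```python
import numpy as np
from scipy.stats import skew

symmetric = np.random.normal(size=1000)
right_skewed = np.random.exponential(size=1000)

# Values between -0.5 and 0.5 indicate fairly symmetrical data.
print("normal:     ", skew(symmetric))
print("exponential:", skew(right_skewed))
```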
38254, Simple Mean Model
15141, Check the Survival Rate
30956, We have to pass in a set of hyperparameters to the cross validation so we use the default hyperparameters in LightGBM
30840, Crime Distribution over the years
35551, Blending
8502, Focusing on Sales Price
23024, I don't understand why the mean is NaN when using
1188, Creating features
9503, Import all required or necessary libraries
8732, Scale Sales Data
26428, The inspiration for this feature came from
9098, Basically price increases if the house is 2 stories or more
31417, Create the Estimator create the estimator
39007, Build the model and run the session
33877, Voting Regressor
18189, clean stop words and leave only the word stems
28217, Correlation matrix
8323, Most people are travelling with no siblings.
35935, A B C T are all 1st class
2989, Model Performance Plot
23296, GarageCond, GarageQual, GarageFinish and GarageType have the same missing-value ratio (81), so probably these houses don't have a garage. We'll fill the missing values with None.
13954, Load data
37214, Tokenize Training Text
22346, ML Modeling
31809, RPN layer
39002, Create placeholders to pass training data when you run the tensorflow session
2905, First Data Exploration
32387, First Step Creating Meta Submission
14526, Fare
43094, We run the model
39972, Model Data
31055, Subword
6088, Examine correlations between SalePrice and target
13121, Using subplots
7069, Here I used the other models not yet trained to be combined with the Stacking Regressor and make a powerful one using the Voting Regressor
10934, Let's find the degree centrality of all nodes of the directed graph g.
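A small sketch with networkx (the toy graph g is illustrative):

```python
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")])

# degree_centrality normalizes each node's degree by n-1,
# the maximum possible number of neighbours.
print(nx.degree_centrality(g))
print(nx.in_degree_centrality(g), nx.out_degree_centrality(g))
```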
38758, k Nearest Neighbors kNN
28006, Data Augmentation
19274, We need to convert our data from the dataframe into a matrix of inputs and target outputs
28942, Before preparing the model we can drop the below columns as they are not useful for the model
41416, Label encoder
7774, Pasting
5324, Display density of values with bubbles over latitude and longitude
21169, optimizer
13989, Read test data
43004, Data cleanning drop duplicate rows
24571, Observations
22183, ELECTRA span
12718, Always the most interesting, in my opinion anyway. The imputed age feature went down a treat with XGBoost, followed closely by the new Family Survival feature.
28015, Any of these rows reveals the vector of each document
34441, Submission
1816, As we discussed before there is a linear relationship between SalePrice and GrLivArea
4630, Find the count of missing values
41184, first analyze missing value counts in numerical columns in training data
7204, let s compute the RMSE on train dataset to evaluate or model error
28317, identifying Catergical and numerical variables
18852, Test and Submit
16689, find the relationship of survival between different Source stations
7553, ADA BOOSTING
15096, Classifiers
35947, RandomForest
32714, GRUs
6709, Find Features with one value
5400, Embarked is probably determined by, or correlated with, Fare and Pclass.
34704, Item active months
26352, Correlations increased after the outliers were removed
10089, Model Selection
7952, Tuning on weight constraints and dropout
11469, k Nearest Neighbors
59, Pytorch
28290, Some NN train settings
22410, There are some rows missing a city that I'll relabel.
32528, Predictions
11228, Let's take a look at passengers' survival status if they had friends or siblings/spouses (SibSp) on the same ticket number.
39435, The final submission
19701, Basic Data Analysis
28960, Temporal Variables Eg Datetime Variables
27907, Drop All Columns where NaN occurs
27993, RandomForestClassifier
11409, Using Categorical Data with One Hot Encoding
35893, Plot the evaluation metrics over epochs
16657, Fare
37524, Pclass Survived Age
40830, A lot of inferences that we have already covered could be verified using the following heatmap
31062, Positive lookbehind: succeeds if the passed non-consuming expression matches against the preceding input.
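A quick illustrative sketch of a positive lookbehind in Python's re module:

```python
import re

# (?<=US\$) is a positive lookbehind: the match succeeds only if the
# number is preceded by "US$", but "US$" itself is not consumed.
text = "Tickets cost US$25 for adults and 10 euro for kids."
print(re.findall(r"(?<=US\$)\d+", text))  # ['25']
```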
19540, Specify and Fit Model
40388, We explain repeat and batch together.
9189, We use the average age of the corresponding class to fill the missing passenger ages
34791, As the target variable is highly skewed, we try to transform it using either a log, square root or Box-Cox transformation.
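A hedged sketch comparing the three candidate transforms on a synthetic skewed target (the Box-Cox lambda of 0.15 is a common choice, not necessarily the one used here):

```python
import numpy as np
from scipy import stats
from scipy.special import boxcox1p

y = np.random.lognormal(mean=3.0, sigma=1.0, size=1000)  # skewed target

print("raw skew:     ", stats.skew(y))
print("log1p skew:   ", stats.skew(np.log1p(y)))
print("sqrt skew:    ", stats.skew(np.sqrt(y)))
print("box-cox skew: ", stats.skew(boxcox1p(y, 0.15)))
```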
6430, Clearly performance is better with Random Forest lets make final submission with RF model
22657, The Model
38000, let s make some plots
20050, The store names are the same, but they don't open and close at the same time; let's check if the store id is in the test data.
28013, GET YOUR HANDS DIRTY
17768, Sex Age SibSp Parch
38071, Vocabulary and text covering embeddings
38042, Conditional Probability
6541, now we are saperating categorical columns and numerical columns for filling missing values
1202, Xgboost
12493, predicting prices with ElasticNet Regressor
35493, Loading the Data Set
18198, Distribution of counts by product
20584, Encoding Categorical Data
16872, Survival Correlations
12123, Fence data description says NA means no fence
13396, Feature Scaling
13481, Resources
8822, Feature Engineering Age AgeGroup
27004, Splitting
31226, Features with positive values and maximum value between 10 20
40304, Q1 Q2 neighbor intersection count
30659, People never lie about the wreckage derailment and debris
42136, Predictions class distribution
13665, Cross Validation K fold
7347, The target variable is right skewed
21456, Modelling
25236, Since Lasso is wininng in predictions here are the coefficients
34751, Building model using Glove Embeddings
2777, CatBoost Regressor
32835, We have done the pre training of the model now we build our model using BERT
39306, XGBRegressor validation
22290, Scaling
12259, Interpretability
17557, Similarly for Age column
39012, VGG19 is a similar model architecure as VGG16 with three additional convolutional layers it consists of a total of 16 Convolution layers and 3 dense layers also stride 1
28876, Split Data
15838, Cabin
2005, Only 0
29161, MasVnrArea Fill with 0
32478, Build Demographic Based and Memory Based Similarity Matrices
34683, Including all possible combinations of unique shops items for each month in the train set
6584, The challenge is we have to use machine learning to create a model that predicts which passengers survived the Titanic shipwreck
42621, Clustering the data
4003, One solution to the latter drawback is to gradually decrease the learning rate
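One common way to implement this, sketched here with a Keras exponential decay schedule (the specific rates are illustrative assumptions):

```python
import tensorflow as tf

# The learning rate is multiplied by decay_rate every
# decay_steps optimizer steps, gradually shrinking the updates.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01,
    decay_steps=1000,
    decay_rate=0.9,
)
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)
```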
14842, However when it came down to modeling these fare categories did not help at all as they underfit quite substantially
13248, PassengerId is distinct so delete
23485, Creating a Baseline Model using Countvectorizer
41064, One hot encoding of label
41356, Generally houses with high level basements have higher prices than others
29086, Embed building id
25520, CatdogNet 16
2285, Precision and Recall
12168, Class Weight
15382, Notice what happens when you replace the missing fare by the mean fare instead of the mean fare per person.
23417, Training the full model for 100 epochs leads to 99
34036, Transformation of target variable
17895, Lets do some predictions now
17792, verify the average survival ratio for the passengers with the aggregated titles and sex
20510, import the data
19597, shop revenue trend
16380, Changing the DataType of Age and Fare to int
5007, Bivariate Analysis
37136, Binary Classification
32754, Monthly Cash Data
25394, Evaluate the model Model with data augmentation
24339, But if we average only the two best models, we get a better cross-validation score.
4913, Concat the Training and the Test Data to a Complete Dataframe
4866, Preprocessing
40295, I am just setting up my network using what is called a Convolutional Neural Network.
36229, To read more about CNNs you can refer to this
1147, We are going to do same thing to the test data
27655, Data Visualization
40336, About leak
7459, the Age column needs to be treated slightly differently as this is a continuous numerical column
5691, Model Building
3927, Support Vector Machine
1797, SalePrice vs TotalBsmtSF
32847, take a look at the raw data
38131, Range of fare was as follows
2046, SVC Model
5587, Re Check for missing data
20116, Because we use 12 as the max lag feature, we need to drop the first 12 months.
16014, Age
1183, Skew transformation features
42449, Example to extract all nominal variables that are not dropped
40793, Check null values
11867, Checking skewness of all the numeric features and logarithm it if more than
12692, Group information
10905, Grid Search for Bagging Classifier
3023, We check the data after normalisation: are there any remaining skewed values?
41043, we extract layer 4 VGG features from the images
31261, Import libraries
40189, Using my notebook
9355, Converting a categorical feature
2698, Since MasVnrArea only have 23 missing values we can replace them with the mean of the column
11962, I have added 0 wherever required while creating extra features and adding them to the test set. This is because during one-hot encoding there were some values present in the train set but not in the test set; thus we create those features, fill them with 0, and concat them to the test set.
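One compact way to express that alignment is pandas reindex; a minimal sketch on toy frames (not the notebook's actual columns):

```python
import pandas as pd

train = pd.get_dummies(pd.DataFrame({"city": ["A", "B", "C"]}))
test = pd.get_dummies(pd.DataFrame({"city": ["A", "B"]}))

# reindex adds the train-only dummy columns to the test set,
# filling the newly created columns with 0 instead of NaN.
test = test.reindex(columns=train.columns, fill_value=0)
print(test)
```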
30429, Evaluation
21418, split the variables one more time
41179, Submission
37983, Model2 Simple CNN but with tunned hyperparameters
17699, DECISION TREE
4522, Going with Random Forest Regressor
31073, Delaing with MasVnrArea
21632, Pandas display options
29119, Time to make some predictions
37722, Constant and Duplicate features handling
7693, Ridge regression
40280, making 5 fold divisions on the dataset csv
7843, let s try to get the title from name
3758, seperate the data as it was
33717, FEATURE CABIN
4944, However, notice that this is just a demonstration and we did not in fact change our data; we deal with outliers later using StandardScaler/RobustScaler.
25824, PCA on categorical variables
28591, TotRmsAbvGrd
1599, After filling the missing values in Age Embarked Fare and Deck features there is no missing value left in both training and test set
39256, Distribution of target value
15284, Average Fare for each Pclass
40187, There are problems in processing
5662, Convert feature variable Sex into dummy variable
9691, Checking for Linearity
1758, Attention
30099, Dataset visualizations
929, Optimize Support Vector Regression
23263, Categorical Continuous
32575, Conditional Domain
2452, Adding quadratic terms improves the local score but behaves unstably on the LB.
26592, TASK EXPLORE MERGED DATASET
10038, The captain of the ship did not survive.
15429, first turn the gender values in numerical labels
21428, Make change to categorical column
17424, we are with the same
22364, Again three duplicates are in Public LB subset
35451, Imports
29593, Confusingly, instead of just telling the initialization function which non-linearity we want to use and having it calculate the gain for us, we have to tell it which gain we want to use.
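A minimal PyTorch sketch of that two-step dance (layer sizes are illustrative):

```python
import torch.nn as nn

layer = nn.Linear(128, 64)

# We compute the gain for our non-linearity ourselves and
# pass it to the initializer explicitly.
gain = nn.init.calculate_gain("relu")
nn.init.xavier_uniform_(layer.weight, gain=gain)
```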
42327, Importing all the necessary keras library
15432, First let s start with the basic
14959, Impute missing fare values
15448, Dropping Name Ticket SibSp Parch FamSize and Cabin because they were already used for feature engineering or may be useless
20964, Importing the Keras libraries and packages
31860, we can create the bottleneck features of our training set
5886, NN
1422, To check how good each model is I m gonna split dataset to train and test dataset and use Accuracy Score from sklearn
4441, saleprice
39389, Compute feature importance
35142, Compile the model and fit to training data
40857, Categorical andNumerical Variable
10915, Encode and extract dummies from categorical features
33265, Evaluate Model
5020, Transforms
7416, TansformerMixin can be used to define a custom transformer dealing with heterogeneous data types which basically contains two major parts
24358, The most important errors are also the most intriguing.
38165, TPOT Tree Based Pipeline Optimization Tool
32186, For submissions
23459, Hour
11861, Final Model and Submission
13687, Fixing the missing data
26564, This rotation separates one of the histograms but collapses all the others.
31824, Python imbalanced learn module
23273, Ticket
29054, Contrast stretching
7591, Scatterplot colors SalePrice vs all SF and OverallQual
39079, Simple Blend of Predictions
25889, The Flesch Kincaid Grade Level
38984, 3 Models Vs 1 Model
14962, Save cleaned version
12623, Random Forest
32091, How to extract items that satisfy a given condition from 1D array
6305, XGBoost
17396, Parch Children Parents wise Survival probability
20045, Check missing values and outliers from data
13869, One hot encoding
311, Missing values
32072, Parameters used
36723, Here is what we want to modify.
25658, The largest components have over 100 nodes and 1000 edges
32227, One Hot Encoding
27363, adding category id to mean encode it
37452, This code is lifted from kernel baseline starter kernelTraining
11124, Training Evaluation and Prediction
7120, Basic summary statistics about the numerical data
1360, The perceptron is an algorithm for supervised learning of binary classifiers
6472, Evaluate ensemble methods
9344, Build model and predit
6443, Outlier
6922, Check Accuracy
24594, DATA VISUALIZATION
40870, Optimize Kernel Ridge
37673, Confirm that the train set doesn t have any overlaps with the test and validation sets
31424, Subcase $[x_{\text{start}}, x_{\text{end}}] \cap [x'_{\text{start}}, x'_{\text{end}}] = \varnothing$
12720, The league table is out and we have a winner in Title
33701, Range Slider
32036, GridSearchCV also returns best parameters and best score The output below is formatted to 4 decimal digits
27543, Display heatmap with shape size by count
38576, Random Forest looks best
31436, it s time for you to define and fit a model for your data
20262, Number of iteration is 1001
3390, Important step as Data Scientist is to work with Data
15541, Average All Predictions
26447, Regarding the false predictions, the majority cluster around the decision boundary: cases where the information on the passenger was not conclusive enough to make a decisive prediction. With further feature engineering, or by improving our model, it might be possible to predict some of these borderline cases correctly.
10087, Dummy variables
39191, Encoding the labels
2919, Split into train and test dataset
11824, Let's find the highly correlated variables, as it smells like something unexpected is cooking.
39103, Count Vectorizers for the data
30481, Passing a df directly to sklearn
37882, Ridge Linear Regression
23979, This part is taken from abhishek great kernel
26293, More Training data
27411, backward propagation
36560, Compare Models
34798, The variability between the actual values and the predicted values is lesser than the linear regression
1420, I m cutting Age variable to 5 equal intervals
1985, Training and Predicting
34636, Train test split
13798, Analyze by describing data
39750, Random Forest Model
19327, The distribution of data across the classes of digits is pretty much the same
8240, Seeing missing value percentages per column
37492, Reuse Embeddings from TFHub
38835, Visualize predictions
37757, You may wonder what happens if we call next past the end. For this there is a built-in exception type called StopIteration that is automatically raised once the generator stops yielding.
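A quick illustrative sketch:

```python
def countdown():
    yield 3
    yield 2
    yield 1

gen = countdown()
print(next(gen), next(gen), next(gen))  # 3 2 1
try:
    next(gen)  # the generator is exhausted
except StopIteration:
    print("generator stopped yielding")
```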
29520, Creating a data cleaning function
10449, LASSO Regression
4671, We have 37 numerical features and 43 categorical features. Since we have a lot of categorical features and a regression problem, we have to pay attention to multicollinearity after the categorical encoding.
25270, Compile model
42106, Data Augmentation
39678, Using 1 we select the final image produced by our training process we can use 0 1 etc to view images from earlier in the training process
36873, Keras 3 hidden layers
34842, Early Stop
6737, MSSubClass Vs SalePrice
5047, The higher the quality the wider the price range Poor very poor quality buildings have almost the same price range while very good to excellent properties have a much wider range of possible sale prices
42107, Fitting the model
26190, Lasso Regression
8300, AdaBoost with Random Forest
4161, Important
2447, Pairplots
37297, Constructing Custom Stop Word Lists
6801, XGBoost prediction
42087, we pre compute the weights of Resnet34 and fit the model
13874, Exporting the data to submit
26633, Validation accuracy and loss
6008, For feature selection like this, we cannot do it automatically with a machine, because features like these require human judgment to separate them.
33257, Model Selection
24839, use CNN
10610, check completeness of our Train and Test data
5372, Recursive Feature Elimination RFE
29029, We replace missing values based on Title after feature engineering with their mean
13991, Convert Sex and Embarked to Numerical values
27383, Feature selection
19380, Fourth try
5407, In terms of age most of the people under 40 survived
20971, Import the Test Data
22051, As just one row (314) of the data contained missing values, dropping that row in place is OK.
19063, For testing purposes we only use the first 100 images in the test set
4428, It is clear that the skewness really improved and that be enough for now
4223, One Hot Encoding
29887, Resume
38589, Submission File
43276, Generating the model output
21089, Determine outliers in dataset
33273, MixUp data augmentation
24914, Comparing a group of countries with similar evolution Germany Spain France and the US
14465, Back to Evaluate the Model
30949, Define dataset class
35624, Experiment 1
8153, Ridge
17898, Here is the final table ready for predictions
805, Some useful functions
41255, Add the first layer
2236, Bivariate Analysis Detecting outliers through visualizations
14088, Test Data
32171, ANALYSING THE TYPES OF GRAPHS
34662, Item count per day distribution
5097, Apply a deep feature synthesis (DFS) function that generates new features by automatically applying suitable aggregations. Higher depth values stack more primitives.
7632, blend 3 gscv Ridge gscv Lasso and gscv ElaNet
22867, Convolution Layer
42306, K Nearest Neighbors
4911, Analyze others as well
528, But survival rate alone is not good, because its uncertainty depends on the number of samples.
1634, Converting categorical data to dummies
41861, Tokenization and Features Engineering
29879, See my post about this issue: Punctuation marks repetition in incorrectly classified text (getting-started discussion 166248).
26663, POS CASH balance
2299, Pandas Take a look at the column names of all the fields
19830, M estimator
28182, Text Classification
18388, Calculate the selected text and score for the validation set
2825, fitting the model to the training set
38171, The arguments used in structured data classification class
39968, More passengers embarking from Cherbourg survived than Queenstown and Southampton
33779, Plotting the Train and Validation Curves
10563, Create Estimator and Apply Cross Validation
11293, look at a correlation heat map of the variables we have so far
41078, LSTM
35377, Dropping Less Important Features
23264, Missing Value Treatment
38164, Saving the Leader Model
29173, Check for skewness of features
16107, Create FareBand
34405, Load the test data
40201, Classification Report
15009, If we encode sex and look at the correlation with Survived we confirm that these variables are highly correlated
3279, Apply log tranformation to target variable and Transform categorical features values into dummies
11258, The predictions on the validation data will be stacked together to create the train set for the final model.
26555, Split the data into training data and validation data for validation of accuracy purpose
1302, Observations
5659, Basic info of Test data
29723, All the models follow a similar trend for Validation Accuracy
144, I admire working with decision trees because of the potential and basics they provide towards building a more complex model like Random Forest
29158, MiscFeature Fill with None
28597, Architectural Structural
1297, Skewed data makes it difficult for a model to find a proper pattern in the data; that's why we convert skewed data into a normal (Gaussian) distribution.
11051, Parameters of the transformers were already tuned using gridsearchCV
28948, We can check feature importance below
29145, Decision Tree visualisation
32597, Distribution of all Numeric Hyperparameters
16114, Please note that the accuracy score is generated based on our training dataset
4798, We have created flags for different feature of the house to check if it is available or not
36622, Encoding train labels
30184, Most learning algorithms are sensitive to feature scaling.
35933, Embarked
30846, Crime Distribution of Districts over the Year
12930, Visualizations about training set
11246, Method from the house price predictions notebook Enhanced House Price Predictions
3426, Make indicators for imputed values
13619, Test Set
8749, Train Scores
222, Libraries and Data
2446, Spearman correlation is better to work with in this case because it picks up relationships between variables even when they are nonlinear
18931, Relationship between variables with respective to time
19816, Sum Encoder
10165, Box Plots
16935, First stupid models and evaluation
15917, Identify Roomates by Ticket
25434, Normalization
749, AutoEncoders to the rescue
12017, AdaBoost is also a boosting ensemble model, but it is based on feedback from the previous model: it uses the same base model at each step, giving more priority to the errors made at the previous step.
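A minimal sklearn sketch of AdaBoost on synthetic data (by default the base learner is a depth-1 decision tree, refit at every boosting step with re-weighted samples):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=300, random_state=0)

# Misclassified samples get larger weights, so later learners
# focus on the errors made at the previous step.
ada = AdaBoostClassifier(n_estimators=100, random_state=0)
print(ada.fit(X, y).score(X, y))
```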
10561, Handle Missing Data for Categorical Data
2146, In other words houses with that particular exterior
36052, EDA
24389, Fill the other values with None
1614, Conclusion
32115, How to compute the mean, median and standard deviation of a numpy array
41348, There are lots of features correlated with SalePrice
28641, WoodDeckSF
16871, Pure survival rate
38889, Defining the Neural Network
14131, Cabin
1239, Training the models
2279, Feature Importance
30461, Creation of a tokenize s function permitting to automatically tokenize our train and test set
27997, Funny Combine
15509, There is no corresponding value for Cabin so let s look at the relation between Fare and the three features
40941, Outlier s Check
36433, Submission
33851, Univariate analysis of feature word common
38985, Reading Data
37034, Does shipping depend on price?
26343, Variables associated with SalePrice
40026, Test image statistics
4221, Normalization
28698, Training
38557, Looks like there are no missing values in the dataframe
39139, Regression with Gradient Descent
14108, Highely correlated columns both negative and positive correlation
20698, Running the example fits the model and saves it to file with the name model
29172, Split back to train and test
10232, Here also Age is missing let s fill in the similar way how we did it for training data
21478, Test Data
27506, Building the Convolutional Neural Network
24878, Now we should have a nice dictionary of titles and their average ages.
3726, Uncomment the code below to generate the CSV file.
510, Random Forest
36581, let s take age into account
32749, Function to Convert Data Types
38789, Merge these new predictions back into the full prediction set
42216, A third convolutional layer will now be added, this time with a larger filter size combined with a smaller kernel size.
32034, So mean_test and std_test all have similar lengths.
33085, Label Encoding
35669, Filling Missing Values sec
26405, Most passengers travel as singles
10078, Let's prepare our first submission. Data preprocessed: checked. Outliers excluded: checked. Model built: checked. Final step: use our model to make predictions on the test data set.
3465, Here s the correlation heatmap for the predictors in the training set
32187, Convolutional architecture Conv Conv Pool Dropout Flatten Hidden Dropout Output
29155, BsmtFinSF1 BsmtFinSF2 BsmtUnfSF TotalBsmtSF BsmtFullBath and BsmtHalfBath Fill with 0s
35198, Modelling with Linear Regression
66, After loading the necessary modules we need to import the datasets
863, Passenger Class and Sex
42753, We are including 13 features out of 16 categorical features into our Embedding
10406, View correlation of data
7820, Stacking
39701, Comparing the output of Lemmatization on non POS Tagging and POS Tagging output
8249, Random Example
34933, Best number Truncated SVD is
36060, Encoding
36119, Create some new features
22456, Density curves with histograms
28781, I was not able to plot a KDE plot for neutral tweets because most of the values for the difference in number of words were zero.
27531, Display the variability of data and used on graphs to indicate the error
37785, Visualizing sample of the training dataset
42772, Trying Random Forrest
19126, Scaling
20498, CSV Files
19589, shop and item category id
6494, Training and Testing the Model
38257, we have 7613 tweets in the train set and 3263 tweets in the test set
22174, We have 4 categorical features in our data
11675, It looks as though those that paid more had a higher chance of surviving
31703, Here we drop some unnecessary features that had their use in feature engineering or are not needed at all. Obviously it's subjective, but I feel they don't add much to the model. Then we one-hot encode the remaining categorical data, so everything is prepared for the modelling.
20313, The following table depicts the percentage of these goods with respect to the other top 8 in each cluster.
4599, Fireplace
27423, Top LB Scores
29717, Numerical and Categorical Features are processed differently
3574, Visualization Object Variable
20805, Some features are the wrong type so we convert them to the right type
41546, I think the model was struggling to classify the 1s and 7s due to rotation, so I removed rotation.
2180, Data preparation
5896, label encoding
819, Find columns with strong correlation to target
38950, Validation Strategy
5711, SaleType: fill in again with the most frequent value, which is WD.
13037, Title
1311, Observations
25831, EDA ON TWEETS
18741, MAX_NB_WORDS is a constant which indicates the maximum number of words that should be present.
2515, Decision Tree
15229, Label Encoding
19921, Properties Dataset
30843, Lets form a pie chart to plot the data
3144, Data Visualization
33528, Below is the unpreprocessed version, just for comparison.
37795, Saving the scores to the submission CSV
18719, save our model s weights
3394, SalePrice Correlations
3988, Imputer with mean strategy
13339, we can group the letters with small value counts based on their survival rate
594, We learn
13586, Running the HyperOpt to get the best params
41648, Enable Parallel Processing
21529, Scale the dates and remove duplicates in order to get clearer graphs
37155, We have applied max min normalization to the values of the filters to make it easy to visualise the filters
34705, Shop item active months
14508, Visualisations
7616, fit_intercept : boolean, optional, default True
14848, Looking at the distribution of the titles it might be convenient to move the really low frequency ones into bigger groups
11818, SalePrice and GrLivArea
14778, Women are more likely to survive shipwrecks
9149, I took the average of BsmtFinType2 values within 1 SD around the BsmtFinSF2 value
14510, Passenger Class
28342, Analysis based on NAME CONTRACT TYPE
15219, Model XGBoost
16726, fare
34790, Feature Engineering
13387, Make additional variable travel alone
6158, Make a Catboost cross validation
15538, Create submission file and submit
11220, Show new predictions
19271, CNN LSTM on train and validation
24801, RNN
14281, Missing value identification handling
10930, Directed graph
7025, Electrical system
23214, important to us are
19656, Relationships
5942, create our LGBMRegressor Model
36041, Plotting the Confirmed vs Fatalities trend for top 8 countries
32306, Remove outliers
6933, After this we can pick out features which have one dominant value in the whole sample
34660, Filling it with the mean price of this particular item
2281, Training random forest again
39763, Here we ll try to find wich topics appear more often in sincere and insincere questions
7262, Parch Feature
27846, Predict and save to csv
23851, Up to this point I was in the top 20 on the private leaderboard, but after hyperparameter tuning I landed in the top 2.
16970, localise those outliers
230, Model and Accuracy
20658, look for data which is useless
1323, Which features are mixed data types
2803, display missing Display missing values as a pandas dataframe
20459, Number of family members of client
41982, Use len to check the length of the dataframe.
16032, Same as Cabin we create dummies values
2326, Using RandomSearch we can find the optimal parameters for Random forest
8868, look at our regplot for GrLivArea vs SalePrice again
27752, Common stopwords in tweets
24259, 3 types of deck 1 with 15 passengers 2 to 6 and 7 to 8
32606, Corralation between features variables
27308, Display images
40848, Outliers Treatment
17858, Train the first level models
7650, label encode some categorical variables
38827, Define MLP model
16572, Random Forest Classifier
7614, Preprocessing Scalers
21658, Dealing with missing values NaN
14591, Observation
42129, Train top layers
14104, Boxplot
18642, here again we have 27734 missing values
40043, both groups look somehow far away from the others
8880, Feature Engineering
42123, Predict
14102, Histogram with hue
16467, Spearman s
18608, Parameters are learned by fitting a model to the data
11965, First, find the columns in the dataset which contain null values.
913, LotFrontage remains to be imputed because
14374, The majority of the passengers embarked from S, and almost all of the passengers who embarked from Q were of Pclass 3; this was also a reason for the lower survival rate of passengers embarked from S and Q compared to C.
3931, Save submission
11109, Model Testing Only Numerical Features
41794, we are done with our analysis
32848, Time period of the dataset
38255, Predictions
18037, Make predictions
5127, xgBoost Classifier
2880, Using Xgboost for the Classifier
3539, Categorical Features
2473, Making AGE BINS
9517, Sex
32672, Outlier removal
38777, Save predictions before small improvements
12051, Decision Tree Regressor
7397, I split the analysis based on the different types of variables
35880, Derive some time related features
40086, Adding missing columns with zero values
6901, Cabin
6580, Log Transformation
36289, Linear Regression
24028, It is also obvious that different categories and shops have different seasonalities; let's look at that first.
21790, Missing data
29741, Number of Kfolds
37712, Set rest of the things
27962, Load packages
27908, Remove Null Value Columns
19973, The model needs to know what input shape it should expect
2537, To rescale our data we use the fonction MinMaxScaler of Scikit learn
34766, Creating Submission file
20482, Credit overdue CREDIT DAY OVERDUE
7854, Scientific ish Feature Analysis to Improve Random Forest Regressors
26852, Meta Feature Engineering
12968, Pclass Sex and Survived
34043, Check if there are any missing values or typos
499, Sex Feature
21793, Support Vector Machine
43315, Age is normally distributed
42299, In order to avoid the overfitting problem, we need to artificially expand our handwritten digit dataset.
11055, There are many ways of filling missing values; we just fill in the median of the values for now.
22804, Look in detail at the numerical features that need to be dropped, as they do not contribute to primary education.
35786, Use another Stacking function from mlxtend
5733, Stacking Averaged models Score
18833, Ensemble prediction
23697, Confusion matrix doesn t look so bad
8902, Random Forest
2791, Tune model
18837, Visualizing the Eigenvalues
23936, Text exploration
35846, Lets create prediction on RF Model
1866, Best Model ElasticNet
35130, Adding additional features that records the avg
38163, lets check the performance in our test set
18454, Converting Text Data
18690, create our learner by specifying
26892, Include numerical columns One hot Encoding
29026, look for missing values
12323, We judge these as outliers and remove them.
4435, lets check the loss plots
43372, Defining Convolution and max pool function
1679, SpiderMan: Ah, interesting! A numerical feature with big values.
36970, The Accuracy of this model on kaggle leaderboard Quite Reasonable Score for so much HardWork
25297, I really want to find features that generalize well but this is just random noise
29148, Take a Peek at the Dataset
1594, When I googled Stone, Mrs George Nelson (Martha Evelyn), I found that she embarked from S (Southampton) with her maid Amelie Icard, on this page: Martha Evelyn Stone, Titanic Survivor (titanica.org/titanic-survivor/martha-evelyn-stone.html).
36256, Hehe null values spotted
31266, The below diagram illustrates these graphs side by side
2340, Introduction to Feature Importance Graphic
40161, All store types follow the same trend but at different scales depending on the presence of the promotion Promo and StoreType itself
23810, Loading ML Packages
225, Library and Data
21571, Combine the small categories into a single category named Others using frequencies
18197, distribution of returns by product
12648, Expand some of the Test Features
12613, replace NaN by 0
31268, The below diagram illustrates these graphs side by side
28197, Lemmatization
11038, Choosing the estimators
36066, Fit Model
140, Gaussian Naive Bayes
1343, We can not remove the AgeBand feature
21923, Support Vector Regression
20185, Pandas profiling Report analysis
24323, Label Encoding three Year features
11366, Since area related features are very important to determine house prices we add one more feature which is the total area of basement first and second floor areas of each house
23657, Rolling Average Price vs Time storewise
13908, How many people in your training set survived the disaster with the Titanic
13994, Impute Age using median of columns SibSp Parch and Pclass
29443, One important thing we have to check is whether or not our test and training set are from the same dataset
32474, Data Subsetting
24911, For a day-to-day track of COVID-19 cases and deaths, please refer to my other notebook.
9422, Pie Chart
2801, describe Calculates statistics and information about a data set Information displayed are shapes size number of categorical numeric date features missing values dtypes of objects etc
5526, Estimate missing Age Data
8530, Electrical
3900, Leave One Out Encoding
29893, Fit the prediction model again using the new weights
25995, Viewing Tables
34266, Multi Dimensional Sliding Window
33334, FEATURE 7 RATIO OF CASH VS CARD SWIPES
19451, Data Preprocessing
15500, Sex
19570, Preparing for training
27464, Distribution of target variable
14581, Age
37643, Street address and Display address
13952, Find Missing Value
12049, Modeling
5444, For simplicity lets choose only a few features
38843, Mostly 1 Story houses followed by 2 Story houses
33706, Gantt Chart
20119, Data Preparation
23520, The text after the cleaning processing looks like this
7103, let s do grid search for some algorithms
37369, Predictive Modeling
34967, Assigning 1 to females and 0 to males
31254, Modeling
15479, We drop Parch, SibSp and FamilySize, as their correlation coefficients are lower than IsAlone's and they create noise.
39861, we can look at how the prediction values are spread
40019, File structure and dicom images
35651, Light boost
10933, For the degree centrality measure, the normalized interpretation is really intuitive.
10527, Missing Values
39866, Label Encoding using LabelEncoder
41495, There are 52 duplicated rows
4330, Marital status does not apply for those who are not of legal age
19823, Define a binning function for categorical independent variables
15803, center Decision Boundary Visualization center
26264, Predicting over test set
21667, WBF Ensemble methods
1252, Fill missing values
19267, Comparing models
8484, Linear Regression
34717, Data preparation
10537, We discussed some numerical features which looked like categorical features.
15949, Splitting the train data
16577, Visualize data
8777, Survival by Embarked
15545, Processing NaNs
30151, Modeling
9674, For this baseline model we are going to
11669, Visual Exploratory Data Analysis EDA
38146, ML BOX
12897, def mapSex
24939, The lognormal distribution is suitable for describing salaries, prices of securities, urban population, number of comments on articles on the internet, etc.
10631, KNeighborsClassifier
28825, Example
2080, Indeed, upon adding the feature Boy, the CV score increases to 0
15504, Ticket
2741, Checking for Missing values numerically
35376, Inference Model
13486, LOAD DATA
309, finally the submission
4167, Reciprocal transformation
13678, Women had a much higher chance of survival than men
20519, analyze the relationship of SalePrice with other numerical variables
6339, Numerical features
42247, Addressing outliers
19448, Compared to our first model adding an additional layer did not significantly improve the accuracy from our previous model
25263, GrayScale Images
26582, The images tend to be dogs in cat like poses
25399, COUNT OF EXAMPLES PER DIGIT
10218, It's clear that female passengers with the lowest fares also survived the disaster, and passengers with the highest fares survived irrespective of gender; this supports our theory that socio-economic status played an important role in survival.
20500, Display Duplicates
4437, Prepare Test table
2464, Using Pearson correlation our returned coefficient values vary between 1 and 1
2154, Numbers are crucial to set goals to make sound business decisions and to obtain money from investors
25753, Removing color correlations
37539, ANN PYTORCH MODEL
22371, K Nearest Neighbors
40292, Nice
21373, Normalization
34935, Drop columns with high correlation
39426, use one hot encoding for Embarked since those are nominal variables
7923, To correct these features, since some of them have high skewness, we're going to use a generalisation of the np.log1p transform (Box-Cox).
6770, Submission
28429, Item info
42280, Model
1196, fit our first model
7729, These features are not much related to the SalePrice so we ll drop them
24109, Analysing and Saving our Model
42764, Age
8986, I am also interested in comparing PUD homes verses not
5080, Several Kagglers' regressions (top 4 on the leaderboard) apply a Box-Cox transformation to skewed numerical features. Cool idea, worth pursuing too.
22621, Some helper functions to facilitate training and give a nice overview of the 2 dimensional latent space are defined
35363, Creating evaluation set
31262, Load the data
29771, We identify the predicted class for each image by selecting the column with the highest predicted value
13672, Here train.columns returns all the column labels in the train DataFrame, and we then use .values to get an easy-to-print numpy ndarray.
14309, Creating the Model
13714, 1st METHOD THE ONE I GENERALLY DO
6377, Let's calculate the range, variance and standard deviation, and find and visualize the quartiles.
13278, Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference: Wikipedia.
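A minimal sklearn sketch on synthetic data (illustrative, not the notebook's exact setup):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

# predict_proba returns the probabilities estimated by the logistic
# (sigmoid) function; predict thresholds them at 0.5.
print(model.predict_proba(X[:3]))
print(model.predict(X[:3]))
```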
15117, Logistic Regression Model
35454, We cannot fill this diagnosis column because it may affect other variables too.
3884, Numeric Attributes
21344, Here is my optimal parameter for LightGBM
5429, Electrical
40022, load an example
39452, checking missing data in bureau balance
42991, After we find TF IDF scores we convert each question to a weighted average of word2vec vectors by these scores
25893, Linsear Write Formula
17702, MODEL COMPARISON
39296, TARGET ENCODED FEATURES
3600, Average
211, Model and Accuracy
34175, Visualizing Layer 2
23647, Run 5 Fold Training
34396, Training Models
3541, Categorical Feature Exploration
22440, Marginal Boxplot
6650, Categorize sex to numeric variable
14647, Separate train and test data
38853, Training a model is like teaching a kid
19440, Creating Train Function
36130, In our example we have images with 98
4902, Let's have the training and testing IDs saved in a dataframe for future reference.
24948, We must not forget that this is not a silver bullet again it can make the performance worse
36604, Competition Submission
13731, OHE features Sex Embarked Title Pclass new
26378, Images That Confuse the Network
26759, Cleanup
14938, Missing Values
22028, Clusters
16616, Categorical Variable
15599, Survival by Age Port of Embarkation and Gender
23985, We take the log, as logs are used to counter skewness towards large values, i.e. cases in which one or a few points are much larger than the bulk of the data.
21, RandomizedSearchCV Linear SVR
19904, Lag Feature are being created
16601, Realtion Between Variables And Survived
24814, data engineering
40680, Let's visualize what these cluster centers look like in the original high-dimensional space.
28216, PPS matrix of correlation of non linear relations between features
18102, We train all layers in the network
31293, we create a function that calculates the period of a data series
34619, Multilayer Perceptron
37215, good coverage but the top missing words all have contractions
35056, Making predictions using Solution 1
25885, Histogram plots of number of words in train and test sets
31695, This is how we're going to fix most of the missing data.
25469, Plot graph of cost
23583, Make predictions
32717, Transformers
26, GridSearchCV ElasticNet
34180, Interpreting CNN Model 1
35148, Experiment Number of convolution layers
23426, First let s check tweets indicating real disaster
30114, The learning rate determines how quickly or how slowly you want to update the weights
20610, Cabin
33573, Download and Import Dependencies
27491, MODEL TRAINING
16099, Sex Feature
8151, Random Forest Regressor Model
2683, Univariate roc auc for Regression
7076, Extract Title from Name
43098, create the embedding matrix where each row corresponds to a word
21383, Our goal is to avoid overfitting
18745, First let's remove rows with null values in the columns Age, SibSp, Parch and Fare.
14910, Refill the missing values for new Embarked in both training and testing dataset
20859, Test
31855, Add month and days in a month
32303, Display heatmap by count
8005, SVM Linear
38289, Dealing with nulls in MSZoning
20496, Train Test Image Names
8002, Try to boost with gradient method
14330, The Ticket column is used to create two new columns Ticket Letter which indicates the first letter of each ticket and Ticket Length which indicates the length of the Ticket field
11437, Ordinal Variables Mapping values according to a correct order
29849, Getting dummy other categorical features
20934, Make a prediction about the test labels
16830, Metrics beyond simply accuracy
27233, Choose type for each feature
18089, look at the most yellow images
36720, Prepare the Dataset
14301, Applying Feature Engineering
29839, Association rules function
26294, Data Preprocessing
17525, We can further divide the cabin category by simply extracting the first letter
12348, BsmtExposure Refers to walkout or garden level walls
36610, Labelling the test data
28039, Similar to the process we followed with CountVectorizer, we need to tokenize the text and convert it to lists of integers.
20553, Accelerator initialization
34357, Our model is having trouble identifying the people with class 1
4052, Age Analysis A Simple Approach
40677, try also for K 16
27367, adding the year and month
28264, We shall use a strip plot along with the box plot, as a box plot alone does not give us a clear idea of the distribution of data points within the box, increasing the chance of losing some relevant, interesting patterns between two features.
15280, Survival by Age of passangers
29910, Distribution of Scores
39016, Replace null age values with the average age
11808, We are ready to fit our models
7043, Proximity to main road or railroad
21818, Bathrooms
29386, 2nd convolutional layer: number of filters n2 = 32, filter dimensions 5x5; we use the ReLU function as the activation function.
42944, Predicting
7118, Importing Librarires
35609, Predicting
9713, Check homoscedasticity and uniform distribution of residuals against fitted values
5000, Here s the smallest house
40297, Target Variable Exploration
23511, There are 2 elements in the class
26518, Renta spread in each Cod prov
35573, Combined Sales over Time by Type
16984, XGB Classifier
26438, First of all we define the function create_model, which gives us the freedom to try different model architectures by setting the respective hyperparameters.
5510, Feature Selection
42796, Getting the best threshold based on validation sample
18600, There is something else we can do with data augmentation: use it at inference time.
37567, Missing values
42627, Difference between Lockdown Date and First Confirmed Fatality
11718, SVM
12063, Exploratory Data Analysis
26560, Get the parameters
19538, train time series df could be saved for later use as it is the basis of a lot of manipulations
38124, Check for data type via
19164, Yes/no feature for whether an item description is present
26881, Create Submission File for approach 2
22914, We then pick up the 1133 selected features to do Grid Search CV to find optimal hyperparameters
27253, XValidation
4479, We use cross validation to test our models
23603, The iterator returns a yield of a TaggedDocument every time the Doc2Vec
13111, Decision Trees with AdaBoost
9991, PoolQC data description says NA means No Pool That make sense given the huge ratio of missing value 99 and majority of houses have no Pool at all in general
25980, Holdout Prediction
41256, Add the remaining layers
7990, Dropping very null attributes
215, Gaussian NB
9907, Ensemble Modeling
15049, A large family size may lead to a big burden in unexpected situations.
41679, lets also take a look at some images from the test set
10170, This indicates data is in equal proportion
26887, Create Submission File for approach 4
25733, Train our Network
30315, Hugging Face Transformers tokenizers must be prepared in your Kaggle dataset to be used in an offline notebook.
26464, We first split our training data into training and validation set
38178, Tune a Model
13035, Pclass
6895, Age
2013, In our previous example we could tell that our categories don t follow a particular order
20840, we ll demonstrate window functions in pandas to calculate rolling quantities
33476, Implementing the SIR model
16086, Sex vs Survival
6371, Averaging Regressors
35113, Training Evaluation
6071, Taking care of the truly missing data
27986, Partial Dependence Plots
33320, Prepare dataset for training model
36614, Calculate covariance matrix S dxd
8701, Lastly checking if any null value still remains
21207, He initialization
31626, LSTM models
5238, Imputing Missing Values
14420, go to top of section prep
4623, Correlations between Attributes
1841, Log Transform SalePrice
42392, How do sales differ by department
20530, Fix skewed features
13867, Extract new feature after mapping
16547, Here s how you can fill missing ages using the Pclass column as reference
23468, Using different models for registered
24565, Total number of products by seniority
13768, take a break with interractive plots and come back to our good old seaborn So how many passenger survived
26064, we ll plot the outputs of the second hidden layer
19251, Find out the time gap between the last day of the training set and the last day of the test set; this will be our lag, the number of days that need to be forecast.
19751, Model
21729, we are going to plot the price distributions of the top 10 categories excluding their outliers
10698, Training set
27425, Distribution of Scores over time
7653, convert categorical variables to dummy variables
38500, The text and selected text column have one row missing each
25798, The Pareto principle
21174, Displaying original Image
828, Convert categorical columns to numerical
21499, Load training and testing csv files containing image filenames and corresponding labels
19182, Generating submission
4012, you can use our trained model for forecasts
37762, Technique 8: Do not use the + operator for strings
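A quick illustrative sketch of why str.join is preferred when building a string from many pieces:

```python
# Concatenating with + creates a new string on every iteration
# (O(n^2) worst case); str.join allocates once (O(n)).
parts = [str(i) for i in range(10000)]

slow = ""
for p in parts:
    slow += p          # repeated reallocation

fast = "".join(parts)  # single pass
assert slow == fast
```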
20195, I have tested for some but you can for other features
36580, We merge this individual app usage data frame back into the gender age data and determine the most frequently used apps separately for men and women
41650, Building Stack Model
8470, Separate data for modeling
3868, According to this, most of these attributes can be converted to scores.
11977, fill all others with 0
23637, Wiki News FastText Embeddings
17659, Observations
30654, At least bioterror bioterrorism annihilated annihilation and blaze blazing mean the same
26628, Compile the model
32173, REVERSED FEATURES EXAMPLE
10423, LGBM
16710, Correlations with each feature
32014, For the Name column we assign a new category to all rows: names containing Miss are assigned Miss as the new category, and the same is done for Mrs, Ms, Master and Mr. To differentiate Mr and Mrs we use the dot in the substring search; these abbreviations end with a dot most of the time, but there is one row with Mr without a dot, which we incorporate in code as "Mr " or "Mr.".
7130, Age
28497, Upsss
27557, Second attempt use an OOV token
17285, You can select several rows of dataframe by indexing
24573, Observations
34646, Predicting For Test Data
24951, Train set
35660, Categorical Features
22783, Lineplot with rangetool
30581, Applying Operations to another dataframe
3150, model accuracy improve by 1 with randomforest
34684, Adding test shops items last month
26899, Create Submission File for approach 8
32068, visualize the transformed variables by plotting the first three principal components
26703, Plotting Sales Ratio across the 10 stores
16289, Public LB
866, Embarked Pclass and Sex
34350, PyTorch Fast ai
22159, We assume that
24994, Selecting best numerical variables
27345, removing high and negative prices
3272, Fill these BsmtCond null values with BsmtQual and others with None as they have no Basement at all
42003, Multi Column filtering
21766, Renta missing values
11006, I am going to plot histograms for the Age attribute to find which age group survived the most.
7738, feature creation
3157, The factorize function converts categorical features into ordinal numbers
40168, The next thing is to check for the presence of a trend in the series.
29439, Location
14718, AND THE WINNER IS
40441, Plotting Loss of the traning model
19854, Outlier Detection Removal using IQR Inter Quartile Range
42803, Our target variable is spread across a few orders of magnitude, so it is more suitable to work with the log10 of this value
13403, Visualize feature scores
39192, Encoding other categorical features
20201, Numerical Features Data correctness
19613, Interaction features
43011, Feature Assembly
7403, After we apply the log transformation to LotArea the skewness is reduced a lot but it is still not a normal distribution based on the test
17865, We fit the model
15031, Embarked
23933, Price distribution
37996, Here we have information about item characteristics and sales for each day
10919, Displaying nodes
4602, SUMMARY
19579, Categories Analysis
27030, The smaller the prob1 the more likely a sample is positive
23235, Test different parameter values
41422, Deploy the chosen model on test set
27020, Making the Submission
41121, Los Angeles Average Absolute Log error
292, The majority of passengers with an assigned Cabin survived
41942, We ll also define a helper function to save a batch of generated images to disk at the end of every epoch
7048, Roof material
30369, Test Imageio
11509, Kernel Ridge Regression
33754, Adding categorical features
41608, Predict the test images
4484, Support vector classifier using RBF kernel
33282, Data Merger
15356, Here we have a couple of plots of the interaction between some of our features; on the side of each plot we have a scale of feature importance
35674, Mapping Ordinal Features
5259, In the next series of experiments we are going to train a number of RF models that use only the top 50 features, as ranked by the RFE feature-selection algorithm
24710, Train The Gaussian Model also known as PCA
30544, Feature Engineering Previous Applications
39500, The number of rows and columns we have in the application train csv
42137, MNIST
12835, As we created new features from existing ones, we remove the originals
5451, Sklearn
32960, cont1 and cont10 cont9 and cont6 cont10 and cont6 are highly correlated
43403, I am doing a stratified split and using only 25% of the data
9685, Exploratory data analysis
936, Load Data Modelling Libraries
15573, Ticket group survivors feature
3902, Topics covered in this tutorial
22811, Summary Total
6465, Pipeline for the categorical attributes
43245, Splitting data for training and testing
20285, The survival rate for children is pretty good
22777, Plot a graph to trace model performance
6902, Family Size
40823, Time to merge tables
36202, Predictions
11904, Model training
40778, Script for adding augmented images to dataset using keras ImageDataGenerator
42842, US
2016, Fixing skewed features
19379, LinearRegression array 15195
10227, As this column consists of only two values, let's encode it with 1 for female and 0 for male. We could use a one-hot encoder as well, but for starters let's avoid that, as we have a very simple column to encode
5069, Cool, much better! Already at a decent rank on the leaderboard with just one feature
19933, Parch
3641, Filling in missing values in train set for Embarked
19317, Evaluation prediction and analysis
2636, check unique values for each column
2742, Missing values are frequently indicated by out-of-range entries: perhaps a negative number in a numeric field that is normally only positive, or a 0 in a numeric field that can never normally be 0
31090, BsmtFinSF1
39004, Implement the forward propagation for the model LINEAR RELU LINEAR RELU LINEAR RELU LINEAR SIGMOID
38665, Splitting the Dataset
21394, Creating Train and test dataset
18064, BernoulliNB Model
10128, Logistic Regression
20345, We can inspect the ImageDataBunch as
32622, Token normalization
38947, Reading in the dataset
1827, Elastic Net
4166, Logarithmic transformation
38659, Embarked Ratio
17672, Women have a higher chance of survival
18744, Vanishing Gradients Problem
40490, Best
14493, Logistic Regression score increased by 1
29021, Check the correlations between our numeric features and the target
33647, There are a few columns with different sets of categories between the train and test sets
36335, Build the model
26953, Import necessary libraries
24054, Dropping features with one value Utilities Street PoolQC
17803, Similarly we map Age to 5 main segments labeled from 0 to 4
3026, Separating the training and the testing set
4555, Filling value for MasVnrType and MSZoning with most repeated values
43034, Reshape the Data
17538, Encode some features
43274, Making predictions on the test data
29841, Numerical data
28088, Train
38826, Define the PyTorch dataloaders
18713, save our model s weights
26586, TASK IMPORT SALES TRAINING DATA
12456, we take care of BsmtHalfBath and BsmtFullBath
19404, Training the Model
21158, we fit the model on all training data we have
36044, Cleansing the data set
17355, In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong independence assumptions between the features
29973, Training corpus using BertWordPieceTokenizer method
4770, The Cabin column is dropped because there are a lot of NAs present in it
32694, now generate a submission file according to the instructions
13859, With the number of siblings/spouses and the number of children/parents, we can create a new feature called Family Size
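A minimal sketch of that feature (the toy frame below is hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"SibSp": [1, 0, 3], "Parch": [0, 0, 2]})

# FamilySize = siblings/spouses + parents/children + the passenger themself.
df["FamilySize"] = df["SibSp"] + df["Parch"] + 1
df["IsAlone"] = (df["FamilySize"] == 1).astype(int)
print(df)
```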
32741, FS by the SelectFromModel with Lasso
41738, We start with frozen weights and then unfreeze them for some more tuning
4665, Correlation matrix in Visulization Form
13598, Ordinal Encoding
37102, Univariate Selection
18461, The following code uses some tools from keras to Tokenize the questions text ie assign numbers to every word
9595, Finding out people with age greater than 70 in the dataset
1669, We have to transform our categorical to numeric to feed them in our models
15711, Gender by Fare Categories vs Survived
8069, Outlier
2669, Correlation
32832, Pre training BERT
3733, True there are missing values in the data and a quick look at the data reveals that they are all in the Age feature
20041, we can test our model
7040, Flatness of the property
13996, Impute Fare with it s mean
31295, Plotting the macro data
40937, Start Searching
15436, let s start having a look at ensemble learners
42969, compare the accuracies of each model
38140, Plot the cumulative explained variance as a function of the number of components
18344, Age of house sold
8792, Skewness
10209, Try to understand each feature in the training data set
29692, Predict Put input values throgh model and get output
36111, Handling Missing Values
7636, Blend Ridge XGB
21603, Fixing SettingWithCopyWarning when changing columns using loc
26622, Images samples
32608, Categorical Ordinal Features Exploration
6758, Test set
11614, Cut unwanted feature
34617, One Hot Encoding
30471, Create polynomial features
15112, In preparation for our modeling we convert some of the categorical variables into numbers
785, Kernel Ridge
19588, item category id
28324, Identifying the missing values in the credit card balance dataset
7780, Predict Test Data
22909, Logistic Regression
6925, Predicting house prices using the Linear Regression model
8556, Navigating DataFrames
2262, Missing Data
8813, For each passenger I'll just try to create a new feature called Deck, with the first letter of the Cabin as its value
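A sketch of that transformation, assuming a pandas Cabin column with NaNs; the 'U' (unknown) placeholder for missing cabins is my own choice, not the notebook's:

```python
import pandas as pd

df = pd.DataFrame({"Cabin": ["C85", None, "E46", "B28", None]})

# Deck = first letter of the cabin; missing cabins get a placeholder 'U'.
df["Deck"] = df["Cabin"].str[0].fillna("U")
print(df)
```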
21229, The following cell returns the number of images we have in each dataset
4078, Relationship with categorical features
20054, Create Lag Features and Mean Encodings
18074, plot some examples with small number of spikes per image
35809, Simple train dataset
1523, And finally lets look at their distribution
18637, We can now convert the field to dtype float and then get the counts
34336, Simple understanding about the data
2010, Imputing Missing Values
27509, I know this is the wrong method of collecting data from a Keras model, but this is also research on the data
37659, Flatten predictions array to get 1 D array for submission
8295, Bagging oob score True
37658, ImageDataGenerator again
10035, Univariate analysis
34722, LGBM feature importances
23239, Data Splitting
21750, Average Voting
40881, Advanced Ensemble Methods
21210, We need to normalize the values and need to bring the mean and std to zero
933, Retrain and Predict Using Best Hyperparameters
34621, Building our first model
30254, we can train our model with preset parameters and best nrounds value we had figured out by now
36420, Dropping features with huge missing values
17741, Filling in NaNs for variables with high coverage
20468, Goods price
26695, Listing the available files
37726, Feature Outlier Analysis
23654, Merging the data with calendar real dates
11833, MasVnrArea is categorical in the data, but it shouldn't be
3630, check whether or not these added features are correlated to the SalePrice
30267, You can use the precision-recall chart to select the best threshold value for your specific task
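One way to do this with scikit-learn; the labels and scores below are hypothetical, and maximizing F1 is just one common threshold criterion, not necessarily the notebook's:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Hypothetical validation labels and predicted probabilities.
y_true = np.array([0, 0, 1, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9])

precision, recall, thresholds = precision_recall_curve(y_true, y_score)

# One common choice: pick the threshold that maximizes F1.
f1 = 2 * precision * recall / (precision + recall + 1e-12)
best = thresholds[np.argmax(f1[:-1])]  # thresholds has one entry fewer
print("best threshold:", best)
```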
15518, And then we still need to bin the data into different Age intervals for the same reason as Fare
1354, Model predict and solve
544, double check for missing values
8924, MiscFeature
23796, Drop Id
8710, GRADIENT BOOSTING
7993, Analys labels using p value
30973, Random Search
31387, This is an awesome function I created that preprocesses the data; it does the following
835, Check for Multicollinearity
23039, Check whether the transaction is right
32711, Making a basic LSTM model LSTM stands for Long Short Term Memory networks
879, sklearn StandardScaler
11252, AdaBoost kinda sucks
22905, We try to distinguish disaster and non disaster tweets
23449, Weather
11107, Handling Null values
21850, LSTM Long Short Term Memory RNNs
31423, Case x start leq x end
24933, This transformation preserves the distance between points which is important for algorithms that estimate distance
35253, The difference between the length of words and Jaccard in the same plot would tell us how it is biased
25665, steps
3739, An advantage of logistic regression is that it s easily interpretable
11414, Use Case 6 California Biking Radial Bar Chart
13027, Parch
43289, An R of 0
19452, Building a 3 layer Neural Network
32813, Level 3 Logistic Regression
26945, The plot makes more sense if we remove that data point
12475, Cabin
5158, Machine Learning Models
30291, Adding some derived features
4763, The value zero doesn't allow us to do log transformations
30379, Predict on Test Set
21032, Single Predictive Power Score
18346, Quality Scores
17567, Modeling Data
23880, explore the latitude and longitude variable to begin with
14577, There are some missing values in cabins Age and Embarked in the datasets
16172, Correlating categorical and numerical features
32386, Meta Feature Importances
29004, How to create a submission csv
13131, have a look at the count distribution for Embarked
7376, Merging datasets using surname codes name codes and age
13101, Train Test split
25160, Percentage of Similar question and Non Similar Question
8764, Age by group every 10 year old
26520, Distribution of product by Cod prov
8043, Fare
34030, How to treat categorical variable
42610, Train Steps
2137, Tuning XGBoost
25006, Pulling Items Shops Feature
1750, Comparing the KDE plot for the age of those who perished before imputation against the KDE plot for the age of those who perished after imputation
2712, PCA is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance
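A minimal scikit-learn sketch of that decomposition; the random data and the choice of 3 components are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))        # stand-in for a multivariate dataset

pca = PCA(n_components=3)             # keep the 3 highest-variance directions
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                # (100, 3)
print(pca.explained_variance_ratio_)  # variance explained per component
```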
15817, From this we can infer that women of class 1 and class 2 survived most and nearly all men of class 1 and class 2 died
678, For additional insight we compare the feature importance output of all the classifiers for which it exists
15355, 2D Partial Dependence Plots
21859, Adding Noise
17265, Fare Feature
27942, Train on the full training data
14449, go to top of section engr2
23142, Findings: a large number of the passengers who survived had no siblings or spouse aboard, followed by passengers with a spouse or siblings. Percentage-wise, passengers with a spouse or siblings had the best chance of survival, while passengers with the most siblings or spouses aboard all died
31608, Spliting Train and Test sets
40146, In this first section we go through the train and store data handle missing values and create new features for further analysis
31571, If you want to use data augmentation, you have to make a list using torchvision.transforms.Compose
24328, By using a pipeline you can quickly experiment with different feature combinations
6816, Exploratory Data Analysis and Data Cleaning are two essential steps before we start to develop Machine Learning Models
41476, Feature Family Size
22671, Determining Topics
500, Women are more likely to survive than men
2501, Fare Continous Feature
24524, Almost all observations have come from non-employees (N)
13493, Cabin
11912, Age
34353, We define our learner s loss function
7965, Another pre processing step is to transform categorical features into numerical features
28370, As one can expect from any textual data, there are all sorts of messy things going on here
33028, Visualizing the model s performance
6360, Ridge regression
8426, Fill Missing Values of Garage Features
35832, Now let's add another category named Unknown for every categorical feature that has missing values
34053, Import XGBoost and create the DMatrices
27969, Reducing for test data set
14636, let s fill the missing values for the Fare
11102, Split Categorical and Numeric Features
37310, Using RFE Recurisve Feature Elimination
18179, Dataset visualizations
1939, Neighbourhood
37431, Common punctuation in tweets
38976, Data loading and batching
26105, My second shot was to take tweets from another source. Luckily there is a Kaggle dataset containing m tweets that can be found here
8111, Cabin Ticket
16982, Random forest classifier
15013, Siblings Spouses
29728, In this kernel I will be using 50 stratified rows as holdout rows for the validation set to get optimal parameters; later I use 5-fold cross-validation in the final model fit
11475, Create Submission File to Kaggle
2923, Decision Tree
32712, Plotting training and validation accuracy
10588, Random Forest prediction with all features
36885, MLP
42849, Logistic Curve Fitting
5249, Modelling and Feature Selection Pre Requisite
23831, Taking this as the threshold on the grounds of experimental changes
4283, GrLivArea ranges from 334 to a maximum value of 3627 ft 2
7311, Observation
37433, Which are the most common words in selected_text?
13618, Train Set
3859, Logistic Regression for analysis
14589, Survival
10805, Firstly let s fill empty values
12006, Due to some negative errors here, we do not apply RMSLE directly; instead we use RMSE
26218, References: Albumentations documentation and the albumentations-team example notebooks
3717, GarageYrBlt PoolQC Fence MiscFeature
6037, Train Test Split Classic
34522, Time Features
136, Manually find the best possible k value for KNN
25452, Train and Predict
18393, Evaluate several models
13588, Predicting the X test with Random Forest
4398, Feature Engineering
29041, Highly skewed distribution of labels
15061, Model Feature Importance
5192, There is the VotingClassifier. The idea behind the VotingClassifier is to combine conceptually different machine learning classifiers and use a majority vote (hard vote) or the average predicted probabilities (soft vote) to predict the class labels. Such a classifier can be useful for a set of equally well-performing models, in order to balance out their individual weaknesses. Reference: the sklearn documentation on the ensemble voting classifier
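A runnable sketch of the scikit-learn VotingClassifier described above; the three base estimators and the synthetic data are illustrative choices, not the notebook's exact setup:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# 'hard' takes the majority vote; 'soft' averages predicted probabilities.
vote = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
)
vote.fit(X, y)
print(vote.score(X, y))
```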
6320, Logistic Regression
31582, Let's check how the target classes are distributed among the CAR continuous features
13041, Sex
41944, Now that we have trained the models, we can save checkpoints
32409, Submission
29356, Age feature
12451, And that GarageYrBlt is clearly an outlier
38488, Confusion Matrix
4767, Replacing the age value in the test and train dataset with the median value of the age column in their respective dataset
24437, Wrapper
15292, Creating categories for Male and Female and dropping few uneccessary columns
2334, Understanding Feature Importance
35092, Running model on the test set
18739, Normalization
963, First level output as new features
12902, Logistic Regression
27128, There are still some missing values in the test dataset, since we were following the train dataset
33653, Exploratory Visualization
20499, Find Duplicates
37576, Language model in action
7271, Cabin Feature
38408, Yeah we guessed
1854, Identify and Remove Outliers
1585, Data cleaning Feature selection and Feature engineering
39987, Feature Engineering
17968, Support for missing values
737, Test Set
30849, Locations of Top 5 Crimes
32500, Predictions
13450, Data Modeling
26258, Defining the layers and metrics for the Neural Network
9790, While encoding it is vital that we skip missing values
7776, AdaBoost
34643, XgBoostClassifier
42570, submit the predictions with the threshold we have just found
13075, we fill in the missing Fare
16631, Data properties
15426, The great majority of people travelled on their own
23795, Pandas Profiling is a single line of code to view most of the key information
21114, I have created optimal binning for people char 38 new variable people class 38
21591, Webscraping using read html and match parameter
10983, We are going to use The Gradient Boosting Regressor but before we need to know what the best parameter to use also we are going to need GridSearchCV for this job
21685, An lise descritiva dos dados
1118, Cabin Missing Values
2487, Age Continous Feature
29556, For embeddings we are using
4251, Discrete Features
9690, Based on this we drop GarageArea, TotRmsAbvGrd, GarageYrBlt and 1stFlrSF
40978, It's supposed to be a DataFrameGroupBy
23531, Support Vector Machine prediction
41234, Load Data
25167, Analysing Shared Word
37676, Format data to input text Q A format
30636, The idea and approach for creating a Title column was taken from the following great notebooks
26421, With the title Master we are able to identify young male passengers, who have a significantly higher survival probability than adult men
39390, Loop over product columns assign every column to the target vector y and compute the feature scores each run
19664, Correlations with the Target
23275, Building Models
22336, Coming back to lemmatization: we need to define a WordNet map and specify for which parts of speech we need to find base words; otherwise, by default, it fetches base words for nouns only
16404, Using Sklearn contingency matrix
33745, Before fitting the parameters we need to compile the model first
32475, This frees up space allowing us to finish our feature engineering
8567, Dropping Duplicate Rows
21386, Predict
2577, Gaussian NB
6563, Fare
40179, My upgrade of parameters
26894, Score for A6 16073
20705, Generate test predictions
14683, This gives a very good insight into family and gender relations
34657, Revenue
34148, Just importing the necessary libraries
26954, Load Data
33350, Timeseries autocorrelation and partial autocorrelation plots daily sales
16130, Train Test Split
8313, Imputing the missing numerical values with the median value as the features are not uniformly distributed
7462, Logistic Regression Implementation
43093, We define the hyperparameters for the model
39111, Name
7777, Gradient Boosting
27924, Model Tuning
6712, Relationship between Categorical Features and Label
36140, plot the train and test scores for the different neighbour values
1802, Here we are plotting our target variable against two independent variables, GrLivArea and MasVnrArea. It's pretty apparent from the chart that there is a better linear relationship between SalePrice and GrLivArea than between SalePrice and MasVnrArea. One thing to note: there are some outliers in the dataset. It is imperative to check for outliers, since linear regression is sensitive to outlier effects. Sometimes we may be trying to fit a linear regression model when the data might not be so linear, or the function may need another degree of freedom to fit the data; in that case we may need to change our function depending on the data to get the best possible fit. In addition, we can also check the residual plot, which tells us how the error variance behaves across the true line. Look at the residual plot for the independent variable GrLivArea and our target variable SalePrice
11743, Imputation Feature Engineering
35607, Fitting
4106, Predict OUTPUT
27314, Graphing Accuracy
278, Were younger passengers more likely to survive compared to older passengers
42340, As we have a high-dimensional dataset, feature engineering would be quite time-consuming if I checked the covariates one by one. Therefore I first focus on exploring the response variable; PCA might also be a good choice to reduce dimensionality if necessary
36430, Hyperparameter Tuning
16990, Gaussian Process Classifier
40184, The main problem is in the predicting of the longest and shortest selected text which are most or least different from the given text
18839, PCA Implementation via Sklearn
9602, Viewing the data randomly
19072, Total number of survived passengers
23853, Check for output tab of the notebook and check the score after submitting it
29184, If a feature is irrelevant, lasso penalizes its coefficient and shrinks it. Hence the features with coefficient 0 are removed and the rest are kept. Dropping one column from the one-hot encoded variables wouldn't be necessary, as this step definitely eliminates one of them
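A sketch of that selection with scikit-learn's SelectFromModel; the synthetic regression data and alpha value are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       random_state=0)

# Lasso shrinks irrelevant coefficients to exactly 0; keep only the rest.
selector = SelectFromModel(Lasso(alpha=1.0), threshold=1e-5)
selector.fit(X, y)

print("kept features:", np.flatnonzero(selector.get_support()))
```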
33638, Extracting the ID since we need to use it for submission later
16698, Completing a numerical continous feature
32040, we compute the accuracy using accuracy score of Scikit Learn
25519, Your Average Cat and Dog Photo
24589, Generate submittion csv
4140, Training the model
14777, Sex
1945, Relationship with OverallQual
14555, Fare font
1949, Here there are many identical features which means that they convey same information so it is worthy to consider them only 1 time
15472, Feature Sex
9961, I count how many houses were sold per year
3532, Housing Price vs Sales
31532, Handling Missing Values
37450, the comment text is prepared and encoded using this tokenizer easily
39019, Drop no necessary columns for the data analysis
16958, Fare
34446, Exploratory Data Analysis
14317, Data Defination
12392, Splitting data into train test sets
14417, Statistic Summary CATEGORICAL data
4733, There are lots of columns with null values
42342, Check whether the response variable is normally distributed
12445, Submission
10752, Use log(1 + SalePrice)
40331, Build word wectors from text field
9054, TotalBsmtSF
2239, Combining Attributes
3601, Before apply boxcox
16953, Sex
33355, Timeseries decomposition plots weekly sales
12369, Checking and removal of null values
30882, Set predictions 1 to the model s predictions for the validation data
40016, Age gender and cancer
841, Linear Regression
4411, Missing Values
7867, If you create a model saying that only women survive, it would already have a decent score, so the mission is to create a model that is at least better
13417, LightGBM Parameters tuning
8012, Ensemble Methods
21762, Missing values in indrel 1 99
8494, We can also visualize the partial dependence of two features at once using 2D Partial plots
2346, XGBoost Native Package Implementation
13350, Import Libraries
21520, Lets visualize the Loss over the Epochs
21940, Spark
16658, Fare
26913, Predict test data
22692, Main part load train pred and blend
18908, Fare Feature
34760, Model Creation
42126, Model parameters
34470, Save the data for training
9596, Unique values of a Feature
5968, Calculated fare feature
34905, Simple EDA
2906, Handling Missing Data
9037, Kurtosis is used to describe the extreme values in one tail versus the other
7951, Tuning on Adam optimizer params
1020, print our optimal hyperparameters set
7317, We can see that the numbers of passengers who survived and died are fairly balanced
41464, Plot the histogram for Embarked Val
12740, Some common Python libraries for data analysis and plotting are imported; you can already tell how straightforward the model is by the number of libraries imported
42177, So we use the one hot encoding procedure
34794, The residual distribution is normal
29459, Text embeddings and Setting the initial weights for our NN
6809, SVMs aim at solving classification problems by finding good decision boundaries between two sets of points belonging to two different categories. To understand how it works, you can refer to this webpage: tutorial com 2014 11 svm understanding math part 1
1612, Label Encoding Non Numerical Features
26016, After a good amount of homework I came to a conclusion to impute the values rather than dropping the columns
286, I wonder if one sex was more likely to survive than the other I expect that women and children were evacuated first
14450, Normalize Data
13172, let s take a look into a summary of our numerical variables
32661, Replacement strategy
25404, The softmax function lets the model produce predictions, because it normalizes the outputs into a probability distribution matching the one-hot encoding
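For reference, the standard softmax definition behind that statement (standard notation, not specific to this notebook): for logits z_1, ..., z_K, each output lies in (0, 1) and the outputs sum to 1.

```latex
\mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \qquad i = 1, \dots, K
```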
36243, After extracting all the features, we need to convert categorical features into numeric features, a format suitable to feed into our machine learning models
1628, Checking for NAs
16012, Pclass
10910, Tune the Decision Tree Model with Feature Selection
42464, ps car 12 and ps car 14
3069, Change Strings to Numbers
28580, 1stFlrSF
11050, pipeline xgb
12534, Calculating score of each model
21080, Missing values in the data set
5497, For implementing Grid Search in Random Forest Regressor model I have refered this
30083, Logistic Regression Algorithm
15148, Concatenate both Train and Test datasets for data cleaning
20131, In order to make a good logistic regression model we need to choose a value for regularization constant C
24384, We have two outliers where SalePrice is 700000
17531, Some titles are to rare
12490, Lets use lgbm to predict prices
20292, SibSb
26503, The first layer is a convolution followed by max pooling
12345, Basement
6391, Statistical Description
26248, Sample Predictions
11921, Best Model
38691, After Mean Encoding
4518, Ridge Regression
31100, Handling Missing Values in Categorical data
5900, Using Heatmap
36532, I guess we should be aware of the dataset but given my experiment here
21477, Training Data
38586, Checking Accuracy
621, Cabin known
38219, For each image in the test set you must submit a probability that image is a dog
37340, Combine predict
38204, Fit RandomForestRegressor
11094, Modeling
32116, How to normalize an array so the values range exactly between 0 and 1
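A minimal min-max scaling sketch answering that question (the array values are arbitrary):

```python
import numpy as np

a = np.array([3.0, 7.0, 1.0, 9.0])

# Min-max scaling: subtract the minimum, divide by the range.
a_norm = (a - a.min()) / (a.max() - a.min())
print(a_norm)  # [0.25 0.75 0.   1.  ]
```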
28475, New feature Weekend weekday transaction
5511, Distribution with respect Survived
5354, Display combined 3D subplots
33801, Pairs Plot
20647, We are now ready to define our neural network model
37036, Does the lack of a description give any information about the price of an item?
14167, I just repeat the same process for test data
8993, There are 3 rows where this is the case so I am going to assume the data collector accidentally forgot to calculate the MasVnrArea
15375, We use the
31519, Using the roc auc scoring parameter as that is the metrics to be evaluated in the problem statement
27073, do the same for test set
34686, Clipping target variable according to the competition rules
23541, Labels are 10 digits numbers from 0 to 9
21183, Check how many parcel values occur 1x 2x 3x times
36864, Using GridSearchCV with KNN takes a very long time for this dataset
5401, Keep Cabin and impute Cabin or not
28456, Out of the remaining three columns transactiondate is a datetime object
16495, Logistic Regression
33049, I can try with mutilple layers as with the neuralnet
22341, Tokenization
5501, Using the titles obtained we ll fill the age
4224, Label encoding
10731, Lets try to make a new feature
20708, Second column
5607, Skewness Kurtosis
29178, Normal Log Transformation
10854, Checking for the number and percentage of missing values
18960, Display distribution of a continuous variable
25376, Compile the model with loss function and optimizer
17003, Explore Target Variable
39429, Predict for df test
29565, ID
4543, decision tree
16056, Age vs Survived
1031, Data cleaning
23642, Define Dataset
3794, Quantitative and Qualitative
21350, Preprocess a single image
41241, do predictions on our training set and build a submission for the competition
23512, There are elements in the class. The classes are revealed by text cleaning in the model. The strings in the class before text cleaning differ not only by URLs but also by via usatoday or via USATODAY at the end of the strings. The mean target in the class suggests relabelling
22636, Model 1 Naive Model
35232, This is essential to know before we perform modelling
12336, FireplaceQu
20565, Make Predictions
31086, LotFrontage
29008, Creating combinations and running models on them
27357, Feature Engineering
31542, Alley
2220, We visualize how the housing market in Ames, Iowa performed during those years and how badly it was hit by the economic recession
7525, Depending on the categorical variable, a missing value can mean None or Not Available
38946, Yearly plot
28931, DAE Generator
31905, Train the model
26893, Create Submission File for approach 6
40685, NOW WE CAN EXPLORE OUR FEATURES FIRST LETS EXPLORE THE DISTRIBUTION OF VARIOUS DISCRETE FEATURES LIKE weather season etc
12956, Filling missing values in Embarked variable
24279, Model cross validation with K Fold
38264, We apply data cleaning steps to both our training and test data
16780, Random Forest Classifier
18463, Apply the model to predict values
37638, Just to make sure that our predictions make sense we display 10 different images from the test set for each of the 10 classes
703, Recall the form of a standard linear regression
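As a reminder of the form being recalled (standard notation, assuming p features and an error term):

```latex
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_p x_p + \varepsilon
```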
42009, Sorting with conditions and if there are any that matches
7047, Type of roof
32350, Train Test split
29396, We have finally reached the last part of the notebook, where we submit our predictions on the test data. Just as we preprocessed our training data, we preprocess the test data, make our predictions with our CNN model, and submit them
11169, Since area-related features are very important for determining house prices, we add one more feature: the total area of the basement plus the first- and second-floor areas of each house
4236, Once our model is trained, we can visualize how changing some of its hyperparameters affects the overall model accuracy
2479, Using a model found by grid searching
19881, Quantile Transformer Scaler
15823, Standard scalar for training data
11721, Radius Neighbors Classifier
501, Pclass Feature
9845, submit our solutions
15368, Handling Age Feature
36785, Word Tagging and Models
2029, Dealing with NaN Values Imputation
19138, Model 1 with GradientDescentOptimizer
6135, This house is almost new
6739, LotArea Vs SalePrice
12806, Data Analysis
40740, Simple CNN
16906, New Feature Connected Survival
21597, Webscraping using read html
32560, Anatom
27547, Display world map with different projections
5170, EDA based on the my kernel FE EDA with Pandas Profiling eda with pandas profiling
1858, ElasticNet
40439, Image Augmentation
5839, These are the top 5 recorded values in the SalePrice column, which is an int-type variable
22900, Mumbai and Nigeria are still on the top
3423, Some of these could be duplicate groups
40467, Sale Info
9469, Unique values
16888, FamilyType
10834, We start from testing number of classifiers to choose the best one for further fitting
7879, Based on the exploration of the data, I propose to discretize Fare into four states
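One way to get four quantile-based states with pandas; the fare values below are hypothetical, and qcut is an assumption about how the binning is done:

```python
import pandas as pd

fares = pd.Series([7.25, 71.28, 8.05, 26.55, 512.33, 13.0, 30.07, 7.9])

# pd.qcut splits on quantiles, so each of the 4 bins holds ~the same count.
fare_state = pd.qcut(fares, 4, labels=[0, 1, 2, 3])
print(fare_state.value_counts().sort_index())
```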
20206, impute mean value to numerical features
18220, Ensemble 15 CNN predictions and submit
8733, Bivariate Analysis
16149, Binning
3316, Combined model I also tried stacking which can improve the score but the effect is not ideal
11125, Model Stacking
30572, In the code below we sort the correlations by the magnitude using the sorted Python function
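A sketch of that sorting idea (the toy frame is hypothetical; the key point is sorting by absolute value with Python's sorted):

```python
import pandas as pd

df = pd.DataFrame({
    "target": [1, 2, 3, 4, 5],
    "a": [2, 4, 6, 8, 10],
    "b": [5, 4, 3, 2, 1],
    "c": [1, 3, 2, 5, 4],
})

corr = df.corr()["target"].drop("target")

# Sort by absolute value, strongest relationships first.
ranked = sorted(corr.items(), key=lambda kv: abs(kv[1]), reverse=True)
print(ranked)
```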
23544, Define the model
8491, Create Submission File
20273, Analysis of categorical features with levels between 5 10
19863, In this method there is no fixed approach for setting the upper and lower thresholds; it is based purely on the problem statement and the dataset we have
43100, In the embedding layer, trainable is set to False so as not to train the word embeddings during backpropagation
36492, Dispatch all samples in fold X in 5 folds
33443, LogisticRegression
39242, Feature engineering
14480, Only one kid in 1st class was found who died; even 1st-class children weren't spared entirely
6153, Set a list of basic regressors
29743, Rank averaging
6319, Random Forest
949, Pearson Correlation Heatmap
25439, Initializing Optimizer
8093, DATA EXPLORATION
11461, SibSp and Parch
2800, check train test set Checks the distribution of train and test for uniqueness in order to determine the best feature engineering strategy
36280, Sex is categorical data, so we can replace male with 0 and female with 1
23689, Check image size
24903, EDA
5907, Before applying SVR Scaling needs to be done
26841, Preprocess for Neural Net
19939, Categorical values
27486, To easily get to the top half of the leaderboard, just follow these steps: go to the kernel's output and submit the submission file
2028, Loading and Viewing Data Set
34390, Locations
39165, How to use a predefined architecture from fast ai
6605, Random Forest
10111, Exploratory Data Analysis EDA
17440, Fare
9299, We can use a pandas function called get_dummies to, as the name says, get our dummies
39997, Fixing Outliers
7903, I keep two of these models with 6 neighbors and with 10 neighbors
27087, The transformations are standard Python objects typically initialized by means of a training corpus
20311, Check out the top 10 goods bought by people of each cluster
6493, Maybe a good idea would be to create some new features but I decided to do without it
41478, Plot a histogram of AgeFill segmented by Survived
20646, We can then use the maximum length as a parameter to a function to integer encode and pad the sequences
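A sketch of that step with Keras preprocessing utilities; the two sentences are placeholders, and post-padding is an illustrative choice:

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

texts = ["deep learning is fun", "learning keras"]

tok = Tokenizer()
tok.fit_on_texts(texts)
seqs = tok.texts_to_sequences(texts)      # integer-encode every word

max_len = max(len(s) for s in seqs)       # maximum sequence length
padded = pad_sequences(seqs, maxlen=max_len, padding="post")
print(padded)
```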
40617, to check if Linear Regression finds a similar solution
3256, Estimates of location, which is where statistics comes into play
9296, Regression of Survived on Pclass using a categorical variable
32809, Level 2 Random Forest
26084, assign the values provided by the model
38829, Define binary cross entropy and accuracy
34269, LSTM model
20607, Parch
33069, Modelling
28764, we shuffle the data
20488, Go to top
36814, Machine Learning
911, Imputing Missing Variables
19771, Dimensions
5902, Train test split
22067, Jaccard Scores for using TEXT as SELECTED TEXT in TRAIN data
800, If we split by Title 1 we ll have the two following nodes
26400, In general, the probability of a woman surviving is more than three times higher than that of a man
4742, Now we can do some plotting with those null values
25164, As we have very few points with NULL values, it's better to remove them
12880, Passenger Class
4819, Remove
43346, We use Cross Entropy ti calculate Loss optimizer adam and metrics as accuracy
16103, We now convert the categorical value of Embarked into numeric
22041, One more variable from floor area could be the difference between full area and living area
19460, Compiling the model
20379, We all know that id column isn t important to us so we drop it
39724, I'm still pretty new to Python, so I'm not sure what the canonical way of doing this is, but using
36886, dense 1
10568, We use the Alternating Least Squares (ALS) algorithm in the PySpark ML library for recommendation. To read more, you can visit: collaborative filtering html
28809, Middle out
36570, Train model
25418, Alright let s now explore this newly created column
218, Model and Accuracy
3820, Notice that as our confidence interval becomes bigger the interval of possible population means becomes bigger
21479, We need to build the augmented test that we use to cross validate the test predictions
21133, It's a good idea to first investigate variables with Qual in their names, because this shortcut refers to Quality; in other words, we expect some levels to indicate lower quality and others higher, which enables us to order them
38247, Most of the test data is from Asia, and row spacing is also comparable to Europe
23833, Duplicate features
31379, Rotate Image
3513, The variable importances from the boosted model on the reduced dataset
27855, Drop columns with more missing values in Test Set
28276, Each image in the training set is
7562, Imports
14829, Drop Passenger ID and Cabin
21913, Normalizing features
25005, Test Data
14481, Embarked S survived more in female in every passenger class in male also same the same
3804, Ridge
26623, we plot the image selection
15486, Scale Continuous Variables
22648, Initializing
37778, Loading the data
28301, Choose optimal threshold by val set
5299, I now build the five base models
7195, Clearly we have not a normal distribution in our target let s solve this with log transformation
6848, Analysis Name column
22687, Get correct labels from test data with labels
17405, Out of Fold Predictions
39856, Augmentations with ImageGenerators
21558, Sex
33748, The following will be very slow
9689, How to decide which of two features we should remove
11525, RF Regressor
34511, Replace Outliers
32715, Attention Model
40824, Machine learning
32718, BERT Transformer
28509, Here is some sample rows
12280, Generate output: the mean, median, max and min of the scores' fluctuation
21157, Comparing used methods
13562, Names Column
14197, take a look into the data
6947, Silhouette plot
40277, As any model we build we most certainly be performing feature scaling and one hot encoding it is good idea to use the same basic transformations on our baseline model
19633, 1/n-classes benchmark
42174, Data normalization in Keras
8004, Train SVM Regression
31515, Unsatisfied customers
42567, Training
2426, Lasso Model
27070, Intersection
7232, Dividing the Dataset
942, Split Training and Testing Data
36742, Calculate the Mean Absolute Error in Validation Data
1371, let s categorize them
17922, Successfully completed one hot encoding on the dataframe s columns
24660, Loss function
28208, Modeling
25689, How Kerala Flattened the Curve
40767, From this point forward
19100, Plotting the Accuracy of the models
38091, In order to avoid overfitting problem we need to expand artificially our handwritten digit dataset
33577, Transformation
628, The factorplot suggests that bad tickets are worse for male passengers and 3rd class passengers
42429, Lets Do A Simple Submission
19307, Evaluation prediction and analysis
2994, There you go, SalePrice is looking off the charts
40729, Predict on Testset
29806, Displaying Glove WordVector of a word
15502, SibSp and Parch
18298, Looks pretty good we now generate some texts
19292, Creating keras callback for QWK
36869, NN Classifiers with Keras
6012, The results are still not satisfying; how about we use the original target instead? Let's try it
822, Relation to SalePrice for all categorical features
6571, Fare
37447, In this section I am trying to build a multi input model using BERT embedding for predicting the target keyphrases
34487, Write the Softmax class
2531, Confusion Matrix for the Best Model
118, take a look at the histogram of the age column
8135, I took several classification algorithms and compared their RMSE on the train data in cross-validation
824, Correlation matrix 1
20916, Lags
27475, Modelling
33997, casual
5543, Tune Model
33639, Target Variable Distribution Univariate
9609, People were addressed as Mr, Mrs, Miss, Master, Rev, Dr, Major, Mlle and Col on the Titanic
17631, Assigning Miss age smaller than 14 to Girl Title
19461, The loss function we'll use here is called categorical cross-entropy; it is a loss function well suited to comparing two probability distributions
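For reference, the standard definition (not specific to this notebook): with one-hot true labels y and predicted probabilities y-hat over K classes,

```latex
L(y, \hat{y}) = -\sum_{i=1}^{K} y_i \log \hat{y}_i
```

which is minimized when the predicted distribution matches the true one.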
13527, Download datasets
38218, Predict
38286, Imputing data in Remaining Columns
28635, GarageArea
26946, AMT REQ CREDIT BUREAU QRT
18039, Predictions
1905, Correlation in Data
39888, Basic 17185
22417, Based on that and the definitions of each variable, I fill the empty strings either with the most common value or create an unknown category, depending on what I think makes more sense
7795, investigate how well this model did
1295, Feature Importance 6
21216, In deep learning, remember that we usually have large datasets, so we don't split in a 75/25 ratio; rather we give a small percentage, like 4-10, for validation and testing, and that is more than enough
16306, Sex vs Survival
7876, This continuous feature could be converted into a categorical feature in order to improve the model's predictions
5674, Pclass
21122, we investigate the quality of the given data
5361, Diplay many types of plots in a single chart with table in it
15204, Get title from Name and average survivability per title
39424, get rid of Ticket for now
6611, we have to deal with Categorical Data types
27133, Visualize Discrete Features with their average Sale Price
23060, We checked the length the head of datasets all good we can start building our model
5675, Create a new Pclass Fare Category variable from Pclass and Fare features
223, Model and Accuracy
33781, Generating CSV File
27326, TTA Analysis
16828, Creating new column predicted with if Survived Prob else
9622, Evaluation
22706, Train the network
1851, Which Numeric Features are Candidates to be Transformed
9194, The roughly 300 filled-in examples of Cabin information are distributed fairly one-sidedly toward the first class
31530, Normality Test
40181, Code from notebook
13385, Categorize passengers as male female or child
32050, Let's check whether all the missing values have been filled
17992, We first group the passengers as per their titles and then fill the missing values for the age using the median age for each group
43352, But for each image we get probabilities for each label
34341, Transform Target and Redefine Features
13353, View profile report of training set
42148, Define VAE Loss
31949, LSTM models
28209, Normalize the input features using the sklearn StandardScaler
11417, Use Case 9 African Adult Literacy Rate Butterfly chart
9795, Here the data is left-skewed
39744, Embarked
11685, Passenger's Cabins: when you loaded in the data and inspected it, you saw that there are several NaNs or missing values in the Cabin column. It is reasonable to presume that those NaNs didn't have a cabin, which could tell you something about survival. So let's now create a new column, Has_Cabin, that encodes this information and tells you whether passengers had a cabin or not
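A minimal sketch of that column (toy frame, hypothetical values):

```python
import pandas as pd

df = pd.DataFrame({"Cabin": ["C85", None, "E46", None]})

# 1 if the passenger had a recorded cabin, 0 if the value is NaN.
df["Has_Cabin"] = df["Cabin"].notnull().astype(int)
print(df)
```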
37448, The function below is from xhlulu; it is used to encode the sentences easily and quickly using the DistilBERT tokenizer
15922, Model Selection and model tuning
7644, remove highly correlated features
20740, Functional column
20675, Deep Learning Model Life Cycle
38656, Survival Rate
10349, we split the data to training and testing datasets
24256, Titles
7017, Evaluates the present condition of the material on the exterior
29006, NN
3632, Pclass Sex SibSp Parch
11237, The Grid search Best params are
9276, Showing one scenario of how I tuned the parameters of my model
6491, Data preparation
31709, Submission
31099, CATEGORICAL DATA
28153, This knowledge graph is giving us some extraordinary information
38417, For this classifier we use a special submission algorithm
36304, Submitting the solutions
36784, Regular Expressions
20955, Train the model with fit method
29195, Residual Histogram to visualize error distribution
22119, How to interpret Feature Importances
12771, Fix data using 3 different methods
6140, Feature selection
8365, Integrate into test the title feature
9290, Impute missing or zero values in the Age variable
20232, Sex
29918, we define the hyperparameter grid that was used
26921, A test for homoscedasticity is to inspect the dispersion of the scatter plot
15740, We can create a new feature for family size: the SibSp and Parch features have similar meaning, so merging them gives us a family-size feature
33511, Spain
2111, New features and what suggested them
5023, Standardize
14790, Family size features
33276, Categorical Features
39237, Remove duplicate shops
23891, There is an interesting 2
7996, Create Pipeline
29129, Model1 Error
5016, Missing Data
22979, The median of sales per month differs from month to month
877, pandas get dummies for categorical features
23218, If you want to read more about this Baian library read
21400, Training the model
3252, Bar Plots House Style Frequency
19123, ELI5
30319, Drop train data with no candidates of start end positions due to poor segmentation of selected texts
24417, Build Year
31414, It turns out that you don't need to override add_test for this to work
17625, Cabin cleaning: assigning the missing Cabin values with respect to the Ticket number
41318, Use the next code cell to preprocess your test data. Make sure that you use a method that agrees with how you preprocessed the training and validation data, and set the preprocessed test features to final_X_test
22857, Russian Stock Exchange Trading Volume in Trillions
19560, basic utils
16347, Manipulate data
32902, Convert an image as numpy array
4956, Gradient Boosting Regression
4243, Artificial Neural Networks ANNs Tuning
36735, Decision Tree Model with max leaf nodes specified
41012, It looks like 9 of those 11 passengers are in the test data, so there is definitely the possibility of finding other females who died. Nice!
35356, I am using a Sequential model with a Flatten layer to convert tensors into arrays, and ReLU (Rectified Linear Unit) activations with different input image parameters to reduce the feature vectors. Lastly, I used a softmax output layer. I compiled the model with the adam optimizer and the sparse categorical crossentropy loss. At the end I trained the model for several epochs; an epoch is one training loop, forward and backward, over the data
19333, Model Performance Analysis
33850, Univariate analysis of feature word share
33715, FEATURE SibSp
30697, Dividindo a Base
23423, Average word length in a tweet
39180, apply the postprocessing to build the noise
23953, Building the model
12350, BsmtFinType2 Rating of basement finished area if multiple types
7816, Feature selection
27648, Predict
38646, Fare
2081, You can check yourself the public LB score of this model by submitting the file submission3
26439, Let's build an ANN classifier and train it
41671, take a look at some benign tumours from the train set
2287, Precision Recall Curve
13412, True Positive Rate
42704, do similar analysis for int features
5945, There are many missing values in test dataset
17845, Go to top
1524, Numerical Features Age Continuous Fare Continuous SibSp Discrete Parch Discrete
31847, And then drop types
16896, Title Definition
1241, Blend model prediction
19663, Correlations
4719, Following is our new cleaned dataset, on which we will be applying our machine learning models
17369, Observations
5657, Read the Train and Test Data
15958, Correlation between the Numeric Features
1723, Embarked Count
34625, We load the training set
27035, Unique IDs
38942, We have now identified the outliers in the data
12346, BsmtQual Evaluates the height of the basement
32793, Sklearn K fold OOF function
18191, a quick look at the distributions
29714, List of columns with too many outliers
35200, Lasso Regression L1 Penalty
11676, It looks like fare is correlated with survival aboard the Titanic
10918, Submission
28341, Analysis based on FLAG OWN REALTY
27534, Display the confidence interval of data and used on graphs to indicate the error
5417, This is pretty straightforward
19573, Handle item_cnt_day outliers
2525, Bagging
42411, Training time
13832, Age
12793, We plot the accuracy stories we have retrieved
27095, Reaching the Top Ten
7220, Electrical Null Value
3339, Notice the difference of Medians for train and test data
32074, n components decide the number of components in the transformed data
300, Don't really know what to do with these
40276, Before we can begin with feature engineering or model development, it is important to set our baseline, against which any future improvements will be measured
14565, Let's check the missing values again
1639, We use a lot of features and have many outliers
10248, Go to Contents Menu
24473, Normalization: each value in the dataset is a greyscale value between 0 and 255. It's best to normalise the data so that each value is between 0 and 1 before applying any models
27844, Plot confusion matrix
26748, Distribution of median prices of items over the years
1942, Street
8991, and now I can drop MSSubClass
25682, System When there is NO SOCIAL DISTANCING
33664, Current Date Time
42348, Tune the parameters of the model. My rule of thumb is finding the balance point between the number of rounds and the learning rate
1711, Imputation using bfill
2328, Predictions and scoring regressions continuous variables
11155, GarageYrBlt GarageArea and GarageCars Replacing missing data with 0 Since No garage no cars in such garage
33325, Model Training
36664, Reading a text based dataset into pandas
19342, The category name is a categorical variable so we have to turn it into dummy variables and check the correlation between them and the price
4404, Classification Report
10152, Scatter Plots
1404, Missing values in the Embarked and Fare variables are very easy to impute, because we can use the most popular value or something like that
30333, For convenience we create a dictionary that takes as keys the animal classes
17546, Fill the missing Embarked Value with the most frequent one e S
13906, Classification report
20302, Dropping some featutes
24504, Submission
29739, We have got a better validation score in this probe; as before, I ran LGB BO for only 10 runs
23666, First we need to create miniature training and validation sets to train and validate our models
2722, In order to further improve our model accuracy
35849, We reshape it into proper image shapes, as we will be using convolutional networks
4066, We are going to assign to each row with a missing age the mean age of its respective cluster, using the DBSCAN algorithm
31474, Dataset
24547, Again we exclude the dominant product
39132, The data is separated again in two variables train and test
14451, go to top of section corr
2472, Making FARE BINS
38287, focus on following set of columns first
8472, Compressing Data via Dimensionality Reduction
27027, Calculate the probability using a hypothesis test
27198, This image gives us the correlation analysis. We use it to explain the relationships between variables, and we can say that a relationship gets stronger as the value in a box approaches 1
4800, Outlier Handling
4906, This is the Normalised Data
28690, SaleCondition
21078, Correlation plot
10842, Check values in each column
16580, More Feature Creation
30695, Once the data is sorted, we take the id of the first item in the list, that is, the one with the largest gain
2729, RandomizedSearchCV
19385, score
1860, Non Linear
36474, Tweet authors
35482, Initialize the Value
12364, Functional Home functionality
21768, Assigning missing incomes by province is a good idea
10548, Ridge performs better than the Lasso on this dataset
31317, Encoding
28525, BsmtFinSF1
11286, Fill missing values with the most common values using the groupby function
15928, Parch
31597, import the necessary packages
32419, Another Example Increasing Padding and Stride
19716, visualize some of the predictions the model made
10578, Logistic regression
5293, I ve tried various transformations and found that log transform somehow works better for me
3854, Feature FamilySize
13995, Impute Title with it s mode
41960, In simpler terms it is the process of converting a word to its base form
19586, item
11413, Use Case 5 US Wages Tableau Visualisation
7607, ColumnTransformer
18060, Splitting the Data
12377, Removing outliers in SalePrice
15205, Make features related to family size
26103, Linear Regression
19979, MLP ReLU ADAM
18464, Libraries to import
1192, we find that these are outliers
34465, Rolling Window Statistics
23313, Quick Baseline with XGBoost
17969, Experimental NA scalar to denote missing values na scalar to denote missing values
9825, SibSp Number of Siblings Spouses Aboard
15514, Embarked
34085, Simple Logistic Regression
36824, We use the fit_transform method to transform our corpus data into feature vectors. Since the input needed is a list of strings, we concatenate all of our training and test data
34971, For Loops to Test the Various Parameters
15707, Most fare tickets are between
43013, Logistic Regression
4915, Numbers are not my thing; let's see some graphs
34733, Resampling the training data
36918, Missing Values
10745, From the scatter plot there are two outliers with gross living areas of 5642 and 4676, but their SalePrice is low
17635, Age Cleaning Fill missing Age values by taking the median Age of its corresponding Title Pclass
33236, Convert the features from dictionary format to pandas dataframe
24422, Cultural Recreational Characteristics
4118, Ordinal data mixes numerical and categorical data. The data fall into categories, but the numbers placed on the categories have meaning. For example, rating a restaurant on a scale from 0 (lowest) to 4 (highest) stars gives ordinal data. Ordinal data are often treated as categorical, where the groups are ordered when graphs and charts are made. However, unlike categorical data, the numbers do have mathematical meaning: for example, if you survey 100 people and ask them to rate a restaurant on a scale from 0 to 4, taking the average of the 100 responses has meaning. This would not be the case with categorical data
11305, Import Python Libraries
30685, Transitions out of State 0
20204, Create function to set mean value to numerical features and mode value to categorical features
591, First a broad overview
12016, Gradient boosting is an ensembling model based on boosting method
29454, Here we'll use CountVectorizer to create one column for each word in the tweets and use it as a feature for prediction
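A minimal sketch of that vectorization; the tweets below are made-up placeholders:

```python
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "Forest fire near La Ronge",
    "I love this sunny day",
    "fire evacuation order issued",
]

vec = CountVectorizer()
X = vec.fit_transform(tweets)        # one column per vocabulary word

print(vec.get_feature_names_out())
print(X.toarray())
```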
24165, BBoxes with Masked Sample
5356, Display the relationship between 3 variables in a bubble chart
8801, First impute the missing values in Bsmt Features
11758, Both our Lasso and Elastic Net models look significantly better, and we now have the parameters we need to train them on our full data and use them for predictions
16515, MLP Classifier Multi layer Perceptron
23450, Hour
33670, Month
26707, Plotting sales ditribution for each state across categories
14735, True Positive Rate
33610, I am going to compile my model with adam optimizer as i think it is working better than RMSprop for me
34948, Predictions
18438, Interpretation
3545, We are going to implement a function that will be useful to visualize survival relative to a certain feature
129, False Positive Rate: how often the model predicts yes (survived) when it's actually no (not survived)
2229, Interesting insights
7886, The same analysis for the IsAlone feature
5848, Now we are ready with our most influential features; let's check again how they relate to SalePrice visually
40920, Fit
761, Train the model
27181, XGBoost Regressor
13837, Which features are numerical
37870, One Hot Encoding Categorical Variables
12666, The metrics were calculated as described below
36575, also have a look at the other end of the spectrum
21470, For each word of a sentence that isn't a stopword, we randomly choose between the word itself and all of its synonyms in order to replace it in the modified sentence
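A sketch of that augmentation using NLTK's WordNet; it assumes the wordnet and stopwords corpora have been downloaded (nltk.download), and the helper name augment is hypothetical:

```python
import random
from nltk.corpus import stopwords, wordnet  # assumes the corpora are downloaded

def augment(sentence, seed=0):
    random.seed(seed)
    stop = set(stopwords.words("english"))
    out = []
    for word in sentence.split():
        if word.lower() in stop:
            out.append(word)        # stopwords are kept as-is
            continue
        # Candidate pool: the word itself plus all of its WordNet synonyms.
        candidates = {word}
        for syn in wordnet.synsets(word):
            for lemma in syn.lemmas():
                candidates.add(lemma.name().replace("_", " "))
        out.append(random.choice(sorted(candidates)))
    return " ".join(out)

print(augment("the quick brown fox jumps"))
```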
42366, Male and Female are multicollinear columns
5917, ANN
29020, SibSp
2727, Default Model
34412, Download data
37990, Here we use GlobalAveragePooling before FC layer
38305, lets go to model testing
9946, removing the useless columns
21102, Check the distribution for detectiting outliers
6013, We use the AutoML available in jcopml; use AutoRegressor for regression cases
1511, lets plot them all
26674, Annuity Amount
23686, Check number of files per folder
23688, Barplot
20134, Predict on test data
30329, This is a function to reshape the image
14605, KNN Classifier
5008, Numerical Features
5411, This as well
37758, Generator Approach
10377, Prediction and selecting the algorithm
24250, Fare
41016, The unique-value counts confirm that the assumption that these groups lived or perished together is generally correct; that is the real power of the WCG model
31324, Customized Metric
41540, Linear Discriminant analysis
7210, Univariate analysis of Target variable SalePrice
42999, Make Submission
21264, Data augmentation step
42416, Simple Feature Engineering From Timestamp
31401, We can plot loss and val_loss, acc and val_acc
22448, Ordered Bar Chart
12248, Loading the data
32398, Ensembling With Meta
13764, Validate and Implement Model
26749, use a threshold of 3
30758, For the given problem we must define a desired RMSE, and it should be no more than 10000
36215, Analyzing categorical attributes
18133, Model Performance Review
16532, Visualizing given dataset
34710, Mean over all shops
26746, Different states have different SNAP dates so we have to view them separetly
7428, Random Forest
4056, The dummify transformer will be used to handle the categorical columns
42023, Extracting the first letter of a word ending with a full stop
1153, Predictions from our Model
43165, Cropping and Resize
10460, We might also be interested in strong negative correlations it would be better to sort the correlations by the absolute value
1005, Fantastic! But wait a minute, let's investigate a bit more
23876, Transaction Date
1337, Converting a categorical feature
21853, Training on ALL IMAGES
24180, Starting point
27072, I want to check if there is some duplicated tweets it could be a retweet If we know that a tweet is fake or not so the other duplicated tweets get the same class
9698, go through different categorical columns
41651, Predict on test set
14379, Females from 18 to 30 were most likely to survive
19671, The most important feature created by featuretools was MAX
6183, The number of parents and children present
11128, Edit Look for fliers in other columns
26083, We need to reshape data
14986, Feature Scaling Normalization
24510, Train model on imbalanced data
15571, A family size feature
36518, Embarked
18002, Processing the training and test set together
3924, Random Forest Classifier
38934, XG Boost
7890, These new features provide a clearer distribution than the dataframe without them
12317, I think that outlier
10121, Look at that: if we had dropped Cabin, we would have lost so much information
6637, Above we are creating boolean values for the model to understand
28776, draw a Funnel Chart for better visualization
1592, In order to be more accurate, the Sex feature is used as the second level of the groupby while filling the missing Age values
34651, Item categories data
10693, Fare processing
15602, Pairplots
31720, Train AGE Variable
8738, Apply Log Transform
35627, Experiment 3
32576, We need to set both the boosting type and subsample as top level keys in the parameter dictionary
20557, Model training
1133, Feature selection
1013, All good apart from Survived; it is still a float
32898, Validation dataset
15339, LOGISTIC REGRESSION
17624, Ticket cnt
10663, Model Evaluation
14587, Since very few values are missing, we replace them with the most common port of embarkation, i.e. S
14340, Try a RF model
952, Out of Fold Predictions
39787, taxvaluedollarcnt vs nearest neighbors taxvaluedollarcnt
32161, And the last note here
43332, Check some of our predictions
24448, Train Test Split
42756, This list will be passed into the network. It is composed of 14 numpy arrays containing our categorical features that go through the Embedding Layers (13 layers). The last element of the list is a numpy array composed of the 173 numeric features plus the 3 categorical features that have at most 2 distinct outcomes.
23243, Building Your Model
9400, Hyperparameter search with model combination
29600, We can now create a new optimizer with our found learning rate
4779, Voting
23760, Adding it to final dataset
10354, Testing Different Models
13591, Running LogReg HyperOpt
10940, Check the summary of train data
7649, convert numerical variables to categorical variables
34417, First we analyze tweets with class 0
41670, Final predictions and submission
16641, Naive Bayes
21211, check any missing values are there
12948, Declare feature vector and target label
38734, First let s deal with the missing Age values
34297, This is the activation of the first convolution layer for the benign image
28805, Things get a little hazy; it's not very clear or straightforward
1315, Feature Transformation
25731, build Model
26335, TF IDF Vectorizer
28772, We have one null value in the train set; as the field's value is NaN, we just remove it
12360, MasVnrArea Masonry veneer area in square feet
41374, Feature Engineering
33578, Meter
27394, A side note: I tried random forest and decision tree, but the RAM was crashing, so I removed them
14298, Check again for missing values
33340, Joining df
37342, VGG 19
15963, Name Feature
20401, visualize the embeddings by projecting embeddings in 2 dimensional space using singular value decomposition
36079, Seed
24840, LeNet5
13042, Creating Family Size variable using SibSp Parch
19050, Let's set some seaborn parameters
20914, Predict test data
38973, As we added three new rows to the train X list, we need to add three new rows to the train y list
30646, Random Forest Classifier
30977, Below is the code we need to run before the search
33847, Checking for missing values
2396, Using missing values as a feature SimpleImputer add indicator True
32388, Adversarial Validation
452, Let's apply modelling
15403, There are multiple age values missing in both training and test data
9699, After much exploration I found that in some columns one category is highly dominant
1970, Gaussian Naive Bayes
37828, Model Building
17038, Feature Correlation and Dependencies
29598, we can plot the learning rate against the loss
19619, Outliers smoothening
18966, Display the density of two continuous variable with custom bin size
37312, Rfe Selected columns
2821, Keep only the columns that have an acceptable information gain
20590, SVC
1766, We do a simple bar plot to check title vs survival rate
36301, Below we set the hyperparameter grid of values with 4 lists of values
25450, Applying Decision Tree
14060, SibSp vs survived
30088, Random Forest Algorithm
37402, Identify Correlated Variables
5918, DF for ANN
12465, Submission
1066, We stack all the previous models including the votingregressor with XGBoost as the meta regressor
19419, and the last token is . Alright, time to write the processing function
8436, Fireplace Quality Miss Values Treatment
3958, Label Encoding
31511, Plotting the top 30 features
31902, We split our training data into train and validation datasets in order to train our model and validate it on the validation set to avoid overfitting, before testing the model on the test dataset, which acts as real-world data for our model
33817, As expected the most important features are those dealing with EXT SOURCE and DAYS BIRTH
3509, Classification report
25431, Test
13499, Is Baby
3925, Gradient Boosting Classifier
159, EDA of test set
31279, Below are the sales from three sample data points
25758, Making sure the transformation works on the original train images and on test
40102, Clean up the memory
4203, In this step we try to feed a Lasso regression model with all of our variables
19942, Since we have two missing values, I decided to fill them with the most frequent value of Embarked
14169, After adding all these new features we need to check whether we have null values and deal with them
23930, Output
11553, Influence of Categorical Features on SalePrice
13826, Calculating the F1 Score: F1 = 2 * (precision * recall) / (precision + recall)
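A quick check of that formula against scikit-learn, with made-up labels:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]

p = precision_score(y_true, y_pred)
r = recall_score(y_true, y_pred)
# F1 is the harmonic mean of precision and recall.
assert abs(f1_score(y_true, y_pred) - 2 * p * r / (p + r)) < 1e-12
```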
34745, INTEGER ENCODING ALL THE DOCUMENTS
5661, Fill basic missing values for the Embarked feature and convert it into dummy variables
32963, the categorical features
20464, Type of the housing of client
16194, And the test dataset
19888, There is more than one way of determining the lag at which the correlation is significant
32807, What just happened there? Our CV score for each training fold is pretty decent, but our overall training CV score just fell through the cracks. It turns out that since we are using AUC (Gini) as the metric, which is ranking dependent, applying xgb and lgb at level 2 of the stacking messes up the ranking when each fold's prediction scores are put together.
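One common workaround, not from the original text: since AUC depends only on the ordering of scores, rank-transform each fold's out-of-fold predictions before concatenating them. A sketch under those assumptions:

```python
import numpy as np
from scipy.stats import rankdata

def rank_normalize_per_fold(oof_scores, fold_ids):
    """Rescale each fold's scores to within-fold ranks in (0, 1]."""
    out = np.zeros_like(oof_scores, dtype=float)
    for f in np.unique(fold_ids):
        mask = fold_ids == f
        out[mask] = rankdata(oof_scores[mask]) / mask.sum()
    return out

scores = np.random.rand(10)              # hypothetical out-of-fold scores
folds = np.array([0] * 5 + [1] * 5)      # which fold produced each score
print(rank_normalize_per_fold(scores, folds))
```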
19727, For weekdays vs weekends
30653, Is that real? 222 unique values without any duplicates
2648, There are just 2 NaN values in Embarked column
19915, Test Prediction
437, MasVnrArea and MasVnrType: NA most likely means no masonry veneer for these houses. We can fill 0 for the area and None for the type.
14501, Missing Values
4931, Another good practice when doing DS projects is to look for all sorts of correlations between all features
2131, The bedroom proportion never helps
23632, get the test set predictions as well and save them
3967, ElasticNet
28335, Analysis based on OCCUPATION TYPE
12634, Sex
8096, Age vs Survival
25916, Creating Submission File
38044, Sample Mean and population Mean
33845, Checking for duplicates
29801, Fetch list of word vocabulary
14617, We are summing over all n passengers, and our initial predictions y_n are relatively close to 0.5. Thus the term (y_n - t_n) is always close to -0.5 for t_n = 1 or close to 0.5 for t_n = 0. There is nothing wrong with that, BUT you are multiplying with x_n (fare). If it's an outlier, this yields a high contribution to the gradient in the sum. Even if all other contributions might be of a low value, one high outlier value already shifts the entire gradient towards higher values as well. This is bad learning behaviour: if our model's gradients are mainly driven by outliers, it tries to learn the survival of these exotic values, ignoring the majority of all remaining passengers. Uff.
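For reference, a sketch of the log-loss gradient this passage is describing, assuming a linear model with weight w on the fare feature (the exact model is not restated in the original):

```latex
\frac{\partial E}{\partial w} = \sum_{n=1}^{N} (y_n - t_n)\, x_n
```

Each factor |y_n - t_n| is at most about 0.5 here, so a single outlier fare x_n can dominate the whole sum.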
43099, It's time to build the model
6921, Fit Model
1186, take a closer look at some of these lower numbered variables
7621, Loop over GridSearchCV Pipelines Linear
15302, Which is the best model for prediction
28351, Analysis based on average values
1799, How about one more
1997, It's a nice overview, but oh man, is that a lot of data to look at
14811, SibSp vs Survived
32861, Creating the label
3510, This model appears to be the best performing in terms of precision and recall on the training set
36285, Better, let's convert to numeric
24477, Create train and validation sets
34343, Setup
14507, Fare
38452, process all questions in qid dict using SpaCy
37728, PCA Analysis
31276, Rolling Average Price vs Time TX
40940, We are calculating the feature importance because there are just too many variables, so we only need to concern ourselves with the ones that are useful for our analysis
4171, Equal width discretisation with pandas cut function
4708, I wanted to combine the linear models with the tree-based models
36602, Printing the predicted frequencies of train data
36934, After filling the missing values in the Age, Embarked, Fare and Deck features, there is no missing value left in either the training or test set
15814, Female passengers survived more than the male passengers
35453, Data EDA
36425, Categorical Features
11407, Model Train With Random Forest
43379, Before we start with building targeted and non targeted attacks let s have a look at the first digits of the test set
40923, Displaying Samples
12870, Feature scaling
42267, There is no difference in this boxplot
14788, Age
26724, Plotting daily sales time series
25951, We were able to grab a ranking position within the top 10 on test data just by using EDA and FE for numerical features with a simplistic model. In the next section we work on categorical features as well as a model selection strategy. We store all the original and newly generated features for the further steps.
34411, Import libraries
5452, Setup a small subset of the data
24923, Texts
24578, This train model function comes from training and validation code
6066, WoodDeckSF is mostly 0; maybe binary is better
24251, Observations
42078, Demonstrate the half bathrooms' lack of interest from a statistical point of view
19284, If the model is on the GPU we have to transfer the data to the GPU to run it through the model
20639, Trigrams
17031, Fare and Age are highly skewed
24249, Family SibSp and Parch
36032, Holdout evaluation Logging for error analysis
10802, Before we start the prediction we need to impute two empty values
36799, We define the tag pattern of an NP chunk. A tag pattern is a sequence of part-of-speech tags delimited using angle brackets, e.g. <DT><JJ><NN>. This is how the parse tree for a given sentence is acquired.
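A minimal sketch with NLTK's RegexpParser (corpora downloads required; the sentence is hypothetical):

```python
import nltk

# Requires: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
tokens = nltk.word_tokenize("the little dog barked")
tagged = nltk.pos_tag(tokens)

# NP tag pattern: optional determiner, any number of adjectives, then a noun.
grammar = "NP: {<DT>?<JJ>*<NN>}"
parser = nltk.RegexpParser(grammar)
print(parser.parse(tagged))   # parse tree with the NP chunks marked
```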
16441, A basic trend can be found from the graph
22069, Model
37713, Start training
20359, This corrected solution improves the final leaderboard score (places improved as of the publication of this kernel). The error rate is improved by the human-in-the-loop step.
526, Highest survival rate for women in Pclass 1 or 2
43118, apply OH on X test
14860, All we have to do now is convert them into the submission file
22042, The age of the building might have an impact on the rental price, so we can add that one as well
31185, Evaluating a classification model
24055, Each sample's missing values are imputed using the mean value of the n_neighbors nearest neighbors found in the training set
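A minimal sketch of that behaviour with scikit-learn's KNNImputer and a toy matrix:

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0],
              [3.0, np.nan],   # replaced by the mean of the neighbours' values in this column
              [5.0, 6.0],
              [7.0, 8.0]])
imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(X))
```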
26545, Epochs and Batch Size
20944, MNIST dataset
10517, try to tune Parameters
18061, TF IDF Vectoriser
22478, Calendar heat map
38661, Count of Siblings
11847, A spy in disguise
26910, Get the best parameters
40801, Model tunning
13413, False Positive Rate
29748, Class distribution
35604, Confusion matrix
5437, MiscFeature
9283, Univariate Analysis
24976, Save prediction
25277, Read the train and test datasets
19609, Following variables are highly correlated
31696, Feature Engineering
4272, MasVnrType and MasVnrArea
3530, Count Plot Neighborhood
12905, Title is quite important, primarily because it indicates whether a passenger was male or female
9382, Interesting: one missing value found in the test data
605, For a view into Pclass vs Sex let s use a mosaic plot for a 2 dimensional overview
10758, How many men women were there onboard
1760, It is easy to spot some obvious features to drop especially features that are uniquely and randomly assigned to each passenger
12025, Stacking Ensembling
6624, Understanding the data at hand
8148, Splitting into Validation
23539, For most image data the pixel values are integers with values between 0 and 255
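Which is why the usual first preprocessing step is to rescale them; a minimal sketch with a hypothetical batch of images:

```python
import numpy as np

# Hypothetical batch of 8-bit grayscale images, shape (n, 28, 28).
images = np.random.randint(0, 256, size=(4, 28, 28), dtype=np.uint8)

# Scale integer pixels from [0, 255] to floats in [0, 1] before training.
images = images.astype('float32') / 255.0
print(images.min(), images.max())
```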
30937, Visualizing Distribution Of Price Before and After Removing Outliers
7413, How do we define a large percentage? Try 15 first
15647, Decision Tree
43107, This is a common problem that you ll encounter with real world data and there are many approaches to fixing this issue
20411, Checking for NULL values
17540, Chose one hot features
1339, start by preparing an empty array to contain guessed Age values based on Pclass x Gender combinations
25934, Variable Description
4287, Correlation Matrix
15698, Most passengers were traveling without children or parents (76%)
22016, the value 117310
29697, Let's take them through one of the kernels in the second convolutional layer
8006, SVM Poly
8315, One hot encoding the categorical columns
1232, Largest correlation with Sale Price
31807, try out semi supervised learning with pseudo labeling
3951, Change Datatypes
39301, Export aggregated dataset
28610, Foundation
39114, Data engineering
36193, There are only 2 levels in the new customer index: a customer is a new one for the first 6 months. And we do not know the index in 27734 cases.
18754, Let's perform a left outer join; you can perform any join as per your requirement
11250, Reconstructing train and test sets
28072, String Indexer
26658, Classification report of each class
16035, we split training dataset to 80 for training model and 20 for validate model
37175, Variable importance
620, Child
9776, Parameters to adjust in Kernel SVM are as follows
27768, Examples
34546, CNN Model
16394, BoxPlot
613, study the relation between Fare and Pclass in more detail
18665, RNN Model
14888, Cabin
23649, Predict
3431, we make some dummy variables
6189, The siblings in class 1 survived more than those in class 2 and class 3
1620, Submission
2026, Ensemble Prediction
39067, using quickdataanalysis to create dummies with ease
41636, that we are done with basic data processing let s create a term document matrix for further analysis
15846, Ticket Frequency
27142, Category 2 Structure of Land and Property
9022, Unfortunately we still have not dealt with all of the null values for basement features
9265, I feel nothing requires removal for garagecars
16168, Analyze by pivoting features
24397, TTA Analysis
513, XGBoost
42813, I did not know that we can add the path to environment variables using sys; hence I was changing directories. But now I have made changes so I do not have to change directories and can import detr easily.
4264, Utilities
42274, Can't find the difference with the boxplot
32358, Building the CNN
17814, Go to top
2082, I consider the generated feature Surname
37710, CNN Model Structure
28200, Chunking
16967, A quick look over the encoded features
6292, Neural Networks
13030, Cabin
16548, Here s how you can fill missing ages using the Title column as reference
40431, Predicting Using Leader Model
25581, LotShape
39413, Fare
8007, Train Decision Tree Regression
7590, As can be expected from the large correlation coefficient of 0
16659, Exploring Target Variable
24281, Observations to fine tune our models
19898, Top 10 Sales by Shop
11069, Ticket feature extraction
25416, how to extract a date
35896, Great now let s build our simple model by stacking the following 4 layers
16962, Data cleaning
41749, MLP Dropout AdamOptimizer
2318, split out rowsets using sklearn s train test split function
33089, Modelling
39729, After yet MORE reading, I've decided that any model that includes family name will massively overfit the data
20139, Let the split be 80 20 train val split
6714, Find Discrete Numerical Features
22769, We create a DataFrameDataset class which allows us to load the data and the target labels as a Dataset, using a DataFrame as the source of data
32761, Load Data
7503, The following columns have poor correlation with SalePrice
24893, XGB Classifier
22334, Parts of Speech POS tagging
5004, It's positively skewed and peaky, with fat tails (outliers), namely to the right
30458, Save the newly expanded training data to CSV. Download this file and plug it into your existing pipeline as a bigger training set.
8161, Outlier analysis
36377, Normalize
23966, No Of Storey Over The Years
3852, We can also use qcut for creating the bins to remove outliers
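A minimal sketch of quantile binning with pd.qcut on hypothetical fares:

```python
import numpy as np
import pandas as pd

fares = pd.Series(np.random.exponential(30, size=1000))

# qcut builds quantile-based bins: each bin holds roughly the same number of
# rows, so extreme values are isolated in the top bin instead of stretching the scale.
bins = pd.qcut(fares, q=4, labels=['low', 'mid', 'high', 'very_high'])
print(bins.value_counts())
```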
40203, Prediction
5965, Is alone feature
2775, Gradient Boosting
23223, check how many features are duplicate
36464, Examples of images without any wheat heads
11656, Logistic Regression
8461, Separate Train Test Datasets identifiers and Dependent Variable
20960, Check for Null Values
12700, Age
20648, We use a 100 dimensional vector space
36350, Deep Neural Networks Convolutional Neural Networks
21847, Multilayer RNNs
13023, Pclass
4139, Defining the Sequential Model
42833, Despite the presence of bounds we are going to assume that the transformed data is normal and proceed anyway
38761, The train accuracy is 82
25953, Load Prior data
23038, Week Events and This Week Sales Relationship Analysis
39722, I use Word2Vec to find odd items given a list of items government corruption and peace
36984, It seems the range of logerror narrows down with an increase in the finished square feet 12 variable. Probably larger houses are easier to predict.
39714, Training of Word2Vec
23944, 2nd level categories middle level
26591, TASK EXPLORE STORES INFORMATION DATA
35484, Cutout data augmentation
41927, Out of all our features, we are given 8 object variables and 368 integer variables
28716, First look at some information about our data
4400, Handling Categorical Data
1622, Modeling
37083, Soft Voting
42234, There are 43 categorical columns with the following characteristics
16109, SibSp Parch Feature
14163, Since SibSp includes information about the number of siblings and spouses altogether, and Parch includes information about the number of nannies, we can extract the family size from this info
37876, Lasso Linear Regression
121, Splitting the training data
12128, LotFrontage: since the area of each street connected to the house property most likely resembles that of other houses in its neighborhood, we can fill in missing values with the median LotFrontage of the neighborhood
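A minimal sketch of that neighborhood-wise median fill, using a tiny made-up frame in place of the real data:

```python
import numpy as np
import pandas as pd

all_data = pd.DataFrame({
    'Neighborhood': ['A', 'A', 'A', 'B', 'B'],
    'LotFrontage':  [60.0, np.nan, 80.0, 50.0, np.nan],
})

# Fill each missing LotFrontage with the median of its own neighborhood.
all_data['LotFrontage'] = (all_data.groupby('Neighborhood')['LotFrontage']
                           .transform(lambda s: s.fillna(s.median())))
print(all_data)
```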
9649, Concatinating Test and Train for making Imputing and Cleaning of Data Easier
28788, Most Common words in Text
35944, XGRFBoost
27665, Preprocess the Data
31380, explore Keras ImageDataGenerator
10665, Setup the model
25578, GarageCars
4015, Choose the best algorithm for House Prices Advanced Regression Techniques
15309, Submission
29035, 80 20 training validation split
13091, Correlations in the Data
8784, Grid Params
40039, Searching for an optimal learning rate
38082, VISUALIZATION OF THE DATA
38238, have a glimpse of both the tables
30885, Break the model
24463, ABSTRACT from that paper
15354, try on Pclass
23807, Separate X y
34292, Build Convnet
6883, Missing values in the Age column can be fixed by imputing values, in our case using the average of the column
9800, Feature Importance
8971, train tickets
126, Evaluating a classification model
12385, Plotting a correlation matrix for all ordered categorical and continuous data fields vs Sale Price
22079, Creating submission df to hand in our solution
13684, SibSp and Parch
34620, Cost and Optimiser
11562, We can now plot the learning curves to check whether we are overfitting the training set
14193, RandomForestClassifier
12094, Dealing with Outliers
37575, Categorical occupy the top spots followed by binary variables
30274, Top 20 Countries as per Total number of Test
34169, Sliding Window Method
5465, But what about specific neighborhoods
24540, ind cco fin ult1 is the dominant product
31821, Despite the advantage of balancing classes these techniques also have their weaknesses
42541, Since we are looking at pairs of data, we will be taking the difference of all question one and question two pairs with this
33888, credit card balance loading converting to numeric dropping
28446, There are 43 such columns
5359, Go to TOC
1853, Prepare Data for Model Fitting
35072, Complexity graph of Solution 5
12221, And there we find that our prediction, so much off with respect to the real value, was indeed a house too cheap for its size
13531, For all 16 features
40667, Word clouds of Text
42025, Extracting the first letter of the first word
37085, Advanced Ensemble Methods
16034, X contain all predictor feature from training dataset
21075, Problem solved
11985, We finally merge those features
11084, remove outliers
38520, Wordclouds
11281, It is very difficult to make inferences from this heatmap
34372, REBUILD MODEL
17521, Model Building
19922, Dealing with Missing Values
16007, Final Analysis
13704, so this is quite interesting
19985, Using self attention
13539, Importing Librarys
19660, If you are interested in running this call on the entire dataset and making the features, I wrote a script
8221, We will be replacing the missing values with median values
19666, Collinear Features
41579, We need to add some fully connected layers on top of the base model
19155, Scale the numerical features
7142, Spliting the train data
20112, Item price trend
2979, Both Exterior 1 and 2 have only one missing value
36011, Registries
21063, Make it into a dictionary so we can read out each row; missing ones we assume are zero, like the submission sample
20113, Month number and number of days in a month
32075, Uniform Manifold Approximation and Projection UMAP
36599, We divide the data for testing and training at 90/10 since it is a competition submission; however, it is generally split in the ratio
35819, Remove data for the first three months
38813, Demonstration how it works
34672, Average revenue
12474, Fare
18010, Functions
11893, Output File Submission
10407, Get the data into better working form
18511, It's worth noting that this patient does not have cancer
25761, Make Predictions
1582, We are now ready to fit our model using the optimal hyperparameters
14518, Observations
31549, Garage columns
35351, Building and Evaluating the Final Model
1142, look at the point of visualization
23507, Perform predictions
4295, Fit a regression model with Bloomington Heights as Baseline
30976, we can view the random search sequence of hyperparameters
41768, Write a useful function
40172, The forecast object here is a new dataframe that includes a column yhat with the forecast as well as columns for components and uncertainty intervals
9869, This simplest possible box plot displays the full range of variation, the likely range of variation, and a typical value
30142, Testing
13219, Feature Engineering data science framework to achieve 99 accuracy
12943, Ticket column
6047, MSZoning: FV, RH and C are sparse classes; we have to reassign them. Missing 4 times in test.
1993, Analyzing the Test Variable Sale Price
21406, Model to Test
6053, Foundation: Stone, Wood and Slab are sparse
35428, As we are adding the data augmentation and training models several times, this process will take some time
1540, Name Feature
29896, Data Visualization
40194, Normalizating the Data
9796, The visualization of several numeric features against SalePrice better outlines the relations
9164, ExterQual
19844, The majority of people on the Titanic were between 16 and 40 years of age
42133, Confusion Matrix
36583, Merge all functions into a model
7829, Store data in csv file as below and submit your output
13740, Creating O P file
29090, USD sales weights
13964, Sex
9153, Deal with Categorical Nominal Variables
37899, Evaluation
15959, Distribution of Numeric Features
32407, Evaluation
16564, Explore
1027, To have a better idea we sort the features according to their correlation with the sale price
37875, Model Comparision Storage
14469, Go for analysis of the features and their data types
9201, Family
27299, Predict all Country Region and Province States
23581, Re training the model on the entire train dataset x y
1132, Exploration of Gender Variable
24997, Standard scaling numeric columns
40996, Input
35080, Fitting the Scaler to the x train
1695, Examining the Target column
24827, Cropping and Resize
11522, XGBoost scores
38966, seperate id column
15531, Add variable for number of family members on board
38634, First we visualize the first convolutional layer with 30 filters
16045, Data Exploration Analysis
42647, ids with target error
719, This is interesting
22626, Inference
26470, By setting include_top=False we essentially removed the fully connected layers from the pretrained VGG16 model
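A minimal sketch of that flag with Keras (assumes TensorFlow is installed; the ImageNet weights download on first use):

```python
from tensorflow.keras.applications import VGG16

# include_top=False drops the fully connected classifier head, keeping only
# the convolutional base, which can then be reused for feature extraction.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.summary()
```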
15680, Basic Ensemble Modelling
17036, Predict Missing Age with KernelRidge
32892, Ensemble model metrics on validation set
17983, we look at how the gender within the ticket classes affects the chance of survival
17748, Clearly, titles like Master applied to small children, but all titles have a distinct and quite different median
40713, Normalize Pixel Data
29421, BAG OF WORDS introduction: a bag-of-words model represents text by the presence of known words
22393, CNN
36293, KNN Classifier
7406, OverallQual GrLivArea living area square feet GarageCars GarageArea TotalBsmtSF 1stFlrSF etc
28152, That's a much cleaner graph
25486, Define training features
6510, Temporal Variables Date time variables
11961, We still have some small steps remaining to create the test set similar to the train set
30986, The boosting type should be evenly distributed for random search
5897, For test data
37687, Looks like one I guess
6433, There is a subtle difference between operations involving a single column and the last operation involving multiple columns. The last operation returns a DataFrame, while the other operations return a Series. If you want the single-column operation to return a DataFrame, you can do it as follows.
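A minimal sketch of the single-bracket vs double-bracket difference:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
print(type(df['a']))          # Series: single brackets select one column
print(type(df[['a']]))        # DataFrame: double brackets keep the frame structure
print(type(df[['a', 'b']]))   # DataFrame: multi-column selection
```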
30894, print
40835, Choosing the appropriate model for regression
19820, Bayesian Encoders
29085, item2vec
5525, Fare Feature
9432, Violin plots
28536, Below is the number of null values
16670, Model Training
24062, Plot hyperparameter importance
26706, Plotting Sales of each category across the 3 states
15665, Make model submission
20167, Let's find the score using the reduced dimensions, keeping the same number of samples to compare accuracy
25597, MNIST Dataset
27756, Removing contractions
34274, First we ll split the training data into testing and training sets
22000, KNN Classifier
42451, Interval variables
27364, working with item category
16610, Feature Age
7511, Before going to the final detection there s an additional problem to address
20467, Annuity distribution
31598, Since we have downloaded the necessary packages, let's import the data
12680, Join data files
545, convert categorical to numerical get dummies
14421, Calculate Age per Pclass and Sex for training and test datasets
736, Train Set
32826, We start by importing the libraries to be used and the dataset provided
24182, Better but we lost a bit of information on the other embeddings
36561, Create Model
19937, Since we have one missing value, I decided to fill it with the median value, which will not have an important effect on the prediction
14007, K Neighbors
20549, Blend all the models and let s get the predictions
9608, Getting details of Mr Mrs etc
19605, Univariate analysis of Numerical data
8524, Lot Frontage
15512, Cabin
15487, Neural Network
27495, Load data
3454, That leaves us with 67 1st Class and 254 2nd Class passengers to find Deck values for
41874, Building the simplest Decision Tree Classifier
3609, CONCEPT: Variables that are strongly correlated, such as TotalBsmtSF and 1stFlrSF, could be an example of multicollinearity, which I believe means we need to consolidate these into a single attribute rather than keeping them separate
42305, Nearest Neighbors
5998, Remove outliers
21337, Check the data types
3710, HyperTuning
21506, Image with the smallest width from training set
13134, We can clearly make out at first glance that a lot of passengers embarking at S belong to the 3rd Pclass
39026, Cross Validation
301, Let's fill the NAs with specific values
25962, Reordered Ratio
13504, Feature encoding
41393, Target
30641, Which of the features are we going to use in the ML models?
24438, Embeded
20066, Merging sales dataframe and item shop dataframe into one dataframe by item id and shop id as a key
42060, Using seaborn to display violin plot
7712, Checking Correlations
16947, we create the ColumnTransformer object to fit and transform the data
7056, Type of sale
30754, Fixing max features
6276, Some of these can be merged with other Titles such as Mlle and Ms with Miss
31919, The augmentation generator only uses the train portion we got from the split
9066, perform linear regression on all the data at once
29070, Two way categorical features interactions
6997, Area of the Lot
24450, Creating the Model
36716, A picture is worth more than words
10532, We have removed the outliers manually
32760, Inference
42937, Normalize the data
16561, Fare
30335, We create the validation and train file
33241, Now that we have a language model trained, we can create a text classifier on it
9619, Encoding
8010, First we use ada boost with our random
10962, Lot frontage
31846, create type of item names
1698, Visualizing the locations of the missing data
27910, Prediction 1 Selected Numeric Data
26962, Daily Sales
25878, Bigram plots
28330, Identifying the missing values in installments payments
18271, FEATURIZING TEXT DATA USING TF IDF
4680, Filling missing Values
27109, Null Values
27653, One Hot Encoding
26249, Generating Predictions from Test Data
32569, we can evaluate the baseline model on the testing data
27289, Model without intervention
23799, There are many imputation methodology some simple ones such as mean median and mode and some more complex ones such as multiple imputation by chained equations
11115, Label encode features
17765, There are train and test data as well as an example submission file
32990, Define simple classifiers
30580, Function to Handle Categorical Variables
23625, Basemodel EfficientnetB5
40831, A lot of inferences that we have already hypothesised could be verified using the following heatmap and correlation matrix
35549, If we consider only two models, then the scores vary
10907, Grid Search for Random Forest
15981, we can drop the original cabin feature the temporal one and the numeric cabin feature
40037, Model structure
18227, Utils for models
43386, Uii, the threshold is given by a max of 16 that is allowed to be added as a perturbation per pixel per image
5890, Test data
15554, K Nearest Neighbours
19397, Visualize a random sample
25749, It does not; the network is still very good at figuring out the original size of the train image
19963, Plot learning curves
16002, Fare
19616, One hot encoding
27898, Fitting on the Training set and making predictions on the Validation set
37298, Lemmatizing the text
7091, From the plot we can draw some horizontal lines and make some classification
24577, Helper Functions
5352, Display the relationship between 3 variables with shape and color
27872, Sales and department distribution per store over time
15703, The histogram tells us that most fare values are between 0 and 100
39878, swarmplot It is easier to interpret than stripplot
2943, Inputing Missing Values
35145, Experiment Size of the convolution kernels
21320, zillow data dictionary
22345, SVD TruncatedSVD
25272, Transfer Learning
22020, Select the most important features
28532, BsmtFullBath
24327, Based on the Feature Importance plot and other trial and error, I decided to add some features to the pipeline
6385, We can use the numpy library to calculate covariance
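A minimal sketch with np.cov on two toy vectors:

```python
import numpy as np

x = np.array([2.1, 2.5, 3.6, 4.0])
y = np.array([8.0, 10.0, 12.0, 14.0])

# np.cov returns the 2x2 covariance matrix; the off-diagonal entries are cov(x, y).
print(np.cov(x, y))
```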
8410, Year Built Vs Garage Year Built
7528, Feature engineering
11724, ExtraTreeClassifier
13767, Age Fare Pclass analysis
40683, Visualize distance from cluster centers feature space
31806, Get test set predictions and create submission
22092, Define any desired architecture with feed forward function
32286, Display the distribution of a continuous variable in a standard deviation boxplot
27548, Display interactive average based on selection of bar
9744, SipSp
22827, However, this doesn't mean that the training set contains all of the shops present in the test set
14003, We can see that people who embarked at port C paid higher fares and had better ticket classes and cabins than those from S and Q
27086, LDA with another way of visualisation
3757, Feature Engineering by OneHotEncoding
15008, Sex
16920, Export data
17866, With the fitted second level model we do the prediction for the test data
4642, Count Plot
16267, Ticket
12144, Basic evaluation 2
32244, It won't always be the case that you're training the network fresh every time
30042, Build SUB Ensemble
16775, Preprocessing Test Data like train data
23172, The PassengerId, SibSp and Parch data types will be kept the same
17599, Finally fitting our training set and making predictions
3282, Lasso Regression
34040, We have 878049 Observations of 9 variables
21082, Replace missing values with the mode
8683, MERGING THE TRAIN TEST SETS
26350, Columns with more than 40 positive or negative correlations with SalePrice
7380, Merging the unmatched passengers using surname codes
27340, Model Layers
42847, New Zealand
9051, I feel like I could merge 3 and 4 into a single category as Above 2
3445, The situation is different for the female titles
433, GarageType GarageFinish GarageQual and GarageCond Replacing missing data with None as per documentation
36938, Correlation Between The Features
40069, Looking at the correlation chart, we ought to drop columns that are below a correlation threshold
18727, Let's load our learner from the exported pickle
28315, Identifying the missing data
11118, Feature importance from RF Model
19743, Pytorch Data Loader
12241, Functions
7842, Well this is something very easy and we have already partly done it
40272, Neighborhood vs Sale Price
33328, Make categorical prediction
31851, Shops Items Cats features
37208, Submission
4446, At my first attempt, I dropped all the columns that contained missing values
5345, Display a number as a scorecard
6365, I tried numbers that round alpha to 0
37652, Image augmentation with ImageDataGenerator
21754, Age Study
37614, Converting Categorical values to Numerical Values
41977, To read lines from 101 to 110
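One way to do this with pandas, assuming a CSV with a header row ('train.csv' is a placeholder path): skip data rows 1-100 while keeping the header, then read 10 rows.

```python
import pandas as pd

# Row 0 is the header; skipping rows 1..100 and then reading 10 rows
# yields data lines 101 to 110.
chunk = pd.read_csv('train.csv', skiprows=range(1, 101), nrows=10)
print(chunk)
```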
5124, Decision Tree Classifier
15170, we shall calculate the F 1 score
39989, Fixing Skewness
42782, Learning rate decay
31291, There are a lot of NaNs in the macro data
16398, Pearson s Correlation Coefficient
35161, Plot the model s performance
28326, Identifying the categorical and numerical variables in the previous application dataset
28322, Examine the credit card balance dataset
13360, View profile report of test set
39829, Convert these predictions into a submittable csv file
28470, CREATING NEW FEATURES
13394, Drop the Sex variable
20765, let us combine both model predictions
3614, Making sure that the transformation was done correctly
24804, Submission
17705, FILLING THE NAN VALUES IN TEST DATASET SIMILAR TO TRAIN DATASET
11936, Looking at outliers
6166, Fare
7732, Extracting Train and Test Data again
35474, Skin cancer At different Age Group
20098, Item Count by month in each shop and item
15412, have a look if gender had an influence on survival rate
1217, Most common string transform
5602, Outside
36805, With this example in mind we feed it into the tokenizer
27831, Visualization
25890, The Fog Scale Gunning FOG Formula
11431, Bivariate Methods
29317, Women survived more if they embarked from port Southampton or Queenstown
27278, Since this CODE GENDER value does not appear in the test set, we can drop those rows from the train samples
18955, Display distribution of a continous variable for two or more groups with all different boxplot visualization
40381, Modelling
24378, Save the SVM for later Use
29622, Imputing with the median. Because some attributes contain outliers, filling with the median has less effect on the attribute distribution.
18006, Interestingly gender is the least important feature in the model we built which is counterintuitive considering what is largely believed about the tragedy
21261, Save recommender Model
14564, Fare Fare in test data is a numerical variable lets impute with median
3308, Explore outliers with ElasticNet
25993, Appendix PCA of VGG16 Embeddings
8982, MSSubClass
37303, Trigrams
21443, Prediction
43138, How'd We Do?
24436, Chi 2
2749, Categorical values need to be treated differently
23802, Modelling with Xgboost Blackbox Model
6466, Join the numerical and categorical pipelines
34042, The Dates column type is String. It will be easier to work with after parsing it to Datetime.
16981, GridSearch Hyperparameter tuning
39095, The Flesch Kincaid Grade Level
34006, temp
18019, Prepare training data
24799, Wavenet
35049, What does an image from this dataset actually look like?
7427, I am not going to use regression models for now because of the assumptions they make on the data
2541, An example with another activation function, Leaky ReLU. We can create this new function from ReLU.
9223, Random Forest based Model and Prediction
28244, Let's read all the files and have a glimpse of the data
22482, Andrews curves
7053, Type of heating
37809, Import Libraries
28724, Items category
8016, Pclass
2467, LASSO Regression
26738, Plotting sales over the days of month
29876, Commit now
192, Ridge Regression
22043, The price of the house could also be affected by the availability of other houses in the same time period
24371, The distribution is right skewed
39057, get the validation sample predictions and also get the best threshold for F1 score
28234, Split the train and the validation set for the fitting
19704, Modeling
24399, tweak all the layers
26334, Bag of Words Vectorizer
9395, History storage
1018, criterion: a function which measures the quality of a split
8158, By looking at the correlation plot we note that the following features are highly correlated to SalePrice
9188, Average Age per Class
31294, Yet we have to write another algorithm to purify that set and get the final time categories for the macro data
15881, Make Predictions on Test Set
30933, In one sentence we can also notice that there are http addresses within the data
20299, Fare Range
33733, Writing a custom dataset for train and validation images
43003, Data cleaning: remove irrelevant data
40545, RandomForestClassifier
43285, Evaluate the competition metric on all the data from the OOB predictions
13433, Sigmoidal
2795, Finalise Trained Model
16552, Let's do it
3573, I think that outlier
34026, Delete the count, log count and boxcox columns
12334, Alley
5523, Embarked Feature
19838, Box Cox Transformation
22503, Defining the loss function for the gradient descent optimizer to get the optimal values of the logit variables
17669, Tickets with even numbers mean that you are on the left side of the boat
35923, Compile and summarize the model
15353, Let's do some partial plots
16897, In order to eliminate all the titles which have really low occurrences, we should recategorise them
14466, back to Evaluate the Model model eval
10234, Only one value is missing from the Fare column, which we can fill with the median fare
16864, Preparing for prediction, including FE
8947, Fixing Electrical
32735, Cleaning data and manual FE
39727, Titles
9681, check average out of sample score
20930, open the session
25400, NORMALIZATION
10252, Go to Contents Menu
23916, Readability features
4318, As we look closer, it appears that there are more survivors around the higher fare price range as opposed to the lower ones. But this we already know from the other features.
1584, Our last step is to predict the target variable for our test data and generate an output file that will be submitted to Kaggle
37829, Logistic Regression
21735, Before submitting run a check to make sure your test preds have the right format
17603, KNN
14996, Distribution
10814, All small categories should be taken out
37920, Target Model Creation
20961, Splitting Dataset into Training set and Test set
13421, Bivariate Analysis
17569, Support Vector Machine
3250, Creating functions for effecient data visualisation
15132, Engineering Feature
23764, First look at cat pictures
28886, Feature Space
43294, Removing the Day Column
21132, now add the new interaction to our main categorical data set
3833, categorical features
36478, To construct the private test set I tried many different things and algorithms
28485, MEMORY CONSUMPTION
22001, Notice that the dataset contains both numerical and categorical variables
41573, Visualizing some Random Images
15308, Training XGBoost Model Again
22685, Main part: load, train, predict and blend
40656, The next question is how to handle the object data types
22911, Confusion matrix
27161, Category 12 Sale Type and Condition
33716, FEATURE PARCH
41953, Convert text to lowercase
287, Ticket
39241, Visualization
10203, One hot encoding
40324, URLs
22758, DATA AUGMENTATION
24761, ElasticNet Regression
18593, By default when we create a learner it sets all but the last layer to frozen
11131, This chart does not look linear or at least the line is not matching the data across the entire x axis
13174, We have 929 unique values in Ticket; let's just drop this column
12804, Import Necessary Libraries
6998, Masonry veneer area in square feet
28225, Here we construct the model architecture
16847, The Fare variable
30362, There is data for the United States by province; let's make models
24420, Floor of Home
37476, One hot encoding creates sparse vectors
11694, Single Imputation for Numeric columns
13719, CONCLUSIONS
24103, Dropping the less important features
4849, Kernel Ridge Regression
21233, Build our base model
22842, Aggregating sales to a monthly level and clipping target variable
36990, Number of products that people usually order
29131, Run the code
10408, Plot all significant features along with a regression line
41402, AMT GOODS PRICE
14934, The format of submission
26132, Modeling
42068, Using the sklearn label encoder for preprocessing
20917, Crosstab
25586, Modelling
10229, For this data set I will be using the Random Forest Classifier; we can use other classifier models, but for the sake of simplicity I use only one model here
41612, Check the unique values in that column to make sure that all numbers 0 30 are accounted for before removing the NaN rows
21921, Thus the model achieves its best generalization score when the number of features is 270
31796, By using the functional module we can access all layers in the backbone, which is evident when executing the model
12038, First of all, I'm going to look at how many variables have less than 50 missing values and fix them
21586, Count of rows that match a condition
29174, Visualize some
21507, I would like to play with some augmentations from
16270, The Pipeline
22152, Not all features are like that though
39775, Importing utils from sklearn
20963, Encoding Categorical Data into Continuous Variable
21228, Define loading methods
41564, See how kernel PCA fares at reducing complexity; we use only 10k rows
25051, Training
12147, Training the model 3
20242, We also encode the sex column
43130, Chosen Examples
23269, Name
18704, A new CSV file cleaned
16854, Let's double check for any categorical data
9485, Validation Curve
39985, SalePrice vs GarageArea
32315, From the df train dataframe I have created a dataframe X which contains all the features and a numpy array y which contains values of survived passengers
23478, Exporting output to csv
41583, In this section I have done fine tuning
15688, Convert categorical columns to integer
12376, In this we have added an extra visual column for the box plot
24388, first fill all numeric values with the median
4333, It appears those who embarked from Southampton took the biggest toll, followed by Cherbourg and then Queenstown. This might need further investigation: does it have to do with the class they travelled in, or whether they were in a cabin or not?
8585, Separating target feature
3901, M Estimate Encoding
24451, Model Evaluation
1899, GaussianNB Model
10028, LightGBM
1264, Fit the models
12210, Putting everything together
9642, 10 (Excellent Quality), therefore a higher Sale Price
31693, Missing Data
631, Very much so
42260, Probably the same people that we just determined were new customers; let's double check
13120, feature Analysis
38526, Loading the dataset
9704, Missing values in categoric features
23053, We are at 98% accuracy. Good!
24885, It is integral to merge both train and test datasets before feeding input to the model
17453, XG Boost
34073, 1st class passengers are older than 2nd and 2nd is older than 3rd class
23081, Pairplot scatter matrix
25010, First Time Sale
28448, COLUMNS WITH MISSING VALUES
2522, Hyper Parameters Tuning
14870, Who was alone and who was with family
19056, We can now easily collate all the data into a DataBlock and use fastai2's inbuilt splitter function that splits the data into train and valid sets
1231, Numerical values correlation matrix to locate dependencies between different variables
19811, If we performed One Hot Encoding on the variable Cabin, which contains 148 different labels, we would end up with 147 variables where originally there was one. If we have a few categorical variables like this, we end up with huge datasets. Therefore One Hot Encoding is not always the best option for encoding categorical variables.
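A minimal sketch of that blow-up, with a made-up 148-label column:

```python
import pandas as pd

# Hypothetical high-cardinality column with 148 distinct cabin labels.
cabins = pd.DataFrame({'Cabin': [f'C{i}' for i in range(148)]})

dummies = pd.get_dummies(cabins['Cabin'], drop_first=True)
print(dummies.shape)   # (148, 147): one original column became 147 columns
```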
37048, Implement the best model
41407, AMT INCOME TOTAL
21620, Create rows for values separated by commas in a cell with assign and explode
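A minimal sketch of the assign-then-explode pattern on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2], 'tags': ['a,b,c', 'd,e']})

# Split each comma-separated cell into a list, then explode: one row per value.
out = df.assign(tags=df['tags'].str.split(',')).explode('tags')
print(out)
```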
37794, Drawing predicted data
33814, Make Predictions using Engineered Features
33356, Timeseries decomposition plots monthly sales
4958, LightGBM
12121, MiscFeature data description says NA means no misc feature
3913, MSZoning RL is by far the most common value
8703, For handling skewness, I take the log transform of the features with skewness above 0
33758, MODEL Creation
32079, Figure 3 Distribution of the number of unique levels for a categorical variable
32883, Normalizing features
37463, Evaluate the model
16021, Very clearly there is an outlier around 200
30571, Correlations of Aggregated Values with Target
2197, GradientBoostingRegressor
17787, Let's set the female Dr as one of the typical female roles
27141, In MSSubClass, the newer 2-story and 1-story PUDs have on average a higher sale price than the others
14199, Feature Engineering
8525, Garage Features
42938, Saving the data to feather files
25300, Here we go
657, Test and select the model features
40700, Selecting and Engineering Features for the Model
18320, Train Validation Test Split
29828, Basic DNN
3482, This model appears to do a slightly better job at picking out the true survivors
15991, Random Forest
31945, Check missing value
22652, Gradient Descent
8511, There you go SalePrice looking off the charts
13053, ExtraTrees Classifier
12203, Please note that this transformer takes a parameter that determines whether or not to create a new feature
27007, Model
4339, Second class
37392, SalePrice is not normal
28682, Miscellaneous
24151, Morning and Afternoon Products
7338, KNN
19658, Feature Primitives
27185, It is also worth pointing out that the actual number of scored test rows is likely to be much lower. According to the data page for the question pairs data, most of the rows in the test set use auto-generated questions to pad out the dataset and deter any hand labelling. This means that the true number of rows that are scored could be very low.
23741, Feature Scaling
30693, Test the Models
28196, Stemming Words
7273, Fare Feature
13569, Parch feature
37647, Adding the number of photos and the number of words in features and description
37087, Stacking Or Stacked Generalization
38898, Split Dataset in Batches
9861, Value counts for the numerical part of the total data table
27002, Untreated text
25255, Show some samples
3969, XGBoost
25598, Multivariate analysis
12329, GrLivArea
10776, Not bad at all for a KNeighbors model on such a complicated dataset
17932, female male is True
7554, Gradient Boosting
4321, Combining Fare and Pclass
12787, Evaluation report
42071, Submitting
35646, The below code snippet is taken from my previous kernel and also from this wonderful kernel by SRK
566, eXtreme Gradient Boosting XGBoost
13122, Using hue
11291, Feature Engineering
17372, Embarked C
18465, Load the datasets
8677, We can drop the Id column as the frames are already indexed
39088, Training
12673, Transforming instead of skewing
27252, Construct the Models
41235, Clean data
24302, Before training the model we need to compile it
26057, We can then loop through all of the examples of our model's predictions and store all the examples the model got incorrect into an array
23943, 3rd level categories
36959, K Fold Cross Validation
40150, There are 172817 closed stores in the data
34161, Understanding russian using googletrans
13133, Survival by Embarked and Pclass Interesting finds
28727, Total revenue: representation of total sales
40488, Ensemble Stacked Generalization
9338, Final tips
13215, A beautiful curve: almost 90% AUC in both class 0 and class 1; our model is very stable
27447, It s still unclear what the platform means but it s possible that it s things like computers phones tablets etc
38212, OneHot encoding
22280, Keep in mind that the model accuracy could be improved by finding titles for each of the passengers
17737, The vast majority of recorded cabin numbers came from first class passengers
23174, Model Building and Evaluation
12911, Preview test set
6422, There are outliers in the dataset; these will be treated in the data engineering section
16434, Gaussian Naive Bayes
11658, Suport Vector Machine
19757, Data dimensions
24952, Test set
14527, Fare values are highly skewed and hence need to be treated with log transformation
2325, Using GridSearch we can find the optimal parameters for Random forest
17869, Feature Exploration Engineering and Cleaning
8364, Check if we disrupted the distribution somehow
4605, Other features
2802, detect outliers: detect rows with outliers
5978, AdaBoostClassifier
16522, Handling categorical features
35098, documents topics csv
10340, Based on the previous correlation heatmap, GarageYrBlt is highly correlated with YearBuilt, so let's replace the missing values by medians of YearBuilt
2399, Common ways to encode categorical features OneHotEncoder OrdinalEncoder
32868, Dataset after feature engineering
19954, Cabin
312, Fare
15138, Engineering
37027, More details on brands with a treemap
22137, GarageCars Size of garage in car capacity
5845, Feature-to-feature correlation
26811, Save all states for honest training of folds
16989, ExtraTrees Classifier
26897, Score for A7 16073
30559, Checking the skewness of all the numeric features and log-transforming any that exceed the threshold
13720, FEATURE ENGINEERING
14189, KNeighborsClassifier
18922, compare the accuracies of each model
13898, Plot of sizes of different age groups
28127, Stopwords Removal
15370, Embarked: mapping is required to convert the string values of the Embarked column to numeric
5349, Display a time series with a scorecard in it
25979, Load Models
13836, Which features are categorical
33103, Fit the models to the whole dataset
9216, Survivor by Fare Price
38961, Training Function
29227, Linear Regression
11244, Right tailed
3615, Let's standardize the numeric features
2781, Let's ask dabl what it thinks by cleaning up the data
8535, UNIVARIATE FEATURE SELECTION
11403, Selecting a Single Column
10629, DecisionTreeClassifier
25423, Most of the dates overlap
18735, Max feature and min samples leaf
2372, Feature selection with Pipeline
17469, Fare
40829, A non-linear relationship between temperature and hour of the day according to different seasons is evident from this chart
4111, Merge the training data and test data
226, Model and Accuracy
40461, Exterior
25594, Fully Connected Layers
3231, Violin Plot
28796, For a full understanding of how to train spaCy NER with custom inputs, please read the spaCy documentation along with the code presented in this notebook. Follow along from Updating Spacy NER.
43305, Investigate numerical columns
42657, After this split we can now draw violin plots
14931, Model Comparison
598, This is a tricky feature because there are so many missing values and the strings don't all have the same number or formatting
18659, Build Model
11219, We have bumped up the predictions and they look correct so far; now to verify on the previous chart
408, Adaboost
42075, Five models are fitted and found to have different levels of accuracy
21195, Compute the loss
16279, The Final Model
2393, Difference between Pipeline and make_pipeline
8117, Random Forest
19711, Preparing Y train
43375, Training the Tensorflow graph
11848, Label Encoding Manual
1190, We re-split the data into test and train
5620, Using sklearn preprocessing LabelEncoder
32988, Neural network
13892, Passenger s Name
15363, parch The dataset defines family relations in this way
5982, Random Forest Classifier
21799, Comparison table
5954, Edit font
2358, Parametric Generative Classification
951, Helpers via Python Classes
19142, Model 3 Input Sigmoid BatchNormalization 512 Sigmoid BatchNormalization 128 Sigmoid output
12308, Pearson 1 Spearman 1 Pearson 0
26838, Year to Year Trend
4949, Let's get all the dummies
20286, Slicing Initials From Name
27360, Testing the model performance after trimming all values greater than 20 and less than 0
2684, Forward feature selection
39253, Export data
28521, YearRemodAdd
3787, Pairplot
34260, Plot Lags
41371, Sale Price FR3 CulDSac FR2 Corner Inside
2248, Keras and TensorFlow
26055, Afterwards we load the parameters of the model that achieved the best validation loss and then use them to evaluate our model on the test set
1250, plot the SalePrice again
14426, go to top of section engr
15944, Name
7756, Transformation Pipelines
15319, Since we have explored all the features in our dataset now we shall draw close comparisons with SURVIVED feature to help us draw some inference
2949, Model Parameters tuning with RandomizedSearchCV
17809, Split the data
22710, Fetching the variance ratios for PCA over the given dataset
11412, Use Case 4 Tableau Data Visualisation using Sankey
24704, Create submission
6211, Random Forest Classifier
31400, Create Predictions
25732, set Loss function and optimizer
2513, Linear Support Vector Machine linear SVM
107, name length
22821, we inspect the item price field for low priced outliers
28732, TOP 25 items Solds
2818, Train a quick RandomForestRegressor to check the feature importance
13083, Creating the train and test datasets
17946, C
29005, Hope you enjoy
1334, We can replace many titles with a more common name or classify them as Rare
42187, The predict method returns a vector with the predictions for all the dataset's elements
32827, In this part we define some functions which we can use later to make the process smoother
23246, We fill Age's missing values with the median
14265, Radial Support Vector Machines
1338, Completing a numerical continuous feature
30390, Tokenizing
16951, EXPLORATORY DATA ANALYSIS
4159, When doing a count transformation of categorical variables, it is important to calculate the counts over the training set and then use those numbers to replace the labels in the test set
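A minimal sketch of train-only counts mapped onto the test set (toy data; unseen test labels fall back to 0):

```python
import pandas as pd

train = pd.DataFrame({'city': ['A', 'A', 'B', 'C']})
test = pd.DataFrame({'city': ['A', 'B', 'D']})

# Counts are computed on the training set only ...
counts = train['city'].value_counts()
train['city_count'] = train['city'].map(counts)
# ... and reused as-is on the test set; 'D' was never seen, so it gets 0.
test['city_count'] = test['city'].map(counts).fillna(0)
print(test)
```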
37099, Although the Z-score and IQR methods suggest several outliers, for a while I'm going to focus on the outliers whose removal is recommended by the dataset author
7484, Fill the missing fares with the median
2889, First find the number of missing values in each feature
32546, SalePrice Distribution
28099, Fit the Model
26836, Monthly Demand
41787, Data augmentation
14833, Hyperparameter Tuning Grid Search Cross Validation
33337, Quick look at items category df
3172, I've augmented the Titanic Dataset (8192) to be able to compare the performance
22649, Log Odds
33264, Fit model
30263, The F1 score
12195, A Pipeline step by step
20505, We also need to pad the tweets that are less than 220 words which is essentially all of them
22114, Choose alpha with better score
11887, KNN
33513, Germany
13613, For target mean encoding we are going to replace categories with their mean target
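A minimal sketch, assuming a pandas DataFrame `train` with a hypothetical categorical column `Embarked` and binary target `Survived`; in practice the means should be computed out-of-fold to limit target leakage:

```python
# Mean target per category, learned on the training set only.
means = train.groupby("Embarked")["Survived"].mean()
train["Embarked_te"] = train["Embarked"].map(means)
# Unseen test categories fall back to the global target mean.
test["Embarked_te"] = test["Embarked"].map(means).fillna(train["Survived"].mean())
```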
20532, create a few more features by calculating the log and the square of the features
33574, Data Preprocessing
9053, This looks rough
7902, First we run this loop to detect the correct number of Neighbors in KNN
7469, Importing the packages that I need
16697, Convert females 1 and males 0
3438, Others like military or noble titles we ll group together and create indicators for
38929, Deleting columns which have very high frequency of Na
8127, Relationship with numerical variables
36989, There is no missing data in order products all dataset
8845, Machine learning models generally want data to be normally distributed
42769, Name Length
9799, Stack Model 1
38560, Modeling and Prediction
3315, Ridge
24533, Number of products by customer index
43159, Define the model
33140, check if the model looks the way we want
12834, We already have the Survived column, so we drop the column named "survived or not"
5131, Relationship between values being missing and Sale Price
8812, Some useful information
9836, Logistic Regression
21366, See Where My Model Scored Jaccard 0 for Positive and Negative Sentiments
40088, Scaling shuffling and splitting
12175, Data analysis
15862, Model training and score prediction. The cross-validation strategy is a k-fold scheme. For each fold we make the trained model predict probabilities for the test set. With 5 folds this means each row in the test set receives 5 prediction probabilities from 5 versions of the model, each fitted on a slightly different training dataset. This gives a better score than just training on the full train set and predicting for test (see the sketch below).
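A minimal sketch of that scheme, assuming numpy arrays `X`, `y` and a test matrix `X_test`; LogisticRegression is a stand-in for the actual model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
test_pred = np.zeros(len(X_test))
for train_idx, _ in skf.split(X, y):
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])
    # Each of the 5 fold models contributes 1/5 of the final probability.
    test_pred += model.predict_proba(X_test)[:, 1] / skf.get_n_splits()
```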
36115, Let's deal with numeric null values now
29057, Source image
23661, Rolling Average Sales vs Time Texas
37764, Technique 9 Memory profiling
7574, Distplot for SalePrice and SalePrice Log
36001, Duplicated columns
27535, Display the confidence interval of the data; used on graphs to indicate the error (sorted)
28178, Entity Detection
5686, Cabin
10849, Encoding the finalized features
36406, There are quite a few variables which are probably dependent on longitude and latitude data
17628, Embarked Cleaning Fill missing Embarked values by taking mode of Embarked
10397, Prediction on Stacked Layer 1
40666, Importing Data
35560, Classes 0 2 can be predicted extremely well using just the image meta data
9763, Encoding Data for Analysis
21166, Label Encoding
18259, Quadratic Weighted Kappa
32370, Diagnosis Distribution
34530, MostRecent
27336, Normalizing Data
35325, Submission
41470, Feature Age
9013, Functionality
37326, Selection of filter parameters for convolution layer of New Module
28821, Sales per week of the year
32175, EXTREME FEATURES EXAMPLE
24526, Customers attraction by channel
12076, Feature Scaling
32318, Since the test
9360, Drop Parch SibSp and FamilySize features in favor of IsAlone
7337, Build Model
27204, Age
7432, Compared to the random search model, the grid search model performs a little better
26457, RF Prediction for test dataset
7336, Convert values of Embarked and Ticket into dummy variables
16340, Gaussian Naive Bayes
28454, Analyzing columns of type object
27538, Display the distribution of a continous variable
11004, At this point we can create a new feature called Family size and analyse it
255, Model and Accuracy
18902, Pclass Feature
32198, Cleaning Item Category Data
19323, Data Normalization and Cleaning
33342, Viz of sales per week month of shops and item category columns
7956, Use best hyperparameters and train best model
38763, The train accuracy is 81
13411, Recall
25722, Import Library
2676, Fisher Score chi square
12344, GarageCond Garage condition
6316, XGBoost
7627, Loop over GridSearchCV Pipelines Ensembles
31002, One of the EDA kernels indicated that structuretaxvaluedollarcnt was one of the most important features
39042, This should be further investigated I reckon but let s now forget about group 0 and consider the buildings with most entries having proper ids
24000, Stacking
42099, Converting y train to a categorical variable; you ask why?
16579, Fill Missing data
20759, BsmtFullBath column
31569, transform is ToTensor
19877, Min Max Scaling
7506, Most missings can be understood from the data description
35937, Ticket
750, Create the model architecture by compiling input layer and output layers
4857, Stacking
9518, PClass
10581, Random Forest
23604, the more documents you have the better
12894, Age
29524, Naive Bayes
20407, Distribution of data points among output classes
21472, As more and more examples are generated we need to generate textID values for them
34162, It is clear that most sales are related to electronics, especially videogames and consoles
1419, Free vs Survived
6094, take a look at garage area
41742, Prediction on validation dataset
5603, Find
31244, model train
9778, Random Forest
14542, There were a total of 1309 passengers onboard
15550, preparing Data
14876, Looks like there is a general trend that the older the passenger was the less likely they survived
33761, Examine Dimensions
16683, Distribution of categorical variables
16188, We can also create an artificial feature combining Pclass and Age
20231, There are 177 missing values in the Age column; we impute them in the feature engineering part
12270, XGB
38648, Enumerating and Converting
21389, Predict on test dataset
16095, Name Feature
13440, Sex Females have more chance to survive
14123, Sex and Pclass
21842, Vanilla RNN for MNIST Classification
2404, That is a very large data set We are going to have to do a lot of work to clean it up
20138, One hot encoding the target variable, i.e. y train
33789, Those ages look reasonable
5676, As the chance of survival is higher for Pclass 1, we remap the numerical values so that more weight is given to Pclass 1 instead of Pclass 3
32888, Create new datasets with the predictions from first level models
11619, Ages 20-30 should have the highest survival rate since they are in the healthiest age range
12160, Max Features
24754, Class imbalance
42419, Build Year
10863, Taking a look at the target variable
11851, Importing libraries
3896, Scatter plots
22511, look at the straight average
7064, GridSearch for Ridge
12256, Evaluation
22670, Most Common Trigrams
37912, Train Test Split
38138, Generate test predictions
31885, normalizing data
1591, Age
10082, We have now removed the skew and could apply normalization if needed, but since all values are in the same range after the log transform, we can go without normalization
14342, Try SVM
28019, RF
10414, Create pipeline with models
25659, Conclusion Work in Progress
29380, Prediction on Test Data
23911, First scores with parameters from Tilii kernel
20091, Replacing outliers with value of the row having similar condition
2484, Sex Categorical Feature
27160, PavedDrive Paved driveway
19466, begin by printing a preview of the dataset and look at its size and what it contains
10453, XGBoost
24711, lets look at the main variance directions of the PCA
6701, Our dataset features consist of three datatypes
4462, Combining the 2 datasets
13795, Voting Classifier
41872, Showing the best parameters combination
7972, Create new feature combining existing features
31022, Remove Emoji from text
37504, now I randomly try to find out the relation with SalePrice for different categorical data
14925, Logistic Regression
34223, DataFrame Format
5438, Are we done
36198, let s tokenize the first tweet again and add the special tokens as well
20831, We ll replace some erroneous outlying data
14288, ROC Curve
15639, Re evaluate the on new features
29900, Designing Neural Network Architecture
30467, Tune multiple models simultaneously with GridSearchCV
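One common way to do this is a pipeline with a swappable final step; a hedged sketch with two illustrative estimators:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipe = Pipeline([("clf", LogisticRegression())])  # placeholder step
param_grid = [
    # Each dict swaps in a different estimator plus its own grid.
    {"clf": [LogisticRegression(max_iter=1000)], "clf__C": [0.1, 1, 10]},
    {"clf": [RandomForestClassifier()], "clf__n_estimators": [100, 300]},
]
search = GridSearchCV(pipe, param_grid, cv=5)
# search.fit(X_train, y_train); search.best_estimator_
```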
7601, SalePriceLog as target
8735, Sales
28219, let s reduce memory usage
33155, we calculate the orange area by integrating the curve function
41159, FEATURE 5
19936, Fare
21523, now let s implement a simple convolution depending on the parameters we have chosen earlier
5173, Linear Regression is a linear approach to modeling the relationship between a scalar response (dependent variable) and one or more explanatory (independent) variables. The case of one explanatory variable is called simple linear regression; for more than one explanatory variable the process is called multiple linear regression. Reference: Wikipedia
2337, Discriminative Models
7531, We can fill missing Age with the mean, but age varies with each Pclass, so filling missing Age with a global mean would not be proper. Let's fill Age according to Pclass
20697, How to Save and Load Your Model
40987, Another but not as useful way
6949, I choose the most important features
27351, x holds the best parameters, with the lowest AIC score of 434
27442, Output
10925, Calculating the diameter of the graph
18308, use item first month to create new item feature
22768, We need to create Field objects to process the text data
15777, Usually this is the one that saves us
864, Embarked Survival rate lowest for S and highest for C
23961, Top Features Selection
10662, Ensembling
34439, Build and train BERT model
424, This distribution is positively skewed; notice that the black curve is more deviated towards the right. If you find that your predictive response variable is skewed, it is recommended to fix the skewness so the model can make good decisions
12015, What random forest does is combine the predictions of several independent decision trees, which helps reduce overfitting problems
21518, Fitting The Model
23542, Split training and validation set
35519, One hot encoding
5546, Split Data
27667, t SNE
40620, let s compare
11916, Creating Categories for Fare column
568, CatBoost
22717, Creating the image matrix
20773, Perhaps a non-linear technique will yield more insights. This also helps to collapse the information from all 50 dimensions when we apply the t-SNE technique to this LSA-reduced space
17016, Create variable ticket type
13608, We can also encode categorical variables with their frequencies
435, BsmtFinSF1 BsmtFinSF2 BsmtUnfSF TotalBsmtSF BsmtFullBath and BsmtHalfBath missing values are likely zero for having no basement
103, Sex is the most important correlated feature with Survived dependent variable feature followed by Pclass
4983, Create code variables for the categorical data
37057, Process SibSp Parch
21491, SAVE DATASET TO DISK
26618, We set a function for parsing the image names to extract the first 3 letters, which gives the label of the image. It will be either a cat or a dog. We are using a one-hot encoder, storing 1 0 for cat and 0 1 for dog
34719, Selecting proper categorical features
27646, our last classifier will be a Naive Bayes Classifier
32099, How to remove from one array those items that exist in another
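A minimal numpy sketch of this trick:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5])
b = np.array([2, 4])
# Keep elements of `a` absent from `b`, preserving order;
# np.setdiff1d(a, b) would do the same but returns a sorted result.
result = a[~np.isin(a, b)]  # array([1, 3, 5])
```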
1307, Observations
5679, Create Age Null columns to indicate NaN values
4489, XGBoost
43355, Submiting our output
11408, Reduce the model's mean square error by using imputation
32674, Six regression models covering a variety of regression strategies techniques cross validation capabilities and regularization features have been elected in this exercise
26834, Plot the count
35256, In the code below, please go through the comments I have added between the lines to follow the variables' progress line by line
1824, Using cross validation
31797, Just to make sure that we have the correct weights
9070, It looks like the data is positively skewed
31711, The original decode predictions tries to download imagenet class index
28750, fit and calculate the log mean errors for each model
36471, Images from INRAE
14348, Read In and Exploring the Historic Data EDA
18072, plot some image examples
11810, We have successfully imported everything we need to solve this challenge
28314, identifying the categorical and numerical features
27177, Define a function that helps us in creating GBM models and perform cross validation
5031, Kernel Ridge Regression
25869, Importing Dataframes
19879, Robust Scaling
22216, Setup dataset
11082, Stack predictor
33741, Submission
22091, Architect our Neural Network
32798, Here I would like to remind you that for stacking you SHALL use a consistent fold distribution at ALL levels for ALL your models (a minimal sketch follows)
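A minimal sketch of what that means in practice: materialise the fold indices once and reuse the identical splits for every model. `X`, `y` are assumed training arrays and `level_one_models` is a hypothetical list of estimators:

```python
from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=42)
folds = list(kf.split(X))              # build the splits exactly once
for model in level_one_models:         # hypothetical list of estimators
    for train_idx, valid_idx in folds: # identical folds for every model
        model.fit(X[train_idx], y[train_idx])
```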
17656, analyse what are the features that makes this much difference
26361, As I ve setup the NetworkVisualiser class with default values appropriate for the MNIST dataset all I need to specify when initialising the NetworkVisualiser object is the list of layers
28056, Exploratory Data Analysis
37336, predict and submit
13796, Prediction
16890, New Feature FareBin
28748, there are missing values for LotFrontage and MasVnrArea in both train and test data
41915, The learning rate determines how quickly or how slowly you want to update the weights
7580, Dropping the column sum 1SF 2SF LowQualSF again since it already exists as GrLivArea
22896, There are no common top-10 keywords between disaster and non-disaster tweets
3457, Here are the parameters for the model with the highest accuracy on the training set
27223, Masking
1258, Numerically encode categorical features because most models can only handle numerical features
22249, Cross Validation K fold
3152, The script begins with the usual imports
17605, Perceptron
25262, Max Min Height and Width
12137, Training the model 2
36802, Named Entity Extraction
22059, check how the difference of length relates to the length of the tweet s text
31746, RandomGreyscale
32806, Level 2 XGB
42035, Groupby Mean
30399, Tweaking threshold
21509, Augment the images with CLAHE
11945, Aim here is to create multiple features from highly correlated features which might help enhance the prediction
3430, replace the two missing values of Embarked with the most common value S
41538, Dictionary learning
941, Double-Check Cleaned Data
26997, Now let us deal with special characters
36995, Which products are usually reordered
43065, Distribution of min and max
19418, Another handy thing to know is how does the RoBERTa tokenizer handle concatenated sequences
30376, Train
25871, Target Value Distribution
10418, Partial Dependence Plot PDP
8593, Dealing with Categorical features
3621, then encodes qualitative variables based on the mean of SalePrice for each unique value ordered starting at 1
6203, XG Boosting
29863, in operator
30594, Calculate Information for Testing Data
23680, Loss
26985, Build model
7631, blend 2 gscv Lasso and gscv ElaNet
6040, What do our predictions look like?
21170, Learning Rate Schedules
765, Missing Data
27101, Submission
11401, Building the logistic regression model
21057, Training Set
10604, Split data into training and validation data
9048, ExterQual
22943, Nice! Let's visualize the features we made so far
5571, let s deal with Categorical columns
10692, Age processing
13871, We further split the training set in to a train and test set to validate our model
23030, Sell price and value relationship
19400, We have quite balanced data
23224, Store the duplicates into a variable
33730, Load train and test file
13896, Most of the models require the data to be standardised so I am going to use a scaler and then check the scores again
25388, Visualize some examples from the dataset
25321, Generate test predictions
38733, Again it looks like there are no data errors just some passengers who got a free ride for whatever reason
14171, For this dataset we examine the effect of features on 5 different models
1355, Logistic Regression is a useful model to run early in the workflow
17472, RandomForest
7071, The Age Cabin Embarked Fare columns have missing values
4034, Outliers Removal
36142, let s set up the hyperparameter search for the RandomForestClassifier using RandomizedSearchCV
36101, Which embedding do we use? Well, there are a couple of ways to combine different embeddings; one that comes naturally is taking the mean across all of them
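A minimal sketch of that mean-blend, assuming two hypothetical embedding matrices built from the same word index, so rows line up one-to-one:

```python
import numpy as np

# glove_matrix and fasttext_matrix are hypothetical (vocab_size, dim)
# arrays aligned on the same vocabulary.
embedding_mean = np.mean([glove_matrix, fasttext_matrix], axis=0)
```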
36770, Making Predictions on the Validation Set
25888, Score Difficulty
20529, Some numerical features are actually really categories
17458, Sex
21459, Since the label is in the form of dataframe it needs to be converted into array
28148, Build Knowledge Graph
25226, Create a dictionary to keep track of scores for each model and compare later
2938, Feature Engineering
24823, BERT Modeling
920, Model Building and Evaluation
7871, The first features to work on are SibSp and Parch
16121, Gaussian Naive Bayes
24252, Observations
29062, Additional functions that might be useful
41617, Whoops
608, We learn
6687, Pair Plot
8603, Null in train data
22864, Convolutional Neural Network
18758, The get segmented lungs function segments a 2D slice of the CT Scan
7029, Interior finish of the garage
31, we extract the features generated from stacking then combine them with original features
17923, Even after using
18469, After reading the description of this task: Rossmann clearly stated that stores were sometimes undergoing refurbishments and had to close
32687, Now we divide the training data into two sets one for training and one for validation
22252, Naive Bayes
23881, From the data page we are provided with a full list of real estate properties in three counties data in 2016
24236, Prepare generator for test set and start predicting on it
20513, The info method is useful to get a quick description of the data in
33869, Model and Parameter Tuning
3646, Grouping ages in Age feature and assigning values based on their survival rate
25406, DROPOUT
9268, I tried to remove a few least correlated features from the training set
29536, we are going to normalize our data
9850, Data preparation
19894, Initialize the model
24918, Decomposing the data
37782, Submission
17271, Support Vector Machine
35457, Visualize the Chunk of Non Melanoma Images
3068, Fare
36078, Configuration
13356, Print variables containing missing values
25059, I finally understand
36807, Now we go a step further and count the number of nounphrases by taking advantage of chunk properties
32964, Insights
42931, Saving the number of rows in train for future use
36666, We have 4825 ham messages and 747 spam messages
33759, Final Submission
1083, Linear model without Regularization
4892, Whoops! Apart from Mr, Miss, Mrs and Master, the rest have percentages close to zero
24686, Number of parameters
28271, Converting the train and test dataframes into numpy arrays
9384, check the houses that have zero basement area
17767, Check for missing data
21330, Area
29826, Pre Trained FastText
9233, Train with Remaining Data 75 15
2823, creating a function that evaluates the algorithms' performance. Learn more about the OOB score
16646, Holdout Prediction
43158, Display one image
29884, Word Cloud visualization
28589, KitchenQual
4622, There are about 19 missing values
2544, Predictions bis
1619, ROC Curve
16453, Cabin
6142, Split years of construction into bins
29762, also visualize the model using model plot
18980, Display more than one plot and arrange by row and columns
22832, Great, that worked; just add these codes to the shops dataframe
7713, List of Highly correlated features Here we ll visualize them and clean the outliers
35686, LightGBM
24462, CenterCropping height width 256
26901, Using Gradient boosting
20409, Checking for Duplicates
38100, Downloading MNIST Dataset
39434, MY PART STACKING
25907, Generating tweets about not a real disaster
43293, Creating a Better Validation Set
16265, Name
1975, Hyper Parameters Tuning
41658, Distribution of missing values
13696, Submission
3759, PCA Principle component analysis
23170, Dropping Features
41399, NAME TYPE SUITE
6732, GrLivArea Vs SalePrice
11210, add the previous averaged models here
30359, Predict all country greater than 1000
13291, Tikhonov Regularization, colloquially known as Ridge Regression, is the most commonly used regression algorithm to approximate an answer for an equation with no unique solution. This type of problem is very common in machine learning tasks, where the best solution must be chosen using limited data. If a unique solution exists, the algorithm will return the optimal value; if multiple solutions exist, it may choose any of them. Reference: Brilliant.org, regression. A small usage sketch follows
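A small scikit-learn sketch; `alpha` plays the role of the regularisation strength lambda:

```python
from sklearn.linear_model import Ridge

ridge = Ridge(alpha=10.0)   # larger alpha -> stronger shrinkage
# ridge.fit(X_train, y_train); preds = ridge.predict(X_test)
```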
12926, Here 0 stands for not survived and 1 stands for survived
35590, We split the data into the train and validation sets
27614, Finetune Vgg16
16098, After that we convert the categorical Title values into numeric form
14471, nullity analysis
20395, Gaussian Naive Bayes Model
29763, Run the model
28134, Designing our model
34154, Early Stopping with Cross Validation
11320, Parch
9967, Correlation by Lot Area and Price
38817, Replace all missing values with the Mean
27988, Partial Dependence Plot
27257, have a look at our feature importances
8540, RANDOM FOREST
25034, there are no counts less than 4 and the max is capped at 100, as given in the data page
25412, TRAINING
37327, Selection of kernel size parameters for convolution layer of New Module
26762, Build Model
10339, Since MasVnrArea only have 23 missing values we can replace them with the mean of the column
10746, I am thinking of better ways to visualize the relationship between SalePrice and the categorical features; any suggestion would be greatly appreciated
34355, Understand our predictions
3502, Model 5 Gradient Boosted Tree with Reduced Parameter Set
4092, Older versions of this scatter plot had a conic shape
3461, Here s the contingency table for Deck and Pclass again
6334, Data cleaning
8974, if you search the internet, Embarked should be S; let's just have a check
10828, We have tickets assigned to more than one person
22591, There is one item with price below zero
12674, Joining newly created dummy data columns
24556, let s check the products share of different cases
21670, XGBoost
21325, Before deciding to drop some features, we need to make sure that a NaN really means "No"
10884, Passenger Name is a categorical variable and it is obviously unique for each passenger
28474, Most of the houses have AT LEAST ONE of the mentioned exquisite features
23320, Add previous item price as feature Lag feature
15836, Embarked
4488, Random Forest
38063, REMOVE TEXT WITH NULL TEXT CLEANED ONLY IN TRAIN
31520, We implement the random search CV technique, as this searches the search space randomly and attempts to find the best set of hyperparameters
16084, Relationship between Features and Survival
27178, Tuning number of estimators
35814, Interaction features
40986, And now using groupby and aggregate functions together we get
37435, WordCloud for tweets
5418, Feature Engineering and Data Munging
21966, let's understand the data before any Exploratory Analysis
36718, Random Rotation Shift Affine Transformation
36872, Keras 2 hidden layers
11355, SalePrice is the variable we need to predict; let's do some analysis on this variable first
8304, Model 1 Decision Trees
19086, Age Histogram based on Gender and Survived
4449, Select an algorithm
37824, Tokenize Stemming
14926, Random Forest Classifier
26580, Our model gives this image a high probability of being a dog
32678, Interestingly, the individually best-performing regressors have their weights reduced during the weight optimization process
2661, Voila
24576, Constants
22647, For logistic regression we don t need to standardize the data
42103, Output layer
2142, Looking at the residual plots, it appears evident that all the models we trained so far underestimate the price of low-cost houses and overestimate the more expensive ones
34688, Merging with the shop and item datasets
148, Gradient Boosting Classifier
23837, Preparing the data for feature importance
31895, Creating submission
29894, Predict on the test set
12002, On applying grid search cv we get best value of alpha 1
28080, check the missing values again
24401, check the distribution of the mean values per columns in the train and test set
30677, Start with simple logistic regression
6035, Remove features with low variances
40546, Export
1246, and plot how the features are correlated to each other and to SalePrice
24999, Performing some sanity checks before saving the dataframes to csv files
19518, Collecting the Data with Partitions
21894, Augmentation Settings
27932, I ll compile the model and get a print out describing it
4371, GrLivArea Above grade living area square feet
9789, Ordinal Values
33286, Title Encoder
35122, utils
8882, After diving into our dataset more another feature we can create is the Total Number of Porches by combining the following columns
525, As we know from the first Titanic kernel survival rate decreses with Pclass
24875, If you take a good look at names every name tells us 3 things
36549, Take Away
6363, Nice! The most important feature is the new feature we created, TotalArea
8651, The type of machine learning we will be doing is called classification, because when we make predictions we are classifying each passenger as survived or not
1735, Plot Age against Survived
34432, TF IDF
36353, Create the CNN Model Architecture
40255, Log Therapy to the rescue
31364, checking the files
20037, start by defining some constants and generate the input to hidden layer weights
3147, Cross Validation and Grid Search
980, Feature Importance
19403, there are 7613 disaster tweets and 21637 unique words
19459, Model Chart
22075, on average, IF we have a prediction, our prediction is significantly better than simply taking the text
21474, We don t forget to save it
20128, App labels features
32032, Train a Binary Classifier with XGBoost
1833, Meaningful NaN Values
21737, Build the Model and Make Predictions
19514, Spark Core RDD
22808, Educational Level Primary
12260, $J = \frac{1}{2}\sum_i (\hat{y}_i - y_i)^2 + \frac{\lambda}{2}\sum\sum W^2$
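A direct numpy translation of the loss above, assuming predictions `y_hat`, targets `y` and a weight matrix `W`:

```python
import numpy as np

def ridge_loss(y_hat, y, W, lam):
    data_term = 0.5 * np.sum((y_hat - y) ** 2)  # squared-error part
    penalty = 0.5 * lam * np.sum(W ** 2)        # L2 penalty over all weights
    return data_term + penalty
```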
19542, Data Visualization
30133, ConvLearner
5573, Model Building
12354, BsmtFinSF1 Type 1 finished square feet
32603, we can make predictions on the test data
7476, 3b Pclass
16241, The sixth classification method is MultilayerPerceptronClassifier. This method is a little different, as here we also have to provide it with layers. For example, we give layers 8 5 4 2, where 8 is the number of input features, 5 and 4 are intermediate hidden layers, and 2 is the number of output classes. You should define the layers according to your model
33249, Missing values
27112, This doesn't help us much; let's try to visualize the number of missing values in each feature
7379, Merging the unmatched passengers using surname codes and age
19358, There are 43 categorical columns with the following characteristics
19564, Dataset
23226, remove the duplicates and keep only the unique features and also we transpose the dataframe again into its original shape
34320, In the case of train test split, allowed inputs are lists, numpy arrays, scipy sparse matrices or pandas dataframes
5666, Create Name Length Category
2264, Embarked
41431, Item Category
2275, Fare per Person
7010, Here we first replace the strings with integers. Because the variables are ordinal, we don't use dummies unless the correlation with SalePrice is very low
20937, Close Session
29106, Glove
14438, go to top of section eda
32901, We ll try to use some tools to transform Chucky s image
31306, I experimented with parameters of compilation fit and network construction
320, SVM
23388, While a high learning rate at the beginning of a training run is great at the start it can cause issues as the model approaches convergence when smaller more careful steps are needed
33349, Calendar heatmap
25998, Upside Down with Spark
24733, Dimensionality Reduction
4013, Compare to sklearn implementation
33689, Import the library holidays to get holidays of almost all states
27919, Examine the Features once again
32529, Processing the Predictions
20714, As it contains many null values, we drop this column
34863, Pipelines
22224, Setting the shape of the images
24127, check how many missing values are there Location and Keyword columns
25801, let s focus on the training dataset and on the interest level of its top 100 contributors
13992, Extract Title from Name and convert to Numerical values
9833, Feature selection
19821, Weight of Evidence
38703, Before encoding
11668, Misc
11731, hope you find this kernel helpful; some UPVOTES would be very much appreciated
25791, here we have only one feature which have some missing data
15499, we can use dummy encoding for these titles and drop the original names
31027, Phone Number
41409, DAYS BIRTH
28360, analyzing the numerical features' distribution in the previous application dataset
13024, Sex
43290, A negative R² means our model is worse than a model that predicts the mean
14011, Acquire data
5542, Prepare the model
7927, Test basic models such as RandomForest and Lasso Regression
1686, SpiderMan: How beautiful! Just as we expected, SalePrice increases with OverallQual (overall material and finish quality). Shall we do this to analyse the relationship with SalePrice for a few more categorical columns?
31233, Features with max value greater than 20 and min values less than 20
36603, Printing some samples which were classified wrong
30364, Test Washington prediction
35397, Modeling
16255, seq
10866, Fitting the Model
3444, The male title groups appear to be distinct possibly separable distributions
20722, Condition1 column
15262, Observation: there are more people in the range 20-30
26039, We can print out the number of examples again to check our splits are correct
12283, Frequency bar chart
27199, 4th Step Data Cleaning and Editing
28947, try boosting
22229, Compile the model
18289, Tokenizing
19563, Preparing the model
22950, the last feature we have to tackle is the embarked feature
9774, Logistic Regression
4238, Grid Search
30709, define new augmentations
27513, Data check
33722, Data Formatting
29831, Recurrent Neural Network LSTM
27560, In order to make this easier to use, I'll output this as a CSV of the original feature and the deduced string
6697, Lets start with the bar plot between SibSp and Survived
34828, Important Error
14245, The survival rate for women on the ship is around 75%, while that for men is around 18-19%
36834, Construct TensorFlow graph
20541, Random Forest Regressor
26444, To further evaluate our model we calculate the probabilities for all instances. If the probability falls below the threshold, the passenger is predicted to have died; otherwise the survival prediction is survived
24290, count the labels
32268, Relationship between variables with respective to date
16747, Survival by sex
25008, Mean Encoded Features
23818, Looking into each categorical feature
13225, GridSearchCV for Gradient Boosting Classifier
29785, Training
19809, When should you use k and when k 1
7282, Pclass Feature
31861, as well as the label vector of our training set
3842, Munging Fare
1357, we model using Support Vector Machines, which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference: Wikipedia. A minimal usage sketch follows
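A minimal scikit-learn sketch of such a classifier; the parameter values are illustrative, not the notebook's:

```python
from sklearn.svm import SVC

svc = SVC(kernel="rbf", C=1.0, gamma="scale")
# svc.fit(X_train, y_train); y_pred = svc.predict(X_test)
```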
15392, See which columns have missing values in training data
40279, Importing packages
35097, sample submission csv
34853, Undersampling
22463, Categorical plot
72, Dealing with Missing values
35525, Before model building part
6051, Exterior and materials
20701, How to Halt Training at the Right Time With Early Stopping
29962, WBF over TTA
5555, Optimizing Deep Learning Model
40273, Garage Type vs Sale Price
41225, There are no missing values and there is very little scope for feature engineering, so we only scale the pixel values to bring them within 0 and 1
10297, Data
5899, Using Univariate Selection
6666, Gaussian Naive Bayes Classification
18993, Define a neural network
15556, Tuning the parameters for SVC
14207, Train the final model
41677, Here s a look at some of the resized images of Benign Tumours
17268, Converting Categorical data to Numeric
2737, We can import the missingno library that can be used for graphical analysis of missing values and it is compatible with pandas
32067, Singular Value Decomposition SVD
2291, Submission
20796, Filling Categorical Missing Values
11561, now tune the best model using grid search Cross validation
29056, Histogram matching UPDATE
32535, Predictions
6792, Decision Tree
22653, With the best coefficients and intercept insert these into the sigmoid function to get the sigmoid values of the optimized coefficients
41841, Bi Gram
14658, If age is greater than 60 classify them as Senior Citizen
20029, Train with basic features
13086, Gradient Boosting
8574, Applying a Function to Groups
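A minimal pandas sketch, assuming a hypothetical DataFrame `df` with columns `Pclass` and `Fare`:

```python
# `apply` returns one value per group; `transform` returns a result
# aligned with the original rows.
group_means = df.groupby("Pclass")["Fare"].apply(lambda s: s.mean())
df["Fare_z"] = df.groupby("Pclass")["Fare"].transform(
    lambda s: (s - s.mean()) / s.std())
```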
41018, These families are the Gibsons, Klasens, Peacocks and van Billiards
4396, Handling Missing Data
11687, Building models with Your New Data Set
27014, EfficientNet
11501, Label Encoding some categorical variables that may contain information in their ordering set
15600, Survival by Age Number of Siblings and Gender
20836, walk through an example
21625, Style your df fast with hide index and set caption
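A hedged sketch of the trick; note that `hide_index` was deprecated in newer pandas in favour of `.hide(axis="index")`:

```python
styled = (df.style
            .hide_index()  # or .hide(axis="index") on recent pandas
            .set_caption("Sample of the training data"))
styled  # rendered in a notebook cell
```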
14140, Support Vector Machines SVM
36080, Data Preparation and Feature Engineering
12236, Lists
12105, Experimenting with Random Forest
24263, Observations for Age graph
34918, Count other upper
16381, Importing different models
1636, Splitting train and test features
22026, now look at all 8 features together
19097, Creating Dummy Variables
40474, Linear Regression
26192, XGBOOST
9811, Matrix
2774, XG Regressor
31548, Electrical
11739, Once again we find mostly what we probably expected
22679, Predicting and Submission
20219, ReduceLROnPlateau reduces overfitting: it simply reduces the learning rate by a given factor (here one half) whenever there is no improvement in the monitored value (here validation accuracy) after three patience epochs
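A hedged Keras sketch matching that description:

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau

lr_schedule = ReduceLROnPlateau(monitor="val_accuracy",  # watched metric
                                factor=0.5,              # halve the LR
                                patience=3,              # after 3 flat epochs
                                min_lr=1e-5, verbose=1)
# model.fit(..., callbacks=[lr_schedule])
```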
27390, Tuning feature fraction
42623, Checking for the optimal K in Kmeans Clustering
15143, Ticket
10448, MODELLING
11857, Creating Models
32208, Add the previous month s average item cnt
5486, create the pair plot one more time to check whether outliers still show up
23050, Build CNN
16351, Create training validation testing sets
32665, One hot encoding is applied to categorical nominal features where there is no evident ranking nature among categories
16642, Logistic Regression
10373, Deviate from the normal
2901, Light GBM
33648, Feature Importance
4315, Combining Gender and Passenger Class PClass
23678, Sampling function
33734, Lets visualize some of the images with bounding box
38978, initialize bilstm
26185, PCA Principal Component Analysis
13115, LightGBM
3396, Feature Engineering
859, Uncomment if you want to check this submission
36219, Missing values left in the dataset LotFrontage GarageYrBuilt Electrical
17812, we fit the classifier with the train data prepared before
15445, Create a new feature called FamLabel which categorizes the family size
25641, Predict on test
36773, CORRECTLY CLASSIFIED IMAGES
39785, Logerrors vs Distance to nearest neighbors
38767, The train accuracy is 82
39276, SPLIT DATAFRAME DEPENDING ON SENIORITY
12664, Training and Evaluating the Classifier
33674, Periods
34393, This heatmap deserves a lot of comments
13979, Combine some of the classes and group all the rare classes into Others
24946, There are other ways that are also based on classical statistics; see scikit-learn.org/stable/modules/feature_selection.html#univariate-feature-selection
39220, Using Variance Threshold
28142, Chunk 1
28867, Attention Decoder
28333, Analysis based on Code Gender
41200, use a GradientBoostingRegressor model with parameters n estimators 4000 and learning rate 0
23995, get_dummies converts categorical data to numerical, as models don't work with text data
37564, FAIL PARTS
7733, Modeling
40873, Optimize XGB GB and LGB
27518, Model evaluation
42326, Previewing first fifty images
21224, We now work on predictions for test set
35445, Compiling
18509, plot a few more images at random
40299, the distribution is right-skewed, with up to 237 words in a question
27980, A large part of the data is unbalanced but how can we solve it
2353, Logistic Generalized Additive Model GAM
22786, Scatterplot
23723, Exploratory Data Analysis
39381, Proportion of null values within each feature
4588, We look at these features in addition to a few of their possible combinations
29836, Convert order data into format expected by the association rules function
22330, Removing Stopwords
6127, pretend there are no baths at these houses
10400, Create Submission Files
8940, Fixing MS values
12781, An example of an output of the Perceptron
13319, Pivot analysis div
19636, Duplicate device ids
38930, for LotFrontage
7132, Fare
18579, Looks like the more a passenger paid, the better their chances of survival
8684, Analyzing the Target e SalePrice
12169, Presort
15072, Family Size
31554, Random Forest
21380, Scientists have created lots of network architectures covering lots of real-world problems
2498, SibSp Discrete Feature
2718, Elastic net regularization
6168, SibSp Parch Family Size
24529, Total number of products per customer
31612, To evaluate the performance of a classifier model we can use cross-validation, but accuracy is generally not the preferred performance measure for classifiers, especially when some classes are more frequent than others
25428, Excluding duplicates with clustering approach clustering check marking
29817, Vector Averaging With Fasttext
43088, check the newly created features
35349, Grid Search Hyperparameter Tuning
32867, Item sales count trend
32991, Report
18105, We can optimize the validation score by searching over rounding thresholds instead of doing normal rounding (a minimal sketch follows)
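A minimal sketch of such a threshold search, assuming continuous predictions `preds`, integer labels `y_true`, and quadratic-weighted kappa as the validation metric:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

best_t, best_score = 0.5, -1.0
for t in np.arange(0.30, 0.70, 0.01):
    # Values whose fractional part is >= t round up, others round down.
    rounded = np.floor(preds + (1.0 - t)).astype(int)
    score = cohen_kappa_score(y_true, rounded, weights="quadratic")
    if score > best_score:
        best_t, best_score = t, score
```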
6649, Extract Initials from the Name feature Categorize the Initials by different values
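A common regex-based sketch, assuming a DataFrame `df` whose `Name` column follows the "Surname, Title. First" pattern; the mapping is illustrative:

```python
# Grab the word that ends with a period, e.g. "Mr", "Miss", "Master".
df["Initial"] = df["Name"].str.extract(r" ([A-Za-z]+)\.", expand=False)
# Collapse rare titles into broader buckets.
df["Initial"] = df["Initial"].replace(
    {"Mlle": "Miss", "Mme": "Mrs", "Dr": "Other", "Rev": "Other", "Col": "Other"})
```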
40867, Optimizing Hyperparameters
21675, Data builder function
29461, Cross Validation in Neural Network
12395, Retraining the model over the whole dataset
5292, Being inspired by other Kernels and this blog it is necessary to transform the numerical features that are skewed
20305, Fresh fruits and fresh vegetables are the best selling goods
38214, Data Augmentation to increase training size
43001, the function below goes through each column one by one to describe it; this is really good practice but it takes a long time to process
21595, Print the current version of pandas and its dependencies
21334, Convert the data from text to numbers
5373, Recursive Feature Elimination Cross Validation RFECV
42163, The BsmtFullBath FullBath BsmtHalfBath can be combined for a TotalBath similar to TotalSF
15182, Confusion Matrix
2144, A few noticeable things are
26745, Plotting sales of weekends preceding each event
25775, female is more likely to survive
11212, XGBoost
28170, Features
7495, Choose the features that will be used
11249, Feature Transformations
33571, Apply Box-Cox transformation and create cleaned train test data
6625, Exploratory Data Analysis
14113, SwarmPlot
7988, Dropping
32234, Here is our optimizer
21130, For all categorical variables which don't belong to the luxurious class, we apply NA correction by imputing the level Unknown
24013, Training function
34005, weather
6341, Number of missing values per column
24957, sex in Train Test set
32311, Relation between Survival and Passenger Sex
2139, Error Analysis interpretation and model improvement
13209, let's load the required libraries for the decision tree plot
27498, Shallow CNN 2 with regularization
9620, Feature Scaling
20953, Another way of verifying the network is by calling the plot model method as follows
28813, Plotting out Loss and Accuracy Charts
25903, Threshold value search for better score
3608, Data Correlation
23406, Transformed to numpy arrays
29146, Feature importance via Gradient Boosting model
8427, Group by GarageType
28576, BsmtUnfSF
32989, Definition
19801, Mode
4120, Check different values of lambda; this usually follows Tukey's transformation ladder. As all the features are positively skewed, we pick the lambda whose output gave the lowest RMSE score after the different regressions were applied on the dataset
11837, tackle all the missing categorical test data
17648, Decision Tree
29149, Transform necessary features to object type
37735, Numeric Column Optimization using Subtypes
16487, we can check out that we re missing some age information and we are missing a lot of Cabin information
29823, Trained Glove
24557, Distribution of products by age group
30974, we define the random search function
5654, Create Name Length Category
14632, From the violin plot of Age and Survival, along with the KDE plot that was constructed during the bivariate analysis, we can conclude that there were certain age categories that fared worse
1533, Cabin Feature
2540, Preparing our first submission. Data preprocessed: checked. Outliers excluded: checked. Model built: checked. Final step: use our model to make predictions on the Test data set
19955, The Cabin feature column contains 292 values and 1007 missing values
12298, Surname
28562, Interior
27251, Scale the data
27630, Properties that have been sold more than once have multiple transaction entries
1214, Concatenate train and test
14845, Embarked
39148, also look at one image in more detail
8038, Age
15733, Precision and Recall
40629, we can save the predictions to disk and zip them ready for submission
10864, Checking the skewness of other variables and treating it
32644, Clustering
21457, GBR
41952, Visualization
23748, Before submitting run a check to make sure your test preds have the right format
22325, Spelling Correction
42445, We have 59 variables and 595
19802, Random Sample Imputation
21942, Spark
11802, Again using log transformation to remove skewness as all of the numerical features have positive skew
14341, Try Knn K Nearest neighbors
6063, Fireplaces: consider turning it into a binary feature
29790, Model
39420, check if there are any NaN values left
2000, It makes sense that people would pay more for more living area
37944, Feature Engineering
1316, checking for missing and unique values in combined dataset
10219, The majority of passengers embarked from Southampton; it may be the journey's starting point
15115, Modeling
17840, plot the parameters for the classifier
4239, Automated Hyperparameter Tuning
15442, Creating Title feature from Name feature
35919, We plot 25 random test images with their class using list of predicted labels
13078, we set our PassengerId as our index
18883, There are a couple of outliers here; we can proceed both ways
33855, Distribution of the token sort ratio
5740, Installing the packages
7945, we need to specify
17589, Grid SearchCV
15082, XGBoost
21565, How to avoid Unnamed 0 columns
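The column appears when a CSV is written with its index and read back without telling pandas about it; either side can fix it:

```python
import pandas as pd

df.to_csv("out.csv", index=False)  # don't write the index at all
df2 = pd.read_csv("out.csv")       # no "Unnamed: 0" now
# or, for files that already carry the index column:
# df2 = pd.read_csv("in.csv", index_col=0)
```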
38698, After Encoding
1825, Regularization Models
1574, The Ticket column is used to create two new columns Ticket Lett which indicates the first letter of each ticket and Ticket Len which indicates the length of the Ticket field
37893, Evaluation
42028, Split and join to have a new str order
22494, To measure how well we normalize addresses, we check the unique number of addresses before and after normalization. As a last step of cleansing we remove special characters and then strip all space symbols
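A rough sketch of that last cleansing step (the exact character set removed in the original is not recoverable, so the regex below is an assumption):

```python
import re

def normalize_address(addr: str) -> str:
    addr = re.sub(r"[^a-z0-9 ]", "", addr.lower())  # drop special characters
    return re.sub(r"\s+", "", addr)                 # strip all space symbols

# compare df["address"].nunique() before and after mapping normalize_address
```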
31631, Dates
23524, calculate some statistic for each class
28123, Disaster tweets are Less in Number as compared to Non Disaster Tweets
30928, False Positive with biggest error Sincere but predicted strongly as insincere
9151, If there is no Kitchen Quality value then set it to the average kitchen quality value
32668, Segregating train and test features
37649, Create folders for the train set and validation set I created two different cats and dogs folder both in train and val Because I am going to use ImageDataGenerator the flow from directory method identify classes automatically from the folder name
23994, we have 86 columns having added around 7 more to our data
4010, compare with the sklearn implementation
6657, Chi-square Test for Feature Selection
18372, Taking Care of Auto Correlation
9295, Regression of Survived on Sex using a Categorical Variable
1535, Embarked Feature
6224, MasVnrType Masonry veneer type
11555, Engineering rare categories
1032, Good news: most of the features are free of missing values
24242, Descriptive analysis univariate
557, submission for knn
29460, NeuralNetword Model
6448, Linear Regression
42210, Splitting data
6910, Fare
10927, Complete graph with 100 people
34467, Trends
35794, Find best cutoff and adjustment at high end
20247, Apply the functions
41155, FEATURE 1 NUMBER OF PAST LOANS PER CUSTOMER
37688, Ok, now you know the rules: 0 is black, 255 is white, and in between is gray
30692, Back to the code
38070, Trigrams Analysis
9082, There are only 2 rows that have the Conditions in a flipped order, but given that most of the time the conditions are equal, I want to perform feature engineering to build a relationship between these 2 separate columns
21156, We improved it just a bit
1054, We use RobustScaler to scale our data because it is robust against outliers. We already detected some, but there must be other outliers out there; I will try to find them in future versions of the kernel
3785, Plot learning curves
36843, Evaluation
29858, Understanding dicom dataset structure
30358, Predict by Specify Country
2158, We don t need PassengerId for prediction purposes so we exclude it
31064, Percentage
31274, Rolling Average Price vs Time WI
14784, Fare
7058, The tuning parameters were obtained from GridSearchCV
31541, Some categorical columns are strongly tied to the neighborhood data; some should be filled with NA because the house lacks the corresponding feature
40417, Display Address
28752, load the training data and validation data
17854, Submission model with hyperparameters optimization
1544, Splitting the Training Data
30412, Main part load train pred and blend
13735, Creating output file
28371, Text Cleaning
29898, One Hot encoding of labels
10081, Data Preparation
11831, Percent of Missing Values
2312, Sklearn Label Encoding Method multiple int options but 1 column right table in pic
20718, here we do label encoding instead of one hot encoding
37318, Perform rotations on the picture, shift it up and down, left and right, and apply other operations to increase the amount of data. Data augmentation is very effective. Parameters such as rotation range, zoom range, width shift range and height shift range can be modified. The following parameters are the version used by most people, but you can also explore parameters that are more effective for your own model (see the hedged sketch below)
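A hedged Keras sketch with the parameters named above; the values are the commonly used ones, not prescriptive:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=10,       # small random rotations
                             zoom_range=0.1,          # random zoom in/out
                             width_shift_range=0.1,   # horizontal shifts
                             height_shift_range=0.1)  # vertical shifts
# datagen.fit(X_train); model.fit(datagen.flow(X_train, y_train), ...)
```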
27551, Display interactive filter based on click over legend
17008, Missing values Age
24562, Distribution of products by sex
13984, Impute Embarked with its majority class
7994, If all classes of a category were false, we delete it
16852, Dropping some variables
33470, Sales
7646, deal with missing data
18640, ANTIGUEDAD
32053, High Correlation Filter
18229, Test Gradient Boosting
32554, Merge other cases
2654, Can you think of any way we can use name column
9522, Random Forest
22443, Diverging bars
22175, We have features column which is a list of string values
40649, Concatenating all the features standardscalar tfidfW2v
41998, Sorting columns looking at certain rows
21800, Model 1 Baseline LGB
1656, Same story as the previous feature: categorical with 7 categories. Actually SibSp and Parch are very much related, but let's leave this for the beginners' tutorial, part 1
17534, Extract first letter from Cabin property and transform it to numerical categorical feature
21194, L model forward
38172, The Evaluate Returns
16025, Sex
6839, Feature Selection
36960, Feature Importance
26814, Class Activation Maps
42185, True Positives TP True Negatives TN False Positives FP and False Negatives FN are the four different possible outcomes of a single prediction for a two class case with classes 1 positive and 0 negative
11257, The initial learners will now be used to make predictions on the validation data and on the test data
2273, Fare
29225, For Testing Data Set
37035, Can we get some information out of the item description?
35925, Make prediction on the test data
3627, Features Year Built Year Remodeled
3285, drop these outliers
10398, Final Prediction
4082, Scatter plots between SalePrice and correlated variables move like Jagger style
8894, LASSO
36881, Logistic Regression
36265, look at Number of siblings spouses aboard
35437, Lets Train our Model
41115, Transaction Date Vs Mean Error in each County
25680, India Cases vs Recoverd vs Deseased
9416, Train test split
17804, And let s plot the Age clusters grouped by the survived and not survived
22021, var36
26968, which item name is the most popular and the worst
15060, Most Positive Correlation
5910, Gradient Boost
9633, Saving IDs column
19194, Predicting using the XgBoost Regressor with Hyperparameters Tuning to give the best predictions
619, The next idea is to define new features based on the existing ones that allow for a split into survived not survived with higher confidence than the existing features
35059, Complexity graph of Solution 2
2980, Data Visualization
25458, predict the test set using augmented images
29634, with our voting classifier we try to predict whether passenger in test set survived the catastrophe or not
26617, Read the data
11504, we don't need scaling because we already applied the log transform
19350, we can log-transform price to decrease the large gaps
7457, Creating new Features
36410, Basic Pre EDA
4059, Enhancing the Missing Age Solution
37868, Distribution of Categorical Features with respect to Sales Price
20777, we can implement the Doc2Vec model as follows
42007, Sorting and Counting sorted values
19709, Since train data is a list let us convert it into a numpy array
3015, We now start the process of preparing our features: we first find the percentage of missing data in each column and determine whether the level of missing values is acceptable or not
38410, Model selection
9392, K fold preparation
36479, Private dataset and EDA
27845, Plot errors
9401, This will be the final set of parameters I use here for my prediction
16492, Creating Train Test Split
11477, MiscFeature
6017, You can see our Random Forest algorithm is not far behind; the machine recommends XGBRegressor because it does not overfit as much as the two algorithms after it
18991, Represent train test texts with token identifiers
24509, create our DataBunch
33272, CutMix data augmentation
16693, We can also look at the survival rates
6029, Detect and Remove outliers
18014, Survived
12020, LGBM
39152, it s time to train the neural network
30895, hardcoding the locations of the main cities in CA
20416, Feature word Common
3875, Does building a garage after few years make the house more valuable
17943, preprocess test data also
33336, Fix shops df and generate some features
34908, Make mean target encoding for categorical feature
39122, XGBoost
39112, SibSp
32911, This is the augmentation configuration we use for training
24435, Filter
15906, Findings
7818, Architecture
35528, Multiple regression models combine with an ensembling technique
14769, try another method a decision tree
4001, The obtained parameters are very far from those that we obtained analytically
9786, now that the problematic values are dropped, we can work on filling the missing values that lie below the threshold
1810, Fixing Skewness
32821, Starting variable importance evaluation
9426, BoxPlot
3185, With both models I need a scaler
24750, Converting variables
1134, Feature ranking with recursive feature elimination and cross validation
42245, Data Cleaning Preprocessing
13400, Plot the classifier accuracy scores
13058, Voting Classifier
34486, Train the model
18016, Survived by Gender Pclass
17353, we model using Support Vector Machines, which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference: Wikipedia
22221, A sample visualization of a digit image from the dataset
6084, Classifier Comparison
36811, Then as always we tokenize the sentence and follow up with parts of speech tagging
24920, AUTO Correlation
31091, BsmtFinSF2
9683, Importing libraries
33588, Data Augmentation
8110, Parch SibSp
15255, Fill Null Values for Age Feature in train and test dataset
18476, Since we have no information whatsoever on those missing values and no accurate way of filling those values
29024, And some nice scatterplots
29181, One Hot Encoding using pd.get_dummies
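A minimal sketch, assuming a DataFrame `df` with hypothetical categorical columns `Sex` and `Embarked`:

```python
import pandas as pd

# drop_first avoids the redundant dummy column per category.
df = pd.get_dummies(df, columns=["Sex", "Embarked"], drop_first=True)
```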
8289, Stacking At this point we basically trained and predicted each model so we can combine its predictions into a final predictions variable for submission
6664, Linear SVC
36109, Correlation of each feature with our target variable
31775, Predictions class distribution
25936, Missing Values
1781, remember our feature scaling earlier on train test split
12135, Basic evaluation 2
36557, Take Away
20634, lets consider only those words which have appeared more than once in the corpus
7146, Deployment
16135, Total rows and columns
32531, Refining the Model
39232, WBF over TTA
904, SalePrice correlation matrix
31786, Calculating the first metrics without Bayesian Optimization
13742, Confusion matrix and accuracy
26401, It is remarkable that the percentage of men in the third class is much bigger
3437, Some titles like Dr and Rev we'll create specific indicator variables for
41158, FEATURE 4 OF ACTIVE LOANS FROM BUREAU DATA
42766, Age group
14187, Creating dummy variables
27999, Splitting Test and Train Data
12205, As we wanted, the data flowed through the pipeline, getting cleaned, transformed and rescaled
30954, Appendix
6064, FireplaceQu missing values missing fireplaces sparse classes
1194, drop these from X and testing features
39957, ENSEMBLE METHODS
22493, Bonus9 Animation in matplotlib
39857, Creating the Model
42777, Confirm that the shape is what we expect 42k in train 28k in test with 784 pixels per row
7714, Data Visualization
37022, Top 15 second-level categories with highest prices
15376, The code can be read from right to left and is a list comprehension and is executed as follows
25915, MODEL BUILDING
23291, Correlation Coefficients
3775, Data Wrangling II
2773, Modelling
32242, we can run the training
11854, Reverse log transform on SalePrice
8875, Merging finalTrain and finalTest
16966, Encode categorical feature
7624, booster: gbtree, gblinear or dart (default gbtree)
18771, More filtering of data to try
19710, try visualising a photo from the converted training set
42893, I thought of a sigmoidal function because China's data resembled a sigmoidal shape
42153, check how the Overall Quality rating affects the median price of the house
12922, check the percentage of survival for males and females separately
29423, We use a multinomial Naive Bayes model for this notebook. You can go ahead and choose your own model as you like, and you can also play with this model's parameters to increase its accuracy. For me this gave an accuracy of around 79
3607, Outliers
28030, CLASSIFIERS ACCURACY COMPARISON ON OUR WORD2VEC MODEL
11735, In the correlation heat map we get some important information
28529, OpenPorchSF
26206, I assign a class 1 which is the label wheat
9073, Feature Engineering
11690, Random Forest Classifier
27028, use Z test
21583, Select data by label and position chained iloc and loc
27482, The model needs to be compiled before training can start
17383, Training Our linear SVM
31304, PCA decomposition can be inverted
17037, parameters gamma
42805, Model evaluation
29832, Recurrent Neural Network GRU
11063, Feature Engineering
33827, Missing values Treatment Techniques
12070, 1stFlrSF
36386, SGDRegressor is sensitive to scaling and normalization
28751, Dataset Loader function
12407, GrLivArea
33830, Suppose we want to keep only the rows with at least 4 non na values
17995, Instead of having separate features for the number of parents/children and siblings/spouses, we can use the family size as a single feature for accompanying family members
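A one-line sketch of that combined feature, assuming the Titanic-style columns `SibSp` and `Parch` in a DataFrame `df`:

```python
# Siblings/spouses + parents/children + the passenger themselves.
df["FamilySize"] = df["SibSp"] + df["Parch"] + 1
```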
18521, We can get a better sense for one of these examples by visualising the image and looking at the label
42015, Creating a new Dataframe with a certain column
260, Library and Data
20212, Plot the data
42817, Preparing the Data
28245, Lets check the missing values in each file
17456, switch back survived value
35845, Let's create predictions with the DT Model
25224, Data Preprocessing
22801, Data Cleansing
3962, Data Preparation
40251, Garage Area
26284, Defining the neural network structure
7612, Loop over Pipelines Ensembles
8330, organize the data a bit better
9903, Modeling
6074, SibSp and Parch
8590, Deleting Cabin feature
8750, Validate Model
23346, Learning Curve
33514, Albania
19144, Model 4 Input ReLu 512 Dropout ReLu 128 Dropout Sigmoid output
19724, Observation
10737, Bidirectional LSTM GRU Attention
21562, Fare
36569, Save images
17736, The letter in each cabin number refers to the deck the cabin was on
17716, Final submission
34031, Scaling
12464, Model Fitting
15316, Checking out the distribution of Fares
28520, YearBuilt
2557, Encoding the variables
34648, USD RUB exchange rate taken from
24143, Voting Classifier Model
15119, Looks like the Random Forest classifier got us a slightly higher score of 84 given the parameters we fed into the model
26414, let s go back to the histograms and check how the distributions look like for FareCorr
7782, value counts with relative frequencies of the unique values
17415, Plot final XGBoost Decision Tree
9525, Library and Data
24364, This looks much better than the previous one
18331, Looking at SalePrice distribution
14536, Feature Importance
13025, Age
25879, Trigram plots
41047, Order By Timing
13206, First let s create a basic decistion tree model using all information that we have
7558, we got our best algorithm with accuracy of Gradient Boosting
37280, Our standard embedding matrix builder
28601, HouseStyle
4692, Numerical values
11911, Lets focus on the Missing Values
26720, look at the unique days at which the 10 SNAP days of a month exists over the years
28700, Stacking
15121, Before handling the missing Age data, I'm going to bin ages into 10-year classes
42804, The log of our data has a distribution close to normal, with the exception of two abnormal peaks on the left side
6154, Fit the basic models
13192, First let's deal with the Name variable: I extract the title from each passenger's name, replacing missing titles with 'unknown'; later I'll one-hot encode this new variable along with the other categorical variables
7967, Wrangle data
9889, We try to write code to fill in the missing Age values
27041, Location of imaged site w r t gender
38759, The train accuracy is 82
27948, Drop columns with categorical data
36856, setting train and validation data
2024, XGBoost
21473, We just have to apply it to the whole dataframe. Don't forget to shuffle it afterwards, otherwise the batches will all have nearly identical rows
7051, Masonry veneer type
31239, As per the Kolmogorov Smirnov test 46 features have a high probability of not being from the same sampling distribution
14477, Most of the young males died while many with siblings survived; I can only say that high-class male passengers may have been more likely to be saved
27933, Learning Rate Decay: if the model's loss does not reduce for 2 consecutive epochs, reduce the learning rate. This should stop the model stepping over the optimum too many times
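One way to express that schedule in Keras is the ReduceLROnPlateau callback; a hedged sketch, with the factor and minimum rate chosen for illustration:

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau

# Halve the learning rate whenever val_loss fails to improve for 2 epochs
lr_decay = ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                             patience=2, min_lr=1e-6, verbose=1)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=30, callbacks=[lr_decay])   # model and data assumed to exist
```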
23488, Hashing Vectorizer
25372, Label encoding
3939, Bivariate Analysis
30141, Training
33457, StateHoliday
25492, Drop columns with categorical data
9226, Execution on Testset with Random Forest
21118, Define train and test data sets
7806, Iteration 3 Setup with Advance Preprocessing
11517, Grid search for RF
36553, Rounding quantile based binning
32939, Load the data
4473, Dealing with Fare Missing values
8474, Residuals Plots
19865, we have got around 144 outliers
15175, Missing values part 2
18193, There are interesting things
30657, Bags have been transformed into Bag
18613, Model 2 SVM Linear Accuracy 79
27764, SUBMISSION
39851, Seasonal ARIMA Model
6225, That's it; there are no missing values
3727, There are three ports C Cherbourg Q Queenstown S Southampton
16603, Discrete Variables
39384, The idea is to replace the null values of renta with appropriate median values
9412, Fancy imputation
31743, Torchvision transforms
27529, Display heatmap of quantitative variables with a numerical variable
15579, TicketGroupSurvivors
25170, Replacing common words like 1000 to 1k or 1m and many other and removing special characters
19351, Second try
7899, I explore the entropy to check whether the values can give a good learning signal to the algorithm
3246, Before moving further, let's take a look at missing values in numerical and categorical columns
14196, Creating Model
3504, Separate into inputs and outputs
36702, Accuracy of the model
20679, While fitting the model, a progress bar summarizes the status of each epoch and the overall training process
32125, How to replace all missing values with 0 in a numpy array
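For instance, either of these two standard numpy idioms does it:

```python
import numpy as np

arr = np.array([1.0, np.nan, 3.0, np.nan])
arr[np.isnan(arr)] = 0          # in-place, via a boolean mask
# or, returning a new array instead:
# arr = np.nan_to_num(arr, nan=0.0)
print(arr)                      # [1. 0. 3. 0.]
```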
40008, Overlapping patients in train test
15736, After filling AGE values we can focus CABIN
11167, Transforming some numerical variables that are really categorical
7295, Comparison of All Models
33288, Age Grouper
18230, Test RNN Model
7668, Missing-value percentage per column, for columns with less than 80% missing
12088, Naive Bayes classifier
5210, predict the output and making an output CSV
1318, data after tuning
3435, Exploring the Title variable imputing missing Age values
5331, Visualize three way split of three variables
29391, now check our final CNN model
14568, We can combine SibSp and Parch into one Family which indicates the total number of family members on board for each member
27273, Infact our data is not normal distributed we can achive better score with Gaussian constran removed
28274, Fine tuning the model by finding the best values for the hyperparameters using GridSearchCV
37164, We use nn
15332, Searching for the titles and extracting them from the names in the given data
4074, Bonus Plotting Feature Importances
36725, Define validate function
4044, we can check the general attributes of our data and the non null elements
42401, Weekly cycle decompose of CA 1 store
42470, Selecting features with a Random Forest and SelectFromModel
36849, some global variables
2190, Top 50 Corralation train attributes with sale price
12476, Embarked
37081, Correlation among Base Models' Predictions: how are the base models' predictions correlated? If the base models' predictions are weakly correlated with each other, the ensemble is likely to perform better; for strongly correlated predictions, the ensemble is unlikely to perform better. To summarize, diversity of predictions among the base models is inversely proportional to ensemble accuracy. Now make predictions for the test set
18813, Import libraries
37556, Get file structure
39064, Paragram Embeddings
38157, We already have our train and test sets, so we just need to choose our response variable as well as the predictors. We do the same thing that we did for the first tutorial
35623, Some necessary functions
28022, CLASSIFIERS ACCURACY COMPARISON ON COUNT VECTORIZER
42147, Decoder
24564, let s look at the total number of products by age income and seniority
21241, The Generator Model
29971, Splitting train dataset into subtrain and subtest Training data with LSTM model
6757, Checking Skewness for feature MiscVal
20329, Create a submission file
40137, Create our class instance
6942, Here I bring categories into a numerical format
43357, Target
20044, look at the train data file
5928, Don t know how to find outliers
39499, Got the idea about its
3618, And then this happens
10439, The missing data shall be removed now
3877, A Transformer by definition should support the fit transform interface There are two ways to achieve this
39669, Input tensor placeholder X needs arbitrary amount of 2D inputs
40097, spaCy s CNN
3500, Make predictions on the test set and write them to csv
31665, Evaluation of model with 3 classifiers
20594, Exploring Missing Values
17004, Explore and Engineer Features
34177, Interpreting ANN model 1
10205, Train Test Split
9804, Stack Model 3 Feature Disintegration with PCA
14156, First we plot the distributions to find out if they are Gaussian or skewed
37700, just like numpy right
18195, there are products that were never returned
19983, Attention for text classification
4793, Since the SubClass are categories and not of numeric data type we covert the feature to category type
36055, Create new features
35522, Another part is checking the distribution of Sale Price
12433, Feature Engineering
28366, Independent Variables
26671, Income Type
18532, Here we have the first rows of the train and the test dataset
1159, LightGBM
16994, Download data and preparing to prediction including FE
14218, The chart confirms that a person aboard with more than 2 siblings or spouses was more likely to survive
16928, Analyze data
14306, We concatenate the test and train data only to apply the feature engineering; we do not mix them. Concatenating lets us apply each feature in a single run of code instead of separately for test and train, and we can still use the train and test data separately afterwards
37108, Code in python
2141, If we compute the residuals and plot them the pattern looks even more evident
26453, The validation score evaluated by cross-validation is quite close to the 83%
1983, Converting Categorical Features
6453, Creating Submission file
29379, Begin Training
26219, Vertical Flip
20603, Since most of the values are Mr, Miss, or Mrs, we can put the others in a separate category and hence have four categories
15664, Reforcast predictions based on best performing model
17427, How many values are missing
9936, Combining the datasets
9842, Logistic Regression Model
8565, Deleting a Column
29995, LinearRegression
36826, Linear Classifier
25259, Split DataSet
3570, GarageArea is divided into zero and non zero parts
20809, Split X and y into train and valid data for model testing
35173, Plot the model s performance
24770, Rank model performance
18238, Softmax activation prediction and the loss function
29185, Visualizing
4618, Lets have a look at the Dimensions of the data
30321, This notebook deals with positive, negative and neutral samples independently
25292, These are the probabilities that the image is any of these numbers. But we don't want that; we only want the highest probability
37653, Train
24864, First we notice that the Survived column is unaffected if we drop the Name column
12174, Custom Build Submission Function
10032, CORELATION PLOT
29052, Adaptive Histogram Equalization
6681, Confusion matrix for the best model
1410, Check how the distribution of the survival variable depends on the title
40472, Training
37703, It took 12 seconds and we used the whole dataset
7750, We need to know how useful each feature is for predicting prices, which means it should be correlated with SalePrice
14646, Run encoding on the combined train test data
24108, Extracting Models
34023, Count Hour
34321, Make sure that the labels are proportionately equal before and after the train test split
38, PassengerId 62 and 830 have missing embarked values
28686, YrSold
42206, Checking Error
6482, Imputting missing values
42269, bedrooms bathrooms Category Category Values
33766, Distribution of Labels
872, For females with Parch and Pclass survival rate is below
7148, MANIPULATING DATA FRAMES WITH PANDAS
8311, Checking for features with missing values
5913, RandmoizedSearchCV vs GridSearchCV
6827, Target Distribution
26815, Submitted Test Set
20720, LandSlope column
10669, Hierarchical Clustering
15233, Splitting training data and testing data
15272, Random Forest Algorithm
1101, Variable selection
37731, Feature Selection
34529, Create Custom Feature Primitives
16355, Comparison of base models
38144, Is ensemble default in auto sklearn
43318, KNN is the best classifier for the dataset
41566, Stage 5 Understanding and Applying t SNE
5831, We can deal with these missing values in 2 steps, identifying categorical and numerical features separately
11693, The accuracy of KNN classifier as reported by Kaggle is 77
12637, Filling in Missing Values
3019, Looking at the distribution of the numeric features
7019, Evaluates the general condition of the basement
42462, ps reg 02 and ps reg 03
11235, go to the prediction part
28020, MLP
1127, Considering the survival rate of passengers under 16, I'll also include another categorical variable in my dataset: Minor
29967, Visualizing dataset
22102, Predict Labels Targets for Test Images
10654, Compare Model
25185, Importing Libraries
39497, Function to explore the numeric data
15441, We find correlations between features and impute missing values using the correlations
21596, Check if 2 series are similar
15840, PClass
27130, Our target feature is a continuous variable with values ranging from 34900 to 755000
13345, Modeling
19536, you can check the lineage
10383, Fixing missing data in test set
13883, Passenger s Gender
8817, Filling missing values in Fare Cabin and Embarked
28204, Setup
5098, This is a list of new features
11224, Show new predictions
36113, Seperate numerical and categorical features
23896, There are no visible pockets as such with respect to latitude or longitude, at least with the naked eye
1185, Incomplete cases
4402, Hyperparameter Tuning
9906, I am going to compare 5 different Machine Learning models which are
20699, How to Get Better Model Performance
40661, The next two will be tree models
7327, Create a feature Names to store the length of words in name
4155, Missing Value Indicator on Titanic dataset
1277, Majority of passengers borded from Southampton
16480, Survived Plots
3862, examine the attributes
36574, visualize the distribution of less popular apps
12913, View concise summary of test set
31083, Selecting the union of features from both approaches
36348, Train the Model
10441, Normality
36555, Gaussian Mixture Clustering
979, Confusion Matrix
263, Compiling model
28729, The most expensive products
35081, Applying the scaler to X_train and X_test, transforming them into X_train_scaled and X_test_scaled
31739, Ben Graham greyscale Gaussian Blur
20450, installments payments
201, Model with plots and accuracy
5448, The class version: let's turn it into a class with a single function, still only handling one feature
30852, Location for Non Criminal Activity
39668, Normalize input incase image array numbers in 0 255 range
31378, Flip Image LR Up Down Transpose
38981, Location encoding based on
9030, There is 1 row in the testing dataframe with no value for KitchenQual
27374, Adding lags
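A minimal sketch of lag features on a toy monthly-sales frame (the column names mimic the competition data, but the values are made up):

```python
import pandas as pd

df = pd.DataFrame({'shop_id':        [0, 0, 0, 1, 1, 1],
                   'date_block_num': [0, 1, 2, 0, 1, 2],
                   'item_cnt_month': [5, 7, 6, 2, 3, 4]})
df = df.sort_values(['shop_id', 'date_block_num'])
for lag in (1, 2):
    # each shop's count shifted back `lag` months; NaN where no history exists
    df[f'item_cnt_lag_{lag}'] = df.groupby('shop_id')['item_cnt_month'].shift(lag)
print(df)
```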
24272, Naive Bayes classifier
13048, Embarked
31773, Loss
7396, Import Data
38313, Final Prediction
29848, Label-encoding some categorical variables that may contain information in their ordering
40446, Street and Alley
10826, As always let s make a prediction
6834, Outliers
37566, The majority of the columns are integers, with 8 categorical columns and 1 float column
20406, Reading data and basic stats
33307, Feature Importances
36354, Train the Model
10832, Before we start tweaking the classifier I would like to check one thing
28943, PyCaret Library
6182, The number of siblings present for passengers
24461, only cutout with 8 holes
25429, Simple EDA
4570, Third step Missing data
2266, Fare
18387, Algorithm for finding selected text
7666, We drop features with more than 80% missing values
10777, Voting ensembling
32890, Trained on validation set using the 1st level models predictions as features
23839, Taking X122 and X128
18930, Relationship between numerical values exposing most data points by color gradient
13535, Gradient Boosting Classifier with HyperOpt tuning, from my kernel "Titanic: comparison of popular models"
35237, Already we analysed punctuations of selected text now lets have a peek into whole text
638, Sharing a ticket appears to be good for survival
31601, MODEL BUILDING AND PREDICTION
2752, We are going to use the House Prices Advanced Regression dataset to point out a mistake that people might make while handling missing values
1559, Age
1331, Correlating categorical and numerical features
13718, Plot to check distribution
20438, application train
42724, With lmplot from seaborn we can draw linear regression plots very easily
23455, Year
13157, Custom Mappings
2990, Create dtype lists
36698, Seperate Train Set
22998, Standard deviation analysis in each store
32267, Relationship between variables with respective to time Represent in stacked fill line
20849, To run on the full dataset use this instead
39698, Lemmatization
36358, Training our model
8995, For LotFrontage I am confused why some of these values would be null
22353, Writing the id and loss values to a csv file last line of which contains the MAE value
29437, Null Values and unuseful data
16638, EDA
9421, Count Plot
29375, Folders to store the images in format that allow to run fastai algorithms
5434, Many missing values; are they all houses without a pool?
5520, Ensembling
31246, Sex
30678, 3775148606625
21420, Outliers
19359, Exploring numerical columns
33477, In order to solve the system of differential equations, we develop a 4th-order method
6207, Linear SVC
29128, Investigate the errors. Curious? Let's dive in
14706, GETTING MODEL READY
19301, Data Interaction
41989, Locating loc To read a certain row
21902, Summary
23324, Number of months since the last sale of the item (uses info from the past)
27053, Extracting DICOM file information into a dataframe
14521, Observations
22509, start session for prediction
17974, Embarked
18213, First we format the data so that, instead of a 28x28 matrix, it is a vector of 784 values
27525, Data
19551, Reshape
30720, Data Augmentation
34086, Hyperparameter Tuning Grid Search Cross Validation
8256, Creating a Dataframe listing every feature with missing values
25235, We had log transformed the Ytrain and hence it is essential to transform it back to original by taking an exponential of model predictions
5307, Selecting meaningful features
2806, AutoViz
20205, impute mode value to categorical features
32307, We have 177 NaN in Age and 2 in Embarked
8473, Modeling
8086, Blend models and get predictions
27058, Define a model with best tree size
34472, Plotting feature importances
18698, Accuracy
32129, How to do probabilistic sampling in numpy
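np.random.choice with a `p` vector is the usual answer; a tiny example:

```python
import numpy as np

items = np.array(['a', 'b', 'c'])
probs = [0.5, 0.3, 0.2]                       # must sum to 1
print(np.random.choice(items, size=10, p=probs))
```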
8680, CATEGORICAL FEATURES
5803, Data for the exercise
29028, Feature Engineering based on this excellent notebook
1580, Having built our helper functions, we can now execute them in order to build the dataset that will be used in the model
28616, RoofMatl
27878, the properties of the categories should be similar
8415, Reviwe Porch Features
42457, Checking the cardinality of the categorical variables
39119, Gaussian Naive Bayes
22604, Months since the last sale for each shop item pair and for item only
10222, Start feature engineering by creating a new variable, FamilySize, by adding SibSp, Parch, and one for the current passenger
6261, It looks as though passengers having many siblings or spouses aboard had a lower chance of survival Perhaps this ties up with the reasoning why passengers with a large value for Parch had a lower chance of survival
10680, HERE SalePrice is mostly related to
17745, examine fare NaN s now
39672, Create input X and output Y tensor placeholders that correspond to 2 coordinate inputs for X and 3 RGB color outputs for Y
19696, Compile Model
27976, distplot
6405, Great There are no negative values in the dataset for sale price which is good
24229, We will be using the Sequential model from Keras to form the neural network; the Sequential model is used to construct simple models with a linear stack of layers
12523, Combining train and test data and separating the label
10734, XGBoost
34936, Stack it
23441, Checking for null values
3264, The target variable is right skewed
27900, Simple training, only 10 epochs, just to test it
4564, Defining features and target variable
41967, Distribution Checking
31632, Dates Hour Dates minute
18468, Deep Dive on Stores Closed on Certain days
19434, Take a quick look at the correlation matrix as a heatmap
32237, Here is the accuracy of our model for the validation set
14563, Embarked is a categorical variable; here we can impute the missing values with the most popular category
12955, Detecting missing values
28433, Test Set
10641, Encoding Categorical Variable code Title code
8865, Outlier Analysis
15188, Finding Missing Values in Embarked
4002, RMSE is also very close to sklearn and far from our analytical solution
10843, check missing values
15659, Ridge Classifier
3400, More Features Engineering
41164, FEATURE 10 AVERAGE NUMBER OF LOANS PROLONGED
12640, Gender Feature
20170, Training Data Size Vs Accuracy Fitting Score Times
28817, No surprise: in December the stores have more customers and make more sales
19967, Prediction
18606, We need a path that points to the dataset
39211, One of the samples is listed with 112 bathrooms
24570, Observations
42253, Cross validation
25344, Data Augmentation
37170, a random forest
26497, For most classification problems one hot vectors are used
25219, TotalBsmtSF Total Basement Square Feet
2311, Words Labels as Features
20941, RESULT
18763, load itk Used to read a CT Scan for the mhd file
11882, Exploratory Data Analysis
28453, Analyzing column of type int parcelid
8719, MICE
26523, Model and Model Evaluation
18389, Recalculate word weights using the entire training set
13694, We convert the Cabin data into a flag about whether a passenger had an assigned cabin or not
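A minimal sketch of that flag on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({'Cabin': ['C85', None, 'E46', None]})  # toy stand-in
df['HasCabin'] = df['Cabin'].notna().astype(int)          # 1 = cabin assigned
print(df)
```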
32248, Size Matters
7366, have a look at the table for the 1st class
1165, NA s
42834, Training and Evaluating the Model
2102, Another way of looking at these features is to combine them in a plot together with another numerical feature
41682, The data is imbalanced, meaning there are a lot of records for benign tumours but very few for malignant tumours
29822, Trained CBOW
9839, Linear Support Vector Machine
6033, Remove correlated features
1216, String Values
26798, Visualizations
7874, The next option is to create an IsAlone feature to check whether a person traveling alone was more likely to survive or die
34049, Dummy Encoding of PdDistrict
17263, Ticket Feature
2831, concatenate the saved Id with the predicted values to create a csv file for submussion
36997, Missing Data
42144, We re trying to build a generative model here not just a fuzzy data structure that can memorize images
38311, Decision tree
8720, KNN Standardized Features
3938, Univariate Analysis
13566, use some features to help us fill Age NaN s
20696, How to Plot Model Learning Curves
21922, LIghtGBM Regressor
14388, Extract a title for each Name in the train and test datasets
30620, Less than 40% of passengers in the training dataset survived
7917, Select the numeric and categoricals columns
19890, Expanding Window
7281, Title Feature
8797, Deleting Outliers
9934, To handle the missing values in the Age column, I calculated the average age of the males and the females in the dataset and replaced the missing values according to the passenger's sex
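A hedged sketch of that imputation, using a grouped transform on a toy frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Sex': ['male', 'female', 'male', 'female'],
                   'Age': [22.0, np.nan, np.nan, 30.0]})   # toy stand-in
# Fill each missing Age with the average Age of that passenger's sex
df['Age'] = df['Age'].fillna(df.groupby('Sex')['Age'].transform('mean'))
print(df)
```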
19541, Make Predictions
5443, On to the real Data
12972, We can say
10833, make a quick summary
7775, Extra Tree
17457, Ticket class
35405, Applying Random Foresting
25424, Idea to try: predict the mean of the target using the date
7550, SVM
10467, Let's create a merged plot of the top 6 features most strongly correlated with the target
9439, Zoomed Heatmap
17397, Age wise Survival probability
27001, Data for the network
1966, LogisticRegression
9230, Train Model 75 15
29055, Histogram Normalization
5151, We need to transform the strings of these variables into numbers
36860, first 10 image samples for each digit
19580, Matrix shop item
33234, Transfer Learning Example
11771, Analysis and Visualization of Numeric and Categorical Variables
17354, In pattern recognition the k Nearest Neighbors algorithm is a non parametric method used for classification and regression
14694, Categorical Features
34713, Mean over fixed subcategory and month
18345, Remodeling of house
7481, 4a Titles
31106, Label Encoding one Column with Sklearn
10033, Missing value is data set
10881, Examine the Distribution of the Survived Column
25328, The Porto Competition
29990, The following dataset is modified to work with our hdf5 file
171, Now we can make predictions for our test set. Notice that we quickly transform the Sex values to numeric dummies using the pandas get_dummies method, to bring them into a suitable format for our Logistic Regression model
1743, Since fare is all filled up we compare the fare distribution as below
4922, Now that we are done with our feature engineering, we can separate the training and test data from the complete data
5944, Our model created now we have to import test dataset
7443, Data cleaning is now complete; we can now use our data to build our models
12074, Feature Normalization
41272, Scientific Colormap
5316, Multiple Regression
41993, iloc To read certain rows
11221, correlation looks much better at the high end
39417, replace the NaN values in Cabin with Missing
13090, Finding categorical features and converting their pandas dtype to categorical to ease visualization
22523, Evaluate models with different categorical treatment
43255, Taking the log of the count column
293, Embarked
18919, Embarked Feature
4091, In the search for writing homoscedasticity right at the first attempt
25861, Parameter Tuning of SGDClassifier
2402, Importing Packages
13887, Passengers Port of Embarkation
7524, Summary
27877, Sanity check
17905, Analyzing shapes of the dataframes
28479, Obtaining the absolute error from the logerror column
1554, Name
26709, Plotting sales ditribution across departments
9644, Again we can notice some type of deviation
32909, There is a very interesting article comparing the optimization function for this challenge
35123, Training Function
14244, Sex Categorical Feature
17721, Here we face the same issue as the age feature
30262, We read this array such that the columns correspond to our class predictions
10971, Top influencers
33507, Andorra
8539, Removing Duplicated Features
22847, Generating Lag Features and Mean Encodings
41029, We draw some interesting conclusions: Pclass is definitely useful, and so is Embarked, even though these two features are not independent
21537, Random Forest
39225, Correlation
2641, Preprocess data
11723, BernoulliNB
12022, Learning Curves
11531, Select Xgboost LightGBM Gradient Boosting Regression
6761, Checking Skewness for feature PoolArea
10860, Zooming up the map to list the top correlations with SalePrice
12758, For this part we evaluate our model using classification report and confusion matrix
25725, check a directory
24170, Evaluation
38274, define our model
2933, Combining Models with VotingClassifier
16040, We get our second error
24133, Create a bag-of-words model; it contains only the unique words from the corpus
35163, Experiment Data augmentation
28555, Treat missing values
20386, Create sparse matrix Bag of words
20269, Bang
10141, Simple Neural Network
3470, Here s a classification report
16850, Except for the top four titles, the others are present in very small numbers and hence are not suitable to train on in such small quantities
38169, You can also tell TPOT to export the corresponding Python code for the optimized pipeline to a text file with the export function, and I personally think this is an amazing feature
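A sketch of that call, assuming `X_train`/`y_train` already exist and the search settings are only illustrative:

```python
from tpot import TPOTClassifier

tpot = TPOTClassifier(generations=5, population_size=20, random_state=42)
tpot.fit(X_train, y_train)            # X_train / y_train assumed to exist
tpot.export('tpot_best_pipeline.py')  # writes the winning pipeline as Python code
```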
21561, Embarked
261, Preprocessing and Data Split
30821, No overlapping
10201, Let's drop the columns with a high percentage of null values
26797, Train the model using data augmentation
37755, Technique 6 Use of Generators
29794, Word2Vec
31534, LotFrontage
8443, Check if we had others TenC in the dataset
33269, Small peak at 64 and large peak at 451; they correspond to periods
37986, Model3 fine tune model 1 add 1 fc layer
43125, SVM
411, Extra Trees
37335, train
17446, Actually, I don't know if there is a reason behind the tickets, Parch and SibSp
3345, Let's visualize our newly created feature in a waffle chart
18693, let s train our model for 4 epochs
16628, Select The Best Model
23479, Submitting Test Data
4672, We don t need Id feature so we drop it
15026, AgeGroup
41446, It s now possible to visualize our data points
11314, Survived
16960, Embarking Port
14355, Male and female distribution in Non Survived Passengers
29833, Target Prediction
24320, Convert some numerical features into categorical features
1300, Swarmplot
26510, Train validate and predict
5048, By the way If we want to visually combine count and distribution we can use a swarmplot
13518, To give us a better view of these transformations, this is how our dataset columns would look after the transformations inside a pipeline with scaling and one-hot encoding
6642, Survived and Not Survived by Age and Embarked
11751, Great We filled in all of our missing values
31862, We do the same with the validation data
32789, optionally you can also choose to drop the original categorical features
5912, Ensemble Blending
40722, Confusion Matrix
18938, Displays collected view of different categorical features with respect to single numerical variable
10416, Train the model
18354, ridge
20854, We can create a ModelData object directly from our data frame
19331, Utility Functions
5522, Estimate missing Embarkation Data
14681, Some important inferences
38901, Anatom Site Differences
40926, Simple Augmentation
12757, After trying several different models and even Artificial Neural Network with extensive hyperparameter analysis it turns out that the logistic regression performs the best which is extremely simple but effective
11943, Finding the most important features in the dataset
15127, Correlation with Age Pclass Embarked
35548, To calculate the average weights, let us look at the following code
11607, Overview
18399, All of the meta features have very similar distributions in training and test set which also proves that training and test set are taken from the same sample
42421, Correlation Analysis
14552, Parch (Parents/Children)
4219, Categorical Encoding
21088, Correlation plot
39773, The question ID range of the test dataset is the same as the question ID range of the train one
1396, Parch vs Survived
623, Deck
25465, Forward propagation
19368, Visualization of categorical attributes versus target
1110, Train the selected model
31406, We are ready to go now we can use standard fastai datablock API to create databunch
32349, Tokenizer
15423, let s have a look if port of embarkation affected the chances of survival
2183, Univariate statistics
27519, Show incorrectly classified images
26494, To output one of the images we reshape this long string of pixels into a 2 dimensional array which is basically a grayscale image
16581, Lets check which Fare class Survived along with their title
38588, Cleaning text in testing dataset
30405, Plotting prediction
28870, Inverse Normalise
8773, Label the Fare
21323, Plot histograms of the numeric features
18059, Word Cloud for Positive tweets
36270, Fare vs Pclass
32943, Check test and train now
15331, Mapping values
8958, Getting accuracy from the model
31858, Build XGB Models with GPU
40833, Adding dummy variables for categorical data
726, In addition to the original values we have some new skewed values of the 2num variety
10040, Correlation plot
32049, Low Variance Filter
23389, the model is ready to be trained
33298, Feeding the Machine
28618, Exterior1st Exterior2nd
16620, Scaling
12183, Now that our data is numeric, we can check the correlation between the features
32428, Training
2581, It is a single-layer neural network used for classification
15202, Now that our data looks good, let's get ready to build our models
41722, Training
28794, WORDCLOUD OF NEUTRAL TWEETS
36651, Plotting Functions and Resizing Images
15066, Family Size
35894, Submission
34718, Features found via LOFO importance
42418, Price Doc Distribution
17550, Age is not a very strong determining factor for survival prediction
3993, We got some values of the coefficients theta but we understand that they are obtained from data in which there is noise
34685, Target variable
16733, impute age
37550, Predicting on test set
15476, We bin Age into AgeBand so as to reduce noise
38995, Helper function to check the properties of numpy arrays; we will use this to validate matrix dimensions throughout the rest of the code
15774, Categories
6123, Utilities
26807, Metrics
32146, How to find the grouped mean in numpy
38222, We replace the outlying coordinates with the average coordinates of the district they belong to
14496, KNN with the chosen neighbour count and the Euclidean metric; the accuracy score is cross-validated
28174, The dependency tag ROOT denotes the main verb or action in the sentence
31426, Implementation
13965, Sex Survival probability
3932, Exploring data
0, Missing Values Treatment
40274, Kitchen Quality vs Sale Price
27069, There is a small difference between keywords in the train set and the test set, so I check the intersection of the keywords in train and test
37537, Data Preparation
41852, Thresholding merged output
18978, Display values in table format Figure Factory format
41453, Now that we have a general idea of the dataset's contents, we can dive deeper into each column
9822, Cabin
15221, So how great is
21040, Insincere Topic Wordcloud
11947, Handling remaining missing values by replacing them with the median of the values
2339, Decision Trees
36086, Engine
20492, Client type
40711, Shape of training set
283, How many passengers travelled alone? Were they more likely to survive compared to those that travelled with family?
27611, Create the Dataframe for the Datagenerators
14324, Ticket
29043, Median subtraction
1908, Pivotal Features
31322, RMSPE Root Mean Square Percentage Error
6447, Modelling
32411, Preparing model and prediction
3180, cuml Models
39735, Age
18491, StateHoliday is not very important to distinguish and can be merged into a binary variable called is_holiday_state
24277, Random Forests
21087, Descrictive Statistic Features
32157, How to create strides from a given 1D array
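One way (assuming numpy >= 1.20) is sliding_window_view plus ordinary slicing:

```python
import numpy as np

arr = np.arange(15)
# Overlapping length-4 windows, then keep every 2nd window => stride of 2
windows = np.lib.stride_tricks.sliding_window_view(arr, 4)[::2]
print(windows)
```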
4838, Exterior1st, Exterior2nd and SaleType have just one missing value each, so we are going to impute with the most common string
15051, LastName
30721, Confusion Matrix
32929, Saving model
22230, Learning rate optimization (Optimum Learning Rate)
28934, Predict from data and we are done
6645, Survived and Not Survived by Embarked
25935, There are only a few variables that are object type, and most of them are yes/no
11892, XGBOOST
3166, XGBoost
23554, have a look at how they have grown over these years
8391, Entities and EntitySets
42308, Predictions
5594, List of Machine Learning Algorithm used
20300, As the fare range increases, the chances of survival increase
14515, Observations
31922, After trying multiple architectures the final one is
18348, To measure the completeness of the data, Python provides the missingno library, which helps us visualize the completeness of each variable
12961, SibSp and Survived
29687, MaxPool: this and other similar layers make our matrix progressively smaller and less complex, keeping only the important features and their locations
28932, Creating Model and Fitting with multi gpu
8623, TotalBsmtSF vs SalePrice
1151, Creating and Training the Model
9656, Adding New Features
40749, Combine
33493, Linear Regression for all countries method 1
7384, For convenience I concatenate all matched passengers into one DataFrame, merg_all
9407, Neural Architecture Used
40419, Number of features
30313, Load dataset
30623, Pclass is categorical, as is the male-vs-female Sex variable
7532, Let's take a look at the Fare column; maybe it wants to tell us something
33872, Lasso
16655, Cabin
16153, Cabin
20080, Top Sales Item
24253, Cabin
1155, Gradient Boosting Regression
1822, A much-anticipated decrease in mean squared error, and therefore a better prediction model
41775, Model Building
3342, Using the median approach for Fare, as it also has only a few missing values
25779, Using these features we can create a new feature, Family
42723, DAYS_BIRTH has a fairly high correlation with the target
12675, Dropping old columns
2885, Year Built Features should have nice correlation with SalePrice
7463, Evaluating Accuracy of our model
3688, Normalise through Scaling
32480, Now that we have probabilities, we want to remove the possibility of predicting a product that a user already owns
27283, Text Based Features
27920, Train and Validation Data Set
7643, combine train and test
39166, In fastai the default is to use pretrained models. If you don't want to use a model pretrained on ImageNet, pass pretrained=False to the architecture like this
27153, ExterQual Evaluates the quality of the material on the exterior
22062, Slang wording typos
36654, In Gaussian filtering, a Gaussian kernel is used instead of a normalized box filter
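A minimal OpenCV sketch contrasting the two filters on a toy image:

```python
import cv2
import numpy as np

img = (np.random.rand(64, 64) * 255).astype(np.uint8)  # toy grayscale image
box = cv2.blur(img, (5, 5))               # normalized box filter: plain mean
gauss = cv2.GaussianBlur(img, (5, 5), 0)  # Gaussian-weighted mean; sigma derived from kernel size
```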
10535, Missing values
23516, There are 4 elements in the class
40477, Polynomial Regression
40985, Passing a dictionary to agg to specify operations for each column
41843, Data Preprocessing
40034, Cross entropy loss
31524, Printing the confusion matrix for the same
1911, Missing Value Imputation
1904, Variable Identification
7529, xgboost optimization
15433, let s have a look how a Logistic Regression classfier is performing
32764, Before we move forward, we first need to check the dataset for anything wrong and make sure the labels are correct
27968, Reducing for train data set
6434, loc iloc
20604, Sex
31310, Missing Values
16687, Even though this is a very simple plot, we find that women from class 1 and class 2 have a higher survival rate and men from third class have a very low survival rate
26049, We then define device
7339, Random Forest
20166, Keeping 90 of information by choosing components falling within 0
9955, ML Models
3714, Alley
21839, XG Boost Model and Validation
12301, TicketGroup
6282, The final thing to do is create a flag indicating whether a passenger's ticket had a prefix or not
6273, Fare
32212, Add lag values for item cnt month for month shop item subtype
14089, Feature Selection
19007, Look at the performance of the top 5 parameter choices
29036, Logistic Regression
1067, Averaging Regressors
21612, Shuffle rows of a df with df.sample
16236, Using MulticlassClassificationEvaluator we get the accuracy of our model
18656, Test Input Pipeline
27261, Use XGBRegressor as Second Level Regressor on First Level Features
5114, lets generate some plots related to dataset
12597, Function for model performance using AUC
3355, The only downside is that we cannot tell whether a correlation is negative; if you want that, remove abs from the code
31622, Random Forest
29171, Check for missing values again
23820, Outlier detection and removal: a bit of cleaning
14598, Embarked
16233, Now we fit our pipeline and create a model; this is done by an Estimator. An Estimator abstracts the concept of a learning algorithm, or any algorithm that fits or trains on data. Technically, an Estimator implements a fit method, which accepts a DataFrame and produces a Model, which is a Transformer. For example, a learning algorithm such as LogisticRegression is an Estimator, and calling fit trains a LogisticRegressionModel, which is a Model and hence a Transformer
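A hedged PySpark sketch of that pattern, assuming a `train_df` DataFrame with `features` and `label` columns already exists:

```python
from pyspark.ml.classification import LogisticRegression

lr = LogisticRegression(featuresCol='features', labelCol='label')  # an Estimator
lr_model = lr.fit(train_df)       # fit() returns a LogisticRegressionModel (a Model)
predictions = lr_model.transform(train_df)  # the Model acts as a Transformer
```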
6724, Histogram
10278, One of the cool things about random forests is that you can get an assessment of which features contributed the most to the predictions
24656, Data window for forecast
40703, Predicting output for test data
7145, Gradient Boosting Classifier
38638, we get the outputs from the last fully connected layer
19873, look at the boxplot of age feature
27769, Adjust the bounding boxes
22213, Imports
17266, Observation
36267, Look into relationships within the dataset
36715, Final Submission
2119, Hyperparameter tuning
14994, Target Feature Survived
33862, Fititng Linear SVM with hyperparameter tuning
14514, Observations
4620, We have 3 types of data in our dataset Integer Float and Object
14976, Filling Age missing values of training test data set with Median
5162, Random Forest Regressor
24390, check if we have any NaN values left
43146, Sources of Data
35118, Creating NLP Augmentation pipeline similar to Albumentations in Deep Learning
10874, Modeling
14430, go to top of section engr
28176, Sentence Boundary Detection SBD
26875, target encoder LabelEncoder
13661, Family Size
14569, Feature Scaling
16132, Build Model
2477, Scaling features
11308, Inspect Data Frames
39283, Discard irrelevant features
34391, Clearly there are differences in the occurrence of crimes across districts
19571, training
29113, So far this is what I have tried
29606, We can do the same with the t SNE algorithm
8780, Cabin
10129, Support Vector Machine Classifier
27639, This may be the time to try to train a neural network on the top 25 fields most correlated with logerror
13038, Fare
20700, How to Accelerate Training With Batch Normalization
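A minimal Keras sketch of where the layer goes (the layer sizes are illustrative, not a specific author's architecture):

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Activation, BatchNormalization, Dense

model = Sequential([
    Dense(128, input_shape=(784,)),
    BatchNormalization(),            # normalize pre-activations per mini-batch
    Activation('relu'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
```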
14389, Most of the Titles are Master Miss Mr and Mrs
3176, Trying to use all available CPU cores with swifter
36043, Create the following factors to get an idea of the effect of population, population density, median age, and urban population
5580, Model evaluation
23732, Survival rate is highest among middle aged men and women
34845, Ensemble CNN
18337, SUMMARY FROM GRAPH
8975, OneHotEncoder
9765, Feature Importances
29970, LSTM model baseline
41190, remove these columns from merged data
36551, Baseline submissions
18815, Base models
15801, Models
12097, Replace Missing
40065, There are both negatively and positively skewed columns
18178, Visualization utilities
7633, Boost Models
13597, check unique categories for each feature
40840, The outcomes in group 27940 changed only once in the timeline
30690, Attempt without an autoencoder
4533, Numpy
25819, We also have some null values in the dataset
34002, season
22238, Gender
26722, For TX
33290, Fare Encoder
12698, To better understand what we are now working with, the list size and values will be printed below
29169, Exterior2nd Fill with most frequent class
2374, Area Under Curve AUC for binary classification ovo and ovr strategies
24693, Training
19721, Preprocessing Data
306, Light GBM
41832, Ensemble models
950, Takeaway from the Plots
1212, Observing Sale price histogram
6480, Pivotal Features
38005, we have the data about sell prices for all items in this store
14132, We decide to assign the N (unknown cabin) values based on their fare
32808, Level 2 LightGBM
35115, Submit
29590, The final bit of the data processing is creating the iterators
42626, Difference between Lockdown Date and First Confirmed Case Date
1119, Embarked Missing Values
42536, Here I did all the steps for the test data as I did with train
14635, The first step can be to write a function that adds titles for the passengers from the Name column based on our findings in the EDA
20538, Linear Regression with ElasticNet regularization L1 and L2 penalty
27747, Missing data for train
22176, stack both the dense and sparse features into a single dataset and also get the target variable
29039, Submit
37740, A quick glance reveals many columns where there are few unique values relative to the overall 1902194 records in our data set
4361, There are lots of homes with a value of 0 for the BsmtFinSF2 feature
20338, Convert single channel grayscale images to 3 channel RGB
8665, Fillna and Feature Engineering
25019, Lung segmentation
13706, set up our cabin only dataset for use by a random forest classifier
27366, the shop name starts with a city name
38834, Prepare submission
34959, Now that I have my clusters, I can attach them back to my original dataframe
26806, Dataset
8727, Sale Price and Overall Quality
22274, The most important part of each value is what cabin letter they are in
32751, We can join the calculated dataframe to the main training dataframe using a merge
3941, Combining train and test data to make imputing missing values easier; locating missing values
17430, For my idea this means
34286, Feature Reduction
23444, Line plot for all continuous values in file
13049, Creating a Model
39150, perform normalization to make the CNN converge faster
32653, Replacement strategy
17444, just for today
869, For passengers in Age bin 1, all males in Pclass 1 and 2 survived
28125, Data Cleaning
28656, Since this feature is based around local amenities, it is understandable that having more desirable things nearby, like parks, matters
19281, Inspect the structure of our model
31540, Categorical Features
18009, Prediction and submission
26977, Submission
259, Plotting Clusters
41061, Train XGBoost
10447, Convert Categorical Variables into ordinal numerics
8746, Train Validation Split
3449, Decision tree based imputation of missing Deck values
26519, Visualising the distribution of each product by age by boxplot
23078, Outliers
10624, do some more EDA
40003, Missing values
35878, Final prediction
26846, Second batch
38003, All info about a single store
37089, The shapes look familiar
4759, heatmap is a good way to understand correlation
35076, Performing the second step of the training process
27568, ps ind 10 13 bin
17725, In the test data there is 1 empty value in the Fare feature
2981, Hypothesis Testing
12702, The mean and percentile breakdown indicates multiple features converging around the 30 mark which perhaps isn t surprising
3321, The main reason to use seaborn on top of matplotlib is that it creates more attractive and informative statistical graphics
32583, Automated Hyperparameter Optimization in Practice
38692, n images
16895, New Feature Title
5664, Create a Dictionary to map the Title s
3653, Dropping redundant features
28533, BsmtUnfSF
27504, First Import Required Library
16708, collect our splits
23608, Training Function
17001, Explore Dataset
6126, Time to bath not bass
14148, Voting Classifier
24821, BERT Text Encoding Sample Example
5950, Loading the data files
42563, Another step that many others have already done
40981, Cutting values into 3 equal buckets
1696, Detecting Missing values
27193, we compile our Neural Network
21614, Concatenate 2 column strings
6009, Use the target that has already been scaled and transformed
35434, Split data into features pixels and labels numbers from 0 to 9
37338, load weight and Prediction
19908, Days
10417, Permutation Importance Importance of various features
10958, Data types
6314, Support Vector Machine
38093, PREDICTION
18325, test one more set of parameters
2364, Sklearn Voter Pipeline
23735, Title Feature
28179, Using this technique we can identify a variety of entities within the text
31856, Final preparations
17041, One hot Encoder
21546, our final model becomes
20258, Variables
27645, XGBoost otherwise known as eXtreme Gradient Boosting is a great resource to train gradient boosted decision trees fast and accurately
43067, Now check the distribution of max values per row for the train and test set
27155, Category 8 Heating and Air Conditioning
31478, Generate test predictions
12703, Now that we have a complete view of Age, it makes sense to visualise it
40788, Even after averaging, the within-week structure is lost when we average over all places
39038, Anyway grouping by lon lat may also give some problems
30656, It seems we didn't remove all repeated topics, but we did reduce the number of unique values from 222 to 186
4739, Missing values in each row
3818, We know the mean age on the Titanic was about 29
26017, Home functionality Assume typical unless deductions are warranted
16462, The more you pay, the more security you get
19376, Machine Learning Algorithms
28203, The Corpora
12185, Training models
21441, Mixture w OverallQual and OverallCond
6752, Checking Skewness for feature SalePrice
23207, Advanced Ensemble Methods
24002, Blending Models Ensambling
28582, 2ndFlrSF
23568, try out the model
17047, After submitting each of these solutions to Kaggle, I discovered that cross-validation scores are higher than on the public leaderboard, meaning we are overfitting to the training set
22701, Random Vertical and Horizontal Shift
14897, 3D surface plot and contour plot to visualize the relation among SibSp, Parch and Age
4065, The average silhouette score is maximal when K = 3
12086, Split data in train and validation 80 20
23738, Fare per Person Feature
13912, Before we proceed further with Decision Tree we need to do some cleanup
21467, Parametrization
15086, Bagging Classifier
4174, The variable Age contains missing data that I fill by extracting a random sample of the variable
9012, Therefore we take the average quality of pools with around the same pool area, and manually set the missing pool quality to that average
34174, The UpOrDown variable
29596, To prepare to use the range finder we define an initial very low starting learning rate and then create an instance of the optimizer we want to use with that learning rate
27510, fitting train and test data
21415, Feature primitives: basically, which functions are we going to use to create features? Since we did not specify any, we will be using the standard ones (check the docs); there is an option to define your own or to select just some of the standard ones
3222, A pie chart is a circular statistical graphic which is divided into slices to illustrate numerical proportion
13930, Exemple Wilkes Mrs James Ellen Needs
10461, It would also be interesting to understand strong correlations between attribute pairs
36067, Predict Test Data
21025, Visualise the data
24882, No one else with that ticket
8973, fill values according with correlation
3858, split training data for crossvalidation
31012, I use ImageDataGenerator from keras to augment the images
1801, Assumptions of Regression
25265, Data Pre Processing
1819, we have calculated the beta coefficients
23277, Public LB Verification
26001, Predicting the category of San Francisco crimes
15553, Logistic regression
28519, TotRmsAbvGrd
6155, Stacking on training set
36037, Correlations between features and target
9269, To check if any feature type is misclassified
1279, Since 72
19612, Handle missing data
28954, I'll break the categories up roughly into quartiles
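pd.qcut is one way to do that; a toy example:

```python
import pandas as pd

fares = pd.Series([7.25, 8.05, 13.0, 26.55, 71.28, 512.33])  # toy values
quartiles = pd.qcut(fares, q=4, labels=[0, 1, 2, 3])  # approximately equal-sized bins
print(quartiles)
```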
28491, Clearly these three columns can be converted to type int32
26873, Let's try to visualize some feature maps for our convolutional layers
12014, It clearly implies that decision trees have a tendency to overfit the data quickly
38972, train list total length is 7613
22838, Generating the product of shop/item pairs for each month in the training data
29681, Shuffling data and splitting into training and validation set
34735, T SNE applied to Latent Semantic LSA space
14796, Decision Tree Classifier
33261, Build CNN Model
25401, It can be a problem because a neural network computes a lot of sums
15513, This is highly incomplete
28632, GarageFinish
8952, Fixing Pools
12423, We are ready to submit this vanilla model
24698, create two evaluators to compute metrics on train test images and log them to Tensorboard
11691, The accuracy of Random Forest classifier as reported by Kaggle is 76
19038, Specify the source
37810, Read the files
15689, Create two functions to plot the percentage of total counts and the percentage of survived/dead passengers
31286, Prophet
33029, Prediction and submission
11756, Hyperparameter Tuning
4962, Loading Libraries
29224, For Training Data Set
24232, Start training the model
29183, Define IVs and DV X y
21645, Create a datetime columns from multiple columns
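pandas handles this directly when the columns are named year/month/day:

```python
import pandas as pd

df = pd.DataFrame({'year': [2015, 2016], 'month': [1, 6], 'day': [20, 15]})
df['date'] = pd.to_datetime(df[['year', 'month', 'day']])  # column names matter here
print(df)
```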
19340, Top Questions
37628, The ToFloat transform divides the pixel values by the max value to get a float output array where all values lie in [0.0, 1.0] (see the documentation)
16489, Dealing with MISSING VALUES
40827, Biivariate analysis on continuous data
26241, Preparing PL Model 1
12914, Check for missing values
5681, Fill the missing Age values by calling the routine fill missing age
29589, The filters are still 2 dimensional but they are expanded to a depth of three dimensions inside the plot filter function
15827, for scaled data
210, Libraries and Data
15064, Data exploration
39730, Sex
19289, Crop function
43242, Training
12356, BsmtUnfSF Unfinished square feet of basement area
7925, Select train and test dataset
10943, Structure of test data
1203, The function below was used to obtain the optimal number of boosting rounds
21360, Only minimally. But that's fine: if they were equivalent, we wouldn't need this additional target at all
3001, Linear Regression Model
18679, Evaluating for associated model on dev data
12949, Individual Classifiers
31539, replacing with median value
26863, This last filter acts the same way as the two previous ones; it's a gradient
35179, Final model and submission
20743, GarageFinish column
8046, Relevance of features target
13405, Confusion matrix
20051, Shop IDs 0, 1 and 11 are not in the test data; I think we should merge the shop IDs to solve that problem
21646, Show memory usage of a df and every column
24667, Make Predictions
10993, Import necessary packages br
3635, Embarked
34837, Convert the categorical columns to numerica column using the one hot encoding
39215, Create the neural network
34010, humidity
40647, TFIDF W2V
41172, test samples
11147, Cross Correlation chart to view collinearity within the features
43257, Joining the DataFrames
7448, Examining the Distribution of the Target Column
7819, KFold
31263, Sales data
38105, Compiling and Fitting the Model
41938, define helper functions to reset gradients and train the discriminator
31375, Add noise
16855, Visualization
21492, LOAD DATASET FROM DISK
15974, Fare Feature
3297, Splicing data and visualizing the number of missing values
5517, Random Forest Classifier
14871, we know that if the Alone column is anything but 0 then the passenger had family aboard and wasn t alone
40868, Optimize Lasso
28666, LandSlope
39874, total and garage
8838, Add features
43059, Distribution of mean and std
36493, Start each fold with different groups
39174, index 27476
30598, Correlations
13119, ROC and PR Curves
11769, Loading Data
28709, Start the TensorFlow portions and 1 hot encode the labels
26465, The Keras library provides a module for generating mini-batches of augmented data that we can feed directly into our CNN model; this is convenient, as we just have to feed in a dataframe with the relevant file paths and labels
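A hedged sketch of that generator; `df` (with filename/label columns) and `img_dir` are hypothetical placeholders, and the augmentation settings are illustrative:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1/255., rotation_range=10,
                             zoom_range=0.1, width_shift_range=0.1,
                             height_shift_range=0.1)
train_gen = datagen.flow_from_dataframe(df, directory=img_dir,
                                        x_col='filename', y_col='label',
                                        target_size=(28, 28), batch_size=64)
# model.fit(train_gen, epochs=10)   # model assumed to exist
```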
5530, Lone Travellers Feature
4985, Surviving rates per feature
37797, Train Test Split
25166, Asking Some Basic Question To Our Extracted Feature
5109, Feature Transformation
3411, Create new variables related to family size
42039, where
35834, Finally, check whether there are any more missing values
31079, LotFrontage Linear feet of street connected to property
35431, Creating the output csv file
6335, Good news: most of the features are free of missing values
41111, DateTime Parsing
18126, Cross validation can be used to find optimum hyperparameters
27911, Examine the numerical features that failed
8532, Target Variable
20664, We left the MSZoning parameter here
14411, I'll check the accuracies of all the models with K-fold cross-validation, choosing the best hyperparameters with GridSearchCV
5533, Deck feature
22101, Convert our Test Data to Tensor Variable
43384, Check that we have as many weight vectors as classes
42939, Removing data we no longer need
3855, Feature Deck
15609, Estimate missing Embarkation Data
7356, Drop irrelevant features
25200, We have completed all the preprocessing; it's time to train our model
14308, Selecting Feature from training set to feed to the neural networks
13502, Baseline model
17953, K nearest neighbors Model
41968, Map the Items
10611, Age
17909, so there are 2 NaNs to take care of in data train
12436, Building pipeline
28856, Seed All
36721, Change the first convolutional layer to accept single channel input
35883, Distribution of the response
4294, Add Neighborhood Dummies
42286, Display Data Augmentation
16563, Summary based on visuals
32655, Skewness may be interpreted as the extent to which a variable's distribution differs from a normal distribution
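A quick way to quantify that, and to see how a log transform pulls a right-skewed distribution back toward normal (the toy data below is generated, not from the dataset):

```python
import numpy as np
from scipy.stats import skew

prices = np.random.lognormal(mean=12, sigma=0.4, size=1000)  # right-skewed toy data
print('raw skew:   ', skew(prices))
print('log1p skew: ', skew(np.log1p(prices)))  # much closer to 0
```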
10649, SibSp and ParCh
13956, Test data
41185, look into missing value counts in categorical columns in training data
992, Let's follow a slightly different route: we concatenate our train and test datasets first
21225, As we load in our data we need both our X and our Y
689, Encoding the categorical features below is required since the machine learning algorithms work with numbers and not with strings
27740, Missing data
9947, One Hot Encoding of features
4935, Pre processing and Feature Engineering
5067, To get a baseline for all other experiments, we set up a dummy regressor that simply predicts a constant that we define
27929, Data pipeline
24884, This fancy function pulls the family name out of each person's name and makes a new Family column
193, Lasso Regression
33500, Andorra
15547, filling
24964, Accuracy score
3965, Lasso Regression
8236, Exploiting a datetime feature
2113, This is indeed a clearer signal and we should consider using this feature and dropping the other bath features
9515, Age
2462, Fisher Score chi square implementation
40031, Dataset
11288, Numerical Features Replace missing values with 0
518, loading the data
10804, we have now all data in Embarked column
13572, Keep thinking about Familys
3551, Alright, the first thing we conclude is that we should take our mothers' advice seriously
38672, Perceptron
28695, Treating skewed features
2966, A slight variation of the K-fold cross-validation technique is made, such that each fold contains approximately the same percentage of samples of each target class as the complete set (or, for prediction problems, the mean response value is approximately equal in all the folds)
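This is exactly what sklearn's StratifiedKFold does for classification; a tiny sketch on made-up labels:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.arange(20).reshape(10, 2)
y = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])   # imbalanced toy labels
skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
for train_idx, val_idx in skf.split(X, y):
    print(y[val_idx])    # each fold keeps roughly the same 0/1 ratio
```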
26589, TASK EXPLORE SALES TRAINING DATA
24393, Predicting values
40719, Compiling Model
30983, First we can plot the validation scores versus the iteration
29885, Analysis of punctuation marks repetition in text
1362, The next model Random Forests is one of the most popular
40390, Defining the dataset
33489, Italy
2672, Mutual Information MI
39128, Print shapes of each dataframe so we don t make mistakes when fitting the Neural Network
38654, Fare
24135, start with Gaussian Naive Bayes Model
41834, Submit task
23215, bayesian optimization part
21840, Write output CSV file of predictions from test data for contest submission
31942, Here are some examples of the predictions made
42408, Random Forest Classification
19893, Split train and test data
42791, Observations
5083, We again have a look at the basic statistics of sales prices in our training data in order to compare these to stats of predicted values
18076, Area of bounding boxes
34832, Drop Column with most null values
29702, Getting Predictions
8089, Identify the best performing model
18256, Fine tune the complete model
3197, Not much left; I don't want to think about it anymore
21624, Ordered categories with pandas: from pandas.api.types import CategoricalDtype
13391, Correlation of features with target
7353, Correlation more than 0
26867, Learned filters visualisation
39405, Getting the data
33439, XGBClassifier
26808, Loss
13674, We have nulls in Age Cabin and Fare in the validation dataset
22861, ISOMAP
39745, Imputing Age
8487, Passive Aggressive Regressor
39387, Replace null values with median values
20943, Import necessary libraries
40138, Create or retrieve dataset
24544, Since 6 out of 160 channels account for about 87%
3612, Data Cleaning
11307, Importing the input files
16008, Model
33800, All three EXT SOURCE featureshave negative correlations with the target indicating that as the value of the EXT SOURCE increases the client is more likely to repay the loan
8571, Grouping Rows by Time
133, Grid Search on Logistic Regression
25908, Generating tweets about real disaster
7151, HIERARCHICAL INDEXING
15521, Modeling and Prediction
7378, Merging the unmatched passengers using surname codes and name codes
16481, Looks like 1098 people did not survive, while around 684 people across both datasets survived
29377, Data Augmentation
303, need to scale in case we want to use linear models
23448, Working day
8543, Missing Over 50%
26927, Now we are up to the key method: preprocessing comparison
31682, The classifier achieves good accuracy on images with no noise
27003, Treated text
10238, To get the best parameters for Random Forest, let's do a grid search for the Random Forest model
30756, Final set of parameters
1826, Lasso
35455, checking the correlation between features and Target
13052, AdaBoost classifier
451, Creating train and test data
32976, Let's create one feature variable: status in society. This feature can be derived from the Name feature, from titles like Dr, Rev, Col, Major, etc.
37278, Using the gensim function to load in the embeddings ended up being much more time efficient than most of the things available in the public kernels
33046, Neural Network with two layers using neuralnet
26755, Prepare data
8876, Missing Data Handling
41850, First Patient Merged Output
19631, HOW THE ALGORITHM WORKS AND WHY MY FEATURES DON'T POSITIVELY IMPACT IT
7112, Dealing with string values
35068, Complexity graph of Solution 4 2
3030, Created 3 models (RidgeCV, LassoCV, ElasticNetCV); these are linear models with built-in cross-validation
17709, do some visualizations
20875, In contrast to other CNNs, Inception Neural Networks allow more efficient computation through dimensionality reduction with stacked 1x1 convolutions
7215, Missing Values Imputation
32981, Preview
23835, There are different numbers of categories in the train and test datasets
9132, Therefore we take the average pool quality of pools with around the same pool area, and manually set the missing pool quality to that average
28768, Saving
43370, Changing labels to one hot vectors
27494, Networks
25409, where N is the number of examples, t is the true labels, and y is the predicted labels
9740, Features Visualization
29959, Prediction
12370, Defining what to replace with
6654, Corelation of all the attributes by Heatmap
26656, Applying Deep learning model to predict the image
10984, To the next Step
41395, CODE GENDER
28180, Similarity
22683, Our model
15973, As this feature is alphanumeric and depends on the embarkation port, prices, and other features, we drop it
3870, The outputs of both impute mode cols and impute NA cols are concatenated using make_union, a helper for FeatureUnion in sklearn that concatenates the outputs of multiple pipelines/transformers; check out the shape of the categorical data pipeline
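A hedged sketch of make_union; the two SimpleImputer branches stand in for the kernel's impute_mode_cols and impute_NA_cols transformers, and X_categorical is a hypothetical categorical feature matrix:

```python
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_union

# Branch 1 fills missing values with the column mode, branch 2 with the
# constant marker "NA"; make_union (the functional shortcut for
# FeatureUnion) runs both and concatenates their outputs column-wise.
impute_mode = SimpleImputer(strategy="most_frequent")
impute_na = SimpleImputer(strategy="constant", fill_value="NA")

union = make_union(impute_mode, impute_na)
transformed = union.fit_transform(X_categorical)
print(transformed.shape)  # twice the original number of columns
```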
14086, We now drop the Cabin Column as it contains too many null values
4128, We take the weighted sum of the predictions of the three models to form an ensemble model and submit the ensemble predictions in a submission
42930, Saving the target and ID code data
7152, STACKING and UNSTACKING DATAFRAME
6794, Naive Bayes
100, Passengers who traveled in small groups with siblings/spouses had better chances of surviving than other passengers
38911, Validation Without Some Features
23958, First Few Rows Of Data
36117, Let's have a look at our new cleaned data
3478, look at the score and the parameters for the best model
11122, Model Testing Only catagorical Featues
20196, Chi2 Test
5961, Fare and Survived
39679, We save the final image produced by our model as a
7906, th Model RidgeClassifier
19313, Evaluation prediction and analysis
24023, How to form evaluation sample
15083, Gradient Boosting
41665, The training dataset is split into training and validation samples that are to be used when assesing the Random Forrest models built
27649, Housing Prices
2830, combining the 3 models to predict on the test set
1389, look this keys values further
32631, Making the submission
24762, Again i ll be using RobustScaler to scale all features before initiating the ElasticNet model
10544, Ridge
25829, Loading Libraries
36374, check the number of non duplicates vs the number of duplicates
8586, Deleting not useful features
36369, Create submission file
21574, Moving columns to a specific location
26643, check how many topics and docs are described in this file
3204, Feature Selection from EDA
5024, Errors
20663, HERE WE BEGAN WITH SOME INCREASE AS IN THE CASE TO MODIFY THE DATA GIVEN AS NAN
13888, Family members
20190, Continious Featuers Box Plot
24938, StandardScaling and MinMax Scaling have similar applications and are often more or less interchangeable
16955, The passengers age distribution indicates that the majority of passengers are young in their 20s and 30s
39683, Text Cleaning
24146, Clean Organize
39304, XGBRegressor validation
34074, Age is not correlated with sex but it is correlated with parch sibsp and pclass
35667, Removing features that have mostly just 1 value
33998, count
32907, Here we defined the input shape
40636, Text feature
3266, Visualizing some highly correlated features to get better understanding
40029, There is indeed a big group of test images that is not present in train
31307, The Date column is of the object type we need to convert it to DateTime
22677, Setting the Bert Classification Model
20522, SalePrice relationship with categorial features
43261, Splitting the Training and Validation sets
9481, Classification Report
43241, Defining the model
21154, Not bad; already a great improvement
40742, Larger CNN
30683, Make a special function for LightGBM
11971, Numerical Features
23917, LDA
36797, Negation
41347, Visualization
7522, Model 3 A univariate quadratic function y ax 2 bx c
35353, Preprocessing data
10605, Step 5 Fit Your Model Predict Know accuracy on Validation Set
4390, From the histogram of the test data, there exists a home with a larger MiscVal value
40736, Flatten
18280, Hyperparameter tuning using RandomSearchCV
27859, Feature importance
1548, The goal of this section is to gain an understanding of our data in order to inform what we do in the feature engineering section
1106, Gradient Boosting Classifier
32019, There is one Ms so we assign Ms to Miss category and NaN to all remaining rows
6149, Divide full dataset into train and test subsets again
22007, Use the next code cell to label encode the data in X train and X valid
25467, Back propagation
12204, In other words with the use of the sklearn Pipeline we want to sequentially apply the transformations in the given list
7487, 4c Age Groups
43025, Do some final cleanup and split into test train
41585, Similarly unffreezing the last 2 blocks of the VGG16model
42214, Next up, I add in a Max Pooling layer
23114, Ticket is also an alphanumeric type variable
19775, Set batch size not too high
16219, Make a new column which stores the number of persons in a family, and another column which tells whether the person is alone or not. Then we visualize them so that we can check whether the survival rate has anything to do with the family size of the passengers
43391, The gradient travel guide natural fooling targets
26298, NOTE These plots are more useful when trained for more number of epochs
20472, Days ID publish distribution
3878, ColumnsEqualityChecker returns a FunctionTransformer that wraps the function equalityChecker; any arguments to equalityChecker can be passed in the kw_args of FunctionTransformer
33792, Well, that is extremely interesting: it turns out that the anomalies have a lower rate of default
7857, Feature Pruning
38970, First, let's find the maximum length (number of words) in the text column of train and test
1925, Pool
21253, My Data Processing and Binning Functions
7140, Embarked Title
28706, Ensemble
12295, Age
25639, Ensemble modeling
36404, And the feature importance
4783, Having a look at the correlation of different numerical feature with SalePrice
6288, Model comparison
13595, Check Accuracy
11026, Creating an artificial feature by multiplying age and class
41779, Models created with Keras and most other deep learning frameworks operate on floating point numbers
13200, When we use get
251, Model and Accuracy
16663, Exploring Correlations
5229, Filter
3270, Updating Garage features
11416, Use Case 8 Sales Projections SunBurst Chart
35192, we have got the list of top 30 important features
42617, Before we train the network we need a function that creates randomized batches of training data so that we can implement mini batch optimization
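One plausible way to write such a batching helper in NumPy (a sketch, not necessarily the kernel's implementation):

```python
import numpy as np

def random_batches(X, y, batch_size=64, seed=0):
    # Shuffle the sample order once per epoch, then yield contiguous slices.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

# for xb, yb in random_batches(X_train, y_train):
#     ...one optimizer step per mini-batch
```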
20657, FROM THE TWO FACTORS SHOWN TO BE CORRELATED REMOVE ANY ONE OF THE FACTORS
16681, Describing Data
11136, SalePrice is the variable we need to predict let s do some analysis on this variable first
32403, K fold Split
8013, Stacking Our Model
24137, Gradient Boosting Model
3970, Stacked
22100, Import test data
21423, TotalBsmtSF
6778, The chart confirms that 1st class passengers were more likely to survive than other classes
16203, The next model Random Forests is one of the most popular
41118, Geographic Location by Folium and Cluster by KMeans
20390, K Nearest Neighbors Model
40782, The standard way to avoid overfitting is called L2 regularization; it consists of appropriately modifying your cost function, as sketched below
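The pair of formulas that originally followed ("from ... to ...") did not survive extraction; the standard presentation of L2 regularization for a cross-entropy cost, which this entry most plausibly referred to, changes

$$J = -\frac{1}{m}\sum_{i=1}^{m}\Big[y^{(i)}\log \hat{y}^{(i)} + \big(1-y^{(i)}\big)\log\big(1-\hat{y}^{(i)}\big)\Big]$$

into

$$J_{\text{regularized}} = J + \frac{\lambda}{2m}\sum_{l}\big\lVert W^{[l]}\big\rVert_F^2,$$

where $\lambda$ is the regularization strength and $W^{[l]}$ are the weights of layer $l$.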
11624, Normalization Standarization
41746, MLP ReLU SGD
15379, Much better
18301, Let's check which items have an item price of 0
25488, Fill missing values
31688, Plotting predicted class along with the images
11900, Data split
2027, Submission
26539, Like I said, the model is nowhere near convergence. The plot is for 30 epochs trained on my system. My advice: train it for at least 50 more epochs with early stopping
40133, Fitting models to the combined dataset with custom pipelines
5990, Missing Value
1376, lets start explore the data
33606, Data Preprocessing
20263, Logistic Regression
5572, Data Scaling
7113, Dealing with correlations
19406, Checking the Performance
8523, Submission
17811, prepare a simple model using Random Forest
23199, There we have it: we're down to only 2 features from the original 47. Now we want to calculate how much variance we're able to extract from these 2 components
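A short sketch of how that variance check is usually done with scikit-learn's PCA; X_scaled is a hypothetical standardized feature matrix:

```python
from sklearn.decomposition import PCA

# Reduce the original 47 features to 2 principal components.
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)

# Fraction of the total variance captured by each component, and their sum.
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())
```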
945, Evaluate Model Performance
4938, Handling categorcial missing data
29393, To better analyse our data, let us plot its performance as a function of the number of epochs
11275, Looking at the values it looks like we can separate the feature into four bins to capture some patterns from the data
39976, Data Loading
27521, Submission
41080, LSTM with glove 6B 200d word embedding
7736, Ensemble methods are commonly used to boost predictive accuracy by combining the predictions of multiple machine learning models
3149, After tuning I got the best-fit parameters and a model accuracy which is quite good, and submitted for a new Public Score. Let's do the CV and tuning with some other models and check the accuracy
15038, Like resident income, the distribution is usually skewed to one side; compressing it with a log transform could make it more normally distributed
28141, Entities Extraction
13190, Let's take a look again at the boxplot relation between Fare and our target, Survived
25487, Define which of these training features are categorical
28902, Same process for Promo dates
29218, Implementing The regressors comparing accuracies
35126, Handling Outliers
16611, Feature Embarked
36899, Adam
23414, Define two useful callbacks: one for model checkpointing and one for managing the learning rate policy
24786, Oof predictions
13602, Similary we fit it onto training dataset
39297, Feature analysis
919, One Hot Encoding
23211, Now we have OOF predictions from the base (level-0) models and we can build the level-1 model. We have 5 base models, so we expect to get 5 columns in sTrain and sTest. sTrain will be our input features to train our meta-learner, and then predictions will be made on sTest after we train the meta-learner. This prediction on sTest is actually the prediction for our test set xTest. Before we train our meta-learner, we can investigate sTrain and sTest
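A sketch of the level-1 step described above; sTrain, sTest, and xTest are the names from the entry, while the construction from oof_preds/test_preds lists and the choice of LogisticRegression as the meta-learner are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# sTrain: (n_train, 5) out-of-fold predictions from the 5 base models.
# sTest:  (n_test, 5) base-model predictions on the test set xTest.
sTrain = np.column_stack(oof_preds)   # oof_preds: list of 5 OOF vectors
sTest = np.column_stack(test_preds)   # test_preds: list of 5 test vectors

meta = LogisticRegression()           # the level-1 (meta) learner
meta.fit(sTrain, yTrain)              # yTrain: original training labels
final_pred = meta.predict(sTest)      # the prediction for the test set
```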
6711, Find Categorical Feature Distribution
4016, We use our own ridge regression algorithm
13153, Bin the Age and Fare variables now
39125, With Keras
10976, Linear regression elastic net
26969, DataLoader
42533, Extracting the day of the month
18277, TF IDF VECTORIZATION ON QUORA QUESTION PAIR SIMILARITY
31265, Wavelet denoising
19074, Survived
14458, go to top of section model
11873, Transform the Y target value to log, as stated on the Kaggle Evaluation page
11002, SibSp
28515, GrLivArea
6700, Changing Labels
1263, Get cross validation scores for each model
38269, Remove leading trailing and extra spaces
8687, Most Related Features to the Target
10103, we have to create a Machine Learning Model
24444, Visualization of data
5706, Utilities: for this categorical feature, all records are AllPub except for one NoSeWa and 2 NA. Since the house with NoSeWa is in the training set, this feature won't help in predictive modelling, so we can safely remove it
31518, we need to implement train test split and feature scaling
202, TheilSen Regressor
13378, Imputation of missing values in Age
4186, The missing SalePrice values are the ones we try to predict, as their count matches the number of rows in the test set
15931, Sex
35242, We cannot remove stopwords from this variable
7943, First try with a basic MLP
41849, ROI detection
16991, Combining classifer
26200, Reading and Loading the Dataset
39033, Test
25187, Data Preparation
974, Compare the models
417, Dealing with outliers
10385, Understanding our data
11781, Age
10029, RMSE on the entire Train data when averaging
27415, Visualizing and Analysis
42443, Features that belong to similar groupings are tagged as such in the feature names (e.g. ind, reg, car, calc)
8890, Model Training
5984, Voting Classifier
6085, Prediction
6615, Label Encoding
21580, Convert one type of values to others
29695, 4 kernels from every convolutional layer
4046, We can visualize the titles over a histogram
10816, The highest values came from Sex, ParchCat, and SibSpCat
11362, MSZoning (the general zoning classification): RL is by far the most common value, so we can fill in missing values with RL
24017, Some examples with predicted label
30822, Partial 50% overlapping
3996, Compare our implementation of the algorithm with the LinearRegression method implemented in sklearn
33768, Split Data into Train and Validation
29813, Vector Averaging With Word2Vec
22332, Convert Negative Word to its Antonyms
3257, A Comparison between Violin Plots and Box Plots
20198, Correlation
24006, Split labels and features of training dataset and convert to numpy array
37698, Gradient descent
21659, Split a df into 2 random subsets
38147, Inputs to MLBox
37730, Upsampling the data
24836, use SVM
42034, Groupby Count
38268, Removing punctuation marks
42997, Linear SVM with hyperparameter tuning
21329, Taxs value
291, Additionally there are many missing values for Cabin
832, new dataframes
30686, Make DataFrame
36544, Linear correlations
31842, etc etc
15437, the AdaBoost Classifier
21081, Missing values in the test and train datasets are in the same proportion and the same columns
32476, De duplicate the Data
29829, CNN
14557, Lets explore some real data on the combined data titanic
5026, Ridge Regression
36620, Data Manipulation
684, Preparing the data
4861, Top 20 variables correlated with SalePrice with score
16029, The scale goes too high because of too many Z values
25503, Use the next code cell to one hot encode the data in X train and X valid
35114, Find best threshold
26261, Plotting the confusion matrix
7389, the corrected dataset for all merged passengers merg all2 is obtained
37946, We have already noticed that we do not have many features available
30832, Is there any null value present in the train data
27935, With the model trained, I'll plot the accuracy achieved per epoch per dataset on a chart so that training progress is clearer
29820, Trained skipgram
10459, We'd like to know how well each input attribute is able to predict the target
37823, Remove Stop words
10621, Dealing with Cabin data
4368, The 2ndFlrSF feature is linearly correlated with the target SalePrice
731, Immediately two points jump out, both of them in GrLivArea, the most important feature
34655, Merging everything together
22976, Most stores are in very close competition
12524, Filling the missing values
19310, Evaluation prediction and analysis
34241, GPU use
28214, checking for difference between survived and not survived means
29842, Looking at the correlation
2878, Class and survivorship plot
33586, Running inference
7530, The Age distribution is positively skewed; we need more information to fill the missing data, so plot Age against Pclass
27528, Display heatmap of quantitative variables with a numerical variable binned
38137, We will be predicting this with SVM
19731, For each Category
26689, FLAG_DOCUMENT_TOTAL: the total number of provided documents
14407, Deleting Fare feature because now we have FareBand
31576, Target
38553, Tagging Parts Of Speech And More Feature Engineering
29960, Demonstration how it works
1541, Sex Feature
5577, Linear Regression
31610, STOCHASTIQUE GRADIENT DESCENT
35051, To begin with I try a fully connected NN consisting of a layer of 512 neurons followed by a dropout of 20 followed by another layer of 512 neurons and another dropout of 20 and finally a layer of 10 neurons with softmax activation
35567, 5 failure rate for 100K rows
24871, Neural Network
37042, Model training
26980, Save labels in a separate 1D array
18288, Tokenizing
10076, To rescale our data we use the fonction MinMaxScaler of Scikit learn
20781, Understanding the Data
32110, How to limit the number of items printed in output of numpy array
3697, Omit irrelevant columns
8116, Decision Trees
38931, concatinating both train and test dataset to convert categorical data into numerical data
35492, Creating Submission
36391, The NaN values have now been replaced by the median value for each category
11299, test out some common models for regression
2309, Pandas: how to apply a function row-wise (the example is adding strings)
14462, back to Evaluate the Model model eval
34852, Inspection of Class Balance
32346, No Of Storey Over The Years
37104, Feature Importance
14997, Outliers Detection
820, List of numerical features and their correlation coefficient to target
29607, We can also imagine an image belonging to a specified class
20797, Filling Missing Values in LotFrontage
26420, The best strategy to fill the missing values might be to use the titles to guess the age, since we are going to divide Age into two categories, namely under 15 and 15 or older
4300, Inference
5737, LightGBM
15947, Embarked and Title
5150, Categorical variables
10428, Comparison and merging of all feature importance diagrams
5926, how does standard deviation sigma affect data analysis
5532, Cabin feature
17564, Correlation between Variables
9190, Fare
22702, Load the Data into Tensors
30649, Machine Learning Model Selection and Submission
27, RandomizedSearchCV ElasticNet
1871, Loading and Viewing Data Set
35193, Feature Transformation
14239, Embarked Features
35057, In this solution I implement a Convolutional Network to replace my Fully Connected Neural Network from solution1
16446, Sex
2776, Light GB
33091, Linear regressor
38568, Feature Agglomeration
40866, Since a smaller root mean squared error is better, it looks like SVM is the best regression model, followed by Ridge, GB, and XGB. Unfortunately LR can't find any linear pattern, so it performs worst and is discarded
15443, Columns SibSp and Parch are very similar to each other in meaning and in correlation
38770, We are planning to train our model iteratively
5503, Imputing the Embarked
36455, Submission
22596, Test set
10052, We can drop Ticket feature since it is unlikely to have useful information
13218, Data Cleaning
3436, start by cleaning up the categories a little
37376, code to create submission file
18013, NaN by feature
17778, Multiple features visualization
31070, UNIQUE TECHNIQUE TO SEE MISSING VALUES
32669, The accumulation of features with zero as their predominant value may lead to potential overfitting
31329, Submission
42448, Metadata
40672, Histogram plot of Number of words
22349, Logistic Regression
33680, Difference Days
1044, Outliers detection
36006, Review Data
1089, have a look at some key information about the variables
9248, Filling Nulls for each column appropriately
32555, Simple EDA for Duplicates
4699, Missing LotFrontage prediction
15915, However some common surnames may be shared by people from different families
37360, SibSp vs Survived
37557, Load path
19442, As the pixel intensities are currently in the range 0 to 255, we proceed to normalize the features using broadcasting
11777, Deleting Unnecessary Variables
33736, One note on the labels
38477, Image Filters
23602, This is where the initial usage of gensim begins
26443, In general, the raw score might not be the best measure to evaluate the performance of a classifier, especially if the dataset is skewed
6878, started by importing our libraries
1895, LinearSVC Model
26310, we are going to pick some features for the model
23312, After merging, we have lots of missing values in item_cnt_month
35125, The columns CompetitionOpenSinceMonth, CompetitionOpenSinceYear, Promo2SinceWeek, Promo2SinceYear, and PromoInterval have too many NaN values
22388, I wanted to make some attempt on the y data
26993, If you apply lowercasing, you lose a bit of information for other embeddings
26280, Extracting numerical columns and fill all NaNs with 0 in our DataSet
41839, Meta Data
9628, Random Forest Best One Without Feature Engineering
6387, find the average value for Survived column for male and for female passengers
8463, Wrapper Methods
16943, This random forest model is way better than the single tree model when comparing the rmse score on train data
19044, We can also view the pixel distribution of the image
42151, Decoder
26710, Plotting sales distribution of stores across departments
13105, Naive Bayes
8542, MISSING DATA
36487, Load Embeddings
1630, Splitting into features and labels and deleting variables I don't need
27989, Chart analysis
27454, Replace Contractions
27474, TFIDF Features
25042, merge these product details with the order prior details
6250, Cabin
4725, for total number of rows and columns we use pandas
26349, These are the columns with the best correlation rate visualize them
30665, Checking the function on three samples
17654, Stacking models
2556, Let's prepare the data for H2O AutoML and select only the important columns
5551, Make Predictions
10779, Area Columns
36419, Finding Percentage of Missing Values
15863, Feature Importance
27605, To clean up the noise in the image some libraries from skimage are needed
20727, Exterior1st column
36278, Many different values; let's replace missing values with U for Unknown
2342, Random Forest
16730, by fare
36388, Train and measure the MAE with the custom encoder
41844, Data Cleaning
28522, GarageYrBlt
6275, Title
4017, Data Engineering NaN replacements
2971, Bivariate analysis
20612, Embarked
38755, The train accuracy is 82%
42215, I'll be adding further convolutional layers shortly; however, before that I define a Dropout layer of 0
37834, Final Submission File
10389, Visualizing the relationship between SalePrice and SaleType
15902, Fare
8554, Series are 1-D arrays; a Series is like a one-row DataFrame
315, Age
25873, Number of tweets according to location per class 0 or1
30564, Initial start only with XGBoost
10534, scaling
7405, Correlation analysis
39157, That means that for every image in the test set it predicts how likely each class is
17022, Name
36143, First we again have to define the dictionary
14684, Most of the passengers were from Southampton, which was the starting port of the journey
11732, Submit
33317, Eigenvectors and Eigenvalues
22113, Learning
2044, Defining Features in Training Test Set
34055, Fit Make the predictions
24983, Filling NaNs in numeric columns using mean for each column
14908, There are only 2 missing values in Embarked
7106, Support Vector Machine
20069, Converting class type to reduce memory load
2226, High Correlated Variables with SalePrice
36607, Training our model
14793, Reusable functions
40156, Continuing further with missing data
21493, Load Embeddings
32947, Check correlations
35464, Visualiza the skin cancer at Oral genital
35111, Creating keras callback for QWK
26735, Plotting sales over the week for the 3 categories
25770, this all are unique titles we got from names
4950, Remember when we combined the training data and the test data to make one big all data dataframe
13583, Setting X and Y
7770, Decision Tree
36915, Exploratory Data Analysis
3765, Almost similar to Ridge; let's try it on normalized data
42042, Insert a value for NaN
15799, Correlation
17375, Great
2945, check features with high correlation only
25173, Last Word Checks if last word is same in both Q1 and Q2
8136, Lasso, ElasticNet, Ridge, and XGBoost gave the best scores; let's test voting on them
14006, Feature Selection
19944, Filling missing Values
21331, bathroom features
19764, For explanation purposes, it makes sense to create a dataset containing the count of each word in each document
12293, Embarked
36259, Analyze Pclass
29854, look at the score of each hyperparameter combination tested during the grid search
16185, Create new feature combining existing features
20310, We have found a possible clustering for our customers
25735, check our model performance and visualize some data
37627, In PyTorch the best way to feed data to the model is with a dataloader. In particular, torch.utils.data.DataLoader is an iterator which provides features such as batching, shuffling, and loading data in parallel. In order to use PyTorch's dataloader we need to create a dataset first. The most flexible way to do this is by creating a custom dataset class that inherits from torch.utils.data.Dataset, which is an abstract class. The PyTorch dataloader tutorial tells us that a dataset class should override the following methods
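A minimal custom dataset along those lines; the class name, array inputs, and dtypes are illustrative assumptions:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class DigitsDataset(Dataset):
    """Wraps feature/label arrays so a DataLoader can batch and shuffle them."""

    def __init__(self, features, labels):
        self.features = torch.tensor(features, dtype=torch.float32)
        self.labels = torch.tensor(labels, dtype=torch.long)

    def __len__(self):            # required override: dataset size
        return len(self.labels)

    def __getitem__(self, idx):   # required override: one (x, y) sample
        return self.features[idx], self.labels[idx]

# loader = DataLoader(DigitsDataset(X, y), batch_size=64, shuffle=True)
```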
13062, How does the passenger distribution vary for the three ticket classes (1st, 2nd, 3rd)? Pull the slider to find out. The colors help distinguish the port of embarkation. Survived: 0 = No, 1 = Yes
9787, Checking for correlation with a heatmap is a good idea to visualize relationships
29827, Deep Learning Models
13063, Can we visualize the data in 3d This would make more sense with other types of data but we can try with Titanic passengers Try moving the 3d plot around with your mouse
11924, Our random forest model predicts as well as it did before
6293, Linear
18999, Use this instead if you want to make use of the internal categorical feature treatment in lightgbm
16286, Last 5 trees
36588, Use all training data learning rate
23168, Data Transformation
13141, Feature Engg
10266, Observations
35488, Model
40013, Insights
38407, Seems like it s NINE
28645, PoolArea
19427, Final step training
21852, How the Model Works
31213, Numerical Columns
41492, Display the classification report
31701, Checking New Features
32418, Visualize Convolutions
26052, We finally define our training loop
4707, Here the features coefficients
26257, Defining the Image generator function
1167, check some points individually to figure out the best imputation strategy
15034, Fare
25244, perform CV for catboost
26211, Here we define a nice function that is useful not only for this competition but for similar projects as well
13827, Let's use the confusion matrix to find the TP, TN, FP, and FN
29901, Linear Model
13177, Let's replace NAs in the Embarked variable with U to indicate an unknown Port of Embarkation
11645, K Nearest Neigbors
9584, Solving linear equation with Numpy
14540, Reading and Inspection
15394, Before we start filling the missing values for training and test data let s create an array with all the data
18982, Display more than one plot and arrange by row and columns
18355, lasso
5183, Tikhonov Regularization, colloquially known as the Ridge Classifier, is the most commonly used regression algorithm to approximate an answer for an equation with no unique solution. This type of problem is very common in machine learning tasks, where the best solution must be chosen using limited data. If a unique solution exists, the algorithm returns the optimal value; if multiple solutions exist, it may choose any of them. Reference: Brilliant.org (regression)
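In scikit-learn this corresponds to RidgeClassifier; a quick sketch with hypothetical X_train/y_train and X_val/y_val splits:

```python
from sklearn.linear_model import RidgeClassifier

# alpha controls the strength of the Tikhonov (L2) penalty; larger alpha
# shrinks the coefficients more aggressively.
clf = RidgeClassifier(alpha=1.0)
clf.fit(X_train, y_train)
print(clf.score(X_val, y_val))  # mean accuracy on a held-out set
```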
7877, To solve this problem, a first thought is that the chance of survival could depend on the Fare
1260, Visualize some of the features we re going to train our models on
3261, Identifying the columns with missing values
15006, A correlation matrix illustrates how variables move together
21226, Our dataset is not ordered in any meaningful way so the order can be ignored when loading our dataset
17839, We fit the model
34665, In the case of monthly sales this value is around 70
654, Before we start exploring the different models we are modifying the categorical string column types to integer
6790, FamilySize
313, Cabin
4476, Importing libraries for classification models
4513, We should impute missing MSZoning with RL
26886, Simplest approach Include only numerical columns drop all the categorical data
25776, Age
39441, Prediction Dashboard
4215, Drop the columns which have more than 70% missing values
12837, Logistic Regression
42864, We set up utilities to encode these features using Keras Preprocessing Layers
5671, Divide Fare for those sharing the same Ticket
22939, we take a look at tickets
3885, Ordinal Attributes
35102, Run a quick visual test
15223, Categorical EDA
18529, investigate for errors
3155, I highlight the following because it taught me something: I should have started my own analysis by checking for points that could be overly influential, and possibly getting rid of them
34079, Ticket
31726, Anatomy and Diagnosis
26382, Initialize the weights of our NN to random numbers
20, GridSearchCV Linear SVR
25901, Tfidf Vectorization
7328, Create a feature Title
2798, Save Load Trained Model from cloud
15462, Ticket Number Cluster
40414, Created
2386, Plot confusion matrix new in sklearn
24430, Splitting into to test and train data
32579, test the objective function with the domain to make sure it works
2677, ANOVA for Classification
26543, Define Optimizer
6004, Our target has a positive skew, and we want to transform it to follow a normal distribution
5937, We have filled the numerical columns' missing values with the median
18245, View the shape
34480, Individual series weights
7708, Selecting numerical and categorical features
18335, GrLivArea
6321, Decision Tree
9121, I think I like these transformations
5680, Create the DataFrames to fill missing Age values
25674, Predictions
30719, Define the model
11037, Lets tabulate all the confidence scores and check which algorithm is the best fit for this problem
13677, Sex
19273, Loading data
6057, Heating Electricity and Air conditioning
15822, Correlation matrix
28355, Merging the bureau dataset along with application train dataset to do more analysis
116, Creating dummy variables
22178, build the final model and get the predictions on the test set
35403, Normalising datasets
23321, Mean encodings for shop-item pairs; mean encoding doesn't work for me
42236, Distributions of attributes
22365, Handle missing data
2649, step is Feature Engineering
13, Modeling Evaluation
38006, This store has 3 categories (foods, hobbies, and households), which have 2-3 departments each
8248, Original SalePrice Visualization
42646, These texts should be marked as 1
3005, Winsorization Method (Percentile Capping)
42400, Yearly Cycle Decompose of CA 1 store
20974, Convert Numpy Array format to Pandas Series and then to CSV format
32777, Check Score
9133, Functionality
996, How about plotting our handy correlation matrix again
3373, Importing from GCS to AutoML
13951, Outlier Detection
36877, Keras CNN model 2
4559, Dealing with features containing Years
29073, Categorical features one hot encoding
1359, In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong independence assumptions between the features
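A brief illustration with scikit-learn's GaussianNB, the variant that models each feature as a per-class normal distribution; X_train, y_train, and X_val are hypothetical:

```python
from sklearn.naive_bayes import GaussianNB

nb = GaussianNB()
nb.fit(X_train, y_train)            # fit per-class feature distributions
print(nb.predict_proba(X_val)[:5])  # posterior probability per class
```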
20567, Analysing data with graphs
12746, As for the Cabin it s too much to fill so we can just remove them all
40265, Ground Living Area vs Total Basement Surface Area
37394, Tastes like skewness
22745, Deployment
17632, SibSp Parch
21476, All the tweets more than 150 characters long are generated, so we drop those since they would be a problem later
554, SVC GridSearchCV
22595, Aggregate the train set by shop-item pairs to calculate target aggregates, then the target value
15310, First we start by checking the counts of survived 1 and dead 0
17418, Let's handle the missing values
24552, let s plot which products were chosen along with the dominant product when the customer bought only two products in any single month
33469, Date Day Week Month Year Seasonality
29464, Data preprocessing
23756, Role of Temperature and Climate in spread of COVID 19
6445, Extracting the categorical column from train and test data
8468, Univariate feature selection
8446, One Hot Encode Categorical Features
27194, Providing the Data
18294, We create Synonyms for the most frequent words
1070, I want you to look at this thread and answers 39589post222573
38129, Lets checkout SibSp feature
8724, Sale Price
18703, Re label Mis labeled images cat as dog or dog as cat
39308, XGBRegressor validation
20228, We can extract the titles from names
4414, More Data Prep
10130, Random Forest
32144, How to compute the min-by-max ratio for each row of a 2-D numpy array
270, It is a categorisation algorithm that operates on database records, particularly transactional records or records including certain numbers of fields or items. It is mainly used for sorting large amounts of data; sorting data often occurs because of association rules
11149, PoolQC: the data description says NA means No Pool. That makes sense given the huge ratio of missing values (99%) and that the majority of houses have no pool at all in general
40930, Optimizer and Loss Functions
6825, Systematically Missing Data
22133, CHECKING FOR CORRELATED FEATURES
31208, XGBClassifier
26958, Information
37467, Necessary Data
41404, NAME HOUSING TYPE
31088, GarageYrBlt
27972, hist
8521, Other Models
34679, Revenues distribution by category
9618, Categorical Data
5426, Since None is the most frequent value I impute None for the Type and for the area
217, As Classifier
28425, shop id
24791, Predict
17782, We replace the predicted value in the original data
24580, Check what train data looks like
32590, Steps
34, Visualizations
24102, Handling the null values
35787, add the previous averaged models here
27348, There is a clear trend and seasonality in the data; let's look in more detail by performing a decomposition
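One common way to perform such a decomposition is statsmodels' seasonal_decompose; monthly_sales is a hypothetical pandas Series with a monthly index:

```python
from statsmodels.tsa.seasonal import seasonal_decompose

# Split the series into trend, seasonal, and residual components,
# assuming a yearly (12-month) seasonal period.
decomposition = seasonal_decompose(monthly_sales, model="additive", period=12)
decomposition.plot()
```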
7746, we need some feature engineering
11630, Try various machine learning algorithm
13609, let s create a map that maps each value to its frequency
7730, I found these features might look better without 0 data
17684, FARE SURVIVAL
18963, Display the variability of the data, used on graphs to indicate the error
22122, Advanced Ensemble Learning
13154, have a quick overview of the train copy and test copy
36813, Relation Extraction
22798, also look at the entire metrics including the inter quartile range mean standard deviation for all the features
32548, Split into train validation set
23261, Continuous Continuous
14620, After transformation you are not yet finished with the fare: normalize the feature to zero mean and unit variance, using your own method or the StandardScaler of scikit-learn, as you have done with the ages
26576, Flow from directory
29940, The learning rate again appears to be moderately correlated with the score
32499, Loading the weights of Benchmark model
24247, Age
19276, Here I dump out the contents of the array encoding the first image in the training set
20579, Applying Feature Scaling on training data
25759, Seems to work
19325, Visualize a single digit with an array
27966, Data Collection
27476, Each data point consists of 784 values
1924, All garage-related features are missing values in the same rows
13370, check the percentage of survival for males and females separately
28432, Category info
37094, Any duplicated rows
24517, There are two columns where almost all values are missing
4810, Creating a dataset which would be submitted for evaluation
18104, To evaluate our performance we predict values from the generator and round them off to the nearest integer to get valid predictions
6917, Hyper params optimization of the models with Grid Search
37793, Model Scoring on Test set
2929, RandomForest
6652, Family Size Computation
2815, train_cats is a function in the fastai library that converts strings to pandas categories
36196, I would add a 3-level seniority column (new: first 6 months; 1 year: 6 to 12 months; older: 12+ months) and review products based on this more detailed seniority. But first let's decide what to do with the missing data in fecha_alta
25291, Prediction
31899, VISUALIZATION OF THE DATASET
36673, There are a lot of arguments and parameters that can be passed to the CountVectorizer; in this case we just specify the analyzer to be our own previously defined function
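A sketch of passing a custom analyzer; my_analyzer is a hypothetical stand-in for the kernel's previously defined cleaning function:

```python
from sklearn.feature_extraction.text import CountVectorizer

def my_analyzer(text):
    # Stand-in for the previously defined text-cleaning function.
    return text.lower().split()

# Passing analyzer= replaces CountVectorizer's built-in tokenization.
vectorizer = CountVectorizer(analyzer=my_analyzer)
bow = vectorizer.fit_transform(messages)  # messages: iterable of raw strings
```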
8848, Numericalise
299, train and target
14242, Cabin Feature
32923, Features selection
26907, Conclusion
14907, Embarked
39980, Since the tail is on the right side, the distribution is positively skewed
42229, Submission and conclusion
28749, Fit the model
21391, Feature Engineering and Time Series Batch Creation by Country Region to train them seperately as trend is very different in different Regions
6983, We analyze each discrete variable individually and make decisions based on correlation with SalePrice lack of information in each category and so on
3201, Only the ones that work, from the most relevant to the least relevant
22818, Another outlier This time with item price
13381, again check for missing values
29914, Linear Regression of Scores versus Iteration
30398, Predictions
16483, Sibling and Spouse on board
793, Visualising processed data
41249, EDA FEATURE ENGINEERING
32716, Comparing Accuracies
38429, Improving the CNN architecture vol 3
30406, Confusion matrix
29816, Vector Averaging With Glove
6774, Data Dictionary
13323, Observations
15268, Import Necessary Libraries
31009, Let's add the 2nd layer, but this time we increase the number of feature maps
8033, Embarked
27424, Count of LB Submissions that improved score
28023, EMBEDDING METHODS WORD2VEC
7313, Model and Predict
42052, Drop columns by label names
38241, EDA
11425, Imputation and Outliers
14373, Passengers embarked at Cherbourg have the highest survival rate
32523, Generating csv file for submission
22656, Submission
36252, Visualizing features
32467, Longevity Model
14351, Male and Female Distribution on the ship
36772, Below I have displayed some correctly classified and misclassified images, which helps visualize this better
6628, It is clear from the visualization that most of the survivors were children and women
7133, Cabin
14916, Demonstrate all sorts of title
4447, Split to train and test data
22613, make prediction
15079, Machine Learning
40305, Question1 Frequency
40787, Visualizing and Analysis
41485, Fit the training data and create the decision trees
30989, Sequence of Search Values
33508, China Hubei
16760, How do we fill in the missing ages? Let's investigate
38769, Deep Netural Network Modeling
39314, XGBRegressor training for predictions
31080, Looking at the KDE plot and description of LotFrontage, we can replace the NaN values of this column with either the mean or the median, because the data is almost normally distributed
21616, Sampling with pandas with replacement and weights
11239, Use a neural network to train and predict, to compare with RF
19832, To visualise the distribution of the variable (Age in this case) we plot a histogram, to check for a bell shape, and the Q-Q plot
659, Logistic Regression again this time with only the selected columns
32801, Logistic Regression
11345, Data Preparation
1099, Extract ticket class from ticket number
21447, You can judge that there is an outlier by the count
14583, There are no missing values in the Age columns
3598, LightGBM
20686, We can then connect this to an output layer in the same manner
7289, LogisticRegression Model
7102, We could draw a correlation heatmap
11550, Scatterplots to Explore the Dependence of SalePrice on Numerical Features
19408, Lets Predict
12832, Correlation
8901, Gradient Boosting
40067, Data Processing Feature Engineer
9379, check in test data
17647, Support Vector Machine
31995, Produce an equivalent csv DataFrame to output with the train ground truth data
41160, FEATURE 6
11541, Logistic Regression
41288, Saving the model s weights as outputs you can then download these weights later or use them as inputs to other kernels
9026, For basement exposure I just set it to the average value based on both the training and test set where HasBasement is true
19167, XGboost regressor
19075, It is clear that the number of people who survived is less than the number who died
6663, RBF SVM
21162, loading data
27136, Above-ground living area (sq ft) has a positive correlation with Sale Price, which is obvious
13076, We do not need Cabin and Ticket, hence they can be dropped from our DataFrame
37816, look at the top 10 keywords
32304, I have created another column feature for number of people in the family by adding SibSp Parch and 1
33359, Question 3 Create a treemap plot for item category and the total combined sales
10594, Decomposition with Principal Component Analysis and gradient boosting
23546, Data augmentation
16612, Feature Cabin
24318, We fill in the other missing values according to the data description
32810, Level 2 Logistic Regression
38318, Improve the model
36541, At first glance this looks similar to train, except for the missing target
1346, drop Parch SibSp and FamilySize features in favor of IsAlone
16457, Feature Engineering
10941, Check the summary of test data
38369, Modeling
2238, Transforming Missing Values
21140, Before we start anything we have to split our data into two parts
7381, Merging the unmatched passengers manually
18125, XGBoost is short for Extreme Gradient Boosting and is a popular algorithm on Kaggle
10208, After importing the library, let's check how many rows are present in the Train and Test sets
18706, We can now download the data frame
2019, Checking performance of base models by evaluating the cross validation RMSLE error
25642, Preliminary investigation
25426, Skin Cancer MNIST HAM10000 Repeated
15489, Train model
8416, Slope of property and Lot area
37289, Running Cross Validation
13722, Decode Pclass
18832, LightGBM
43325, Data Augmentation
40289, Ok
6653, Benefits of Feature Selection
30185, Use StratifiedShuffleSplit, as there exist subspecies in the image set
6162, mod catboost 1
38371, Time them
40312, Data Loading
13031, Embarked
15533, Normalize data to be between 0 and 1
25041, A right tailed distribution with the maximum value at 5
24534, Number of products by customer regularity
27315, Predictions
42574, I removed the contribution of all 0
21779, For easier grouping I would change the grouping order a little
5237, Merging Train and Test Sets
676, Final validation with the testing data set
9827, Embarked Port of Embarkation
20391, Logistic Regression Model
23089, Variable Description and Identification
8696, Similar inferences can be drawn from other plots and graphs
24971, Helper functions params
15687, Filling missing values
34637, Decision Tree Regression
9487, Predict it
32060, NOTE: Both backward feature elimination and forward feature selection are time-consuming and computationally expensive. They are practically only used on datasets that have a small number of input variables
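For reference, scikit-learn (0.24+) implements both directions in SequentialFeatureSelector; the estimator, feature count, and X/y below are illustrative:

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

# direction="forward" adds features one at a time; "backward" starts from
# all features and removes them. Both refit the estimator at every step,
# which is why they are expensive on wide datasets.
sfs = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select=10,
    direction="forward",
)
sfs.fit(X, y)
print(sfs.get_support(indices=True))  # indices of the selected features
```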
43327, Build CNN Model
20745, GarageQual column
37477, Bag of words
31381, Augmentation with ImgAug
38085, We actually have classes named 0 to 9
29477, Splitting into train and test sets with a 70:30 ratio
41821, In order to make the optimizer converge faster and closer to the global minimum of the loss function, I used an annealing method for the learning rate
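One standard way to anneal the learning rate in Keras is the ReduceLROnPlateau callback; the monitored metric and factors below are plausible settings, not necessarily the kernel's:

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau

# Halve the learning rate whenever validation accuracy stalls for 3 epochs,
# never dropping below 1e-5.
lr_annealer = ReduceLROnPlateau(
    monitor="val_accuracy", factor=0.5, patience=3, min_lr=1e-5, verbose=1
)

# model.fit(..., callbacks=[lr_annealer])
```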
41352, Again, condition is not as clearly correlated as quality is, but there is still a considerable relationship
4803, We one-hot encode the dataset and then split it back into the original train set and the test set
42243, Exploring categorical columns
33738, Lets load the test data
13547, PClass
30679, We make a special cross validation function for catboost classifier
35822, And now we embed each chunk individually
9591, Crosstab
8000, Regularized Linear Models
30624, Age is a numerical variable
6602, Create csv to upload to Kaggle
30312, Ensure determinism
6735, 1stFlrSF Vs SalePrice
12644, Importing
19765, The term document matrix looks like this
32221, Use month 34 as validation for training
28213, Output
18287, To get the optimal parameters I ran the code below to obtain the values to plug into the XGBRegressor
39018, Display death by age group
38818, Get the final dataset X and labels Y
7559, We have to import the test file and process it before prediction
833, List of all features with strong correlation to SalePrice Log
9147, Since there is only 1 null row with BsmtFinType2 and BsmtFinType2 is highly correlated with BsmtFinSF2 I got the BsmtFinSF2 value of the null row
32982, Linear regression
34012, windspeed
37375, Decision tree
4751, For this, we use a logarithm transformation
13685, People with 0-2 SibSp have a higher chance of survival
40035, Weighted cross entropy loss
34699, Lagging revenues
41458, Plot a normalized cross tab for Sex Val and Survived
24021, We have almost 3 years of transactions from several 1C shops aggregated daily
9068, Observations
1806, No or Little multicollinearity
4019, Since most of the houses have MasVnrType value None, let us replace the remaining 8 missing values with None and the corresponding MasVnrArea values with 0
40030, This is the mysterious unknown group of test images that holds 15% of the test data; keep them in mind
22809, Unclassified
27124, Basement
31634, Address
143, look at the feature importance from decision tree grid
37753, Technique 5 Skip No of Rows
16138, Bar Chart for Categorical Features
13579, Dropping unnecessary features
25358, Create statistics from texts
17052, SVC
5185, ExtraTreesClassifier implements a meta-estimator that fits a number of randomized decision trees (a.k.a. extra-trees) on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The default values for the parameters controlling the size of the trees (e.g. max_depth, min_samples_leaf, etc.) lead to fully grown and unpruned trees, which can potentially be very large on some data sets. To reduce memory consumption, the complexity and size of the trees should be controlled by setting those parameter values. Reference: scikit-learn documentation, https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html
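A short sketch showing the size-controlling parameters the entry mentions; the specific values and the X_train/y_train split are assumptions:

```python
from sklearn.ensemble import ExtraTreesClassifier

# Limiting depth and leaf size keeps the otherwise fully grown trees small.
et = ExtraTreesClassifier(
    n_estimators=300, max_depth=12, min_samples_leaf=5, random_state=42
)
et.fit(X_train, y_train)
print(et.feature_importances_)  # averaged impurity-based importances
```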
14933, Sensitivity: $P(\hat{Y} = 1 \mid Y = 1) = \frac{a}{a+b}$
10772, Pre processing Three
39031, Hyperparameter
24038, Encoding data for random forest and knn
37822, Remove punctuations special characters numbers
3882, We now train the pipeline with 70% of the train data and predict the prices of the validation dataset (the 30% of the train data we set aside)
1821, Phew! This looks like something we can work with; let's find out the MSE for the regression line as well
32700, n_estimators: the number of decision trees in the forest
42862, Prediction and submission
20347, The fast ai way
16400, But other than that pd
34363, Checking for missing data
37474, Lemmatization is similar to stemming, but instead of looking for a stem of a word, you look for its lemma
11075, Ensembling and Stacking
24365, There are some variations in the median price with respect to time
19842, As we suspected there were very few children at the age of 10 on the Titanic
2256, Age and Sex
12482, Embarked Title Pclass
28574, BsmtFinSF2
39858, Training
15507, Fare
42645, Mislabeled Samples punctuations and stopwords were removed
29221, Looking good; now we try to handle the 116 categorical variables
18541, There are a lot of non-normal distributions here
10538, Get Dummies
8620, OverallQual vs SalePrice
27633, Lets do an initial top level correlation matrix analysis
38223, Dates Day of the week
8536, Pareto Approach
612, let s check what s going on between Age and Embarked
30187, Remove index=False from to_csv
15532, Convert data to categorical variables
14371, Feature Description
28236, Define the optimizer
30610, Examining the feature importances, it looks as if a few of the features we constructed are among the most important
584, Data processing
258, Fitting Model
4348, If the skewness is between -0.5 and 0.5, the data are fairly symmetrical
10322, Take a look at the kitchen-sink regression for the full training set
7067, GridSearch for Light GBM
4640, Number of Missing Values in that variable for all the rows
17891, Some ticket numbers have alpha characters in the number
42233, Categorical columns within the dataset
7507, Feature engineering
38476, Axis 1 and 2 As Feature
12120, PoolQC data description says NA means No Pool
37045, The process of converting data to something a computer can understand is referred to as pre-processing. One of the major forms of pre-processing is to filter out useless data. In natural language processing, useless words are referred to as stop words
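A minimal stop-word filter using NLTK, assuming the stopwords and punkt corpora have been downloaded:

```python
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

# nltk.download("stopwords") and nltk.download("punkt") may be needed first.
stop_words = set(stopwords.words("english"))

def remove_stopwords(text):
    # Drop common words that carry little signal for most NLP tasks.
    return [w for w in word_tokenize(text) if w.lower() not in stop_words]

print(remove_stopwords("This is an example showing off stop word filtration"))
```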
5391, How about feature importance
42120, we ll train on the full set and make predictions based on that
16000, Age
22805, Exploratory Data Analysis
34515, Bureau Balance
12066, Correlation with Target
227, Library and Data
10139, Data Model Selection
43268, Importing the RandomForestRegressor class
28950, We predict the values of holdout set by training the model on the entire train dataset
8819, FEATURE ENGINEERING
36279, Feature Engineering
36854, Reading the data
7023, Heating quality and condition
29019, Pclass
420, From this we can tell that the features OverallQual, GrLivArea, and TotalBsmtSF are highly positively correlated with SalePrice
43326, let s take a look on our Augmentation datagen
6309, Logistic Regression
37743, We should stick to using the category type primarily for object columns where less than 50% of the values are unique. If all of the values in a column are unique, the category type ends up using more memory; that's because the column is storing all of the raw string values in addition to the integer category codes. You can read more about the limitations of the category type in the pandas documentation
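A quick demonstration of the memory effect described above, with a made-up low-cardinality column:

```python
import pandas as pd

# Hypothetical object column with very few unique values.
s = pd.Series(["red", "green", "blue"] * 100_000)
s_cat = s.astype("category")

# category stores each unique string once plus small integer codes,
# so memory drops sharply when uniqueness is low.
print(s.memory_usage(deep=True))      # object dtype: large
print(s_cat.memory_usage(deep=True))  # category dtype: much smaller
```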
21623, New aggregation function last
16471, Reasons to keep linear model and MLP
30393, Embeddings
20965, Initialising the ANN Model
26051, we ll define a function to calculate the accuracy of our model
25198, Callback Technique
29045, Resizing the image
32803, XGB
26664, bureau
239, Model and Accuracy
2316, Method 2 With Patsy using R like formulas to split the dataframe
2368, Supervised Learning Stacker
34875, Nice! Another 3% increase
6503, Read Data
25248, Base XGBoost Model
20458, Number of children
24385, Where do we have NaN values
41337, Categorical Variable
32746, The Random Forest Classifiers for 8 options of selected feature sets
19895, Several shops are duplicated, which can be determined from the shop name
7149, FILTERING DATA FRAMES
42776, Random Forest
29905, Fully Connected Model
32843, Training Function
3512, Make predictions on the test set and write to csv
28135, Compiling our model
33516, China Hubei
33687, Weekend or not
8467, Sequential feature selection
8028, Fare
27189, Initial Feature Analysis
18604, Looking at pictures again
11952, Finding best parameters for each model
16607, Categorical Variable
23920, Simple transformers
26976, Prediction
14927, K Nearest Neighbor Classifier
40325, Users
23576, Instantiate the tuner to perform hypertuning
3954, Create TotalBath Feature
17954, Gradient Boosting Classifier
17551, Replace the missing values of the Age column using entries with similar other parameters; otherwise, replace with the mean age of the dataset
2745, One way to handle missing values is to drop them
16497, Decision Tree
39882, Prediction with RandomForestRegressor
32127, How to convert a numeric to a categorical text array
6856, Pairplots
4126, Stacking averaged Models Class
14149, First we explore the size of data that we have
10224, Before starting with modeling, let's check the missing values in the training dataset columns
1847, Distribution of Continuous Variables and Effect on SalePrice
40397, Fold 4
37308, Download submission csv file
21449, Datetime
33564, Submission
17596, Now stepping on to the second task, that is, creating a new column and creating categorical features
19975, We get val_loss and val_acc only when we pass the validation_data parameter
15548, Filling
22713, Creating list of labels for the plot
31814, Generate the ground truth anchors
2811, Dendrogram
9320, Another approach can be to first create the dummies and then split into training and validation sets; in this case we get
24829, chi2
6487, Houses with central air conditioning cost more
8356, We group the royal titles, assigning Master to Mr; and since there were not so many royal women, we assign them to Mrs
5805, Those are the default parameters
16, GridSearchCV Lasso
34025, Delete Atemp column
16605, Analyse the distributions of continuous variables
20132, Judging by the plot the best value for C is somewhere between 0
13853, Categorical features
8745, Skew
6589, We can compute the score of each feature to drop any unwanted features
36878, Predictions for test data
37915, Prediction
7626, AdaBoostRegressor
4918, Handle the Categorical Features
39406, Exploring the data
8799, Removing the highly correlated variables
1314, Feature Creation
20160, Splitting data into Train and Test Data and Labels
3395, Missing data
10566, Our task: given a user, we predict and return a list of movie recommendations for that user to watch
32677, Blending proved extremely helpful on the enhancement of error metrics in this exercise
35892, Calculate derivatives and fit model
1600, Target Distribution
22931, Another interesting feature that I noticed in the Name feature was the presence of a second name, which was denoted by brackets
11442, Importing the Libraries
43053, check the distribution of target value in train dataset
37765, Technique 10 Memory leaks
34849, Variable Correlation
7493, Weigh the embarked feature with the survival rate into a new column
31583, Let's check how the target classes are distributed among the REG continuous features
5126, ExtraTrees Classifier
36895, from submission
14203, Checking with some k Fold cross validation
24053, Removing outliers and imputing missing values
29935, Function for 3D plotting
16019, Family feature makes difference more obvious
14879, Import Data and Package
7054, Garage type
40693, Note that it is hard to visualize this way; a better way is to convert the temp variable into intervals (so-called bins) and then treat it like a discrete variable
16937, Sklearn
11279, We can use the Series
21500, Plot pie chart with percentage of images of each diabetic retinopathy severity condition
10456, Filling in Missing Data
6670, Cross Validation
41056, Interestingly, plotting the most extreme contributors to the first principal component, char_38 isn't among them
39353, Submission
39121, Decision Tree
3685, Transform variables
10151, Contents
26687, OVER EXPECT CREDIT actual credit larger than goods price
33995, Convolutional Neural Network CNN
41274, Arrow is especially effective
20636, N gram Analysis
1373, Looking the Fare distribuition to survivors and not survivors
23716, Now we check skewness of the selected variables if any variable is deviating from normality
16104, Age Feature
8739, Total Basement Square Footage
704, First steps create a copy of the data and turn the categorical data into dummy variables
13753, Plot categorical features
41987, isnull any isna any To check if there are any null values empty or NaN Not a Number
37341, VGG16 is a convolutional neural network model proposed by K
5518, Gradient Boosting Classifier
19572, Inference
32545, Converting Categorical to Numerical
40880, Predictions look pretty similar for all 8 models. We take kernel ridge, SVM, LGB, GB, and ridge as the base models for the averaging ensemble. You might wonder why I don't choose lasso and elastic net instead of GB and LGB, since the former two are superior in terms of RMSE. As I said earlier, the more diverse our base models are, the better our ensemble is. We saw GrLivArea is the top priority for all 6 models in the feature importance section, but the second priority for lasso, ridge, and elastic net was YearBuilt, while it was LotArea for XGB, GB, and LGB. That's the variation we need for our ensemble to get better at prediction. If we chose lasso and elastic net there would be similarity instead of diversity (that's just one example; there are many more), and our ensemble would not perform according to our expectation. I encourage you to experiment in this part.
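A minimal sketch of such an averaging ensemble (assuming `models` is a list of already-fitted regressors, e.g. kernel ridge, SVM, LightGBM, GBM, and ridge, and `X_test` is the prepared test matrix):

```python
import numpy as np

def average_predict(models, X_test):
    # Stack each model's predictions as columns, then average across models.
    preds = np.column_stack([m.predict(X_test) for m in models])
    return preds.mean(axis=1)
```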
34689, Average USD RUB exchange rate
7785, value counts with bins
7540, The two missing values belong to the same Pclass and same Sex, with the same Fare category; let's explore further
20397, Voting Classifier Model
1382, We might have enough information to think about the model structure
12977, We have used the Name variable to create a new feature, Title; therefore we do not need Name anymore and we drop it from the data
14549, Survival Rates Based on Gender and Class font
1115, Age Missing Values
20819, We can use head to get a quick look at the contents of each table
14726, Preparing the Data for a Machine Learning Model and Feature Selection
26320, LightGBM
6943, Visualization
9732, I prepare the dataset for training by separating the target variable LotFrontage from the rest selecting relevant features and dummifying categorical variables
5621, Using pandas DataFrame replace
5668, Last Name
30535, Exploration of POS CASH Balance Data
4281, The Sale Price Histogram is right skewed ranging from 34 900 USD to 755 000 USD
20823, join df is a function for joining tables on specific fields
6575, As there are 681 unique values, we drop the Ticket feature from our dataset
18034, Woman
31944, Try using Label Encoder
5541, Submit prediction
23573, It's not quite Gaussian, but we might expect that because the number of samples is very small
37890, Top 10 Feature Importance Positive and Negative Role
5312, Final feature selection
35552, Combining the meta features and the validation set a logistic regression model is built to make predictions on the test set
25468, Update parameters
32149, How to compute the euclidean distance between two arrays
17608, Decision Tree
12030, From the makers of bamboolib
40006, In contrast we can find multiple images for one patient
1267, Submit predictions
19234, Keras example
23714, we combine all the features to store it in df train DataFrame
17739, simply having a cabin number recorded gives you a survival advantage
25383, Visualize the model
246, Library and Data
38757, Linear Discriminant Analysis LDA
29527, Logistic Regression
2799, datasist
33756, Make predictions
40766, Optional steps my way
22099, Validation Test Accuracy
25482, Evaluate the model
40834, Now that we have our guns locked and loaded, it's time to shoot
36808, Lastly we achieve our final goal entity extraction
29315, The younger you are the more likely to survive
16629, Trained Model On Whole Data
39342, Prepare the dataset
13321, Observations
1047, We do the same thing with SalePrice column we localize those outliers and make sure they are the right outliers to remove
10801, Recall: C = Cherbourg, Q = Queenstown, S = Southampton
50, Linear Regression
35340, PROPERLY CLASSIFIED IMAGES
24803, Retrieving predictions
17994, Often the tickets are shared among family members with the same last name
4681, At first glance it appears that we have a lot of work to deal with missing values
32324, cleaning
26062, The data we want to visualize is in ten dimensions and 100 dimensions
26243, Training
32512, Generating csv file for submission
29143, Plot ly Scatter Plot of feature importances
15126, Engineering
24319, And there is no missing data except for the value we want to predict
25347, Our model
38265, First we fill all the null values with no column name
24368, Total area in square meters
35816, Target encoding
32565, Diff Patient id
896, for SVM classifier
39096, The Fog Scale Gunning FOG Formula
25267, Adding image type with image in dataframe
23190, Looks like gbc is better than rf in terms of f1 score
24497, Zip the inputs together so that the network uses the whole set of X y values for learning and prepare the test data as well as sample submission data
14898, Age and Family size vs Survive
39726, to look at what we ve made
33640, Bivariate Analyis
14770, Ensemble
7960, In our data set we have 80 features check their names and data types using the dataframe attributes columns and info
34692, Average number of items bought
29580, A new transform we use is RandomHorizontalFlip. This flips the image horizontally with the specified probability: an image of a horse facing right will be flipped to face left. We couldn't do this in the MNIST dataset, as we do not expect our test set to contain any flipped digits; however, natural images such as those in the CIFAR dataset can potentially be flipped, as they still make visual sense.
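A sketch of such a CIFAR-style training transform; the normalization statistics below are the commonly used CIFAR-10 channel means/stds, an assumption rather than values from this notebook:

```python
import torchvision.transforms as transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),  # flip left-right with probability 0.5
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
```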
23696, Evaluation
16856, AGE
836, List of features used for the Regressors in
24098, Boxplot for finding outliers
6032, Check Correlation between features and remove features with high correlations
3177, Not a bad timing
29754, We process both the train data and the test data
16597, Submit the file to competition
35599, Using XGBRegressor to calculate the housing price
40044, now we have created a hold out dataset that only consists of one type of image group
31328, Prediction
43262, Selecting the Columns We Will Use to Run the Model
2324, Grid Search Randomized Search the quest for hyperparameters
36871, Keras 1 hidden layer
21936, Spark
7460, We can use the pandas
15661, Support Vector Machines
28401, Filling NULL Values with KNN
2376, Four ways of displaying the model coefficients
41377, Numerical Features
24579, Load Data
10199, We have added some more highly correlated columns to the dataset; let's continue
8049, concat train and test
36714, Large
25243, Add aisle and dept id for catboost
21585, Select multiple slices of columns from a df
13156, Since Embarked was such a complicated feature to get a trend of we ll simply use OHE on it and let the model decide the trend itself on basis of other features
19539, Creating double exponential filtering over the time series
4204, Any duplicated rows
2489, so here we are using the Regex A Za z what it does is it looks for strings which lie between A Z or a z and followed by a dot we successfully extract the Initials from the Name
35074, The network in Solution 5 actually produced a worse score than Solution 4, even though I added Ridge regression to the first layer
6150, Looking for the best hyperparameters
2241, Categorical Encoding Class
33099, LightGradientBoosting regressor
22131, CatBoost
20456, Flag own car and flag own real estate
22619, Plot the last 25 of the devices
11505, Modeling
4057, The missing-data transformer fills the Age rows with missing data following the procedure described in section 2: if the person is a Master, we input the Masters' average age; the non-Masters' average age is placed in the other missing Age rows (see the sketch below)
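A sketch of that imputation in pandas (assuming a `Title` column has already been extracted from Name):

```python
# Average ages for Masters and non-Masters, computed from the known rows.
master_mean = df.loc[df["Title"] == "Master", "Age"].mean()
other_mean = df.loc[df["Title"] != "Master", "Age"].mean()

# Fill missing ages: Masters get the Master average, everyone else the other average.
is_master = df["Title"] == "Master"
df.loc[df["Age"].isna() & is_master, "Age"] = master_mean
df.loc[df["Age"].isna() & ~is_master, "Age"] = other_mean
```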
22539, Put it all together
24502, Fill in the submissions table
33815, Testing Domain Features
4133, One Hot Encoding rest of the features
41673, let s check what we have to predict
37738, Comparing String to Numeric storage
2828, The parameters below come from this kernel; we could optimize the hyperparameters, but that takes a long time and a lot of processing power (CatBoost parameter tuning)
12209, And there we have it some categories converted into numeric features other first recoded and the dummified
20750, EnclosedPorch column
11820, SalePrice vs Total Basement Area
38486, History of CNN font
42799, Missing data
14653, Dropping Columns
7764, Ridge Regression
12980, Embarked
20164, Case 2 Binary Images
17681, GENDER WISE SURVIVAL PERCENTAGE
23829, Again setting a threshold of 0
21178, confusion matrix
36806, Going along the process of named entity extraction, we begin by segmenting the text, i.e. splitting it into a list of sentences
37939, The most Covid 19 cases are by far in the United States
28207, First we need to clean our data by converting text columns and removing irrelevant columns
7964, After reading the description, this is the way we impute and clean the missing values
35763, now we have 100x the original number of features
23475, Processing of the test file
13516, Feature Engineering
41598, Compile the model
29989, The following function returns a single image from the hdf5 file
11066, Title feature
22246, Class
8946, Fixing Basement
20681, Make a Prediction
18300, let s remove outliers over the 99th percentile
14746, Model 3
2780, Import libraries Load dataset
15872, Hyperparameter tuning
28083, The following is the method to read csv file in Spark
10543, Splitting data into training and testing data
5967, fare feature
20434, Loading tokenizer from the bert layer
30606, Fortunately once we have taken the time to write a function using it is simple
2616, Library and Data
14810, Correlation Between Sibsp Parch Age Fare Survived
20318, Section 2 Processing and viewing our Data
10754, Select those passengers who embarked in Cherbourg (Embarked = C) and paid over 200 pounds for their ticket fare (Fare > 200)
11190, Kernel Ridge Regression
5589, Discretize Fare
21363, Training on both targets MULTI TASK LEARNING
9951, Violin Plots
4673, Removing outliers
13334, we want to visualize every possible Title value and correlate them with the Sex column
3819, Definition: confidence intervals of sample means can give us information on the population mean when we acquire a single sample of data and know the population standard deviation but not the corresponding mean. We take our single sample mean and use it to get a range of values with the following formula: confidence interval bounds = sample mean ± z* × standard error. We already know the formula for standard error from before, and we get z* by locating the z value halfway between our desired confidence level and 100%, to account for both tails of the normal distribution. For example, if we want a 95% confidence interval (which is standard), we locate the z value corresponding to 97.5%, which is about 1.96. With the two bounds we get from the formula, we can be 95% (or any desired confidence level) confident that the true population mean lies between the two values.
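A minimal numeric sketch (assuming a 1-D array `sample` and a known population standard deviation `sigma`):

```python
import numpy as np
from scipy import stats

z_star = stats.norm.ppf(0.975)       # z* for a 95% interval (97.5th percentile)
se = sigma / np.sqrt(len(sample))    # standard error of the mean
mean = np.mean(sample)
ci = (mean - z_star * se, mean + z_star * se)
```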
22344, TF IDF Vectorization
22226, Create a Neural Network with Keras in Python
30634, Relationship between embarked and survival rate
28069, we have a trimmed dataframe
21841, RNN with 1 Layer and Multiple Neurons
12705, Sex
25192, Reshaping our data
33339, Quick look at sales df
3926, Naive Bayes Classifier
6718, Find Outliers in numerical features
20688, How to Develop Deep Learning Models
33151, Train a model using Keras
26395, It is important to find out how the different features correlate with Survived
15164, Get Rid of Redundant Data
10044, Split data set
24237, Generate csv for submission
24307, Model Evaluation
2267, Name
7284, Random Forest
15878, But let s take a look at all of the models so we can make a more informed decision
3979, Read data
16644, KNN
40960, LabelEncoding the categorical values to change them to int
4327, SibSp and Parch combined
30777, Find the score of cross validation
5018, Outliers
16914, Correlation Evaluation
7137, AgeGroup
5338, Diplay series with high low open and close points Similar to candlestick
28637, GarageQual
10506, Fare
3553, We talk about the Fare now but let s take a look at the correlation matrix first
14291, Creating submission file
16247, Imports
727, We ve addressed a lot of the issues holding us back when using a linear model
26209, Most people check that the unique values of the width and height columns in df equal 1024 to confirm all images are 1024x1024, but we cannot be 100% sure that holds for the training folder, so we won't use that approach here
25897, Latent Dirichlet Allocation LDA
37357, Embarked vs Survived
7573, Visualizations for numerical features
34410, Filter out rows with incomplete data
32709, Splitting Data into training and validation data set
6771, Import required modules
37073, Encoding Categorical Variables
10723, Handling missing values
24767, LightGBM
3156, And then there s more data cleaning
441, Electrical, KitchenQual, Exterior1st, Exterior2nd, SaleType: since these are all categorical values, it's better to replace NaN values with the most frequent value
41127, Additive Model
25902, Building basic Logistic regression model
10399, Blending with top kernels
5140, Outliers
32093, How to replace items that satisfy a condition without affecting the original array
8516, Modeling
8785, Tunning Params
27545, Display heatmap by count for multiple categories
36114, Lets have a look to our new cleaned data
16184, We can now remove the AgeBand feature
38707, Selecting Features
15357, SHAP values
42989, Visualization
23250, We want to give numerical values to our model, so we convert the object-type values to numeric values
11287, Categorical Features replace missing values with None
42090, Apply Model to the Competition Data
12253, Data
11458, Embarked Pclass and Sex
23469, Using different models for casual
22130, LightGBM
38135, encode Embarked which is of type object
11713, Decision Tree
43162, Submission
20298, Survival for passengers traveling alone is very low, and it decreases for families with more than 4 members
38475, Axis 2 As Feature
25842, Getting textfile
32025, In the same manner convert Embarked column of test X
38543, F1 looks good But sometimes it may not
8008, Boost Decison Tree
20624, write a small function to draw graphs for stopwords and punctuations present in the tweets
18154, Comparing the model and choosing best model for prediction
1563, Ticket
37455, This is a multi input model
24756, Get dummies
37313, XGBoost
12444, Final predictions
35577, Sales Heatmap Calendar
2134, Tuning RandomForest
9437, kdeplot
39223, Duplicated Features
14852, Parch
8820, Feature Engineering Name Title
10118, No surprise first class passengers have survived more than the rest
36774, MISCLASSIFIED IMAGES
1156, Background
29539, We're going to train our model; first, import some libraries
25944, Final Fit Predict
27440, Model
5696, Features engineering
42031, Use cut to change continuous values into ordinal groups based on physical numbers
4345, Actually MSSubClass feature is a categorical feature
13559, Fare mean by Sex
36378, Reshaping
17643, Fare per person and Age are important; because they have a float datatype, we cannot use them directly for groupby
9294, Regression on survival on Fare span
22524, Visualise comparison of model performance
35934, Cabin
20404, Plot the 20 nearest word embeddings using a distinct color per cluster
35862, And when you check out the training and validation set combined
25679, Rebalancing the Data
39022, Correlation Heatmap
19260, Drop rows with different item or store values than the shifted columns
1415, Title vs Survived
23713, Extensive Feature Importance Visualization
16692, Feature engineering
2886, Two categorical features we are going to explore are OverallQual and OverallCond
42943, Modeling
5420, In my opinion, the distribution between survived and dead is pretty clear
8529, Masonry Features
14789, Family Size SibSP Parch
42612, Preprocessing the data
16723, embarked
31678, Defining a function to plot the images of numbers
13220, Building Machine learning Models
20135, check if there is any imbalanced in data labels
10600, Step 3 Clean Your Data
40413, The longitude values confirm that the data corresponds to New York City
14128, Observations
400, Gradient Boosting Classifier
42684, Before starting EDA, it would be useful to make a meta dataframe which includes the dtype, level, response rate, and role of each feature
13980, The passengers with title Mr are more
15140, Cleaning
19005, Run this cell to do HP optimisation
35852, we reshape the actual test and train set for the conv layers
19905, Grouping by month and shop Id only
36562, Hyperparameter Tuning
41588, Visualizing Predictons on the Validation Set
35581, Preparing Dictionary
43144, Making predictions
6743, KitchenAbvGr Vs SalePrice
21056, Validation Set
30918, Measure F1 for Validation data
16369, Binning The Age
42082, We freeze the base layer, i.e. its weights are not going to be updated when retraining the model
15742, MACHINE LEARNING
28490, By checking the maximum and minimum values of these columns we can make sure which ones to convert to type int32
25455, Using the bottleneck features of a pre trained network
12124, FireplaceQu data description says NA means no fireplace
41027, Explore adult males
21090, One Hot Encoding
29772, We save the prediction in the output file
37406, drop the columns one hot encode the dataframes and then align the columns of the dataframes
2121, then create a pipeline for this model
20127, I can build a feature matrix where the data is all ones, row_ind comes from trainrow or testrow, and col_ind is the label-encoded app id
38517, Distribution of top Trigrams
30993, The public leaderboard score is only calculated on 10 of the test data so the cross validation score might actually give us a better idea of how the model perform on the full test set
22018, saldo var30
23195, To implement an ensemble we need three basic things
7958, RMSE 0
17788, She is traveling in Cabin D17 in 1st class
12316, It is no wonder that the FullBath increases
3378, Download your predictions
10514, Convert the Categorical Variables into Numeric
18070, Compute at the number of train and test images
31935, Here are some examples of the hand drawn digits from the train dataset
31251, Feature Selection
22816, Using google translate I understood that this item is related to point of delivery and the Russian shipment company Boxberry
21741, Data Leakages
12167, Min leaf nodes
5367, MDI Feature Importance
33892, agregating credit card balance features into previous application dataset
6230, Check correlation of features
20284, Age
18038, Prepare the test data
12237, Tuples
13950, Basic Data Analysis
4269, Functional
7863, Import Data Exploratory Data Analysis
19013, Experiment 3
247, Model and Accuracy
41991, Locating loc To read items with a certain condition
2699, Based on the previous correlation heatmap GarageYrBlt is highly correlated with YearBuilt so let s replace the missing values by medians of YearBuilt
20028, Define classifier over 10 folds
4164, Fill missing data with random sample
27778, Normalization
32863, Group based features
40153, Store a unique Id for each store
16703, drop Parch SibSp
30199, Include back the id and price doc which we had removed from tempData for correct mice NA imputation
1762, Ticket
23427, we move on to class 0
30385, Model
28194, let set the stopwords for english language
13264, Cabin
15007, The correlation matrix contains a lot of information and can be difficult to interpret
22719, Plotting the animation
10722, Feature selection
3948, There are two features related to MasVnr that have missing values MasVnrType and MasVnrArea
18135, Submission
1229, remove outliers
35333, Building the ConvNet Model
13553, Looking quantiles of Fare
40080, we finalized our values with lowest Root Mean Square Error
5512, Modelling
13666, KNN
9805, Submission
28790, Most common words Sentiments Wise
37903, Prediction from Linear Model
14786, Cabin
15017, Age
9972, The Have Pool feature is a boolean: if the pool area is greater than 0, it means that the house has a pool
34673, There is a clear seasonality in both overall amount of sales and average sales per day
3748, Filling the null values with mean calculated of the feature
37469, Cheat Sheet for Regular Expressions
426, Missing Data
2481, Making submission
26861, For the solution uncomment and run the cell below
37235, Submission
1930, 1st Floor in square feet
28120, The weekend looks noticeably different from the weekdays greater proportion of late night activity no real peak in the early afternoon
37657, Check the test image formats
11025, lets create Fare band with 4 segments
31260, Our best parameters
15594, Data Types
29564, We take log transformation of the y variable
24833, use Naive Bayes
5340, Diplay increase and decrease of counts in waterfall chart
6409, There are some date variables in the dataset, as we saw when we performed df.head(); check again
22771, We now build the vocabulary using only the training dataset
32084, Bivariate Analysis
40077, look for multicollinearity within
19290, UPDATE Here we are reading just the validation set
7491, Fill missing data for embarked feature
16712, Nearest Neighbours
21065, Tokenising and Stop words removal
16783, Decision Tree Classifier
4095, Fare
14468, CAUTION: Make sure that PassengerId and Survived are saved as int in the submission file. Otherwise the submission will be accepted but scored incorrectly
30338, This function is to do tta
13831, Fare
40724, Learning Rate
29065, Geolocation features
9123, convert categorical ordinal variables to integers
43101, Model training and predictions
17771, Most of the passengers traveled alone
25014, Creating Submission
15933, Pclass
20349, Here we run the learning rate finder. Per Jeremy (lesson 3, 49:00): run lr_find, plot it, and pick the learning rate with the steepest slope, not the bottom
31093, BsmtUnfSF font
7038, Type of road access
36893, Comparing classifier performance
36268, Configure the heatmap
8270, Create YrBuiltAndRemod Feature
6769, Score Summary
35928, Data skimming
3248, Scaterplot Matrix w r t Sales Price
6525, We may assume the same numeric features will be skewed in the test set as well
17277, Random Forest
8485, Orthogonal Matching Pursuit model OMP
25583, Utilities
9351, Predict
19693, Train Test Split
35067, Solution 4 2 Convolutional layers with 32 feature maps
9929, I wanted to study outliars in the most relevant features and that s what I did
8669, Gradient boosting with CatBoost
41030, build an ensemble
20877, Let s take a look at the training dataset
928, Optimize Elastic Net
9844, Roc curve
1261, Setup cross validation and define error metrics
39716, Once the vocabulary is created and the Word2Vec model is specified I train this model by calling train function
40147, We are dealing with time series data, so it will probably serve us to extract dates for further analysis
24033, It is possible to calculate the deseasonalized item sales count
4383, Something looks surprising in the histogram of the test data
8768, Survival by number of parent or children
42173, To know the data type it contains we can run the following command
7438, As before, I also experiment with the original y_train and the log-transformed y_train_log in the neural network model as well
13092, Correlation between Quantitative variables
29209, After making some plots we found that we have some colums with low variance so we decide to delete them
18458, Supervised encoding of location coordinates
2284, Confusion Matrix
5969, Creating dummy variables
13273, Import libraries
32379, Visual Inspection of Mysterious Image Set
26845, First batch
221, It is single layer neural network and used for classification
25188, Plotting first six training images
3067, Embarked
23569, Train the model
11615, Exploratory Data Analysis
37086, Boosting
11352, Exploratory data analysis
38177, Create a Model
33584, TTA
2852, Libraries and Data
11996, Ridge regression is a regualrized version of linear regression means it shrinks those features which are unnecessary for predictions
7833, This looks like a good feature to filter out low priced apartments
8984, This leads me to believe that I should have a column for 1946 NEWER style
7519, Partial Dependence Plots
9256, Lasso Regression
26696, Sales Data
68, Test Set
6067, PoolQC irrelevant I can drop it
53, Important features
35887, Impute numeric columns
41026, We modify the Fare column another time to create the Pfare feature, which is just a passenger's Fare divided by his ticket frequency
29941, Correlation Heatmap
14922, The p value of PassengerId is larger than 0
21206, Random initialization
35155, Experiment Dropout percentage
15243, Function
18849, First we want to find the correct learning rate for this dataset problem
969, We start by loading the libraries
22899, Data Preprocessing Part
15240, Age
7512, Parametrization of the XGB model
29063, HOG
12004, Looks like lasso performs worse than ridge, as it is not able to regularize as well
4844, Correct it with the Box-Cox method
35711, separate into lists for each data type
6665, KNN Classification
2747, The other way to handle numeric data is to fill the columns
10901, First run the algorithms with the default parameters to get an idea about their performances on our data and then we would tune the parameters of better performing algorithms to improve their performances
18740, Stopwords
3684, Missing Values
27402, Define the CNN Model
13727, Filling in the missing values in Age
36251, A simple cleaning module for feature extraction
14638, I have grouped the age by Gender Ticket class and Title to get the median age
6612, Concatenate Test and Train data to develop the categorical data
18040, Submission
28154, install the pytorch interface for BERT by Hugging Face
16722, Pclass
16685, We can visualize this data as being concentrated in different regions
28584, LowQualFinSF
23470, We can therefore conclude that the XGBoost Model works best for predicting casual
14716, DECISION TREE
13739, Random Forest Classifier
16934, All this feature preprocessing has a big impact on the model precision
20551, Distribution plot
9777, Decision Tree
19414, As stated before, we will be using PyTorch Lightning alongside the library from
10811, it is better
38727, Again this code is very similar to the previous one for the FCGAN
1897, LogisiticRegression Model
14446, go to top of section engr2
4418, Using cross val score
28620, MasVnrType
42731, Using corr and numpy boolean technique with triu we could obtain the correlation matrix without replicates
6172, Sex
26038, we use the random split function to take a random 10 of the training set to use as a validation set
748, Sumbit Code
13668, Naive Beyes
35158, Conclusion: both a 40% and a 60% dropout give the highest accuracies; I choose a final 40% dropout
1150, Train Test Split
22173, create some new features from the given features
6304, Gradient Boosting Classifier
16965, The Embarked and Fare columns have 2 and 1 missing values respectively
24470, Correlation
13068, Random Forest Classifier
37805, LASSO Regression
4150, Check for missing values in age variable
41220, Just curious: let's check if the unimportant features have any pattern or order
30139, Implementing Custom Model
32640, Text
31250, Family Survival
43317, try with different classification algorithms
31537, Since 50% of the values are zero, we replace them with zero
29750, Sample images
20483, Credit sum AMT CREDIT SUM
35225, How much impact does have
29368, NAIVE BAYES
29709, We would like to know which features have missing values the most
8941, Fixing LotFrontage
14639, Great now let s write a function to fill the missing values of age in the training and test set with these smarter statistics
25009, Trend Features
36302, print our optimal hyperparameters set
10260, We ll now glue the training and test set together for all but the sale price which the test set doesn t have in this case
717, Let's examine the numeric features which ought to be categorical
11343, Age Binning
5279, Creating the Feature Importance Dataframe
38113, Preparing prediction dataset in wide format and then evaluating it
24945, Feature selection
9, Label Encoding
18218, Now we assemble our convolutional network
3793, The 2 points in the bottom right are outside of the crowd and definetly outliers
13326, Embarked: completing and converting to numerical values
12973, Pclass, Parch, and SibSp have an obvious correlation with Age, while Sex doesn't. Therefore it is logical to use the Pclass, Parch, and SibSp variables to fill the missing values in the Age variable, as sketched below.
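For example, a grouped-median fill along those lines (a sketch, with a fallback to the overall median for all-NaN groups):

```python
# Fill missing Age with the median age of passengers sharing the
# same Pclass, Parch and SibSp.
df["Age"] = df.groupby(["Pclass", "Parch", "SibSp"])["Age"].transform(
    lambda s: s.fillna(s.median()))
df["Age"] = df["Age"].fillna(df["Age"].median())  # fallback for empty groups
```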
20936, Submission
39449, bureau data
18504, This submission scores on the leaderboard you can try it out for yourself by heading over to the Output tab at the top of the kernel
27490, Define a neural network model with fully connected layers
24844, V Submission pipeline fixed score
1096, Fill missing values in variables
35239, Two records were found to be null after removal of punctuations
13512, More Exploration
34327, Test if the generator generates same images with two different sizes
20374, Having invoked the t SNE algorithm by simply calling TSNE we fit the digit data to the model and reduce its dimensions with fit transform
38233, Model Evaluation and Validation
19543, Removal of Punctuatuation
3370, From there we ll use our account with the AutoML and GCS libraries to initialize the clients we can use to do the rest of our work
35052, I compiled this model using rmsprop optimization categorical crossentropy for loss measurement and accuracy as metrics measurement
7131, Embarked
32630, XGBoost
42456, ps car 03 cat and ps car 05 cat have a large proportion of records with missing values Remove these variables
26588, TASK IMPORT STORE INFORMATION DATA
22279, Feature Engineering
18562, Looks strange that there are 16 passengers with family size of 7 for example
29401, CHECK NUM OF ROOMS
36490, Keras Run Functions
16338, Decision Tree
5562, I use my own custom simple imputer; it acts like sklearn's SimpleImputer with strategy='most_frequent', but on categorical data. This may not be the best choice
25017, 3D plotting the scan
128, Misclassification Rate is the measure of how often the model is wrong
26716, Plotting boxplot for price changes
3843, Munging Age
23566, Using OverallQual for making more meaningful imputation for missing value cols
4387, create one combined feature total porch area
1974, Model evaluation
9288, Imput Missing or Zero values to the Fare variable span
38151, The heavy lifting optimizing
18512, Sorted Slices of Patient 0acbebb8d463b4b9ca88cf38431aac69 Cancer
19420, unpack this whole process data function
10099, encode the categorical values
37723, Feature Variance Analysis
40613, comes the cool part
1039, We dealt already with small missing values or values that can t be filled with 0 such as Garage year built
3020, Visualize the distributions of the numeric features
37224, There are also frequent mentions of height
8298, Extra Trees Extremely Randomized Trees Ensemble
23906, Service functions
19405, Making Prediction
16442, We fill missing values based on Pclass and SibSp
24715, The bottom rows in each column hold real leaf images whose first PCA coefficient lies at the value of the corresponding percentile of that column; for example, the left-most bottom pictures are leaves whose PC coefficient is approximately at the lowest shown percentile, and the right-most bottom pictures are leaves whose PC coefficient is approximately at the highest
25449, Applying Random Forest
17045, Modeling
4867, Finding features with NA values
12351, Remaining Basement variabes with just a few NAs
37100, Transforming
32103, How to swap two columns in a 2d numpy array
30391, Padding
28319, Examine the POS CASH balance dataset
36867, Like for KNN GridSearchCV for SVM takes very long so I only fit one good set of parameters here
9880, Correlation Between Pclass Survived
573, StackingClassifier
35077, Making predictions using Solution 6
11065, Family Size feature
31805, Submissions
33436, The frequent words are similar in fake and real twitters
31641, DayOfWeek
24152, Plotting
39723, I can think of a few things we could do with the names
31472, Data Augmentation
30582, First we can calculate the value counts of each status for each loan
24789, Loss
5320, Library
33692, Days difference from next row
29799, Measure similarity b w two words
6143, We divide all dates into four bins
6486, This is a list of highly correlated features
10580, Visualizing AUC metrics
29958, Training
2001, Nice We got a 0
32510, Prediction
35328, I have created the training and validation sets Much of this section is inspired from the sentdex classification example with convnet work
28461, collect all columns of type float and missing values in a list
19052, Lets first specify the x or input
2173, Regarding family size our hypothesis is that those who travel alone have a lower survival rate
23381, I add some methods to the class that keras needs to operate the data generation
28157, input_ids: a sequence of integers identifying each input token by its index number in the BERT tokenizer vocabulary
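For illustration, here is how the modern Hugging Face transformers API produces these ids (a sketch; the original notebook may use the older pytorch-pretrained-bert interface, and the sentence is just an example):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer("forest fire near la ronge sask canada",
                padding="max_length", max_length=16, truncation=True)
print(enc["input_ids"])       # token indices in the BERT vocabulary
print(enc["attention_mask"])  # 1 for real tokens, 0 for padding
```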
61, Pytorch Loss Function Cross Entropy CE
4776, Gaussian Naive Bayes
40094, Language Features
28057, Continious variables
35684, XGBoost
36963, Testing new Parameters
26717, Calender
8048, automatic outlier detecting
6882, Missing values
21946, Transforming from spark DF to RDD
17738, most first class passengers had their cabin numbers recorded in the dataset yet only a small fraction of 2nd and 3rd class passengers had theirs recorded
1973, Gradient Boosting Classifier
4433, Feature Selection
30357, Predict by Specify Province
4709, Features importance
4536, Numpy DataFrame Series Pandas
43249, Logistic Regression
20685, a fully connected layer can be connected to the input by calling the layer and passing the input layer
36792, Lemmatization
16479, Joint plot Distribution w r t to AGE FARE
20042, Import necessary libraries
3557, We now build our pipeline of transormation that we use to get prepared data
40839, Look at another one
41848, Average price per day depending on distance to Kremlin
38924, Non CV LGBM Approach
16154, FamilySize
32384, Setting Cross Validation and Hold out Set
23955, Applying Decision tree Regressor
7784, value counts with NaN values
12919, Survived
41157, FEATURE 3 AVERAGE NUMBER OF PAST LOANS PER TYPE PER CUSTOMER
13985, There are two null values in the column Embarked
401, Logistic Regression
18378, The R square value for the test set is higher
2491, Filling NaN Ages
11204, We just average four models here ENet GBoost KRR and lasso we could easily add more models in the mix
19140, Model 2 with Adam optimizer
37763, New approach
3658, Comparing accuracy scores
40049, This way, loading the images will be much faster than doing it on the fly
36534, a long story it is
20555, Sequential model
21199, update the parameters
32706, Importing GLOVE s word
24398, basic model fit
3029, Another difficulty was to find the most optimal hyperparameters
31672, Evaluate Model
32098, How to get the common items between two python numpy arrays
39685, Noise Removal
32045, First let s load the data
20189, Univariate Analysis
34840, Building a Sequential Model
1332, Wrangle data
13700, First we ll drop the Cabin and Cabin Data columns
28187, We re trying to build a classification model but we need a way to know how it s actually performing
12629, Submission
12939, How Big is your family
4825, PoolQC NA means No Pool
6395, Data Type of columns
34666, However in 95 of all cases the monthly sales volume is not greater than 5
40744, How the work is done inside the neural network
13762, Tune Models
42271, created
13713, Replace missing Age values with mean
7569, Handling missing values
34667, Total sales behaviour depending on month year
10898, Since sex and family status are 2 categorical variables with 2 unique classes they are label encoded
10564, Evaluate Feature Significance
11127, explore these outliers
39345, Multiplying the transpose of the matrix generated in step 4 with the covariance matrix from step 2
9849, Feature engineering
2558, Loading the H2O package
1644, Explore the target variable properties
27876, Sales Correlation between stores
6784, Age
3696, Missing Values
40664, we have 0
27029, Since we have 200 features, we get 200 p-values for each sample; we can multiply them together
31833, Steps to create a level-12 custom metric that works with any training data, even if you are using a subset of product ids
23798, After we dealt with the target let s move to the features
29803, Pretrain Word2Vec
3722, train our models
15940, Cabin
17975, Fare
27604, With the threshold obtained the grayscale image can be converted to binary
39294, RAW FEATURES
16478, Age Distribution
16484, Parent and child on board
7935, Gradient Boosting Regressor
20894, Setup of the gradient descent parameters
42069, Using sklearn OneHot encoder for pre processing
34616, Feed Forward Neural Network
6927, Here I plot missing values
4864, Finding Skewness in SalePrice
6884, And now for the test data
8066, Building remodelling years and age of house
23465, Using polynomial on the dataset
2456, Remove quasi constant features
5940, check the missing values in NULL
20618, Random Forest Classifier
31053, IP Address
13484, LOAD LIBRARIES
22690, LSTM models
21468, Below is the code to create the augmented database where you can change some parameters as you like
32772, Concat train test data
33667, Date font
43212, Generate the submission csv
32263, Relationship between numerical values
5108, Feature Creation
41226, We create a standard function so that we can have similar metrics displayed for different algorithms
33199, The image values are transformed into a float32 array with values between 0 and 1 suitable for neural nets processing by dividing with 255
41294, Build a LSTM Model
14129, Observations
11631, Since the steps for each model are similar, I use a function to wrap those processes into one line
18846, Having invoked the t SNE algorithm by simply calling TSNE we fit the digit data to the model and reduce its dimensions with fit transform
7575, Skewness and Kurtosis
36357, Examining the pixel value
2639, Most of the passengers embarked from Cherbourg and Southampton
36296, try on scaled data
1830, Submission
26833, Converting to Datetime to Month day hour etc
505, our data is ready to train
21017, Heatmap
34259, Lag features
32416, Preparing submission file
19526, Checking Lineage of RDD
28543, Predict using the trained model on the testing data
5329, Display heatmap of quantitative variables with a numerical variable as dense Similiar to heatmap
10768, to predict
32497, Compiling the Model
23467, Defining a custom scorer function for the models
7938, The best hyperparams for the XGBoosting model are
14737, Model 2
37030, count them
6291, Neighbours
1228, Visually comparing data to sale prices
31620, Here we set the threshold to a very low value, 250000
30883, use the mean absolute error function to calculate the mean absolute error corresponding to the predictions for the validation set
39828, We are now at the last part: first we make our test data ready, and then we submit our predictions
4998, this is interesting
37571, check the mean y value in each of the binary variable
1199, Embedded methods
9780, Grid Search Score
13294, Neural networks are more complex and more powerful than standard machine learning algorithms; they belong to deep learning models. To build a neural network we use Keras, a high-level API for TensorFlow, a tensor manipulation framework made by Google. Keras allows you to build neural networks by assembling blocks, which are the layers of the network.
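A minimal Keras sketch for a binary classifier (layer sizes are illustrative; `n_features` stands for the width of the prepared feature matrix):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(n_features,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability of the positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```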
20533, Create train and test sets
33100, Support Vector Regressor
37918, Data Preparation of Test Data
8896, Ridge RidgeCV
40933, Test Gens with TTA and Submission
5075, Progress try just all the categoricals instead
19638, Looks like 6 devices with duplicate rows have different values for brand and model
12052, LASSO Regression
34231, That looks much better
17451, SVM
29778, Load Prepare MNIST data
15169, ROC curve
32375, How are the Image Sizes Affecting Targets in Our Data
11, Lasso Regresson
22105, Add an ImageId Column and Save as CSV file
30420, Define cross entropy and accuracy
13114, XGBoost
9319, The dirty fix I adopted before is to simply recode Po to Fa before creating the dummies, so that the mismatch is not there. This can be justified if we look at the counts of each category
32641, Classification
35182, We are using two classifiers Logistic Regression and Random Forest to check if they are able enough to separate train and test data points
41240, SVM
23904, we have the train embeddings This can be used as an input to other machine learning models
42175, To facilitate the entry of data into our neural network we must make a transformation of the tensor from 2 dimensions to a vector of 1 dimension
26531, Model Evaluation
26177, Missing Values
3597, XGBoost
43208, Model creation
21890, visualize how the context vector looks now that we ve applied the attention scores back on it
38134, Feature Engineering
20761, GarageCars column
3895, Histograms
21446, Outlier
5846, Which features influence the target variable the most?
12223, For the fans of the numeric metrics
27023, Bonus Best Single Model Function
5328, Display heatmap of multiple time series
9872, Firstly in order to find missing values we need to combine both train and test data
20239, Feature Engineering
26247, Model Evaluation
34301, We can plot a visualization of all the activations within the network
13039, This section of code is for missing value treatment for age
9304, Tree experiment
32106, How to reverse the columns of a 2D array
9067, perform 3 seperate linear regression models for each neighborhood tier
36655, Median filtering, which is very similar to averaging, changes the central element of the kernel area to the median of the values in the kernel window
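A sketch with OpenCV (the file path is illustrative; the kernel size must be an odd integer):

```python
import cv2

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
denoised = cv2.medianBlur(img, 5)  # each pixel becomes the median of its 5x5 neighbourhood
```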
10589, XGBoost
20063, Adding column city names where shops are located
6903, To analyse the data based on the family size first create new column
18896, We compare the tuned parameters against the non-tuned ones
34397, Null Model Average
43396, One example image
31898, Fine grain hypterparameter tunning
21250, Training Data is split into training and validation data in the ratio of 9 1
7057, Condition of Sale
33637, look at a sample of records
34762, Visualizing Performance of the model
5204, Based on the relations between features I create new features to increase the accuracy of my tree based model
7278, Fare Feature
7042, Physical locations within Ames city limits
26463, Lets visualise one of the training image
26466, Data Augmentation
11927, The first row is about the not survived predictions 489 passengers were correctly classified as not survived and 60 were wrongly classified as not survived
16590, Scaling up data
27750, Character analysis
40408, Bedrooms
38986, Dropping columns with values
8709, RIDGE and tuning with GridSearchCV
5424, MasVnr
39204, Cross Validation
20141, Generating Output
3933, 15 missing data PoolQC Fence MiscFeature Alley LotFrontage a lot
5649, Drop unnecessary columns
14393, that we ve filled in the missing values at least somewhat accurately it is time to map each age group to a numerical value
31638, Dates
14370, There are only 3 passengers who paid more than 280 (i.e. 512) as Fare
3732, Check if there are any missing values
37826, Word cloud for Disaster Tweets
23398, loop through the images loading each one as a numpy array applying augmentations to it and feeding it into the model
41773, Show 12 rows of the network
14756, The age distribution for survivors and deceased is actually very similar
12393, Using polynomial on the dataset
36127, Test Train Split
9897, calculate the ratio of survived family
4566, Applying XGBOOST Regressor
11247, Encoding categorical variables
3978, Make predictions on the test set
41521, We can interpret from the graph that people who survived had paid a higher fare
3947, Impute Basement related features
2343, Boosting Family
28304, Making Prediction by training model
5487, Data Modeling
17976, Age
26442, The training score of about 85 already looks very promissing for the titanic dataset
1889, sklearn Models to Test
22441, Correllogram correlation plot
145, Feature Importance
12788, Exporting the predictions and submit them
35489, Training
32219, Delete first three months from matrix
11913, Age
22125, so we have a drop of 0
31699, Outliers
36558, As we have many more cold targets than hot, I'm not surprised that hot targets occupy only a small part of the data per cluster
14182, Loading our dataset
17384, we have the following
14395, We can drop the Ticket feature since it s unlikely to yield any useful information
40048, And let s pick a resize shape of Bojan Tunguz resized images
538, Chance to survive increases with length of name for all Passenger classes
7417, Check if training data and test data have the same categorical variables
5461, How do we determine what are the important features
2681, Univariate roc auc or mse
11468, Random Forest
15002, Basic Visuals
39725, I think we can drop the name columns now as we won t need it
15843, Fare
27038, Gender wise distribution
29749, The classes are not equally distributed in the train set; we also plot a graph for these
39115, LogisticRegression With under sampling or no under sampling
7671, Numerical features
15047, Noble
8505, Numerical Features Bivariate Analysis
2208, XGBOOST
19926, Since outliers can have a dramatic effect on the prediction, I chose to handle them
17443, Cabin
37883, Alpha
32586, An easier method is to read in the csv file, since this will be a dataframe
35149, Compile 3 times and get statistics
22011, For large datasets with many rows one hot encoding can greatly expand the size of the dataset
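A small illustration of that expansion (synthetic data, not from the competition): a single high-cardinality column becomes one new column per category.

```python
import pandas as pd

df = pd.DataFrame({"city": ["NY", "LA", "SF"] * 1000, "x": range(3000)})
encoded = pd.get_dummies(df, columns=["city"])
print(df.shape, "->", encoded.shape)  # (3000, 2) -> (3000, 4)
```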
25955, Loading Product csv
2478, Grid Search CV
6184, Bivariate Data Analysis
19885, Time Related Features
20614, Modelling
42396, How do stores differ by State
7290, KNN Tuning
12054, Ridge Regression
29107, Visualization
32016, Now we fill NaN rows with the help of the Sex column: the most frequent categories are Miss and Mr, so we assign Miss to NaN if female and Mr to NaN if male
42558, Loading the data
13673, The method tells us the data types of the features
43012, Data split
34489, Time to train the model
1069, Blending and submitting the FINAL AVERAGE OF 3 REGRESSORS
18729, The first column is the probability of cat and the second column is the probability of dog
15338, train test split Split arrays or matrices into random train and test subsets
34383, There s a lot of noise in this graph
6227, Target feature SalePrice
4088, Getting hard core
36003, I scale the parameters to stop genetic programming from having to find a good scaling as well as a good prediction
6477, Relationship with numerical variables
13185, Clearly, passengers with fewer than 1 Parch are more likely to survive; later, let's create groups with this information
30668, Text vectorization
14766, Assessing the model s performance based on Cross Validation ROC AUC
28739, X train data
4222, Data Modeling
25685, System When there is NO SOCIAL DISTANCING
10395, LightGBM
19876, Standardisation
1844, Significance of Categorical Features for SalePrice
17039, Before constructing correlation matrix we need to convert categorical features to number
42607, Losses and Optimization
38468, Plot heatmaps with month on x axis and features on y axis for train and test
2496, Observations
28818, Day of the week
22218, Train
13467, Exploration of Passenger Class
28193, Stopwords
16229, As some of the columns contain values in string format, we first index them using StringIndexer; a StringIndexer assigns a unique integer to each unique string value
12596, Load modules
8031, There are outliers for this variable, hence the median is preferred over the mean
12422, There is one missing value in GarageArea, which we handle by filling with the mean value
18258, Model Evaluation
38295, Build and fit the final model with the best parameters
18223, Training
21244, The Generator Model
38736, Now that we have extracted titles from names, we can group the data by title and impute missing age values using the median age of each category
7597, Boxplots for categorical features
24292, take a look to the first row
7150, TRANSFORMING DATA
34328, Add data augmentation
15084, Random Forest
17860, check the features importance for the 5 out of 6 models
3507, Parameters of the best model
40644, Plotting Word clouds
16403, Since the median can only be computed on numerical values, we need to drop Name to keep only numerical data
24148, Product and Hour of Day Distribution
43214, Since there are now many title columns which do not exist in both the train and test datasets, add those
32325, variable types
13895, Seems that Random Forest and AdaBoost perform better
14443, Update Age with ordinal values from the AgeBand table
16118, k Nearest Neighbors
32855, How sales behaves along the year
2031, Looks like the distribution of ages is slightly skewed right
24888, Time to get prepare data for our model
6836, One Hot Encoding Nominal Features
9506, Import or Load all of your data from the data folder
3886, Continuous and Discrete Attributes
23639, Observations
37867, Distribution of numerical features
30771, Once instantiated, the ensemble behaves like any other scikit-learn estimator
37819, Define function to remove patterns
25012, Time Features
466, Merging numeric and dummy features
8294, Bagging oob score False
26501, Pooling is plain max pooling over 2x2 blocks
3517, examine categorical features in the train dataset
9354, New title featue
23248, We're deleting the Cabin column because there are too many missing values
22893, Let's visualize the class imbalance
29753, Prepare the model
12007, checking R squared
30178, Sigmoid function
15285, Creating New Categories
35547, Models include
5284, First of all we are going to train a baseline LR model
24984, Dropping categorical variables with too many unique classes
31677, We are going to artifically add noise to our data
14719, TEST DATA PRE PROCESSING
39966, Statistics
11711, Convert to Categorical
18505, Test set
27923, Optimal Tree Structure
24295, let us plot the first 10 images
11981, Ordinal categories features Mapping from 0 to N
6298, Model feature reduction
35425, Defining the CNN model
9387, BsmtFinSF2 Type 2 finished square feet
42394, Do total sales correlate with the number of items in a department
26193, Converting DICOM in NIFTI
22013, Use the next code cell to one hot encode the data in X train and X valid
9952, Scatter Plots
34381, Relative Time Scale
17007, Amount of missing data in both columns is insignificant
38971, so we pad for length of 64
29683, Reshape images
7474, In this section I plot some of the parameters that have an influence on the outcome
24925, note that one does not have to use only words
14337, Applying the ML Model
5987, Data Test
13431, Encoding and Categorizing Data
15666, Hyper Tuning the Models
3845, option2 replace with median age of gender
19521, Fetch Ordered Elements
3755, The dataset is ready; we can now explore it
20760, BsmtHalfBath column
16230, The data required to predict survival is converted into vector form using VectorAssembler. VectorAssembler is a transformer that combines a given list of columns into a single vector column; it is useful for combining raw features and features generated by different feature transformers into a single feature vector in order to train ML models. Normalizer is a transformer which transforms a dataset of Vector rows, normalizing each Vector to have unit norm; it takes a parameter p which specifies the p-norm used for normalization. This normalization can help standardize the input data and improve the behavior of learning algorithms, so we then normalize our data using Normalizer, as sketched below.
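A PySpark sketch of that pipeline step (column names are illustrative, assuming string columns were already indexed):

```python
from pyspark.ml.feature import VectorAssembler, Normalizer

assembler = VectorAssembler(
    inputCols=["Pclass", "SexIndex", "Age", "Fare", "EmbarkedIndex"],
    outputCol="features")
df_vec = assembler.transform(df)

# Normalize each row's feature vector to unit L2 norm (p=2).
normalizer = Normalizer(inputCol="features", outputCol="norm_features", p=2.0)
df_norm = normalizer.transform(df_vec)
```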
18058, Word Cloud for Negative tweets
23054, Now let's do the fitting process again
26496, In this case there are ten different digits labels classes
25756, The original color is still visible as the color of the rim
1015, Nice! Now let's apply this key technique ourselves: we use the basic version of k-Fold with 5 folds from our friend scikit-learn, for example:
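A minimal sketch (assuming an estimator `model` and arrays `X`, `y` already prepared):

```python
from sklearn.model_selection import KFold, cross_val_score

cv = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(scores.mean(), scores.std())
```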
7593, Another option to highlight the correlation of SalePrice to all SF and
13902, Data Encoding
5829, Checking for null values in all feature of this df
40380, Here I created 5 folds and appended the 10 dataframes into a list
24415, Area of Home and Number of Rooms
14206, we pick the one with best accuracy
471, Numerical Features
32513, Model 3 Using less layers of VGG19
6993, Year Sold
5973, Time to train every model with further hyperparameter tunning extension
40004, Image names
19622, PREDICTING RESULTS
43213, Organize and modify features
34954, The confidence interval of the model coefficient can be extracted as follows
12899, get the dummy variables for our non ordinal categorical features
12297, Title
14431, go to top of section engr
38037, The feature importance we get from random forest is very similar to the list we got from decision trees
17425, So how many people do we have here?
11270, Of these SibSp Parch and Fare look to be standard numeric columns with no missing values
30768, There you have it a comparison of tuned models in one grid search
12327, Can't I delete it if it is not in Test? The deletion would be less risky, but
38220, It also contains 2323 duplicates that we should remove
9771, These are the remaining categorical features as dummy variables in our dataset
7953, Tuning on weight initialization
3577, Through such a process I decided on outliers below
10250, Go to Contents Menu
39737, Both plots tell a similar story: smaller families tended to survive more than larger families
32061, convert these images into a numpy array format so that we can perform mathematical operations and also plot the images and flatten the images
3145, Bvariate or Multivariate Data Vizualization
18382, Set the variable q to any disaster related keyword or anything else of your interest
7895, the AgexClass can be dropped or not as I experiment to increase the general performance of the model in the next steps
31605, Cleaning the data set
26081, Adding new layer and Dropout to avoid overfitting
18917, Fare Feature
33602, Load and Visualise the data
36337, Train the model
26646, Similar analysis to dfDocTopic we have
11759, The LGB Regressor model was also overfitting a lot, so we change some of the parameters to help with that
14349, pandas profiling
40929, Image Modeling
16359, Exploring the Pclass vs Survival
11024, There is only one null value in the Fare attribute, and we are going to fill it with the median value
14669, Logistic Regression
33052, Predict test data
24955, Fare in Test set
3559, I was reading about how to select good features, so I decided to try it now that I can't add more features myself; let's do it
28007, Processing Data with The Model Fitting
12839, Gaussian Naive Bayes
37223, Much better Lets deal with the contractions now
43277, Instantiate a new RandomForest
35195, Both of these feature transformation trials now produce a symmetrical bell-shaped curve
30092, About Coarse dropout technique
1042, One hot encoding
15359, SHAP Dependence Contribution Plots
10642, First deal with null values in Age
42024, Extracting the first word
39005, Implement cost function with SIGMOID activation for the last layer
18644, There are few peaks and troughs in the plot but there are no visible gaps or anything as such which is alarming
18664, Prepare data for processing by RNN and Ridge
2375, Shuffle when using cross val score
10688, Ridge regression is an L2 penalized model where we simply add the squared sum of the weights to our least squares cost function
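In symbols (a standard formulation of that objective, not copied from the notebook):

```latex
J(w) = \sum_{i=1}^{n} \left( y_i - x_i^{\top} w \right)^2 + \alpha \sum_{j=1}^{p} w_j^2
```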
35857, For the 1st train I ll boost the training with a 0
26768, My models
1710, Imputing using ffill
30314, Convert a few float type samples in text and selected text columns into strings
20250, Final look
40024, As reading the dicom image file is really slow let s use the jpeg files
41128, Multiplicative
10195, Calculating the best hyperparameters
31309, We have a total of 1115 stores in all
14714, KNN Parameter Tuning
656, Splitting the train sample into two sub-samples: training and testing
36992, Fruits like bananas and strawberries
6376, Find out the mode values
6748, Detect outliers using IQR
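A minimal sketch of the IQR rule, flagging points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] on an illustrative series:

```python
import pandas as pd

s = pd.Series([1, 2, 2, 3, 3, 4, 50])
q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
outliers = s[(s < q1 - 1.5 * iqr) | (s > q3 + 1.5 * iqr)]
print(outliers)  # the value 50 is flagged
```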
6370, We stack all the previous models, including the VotingRegressor, with XGBoost as the meta-regressor
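A minimal sketch of such a stack (not the notebook's exact models; the base estimators here are illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, VotingRegressor
from sklearn.linear_model import Lasso, Ridge
from xgboost import XGBRegressor

X, y = make_regression(n_samples=200, n_features=10, random_state=0)

# Base models, including a VotingRegressor, under an XGBoost meta-regressor.
voting = VotingRegressor([("ridge", Ridge()), ("lasso", Lasso())])
stack = StackingRegressor(
    estimators=[("ridge", Ridge()), ("lasso", Lasso()), ("voting", voting)],
    final_estimator=XGBRegressor(n_estimators=100),
)
stack.fit(X, y)
print(stack.predict(X[:3]))
```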
7485, As Fare is a continuous variable, we benefit from grouping it into bins
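A minimal sketch of quantile binning with pd.qcut (the fare values are illustrative):

```python
import pandas as pd

fares = pd.Series([7.25, 8.05, 13.0, 26.55, 71.28, 512.33])
fare_band = pd.qcut(fares, 4, labels=[0, 1, 2, 3])  # 4 equal-frequency bins
print(fare_band)
```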
12627, Model training and prediction
30349, The data looks a bit dirty; we might get an overly optimistic prediction because, for instance, the last number is not the final one
24458, Thresholding is a very popular segmentation technique used for separating an object considered as a foreground from its background
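A minimal sketch with OpenCV, on an illustrative grayscale image:

```python
import cv2
import numpy as np

img = (np.random.rand(64, 64) * 255).astype(np.uint8)

# Fixed threshold: pixels above 127 become foreground (255).
_, mask = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Otsu's method picks the threshold automatically from the histogram.
_, otsu = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```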
39209, Analyze the tweets labeled as not real
15976, We can see that a more expensive ticket fare increases the survival rate of the passengers who bought it
34921, Some additional features
34524, Interesting Values
27941, Show predictions and feature importance
35808, Detect duplicate shops
12024, Check how correlated the test-set predictions of all the models applied in this kernel are
22910, F1 SCORE
32381, Getting Landscape Attributes from Images
1379, Interesting
28413, Label Encoding
10542, Merging for XGB Regressor
28961, We compare all the year features with SalePrice
7105, Logistic Regression
1670, introduce the new look of Sex and Embarked
10877, Making a checkpoint
26215, If you are interested in visualizing the bounding boxes of just one particular image, you can first extract that image's rows from the dataframe, convert its bounding boxes into a 2D array, and apply the draw_rect function to plot them
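A minimal sketch of a draw_rect-style helper (the function below is a stand-in for the notebook's own helper, and the image and [x, y, w, h] boxes are illustrative):

```python
import matplotlib.patches as patches
import matplotlib.pyplot as plt
import numpy as np

def draw_rect(ax, boxes):
    # One [x, y, w, h] box per row of the 2-D array.
    for x, y, w, h in boxes:
        ax.add_patch(patches.Rectangle((x, y), w, h,
                                       fill=False, edgecolor="red", lw=2))

img = np.zeros((256, 256))
boxes = np.array([[40, 50, 80, 60]])
fig, ax = plt.subplots()
ax.imshow(img, cmap="gray")
draw_rect(ax, boxes)
plt.show()
```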
7033, Pool quality
35790, LightGBM
7609, Pipelines with default model parameters
33699, Monthly Count
32062, Now create a dataframe containing the pixel values of every individual pixel in each image, along with their corresponding labels
17735, Removing variables with too little coverage
7741, create model
38773, Xception
2427, Ridge Model
24583, Initialize and Reshape the Networks
18477, Promo2SinceWeek Promo2SinceYear and PromoInterval
24370, Floor
40196, Data Augmentation
25633, MODELING
37472, Tokens
41971, When using Google Colab
5894, The train data of these columns is used for label encoding
13924, Merging train and test facilitates the exploratory analysis and the feature engineering
30474, ColumnTransformer: use cases of passthrough and drop
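A minimal sketch of both options on an illustrative dataframe: 'passthrough' keeps the listed columns untouched, while remainder='drop' discards anything not listed:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({"num": [1.0, 2.0], "cat": ["a", "b"], "keep": [5, 6]})
ct = ColumnTransformer(
    [("scale", StandardScaler(), ["num"]),
     ("onehot", OneHotEncoder(), ["cat"]),
     ("keep", "passthrough", ["keep"])],  # kept as-is, no transformation
    remainder="drop",                     # everything else is discarded
)
print(ct.fit_transform(df))
```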
20265, Convolutional Neural Network CNN
4519, Support Vector Regressor
2117, Feature selection
41951, The dataset contains 10,000 hand-classified tweets
29162, MasVnrType: fill with None
14200, We got a much cleaner dataset
17732, XGBoost Parameter Tuning RandomizedSearchCV
9424, Histogram
40271, Basement Quality vs Sale Price
12217, Other evaluation approaches
12750, Some titles only have a handful of occurrences, so we can replace them with Other
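A minimal sketch of the replacement on an illustrative title series:

```python
import pandas as pd

titles = pd.Series(["Mr", "Mrs", "Miss", "Dr", "Rev", "Countess", "Mr"])
counts = titles.value_counts()
rare = counts[counts < 2].index          # titles with only a handful of rows
titles = titles.replace(list(rare), "Other")
print(titles.value_counts())
```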
5377, Cleaning Data
25001, To prevent overfitting
42389, Creating Tidy Dataframes Capable of being fed into Models
22222, Visualizations from the dataset
26876, This is the simplest approach
24671, DATA CLEANING
17991, The title of the passengers can be extracted from their names
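A minimal sketch of the extraction, assuming the usual "Surname, Title. Given names" format of the Name column:

```python
import pandas as pd

names = pd.Series(["Braund, Mr. Owen Harris",
                   "Cumings, Mrs. John Bradley"])

# The title is the word between the comma and the following period.
titles = names.str.extract(r",\s*([^\.]+)\.", expand=False).str.strip()
print(titles.tolist())  # ['Mr', 'Mrs']
```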
27405, Make predictions on the test set
32974, Ticket is a string with a number at the end; two consecutive ticket numbers mean they were bought from the same place, or the passengers shared the same deck on the ship
34700, Linear extrapolation
18911, Name Feature
4619, We notice that this is a small dataset
6101, Submit
13773, Ticket and Cabin have different levels between the train and test data
8561, Renaming Columns
15704, Age vs Survived
38179, Plot a Model
12071, Predictors Target Split
33487, Select features
19740, Model Formation
29620, Impute missing values with the most frequent value; for example, the Embarked attribute consists of 3 different ports: C (Cherbourg), Q (Queenstown), and S (Southampton)
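A minimal sketch of mode imputation on an illustrative dataframe:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Embarked": ["S", "C", np.nan, "S", "Q"]})

# mode() can return several values; [0] takes the most frequent one.
df["Embarked"] = df["Embarked"].fillna(df["Embarked"].mode()[0])
print(df["Embarked"].tolist())
```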
21502, Plot histograms for image sizes for the analysis
9984, outliers
9258, The Undisputed XGBoost with RSCV
25830, Loading Data
9042, This narrows down the features significantly
16401, Sex: Male/Female as 1/0
26526, Has to do with Trump
23939, TF IDF
22457, Joyplot
42037, Groupby mean cmap
20757, SaleType column
9055, This relationship isn't super clear
1608, How is this feature different from Family Size? Many passengers travelled in groups made up of friends, nannies, maids, and so on; they weren't counted as family, but they used the same ticket
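A minimal sketch of deriving a group size from shared tickets (the dataframe is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"Ticket": ["A1", "A1", "B2", "C3", "C3", "C3"]})

# Passengers sharing a ticket form one group, family or not.
df["GroupSize"] = df.groupby("Ticket")["Ticket"].transform("count")
print(df)
```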
4941, Outliers
20187, Clean the data
40186, Subtext analysis
5227, FA
15023, The survival ratio of females is much higher than that of males
27008, Callbacks
10540, Labeling only for XGB Regressor
10869, Using PCA to reduce dimension
30596, The dataframes now have the same columns
9404, Standard Error
24525, Age distribution of the customers
28862, Plot Example
37287, These are the weights that were found offline using hill climbing
15163, One Hot Encoding
4992, Learning
24894, Logistic Regression
464, Fill NAs in categorical data
24112, Let's predict on our test data
27211, Before going any further with training let s take a look at sample photos from both classes
28670, Alley
16613, Handling Outliers
5321, Data
31947, Submission
5720, Getting the new train and test sets
14835, Prediction and Submission
35175, Experiment: replacing max pooling with strided convolutions
19694, Convolutional Neural Network CNN
3293, we create an object of this class and use it to fit and transform the test data
617, Admittedly these are quite a few grouping levels but 30 vs 20 are numbers that are still large enough to be useful in this context
2503, Correlation Between The Features
35767, maybe do a groupby to make this table more manageable and easier to read
10995, Let's explore the freshly imported dataset
5741, Loading packages
2296, Pandas: take a peek at the first few rows and note what may be categorical and what may be numeric
14800, Let's try hyperparameter tuning for GBM
23638, Paragram Embeddings
926, Optimize Lasso