Statistics, Probability, Probability Distributions, Random Variable, Data Science.

Application of the Exponential Distribution

On average, a car has to be serviced after 120 days if used every day. The time to servicing can be modeled using an exponential distribution. (A) What is the probability that the car will be serviced after 150 days? (B) What fraction of cars will be serviced after 90 days but before 120 days? Here the decay factor is m = 1/120.

Let us solve Part A first. We need to find P(X > 150). The required probability is given by the shaded area in the figure. Image by author. In this case, it is easier to find the area of the unshaded portion: if you recollect the discussion on the CDF, the unshaded area is CDF(X = 150). Once we find that, we can simply subtract it from 1 to get the area of the shaded portion. Since we already have the CDF function, we can rewrite the required probability as P(X > 150) = 1 − CDF(150) = e^(−m·150). In this case x = 150 and the decay factor m = 1/120. Plugging in the values, we get P(X > 150) = e^(−150/120) ≈ 0.2865, or about 28.65%. This feels intuitively right: since most cars have to be serviced after 120 days, relatively few cars will have to be serviced after 150 days.

With this done, we can solve Part B similarly. Part B asks us to find the probability that a car will be serviced between 90 and 120 days, or P(90 < X < 120). Graphically, the area of the shaded portion is the required probability. Image by author. We can break this down as P(90 < X < 120) = P(X < 120) − P(X < 90). Again, we will use the CDF to find these values, so P(90 < X < 120) = CDF(X = 120) − CDF(X = 90). Graphically this can be represented in the following manner. Image by author. We can now calculate the probabilities easily:

CDF(X = 120) = 0.6321
CDF(X = 90) = 0.5276
P(90 < X < 120) = 0.6321 − 0.5276 = 0.1045, or 10.45%
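As a quick check (not part of the original article), the same numbers can be reproduced with scipy, assuming the service time is exponential with a mean of 120 days (so scale = 1/m = 120):

from scipy.stats import expon

mean_days = 120                      # average time to service, so the decay factor m = 1/120
dist = expon(scale=mean_days)        # scipy parameterizes the exponential by scale = 1/m

# Part A: P(X > 150) = 1 - CDF(150)
print(round(dist.sf(150), 4))        # survival function = 1 - CDF, ~0.2865

# Part B: P(90 < X < 120) = CDF(120) - CDF(90)
print(round(dist.cdf(120) - dist.cdf(90), 4))    # ~0.1045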
Comparison of Poisson Distribution and Exponential Distribution

The decay factor m in the Exponential Distribution is the inverse of the average occurrence time λ of the Poisson Distribution. However, there are some differences. In the Poisson Distribution, the random variable x (the number of occurrences in the given time period) is discrete. In the Exponential Distribution, the random variable x (the time to the next success) is continuous. To relate this to the WhatsApp group example we used earlier: the number of messages we receive in a particular time period is a discrete variable and can take non-negative integer values only (0, 1, 2, 3, …), while the time to the next message is a continuous variable and can take fractional values as well (2.3 minutes, 4.8 hours, etc.). The difference is illustrated in the figure below. Image by author.
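To make the contrast concrete, here is a small simulation sketch (not from the original article). It assumes messages arrive with an average gap of 10 minutes: the gaps themselves are continuous exponential values, while the number of messages per hour is a discrete count.

import numpy as np

rng = np.random.default_rng(0)
mean_gap_min = 10.0                                # assumed average gap between messages

# Continuous: exponential inter-arrival times (2.3 min, 17.8 min, ...)
gaps = rng.exponential(scale=mean_gap_min, size=100_000)
arrival_times = np.cumsum(gaps)

# Discrete: number of messages in each 60-minute window (0, 1, 2, ...)
counts_per_hour = np.histogram(arrival_times, bins=np.arange(0, arrival_times[-1], 60))[0]

print(gaps[:3].round(2))          # continuous gaps
print(counts_per_hour[:10])       # integer counts, Poisson-distributed with mean ~6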
Normal Distribution

Irrespective of your academic or professional background, you will have heard of the Normal Distribution. It is one of the most used (and abused) probability distributions, and understanding it is critical to the understanding of inferential statistics. The normal distribution has a bell-shaped curve described by the following PDF:

f(x) = (1 / (σ√(2π))) · e^(−(x − μ)² / (2σ²))

Here μ is the mean of the distribution and σ the standard deviation. The graph of the PDF of the Normal Distribution is symmetrical around the mean. Image by author. Note: you do not need to memorize the formula, just how to use it. Since the area under the curve must be equal to one, a change in the standard deviation results in a fatter or taller curve, depending on σ. Image by author. We can therefore have infinitely many Normal Distributions.

Standard Normal Distribution

A Standard Normal Distribution has a mean of 0 and a standard deviation of 1. We can convert any normal distribution with mean μ and standard deviation σ to the standard normal distribution by performing a simple transformation:

z = (x − μ) / σ

The z-score tells us how many standard deviations x is away from the mean. This ensures that we need just one table like this to take care of all our probability calculations. That mattered most before the advent of spreadsheets and statistical programs, but it is still very helpful.

How to use z-scores to find probabilities?

Most software will provide PDF and CDF values very easily. For the sake of better understanding, let us use the z-score table as provided here. As the heading suggests, the table provides us with the area to the left of the z-score. Image by author. In other words, the table gives us CDF values at a particular z-score. Therefore, if we want to find P(z < −1.63), we can read it directly off the table as described in the figure below: first, we read the first two significant digits (−1.6) vertically and then the last digit (3) horizontally. Therefore CDF(z = −1.63) = P(z < −1.63) = 0.05155, or 5.155%. If we want to find P(−1.8 < z < 1.2), we can do it in the same manner as we did for the Exponential distribution earlier. Reading the values off the table, P(−1.8 < z < 1.2) = 0.88493 − 0.03593 = 0.849, or 84.9%.
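In code, these table lookups are one-liners. A small sketch with scipy (not part of the original article):

from scipy.stats import norm

# P(z < -1.63): the area to the left of the z-score, i.e. the CDF
print(round(norm.cdf(-1.63), 5))                    # ~0.05155

# P(-1.8 < z < 1.2): the difference of two CDF values
print(round(norm.cdf(1.2) - norm.cdf(-1.8), 3))     # ~0.849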
The Empirical Rule of Normal Distribution

Image by author. For any normal distribution with mean μ and standard deviation σ, the empirical rule states that the area within one σ on either side of the mean is approximately 0.68. In other words, 68% of the time you can expect x to be within μ − σ and μ + σ. We can verify this very easily using the z-score tables. We start off by transforming the x values to z-scores: a value of x = μ + σ gives a z-score of 1, and a value of x = μ − σ translates to a z-score of −1. We can now find the area between these two z-scores by using the CDF tables as earlier. Image by author.

P(−1 < z < 1) = CDF(z < 1) − CDF(z < −1) = 0.84134 − 0.15866 = 0.68269, or 68.27%

Similarly, around 95% of the time x will be within μ − 2σ and μ + 2σ, and around 99.7% of the time x will be within μ − 3σ and μ + 3σ.
Let us now use the Normal Distribution in real life.

Application of Normal Distribution

It is known that the average weight of adult men in Europe is 183 pounds, with a standard deviation of 19 pounds. What fraction of adult men in Europe weigh more than 160 pounds but less than 200 pounds? Given: μ = 183, σ = 19. We need to find P(160 < x < 200). This problem can be solved as earlier by converting the x values to z-scores: z = (160 − 183)/19 ≈ −1.21 and z = (200 − 183)/19 ≈ 0.895. The problem therefore reduces to P(160 < x < 200) = P(−1.21 < z < 0.895) = CDF(0.895) − CDF(−1.21) ≈ 0.8146 − 0.1131 = 0.7015, or about 70%.
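Verifying with scipy (a sketch, not part of the original article), using loc and scale for μ and σ:

from scipy.stats import norm

mu, sigma = 183, 19
p = norm.cdf(200, loc=mu, scale=sigma) - norm.cdf(160, loc=mu, scale=sigma)
print(round(p, 4))      # ~0.70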
Conclusion

In this article, we looked at the basic terms and techniques involved in working with random variables. We also looked at the different probability distributions for discrete and continuous random variables and finished off by working with the normal distribution. We also recommend this comprehensive Statistics Cheat Sheet for important
Physics, Business, Thermodynamics, Life.

The Universe is a hyperdimensional standing-wave in n-dimensional Brownian motion. There … I said it. No, I have no idea what the above image has got to do with it either … but you try finding a suitable one! I quite liked this one … … but, actually, that's not the idea at all — for a start, the cube is wrong … it should be a sphere … and, secondly, its standing waves are planar, not 3D. This one has something of what I mean about it too … It's an acoustic hologram … an example of shaping sound waves in 3D … from an entirely unrelated, but nevertheless interesting, article here. But it's still not right either — that 'paper boat' couldn't be on top of the wave but would be a pattern inside the wave. So … in the end, rather than give everyone the wrong idea, I plumped for the image at the top, on the basis that it was the least misogynistic of the less misleading ¹ ones — many of which were of an altogether unfortunate manga/anime 'babe' variety, for some reason that eludes me.
Anyway … the reason I'm posting this is because it was easier than the thing I started writing about: how the ultimate goal of any business is total monopoly — 100% of all government contracts + 100% of citizens' disposable income. Obviously, no business achieves total monopoly, but any business striving for less than that is on a downward trajectory as other businesses outperform it on the way to the total monopoly they are striving for — businesses, like any other life form, have to compete for domination or go extinct … the laws of Thermodynamics dictate it be thus. I have long been of the opinion that absolutely everything … not merely material/energetic particulate entities, but abstract phenomena such as War, Love, Happiness, Life etc. … obeys the laws of Thermodynamics. But that's one hell of a topic to take on and start writing about. So I opted to procrastinate instead and started thinking about the nature of the Universe and how to model it — and, before I knew it, I found myself looking at pictures like the one above and thinking "WtF … what has that got to do with hyperdimensional standing-waves in Brownian motion?" So … there you go, now you know. Fundamentally, Jeff Goldblum is to blame for this particular piece … he's the one who drew my attention to the fact that Life itself … the very phenomenon of Life, not merely any specific life form … obeys the laws of Thermodynamics — address any complaints to him.
—

¹ The other two are misleading in the same way a little knowledge is a dangerous thing — they make you think you see what I mean when, actually, they're leading you astray ².

² Because … "Each separate charged particle contributes to the total electric field. The net force at any point in a complex electromagnetic field can be calculated using vectors, if the charges are assumed stationary. If charged particles are moving (and they always are), however, they "create" — are accompanied by — magnetic fields, too, and this changes the magnetic configuration. Changes in a magnetic field in turn create electric fields and thereby affect currents themselves, so fields that start with moving particles represent very complex interactions, feedback loops and messy mathematics." Exactly. In fact, simply replace the words 'charges', 'electric', 'electromagnetic' and 'magnetic' with the word 'energetic' / 'energy' and what you have is the definitive explanation of complex systems: Each separate particle contributes to the total energy field. The net force at any point in a complex energy field can be calculated using vectors, if the particles are assumed stationary. If particles are moving (and they always are), however, they "create" — are accompanied by — energy fields, too, and this changes the energetic configuration. Changes in an energy field in turn create energy fields and thereby affect currents themselves, so fields that start with moving particles represent very complex interactions, feedback loops and messy mathematics. Scale this up, such that the component elements ('particles') are complex combinations of elements, and it does not break down — subroutines in computational algorithms are examples of
Media, Attribution, TV Campaigns, Optimize.

In this post we'll focus on approaches that use mathematical models to specifically measure the effect (or Lift) of TV ad placements on Digital Analytics metrics, such as Sessions or Conversions. It is a conceptual approach that will serve as the basis for the next post, which will be more practical, with code examples.

The Importance of Offline Media in the Digital Age

At a time when online presence is dominant, offline media, including traditional strategies such as TV campaigns, continues to play an important role in the marketing mix (exemplified in the figure with some touchpoints). As each of these two formats has its strengths and weaknesses, efficient combination and integrated management tends to maximize the impact of campaigns. Source: Measurement and attribution plans (IAB Brazil). Cross-channel strategies can make the most of the particularities of each type of media. One example is the strategic use of TV campaigns to drive traffic to online platforms: the effect of Off on On.

Challenges of Offline Attribution in Contrast with Digital

When optimizing media, one step that can make a big difference is attributing the impact of each channel.
Generally, in digital campaigns the touchpoints are addressable, which provides data to measure which contacts the same user had, in what sequence, and what weight each of them had in the conversion. In another article we shared approaches such as MTA for calculating the attribution weights of digital channels: Multichannel Attribution — Optimizing online media investment with Data Science. The ideal scenario would also be to understand where offline enters the customer journey alongside the digital channels. However, most offline touchpoints are still not addressable. With less direct traceability, offline attribution needs to be approached differently, in parallel with the way online attribution is done.

Attribution Strategies in TV Campaigns

In a previous post, Offline Attribution: How to evaluate the impact of my Offline actions on my Online environments — Part 1 of 2, we presented the concept of Lift (or Uplift) as a metric for evaluating the impact of an offline insertion, especially in the online world. Lift can be defined as the increase in a Digital Analytics metric, such as Sessions or Conversions, attributed to the insertion of an ad on TV.
In that first post we covered simpler techniques for calculating Lift, and in a second post, Offline Attribution: How to evaluate the impact of my Offline actions on my Online environments — Part 2 of 2, we started to draw up guidelines for using more advanced methodologies based on mathematical models. The starting point for correctly implementing mathematical models in the calculation of Lift is an understanding of causal effect and correlation. In mathematics there is a well-known phrase that fits well here: "Correlation does not imply causation". Correlation describes the degree of relationship between two variables, and it can be positive, negative, weak or strong. However, even if there is a dependency between variables, it does not necessarily imply a causal relationship, i.e. that one variable causes the other. A causal effect is when something happens because of another fact: if B happened because of A, then the result of B is strong or weak depending on how well A worked. In this sense, measuring the real impact of TV ad placements is more complex than just fitting linear models based on correlations between specific variables, such as the proportional variation in sales after placements. Other factors not yet taken into account may be exerting as much influence as the ads in question, which would result in a distorted model for predicting the Lift in Sales, for example. Factors not considered in correlations between Ads and Sales alone can generate distorted models.
One way of assessing the causal effect is with an A/B test, comparing the results of a control group with those of a test group. With the exception of the action of interest (the insertion of ads on TV), present only in the test group (Sales A), the two groups are exposed to the same factors, which allows the effect of the insertion to be isolated and calculated. A/B test used to measure the causal effect of TV Ads on Sales. Since it may not be so simple, or may even be impractical, to run A/B tests, because the shared nature of television viewing makes it difficult to isolate similar groups, there is an alternative approach in which the control group is simulated. Instead of running experiments with two different groups, causal inference can be made by comparing the observed historical series that had the insertions with a simulated control historical series without TV ads (the Baseline). Causal effect model for calculating Lift: Target series (Observed) and simulated control series (Baseline). Thus, the focus of the methodology with the simulated control is to establish the time-series model that delineates the baseline with the best prediction of what would have happened without the intervention, which is called the counterfactual. The difference between the observed data and the counterfactual predictions is the inferred causal impact of the intervention, i.e. the Lift resulting from the ads on TV.
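As a minimal numeric sketch of that idea (not from the article, with made-up values): the Lift is simply the observed series minus the counterfactual baseline over the post-intervention window.

import numpy as np

# Hypothetical daily sessions for the post-intervention window
observed       = np.array([1250, 1310, 1405, 1390, 1480])   # what actually happened
counterfactual = np.array([1200, 1215, 1230, 1240, 1255])   # baseline predicted by the model

lift_per_day = observed - counterfactual
total_lift = lift_per_day.sum()
relative_lift = total_lift / counterfactual.sum()

print(lift_per_day)                           # daily incremental sessions
print(total_lift, round(relative_lift, 3))    # cumulative Lift and relative effect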
Causal Impact Library for Attribution in TV Campaigns

Among the different time-series models that can be used to establish the control series, we will use as an example the one applied in the Causal Impact library, Bayesian Structural Time Series (BSTS). This library provides great support for measuring the Lift generated by TV campaigns, with a Google case study published in 2017, "TV impact on online searches". The model used in the library was developed by Google and is described in the article "Inferring causal impact using Bayesian structural time-series models (BSTS)", published in 2015. It is a state-space time-series model, and a simplified representation has the following components:

y_t = μ_t + γ_t + β_t·X_t + ε_t

The first two components, the local trend μ_t and the seasonality γ_t, are state variables and are modeled based on the historical data series before the intervention. The component with the time-varying regression coefficients, β_t·X_t, is the combination of the external covariates, where β_t is a vector with the regression coefficients and X_t is a vector with the values of the control variables. And ε_t is the residual factor. The external covariates are historical series of variables that are predictive of the target series and that are not impacted by the intervention, as shown in the graph below. The premise is that these covariates maintain the same behavior post-intervention, and knowing their expected relationship with the target series (for example, Sessions) helps in the post-intervention prediction. Causal effect model to calculate Lift: Target series (Observed), simulated control series (Baseline) and covariate series. In addition to the target series and the control series, the Bayesian model has a third source of information, the priors, which help select the best combination of control series. This makes it possible to add prior assumptions about the nature of the problem. This combination of different predictor variables in a single control series (the counterfactual), together with a Bayesian approach to inferring the temporal evolution, has advantages when compared to common inference models that are based on static regression models alone. The main ones are the flexibility to include and adjust predictors, and the quantification of the cumulative effect of the uncertainties.
First steps in the Causal Impact Library

The Causal Impact library was initially developed in R, was later adapted for Python, and also has a TensorFlow version. The libraries differ in how they implement the models, but the inputs and outputs are similar. In this first post we'll focus only on configuring the input data; in the next we'll talk about the library's parameters and outputs. The input to the Causal Impact function is expected to be a table, as shown in the figure, containing at least one column with a reference to time and another with the metric of interest (Y), such as the number of sessions or conversions in the given time. The date grain can be days, hours, minutes or any other standard, as long as it is maintained for the entire column. Table with the input data for the Causal Impact library. In addition to the target historical series column (the metric of interest), you can add covariate columns (Xn), where each column represents a specific variable. Along with the historical series table, it is necessary to indicate the pre-insertion and post-insertion periods. The library only allows one insertion period to be entered at a time. With these minimum parameters you can already use the library, and in the next post, which will be more practical, we'll show how to evaluate the results.
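As a preview (a minimal sketch, not the article's code, assuming the interface of the Python/TensorFlow ports such as the tfcausalimpact package referenced at the end of this post): the input is a DataFrame indexed by date whose first column is the target metric and whose remaining columns are covariates, plus the pre- and post-insertion windows.

import numpy as np
import pandas as pd
from causalimpact import CausalImpact    # e.g. the tfcausalimpact package

rng = np.random.default_rng(42)
dates = pd.date_range("2023-01-01", periods=120, freq="D")    # 2023-01-01 .. 2023-04-30

# Hypothetical covariate (e.g. search volume in a region not reached by the ads)
x1 = 100 + np.cumsum(rng.normal(0, 1, 120))
# Hypothetical target metric: follows the covariate, plus a lift after the TV insertion
sessions = 1.8 * x1 + rng.normal(0, 2, 120)
sessions[90:] += 15                                           # simulated effect of the ads

data = pd.DataFrame({"sessions": sessions, "x1": x1}, index=dates)

pre_period = ["2023-01-01", "2023-03-31"]     # before the TV insertion
post_period = ["2023-04-01", "2023-04-30"]    # after the TV insertion

ci = CausalImpact(data, pre_period, post_period)
print(ci.summary())    # estimated absolute and relative Lift, with credible intervals
ci.plot()              # observed series vs counterfactual baseline and the inferred effect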
Conclusion

In this first post on mathematical models for offline attribution, we've highlighted the importance of measuring the impact of offline campaigns and presented some approaches for measuring it in parallel with online attribution. It is worth noting that it can be interesting to evaluate the journey as a whole; if that is not possible with user-level attribution, the weights between online and offline channels can be viewed together in an MMM analysis, as we have already covered in some articles: Optimizing Marketing Investment Decisions with MMM; Demystifying Marketing Mix Modelling; What is the best MMM tool for your company? An evaluation of LightweightMMM, Orbit and Robyn.

Recapping the main points presented:
- Different methods for calculating the Lift generated by TV ads, such as A/B testing and simulated control series;
- Details of the causal effect approach, emphasizing that Lift cannot be measured solely from the correlation between specific variables, as correlation does not imply causation;
- The Causal Impact library's time-series model, Bayesian Structural Time Series (BSTS), allows flexibility with the predictors and quantifies the accumulated uncertainties;
- The minimum requirements for using the Causal Impact library are the target series and the dates of the TV ad insertions.

References
- Text "Implementing Causal Impact on Top of TensorFlow Probability", written by Will Fuks in 2020;
- Article "TV impact on online searches", by Google, published in 2017;
- Article "Inferring causal impact using Bayesian structural time-series models (BSTS)", by Google, published in 2015.

Profile of the Author: Amanda Albuquerque | Bachelor in Environmental Engineering and studying Systems Analysis and Development. I
Imagine sitting down with a good friend, sharing stories over a cup of coffee. That's the essence of design thinking intertwined with storytelling. It's a conversation, a shared experience, as natural and essential as breathing. In my journey through the world of design thinking, I've realized that

real, lived experiences of the people we're designing for. It's a bit like being a detective, piecing together clues from their stories to solve the mystery of what they really need.

Collaboration: Sharing the Pen

Design thinking is a team sport. It's about bringing people together to write a story

sharing information; we're sharing a vision, a dream of what could be.

In Conclusion: Keep the Stories Coming

So, there you have it. Design thinking and storytelling, they go together like a warm fire on a cold night. As we journey through the world of design thinking, let's not forget the power of
Data Science, Churn Analysis, Analytics, Lifetime Value, Machine Learning.

This is the second article in the Survival Analysis series. The first article (here) was an introduction to Survival Analysis using simple non-parametric methods, namely the Kaplan Meier method. Survival Analysis is a class of models and techniques used to analyze and predict the time to an event. It can be useful in any context where we want to analyze the time to an event; some examples are below.

Fig 1.1 Uses of Survival Analysis

Survival models broadly fall into two categories.

Non-Parametric Survival models: These are built directly from the data. They assume no parameters or distribution. In a business context they give good high-level segment averages but cannot perform sensitivity analysis. They also tend to be stepped or discontinuous.

Fig 1.2 Kaplan Meier Survival

Parametric Models: These are generally survival regression models and are built for sensitivity analysis. They can be used to answer questions like: keeping all else constant, how will the time to event be impacted if one feature is changed?

Fig 1.3 Smooth survival curves and sensitivity analysis

Now let's look at how parametric models work. Fully parametric models are also known as AFT (Accelerated Failure Time) models and are represented by the equation below, roughly log(X) = w1·x1 + … + wn·xn + e.

Fig 1.4 AFT Model equation

X represents the time to event, and e represents a base distribution; we need to find a base distribution that fits the data well. x1…xn represent the features and w1…wn represent the weights, or regression coefficients, attached to the features. In other words, there is a base survival curve, and changing a feature value accelerates or decelerates the time to event and changes the shape of the survival curve. A negative coefficient for a feature has the effect of decreasing the time to event as the feature value increases.
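To see what "accelerates or decelerates" means numerically, here is a toy sketch (not from the article) with a hypothetical Weibull baseline and a hypothetical coefficient: a positive weight stretches the time axis by exp(w·x), pushing the event later.

import numpy as np

# Toy illustration: Weibull baseline survival S0(t) = exp(-(t / lam) ** rho)
lam, rho = 50.0, 1.3          # hypothetical baseline scale and shape
w, x = 0.09, 5.0              # hypothetical coefficient and feature value
accel = np.exp(w * x)         # acceleration factor, here ~1.57

t = np.array([10.0, 25.0, 50.0])
baseline_surv = np.exp(-(t / lam) ** rho)
individual_surv = np.exp(-(t / (lam * accel)) ** rho)    # same curve on a stretched time axis

print(round(accel, 2))
print(baseline_surv.round(3))       # survival at t under the baseline
print(individual_surv.round(3))     # higher survival at every t: the event is delayed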
Now let's apply this to a dataset. We will use the prison recidivism dataset to understand what factors affect the time to arrest for a population of previous offenders. We will visualize the results and try to generate some predictions. Fortunately, Python has a robust package called lifelines for all kinds of Survival Analysis, which we will use below.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines import WeibullFitter

# Loading the dataset
prison = pd.read_csv('https://assets.datacamp.com/production/repositories/5850/datasets/4e20aa97a26bbe32106a94b76ae4cabf1a632d59/rossi.csv')
prison.head()
prison.shape

Fig 1.5 First 5 rows of data

Looking at the first 5 rows we can see a series of variables like paroled (paro), priors (prio), age, race, etc.
The event of interest is arrest and the duration column is week: arrest=1 means an arrest took place, and the corresponding duration is in weeks. Now let's try to build a parametric survival model. The first step in this process is to find a baseline distribution that fits the actual survival. This corresponds to the term e in the equation below.

Fig 1.6 Equation with baseline distribution

An easy way to visualize this is to see whether a distribution overlays closely on the Kaplan Meier curve. Within the lifelines package we have many candidate distributions, like the Lognormal, Weibull, Exponential, etc. Here we try some of these distributions using the code below.

from lifelines import LogNormalFitter
from lifelines import ExponentialFitter
from lifelines import LogLogisticFitter
from lifelines import KaplanMeierFitter
from lifelines import WeibullFitter
from lifelines.plotting import qq_plot

# Instantiating the various distribution fitters
wb = WeibullFitter()
ln = LogNormalFitter()
Exp = ExponentialFitter()
logit = LogLogisticFitter()
kmf = KaplanMeierFitter()

# Fitting to the data to get the best possible parameters for each distribution
wb.fit(durations=prison['week'], event_observed=prison['arrest'])
ln.fit(durations=prison['week'], event_observed=prison['arrest'])
Exp.fit(durations=prison['week'], event_observed=prison['arrest'])
logit.fit(durations=prison['week'], event_observed=prison['arrest'])
kmf.fit(durations=prison['week'], event_observed=prison['arrest'])

# Plotting the various distributions over the Kaplan Meier fit
plt.style.use('ggplot')
fig, ax = plt.subplots()
ax = kmf.plot_survival_function()
ax = wb.plot_survival_function()
ax = Exp.plot_survival_function()
ax = logit.plot_survival_function()
ax.set_title('Survival Parametric Models vs Kaplan Meier Actuals')
ax.set_ylabel('Fraction')

Fig 1.7 Overlay distributions over KM curves

It's not very clear here, but the logit and the Weibull distributions seem to overlay the closest on the KM curve.
Another way is to use a qq_plot, which compares the actual quantiles with the quantiles predicted by the distribution. If the empirical quantiles line up with the distribution quantiles, the scatter points will lie along the y=x line, as shown in the plots below.

# Code to generate qq_plots from the already fitted models
models = [wb, ln, Exp, logit]
for model in models:
    qq_plot(model)
    plt.show()

Fig 1.8 qq_plots

Based on the qq_plots it is very clear that the Weibull and logit distributions are very close fits; we will use the Weibull as our base distribution.
Now we can build the survival regression model. This model will model the baseline survival using the Weibull distribution and build a regression model on top of it, which will give us an estimate of how much each factor, such as race or age, causes a deviation from the baseline survival. To accomplish this we import the WeibullAFTFitter() class, which has methods to fit a regression on top of a baseline Weibull curve.

from lifelines import WeibullAFTFitter

aft = WeibullAFTFitter()
# In this case we are using all columns in the data set
aft.fit(prison, duration_col='week', event_col='arrest')
aft.summary

Fig 1.9 Regression results

The summary of the results is as shown in the table above. The two most important columns are exp(coef) and the p column. Since the regression model is fit on the log of the time (X), the exponent of the coefficient is more interpretable. The p column shows the p-value, and the features prio (corresponding to priors) and age have p-values less than 0.05 and are statistically significant.
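A short aside on reading exp(coef) (the numbers below are hypothetical, not the fitted results): in an AFT model the exponentiated coefficient acts as a multiplier on the time to event.

# Hypothetical reading (the actual fitted values will differ): if exp(coef) for 'age' were
# 1.04, each additional year of age would multiply the expected time to arrest by ~1.04
# (delaying it by ~4%); if exp(coef) for 'prio' were 0.91, each additional prior would
# shrink the expected time to arrest by ~9%.
print(aft.summary.loc['lambda_'])    # coefficients attached to the Weibull scale parameter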
Let's use the plot_partial_effects_on_outcome method to understand the effect of each of these factors vs the baseline survival.

# Plotting the effect of priors on time to arrest
fig, ax = plt.subplots(figsize=(6, 4))
ax = aft.plot_partial_effects_on_outcome('prio', [0, 2, 6])
plt.title('Effect of priors on time to arrest')

# Plotting the effect of age on time to arrest
fig, ax = plt.subplots(figsize=(6, 4))
aft.plot_partial_effects_on_outcome('age', [20, 26, 35])
plt.title('Effect of age on time to arrest')

Fig 1.10 Effect of priors and age on time to arrest

# Predicting survival for new customers in the data frame called new
aft.predict_survival_function(new).transpose()

Fig 1.11 New customers scored

We can also use the predict methods and pass new instances, and the fitted model will predict a survival curve for each new instance. In a business context, new signups or subscriptions can be scored, i.e. assigned a survival curve, at the time of sign-up. This kind of model can be used as the first step in calculating Customer Lifetime Value (CLV).
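Beyond the full survival curve, a single expected or median lifetime per customer is often the more convenient first ingredient for CLV. A short sketch (not from the article), reusing the fitted model and the same data frame new from the snippet above:

# Expected and median time-to-event per new customer as a starting point for CLV
expected_weeks = aft.predict_expectation(new)    # mean predicted time to the event
median_weeks = aft.predict_median(new)           # median predicted time to the event
print(expected_weeks.head())
print(median_weeks.head())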
Key shortcoming of this approach and