These validators are simple enough that closures work instead of full-fledged objects. The important part here is to maintain a consistent interface -- if we need to use classes all of a sudden, we need to define a __call__ on them to maintain this interface. We can also change our register callable to accept the repos...
def register_user(user_repository):
    email_checker = is_email_free(user_repository)
    username_checker = is_username_free(user_repository)

    def register_user(username, email, password):
        if not username_checker(username):
            raise OurValidationError('Username in use already', 'user...
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
Of course the tests break now, and that's okay. We made a very sweeping change to the architecture here. We need to go back through and alter the tests one by one, but instead of patching everything out we can do something better: Dependency Injection.
def test_duplicated_email_causes_false():
    fake_user_repository = mock.create_autospec(AbstractUserRepository)
    fake_user_repository.find_by_email.return_value = True
    checker = is_email_free(fake_user_repository)
    assert not checker('fred@fred.com')

def test_duplicated_username_causes_false():
    ...
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
But to test that our validators function correctly in this context, we need to fake out find_by_email and find_by_username independently. This is a symptom of our code not being Open-Closed. The Open-Closed Problem Revisiting the other major issue with how the code is laid out right now: it's not Open-Closed. If ...
def register_user(user_repository, validator):
    def registrar(username, email, password):
        user = User(username, email, password)
        validator(user)
        user_repository.persist(user)
    return registrar
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
Of course, our tests break again, so let's revisit the currently breaking one first:
def test_register_user_happy_path():
    fake_user_repository = mock.create_autospec(AbstractUserRepository)
    registrar = register_user(fake_user_repository, lambda user: None)
    registrar('fred', 'fred@fred.com', 'fredpassword')
    assert fake_user_repository.persist.call_count

def test_register_u...
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
We'll need to tweak the validation logic some to make up for the fact that we're passing the whole user object now:
def validate_username(user_repository):
    def validator(user):
        if user_repository.find_by_username(user.username):
            raise OurValidationError('Username in use already', 'username')
        return True
    return validator

def validate_email(user_repository):
    def validator(user):
        if ...
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
The tests for these are pretty straightforward as well, so I'll omit them (a sketch of one appears below). But we need a way to stitch them together...
def validate_many(*validators):
    def checker(input):
        return all(validator(input) for validator in validators)
    return checker
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
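As a sketch of one of those omitted tests, following the same create_autospec pattern as before (the use of pytest.raises here is my own assumption, not from the source):
def test_validate_username_rejects_duplicate():
    fake_user_repository = mock.create_autospec(AbstractUserRepository)
    # the repository reports the username as already taken
    fake_user_repository.find_by_username.return_value = True
    validator = validate_username(fake_user_repository)
    with pytest.raises(OurValidationError):
        validator(User('fred', 'fred@fred.com', 'fredpassword'))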
And then hook it all up like this:
validator = validate_many(validate_email(user_repository), validate_username(user_repository))
registrar = register_user(user_repository, validator)
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
Our neglected Controller We've spent a lot of time looking at how to compartmentalize the registration logic and portion out its concerns. However, the controller itself needs some attention as well. When we last left, it looked like this:
@app.route('/register', methods=['GET', 'POST'])
def register_user_view():
    form = RegisterUserForm()
    if form.validate_on_submit():
        try:
            register_user(form.username.data, form.email.data, form.password.data)
        except OurValidationError as e:
            form.errors[e.field] = [e.ms...
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
But we can do better than that. The problem here is that the logic is set in stone as nested flows of control. But mostly, I really like any excuse to use class-based views.
class RegisterUser(MethodView):
    def __init__(self, form, registrar, template, redirect):
        self.form = form
        self.registrar = registrar
        self.template = template
        self.redirect = redirect

    def get(self):
        return self._render()

    def post(self):
        if self.form.v...
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
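To hook this view up, Flask's MethodView.as_view forwards extra keyword arguments to __init__, so the dependencies can be injected at registration time. A minimal sketch; the endpoint, template, and redirect names are assumptions, and a real app would likely construct the form per request:
app.add_url_rule(
    '/register',
    view_func=RegisterUser.as_view(
        'register_user',
        form=RegisterUserForm(),    # assumed: the form class from earlier
        registrar=registrar,
        template='register.html',   # hypothetical template name
        redirect='index',           # hypothetical redirect endpoint
    ),
)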
But look at what happens with the median:
print(np.percentile(Serie, 50))
print(np.median(Serie))
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
What happens: the expected mean is 0, yet the computed mean shows a deviation, and the same happens with the standard deviation. Not so with the median: with 1000 data points it lands two orders of magnitude closer to 0. The following figure illustrates this. Definition of the plotting function:
def GraficaHistogramaParam(Values, bins=15):
    # Build the histogram of the values
    h, b = np.histogram(Values, bins=bins)
    h = h.astype(float)
    h = h / h.sum()
    b = (b[1:] + b[:-1]) / 2.0
    # Set up the figure
    fig = pl.figure(figsize=(10, 8))
    ax = fig.add_subplot(111)
    ax.plot(b, h, 'b', lw=2)
    ax.fill_betw...
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Plot of the parametric location measures
GraficaHistogramaParam(Serie)
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Plot of the non-parametric location measures
GraficaHistogramaNoParam(Serie)
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Case with fewer data points: with a smaller amount of data we expect a larger difference between the two measures, hence their instability:
Serie = np.random.uniform(2.5, 10, int(2e5))
print(Serie.mean())
print(np.median(Serie))
from scipy import stats as st
print(st.skew(Serie))
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Exercise to observe the robustness of both measures: in the following exercise we generate 20000 random series, each with 25 entries, and then compare the differences between the means and the medians found in each case.
medianas = np.zeros(20000)
medias = np.zeros(20000)
for i in range(20000):
    Serie = np.random.normal(0, 1, 25)
    medias[i] = Serie.mean()
    medianas[i] = np.median(Serie)

def ComparaHistogramas(Vec1, Vec2, bins=15):
    # Build the histogram of the values
    h1, b1 = np.histogram(Vec1, bins=bins)
    h1 = h1.astype(float)...
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Susceptibility to outliers: an outlier is defined as a data point that falls outside the oscillation range of the data, or that is inconsistent with the physics of the phenomenon being measured. The following are examples of outliers: exaggeratedly high values; negative values in cases of f...
Serie = np.random.normal(0, 1, 50)
fig = pl.figure(figsize=(9, 7))
pl.plot(Serie)
pl.grid(True)
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Let's insert a wild data point, one that sticks out:
Serie2 = np.copy(Serie)
Serie2[10] = 50.0
fig = pl.figure(figsize=(9, 7))
pl.plot(Serie2)
pl.plot(Serie)
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Now let's see what happens with the mean:
print(Serie.mean())
print(Serie2.mean())
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
And what happens with the median:
print(np.median(Serie))
print(np.median(Serie2))
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Introduction of multiple outliers: what happens if a large number of outliers is introduced? In other words, at what rate does the mean degrade into an estimator with ever greater error?
def CreaOutliers(vect, NumOut, Mult=10):
    # Find the oscillation range of the data
    Per = np.array([np.percentile(vect, i) for i in [0.1, 99.9]])
    # Generate the random outliers
    vectOut = np.copy(vect)
    for i in np.random.choice(vect.shape[0], NumOut):
        p = np.random.choice(2, 1)[0]
        vectOut[i] = vectO...
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Results: according to what we obtained, the mean is strongly affected, and so is the standard deviation. Case of a normal distribution:
# Variable definitions
N = 1000
S1 = np.random.uniform(0, 1, N)
Medias = []; Std = []
Medianas = []; R25_75 = []
# Introduce the outliers
for i in np.arange(5, 200):
    S2 = CreaOutliers(S1, i)
    Medias.append(S2.mean())
    Medianas.append(np.median(S2))
    Std.append(S2.std())
    R25_75.append(np.percentile(S2...
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Case of a uniform distribution
fig = pl.figure(figsize=(13, 5))
ax = fig.add_subplot(121)
ax.scatter(Medianas, Medias, c=np.arange(5, 200))
ax.set_xlabel('Mediana', size=14)
ax.set_ylabel('Media $\mu$', size=14)
ax = fig.add_subplot(122)
ax.scatter(R25_75, Std, c=np.arange(5, 200))
#ax.set_xlim(0,1)
ax.set_xlabel('Rango $25\%$ - $75\%$', size=14)
ax.set_ylabel...
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Quantiles: quantiles are a non-parametric measure of the distribution of the data. The best known is the median, but a quantile can be obtained at any level. What they represent: a 25% quantile equal to 3.56 indicates that 25% of the data are at or below 3.56. Being a ...
S1 = np.random.normal(0, 1, 100)
a = pl.boxplot(S1)
a = pl.xlabel('Serie')
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
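To make that 25% quantile statement concrete, a quick numerical check (the uniform sample here is illustrative, not the series from the text):
S = np.random.uniform(2.5, 10, 1000)
q25 = np.percentile(S, 25)
# by definition, roughly 25% of the data fall at or below q25
print(q25, np.mean(S <= q25))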
Case of introducing outliers: QQ plot of the series with outliers against the series without them.
S1 = np.random.normal(0, 1, 100)
S2 = CreaOutliers(S1, 10)
Per1 = np.array([np.percentile(S1, i) for i in range(10, 91, 10)])
Per2 = np.array([np.percentile(S2, i) for i in range(10, 91, 10)])
fig = pl.figure(figsize=(9, 7))
ax = fig.add_subplot(111)
ax.scatter(Per1, Per2, s=40)
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
ax.grid(True)
a...
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Careful with "=": xx = x means xx and x are the same object, whereas xx = x.copy() (or any expression on x) creates a different object.
xx = x.copy()
xx += 2
xx
x
notebooks/Basic_Python.ipynb
HUDataScience/StatisticalMethods2016
apache-2.0
Masking: this only works with numpy arrays. numpy array vs. list:
xlist = [3, 4, 5, 6, 7, 8, 9]
xarray = np.asarray([3, 4, 5, 6, 7, 8, 9])  # np.asarray(xlist)
xlist*2
xarray*2
strangelist = ["toto", 3, {}, []]
np.asarray(strangelist)*2
notebooks/Basic_Python.ipynb
HUDataScience/StatisticalMethods2016
apache-2.0
How to apply masking? Use a numpy array:
x
mask = x > 2
mask
x[mask]  # x[x>2]
x[(x > 2) & (x < 2.5)]  # x[(x>2) * (x>1.5)] -- both have to be true
x[(x > 2) | (x > 1.5)]  # x[(x>2) + (x>1.5)] -- either one has to be true
notebooks/Basic_Python.ipynb
HUDataScience/StatisticalMethods2016
apache-2.0
The case of the NaN Value
iamnan = np.NaN
iamnan
iamnan == iamnan
np.inf == np.inf
xwithnan = np.asarray([3, 4, 5, 6, 7, 2, 3, np.NaN, 75, 75])
xwithnan
xwithnan*2
4 + np.NaN
4/np.NaN
4**np.NaN
np.mean(xwithnan)
np.nanmean(xwithnan)
np.mean(xwithnan[xwithnan == xwithnan])
~(xwithnan == xwithnan)
xwithnan != xwithnan
np.isnan(xwithnan)
xwithnan = [3,...
notebooks/Basic_Python.ipynb
HUDataScience/StatisticalMethods2016
apache-2.0
Your first plot: for plotting we are going to use matplotlib. Let's plot two random variables, a vs. b.
a = np.random.rand(30)
b = np.random.rand(30)
# plot within the notebook
%matplotlib inline
import matplotlib.pyplot as mpl
pl = mpl.hist(a)
mpl.scatter(a, b, s=150, facecolors="None", edgecolors="b", lw=3)
notebooks/Basic_Python.ipynb
HUDataScience/StatisticalMethods2016
apache-2.0
Stochastic gradient descent (SGD) Stochastic gradient descent (often shortened to SGD), also known as incremental gradient descent, is a stochastic approximation of gradient descent, an iterative optimization method for minimizing an objective function that is written as a sum of differentiable functions. There ar...
%matplotlib inline
from matplotlib import pyplot
import numpy as np

a = 1
b = 2
num_points = 100
np.random.seed(637163)  # we make sure we always generate the same sequence
x_data = np.random.rand(num_points)*20.
y_data = x_data*b + a + 3*(2.*np.random.rand(num_points) - 1)
pyplot.scatter(x_data, y_data)
pyplot.plot(x_data, ...
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
We now write an SGD code for this problem. The training_data is a list of tuples (x, y) representing the training inputs and corresponding desired outputs. The variables epochs and mini_batch_size are what you'd expect - the number of epochs to train for, and the size of the mini-batches to use when sampling. eta is th...
epochs = 1000
mini_batch_size = 10
eta = 0.01/mini_batch_size
a = 3.
b = 3.

def update_mini_batch(mini_batch, eta):
    global a, b
    a0 = a
    b0 = b
    for x, y in mini_batch:
        e = eta*(a0 + b0*x - y)
        a -= e
        b -= x*e

training_data = list(zip(x_data, y_data))
for j in range(epochs):
    np...
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
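The update rule inside update_mini_batch is just gradient descent on the squared error. For a single point $(x, y)$ with prediction $a + bx$,
$$E = \tfrac{1}{2}(a + bx - y)^2, \qquad \frac{\partial E}{\partial a} = a + bx - y, \qquad \frac{\partial E}{\partial b} = x\,(a + bx - y),$$
so the code subtracts $e = \eta\,(a_0 + b_0 x - y)$ from $a$ and $x\,e$ from $b$.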
Challenge 14.2 Use SGD to train the single neuron in the previous notebook using a linearly separable set of 100 points, divided by the line $-\frac{5}{2}x+\frac{3}{2}y+3=0$
### We provide a set of randomly generated training points
num_points = 100
w1 = -2.5
w2 = 1.5
w0 = 3.
np.random.seed(637163)  # we make sure we always generate the same sequence
x_data = np.random.rand(num_points)*10.
y_data = np.random.rand(num_points)*10.
z_data = np.zeros(num_points)
for i in range(len(z_data)):
    ...
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
You will need the following auxiliary functions:
def sigmoid(z):
    """The sigmoid function."""
    return 1.0/(1.0 + np.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z)*(1 - sigmoid(z))
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
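One possible sketch of the training loop for the challenge, reusing the helpers above; the learning rate, epoch count, and zero initialization are my own choices, not prescribed by the source:
w0, w1, w2 = 0.0, 0.0, 0.0  # weights of the single neuron
eta = 0.1
training_data = list(zip(x_data, y_data, z_data))
for epoch in range(1000):
    np.random.shuffle(training_data)
    for x, y, z in training_data:
        s = w0 + w1*x + w2*y
        out = sigmoid(s)
        # gradient of the squared error through the sigmoid
        delta = (out - z)*sigmoid_prime(s)
        w0 -= eta*delta
        w1 -= eta*delta*x
        w2 -= eta*delta*y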
A simple network to classify handwritten digits Most of this section has been taken from M. Nielsen's free on-line book: "Neural Networks and Deep Learning" http://neuralnetworksanddeeplearning.com/ In this section we discuss a neural network which can solve the more interesting and difficult problem, namely, recognizi...
""" mnist_loader ~~~~~~~~~~~~ A library to load the MNIST image data. For details of the data structures that are returned, see the doc strings for ``load_data`` and ``load_data_wrapper``. In practice, ``load_data_wrapper`` is the function usually called by our neural network code. """ #### Libraries # Standard lib...
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
Note also that the biases and weights are stored as lists of Numpy matrices. So, for example net.weights[1] is a Numpy matrix storing the weights connecting the second and third layers of neurons. (It's not the first and second layers, since Python's list indexing starts at 0.) Since net.weights[1] is rather verbose, l...
""" network.py ~~~~~~~~~~ A module to implement the stochastic gradient descent learning algorithm for a feedforward neural network. Gradients are calculated using backpropagation. Note that I have focused on making the code simple, easily readable, and easily modifiable. It is not optimized, and omits many desirab...
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
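With the weights and biases stored as those lists of matrices, each layer's activations come from the previous layer's via $a' = \sigma(wa + b)$. A minimal feedforward sketch consistent with that storage scheme (whether network.py spells it exactly this way is an assumption):
def feedforward(net, a):
    # apply a' = sigmoid(w.a + b) one layer at a time
    for b, w in zip(net.biases, net.weights):
        a = sigmoid(np.dot(w, a) + b)
    return a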
We first load the MNIST data:
training_data, validation_data, test_data = load_data_wrapper()
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
After loading the MNIST data, we'll set up a Network with 30 hidden neurons.
net = Network([784, 30, 10])
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
Finally, we'll use stochastic gradient descent to learn from the MNIST training_data over 30 epochs, with a mini-batch size of 10, and a learning rate of $\eta$=3.0:
net.SGD(training_data, 30, 10, 3.0, test_data=test_data)
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
Note that head([]) is an error since you can't find the first item in an empty list.
tail([1,2])
tail([1])
python/Main.ipynb
banbh/little-pythoner
apache-2.0
Note that tail([]) is an error since the tail of a list is what's left over when you remove the head, and the empty list has no head.
cons(1, [2,3])
cons(1, [])
is_num(99)
is_num('hello')
is_str(99)
is_str('hello')
is_str_eq('hello', 'hello')
is_str_eq('hello', 'goodbye')
add1(99)
sub1(99)
python/Main.ipynb
banbh/little-pythoner
apache-2.0
Note that sub1(0) is an error because you can't subtract 1 from 0. (Actually it is possible if you allow negative numbers, but in these exercises we will not allow such numbers.) All Strings Write a function, is_list_of_strings, that determines whether a list contains only strings. Below are some examples of how it s...
from solutions import is_list_of_strings

is_list_of_strings(['hello', 'goodbye'])
is_list_of_strings([1, 'aa'])
is_list_of_strings([])
python/Main.ipynb
banbh/little-pythoner
apache-2.0
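A sketch of how is_list_of_strings might be written in the recursive style of these exercises, using the head, tail, and is_str helpers above (the actual solutions module may differ):
def is_list_of_strings(l):
    # the empty list vacuously contains only strings
    if l == []:
        return True
    # otherwise the head must be a string and the tail must pass the same test
    return is_str(head(l)) and is_list_of_strings(tail(l))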
The Spector dataset is distributed with statsmodels. You can access a vector of values for the dependent variable (endog) and a matrix of regressors (exog) like this:
data = sm.datasets.spector.load_pandas()
exog = data.exog
endog = data.endog
print(sm.datasets.spector.NOTE)
print(data.exog.head())
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Then, we add a constant to the matrix of regressors:
exog = sm.add_constant(exog, prepend=True)
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
To create your own Likelihood Model, you simply need to override the loglike method.
class MyProbit(GenericLikelihoodModel):
    def loglike(self, params):
        exog = self.exog
        endog = self.endog
        q = 2*endog - 1
        return stats.norm.logcdf(q*np.dot(exog, params)).sum()
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Estimate the model and print a summary:
sm_probit_manual = MyProbit(endog, exog).fit()
print(sm_probit_manual.summary())
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Compare your Probit implementation to statsmodels' "canned" implementation:
sm_probit_canned = sm.Probit(endog, exog).fit()

print(sm_probit_canned.params)
print(sm_probit_manual.params)

print(sm_probit_canned.cov_params())
print(sm_probit_manual.cov_params())
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Notice that the GenericLikelihoodModel class provides automatic differentiation, so we didn't have to provide Hessian or Score functions in order to calculate the covariance estimates. Example 2: Negative Binomial Regression for Count Data Consider a negative binomial regression model for count data with log-likeliho...
import numpy as np
from scipy.stats import nbinom

def _ll_nb2(y, X, beta, alph):
    mu = np.exp(np.dot(X, beta))
    size = 1/alph
    prob = size/(size + mu)
    ll = nbinom.logpmf(y, size, prob)
    return ll
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
New Model Class We create a new model class which inherits from GenericLikelihoodModel:
from statsmodels.base.model import GenericLikelihoodModel

class NBin(GenericLikelihoodModel):
    def __init__(self, endog, exog, **kwds):
        super(NBin, self).__init__(endog, exog, **kwds)

    def nloglikeobs(self, params):
        alph = params[-1]
        beta = params[:-1]
        ll = _ll_nb2(self.e...
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
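The listing above is cut off before the class's fit method; continuing the NBin class, a fit override that supplies start_params might look like this (the default starting values are an assumption):
    def fit(self, start_params=None, maxiter=10000, **kwds):
        if start_params is None:
            # zeros for the betas, 0.5 for alpha, as an arbitrary starting point
            start_params = np.append(np.zeros(self.exog.shape[1]), .5)
        return super(NBin, self).fit(start_params=start_params,
                                     maxiter=maxiter, **kwds)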
Two important things to notice: nloglikeobs: This function should return one evaluation of the negative log-likelihood function per observation in your dataset (i.e. rows of the endog/X matrix). start_params: A one-dimensional array of starting values needs to be provided. The size of this array determines the numbe...
import statsmodels.api as sm

medpar = sm.datasets.get_rdataset("medpar", "COUNT", cache=True).data
medpar.head()
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
The model we are interested in has a vector of non-negative integers as dependent variable (los), and 5 regressors: Intercept, type2, type3, hmo, white. For estimation, we need to create two variables to hold our regressors and the outcome variable. These can be ndarrays or pandas objects.
y = medpar.los
X = medpar[["type2", "type3", "hmo", "white"]].copy()
X["constant"] = 1
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Then, we fit the model and extract some information:
mod = NBin(y, X)
res = mod.fit()
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Extract parameter estimates, standard errors, p-values, AIC, etc.:
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('P-values: ', res.pvalues)
print('AIC: ', res.aic)
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
As usual, you can obtain a full list of available information by typing dir(res). We can also look at the summary of the estimation results.
print(res.summary())
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Testing We can check the results by using the statsmodels implementation of the Negative Binomial model, which uses the analytic score function and Hessian.
res_nbin = sm.NegativeBinomial(y, X).fit(disp=0)
print(res_nbin.summary())
print(res_nbin.params)
print(res_nbin.bse)
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the pas...
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis=1)

# Show the new dataset with 'Survived' removed
display(data.head())
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i]. To measure the performance of our pred...
def accuracy_score(truth, pred):
    """ Returns accuracy score for input truth and predictions. """
    # Ensure that the number of predictions matches number of outcomes
    if len(truth) == len(pred):
        # Calculate and return the accuracy as a percent
        return "Predictions have an accuracy...
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last...
def predictions_0(data):
    """ Model with no features. Always predicts a passenger did not survive. """
    predictions = []
    for _, passenger in data.iterrows():
        # Predict the survival of 'passenger'
        predictions.append(0)
    # Return our predictions
    return pd.Series(predictions...
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Question 1 Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived? Hint: Run the code cell below to see the accuracy of this prediction.
print(accuracy_score(outcomes, predictions))
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Answer: Predictions have an accuracy of 61.62%. Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the visuals.py Python script included with th...
vs.survival_stats(data, outcomes, 'Sex')
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive...
def predictions_1(data):
    """ Model with one feature:
            - Predict a passenger survived if they are female. """
    predictions = []
    for _, passenger in data.iterrows():
        # Remove the 'pass' statement below
        # and write your prediction conditions here
        #pass
        ...
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Question 2 How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive? Hint: Run the code cell below to see the accuracy of this prediction.
print(accuracy_score(outcomes, predictions))
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Answer: Predictions have an accuracy of 78.68%. Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improv...
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger t...
def predictions_2(data):
    """ Model with two features:
            - Predict a passenger survived if they are female.
            - Predict a passenger survived if they are male and younger than 10. """
    predictions = []
    for _, passenger in data.iterrows():
        # Remove the 'pass' statement...
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Question 3 How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived? Hint: Run the code cell below to see the accuracy of this prediction.
print(accuracy_score(outcomes, predictions))
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Answer: Predictions have an accuracy of 79.35%. Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin over using the feature Sex alone. Now it's your turn: Find a series of features and cond...
vs.survival_stats(data, outcomes, 'Sex', [ "Pclass == 3" ])
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age < 18"])
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'female'" , "Embarked == C"])
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction. Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model. Hint: You can start your implementation of this function using th...
def predictions_3(data):
    """ Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
    predictions = []
    for _, passenger in data.iterrows():
        # Remove the 'pass' statement below
        # and write your prediction conditions here
        #pass
        #if p...
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Question 4 Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions? Hint: ...
print(accuracy_score(outcomes, predictions))
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
The sock problem Created by Yuzhong Huang There are two drawers of socks. The first drawer has 40 white socks and 10 black socks; the second drawer has 20 white socks and 30 black socks. We randomly pull 2 socks from a drawer, and it turns out to be a pair (same color), but we don't know the color of these socks. What is ...
from functools import reduce
import operator

def multiply(items):
    """ multiply takes a list of numbers, multiplies all of them, and returns the result

    Args:
        items (list): The list of numbers

    Return:
        the items multiplied together
    """
    return reduce(operator.mul, item...
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next we define a drawer suite. This suite allows us to take n socks, up to the smallest number of same-colored socks in a drawer. To keep our likelihood function simple, we ignore the case of taking 11 black socks, for which only drawer 2 would be possible.
class Drawers(Suite):
    def Likelihood(self, data, hypo):
        """ Likelihood returns the likelihood given a bayesian update consisting
            of a particular hypothesis and new data. In the case of our drawer
            problem, the probabilities change with the number of pairs we take
            (without rep...
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
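As a concrete check of what that likelihood computes, the probability of drawing a matching pair (two socks, without replacement) from each drawer is
$$P(\text{pair}\mid \text{drawer1})=\frac{\binom{40}{2}+\binom{10}{2}}{\binom{50}{2}}=\frac{780+45}{1225}\approx 0.673,\qquad P(\text{pair}\mid \text{drawer2})=\frac{\binom{20}{2}+\binom{30}{2}}{\binom{50}{2}}=\frac{190+435}{1225}\approx 0.510,$$
so with a uniform prior, one matching pair should shift the posterior for drawer1 to roughly $0.673/(0.673+0.510)\approx 0.57$.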
Next, define our hypotheses and create the drawer Suite.
hypos = ['drawer1', 'drawer2']
drawers = Drawers(hypos)
drawers.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, update the drawers by taking two matching socks.
drawers.Update(2)
drawers.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
It seems that the drawer dominated by a single color (40 white, 10 black) is more likely after the update. To confirm this suspicion, let's restart the problem, this time taking 5 pairs of socks.
hypos = ['drawer1', 'drawer2']
drawers5 = Drawers(hypos)
drawers5.Update(5)
drawers5.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
We see that after we take 5 pairs of socks, the probability of the socks coming from drawer 1 is 80.6%. We can now conclude that the drawer with the more extreme split of socks is more likely to be the chosen one when we update with matching-color socks. Chess-playing twins Allen Downey Two identical twins are members of my c...
twins = Pmf()
twins['AB'] = 1
twins['BA'] = 1
twins.Normalize()
twins.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Now we update our hypotheses with us winning the first day. We have a 40% chance of winning against Avery and a 70% chance of winning against Blake.
# win day 1
twins['AB'] *= .4
twins['BA'] *= .7
twins.Normalize()
twins.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
At this point there is only a 36% chance that we played Avery the first day, and a 64% chance that we played Blake the first day. However, let's see what happens when we update with a loss.
# lose day 2
twins['AB'] *= .6
twins['BA'] *= .3
twins.Normalize()
twins.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Interesting. Now there is a 53% chance that we played Avery then Blake, and a 47% chance that we played Blake then Avery. Who saw that movie? Nathan Yee Every year the MPAA (Motion Picture Association of America) publishes a report about theatrical market statistics. Included in the report are both the gender and the e...
class Movie(Suite):
    def Likelihood(self, data, hypo):
        """ Likelihood returns the likelihood given a bayesian update consisting
            of a particular hypothesis and data. In this case, we need to calculate
            the probability of a gender seeing a movie. Then we calculate the
            probability t...
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next we make our hypotheses and input them as tuples into the Movie class.
genders = range(0, 2)
ethnicities = range(0, 5)
pairs = [(gender, ethnicity) for gender in genders for ethnicity in ethnicities]
movie = Movie(pairs)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
We decided that we are picking a random person in the United States, so we can use population demographics of the United States as an informed prior. We will assume that the United States is 50% male and 50% female. Population percent is defined in the order in which we enumerate ethnicities.
population_percent = [63.7, 12.2, 16.3, 4.7, 3.1,
                      63.7, 12.2, 16.3, 4.7, 3.1]
for i in range(len(population_percent)):
    movie[pairs[i]] = population_percent[i]
movie.Normalize()
movie.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, update with the two movies:
movie.Update('Inside Out')
movie.Normalize()
movie.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Given that a random person has seen Inside Out, the probability that the person is both female and Asian is .58%. Interestingly, when we update our hypotheses with our data, the chance that the randomly selected person is Caucasian goes up to 87%. It seems that our model just increases the chance that the randomly ...
total = 0
for pair in pairs:
    if pair[0] == 1:
        total += movie[pair]
print(total)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Parking meter theft From DASL (http://lib.stat.cmu.edu/DASL/Datafiles/brinkdat.html) The variable CON in the datafile Parking Meter Theft represents monthly parking meter collections by the principal contractor in New York City from May 1977 to March 1981. In addition to contractor collections, the city made collection...
import pandas as pd

df = pd.read_csv('parking.csv', skiprows=17, delimiter='\t')
df.head()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
First we need to normalize the CON (contractor) collections by the amount gathered by the CITY. This will give us a ratio of contractor collections to city collections. If we just use the raw contractor collections, fluctuations throughout the months could mislead us.
df['RATIO'] = df['CON'] / df['CITY']
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, let's see how the means of the RATIO data compare between the general contractors and BRINK.
grouped = df.groupby('BRINK')
for name, group in grouped:
    print(name, group.RATIO.mean())
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
We see that for a dollar gathered by the city, general contractors report 244.7 dollars while BRINK only reports 230 dollars. Now, we will fit the data to a Normal class to compute the likelihood of a sample from the normal distribution. This is a similar process to what we did in the improved reading ability problem.
from scipy.stats import norm

class Normal(Suite, Joint):
    def Likelihood(self, data, hypo):
        """
        data: sequence of test scores
        hypo: mu, sigma
        """
        mu, sigma = hypo
        likes = norm.pdf(data, mu, sigma)
        return np.prod(likes)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, we need to calculate a marginal distribution for both Brink and the general contractors. To get the marginal distribution of the general contractors, start by generating a uniformly spaced grid of prior values for mu and sigma.
mus = np.linspace(210, 270, 301)
sigmas = np.linspace(10, 65, 301)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, use itertools.product to enumerate all pairs of mu and sigma.
from itertools import product

general = Normal(product(mus, sigmas))
data = df[df.BRINK==0].RATIO
general.Update(data)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next we will plot the probability of each mu-sigma pair on a contour plot.
thinkplot.Contour(general, pcolor=True)
thinkplot.Config(xlabel='mu', ylabel='sigma')
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, extract the marginal distribution of mu from general.
pmf_mu0 = general.Marginal(0)
thinkplot.Pdf(pmf_mu0)
thinkplot.Config(xlabel='mu', ylabel='Pmf')
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
And the marginal distribution of sigma from general.
pmf_sigma0 = general.Marginal(1)
thinkplot.Pdf(pmf_sigma0)
thinkplot.Config(xlabel='sigma', ylabel='Pmf')
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, we will run this again for BRINK and see what the difference is between the groups. This will give us insight into whether or not Brink employees are stealing parking money from the city. First, use the same range of mus and sigmas to calculate the marginal distributions of brink.
brink = Normal(product(mus, sigmas))
data = df[df.BRINK==1].RATIO
brink.Update(data)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Plot the mus and sigmas on a contour plot to see what is going on.
thinkplot.Contour(brink, pcolor=True)
thinkplot.Config(xlabel='mu', ylabel='sigma')
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Extract the marginal distribution of mu from brink.
pmf_mu1 = brink.Marginal(0)
thinkplot.Pdf(pmf_mu1)
thinkplot.Config(xlabel='mu', ylabel='Pmf')
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Extract the marginal distribution of sigma from brink.
pmf_sigma1 = brink.Marginal(1)
thinkplot.Pdf(pmf_sigma1)
thinkplot.Config(xlabel='sigma', ylabel='Pmf')
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
From here, we want to compare the two distributions. To do this, we will start by taking the difference between the distributions.
pmf_diff = pmf_mu1 - pmf_mu0
pmf_diff.Mean()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
From here we can calculate the probability that money was stolen from the city: since a lower Brink mean means under-reported collections, the CDF of the difference evaluated at 0 gives the probability that the difference is negative.
cdf_diff = pmf_diff.MakeCdf()
thinkplot.Cdf(cdf_diff)
cdf_diff[0]
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
So we calculate that the probability money was stolen from the city is 93.9%. Next, we want to estimate how much money was stolen. We first need to calculate how much money the city collected during Brink times. Then we can multiply that by our pmf_diff to get a probability distribution of potentia...
money_city = np.where(df['BRINK']==1, df['CITY'], 0).sum(0)
print((pmf_diff * money_city).CredibleInterval(50))
thinkplot.Pmf(pmf_diff * money_city)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Above we see a plot of the stolen money in millions. We have also calculated a credible interval that tells us there is a 50% chance that Brink stole between 1.4 and 3.6 million dollars. In pursuit of more evidence, we find the probability that the standard deviation of the Brink collections is higher than that of the ...
pmf_sigma1.ProbGreater(pmf_sigma0)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Building a dynamic model In the previous notebook, <a href="mnist_linear.ipynb">mnist_linear.ipynb</a>, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module. The boilerplate structure for this module has already been set up in the folder mnist_mod...
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys

from . import model


def _parse_arguments(argv):
    """Parses command-line arguments."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--model_type',
        help='Which model type to use',
        ty...
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the scale and load_dataset functions from the previous lab.
%%writefile mnist_models/trainer/util.py
import tensorflow as tf


def scale(image, label):
    """Scales images from a 0-255 int range to a 0-1 float range"""
    image = tf.cast(image, tf.float32)
    image /= 255
    image = tf.expand_dims(image, -1)
    return image, label


def load_dataset(
        data, training...
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0