**Step 2:** Compute the Jacobians for the firm block around the stationary equilibrium (analytical).
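The entries below are the analytical derivatives of the firm's first-order conditions. Assuming Cobb-Douglas production with labor normalized to one, $Y_t = Z_t K_{t-1}^{\alpha}$ (an assumption read off the code; the functional forms are not stated in this excerpt), the factor prices are

$$ r_t = \alpha Z_t K_{t-1}^{\alpha-1} - \delta, \qquad w_t = (1-\alpha) Z_t K_{t-1}^{\alpha} $$

Capital is predetermined, which is why a change in $K_s$ moves prices in period $t=s+1$, while a change in $Z_s$ moves them in period $t=s$.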
```python
sol.jac_r_K[:] = 0
sol.jac_w_K[:] = 0
sol.jac_r_Z[:] = 0
sol.jac_w_Z[:] = 0

for s in range(par.path_T):
    for t in range(par.path_T):
        if t == s+1:
            sol.jac_r_K[t,s] = par.alpha*(par.alpha-1)*par.Z*par.K_ss**(par.alpha-2)
            sol.jac_w_K[t,s] = (1-par.alpha)*par.alpha*par.Z*par.K_ss**(par.alpha-1)
        if t == s:
            sol.jac_r_Z[t,s] = par.alpha*par.Z*par.K_ss**(par.alpha-1)
            sol.jac_w_Z[t,s] = (1-par.alpha)*par.Z*par.K_ss**par.alpha
```
*Source: `00. DynamicProgramming/05. General Equilibrium.ipynb` in the `JMSundram/ConsumptionSavingNotebooks` repository (MIT license).*
**Step 3:** Use the chain rule and solve for $G$.
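In matrix notation, market clearing defines $\boldsymbol{H}(\boldsymbol{K},\boldsymbol{Z}) = \boldsymbol{\mathcal{K}} - \boldsymbol{K} = \boldsymbol{0}$, and the implicit function theorem gives

$$ \boldsymbol{H}_K\, d\boldsymbol{K} + \boldsymbol{H}_Z\, d\boldsymbol{Z} = \boldsymbol{0} \;\Rightarrow\; d\boldsymbol{K} = \boldsymbol{G}_{K,Z}\, d\boldsymbol{Z}, \qquad \boldsymbol{G}_{K,Z} = -\boldsymbol{H}_K^{-1}\boldsymbol{H}_Z $$

with $\boldsymbol{H}_K = \mathcal{K}_r r_K + \mathcal{K}_w w_K - \boldsymbol{I}$ and $\boldsymbol{H}_Z = \mathcal{K}_r r_Z + \mathcal{K}_w w_Z$, exactly as coded below.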
```python
H_K = sol.jac_curlyK_r @ sol.jac_r_K + sol.jac_curlyK_w @ sol.jac_w_K - np.eye(par.path_T)
H_Z = sol.jac_curlyK_r @ sol.jac_r_Z + sol.jac_curlyK_w @ sol.jac_w_Z

G_K_Z = -np.linalg.solve(H_K, H_Z) # G = -H_K^(-1) H_Z
```
**Step 4:** Find the effect on prices and on outcomes other than $K$.
```python
G_r_Z = sol.jac_r_Z + sol.jac_r_K@G_K_Z
G_w_Z = sol.jac_w_Z + sol.jac_w_K@G_K_Z
G_C_Z = sol.jac_C_r@G_r_Z + sol.jac_C_w@G_w_Z
```
**Step 5:** Plot impulse-responses.

**Example I:** News shock (i.e. a shock arriving in a single period $s$) vs. a persistent shock where $dZ_t = \rho dZ_{t-1}$ and $dZ_0$ is the initial shock.
```python
fig = plt.figure(figsize=(12,4))
T_fig = 50

# left: news shock
ax = fig.add_subplot(1,2,1)
for s in [5,10,15,20,25]:
    dZ = (1+par.Z_sigma)*par.Z*(np.arange(par.path_T) == s)
    dK = G_K_Z@dZ
    ax.plot(np.arange(T_fig),dK[:T_fig],'-o',ms=2,label=f'$s={s}$')

ax.legend(frameon=True)
ax.set_title(r'1% TFP news shock in period $s$')
ax.set_ylabel('$K_t-K_{ss}$')
ax.set_xlim([0,T_fig])

# right: persistent shock
ax = fig.add_subplot(1,2,2)
dZ = model.get_path_Z()-par.Z
dK = G_K_Z@dZ
ax.plot(np.arange(T_fig),dK[:T_fig],'-o',ms=2)
ax.set_title(r'1% TFP shock with persistence $\rho=0.90$')
ax.set_ylabel('$K_t-K_{ss}$')
ax.set_xlim([0,T_fig])

fig.tight_layout()
fig.savefig('figs/news_vs_persistent_shock.pdf')
```
**Example II:** Further effects of the persistent shock.
```python
fig = plt.figure(figsize=(12,8))
T_fig = 50

ax_K = fig.add_subplot(2,2,1)
ax_r = fig.add_subplot(2,2,2)
ax_w = fig.add_subplot(2,2,3)
ax_C = fig.add_subplot(2,2,4)

ax_K.set_title('$K_t-K_{ss}$ after 1% TFP shock')
ax_K.set_xlim([0,T_fig])
ax_r.set_title('$r_t-r_{ss}$ after 1% TFP shock')
ax_r.set_xlim([0,T_fig])
ax_w.set_title('$w_t-w_{ss}$ after 1% TFP shock')
ax_w.set_xlim([0,T_fig])
ax_C.set_title('$C_t-C_{ss}$ after 1% TFP shock')
ax_C.set_xlim([0,T_fig])

dZ = model.get_path_Z()-par.Z

dK = G_K_Z@dZ
ax_K.plot(np.arange(T_fig),dK[:T_fig],'-o',ms=2)

dr = G_r_Z@dZ
ax_r.plot(np.arange(T_fig),dr[:T_fig],'-o',ms=2)

dw = G_w_Z@dZ
ax_w.plot(np.arange(T_fig),dw[:T_fig],'-o',ms=2)

dC = G_C_Z@dZ
ax_C.plot(np.arange(T_fig),dC[:T_fig],'-o',ms=2)

fig.tight_layout()
fig.savefig('figs/irfs.pdf')
```
Non-linear transition path

Use the Jacobian to speed up solving for the non-linear transition path with a quasi-Newton (Broyden) method.

**1. Solver**
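The Broyden step in the solver below avoids re-computing the Jacobian at every iteration by applying the rank-one update

$$ J \leftarrow J + \frac{(\Delta y - J\,\Delta x)\,\Delta x^{\top}}{\lVert \Delta x \rVert^{2}} $$

which is exactly the `np.outer(...)` line in the code.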
```python
def broyden_solver(f,x0,jac,tol=1e-8,max_iter=100,backtrack_fac=0.5,max_backtrack=30,do_print=False):
    """ numerical solver using the Broyden method """

    # a. initial
    x = x0.ravel()
    y = f(x)

    # b. iterate
    for it in range(max_iter):

        # i. current difference
        abs_diff = np.max(np.abs(y))
        if do_print: print(f' it = {it:3d} -> max. abs. error = {abs_diff:12.8f}')
        if abs_diff < tol: return x

        # ii. new x
        dx = np.linalg.solve(jac,-y)

        # iii. evaluate with backtracking
        for _ in range(max_backtrack):

            try: # evaluate
                ynew = f(x+dx)
            except ValueError: # backtrack
                dx *= backtrack_fac
            else: # update jac and break from backtracking
                dy = ynew-y
                jac = jac + np.outer(((dy - jac @ dx) / np.linalg.norm(dx)**2), dx)
                y = ynew
                x += dx
                break

        else:
            raise ValueError('too many backtracks, maybe bad initial guess?')

    else:
        raise ValueError(f'no convergence after {max_iter} iterations')
```
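A quick sanity check of the solver on a toy problem (a hypothetical example, not part of the notebook; it only assumes `numpy` and the `broyden_solver` defined above):

```python
import numpy as np

# toy system: f(x) = A x - b, with the identity as a (deliberately wrong) initial Jacobian
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: A @ x - b

x = broyden_solver(f, x0=np.zeros(2), jac=np.eye(2), do_print=True)
print(x, f(x))  # the residual should be ~0 within tol
```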
**2. Target function**

$$\boldsymbol{H}(\boldsymbol{K},\boldsymbol{Z},D_{ss}) = \mathcal{K}_{t}(\{r(Z_{s},K_{s-1}),w(Z_{s},K_{s-1})\}_{s\geq0},D_{ss})-K_{t}=0$$
```python
def target(path_K,path_Z,model,D0,full_output=False):

    par = model.par
    sim = model.sim
    path_r = np.zeros(path_K.size)
    path_w = np.zeros(path_K.size)

    # a. implied prices
    K0lag = np.sum(par.a_grid[np.newaxis,:]*D0)
    path_Klag = np.insert(path_K,0,K0lag)
    for t in range(par.path_T):
        path_r[t] = model.implied_r(path_Klag[t],path_Z[t])
        path_w[t] = model.implied_w(path_r[t],path_Z[t])

    # b. solve and simulate
    model.solve_household_path(path_r,path_w)
    model.simulate_household_path(D0)

    # c. market clearing
    if full_output:
        return path_r,path_w
    else:
        return sim.path_K-path_K
```
**3. Solve**
```python
path_Z = model.get_path_Z()
f = lambda x: target(x,path_Z,model,sim.D)

t0 = time.time()
path_K = broyden_solver(f,x0=np.repeat(par.K_ss,par.path_T),jac=H_K,do_print=True)
path_r,path_w = target(path_K,path_Z,model,sim.D,full_output=True)

print(f'\nIRF found in {elapsed(t0)}')
```
```
 it =   0 -> max. abs. error =   0.05898269
 it =   1 -> max. abs. error =   0.00026075
 it =   2 -> max. abs. error =   0.00000163
 it =   3 -> max. abs. error =   0.00000000

IRF found in 2.1 secs
```
**4. Plot**
```python
fig = plt.figure(figsize=(12,4))

ax = fig.add_subplot(1,2,1)
ax.set_title('capital, $K_t$')
dK = G_K_Z@(path_Z-par.Z)
ax.plot(np.arange(T_fig),dK[:T_fig] + par.K_ss,'-o',ms=2,label='linear')
ax.plot(np.arange(T_fig),path_K[:T_fig],'-o',ms=2,label='non-linear')
if DO_TP_RELAX:
    ax.plot(np.arange(T_fig),path_K_relax[:T_fig],'--o',ms=2,label='non-linear (relaxation)')
ax.legend(frameon=True)

ax = fig.add_subplot(1,2,2)
ax.set_title('interest rate, $r_t$')
dr = G_r_Z@(path_Z-par.Z)
ax.plot(np.arange(T_fig),dr[:T_fig] + par.r_ss,'-o',ms=2,label='linear')
ax.plot(np.arange(T_fig),path_r[:T_fig],'-o',ms=2,label='non-linear')
if DO_TP_RELAX:
    ax.plot(np.arange(T_fig),path_r_relax[:T_fig],'--o',ms=2,label='non-linear (relaxation)')

fig.tight_layout()
fig.savefig('figs/non_linear.pdf')
```
Covariances

Assume that $Z_t$ is stochastic and follows

$$ d\tilde{Z}_t = \rho d\tilde{Z}_{t-1} + \sigma\epsilon_t,\qquad \epsilon_t \sim \mathcal{N}(0,1) $$

The covariances between all outcomes can then be calculated as follows.
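The reasoning behind the code: with $d\tilde{Z}_t$ following the AR(1) above, each outcome inherits an MA($\infty$) representation $X_t = \sigma \sum_{s=0}^{\infty} dX_s\, \epsilon_{t-s}$, where $dX$ is the impulse response to a unit innovation (computed with the $G$ matrices). Hence

$$ \mathrm{Cov}(X_t, Y_{t+k}) = \sigma^2 \sum_{s=0}^{\infty} dX_s\, dY_{s+k} $$

which the code implements with the truncated impulse-response paths.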
```python
# a. choose parameters
rho = 0.90
sigma = 0.10

# b. find change in outputs
dZ = rho**(np.arange(par.path_T))
dC = G_C_Z@dZ
dK = G_K_Z@dZ

# c. covariance of consumption
print('auto-covariance of consumption:\n')
for k in range(5):
    if k == 0:
        autocov_C = sigma**2*np.sum(dC*dC)
    else:
        autocov_C = sigma**2*np.sum(dC[:-k]*dC[k:])
    print(f' k = {k}: {autocov_C:.4f}')

# d. covariance of consumption and capital
cov_C_K = sigma**2*np.sum(dC*dK)
print(f'\ncovariance of consumption and capital: {cov_C_K:.4f}')
```
```
auto-covariance of consumption:

 k = 0: 0.0445
 k = 1: 0.0431
 k = 2: 0.0415
 k = 3: 0.0399
 k = 4: 0.0382

covariance of consumption and capital: 0.2117
```
Extra: No idiosyncratic uncertainty

This section solves for the transition path in the case without idiosyncratic uncertainty.

**Analytical solution for steady state:**
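In steady state consumption is constant, so the Euler equation used later in this section, $C_t = (\beta(1+r_t))^{1/\sigma}C_{t-1}$, pins down the interest rate:

$$ C_t = C_{t-1} \;\Rightarrow\; \beta(1+r_{ss}) = 1 \;\Rightarrow\; r_{ss} = \frac{1}{\beta}-1 $$

This is the "from euler-equation" line in the code below.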
```python
r_ss_pf = (1/par.beta-1) # from euler-equation
w_ss_pf = model.implied_w(r_ss_pf,par.Z)
K_ss_pf = model.firm_demand(r_ss_pf,par.Z)
Y_ss_pf = model.firm_production(K_ss_pf,par.Z)
C_ss_pf = Y_ss_pf-par.delta*K_ss_pf

print(f'r: {r_ss_pf:.6f}')
print(f'w: {w_ss_pf:.6f}')
print(f'Y: {Y_ss_pf:.6f}')
print(f'C: {C_ss_pf:.6f}')
print(f'K/Y: {K_ss_pf/Y_ss_pf:.6f}')
```
```
r: 0.018330
w: 0.998613
Y: 1.122037
C: 1.050826
K/Y: 2.538660
```
**Function for finding consumption and capital paths given paths of interest rates and wages:**

It can be shown that

$$ C_{0}=\frac{(1+r_{0})a_{-1}+\sum_{t=0}^{\infty}\frac{1}{\mathcal{R}_{t}}w_{t}}{\sum_{t=0}^{\infty}\beta^{t/\sigma}\mathcal{R}_{t}^{\frac{1-\sigma}{\sigma}}} $$

where

$$ \mathcal{R}_{t} =\begin{cases} 1 & \text{if }t=0\\ (1+r_{t})\mathcal{R}_{t-1} & \text{else} \end{cases} $$

Otherwise the **Euler equation** holds:

$$ C_t = (\beta (1+r_{t}))^{\frac{1}{\sigma}}C_{t-1} $$
```python
def path_CK_func(K0,path_r,path_w,r_ss,w_ss,model):

    par = model.par

    # a. initialize
    wealth = (1+path_r[0])*K0
    inv_MPC = 0

    # b. accumulate discounted wages and the inverse MPC
    max_iter = 5000
    t = 0
    while t < max_iter:

        # i. prices padded with steady state
        r = path_r[t] if t < par.path_T else r_ss
        w = path_w[t] if t < par.path_T else w_ss

        # ii. interest rate factor
        if t == 0:
            fac = 1
        else:
            fac *= (1+r)

        # iii. accumulate
        add_wealth = w/fac
        add_inv_MPC = par.beta**(t/par.sigma)*fac**((1-par.sigma)/par.sigma)
        if np.fmax(add_wealth,add_inv_MPC) < 1e-12:
            break
        else:
            wealth += add_wealth
            inv_MPC += add_inv_MPC

        # iv. increment
        t += 1

    # c. simulate
    path_C = np.empty(par.path_T)
    path_K = np.empty(par.path_T)
    for t in range(par.path_T):
        if t == 0:
            path_C[t] = wealth/inv_MPC
            K_lag = K0
        else:
            path_C[t] = (par.beta*(1+path_r[t]))**(1/par.sigma)*path_C[t-1]
            K_lag = path_K[t-1]
        path_K[t] = (1+path_r[t])*K_lag + path_w[t] - path_C[t]

    return path_K,path_C
```
**Test with steady state prices:**
```python
path_r_pf = np.repeat(r_ss_pf,par.path_T)
path_w_pf = np.repeat(w_ss_pf,par.path_T)
path_K_pf,path_C_pf = path_CK_func(K_ss_pf,path_r_pf,path_w_pf,r_ss_pf,w_ss_pf,model)

print(f'C_ss: {C_ss_pf:.6f}')
print(f'C[0]: {path_C_pf[0]:.6f}')
print(f'C[-1]: {path_C_pf[-1]:.6f}')
assert np.isclose(C_ss_pf,path_C_pf[0])
```
```
C_ss: 1.050826
C[0]: 1.050826
C[-1]: 1.050826
```
**Shock paths** where the interest rate deviates in a single period:
```python
dr = 1e-4
ts = np.array([0,20,40])

path_C_pf_shock = np.empty((ts.size,par.path_T))
path_K_pf_shock = np.empty((ts.size,par.path_T))
for i,t in enumerate(ts):
    path_r_pf_shock = path_r_pf.copy()
    path_r_pf_shock[t] += dr
    K,C = path_CK_func(K_ss_pf,path_r_pf_shock,path_w_pf,r_ss_pf,w_ss_pf,model)
    path_K_pf_shock[i,:] = K
    path_C_pf_shock[i,:] = C
```
**Plot paths:**
```python
fig = plt.figure(figsize=(12,4))

ax = fig.add_subplot(1,2,1)
ax.plot(np.arange(par.path_T),path_C_pf,'-o',ms=2,label='$r_t = r^{\\ast}$')
for i,t in enumerate(ts):
    ax.plot(np.arange(par.path_T),path_C_pf_shock[i],'-o',ms=2,label=f'shock to $r_{{{t}}}$')
ax.set_xlim([0,50])
ax.set_xlabel('periods')
ax.set_ylabel('consumption, $C_t$');

ax = fig.add_subplot(1,2,2)
ax.plot(np.arange(par.path_T),path_K_pf,'-o',ms=2,label='$r_t = r^{\\ast}$')
for i,t in enumerate(ts):
    ax.plot(np.arange(par.path_T),path_K_pf_shock[i],'-o',ms=2,label=f'shock to $r_{{{t}}}$')
ax.legend(frameon=True)
ax.set_xlim([0,50])
ax.set_xlabel('$t$')
ax.set_ylabel('capital, $K_t$');

fig.tight_layout()
```
**Find transition path with shooting algorithm:**
```python
# a. allocate
dT = 200
path_C_pf = np.empty(par.path_T)
path_K_pf = np.empty(par.path_T)
path_r_pf = np.empty(par.path_T)
path_w_pf = np.empty(par.path_T)

# b. settings
C_min = C_ss_pf
C_max = C_ss_pf + K_ss_pf
K_min = 1.5 # guess on lower consumption if below this
K_max = 3 # guess on higher consumption if above this
tol_pf = 1e-6
max_iter_pf = 5000
path_K_pf[0] = K_ss_pf # capital is pre-determined

# c. iterate
t = 0
it = 0
while True:

    # i. update prices
    path_r_pf[t] = model.implied_r(path_K_pf[t],path_Z[t])
    path_w_pf[t] = model.implied_w(path_r_pf[t],path_Z[t])

    # ii. consumption
    if t == 0:
        C0 = (C_min+C_max)/2
        path_C_pf[t] = C0
    else:
        path_C_pf[t] = (1+path_r_pf[t])*par.beta*path_C_pf[t-1]

    # iii. check for steady state
    if path_K_pf[t] < K_min:
        t = 0
        C_max = C0
        continue
    elif path_K_pf[t] > K_max:
        t = 0
        C_min = C0
        continue
    elif t > 10 and np.sqrt((path_C_pf[t]-C_ss_pf)**2+(path_K_pf[t]-K_ss_pf)**2) < tol_pf:
        path_C_pf[t:] = path_C_pf[t]
        path_K_pf[t:] = path_K_pf[t]
        for k in range(par.path_T):
            path_r_pf[k] = model.implied_r(path_K_pf[k],path_Z[k])
            path_w_pf[k] = model.implied_w(path_r_pf[k],path_Z[k])
        break

    # iv. update capital
    path_K_pf[t+1] = (1+path_r_pf[t])*path_K_pf[t] + path_w_pf[t] - path_C_pf[t]

    # v. increment
    t += 1
    it += 1
    if it > max_iter_pf:
        break
```
**Plot deviations from steady state:**
```python
fig = plt.figure(figsize=(12,8))

ax = fig.add_subplot(2,2,1)
ax.plot(np.arange(par.path_T),path_Z,'-o',ms=2)
ax.set_xlim([0,200])
ax.set_title('technology, $Z_t$')

ax = fig.add_subplot(2,2,2)
ax.plot(np.arange(par.path_T),path_K-model.par.kd_ss,'-o',ms=2,label='$\sigma_e = 0.5$')
ax.plot(np.arange(par.path_T),path_K_pf-K_ss_pf,'-o',ms=2,label='$\sigma_e = 0$')
ax.legend(frameon=True)
ax.set_title('capital, $k_t$')
ax.set_xlim([0,200])

ax = fig.add_subplot(2,2,3)
ax.plot(np.arange(par.path_T),path_r-model.par.r_ss,'-o',ms=2,label='$\sigma_e = 0.5$')
ax.plot(np.arange(par.path_T),path_r_pf-r_ss_pf,'-o',ms=2,label='$\sigma_e = 0$')
ax.legend(frameon=True)
ax.set_title('interest rate, $r_t$')
ax.set_xlim([0,200])

ax = fig.add_subplot(2,2,4)
ax.plot(np.arange(par.path_T),path_w-model.par.w_ss,'-o',ms=2,label='$\sigma_e = 0.5$')
ax.plot(np.arange(par.path_T),path_w_pf-w_ss_pf,'-o',ms=2,label='$\sigma_e = 0$')
ax.legend(frameon=True)
ax.set_title('wage, $w_t$')
ax.set_xlim([0,200])

fig.tight_layout()
```
LeNet Lab

![LeNet Architecture](lenet.png)

Source: Yann LeCun

Load Data

Load the MNIST data, which comes pre-loaded with TensorFlow.

You do not need to modify this section.
```python
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels

assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))

print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set:   {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set:       {} samples".format(len(X_test)))
```
```
/Users/Kornet-Mac-1/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
```
*Source: `LeNet-Lab.ipynb` in the `dumebi/-Udasity-CarND-LeNet-Lab` repository (MIT license).*
The MNIST data that TensorFlow pre-loads comes as 28x28x1 images. However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels. In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).

You do not need to modify this section.
```python
import numpy as np

# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')

print("Updated Image Shape: {}".format(X_train[0].shape))
```
```
Updated Image Shape: (32, 32, 1)
```
Visualize Data

View a sample from the dataset.

You do not need to modify this section.
```python
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

index = random.randint(0, len(X_train))
image = X_train[index].squeeze()

plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
```
```
6
```
Preprocess Data

Shuffle the training data.

You do not need to modify this section.
```python
from sklearn.utils import shuffle

X_train, y_train = shuffle(X_train, y_train)
```
Setup TensorFlow

The `EPOCHS` and `BATCH_SIZE` values affect the training speed and model accuracy.

You do not need to modify this section.
```python
import tensorflow as tf

EPOCHS = 10
BATCH_SIZE = 128
```
TODO: Implement LeNet-5

Implement the [LeNet-5](http://yann.lecun.com/exdb/lenet/) neural network architecture.

This is the only cell you need to edit.

Input

The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.

Architecture

**Layer 1: Convolutional.** The output shape should be 28x28x6.

**Activation.** Your choice of activation function.

**Pooling.** The output shape should be 14x14x6.

**Layer 2: Convolutional.** The output shape should be 10x10x16.

**Activation.** Your choice of activation function.

**Pooling.** The output shape should be 5x5x16.

**Flatten.** Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using `tf.contrib.layers.flatten`, which is already imported for you.

**Layer 3: Fully Connected.** This should have 120 outputs.

**Activation.** Your choice of activation function.

**Layer 4: Fully Connected.** This should have 84 outputs.

**Activation.** Your choice of activation function.

**Layer 5: Fully Connected (Logits).** This should have 10 outputs.

Output

Return the logits from the final fully connected layer.
```python
from tensorflow.contrib.layers import flatten

def LeNet(x):
    # Arguments used for tf.truncated_normal, randomly defines variables
    # for the weights and biases for each layer
    mu = 0
    sigma = 0.1

    # TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
    conv1_b = tf.Variable(tf.zeros(6))
    conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b

    # TODO: Activation.
    conv1 = tf.nn.relu(conv1)

    # TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # TODO: Layer 2: Convolutional. Output = 10x10x16.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
    conv2_b = tf.Variable(tf.zeros(16))
    conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b

    # TODO: Activation.
    conv2 = tf.nn.relu(conv2)

    # TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # TODO: Flatten. Input = 5x5x16. Output = 400.
    fc0 = flatten(conv2)

    # TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
    fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
    fc1_b = tf.Variable(tf.zeros(120))
    fc1 = tf.matmul(fc0, fc1_W) + fc1_b

    # TODO: Activation.
    fc1 = tf.nn.relu(fc1)

    # TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
    fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
    fc2_b = tf.Variable(tf.zeros(84))
    fc2 = tf.matmul(fc1, fc2_W) + fc2_b

    # TODO: Activation.
    fc2 = tf.nn.relu(fc2)

    # TODO: Layer 5: Fully Connected. Input = 84. Output = 10.
    fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))
    fc3_b = tf.Variable(tf.zeros(10))
    logits = tf.matmul(fc2, fc3_W) + fc3_b

    return logits
```
Features and Labels

Train LeNet to classify [MNIST](http://yann.lecun.com/exdb/mnist/) data.

`x` is a placeholder for a batch of input images. `y` is a placeholder for a batch of output labels.

You do not need to modify this section.
```python
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
```
Training Pipeline

Create a training pipeline that uses the model to classify MNIST data.

You do not need to modify this section.
```python
rate = 0.001

logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
```
Model Evaluation

Evaluate the loss and accuracy of the model for a given dataset.

You do not need to modify this section.
```python
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()

def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / num_examples
```
Train the Model

Run the training data through the training pipeline to train the model.

Before each epoch, shuffle the training set. After each epoch, measure the loss and accuracy of the validation set. Save the model after training.

You do not need to modify this section.
```python
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)

    print("Training...")
    print()
    for i in range(EPOCHS):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})

        validation_accuracy = evaluate(X_validation, y_validation)
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print()

    saver.save(sess, './lenet')
    print("Model saved")
```
```
Training...

EPOCH 1 ...
Validation Accuracy = 0.972

EPOCH 2 ...
Validation Accuracy = 0.978

EPOCH 3 ...
Validation Accuracy = 0.984

EPOCH 4 ...
Validation Accuracy = 0.986

EPOCH 5 ...
Validation Accuracy = 0.989

EPOCH 6 ...
Validation Accuracy = 0.987

EPOCH 7 ...
Validation Accuracy = 0.990

EPOCH 8 ...
Validation Accuracy = 0.987

EPOCH 9 ...
Validation Accuracy = 0.989

EPOCH 10 ...
Validation Accuracy = 0.989

Model saved
```
Evaluate the Model

Once you are completely satisfied with your model, evaluate the performance of the model on the test set.

Be sure to only do this once! If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.

You do not need to modify this section.
```python
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))

    test_accuracy = evaluate(X_test, y_test)
    print("Test Accuracy = {:.3f}".format(test_accuracy))
```
```
INFO:tensorflow:Restoring parameters from ./lenet
Test Accuracy = 0.990
```
Homework 3

Loading the basic packages
```python
import pandas as pd
import numpy as np
import sklearn
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold # used in cross-validation
from sklearn.model_selection import KFold
import IPython
from time import time
```
*Source: `Prace_domowe/Praca_domowa3/Grupa1/EljasiakBartlomiej/pd3.ipynb` in the `niladrem/2020L-WUM` repository (Apache-2.0 license).*
A short introduction

The goal of this task is to train and evaluate three different models on meteorological data from Australia. An equally important goal is to inspect and change the so-called hyperparameters of each of them.

Loading the data
```python
data = pd.read_csv("../../australia.csv")
```
A first look at the data
```python
data.info()
```
```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 56420 entries, 0 to 56419
Data columns (total 18 columns):
MinTemp          56420 non-null float64
MaxTemp          56420 non-null float64
Rainfall         56420 non-null float64
Evaporation      56420 non-null float64
Sunshine         56420 non-null float64
WindGustSpeed    56420 non-null float64
WindSpeed9am     56420 non-null float64
WindSpeed3pm     56420 non-null float64
Humidity9am      56420 non-null float64
Humidity3pm      56420 non-null float64
Pressure9am      56420 non-null float64
Pressure3pm      56420 non-null float64
Cloud9am         56420 non-null float64
Cloud3pm         56420 non-null float64
Temp9am          56420 non-null float64
Temp3pm          56420 non-null float64
RainToday        56420 non-null int64
RainTomorrow     56420 non-null int64
dtypes: float64(16), int64(2)
memory usage: 7.7 MB
```
There are no missing values, and the data is perfectly prepared for machine learning. Still, let us take a look at what the frame looks like.
```python
data.head()
```
Random Forest

**Loading the required libraries**
```python
from sklearn.ensemble import RandomForestClassifier
```
**Initializing the model**
```python
rf_default = RandomForestClassifier()
```
**Hyperparameters**
```python
params = rf_default.get_params()
params
```
**Changing a few hyperparameters**
```python
params['n_estimators'] = 150
params['max_depth'] = 6
params['min_samples_leaf'] = 4
params['n_jobs'] = 4
params['random_state'] = 0

rf_modified = RandomForestClassifier()
rf_modified.set_params(**params)
```
Extreme Gradient Boosting

**Loading the required libraries**
```python
from xgboost import XGBClassifier
```
**Initializing the model**
```python
xgb_default = XGBClassifier()
```
**Hyperparameters**
```python
params = xgb_default.get_params()
params
```
**Changing a few hyperparameters**
```python
params['n_estimators'] = 150
params['max_depth'] = 6
params['n_jobs'] = 4
params['random_state'] = 0

xgb_modified = XGBClassifier()
xgb_modified.set_params(**params)
```
Support Vector Machines

**Loading the required libraries**
```python
from sklearn.svm import SVC
```
**Initializing the model**
```python
svc_default = SVC()
```
**Hyperparameters**
```python
params = svc_default.get_params()
params
```
**Changing a few hyperparameters**
```python
params['degree'] = 3
params['tol'] = 0.001
params['random_state'] = 0

svc_modified = SVC()
svc_modified.set_params(**params)
```
Comment

At this point we have three models with changed hyperparameters, together with their default counterparts. Let us now see how the results achieved by these models change and, although it was not the goal of this task, whether we managed to improve any of them.

Comparison

**Loading the required libraries**
```python
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import balanced_accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import average_precision_score
from sklearn.metrics import roc_auc_score
```
Helper functions
```python
def cv_classifier(classifier, kfolds=10, X=data.drop("RainTomorrow", axis=1), y=data.RainTomorrow):
    start_time = time()
    scores = {}
    scores["f1"] = []
    scores["accuracy"] = []
    scores["balanced_accuracy"] = []
    scores["precision"] = []
    scores["average_precision"] = []
    scores["roc_auc"] = []

    # Hardcoded cross-validation method; could be made an argument
    cv = StratifiedKFold(n_splits=kfolds, shuffle=True, random_state=0)

    for i, (train, test) in enumerate(cv.split(X, y)):
        IPython.display.clear_output()
        print(f"Model {i+1}/{kfolds}")

        # Training model
        classifier.fit(X.iloc[train,], y.iloc[train])

        # Testing model
        prediction = classifier.predict(X.iloc[test,])

        # Calculating and saving scores
        scores["f1"].append(f1_score(y.iloc[test], prediction))
        scores["accuracy"].append(accuracy_score(y.iloc[test], prediction))
        scores["balanced_accuracy"].append(balanced_accuracy_score(y.iloc[test], prediction))
        scores["precision"].append(precision_score(y.iloc[test], prediction))
        scores["average_precision"].append(average_precision_score(y.iloc[test], prediction))
        scores["roc_auc"].append(roc_auc_score(y.iloc[test], prediction))

    IPython.display.clear_output()
    print(f"Crossvalidation on {kfolds} folds done in {round((time()-start_time),2)}s")
    return scores

def get_mean_scores(scores_dict):
    means = {}
    for score_name in scores_dict:
        means[score_name] = np.mean(scores_dict[score_name])
    return means

def print_mean_scores(mean_scores_dict, precision=4):
    for score_name in mean_scores_dict:
        print(f"Mean {score_name} score is {round(mean_scores_dict[score_name]*100, precision)}%")
```
Results

Below I present the prediction results of the models shown earlier. For contrast, I trained both the modified versions of the classifiers and the default ones. I must sadly admit that I am not the best at guessing, because the parameters I picked noticeably worsen the performance of every model. Nevertheless, to establish this I had to rely on certain metrics. These are:

* F1
* Accuracy
* Balanced Accuracy
* Precision
* Average Precision
* ROC AUC

All models were subjected to 10-fold cross-validation, so the reported results are averages. Cross-validation allows a more accurate assessment of a model's performance and yields extra information, such as the standard deviation of the scores, which lets us discuss how the model behaves in extreme cases.

Random Forest: cross-validation of the models
```python
scores_rf_default = cv_classifier(rf_default)
scores_rf_modified = cv_classifier(rf_modified)

mean_scores_rf_default = get_mean_scores(scores_rf_default)
mean_scores_rf_modified = get_mean_scores(scores_rf_modified)
```
**Random forest default**
```python
print_mean_scores(mean_scores_rf_default,precision=2)
```
```
Mean f1 score is 62.85%
Mean accuracy score is 86.09%
Mean balanced_accuracy score is 74.38%
Mean precision score is 76.32%
Mean average_precision score is 51.05%
Mean roc_auc score is 74.38%
```
**Random forest modified**
```python
print_mean_scores(mean_scores_rf_modified,precision=2)
```
```
Mean f1 score is 56.74%
Mean accuracy score is 84.97%
Mean balanced_accuracy score is 70.54%
Mean precision score is 77.55%
Mean average_precision score is 46.88%
Mean roc_auc score is 70.54%
```
Extreme Gradient Boosting: cross-validation of the models
```python
scores_xgb_default = cv_classifier(xgb_default)
scores_xgb_modified = cv_classifier(xgb_modified)

mean_scores_xgb_default = get_mean_scores(scores_xgb_default)
mean_scores_xgb_modified = get_mean_scores(scores_xgb_modified)
```
**XGBoost default**
```python
print_mean_scores(mean_scores_xgb_default,precision=2)
```
```
Mean f1 score is 63.79%
Mean accuracy score is 85.92%
Mean balanced_accuracy score is 75.3%
Mean precision score is 73.56%
Mean average_precision score is 51.06%
Mean roc_auc score is 75.3%
```
**XGBoost modified**
```python
print_mean_scores(mean_scores_xgb_modified,precision=2)
```
```
Mean f1 score is 63.93%
Mean accuracy score is 85.89%
Mean balanced_accuracy score is 75.44%
Mean precision score is 73.18%
Mean average_precision score is 51.07%
Mean roc_auc score is 75.44%
```
Support Vector Machines: cross-validation of the models. **Warning: this takes a while.**
```python
scores_svc_default = cv_classifier(svc_default)
scores_svc_modified = cv_classifier(svc_modified)

mean_scores_svc_default = get_mean_scores(scores_svc_default)
mean_scores_svc_modified = get_mean_scores(scores_svc_modified)
```
**SVM default**
```python
print_mean_scores(mean_scores_svc_default,precision=2)
```
```
Mean f1 score is 51.52%
Mean accuracy score is 84.38%
Mean balanced_accuracy score is 67.63%
Mean precision score is 81.47%
Mean average_precision score is 44.44%
Mean roc_auc score is 67.63%
```
**SVM modified**
```python
print_mean_scores(mean_scores_svc_modified,precision=2)
```
```
Mean f1 score is 51.52%
Mean accuracy score is 84.38%
Mean balanced_accuracy score is 67.63%
Mean precision score is 81.47%
Mean average_precision score is 44.44%
Mean roc_auc score is 67.63%
```
Summary

The results of random forest and xgboost were fairly similar and, frankly, fairly weak. SVM fared even worse, which will probably surprise no one: it has a terribly long training time, over a minute per model, and performs much worse than the other algorithms, whereas ten xgboost models were trained in 41s. The random forest and xgboost results are quite close. If I had to pick one of these three models to tune further, I would definitely go with xgboost, partly because training and testing would take much less time than with random forest, and because with suitable parameters xgboost will probably do better than random forest. Choosing the best metric is not so simple, and I would even claim that I found none deserving that title. Most metrics represent some property of a model in a non-trivial way. If I had to restrict myself to one, I would probably pick ROC AUC: by using the True Positive Rate and False Positive Rate it is quite intuitive (unlike many others), and it still explains model performance well.

Bonus part - Regression

Preparing the data
```python
data2 = pd.read_csv('allegro-api-transactions.csv')
data2 = data2.drop(['lp','date'], axis = 1)
data2.head()
```
The data is almost ready for training; we only need to clean up `it_location`, where duplicates such as *Warszawa* and *warszawa* can appear, and then encode the categorical variables.
```python
data2.it_location = data2.it_location.str.lower()
data2.head()

encoding_columns = ['categories','seller','it_location','main_category']
```
Encoding the categorical variables
```python
import category_encoders
from sklearn.preprocessing import OneHotEncoder
```
Splitting the data

I will not perform a standard train/test split, because later in this document I will use cross-validation to assess the quality of the encodings. I want to point out that the best approach here would probably be to expand the `categories` column into 26 binary columns, but that would considerably increase the size of the data. For exactly the same reason I will not use one-hot encoding, and will instead rely on encodings that do not enlarge the data.
```python
X = data2.drop('price', axis = 1)
y = data2.price
```
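For intuition: plain target encoding replaces each category with the mean of the target observed for that category (the library encoders used below add smoothing and regularization on top of this). A minimal pandas sketch of the idea, on a hypothetical toy frame:

```python
import pandas as pd

toy = pd.DataFrame({'city': ['warszawa', 'kraków', 'warszawa', 'gdańsk'],
                    'price': [10.0, 20.0, 30.0, 40.0]})

# plain (unsmoothed) target encoding: category -> mean of the target
means = toy.groupby('city')['price'].mean()
toy['city_te'] = toy['city'].map(means)
print(toy)
```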
Target encoding
```python
te = category_encoders.target_encoder.TargetEncoder(data2, cols = encoding_columns)
target_encoded = te.fit_transform(X,y)
target_encoded
```
James-Stein Encoding
```python
js = category_encoders.james_stein.JamesSteinEncoder(cols = encoding_columns)
encoded_js = js.fit_transform(X,y)
encoded_js
```
Cat Boost Encoding
```python
cb = category_encoders.cat_boost.CatBoostEncoder(cols = encoding_columns)
encoded_cb = cb.fit_transform(X,y)
encoded_cb
```
Testing
```python
from sklearn.metrics import r2_score, mean_squared_error
from sklearn import linear_model

def cv_encoding(model, kfolds=10, X=data.drop("RainTomorrow", axis=1), y=data.RainTomorrow):
    start_time = time()
    scores = {}
    scores["r2_score"] = []
    scores['RMSE'] = []

    # Standard k-fold
    cv = KFold(n_splits=kfolds, shuffle=False, random_state=0)

    for i, (train, test) in enumerate(cv.split(X, y)):
        IPython.display.clear_output()
        print(f"Model {i+1}/{kfolds}")

        # Training model
        model.fit(X.iloc[train,], y.iloc[train])

        # Testing model
        prediction = model.predict(X.iloc[test,])

        # Calculating and saving scores
        # (note: mean_squared_error returns the MSE; no square root is taken,
        # so the 'RMSE' key actually stores MSE values)
        scores['r2_score'].append(r2_score(y.iloc[test], prediction))
        scores['RMSE'].append(mean_squared_error(y.iloc[test], prediction))

    IPython.display.clear_output()
    print(f"Crossvalidation on {kfolds} folds done in {round((time()-start_time),2)}s")
    return scores
```
Measuring the quality of the encodings

I decided to use the `Lasso` linear regression model. Initially I wanted to use `Elastic Net`, but it turned out that the variables are not particularly correlated with one another, and that was to be the main reason for using it.
```python
corr = data2.corr()

fig, ax = plt.subplots(figsize=(9,6))
ax = sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns,
                 annot=True, cmap="PiYG", center=0, vmin=-1, vmax=1)
ax.set_title('Correlations of the variables')
plt.show();
```
Choice of the linear model

I define it here because I will use it repeatedly for cross-validation in the remainder of the document.
```python
lasso = linear_model.Lasso()
```
Target encoding results
```python
target_encoding_scores = cv_encoding(model=lasso, kfolds=20, X=target_encoded, y=y)
target_encoding_scores_mean = get_mean_scores(target_encoding_scores)
target_encoding_scores_mean
```
James-Stein encoding results
```python
js_encoding_scores = cv_encoding(lasso, 20, encoded_js, y)
js_encoding_scores_mean = get_mean_scores(js_encoding_scores)
js_encoding_scores_mean
```
Cat Boost encoding results
```python
cb_encoding_scores = cv_encoding(lasso, 20, encoded_cb, y)
cb_encoding_scores_mean = get_mean_scores(cb_encoding_scores)
cb_encoding_scores_mean
```
Comparison

Results for the r2 metric
```python
r2_data = [target_encoding_scores["r2_score"],
           js_encoding_scores["r2_score"],
           cb_encoding_scores["r2_score"]]
labels = ["Target", "James-Stein", "Cat Boost"]

fig, ax = plt.subplots(figsize=(12,9))
ax.set_title('r2 results')
ax.boxplot(r2_data, labels=labels)
plt.show()
```
**Comment** It is clear that the James-Stein encoding lets the model fit the data much better, but this raises the potential problem of overfitting. It would be worth checking whether this encoding does not lead to much stronger overfitting; a quick train/test comparison (sketched after the RMSE plot below) would reveal this.

Results for the RMSE metric
```python
rmse_data = [target_encoding_scores["RMSE"],
             js_encoding_scores["RMSE"],
             cb_encoding_scores["RMSE"]]
labels = ["Target", "James-Stein", "Cat Boost"]

fig, ax = plt.subplots(figsize=(12,9))
ax.set_title('RMSE results on a logarithmic scale')
ax.set_yscale('log')
ax.boxplot(rmse_data, labels=labels)
plt.show()
```
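As noted in the comment above, the strong James-Stein fit could partly be overfitting. A quick check (a sketch only; it assumes `encoded_js`, `y`, and `lasso` from the cells above) is to compare train and test R² on a single split. Strictly speaking, the encoder itself should also be refit on the training part only, since fitting it on the full data leaks target information:

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X_tr, X_te, y_tr, y_te = train_test_split(encoded_js, y, test_size=0.2, random_state=0)
lasso.fit(X_tr, y_tr)
print('train R2:', r2_score(y_tr, lasso.predict(X_tr)))
print('test R2: ', r2_score(y_te, lasso.predict(X_te)))
```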
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).

Challenge Notebook

Problem: Implement depth-first traversals (in-order, pre-order, post-order) on a binary tree.

* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)

Constraints

* Can we assume we already have a Node class with an insert method?
    * Yes
* What should we do with each node when we process it?
    * Call an input method `visit_func` on the node
* Can we assume this fits in memory?
    * Yes

Test Cases

In-Order Traversal

* 5, 2, 8, 1, 3 -> 1, 2, 3, 5, 8
* 1, 2, 3, 4, 5 -> 1, 2, 3, 4, 5

Pre-Order Traversal

* 5, 2, 8, 1, 3 -> 5, 2, 1, 3, 8
* 1, 2, 3, 4, 5 -> 1, 2, 3, 4, 5

Post-Order Traversal

* 5, 2, 8, 1, 3 -> 1, 3, 2, 8, 5
* 1, 2, 3, 4, 5 -> 5, 4, 3, 2, 1

Algorithm

Refer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/tree_dfs/dfs_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.

Code
```python
# %load ../bst/bst.py
class Node(object):

    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None
        self.parent = None

    def __repr__(self):
        return str(self.data)


class Bst(object):

    def __init__(self, root=None):
        self.root = root

    def insert(self, data):
        if data is None:
            raise TypeError('data cannot be None')
        if self.root is None:
            self.root = Node(data)
            return self.root
        else:
            return self._insert(self.root, data)

    def _insert(self, node, data):
        if node is None:
            return Node(data)
        if data <= node.data:
            if node.left is None:
                node.left = self._insert(node.left, data)
                node.left.parent = node
                return node.left
            else:
                return self._insert(node.left, data)
        else:
            if node.right is None:
                node.right = self._insert(node.right, data)
                node.right.parent = node
                return node.right
            else:
                return self._insert(node.right, data)


class BstDfs(Bst):

    def in_order_traversal(self, node, visit_func):
        if node is None:
            return
        self.in_order_traversal(node.left, visit_func)
        visit_func(node)
        self.in_order_traversal(node.right, visit_func)

    def pre_order_traversal(self, node, visit_func):
        if node is None:
            return
        visit_func(node)
        self.pre_order_traversal(node.left, visit_func)
        self.pre_order_traversal(node.right, visit_func)

    def post_order_traversal(self, node, visit_func):
        if node is None:
            return
        self.post_order_traversal(node.left, visit_func)
        self.post_order_traversal(node.right, visit_func)
        visit_func(node)
```
*Source: `graphs_trees/tree_dfs/dfs_challenge.ipynb` in the `hanbf/interactive-coding-challenges` repository (Apache-2.0 license).*
Unit Test
```python
%run ../utils/results.py
# %load test_dfs.py
from nose.tools import assert_equal


class TestDfs(object):

    def __init__(self):
        self.results = Results()

    def test_dfs(self):
        bst = BstDfs(Node(5))
        bst.insert(2)
        bst.insert(8)
        bst.insert(1)
        bst.insert(3)

        bst.in_order_traversal(bst.root, self.results.add_result)
        assert_equal(str(self.results), "[1, 2, 3, 5, 8]")
        self.results.clear_results()

        bst.pre_order_traversal(bst.root, self.results.add_result)
        assert_equal(str(self.results), "[5, 2, 1, 3, 8]")
        self.results.clear_results()

        bst.post_order_traversal(bst.root, self.results.add_result)
        assert_equal(str(self.results), "[1, 3, 2, 8, 5]")
        self.results.clear_results()

        bst = BstDfs(Node(1))
        bst.insert(2)
        bst.insert(3)
        bst.insert(4)
        bst.insert(5)

        bst.in_order_traversal(bst.root, self.results.add_result)
        assert_equal(str(self.results), "[1, 2, 3, 4, 5]")
        self.results.clear_results()

        bst.pre_order_traversal(bst.root, self.results.add_result)
        assert_equal(str(self.results), "[1, 2, 3, 4, 5]")
        self.results.clear_results()

        bst.post_order_traversal(bst.root, self.results.add_result)
        assert_equal(str(self.results), "[5, 4, 3, 2, 1]")

        print('Success: test_dfs')


def main():
    test = TestDfs()
    test.test_dfs()


if __name__ == '__main__':
    main()
```
```
Success: test_dfs
```
Generative Adversarial Networks

Throughout most of this book, we've talked about how to make predictions. In some form or another, we used deep neural networks to learn mappings from data points to labels. This kind of learning is called discriminative learning, as in, we'd like to be able to discriminate between photos of cats and photos of dogs. Classifiers and regressors are both examples of discriminative learning. And neural networks trained by backpropagation have upended everything we thought we knew about discriminative learning on large complicated datasets. Classification accuracies on high-res images have gone from useless to human-level (with some caveats) in just 5-6 years. We'll spare you another spiel about all the other discriminative tasks where deep neural networks do astoundingly well.

But there's more to machine learning than just solving discriminative tasks. For example, given a large dataset, without any labels, we might want to learn a model that concisely captures the characteristics of this data. Given such a model, we could sample synthetic data points that resemble the distribution of the training data. For example, given a large corpus of photographs of faces, we might want to be able to generate a *new* photorealistic image that looks like it might plausibly have come from the same dataset. This kind of learning is called *generative modeling*.

Until recently, we had no method that could synthesize novel photorealistic images. But the success of deep neural networks for discriminative learning opened up new possibilities. One big trend over the last three years has been the application of discriminative deep nets to overcome challenges in problems that we don't generally think of as supervised learning problems. The recurrent neural network language models are one example of using a discriminative network (trained to predict the next character) that once trained can act as a generative model.

In 2014, a young researcher named Ian Goodfellow introduced [Generative Adversarial Networks (GANs)](https://arxiv.org/abs/1406.2661), a clever new way to leverage the power of discriminative models to get good generative models. GANs made quite a splash, so it's quite likely you've seen the images before. For instance, using a GAN you can create fake images of bedrooms, as done by [Radford et al. in 2015](https://arxiv.org/pdf/1511.06434.pdf) and depicted below.

![](../img/fake_bedrooms.png)

At their heart, GANs rely on the idea that a data generator is good if we cannot tell fake data apart from real data. In statistics, this is called a two-sample test: a test to answer the question whether datasets $X = \{x_1, \ldots x_n\}$ and $X' = \{x_1', \ldots x_n'\}$ were drawn from the same distribution. The main difference between most statistics papers and GANs is that the latter use this idea in a constructive way. In other words, rather than just training a model to say 'hey, these two datasets don't look like they came from the same distribution', they use the two-sample test to provide a training signal to a generative model. This allows us to improve the data generator until it generates something that resembles the real data. At the very least, it needs to fool the classifier, even if our classifier is a state of the art deep neural network.

As you can see, there are two pieces to GANs. First off, we need a device (say, a deep network, but it really could be anything, such as a game rendering engine) that might potentially be able to generate data that looks just like the real thing.
If we are dealing with images, this needs to generate images. If we're dealing with speech, it needs to generate audio sequences, and so on. We call this the *generator network*. The second component is the *discriminator network*. It attempts to distinguish fake and real data from each other. Both networks are in competition with each other. The generator network attempts to fool the discriminator network. At that point, the discriminator network adapts to the new fake data. This information, in turn, is used to improve the generator network, and so on.

**Generator**

* Draw some parameter $z$ from a source of randomness, e.g. a normal distribution $z \sim \mathcal{N}(0,1)$.
* Apply a function $G$ (with parameters $w$) such that we get $x' = G(z, w)$.
* Compute the gradient with respect to $w$ to minimize $\log p(y = \mathrm{fake}|x')$.

**Discriminator**

* Improve the accuracy of a binary classifier $D$, i.e. maximize $\log p(y=\mathrm{fake}|x')$ and $\log p(y=\mathrm{true}|x)$ for fake and real data respectively.

![](../img/simple-gan.png)

In short, there are two optimization problems running simultaneously, and the optimization terminates if a stalemate has been reached. There are lots of further tricks and details on how to modify this basic setting. For instance, we could try solving this problem in the presence of side information. This leads to cGAN, i.e. conditional Generative Adversarial Networks. We can change the way we detect whether real and fake data look the same. This leads to wGAN (Wasserstein GAN), kernel-inspired GANs and lots of other settings. Or we could change how closely we look at the objects: e.g. fake images might look real at the texture level but not at the larger level, or vice versa.

Many of the applications are in the context of images. Since this takes too much time to solve in a Jupyter notebook on a laptop, we're going to content ourselves with fitting a much simpler distribution. We will illustrate what happens if we use GANs to build the world's most inefficient estimator of parameters for a Gaussian. Let's get started.
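Formally, the two updates above implement the minimax game of Goodfellow et al. (2014):

$$ \min_{G} \max_{D} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim \mathcal{N}(0,1)}\left[\log\left(1 - D(G(z))\right)\right] $$

In practice (and in the training loop below), the generator instead maximizes $\log D(G(z))$ rather than minimizing $\log(1-D(G(z)))$, which gives stronger gradients early in training.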
```python
from __future__ import print_function
import matplotlib as mpl
from matplotlib import pyplot as plt
import mxnet as mx
from mxnet import gluon, autograd, nd
from mxnet.gluon import nn
import numpy as np

ctx = mx.cpu()
```
*Source: `chapter14_generative-adversarial-networks/gan-intro.ipynb` in the `vishaalkapoor/mxnet-the-straight-dope` repository (Apache-2.0 license).*
Generate some 'real' data

Since this is going to be the world's lamest example, we simply generate data drawn from a Gaussian. And let's also set a context where we'll do most of the computation.
```python
X = nd.random_normal(shape=(1000, 2))
A = nd.array([[1, 2], [-0.1, 0.5]])
b = nd.array([1, 2])

X = nd.dot(X, A) + b
Y = nd.ones(shape=(1000, 1))

# and stick them into an iterator
batch_size = 4
train_data = mx.io.NDArrayIter(X, Y, batch_size, shuffle=True)
```
Let's see what we got. This should be a Gaussian shifted in some rather arbitrary way with mean $b$ and covariance matrix $A^\top A$.
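Why $A^\top A$: rows of `X` are draws $x \sim \mathcal{N}(0, I)$, and an affine map of a Gaussian is Gaussian, so

$$ xA + b \sim \mathcal{N}\left(b,\; A^{\top}A\right) $$

since $\mathrm{Cov}(xA) = A^{\top}\,\mathrm{Cov}(x)\,A = A^{\top}A$.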
```python
plt.scatter(X[:,0].asnumpy(), X[:,1].asnumpy())
plt.show()
print("The covariance matrix is")
print(nd.dot(A.T, A))
```
Defining the networks

Next we need to define how to fake data. Our generator network will be the simplest network possible: a single-layer linear model. This is because we'll be driving that linear network with a Gaussian data generator; hence, it literally only needs to learn the parameters to fake things perfectly. For the discriminator we will be a bit more discriminating: we will use an MLP with 3 layers to make things a bit more interesting. The cool thing here is that we have *two* different networks, each of them with their own gradients, optimizers, losses, etc. that we can optimize as we please.
```python
# build the generator
netG = nn.Sequential()
with netG.name_scope():
    netG.add(nn.Dense(2))

# build the discriminator (with 5 and 3 hidden units respectively)
netD = nn.Sequential()
with netD.name_scope():
    netD.add(nn.Dense(5, activation='tanh'))
    netD.add(nn.Dense(3, activation='tanh'))
    netD.add(nn.Dense(2))

# loss
loss = gluon.loss.SoftmaxCrossEntropyLoss()

# initialize the generator and the discriminator
netG.initialize(mx.init.Normal(0.02), ctx=ctx)
netD.initialize(mx.init.Normal(0.02), ctx=ctx)

# trainer for the generator and the discriminator
trainerG = gluon.Trainer(netG.collect_params(), 'adam', {'learning_rate': 0.01})
trainerD = gluon.Trainer(netD.collect_params(), 'adam', {'learning_rate': 0.05})
```
Setting up the training loop

We are going to iterate over the data a few times. To make life simpler we need a few variables.
```python
real_label = mx.nd.ones((batch_size,), ctx=ctx)
fake_label = mx.nd.zeros((batch_size,), ctx=ctx)
metric = mx.metric.Accuracy()

# set up logging
from datetime import datetime
import os
import time
```
Training loop
```python
stamp = datetime.now().strftime('%Y_%m_%d-%H_%M')

for epoch in range(10):
    tic = time.time()
    train_data.reset()
    for i, batch in enumerate(train_data):
        ############################
        # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
        ###########################
        # train with real data
        data = batch.data[0].as_in_context(ctx)
        noise = nd.random_normal(shape=(batch_size, 2), ctx=ctx)

        with autograd.record():
            real_output = netD(data)
            errD_real = loss(real_output, real_label)

            fake = netG(noise)
            fake_output = netD(fake.detach())
            errD_fake = loss(fake_output, fake_label)
            errD = errD_real + errD_fake
            errD.backward()

        trainerD.step(batch_size)
        metric.update([real_label,], [real_output,])
        metric.update([fake_label,], [fake_output,])

        ############################
        # (2) Update G network: maximize log(D(G(z)))
        ###########################
        with autograd.record():
            output = netD(fake)
            errG = loss(output, real_label)
            errG.backward()

        trainerG.step(batch_size)

    name, acc = metric.get()
    metric.reset()
    print('\nbinary training acc at epoch %d: %s=%f' % (epoch, name, acc))
    print('time: %f' % (time.time() - tic))
    noise = nd.random_normal(shape=(100, 2), ctx=ctx)
    fake = netG(noise)
    plt.scatter(X[:,0].asnumpy(), X[:,1].asnumpy())
    plt.scatter(fake[:,0].asnumpy(), fake[:,1].asnumpy())
    plt.show()
```
binary training acc at epoch 0: accuracy=0.764500
time: 5.838877
Apache-2.0
chapter14_generative-adversarial-networks/gan-intro.ipynb
vishaalkapoor/mxnet-the-straight-dope
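For reference, the two updates above implement the standard GAN minimax game

$$\min_G \max_D \; \mathbb{E}_{x}\left[\log D(x)\right] + \mathbb{E}_{z}\left[\log\left(1 - D(G(z))\right)\right].$$

The discriminator step ascends both terms. The generator step, rather than descending $\log(1 - D(G(z)))$, which saturates early in training, ascends $\log D(G(z))$: this is exactly what computing `loss(output, real_label)` on fake data achieves, since minimizing the cross-entropy against the "real" label is the same as maximizing $\log D(G(z))$.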
**Checking the outcome**

Let's now generate some fake data and check whether it looks real.
noise = mx.nd.random_normal(shape=(100, 2), ctx=ctx)
fake = netG(noise)
plt.scatter(X[:,0].asnumpy(), X[:,1].asnumpy())
plt.scatter(fake[:,0].asnumpy(), fake[:,1].asnumpy())
plt.show()
_____no_output_____
Apache-2.0
chapter14_generative-adversarial-networks/gan-intro.ipynb
vishaalkapoor/mxnet-the-straight-dope
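Beyond eyeballing the scatter plot, one can compare the sample statistics of the fake and real points. A quick check, added here for illustration (not in the original tutorial), that the generator recovered roughly the right mean and covariance:

import numpy as np

real_np, fake_np = X.asnumpy(), fake.asnumpy()
print("real mean:", real_np.mean(axis=0), " fake mean:", fake_np.mean(axis=0))
print("real cov:\n", np.cov(real_np, rowvar=False))  # columns are the two variables
print("fake cov:\n", np.cov(fake_np, rowvar=False))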
*This notebook is part of course materials for CS 345: Machine Learning Foundations and Practice at Colorado State University. Original versions were created by Asa Ben-Hur. The content is available [on GitHub](https://github.com/asabenhur/CS345).*

*The text is released under the [CC BY-SA license](https://creativecommons.org/licenses/by-sa/4.0/), and code is released under the [MIT license](https://opensource.org/licenses/MIT).*
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%autosave 0
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
**Evaluating classifiers: cross validation**

**Learning curves**

Intuitively, the more data we have available, the more accurate our classifiers become. To demonstrate this, let's read in some data and evaluate a k-nearest neighbor classifier on a fixed test set with an increasing number of training examples. The resulting curve of accuracy as a function of the number of examples is called a **learning curve**.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)

training_sizes = [20, 40, 100, 200, 400, 600, 800, 1000, 1200]

# note the use of the stratify keyword: it makes it so that each
# class is equally represented in both train and test set
X_full_train, X_test, y_full_train, y_test = train_test_split(
    X, y, test_size=len(y)-max(training_sizes),
    stratify=y, random_state=1)

accuracy = []
for training_size in training_sizes:
    X_train, _, y_train, _ = train_test_split(
        X_full_train, y_full_train,
        test_size=len(y_full_train)-training_size+10,
        stratify=y_full_train)
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(X_train, y_train)
    y_pred = knn.predict(X_test)
    accuracy.append(np.sum((y_pred==y_test))/len(y_test))

plt.figure(figsize=(6,4))
plt.plot(training_sizes, accuracy, 'ob')
plt.xlabel('training set size')
plt.ylabel('accuracy')
plt.ylim((0.5,1));
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
It's also instructive to look at the numbers themselves:
print ("# training examples\t accuracy") for i in range(len(accuracy)) : print ("\t{:d}\t\t {:f}".format(training_sizes[i], accuracy[i]))
# training examples	 accuracy
	20		 0.636516
	40		 0.889447
	100		 0.914573
	200		 0.953099
	400		 0.966499
	600		 0.979899
	800		 0.983250
	1000		 0.983250
	1200		 0.983250
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
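As an aside, scikit-learn also ships a utility, `learning_curve`, that automates this experiment and averages scores over cross-validation folds at each training-set size. A brief sketch (the curve above was computed manually, so this is an alternative, not the method used in this notebook):

from sklearn.model_selection import learning_curve

sizes, train_scores, test_scores = learning_curve(
    KNeighborsClassifier(n_neighbors=1), X, y,
    train_sizes=np.linspace(0.1, 1.0, 8), cv=5, scoring='accuracy')
plt.plot(sizes, test_scores.mean(axis=1), 'ob')  # mean test accuracy per size
plt.xlabel('training set size')
plt.ylabel('accuracy');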
**Exercise**

* What can you conclude from this plot?
* Why would you want to compute a learning curve on your data?

**Making better use of our data with cross validation**

The discussion above demonstrates that it is best to have as large a training set as possible. We also need a large enough test set so that the accuracy estimates are accurate. How do we balance these two contradictory requirements? Cross-validation provides a more effective way to make use of our data. Here it is:

**Cross validation**

* Randomly partition the data into $k$ subsets ("folds").
* Set one fold aside for evaluation, train a model on the remaining $k-1$ folds, and evaluate it on the held-out fold.
* Repeat until each fold has been used for evaluation.
* Compute accuracy by averaging over the accuracy estimates generated for each fold.

Here is an illustration of 8-fold cross validation (figure omitted).

As you can see, this procedure is more expensive than dividing your data into a train and test set. When dealing with relatively small datasets, which is when you want to use this procedure, this won't be an issue.

Typically cross-validation is used with the number of folds in the range of 5-10. An extreme case is when the number of folds equals the number of training examples. This special case is called *leave-one-out cross-validation*. A minimal from-scratch sketch of the procedure follows the imports below.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.model_selection import cross_val_score
from sklearn import metrics
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
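To make the recipe above concrete, here is a minimal from-scratch sketch of $k$-fold cross-validation (for illustration only; in practice use the scikit-learn utilities introduced below):

def manual_kfold_accuracy(clf, X, y, k=5, seed=0):
    rng = np.random.RandomState(seed)
    folds = np.array_split(rng.permutation(len(y)), k)  # k roughly equal index sets
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        clf.fit(X[train_idx], y[train_idx])
        scores.append(np.mean(clf.predict(X[test_idx]) == y[test_idx]))
    return np.mean(scores)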
Let's use the scikit-learn breast cancer dataset to demonstrate the use of cross-validation.
from sklearn.datasets import load_breast_cancer
data = load_breast_cancer()
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
A scikit-learn data object is a container object whose interesting attributes are:

* ‘data’, the data to learn,
* ‘target’, the classification labels,
* ‘target_names’, the meaning of the labels,
* ‘feature_names’, the meaning of the features, and
* ‘DESCR’, the full description of the dataset.
X = data.data
y = data.target
print('number of examples ', len(y))
print('number of features ', len(X[0]))
print(data.target_names)
print(data.feature_names)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)

classifier = KNeighborsClassifier(n_neighbors=3)
#classifier = LogisticRegression()
_ = classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
Let's compute the accuracy of our predictions:
np.mean(y_pred==y_test)
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
We can do the same using scikit-learn:
metrics.accuracy_score(y_test, y_pred)
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
Now let's compute accuracy using [cross_val_score](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) instead:
accuracy = cross_val_score(classifier, X, y, cv=5, scoring='accuracy')
print(accuracy)
[0.87719298 0.92105263 0.94736842 0.93859649 0.91150442]
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
This yields an array containing the accuracy values for each fold. When reporting your results, you will typically show the mean:
np.mean(accuracy)
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
The arguments of `cross_val_score`:

* A classifier (anything that satisfies the scikit-learn classifier API)
* data (features/labels)
* `cv`: an integer that specifies the number of folds (can be used in more sophisticated ways as we will see below).
* `scoring`: this determines which accuracy measure is evaluated for each fold. Here's a link to the [list of available measures](https://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter) in scikit-learn.

You can also compute other accuracy measures. *Balanced accuracy*, for example, is appropriate when the data is unbalanced (e.g. when one class contains a much larger number of examples than the other classes in the data).
accuracy = cross_val_score(classifier, X, y, cv=5, scoring='balanced_accuracy') np.mean(accuracy)
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
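For reference, balanced accuracy is the unweighted mean of per-class recall, so a large majority class cannot dominate the score:

$$\text{balanced accuracy} = \frac{1}{C}\sum_{c=1}^{C} \frac{\#\{\text{correctly predicted examples of class } c\}}{\#\{\text{examples of class } c\}}.$$

For binary problems this reduces to the average of sensitivity and specificity.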
`cross_val_score` is somewhat limited, in that it simply returns an array of accuracy scores. In practice, we often want to have more information about what happened during training, and also to compute multiple accuracy measures. `cross_validate` will provide you with that information:
results = cross_validate(classifier, X, y, cv=5, scoring='accuracy', return_estimator=True)
print(results)
{'fit_time': array([0.0010581 , 0.00099397, 0.00085902, 0.00093985, 0.00105977]), 'score_time': array([0.00650811, 0.00734997, 0.01043916, 0.00643301, 0.00529218]), 'estimator': (KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=3, p=2, weights='uniform'), KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=3, p=2, weights='uniform'), KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=3, p=2, weights='uniform'), KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=3, p=2, weights='uniform'), KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=None, n_neighbors=3, p=2, weights='uniform')), 'test_score': array([0.87719298, 0.92105263, 0.94736842, 0.93859649, 0.91150442])}
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
The object returned by `cross_validate` is a Python dictionary, as the output suggests. To extract a specific piece of data from this object, simply access the dictionary with the appropriate key:
results['test_score']
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
If you would like to know the predictions made for each training example during cross-validation, use [cross_val_predict](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_predict.html) instead:
from sklearn.model_selection import cross_val_predict

y_pred = cross_val_predict(classifier, X, y, cv=5)
metrics.accuracy_score(y, y_pred)
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
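One useful consequence of having a prediction for every example is that you can compute any example-level diagnostic over the whole dataset, for instance a confusion matrix:

print(metrics.confusion_matrix(y, y_pred))  # rows: true class, columns: predicted class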
The above way of performing cross-validation doesn't always give us enough control over the process: we usually want our machine learning experiments to be reproducible, and to be able to use the same cross-validation splits with multiple algorithms. The scikit-learn `KFold` and `StratifiedKFold` cross-validation generators are the way to achieve that.

`KFold` simply chooses a random subset of examples for each fold. This strategy can lead to cross-validation folds in which the classes are not well represented, as the following toy example demonstrates:
from sklearn.model_selection import StratifiedKFold, KFold

X_toy = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9,10], [11, 12]])
y_toy = np.array([0, 0, 1, 1, 1, 1])

cv = KFold(n_splits=2, random_state=3, shuffle=True)
for train_idx, test_idx in cv.split(X_toy, y_toy):
    print("train:", train_idx, "test:", test_idx)
    X_train, X_test = X_toy[train_idx], X_toy[test_idx]
    y_train, y_test = y_toy[train_idx], y_toy[test_idx]
    print(y_train)
train: [0 1 2] test: [3 4 5]
[0 0 1]
train: [3 4 5] test: [0 1 2]
[1 1 1]
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
`StratifiedKFold` addresses this issue by making sure that each class is represented in each fold in proportion to its overall fraction in the data. This is particularly important when one or more of the classes have few examples.

`StratifiedKFold` and `KFold` generate folds that can be used in conjunction with the cross-validation methods we saw above. As an example, we will demonstrate the use of `StratifiedKFold` with `cross_val_score` on the breast cancer dataset:
cv = StratifiedKFold(n_splits=5, random_state=1, shuffle=True)
accuracy = cross_val_score(classifier, X, y, cv=cv, scoring='accuracy')
np.mean(accuracy)
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
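Because the `cv` object produces the same splits every time it is used (its `random_state` is fixed), it can be shared across learning algorithms for a fair comparison. A short sketch using the `LogisticRegression` imported earlier (the `max_iter` value is an assumption, added because the default may not converge on these unscaled features):

for clf in [KNeighborsClassifier(n_neighbors=3), LogisticRegression(max_iter=5000)]:
    scores = cross_val_score(clf, X, y, cv=cv, scoring='accuracy')  # identical splits for both
    print(type(clf).__name__, np.mean(scores))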
For classification problems, `StratifiedKFold` is the preferred strategy. However, for regression problems `KFold` is the way to go.

**Question**

Why is `KFold` used in regression problems rather than `StratifiedKFold`?

To clarify the distinction between the different methods of generating cross-validation folds and their different parameters, let's look at the following figures:
# the code for the figure is adapted from
# https://scikit-learn.org/stable/auto_examples/model_selection/plot_cv_indices.html
np.random.seed(42)
cmap_data = plt.cm.Paired
cmap_cv = plt.cm.coolwarm
n_folds = 4

# Generate the data
X = np.random.randn(100, 10)

# generate labels - classes 0,1,2 and 10,30,60 examples, respectively
y = np.array([0] * 10 + [1] * 30 + [2] * 60)

def plot_cv_indices(cv, X, y, ax, n_folds):
    """plot the indices of a cross-validation object."""
    # Generate the training/testing visualizations for each CV split
    for ii, (tr, tt) in enumerate(cv.split(X=X, y=y)):
        # Fill in indices with the training/test groups
        indices = np.zeros(len(X))
        indices[tt] = 1
        # Visualize the results
        ax.scatter(range(len(indices)), [ii + .5] * len(indices),
                   c=indices, marker='_', lw=15, cmap=cmap_cv,
                   vmin=-.2, vmax=1.2)
    # Plot the data classes and groups at the end
    ax.scatter(range(len(X)), [ii + 1.5] * len(X),
               c=y, marker='_', lw=15, cmap=cmap_data)
    # Formatting
    yticklabels = list(range(n_folds)) + ['class']
    ax.set(yticks=np.arange(n_folds+2) + .5, yticklabels=yticklabels,
           xlabel='index', ylabel="CV fold",
           ylim=[n_folds+1.2, -.2], xlim=[0, 100])
    ax.set_title('{}'.format(type(cv).__name__), fontsize=15)
    return ax
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
Let's visualize the results of using `KFold` for fold generation:
fig, ax = plt.subplots()
cv = KFold(n_folds)
plot_cv_indices(cv, X, y, ax, n_folds);
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
As you can see, this naive way of using `KFold` can lead to highly undesirable splits into cross-validation folds. Using `StratifiedKFold` addresses this to some extent:
fig, ax = plt.subplots()
cv = StratifiedKFold(n_folds)
plot_cv_indices(cv, X, y, ax, n_folds);
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
Using `StratifiedKFold` with shuffling of the examples is the preferred way of splitting the data into folds:
fig, ax = plt.subplots()
cv = StratifiedKFold(n_folds, shuffle=True)
plot_cv_indices(cv, X, y, ax, n_folds);
_____no_output_____
MIT
notebooks/module05_01_cross_validation.ipynb
lottieandrews/CS345
**Lecture 3.3: Anomaly Detection**

[**Lecture Slides**](https://docs.google.com/presentation/d/1_0Z5Pc5yHA8MyEBE8Fedq44a-DcNPoQM1WhJN93p-TI/edit?usp=sharing)

In this lecture, we are going to use Gaussian distributions to detect anomalies in our emoji faces dataset.

**Learning goals:**

- Introduce an anomaly detection problem
- Implement Gaussian distribution anomaly detection for images
- Debug the optimisation of a learning algorithm
- Discuss the imperfection of learning algorithms
- Acknowledge other outlier detection methods

**1. Introduction**

We have an `emoji_faces` dataset of all our favourite emojis. However, Skynet hates their friendly expressiveness, and wants to destroy emojis forever! 🙀 It sent _terminator robots_ from the future to invade our dataset. We must act fast and detect them amongst the emojis to prevent the catastrophe. Our challenge here is that we don't watch many movies, so we don't have a clear idea of what those _terminators_ look like. 🤖 All we know is that they look very different compared to emojis, and that only a handful managed to infiltrate our dataset.

This is a typical scenario of _anomaly detection_. We would like to identify rare examples that differ from our "normal" data points. We choose to use a Gaussian distribution to model this "normality" and detect the killer robots.

**2. Data Munging**

First let's load the images using [pillow](https://pillow.readthedocs.io/en/stable/), like in lecture 2.5:
from PIL import Image
import glob

paths = glob.glob('emoji_faces/*.png')
images = [Image.open(path) for path in paths]
len(images)
_____no_output_____
CC-BY-4.0
data_analysis/3.3_anomaly_detection/anomaly_detection.ipynb
camille-vanhoffelen/modern-ML-engineer
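Before building the real detector, here is a minimal sketch of the idea we will pursue: fit a Gaussian to a feature vector per image, then flag the lowest-density points as anomalies. Everything below is an assumption for illustration (flattened grayscale pixels as features, a diagonal covariance, images of equal size); the lecture develops its own version step by step.

import numpy as np

# hypothetical features: one flattened grayscale image per row, scaled to [0, 1]
features = np.stack([np.asarray(img.convert('L')).ravel() / 255.0 for img in images])

mu = features.mean(axis=0)
var = features.var(axis=0) + 1e-6  # small floor keeps the densities well-defined

# log density under an independent (diagonal-covariance) Gaussian per pixel
log_density = -0.5 * (((features - mu) ** 2) / var + np.log(2 * np.pi * var)).sum(axis=1)
suspects = np.argsort(log_density)[:5]  # the 5 least emoji-like images
print([paths[i] for i in suspects])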