Looking at these, it appears that there is some overlap between the hospitals. Hospitals 17, 21, 42, and 95 are the four hospitals that appear in the top ten for both of these products. We will turn to a closer examination of these hospitals later on.
set(data.get_hospitals_by_product('product_1842').index.tolist()) & set(data.get_hospitals_by_product('product_1807').index.tolist())
reports/neiss.ipynb
minh5/cpsc
mit
It could be useful to compare stratum types: do large hospitals see different rates of injury than small hospitals? Another way of examining product harm would be not only to count the total numbers of products but also to see which product is reported most often for each hospital. Here we can look at not only the sheer...
data.top_product_for_hospital()
reports/neiss.ipynb
minh5/cpsc
mit
Another way of approaching this would be to fit a Negative Binomial regression to see whether there are any meaningful differences between the sizes of the hospitals. I use a negative binomial regression rather than a Poisson regression because there is strong evidence of overdispersion; that is, the variance of the data is much...
counts = data.data.ix[data.data['product'] == 'product_1842', :]['hospital'].value_counts()
print('variance of product 1842 counts:', np.var(counts.values))
print('mean of product 1842 counts:', np.mean(counts.values))
data.plot_stratum_dist('product_1842', 'S')
data.plot_stratum_dist('product_1842', 'M')
data.plot_...
reports/neiss.ipynb
minh5/cpsc
mit
From the model, we see that there is only a significant difference between Medium and Small hospitals. Given the coefficients, the log count difference between Medium and Small hospitals is -1.55. Other than that, there don't seem to be any other significant differences between hospital sizes for Product 1842. We can d...
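To make that coefficient easier to read, we can exponentiate the log count difference to get an incidence rate ratio (a quick numeric check; the -1.55 figure is taken from the model summary above):

```python
import numpy as np

# Log count difference between Medium and Small hospitals (from the model summary).
log_diff = -1.55

# Exponentiating a negative binomial coefficient gives a rate ratio:
# Medium hospitals report roughly a fifth as many cases as Small ones.
rate_ratio = np.exp(log_diff)
print(round(rate_ratio, 3))
```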
data.plot_stratum_dist('product_1807', 'S')
data.plot_stratum_dist('product_1807', 'M')
data.plot_stratum_dist('product_1807', 'L')
reports/neiss.ipynb
minh5/cpsc
mit
The assumptions have been met, and after building the model we see very similar results to the previous model: there are only significant differences between the small and large hospitals. For future research, we can use similar techniques to look for significant differences between hospital sizes across all products.
df2 = data.prepare_stratum_modeling('product_1807')
model = smf.glm("counts ~ stratum", data=df2, family=sm.families.NegativeBinomial()).fit()
model.summary()
reports/neiss.ipynb
minh5/cpsc
mit
Do we see meaningful trends when race is reported? From the top items, we don't see any meaningful differences between the top ten items for people whose race is reported versus not reported. Even among the data where we do have race reported, there doesn't seem to be much variation in the top ten produ...
data.retrieve_query('race_reported', 'reported', 'product')
data.retrieve_query('race_reported', 'not reported', 'product')
races = ['white', 'black', 'hispanic', 'other']
for race in races:
    print(race)
    print(data.retrieve_query('new_race', race, 'product'))
reports/neiss.ipynb
minh5/cpsc
mit
Integral 1 $$ I_1 = \int_0^a {\sqrt{a^2-x^2} dx} = \frac{\pi a^2}{4} $$
def integrand(x, a):
    return np.sqrt(a**2 - x**2)

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, a, args=(a,))
    return I

def integral_exact(a):
    return 0.25*np.pi

print("Numerical: ", integral_approx(1.0))
print("...
Integration/IntegrationEx02.ipynb
JAmarel/Phys202
mit
Integral 2 $$ I_2 = \int_0^{\frac{\pi}{2}} \sin^2{x}\, dx = \frac{\pi}{4} $$
def integrand(x):
    return np.sin(x)**2

def integral_approx():
    I, e = integrate.quad(integrand, 0, np.pi/2)
    return I

def integral_exact():
    return 0.25*np.pi

print("Numerical: ", integral_approx())
print("Exact : ", integral_exact())
assert True # leave this cell to grade the above integral
Integration/IntegrationEx02.ipynb
JAmarel/Phys202
mit
Integral 3 $$ I_3 = \int_0^{2\pi} \frac{dx}{a+b\sin{x}} = {\frac{2\pi}{\sqrt{a^2-b^2}}} $$
def integrand(x, a, b):
    return 1/(a + b*np.sin(x))

def integral_approx(a, b):
    I, e = integrate.quad(integrand, 0, 2*np.pi, args=(a, b))
    return I

def integral_exact(a, b):
    return 2*np.pi/np.sqrt(a**2 - b**2)

print("Numerical: ", integral_approx(10, 0))
print("Exact : ", integral_exact(10, 0))
assert True...
Integration/IntegrationEx02.ipynb
JAmarel/Phys202
mit
Integral 4 $$ I_4 = \int_0^{\infty} \frac{x}{e^{x}+1}\, dx = {\frac{\pi^2}{12}} $$
def integrand(x):
    return x/(np.exp(x)+1)

def integral_approx():
    I, e = integrate.quad(integrand, 0, np.inf)
    return I

def integral_exact():
    return (1/12)*np.pi**2

print("Numerical: ", integral_approx())
print("Exact : ", integral_exact())
assert True # leave this cell to grade the above integral
Integration/IntegrationEx02.ipynb
JAmarel/Phys202
mit
Integral 5 $$ I_5 = \int_0^{\infty} \frac{x}{e^{x}-1}\, dx = {\frac{\pi^2}{6}} $$
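A short analytic check of this value, expanding the integrand as a geometric series (a standard derivation, included here for reference):

$$ I_5 = \int_0^{\infty} \frac{x\,dx}{e^{x}-1} = \int_0^{\infty} x \sum_{n=1}^{\infty} e^{-nx}\,dx = \sum_{n=1}^{\infty} \frac{1}{n^2} = \zeta(2) = \frac{\pi^2}{6}, $$

using $\int_0^{\infty} x\,e^{-nx}\,dx = 1/n^2$ for each term.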
def integrand(x):
    return x/(np.exp(x)-1)

def integral_approx():
    I, e = integrate.quad(integrand, 0, np.inf)
    return I

def integral_exact():
    return (1/6)*np.pi**2

print("Numerical: ", integral_approx())
print("Exact : ", integral_exact())
assert True # leave this cell to grade the above integral
Integration/IntegrationEx02.ipynb
JAmarel/Phys202
mit
EXERCISE: Use any of the solvers we've seen thus far to find the minimum of the zimmermann function (i.e. use mystic.models.zimmermann as the objective). Use the bounds suggested below, if your choice of solver allows it.
import scipy.optimize as opt
import mystic.models

result = opt.minimize(mystic.models.zimmermann, [10., 1.], method='powell')
print(result.x)
solutions.ipynb
mmckerns/tutmom
bsd-3-clause
EXERCISE: Do the same for the fosc3d function found at mystic.models.fosc3d, using the bounds suggested by the documentation, if your chosen solver accepts bounds or constraints.
import scipy.optimize as opt
import mystic.models

result = opt.minimize(mystic.models.fosc3d, [-5., 0.5], method='powell')
print(result.x)
solutions.ipynb
mmckerns/tutmom
bsd-3-clause
EXERCISE: Use mystic to find the minimum for the peaks test function, with the bound specified by the mystic.models.peaks documentation.
import mystic
import mystic.models

result = mystic.solvers.fmin_powell(mystic.models.peaks, [0., -2.], bounds=[(-5., 5.)]*2)
print(result)
solutions.ipynb
mmckerns/tutmom
bsd-3-clause
EXERCISE: Use mystic to do a fit to the noisy data in the scipy.optimize.curve_fit example (the least squares fit).
import numpy as np
import scipy.stats as stats
from mystic.solvers import fmin_powell
from mystic import reduced

# Define the function to fit.
def function(coeffs, x):
    a, b, f, phi = coeffs
    return a * np.exp(-b * np.sin(f * x + phi))

# Create a noisy data set around the actual parameters
true_params = [3, 2, 1, ...
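For reference, the scipy.optimize.curve_fit least-squares fit that this exercise mirrors looks roughly like this (a sketch with an illustrative model and noise level, not the exact data from the exercise):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative model: exponential decay (not necessarily the exercise's model).
def f(x, a, b):
    return a * np.exp(-b * x)

# Synthetic noisy data around known parameters a=2.5, b=1.3.
rng = np.random.RandomState(0)
xdata = np.linspace(0, 4, 50)
ydata = f(xdata, 2.5, 1.3) + 0.05 * rng.normal(size=xdata.size)

# Least-squares fit of a and b to the noisy data.
popt, pcov = curve_fit(f, xdata, ydata, p0=[1.0, 1.0])
print(popt)
```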
solutions.ipynb
mmckerns/tutmom
bsd-3-clause
EXERCISE: Solve the chebyshev8.cost example exactly, by applying the knowledge that the last term in the chebyshev polynomial will always be one. Use numpy.round or mystic.constraints.integers to constrain solutions to the set of integers. Does using mystic.suppressed to suppress small numbers accelerate the solu...
# Differential Evolution solver
from mystic.solvers import DifferentialEvolutionSolver2
# Chebyshev polynomial and cost function
from mystic.models.poly import chebyshev8, chebyshev8cost
from mystic.models.poly import chebyshev8coeffs
# tools
from mystic.termination import VTR, CollapseAt, Or
from mystic.strategy imp...
solutions.ipynb
mmckerns/tutmom
bsd-3-clause
EXERCISE: Replace the symbolic constraints in the following "Pressure Vessel Design" code with explicit penalty functions (i.e. use a compound penalty built with mystic.penalty.quadratic_inequality).
"Pressure Vessel Design"

def objective(x):
    x0, x1, x2, x3 = x
    return 0.6224*x0*x2*x3 + 1.7781*x1*x2**2 + 3.1661*x0**2*x3 + 19.84*x0**2*x2

bounds = [(0, 1e6)]*4
# with penalty='penalty' applied, solution is:
xs = [0.72759093, 0.35964857, 37.69901188, 240.0]
ys = 5804.3762083

from mystic.constraints import as_cons...
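The core idea of a quadratic inequality penalty can be sketched without mystic, on a toy problem rather than the pressure vessel design itself: violating a constraint g(x) <= 0 adds k*max(0, g(x))**2 to the objective, so the unconstrained minimizer is pushed toward the feasible region.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return x[0]**2 + x[1]**2

def g(x):
    # Constraint g(x) <= 0, i.e. x0 + x1 >= 1.
    return 1.0 - x[0] - x[1]

def penalized(x, k=100.0):
    # Quadratic penalty: zero when feasible, grows quadratically with violation.
    return objective(x) + k * max(0.0, g(x))**2

# The constrained optimum of this toy problem is near (0.5, 0.5).
result = minimize(penalized, x0=[0.0, 0.0], method='Nelder-Mead')
print(result.x)
```

With a finite penalty weight k, the solution sits slightly inside the infeasible side; increasing k tightens it toward the true constrained optimum.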
solutions.ipynb
mmckerns/tutmom
bsd-3-clause
EXERCISE: Solve the cvxopt "qp" example with mystic. Use symbolic constraints, penalty functions, or constraints operators. If you finish quickly, try all three methods.
def objective(x):
    x0, x1 = x
    return 2*x0**2 + x1**2 + x0*x1 + x0 + x1

bounds = [(0.0, None), (0.0, None)]
# with penalty='penalty' applied, solution is:
xs = [0.25, 0.75]
ys = 1.875

from mystic.math.measures import normalize

def constraint(x):  # impose exactly
    return normalize(x, 1.0)

if __name__ == '__...
solutions.ipynb
mmckerns/tutmom
bsd-3-clause
EXERCISE: Convert one of our previous mystic examples to use parallel computing. Note that if the solver has a SetMapper method, it can take a parallel map.
from mystic.termination import VTR, ChangeOverGeneration, And, Or
stop = Or(And(VTR(), ChangeOverGeneration()), VTR(1e-8))

from mystic.models import rosen
from mystic.monitors import VerboseMonitor
from mystic.solvers import DifferentialEvolutionSolver2
from pathos.pools import ThreadPool

if __name__ == '__main__':...
solutions.ipynb
mmckerns/tutmom
bsd-3-clause
Rabi Oscillations $\hat{H} = \hat{H}_0 + \Omega \sin((\omega_0+\Delta)t) \hat{\sigma}_x$ $\hat{H}_0 = \frac{\omega_0}{2}\hat{\sigma}_z$
initial_state = basis(2, 0)
initial_state
ω0 = 1
Δ = 0.002
Ω = 0.005
ts = 6*np.pi/Ω*np.linspace(0, 1, 120)
H = ω0/2 * sigmaz() + Ω * sigmax() * sin((ω0+Δ)*t)
H
res = mesolve(H, [], initial_state, ts)
σz_expect = expect(sigmaz(), res)
res[20]
plt.plot(ts*Ω/np.pi, σz_expect, 'r.', label='numerical result')
Ωp = (Ω**2+...
examples/Lindblad_Master_Equation_Solver_Examples.ipynb
Krastanov/cutiepy
bsd-3-clause
With Rotating Wave Approximation $\hat{H}^\prime = e^{i\hat{H}_0 t}\hat{H} e^{-i\hat{H}_0 t} \approx \frac{\Delta}{2} \hat{\sigma}_z + \frac{\Omega}{2} \hat{\sigma}_x$
Hp = Δ/2 * sigmaz() + Ω/2 * sigmax()
Hp
res = mesolve(Hp, [], initial_state, ts)
σz_expect = expect(sigmaz(), res)
plt.plot(ts*Ω/np.pi, σz_expect, 'r.', label='numerical result')
Ωp = (Ω**2+Δ**2)**0.5
plt.plot(ts*Ω/np.pi, 1-(Ω/Ωp)**2*2*np.sin(Ωp*ts/2)**2, 'b-', label=r'$1-2(\Omega^\prime/\Omega)^2\sin^2(\Om...
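The analytic comparison curve above can be evaluated on its own with plain numpy, without the solver (same parameters as in the notebook; plotting omitted):

```python
import numpy as np

Ω = 0.005                       # drive strength
Δ = 0.002                       # detuning
Ωp = (Ω**2 + Δ**2)**0.5         # generalized Rabi frequency
ts = 6*np.pi/Ω * np.linspace(0, 1, 1200)

# Analytic <σz>(t) under the RWA for a qubit starting in |0>:
σz = 1 - 2*(Ω/Ωp)**2 * np.sin(Ωp*ts/2)**2

print(σz[0])      # starts at +1
print(σz.min())   # detuning limits the contrast to 2(Ω/Ωp)², so -1 is never reached
```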
examples/Lindblad_Master_Equation_Solver_Examples.ipynb
Krastanov/cutiepy
bsd-3-clause
With $\gamma_1$ collapse
γ1 = 0.2*Ω
c1 = γ1**0.5 * sigmam()
c1
res = mesolve(Hp, [c1], initial_state, ts)
σz_expect = expect(sigmaz(), res)
plt.plot(ts*Ω/np.pi, σz_expect, 'r.', label='numerical result')
plt.ylim(-1, 1)
plt.title(r'$\langle\sigma_z\rangle$-vs-$t\Omega/\pi$ at '
          r'$\Delta/\Omega=%.2f$ in RWA'%(Δ/Ω) + '\n' + ...
examples/Lindblad_Master_Equation_Solver_Examples.ipynb
Krastanov/cutiepy
bsd-3-clause
With $\gamma_2$ collapse
γ2 = 0.2*Ω
c2 = γ2**0.5 * sigmaz()
c2
res = mesolve(Hp, [c2], initial_state, ts)
σz_expect = expect(sigmaz(), res)
plt.plot(ts*Ω/np.pi, σz_expect, 'r.', label='numerical result')
plt.ylim(-1, 1)
plt.title(r'$\langle\sigma_z\rangle$-vs-$t\Omega/\pi$ at '
          r'$\Delta/\Omega=%.2f$ in RWA'%(Δ/Ω) + '\n' + ...
examples/Lindblad_Master_Equation_Solver_Examples.ipynb
Krastanov/cutiepy
bsd-3-clause
Coherent State in a Harmonic Oscillator $|\alpha\rangle$ evolving under $\hat{H} = \hat{n}$ coupled to a zero temperature heat bath $\kappa = 0.5$
N_cutoff = 40
α = 2.5
initial_state = coherent(N_cutoff, α)
initial_state
H = num(N_cutoff)
H
κ = 0.5
n_th = 0
c_down = (κ * (1 + n_th))**0.5 * destroy(N_cutoff)  # collapse amplitude is sqrt(rate)
c_down
ts = 2*np.pi*np.linspace(0, 1, 41)
res = mesolve(H, [c_down], initial_state, ts)
a = destroy(N_cutoff)
a_expect = expect(a, res, keep_complex=True)
pl...
examples/Lindblad_Master_Equation_Solver_Examples.ipynb
Krastanov/cutiepy
bsd-3-clause
Estimate aggregated features
from datetime import datetime, timedelta
from tqdm import tqdm
notebooks/14-KaggleCompetition.ipynb
albahnsen/ML_SecurityInformatics
mit
Split the data by account and set the date as the index
card_numbers = data['card_number'].unique()
data['trx_id'] = data.index
data.index = pd.DatetimeIndex(data['date'])
data_ = []
for card_number in tqdm(card_numbers):
    data_.append(data.query('card_number == ' + str(card_number)))
notebooks/14-KaggleCompetition.ipynb
albahnsen/ML_SecurityInformatics
mit
Create Aggregated Features for one account
res_agg = pd.DataFrame(index=data['trx_id'].values, columns=['Trx_sum_7D', 'Trx_count_1D'])
trx = data_[0]
for i in range(trx.shape[0]):
    date = trx.index[i]
    trx_id = int(trx.ix[i, 'trx_id'])
    # Sum 7 D
    agg_ = trx[date-pd.datetools.to_offset('7D').delta:date-timedelta(0, 0, 1)]
    ...
notebooks/14-KaggleCompetition.ipynb
albahnsen/ML_SecurityInformatics
mit
All accounts
for trx in tqdm(data_):
    for i in range(trx.shape[0]):
        date = trx.index[i]
        trx_id = int(trx.ix[i, 'trx_id'])
        # Sum 7 D
        agg_ = trx[date-pd.datetools.to_offset('7D').delta:date-timedelta(0, 0, 1)]
        res_agg.loc[trx_id, 'Trx_sum_7D'] = agg_['amount'].sum()
        # Count 1D
        ...
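On current pandas versions, the same per-account aggregate can be sketched with offset-based rolling windows instead of the explicit loop (a hypothetical simplification with toy data, not the competition set; `closed='left'` excludes the current transaction, mirroring the `date - 7D` to `date - 1s` slice above):

```python
import pandas as pd

# Toy transactions for one account (illustrative data).
trx = pd.DataFrame(
    {'amount': [10.0, 20.0, 30.0, 40.0]},
    index=pd.DatetimeIndex(['2020-01-01', '2020-01-03',
                            '2020-01-05', '2020-01-20']),
)

# Sum of amounts in the preceding 7 days, current transaction excluded.
trx['Trx_sum_7D'] = (
    trx['amount'].rolling('7D', closed='left').sum().fillna(0.0)
)
print(trx['Trx_sum_7D'].tolist())
```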
notebooks/14-KaggleCompetition.ipynb
albahnsen/ML_SecurityInformatics
mit
Split train and test
X = data.loc[~data.fraud.isnull()]
y = X.fraud
X = X.drop(['fraud', 'date', 'card_number'], axis=1)
X_kaggle = data.loc[data.fraud.isnull()]
X_kaggle = X_kaggle.drop(['fraud', 'date', 'card_number'], axis=1)
X_kaggle.head()
notebooks/14-KaggleCompetition.ipynb
albahnsen/ML_SecurityInformatics
mit
Simple Random Forest
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, class_weight='balanced')
from sklearn.metrics import fbeta_score
notebooks/14-KaggleCompetition.ipynb
albahnsen/ML_SecurityInformatics
mit
KFold cross-validation
from sklearn.cross_validation import KFold
kf = KFold(X.shape[0], n_folds=5)
res = []
for train, test in kf:
    X_train, X_test, y_train, y_test = X.iloc[train], X.iloc[test], y.iloc[train], y.iloc[test]
    clf.fit(X_train, y_train)
    y_pred_proba = clf.predict_proba(X_test)[:, 1]
    y_pred = (y_pred_proba>0.05)....
notebooks/14-KaggleCompetition.ipynb
albahnsen/ML_SecurityInformatics
mit
Train with all the data, predict, and submit to Kaggle
clf.fit(X, y)
y_pred = clf.predict_proba(X_kaggle)[:, 1]
y_pred = (y_pred>0.05).astype(int)
y_pred = pd.Series(y_pred, name='fraud', index=X_kaggle.index)
y_pred.head(10)
y_pred.to_csv('fraud_transactions_kaggle_1.csv', header=True, index_label='ID')
notebooks/14-KaggleCompetition.ipynb
albahnsen/ML_SecurityInformatics
mit
Vertex AI Pipelines: AutoML text classification pipelines using google-cloud-pipeline-components <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb"> <im...
import os

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Install the latest GA version of google-cloud-storage library as well.
! pip3 install -U google-cloud-storage $USER_FLAG
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Install the latest GA version of google-cloud-pipeline-components library as well.
! pip3 install $USER_FLAG kfp google-cloud-pipeline-components --upgrade
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
! python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
! python3 -c "import google_cloud_pipeline_components; print('google_cloud_pipeline_components version: {}'.format(google_cloud_pipeline_components.__version__))"
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Before you begin GPU runtime This tutorial does not require a GPU runtime. Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage cost...
PROJECT_ID = "[your-project-id]"  # @param {type:"string"}

if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID:...
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Authenticate your Google Cloud account If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go t...
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys

# If on Google Cloud Notebook, then don't execute this code i...
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. S...
BUCKET_NAME = "gs://[your-bucket-name]"  # @param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Service Account If you don't know your service account, you can retrieve it with the gcloud command by executing the second cell below.
SERVICE_ACCOUNT = "[your-service-account]"  # @param {type:"string"}

if (
    SERVICE_ACCOUNT == ""
    or SERVICE_ACCOUNT is None
    or SERVICE_ACCOUNT == "[your-service-account]"
):
    # Get your GCP project id from gcloud
    shell_output = !gcloud auth list 2>/dev/null
    SERVICE_ACCOUNT = shell_output[2].strip...
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Set service account access for Vertex AI Pipelines Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Vertex AI Pipelines constants Set up the following constants for Vertex AI Pipelines:
PIPELINE_ROOT = "{}/pipeline_root/happydb".format(BUCKET_NAME)
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Additional imports.
import kfp
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Initialize Vertex AI SDK for Python Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Define AutoML text classification model pipeline that uses components from google_cloud_pipeline_components Next, you define the pipeline. Create and deploy an AutoML text classification Model resource using a Dataset resource.
IMPORT_FILE = "gs://cloud-ml-data/NL-classification/happiness.csv"

@kfp.dsl.pipeline(name="automl-text-classification" + TIMESTAMP)
def pipeline(
    project: str = PROJECT_ID, region: str = REGION, import_file: str = IMPORT_FILE
):
    from google_cloud_pipeline_components import aiplatform as gcc_aip
    from googl...
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Compile the pipeline Next, compile the pipeline.
from kfp.v2 import compiler  # noqa: F811

compiler.Compiler().compile(
    pipeline_func=pipeline,
    package_path="text classification_pipeline.json".replace(" ", "_"),
)
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Run the pipeline Next, run the pipeline.
DISPLAY_NAME = "happydb_" + TIMESTAMP

job = aip.PipelineJob(
    display_name=DISPLAY_NAME,
    template_path="text classification_pipeline.json".replace(" ", "_"),
    pipeline_root=PIPELINE_ROOT,
    enable_caching=False,
)

job.run()

! rm text_classification_pipeline.json
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Click on the generated link to see your run in the Cloud Console. <!-- It should look something like this as it is running: <a href="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" width="4...
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True

try:
    if delete_model and "DISPLAY_NAME" in globals():
        models = aip.Model.list(
            filter=f"display_name={DISPLAY_NAME}",
            ...
notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
K-fold CV The K-fold CV (cross-validation) method splits the data set into K subsets. Each of K models is estimated using K-1 of the subsets as the training set, holding one subset out each time. <img src="https://docs.google.com/drawings/d/1JdgUDzuE75LBxqT5sKOhlPgP6umEkvD3Sm-gKnu-jqA/pub?w=762&h=651" style="margin: 0 auto 0 auto;"> Scikit-Learn's cross_validation ...
N = 5
X = np.arange(8 * N).reshape(-1, 2) * 10
y = np.hstack([np.ones(N), np.ones(N) * 2, np.ones(N) * 3, np.ones(N) * 4])
print("X:\n", X, sep="")
print("y:\n", y, sep="")

from sklearn.cross_validation import KFold
cv = KFold(len(X), n_folds=3, random_state=0)
for train_index, test_index in cv:
    print("test y:", ...
16. 과최적화와 정규화/02. 교차 검증.ipynb
zzsza/Datascience_School
mit
Stratified K-Fold Ensures that the target classes are not concentrated in any single fold.
from sklearn.cross_validation import StratifiedKFold
cv = StratifiedKFold(y, n_folds=3, random_state=0)
for train_index, test_index in cv:
    print("test X:\n", X[test_index])
    print("." * 80)
    print("test y:", y[test_index])
    print("=" * 80)
16. 과최적화와 정규화/02. 교차 검증.ipynb
zzsza/Datascience_School
mit
Leave-One-Out (LOO) Holds out only a single sample as the test set.
from sklearn.cross_validation import LeaveOneOut
cv = LeaveOneOut(5)
for train_index, test_index in cv:
    print("test X:", X[test_index])
    print("." * 80)
    print("test y:", y[test_index])
    print("=" * 80)
16. 과최적화와 정규화/02. 교차 검증.ipynb
zzsza/Datascience_School
mit
Label K-Fold Ensures that the same label does not appear in both the training and test sets at once, minimizing the influence of the labels.
from sklearn.cross_validation import LabelKFold
cv = LabelKFold(y, n_folds=3)
for train_index, test_index in cv:
    print("test y:", y[test_index])
    print("." * 80)
    print("train y:", y[train_index])
    print("=" * 80)
16. 과최적화와 정규화/02. 교차 검증.ipynb
zzsza/Datascience_School
mit
ShuffleSplit Allows data to be duplicated across splits.
from sklearn.cross_validation import ShuffleSplit
cv = ShuffleSplit(5)
for train_index, test_index in cv:
    print("test X:", X[test_index])
    print("=" * 20)
16. 과최적화와 정규화/02. 교차 검증.ipynb
zzsza/Datascience_School
mit
Running cross-validation CV merely splits the data set. To actually measure a model's performance (bias error and variance), the evaluation must be repeated on the resulting splits. The command that automates this process is cross_val_score(). cross_val_score(estimator, X, y=None, scoring=None, cv=None) splits the X, y data using the cross-validation iterator cv, fits the estimator on each split, and repeats the computation of the scoring metric. Arguments estimator : ‘f...
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

X, y, coef = make_regression(n_samples=1000, n_features=1, noise=20, coef=True, random_state=0)
model = LinearRegression()
cv = KFold(1000, 10)
scores = np.zeros(10)
for i, (tra...
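The manual loop above is exactly what cross_val_score() automates. A compact equivalent on the same kind of synthetic regression data (a sketch using the current sklearn.model_selection API rather than the deprecated sklearn.cross_validation module shown above):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=1000, n_features=1, noise=20, random_state=0)
model = LinearRegression()

# 10-fold CV scored with (negated) mean squared error, in one call.
scores = cross_val_score(model, X, y, cv=KFold(n_splits=10),
                         scoring='neg_mean_squared_error')
print(len(scores))       # one score per fold
print(np.mean(-scores))  # average MSE, roughly noise**2 for this data
```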
16. 과최적화와 정규화/02. 교차 검증.ipynb
zzsza/Datascience_School
mit
Camera Calibration with OpenCV Run the code in the cell below to extract object points and image points for camera calibration.
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
%matplotlib qt

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*8, 3), np.float32)
objp[:, :2] = np.mgrid[0:8, 0:6].T.reshape(-1, 2)

# Arrays to store object points and image points from all the images.
objpo...
CarND-Camera-Calibration/camera_calibration.ipynb
phuongxuanpham/SelfDrivingCar
gpl-3.0
IO: Reading and preprocessing the data We can define a function that will read and process the data.
def read_spectra(path_csv):
    """Read and parse data in pandas DataFrames.

    Parameters
    ----------
    path_csv : str
        Path to the CSV file to read.

    Returns
    -------
    spectra : pandas DataFrame, shape (n_spectra, n_freq_point)
        DataFrame containing all Raman spectra.
    ...
Day_2_Software_engineering_best_practices/solutions/03_code_style.ipynb
paris-saclay-cds/python-workshop
bsd-3-clause
Plot helper functions We can create two functions: (i) to plot all spectra and (ii) to plot the mean spectra with the std intervals. We will make a "private" function that is used by both plot types.
def _apply_axis_layout(ax, title):
    """Apply despine style and add labels to axis."""
    ax.set_xlabel('Frequency')
    ax.set_ylabel('Intensity')
    ax.set_title(title)
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    ax.get_xaxis().tick_bottom()
    ax.get_yaxis().tick_left()...
Day_2_Software_engineering_best_practices/solutions/03_code_style.ipynb
paris-saclay-cds/python-workshop
bsd-3-clause
Reusability for new data:
spectra_test, concentration_test, molecule_test = read_spectra('data/spectra_4.csv')
plot_spectra(frequency, spectra_test, 'All training spectra')
plot_spectra_by_type(frequency, spectra_test, molecule_test,
                     'Mean spectra in function of the molecules')
plot_spectra_by_type(frequency, ...
Day_2_Software_engineering_best_practices/solutions/03_code_style.ipynb
paris-saclay-cds/python-workshop
bsd-3-clause
Training and testing a machine learning model for classification
def plot_cm(cm, classes, title):
    """Plot a confusion matrix.

    Parameters
    ----------
    cm : ndarray, shape (n_classes, n_classes)
        Confusion matrix.

    classes : array-like, shape (n_classes,)
        Array containing the different spectra classes used in the classification prob...
Day_2_Software_engineering_best_practices/solutions/03_code_style.ipynb
paris-saclay-cds/python-workshop
bsd-3-clause
Training and testing a machine learning model for regression
def plot_regression(y_true, y_pred, title):
    """Plot actual vs. predicted scatter plot.

    Parameters
    ----------
    y_true : array-like, shape (n_samples,)
        Ground truth (correct) target values.

    y_pred : array-like, shape (n_samples,)
        Estimated targets as returned by a regressor.
    ...
Day_2_Software_engineering_best_practices/solutions/03_code_style.ipynb
paris-saclay-cds/python-workshop
bsd-3-clause
Example of a client socket that sends data
msg = b'GET /ETS/media/Prive/logo/ETS-rouge-devise-ecran.jpg HTTP/1.1\r\nHost:etsmtl.ca\r\n\r\n'
sock.sendall(msg)
IntroductionIOAsync/IntroductionIOAsync.ipynb
luctrudeau/Teaching
lgpl-3.0
Example of a socket that receives data
recvd = b''
while True:
    data = sock.recv(1024)
    if not data:
        break
    recvd += data

sock.shutdown(1)
sock.close()

response = recvd.split(b'\r\n\r\n', 1)
Image(data=response[1])
IntroductionIOAsync/IntroductionIOAsync.ipynb
luctrudeau/Teaching
lgpl-3.0
Although the versions of the socket interface have evolved over the years, especially on object-oriented platforms, the essence of the 1983 interface remains very much present in modern implementations. 2.3.1.5. Making connections connect(s, name, namelen); 2.3.1.6. Sending and receiving data cc = sendto(s, buf, len...
import selectors
import socket
import errno

sel = selectors.DefaultSelector()

def connector(sock, mask):
    msg = b'GET /ETS/media/Prive/logo/ETS-rouge-devise-ecran.jpg HTTP/1.1\r\nHost:etsmtl.ca\r\n\r\n'
    sock.sendall(msg)
    # The connector is responsible
    # for instantiating a new Handler
    # and ...
IntroductionIOAsync/IntroductionIOAsync.ipynb
luctrudeau/Teaching
lgpl-3.0
Creating an asynchronous socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setblocking(False)
try:
    sock.connect(("etsmtl.ca", 80))
except socket.error:
    pass  # The exception is always raised!
          # That's expected: the OS is warning us that
          # we are not connected yet
IntroductionIOAsync/IntroductionIOAsync.ipynb
luctrudeau/Teaching
lgpl-3.0
Registering the Connector
# The application registers the Connector
sel.register(sock, selectors.EVENT_WRITE, connector)

# The Reactor
while len(sel.get_map()):
    events = sel.select()
    for key, mask in events:
        handleEvent = key.data
        handleEvent(key.fileobj, mask)
IntroductionIOAsync/IntroductionIOAsync.ipynb
luctrudeau/Teaching
lgpl-3.0
Unit Test The following unit test is expected to fail until you solve the challenge.
# %load test_check_balance.py
from nose.tools import assert_equal

class TestCheckBalance(object):

    def test_check_balance(self):
        node = Node(5)
        insert(node, 3)
        insert(node, 8)
        insert(node, 1)
        insert(node, 4)
        assert_equal(check_balance(node), True)

        node = No...
interactive-coding-challenges/graphs_trees/check_balance/check_balance_challenge.ipynb
ThunderShiviah/code_guild
mit
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
y_hat = tf.constant(36, name='y_hat')           # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y')                   # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer()        # When init is run later (sessi...
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
Writing and running programs in TensorFlow has the following steps: Create Tensors (variables) that are not yet executed/evaluated. Write operations between those Tensors. Initialize your Tensors. Create a Session. Run the Session. This will run the operations you'd written above. Therefore, when we created a var...
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a, b)
print(c)
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
As expected, you will not see 20! You get back a tensor: the result has no value yet, no shape attribute, and is of type "int32". All you did was build the 'computation graph'; you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a sessio...
sess = tf.Session()
print(sess.run(c))
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. To specify values for a placeholder, you can pass in values by using a "feed dic...
# Change the value of x in the feed_dict x = tf.placeholder(tf.int64, name = 'x') print(sess.run(2 * x, feed_dict = {x: 3})) sess.close()
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
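The placeholder/feed-dict pattern has the same shape as a plain function whose input is supplied only at call time — again an analogy, not TensorFlow's machinery:

```python
# A placeholder is an input with no value yet; the feed dict supplies it at run time.
def double(feed_dict):
    return 2 * feed_dict['x']

print(double({'x': 3}))  # 6, analogous to sess.run(2 * x, feed_dict={x: 3})
```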
When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session. Here's what's happening: When you specify the operations needed for a comp...
# GRADED FUNCTION: linear_function def linear_function(): """ Implements a linear function: Initializes W to be a random tensor of shape (4,3) Initializes X to be a random tensor of shape (3,1) Initializes b to be a random tensor of shape (4,1) Returns: result -- r...
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
Expected Output : <table> <tr> <td> **result** </td> <td> [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]] </td> </tr> </table> 1.2 - Computing the sigmoid Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softma...
# GRADED FUNCTION: sigmoid def sigmoid(z): """ Computes the sigmoid of z Arguments: z -- input value, scalar or vector Returns: results -- the sigmoid of z """ ### START CODE HERE ### ( approx. 4 lines of code) # Create a placeholder for x. Name it 'x'. x = tf.pl...
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
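The expected sigmoid values can be verified independently of TensorFlow with a direct NumPy computation:

```python
import numpy as np

def sigmoid_ref(z):
    """Reference sigmoid; numerically fine for the values used here."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid_ref(0))             # 0.5
print(round(sigmoid_ref(12), 6))  # 0.999994
```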
Expected Output : <table> <tr> <td> **sigmoid(0)** </td> <td> 0.5 </td> </tr> <tr> <td> **sigmoid(12)** </td> <td> 0.999994 </td> </tr> </table> <font color='blue'> To summarize, you now know how to: 1. Create placeholders 2. Specify the computation graph corresponding to operations you want to compute 3. Create...
# GRADED FUNCTION: cost def cost(logits, labels): """ Computes the cost using the sigmoid cross entropy Arguments: logits -- vector containing z, output of the last linear unit (before the final sigmoid activation) labels -- vector of labels y (1 or 0) Note: What we've been calling "...
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
Expected Output : <table> <tr> <td> **cost** </td> <td> [ 1.00538719 1.03664088 0.41385433 0.39956614] </td> </tr> </table> 1.4 - Using One Hot encodings Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C i...
# GRADED FUNCTION: one_hot_matrix def one_hot_matrix(labels, C): """ Creates a matrix where the i-th row corresponds to the ith class number and the jth column corresponds to the jth training example. So if example j had a label i. Then entry (i,j) will be 1. ...
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
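The expected one-hot matrix can be reproduced in NumPy. The label vector below is inferred from the expected output (the test cell itself is truncated, so treat it as an assumption):

```python
import numpy as np

def one_hot_matrix_ref(labels, C):
    """Row i = class i, column j = example j; entry (i, j) is 1 iff labels[j] == i."""
    return np.eye(C)[labels].T

labels = np.array([1, 2, 3, 0, 2, 1])   # assumed from the expected output above
one_hot = one_hot_matrix_ref(labels, C=4)
print(one_hot)   # matches the 4x6 matrix in the expected output (as floats)
```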
Expected Output: <table> <tr> <td> **one_hot** </td> <td> [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]] </td> </tr> </table> 1.5 - Initialize with zeros and ones Now you will learn how to init...
# GRADED FUNCTION: ones def ones(shape): """ Creates an array of ones of dimension shape Arguments: shape -- shape of the array you want to create Returns: ones -- array containing only ones """ ### START CODE HERE ### # Create "ones" tensor using tf.ones(.....
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
Expected Output: <table> <tr> <td> **ones** </td> <td> [ 1. 1. 1.] </td> </tr> </table> 2 - Building your first neural network in tensorflow In this part of the assignment you will build a neural network using tensorflow. Remember that there are two part...
# Loading the dataset X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
Change the index below and run the cell to visualize some examples in the dataset.
# Example of a picture index = 0 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
# Flatten the training and test images X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T # Normalize image vectors X_train = X_train_flatten/255. X_test = X_test_flatten/255. # Convert training and test labels to one hot matrices Y_train...
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
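The reshape-and-transpose step can be sanity-checked on dummy data of the same shape (the real arrays from `load_dataset` are assumed to be uint8 images):

```python
import numpy as np

m = 5  # pretend we have 5 training images
X_orig = np.random.randint(0, 256, size=(m, 64, 64, 3)).astype(np.uint8)

X_flat = X_orig.reshape(X_orig.shape[0], -1).T   # one column per example
X = X_flat / 255.                                # scale pixels into [0, 1]

print(X_flat.shape)                        # (12288, 5): 64 * 64 * 3 features per example
print(X.min() >= 0.0 and X.max() <= 1.0)   # True
```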
Note that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflo...
# GRADED FUNCTION: create_placeholders def create_placeholders(n_x, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288) n_y -- scalar, number of classes (from 0 to 5, so -> 6) Returns:...
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
Expected Output: <table> <tr> <td> **X** </td> <td> Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) </td> </tr> <tr> <td> **Y** </td> <td> Tensor("Placeholder_2:0", ...
# GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes parameters to build a neural network with tensorflow. The shapes are: W1 : [25, 12288] b1 : [25, 1] W2 : [12, 25] b2 : [12, 1] ...
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
Expected Output: <table> <tr> <td> **W1** </td> <td> < tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref > </td> </tr> <tr> <td> **b1** </td> <td> < tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref > ...
# GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters, keep_prob): """ Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters ...
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
Expected Output: <table> <tr> <td> **Z3** </td> <td> Tensor("Add_2:0", shape=(6, ?), dtype=float32) </td> </tr> </table> You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropaga...
# GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the c...
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
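The softmax cross-entropy cost that `compute_cost` produces can be sketched in NumPy as a reference check — this mirrors what TensorFlow computes, not the graded implementation itself:

```python
import numpy as np

def softmax_cross_entropy(Z, Y):
    """Mean cross-entropy of softmax(Z) against one-hot labels Y.
    Z and Y have shape (classes, examples), matching Z3 and Y above."""
    Z_shift = Z - Z.max(axis=0, keepdims=True)  # subtract max for numerical stability
    log_probs = Z_shift - np.log(np.exp(Z_shift).sum(axis=0, keepdims=True))
    return -(Y * log_probs).sum(axis=0).mean()

# Two classes, one example, equal logits: the cost should be -log(0.5)
Z = np.array([[0.0], [0.0]])
Y = np.array([[1.0], [0.0]])
print(round(softmax_cross_entropy(Z, Y), 4))  # 0.6931
```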
Expected Output: <table> <tr> <td> **cost** </td> <td> Tensor("Mean:0", shape=(), dtype=float32) </td> </tr> </table> 2.5 - Backward propagation & parameter updates This is where you become grateful to programming frameworks. All the backpropagation and t...
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001, num_epochs = 3000, minibatch_size = 32, print_cost = True): """ Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX. Arguments: X_train -- training set, of shape (input size = 1...
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break...
parameters = model(X_train, Y_train, X_test, Y_test)
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
Expected Output: <table> <tr> <td> **Train Accuracy** </td> <td> 0.999074 </td> </tr> <tr> <td> **Test Accuracy** </td> <td> 0.716667 </td> </tr> </table> Amazing, your algorithm can recognize a ...
import scipy from PIL import Image from scipy import ndimage ## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "thumbs_up.jpg" ## END CODE HERE ## # We preprocess your image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(...
deep_learning_ai/Tensorflow+Tutorial+dropout.ipynb
trangel/Data-Science
gpl-3.0
For a detailed explanation of the above, please refer to Rates Information.
response = oanda.create_order(account_id, instrument = "AUD_USD", units=1000, side="buy", type="limit", price=0.7420, expiry=trade_expire) p...
Oanda v1 REST-oandapy/04.00 Order Management.ipynb
anthonyng2/FX-Trading-with-Python-and-Oanda
mit
Getting Open Orders get_orders(self, account_id, **params)
response = oanda.get_orders(account_id) print(response) pd.DataFrame(response['orders'])
Oanda v1 REST-oandapy/04.00 Order Management.ipynb
anthonyng2/FX-Trading-with-Python-and-Oanda
mit
Getting Specific Order Information get_order(self, account_id, order_id, **params)
response = oanda.get_orders(account_id) id = response['orders'][0]['id'] oanda.get_order(account_id, order_id=id)
Oanda v1 REST-oandapy/04.00 Order Management.ipynb
anthonyng2/FX-Trading-with-Python-and-Oanda
mit
Modify Order modify_order(self, account_id, order_id, **params)
response = oanda.get_orders(account_id) id = response['orders'][0]['id'] oanda.modify_order(account_id, order_id=id, price=0.7040)
Oanda v1 REST-oandapy/04.00 Order Management.ipynb
anthonyng2/FX-Trading-with-Python-and-Oanda
mit
Close Order close_order(self, account_id, order_id, **params)
response = oanda.get_orders(account_id) id = response['orders'][0]['id'] oanda.close_order(account_id, order_id=id)
Oanda v1 REST-oandapy/04.00 Order Management.ipynb
anthonyng2/FX-Trading-with-Python-and-Oanda
mit
Now when we check the orders, we see that the above order has been closed and removed without being filled. There is only one outstanding order now.
oanda.get_orders(account_id)
Oanda v1 REST-oandapy/04.00 Order Management.ipynb
anthonyng2/FX-Trading-with-Python-and-Oanda
mit
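The lifecycle walked through above — create, list, modify, close — can be exercised end to end against a small in-memory stand-in. The method names mirror the oandapy calls used in this notebook, but everything else (the response shapes, the `FakeOandaClient` class itself) is an invented sketch, not the real API:

```python
class FakeOandaClient:
    """In-memory stand-in mimicking the oandapy order methods used above.
    Purely illustrative -- this is not part of the oandapy library."""

    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def create_order(self, account_id, **params):
        order = dict(params, id=self._next_id)
        self._orders[self._next_id] = order
        self._next_id += 1
        return {'orderOpened': order}

    def get_orders(self, account_id):
        return {'orders': list(self._orders.values())}

    def modify_order(self, account_id, order_id, **params):
        self._orders[order_id].update(params)
        return self._orders[order_id]

    def close_order(self, account_id, order_id):
        return self._orders.pop(order_id)

oanda = FakeOandaClient()
oanda.create_order('demo', instrument='AUD_USD', units=1000, side='buy',
                   type='limit', price=0.7420)
order_id = oanda.get_orders('demo')['orders'][0]['id']
oanda.modify_order('demo', order_id=order_id, price=0.7040)
oanda.close_order('demo', order_id=order_id)
print(oanda.get_orders('demo'))  # {'orders': []}
```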
This is a bog standard user registration endpoint. We create a form, check if it's valid, shove that information on a user model and then into the database and redirect off. If it's not valid or if it wasn't submitted (the user just navigated to the page), we render out some HTML. It's all very basic, well trodden code...
@mock.patch('myapp.views.RegisterUserForm') @mock.patch('myapp.views.db') @mock.patch('myapp.views.redirect') @mock.patch('myapp.views.url_for') @mock.patch('myapp.views.render_template') def test_register_new_user(render, url_for, redirect, db, form): # TODO: Write test assert True
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
What's even the point of this? We're just testing if Mock works at this point. There are actual things we can do to make it more testable, but before delving into that, consider what's wrong with the current code: it hides logic. If registering a user were solely about "fill this form out and we'll shove it into a database," there wouldn't be a blog post here. Howev...
class RegisterUserForm(Form): def validate_username(self, field): if User.query.filter(User.username == field.data).count(): raise ValidationError("Username in use already") def validate_email(self, field): if User.query.filter(User.email == field.data).count(): rais...
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
When we call RegisterUserForm.validate_on_submit it also runs these two methods. However, I'm not of the opinion that the form should talk to the database at all, let alone run validation against database contents. So, let's write a little test harness that can prove that an existing user with a given username and emai...
from myapp.forms import RegisterUserForm from myapp.models import User from collections import namedtuple from unittest import mock FakeData = namedtuple('User', ['username', 'email', 'password', 'confirm_password']) def test_existing_username_fails_validation(): test_data = FakeData('fred', 'fred@fred.com', 'a...
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
If these pass -- which they should, but you may have to install mock if you're not on Python 3 -- I think we should move the username and email validation into their own callables that are independently testable:
def is_username_free(username): return User.query.filter(User.username == username).count() == 0 def is_email_free(email): return User.query.filter(User.email == email).count() == 0
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
And then use these in the endpoint itself:
@app.route('/register', methods=['GET', 'POST']) def register(): form = RegisterUserForm() if form.validate_on_submit(): if not is_username_free(form.username.data): form.errors['username'] = ['Username in use already'] return render_template('register.html', form=form) ...
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
This is really hard to test, so instead of even attempting that -- being honest, I spent the better part of an hour attempting to test the actual endpoint and it was just a complete mess -- let's extract the actual logic and place it into its own callable:
class OurValidationError(Exception): def __init__(self, msg, field): self.msg = msg self.field = field def register_user(username, email, password): if not is_username_free(username): raise OurValidationError('Username in use already', 'username') if not is_email_free(email): ...
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
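The behavior of `register_user` is easy to pin down once the validators have no database behind them. In this sketch the database checks are replaced with plain sets so the whole thing runs in memory — the sets and the returned dict are stand-ins invented here, not the post's actual persistence code:

```python
class OurValidationError(Exception):
    def __init__(self, msg, field):
        self.msg = msg
        self.field = field

# Stand-ins for the database-backed validators, using plain sets.
taken_usernames = {'fred'}
taken_emails = {'fred@fred.com'}

def is_username_free(username):
    return username not in taken_usernames

def is_email_free(email):
    return email not in taken_emails

def register_user(username, email, password):
    if not is_username_free(username):
        raise OurValidationError('Username in use already', 'username')
    if not is_email_free(email):
        raise OurValidationError('Email in use already', 'email')
    # Stand-in for creating and persisting a User model.
    return {'username': username, 'email': email}

try:
    register_user('fred', 'fred@fred.com', 'fredpassword')
except OurValidationError as e:
    print(e.field, '-', e.msg)  # username - Username in use already
```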
Now we're beginning to see the fruits of our labors. These aren't the easiest functions to test, but there's less we need to mock out in order to test the actual logic we're after.
def test_duplicated_user_raises_error(): ChasteValidator = mock.Mock(return_value=False) with mock.patch('myapp.logic.is_username_free', ChasteValidator): with pytest.raises(OurValidationError) as excinfo: register_user('fred', 'fred@fred.com', 'fredpassword') assert excinfo.va...
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
Of course, we should also write tests for the controller; I'll leave that as an exercise. However, there's something very important we learn from these tests: we still have to mock.patch everything. Our validators lean directly on the database, our user creation leans directly on the database, everything leans di...
from abc import ABC, abstractmethod class AbstractUserRepository(ABC): @abstractmethod def find_by_username(self, username): pass @abstractmethod def find_by_email(self, email): pass @abstractmethod def persist(self, user): pass
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
Hmm...that's interesting. Since we'll end up depending on this instead of a concrete implementation, we can run our tests completely in memory and run production on top of SQLAlchemy, Mongo, a foreign API, whatever. But we need to inject it into our validators instead of reaching out into the global namespace like we curre...
def is_username_free(user_repository): def is_username_free(username): return not user_repository.find_by_username(username) return is_username_free def is_email_free(user_repository): def is_email_free(email): return not user_repository.find_by_email(email) return is_email_free
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
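Wiring the closure factories above to an in-memory implementation of `AbstractUserRepository` shows the payoff: the validators can be tested without a single `mock.patch`. `FakeUserRepository` is a test double invented for this sketch, not part of the post's codebase:

```python
from abc import ABC, abstractmethod

class AbstractUserRepository(ABC):
    @abstractmethod
    def find_by_username(self, username): ...

    @abstractmethod
    def find_by_email(self, email): ...

    @abstractmethod
    def persist(self, user): ...

class FakeUserRepository(AbstractUserRepository):
    """In-memory test double: users are plain dicts with username/email keys."""
    def __init__(self, users=()):
        self._users = list(users)

    def find_by_username(self, username):
        return [u for u in self._users if u['username'] == username]

    def find_by_email(self, email):
        return [u for u in self._users if u['email'] == email]

    def persist(self, user):
        self._users.append(user)

def is_username_free(user_repository):
    def is_username_free(username):
        return not user_repository.find_by_username(username)
    return is_username_free

repo = FakeUserRepository([{'username': 'fred', 'email': 'fred@fred.com'}])
check = is_username_free(repo)
print(check('fred'))   # False -- fred already exists in the repository
print(check('alice'))  # True
```

Production code would construct the same closures over a SQLAlchemy-backed repository; the validators themselves never change.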