We need to calculate the arc length of the function $f(x)$ over $[0, 4\pi]$: $$L = \int_0^{4 \pi} \sqrt{1 + |f'(x)|^2} \, dx$$ In general this requires numerical quadrature. Non-linear population growth: the Lotka-Volterra predator-prey model $$\frac{d R}{dt} = R \cdot (a - b \cdot F)$$ $$\frac{d F}{dt} = F \cdot (c \cdot R + d)$$ Where ar...
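The arc-length integral above rarely has a closed form, so it is evaluated numerically. A minimal sketch, assuming the illustrative choice $f(x) = \sin(x)$ (so $f'(x) = \cos(x)$); the function and the grid size are assumptions, not part of the original:

```python
import numpy as np

# Hypothetical example: f(x) = sin(x), so f'(x) = cos(x).
x = np.linspace(0.0, 4.0 * np.pi, 100_001)
integrand = np.sqrt(1.0 + np.cos(x)**2)

# Composite trapezoidal rule as a simple numerical quadrature.
L = np.trapz(integrand, x)
print(L)  # about 15.28: two full periods of sin(x)
```

An adaptive alternative would be `scipy.integrate.quad` applied to the same integrand.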
import numpy
import matplotlib.pyplot as plt

data = numpy.loadtxt("./data/sunspot.dat")
data.shape
plt.plot(data[:, 0], data[:, 1])
plt.xlabel("Year")
plt.ylabel("Number")
plt.title("Number of Sunspots")
plt.show()
0_intro_numerical_methods.ipynb
btw2111/intro-numerical-methods
mit
Part A For now, we still use the default, quick get_results function, but this time we set merge_type to 'no' (no effect here, as the calculations are independent; the default is to merge using UUIDs), the analyser to hybrid [3] (instead of the default blocking [4]), and while we don't specify analysis start MC iterations,...
results = get_results(["data/0.01_ccsd.out.gz", "data/0.002_ccsd.out.gz"], merge_type='no', analyser='hybrid', start_its='mser')
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
The summary table shows the data analysed by the analyser. The hybrid analyser analyses the instantaneous projected energy (as prepared by the preparator object).
results.summary
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
The hybrid analyser's output can be viewed.
results.analyser.opt_block
print(results.analyser.start_its)  # Starting iterations used, found with the MSER find-starting-iteration function.
print(results.analyser.end_its)  # End iterations used; the last iteration by default.
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
Part B Now we don't use get_results to obtain the results object, but define the extractor, preparator and analyser objects ourselves. Even though it has no effect here, as there is no calculation to merge, we state that we want to merge the 'legacy' way, i.e. not use UUIDs for merging but simply determine ...
extra = Extractor(merge={'type': 'legacy', 'md_shift': ['qmc:shift_damping'], 'shift_key': 'Shift'})
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
Define the preparator object. It contains the hard-coded mapping from column-name meaning to column name, e.g. 'ref_key': 'N_0', for the case of HANDE CCMC/FCIQMC. If you use a different package, you'll need to create your own preparator class.
prep = PrepHandeCcmcFciqmc()
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
Define the analyser. Use the class method inst_hande_ccmc_fciqmc to pre-set what should be analysed (the instantaneous projected energy), the name of the iteration key ('iterations'), etc. Use the 'blocking' start-iteration finder and specify that the finder should show a graph.
ana = HybridAna.inst_hande_ccmc_fciqmc(start_its = 'blocking', find_start_kw_args={'show_graph': True})
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
Now we can execute those three objects. 'analyse_data' is a handy helper that calls their .exe() methods. For each calculation, the find-starting-iteration method shows a graph.
results2 = analyse_data(["data/0.01_ccsd.out.gz", "data/0.002_ccsd.out.gz"], extra, prep, ana)
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
We have used a different starting-iteration finder, so these will differ.
results2.analyser.start_its
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
But the results are comparable.
results2.summary_pretty
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
But what if we want to analyse the shift instead of the instantaneous projected energy with hybrid analysis? BEWARE: this is untested and only used for illustration here! We no longer use the class method for analyser instantiation. Keep the default settings (find start iterations using 'mser', etc.). Note that when doing blocki...
ana2 = HybridAna('iterations', 'Shift', 'replica id')
results3 = analyse_data(["data/0.01_ccsd.out.gz", "data/0.002_ccsd.out.gz"], extra, prep, ana2)
results3.summary_pretty
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
Function 2: deciding if an agent is happy Write a function that takes the game board generated by the function you wrote above and determines whether an agent of a specified type at position i is happy, for a game board of any size and a neighborhood of size N (i.e., from position i-N to i+N), and retu...
# Put your code here, using additional cells if necessary.
past-semesters/fall_2016/day-by-day/day15-Schelling-1-dimensional-segregation-day2/Day_15_Pre_Class_Notebook.ipynb
ComputationalModeling/spring-2017-danielak
agpl-3.0
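One possible shape for such a function is sketched below. This is an illustration, not the assignment's required interface: the board representation (a list of agent types), the happiness threshold, and the function name are all assumptions.

```python
# A minimal sketch, assuming the board is a list whose entries are agent
# types (e.g. 0 or 1); the 0.5 threshold is an assumption, not part of
# the assignment text.
def is_happy(board, i, N, threshold=0.5):
    """Return True if the agent at position i is happy.

    An agent is happy when at least `threshold` of its neighbors within
    distance N (positions i-N .. i+N, clipped to the board) share its type.
    """
    agent_type = board[i]
    lo = max(0, i - N)
    hi = min(len(board), i + N + 1)
    neighbors = [board[j] for j in range(lo, hi) if j != i]
    if not neighbors:
        return True  # no neighbors: trivially happy
    same = sum(1 for t in neighbors if t == agent_type)
    return same / len(neighbors) >= threshold

print(is_happy([0, 0, 1, 0, 1], 2, 1))  # False: both neighbors differ
```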
Assignment wrapup Please fill out the form that appears when you run the code below. You must fill it out completely in order to receive credit for the assignment!
from IPython.display import HTML

HTML("""
<iframe src="https://goo.gl/forms/M7YCyE1OLzyOK7gH3?embedded=true"
        width="80%" height="1200px" frameborder="0"
        marginheight="0" marginwidth="0">
  Loading...
</iframe>
""")
past-semesters/fall_2016/day-by-day/day15-Schelling-1-dimensional-segregation-day2/Day_15_Pre_Class_Notebook.ipynb
ComputationalModeling/spring-2017-danielak
agpl-3.0
Unit Tests Overview and Principles Testing is the process by which you exercise your code to determine if it performs as expected. The code you are testing is referred to as the code under test. There are two parts to writing tests: 1. invoking the code under test so that it is exercised in a particular way; 2. evalua...
import numpy as np

# Code Under Test
def entropy(ps):
    items = ps * np.log(ps)
    return np.abs(-np.sum(items))

# Smoke test
entropy([0.2, 0.8])
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
Suppose that all of the probability of a distribution is at one point. An example of this is a coin with two heads: whenever you flip it, you always get heads. That is, the probability of a head is 1. What is the entropy of such a distribution? From the calculation above, we see that the entropy should be $\log(1)$, whi...
# One-shot test. Need to know the correct answer.
entries = [
    [0, [1]],
]
for entry in entries:
    ans = entry[0]
    prob = entry[1]
    if not np.isclose(entropy(prob), ans):
        print("Test failed!")
print("Test completed!")
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
Question: What is an example of another one-shot test? (Hint: you need to know the expected result.) One edge test of interest is to provide an input that is not a distribution, in that the probabilities don't sum to 1.
# Edge test. This is something that should cause an exception. entropy([-0.5])
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
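One possible answer to the question above, sketched as a self-contained cell (the fair-coin choice is our illustration, not the notebook's): a fair coin has two outcomes with probability 0.5 each, so its entropy is $\log(2)$.

```python
import numpy as np

# Code under test, repeated here so the cell is self-contained.
def entropy(ps):
    items = ps * np.log(ps)
    return np.abs(-np.sum(items))

# Another one-shot test: a fair coin has entropy log(2).
result = entropy([0.5, 0.5])
assert np.isclose(result, np.log(2))
print("Test passed!")
```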
Now let's consider a pattern test. Examining the structure of the calculation of $H$, we consider a situation in which there are $n$ equal probabilities. That is, $p_i = \frac{1}{n}$. $$ H = -\sum_{i=1}^{n} p_i \log(p_i) = -\sum_{i=1}^{n} \frac{1}{n} \log(\frac{1}{n}) = n (-\frac{1}{n} \log(\frac{1}{n}) ) = -\log(\fr...
# Pattern test
def test_equal_probabilities(n):
    prob = 1.0 / n
    ps = np.repeat(prob, n)
    if np.isclose(entropy(ps), -np.log(prob)):
        print("Worked!")
    else:
        import pdb; pdb.set_trace()
        print("Bad result.")

# Run a test
test_equal_probabilities(100000)
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
You see that there are many, many cases to test. So far, we've been writing special code for each test case. We can do better. Unittest Infrastructure There are several reasons to use a test infrastructure: - If you have many test cases (which you should!), the test infrastructure will save you from writing a lot of c...
import unittest

# Define a class in which the tests will run
class UnitTests(unittest.TestCase):

    # Each method in the class executes a test
    def test_success(self):
        self.assertEqual(1, 1)

    def test_success1(self):
        self.assertTrue(1 == 1)

    def test_failure(self):
        self.a...
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
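The cell above is truncated. A minimal, self-contained version of the same idea, running unittest inside a notebook with the standard loader/runner incantation (the deliberately failing test from the original is omitted here so the run succeeds), might look like this:

```python
import unittest

# Define a class in which the tests will run.
class UnitTests(unittest.TestCase):

    def test_success(self):
        self.assertEqual(1, 1)

    def test_success1(self):
        self.assertTrue(1 == 1)

# Load the tests from the class and run them with a text runner,
# which is how unittest is typically driven inside a notebook.
suite = unittest.TestLoader().loadTestsFromTestCase(UnitTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True: both tests pass
```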
Code for homework or your work should use test files. In this lesson, we'll show how to write test code in a Jupyter notebook. This is done for pedagogical reasons. It is NOT something you should do in practice, except as an intermediate exploratory approach. As expected, the first test passes, but the second tes...
# Implementing a pattern test. Use functions in the test.
import unittest

# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):

    def test_equal_probability(self):
        def test(count):
            """ Invokes the entropy function for a number of values equal to...
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
Testing For Exceptions Edge test cases often involve handling exceptions. One approach is to code this directly.
import unittest

# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):

    def test_invalid_probability(self):
        try:
            entropy([0.1, 0.5])
            self.assertTrue(False)
        except ValueError:
            self.assertTrue(True)

#test_setup(TestEntro...
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
unittest provides help with testing exceptions.
import unittest

# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):

    def test_invalid_probability(self):
        with self.assertRaises(ValueError):
            entropy([0.1, 0.5])

suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)
_ = unittest.TextTest...
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
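A self-contained version of the `assertRaises` pattern is sketched below. One assumption to flag: the notebook's `entropy` does not itself raise `ValueError` on bad input, so this sketch uses a hypothetical validating variant, `checked_entropy`, purely so the test has something that actually raises.

```python
import unittest
import numpy as np

# Hypothetical validating variant of entropy (an assumption for
# illustration; not the notebook's own function).
def checked_entropy(ps):
    ps = np.asarray(ps, dtype=float)
    if np.any(ps < 0) or not np.isclose(ps.sum(), 1.0):
        raise ValueError("not a probability distribution")
    return -np.sum(ps * np.log(ps))

class TestEntropy(unittest.TestCase):

    def test_invalid_probability(self):
        # assertRaises as a context manager: the test passes only if
        # the body raises ValueError.
        with self.assertRaises(ValueError):
            checked_entropy([0.1, 0.5])  # probabilities sum to 0.6

suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```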
Test Files Although I presented the elements of unittest in a notebook, your tests should be in a file. If the name of the module with the code under test is foo.py, then the name of the test file should be test_foo.py. The structure of the test file will be very similar to the cells above. You will import unittest. You must a...
import unittest

# Define a class in which the tests will run
class TestGeomean(unittest.TestCase):

    def test_oneshot(self):
        self.assertEqual(geomean([1, 1]), 1)

    def test_oneshot2(self):
        self.assertEqual(geomean([3, 3, 3]), 3)

#test_setup(TestGeomean)
#def geomean(argu...
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
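A sketch of the test-file layout described above, under the assumption that the code under test is a geometric-mean function: in a real project `geomean` would live in foo.py and this file would be test_foo.py; the function is included inline here only so the sketch is self-contained.

```python
import unittest
import numpy as np

# Code under test (would normally be imported from foo.py).
def geomean(values):
    """Geometric mean: the n-th root of the product of n values."""
    arr = np.asarray(values, dtype=float)
    return float(np.exp(np.mean(np.log(arr))))

class TestGeomean(unittest.TestCase):

    def test_oneshot(self):
        self.assertAlmostEqual(geomean([1, 1]), 1)

    def test_oneshot2(self):
        self.assertAlmostEqual(geomean([3, 3, 3]), 3)

suite = unittest.TestLoader().loadTestsFromTestCase(TestGeomean)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In an actual test_foo.py you would end the file with `unittest.main()` instead of the loader/runner lines.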
Running code with other kernels In a code cell it is also possible to run code in other languages. Below are some magic commands for running commands from other languages: %%bash %%HTML %%python2 %%python3 %%ruby %%perl
%%bash
ls -lah
jupyter/Introducción.ipynb
xmnlab/notebooks
mit
Loading data
import pandas as pd

df = pd.read_csv('data/kaggle-titanic.csv')
df.head()
df.info()
df.describe()
jupyter/Introducción.ipynb
xmnlab/notebooks
mit
Plots
from matplotlib import pyplot as plt

df.Survived.value_counts().plot(kind='bar')
plt.show()

import pixiedust
display(df)
jupyter/Introducción.ipynb
xmnlab/notebooks
mit
Widgets
import numpy as np

π = np.pi

def show_wave(A, f, φ):
    ω = 2*π*f
    t = np.linspace(0, 1, 10000)
    f = A*np.sin(ω*t+φ)
    plt.grid(True)
    plt.plot(t, f)
    plt.show()

show_wave(A=5, f=5, φ=2)

import ipywidgets as widgets
from IPython.display import display

params = dict(value=1, min=1, max=100, step...
jupyter/Introducción.ipynb
xmnlab/notebooks
mit
For more information about ipywidgets, see the user manual [6]. Help To see the documentation of a given function or class, you can run the command: ?str.replace() This command opens a panel on the page with the desired documentation. Another way to view documentation is the help function,...
?str.replace()
help(str.replace)
jupyter/Introducción.ipynb
xmnlab/notebooks
mit
OLS estimation Artificial data:
import numpy as np
import statsmodels.api as sm

nsample = 100
x = np.linspace(0, 10, 100)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Our model needs an intercept so we add a column of 1s:
X = sm.add_constant(X)
y = np.dot(X, beta) + e
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Fit and summary:
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Quantities of interest can be extracted directly from the fitted model. Type dir(results) for a full list. Here are some examples:
print('Parameters: ', results.params)
print('R2: ', results.rsquared)
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
OLS non-linear curve but linear in parameters We simulate artificial data with a non-linear relationship between x and y:
nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Fit and summary:
res = sm.OLS(y, X).fit()
print(res.summary())
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Extract other quantities of interest:
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('Predicted values: ', res.predict())
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Draw a plot to compare the true relationship to OLS predictions. Confidence intervals around the predictions are built using the wls_prediction_std command.
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std

prstd, iv_l, iv_u = wls_prediction_std(res)

fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res.fittedvalues, 'r--.', label="OLS")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
ax.legend(loc='best');
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
OLS with dummy variables We generate some artificial data. There are 3 groups which will be modelled using dummy variables. Group 0 is the omitted/benchmark category.
nsample = 50
groups = np.zeros(nsample, int)
groups[20:40] = 1
groups[40:] = 2
#dummy = (groups[:,None] == np.unique(groups)).astype(float)
dummy = sm.categorical(groups, drop=True)
x = np.linspace(0, 20, nsample)
# drop reference category
X = np.column_stack((x, dummy[:,1:]))
X = sm.add_constant(X, prepend=False)
be...
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Inspect the data:
print(X[:5,:])
print(y[:5])
print(groups)
print(dummy[:5,:])
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Fit and summary:
res2 = sm.OLS(y, X).fit()
print(res2.summary())
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Draw a plot to compare the true relationship to OLS predictions:
prstd, iv_l, iv_u = wls_prediction_std(res2)

fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res2.fittedvalues, 'r--.', label="Predicted")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
legend = ax.legend(loc="best")
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Joint hypothesis test F test We want to test the hypothesis that both coefficients on the dummy variables are equal to zero, that is, $R \times \beta = 0$. An F test leads us to strongly reject the null hypothesis of identical constants in the 3 groups:
R = [[0, 1, 0, 0], [0, 0, 1, 0]]
print(np.array(R))
print(res2.f_test(R))
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
You can also use formula-like syntax to test hypotheses
print(res2.f_test("x2 = x3 = 0"))
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Small group effects If we generate artificial data with smaller group effects, the F test can no longer reject the null hypothesis:
beta = [1., 0.3, -0.0, 10]
y_true = np.dot(X, beta)
y = y_true + np.random.normal(size=nsample)

res3 = sm.OLS(y, X).fit()
print(res3.f_test(R))
print(res3.f_test("x2 = x3 = 0"))
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Multicollinearity The Longley dataset is well known to have high multicollinearity. That is, the exogenous predictors are highly correlated. This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification.
from statsmodels.datasets.longley import load_pandas

y = load_pandas().endog
X = load_pandas().exog
X = sm.add_constant(X)
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Fit and summary:
ols_model = sm.OLS(y, X)
ols_results = ols_model.fit()
print(ols_results.summary())
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Condition number One way to assess multicollinearity is to compute the condition number. Values over 20 are worrisome (see Greene 4.9). The first step is to normalize the independent variables to have unit length:
norm_x = X.values
for i, name in enumerate(X):
    if name == "const":
        continue
    norm_x[:,i] = X[name]/np.linalg.norm(X[name])
norm_xtx = np.dot(norm_x.T, norm_x)
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Then, we take the square root of the ratio of the biggest to the smallest eigenvalues.
eigs = np.linalg.eigvals(norm_xtx)
condition_number = np.sqrt(eigs.max() / eigs.min())
print(condition_number)
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
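The same two steps (normalize the columns, then take $\sqrt{\lambda_{\max}/\lambda_{\min}}$) can be checked on a toy matrix. This is a hypothetical illustration, not the Longley data; the near-collinear columns are constructed deliberately so the condition number lands well above the worrisome threshold of 20.

```python
import numpy as np

# Toy design matrix with two nearly collinear columns (an assumption,
# purely for illustration).
rng = np.random.RandomState(0)
X = np.column_stack([np.ones(5),
                     np.arange(5, dtype=float),
                     np.arange(5, dtype=float) + 1e-3 * rng.randn(5)])

# Step 1: normalize each column to unit length.
norm_x = X / np.linalg.norm(X, axis=0)
norm_xtx = norm_x.T @ norm_x

# Step 2: sqrt of the largest-to-smallest eigenvalue ratio
# (eigvalsh, since norm_xtx is symmetric).
eigs = np.linalg.eigvalsh(norm_xtx)
condition_number = np.sqrt(eigs.max() / eigs.min())
print(condition_number > 20)  # True: the columns are nearly collinear
```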
Dropping an observation Greene also points out that dropping a single observation can have a dramatic effect on the coefficient estimates:
# .ix is removed in modern pandas; use .iloc to drop the last observation.
ols_results2 = sm.OLS(y.iloc[:-1], X.iloc[:-1]).fit()
print("Percentage change %4.2f%%\n"*7 % tuple([i for i in (ols_results2.params - ols_results.params)/ols_results.params*100]))
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
We can also look at formal statistics for this such as the DFBETAS -- a standardized measure of how much each coefficient changes when that observation is left out.
infl = ols_results.get_influence()
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
In general we may consider DFBETAS in absolute value greater than $2/\sqrt{N}$ to be influential observations.
print(2./len(X)**.5)  # threshold for influential observations
print(infl.summary_frame().filter(regex="dfb"))
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
The data The Census Income Data Set that this sample uses for training is provided by the UC Irvine Machine Learning Repository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/census/data/. Training file is adult.data.csv Evaluation file is adult.test.csv (not used in this notebook) ...
%%writefile ./census_training/train.py
# [START setup]
import datetime
import os
import subprocess

from sklearn.preprocessing import LabelEncoder
import pandas as pd
from google.cloud import storage
import xgboost as xgb

# TODO: REPLACE 'BUCKET_CREATED_ABOVE' with your GCS BUCKET_ID
BUCKET_ID = 'torryyang-xgb-models...
notebooks/xgboost/TrainingWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
Part 2: Create Trainer Package Before you can run your trainer application with AI Platform, your code and any dependencies must be placed in a Google Cloud Storage location that your Google Cloud Platform project can access. You can find more info here
%%writefile ./census_training/__init__.py
# Note that __init__.py can be an empty file.
notebooks/xgboost/TrainingWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
Part 3: Submit Training Job Next we need to submit the job for training on AI Platform. We'll use gcloud to submit the job which has the following flags: job-name - A name to use for the job (mixed-case letters, numbers, and underscores only, starting with a letter). In this case: census_training_$(date +"%Y%m%d_%H%M%...
! gcloud config set project $PROJECT_ID
notebooks/xgboost/TrainingWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
Submit the training job.
! gcloud ml-engine jobs submit training census_training_$(date +"%Y%m%d_%H%M%S") \
    --job-dir $JOB_DIR \
    --package-path $TRAINER_PACKAGE_PATH \
    --module-name $MAIN_TRAINER_MODULE \
    --region $REGION \
    --runtime-version=$RUNTIME_VERSION \
    --python-version=$PYTHON_VERSION \
    --scale-tier BASIC
notebooks/xgboost/TrainingWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
[Optional] StackDriver Logging You can view the logs for your training job: 1. Go to https://console.cloud.google.com/ 2. Select "Logging" in the left-hand pane 3. Select the "Cloud ML Job" resource from the drop-down 4. In filter by prefix, use the value of $JOB_NAME to view the logs [Optional] Verify Model File in GCS View t...
! gsutil ls gs://$BUCKET_ID/census_*
notebooks/xgboost/TrainingWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
Selecting evaluation dataset
#@title Paths to evaluation datasets
base_path = '/content/'
kolmogorov_re_1000 = {
    f'baseline_{i}x{i}': os.path.join(base_path, f'eval_{i}x{i}_64x64.nc')
    for i in [64, 128, 256, 512, 1024, 2048]
}
decaying = {
    f'baseline_{i}x{i}': os.path.join(base_path, f'eval_{i}x{i}_64x64.nc')
    for i in [64, 128, 2...
notebooks/ml_model_inference_demo.ipynb
google/jax-cfd
apache-2.0
Selecting model checkpoint to load
class CheckpointState:
    """Object to package up the state we load and restore."""

    def __init__(self, **kwargs):
        for name, value in kwargs.items():
            setattr(self, name, value)

checkpoint_paths = {
    'LI': "/content/LI_ckpt.pkl",
    'LC': "/content/LC_ckpt.pkl",
    'EPD': "/content/EPD_ckpt.pkl",
}

#@t...
notebooks/ml_model_inference_demo.ipynb
google/jax-cfd
apache-2.0
Model inference
#@title Setting up model configuration from the checkpoint
gin.clear_config()
gin.parse_config(ckpt.model_config_str)
gin.parse_config(strip_imports(reference_ds.attrs['physics_config_str']))
dt = ckpt.model_time_step
physics_specs = physics_specifications.get_physics_specs()
model_cls = model_builder.get_model_cls(g...
notebooks/ml_model_inference_demo.ipynb
google/jax-cfd
apache-2.0
Computing summaries Note: Evaluations in this notebook are demonstrative and performed over a single sample and shorter times than those used in the paper.
summary = xarray.concat([
    cfd_data.evaluation.compute_summary_dataset(ds, target_ds)
    for ds in datasets.values()
], dim='model')
summary.coords['model'] = list(datasets.keys())
correlation = summary.vorticity_correlation.compute()
spectrum = summary.energy_spectrum_mean.mean('time').compute()
baseline_palette...
notebooks/ml_model_inference_demo.ipynb
google/jax-cfd
apache-2.0
Review In the a_sample_explore_clean notebook we came up with the following query to extract a repeatable and clean sample: <pre> #standardSQL SELECT (tolls_amount + fare_amount) AS fare_amount, -- label pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude FROM `nyc-...
def create_query(phase, sample_size):
    basequery = """
    SELECT
        (tolls_amount + fare_amount) AS fare_amount,
        EXTRACT(DAYOFWEEK from pickup_datetime) AS dayofweek,
        EXTRACT(HOUR from pickup_datetime) AS hourofday,
        pickup_longitude AS pickuplon,
        pickup_latitude AS pickuplat,
        ...
courses/machine_learning/deepdive/01_bigquery/labs/c_extract_and_benchmark.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Write to CSV Now let's execute a query for train/valid/test and write the results to disk in csv format. We use Pandas's .to_csv() method to do so. Exercise 2 The for loop below will generate the TRAIN/VALID/TEST sampled subsets of our dataset. Complete the code in the cell below to 1) create the BigQuery query_string ...
from google.cloud import bigquery

bq = bigquery.Client(project=PROJECT)

for phase in ["TRAIN", "VALID", "TEST"]:
    # 1. Create query string
    query_string = # TODO: Your code goes here

    # 2. Load results into DataFrame
    df = # TODO: Your code goes here

    # 3. Write DataFrame to CSV
    df.to_csv("taxi-{}...
courses/machine_learning/deepdive/01_bigquery/labs/c_extract_and_benchmark.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Note that even with a 1/5000th sample we have a good amount of data for ML: 150K training examples and 30K validation examples. <h3> Verify that datasets exist </h3>
!ls -l *.csv
courses/machine_learning/deepdive/01_bigquery/labs/c_extract_and_benchmark.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Preview one of the files
!head taxi-train.csv
courses/machine_learning/deepdive/01_bigquery/labs/c_extract_and_benchmark.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Looks good! We now have our ML datasets and are ready to train, validate and test ML models. Establish rules-based benchmark Before we start building complex ML models, it is a good idea to come up with a simple rules-based model and use that as a benchmark. After all, there's no point using ML if it can't be...
import pandas as pd

def euclidean_distance(df):
    return # TODO: Your code goes here

def compute_rmse(actual, predicted):
    return # TODO: Your code goes here

def print_rmse(df, rate, name):
    print("{} RMSE = {}".format(name, compute_rmse(df["fare_amount"], rate * euclidean_distance(df))))

df_train = pd.read...
courses/machine_learning/deepdive/01_bigquery/labs/c_extract_and_benchmark.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
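One possible way to fill in the exercise above is sketched here. This is an illustration, not the official solution: the `dropofflon`/`dropofflat` column names are assumptions (only the pickup columns are visible in the query above), and the tiny frame exists purely to exercise the functions.

```python
import numpy as np
import pandas as pd

def euclidean_distance(df):
    # Straight-line distance in coordinate space between pickup and dropoff.
    return np.sqrt((df["pickuplon"] - df["dropofflon"])**2 +
                   (df["pickuplat"] - df["dropofflat"])**2)

def compute_rmse(actual, predicted):
    return np.sqrt(np.mean((actual - predicted)**2))

# Tiny made-up frame just to exercise the functions (not taxi data).
toy = pd.DataFrame({
    "pickuplon": [0.0, 0.0], "pickuplat": [0.0, 0.0],
    "dropofflon": [3.0, 0.0], "dropofflat": [4.0, 1.0],
    "fare_amount": [10.0, 2.0],
})
dist = euclidean_distance(toy)
print(list(dist))  # [5.0, 1.0]: the 3-4-5 triangle and a unit step
print(compute_rmse(toy["fare_amount"], 2.0 * dist))  # 0.0 at rate 2.0
```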
Decoding sensor space data Decoding, a.k.a MVPA or supervised machine learning applied to MEG data in sensor space. Here the classifier is applied to every time point.
import numpy as np
import matplotlib.pyplot as plt

from sklearn.metrics import roc_auc_score
from sklearn.cross_validation import StratifiedKFold

import mne
from mne.datasets import sample
from mne.decoding import TimeDecoding, GeneralizationAcrossTime

data_path = sample.data_path()

plt.close('all')
0.14/_downloads/plot_sensors_decoding.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.2, 0.5
event_id = dict(aud_l=1, vis_l=3)

# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(2, None)  # replace b...
0.14/_downloads/plot_sensors_decoding.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Temporal decoding We'll use the default classifier for a binary classification problem, which is a linear Support Vector Machine (SVM).
td = TimeDecoding(predict_mode='cross-validation', n_jobs=1)

# Fit
td.fit(epochs)

# Compute accuracy
td.score(epochs)

# Plot scores across time
td.plot(title='Sensor space decoding')
0.14/_downloads/plot_sensors_decoding.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Generalization Across Time This runs the analysis used in [1] and further detailed in [2]. Here we'll use a stratified cross-validation scheme.
# make response vector
y = np.zeros(len(epochs.events), dtype=int)
y[epochs.events[:, 2] == 3] = 1
cv = StratifiedKFold(y=y)  # do a stratified cross-validation

# define the GeneralizationAcrossTime object
gat = GeneralizationAcrossTime(predict_mode='cross-validation', n_jobs=1,
                               cv=cv, s...
0.14/_downloads/plot_sensors_decoding.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Polynomial Logistic Regression
import numpy as np
import pandas as pd
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
The data we want to investigate is stored in the file 'fake-data.csv'. It is data that I found somewhere; I am not sure whether it is real or fake, so I won't discuss its attributes. The point of the data is that it is a classification problem that cannot be solved with ordinary l...
DF = pd.read_csv('fake-data.csv')
DF.head()
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
We extract the features from the data frame and convert it into a NumPy <em style="color:blue;">feature matrix</em>.
X = np.array(DF[['x','y']])
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
We extract the target column and convert it into a NumPy array.
Y = np.array(DF['class'])
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
In order to plot the instances according to their class we divide the feature matrix $X$ into two parts. $\texttt{X_pass}$ contains those examples that have class $1$, while $\texttt{X_fail}$ contains those examples that have class $0$.
X_pass = X[Y == 1.0]
X_fail = X[Y == 0.0]
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Let us plot the data.
import matplotlib.pyplot as plt
import seaborn as sns

plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(-0.9, 1.1, step=0.1))
plt.yticks(np.arange(-0.8...
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
We want to split the data into a training set and a test set. The training set will be used to compute the parameters of our model, while the testing set is only used to check the accuracy. SciKit-Learn has a predefined method train_test_split that can be used to randomly split data into a training set and a test set.
from sklearn.model_selection import train_test_split
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
We will split the data at a ratio of $4:1$, i.e. $80\%$ of the data will be used for training, while the remaining $20\%$ is used to test the accuracy.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=1)
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
In order to build a <em style="color:blue;">logistic regression</em> classifier, we import the module linear_model from SciKit-Learn.
import sklearn.linear_model as lm
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
The function $\texttt{logistic_regression}(\texttt{X_train}, \texttt{Y_train}, \texttt{X_test}, \texttt{Y_test})$ takes a feature matrix $\texttt{X_train}$ and a corresponding vector $\texttt{Y_train}$ and computes a logistic regression model $M$ that best fits these data. Then, the accuracy of the model is computed u...
def logistic_regression(X_train, Y_train, X_test, Y_test, reg=10000):
    M = lm.LogisticRegression(C=reg, tol=1e-6)
    M.fit(X_train, Y_train)
    train_score = M.score(X_train, Y_train)
    yPredict = M.predict(X_test)
    accuracy = np.sum(yPredict == Y_test) / len(Y_test)
    return M, train_score, accuracy
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
We use this function to build a model for our data. Initially, we will take all the available data to create the model.
M, score, accuracy = logistic_regression(X, Y, X, Y)
score, accuracy
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Given that there are only two classes, the accuracy of our first model is quite poor. Let us extract the coefficients so we can plot the <em style="color:blue;">decision boundary</em>.
ϑ0 = M.intercept_[0]
ϑ1, ϑ2 = M.coef_[0]

plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(-0.9, 1.1, step=0.1))
plt.yticks(np.arange(-0.8, 1.2, step=0.1))
p...
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Clearly, pure logistic regression is not working for this example. The reason is that a linear decision boundary is not able to separate the positive examples from the negative examples. Let us add polynomial features. This enables us to create more complex decision boundaries. The function $\texttt{extend}(X)$ tak...
def extend(X):
    n  = len(X)
    fx = np.reshape(X[:,0], (n, 1))  # extract first column
    fy = np.reshape(X[:,1], (n, 1))  # extract second column
    return np.hstack([fx, fy, fx*fx, fy*fy, fx*fy])  # stack everything horizontally

X_train_quadratic = extend(X_train)
X_test_quadratic  = extend(X_test)
M, score, accu...
This seems to work better. Let us compute the decision boundary and plot it.
ϑ0 = M.intercept_[0]
ϑ1, ϑ2, ϑ3, ϑ4, ϑ5 = M.coef_[0]
The decision boundary is now given by the following equation: $$ \vartheta_0 + \vartheta_1 \cdot x + \vartheta_2 \cdot y + \vartheta_3 \cdot x^2 + \vartheta_4 \cdot y^2 + \vartheta_5 \cdot x \cdot y = 0$$ This is the equation of an ellipse. Let us plot the decision boundary with the data.
a = np.arange(-1.0, 1.0, 0.005)
b = np.arange(-1.0, 1.0, 0.005)
A, B = np.meshgrid(a, b)
Z = ϑ0 + ϑ1 * A + ϑ2 * B + ϑ3 * A * A + ϑ4 * B * B + ϑ5 * A * B
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xla...
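Whether this quadratic boundary really is an ellipse can be read off the coefficients alone: the discriminant of the quadratic part, $\vartheta_5^2 - 4 \cdot \vartheta_3 \cdot \vartheta_4$, is negative for an ellipse, zero for a parabola, and positive for a hyperbola. A minimal sketch, using hypothetical coefficient values rather than the fitted ϑ from the model above:

```python
# Classify the conic ϑ0 + ϑ1*x + ϑ2*y + ϑ3*x² + ϑ4*y² + ϑ5*x*y = 0
# by the discriminant of its quadratic part.
def conic_type(θ3, θ4, θ5):
    discriminant = θ5 * θ5 - 4 * θ3 * θ4
    if discriminant < 0:
        return 'ellipse'
    if discriminant == 0:
        return 'parabola'
    return 'hyperbola'

# Hypothetical coefficients for illustration only:
print(conic_type(2.0, 3.0, 1.0))   # 1 - 24 < 0  → ellipse
print(conic_type(1.0, -1.0, 0.0))  # 0 + 4  > 0  → hyperbola
```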
Let us try to add <em style="color:blue;">quartic features</em> next. These are features like $x^4$, $x^2\cdot y^2$, etc. Luckily, Scikit-Learn has a function that can automate this process.
from sklearn.preprocessing import PolynomialFeatures

quartic = PolynomialFeatures(4, include_bias=False)
X_train_quartic = quartic.fit_transform(X_train)
X_test_quartic  = quartic.fit_transform(X_test)
print(quartic.get_feature_names(['x', 'y']))
Let us fit the quartic model.
M, score, accuracy = logistic_regression(X_train_quartic, Y_train, X_test_quartic, Y_test)
score, accuracy
The accuracy on the training set has increased, but the accuracy on the test set is actually not improving. Again, we proceed to plot the decision boundary.
ϑ0 = M.intercept_[0]
ϑ1, ϑ2, ϑ3, ϑ4, ϑ5, ϑ6, ϑ7, ϑ8, ϑ9, ϑ10, ϑ11, ϑ12, ϑ13, ϑ14 = M.coef_[0]
Plotting the decision boundary starts to get tedious.
a = np.arange(-1.0, 1.0, 0.005)
b = np.arange(-1.0, 1.0, 0.005)
A, B = np.meshgrid(a, b)
Z = ϑ0 + ϑ1 * A + ϑ2 * B + \
    ϑ3 * A**2 + ϑ4 * A * B + ϑ5 * B**2 + \
    ϑ6 * A**3 + ϑ7 * A**2 * B + ϑ8 * A * B**2 + ϑ9 * B**3 + \
    ϑ10 * A**4 + ϑ11 * A**3 * B + ϑ12 * A**2 * B**2 + ϑ13 * A * B**3 + ϑ14 * B**...
The decision boundary looks strange. Let's get bold and try to add features of a higher power. However, in order to understand what is happening, we will only plot the training data.
X_pass_train = X_train[Y_train == 1.0]
X_fail_train = X_train[Y_train == 0.0]
In order to automate the process, we define some auxiliary functions. $\texttt{polynomial}(n)$ creates a polynomial in the variables A and B that contains all terms of the form $\Theta[k] \cdot A^i \cdot B^j$ where $i+j \leq n$.
def polynomial(n):
    sum = 'Θ[0]'
    cnt = 0
    for k in range(1, n+1):
        for i in range(0, k+1):
            cnt += 1
            sum += f' + Θ[{cnt}] * A**{k-i} * B**{i}'
    print('number of features:', cnt)
    return sum
Let's check this out for $n=4$.
polynomial(4)
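To make the construction explicit for a smaller case, here is the same function again, restated so the block runs on its own, applied to $n = 2$:

```python
# polynomial() restated from above so this block is self-contained.
def polynomial(n):
    s = 'Θ[0]'
    cnt = 0
    for k in range(1, n + 1):
        for i in range(0, k + 1):
            cnt += 1
            s += f' + Θ[{cnt}] * A**{k-i} * B**{i}'
    print('number of features:', cnt)
    return s

print(polynomial(2))
# prints: number of features: 5
# Θ[0] + Θ[1] * A**1 * B**0 + Θ[2] * A**0 * B**1
#      + Θ[3] * A**2 * B**0 + Θ[4] * A**1 * B**1 + Θ[5] * A**0 * B**2
```

The string it returns is later evaluated with `eval`, with A and B bound to meshgrid arrays.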
The function $\texttt{polynomial_grid}(n, M)$ takes a number $n$ and a model $M$. It returns a meshgrid that can be used to plot the decision boundary of the model.
def polynomial_grid(n, M):
    Θ = [M.intercept_[0]] + list(M.coef_[0])
    a = np.arange(-1.0, 1.0, 0.005)
    b = np.arange(-1.0, 1.0, 0.005)
    A, B = np.meshgrid(a, b)
    return eval(polynomial(n))
The function $\texttt{plot_nth_degree_boundary}(n)$ creates a polynomial logistic regression model of degree $n$. It plots both the training data and the decision boundary.
def plot_nth_degree_boundary(n, C=10000):
    poly = PolynomialFeatures(n, include_bias=False)
    X_train_poly = poly.fit_transform(X_train)
    X_test_poly  = poly.fit_transform(X_test)
    M, score, accuracy = logistic_regression(X_train_poly, Y_train, X_test_poly, Y_test, C)
    print('The accuracy on the t...
Let us test this for the polynomial logistic regression model of degree $4$.
plot_nth_degree_boundary(4)
This seems to be the same shape that we have seen earlier. It looks like the function $\texttt{plot_nth_degree_boundary}(n)$ is working. Let's try higher degree polynomials.
plot_nth_degree_boundary(5)
The score on the training set has improved. What happens if we try still higher degrees?
plot_nth_degree_boundary(6)
We captured one more of the training examples. Let's get bold: we want $100\%$ training accuracy.
plot_nth_degree_boundary(14)
The model is getting more complicated, but it is not getting better, as the accuracy on the test set has not improved.
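The same overfitting pattern can be reproduced on a tiny synthetic problem. The sketch below uses a 1-D least-squares fit with `np.polyfit` rather than the classification data above, so the numbers are illustrative only: raising the degree keeps lowering the training error while the test error stalls or worsens.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = lambda x: np.sin(3 * x)   # hypothetical ground-truth function

# Small noisy training and test samples from the same ground truth.
x_train = np.linspace(-1.0, 1.0, 20)
x_test  = np.linspace(-0.97, 0.97, 20)
y_train = truth(x_train) + 0.2 * rng.standard_normal(20)
y_test  = truth(x_test)  + 0.2 * rng.standard_normal(20)

mse = {}
for degree in (3, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse[degree] = (np.mean((np.polyval(coeffs, x_train) - y_train) ** 2),
                   np.mean((np.polyval(coeffs, x_test)  - y_test) ** 2))
    print(f'degree {degree:2d}: train MSE = {mse[degree][0]:.4f}, '
          f'test MSE = {mse[degree][1]:.4f}')
```

Because the degree-15 polynomials contain the degree-3 ones as a subset, the training error can only go down with the degree; the test error has no such guarantee.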
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=2)
X_pass_train = X_train[Y_train == 1.0]
X_fail_train = X_train[Y_train == 0.0]
Let us check whether regularization can help. Below, the regularization parameter prevents the decision boundary from becoming too wiggly and thus the accuracy on the test set can increase. The function below plots all the data.
def plot_nth_degree_boundary_all(n, C):
    poly = PolynomialFeatures(n, include_bias=False)
    X_train_poly = poly.fit_transform(X_train)
    X_test_poly  = poly.fit_transform(X_test)
    M, score, accuracy = logistic_regression(X_train_poly, Y_train, X_test_poly, Y_test, C)
    print('The accuracy on the tra...
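To see why the penalty helps, here is a minimal from-scratch sketch of L2-regularized logistic regression trained by plain gradient descent on synthetic data (not Scikit-Learn's solver, and not our classification data): a stronger penalty shrinks the weight vector, and smaller coefficients on polynomial features mean a smoother decision boundary. Note that Scikit-Learn's `C` is the *inverse* penalty strength, so small `C` means strong regularization.

```python
import numpy as np

def fit_logreg(X, y, lam, steps=1000, lr=0.5):
    """Gradient descent on the L2-penalized logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))         # predicted probabilities
        grad = X.T @ (p - y) / len(y) + lam * w  # loss gradient + penalty
        w -= lr * grad
    return w

# Synthetic, linearly separable toy data.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w_weak   = fit_logreg(X, y, lam=0.001)  # weak penalty
w_strong = fit_logreg(X, y, lam=1.0)    # strong penalty
print('‖w‖ with weak penalty  :', np.linalg.norm(w_weak))
print('‖w‖ with strong penalty:', np.linalg.norm(w_strong))
```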
Load Digits Dataset

Digits is a dataset of handwritten digits. Each feature is the intensity of one pixel of an 8 x 8 image.
from sklearn import datasets

# Load digits dataset
digits = datasets.load_digits()

# Create feature matrix
X = digits.data

# Create target vector
y = digits.target

# View the first observation's feature values
X[0]
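The relationship between an 8 x 8 image and its 64-entry feature vector is just a flatten/reshape. A sketch with a stand-in array rather than the actual dataset:

```python
import numpy as np

image = np.arange(64).reshape(8, 8)  # stand-in for one 8x8 digits image
features = image.ravel()             # the corresponding 64-entry feature vector

# Reshaping the vector recovers the image exactly.
assert features.shape == (64,)
assert np.array_equal(features.reshape(8, 8), image)
```

This is exactly why `digits.data` has 64 columns while `digits.images` holds 8 x 8 matrices.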
machine-learning/.ipynb_checkpoints/loading_scikit-learns_digits-dataset-checkpoint.ipynb
tpin3694/tpin3694.github.io
mit
The observation's feature values are presented as a vector. However, by using the images method we can load the same feature values as a matrix and then visualize the actual handwritten character:
# View the first observation's feature values as a matrix
digits.images[0]

# Visualize the first observation's feature values as an image
plt.gray()
plt.matshow(digits.images[0])
plt.show()
Simple TFX Pipeline Tutorial using Penguin dataset

A short tutorial to run a simple TFX pipeline.

Note: We recommend running this tutorial in a Colab notebook, with no setup required! Just click "Run in Google Colab".
try:
    import colab
    !pip install --upgrade pip
except:
    pass
site/en-snapshot/tfx/tutorials/tfx/penguin_simple.ipynb
tensorflow/docs-l10n
apache-2.0