Install TFX
!pip install -U tfx
site/en-snapshot/tfx/tutorials/tfx/penguin_simple.ipynb
tensorflow/docs-l10n
apache-2.0
Did you restart the runtime? If you are using Google Colab, the first time you run the cell above you must restart the runtime by clicking the "RESTART RUNTIME" button or using the "Runtime > Restart runtime ..." menu. This is because of the way Colab loads packages. Check the TensorFlow and TFX versions.
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
Set up variables A few variables are used to define the pipeline. You can customize them as you want. By default, all pipeline output is generated under the current directory.
import os

PIPELINE_NAME = "penguin-simple"

# Output directory to store artifacts generated from the pipeline.
PIPELINE_ROOT = os.path.join('pipelines', PIPELINE_NAME)
# Path to a SQLite DB file to use as an MLMD storage.
METADATA_PATH = os.path.join('metadata', PIPELINE_NAME, 'metadata.db')
# Output directory where c...
Prepare example data We will download the example dataset for use in our TFX pipeline. The dataset we are using is the Palmer Penguins dataset, which is also used in other TFX examples. There are four numeric features in this dataset: culmen_length_mm, culmen_depth_mm, flipper_length_mm, body_mass_g. All features were already...
import urllib.request
import tempfile

DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data')  # Create a temporary directory.
_data_url = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_filepath = os.path.join(DATA_ROOT, "data.csv")
urllib.request.urlr...
Take a quick look at the CSV file.
!head {_data_filepath}
You should be able to see five values. species is one of 0, 1 or 2, and all other features should have values between 0 and 1. Create a pipeline TFX pipelines are defined using Python APIs. We will define a pipeline which consists of the following three components. - CsvExampleGen: Reads in data files and converts them to T...
_trainer_module_file = 'penguin_trainer.py'

%%writefile {_trainer_module_file}

from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_met...
Now you have completed all preparation steps to build a TFX pipeline. Write a pipeline definition We define a function to create a TFX pipeline. A Pipeline object represents a TFX pipeline, which can be run using one of the pipeline orchestration systems that TFX supports.
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
                     module_file: str, serving_model_dir: str,
                     metadata_path: str) -> tfx.dsl.Pipeline:
  """Creates a three component penguin pipeline with TFX."""
  # Brings data into the pipeline.
  example_gen = tfx.co...
Run the pipeline TFX supports multiple orchestrators to run pipelines. In this tutorial we will use LocalDagRunner, which is included in the TFX Python package and runs pipelines in a local environment. We often call TFX pipelines "DAGs", which stands for directed acyclic graph. LocalDagRunner provides fast iterations for ...
tfx.orchestration.LocalDagRunner().run(
    _create_pipeline(
        pipeline_name=PIPELINE_NAME,
        pipeline_root=PIPELINE_ROOT,
        data_root=DATA_ROOT,
        module_file=_trainer_module_file,
        serving_model_dir=SERVING_MODEL_DIR,
        metadata_path=METADATA_PATH))
You should see "INFO:absl:Component Pusher is finished." at the end of the logs if the pipeline finished successfully, because the Pusher component is the last component of the pipeline. The Pusher component pushes the trained model to SERVING_MODEL_DIR, which is the serving_model/penguin-simple directory if you did not...
# List files in created model directory.
!find {SERVING_MODEL_DIR}
Source localization with MNE/dSPM/sLORETA/eLORETA The aim of this tutorial is to teach you how to compute and apply a linear inverse method such as MNE/dSPM/sLORETA/eLORETA on evoked/raw/epochs data.
# sphinx_gallery_thumbnail_number = 10

import numpy as np
import matplotlib.pyplot as plt

import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
0.17/_downloads/d25fdfa446b06c82b756855681845935/plot_mne_dspm_source_localization.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Process MEG data
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname)  # already has an average reference
events = mne.find_events(raw, stim_channel='STI 014')

event_id = dict(aud_l=1)  # event trigger and conditions
tmin = -0.2  # start of each epoc...
Compute regularized noise covariance For more details see tut_compute_covariance.
noise_cov = mne.compute_covariance(
    epochs, tmax=0., method=['shrunk', 'empirical'], rank=None, verbose=True)

fig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info)
Compute the evoked response Let's just use MEG channels for simplicity.
evoked = epochs.average().pick_types(meg=True)
evoked.plot(time_unit='s')
evoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag',
                    time_unit='s')

# Show whitening
evoked.plot_white(noise_cov, time_unit='s')

del epochs  # to save memory
Inverse modeling: MNE/dSPM on evoked and raw data
# Read the forward solution and compute the inverse operator
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'
fwd = mne.read_forward_solution(fname_fwd)

# make an MEG inverse operator
info = evoked.info
inverse_operator = make_inverse_operator(info, fwd, noise_cov,
                                         ...
Compute inverse solution
method = "dSPM"
snr = 3.
lambda2 = 1. / snr ** 2
stc, residual = apply_inverse(evoked, inverse_operator, lambda2,
                              method=method, pick_ori=None,
                              return_residual=True, verbose=True)
Visualization View the activation time series.
plt.figure()
plt.plot(1e3 * stc.times, stc.data[::100, :].T)
plt.xlabel('time (ms)')
plt.ylabel('%s value' % method)
plt.show()
Examine the original data and the residual after fitting:
fig, axes = plt.subplots(2, 1)
evoked.plot(axes=axes)
for ax in axes:
    ax.texts = []
    for line in ax.lines:
        line.set_color('#98df81')
residual.plot(axes=axes)
Here we use the peak getter to move the visualization to the time point of the peak and draw a marker at the maximum peak vertex.
vertno_max, time_max = stc.get_peak(hemi='rh')

subjects_dir = data_path + '/subjects'
surfer_kwargs = dict(
    hemi='rh', subjects_dir=subjects_dir,
    clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
    initial_time=time_max, time_unit='s', size=(800, 800), smoothing_steps=5)
brain = stc.plot(**surfer_k...
Morph data to average brain
# setup source morph
morph = mne.compute_source_morph(
    src=inverse_operator['src'], subject_from=stc.subject,
    subject_to='fsaverage', spacing=5,  # to ico-5
    subjects_dir=subjects_dir)

# morph data
stc_fsaverage = morph.apply(stc)

brain = stc_fsaverage.plot(**surfer_kwargs)
brain.add_text(0.1, 0.9, 'Morphed...
Dipole orientations The pick_ori parameter of the mne.minimum_norm.apply_inverse function controls the orientation of the dipoles. One useful setting is pick_ori='vector', which returns an estimate that contains not only the source power at each dipole but also the orientation of the dipoles.
stc_vec = apply_inverse(evoked, inverse_operator, lambda2,
                        method=method, pick_ori='vector')
brain = stc_vec.plot(**surfer_kwargs)
brain.add_text(0.1, 0.9, 'Vector solution', 'title', font_size=20)
del stc_vec
Note that there is a relationship between the orientation of the dipoles and the surface of the cortex. For this reason, we do not use an inflated cortical surface for visualization, but the original surface used to define the source space. For more information about dipole orientations, see sphx_glr_auto_tutorials_plo...
for mi, (method, lims) in enumerate((('dSPM', [8, 12, 15]),
                                     ('sLORETA', [3, 5, 7]),
                                     ('eLORETA', [0.75, 1.25, 1.75]),)):
    surfer_kwargs['clim']['lims'] = lims
    stc = apply_inverse(evoked, inverse_operator, lambda2,
                        me...
There are some infeasibilities without line extensions.
for line_name in ["316", "527", "602"]:
    network.lines.loc[line_name, "s_nom"] = 1200

now = network.snapshots[0]
examples/notebooks/scigrid-sclopf.ipynb
PyPSA/PyPSA
mit
Performing security-constrained linear OPF
branch_outages = network.lines.index[:15]
network.sclopf(now, branch_outages=branch_outages, solver_name="cbc")
For the power flow (PF), set P to the optimised P.
network.generators_t.p_set = network.generators_t.p_set.reindex(
    columns=network.generators.index
)
network.generators_t.p_set.loc[now] = network.generators_t.p.loc[now]

network.storage_units_t.p_set = network.storage_units_t.p_set.reindex(
    columns=network.storage_units.index
)
network.storage_units_t.p_set.lo...
Check no lines are overloaded with the linear contingency analysis
p0_test = network.lpf_contingency(now, branch_outages=branch_outages)
p0_test
Check loading as per unit of s_nom in each contingency
max_loading = (
    abs(p0_test.divide(network.passive_branches().s_nom, axis=0)).describe().loc["max"]
)
max_loading

np.allclose(max_loading, np.ones((len(max_loading))))
1.2 Train Ridge Regression on training data The first step is to train the ridge regression model on the training data with 5-fold cross-validation and an internal line-search to find the optimal hyperparameter $\alpha$. We will plot the training errors against the validation errors to illustrate the effect of diff...
# Initialize different alpha values for the Ridge Regression model
alphas = sp.logspace(-2, 8, 11)
param_grid = dict(alpha=alphas)

# 5-fold cross-validation (outer loop)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=random_state)

# Line-search to find the optimal alpha value (inner loop)
# Model performance is m...
Toy-Example-Solution.ipynb
dominikgrimm/ridge_and_svm
mit
1.3 Train Ridge Regression with optimal $\alpha$ and evaluate model on test data Next we retrain the ridge regression model with the optimal $\alpha$ (from the last section). After re-training, we test the model on the held-out test data to evaluate its performance on unseen data.
# Train Ridge Regression on the full training data with optimal alpha
model = Ridge(alpha=optimal_alpha, solver="cholesky")
model.fit(training_data, training_target)

# Use the trained model to predict new instances in the test data
predictions = model.predict(testing_data)

print("Prediction results on test data")
print("MSE (tes...
<div style="text-align:justify"> Using 5-fold cross-validation on the training data leads to a mean squared error (MSE) of $MSE=587.09 \pm 53.54$. On the test data we get an error of $MSE=699.56$ ($\sim 26.5$ days). This indicates that the ridge regression model performs rather poorly (even with hyperparameter op...
# Split data into training and testing splits, stratified by class ratios
stratified_splitter = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)

for train_index, test_index in stratified_splitter.split(data, binary_target):
    training_data = data[train_index, :]
    training_target = binary_target[train...
2.2 Classification with a linear SVM
Cs = sp.logspace(-7, 1, 9)
param_grid = dict(C=Cs)

grid = GridSearchCV(SVC(kernel="linear", random_state=random_state),
                    param_grid=param_grid,
                    scoring="accuracy",
                    n_jobs=4,
                    return_train_score=True)
outer_cv = StratifiedKFold(n_splits=5, shuf...
2.3 Classification with SVM and RBF kernel
Cs = sp.logspace(-4, 4, 9)
gammas = sp.logspace(-7, 1, 9)
param_grid = dict(C=Cs, gamma=gammas)

grid = GridSearchCV(SVC(kernel="rbf", random_state=42),
                    param_grid=param_grid,
                    scoring="accuracy",
                    n_jobs=4,
                    return_train_score=True)
outer_cv =...
We use the <b>Iris dataset</b> (https://en.m.wikipedia.org/wiki/Iris_flower_data_set). The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimetres...
model = LogisticRegression()
model.fit(dataset.data, dataset.target)

expected = dataset.target
predicted = model.predict(dataset.data)

# classification metrics report builds a text report showing the main classification metrics
# In pattern recognition and information retrieval with binary classification,
# precisio...
Session3/code/03 Supervised Learning - 00 Python basics and Logistic Regression.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
We typically need the following libraries: <b>NumPy</b> Numerical Python - mainly used for n-dimensional arrays (which are absent in traditional Python). Also contains basic linear algebra functions, Fourier transforms, advanced random number capabilities and tools for integration with other low-level languages like Fo...
integers_list = [1, 3, 5, 7, 9]  # lists are delimited by square brackets
print(integers_list)

tuple_integers = 1, 3, 5, 7, 9  # tuples are separated by commas and are immutable
print(tuple_integers)
# tuple_integers[0] = 11  # raises TypeError: tuples are immutable

# Python strings can be in single or double quotes
string_ds = "Data Science"
string_iot = "Internet of Thi...
Evaluation of your model
# let's try to visualize the estimated and real house values for all data points in the test dataset
fig, ax = plt.subplots(figsize=(15, 5))

plt.subplot(1, 2, 1)
plt.plot(X_test_1, predictions_1, 'o')
plt.xlabel('% of lower status of the population')
plt.ylabel('Estimated home value in $1000s')

plt.subplot(1, 2, 2)
plt...
aps/notebooks/ml_varsom/linear_regression.ipynb
kmunve/APS
mit
To evaluate the performance of the model, we can compute the error between the real house values (y_test_1) and the predicted values we got from our model (predictions_1). One such metric is called the residual sum of squares (RSS):
# first we define our RSS function
def RSS(y, p):
    return sum((y - p)**2)

# then we calculate RSS:
RSS_model_1 = RSS(y_test_1, predictions_1)
RSS_model_1
This number doesn't tell us much - is 7027 good? Is it bad? Unfortunately, there is no right answer - it depends on the data. Sometimes an RSS of 7000 indicates a very bad model, and sometimes 7000 is as good as it gets. That's why we use RSS when comparing models - the model with the lowest RSS is the best. The other metri...
lm1.score(X_test_1,y_test_1)
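The score above is the coefficient of determination, which relates directly to RSS via $R^{2} = 1 - RSS/TSS$, where TSS is the total sum of squares around the mean. A minimal sketch with made-up numbers (not the Boston data) shows the relationship:

```python
# Toy illustration of R^2 = 1 - RSS/TSS (values are made up, not the Boston data).
y = [3.0, 5.0, 7.0, 9.0]   # "true" values
p = [2.8, 5.2, 6.9, 9.1]   # "predicted" values

rss = sum((yi - pi) ** 2 for yi, pi in zip(y, p))     # residual sum of squares
mean_y = sum(y) / len(y)
tss = sum((yi - mean_y) ** 2 for yi in y)             # total sum of squares

r2 = 1 - rss / tss
print(round(r2, 3))  # → 0.995
```

A model that always predicted the mean would have RSS equal to TSS, giving $R^{2}=0$; perfect predictions give $R^{2}=1$.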
This means that only 51% of the variability is explained by our model. In general, $R^{2}$ is a number between 0 and 1 - the closer it is to 1, the better the model. Since we got only 0.51, we can conclude that this is not a very good model. But we can try to build a model with a second variable - RM - and check if we ...
# we just repeat everything as before
X_train_2, X_test_2, y_train_2, y_test_2 = train_test_split(
    boston_data[['RM']], boston_data.MEDV,
    random_state=222, test_size=0.3)  # split the data

lm = linear_model.LinearRegression()
model_2 = lm.fit(X_train_2, ...
Since RSS is lower for the second model (and the lower the RSS, the better the model) and $R^{2}$ is higher for the second model (and we want $R^{2}$ as close to 1 as possible), both measures tell us that the second model is better. However, the difference is not big - our second model performs slightly better, but we still can't say it fi...
X = boston_data[['CRIM', 'ZN', 'INDUS', 'CHAS', 'RM', 'AGE', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT']]
y = boston_data["MEDV"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=222, test_size=0.3)  # split the data

lm = linear_model.LinearRegression()
model_lr = lm.fit(X_train, y_train)  # train ...
Navigation: handling the loop yourself · comparing with the high-level API. 1. Handling the loop yourself For the purposes of this notebook we are going to use one of the predefined objective functions that come with GPyOpt. However, the key thing to realize is that the function could be anything (e.g., the results o...
from emukit.test_functions import forrester_function
from emukit.core.loop import UserFunctionWrapper

target_function, space = forrester_function()
notebooks/Emukit-tutorial-bayesian-optimization-external-objective-evaluation.ipynb
EmuKit/emukit
apache-2.0
First we are going to run the optimization loop outside of Emukit, and only use the library to get the next point at which to evaluate our function. There are two things to pay attention to when creating the main optimization object: Since we recreate the object anew for each iteration, we need to pass data about all...
X = np.array([[0.1], [0.6], [0.9]])
Y = target_function(X)
And we run the loop externally.
from emukit.examples.gp_bayesian_optimization.single_objective_bayesian_optimization import GPBayesianOptimization
from emukit.core.loop import UserFunctionResult

num_iterations = 10

bo = GPBayesianOptimization(variables_list=space.parameters, X=X, Y=Y)
results = None

for _ in range(num_iterations):
    X_new = bo.g...
Let's visualize the results. The size of the marker denotes the order in which the point was evaluated: the bigger the marker, the later the evaluation.
x = np.arange(0.0, 1.0, 0.01)
y = target_function(x)

plt.figure()
plt.plot(x, y)
for i, (xs, ys) in enumerate(zip(X, Y)):
    plt.plot(xs, ys, 'ro', markersize=10 + 10 * (i+1)/len(X))

X
2. Comparing with the high level API To compare the results, let's now execute the whole loop with Emukit.
X = np.array([[0.1], [0.6], [0.9]])
Y = target_function(X)

bo_loop = GPBayesianOptimization(variables_list=space.parameters, X=X, Y=Y)
bo_loop.run_optimization(target_function, num_iterations)
Now let's print the results of this optimization and compare it to the previous external evaluation run. As before, the size of the marker corresponds to its evaluation order.
x = np.arange(0.0, 1.0, 0.01)
y = target_function(x)

plt.figure()
plt.plot(x, y)
for i, (xs, ys) in enumerate(zip(bo_loop.model.model.X, bo_loop.model.model.Y)):
    plt.plot(xs, ys, 'ro', markersize=10 + 10 * (i+1)/len(bo_loop.model.model.X))
Test data We create test data consisting of 5 variables.
psi0 = np.array([
    [ 0.  ,  0.  , -0.25,  0.  ,  0.  ],
    [-0.38,  0.  ,  0.14,  0.  ,  0.  ],
    [ 0.  ,  0.  ,  0.  ,  0.  ,  0.  ],
    [ 0.44, -0.2 , -0.09,  0.  ,  0.  ],
    [ 0.07, -0.06,  0.  ,  0.07,  0.  ]
])
phi1 = np.array([
    [-0.04, -0.29, -0.26,  0.14,  0.47],
    [-0.42,  0.2 ,  0.1 ,  0.24,  0....
examples/VARMALiNGAM.ipynb
cdt15/lingam
mit
Causal Discovery To run causal discovery, we create a VARMALiNGAM object and call the fit method.
model = lingam.VARMALiNGAM(order=(1, 1), criterion=None)
model.fit(X)
Using the causal_order_ property, we can see the causal ordering that results from the causal discovery.
model.causal_order_
Also, using the adjacency_matrices_ property, we can see the adjacency matrices that result from the causal discovery.
# psi0
model.adjacency_matrices_[0][0]

# psi1
model.adjacency_matrices_[0][1]

# omega0
model.adjacency_matrices_[1][0]
Using DirectLiNGAM on the residuals_ property, we can calculate the psi0 matrix.
dlingam = lingam.DirectLiNGAM()
dlingam.fit(model.residuals_)
dlingam.adjacency_matrix_
We can draw a causal graph with a utility function.
labels = ['y0(t)', 'y1(t)', 'y2(t)', 'y3(t)', 'y4(t)',
          'y0(t-1)', 'y1(t-1)', 'y2(t-1)', 'y3(t-1)', 'y4(t-1)']
make_dot(np.hstack(model.adjacency_matrices_[0]),
         lower_limit=0.3, ignore_shape=True, labels=labels)
Independence between error variables To check if the LiNGAM assumption is broken, we can get p-values of independence between error variables. The value in the i-th row and j-th column of the obtained matrix shows the p-value of the independence of the error variables $e_i$ and $e_j$.
p_values = model.get_error_independence_p_values()
print(p_values)
Bootstrapping We call the bootstrap() method instead of fit(). Here, the second argument specifies the number of bootstrap samples.
model = lingam.VARMALiNGAM()
result = model.bootstrap(X, n_sampling=100)
Causal Directions Since a BootstrapResult object is returned, we can get the ranking of the causal directions extracted by the get_causal_direction_counts() method. In the following sample code, the n_directions option limits the output to the top 8 causal directions, and the min_causal_effect option limits it to causal di...
cdc = result.get_causal_direction_counts(n_directions=8,
                                         min_causal_effect=0.4,
                                         split_by_causal_effect_sign=True)
We can check the result with a utility function.
labels = ['y0(t)', 'y1(t)', 'y2(t)', 'y3(t)', 'y4(t)',
          'y0(t-1)', 'y1(t-1)', 'y2(t-1)', 'y3(t-1)', 'y4(t-1)',
          'e0(t-1)', 'e1(t-1)', 'e2(t-1)', 'e3(t-1)', 'e4(t-1)']
print_causal_directions(cdc, 100, labels=labels)
Directed Acyclic Graphs Also, using the get_directed_acyclic_graph_counts() method, we can get the ranking of the extracted DAGs. In the following sample code, the n_dags option limits the output to the top 3 DAGs, and the min_causal_effect option limits it to causal directions with a coefficient of 0.3 or more.
dagc = result.get_directed_acyclic_graph_counts(n_dags=3,
                                                min_causal_effect=0.3,
                                                split_by_causal_effect_sign=True)
We can check the result with a utility function.
print_dagc(dagc, 100, labels=labels)
Probability Using the get_probabilities() method, we can get the probability of each causal relation over the bootstrap samples.
prob = result.get_probabilities(min_causal_effect=0.1)
print('Probability of psi0:\n', prob[0])
print('Probability of psi1:\n', prob[1])
print('Probability of omega1:\n', prob[2])
Total Causal Effects Using the get_total_causal_effects() method, we can get the list of total causal effects. The total causal effects are returned as a dictionary, which we can display nicely by converting it to a pandas.DataFrame. Below, we have also replaced the variable indices with labels.
causal_effects = result.get_total_causal_effects(min_causal_effect=0.01)

df = pd.DataFrame(causal_effects)
df['from'] = df['from'].apply(lambda x: labels[x])
df['to'] = df['to'].apply(lambda x: labels[x])
df
We can easily perform sorting operations with pandas.DataFrame.
df.sort_values('effect', ascending=False).head()
And with pandas.DataFrame, we can easily filter by keywords. The following code extracts the causal directions towards y2(t).
df[df['to']=='y2(t)'].head()
Because the result holds the raw data of the causal effects (the original data used to calculate the median), it is possible to draw a histogram of the causal effect values, as shown below.
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline

from_index = 5  # index of y0(t-1). (index:0)+(n_features:5)*(lag:1) = 5
to_index = 2  # index of y2(t). (index:2)+(n_features:5)*(lag:0) = 2

plt.hist(result.total_effects_[:, to_index, from_index])
First we'll load the text file and convert it into integers for our network to use.
with open('anna.txt', 'r') as f:
    text = f.read()

vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)

text[:100]
chars[:100]
tensorboard/Anna_KaRNNa.ipynb
mdiaz236/DeepLearningFoundations
mit
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text. Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one ch...
def split_data(chars, batch_size, num_steps, split_frac=0.9):
    """
    Split character data into training and validation sets, inputs and targets for each set.

    Arguments
    ---------
    chars: character array
    batch_size: Size of examples in each batch
    num_steps: Number of sequence steps to kee...
I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next...
def get_batch(arrs, num_steps):
    batch_size, slice_size = arrs[0].shape

    n_batches = int(slice_size/num_steps)
    for b in range(n_batches):
        yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]

def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
              le...
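The sliding-window slicing used by get_batch can be checked on a toy example; here plain lists stand in for the arrays (shapes are made up for illustration):

```python
# One "array" of shape (batch_size=2, slice_size=6), sliced with num_steps=3.
arr = [
    [0, 1, 2, 3, 4, 5],
    [10, 11, 12, 13, 14, 15],
]

def windows(rows, num_steps):
    # Yield successive windows of num_steps columns, like get_batch does.
    slice_size = len(rows[0])
    n_batches = slice_size // num_steps
    for b in range(n_batches):
        yield [row[b * num_steps:(b + 1) * num_steps] for row in rows]

batches = list(windows(arr, 3))
# The first batch holds steps 0-2 of every sequence, the second batch steps 3-5.
```

Each yielded batch keeps all sequences aligned, so the RNN state carried across batches matches up row by row.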
Hyperparameters Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you...
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
Write out the graph for TensorBoard
model = build_rnn(len(vocab),
                  batch_size=batch_size,
                  num_steps=num_steps,
                  learning_rate=learning_rate,
                  lstm_size=lstm_size,
                  num_layers=num_layers)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    ...
Training Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpo...
!mkdir -p checkpoints/anna

epochs = 1
save_every_n = 200

train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)

model = build_rnn(len(vocab),
                  batch_size=batch_size,
                  num_steps=num_steps,
                  learning_rate=learning_rate,
                  lstm_size=l...
Sampling Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, and the network predicts the next character. We then use that new character to predict the one after it, and keep going to generate all new text. I also included some functionality to prime the ne...
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0
    p = p / np.sum(p)
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c

def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    prime = "Far"
    samples = [c for c in prime]
    model...
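The top-n filtering in pick_top_n can be illustrated without TensorFlow or NumPy: zero out everything except the top_n largest probabilities, then renormalize so they sum to 1 (the distribution below is made up):

```python
# Toy illustration of top-n sampling: keep only the top_n largest
# probabilities, zero the rest, then renormalize (made-up distribution).
preds = [0.1, 0.2, 0.05, 0.4, 0.25]
top_n = 2

# The smallest value we keep is the top_n-th largest probability.
threshold = sorted(preds, reverse=True)[top_n - 1]
filtered = [p if p >= threshold else 0.0 for p in preds]
total = sum(filtered)
renormalized = [p / total for p in filtered]
# Only indices 3 and 4 survive, and their probabilities sum to 1.
```

Restricting sampling to the few most likely characters keeps the generated text coherent while still allowing some randomness.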
Post-training integer quantization with int16 activations
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)

import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib
site/ja/lite/performance/post_training_integer_quant_16x8.ipynb
tensorflow/docs-l10n
apache-2.0
Check that the 16x8 quantization mode is available:
tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
Train and export the model
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define the model architecture
model = keras.Seq...
For this example, you trained the model for just a single epoch, so it only trains to about 96% accuracy. Convert to a TensorFlow Lite model Now you can convert the trained model to TensorFlow Lite format using the Python TFLiteConverter. First, convert the model to the default float32 format using the TFLiteConverter:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
site/ja/lite/performance/post_training_integer_quant_16x8.ipynb
tensorflow/docs-l10n
apache-2.0
Write it out to a .tflite file:
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)

tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
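As a side note, pathlib.Path.write_bytes returns the number of bytes written, which is why a notebook cell ending with that call displays the model's size. A self-contained sketch with a throwaway file (names here are illustrative, not the notebook's):

```python
import pathlib
import tempfile

# Write 1 KiB of dummy bytes and observe the return value.
tmp_dir = pathlib.Path(tempfile.mkdtemp())
out_file = tmp_dir / "model.tflite"
n = out_file.write_bytes(b"\x00" * 1024)
print(n)  # 1024
```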
site/ja/lite/performance/post_training_integer_quant_16x8.ipynb
tensorflow/docs-l10n
apache-2.0
To quantize the model to the 16x8 quantization mode, first set the optimizations flag to use the default optimizations. Then specify that the 16x8 quantization mode is a required supported operation in the target specification:
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]
site/ja/lite/performance/post_training_integer_quant_16x8.ipynb
tensorflow/docs-l10n
apache-2.0
As in the case of int8 post-training quantization, you can produce a fully integer-quantized model by setting the converter options inference_input_type and inference_output_type to tf.int16. Set the calibration data:
mnist_train, _ = tf.keras.datasets.mnist.load_data()
images = tf.cast(mnist_train[0], tf.float32) / 255.0
mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)

def representative_data_gen():
  for input_value in mnist_ds.take(100):
    # Model has only one input so each data point has one element.
    yield ...
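The cell above only prepares the calibration generator; the fully integer-quantized variant described in the text is not exercised in this notebook. As a rough, unexecuted configuration fragment (assuming the same converter object from the earlier cells and a TensorFlow version that accepts int16 I/O for this mode), it would look like:

```python
# Configuration fragment only -- relies on `converter` and
# `representative_data_gen` defined above; not run in this notebook.
converter.representative_dataset = representative_data_gen
converter.inference_input_type = tf.int16
converter.inference_output_type = tf.int16
```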
site/ja/lite/performance/post_training_integer_quant_16x8.ipynb
tensorflow/docs-l10n
apache-2.0
Finally, convert the model as usual. Note that by default the converted model will still use float inputs and outputs for invocation convenience.
tflite_16x8_model = converter.convert()
tflite_model_16x8_file = tflite_models_dir/"mnist_model_quant_16x8.tflite"
tflite_model_16x8_file.write_bytes(tflite_16x8_model)
site/ja/lite/performance/post_training_integer_quant_16x8.ipynb
tensorflow/docs-l10n
apache-2.0
Observe that the resulting file is approximately 1/3 the size:
!ls -lh {tflite_models_dir}
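The rough arithmetic behind the roughly one-third file size: in 16x8 mode the weights, which dominate the file, are stored as int8 at a quarter the width of float32, while per-tensor quantization parameters and the graph structure add overhead that keeps the total above a strict 1/4. A trivial sketch of the weight-storage ratio:

```python
# Bytes per weight: float32 storage vs. int8-quantized storage
float32_bytes = 4
int8_bytes = 1
print(int8_bytes / float32_bytes)  # 0.25, before metadata overhead
```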
site/ja/lite/performance/post_training_integer_quant_16x8.ipynb
tensorflow/docs-l10n
apache-2.0
Run the TensorFlow Lite models Run the TensorFlow Lite model using the Python TensorFlow Lite interpreter. Load the models into the interpreters
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()

interpreter_16x8 = tf.lite.Interpreter(model_path=str(tflite_model_16x8_file))
interpreter_16x8.allocate_tensors()
site/ja/lite/performance/post_training_integer_quant_16x8.ipynb
tensorflow/docs-l10n
apache-2.0
Test the models on one image
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)

input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)

import m...
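The output tensor retrieved above is a batch of ten per-class scores, and the predicted digit is simply the index of the largest score. A self-contained illustration with a made-up score vector:

```python
# Hypothetical 1x10 output for a single MNIST image (not real model output)
predictions = [[0.01, 0.02, 0.01, 0.05, 0.01, 0.02, 0.01, 0.80, 0.03, 0.04]]

# argmax over the ten class scores gives the predicted digit
digit = max(range(10), key=lambda i: predictions[0][i])
print(digit)  # 7
```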
site/ja/lite/performance/post_training_integer_quant_16x8.ipynb
tensorflow/docs-l10n
apache-2.0
Evaluate the models
# A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model(interpreter):
  input_index = interpreter.get_input_details()[0]["index"]
  output_index = interpreter.get_output_details()[0]["index"]

  # Run predictions on every image in the "test" dataset.
  prediction_digits = []
  for tes...
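The accuracy that evaluate_model computes is just the fraction of predicted digits matching the test labels. In miniature, with made-up predictions and labels:

```python
# Five hypothetical predictions against their true labels
prediction_digits = [7, 2, 1, 0, 4]
test_labels = [7, 2, 1, 0, 9]

# Fraction of positions where prediction == label
accuracy = sum(int(p == t) for p, t in zip(prediction_digits, test_labels)) / len(test_labels)
print(accuracy)  # 0.8
```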
site/ja/lite/performance/post_training_integer_quant_16x8.ipynb
tensorflow/docs-l10n
apache-2.0
Repeat the evaluation on the 16x8 quantized model:
# NOTE: This quantization mode is an experimental post-training mode.
# It does not have any optimized kernel implementations or support from
# specialized machine learning hardware accelerators, so it can be
# slower than the float interpreter.
print(evaluate_model(interpreter_16x8))
site/ja/lite/performance/post_training_integer_quant_16x8.ipynb
tensorflow/docs-l10n
apache-2.0
Vertex SDK: Train & deploy a TensorFlow model with hosted runtimes (aka pre-built containers) Installation Install the latest (preview) version of the Vertex SDK.
! pip3 install -U google-cloud-aiplatform --user
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Restart the Kernel Once you've installed the Vertex SDK and google-cloud-storage, you need to restart the notebook kernel so it can find the packages.
import os

if not os.getenv("AUTORUN"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Before you begin GPU run-time Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU Set up your GCP project The following steps are required, regardless of your notebook environment. Select or create a GCP project. When you first create a...
PROJECT_ID = "[your-project-id]"  # @param {type:"string"}

if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
    print("Project ID:"...
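The fallback logic in that cell reduces to a small pure function. A sketch of the rule it implements, with hypothetical project ids:

```python
def resolve_project_id(candidate, gcloud_default):
    # Use the gcloud default only when no real id was filled in.
    if candidate in ("", None, "[your-project-id]"):
        return gcloud_default
    return candidate

print(resolve_project_id("[your-project-id]", "fallback-project"))  # fallback-project
print(resolve_project_id("my-real-project", "fallback-project"))    # my-real-project
```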
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are the regions supported for Vertex AI. We recommend that, when possible, you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You cannot use a Multi-Reg...
REGION = "us-central1" # @param {type: "string"}
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Authenticate your GCP account If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step. Note: If you are on a Vertex notebook and run the cell, the cell knows to skip executing the authentication steps.
import os
import sys

# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

# If on Vertex, then don't execute this code
if not ...
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex SDK Import the Vertex SDK into our Python environment.
import time

from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Value
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Vertex AI constants Set up the following constants for Vertex AI: API_ENDPOINT: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services. API_PREDICT_ENDPOINT: The Vertex AI API service endpoint for prediction. PARENT: The Vertex AI location root path for dataset, model and endpoint...
# API Endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)

# Vertex AI location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Clients The Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex). You will use several clients in this tutorial, so set them all up upfront. Dataset Service for managed datasets. Model Service for manage...
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}

def create_model_client():
    client = aip.ModelServiceClient(client_options=client_options)
    return client

def create_endpoint_client():
    client = aip.EndpointServiceClient(client_options=client_options)
    return client...
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Prepare a trainer script Package assembly
! rm -rf cifar
! mkdir cifar
! touch cifar/README.md

setup_cfg = "[egg_info]\n\
tag_build =\n\
tag_date = 0"
! echo "$setup_cfg" > cifar/setup.cfg

setup_py = "import setuptools\n\
# Requires TensorFlow Datasets\n\
setuptools.setup(\n\
    install_requires=[\n\
        'tensorflow_datasets==1.3.0',\n\
    ],\n\
    pa...
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
task.py contents
%%writefile cifar/trainer/task.py
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys

tfds.disable_progress_bar()

parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir', defa...
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Store training script on your Cloud Storage bucket
! rm -f cifar.tar cifar.tar.gz
! tar cvf cifar.tar cifar
! gzip cifar.tar
! gsutil cp cifar.tar.gz gs://$BUCKET_NAME/trainer_cifar.tar.gz
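The same packaging can be done without shelling out, using Python's tarfile module. A self-contained sketch with a throwaway directory (the paths here are illustrative, not the notebook's):

```python
import os
import tarfile
import tempfile

# Build a dummy package directory, then compress it like `tar cvf` + `gzip`.
work = tempfile.mkdtemp()
pkg = os.path.join(work, "cifar")
os.makedirs(os.path.join(pkg, "trainer"))
with open(os.path.join(pkg, "trainer", "task.py"), "w") as f:
    f.write("print('train')\n")

archive = os.path.join(work, "cifar.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(pkg, arcname="cifar")

# Inspect the archive contents.
with tarfile.open(archive, "r:gz") as tar:
    names = sorted(tar.getnames())
print(names)  # ['cifar', 'cifar/trainer', 'cifar/trainer/task.py']
```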
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Train a model projects.locations.customJobs.create Request
JOB_NAME = "custom_job_TF_" + TIMESTAMP

TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest"
TRAIN_NGPU = 1
TRAIN_GPU = aip.AcceleratorType.NVIDIA_TESLA_K80

worker_pool_specs = [
    {
        "replica_count": 1,
        "machine_spec": {
            "machine_type": "n1-standard-4",
            "acceler...
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

{
  "parent": "projects/migration-ucaip-training/locations/us-central1",
  "customJob": {
    "displayName": "custom_job_TF_20210227173057",
    "jobSpec": {
      "workerPoolSpecs": [
        {
          "machineSpec": {
            "machineType": "n1-standard-4",
            "acceleratorType": "NVIDIA...
request = clients["job"].create_custom_job(parent=PARENT, custom_job=training_job)
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

{
  "name": "projects/116273516712/locations/us-central1/customJobs/2970106362064797696",
  "displayName": "custom_job_TF_20210227173057",
  "jobSpec": {
    "workerPoolSpecs": [
      {
        "machineSpec": {
          "machineType": "n1-standard-4",
          "acceleratorType": "NVIDIA_TESLA_K80",
          ...
# The full unique ID for the custom training job
custom_training_id = request.name
# The short numeric ID for the custom training job
custom_training_short_id = custom_training_id.split("/")[-1]

print(custom_training_id)
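The resource name returned by the API is a slash-delimited path, so the short numeric ID is just its last segment. A quick self-contained illustration with a made-up resource name:

```python
# Hypothetical resource name in the format the API returns
custom_training_id = "projects/123456789/locations/us-central1/customJobs/987654321"

# The short ID is the final path segment
custom_training_short_id = custom_training_id.split("/")[-1]
print(custom_training_short_id)  # 987654321
```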
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
projects.locations.customJobs.get Call
request = clients["job"].get_custom_job(name=custom_training_id)
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output:

{
  "name": "projects/116273516712/locations/us-central1/customJobs/2970106362064797696",
  "displayName": "custom_job_TF_20210227173057",
  "jobSpec": {
    "workerPoolSpecs": [
      {
        "machineSpec": {
          "machineType": "n1-standard-4",
          "acceleratorType": "NVIDIA_TESLA_K80",
          ...
while True:
    response = clients["job"].get_custom_job(name=custom_training_id)
    # A CustomJob's state is a JobState enum (not a PipelineState).
    if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        if response.state == aip.JobState.JOB_STATE_FAILED:
            break
    else:
        ...
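The loop above is a standard poll-until-terminal-state pattern. Stripped of the Vertex client, it reduces to the following self-contained sketch (the state strings and the simulated job are illustrative, not the API's):

```python
import time

def wait_for(get_state, succeeded, failed, poll_interval=0.01):
    # Poll until the job reaches a terminal state, then return that state.
    state = get_state()
    while state not in succeeded and state not in failed:
        time.sleep(poll_interval)
        state = get_state()
    return state

# Simulated job that succeeds on the third poll
states = iter(["RUNNING", "RUNNING", "SUCCEEDED"])
final = wait_for(lambda: next(states), {"SUCCEEDED"}, {"FAILED"})
print(final)  # SUCCEEDED
```

A production version would also bound the number of polls or use exponential backoff so a stuck job cannot spin forever.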
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Deploy the model Load the saved model
import tensorflow as tf

model = tf.keras.models.load_model(model_artifact_dir)
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Serving function for image data
CONCRETE_INPUT = "numpy_inputs"

def _preprocess(bytes_input):
    decoded = tf.io.decode_jpeg(bytes_input, channels=3)
    decoded = tf.image.convert_image_dtype(decoded, tf.float32)
    resized = tf.image.resize(decoded, size=(32, 32))
    # convert_image_dtype already scaled pixels to [0, 1]; dividing by
    # 255 again would not match the training-time normalization.
    rescale = tf.cast(resized, tf.float32)
    return rescale

@tf.funct...
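convert_image_dtype maps uint8 pixels into [0, 1] by dividing by the dtype's maximum value (255); an additional division by 255 would shrink the range to [0, 1/255] and no longer match how the training images were normalized. A pure-Python sketch of the scaling:

```python
# uint8 pixel values and their [0, 1] equivalents after /255 scaling
pixels = [0, 64, 128, 255]
scaled = [p / 255.0 for p in pixels]
print(min(scaled), max(scaled))  # 0.0 1.0
```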
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Get the serving function signature
loaded = tf.saved_model.load(model_artifact_dir)

input_name = list(
    loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", input_name)
notebooks/community/migration/UJ2,12 Custom Training Prebuilt Container TF Keras.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0