markdown: string (length 0 to 37k)
code: string (length 1 to 33.3k)
path: string (length 8 to 215)
repo_name: string (length 6 to 77)
license: string (15 classes)
(2b) Transforming the data matrix into quantized vectors The next step is to transform our RDD of sentences into an RDD of (id, quantized vector) pairs. To do this we will create a quantizador function that takes as parameters the object, the k-means model, the value of k, and the word2vec dictionary. For each point,...
# EXERCICIO
def quantizador(point, model, k, w2v):
    # One possible completion of the <COMPLETAR> blanks, following the description above:
    key = point[0]
    words = point[1].split()
    matrix = np.array([w2v[w] for w in words])
    features = np.zeros(k)
    for v in matrix:
        c = model.predict(v)
        features[c] += 1
    return (key, features)

quantRDD = dataRDD.map(lambda x: quantizador(x, modelK, 5...
Spark/Lab04.ipynb
folivetti/BIGDATA
mit
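For readers without a Spark session at hand, the same bag-of-centroids idea can be sketched in plain NumPy with scikit-learn's KMeans. The embeddings, sentence, and names below (`w2v`, `quantize`) are illustrative, not from the lab:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy word2vec dictionary (hypothetical 2-d embeddings) and k=2 clusters.
w2v = {
    "big": np.array([1.0, 0.9]),
    "data": np.array([0.9, 1.1]),
    "small": np.array([-1.0, -0.8]),
    "sample": np.array([-0.9, -1.1]),
}
k = 2
model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.array(list(w2v.values())))

def quantize(point, model, k, w2v):
    """Map an (id, sentence) pair to (id, histogram of centroid assignments)."""
    key, sentence = point
    words = [w for w in sentence.split() if w in w2v]
    matrix = np.array([w2v[w] for w in words])
    features = np.zeros(k)
    for c in model.predict(matrix):
        features[c] += 1
    return (key, features)

key, features = quantize((0, "big data small sample"), model, k, w2v)
print(key, features)
```

Each sentence thus becomes a fixed-length histogram over the k centroids, regardless of its length.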
Basic Histogram
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np

x = np.random.randn(500)
data = [go.Histogram(x=x)]
py.iplot(data)
handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/histograms.ipynb-checkpoint.ipynb
vadim-ivlev/STUDY
mit
Normalized Histogram
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np

x = np.random.randn(500)
data = [go.Histogram(x=x, histnorm='probability')]
py.iplot(data)
handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/histograms.ipynb-checkpoint.ipynb
vadim-ivlev/STUDY
mit
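As a sanity check on what `histnorm='probability'` reports, the per-bin values are just counts divided by the total sample count; a minimal NumPy sketch (synthetic data, arbitrary bin count):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(500)

counts, edges = np.histogram(x, bins=10)
probs = counts / counts.sum()  # per-bin fraction, as histnorm='probability' reports
print(probs.sum())
```

The fractions sum to 1 by construction, which is what distinguishes this mode from the raw-count histogram above.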
Horizontal Histogram
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np

y = np.random.randn(500)
data = [go.Histogram(y=y)]
py.iplot(data)
handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/histograms.ipynb-checkpoint.ipynb
vadim-ivlev/STUDY
mit
Overlaid Histogram
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np

x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
trace1 = go.Histogram(x=x0, opacity=0.75)
trace2 = go.Histogram(x=x1, opacity=0.75)
data = [trace1, trace2]
layout = go.Layout(barmode='overlay')
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/histograms.ipynb-checkpoint.ipynb
vadim-ivlev/STUDY
mit
Stacked Histograms
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np

x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
trace1 = go.Histogram(x=x0)
trace2 = go.Histogram(x=x1)
data = [trace1, trace2]
layout = go.Layout(barmode='stack')
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/histograms.ipynb-checkpoint.ipynb
vadim-ivlev/STUDY
mit
Colored and Styled Histograms
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np

x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
trace1 = go.Histogram(
    x=x0,
    histnorm='count',
    name='control',
    autobinx=False,
    xbins=dict(
        start=-3.2,
        end=2.8,
        size=0.2
    ),
    marker=dict...
handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/histograms.ipynb-checkpoint.ipynb
vadim-ivlev/STUDY
mit
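Setting `autobinx=False` with `xbins=dict(start=-3.2, end=2.8, size=0.2)` simply fixes the bin edges; the equivalent explicit edges in NumPy look like this (a sketch; samples outside the range are just dropped):

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = rng.standard_normal(500)

# 31 edges from -3.2 to 2.8 in steps of 0.2 -> 30 bins of width 0.2
edges = np.linspace(-3.2, 2.8, 31)
counts, _ = np.histogram(x0, bins=edges)
print(counts.size, counts.sum())
```

Fixing the edges this way is what makes the control and experimental histograms directly comparable bin-for-bin.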
Import section specific modules:
pass
2_Mathematical_Groundwork/2_y_exercises.ipynb
griffinfoster/fundamentals_of_interferometry
gpl-2.0
2.y. Exercises<a id='math:sec:exercises'></a><!--\label{math:sec:exercises}--> We provide a small set of exercises suitable for an interferometry course. 2.y.1. Fourier transforms and convolution: Fourier transform of the triangle function<a id='math:sec:exercises_fourier_triangle'></a><!--\label{math:sec:exercises_fou...
def plotviewgraph(fig, ax, xmin=0, xmax=1., ymin=0., ymax=1.):
    """
    Prepare a viewgraph for plotting a function

    Parameters:
    fig: Matplotlib figure
    ax: Matplotlib subplot
    xmin (float): Minimum of range
    xmax (float): Maximum of range
    ymin (float): Minimum of...
2_Mathematical_Groundwork/2_y_exercises.ipynb
griffinfoster/fundamentals_of_interferometry
gpl-2.0
Figure 2.y.1: Triangle function with width $2A$ and amplitude $B$.<a id='math:fig:triangle'></a><!--\label{math:fig:triangle}--> <b>Assignments:</b> <ol type="A"> <li>What can you tell about the complex part of the Fourier transform of $f$ using the symmetry of the function?</li> <li>Write down the function $f$ in ...
def plotfftriangle():
    A = 1.
    B = 1.
    # Start the plot, create a figure instance and a subplot
    fig = plt.figure(figsize=(20,5))
    ax = fig.add_subplot(111)
    twv, twh = plotviewgraph(fig, ax, xmin=-3./A, xmax=3./A, ymin=-0.3, ymax=B)
    ticx = [[-A, r'$-\frac{1}{A}$'], [A, '...
2_Mathematical_Groundwork/2_y_exercises.ipynb
griffinfoster/fundamentals_of_interferometry
gpl-2.0
Figure 2.y.2: Triangle function with width $2A$ and amplitude $B$.<a id='math:fig:ft_of_triangle'></a><!--\label{math:fig:ft_of_triangle}--> 2.y.2. Fourier transforms and convolution: Convolution of two functions with finite support<a id='math:sec:exercises_convolution_of_two_functions_with_finite_support'></a><!--\lab...
def plotrectntria():
    A = 1.
    B = 1.4
    # Start the plot, create a figure instance and a subplot
    fig = plt.figure(figsize=(20,5))
    ax = fig.add_subplot(121)
    twv, twh = plotviewgraph(fig, ax, xmin=0., xmax=3.*A, ymin=0., ymax=3.)
    ticx = [[1.*A, r'$A$'], [2.*A, r'$2A$']...
2_Mathematical_Groundwork/2_y_exercises.ipynb
griffinfoster/fundamentals_of_interferometry
gpl-2.0
Figure 2.y.3: Triangle function with width $2A$ and amplitude $B$.<a id='math:fig:two_fs_with_finite_support'></a><!--\label{math:fig:two_fs_with_finite_support}--> <b>Assignments:</b> <ol type="A"> <li>Write down the functions g and h.</li> <li>Calculate their convolution.</li> </ol> 2.y.2.1 Convolution of two fu...
def rectntriaconv(A, B, x):
    xn = x[x < (2*A)]
    yn = xn*0.
    y = yn
    xn = x[(x == 2*A) | (x > 2*A) & (x < 3*A)]
    yn = (B/A)*(np.power(xn,2)-3*A*xn+2*np.power(A,2))
    y = np.append(y,yn)
    xn = x[(x == 3*A) | (x > 3*A) & (x < 4*A)]
    yn = (B/A)*((-2*np.power(xn,2))+14*A*xn-22*np.powe...
2_Mathematical_Groundwork/2_y_exercises.ipynb
griffinfoster/fundamentals_of_interferometry
gpl-2.0
Adding Spots and Compute Options
b.add_spot(component='primary', relteff=0.8, radius=20, colat=45, colon=90, feature='spot01')
b.add_dataset('lc', times=np.linspace(0,1,101))
b.add_compute('phoebe', irrad_method='none', compute='phoebe2')
b.add_compute('legacy', irrad_method='none', compute='phoebe1')
2.2/examples/legacy_spots.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Let's use the external atmospheres available for both phoebe1 and phoebe2
b.set_value_all('atm', 'extern_planckint')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.0, 0.0])
b.run_compute('phoebe2', model='phoebe2model')
b.run_compute('phoebe1', model='phoebe1model')
2.2/examples/legacy_spots.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Plotting
afig, mplfig = b.plot(legend=True, ylim=(1.95, 2.05), show=True)
2.2/examples/legacy_spots.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Getting help
# information about functions with Python's help() ...
help(nest.Models)
# ... or IPython's question mark
nest.Models?
# list neuron models
nest.Models()
# choose LIF neuron with exponential synaptic currents: 'iaf_psc_exp'
# look in documentation for model description
# or (if not compiled with MPI)
nest.help('iaf_...
session20_NEST/jupyter_notebooks/1_first_steps.ipynb
INM-6/Python-Module-of-the-Week
mit
Creating a neuron
# before creating a new network,
# reset the simulation kernel / remove all nodes
nest.ResetKernel()
# create the neuron
neuron = nest.Create('iaf_psc_exp')
# investigate the neuron
# Create() just returns a list (tuple) with handles to the new nodes
# (handles = integer numbers called ids)
neuron
# current dynamic...
session20_NEST/jupyter_notebooks/1_first_steps.ipynb
INM-6/Python-Module-of-the-Week
mit
Creating a spikegenerator
# create a spike generator
spikegenerator = nest.Create('spike_generator')
# check out 'spike_times' in its parameters
nest.GetStatus(spikegenerator)
# set the spike times at 10 and 50 ms
nest.SetStatus(spikegenerator, {'spike_times': [10., 50.]})
session20_NEST/jupyter_notebooks/1_first_steps.ipynb
INM-6/Python-Module-of-the-Week
mit
Creating a voltmeter
# create a voltmeter for recording
voltmeter = nest.Create('voltmeter')
# investigate the voltmeter
voltmeter
# see that it records membrane voltage, senders, times
nest.GetStatus(voltmeter)
session20_NEST/jupyter_notebooks/1_first_steps.ipynb
INM-6/Python-Module-of-the-Week
mit
Connecting
# investigate Connect() function
nest.Connect?
# connect spike generator and voltmeter to the neuron
nest.Connect(spikegenerator, neuron, syn_spec={'weight': 1e3})
nest.Connect(voltmeter, neuron)
session20_NEST/jupyter_notebooks/1_first_steps.ipynb
INM-6/Python-Module-of-the-Week
mit
Simulating
# run simulation for 100 ms
nest.Simulate(100.)
# look at nest's KernelStatus:
# network_size (root node, neuron, spike generator, voltmeter)
# num_connections
# time (simulation duration)
nest.GetKernelStatus()
# note that voltmeter has recorded 99 events
nest.GetStatus(voltmeter)
# read out recording time and volt...
session20_NEST/jupyter_notebooks/1_first_steps.ipynb
INM-6/Python-Module-of-the-Week
mit
Plotting
# plot results
# units can be found in documentation
pylab.plot(times, voltages, label='Neuron 1')
pylab.xlabel('Time (ms)')
pylab.ylabel('Membrane potential (mV)')
pylab.title('Membrane potential')
pylab.legend()
# create the same plot with NEST's built-in plotting function
import nest.voltage_trace
nest.voltage_tra...
session20_NEST/jupyter_notebooks/1_first_steps.ipynb
INM-6/Python-Module-of-the-Week
mit
Representational Similarity Analysis Representational Similarity Analysis is used to perform summary statistics on supervised classifications where the number of classes is relatively high. It consists in characterizing the structure of the confusion matrix to infer the similarity between brain responses and serves as ...
# Authors: Jean-Remi King <jeanremi.king@gmail.com>
#          Jaakko Leppakangas <jaeilepp@student.jyu.fi>
#          Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)

import os.path as op
import numpy as np
from pandas import read_csv
import matplotlib.pyplot as plt
from sklearn.model_sel...
0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Let's restrict the number of conditions to speed up computation
max_trigger = 24
conds = conds[:max_trigger]  # take only the first 24 rows
0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Define stimulus - trigger mapping
conditions = []
for c in conds.values:
    cond_tags = list(c[:2])
    cond_tags += [('not-' if i == 0 else '') + conds.columns[k]
                  for k, i in enumerate(c[2:], 2)]
    conditions.append('/'.join(map(str, cond_tags)))
print(conditions[:10])
0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Let's make the event_id dictionary
event_id = dict(zip(conditions, conds.trigger + 1))
event_id['0/human bodypart/human/not-face/animal/natural']
0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Read MEG data
n_runs = 4  # 4 for full data (use less to speed up computations)
fname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif')
raws = [read_raw_fif(fname % block, verbose='error')
        for block in range(n_runs)]  # ignore filename warnings
raw = concatenate_raws(raws)
events = mne.find_events(raw, min_duration=.002...
0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Epoch data
picks = mne.pick_types(raw.info, meg=True)
epochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None,
                    picks=picks, tmin=-.1, tmax=.500, preload=True)
0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Let's plot some conditions
epochs['face'].average().plot()
epochs['not-face'].average().plot()
0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Representational Similarity Analysis (RSA) is a neuroimaging-specific appellation for statistics applied to the confusion matrix, also referred to as the representational dissimilarity matrix (RDM). Compared to the approach from Cichy et al., we'll use a multiclass classifier (Multinomial Logistic Regression) wh...
# Classify using the average signal in the window 50ms to 300ms
# to focus the classifier on the time interval with best SNR.
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(C=1, solver='liblinear',
                                       multi_class='auto'))
X = epochs.copy().crop(0.05, 0.3...
0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute confusion matrix using ROC-AUC
confusion = np.zeros((len(classes), len(classes)))
for ii, train_class in enumerate(classes):
    for jj in range(ii, len(classes)):
        confusion[ii, jj] = roc_auc_score(y == train_class, y_pred[:, jj])
        confusion[jj, ii] = confusion[ii, jj]
0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
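On synthetic data, the same one-vs-rest ROC-AUC confusion construction looks like this. The classifier and features here are made up for illustration (the notebook uses cross-validated MEG predictions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_classes, n = 3, 300
y = rng.integers(0, n_classes, n)
X = rng.normal(size=(n, 2)) + y[:, None]  # class-shifted Gaussian blobs

clf = LogisticRegression().fit(X, y)
y_prob = clf.predict_proba(X)  # stand-in for the notebook's y_pred

classes = np.arange(n_classes)
confusion = np.zeros((n_classes, n_classes))
for ii, train_class in enumerate(classes):
    for jj in range(ii, n_classes):
        # AUC of class jj's predicted probability at separating class ii from the rest
        confusion[ii, jj] = roc_auc_score(y == train_class, y_prob[:, jj])
        confusion[jj, ii] = confusion[ii, jj]
print(np.round(confusion, 2))
```

The diagonal is high (each class's own probability separates it well) while off-diagonal entries reflect how confusable two classes are.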
Plot
labels = [''] * 5 + ['face'] + [''] * 11 + ['bodypart'] + [''] * 6
fig, ax = plt.subplots(1)
im = ax.matshow(confusion, cmap='RdBu_r', clim=[0.3, 0.7])
ax.set_yticks(range(len(classes)))
ax.set_yticklabels(labels)
ax.set_xticks(range(len(classes)))
ax.set_xticklabels(labels, rotation=40, ha='left')
ax.axhline(11.5, col...
0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Confusion matrices related to mental representations have historically been summarized with dimensionality reduction using multi-dimensional scaling [1]. See how the face samples cluster together.
fig, ax = plt.subplots(1)
mds = MDS(2, random_state=0, dissimilarity='precomputed')
chance = 0.5
summary = mds.fit_transform(chance - confusion)
cmap = plt.get_cmap('rainbow')
colors = ['r', 'b']
names = list(conds['condition'].values)
for color, name in zip(colors, set(names)):
    sel = np.where([this_name == name fo...
0.23/_downloads/61268d5dc873438a743241ad21a989fd/decoding_rsa_sgskip.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Make a striplog
from striplog import Striplog, Component

s = Striplog.from_csv(text=text)
s.plot(aspect=5)
s[0]
docs/tutorial/12_Calculate_sand_proportion.ipynb
agile-geoscience/striplog
apache-2.0
Make a sand flag log We'll make a log version of the striplog:
start, stop, step = 0, 25, 0.01
L = s.to_log(start=start, stop=stop, step=step)

import matplotlib.pyplot as plt

plt.figure(figsize=(15, 2))
plt.plot(L)
docs/tutorial/12_Calculate_sand_proportion.ipynb
agile-geoscience/striplog
apache-2.0
Convolve with running window Convolution with a boxcar filter computes the mean in a window.
import numpy as np

window_length = 2.5  # metres.
N = int(window_length / step)
boxcar = 100 * np.ones(N) / N
z = np.linspace(start, stop, L.size)
prop = np.convolve(L, boxcar, mode='same')

plt.plot(z, prop)
plt.grid(c='k', alpha=0.2)
plt.ylim(-5, 105)
docs/tutorial/12_Calculate_sand_proportion.ipynb
agile-geoscience/striplog
apache-2.0
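The claim that boxcar convolution computes a running mean is easy to verify on a tiny log. A minimal sketch, using mode='valid' so edge padding doesn't complicate the comparison (the cell above uses mode='same', which keeps the output the same length at the cost of edge effects):

```python
import numpy as np

L = np.array([0, 0, 1, 1, 1, 0, 1, 0, 0, 1], dtype=float)
N = 4
boxcar = np.ones(N) / N

prop = np.convolve(L, boxcar, mode='valid')  # boxcar convolution
# explicit running mean over every length-N window
manual = np.array([L[i:i+N].mean() for i in range(len(L) - N + 1)])
print(prop)
```

Multiplying the boxcar by 100, as above, simply rescales the running mean to a percentage.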
Write out as CSV Here's the proportion log we made:
z_prop = np.stack([z, prop], axis=1)
z_prop.shape
docs/tutorial/12_Calculate_sand_proportion.ipynb
agile-geoscience/striplog
apache-2.0
Save it with NumPy (or you could build up a Pandas DataFrame)...
np.savetxt('prop.csv', z_prop, delimiter=',', header='elev,perc', comments='', fmt='%1.3f')
docs/tutorial/12_Calculate_sand_proportion.ipynb
agile-geoscience/striplog
apache-2.0
Check the file looks okay with a quick command line check (! sends commands to the shell).
!head prop.csv
docs/tutorial/12_Calculate_sand_proportion.ipynb
agile-geoscience/striplog
apache-2.0
Plot everything together
fig, ax = plt.subplots(figsize=(5, 10), ncols=3, sharey=True)

# Plot the striplog.
s.plot(ax=ax[0])
ax[0].set_title('Striplog')

# Fake a striplog by plotting the log... it looks nice!
ax[1].fill_betweenx(z, 0.5, 0, color='grey')
ax[1].fill_betweenx(z, L, 0, color='gold', lw=0)
ax[1].set_title('Faked with log')

# Plo...
docs/tutorial/12_Calculate_sand_proportion.ipynb
agile-geoscience/striplog
apache-2.0
Make a histogram of thicknesses
thicks = [iv.thickness for iv in s]
_ = plt.hist(thicks, bins=51)
docs/tutorial/12_Calculate_sand_proportion.ipynb
agile-geoscience/striplog
apache-2.0
Age statistics flaws The statistics shown above are misleading, since each age category is represented by a single number that does not adequately describe the bracket.
def get_money_from(money_string):
    return int(money_string.split(" ")[0].split("$")[1].replace(",", ""))

int_income = extract_column_data(celebrate, "How much total combined money did all members of your HOUSEHOLD earn last year?", get_money_from, ...
2. Data Analysis and Visualization/Analyzing Thanksgiving Dinner/Thanksgiving survey.ipynb
lesonkorenac/dataquest-projects
mit
Household earnings statistics flaws The same problems apply here as with the age statistics.
travel = celebrate["How far will you travel for Thanksgiving?"]
display_counts(travel.loc[int_income[int_income < 15000].index].value_counts(), "Low income travel")
display_counts(travel.loc[int_income[int_income >= 15000].index].value_counts(), "High income travel")
2. Data Analysis and Visualization/Analyzing Thanksgiving Dinner/Thanksgiving survey.ipynb
lesonkorenac/dataquest-projects
mit
Travel by income The hypothesis that people with lower income travel more because they might be younger does not seem to hold (the assumption that younger people have lower income may be wrong; we could use the age values instead).
def thanksgiving_and_friends(data, aggregated_column):
    return data.pivot_table(index="Have you ever tried to meet up with hometown friends on Thanksgiving night?",
                            columns='Have you ever attended a "Friendsgiving?"',
                            values=aggregated_column)

print(thanksgiving_and_friend...
2. Data Analysis and Visualization/Analyzing Thanksgiving Dinner/Thanksgiving survey.ipynb
lesonkorenac/dataquest-projects
mit
Create a CC object to set up the required parameters Please enable the mprov param in '/cc_conf/cerebralcortex.yml' (mprov: pennprov). You will need to create a user on the mprov server first and set the username and password in '/cc_conf/cerebralcortex.yml'.
CC = Kernel("/home/jovyan/cc_conf/", study_name="default")
jupyter_demo/mprov_example.ipynb
MD2Korg/CerebralCortex
bsd-2-clause
Generate synthetic GPS data
ds_gps = gen_location_datastream(user_id="bfb2ca0c-e19c-3956-9db2-5459ccadd40c", stream_name="gps--org.md2k.phonesensor--phone")
jupyter_demo/mprov_example.ipynb
MD2Korg/CerebralCortex
bsd-2-clause
Window the data into 60-second chunks
windowed_gps_ds = ds_gps.window(windowDuration=60)
gps_clusters = cluster_gps(windowed_gps_ds)
jupyter_demo/mprov_example.ipynb
MD2Korg/CerebralCortex
bsd-2-clause
Print Data
gps_clusters.show(10)
jupyter_demo/mprov_example.ipynb
MD2Korg/CerebralCortex
bsd-2-clause
Hat potential The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential": $$ V(x) = -a x^2 + b x^4 $$ Write a function hat(x,a,b) that returns the value of this function:
# YOUR CODE HERE
def hat(x, a, b):
    return -a * x**2 + b * x**4

assert hat(0.0, 1.0, 1.0) == 0.0
assert hat(1.0, 10.0, 1.0) == -9.0
assignments/assignment11/OptimizationEx01.ipynb
JackDi/phys202-2015-work
mit
Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
x = np.linspace(-3, 3)
b = 1.0
a = 5.0
plt.plot(x, hat(x, a, b))

# YOUR CODE HERE
x0 = -2
a = 5.0
b = 1.0
y = opt.minimize(hat, x0, (a, b))
y.x

assert True  # leave this to grade the plot
assignments/assignment11/OptimizationEx01.ipynb
JackDi/phys202-2015-work
mit
Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$. Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima. Print the x values of the minima. Plot the function as a blue line. On the same axes, show the minima...
# YOUR CODE HERE
a = 5.0
b = 1.0
mini = np.array([])
x = np.linspace(-3, 3)
for i in x:
    y = opt.minimize(hat, i, (a, b))
    z = int(y.x * 100000)
    # keep each rounded minimum only once
    if not np.any(mini == z):
        mini = np.append(mini, z)
mini = mini / 100000
mini
plt.plot(x, hat(x, a, b), label="Hat Function")
plt.plot(mini[0], hat(mi...
assignments/assignment11/OptimizationEx01.ipynb
JackDi/phys202-2015-work
mit
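A tidier way to satisfy the exercise: start `scipy.optimize.minimize` once on each side of the barrier at $x=0$, and check the results against the analytic minima $x = \pm\sqrt{a/(2b)}$ obtained from $V'(x) = -2ax + 4bx^3 = 0$. A sketch, not the graded solution:

```python
import numpy as np
import scipy.optimize as opt

def hat(x, a, b):
    return -a * x**2 + b * x**4

a, b = 5.0, 1.0
# One starting point on each side of the central barrier at x = 0.
minima = sorted(opt.minimize(hat, x0, args=(a, b)).x[0] for x0 in (-2.0, 2.0))
analytic = np.sqrt(a / (2 * b))  # V'(x) = -2ax + 4bx^3 = 0  =>  x = +/- sqrt(a/(2b))
print(minima, analytic)
```

Because the hat potential is symmetric, any starting point on one side of the barrier converges to that side's minimum, so two starts suffice.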
Import the dataset and trained model In the previous notebook, you imported 20 million movie recommendations and trained an ALS model with BigQuery ML. We are going to use the same tables, but if this is a new environment, please run the below commands to copy over the clean data. First create the BigQuery dataset and ...
!bq mk movielens

%%bash
rm -r bqml_data
mkdir bqml_data
cd bqml_data
curl -O 'http://files.grouplens.org/datasets/movielens/ml-20m.zip'
unzip ml-20m.zip
yes | bq rm -r $PROJECT:movielens
bq --location=US mk --dataset \
    --description 'Movie Recommendations' \
    $PROJECT:movielens
bq --location=US load --source_fo...
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
And create a cleaned movielens.movies table.
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.movies AS
SELECT
  * REPLACE(SPLIT(genres, "|") AS genres)
FROM
  movielens.movies_raw
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Next, copy over the trained recommendation model. Note that if your project is in the EU, you will need to change the location from US to EU below. Also note that, as of the time of writing, you cannot copy models across regions with bq cp.
%%bash
bq --location=US cp \
    cloud-training-demos:movielens.recommender \
    movielens.recommender
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Next, ensure the model still works by invoking predictions for movie recommendations:
%%bigquery --project $PROJECT
SELECT
  *
FROM
  ML.PREDICT(MODEL `movielens.recommender`, (
    SELECT
      movieId, title, 903 AS userId
    FROM
      movielens.movies, UNNEST(genres) g
    WHERE g = 'Comedy'
  ))
ORDER BY predicted_rating DESC
LIMIT 5
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Incorporating user and movie information The matrix factorization approach does not use any information about users or movies beyond what is available from the ratings matrix. However, we will often have user information (such as the city they live, their annual income, their annual expenditure, etc.) and we will almos...
%%bigquery --project $PROJECT
SELECT
  processed_input,
  feature,
  TO_JSON_STRING(factor_weights) AS factor_weights,
  intercept
FROM
  ML.WEIGHTS(MODEL `movielens.recommender`)
WHERE
  (processed_input = 'movieId' AND feature = '96481')
  OR (processed_input = 'userId' AND feature = '54192')
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Multiplying these weights and adding the intercept is how we get the predicted rating for this combination of movieId and userId in the matrix factorization approach. These weights also serve as a low-dimensional representation of the movie and user behavior. We can create a regression model to predict the rating given...
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.users AS
SELECT
  userId,
  RAND() * COUNT(rating) AS loyalty,
  CONCAT(SUBSTR(CAST(userId AS STRING), 0, 2)) AS postcode
FROM
  movielens.ratings
GROUP BY userId
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Input features about users can be obtained by joining the user table with the ML weights and selecting all the user information and the user factors from the weights array.
%%bigquery --project $PROJECT
WITH userFeatures AS (
  SELECT
    u.*,
    (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors
  FROM
    movielens.users u
  JOIN ML.WEIGHTS(MODEL movielens.recommender) w
    ON processed_input = 'userId' AND feature = CAST(u.userId AS STRING)
)
SELECT * FROM userFeatu...
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Similarly, we can get product features for the movies data, except that we have to decide how to handle the genre since a movie could have more than one genre. If we decide to create a separate training row for each genre, then we can construct the product features as follows:
%%bigquery --project $PROJECT
WITH productFeatures AS (
  SELECT
    p.* EXCEPT(genres),
    g,
    (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS product_factors
  FROM
    movielens.movies p, UNNEST(genres) g
  JOIN ML.WEIGHTS(MODEL movielens.recommender) w
    ON processed_input = 'movieId' AND fea...
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
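The one-row-per-genre idea is the same operation pandas calls explode; a small illustrative sketch with made-up movies, mirroring `movielens.movies p, UNNEST(genres) g`:

```python
import pandas as pd

movies = pd.DataFrame({
    "movieId": [1, 2],
    "title": ["Toy Story", "Heat"],
    "genres": [["Animation", "Comedy"], ["Action", "Crime", "Thriller"]],
})

# One training row per (movie, genre) pair, like UNNEST(genres) in BigQuery.
exploded = movies.explode("genres").rename(columns={"genres": "g"})
print(len(exploded))
```

A movie with three genres contributes three training rows, each carrying the same movie attributes and product factors.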
Combining these two WITH clauses and pulling in the rating corresponding to the movieId-userId combination (if it exists in the ratings table), we can create the training dataset. TODO 1: Combine the above two queries to get the user factors and product factors for each rating.
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.hybrid_dataset AS
WITH userFeatures AS (
  SELECT
    u.*,
    (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors
  FROM
    movielens.users u
  JOIN ML.WEIGHTS(MODEL movielens.recommender) w
    ON...
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
One of the rows of this table looks like this:
%%bigquery --project $PROJECT
SELECT * FROM movielens.hybrid_dataset LIMIT 1
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Essentially, we have a couple of attributes about the movie, the product factors array corresponding to the movie, a couple of attributes about the user, and the user factors array corresponding to the user. These form the inputs to our “hybrid” recommendations model that builds off the matrix factorization model and a...
%%bigquery --project $PROJECT
CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_users(u ARRAY<FLOAT64>)
RETURNS STRUCT<
  u1 FLOAT64, u2 FLOAT64, u3 FLOAT64, u4 FLOAT64,
  u5 FLOAT64, u6 FLOAT64, u7 FLOAT64, u8 FLOAT64,
  u9 FLOAT64, u10 ...
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
which gives:
%%bigquery --project $PROJECT
SELECT movielens.arr_to_input_16_users(u).*
FROM (SELECT [0., 1., 2., 3., 4., 5., 6., 7.,
              8., 9., 10., 11., 12., 13., 14., 15.] AS u)
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
We can create a similar function named movielens.arr_to_input_16_products to convert the product factor array into named columns. TODO 2: Create a function that returns named columns from a size 16 product factor array.
%%bigquery --project $PROJECT
CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_products(p ARRAY<FLOAT64>)
RETURNS STRUCT<
  p1 FLOAT64, p2 FLOAT64, p3 FLOAT64, p4 FLOAT64,
  p5 FLOAT64, p6 FLOAT64, p7 FLOAT64, p8 FLOAT64,
  p9 FLOAT64, p...
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Then, we can tie together metadata about users and products with the user factors and product factors obtained from the matrix factorization approach to create a regression model to predict the rating:
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_hybrid
OPTIONS(model_type='linear_reg', input_label_cols=['rating'])
AS
SELECT
  * EXCEPT(user_factors, product_factors),
  movielens.arr_to_input_16_users(user_factors).*,
  movielens.arr_to_input_16_products(product_factors).*
FROM mo...
notebooks/recommendation_systems/solutions/3_als_bqml_hybrid.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
The kernel The following GPU kernel computes $$ \log v_{bj} := \log \nu_{bj} - \operatorname{logsumexp}_{i} \left( -\frac{1}{\lambda} c_{ij} + \log u_{bi} \right). $$ This has two key properties that shape our implementation: - The overall reduction structure is akin to a matrix multiplication, i.e. memory accesses to $c_{ij}$ and $...
cuda_source = """
#include <torch/extension.h>
#include <ATen/core/TensorAccessor.h>
#include <ATen/cuda/CUDAContext.h>

using at::RestrictPtrTraits;
using at::PackedTensorAccessor;

#if defined(__HIP_PLATFORM_HCC__)
constexpr int WARP_SIZE = 64;
#else
constexpr int WARP_SIZE = 32;
#endif

// The maximum number of thr...
wasserstein-distance/Pytorch_Wasserstein.ipynb
t-vi/pytorch-tvmisc
mit
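Before reading the CUDA source, it may help to see the same update spelled out in NumPy. This is a sketch of the log-domain Sinkhorn step the kernel computes; the name `sinkstep_np` and the batch layout are assumptions made for illustration, following the formula above:

```python
import numpy as np
from scipy.special import logsumexp

def sinkstep_np(cost, log_nu, log_u, lam):
    """log v_bj = log nu_bj - logsumexp_i(-cost_ij / lam + log u_bi)."""
    # cost: (d1, d2); log_nu: (bs, d2); log_u: (bs, d1) -> result: (bs, d2)
    return log_nu - logsumexp(-cost[None, :, :] / lam + log_u[:, :, None], axis=1)

rng = np.random.default_rng(0)
d = 5
x = np.linspace(0., 1., d)
cost = (x[:, None] - x[None, :]) ** 2  # squared-distance cost matrix

nu = rng.random((1, d)); nu /= nu.sum(axis=1, keepdims=True)
log_u = np.full((1, d), -np.log(d))  # uniform initialization, as in the notebook
log_v = sinkstep_np(cost, np.log(nu), log_u, lam=0.1)
print(log_v.shape)
```

Working in the log domain with logsumexp is what keeps the iteration stable for small $\lambda$, where the plain scaling form under- and overflows.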
Incorporating it in PyTorch We make this into a PyTorch extension module and add a convenience function (and "manual" implementation for the CPU).
wasserstein_ext = torch.utils.cpp_extension.load_inline(
    "wasserstein",
    cpp_sources="",
    cuda_sources=cuda_source,
    extra_cuda_cflags=["--expt-relaxed-constexpr"]
)

def sinkstep(dist, log_nu, log_u, lam: float):
    # dispatch to optimized GPU implementation for GPU t...
wasserstein-distance/Pytorch_Wasserstein.ipynb
t-vi/pytorch-tvmisc
mit
We use this update step in a building block for the Sinkhorn iteration:
class SinkhornOT(torch.autograd.Function):
    @staticmethod
    def forward(ctx, mu, nu, dist, lam=1e-3, N=100):
        assert mu.dim() == 2 and nu.dim() == 2 and dist.dim() == 2
        bs = mu.size(0)
        d1, d2 = dist.size()
        assert nu.size(0) == bs and mu.size(1) == d1 and nu.size(1) == d2
        log_...
wasserstein-distance/Pytorch_Wasserstein.ipynb
t-vi/pytorch-tvmisc
mit
We also define a function to get the coupling itself:
def get_coupling(mu, nu, dist, lam=1e-3, N=1000):
    assert mu.dim() == 2 and nu.dim() == 2 and dist.dim() == 2
    bs = mu.size(0)
    d1, d2 = dist.size()
    assert nu.size(0) == bs and mu.size(1) == d1 and nu.size(1) == d2
    log_mu = mu.log()
    log_nu = nu.log()
    log_u = torch.full_like(mu, -math.log(d1))
    ...
wasserstein-distance/Pytorch_Wasserstein.ipynb
t-vi/pytorch-tvmisc
mit
We define some test distributions. These are similar to examples from Python Optimal Transport.
# some test distribution densities
n = 100
lam = 1e-3
x = torch.linspace(0, 100, n)
mu1 = torch.distributions.Normal(20., 10.).log_prob(x).exp()
mu2 = torch.distributions.Normal(60., 30.).log_prob(x).exp()
mu3 = torch.distributions.Normal(40., 20.).log_prob(x).exp()
mu1 /= mu1.sum()
mu2 /= mu2.sum()
mu3 /= mu3.sum()
mu...
wasserstein-distance/Pytorch_Wasserstein.ipynb
t-vi/pytorch-tvmisc
mit
We run a sanity check for the distance: (This will take longer than you might expect, as it computes a rather large gradient numerically, but it finishes in $<1$ minute on a GTX 1080)
t = time.time()
device = "cuda"
res = torch.autograd.gradcheck(
    lambda x: SinkhornOT.apply(x.softmax(1),
                                 mu231.to(device=device, dtype=torch.double),
                                 cost.to(device=device, dtype=torch.double),
                                 ...
wasserstein-distance/Pytorch_Wasserstein.ipynb
t-vi/pytorch-tvmisc
mit
We might also check that sinkstep gives the same result on GPU and CPU. (Kai Zhao pointed out that this was not the case for an earlier version of this notebook; thank you. Indeed, there was a bug in the CPU implementation.)
res_cpu = sinkstep(cost.cpu(), mu123.log().cpu(), mu231.log().cpu(), lam)
res_gpu = sinkstep(cost.to(device), mu123.log().to(device), mu231.log().to(device), lam).cpu()
assert (res_cpu - res_gpu).abs().max() < 1e-5
wasserstein-distance/Pytorch_Wasserstein.ipynb
t-vi/pytorch-tvmisc
mit
We can visualize the coupling along with the marginals:
coupling = get_coupling(mu123.cuda(), mu231.cuda(), cost.cuda())

pyplot.figure(figsize=(10,10))
pyplot.subplot(2, 2, 1)
pyplot.plot(mu2.cpu())
pyplot.subplot(2, 2, 4)
pyplot.plot(mu1.cpu(), transform=matplotlib.transforms.Affine2D().rotate_deg(270) + pyplot.gca().transData)
pyplot.subplot(2, 2, 3)
pyplot.imshow(couplin...
wasserstein-distance/Pytorch_Wasserstein.ipynb
t-vi/pytorch-tvmisc
mit
This looks a lot like the coupling from Python Optimal Transport, and in fact all three couplings match the results computed with POT:
o_coupling12 = torch.tensor(ot.bregman.sinkhorn_stabilized(mu1.cpu(), mu2.cpu(), cost.cpu(), reg=1e-3)) o_coupling23 = torch.tensor(ot.bregman.sinkhorn_stabilized(mu2.cpu(), mu3.cpu(), cost.cpu(), reg=1e-3)) o_coupling31 = torch.tensor(ot.bregman.sinkhorn_stabilized(mu3.cpu(), mu1.cpu(), cost.cpu(), reg=1e-3)) pyplot.i...
wasserstein-distance/Pytorch_Wasserstein.ipynb
t-vi/pytorch-tvmisc
mit
Performance comparison to existing implementations

We copy the code from Dazac's recent blog post in order to compare performance. Dazac uses early stopping, but this comes at the cost of introducing a synchronization point after each iteration. I modified the code to take the distance matrix as an argument.
# Copyright 2018 Daniel Dazac # MIT Licensed # License and source: https://github.com/dfdazac/wassdistance/ class SinkhornDistance(torch.nn.Module): r""" Given two empirical measures each with :math:`P_1` locations :math:`x\in\mathbb{R}^{D_1}` and :math:`P_2` locations :math:`y\in\mathbb{R}^{D_2}`, outp...
wasserstein-distance/Pytorch_Wasserstein.ipynb
t-vi/pytorch-tvmisc
mit
With this problem size and forward + backward, we achieve a speedup factor of approximately 6.5 while doing about 3 times as many iterations.

Barycenters

We can also compute barycenters. Let's go 2d to do so. I use a relatively small $N$ because, at the time of writing, my GPU is partially occupied by a long-running training.
N = 50 a, b, c = torch.zeros(3, N, N, device="cuda") x = torch.linspace(-5, 5, N, device="cuda") a[N//5:-N//5, N//5:-N//5] = 1 b[(x[None]**2+x[:,None]**2 > 4) & (x[None]**2+x[:,None]**2 < 9)] = 1 c[((x[None]-2)**2+(x[:,None]-2)**2 < 4) | ((x[None]+2)**2+(x[:,None]+2)**2 < 4)] = 1 pyplot.imshow(c.cpu(), cmap=pyplot.cm.g...
wasserstein-distance/Pytorch_Wasserstein.ipynb
t-vi/pytorch-tvmisc
mit
It's fast enough to just use barycenters for interpolation:
res = [] for i in torch.linspace(0, 1, 10): res.append(get_barycenter(torch.cat([a, b, c], 0), dist, torch.tensor([i*0.9, (1-i)*0.9, 0], device="cuda"), N=100)) pyplot.figure(figsize=(15,5)) pyplot.imshow(torch.cat([r[0].sum(1).view(N, N).cpu() for r in res], 1), cmap=pyplot.cm.gray_r)
wasserstein-distance/Pytorch_Wasserstein.ipynb
t-vi/pytorch-tvmisc
mit
We'll also need to execute some raw SQL, so I'll import a helper function to make the results more readable:
from project import sql_to_agate
calaccess-exploration/decoding-filing-periods.ipynb
california-civic-data-coalition/python-calaccess-notebooks
mit
Let's start by examining the distinct values of the statement type on CVR_CAMPAIGN_DISCLOSURE_CD. And let's narrow the scope to only the Form 460 filings.
sql_to_agate( """ SELECT UPPER("STMT_TYPE"), COUNT(*) FROM "CVR_CAMPAIGN_DISCLOSURE_CD" WHERE "FORM_TYPE" = 'F460' GROUP BY 1 ORDER BY COUNT(*) DESC; """ ).print_table()
calaccess-exploration/decoding-filing-periods.ipynb
california-civic-data-coalition/python-calaccess-notebooks
mit
Not all of these values are defined, as previously noted in our docs:

* PR might be pre-election
* QS is probably quarterly statement
* YE might be...I don't know, "year-end"?
* S is probably semi-annual

Maybe come back later and look at the actual filings. There aren't that many. There's another similar-named colum...
sql_to_agate( """ SELECT FF."STMNT_TYPE", LU."CODE_DESC", COUNT(*) FROM "FILER_FILINGS_CD" FF JOIN "LOOKUP_CODES_CD" LU ON FF."STMNT_TYPE" = LU."CODE_ID" AND LU."CODE_TYPE" = 10000 GROUP BY 1, 2; """ ).print_table()
calaccess-exploration/decoding-filing-periods.ipynb
california-civic-data-coalition/python-calaccess-notebooks
mit
One of the tables that caught my eye is FILING_PERIOD_CD, which appears to have a row for each quarterly filing period:
sql_to_agate( """ SELECT * FROM "FILING_PERIOD_CD" """ ).print_table()
calaccess-exploration/decoding-filing-periods.ipynb
california-civic-data-coalition/python-calaccess-notebooks
mit
Every period is described as a quarter, and the records are equally divided among them:
sql_to_agate( """ SELECT "PERIOD_DESC", COUNT(*) FROM "FILING_PERIOD_CD" GROUP BY 1; """ ).print_table()
calaccess-exploration/decoding-filing-periods.ipynb
california-civic-data-coalition/python-calaccess-notebooks
mit
The difference between every START_DATE and END_DATE is actually a three-month interval:
sql_to_agate( """ SELECT "END_DATE" - "START_DATE" AS duration, COUNT(*) FROM "FILING_PERIOD_CD" GROUP BY 1; """ ).print_table()
calaccess-exploration/decoding-filing-periods.ipynb
california-civic-data-coalition/python-calaccess-notebooks
mit
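The roughly three-month durations can be sanity-checked in plain Python; a sketch of calendar-quarter boundaries for one year (illustrative only, not derived from the CAL-ACCESS data):

```python
from datetime import date

def quarter_bounds(year):
    """Return (start, end) date pairs for the four calendar quarters of a year."""
    return [
        (date(year, 1, 1), date(year, 3, 31)),
        (date(year, 4, 1), date(year, 6, 30)),
        (date(year, 7, 1), date(year, 9, 30)),
        (date(year, 10, 1), date(year, 12, 31)),
    ]

# the end - start gaps land around 89-91 days, matching the durations
# the query above reports for FILING_PERIOD_CD
durations = [(end - start).days for start, end in quarter_bounds(2019)]
```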
And they have covered every year between 1973 and 2334 (how optimistic!):
sql_to_agate( """ SELECT DATE_PART('year', "START_DATE")::int as year, COUNT(*) FROM "FILING_PERIOD_CD" GROUP BY 1 ORDER BY 1 DESC; """ ).print_table()
calaccess-exploration/decoding-filing-periods.ipynb
california-civic-data-coalition/python-calaccess-notebooks
mit
Filings are linked to filing periods via FILER_FILINGS_CD.PERIOD_ID. While that column is not always populated, it is if you limit your results to just the Form 460 filings:
sql_to_agate( """ SELECT ff."PERIOD_ID", fp."START_DATE", fp."END_DATE", fp."PERIOD_DESC", COUNT(*) FROM "FILER_FILINGS_CD" ff JOIN "CVR_CAMPAIGN_DISCLOSURE_CD" cvr ON ff."FILING_ID" = cvr."FILING_ID" AND ff."FILING_SEQUENCE" = cvr."AMEND_ID" AND cvr."FORM_TYPE" = 'F460' JOIN "FILING_PER...
calaccess-exploration/decoding-filing-periods.ipynb
california-civic-data-coalition/python-calaccess-notebooks
mit
Also, is Schwarzenegger running this cycle? Who else could be filing from so far into the future? AAANNNNYYYway... We also need to check that the join between FILER_FILINGS_CD and CVR_CAMPAIGN_DISCLOSURE_CD isn't filtering out too many filings:
sql_to_agate( """ SELECT cvr."FILING_ID", cvr."FORM_TYPE", cvr."FILER_NAML" FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr LEFT JOIN "FILER_FILINGS_CD" ff ON cvr."FILING_ID" = ff."FILING_ID" AND cvr."AMEND_ID" = ff."FILING_SEQUENCE" WHERE cvr."FORM_TYPE" = 'F460' AND (ff."FILING_ID" IS NULL OR ...
calaccess-exploration/decoding-filing-periods.ipynb
california-civic-data-coalition/python-calaccess-notebooks
mit
So only a handful, mostly local campaigns or just nonsense test data. Another important thing to check is how well the dates from the filing period look-up records line up with the dates on the Form 460 filing records. It would be bad if the CVR_CAMPAIGN_DISCLOSURE_CD.FROM_DATE were before FILING_PERIOD_CD.STA...
sql_to_agate( """ SELECT CASE WHEN cvr."FROM_DATE" < fp."START_DATE" THEN 'filing from_date before period start_date' WHEN cvr."THRU_DATE" > fp."END_DATE" THEN 'filing thru_date after period end_date' ELSE 'okay' END as test, COUNT(*) FROM "CVR_...
calaccess-exploration/decoding-filing-periods.ipynb
california-civic-data-coalition/python-calaccess-notebooks
mit
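The CASE logic above can be expressed as a plain Python helper (hypothetical, mirroring the query rather than taken from it):

```python
from datetime import date

def check_dates(from_date, thru_date, start_date, end_date):
    """Classify a filing's date range against its filing period,
    the same way the SQL CASE expression does."""
    if from_date < start_date:
        return "filing from_date before period start_date"
    if thru_date > end_date:
        return "filing thru_date after period end_date"
    return "okay"
```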
So half of the time, the THRU_DATE on the filing is later than the END_DATE on the filing period. How big of a difference can exist between these two dates?
sql_to_agate( """ SELECT cvr."THRU_DATE" - fp."END_DATE" as date_diff, COUNT(*) FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr JOIN "FILER_FILINGS_CD" ff ON cvr."FILING_ID" = ff."FILING_ID" AND cvr."AMEND_ID" = ff."FILING_SEQUENCE" JOIN "FILING_PERIOD_CD" fp ON ff."PERIO...
calaccess-exploration/decoding-filing-periods.ipynb
california-civic-data-coalition/python-calaccess-notebooks
mit
Ugh. Looks like, in most of the problem cases, the thru date can be a whole quarter later than the end date of the filing period. Let's take a closer look at these...
sql_to_agate( """ SELECT cvr."FILING_ID", cvr."AMEND_ID", cvr."FROM_DATE", cvr."THRU_DATE", fp."START_DATE", fp."END_DATE" FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr JOIN "FILER_FILINGS_CD" ff ON cvr."FILING_ID" = ff."FILING_ID" ...
calaccess-exploration/decoding-filing-periods.ipynb
california-civic-data-coalition/python-calaccess-notebooks
mit
So, actually, this sort of makes sense: quarterly filings cover three-month intervals, while semi-annual filings cover six-month intervals. And FILING_PERIOD_CD only has records for three-month intervals. Let's test this theory by getting the distinct CVR_CAMPAIGN_DISCLOSURE_CD.STMT_TYPE values from these recor...
sql_to_agate( """ SELECT UPPER(cvr."STMT_TYPE"), COUNT(*) FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr JOIN "FILER_FILINGS_CD" ff ON cvr."FILING_ID" = ff."FILING_ID" AND cvr."AMEND_ID" = ff."FILING_SEQUENCE" JOIN "FILING_PERIOD_CD" fp ON ff."PERIOD_ID" = fp."PERIOD_ID" WHERE cvr."FORM_TYPE"...
calaccess-exploration/decoding-filing-periods.ipynb
california-civic-data-coalition/python-calaccess-notebooks
mit
Optionally, you can call tnp.experimental_enable_numpy_behavior() to enable type promotion in TensorFlow. This allows TNP to more closely follow the NumPy standard.
tnp.experimental_enable_numpy_behavior()
examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb
keras-team/keras-io
apache-2.0
To test our models we will use the Boston housing prices regression dataset.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data( path="boston_housing.npz", test_split=0.2, seed=113 ) def evaluate_model(model: keras.Model): [loss, percent_error] = model.evaluate(x_test, y_test, verbose=0) print("Mean absolute percent error before training: ", percent_...
examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb
keras-team/keras-io
apache-2.0
Subclassing keras.Model with TNP

The most flexible way to make use of the Keras API is to subclass the keras.Model class. Subclassing the Model class gives you the ability to fully customize what occurs in the training loop. This makes subclassing Model a popular option for researchers. In this example, we will imple...
class TNPForwardFeedRegressionNetwork(keras.Model): def __init__(self, blocks=None, **kwargs): super(TNPForwardFeedRegressionNetwork, self).__init__(**kwargs) if not isinstance(blocks, list): raise ValueError(f"blocks must be a list, got blocks={blocks}") self.blocks = blocks ...
examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb
keras-team/keras-io
apache-2.0
Just like with any other Keras model we can utilize any supported optimizer, loss, metrics or callbacks that we want. Let's see how the model performs!
model = TNPForwardFeedRegressionNetwork(blocks=[3, 3]) model.compile( optimizer="adam", loss="mean_squared_error", metrics=[keras.metrics.MeanAbsolutePercentageError()], ) evaluate_model(model)
examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb
keras-team/keras-io
apache-2.0
Great! Our model seems to be effectively learning to solve the problem at hand. We can also write our own custom loss function using TNP.
def tnp_mse(y_true, y_pred): return tnp.mean(tnp.square(y_true - y_pred), axis=0) keras.backend.clear_session() model = TNPForwardFeedRegressionNetwork(blocks=[3, 3]) model.compile( optimizer="adam", loss=tnp_mse, metrics=[keras.metrics.MeanAbsolutePercentageError()], ) evaluate_model(model)
examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb
keras-team/keras-io
apache-2.0
Implementing a Keras Layer Based Model with TNP

If desired, TNP can also be used in a layer-oriented Keras code structure. Let's implement the same model, but using a layered approach!
def tnp_relu(x): return tnp.maximum(x, 0) class TNPDense(keras.layers.Layer): def __init__(self, units, activation=None): super().__init__() self.units = units self.activation = activation def build(self, input_shape): self.w = self.add_weight( name="weights",...
examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb
keras-team/keras-io
apache-2.0
You can also seamlessly switch between TNP layers and native Keras layers!
def create_mixed_model(): return keras.Sequential( [ TNPDense(3, activation=tnp_relu), # The model will have no issue using a normal Dense layer layers.Dense(3, activation="relu"), # ... or switching back to tnp layers! TNPDense(1), ] ...
examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb
keras-team/keras-io
apache-2.0
The Keras API offers a wide variety of layers. The ability to use them alongside NumPy code can be a huge time saver in projects.

Distribution Strategy

TensorFlow NumPy and Keras integrate with TensorFlow Distribution Strategies. This makes it simple to perform distributed training across multiple GPUs, or even an ent...
gpus = tf.config.list_logical_devices("GPU") if gpus: strategy = tf.distribute.MirroredStrategy(gpus) else: # We can fallback to a no-op CPU strategy. strategy = tf.distribute.get_strategy() print("Running with strategy:", str(strategy.__class__.__name__)) with strategy.scope(): model = create_layered_...
examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb
keras-team/keras-io
apache-2.0
TensorBoard Integration

One of the many benefits of using the Keras API is the ability to monitor training through TensorBoard. Using the TensorFlow NumPy API alongside Keras allows you to easily leverage TensorBoard.
keras.backend.clear_session()
examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb
keras-team/keras-io
apache-2.0
To load TensorBoard from a Jupyter notebook, you can run the following magic: %load_ext tensorboard
models = [ (TNPForwardFeedRegressionNetwork(blocks=[3, 3]), "TNPForwardFeedRegressionNetwork"), (create_layered_tnp_model(), "layered_tnp_model"), (create_mixed_model(), "mixed_model"), ] for model, model_name in models: model.compile( optimizer="adam", loss="mean_squared_error", ...
examples/keras_recipes/ipynb/tensorflow_numpy_models.ipynb
keras-team/keras-io
apache-2.0