As you can see, CSR is faster, and for more unstructured sparsity patterns the gain will be even larger. The CSR format, however, makes it difficult to add new elements.

How do we solve linear systems? With direct or iterative solvers.

Direct solvers
The direct methods use sparse Gaussian elimination, i.e. they eliminate variables while trying to keep the...
import numpy as np
import scipy as sp
import scipy.sparse
import matplotlib.pyplot as plt

N = n = 100
ex = np.ones(n)
a = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr')
a = a.todense()
b = np.array(np.linalg.inv(a))
fig, axes = plt.subplots(1, 2)
axes[0].spy(a)
axes[1].spy(b, markersize=2)
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
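As an aside (my addition, not in the original lecture), the reason insertion is hard in CSR becomes clear if we inspect its three arrays directly: every new element forces a shift of data and indices and an update of indptr. A minimal sketch:

import numpy as np
import scipy.sparse as sparse

n = 5
ex = np.ones(n)
a = sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n).tocsr()
print(a.data)     # nonzero values, stored row by row
print(a.indices)  # column index of each stored value
print(a.indptr)   # a.indptr[i]:a.indptr[i+1] slices row i out of data/indices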
The inverse looks woeful: it is completely dense.
N = n = 5
ex = np.ones(n)
A = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr')
A = A.todense()
B = np.array(np.linalg.inv(A))
print(B)
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
But the L and U factors of a sparse matrix can themselves be sparse.
import scipy.linalg

p, l, u = scipy.linalg.lu(a)
fig, axes = plt.subplots(1, 2)
axes[0].spy(l)
axes[1].spy(u)
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
In 1D the factors L and U are bidiagonal. In 2D the factors look less optimistic, but are still OK.
from scipy.sparse import csc_matrix
import scipy.sparse.linalg

n = 3
ex = np.ones(n)
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr')
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csc_matrix(A)
T = scipy.sparse.linalg.splu(A)
fig, axes = plt.subplots(1, 2)
axes[0].spy(A, markersize=1)
axes[1].spy(T.L, marker='.', markersize=1)
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
Sparse matrices and graph ordering
The number of non-zeros in the LU decomposition has a deep connection to graph theory: a sparse matrix defines a graph in which there is an edge between vertices $i$ and $j$ if $a_{ij} \ne 0$.
import networkx as nx

n = 13
ex = np.ones(n)
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr')
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csc_matrix(A)
G = nx.Graph(A)
nx.draw(G, pos=nx.spring_layout(G), node_size=10)
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
Strategies for elimination
The reordering that minimizes the fill-in is important, so we can use graph theory to find one:
* Minimum degree ordering: order by the degree of the vertex
* Cuthill–McKee algorithm (and reverse Cuthill–McKee): order for a small bandwidth
* Nested dissection: split the graph into two with mini...
import networkx as nx
from networkx.utils import reverse_cuthill_mckee_ordering, cuthill_mckee_ordering

n = 13
ex = np.ones(n)
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr')
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csc_matrix(A)
G = nx.Graph(A)
#rcm ...
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
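To make the effect of reordering concrete, here is a small sketch (my addition; bandwidth and reorder are hypothetical helpers) comparing the bandwidth of the same 2D Laplacian under the natural ordering, a random permutation, and the reverse Cuthill–McKee ordering computed by networkx:

import numpy as np
import networkx as nx
import scipy.sparse as sparse
from networkx.utils import reverse_cuthill_mckee_ordering

def bandwidth(M):
    # largest distance of a nonzero entry from the diagonal
    coo = sparse.coo_matrix(M)
    return np.max(np.abs(coo.row - coo.col))

def reorder(M, p):
    # symmetric permutation of rows and columns
    return M[p, :][:, p]

n = 13
ex = np.ones(n)
lp1 = sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n)
e = sparse.eye(n)
A = sparse.csr_matrix(sparse.kron(lp1, e) + sparse.kron(e, lp1))
rcm = np.array(list(reverse_cuthill_mckee_ordering(nx.Graph(A))))
rnd = np.random.default_rng(0).permutation(A.shape[0])
print('natural ordering:', bandwidth(A))
print('random ordering: ', bandwidth(reorder(A, rnd)))
print('reverse CM:      ', bandwidth(reorder(A, rcm)))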
Florida sparse matrix collection
The Florida sparse matrix collection contains all sorts of matrices from different applications, and it makes finding test matrices easy as well. Let's have a look.
from IPython.display import HTML
HTML('<iframe src=http://yifanhu.net/GALLERY/GRAPHS/search.html width=700 height=450></iframe>')
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
Test some
Let us check a sparse matrix (and its LU factors).
fname = 'crystm02.mat'
!wget http://www.cise.ufl.edu/research/sparse/mat/Boeing/$fname

from scipy.io import loadmat
import scipy.sparse

q = loadmat(fname)
#print(q)
mat = q['Problem']['A'][0, 0]
T = scipy.sparse.linalg.splu(mat)  # Compute its LU

%matplotlib inline
import matplotlib.pyplot as plt
plt.spy(T.L, markersize=1)
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
Iterative solvers
The main disadvantage of factorization methods is their computational complexity. A more efficient solution of linear systems can often be obtained by iterative methods. This requires a high convergence rate of the iterative process and a low arithmetic cost per iteration. Modern iterative methods are mai...
from IPython.core.display import HTML

def css_styling():
    styles = open("./styles/custom.css", "r").read()
    return HTML(styles)

css_styling()
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
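The text of this cell is truncated; as a concrete illustration of the iterative approach, here is a minimal sketch (my addition) solving a 1D Laplacian system with the conjugate gradient method from scipy.sparse.linalg. Each iteration costs only one sparse matrix-vector product, i.e. O(nnz) work, with no fill-in at all:

import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg

n = 100
ex = np.ones(n)
# sign flipped relative to the cells above so that A is positive definite, as CG requires
A = sparse.spdiags(np.vstack((-ex, 2*ex, -ex)), [-1, 0, 1], n, n).tocsr()
b = np.ones(n)
x, info = sparse.linalg.cg(A, b)
print(info == 0, np.linalg.norm(A @ x - b))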
Test Frame Nodes
%%Table nodes
NODEID,X,Y,Z
A,0,0,5000
B,0,4000,5000
C,8000,4000,5000
D,8000,0,5000

@sl.extend(Frame2D)
class Frame2D:

    COLUMNS_nodes = ('NODEID','X','Y')

    def install_nodes(self):
        node_table = self.get_table('nodes')
        for ix,r in node_table.data.iterrows():
            if r.NODEID in...
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Supports
%%Table supports
NODEID,C0,C1,C2
A,FX,FY,MZ
D,FX,FY

def isnan(x):
    if x is None:
        return True
    try:
        return np.isnan(x)
    except TypeError:
        return False

@sl.extend(Frame2D)
class Frame2D:

    COLUMNS_supports = ('NODEID','C0','C1','C2')

    def install_supports(self):
        t...
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Members
%%Table members
MEMBERID,NODEJ,NODEK
AB,A,B
BC,B,C
DC,D,C

@sl.extend(Frame2D)
class Frame2D:

    COLUMNS_members = ('MEMBERID','NODEJ','NODEK')

    def install_members(self):
        table = self.get_table('members')
        for ix,m in table.data.iterrows():
            if m.MEMBERID in self.members:
                ...
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Releases
%%Table releases
MEMBERID,RELEASE
AB,MZK

@sl.extend(Frame2D)
class Frame2D:

    COLUMNS_releases = ('MEMBERID','RELEASE')

    def install_releases(self):
        table = self.get_table('releases',optional=True)
        for ix,r in table.data.iterrows():
            memb = self.get_member(r.MEMBERID)
            ...
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Properties
If the SST module is loadable, member properties may be specified by giving steel shape designations (such as 'W310x97') in the member properties data. If the module is not available, you may still give $A$ and $I_x$ directly (the properties are looked up only if these two are not provided).
try:
    from sst import SST
    __SST = SST()
    get_section = __SST.section
except ImportError:
    def get_section(dsg,fields):
        raise ValueError('Cannot lookup property SIZE because SST is not available. SIZE = {}'.format(dsg))
        ##return [1.] * len(fields.split(',')) # in case you want to do it that...
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Node Loads
%%Table node_loads
LOAD,NODEID,DIRN,F
Wind,B,FX,-200000.

@sl.extend(Frame2D)
class Frame2D:

    COLUMNS_node_loads = ('LOAD','NODEID','DIRN','F')

    def install_node_loads(self):
        table = self.get_table('node_loads')
        dirns = ['FX','FY','FZ']
        for ix,row in table.data.iterrows():
            ...
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Member Loads
%%Table member_loads
LOAD,MEMBERID,TYPE,W1,W2,A,B,C
Live,BC,UDL,-50,,,,
Live,BC,PL,-200000,,5000

@sl.extend(Frame2D)
class Frame2D:

    COLUMNS_member_loads = ('LOAD','MEMBERID','TYPE','W1','W2','A','B','C')

    def install_member_loads(self):
        table = self.get_table('member_loads')
        for ix,row...
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Load Combinations
%%Table load_combinations
COMBO,LOAD,FACTOR
One,Live,1.5
One,Wind,1.75

@sl.extend(Frame2D)
class Frame2D:

    COLUMNS_load_combinations = ('COMBO','LOAD','FACTOR')

    def install_load_combinations(self):
        table = self.get_table('load_combinations')
        for ix,row in table.data.iterrows():
            ...
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Load Iterators
@sl.extend(Frame2D)
class Frame2D:

    def iter_nodeloads(self,comboname):
        for o,l,f in self.loadcombinations.iterloads(comboname,self.nodeloads):
            yield o,l,f

    def iter_memberloads(self,comboname):
        for o,l,f in self.loadcombinations.iterloads(comboname,self.memberloads):
            ...
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Support Constraints
%%Table supports
NODEID,C0,C1,C2
A,FX,FY,MZ
D,FX,FY
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Accumulated Cell Data
##test: Table.CELLDATA
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
We call the simulator
from robots.simuladores import simulador
%matplotlib widget
ts, xs = simulador(puerto_zmq="5551", f=f, x0=[0, 0, 0, 0], dt=0.02)
Practicas/practica2/numerico.ipynb
robblack007/clase-dinamica-robot
mit
Hat potential
The following potential is often used in physics and other fields to describe symmetry breaking; it is known as the "hat potential": $$ V(x) = -a x^2 + b x^4 $$ Write a function hat(x,a,b) that returns the value of this function:
def hat(x,a,b):
    return -a*x**2 + b*x**4

assert hat(0.0, 1.0, 1.0) == 0.0
assert hat(1.0, 10.0, 1.0) == -9.0
assignments/assignment11/OptimizationEx01.ipynb
phungkh/phys202-2015-work
mit
Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
a = 5.0
b = 1.0
x = np.linspace(-3, 3, 1000)
plt.plot(x, hat(x,a,b))

assert True # leave this to grade the plot
assignments/assignment11/OptimizationEx01.ipynb
phungkh/phys202-2015-work
mit
Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$. Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima. Print the x values of the minima. Plot the function as a blue line. On the same axes, show the minima...
min1 = opt.minimize(hat, x0=-1.7, args=(a,b))
min2 = opt.minimize(hat, x0=1.7, args=(a,b))
print(min1, min2)
print('Our minima are x=-1.58113883 and x=1.58113882')

plt.figure(figsize=(7,5))
plt.plot(x, hat(x,a,b), color='b', label='hat potential')
plt.box(False)
plt.title('Hat Potential')
plt.scatter(x=-1.58113883, y=h...
assignments/assignment11/OptimizationEx01.ipynb
phungkh/phys202-2015-work
mit
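As a quick analytic cross-check (my addition, not part of the assignment): setting the derivative of the potential to zero gives

$$ V'(x) = -2 a x + 4 b x^3 = 0 \quad\Rightarrow\quad x = \pm\sqrt{\frac{a}{2b}} = \pm\sqrt{2.5} \approx \pm 1.5811, $$

which matches the minima found numerically by scipy.optimize.minimize above.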
First, we just compute the Python EVZ and display a sample. The "scores()" method returns a list of centrality scores in order of the vertices. Thus, what you see below are the (normalized, see the respective argument) centrality scores for G.nodes()[0], G.nodes()[1], ...
evzSciPy = networkit.centrality.SciPyEVZ(G, normalized=True)
evzSciPy.run()
evzSciPy.scores()[:10]
Doc/uploads/docs/SpectralCentrality.ipynb
fmaschler/networkit
mit
We now take a look at the 10 most central vertices according to the four heuristics. Here, the centrality algorithms offer the ranking() method that returns a list of (vertex, centrality) ordered by centrality.
evzSciPy.ranking()[:10]
Doc/uploads/docs/SpectralCentrality.ipynb
fmaschler/networkit
mit
Compute the EVZ using the C++ backend and also display the 10 most important vertices, just as above. This should hopefully look similar... Please note: The normalization argument may not be passed as a named argument to the C++-backed centrality measures. This is due to some limitation in the C++ wrapping code.
evz = networkit.centrality.EigenvectorCentrality(G, True)
evz.run()
evz.ranking()[:10]
Doc/uploads/docs/SpectralCentrality.ipynb
fmaschler/networkit
mit
Now, let's take a look at the PageRank. First, compute the PageRank using the C++ backend and display the 10 most important vertices. The second argument to the algorithm is the damping factor, i.e. the probability that the random walk continues along an edge rather than teleporting to some other vertex.
pageRank = networkit.centrality.PageRank(G, 0.95, True)
pageRank.run()
pageRank.ranking()[:10]
Doc/uploads/docs/SpectralCentrality.ipynb
fmaschler/networkit
mit
Same in Python...
SciPyPageRank = networkit.centrality.SciPyPageRank(G, 0.95, normalized=True)
SciPyPageRank.run()
SciPyPageRank.ranking()[:10]
Doc/uploads/docs/SpectralCentrality.ipynb
fmaschler/networkit
mit
If everything went well, these should look similar, too. Finally, we take a look at the relative differences between the computed centralities for the vertices:
differences = [(max(x[0], x[1]) / min(x[0], x[1])) - 1 for x in zip(evz.scores(), evzSciPy.scores())]
print("Average relative difference: {}".format(sum(differences) / len(differences)))
print("Maximum relative difference: {}".format(max(differences)))
Doc/uploads/docs/SpectralCentrality.ipynb
fmaschler/networkit
mit
Loading Data For this example notebook, we'll be using the elevators UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. For this notebook, we'll simply be splitting the data using the first 40% of the data as training and the l...
import urllib.request
import os
from scipy.io import loadmat
from math import floor

# this is for running the notebook in our testing framework
smoke_test = ('CI' in os.environ)

if not smoke_test and not os.path.isfile('../elevators.mat'):
    print('Downloading \'elevators\' UCI dataset...')
    urllib.request.url...
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
LOVE can be used with any type of GP model, including exact GPs, multitask models and scalable approximations. Here we demonstrate LOVE in conjunction with KISS-GP, which has the amazing property of producing constant time variances. The KISS-GP + LOVE GP Model We now define the GP model. For more details on the use of...
class LargeFeatureExtractor(torch.nn.Sequential):
    def __init__(self, input_dim):
        super(LargeFeatureExtractor, self).__init__()
        self.add_module('linear1', torch.nn.Linear(input_dim, 1000))
        self.add_module('relu1', torch.nn.ReLU())
        ...
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Training the model The cell below trains the GP model, finding optimal hyperparameters using Type-II MLE. We run 20 iterations of training using the Adam optimizer built in to PyTorch. With a decent GPU, this should only take a few seconds.
training_iterations = 1 if smoke_test else 20

# Find optimal model hyperparameters
model.train()
likelihood.train()

# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)  # Includes GaussianLikelihood parameters

# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarg...
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Computing predictive variances (KISS-GP or Exact GPs)
Using standard computations (without LOVE)
The next cell gets the predictive covariance for the test set (and also technically gets the predictive mean, stored in preds.mean) using the standard SKI testing code, with no acceleration or precomputation. Note: Full pr...
import time

# Set into eval mode
model.eval()
likelihood.eval()

with torch.no_grad():
    start_time = time.time()
    preds = likelihood(model(test_x))
    exact_covar = preds.covariance_matrix
    exact_covar_time = time.time() - start_time

print(f"Time to compute exact mean + covariances: {exact_covar_time:.2f}s")
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Using LOVE Next we compute predictive covariances (and the predictive means) for LOVE, but starting from scratch. That is, we don't yet have access to the precomputed cache discussed in the paper. This should still be faster than the full covariance computation code above. To use LOVE, use the context manager with gpyt...
# Clear the cache from the previous computations
model.train()
likelihood.train()

# Set into eval mode
model.eval()
likelihood.eval()

with torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.settings.max_root_decomposition_size(100):
    start_time = time.time()
    preds = model(test_x)
    fast_time_no_cac...
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
The above cell additionally computed the caches required to get fast predictions. From this point onwards, unless we put the model back in training mode, predictions should be extremely fast. The cell below re-runs the above code, but takes full advantage of both the mean cache and the LOVE cache for variances.
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    start_time = time.time()
    preds = likelihood(model(test_x))
    fast_covar = preds.covariance_matrix
    fast_time_with_cache = time.time() - start_time

print('Time to compute mean + covariances (no cache) {:.2f}s'.format(fast_time_no_cache))
print('Time ...
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Compute Error between Exact and Fast Variances Finally, we compute the mean absolute error between the fast variances computed by LOVE (stored in fast_covar), and the exact variances computed previously. Note that these tests were run with a root decomposition of rank 10, which is about the minimum you would realistic...
mae = ((exact_covar - fast_covar).abs() / exact_covar.abs()).mean()
print(f"MAE between exact covar matrix and fast covar matrix: {mae:.6f}")
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Computing posterior samples (KISS-GP only)
With KISS-GP models, LOVE can also be used to draw fast posterior samples. (The same does not apply to exact GP models.)
Drawing samples the standard way (without LOVE)
We now draw samples from the posterior distribution. Without LOVE, we accomplish this by performing Cholesky ...
import time

num_samples = 20 if smoke_test else 20000

# Set into eval mode
model.eval()
likelihood.eval()

with torch.no_grad():
    start_time = time.time()
    exact_samples = model(test_x).rsample(torch.Size([num_samples]))
    exact_sample_time = time.time() - start_time

print(f"Time to compute exact samples...
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Using LOVE Next we compute posterior samples (and the predictive means) using LOVE. This requires the additional context manager with gpytorch.settings.fast_pred_samples():. Note that we also need the with gpytorch.settings.fast_pred_var(): flag turned on. Both context managers respond to the gpytorch.settings.max_root...
# Clear the cache from the previous computations
model.train()
likelihood.train()

# Set into eval mode
model.eval()
likelihood.eval()

with torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.settings.max_root_decomposition_size(200):
    # NEW FLAG FOR SAMPLING
    with gpytorch.settings.fast_pred_samples():
        ...
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Compute the empirical covariance matrices Let's see how well LOVE samples and exact samples recover the true covariance matrix.
# Compute exact posterior covar
with torch.no_grad():
    start_time = time.time()
    posterior = model(test_x)
    mean, covar = posterior.mean, posterior.covariance_matrix

exact_empirical_covar = ((exact_samples - mean).t() @ (exact_samples - mean)) / num_samples
love_empirical_covar = ((love_samples - mean).t() @ ...
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Package up a log-posterior function.
def lnPost(params, x, y):
    # This is written for clarity rather than numerical efficiency. Feel free to tweak it.
    a = params[0]
    b = params[1]
    lnp = 0.0
    # Using informative priors to achieve faster convergence is cheating in this exercise!
    # But this is where you would add them.
    lnp += -0.5*np...
notes/InferenceSandbox.ipynb
seniosh/StatisticalMethods
gpl-2.0
Convenience functions encoding the exact posterior:
class ExactPosterior:
    def __init__(self, x, y, a0, b0):
        X = np.matrix(np.vstack([np.ones(len(x)), x]).T)
        Y = np.matrix(y).T
        self.invcov = X.T * X
        self.covariance = np.linalg.inv(self.invcov)
        self.mean = self.covariance * X.T * Y
        self.a_array = np.arange(0.0, 6.0, 0.02...
notes/InferenceSandbox.ipynb
seniosh/StatisticalMethods
gpl-2.0
Demo some plots of the exact posterior distribution
plt.plot(exact.a_array, exact.P_of_a);
plt.plot(exact.b_array, exact.P_of_b);
plt.contour(exact.a_array, exact.b_array, exact.P_of_ab, colors='blue', levels=exact.contourLevels);
plt.plot(a, b, 'o', color='red');
notes/InferenceSandbox.ipynb
seniosh/StatisticalMethods
gpl-2.0
Ok, you're almost ready to go! A decidedly minimal stub of a Metropolis loop appears below; of course, you don't need to stick exactly with this layout. Once again, after running a chain, be sure to:
* visually inspect traces of each parameter to see whether they appear converged
* compare the marginal and joint posterior...
Nsamples = 501**2
samples = np.zeros((Nsamples, 2))
# put any more global definitions here

def proposal(a_try, b_try, temperature):
    a = a_try + temperature*np.random.randn(1)
    b = b_try + temperature*np.random.randn(1)
    return a, b

def we_accept_this_proposal(lnp_try, lnp_current):
    return np.exp(lnp...
notes/InferenceSandbox.ipynb
seniosh/StatisticalMethods
gpl-2.0
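The stub above is cut off; a minimal Metropolis loop consistent with the helper names it defines might look like the sketch below (my addition). Here lnPost is the log-posterior packaged earlier, x and y are the data arrays from previous cells, and temperature is a hypothetical step-size constant:

temperature = 0.1                   # hypothetical proposal step size
current = np.array([2.0, 1.0])      # hypothetical starting point (a, b)
lnp_current = lnPost(current, x, y)

for i in range(Nsamples):
    a_try, b_try = proposal(current[0], current[1], temperature)
    trial = np.array([float(a_try), float(b_try)])
    lnp_try = lnPost(trial, x, y)
    if we_accept_this_proposal(lnp_try, lnp_current):
        current, lnp_current = trial, lnp_try
    samples[i, :] = current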
Guess Who!
The game of Guess Who consists of guessing the character your opponent has selected before he or she guesses yours. The dynamics of the game are:
* Each player chooses a character at random
* Taking turns, each player asks yes-or-no questions and tries to guess the opponent's...
Image('data/guess_who_board.jpg', width=700)
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
Loading the data
To load the data we will use pandas' read_csv function. pandas has an extensive list of functions for loading data; more information in the API documentation.
df = pd.read_csv('data/guess_who.csv', index_col='observacion')
df.head()
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
How many characters do we have with each characteristic?
# Separate the variable types
categorical_var = 'color de cabello'
binary_vars = list(set(df.keys()) - set([categorical_var, 'NOMBRE']))

# For the boolean variables we compute the sum
df[binary_vars].sum()

# For the categorical variables, we look at the frequency of each category
df[categorical_var].value_co...
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
Encoding categorical variables
from sklearn.feature_extraction import DictVectorizer

vectorizer = DictVectorizer(sparse=False)
ab = vectorizer.fit_transform(df.to_dict('records'))
dft = pd.DataFrame(ab, columns=vectorizer.get_feature_names())
dft.head()
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
Training a decision tree
from sklearn.tree import DecisionTreeClassifier

classifier = DecisionTreeClassifier(criterion='entropy', splitter='random', random_state=42)
classifier.fit(dft, labels)
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
Obtaining the weight of each feature
classifier.feature_importances_
feat = pd.DataFrame(index=dft.keys(), data=classifier.feature_importances_, columns=['score'])
feat = feat.sort_values(by='score', ascending=False)
feat.plot(kind='bar', rot=85, figsize=(10,4))
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
Bonus: visualizing the tree (requires graphviz: conda install graphviz)
from sklearn.tree import export_graphviz

dotfile = open('guess_who_tree.dot', 'w')
export_graphviz(
    classifier,
    out_file=dotfile,
    filled=True,
    feature_names=dft.columns,
    class_names=list(labels),
    rotate=True,
    max_depth=1,
    rounded=True,
)
dotfile.close()

!dot -Tpng guess_who_t...
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
Warm-up exercises Exercise: Suppose that goal scoring in hockey is well modeled by a Poisson process, and that the long-run goal-scoring rate of the Boston Bruins against the Vancouver Canucks is 2.9 goals per game. In their next game, what is the probability that the Bruins score exactly 3 goals? Plot the PMF of k, ...
# Solution goes here # Solution goes here # Solution goes here
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
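The solution cells are intentionally left blank in this checkpoint; one hedged sketch for the first question, using scipy.stats rather than the book's thinkbayes2 helpers, would be:

import numpy as np
from scipy.stats import poisson

lam = 2.9
print('P(exactly 3 goals) =', poisson.pmf(3, lam))   # about 0.224
# PMF over a plausible range of goal counts, e.g. for the requested plot:
ks = np.arange(0, 11)
print(np.round(poisson.pmf(ks, lam), 3))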
Exercise: Assuming again that the goal scoring rate is 2.9, what is the probability of scoring a total of 9 goals in three games? Answer this question two ways: Compute the distribution of goals scored in one game and then add it to itself twice to find the distribution of goals scored in 3 games. Use the Poisson...
# Solution goes here # Solution goes here
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Exercise: Suppose that the long-run goal-scoring rate of the Canucks against the Bruins is 2.6 goals per game. Plot the distribution of t, the time until the Canucks score their first goal. In their next game, what is the probability that the Canucks score during the first period (that is, the first third of the game)...
# Solution goes here # Solution goes here # Solution goes here
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Exercise: Assuming again that the goal scoring rate is 2.8, what is the probability that the Canucks get shut out (that is, don't score for an entire game)? Answer this question two ways, using the CDF of the exponential distribution and the PMF of the Poisson distribution.
# Solution goes here # Solution goes here
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
The Boston Bruins problem The Hockey suite contains hypotheses about the goal scoring rate for one team against the other. The prior is Gaussian, with mean and variance based on previous games in the league. The Likelihood function takes as data the number of goals scored in a game.
from thinkbayes2 import MakeNormalPmf
from thinkbayes2 import EvalPoissonPmf

class Hockey(Suite):
    """Represents hypotheses about the scoring rate for a team."""

    def __init__(self, label=None):
        """Initializes the Hockey object.

        label: string
        """
        mu = 2.8
        sigma = 0.3
        ...
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Now we can initialize a suite for each team:
suite1 = Hockey('bruins')
suite2 = Hockey('canucks')
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Here's what the priors look like:
thinkplot.PrePlot(num=2)
thinkplot.Pdf(suite1)
thinkplot.Pdf(suite2)
thinkplot.Config(xlabel='Goals per game', ylabel='Probability')
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
And we can update each suite with the scores from the first 4 games.
suite1.UpdateSet([0, 2, 8, 4])
suite2.UpdateSet([1, 3, 1, 0])

thinkplot.PrePlot(num=2)
thinkplot.Pdf(suite1)
thinkplot.Pdf(suite2)
thinkplot.Config(xlabel='Goals per game', ylabel='Probability')

suite1.Mean(), suite2.Mean()
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
To predict the number of goals scored in the next game we can compute, for each hypothetical value of $\lambda$, a Poisson distribution of goals scored, then make a weighted mixture of Poissons:
from thinkbayes2 import MakeMixture
from thinkbayes2 import MakePoissonPmf

def MakeGoalPmf(suite, high=10):
    """Makes the distribution of goals scored, given distribution of lam.

    suite: distribution of goal-scoring rate
    high: upper bound

    returns: Pmf of goals per game
    """
    metapmf = Pmf()
    ...
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Here's what the results look like.
goal_dist1 = MakeGoalPmf(suite1)
goal_dist2 = MakeGoalPmf(suite2)

thinkplot.PrePlot(num=2)
thinkplot.Pmf(goal_dist1)
thinkplot.Pmf(goal_dist2)
thinkplot.Config(xlabel='Goals', ylabel='Probability', xlim=[-0.7, 11.5])

goal_dist1.Mean(), goal_dist2.Mean()
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Now we can compute the probability that the Bruins win, lose, or tie in regulation time.
diff = goal_dist1 - goal_dist2
p_win = diff.ProbGreater(0)
p_loss = diff.ProbLess(0)
p_tie = diff.Prob(0)

print('Prob win, loss, tie:', p_win, p_loss, p_tie)
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
If the game goes into overtime, we have to compute the distribution of t, the time until the first goal, for each team. For each hypothetical value of $\lambda$, the distribution of t is exponential, so the predictive distribution is a mixture of exponentials.
from thinkbayes2 import MakeExponentialPmf

def MakeGoalTimePmf(suite):
    """Makes the distribution of time til first goal.

    suite: distribution of goal-scoring rate

    returns: Pmf of goals per game
    """
    metapmf = Pmf()
    for lam, prob in suite.Items():
        pmf = MakeExponentialPmf(lam, high=2.5,...
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Here's what the predictive distributions for t look like.
time_dist1 = MakeGoalTimePmf(suite1)
time_dist2 = MakeGoalTimePmf(suite2)

thinkplot.PrePlot(num=2)
thinkplot.Pmf(time_dist1)
thinkplot.Pmf(time_dist2)
thinkplot.Config(xlabel='Games until goal', ylabel='Probability')

time_dist1.Mean(), time_dist2.Mean()
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
In overtime the first team to score wins, so the probability of winning is the probability of generating a smaller value of t:
p_win_in_overtime = time_dist1.ProbLess(time_dist2)
p_adjust = time_dist1.ProbEqual(time_dist2)
p_win_in_overtime += p_adjust / 2
print('p_win_in_overtime', p_win_in_overtime)
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Finally, we can compute the overall chance that the Bruins win, either in regulation or overtime.
p_win_overall = p_win + p_tie * p_win_in_overtime
print('p_win_overall', p_win_overall)
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Exercises Exercise: To make the model of overtime more correct, we could update both suites with 0 goals in one game, before computing the predictive distribution of t. Make this change and see what effect it has on the results.
# Solution goes here
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Exercise: In the final match of the 2014 FIFA World Cup, Germany defeated Argentina 1-0. What is the probability that Germany had the better team? What is the probability that Germany would win a rematch? For a prior distribution on the goal-scoring rate for each team, use a gamma distribution with parameter 1.3.
from thinkbayes2 import MakeGammaPmf

xs = np.linspace(0, 8, 101)
pmf = MakeGammaPmf(xs, 1.3)
thinkplot.Pdf(pmf)
thinkplot.Config(xlabel='Goals per game')
pmf.Mean()
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Constrained problem
First we set up an objective function (the Townsend function) and a constraint function. We further assume both functions are black boxes. We also define the optimization domain (2 continuous parameters).
# Objective & constraint
def townsend(X):
    return -(np.cos((X[:,0]-0.1)*X[:,1])**2 + X[:,0] * np.sin(3*X[:,0]+X[:,1]))[:,None]

def constraint(X):
    return -(-np.cos(1.5*X[:,0]+np.pi)*np.cos(1.5*X[:,1])+np.sin(1.5*X[:,0]+np.pi)*np.sin(1.5*X[:,1]))[:,None]

# Setup input domain
domain = gpflowopt.domain.ContinuousP...
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
Modeling and joint acquisition function We proceed by assigning the objective and constraint function a GP prior. Both functions are evaluated on a space-filling set of points (here, a Latin Hypercube design). Two GPR models are created. The EI is based on the model of the objective function (townsend), whereas PoF is ...
# Initial evaluations
design = gpflowopt.design.LatinHyperCube(11, domain)
X = design.generate()
Yo = townsend(X)
Yc = constraint(X)

# Models
objective_model = gpflow.gpr.GPR(X, Yo, gpflow.kernels.Matern52(2, ARD=True))
objective_model.likelihood.variance = 0.01
constraint_model = gpflow.gpr.GPR(np.copy(X), Yc, gpflow...
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
Initial belief We can now inspect our belief about the optimization problem by plotting the models, the EI, PoF and joint mappings. Both models clearly are not very accurate yet. More specifically, the constraint model does not correctly capture the feasibility yet.
def plot():
    Xeval = gpflowopt.design.FactorialDesign(101, domain).generate()
    Yevala,_ = joint.operands[0].models[0].predict_f(Xeval)
    Yevalb,_ = joint.operands[1].models[0].predict_f(Xeval)
    Yevalc = np.maximum(ei.evaluate(Xeval), 0)
    Yevald = pof.evaluate(Xeval)
    Yevale = np.maximum(joint.evaluate(...
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
Running Bayesian Optimizer Running the Bayesian optimization is the next step. For this, we must set up an appropriate strategy to optimize the joint acquisition function. Sometimes this can be a bit challenging as often large non-varying areas may occur. A typical strategy is to apply a Monte Carlo optimization step f...
# First setup the optimization strategy for the acquisition function
# Combining MC step followed by L-BFGS-B
acquisition_opt = gpflowopt.optim.StagedOptimizer([gpflowopt.optim.MCOptimizer(domain, 200),
                                                   gpflowopt.optim.SciPyOptimizer(domain)])

# Then run the Bayesian...
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
Results If we now plot the belief, we clearly see the constraint model has improved significantly. More specifically, its PoF mapping is an accurate representation of the true constraint function. By multiplying the EI by the PoF, the search is restricted to the feasible regions.
# Plotting belief again
print(constraint_model)
plot()
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
If we inspect the sampling distribution, we can see that the amount of samples in the infeasible regions is limited. The optimization has focussed on the feasible areas. In addition, it has been active mostly in two optimal regions.
# Plot function, overlayed by the constraint. Also plot the samples
axes = plotfx()
valid = joint.feasible_data_index()
axes.scatter(joint.data[0][valid,0], joint.data[0][valid,1], label='feasible data', c='w')
axes.scatter(joint.data[0][np.logical_not(valid),0], joint.data[0][np.logical_not(valid),1], label='data', c=...
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
Finally, the evolution of the best value over the number of iterations clearly shows a very good solution is already found after only a few evaluations.
f, axes = plt.subplots(1, 1, figsize=(7, 5))
f = joint.data[1][:,0]
f[joint.data[1][:,1] > 0] = np.inf
axes.plot(np.arange(0, joint.data[0].shape[0]), np.minimum.accumulate(f))
axes.set_ylabel('fmin')
axes.set_xlabel('Number of evaluated points');
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
The following data was generated using code that can be found on GitHub https://github.com/mtchem/Twitter-Politics/blob/master/data_wrangle/Data_Wrangle.ipynb
# load federal document data from pickle file
fed_reg_data = r'data/fed_reg_data.pickle'
fed_data = pd.read_pickle(fed_reg_data)

# load twitter data from pickle file
twitter_file_path = r'data/twitter_01_20_17_to_3-2-18.pickle'
twitter_data = pd.read_pickle(twitter_file_path)

len(fed_data)
EDA.ipynb
mtchem/Twitter-Politics
mit
In order to explore the twitter and executive document data I will look at the following:
* Determine the most used hashtags
* Determine who President Trump tweeted at (@) the most
* Create a word frequency plot for the most used words in the twitter data and the presidential documents
* Find words that both data sets have in c...
# imports
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
import itertools
from collections import Counter
import matplotlib.pyplot as plt
import seaborn as sns

plt.style.use('ggplot')
EDA.ipynb
mtchem/Twitter-Politics
mit
Plot the most used hashtags and @ tags
# find the most used hashtags
hashtag_freq = Counter(list(itertools.chain(*(twitter_data.hash_tags))))
hashtag_top20 = hashtag_freq.most_common(20)

# find the most used @ tags
at_tag_freq = Counter(list(itertools.chain(*(twitter_data['@_tags']))))
at_tags_top20 = at_tag_freq.most_common(20)

print(hashtag_top20)

# fre...
EDA.ipynb
mtchem/Twitter-Politics
mit
Top used words for the twitter data and the federal document data Define a list of words that have no meaning, such as 'a', 'the', and punctuation
# use nltk's list of stopwords
stop_words = set(stopwords.words('english'))

# add punctuation and a few frequent low-content words to the stopwords
stop_words.update(['.', ',', 'get', 'going', 'one', 'amp', 'like', '"', '...', "''", "'", "n't",
                   '?', '!', ':', ';', '#', '@', '(', ')', 'https', '``', "'s", 'rt'])
EDA.ipynb
mtchem/Twitter-Politics
mit
Make a list of hashtags and @ entities used in the twitter data
# combine the hashtags and @ tags, flatten the list of lists, keep the unique items
stop_twitter = set(list(itertools.chain(*(twitter_data.hash_tags + twitter_data['@_tags']))))
EDA.ipynb
mtchem/Twitter-Politics
mit
The federal document data also has some words that need to be removed. The words "Federal Registry" and the date appear at the top of every page, so they should be removed. Also, words like 'shall', 'order', and 'act' are used quite a bit but don't convey much meaning, so I'm going to remove those words as well.
stop_fed_docs = ['united', 'states', '1','2','3','4','5','6','7','8','9','10',
                 '11','12','13','14','15','16','17','18','19','20','21','22','23','24','25','26',
                 '27','28','29','30','31','2016','2015','2014','federal','shall', '4790',
                 'national', '2017', 'order','presi...
EDA.ipynb
mtchem/Twitter-Politics
mit
Create functions that remove the stop words for each of the datasets
def remove_from_fed_data(token_lst):
    # remove stopwords and one letter words
    filtered_lst = [word for word in token_lst if word.lower() not in stop_fed_docs
                    and len(word) > 1
                    and word.lower() not in stop_words]
    return filtered_lst

def remove_from_twitter_data(token_lst):
    # remove ...
EDA.ipynb
mtchem/Twitter-Politics
mit
Remove all of the stop words from the tokenized twitter and document data
# apply the remove_stopwords function to all of the tokenized twitter text
twitter_words = twitter_data.text_tokenized.apply(lambda x: remove_from_twitter_data(x))

# apply the remove_stopwords function to all of the tokenized document text
document_words = fed_data.token_text.apply(lambda x: remove_from_fed_data(x))

#...
EDA.ipynb
mtchem/Twitter-Politics
mit
Count how many times each word is used for both datasets
# create a dictionary using the Counter method, where the key is a word
# and the value is the number of times it was used
twitter_freq = Counter(all_twitter_words)
doc_freq = Counter(all_document_words)

# determine the top 30 words used in the twitter data
top_30_tweet = twitter_freq.most_common(30)
top_30_fed = doc_freq...
EDA.ipynb
mtchem/Twitter-Politics
mit
Plot the most used words for the twitter data and the federal document data
# frequency plot for the most used Federal Data
df = pd.DataFrame(top_30_fed, columns=['Federal Data', 'frequency'])
df.plot(kind='bar', x='Federal Data', legend=None, figsize=(15,5))
plt.ylabel('Frequency', fontsize=18)
plt.xlabel('Words', fontsize=18)
plt.title('Most Used Words that Occured in the Federal Data', fo...
EDA.ipynb
mtchem/Twitter-Politics
mit
Determine all of the words that are used in both datasets
# find the words that appear in both datasets
joint_words = list((set(all_document_words)).intersection(all_twitter_words))
EDA.ipynb
mtchem/Twitter-Politics
mit
Create a dictionary with the unique joint words as keys
# make array of zeros
values = np.zeros(len(joint_words))
# create dictionary
joint_words_dict = dict(zip(joint_words, values))
EDA.ipynb
mtchem/Twitter-Politics
mit
Create dictionaries for both datasets with document frequency for each joint word
# create a dictionary with a word as key, and a value = number of documents that contain the word for Twitter
twitter_document_freq = joint_words_dict.copy()
for word in joint_words:
    for lst in twitter_data.text_tokenized:
        if word in lst:
            twitter_document_freq[word] = twitter_document_freq[word] + 1
EDA.ipynb
mtchem/Twitter-Politics
mit
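The cell above is truncated before the federal-document counterpart, but the dataframe built next uses a fed_document_freq dict. Presumably it is computed the same way; a sketch mirroring the twitter loop:

# document-frequency count for the federal documents (my sketch, mirroring the loop above)
fed_document_freq = joint_words_dict.copy()
for word in joint_words:
    for lst in fed_data.token_text:
        if word in lst:
            fed_document_freq[word] = fed_document_freq[word] + 1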
Create dataframe with the word and the document percentage for each data set
df = pd.DataFrame([fed_document_freq, twitter_document_freq]).T
df.columns = ['Fed', 'Tweet']
df['% Fed'] = (df.Fed/len(df.Fed))*100
df['% Tweet'] = (df.Tweet/len(df.Tweet))*100
top_joint_fed = df[['% Fed','% Tweet']].sort_values(by='% Fed', ascending=False)[0:50]
top_joint_tweet = df[['% Fed','% Tweet']].sort_value...
EDA.ipynb
mtchem/Twitter-Politics
mit
Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* do...
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
udacity-dl/CNN/cnn_bp-learning-curves.ipynb
arasdar/DL
unlicense
Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # image data shape = [t, i, j, k]: t = num images per batch, then height, width, channels
    # (completion assumes 8-bit pixel values 0-255, the CIFAR-10 convention,
    # so dividing by 255 maps them into [0, 1])
    return np.array(x) / 255.0
udacity-dl/CNN/cnn_bp-learning-curves.ipynb
arasdar/DL
unlicense
One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 t...
# import helper ## I did this because sklearn.preprocessing was defined in there
from sklearn import preprocessing  ## from sklearn lib import preprocessing lib/sublib/functionality/class

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample labels
    : return: Numpy array of one-hot encoded labels
    """
    # completion assumes the 10 CIFAR-10 classes, labels 0-9
    lb = preprocessing.LabelBinarizer()
    lb.fit(list(range(10)))
    return lb.transform(x)
udacity-dl/CNN/cnn_bp-learning-curves.ipynb
arasdar/DL
unlicense
Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
""" DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
udacity-dl/CNN/cnn_bp-learning-curves.ipynb
arasdar/DL
unlicense
Implementation of CNN with backprop in NumPy
def get_im2col_indices(x_shape, field_height, field_width, padding=1, stride=1):
    # First figure out what the size of the output should be
    N, C, H, W = x_shape
    assert (H + 2 * padding - field_height) % stride == 0
    assert (W + 2 * padding - field_width) % stride == 0
    out_height = int((H + 2 * padding...
udacity-dl/CNN/cnn_bp-learning-curves.ipynb
arasdar/DL
unlicense
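To see what get_im2col_indices is building toward, here is a tiny self-contained sketch (my addition) of the im2col idea: each receptive field becomes a column, so the convolution collapses into a single matrix multiply.

import numpy as np

x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)   # N, C, H, W
k = 3                                                # 3x3 field, stride 1, no padding
cols = np.stack([x[0, 0, i:i+k, j:j+k].ravel()
                 for i in range(x.shape[2] - k + 1)
                 for j in range(x.shape[3] - k + 1)], axis=1)   # (k*k, out_h*out_w)
w = np.ones((1, k * k))                              # one flattened 3x3 filter
out = (w @ cols).reshape(2, 2)                       # convolution as a GEMM
print(out)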
This is where the CNN implementation in NumPy starts!
# Displaying an image using matplotlib
# importing the library/package
import matplotlib.pyplot as plot

# Using plot with imshow to show the image (N=5000, H=32, W=32, C=3)
plot.imshow(valid_features[0, :, :, :])

# # Training cycle
# for epoch in range(num_):
#     # Loop over all batches
#     n_batches ...
udacity-dl/CNN/cnn_bp-learning-curves.ipynb
arasdar/DL
unlicense
The Correlation Function The 2-point correlation function $\xi(\theta)$ is defined as "the probability of finding two galaxies separated by an angular distance $\theta$ with respect to that expected for a random distribution" (Peebles 1980), and is an excellent summary statistic for quantifying the clustering of gala...
# !pip install --upgrade TreeCorr
examples/SDSScatalog/CorrFunc.ipynb
hdesmond/StatisticalMethods
gpl-2.0
Random Catalogs First we'll need a random catalog. Let's make it the same size as the data one.
random = pd.DataFrame({'ra' : ramin + (ramax-ramin)*np.random.rand(Ngals),
                       'dec' : decmin + (decmax-decmin)*np.random.rand(Ngals)})
print(len(random), type(random))
examples/SDSScatalog/CorrFunc.ipynb
hdesmond/StatisticalMethods
gpl-2.0
Now let's plot both catalogs, and compare.
fig, ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)

random.plot(kind='scatter', x='ra', y='dec', ax=ax[0], title='Random')
ax[0].set_xlabel('RA / deg')
ax[0].set_ylabel('Dec. / deg')

data.plot(kind='scatter', x='ra', y='dec', ax=ax[1], title='Data')
ax[1].set_xlabel...
examples/SDSScatalog/CorrFunc.ipynb
hdesmond/StatisticalMethods
gpl-2.0
Estimating $\xi(\theta)$
import treecorr

random_cat = treecorr.Catalog(ra=random['ra'], dec=random['dec'], ra_units='deg', dec_units='deg')
data_cat = treecorr.Catalog(ra=data['ra'], dec=data['dec'], ra_units='deg', dec_units='deg')

# Set up some correlation function estimator objects:
sep_units = 'arcmin'
min_sep = 0.5
max_sep = 10.0
N = 7
bin_s...
examples/SDSScatalog/CorrFunc.ipynb
hdesmond/StatisticalMethods
gpl-2.0
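For reference (my addition): with data catalog $D$ and random catalog $R$, combining the DD, DR and RR pair counts in treecorr yields, to my knowledge, the Landy–Szalay estimator

$$ \hat{\xi}(\theta) = \frac{DD(\theta) - 2\,DR(\theta) + RR(\theta)}{RR(\theta)}, $$

which corrects the raw pair counts for the survey geometry encoded in the random catalog.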
Custom training with tf.distribute.Strategy
# Import TensorFlow
import tensorflow as tf

# Helper libraries
import numpy as np
import os

print(tf.__version__)
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0