| markdown (string, length 0–37k) | code (string, length 1–33.3k) | path (string, length 8–215) | repo_name (string, length 6–77) | license (15 classes) |
|---|---|---|---|---|
Batch normalization: Forward
In the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation. | # Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = ... | assignment2/BatchNormalization.ipynb | billzhao1990/CS231n-Spring-2017 | mit |
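The forward pass itself is left for the assignment, but the computation it asks for can be sketched directly from the definition. A minimal training-time sketch (function and variable names here are illustrative, not the assignment's API):

```python
import numpy as np

def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization for an (N, D) activation matrix."""
    mu = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize to zero mean, unit variance
    out = gamma * x_hat + beta              # learned per-feature scale and shift
    cache = (x_hat, gamma, var, eps)        # saved for the backward pass
    return out, cache

np.random.seed(231)
x = np.random.randn(200, 50)
out, _ = batchnorm_forward_sketch(x, gamma=np.ones(50), beta=np.zeros(50))
print(out.mean(axis=0).round(6).max(), out.std(axis=0).round(3).max())
```

With gamma=1 and beta=0, the output means should be near 0 and the standard deviations near 1, which is exactly what the checking cell above verifies.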
Batch Normalization: backward
Now implement the backward pass for batch normalization in the function batchnorm_backward.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing bran... | # Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a,... | assignment2/BatchNormalization.ipynb | billzhao1990/CS231n-Spring-2017 | mit |
Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)
In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out ... | np.random.seed(231)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = ti... | assignment2/BatchNormalization.ipynb | billzhao1990/CS231n-Spring-2017 | mit |
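For reference, the "work it out on paper" variant that the extra-credit question alludes to collapses the whole backward pass into a few lines. This is the commonly derived closed-form gradient, not the assignment's solution file; names are illustrative:

```python
import numpy as np

def bn_forward(x, gamma, beta, eps=1e-5):
    mu, var = x.mean(axis=0), x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta, (x_hat, gamma, var, eps)

def bn_backward_alt(dout, cache):
    # Closed-form gradient derived on paper, instead of backpropagating
    # node by node through the computation graph.
    x_hat, gamma, var, eps = cache
    N = dout.shape[0]
    dxhat = dout * gamma
    dx = (N * dxhat - dxhat.sum(axis=0) - x_hat * (dxhat * x_hat).sum(axis=0)) \
         / (N * np.sqrt(var + eps))
    return dx, (dout * x_hat).sum(axis=0), dout.sum(axis=0)

# Sanity check against a centered-difference numerical gradient
np.random.seed(231)
x = 5 * np.random.randn(4, 5) + 12
gamma, beta = np.random.randn(5), np.random.randn(5)
dout = np.random.randn(4, 5)
_, cache = bn_forward(x, gamma, beta)
dx, dgamma, dbeta = bn_backward_alt(dout, cache)

h = 1e-5
dx_num = np.zeros_like(x)
for i in range(x.size):
    idx = np.unravel_index(i, x.shape)
    xp, xm = x.copy(), x.copy()
    xp[idx] += h
    xm[idx] -= h
    dx_num[idx] = (np.sum(bn_forward(xp, gamma, beta)[0] * dout) -
                   np.sum(bn_forward(xm, gamma, beta)[0] * dout)) / (2 * h)
print(np.max(np.abs(dx - dx_num)))  # should be at numerical-precision level
```

Because it avoids materializing every intermediate of the computation graph, this form is typically noticeably faster than the node-by-node backward pass the timing cell compares against.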
Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.
Concretely, when the flag use_batchnorm is True in the constructor, you sh... | np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float6... | assignment2/BatchNormalization.ipynb | billzhao1990/CS231n-Spring-2017 | mit |
Batchnorm for deep networks
Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization. | np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnec... | assignment2/BatchNormalization.ipynb | billzhao1990/CS231n-Spring-2017 | mit |
Run the following to visualize the results from the two networks trained above. You should find that using batch normalization helps the network converge much faster. | plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_h... | assignment2/BatchNormalization.ipynb | billzhao1990/CS231n-Spring-2017 | mit |
Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training ... | np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_sca... | assignment2/BatchNormalization.ipynb | billzhao1990/CS231n-Spring-2017 | mit |
Import required libraries | %matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import widgets
from ipywidgets import interact, interactive, fixed
from IPython.display import display,HTML,clear_output
HTML('''<script>code_show=true;function code_toggle() {if (code_show){$('div.input').hide();} else {$('div.i... | Examples/immune/Analyse_pMHC.ipynb | phievo/phievo | lgpl-3.0 |
Select the Project | notebook.select_project.display() | Examples/immune/Analyse_pMHC.ipynb | phievo/phievo | lgpl-3.0 |
Select Seed | notebook.select_seed.display() | Examples/immune/Analyse_pMHC.ipynb | phievo/phievo | lgpl-3.0 |
Plot observable
For our immune simulations, the fitness is the mutual information between output concentrations (taken as a probability distribution) and binding time. An ideal fitness is $-1$. | notebook.plot_evolution_observable.display() | Examples/immune/Analyse_pMHC.ipynb | phievo/phievo | lgpl-3.0 |
Select Generation | notebook.select_generation.display() | Examples/immune/Analyse_pMHC.ipynb | phievo/phievo | lgpl-3.0 |
Plot Layout
The network layout for the immune case accounts for the newly defined interactions. $0$ represents the ligand, $1$ the receptor. They interact to form a complex, which can be phosphorylated/dephosphorylated (black arrows, indexed with the corresponding kinase or phosphatase). All other species are either kinases o... | notebook.plot_layout_immune.display() | Examples/immune/Analyse_pMHC.ipynb | phievo/phievo | lgpl-3.0 |
Run Dynamics | notebook.run_dynamics_pMHC.display() | Examples/immune/Analyse_pMHC.ipynb | phievo/phievo | lgpl-3.0 |
Plot Response function
The response function for Immune displays the concentration of all species at the end of simulation as a function of the number of ligands presented. The output is the solid line. Left column is for binding time $\tau=3s$, right column for binding time $\tau=10s$. The ideal case such as ``adaptiv... | notebook.plot_pMHC.display() | Examples/immune/Analyse_pMHC.ipynb | phievo/phievo | lgpl-3.0 |
Exam Instructions
: Please insert Name and Email address in the first cell of this notebook
: Please acknowledge receipt of exam by sending a quick email reply to the instructor
: Review the submission form first to scope it out (it will take 5-10 minutes to input your
answers and other information into this fo... | %%writefile kltext.txt
1.Data Science is an interdisciplinary field about processes and systems to extract knowledge or insights from large volumes of data in various forms (data in various forms, data in various forms, data in various forms), either structured or unstructured,[1][2] which is a continuation of some of ... | exams/MIDS-MidTerm.ipynb | JasonSanchez/w261 | mit |
MRJob class for calculating pairwise similarity using K-L Divergence as the similarity measure
Job 1: create inverted index (assume just two objects)
Job 2: calculate the similarity of each pair of objects | import numpy as np
np.log(3)
!cat kltext.txt
%%writefile kldivergence.py
# coding: utf-8
from __future__ import division
from mrjob.job import MRJob
from mrjob.step import MRStep
import re
import numpy as np
class kldivergence(MRJob):
# process each string character by character
# the relative frequency of ... | exams/MIDS-MidTerm.ipynb | JasonSanchez/w261 | mit |
Questions:
MT7. Which number below is the closest to the result you get for KLD(Line1||line2)?
(a) 0.7
(b) 0.5
(c) 0.2
(d) 0.1
D
MT8. Which of the following letters are missing from these character vectors?
(a) p and t
(b) k and q
(c) j and q
(d) j and f | words = """
1.Data Science is an interdisciplinary field about processes and systems to extract knowledge or insights from large volumes of data in various forms (data in various forms, data in various forms, data in various forms), either structured or unstructured,[1][2] which is a continuation of some of the data an... | exams/MIDS-MidTerm.ipynb | JasonSanchez/w261 | mit |
C | %%writefile kldivergence_smooth.py
from __future__ import division
from mrjob.job import MRJob
import re
import numpy as np
class kldivergence_smooth(MRJob):
# process each string character by character
# the relative frequency of each character emitting Pr(character|str)
# for input record 1.abcbe
... | exams/MIDS-MidTerm.ipynb | JasonSanchez/w261 | mit |
MT9. The KL divergence on multinomials is defined only when they have nonzero entries.
For zero entries, we have to smooth distributions. Suppose we smooth in this way:
$(n_i + 1)/(n + 24)$
where $n_i$ is the count for letter $i$ and $n$ is the total count of all letters.
After smoothing, which number below is the closest t... | %%writefile spam.txt
0002.2001-05-25.SA_and_HP 0 0 good
0002.2001-05-25.SA_and_HP 0 0 very good
0002.2001-05-25.SA_and_HP 1 0 bad
0002.2001-05-25.SA_and_HP 1 0 very bad
0002.2001-05-25.SA_and_HP 1 0 very bad, very BAD
%%writefile spam_test.txt
0002.2001-05-25.SA_and_HP 1 0 good? bad! very Bad!
%%writefile NaiveBayes... | exams/MIDS-MidTerm.ipynb | JasonSanchez/w261 | mit |
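The smoothing scheme in MT9 above, $(n_i + 1)/(n + 24)$, is easy to sanity-check outside of MRJob. A minimal sketch, assuming a 24-letter alphabet (j and q removed, matching MT8); function names are illustrative:

```python
import numpy as np
from collections import Counter

# 24-letter alphabet: a-z minus j and q (cf. MT8)
ALPHABET = "abcdefghiklmnoprstuvwxyz"

def smoothed_dist(text, alphabet=ALPHABET):
    """Character distribution with (n_i + 1) / (n + |alphabet|) smoothing."""
    counts = Counter(c for c in text.lower() if c in alphabet)
    n = sum(counts.values())
    return np.array([(counts[c] + 1) / (n + len(alphabet)) for c in alphabet])

def kld(p, q):
    """KL(p || q); well-defined here because smoothing keeps every entry positive."""
    return float(np.sum(p * np.log(p / q)))

p = smoothed_dist("the smoothed distributions have no zero entries")
q = smoothed_dist("so the divergence is always finite")
print(round(kld(p, q), 4), kld(p, p))
```

Since every smoothed probability is strictly positive, the divergence is finite for any pair of strings, and KL(p || p) is exactly zero.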
QUESTION
Having learned the Naive Bayes text classification model for this problem using the training data, and having classified the test data (d6), please indicate which of the following is true:
Statements
* (I) P(very|ham) = 0.33
* (II) P(good|ham) = 0.50
* (III) Posterior Probability P(ham| d6) is approximately 24%
* (IV) C... | def inverse_vector_length(x1, x2):
norm = (x1**2 + x2**2)**.5
return 1.0/norm
inverse_vector_length(1, 5)
0 --> .2
%matplotlib inline
import numpy as np
import pylab
import pandas as pd
data = pd.read_csv("Kmeandata.csv", header=None)
pylab.plot(data[0], data[1], 'o', linewidth=0, alpha=.5);
%%writefile ... | exams/MIDS-MidTerm.ipynb | JasonSanchez/w261 | mit |
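The likelihood statements in the QUESTION above can be checked by hand from spam.txt. A small unsmoothed maximum-likelihood sketch (treating the second column as the spam label, lowercasing, and stripping punctuation by hand; names are illustrative):

```python
from collections import Counter

# Tokens from the ham (label 0) and spam (label 1) documents in spam.txt,
# lowercased with punctuation stripped.
ham_words = "good very good".split()
spam_words = "bad very bad very bad very bad".split()

def word_prob(word, words):
    """Unsmoothed class-conditional MLE: count(word) / total words in class."""
    return Counter(words)[word] / len(words)

print(round(word_prob("very", ham_words), 2))   # P(very|ham) = 1/3
print(round(word_prob("good", ham_words), 2))   # P(good|ham) = 2/3
```

Under this tally P(very|ham) ≈ 0.33 while P(good|ham) ≈ 0.67; the posterior for d6 additionally needs the class priors and the remaining word likelihoods.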
tokenize using nltk.tokenize.treebank.TreebankWordTokenizer | from nltk.tokenize.treebank import TreebankWordTokenizer
tokenizer = TreebankWordTokenizer()
tokenizer.tokenize("""that its money would be better spent "in areas such as research" and development.""")
import re
ENDS_WITH_COMMA = re.compile('(.*),$')
ENDS_WITH_PUNCTUATION = re.compile(r'(.*)(,|\.|!|:|;)$')  # escape the dot so it matches a literal period
foo = "Cumm... | python/rstdt-batch-tokenization.ipynb | arne-cl/alt-mulig | gpl-3.0 |
The Treebank tokenizer uses regular expressions to tokenize text as in Penn Treebank.
This is the method that is invoked by word_tokenize().
It assumes that the text has already been segmented into sentences, e.g. using sent_tokenize(). | from nltk.tokenize import sent_tokenize
sents = sent_tokenize("a tree. You are a ball.")
tokenized_sents = TOKENIZER.tokenize_sents(sents)
u' '.join(tok for sent in tokenized_sents for tok in sent) | python/rstdt-batch-tokenization.ipynb | arne-cl/alt-mulig | gpl-3.0 |
Edge ML with TensorFlow Lite
In this notebook, we convert the saved model into a TensorFlow Lite model
so that we can run it on Edge devices.
In order to do edge inference, we need to handle raw image data from the camera
and process a single image (not a batch of images). | import tensorflow as tf
import os, shutil
MODEL_LOCATION='export/flowers_model3' # will be created
# load from checkpoint and export a model that has desired signature
CHECK_POINT_DIR='gs://practical-ml-vision-book/flowers_5_trained/chkpts'
model = tf.keras.models.load_model(CHECK_POINT_DIR)
IMG_HEIGHT = 345
IMG_WID... | 09_deploying/09e_tflite.ipynb | GoogleCloudPlatform/practical-ml-vision-book | apache-2.0 |
Convert to TFLite
This will take a while to do the conversion | import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_LOCATION)
tflite_model = converter.convert()
with open('export/model.tflite', 'wb') as ofp:
ofp.write(tflite_model)
!ls -lh export/model.tflite | 09_deploying/09e_tflite.ipynb | GoogleCloudPlatform/practical-ml-vision-book | apache-2.0 |
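After conversion, inference runs through tf.lite.Interpreter. The snippet below is a self-contained sketch: it converts a tiny stand-in Keras model in memory (so it runs without the checkpoint above) and feeds it a single input, the same way 'export/model.tflite' would be used on an edge device:

```python
import numpy as np
import tensorflow as tf

# Stand-in model so the sketch runs without the flowers checkpoint.
model = tf.keras.Sequential([tf.keras.layers.Dense(5, input_shape=(4,))])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# On-device style inference: one input at a time, not a batch pipeline.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outp = interpreter.get_output_details()[0]

x = np.random.rand(1, 4).astype(np.float32)   # a single "image"
interpreter.set_tensor(inp['index'], x)
interpreter.invoke()
y = interpreter.get_tensor(outp['index'])
print(y.shape)
```

For the real model, pass model_path='export/model.tflite' instead of model_content.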
The Space Shuttle problem
Here's a problem from Bayesian Methods for Hackers
On January 28, 1986, the twenty-fifth flight of the U.S. space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commi... | # !wget https://raw.githubusercontent.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/master/Chapter2_MorePyMC/data/challenger_data.csv
columns = ['Date', 'Temperature', 'Incident']
df = pd.read_csv('challenger_data.csv', parse_dates=[0])
df.drop(labels=[3, 24], inplace=True)
df
df['In... | examples/shuttle_soln.ipynb | AllenDowney/ThinkBayes2 | mit |
Grid algorithm
We can solve the problem first using a grid algorithm, with parameters b0 and b1, and
$\mathrm{logit}(p) = b0 + b1 * T$
and each datum being a temperature T and a boolean outcome fail, which is true if there was damage and false otherwise.
Hint: the expit function from scipy.special computes the inverse ... | from scipy.special import expit
class Logistic(Suite, Joint):
def Likelihood(self, data, hypo):
"""
data: T, fail
hypo: b0, b1
"""
return 1
# Solution
from scipy.special import expit
class Logistic(Suite, Joint):
def Likelihood(self, data, hypo):
... | examples/shuttle_soln.ipynb | AllenDowney/ThinkBayes2 | mit |
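As the hint says, expit is the inverse of the logit, so a hypothesis (b0, b1) turns a temperature directly into a failure probability. With made-up coefficients (illustrative only, not fitted values):

```python
import numpy as np
from scipy.special import expit

b0, b1 = 15.0, -0.23          # hypothetical coefficients, not fitted values
T = np.array([31.0, 65.0, 80.0])
p_fail = expit(b0 + b1 * T)   # inverse logit: p = 1 / (1 + exp(-(b0 + b1*T)))
print(p_fail.round(3))
```

With a negative b1 the failure probability falls as temperature rises, which is the qualitative pattern the Challenger data suggest.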
According to the posterior distribution, what was the probability of damage when the shuttle launched at 31 degF? | # Solution
T = 31
total = 0
for hypo, p in suite.Items():
b0, b1 = hypo
log_odds = b0 + b1 * T
p_fail = expit(log_odds)
total += p * p_fail
total
# Solution
pred = suite.Copy()
pred.Update((31, True)) | examples/shuttle_soln.ipynb | AllenDowney/ThinkBayes2 | mit |
MCMC
Implement this model using MCMC. As a starting place, you can use this example from the PyMC3 docs.
As a challenge, try writing the model more explicitly, rather than using the GLM module. | from warnings import simplefilter
simplefilter('ignore', FutureWarning)
import pymc3 as pm
# Solution
with pm.Model() as model:
pm.glm.GLM.from_formula('Incident ~ Temperature', df,
family=pm.glm.families.Binomial())
start = pm.find_MAP()
trace = pm.sample(1000, start=st... | examples/shuttle_soln.ipynb | AllenDowney/ThinkBayes2 | mit |
Step 1: build the initial state of the entire user network, as well as the purchase history of the users
Input: sample_dataset/batch_log.json | batchlogfile = 'sample_dataset/batch_log.json'
df_batch = pd.read_json(batchlogfile, lines=True)
index_purchase = ['event_type','id','timestamp','amount']
index_friend = ['event_type','id1','id2','timestamp']
#df_batch.head()
#df_batch.describe()
# Read D and T
df_DT=df_batch[df_batch['D'].notnull()]
df_DT=df_DT[['... | anomaly_detection.ipynb | xiaodongpang23/anomaly_detection | mit |
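One common way to make "anomalous" concrete in this setting (an assumption here, not necessarily this repo's exact rule) is to flag a purchase whose amount exceeds mean + 3·std of the purchases previously seen within the user's social network:

```python
import numpy as np

def is_anomalous(amount, network_amounts):
    """Flag a purchase far above the network's purchase history (mean + 3*std)."""
    if len(network_amounts) < 2:          # not enough history to judge
        return False, None, None
    mean = float(np.mean(network_amounts))
    std = float(np.std(network_amounts))
    return amount > mean + 3 * std, mean, std

flagged, mean, std = is_anomalous(100.0, [10.0, 12.0, 9.0, 11.0, 13.0])
print(flagged, mean, round(std, 3))
```

The mean and standard deviation are returned alongside the flag because a flagged-purchases log typically records the statistics that triggered the flag.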
Step 2: Determine whether a purchase is anomalous
input file: sample_dataset/stream_log.json | # read in the stream_log.json
streamlogfile = 'sample_dataset/stream_log.json'
df_stream = pd.read_json(streamlogfile, lines=True)
# If sorting on the timestamp is needed, uncomment the following line
#df_stream = df_stream.sort_values('timestamp')
# open output file flagged_purchases.json
flaggedfile = 'log_output/flag... | anomaly_detection.ipynb | xiaodongpang23/anomaly_detection | mit |
We will also load the other packages we will use in this demo. This could be done before the above import. | import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
Generating Synthetic Data
We begin by generating synthetic data $z$ and measurements $y$ that we will use to test the algorithms. First, we set the dimensions and the shapes of the vectors we will use. | # Parameters
nz = 1000 # number of components of z
ny = 500 # number of measurements y
# Compute the shapes
zshape = (nz,) # Shape of z matrix
yshape = (ny,) # Shape of y matrix
Ashape = (ny,nz) # Shape of A matrix | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
To generate the synthetic data for this demo, we use the following simple probabilistic model. For the input $z$, we will use Bernouli-Gaussian (BG) distribution, a simple model in sparse signal processing. In the BG model, the components $z_i$ are i.i.d. where each component $z_i=0$ with probability $1-\rho$ and $z... | sparse_rat = 0.1 # sparsity ratio
zmean1 = 0 # mean for the active components
zvar1 = 1 # variance for the active components
snr = 30 # SNR in dB | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
Using these parameters, we can generate random sparse z following this distribution with the following simple code. | # Generate the random input
z1 = np.random.normal(zmean1, np.sqrt(zvar1), zshape)
u = np.random.uniform(0, 1, zshape) < sparse_rat
z = z1*u | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
To illustrate the sparsity, we plot the vector z. We can see from this plot that the majority of the components of z are zero. | ind = np.array(range(nz))
plt.plot(ind,z) | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
Now, we create a random transform $A$ and output $y_0 = Az$. | A = np.random.normal(0, 1/np.sqrt(nz), Ashape)
y0 = A.dot(z) | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
Finally, we add noise at the desired SNR | yvar = np.mean(np.abs(y0)**2)
wvar = yvar*np.power(10, -0.1*snr)
y = y0 + np.random.normal(0,np.sqrt(wvar), yshape) | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
Creating the Vampyre estimators
Now that we have created the sparse data, we will use the vampyre package to recover z from y. In vampyre the methods to perform this estimation are called solvers. For this demo, we will use a simple solver called VAMP described in the paper:
Rangan, Sundeep, Philip Schniter, and Aly... | est0 = vp.estim.DiscreteEst(0,1,zshape)
est1 = vp.estim.GaussEst(zmean1,zvar1,zshape) | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
We next use the vampyre class, MixEst, to describe a mixture of the two distributions. This is done by creating a list, est_list, of the estimators and an array pz with the probability of each component. The resulting estimator, est_in, is the estimator for the prior $z$, which is also the input to the transform $A$.... | est_list = [est0, est1]
pz = np.array([1-sparse_rat, sparse_rat])
est_in = vp.estim.MixEst(est_list, w=pz, name='Input') | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
Next, we describe the likelihood function, $p(y|z)$. Since $y=Az+w$, we can first use the MatrixLT class to define a linear transform operator Aop corresponding to the matrix A. Then, we use the LinEst class to describe the likelihood $y=Az+w$. | Aop = vp.trans.MatrixLT(A,zshape)
est_out = vp.estim.LinEst(Aop,y,wvar,map_est=False, name='Output') | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
Finally, the VAMP method needs a message handler to describe how to perform the Gaussian message passing. This is a more advanced feature. For most applications, you can just use the simple message handler as follows. | msg_hdl = vp.estim.MsgHdlSimp(map_est=False, shape=zshape) | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
Running the VAMP Solver
Having described the input and output estimators and the variance handler, we can now construct a VAMP solver. The constructor takes the input and output estimators, the variance handler and other parameters. The parameter nit is the number of iterations. This is fixed for now. Later, we will ... | nit = 20 # number of iterations
solver = vp.solver.Vamp(est_in,est_out,msg_hdl,\
hist_list=['zhat', 'zhatvar'],nit=nit) | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
We can print a summary of the model which indicates the dimensions and the estimators. | solver.summary() | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
We now run the solver by calling the solve() method. For a small problem like this, this should be close to instantaneous. | solver.solve() | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
The VAMP solver estimate is the field zhat. We plot one column of this (icol=0) and compare it to the corresponding column of the true matrix z. You should see a very good match. | zhat = solver.zhat
ind = np.array(range(nz))
plt.plot(ind,z)
plt.plot(ind,zhat)
plt.legend(['True', 'Estimate']) | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
We can measure the normalized mean squared error as follows. The VAMP solver also produces an estimate of the MSE in the variable zhatvar. We can extract this variable to compute the predicted MSE. We see that the normalized MSE is indeed low and closely matches the predicted value from VAMP. | zerr = np.mean(np.abs(zhat-z)**2)
zhatvar = solver.zhatvar
zpow = np.mean(np.abs(z)**2)
mse_act = 10*np.log10(zerr/zpow)
mse_pred = 10*np.log10(zhatvar/zpow)
print("Normalized MSE (dB): actual {0:f} pred {1:f}".format(mse_act, mse_pred)) | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
Finally, we can plot the actual and predicted MSE as a function of the iteration number. When the solver was constructed, we passed an argument hist_list=['zhat', 'zhatvar']. This told the solver to store the value of the estimate zhat and the predicted error variance zhatvar at each iteration. We can recover these values from s... | # Compute the MSE as a function of the iteration
zhat_hist = solver.hist_dict['zhat']
zhatvar_hist = solver.hist_dict['zhatvar']
nit = len(zhat_hist)
mse_act = np.zeros(nit)
mse_pred = np.zeros(nit)
for it in range(nit):
zerr = np.mean(np.abs(zhat_hist[it]-z)**2)
mse_act[it] = 10*np.log10(zerr/zpow)
mse_pre... | demos/sparse/sparse_lin_inverse.ipynb | GAMPTeam/vampyre | mit |
Step 2: Concatenate Barcodes for QIIME2 Pipeline | ## Note: QIIME takes a single barcode file. The command 'extract_barcodes.py' concatenates the forward and reverse read barcode and attributes it to a single read.
# See http://qiime.org/tutorials/processing_illumina_data.html
for dataset in datasets:
directory = dataset[1]
index1 = directory+indexFile1
i... | sequence_analysis_walkthrough/QIIME2_Processing_Pipeline.ipynb | buckleylab/Buckley_Lab_SIP_project_protocols | mit |
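The concatenation itself is just string-level joining of the two index reads; the orientation note in Step 4 below matters because one index must be reverse-complemented first. A toy sketch of both operations (purely illustrative, not a replacement for extract_barcodes.py):

```python
# Translation table for complementing DNA bases
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def revcomp(seq):
    """Reverse complement of a DNA barcode."""
    return seq.translate(COMPLEMENT)[::-1]

def concat_barcode(index1, index2):
    """Join the two index reads into the single barcode QIIME expects."""
    return index1 + index2

print(revcomp("AAGC"))                          # GCTT
print(concat_barcode("ACGT", revcomp("AAGC")))  # ACGTGCTT
```

In practice extract_barcodes.py performs this per read pair and writes the joined barcodes to a new FASTQ file.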
Step 3: Import into QIIME2 | for dataset in datasets:
name = dataset[0]
directory = dataset[1]
os.system(' '.join([
"qiime tools import",
"--type EMPPairedEndSequences",
"--input-path "+directory+"output/",
"--output-path "+directory+"output/"+name+".qza"
]))
# This more direct command ... | sequence_analysis_walkthrough/QIIME2_Processing_Pipeline.ipynb | buckleylab/Buckley_Lab_SIP_project_protocols | mit |
Step 4: Demultiplex | ########
## Note: The barcode you supply to QIIME is now a concatenation of your forward and reverse barcode.
# Your 'forward' barcode is actually the reverse complement of your reverse barcode and the 'reverse' is your forward barcode. The file 'primers.complete.csv' provides this information corresponding to the Buck... | sequence_analysis_walkthrough/QIIME2_Processing_Pipeline.ipynb | buckleylab/Buckley_Lab_SIP_project_protocols | mit |
Step 5: Visualize Quality Scores and Determine Trimming Parameters | ## Based on the graph produced by the following command, enter the trim and truncate values. Trim is the number of bases removed from the start of a read; truncate is the position at which the read is cut (bases beyond it are removed from the end)
# The example in the Atacam Desert Tutorial trims 13 bp from the start of each read and does not remove any bases... | sequence_analysis_walkthrough/QIIME2_Processing_Pipeline.ipynb | buckleylab/Buckley_Lab_SIP_project_protocols | mit |
Step 6: Trimming Parameters | USER INPUT REQUIRED | ## User Input Required
trim_dict = {}
## Input your trimming parameters into a python dictionary for all libraries
#trim_dict["LibraryName1"] = [trim_forward, truncate_forward, trim_reverse, truncate_reverse]
#trim_dict["LibraryName2"] = [trim_forward, truncate_forward, trim_reverse, truncate_reverse]
## Example
trim... | sequence_analysis_walkthrough/QIIME2_Processing_Pipeline.ipynb | buckleylab/Buckley_Lab_SIP_project_protocols | mit |
Step 7: Trim, Denoise and Join (aka 'Merge') Reads Using DADA2 | ## Hack for Multithreading
# I hardcoded 'nthreads' in both versions of 'run_dada_paired.R' (find your versions by running 'locate run_dada_paired.R' from your home directory)
# I used ~ 20 threads and the processing finished in ~ 7 - 8hrs
##
## SLOW STEP (~ 6 - 8 hrs, IF multithreading is used)
##
for dataset in da... | sequence_analysis_walkthrough/QIIME2_Processing_Pipeline.ipynb | buckleylab/Buckley_Lab_SIP_project_protocols | mit |
Step 8: Create Summary of OTUs | for dataset in datasets:
name = dataset[0]
directory = dataset[1]
metadata = dataset[2]
os.system(' '.join([
"qiime feature-table summarize",
"--i-table "+directory+"/output/"+name+".table.qza",
"--o-visualization "+directory+"/output/"+name+".table.qzv",
"--m-sample... | sequence_analysis_walkthrough/QIIME2_Processing_Pipeline.ipynb | buckleylab/Buckley_Lab_SIP_project_protocols | mit |
Step 9: Make Phylogenetic Tree | ## Hack for Multithreading
# I hardcoded 'n_threads' in '_mafft.py' in the directory ~/anaconda3/envs/qiime2-2017.9/lib/python3.5/site-packages/q2_alignment
# I used ~ 20 threads and the processing finished in ~ 15 min
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
metadata = dataset[2]
... | sequence_analysis_walkthrough/QIIME2_Processing_Pipeline.ipynb | buckleylab/Buckley_Lab_SIP_project_protocols | mit |
Step 10: Classify Seqs | for dataset in datasets:
name = dataset[0]
directory = dataset[1]
metadata = dataset[2]
domain = dataset[3]
# Classify
if domain == 'bacteria':
os.system(' '.join([
"qiime feature-classifier classify-sklearn",
"--i-classifier /home/db/GreenGenes/qiime2_13.8.99_51... | sequence_analysis_walkthrough/QIIME2_Processing_Pipeline.ipynb | buckleylab/Buckley_Lab_SIP_project_protocols | mit |
Step 11: Prepare Data for Import to Phyloseq | ## Make Function to Re-Format Taxonomy File to Contain Full Column Information
# and factor in the certain of the taxonomic assignment
def format_taxonomy(tax_file, min_support):
output = open(re.sub(".tsv",".fixed.tsv",tax_file), "w")
output.write("\t".join(["OTU","Domain","Phylum","Class","Order","Family","... | sequence_analysis_walkthrough/QIIME2_Processing_Pipeline.ipynb | buckleylab/Buckley_Lab_SIP_project_protocols | mit |
Step 13: Get 16S rRNA Gene Copy Number (rrn) | ## This step is based on the database contructed for the software 'copyrighter'
## The software itself lacked information about datastructure (and, the import of a biom from QIIME2 failed, likely because there are multiple versions of the biom format)
downloaded = "N"
for dataset in datasets:
name = dataset[0]
... | sequence_analysis_walkthrough/QIIME2_Processing_Pipeline.ipynb | buckleylab/Buckley_Lab_SIP_project_protocols | mit |
Step 14: Import into Phyloseq | ## Setup R-Magic for Jupyter Notebooks
import rpy2
%load_ext rpy2.ipython
def fix_biom_conversion(file):
with open(file, 'r') as fin:
data = fin.read().splitlines(True)
with open(file, 'w') as fout:
fout.writelines(data[1:])
import pandas as pd
%R library(phyloseq)
%R library(ape)
for datase... | sequence_analysis_walkthrough/QIIME2_Processing_Pipeline.ipynb | buckleylab/Buckley_Lab_SIP_project_protocols | mit |
Step 15: Clean-up Intermediate Files and Final Outputs | for dataset in datasets:
directory = dataset[1]
metadata = dataset[2]
# Remove Files
if domain == "bacteria":
%rm -r $directory/output/*tree.unrooted.qza
%rm -r $directory/output/*aligned.masked.qza
%rm $directory/output/*.biom
%rm -r $directory/temp/
%rm $di... | sequence_analysis_walkthrough/QIIME2_Processing_Pipeline.ipynb | buckleylab/Buckley_Lab_SIP_project_protocols | mit |
Problem Statement
The following data are available for 1260 respondents:
hourly wage, $;
work experience, years;
education, years;
physical attractiveness, on a scale from 1 to 5;
binary features: sex, marital status, health (good/poor), union membership, skin color (white/black), employment in the ... | raw = pd.read_csv("beauty.csv", sep=";", index_col=False)
raw.head() | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
Let's look at the scatter-plot matrix of the quantitative features: | pd.tools.plotting.scatter_matrix(raw[['wage', 'exper', 'educ', 'looks']], alpha=0.2,
figsize=(15, 15), diagonal='hist')
pylab.show() | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
Let's check how balanced the sample is on the categorical features: | print raw.union.value_counts()
print raw.goodhlth.value_counts()
print raw.black.value_counts()
print raw.female.value_counts()
print raw.married.value_counts()
print raw.service.value_counts() | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
Every value of each categorical feature occurs often enough, so the sample is fine.
Preprocessing | data = raw | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
Let's look at the distribution of the target variable, the hourly wage: | plt.figure(figsize(16,7))
plt.subplot(121)
data['wage'].plot.hist()
plt.xlabel('Wage', fontsize=14)
plt.subplot(122)
np.log(data['wage']).plot.hist()
plt.xlabel('Log wage', fontsize=14)
pylab.show() | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
One person in the sample earns \$77.72 per hour while everyone else earns less than \$45; let's drop this person so that the regression does not overfit to him. | data = data[data['wage'] < 77] | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
Let's look at the distribution of the attractiveness ratings: | plt.figure(figsize(8,7))
data.groupby('looks')['looks'].agg(lambda x: len(x)).plot(kind='bar', width=0.9)
plt.xticks(rotation=0)
plt.xlabel('Looks', fontsize=14)
pylab.show() | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
The groups looks=1 and looks=5 contain too few observations. Let's make looks categorical and encode it with dummy variables: | data['belowavg'] = data['looks'].apply(lambda x : 1 if x < 3 else 0)
data['aboveavg'] = data['looks'].apply(lambda x : 1 if x > 3 else 0)
data.drop('looks', axis=1, inplace=True) | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
The data now look like this: | data.head() | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
Building the Model
The Simplest Model
Let's fit a linear model on all the features. | m1 = smf.ols('wage ~ exper + union + goodhlth + black + female + married +'\
'service + educ + belowavg + aboveavg',
data=data)
fitted = m1.fit()
print fitted.summary() | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
Let's look at the distribution of the residuals: | plt.figure(figsize(16,7))
plt.subplot(121)
sc.stats.probplot(fitted.resid, dist="norm", plot=pylab)
plt.subplot(122)
fitted.resid.plot.hist()  # residuals can be negative, so plot them directly rather than their log
plt.xlabel('Residuals', fontsize=14)
pylab.show() | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
The distribution is skewed, like the original response. In such situations it often helps to switch from regressing the raw response to regressing its logarithm.
Taking the log of the response | m2 = smf.ols('np.log(wage) ~ exper + union + goodhlth + black + female + married +'\
'service + educ + belowavg + aboveavg', data=data)
fitted = m2.fit()
print(fitted.summary())
plt.figure(figsize=(16,7))
plt.subplot(121)
sc.stats.probplot(fitted.resid, dist="norm", plot=pylab)
plt.subplot(12... | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
That is better now. Let's look at how the residuals depend on the continuous features: | plt.figure(figsize=(16,7))
plt.subplot(121)
plt.scatter(data['educ'], fitted.resid)
plt.xlabel('Education', fontsize=14)
plt.ylabel('Residuals', fontsize=14)
plt.subplot(122)
plt.scatter(data['exper'], fitted.resid)
plt.xlabel('Experience', fontsize=14)
plt.ylabel('Residuals', fontsize=14)
pylab.show() | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
The second plot shows a quadratic dependence of the residuals on work experience. Let's add the square of experience to the features to account for this effect.
Adding squared work experience | m3 = smf.ols('np.log(wage) ~ exper + np.power(exper,2) + union + goodhlth + black + female +'\
'married + service + educ + belowavg + aboveavg', data=data)
fitted = m3.fit()
print(fitted.summary())
plt.figure(figsize=(16,7))
plt.subplot(121)
sc.stats.probplot(fitted.resid, dist="norm", plot=p... | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
Let's use the Breusch-Pagan test to check the errors for homoscedasticity: | print('Breusch-Pagan test: p=%f' % sms.het_breuschpagan(fitted.resid, fitted.model.exog)[1]) | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
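The LM form of the Breusch-Pagan statistic can be sketched by hand (a simplified illustration on synthetic data, not the statsmodels implementation): fit OLS, regress the squared residuals on the same regressors, and compute n·R², which is asymptotically χ² with one degree of freedom per non-constant regressor.

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
n = 500
X = np.column_stack([np.ones(n), rng.randn(n)])  # intercept + one regressor
y = X @ np.array([1.0, 2.0]) + rng.randn(n)      # homoscedastic errors

# Main OLS fit
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Auxiliary regression: squared residuals on the regressors
u2 = resid ** 2
gamma = np.linalg.lstsq(X, u2, rcond=None)[0]
fitted_u2 = X @ gamma
r2 = 1 - np.sum((u2 - fitted_u2) ** 2) / np.sum((u2 - u2.mean()) ** 2)

lm = n * r2                               # LM statistic
p = stats.chi2.sf(lm, df=X.shape[1] - 1)  # chi^2 with (k - 1) degrees of freedom
print('LM = %.3f, p = %.3f' % (lm, p))
```

A large LM (small p) means the squared residuals are predictable from the regressors, i.e. the error variance is not constant.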
The errors are heteroscedastic, which means feature significance may be assessed incorrectly. Let's apply White's correction: | m4 = smf.ols('np.log(wage) ~ exper + np.power(exper,2) + union + goodhlth + black + female +'\
'married + service + educ + belowavg + aboveavg', data=data)
fitted = m4.fit(cov_type='HC1')
print(fitted.summary())
plt.figure(figsize=(16,7))
plt.subplot(121)
sc.stats.probplot(fitted.resid, dist=... | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
Removing insignificant features
In the previous model, skin color, health, and marital status are insignificant; let's remove them. The above-average-attractiveness indicator is also insignificant, but we will keep it, because it is one of the variables the final question asks about. | m5 = smf.ols('np.log(wage) ~ exper + np.power(exper,2) + union + female + service + educ +'\
'belowavg + aboveavg', data=data)
fitted = m5.fit(cov_type='HC1')
print(fitted.summary())
plt.figure(figsize=(16,7))
plt.subplot(121)
sc.stats.probplot(fitted.resid, dist="norm", plot=pylab)
plt.subpl... | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
Let's use an F-test to check whether removing the three features made the model significantly worse: | print("F=%f, p=%f, k1=%f" % m4.fit().compare_f_test(m5.fit())) | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
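The nested-model F-test that `compare_f_test` performs can be computed directly (a sketch on synthetic data with hypothetical variable names): F = ((RSS_r − RSS_f)/k) / (RSS_f/(n − p)), where k is the number of dropped regressors and p the number of parameters in the full model.

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(42)
n = 200
x1, x2, x3 = rng.randn(3, n)
y = 1.0 + 2.0 * x1 + rng.randn(n)   # x2, x3 are irrelevant by construction

def rss(X, y):
    """Residual sum of squares of an OLS fit."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.sum((y - X @ beta) ** 2)

X_full = np.column_stack([np.ones(n), x1, x2, x3])   # full model, p = 4
X_restr = np.column_stack([np.ones(n), x1])          # restricted model, k = 2 dropped

rss_f, rss_r = rss(X_full, y), rss(X_restr, y)
k, p = 2, X_full.shape[1]
F = ((rss_r - rss_f) / k) / (rss_f / (n - p))
pval = stats.f.sf(F, k, n - p)
print('F = %.3f, p = %.3f' % (F, pval))
```

A large p-value here means the restricted model is not significantly worse, so the dropped features can stay out.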
It did not.
Let's check whether any observations influence the regression equation too strongly: | plt.figure(figsize=(8,7))
plot_leverage_resid2(fitted)
pylab.show()
data.loc[[1122]]
data.loc[[269]] | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
Conclusions
The final model explains 40% of the variation in the log response. | plt.figure(figsize=(16,7))
plt.subplot(121)
plt.scatter(data['wage'], np.exp(fitted.fittedvalues))
plt.xlabel('Wage', fontsize=14)
plt.ylabel('Exponentiated predictions', fontsize=14)
plt.xlim([0,50])
plt.subplot(122)
plt.scatter(np.log(data['wage']), fitted.fittedvalues)
plt.xlabel('Log wage', fontsize=14)
plt.ylabel('Predictio... | 4 Stats for data analysis/Lectures notebooks/14 regression/stat.regression.ipynb | maxis42/ML-DA-Coursera-Yandex-MIPT | mit |
Cross-Validate Model Using Accuracy | # Cross-validate model using accuracy
cross_val_score(logit, X, y, scoring="accuracy") | machine-learning/accuracy.ipynb | tpin3694/tpin3694.github.io | mit |
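The same call pattern can be made self-contained (a sketch assuming scikit-learn is available, on a synthetic dataset rather than the notebook's `logit`, `X`, `y`), with an explicit number of folds and a second metric for comparison:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic binary classification problem
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LogisticRegression(max_iter=1000)

# One score per fold; cv=5 makes the fold count explicit
acc = cross_val_score(clf, X, y, scoring='accuracy', cv=5)
f1 = cross_val_score(clf, X, y, scoring='f1', cv=5)
print('accuracy per fold:', acc)
print('mean accuracy: %.3f, mean F1: %.3f' % (acc.mean(), f1.mean()))
```

Comparing two metrics side by side guards against accuracy looking good purely because of class balance.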
2. Program the ZYNQ PL | ol = Overlay('base.bit')
ol.download() | Pynq-Z1/notebooks/examples/pmod_dac_adc.ipynb | AEW2015/PYNQ_PR_Overlay | bsd-3-clause |
3. Instantiate the Pmod peripherals as Python objects | adc = Pmod_ADC(1)
dac = Pmod_DAC(2) | Pynq-Z1/notebooks/examples/pmod_dac_adc.ipynb | AEW2015/PYNQ_PR_Overlay | bsd-3-clause |
4. Write to DAC, read from ADC, print result | dac.write(0.35)
sample = adc.read()
print(sample) | Pynq-Z1/notebooks/examples/pmod_dac_adc.ipynb | AEW2015/PYNQ_PR_Overlay | bsd-3-clause |
Contents
Tracking the IO Error
Report DAC-ADC Pmod Loopback Measurement Error. | from math import ceil
from time import sleep
import numpy as np
import matplotlib.pyplot as plt
from pynq import Overlay
from pynq.iop import Pmod_ADC, Pmod_DAC
ol = Overlay('base.bit')
ol.download()
adc = Pmod_ADC(1)
dac = Pmod_DAC(2)
delay = 0.0
values = np.linspace(0, 2, 20)
samples = []
for value in values:
... | Pynq-Z1/notebooks/examples/pmod_dac_adc.ipynb | AEW2015/PYNQ_PR_Overlay | bsd-3-clause |
Error plot with Matplotlib
This example shows plots inline in the notebook (rather than in a separate window). | %matplotlib inline
X = np.arange(len(values))
plt.bar(X + 0.0, values, facecolor='blue',
edgecolor='white', width=0.5, label="Written_to_DAC")
plt.bar(X + 0.25, samples, facecolor='red',
edgecolor='white', width=0.5, label="Read_from_ADC")
plt.title('DAC-ADC Linearity')
plt.xlabel('Sample_number... | Pynq-Z1/notebooks/examples/pmod_dac_adc.ipynb | AEW2015/PYNQ_PR_Overlay | bsd-3-clause |
Contents
XKCD Plot
Same data plotted in XKCD format ...
(http://xkcd.com) | %matplotlib inline
# xkcd comic book style plots
with plt.xkcd():
X = np.arange(len(values))
plt.bar(X + 0.0, values, facecolor='blue',
edgecolor='white', width=0.5, label="Written_to_DAC")
plt.bar(X + 0.25, samples, facecolor='red',
edgecolor='white', width=0.5, label="Read_f... | Pynq-Z1/notebooks/examples/pmod_dac_adc.ipynb | AEW2015/PYNQ_PR_Overlay | bsd-3-clause |
Contents
Widget controlled plot
In this example, we extend the IO plot with a slider widget to control the number of samples appearing in the output plot.
We use the ipywidgets library and the simple interact() method to launch a slider bar.
The interact function (ipywidgets.interact) automatically creates user interf... | from math import ceil
from time import sleep
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from ipywidgets import interact
import ipywidgets as widgets
from pynq import Overlay
from pynq.iop import Pmod_ADC, Pmod_DAC
ol = Overlay('base.bit')
ol.download()
dac = Pmod_DAC(2)
adc = Pmod_ADC(1)
... | Pynq-Z1/notebooks/examples/pmod_dac_adc.ipynb | AEW2015/PYNQ_PR_Overlay | bsd-3-clause |
Introducing Principal Component Analysis
Principal component analysis is a fast and flexible unsupervised method for dimensionality reduction in data, which we saw briefly in Introducing Scikit-Learn.
Its behavior is easiest to visualize by looking at a two-dimensional dataset.
Consider the following 200 points: | rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T
plt.scatter(X[:, 0], X[:, 1])
plt.axis('equal'); | present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb | csaladenes/csaladenes.github.io | mit |
By eye, it is clear that there is a nearly linear relationship between the x and y variables.
This is reminiscent of the linear regression data we explored in In Depth: Linear Regression, but the problem setting here is slightly different: rather than attempting to predict the y values from the x values, the unsupervis... | from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X) | present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb | csaladenes/csaladenes.github.io | mit |
The fit learns some quantities from the data, most importantly the "components" and "explained variance": | print(pca.components_)
print(pca.explained_variance_) | present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb | csaladenes/csaladenes.github.io | mit |
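These two quantities can be checked numerically (a quick sketch on the same synthetic data): the components are orthonormal directions, and `explained_variance_` equals the sample variance of the data projected onto each component.

```python
import numpy as np
from sklearn.decomposition import PCA

# Same synthetic 2-D data as above
rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T

pca = PCA(n_components=2).fit(X)

# Orthonormality: the rows of components_ are unit-length and mutually orthogonal
print(pca.components_ @ pca.components_.T)  # close to the 2x2 identity

# Variance of centered data projected onto each component (ddof=1, as sklearn uses)
proj = (X - X.mean(axis=0)) @ pca.components_.T
print(proj.var(axis=0, ddof=1))             # matches pca.explained_variance_
```

This makes concrete what "explained variance" means: it is literally the variance of the data along that principal axis.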
To see what these numbers mean, let's visualize them as vectors over the input data, using the "components" to define the direction of the vector, and the "explained variance" to define the squared-length of the vector: | def draw_vector(v0, v1, ax=None):
ax = ax or plt.gca()
arrowprops=dict(arrowstyle='->',
linewidth=2,
shrinkA=0, shrinkB=0)
ax.annotate('', v1, v0, arrowprops=arrowprops)
# plot data
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
for length, vector in zip(pca.explained_vari... | present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb | csaladenes/csaladenes.github.io | mit |
These vectors represent the principal axes of the data, and the length of the vector is an indication of how "important" that axis is in describing the distribution of the data—more precisely, it is a measure of the variance of the data when projected onto that axis.
The projection of each data point onto the principal... | pca = PCA(n_components=1)
pca.fit(X)
X_pca = pca.transform(X)
print("original shape: ", X.shape)
print("transformed shape:", X_pca.shape) | present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb | csaladenes/csaladenes.github.io | mit |
The transformed data has been reduced to a single dimension.
To understand the effect of this dimensionality reduction, we can perform the inverse transform of this reduced data and plot it along with the original data: | X_new = pca.inverse_transform(X_pca)
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
plt.scatter(X_new[:, 0], X_new[:, 1], alpha=0.8)
plt.axis('equal'); | present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb | csaladenes/csaladenes.github.io | mit |
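The information lost by the projection can be quantified (a sketch on the same synthetic data): measure the mean squared reconstruction error of `inverse_transform`, and check that keeping all components reconstructs the data (numerically) exactly.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T

# One component: some information is discarded
pca1 = PCA(n_components=1).fit(X)
X_rec = pca1.inverse_transform(pca1.transform(X))
err1 = np.mean(np.sum((X - X_rec) ** 2, axis=1))

# All components: reconstruction is exact up to floating-point error
pca2 = PCA(n_components=2).fit(X)
X_full = pca2.inverse_transform(pca2.transform(X))
err2 = np.mean(np.sum((X - X_full) ** 2, axis=1))
print('1-component error:', err1, ' full error:', err2)
```

The 1-component error is exactly the variance along the discarded axis, which is why PCA keeps the highest-variance directions.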
The light points are the original data, while the dark points are the projected version.
This makes clear what a PCA dimensionality reduction means: the information along the least important principal axis or axes is removed, leaving only the component(s) of the data with the highest variance.
The fraction of variance ... | from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape | present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb | csaladenes/csaladenes.github.io | mit |
Recall that the data consists of 8×8 pixel images, meaning that they are 64-dimensional.
To gain some intuition into the relationships between these points, we can use PCA to project them to a more manageable number of dimensions, say two: | pca = PCA(2) # project from 64 to 2 dimensions
projected = pca.fit_transform(digits.data)
print(digits.data.shape)
print(projected.shape)
digits.target
i=int(np.random.random()*1797)
plt.imshow(digits.data[i].reshape(8,8),cmap='Blues')
digits.target[i]
digits.data[i].reshape(8,8) | present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb | csaladenes/csaladenes.github.io | mit |
We can now plot the first two principal components of each point to learn about the data: | plt.scatter(projected[:, 0], projected[:, 1],
c=digits.target, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('Spectral', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar(); | present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb | csaladenes/csaladenes.github.io | mit |
Recall what these components mean: the full data is a 64-dimensional point cloud, and these points are the projection of each data point along the directions with the largest variance.
Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits in t... | pca = PCA().fit(digits.data)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance'); | present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb | csaladenes/csaladenes.github.io | mit |
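Instead of reading the threshold off the curve by eye, a float can be passed as `n_components` and PCA will keep just enough components to reach that fraction of variance (a sketch; `svd_solver='full'` is used because float thresholds require an exact SVD):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()

# Keep the smallest number of components explaining at least 90% of the variance
pca = PCA(n_components=0.90, svd_solver='full').fit(digits.data)
print('components kept:', pca.n_components_)
print('variance explained:', pca.explained_variance_ratio_.sum())
```

This is a convenient way to make the dimensionality choice reproducible rather than hand-picked.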
This curve quantifies how much of the total, 64-dimensional variance is contained within the first $N$ components.
For example, we see that with the digits the first 10 components contain approximately 75% of the variance, while you need around 50 components to describe close to 100% of the variance.
Here we see that o... | def plot_digits(data):
fig, axes = plt.subplots(4, 10, figsize=(10, 4),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
ax.imshow(data[i].reshape(8, 8),
cmap='b... | present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb | csaladenes/csaladenes.github.io | mit |