Note: If you have many numeric features (hundreds, or more), it is more efficient to concatenate them first and use a single normalization layer.

Categorical columns

In this dataset, Type is represented as a string (e.g. 'Dog' or 'Cat'). You cannot feed strings directly to a model; the preprocessing layer takes care of representing strings as one-hot vectors. The get_category_encoding_layer function returns a layer which maps values from a vocabulary to integer indices and one-hot encodes the features.
def get_category_encoding_layer(name, dataset, dtype, max_tokens=None):
  # Create a StringLookup layer which will turn strings into integer indices
  if dtype == 'string':
    index = preprocessing.StringLookup(max_tokens=max_tokens)
  else:
    index = preprocessing.IntegerLookup(max_tokens=max_tokens)

  # Prepare a...
site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb
tensorflow/docs-l10n
apache-2.0
Generally, you should not feed numbers directly into the model; instead, use a one-hot encoding of those inputs. Consider raw data that represents a pet's age.
type_col = train_features['Age']
category_encoding_layer = get_category_encoding_layer('Age', train_ds, 'int64', 5)
category_encoding_layer(type_col)
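The lookup-then-encode step can be sketched in plain Python/NumPy, independent of Keras; the vocabulary and the out-of-vocabulary slot at index 0 below are illustrative assumptions, not the Keras layers' exact internals:

```python
import numpy as np

def one_hot_encode(values, vocabulary):
    """Map values to indices in `vocabulary` and one-hot encode them.

    Index 0 is reserved for out-of-vocabulary values, mirroring the
    behaviour of a lookup layer with a default OOV token.
    """
    index = {v: i + 1 for i, v in enumerate(vocabulary)}
    depth = len(vocabulary) + 1
    encoded = np.zeros((len(values), depth))
    for row, v in enumerate(values):
        encoded[row, index.get(v, 0)] = 1.0
    return encoded

print(one_hot_encode(['Dog', 'Cat', 'Hamster'], ['Dog', 'Cat']))
```

A Keras StringLookup/IntegerLookup followed by one-hot encoding performs the same mapping, with the added ability to adapt its vocabulary from data.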
Choosing which columns to use

You have seen how to use several types of preprocessing layers. Now you will use them to train a model. You will build the model with the Keras functional API, which is a more flexible way to create models than the tf.keras.Sequential API. The goal of this tutorial is to show you the complete code (i.e. the mechanics) needed to work with preprocessing layers; a few columns have been selected arbitrarily to train our model. Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include and how they should be represented. Earlier, you used a small batch size to demonstrate the input pipeline. Let's now create a new input pipeline with a larger batch size.
batch_size = 256
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)

all_inputs = []
encoded_features = []

# Numeric features.
for header in ['PhotoAmt', 'Fee']:
  numeric_col = tf....
Create, compile, and train the model

Next, you can create the end-to-end model.
all_features = tf.keras.layers.concatenate(encoded_features)
x = tf.keras.layers.Dense(32, activation="relu")(all_features)
x = tf.keras.layers.Dropout(0.5)(x)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(all_inputs, output)
model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossen...
Let's visualize the connectivity graph:
# rankdir='LR' is used to make the graph horizontal.
tf.keras.utils.plot_model(model, show_shapes=True, rankdir="LR")
Train the model.
model.fit(train_ds, epochs=10, validation_data=val_ds)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
Inference on new data

Key point: The model you have developed can now classify a row from a CSV file directly, because the preprocessing code is included inside the model itself. You can now save and reload the Keras model. Follow the tutorial here to learn more about TensorFlow models.
model.save('my_pet_classifier')
reloaded_model = tf.keras.models.load_model('my_pet_classifier')
To get a prediction for a new sample, you can simply call model.predict(). There are just two things you need to do: 1. Wrap scalars into a list so as to have a batch dimension (models only process batches of data, not single samples). 2. Call convert_to_tensor on each feature.
sample = {
    'Type': 'Cat',
    'Age': 3,
    'Breed1': 'Tabby',
    'Gender': 'Male',
    'Color1': 'Black',
    'Color2': 'White',
    'MaturitySize': 'Small',
    'FurLength': 'Short',
    'Vaccinated': 'No',
    'Sterilized': 'No',
    'Health': 'Healthy',
    'Fee': 100,
    'PhotoAmt': 2,
}

input_dict = {name:...
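The two steps above can be sketched without TensorFlow; here NumPy's np.asarray stands in for tf.convert_to_tensor, and the sample dict is abbreviated:

```python
import numpy as np

sample = {'Type': 'Cat', 'Age': 3, 'Fee': 100}

# 1. Wrap each scalar in a list so every feature has a batch dimension of 1.
# 2. Convert each wrapped feature to a tensor-like array.
input_dict = {name: np.asarray([value]) for name, value in sample.items()}

print(input_dict['Age'].shape)  # each feature is now shape (1,)
```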
Let's say we only have 4 words in our vocabulary: "the", "fight", "wind", and "like". Maybe each word is associated with a number.

| Word | Number |
| ------ |:------:|
| 'the' | 17 |
| 'fight' | 22 |
| 'wind' | 35 |
| 'like' | 51 |
embeddings_0d = tf.constant([17,22,35,51])
ch11_seq2seq/Concept02_embedding_lookup.ipynb
BinRoot/TensorFlow-Book
mit
Or maybe, they're associated with one-hot vectors.

| Word | Vector |
| ------ |:------:|
| 'the' | [1, 0, 0, 0] |
| 'fight' | [0, 1, 0, 0] |
| 'wind' | [0, 0, 1, 0] |
| 'like' | [0, 0, 0, 1] |
embeddings_4d = tf.constant([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
This may sound over the top, but you can have any tensor you want, not just numbers or vectors.

| Word | Tensor |
| ------ |:------:|
| 'the' | [[1, 0] , [0, 0]] |
| 'fight' | [[0, 1] , [0, 0]] |
| 'wind' | [[0, 0] , [1, 0]] |
| 'like' | [[0, 0] , [0, 1]] |
embeddings_2x2d = tf.constant([[[1, 0], [0, 0]], [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[0, 0], [0, 1]]])
Let's say we want to find the embeddings for the sentence, "fight the wind".
ids = tf.constant([1, 0, 2])
We can use the embedding_lookup function provided by TensorFlow:
lookup_0d = sess.run(tf.nn.embedding_lookup(embeddings_0d, ids))
print(lookup_0d)
lookup_4d = sess.run(tf.nn.embedding_lookup(embeddings_4d, ids))
print(lookup_4d)
lookup_2x2d = sess.run(tf.nn.embedding_lookup(embeddings_2x2d, ids))
print(lookup_2x2d)
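tf.nn.embedding_lookup is essentially a gather along the first axis of the embedding tensor; a NumPy sketch of the same lookups for "fight the wind":

```python
import numpy as np

embeddings_0d = np.array([17, 22, 35, 51])
embeddings_4d = np.eye(4, dtype=int)   # one-hot rows, as in the table above

# ids for "fight the wind": 'fight' -> 1, 'the' -> 0, 'wind' -> 2
ids = np.array([1, 0, 2])

lookup_0d = embeddings_0d[ids]   # integer-array indexing == embedding lookup
lookup_4d = embeddings_4d[ids]
print(lookup_0d)   # [22 17 35]
```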
Load data
% ll dadiExercises/
% cat dadiExercises/ERY.FOLDED.sfs.dadi_format
Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb
claudiuskerth/PhDthesis
mit
I have turned the 1D folded SFS's from realSFS into $\partial$a$\partial$i format by hand according to the description in section 3.1 of the manual. Note that the last line, indicating the mask, has length 37, but the folded spectrum has length 19. Dadi wants to mask counts from invariable sites. For an unfolded spectrum,...
fs_ery = dadi.Spectrum.from_file('dadiExercises/ERY.FOLDED.sfs.dadi_format')
%pdoc dadi.Spectrum.from_file
fs_ery
ns = fs_ery.sample_sizes
ns
fs_ery.pop_ids = ['ery'] # must be an array, otherwise leads to error later on
# the number of segregating sites in the spectrum
fs_ery.sum()
According to the number of segregating sites, this spectrum should have good power to distinguish between alternative demographic models (see Adams2004). However, the noise in the data is extreme, as can be seen below, which might compromise this power and maybe even lead to false inferences. Plot the data
%pdoc dadi.Plotting.plot_1d_fs
pylab.rcParams['figure.figsize'] = [12.0, 10.0]
dadi.Plotting.plot_1d_fs(fs_ery, show=False)
Built-in 1D models
# show modules within dadi
dir(dadi)
dir(dadi.Demographics1D)
# show the source of the 'Demographics1D' method
%psource dadi.Demographics1D
standard neutral model
# create link to method
func = dadi.Demographics1D.snm
# make the extrapolating version of the demographic model function
func_ex = dadi.Numerics.make_extrap_log_func(func)
# setting the smallest grid size slightly larger than the largest population sample size
pts_l = [40, 50, 60]
The snm function does not take parameters to optimize, so I can directly get the expected model. It also does not take a fold argument, so I am going to calculate an unfolded expected spectrum and then fold it.
# calculate unfolded AFS under standard neutral model (up to a scaling factor theta)
model = func_ex(0, ns, pts_l)
model
dadi.Plotting.plot_1d_fs(model.fold()[:19], show=False)
What's happening in the 18th count class?
# get the source of the fold method, which is part of the Spectrum object
%psource dadi.Spectrum.fold
# get the docstring of the Spectrum object
%pdoc dadi.Spectrum
# retrieve the spectrum array from the Spectrum object
model.data
I am going to fold manually now.
# reverse spectrum and add to itself
model_fold = model.data + model.data[::-1]
model_fold
# discard all count classes >n/2
model_fold = model_fold[:19]
model_fold
When the sample size is even, the highest sample frequency class corresponds to just one unfolded class (18). This class has been added to itself, so those SNPs are currently counted twice. I need to divide this class by 2 to get the correct count for the folded class.
# divide highest sample frequency class by 2
model_fold[18] = model_fold[18]/2.0
model_fold
# create dadi Spectrum object from array, need to specify custom mask
model_folded = dadi.Spectrum(data=model_fold, mask_corners=False, mask=[1] + [0]*18)
model_folded
dadi.Plotting.plot_1d_fs(model_folded)
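The manual folding steps above generalise to any sample size; a small NumPy sketch (the function name is mine):

```python
import numpy as np

def fold_spectrum(unfolded):
    """Fold an unfolded site frequency spectrum of length n+1.

    Class i of the folded spectrum collects counts from unfolded
    classes i and n-i; when n is even, the middle class would be
    added to itself, so it is halved.
    """
    u = np.asarray(unfolded, dtype=float)
    n = len(u) - 1                 # number of sampled alleles
    folded = u + u[::-1]
    folded = folded[: n // 2 + 1]
    if n % 2 == 0:
        folded[-1] /= 2.0          # middle class was counted twice
    return folded

print(fold_spectrum([0, 3, 2, 1, 0]))   # [0. 4. 2.]
```

For the spectrum above (length 37, so n = 36), this reproduces `(model.data + model.data[::-1])[:19]` with class 18 halved.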
The folded expected spectrum is correct. Also, see figure 4.5 in Wakeley2009. How to fold an unfolded spectrum
# fold the unfolded model
model_folded = model.fold()
#model_folded = model_folded[:(ns[0]+1)]
model_folded.pop_ids = ['ery'] # be sure to give an array, not a scalar string
model_folded
ll_model_folded = dadi.Inference.ll_multinom(model_folded, fs_ery)
print 'The log composite likelihood of the observed ery spectru...
$\theta$ and implied $N_{ref}$
theta = dadi.Inference.optimal_sfs_scaling(model_folded, fs_ery)
print 'The optimal value of theta is {0:.3f}.'.format(theta)
This theta estimate is a little bit higher than what I estimated with curve fitting in Fist_Steps_with_dadi.ipynb, which was 10198.849. What effective ancestral population size would that imply? According to section 4.4 in the dadi manual: $$ \theta = 4 N_{ref} \mu_{L} \qquad \text{L: sequence length} $$ Let's assume t...
mu = 3e-9
L = fs_ery.data.sum() # this sums over all entries in the spectrum, including masked ones, i.e. also contains invariable sites
print "The total sequence length is " + str(L)
N_ref = theta/L/mu/4
print "The effective ancestral population size (in number of diploid individuals) implied by this theta is: {0}."....
This effective population size is consistent with those reported in Lynch2016 for other insect species. Begin Digression:
x = pylab.arange(0, 100)
y = 0.5**(x)
pylab.plot(x, y)
x[:10] * y[:10]
sum(x * y)
End Digression
model_folded * theta
pylab.semilogy(model_folded * theta, "bo-", label='SNM')
pylab.plot(fs_ery, "ro-", label='ery')
pylab.legend()
%psource dadi.Plotting.plot_1d_comp_Poisson
# compare model prediction and data visually with dadi function
dadi.Plotting.plot_1d_comp_multinom(model_folded[:19], fs_ery[:19], residual...
The lower plot is for the scaled Poisson residuals. $$ residuals = (model - data)/\sqrt{model} $$ The model is the expected counts in each frequency class. If these counts are Poisson distributed, then their variance is equal to their expectation. The differences between model and data are therefore scaled by the expe...
fs_ery
# make copy of spectrum array
data_abc = fs_ery.data.copy()
# resize the array to the unfolded length
data_abc.resize((37,))
data_abc
fs_ery_ext = dadi.Spectrum(data_abc)
fs_ery_ext
fs_ery_ext.fold()
fs_ery_ext = fs_ery_ext.fold()
fs_ery_ext.pop_ids = ['ery']
fs_ery_ext
fs_ery_ext.sample_sizes
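The scaled Poisson residuals described above are straightforward to compute directly; a sketch with made-up counts:

```python
import numpy as np

model = np.array([100.0, 50.0, 25.0])   # expected counts per frequency class
data = np.array([110.0, 40.0, 25.0])    # observed counts (illustrative)

# Under a Poisson model the variance equals the expectation, so the
# difference is scaled by sqrt(model), the expected standard deviation.
residuals = (model - data) / np.sqrt(model)
print(residuals)   # approximately [-1.0, 1.414, 0.0]
```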
Now, the reported sample size is correct and we have a Spectrum object that dadi can handle correctly. To fold or not to fold by ANGSD Does estimating an unfolded spectrum with ANGSD and then folding yield a sensible folded SFS when the sites are not polarised with respect to an ancestral allele but with respect to the...
% cat dadiExercises/ERY.FOLDED.sfs.dadi_format
# load the spectrum that was created from folded SAF's
fs_ery_folded_by_Angsd = dadi.Spectrum.from_file('dadiExercises/ERY.FOLDED.sfs.dadi_format')
fs_ery_folded_by_Angsd
# extract unmasked entries of the SFS
m = fs_ery_folded_by_Angsd.mask
fs_ery_folded_by_Angsd[m == ...
Load unfolded SFS
% ll ../ANGSD/SFS/ERY/
I have copied the unfolded SFS into the current directory.
% ll
% cat ERY.unfolded.sfs
# load unfolded spectrum
fs_ery_unfolded_by_ANGSD = dadi.Spectrum.from_file('ERY.unfolded.sfs')
fs_ery_unfolded_by_ANGSD
# fold unfolded spectrum
fs_ery_unfolded_by_Angsd_folded = fs_ery_unfolded_by_ANGSD.fold()
fs_ery_unfolded_by_Angsd_folded
# plot the two spectra
pylab.rcParams['fi...
The sizes of the residuals (scaled by the Poisson standard deviations) indicate that the two versions of the folded SFS of ery are significantly different. Now, what does the parallelus data say?
% ll dadiExercises/
% cat dadiExercises/PAR.FOLDED.sfs.dadi_format
# load the spectrum folded by ANGSD
fs_par_folded_by_Angsd = dadi.Spectrum.from_file('dadiExercises/PAR.FOLDED.sfs.dadi_format')
fs_par_folded_by_Angsd
% cat PAR.unfolded.sfs
# load spectrum that has been created from unfolded SAF's
fs_par_unfolde...
The unfolded spectrum folded by dadi seems to be a bit better behaved than the one folded by ANGSD. I really wonder whether folding in ANGSD is needed. The folded 2D spectrum from ANGSD is a 19 x 19 matrix. This is not a format that dadi can understand.
%psource dadi.Spectrum.from_data_dict
See this thread on the dadi forum. Exponential growth model
# show the source of the 'Demographics1D' method
%psource dadi.Demographics1D.growth
# create link to function that specifies a simple growth or decline model
func = dadi.Demographics1D.growth
# create extrapolating version of the function
func_ex = dadi.Numerics.make_extrap_log_func(func)
# set lower and upper b...
Parallelised $\partial$a$\partial$i

I need to run the simulation with different starting values to check convergence. I would like to do these runs in parallel. I have 12 cores available on huluvu.
from ipyparallel import Client

cl = Client()
cl.ids
I now have connections to 11 engines. I started the engines with ipcluster start -n 11 & in the terminal.
# create load balanced view of the engines
lbview = cl.load_balanced_view()
lbview.block
# create direct view of all engines
dview = cl[:]
import variables to namespace of engines
# set starting value for all engines
dview['p0'] = [1, 1]
dview['p0']
# set lower and upper bounds to nu and T for all engines
dview['upper_bound'] = [100, 3]
dview['lower_bound'] = [1e-2, 0]
dview['fs_ery'] = fs_ery
cl[0]['fs_ery']
dview['func_ex'] = func_ex
dview['pts_l'] = pts_l
import dadi on all engines
with dview.sync_imports():
    import sys

dview.execute('sys.path.insert(0, \'/home/claudius/Downloads/dadi\')')
cl[0]['sys.path']

with dview.sync_imports():
    import dadi
create parallel function to run dadi
@lbview.parallel(block=True)
def run_dadi(x): # for the function to be called with map, it needs to have one input variable
    # perturb starting values by up to a factor of 2
    p1 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)
    # run optimisation of parameters
    popt = ...
def exp_growth(x):
    p0 = [1, 1]
    # perturb starting values by up to a factor of 2
    p0 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)
    # run optimisation of parameters
    popt = dadi.Inference.optimize_log(p0=p0, data=fs_ery_ext, model_func=func_ex, pts=pts_l, \
        ...
Unfortunately, parallelisation is not as straightforward as it should be.
popt
Except for the last iteration, the two parameter estimates seem to have converged.
ns = fs_ery_ext.sample_sizes
ns
print popt[0]
print popt[9]
What is the log likelihood of the model given these two different parameter sets?
model_one = func_ex(popt[0], ns, pts_l)
ll_model_one = dadi.Inference.ll_multinom(model_one, fs_ery_ext)
ll_model_one
model_two = func_ex(popt[9], ns, pts_l)
ll_model_two = dadi.Inference.ll_multinom(model_two, fs_ery_ext)
ll_model_two
The lower log-likelihood for the last set of parameters inferred indicates that the optimisation got trapped in a local minimum in the last run of the optimisation. What the majority of the parameter sets seem to indicate is that at about time $0.007 \times 2 N_{ref}$ generations in the past the ancestral population st...
print 'The model suggests that exponential decline in population size started {0:.0f} generations ago.'.format(popt[0][1] * 2 * N_ref)
Two epoch model
dir(dadi.Demographics1D)
%psource dadi.Demographics1D.two_epoch
This model specifies a stepwise change in population size some time ago. It assumes that the population size has stayed constant since the change.
func = dadi.Demographics1D.two_epoch
func_ex = dadi.Numerics.make_extrap_log_func(func)
upper_bound = [10, 3]
lower_bound = [1e-3, 0]
pts_l = [40, 50, 60]

def stepwise_pop_change(x):
    # set initial values
    p0 = [1, 1]
    # perturb initial parameter values randomly by up to 2 * fold
    p0 = dadi.Misc.perturb_...
This model does not converge on a set of parameter values.
nu = [i[0] for i in popt]
nu
T = [i[1] for i in popt]
T
pylab.rcParams['font.size'] = 14.0
pylab.loglog(nu, T, 'bo')
pylab.xlabel(r'$\nu$')
pylab.ylabel('T')
The two parameters seem to be correlated. With the available data, it may not be possible to distinguish between a moderate reduction in population size a long time ago (top right in the above figure) and a drastic reduction in population size a short time ago (bottom left in the above figure). Bottleneck then exponential g...
%psource dadi.Demographics1D
This model has three parameters. $\nu_B$ is the ratio of the population size (with respect to the ancestral population size $N_{ref}$) after the first stepwise change at time T in the past. The population is then assumed to undergo exponential growth/decline to a ratio of population size $\nu_F$ at present.
func = dadi.Demographics1D.bottlegrowth
func_ex = dadi.Numerics.make_extrap_log_func(func)
upper_bound = [100, 100, 3]
lower_bound = [1e-3, 1e-3, 0]
pts_l = [40, 50, 60]

def bottleneck_growth(x):
    p0 = [1, 1, 1] # corresponds to constant population size
    # perturb initial parameter values randomly by u...
There is no convergence of parameter estimates. The parameter combinations stand for vastly different demographic scenarios. Most seem to suggest a population increase (up to 100 times the ancestral population size), followed by exponential decrease to about the ancestral population size. Three epochs
func = dadi.Demographics1D.three_epoch
func_ex = dadi.Numerics.make_extrap_log_func(func)
%psource dadi.Demographics1D.three_epoch
This model tries to estimate four parameters (consistent with the four bounds and starting values below). The population is expected to undergo a stepwise population size change (bottleneck) at time TF + TB. At time TF it is expected to recover immediately to the current population size.
upper_bound = [100, 100, 3, 3]
lower_bound = [1e-3, 1e-3, 0, 0]
pts_l = [40, 50, 60]

def opt_three_epochs(x):
    p0 = [1, 1, 1, 1] # corresponds to constant population size
    # perturb initial parameter values randomly by up to 2 * fold
    p0 = dadi.Misc.perturb_params(p0, fold=1.5, \
        ...
A bit of basic pandas Let's first start by reading in the CSV file as a pandas.DataFrame().
import pandas as pd

df = pd.read_csv('data/boston_budget.csv')
df.head()
3-data-checks.ipynb
ericmjl/data-testing-tutorial
mit
To get the columns of a DataFrame object df, call df.columns. This is a list-like object that can be iterated over.
df.columns
YAML Files

Describe data in a human-friendly & computer-readable format. The environment.yml file in your downloaded repository is also a YAML file, by the way! Structure:

yaml
key1: value
key2:
- value1
- value2
- subkey1:
  - value3

Example YAML-formatted schema:

yaml
filename: boston_budget.csv
column_names:
- "Fi...
spec = """ filename: boston_budget.csv columns: - "Fiscal Year" - "Service (Cabinet)" - "Department" - "Program #" - "Program" - "Expense Type" - "ACCT #" - "Expense Category (Account)" - "Fund" - "Amount" """ import yaml metadata = yaml.load(spec) metadata
You can also take dictionaries, and return YAML-formatted text.
print(yaml.dump(metadata))
By having things YAML formatted, you preserve human-readability and computer-readability simultaneously. Providing metadata should be something already done when doing analytics; YAML-format is a strong suggestion, but YAML schema will depend on use case. Let's now switch roles, and pretend that we're on side of the "...
import pandas as pd
import seaborn as sns
sns.set_style('white')
%matplotlib inline

df = pd.read_csv('data/boston_ei-corrupt.csv')
df.head()
Demo: Visual Diagnostics We can use a package called missingno, which gives us a quick visual view of the completeness of the data. This is a good starting point for deciding whether you need to manually comb through the data or not.
# First, we check for missing data.
import missingno as msno
msno.matrix(df)
Immediately it's clear that there are a number of rows with empty values! Nothing beats a quick visual check like this one. We can get a table version of this using another package called pandas_summary.
# We can do the same using pandas-summary.
from pandas_summary import DataFrameSummary

dfs = DataFrameSummary(df)
dfs.summary()
dfs.summary() returns a Pandas DataFrame; this means we can write tests for data completeness! Exercise: Test for data completeness. Write a test named check_data_completeness(df) that takes in a DataFrame and confirms that there's no missing data from the pandas-summary output. Then, write a corresponding test_boston_...
import numpy as np

def compute_dimensions(length):
    """
    Given an integer, compute the "square-est" pair of dimensions for plotting.

    Examples:
    - length: 17 => rows: 4, cols: 5
    - length: 14 => rows: 4, cols: 4

    This is a utility function; can be tested separately.
    """
    sqrt = np.sqr...
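A minimal sketch of the check_data_completeness(df) test asked for in the exercise above (it checks the DataFrame directly rather than the pandas-summary table, which is a simplifying assumption):

```python
import numpy as np
import pandas as pd

def check_data_completeness(df):
    """Assert that no cell in the DataFrame is missing."""
    n_missing = df.isnull().sum().sum()
    assert n_missing == 0, "{} missing values found".format(n_missing)

# A complete frame passes silently...
check_data_completeness(pd.DataFrame({'a': [1, 2], 'b': [3, 4]}))

# ...while a frame containing a NaN fails.
try:
    check_data_completeness(pd.DataFrame({'a': [1.0, np.nan]}))
    raised = False
except AssertionError:
    raised = True
print(raised)  # True
```

Under pytest, the same function doubles as a reusable check inside a test_boston_* test.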
It's often a good idea to standardize numerical data (that aren't count data). The term standardize often refers to the statistical procedure of subtracting the mean and dividing by the standard deviation, yielding an empirical distribution of data centered on 0 and having standard deviation of 1. Exercise Write a test...
data_cols = [i for i in df.columns if i not in ['Year', 'Month']]
n_rows, n_cols = compute_dimensions(len(data_cols))

fig = plt.figure(figsize=(n_cols*3, n_rows*3))
from matplotlib.gridspec import GridSpec
gs = GridSpec(n_rows, n_cols)
for i, col in enumerate(data_cols):
    ax = plt.subplot(gs[i])
    empirical_cumdi...
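The standardization procedure described above (subtract the mean, divide by the standard deviation) is a one-liner; a sketch with a helper name of my own choosing:

```python
import numpy as np

def standardize(x):
    """Subtract the mean and divide by the standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

z = standardize([2.0, 4.0, 6.0, 8.0])
print(z.mean())  # ~0
print(z.std())   # ~1
```

A test of this property simply asserts that the standardized data has (approximately) mean 0 and standard deviation 1.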
Exercise Did we just copy/paste the function?! It's time to stop doing this. Let's refactor the code into a function that can be called. Categorical Data For categorical-type data, we can plot the empirical distribution as well. (This example uses the smartphone_sanitization.csv dataset.)
from collections import Counter

def empirical_catdist(data, ax, title=None):
    d = Counter(data)
    print(d)
    x = range(len(d.keys()))
    labels = list(d.keys())
    y = list(d.values())
    ax.bar(x, y)
    ax.set_xticks(x)
    ax.set_xticklabels(labels)

smartphone_df = pd.read_csv('data/smartphone_sanitizati...
Statistical Checks Report on deviations from normality. Normality?! The Gaussian (Normal) distribution is commonly assumed in downstream statistical procedures, e.g. outlier detection. We can test for normality by using a K-S test. K-S test From Wikipedia: In statistics, the Kolmogorov–Smirnov test (K–S test or KS...
from scipy.stats import ks_2samp
import numpy.random as npr

# Simulate a normal distribution with 10000 draws.
normal_rvs = npr.normal(size=10000)
result = ks_2samp(normal_rvs, df['labor_force_part_rate'].dropna())
result.pvalue < 0.05

fig = plt.figure()
ax = fig.add_subplot(111)
empirical_cumdist(normal_rvs, ax=ax)
...
Exercise Re-create the panel of cumulative distribution plots, this time adding on the Normal distribution, and annotating the p-value of the K-S test in the title.
data_cols = [i for i in df.columns if i not in ['Year', 'Month']]
n_rows, n_cols = compute_dimensions(len(data_cols))

fig = plt.figure(figsize=(n_cols*3, n_rows*3))
from matplotlib.gridspec import GridSpec
gs = GridSpec(n_rows, n_cols)
for i, col in enumerate(data_cols):
    ax = plt.subplot(gs[i])
    test = ks_2samp...
Download or use the cached file oecd-canada.json. Caching the file on disk makes it possible to work offline and speeds up exploration of the data.
url = 'http://json-stat.org/samples/oecd-canada.json'
file_name = "oecd-canada.json"
file_path = os.path.abspath(os.path.join("..", "tests", "fixtures", "www.json-stat.org", file_name))
if os.path.exists(file_path):
    print("using already downloaded file {}".format(file_path))
else:
    print("download file and stor...
examples-notebooks/oecd-canada-jsonstat_v1.ipynb
26fe/jsonstat.py
lgpl-3.0
Select the dataset named oecd. The oecd dataset has three dimensions (concept, area, year) and contains 432 values.
oecd = collection.dataset('oecd') oecd
Show some detailed info about the dimensions
oecd.dimension('concept')
oecd.dimension('area')
oecd.dimension('year')
Accessing value in the dataset Print the value in oecd dataset for area = IT and year = 2012
oecd.data(area='IT', year='2012')
oecd.value(area='IT', year='2012')
oecd.value(concept='unemployment rate', area='Australia', year='2004') # 5.39663128
oecd.value(concept='UNR', area='AU', year='2004')
Load Iris Flower Data
# Load feature and target data
iris = datasets.load_iris()
X = iris.data
y = iris.target
machine-learning/support_vector_classifier.ipynb
tpin3694/tpin3694.github.io
mit
Create Previously Unseen Observation
# Create new observation
new_observation = [[-0.7, 1.1, -1.1, -1.7]]
Predict Class Of Observation
# Predict class of new observation
svc.predict(new_observation)
Feature Selection

We are now going to explore some feature selection procedures; the output of these will then be sent to a classifier:

- Recursive elimination with cross validation
- Simple best percentile features
- Tree based feature selection

The output from this is then sent to the following classifiers <br...
from sklearn.svm import SVC
from sklearn.cross_validation import StratifiedKFold
from sklearn.feature_selection import RFECV
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import ExtraTreesClassifier

# Make the Labels vector
clabels1 = [1] * 946 + [0] * 223
# Concatenate and Scale
combine...
Code/Assignment-11/AdvancedFeatureSelection.ipynb
Upward-Spiral-Science/spect-team
apache-2.0
Classifiers
# Leave one Out cross validation
def leave_one_out(classifier, values, labels):
    leave_one_out_validator = LeaveOneOut(len(values))
    classifier_metrics = cross_validation.cross_val_score(classifier, values, labels, cv=leave_one_out_validator)
    accuracy = classifier_metrics.mean()
    deviation = classifier_met...
Instantiate Persistable: Each persistable object is instantiated with parameters that should uniquely (or nearly uniquely) define the payload.
params = {
    "hello": "world",
    "another_dict": {
        "test": [1, 2, 3]
    },
    "a": 1,
    "b": 4
}
p = Persistable(
    payload_name="first_payload",
    params=params,
    workingdatapath=LOCALDATAPATH / "knowledgeshare_20170929" # object will live in this local disk location
)
examples/Persistable.ipynb
DataReply/persistable
gpl-3.0
Define Payload: Payloads are defined by overriding the _generate_payload function. Simply override _generate_payload to give the Persistable object generate functionality. Note that generate here means to create the payload. The term is not meant to indicate that a pyth...
# ML Example:
"""
def _generate_payload(self):
    X = pd.read_csv(self.params['datafile'])
    model = XGboost(X)
    model.fit()
    self.payload['model'] = model
"""

# Silly Example:
def _generate_payload(self):
    self.payload['sum'] = self.params['a'] + self.params['b']
    self.payload['msg'] = self.params['hel...
Now we will monkeypatch the payload generator to override its counterpart in Persistable object (only necessary because we've defined the generator outside of an IDE).
def bind(instance, method):
    def binding_scope_fn(*args, **kwargs):
        return method(instance, *args, **kwargs)
    return binding_scope_fn

p._generate_payload = bind(p, _generate_payload)
p.generate()
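The bind helper is a generic way to attach a plain function to an instance; a self-contained sketch with a toy class of my own:

```python
def bind(instance, method):
    """Return a closure that calls `method` with `instance` as self."""
    def binding_scope_fn(*args, **kwargs):
        return method(instance, *args, **kwargs)
    return binding_scope_fn

class Counter:
    def __init__(self):
        self.total = 0

def add(self, n):
    self.total += n
    return self.total

c = Counter()
c.add = bind(c, add)   # monkeypatch the instance
print(c.add(5))        # 5
print(c.add(2))        # 7
```

The standard library's types.MethodType(add, c) achieves the same binding without a hand-rolled closure.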
Persistable as a Super Class: The non Monkey Patching equivalent to what we did above:
class SillyPersistableExample(Persistable):
    def _generate_payload(self):
        self.payload['sum'] = self.params['a'] + self.params['b']
        self.payload['msg'] = self.params['hello']

p2 = SillyPersistableExample(payload_name="silly_example", params=params, workingdatapath=LOCALDATAPATH / "knowledgeshare...
Load:
p_test = Persistable(
    "first_payload",
    params=params,
    workingdatapath=LOCALDATAPATH/"knowledgeshare_20170929"
)
p_test.load()
p_test.payload
examples/Persistable.ipynb
DataReply/persistable
gpl-3.0
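The generate/load round trip above can be sketched generically: a base class pickles its payload dict to disk under a working directory, and a subclass fills the payload in. This is an illustrative toy, not Persistable's actual implementation; the names `TinyPersistable` and `SumExample` are invented:

```python
import pathlib
import pickle
import tempfile

class TinyPersistable:
    """Toy version of the generate/persist/load pattern (not the real library)."""
    def __init__(self, payload_name, workingdatapath):
        self.payload = {}
        self.path = pathlib.Path(workingdatapath) / (payload_name + '.pkl')

    def _generate_payload(self):
        raise NotImplementedError  # subclasses define how the payload is built

    def generate(self):
        self._generate_payload()
        with open(self.path, 'wb') as f:
            pickle.dump(self.payload, f)

    def load(self):
        with open(self.path, 'rb') as f:
            self.payload = pickle.load(f)

class SumExample(TinyPersistable):
    def _generate_payload(self):
        self.payload['sum'] = 1 + 4

with tempfile.TemporaryDirectory() as d:
    SumExample('demo', d).generate()   # build and persist
    p = SumExample('demo', d)
    p.load()                           # fresh object recovers the payload
    print(p.payload)  # {'sum': 5}
```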
Load CSV
df = pd.read_csv('in/gifts_Feb2016_2.csv') source_columns = ['donor_id', 'amount_initial', 'donation_date', 'appeal', 'fund', 'city', 'state', 'zipcode_initial', 'charitable', 'sales'] df.columns = source_columns df.info() strip_func = lambda x: x.strip() if isinstance(x, str) else x df = df.applymap(strip_func)
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
Address nan column values
df.replace({'appeal': {'0': ''}}, inplace=True) df.appeal.fillna('', inplace=True) df.fund.fillna('', inplace=True)
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
Change column types and drop unused columns
df.donation_date = pd.to_datetime(df.donation_date) df.charitable = df.charitable.astype('bool') df['zipcode'] = df.zipcode_initial.str[0:5] fill_zipcode = lambda x: '0'*(5-len(str(x))) + str(x) x1 = pd.DataFrame([[1, '8820'], [2, 8820]], columns=['a','b']) x1.b = x1.b.apply(fill_zipcode) x1 df.zipcode = df.zipcode.a...
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
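The zero-padding lambda above can also be written with pandas' built-in `str.zfill`, which left-pads strings with zeros to a fixed width. A small sketch on made-up zip codes that lost their leading zeros when read as integers:

```python
import pandas as pd

# Hypothetical sample: zip codes read as ints lose leading zeros
s = pd.Series([8820, 2139, 94103])
padded = s.astype(str).str.zfill(5)
print(list(padded))  # ['08820', '02139', '94103']
```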
Cleanup amounts
## Ensure that all amounts are dollar figures df[~df.amount_initial.str.startswith('-$') & ~df.amount_initial.str.startswith('$')] ## drop row with invalid data df.drop(df[df.donation_date == '1899-12-31'].index, axis=0, inplace=True) df['amount_cleanup'] = df.amount_initial.str.replace(',', '') df['amount_cleanup'] ...
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
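The amount cleanup pattern (strip '$' and thousands separators, then cast to float) can be sketched on a few hypothetical amount strings in the same shape as amount_initial:

```python
import pandas as pd

# Made-up examples of the raw dollar strings
amounts = pd.Series(['$1,234.56', '-$10.00', '$0.00'])
# Remove '$' and ',' with one character-class regex, then convert to float
clean = amounts.str.replace('[$,]', '', regex=True).astype(float)
print(clean.tolist())  # [1234.56, -10.0, 0.0]
```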
Outlier data
# There are some outliers in the data, quite a few of them are recent. _ = plt.scatter(df[df.amount > 5000].amount.values, df[df.amount > 5000].donation_date.values) plt.show() # Fun little thing to try out bokeh (we can hover and detect the culprits) def plot_data(df): dates = map(getdate_ym, pd.DatetimeIndex(df[...
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
Exchanged emails with Anil and confirmed the decision to drop the outlier: the anonymous donor with the $9.5 million donation.
df.drop(df[(df.state == 'YY') & (df.amount >= 45000)].index, inplace=True) print 'After dropping the anonymous donor, total amounts from the unknown state as a percentage of all amounts is: '\ , thousands_sep(100*df[(df.state == 'YY')].amount.sum()/df.amount.sum()), '%'
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
Amounts with zero values
## Some funds have zero amounts associated with them. ## They mostly look like costs - expense fees, transaction fees, administrative fees ## Let us examine if we can safely drop them from our analysis df[df.amount_initial == '$0.00'].groupby(['fund', 'appeal'])['donor_id'].count()
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
Dropping rows with zero amounts (after confirmation with SEF office)
df.drop(df[df.amount == 0].index, axis=0, inplace=True)
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
Negative amounts
## What is the total negative amount? print 'Total negative amount is: ', df[df.amount < 0].amount.sum() # Add if condition to make this re-runnable if df[df.amount < 0].amount.sum() < 0: print 'Amounts grouped by fund and appeal, sorted by most negative amounts' df[df.amount < 0]\ .groupby(['fu...
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
Dropping rows with negative amounts (after confirmation with SEF office)
df.drop(df[df.amount < 0].index, axis=0, inplace=True)
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
Investigate invalid state codes
df.info() df.state.unique() ## States imported from http://statetable.com/ states = pd.read_csv('in/state_table.csv') states.rename(columns={'abbreviation': 'state'}, inplace=True) all_states = pd.merge(states, pd.DataFrame(df.state.unique(), columns=['state']), on='state', how='right') invalid_states = all_states[p...
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
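The right merge above surfaces state codes that don't exist in the reference table: rows contributed only by the donations side come back with null reference columns. A minimal sketch with made-up data:

```python
import pandas as pd

# Tiny stand-in for the state_table.csv reference data
states = pd.DataFrame({'state': ['CA', 'NY', 'WA'],
                       'name': ['California', 'New York', 'Washington']})
# Observed codes, two of which are not real US states
observed = pd.DataFrame({'state': ['CA', 'YY', 'ON']})

# how='right' keeps every observed code; unknown ones get NaN reference columns
merged = pd.merge(states, observed, on='state', how='right')
invalid = merged[merged['name'].isnull()].state.tolist()
print(invalid)  # ['YY', 'ON']
```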
Explanation for invalid state codes:

State|Count|Action|Explanation
-----|-----|------|-----------
YY|268|None|All these rows are bogus entries (City and Zip are also YYYYs) - about 20% of the donation amount has this
ON|62|Remove|This is the state of Ontario, Canada
AP|18|Remove|This is data for Hyderabad
VI|6|Remov...
state_renames = {'Ny': 'NY', 'IO': 'IA', 'Ca' : 'CA', 'Co' : 'CO', 'CF' : 'FL', 'ja' : 'FL'} df.replace({'state': state_renames}, inplace=True)
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
Dropping data for non-US locations
non_usa_states = ['ON', 'AP', 'VI', 'PR', '56', 'HY', 'BC', 'AB', 'UK', 'KA'] print 'Total amount for locations outside USA: ', sum(df[df.state.isin(non_usa_states)].amount) #### Total amount for locations outside USA: 30710.63 df.drop(df[df.state.isin(non_usa_states)].index, axis=0, inplace=True)
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
Investigate donations with state of YY
print 'Percentage of amount for unknown (YY) state : {:.2f}'.format(100*df[df.state == 'YY'].amount.sum()/df.amount.sum()) print 'Total amount for the unknown state excluding outliers: ', df[(df.state == 'YY') & (df.amount < 45000)].amount.sum() print 'Total amount for the unknown state: ', df[(df.state == 'YY')].amou...
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
We will add these donations to the noloc_df below (which holds the donations that have empty strings for the city/state/zipcode). Investigate empty city, state and zip code: Percentage of total amount from donations with no location: 3.087. Moving all the data with no location to a different dataframe. We will investigate th...
print 'Percentage of total amount from donations with no location: ', 100*sum(df[(df.city == '') & (df.state == '') & (df.zipcode_initial == '')].amount)/sum(df.amount) noloc_df = df[(df.city == '') & (df.state == '') & (df.zipcode_initial == '')].copy() df = df[~((df.city == '') & (df.state == '') & (df.zipcode_initia...
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
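Splitting the no-location rows into a separate dataframe boils down to one boolean mask used twice: once directly to copy the matching rows out, once negated to keep the rest. A tiny sketch on invented rows:

```python
import pandas as pd

# Hypothetical data: one row with no location, one with a full location
df = pd.DataFrame({'city': ['', 'Austin'],
                   'state': ['', 'TX'],
                   'amount': [5.0, 10.0]})

mask = (df.city == '') & (df.state == '')
noloc = df[mask].copy()   # rows we set aside for separate investigation
df = df[~mask]            # rows we keep in the main dataframe
print(len(df), len(noloc))  # 1 1
```

The row counts of the two pieces always sum to the original count, which is the invariant the notebook checks after each transfer.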
Investigate City in ('YYY', 'YYYY'): These entries have invalid location information and will be added to the noloc_df dataframe.
noloc_df = noloc_df.append(df[(df.city.str.lower() == 'yyy') | (df.city.str.lower() == 'yyyy')]) df = df[~((df.city.str.lower() == 'yyy') | (df.city.str.lower() == 'yyyy'))] # Verify that we transferred all the rows over correctly. This total must match the total from above. print df.shape[0] + noloc_df.shape[0]
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
Investigate empty state but non-empty city Percentage of total amount for data with City but no state: 0.566
print 'Percentage of total amount for data with City but no state: {:.3f}'.format(100*sum(df[df.state == ''].amount)/sum(df.amount)) df[((df.state == '') & (df.city != ''))][['city','zipcode','amount']].sort_values('city', ascending=True).to_csv('out/0/City_No_State.csv')
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
By visually examining the cities for rows that don't have a state, we can see that all the cities are coming from Canada and India and some from other countries (except two entries). So we will correct these two entries and drop all the other rows as they are not relevant to the USA.
index = df[(df.donor_id == '-28K0T47RF') & (df.donation_date == '2007-11-30') & (df.city == 'Cupertino')].index df.loc[index,'state'] = 'CA' index = df[(df.donor_id == '9F4812A118') & (df.donation_date == '2012-06-30') & (df.city == 'San Juan')].index df.loc[index,'state'] = 'WA' df.loc[index,'zipcode'] = '98250' # Verifie...
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
Investigate empty city and zipcode but valid US state Percentage of total amount for data with valid US state, but no city, zipcode: 4.509 Most of this amount (1.7 of 1.8 million) is coming from about 600 donors in California. We already know that about California is a major contributor to donations. Although, we can d...
print 'Percentage of total amount for data with valid US state, but no city, zipcode: {:.3f}'.format(100*sum(df[(df.city == '') & (df.zipcode_initial == '')].amount)/sum(df.amount)) # Verify that we transferred all the rows over correctly. This total must match the total from above. print df.shape[0] + noloc_df.shape[...
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
Investigating empty city and empty state with non-empty zip code Since we have the zip code data from the US census data, we can use that to fill in the city and state
## Zip codes from ftp://ftp.census.gov/econ2013/CBP_CSV/zbp13totals.zip zipcodes = pd.read_csv('in/zbp13totals.txt', dtype={'zip': object}) zipcodes = zipcodes[['zip', 'city', 'stabbr']] zipcodes = zipcodes.rename(columns = {'zip':'zipcode', 'stabbr': 'state', 'city': 'city'}) zipcodes.city = zipcodes.city.str.title() ...
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
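Filling in city and state from a zip-code lookup is a left merge on zipcode after dropping the (empty) city/state columns from the donations side. A sketch with invented rows; the real notebook builds the lookup from the census zbp13totals.txt file:

```python
import pandas as pd

# Hypothetical slice of the zip-code lookup table
zipcodes = pd.DataFrame({'zipcode': ['98250', '94103'],
                         'city': ['Friday Harbor', 'San Francisco'],
                         'state': ['WA', 'CA']})
# A donation row with a zip code but empty city/state
donations = pd.DataFrame({'zipcode': ['98250'], 'city': [''], 'state': ['']})

# Drop the empty columns, then pull city/state in from the lookup
filled = donations.drop(columns=['city', 'state']).merge(
    zipcodes, on='zipcode', how='left')
print(filled.loc[0, 'city'], filled.loc[0, 'state'])  # Friday Harbor WA
```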
Investigate invalid zip codes
all_zipcodes = pd.merge(df, zipcodes, on='zipcode', how='left') all_zipcodes[pd.isnull(all_zipcodes.city_x)].head() ## Only a couple of rows have invalid zip codes. Let's drop them. df.drop(df[df.zipcode_initial.isin(['GU214ND','94000'])].index, axis=0, inplace=True)
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
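An alternative way to spot zip codes missing from the reference table is merge's `indicator` flag, which tags each row with the side it came from. A sketch with made-up values:

```python
import pandas as pd

# Hypothetical reference list of known zip codes
zipcodes = pd.DataFrame({'zipcode': ['98250', '94103']})
# Observed zip codes, one of which is clearly invalid
donations = pd.DataFrame({'zipcode': ['98250', 'GU214ND']})

# indicator=True adds a _merge column: 'both' or 'left_only'
merged = donations.merge(zipcodes, on='zipcode', how='left', indicator=True)
bad = merged[merged['_merge'] == 'left_only'].zipcode.tolist()
print(bad)  # ['GU214ND']
```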
Final check on all location data to confirm that we have no rows with empty state, city or location
print 'No state: count of rows: ', len(df[df.state == ''].amount),\ 'Total amount: ', sum(df[df.state == ''].amount) print 'No zipcode: count of rows: ', len(df[df.zipcode == ''].amount),\ 'Total amount: ', sum(df[df.zipcode == ''].amount) print 'No city: count of rows: ', len(df[df.city == ''].amount),\ ...
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense
All done! Let's save our dataframes for the next stage of processing
!mkdir -p out/0 df.to_pickle('out/0/donations.pkl') noloc_df.to_pickle('out/0/donations_noloc.pkl') df[df.donor_id == '_1D50SWTKX'].sort_values(by='donation_date').tail() df.columns df.shape
notebooks/0_DataCleanup-Feb2016.ipynb
smalladi78/SEF
unlicense