Columns: markdown (string, 0–37k chars), code (string, 1–33.3k chars), path (string, 8–215 chars), repo_name (string, 6–77 chars), license (15 classes)
Run the smoothing algorithm on imported data The smoothing algorithm is applied to the data stream by calling the run_algorithm method and passing the method as a parameter, along with the columns, some_vals, that should be sent. Finally, the windowDuration parameter specifies the size of the time windows on which to ...
smooth_stream = iot_stream.compute(smooth_algo, windowDuration=10)
smooth_stream.show(truncate=False)
jupyter_demo/import_and_analyse_data.ipynb
MD2Korg/CerebralCortex
bsd-2-clause
Visualize data These are two plots that show the original and smoothed data to visually check how the algorithm transformed the data.
from cerebralcortex.plotting.basic.plots import plot_timeseries
plot_timeseries(iot_stream, user_id=USER_ID)
plot_timeseries(smooth_stream, user_id=USER_ID)
jupyter_demo/import_and_analyse_data.ipynb
MD2Korg/CerebralCortex
bsd-2-clause
We load the data in a Pandas dataframe as always and specify our column names.
# read .csv from provided dataset
csv_filename = "zoo.data"
# df=pd.read_csv(csv_filename,index_col=0)
df = pd.read_csv(csv_filename, names=["Animal", "Hair", "Feathers", "Eggs", "Milk", "Airborne", "Aquatic", "Predator", "Toothed", "Backbone", "Breathes", "Venomous", ...
Classification/Zoo Animal Classification using Naive Bayes.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
We'll have a look at our dataset:
df.head()
df.tail()
df['Animal'].unique()
Classification/Zoo Animal Classification using Naive Bayes.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
This data is textual: the values are strings, not the integers or floats our classifier expects. So we'll use LabelEncoder to transform the data. Next we convert the Legs column into binarized form using the get_dummies method.
# Convert animal labels to numbers
le_animals = preprocessing.LabelEncoder()
df['animals'] = le_animals.fit_transform(df.Animal)
# Get binarized Legs columns
df['Legs'] = pd.get_dummies(df.Legs)
# types = pd.get_dummies(df.Type)
Classification/Zoo Animal Classification using Naive Bayes.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
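To make the two transforms above concrete, here is a minimal sketch on a made-up miniature frame (the animal names and leg counts are hypothetical stand-ins, not the zoo data):

```python
import pandas as pd
from sklearn import preprocessing

# Hypothetical miniature stand-in for the zoo dataframe
toy = pd.DataFrame({"Animal": ["bear", "carp", "bear"], "Legs": [4, 0, 4]})

le = preprocessing.LabelEncoder()
toy["animals"] = le.fit_transform(toy["Animal"])  # string labels -> integer codes (sorted alphabetically)

legs_dummies = pd.get_dummies(toy["Legs"])  # one 0/1 indicator column per distinct leg count
print(list(toy["animals"]), list(legs_dummies.columns))  # [0, 1, 0] [0, 4]
```

LabelEncoder assigns codes in sorted class order, while get_dummies produces a full indicator frame, one column per distinct value.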
Our data now looks like:
df.head()
df['Type'].unique()
Classification/Zoo Animal Classification using Naive Bayes.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Our class values range from 1 to 7, denoting specific animal types. We specify our features and target variable.
features = list(df.columns[1:])
features
features.remove('Type')
X = df[features]
y = df['Type']
X.head()
Classification/Zoo Animal Classification using Naive Bayes.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
As usual, we split our dataset into 60% training and 40% testing.
# split dataset to 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)
X_train.shape, y_train.shape
Classification/Zoo Animal Classification using Naive Bayes.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
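The sklearn.cross_validation module used above was removed in scikit-learn 0.20; the same function now lives in sklearn.model_selection. A sketch of the equivalent split, using toy arrays as stand-ins for the notebook's X and y:

```python
import numpy as np
from sklearn.model_selection import train_test_split  # modern home of train_test_split

# Toy stand-ins for the notebook's X and y: 10 samples, 2 features
X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# Same 60/40 split, with a fixed seed for reproducibility
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
print(X_train.shape, X_test.shape)  # (6, 2) (4, 2)
```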
Finding feature importances with forests of trees This example shows the use of forests of trees to evaluate the importance of features on an artificial classification task. The red bars are the feature importances of the forest, along with their inter-tree variability.
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier

# Build a classification task using 3 informative features
# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250, random_...
Classification/Zoo Animal Classification using Naive Bayes.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
<hr> Naive Bayes Naive Bayes is a simple yet powerful algorithm. The "naive" in the name comes from the fact that Naive Bayes takes a few shortcuts, which we will look at soon, to compute the probabilities for classification. It is flexible enough to be used on different types of datasets easily and doe...
t4 = time()
print("NaiveBayes")
nb = BernoulliNB()
clf_nb = nb.fit(X_train, y_train)
print("Accuracy: ", clf_nb.score(X_test, y_test))
t5 = time()
print("time elapsed: ", t5 - t4)
Classification/Zoo Animal Classification using Naive Bayes.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Thus the accuracy is found to be 87%, which is quite good for such limited data. However, this accuracy might not be a perfect measure of our model's efficiency, so we use cross-validation: Cross-validation for Naive Bayes
tt4 = time()
print("cross result========")
scores = cross_validation.cross_val_score(nb, X, y, cv=3)
print(scores)
print(scores.mean())
tt5 = time()
print("time elapsed: ", tt5 - tt4)
Classification/Zoo Animal Classification using Naive Bayes.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Next we import and configure Pandas, a Python library to work with data.
import pandas as pd
from pandas.io.json import json_normalize
pd.set_option('max_colwidth', 1000)
pd.set_option("display.max_rows", 100)
pd.set_option("display.max_columns", 100)
EXERCISE TranSMART REST API V2 (2017).ipynb
thehyve/transmart-api-training
gpl-3.0
Part 1: Plotting blood pressure over time As a first REST API call it would be nice to see what studies are available in this tranSMART server. You will see a list of all studies, their name (studyId) and what dimensions are available for this study. Remember that tranSMART previously only supported the dimensions pa...
studies = api.get_studies()
json_normalize(studies['studies'])
EXERCISE TranSMART REST API V2 (2017).ipynb
thehyve/transmart-api-training
gpl-3.0
We choose the TRAINING study and ask for all patients in this study. You will get a list with their patient details and patient identifier.
study_id = 'TRAINING'
patients = api.get_patients(study = study_id)
json_normalize(patients['patients'])
EXERCISE TranSMART REST API V2 (2017).ipynb
thehyve/transmart-api-training
gpl-3.0
Next we ask for the full list of observations for this study. This list will include one row per observation, with information from all their dimensions. The columns will have headers like <dimension name>.<field name> and numericValue or stringValue for the actual observation value.
obs = api.get_observations(study = study_id)
obsDataframe = json_normalize(api.format_observations(obs))
obsDataframe

# DO STUFF WITH THE TRAINING STUDY HERE
EXERCISE TranSMART REST API V2 (2017).ipynb
thehyve/transmart-api-training
gpl-3.0
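The dotted <dimension name>.<field name> headers come from flattening nested JSON into columns. A minimal sketch with made-up observation records (the field names here are hypothetical, not the tranSMART schema); note that pd.json_normalize is the current spelling of the json_normalize import used above:

```python
import pandas as pd

# Made-up records mimicking nested observation JSON (field names are hypothetical)
records = [
    {"numericValue": 120.0, "patient": {"id": "P1", "sex": "male"}},
    {"numericValue": 80.0, "patient": {"id": "P2", "sex": "female"}},
]

df = pd.json_normalize(records)  # nested keys flatten to dotted column names
print(list(df.columns))
```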
Part 2: Combining Glowing Bear and the Python client For the second part we will work with the Glowing Bear user interface that was developed at The Hyve, funded by IMI Translocation and BBMRI. An API is great to extract exactly the data you need and analyze that. But it is harder to get a nice overview of all data tha...
patient_set_id = 28733
EXERCISE TranSMART REST API V2 (2017).ipynb
thehyve/transmart-api-training
gpl-3.0
Now let's return all patients for the patient set we made!
patients = api.get_patients(patientSet = patient_set_id)
json_normalize(patients['patients'])
EXERCISE TranSMART REST API V2 (2017).ipynb
thehyve/transmart-api-training
gpl-3.0
And do the same for all observations for this patient set.
obs = api.get_observations(study = study_id, patientSet = patient_set_id)
obsDataframe = json_normalize(api.format_observations(obs))
obsDataframe
EXERCISE TranSMART REST API V2 (2017).ipynb
thehyve/transmart-api-training
gpl-3.0
Download data
%%bash
wget http://data.statmt.org/wmt17/translation-task/training-parallel-nc-v12.tgz
wget http://data.statmt.org/wmt17/translation-task/dev.tgz

!ls *.tgz
blogs/t2t/translate_ende.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Set up problem The Problem in tensor2tensor is where you specify parameters like the size of your vocabulary and where to get the training data from.
%%bash
rm -rf t2t
mkdir -p t2t/ende

!pwd

%%writefile t2t/ende/problem.py
import tensorflow as tf
from tensor2tensor.data_generators import generator_utils
from tensor2tensor.data_generators import problem
from tensor2tensor.data_generators import text_encoder
from tensor2tensor.data_generators import translate
from ten...
blogs/t2t/translate_ende.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Generate training data Our problem (translation) requires the creation of text sequences from the training dataset. This is done using t2t-datagen and the Problem defined in the previous section.
%%bash
DATA_DIR=./t2t_data
TMP_DIR=$DATA_DIR/tmp
rm -rf $DATA_DIR $TMP_DIR
mkdir -p $DATA_DIR $TMP_DIR
# Generate data
t2t-datagen \
  --t2t_usr_dir=./t2t/ende \
  --problem=$PROBLEM \
  --data_dir=$DATA_DIR \
  --tmp_dir=$TMP_DIR
blogs/t2t/translate_ende.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Provide Cloud ML Engine access to data Copy the data to Google Cloud Storage, and then provide access to the data
%%bash
DATA_DIR=./t2t_data
gsutil -m rm -r gs://${BUCKET}/translate_ende/
gsutil -m cp ${DATA_DIR}/${PROBLEM}* ${DATA_DIR}/vocab* gs://${BUCKET}/translate_ende/data

%%bash
PROJECT_ID=$PROJECT
AUTH_TOKEN=$(gcloud auth print-access-token)
SVC_ACCOUNT=$(curl -X GET -H "Content-Type: application/json" \
  -H "Authorizatio...
blogs/t2t/translate_ende.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Train model as a Python package To submit the training job to Cloud Machine Learning Engine, we need a Python module with a main(). We'll use the t2t-trainer that is distributed with tensor2tensor as that main module.
%%bash
wget https://raw.githubusercontent.com/tensorflow/tensor2tensor/master/tensor2tensor/bin/t2t-trainer
mv t2t-trainer t2t/ende/t2t-trainer.py

!touch t2t/__init__.py
!find t2t
blogs/t2t/translate_ende.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's test that the Python package works. Since we are running this locally, I'll try it out on a subset of the original data.
%%bash
BASE=gs://${BUCKET}/translate_ende/data
OUTDIR=gs://${BUCKET}/translate_ende/subset
gsutil -m rm -r $OUTDIR
gsutil -m cp \
  ${BASE}/${PROBLEM}-train-0008* \
  ${BASE}/${PROBLEM}-dev-00000* \
  ${BASE}/vocab* \
  $OUTDIR

%%bash
OUTDIR=./trained_model
rm -rf $OUTDIR
export PYTHONPATH=${PYTHONPATH}:${PWD}/...
blogs/t2t/translate_ende.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Train on Cloud ML Engine Once we have a working Python package, training on a Cloud ML Engine GPU is straightforward.
%%bash
OUTDIR=gs://${BUCKET}/translate_ende/model
JOBNAME=t2t_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
  --region=$REGION \
  --staging-bucket=gs://$BUCKET \
  --scale-tier=BASIC_GPU \
  --module-name=ende.t2t-trainer \
  --pac...
blogs/t2t/translate_ende.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Instantiating a spaghetti.Network object Instantiate the network from a .shp file
ntw = spaghetti.Network(in_data=libpysal.examples.get_path("streets.shp"))
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
1. Allocating observations (snapping points) to a network: A network is composed of a single topological representation of network elements (arcs and vertices) to which point patterns may be snapped.
pp_name = "crimes"
pp_shp = libpysal.examples.get_path("%s.shp" % pp_name)
ntw.snapobservations(pp_shp, pp_name, attribute=True)
ntw.pointpatterns
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
Attributes for every point pattern dist_snapped dict keyed by point id with the value as snapped distance from observation to network arc
ntw.pointpatterns[pp_name].dist_snapped[0]
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
dist_to_vertex dict keyed by pointid with the value being a dict in the form {node: distance to vertex, node: distance to vertex}
ntw.pointpatterns[pp_name].dist_to_vertex[0]
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
npoints point observations in set
ntw.pointpatterns[pp_name].npoints
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
obs_to_arc dict keyed by arc with the value being a dict in the form {pointID:(x-coord, y-coord), pointID:(x-coord, y-coord), ... }
ntw.pointpatterns[pp_name].obs_to_arc[(161, 162)]
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
obs_to_vertex list of incident network vertices to snapped observation points
ntw.pointpatterns[pp_name].obs_to_vertex[0]
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
points geojson like representation of the point pattern. Includes properties if read with attributes=True
ntw.pointpatterns[pp_name].points[0]
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
snapped_coordinates dict keyed by pointid with the value being (x-coord, y-coord)
ntw.pointpatterns[pp_name].snapped_coordinates[0]
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
2. Counts per link Counts per link (arc or edge) are important, but should not be precomputed since there are spatial and graph representations.
def fetch_cpl(net, pp, mean=True):
    """Create a counts per link object and find mean."""
    cpl = net.count_per_link(net.pointpatterns[pp].obs_to_arc, graph=False)
    if mean:
        mean_cpl = sum(list(cpl.values())) / float(len(cpl.keys()))
        return cpl, mean_cpl
    return cpl

ntw_counts, ntw_ctmean = f...
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
3. Simulate a point pattern on the network The number of points must be supplied. The only distribution currently supported is uniform. Generally, this will not be called by the user, since the simulation will be used for Monte Carlo permutation.
npts = ntw.pointpatterns[pp_name].npoints
npts
sim_uniform = ntw.simulate_observations(npts)
sim_uniform
print(dir(sim_uniform))
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
Extract the simulated points along the network as a geopandas.GeoDataFrame
def as_gdf(pp):
    pp = {idx: Point(coords) for idx, coords in pp.items()}
    gdf = geopandas.GeoDataFrame.from_dict(
        pp, orient="index", columns=["geometry"]
    )
    gdf.index.name = "id"
    return gdf

sim_uniform_gdf = as_gdf(sim_uniform.points)
sim_uniform_gdf.head()
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
Create geopandas.GeoDataFrame objects of the vertices and arcs
vertices_df, arcs_df = spaghetti.element_as_gdf(ntw, vertices=ntw.vertex_coords, arcs=ntw.arcs)
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
Create geopandas.GeoDataFrame objects of the actual and snapped crime locations
crimes = spaghetti.element_as_gdf(ntw, pp_name=pp_name)
crimes_snapped = spaghetti.element_as_gdf(ntw, pp_name=pp_name, snapped=True)
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
Helper plotting function
def plotter():
    """Generate a spatial plot."""
    def _patch(_kws, labinfo):
        """Generate a legend patch."""
        label = "%s — %s" % tuple(labinfo)
        _kws.update({"lw": 0, "label": label, "alpha": .5})
        return matplotlib.lines.Line2D([], [], **_kws)
    def _legend(handles, anchor=(1....
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
Crimes: empirical, network-snapped, and simulated locations
plotter()
notebooks/pointpattern-attributes.ipynb
pysal/spaghetti
bsd-3-clause
Model preparation Variables Any model exported using the export_inference_graph.py tool can be loaded here simply by changing the path. By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies. Loader
def load_model(model_name):
    base_url = 'http://download.tensorflow.org/models/object_detection/'
    model_file = model_name + '.tar.gz'
    model_dir = tf.keras.utils.get_file(
        fname=model_name,
        origin=base_url + model_file,
        untar=True)
    model_dir = pathlib.Path(model_dir)/"saved_model"
    model = tf.save...
research/object_detection/colab_tutorials/object_detection_tutorial.ipynb
tombstone/models
apache-2.0
Check the model's input signature; it expects a batch of three-channel images of type uint8:
print(detection_model.signatures['serving_default'].inputs)
research/object_detection/colab_tutorials/object_detection_tutorial.ipynb
tombstone/models
apache-2.0
And returns several outputs:
detection_model.signatures['serving_default'].output_dtypes
detection_model.signatures['serving_default'].output_shapes
research/object_detection/colab_tutorials/object_detection_tutorial.ipynb
tombstone/models
apache-2.0
Add a wrapper function to call the model, and clean up the outputs:
def run_inference_for_single_image(model, image):
    image = np.asarray(image)
    # The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
    input_tensor = tf.convert_to_tensor(image)
    # The model expects a batch of images, so add an axis with `tf.newaxis`.
    input_tensor = input_tensor[tf.newaxis,...]...
research/object_detection/colab_tutorials/object_detection_tutorial.ipynb
tombstone/models
apache-2.0
Instance Segmentation
model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model(model_name)
research/object_detection/colab_tutorials/object_detection_tutorial.ipynb
tombstone/models
apache-2.0
Initialize the NN context; it will get a SparkContext with configuration optimized for BigDL performance.
sc = init_nncontext("NCF Example")
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
Data Preparation Download and read movielens 1M data
movielens_data = movielens.get_id_ratings("/tmp/movielens/")
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
Understand the data. Each record is in the format (userid, movieid, rating_score). UserIDs range between 1 and 6040. MovieIDs range between 1 and 3952. Ratings are made on a 5-star scale (whole-star ratings only). Counts of users and movies are recorded for later use.
min_user_id = np.min(movielens_data[:,0])
max_user_id = np.max(movielens_data[:,0])
min_movie_id = np.min(movielens_data[:,1])
max_movie_id = np.max(movielens_data[:,1])
rating_labels = np.unique(movielens_data[:,2])
print(movielens_data.shape)
print(min_user_id, max_user_id, min_movie_id, max_movie_id, rating_labels)
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
Transform the original data into an RDD of Samples. We use the BigDL optimizer directly to train the model; it requires data to be provided as RDD(Sample). A Sample is a BigDL data structure which can be constructed from 2 numpy arrays, feature and label respectively. The API interface is Sample.from_ndarray(feature...
def build_sample(user_id, item_id, rating):
    sample = Sample.from_ndarray(np.array([user_id, item_id]), np.array([rating]))
    return UserItemFeature(user_id, item_id, sample)

pairFeatureRdds = sc.parallelize(movielens_data)\
    .map(lambda x: build_sample(x[0], x[1], x[2]-1))
pairFeatureRdds.take(3)
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
Randomly split the data into train (80%) and validation (20%)
trainPairFeatureRdds, valPairFeatureRdds = pairFeatureRdds.randomSplit([0.8, 0.2], seed=1)
valPairFeatureRdds.cache()
train_rdd = trainPairFeatureRdds.map(lambda pair_feature: pair_feature.sample)
val_rdd = valPairFeatureRdds.map(lambda pair_feature: pair_feature.sample)
val_rdd.persist()
print(train_rdd.count())
train...
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
Build Model In Analytics Zoo, it is simple to build an NCF model by calling the NeuralCF API. You need to specify the user count, item count, and class number according to your data, then add hidden layers as needed; you can also choose to include matrix factorization in the network. The model can be fed into an Optimizer of Bi...
ncf = NeuralCF(user_count=max_user_id, item_count=max_movie_id, class_num=5, hidden_layers=[20, 10], include_mf = False)
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
Compile model Compile the model with specific optimizers, loss, and metrics for evaluation. The optimizer tries to minimize the loss of the neural net with respect to its weights/biases over the training set. To create an Optimizer in BigDL, you need to specify at least these arguments: model (a neural network model), criter...
ncf.compile(optimizer= "adam", loss= "sparse_categorical_crossentropy", metrics=['accuracy'])
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
Collect logs You can leverage tensorboard to see the summaries.
tmp_log_dir = create_tmp_path()
ncf.set_tensorboard(tmp_log_dir, "training_ncf")
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
Train the model
ncf.fit(train_rdd, nb_epoch= 10, batch_size= 8000, validation_data=val_rdd)
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
Prediction Zoo models make inferences on the given data using the model.predict(val_rdd) API; an RDD of results is returned. predict_class returns the predicted labels.
results = ncf.predict(val_rdd)
results.take(5)
results_class = ncf.predict_class(val_rdd)
results_class.take(5)
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
In Analytics Zoo, Recommender provides 3 unique APIs to predict user-item pairs and to make recommendations for users or items given candidates. Predict for user-item pairs
userItemPairPrediction = ncf.predict_user_item_pair(valPairFeatureRdds)
for result in userItemPairPrediction.take(5):
    print(result)
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
Recommend 3 items for each user given candidates in the feature RDDs
userRecs = ncf.recommend_for_user(valPairFeatureRdds, 3)
for result in userRecs.take(5):
    print(result)
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
Recommend 3 users for each item given candidates in the feature RDDs
itemRecs = ncf.recommend_for_item(valPairFeatureRdds, 3)
for result in itemRecs.take(5):
    print(result)
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
Evaluation Plot the train and validation loss curves
# retrieve train and validation summary objects and read the loss data into ndarrays
train_loss = np.array(ncf.get_train_summary("Loss"))
val_loss = np.array(ncf.get_validation_summary("Loss"))

# plot the train and validation curves
# each event data is a tuple in form of (iteration_count, value, timestamp)
plt.figure(...
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
plot accuracy
plt.figure(figsize=(12, 6))
top1 = np.array(ncf.get_validation_summary("Top1Accuracy"))
plt.plot(top1[:,0], top1[:,1], label='top1')
plt.title("top1 accuracy")
plt.grid(True)
plt.legend();
apps/recommendation-ncf/ncf-explicit-feedback.ipynb
intel-analytics/analytics-zoo
apache-2.0
We begin by defining a model, identical to the Fitzhugh Nagumo toy model implemented in pints. The corresponding toy model in pints has its evaluateS1() method defined, so we can compare the results using automatic differentiation.
class AutoGradFitzhughNagumoModel(pints.ForwardModel):
    def simulate(self, parameters, times):
        y0 = np.array([-1, 1], dtype=float)
        def rhs(y, t, p):
            V, R = y
            a, b, c = p
            dV_dt = (V - V**3 / 3 + R) * c
            dR_dt = (V - a + b * R) / -c
            re...
examples/interfaces/automatic-differentiation-using-autograd.ipynb
martinjrobins/hobo
bsd-3-clause
Now we wrap an existing pints likelihood class, and use the autograd.grad function to calculate the gradient of the given log-likelihood
class AutoGradLogLikelihood(pints.ProblemLogLikelihood):
    def __init__(self, likelihood):
        self.likelihood = likelihood
        f = lambda x: self.likelihood(x)
        self.likelihood_grad = grad(f)
    def __call__(self, x):
        return self.likelihood(x)
    def evaluateS1(self, x):
        values = sel...
examples/interfaces/automatic-differentiation-using-autograd.ipynb
martinjrobins/hobo
bsd-3-clause
Now create some toy data and ensure that the new model gives the same output as the toy model in pints
# Create some toy data
real_parameters = np.array(pints_model.suggested_parameters(), dtype='float64')
times = pints_model.suggested_times()
pints_values = pints_model.simulate(real_parameters, times)
autograd_values = autograd_model.simulate(real_parameters, times)

plt.figure()
plt.plot(times, autograd_values)
plt.pl...
examples/interfaces/automatic-differentiation-using-autograd.ipynb
martinjrobins/hobo
bsd-3-clause
Add some noise to the values, and then create log-likelihoods using both the new model, and the pints model
noise = 0.1
values = pints_values + np.random.normal(0, noise, pints_values.shape)

# Create an object with links to the model and time series
autograd_problem = pints.MultiOutputProblem(autograd_model, times, values)
pints_problem = pints.MultiOutputProblem(pints_model, times, values)

# Create a log-likelihood functi...
examples/interfaces/automatic-differentiation-using-autograd.ipynb
martinjrobins/hobo
bsd-3-clause
We can calculate the gradients of both likelihood functions at the given parameters to make sure that they are the same
autograd_likelihood.evaluateS1(real_parameters)
pints_log_likelihood.evaluateS1(real_parameters)
examples/interfaces/automatic-differentiation-using-autograd.ipynb
martinjrobins/hobo
bsd-3-clause
Now we'll time both functions. You can see that the function using autograd is significantly slower than the built-in evaluateS1 function for the PINTS model, which calculates the sensitivities analytically.
statement = 'autograd_likelihood.evaluateS1(real_parameters)'
setup = 'from __main__ import autograd_likelihood, real_parameters'
time_taken = min(repeat(stmt=statement, setup=setup, number=1, repeat=5))
'Elapsed time: {:.0f} ms'.format(1000. * time_taken)

statement = 'pints_log_likelihood.evaluateS1(real_parameters...
examples/interfaces/automatic-differentiation-using-autograd.ipynb
martinjrobins/hobo
bsd-3-clause
Pandas is a Python package that provides fast, flexible, and expressive data structures designed for working with labeled data. These structures can be thought of as NumPy arrays in which the rows and columns are labeled, or, similarly, as a spreadsheet inside Python. Just as ...
conteo = pd.Series([632, 1638, 569, 115])
conteo
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
A Series always has two columns. The first column holds the indices and the second the data. In the previous example we passed a list of data and omitted the index, so Pandas automatically created an index using a sequence of integers starting at 0 (as is usual in Python). The idea that a Series ...
conteo.values
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
It is also possible to get the index.
conteo.index
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
Indexing It is important to note that NumPy arrays also have indices; they are just implicit, and they are always integers starting from 0. Pandas Index objects, in contrast, are explicit and are not limited to integers. We can assign labels that make sense for our data. If our data repre...
bacteria = pd.Series([632, 1638, 569, 115],
                     index=['Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes'])
bacteria
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
The Index now contains strings instead of integers. These label-value pairs may remind us of a dictionary. If the analogy holds, we should be able to use the labels to refer directly to the values contained in the series.
bacteria['Actinobacteria']
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
We can even create a Series from a dictionary
bacteria_dict = {'Firmicutes': 632, 'Proteobacteria': 1638, 'Actinobacteria': 569, 'Bacteroidetes': 115}
pd.Series(bacteria_dict)
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
Or, somewhat more briefly, we can use attributes. An attribute is the name given to a piece of data or a property in object-oriented programming. In the following example bacteria is the object and Actinobacteria the attribute.
bacteria.Actinobacteria
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
Having explicit indices, such as the bacteria names, does not remove the ability to access the data through implicit integer indices, as is common with lists and arrays.
bacteria[2]
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
If you had a Series whose indices were integers, what would happen when indexing it with una_series[1]? Would we get the second element of the series, or the element whose explicit index is 1? And what if the number 1 were not contained in the series' index at all? Later on we will see what solution Pandas offers to avoid confusion...
bacteria[bacteria > 1000]
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
Or we might need to find the subset of bacteria whose names end in "bacteria":
bacteria[[nombre.endswith('bacteria') for nombre in bacteria.index]]
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
Slicing It is possible to slice even when the indices are strings.
bacteria[:'Actinobacteria']
bacteria[:3]
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
When slicing with the implicit index, the last index is NOT included, which is what we expect from lists, tuples, arrays, etc. When slicing with the explicit index, however, the last index IS included! It is also possible to index using a list.
bacteria[['Actinobacteria', 'Proteobacteria']]
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
Indexers: loc and iloc The coexistence of implicit and explicit indices can be a great source of confusion when using Pandas. Let's see what happens when we have a series whose explicit indices are integers.
datos = pd.Series(['x', 'y', 'z'], index=range(10, 13))
datos
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
Pandas will use the explicit index when indexing
datos[10]
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
But the implicit one when slicing!
datos[0:2]
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
Pandas provides two indexing methods. The first, loc, ALWAYS uses the explicit index for indexing/slicing operations.
datos.loc[10]
datos.loc[10:11]
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
The other method is iloc, which uses the implicit index.
datos.iloc[0]
datos.iloc[0:2]
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
Following the Zen of Python, which says "explicit is better than implicit", the general recommendation is to use loc and iloc. This makes the code's intent explicit, which helps it read more fluidly and reduces the chance of errors. Universal functions One of the most valuable features of ...
np.log(bacteria)
03_Manipulación_de_datos_y_Pandas.ipynb
PrACiDa/intro_ciencia_de_datos
gpl-3.0
Binary operations are performed over aligned indices. This makes it easy to carry out operations that combine data from different sources, something that may not be so simple with NumPy. To illustrate this behavior we will create a new Series from a dictionary, but spec...
bacteria2 = pd.Series(bacteria_dict,
                      index=['Cyanobacteria', 'Firmicutes', 'Actinobacteria'])
bacteria2
Let's note two details in this example. The elements appear in the same order as specified by the index argument; compare this with the earlier case where we created a series from a dictionary without specifying the index. The other detail is that we have passed a label for a...
bacteria + bacteria2
The result is a series whose index is the union of the original indexes. Pandas adds only the values whose indexes match in both Series! It also propagates the missing values (NaN). What happens if we try to add two NumPy arrays of different lengths? <br> <br> <br> <br> ...
bacteria.add(bacteria2, fill_value=0)
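To answer the question above concretely: NumPy does not align anything, so adding arrays of incompatible lengths fails outright instead of producing NaNs. A minimal sketch with made-up values:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([10, 20, 30])

# Unlike Pandas, NumPy has no indexes to align: adding arrays of
# incompatible lengths raises a broadcasting ValueError
try:
    a + b
except ValueError as e:
    print('ValueError:', e)
```

This contrast is exactly why index alignment in Pandas is so convenient when combining data from different sources.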
An alternative would be to perform the operation first and then replace the NaNs with any other value.
(bacteria + bacteria2).fillna(0)
DataFrame When analyzing data it is common to work with multivariate data. For those cases it is useful to have something like a Series where each index corresponds to more than one column of values. That object is called a DataFrame. A DataFrame is a tabular data structure that can be thought of as a collec...
datos = pd.DataFrame({'conteo': [632, 1638, 569, 115, 433, 1130, 754, 555],
                      'phylum': ['Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes'] * 2,
                      'paciente': np.repeat([1, 2], 4)})
datos
The first thing we notice is that Jupyter renders the DataFrame nicely, showing it as a table with some aesthetic improvements. We can also see that, unlike a NumPy array, a DataFrame can hold data of different types (integers and strings in this case). We also see that the columns are sorted alpha...
datos[['paciente', 'phylum', 'conteo']]
DataFrames have two Index objects: one corresponding to the rows, just as we saw with Series, and one corresponding to the columns
datos.columns
Column values can be accessed in a way similar to how we would do it with a series or a dictionary.
datos['conteo']
We can also do it via attribute access.
datos.conteo
This syntax does not work in every case. It will fail, for instance, if the column name contains spaces or conflicts with an existing DataFrame method; it would not be unusual to name a column all, cov, index, or mean. One possib...
datos.loc[3]
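The name-conflict pitfall mentioned above can be seen directly. A small sketch with a hypothetical frame whose column names collide with a method and contain a space:

```python
import pandas as pd

# Hypothetical frame: one column shadows a DataFrame method,
# another contains a space
df = pd.DataFrame({'mean': [1, 2, 3], 'phylum count': [4, 5, 6]})

# Attribute access resolves to the .mean() method, not the column...
print(callable(df.mean))          # True: this is the method

# ...so bracket indexing is the safe, general way to reach columns
print(df['mean'].tolist())        # [1, 2, 3]
print(df['phylum count'].sum())   # 15 — spaces also force bracket syntax
```

This is why bracket indexing is usually preferred in non-interactive code, with attribute access reserved for quick exploration.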
What happens if we try to access a row using the syntax datos[3]? <br> <br> <br> <br> The Series obtained when indexing a DataFrame is a view of the DataFrame and NOT a copy. So we must be careful when manipulating it, which is why Pandas gives us a warning.
cont = datos['conteo']
cont
cont[5] = 0
cont
datos
If we want to modify a Series that comes from a DataFrame, it may be a good idea to make a copy first.
cont = datos['conteo'].copy()
cont[5] = 1000
datos
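Another way to sidestep the view-vs-copy ambiguity entirely is to assign through .loc on the DataFrame itself. A minimal sketch with a hypothetical miniature version of `datos`:

```python
import pandas as pd

# Hypothetical miniature version of `datos`
datos = pd.DataFrame({'conteo': [632, 1638, 569, 115],
                      'phylum': ['Firmicutes', 'Proteobacteria',
                                 'Actinobacteria', 'Bacteroidetes']})

# Assigning through .loc modifies the DataFrame explicitly,
# with no intermediate Series to worry about
datos.loc[2, 'conteo'] = 0
print(datos.loc[2, 'conteo'])  # 0
```

This keeps the intent explicit and avoids the warning Pandas raises when writing through a view.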
Columns can be added to a DataFrame via assignment.
datos['año'] = 2013
datos
We can add a Series as a new column of a DataFrame; the result will depend on the indexes of both objects.
tratamiento = pd.Series([0] * 4 + [1] * 4)
tratamiento
datos['tratamiento'] = tratamiento
datos
What happens if we try to add a new column from a list whose length does not match that of the DataFrame? And what if it is a Series instead of a list? <br> <br> <br> <br>
datos['mes'] = ['enero'] * len(datos)
datos
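The two questions above have different answers, which is worth seeing side by side. A sketch with a hypothetical four-row frame: a plain list must match the frame's length exactly, while a Series is aligned by index with missing positions filled with NaN.

```python
import pandas as pd

# Hypothetical four-row frame
datos = pd.DataFrame({'conteo': [632, 1638, 569, 115]})

# A plain list of the wrong length is rejected outright...
try:
    datos['mes'] = ['enero', 'febrero']   # only 2 values for 4 rows
except ValueError as e:
    print('ValueError:', e)

# ...but a Series is aligned by index, padding the rest with NaN
datos['mes'] = pd.Series(['enero', 'febrero'])  # only indexes 0 and 1
print(datos['mes'].isna().tolist())  # [False, False, True, True]
```

So alignment, not length, governs Series assignment, consistent with the arithmetic behavior seen earlier.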