markdown stringlengths 0 37k | code stringlengths 1 33.3k | path stringlengths 8 215 | repo_name stringlengths 6 77 | license stringclasses 15
values |
|---|---|---|---|---|
Finally, let's code the models! The tf.keras Sequential API accepts a list of layers, so we can create a dictionary of layer lists based on the different model types we want to use. The file below has two functions: get_layers and create_and_train_model. We will build the structure of our model in get_layers. Las... | %%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D... | notebooks/image_models/solutions/2_mnist_models.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
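The dictionary-of-layers pattern described above can be sketched roughly like this (the layer configurations here are illustrative assumptions, not the notebook's actual get_layers; input shapes are left implicit and would be inferred or passed in the real code):

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, Dense, Flatten, MaxPooling2D

def get_layers(model_type):
    """Return a list of layers for the requested model type (illustrative)."""
    model_layers = {
        "linear": [
            Flatten(),
            Dense(10, activation="softmax"),
        ],
        "dnn": [
            Flatten(),
            Dense(128, activation="relu"),
            Dense(10, activation="softmax"),
        ],
        "cnn": [
            Conv2D(32, kernel_size=3, activation="relu"),
            MaxPooling2D(2),
            Flatten(),
            Dense(10, activation="softmax"),
        ],
    }
    return model_layers[model_type]

# A Sequential model is just the chosen layer list wired together.
model = Sequential(get_layers("cnn"))
print(len(model.layers))  # expect 4
```

Because the layers are plain list entries, adding a new model type is just a new dictionary key.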
Local Training
With everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script mnist_models/trainer/test.py to make sure the model still passes our previous checks. On line 13, you can specify which model types you would like to check. line 14 and line ... | !python3 -m mnist_models.trainer.test | notebooks/image_models/solutions/2_mnist_models.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Now that we know that our models are working as expected, let's run them on Google Cloud AI Platform. We can first run the code as a Python module locally using the command line.
The cell below transfers some of our variables to the command line and creates a job directory whose name includes a timestamp. | current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = "cnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time
) | notebooks/image_models/solutions/2_mnist_models.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
The cell below runs the local version of the code. The epochs and steps_per_epoch flags can be changed to run for longer or shorter, as defined in our mnist_models/trainer/task.py file. | %%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE | notebooks/image_models/solutions/2_mnist_models.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Training on the cloud
We will use a Deep Learning Container to train this model on AI Platform. Below is a simple Dockerfile which copies our code into a TensorFlow 2.3 environment. | %%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu.2-3
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"] | notebooks/image_models/solutions/2_mnist_models.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
The command below builds the image and pushes it to Google Cloud so it can be used by AI Platform. Once built, it will show up in your project with the name mnist_models. (You may need to enable Cloud Build first.) | !docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI | notebooks/image_models/solutions/2_mnist_models.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Finally, we can kick off the AI Platform training job, passing in our Docker image with the master-image-uri flag. | current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = "cnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time
)
os.environ["JOB_NAME"] = f"mnist_{model_type}_{current_time}"
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-... | notebooks/image_models/solutions/2_mnist_models.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Deploying and predicting with model
Once you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. The cell below uses the Keras export path of the previous job, but ${JOB_DIR}keras_export/ can always be changed to a different path. | TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
MODEL_NAME = f"mnist_{TIMESTAMP}"
%env MODEL_NAME = $MODEL_NAME
%%bash
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
gcloud ai-platform models crea... | notebooks/image_models/solutions/2_mnist_models.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
To predict with the model, let's take one of the example images.
TODO 4: Write a .json file with image data to send to an AI Platform deployed model | import codecs
import json
import matplotlib.pyplot as plt
import tensorflow as tf
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.... | notebooks/image_models/solutions/2_mnist_models.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
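The truncated json.dump call presumably writes the instance to a file such as test.json (the name used by the prediction command below). A self-contained sketch, using a random array as a stand-in for the MNIST image:

```python
import codecs
import json

import numpy as np

HEIGHT, WIDTH = 28, 28
# Stand-in for x_test[IMGNO]; the real notebook uses an actual MNIST image.
test_image = np.random.randint(0, 256, size=(HEIGHT, WIDTH))

# The prediction service expects a JSON instance shaped like the model input.
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
with codecs.open("test.json", "w", encoding="utf-8") as fp:
    json.dump(jsondata, fp)

# Round-trip check: the written instance has the expected 28x28x1 shape.
loaded = json.load(codecs.open("test.json", "r", encoding="utf-8"))
print(len(loaded), len(loaded[0]), len(loaded[0][0]))  # 28 28 1
```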
Finally, we can send it to the prediction service. The output will have a 1 at the index of the digit it predicts. Congrats! You've completed the lab! | %%bash
gcloud ai-platform predict \
--model=${MODEL_NAME} \
--version=${MODEL_TYPE} \
--json-instances=./test.json | notebooks/image_models/solutions/2_mnist_models.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Now we import a few general packages that we need to start with. The following imports basic numerics and algebra routines (numpy) and plotting routines (matplotlib), and makes sure that all plots are shown inside the notebook rather than in a separate window (nicer that way). | import matplotlib.pylab as plt
import numpy as np
%pylab inline | python/markov_analysis/PyEMMA-API.ipynb | jeiros/Jupyter_notebooks | mit |
Now we import the pyEMMA package that we will be using in the beginning: the coordinates package. This package contains functions and classes for reading and writing trajectory files, extracting order parameters from them (such as distances or angles), as well as various methods for dimensionality reduction and cluster... | import pyemma.coordinates as coor
import pyemma.msm as msm
import pyemma.plots as mplt
from pyemma import config
# some helper funcs
def average_by_state(dtraj, x, nstates):
assert(len(dtraj) == len(x))
N = len(dtraj)
res = np.zeros((nstates))
for i in range(nstates):
I = np.argwhere(dtraj == i... | python/markov_analysis/PyEMMA-API.ipynb | jeiros/Jupyter_notebooks | mit |
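The truncated helper appears to average an observable over the frames assigned to each discrete state; a sketch consistent with the visible signature:

```python
import numpy as np

def average_by_state(dtraj, x, nstates):
    """Average the observable x over the frames assigned to each discrete state."""
    assert len(dtraj) == len(x)
    res = np.zeros(nstates)
    for i in range(nstates):
        I = np.argwhere(dtraj == i)[:, 0]  # frame indices assigned to state i
        res[i] = np.mean(x[I])
    return res

dtraj = np.array([0, 0, 1, 1, 1])
x = np.array([1.0, 3.0, 2.0, 4.0, 6.0])
print(average_by_state(dtraj, x, 2))  # state 0 -> 2.0, state 1 -> 4.0
```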
TICA and clustering
So we would like to first reduce the dimensionality by throwing out the ‘uninteresting’ dimensions and only keeping the ‘relevant’ ones. But how do we do that?
It turns out that a really good way to do that, if you are interested in the slow kinetics of the molecule - e.g. for constructing a Markov model - is t... | tica_obj = coor.tica(inp, lag=100)
Y = tica_obj.get_output()[0] | python/markov_analysis/PyEMMA-API.ipynb | jeiros/Jupyter_notebooks | mit |
By default, TICA will choose a number of output dimensions to cover 95% of the kinetic variance and scale the output to produce a kinetic map. In this case we retain 575 dimensions, which is a lot but note that they are scaled by eigenvalue, so it’s mostly the first dimensions that contribute. | print("Projected data shape: (%s,%s)" % (Y.shape[0], Y.shape[1]))
print('Retained dimensions: %s' % tica_obj.dimension())
plot(tica_obj.cumvar, linewidth=2)
plot([tica_obj.dimension(), tica_obj.dimension()], [0, 1], color='black', linewidth=2)
plot([0, Y.shape[0]], [0.95, 0.95], color='black', linewidth=2)
xlabel('Num... | python/markov_analysis/PyEMMA-API.ipynb | jeiros/Jupyter_notebooks | mit |
The TICA object has a number of properties that we can extract and work with. We have already obtained the projected trajectory and wrote it in a variable Y that is a matrix of size (103125 x 2). The rows are the MD steps, the two columns are the independent components the data is projected onto. So each column is a traj... | mplt.plot_free_energy(np.vstack(Y)[:, 0], np.vstack(Y)[:, 1])
xlabel('independent component 1'); ylabel('independent component 2') | python/markov_analysis/PyEMMA-API.ipynb | jeiros/Jupyter_notebooks | mit |
A particular thing about the ICs is that they have zero mean and variance one. We can easily check that: | print("Mean values: %s" % np.mean(Y, axis=0))
print("Variances: %s" % np.var(Y, axis=0)) | python/markov_analysis/PyEMMA-API.ipynb | jeiros/Jupyter_notebooks | mit |
The small deviations from 0 and 1 come from statistical and numerical issues. That’s not a problem. Note that if we had set kinetic_map=True when doing TICA, then the variances would not be 1 but rather the square of the corresponding TICA eigenvalue.
TICA is a special transformation because it will project the data su... | print(-100/np.log(tica_obj.eigenvalues[:5])) | python/markov_analysis/PyEMMA-API.ipynb | jeiros/Jupyter_notebooks | mit |
We will see more timescales later when we estimate a Markov model, and there will be some differences. For now you should treat these numbers as a rough guess of your molecule’s timescales, and we will see later that this guess is actually a bit too fast. The timescales are relative to the 10 ns saving interval, so we ... | subplot2grid((2,1),(0,0))
plot(Y[:,0])
ylabel('ind. comp. 1')
subplot2grid((2,1),(1,0))
plot(Y[:,1])
ylabel('ind. comp. 2')
xlabel('time (10 ns)')
tica_obj.chunksize
mplt.plot_implied_timescales(tica_obj) | python/markov_analysis/PyEMMA-API.ipynb | jeiros/Jupyter_notebooks | mit |
Build network
Now we're building the network from the functions defined above.
First, we get our inputs, input_real and input_z, from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Th... | tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reus... | gan_mnist/Intro_to_GANs_Solution.ipynb | mdiaz236/DeepLearningFoundations | mit |
Training | !mkdir checkpoints
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
... | gan_mnist/Intro_to_GANs_Solution.ipynb | mdiaz236/DeepLearningFoundations | mit |
It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures, like 1s and 9s, appear out of the noise.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We j... | saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_... | gan_mnist/Intro_to_GANs_Solution.ipynb | mdiaz236/DeepLearningFoundations | mit |
Spark Configuration and Preparation
Edit the variables in the cell below. If you are running Spark in local mode, please set the local flag to true and adjust the resources you wish to use on your local machine. The same goes for the case when you are running Spark 2.0 and higher. | # Modify these variables according to your needs.
application_name = "Distributed Deep Learning: Analysis"
using_spark_2 = False
local = False
if local:
# Tell master to use local resources.
master = "local[*]"
num_cores = 3
num_executors = 1
else:
# Tell master to use YARN.
master = "yarn-clien... | examples/example_1_analysis.ipynb | ad960009/dist-keras | gpl-3.0 |
Data Preparation
After the Spark Context (or Spark Session if you are using Spark 2.0) has been set up, we can start reading the preprocessed dataset from storage. | # Check if we are using Spark 2.0
if using_spark_2:
reader = sc
else:
reader = sqlContext
# Read the dataset.
raw_dataset = reader.read.parquet("data/processed.parquet")
# Check the schema.
raw_dataset.printSchema() | examples/example_1_analysis.ipynb | ad960009/dist-keras | gpl-3.0 |
After reading the dataset from storage, we will extract several metrics such as nb_features, which basically is the number of input neurons, and nb_classes, which is the number of classes (signal and background). | nb_features = len(raw_dataset.select("features_normalized").take(1)[0]["features_normalized"])
nb_classes = len(raw_dataset.select("label").take(1)[0]["label"])
print("Number of features: " + str(nb_features))
print("Number of classes: " + str(nb_classes)) | examples/example_1_analysis.ipynb | ad960009/dist-keras | gpl-3.0 |
Finally, we split up the dataset for training and testing purposes, and fetch some additional statistics on the number of training and testing instances. | # Finally, we create a trainingset and a testset.
(training_set, test_set) = raw_dataset.randomSplit([0.7, 0.3])
training_set.cache()
test_set.cache()
# Distribute the training and test set to the workers.
test_set = test_set.repartition(num_workers)
training_set = training_set.repartition(num_workers)
num_test_set =... | examples/example_1_analysis.ipynb | ad960009/dist-keras | gpl-3.0 |
Model construction | model = Sequential()
model.add(Dense(500, input_shape=(nb_features,)))
model.add(Activation('relu'))
model.add(Dropout(0.4))
model.add(Dense(500))
model.add(Activation('relu'))
model.add(Dropout(0.6))
model.add(Dense(500))
model.add(Activation('relu'))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
# Su... | examples/example_1_analysis.ipynb | ad960009/dist-keras | gpl-3.0 |
Model evaluation | def evaluate(model):
global test_set
metric_name = "f1"
evaluator = MulticlassClassificationEvaluator(metricName=metric_name, predictionCol="prediction_index", labelCol="label_index")
# Clear the prediction column from the testset.
test_set = test_set.select("features_normalized", "label", "label_i... | examples/example_1_analysis.ipynb | ad960009/dist-keras | gpl-3.0 |
Model training and evaluation
In the next sections we train and evaluate models using different (distributed) optimizers.
Single Trainer | trainer = SingleTrainer(keras_model=model, loss=loss, worker_optimizer=optimizer,
features_col="features_normalized", num_epoch=1, batch_size=64)
trained_model = trainer.train(training_set)
# Fetch the training time.
dt = trainer.get_training_time()
print("Time spent (SingleTrainer): " + `dt` ... | examples/example_1_analysis.ipynb | ad960009/dist-keras | gpl-3.0 |
Asynchronous EASGD | trainer = AEASGD(keras_model=model, worker_optimizer=optimizer, loss=loss, num_workers=num_workers, batch_size=64,
features_col="features_normalized", num_epoch=1, communication_window=32,
rho=5.0, learning_rate=0.1)
trainer.set_parallelism_factor(1)
trained_model = trainer.train(trai... | examples/example_1_analysis.ipynb | ad960009/dist-keras | gpl-3.0 |
DOWNPOUR | trainer = DOWNPOUR(keras_model=model, worker_optimizer=optimizer, loss=loss, num_workers=num_workers,
batch_size=64, communication_window=5, learning_rate=0.1, num_epoch=1,
features_col="features_normalized")
trainer.set_parallelism_factor(1)
trained_model = trainer.train(training_... | examples/example_1_analysis.ipynb | ad960009/dist-keras | gpl-3.0 |
Results
As we can see from the plots below, the distributed optimizers finish a single epoch roughly 7 times faster. However, to do so they use 16 times the amount of resources. This is also not a very descriptive measure, since some of the jobs are scheduled on the same machines and some machines have a higher lo... | # Plot the time.
fig = plt.figure()
st = fig.suptitle("Lower is better.", fontsize="x-small")
plt.bar(range(len(time_spent)), time_spent.values(), align='center')
plt.xticks(range(len(time_spent)), time_spent.keys())
plt.xlabel("Optimizers")
plt.ylabel("Seconds")
plt.ylim([0, 7000])
plt.show()
# Plot the statistical ... | examples/example_1_analysis.ipynb | ad960009/dist-keras | gpl-3.0 |
A different view of logistic regression
Consider a schematic reframing of the LR model above. This time we'll treat the inputs as nodes, and they connect to other nodes via vertices that represent the weight coefficients.
<img src="img/NN-2.jpeg">
The diagram above is a (simplified form of a) single-neuron model in ... | rng = np.random.RandomState(1)
X = rng.randn(samples, 2)
y = np.array(np.logical_xor(X[:, 0] > 0, X[:, 1] > 0), dtype=int)
clf = LogisticRegression().fit(X,y)
plot_decision_regions(X=X, y=y, clf=clf, res=0.02, legend=2)
plt.xlabel('x1'); plt.ylabel('x2'); plt.title('LR (XOR)') | neural-networks-101/Neural Networks - Part 1.ipynb | fionapigott/Data-Science-45min-Intros | unlicense |
Why does this matter? Well...
Neural Networks
Some history
In the 1960s, when the concept of neural networks was first gaining steam, this type of data was a show-stopper. In particular, the reason our model fails to be effective with this data is that it's not linearly separable; it has interaction terms.
This is a s... | # make the same data as above (just a little closer so it's easier to find)
rng = np.random.RandomState(1)
X = rng.randn(samples, 2)
y = np.array(np.logical_xor(X[:, 0] > 0, X[:, 1] > 0), dtype=int)
def activate(x, deriv=False):
"""sigmoid activation function and its derivative wrt the argument"""
if deriv is... | neural-networks-101/Neural Networks - Part 1.ipynb | fionapigott/Data-Science-45min-Intros | unlicense |
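The truncated activate function is presumably the standard sigmoid; a completed sketch (when deriv=True, the argument is assumed to already be the sigmoid output, a common shortcut in these tutorials):

```python
import numpy as np

def activate(x, deriv=False):
    """Sigmoid activation; with deriv=True, x is assumed to be sigmoid output."""
    if deriv:
        return x * (1 - x)  # sigma'(z) = sigma(z) * (1 - sigma(z))
    return 1 / (1 + np.exp(-x))

print(activate(0.0))                         # 0.5
print(activate(activate(0.0), deriv=True))   # 0.25
```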
This is the iterative phase. We propagate the input data forward through the synapse (weights), calculate the errors, and then back-propagate those errors through the synapses (weights) according to the proper gradients. Note that the number of iterations is arbitrary at this point. We'll come back to that. | for i in range(10000):
# first "layer" is the input data
l0 = X
# forward propagation
l1 = activate(np.dot(l0, syn0))
###
# this is an oversimplified version of backprop + gradient descent
#
# how much did we miss?
l1_error = y - l1
#
# how much should we scale the adj... | neural-networks-101/Neural Networks - Part 1.ipynb | fionapigott/Data-Science-45min-Intros | unlicense |
As expected, this basically didn't work at all!
Even though we aren't looking at the actual output data, we can use it to look at the accuracy; it never got much better than random guessing. Even after thousands of iterations! But remember, we knew that would be the case, because this single-layer network is functiona... | # hold tight, we'll come back to choosing this number
hidden_layer_width = 3
# initialize synapse (weight) matrices randomly with mean 0
syn0 = 2*np.random.random((2,hidden_layer_width)) - 1
syn1 = 2*np.random.random((hidden_layer_width,1)) - 1
for i in range(60000):
# forward propagation through layers 0, 1, an... | neural-networks-101/Neural Networks - Part 1.ipynb | fionapigott/Data-Science-45min-Intros | unlicense |
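A self-contained variant of the two-layer network described above. Note the assumptions: it trains on the four-point XOR truth table with an explicit bias column rather than the notebook's Gaussian quadrant data, and the hidden width, iteration count, and seed are illustrative:

```python
import numpy as np

np.random.seed(1)

# Four-point XOR truth table, with a constant bias column appended.
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])

def activate(x, deriv=False):
    if deriv:                       # x is assumed to already be sigmoid output
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

hidden_layer_width = 4
syn0 = 2 * np.random.random((3, hidden_layer_width)) - 1
syn1 = 2 * np.random.random((hidden_layer_width, 1)) - 1

for _ in range(60000):
    # forward propagation through layers 0, 1, and 2
    l1 = activate(X.dot(syn0))
    l2 = activate(l1.dot(syn1))
    # backpropagate errors, scaled by the sigmoid gradient at each layer
    l2_delta = (y - l2) * activate(l2, deriv=True)
    l1_delta = l2_delta.dot(syn1.T) * activate(l1, deriv=True)
    syn1 += l1.T.dot(l2_delta)
    syn0 += X.T.dot(l1_delta)

print(np.round(l2).ravel())  # expect [0. 1. 1. 0.]
```

The hidden layer is what lets the network express the interaction term that a single neuron cannot.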
Ok, this time we started at random guessing (sensible), but notice that we quickly reduced our overall error! That's excellent!
Note: I didn't have time to debug the case where the full XOR data only trained to label one quadrant correctly. To get a sense for how it can look with a smaller set, change the "fall-back da... | def forward_prop(X):
"""forward-propagate data X through the pre-fit network"""
l1 = activate(np.dot(X,syn0))
l2 = activate(np.dot(l1,syn1))
return l2
# numpy and plotting shenanigans come from:
# http://scikit-learn.org/stable/auto_examples/svm/plot_iris.html
# mesh step size
h = .02
# create a me... | neural-networks-101/Neural Networks - Part 1.ipynb | fionapigott/Data-Science-45min-Intros | unlicense |
Executes with mpiexec | !mpiexec -n 4 python2.7 hellompi.py | Untitled5.ipynb | PepSalehi/tuthpc | bsd-3-clause |
Coding for multiple "personalities" (nodes, actually)
Point to point communication | %%file mpipt2pt.py
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
if rank == 0:
data = range(10)
more = range(0,20,2)
print 'rank %i sends data:' % rank, data
comm.send(data, dest=1, tag=1337)
print 'rank %i sends data:' % rank, more
comm.send(more, ... | Untitled5.ipynb | PepSalehi/tuthpc | bsd-3-clause |
Collective communication | %%file mpiscattered.py
'''mpi scatter
'''
from mpi4py import MPI
import numpy as np
import time
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
if rank == 0:
data = np.arange(10)
print 'rank %i has data' % rank, data
data_split_list = np.array_split(data, size)
else:
data_split_lis... | Untitled5.ipynb | PepSalehi/tuthpc | bsd-3-clause |
Not covered: shared memory and shared objects
Better serialization | from mpi4py import MPI
try:
import dill
MPI._p_pickle.dumps = dill.dumps
MPI._p_pickle.loads = dill.loads
except (ImportError, AttributeError):
pass | Untitled5.ipynb | PepSalehi/tuthpc | bsd-3-clause |
Working with cluster schedulers, the JOB file | %%file jobscript.sh
#!/bin/sh
#PBS -l nodes=1:ppn=4
#PBS -l walltime=00:03:00
cd ${PBS_O_WORKDIR} || exit 2
mpiexec -np 4 python hellompi.py | Untitled5.ipynb | PepSalehi/tuthpc | bsd-3-clause |
Beyond mpi4py
The task Pool: pyina and emcee.utils | %%file pyinapool.py
def test_pool(obj):
from pyina.launchers import Mpi
x = range(6)
p = Mpi(8)
# worker pool strategy + dill
p.scatter = False
print p.map(obj, x)
# worker pool strategy + dill.source
p.source = True
print p.map(obj, x)
# scatter-gather strategy ... | Untitled5.ipynb | PepSalehi/tuthpc | bsd-3-clause |
5.1 Log-Normal Chain-Ladder
This corresponds to Section 5.1 in the paper. The data are taken from Verrall et al. (2010). Kuang et al. (2015) fitted a log-normal chain-ladder model to this data. The model is given by
$$ M^{LN}_{\mu, \sigma^2}: \quad \log(Y_{ij}) \stackrel{D}{=} N(\alpha_i + \beta_j + \delta, \sigma^2). $$... | model_VNJ = apc.Model()
Next, we attach the data for the model. The data come pre-formatted in the package. | model_VNJ.data_from_df(apc.loss_VNJ(), data_format='CL') | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
We fit a log-normal chain-ladder model to the full data. | model_VNJ.fit('log_normal_response', 'AC') | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
and confirm that we get the same result as in the paper for the log-data variance estimate $\hat{\sigma}^{2,LN}$ and the degrees of freedom $df$. This should correspond to the values for $\mathcal{I}$ in Figure 2(b). | print('log-data variance full model: {:.3f}'.format(model_VNJ.s2))
print('degrees of freedom full model: {:.0f}'.format(model_VNJ.df_resid)) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
This matches the results in the paper.
Sub-models
We move on to split the data into sub-samples. The sub-samples $\mathcal{I}_1$ and $\mathcal{I}_2$ contain the first and the last five accident years, respectively. Accident years correspond to "cohorts" in age-period-cohort terminology. Rather than first splitting the ... | sub_model_VNJ_1 = model_VNJ.sub_model(coh_from_to=(1,5), fit=True)
sub_model_VNJ_2 = model_VNJ.sub_model(coh_from_to=(6,10), fit=True) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
We can check that this generated the estimates $\hat{\sigma}^{2, LN}_\ell$ and degrees of freedom $df_\ell$ from the paper. | print('First five accident years (I_1)')
print('-------------------------------')
print('log-data variance: {:.3f}'.format(sub_model_VNJ_1.s2))
print('degrees of freedom: {:.0f}\n'.format(sub_model_VNJ_1.df_resid))
print('Last five accident years (I_2)')
print('------------------------------')
print('log-data variance... | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
Reassuringly, it does. We can then also compute the weighted average predictor $\bar{\sigma}^{2,LN}$ | s2_bar_VNJ = ((sub_model_VNJ_1.s2 * sub_model_VNJ_1.df_resid
+ sub_model_VNJ_2.s2 * sub_model_VNJ_2.df_resid)
/(sub_model_VNJ_1.df_resid + sub_model_VNJ_2.df_resid))
print('Weighted avg of log-data variance: {:.3f}'.format(s2_bar_VNJ)) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
Check!
Testing for common variances
Now we can move on to test the hypothesis of common variances
$$ H_{\sigma^2}: \sigma^2_1 = \sigma^2_2. $$
This corresponds to testing for a reduction from $M^{LN}$ to $M^{LN}_{\sigma^2}$.
First, we can conduct a Bartlett test. This functionality is pre-implemented in the package. | bartlett_VNJ = apc.bartlett_test([sub_model_VNJ_1, sub_model_VNJ_2]) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
The test statistic $B^{LN}$ is computed as the ratio of $LR^{LN}$ to the Bartlett correction factor $C$. The p-value is computed by the $\chi^2$ approximation to the distribution of $B^{LN}$. The number of sub-samples is given by $m$. | for key, value in bartlett_VNJ.items():
print('{}: {:.2f}'.format(key, value)) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
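For intuition, the Bartlett statistic can be reproduced by hand from the per-sample variance estimates and degrees of freedom; a sketch of the standard construction with illustrative numbers (not the VNJ values):

```python
import math

def bartlett_from_variances(s2s, dfs):
    """Bartlett test statistic from variance estimates and degrees of freedom."""
    m = len(s2s)
    df_total = sum(dfs)
    # df-weighted (pooled) variance, as in the weighted average predictor above
    s2_bar = sum(s2 * df for s2, df in zip(s2s, dfs)) / df_total
    # likelihood-ratio statistic
    LR = df_total * math.log(s2_bar) - sum(df * math.log(s2)
                                           for s2, df in zip(s2s, dfs))
    # Bartlett correction factor
    C = 1 + (sum(1 / df for df in dfs) - 1 / df_total) / (3 * (m - 1))
    return LR / C  # approximately chi2(m - 1) under the null

print(bartlett_from_variances([1.0, 1.0], [10, 10]))  # equal variances -> 0.0
```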
We get the same results as in the paper. Specifically, we get a p-value of $0.09$ for the hypothesis so that the Bartlett test does not arm us with strong evidence against the null hypothesis.
In the paper, we also conduct an $F$-test for the same hypothesis. The statistic is computed as
$$ F_{\sigma^2}^{LN} = \frac{\h... | F_VNJ_sigma2 = sub_model_VNJ_2.s2/sub_model_VNJ_1.s2
print('F statistic for common variances: {:.2f}'.format(F_VNJ_sigma2)) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
Now we can compute p-values in one-sided and two-sided tests.
For an (equal-tailed) two-sided test, we first find the percentile $P(F_{\sigma^2}^{LN} \leq \mathrm{F}_{df_2, df_1})$. This is given by | from scipy import stats
F_VNJ_sigma2_percentile = stats.f.cdf(
F_VNJ_sigma2, dfn=sub_model_VNJ_2.df_resid, dfd=sub_model_VNJ_1.df_resid
)
print('Percentile of F statistic: {:.2f}'.format(F_VNJ_sigma2_percentile)) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
If this is below the 50th percentile, the p-value is simply twice the percentile, otherwise we subtract the percentile from unity and multiply that by two. For intuition, we can look at the plot below. The green areas in the lower and upper tail of the distribution contain the same probability mass, namely $P(F_{\sigma... | import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
x = np.linspace(0.01,5,1000)
y = stats.f.pdf(x,
dfn=sub_model_VNJ_2.df_resid,
dfd=sub_model_VNJ_1.df_resid)
plt.figure()
plt.plot(x, y, label='$\mathrm{F}_{df_2, df_1}$ density')
plt.axvline(F_VNJ_sigma2, color='black', li... | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
Since $F_{\sigma^2}^{LN}$ is below the 50th percentile, the two-sided equal tailed p-value is in our case given by | print('F test two-sided p-value: {:.2f}'.format(
2*np.min([F_VNJ_sigma2_percentile, 1-F_VNJ_sigma2_percentile])
)
) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
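The equal-tailed two-sided rule can be wrapped in a small helper, using scipy's F distribution as above:

```python
from scipy import stats

def f_two_sided_p(F, dfn, dfd):
    """Equal-tailed two-sided p-value for an F statistic."""
    pct = stats.f.cdf(F, dfn=dfn, dfd=dfd)
    return 2 * min(pct, 1 - pct)

# With equal degrees of freedom, the F median is 1, so F = 1 gives p = 1;
# extreme statistics in either tail give small p-values.
print(f_two_sided_p(1.0, 20, 20))
print(f_two_sided_p(5.0, 20, 20))
```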
The one-sided p-value for the hypothesis $H_{\sigma^2}: \sigma^2_1 \leq \sigma^2_2$ simply corresponds to the area in the lower tail of the distribution. This is because the statistic is $\hat\sigma^{2,LN}_2/\hat\sigma^{2,LN}_1$ so that smaller values work against our hypothesis. Thus, the rejection region is the lower... | print('F statistic one-sided p-value: {:.2f}'.format(F_VNJ_sigma2_percentile)) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
Testing for common linear predictors
We can move on to test for common linear predictors:
$$ H_{\mu, \sigma^2}: \sigma^2_1 = \sigma^2_2 \quad \text{and} \quad \alpha_{i,\ell} + \beta_{j,\ell} + \delta_\ell = \alpha_i + \beta_j + \delta $$
If we are happy to accept the hypothesis of common variances $H_{\sigma^2}: \sigm... | f_linpred_VNJ = apc.f_test(model_VNJ, [sub_model_VNJ_1, sub_model_VNJ_2]) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
This returns the test statistic $F_\mu^{LN}$ along with the p-value. | for key, value in f_linpred_VNJ.items():
print('{}: {:.2f}'.format(key, value)) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
These results, too, match those from the paper.
5.2 Over-dispersed Poisson Chain-Ladder
This corresponds to Section 5.2 in the paper. The data are taken from Taylor and Ashe (1983). For this data, the desired full model is an over-dispersed Poisson model given by
$$ M^{ODP}_{\mu, \sigma^2}: \quad E(Y_{ij}) = \exp(\alpha_i... | model_TA = apc.Model()
model_TA.data_from_df(apc.data.pre_formatted.loss_TA(), data_format='CL')
model_TA.fit('od_poisson_response', 'AC')
print('log-data variance full model: {:.0f}'.format(model_TA.s2))
print('degrees of freedom full model: {:.0f}'.format(model_TA.df_resid)) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
Sub-models
We set up and estimate the models on the four sub-samples. Combined, these models correspond to $M^{ODP}$. | sub_model_TA_1 = model_TA.sub_model(per_from_to=(1,5), fit=True)
sub_model_TA_2 = model_TA.sub_model(coh_from_to=(1,5), age_from_to=(1,5),
per_from_to=(6,10), fit=True)
sub_model_TA_3 = model_TA.sub_model(age_from_to=(6,10), fit=True)
sub_model_TA_4 = model_TA.sub_model(coh_from_to=(... | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
Testing for common over-dispersion
We perform a Bartlett test for the hypothesis of common over-dispersion across sub-samples $H_{\sigma^2}: \sigma^2_\ell = \sigma^2$. This corresponds to testing a reduction from $M^{ODP}$ to $M^{ODP}_{\sigma^2}$. | bartlett_TA = apc.bartlett_test(sub_models_TA)
for key, value in bartlett_TA.items():
print('{}: {:.2f}'.format(key, value)) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
These results match those in the paper. The Bartlett test yields a p-value of 0.08.
Testing for common linear predictors
If we are happy to impose common over-dispersion, we can test for common linear predictors across sub-samples. Then, this corresponds to a reduction from $M^{ODP}_{\sigma^2}$ to $M^{ODP}_{\mu, \sigma^... | f_linpred_TA = apc.f_test(model_TA, sub_models_TA)
for key, value in f_linpred_TA.items():
print('{}: {:.2f}'.format(key, value)) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
Repeated testing
In the paper, we also suggest a procedure to repeat the tests for different sub-sample structures, using a Bonferroni correction for size-control. | sub_models_TA_2 = [model_TA.sub_model(coh_from_to=(1,5), fit=True),
model_TA.sub_model(coh_from_to=(6,10), fit=True)]
sub_models_TA_3 = [model_TA.sub_model(per_from_to=(1,4), fit=True),
model_TA.sub_model(per_from_to=(5,7), fit=True),
model_TA.sub_model(per_from... | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
The test results match those in the paper.
For a quick refresher on the Bonferroni correction we turn to Wikipedia. The idea is to control the family-wise error rate: the probability of rejecting at least one null hypothesis when all null hypotheses are true.
In our scenario, we repeat testing three times. Each individual repetit... | model_BZ = apc.Model()
model_BZ.data_from_df(apc.data.pre_formatted.loss_BZ(), time_adjust=1, data_format='CL')
model_BZ.fit('log_normal_response', 'AC')
print('log-data variance full model: {:.4f}'.format(model_BZ.s2))
print('degrees of freedom full model: {:.0f}'.format(model_BZ.df_resid)) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
Next, the models for the sub-samples. | sub_models_BZ = [model_BZ.sub_model(per_from_to=(1977,1981), fit=True),
model_BZ.sub_model(per_from_to=(1982,1984), fit=True),
model_BZ.sub_model(per_from_to=(1985,1987), fit=True)]
for i, sm in enumerate(sub_models_BZ):
print('Sub-sample I_{}'.format(i+1))
print('-----------... | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
We move on to the Bartlett test for the hypothesis of common log-data variances across sub-samples $H_{\sigma^2}: \sigma^2_\ell = \sigma^2$. | bartlett_BZ = apc.bartlett_test(sub_models_BZ)
for key, value in bartlett_BZ.items():
print('{}: {:.2f}'.format(key, value)) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
The Bartlett test yields a p-value of 0.05 as in the paper.
We test for common linear predictors across sub-samples. | f_linpred_BZ = apc.f_test(model_BZ, sub_models_BZ)
for key, value in f_linpred_BZ.items():
print('{}: {:.2f}'.format(key, value)) | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
Calendar effect
Now we redo the same for the model with calendar effect. | model_BZe = apc.Model()
model_BZe.data_from_df(apc.data.pre_formatted.loss_BZ(), time_adjust=1, data_format='CL')
model_BZe.fit('log_normal_response', 'APC') # The only change is in this line.
print('log-data variance full model: {:.4f}'.format(model_BZe.s2))
print('degrees of freedom full model: {:.0f}'.format(mod... | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
With this, we replicated Figure 4b.
Closer look at the effect of dropping the calendar effect
In the paper, we move on to take a closer look at the effect of dropping the calendar effect. We do so in two ways, starting with $$M^{LNe}_{\sigma^2}: \log Y_{ij,\ell} \stackrel{D}{=} N(\alpha_{i, \ell} + \beta_{j, \ell} + \gamma_{k, \ell} + \delta_\ell,\, \sigma^2)$$ | model_BZe.fit_table(attach_to_self=False).loc[['AC']]
We see that the p-value (P>F) is close to zero.
Next, we consider the second way. We first test $H_{\gamma_{k, \ell}}$. Since $\sigma^2$ is common across the array from the outset, we can do this with a simple $F$-test:
$$ \frac{(RSS_.^{LN} - RSS_.^{LNe})/(df_.^{LN} - df_.^{LNe})}{RSS_.^{LNe}/df_.^{LNe}} \stackrel{D}{=} F_{df_.^{LN} - df_.^{LNe},\, df_.^{LNe}} $$ | rss_BZe_dot = np.sum([sub.rss for sub in sub_models_BZe])
rss_BZ_dot = np.sum([sub.rss for sub in sub_models_BZ])
df_BZe_dot = np.sum([sub.df_resid for sub in sub_models_BZe])
df_BZ_dot = np.sum([sub.df_resid for sub in sub_models_BZ])
F_BZ = ((rss_BZ_dot - rss_BZe_dot)/(df_BZ_dot - df_BZe_dot)) / (rss_BZe_dot/df_BZe_... | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
Thus, this hypothesis is not rejected. However, we already saw that a reduction from $M^{LN}_{\sigma^2}$ to $M^{LN}_{\mu, \sigma^2}$ is rejected.
Repeated testing
Just as for the Taylor and Ashe (1983) data, we repeat testing for different splits. | sub_models_BZe_2 = [model_BZe.sub_model(coh_from_to=(1977,1981), fit=True),
model_BZe.sub_model(coh_from_to=(1982,1987), fit=True)]
sub_models_BZe_4 = [model_BZe.sub_model(per_from_to=(1977,1981), fit=True),
model_BZe.sub_model(coh_from_to=(1977,1982), age_from_to=(1,5),
... | apc/vignettes/vignette_misspecification.ipynb | JonasHarnau/apc | gpl-3.0 |
To create an animation we need to do two things:
1. Create the initial visualization, with handles on the figure and axes objects.
2. Write a function that will get called for each frame, updating the data and returning the next frame. | duration = 10.0 # this is the total time
N = 500
# Make the initial plot outside the animation function
fig_mpl, ax = plt.subplots(1,figsize=(5,3), facecolor='white')
x = np.random.normal(0.0, 1.0, size=N)
y = np.random.normal(0.0, 1.0, size=N)
plt.sca(ax)
plt.xlim(-3,3)
plt.ylim(-3,3)
scat = ax.scatter(x, y)
def ma... | days/day20/MoviePy.ipynb | AaronCWong/phys202-2015-work | mit |
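The make_frame callback is cut off in this extract; the time-dependent update it performs (before the figure is converted to an image with mplfig_to_npimage) can be sketched with numpy alone. The 0.1-amplitude jitter below is an assumption for illustration:

```python
import numpy as np

duration = 10.0
N = 500
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=N)
y = rng.normal(0.0, 1.0, size=N)

def positions_at(t):
    """Return the (N, 2) scatter positions at time t in [0, duration].

    In the notebook, make_frame(t) would move the scatter to these
    positions and return the rendered figure as an image array.
    """
    phase = 2.0 * np.pi * t / duration
    return np.column_stack([x + 0.1 * np.sin(phase),
                            y + 0.1 * np.cos(phase)])

frame0 = positions_at(0.0)
print(frame0.shape)  # (500, 2)
```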
Use the following call to generate and display the animation in the notebook: | animation.ipython_display(fps=24) | days/day20/MoviePy.ipynb | AaronCWong/phys202-2015-work | mit |
Use the following to save the animation to a file that can be uploaded you YouTube: | animation.write_videofile("scatter_animation.mp4", fps=20) | days/day20/MoviePy.ipynb | AaronCWong/phys202-2015-work | mit |
Problem 1) Download and Examine the Data
The images for this exercise can be downloaded from here: https://northwestern.box.com/s/x6nzuqtdys3jo1nufvswkx62o44ifa11. Be sure to place the images in the same directory as this notebook (but do not add them to your git repo!).
Before we dive in, here is some background infor... | r_filename = "galaxy_images/85698_sdss_r.fits"
r_data = fits.getdata( # complete
plt.imshow( # complete
plt.colorbar()
plt.tight_layout() | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Problem 1b
Roughly how many sources are present in the image?
Hint - an exact count is not required here.
Solution 1b
Write your answer here
Problem 2) Source Detection
Prior to measuring any properties of the sources, we must first determine the number of sources present in the image. Source detection is challenging. | threshold = detect_threshold( # complete
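detect_threshold essentially returns a per-pixel threshold of the form background + nsigma × background RMS. A crude numpy stand-in on a synthetic image (the injected source and noise levels below are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
image = rng.normal(100.0, 5.0, size=(64, 64))  # flat synthetic background
image[30:33, 30:33] += 500.0                   # inject one bright source

# One pass of sigma clipping keeps the background statistics robust to the
# bright source, then we threshold at 3 sigma above the clipped median.
clipped = image[np.abs(image - np.median(image)) < 3.0 * np.std(image)]
threshold = np.median(clipped) + 3.0 * np.std(clipped)

mask = image > threshold
print(mask.sum())  # the 9 injected pixels plus a handful of noise peaks
```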
Problem 2b
Develop better intuition for the detection image by plotting it side-by-side with the actual image of the field.
Do you notice anything interesting about the threshold image? | fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(7,4))
ax1.imshow( # complete
ax2.imshow( # complete
fig.tight_layout() | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Following this measurement of the background, we can find sources using the detect_sources function. Briefly, this function uses image segmentation to define and assign pixels to sources, which are defined as objects with $N$ connected pixels that are $s$ times brighter than the background (we already set $s = 3$). Rea... | segm = detect_sources( # complete | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
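The connected-component step that detect_sources performs can be illustrated with scipy.ndimage.label on a tiny boolean detection mask (the mask below is made up):

```python
import numpy as np
from scipy import ndimage

above = np.array([[0, 1, 1, 0, 0],
                  [0, 1, 0, 0, 1],
                  [0, 0, 0, 0, 1],
                  [1, 0, 0, 0, 0]], dtype=bool)

# Default 4-connectivity labeling of pixels above threshold
labels, nlabels = ndimage.label(above)

# Keep only "sources" with at least npixels connected pixels,
# mirroring the npixels argument of detect_sources.
npixels = 2
sizes = ndimage.sum(above, labels, index=np.arange(1, nlabels + 1))
keep = sizes >= npixels
print(nlabels, sizes, keep)
```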
Problem 2d
Plot the segmentation image side-by-side with the actual image of the field.
Are you concerned or happy with the results?
Hint - no stretch should be applied to the segmentation image. | fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(7,4))
ax1.imshow(# complete
ax2.imshow(# complete
fig.tight_layout() | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Problem 3) Source Centroids and Shapes
Now that we have defined all of the sources in the image, we must determine the centroid for each source (in order to ultimately make some form of photometric measurement). As Dora mentioned earlier in the week, there are many ways to determine the centroid of a given source (e.g.... | def get_source_extent(segm_data, source_num):
"""
Determine extent of sources for centroid measurements
Parameters
----------
segm_data : array-like
Segementation image produced by photutils.segmentation.detect_sources
source_num : int
The source number from the segment... | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Problem 3a
Measure the centroid for each source detected in the image using the centroid_com function.
Hint - you'll want to start with a subset of pixels containing the source.
Hint 2 - centroids are measured relative to the provided data, you'll need to convert back to "global" pixel values. | xcentroid = np.zeros_like(np.unique(segm.data)[1:], dtype="float")
ycentroid = np.zeros_like(np.unique(segm.data)[1:], dtype="float")
for source_num in np.unique(segm.data)[1:]:
source_extent = get_source_extent( # complete
xc, yc = centroid_com( # complete
xcentroid[source_num-1], ycentroid[source_num-1] ... | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
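For intuition, centroid_com reduces to an intensity-weighted mean of the pixel coordinates. On a small made-up cutout:

```python
import numpy as np

# A 3x3 cutout around a source (hypothetical intensities)
cutout = np.array([[0.0, 1.0, 0.0],
                   [1.0, 4.0, 1.0],
                   [0.0, 1.0, 2.0]])

yy, xx = np.mgrid[0:cutout.shape[0], 0:cutout.shape[1]]
total = cutout.sum()
xc = (xx * cutout).sum() / total
yc = (yy * cutout).sum() / total

# These are cutout coordinates; add the cutout's lower-left corner
# (x0, y0) to recover positions in the full image, as the hint notes.
print(xc, yc)  # 1.2 1.2
```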
Problem 3b
Overplot the derived centroids on the image data as a sanity check for your methodology. | fig, ax1 = plt.subplots()
ax1.imshow( # complete
ax1.plot( # complete
fig.tight_layout() | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
With an estimate of the centroid of every source in hand, we now need to determine the ellipse that best describes the galaxies in order to measure their flux. Fortunately, this can be done using the source_properties function within the photutils.morphology package.
Briefly, source_properties takes both the data array and the segmentation image as arguments. | cat = source_properties( # complete
tbl = cat.to_table(columns=['id', 'semimajor_axis_sigma','semiminor_axis_sigma', 'orientation']) | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
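The semimajor/semiminor axes and orientation that source_properties reports derive from second-order intensity moments. A numpy sketch on a hypothetical elongated Gaussian source:

```python
import numpy as np

yy, xx = np.mgrid[0:51, 0:51]
# Hypothetical source: Gaussian with sigma_x = 6, sigma_y = 3 pixels
img = np.exp(-0.5 * (((xx - 25) / 6.0) ** 2 + ((yy - 25) / 3.0) ** 2))

total = img.sum()
xbar = (xx * img).sum() / total
ybar = (yy * img).sum() / total

# Second-order central moments form a 2x2 covariance matrix whose
# eigenvalues give the squared semi-axes and eigenvectors the orientation.
cxx = ((xx - xbar) ** 2 * img).sum() / total
cyy = ((yy - ybar) ** 2 * img).sum() / total
cxy = ((xx - xbar) * (yy - ybar) * img).sum() / total
cov = np.array([[cxx, cxy], [cxy, cyy]])

eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
semiminor, semimajor = np.sqrt(eigvals)
orientation = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])
print(semimajor, semiminor)  # close to 6 and 3
```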
Problem 4) Photometry
We now have all the necessary information to measure the flux in elliptical apertures. The EllipticalAperture function in photutils defines apertures on an image based on input centroids, $a$, $b$, and orientation values.
Problem 4a
Define apertures for the sources that are detected in the image.... | positions = # complete
apertures = [EllipticalAperture( # complete
# complete
# complete
# complete | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
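Geometrically, an elliptical aperture is just a rotated-ellipse membership test on pixel coordinates. A sketch (the helper name below is ours, not part of photutils, and ignores the partial-pixel weighting photutils performs at the boundary):

```python
import numpy as np

def in_ellipse(x, y, xc, yc, a, b, theta):
    """True where (x, y) falls inside the ellipse with semi-axes a, b
    centered at (xc, yc) and rotated by theta radians."""
    dx, dy = x - xc, y - yc
    u = dx * np.cos(theta) + dy * np.sin(theta)    # along the major axis
    v = -dx * np.sin(theta) + dy * np.cos(theta)   # along the minor axis
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

print(in_ellipse(5.0, 0.0, 0.0, 0.0, 6.0, 3.0, 0.0))  # True
print(in_ellipse(0.0, 4.0, 0.0, 0.0, 6.0, 3.0, 0.0))  # False
```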
Problem 4b
Overplot your apertures on the sources that have been detected.
Hint - each aperture object has a plot() attribute that can be used to show the aperture for each source. | fig, ax1 = plt.subplots()
ax1.imshow( # complete
# complete
# complete
fig.tight_layout() | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
With apertures now defined, we can finally measure the flux of each source. The aperture_photometry function returns the flux (actually counts) in an image for the provided apertures. It takes the image, apertures, and background image as arguments.
Note - the background has already been subtracted from these images, so a mock background is simulated below purely for the uncertainty calculation. | bkg = np.random.normal(100, 35, r_data.shape)
uncertainty_img = calc_total_error(r_data, bkg - np.mean(bkg), 1) | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
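For reference, calc_total_error combines the background error with the source's Poisson term, roughly sqrt(bkg_error**2 + data/effective_gain). With illustrative numbers:

```python
import numpy as np

data = np.array([100.0, 400.0])        # background-subtracted counts
bkg_error = np.array([5.0, 5.0])       # per-pixel background RMS
effective_gain = 1.0

# total_error**2 = background variance + Poisson variance of the source
total_error = np.sqrt(bkg_error ** 2 + data / effective_gain)
print(total_error)  # [11.18..., 20.61...]
```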
Problem 4c
Measure the counts and uncertainty detected from each source within the apertures defined in 4a.
Hint - you will need to loop over each aperture as aperture_photometry does not take multiple apertures of different shapes as a single argument. | source_cnts = # complete
source_cnts_unc = # complete
for source_num, ap in enumerate(apertures):
phot = # complete
source_cnts[source_num] = # complete
source_cnts_unc[source_num] = # complete | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
The images have been normalized to a zero point of 30. Thus, we can convert from counts to magnitudes via the following equation:
$$m = 30 - 2.5 \log (\mathrm{counts}).$$
Recall from Dora's talk that the uncertainty of the magnitude measurements can be calculated as:
$$\frac{2.5}{\ln(10)} \frac{\sigma_\mathrm{counts}}{\mathrm{counts}}.$$ | source_mag = # complete
source_mag_unc = # complete
for source_num, (mag, mag_unc) in enumerate(zip(source_mag, source_mag_unc)):
print("Source {:d} has m = {:.3f} +/- {:.3f} mag".format( # complete | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
That's it! You've measured the magnitude for every source in the image.
As previously noted, the images provided for this dataset are centered on galaxies within a cluster, and ultimately, these galaxies are all that we care about. For this first image, that means we care about the galaxy centered at $(x,y) \approx (1... | # complete
Problem 5) Multiwavelength Photometry
Ultimately we want to measure colors for these galaxies. We now know the $r$-band magnitude for galaxy 85698, we need to measure the $g$ and $i$ band magnitudes as well.
Problem 5a
Using the various pieces described above, write a function to measure the magnitude of the galaxy at the center of the image. | def cluster_galaxy_photometry(data):
'''
Determine the magnitude of the galaxy at the center of the image
Parameters
----------
data : array-like
Background subtracted 2D image centered on the galaxy
of interest
Returns
-------
mag : float
Magnitude of t... | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Problem 5b
Confirm that the function calculates the same $r$-band mag that was calculated in Problem 4. | # complete
print("""Previously, we found m = {:.3f} mag.
This new function finds m = {:.3f} mag.""".format( # complete | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Problem 5c
Use this new function to calculate the galaxy magnitude in the $g$ and the $i$ band, and determine the $g - r$ and $r - i$ colors of the galaxy. | g_data = fits.getdata( # complete
i_data = fits.getdata( # complete
# complete
# complete
# complete
print("""The g-r color = {:.3f} +/- {:.3f} mag.
The r-i color = {:.3f} +/- {:.3f} mag""".format(g_mag - r_mag, np.hypot(g_mag_unc, r_mag_unc),
r_mag - i_mag, np.hypot(r_... | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
But wait!
Problem 5d
Was this calculation "fair"?
Hint - this is a relatively red galaxy.
Solution 5d
This calculation was not "fair" because identical apertures were not used in all 3 filters.
Problem 5e
[Assuming your calculation was not fair] Calculate the $g - r$ and $r - i$ colors of the galaxy in a consistent f... | def cluster_galaxy_aperture(data):
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
return aperture
def cluster_galaxy_phot(data, aperture):
# complete
# complete
# complete
# complete
# complete
return mag, mag_unc
r_ap = # complete
# c... | Sessions/Session05/Day5/MultiwavelengthPhotometry.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Using Iris Dataset | # import some data to play with
iris = datasets.load_iris()
# look at individual aspects by uncommenting the below
#iris.data
#iris.feature_names
#iris.target
#iris.target_names | notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb | gregoryg/cdh-projects | apache-2.0 |
The original author converted the data to pandas DataFrames. Note that we have separated out the inputs (x) and the outputs/labels (y). | # Store the inputs as a Pandas Dataframe and set the column names
x = pd.DataFrame(iris.data)
x.columns = ['Sepal_Length','Sepal_Width','Petal_Length','Petal_Width']
y = pd.DataFrame(iris.target)
y.columns = ['Targets'] | notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb | gregoryg/cdh-projects | apache-2.0 |
Visualise the data
It is always important to have a look at the data. We will do this by plotting two scatter plots. One looking at the Sepal values and another looking at Petal. We will also set it to use some colours so it is clearer. | # Set the size of the plot
plt.figure(figsize=(14,7))
# Create a colormap
colormap = np.array(['red', 'lime', 'black'])
# Plot Sepal
plt.subplot(1, 2, 1)
plt.scatter(x.Sepal_Length, x.Sepal_Width, c=colormap[y.Targets], s=40)
plt.title('Sepal')
plt.subplot(1, 2, 2)
plt.scatter(x.Petal_Length, x.Petal_Width, c=colorm... | notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb | gregoryg/cdh-projects | apache-2.0 |
Build the K Means Model - non-Spark example
This is the easy part, providing you have the data in the correct format (which we do). Here we only need two lines. First we create the model and specify the number of clusters the model should find (n_clusters=3); next we fit the model to the data. | # K Means Cluster
model = KMeans(n_clusters=3)
model.fit(x)
# This is what KMeans thought
model.labels_ | notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb | gregoryg/cdh-projects | apache-2.0 |
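model.labels_ records each sample's nearest centroid. The assignment step of k-means reduces to an argmin over distances (tiny made-up data below):

```python
import numpy as np

X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
centroids = np.array([[0.1, 0.05], [5.05, 4.95]])

# Distance from every sample to every centroid, then argmin per sample
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
labels = dists.argmin(axis=1)
print(labels)  # [0 0 1 1]
```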
Visualise the classifier results
Let's plot the actual classes against the predicted classes from the K Means model.
Here we are plotting the Petal Length and Width; however, each plot changes the colors of the points using either c=colormap[y.Targets] for the original classes or c=colormap[model.labels_] for the predicted classes. | # View the results
# Set the size of the plot
plt.figure(figsize=(14,7))
# Create a colormap
colormap = np.array(['red', 'lime', 'black'])
# Plot the Original Classifications
plt.subplot(1, 2, 1)
plt.scatter(x.Petal_Length, x.Petal_Width, c=colormap[y.Targets], s=40)
plt.title('Real Classification')
# Plot the Model... | notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb | gregoryg/cdh-projects | apache-2.0 |
Fixing the coloring
Here we are going to change the class labels. We are not changing any of the classification groups; we are simply giving each group the correct number. We need to do this to measure the performance.
In the code below we use np.choose() to assign new values; basically, we are changing the 1s to 0s and the 0s to 1s. | # The fix, we convert all the 1s to 0s and 0s to 1s.
predY = np.choose(model.labels_, [1, 0, 2]).astype(np.int64)
print (model.labels_)
print (predY) | notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb | gregoryg/cdh-projects | apache-2.0 |
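To see exactly what np.choose does here: for each element of the label array it picks the entry of the choice list at that index, so [1, 0, 2] swaps clusters 0 and 1 while leaving cluster 2 alone (the label array below is a made-up example):

```python
import numpy as np

labels = np.array([1, 1, 0, 2, 0, 1])   # hypothetical cluster labels
pred = np.choose(labels, [1, 0, 2])     # 0 -> 1, 1 -> 0, 2 -> 2
print(pred)  # [0 0 1 2 1 0]
```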
Re-plot
Now we can re-plot the data as before, but using predY instead of model.labels_. | # View the results
# Set the size of the plot
plt.figure(figsize=(14,7))
# Create a colormap
colormap = np.array(['red', 'lime', 'black'])
# Plot Original
plt.subplot(1, 2, 1)
plt.scatter(x.Petal_Length, x.Petal_Width, c=colormap[y.Targets], s=40)
plt.title('Real Classification')
# Plot Predicted with corrected value... | notebooks/jupyter/datascience/K Means Cluster Visualization.ipynb | gregoryg/cdh-projects | apache-2.0 |