Unnamed: 0 (int64, 0–16k) | text_prompt (stringlengths 110–62.1k) | code_prompt (stringlengths 37–152k) |
---|---|---|
100 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow, Mini-Batch/Stochastic Gradient Descent with Momentum
Step1: Input
We generate a degree-5 sample
Step2: Problem
Compute the coefficients that best fit the sample, knowing it is of degree 5
We generate the degree-5 coefficient matrix
Step3: Solution 1
Step4: <img src="capturas/gradient_descent.png">
Solution 2
Step5: <img src="capturas/mini_batch_gradient_descent.png">
Solution 3
Step6: <img src="capturas/minibatch_gradient_descent_momentum.png">
Solution 4
Step7: <img src="capturas/stocastic_gradient_descent_momentum_fail.png"> | Python Code:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
%matplotlib inline
import sys
import time
from IPython.display import Image
sys.path.append('/home/pedro/git/ElCuadernillo/ElCuadernillo/20160301_TensorFlowGradientDescentWithMomentum')
import gradient_descent_with_momentum as gdt
Explanation: TensorFlow, Mini-Batch/Stochastic Gradient Descent with Momentum
End of explanation
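For reference (added here for clarity; not part of the original notebook), the classical momentum update used by the solutions below can be written as
$$v_{t+1} = \mu\, v_t - \eta\, \nabla L(w_t), \qquad w_{t+1} = w_t + v_{t+1},$$
where $\mu$ is the momentum coefficient (0.0 in Solutions 1–2, 0.9 in Solutions 3–4) and $\eta$ is the learning rate.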
grado=4
tamano=100000
x,y,coeficentes=gdt.generar_muestra(grado,tamano)
print ("Coeficientes: ",coeficentes)
plt.plot(x,y,'.')
Explanation: Input
We generate a degree-5 sample
End of explanation
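A minimal sketch of what a sample generator like gdt.generar_muestra might look like (hypothetical; the real implementation lives in the author's ElCuadernillo repository and is not shown here):

```python
import numpy as np

def generar_muestra_sketch(grado, tamano, ruido=1.0):
    """Illustrative only: noisy sample from a random polynomial of the given degree."""
    coeficientes = np.random.randn(grado + 1)        # true coefficients, highest degree first
    x = np.random.uniform(-1.0, 1.0, tamano)         # sample points
    y = np.polyval(coeficientes, x) + np.random.randn(tamano) * ruido  # polynomial plus Gaussian noise
    return x, y, coeficientes
```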
train_x=gdt.generar_matriz_coeficientes(x,grado) # Design matrix A
train_y=np.reshape(y,(y.shape[0],-1)) # Column vector
learning_rate_inicial=1e-2
activar_sumario=True
Explanation: Problem
Compute the coefficients that best fit the sample, knowing it is of degree 5
We generate the degree-5 coefficient matrix
End of explanation
pesos_gd,ecm_gd,tiempo_gd=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=1,
learning_rate_inicial=learning_rate_inicial,
momentum=0.0,
activar_sumario=activar_sumario,
prefijo='GD_')
Explanation: Solution 1: Using gradient descent
End of explanation
pesos_mgd,ecm_mgd,tiempo_mgd=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=10000,
learning_rate_inicial=learning_rate_inicial,
momentum=0.0,
activar_sumario=activar_sumario,
prefijo='mGD_')
Explanation: <img src="capturas/gradient_descent.png">
Solution 2: Using mini-batch gradient descent (num_mini_batch=10000)
End of explanation
pesos_mgdm,ecm_mgdm,tiempo_mgdm=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=10000,
learning_rate_inicial=learning_rate_inicial,
momentum=0.9,
activar_sumario=activar_sumario,
prefijo='mGDM_')
Explanation: <img src="capturas/mini_batch_gradient_descent.png">
Solution 3: Using mini-batch gradient descent with momentum (num_mini_batch=10000)
End of explanation
pesos_sgdm,ecm_sgdm,tiempo_sgdm=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=len(train_x),
learning_rate_inicial=learning_rate_inicial,
momentum=0.9,
activar_sumario=activar_sumario,
prefijo='SGDM_')
Explanation: <img src="capturas/minibatch_gradient_descent_momentum.png">
Solution 4: Using stochastic gradient descent with momentum (mini-batch size of 1)
End of explanation
pesos_sgdm,ecm_sgdm,tiempo_sgdm=gdt.gradient_descent_with_momentum(train_x,
train_y,
num_mini_batch=len(train_x),
learning_rate_inicial=1e-3, # Decrease the learning rate
momentum=0.9,
activar_sumario=activar_sumario,
prefijo='SGDM_2_')
Explanation: <img src="capturas/stocastic_gradient_descent_momentum_fail.png">
End of explanation |
101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head(50)
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*100].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
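As a tiny illustration (toy data, not taken from the rides dataset) of what get_dummies produces:

```python
import pandas as pd

toy = pd.DataFrame({'season': [1, 2, 3, 1]})
# One 0/1 indicator column per distinct value: season_1, season_2, season_3
print(pd.get_dummies(toy['season'], prefix='season'))
```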
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
data.head()
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
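A quick note (added for clarity) on how the saved factors let us undo the scaling later, for example when plotting predictions:

```python
# Standardize with x' = (x - mean) / std; invert with x = x' * std + mean
mean, std = scaled_features['cnt']
unscaled_cnt = data['cnt'] * std + mean  # recovers the original ride counts
```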
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y-final_outputs # Output layer error is the difference between desired target and actual output.
output_error_term = error
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(output_error_term, self.weights_hidden_to_output.T)
hidden_error_term = hidden_error*hidden_outputs*(1-hidden_outputs)
# TODO: Backpropagated error terms - Replace these values with your calculations.
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term*X[:,None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term*hidden_outputs[:,None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output +=self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden +=self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
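For reference, the two derivatives needed in the backward pass are standard results (written out here for clarity):
$$\sigma(x) = \frac{1}{1+e^{-x}}, \qquad \sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr), \qquad f(x) = x \;\Rightarrow\; f'(x) = 1.$$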
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 2000
learning_rate = 0.08
hidden_nodes = 60
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Step1: Parameters used in Manuscript
Step2: Load in data
Load in experimental conditions for Particle Display experiments
The mlpd_param_df dataframe contains the experimental information for MLPD.
Parameters are
Step3: Load CSVs
Step7: Helper functions
Step8: Data Analysis
Step9: Generate Figure Data
Here, we generate the raw data used to build Venn diagrams. The final figures were rendered in Figma. | Python Code:
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
Explanation: Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Overview
This notebook summarizes the numbers of aptamers that appear to be enriched in positive pools for particular particle display experiments. These values are turned into Venn diagrams and pie charts in Figure 2.
The inputs are csvs, where each row is an aptamer and columns indicate the sequencing counts within each particle display subexperiment.
End of explanation
# Required coverage level for analysis. This is in units of number of apatamer
# particles (beads). This is used to minimize potential contamination.
# For example, a tolerated bead fraction of 0.2 means that if, based on read
# depth and number of beads, there are 100 reads expected per bead, then
# sequences with fewer than 20 reads would be excluded from analysis.
TOLERATED_BEAD_FRAC = 0.2
# Ratio cutoff between positive and negative pools to count as being real.
# The ratio is calculated normalized by read depth, so if the ratio is 0.5,
# then positive sequences are expected to have equal read depth (or more) in
# the positive pool as the negative pool. So, as a toy example, if the
# positive pool had 100 reads total and the negative pool had 200 reads total,
# then a sequence with 5 reads in the positive pool and 10 reads in the
# negative pool would have a ratio of 0.5.
POS_NEG_RATIO_CUTOFF = 0.5
# Minimum required reads (when 0 it uses only the above filters)
MIN_READ_THRESH = 0
Explanation: Parameters used in Manuscript
End of explanation
#@title Original PD Data Parameters
# Since these are small I'm going to embed in the colab.
apt_screened_list = [ 2.4*10**6, 2.4*10**6, 1.24*10**6]
apt_collected_list = [3.5 * 10**4, 8.5 * 10**4, 8 * 10**4]
seq_input = [10**5] * 3
conditions = ['round2_high_no_serum_positive',
'round2_medium_no_serum_positive',
'round2_low_no_serum_positive']
flags = ['round2_high_no_serum_flag', 'round2_medium_no_serum_flag',
'round2_low_no_serum_flag']
stringency = ['High', 'Medium', 'Low']
pd_param_df = pd.DataFrame.from_dict({'apt_screened': apt_screened_list,
'apt_collected': apt_collected_list,
'seq_input': seq_input,
'condition': conditions,
'condition_flag': flags,
'stringency': stringency})
pd_param_df
#@title MLPD Data Parameters
apt_screened_list = [ 3283890.016, 6628573.952, 5801469.696, 3508412.512]
apt_collected_list = [12204, 50353, 153845, 201255]
seq_input = [200000] * 4
conditions = ['round1_very_positive',
'round1_high_positive',
'round1_medium_positive',
'round1_low_positive']
flags = ['round1_very_flag', 'round1_high_flag', 'round1_medium_flag',
'round1_low_flag']
stringency = ['Very High', 'High', 'Medium', 'Low']
mlpd_param_df = pd.DataFrame.from_dict({'apt_screened': apt_screened_list,
'apt_collected': apt_collected_list,
'seq_input': seq_input,
'condition': conditions,
'condition_flag': flags,
'stringency': stringency})
mlpd_param_df
Explanation: Load in data
Load in experimental conditions for Particle Display experiments
The mlpd_param_df dataframe contains the experimental information for MLPD.
Parameters are:
* apt_collected: The number of aptamer bead particles collected during the FACS experiment of particle display.
* apt_screened: The number of aptamer bead particles screened in order to get the apt_collected beads.
* seq_input: The estimated number of unique sequences in the input sequence library during bead construction.
End of explanation
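A worked illustration (added for clarity; not part of the original analysis) of how these experimental parameters combine with TOLERATED_BEAD_FRAC to set the per-sequence read-depth threshold, using the MLPD low-stringency condition:

```python
apt_screened, apt_collected, seq_input = 3508412.512, 201255, 200000
tolerated_bead_frac = 0.2

expected_bead_coverage = apt_screened / seq_input                       # ~17.5 beads expected per input sequence
tolerated_bead_coverage = expected_bead_coverage * tolerated_bead_frac  # ~3.5 beads worth of reads required
min_pool_fraction = (1.0 / apt_collected) * tolerated_bead_coverage     # ~1.7e-5 of the positive-pool reads
print(expected_bead_coverage, tolerated_bead_coverage, min_pool_fraction)
```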
# PD and MLPD sequencing counts across experiments
# Upload pd_clustered_input_data_manuscript.csv and mlpd_input_data_manuscript.csv
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# Load PD Data
with open('pd_clustered_input_data_manuscript.csv') as f:
pd_input_df = pd.read_csv(f)
# Load MLPD data
with open('mlpd_input_data_manuscript.csv') as f:
mlpd_input_df = pd.read_csv(f)
Explanation: Load CSVs
End of explanation
def generate_cutoffs_via_PD_stats(df, col, apt_screened, apt_collected, seq_input,
tolerated_bead_frac, min_read_thresh):
"""Use the experimental parameters to determine sequences passing thresholds.
Args:
df: Pandas dataframe with experiment results. Must have columns named
after the col function parameter, containing the read count, and a
column 'sequence'.
col: The string name of the column in the experiment dataframe with the
read count.
apt_screened: The integer number of aptamers screened, from the experiment
parameters.
apt_collected: The integer number of aptamers collected, from the experiment
parameters.
seq_input: The integer number of unique sequences in the sequence library
used to construct the aptamer particles.
tolerated_bead_frac: The float tolerated bead fraction threshold. In other
words, the sequencing depth required to keep a sequence, in units of
fractions of a bead based on the average expected read depth per bead.
min_read_thresh: The integer minimum number of reads that a sequence
must have in order not to be filtered.
Returns:
Pandas series of the sequences from the dataframe that pass filter.
"""
expected_bead_coverage = apt_screened / seq_input
tolerated_bead_coverage = expected_bead_coverage * tolerated_bead_frac
bead_full_min_sequence_coverage = (1. / apt_collected) * tolerated_bead_coverage
col_sum = df[col].sum()
# Look at sequenced counts calculated observed fraction of pool and raw count.
seqs = df[((df[col]/col_sum) > bead_full_min_sequence_coverage) & # Pool frac.
(df[col] > min_read_thresh) # Raw count
].sequence
return seqs
def generate_pos_neg_normalized_ratio(df, col_prefix):
"""Adds fraction columns to the dataframe with the calculated pos/neg ratio.
Args:
df: Pandas dataframe, expected to have columns [col_prefix]_positive and
[col_prefix]_negative contain read counts for the positive and negative
selection conditions, respectively.
col_prefix: String prefix of the columns to use to calculate the ratio.
For example 'round1_very_positive'.
Returns:
The original dataframe with three new columns:
[col_prefix]_positive_frac contains the fraction of the total positive
pool that is this sequence.
[col_prefix]_negative_frac contains the fraction of the total negative
pool that is this sequence.
[col_prefix]_pos_neg_ratio: The read-depth normalized fraction of the
sequence that ended in the positive pool.
"""
col_pos = col_prefix + '_' + 'positive'
col_neg = col_prefix + '_' + 'negative'
df[col_pos + '_frac'] = df[col_pos] / df[col_pos].sum()
df[col_neg + '_frac'] = df[col_neg] / df[col_neg].sum()
df[col_prefix + '_pos_neg_ratio'] = df[col_pos + '_frac'] / (
df[col_pos + '_frac'] + df[col_neg + '_frac'])
return df
def build_seq_sets_from_df (input_param_df, input_df, tolerated_bead_frac,
pos_neg_ratio, min_read_thresh):
Sets flags for sequences based on whether they clear stringencies.
This function adds a column 'seq_set' to the input_param_df (one row per
stringency level of a particle display experiment) containing all the
sequences in the experiment that passed that stringency level in the
experiment.
Args:
input_param_df: Pandas dataframe with experimental parameters. Expected
to have one row per stringency level in the experiment and
columns 'apt_screened', 'apt_collected', 'seq_input', 'condition', and
'condition_flag'.
input_df: Pandas dataframe with the experimental results (counts per
sequence) for the experiment covered in the input_param_df. Expected
to have a [col_prefix]_pos_neg_ratio column for each row of the
input_param_df (i.e. each stringency level).
tolerated_bead_frac: Float representing the minimum sequence depth, in
units of expected beads, for a sequence to be used in analysis.
pos_neg_ratio: The threshold for the pos_neg_ratio column for a sequence
to be used in the analysis.
min_read_thresh: The integer minimum number of reads for a sequence to
be used in the analysis (not normalized, a straight count.)
Returns:
Nothing.
"""
for _, row in input_param_df.iterrows():
# Get parameters to calculate bead fraction.
apt_screened = row['apt_screened']
apt_collected = row['apt_collected']
seq_input = row['seq_input']
condition = row['condition']
flag = row['condition_flag']
# Get sequences above tolerated_bead_frac in positive pool.
tolerated_bead_frac_seqs = generate_cutoffs_via_PD_stats(
input_df, condition, apt_screened, apt_collected, seq_input,
tolerated_bead_frac, min_read_thresh)
# Intersect with seqs > normalized positive sequencing count ratio.
condition_pre = condition.split('_positive')[0]
ratio_col = '%s_pos_neg_ratio' % (condition_pre)
pos_frac_seqs = input_df[input_df[ratio_col] > pos_neg_ratio].sequence
seqs = set(tolerated_bead_frac_seqs) & set(pos_frac_seqs)
input_df[flag] = input_df.sequence.isin(set(seqs))
Explanation: Helper functions
End of explanation
#@title Add positive_frac / (positive_frac + negative_frac) col to df
for col_prefix in ['round1_very', 'round1_high', 'round1_medium', 'round1_low']:
mlpd_input_df = generate_pos_neg_normalized_ratio(mlpd_input_df, col_prefix)
for col_prefix in ['round2_high_no_serum', 'round2_medium_no_serum', 'round2_low_no_serum']:
pd_input_df = generate_pos_neg_normalized_ratio(pd_input_df, col_prefix)
#@title Measure consistency of particle display data when increasing stringency thresholds within each experimental set (i.e., PD and MLPD)
build_seq_sets_from_df(pd_param_df, pd_input_df, TOLERATED_BEAD_FRAC,
POS_NEG_RATIO_CUTOFF, MIN_READ_THRESH)
build_seq_sets_from_df(mlpd_param_df, mlpd_input_df, TOLERATED_BEAD_FRAC,
POS_NEG_RATIO_CUTOFF, MIN_READ_THRESH)
Explanation: Data Analysis
End of explanation
#@title Figure 2B Raw Data
pd_input_df.groupby('round2_low_no_serum_flag round2_medium_no_serum_flag round2_high_no_serum_flag'.split()).count()[['sequence']]
#@title Figure 2C Raw Data
# To build venn (green), sum preceding True flags to get consistent sets
# 512 nM = 5426+3 = 5429
# 512 & 128 nM = 2360+15 = 2375
# 512 & 128 & 32nM (including 8 nM) = 276+84 = 360
# To build venn (grey) Inconsistent flags are summed (ignoring 8nM)
# 128 nM only = 185 + 1 = 186
# 128 nM & 32 nM = 12+1 = 13
# 32 nM only = 2
# 32 nM and 512 nM only = 22+1 = 23
#
# To build pie, look at all round1_very_flags = True
# Green = 84
# Grey = 15+1+3+1+1 = 21
mlpd_input_df.groupby('round1_low_flag round1_medium_flag round1_high_flag round1_very_flag'.split()).count()[['sequence']]
Explanation: Generate Figure Data
Here, we generate the raw data used to build Venn diagrams. The final figures were rendered in Figma.
End of explanation |
103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Jupyter Notebooks
Share the compute process with everyone
How to Install
pip3 install jupyter
How to Run
jupyter notebook
Uses
PDF books
Blog posts
Inline graphics
Multiple languages available via kernel (R, julia, fortran, etc)
Can share Jupyter notebooks via Google drive using Jupyter Drive
Example
specific tools inside of jupyter notebook
Step1: We can make pretty graphs | Python Code:
%%timeit
maths = list()
for x in range(10):
maths.append(x**x)
%%timeit
maths = [x**x for x in range(10)]
# maths
Explanation: Jupyter Notebooks
Share the compute process with everyone
How to Install
pip3 install jupyter
How to Run
jupyter notebook
Uses
PDF books
Blog posts
Inline graphics
Multiple languages available via kernel (R, julia, fortran, etc)
Can share Jupyter notebooks via Google drive using Jupyter Drive
Example
specific tools inside of jupyter notebook
End of explanation
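As an optional aside (assuming a standard Jupyter installation), you can list the language kernels installed on your machine by running the following in a terminal:
jupyter kernelspec list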
import matplotlib.pyplot as plt
import math
import numpy as np
%matplotlib inline
t = np.arange(0., 5., 0.2)
plt.plot(t, t, 'r--', t, t**2, 'bs')
Explanation: We can make pretty graphs
End of explanation |
104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
DCGAN
Step1: Import TensorFlow and enable eager execution
Step2: Load the dataset
We are going to use the MNIST dataset to train the generator and the discriminator. The generator will then generate handwritten digits.
Step3: Use tf.data to create batches and shuffle the dataset
Step4: Write the generator and discriminator models
Generator
It is responsible for creating convincing images that are good enough to fool the discriminator.
It consists of Conv2DTranspose (Upsampling) layers. We start with a fully connected layer and upsample the image 2 times so as to reach the desired image size (mnist image size) which is (28, 28, 1).
We use leaky relu activation except for the last layer which uses tanh activation.
Discriminator
The discriminator is responsible for classifying the fake images from the real images.
In other words, the discriminator is given generated images (from the generator) and the real MNIST images. The job of the discriminator is to classify these images into fake (generated) and real (MNIST images).
Basically the generator should be good enough to fool the discriminator that the generated images are real.
Step5: Define the loss functions and the optimizer
Discriminator loss
The discriminator loss function takes 2 inputs; real images, generated images
real_loss is a sigmoid cross entropy loss of the real images and an array of ones (since these are the real images)
generated_loss is a sigmoid cross entropy loss of the generated images and an array of zeros (since these are the fake images)
Then the total_loss is the sum of real_loss and the generated_loss
Generator loss
It is a sigmoid cross entropy loss of the generated images and an array of ones
The discriminator and the generator optimizers are different since we will train them separately.
Step6: Checkpoints (Object-based saving)
Step7: Training
We start by iterating over the dataset
The generator is given noise as an input which, when passed through the generator model, will output an image that looks like a handwritten digit
The discriminator is given the real MNIST images as well as the generated images (from the generator).
Next, we calculate the generator and the discriminator loss.
Then, we calculate the gradients of loss with respect to both the generator and the discriminator variables (inputs) and apply those to the optimizer.
Generate Images
After training, it's time to generate some images!
We start by creating a noise array as an input to the generator
The generator will then convert the noise into handwritten images.
The last step is to plot the predictions and voila!
Step8: Restore the latest checkpoint
Step9: Display an image using the epoch number
Step10: Generate a GIF of all the saved images.
<!-- TODO(markdaoust): Remove the hack when Ipython version is updated -->
Step11: To download the animation from Colab uncomment the code below | Python Code:
# to generate gifs
!pip install imageio
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
DCGAN: An example with tf.keras and eager
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
This notebook demonstrates how to generate images of handwritten digits using tf.keras and eager execution. To do so, we use Deep Convolutional Generative Adverserial Networks (DCGAN).
This model takes about ~30 seconds per epoch (using tf.contrib.eager.defun to create graph functions) to train on a single Tesla K80 on Colab, as of July 2018.
Below is the output generated after training the generator and discriminator models for 150 epochs.
End of explanation
from __future__ import absolute_import, division, print_function
# Import TensorFlow >= 1.10 and enable eager execution
import tensorflow as tf
tf.enable_eager_execution()
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
Explanation: Import TensorFlow and enable eager execution
End of explanation
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
# We are normalizing the images to the range of [-1, 1]
train_images = (train_images - 127.5) / 127.5
BUFFER_SIZE = 60000
BATCH_SIZE = 256
Explanation: Load the dataset
We are going to use the MNIST dataset to train the generator and the discriminator. The generator will then generate handwritten digits.
End of explanation
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
Explanation: Use tf.data to create batches and shuffle the dataset
End of explanation
class Generator(tf.keras.Model):
def __init__(self):
super(Generator, self).__init__()
self.fc1 = tf.keras.layers.Dense(7*7*64, use_bias=False)
self.batchnorm1 = tf.keras.layers.BatchNormalization()
self.conv1 = tf.keras.layers.Conv2DTranspose(64, (5, 5), strides=(1, 1), padding='same', use_bias=False)
self.batchnorm2 = tf.keras.layers.BatchNormalization()
self.conv2 = tf.keras.layers.Conv2DTranspose(32, (5, 5), strides=(2, 2), padding='same', use_bias=False)
self.batchnorm3 = tf.keras.layers.BatchNormalization()
self.conv3 = tf.keras.layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False)
def call(self, x, training=True):
x = self.fc1(x)
x = self.batchnorm1(x, training=training)
x = tf.nn.relu(x)
x = tf.reshape(x, shape=(-1, 7, 7, 64))
x = self.conv1(x)
x = self.batchnorm2(x, training=training)
x = tf.nn.relu(x)
x = self.conv2(x)
x = self.batchnorm3(x, training=training)
x = tf.nn.relu(x)
x = tf.nn.tanh(self.conv3(x))
return x
class Discriminator(tf.keras.Model):
def __init__(self):
super(Discriminator, self).__init__()
self.conv1 = tf.keras.layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same')
self.conv2 = tf.keras.layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same')
self.dropout = tf.keras.layers.Dropout(0.3)
self.flatten = tf.keras.layers.Flatten()
self.fc1 = tf.keras.layers.Dense(1)
def call(self, x, training=True):
x = tf.nn.leaky_relu(self.conv1(x))
x = self.dropout(x, training=training)
x = tf.nn.leaky_relu(self.conv2(x))
x = self.dropout(x, training=training)
x = self.flatten(x)
x = self.fc1(x)
return x
generator = Generator()
discriminator = Discriminator()
# Defun gives 10 secs/epoch performance boost
generator.call = tf.contrib.eager.defun(generator.call)
discriminator.call = tf.contrib.eager.defun(discriminator.call)
Explanation: Write the generator and discriminator models
Generator
It is responsible for creating convincing images that are good enough to fool the discriminator.
It consists of Conv2DTranspose (Upsampling) layers. We start with a fully connected layer and upsample the image 2 times so as to reach the desired image size (mnist image size) which is (28, 28, 1).
We use leaky relu activation except for the last layer which uses tanh activation.
Discriminator
The discriminator is responsible for classifying the fake images from the real images.
In other words, the discriminator is given generated images (from the generator) and the real MNIST images. The job of the discriminator is to classify these images into fake (generated) and real (MNIST images).
Basically the generator should be good enough to fool the discriminator that the generated images are real.
End of explanation
def discriminator_loss(real_output, generated_output):
# [1,1,...,1] with real output since it is true and we want
# our generated examples to look like it
real_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.ones_like(real_output), logits=real_output)
# [0,0,...,0] with generated images since they are fake
generated_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.zeros_like(generated_output), logits=generated_output)
total_loss = real_loss + generated_loss
return total_loss
def generator_loss(generated_output):
return tf.losses.sigmoid_cross_entropy(tf.ones_like(generated_output), generated_output)
discriminator_optimizer = tf.train.AdamOptimizer(1e-4)
generator_optimizer = tf.train.AdamOptimizer(1e-4)
Explanation: Define the loss functions and the optimizer
Discriminator loss
The discriminator loss function takes 2 inputs; real images, generated images
real_loss is a sigmoid cross entropy loss of the real images and an array of ones (since these are the real images)
generated_loss is a sigmoid cross entropy loss of the generated images and an array of zeros (since these are the fake images)
Then the total_loss is the sum of real_loss and the generated_loss
Generator loss
It is a sigmoid cross entropy loss of the generated images and an array of ones
The discriminator and the generator optimizers are different since we will train them separately.
End of explanation
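Written out explicitly (equivalent to the sigmoid cross-entropy calls above; added for clarity), the two losses are
$$L_D = \mathrm{CE}\bigl(\mathbf{1}, D(x)\bigr) + \mathrm{CE}\bigl(\mathbf{0}, D(G(z))\bigr), \qquad L_G = \mathrm{CE}\bigl(\mathbf{1}, D(G(z))\bigr),$$
where $\mathrm{CE}$ denotes sigmoid cross-entropy on logits, $x$ is a real image, and $G(z)$ is a generated image.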
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
Explanation: Checkpoints (Object-based saving)
End of explanation
EPOCHS = 150
noise_dim = 100
num_examples_to_generate = 16
# keeping the random vector constant for generation (prediction) so
# it will be easier to see the improvement of the gan.
random_vector_for_generation = tf.random_normal([num_examples_to_generate,
noise_dim])
def generate_and_save_images(model, epoch, test_input):
# make sure the training parameter is set to False because we
# don't want to train the batchnorm layer when doing inference.
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4,4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
plt.axis('off')
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
def train(dataset, epochs, noise_dim):
for epoch in range(epochs):
start = time.time()
for images in dataset:
# generating noise from a normal distribution
noise = tf.random_normal([BATCH_SIZE, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
generated_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(generated_output)
disc_loss = discriminator_loss(real_output, generated_output)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.variables))
if epoch % 1 == 0:
display.clear_output(wait=True)
generate_and_save_images(generator,
epoch + 1,
random_vector_for_generation)
# saving (checkpoint) the model every 15 epochs
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time taken for epoch {} is {} sec'.format(epoch + 1,
time.time()-start))
# generating after the final epoch
display.clear_output(wait=True)
generate_and_save_images(generator,
epochs,
random_vector_for_generation)
train(train_dataset, EPOCHS, noise_dim)
Explanation: Training
We start by iterating over the dataset
The generator is given noise as an input which, when passed through the generator model, will output an image that looks like a handwritten digit
The discriminator is given the real MNIST images as well as the generated images (from the generator).
Next, we calculate the generator and the discriminator loss.
Then, we calculate the gradients of loss with respect to both the generator and the discriminator variables (inputs) and apply those to the optimizer.
Generate Images
After training, it's time to generate some images!
We start by creating a noise array as an input to the generator
The generator will then convert the noise into handwritten images.
The last step is to plot the predictions and voila!
End of explanation
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
Explanation: Restore the latest checkpoint
End of explanation
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(EPOCHS)
Explanation: Display an image using the epoch number
End of explanation
with imageio.get_writer('dcgan.gif', mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
last = -1
for i,filename in enumerate(filenames):
frame = 2*(i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
# this is a hack to display the gif inside the notebook
os.system('cp dcgan.gif dcgan.gif.png')
display.Image(filename="dcgan.gif.png")
Explanation: Generate a GIF of all the saved images.
<!-- TODO(markdaoust): Remove the hack when Ipython version is updated -->
End of explanation
#from google.colab import files
#files.download('dcgan.gif')
Explanation: To download the animation from Colab uncomment the code below:
End of explanation |
105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
My First Query
One of the most powerful features of Marvin 2.0 is ability to query the newly created DRP and DAP databases. You can do this in two ways
Step1: Let's search for galaxies with M$\star$ > 3 $\times$ 10$^{11}$ M$\odot$.
To specify our search parameter, M$_\star$, we must know the database table and name of the parameter. In this case, MaNGA uses the NASA-Sloan Atlas (NSA) for target selection so we will use the Sersic profile determination for stellar mass, which is the sersic_mass parameter of the nsa table, so our search parameter will be nsa.sersic_mass. You can also use nsa.sersic_logmass
Generically, the search parameter will take the form table.parameter.
Step2: Running the query produces a Results object (r1)
Step3: We will learn how to use the features of our Results object a little bit later, but first let's revise our search to see how more complex search queries work.
Multiple Search Criteria
Let's add to our search to find only galaxies with a redshift less than 0.1.
Redshift is the z parameter and is also in the nsa table, so its full search parameter designation is nsa.z.
Step4: Compound Search Statements
We were hoping for a few more than 3 galaxies, so let's try to increase our search by broadening the criteria to also include galaxies with 127 fiber IFUs and a b/a ratio of at least 0.95.
To find 127 fiber IFUs, we'll use the name parameter of the ifu table, which means the full search parameter is ifu.name. However, ifu.name returns the IFU design name, such as 12701, so we need to to set the value to 127*.
The b/a ratio is in nsa table as the ba90 parameter.
We're also going to join this to our previous query with an OR operator and use parentheses to group our individual search statements into a compound search statement.
Step5: Design Your Own Search
OK, now it's your turn to try designing a search.
Exercise
Step6: You should get 8 results
Step7: Go ahead and try to create some new searches on your own from the parameter list. Please feel free to also try out some of the same searches on the Marvin-web Search page.
Returning Bonus Parameters
Often you want to run a query and see the value of parameters that you didn't explicitly search on. For instance, you want to find galaxies above a redshift of 0.1 and would like to know their RA and DECs.
In Marvin-tools, this is as easy as specifying the returnparams option with either a string (for a single bonus parameter) or a list of strings (for multiple bonus parameters). | Python Code:
# Python 2/3 compatibility
from __future__ import print_function, division, absolute_import
from marvin import config
config.mode = 'remote'
config.setRelease('MPL-4')
from marvin.tools.query import Query
Explanation: My First Query
One of the most powerful features of Marvin 2.0 is ability to query the newly created DRP and DAP databases. You can do this in two ways:
1. via the Marvin-web Search page or
2. via Python (in the terminal/notebook/script) with Marvin-tools.
The best part is that both interfaces use the same underlying query structure, so your input search will be the same. Here we will run a few queries with Marvin-tools to learn the basics of how to construct a query and also test drive some of the more advanced features that are unique to the Marvin-tools version of querying.
End of explanation
myquery1 = 'nsa.sersic_mass > 3e11'
# or
myquery1 = 'nsa.sersic_logmass > 11.47'
q1 = Query(searchfilter=myquery1)
r1 = q1.run()
Explanation: Let's search for galaxies with M$\star$ > 3 $\times$ 10$^{11}$ M$\odot$.
To specify our search parameter, M$_\star$, we must know the database table and name of the parameter. In this case, MaNGA uses the NASA-Sloan Atlas (NSA) for target selection so we will use the Sersic profile determination for stellar mass, which is the sersic_mass parameter of the nsa table, so our search parameter will be nsa.sersic_mass. You can also use nsa.sersic_logmass
Generically, the search parameter will take the form table.parameter.
End of explanation
# show results
r1.results
Explanation: Running the query produces a Results object (r1):
End of explanation
myquery2 = 'nsa.sersic_mass > 3e11 AND nsa.z < 0.1'
q2 = Query(searchfilter=myquery2)
r2 = q2.run()
r2.results
Explanation: We will learn how to use the features of our Results object a little bit later, but first let's revise our search to see how more complex search queries work.
Multiple Search Criteria
Let's add to our search to find only galaxies with a redshift less than 0.1.
Redshift is the z parameter and is also in the nsa table, so its full search parameter designation is nsa.z.
End of explanation
myquery3 = '(nsa.sersic_mass > 3e11 AND nsa.z < 0.1) OR (ifu.name=127* AND nsa.ba90 >= 0.95)'
q3 = Query(searchfilter=myquery3)
r3 = q3.run()
r3.results
Explanation: Compound Search Statements
We were hoping for a few more than 3 galaxies, so let's try to increase our search by broadening the criteria to also include galaxies with 127 fiber IFUs and a b/a ratio of at least 0.95.
To find 127 fiber IFUs, we'll use the name parameter of the ifu table, which means the full search parameter is ifu.name. However, ifu.name returns the IFU design name, such as 12701, so we need to set the value to 127*.
The b/a ratio is in the nsa table as the ba90 parameter.
We're also going to join this to our previous query with an OR operator and use parentheses to group our individual search statements into a compound search statement.
End of explanation
# Enter your search here
Explanation: Design Your Own Search
OK, now it's your turn to try designing a search.
Exercise: Write a search filter that will find galaxies with a redshift less than 0.02 that were observed with the 1901 IFU?
End of explanation
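One filter that should satisfy this exercise is sketched below (a possible solution, not part of the original notebook; the variable names myquery4/q4/r4 are arbitrary):
myquery4 = 'nsa.z < 0.02 AND ifu.name=1901'
q4 = Query(searchfilter=myquery4)
r4 = q4.run()
r4.results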
# You might have to do an svn update to get this to work (otherwise try the next cell)
q = Query()
q.get_available_params()
# try this if the previous cell didn't return a list of parameters
from marvin.api.api import Interaction
from pprint import pprint
url = config.urlmap['api']['getparams']['url']
ii = Interaction(route=url)
mykeys = ii.getData()
pprint(mykeys)
Explanation: You should get 8 results:
[NamedTuple(mangaid='1-22438', plate=7992, name='1901', z=0.016383046284318),
NamedTuple(mangaid='1-113520', plate=7815, name='1901', z=0.0167652331292629),
NamedTuple(mangaid='1-113698', plate=8618, name='1901', z=0.0167444702237844),
NamedTuple(mangaid='1-134004', plate=8486, name='1901', z=0.0185601413249969),
NamedTuple(mangaid='1-155903', plate=8439, name='1901', z=0.0163660924881697),
NamedTuple(mangaid='1-167079', plate=8459, name='1901', z=0.0157109703868628),
NamedTuple(mangaid='1-209729', plate=8549, name='1901', z=0.0195561610162258),
NamedTuple(mangaid='1-277339', plate=8254, name='1901', z=0.0192211158573627)]
Finding the Available Parameters
Now you might want to go out and try all of the interesting queries that you've been saving up, but you don't know what the parameters are called or what database table they are in.
You can find all of the available parameters by:
1. clicking on the Return Parameters dropdown menu on the left side of the Marvin-web Search page,
2. reading the Marvin Docs page, or
3. via Marvin-tools (see next two cells)
End of explanation
myquery5 = 'nsa.z > 0.1'
bonusparams5 = ['cube.ra', 'cube.dec']
# bonusparams5 = 'cube.ra' # This works too
q5 = Query(searchfilter=myquery5, returnparams=bonusparams5)
r5 = q5.run()
r5.results
Explanation: Go ahead and try to create some new searches on your own from the parameter list. Please feel free to also try out some of the same searches on the Marvin-web Search page.
Returning Bonus Parameters
Often you want to run a query and see the value of parameters that you didn't explicitly search on. For instance, you want to find galaxies above a redshift of 0.1 and would like to know their RA and DECs.
In Marvin-tools, this is as easy as specifying the returnparams option with either a string (for a single bonus parameter) or a list of strings (for multiple bonus parameters).
End of explanation |
106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'bnu-esm-1-1', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: BNU
Source ID: BNU-ESM-1-1
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
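As a purely hypothetical illustration of the call signature shown above (the name and email are invented, not real document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")   # illustrative only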
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
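For illustration only, a STRING property takes a single DOC.set_value call; the name below is just one of the examples quoted in the description, not the documented BNU-ESM-1-1 entry:
# DOC.set_value("MOM 5.0")   # hypothetical model name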
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
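A sketch of how a 1.N ENUM property is typically filled in these notebooks -- one DOC.set_value call per selected choice from the valid-choices list above; the two choices shown are hypothetical, not the documented BNU-ESM-1-1 settings:
# DOC.set_value("Primitive equations")   # illustrative choice
# DOC.set_value("Boussinesq")            # illustrative choice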
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
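As a hedged illustration, a FLOAT property is set with a single numeric value; many models use a specific heat close to the TEOS-10 seawater value of about 3991.87 J/(kg K), but the actual BNU-ESM-1-1 setting may differ:
# DOC.set_value(3992.0)   # illustrative value only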
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
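Similarly for the Boussinesq reference density: values of roughly 1025-1035 kg/m3 are common in z-coordinate models, but the model's own rhozero should be entered here:
# DOC.set_value(1035.0)   # illustrative value only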
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
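A BOOLEAN property is set with True or False; fixed-in-time bathymetry is the usual CMIP6 configuration, although that is an assumption here rather than a documented BNU-ESM-1-1 fact:
# DOC.set_value(True)   # illustrative only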
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
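Purely as an order-of-magnitude illustration (surface levels of roughly 10 m are common, but the real value is model specific):
# DOC.set_value(10.0)   # illustrative value only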
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
107 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Create a TFX pipeline using templates with Local orchestrator
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: NOTE: There might be some errors during package installation. Please ignore these errors at this moment.
Step3: Let's check the version of TFX.
bash
python -c "from tfx import version ; print('TFX version
Step4: And, it's done. We are ready to create a pipeline.
Step 2. Copy predefined template to your project directory.
In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the PIPELINE_NAME below. This will also become the name of the project directory where your files will be put.
bash
export PIPELINE_NAME="my_pipeline"
export PROJECT_DIR=~/tfx/${PIPELINE_NAME}
Step5: TFX includes the taxi template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point.
The tfx template copy CLI command copies predefined template files into your project directory.
sh
tfx template copy \
--pipeline_name="${PIPELINE_NAME}" \
--destination_path="${PROJECT_DIR}" \
--model=taxi
Step6: Change the working directory context in this notebook to the project directory.
bash
cd ${PROJECT_DIR}
Step7: Step 3. Browse your copied source files.
The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The taxi template uses the same Chicago Taxi dataset and ML model as the Airflow Tutorial.
In Google Colab, you can browse files by clicking a folder icon on the left. Files should be copied under the project directory, whose name is my_pipeline in this case. You can click directory names to see the content of the directory, and double-click file names to open them.
Here is a brief introduction to each of the Python files.
- pipeline - This directory contains the definition of the pipeline
- configs.py — defines common constants for pipeline runners
- pipeline.py — defines TFX components and a pipeline
- models - This directory contains ML model definitions.
- features.py, features_test.py — defines features for the model
- preprocessing.py, preprocessing_test.py — defines preprocessing
jobs using tf::Transform
Step8: Step 4. Run your first TFX pipeline
You can create a pipeline using pipeline create command.
bash
tfx pipeline create --engine=local --pipeline_path=local_runner.py
Step9: Then, you can run the created pipeline using run create command.
sh
tfx run create --engine=local --pipeline_name="${PIPELINE_NAME}"
Step10: If successful, you'll see Component CsvExampleGen is finished. When you copy the template, only one component, CsvExampleGen, is included in the pipeline.
Step 5. Add components for data validation.
In this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator. If you are interested in data validation, please see Get started with Tensorflow Data Validation.
We will modify copied pipeline definition in pipeline/pipeline.py. If you are working on your local environment, use your favorite editor to edit the file. If you are working on Google Colab,
Click folder icon on the left to open Files view.
Click my_pipeline to open the directory and click pipeline directory to open and double-click pipeline.py to open the file.
Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline. (Tip: find comments containing TODO(step 5):)
Step11: You should be able to see the output log from the added components. Our pipeline creates output artifacts in tfx_pipeline_output/my_pipeline directory.
Step 6. Add components for training.
In this step, you will add components for training and model validation including Transform, Trainer, Resolver, Evaluator, and Pusher.
Open pipeline/pipeline.py. Find and uncomment 5 lines which add Transform, Trainer, Resolver, Evaluator and Pusher to the pipeline. (Tip: find TODO(step 6):)
Step12: When this execution run finishes successfully, you have now created and run your first TFX pipeline using Local orchestrator!
NOTE: Every time you create a run, every component runs again even if its inputs did not change; enabling pipeline caching (enable_cache=True) lets you skip those executions.
Step13: You should specify your GCP project name to access BigQuery resources using TFX. Set GOOGLE_CLOUD_PROJECT environment variable to your project name.
sh
export GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_NAME_HERE
Step14: Open pipeline/pipeline.py. Comment out CsvExampleGen and uncomment the line which create an instance of BigQueryExampleGen. You also need to uncomment query argument of the create_pipeline function.
We need to specify which GCP project to use for BigQuery again, and this is done by setting --project in beam_pipeline_args when creating a pipeline.
Open pipeline/configs.py. Uncomment the definition of BIG_QUERY__WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS and BIG_QUERY_QUERY. You should replace the project id and the region value in this file with the correct values for your GCP project.
Open local_runner.py. Uncomment two arguments, query and beam_pipeline_args, for create_pipeline() method.
Now the pipeline is ready to use BigQuery as an example source. Update the pipeline and create a run as we did in step 5 and 6. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import sys
!{sys.executable} -m pip install --upgrade "tfx<2"
Explanation: Create a TFX pipeline using templates with Local orchestrator
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/template_local">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png"/>View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/template_local.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/template_local.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/template_local.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
</table></div>
Introduction
This document will provide instructions to create a TensorFlow Extended (TFX) pipeline
using templates which are provided with TFX Python package.
Most of instructions are Linux shell commands, and corresponding
Jupyter Notebook code cells which invoke those commands using ! are provided.
You will build a pipeline using Taxi Trips dataset
released by the City of Chicago. We strongly encourage you to try to build
your own pipeline using your dataset by utilizing this pipeline as a baseline.
We will build a pipeline which runs on local environment. If you are interested in using Kubeflow orchestrator on Google Cloud, please see TFX on Cloud AI Platform Pipelines tutorial.
Prerequisites
Linux / MacOS
Python >= 3.5.3
You can get all prerequisites easily by running this notebook on Google Colab.
Step 1. Set up your environment.
Throughout this document, we will present commands twice. Once as a copy-and-paste-ready shell command, once as a jupyter notebook cell. If you are using Colab, just skip shell script block and execute notebook cells.
You should prepare a development environment to build a pipeline.
Install tfx python package. We recommend use of virtualenv in the local environment. You can use following shell script snippet to set up your environment.
```sh
Create a virtualenv for tfx.
virtualenv -p python3 venv
source venv/bin/activate
Install python packages.
python -m pip install --upgrade "tfx<2"
```
If you are using colab:
End of explanation
# Set `PATH` to include user python binary directory.
HOME=%env HOME
PATH=%env PATH
%env PATH={PATH}:{HOME}/.local/bin
Explanation: NOTE: There might be some errors during package installation. For example,
ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible.
Please ignore these errors at this moment.
End of explanation
!python3 -c "from tfx import version ; print('TFX version: {}'.format(version.__version__))"
Explanation: Let's check the version of TFX.
bash
python -c "from tfx import version ; print('TFX version: {}'.format(version.__version__))"
End of explanation
PIPELINE_NAME="my_pipeline"
import os
# Create a project directory under Colab content directory.
PROJECT_DIR=os.path.join(os.sep,"content",PIPELINE_NAME)
Explanation: And, it's done. We are ready to create a pipeline.
Step 2. Copy predefined template to your project directory.
In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the PIPELINE_NAME below. This will also become the name of the project directory where your files will be put.
bash
export PIPELINE_NAME="my_pipeline"
export PROJECT_DIR=~/tfx/${PIPELINE_NAME}
End of explanation
!tfx template copy \
--pipeline_name={PIPELINE_NAME} \
--destination_path={PROJECT_DIR} \
--model=taxi
Explanation: TFX includes the taxi template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point.
The tfx template copy CLI command copies predefined template files into your project directory.
sh
tfx template copy \
--pipeline_name="${PIPELINE_NAME}" \
--destination_path="${PROJECT_DIR}" \
--model=taxi
End of explanation
%cd {PROJECT_DIR}
Explanation: Change the working directory context in this notebook to the project directory.
bash
cd ${PROJECT_DIR}
End of explanation
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
Explanation: Step 3. Browse your copied source files.
The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The taxi template uses the same Chicago Taxi dataset and ML model as the Airflow Tutorial.
In Google Colab, you can browse files by clicking a folder icon on the left. Files should be copied under the project directory, whose name is my_pipeline in this case. You can click directory names to see the content of the directory, and double-click file names to open them.
Here is a brief introduction to each of the Python files.
- pipeline - This directory contains the definition of the pipeline
- configs.py — defines common constants for pipeline runners
- pipeline.py — defines TFX components and a pipeline
- models - This directory contains ML model definitions.
- features.py, features_test.py — defines features for the model
- preprocessing.py, preprocessing_test.py — defines preprocessing
jobs using tf::Transform
- estimator - This directory contains an Estimator based model.
- constants.py — defines constants of the model
- model.py, model_test.py — defines DNN model using TF estimator
- keras - This directory contains a Keras based model.
- constants.py — defines constants of the model
- model.py, model_test.py — defines DNN model using Keras
- local_runner.py, kubeflow_runner.py — define runners for each orchestration engine
You might notice that there are some files with _test.py in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.
You can run unit tests by supplying the module name of test files with -m flag. You can usually get a module name by deleting .py extension and replacing / with .. For example:
bash
python -m models.features_test
End of explanation
!tfx pipeline create --engine=local --pipeline_path=local_runner.py
Explanation: Step 4. Run your first TFX pipeline
You can create a pipeline using pipeline create command.
bash
tfx pipeline create --engine=local --pipeline_path=local_runner.py
End of explanation
!tfx run create --engine=local --pipeline_name={PIPELINE_NAME}
Explanation: Then, you can run the created pipeline using run create command.
sh
tfx run create --engine=local --pipeline_name="${PIPELINE_NAME}"
End of explanation
# Update the pipeline
!tfx pipeline update --engine=local --pipeline_path=local_runner.py
# You can run the pipeline the same way.
!tfx run create --engine local --pipeline_name {PIPELINE_NAME}
Explanation: If successful, you'll see Component CsvExampleGen is finished. When you copy the template, only one component, CsvExampleGen, is included in the pipeline.
Step 5. Add components for data validation.
In this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator. If you are interested in data validation, please see Get started with Tensorflow Data Validation.
We will modify copied pipeline definition in pipeline/pipeline.py. If you are working on your local environment, use your favorite editor to edit the file. If you are working on Google Colab,
Click folder icon on the left to open Files view.
Click my_pipeline to open the directory and click pipeline directory to open and double-click pipeline.py to open the file.
Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline. (Tip: find comments containing TODO(step 5):).
Your change will be saved automatically in a few seconds. Make sure that the * mark in front of the pipeline.py disappeared in the tab title. There is no save button or shortcut for the file editor in Colab. Python files in file editor can be saved to the runtime environment even in playground mode.
You now need to update the existing pipeline with modified pipeline definition. Use the tfx pipeline update command to update your pipeline, followed by the tfx run create command to create a new execution run of your updated pipeline.
```sh
Update the pipeline
tfx pipeline update --engine=local --pipeline_path=local_runner.py
You can run the pipeline the same way.
tfx run create --engine local --pipeline_name "${PIPELINE_NAME}"
```
End of explanation
!tfx pipeline update --engine=local --pipeline_path=local_runner.py
!tfx run create --engine local --pipeline_name {PIPELINE_NAME}
Explanation: You should be able to see the output log from the added components. Our pipeline creates output artifacts in tfx_pipeline_output/my_pipeline directory.
Step 6. Add components for training.
In this step, you will add components for training and model validation including Transform, Trainer, Resolver, Evaluator, and Pusher.
Open pipeline/pipeline.py. Find and uncomment 5 lines which add Transform, Trainer, Resolver, Evaluator and Pusher to the pipeline. (Tip: find TODO(step 6):)
As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using tfx pipeline update, and create an execution run using tfx run create.
sh
tfx pipeline update --engine=local --pipeline_path=local_runner.py
tfx run create --engine local --pipeline_name "${PIPELINE_NAME}"
End of explanation
if 'google.colab' in sys.modules:
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
Explanation: When this execution run finishes successfully, you have now created and run your first TFX pipeline using Local orchestrator!
NOTE: You might have noticed that every time we create a pipeline run, every component runs again and again even though the input and the parameters were not changed.
It is a waste of time and resources, and you can skip those executions with pipeline caching. You can enable caching by specifying enable_cache=True for the Pipeline object in pipeline.py.
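A rough sketch of the relevant argument inside pipeline.py (the surrounding argument names follow the template but may differ slightly from your copy):
```python
return pipeline.Pipeline(
    pipeline_name=pipeline_name,
    pipeline_root=pipeline_root,
    components=components,
    enable_cache=True,  # skip re-running components whose inputs and parameters did not change
    metadata_connection_config=metadata_connection_config,
)
```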
Step 7. (Optional) Try BigQueryExampleGen.
[BigQuery] is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add BigQueryExampleGen to the pipeline.
You need a Google Cloud Platform account to use BigQuery. Please prepare a GCP project.
Login to your project using colab auth library or gcloud utility.
```sh
You need gcloud tool to login in local shell environment.
gcloud auth login
```
End of explanation
# Set your project name below.
# WARNING! ENTER your project name before running this cell.
%env GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_NAME_HERE
Explanation: You should specify your GCP project name to access BigQuery resources using TFX. Set GOOGLE_CLOUD_PROJECT environment variable to your project name.
sh
export GOOGLE_CLOUD_PROJECT=YOUR_PROJECT_NAME_HERE
End of explanation
!tfx pipeline update --engine=local --pipeline_path=local_runner.py
!tfx run create --engine local --pipeline_name {PIPELINE_NAME}
Explanation: Open pipeline/pipeline.py. Comment out CsvExampleGen and uncomment the line which creates an instance of BigQueryExampleGen. You also need to uncomment the query argument of the create_pipeline function.
We need to specify which GCP project to use for BigQuery again, and this is done by setting --project in beam_pipeline_args when creating a pipeline.
Open pipeline/configs.py. Uncomment the definition of BIG_QUERY__WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS and BIG_QUERY_QUERY. You should replace the project id and the region value in this file with the correct values for your GCP project.
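For orientation, a hypothetical illustration of the kind of values these constants take once uncommented (constant names as written above; the exact defaults live in the template's configs.py, and the temp location shown here is an assumption):
```python
GOOGLE_CLOUD_PROJECT = 'my-gcp-project'   # hypothetical project id
BIG_QUERY__WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS = [
    '--project=' + GOOGLE_CLOUD_PROJECT,  # GCP project billed for the BigQuery reads
    '--temp_location=gs://my-bucket/tmp', # hypothetical staging location
]
```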
Open local_runner.py. Uncomment two arguments, query and beam_pipeline_args, for create_pipeline() method.
Now the pipeline is ready to use BigQuery as an example source. Update the pipeline and create a run as we did in step 5 and 6.
End of explanation |
108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Indexing and Selection
| Operation | Syntax | Result |
|-------------------------------|----------------|-----------|
| Select column | df[col] | Series |
| Select row by label | df.loc[label] | Series |
| Select row by integer | df.iloc[loc] | Series |
| Select rows | df[start:stop] | DataFrame |
| Select rows with boolean mask | df[mask] | DataFrame |
Step1: selection using dictionary-like string
list of strings as index (note: double square brackets) | Python Code:
import pandas as pd
import numpy as np
produce_dict = {'veggies': ['potatoes', 'onions', 'peppers', 'carrots'],'fruits': ['apples', 'bananas', 'pineapple', 'berries']}
produce_df = pd.DataFrame(produce_dict)
produce_df
Explanation: Indexing and Selection
| Operation | Syntax | Result |
|-------------------------------|----------------|-----------|
| Select column | df[col] | Series |
| Select row by label | df.loc[label] | Series |
| Select row by integer | df.iloc[loc] | Series |
| Select rows | df[start:stop] | DataFrame |
| Select rows with boolean mask | df[mask] | DataFrame |
documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html
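A quick illustration of these operations on the small produce_df defined above (outputs omitted):
```python
produce_df['fruits']                            # select column -> Series
produce_df[['fruits', 'veggies']]               # list of labels -> DataFrame
produce_df.loc[0]                               # row by label -> Series
produce_df.iloc[2]                              # row by integer position -> Series
produce_df[1:3]                                 # slice of rows -> DataFrame
produce_df[produce_df['veggies'] == 'onions']   # boolean mask -> DataFrame
```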
End of explanation
df = pd.DataFrame(np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])
df2 = pd.DataFrame(np.random.randn(7, 3), columns=['A', 'B', 'C'])
sum_df = df + df2
sum_df
Explanation: selection using dictionary-like string
list of strings as index (note: double square brackets)
select row using integer index
select rows using integer slice
+ is over-loaded as concatenation operator
Data alignment and arithmetic
Data alignment between DataFrame objects automatically align on both the columns and the index (row labels).
Note locations for 'NaN'
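If the NaN produced by non-overlapping labels is not what you want, the explicit arithmetic methods accept a fill value, for example:
```python
df.add(df2, fill_value=0)   # treat missing entries as 0 instead of propagating NaN
```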
End of explanation |
109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Breadth First Search, Queue Based Implementation
Implementation
deque is the constructor for a double ended queue.
Step1: The function search takes three arguments to solve a search problem
Step2: Given a state and a parent dictionary Parent, the function path_to returns a path leading to the given state.
Step3: Testing with the $3\times 3$ Sliding Puzzle | Python Code:
from collections import deque
Explanation: Breadth First Search, Queue Based Implementation
Implementation
deque is the constructor for a double ended queue.
End of explanation
def search(start, goal, next_states):
    Frontier = deque([start])      # queue of states that still have to be expanded
    Parent   = { start: start }    # maps every discovered state to its predecessor
    while Frontier:
        state = Frontier.popleft() # breadth first: expand the oldest state first
        if state == goal:
            return path_to(state, Parent)
        for ns in next_states(state):
            if ns not in Parent:   # each state is discovered at most once
                Parent[ns] = state
                Frontier.append(ns)
Explanation: The function search takes three arguments to solve a search problem:
- start is the start state of the search problem,
- goal is the goal state, and
- next_states is a function with signature $\texttt{next_states}:Q \rightarrow 2^Q$, where $Q$ is the set of states.
For every state $s \in Q$, $\texttt{next_states}(s)$ is the set of states that can be reached from $s$ in one step.
If successful, search returns a path from start to goal that is a solution of the search problem
$$ \langle Q, \texttt{next_states}, \texttt{start}, \texttt{goal} \rangle. $$
The implementation of search uses a queue based implementation of breadth first search to find a path from start to goal.
End of explanation
def path_to(state, Parent):
p = Parent[state]
if p == state:
return [state]
return path_to(p, Parent) + [state]
Explanation: Given a state and a parent dictionary Parent, the function path_to returns a path leading to the given state.
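As a minimal self-contained check (my own illustration, not part of the original notebook), consider a line graph with states 0 to 3 where every state is connected to its neighbours; search should return the unique shortest path.
```python
def next_states_line(s):
    return {n for n in (s - 1, s + 1) if 0 <= n <= 3}

print(search(0, 3, next_states_line))   # expected: [0, 1, 2, 3]
```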
End of explanation
%run Sliding-Puzzle.ipynb
%load_ext memory_profiler
%%time
Path = search(start, goal, next_states)
animation(Path)
Explanation: Testing with the $3\times 3$ Sliding Puzzle
End of explanation |
110 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.data - Matplotlib
A tutorial on matplotlib.
Step1: An aside
Visualization libraries in Python have developed a lot (10 plotting libraries).
matplotlib remains the reference, and most of the others are designed to integrate with its objects (this is the case of seaborn, mpld3, plotly and bokeh, for example). It is therefore useful to start by getting familiar with matplotlib.
In the words of its developers
Step2: The structure of the objects described by the API is very hierarchical, as illustrated by this diagram
Step3: A (very) simple graph with the plot instruction.
Step4: To draw several subplots, simply change the values of the parameters of the subplot object.
Step5: If no axes instance is specified, the plot method is applied to the last instance created.
Step6: To explore all the available categories of plots
Step7: No choice but to adjust things by hand to fix the overlapping tick numbers.
Colors, Markers and line styles
MatplotLib lets you adopt two types of syntax
Step8: More details in the matplotlib API documentation on how to set the color, the markers, and the line styles.
Step9: Tick labels and legends
3 key methods
Step10: Adding annotations and text, the title and the axis labels
Step11: matplotlib and style
You can define your own style. This is useful if you regularly produce the same graphs and want to define templates (rather than always copying/pasting the same lines of code). Everything is described in style_sheets.
Step12: As suggested by the names of the styles available in matplotlib, the seaborn library, which is a kind of layer on top of matplotlib, is a very convenient way to access styles designed to highlight patterns in the data.
Here are a few examples, always on the same data series. You are also encouraged to explore the color palettes.
Step13: Beyond style and colors, Seaborn puts the emphasis on
Step14: Remember to install the dbfread module if it is not installed yet.
Step15: Data dictionary of the variables.
Step16: Plot the age of the wives as a function of the age of the husbands at the time of marriage.
Step17: The pandas module provides a matplotlib wrapper
Step18: Exercise 1
Step19: With seaborn
Step20: Seaborn is well designed for colors. You can use convergent or divergent palettes. Try building a gradient between two colors as age increases, to bring out the contrasts.
Exercise 2
Step21: Exercise 3
Step22: Exercise 4
Step23: Interactive graphs
Step25: The callbacks page shows how to handle user interactions. The only drawback is that you need to know JavaScript.
Step26: The bqplot module lets you define callbacks in Python. The drawback is that it only works from a notebook, and it is better not to mix too many JavaScript libraries, since they cannot always work together.
Plotly
Plotly
Step27: Creation dataframe
Step28: Bars and Scatter | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.data - Matplotlib
A tutorial on matplotlib.
End of explanation
#Pour intégrer les graphes à votre notebook, il suffit de faire
%matplotlib inline
#ou alors
%pylab inline
#pylab charge également numpy. C'est la commande du calcul scientifique python.
Explanation: An aside
Visualization libraries in Python have developed a lot (10 plotting libraries).
matplotlib remains the reference, and most of the others are designed to integrate with its objects (this is the case of seaborn, mpld3, plotly and bokeh, for example). It is therefore useful to start by getting familiar with matplotlib.
In the words of its developers: "matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python and ipython shell (ala MatLab or mathematica), web application servers, and six graphical user interface toolkits."
The underlying structure of matplotlib is very general and customizable (user-interface management, possible integration into web applications, etc.). Fortunately, you do not need to master all of these methods to produce a graph (there are no fewer than 2840 pages of documentation). To create and modify graphs, going through the pyplot interface is enough.
The pyplot interface is inspired by MATLAB's. Those who know MATLAB will find their bearings quickly.
To sum up:
- matplotlib - "low level" access to the visualization library. Useful if you want to build your own Python visualization library or do very custom things.
- matplotlib.pyplot - an interface close to MATLAB's for producing your graphs
- pylab - matplotlib.pyplot + numpy
End of explanation
from matplotlib import pyplot as plt
plt.figure(figsize=(10,8))
plt.subplot(111) # Méthode subplot : pour définir les graphiques appartenant à l'objet figure, ici 1 X 1, indice 1
#plt.subplot(1,1,1) fonctionne aussi
#attention, il est nécessaire de conserver toutes les instructions d'un même graphique dans le même bloc
#pas besoin de plt.show() dans un notebook, sinon c'est nécessaire
Explanation: The structure of the objects described by the API is very hierarchical, as illustrated by this diagram:
- "Figure" contains the whole visual representation. It is thanks to this meta-structure, for example, that a title can easily be added to a representation containing several graphs;
- "Axes" (or "Subplots") describes the container holding one or several graphs (it corresponds to the subplot object and the add_subplot methods)
- "Axis" corresponds to the axes of a given graph (or subplot instance).
<img src="http://matplotlib.org/_images/fig_map.png" />
One last general remark: pyplot is a state machine.
This implies that the methods that draw a graph or edit a label apply by default to the current state (the last subplot instance or the last axis instance, for example).
Consequence: your code has to be designed as a sequence of instructions (for example, do not split the instructions that relate to the same graph across two different cells of the notebook).
Figures and Subplots
End of explanation
from numpy import random
import numpy as np
import pandas as p
plt.figure(figsize=(10,8))
plt.subplot(111)
plt.plot([random.random_sample(1) for i in range(5)])
#Il est possible de passer des listes, des arrays de numpy, des Series et des Dataframes de pandas
plt.plot(np.array([random.random_sample(1) for i in range(5)]))
plt.plot(p.DataFrame(np.array([random.random_sample(1) for i in range(5)])))
#pour afficher plusieurs courbes, il suffit de cumuler les instructions plt.plot
#plt.show()
Explanation: A (very) simple graph with the plot instruction.
End of explanation
fig = plt.figure(figsize=(15,10))
ax1 = fig.add_subplot(2,2,1) #modifie l'objet fig et créé une nouvelle instance de subplot, appelée ax1
#vous verrez souvent la convention ax comme instance de subplot : c'est parce que l'on parle aussi d'objet "Axe"
#à ne pas confondre avec l'objet "Axis"
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
Explanation: To draw several subplots, simply change the values of the parameters of the subplot object.
End of explanation
from numpy.random import randn
fig = plt.figure(figsize=(10,8))
ax1 = fig.add_subplot(2,2,1)
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
plt.plot(randn(50).cumsum(),'k--')
# plt.show()
from numpy.random import randn
fig = plt.figure(figsize=(15,10))
ax1 = fig.add_subplot(2,2,1)
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
# On peut compléter les instances de sous graphiques par leur contenu.
# Au passage, quelques autres exemples de graphes
ax1.hist(randn(100),bins=20,color='k',alpha=0.3)
ax2.scatter(np.arange(30),np.arange(30)+3*randn(30))
ax3.plot(randn(50).cumsum(),'k--')
Explanation: If no axes instance is specified, the plot method is applied to the last instance created.
End of explanation
fig,axes = plt.subplots(2,2,sharex=True,sharey=True)
# Sharex et sharey portent bien leurs noms : si True, ils indiquent que les sous-graphiques
# ont des axes paramétrés de la même manière
for i in range(2):
for j in range(2):
axes[i,j].hist(randn(500),bins=50,color='k',alpha=0.5)
# L'objet "axes" est un 2darray, simple à indicer et parcourir avec une boucle
print(type(axes))
# N'h'ésitez pas à faire varier les paramètres qui vous posent question. Par exemple, à quoi sert alpha ?
plt.subplots_adjust(wspace=0,hspace=0)
# Cette dernière méthode permet de supprimer les espaces entres les sous graphes.
Explanation: To explore all the available categories of plots: Gallery. The most useful ones for data analysis: scatter, scatterhist, barchart, stackplot, histogram, cumulative distribution function, boxplot, radarchart.
Adjusting the spaces between the graphs
End of explanation
from numpy.random import randn
fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(111)
ax1.plot(randn(50).cumsum(),color='g',marker='o',linestyle='dashed')
# plt.show()
from numpy.random import randn
fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(111)
ax1.plot(randn(50).cumsum(),'og--') #l'ordre des paramètres n'importe pas
Explanation: No choice but to adjust things by hand to fix the overlapping tick numbers.
Colors, Markers and line styles
MatplotLib lets you adopt two types of syntax: a condensed format string, or explicit parameters via a key-value system.
End of explanation
from numpy.random import randn
fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(1,1,1)
#avec la norme RGB
ax1.plot(randn(50).cumsum(),color='#D0BBFF',marker='o',linestyle='-.')
ax1.plot(randn(50).cumsum(),color=(0.8156862745098039, 0.7333333333333333, 1.0),marker='o',linestyle='-.')
Explanation: More details in the matplotlib API documentation on how to set the
<a href="http://matplotlib.org/api/colors_api.html">
color
</a>
, the
<a href="http://matplotlib.org/api/markers_api.html">
markers
</a>
, and the
<a href="http://matplotlib.org/api/lines_api.html#matplotlib.lines.Line2D.set_linestyle">
line style
</a>
. MatplotLib is compatible with several color standards:
- as a single letter: 'b' = blue, 'g' = green, 'r' = red, 'c' = cyan, 'm' = magenta, 'y' = yellow, 'k' = black, 'w' = white.
- as a number between 0 and 1 in quotes, giving the grey level: for example '0.70' ('1' = white, '0' = black).
- as a name: for example 'red'.
- in html form with the respective levels of red (R), green (G) and blue (B): '#ffee00'. Here is a handy site for picking a color in hexadecimal RGB.
- as a triplet of values between 0 and 1 with the levels of R, G and B: (0.2, 0.9, 0.1).
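As a compact check of the formats listed above, the same kind of series can be drawn with any of these specifications:
```python
for c in ['b', '0.70', 'red', '#ffee00', (0.2, 0.9, 0.1)]:
    plt.plot(randn(20).cumsum(), color=c)
```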
End of explanation
from numpy.random import randn
fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(1,1,1)
serie1=randn(50).cumsum()
serie2=randn(50).cumsum()
serie3=randn(50).cumsum()
ax1.plot(serie1,color='#33CCFF',marker='o',linestyle='-.',label='un')
ax1.plot(serie2,color='#FF33CC',marker='o',linestyle='-.',label='deux')
ax1.plot(serie3,color='#FFCC99',marker='o',linestyle='-.',label='trois')
#sur le graphe précédent, pour raccourcir le range
ax1.set_xlim([0,21])
ax1.set_ylim([-20,20])
#faire un ticks avec un pas de 2 (au lieu de 5)
ax1.set_xticks(range(0,21,2))
#changer le label sur la graduation
ax1.set_xticklabels(["j +" + str(l) for l in range(0,21,2)])
ax1.set_xlabel('Durée après le traitement')
ax1.legend(loc='best')
#permet de choisir l'endroit le plus vide
Explanation: Ticks labels et legendes
3 méthodes clés :
- xlim() : pour délimiter l'étendue des valeurs de l'axe
- xticks() : pour passer les graduations sur l'axe
- xticklabels() : pour passer les labels
Pour l'axe des ordonnées c'est ylim, yticks, yticklabels.
Pour récupérer les valeurs fixées :
- plt.xlim() ou plt.get_xlim()
- plt.xticks() ou plt.get_xticks()
- plt.xticklabels() ou plt.get_xticklabels()
Pour fixer ces valeurs :
- plt.xlim([start,end]) ou plt.set_xlim([start,end])
- plt.xticks(my_ticks_list) ou plt.get_xticks(my_ticks_list)
- plt.xticklabels(my_labels_list) ou plt.get_xticklabels(my_labels_list)
Si vous voulez customiser les axes de plusieurs sous graphiques, passez par une instance de axis et non subplot.
End of explanation
from numpy.random import randn
fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(1,1,1)
ax1.plot(serie1,color='#33CCFF',marker='o',linestyle='-.',label='un')
ax1.plot(serie2,color='#FF33CC',marker='o',linestyle='-.',label='deux')
ax1.plot(serie3,color='#FFCC99',marker='o',linestyle='-.',label='trois')
ax1.set_xlim([0,21])
ax1.set_ylim([-20,20])
ax1.set_xticks(range(0,21,2))
ax1.set_xticklabels(["j +" + str(l) for l in range(0,21,2)])
ax1.set_xlabel('Durée après le traitement')
ax1.annotate("You're here", xy=(7, 7), #point de départ de la flèche
xytext=(10, 10), #position du texte
arrowprops=dict(facecolor='#000000', shrink=0.10),
)
ax1.legend(loc='best')
plt.xlabel("Libellé de l'axe des abscisses")
plt.ylabel("Libellé de l'axe des ordonnées")
plt.title("Une idée de titre ?")
plt.text(5, -10, r'$\mu=100,\ \sigma=15$')
# plt.show()
Explanation: Adding annotations and text, the title and the axis labels
End of explanation
from numpy.random import randn
#pour que la définition du style soit seulement dans cette cellule notebook
with plt.style.context('ggplot'):
fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(1,1,1)
ax1.plot(serie1,color='#33CCFF',marker='o',linestyle='-.',label='un')
ax1.plot(serie2,color='#FF33CC',marker='o',linestyle='-.',label='deux')
ax1.plot(serie3,color='#FFCC99',marker='o',linestyle='-.',label='trois')
ax1.set_xlim([0,21])
ax1.set_ylim([-20,20])
ax1.set_xticks(range(0,21,2))
ax1.set_xticklabels(["j +" + str(l) for l in range(0,21,2)])
ax1.set_xlabel('Durée après le traitement')
ax1.annotate("You're here", xy=(7, 7), #point de départ de la flèche
xytext=(10, 10), #position du texte
arrowprops=dict(facecolor='#000000', shrink=0.10),
)
ax1.legend(loc='best')
plt.xlabel("Libellé de l'axe des abscisses")
plt.ylabel("Libellé de l'axe des ordonnées")
plt.title("Une idée de titre ?")
plt.text(5, -10, r'$\mu=100,\ \sigma=15$')
#plt.show()
import numpy as np
import matplotlib.pyplot as plt
print("De nombreux autres styles sont disponibles, pick up your choice! ", plt.style.available)
with plt.style.context('dark_background'):
plt.plot(serie1, 'r-o')
# plt.show()
Explanation: matplotlib and style
You can define your own style. This is useful if you regularly produce the same graphs and want to define templates (rather than always copying/pasting the same lines of code). Everything is described in style_sheets.
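A minimal sketch of the two usual approaches (values and file name are hypothetical):
```python
import matplotlib as mpl
# 1. set rcParams directly, e.g. in a helper module imported by every notebook
mpl.rcParams['figure.figsize'] = (10, 6)
mpl.rcParams['lines.linewidth'] = 2
# 2. or put the same settings in a .mplstyle file and load it
# plt.style.use('./my_style.mplstyle')
```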
End of explanation
#on peut remarquer que le style ggplot est resté.
import seaborn as sns
#5 styles disponibles
#sns.set_style("whitegrid")
#sns.set_style("darkgrid")
#sns.set_style("white")
#sns.set_style("dark")
#sns.set_style("ticks")
#si vous voulez définir un style temporairement
with sns.axes_style("ticks"):
fig = plt.figure(figsize(8,6))
ax1 = fig.add_subplot(1,1,1)
plt.plot(serie1)
Explanation: As suggested by the names of the styles available in matplotlib, the seaborn library, which is a kind of layer on top of matplotlib, is a very convenient way to access styles designed to highlight patterns in the data.
Here are a few examples, always on the same data series. You are also encouraged to explore the color palettes.
End of explanation
import urllib.request
import zipfile
def download_and_save(name, root_url):
if root_url == 'xd':
from pyensae.datasource import download_data
download_data(name)
else:
response = urllib.request.urlopen(root_url+name)
with open(name, "wb") as outfile:
outfile.write(response.read())
def unzip(name):
with zipfile.ZipFile(name, "r") as z:
z.extractall(".")
filenames = ["etatcivil2012_mar2012_dbase.zip",
"etatcivil2012_nais2012_dbase.zip",
"etatcivil2012_dec2012_dbase.zip", ]
# Une copie des fichiers a été postée sur le site www.xavierdupre.fr
# pour tester le notebook plus facilement.
root_url = 'xd' # http://telechargement.insee.fr/fichiersdetail/etatcivil2012/dbase/'
for filename in filenames:
download_and_save(filename, root_url)
unzip(filename)
print("Download of {}: DONE!".format(filename))
Explanation: Beyond style and colors, Seaborn puts the emphasis on:
- distribution plots (univariate / bivariate). Particularly useful and practical: the pairwise plots
- regression plots
- plots of categorical variables
- heatmaps on data matrices
Seaborn graphs are designed for data analysis and for presenting reports to colleagues or clients. It may be a little less customizable than matplotlib, but it will take a while before you feel limited by its possibilities.
Matplotlib and pandas, interactions with seaborn
As seen previously, matplotlib can handle and plot all sorts of objects: lists, numpy arrays, pandas Series and DataFrame. Conversely, pandas provides methods that integrate the matplotlib objects most useful for plotting. We are going to experiment a bit with the pandas/matplotlib integration. More generally, a whole visualization ecosystem has developed around pandas. We will try out the various libraries mentioned. Download the data of exercise 4 of the pandas session, available on the INSEE website: Naissances, décès et mariages de 1998 à 2013.
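For example, a pair plot on the three series defined earlier gives a quick look at their joint distributions (illustrative only; any DataFrame works):
```python
sns.pairplot(p.DataFrame({'un': serie1, 'deux': serie2, 'trois': serie3}))
```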
End of explanation
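As a quick illustration of the pairwise plots mentioned above (a sketch, not part of the original notebook, using a small example dataset bundled with seaborn):
import seaborn as sns

iris = sns.load_dataset("iris")                      # fetched on first use
sns.pairplot(iris, hue="species", diag_kind="kde")   # univariate and bivariate views in one figure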
import pandas
try:
from dbfread import DBF
use_dbfread = True
except ImportError as e :
use_dbfread = False
if use_dbfread:
print("use of dbfread")
def dBase2df(dbase_filename):
table = DBF(dbase_filename, load=True, encoding="cp437")
return pandas.DataFrame(table.records)
df = dBase2df('mar2012.dbf')
#df.to_csv("mar2012.txt", sep="\t", encoding="utf8", index=False)
else :
print("use of zipped version")
import pyensae.datasource
data = pyensae.datasource.download_data("mar2012.zip")
df = pandas.read_csv(data[0], sep="\t", encoding="utf8", low_memory = False)
df.shape, df.columns
Explanation: Remember to install the dbfread module if it is not already installed.
End of explanation
vardf = dBase2df("varlist_mariages.dbf")
print(vardf.shape, vardf.columns)
vardf
Explanation: Dictionary of the variables.
End of explanation
#Calcul de l'age (au moment du mariage)
df.head()
#conversion des années en entiers
for c in ['AMAR','ANAISF','ANAISH']:
df[c]=df[c].apply(lambda x: int(x))
#calcul de l'age
df['AGEF'] = df['AMAR'] - df['ANAISF']
df['AGEH'] = df['AMAR'] - df['ANAISH']
Explanation: Plot the age of the women against the age of the men at the time of marriage.
End of explanation
#version pandas : df.plot()
#deux possibilités : l'option kind dans df.plot()
df.plot(x='AGEH',y='AGEF',kind='scatter')
#ou la méthode scatter()
#df.plot.scatter(x='AGEH',y='AGEF')
#ensemble des graphiques disponibles dans la méthode plot de pandas : df.plot.<TAB>
#version matplotlib
from matplotlib import pyplot as plt
plt.style.use('seaborn-whitegrid')
fig = plt.figure(figsize=(8.5,5))
ax = fig.add_subplot(1,1,1)
ax.scatter(df['AGEH'],df['AGEF'], color="#3333FF", edgecolors='#FFFFFF')
plt.xlabel('AGEH')
plt.ylabel('AGEF')
#Si vous voulez les deux graphes en 1, il suffit de reprendre la structure de matplotlib
#(notamment l'objet subplot) et de voir comment il peut etre appelé dans
#chaque méthode de tracé (df.plot de pandas et sns.plot de searborn)
from matplotlib import pyplot as plt
plt.style.use('seaborn-whitegrid')
fig = plt.figure(figsize=(8.5,5))
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)
ax1.scatter(df['AGEH'],df['AGEF'], color="#3333FF", edgecolors='#FFFFFF')
df.plot(x='AGEH',y='AGEF',kind='scatter',ax=ax2)
plt.xlabel('AGEH')
plt.ylabel('AGEF')
Explanation: The pandas module provides a matplotlib wrapper
End of explanation
df.plot.hexbin(x='AGEH', y='AGEF', gridsize=100)
Explanation: Exercise 1: analyze the men's age as a function of the women's age
Add a title, change the style of the plot, vary the colors (with a gradient of shades), and draw a heatmap with the pandas hexbin wrapper and with seaborn.
End of explanation
import seaborn as sns
sns.set_style('white')
sns.set_context('paper')
#il faut crééer la matrice AGEH x AGEF
df['nb'] = 1
#pour utiliser heatmap, il faut mettre df au frmat wide (au lieu de long) => df.pivot(...)
matrice = df[['nb','AGEH','AGEF']].groupby(['AGEH','AGEF'],as_index=False).count()
matrice=matrice.pivot('AGEH','AGEF','nb')
matrice=matrice.sort_index(axis=0,ascending=False)
fig = plt.figure(figsize=(8.5,5))
ax1 = fig.add_subplot(2,2,1)
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
df.plot.hexbin(x='AGEH', y='AGEF', gridsize=100, ax=ax1)
cmap=sns.blend_palette(["#CCFFFF", "#006666"], as_cmap=True)
#Dans tous les graphes qui prévoient une fonction cmap vous pourrez intégrer votre propre palette de couleur
sns.heatmap(matrice,annot=False, xticklabels=10,yticklabels=10,cmap=cmap,ax=ax2)
sample = df.sample(100)
sns.kdeplot(sample['AGEH'],sample['AGEF'],cmap=cmap,ax=ax3)
Explanation: With seaborn
End of explanation
df["differenceHF"] = df["ANAISH"] - df["ANAISF"]
df["nb"] = 1
dist = df[["nb","differenceHF"]].groupby("differenceHF", as_index=False).count()
dist.tail()
#version pandas
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('paper')
fig = plt.figure(figsize=(8.5,5))
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)
df["differenceHF"].hist(figsize=(16,6), bins=50, ax=ax1)
ax1.set_title('Graphique avec pandas', fontsize=15)
sns.distplot(df["differenceHF"], kde=True,ax=ax2)
#regardez ce que donne l'option kde
ax2.set_title('Graphique avec seaborn', fontsize=15)
Explanation: Seaborn is well designed for colors. You can plug in sequential or diverging palettes. Try blending between two colors as the age increases, to bring out the contrasts.
Exercise 2: plot the distribution of the age difference of married couples
End of explanation
df["nb"] = 1
dep = df[["DEPMAR","nb"]].groupby("DEPMAR", as_index=False).sum().sort_values("nb",ascending=False)
ax = dep.plot(kind = "bar", figsize=(18,6))
ax.set_xlabel("départements", fontsize=16)
ax.set_title("nombre de mariages par départements", fontsize=16)
ax.legend().set_visible(False) # on supprime la légende
# on change la taille de police de certains labels
for i,tick in enumerate(ax.xaxis.get_major_ticks()):
if i > 10 :
tick.label.set_fontsize(8)
Explanation: Exercise 3: analyze the number of marriages per département
End of explanation
df["nb"] = 1
dissem = df[["JSEMAINE","nb"]].groupby("JSEMAINE",as_index=False).sum()
total = dissem["nb"].sum()
repsem = dissem.cumsum()
repsem["nb"] /= total
sns.set_style('whitegrid')
ax = dissem["nb"].plot(kind="bar")
repsem["nb"].plot(ax=ax, secondary_y=True)
ax.set_title("Distribution des mariages par jour de la semaine",fontsize=16)
df.head()
Explanation: Exercise 4: distribution of the number of marriages per day of the week
End of explanation
from bokeh.plotting import figure, show, output_notebook
output_notebook()
fig = figure()
sample = df.sample(500)
fig.scatter(sample['AGEH'],sample['AGEF'])
fig.xaxis.axis_label = 'AGEH'
fig.yaxis.axis_label = 'AGEF'
show(fig)
Explanation: Interactive plots: bokeh, altair, bqplot
Put simply, it is possible to inject JavaScript into the local web application created by Jupyter; this is what D3.js does. Interactive libraries such as bokeh or altair combine matplotlib-style design with JavaScript libraries such as vega-lite. The following example uses bokeh.
End of explanation
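For comparison, an equivalent minimal scatter plot with altair would look roughly like this (a sketch, not in the original notebook; it assumes altair is installed and reuses the sample DataFrame defined above):
import altair as alt

chart = alt.Chart(sample[['AGEH', 'AGEF']]).mark_circle().encode(x='AGEH', y='AGEF')
chart   # in a notebook, displaying the object renders the interactive vega-lite chart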
from bokeh.plotting import figure, output_file, show
from bokeh.models import ColumnDataSource, HoverTool, CustomJS
# define some points and a little graph between them
x = [2, 3, 5, 6, 8, 7]
y = [6, 4, 3, 8, 7, 5]
links = {
0: [1, 2],
1: [0, 3, 4],
2: [0, 5],
3: [1, 4],
4: [1, 3],
5: [2, 3, 4]
}
p = figure(plot_width=400, plot_height=400, tools="", toolbar_location=None, title='Hover over points')
source = ColumnDataSource({'x0': [], 'y0': [], 'x1': [], 'y1': []})
sr = p.segment(x0='x0', y0='y0', x1='x1', y1='y1', color='olive', alpha=0.6, line_width=3, source=source, )
cr = p.circle(x, y, color='olive', size=30, alpha=0.4, hover_color='olive', hover_alpha=1.0)
# Add a hover tool, that sets the link data for a hovered circle
code = """
var links = %s;
var data = {'x0': [], 'y0': [], 'x1': [], 'y1': []};
var cdata = circle.data;
var indices = cb_data.index['1d'].indices;
for (i=0; i < indices.length; i++) {
ind0 = indices[i]
for (j=0; j < links[ind0].length; j++) {
ind1 = links[ind0][j];
data['x0'].push(cdata.x[ind0]);
data['y0'].push(cdata.y[ind0]);
data['x1'].push(cdata.x[ind1]);
data['y1'].push(cdata.y[ind1]);
}
}
segment.data = data;
""" % links
callback = CustomJS(args={'circle': cr.data_source, 'segment': sr.data_source}, code=code)
p.add_tools(HoverTool(tooltips=None, callback=callback, renderers=[cr]))
show(p)
Explanation: The callbacks page shows how to handle user interactions. The only drawback is that you need to know JavaScript.
End of explanation
import pandas as pd
import numpy as np
Explanation: The bqplot module lets you define callbacks in Python. The drawback is that it only works from a notebook, and it is better not to mix too many JavaScript libraries, which cannot always work together.
Plotly
Plotly: https://plot.ly/python/
Doc: https://plot.ly/python/reference/
Colors: http://www.cssportal.com/css3-rgba-generator/
End of explanation
indx = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
value1 = [0,1,2,3,4,5,6,7,8,9]
value2 = [1,5,2,3,7,5,1,8,9,1]
df = {'indx': indx, 'value1': value1, 'value2': value2}
df = pd.DataFrame(df)
df['rate1'] = df.value1 / 100
df['rate2'] = df.value2 / 100
df = df.set_index('indx')
df.head()
Explanation: Creating the DataFrame
End of explanation
# installer plotly
import plotly.plotly as py
import os
from pyquickhelper.loghelper import get_password
user = get_password("plotly", "ensae_teaching_cs,login")
pwd = get_password("plotly", "ensae_teaching_cs,pwd")
try:
py.sign_in(user, pwd)
except Exception as e:
print(e)
import plotly
from plotly.graph_objs import Bar, Scatter, Figure, Layout
import plotly.plotly as py
import plotly.graph_objs as go
# BARS
trace1 = go.Bar(
x = df.index,
y = df.value1,
name='Value1', # Bar legend
#orientation = 'h',
marker = dict( # Colors
color = 'rgba(237, 74, 51, 0.6)',
line = dict(
color = 'rgba(237, 74, 51, 0.6)',
width = 3)
))
trace2 = go.Bar(
x = df.index,
y = df.value2,
name='Value 2',
#orientation = 'h', # Uncomment to have horizontal bars
marker = dict(
color = 'rgba(0, 74, 240, 0.4)',
line = dict(
color = 'rgba(0, 74, 240, 0.4)',
width = 3)
))
# SCATTER
trace3 = go.Scatter(
x = df.index,
y = df.rate1,
name='Rate',
yaxis='y2', # Define 2 axis
marker = dict( # Colors
color = 'rgba(187, 0, 0, 1)',
))
trace4 = go.Scatter(
x = df.index,
y = df.rate2,
name='Rate2',
yaxis='y2', # To have a 2nd axis
marker = dict( # Colors
color = 'rgba(0, 74, 240, 0.4)',
))
data = [trace2, trace1, trace3, trace4]
layout = go.Layout(
title='Stack bars and scatter',
barmode ='stack', # Take value 'stack' or 'group'
xaxis=dict(
autorange=True,
showgrid=False,
zeroline=False,
showline=True,
autotick=True,
ticks='',
showticklabels=True
),
yaxis=dict( # Params 1st axis
#range=[0,1200000], # Set range
autorange=True,
showgrid=False,
zeroline=False,
showline=True,
autotick=True,
ticks='',
showticklabels=True
),
yaxis2=dict( # Params 2nd axis
overlaying='y',
autorange=True,
showgrid=False,
zeroline=False,
showline=True,
autotick=True,
ticks='',
side='right'
))
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='marker-h-bar')
trace5 = go.Scatter(
x = ['h', 'h'],
y = [0,0.09],
yaxis='y2', # Define 2 axis
showlegend = False, # Hiding legend for this trace
marker = dict( # Colors
color = 'rgba(46, 138, 24, 1)',
)
)
from plotly import tools
import plotly.plotly as py
import plotly.graph_objs as go
fig = tools.make_subplots(rows=1, cols=2)
# 1st subplot
fig.append_trace(trace1, 1, 1)
fig.append_trace(trace2, 1, 1)
# 2nd subplot
fig.append_trace(trace3, 1, 2)
fig.append_trace(trace4, 1, 2)
fig.append_trace(trace5, 1, 2) # Vertical line here
fig['layout'].update(height=600, width=1000, title='Two in One & Vertical line')
py.iplot(fig, filename='make-subplots')
Explanation: Bars and Scatter
End of explanation |
111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep learning loss functions
Inline plots
Step1: We want to use PyTorch so that we can use its automatic differentiation, since I'm too lazy to work out the derivatives of these functions by hand!
Step2: Classification
Step3: Apply
Step4: Probability regression
Step5: Regression
Step6: Regression
Step7: Show the classification logits and probabilities in tables | Python Code:
%matplotlib inline
Explanation: Deep learning loss functions
Inline plots:
End of explanation
import os
import numpy as np
import pandas as pd
import torch, torch.nn as nn, torch.nn.functional as F
from matplotlib import pyplot as plt
import seaborn as sns
sns.set()
EPSILON = 1.0e-12
SAVE_PLOTS = True
Explanation: We want to use PyTorch so that we can use its automatic differentiation, since I'm too lazy to work out the derivatives of these functions by hand! :)
We also want to avoid the overhead of using the GPU for such small tasks, so we keep all tensors on the CPU (PyTorch's default device):
End of explanation
# Softmax function
def f_softmax(logits, axis=1):
ex = np.exp(logits)
return ex / ex.sum(axis=axis, keepdims=True)
# Classification loss: negative log of softmax
def f_clf_loss(logits, axis=1):
t_logits = torch.tensor(logits, requires_grad=True)
# Compute negative log-softmax
return -F.log_softmax(t_logits, dim=axis).detach().numpy()
# Gradient of classification loss
def f_clf_loss_grad(logits, target, axis=1):
t_logits = torch.tensor(logits, requires_grad=True)
t_targets = torch.tensor(target, dtype=torch.int64)
# Compute cross_entropy loss
loss = F.cross_entropy(t_logits, t_targets, reduction='sum')
# Sum and compute gradient
loss.backward()
return t_logits.grad.detach().numpy()
Explanation: Classification: softmax non-linearity with negative log loss
Define a convenience function for computing the gradient of negative log loss with softmax. We use PyTorch to do it as it handles computing the gradient for us.
End of explanation
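As a sanity check on what the autograd call computes (an added note, not from the original notebook): for a target class $t$, the gradient of the negative log-softmax loss with respect to the logits has the well-known closed form
$$\frac{\partial}{\partial X_j}\left(-\ln \mathrm{softmax}_t(X)\right) = \mathrm{softmax}_j(X) - \mathbb{1}[j = t],$$
so for the target class the gradient always lies in $[-1, 0]$, which is exactly the shape of the gradient curve plotted below.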
# Compute the range of values that we wish to explore
xs = np.arange(-5.0, 5.001, 1.0/128.0).astype(np.float32)
# Build an array of logit vector, where each logit vector is for a 2-class problem with the values [0, x[i]]
logits = np.stack([np.zeros_like(xs), xs], axis=1)
# Use softmax to compute predicted probabilities:
clf_q = f_softmax(logits)
# Compute negative log loss of softmax:
clf_loss = f_clf_loss(logits)
# Compute gradient of negative log loss of softmax with respect to the logits:
clf_loss_grad = f_clf_loss_grad(logits, np.ones_like(xs))
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_p, = plt.plot(xs, np.ones_like(xs), label=r'$p$')
line_q, = plt.plot(xs, clf_q[:, 1], label=r'$q = softmax(X)$')
line_loss, = plt.plot(xs, clf_loss[:, 1], label=r'loss $c =-ln(q)$')
line_loss_grad, = plt.plot(xs, clf_loss_grad[:, 1], label=r'grad loss $\frac{dc}{dX_1}$')
plt.legend(handles=[line_p, line_q, line_loss, line_loss_grad])
plt.xlabel(r'$X_1$')
plt.show()
if SAVE_PLOTS:
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_p, = plt.plot(xs, np.ones_like(xs), label=r'$p$')
line_q, = plt.plot(xs, clf_q[:, 1], label=r'$q = softmax(X)$')
plt.legend(handles=[line_p, line_q])
plt.xlabel(r'$X_1$')
plt.savefig('clf_loss_0.png', dpi=600)
plt.close()
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_p, = plt.plot(xs, np.ones_like(xs), label=r'$p$')
line_q, = plt.plot(xs, clf_q[:, 1], label=r'$q = softmax(X)$')
line_loss, = plt.plot(xs, clf_loss[:, 1], label=r'loss $c =-ln(q)$')
plt.legend(handles=[line_p, line_q, line_loss])
plt.xlabel(r'$X_1$')
plt.savefig('clf_loss_1.png', dpi=600)
plt.close()
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_p, = plt.plot(xs, np.ones_like(xs), label=r'$p$')
line_q, = plt.plot(xs, clf_q[:, 1], label=r'$q = softmax(X)$')
line_loss, = plt.plot(xs, clf_loss[:, 1], label=r'loss $c =-ln(q)$')
line_loss_grad, = plt.plot(xs, clf_loss_grad[:, 1], label=r'grad loss $\frac{dc}{dX_1}$')
plt.legend(handles=[line_p, line_q, line_loss, line_loss_grad])
plt.xlabel(r'$X_1$')
plt.savefig('clf_loss_2.png', dpi=600)
plt.close()
Explanation: Apply:
End of explanation
# Sigmoid definition
def f_sigmoid(x):
return 1.0 / (1.0 + np.exp(-x))
# Binary cross-entropy of sigmoid
def f_prob_regr_loss(logits, target):
    t_logits = torch.tensor(logits)
    t_target = torch.tensor(target)
    # binary_cross_entropy_with_logits already computes -ln(q)p - ln(1-q)(1-p)
    loss = F.binary_cross_entropy_with_logits(t_logits, t_target)
    return loss.item()
# Gradient of binary cross-entropy of sigmoid
def f_prob_regr_loss_grad(logits, target, axis=0):
    t_logits = torch.tensor(logits, requires_grad=True)
    t_target = torch.tensor(target)
    # Compute binary cross-entropy of sigmoid
    loss = F.binary_cross_entropy_with_logits(t_logits, t_target)
    # Sum and compute gradient
    loss.sum().backward()
    return t_logits.grad.detach().numpy()
# Compute the range of values that we wish to explore
xs = np.arange(-5.0, 5.0, 0.01).astype(np.float32)
# Use sigmoid to compute predicted probabilities:
prob_regr_q = [f_sigmoid(x) for x in xs]
# Compute binary cross-entropy of sigmoid:
prob_regr_loss = [f_prob_regr_loss(x, 1.0) for x in xs]
# Compute gradient of binary cross-entropy of sigmoid with respect to xs:
prob_regr_loss_grad = [f_prob_regr_loss_grad(x, 1.0) for x in xs]
plt.figure(figsize=(5,5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_p, = plt.plot(xs, np.ones_like(xs), label=r'$p$')
line_q, = plt.plot(xs, prob_regr_q, label=r'$q=sigmoid(X)$')
line_loss, = plt.plot(xs, prob_regr_loss, label=r'loss $c =-ln(q)p-ln(1-q)(1-p)$')
line_loss_grad, = plt.plot(xs, prob_regr_loss_grad, label=r'grad loss $\frac{dc}{dx}$')
plt.legend(handles=[line_p, line_q, line_loss, line_loss_grad])
plt.xlabel(r'$x$')
plt.show()
if SAVE_PLOTS:
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_p, = plt.plot(xs, np.ones_like(xs), label=r'$p$')
line_q, = plt.plot(xs, prob_regr_q, label=r'$q=sigmoid(X)$')
line_loss, = plt.plot(xs, prob_regr_loss, label=r'loss $c =-ln(q)p-ln(1-q)(1-p)$')
line_loss_grad, = plt.plot(xs, prob_regr_loss_grad, label=r'grad loss $\frac{dc}{dx}$')
plt.legend(handles=[line_p, line_q, line_loss, line_loss_grad])
plt.xlabel(r'$x$')
plt.savefig('prob_regr_loss_2.png', dpi=600)
plt.close()
Explanation: Probability regression: sigmoid non-linearity with binary cross-entropy
First define functions for computing sigmoid and binary cross-entropy:
End of explanation
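For reference (an added note, not from the original notebook), the gradient of the binary cross-entropy $c = -p\ln q - (1-p)\ln(1-q)$ with $q = \mathrm{sigmoid}(x)$ has the closed form $\frac{dc}{dx} = \mathrm{sigmoid}(x) - p$, so for the target $p = 1$ used in the cell above it runs from $-1$ to $0$, mirroring the classification panel earlier.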
# Function for computing squared error loss
def f_regr_sqr_loss(a, b):
return (a - b)**2
# Gradient of squared error loss
def f_regr_sqr_loss_grad(x_hat, x):
    t_x_hat = torch.tensor(x_hat, requires_grad=True)
    t_x = torch.tensor(x)
    # Compute squared error
    loss = (t_x_hat - t_x)**2
    # Sum and compute gradient
    loss.sum().backward()
    return t_x_hat.grad.detach().numpy()
# Compute the range of values that we wish to explore
xs = np.arange(-5.0, 5.0, 0.01).astype(np.float32)
# Use squared error loss:
regr_sqr_loss = [f_regr_sqr_loss(x, 0.0) for x in xs]
# Compute gradient of squared error with respect to x-hat
regr_sqr_loss_grad = [f_regr_sqr_loss_grad(x, 0.0) for x in xs]
plt.figure(figsize=(5,5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_loss, = plt.plot(xs, regr_sqr_loss, label=r'loss $c = (x - \hat{x})^2$')
line_loss_grad, = plt.plot(xs, regr_sqr_loss_grad, label=r'grad loss $\frac{dc}{dx}$')
plt.legend(handles=[line_loss, line_loss_grad])
plt.xlabel(r'$x$')
plt.show()
if SAVE_PLOTS:
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_loss, = plt.plot(xs, regr_sqr_loss, label=r'loss $c = (x - \hat{x})^2$')
line_loss_grad, = plt.plot(xs, regr_sqr_loss_grad, label=r'grad loss $\frac{dc}{dx}$')
plt.legend(handles=[line_loss, line_loss_grad])
plt.xlabel(r'$x$')
plt.savefig('regr_sqr_loss_2.png', dpi=600)
plt.close()
Explanation: Regression: no non-linearity and squared error loss
End of explanation
# Use PyTorch `smooth_l1_loss`
def f_regr_huber_loss(predictions, targets, delta=1.0):
    t_predictions = torch.tensor(predictions)
    t_targets = torch.tensor(targets)
    # Compute Huber (smooth L1) loss and return it as a plain float for plotting
    return F.smooth_l1_loss(t_predictions, t_targets).item()
def f_regr_huber_loss_grad(predictions, targets, delta=1.0):
t_predictions = torch.tensor(predictions, requires_grad=True)
t_targets = torch.tensor(targets, requires_grad=True)
    # Compute Huber (smooth L1) loss
loss = F.smooth_l1_loss(t_predictions, t_targets)
# Sum and compute gradient
loss.sum().backward()
return t_predictions.grad.detach().numpy()
# Compute the range of values that we wish to explore
xs = np.arange(-5.0, 5.0, 0.01).astype(np.float32)
# Use Huber loss:
regr_sqr_loss = [f_regr_huber_loss(x, 0.0) for x in xs]
# Compute gradient of Huber loss with respect to x-hat
regr_sqr_loss_grad = [f_regr_huber_loss_grad(x, 0.0) for x in xs]
plt.figure(figsize=(5,5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_loss, = plt.plot(xs, regr_sqr_loss, label=r'loss $c = huber(x, \hat{x})$')
line_loss_grad, = plt.plot(xs, regr_sqr_loss_grad, label=r'grad loss $\frac{dc}{dx}$')
plt.legend(handles=[line_loss, line_loss_grad])
plt.xlabel(r'$x$')
plt.show()
if SAVE_PLOTS:
plt.figure(figsize=(5, 5))
plt.xlim(-5.0, 5.0)
plt.ylim(-5.0, 5.0)
line_loss, = plt.plot(xs, regr_sqr_loss, label=r'loss $c = huber(x, \hat{x})$')
line_loss_grad, = plt.plot(xs, regr_sqr_loss_grad, label=r'grad loss $\frac{dc}{dx}$')
plt.legend(handles=[line_loss, line_loss_grad])
plt.xlabel(r'$x$')
plt.savefig('regr_huber_loss_2.png', dpi=600)
plt.close()
Explanation: Regression: no non-linearity and Huber loss
End of explanation
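For completeness (an added note, not from the original notebook), PyTorch's smooth_l1_loss corresponds to the Huber loss with $\delta = 1$:
$$\mathrm{huber}(r) = \begin{cases} \tfrac{1}{2}r^2, & |r| \le 1\\ |r| - \tfrac{1}{2}, & |r| > 1 \end{cases}$$
which is why the loss curve in the figure above is quadratic near zero and linear (with gradient $\pm 1$) further away.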
data=np.array(logits)
pd.DataFrame(columns=['$X_0$', '$X_1$'], data=data[::128])
data=np.append(np.array(logits), np.array(clf_q), axis=1)
pd.DataFrame(columns=['$X_0$', '$X_1$', '$q_0$', '$q_1$'], data=data[::128])
Explanation: Show the classification logits and probabilities in tables:
End of explanation |
112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graded = 7/8
Step1: & for multiple parameters
Step2: 2) What genres are most represented in the search results?
Edit your previous printout to also display a list of their genres in the format
GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
Tip
Step3: ANSWER
Step4: 3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
Is it the same artist who has the largest number of followers?
Step5: ANSWER
Step6: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
Step7: Will the world explode if a musician swears?
Get an average popularity for their explicit songs vs. their non-explicit songs.
How many minutes of explicit songs do they have? Non-explicit?
Step8: QUESTION
Step9: 7) Since we're talking about Lils, what about Biggies?
How many total "Biggie" artists are there? How many total "Lil"s?
If you made 1 request every 5 seconds,
how long would it take to download information on all the Lils vs the Biggies?
Step10: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average? | Python Code:
import requests
response = requests.get('https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50')
Explanation: Graded = 7/8
End of explanation
data = response.json()
data.keys()
artist_data = data['artists']
artist_data.keys()
lil_names = artist_data['items']
#lil_names = list of dictionaries = list of artist name, popularity, type, genres etc
Explanation: & for multiple parameters
End of explanation
for names in lil_names:
if not names['genres']:
print(names['name'], names['popularity'], "there are no genres listed")
else:
print(names['name'], names['popularity'], names["genres"])
#Join all the lists of genres in the dictionary and then count the number of elements in it
Explanation: 2) What genres are most represented in the search results?
Edit your previous printout to also display a list of their genres in the format
GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
Tip: "how to join a list Python" might be a helpful search
End of explanation
#ANSWER:
all_genres = []
for artist in lil_names:
print("All genres we've heard of:", all_genres)
#The conditional: None
print("Current artist has:", artist['genres'])
all_genres = all_genres + artist['genres']
#genre_list = ", ".join(artist['genres'])
#print(artist['name'], ":", genre_list)
all_genres.count('dirty soup rap')
all_genres.count('crunk')
#This is bad because dirty south rap shows up four times. We need a unique list of genres
for genre in all_genres:
genre_count = all_genres.count(genre)
print(genre, "shows up", genre_count, "times")
#To remove duplicates. You need to turn a list into a set.
unique_genres = set(all_genres)
for genre in unique_genres:
genre_count = all_genres.count(genre)
print(genre, "shows up", genre_count, "times")
#There is a library that comes with Python called Collections
#Inside of it is a magic thing called Counter.
import collections # will import the whole collections
#you can also type
from collections import Counter
#all_genres is a list of strings of genrs with duplicates
#counter will count all te genres for us
counts = collections.Counter(all_genres)
counts.most_common(4) #will give you the four most common genres
#HOW TO AUTOMATE GETTING ALL THE RESULTS
response = requests.get("https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=10")
small_data = response.json()
small_data['artists']
print(len(small_data['artists']['items'])) #We only get 10 artists
print(data['artists']['total'])
import math
page_count = math.ceil(4502/50)
#math.ceil rounds up
#math.ceil(page_count)
page_count
#First page artists 1-50:
#https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50
#Second page artists 51-100:
#https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50&offset=50
#Third page artists 101-150:
#https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50&offset=100
#Fourth page artists 151-200:
#https://api.spotify.com/v1/search?query=lil&type=artist&market=US&limit=50&offset=150
for page in [0, 1, 2, 3, 4]:
offset = (page) * 50 #because page 2 is 50 and 2-1 = 1 x 50 = 50
print("We are on page", page, "with an offset of", offset)
for page in range(91):
#Get a page
offset = page * 50
print("We are on page", page, "with an offset of", offset)
#Make the request with a changed offset ?offset [offset]
#data = response.json()
#add all our new artists to our list of existing artists
#all_artists = all_artists + data['artists]['items]
print("Successfully retrieved", len(all_artists), "artists")
Explanation: ANSWER:
for artist in artists:
print(artist['name'], artist['popularity'])
if len(artist['genres']) > 0:
genres = ", ".join(artist['genres'])
print("Genre list: ", genres
else:
print("No genres listed")
OR
if len(artist['name']) == 0:
OR
if not len(artist['genres']) == 0:
"-".join(your_list) to join lists
End of explanation
#TA-Stephan:can't just print the names yourself. The code must do it.
for popularity in lil_names:
print(popularity['name'], popularity['popularity'], popularity['followers'])
print("Lil Yachty, Lil Uzi Vert, Lil Jon have the highest popularity ratings besides Lil Wayne, and they do not have the largest number of followers.")
Explanation: 3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
Is it the same artist who has the largest number of followers?
End of explanation
for kim in lil_names:
print(kim['name'], kim['id'])
response = requests.get("https://api.spotify.com/v1/artists/5tth2a3v0sWwV1C7bApBdX/")
kim_data = response.json()
#print(kim_data)
kim_followers = kim_data['followers']
total_kim_followers = kim_followers['total']
#print(total_kim_followers)
for artists in lil_names:
if artists["followers"]["total"] > total_kim_followers:
print(artists['name'], artists['popularity'])
#ANSWER:
for artist in artists:
#print("Looking at", artist['name'])
if artist['name'] == "Lil' Kim":
print("Found Lil Kim")
print(artist['popularity'])
else:
pass
#print("Not Lil Kim")
lil_kim_popularity = 62
more_popular_than_lil_kim = []
for artist in artists:
if artist['popularity'] > lil_kim_popularity:
print(artist['name'], "is more popular with a score of", artist['popularity'])
more_popular_than_lil_kim.append(artist['name'])
else:
print(artist['name'], "is less popular with a score of", artist['popularity'])
print("#### More popular than Lil Kim ####"):
print(artist_name)
more_popular_string = ", ".join(more_popular_than_lil_kim)
print("Artists mroe popular than Lil Kim are:", more_popular_string)
Explanation: ANSWER:
#jonathansoma.com/site/lede/foundations/python-patterns/looping-problems/
second_most_popular_name = ""
second_most_popular_score = 0
for artist in artists:
print("Looking at", artist['name]', "who has a popularity score of", artist['popularity'])
#THE CONDITIONAL aka what you are testing
print("Comparing", artist['popularity'], "to", most_popular_score)
if artist['popularity'] > most_popular_score and artist['name'] != 'Lil Wayne':
OR
if artist['popularity'] > most_popular_score:
if artist['name'] == "Lil Wayne":
print("Nice try, Lil Wayne, we don't care")
else:
print("Not Lil Wayne, updating our notebook")
#The change, aka what you are keeping track of
most_popular_name = artist['name']
most_popular_score = artist['popularity']
print(most_popular_name, most_popular_score)
But what if more than one person has the highest score?
Aggregation Problem. When you're looping through a series of objects and sometimes you
want to add one of those objects to a different list.
target score = 72
initial condition
second_best_artist = []
for artist in artists:
print("Looking at", artist['name'], "who has a popularity of", )
#2: Conditional. When we want to add someone to our list
if artist['popularity'] == 72:
print("!!! The artist's popularity is 72")
#The Change
#Add that artist to our list
#.append(newthing) is how we do that in Python
second_best_artists.append(artist['name'])
print("Our second best artists are:")
for artist in second_best_artist:
print(artist)
Print a list of Lil's that are more popular than Lil' Kim.
End of explanation
#Let's pick Lil Wayne and Lil Mama because I don't know who most of these people are
wayne_id = "55Aa2cqylxrFIXC767Z865"
response = requests.get("https://api.spotify.com/v1/artists/" + wayne_id + "/top-tracks?country=US")
wayne_data = response.json()
top_wayne_tracks = wayne_data['tracks']
for track in top_wayne_tracks:
print(track["name"])
mama_id = "5qK5bOC6wLtuLhG5KvU17c"
response = requests.get("https://api.spotify.com/v1/artists/" + mama_id + "/top-tracks?country=US")
mama_data = response.json()
top_mama_tracks = mama_data['tracks']
for track in top_mama_tracks:
print(track["name"])
Explanation: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
End of explanation
wayne_explicit_count = 0
wayne_exp_popularity_count = 0
wayne_ok_count = 0
wayne_ok_popularity_count = 0
wayne_explicit_len = 0
wayne_ok_len = 0
for track in top_wayne_tracks:
print(track['name'], track['explicit'], track['popularity'], track["duration_ms"])
    if track['explicit']:
wayne_explicit_count = wayne_explicit_count + 1
wayne_exp_popularity_count = wayne_exp_popularity_count + int(track['popularity'])
wayne_avg_pop = wayne_exp_popularity_count / wayne_explicit_count
wayne_explicit_len = wayne_explicit_len + int(track["duration_ms"])
if not track['explicit']:
wayne_ok_count = wayne_ok_count + 1
wayne_ok_popularity_count = wayne_ok_popularity_count + track['popularity']
wayne_ok_avg_pop = wayne_ok_popularity_count / wayne_ok_count
wayne_ok_len = wayne_ok_len + track["duration_ms"]
if wayne_explicit_count > 0:
print("The average popularity for Lil Wayne's explicit songs is", wayne_avg_pop)
#1 minute is 60000 milliseconds, who knew?
wayne_explicit_mins = int(wayne_explicit_len) / 60000
print("Lil Wayne has", wayne_explicit_mins, "minutes of explicit songs")
if wayne_ok_count > 0:
print("The average popularity for Lil Wayne's non-explicit songs is", wayne_ok_avg_pop)
wayne_ok_mins = int(wayne_ok_len) / 60000
print("Lil Wayne has", wayne_ok_mins, "minutes of explicit songs")
Explanation: Will the world explode if a musician swears?
Get an average popularity for their explicit songs vs. their non-explicit songs.
How many minutes of explicit songs do they have? Non-explicit?
End of explanation
mama_exp_count = 0
mama_exp_pop_count = 0
mama_ok_count = 0
mama_ok_pop_count = 0
mama_exp_len = 0
mama_ok_len = 0
for track in top_mama_tracks:
print(track['name'], track['explicit'], track['popularity'], track["duration_ms"])
    if track['explicit']:
mama_exp_count = mama_exp_count + 1
mama_exp_pop_count = mama_exp_pop_count + int(track['popularity'])
mama_avg_pop = int(mama_exp_pop_count) / int(mama_exp_count)
mama_exp_len = mama_exp_len + int(track["duration_ms"])
if not track['explicit']:
mama_ok_count = mama_ok_count + 1
mama_ok_pop_count = mama_ok_pop_count + int(track['popularity'])
mama_ok_avg_pop = int(mama_ok_pop_count) / int(mama_ok_count)
mama_ok_len = mama_ok_len + int(track["duration_ms"])
if mama_exp_count > 0:
#1 minute is 60000 milliseconds, who knew?
print("The average popularity for Lil Mama's xplicit songs is", mama_avg_pop)
mama_exp_mins = int(mama_exp_len) / 60000
print("Lil Mama has", mama_exp_mins, "minutes of explicit songs")
if mama_ok_count > 0:
print("The average popularity for Lil Mama's non-explicit songs is", mama_ok_avg_pop)
mama_ok_mins = int(mama_ok_len) / 60000
print("Lil Mama has", mama_ok_mins, "minutes of non-explicit songs")
Explanation: QUESTION: Why does this return both true and not true statements for non-explicit statements?
for track in top_mama_tracks:
print(track['name'], track['explicit'])
if True:
print(track['name'], "is explicit and has a popularity of", track['popularity'])
if not track['explicit']:
print(track['name'], "is not explicit and has a popularity of", track['popularity'])
End of explanation
#We need to bypass the limit. And find out
response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist')
biggie_data = response.json()
biggie_artists = biggie_data['artists']
biggie_names = biggie_artists['items']
biggie_count= 0
for name in biggie_names:
print(name['name'])
biggie_count = biggie_count + 1
print("There are a total number of", biggie_count, "biggie artists")
response = requests.get('https://api.spotify.com/v1/search?query=lil&type=artist')
lil_data = response.json()
lil_x_artists = lil_data['artists']
lil_x_names = lil_x_artists['items']
lil_x_count= 0
for name in lil_x_names:
print(name['name'])
    lil_x_count = lil_x_count + 1
print("There are a total number of", lil_x_count, "lil artists")
Explanation: 7) Since we're talking about Lils, what about Biggies?
How many total "Biggie" artists are there? How many total "Lil"s?
If you made 1 request every 5 seconds,
how long would it take to download information on all the Lils vs the Biggies?
End of explanation
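The time estimate asked for above is not computed anywhere in the notebook; here is a small sketch (an addition, reusing the 'total' fields of the search responses stored earlier in biggie_data and lil_data):
import math

total_biggies = biggie_data['artists']['total']
total_lils = lil_data['artists']['total']
# 50 artists per request, one request every 5 seconds
for label, total in [('Biggies', total_biggies), ('Lils', total_lils)]:
    n_requests = math.ceil(total / 50)
    print(label, ':', n_requests, 'requests -> about', n_requests * 5, 'seconds')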
response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50')
b_data = response.json()
b_artists = b_data['artists']
b_names = b_artists['items']
b_pop_count = 0
b_number = 0
for names in b_names:
print(names['name'], names['popularity'])
b_number = b_number + 1
b_pop_count = b_pop_count + int(names['popularity'])
avg_b_pop = b_pop_count / int(b_number)
print("The Biggies' average popularity is", avg_b_pop)
lil_pop_count = 0
lil_number = 0
for names in lil_names:
print(names['name'], names['popularity'])
lil_number = lil_number + 1
lil_pop_count = lil_pop_count + int(names['popularity'])
avg_lil_pop = lil_pop_count / int(lil_number)
print("The Lils average popularity is", avg_lil_pop)
print("The Lils are far more popular")
Explanation: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average?
End of explanation |
113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 4
Step1: Load dataset
Step2: Figure 4.1 - Default data set
Step3: 4.3 Logistic Regression
Figure 4.2
Step4: Table 4.1
Step5: scikit-learn
Step6: statsmodels
Step7: Table 4.2
Step8: Table 4.3 - Multiple Logistic Regression
Step9: Figure 4.3 - Confounding
Step10: 4.4 Linear Discriminant Analysis
Table 4.4
Step11: Table 4.5
Instead of using the probability of 50% as decision boundary, we say that a probability of default of 20% is to be classified as 'Yes'.
Step12: Lab
4.6.3 Linear Discriminant Analysis
Step13: 4.6.4 Quadratic Discriminant Analysis
Step14: 4.6.5 K-Nearest Neighbors
Step15: 4.6.6 An Application to Caravan Insurance Data
K-Nearest Neighbors
Step16: Logistic Regression | Python Code:
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.linear_model as skl_lm
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import confusion_matrix, classification_report, precision_score
from sklearn import preprocessing
from sklearn import neighbors
import statsmodels.api as sm
import statsmodels.formula.api as smf
pd.set_option('display.notebook_repr_html', False)
%matplotlib inline
plt.style.use('seaborn-white')
Explanation: Lab 4: Classification
From the github repo: https://github.com/JWarmenhoven/ISLR-python which is based on the book by James et al. Intro to Statistical Learning. There are no exercises, but it should serve as a great reference.
Running Exercise: for the classification problems, compare ROC and PR curves, and also compare to SVMs.
Load dataset
The Default data set
4.3 Logistic Regression
4.4 Linear Discriminant Analysis
Lab: 4.6.3 Linear Discriminant Analysis
Lab: 4.6.4 Quadratic Discriminant Analysis
Lab: 4.6.5 K-Nearest Neighbors
Lab: 4.6.6 An Application to Caravan Insurance Data
End of explanation
# In R, I exported the dataset from package 'ISLR' to an Excel file
df = pd.read_excel('../data/Default.xlsx')
# Note: factorize() returns two objects: a label array and an array with the unique values.
# We are only interested in the first object.
df['default2'] = df.default.factorize()[0]
df['student2'] = df.student.factorize()[0]
df.head(3)
Explanation: Load dataset
End of explanation
fig = plt.figure(figsize=(12,5))
gs = mpl.gridspec.GridSpec(1, 4)
ax1 = plt.subplot(gs[0,:-2])
ax2 = plt.subplot(gs[0,-2])
ax3 = plt.subplot(gs[0,-1])
# Take a fraction of the samples where target value (default) is 'no'
df_no = df[df.default2 == 0].sample(frac=0.15)
# Take all samples where target value is 'yes'
df_yes = df[df.default2 == 1]
df_ = df_no.append(df_yes)
ax1.scatter(df_[df_.default == 'Yes'].balance, df_[df_.default == 'Yes'].income, s=40, c='orange', marker='+',
linewidths=1)
ax1.scatter(df_[df_.default == 'No'].balance, df_[df_.default == 'No'].income, s=40, marker='o', linewidths='1',
edgecolors='lightblue', facecolors='none')
ax1.set_ylim(ymin=0)
ax1.set_ylabel('Income')
ax1.set_xlim(xmin=-100)
ax1.set_xlabel('Balance')
c_palette = {'No':'lightblue', 'Yes':'orange'}
sns.boxplot('default', 'balance', data=df, orient='v', ax=ax2, palette=c_palette)
sns.boxplot('default', 'income', data=df, orient='v', ax=ax3, palette=c_palette)
gs.tight_layout(plt.gcf())
Explanation: Figure 4.1 - Default data set
End of explanation
X_train = df.balance.reshape(-1,1)
y = df.default2
# Create array of test data. Calculate the classification probability
# and predicted classification.
X_test = np.arange(df.balance.min(), df.balance.max()).reshape(-1,1)
clf = skl_lm.LogisticRegression(solver='newton-cg')
clf.fit(X_train,y)
prob = clf.predict_proba(X_test)
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(12,5))
# Left plot
sns.regplot(df.balance, df.default2, order=1, ci=None,
scatter_kws={'color':'orange'},
line_kws={'color':'lightblue', 'lw':2}, ax=ax1)
# Right plot
ax2.scatter(X_train, y, color='orange')
ax2.plot(X_test, prob[:,1], color='lightblue')
for ax in fig.axes:
ax.hlines(1, xmin=ax.xaxis.get_data_interval()[0],
xmax=ax.xaxis.get_data_interval()[1], linestyles='dashed', lw=1)
ax.hlines(0, xmin=ax.xaxis.get_data_interval()[0],
xmax=ax.xaxis.get_data_interval()[1], linestyles='dashed', lw=1)
ax.set_ylabel('Probability of default')
ax.set_xlabel('Balance')
ax.set_yticks([0, 0.25, 0.5, 0.75, 1.])
ax.set_xlim(xmin=-100)
Explanation: 4.3 Logistic Regression
Figure 4.2
End of explanation
y = df.default2
Explanation: Table 4.1
End of explanation
# Using newton-cg solver, the coefficients are equal/closest to the ones in the book.
# I do not know the details on the differences between the solvers.
clf = skl_lm.LogisticRegression(solver='newton-cg')
X_train = df.balance.reshape(-1,1)
clf.fit(X_train,y)
print(clf)
print('classes: ',clf.classes_)
print('coefficients: ',clf.coef_)
print('intercept :', clf.intercept_)
Explanation: scikit-learn
End of explanation
X_train = sm.add_constant(df.balance)
est = smf.Logit(y.ravel(), X_train).fit()
est.summary().tables[1]
Explanation: statsmodels
End of explanation
X_train = sm.add_constant(df.student2)
y = df.default2
est = smf.Logit(y, X_train).fit()
est.summary().tables[1]
Explanation: Table 4.2
End of explanation
X_train = sm.add_constant(df[['balance', 'income', 'student2']])
est = smf.Logit(y, X_train).fit()
est.summary().tables[1]
Explanation: Table 4.3 - Multiple Logistic Regression
End of explanation
# balance and default vectors for students
X_train = df[df.student == 'Yes'].balance.reshape(df[df.student == 'Yes'].balance.size,1)
y = df[df.student == 'Yes'].default2
# balance and default vectors for non-students
X_train2 = df[df.student == 'No'].balance.reshape(df[df.student == 'No'].balance.size,1)
y2 = df[df.student == 'No'].default2
# Vector with balance values for plotting
X_test = np.arange(df.balance.min(), df.balance.max()).reshape(-1,1)
clf = skl_lm.LogisticRegression(solver='newton-cg')
clf2 = skl_lm.LogisticRegression(solver='newton-cg')
clf.fit(X_train,y)
clf2.fit(X_train2,y2)
prob = clf.predict_proba(X_test)
prob2 = clf2.predict_proba(X_test)
df.groupby(['student','default']).size().unstack('default')
# creating plot
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(12,5))
# Left plot
ax1.plot(X_test, pd.DataFrame(prob)[1], color='orange', label='Student')
ax1.plot(X_test, pd.DataFrame(prob2)[1], color='lightblue', label='Non-student')
ax1.hlines(127/2817, colors='orange', label='Overall Student',
xmin=ax1.xaxis.get_data_interval()[0],
xmax=ax1.xaxis.get_data_interval()[1], linestyles='dashed')
ax1.hlines(206/6850, colors='lightblue', label='Overall Non-Student',
xmin=ax1.xaxis.get_data_interval()[0],
xmax=ax1.xaxis.get_data_interval()[1], linestyles='dashed')
ax1.set_ylabel('Default Rate')
ax1.set_xlabel('Credit Card Balance')
ax1.set_yticks([0, 0.2, 0.4, 0.6, 0.8, 1.])
ax1.set_xlim(450,2500)
ax1.legend(loc=2)
# Right plot
sns.boxplot('student', 'balance', data=df, orient='v', ax=ax2, palette=c_palette);
Explanation: Figure 4.3 - Confounding
End of explanation
X = df[['balance', 'income', 'student2']].as_matrix()
y = df.default2.as_matrix()
lda = LinearDiscriminantAnalysis(solver='svd')
y_pred = lda.fit(X, y).predict(X)
df_ = pd.DataFrame({'True default status': y,
'Predicted default status': y_pred})
df_.replace(to_replace={0:'No', 1:'Yes'}, inplace=True)
df_.groupby(['Predicted default status','True default status']).size().unstack('True default status')
print(classification_report(y, y_pred, target_names=['No', 'Yes']))
Explanation: 4.4 Linear Discriminant Analysis
Table 4.4
End of explanation
decision_prob = 0.2
y_prob = lda.fit(X, y).predict_proba(X)
df_ = pd.DataFrame({'True default status': y,
'Predicted default status': y_prob[:,1] > decision_prob})
df_.replace(to_replace={0:'No', 1:'Yes', 'True':'Yes', 'False':'No'}, inplace=True)
df_.groupby(['Predicted default status','True default status']).size().unstack('True default status')
Explanation: Table 4.5
Instead of using the probability of 50% as decision boundary, we say that a probability of default of 20% is to be classified as 'Yes'.
End of explanation
df = pd.read_csv('../data/Smarket.csv', usecols=range(1,10), index_col=0, parse_dates=True)
X_train = df[:'2004'][['Lag1','Lag2']]
y_train = df[:'2004']['Direction']
X_test = df['2005':][['Lag1','Lag2']]
y_test = df['2005':]['Direction']
lda = LinearDiscriminantAnalysis()
pred = lda.fit(X_train, y_train).predict(X_test)
lda.priors_
lda.means_
# These do not seem to correspond to the values from the R output in the book?
lda.coef_
confusion_matrix(y_test, pred).T
print(classification_report(y_test, pred, digits=3))
pred_p = lda.predict_proba(X_test)
np.unique(pred_p[:,1]>0.5, return_counts=True)
np.unique(pred_p[:,1]>0.9, return_counts=True)
Explanation: Lab
4.6.3 Linear Discriminant Analysis
End of explanation
qda = QuadraticDiscriminantAnalysis()
pred = qda.fit(X_train, y_train).predict(X_test)
qda.priors_
qda.means_
confusion_matrix(y_test, pred).T
print(classification_report(y_test, pred, digits=3))
Explanation: 4.6.4 Quadratic Discriminant Analysis
End of explanation
knn = neighbors.KNeighborsClassifier(n_neighbors=1)
pred = knn.fit(X_train, y_train).predict(X_test)
print(confusion_matrix(y_test, pred).T)
print(classification_report(y_test, pred, digits=3))
knn = neighbors.KNeighborsClassifier(n_neighbors=3)
pred = knn.fit(X_train, y_train).predict(X_test)
print(confusion_matrix(y_test, pred).T)
print(classification_report(y_test, pred, digits=3))
Explanation: 4.6.5 K-Nearest Neighbors
End of explanation
# In R, I exported the dataset from package 'ISLR' to a csv file
df = pd.read_csv('../data/Caravan.csv')
y = df.Purchase
X = df.drop('Purchase', axis=1).astype('float64')
X_scaled = preprocessing.scale(X)
X_train = X_scaled[1000:,:]
y_train = y[1000:]
X_test = X_scaled[:1000,:]
y_test = y[:1000]
def KNN(n_neighbors=1, weights='uniform'):
clf = neighbors.KNeighborsClassifier(n_neighbors, weights)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
score = clf.score(X_test, y_test)
return(pred, score, clf.classes_)
def plot_confusion_matrix(cm, classes, n_neighbors, title='Confusion matrix (Normalized)',
cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.title('Normalized confusion matrix: KNN-{}'.format(n_neighbors))
plt.colorbar()
plt.xticks(np.arange(2), classes)
plt.yticks(np.arange(2), classes)
plt.tight_layout()
plt.xlabel('True label',rotation='horizontal', ha='right')
plt.ylabel('Predicted label')
plt.show()
for i in [1,3,5]:
pred, score, classes = KNN(i)
cm = confusion_matrix(y_test, pred)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plot_confusion_matrix(cm_normalized.T, classes, n_neighbors=i)
cm_df = pd.DataFrame(cm.T, index=classes, columns=classes)
cm_df.index.name = 'Predicted'
cm_df.columns.name = 'True'
print(cm_df)
print(pd.DataFrame(precision_score(y_test, pred, average=None),
index=classes, columns=['Precision']))
Explanation: 4.6.6 An Application to Caravan Insurance Data
K-Nearest Neighbors
End of explanation
regr = skl_lm.LogisticRegression()
regr.fit(X_train, y_train)
pred = regr.predict(X_test)
cm_df = pd.DataFrame(confusion_matrix(y_test, pred).T, index=regr.classes_,
columns=regr.classes_)
cm_df.index.name = 'Predicted'
cm_df.columns.name = 'True'
print(cm_df)
print(classification_report(y_test, pred))
pred_p = regr.predict_proba(X_test)
cm_df = pd.DataFrame({'True': y_test, 'Pred': pred_p[:,1] > .25})
cm_df.Pred.replace(to_replace={True:'Yes', False:'No'}, inplace=True)
print(cm_df.groupby(['True', 'Pred']).size().unstack('True').T)
print(classification_report(y_test, cm_df.Pred))
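Picking up the "Running Exercise" from the top of this lab (ROC and PR curves), a minimal sketch (an addition, not in the original repo) using the Caravan logistic-regression probabilities computed just above; pred_p[:,1] is the predicted probability of 'Yes':
from sklearn.metrics import roc_curve, precision_recall_curve, auc

y_true = (y_test == 'Yes').astype(int)
fpr, tpr, _ = roc_curve(y_true, pred_p[:,1])
precision, recall, _ = precision_recall_curve(y_true, pred_p[:,1])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,4))
ax1.plot(fpr, tpr)
ax1.set_title('ROC curve (AUC = %.3f)' % auc(fpr, tpr))
ax1.set_xlabel('False positive rate')
ax1.set_ylabel('True positive rate')
ax2.plot(recall, precision)
ax2.set_title('Precision-Recall curve')
ax2.set_xlabel('Recall')
ax2.set_ylabel('Precision')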
Explanation: Logistic Regression
End of explanation |
114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filtering in the frequency domain
Filtering operations can be performed both in the spatial domain and in the frequency domain. Frequency-domain filters are usually classified into three categories
Step1: To understand the effects observed when filtering with the ideal filter, let us visualize the ideal filter in the spatial domain.
Step2: 2. High-pass filters
When what we are looking for in an image are precisely the edges, i.e. the high-frequency content of an image, we use a high-pass filter, attenuating a specific range of low-frequency components.
2.1 Ideal high-pass filter
A two-dimensional ideal high-pass filter is one whose transfer function satisfies the relation
$$ H(u,v) = \begin{cases}
0, & \text{if $D(u,v) \leq D_0$}\\
1, & \text{if $D(u,v) > D_0$}
\end{cases}$$
em que $D_0$ é um valor não-negativo específico e $D(u, v)$ é a distância do ponto $(u,v)$ à origem do plano da frequência. Aqui também $D_0$ é denominado frequência de corte. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import scipy.signal as sci
from numpy.fft import fft2, ifft2
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
f = mpimg.imread('../data/cameraman.tif')
plt.imshow(f,cmap='gray');
plt.title('Original')
plt.colorbar()
plt.show()
# Criando o filtro ideal (circulo) em frequência
H = ia.circle(f.shape, 30, np.divide(f.shape, 2))
x,y = f.shape
HPB_ideal = ia.ptrans(H,(x//2,y//2))
plt.figure(1)
plt.subplot(1,2,1)
plt.imshow(H,cmap='gray');
plt.title('Filtro Passa-baixas')
plt.subplot(1,2,2)
plt.imshow(HPB_ideal,cmap='gray');
plt.title('Filtro Passa-baixas ideal')
# Filtrando a imagem no domínio da frequência
F = fft2(f)
G = F * HPB_ideal
gg = ifft2(G)
plt.figure(1, figsize=(12,12))
plt.subplot(1,3,1)
plt.imshow(np.log(np.abs(ia.ptrans(F,(x//2,y//2))+1)),cmap='gray')
plt.title('DFT da Imagem')
plt.subplot(1,3,2)
plt.imshow(np.log(np.abs(ia.ptrans(HPB_ideal,(x//2,y//2))+1)),cmap='gray')
plt.title('Filtro passa-baixas ideal')
plt.subplot(1,3,3)
plt.imshow(np.log(np.abs(ia.ptrans(G,(x//2,y//2))+1)),cmap='gray')
plt.title('F * HPB_ideal')
plt.figure(2)
#plt.subplot(1,4,4)
plt.imshow(gg.real.astype(np.float),cmap='gray');
plt.title('Imagem filtrada')
Explanation: Filtering in the frequency domain
Filtering operations can be performed both in the spatial domain and in the frequency domain. Frequency-domain filters are usually classified into three categories:
low-pass,
high-pass and
band-pass.
The visual effect of a low-pass filter is a smoothing of the image, since the high frequencies, which correspond to abrupt transitions, are attenuated. For the same reason, smoothing also tends to minimize the effect of noise in images. A high-pass filter emphasizes the high frequencies and is normally used to enhance details in the image; the effect obtained is, in general, edge enhancement. A band-pass filter selects a range of frequencies of the signal to be emphasized.
Frequency-domain filtering using the Convolution Theorem
The Convolution Theorem guarantees that convolution in the spatial domain is equivalent to a product in the frequency domain. In other words, instead of applying a filter in the spatial domain by convolving the image $f(x,y)$ with a mask $h(x,y)$, we can apply the filter in the frequency domain as the product of the Fourier Transform of the image, $F(u,v)$, with the Fourier Transform of the mask (filter), $H(u,v)$.
1. Low-pass filters
Edges and other abrupt transitions (such as noise) in the gray levels of an image contribute significantly to the high-frequency content of an image. Smoothing is therefore achieved in the frequency domain by attenuating a specific range of high-frequency components in the transform of the image.
1.1 Ideal low-pass filter
A two-dimensional ideal low-pass filter is one whose transfer function satisfies the relation
$$ H(u,v) = \begin{cases}
1, & \text{if $D(u,v) \leq D_0$}\\
0, & \text{if $D(u,v) > D_0$}
\end{cases}$$
where $D_0$ is a specified non-negative value and $D(u,v)$ is the distance from the point $(u,v)$ to the origin of the frequency plane. $D_0$ is usually called the cutoff frequency.
Example
The figure below illustrates each of the steps involved in applying an ideal low-pass filter to an image.
End of explanation
hpb_ideal = ifft2(HPB_ideal)
plt.imshow(np.log(np.abs(ia.ptrans(hpb_ideal,(x//2,y//2))+1)),cmap='gray');
plt.title('hpb_ideal')
Explanation: To understand the effects observed when filtering with the ideal filter, let us visualize the ideal filter in the spatial domain.
End of explanation
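Before moving on to high-pass filters, a small sketch (an addition, not in the original notebook): a Gaussian low-pass transfer function built with the same conventions used above. Its smooth roll-off attenuates high frequencies gradually and therefore avoids the ringing produced by the sharp cutoff of the ideal filter.
rows, cols = f.shape
u = np.arange(rows).reshape(-1, 1) - rows//2
v = np.arange(cols).reshape(1, -1) - cols//2
D = np.sqrt(u**2 + v**2)              # distance to the center of the frequency plane
D0 = 30.0                             # same cutoff radius used for the ideal filter
H_gauss = np.exp(-(D**2) / (2.0 * D0**2))
G_gauss = fft2(f) * ia.ptrans(H_gauss, (rows//2, cols//2))
g_gauss = ifft2(G_gauss).real
plt.imshow(g_gauss, cmap='gray')
plt.title('Gaussian low-pass, no ringing')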
# Criando o filtro ideal (circulo) em frequência
HPA = 1 - ia.circle(f.shape, 30, np.divide(f.shape, 2))
x,y = f.shape
HPA_ideal = ia.ptrans(HPA,(x//2,y//2))
plt.figure(1)
plt.subplot(1,2,1)
plt.imshow(HPA,cmap='gray');
plt.title('H')
plt.subplot(1,2,2)
plt.imshow(HPA_ideal,cmap='gray');
plt.title('H_ideal')
# Filtrando a imagem no domínio da frequência
F = fft2(f)
G2 = F * HPA_ideal
gg2 = ifft2(G2)
plt.figure(1, figsize=(12,12))
plt.subplot(1,3,1)
plt.imshow(np.log(np.abs(ia.ptrans(F,(x//2,y//2))+1)),cmap='gray')
plt.title('DFT da Imagem')
plt.subplot(1,3,2)
plt.imshow(np.log(np.abs(ia.ptrans(HPA_ideal,(x//2,y//2))+1)),cmap='gray')
plt.title('Filtro passa-baixas ideal')
plt.subplot(1,3,3)
plt.imshow(np.log(np.abs(ia.ptrans(G2,(x//2,y//2))+1)),cmap='gray')
plt.title('F * HPB_ideal')
plt.figure(2)
#plt.subplot(1,4,4)
plt.imshow(gg2.real.astype(np.float),cmap='gray');
plt.title('Imagem filtrada')
Explanation: 2. High-pass filters
When what we are looking for in an image are precisely the edges, i.e. the high-frequency content of an image, we use a high-pass filter, attenuating a specific range of low-frequency components.
2.1 Ideal high-pass filter
A two-dimensional ideal high-pass filter is one whose transfer function satisfies the relation
$$ H(u,v) = \begin{cases}
0, & \text{if $D(u,v) \leq D_0$}\\
1, & \text{if $D(u,v) > D_0$}
\end{cases}$$
where $D_0$ is a specified non-negative value and $D(u,v)$ is the distance from the point $(u,v)$ to the origin of the frequency plane. Here too, $D_0$ is called the cutoff frequency.
End of explanation |
115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating a trading strategy - moving average crossover
Notebook and module information
Step1: A quick overview of the basic, commonly used types of strategies
Step2: Computed long positions
Step3: Computed short positions
Step3: Vypočítané short pozice
Step4: Zobrazení výsledků | Python Code:
NB_VERSION = 1,0
import sys
import datetime
import numpy as np
import pandas as pd
import pandas_datareader as pdr
import pandas_datareader.data as pdr_web
import quandl as ql
from matplotlib import __version__ as matplotlib_version
from seaborn import __version__ as seaborn_version
import statsmodels.api as sm
# Load Quandl API key
import json
with open('quandl_key.json','r') as f:
quandl_api_key = json.load(f)
ql.ApiConfig.api_key = quandl_api_key['API-key']
print('Verze notebooku:', '.'.join(map(str, NB_VERSION)))
print('Verze pythonu:', '.'.join(map(str, sys.version_info[0:3])))
print('---')
print('NumPy:', np.__version__)
print('Pandas:', pd.__version__)
print('pandas-datareader:', pdr.__version__)
print('Quandl:', ql.version.VERSION)
print('Matplotlib:', matplotlib_version)
print('Seaborn:', seaborn_version)
print('Statsmodels:', sm.version.version)
Explanation: Creating a trading strategy - moving average crossover
Notebook and module information
End of explanation
# Zíkám data ETF trhu SPY, který kopíruje trh S&P 500
start_date = datetime.datetime(2008, 1, 1)
end_date = datetime.datetime.now()
ohlc_data = pdr_web.DataReader("NYSEARCA:SPY", 'google', start=start_date, end=end_date)
# Příprava period pro SMA
short_period = 30
long_period = 90
# Připrava signálního DataFrame s vynulovaným sloupcem 'signal', 0=bez signálu
signals = pd.DataFrame(index=ohlc_data.index)
signals['signal'] = 0.0
# SMA pro kratší a delší periodu.
signals['short_sma'] = ohlc_data['Close'].rolling(window=short_period, min_periods=1, center=False).mean()
signals['long_sma'] = ohlc_data['Close'].rolling(window=long_period, min_periods=1, center=False).mean()
# Získání signálu pro situace, kde SMA s menší periodou je nad SMA s větší periodou.
signals['signal'][short_period:] = np.where(signals['short_sma'][short_period:]
> signals['long_sma'][short_period:], 1.0, 0.0)
# Vygenerování obchodních příkazů
signals['positions'] = signals['signal'].diff()
Explanation: A quick overview of the basic, commonly used types of strategies
For automated trading that relies on data analysis, the term Quantitative trading is used. The two types of strategies most commonly used in trading are:
Momentum strategies (momentum strategy)
This covers both trend-following strategies and divergence-based strategies. When trading these types of strategies we rely on the current market move continuing in the same direction in the future, and we also believe that we can detect these trends and then exploit them. An example of such a strategy is one based on moving average crossovers.
Reversion strategies (reversion strategy)
They are sometimes also called convergence strategies or cycle-trading strategies. These strategies are based on the idea that the current move may eventually start to reverse. An example is a strategy based on the principle of the price returning to its average (mean reversion strategy).
A strategy based on moving average crossovers
The simplest strategy we can build is a moving average crossover.
I will use two simple moving averages (SMA - simple moving average) with different periods, e.g. 30 and 90 days. For a long position I want to see the shorter-period SMA cross upwards over the longer-period SMA. For a short position I want to see the shorter-period SMA cross downwards over the longer-period SMA.
Procedure
1. Get the data of the market I want to trade and store it in the ohlc_data variable.
+ Define the periods for the shorter and the longer SMA in the short_period and long_period variables.
+ Create a signal DataFrame with a signal column; I just have to make sure that the indexes in signals match the index in ohlc_data.
+ Use the rolling() and mean() functions to compute the SMA for the shorter and the longer period.
+ Put 1.0 into signals['signal'] for the days where the shorter-period SMA is above the longer-period SMA. This signal has to be computed from the position where the shorter-period SMA value is valid, i.e. from the row number stored in short_period.
+ Finally, put the difference of the changes of the 'signal' column into signals['positions']. This gives the position size I should hold for the moving average strategy -> 1.0 for long, -1.0 for short.
End of explanation
signals[signals['positions']>0]
Explanation: Computed long positions
End of explanation
signals[signals['positions']<0]
Explanation: Computed short positions
End of explanation
import matplotlib.pyplot as plt
# Prepare the figure
fig = plt.figure(figsize=(15,7))
# Add a subplot with a labelled y-axis to the figure
ax1 = fig.add_subplot(111, ylabel='Price in $')
# Add the Close price series to the chart.
ohlc_data['Close'].plot(ax=ax1, color='r', lw=2.)
# Add the moving averages to the chart.
signals[['short_sma', 'long_sma']].plot(ax=ax1, lw=2.)
# Add the "buy" entry signals
ax1.plot(signals.loc[signals.positions == 1.0].index,
signals.short_sma[signals.positions == 1.0],
'^', markersize=10, color='b')
# Add the "sell" entry signals
ax1.plot(signals.loc[signals.positions == -1.0].index,
signals.short_sma[signals.positions == -1.0],
'v', markersize=10, color='k')
# Show the chart
plt.show()
Explanation: Displaying the results
End of explanation |
116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to Load CSV and Numpy File Types in TensorFlow 2.0
Learning Objectives
Load a CSV file into a tf.data.Dataset.
Load Numpy data
Introduction
In this lab, you load CSV data from a file into a tf.data.Dataset. This tutorial provides an example of loading data from NumPy arrays into a tf.data.Dataset. You also load text data.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Load necessary libraries
We will start by importing the necessary libraries for this lab.
Step1: Load data
This section provides an example of how to load CSV data from a file into a tf.data.Dataset. The data used in this tutorial are taken from the Titanic passenger list. The model will predict the likelihood a passenger survived based on characteristics like age, gender, ticket class, and whether the person was traveling alone.
To start, let's look at the top of the CSV file to see how it is formatted.
Step2: You can load this using pandas, and pass the NumPy arrays to TensorFlow. If you need to scale up to a large set of files, or need a loader that integrates with TensorFlow and tf.data then use the tf.data.experimental.make_csv_dataset function
Step3: Now read the CSV data from the file and create a dataset.
(For the full documentation, see tf.data.experimental.make_csv_dataset)
Step4: Each item in the dataset is a batch, represented as a tuple of (many examples, many labels). The data from the examples is organized in column-based tensors (rather than row-based tensors), each with as many elements as the batch size (5 in this case).
It might help to see this yourself.
Step5: As you can see, the columns in the CSV are named. The dataset constructor will pick these names up automatically. If the file you are working with does not contain the column names in the first line, pass them in a list of strings to the column_names argument in the make_csv_dataset function.
Step6: This example is going to use all the available columns. If you need to omit some columns from the dataset, create a list of just the columns you plan to use, and pass it into the (optional) select_columns argument of the constructor.
Step7: Data preprocessing
A CSV file can contain a variety of data types. Typically you want to convert from those mixed types to a fixed length vector before feeding the data into your model.
TensorFlow has a built-in system for describing common input conversions
Step8: Here's a simple function that will pack together all the columns
Step9: Apply this to each element of the dataset
Step10: If you have mixed datatypes you may want to separate out these simple-numeric fields. The tf.feature_column api can handle them, but this incurs some overhead and should be avoided unless really necessary. Switch back to the mixed dataset
Step11: So define a more general preprocessor that selects a list of numeric features and packs them into a single column
Step12: Data Normalization
Continuous data should always be normalized.
Step13: Now create a numeric column. The tf.feature_columns.numeric_column API accepts a normalizer_fn argument, which will be run on each batch.
Bind the MEAN and STD to the normalizer fn using functools.partial.
Step14: When you train the model, include this feature column to select and center this block of numeric data
Step15: The mean based normalization used here requires knowing the means of each column ahead of time.
Categorical data
Some of the columns in the CSV data are categorical columns. That is, the content should be one of a limited set of options.
Use the tf.feature_column API to create a collection with a tf.feature_column.indicator_column for each categorical column.
Step16: This will be become part of a data processing input later when you build the model.
Combined preprocessing layer
Add the two feature column collections and pass them to a tf.keras.layers.DenseFeatures to create an input layer that will extract and preprocess both input types
Step17: Next Step
A next step would be to build a tf.keras.Sequential, starting with the preprocessing_layer, which is beyond the scope of this lab. We will cover the Keras Sequential API in the next Lesson.
Load NumPy data
Load necessary libraries
First, restart the Kernel. Then, we will start by importing the necessary libraries for this lab.
Step18: Load data from .npz file
We use the MNIST dataset in Keras.
Step19: Load NumPy arrays with tf.data.Dataset
Assuming you have an array of examples and a corresponding array of labels, pass the two arrays as a tuple into tf.data.Dataset.from_tensor_slices to create a tf.data.Dataset. | Python Code:
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import functools
import numpy as np
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
# Downloads a file from a URL if it not already in the cache using `tf.keras.utils.get_file()`.
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
# Make numpy values easier to read.
np.set_printoptions(precision=3, suppress=True)
Explanation: How to Load CSV and Numpy File Types in TensorFlow 2.0
Learning Objectives
Load a CSV file into a tf.data.Dataset.
Load Numpy data
Introduction
In this lab, you load CSV data from a file into a tf.data.Dataset. This tutorial provides an example of loading data from NumPy arrays into a tf.data.Dataset. You also load text data.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Load necessary libraries
We will start by importing the necessary libraries for this lab.
End of explanation
# `head()` function is used to get the first n rows
!head {train_file_path}
Explanation: Load data
This section provides an example of how to load CSV data from a file into a tf.data.Dataset. The data used in this tutorial are taken from the Titanic passenger list. The model will predict the likelihood a passenger survived based on characteristics like age, gender, ticket class, and whether the person was traveling alone.
To start, let's look at the top of the CSV file to see how it is formatted.
End of explanation
# TODO 1
LABEL_COLUMN = 'survived'
LABELS = [0, 1]
Explanation: You can load this using pandas, and pass the NumPy arrays to TensorFlow. If you need to scale up to a large set of files, or need a loader that integrates with TensorFlow and tf.data then use the tf.data.experimental.make_csv_dataset function:
The only column you need to identify explicitly is the one with the value that the model is intended to predict.
End of explanation
# get_dataset() retrieve a Dataverse dataset or its metadata
def get_dataset(file_path, **kwargs):
# TODO 2
# Use `tf.data.experimental.make_csv_dataset()` to read CSV files into a dataset.
dataset = tf.data.experimental.make_csv_dataset(
file_path,
batch_size=5, # Artificially small to make examples easier to show.
label_name=LABEL_COLUMN,
na_value="?",
num_epochs=1,
ignore_errors=True,
**kwargs)
return dataset
raw_train_data = get_dataset(train_file_path)
raw_test_data = get_dataset(test_file_path)
def show_batch(dataset):
for batch, label in dataset.take(1):
for key, value in batch.items():
print("{:20s}: {}".format(key,value.numpy()))
Explanation: Now read the CSV data from the file and create a dataset.
(For the full documentation, see tf.data.experimental.make_csv_dataset)
End of explanation
show_batch(raw_train_data)
Explanation: Each item in the dataset is a batch, represented as a tuple of (many examples, many labels). The data from the examples is organized in column-based tensors (rather than row-based tensors), each with as many elements as the batch size (5 in this case).
It might help to see this yourself.
End of explanation
CSV_COLUMNS = ['survived', 'sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'class', 'deck', 'embark_town', 'alone']
# pass column names in a list of strings to the column_names argument.
temp_dataset = get_dataset(train_file_path, column_names=CSV_COLUMNS)
show_batch(temp_dataset)
Explanation: As you can see, the columns in the CSV are named. The dataset constructor will pick these names up automatically. If the file you are working with does not contain the column names in the first line, pass them in a list of strings to the column_names argument in the make_csv_dataset function.
End of explanation
# If you need to omit some columns from the dataset, create a list of just the columns you plan to use, and
# pass it into the select_columns argument of the constructor.
SELECT_COLUMNS = ['survived', 'age', 'n_siblings_spouses', 'class', 'deck', 'alone']
temp_dataset = get_dataset(train_file_path, select_columns=SELECT_COLUMNS)
show_batch(temp_dataset)
Explanation: This example is going to use all the available columns. If you need to omit some columns from the dataset, create a list of just the columns you plan to use, and pass it into the (optional) select_columns argument of the constructor.
End of explanation
SELECT_COLUMNS = ['survived', 'age', 'n_siblings_spouses', 'parch', 'fare']
DEFAULTS = [0, 0.0, 0.0, 0.0, 0.0]
temp_dataset = get_dataset(train_file_path,
select_columns=SELECT_COLUMNS,
column_defaults = DEFAULTS)
show_batch(temp_dataset)
example_batch, labels_batch = next(iter(temp_dataset))
Explanation: Data preprocessing
A CSV file can contain a variety of data types. Typically you want to convert from those mixed types to a fixed length vector before feeding the data into your model.
TensorFlow has a built-in system for describing common input conversions: tf.feature_column, see this tutorial for details.
You can preprocess your data using any tool you like (like nltk or sklearn), and just pass the processed output to TensorFlow.
The primary advantage of doing the preprocessing inside your model is that when you export the model it includes the preprocessing. This way you can pass the raw data directly to your model.
Continuous data
If your data is already in an appropriate numeric format, you can pack the data into a vector before passing it off to the model:
End of explanation
# `pack()` function will pack together all the columns
def pack(features, label):
# `tf.stack()` stacks a list of rank-R tensors into one rank-(R+1) tensor.
return tf.stack(list(features.values()), axis=-1), label
Explanation: Here's a simple function that will pack together all the columns:
End of explanation
packed_dataset = temp_dataset.map(pack)
for features, labels in packed_dataset.take(1):
print(features.numpy())
print()
print(labels.numpy())
Explanation: Apply this to each element of the dataset:
End of explanation
show_batch(raw_train_data)
example_batch, labels_batch = next(iter(temp_dataset))
Explanation: If you have mixed datatypes you may want to separate out these simple-numeric fields. The tf.feature_column api can handle them, but this incurs some overhead and should be avoided unless really necessary. Switch back to the mixed dataset:
End of explanation
class PackNumericFeatures(object):
def __init__(self, names):
self.names = names
def __call__(self, features, labels):
numeric_features = [features.pop(name) for name in self.names]
numeric_features = [tf.cast(feat, tf.float32) for feat in numeric_features]
numeric_features = tf.stack(numeric_features, axis=-1)
features['numeric'] = numeric_features
return features, labels
NUMERIC_FEATURES = ['age','n_siblings_spouses','parch', 'fare']
packed_train_data = raw_train_data.map(
PackNumericFeatures(NUMERIC_FEATURES))
packed_test_data = raw_test_data.map(
PackNumericFeatures(NUMERIC_FEATURES))
show_batch(packed_train_data)
example_batch, labels_batch = next(iter(packed_train_data))
Explanation: So define a more general preprocessor that selects a list of numeric features and packs them into a single column:
End of explanation
# pandas is used for data manipulation and analysis.
import pandas as pd
# pandas module read_csv() function reads the CSV file into a DataFrame object.
desc = pd.read_csv(train_file_path)[NUMERIC_FEATURES].describe()
desc
# TODO 1
MEAN = np.array(desc.T['mean'])
STD = np.array(desc.T['std'])
def normalize_numeric_data(data, mean, std):
# TODO 2
# Center the data
return (data-mean)/std
print(MEAN, STD)
Explanation: Data Normalization
Continuous data should always be normalized.
End of explanation
# See what you just created.
# Bind the MEAN and STD to the normalizer fn using `functools.partial`
normalizer = functools.partial(normalize_numeric_data, mean=MEAN, std=STD)
# `tf.feature_column.numeric_column()` represents real valued or numerical features.
numeric_column = tf.feature_column.numeric_column('numeric', normalizer_fn=normalizer, shape=[len(NUMERIC_FEATURES)])
numeric_columns = [numeric_column]
numeric_column
Explanation: Now create a numeric column. The tf.feature_columns.numeric_column API accepts a normalizer_fn argument, which will be run on each batch.
Bind the MEAN and STD to the normalizer fn using functools.partial.
End of explanation
example_batch['numeric']
# `tf.keras.layers.DenseFeatures()` produces a dense Tensor based on given feature_columns.
numeric_layer = tf.keras.layers.DenseFeatures(numeric_columns)
numeric_layer(example_batch).numpy()
Explanation: When you train the model, include this feature column to select and center this block of numeric data:
End of explanation
CATEGORIES = {
'sex': ['male', 'female'],
'class' : ['First', 'Second', 'Third'],
'deck' : ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
'embark_town' : ['Cherbourg', 'Southhampton', 'Queenstown'],
'alone' : ['y', 'n']
}
categorical_columns = []
for feature, vocab in CATEGORIES.items():
# Use the `tf.feature_column` API to create a collection with a `tf.feature_column.indicator_column` for each categorical column.
cat_col = tf.feature_column.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab)
categorical_columns.append(tf.feature_column.indicator_column(cat_col))
# See what you just created.
categorical_columns
# `tf.keras.layers.DenseFeatures()` produces a dense Tensor based on given feature_columns.
categorical_layer = tf.keras.layers.DenseFeatures(categorical_columns)
print(categorical_layer(example_batch).numpy()[0])
Explanation: The mean based normalization used here requires knowing the means of each column ahead of time.
Categorical data
Some of the columns in the CSV data are categorical columns. That is, the content should be one of a limited set of options.
Use the tf.feature_column API to create a collection with a tf.feature_column.indicator_column for each categorical column.
End of explanation
# Add the two feature column collections
# Pass them to a `tf.keras.layers.DenseFeatures()` to create an input layer.
# TODO 1
preprocessing_layer = tf.keras.layers.DenseFeatures(categorical_columns+numeric_columns)
print(preprocessing_layer(example_batch).numpy()[0])
Explanation: This will become part of a data processing input later when you build the model.
Combined preprocessing layer
Add the two feature column collections and pass them to a tf.keras.layers.DenseFeatures to create an input layer that will extract and preprocess both input types:
End of explanation
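# A minimal sketch (assumed layer sizes, optimizer, and loss) of how the preprocessing_layer
# could feed a tf.keras.Sequential model; building and training such a model is covered in the next lesson.
model = tf.keras.Sequential([
    preprocessing_layer,
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# model.fit(packed_train_data, epochs=5)  # packed_train_data yields (features dict, label) batches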
# Importing the necessary libraries
import numpy as np
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
Explanation: Next Step
A next step would be to build a tf.keras.Sequential, starting with the preprocessing_layer, which is beyond the scope of this lab. We will cover the Keras Sequential API in the next Lesson.
Load NumPy data
Load necessary libraries
First, restart the Kernel. Then, we will start by importing the necessary libraries for this lab.
End of explanation
DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'
# `tf.keras.utils.get_file()` downloads a file from a URL if it not already in the cache.
path = tf.keras.utils.get_file('mnist.npz', DATA_URL)
with np.load(path) as data:
# TODO 1
train_examples = data['x_train']
train_labels = data['y_train']
test_examples = data['x_test']
test_labels = data['y_test']
Explanation: Load data from .npz file
We use the MNIST dataset in Keras.
End of explanation
# With the help of `tf.data.Dataset.from_tensor_slices()` method, we can get the slices of an array in the form of objects.
# by using `tf.data.Dataset.from_tensor_slices()` method.
# TODO 2
train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))
Explanation: Load NumPy arrays with tf.data.Dataset
Assuming you have an array of examples and a corresponding array of labels, pass the two arrays as a tuple into tf.data.Dataset.from_tensor_slices to create a tf.data.Dataset.
End of explanation |
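# A small follow-up sketch (batch and buffer sizes are illustrative assumptions): shuffle and
# batch the datasets before they are fed to a model.
BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = 100
train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)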
117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem set 3 (90 pts)
Important note
Step1: Algorithm must halt before num_iter_fix + num_iter_adapt iterations if the following condition is satisfied $$ \boxed{\|\lambda_k - \lambda_{k-1}\|_2 / \|\lambda_k\|_2 \leq \varepsilon} \text{ at some step } k.$$
Do not forget to use the orthogonal projection from above in the iterative process to get the correct eigenvector.
It is also a good idea to use shift=0 before the adaptive strategy is used. This, however, is not possible since the matrix $L$ is singular, and sparse decompositions in scipy do not work in this case. Therefore, we first use a very small shift instead.
(3 pts) Generate a random lollipop_graph using networkx library and find its partition. Draw this graph with vertices colored according to the partition.
Step2: (2 pts) Start the method with a random initial guess x0, set num_iter_fix=0 and comment why the method can converge to a wrong eigenvalue.
Step3: The fixed-shift phase is important because without it the adaptive iteration starts from the shift equal to the initial Rayleigh quotient, which can be seen explicitly in the code as $x_0^T L x_0$, and it then converges to whichever eigenvalue is closest to that value.
Spectral graph properties (15 pts)
(5 pts) Prove that multiplicity of the eigenvalue $0$ in the spectrum of the graphs Laplacian is the number of its connected components.
(10 pts) The second-smallest eigenvalue of $L(G)$, $\lambda_2(L(G))$, is often called the algebraic connectivity of the
graph $G$. A basic intuition behind the use of this term is that a graph with a higher algebraic
connectivity typically has more edges, and can therefore be thought of as being “more connected”.
To check this statement, create a few graphs with an equal number of vertices using networkx; one of them should be $C_{30}$ - a simple cyclic graph, and one of them should be $K_{30}$ - a complete graph. (You can also change the number of vertices if it makes sense for your experiments, but do not make it trivially small).
Find the algebraic connectivity for each graph using inverse iteration.
Plot the dependency $\lambda_2(G_i)$ on $|E_i|$.
Draw a partition for a chosen graph from the generated set.
Comment on the results.
Image bipartition (10 pts)
Let us deal here with a graph constructed from a binarized image.
Consider the rule, that graph vertices are only pixels with $1$, and each vertex can have no more than $8$ connected vertices (pixel neighbours), $\textit{i.e}$ graph degree is limited by 8.
* (3 pts) Find an image with minimal size equal to $(256, 256)$ and binarize it such that the graph built on black pixels has exactly $1$ connected component.
* (5 pts) Write a function that constructs a sparse adjacency matrix from the binarized image, taking into account the rule from above.
* (2 pts) Find the partition of the resulting graph and draw the image in accordance with partition.
Step4: Problem 3 (30 pts)
Say hi to the drone
You received a radar-made air scan data of a terrorist hideout made from a heavy-class surveillance drone. Unfortunately, it was made with an old-fashioned radar, so the picture is convolved with the diffractive pattern. You need to deconvolve the picture to recover the building plan.
Step5: In this problem you are asked to use an FFT-based matvec to build the convolution operator for a picture of size $N\times N$, where $N=300$, with the following kernel (2D Helmholtz scattering)
Step6: 2. Matvec (5 pts)
Step7: 3. Complexity (3 pts)
Big-O complexity of one matvec operation is $\mathcal{O}(N^2\log{}N)$
4. LinearOperator (2 pts)
Step8: 5. Reconstruction (15pts) | Python Code:
import numpy as np
import scipy as sp
from scipy import sparse
from scipy.sparse import linalg
import networkx as nx
from networkx.linalg.algebraicconnectivity import fiedler_vector
import matplotlib.pyplot as plt
# INPUT:
# A - adjacency matrix (scipy.sparse.csr_matrix)
# num_iter_fix - number of iterations with fixed shift (int)
# shift - (float number)
# num_iter_adapt - number of iterations with adaptive shift (int) -- Rayleigh quotient iteration steps
# x0 - initial guess (1D numpy.ndarray)
# OUTPUT:
# x - normalized Fiedler vector (1D numpy.ndarray)
# eigs - eigenvalue estimations at each step (1D numpy.ndarray)
# eps - relative tolerance (float)
def operator_P(x):
return x - np.ones_like(x) * np.sum(x) / x.shape[0]
def is_small(eigs, eps):
if (abs(eigs[-2] - eigs[-1]) / eigs[-1] <= eps):
return True
def partition(A, shift, num_iter_fix, num_iter_adapt, x0, eps):
x_k = x0 / np.linalg.norm(x0)
eigs = np.array([0])
L, I = nx.laplacian_matrix(nx.from_scipy_sparse_matrix(A)), sp.sparse.identity(x_k.shape[0])
for _ in range(num_iter_fix):
x_k = operator_P(x_k)
x_k = sp.sparse.linalg.spsolve(L - shift * I, x_k)
x_k /= np.linalg.norm(x_k, 2)
eigs = np.append(eigs, np.dot(x_k, L.dot(x_k)))
if is_small(eigs, eps):
return x_k, eigs
for _ in range(num_iter_adapt):
x_k = operator_P(x_k)
x_k = sp.sparse.linalg.spsolve(L - np.dot(x_k, L.dot(x_k)) * I, x_k)
x_k /= np.linalg.norm(x_k, 2)
eigs = np.append(eigs, np.dot(x_k, L.dot(x_k)))
if is_small(eigs, eps):
return x_k, eigs
return x_k, eigs
Explanation: Problem set 3 (90 pts)
Important note: the template for your solution filename is Name_Surname_PS3.ipynb
For this problem set we do not run the bot, so try to debug your solutions with your own simple tests
Problem 1 (20 pts)
(5 pts) Prove that $\mathrm{vec}(AXB) = (B^\top \otimes A)\, \mathrm{vec}(X)$ if $\mathrm{vec}(X)$ is a columnwise reshape of a matrix into a long vector. What does it change if the reshape is rowwise?
Note:
1. To make a columnwise reshape in Python one should use np.reshape(X, order='f'), where the string 'f' stands for the Fortran ordering.
2. If $\mathrm{vec}(X)$ is a rowwise reshape,
$$\mathrm{vec}(AXB)=(A \otimes B^\top) \mathrm{vec}(X).$$
$B_k$ is $k^{th}$ column of $B$ and $x_i$ - $i^{th}$ column of X
$$ (AXB)_k = AXB_k = A \sum_i x_i b_{ik} = A
\begin{bmatrix}
x_1 & \dots & x_n
\end{bmatrix}
\begin{bmatrix}
b_{1k} \\
b_{2k} \\
\vdots \\
b_{nk}
\end{bmatrix} =
\sum_i b_{ik} A x_i =
\begin{bmatrix}
b_{1k}A & \dots & b_{nk}A
\end{bmatrix}
\begin{bmatrix}
x_{1} \\
x_{2} \\
\vdots \\
x_{n}
\end{bmatrix} = (b^T_k \otimes A) \, \mbox{vec}X
$$
After stacking all $b_k^T$-s up we will get:
$$
\mbox{vec}(AXB) = (B^T \otimes A) \mbox{vec} X
$$ q.e.d <br>
If vec is rowwise (let it be vec<sub>r</sub>):
$$
\mbox{vec}_r (AXB) = \mbox{vec}(B^T X^T A^T) = (A \otimes B^T)\,\mbox{vec}(X^T) = (A \otimes B^T) \, \mbox{vec}_r X
$$
(2 pts) What is the complexity of a naive computation of $(A \otimes B) x$? Show how it can be reduced. <br>
Let $A \in \mathbb{R}^{n \times m}$, $B \in \mathbb{R}^{k \times p}$, $x \in \mathbb{R}^{mp \times 1}$, $X \in \mathbb{R}^{p \times m}$ <br>
Complexity of $(A \otimes B)x$ is $\mathcal{O}(nkmp) \sim \boxed{\mathcal{O}(N^4)}$ <br>
Let $x$ = vec($X$), then:
$$ (A \otimes B) \mbox{vec}(X) = \mbox{vec}(BXA^T) $$
Complexity of $\mbox{vec}(BXA^T)$ is $\mathcal{O}((p+n)km) \sim \boxed{\mathcal{O}(N^3)} $
(3 pts) Let matrices $A$ and $B$ have eigendecompositions $A = S_A\Lambda_A S_A^{-1}$ and $B = S_B\Lambda_B S^{-1}_B$. Find eigenvectors and eigenvalues of the matrix $A\otimes I + I \otimes B$. <br>
Let $W = S_A \otimes S_B$ and $W^{-1} = S_A^{-1} \otimes S_B^{-1}$, then:
$$
W^{-1} (A \otimes I + I \otimes B) W = W^{-1} (A \otimes I) W + W^{-1} (I \otimes B) W = \\
= (S_A^{-1} \otimes S_B^{-1})(A \otimes I)(S_A \otimes S_B) + (S_A^{-1} \otimes S_B^{-1})(I \otimes B)(S_A \otimes S_B) = \\
= S_A^{-1} A S_A \otimes S^{-1}_B I S_B + S_A^{-1} I S_A \otimes S_B^{-1} B S_B = \\
= \Lambda_A \otimes I + I \otimes \Lambda_B =
\begin{bmatrix}
\lambda_1 I & \dots & 0 \\
\vdots & \ddots & \vdots \\
0 & \dots & \lambda_n I
\end{bmatrix} +
\begin{bmatrix}
\Lambda_B & \dots & 0 \\
\vdots & \ddots & \vdots \\
0 & \dots & \Lambda_B
\end{bmatrix}, \, \Lambda_B =
\begin{bmatrix}
\mu_1 & \dots & 0 \\
\vdots & \ddots & \vdots \\
0 & \dots & \mu_m
\end{bmatrix}
$$
Hence eigenvalues of $A \otimes I + I \otimes B$ are $\boxed{\lambda_i + \mu_j}$, $i = 1...n, j = 1...m$ <br>
eigenvectors are columns of $\boxed{S_A \otimes S_B}$
(10 pts) Let $A = \mathrm{diag}\left(\frac{1}{1000},\frac{2}{1000},\dots \frac{999}{1000}, 1, 1000 \right)$. Estimate analytically the number of iterations required to solve linear system with $A$ with the relative accuracy $10^{-4}$ using
Richardson iteration with the optimal choice of parameter (use $2$-norm)
$$
e_k \leq q^k e_0 \to \frac{e_k}{e_0} \leq q^k \\
q^k \geq \frac{\|x_k - x_* \|_2}{\|x_*\|_2}, \quad x_0 = 0 \\
q^k \geq 10^{-4} \to k \, \mbox{lg}\, q \geq -4, \quad q = \frac{\lambda_{max} - \lambda_{min}}{\lambda_{max} + \lambda_{min}} = \frac{1000 - 10^{-3}}{1000 + 10^{-3}} \\
k \geq \frac{-4}{\mbox{lg}\,q} \approx 4605170 \to \boxed{k \geq 4605170}
$$
Chebyshev iteration (use $2$-norm)
$$
e_{k+1} = p(A) e_0, \quad e_{k+1} \leq Cq^k e_0 \to Cq^k \geq 10^{-4} \\
\to k \geq \frac{-4}{\mbox{lg}\,q}, \, q = \frac{\sqrt{\mbox{cond}(A)} - 1}{\sqrt{\mbox{cond}(A)} + 1} = \frac{999}{1001} \\
\boxed{k \geq 4605}
$$
Conjugate gradient method (use $A$-norm). <br>
In Chapter 10 of Golub G.H. and Van Loan C.F., Matrix Computations, it is said that:
$$
\frac{\|x_k - x_* \|_A}{\| x_* \|_A} \leq 2 \Big( \frac{\sqrt{\mbox{cond}(A)} - 1}{\sqrt{\mbox{cond}(A)} + 1} \Big)^k $$
$$ -4 \leq \mbox{lg}2 \Big( \frac{999}{1001} \Big)^k = \mbox{lg}2 + k \mbox{lg}\frac{999}{1001} $$
$$ -4 - \mbox{lg}2 \leq k \mbox{lg}\frac{999}{1001} \to k \geq \frac{-4 \mbox{lg}2}{\mbox{lg}\frac{999}{1001}} $$
$$ \boxed{k \geq 4951} $$
But we might get better analytical results if one tries to derive $k$ from the following formula:
$$
\tau_k = \frac{\|x_k - x_* \|_A^2}{\| x_* \|^2_A} \leq \underset{\deg(q) \leq k, \, q(0) = 1}{\mbox{inf}} \Big( \underset{i = 1...n}{\mbox{max}} \, q(\lambda_i)^2 \Big)
$$
At some step there will be a polynomial $q$ such that $\deg q = n$ and all of its roots are eigenvalues of $A$, so $\tau_k = 0$. Such a polynomial becomes available once $k$ reaches the number of distinct eigenvalues of $A$, which is 1001, so $\boxed{k \geq 1001}$
Problem 2 (40 pts)
Spectral graph partitioning and inverse iteration
Given a connected graph $G$ and its corresponding graph Laplacian matrix $L = D - A$ with eigenvalues $0=\lambda_1, \lambda_2, ..., \lambda_n$, where $D$ is its degree matrix and $A$ is its adjacency matrix, the Fiedler vector is an eigenvector corresponding to the second smallest eigenvalue $\lambda_2$ of $L$. The Fiedler vector can be used for graph partitioning: positive values correspond to one part of the graph and negative values to another.
Inverse power method (15 pts)
To find the Fiedler vector we will use the inverse iteration with adaptive shifts (Rayleigh quotient iteration).
(5 pts) Write down the orthoprojection matrix on the space orthogonal to the eigenvector of $L$, corresponding to the eigenvalue $0$ and prove (analytically) that it is indeed an orthoprojection. <br>
First of all, it is known that the eigenvector corresponding to the 0 eigenvalue is $e^T = [ 1 ... 1 ]$.
Let's find the orthoprojection matrix onto the space orthogonal to the eigenvector $e$ in the following setup. Let $U$ be the space perpendicular to $e$; then the orthoprojection of an arbitrary vector $a$ is:
$$ a_{\bot} = a - a_{e} = I a - \frac{e e^T}{e^T e} a $$
Hence the orthoprojection matrix is:
$$ \boxed{P = I - \frac{e e^T}{e^T e}} $$
Of course the matrix is correct because it was derived from geometric considerations, but we can check it by taking an arbitrary vector $x^T$ = $[x_1 ... x_n]$:
$$ Px = x - \frac{1}{n}
\begin{bmatrix}
\sum_i x_i \\
\vdots \\
\sum_i x_i
\end{bmatrix} = x - \frac{(x, e)}{(e,e)} e
$$
(5 pts) Implement the spectral partitioning as the function partition:
End of explanation
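# A quick numerical sanity check of the vec identity from Problem 1 above,
# using small random matrices (the sizes are arbitrary).
A_chk = np.random.rand(3, 4)
X_chk = np.random.rand(4, 5)
B_chk = np.random.rand(5, 2)
lhs = np.reshape(A_chk @ X_chk @ B_chk, -1, order='F')            # vec(AXB), columnwise
rhs = np.kron(B_chk.T, A_chk) @ np.reshape(X_chk, -1, order='F')  # (B^T kron A) vec(X)
print("vec identity holds:", np.allclose(lhs, rhs))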
m, n = (10, 20)
G = nx.lollipop_graph(m, n)
A = nx.adjacency_matrix(G)
x0 = np.random.random((A.shape[0],)).astype(np.float64)
eigs_builtin = np.sort(np.linalg.eigh(nx.laplacian_matrix(G).todense())[0])
x, eigs = partition(A, 0.01, 10, 100, x0, 0.00001)
print("Are second smallest the same?", np.allclose(eigs_builtin[1], eigs[-1]))
pos = nx.spring_layout(G)
plt.figure(figsize=(14,7))
nx.draw_networkx(G, pos, node_color=np.sign(x))
Explanation: Algorithm must halt before num_iter_fix + num_iter_adapt iterations if the following condition is satisfied $$ \boxed{\|\lambda_k - \lambda_{k-1}\|_2 / \|\lambda_k\|_2 \leq \varepsilon} \text{ at some step } k.$$
Do not forget to use the orthogonal projection from above in the iterative process to get the correct eigenvector.
It is also a good idea to use shift=0 before the adaptive strategy is used. This, however, is not possible since the matrix $L$ is singular, and sparse decompositions in scipy do not work in this case. Therefore, we first use a very small shift instead.
(3 pts) Generate a random lollipop_graph using networkx library and find its partition. Draw this graph with vertices colored according to the partition.
End of explanation
x0 = np.random.random((A.shape[0],)).astype(np.float32)
eigs_builtin = np.sort(np.linalg.eigh(nx.laplacian_matrix(G).todense())[0])
x, eigs = partition(A, 1e-3, 0, 5, x0, 1e-5)
print("Are second smallest the same?", np.allclose(eigs_builtin[1], eigs[-2]))
print(eigs_builtin[1], eigs[-1])
Explanation: (2 pts) Start the method with a random initial guess x0, set num_iter_fix=0 and comment why the method can converge to a wrong eigenvalue.
End of explanation
from PIL import Image
import requests
url = "https://findicons.com/files/icons/2773/pictonic_free/256/lin_ubuntu.png"
img = np.array(Image.open(requests.get(url, stream=True).raw).convert('RGBA')).astype(np.uint8)[:,:,3]
img = 1.0 * (img < 100)
plt.imshow(img)
Explanation: The fixed-shift phase is important because without it the adaptive iteration starts from the shift equal to the initial Rayleigh quotient, which can be seen explicitly in the code as $x_0^T L x_0$, and it then converges to whichever eigenvalue is closest to that value.
Spectral graph properties (15 pts)
(5 pts) Prove that multiplicity of the eigenvalue $0$ in the spectrum of the graphs Laplacian is the number of its connected components.
(10 pts) The second-smallest eigenvalue of $L(G)$, $\lambda_2(L(G))$, is often called the algebraic connectivity of the
graph $G$. A basic intuition behind the use of this term is that a graph with a higher algebraic
connectivity typically has more edges, and can therefore be thought of as being “more connected”.
To check this statement, create a few graphs with an equal number of vertices using networkx; one of them should be $C_{30}$ - a simple cyclic graph, and one of them should be $K_{30}$ - a complete graph. (You can also change the number of vertices if it makes sense for your experiments, but do not make it trivially small).
Find the algebraic connectivity for each graph using inverse iteration.
Plot the dependency $\lambda_2(G_i)$ on $|E_i|$.
Draw a partition for a chosen graph from the generated set.
Comment on the results.
Image bipartition (10 pts)
Let us deal here with a graph constructed from a binarized image.
Consider the rule, that graph vertices are only pixels with $1$, and each vertex can have no more than $8$ connected vertices (pixel neighbours), $\textit{i.e}$ graph degree is limited by 8.
* (3 pts) Find an image with minimal size equal to $(256, 256)$ and binarize it such that the graph built on black pixels has exactly $1$ connected component.
* (5 pts) Write a function that constructs a sparse adjacency matrix from the binarized image, taking into account the rule from above.
* (2 pts) Find the partition of the resulting graph and draw the image in accordance with partition.
End of explanation
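# A short sketch of the algebraic connectivity experiment described above; for brevity it uses a
# dense eigensolver instead of the inverse iteration required by the task.
test_graphs = {"C_30": nx.cycle_graph(30), "K_30": nx.complete_graph(30), "lollipop": nx.lollipop_graph(10, 20)}
for name, g in test_graphs.items():
    lam2 = np.sort(np.linalg.eigvalsh(nx.laplacian_matrix(g).todense()))[1]
    print(name, "|E| =", g.number_of_edges(), "lambda_2 =", lam2)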
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import hankel2
radiointel = np.load('radiointel.npy')
plt.subplot(1,2,1)
plt.imshow( np.abs(radiointel) )
plt.title('Intensity')
plt.subplot(1,2,2)
plt.imshow( np.angle(radiointel), cmap='bwr' )
plt.title('Phase')
plt.show()
Explanation: Problem 3 (30 pts)
Say hi to the drone
You received a radar-made air scan data of a terrorist hideout made from a heavy-class surveillance drone. Unfortunately, it was made with an old-fashioned radar, so the picture is convolved with the diffractive pattern. You need to deconvolve the picture to recover the building plan.
End of explanation
k0 = 50.0
N = 300
def make_eG(k0, N):
# INPUT:
# k0 #dtype = float
# N #dtype = int
# OUTPUT:
# np.array, shape = (2N-1, 2N-1), dtype = np.complex64
# eG = np.zeros((2*N - 1, 2*N - 1), dtype=np.complex64)
row = np.square(np.linspace(-N + 1, N - 1, 2*N - 1))
column = np.square(np.linspace(-N + 1, N - 1, 2*N - 1))
row_m, col_m = np.meshgrid(row, column, indexing = 'ij')
eG = -1j * hankel2(0, k0 / (N - 1) * np.sqrt(row_m + col_m)) / 4
eG[N - 1, N - 1] = 0
return np.roll(eG.astype(np.complex64), shift=N-1, axis=(0,1))
eG = make_eG(k0=k0, N=N)
plt.imshow(eG.real)
Explanation: In this problem you are asked to use an FFT-based matvec to build the convolution operator for a picture of size $N\times N$, where $N=300$, with the following kernel (2D Helmholtz scattering):
$$
T_{\overline{i_1 j_1}, \overline{i_2 j_2} } \equiv eG_{i_1-j_1,i_2-j_2} = \frac{-1j}{4} H^{(2)}0 \left( k_0 \cdot \Delta r{ \overline{i_1 j_1}, \overline{i_2 j_2} } \right), \quad i_1,j_1, i_2, j_2 = 0,\dots, N-1 $$
except when both $i_1=i_2$ and $j_1 = j_2$.
In that case set $$T_{i_1=i_2, j_1=j_2} = 0$$.
Here
$1j$ is the imaginary unit, $H^{(2)}_0(x)$ - (complex-valued) Hankel function of the second kind of the order 0. See 'scipy.special.hankel2'.
$$ \Delta r_{ \overline{i_1 j_1}, \overline{i_2 j_2} } = h \sqrt{ (i_1-i_2)^2 + (j_1-j_2)^2 } $$
$$ h = \frac{1}{N-1}$$
$$k_0 = 50.0$$
Tasks:
(5 pts) Create the complex-valued kernel $eG$ ($2N-1 \times 2N-1$)-sized matrix according with the instructions above. Note that at the point where $\Delta r=0$ value of $eG$ should be manually zet to zero. Store in the variable eG. Plot the eG.real of it with plt.imshow
(5 pts) Write function Gx that calculates matvec of $T$ by a given vector $x$. Make sure all calculations and arrays are in dtype=np.complex64. Hint: matvex with a delta function in pl
(3 pts) What is the complexity of one matvec?
(2 pts) Use scipy.sparse.linalg.LinearOperator to create an object that has attribute .dot() (this object will be further used in the iterative process). Note that .dot() input and output must be 1D vectors, so do not forget to use reshape.
(15 pts) Write a function that takes an appropriate Krylov method(s) and solves linear system $Gx=b$ to deconvolve radiointel. The result should be binary mask array (real, integer, of 0s and 1s) of the plane of the building. Make sure it converged sufficiently and you did the post-processing properly. Plot the result as an image.
Note: You can use standart fft and ifft from e.g. numpy.fft
1. Kernel (5 pts)
End of explanation
def Gx(x, eG):
# input:
# x, np.array, shape=(N**2, ), dtype = np.complex64
# eG, np.array, shape=(2N-1, 2N-1), dtype = np.complex64
# output:
# matvec, np.array, shape = (N**2, ), dtype = np.complex64
N = int(np.sqrt(x.shape[0]))
x_ready = np.pad(x.reshape((N, N)), ((0, N - 1), (0, N - 1)), 'constant')
matvec = np.fft.ifft2(np.fft.fft2(eG) * np.fft.fft2(x_ready))[0:N,0:N]
return matvec.reshape(N**2, 1).astype(np.complex64)
Explanation: 2. Matvec (5 pts)
End of explanation
from scipy.sparse import linalg
L_Gx = linalg.LinearOperator((N ** 2, N ** 2), matvec=lambda x, eG=eG: Gx(x, eG))
Explanation: 3. Complexity (3 pts)
Big-O complexity of one matvec operation is $\mathcal{O}(N^2\log{}N)$
4. LinearOperator (2 pts)
End of explanation
from scipy.sparse.linalg import gmres
maxiter = 200
def normalize(mask): #proper normalization to binary mask
mask = np.clip(mask, a_min=0, a_max=1)
mask = np.round(mask)
mask = np.asarray(mask, dtype=int)
return mask
errs=[]
def callback(err): #callback function to store the history of convergence
global errs
errs.append(err)
return
mask, _ = gmres(L_Gx, radiointel.reshape(N**2,1), maxiter=maxiter, callback = callback)
plt.figure(figsize=(7,7))
plt.imshow(normalize(mask.real).reshape((N, N)), cmap='binary')
plt.title('Restored building plan', size=16)
plt.colorbar()
plt.figure(figsize=(14,7))
plt.semilogy(errs)
plt.grid()
plt.title('Convergence', size=16)
Explanation: 5. Reconstruction (15pts)
End of explanation |
118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Red Hat Insights Core
Insights Core is a framework for collecting and processing data about systems. It allows users to write components that collect and transform sets of raw data into typed python objects, which can then be used in rules that encapsulate knowledge about them.
To accomplish this the framework uses an internal dependency engine. Components in the form of class or function definitions declare dependencies on other components with decorators, and the resulting graphs can be executed once all components you care about have been loaded.
This is an introduction to the dependency system followed by a summary of the standard components Insights Core provides.
Components
To make a component, we first have to create a component type, which is a decorator we'll use to declare it.
Step1: How do I use it?
Step2: Component Types
We can define components of different types by creating different decorators.
Step3: Component Invocation
You can customize how components of a given type get called by overriding the invoke method of your ComponentType class. For example, if you want your components to receive the broker itself instead of individual arguments, you can do the following.
Step4: Notice that broker can be used as a dictionary to get the value of components that have already executed without directly looking at the broker.instances attribute.
Exception Handling
When a component raises an exception, the exception is recorded in a dictionary whose key is the component and whose value is a list of exceptions. The traceback related to each exception is recorded in a dictionary of exceptions to tracebacks. We record exceptions in a list because some components may generate more than one value. We'll come to that later.
Step5: Missing Dependencies
A component with any missing required dependencies will not be called. Missing dependencies are recorded in the broker in a dictionary whose keys are components and whose values are tuples with two values. The first is a list of all missing required dependencies. The second is a list of all dependencies of which at least one was required.
Step6: Notice that the first elements in the dependency list after @stage are simply "a" and "b", but the next two elements are themselves lists. This means that at least one element of each list must be present. The first "any" list has [rand, "d"], and rand is available, so it resolves. However, neither "e" nor "f" are available, so the resolution fails. Our missing dependencies list includes the first two standalone elements as well as the second "any" list.
SkipComponent
Components that raise dr.SkipComponent won't have any values or exceptions recorded and will be treated as missing dependencies for components that depend on them.
Optional Dependencies
There's an "optional" keyword that takes a list of components that should be run before the current one. If they throw exceptions or don't run for some other reason, execute the current component anyway and just say they were None.
Step7: Automatic Dependencies
The definition of a component type may include requires and optional attributes. Their specifications are the same as the requires and optional portions of the component decorators. Any component decorated with a component type that has requires or optional in the class definition will automatically depend on the specified components, and any additional dependencies on the component itself will just be appended.
This functionality should almost never be used because it makes it impossible to tell that the component has implied dependencies.
Step8: Metadata
Component types and components can define metadata in their definitions. If a component's type defines metadata, that metadata is inherited by the component, although the component may override it.
Step9: Component Groups
So far we haven't said how we might group components together outside of defining different component types. But sometimes we might want to specify certain components, even of different component types, to belong together and to only be executed when explicitly asked to do so.
All of our components so far have implicitly belonged to the default group. However, component types and even individual components can be assigned to specific groups, which will run only when specified.
Step10: If a group isn't specified in the type definition or in the component decorator, the default group is assumed. Likewise, the default group is assumed when calling run if one isn't provided.
It's also possible to override the group of an individual component by using the group keyword in its decorator.
run_incremental
Since hundreds or even thousands of dependencies can be defined, it's sometimes useful to separate them into graphs that don't share any components and execute those graphs one at a time. In addition to the run function, the dr module provides a run_incremental function that does exactly that. You can give it a starting broker (or none at all), and it will yield a new broker for each distinct graph among all the dependencies.
run_all
The run_all function is similar to run_incremental since it breaks a graph up into independently executable subgraphs before running them. However, it returns a list of the brokers instead of yielding one at a time. It also has a pool keyword argument that accepts a concurrent.futures.ThreadPoolExecutor, which it will use to run the independent subgraphs in parallel. This can provide a significant performance boost in some situations.
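A rough sketch of how these two helpers might be called (what you do with each broker is up to you):
```python
# process one independent subgraph at a time
for broker in dr.run_incremental():
    ...  # inspect each broker here

# or evaluate all subgraphs, optionally in parallel via the pool keyword
brokers = dr.run_all()
```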
Inspecting Components
The dr module provides several functions for inspecting components. You can get their aliases, dependencies, dependents, groups, type, even their entire dependency trees.
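For example, a sketch using the components defined earlier in this notebook (helper names may differ slightly between versions):
```python
pprint(dr.get_dependencies(mul_things))  # what mul_things needs
pprint(dr.get_dependents(rand))          # everything that uses rand
print(dr.get_name(mul_things))           # fully qualified component name
```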
Step11: Loading Components
If you have components defined in a package and the root of that path is in sys.path, you can load the package and all its subpackages and modules by calling dr.load_components. This way you don't have to load every component module individually.
```python
# recursively load all packages and modules in path.to.package
dr.load_components("path.to.package")
# or load a single module
dr.load_components("path.to.package.module")
```
Now that you know the basics of Insights Core dependency resolution, let's move on to the rest of Core that builds on it.
Standard Component Types
The standard component types provided by Insights Core are datasource, parser, combiner, rule, condition, and incident. They're defined in insights.core.plugins.
Some have specialized interfaces and executors that adapt the dependency specification parts described above to what developers using previous versions of Insights Core have come to expect.
For more information on parser, combiner, and rule development, please see our component developer tutorials.
Datasource
A datasource used to be called a spec. Components of this type collect data and make it available to other components. Since we have several hundred predefined datasources that fall into just a handful of categories, we've streamlined the process of creating them.
Datasources are defined either with the @datasource decorator or with helper functions from insights.core.spec_factory.
The spec_factory module has a handful of functions for defining common datasource types.
- simple_file
- glob_file
- simple_command
- listdir
- foreach_execute
- foreach_collect
- first_file
- first_of
All datasources defined with these helper functions will depend on an ExecutionContext of some kind. Contexts let you activate different datasources for different environments. Most of them provide a root path for file collection and may perform some environment specific setup for commands, even modifying the command strings if needed.
For now, we'll use a HostContext. This tells datasources to collect files starting at the root of the file system and to execute commands exactly as they are defined. Other contexts are in insights.core.contexts.
All file collection datasources depend on any context that provides a path to use as root unless a particular context is specified. In other words, some datasources will activate for multiple contexts unless told otherwise.
simple_file
simple_file reads a file from the file system and makes it available as a TextFileProvider. A TextFileProvider instance contains the path to the file and its content as a list of lines.
Step12: glob_file
glob_file accepts glob patterns and evaluates at runtime to a list of TextFileProvider instances, one for each match. You can pass glob_file a single pattern or a list (or set) of patterns. It also accepts an ignore keyword, which should be a regular expression string matching paths to ignore. The glob and ignore patterns can be used together to match lots of files and then throw out the ones you don't want.
Step13: simple_command
simple_command allows you to get the results of a command that takes no arguments or for which you know all of the arguments up front.
It and other command datasources return a CommandOutputProvider instance, which has the command string, any arguments interpolated into it (more later), the return code if you requested it via the keep_rc=True keyword, and the command output as a list of lines.
simple_command also accepts a timeout keyword, which is the maximum number of seconds the system should attempt to execute the command before a CalledProcessError is raised for the component.
A default timeout for all commands can be set on the initial ExecutionContext instance with the timeout keyword argument.
If a timeout isn't specified in the ExecutionContext or on the command itself, none is used.
Step14: listdir
listdir lets you get the contents of a directory.
Step15: foreach_execute
foreach_execute allows you to use output from one component as input to a datasource command string. For example, using the output of the interfaces datasource above, we can get ethtool information about all of the ethernet devices.
The timeout description provided in the simple_command section applies here to each separate invocation.
Step16: Notice each element in the list returned by interfaces is a single string. The system interpolates each element into the ethtool command string and evaluates each result. This produces a list of objects, one for each input element, instead of a single object. If the list created by interfaces contained tuples with n elements, then our command string would have had n substitution parameters.
foreach_collect
foreach_collect works similarly to foreach_execute, but instead of running commands with interpolated arguments, it collects files at paths with interpolated arguments. Also, because it is a file collection, it doesn't have execution related keyword arguments.
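For example, a sketch that pairs it with the interfaces datasource above (the path pattern is an assumption):
```python
# one collected file per interface name produced by interfaces
iface_cfgs = foreach_collect(interfaces, "/etc/sysconfig/network-scripts/ifcfg-%s")
```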
first_file
first_file takes a list of paths and returns a TextFileProvider for the first one it finds. This is useful if you're looking for a single file that might be in different locations.
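For example, a sketch with placeholder paths:
```python
# take whichever of these files exists first
release = first_file(["/etc/redhat-release", "/etc/system-release"])
```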
first_of
first_of is a way to express that you want to use any datasource from a list of datasources you've already defined. This is helpful if the way you collect data differs in different contexts, but the output is the same.
For example, the way you collect installed rpms directly from a machine differs from how you would collect them from a docker image. Ultimately, downstream components don't care where the content originated.
Step17: What datasources does Insights Core provide?
To see a list of datasources we already collect, have a look in insights.specs.
Parsers
Parsers are the next major component type Insights Core provides. A Parser depends on a single datasource and is responsible for converting its raw content into a structured object.
Let's build a simple parser.
Step18: Notice that the parser decorator accepts only one argument, the datasource the component needs. Also notice that our parser has a sensible default constructor that accepts a datasource and passes its content into a parse_content function.
Our hostname parser is pretty simple, but it's easy to see how parsing things like rpm data or configuration files could get complicated.
Speaking of rpms, hopefully it's also easy to see that an rpm parser could depend on our installed_rpms definition in the previous section and parse the content regardless of where the content originated.
What about parser dependencies that produce lists of components?
Not only do parsers have a special decorator, they also have a special executor. If the datasource is a list, the executor will attempt to construct a parser object with each element of the list, and the value of the parser in the broker will be the list of parser objects. It's important to keep this in mind when developing components that depend on parsers.
This is also why exceptions raised by components are stored as lists by component instead of single values.
Here's a simple parser that depends on the ethtool datasource.
Step19: We provide curated parsers for all of our datasources. They're in insights.parsers.
Combiners
Combiners depend on two or more other components. They typically are used to standardize interfaces or to provide a higher-level view of some set of components.
As an example of standardizing interfaces, chkconfig and service commands can be used to retrieve similar data about service status, but the command you run to check that status depends on your operating system version. A datasource would be defined for each command along with a parser to interpret its output. However, a downstream component may just care about a service's status, not about how a particular program exposes it. A combiner can depend on both chkconfig and service parsers (like this, so only one of them is required):
Step20: Conditions and Incidents
Conditions and incidents are optional components that can be used by rules to encapsulate particular pieces of logic.
Conditions are questions with answers that can be interpreted as True or False. For example, a condition might be "Does the kdump configuration contain a 'net' target type?" or "Is the operating system Red Hat Enterprise Linux 7?"
Incidents, on the other hand, typically are specific types of warning or error messages from log type files.
Why would you use conditions or incidents instead of just writing the logic directly into the rule? Future versions of Insights may allow automated analysis of rules and their conditions and incidents. You will be able to tell which conditions, incidents, and rule firings across all rules correspond with each other and how strongly. This feature will become more powerful as conditions and incidents are written independently of explicit rules.
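As a sketch, assuming the Hostname parser from the parser section exposes the parsed name as a host attribute:
```python
from insights.core.plugins import condition

@condition(Hostname)
def is_localhost(hn):
    return hn.host == "localhost"
```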
Observers
Insights Core allows you to attach functions to component types, and they'll be called any time a component of that type is encountered. You can attach observer functions globally or to a particular broker.
Observers are called whether a component succeeds or not. They take the component and the broker right after the component is evaluated and so are able to ask the broker about values, exceptions, missing requirements, etc. | Python Code:
import sys
sys.path.insert(0, "../..")
from insights.core import dr
# Here's our component type with the clever name "component."
# Insights Core provides several types that we'll come to later.
class component(dr.ComponentType):
pass
Explanation: Red Hat Insights Core
Insights Core is a framework for collecting and processing data about systems. It allows users to write components that collect and transform sets of raw data into typed python objects, which can then be used in rules that encapsulate knowledge about them.
To accomplish this the framework uses an internal dependency engine. Components in the form of class or function definitions declare dependencies on other components with decorators, and the resulting graphs can be executed once all components you care about have been loaded.
This is an introduction to the dependency system followed by a summary of the standard components Insights Core provides.
Components
To make a component, we first have to create a component type, which is a decorator we'll use to declare it.
End of explanation
import random
# Make two components with no dependencies
@component()
def rand():
return random.random()
@component()
def three():
return 3
# Make a component that depends on the other two. Notice that we depend on two
# things, and there are two arguments to the function.
@component(rand, three)
def mul_things(x, y):
return x * y
# Now that we have a few components defined, let's run them.
from pprint import pprint
# If you call run with no arguments, all components of every type (with a few caveats
# I'll address later) are run, and their values or exceptions are collected in an
# object called a broker. The broker is like a fancy dictionary that keeps up with
# the state of an evaluation.
broker = dr.run()
pprint(broker.instances)
Explanation: How do I use it?
End of explanation
class stage(dr.ComponentType):
pass
@stage(mul_things)
def spam(m):
return int(m)
broker = dr.run()
print "All Instances"
pprint(broker.instances)
print
print "Components"
pprint(broker.get_by_type(component))
print
print "Stages"
pprint(broker.get_by_type(stage))
Explanation: Component Types
We can define components of different types by creating different decorators.
End of explanation
class thing(dr.ComponentType):
def invoke(self, broker):
return self.component(broker)
@thing(rand, three)
def stuff(broker):
r = broker[rand]
t = broker[three]
return r + t
broker = dr.run()
print broker[stuff]
Explanation: Component Invocation
You can customize how components of a given type get called by overriding the invoke method of your ComponentType class. For example, if you want your components to receive the broker itself instead of individual arguments, you can do the following.
End of explanation
@stage()
def boom():
raise Exception("Boom!")
broker = dr.run()
e = broker.exceptions[boom][0]
t = broker.tracebacks[e]
pprint(e)
print
print t
Explanation: Notice that broker can be used as a dictionary to get the value of components that have already executed without directly looking at the broker.instances attribute.
Exception Handling
When a component raises an exception, the exception is recorded in a dictionary whose key is the component and whose value is a list of exceptions. The traceback related to each exception is recorded in a dictionary of exceptions to tracebacks. We record exceptions in a list because some components may generate more than one value. We'll come to that later.
End of explanation
@stage("where's my stuff at?")
def missing_stuff(s):
return s
broker = dr.run()
print broker.missing_requirements[missing_stuff]
@stage("a", "b", [rand, "d"], ["e", "f"])
def missing_more_stuff(a, b, c, d, e, f):
return a + b + c + d + e + f
broker = dr.run()
print broker.missing_requirements[missing_more_stuff]
Explanation: Missing Dependencies
A component with any missing required dependencies will not be called. Missing dependencies are recorded in the broker in a dictionary whose keys are components and whose values are tuples with two values. The first is a list of all missing required dependencies. The second is a list of all dependencies of which at least one was required.
End of explanation
@stage(rand, optional=['test'])
def is_greater_than_ten(r, t):
return (int(r*10.0) < 5.0, t)
broker = dr.run()
print broker[is_greater_than_ten]
Explanation: Notice that the first elements in the dependency list after @stage are simply "a" and "b", but the next two elements are themselves lists. This means that at least one element of each list must be present. The first "any" list has [rand, "d"], and rand is available, so it resolves. However, neither "e" nor "f" are available, so the resolution fails. Our missing dependencies list includes the first two standalone elements as well as the second "any" list.
SkipComponent
Components that raise dr.SkipComponent won't have any values or exceptions recorded and will be treated as missing dependencies for components that depend on them.
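For instance, a component can bail out cleanly when it has nothing to contribute; here is a minimal sketch reusing the stage type and rand component defined above:
```python
@stage(rand)
def maybe_skipped(r):
    # raising SkipComponent records neither a value nor an exception;
    # dependents simply treat this component as missing
    if r < 0.5:
        raise dr.SkipComponent("nothing to report")
    return r
```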
Optional Dependencies
There's an "optional" keyword that takes a list of components that should be run before the current one. If they throw exceptions or don't run for some other reason, execute the current component anyway and just say they were None.
End of explanation
class mything(dr.ComponentType):
requires = [rand]
@mything()
def dothings(r):
return 4 * r
broker = dr.run(broker=broker)
pprint(broker[dothings])
pprint(dr.get_dependencies(dothings))
Explanation: Automatic Dependencies
The definition of a component type may include requires and optional attributes. Their specifications are the same as the requires and optional portions of the component decorators. Any component decorated with a component type that has requires or optional in the class definition will automatically depend on the specified components, and any additional dependencies on the component itself will just be appended.
This functionality should almost never be used because it makes it impossible to tell that the component has implied dependencies.
End of explanation
class anotherthing(dr.ComponentType):
metadata={"a": 3}
@anotherthing(metadata={"b": 4, "c": 5})
def four():
return 4
dr.get_metadata(four)
Explanation: Metadata
Component types and components can define metadata in their definitions. If a component's type defines metadata, that metadata is inherited by the component, although the component may override it.
End of explanation
class grouped(dr.ComponentType):
group = "grouped"
@grouped()
def five():
return 5
b = dr.Broker()
dr.run(dr.COMPONENTS["grouped"], broker=b)
pprint(b.instances)
Explanation: Component Groups
So far we haven't said how we might group components together outside of defining different component types. But sometimes we might want to specify certain components, even of different component types, to belong together and to only be executed when explicitly asked to do so.
All of our components so far have implicitly belonged to the default group. However, component types and even individual components can be assigned to specific groups, which will run only when specified.
End of explanation
from insights.core import dr
@stage()
def six():
return 6
@stage(six)
def times_two(x):
return x * 2
# If the component's full name was foo.bar.baz.six, this would print "baz"
print "\nModule (times_two):", dr.get_base_module_name(times_two)
print "\nComponent Type (times_two):", dr.get_component_type(times_two)
print "\nDependencies (times_two): "
pprint(dr.get_dependencies(times_two))
print "\nDependency Graph (stuff): "
pprint(dr.get_dependency_graph(stuff))
print "\nDependents (rand): "
pprint(dr.get_dependents(rand))
print "\nGroup (six):", dr.get_group(six)
print "\nMetadata (four): ",
pprint(dr.get_metadata(four))
# prints the full module name of the component
print "\nModule Name (times_two):", dr.get_module_name(times_two)
# prints the module name joined to the component name by a "."
print "\nName (times_two):", dr.get_name(times_two)
print "\nSimple Name (times_two):", dr.get_simple_name(times_two)
Explanation: If a group isn't specified in the type definition or in the component decorator, the default group is assumed. Likewise, the default group is assumed when calling run if one isn't provided.
It's also possible to override the group of an individual component by using the group keyword in its decorator.
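A minimal sketch of that per-component override, assuming the decorator accepts the same group keyword described above:
```python
@stage(group="grouped")
def seven():
    return 7

b = dr.Broker()
dr.run(dr.COMPONENTS["grouped"], broker=b)  # evaluates only components in the "grouped" group
```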
run_incremental
Since hundreds or even thousands of dependencies can be defined, it's sometimes useful to separate them into graphs that don't share any components and execute those graphs one at a time. In addition to the run function, the dr module provides a run_incremental function that does exactly that. You can give it a starting broker (or none at all), and it will yield a new broker for each distinct graph among all the dependencies.
run_all
The run_all function is similar to run_incremental since it breaks a graph up into independently executable subgraphs before running them. However, it returns a list of the brokers instead of yielding one at a time. It also has a pool keyword argument that accepts a concurrent.futures.ThreadPoolExecutor, which it will use to run the independent subgraphs in parallel. This can provide a significant performance boost in some situations.
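A short sketch of both calls, based only on the behaviour described above (the exact keyword usage is an assumption):
```python
from concurrent.futures import ThreadPoolExecutor

# run_incremental yields one broker per independent dependency graph
for b in dr.run_incremental():
    pprint(b.instances)

# run_all returns the list of brokers and can evaluate the independent
# subgraphs in parallel when handed a ThreadPoolExecutor
with ThreadPoolExecutor(max_workers=4) as pool:
    brokers = dr.run_all(pool=pool)
```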
Inspecting Components
The dr module provides several functions for inspecting components. You can get their aliases, dependencies, dependents, groups, type, even their entire dependency trees.
End of explanation
from insights.core import dr
from insights.core.context import HostContext
from insights.core.spec_factory import (simple_file,
glob_file,
simple_command,
listdir,
foreach_execute,
foreach_collect,
first_file,
first_of)
release = simple_file("/etc/redhat-release")
hostname = simple_file("/etc/hostname")
ctx = HostContext()
broker = dr.Broker()
broker[HostContext] = ctx
broker = dr.run(broker=broker)
print broker[release].path, broker[release].content
print broker[hostname].path, broker[hostname].content
Explanation: Loading Components
If you have components defined in a package and the root of that path is in sys.path, you can load the package and all its subpackages and modules by calling dr.load_components. This way you don't have to load every component module individually.
```python
# recursively load all packages and modules in path.to.package
dr.load_components("path.to.package")
# or load a single module
dr.load_components("path.to.package.module")
```
Now that you know the basics of Insights Core dependency resolution, let's move on to the rest of Core that builds on it.
Standard Component Types
The standard component types provided by Insights Core are datasource, parser, combiner, rule, condition, and incident. They're defined in insights.core.plugins.
Some have specialized interfaces and executors that adapt the dependency specification parts described above to what developers using previous versions of Insights Core have come to expect.
For more information on parser, combiner, and rule development, please see our component developer tutorials.
Datasource
A datasource used to be called a spec. Components of this type collect data and make it available to other components. Since we have several hundred predefined datasources that fall into just a handful of categories, we've streamlined the process of creating them.
Datasources are defined either with the @datasource decorator or with helper functions from insights.core.spec_factory.
The spec_factory module has a handful of functions for defining common datasource types.
- simple_file
- glob_file
- simple_command
- listdir
- foreach_execute
- foreach_collect
- first_file
- first_of
All datasources defined with the helper functions will depend on an ExecutionContext of some kind. Contexts let you activate different datasources for different environments. Most of them provide a root path for file collection and may perform some environment-specific setup for commands, even modifying the command strings if needed.
For now, we'll use a HostContext. This tells datasources to collect files starting at the root of the file system and to execute commands exactly as they are defined. Other contexts are in insights.core.contexts.
All file collection datasources depend on any context that provides a path to use as root unless a particular context is specified. In other words, some datasources will activate for multiple contexts unless told otherwise.
simple_file
simple_file reads a file from the file system and makes it available as a TextFileProvider. A TextFileProvider instance contains the path to the file and its content as a list of lines.
End of explanation
host_stuff = glob_file("/etc/host*", ignore="(allow|deny)")
broker = dr.run(broker=broker)
print broker[host_stuff]
Explanation: glob_file
glob_file accepts glob patterns and evaluates at runtime to a list of TextFileProvider instances, one for each match. You can pass glob_file a single pattern or a list (or set) of patterns. It also accepts an ignore keyword, which should be a regular expression string matching paths to ignore. The glob and ignore patterns can be used together to match lots of files and then throw out the ones you don't want.
End of explanation
uptime = simple_command("/usr/bin/uptime")
broker = dr.run(broker=broker)
print (broker[uptime].cmd, broker[uptime].args, broker[uptime].rc, broker[uptime].content)
Explanation: simple_command
simple_command allows you to get the results of a command that takes no arguments or for which you know all of the arguments up front.
It and other command datasources return a CommandOutputProvider instance, which has the command string, any arguments interpolated into it (more later), the return code if you requested it via the keep_rc=True keyword, and the command output as a list of lines.
simple_command also accepts a timeout keyword, which is the maximum number of seconds the system should attempt to execute the command before a CalledProcessError is raised for the component.
A default timeout for all commands can be set on the initial ExecutionContext instance with the timeout keyword argument.
If a timeout isn't specified in the ExecutionContext or on the command itself, none is used.
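A hypothetical variant of the uptime datasource above showing those keywords:
```python
# keep_rc=True exposes the return code; timeout caps execution at 10 seconds
uptime_with_rc = simple_command("/usr/bin/uptime", keep_rc=True, timeout=10)
```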
End of explanation
interfaces = listdir("/sys/class/net")
broker = dr.run(broker=broker)
pprint(broker[interfaces])
Explanation: listdir
listdir lets you get the contents of a directory.
End of explanation
ethtool = foreach_execute(interfaces, "ethtool %s")
broker = dr.run(broker=broker)
pprint(broker[ethtool])
Explanation: foreach_execute
foreach_execute allows you to use output from one component as input to a datasource command string. For example, using the output of the interfaces datasource above, we can get ethtool information about all of the ethernet devices.
The timeout description provided in the simple_command section applies here to each seperate invocation.
End of explanation
from insights.specs.default import format_rpm
from insights.core.context import DockerImageContext
from insights.core.plugins import datasource
from insights.core.spec_factory import CommandOutputProvider
rpm_format = format_rpm()
cmd = "/usr/bin/rpm -qa --qf '%s'" % rpm_format
host_rpms = simple_command(cmd, context=HostContext)
@datasource(DockerImageContext)
def docker_installed_rpms(ctx):
root = ctx.root
cmd = "/usr/bin/rpm -qa --root %s --qf '%s'" % (root, rpm_format)
result = ctx.shell_out(cmd)
return CommandOutputProvider(cmd, ctx, content=result)
installed_rpms = first_of([host_rpms, docker_installed_rpms])
broker = dr.run(broker=broker)
pprint(broker[installed_rpms])
Explanation: Notice each element in the list returned by interfaces is a single string. The system interpolates each element into the ethtool command string and evaluates each result. This produces a list of objects, one for each input element, instead of a single object. If the list created by interfaces contained tuples with n elements, then our command string would have had n substitution parameters.
foreach_collect
foreach_collect works similarly to foreach_execute, but instead of running commands with interpolated arguments, it collects files at paths with interpolated arguments. Also, because it is a file collection, it does not have execution-related keyword arguments.
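A hypothetical sketch, assuming the same %s interpolation style as foreach_execute and reusing the interfaces datasource from above:
```python
# collects /sys/class/net/<name>/address for every interface name
mac_addresses = foreach_collect(interfaces, "/sys/class/net/%s/address")
```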
first_file
first_file takes a list of paths and returns a TextFileProvider for the first one it finds. This is useful if you're looking for a single file that might be in different locations.
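For example (the paths here are purely illustrative):
```python
# returns a TextFileProvider for the first of these paths that exists
grub_conf = first_file(["/boot/grub2/grub.cfg", "/boot/grub/grub.conf"])
```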
first_of
first_of is a way to express that you want to use any datasource from a list of datasources you've already defined. This is helpful if the way you collect data differs in different contexts, but the output is the same.
For example, the way you collect installed rpms directly from a machine differs from how you would collect them from a docker image. Ultimately, downstream components don't care: they just want rpm data.
You could do the following. Notice that host_rpms and docker_installed_rpms implement different ways of getting rpm data that depend on different contexts, but the final installed_rpms datasource just references whichever one ran.
End of explanation
from insights.core import Parser
from insights.core.plugins import parser
@parser(hostname)
class HostnameParser(Parser):
def parse_content(self, content):
self.host, _, self.domain = content[0].partition(".")
broker = dr.run(broker=broker)
print "Host:", broker[HostnameParser].host
Explanation: What datasources does Insights Core provide?
To see a list of datasources we already collect, have a look in insights.specs.
Parsers
Parsers are the next major component type Insights Core provides. A Parser depends on a single datasource and is responsible for converting its raw content into a structured object.
Let's build a simple parser.
End of explanation
@parser(ethtool)
class Ethtool(Parser):
def parse_content(self, content):
self.link_detected = None
self.device = None
for line in content:
if "Settings for" in line:
self.device = line.split(" ")[-1].strip(":")
if "Link detected" in line:
self.link_detected = line.split(":")[-1].strip()
broker = dr.run(broker=broker)
for eth in broker[Ethtool]:
print "Device:", eth.device
print "Link? :", eth.link_detected, "\n"
Explanation: Notice that the parser decorator accepts only one argument, the datasource the component needs. Also notice that our parser has a sensible default constructor that accepts a datasource and passes its content into a parse_content function.
Our hostname parser is pretty simple, but it's easy to see how parsing things like rpm data or configuration files could get complicated.
Speaking of rpms, hopefully it's also easy to see that an rpm parser could depend on our installed_rpms definition in the previous section and parse the content regardless of where the content originated.
What about parser dependencies that produce lists of components?
Not only do parsers have a special decorator, they also have a special executor. If the datasource is a list, the executor will attempt to construct a parser object with each element of the list, and the value of the parser in the broker will be the list of parser objects. It's important to keep this in mind when developing components that depend on parsers.
This is also why exceptions raised by components are stored as lists by component instead of single values.
Here's a simple parser that depends on the ethtool datasource.
End of explanation
from insights.core.plugins import rule, make_fail, make_pass
ERROR_KEY = "IS_LOCALHOST"
@rule(HostnameParser)
def report(hn):
return make_pass(ERROR_KEY) if "localhost" in hn.host else make_fail(ERROR_KEY)
brok = dr.Broker()
brok[HostContext] = HostContext()
brok = dr.run(broker=brok)
pprint(brok.get(report))
Explanation: We provide curated parsers for all of our datasources. They're in insights.parsers.
Combiners
Combiners depend on two or more other components. They typically are used to standardize interfaces or to provide a higher-level view of some set of components.
As an example of standardizing interfaces, chkconfig and service commands can be used to retrieve similar data about service status, but the command you run to check that status depends on your operating system version. A datasource would be defined for each command along with a parser to interpret its output. However, a downstream component may just care about a service's status, not about how a particular program exposes it. A combiner can depend on both chkconfig and service parsers (like this, so only one of them is required: @combiner([[chkconfig, service]])) and provide a unified interface to the data.
As an example of a higher level view of several related components, imagine a combiner that depends on various ethtool and other network information gathering parsers. It can compile all of that information behind one view, exposing a range of information about devices, interfaces, iptables, etc. that might otherwise be scattered across a system.
We provide a few common combiners. They're in insights.combiners.
Here's an example combiner that tries a few different ways to determine the Red Hat release information. Notice that its dependency declarations and interface are just like we've discussed before. If this was a class, the __init__ function would be declared like def __init__(self, rh_release, un).
```python
from collections import namedtuple
from insights.core.plugins import combiner
from insights.parsers.redhat_release import RedhatRelease as rht_release
from insights.parsers.uname import Uname
Release = namedtuple("Release", field_names=["major", "minor"])
@combiner([rht_release, Uname])
def redhat_release(rh_release, un):
if un and un.release_tuple[0] != -1:
return Release(*un.release_tuple)
if rh_release:
return Release(rh_release.major, rh_release.minor)
raise Exception("Unabled to determine release.")
```
Rules
Rules depend on parsers and/or combiners and encapsulate particular policies about their state. For example, a rule might detect whether a defective rpm is installed. It might also inspect the lsof parser to determine if a process is using a file from that defective rpm. It could also check network information to see if the process is a server and whether it's bound to an internal or external IP address. Rules can check for anything you can surface in a parser or a combiner.
Rules use the make_fail, make_pass, or make_info helpers to create their return values. They take one required parameter, which is a key identifying the particular state the rule wants to highlight, and any number of required parameters that provide context for that state.
End of explanation
def observer(c, broker):
if c not in broker:
return
value = broker[c]
pprint(value)
broker.add_observer(observer, component_type=parser)
broker = dr.run(broker=broker)
Explanation: Conditions and Incidents
Conditions and incidents are optional components that can be used by rules to encapsulate particular pieces of logic.
Conditions are questions with answers that can be interpreted as True or False. For example, a condition might be "Does the kdump configuration contain a 'net' target type?" or "Is the operating system Red Hat Enterprise Linux 7?"
Incidents, on the other hand, typically are specific types of warning or error messages from log type files.
Why would you use conditions or incidents instead of just writing the logic directly into the rule? Future versions of Insights may allow automated analysis of rules and their conditions and incidents. You will be able to tell which conditions, incidents, and rule firings across all rules correspond with each other and how strongly. This feature will become more powerful as conditions and incidents are written independently of explicit rules.
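Declaring them looks just like the other component types. Here is a minimal, hypothetical sketch using the condition decorator from insights.core.plugins together with the HostnameParser defined earlier (an @incident is declared the same way, typically over a log parser):
```python
from insights.core.plugins import condition

@condition(HostnameParser)
def is_localhost(hn):
    # a question whose answer can be read as True or False
    return "localhost" in hn.host

@rule(is_localhost)
def localhost_rule(lh):
    if lh:
        return make_pass("IS_LOCALHOST")
```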
Observers
Insights Core allows you to attach functions to component types, and they'll be called any time a component of that type is encountered. You can attach observer functions globally or to a particular broker.
Observers are called whether a component succeeds or not. They take the component and the broker right after the component is evaluated and so are able to ask the broker about values, exceptions, missing requirements, etc.
End of explanation |
119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook was created by Mykola Veremchuk (mykola.veremchuk@xfel.eu), Svitozar Serkez. Source and license info is on GitHub. June 2019.
PFS tutorial N4. Converting synchrotron radiation results from Screen object to RadiationField
ocelot.optics.wave.RadiationField objects can be used for analysis as well as propagation of Synchrotron radiation module output
Contents
2D synchrotron radiation field
Generating 2D Screen
Converting 2D Screen to 2D RadiationField
Plotting RadiationField
3D synchrotron radiation field
Generating 3D Screen
Converting 3D Screen to 3D RadiationField
Plotting RadiationField
Necessary imports
Step1: <a id='2d'></a>
2D synchrotron radiation field
<a id='gen_screen_2d'></a>
Generating 2D Screen
See 9_synchrotron_radiation.ipynb tutorial first,
reproducing the 2d visualization
Step2: <a id='gen_dfl_2d'></a>
Converting 2D Screen to 2D RadiationField
to convert SR from Screen to RadiationField there is a function
Step3: <a id='plot_dfl_2d'></a>
Plotting RadiationField
2D (monochromatic) RadiationField generated from Screen
Step4: Let's study the radiation back-propagated to the middle of the undulator
Step5: <a id='gen_screen_3d'></a>
Generating and converting 3D Screen
One can calculate a series of 2D radiation field distributions at different photon energies.
Step6: In this way we obtain a 3D RadiationField distribution in the space-frequency domain
Note the spatial dependence on the photon energy. The slice values of x=0 and y=0 are provided as well as on-axis spectrum
Step7: Transverse projections and integrated spectrum
Step8: Plotting in space-time domain yields pulse duration that is approximately the radiation wavelength (fundamental harmonic) times number of undulator periods ~ 0.08um.
Step9: Radiation distribution in the middle of the undulator
Step10: Because of the rapidly-oscillating phase, plotting in the far zone is not possible yet; this is to be solved in the future
import numpy as np
import logging
from ocelot import *
from ocelot.rad import *
from ocelot.optics.wave import dfl_waistscan, screen2dfl, RadiationField
from ocelot.gui.dfl_plot import plot_dfl, plot_dfl_waistscan
from ocelot import ocelog
ocelog.setLevel(logging.ERROR) #suppress logger output
# Activate interactive matplolib in notebook
import matplotlib
%matplotlib inline
# Setup figure white background
matplotlib.rcParams["figure.facecolor"] = (1,1,1,1)
# Setup figure size
matplotlib.rcParams['figure.figsize'] = [10, 10]
Explanation: This notebook was created by Mykola Veremchuk (mykola.veremchuk@xfel.eu), Svitozar Serkez. Source and license info is on GitHub. June 2019.
PFS tutorial N4. Converting synchrotron radiation results from Screen object to RadiationField
ocelot.optics.wave.RadiationField objects can be used for analysis as well as propagation of Synchrotron radiation module output
Contents
2D synchrotron radiation field
Generating 2D Screen
Converting 2D Screen to 2D RadiationField
Plotting RadiationField
3D synchrotron radiation field
Generating 3D Screen
Converting 3D Screen to 3D RadiationField
Plotting RadiationField
Necessary imports
End of explanation
# generating 2D synchrotron radiation (it will take about 1-3 minutes)
und = Undulator(Kx=0.43, nperiods=500, lperiod=0.007, eid="und")
lat = MagneticLattice(und)
beam = Beam()
beam.E = 2.5 # beam energy in [GeV]
beam.I = 0.1 # beam current in [A]
screen_2d = Screen()
screen_2d.z = 100.0 # distance from the beginning of the lattice to the screen
screen_2d.size_x = 0.002 # half of screen size in [m] in horizontal plane
screen_2d.size_y = 0.002 # half of screen size in [m] in vertical plane
screen_2d.nx = 51 # number of points in horizontal plane
screen_2d.ny = 51 # number of points in vertical plane
screen_2d.start_energy = 7761.2 # [eV], starting photon energy
screen_2d.end_energy = 7761.2 # [eV], ending photon energy
screen_2d.num_energy = 1 # number of energy points[eV]
screen_2d = calculate_radiation(lat, screen_2d, beam)
Explanation: <a id='2d'></a>
2D synchrotron radiation field
<a id='gen_screen_2d'></a>
Generating 2D Screen
See 9_synchrotron_radiation.ipynb tutorial first,
reproducing the 2d visualization
End of explanation
dfl_2d = screen2dfl(screen_2d, polarization='x')
Explanation: <a id='gen_dfl_2d'></a>
Converting 2D Screen to 2D RadiationField
to convert SR from Screen to RadiationField there is a function:
dfl = screen2dfl(screen, polarization='x')
* screen: Screen object, electric field of which will be used to generate RadiationField
* polarization: polarization for conversion to RadiationField ($E_x$ or $E_y$)
see 11_radiation_field.ipynb
End of explanation
plot_dfl(dfl_2d,
fig_name='dfl_2d generated from screen_2d',
column_3d=1)
Explanation: <a id='plot_dfl_2d'></a>
Plotting RadiationField
2D (monochromatic) RadiationField generated from Screen
End of explanation
plot_dfl(dfl_2d.prop_m(-100-3.5/2, m=0.02, return_result=1),
fig_name='dfl_2d at waist position')
Explanation: Let's study the radiation back-propagated to the middle of the undulator
End of explanation
# generating 3D synchrotron radiation (it will take up to 5 minutes as 50*50*25 = 62.5k datapoints are calculated)
screen_3d = Screen()
screen_3d.z = 100.0 # distance from the beginning of the lattice to the screen
screen_3d.size_x = 0.002 # half of screen size in [m] in horizontal plane
screen_3d.size_y = 0.002 # half of screen size in [m] in vertical plane
screen_3d.ny = 50
screen_3d.nx = 50
screen_3d.start_energy = 7720 # [eV], starting photon energy
screen_3d.end_energy = 7790 # [eV], ending photon energy
screen_3d.num_energy = 25 # number of energy points[eV]
screen_3d = calculate_radiation(lat, screen_3d, beam)
dfl_3d = screen2dfl(screen_3d, polarization='x')
Explanation: <a id='gen_screen_3d'></a>
Generating and converting 3D Screen
One can calculate a series of 2D radiation field distributions at different photon energies.
End of explanation
plot_dfl(dfl_3d,
domains='sf',
fig_name='dfl_3d in space-frequency domain',
slice_xy=True) # bool type variable, if True, slices will be plotted; if False, projections will be plotted
Explanation: In this way we obtain a 3D RadiationField distribution in the space-frequency domain
Note the spatial dependence on the photon energy. The slice values of x=0 and y=0 are provided as well as on-axis spectrum
End of explanation
plot_dfl(dfl_3d,
domains='sf',
fig_name='dfl_3d in space-frequency domain',
slice_xy=False)
Explanation: Transverse projections and integrated spectrum
End of explanation
plot_dfl(dfl_3d,
domains='st',
fig_name='dfl_3d in space-time domain',
slice_xy=False)
Explanation: Plotting in space-time domain yields pulse duration that is approximately the radiation wavelength (fundamental harmonic) times number of undulator periods ~ 0.08um.
End of explanation
plot_dfl(dfl_3d.prop_m(-100-3.5/2, m=0.02, fine=1, return_result=1),
domains='fs',
fig_name='dfl_3d at waist position in frequency-space domains',
slice_xy=False)
Explanation: Radiation distribution in the middle of the undulator
End of explanation
plot_dfl(dfl_3d,
domains='fk',
fig_name='dfl_3d at waist position in inverse space-frequency domains',
slice_xy=False)
Explanation: Because of the rapidly-oscillating phase, plotting in the far zone is not possible yet; this is to be solved in the future
End of explanation |
120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Statistical inference and Estimation Theory
Motivation
Much of data science and machine learning (ML) is concerned with estimation
Step2: Now let us import some Python libraries and set some settings.
Step3: Let us generate (simulate) a population of 500000. We shall use a single feature drawn from a discrete uniform distribution on the integers from a = 0 to b = 100.
Step4: Of course, there is the usual caveat
Step5: Although seaborn's distplot looks more elegant
Step6: Let's introduce an implementation of the sample mean estimator
Step7: Python lambda fans will probably prefer this variant, although there is little benefit from using a lambda here as it is not passed to a function as an argument
Step8: We'll also introduce an implementation of uncorrected sample variance
Step9: What estimates do we get if we use the entire population as our sample?
Step10: The true values for the discrete uniform random variable are given by $$\mathbb{E}[X] = \frac{a+b}{2}$$ and $$\text{Var}[X] = \frac{(b - a + 1)^2 - 1}{12}.$$
Step11: Our estimates above are pretty close. What if we now pick a smaller sample?
Step12: Not quite as good! Let us stick with this small sample size, 10, and compute the sampling distribution of our estimators by computing them for many samples of the same size.
Step13: Let's use a histogram to visualize the sampling distribution for the sample mean estimator
Step14: This looks pretty centred around the true value, so the estimator appears unbiased. What about the sampling distribution of the uncorrected sample variance estimator?
Step15: This time the distribution is to the left of the true population variance, so it appears that we are systematically underestimating it
Step16: Looks like it gets closer to the true value, although a certain bias remains visible on the plot.
Will the (corrected) sample variance estimator do better? Let's implement it...
Step17: ...and compare the uncorrected and corrected sample variance
Step18: The (corrected) sample variance is clearly unbiased. Let us confirm this by visualising its sampling distribution with a histogram
Step19: Now let us demonstrate the concept of consistency. We introduce two more estimators of the mean. The first will be consistent but biased, the second unbiased but inconsistent
Step20: Let's see how these estimators perform
Step21: Now let's see how well the square roots of the uncorrected and corrected sample variance estimate the standard deviation
# Copyright (c) Thalesians Ltd, 2018-2019. All rights reserved
# Copyright (c) Paul Alexander Bilokon, 2018-2019. All rights reserved
# Author: Paul Alexander Bilokon <paul@thalesians.com>
# Version: 1.1 (2019.01.24)
# Previous versions: 1.0 (2018.08.31)
# Email: paul@thalesians.com
# Platform: Tested on Windows 10 with Python 3.6
Explanation:
End of explanation
%matplotlib inline
Explanation: Statistical inference and Estimation Theory
Motivation
Much of data science and machine learning (ML) is concerned with estimation: what is the optimal neural net for such and such a task? This question can be rephrased: what are the optimal weights and biases (subject to some sensible criteria) for a neural net for such and such a task?
At best we end up with an estimate of these quantities (weights, biases, linear regression coefficients, etc.). Then it is our task to work out how good they are.
Thus we have to rely on statistical inference and estimation theory, which dates back to the work of Gauss and Legendre.
In this Jupyter lab we shall demonstrate how stochastic simulation can be used to verify theoretical ideas from statistical inference and estimation theory.
Objectives
To show how Python's random can be used to generate simulated data.
To introduce the use of histograms to verify statistical properties of simulated data.
To introduce various estimators for means, variances, and standard deviation.
To perform numerical experiments to study the bias, variance, and consistency of these estimators.
The following is needed to enable inlining of Matplotlib graphs in this Jupyter notebook:
End of explanation
import random as rnd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 16,10
import seaborn as sns
Explanation: Now let us import some Python libraries and set some settings.
End of explanation
population_size = 500000
a = 0; b = 100
population = [rnd.randint(a, b) for _ in range(population_size)]
Explanation: Let us generate (simulate) a population of 500000. We shall use a single feature drawn from a discrete uniform distribution on the integers from a = 0 to b = 100.
End of explanation
plt.hist(population, bins=10);
Explanation: Of course, there is the usual caveat: Garbage In, Garbage Out. Our inferences can only be as good as the (pseudo-) random variates that we generate. In this study we take these random variates as our gold standard, and we compare things against them. In practice, the random number generators are never perfect.
Most functions in Python's random module depend on random.random() under the hood, which uses a threadsafe C implementation of the Mersenne Twister algorithm as the core generator. It produces 53-bit precision floats and has a period of 2**19937-1. Mersenne Twister is a completely deterministic generator of pseudo-random numbers.
For a discussion of its quality, we refer the reader to https://stackoverflow.com/questions/12164280/is-pythons-random-randint-statistically-random
The statistical properties of the variates generated are good enough for our purposes. It is still a good idea to check that our population "looks like" it has a discrete uniform distribution, as it should. Matplotlib's histogram should be good enough for that:
End of explanation
sns.distplot(population, bins=10);
Explanation: Although seaborn's distplot looks more elegant:
End of explanation
def mean(x): return sum(x) / len(x)
Explanation: Let's introduce an implementation of the sample mean estimator:
End of explanation
mean = lambda x: sum(x) / len(x)
Explanation: Python lambda fans will probably prefer this variant, although there is little benefit from using a lambda here as it is not passed to a function as an argument:
End of explanation
def var(x, m=None):
m = m or mean(x)
return sum([(xe - m)**2 for xe in x]) / len(x)
Explanation: We'll also introduce an implementation of uncorrected sample variance:
End of explanation
population_mean = mean(population)
population_var = var(population, population_mean)
print("population mean: {:.2f}, population variance: {:.2f}".format(population_mean, population_var))
Explanation: What estimates do we get if we use the entire population as our sample?
End of explanation
true_mean = .5 * (a + b); true_mean
true_var = ((b - a + 1.)**2 - 1) / 12.; true_var
Explanation: The true values for the discrete uniform random variable are given by $$\mathbb{E}[X] = \frac{a+b}{2}$$ and $$\text{Var}[X] = \frac{(b - a + 1)^2 - 1}{12}.$$
End of explanation
sample = rnd.sample(population, 10)
sample_mean = mean(sample)
sample_var = var(sample)
print("sample mean: {:.2f}, sample variance: {:.2f}".format(sample_mean, sample_var))
Explanation: Our estimates above are pretty close. What if we now pick a smaller sample?
End of explanation
sample_means, sample_vars = [], []
for _ in range(100000):
sample = rnd.sample(population, 10)
m = mean(sample)
sample_means.append(m)
sample_vars.append(var(sample, m))
Explanation: Not quite as good! Let us stick with this small sample size, 10, and compute the sampling distribution of our estimators by computing them for many samples of the same size.
End of explanation
plt.hist(sample_means, 50)
plt.axvline(true_mean, color='red');
Explanation: Let's use a histogram to visualize the sampling distribution for the sample mean estimator:
End of explanation
plt.hist(sample_vars, 50);
plt.axvline(true_var, color='red');
Explanation: This looks pretty centred around the true value, so the estimator appears unbiased. What about the sampling distribution of the uncorrected sample variance estimator?
End of explanation
sample_sizes, sample_vars = [], []
for ss in range(1, 50):
sample_sizes.append(ss)
sample_vars.append(mean([var(rnd.sample(population, ss)) for _ in range(1000)]))
plt.plot(sample_sizes, sample_vars, 'o')
plt.axhline(true_var, color='red')
plt.plot(sample_sizes, [(n-1)/n * true_var for n in sample_sizes])
plt.xlabel('sample size')
plt.ylabel('estimate of population variance');
Explanation: This time the distribution is to the left of the true population variance, so it appears that we are systematically underestimating it: the uncorrected sample variance estimator is biased.
How does its value change as we increase the sample size?
End of explanation
def sample_var(x, m=None):
m = m or mean(x)
n = len(x)
return n/(n-1) * var(x, m)
Explanation: Looks like it gets closer to the true value, although a certain bias remains visible on the plot.
Will the (corrected) sample variance estimator do better? Let's implement it...
End of explanation
sample_sizes, sample_vars, sample_vars1 = [], [], []
for ss in range(2, 50):
sample_sizes.append(ss)
sample_vars.append(mean([var(rnd.sample(population, ss)) for _ in range(1000)]))
sample_vars1.append(mean([sample_var(rnd.sample(population, ss)) for _ in range(1000)]))
plt.plot(sample_sizes, sample_vars, 'o', label='uncorrected')
plt.plot(sample_sizes, sample_vars1, 'o', label='corrected')
plt.axhline(true_var, color='red')
plt.plot(sample_sizes, [(n-1)/n * true_var for n in sample_sizes])
plt.xlabel('sample size')
plt.ylabel('estimate of population variance')
plt.legend();
Explanation: ...and compare the uncorrected and corrected sample variance:
End of explanation
sample_means, sample_vars = [], []
for _ in range(100000):
sample = rnd.sample(population, 10)
m = mean(sample)
sample_vars.append(sample_var(sample, m))
plt.hist(sample_vars, 50)
plt.axvline(true_var, color='red');
Explanation: The (corrected) sample variance is clearly unbiased. Let us confirm this by visualising its sampling distribution with a histogram:
End of explanation
def mean1(x): return (sum(x) + 10.) / len(x)
def mean2(x): return x[0]
Explanation: Now let us demonstrate the concept of consistency. We introduce two more estimators of the mean. The first will be consistent but biased, the second unbiased but inconsistent:
End of explanation
sample_sizes, sample_means, sample_means1, sample_means2 = [], [], [], []
for ss in range(1, 50):
sample_sizes.append(ss)
sample_means.append(mean([mean(rnd.sample(population, ss)) for _ in range(1000)]))
sample_means1.append(mean([mean1(rnd.sample(population, ss)) for _ in range(1000)]))
sample_means2.append(mean([mean2(rnd.sample(population, ss)) for _ in range(1000)]))
plt.plot(sample_sizes, sample_means, 'o-', label='sample mean')
plt.plot(sample_sizes, sample_means1, 'o-', label='consistent but biased')
plt.plot(sample_sizes, sample_means2, 'o-', label='unbiased but inconsistent')
plt.axhline(true_mean, color='red')
plt.xlabel('sample size')
plt.ylabel('estimate of population mean')
plt.legend();
Explanation: Let's see how these estimators perform:
End of explanation
import math
sample_sizes, sample_sds, sample_sds1 = [], [], []
for ss in range(2, 50):
sample_sizes.append(ss)
sample_sds.append(mean([math.sqrt(var(rnd.sample(population, ss))) for _ in range(1000)]))
sample_sds1.append(mean([math.sqrt(sample_var(rnd.sample(population, ss))) for _ in range(1000)]))
plt.plot(sample_sizes, sample_sds, 'o', label='uncorrected')
plt.plot(sample_sizes, sample_sds1, 'o', label='corrected')
plt.axhline(math.sqrt(true_var), color='red')
plt.xlabel('sample size')
plt.ylabel('estimate of population standard deviation')
plt.legend();
Explanation: Now let's see how well the square roots of the uncorrected and corrected sample variance estimate the standard deviation:
End of explanation |
121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-3', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: PCMDI
Source ID: SANDBOX-3
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:36
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
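# Example with hypothetical values:
# DOC.set_author("Jane Doe", "jane.doe@example.org")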
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
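# Example, using the model name cited in the property description (replace with your own):
# DOC.set_value("PISCES 2.0")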
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
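# Example, picking one of the valid choices listed above (replace as appropriate):
# DOC.set_value("NPZD")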
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
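For ENUM properties of cardinality 1.N, a filled-in cell might look like the sketch below, assuming (as the plural "PROPERTY VALUE(S)" comment suggests) that DOC.set_value is called once per selection; the species shown are placeholders taken from the valid choices above.
DOC.set_value("Nitrogen (N)")     # hypothetical selection
DOC.set_value("Phosphorous (P)")  # hypothetical selection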
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Dissolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculating the sinking speed of particles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
122 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.algo - La distance d'édition (correction)
Correction.
Step1: Exercice 1
Step2: Exercice 2
Step3: Que se passe-t-il lorsqu'on enlève la condition or min(len(m1), len(m2)) <= 1 ?
version non récursive qui mémorise les résultats
Step4: Il apparaît qu'on perd un temps fou dans la première version à recalculer un grand nombre de fois les mêmes distances. Conserver ces résultats permet d'aller beaucoup plus vite.
Exercice 3
Step5: C'est encore plus rapide.
version non récursive
La version non récursive est plus simple à envisager dans ce cas.
Step6: différence avec l'algorithme de wikipédia
La distance de Hamming n'est pas présente dans l'algorithme décrit sur la page Wikipedia. C'est parce qu'on décompose la distance de Hamming entre un mot de 1 caractère et un mot de 2 caractères par une comparaison et une insertion (ou une suppression). | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
def dist_hamming(m1,m2):
d = 0
for a,b in zip(m1,m2):
if a != b :
d += 1
return d
dist_hamming("close", "cloue")
Explanation: 1A.algo - The edit distance (solution notebook)
Solutions.
End of explanation
def dist_hamming(m1,m2):
d = abs(len(m1)-len(m2))
for a,b in zip(m1,m2):
if a != b :
d += 1
return d
dist_hamming("close", "cloue"), dist_hamming("close", "clouet")
Explanation: Exercise 1: how to handle words of different lengths?
We can pad the shorter word with spaces, which amounts to adding the length difference to the result.
End of explanation
def distance_edition_rec(m1,m2):
if max(len(m1), len(m2)) <= 2 or min(len(m1), len(m2)) <= 1:
return dist_hamming(m1,m2)
else:
collecte = []
for i in range(1,len(m1)):
for j in range(1,len(m2)):
d1 = distance_edition_rec(m1[:i],m2[:j])
d2 = distance_edition_rec(m1[i:],m2[j:])
collecte.append(d1+d2)
return min(collecte)
distance_edition_rec("longmot", "liongmot")
distance_edition_rec("longmot", "longmoit")
Explanation: Exercise 2: implement a distance based on this equality
First, a recursive option
Since the formulation is recursive, we can try it directly, even though it is not optimal (not optimal at all).
End of explanation
def distance_edition_rec_cache(m1,m2,cache=None):
if cache is None:
cache = {}
if (m1,m2) in cache:
return cache[m1,m2]
if max(len(m1), len(m2)) <= 2 or min(len(m1), len(m2)) <= 1:
cache[m1,m2] = dist_hamming(m1,m2)
return cache[m1,m2]
else:
collecte = []
for i in range(1,len(m1)):
for j in range(1,len(m2)):
d1 = distance_edition_rec_cache(m1[:i],m2[:j], cache)
d2 = distance_edition_rec_cache(m1[i:],m2[j:], cache)
collecte.append(d1+d2)
cache[m1,m2] = min(collecte)
return cache[m1,m2]
distance_edition_rec_cache("longmot", "liongmot"), distance_edition_rec_cache("longmot", "longmoit")
%timeit distance_edition_rec("longmot", "longmoit")
%timeit distance_edition_rec_cache("longmot", "longmoit")
Explanation: What happens when the condition or min(len(m1), len(m2)) <= 1 is removed?
Recursive version that memoizes the results
End of explanation
def distance_edition_rec_cache_insecable(m1,m2,cache=None):
if cache is None:
cache = {}
if (m1,m2) in cache:
return cache[m1,m2]
if max(len(m1), len(m2)) <= 2 or min(len(m1), len(m2)) <= 1:
cache[m1,m2] = dist_hamming(m1,m2)
return cache[m1,m2]
else:
i = len(m1)
j = len(m2)
d1 = distance_edition_rec_cache_insecable(m1[:i-2],m2[:j-1], cache) + dist_hamming(m1[i-2:], m2[j-1:])
d2 = distance_edition_rec_cache_insecable(m1[:i-1],m2[:j-2], cache) + dist_hamming(m1[i-1:], m2[j-2:])
d3 = distance_edition_rec_cache_insecable(m1[:i-1],m2[:j-1], cache) + dist_hamming(m1[i-1:], m2[j-1:])
cache[m1,m2] = min(d1,d2,d3)
return cache[m1,m2]
distance_edition_rec_cache_insecable("longmot", "liongmot"), distance_edition_rec_cache_insecable("longmot", "longmoit")
%timeit distance_edition_rec_cache_insecable("longmot", "longmoit")
Explanation: Clearly, the first version wastes an enormous amount of time recomputing the same distances over and over. Keeping these results makes it much faster.
Exercise 3: implement the edit distance
Recursive version with a cache
We take the previous version and modify it so that only unbreakable blocks (the last one or two characters) are considered.
End of explanation
def distance_edition_insecable(m1,m2,cache=None):
dist = {}
dist[-2,-1] = 0
dist[-1,-2] = 0
dist[-1,-1] = 0
for i in range(0,len(m1)):
dist[i,-1] = i
dist[i,-2] = i
for j in range(0,len(m2)):
dist[-1,j] = j
dist[-2,j] = j
for i in range(0,len(m1)):
for j in range(0,len(m2)):
d1 = dist[i-2,j-1] + dist_hamming(m1[i-2:i], m2[j-1:j])
d2 = dist[i-1,j-2] + dist_hamming(m1[i-1:i], m2[j-2:j])
d3 = dist[i-1,j-1] + dist_hamming(m1[i-1:i], m2[j-1:j])
dist[i,j] = min(d1,d2,d3)
return dist[len(m1)-1, len(m2)-1]
distance_edition_insecable("longmot", "liongmot"), distance_edition_insecable("longmot", "longmoit")
%timeit distance_edition_insecable("longmot", "longmoit")
Explanation: This is even faster.
Non-recursive version
The non-recursive version is easier to set up in this case.
End of explanation
def distance_edition(m1,m2,cache=None):
dist = {}
dist[-1,-1] = 0
for i in range(0,len(m1)):
dist[i,-1] = i
for j in range(0,len(m2)):
dist[-1,j] = j
for i, c in enumerate(m1):
for j, d in enumerate(m2):
d1 = dist[i-1,j] + 1 # insertion
d2 = dist[i,j-1] + 1 # suppression
x = 0 if c == d else 1
d3 = dist[i-1,j-1] + x
dist[i,j] = min(d1,d2,d3)
return dist[len(m1)-1, len(m2)-1]
distance_edition("longmot", "liongmot"), distance_edition("longmot", "longmoit")
%timeit distance_edition_insecable("longmot", "longmoit")
Explanation: Difference with the Wikipedia algorithm
The Hamming distance does not appear in the algorithm described on the Wikipedia page. That is because the Hamming distance between a 1-character word and a 2-character word is decomposed here into one comparison plus one insertion (or one deletion).
End of explanation |
123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evolution d'indicateurs dans les communes
Step1: Jointure entre 2 fichiers
Step2: Il y a bien les colonnes "status", "mean_altitude", "superficie", "is_metropole" et "metropole_name"
Nombre de personnes qui utilisent la voiture pour aller travailler en métropole (pourcentage)
Step3: Il va falloir re-travailler la données pour pouvoir donner le pourcentage de personnes qui prenne la voiture en 2011 / 2012 ainsi qu'avoir la progression
Step5: Calculer une augmentation
Step6: Transport en commun en pourcentage
Step7: Transport vélo
Step8: Célib | Python Code:
commune_metropole = pd.read_csv('data/commune_metropole.csv', encoding='utf-8')
commune_metropole.shape
commune_metropole.head()
insee = pd.read_csv('data/insee.csv',
                    sep=";",                       # field separator of the file
                    dtype={'COM' : np.dtype(str)}, # force the COM column to be read as a string
                    encoding='utf-8')              # file encoding
insee.shape
insee.info()
insee.head()
pd.set_option('display.max_columns', 30) # Change the number of columns displayed in the notebook
insee.head()
Explanation: Evolution of indicators across French communes:
Documentation: https://github.com/anthill/open-moulinette/blob/master/insee/documentation.md
Loading our data:
End of explanation
data = insee.merge(commune_metropole, on='COM', how='left')
data.shape
data.head()
Explanation: Joining the two files:
End of explanation
# Key columns used to group by city
key = ['CODGEO',
'LIBGEO',
'COM',
'LIBCOM',
'REG',
'DEP',
'ARR',
'CV',
'ZE2010',
'UU2010',
'TRIRIS',
'REG2016',
'status_rank']
# 'is_metropole']
# Other values
features = [col for col in data.columns if col not in key]
# Names of the columns that are not in the key list
len(features)
# Group the data by metropolis name:
# Sum all of the indicators
metropole_sum = data[features][data.is_metropole == 1].groupby('metropole_name').sum().reset_index()
metropole_sum.shape
metropole_sum
voiture_colonnes = ['metropole_name' , 'C11_ACTOCC15P_VOIT', 'P11_ACTOCC15P']
voiture = metropole_sum[voiture_colonnes].copy()
voiture
Explanation: The columns "status", "mean_altitude", "superficie", "is_metropole" and "metropole_name" are indeed present.
Number of people who commute to work by car, per metropolis (as a percentage):
End of explanation
#voiture['pourcentage_car_11'] = (voiture["C11_ACTOCC15P_VOIT"] / voiture["P11_ACTOCC15P"])*100
#voiture
#voiture['pourcentage_car_12'] = (voiture["C12_ACTOCC15P_VOIT"] / voiture["P12_ACTOCC15P"])*100
#voiture
Explanation: The data needs to be reworked to obtain the percentage of people commuting by car in 2011 and 2012, as well as the change between the two years
End of explanation
def augmentation(depart, arrive):
    Percentage increase between two values:
    # ((end value - start value) / start value) x 100
return ((arrive - depart) / depart) * 100
voiture['augmentation'] = augmentation(voiture['pourcentage_car_11'], voiture['pourcentage_car_12'])
# Metropolises where commuting by car increased the least:
voiture.sort_values('augmentation')
Explanation: Computing a percentage increase:
End of explanation
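A quick numeric sanity check of the formula (purely illustrative values): going from 50 to 60 should be reported as a 20 % increase.
print(augmentation(50, 60))  # expected: 20.0, i.e. ((60 - 50) / 50) * 100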
transp_com_colonnes = ['metropole_name' , 'C11_ACTOCC15P_TCOM', 'P11_ACTOCC15P']
transp_com = metropole_sum[transp_com_colonnes].copy()
transp_com
#transp_com['pourcentage_trans_com_12'] = (transp_com["C12_ACTOCC15P_TCOM"] / transp_com["P12_ACTOCC15P"])*100
#transp_com
transp_com['pourcentage_trans_com_11'] = (transp_com["C11_ACTOCC15P_TCOM"] / transp_com["P11_ACTOCC15P"])*100
transp_com.sort_values('pourcentage_trans_com_11')
transp_com['augmentation'] = augmentation(transp_com['pourcentage_trans_com_11'], transp_com['pourcentage_trans_com_12'])
transp_com.sort_values('augmentation')
Explanation: Public transport (commuting), as a percentage:
End of explanation
transp_velo_colonnes = ['metropole_name' , 'C11_ACTOCC15P_DROU', 'P11_ACTOCC15P']
transp_velo = metropole_sum[transp_velo_colonnes].copy()
transp_velo
#transp_velo['pourcentage_trans_com_12'] = (transp_velo["C12_ACTOCC15P_TCOM"] / transp_velo["P12_ACTOCC15P"])*100
#transp_velo
transp_velo['pourcentage_trans_com_11'] = (transp_velo["C11_ACTOCC15P_DROU"] / transp_velo["P11_ACTOCC15P"])*100
transp_velo
Explanation: Commuting by bicycle / two-wheelers:
End of explanation
data.head()
bdx = data[data.LIBCOM == "Bordeaux"]
bdx.tail()
bdx.LIBGEO.unique()
bdx[bdx.LIBGEO.str.contains("Cauderan")][['P11_POP15P_CELIB', 'P11_F1529', 'P11_H1529', 'P11_POP']].sum()
bdx[bdx.LIBGEO.str.contains("Chartron")][['P11_POP15P_CELIB', 'P11_F1529', 'P11_H1529', 'P11_POP']].sum()
bdx[bdx.LIBGEO.str.contains("Capucins-Victoire")][['P11_POP15P_CELIB', 'P11_F1529', 'P11_H1529', 'P11_POP']].sum()
bdx[bdx.LIBGEO.str.contains("Chartron")][['P11_POP15P_CELIB', 'P12_POP15P_CELIB', 'P12_F1529', 'P12_H1529']].sum()
commune_metropole.head()
Explanation: Singles: by age, Bordeaux only, by neighbourhood:
End of explanation |
124 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executed
Step1: Load software and filenames definitions
Step2: Data folder
Step3: List of data files
Step4: Data load
Initial loading of the data
Step5: Laser alternation selection
At this point we have only the timestamps and the detector numbers
Step6: We need to define some parameters
Step7: We should check if everithing is OK with an alternation histogram
Step8: If the plot looks good we can apply the parameters with
Step9: Measurements infos
All the measurement data is in the d variable. We can print it
Step10: Or check the measurements duration
Step11: Compute background
Compute the background using automatic threshold
Step12: Burst search and selection
Step14: Donor Leakage fit
Half-Sample Mode
Fit peak usng the mode computed with the half-sample algorithm (Bickel 2005).
Step15: Gaussian Fit
Fit the histogram with a gaussian
Step16: KDE maximum
Step17: Leakage summary
Step18: Burst size distribution
Step19: Fret fit
Max position of the Kernel Density Estimation (KDE)
Step20: Weighted mean of $E$ of each burst
Step21: Gaussian fit (no weights)
Step22: Gaussian fit (using burst size as weights)
Step23: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE)
Step24: The Maximum likelihood fit for a Gaussian population is the mean
Step25: Computing the weighted mean and weighted standard deviation we get
Step26: Save data to file
Step27: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step28: This is just a trick to format the different variables | Python Code:
ph_sel_name = "all-ph"
data_id = "7d"
# ph_sel_name = "all-ph"
# data_id = "7d"
Explanation: Executed: Mon Mar 27 11:34:11 2017
Duration: 8 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
from fretbursts import *
init_notebook()
from IPython.display import display
Explanation: Load software and filenames definitions
End of explanation
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
Explanation: Data folder:
End of explanation
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for the POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'),
'DexDem': Ph_sel(Dex='Dem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
Explanation: List of data files:
End of explanation
d = loader.photon_hdf5(filename=files_dict[data_id])
Explanation: Data load
Initial loading of the data:
End of explanation
d.ph_times_t, d.det_t
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
Explanation: We need to define some parameters: donor and acceptor channels, the excitation alternation period, and the donor and acceptor excitation windows:
End of explanation
plot_alternation_hist(d)
Explanation: We should check that everything is OK with an alternation histogram:
End of explanation
loader.alex_apply_period(d)
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
d
Explanation: Measurement info
All the measurement data is in the d variable. We can print it:
End of explanation
d.time_max
Explanation: Or check the measurements duration:
End of explanation
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel)
d.burst_search(**bs_kws)
th1 = 30
ds = d.select_bursts(select_bursts.size, th1=30)
bursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True)
.round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4}))
bursts.head()
burst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv'
.format(sample=data_id, th=th1, **bs_kws))
burst_fname
bursts.to_csv(burst_fname)
assert d.dir_ex == 0
assert d.leakage == 0
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print ('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
Explanation: Burst search and selection
End of explanation
def hsm_mode(s):
Half-sample mode (HSM) estimator of `s`.
`s` is a sample from a continuous distribution with a single peak.
Reference:
Bickel, Fruehwirth (2005). arXiv:math/0505419
s = memoryview(np.sort(s))
i1 = 0
i2 = len(s)
while i2 - i1 > 3:
n = (i2 - i1) // 2
w = [s[n-1+i+i1] - s[i+i1] for i in range(n)]
i1 = w.index(min(w)) + i1
i2 = i1 + n
if i2 - i1 == 3:
if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]:
i2 -= 1
elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]:
i1 += 1
else:
i1 = i2 = i1 + 1
return 0.5*(s[i1] + s[i2])
E_pr_do_hsm = hsm_mode(ds_do.E[0])
print ("%s: E_peak(HSM) = %.2f%%" % (ds.ph_sel, E_pr_do_hsm*100))
Explanation: Donor Leakage fit
Half-Sample Mode
Fit the peak using the mode computed with the half-sample algorithm (Bickel 2005).
End of explanation
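As a quick, illustrative sanity check (not part of the original analysis), the estimator can be applied to a synthetic unimodal sample; the location and width below are arbitrary choices, and the returned value should be close to the true peak.
x_test = np.random.normal(loc=0.05, scale=0.02, size=1000)  # synthetic unimodal sample
hsm_mode(x_test)  # expected to be close to 0.05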
E_fitter = bext.bursts_fitter(ds_do, weights=None)
E_fitter.histogram(bins=np.arange(-0.2, 1, 0.03))
E_fitter.fit_histogram(model=mfit.factory_gaussian())
E_fitter.params
res = E_fitter.fit_res[0]
res.params.pretty_print()
E_pr_do_gauss = res.best_values['center']
E_pr_do_gauss
Explanation: Gaussian Fit
Fit the histogram with a gaussian:
End of explanation
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_fitter.calc_kde(bandwidth=bandwidth)
E_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1])
E_pr_do_kde = E_fitter.kde_max_pos[0]
E_pr_do_kde
Explanation: KDE maximum
End of explanation
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False)
plt.axvline(E_pr_do_hsm, color='m', label='HSM')
plt.axvline(E_pr_do_gauss, color='k', label='Gauss')
plt.axvline(E_pr_do_kde, color='r', label='KDE')
plt.xlim(0, 0.3)
plt.legend()
print('Gauss: %.2f%%\n KDE: %.2f%%\n HSM: %.2f%%' %
(E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100))
Explanation: Leakage summary
End of explanation
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
Explanation: Burst size distribution
End of explanation
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
Explanation: Fret fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
ds_fret.fit_E_m(weights='size')
Explanation: Weighted mean of $E$ of each burst:
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
Explanation: Gaussian fit (no weights):
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
Explanation: Gaussian fit (using burst size as weights):
End of explanation
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
Explanation: The Maximum likelihood fit for a Gaussian population is the mean:
End of explanation
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
Explanation: Computing the weighted mean and weighted standard deviation we get:
End of explanation
sample = data_id
Explanation: Save data to file
End of explanation
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\n')
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
Explanation: This is just a trick to format the different variables:
End of explanation |
125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spectral Clustering Algorithms
Notebook version
Step1: 1. Introduction
The key idea of spectral clustering algorithms is to search for groups of connected data. I.e, rather than pursuing compact clusters, spectral clustering allows for arbitrary shape clusters.
This can be illustrated with two artifitial datasets that we will use along this notebook.
1.1. Gaussian clusters
Step2: Note that we have computed two data matrices
Step3: Note, again, that we have computed both the sorted (${\bf X}_{2s}$) and the shuffled (${\bf X}_2$) versions of the dataset in the code above.
Exercise 1
Step4: Spectral clustering algorithms are focused on connectivity
Step5: 2.3. Visualization
We can visualize the affinity matrix as an image, by translating component values into pixel colors or intensities.
Step6: Despite the apparent randomness of the affinity matrix, it contains some hidden structure, that we can uncover by visualizing the affinity matrix computed with the sorted data matrix, ${\bf X}_s$.
Step7: Note that, despite their completely different appearance, both affinity matrices contain the same values, but with a different order of rows and columns.
For this dataset, the sorted affinity matrix is almost block diagonal. Note, also, that the block-wise form of this matrix depends on parameter $\gamma$.
Exercise 2
Step8: 3. Affinity matrix and data graph
Any similarity matrix defines a weighted graph in such a way that the weight of the edge linking ${\bf x}^{(i)}$ and ${\bf x}^{(j)}$ is $K_{ij}$.
If $K$ is a full matrix, the graph is fully connected (there is and edge connecting every pair of nodes). But we can get a more interesting sparse graph by setting to zero the edges with a small weights.
For instance, let us visualize the graph for the truncated affinity matrix $\overline{\bf K}$ with threshold $t$. You can also check the effect of increasing or decreasing $t$.
Step9: Note that, for this dataset, the graph connects edges from the same cluster only. Therefore, the number of diagonal blocks in $\overline{\bf K}_s$ is equal to the number of connected components in the graph.
Note, also, the graph does not depend on the sample ordering in the data matrix
Step10: Exercise 4
Step11: Exercise 5
Step12: Note that the position of 1's in eigenvectors ${\bf v}_i$ points out the samples in the $i$-th connected component. This suggest the following tentative clustering algorithm
Step13: 4.3. Non block diagonal matrices.
Another reason to modify our tentative algorithm is that, in more realistic cases, the affinity matrix may have an imperfect block diagonal structure. In such cases, the smallest eigenvalues may be nonzero and eigenvectors may be not exactly piecewise constant.
Exercise 6
Plot the eigenvector profile for the shuffled and not thresholded affinity matrix, ${\bf K}$.
Step14: Note that, despite the eigenvector components can not be used as a straighforward cluster indicator, they are strongly informative of the clustering structure.
All points in the same cluster have similar values of the corresponding eigenvector components $(v_{n0}, \ldots, v_{n,c-1})$.
Points from different clusters have different values of the corresponding eigenvector components $(v_{n0}, \ldots, v_{n,c-1})$.
Therfore we can define vectors ${\bf z}^{(n)} = (v_{n0}, \ldots, v_{n,c-1})$ and apply a centroid based algorithm (like $K$-means) to identify all points with similar eigenvector components. The corresponding samples in ${\bf X}$ become the final clusters of the spectral clustering algorithm.
One possible way to identify the cluster structure is to apply a $K$-means algorithm over the eigenvector coordinates. The steps of the spectral clustering algorithm become the following
5. A spectral clustering (graph cutting) algorithm
5.1. The steps of the spectral clustering algorithm.
Summarizing, the steps of the spectral clustering algorithm for a data matrix ${\bf X}$ are the following
Step15: Complete step 2, 3 and 4, and draw a scatter plot of the samples in ${\bf Z}$
Step16: Complete step 5
Step17: Finally, complete step 6 and show, in a scatter plot, the result of the clustering algorithm
Step18: 5.2. Scikit-learn implementation.
The <a href=http | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting defaults
import seaborn as sns; sns.set()
from sklearn.cluster import KMeans
from sklearn.datasets.samples_generator import make_blobs, make_circles
from sklearn.utils import shuffle
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering
# For the graph representation
import networkx as nx
Explanation: Spectral Clustering Algorithms
Notebook version: 1.1 (Nov 17, 2017)
Author: Jesús Cid Sueiro (jcid@tsc.uc3m.es)
Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Changes: v.1.0 - First complete version.
v.1.1 - Python 3 version
End of explanation
N = 300
nc = 4
Xs, ys = make_blobs(n_samples=N, centers=nc,
random_state=6, cluster_std=0.60, shuffle = False)
X, y = shuffle(Xs, ys, random_state=0)
plt.scatter(X[:, 0], X[:, 1], s=30);
plt.axis('equal')
plt.show()
Explanation: 1. Introduction
The key idea of spectral clustering algorithms is to search for groups of connected data, i.e., rather than pursuing compact clusters, spectral clustering allows for arbitrarily shaped clusters.
This can be illustrated with two artificial datasets that we will use throughout this notebook.
1.1. Gaussian clusters:
The first one consists of 4 compact clusters generated from a Gaussian distribution. This is the kind of dataset that is best suited to centroid-based clustering algorithms like $K$-means. If the goal of the clustering algorithm is to minimize the intra-cluster distances and find a representative prototype or centroid for each cluster, $K$-means may be a good option.
End of explanation
X2s, y2s = make_circles(n_samples=N, factor=.5, noise=.05, shuffle=False)
X2, y2 = shuffle(X2s, y2s, random_state=0)
plt.scatter(X2[:, 0], X2[:, 1], s=30)
plt.axis('equal')
plt.show()
Explanation: Note that we have computed two data matrices:
${\bf X}$, which contains the data points in an arbitrary ordering
${\bf X}_s$, where samples are ordered by clusters, according to the cluster id array, ${\bf y}$.
Note that both matrices contain the same data (rows) but in different order. The sorted matrix will be useful later for illustration purposes, but keep in mind that, in a real clustering application, vector ${\bf y}$ is unknown (learning is not supervised), and only a data matrix with an arbitrary ordering (like ${\bf X}$) will be available.
1.2. Concentric rings
The second dataset contains two concentric rings. One could expect a clustering algorithm to identify two different clusters, one per ring of points. If this is the case, $K$-means or any other algorithm focused on minimizing distances to some cluster centroids is not a good choice.
End of explanation
# <SOL>
est = KMeans(n_clusters=4)
clusters = est.fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=clusters, s=30, cmap='rainbow')
plt.axis('equal')
clusters = est.fit_predict(X2)
plt.figure()
plt.scatter(X2[:, 0], X2[:, 1], c=clusters, s=30, cmap='rainbow')
plt.axis('equal')
plt.show()
# </SOL>
Explanation: Note, again, that we have computed both the sorted (${\bf X}_{2s}$) and the shuffled (${\bf X}_2$) versions of the dataset in the code above.
Exercise 1:
Using the code of the previous notebook, run the $K$-means algorithm with 4 centroids for the two datasets. In the light of your results, why do you think $K$-means does not work well for the second dataset?
End of explanation
gamma = 0.5
K = rbf_kernel(X, X, gamma=gamma)
Explanation: Spectral clustering algorithms are focused on connectivity: clusters are determined by maximizing some measure of intra-cluster connectivity while minimizing some measure of inter-cluster connectivity.
2. The affinity matrix
2.1. Similarity function
To implement a spectral clustering algorithm we must specify a similarity measure between data points. In this session, we will use the RBF kernel, which computes the similarity between ${\bf x}$ and ${\bf y}$ as:
$$\kappa({\bf x},{\bf y}) = \exp(-\gamma \|{\bf x}-{\bf y}\|^2)$$
Other similarity functions can be used, like the kernel functions implemented in Scikit-learn (see the <a href=http://scikit-learn.org/stable/modules/metrics.html> metrics </a> module).
2.2. Affinity matrix
For a dataset ${\cal S} = \{{\bf x}^{(0)},\ldots,{\bf x}^{(N-1)}\}$, the $N\times N$ affinity matrix ${\bf K}$ contains the similarity measure between each pair of samples. Thus, its components are
$$K_{ij} = \kappa\left({\bf x}^{(i)}, {\bf x}^{(j)}\right)$$
The following fragment of code computes the similarity between every pair of points in the dataset.
End of explanation
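As a small cross-check of the formula above (a sketch reusing the X and gamma already defined; nothing beyond NumPy is assumed), one entry of ${\bf K}$ can be recomputed by hand and compared with the output of rbf_kernel:
k01_manual = np.exp(-gamma * np.sum((X[0] - X[1])**2))  # kappa(x^(0), x^(1)) computed directly
print(k01_manual, K[0, 1])  # both values should agree up to rounding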
plt.imshow(K, cmap='hot')
plt.colorbar()
plt.title('RBF Affinity Matrix for gamma = ' + str(gamma))
plt.grid('off')
plt.show()
Explanation: 2.3. Visualization
We can visualize the affinity matrix as an image, by translating component values into pixel colors or intensities.
End of explanation
Ks = rbf_kernel(Xs, Xs, gamma=gamma)
plt.imshow(Ks, cmap='hot')
plt.colorbar()
plt.title('RBF Affinity Matrix for gamma = ' + str(gamma))
plt.grid('off')
plt.show()
Explanation: Despite the apparent randomness of the affinity matrix, it contains some hidden structure, that we can uncover by visualizing the affinity matrix computed with the sorted data matrix, ${\bf X}_s$.
End of explanation
t = 0.001
# Kt = <FILL IN> # Truncated affinity matrix
Kt = K*(K>t) # Truncated affinity matrix
# Kst = <FILL IN> # Truncated and sorted affinity matrix
Kst = Ks*(Ks>t) # Truncated and sorted affinity matrix
# </SOL>
Explanation: Note that, despite their completely different appearance, both affinity matrices contain the same values, but with a different order of rows and columns.
For this dataset, the sorted affinity matrix is almost block diagonal. Note, also, that the block-wise form of this matrix depends on parameter $\gamma$.
Exercise 2:
Modify the selection of $\gamma$, and check the effect of this in the appearance of the sorted similarity matrix. Write down the values for which you consider that the structure of the matrix better resembles the number of clusters in the datasets.
Outside the diagonal blocks, similarities are close to zero. We can enforce a block diagonal structure by setting the small similarity values to zero.
For instance, by thresholding ${\bf K}_s$ with threshold $t$, we get the truncated (and sorted) affinity matrix
$$
\overline{K}_{s,ij} = K_{s,ij} \cdot \text{u}(K_{s,ij} - t)
$$
(where $\text{u}()$ is the step function), which is block diagonal.
Exercise 3:
Compute the truncated and sorted affinity matrix with $t=0.001$
End of explanation
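To see the effect of the truncation (an illustrative check only), we can look at the fraction of entries that survive the threshold:
print('Fraction of non-zero entries in Kt: ', np.mean(Kt > 0))
print('Fraction of non-zero entries in Kst:', np.mean(Kst > 0))
# Both fractions are equal: truncation does not depend on the sample ordering.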
G = nx.from_numpy_matrix(Kt)
graphplot = nx.draw(G, X, node_size=40, width=0.5,)
plt.axis('equal')
plt.show()
Explanation: 3. Affinity matrix and data graph
Any similarity matrix defines a weighted graph in such a way that the weight of the edge linking ${\bf x}^{(i)}$ and ${\bf x}^{(j)}$ is $K_{ij}$.
If $K$ is a full matrix, the graph is fully connected (there is an edge connecting every pair of nodes). But we can get a more interesting sparse graph by setting to zero the edges with small weights.
For instance, let us visualize the graph for the truncated affinity matrix $\overline{\bf K}$ with threshold $t$. You can also check the effect of increasing or decreasing $t$.
End of explanation
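Since the truncated graph is available as a networkx object, the number of connected components can also be obtained directly (a quick sketch; it should match the number of diagonal blocks discussed below):
print('Connected components in the truncated graph:', nx.number_connected_components(G))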
Dst = np.diag(np.sum(Kst, axis=1))
Lst = Dst - Kst
# Next, we compute the eigenvalues of the matrix
w = np.linalg.eigvalsh(Lst)
plt.figure()
plt.plot(w, marker='.');
plt.title('Eigenvalues of the matrix')
plt.show()
Explanation: Note that, for this dataset, the graph only connects nodes from the same cluster. Therefore, the number of diagonal blocks in $\overline{\bf K}_s$ is equal to the number of connected components in the graph.
Note, also, the graph does not depend on the sample ordering in the data matrix: the graphs for any matrix ${\bf K}$ and its sorted version ${\bf K}_s$ are the same.
4. The Laplacian matrix
The <a href = https://en.wikipedia.org/wiki/Laplacian_matrix>Laplacian matrix</a> of a given affinity matrix ${\bf K}$ is given by
$${\bf L} = {\bf D} - {\bf K}$$
where ${\bf D}$ is the diagonal degree matrix given by
$$D_{ii}=\sum_{j=1}^{n} K_{ij}$$
4.1. Properties of the Laplacian matrix
The Laplacian matrix of any symmetric matrix ${\bf K}$ has several interesting properties:
P1.
${\bf L}$ is symmetric and positive semidefinite. Therefore, all its eigenvalues $\lambda_0,\ldots, \lambda_{N-1}$ are non-negative. Recall that each eigenvector ${\bf v}$ with eigenvalue $\lambda$ satisfies
$${\bf L} \cdot {\bf v} = \lambda {\bf v}$$
P2.
${\bf L}$ has at least one eigenvector with zero eigenvalue: indeed, for ${\bf v} = {\bf 1}_N = (1, 1, \ldots, 1)^\intercal$ we get
$${\bf L} \cdot {\bf 1}_N = {\bf 0}_N$$
where ${\bf 0}_N$ is the $N$ dimensional all-zero vector.
P3.
If ${\bf K}$ is block diagonal, its Laplacian is block diagonal.
P4.
If ${\bf L}$ is block diagonal with blocks ${\bf L}_0, {\bf L}_1, \ldots, {\bf L}_{c-1}$, then it has at least $c$ orthogonal eigenvectors with zero eigenvalue: indeed, each block ${\bf L}_i$ is the Laplacian matrix of the graph containing the samples in the $i$-th connected component, therefore, according to property P2,
$${\bf L}_i \cdot {\bf 1}_{N_i} = {\bf 0}_{N_i}$$
where $N_i$ is the number of samples in the $i$-th connected component.
Therefore, if
$${\bf v}_i = \left(\begin{array}{l}
{\bf 0}_{N_0} \\
\vdots \\
{\bf 0}_{N_{i-1}} \\
{\bf 1}_{N_i} \\
{\bf 0}_{N_{i+1}} \\
\vdots \\
{\bf 0}_{N_{c-1}}
\end{array}
\right)$$
then
$${\bf L} \cdot {\bf v}_{i} = {\bf 0}_{N}$$
We can compute the Laplacian matrix for the given dataset and visualize the eigenvalues:
End of explanation
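Property P4 can be checked numerically with the eigenvalues computed above: the number of (numerically) zero eigenvalues of ${\bf L}_{st}$ should equal the number of connected components. The 1e-6 tolerance below is an arbitrary choice.
print('Number of near-zero eigenvalues:', np.sum(np.abs(w) < 1e-6))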
# <SOL>
print(np.linalg.norm(Lst.dot(np.ones((N,1)))))
for i in range(nc):
vi = (ys==i)
print(np.linalg.norm(Lst.dot(vi)))
# </SOL>
Explanation: Exercise 4:
Verify that ${\bf 1}_N$ is an eigenvector with zero eigenvalue. To do so, compute ${\bf L}_{st} \cdot {\bf 1}_N$ and verify that its <a href= https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html>euclidean norm</a> is close to zero (it may not be exactly zero due to finite precision errors).
Verify that the vectors ${\bf v}_i$ defined above (which you can compute using vi = (ys==i)) also have zero eigenvalue.
End of explanation
# <SOL>
Dt = np.diag(np.sum(Kt, axis=1))
Lt = Dt - Kt
print(np.linalg.norm(Lt.dot(np.ones((N,1)))))
for i in range(nc):
vi = (y==i)
print(np.linalg.norm(Lt.dot(vi)))
# </SOL>
Explanation: Exercise 5:
Verify that the spectral properties of the Laplacian matrix computed from ${\bf K}_{st}$ still apply when using the unsorted matrix ${\bf K}_t$: compute ${\bf L}_{t} \cdot {\bf v}'_{i}$, where ${\bf v}'_i$ is a binary vector with components equal to 1 at the positions corresponding to samples in cluster $i$ (which you can compute using vi = (y==i)), and verify that its euclidean norm is close to zero.
End of explanation
wst, vst = np.linalg.eigh(Lst)
for n in range(nc):
plt.plot(vst[:,n], '.-')
Explanation: Note that the position of 1's in eigenvectors ${\bf v}_i$ points out the samples in the $i$-th connected component. This suggest the following tentative clustering algorithm:
Compute the affinity matrix
Compute the laplacian matrix
Compute $c$ orthogonal eigenvectors with zero eigenvalue
If $v_{in}=1$, assign ${\bf x}^{(n)}$ to cluster $i$.
This is the core idea behind several spectral clustering algorithms. In this precise form, the algorithm does not usually work, for several reasons that we will discuss next, but with some modifications it becomes a powerful method.
4.2. Computing eigenvectors of the Laplacian Matrix
One of the reasons why the algorithm above may not work is that vectors ${\bf v}'_0, \ldots, {\bf v}'_{c-1}$ are not the only zero eigenvectors of ${\bf L}_t$: any linear combination of them is also a zero eigenvector. Eigenvector computation algorithms may return a different set of orthogonal eigenvectors.
However, one can expect the eigenvectors to have similar components in the positions corresponding to samples in the same connected component.
End of explanation
# <SOL>
D = np.diag(np.sum(K, axis=1))
L = D - K
w, v = np.linalg.eigh(L)
for n in range(nc):
plt.plot(v[:,n], '.-')
# </SOL>
Explanation: 4.3. Non block diagonal matrices.
Another reason to modify our tentative algorithm is that, in more realistic cases, the affinity matrix may have an imperfect block diagonal structure. In such cases, the smallest eigenvalues may be nonzero and eigenvectors may be not exactly piecewise constant.
Exercise 6
Plot the eigenvector profile for the shuffled and not thresholded affinity matrix, ${\bf K}$.
End of explanation
# <SOL>
g = 20
t = 0.1
K2 = rbf_kernel(X2, X2, gamma=g)
K2t = K2*(K2>t)
G2 = nx.from_numpy_matrix(K2t)
graphplot = nx.draw(G2, X2, node_size=40, width=0.5)
plt.axis('equal')
plt.show()
# </SOL>
Explanation: Note that, although the eigenvector components cannot be used as a straightforward cluster indicator, they are strongly informative of the clustering structure.
All points in the same cluster have similar values of the corresponding eigenvector components $(v_{n0}, \ldots, v_{n,c-1})$.
Points from different clusters have different values of the corresponding eigenvector components $(v_{n0}, \ldots, v_{n,c-1})$.
Therefore we can define vectors ${\bf z}^{(n)} = (v_{n0}, \ldots, v_{n,c-1})$ and apply a centroid-based algorithm (like $K$-means) to identify all points with similar eigenvector components. The corresponding samples in ${\bf X}$ become the final clusters of the spectral clustering algorithm.
One possible way to identify the cluster structure is to apply a $K$-means algorithm over the eigenvector coordinates. The steps of the spectral clustering algorithm become the following
5. A spectral clustering (graph cutting) algorithm
5.1. The steps of the spectral clustering algorithm.
Summarizing, the steps of the spectral clustering algorithm for a data matrix ${\bf X}$ are the following:
Compute the affinity matrix, ${\bf K}$. Optionally, truncate the smallest components to zero.
Compute the laplacian matrix, ${\bf L}$
Compute the $c$ orthogonal eigenvectors with smallest eigenvalues, ${\bf v}_0,\ldots,{\bf v}_{c-1}$
Construct the sample set ${\bf Z}$ with rows ${\bf z}^{(n)} = (v_{0n}, \ldots, v_{c-1,n})$
Apply the $K$-means algorithm over ${\bf Z}$ with $K=c$ centroids.
Assign samples in ${\bf X}$ to clusters: if ${\bf z}^{(n)}$ is assigned by $K$-means to cluster $i$, assign sample ${\bf x}^{(n)}$ in ${\bf X}$ to cluster $i$.
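A compact sketch of this recipe is shown below (assuming rbf_kernel and KMeans are imported from scikit-learn, as in the rest of this notebook; the function name spectral_clustering_sketch is only for illustration):
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import KMeans
def spectral_clustering_sketch(X, c, gamma=20, t=None):
    # Step 1: affinity matrix (optionally truncated)
    K = rbf_kernel(X, X, gamma=gamma)
    if t is not None:
        K = K * (K > t)
    # Step 2: Laplacian matrix
    D = np.diag(np.sum(K, axis=1))
    L = D - K
    # Step 3: eigenvectors with the smallest eigenvalues
    w, v = np.linalg.eigh(L)
    # Step 4: sample set Z, one row z^(n) per original sample
    Z = v[:, 0:c]
    # Steps 5-6: K-means over Z gives the cluster of each x^(n)
    return KMeans(n_clusters=c).fit_predict(Z)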
Exercise 7:
In this exercise we will apply the spectral clustering algorithm to the two-rings dataset ${\bf X}_2$, using $\gamma = 20$, $t=0.1$ and $c = 2$ clusters.
Complete step 1, and plot the graph induced by ${\bf K}$
End of explanation
# <SOL>
D2t = np.diag(np.sum(K2t, axis=1))
L2t = D2t - K2t
w2t, v2t = np.linalg.eigh(L2t)
Z2t = v2t[:,0:2]
plt.scatter(Z2t[:,0], Z2t[:,1], s=20)
plt.show()
# </SOL>
Explanation: Complete step 2, 3 and 4, and draw a scatter plot of the samples in ${\bf Z}$
End of explanation
est = KMeans(n_clusters=2)
clusters = est.fit_predict(Z2t)
Explanation: Complete step 5
End of explanation
plt.scatter(X2[:, 0], X2[:, 1], c=clusters, s=50, cmap='rainbow')
plt.axis('equal')
plt.show()
Explanation: Finally, complete step 6 and show, in a scatter plot, the result of the clustering algorithm
End of explanation
n_clusters = 4
gamma = .1 # Warning do not exceed gamma=100
SpClus = SpectralClustering(n_clusters=n_clusters,affinity='rbf',
gamma=gamma)
SpClus.fit(X)
plt.scatter(X[:, 0], X[:, 1], c=SpClus.labels_.astype(int), s=50,
cmap='rainbow')
plt.axis('equal')
plt.show()
nc = 2
gamma = 50 #Warning do not exceed gamma=300
SpClus = SpectralClustering(n_clusters=nc, affinity='rbf', gamma=gamma)
SpClus.fit(X2)
plt.scatter(X2[:, 0], X2[:, 1], c=SpClus.labels_.astype(int), s=50,
cmap='rainbow')
plt.axis('equal')
plt.show()
nc = 5
SpClus = SpectralClustering(n_clusters=nc, affinity='nearest_neighbors')
SpClus.fit(X2)
plt.scatter(X2[:, 0], X2[:, 1], c=SpClus.labels_.astype(int), s=50,
cmap='rainbow')
plt.axis('equal')
plt.show()
Explanation: 5.2. Scikit-learn implementation.
The <a href=http://scikit-learn.org/stable/modules/generated/sklearn.cluster.SpectralClustering.html> spectral clustering algorithm </a> in Scikit-learn requires the number of clusters to be specified. It works well for a small number of clusters, but is not advised when using many clusters and/or large datasets.
Finally, we are going to run spectral clustering on both datasets. Spend a few minutes figuring out the meaning of parameters of the Spectral Clustering implementation of Scikit-learn:
http://scikit-learn.org/stable/modules/generated/sklearn.cluster.SpectralClustering.html
Note that there is no equivalent parameter to our threshold $t$, which has been useful for the graph representations. However, playing with $\gamma$ should be enough to get a good clustering.
The following piece of code executes the algorithm with an 'rbf' kernel. You can manually adjust the number of clusters and the parameter of the kernel to study the behavior of the algorithm. When you are done, you can also:
Modify the code to allow for kernels different than the 'rbf'
Repeat the analysis for the second dataset (two_rings)
End of explanation |
126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
List manipulation in Python
Goal for this assignment
The goal for this assignment is to learn to use the various methods for Python's list data type.
Your name
// put your name here!
Part 1
Step1: Now, try it yourself!
Using the array below, create and print out sub-arrays that do the following
Step2: Part 2
Step4: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! | Python Code:
even_numbers = [2, 4, 6, 8, 10, 12, 14]
s1 = even_numbers[1:5] # returns the 2nd through 4th elements
print("s1:", s1)
s2 = even_numbers[2:] # returns the 3rd element thorugh the end
print("s2:", s2)
s3 = even_numbers[:-2] # returns everything but the last two elements
print("s3:", s3)
s4 = even_numbers[1:-2] # returns everything but the first element and the two elements on the end
print("s4:", s4)
s5 = even_numbers[1:-1:2] # returns every other element, starting with the second element and ending at the second-to-last
print("s5:", s5)
s6 = even_numbers[::-1] # starts at the end of the list and returns all elements in backwards order (reversing original list)
print("s6:", s6)
Explanation: List manipulation in Python
Goal for this assignment
The goal for this assignment is to learn to use the various methods for Python's list data type.
Your name
// put your name here!
Part 1: working with lists in Python
A list in Python is what is known as a compound data type, which is fundamentally used to group together other types of variables. It is possible for lists to have values of a variety of types (i.e., integers, strings, floating-point numbers, etc.) but in general people tend to create lists with a single data type. Lists are written as comma-separated values between square brackets, like so:
odd_numbers = [1, 3, 5, 7, 9]
and an empty list can be created by using square brackets with no values:
empty_list = []
The number of elements of a list can be found by using the Python len() method: len(odd_numbers) would return 5, for example.
Lists are accessed using index values: odd_numbers[2] will return the 3rd element from the beginning of the list (since Python counts starting at 0). In this case, the value returned would be 5. Using negative numbers in the index gives you elements starting at the end. For example, odd_numbers[-1] gives you the last element in the list, and odd_numbers[-2] gives you the second-to-last element.
Lists can also be indexed by slicing the list, which gives you a sub-set of the list (which is also a list). A colon indicates that slicing is occurring, and you use the syntax my_array[start:end]. In this example, start is the index where you start, and end is the index after the one you want to end (in keeping with the rest of Python's syntax). If start or end are blank, the slice either begins at the beginning of the list or continues to the end of the list, respectively.
You can also add a third argument, which is the step. In other words, my_array[start:end:step] goes from the start index to the index before end in steps of step. More concretely, my_array[1:6:2] will return a list composed of elements 1, 3, and 5 of that list, and my_array[::2] returns every second element in the list.
IMPORTANT: you can do all of these things in Numpy, too!
Some examples are below:
End of explanation
some_letters = ['a','b','c','d','e','f','g','h','i']
# put your code here!
print("1. first four elements:", some_letters[:4])
print("2. last three elements:", some_letters[-3:])
print("3. every third element, starting from 2nd:", some_letters[1::3])
Explanation: Now, try it yourself!
Using the array below, create and print out sub-arrays that do the following:
print out the first four elements (a-d)
print out the last three elements (g-i)
starting with the second element, print out every third element (b, e, h)
End of explanation
A = [1,2,3,4,5,6]
B = ['a','b','c','d','e']
# put your code here!
C = list(A)
C.append(7)
C.append(8)
print("C: ", C)
x = C.pop(2)
C.insert(-2,x)
print("new C:", C)
D = list(A)
D.extend(B[1:-1])
print("D", D)
E = list(B)
E.reverse()
E.pop(0)
print("E:", E)
Explanation: Part 2: list methods in Python
There are several useful methods that are built into lists in Python. A full explanation of all list methods can be found here. However, the most useful list methods are as follows:
list.append(x) - adds an item x to the end of your list.
list.extend(L) - extends the list by adding all items in the given list L. If you try to use the append() method for this, you will end up with a list whose last element is itself another list; extend() instead creates a single, unified list made up of the two original lists.
list.insert(i, x) - insert item x at index position i. list.insert(0,x) inserts at the front of the list
list.pop(i) - removes the item at index i and returns it. If you don't give an index, list.pop() gives you the last item in the list.
list.reverse() - reverse the order of the elements of the list. This happens in place, and doesn't return anything.
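A quick illustration of these methods on a small made-up list:
demo = [1, 2, 3]
demo.append(4)        # demo is now [1, 2, 3, 4]
demo.extend([5, 6])   # [1, 2, 3, 4, 5, 6]
demo.insert(0, 'a')   # ['a', 1, 2, 3, 4, 5, 6]
last = demo.pop()     # returns 6; demo is now ['a', 1, 2, 3, 4, 5]
demo.reverse()        # [5, 4, 3, 2, 1, 'a']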
An important note about copying lists in Python
You may try to copy a list so you can work with it:
new_list = old_list
However, you'll find that if you modify new_list, you also modify old_list. That's because when you equate lists in the way shown above, you are not making a copy: the new name still "points at" the old values, and can modify them.
The (weird, but correct) way to truly copy a list is to say:
new_list = list(old_list)
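For example, a short sketch that illustrates the difference:
old_list = [1, 2, 3]
alias = old_list           # both names point at the same list
alias[0] = 99              # old_list is now [99, 2, 3] as well
real_copy = list(old_list)
real_copy[0] = 1           # old_list is unchanged; only real_copy differs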
Now, try it yourself!
Using the arrays below, create new arrays, manipulate them as follows, and then print them out:
Create a new array C, which a copy of A, and append the numbers 7 and 8 to the end. (so the elements of C are 1,2,3,4,5,6,7,8)
Then remove the third element of C and put it back into the array before the second element from the end (so its elements, in order, are 1, 2, 4, 5, 6, 3, 7, 8)
Make a new array D that is a copy of A, and then extend it with the middle three elements of array B, using slicing to get the middle 3 elements of B, (so its elements, in order, are 1, 2, 3, 4, 5, 6, 'b', 'c', 'd').
Make a new array E that is a copy of B, reverse the order of its elements, and then remove the first element (so its elements, in order, are 'd', 'c', 'b', 'a').
End of explanation
from IPython.display import HTML
HTML("""
<iframe
src="https://goo.gl/forms/jXRNcKiQ8C3lvt8E2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
""")
Explanation: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation |
127 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 1 of 3
Step1: A detailed tutorial on dictionaries can be found here. The dict does not offer much functionality aside from basic storage of arbitrary objects, and it is meant to be extended. OpenPNM extends the dict to have functionality specifically suited for dealing with OpenPNM data.
Numpy Arrays of Pore and Throat Data
All data are stored in arrays which can be accessed using standard array syntax.
All pore and throat properties are stored in Numpy arrays. All data will be automatically converted to a Numpy array if necessary.
The data for pore i (or throat i) can be found in element i of an array. This means that pores and throats have indices which are implied by their position in arrays. When we speak of retrieving pore locations, it refers to the indices in the Numpy arrays.
Each property is stored in its own array, meaning that 'pore diameter' and 'throat volume' are each stored in a separate array.
Arrays that store pore data are Np-long, while arrays that store throat data are Nt-long, where Np is the number of pores and Nt is the number of throats in the network.
Arrays can be any size in the other dimensions. For instance, triplets of pore coordinates (i.e. [x, y, z]) can be stored for each pore creating an Np-by-3 array.
The storage of topological connections is also very nicely accomplished with this 'list-based' format, by creating an array ('throat.conns') that stores which pore indices are found on either end of a throat. This leads to an Nt-by-2 array.
OpenPNM Objects
Step2: Generate a Cubic Network
Now that we have seen the rough outline of how OpenPNM objects store data, we can begin building a simulation. Start by importing OpenPNM and the Scipy package
Step3: Next, generate a Network by choosing the Cubic class, then create an instance with the desired parameters
Step4: The Network object stored in pn contains pores at the correct spatial positions and connections between the pores according the cubic topology.
The shape argument specifies the number of pores in the [X, Y, Z] directions of the cube. Networks in OpenPNM are always 3D dimensional, meaning that a 2D or "flat" network is still 1 layer of pores "thick" so [X, Y, Z] = [20, 10, 1], thus pn in this tutorial is 2D which is easier for visualization.
The spacing argument controls the center-to-center distance between pores and it can be a scalar or vector (i.e. [0.0001, 0.0002, 0.0003]).
The resulting network looks like
Step5: Accessing Pores and Throats via Labels
One simple but important feature of OpenPNM is the ability to label pores and throats. When a Cubic network is created, several labels are automatically created
Step6: The ability to retrieve pore indices is handy for querying pore properties, such as retrieving the pore coordinates of all pores on the 'left' face
Step7: A list of all labels currently assigned to the network can be obtained with
Step8: Create a Geometry Object and Assign Geometric Properties to Pores and Throats
The Network pn does not contain any information about pore and throat sizes at this point. The next step is to create a Geometry object to manage the geometrical properties.
Step9: This statement contains three arguments
Step10: We usually want the throat diameters to always be smaller than the two pores which it connects to maintain physical consistency. This requires understanding a little bit about how OpenPNM stores network topology. Consider the following
Step11: Let's dissect the above lines.
Firstly, P12 is a direct copy of the Network's 'throat.conns' array, which contains the indices of the pore-pair connected by each throat.
Next, this Nt-by-2 array is used to index into the 'pore.diameter' array, resulting in another Nt-by-2 array containing the diameters of the pores on each end of a throat.
Finally, the Scipy function amin is used to find the minimum diameter of each pore-pair by specifying the axis argument as 1, and the resulting Nt-by-1 array is assigned to geom['throat.diameter'].
This trick of using 'throat.conns' to index into a pore property array is commonly used in OpenPNM and you should have a second look at the above code to understand it fully.
We must still specify the remaining geometrical properties of the pores and throats. Since we're creating a "Stick-and-Ball" geometry, the sizes are calculated from the geometrical equations for spheres and cylinders.
For pore volumes, assume a sphere
Step12: The length of each throat is the center-to-center distance between pores, minus the radius of each of two neighboring pores.
Step13: The volume of each throat is found assuming a cylinder
Step14: The basic geometrical properties of the network are now defined. The Geometry class possesses a method called plot_histograms that produces a plot of the most pertinent geometrical properties. The following figure doesn't look very good since the network in this example has only 12 pores, but the utility of the plot for quick inspection is apparent.
<img src="http
Step15: Some notes on this line
Step16: The above lines utilize the fact that OpenPNM converts scalars to full length arrays, essentially setting the temperature in each pore to 298.0 K.
Create a Physics Object
We are still not ready to perform any simulations. The last step is to define the desired pore-scale physics models, which dictate how the phase and geometrical properties interact to give the transport parameters. A classic example of this is the Hagen-Poiseuille equation for fluid flow through a throat to predict the flow rate as a function of the pressure drop. The flow rate is proportional to the geometrical size of the throat (radius and length) as well as properties of the fluid (viscosity) and thus combines geometrical and thermophysical properties
Step17: As with all objects, the Network must be specified
Physics objects combine information from a Phase (i.e. viscosity) and a Geometry (i.e. throat diameter), so each of these must be specified.
Physics objects do not require the specification of which pores and throats where they apply, since this information is implied by the geometry argument which was already assigned to specific locations.
Specify Desired Pore-Scale Transport Parameters
We need to calculate the numerical values representing our chosen pore-scale physics. To continue with the Hagen-Poiseuille example lets calculate the hydraulic conductance of each throat in the network. The throat radius and length are easily accessed as
Step18: The viscosity of the Phases was only defined in the pores; however, the hydraulic conductance must be calculated for each throat. There are several options, but to keep this tutorial simple we'll create a scalar value
Step19: Numpy arrays support vectorization, so since both L and R are arrays of Nt-length, their multiplication in this way results in another array that is also Nt-long.
Create an Algorithm Object for Performing a Permeability Simulation
Finally, it is now possible to run some useful simulations. The code below estimates the permeability through the network by applying a pressure gradient across and calculating the flux. This starts by creating a StokesFlow algorithm, which is pre-defined in OpenPNM
Step20: Like all the above objects, Algorithms must be assigned to a Network via the network argument.
This algorithm is also associated with a Phase object, in this case water, which dictates which pore-scale Physics properties to use (recall that phys_water was associated with water). This can be passed as an argument to the instantiation or to the setup function.
Next the boundary conditions are applied using the set_boundary_conditions method on the Algorithm object. Let's apply a 1 atm pressure gradient between the left and right sides of the domain
Step21: To actually run the algorithm use the run method
Step22: This builds the coefficient matrix from the existing values of hydraulic conductance, and inverts the matrix to solve for pressure in each pore, and stores the results within the Algorithm's dictionary under 'pore.pressure'.
To determine the permeability coefficient, we must invoke Darcy's law
Step23: The StokesFlow class was developed with permeability simulations in mind, so a specific method is available for determining the permeability coefficient that essentially applies the recipe from above. This method could struggle with non-uniform geometries though, so use with caution
Step24: The results ('pore.pressure') are held within the alg object and must be explicitly returned to the Phase object by the user if they wish to use these values in a subsequent calculation. The point of this data containment is to prevent unintentional overwriting of data. Each algorithm has a method called results which returns a dictionary of the pertinent simulation results, which can be added to the phase of interest using the update method. | Python Code:
foo = dict() # Create an empty dict
foo['bar'] = 1 # Store an integer under the key 'bar'
print(foo['bar']) # Retrieve the integer stored in 'bar'
Explanation: Tutorial 1 of 3: Getting Started with OpenPNM
This tutorial is intended to show the basic outline of how OpenPNM works, and necessarily skips many of the more useful and powerful features of the package. So if you find yourself asking "why is this step so labor intensive" it's probably because this tutorial deliberately simplifies some features to provide a more smooth introduction. The second and third tutorials dive into the package more deeply, but those features are best appreciated once the basics are understood.
Learning Objectives
Introduce the main OpenPNM objects and their roles
Explore the way OpenPNM stores data, including network topology
Learn some handy tools for working with objects
Generate a standard cubic Network topology
Calculate geometrical properties and assign them to a Geometry object
Calculate thermophysical properties and assign to a Phase object
Define pore-scale physics and assign transport parameters to a Physics object
Run a permeability simulation using the pre-defined Algorithm
Use the package to calculate the permeability coefficient of a porous media
Python and Numpy Tutorials
Before diving into OpenPNM it is probably a good idea to become familiar with Python and Numpy. The following resources should be helpful.
* OpenPNM is written in Python. One of the best guides to learning Python is the set of Tutorials available on the official Python website. The web is literally overrun with excellent Python tutorials owing to the popularity and importance of the language. The official Python website also provides a long list of resources
* For information on using Numpy, Scipy and generally doing scientific computing in Python checkout the Scipy lecture notes. The Scipy website also offers as solid introduction to using Numpy arrays.
* The Stackoverflow website is an incredible resource for all computing related questions, including simple usage of Python, Scipy and Numpy functions.
* For users more familiar with Matlab, there is a Matlab-Numpy cheat sheet that explains how to translate familiar Matlab commands to Numpy.
Overview of Data Storage in OpenPNM
Before creating an OpenPNM simulation it is necessary to give a quick description of how data is stored in OpenPNM; after all, a significant part of OpenPNM is dedicated to data storage and handling.
Python Dictionaries or dicts
OpenPNM employs 5 main objects which each store and manage a different type of information or data:
Network: Manages topological data such as pore spatial locations and pore-to-pore connections
Geometry: Manages geometrical properties such as pore diameter and throat length
Phase: Manages thermophysical properties such as temperature and viscosity
Physics: Manages pore-scale transport parameters such as hydraulic conductance
Algorithm: Contains algorithms that use the data from other objects to perform simulations, such as diffusion or drainage
We will encounter each of these objects in action before the end of this tutorial.
Each of the above objects is a subclass of the Python dictionary or dict, which is a very general storage container that allows values to be accessed by a name using syntax like:
End of explanation
import scipy as sp
import numpy as np
import openpnm as op
np.random.seed(10)
# Instantiate an empty network object with 10 pores and 10 throats
net = op.network.GenericNetwork(Np=10, Nt=10)
# Assign an Np-long array of ones
net['pore.foo'] = np.ones([net.Np, ])
# Assign an Np-long array of increasing ints
net['pore.bar'] = range(0, net.Np)
# The Python range iterator is converted to a proper Numpy array
print(type(net['pore.bar']))
net['pore.foo'][4] = 44.0 # Overwrite values in the array
print(net['pore.foo'][4]) # Retrieve values from the array
print(net['pore.foo'][2:6]) # Extract a slice of the array
print(net['pore.foo'][[2, 4, 6]]) # Extract specific locations
net['throat.foo'] = 2 # Assign a scalar
print(len(net['throat.foo'])) # The scalar values is converted to an Nt-long array
print(net['throat.foo'][4]) # The scalar value was placed into all locations
Explanation: A detailed tutorial on dictionaries can be found here. The dict does not offer much functionality aside from basic storage of arbitrary objects, and it is meant to be extended. OpenPNM extends the dict to have functionality specifically suited for dealing with OpenPNM data.
Numpy Arrays of Pore and Throat Data
All data are stored in arrays which can be accessed using standard array syntax.
All pore and throat properties are stored in Numpy arrays. All data will be automatically converted to a Numpy array if necessary.
The data for pore i (or throat i) can be found in element i of an array. This means that pores and throats have indices which are implied by their position in arrays. When we speak of retrieving pore locations, it refers to the indices in the Numpy arrays.
Each property is stored in its own array, meaning that 'pore diameter' and 'throat volume' are each stored in a separate array.
Arrays that store pore data are Np-long, while arrays that store throat data are Nt-long, where Np is the number of pores and Nt is the number of throats in the network.
Arrays can be any size in the other dimensions. For instance, triplets of pore coordinates (i.e. [x, y, z]) can be stored for each pore creating an Np-by-3 array.
The storage of topological connections is also very nicely accomplished with this 'list-based' format, by creating an array ('throat.conns') that stores which pore indices are found on either end of a throat. This leads to an Nt-by-2 array.
OpenPNM Objects: Combining dicts and Numpy Arrays
OpenPNM objects combine the above two levels of data storage, meaning they are dicts that are filled with Numpy arrays. OpenPNM enforces several rules to help maintain data consistency:
When storing arrays in an OpenPNM object, their name (or dictionary key) must be prefixed with 'pore.' or 'throat.'.
OpenPNM uses the prefix of the dictionary key to infer how long the array must be.
The specific property that is stored in each array is indicated by the suffix such as 'pore.diameter' or 'throat.length'.
Writing scalar values to OpenPNM objects automatically results in conversion to a full length array filled with the scalar value.
Arrays containing Boolean data are treated as labels, which are explained later in this tutorial.
The following code snippets give examples of how all these pieces fit together using an empty network as an example:
End of explanation
import openpnm as op
import scipy as sp
Explanation: Generate a Cubic Network
Now that we have seen the rough outline of how OpenPNM objects store data, we can begin building a simulation. Start by importing OpenPNM and the Scipy package:
End of explanation
pn = op.network.Cubic(shape=[4, 3, 1], spacing=0.0001)
Explanation: Next, generate a Network by choosing the Cubic class, then create an instance with the desired parameters:
End of explanation
print('The total number of pores on the network is:', pn.num_pores())
print('A short-cut to the total number of pores is:', pn.Np)
print('The total number of throats on the network is:', pn.num_throats())
print('A short-cut to the total number of throats is:', pn.Nt)
print('A list of all calculated properties is availble with:\n', pn.props())
Explanation: The Network object stored in pn contains pores at the correct spatial positions and connections between the pores according the cubic topology.
The shape argument specifies the number of pores in the [X, Y, Z] directions of the cube. Networks in OpenPNM are always 3D dimensional, meaning that a 2D or "flat" network is still 1 layer of pores "thick" so [X, Y, Z] = [20, 10, 1], thus pn in this tutorial is 2D which is easier for visualization.
The spacing argument controls the center-to-center distance between pores and it can be a scalar or vector (i.e. [0.0001, 0.0002, 0.0003]).
The resulting network looks like:
(This image was creating using Paraview, using the instructions given here)
<img src="http://i.imgur.com/ScdydO9l.png" style="width: 60%" align="left"/>
Inspecting Object Properties
OpenPNM objects have additional methods for querying their relevant properties, like the number of pores or throats, which properties have been defined, and so on:
End of explanation
print(pn.pores('left'))
Explanation: Accessing Pores and Throats via Labels
One simple but important feature of OpenPNM is the ability to label pores and throats. When a Cubic network is created, several labels are automatically created: the pores on each face are labeled 'left', 'right', etc. These labels can be used as follows:
End of explanation
print(pn['pore.coords'][pn.pores('left')])
Explanation: The ability to retrieve pore indices is handy for querying pore properties, such as retrieving the pore coordinates of all pores on the 'left' face:
End of explanation
print(pn.labels())
Explanation: A list of all labels currently assigned to the network can be obtained with:
End of explanation
geom = op.geometry.GenericGeometry(network=pn, pores=pn.Ps, throats=pn.Ts)
Explanation: Create a Geometry Object and Assign Geometric Properties to Pores and Throats
The Network pn does not contain any information about pore and throat sizes at this point. The next step is to create a Geometry object to manage the geometrical properties.
End of explanation
geom['pore.diameter'] = np.random.rand(pn.Np)*0.0001 # Units of meters
Explanation: This statement contains three arguments:
network tells the Geometry object which Network it is associated with. There can be multiple networks defined in a given session, so all objects must be associated with a single network.
pores and throats indicate the locations in the Network where this Geometry object will apply. In this tutorial geom applies to all pores and throats, but there are many cases where different regions of the network have different geometrical properties, so OpenPNM allows multiple Geometry objects to be created for managing the data in each region, but this will not be used in this tutorial.
Add Pore and Throat Size Information
This freshly instantiated Geometry object (geom) contains no geometric properties as yet. For this tutorial we'll use the direct assignment of manually calculated values.
We'll start by assigning diameters to each pore from a random distribution, spanning 0 um to 100 um. The upper limit matches the spacing of the Network which was set to 0.0001 m (i.e. 100 um), so pore diameters exceeding 100 um might overlap with their neighbors. Using the Scipy rand function creates an array of random numbers between 0 and 0.0001 that is Np-long, meaning each pore is assigned a unique random number
End of explanation
P12 = pn['throat.conns'] # An Nt x 2 list of pores on the end of each throat
D12 = geom['pore.diameter'][P12] # An Nt x 2 list of pore diameters
Dt = np.amin(D12, axis=1) # An Nt x 1 list of the smaller pore from each pair
geom['throat.diameter'] = Dt
Explanation: We usually want the throat diameters to always be smaller than the two pores which it connects to maintain physical consistency. This requires understanding a little bit about how OpenPNM stores network topology. Consider the following:
End of explanation
Rp = geom['pore.diameter']/2
geom['pore.volume'] = (4/3)*3.14159*(Rp)**3
Explanation: Let's dissect the above lines.
Firstly, P12 is a direct copy of the Network's 'throat.conns' array, which contains the indices of the pore-pair connected by each throat.
Next, this Nt-by-2 array is used to index into the 'pore.diameter' array, resulting in another Nt-by-2 array containing the diameters of the pores on each end of a throat.
Finally, the Scipy function amin is used to find the minimum diameter of each pore-pair by specifying the axis argument as 1, and the resulting Nt-by-1 array is assigned to geom['throat.diameter'].
This trick of using 'throat.conns' to index into a pore property array is commonly used in OpenPNM and you should have a second look at the above code to understand it fully.
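As a toy illustration of this indexing pattern (independent of OpenPNM; the arrays below are made up for the example):
import numpy as np
pore_diameter = np.array([1.0, 2.0, 3.0, 4.0])    # one value per pore
conns = np.array([[0, 1], [1, 2], [2, 3]])        # pore pair on each throat
D12 = pore_diameter[conns]                        # Nt-by-2 array of pore diameters
throat_diameter = np.amin(D12, axis=1)            # smaller pore of each pair -> [1., 2., 3.]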
We must still specify the remaining geometrical properties of the pores and throats. Since we're creating a "Stick-and-Ball" geometry, the sizes are calculated from the geometrical equations for spheres and cylinders.
For pore volumes, assume a sphere:
End of explanation
C2C = 0.0001 # The center-to-center distance between pores
Rp12 = Rp[pn['throat.conns']]
geom['throat.length'] = C2C - np.sum(Rp12, axis=1)
Explanation: The length of each throat is the center-to-center distance between pores, minus the radius of each of two neighboring pores.
End of explanation
Rt = geom['throat.diameter']/2
Lt = geom['throat.length']
geom['throat.volume'] = 3.14159*(Rt)**2*Lt
Explanation: The volume of each throat is found assuming a cylinder:
End of explanation
water = op.phases.GenericPhase(network=pn)
Explanation: The basic geometrical properties of the network are now defined. The Geometry class possesses a method called plot_histograms that produces a plot of the most pertinent geometrical properties. The following figure doesn't look very good since the network in this example has only 12 pores, but the utility of the plot for quick inspection is apparent.
<img src="http://i.imgur.com/xkK1TYfl.png" style="width: 60%" align="left"/>
Create a Phase Object
The simulation is now topologically and geometrically defined. It has pore coordinates, pore and throat sizes and so on. In order to perform any simulations it is necessary to define a Phase object to manage all the thermophysical properties of the fluids in the simulation:
End of explanation
water['pore.temperature'] = 298.0
water['pore.viscosity'] = 0.001
Explanation: Some notes on this line:
* pn is passed as an argument because Phases must know to which Network they belong.
* Note that pores and throats are NOT specified; this is because Phases are mobile and can exist anywhere or everywhere in the domain, so providing specific locations does not make sense. Algorithms for dynamically determining actual phase distributions are discussed later.
Add Thermophysical Properties
Now it is necessary to fill this Phase object with the desired thermophysical properties. OpenPNM includes a framework for calculating thermophysical properties from models and correlations, but this is covered in the intermediate usage tutorial. For this tutorial, we'll use the basic approach of simply assigning static values as follows:
End of explanation
phys_water = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom)
Explanation: The above lines utilize the fact that OpenPNM converts scalars to full length arrays, essentially setting the temperature in each pore to 298.0 K.
Create a Physics Object
We are still not ready to perform any simulations. The last step is to define the desired pore-scale physics models, which dictate how the phase and geometrical properties interact to give the transport parameters. A classic example of this is the Hagen-Poiseuille equation for fluid flow through a throat to predict the flow rate as a function of the pressure drop. The flow rate is proportional to the geometrical size of the throat (radius and length) as well as properties of the fluid (viscosity) and thus combines geometrical and thermophysical properties:
End of explanation
R = geom['throat.diameter']/2
L = geom['throat.length']
Explanation: As with all objects, the Network must be specified
Physics objects combine information from a Phase (i.e. viscosity) and a Geometry (i.e. throat diameter), so each of these must be specified.
Physics objects do not require the specification of which pores and throats where they apply, since this information is implied by the geometry argument which was already assigned to specific locations.
Specify Desired Pore-Scale Transport Parameters
We need to calculate the numerical values representing our chosen pore-scale physics. To continue with the Hagen-Poiseuille example lets calculate the hydraulic conductance of each throat in the network. The throat radius and length are easily accessed as:
End of explanation
mu_w = 0.001
phys_water['throat.hydraulic_conductance'] = 3.14159*R**4/(8*mu_w*L)
Explanation: The viscosity of the Phases was only defined in the pores; however, the hydraulic conductance must be calculated for each throat. There are several options, but to keep this tutorial simple we'll create a scalar value:
End of explanation
alg = op.algorithms.StokesFlow(network=pn)
alg.setup(phase=water)
Explanation: Numpy arrays support vectorization, so since both L and R are arrays of Nt-length, their multiplication in this way results in another array that is also Nt-long.
Create an Algorithm Object for Performing a Permeability Simulation
Finally, it is now possible to run some useful simulations. The code below estimates the permeability through the network by applying a pressure gradient across and calculating the flux. This starts by creating a StokesFlow algorithm, which is pre-defined in OpenPNM:
End of explanation
BC1_pores = pn.pores('front')
alg.set_value_BC(values=202650, pores=BC1_pores)
BC2_pores = pn.pores('back')
alg.set_value_BC(values=101325, pores=BC2_pores)
Explanation: Like all the above objects, Algorithms must be assigned to a Network via the network argument.
This algorithm is also associated with a Phase object, in this case water, which dictates which pore-scale Physics properties to use (recall that phys_water was associated with water). This can be passed as an argument to the instantiation or to the setup function.
Next the boundary conditions are applied using the set_boundary_conditions method on the Algorithm object. Let's apply a 1 atm pressure gradient between the left and right sides of the domain:
End of explanation
alg.run()
Explanation: To actually run the algorithm use the run method:
End of explanation
Q = alg.rate(pores=pn.pores('front'))
A = 0.0001*3*1 # Cross-sectional area for flow
L = 0.0001*4 # Length of flow path
del_P = 101325 # Specified pressure gradient
K = Q*mu_w*L/(A*del_P)
print(K)
Explanation: This builds the coefficient matrix from the existing values of hydraulic conductance, and inverts the matrix to solve for pressure in each pore, and stores the results within the Algorithm's dictionary under 'pore.pressure'.
To determine the permeability coefficient, we must invoke Darcy's law: Q = K A (P_in - P_out) / (mu L). Everything in this equation is known except for the volumetric flow rate Q. The StokesFlow algorithm possesses a rate method that calculates the rate of a quantity leaving a specified set of pores:
End of explanation
K = alg.calc_effective_permeability(domain_area=A, domain_length=L)
print(K)
Explanation: The StokesFlow class was developed with permeability simulations in mind, so a specific method is available for determining the permeability coefficient that essentially applies the recipe from above. This method could struggle with non-uniform geometries though, so use with caution:
End of explanation
water.update(alg.results())
Explanation: The results ('pore.pressure') are held within the alg object and must be explicitly returned to the Phase object by the user if they wish to use these values in a subsequent calculation. The point of this data containment is to prevent unintentional overwriting of data. Each algorithm has a method called results which returns a dictionary of the pertinent simulation results, which can be added to the phase of interest using the update method.
End of explanation |
128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Autoencoder
This notebook demonstrates the invocation of the SystemML autoencoder script, and alternative ways of passing in/out data.
This notebook is supported with SystemML 0.14.0 and above.
Step1: SystemML Read/Write data from local file system
Step3: Generate Data and write out to file.
Step4: Alternatively to passing in/out file names, use Python variables. | Python Code:
!pip show systemml
import pandas as pd
from systemml import MLContext, dml
ml = MLContext(sc)
print(ml.info())
sc.version
Explanation: Autoencoder
This notebook demonstrates the invocation of the SystemML autoencoder script, and alternative ways of passing in/out data.
This notebook is supported with SystemML 0.14.0 and above.
End of explanation
FsPath = "/tmp/data/"
inp = FsPath + "Input/"
outp = FsPath + "Output/"
Explanation: SystemML Read/Write data from local file system
End of explanation
import numpy as np
X_pd = pd.DataFrame(np.arange(1,2001, dtype=float)).values.reshape(100,20)
# X_pd = pd.DataFrame(range(1, 2001,1),dtype=float).values.reshape(100,20)
script = """
write(X, $Xfile)
"""
prog = dml(script).input(X=X_pd).input(**{"$Xfile":inp+"X.csv"})
ml.execute(prog)
!ls -l /tmp/data/Input
autoencoderURL = "https://raw.githubusercontent.com/apache/incubator-systemml/master/scripts/staging/autoencoder-2layer.dml"
rets = ("iter", "num_iters_per_epoch", "beg", "end", "o")
prog = dml(autoencoderURL).input(**{"$X":inp+"X.csv"}) \
.input(**{"$H1":500, "$H2":2, "$BATCH":36, "$EPOCH":5 \
, "$W1_out":outp+"W1_out", "$b1_out":outp+"b1_out" \
, "$W2_out":outp+"W2_out", "$b2_out":outp+"b2_out" \
, "$W3_out":outp+"W3_out", "$b3_out":outp+"b3_out" \
, "$W4_out":outp+"W4_out", "$b4_out":outp+"b4_out" \
}).output(*rets)
iter, num_iters_per_epoch, beg, end, o = ml.execute(prog).get(*rets)
print (iter, num_iters_per_epoch, beg, end, o)
!ls -l /tmp/data/Output
Explanation: Generate Data and write out to file.
End of explanation
autoencoderURL = "https://raw.githubusercontent.com/apache/incubator-systemml/master/scripts/staging/autoencoder-2layer.dml"
rets = ("iter", "num_iters_per_epoch", "beg", "end", "o")
rets2 = ("W1", "b1", "W2", "b2", "W3", "b3", "W4", "b4")
prog = dml(autoencoderURL).input(X=X_pd) \
.input(**{ "$H1":500, "$H2":2, "$BATCH":36, "$EPOCH":5}) \
.output(*rets) \
.output(*rets2)
result = ml.execute(prog)
iter, num_iters_per_epoch, beg, end, o = result.get(*rets)
W1, b1, W2, b2, W3, b3, W4, b4 = result.get(*rets2)
print (iter, num_iters_per_epoch, beg, end, o)
Explanation: Alternatively to passing in/out file names, use Python variables.
End of explanation |
129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
#0 Journal paper critique
<center>
<img src=hw_2_data/bale_2005.png width="400"></img>
<img src=hw_2_data/bale_2005_caption.png width="400"></img>
</center>
The figure is trying to convey three messages
Step1: load data
Step2: #2 Reproduce figure in $\textrm{matplotlib}$
Step3: load data
Step4: #3 Generic "Brushing" code -- by matplotlib
Step5: Read dataset with many rows and multiple columns (variables/parameters).
Step13: data brushing
Step14: <font color='red'> The ipython widget makes it too slow, so please comment the following 3 lines out to play interactively </font>
Step15: #3 Generic "Brushing" code -- by Bokeh
Step16: generate grids and plot data on figures
Step17: show grid plot | Python Code:
from bokeh.plotting import figure, output_notebook, show
from bokeh.models import HoverTool
from bokeh.layouts import row
import numpy as np
output_notebook()
Explanation: #0 Journal paper critique
<center>
<img src=hw_2_data/bale_2005.png width="400"></img>
<img src=hw_2_data/bale_2005_caption.png width="400"></img>
</center>
The figure is trying to convey three messages:
- $B$ power spectral density shows a spectral break at ion gyroradius scale, indicating transition in physical processes.
- $E$ power spectral density agrees with $B$ in the ion inertial length scale but begin to diverge from $B$ in smaller scales (larger $k$)
- Transition of non-dispersive shear Alfven wave to dispersive kinetic Alfven wave (KAW) happens at the same length scale as the $B$ spectrum breaks, suggesting possible role of KAW in this transition length scale.
good side:
- clean images, appropriate amount of annotations
- high resolution
- use of color shade to emphasize different ranges
Room for improvement:
- can use bigger fontsize for labels
- can have longer axis ticks
- reconsider the area of three subplots. (b) and (c) can be slightly bigger
#1 Reproduce figure with $\textrm{Bokeh}$
End of explanation
# data dir
data_dir = 'hw_2_data/'
efficiency_file = data_dir + 'Efficiency.txt'
purity_file = data_dir + 'Purity.txt'
eff_data = np.genfromtxt(fname=efficiency_file, skip_header=1)
purity_data = np.genfromtxt(fname=purity_file, skip_header=1)
eff_followed = eff_data[:, 0]
eff_observed = eff_data[:, 1]
eff_uncertainty = eff_data[:, 2]
purity_followed = purity_data[:, 0]
purity_observed = purity_data[:, 1]
purity_uncertainty = purity_data[:, 2]
TOOLS = "pan,wheel_zoom,box_zoom,reset,save,box_select, crosshair"
p1 = figure(tools = TOOLS,
width = 300, height=300,
x_axis_label='Fraction of GRBs followed up',
y_axis_label='Fraction of high (Z>4) GRBs observed',
title='Efficieny')
p1.line(eff_followed, eff_observed, legend="observed")
p1.line([0, 1], [0, 1], legend="random guess", line_dash='dashed')
# use patch to plot upper/lower bound
band_x = np.append(eff_followed, eff_followed[::-1])
lowerband = eff_observed + eff_uncertainty
upperband = eff_observed - eff_uncertainty
band_y = np.append(lowerband, upperband[::-1])
p1.patch(band_x, band_y, color='#7570B3', fill_alpha=0.2, line_color=None)
p1.legend.location = "bottom_right"
p2 = figure(tools = TOOLS,
width = 300, height=300,
x_range=p1.x_range, y_range=p1.y_range,
x_axis_label='Fraction of GRBs followed up',
y_axis_label='Fraction of high (Z>4) GRBs observed',
title='Efficieny')
p2.line(purity_followed, purity_observed, legend="observed")
guess_2 = purity_observed[-1]
p2.line([0, 1], [guess_2, guess_2], legend="random guess", line_dash='dashed')
# use patch to plot upper/lower bound
band_x_2 = np.append(purity_followed, purity_followed[::-1])
lowerband_2 = purity_observed + purity_uncertainty
upperband_2 = purity_observed - purity_uncertainty
band_y_2 = np.append(lowerband_2, upperband_2[::-1])
p2.patch(band_x_2, band_y_2, color='#7570B3', fill_alpha=0.2, line_color=None)
p2.legend.location = "top_right"
r = row([p1, p2])
show(r)
Explanation: load data
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: #2 Reproduce figure in $\textrm{matplotlib}$
End of explanation
data_dir = 'hw_2_data/'
ny_temp = np.loadtxt(data_dir + 'ny_temps.txt', skiprows=1)
google_stock = np.loadtxt(data_dir + 'google_data.txt', skiprows=1)
yahoo_stock = np.loadtxt(data_dir + 'yahoo_data.txt', skiprows=1)
google_t, google_v = google_stock[:, 0], google_stock[:, 1]
yahoo_t, yahoo_v = yahoo_stock[:, 0], yahoo_stock[:, 1]
ny_t, ny_v = ny_temp[:, 0], ny_temp[:, 1]
from matplotlib.ticker import MultipleLocator
fs=16
fig, ax1 = plt.subplots(figsize=[8,6])
lns1 = ax1.plot(yahoo_t, yahoo_v, 'purple', label='Yahoo! Stock Value')
lns2 = ax1.plot(google_t, google_v, 'b-', label='Google Stock Value')
ax1.set_xlabel('Date (MJD)', fontsize=fs)
ax1.set_ylabel('Value (Dollars)', fontsize=fs)
ax1.set_ylim([-20, 780])
ax1.set_xlim([49000, 55000])
# add minor ticks
ax1.xaxis.set_minor_locator(MultipleLocator(200))
ax1.yaxis.set_minor_locator(MultipleLocator(20))
# set font for title
font = {'family': 'sans-serif','color': 'black',
'weight': 'bold','size': fs}
ax1.set_title('New York Temperature, Google and Yahoo!', fontdict=font)
# turn off major and minor ticks from upper x axis
ax1.tick_params(axis='x', which='both', top='off')
ax2 = ax1.twinx()
lns3 = ax2.plot(ny_t, ny_v, 'r--', label='NY Mon. High Temp')
ax2.set_ylabel('Temperature $(^\circ \mathrm{F})$', fontsize=fs)
ax2.set_ylim([-150, 100])
ax2.yaxis.set_minor_locator(MultipleLocator(10))
# add legend
lns = lns1+lns2+lns3
labs = [l.get_label() for l in lns]
ax1.legend(lns, labs, loc='center left', frameon=False)
plt.show()
Explanation: load data
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
# %matplotlib inline
Explanation: #3 Generic "Brushing" code -- by matplotlib
End of explanation
data_dir = 'hw_2_data/'
filename = data_dir + 'flowers.csv'
!tail -n 5 hw_2_data/flowers.csv
# datatype
dt = [('sepalLength', 'f4'), ('sepalWidth', 'f4'),
('petalLength', 'f4'), ('petalWidth', 'f4'), ('species', 'S10')]
names = ['sepalLength', 'sepalWidth', 'petalLength', 'petalWidth', 'species']
formats = ['f4', 'f4', 'f4', 'f4', 'S10']
dt = {'names': names, 'formats':formats}
# dataset
ds = np.loadtxt(filename, delimiter=',', skiprows=1, dtype=dt)
Explanation: Read dataset with many rows and multiple columns (variables/parameters).
End of explanation
# define colors for different species
blue = [0, 0, 1, 0.75]
red = [1, 0, 0, 0.75]
green = [0, 0.5, 0, 0.75]
grey = [0.75, 0.75, 0.75, 0.5]
colorTable = {b'setosa': red, b'versicolor': blue, b'virginica': green}
class Brushing(object):
def __init__(self):
"""Constructor of Brushing object
- make panel plots
- implementation requires that m, n >= 2
- set default values of figure, axes handle
- connect figure to press, move and release events
"""
self.m = 4
self.n = 4
fig, axes = plt.subplots(self.m, self.n, sharey='row',
sharex='col', figsize=[10, 10])
self.axes = np.array(fig.axes).reshape(self.m, self.n)
self.scatters = []
for i, var_i in enumerate(names[:self.m]):
for j, var_j in enumerate(names[:self.n]):
data_i = ds[var_i]
data_j = ds[var_j]
ax = axes[i, j]
colors = np.array([colorTable[s] for s in ds['species']])
sc = ax.scatter(data_j, data_i, c=colors)
self.scatters += [sc]
sc.set_edgecolors(colors)
if i == j:
ax.text(0.1, 0.8, var_i, transform=ax.transAxes)
self.scatters = np.array(self.scatters).reshape(self.m, self.n)
self.rect = None
self.fig = fig
self.x0 = None
self.y0 = None
self.x1 = None
self.y1 = None
self.ax = None
self.ax_ij = None
self.press = None
self.fig.canvas.mpl_connect('button_press_event', self.on_press)
self.fig.canvas.mpl_connect('button_release_event', self.on_release)
self.fig.canvas.mpl_connect('motion_notify_event', self.on_motion)
def selected(self):
"""return boolean array for indices of the selected data points"""
i, j = self.ax_ij
data_i = ds[names[i]]
data_j = ds[names[j]]
xmin = min(self.x0, self.x1)
xmax = max(self.x0, self.x1)
ymin = min(self.y0, self.y1)
ymax = max(self.y0, self.y1)
if xmin == xmax and ymin == ymax:
selected=np.empty(len(data_i), dtype=bool)
selected.fill(True)
return selected
return (data_j > xmin) & (data_j < xmax) & \
(data_i > ymin) & (data_i < ymax)
def on_press(self, event):
"""Called when the mouse button is pressed
- create/redraw the selection rectangle
"""
if not self.rect:
self.rect = Rectangle((0,0), 0, 0, facecolor='grey', alpha = 0.2)
self.ax = event.inaxes
self.ax.add_patch(self.rect)
self.ax_ij = self.which_axis()
if self.ax != event.inaxes:
self.ax = event.inaxes
self.rect.set_visible(False)
del self.rect
self.rect = Rectangle((0,0), 0, 0, facecolor='grey', alpha = 0.2)
self.ax.add_patch(self.rect)
self.ax_ij = self.which_axis()
else:
self.rect.set_width(0)
self.rect.set_height(0)
self.press = True
self.x0 = event.xdata
self.y0 = event.ydata
def on_release(self, event):
"""Called when the mouse button is released
- redraw the selection rectangle
- reset colors of data points
"""
self.press = None
if event.inaxes != self.rect.axes: return
self.x1 = event.xdata
self.y1 = event.ydata
self.rect.set_width(self.x1 - self.x0)
self.rect.set_height(self.y1 - self.y0)
self.rect.set_xy((self.x0, self.y0))
self.set_color()
self.fig.canvas.draw()
def on_motion(self, event):
"""Called when the mouse moves during a press
- redraw the selection rectangle
- reset colors of data points
"""
if self.press is None: return
if event.inaxes != self.rect.axes: return
self.x1 = event.xdata
self.y1 = event.ydata
self.rect.set_width(self.x1 - self.x0)
self.rect.set_height(self.y1 - self.y0)
self.rect.set_xy((self.x0, self.y0))
self.set_color()
self.fig.canvas.draw()
def which_axis(self):
"""find the (i, j) index of the subplot selected by the mouse event"""
for i in range(self.m):
for j in range(self.n):
if self.axes[i,j] is self.ax:
return (i, j)
return
def set_color(self):
"""set color of scattered plots
- selected data points keep their colors
- other data points are shaded in grey
"""
selected = self.selected()
for i, var_i in enumerate(names[:self.m]):
for j, var_j in enumerate(names[:self.n]):
colors = np.array([colorTable[s] for s in ds['species']])
colors[~selected, :] = grey
sc = self.scatters[i,j]
sc.set_facecolors(colors)
sc.set_edgecolors(colors)
return
Explanation: data brushing
End of explanation
# import ipywidgets as widgets
# %matplotlib notebook
# w = widgets.HTML()
a = Brushing()
plt.show()
Explanation: <font color='red'> The ipython widget makes it too slow, so please comment the following 3 lines out to play interactively </font>
End of explanation
import numpy as np
from bokeh.plotting import figure, gridplot, show, output_notebook
from bokeh.models import ColumnDataSource
data_dir = 'hw_2_data/'
filename = data_dir + 'flowers.csv'
# datatype
dt = [('sepalLength', 'f4'), ('sepalWidth', 'f4'),
('petalLength', 'f4'), ('petalWidth', 'f4'), ('species', 'S10')]
names = ['sepalLength', 'sepalWidth', 'petalLength', 'petalWidth', 'species']
formats = ['f4', 'f4', 'f4', 'f4', 'S10']
dt = {'names': names, 'formats':formats}
# dataset
ds = np.loadtxt(filename, delimiter=',', skiprows=1, dtype=dt)
# construct colors for species
blue = "rgba(0, 0, 255, 1)"
red = "rgba(255, 0, 0, 0.75)"
green = "rgba(0, 128, 0, 0.75)"
grey = "rgba(192, 192, 192, 0.5)"
colorTable = {b'setosa': red, b'versicolor': blue, b'virginica': green}
colors = np.array([colorTable[s] for s in ds['species']])
output_notebook()
source = ColumnDataSource(data={name : ds[name] for name in names[:4]})
TOOLS = "pan,wheel_zoom,box_zoom,reset,save,box_select,lasso_select"
Explanation: #3 Generic "Brushing" code -- by Bokeh
End of explanation
m, n= 4, 4
grids = [[figure(tools = TOOLS, width = 200, height=200) for i in range(m)] for j in range(n)]
for i, ni in enumerate(names[:m]):
for j, nj in enumerate(names[:n]):
grids[i][j].circle(nj, ni, fill_color = colors, line_color=None, source = source)
Explanation: generate grids and plot data on figures
End of explanation
p = gridplot(grids)
show(p)
Explanation: show grid plot
End of explanation |
130 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I want to process a gray image in the form of np.array. | Problem:
import numpy as np
im = np.array([[0,0,0,0,0,0],
[0,0,5,1,2,0],
[0,1,8,0,1,0],
[0,0,0,7,1,0],
[0,0,0,0,0,0]])
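# Find the bounding box of the non-zero region and crop the image to it;
# if the image contains no non-zero pixels, return an empty array.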
mask = im == 0
rows = np.flatnonzero((~mask).sum(axis=1))
cols = np.flatnonzero((~mask).sum(axis=0))
if rows.shape[0] == 0:
result = np.array([])
else:
result = im[rows.min():rows.max()+1, cols.min():cols.max()+1] |
131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Enhanced Deep Residual Networks for single-image super-resolution
Author
Step1: Download the training dataset
We use the DIV2K Dataset, a prominent single-image super-resolution dataset with 1,000
images of scenes with various sorts of degradations,
divided into 800 images for training, 100 images for validation, and 100
images for testing. We use 4x bicubic downsampled images as our "low quality" reference.
Step5: Flip, crop and resize images
Step6: Prepare a tf.Data.Dataset object
We augment the training data with random horizontal flips and 90 rotations.
As low resolution images, we use 24x24 RGB input patches.
Step8: Visualize the data
Let's visualize a few sample images
Step9: Build the model
In the paper, the authors train three models
Step10: Train the model
Step12: Run inference on new images and plot the results | Python Code:
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
AUTOTUNE = tf.data.AUTOTUNE
Explanation: Enhanced Deep Residual Networks for single-image super-resolution
Author: Gitesh Chawda<br>
Date created: 2022/04/07<br>
Last modified: 2022/04/07<br>
Description: Training an EDSR model on the DIV2K Dataset.
Introduction
In this example, we implement
Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR)
by Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee.
The EDSR architecture is based on the SRResNet architecture and consists of multiple
residual blocks. It uses constant scaling layers instead of batch normalization layers to
produce consistent results (input and output have similar distributions, thus
normalizing intermediate features may not be desirable). Instead of using a L2 loss (mean squared error),
the authors employed an L1 loss (mean absolute error), which performs better empirically.
Our implementation only includes 16 residual blocks with 64 channels.
Alternatively, as shown in the Keras example
Image Super-Resolution using an Efficient Sub-Pixel CNN,
you can do super-resolution using an ESPCN Model. According to the survey paper, EDSR is one of the top-five
best-performing super-resolution methods based on PSNR scores. However, it has more
parameters and requires more computational power than other approaches.
It has a PSNR value (≈34db) that is slightly higher than ESPCN (≈32db).
As per the survey paper, EDSR performs better than ESPCN.
Paper:
A comprehensive review of deep learning based single image super-resolution
Comparison Graph:
<img src="https://dfzljdn9uc3pi.cloudfront.net/2021/cs-621/1/fig-11-2x.jpg" width="500" />
Imports
End of explanation
# Download DIV2K from TF Datasets
# Using bicubic 4x degradation type
div2k_data = tfds.image.Div2k(config="bicubic_x4")
div2k_data.download_and_prepare()
# Taking train data from div2k_data object
train = div2k_data.as_dataset(split="train", as_supervised=True)
train_cache = train.cache()
# Validation data
val = div2k_data.as_dataset(split="validation", as_supervised=True)
val_cache = val.cache()
Explanation: Download the training dataset
We use the DIV2K Dataset, a prominent single-image super-resolution dataset with 1,000
images of scenes with various sorts of degradations,
divided into 800 images for training, 100 images for validation, and 100
images for testing. We use 4x bicubic downsampled images as our "low quality" reference.
End of explanation
def flip_left_right(lowres_img, highres_img):
Flips Images to left and right.
# Outputs random values from a uniform distribution in between 0 to 1
rn = tf.random.uniform(shape=(), maxval=1)
# If rn is less than 0.5 it returns original lowres_img and highres_img
# If rn is greater than 0.5 it returns flipped image
return tf.cond(
rn < 0.5,
lambda: (lowres_img, highres_img),
lambda: (
tf.image.flip_left_right(lowres_img),
tf.image.flip_left_right(highres_img),
),
)
def random_rotate(lowres_img, highres_img):
Rotates Images by 90 degrees.
# Outputs random values from uniform distribution in between 0 to 4
rn = tf.random.uniform(shape=(), maxval=4, dtype=tf.int32)
# Here rn signifies number of times the image(s) are rotated by 90 degrees
return tf.image.rot90(lowres_img, rn), tf.image.rot90(highres_img, rn)
def random_crop(lowres_img, highres_img, hr_crop_size=96, scale=4):
Crop images.
low resolution images: 24x24
high resolution images: 96x96
lowres_crop_size = hr_crop_size // scale # 96//4=24
lowres_img_shape = tf.shape(lowres_img)[:2] # (height,width)
lowres_width = tf.random.uniform(
shape=(), maxval=lowres_img_shape[1] - lowres_crop_size + 1, dtype=tf.int32
)
lowres_height = tf.random.uniform(
shape=(), maxval=lowres_img_shape[0] - lowres_crop_size + 1, dtype=tf.int32
)
highres_width = lowres_width * scale
highres_height = lowres_height * scale
lowres_img_cropped = lowres_img[
lowres_height : lowres_height + lowres_crop_size,
lowres_width : lowres_width + lowres_crop_size,
] # 24x24
highres_img_cropped = highres_img[
highres_height : highres_height + hr_crop_size,
highres_width : highres_width + hr_crop_size,
] # 96x96
return lowres_img_cropped, highres_img_cropped
Explanation: Flip, crop and resize images
End of explanation
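A quick sanity check, offered as an illustrative sketch (not in the original): one cropped pair should yield a 24x24 low-resolution patch aligned with a 96x96 high-resolution patch.
sample_lr, sample_hr = next(iter(train_cache))
crop_lr, crop_hr = random_crop(sample_lr, sample_hr, hr_crop_size=96, scale=4)
print(crop_lr.shape, crop_hr.shape)  # expected: (24, 24, 3) (96, 96, 3)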
def dataset_object(dataset_cache, training=True):
ds = dataset_cache
ds = ds.map(
lambda lowres, highres: random_crop(lowres, highres, scale=4),
num_parallel_calls=AUTOTUNE,
)
if training:
ds = ds.map(random_rotate, num_parallel_calls=AUTOTUNE)
ds = ds.map(flip_left_right, num_parallel_calls=AUTOTUNE)
# Batching Data
ds = ds.batch(16)
if training:
# Repeating Data, so that cardinality of dataset becomes infinite
ds = ds.repeat()
# prefetching allows later images to be prepared while the current image is being processed
ds = ds.prefetch(buffer_size=AUTOTUNE)
return ds
train_ds = dataset_object(train_cache, training=True)
val_ds = dataset_object(val_cache, training=False)
Explanation: Prepare a tf.Data.Dataset object
We augment the training data with random horizontal flips and 90 rotations.
As low resolution images, we use 24x24 RGB input patches.
End of explanation
lowres, highres = next(iter(train_ds))
# Hight Resolution Images
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(highres[i].numpy().astype("uint8"))
plt.title(highres[i].shape)
plt.axis("off")
# Low Resolution Images
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(lowres[i].numpy().astype("uint8"))
plt.title(lowres[i].shape)
plt.axis("off")
def PSNR(super_resolution, high_resolution):
Compute the peak signal-to-noise ratio, measures quality of image.
# Max value of pixel is 255
psnr_value = tf.image.psnr(high_resolution, super_resolution, max_val=255)[0]
return psnr_value
Explanation: Visualize the data
Let's visualize a few sample images:
End of explanation
class EDSRModel(tf.keras.Model):
def train_step(self, data):
# Unpack the data. Its structure depends on your model and
# on what you pass to `fit()`.
x, y = data
with tf.GradientTape() as tape:
y_pred = self(x, training=True) # Forward pass
# Compute the loss value
# (the loss function is configured in `compile()`)
loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
# Compute gradients
trainable_vars = self.trainable_variables
gradients = tape.gradient(loss, trainable_vars)
# Update weights
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
# Update metrics (includes the metric that tracks the loss)
self.compiled_metrics.update_state(y, y_pred)
# Return a dict mapping metric names to current value
return {m.name: m.result() for m in self.metrics}
def predict_step(self, x):
# Adding dummy dimension using tf.expand_dims and converting to float32 using tf.cast
x = tf.cast(tf.expand_dims(x, axis=0), tf.float32)
# Passing low resolution image to model
super_resolution_img = self(x, training=False)
# Clips the tensor from min(0) to max(255)
super_resolution_img = tf.clip_by_value(super_resolution_img, 0, 255)
# Rounds the values of a tensor to the nearest integer
super_resolution_img = tf.round(super_resolution_img)
# Removes dimensions of size 1 from the shape of a tensor and converting to uint8
super_resolution_img = tf.squeeze(
tf.cast(super_resolution_img, tf.uint8), axis=0
)
return super_resolution_img
# Residual Block
def ResBlock(inputs):
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, padding="same")(x)
x = layers.Add()([inputs, x])
return x
# Upsampling Block
def Upsampling(inputs, factor=2, **kwargs):
x = layers.Conv2D(64 * (factor ** 2), 3, padding="same", **kwargs)(inputs)
x = tf.nn.depth_to_space(x, block_size=factor)
x = layers.Conv2D(64 * (factor ** 2), 3, padding="same", **kwargs)(x)
x = tf.nn.depth_to_space(x, block_size=factor)
return x
def make_model(num_filters, num_of_residual_blocks):
# Flexible Inputs to input_layer
input_layer = layers.Input(shape=(None, None, 3))
# Scaling Pixel Values
x = layers.Rescaling(scale=1.0 / 255)(input_layer)
x = x_new = layers.Conv2D(num_filters, 3, padding="same")(x)
# 16 residual blocks
for _ in range(num_of_residual_blocks):
x_new = ResBlock(x_new)
x_new = layers.Conv2D(num_filters, 3, padding="same")(x_new)
x = layers.Add()([x, x_new])
x = Upsampling(x)
x = layers.Conv2D(3, 3, padding="same")(x)
output_layer = layers.Rescaling(scale=255)(x)
return EDSRModel(input_layer, output_layer)
model = make_model(num_filters=64, num_of_residual_blocks=16)
Explanation: Build the model
In the paper, the authors train three models: EDSR, MDSR, and a baseline model. In this code example,
we only train the baseline model.
Comparison with model with three residual blocks
The residual block design of EDSR differs from that of ResNet. Batch normalization
layers have been removed (together with the final ReLU activation): since batch normalization
layers normalize the features, they hurt output value range flexibility.
It is thus better to remove them. Further, it also helps reduce the
amount of GPU RAM required by the model, since the batch normalization layers consume the same amount of
memory as the preceding convolutional layers.
<img src="https://miro.medium.com/max/1050/1*EPviXGqlGWotVtV2gqVvNg.png" width="500" />
End of explanation
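Optionally, inspect the resulting network (illustrative, not in the original example):
# Prints the layer graph and the total parameter count of the baseline EDSR model
model.summary()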
# Using adam optimizer with initial learning rate as 1e-4, changing learning rate after 5000 steps to 5e-5
optim_edsr = keras.optimizers.Adam(
learning_rate=keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5000], values=[1e-4, 5e-5]
)
)
# Compiling model with loss as mean absolute error(L1 Loss) and metric as psnr
model.compile(optimizer=optim_edsr, loss="mae", metrics=[PSNR])
# Training for more epochs will improve results
model.fit(train_ds, epochs=100, steps_per_epoch=200, validation_data=val_ds)
Explanation: Train the model
End of explanation
def plot_results(lowres, preds):
Displays low resolution image and super resolution image
plt.figure(figsize=(24, 14))
plt.subplot(132), plt.imshow(lowres), plt.title("Low resolution")
plt.subplot(133), plt.imshow(preds), plt.title("Prediction")
plt.show()
for lowres, highres in val.take(10):
lowres = tf.image.random_crop(lowres, (150, 150, 3))
preds = model.predict_step(lowres)
plot_results(lowres, preds)
Explanation: Run inference on new images and plot the results
End of explanation |
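As a further illustrative sketch (not part of the original example), the PSNR helper defined earlier can be used to compare the model against plain bicubic upsampling on a single validation crop:
lr_sample, hr_sample = next(iter(val.take(1)))
lr_crop, hr_crop = random_crop(lr_sample, hr_sample, hr_crop_size=96, scale=4)
sr_crop = model.predict_step(lr_crop)                           # (96, 96, 3), uint8
bicubic = tf.image.resize(lr_crop, [96, 96], method="bicubic")  # (96, 96, 3), float32
hr_batch = tf.cast(hr_crop, tf.float32)[tf.newaxis, ...]
print("EDSR PSNR   :", PSNR(tf.cast(sr_crop, tf.float32)[tf.newaxis, ...], hr_batch).numpy())
print("Bicubic PSNR:", PSNR(bicubic[tf.newaxis, ...], hr_batch).numpy())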
132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fire up graphlab create
Step1: Load some house value vs. crime rate data
Dataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
Step2: Exploring the data
The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
Step3: Fit the regression model using crime as the feature
Step4: Let's see what our fit looks like
Step5: Above
Step6: Refit our simple regression model on this modified dataset
Step7: Look at the fit
Step8: Compare coefficients for full-data fit versus no-Center-City fit
Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
Step9: Above
Step10: Do the coefficients change much? | Python Code:
import graphlab
Explanation: Fire up graphlab create
End of explanation
sales = graphlab.SFrame.read_csv('Philadelphia_Crime_Rate_noNA.csv/')
sales
Explanation: Load some house value vs. crime rate data
Dataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
End of explanation
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
Explanation: Exploring the data
The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
End of explanation
crime_model = graphlab.linear_regression.create(sales, target='HousePrice', features=['CrimeRate'],validation_set=None,verbose=False)
Explanation: Fit the regression model using crime as the feature
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(sales['CrimeRate'],sales['HousePrice'],'.',
sales['CrimeRate'],crime_model.predict(sales),'-')
Explanation: Let's see what our fit looks like
End of explanation
sales_noCC = sales[sales['MilesPhila'] != 0.0]
sales_noCC.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
Explanation: Above: blue dots are original data, green line is the fit from the simple regression.
Remove Center City and redo the analysis
Center City is the one observation with an extremely high crime rate, yet house prices are not very low. This point does not follow the trend of the rest of the data very well. A question is how much including Center City is influencing our fit on the other datapoints. Let's remove this datapoint and see what happens.
End of explanation
crime_model_noCC = graphlab.linear_regression.create(sales_noCC, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
Explanation: Refit our simple regression model on this modified dataset:
End of explanation
plt.plot(sales_noCC['CrimeRate'],sales_noCC['HousePrice'],'.',
         sales_noCC['CrimeRate'],crime_model_noCC.predict(sales_noCC),'-')
Explanation: Look at the fit:
End of explanation
crime_model.get('coefficients')
crime_model_noCC.get('coefficients')
Explanation: Compare coefficients for full-data fit versus no-Center-City fit
Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
End of explanation
sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000]
crime_model_nohighend = graphlab.linear_regression.create(sales_nohighend, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
Explanation: Above: We see that for the "no Center City" version, per unit increase in crime, the predicted decrease in house prices is 2,287. In contrast, for the original dataset, the drop is only 576 per unit increase in crime. This is significantly different!
High leverage points:
Center City is said to be a "high leverage" point because it is at an extreme x value where there are no other observations. As a result, recalling the closed-form solution for simple regression, this point has the potential to dramatically change the least squares line since the center of x mass is heavily influenced by this one point and the least squares line will try to fit close to that outlying (in x) point. If a high leverage point follows the trend of the other data, this might not have much effect. On the other hand, if this point somehow differs, it can be strongly influential in the resulting fit.
Influential observations:
An influential observation is one where the removal of the point significantly changes the fit. As discussed above, high leverage points are good candidates for being influential observations, but need not be. Other observations that are not leverage points can also be influential observations (e.g., strongly outlying in y even if x is a typical value).
Remove high-value outlier neighborhoods and redo analysis
Based on the discussion above, a question is whether the outlying high-value towns are strongly influencing the fit. Let's remove them and see what happens.
End of explanation
crime_model_noCC.get('coefficients')
crime_model_nohighend.get('coefficients')
Explanation: Do the coefficients change much?
End of explanation |
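As an illustrative follow-up (not in the original notebook), overlaying the two fits on the reduced dataset makes the comparison visual:
plt.plot(sales_nohighend['CrimeRate'], sales_nohighend['HousePrice'], '.',
         sales_nohighend['CrimeRate'], crime_model_noCC.predict(sales_nohighend), '-',
         sales_nohighend['CrimeRate'], crime_model_nohighend.predict(sales_nohighend), '-')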
133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Robust Kalman filtering for vehicle tracking
We will try to pinpoint the location of a moving vehicle with high accuracy from noisy sensor data. We'll do this by modeling the vehicle state as a discrete-time linear dynamical system. Standard Kalman filtering can be used to approach this problem when the sensor noise is assumed to be Gaussian. We'll use robust Kalman filtering to get a more accurate estimate of the vehicle state for a non-Gaussian case with outliers.
Problem statement
A discrete-time linear dynamical system consists of a sequence of state vectors $x_t \in \mathbf{R}^n$, indexed by time $t \in \lbrace 0, \ldots, N-1 \rbrace$ and dynamics equations
\begin{align}
x_{t+1} &= Ax_t + Bw_t\\
y_t &= Cx_t + v_t,
\end{align}
where $w_t \in \mathbf{R}^m$ is an input to the dynamical system (say, a drive force on the vehicle), $y_t \in \mathbf{R}^r$ is a state measurement, $v_t \in \mathbf{R}^r$ is noise, $A$ is the drift matrix, $B$ is the input matrix, and $C$ is the observation matrix.
Given $A$, $B$, $C$, and $y_t$ for $t = 0, \ldots, N-1$, the goal is to estimate $x_t$ for $t = 0, \ldots, N-1$.
Kalman filtering
A Kalman filter estimates $x_t$ by solving the optimization problem
$$
\begin{array}{ll}
\mbox{minimize} & \sum_{t=0}^{N-1} \left( \|w_t\|_2^2 + \tau \|v_t\|_2^2 \right)\\
\mbox{subject to} & x_{t+1} = Ax_t + Bw_t,\quad t=0,\ldots, N-1\\
& y_t = Cx_t+v_t,\quad t = 0, \ldots, N-1,
\end{array}
$$
where $\tau$ is a tuning parameter. This problem is actually a least squares problem, and can be solved via linear algebra, without the need for more general convex optimization. Note that since we have no observation $y_{N}$, $x_N$ is only constrained via $x_{N} = Ax_{N-1} + Bw_{N-1}$, which is trivially resolved when $w_{N-1} = 0$ and $x_{N} = Ax_{N-1}$. We maintain this vestigial constraint only because it offers a concise problem statement.
This model performs well when $w_t$ and $v_t$ are Gaussian. However, the quadratic objective can be influenced by large outliers, which degrades the accuracy of the recovery. To improve estimation in the presence of outliers, we can use robust Kalman filtering.
Robust Kalman filtering
To handle outliers in $v_t$, robust Kalman filtering replaces the quadratic cost with a Huber cost, which results in the convex optimization problem
$$
\begin{array}{ll}
\mbox{minimize} & \sum_{t=0}^{N-1} \left( \|w_t\|_2^2 + \tau \phi_\rho(v_t) \right)\\
\mbox{subject to} & x_{t+1} = Ax_t + Bw_t,\quad t=0,\ldots, N-1\\
& y_t = Cx_t+v_t,\quad t=0,\ldots, N-1,
\end{array}
$$
where $\phi_\rho$ is the Huber function
$$
\phi_\rho(a)= \left\{ \begin{array}{ll} \|a\|_2^2 & \|a\|_2\leq \rho\\
2\rho \|a\|_2-\rho^2 & \|a\|_2>\rho.
\end{array}\right.
$$
The Huber penalty function penalizes estimation error linearly outside of a ball of radius $\rho$, whereas in standard Kalman filtering, all errors are penalized quadratically. Thus, large errors are penalized less harshly, making this model more robust to outliers.
Vehicle tracking example
We'll apply standard and robust Kalman filtering to a vehicle tracking problem with state $x_t \in \mathbf{R}^4$, where
$(x_{t,0}, x_{t,1})$ is the position of the vehicle in two dimensions, and $(x_{t,2}, x_{t,3})$ is the vehicle velocity.
The vehicle has unknown drive force $w_t$, and we observe noisy measurements of the vehicle's position, $y_t \in \mathbf{R}^2$.
The matrices for the dynamics are
$$
A = \begin{bmatrix}
1 & 0 & \left(1-\frac{\gamma}{2}\Delta t\right) \Delta t & 0 \\
0 & 1 & 0 & \left(1-\frac{\gamma}{2} \Delta t\right) \Delta t\\
0 & 0 & 1-\gamma \Delta t & 0 \\
0 & 0 & 0 & 1-\gamma \Delta t
\end{bmatrix},
$$
$$
B = \begin{bmatrix}
\frac{1}{2}\Delta t^2 & 0 \\
0 & \frac{1}{2}\Delta t^2 \\
\Delta t & 0 \\
0 & \Delta t \\
\end{bmatrix},
$$
$$
C = \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{bmatrix},
$$
where $\gamma$ is a velocity damping parameter.
1D Model
The recurrence is derived from the following relations in a single dimension. For this subsection, let $x_t, v_t, w_t$ be the vehicle position, velocity, and input drive force. The resulting acceleration of the vehicle is $w_t - \gamma v_t$, with $- \gamma v_t$ is a damping term depending on velocity with parameter $\gamma$.
The discretized dynamics are obtained from numerically integrating
Step1: Problem Data
We generate the data for the vehicle tracking problem. We'll have $N=1000$, $w_t$ a standard Gaussian, and $v_t$ a standard Guassian, except $20\%$ of the points will be outliers with $\sigma = 20$.
Below, we set the problem parameters and define the matrices $A$, $B$, and $C$.
Step2: Simulation
We seed $x_0 = 0$ (starting at the origin with zero velocity) and simulate the system forward in time. The results are the true vehicle positions x_true (which we will use to judge our recovery) and the observed positions y.
We plot the position, velocity, and system input $w$ in both dimensions as a function of time.
We also plot the sets of true and observed vehicle positions.
Step3: Kalman filtering recovery
The code below solves the standard Kalman filtering problem using CVXPY. We plot and compare the true and recovered vehicle states. Note that the recovery is distorted by outliers in the measurements.
Step4: Robust Kalman filtering recovery
Here we implement robust Kalman filtering with CVXPY. We get a better recovery than the standard Kalman filtering, which can be seen in the plots below. | Python Code:
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
def plot_state(t,actual, estimated=None):
'''
plot position, speed, and acceleration in the x and y coordinates for
the actual data, and optionally for the estimated data
'''
trajectories = [actual]
if estimated is not None:
trajectories.append(estimated)
fig, ax = plt.subplots(3, 2, sharex='col', sharey='row', figsize=(8,8))
for x, w in trajectories:
ax[0,0].plot(t,x[0,:-1])
ax[0,1].plot(t,x[1,:-1])
ax[1,0].plot(t,x[2,:-1])
ax[1,1].plot(t,x[3,:-1])
ax[2,0].plot(t,w[0,:])
ax[2,1].plot(t,w[1,:])
ax[0,0].set_ylabel('x position')
ax[1,0].set_ylabel('x velocity')
ax[2,0].set_ylabel('x input')
ax[0,1].set_ylabel('y position')
ax[1,1].set_ylabel('y velocity')
ax[2,1].set_ylabel('y input')
ax[0,1].yaxis.tick_right()
ax[1,1].yaxis.tick_right()
ax[2,1].yaxis.tick_right()
ax[0,1].yaxis.set_label_position("right")
ax[1,1].yaxis.set_label_position("right")
ax[2,1].yaxis.set_label_position("right")
ax[2,0].set_xlabel('time')
ax[2,1].set_xlabel('time')
def plot_positions(traj, labels, axis=None,filename=None):
'''
show point clouds for true, observed, and recovered positions
'''
matplotlib.rcParams.update({'font.size': 14})
n = len(traj)
fig, ax = plt.subplots(1, n, sharex=True, sharey=True,figsize=(12, 5))
if n == 1:
ax = [ax]
for i,x in enumerate(traj):
ax[i].plot(x[0,:], x[1,:], 'ro', alpha=.1)
ax[i].set_title(labels[i])
if axis:
ax[i].axis(axis)
if filename:
fig.savefig(filename, bbox_inches='tight')
Explanation: Robust Kalman filtering for vehicle tracking
We will try to pinpoint the location of a moving vehicle with high accuracy from noisy sensor data. We'll do this by modeling the vehicle state as a discrete-time linear dynamical system. Standard Kalman filtering can be used to approach this problem when the sensor noise is assumed to be Gaussian. We'll use robust Kalman filtering to get a more accurate estimate of the vehicle state for a non-Gaussian case with outliers.
Problem statement
A discrete-time linear dynamical system consists of a sequence of state vectors $x_t \in \mathbf{R}^n$, indexed by time $t \in \lbrace 0, \ldots, N-1 \rbrace$ and dynamics equations
\begin{align}
x_{t+1} &= Ax_t + Bw_t\\
y_t &= Cx_t + v_t,
\end{align}
where $w_t \in \mathbf{R}^m$ is an input to the dynamical system (say, a drive force on the vehicle), $y_t \in \mathbf{R}^r$ is a state measurement, $v_t \in \mathbf{R}^r$ is noise, $A$ is the drift matrix, $B$ is the input matrix, and $C$ is the observation matrix.
Given $A$, $B$, $C$, and $y_t$ for $t = 0, \ldots, N-1$, the goal is to estimate $x_t$ for $t = 0, \ldots, N-1$.
Kalman filtering
A Kalman filter estimates $x_t$ by solving the optimization problem
$$
\begin{array}{ll}
\mbox{minimize} & \sum_{t=0}^{N-1} \left( \|w_t\|_2^2 + \tau \|v_t\|_2^2 \right)\\
\mbox{subject to} & x_{t+1} = Ax_t + Bw_t,\quad t=0,\ldots, N-1\\
& y_t = Cx_t+v_t,\quad t = 0, \ldots, N-1,
\end{array}
$$
where $\tau$ is a tuning parameter. This problem is actually a least squares problem, and can be solved via linear algebra, without the need for more general convex optimization. Note that since we have no observation $y_{N}$, $x_N$ is only constrained via $x_{N} = Ax_{N-1} + Bw_{N-1}$, which is trivially resolved when $w_{N-1} = 0$ and $x_{N} = Ax_{N-1}$. We maintain this vestigial constraint only because it offers a concise problem statement.
This model performs well when $w_t$ and $v_t$ are Gaussian. However, the quadratic objective can be influenced by large outliers, which degrades the accuracy of the recovery. To improve estimation in the presence of outliers, we can use robust Kalman filtering.
Robust Kalman filtering
To handle outliers in $v_t$, robust Kalman filtering replaces the quadratic cost with a Huber cost, which results in the convex optimization problem
$$
\begin{array}{ll}
\mbox{minimize} & \sum_{t=0}^{N-1} \left( \|w_t\|_2^2 + \tau \phi_\rho(v_t) \right)\\
\mbox{subject to} & x_{t+1} = Ax_t + Bw_t,\quad t=0,\ldots, N-1\\
& y_t = Cx_t+v_t,\quad t=0,\ldots, N-1,
\end{array}
$$
where $\phi_\rho$ is the Huber function
$$
\phi_\rho(a)= \left\{ \begin{array}{ll} \|a\|_2^2 & \|a\|_2\leq \rho\\
2\rho \|a\|_2-\rho^2 & \|a\|_2>\rho.
\end{array}\right.
$$
The Huber penalty function penalizes estimation error linearly outside of a ball of radius $\rho$, whereas in standard Kalman filtering, all errors are penalized quadratically. Thus, large errors are penalized less harshly, making this model more robust to outliers.
Vehicle tracking example
We'll apply standard and robust Kalman filtering to a vehicle tracking problem with state $x_t \in \mathbf{R}^4$, where
$(x_{t,0}, x_{t,1})$ is the position of the vehicle in two dimensions, and $(x_{t,2}, x_{t,3})$ is the vehicle velocity.
The vehicle has unknown drive force $w_t$, and we observe noisy measurements of the vehicle's position, $y_t \in \mathbf{R}^2$.
The matrices for the dynamics are
$$
A = \begin{bmatrix}
1 & 0 & \left(1-\frac{\gamma}{2}\Delta t\right) \Delta t & 0 \\
0 & 1 & 0 & \left(1-\frac{\gamma}{2} \Delta t\right) \Delta t\\
0 & 0 & 1-\gamma \Delta t & 0 \\
0 & 0 & 0 & 1-\gamma \Delta t
\end{bmatrix},
$$
$$
B = \begin{bmatrix}
\frac{1}{2}\Delta t^2 & 0 \\
0 & \frac{1}{2}\Delta t^2 \\
\Delta t & 0 \\
0 & \Delta t \\
\end{bmatrix},
$$
$$
C = \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{bmatrix},
$$
where $\gamma$ is a velocity damping parameter.
1D Model
The recurrence is derived from the following relations in a single dimension. For this subsection, let $x_t, v_t, w_t$ be the vehicle position, velocity, and input drive force. The resulting acceleration of the vehicle is $w_t - \gamma v_t$, with $- \gamma v_t$ is a damping term depending on velocity with parameter $\gamma$.
The discretized dynamics are obtained from numerically integrating:
$$
\begin{align}
x_{t+1} &= x_t + \left(1-\frac{\gamma \Delta t}{2}\right)v_t \Delta t + \frac{1}{2}w_{t} \Delta t^2\\
v_{t+1} &= \left(1-\gamma\right)v_t + w_t \Delta t.
\end{align}
$$
Extending these relations to two dimensions gives us the dynamics matrices $A$ and $B$.
Helper Functions
End of explanation
n = 1000 # number of timesteps
T = 50 # time will vary from 0 to T with step delt
ts, delt = np.linspace(0,T,n,endpoint=True, retstep=True)
gamma = .05 # damping, 0 is no damping
A = np.zeros((4,4))
B = np.zeros((4,2))
C = np.zeros((2,4))
A[0,0] = 1
A[1,1] = 1
A[0,2] = (1-gamma*delt/2)*delt
A[1,3] = (1-gamma*delt/2)*delt
A[2,2] = 1 - gamma*delt
A[3,3] = 1 - gamma*delt
B[0,0] = delt**2/2
B[1,1] = delt**2/2
B[2,0] = delt
B[3,1] = delt
C[0,0] = 1
C[1,1] = 1
Explanation: Problem Data
We generate the data for the vehicle tracking problem. We'll have $N=1000$, $w_t$ a standard Gaussian, and $v_t$ a standard Guassian, except $20\%$ of the points will be outliers with $\sigma = 20$.
Below, we set the problem parameters and define the matrices $A$, $B$, and $C$.
End of explanation
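A quick sanity check (illustrative, not in the original): the velocity modes of the discretized dynamics decay with the damping, while the position modes are pure integrators with eigenvalue 1.
print("A =\n", A)
print("eigenvalues of A:", np.linalg.eigvals(A))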
sigma = 20
p = .20
np.random.seed(6)
x = np.zeros((4,n+1))
x[:,0] = [0,0,0,0]
y = np.zeros((2,n))
# generate random input and noise vectors
w = np.random.randn(2,n)
v = np.random.randn(2,n)
# add outliers to v
np.random.seed(0)
inds = np.random.rand(n) <= p
v[:,inds] = sigma*np.random.randn(2,n)[:,inds]
# simulate the system forward in time
for t in range(n):
y[:,t] = C.dot(x[:,t]) + v[:,t]
x[:,t+1] = A.dot(x[:,t]) + B.dot(w[:,t])
x_true = x.copy()
w_true = w.copy()
plot_state(ts,(x_true,w_true))
plot_positions([x_true,y], ['True', 'Observed'],[-4,14,-5,20],'rkf1.pdf')
Explanation: Simulation
We seed $x_0 = 0$ (starting at the origin with zero velocity) and simulate the system forward in time. The results are the true vehicle positions x_true (which we will use to judge our recovery) and the observed positions y.
We plot the position, velocity, and system input $w$ in both dimensions as a function of time.
We also plot the sets of true and observed vehicle positions.
End of explanation
%%time
import cvxpy as cp
x = cp.Variable(shape=(4, n+1))
w = cp.Variable(shape=(2, n))
v = cp.Variable(shape=(2, n))
tau = .08
obj = cp.sum_squares(w) + tau*cp.sum_squares(v)
obj = cp.Minimize(obj)
constr = []
for t in range(n):
constr += [ x[:,t+1] == A*x[:,t] + B*w[:,t] ,
y[:,t] == C*x[:,t] + v[:,t] ]
cp.Problem(obj, constr).solve(verbose=True)
x = np.array(x.value)
w = np.array(w.value)
plot_state(ts,(x_true,w_true),(x,w))
plot_positions([x_true,y], ['True', 'Noisy'], [-4,14,-5,20])
plot_positions([x_true,x], ['True', 'KF recovery'], [-4,14,-5,20], 'rkf2.pdf')
print("optimal objective value: {}".format(obj.value))
Explanation: Kalman filtering recovery
The code below solves the standard Kalman filtering problem using CVXPY. We plot and compare the true and recovered vehicle states. Note that the recovery is distorted by outliers in the measurements.
End of explanation
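To quantify the recovery (an illustrative metric, not in the original), compute the RMS position error of the standard Kalman estimate against the true trajectory:
kf_rmse = np.sqrt(np.mean(np.sum((x[:2, :] - x_true[:2, :])**2, axis=0)))
print("Standard KF position RMSE:", kf_rmse)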
%%time
import cvxpy as cp
x = cp.Variable(shape=(4, n+1))
w = cp.Variable(shape=(2, n))
v = cp.Variable(shape=(2, n))
tau = 2
rho = 2
obj = cp.sum_squares(w)
obj += cp.sum([tau*cp.huber(cp.norm(v[:,t]),rho) for t in range(n)])
obj = cp.Minimize(obj)
constr = []
for t in range(n):
constr += [ x[:,t+1] == A*x[:,t] + B*w[:,t] ,
y[:,t] == C*x[:,t] + v[:,t] ]
cp.Problem(obj, constr).solve(verbose=True)
x = np.array(x.value)
w = np.array(w.value)
plot_state(ts,(x_true,w_true),(x,w))
plot_positions([x_true,y], ['True', 'Noisy'], [-4,14,-5,20])
plot_positions([x_true,x], ['True', 'Robust KF recovery'], [-4,14,-5,20],'rkf3.pdf')
print("optimal objective value: {}".format(obj.value))
Explanation: Robust Kalman filtering recovery
Here we implement robust Kalman filtering with CVXPY. We get a better recovery than the standard Kalman filtering, which can be seen in the plots below.
End of explanation |
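The same illustrative metric for the robust recovery; consistent with the plots, the Huber-based estimate should track the true positions more closely than the standard Kalman filter above:
rkf_rmse = np.sqrt(np.mean(np.sum((x[:2, :] - x_true[:2, :])**2, axis=0)))
print("Robust KF position RMSE:", rkf_rmse)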
134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Energy Meter Examples
Linux Kernel HWMon
More details can be found at https
Step1: Import required modules
Step2: Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in examples/utils/testenv_example.ipynb.
Step3: Workload Execution and Power Consumptions Samping
Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.
Each EnergyMeter derived class has two main methods
Step4: Power Measurements Data | Python Code:
import logging
from conf import LisaLogging
LisaLogging.setup()
Explanation: Energy Meter Examples
Linux Kernel HWMon
More details can be found at https://github.com/ARM-software/lisa/wiki/Energy-Meters-Requirements#linux-hwmon.
End of explanation
# Generate plots inline
%matplotlib inline
import os
# Support to access the remote target
import devlib
from env import TestEnv
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
Explanation: Import required modules
End of explanation
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
# Folder where all the results will be collected
"results_dir" : "EnergyMeter_HWMON",
# Energy Meters Configuration for BayLibre's ACME Cape
"emeter" : {
"instrument" : "hwmon",
"conf" : {
# Prefixes of the HWMon labels
'sites' : ['a53', 'a57'],
# Type of hardware monitor to be used
'kinds' : ['energy']
},
'channel_map' : {
'LITTLE' : 'a53',
'big' : 'a57',
}
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
"rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
Explanation: Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in examples/utils/testenv_example.ipynb.
End of explanation
# Create and RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# EnergyMeter Start
te.emeter.reset()
rtapp.run(out_dir=te.res_dir)
# EnergyMeter Stop and samples collection
nrg_report = te.emeter.report(te.res_dir)
logging.info("Collected data:")
!tree $te.res_dir
Explanation: Workload Execution and Power Consumptions Samping
Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.
Each EnergyMeter derived class has two main methods: reset and report.
- The reset method will reset the energy meter and start sampling from channels specified in the target configuration. <br>
- The report method will stop capture and will retrieve the energy consumption data. This returns an EnergyReport composed of the measured channels energy and the report file. Each of the samples can also be obtained, as you can see below.
End of explanation
logging.info("Measured channels energy:")
logging.info("%s", nrg_report.channels)
logging.info("Generated energy file:")
logging.info(" %s", nrg_report.report_file)
!cat $nrg_report.report_file
Explanation: Power Measurements Data
End of explanation |
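As a small illustrative post-processing step (an assumption: the channels attribute behaves like a dict of per-channel energy readings, as the log output above suggests), the per-cluster values can be reported individually:
for channel, energy in nrg_report.channels.items():
    logging.info("Energy consumed on %s cluster: %s", channel, energy)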
135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = lambda x : 1/(1 + np.exp(-x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden,inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error
hidden_errors = np.dot(output_errors, self.weights_hidden_to_output).T # errors propagated to the hidden layer
hidden_grad = hidden_outputs * (1 - hidden_outputs) # hidden layer gradients
# TODO: Update the weights
self.weights_hidden_to_output += self.lr*np.dot(output_errors,hidden_outputs.T) # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * np.dot((hidden_grad*hidden_errors), inputs.T) # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden , inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output , hidden_outputs)# signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
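A quick numerical check (illustrative, not part of the project template) confirms the identity the backward pass relies on: the derivative of the sigmoid equals sigmoid(x) * (1 - sigmoid(x)).
sigmoid = lambda z: 1 / (1 + np.exp(-z))
zs = np.linspace(-3, 3, 7)
numeric_grad = (sigmoid(zs + 1e-5) - sigmoid(zs - 1e-5)) / 2e-5
print(np.allclose(numeric_grad, sigmoid(zs) * (1 - sigmoid(zs)), atol=1e-6))  # True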
import sys
### Set the hyperparameters here ###
epochs = 1000
learning_rate = 0.008
hidden_nodes = 10
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\nProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
# plt.ylim(ymax=0.5)
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods)
unittest.TextTestRunner().run(suite)
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation |
136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2D Fast Accurate Fourier Transform
with an extra GPU array for the 33rd complex values
Step1: Loading FFT routines
Step2: Initializing Data
Gaussian
Step3: $W$ TRANSFORM FROM AXES-0
After the transform, f_gpu[:32, :] contains real values and f_gpu[32:, :] contains imaginary values; f33_gpu contains the 33rd complex values.
Step4: Forward Transform
Step5: Central Section
Step6: Inverse Transform
Step7: $W$ TRANSFORM FROM AXES-1
After the transform, f_gpu[
Step8: Forward Transform
Step9: Inverse Transform | Python Code:
import numpy as np
import ctypes
from ctypes import *
import pycuda.gpuarray as gpuarray
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import math
%matplotlib inline
Explanation: 2D Fast Accurate Fourier Transform
with an extra GPU array for the 33rd complex values
End of explanation
gridDIM = 64
size = gridDIM*gridDIM
axes0 = 0
axes1 = 1
makeC2C = 0
makeR2C = 1
makeC2R = 1
axesSplit_0 = 0
axesSplit_1 = 1
m = size
segment_axes0 = 0
segment_axes1 = 0
DIR_BASE = "/home/robert/Documents/new1/FFT/mycode/"
# FAFT
_faft128_2D = ctypes.cdll.LoadLibrary( DIR_BASE+'FAFT128_2D_R2C.so' )
_faft128_2D.FAFT128_2D_R2C.restype = int
_faft128_2D.FAFT128_2D_R2C.argtypes = [ctypes.c_void_p, ctypes.c_void_p,
ctypes.c_float, ctypes.c_float, ctypes.c_int,
ctypes.c_int, ctypes.c_int, ctypes.c_int]
cuda_faft = _faft128_2D.FAFT128_2D_R2C
# Inv FAFT
_ifaft128_2D = ctypes.cdll.LoadLibrary( DIR_BASE+'IFAFT128_2D_C2R.so' )
_ifaft128_2D.IFAFT128_2D_C2R.restype = int
_ifaft128_2D.IFAFT128_2D_C2R.argtypes = [ctypes.c_void_p, ctypes.c_void_p,
ctypes.c_float, ctypes.c_float, ctypes.c_int,
ctypes.c_int, ctypes.c_int, ctypes.c_int]
cuda_ifaft = _ifaft128_2D.IFAFT128_2D_C2R
def fftGaussian(p,sigma):
return np.exp( - p**2*sigma**2/2. )
Explanation: Loading FFT routines
End of explanation
def Gaussian(x,mu,sigma):
return np.exp( - (x-mu)**2/sigma**2/2. )/(sigma*np.sqrt( 2*np.pi ))
def fftGaussian(p,mu,sigma):
return np.exp(-1j*mu*p)*np.exp( - p**2*sigma**2/2. )
# Gaussian parameters
mu_x = 1.5
sigma_x = 1.
mu_y = 1.5
sigma_y = 1.
# Grid parameters
x_amplitude = 7.
p_amplitude = 5. # With the traditional method p amplitude is fixed to: 2 * np.pi /( 2*x_amplitude )
dx = 2*x_amplitude/float(gridDIM) # This is dx in Bailey's paper
dp = 2*p_amplitude/float(gridDIM) # This is gamma in Bailey's paper
delta = dx*dp/(2*np.pi)
x_range = np.linspace( -x_amplitude, x_amplitude-dx, gridDIM)
p_range = np.linspace( -p_amplitude, p_amplitude-dp, gridDIM)
x = x_range[ np.newaxis, : ]
y = x_range[ :, np.newaxis ]
f = Gaussian(x,mu_x,sigma_x)*Gaussian(y,mu_y,sigma_y)
plt.imshow( f, extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] , origin='lower')
axis_font = {'size':'24'}
plt.text( 0., 7.1, '$W$' , **axis_font)
plt.colorbar()
#plt.ylim(0,0.44)
print ' Amplitude x = ',x_amplitude
print ' Amplitude p = ',p_amplitude
print ' '
print 'mu_x = ', mu_x
print 'mu_y = ', mu_y
print 'sigma_x = ', sigma_x
print 'sigma_y = ', sigma_y
print ' '
print 'n = ', x.size
print 'dx = ', dx
print 'dp = ', dp
print ' standard fft dp = ',2 * np.pi /( 2*x_amplitude ) , ' '
print ' '
print 'delta = ', delta
print ' '
print 'The Gaussian extends to the numerical error in single precision:'
print ' min = ', np.min(f)
Explanation: Initializing Data
Gaussian
End of explanation
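As a quick sanity check on the analytic transform used for comparison below, here is a minimal sketch (not part of the original pipeline) comparing the closed-form Gaussian transform against numpy's standard FFT on a centered Gaussian; the shift and normalization conventions are assumptions:
import numpy as np
n_chk, L_chk, sigma_chk = 64, 7.0, 1.0
dx_chk = 2 * L_chk / n_chk
x_chk = np.linspace(-L_chk, L_chk - dx_chk, n_chk)
g_chk = np.exp(-x_chk**2 / (2 * sigma_chk**2)) / (sigma_chk * np.sqrt(2 * np.pi))
numeric = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g_chk))) * dx_chk
p_chk = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n_chk, d=dx_chk))
analytic = np.exp(-p_chk**2 * sigma_chk**2 / 2)
print(np.max(np.abs(numeric - analytic)))  # very small, limited by truncation of the tails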
f33 = np.zeros( [1 ,64], dtype = np.complex64 )
# One gpu array.
f_gpu = gpuarray.to_gpu( np.ascontiguousarray( f , dtype = np.float32 ) )
f33_gpu = gpuarray.to_gpu( np.ascontiguousarray( f33 , dtype = np.complex64 ) )
Explanation: $W$ TRANSFORM FROM AXES-0
After the transform, f_gpu[:32, :] contains real values and f_gpu[32:, :] contains imaginary values. f33_gpu contains the 33rd complex values
End of explanation
# Executing FFT
cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeR2C, axesSplit_0 )
cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeC2C, axesSplit_0 )
plt.imshow(
f_gpu.get()
)
plt.plot( f33_gpu.get().real.reshape(64) )
def ReconstructFFT2D_axesSplit_0(f,f65):
n = f.shape[0]
freal_half = f_gpu.get()[:n/2,:]
freal = np.append( freal_half , f65.real.reshape(1,f65.size) , axis=0)
freal = np.append( freal , freal_half[:0:-1,:] ,axis=0)
fimag_half = f_gpu.get()[n/2:,:]
fimag = np.append( fimag_half , f65.imag.reshape(1,f65.size) ,axis=0)
fimag = np.append( fimag , -fimag_half[:0:-1,:] ,axis=0)
return freal + 1j*fimag
plt.imshow(
ReconstructFFT2D_axesSplit_0( f_gpu.get() , f33_gpu.get() ).real/float(size),
extent=[-p_amplitude , p_amplitude-dp, -p_amplitude , p_amplitude-dp] , origin='lower')
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -2, 6.2, '$Re \\mathcal{F}(W)$', **axis_font )
plt.xlim(-p_amplitude , p_amplitude-dp)
plt.ylim(-p_amplitude , p_amplitude-dp)
plt.xlabel('$p_x$',**axis_font)
plt.ylabel('$p_y$',**axis_font)
plt.imshow(
ReconstructFFT2D_axesSplit_0( f_gpu.get() , f33_gpu.get() ).imag/float(size),
extent=[-p_amplitude , p_amplitude-dp, -p_amplitude , p_amplitude-dp] , origin='lower')
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -2, 6.2, '$Imag\, \\mathcal{F}(W)$', **axis_font )
plt.xlim(-p_amplitude , p_amplitude-dp)
plt.ylim(-p_amplitude , p_amplitude-dp)
plt.xlabel('$p_x$',**axis_font)
plt.ylabel('$p_y$',**axis_font)
Explanation: Forward Transform
End of explanation
plt.figure(figsize=(10,10))
plt.plot( p_range,
ReconstructFFT2D_axesSplit_0( f_gpu.get() , f33_gpu.get() )[32,:].real/float(size),
'o-' , label='Real')
plt.plot( p_range,
ReconstructFFT2D_axesSplit_0( f_gpu.get() , f33_gpu.get() )[32,:].imag/float(size),
'ro-' , label='Imag')
plt.xlabel('$p_x$',**axis_font)
plt.plot( p_range , 4*fftGaussian(p_range,mu_x,sigma_x).real ,'bx');
plt.plot( p_range , 4*fftGaussian(p_range,mu_x,sigma_x).imag ,'rx');
plt.legend(loc='upper left')
Explanation: Central Section: $p_y =0$
End of explanation
# Executing iFFT
cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2C, axesSplit_0 )
cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2R, axesSplit_0 )
plt.imshow( f_gpu.get()/(float(size*size)) ,
extent=[-x_amplitude , x_amplitude-dx, -x_amplitude, x_amplitude-dx], origin='lower' )
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -1, 7.2, '$W$', **axis_font )
plt.xlim(-x_amplitude , x_amplitude-dx)
plt.ylim(-x_amplitude , x_amplitude-dx)
plt.xlabel('$x$',**axis_font)
plt.ylabel('$y$',**axis_font)
Explanation: Inverse Transform
End of explanation
f = Gaussian(x,mu_x,sigma_x)*Gaussian(y,mu_y,sigma_y)
plt.imshow( f, extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] , origin='lower')
f33 = np.zeros( [64, 1], dtype = np.complex64 )
# One gpu array.
f_gpu = gpuarray.to_gpu( np.ascontiguousarray( f , dtype = np.float32 ) )
f33_gpu = gpuarray.to_gpu( np.ascontiguousarray( f33 , dtype = np.complex64 ) )
Explanation: $W$ TRANSFORM FROM AXES-1
After the transform, f_gpu[:, :32] contains real values and f_gpu[:, 32:] contains imaginary values. f33_gpu contains the 33rd complex values
End of explanation
# Executing FFT
cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeR2C, axesSplit_1 )
cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeC2C, axesSplit_1 )
plt.imshow(
f_gpu.get()
)
plt.plot( f33_gpu.get().real.reshape(64) )
def ReconstructFFT2D_axesSplit_1(f,f65):
n = f.shape[0]
freal_half = f_gpu.get()[:,:n/2]
freal = np.append( freal_half , f65.real.reshape(f65.size,1) , axis=1)
freal = np.append( freal , freal_half[:,:0:-1] , axis=1)
fimag_half = f_gpu.get()[:,n/2:]
fimag = np.append( fimag_half , f65.imag.reshape(f65.size,1) ,axis=1)
fimag = np.append( fimag , -fimag_half[:,:0:-1] ,axis=1)
return freal + 1j*fimag
ReconstructFFT2D_axesSplit_1( f_gpu.get() , f33_gpu.get() ).shape
plt.imshow( ReconstructFFT2D_axesSplit_1( f_gpu.get() , f33_gpu.get() ).real/float(size),
extent=[-p_amplitude , p_amplitude-dp, -p_amplitude, p_amplitude-dp] )
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -3.0, 6.2, '$Re \\mathcal{F}(W)$', **axis_font )
plt.xlim(-p_amplitude , p_amplitude-dp)
plt.ylim(-p_amplitude , p_amplitude-dp)
plt.xlabel('$p_x$',**axis_font)
plt.ylabel('$p_y$',**axis_font)
plt.imshow( ReconstructFFT2D_axesSplit_1( f_gpu.get() , f33_gpu.get() ).imag/float(size),
extent=[-p_amplitude , p_amplitude-dp, -p_amplitude, p_amplitude-dp] )
plt.colorbar()
axis_font = {'size':'24'}
plt.text( -3.0, 6.2, '$Imag \\mathcal{F}(W)$', **axis_font )
plt.xlim(-p_amplitude , p_amplitude-dp)
plt.ylim(-p_amplitude , p_amplitude-dp)
plt.xlabel('$p_x$',**axis_font)
plt.ylabel('$p_y$',**axis_font)
plt.figure(figsize=(10,10))
plt.plot( p_range,
ReconstructFFT2D_axesSplit_1( f_gpu.get() , f33_gpu.get() )[32,:].real/float(size),
'o-' , label='Real')
plt.plot( p_range,
ReconstructFFT2D_axesSplit_1( f_gpu.get() , f33_gpu.get() )[32,:].imag/float(size),
'ro-' , label='Imag')
plt.xlabel('$p_x$',**axis_font)
plt.plot( p_range , 4*fftGaussian(p_range,mu_x,sigma_x).real ,'bx');
plt.plot( p_range , 4*fftGaussian(p_range,mu_x,sigma_x).imag ,'rx');
plt.legend(loc='upper left')
Explanation: Forward Transform
End of explanation
# Executing iFFT
cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2C, axesSplit_1 )
cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2R, axesSplit_1 )
plt.imshow( f_gpu.get()/float(size)**2 ,
extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] , origin='lower')
axis_font = {'size':'24'}
plt.text( 0., 7.1, '$W$' , **axis_font)
plt.colorbar()
Explanation: Inverse Transform
End of explanation |
137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of contents
Mathematical Background
The Null space and its orthogonal
Probabilistic interpretation of the curvature
Code documentation
A convenient basis
The Markov chain
Computation of the curvature
Appendix
Lemmas
Step3: A convenient basis
The file curv_basis.py contains the decomposition along the null space and a complementary subspace, according to the discussion above.
Step4: The first step is the function rotation, which returns the matrix which exchanges the basis given by $(|0\rangle,|1\rangle)$ with the basis given by $(2^{-\frac 12}(|0\rangle+|1\rangle),2^{-\frac 12}(|0\rangle+|1\rangle))$, on $(\mathbb{C}^2)^{\otimes N}$.
The second step is the function fullbasis, which computes the decomposition of the space given by Lemma 2 (with $a=|0\rangle$ and $b=|1\rangle$. Here, the non-trivial part is to give an orthonormal basis of the space $E$.
Indeed, if $v$ is a base vector, $\langle a\otimes v-v\otimes a|a\otimes v'-v'\otimes a\rangle$ is equal to $2$ if $v=v'$ (unless $v=aa.....a$), and otherwise is equal to $-1$ if $v\otimes a=a\otimes v'$ (which happens only if $v$ begins by an $a$ and $v'=ls(v)$) or if $v'\otimes a=a\otimes v$ (which happens only if $v$ ends by an $a$ and $v=ls(v')$). The exceptional case when both $v\otimes a=a\otimes v'$ and $v'\otimes a=a\otimes v$ means that $v=awa$ and $v'=aw'a$ with $v'=ls(v)$ so that $v=a...a$, a case which was excluded a priori.
Hence, the structure of the matrix given by the previous scalar product is formed of tridiagonal Toeplitz matrices, with $2$ on the diagonal and $-1$ on the over- and underdiagonal; these matrices can be explicitly diagonalized.
The natural basis of $F$ and $G$ given in Lemma 2 is orthogonal (as every word appears only once, either in $F$, or in $G$), and requires only a normalization.
Step9: Computation of the curvature
The file curvature.py encodes the Markov chain to compute the curvature.
Step10: We first have to generate the square norm of the coefficients $C$; here they are chosen as uniform independent random variables, shifted away from zero (as we want to ensure that all coefficients are non-zero)
Step11: Then we can compute the curvature (which, for the moment, is expressed in the canonical basis).
Step12: As a control tool, we printed the step at which the Markov chain converges, and the equilibrium measure.
The matrix G is indeed quite complicated at this step.
Step13: However, in the base given by the matrix $U$, the quadratic form is simpler.
Step14: As proved in Theorem 1, only the lower right quadrant of this matrix is non-zero; moreover this quadrant is positive.
Step15: This code has exponential complexity in $d$ (both in time and in memory). There is numerical evidence that the spectral gap of the Markov chain is uniform in $d$ (as long as the coefficients $C$ are uniformly away from zero and infinity), and the code works on a laptop up to $d=9$ or $10$. | Python Code:
import numpy as np, random, scipy.linalg
Explanation: Table of contents
Mathematical Background
The Null space and its orthogonal
Probabilistic interpretation of the curvature
Code documentation
A convenient basis
The Markov chain
Computation of the curvature
Appendix
Lemmas: the structure of zero modes
Mathematical Background
We investigate constant plaquettes of size d, with one plaquette per site. In other terms, we consider the map $$\begin{matrix}\mathbb{C}^{2^d}&\mapsto &(\mathbb{C}^2)^{\otimes N}\\(C^{\delta_1\ldots \delta_d}) &\mapsto & \sum_{\epsilon}\prod_{i=1}^{N}C^{\epsilon_i\cdots \epsilon_{i+d-1}}|\epsilon\rangle,\end{matrix}$$ which to $C$ associates $|\phi(C)\rangle$. This map is holomorphic.
We focus on the local geometry of this map. As we will see, it is never an immersion. The null tangential vectors are explicit, and we will also give a basis of their orthogonal space.
We also give a numerical scheme to compute, up to machine precision, the curvature, which is a symmetric 2-form on the holomorphic tangent space: $$G(\eta,\xi)=\overline{\partial}{\eta}\partial{\xi}\log(\langle\phi(C)|\phi(C)\rangle).$$ This form is positive, as the Levi form of a plurisubharmonic function, but it is definite positive only when restricted to the orthogonal of the null space above.
The Null space
In order to understand completely the geometry of the image set above, we will make two simplifying assumptions:
* N is a multiple of d(d-1);
* All coefficients $C^{\delta_1,\ldots,\delta_d}$ are non-zero.
The first hypothesis might seem strange, since we are mainly interested in the large $N$ case. In fact, when $N$ is a multiple of $d(d-1)$ the proofs are much simpler, and when it is not the case (or when we consider plaquettes on a chain, without periodicity), one must add terms to $G$, of order $N^{-1}$.
The second hypothesis is of course true in a generic configuration; it is also true along a (real) one-dimensional family of configurations. As we are interested in time evolution, one can argue that this hypothesis will remain valid along trajectories.
As all coefficients are non-zero, we make use of the projective properties of the map $\phi$ to consider a basis for variations of parameters which is more convenient than the natural one. If $\alpha \in (\mathbb{C}^2)^{\otimes d}$, we let $$\alpha \cdot |\phi(C)\rangle=\sum_{\delta}\alpha_{\delta}C^{\delta}\partial_{C^{\delta}}|\phi(C)\rangle.$$
This is motivated by the fact that $$C^{\delta}\partial_{C^{\delta}}|\phi(C)\rangle=\sum_{i=1}^N\sum_{\large{\epsilon\in A_i^{\delta}}}\prod_{j=1}^NC^{\epsilon_j\ldots \epsilon_{j+d-1}}|\epsilon\rangle.$$ Here, $$A_i^{\delta}={\epsilon\,|\,\epsilon_i=\delta_1,\ldots,\epsilon_{i+d-1}=\delta_d}.$$
The main result is
Theorem 1 $\alpha\cdot |\phi(C)\rangle=0$ if and only if $\alpha=(|0\rangle+|1\rangle)\otimes v-v\otimes(|0\rangle+|1\rangle)$ for some $v\in (\mathbb{C}^2)^{\otimes d-1}$.
Proof of Theorem 1 One inclusion proceeds by direct computation. Indeed, if $v=\sum v_{\delta}|\delta\rangle$, then
$$(|0\rangle+|1\rangle)\otimes v \cdot |\phi(C)\rangle = \sum_{\delta}\sum_{i=1}^Nv_{\delta}\sum_{\large{\epsilon\in A_i^{0\delta}\cup A_i^{1\delta}}}\prod_{j=1}^NC^{\epsilon_j\cdots \epsilon_{j+d-1}}|\epsilon\rangle,$$ and $$A_i^{0\delta}\cup A_i^{1\delta}=A_{i+1}^{\delta0}\cup A_{i+1}^{\delta_1},$$ so that $$(|0\rangle+|1\rangle)\otimes v-v\otimes(|0\rangle+|1\rangle) \cdot |\phi(C)\rangle=0.$$
The other inclusion is less direct and makes full use of the hypotheses above.
Since all coefficients are non-zero, then $\alpha\cdot |\phi(C)\rangle=0$ if and only if, for every $\epsilon$, one has $\sum_{i=1}^N\alpha_{\epsilon_i\cdots \epsilon_{i+d-1}}=0.$
In particular, let $\delta\in \{0,1\}^d$ arbitrary, and let $\epsilon=\delta \delta \delta...$ be a concatenation of size $N$. (We assumed that $N$ is a multiple of $d$). Then the equation above yields $\sum_{k=0}^{d-1}\alpha_{ls^k(\delta)}=0$, with $ls$ the operator of left shift.
In particular, $$\sum_{k=0}^{d-1}ls^k(\alpha)=0.$$
Using Lemma 1 in the Appendix, it follows that $\alpha=v-ls(v)$ for some $v$.
We will now prove that, for every word $w\in \{a,b\}^{d-2}$, one has $$\alpha_{0w0}+\alpha_{1w1}-\alpha_{0w1}-\alpha_{1w0}=0.$$ Using Lemma 3, with $a=\frac{|0\rangle+|1\rangle}{\sqrt{2}}$ and $b=\frac{|0\rangle-|1\rangle}{\sqrt{2}}$, this will conclude the proof.
Recall that each $\epsilon \in \{0,1\}^N$ yields an equation on $\alpha$. Subtracting the equation given by $\epsilon=...0w0w1w0w0...$ (recall N is a multiple of $d-1$) from the one given by $\epsilon=...0w0w0...$ yields $$\alpha_{0w1}+\ldots+\alpha_{1w0}=\alpha_{0w0}.$$
Now, subtracting the equation given by $\epsilon=...0w0w1w1w0w0...$ from $\epsilon=...0w0w0...$ yields $$\alpha_{0w1}+\ldots+\alpha_{1w0}-\alpha_{1w0}+\alpha_{1w1}-\alpha_{0w1}+\alpha_{0w1}+\ldots+\alpha_{1w0}=\alpha_{0w0}.$$
Hence,
$$\alpha_{1w1}+\alpha_{0w0}=\alpha_{0w1}-\alpha_{1w0}.$$
This concludes the proof.
Probabilistic interpretation of the curvature
The expression of the derivative of $|\phi(C)\rangle$ along a base vector was formulated as a sum with the same terms as the initial sum, but restricted on some configuration set $A^{\delta}_i$. These sums are ubiquitous in the computation of the curvature, which will allow a probabilistic reformulation. Since we are dealing with one-dimensional loops, the probabilistic model will be a Markov chain (restricted to identical final and initial states). If all coefficients are non-zero, this Markov chain is mixing, and this will allow a precise numerical treatment.
We want to compute $$G(\alpha,\beta)=\alpha\overline{\cdot}\beta\cdot \log(\langle \phi(C)|\phi(C)\rangle).$$ Here, $\overline{\cdot}$ stands for anti-holomorphic derivation. This is exactly $$G(\alpha,\beta)=\cfrac{\langle \alpha\cdot \phi(C)|\beta\cdot \phi(C)\rangle}{\langle \phi(C)|\phi(C)\rangle} - \cfrac{\langle \alpha\cdot \phi(C)|\phi(C)\rangle \langle \phi(C)|\beta\cdot \phi(C)\rangle}{(\langle \phi(C)|\phi(C)\rangle)^2}.$$
Now $$\langle \phi(C)|\phi(C)\rangle=\sum_{\epsilon}\prod_{j=1}^N|C^{\epsilon_j\ldots\epsilon_{j+d-1}}|^2.$$ Moreover, if $\alpha=C^{\delta}\partial_{C^{\delta}}$ and $\beta=C^{\delta'}\partial_{C^{\delta'}}$, then
$$\langle \alpha\cdot \phi(C)|\phi(C)\rangle=\sum_{i=1}^N\sum_{\large{\epsilon\in A^{\delta}_i}}\prod_j|C^{\epsilon_j\ldots \epsilon_{j+d-1}}|^2.$$ Finally,
$$\langle \alpha\cdot \phi(C)|\beta\cdot \phi(C)\rangle=\sum_{i_1=1}^N\sum_{i_2=1}^N\sum_{\large{\epsilon\in A^{\delta}_{i_1}\cap A^{\delta'}_{i_2}}}\prod_j|C^{\epsilon_j\ldots \epsilon_{j+d-1}}|^2.$$
Those expressions are quite clumsy. Let $(\{0,1\}^N,\mathcal{P}(\{0,1\}^N),\mathbb{P}_C)$ be a probabilized space whose elementary events are words of length $N$ on $\{0,1\}$, with $$\mathbb{P}_C(\epsilon)=(\prod_{j=1}^N|C^{\epsilon_j\cdots \epsilon_{j+d-1}}|^2)\langle \phi(C)|\phi(C)\rangle^{-1}.$$
Then the matrix elements of $G$ are simply $$G_{\delta,\delta'}=\sum_i\sum_j\mathbb{P}_C(A^{\delta}_i\cap A^{\delta'}_j)-\mathbb{P}_C(A^{\delta}_i)\mathbb{P}_C(A^{\delta}_j).$$
To evaluate those probabilities, we will reformulate this probabilized space with a Markov chain.
Consider the Markov chain, with $\{0,1\}^d$ as set of states, with transition matrix:
$$T(\delta_1\ldots \delta_d,\delta_2\ldots\delta_{d+1})=\cfrac{|C^{\delta_2\ldots\delta_{d+1}}|^2}{|C^{\delta_2\ldots \delta_d 0}|^2+|C^{\delta_2\ldots \delta_d 1}|^2}.$$
Consider realisations of this Markov chain, of length $N+1$, conditioned to having the same final and initial state. Since the $d-1$ last digits at one time coincide with the $d-1$ first digits at the following time, this realisation is a sequence of $0$ and $1$, of length $N+d$ (where the state, at step $j$, consists of the digits $j$ to $j+d-1$). Moreover, the $(N+1)$-th digit corresponds to the first one, and so on. Then the probability of this sequence $\epsilon$ is exactly $\mathbb{P}_C(\epsilon)$, as long as the initial state is chosen according to the law of $A_1$.
The matrix $T$ verifies the hypotheses of the Perron-Frobenius theorem: since all coefficients are non-zero, the matrix $T^d$ has only positive entries. In particular $T$ has 1 as only eigenvalue of modulus one (all other eigenvalues have strictly smaller modulus), and this eigenvalue is simple. From there we are able to identify the law of $A_i$. Indeed, on one hand the original probability space is rotation invariant so that $A_i$ and $A_j$ have the same law for any $i$ and $j$; on the other hand the law of $A_{i+1}$ (which is a $2^d$-dimensional vector) is $T$ times the law of $A_i$, plus an error of size $e^{-cN}$. Hence, if $\mu_C$ is the equilibrium measure for $T$, then $$\mathbb{P}_C(A^{\delta}_i)=\mu_C(\delta)+O(e^{-cN}).$$
Along the same lines, if $j$ is closer to $i$ from the left than from the right (meaning $j$ is between $i$ and $i+N/2$ on $\mathbb{Z}/N\mathbb{Z}$), then $$\mathbb{P}_C(A^{\delta'}_j|A^{\delta}_i)=T^{j-i}[\delta',\delta]+O(e^{-cN}).$$ In particular, $$\mathbb{P}_C(A^{\delta'}_j|A^{\delta}_i)=\mathbb{P}_C(A^{\delta'}_j)+O(e^{-c|i-j|}).$$ The intuition of this estimate is that, if $i$ and $j$ are far apart (more than the reduced spectral radius of $T$ allows for), then $A_i$ and $A_j$ are independent up to an exponentially small error.
This offers a numerical scheme to compute $G$ up to machine precision. First, we compute and store the sequence of positive powers of $T$, until they converge to a matrix of rank one (at step $k_{M}$). Each line of this matrix is the invariant measure $\mu_C$. Then $$N^{-1}G_{\delta,\delta'}=\sum_{j=0}^{k_M}\mu_C(\delta)(T^j_{\delta',\delta}-\mu_C(\delta'))+\sum_{j=1}^{k_M}\mu_C(\delta')(T^j_{\delta,\delta'}-\mu_C(\delta'))+ O(\text{machine precision})+O(Ne^{-N}).$$
Code documentation
First and foremost, we need to import standard numerical libraries, as well as scipy.linalg to show the spectrum of the curvature form
End of explanation
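Before the library code, a small brute-force check of Theorem 1 for tiny d and N may help; this sketch is an editorial illustration built only from the definitions above (the names and the bit encoding of $\epsilon$ are assumptions, not part of the original code):
import numpy as np
import itertools

d_chk, N_chk = 2, 4                              # small test sizes
C_chk = 1.0 + np.random.rand(2 ** d_chk)         # non-zero coefficients |C^delta|

def window(eps, i):
    # integer code of epsilon_i ... epsilon_{i+d-1}, read cyclically, MSB first
    return sum(eps[(i + k) % N_chk] << (d_chk - 1 - k) for k in range(d_chk))

def alpha_dot_phi(alpha):
    # coefficients of alpha . |phi(C)> in the canonical basis |epsilon>
    out = {}
    for eps in itertools.product((0, 1), repeat=N_chk):
        amp = np.prod([C_chk[window(eps, i)] for i in range(N_chk)])
        weight = sum(alpha[window(eps, i)] for i in range(N_chk))
        out[eps] = weight * amp
    return out

v_chk = np.random.randn(2 ** (d_chk - 1))        # arbitrary v in (C^2)^(d-1)
plus = np.array([1.0, 1.0])                      # |0> + |1>
alpha_chk = np.kron(plus, v_chk) - np.kron(v_chk, plus)   # the null direction of Theorem 1
print(max(abs(val) for val in alpha_dot_phi(alpha_chk).values()))   # ~ 0 up to rounding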
# %load ../curv_basis.py
# a tentative basis
def lshift(c,d):
return (2*c)%(2**d)+c/(2**(d-1))
def rtldigits(c,d):
    """Returns the reversed digits of c in base 2, of length d."""
ccopy=c
dig=[]
for j in range(d):
dig.append(ccopy%2)
ccopy=ccopy/2
return np.array(dig)
def rotation(d=3):
    """Rotates from the base along z, to the base along x."""
U=np.zeros((2**d,2**d))
for c in range(2**d):
for k in range(2**d):
U[c,k]=(-1)**(sum((1-rtldigits(c,d))*(1-rtldigits(k,d))))/2**(0.5*d)
return U
def fullbasis(d=3):
U=np.zeros((2**d,2**d))
line=0
#First we look at the zero vectors
#they look like 1c-c1 for some c
#but we have to give an orthogonal basis !
for j in range(d-2):
for c in range(2**j):
#we consider the vectors from 1...10c0 to 0c01.....1 of size d-1
#here they form a Toeplitz matrix of rank d-2-j,
#which can easily be diagonalized
for k in range(d-2-j):
for l in range(d-2-j):
vect=2**(d-1)-2**(l+j+2)+c*2**(l+1)+2**l-1
val=np.sin(np.pi*(k+1)*(l+1)/(d-1-j))
U[line+k,2**(d-1)+vect]+=val
U[line+k,2*vect+1]-=val
U[line+k]/=np.sqrt(np.dot(U[line+k],U[line+k]))
line+=d-2-j
#then we only forgot the vectors 1.....10 to 01......1
for k in range(d-1):
for l in range(d-1):
vect=2**(d-1)-1-2**l
val=np.sin(np.pi*(k+1)*(l+1)/d)
U[line+k,2**(d-1)+vect]+=val
U[line+k,2*vect+1]-=val
U[line+k]/=np.sqrt(np.dot(U[line+k],U[line+k]))
#The vector 1...1 is itself orthogonal
line+=d-1
U[line,2**d-1]=1.
line+=1
#There are several types of orthogonal (nonzero) vectors:
#All 0c0 vectors
for c in range(2**(d-2)):
U[line+c,2*c]=1
line+=2**(d-2)
#All 10c0+0c01 vectors
for c in range(2**(d-3)):
U[line+c,2*c+2**(d-1)]=1./np.sqrt(2)
U[line+c,4*c+1]=1./np.sqrt(2)
line += 2**(d-3)
#Some weird vectors
for c in range(2**(d-3)):
current=2*c+2**(d-1)+2**(d-2)
shiftlist=[current]
while current>=2**(d-1):
current=lshift(current,d)
shiftlist.append(current)
for shifted in shiftlist:
U[line+c,shifted]=1./np.sqrt(len(shiftlist))
return U
Explanation: A convenient basis
The file curv_basis.py contains the decomposition along the null space and a complementary subspace, according to the discussion above.
End of explanation
#We check that the change of basis U is indeed orthogonal
U=np.dot(fullbasis(4),rotation(4))
np.sum(np.abs(np.dot(U,U.T)-np.eye(2**4)))
Explanation: The first step is the function rotation, which returns the matrix which exchanges the basis given by $(|0\rangle,|1\rangle)$ with the basis given by $(2^{-\frac 12}(|0\rangle+|1\rangle),2^{-\frac 12}(|0\rangle-|1\rangle))$, on $(\mathbb{C}^2)^{\otimes N}$.
The second step is the function fullbasis, which computes the decomposition of the space given by Lemma 2 (with $a=|0\rangle$ and $b=|1\rangle$). Here, the non-trivial part is to give an orthonormal basis of the space $E$.
Indeed, if $v$ is a base vector, $\langle a\otimes v-v\otimes a|a\otimes v'-v'\otimes a\rangle$ is equal to $2$ if $v=v'$ (unless $v=aa.....a$), and otherwise is equal to $-1$ if $v\otimes a=a\otimes v'$ (which happens only if $v$ begins by an $a$ and $v'=ls(v)$) or if $v'\otimes a=a\otimes v$ (which happens only if $v$ ends by an $a$ and $v=ls(v')$). The exceptional case when both $v\otimes a=a\otimes v'$ and $v'\otimes a=a\otimes v$ means that $v=awa$ and $v'=aw'a$ with $v'=ls(v)$ so that $v=a...a$, a case which was excluded a priori.
Hence, the structure of the matrix given by the previous scalar product is formed of tridiagonal Toeplitz matrices, with $2$ on the diagonal and $-1$ on the over- and underdiagonal; these matrices can be explicitly diagonalized.
The natural basis of $F$ and $G$ given in Lemma 2 is orthogonal (as every word appears only once, either in $F$, or in $G$), and requires only a normalization.
End of explanation
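A quick numerical illustration of the last point (editorial, not in the original notebook): the tridiagonal Toeplitz blocks with 2 on the diagonal and -1 off the diagonal are indeed diagonalized by the sine vectors that fullbasis uses:
import numpy as np
m = 5                                              # size of one Toeplitz block
A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
l = np.arange(1, m + 1)
for k in range(1, m + 1):
    vec = np.sin(np.pi * k * l / (m + 1))          # k-th sine vector
    lam = 2 - 2 * np.cos(np.pi * k / (m + 1))      # corresponding eigenvalue
    assert np.allclose(A @ vec, lam * vec)
print("sine vectors diagonalize the Toeplitz block")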
# %load ../curvature.py
import numpy as np
def generate_C(d,m=1,M=2):
    """Generates a bunch of randomly chosen coefficients, far from zero."""
C=[]
for k in range(2**d):
C.append(random.random()*(M-m)+m)
return C
def sparse_mult(mat,C):
    """Returns an efficient multiplication of mat by the transfer matrix.
    Runs in quadratic time."""
prod=np.zeros((len(C),len(C)),dtype=float)
for i in range(len(C)):
for j in range(len(C)):
prod[i,j]=mat[i,(2*j)%len(C)]*C[(2*j)%len(C)]
prod[i,j]+=mat[i,(2*j+1)%len(C)]*C[(2*j+1)%len(C)]
prod[i,j]/=C[(2*j)%len(C)]+C[(2*j+1)%len(C)]
return prod
def Markov_powers(C,ERROR=1.0e-14):
    """Computes the powers of the Markov chain
    for constant plaquettes."""
powerseq=[np.eye(len(C))]
converged=False
while not converged:
current=powerseq[-1]
new=sparse_mult(current,C)
powerseq.append(new)
dist=0.
for k in range(len(C)/2):
dist+=abs(new[k,0]-new[k,1])
if dist<ERROR:
converged=True
#print dist
print "Markov chain convergence after",len(powerseq),"steps"
return powerseq
def curvature(C,ERROR=1.0e-14):
    """Computes, via a Markov chain, the curvature tensor
    for constant plaquettes up to some prescribed error."""
#compute the sequence (M**i) until convergence
powerseq=Markov_powers(C,ERROR)
#the lines of the last matrix are the equilibium measure
eq_m=powerseq[-1].T[1]
print "Equilibrium measure: ", eq_m
G=np.zeros((len(C),len(C)),dtype=float)
M=powerseq[1]
for i in range(len(C)):
for j in range(len(C)):
for k in range(len(powerseq)):
current=powerseq[k]
G[i,j]+=eq_m[j]*(current[i,j]-eq_m[i])
if k !=0:
G[i,j]+=eq_m[i]*(current[j,i]-eq_m[j])
return G
Explanation: Computation of the curvature
The file curvature.py encodes the Markov chain to compute the curvature.
End of explanation
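As a cross-check of the sparse_mult encoding (an editorial sketch, not part of curvature.py), the transition matrix T of the formula above can be built explicitly for small d; the row/column convention below is chosen to match sparse_mult, i.e. columns index the current state:
import numpy as np

def transition_matrix(C2, d):
    # T[target, state] = |C^target|^2 / (|C^{tail 0}|^2 + |C^{tail 1}|^2)
    n = 2 ** d
    T = np.zeros((n, n))
    for state in range(n):
        tail = (2 * state) % n                # drop delta_1, shift left
        for new_bit in (0, 1):
            T[tail + new_bit, state] = C2[tail + new_bit] / (C2[tail] + C2[tail + 1])
    return T

C_test = generate_C(3)                        # reuse the generator defined above
T_test = transition_matrix(np.array(C_test), 3)
print(np.allclose(T_test, sparse_mult(np.eye(8), C_test)))   # True: same matrix
print(np.allclose(T_test.sum(axis=0), 1.0))                  # columns sum to one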
C=generate_C(4)
Explanation: We first have to generate the square norm of the coefficients $C$; here they are chosen as uniform independent random variables, shifted away from zero (as we want to ensure that all coefficients are non-zero)
End of explanation
G=curvature(C)
Explanation: Then we can compute the curvature (which, for the moment, is expressed in the canonical basis).
End of explanation
G
Explanation: As a control tool, we printed the step at which the Markov chain converges, and the equilibrium measure.
The matrix G is indeed quite complicated at this step.
End of explanation
np.dot(U,np.dot(G,U.T))
Explanation: However, in the base given by the matrix $U$, the quadratic form is simpler.
End of explanation
Gred=np.dot(U,np.dot(G,U.T))[8:15].T[8:15]
scipy.linalg.eigvalsh(Gred)
Explanation: As proved in Theorem 1, only the lower right quadrant of this matrix is non-zero; moreover this quadrant is positive.
End of explanation
import matplotlib.pyplot as plt
C=generate_C(9)
G=curvature(C)
U=np.dot(fullbasis(9),rotation(9))
Gred=np.dot(U,np.dot(G,U.T))[2**8:2**9-1].T[2**8:2**9-1]
vals=scipy.linalg.eigvalsh(Gred)
plt.plot(np.log(vals),'ro')
plt.show()
Explanation: This code has exponential complexity in $d$ (both in time and in memory). There is numerical evidence that the spectral gap of the Markov chain is uniform in $d$ (as long as the coefficients $C$ are uniformly away from zero and infinity), and the code works on a laptop up to $d=9$ or $10$.
End of explanation |
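An editorial follow-up on the spectral-gap remark (not in the original notebook), reusing the functions defined above to look at the gap for a few values of d:
import numpy as np
for d_test in range(2, 7):
    C_test = generate_C(d_test)
    T_test = sparse_mult(np.eye(2 ** d_test), C_test)   # one application of the chain
    moduli = np.sort(np.abs(np.linalg.eigvals(T_test)))
    print(d_test, 1.0 - moduli[-2])                     # gap below the Perron eigenvalue 1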
138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to the jupyter notebook! To run any cell, press Shift+Enter or Ctrl+Enter.
IMPORTANT
Step1: Notebook Basics
A cell contains any type of python inputs (expression, function definitions, etc...). Running a cell is equivalent to input this block in the python interpreter. The notebook will print the output of the last executed line.
Step2: Numpy Basics
IMPORTANT
Step3: Creation of arrays
Creating ndarrays (np.zeros, np.ones) is done by giving the shape as an iterable (List or Tuple). An integer is also accepted for one-dimensional array.
np.eye creates an identity matrix.
You can also create an array by giving iterables to it.
(NB
Step4: ndarray basics
A ndarray python object is just a reference to the data location and its characteristics.
All numpy operations applying on an array can be called np.function(a) or a.function() (i.e np.sum(a) or a.sum())
It has an attribute shape that returns a tuple of the different dimensions of the ndarray. It also has an attribute dtype that describes the type of data of the object (default type is float64)
WARNING because of the object structure, unless you call copy() copying the reference is not copying the data.
Step5: Basic operators are working element-wise (+, -, *, /)
When trying to apply operators for arrays with different sizes, they are very specific rules that you might want to understand in the future
Step6: Accessing elements and slicing
For people uncomfortable with the slicing of arrays, please have a look at the 'Indexing and Slicing' section of http
Step7: Changing the shape of arrays
ravel creates a flattened view of an array (1-D representation) whereas flatten creates flattened copy of the array.
reshape allows in-place modification of the shape of the data. transpose shuffles the dimensions.
np.newaxis allows the creation of empty dimensions.
Step8: Reduction operations
Reduction operations (np.sum, np.max, np.min, np.std) work on the flattened ndarray by default. You can specify the reduction axis as an argument
Step9: Linear-algebra operations
Step10: Grouping operations
Grouping operations (np.stack, np.hstack, np.vstack, np.concatenate) take an iterable of ndarrays and not ndarrays as separate arguments
Step11: Working on subset of the elements
We have two ways in order to apply operations on subparts of arrays (besides slicing).
Slicing reminders
Step12: Binary masks
Using logical operations on arrays give a binary mask. Using a binary mask as indexing acts as a filter and outputs just the very elements where the value is True. This gives a memoryview of the array that can get modified.
Step13: Working with indices
The second way to work on subpart of arrays are through indices. Usually you'd use one array per dimension with matching indices.
WARNING
Step14: Working with arrays, examples
Thanks to all these tools, you should be able to avoid writing almost any for-loops which are extremely costly in Python (even more than in Matlab, because good JIT engines are yet to come). In case you really need for-loops for array computation (usually not needed but it happens) have a look at http
Step15: Compute polynomial for a lot of values
Step16: SciPy
SciPy is a collection of libraries more specialized than Numpy. It is the equivalent of toolboxes in Matlab.
Have a look at their collection | Python Code:
# Useful starting lines
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
%load_ext autoreload
%autoreload 2
Explanation: Welcome to the jupyter notebook! To run any cell, press Shift+Enter or Ctrl+Enter.
IMPORTANT : Please have a look at Help->User Interface Tour and Help->Keyboard Shortcuts in the toolbar above that will help you get started.
End of explanation
1
x = [2,3,4]
def my_function(l):
l.append(12)
my_function(x)
x
# Matplotlib is used for plotting, plots are directly embedded in the
# notebook thanks to the '%matplotlib inline' command at the beginning
plt.hist(np.random.randn(10000), bins=40)
plt.xlabel('X label')
plt.ylabel('Y label')
Explanation: Notebook Basics
A cell contains any type of python inputs (expression, function definitions, etc...). Running a cell is equivalent to entering this block in the python interpreter. The notebook will print the output of the last executed line.
End of explanation
np.multiply
Explanation: Numpy Basics
IMPORTANT : the numpy documentation is quite good. The Notebook system is really good to help you. Use the Auto-Completion with Tab, and use Shift+Tab to get the complete documentation about the current function (when the cursor is between the parenthesis of the function for instance).
For example, you want to multiply two arrays. np.mul + Tab complete to the only valid function np.multiply. Then using Shift+Tab you learn np.multiply is actually the element-wise multiplication and is equivalent to the * operator.
End of explanation
np.zeros(4)
np.eye(3)
np.array([[1,3,4],[2,5,6]])
np.arange(10) # NB : np.array(range(10)) is a slightly more complicated equivalent
np.random.randn(3, 4) # normal distributed values
# 3-D tensor
tensor_3 = np.ones((2, 4, 2))
tensor_3
Explanation: Creation of arrays
Creating ndarrays (np.zeros, np.ones) is done by giving the shape as an iterable (List or Tuple). An integer is also accepted for one-dimensional array.
np.eye creates an identity matrix.
You can also create an array by giving iterables to it.
(NB : The random functions np.random.rand and np.random.randn are exceptions though)
End of explanation
tensor_3.shape, tensor_3.dtype
a = np.array([[1.0, 2.0], [5.0, 4.0]])
b = np.array([[4, 3], [2, 1]])
(b.dtype, a.dtype) # each array has a data type (casting rules apply for int -> float)
np.array(["Mickey", "Mouse"]) # can hold more than just numbers
a = np.array([[1.0, 2.0], [5.0, 4.0]])
b = a # Copying the reference only
b[0,0] = 3
a
a = np.array([[1.0, 2.0], [5.0, 4.0]])
b = a.copy() # Deep-copy of the data
b[0,0] = 3
a
Explanation: ndarray basics
A ndarray python object is just a reference to the data location and its characteristics.
All numpy operations applying to an array can be called as np.function(a) or a.function() (e.g. np.sum(a) or a.sum())
It has an attribute shape that returns a tuple of the different dimensions of the ndarray. It also has an attribute dtype that describes the type of data of the object (default type is float64)
WARNING because of the object structure, unless you call copy() copying the reference is not copying the data.
End of explanation
np.ones((2, 4)) * np.random.randn(2, 4)
np.eye(3) - np.ones((3,3))
print(a)
print(a.shape) # Get shape
print(a.shape[0]) # Get size of first dimension
Explanation: Basic operators are working element-wise (+, -, *, /)
When trying to apply operators to arrays with different sizes, there are very specific broadcasting rules that you might want to understand in the future: http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html
End of explanation
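A minimal illustration of those broadcasting rules (editorial aside, not from the original notebook):
import numpy as np
col = np.arange(3).reshape(3, 1)       # shape (3, 1)
row = np.arange(4)                     # shape (4,)
print((col + row).shape)               # (3, 4): size-1 dimensions are stretched to match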
print(a[0]) # Get first line (slice for the first dimension)
print(a[:, 1]) # Get second column (slice for the second dimension)
print(a[0, 1]) # Get first line second column element
Explanation: Accessing elements and slicing
For people uncomfortable with the slicing of arrays, please have a look at the 'Indexing and Slicing' section of http://www.python-course.eu/numpy.php
End of explanation
a = np.array([[1.0, 2.0], [5.0, 4.0]])
b = np.array([[4, 3], [2, 1]])
v = np.array([0.5, 2.0])
print(a)
print(a.T) # Equivalent : a.transpose(), np.transpose(a)
print(a.ravel())
c = np.random.randn(4,5)
print(c.shape)
print(c[np.newaxis].shape) # Adding a dimension
print(c.T.shape)
print(c.reshape([10,2]).shape)
print(c)
print(c.reshape([10,2]))
a.reshape((-1, 1)) # a[-1] means 'whatever needs to go there'
Explanation: Changing the shape of arrays
ravel creates a flattened view of an array (1-D representation) whereas flatten creates flattened copy of the array.
reshape allows in-place modification of the shape of the data. transpose shuffles the dimensions.
np.newaxis allows the creation of empty dimensions.
End of explanation
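The view-versus-copy distinction mentioned above can be seen directly (editorial aside, not from the original notebook):
import numpy as np
m1 = np.arange(4).reshape(2, 2)
m1.ravel()[0] = 99                  # ravel returns a view when possible
print(m1[0, 0])                     # 99: the original array was modified
m2 = np.arange(4).reshape(2, 2)
m2.flatten()[0] = 99                # flatten always returns a copy
print(m2[0, 0])                     # 0: the original array is untouched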
np.sum(a), np.sum(a, axis=0), np.sum(a, axis=1) # reduce-operations reduce the whole array if no axis is specified
Explanation: Reduction operations
Reduction operations (np.sum, np.max, np.min, np.std) work on the flattened ndarray by default. You can specify the reduction axis as an argument
End of explanation
np.dot(a, b) # matrix multiplication
# Other ways of writing matrix multiplication, the '@' operator for matrix multiplication
# was introduced in Python 3.5
np.allclose(a.dot(b), a @ b)
# For other linear algebra operations, use the np.linalg module
np.linalg.eig(a) # Eigen-decomposition
print(np.linalg.inv(a)) # Inverse
np.allclose(np.linalg.inv(a) @ a, np.identity(a.shape[1])) # a^-1 * a = Id
np.linalg.solve(a, v) # solves ax = v
Explanation: Linear-algebra operations
End of explanation
np.hstack([a, b])
np.vstack([a, b])
np.vstack([a, b]) + v # broadcasting
np.hstack([a, b]) + v # does not work
np.hstack([a, b]) + v.T # transposing a 1-D array achieves nothing
np.hstack([a, b]) + v.reshape((-1, 1)) # reshaping to convert v from a (2,) vector to a (2,1) matrix
np.hstack([a, b]) + v[:, np.newaxis] # equivalently, we can add an axis
Explanation: Grouping operations
Grouping operations (np.stack, np.hstack, np.vstack, np.concatenate) take an iterable of ndarrays and not ndarrays as separate arguments : np.concatenate([a,b]) and not np.concatenate(a,b).
End of explanation
r = np.random.random_integers(0, 9, size=(3, 4))
r
r[0], r[1]
r[0:2]
r[1][2] # regular python
r[1, 2] # numpy
r[:, 1:3]
Explanation: Working on subset of the elements
We have two ways to apply operations on subparts of arrays (besides slicing).
Slicing reminders
End of explanation
r > 5 # Binary element-wise result
r[r > 5] # Use the binary mask as filter
r[r > 5] = 999 # Modify the corresponding values with a constant
r
Explanation: Binary masks
Using logical operations on arrays gives a binary mask (a boolean array). Using a binary mask as an index acts as a filter and outputs just the elements where the mask is True; assigning through the mask, as below, modifies the corresponding entries of the original array in place.
End of explanation
# Get the indices where the condition is true, gives a tuple whose length
# is the number of dimensions of the input array
np.where(r == 999)
print(np.where(np.arange(10) < 5)) # Is a 1-tuple
np.where(np.arange(10) < 5)[0] # Accessing the first element gives the indices array
np.where(r == 999, -10, r+1000) # Ternary condition, if True take element from first array, otherwise from second
r[(np.array([1,2]), np.array([2,2]))] # Gets the view corresponding to the indices. NB : iterable of arrays as indexing
Explanation: Working with indices
The second way to work on subparts of arrays is through indices. Usually you'd use one array per dimension with matching indices.
WARNING : indices are usually slower than binary masks because they are harder to parallelize for the underlying BLAS library.
End of explanation
numbers = np.random.randn(1000, 1000)
%%timeit # Naive version
my_sum = 0
for n in numbers.ravel():
if n>0:
my_sum += 1
%timeit np.sum(numbers > 0)
Explanation: Working with arrays, examples
Thanks to all these tools, you should be able to avoid writing almost any for-loops which are extremely costly in Python (even more than in Matlab, because good JIT engines are yet to come). In case you really need for-loops for array computation (usually not needed but it happens) have a look at http://numba.pydata.org/ (For advanced users)
Counting the number of positive elements that satisfy a condition
End of explanation
X = np.random.randn(10000)
%%timeit # Naive version
my_result = np.zeros(len(X))
for i, x in enumerate(X.ravel()):
my_result[i] = 1 + x + x**2 + x**3 + x**4
%timeit 1 + X + X**2 + X**3 + X**4
Explanation: Compute polynomial for a lot of values
End of explanation
X = np.random.randn(1000)
from scipy.fftpack import fft
plt.plot(fft(X).real)
Explanation: SciPy
SciPy is a collection of libraries more specialized than Numpy. It is the equivalent of toolboxes in Matlab.
Have a look at their collection: http://docs.scipy.org/doc/scipy/reference/
Many traditional functions are coded there.
End of explanation |
139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Blood Donations
Step1: Clean Data
Are there any missing values?
Step2: Visualize Data
Table
Step3: Insights from Summary stats table
Step4: Plot data as a scatter plot (w/r 'Made Donations in March 2007')
In order to visually inspect whether the given data is linearly separable
- want to create scatter plots of the data (like those in Abu-Mostafa, et al., 2012)
2-dim Scatterplot | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
data_dir = '../data/raw/'
data_filename = 'blood_train.csv'
df_blood = pd.read_csv(data_dir+data_filename)
df_blood.head()
Explanation: Predicting Blood Donations: Initial Data Exploration
To do:
- Import data
- Clean data
- Visualize data
Import Data
Functions used:
- pandas.read_csv
- [pandas df].head()
End of explanation
# FILL IN TEST
# FILL IN ACTION
Explanation: Clean Data
Are there any missing values?
End of explanation
df_blood.iloc[:, 1:].describe()
Explanation: Visualize Data
Table: Summary Statistics
To get a feel for the data as a whole.
Functions Used:
- [pandas df].iloc()
- [pandas df].describe()
End of explanation
plot_scatter = pd.scatter_matrix(df_blood.iloc[:, 1:],
figsize=(20,20))
Explanation: Insights from Summary stats table:
| Variable | Value | Interpretation |
|----: |:----: |:---- |
| Number of data points N | 576 | Not too big of a dataset |
| Average number of donations in March, 2007 | 0.2396 | Whether blood was donated in March was low in general |
| Max Months since 1st Donation | 98 | Earliest donation was 98 months (~8 years) ago |
| Average number of donations | 5.427 | People in dataset donate an average of ~5.5 times |
Plot: Scatter Matrix of all of the variables + histograms
Note:
- Number of donations & Total Volume Donated are perfectly correlated
- thus can probably drop one of the variables
- More likely to NOT have donated in March 2007 (from Made Donation histogram)
End of explanation
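A quick way to confirm the perfect correlation noted above and drop the redundant column (editorial sketch; the exact column name 'Total Volume Donated (c.c.)' is an assumption about the raw file's schema):
print(df_blood[['Number of Donations', 'Total Volume Donated (c.c.)']].corr())
df_blood_reduced = df_blood.drop('Total Volume Donated (c.c.)', axis=1)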
import seaborn as sns
# sns.set_context("notebook", font_scale=1.1)
# sns.set_style("ticks")
sns.set_context("notebook", font_scale=1.5, rc={'figure.figsize': [11, 8]})
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
g = sns.lmplot(data=df_blood,
x='Number of Donations',
y='Months since First Donation',
hue='Made Donation in March 2007',
fit_reg=False,
palette='RdYlBu',
aspect=3/1,
scatter_kws={"marker": "D",
"s": 50})
Explanation: Plot data as a scatter plot (w/r 'Made Donations in March 2007')
In order to visually inspect whether the given data is linearly separable
- want to create scatter plots of the data (like those in Abu-Mostafa, et al., 2012)
2-dim Scatterplot: Number of Donations + Months since First Donation ~ Made Donation in March 2007
With 2-dimensions/factors (Number of Donations & Months since First Donation), can we linearly separate whether a donation was made in March, 2007?
End of explanation |
140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 4
Ryan Rose
Scientific Computing
9/21/2016
Step1: Loading Fifty Books
First, we load all fifty books from their text files.
Step2: Cleaning up the Data
Next, we create a mapping of titles to their book's text along with removing the Project Gutenberg header and footer.
Step3: Next, we iterate through all of the words, strip all characters that are not upper or lower-case letters. If the the resulting word is considered, non-empty, we throw it out. Else, we add the word in all lowercase stripped of all non-ASCII letters to our list of words for that book.
This is useful to determine word frequencies.
Step4: Determining Frequencies
Now, we determine the frequencies for each word and putting them in a dictionary for each book.
Step5: Top 20 Words
Now, let's determine the top 20 words across the whole corpus
Step6: Creating the 20-dimensional vectors
Using the top 20 words above, let's determine the book vectors.
Step7: Creating the Elbow Graph
Let's try each k and see what makes the sharpest elbow.
Step8: We can see that the best k is 3 or 6.
Clustering
Let's cluster based on k = 3 and plot the clusters.
Step9: Do the clusters make sense?
Yes. For instance, we can see that The Republic and The Iliad of Homer are in the same cluster.
Performing PCA
Now, let's perform PCA and determine the most important elements and plot the clusters.
Step10: We can see the data clusters well and the most important words are i and the based on them having the standard deviation. This is based on the concept of PCA.fracs aligning to the variance based on this documentation
Step11: Now, let's do k-means based on the previously determined k.
Step12: We can see that the new book is the black square above. In addition, it makes sense it fits into that cluster especially when we compare it to Jane Eyre.
Stop Words
Step13: We can see that our k could be 3 or 7. Let's choose 7. | Python Code:
## Imports!
%matplotlib inline
import os
import re
import string
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib.mlab import PCA
from scipy.cluster.vq import kmeans, vq
Explanation: Lab 4
Ryan Rose
Scientific Computing
9/21/2016
End of explanation
os.chdir("/home/ryan/School/scientific_computing/labs/lab4/books")
filenames = os.listdir()
books = []
for name in filenames:
with open(name) as f:
books.append(f.read())
Explanation: Loading Fifty Books
First, we load all fifty books from their text files.
End of explanation
def get_title(text):
pattern = "\*\*\*\s*START OF (THIS|THE) PROJECT GUTENBERG EBOOK ([A-Z,;' ]*)\*\*\*"
m = re.search(pattern, text)
if m:
return m.group(2).strip()
return None
def remove_gutenberg_info(text):
pattern = "\*\*\*\s*START OF (THIS|THE) PROJECT GUTENBERG EBOOK ([A-Z,;' ]*)\*\*\*"
start = re.search(pattern, text).end()
pattern = "\*\*\*\s*END OF (THIS|THE) PROJECT GUTENBERG EBOOK ([A-Z,;' ]*)\*\*\*"
end = re.search(pattern, text).start()
return text[start:end]
cut_off_books = { get_title(book):remove_gutenberg_info(book) for book in books}
pd.DataFrame(cut_off_books, index=["Book's Text"]).T.head()
Explanation: Cleaning up the Data
Next, we create a mapping of titles to their book's text along with removing the Project Gutenberg header and footer.
End of explanation
def strip_word(word, alphabet):
ret = ""
for c in word:
if c in alphabet:
ret += c.lower()
if len(ret) == 0:
return None
else:
return ret
def get_words(book):
alphabet = set(string.ascii_letters)
b = book.split()
words = []
for word in b:
w = strip_word(word, alphabet)
if w:
words.append(w)
return words
cut_books = {name:get_words(book) for name, book in cut_off_books.items()}
Explanation: Next, we iterate through all of the words and strip all characters that are not upper or lower-case letters. If the resulting word is empty, we throw it out. Else, we add the word in all lowercase, stripped of all non-ASCII letters, to our list of words for that book.
This is useful to determine word frequencies.
End of explanation
def get_word_freq(words):
word_counts = {}
for word in words:
if word in word_counts:
word_counts[word] += 1
else:
word_counts[word] = 1
return word_counts
book_freqs = {}
for name, words in cut_books.items():
book_freqs[name] = get_word_freq(words)
Explanation: Determining Frequencies
Now, we determine the frequencies for each word and putting them in a dictionary for each book.
End of explanation
total_word_count = {}
for dicts in book_freqs.values():
for word, count in dicts.items():
if word in total_word_count:
total_word_count[word] += count
else:
total_word_count[word] = count
a, b = zip(*total_word_count.items())
tuples = list(zip(b, a))
tuples.sort()
tuples.reverse()
tuples[:20]
_, top_20_words = zip(*tuples[:20])
top_20_words
Explanation: Top 20 Words
Now, let's determine the top 20 words across the whole corpus
End of explanation
def filter_frequencies(frequencies, words):
d = {}
for word, freq in frequencies.items():
if word in words:
d[word] = freq
return d
labels = {}
for name, freqs in book_freqs.items():
labels[name] = filter_frequencies(freqs, top_20_words)
df = pd.DataFrame(labels).fillna(0)
df = (df / df.sum()).T
df.head()
Explanation: Creating the 20-dimensional vectors
Using the top 20 words above, let's determine the book vectors.
End of explanation
kvals = []
dists = []
for k in range(2, 11):
centroids, distortion = kmeans(df, k)
kvals.append(k)
dists.append(distortion)
plt.plot(kvals, dists)
plt.show()
Explanation: Creating the Elbow Graph
Let's try each k and see what makes the sharpest elbow.
End of explanation
centroids, _ = kmeans(df, 3)
idx, _ = vq(df, centroids)
clusters = {}
for i, cluster in enumerate(idx):
if cluster in clusters:
clusters[cluster].append(df.iloc[i].name)
else:
clusters[cluster] = [df.iloc[i].name]
clusters
Explanation: We can see that the best k is 3 or 6.
Clustering
Let's cluster based on k = 3 and plot the clusters.
End of explanation
m = PCA(df)
fig, ax = plt.subplots()
for i in range(len(idx)):
plt.plot(m.Y[idx==i, 0], m.Y[idx==i, 1], "o", alpha=.75)
for index, (x, y) in enumerate(zip(m.Y[:, 0], m.Y[:, 1])):
plt.text(x, y, df.index[index])
fig.set_size_inches(36,40)
plt.show()
m.sigma.sort_values()[-2:]
Explanation: Do the clusters make sense?
Yes. For instance, we can see that The Republic and The Iliad of Homer are in the same cluster.
Performing PCA
Now, let's perform PCA and determine the most important elements and plot the clusters.
End of explanation
with open("../pg45.txt") as f:
anne = f.read()
get_title(anne)
anne_cut = remove_gutenberg_info(anne)
anne_words = get_words(anne_cut)
anne_freq = {get_title(anne):filter_frequencies(get_word_freq(anne_words), top_20_words)}
anne_frame = pd.DataFrame(anne_freq).fillna(0)
anne_frame = (anne_frame / anne_frame.sum()).T
anne_frame
Explanation: We can see the data clusters well and the most important words are i and the based on them having the standard deviation. This is based on the concept of PCA.fracs aligning to the variance based on this documentation: https://www.clear.rice.edu/comp130/12spring/pca/pca_docs.shtml. And, since PCA.sigma is the square root of the variance, the highest standard deviation should correspond to the highest value for the PCA.fracs. Then, i and the are the most important words
New Book
So, we continue as before by loading Anne of Green Gables, parsing it, creating an array, and normalizing the book vector.
End of explanation
df_with_anne = df.append(anne_frame).sort_index()
centroids, _ = kmeans(df_with_anne, 3)
idx2, _ = vq(df_with_anne, centroids)
clusters = {}
for i, cluster in enumerate(idx2):
if cluster in clusters:
clusters[cluster].append(df_with_anne.iloc[i].name)
else:
clusters[cluster] = [df_with_anne.iloc[i].name]
clusters
coords = m.project(np.array(anne_frame).flatten())
fig, _ = plt.subplots()
plt.plot(coords[0], coords[1], "s", markeredgewidth=5)
for i in range(len(idx)):
plt.plot(m.Y[idx==i, 0], m.Y[idx==i, 1], "o", alpha=.75)
for index, (x, y) in enumerate(zip(m.Y[:, 0], m.Y[:, 1])):
plt.text(x, y, df.index[index])
fig.set_size_inches(36,40)
plt.show()
Explanation: Now, let's do k-means based on the previously determined k.
End of explanation
stop_words_text = open("../common-english-words.txt").read()
stop_words = stop_words_text.split(",")
stop_words[:5]
word_counts_without_stop = [t for t in tuples if t[1] not in stop_words]
word_counts_without_stop[:20]
_, top_20_without_stop = zip(*word_counts_without_stop[:20])
top_20_without_stop
no_stop_labels = {}
for name, freqs in book_freqs.items():
no_stop_labels[name] = filter_frequencies(freqs, top_20_without_stop)
df_without_stop = pd.DataFrame(no_stop_labels).fillna(0)
df_without_stop = (df_without_stop / df_without_stop.sum()).T
df_without_stop.head()
kvals = []
dists = []
for k in range(2, 11):
centroids, distortion = kmeans(df_without_stop, k)
kvals.append(k)
dists.append(distortion)
plt.plot(kvals, dists)
plt.show()
Explanation: We can see that the new book is the black square above. In addition, it makes sense it fits into that cluster especially when we compare it to Jane Eyre.
Stop Words
End of explanation
centroids, _ = kmeans(df_without_stop, 7)
idx3, _ = vq(df, centroids)
clusters = {}
for i, cluster in enumerate(idx3):
if cluster in clusters:
clusters[cluster].append(df_without_stop.iloc[i].name)
else:
clusters[cluster] = [df_without_stop.iloc[i].name]
clusters
m2 = PCA(df_without_stop)
fig, _ = plt.subplots()
for i in range(len(idx3)):
plt.plot(m2.Y[idx3==i, 0], m2.Y[idx3==i, 1], "o", alpha=.75)
for index, (x, y) in enumerate(zip(m2.Y[:, 0], m2.Y[:, 1])):
plt.text(x, y, df_without_stop.index[index])
fig.set_size_inches(36,40)
plt.show()
m2.sigma.sort_values()[-2:]
Explanation: We can see that our k could be 3 or 7. Let's choose 7.
End of explanation |
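As an editorial aside (not part of the original lab), silhouette scores give another view on the choice of k; this sketch assumes scikit-learn is available and reuses df_without_stop from above:
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(df_without_stop)
    print(k, silhouette_score(df_without_stop, labels))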
141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'mri-esm2-0', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: MRI
Source ID: MRI-ESM2-0
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:19
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
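A minimal illustration of how a free-text (STRING) property such as this overview is filled in, using the DOC.set_value call already named in the cell's own comment. The wording below is a hypothetical placeholder, not MRI-ESM2-0's documented overview.
# EXAMPLE ONLY - hypothetical placeholder text, replace with the model's actual overview
DOC.set_value("Global ocean general circulation component of the coupled model; see the model reference papers for details.")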
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
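As a sketch of how a single-valued ENUM property is answered, the call below picks one entry from the valid choices listed in the cell above; whether "OGCM" is the correct answer for this configuration is for the document authors to confirm.
# EXAMPLE ONLY - the value must be one of the valid choices listed above
DOC.set_value("OGCM")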
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
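For ENUM properties with cardinality 1.N, the sketch below assumes (as in other ES-DOC notebooks) that DOC.set_value is simply called once per selected value; the two choices shown are illustrative, not a statement about this model's actual approximations.
# EXAMPLE ONLY - assumes repeated set_value calls accumulate values for 1.N properties
DOC.set_value("Primitive equations")
DOC.set_value("Boussinesq")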
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
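Numeric (FLOAT) properties such as the specific heat above and this reference density take a plain number. The value below is a typical textbook Boussinesq reference density (about 1035 kg/m3), shown only to indicate the expected units and magnitude; the model's actual constant should be read from its source code, and the specific-heat cell is filled the same way (a value near 3990 J/(kg K) is typical for seawater).
# EXAMPLE ONLY - typical seawater value, not necessarily this model's constant
DOC.set_value(1035.0)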
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
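This is an optional (0.N) STRING property, so it can be left unset; if it is answered, the assumed pattern is one DOC.set_value call per language, and the value below is purely illustrative.
# EXAMPLE ONLY - optional property; omit the call entirely if not documented
DOC.set_value("Fortran")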
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
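A small worked example for this INTEGER property: for a hypothetical 360 x 232 horizontal grid the total is 360 * 232 = 83520 points. The dimensions are invented for illustration; the real count comes from the model's grid definition.
# EXAMPLE ONLY - hypothetical 360 x 232 grid, not this model's actual dimensions
nx, ny = 360, 232
DOC.set_value(nx * ny)  # 83520 horizontal grid points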
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
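Boolean properties take Python True/False rather than quoted strings; a fixed-grid OGCM would typically answer False here, but the value below is only an illustration to be confirmed by the authors.
# EXAMPLE ONLY - boolean properties take True/False, not strings
DOC.set_value(False)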
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and comment on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
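Time steps are entered in seconds, so a half-hour tracer step would be written as 30 * 60 = 1800; the figure is a hypothetical illustration, not the model's configured step.
# EXAMPLE ONLY - hypothetical 30-minute tracer time step expressed in seconds
DOC.set_value(30 * 60)  # 1800 s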
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 4
Imports
Step2: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from random import randint
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 4
Imports
End of explanation
def random_line(m, b, sigma, size=10):
"""Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
"""
x = np.linspace(-1.0, 1.0, size)
if sigma > 0:
N = np.random.normal(0, sigma, size)
else:
N = 0
y = m*x + b + N
return(x, y)
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
End of explanation
def ticks_out(ax):
"""Move the ticks to the outside of the box."""
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
"""Plot a random line with slope m, intercept b and size points."""
plt.xlim(-1.1,1.1)
plt.ylim(-10.0, 10.0)
plt.xlabel("X")
plt.ylabel("Y")
plt.title("y = mx + b + N (0, $\sigma$ ** 2)", fontsize=16)
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
x, y = random_line(m,b,sigma,size=10)
plt.scatter(x,y,color=color)
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
End of explanation
interact(plot_random_line, m=(-10.0,10.0,0.1), b=(-5.0,5.0,0.1), sigma=(0.0,5.0,0.01), size=(10,100,10), color={"red":'red', "green": 'green', "blue":'blue'});
#### assert True # use this cell to grade the plot_random_line interact
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation |
143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading MDAnalysis
Step1: The Universe
The basic objects for loading structures and trajectories into MDAnalysis are Universes. A Universe takes a topology and a trajectory (optional) as arguments
Step2: The Universe extracts all the information from the topology and makes it available to you. For example, you can see how many atoms are in the topology
Step3: Or the number of individual residues
Step4: We can also leverage numpy to see how many unique residue types there are. Residue names are stored in resnames.
Step5: Select parts of a Topology
Mostly we don't want to work on a complete topology but only analyze a specific part (a selection) of it. For this we can create selections. If we want to select all backbone atoms, for example
Step6: To check that only backbone atoms are selected we can look if we only have C, CA, N and O atoms
Step7: Or we want to only select the C$_{\alpha}$-atoms.
Step8: The complete syntax and the possible selections can be looked up here
NOTE
It is very important to select C$_{\alpha}$-atoms with the string protein and name CA because Calcium ions also have the name CA.
Load a trajectory
To load a simulation we can give the Universe object also a trajectory file as an argument
Step9: Now we can check how many frames the trajectory has
Step10: Iterating over a trajectory works pythonically. | Python Code:
import numpy as np   # needed below for np.unique
import MDAnalysis as mda
mda.__version__
Explanation: Loading MDAnalysis
End of explanation
TOPOLOGY = 'data/adk.psf'
u = mda.Universe(TOPOLOGY)
Explanation: The Universe
The basic objects for loading structures and trajectories into MDAnalysis are Universes. A Universe takes a topology and a trajectory (optional) as arguments
End of explanation
u.atoms.n_atoms
Explanation: The Universe extracts all the information from the topology and makes it available to you. For example, you can see how many atoms are in the topology
End of explanation
u.residues.n_residues
Explanation: Or the number of individual residues
End of explanation
np.unique(u.residues.resnames)
Explanation: We can also leverage numpy to see how many unique residue types there are. Residue names are stored in resnames.
End of explanation
bb = u.atoms.select_atoms('backbone')
Explanation: Select parts of a Topology
Mostly we don't want to work on a complete topology but only analyze a specific part (a selection) of it. For this we can create selections. If we want to select all backbone atoms, for example
End of explanation
print('number of backbone atoms : {}'.format(bb.n_atoms))
print('unique atoms types : {}'.format(np.unique(bb.names)))
Explanation: To check that only backbone atoms are selected we can look if we only have C, CA, N and O atoms
End of explanation
ca = u.atoms.select_atoms('protein and name CA')
print('number of Ca atoms : {}'.format(ca.n_atoms))
print('unique atoms types : {}'.format(np.unique(ca.names)))
Explanation: Or we want to only select the C$_{\alpha}$-atoms.
End of explanation
TRAJECTORY = 'data/adk_dims.dcd'
u = mda.Universe(TOPOLOGY, TRAJECTORY)
Explanation: The complete syntax and the possible selections can be looked up here
NOTE
It is very important to select C$_{\alpha}$-atoms with the string protein and name CA because Calcium ions also have the name CA.
Load a trajectory
To load a simulation we can give the Universe object also a trajectory file as an argument
End of explanation
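As a small sketch of the note above (purely illustrative; in a protein-only system such as this ADK example the two selections return the same atoms, but they can differ when Ca2+ ions are present):
only_name_ca = u.atoms.select_atoms('name CA')            # could also match calcium ions in other systems
protein_ca = u.atoms.select_atoms('protein and name CA')  # alpha-carbons only
print('name CA : {}'.format(only_name_ca.n_atoms))
print('protein and name CA : {}'.format(protein_ca.n_atoms))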
u.trajectory.n_frames
Explanation: Now we can check how many frames the trajectory has
End of explanation
for time_step in u.trajectory:
print(time_step.time)
Explanation: Iterating over a trajectory works pythonically.
End of explanation |
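A typical use of this loop is a per-frame analysis. As a small sketch (reusing the ca selection from above and the AtomGroup.radius_of_gyration() method):
rgyr = []
for time_step in u.trajectory:
    rgyr.append((time_step.time, ca.radius_of_gyration()))
print(rgyr[:3])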
144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hosted Settings
Step1: Part 1
Step2: Question 3.1. Some stufff
<img src="https | Python Code:
from client.api.notebook import Notebook
ok = Notebook('ipy.ok')
ok.auth(force=True)
Explanation: Hosted Settings:
In a hosted environment such as jupyterhub:
- run pip install okpy>=1.8.2 --upgrade
Local Setting
For Local Dev:
run jupyter notebook from the root ok-client folder
Running from a distributed ok file
import zipimport
ok_bundle = zipimport.zipimporter('./ok').load_module('client')
ok = ok_bundle.api.notebook.Notebook('ipy.ok')
End of explanation
all_cities = [1, 2]
_ = ok.grade('q01')
_ = ok.submit()
Explanation: Part 1: Maps
The districts and zips data sets are Map objects. Documentation on mapping in the datascience package can be found at data8.org/datascience/maps.html. To view a map of California's water districts, run the cell below. Click on a district to see its description.
Question 2.1. Assign the name income_by_zipcode to a table with just one row per ZIP code. When you group according to ZIP code, the remaining columns should be summed. In other words, for any other column such as 'N02650', the value of 'N02650' in a row corresponding to ZIP code 90210 (for example) should be the sum of the values of 'N02650' in the 6 rows of income_raw corresponding to ZIP code 90210.
End of explanation
_ = ok.grade('q02')
_ = ok.backup()
ok.submit()
Explanation: Question 3.1. Some stuff
<img src="https://i.imgur.com/jicA2to.png"/>
End of explanation |
145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of using pygosa
We illustrate hereafter the use of the pygosa module.
Step1: We define the Sobol use case, which is very common in sensitivity analysis
Step2: Design of experiment
We define the experiment design
Step3: The benefit of using a crude Monte Carlo approach is the potential use of several contrasts.
In this demonstration example, the contrasts used are
Step4: Quantile sensitivities
Hereafter we apply the quantile contrast to the previous design in order to get the sensitivities for quantile levels $\alpha=(5\%, 25\%, 50\%, 75\%, 95\%)$
Step5: Probability sensitivities
Hereafter we apply the probability contrast to the previous design in order to get the sensitivities for thresholds $t=(-2.50, 0, 2.50, 7.0, 7.85)$ | Python Code:
import openturns as ot
import numpy as np
import pygosa
%pylab inline
Explanation: Example of using pygosa
We illustrate hereafter the use of the pygosa module.
End of explanation
model = ot.SymbolicFunction(["x1","x2","x3"], ["sin(x1) + 7*sin(x2)^2 + 0.1*(x3^4)*sin(x1)"])
dist = ot.ComposedDistribution( 3 * [ot.Uniform(-np.pi, np.pi)] )
Explanation: We define the Sobol use case, which is very common in sensitivity analysis:
End of explanation
mcsp = pygosa.SensitivityDesign(dist=dist, model=model, size=1000)
Explanation: Design of experiment
We define the experiment design:
End of explanation
sam = pygosa.MeanSensitivities(mcsp)
factors_m = sam.compute_factors()
fig, ax = sam.boxplot()
figure = pygosa.plot_mean_sensitivities(sam,set_labels=True)
Explanation: The benefit of using a crude Monte Carlo approach is the potential use of several contrasts.
In this demonstration example, the contrasts used are:
Mean contrast to derive the mean sensitivities
Quantile contrast to derive sensitivities for some specific quantile levels
Probability contrast to derive sensitivities for some specific threshold values
Mean contrast & sensitivities
Hereafter we apply the mean contrast to the previous design in order to get the sensitivities:
End of explanation
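As a quick sketch (nothing beyond what was already computed above), the factors returned by compute_factors() can be printed to inspect the raw values behind the plot:
# raw mean-contrast sensitivity factors computed above
print(factors_m)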
saq = pygosa.QuantileSensitivities(mcsp)
factors_q = [saq.compute_factors(alpha=q) for q in [0.05, 0.25, 0.50, 0.75, 0.95]]
fig, ax = saq.boxplot()
figure = pygosa.plot_quantiles_sensitivities(saq,set_labels=True)
Explanation: Quantile sensitivities
Hereafter we apply the quantile contrast to the previous design in order to get the sensitivities for quantile levels $\alpha=(5\%, 25\%, 50\%, 75\%, 95\%)$:
End of explanation
sap = pygosa.ProbabilitySensitivities(mcsp)
factors_p = [sap.compute_factors(threshold=v) for v in [-2.5, 0, 2.5, 7.0, 7.85]]
fig, ax = sap.boxplot(threshold=7.85)
figure = pygosa.plot_probability_sensitivities(sap, set_labels=True)
Explanation: Probability sensitivities
Hereafter we apply the probability contrast to the previous design in order to get the sensitivities for thresholds $t=(-2.50, 0, 2.50, 7.0, 7.85)$:
End of explanation |
146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<pre>
A function's return statement returns only a single object.
When a tuple is returned, it can be unpacked into several variables at once, so it looks as if return were handing back multiple values at the same time,
but what is actually returned is one object: a tuple.
</pre>
Step1: When a function is created, the function itself is also an object, and the variable swap holds a reference to that object.
Step2: <pre>
A namespace is the space where the names used in a program are stored.
Here, a name means a variable that references an object.
A separate namespace is created inside a function. That namespace is called the local scope, the space outside the function is the global scope,
and the area for everything Python itself defines is called the built-in scope.
When a name is used inside a function and it is not found in that function's scope, the outer scopes are searched.
The search order is Local -> global -> built-in.
This is called the scoping rule.
</pre>
Step3: <pre>
What global does is not create or copy a new local variable;
it simply makes the global variable's reference available in the local namespace so that the value in the global scope can be referenced (and rebound) from inside the function.
</pre>
Step5: <pre>
As shown below, a lambda can be spread over several lines, but this is an anti-pattern.
</pre>
Step6: <pre>
Sequence types carry an iterator object as an attribute.
</pre> | Python Code:
def swap(a,b):
return b,a # a tuple is returned
a = 1
b = 2
a,b = swap(a,b)
print(a,b)
print(swap)
Explanation: <pre>
A function's return statement returns only a single object.
When a tuple is returned, it can be unpacked into several variables at once, so it looks as if return were handing back multiple values at the same time,
but what is actually returned is one object: a tuple.
</pre>
End of explanation
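A quick sketch confirming the point above: the "multiple" return values are really a single tuple object.
result = swap(1, 2)
print(type(result))   # <class 'tuple'>
print(result)         # (2, 1)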
def intersect(prelist, postlist):
ret_list = []
for x in prelist:
if x in postlist and x not in ret_list:
ret_list.append(x)
return ret_list
list1 = "HELLO"
list2 = "CLS"
print(intersect(list1, list2))
a = 10 # namespace: global
b = 20 # namespace: global
def sum(x,y):
return x+y
print(sum(a,b)) # x and y come to reference the same objects as the global names a and b
def sum2(x,y):
x = 1 # creates the object 1 and rebinds x to it; this x lives in the sum2 namespace
return x+y
x = 10 # 10 is immutable: there is no operation that changes the object 10 itself
print(sum2(x,b)) # x and y initially reference x=10 and b, but inside the sum2 namespace x is rebound to the new object 1
print(x)
def change(x):
print(x[0])
print(type(x[0]))
print(id(x))
print(id(x[0]))
print(id(x[1]))
print(id(x[2]))
x[0] = 'H'
w_list = ["S", "A", "M"] # 리스트는 뮤터블한 객체이다 함수 안에서 내용을 변경 할 수 있다. 뮤터블이기 때문에 그렇다.
change(w_list)
print(w_list)
def change2(x):
x = x[:] # work on a clone, so the outer list stays unchanged
x[0] = "H"
w_list = ["S","A","M"]
print(w_list)
change2(w_list)
print(w_list)
Explanation: When a function is created, the function itself is also an object, and the variable swap holds a reference to that object.
End of explanation
a = [1,2,3]
def scoping():
a = [4,5,6]
print(a)
scoping()
print(a)
x = 1
def func(a):
return a + x # x is not defined in the local namespace, so it is found in an outer scope; from here it can only be read, not rebound
func(1)
def func2(a):
x = 2
return a+x # x is found in the local namespace first, so that one is used
func2(1)
def func3(a):
try:
x = x + 1
except UnboundLocalError:
print("에러 발생함")
else:
return a + x
func3(1)
Explanation: <pre>
A namespace is the space where the names used in a program are stored.
Here, a name means a variable that references an object.
A separate namespace is created inside a function. That namespace is called the local scope, the space outside the function is the global scope,
and the area for everything Python itself defines is called the built-in scope.
When a name is used inside a function and it is not found in that function's scope, the outer scopes are searched.
The search order is Local -> global -> built-in.
This is called the scoping rule.
</pre>
End of explanation
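A small additional sketch of the lookup order: a global name can shadow a built-in because the built-in scope is searched last.
len = lambda seq: 42      # a global name now shadows the built-in len
print(len([1, 2, 3]))     # 42 -> resolved in the global scope
del len                   # remove the shadowing name
print(len([1, 2, 3]))     # 3  -> falls back to the built-in scope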
g = 1
def testScope(a):
global g # allows the variable that lives in the global scope to be rebound from here
g = 2 # in Java this would modify the global value directly; in Python a bare assignment inside a function just creates a local name, so global is needed to affect the module-level g
return g + a
testScope(1)
g
print(dir())
print(dir(func2))
dir(__name__)
print(globals())
def argTest(a,b,c):
print(a,b,c)
argTest(a=1,b=2,3) # keyword arguments must come after positional arguments, so this is a SyntaxError
argTest(1,a=1,b=2) # looks like it should work, but fails because a receives two values
argTest(1,b=2,c=3) # positional arguments must follow the order of the signature
argTest(a=1,2,c=3) # also fails: the keyword argument a=1 appears before the positional argument 2
def test(*args):
print(type(args))
print(args)
test(1,2,3,4,5)
def union(*args):
res = []
for i in args:
for x in i:
if x not in res:
res.append(x)
return res
union("ham","mma", "spam")
def urlBuilder(server, **args):
print(server)
for i in args.keys():
print(i, " : ",args[i])
urlBuilder("gg",a = "val_a", b = "val_b", c = "val_c")
(lambda x : x+1)(1)
b = lambda x: x+1
b(3)
Explanation: <pre>
What global does is not create or copy a new local variable;
it simply makes the global variable's reference available in the local namespace so that the value in the global scope can be referenced (and rebound) from inside the function.
</pre>
End of explanation
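A small contrast sketch: without the global statement, the same assignment only creates a local name and the module-level g above is left untouched.
def withoutGlobal(a):
    g = 100               # a new local name; the module-level g is not affected
    return g + a
print(withoutGlobal(1))   # 101
print(g)                  # still 2, the value set by testScope above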
def testLambda(g):
g(1,2,3)
testLambda(lambda a,b,c: print("sum id ", \
a+b+c, ": type id a ", type(a) ,\
":list object is ", zip([a,b,c])))
def fibo(n):
if n<2: return 1
return fibo(n-1) + fibo(n-2)
fibo(3)
fibo(4)
for i in range(0,10):
print(fibo(i))
temp = {}
def newFibo(n):
if n in temp.keys():
return temp[n]
if n<2:
temp[n] = 1
return 1
temp[n] = newFibo(n-1) + newFibo(n-2)
return temp[n]
newFibo(1)
newFibo(2)
newFibo(3)
newFibo(4)
temp = {}
for i in range(10):
print(newFibo(i))
print(temp)
def plus(a,b):
return a+b
help(plus)
def minus(a,b):
return a-b
help(minus)
plus.__doc__ = "Returns the sum of a and b."
help(plus)
def fac(n):
"""A factorial function.
>>> fac(6)
"""
if n == 1:
return n
return n * fac(n-1)
print(fac(3))
help(fac)
for key in {"a":1,"b":2}:
print(key)
Explanation: <pre>
As shown below, a lambda can be spread over several lines, but this is an anti-pattern.
</pre>
End of explanation
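The usual fix for a multi-line lambda is a small named function; a sketch of the same call as above:
def report(a, b, c):
    print("sum is", a + b + c, ": type of a is", type(a))
def testNamed(g):
    g(1, 2, 3)
testNamed(report)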
s = "abc"
it = iter(s)
print(it)
next(it)
next(it)
next(it)
next(it)
it.__next__()
it = iter(s)
it.__next__()
it.__next__()
it.__next__()
it.__next__()
next(iter("ab"))
Explanation: <pre>
Sequence types carry an iterator object as an attribute.
</pre>
End of explanation |
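As a sketch of what the for loop does under the hood: it keeps calling next() on the iterator until StopIteration is raised.
it = iter("ab")
while True:
    try:
        ch = next(it)
    except StopIteration:
        break
    print(ch)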
147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparison of S4G and 2MASS mass models
The idea is to compare the total galaxy masses from S4G and from 2MASS photometry in order to understand why the disks are so underestimated.
Data from https
Step1: The M/L scatter from the calibration $log(M/L) = −0.339(±0.057)\times([3.6]−[4.5])−0.336(±0.002)$ https
Step2: The scatter is fairly small.
Step3: As we can see, everything agrees remarkably well.
Step4: What about the total mass from the calibrations: how well does it agree with the tabulated value?
https
Step5: j_m_ext 8.307
h_m_ext 7.493
k_m_ext 7.360
Step6: It agrees well. | Python Code:
sun_abs_mags = {'U' : 5.61,
'B' : 5.48,
'V' : 4.83,
'R' : 4.42,
'I' : 4.08,
'J' : 3.64,
'H' : 3.32,
'K' : 3.28,
'3.6' : 3.24, # Oh et al. 2008
'u' : 6.77, #SDSS bands from http://mips.as.arizona.edu/~cnaw/sun.html
'g' : 5.36,
'r' : 4.67,
'i' : 4.48,
'z' : 4.42
}
s4g_rawdata = []
with open('IpacTableFromSource.tbl') as s4g:
for line in s4g.readlines():
if line[0] == '\\' or line[0] == '|':
pass
else:
s4g_rawdata.append(" ".join(line.split()).split(' '))
print 'name [3.6] [4.5] log10(stellar mass) distance(Mpc) std(distance)'
s4g_rawdata[0]
Explanation: Comparison of S4G and 2MASS mass models
The idea is to compare the total galaxy masses from S4G and from 2MASS photometry in order to understand why the disks are so underestimated.
Data from https://irsa.ipac.caltech.edu/cgi-bin/Gator/nph-dd?catalog=s4gcat and https://ned.ipac.caltech.edu/forms/gmd.html
End of explanation
m_to_ls = []
for l in s4g_rawdata:
if l[1] != 'null' and l[2] != 'null':
m_to_ls.append(np.power(10, -0.339*(float(l[1]) - float(l[2])) -0.336))
plt.hist(m_to_ls, bins=20, alpha=0.2)
plt.axvline(x=0.6);
Explanation: The M/L scatter from the calibration $log(M/L) = −0.339(±0.057)\times([3.6]−[4.5])−0.336(±0.002)$ https://arxiv.org/abs/1410.0009:
End of explanation
twomass_rawdata = []
with open('NED_S4G_2MASS_crossdata.txt') as crossdata:
for line in crossdata.readlines():
twomass_rawdata.append(" ".join(line.split()).split('|'))
twomass_rawdata[0]
print 'Name J H K_s'
print twomass_rawdata[0][2], twomass_rawdata[0][10], twomass_rawdata[0][14], twomass_rawdata[0][18]
print s4g_rawdata[522]
print '{:2.2E}'.format(np.power(10., float(s4g_rawdata[522][3])))
for ind, l in enumerate(twomass_rawdata):
if l[2] == 'NGC4544 ':
print ind
print l
M = float(twomass_rawdata[699][14]) - 5*np.log10(float(s4g_rawdata[522][4])) + 5 - 30.
m_to_l = 1.0
print '{:2.2E}'.format(m_to_l*np.power(10., 0.4*(sun_abs_mags['H'] - M)))
s4g_dict = {}
for l in s4g_rawdata:
if l[3] != 'null' and l[4] != 'null' and l[5] != 'null':
s4g_dict[l[0]+' '] = (float(l[3]), float(l[4]), float(l[5]))
final = []
for gal in twomass_rawdata:
if gal[2] in s4g_dict.keys() and gal[10] != ' ' and gal[14] != ' ' and gal[18] != ' ':
log_stellar, dist, err_dist = s4g_dict[gal[2]]
stellar_mass_s4g = np.power(10., log_stellar)
magJ, magH, magK = float(gal[10]), float(gal[14]), float(gal[18])
photom = []
for m_to_l, mag, band in zip([1.2, 1., 0.8], [magJ, magH, magK], ['J','H','K']):
M = mag - 5*np.log10(dist) - 30. + 5.
twomass_stellar = m_to_l*np.power(10., 0.4*(sun_abs_mags[band] - M))
photom.append(twomass_stellar)
final.append((gal[2], stellar_mass_s4g, photom))
len(final)
fig = plt.figure(figsize = [16, 5])
ax = plt.subplot('131')
ax.plot([l[1] for l in final], [l[2][0] for l in final], '.')
ax.set_title('J')
ax.set_xlabel('S4G')
ax.set_ylabel('2MASS')
ax.plot(ax.get_xlim(), ax.get_ylim(), ls="--", c=".3")
ax = plt.subplot('132')
ax.plot([l[1] for l in final], [l[2][1] for l in final], '.')
ax.set_title('H')
ax.set_xlabel('S4G')
ax.set_ylabel('2MASS')
ax.plot(ax.get_xlim(), ax.get_ylim(), ls="--", c=".3")
ax = plt.subplot('133')
ax.plot([l[1] for l in final], [l[2][2] for l in final], '.')
ax.set_title('K_s')
ax.set_xlabel('S4G')
ax.set_ylabel('2MASS')
ax.plot(ax.get_xlim(), ax.get_ylim(), ls="--", c=".3")
plt.show()
Explanation: The scatter is fairly small.
End of explanation
fig = plt.figure(figsize = [8, 5])
ax = plt.subplot()
ax.plot([l[1] for l in final], [l[2][0] for l in final], '.', label='J')
ax = plt.subplot()
ax.plot([l[1] for l in final], [l[2][1] for l in final], '.', label='H')
ax = plt.subplot()
ax.plot([l[1] for l in final], [l[2][2] for l in final], '.', label='K')
ax.set_title('J, H, K_s')
ax.set_xlabel('S4G')
ax.set_ylabel('2MASS')
ax.plot(ax.get_xlim(), ax.get_ylim(), ls="--", c=".3")
plt.legend(loc='best')
plt.show()
for l in final:
if l[0] in ['NGC2985 ', 'NGC4725 ', 'NGC4258 ']:
print l
for ind, l in enumerate(twomass_rawdata):
if l[2] in ['NGC2985 ', 'NGC4725 ', 'NGC4258 ']:
print ind
print l
Explanation: As we can see, everything agrees remarkably well.
End of explanation
for ind, l in enumerate(s4g_rawdata):
if l[0] in ['NGC2985', 'NGC4725', 'NGC4258']:
print ind
print l
print s4g_rawdata[1829]
print '{:2.2E}'.format(np.power(10., float(s4g_rawdata[1829][3])))
Explanation: What about the total mass from the calibrations: how well does it agree with the tabulated value?
https://irsa.ipac.caltech.edu/cgi-bin/2MASS/PubGalPS/nph-galps?locstr=147.592417+72.278967+eq&radius=5
End of explanation
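The repeated conversion used below (an apparent magnitude and a distance in Mpc to a stellar mass) can be wrapped in a small helper. This is only a sketch of the same arithmetic, assuming numpy is available as np as in the cells above:
def stellar_mass(mag, dist_mpc, band, m_to_l):
    # absolute magnitude from a distance given in Mpc (hence the + 5 - 30 distance-modulus term)
    M_abs = mag - 5*np.log10(dist_mpc) - 30. + 5.
    return m_to_l*np.power(10., 0.4*(sun_abs_mags[band] - M_abs))
print '{:2.2E}'.format(stellar_mass(7.493, float(s4g_rawdata[1829][4]), 'H', 1.0))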
M = 7.493 - 5*np.log10(float(s4g_rawdata[1829][4])) + 5 - 30.
m_to_l = 1.0
print '{:2.2E}'.format(m_to_l*np.power(10., 0.4*(sun_abs_mags['H'] - M)))
M = 8.307 - 5*np.log10(float(s4g_rawdata[1829][4])) + 5 - 30.
m_to_l = 1.2
print '{:2.2E}'.format(m_to_l*np.power(10., 0.4*(sun_abs_mags['J'] - M)))
M = 7.360 - 5*np.log10(float(s4g_rawdata[1829][4])) + 5 - 30.
m_to_l = 0.8
print '{:2.2E}'.format(m_to_l*np.power(10., 0.4*(sun_abs_mags['K'] - M)))
final_s4g = []
for l in s4g_rawdata:
if l[1] != 'null' and l[2] != 'null' and l[3] != 'null':
stellar_mass_s4g = np.power(10., float(l[3]))
m_to_l = np.power(10, -0.339*(float(l[1]) - float(l[2])) -0.336)
photom_stellar = m_to_l*np.power(10., 0.4*(sun_abs_mags['3.6'] - float(l[1])))
final_s4g.append((l[0], stellar_mass_s4g, photom_stellar))
len(final_s4g)
fig = plt.figure(figsize = [5, 5])
ax = plt.subplot()
ax.plot(zip(*final_s4g)[1], zip(*final_s4g)[2], '.')
ax.set_title('S4G calibration')
ax.set_xlabel('stellar mass from table')
ax.set_ylabel('from calibration')
ax.plot(ax.get_xlim(), ax.get_ylim(), ls="--", c=".3")
plt.show()
Explanation: j_m_ext 8.307
h_m_ext 7.493
k_m_ext 7.360
End of explanation
for l in final_s4g:
if l[0] in ['NGC2985', 'NGC4725', 'NGC4258']:
print l
Explanation: It agrees well.
End of explanation |
148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Revision control software
J.R. Johansson (jrjohansson at gmail.com)
The latest version of this IPython notebook lecture is available at http
Step1: In any software development, one of the most important tools is revision control software (RCS).
They are used in virtually all software development and in all environments, by everyone and everywhere (no kidding!)
RCS can be used on almost any digital content, so it is not only restricted to software development, and is also very useful for manuscript files, figures, data and notebooks!
There are two main purposes of RCS systems
Step2: If we want to fork or clone an existing repository, we can use the command git clone repository
Step3: Git clone can take a URL to a public repository, like above, or a path to a local directory
Step4: We can also clone private repositories over secure protocols such as SSH
Step5: In this case, only the current ipython notebook has been added. It is listed as an untracked file, and is therefore not in the repository yet.
Adding files and committing changes
To add a new file to the repository, we first create the file and then use the git add filename command
Step6: After having added the file README, the command git status lists it as an untracked file.
Step7: Now that it has been added, it is listed as a new file that has not yet been committed to the repository.
Step8: After committing the change to the repository from the local working directory, git status again reports that the working directory is clean.
Committing changes
When files that are tracked by GIT are changed, they are listed as modified by git status
Step9: Again, we can commit such changes to the repository using the git commit -m "message" command.
Step10: Removing files
To remove a file that has been added to the repository, use git rm filename, which works similarly to git add filename
Step11: Add it
Step12: Remove it again
Step13: Commit logs
The messages that are added to the commit command are supposed to give a short (often one-line) description of the changes/additions/deletions in the commit. If the -m "message" flag is omitted when invoking git commit, an editor will be opened for you to type a commit message (for example useful when a longer commit message is required).
We can look at the revision log by using the command git log
Step14: In the commit log, each revision is shown with a timestamp, a unique hash tag, author information and the commit message.
Diffs
All commits result in a changeset, which has a "diff" describing the changes to the file associated with it. We can use git diff to see what has changed in a file
Step15: That looks quite cryptic but is a standard form for describing changes in files. We can use other tools, like graphical user interfaces or web based systems to get a more easily understandable diff.
In github (a web-based GIT repository hosting service) it can look like this
Step16: Discard changes in the working directory
To discard a change (revert to the latest version in the repository) we can use the checkout command like this
Step17: Checking out old revisions
If we want to get the code for a specific revision, we can use "git checkout" and give it the hash code of the revision we are interested in as an argument
Step18: Now the content of all the files is like in the revision with the hash code listed above (the first revision)
Step19: We can move back to "the latest" (master) with the command
Step20: Tagging and branching
Tags
Tags are named revisions. They are useful for marking particular revisions for later reference. For example, we can tag our code with the tag "paper-1-final" when simulations for "paper-1" are finished and the paper submitted. Then we can always retrieve exactly the code used for that paper even if we continue to work on and develop the code for future projects and papers.
Step21: To retrieve the code in the state corresponding to a particular tag, we can use the git checkout tagname command
Step22: We can list the existing branches like this
Step23: And we can switch between branches using checkout
Step24: Make a change in the new branch.
Step25: We can merge an existing branch and all its changesets into another branch (for example the master branch) like this
Step26: We can delete the branch expr1 now that it has been merged into the master
Step27: pulling and pushing changesets between repositories
If the repository has been cloned from another repository, for example on github.com, it automatically remembers the address of the parent repository (called origin)
Step28: pull
We can retrieve updates from the origin repository by "pulling" changesets from "origin" to our repository
Step29: We can register addresses to many different repositories, and pull in different changesets from different sources, but the default source is the origin from where the repository was first cloned (and the word origin could have been omitted from the line above).
push
After making changes to our local repository, we can push changes to a remote repository using git push. Again, the default target repository is origin, so we can do
Step30: Hosted repositories
Github.com is a git repository hosting site that is very popular with both open source projects (for which it is free) and private repositories (for which a subscription might be needed).
With a hosted repository it is easy to collaborate with colleagues on the same code base, and you get a graphical user interface where you can browse the code and look at commit logs, track issues etc.
Some good hosted repositories are
Github
Step31: Graphical user interfaces
There are also a number of graphical user interfaces for GIT. The available options vary a little bit from platform to platform | Python Code:
from IPython.display import Image
Explanation: Revision control software
J.R. Johansson (jrjohansson at gmail.com)
The latest version of this IPython notebook lecture is available at http://github.com/jrjohansson/scientific-python-lectures.
The other notebooks in this lecture series are indexed at http://jrjohansson.github.io.
End of explanation
# create a new git repository called gitdemo:
!git init gitdemo
Explanation: In any software development, one of the most important tools is revision control software (RCS).
They are used in virtually all software development and in all environments, by everyone and everywhere (no kidding!)
RCS can be used on almost any digital content, so it is not only restricted to software development, and is also very useful for manuscript files, figures, data and notebooks!
There are two main purposes of RCS systems:
Keep track of changes in the source code.
Allow reverting back to an older revision if something goes wrong.
Work on several "branches" of the software concurrently.
Tag revisions to keep track of which version of the software was used for what (for example, "release-1.0", "paper-A-final", ...)
Make it possible for several people to collaboratively work on the same code base simultaneously.
Allow many authors to make changes to the code.
Clearly communicating and visualizing changes in the code base to everyone involved.
Basic principles and terminology for RCS systems
In an RCS, the source code or digital content is stored in a repository.
The repository does not only contain the latest version of all files, but the complete history of all changes to the files since they were added to the repository.
A user can checkout the repository, and obtain a local working copy of the files. All changes are made to the files in the local working directory, where files can be added, removed and updated.
When a task has been completed, the changes to the local files are committed (saved to the repository).
If someone else has been making changes to the same files, a conflict can occur. In many cases conflicts can be resolved automatically by the system, but in some cases we might manually have to merge different changes together.
It is often useful to create a new branch in a repository, or a fork or clone of an entire repository, when we are doing larger experimental development. The main branch in a repository is often called master or trunk. When work on a branch or fork is completed, it can be merged into the master branch/repository.
With distributed RCSs such as GIT or Mercurial, we can pull and push changesets between different repositories. For example, between a local copy of the repository and a central online repository (for example on a community repository hosting site like github.com).
Some good RCS software
GIT (git) : http://git-scm.com/
Mercurial (hg) : http://mercurial.selenic.com/
In the rest of this lecture we will look at git, although hg is just as good and works in almost exactly the same way.
Installing git
On Linux:
$ sudo apt-get install git
On Mac (with macports):
$ sudo port install git
The first time you start to use git, you'll need to configure your author information:
$ git config --global user.name 'Robert Johansson'
$ git config --global user.email robert@riken.jp
Creating and cloning a repository
To create a brand new empty repository, we can use the command git init repository-name:
End of explanation
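As a sketch, the same one-time author configuration can be run directly from the notebook (replace the example values from the text above with your own):
!git config --global user.name 'Robert Johansson'
!git config --global user.email robert@riken.jp
!git config --global --list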
!git clone https://github.com/qutip/qutip
Explanation: If we want to fork or clone an existing repository, we can use the command git clone repository:
End of explanation
!git clone gitdemo gitdemo2
Explanation: Git clone can take a URL to a public repository, like above, or a path to a local directory:
End of explanation
!git status
Explanation: We can also clone private repositories over secure protocols such as SSH:
$ git clone ssh://myserver.com/myrepository
Status
Using the command git status we get a summary of the current status of the working directory. It shows if we have modified, added or removed files.
End of explanation
%%file README
A file with information about the gitdemo repository.
!git status
Explanation: In this case, only the current ipython notebook has been added. It is listed as an untracked file, and is therefore not in the repository yet.
Adding files and committing changes
To add a new file to the repository, we first create the file and then use the git add filename command:
End of explanation
!git add README
!git status
Explanation: After having added the file README, the command git status lists it as an untracked file.
End of explanation
!git commit -m "Added a README file" README
!git add Lecture-7-Revision-Control-Software.ipynb
!git commit -m "added notebook file" Lecture-7-Revision-Control-Software.ipynb
!git status
Explanation: Now that it has been added, it is listed as a new file that has not yet been committed to the repository.
End of explanation
%%file README
A file with information about the gitdemo repository.
A new line.
!git status
Explanation: After committing the change to the repository from the local working directory, git status again reports that working directory is clean.
Committing changes
When files that are tracked by GIT are changed, they are listed as modified by git status:
End of explanation
!git commit -m "added one more line in README" README
!git status
Explanation: Again, we can commit such changes to the repository using the git commit -m "message" command.
End of explanation
%%file tmpfile
A short-lived file.
Explanation: Removing files
To remove a file that has been added to the repository, use git rm filename, which works similarly to git add filename:
End of explanation
!git add tmpfile
!git commit -m "adding file tmpfile" tmpfile
Explanation: Add it:
End of explanation
!git rm tmpfile
!git commit -m "remove file tmpfile" tmpfile
Explanation: Remove it again:
End of explanation
!git log
Explanation: Commit logs
The messages that are added to the commit command are supposed to give a short (often one-line) description of the changes/additions/deletions in the commit. If the -m "message" flag is omitted when invoking git commit, an editor will be opened for you to type a commit message (for example useful when a longer commit message is required).
We can look at the revision log by using the command git log:
End of explanation
%%file README
A file with information about the gitdemo repository.
README files usually contains installation instructions, and information about how to get started using the software (for example).
!git diff README
Explanation: In the commit log, each revision is shown with a timestamp, a unique hash tag, author information and the commit message.
Diffs
All commits result in a changeset, which has a "diff" describing the changes to the file associated with it. We can use git diff to see what has changed in a file:
End of explanation
Image(filename='images/github-diff.png')
Explanation: That looks quite cryptic but is a standard form for describing changes in files. We can use other tools, like graphical user interfaces or web based systems to get a more easily understandable diff.
In github (a web-based GIT repository hosting service) it can look like this:
End of explanation
!git checkout -- README
!git status
Explanation: Discard changes in the working directory
To discard a change (revert to the latest version in the repository) we can use the checkout command like this:
End of explanation
!git log
!git checkout 1f26ad648a791e266fbb951ef5c49b8d990e6461
Explanation: Checking out old revisions
If we want to get the code for a specific revision, we can use "git checkout" and give it the hash code of the revision we are interested in as an argument:
End of explanation
!cat README
Explanation: Now the content of all the files is like in the revision with the hash code listed above (the first revision)
End of explanation
!git checkout master
!cat README
!git status
Explanation: We can move back to "the latest" (master) with the command:
End of explanation
!git log
!git tag -a demotag1 -m "Code used for this and that purpose"
!git tag -l
!git show demotag1
Explanation: Tagging and branching
Tags
Tags are named revisions. They are useful for marking particular revisions for later reference. For example, we can tag our code with the tag "paper-1-final" when simulations for "paper-1" are finished and the paper submitted. Then we can always retrieve exactly the code used for that paper even if we continue to work on and develop the code for future projects and papers.
End of explanation
!git branch expr1
Explanation: To retrieve the code in the state corresponding to a particular tag, we can use the git checkout tagname command:
$ git checkout demotag1
Branches
With branches we can create diverging code bases in the same repository. They are for example useful for experimental development that requires a lot of code changes that could break the functionality in the master branch. Once the development of a branch has reached a stable state it can always be merged back into the trunk. Branching-development-merging is a good development strategy when several people are involved in working on the same code base. But even in single author repositories it can often be useful to always keep the master branch in a working state, and always branch/fork before implementing a new feature, and later merge it back into the main trunk.
In GIT, we can create a new branch like this:
End of explanation
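As a side sketch (not executed in the original notebook): checking out the tag created above and then returning to master would look like this.
!git checkout demotag1
!cat README
!git checkout master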
!git branch
Explanation: We can list the existing branches like this:
End of explanation
!git checkout expr1
Explanation: And we can switch between branches using checkout:
End of explanation
%%file README
A file with information about the gitdemo repository.
README files usually contains installation instructions, and information about how to get started using the software (for example).
Experimental addition.
!git commit -m "added a line in expr1 branch" README
!git branch
!git checkout master
!git branch
Explanation: Make a change in the new branch.
End of explanation
!git checkout master
!git merge expr1
!git branch
Explanation: We can merge an existing branch and all its changesets into another branch (for example the master branch) like this:
First change to the target branch:
End of explanation
!git branch -d expr1
!git branch
!cat README
Explanation: We can delete the branch expr1 now that it has been merged into the master:
End of explanation
!git remote
!git remote show origin
Explanation: pulling and pushing changesets between repositories
If the repository has been cloned from another repository, for example on github.com, it automatically remembers the address of the parent repository (called origin):
End of explanation
!git pull origin
Explanation: pull
We can retrieve updates from the origin repository by "pulling" changesets from "origin" to our repository:
End of explanation
!git status
!git add Lecture-7-Revision-Control-Software.ipynb
!git commit -m "added lecture notebook about RCS" Lecture-7-Revision-Control-Software.ipynb
!git push
Explanation: We can register addresses to many different repositories, and pull in different changesets from different sources, but the default source is the origin from where the repository was first cloned (and the word origin could have been omitted from the line above).
push
After making changes to our local repository, we can push changes to a remote repository using git push. Again, the default target repository is origin, so we can do:
End of explanation
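A small sketch with a hypothetical URL: additional remotes can be registered and pulled from explicitly.
!git remote add upstream https://example.com/some/other-repository.git
!git remote -v
!git pull upstream master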
Image(filename='images/github-project-page.png')
Explanation: Hosted repositories
Github.com is a git repository hosting site that is very popular with both open source projects (for which it is free) and private repositories (for which a subscription might be needed).
With a hosted repository it is easy to collaborate with colleagues on the same code base, and you get a graphical user interface where you can browse the code and look at commit logs, track issues etc.
Some good hosted repositories are
Github : http://www.github.com
Bitbucket: http://www.bitbucket.org
End of explanation
Image(filename='images/gitk.png')
Explanation: Graphical user interfaces
There are also a number of graphical user interfaces for GIT. The available options vary a little bit from platform to platform:
http://git-scm.com/downloads/guis
End of explanation |
149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
0. Preparation and setup
One Python library that makes GraphX support available to our Jupyter notebooks is not yet bound to the runtime by default.
To get it added to the Spark context you have to use the !pip magic cell command install first to bind the library to the existing runtime.
The pixiedust library is implemented and loaded from https
Step1: Pixiedust provides a nice visualization plugin for d3 style plots. Have a look at https
Step2: When the library has been loaded successfully you have access to the PackageManager. Use the PackageManager to install a package to supply GraphFrames. Those are needed later in the notebook to complete the instructions for Spark GraphX.
Step3: At this point you are being asked to Please restart Kernel to complete installation of the new package. Use the Restart Kernel dialog from the menu to do that. Once completed, you can start the analysis and resume with the next section.
Please restart your Kernel before you proceed!
1. Load data from Twitter to Cloudant
Following the lab instructions you should at this point have
Step4: Import all required Python libraries.
Step5: Define a class with helper functions to query the Twitter service API and load documents in the Cloudant database using the bulk load API. (Note
Step6: Finally we make the call to load our Cloudant database with tweets. To do that, we require two parameters
Step7: At this point you should see a number of debug messages with response codes 200 and 201. As a result your database is loaded with the number of tweets you provided in count variable above.
If there are response codes like 401 (unauthorized) or 404 (not found) please check your credentials and URLs provided in the properties above. Changes you make to these settings are applied when you execute the cell again. There is no need to execute other cells (that have not been changed) and you can immediately come back here to re-run your TwitterToCloudant functions.
Should there be any severe problems that can not be resolved, we made a database called tweets already available in your Cloudant account. You can continue to work through the following instructions using the tweets database instead.
2. Analyze tweets with Spark SQL
In this section you are going to explore the tweets loaded into your Cloudant database using Spark SQL queries. The Cloudant Spark connector library available at https
Step8: Now you want to create a Spark SQL context object off the given Spark context.
Step9: The Spark SQL context (sqlContext) is used to read data from the Cloudant database. We use a schema sample size and specified number of partitions to load the data with. For details on these parameters check https
Step10: For performance reasons we will cache the Data Frame to prevent re-loading.
Step11: The schema of a Data Frame reveals the structure of all JSON documents loaded from your Cloudant database. Depending on the setting for the parameter schemaSampleSize the created RDD contains attributes for the first document only, for the first N documents, or for all documents. Please have a look at https
Step12: With the use of the IBM Insights for Twitter API all tweets are enriched with metadata. For example, the gender of the Twitter user or the state of his account location are added in clear text. Sentiment analysis is also done at the time the tweets are loaded from the original Twitter API. This allows us to group tweets according to their positive, neutral, or negative sentiment.
In a first example you can extract the gender, state, and polarity details from the DataFrame (or use any other field available in the schema output above).
Note
Step13: The above statement executes extremely fast because no actual function or transformation was computed yet. Spark uses a lazy approach to compute functions only when they are actually needed. The following function is used to show the output of the Data Frame. At that point only do you see a longer runtime to compute tweetsDF2.
Step14: Work with other Spark SQL functions to do things like counting, grouping etc.
Step15: 2.1 Plot results using matplotlib
In Python you can use simple libraries to plot your DataFrames directly in diagrams. However, the use of matplotlib is not trivial and once the data is rendered in the diagram it is static. For more comprehensive graphing Spark provides the GraphX extension. Here the data is transformed into a directed multigraph model (similar to those used in GraphDBs) called GraphFrames.
You will explore GraphFrames later in this lab. Let's first have a look at simply plotting your DataFrames using matplotlib.
Step16: Plot the number of tweets per state. Notice again how Spark computes the result lazily. In no previous output did we require the full DataFrame and it did not have to get fully computed until now.
Step17: More plots to group data by gender and polarity.
Step18: 2.2 Create SQL temporary tables
With Spark SQL you can create in-memory tables and query your Spark RDDs in tables using SQL syntax. This is just an alternative representation of your RDD where SQL functions (like filters or projections) are converted into Spark functions. For the user it mostly provides a SQL wrapper over Spark and a familiar way to query data.
Step19: Run SQL statements using the sqlContext.sql() function and render output with show(). The result of a SQL function could again be mapped to a data frame.
Step20: With multiple temporary tables (potentially from different databases) you can execute JOIN and UNION queries to analyze the database in combination.
In the next query we will return all hashtags used in our body of tweets.
Step21: The hashtags are in lists, one per tweet. We flat map this list to a large list and then store it back into a temporary table. The temporary table can be used to define a hashtag cloud to understand which hashtag has been used how many times.
Step22: Create a DataFrame from the Python dictionary we used to flatten our hashtags into. The DataFrame has a simple schema with just a single column called hashtag.
Step23: Register a new temp table for hashtags. Group and count tags to get a sense of trending issues.
Step24: 2.3 Visualize tag cloud with Brunel
Let's create some charts and diagrams with Brunel commands.
The basic format of each call to Brunel is simple. Whether the command is a single line or a set of lines, the commands are concatenated together and the result interpreted as one command.
Here are some of the rules for using Brunel that you'll need in this notebook
Step25: Brunel libraries are able to read data from CSV files only. We will export our Panda DataFrames to CSV first to be able to load them with the Brunel libraries below.
Step26: Top 5 records in every Panda DF.
Step27: The hashtag cloud is visualized using the Brunel cloud graph.
Step28: State and location data can be plotted on a map or a bubble graph representing the number of tweets per state. We will exercise maps later using the GraphX framework.
Step29: Brunel graphs are D3 based and interactive. Try using your mouse on the graph for Gender polarity to hover over details and zoom in on the Y axis.
Step30: 2.4 Write analysis results back to Cloudant
Next we are going to persist the hashtags_DF back into a Cloudant database. (Note: The database hashtags has to exist in Cloudant. Please create that database first.)
Step31: 3. Analysis with Spark GraphX
Import dependencies from the Pixiedust library loaded in the preparation section. See https://github.com/ibm-cds-labs/pixiedust for details.
Step32: To render a chart you have options to select the columns to display or the aggregation function to apply.
Step33: Use a data set with at least two numeric columns to create scatter plots.
4. Analysis with Spark MLlib
Here we are going to use KMeans clustering algorithm from Spark MLlib.
Clustering will let us cluster similar tweets together.
We will then display clusters using Brunel library. | Python Code:
!pip install --user --upgrade --no-deps pixiedust
Explanation: 0. Preparation and setup
One Python library that makes GraphX support available to our Jupyter notebooks is not yet bound to the runtime by default.
To get it added to the Spark context you have to run the !pip install magic cell command first to bind the library to the existing runtime.
The pixiedust library is implemented and loaded from https://github.com/ibm-cds-labs/pixiedust. See the project documentation for details.
End of explanation
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
Explanation: Pixiedust provides a nice visualization plugin for d3 style plots. Have a look at https://d3js.org/ if you are not yet familiar with d3.
Having non-ascii characters in some of your tweets requires the Python interpreter to be set to support UTF-8. Reload your Python sys settings with UTF-8 encoding.
End of explanation
from pixiedust.packageManager import PackageManager
pkg=PackageManager()
pkg.installPackage("graphframes:graphframes:0")
Explanation: When the library has been loaded successfully you have access to the PackageManager. Use the PackageManager to install a package to supply GraphFrames. Those are needed later in the notebook to complete the instructions for Spark GraphX.
End of explanation
properties = {
'twitter': {
'restAPI': 'https://xxx:xxx@cdeservice.mybluemix.net/api/v1/messages/search',
'username': 'xxx',
'password': 'xxx'
},
'cloudant': {
'account':'https://xxx:xxx@xxx.cloudant.com',
'username':'xxx',
'password':'xxx',
'database':'election2016'
}
}
Explanation: At this point you are being asked to Please restart Kernel to complete installation of the new package. Use the Restart Kernel dialog from the menu to do that. Once completed, you can start the analysis and resume with the next section.
Please restart your Kernel before you proceed!
1. Load data from Twitter to Cloudant
Following the lab instructions you should at this point have:
a Cloudant account
an empty database in your Cloudant account
an IBM Insights for Twitter service instance
Provide the details for both into the global variables section below, including
Twitter:
- restAPI - the API endpoint we use to query the Twitter API with. Use the URL for your IBM Insights for Twitter service and add /api/v1/messages/search as path (for example https://cdeservice.stage1.mybluemix.net/api/v1/messages/search)
- username - the username for your IBM Insights for Twitter service instance
- password - the password for your IBM Insights for Twitter service instance
Cloudant:
- account - the fully qualified account https URL (for example https://testy-dynamite-001.cloudant.com)
- username - the Cloudant account username
- password - the Cloudant account password
- database - the database name you want your tweets to be loaded into (Note: the database will NOT get created by the script below. Please create the database manually into your Cloudant account first.)
End of explanation
import requests
import json
from requests.auth import HTTPBasicAuth
import http.client
Explanation: Import all required Python libraries.
End of explanation
class TwitterToCloudant:
count = 100
def query_twitter(self, config, url, query, loop):
loop = loop + 1
if loop > (int(self.count) / 100):
return
# QUERY TWITTER
if url is None:
url = config["twitter"]["restAPI"]
print(url, query)
tweets = self.get_tweets(config, url, query)
else:
print(url)
tweets = self.get_tweets(config, url, query)
# LOAD TO CLOUDANT
self.load_cloudant(config, tweets)
# CONTINUE TO PAGE THROUGH RESULTS ....
if "related" in tweets:
url = tweets["related"]["next"]["href"]
#!! recursive call
self.query_twitter(config, url, None, loop)
def get_tweets(self, config, url, query):
# GET tweets from twitter endpoint
user = config["twitter"]["username"]
password = config["twitter"]["password"]
print ("GET: Tweets from {} ".format(url))
if query is None:
payload = {'country_code' :' us', 'lang' : 'en'}
else:
payload = {'q': query, 'country_code' :' us', 'lang' : 'en'}
response = requests.get(url, params=payload, auth=HTTPBasicAuth(user, password))
print ("Got {} response ".format(response.status_code))
tweets = json.loads(response.text)
return tweets
def load_cloudant(self, config, tweets):
# POST tweets to Cloudant database
url = config["cloudant"]["account"] + "/" + config["cloudant"]["database"] + "/_bulk_docs"
user = config["cloudant"]["username"]
password = config["cloudant"]["password"]
headers = {"Content-Type": "application/json"}
if "tweets" in tweets:
docs = {}
docs["docs"] = tweets["tweets"]
print ("POST: Docs to {}".format(url))
response = requests.post(url, data=json.dumps(docs), headers=headers, auth=HTTPBasicAuth(user, password))
print ("Got {} response ".format(response.status_code))
Explanation: Define a class with helper functions to query the Twitter service API and load documents in the Cloudant database using the bulk load API. (Note: no code is being executed yet and you don't expect any output for these declarations.)
End of explanation
query = "#election2016"
count = 300
TtC = TwitterToCloudant()
TtC.count = count
TtC.query_twitter(properties, None, query, 0)
Explanation: Finally we make the call to load our Cloudant database with tweets. To do that, we require two parameters:
query - the query string to pass to the Twitter API. Use #election2016 as default or experiment with your own.
count - the number of tweets to process. Use 200 as a good start or scale up if you want. (Note: Execution time depends on ....)
End of explanation
sc.version
sc._conf.getAll()
Explanation: At this point you should see a number of debug messages with response codes 200 and 201. As a result your database is loaded with the number of tweets you provided in count variable above.
If there are response codes like 401 (unauthorized) or 404 (not found) please check your credentials and URLs provided in the properties above. Changes you make to these settings are applied when you execute the cell again. There is no need to execute other cells (that have not been changed) and you can immediately come back here to re-run your TwitterToCloudant functions.
Should there be any severe problems that cannot be resolved, we made a database called tweets already available in your Cloudant account. You can continue to work through the following instructions using the tweets database instead.
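A quick way to confirm the load is to ask Cloudant for the current document count of your database. The snippet below is a small sketch that is not part of the original lab; it assumes the properties dictionary defined earlier and relies on the standard Cloudant/CouchDB GET /{db} endpoint, which returns doc_count for an existing database.
import requests
from requests.auth import HTTPBasicAuth
db_url = properties['cloudant']['account'] + "/" + properties['cloudant']['database']
auth = HTTPBasicAuth(properties['cloudant']['username'], properties['cloudant']['password'])
resp = requests.get(db_url, auth=auth)
print("Status: {}".format(resp.status_code))                 # 200 = database found, 404 = check name/URL
print("Documents: {}".format(resp.json().get('doc_count')))  # should roughly match the count you loaded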
2. Analyze tweets with Spark SQL
In this section you are going to explore the tweets loaded into your Cloudant database using Spark SQL queries. The Cloudant Spark connector library available at https://github.com/cloudant-labs/spark-cloudant is already linked with the Spark deployment underneath this notebook. All you have to do at this point is to read your Cloudant documents into a DataFrame.
First, this notebook runs on a shared Spark cluster but obtains a dedicated Spark context for isolated binding. The Spark context (sc) is made available automatically when the notebook is launched and should be started at this point. With a few statements you can inspect the Spark version and resources allocated for this context.
Note: If there is ever a problem with the running Spark context, you can submit sc.stop() and sc.start() to recycle it
End of explanation
sqlContext = SQLContext(sc)
Explanation: Now you want to create a Spark SQL context object off the given Spark context.
End of explanation
tweetsDF = sqlContext.read.format("com.cloudant.spark").\
option("cloudant.host",properties['cloudant']['account'].replace('https://','')).\
option("cloudant.username", properties['cloudant']['username']).\
option("cloudant.password", properties['cloudant']['password']).\
option("schemaSampleSize", "-1").\
option("jsonstore.rdd.partitions", "5").\
load(properties['cloudant']['database'])
tweetsDF.show(5)
Explanation: The Spark SQL context (sqlContext) is used to read data from the Cloudant database. We use a schema sample size and specified number of partitions to load the data with. For details on these parameters check https://github.com/cloudant-labs/spark-cloudant#configuration-on-sparkconf
End of explanation
tweetsDF.cache()
Explanation: For performance reasons we will cache the Data Frame to prevent re-loading.
End of explanation
tweetsDF.printSchema()
Explanation: The schema of a Data Frame reveals the structure of all JSON documents loaded from your Cloudant database. Depending on the setting for the parameter schemaSampleSize the created RDD contains attributes for the first document only, for the first N documents, or for all documents. Please have a look at https://github.com/cloudant-labs/spark-cloudant#schema-variance for details on schema computation.
End of explanation
tweetsDF2 = tweetsDF.select(tweetsDF.cde.author.gender.alias("gender"),
tweetsDF.cde.author.location.state.alias("state"),
tweetsDF.cde.content.sentiment.polarity.alias("polarity"))
Explanation: With the use of the IBM Insights for Twitter API all tweets are enriched with metadata. For example, the gender of the Twitter user or the state of his account location are added in clear text. Sentiment analysis is also done at the time the tweets are loaded from the original Twitter API. This allows us to group tweets according to their positive, neutral, or negative sentiment.
In a first example you can extract the gender, state, and polarity details from the DataFrame (or use any other field available in the schema output above).
Note: To extract a nested field you have to use the full attribute path, for example cde.author.gender or cde.content.sentiment.polarity. The alias() function is available to simplify the name in the resulting DataFrame.
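The same pattern works for any other attribute shown in the schema. As an additional sketch (hypothetical, reusing nested fields such as message.actor.displayName and message.body that appear later in this notebook):
tweetsDF3 = tweetsDF.select(tweetsDF.message.actor.displayName.alias("author"),
                            tweetsDF.message.body.alias("body"))
tweetsDF3.printSchema()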
End of explanation
tweetsDF2.count()
tweetsDF2.printSchema()
Explanation: The above statement executes extremely fast because no actual function or transformation was computed yet. Spark uses a lazy approach to compute functions only when they are actually needed. The following function is used to show the output of the Data Frame. At that point only do you see a longer runtime to compute tweetsDF2.
End of explanation
# count tweets by state
tweets_state = tweetsDF2.groupBy(tweetsDF2.state).count()
tweets_state.show(100)
# count by gender & polarity
tweets_gp0 = tweetsDF2.groupBy(tweetsDF2.gender, tweetsDF2.polarity).count()
tweets_gp0.show(100)
tweets_gp= tweetsDF2.where(tweetsDF2.polarity.isNotNull()).groupBy("polarity").pivot("gender").count()
tweets_gp.show(100)
Explanation: Work with other Spark SQL functions to do things like counting, grouping etc.
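Sorting composes with these aggregations as well. A small sketch (assuming the tweets_state Data Frame created above) that lists the ten states with the most tweets:
tweets_state.orderBy(tweets_state['count'].desc()).show(10)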
End of explanation
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: 2.1 Plot results using matplotlib
In Python you can use simple libraries to plot your DataFrames directly in diagrams. However, the use of matplotlib is not trivial and once the data is rendered in the diagram it is static. For more comprehensive graphing Spark provides the GraphX extension. Here the data is transformed into a directed multigraph model (similar to those used in GraphDBs) called GraphFrames.
You will explore GraphFrames later in this lab. Let's first have a look at simply plotting your DataFrames using matplotlib.
End of explanation
tweets_state_pd = tweets_state.toPandas()
values = tweets_state_pd['count']
labels = tweets_state_pd['state']
plt.gcf().set_size_inches(16, 12, forward=True)
plt.title('Number of tweets by state')
plt.barh(range(len(values)), values)
plt.yticks(range(len(values)), labels)
plt.show()
Explanation: Plot the number of tweets per state. Notice again how Spark computes the result lazily. In no previous output did we require the full DataFrame and it did not have to get fully computed until now.
End of explanation
tweets_gp_pd = tweets_gp.toPandas()
labels = tweets_gp_pd['polarity']
N = len(labels)
male = tweets_gp_pd['male']
female = tweets_gp_pd['female']
unknown = tweets_gp_pd['unknown']
ind = np.arange(N) # the x locations for the groups
width = 0.2 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(ind-width, male, width, color='b', label='male')
rects2 = ax.bar(ind, female, width, color='r', label='female')
rects3 = ax.bar(ind + width, unknown, width, color='y', label='unknown')
ax.set_ylabel('Count')
ax.set_title('Tweets by polarity and gender')
ax.set_xticks(ind + width)
ax.set_xticklabels(labels)
ax.legend((rects1[0], rects2[0], rects3[0]), ('male', 'female', 'unknown'))
plt.show()
Explanation: More plots to group data by gender and polarity.
End of explanation
tweetsDF.registerTempTable("tweets_DF")
Explanation: 2.2 Create SQL temporary tables
With Spark SQL you can create in-memory tables and query your Spark RDDs in tables using SQL syntax. This is just an alternative representation of your RDD where SQL functions (like filters or projections) are converted into Spark functions. For the user it mostly provides a SQL wrapper over Spark and a familiar way to query data.
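To illustrate that the temporary table is only another view on the same data, the count query you will run in the next cell could equally be written with the DataFrame API. A sketch (not part of the original lab):
# equivalent to SELECT count(*) FROM tweets_DF
tweetsDF.count()
# equivalent to grouping and counting by author in SQL
tweetsDF.groupBy(tweetsDF.message.actor.displayName).count().show(10)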
End of explanation
sqlContext.sql("SELECT count(*) AS cnt FROM tweets_DF").show()
sqlContext.sql("SELECT message.actor.displayName AS author, count(*) as cnt FROM tweets_DF GROUP BY message.actor.displayName ORDER BY cnt DESC").show(10)
Explanation: Run SQL statements using the sqlContext.sql() function and render output with show(). The result of a SQL function could again be mapped to a data frame.
End of explanation
hashtags = sqlContext.sql("SELECT message.object.twitter_entities.hashtags.text as tags \
FROM tweets_DF \
WHERE message.object.twitter_entities.hashtags.text IS NOT NULL")
Explanation: With multiple temporary tables (potentially from different databases) you can execute JOIN and UNION queries to analyze the database in combination.
In the next query we will return all hashtags used in our body of tweets.
End of explanation
l = hashtags.map(lambda x: x.tags).collect()
tagCloud = [item for sublist in l for item in sublist]
Explanation: The hashtags are in lists, one per tweet. We flat map this list to a large list and then store it back into a temporary table. The temporary table can be used to define a hashtag cloud to understand which hashtag has been used how many times.
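An equivalent sketch that keeps the flattening on the executors instead of collecting the nested lists to the driver first (assuming the hashtags Data Frame from the previous cell):
tagCloudRDD = hashtags.rdd.flatMap(lambda x: x.tags)
print(tagCloudRDD.take(5))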
End of explanation
from pyspark.sql import Row
tagCloudDF = sc.parallelize(tagCloud)
row = Row("hashtag")
hashtagsDF = tagCloudDF.map(row).toDF()
Explanation: Create a DataFrame from the Python dictionary we used to flatten our hashtags into. The DataFrame has a simple schema with just a single column called hashtag.
End of explanation
hashtagsDF.registerTempTable("hashtags_DF")
trending = sqlContext.sql("SELECT count(hashtag) as CNT, hashtag as TAG FROM hashtags_DF GROUP BY hashtag ORDER BY CNT DESC")
trending.show(10)
Explanation: Register a new temp table for hashtags. Group and count tags to get a sense of trending issues.
End of explanation
import brunel
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
trending_pd = trending.toPandas()
Explanation: 2.3 Visualize tag cloud with Brunel
Let's create some charts and diagrams with Brunel commands.
The basic format of each call to Brunel is simple. Whether the command is a single line or a set of lines, the commands are concatenated together and the result interpreted as one command.
Here are some of the rules for using Brunel that you'll need in this notebook:
DataFrame: Use the data command to specify the pandas DataFrame.
Chart type: Use commands like chord and treemap to specify a chart type. If you don't specify a type, the default chart type is a scatterplot.
Chart definition: Use the x and y commands to specify the data to include on the x-axis and the y-axis.
Styling: Use commands like color, tooltip, and label to control the styling of the graph.
Size: Use the width and height key-value pairs to specify the size of the graph. The key-value pairs must be preceded with two colons and separated with a comma, for example: :: width=800, height=300
See detailed documentation on the Brunel Visualization language at https://brunel.mybluemix.net/docs.
End of explanation
trending_pd.to_csv('trending_pd.csv')
tweets_state_pd.to_csv('tweets_state_pd.csv')
tweets_gp_pd.to_csv('tweets_gp_pd.csv')
Explanation: Brunel libraries are able to read data from CSV files only. We will export our Panda DataFrames to CSV first to be able to load them with the Brunel libraries below.
End of explanation
trending_pd.head(5)
Explanation: Top 5 records in every Panda DF.
End of explanation
%brunel data('trending_pd') cloud color(cnt) size(cnt) label(tag) :: width=900, height=600
Explanation: The hashtag cloud is visualized using the Brunel cloud graph.
End of explanation
tweets_state_pd.head(5)
%brunel data('tweets_state_pd') bubble label(state) x(state) color(count) size(count)
Explanation: State and location data can be plotted on a map or a bubble graph representing the number of tweets per state. We will exercise maps later using the GraphX framework.
End of explanation
tweets_gp_pd.head(5)
%brunel data('tweets_gp_pd') bar x(polarity) y(male, female) color(male, female) tooltip(#all) legends(none) :: width=800, height=300
Explanation: Brunel graphs are D3 based and interactive. Try using your mouse on the graph for Gender polarity to hover over details and zoom in on the Y axis.
End of explanation
hashtagsDF.write.format("com.cloudant.spark").\
option("cloudant.host",properties['cloudant']['account'].replace('https://','')).\
option("cloudant.username", properties['cloudant']['username']).\
option("cloudant.password", properties['cloudant']['password']).\
option("bulkSize", "2000").\
save("hashtags")
Explanation: 2.4 Write analysis results back to Cloudant
Next we are going to persist the hashtags_DF back into a Cloudant database. (Note: The database hashtags has to exist in Cloudant. Please create that database first.)
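If you prefer to create that database from the notebook rather than from the Cloudant dashboard, a PUT on the database URL does it. This is a sketch using the standard Cloudant/CouchDB API; a 412 response simply means the database already exists.
import requests
from requests.auth import HTTPBasicAuth
resp = requests.put(properties['cloudant']['account'] + "/hashtags",
                    auth=HTTPBasicAuth(properties['cloudant']['username'],
                                       properties['cloudant']['password']))
print(resp.status_code)   # 201/202 = created, 412 = already exists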
End of explanation
from pixiedust.display import *
Explanation: 3. Analysis with Spark GraphX
Import dependencies from the Pixiedust library loaded in the preperation section. See https://github.com/ibm-cds-labs/pixiedust for details.
End of explanation
tweets_state_us = tweets_state.filter(tweets_state.state.isin("Alabama", "Alaska", "Arizona",
"Arkansas", "California", "Colorado", "Connecticut", "Delaware", "Florida",
"Georgia", "Hawaii", "Idaho", "Illinois Indiana", "Iowa", "Kansas", "Kentucky",
"Louisiana", "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
"Mississippi", "Missouri", "Montana Nebraska", "Nevada", "New Hampshire",
"New Jersey", "New Mexico", "New York", "North Carolina", "North Dakota",
"Ohio", "Oklahoma", "Oregon", "Pennsylvania Rhode Island", "South Carolina",
"South Dakota", "Tennessee", "Texas","Utah", "Vermont", "Virginia",
"Washington", "West Virginia", "Wisconsin", "Wyoming"))
tweets_state_us.show(5)
display(tweets_state_us)
Explanation: To render a chart you have options to select the columns to display or the aggregation function to apply.
End of explanation
# TRAINING by hashtag
from pyspark.mllib.feature import HashingTF
from pyspark.mllib.clustering import KMeans, KMeansModel
# dataframe of tweets' messages and hashtags
mhDF = sqlContext.sql("SELECT message.body as message, \
message.object.twitter_entities.hashtags.text as tags \
FROM tweets_DF \
WHERE message.object.twitter_entities.hashtags.text IS NOT NULL")
mhDF.show()
# create an RDD of hashtags
hashtagsRDD = mhDF.rdd.map(lambda h: h.tags)
# create Feature verctor for every tweet's hastags
# each hashtag represents feature
# a function calculates how many time hashtag is in a tweet
htf = HashingTF(100)
vectors = hashtagsRDD.map(lambda hs: htf.transform(hs)).cache()
print(vectors.take(2))
# Build the model (cluster the data)
numClusters = 10 # number of clusters
model = KMeans.train(vectors, numClusters, maxIterations=10, initializationMode="random")
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType, StringType
def predict(tags):
vector = htf.transform(tags)
return model.predict(vector)
# Creates a Column expression representing a user defined function
udfPredict = udf(predict, IntegerType())
def formatstr(message):
lines = message.splitlines()
return " ".join(lines)
udfFormatstr = udf(formatstr, StringType())
# transform mhDF into cmhDF, a dataframe containing formatted messages,
# hashtabs and cluster
mhDF2 = mhDF.withColumn("message", udfFormatstr(mhDF.message))
cmhDF = mhDF2.withColumn("cluster", udfPredict(mhDF2.tags))
cmhDF.show()
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
# visualizing clusters
import brunel
cmh_pd = cmhDF.toPandas()
cmh_pd.to_csv('cmh_pd.csv')
%brunel data('cmh_pd') bubble x(cluster) color(#all) size(#count) tooltip(message, tags) legends(none)
Explanation: Use a data set with at least two numeric columns to create scatter plots.
4. Analysis with Spark MLlib
Here we are going to use KMeans clustering algorithm from Spark MLlib.
Clustering will let us cluster similar tweets together.
We will then display clusters using Brunel library.
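To get a rough feeling for whether the chosen numClusters is reasonable, you can compute the within-set sum of squared errors of the trained model. A sketch, assuming the model and vectors objects created in the cell above:
wssse = model.computeCost(vectors)
print("Within Set Sum of Squared Errors = {}".format(wssse))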
End of explanation |
150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tiingo-Python
This notebook shows basic usage of the tiingo-python library. If you're running this on mybinder.org, you can run this code without installing anything on your computer. You can find more information about what is available at the Tiingo website, but this notebook will let you play around with real code samples in your browser.
If you've never used jupyter before, I recommend this tutorial from Datacamp.
Basic Setup
First, you'll need to provide your API key as a string in the cell below. If you forget to do this, the notebook cannot run. You can find your API key by visiting this link and logging in to your Tiingo account.
Step1: Minimal Data Fetching Examples
Below are the code samples from the README.rst along with sample outputs, but this is just the tip of the iceberg of this library's capabilities.
Step2: For values of frequency
Step3: For each ticker, you may access
ticker, exchange, priceCurrency, and startDate/endDate
Step4: Basic Pandas Dataframe Examples
Pandas is a popular python library for data analysis and manipulation. We provide out-of-the-box support for returning responses from Tiingo as Python Dataframes. If you are unfamiliar with pandas, I recommend the Mode Notebooks python data analysis tutorial.
TIINGO_API_KEY = 'REPLACE-THIS-TEXT-WITH-A-REAL-API-KEY'
# This is here to remind you to change your API key.
if not TIINGO_API_KEY or (TIINGO_API_KEY == 'REPLACE-THIS-TEXT-WITH-A-REAL-API-KEY'):
raise Exception("Please provide a valid Tiingo API key!")
from tiingo import TiingoClient
config = {
'api_key': TIINGO_API_KEY,
'session': True # Reuse HTTP sessions across API calls for better performance
}
# Throughout the rest of this notebook, you'll use the "client" to interact with the Tiingo backend services.
client = TiingoClient(config)
Explanation: Tiingo-Python
This notebook shows basic usage of the tiingo-python library. If you're running this on mybinder.org, you can run this code without installing anything on your computer. You can find more information about what is available at the Tiingo website, but this notebook will let you play around with real code samples in your browser.
If you've never used jupyter before, I recommend this tutorial from Datacamp.
Basic Setup
First, you'll need to provide your API key as a string in the cell below. If you forget to do this, the notebook cannot run. You can find your API key by visiting this link and logging in to your Tiingo account.
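One possible alternative to pasting the key into the notebook is to read it from an environment variable. This is just a sketch; it assumes you export TIINGO_API_KEY in your shell before starting Jupyter.
import os
TIINGO_API_KEY = os.environ.get('TIINGO_API_KEY', '')
if not TIINGO_API_KEY:
    raise Exception("Please provide a valid Tiingo API key!")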
End of explanation
# Get Ticker Metadata for the stock "GOOGL"
ticker_metadata = client.get_ticker_metadata("GOOGL")
print(ticker_metadata)
# Get latest prices, based on 3+ sources as JSON, sampled weekly
ticker_price = client.get_ticker_price("GOOGL", frequency="weekly")
print(ticker_price)
Explanation: Minimal Data Fetching Examples
Below are the code samples from the README.rst along with sample outputs, but this is just the tip of the iceberg of this library's capabilities.
End of explanation
# Get historical GOOGL prices from August 2017 as JSON, sampled daily
historical_prices = client.get_ticker_price("GOOGL",
fmt='json',
startDate='2017-08-01',
endDate='2017-08-31',
frequency='daily')
# Print the first 2 days of data, but you will find more days of data in the overall historical_prices variable.
print(historical_prices[:2])
# See what tickers are available
# Check what tickers are available, as well as metadata about each ticker
# including supported currency, exchange, and available start/end dates.
tickers = client.list_stock_tickers()
print(tickers[:2])
Explanation: For values of frequency:
You can specify any of the end of day frequencies (daily, weekly, monthly, and annually) or any intraday frequency for both the get_ticker_price and get_dataframe methods. Weekly frequencies resample to the end of day on Friday, monthly frequencies resample to the last day of the month, and annually frequencies resample to the end of day on 12-31 of each year. The intraday frequencies are specified using an integer followed by Min or Hour, for example 30Min or 1Hour.
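For example, an intraday request could look like the sketch below (hypothetical dates; intraday data availability depends on your Tiingo subscription):
intraday_prices = client.get_ticker_price("GOOGL",
                                          fmt='json',
                                          startDate='2018-05-30',
                                          endDate='2018-05-31',
                                          frequency='30Min')
print(intraday_prices[:2])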
End of explanation
# Search news articles about particular tickers
# This method will not work error if you do not have a paid Tiingo account associated with your API key.
articles = client.get_news(tickers=['GOOGL', 'AAPL'],
tags=['Laptops'],
sources=['washingtonpost.com'],
startDate='2017-01-01',
endDate='2017-08-31')
# Display a sample article
articles[0]
Explanation: For each ticker, you may access
ticker: The ticker's abbreviation
exchange: Which exchange it's traded on
priceCurrency: Currency for the prices listed for this ticker
startDate/ endDate: Start / End Date for Tiingo's data about this ticker
Note that Tiingo is constantly adding new data sources, so the values returned from this call will probably change every day.
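Because each entry carries these fields, simple filtering is straightforward. A sketch, assuming each entry is a plain dict with the fields listed above:
nyse_usd = [t for t in tickers if t.get('exchange') == 'NYSE' and t.get('priceCurrency') == 'USD']
print(len(nyse_usd))
print(nyse_usd[:2])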
End of explanation
# Boilerplate to make pandas charts render inline in jupyter
import matplotlib.pyplot as plt
%matplotlib inline
# Scan some historical Google data
ticker_history_df = client.get_dataframe("GOOGL",
startDate='2018-05-15',
endDate='2018-05-31',
frequency='daily')
# Check which columns you'd like to work with
ticker_history_df.columns
# Browse the first few entries of the raw data
ticker_history_df.head(5)
# View your columns of data on separate plots
columns_to_plot = ['adjClose', 'adjOpen']
ticker_history_df[columns_to_plot].plot.line(subplots=True)
# Plot multiple columns of data in the same chart
ticker_history_df[columns_to_plot].plot.line(subplots=False)
# Make a histogram to see what typical trading volumes are
ticker_history_df.volume.hist()
# You may also fetch data for multiple tickers at once, as long as you are only interested in 1 metric
# at a time. If you need to compare multiple metrics, you must fetch the data 1 ticker at a time.
# Here we compare Google with Apple's trading volume.
multiple_ticker_history_df = client.get_dataframe(['GOOGL', 'AAPL'],
frequency='weekly',
metric_name='volume',
startDate='2018-01-01',
endDate='2018-07-31')
# Compare the companies: AAPL's volume seems to be much more volatile in the first half of 2018.
multiple_ticker_history_df.plot.line()
Explanation: Basic Pandas Dataframe Examples
Pandas is a popular python library for data analysis and manipulation. We provide out-of-the-box support for returning responses from Tiingo as Python Dataframes. If you are unfamiliar with pandas, I recommend the Mode Notebooks python data analysis tutorial.
End of explanation |
151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
General info on the fullCyc dataset (as it pertains to SIPSim validation)
Simulating 12C gradients
Determining if simulated taxon abundance distributions resemble the true distributions
Simulation parameters to infer from dataset
Step1: Init
Step2: Loading phyloseq list datasets
Step3: Infer abundance distribution of each bulk soil community
distribution fit
Step4: Relative abundance of most abundant taxa
Step5: Making a community file for the simulations
Step6: Adding reference genome taxon names
Step7: Writing file
Step8: parsing amp-Frag file to match comm file | Python Code:
%load_ext rpy2.ipython
%%R
workDir = '/home/nick/notebook/SIPSim/dev/fullCyc/'
physeqDir = '/home/nick/notebook/SIPSim/dev/fullCyc_trim/'
physeqBulkCore = 'bulk-core_trm'
physeqSIP = 'SIP-core_unk_trm'
ampFragFile = '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags_kde.pkl'
Explanation: General info on the fullCyc dataset (as it pertains to SIPSim validation)
Simulating 12C gradients
Determining if simulated taxon abundance distributions resemble the true distributions
Simulation parameters to infer from dataset:
Infer total richness of bulk soil community
richness of starting community
Infer abundance distribution of bulk soil community
NO: distribution fit
INSTEAD: using relative abundances of bulk soil community
Get distribution of total OTU abundances per fraction
Number of sequences per sample
User variables
End of explanation
import os
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(phyloseq)
library(fitdistrplus)
library(sads)
%%R
dir.create(workDir, showWarnings=FALSE)
Explanation: Init
End of explanation
%%R
# bulk core samples
F = file.path(physeqDir, physeqBulkCore)
physeq.bulk = readRDS(F)
#physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk %>% names
%%R
# SIP core samples
F = file.path(physeqDir, physeqSIP)
physeq.SIP = readRDS(F)
#physeq.SIP.m = physeq.SIP %>% sample_data
physeq.SIP %>% names
Explanation: Loading phyloseq list datasets
End of explanation
%%R
physeq2otu.long = function(physeq){
df.OTU = physeq %>%
transform_sample_counts(function(x) x/sum(x)) %>%
otu_table %>%
as.matrix %>%
as.data.frame
df.OTU$OTU = rownames(df.OTU)
df.OTU = df.OTU %>%
gather('sample', 'abundance', 1:(ncol(df.OTU)-1))
return(df.OTU)
}
df.OTU.l = lapply(physeq.bulk, physeq2otu.long)
df.OTU.l %>% names
#df.OTU = do.call(rbind, lapply(physeq.bulk, physeq2otu.long))
#df.OTU$Day = gsub('.+\\.D([0-9]+)\\.R.+', '\\1', df.OTU$sample)
#df.OTU %>% head(n=3)
%%R -w 450 -h 400
lapply(df.OTU.l, function(x) descdist(x$abundance, boot=1000))
%%R
fitdists = function(x){
fit.l = list()
#fit.l[['norm']] = fitdist(x$abundance, 'norm')
fit.l[['exp']] = fitdist(x$abundance, 'exp')
fit.l[['logn']] = fitdist(x$abundance, 'lnorm')
fit.l[['gamma']] = fitdist(x$abundance, 'gamma')
fit.l[['beta']] = fitdist(x$abundance, 'beta')
# plotting
plot.legend = c('exponential', 'lognormal', 'gamma', 'beta')
par(mfrow = c(2,1))
denscomp(fit.l, legendtext=plot.legend)
qqcomp(fit.l, legendtext=plot.legend)
# fit summary
gofstat(fit.l, fitnames=plot.legend) %>% print
return(fit.l)
}
fits.l = lapply(df.OTU.l, fitdists)
fits.l %>% names
%%R
# getting summaries for lognormal fits
get.summary = function(x, id='logn'){
summary(x[[id]])
}
fits.s = lapply(fits.l, get.summary)
fits.s %>% names
%%R
# listing estimates for fits
df.fits = do.call(rbind, lapply(fits.s, function(x) x$estimate)) %>% as.data.frame
df.fits$Sample = rownames(df.fits)
df.fits$Day = gsub('.+D([0-9]+)\\.R.+', '\\1', df.fits$Sample) %>% as.numeric
df.fits
%%R -w 650 -h 300
ggplot(df.fits, aes(Day, meanlog,
ymin=meanlog-sdlog,
ymax=meanlog+sdlog)) +
geom_pointrange() +
geom_line() +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R
# mean of estimaates
apply(df.fits, 2, mean)
Explanation: Infer abundance distribution of each bulk soil community
distribution fit
End of explanation
%%R -w 800
df.OTU = do.call(rbind, df.OTU.l) %>%
mutate(abundance = abundance * 100) %>%
group_by(sample) %>%
mutate(rank = row_number(desc(abundance))) %>%
ungroup() %>%
filter(rank < 10)
ggplot(df.OTU, aes(rank, abundance, color=sample, group=sample)) +
geom_point() +
geom_line() +
labs(y = '% rel abund')
Explanation: Relative abundance of most abundant taxa
End of explanation
%%R -w 800 -h 300
df.OTU = do.call(rbind, df.OTU.l) %>%
mutate(abundance = abundance * 100) %>%
group_by(sample) %>%
mutate(rank = row_number(desc(abundance))) %>%
group_by(rank) %>%
summarize(mean_abundance = mean(abundance)) %>%
ungroup() %>%
mutate(library = 1,
mean_abundance = mean_abundance / sum(mean_abundance) * 100) %>%
rename('rel_abund_perc' = mean_abundance) %>%
dplyr::select(library, rel_abund_perc, rank) %>%
as.data.frame
df.OTU %>% nrow %>% print
ggplot(df.OTU, aes(rank, rel_abund_perc)) +
geom_point() +
geom_line() +
labs(y = 'mean % rel abund')
Explanation: Making a community file for the simulations
End of explanation
ret = !SIPSim KDE_info -t /home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags_kde.pkl
ret = ret[1:]
ret[:5]
%%R
F = '/home/nick/notebook/SIPSim/dev/fullCyc_trim//ampFrags_kde_amplified.txt'
ret = read.delim(F, sep='\t')
ret = ret$genomeID
ret %>% length %>% print
ret %>% head
%%R
ret %>% length %>% print
df.OTU %>% nrow
%%R -i ret
# randomize
ret = ret %>% sample %>% sample %>% sample
# adding to table
df.OTU$taxon_name = ret[1:nrow(df.OTU)]
df.OTU = df.OTU %>%
dplyr::select(library, taxon_name, rel_abund_perc, rank)
df.OTU %>% head
%%R
#-- debug -- #
df.gc = read.delim('~/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags_parsed_kde_info.txt',
                   sep='\t')
top.taxa = df.gc %>%
filter(KDE_ID == 1, median > 1.709, median < 1.711) %>%
dplyr::select(taxon_ID) %>%
mutate(taxon_ID = taxon_ID %>% sample) %>%
head
top.taxa = top.taxa$taxon_ID %>% as.vector
top.taxa
%%R
#-- debug -- #
p1 = df.OTU %>%
filter(taxon_name %in% top.taxa)
p2 = df.OTU %>%
head(n=length(top.taxa))
p3 = anti_join(df.OTU, rbind(p1, p2), c('taxon_name' = 'taxon_name'))
df.OTU %>% nrow %>% print
p1 %>% nrow %>% print
p2 %>% nrow %>% print
p3 %>% nrow %>% print
p1 = p2$taxon_name
p2$taxon_name = top.taxa
df.OTU = rbind(p2, p1, p3)
df.OTU %>% nrow %>% print
df.OTU %>% head
Explanation: Adding reference genome taxon names
End of explanation
%%R
F = file.path(workDir, 'fullCyc_12C-Con_trm_comm.txt')
write.table(df.OTU, F, sep='\t', quote=FALSE, row.names=FALSE)
cat('File written:', F, '\n')
Explanation: Writing file
End of explanation
ampFragFile
!ls -thlc
!tail -n +2 /home/nick/notebook/SIPSim/dev/fullCyc/fullCyc_12C-Con_trm_comm.txt | \
cut -f 2 > /home/nick/notebook/SIPSim/dev/fullCyc/fullCyc_12C-Con_trm_comm_taxa.txt
outFile = os.path.splitext(ampFragFile)[0] + '_parsed.pkl'
!SIPSim KDE_parse \
$ampFragFile \
/home/nick/notebook/SIPSim/dev/fullCyc/fullCyc_12C-Con_trm_comm_taxa.txt \
> $outFile
print 'File written {}'.format(outFile)
!SIPSim KDE_info -n $outFile
Explanation: parsing amp-Frag file to match comm file
End of explanation |
152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Here I'm process by chunks the entire region.
Step1: Algorithm for processing Chunks
Make a partition given the extent
Produce a tuple (minx ,maxx,miny,maxy) for each element on the partition
Calculate the semivariogram for each chunk and save it in a dataframe
Plot Everything
Do the same with a mMatern Kernel
Step2: For efficiency purposes we restrict to 10 variograms
Step3: Take an average of the empirical variograms also with the envelope.
We will use the group by directive on the field lags | Python Code:
# Load Biospytial modules and etc.
%matplotlib inline
import sys
sys.path.append('/apps')
import django
django.setup()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
## Use the ggplot style
plt.style.use('ggplot')
from external_plugins.spystats import tools
%run ../testvariogram.py
section.shape
Explanation: Here I'm process by chunks the entire region.
End of explanation
minx,maxx,miny,maxy = getExtent(new_data)
maxy
## If prefered a fixed number of chunks
N = 100
xp,dx = np.linspace(minx,maxx,N,retstep=True)
yp,dy = np.linspace(miny,maxy,N,retstep=True)
### Distance interval
print(dx)
print(dy)
## Let's build the partition
## If prefered a fixed size of chunk
ds = 300000 #step size (meters)
xp = np.arange(minx,maxx,step=ds)
yp = np.arange(miny,maxy,step=ds)
dx = ds
dy = ds
N = len(xp)
xx,yy = np.meshgrid(xp,yp)
Nx = xp.size
Ny = yp.size
#coordinates_list = [ (xx[i][j],yy[i][j]) for i in range(N) for j in range(N)]
coordinates_list = [ (xx[i][j],yy[i][j]) for i in range(Ny) for j in range(Nx)]
from functools import partial
tuples = map(lambda (x,y) : partial(getExtentFromPoint,x,y,step_sizex=dx,step_sizey=dy)(),coordinates_list)
chunks = map(lambda (mx,Mx,my,My) : subselectDataFrameByCoordinates(new_data,'newLon','newLat',mx,Mx,my,My),tuples)
## Here we can filter based on a threshold
threshold = 20
chunks_non_empty = filter(lambda df : df.shape[0] > threshold ,chunks)
len(chunks_non_empty)
lengths = pd.Series(map(lambda ch : ch.shape[0],chunks_non_empty))
lengths.plot.hist()
Explanation: Algorithm for processing Chunks
Make a partition given the extent
Produce a tuple (minx ,maxx,miny,maxy) for each element on the partition
Calculate the semivariogram for each chunk and save it in a dataframe
Plot Everything
Do the same with a mMatern Kernel
End of explanation
smaller_list = chunks_non_empty[:10]
variograms =map(lambda chunk : tools.Variogram(chunk,'residuals1',using_distance_threshold=200000),smaller_list)
vars = map(lambda v : v.calculateEmpirical(),variograms)
vars = map(lambda v : v.calculateEnvelope(num_iterations=50),variograms)
Explanation: For efficiency purposes we restrict to 10 variograms
End of explanation
envslow = pd.concat(map(lambda df : df[['envlow']],vars),axis=1)
envhigh = pd.concat(map(lambda df : df[['envhigh']],vars),axis=1)
variogram = pd.concat(map(lambda df : df[['variogram']],vars),axis=1)
lags = vars[0][['lags']]
meanlow = list(envslow.apply(lambda row : np.mean(row),axis=1))
meanhigh = list(envhigh.apply(np.mean,axis=1))
meanvariogram = list(variogram.apply(np.mean,axis=1))
results = pd.DataFrame({'meanvariogram':meanvariogram,'meanlow':meanlow,'meanhigh':meanhigh})
result_envelope = pd.concat([lags,results],axis=1)
meanvg = tools.Variogram(section,'residuals1')
meanvg.plot()
meanvg.envelope.columns
result_envelope.columns
result_envelope.columns = ['lags','envhigh','envlow','variogram']
meanvg.envelope = result_envelope
meanvg.plot(refresh=False)
Explanation: Take an average of the empirical variograms also with the envelope.
We will use the group by directive on the field lags
End of explanation |
153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
Step1: Read in an Image
Step10: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are cv2.inRange(), cv2.fillPoly(), cv2.line(), cv2.addWeighted(), cv2.cvtColor(), cv2.imwrite(), and cv2.bitwise_and().
Step11: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
Step12: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
Proposed pipeline for detecting lane lines from an image
Read in an image
Select lane colors only
Convert input image to grayscale
Apply a Gaussian noise kernel filter to blur the grayscale image
Use Canny edge detection to detect edges from the blurred image
Select region of interest on the edge image for further processing
Use Hough transformation to detect lines from masked image
Draw the lines on the original image
Step13: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos: solidWhiteRight.mp4 and solidYellowLeft.mp4
Step14: Let's try the one with the solid white lane on the right first ...
Step16: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
Step18: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
Step20: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! | Python Code:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
import pdb
Explanation: Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips.
Import Packages
End of explanation
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
Explanation: Read in an Image
End of explanation
import math
def grayscale(img):
    """
    Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale
    (assuming your grayscaled image is called 'gray')
    you should call plt.imshow(gray, cmap='gray')
    """
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
    """Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
    """Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
    """
    Applies an image mask.
    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    """
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=10, vertices = np.array([[(10,539),(440, 330), (520, 330), (950,539)]], dtype=np.int32)):
    """
    NOTE: this is the function you might want to use as a starting point once you want to
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).
    Think about things like separating line segments by their
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line. Then, you can average the position of each of
    the lines and extrapolate to the top and bottom of the lane.
    This function draws `lines` with `color` and `thickness`.
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below
    """
# Adjust the draw line function: average the position of each of the lines
# and extrapolate to the top and bottom of the lane
# First, average the slope and intercept of lines
right_slope = []
right_intercept = []
left_slope = []
left_intercept = []
for line in lines:
for x1,y1,x2,y2 in line:
#cv2.line(img, (x1, y1), (x2, y2), color, thickness)
# Calculate the slope and intercept of the line
slope = float((y2-y1)/(x2-x1))
intercept = y1 - slope * x1
# Separating line segments by their slope, removing nearly horizontal lines with slope_threshold_low and
# nearly vertical lines with slope_threshold_high
slope_threshold_low = 20.0 * np.pi/180.0
slope_threshold_high = 80.0 * np.pi/180.0
if not (np.isnan(slope) or np.isinf(slope)):
if slope < -slope_threshold_low and slope > -slope_threshold_high: # negative, belong to right lane line
right_slope.append(slope)
right_intercept.append(intercept)
if slope > slope_threshold_low and slope < slope_threshold_high: # positive, belong to left lane line
left_slope.append(slope)
left_intercept.append(intercept)
right_slope_avg = np.mean(right_slope)
right_intercept_avg = np.mean(right_intercept)
left_slope_avg = np.mean(left_slope)
left_intercept_avg = np.mean(left_intercept)
# Then, extrapolate to the top and bottom of the lane
if not (np.isnan(right_intercept_avg) or np.isnan(right_slope_avg)):
# right bottom
right_lane_y1 = vertices[0][3][1]
# print(right_lane_y1,right_intercept_avg,right_slope_avg)
right_lane_x1 = int((right_lane_y1 - right_intercept_avg) / right_slope_avg)
# right top
right_lane_y2 = vertices[0][2][1]
right_lane_x2 = int((right_lane_y2 - right_intercept_avg) / right_slope_avg)
# left bottom
left_lane_y1 = vertices[0][0][1]
left_lane_x1 = int((left_lane_y1 - left_intercept_avg) / left_slope_avg)
# right top
left_lane_y2 = vertices[0][1][1]
left_lane_x2 = int((left_lane_y2 - left_intercept_avg) / left_slope_avg)
# Draw the lanes
cv2.line(img, (right_lane_x1, right_lane_y1), (right_lane_x2, right_lane_y2), color, thickness)
cv2.line(img, (left_lane_x1, left_lane_y1), (left_lane_x2, left_lane_y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap, vertices):
    """
    `img` should be the output of a Canny transform.
    Returns an image with hough lines drawn.
    """
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines, vertices=vertices)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
    """
    `img` is the output of the hough_lines(), An image with lines drawn on it.
    Should be a blank image (all black) with lines drawn on it.
    `initial_img` should be the image before any processing.
    The result image is computed as follows:
    initial_img * α + img * β + λ
    NOTE: initial_img and img must be the same shape!
    """
return cv2.addWeighted(initial_img, α, img, β, λ)
def filter_white_yellow(image):
    """
    Applies a color selection, select only white and yellow colors.
    Only keeps white and yellow colors. The rest of the image is set to black.
    """
# change to hls color space for better color selection due to light conditions and noise
hls_image = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
# white color mask
lower = np.uint8([ 0, 200, 0])
upper = np.uint8([255, 255, 255])
white_mask = cv2.inRange(hls_image, lower, upper)
# yellow color mask
lower = np.uint8([ 10, 0, 100])
upper = np.uint8([ 40, 255, 255])
yellow_mask = cv2.inRange(hls_image, lower, upper)
# combine the mask
mask = cv2.bitwise_or(white_mask, yellow_mask)
masked = cv2.bitwise_and(image, image, mask = mask)
return masked
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
import os
os.listdir("test_images/")
Explanation: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.
# 0. Read in an image
files = os.listdir("test_images/")
image = mpimg.imread('test_images/' + files[1])
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.figure()
plt.imshow(image)
# 1. Select lane colors only
white_yellow = filter_white_yellow(image)
# 2. Convert input image to grayscale
gray = grayscale(white_yellow)
# 3. Apply a Gaussian noise kernel filter to blur the grayscale image
kernel_size = 5
blur_gray = gaussian_blur(gray, kernel_size)
# 4. Use Canny edge detection to detect edges from the blurred image
low_threshold = 50
high_threshold = 150
edges = canny(blur_gray, low_threshold, high_threshold)
# 5. Select region of interest on the edge image for further processing
vertices = np.array([[(10,image.shape[0]-1),(image.shape[1]/2 - 40, image.shape[0]*0.61), (image.shape[1]/2 + 40, image.shape[0]*0.61), (image.shape[1]-10,image.shape[0]-1)]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
# Plot region for tuning parameters
region = np.copy(image)*0
for i in range(0,3):
cv2.line(region,tuple(vertices[0][i]),tuple(vertices[0][i+1]),(0,0,255),5) #draw lines with blue and size = 10
cv2.line(region,tuple(vertices[0][3]),tuple(vertices[0][0]),(0,0,255),5)
region_image = weighted_img(region, image, α=0.8, β=1., λ=0.)
plt.figure()
plt.imshow(region_image)
# 6. Use Hough transformation to detect lines from masked image
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 25 # minimum number of votes (intersections in Hough grid cell)
min_line_len = 5 #minimum number of pixels making up a line
max_line_gap = 10 # maximum gap in pixels between connectable line segments
lines = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap, vertices)
# 7. Draw the lines on the original image
lines_image = weighted_img(lines, image, α=0.8, β=1., λ=0.)
plt.figure()
plt.imshow(lines_image)
Explanation: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
Proposed pipeline for detecting lane lines from an image:
Read in an image
Select lane colors only
Convert input image to grayscale
Apply a Gaussian noise kernel filter to blur the grayscale image
Use Canny edge detection to detect edges from the blurred image
Select region of interest on the edge image for further processing
Use Hough transformation to detect lines from masked image
Draw the lines on the original image
End of explanation
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
import imageio
imageio.plugins.ffmpeg.download()
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
# 1. Select lane colors only
white_yellow = filter_white_yellow(image)
# 2. Convert input image to grayscale
gray = grayscale(white_yellow)
# 3. Apply a Gaussian noise kernel filter to blur the grayscale image
kernel_size = 5
blur_gray = gaussian_blur(gray, kernel_size)
# 4. Use Canny edge detection to detect edges from the blurred image
low_threshold = 50
high_threshold = 150
edges = canny(blur_gray, low_threshold, high_threshold)
# 5. Select region of interest on the edge image for further processing
vertices = np.array([[(10,image.shape[0]-1),(image.shape[1]/2 - 40, image.shape[0]*0.61), (image.shape[1]/2 + 40, image.shape[0]*0.61), (image.shape[1]-10,image.shape[0]-1)]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
# 6. Use Hough transformation to detect lines from masked image
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 25 # minimum number of votes (intersections in Hough grid cell)
min_line_len = 5 #minimum number of pixels making up a line
max_line_gap = 10 # maximum gap in pixels between connectable line segments
lines = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap, vertices)
# 7. Draw the lines on the original image
lines_image = weighted_img(lines, image, α=0.8, β=1., λ=0.)
# plt.figure()
# plt.imshow(lines_image)
return lines_image
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out this forum post for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.
End of explanation
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(white_output))
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(yellow_output))
Explanation: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
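One possible way to do this - shown here only as a hedged sketch, not the required solution - is to split the Hough segments by slope sign, average each group's slope and intercept, and extrapolate between the bottom of the image and the top of the region of interest (the 0.61 * height apex used above):
import numpy as np
import cv2

def draw_lane_lines(img, lines, color=(255, 0, 0), thickness=10):
    """Average Hough segments per lane and extrapolate one solid line each."""
    left, right = [], []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue  # skip vertical segments to avoid division by zero
            slope = (y2 - y1) / (x2 - x1)
            intercept = y1 - slope * x1
            if slope < -0.3:          # negative slope -> left lane (image y axis points down)
                left.append((slope, intercept))
            elif slope > 0.3:         # positive slope -> right lane
                right.append((slope, intercept))
    y_bottom = img.shape[0] - 1
    y_top = int(img.shape[0] * 0.61)  # apex of the region of interest used above
    for fits in (left, right):
        if not fits:
            continue
        slope, intercept = np.mean(fits, axis=0)
        x_bottom = int((y_bottom - intercept) / slope)
        x_top = int((y_top - intercept) / slope)
        cv2.line(img, (x_bottom, y_bottom), (x_top, y_top), color, thickness)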
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML(
<video width="960" height="540" controls>
<source src="{0}">
</video>
.format(challenge_output))
Explanation: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation |
154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Country Converter
The country converter (coco) is a Python package to convert country names into different classifications and between different naming versions. Internally it uses regular expressions to match country names.
Installation
The package is available on PyPI, use
pip install country_converter --upgrade
from the command line or use your preferred python package installer.
The source code is available on github
Step1: Given a list of countries in a certain classification
Step2: This can be converted to any classification provided by
Step3: or
Step4: The parameter "src" specifies the input format and "to" the output format. Possible values for both parameters can be found by
Step5: Internally, these names are the column header of the underlying pandas dataframe (see below).
The convert function can also be accessed without initiating the CountryConverter. This can be useful for one-time usage. For multiple matches, initiating the CountryConverter avoids re-reading the file that provides the matching data for each conversion.
Step6: Some of the classifications can be accessed by some shortcuts. For example
Step7: Handling missing data
The return value for non-found entries is by default set to 'not found'
Step8: but can also be reset to something else
Step9: Alternatively, the non-found entries can be passed through by passing None to not_found
Step10: To extend the underlying dataset, an additional dataframe (or file) can be passed.
Step11: As an alternative to an ad hoc dataframe, additional data files can be passed. These must have the same format as the basic data set.
An example can be found here
Step12: The passed data (file or dataframe) must at least contain the headers 'name_official', 'name_short' and 'regex'. Of course, if the additional data shall be used for a conversion to any other field, these must also be included.
Additionally passed data always overwrites the existing one.
This can be used to adjust coco for datasets with wrong country names.
For example, assuming a dataset erroneously switched the ISO2 codes for India (IN) and Indonesia (ID) (therefore assuming 'ID' for India and 'IN' for Indonesia), one can accommodate for that by
Step13: Regular expression matching
The input parameter "src" can be set to "regex" to use regular expression matching for a given country list. For example
Step14: The regular expressions can also be used to match any list of countries to any other. For example
Step15: If the regular expression matches several times, all results are given as a list and a warning is generated
Step16: The parameter "enforce_sublist" can be set to ensure consistent output
Step17: You get a warning if one of the names couldn't be found
Step18: And the value for non-found countries can be specified
Step19: This can also be used to pass the not found country to the new classification
Step20: Internals
Within the new instance, the raw data for the conversion is saved within a pandas dataframe.
This dataframe can be accessed directly with
Step21: This dataframe can be extended in both directions. The only requirement is to provide unique values for name_short, name_official and regex.
Internally, the data is saved in country_data.txt as tab-separated values (utf-8 encoded).
Of course, all pandas indexing and matching methods can be used. For example, to get new OECD members since 1995 present in a list | Python Code:
import country_converter as coco
converter = coco.CountryConverter()
Explanation: Country Converter
The country converter (coco) is a Python package to convert country names into different classifications and between different naming versions. Internally it uses regular expressions to match country names.
Installation
The package is available on PyPI, use
pip install country_converter --upgrade
from the command line or use your preferred python package installer.
The source code is available on github: https://github.com/konstantinstadler/country_converter
Conversion
The country converter provides one main class which is used for the conversion:
End of explanation
iso3_codes = ['USA', 'VUT', 'TKL', 'AUT', 'AFG', 'ALB']
Explanation: Given a list of countries in a certain classification:
End of explanation
converter.convert(names = iso3_codes, src = 'ISO3', to = 'name_official')
Explanation: This can be converted to any classification provided by:
End of explanation
converter.convert(names = iso3_codes, src = 'ISO3', to = 'continent')
Explanation: or
End of explanation
converter.valid_class
Explanation: The parameter "src" specifies the input format and "to" the output format. Possible values for both parameters can be found by:
End of explanation
converter.convert(names = iso3_codes, src = 'ISO3', to = 'ISO2')
Explanation: Internally, these names are the column header of the underlying pandas dataframe (see below).
The convert function can also be accessed without initiating the CountryConverter. This can be useful for one-time usage. For multiple matches, initiating the CountryConverter avoids re-reading the file that provides the matching data for each conversion.
End of explanation
converter.EU27
converter.OECDas('ISO2')
Explanation: Some of the classifications can be accessed by some shortcuts. For example:
End of explanation
iso3_codes_missing = ['ABC', 'AUT', 'XXX']
converter.convert(iso3_codes_missing, src='ISO3')
Explanation: Handling missing data
The return value for non-found entries is by default set to 'not found':
End of explanation
converter.convert(iso3_codes_missing, src='ISO3', not_found='missing')
Explanation: but can also be reset to something else:
End of explanation
converter.convert(iso3_codes_missing, src='ISO3', not_found=None)
Explanation: Alternatively, the non-found entries can be passed through by passing None to not_found:
End of explanation
import pandas as pd
add_data = pd.DataFrame.from_dict({
'name_short' : ['xxx country', 'abc country'],
'name_official' : ['The XXX country', 'The ABC country'],
'regex' : ['xxx country', 'abc country'],
'ISO3': ['xxx', 'abc']}
)
add_data
extended_converter = coco.CountryConverter(additional_data=add_data)
extended_converter.convert(iso3_codes_missing, src='ISO3', to='name_short')
Explanation: To extend the underlying dataset, an additional dataframe (or file) can be passed.
End of explanation
# extended_converter = coco.CountryConverter(additional_data=path/to/datafile)
Explanation: As an alternative to an ad hoc dataframe, additional data files can be passed. These must have the same format as the basic data set.
An example can be found here:
https://github.com/konstantinstadler/country_converter/tree/master/tests/custom_data_example.txt
The custom data example contains the ISO3 code mapping for Romania before 2002 and switches the regex matching for Congo between DR Congo and Congo Republic.
To use it, pass the path to the additional country file:
End of explanation
switched_converter = coco.CountryConverter(additional_data=pd.DataFrame.from_dict({
'name_short' : ['India', 'Indonesia'],
'name_official' : ['India', 'Indonesia'],
'regex' : ['india', 'indonesia'],
'ISO2': ['ID', 'IN']}))
converter.convert('IN', src='ISO2', to='name_short')
switched_converter.convert('ID', src='ISO2', to='name_short')
Explanation: The passed data (file or dataframe) must at least contain the headers 'name_official', 'name_short' and 'regex'. Of course, if the additional data shall be used for a conversion to any other field, these must also be included.
Additionally passed data always overwrites the existing one.
This can be used to adjust coco for datasets with wrong country names.
For example, assuming a dataset erroneously switched the ISO2 codes for India (IN) and Indonesia (ID) (therefore assuming 'ID' for India and 'IN' for Indonesia), one can accommodate for that by:
End of explanation
some_names = ['United Rep. of Tanzania', 'Cape Verde', 'Burma', 'Iran (Islamic Republic of)', 'Korea, Republic of', "Dem. People's Rep. of Korea"]
coco.convert(names = some_names, src = "regex", to = "name_short")
Explanation: Regular expression matching
The input parameter "src" can be set to "regex" to use regular expression matching for a given country list. For example:
End of explanation
match_these = ['norway', 'united_states', 'china', 'taiwan']
master_list = ['USA', 'The Swedish Kingdom', 'Norway is a Kingdom too', 'Peoples Republic of China', 'Republic of China' ]
coco.match(match_these, master_list)
Explanation: The regular expressions can also be used to match any list of countries to any other. For example:
End of explanation
match_these = ['norway', 'united_states', 'china', 'taiwan']
master_list = ['USA', 'The Swedish Kingdom', 'Norway is a Kingdom too', 'Peoples Republic of China', 'Taiwan, province of china', 'Republic of China' ]
coco.match(match_these, master_list)
Explanation: If the regular expression matches several times, all results are given as a list and a warning is generated:
End of explanation
coco.match(match_these, master_list, enforce_sublist = True)
Explanation: The parameter "enforce_sublist" can be set to ensure consistent output:
End of explanation
match_these = ['norway', 'united_states', 'china', 'taiwan', 'some other country']
master_list = ['USA', 'The Swedish Kingdom', 'Norway is a Kingdom too', 'Peoples Republic of China', 'Republic of China' ]
coco.match(match_these, master_list)
Explanation: You get a warning if one of the names couldn't be found:
End of explanation
coco.match(match_these, master_list, not_found = 'its not there')
Explanation: And the value for non-found countries can be specified:
End of explanation
coco.match(match_these, master_list, not_found = None)
Explanation: This can also be used to pass the not found country to the new classification:
End of explanation
converter.data.head()
Explanation: Internals
Within the new instance, the raw data for the conversion is saved within a pandas dataframe.
This dataframe can be accessed directly with:
End of explanation
some_countries = ['Australia', 'Belgium', 'Brazil', 'Bulgaria', 'Cyprus', 'Czech Republic', 'Denmark', 'Estonia', 'Finland', 'France', 'Germany', 'Greece', 'Hungary', 'India', 'Indonesia', 'Ireland', 'Italy', 'Japan', 'Latvia', 'Lithuania', 'Luxembourg', 'Malta', 'Romania', 'Russia', 'Turkey', 'United Kingdom', 'United States']
converter.data[(converter.data.OECD >= 1995) & converter.data.name_short.isin(some_countries)].name_short
Explanation: This dataframe can be extended in both directions. The only requirement is to provide unique values for name_short, name_official and regex.
Internally, the data is saved in country_data.txt as tab-separated values (utf-8 encoded).
Of course, all pandas indexing and matching methods can be used. For example, to get new OECD members since 1995 present in a list:
End of explanation |
155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Case study
Step1: 3. Normalization of dataset A with entry sensing
Step2: 3. Normalization of dataset B with blind normalization | Python Code:
# Setup of general parameters for the recovery experiment.
n_restarts = 10
rank = 6
n_measurements = 2800
shape = (50, 70) # samples, features
missing_fraction = 0.1
noise_amplitude = 2.0
m_blocks_size = 5 # size of each block
correlation_threshold = 0.75
correlation_strength = 1.0
bias_model = 'image'
# Creation of the true signal for both datasets.
truth = DataSimulated(shape, rank, bias_model=bias_model, correlation_threshold=correlation_threshold, m_blocks_size=m_blocks_size, noise_amplitude=noise_amplitude, correlation_strength=correlation_strength, missing_fraction=missing_fraction)
true_bias = truth.d['sample']['true_bias']
true_bias_unshuffled = truth.d['sample']['true_bias_unshuffled']
true_signal = truth.d['sample']['signal']
true_signal_unshuffled = truth.d['sample']['signal_unshuffled']
true_correlations = {'sample': truth.d['sample']['true_correlations'], 'feature': truth.d['feature']['true_correlations']}
true_correlations_unshuffled = {'sample': truth.d['sample']['true_correlations_unshuffled'], 'feature': truth.d['feature']['true_correlations_unshuffled']}
true_pairs = {'sample': truth.d['sample']['true_pairs'], 'feature': truth.d['feature']['true_pairs']}
true_pairs_unshuffled = {'sample': truth.d['sample']['true_pairs_unshuffled'], 'feature': truth.d['feature']['true_pairs_unshuffled']}
true_directions = {'sample': truth.d['sample']['true_directions'], 'feature': truth.d['feature']['true_directions']}
true_stds = {'sample': truth.d['sample']['true_stds'], 'feature': truth.d['feature']['true_stds']}
# Creation of the corrupted signal for both datasets.
mixed = truth.d['sample']['mixed']
show_absolute(true_signal_unshuffled, kind='Signal', unshuffled=True, map_backward=truth.map_backward)
show_absolute(true_signal, kind='Signal')
show_dependence_structure(true_correlations, 'sample')
show_dependence_structure(true_correlations_unshuffled, 'sample', unshuffled=True, map_backward=truth.map_backward)
show_dependence_structure(true_correlations, 'feature')
show_dependence_structure(true_correlations_unshuffled, 'feature', unshuffled=True, map_backward=truth.map_backward)
show_dependences(true_signal, true_pairs, 'sample')
show_dependences(true_signal, true_pairs, 'feature')
show_independences(true_signal, true_pairs, 'sample')
show_independences(true_signal, true_pairs, 'feature')
show_absolute(true_bias_unshuffled, unshuffled=True, map_backward=truth.map_backward_bias, kind='Bias', vmin=-1.5, vmax=1.5)
show_absolute(true_bias, kind='Bias', vmin=-1.5, vmax=1.5)
# Here the white dots are missing values as common in real data.
show_absolute(mixed, kind='Mixed')
show_dependences(mixed, true_pairs, 'sample')
show_dependences(mixed, true_pairs, 'feature')
Explanation: Case study: Data normalization for unknown confounding factors!
Outline
Description of common use cases.
Creation of two corrupted datasets (A and B).
Application of matrix recovery.
Dataset A with entry sensing.
Dataset B with blind normalization.
Discussion of results and advantages.
1. Use cases
You have to work with two datasets that are corrupted by an unknown number of confounding factors. Both datasets consist of a matrix of samples and features. For example, customers and products with a rating of satisfaction, or location and time of temperature measurements across the globe, or the change in value of stocks at closing time at the financial markets. Thus, values in the matrix are continuous and can range from negative to positive.
Luckily, for dataset A you were able to determine the true values for a small subset of matrix entries, for example through quantitative measurement standards or time-intensive in-depth analysis. Thus, the recovery of confounding factors is similar to a matrix recovery problem solvable through entry sensing, as the observed values minus the true values give the necessary entries of the bias matrix of confounding factors to be recovered.
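In code, forming those entry-sensing measurements is as simple as the following hypothetical sketch (known_idx is an illustrative set of known (row, column) positions, and mixed / true_signal are the arrays created above):
import numpy as np
known_idx = ([0, 3, 7], [2, 5, 1])                         # hypothetical known entry positions
bias_entries = mixed[known_idx] - true_signal[known_idx]   # observed minus true = bias entries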
For dataset B it is more challenging, as you were not able to determine any true values for its entries. However, instead you know with certainty that several of the samples and several of the features are strongly correlated and that you are likely to be able to identify those, as the corruption through the confounding factors is not stronger than the underlying signal. Thus the problem can be approached by blind normalization.
In order to remove the unknown confounding factors, several assumptions have to be satisfied for datasets A and B. First of all, the bias matrix to be recovered must lie on a sufficiently low-dimensional manifold, such as one modelled by a low-rank matrix. Secondly, the dataset must satisfy certain incoherence requirements. If both assumptions are satisfied, the otherwise NP-hard recovery problem can be solved efficiently in the framework of compressed sensing.
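As a tiny illustration of the low-rank assumption (same shape and rank as the experiment above, but otherwise made-up numbers):
import numpy as np
rng = np.random.default_rng(0)
n_samples, n_features, r = 50, 70, 6
U = rng.normal(size=(n_samples, r))
V = rng.normal(size=(r, n_features))
bias = U @ V                           # a (50, 70) matrix with rank only 6
print(np.linalg.matrix_rank(bias))     # -> 6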
2. Creation of dataset A and B
End of explanation
# Construct measurements from known entries.
operator = LinearOperatorEntry(n_measurements)
measurements = operator.generate(true_bias)
# Construct cost function.
cost = Cost(measurements['A'], measurements['y'], sparsity=1)
# Recover the bias.
solver = ConjugateGradientSolver(mixed, cost.cost_func, guess_func, rank, guess_noise_amplitude=noise_amplitude, verbosity=0)
results = solver.recover()
# Recovery performance statistics.
recovery_performance(mixed, cost.cost_func, truth.d['sample']['true_bias'], results['estimated_signal'], truth.d['sample']['signal'], results['estimated_bias'])
show_absolute(results['estimated_bias'], kind='Bias', unshuffle=True, map_backward=truth.map_backward_bias, vmin=-1.5, vmax=1.5)
show_absolute(results['guess_X'], kind='Bias', unshuffle=True, map_backward=truth.map_backward_bias, vmin=-0.1, vmax=0.1)
Explanation: 3. Normalization of dataset A with entry sensing
End of explanation
possible_measurements(shape, missing_fraction, m_blocks_size=m_blocks_size)
# Prior information estimated from the corrputed signal and used for blind recovery.
signal_characterists = estimate_partial_signal_characterists(mixed, correlation_threshold, true_pairs=true_pairs, true_directions=true_directions, true_stds=true_stds)
estimated_correlations = {'sample': signal_characterists['sample']['estimated_correlations'], 'feature': signal_characterists['feature']['estimated_correlations']}
show_threshold(estimated_correlations, correlation_threshold, 'sample')
show_threshold(estimated_correlations, correlation_threshold, 'feature')
# Construct measurements from corrupted signal and its estimated partial characteristics.
operator = LinearOperatorCustom(n_measurements)
measurements = operator.generate(signal_characterists)
# Construct cost function.
cost = Cost(measurements['A'], measurements['y'], sparsity=2)
# Recover the bias.
solver = ConjugateGradientSolver(mixed, cost.cost_func, guess_func, rank, guess_noise_amplitude=noise_amplitude, verbosity=0)
results = solver.recover()
recovery_performance(mixed, cost.cost_func, truth.d['sample']['true_bias'], results['estimated_signal'], truth.d['sample']['signal'], results['estimated_bias'])
show_absolute(results['estimated_bias'], kind='Bias', unshuffle=True, map_backward=truth.map_backward_bias, vmin=-1.5, vmax=1.5)
show_absolute(results['guess_X'], kind='Bias', unshuffle=True, map_backward=truth.map_backward_bias, vmin=-0.1, vmax=0.1)
# Here red dots are the corrupted signal, green the clean signal, and blue the recovered signal.
# Note: missing blue dots indicate missing values.
show_recovery(mixed, results['guess_X'], true_signal, results['estimated_signal'], true_pairs['sample'], signal_characterists['sample']['estimated_pairs'], true_stds['sample'], signal_characterists['sample']['estimated_stds'], true_directions['sample'], signal_characterists['sample']['estimated_directions'], n_pairs=5, n_points=50)
Explanation: 3. Normalization of dataset B with blind normalization
End of explanation |
156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hand Following example
In this notebook, you will use Pypot and an Inverse Kinematics toolbox to make Torso's hands follow each other.
Your Torso has two arms, and you can use simple methods to get and set the position of each hand.
Requirements
You will need a fully functioning torso, either IRL or in simulator (V-REP).
More info here.
The experiment
To be more precise, we will tell the right hand to keep a constant distance with the moving left hand, like on the picture above
Step1: Then, create your Pypot robot
Step2: Initialize your robot positions to 0
Step3: The left arm must be compliant (so you can move it), and the right arm must be active
Step5: Following the left hand
To follow the left hand, the script will do the following steps
Step6: Now, do this repeatedly in a loop | Python Code:
import time
import numpy as np
from pypot.creatures import PoppyTorso
Explanation: Hand Following example
In this notebook, you will use Pypot and an Inverse Kinematics toolbox to make Torso's hands follow each other.
Your Torso has two arms, and you can use simple methods to get and set the position of each hand.
Requirements
You will need a fully functioning torso, either IRL or in simulator (V-REP).
More info here.
The experiment
To be more precise, we will tell the right hand to keep a constant distance with the moving left hand, like on the picture above :
The left arm will be compliant, so you can move it and watch the right arm following it.
Setting up the robot
We begin by configuring the robot, to fit our needs for the experiment.
Begin with some useful imports :
End of explanation
poppy = PoppyTorso()
Explanation: Then, create your Pypot robot :
End of explanation
for m in poppy.motors:
m.goto_position(0, 2)
Explanation: Initialize your robot positions to 0 :
End of explanation
# Left arm is compliant (so it can be moved by hand), right arm is active (stiff)
for m in poppy.l_arm:
m.compliant = True
for m in poppy.r_arm:
m.compliant = False
# The torso itself must not be compliant
for m in poppy.torso:
m.compliant = False
Explanation: The left arm must be compliant (so you can move it), and the right arm must be active
End of explanation
def follow_hand(poppy, delta):
Tell the right hand to follow the left hand
right_arm_position = poppy.l_arm_chain.end_effector + delta
poppy.r_arm_chain.goto(right_arm_position, 0.5, wait=True)
Explanation: Following the left hand
To follow the left hand, the script will do the following steps :
* Find the 3D position of the left hand, with Forward Kinematics
* Assign this position ( + a gap to avoid collision) as the target of the right hand
* Tell the right hand to reach this target
That's exactly what we do in the follow_hand function:
End of explanation
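# NOTE: target_delta and delay_time are used below but never defined in this excerpt.
# The values here are illustrative assumptions only (a ~15 cm gap and a short pause):
target_delta = np.array([0, -0.15, 0])  # assumed 3D offset (meters) kept between the hands
delay_time = 0.02                       # assumed pause (seconds) between updates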
try:
while True:
follow_hand(poppy, target_delta)
time.sleep(delay_time)
# Close properly the object when finished
except KeyboardInterrupt:
poppy.close()
Explanation: Now, do this repeatedly in a loop :
End of explanation |
157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tramp Steamer Problem
Al Duke 3/11/2022
<p style="text-align: center"> <b>Figure 1. Tramp Steamer</b> </p>
Step1: I will encode the graph as a python dictionary. Each vertex is a key. The value for this key is another dictionary. This dictionary has keys for each connected vertex and values that are tuples containing profit and time for each connection. For example vertex A is connected to B with a profit of 12 and travel time of 3. To make the rest of the code clearer, I will decompose the graph into a cost graph (negative profits) and a time graph.
Step4: To shorten the search, I want to find only the unique cycles in this graph.
Step9: I have 4 unique cycles in this graph. 'Unique' means I eliminate cycles that follow the same path but start at a different vertex. For example, A -> C -> A is equivalent to C -> A -> C.
Sequential Search
The first method I will use to find the minimum cost cycle is called sequential search. I included a couple of classes to handle the negative cycle case plus some helper functions for path weight, cost-to-time ratio and a brute force shortest cycle function.
Step11: After 3 passes the sequential algorithm finds an optimal cycle with a profit/time ratio of 4.818. Worst case time complexity for sequential search is O(|cycles|) or 4 in this case.
Binary Search
The second algorithm mentioned in [1] is binary search. | Python Code:
graph = {
"A": {"B": (12, 3), "C":(25, 6), },
"B": {"C": (11, 2), },
"C": {"A": (30, 6), "D": (16, 4), },
"D": {"A": (12, 2), },
}
Explanation: Tramp Steamer Problem
Al Duke 3/11/2022
<p style="text-align: center"> <b>Figure 1. Tramp Steamer</b> </p>
Imagine you own a cargo ship that transports goods between ports for profit. There are several ports and each voyage between ports results in profit p and takes time t. The port-port pairs are determined by goods available at port A and desired at port B. Not all ports are connected and connections are one-way. You want to find the routes that generate the most profit in the shortest time.
We can address this problem using Graph Theory. For this problem, we have a directed, weighted graph (digraph) which includes vertices V (ports) and edges E (connections) which have profits p and traversal times t. Example 1 below shows a simple graph.
<p style="text-align: center"> <b>Figure 2. Example Graph</b> </p>
I will represent ports(Vertices) as A through D. Note the connections (edges or arcs) between them have arrows indicating direction of travel. That makes this a "directed graph". A is connected to B but there is no edge from B to A. Each edge is labeled with a profit (p) and time (t) value. We need to find the routes (cycles) in this digraph. A cycle (W) starts and ends at the same vertex. For example, A -> B -> C -> A or A -> C -> A. Profits can be negative and must be integers. Times must be positive integers.
This type of problem is also known as the "Minimum cost-to-time ratio problem". Our goal is to find a directed cycle W with maximum ratio of profit to travel time. However, the algorithms I will use below seek to minimize the objective. To cast this as a minimization problem I will define cost as $c_{i,j}=-p_{i,j}$. I will then seek to minimize:
$$\mu (W) = \frac{\sum_{(i,j)\in W}^{}c_{i,j}}{\sum_{(i,j)\in W}^{}t_{i,j} }$$
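For instance, for the cycle A -> B -> C -> A in Figure 2, the profits sum to $12 + 11 + 30 = 53$ and the times to $3 + 2 + 6 = 11$, so $\mu(W) = -53/11 \approx -4.818$; minimizing $\mu$ is therefore the same as maximizing the profit-to-time ratio (about 4.818 here).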
Using Python code I will capture the details of the digraph in Figure 2.
End of explanation
# decompose graph into separate cost and time graphs
graph_c = {key: {key2: -val2[0] for (key2, val2) in value.items()}
for (key, value) in graph.items()}
graph_c
graph_t = {key: {key2: val2[1] for (key2, val2) in value.items()}
for (key, value) in graph.items()}
graph_t
Explanation: I will encode the graph as a python dictionary. Each vertex is a key. The value for this key is another dictionary. This dictionary has keys for each connected vertex and values that are tuples containing profit and time for each connection. For example vertex A is connected to B with a profit of 12 and travel time of 3. To make the rest of the code clearer, I will decompose the graph into a cost graph (negative profits) and a time graph.
End of explanation
from itertools import permutations
def all_vertices(graph):
Return a set of all vertices in a graph.
graph -- a directed graph.
vertices = set()
for v in graph.keys():
vertices.add(v)
for u in graph[v].keys():
vertices.add(u)
return vertices
def is_edge(graph, tail, head):
Return True if the edge (tail)->(head) is present in a graph.
graph -- a directed graph.
tail -- a vertex.
head -- a vertex.
return (tail in graph) and (head in graph[tail])
V = tuple(all_vertices(graph))
n = len(V)
all_paths = [path for path_n in range(1, n + 1)
for path in permutations(V, path_n)
if all( is_edge(graph, tail, head)
for (tail, head) in zip(path, path[1:])
) ]
all_cycles = [(*path, path[0]) for path in all_paths
if is_edge(graph, path[-1], path[0])]
cycles = []
cycle_sets = []
for cycle in all_cycles:
edges = set(x[0]+x[1] for x in zip(cycle, cycle[1:]))
if edges not in cycle_sets:
cycle_sets.append(edges)
cycles.append(cycle)
cycles
Explanation: To shorten the search, I want to find only the unique cycles in this graph.
End of explanation
class NoShortestPathError(Exception):
pass
class NegativeCycleError(NoShortestPathError):
def __init__(self, weight, cycle):
self.weight = weight
self.cycle = cycle
def __str__(self):
return f"Weight {self.weight}: {self.cycle}"
def path_weight(path, graph):
Returns the sum of the weights along a path or cycle.
return sum(graph[tail][head] for (tail, head) in zip(path, path[1:]))
def cost_time_ratio(cycle, graph_c, graph_t):
Find cost to time ratio for a cycle. (tramp steamer problem objective)
Parameters
----------
cycle : list
directed path that ends where it began
graph_c : dict
graph with cost values
graph_t : dict
graph with time values
Returns
-------
Ratio of net cost to net travel time for a cycle.
w = sum(graph_c[tail][head] for (tail, head) in zip(cycle, cycle[1:]) )
t = sum(graph_t[tail][head] for (tail, head) in zip(cycle, cycle[1:]) )
return w/t
def shortest_cycle_bf(graph, cycles):
Find the shortest cycle in cycles using a brute force approach.
If a negative cycle exists, raise NegativeCycleError. Otherwise
return shortest cycle.
Parameters
----------
graph : dictionary
A directed, weighted graph.
cycles : list of tuples
List of cycles contained in graph.
Returns
-------
Tuple with weight and path of the cycle with the lowest weight.
for cycle in cycles:
weight = path_weight(cycle, graph)
if weight < 0:
raise NegativeCycleError(weight, cycle)
return min( (path_weight(path, graph), path) for path in cycles)
def seq_search(graph, cycles, mu=100):
Perform sequential search of cycles in graph using le = ce - mu * te for edge
weights. If a Negative weight cycle is found, mu is updated based on this cycle and the
process is repeated until no negative cycles are found by the shortest cycle step.
Parameters
----------
graph : dictionary
A directed, weighted graph.
cycles : list of tuples
List of cycles contained in graph.
mu : float
Initial cost to weight ratio. Expected upper bound.
Returns
-------
Cycle with the minimum cost to time ratio and the number of loops required
to find it.
loops = 0 # for algo comparison
while True:
loops += 1 # for algo comparison
# Compute 'length' values for edges: l_e = c_e - mu * t_e
graph_le = graph.copy()
for (key, dic) in graph_le.items():
graph_le[key] = {keyy: -val[0] - mu * val[1]
for (keyy, val) in dic.items()}
try: # find shortest cycle based on le weights
cycle_weight, W = shortest_cycle_bf(graph_le, cycles)
except NegativeCycleError as error: # found negative cost cycle w.r.t. le)
mu = cost_time_ratio(error.cycle, graph_c, graph_t)
# print('mu ', mu, 'weight ', error.weight)
else: # Found a zero cost cycle w.r.t. le.
optimum = W
break
return optimum, loops
opt, loops = seq_search(graph, cycles)
print("loops: ",loops)
print("Optimal path: ", opt)
print("Optimal cost ratio: {:.3f}".format(
cost_time_ratio(opt, graph_c, graph_t)))
Explanation: I have 4 unique cycles in this graph. 'Unique' means I eliminate cycles that follow the same path but start at a different vertex. For example, A -> C -> A is equivalent to C -> A -> C.
Sequential Search
The first method I will use to find the minimum cost cycle is called sequential search. I included a couple of classes to handle the negative cycle case plus some helper functions for path weight, cost-to-time ratio and a brute force shortest cycle function.
End of explanation
def binary_search(graph, cycles):
Perform binary search of cycles in graph using an initial search
range of -C to C with C = max(all costs in graph). Search for shortest
cycle using le = ce - mu * te for the edge weights. If a negative
cycle is found, set upper end of range to the current cost ratio
estimate (mu) otherwise set the lower end of the range to mu.
Terminate the search if the current cycle weight is less than
the precision.
Parameters
----------
graph : dict
a directed, weighted graph.
cycles : list of tuples
List of cycles contained in graph..
Returns
-------
optimum : tuple
Cycle with the minimum cost to time ratio.
loops : int
Number of loops required to find minima.
# define limits for starting range of search
C = max([max(val.values()) for key, val in graph_c.items()])
mu_lower = C # lower limit to search for min cycle cost ratio.
# Since c_i are all negative, need to reverse the signs
mu_upper = -C # upper limit to search for min cycle cost ratio
loops = 0
while True:
loops += 1
mu = (mu_lower + mu_upper) / 2
#Compute 'length' values for edges: l_e = c_e - mu * t_e
graph_le = graph.copy()
for (key, dic) in graph_le.items():
graph_le[key] = {keyy: -val[0] - mu * val[1]
for (keyy, val) in dic.items()}
try: # Solve shortest path problem with lengths le
cycle_weight, W = shortest_cycle_bf(graph_le, cycles)
# print("c_weight: {:.3g}, range: {:.3g} to {:.3g}, mu: {}".
# format(cycle_weight, mu_lower, mu_upper, mu))
except NegativeCycleError as error: # negative cost cycle w.r.t. le': mu_star < mu
mu_upper = mu
W = error.cycle
cycle_weight = error.weight
# print("*c_weight: {:.3g}, range: {:.3g} to {:.3g}, mu: {}".
# format(error.weight, mu_lower, mu_upper, mu))
else: # mu_star > mu
mu_lower = mu
finally:
# precision of 1/sum(t_e) suffices to solve the problem exactly
precision = 1/ sum(graph_t[tail][head] for (tail, head) in
zip(W, W[1:]) )
if abs(cycle_weight) < precision: # zero cost cycle, mu_star == mu
optimum = W
break
return optimum, loops
opt, loops = binary_search(graph, cycles)
print("loops: ",loops)
print("Optimal path: ", opt)
print("Optimal cost ratio: {:.3f}".format(
cost_time_ratio(opt, graph_c, graph_t)))
Explanation: After 3 passes the sequential algorithm finds an optimal cycle with a profit/time ratio of 4.818. Worst case time complexity for sequential search is O(|cycles|) or 4 in this case.
Binary Search
The second algorithm mentioned in [1] is binary search.
End of explanation |
158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setting up a PEST interface from MODFLOW6 using the PstFrom class
The PstFrom class is a generalization of the prototype PstFromFlopy class. The generalization in PstFrom means users need to explicitly define what files are to be parameterized and what files contain model outputs to treat as observations. Two primary types of files are supported
Step1: An existing MODFLOW6 model is in the directory freyberg_mf6. Let's check it out
Step2: You can see that all the input array and list data for this model have been written "externally" - this is key to using the PstFrom class.
Let's quickly viz the model top just to remind us of what we are dealing with
Step3: Now let's copy those files to a temporary location just to make sure we don't goof up those original files
Step4: Now we need just a tiny bit of info about the spatial discretization of the model - this is needed to work out separation distances between parameters for building a geostatistical prior covariance matrix later.
Here we will load the flopy sim and model instance just to help us define some quantities later - flopy is not required to use the PstFrom class.
Step5: Here we use the simple SpatialReference pyemu implements to help us spatially locate parameters
Step6: Now we can instantiate a PstFrom class instance
Step7: Observations
So now we have a PstFrom instance, but it's just an empty container at this point, so we need to add some PEST interface "observations" and "parameters". Let's start with observations using MODFLOW6 heads. These are stored in heads.csv
Step8: The main entry point for adding observations is (surprise) PstFrom.add_observations(). This method works on the list-type observation output file. We need to tell it what column is the index column (can be string if there is a header or int if no header) and then what columns contain quantities we want to monitor (e.g. "observe") in the control file - in this case we want to monitor all columns except the index column
Step9: We can see that it returned a dataframe with lots of useful info
Step10: Nice! We also have a PEST-style instruction file for those obs.
Now let's do the same for SFR observations
Step11: Sweet as! Now that we have some observations, let's add parameters!
Parameters
In the PstFrom realm, all parameters are set up as multipliers against existing array and list files. This is a good thing because it lets us preserve the existing model inputs and treat them as the mean of the prior parameter distribution. It also lets us use mixtures of spatial and temporal scales in the parameters to account for varying scales of uncertainty.
Since we are all sophisticated and recognize the importance of expressing spatial and temporal uncertainty (e.g. heterogeneity) in the model inputs (and the corresponding spatial correlation in those uncertain inputs), let's use geostatistics to express uncertainty. To do that we need to define "geostatistical structures". As we will see, defining parameter correlation is optional and only matters for the prior parameter covariance matrix and prior parameter ensemble
Step12: Now let's get the idomain array to use as a zone array - this keeps us from setting up parameters in inactive model cells
Step13: First, let's set up parameters for static properties - HK, VK, SS, SY. To do that, we need to find all the external array files that contain these static arrays. Let's do just HK slowly so as to explain what is happening
Step14: So those are the existing model input arrays for HK. Notice we found the files in the temporary model workspace - PstFrom will copy all those files to the new model workspace for us in a bit...
Let's set up grid-scale multiplier parameters for HK in layer 1
Step15: What just happened there? Well, we told our PstFrom instance to set up a set of grid-scale multiplier parameters (par_type="grid") for the array file "freyberg6.npf_k_layer1.txt". We told it to prefix the parameter names with "hk_layer_1" and also to make the parameter group "hk_layer_1" (pargp="hk_layer_1"). We specified two sets of bound information
Step16: So those might look like pretty redic parameter names, but they contain heaps of metadata to help you post process things later...
Pilot points in PstFrom
You can add pilot points in two ways: PstFrom can generate them for you on a regular grid, or you can supply PstFrom with existing pilot point location information. First let's look at the simple regular-grid case - when you change par_type to "pilotpoints", by default, a regular grid of pilot points is set up using a default pp_space value of 10, which is every 10th row and column. We can override this default like
Step17: Now let's look at how to supply existing pilot point locations - to do this, we simply change the pp_space arg to a filename or a dataframe. The dataframe must have "name", "x", and "y" as columns - it can have more, but must have those. If you supply pp_space as a str, it is assumed to be a filename and the file extension determines how it is read
Step18: Normally, you would probably put more thought into pilot point locations, or maybe not! Now we call add_parameters and just pass the shapefile name for pp_space
Step19: Extra pre- and post-processing functions
You will also certainly need to include some additional processing steps. These are supported through PstFrom.pre_py_cmds and PstFrom.post_py_cmds, which are lists of pre- and post-model-run python commands, and PstFrom.pre_sys_cmds and PstFrom.post_sys_cmds, which are lists of pre- and post-model-run system commands (these are wrapped in pyemu.os_utils.run()). But what if your additional steps are actually an entire python function? Well, we got that too: PstFrom.add_py_function(). For example, let's say you have a post processing function called process_model_outputs() in a python source file called helpers.py
Step20: We see that the file helpers.py contains two functions (could be more..). We want to call process_model_outputs() each time pest(++) runs the model as a post processing function. This function will yield some quantities that we want to record with an instruction. So, first, we can call the function write_ins_file() in helpers.py to build the instruction file for the special processed outputs that process_model_outputs() will produce (in this trivial example, process_model_outputs() just generates random numbers...). Note that the instruction file needs to be in the template_ws directory since it is a pest interface file.
Let's make sure our new instruction file exists...
Step21: First, we can add the function process_model_outputs() to the forward run script like this
Step22: This will copy the function process_model_outputs() from helpers.py into the forward run script that PstFrom will write. But we still need to add the instruction file into the mix - let's do that!
Step23: that pst_path argument tells PstFrom that the instruction file will be in the directory where pest(++) is running
build the control file, pest interface files, and forward run script
At this point, we have some parameters and some observations, so we can create a control file
Step24: Oh snap! we did it! thanks for playing...
Well, there is a little more to the story. Like how do we run this thing? Lucky for you, PstFrom writes a forward run script for you! Say Wat?!
Step25: Not bad! We have everything we need, including our special post processing function...except we didn't set a command to run the model! Doh!
Let's add that
Step26: That's better! See the pyemu.os_utils.run(r'mf6') line in main()?
We also see that we now have a function called process_model_outputs() added to the forward run script and the function is being called after the model run call.
Generating geostatistical prior covariance matrices and ensembles
So that's nice, but how do we include spatial correlation in these parameters? It's simple
Step27: let's also check out the super awesome prior parameter covariance matrix and prior parameter ensemble helpers in PstFrom
Step28: Da-um! that's sweet ez! We can see the first block of HK parameters in the upper left as "uncorrelated" (diagonal only) entries, then the second block of HK parameters (lower right) that are spatially correlated.
List file parameterization
Let's add parameters for well extraction rates (always uncertain, rarely estimated!)
Step29: There are several ways to approach wel file parameterization. One way is to add a constant multiplier parameter for each stress period (that is, one scaling parameter that is applied to all active wells for each stress period). Let's see how that looks, but first one important point
Step30: See the little offset in the lower right? there are a few parameters there in a small block
Step31: Those are our constant-in-space but correlated in time wel rate parameters - snap!
To complement those stress-period-level constant multipliers, let's add a set of multipliers, one for each pumping well, that is broadcast across all stress periods (and let's add spatial correlation for these)
Step32: The upper left block is the constant-in-space but correlated-in-time wel rate multiplier parameters, while the lower right block is the constant-in-time but correlated-in-space wel rate multiplier parameters. Boom!
After building the control file
At this point, we can make some additional, problem-specific modifications that would typically be done. Note that any modifications made after calling PstFrom.build_pst() will only exist in memory - you need to call pf.pst.write() to record these changes to the control file on disk. Also note that if you call PstFrom.build_pst() again after making some changes, these changes will be lost.
Additional parameters in existing template files
In many cases, you will have additional odd-ball parameters that aren't in list or array file format that you want to include in the pest control file. To demonstrate how this works, let's make up a template file
Step33: Tying parameters
Let's say you want to tie some parameters in the control file. This happens through the Pst.parameter_data dataframe. Here let's tie the first parameter in the control file to the second
Step34: Manipulating parameter bounds
While you can pass parameter bound information to PstFrom.add_parameters(), in many cases, you may want to change the bounds for individual parameters before building the prior parameter covariance matrix and/or generating the prior parameter ensemble. This can be done through the PstFrom.pst.parameter_data dataframe
Step35: Setting observation values and weights
So far, we have automated the setup for pest(++). But one critical task remains and there is not an easy way to automate it
Step36: Industrial strength control file setup
This functionality mimics the demonstration in the PstFrom manuscript | Python Code:
import os
import shutil
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pyemu
import flopy
Explanation: Setting up a PEST interface from MODFLOW6 using the PstFrom class
The PstFrom class is a generalization of the prototype PstFromFlopy class. The generalization in PstFrom means users need to explicitly define what files are to be parameterized and what files contain model outputs to treat as observations. Two primary types of files are supported: arrays and lists. Array files contain a data type (usually floating points) while list files will have a few columns that contain index information and then columns of floating point values.
End of explanation
org_model_ws = os.path.join('freyberg_mf6')
os.listdir(org_model_ws)
Explanation: An existing MODFLOW6 model is in the directory freyberg_mf6. Let's check it out:
End of explanation
id_arr = np.loadtxt(os.path.join(org_model_ws,"freyberg6.dis_idomain_layer3.txt"))
top_arr = np.loadtxt(os.path.join(org_model_ws,"freyberg6.dis_top.txt"))
top_arr[id_arr==0] = np.nan
plt.imshow(top_arr)
Explanation: You can see that all the input array and list data for this model have been written "externally" - this is key to using the PstFrom class.
Let's quickly viz the model top just to remind us of what we are dealing with:
End of explanation
tmp_model_ws = "temp_pst_from"
if os.path.exists(tmp_model_ws):
shutil.rmtree(tmp_model_ws)
shutil.copytree(org_model_ws,tmp_model_ws)
os.listdir(tmp_model_ws)
Explanation: Now let's copy those files to a temporary location just to make sure we don't goof up those original files:
End of explanation
sim = flopy.mf6.MFSimulation.load(sim_ws=tmp_model_ws)
m = sim.get_model("freyberg6")
Explanation: Now we need just a tiny bit of info about the spatial discretization of the model - this is needed to work out separation distances between parameters for building a geostatistical prior covariance matrix later.
Here we will load the flopy sim and model instance just to help us define some quantities later - flopy is not required to use the PstFrom class.
End of explanation
sr = pyemu.helpers.SpatialReference.from_namfile(
os.path.join(tmp_model_ws, "freyberg6.nam"),
delr=m.dis.delr.array, delc=m.dis.delc.array)
sr
Explanation: Here we use the simple SpatialReference pyemu implements to help us spatially locate parameters
End of explanation
template_ws = "freyberg6_template"
pf = pyemu.utils.PstFrom(original_d=tmp_model_ws, new_d=template_ws,
remove_existing=True,
longnames=True, spatial_reference=sr,
zero_based=False,start_datetime="1-1-2018")
Explanation: Now we can instantiate a PstFrom class instance
End of explanation
df = pd.read_csv(os.path.join(tmp_model_ws,"heads.csv"),index_col=0)
df
Explanation: Observations
So now that we have a PstFrom instance, but its just an empty container at this point, so we need to add some PEST interface "observations" and "parameters". Let's start with observations using MODFLOW6 head. These are stored in heads.csv:
End of explanation
hds_df = pf.add_observations("heads.csv",insfile="heads.csv.ins",index_cols="time",
use_cols=list(df.columns.values),prefix="hds",)
hds_df
Explanation: The main entry point for adding observations is (surprise) PstFrom.add_observations(). This method works on the list-type observation output file. We need to tell it what column is the index column (can be string if there is a header or int if no header) and then what columns contain quantities we want to monitor (e.g. "observe") in the control file - in this case we want to monitor all columns except the index column:
End of explanation
[f for f in os.listdir(template_ws) if f.endswith(".ins")]
Explanation: We can see that it returned a dataframe with lots of useful info: the observation names that were formed (obsnme), the values that were read from heads.csv (obsval) and also some generic weights and group names. At this point, no control file has been created, we have simply prepared to add these observations to the control file later.
End of explanation
df = pd.read_csv(os.path.join(tmp_model_ws, "sfr.csv"), index_col=0)
sfr_df = pf.add_observations("sfr.csv", insfile="sfr.csv.ins", index_cols="time", use_cols=list(df.columns.values))
sfr_df
Explanation: Nice! We also have a PEST-style instruction file for those obs.
Now let's do the same for SFR observations:
End of explanation
v = pyemu.geostats.ExpVario(contribution=1.0,a=1000)
grid_gs = pyemu.geostats.GeoStruct(variograms=v, transform='log')
temporal_gs = pyemu.geostats.GeoStruct(variograms=pyemu.geostats.ExpVario(contribution=1.0,a=60))
grid_gs.plot()
print("spatial variogram")
temporal_gs.plot()
"temporal variogram (x axis in days)"
Explanation: Sweet as! Now that we have some observations, let's add parameters!
Parameters
In the PstFrom realm, all parameters are set up as multipliers against existing array and list files. This is a good thing because it lets us preserve the existing model inputs and treat them as the mean of the prior parameter distribution. It also lets us use mixtures of spatial and temporal scales in the parameters to account for varying scales of uncertainty.
Since we are all sophisticated and recognize the importance of expressing spatial and temporal uncertainty (e.g. heterogeneity) in the model inputs (and the corresponding spatial correlation in those uncertain inputs), let's use geostatistics to express uncertainty. To do that we need to define "geostatistical structures". As we will see, defining parameter correlation is optional and only matters for the prior parameter covariance matrix and prior parameter ensemble:
End of explanation
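To build some intuition for what those variogram ranges imply, here is a tiny, self-contained sketch (an illustration only - it assumes the common exponential correlation form exp(-h/a), which is what pyemu's ExpVario is understood to use) of how correlation decays with separation distance for a=1000:
import numpy as np
# assumed exponential correlation model: rho(h) = exp(-h / a)
a = 1000.0
for h in [100.0, 500.0, 1000.0, 3000.0]:
    print("separation {0:6.0f} -> correlation {1:.3f}".format(h, np.exp(-h / a)))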
ib = m.dis.idomain[0].array
Explanation: Now let's get the idomain array to use as a zone array - this keeps us from setting up parameters in inactive model cells:
End of explanation
hk_arr_files = [f for f in os.listdir(tmp_model_ws) if "npf_k_" in f and f.endswith(".txt")]
hk_arr_files
Explanation: First, let's set up parameters for static properties - HK, VK, SS, SY. To do that, we need to find all the external array files that contain these static arrays. Let's do just HK slowly so as to explain what is happening:
End of explanation
pf.add_parameters(filenames="freyberg6.npf_k_layer1.txt",par_type="grid",
par_name_base="hk_layer_1",pargp="hk_layer_1",zone_array=ib,
upper_bound=10.,lower_bound=0.1,ult_ubound=100,ult_lbound=0.01)
Explanation: So those are the existing model input arrays for HK. Notice we found the files in the temporary model workspace - PstFrom will copy all those files to the new model workspace for us in a bit...
Let's set up grid-scale multiplier parameters for HK in layer 1:
End of explanation
[f for f in os.listdir(template_ws) if f.endswith(".tpl")]
with open(os.path.join(template_ws,"hk_layer_1_inst0_grid.csv.tpl"),'r') as f:
for _ in range(2):
print(f.readline().strip())
Explanation: What just happened there? Well, we told our PstFrom instance to set up a set of grid-scale multiplier parameters (par_type="grid") for the array file "freyberg6.npf_k_layer1.txt". We told it to prefix the parameter names with "hk_layer_1" and also to make the parameter group "hk_layer_1" (pargp="hk_layer_1"). We specified two sets of bound information: upper_bound and lower_bound are the standard control file bounds, while ult_ubound and ult_lbound are bounds that are applied at runtime to the resulting (multiplied out) model input array - since we are using multipliers (and potentially, sets of multipliers - stay tuned), it is important to make sure we keep the resulting model input arrays within the range of realistic values.
If you inspect the contents of the working directory, you will see a new template file:
End of explanation
pf.add_parameters(filenames="freyberg6.npf_k_layer3.txt",par_type="pilotpoints",
par_name_base="hk_layer_1",pargp="hk_layer_1",zone_array=ib,
upper_bound=10.,lower_bound=0.1,ult_ubound=100,ult_lbound=0.01,
pp_space=5)
Explanation: So those might look like pretty redic parameter names, but they contain heaps of metadata to help you post process things later...
Pilot points in PstFrom
You can add pilot points in two ways: PstFrom can generate them for you on a regular grid, or you can supply PstFrom with existing pilot point location information. First let's look at the regular, simple stuff - when you change par_type to "pilotpoints", by default, a regular grid of pilot points is set up using a default pp_space value of 10, which is every 10th row and column. We can override this default like:
End of explanation
xmn = m.modelgrid.xvertices.min()
xmx = m.modelgrid.xvertices.max()
ymn = m.modelgrid.yvertices.min()
ymx = m.modelgrid.yvertices.max()
numpp = 20
xvals = np.random.uniform(xmn,xmx,numpp)
yvals = np.random.uniform(ymn, ymx, numpp)
pp_locs = pd.DataFrame({"x":xvals,"y":yvals})
pp_locs.loc[:,"zone"] = 1
pp_locs.loc[:,"name"] = ["pp_{0}".format(i) for i in range(numpp)]
pp_locs.loc[:,"parval1"] = 1.0
pyemu.pp_utils.write_pp_shapfile(pp_locs,os.path.join(template_ws,"pp_locs.shp"))
Explanation: Now let's look at how to supply existing pilot point locations - to do this, we simply change the pp_space arg to a filename or a dataframe. The dataframe must have "name", "x", and "y" as columns - it can have more, but must have those. If you supply pp_space as a str, it is assumed to be a filename and the extension is the guide: ".csv" for a dataframe, ".shp" for a shapefile (point-type) and everything else is assumed to be a pilot points file type. For example, here it is with a shapefile - first we will just make up some random pilot point locations and write those to a shapefile:
End of explanation
pf.add_parameters(filenames="freyberg6.npf_k_layer2.txt",par_type="pilotpoints",
par_name_base="hk_layer_1",pargp="hk_layer_1",zone_array=ib,
upper_bound=10.,lower_bound=0.1,ult_ubound=100,ult_lbound=0.01,
pp_space="pp_locs.shp")
Explanation: Normally, you would probably put more thought into pilot point locations, or maybe not! Now we call add_parameters and just pass the shapefile name for pp_space:
End of explanation
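For completeness, passing the dataframe itself (instead of the shapefile) is also supported, per the discussion above. A hedged sketch is shown below, commented out so it does not add yet another set of parameters to this particular setup (the par_name_base and pargp names are made up):
# pf.add_parameters(filenames="freyberg6.npf_k_layer2.txt", par_type="pilotpoints",
#                   par_name_base="hk_pp_demo", pargp="hk_pp_demo", zone_array=ib,
#                   upper_bound=10., lower_bound=0.1, pp_space=pp_locs)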
_ = [print(line.rstrip()) for line in open("helpers.py",'r').readlines()]
Explanation: Extra pre- and post-processing functions
You will also certainly need to include some additional processing steps. These are supported through the PstFrom.pre_py_cmds and PstFrom.post_py_cmds attributes, which are lists of pre and post model run python commands, and PstFrom.pre_sys_cmds and PstFrom.post_sys_cmds, which are lists of pre and post model run system commands (these are wrapped in pyemu.os_utils.run()). But what if your additional steps are actually an entire python function? Well, we got that too! PstFrom.add_py_function(). For example, let's say you have a post processing function called process_model_outputs() in a python source file called helpers.py:
End of explanation
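As a minimal sketch of those hook lists in action (commented out here so nothing is actually added to this forward run script; the command strings are made-up examples):
# pf.pre_py_cmds.append("print('starting model run')")
# pf.post_sys_cmds.append("echo model run finished")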
assert os.path.exists("special_outputs.dat.ins")
special_ins_filename = os.path.join(template_ws,"special_outputs.dat.ins")
shutil.copy2("special_outputs.dat.ins",special_ins_filename)
Explanation: We see that the file helpers.py contains two functions (could be more...). We want to call process_model_outputs() each time pest(++) runs the model, as a post processing function. This function will yield some quantities that we want to record with an instruction file. So, first, we can call the function write_ins_file() in helpers.py to build the instruction file for the special processed outputs that process_model_outputs() will produce (in this trivial example, process_model_outputs() just generates random numbers...). Note that the instruction file needs to be in the template_ws directory since it is a pest interface file.
Let's make sure our new instruction file exists...
End of explanation
pf.add_py_function("helpers.py","process_model_outputs()",is_pre_cmd=False)
Explanation: First, we can add the function process_model_outputs() to the forward run script like this:
End of explanation
out_file = special_ins_filename.replace(".ins","")
pf.add_observations_from_ins(ins_file=special_ins_filename,out_file=out_file,pst_path=".")
Explanation: This will copy the function process_model_outputs() from helpers.py into the forward run script that PstFrom will write. But we still need to add the instruction file into the mix - let's do that!
End of explanation
pst = pf.build_pst()
Explanation: That pst_path argument tells PstFrom that the instruction file will be in the directory where pest(++) is running.
Build the control file, pest interface files, and forward run script
At this point, we have some parameters and some observations, so we can create a control file:
End of explanation
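A quick illustrative check of what ended up in the control file container (npar and nobs are standard pyemu.Pst properties, used here on that assumption):
print(pst.npar, "parameters and", pst.nobs, "observations")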
[f for f in os.listdir(template_ws) if f.endswith(".py")]
_ = [print(line.rstrip()) for line in open(os.path.join(template_ws,"forward_run.py"))]
Explanation: Oh snap! We did it! Thanks for playing...
Well, there is a little more to the story. Like how do we run this thing? Lucky for you, PstFrom writes a forward run script for you! Say Wat?!
End of explanation
# only execute this block once!
pf.mod_sys_cmds.append("mf6")
pst = pf.build_pst()
_ = [print(line.rstrip()) for line in open(os.path.join(template_ws,"forward_run.py"))]
Explanation: Not bad! We have everything we need, including our special post processing function...except we didn't set a command to run the model! Doh!
Let's add that:
End of explanation
pf.add_parameters(filenames="freyberg6.npf_k_layer3.txt",par_type="grid",
par_name_base="hk_layer_3",pargp="hk_layer_3",zone_array=ib,
upper_bound=10.,lower_bound=0.1,ult_ubound=100,ult_lbound=0.01,
geostruct=grid_gs)
Explanation: That's better! See the pyemu.os_utils.run(r'mf6') line in main()?
We also see that we now have a function called process_model_outputs() added to the forward run script and the function is being called after the model run call.
Generating geostatistical prior covariance matrices and ensembles
So that's nice, but how do we include spatial correlation in these parameters? It's simple: just pass the geostruct arg to PstFrom.add_parameters()
End of explanation
pst = pf.build_pst()
cov = pf.build_prior()
x = cov.x.copy()
x[x<0.00001] = np.NaN
plt.imshow(x)
Explanation: Let's also check out the super awesome prior parameter covariance matrix and prior parameter ensemble helpers in PstFrom:
End of explanation
wel_files = [f for f in os.listdir(tmp_model_ws) if "wel_stress_period" in f and f.endswith(".txt")]
wel_files
pd.read_csv(os.path.join(tmp_model_ws,wel_files[0]),header=None)
Explanation: Da-um! that's sweet ez! We can see the first block of HK parameters in the upper left as "uncorrelated" (diagonal only) entries, then the second block of HK parameters (lower right) that are spatially correlated.
List file parameterization
Let's add parameters for well extraction rates (always uncertain, rarely estimated!)
End of explanation
# build up a container of stress period start datetimes - this will
# be used to specify the datetime of each multipler parameter
dts = pd.to_datetime(pf.start_datetime) + pd.to_timedelta(np.cumsum(sim.tdis.perioddata.array["perlen"]),unit='d')
for wel_file in wel_files:
# get the stress period number from the file name
kper = int(wel_file.split('.')[1].split('_')[-1]) - 1
pf.add_parameters(filenames=wel_file,par_type="constant",par_name_base="wel_cn",
pargp="wel_cn", upper_bound = 1.5, lower_bound=0.5,
datetime=dts[kper],geostruct=temporal_gs)
pst = pf.build_pst()
cov = pf.build_prior(fmt="none") # skip saving to a file...
x = cov.x.copy()
x[x==0] = np.NaN
plt.imshow(x)
Explanation: There are several ways to approach wel file parameterization. One way is to add a constant multiplier parameter for each stress period (that is, one scaling parameter that is applied to all active wells for each stress period). Let's see how that looks, but first one important point: if you use the same parameter group name (pargp) and the same geostruct, PstFrom will treat parameters set up across different calls to add_parameters() as correlated. In this case, we want to express temporal correlation in the well multiplier pars, so we use the same parameter group names and specify the datetime and geostruct args.
End of explanation
plt.imshow(x[-25:,-25:])
Explanation: See the little offset in the lower right? There are a few parameters there in a small block:
End of explanation
pf.add_parameters(filenames=wel_files,par_type="grid",par_name_base="wel_gr",
pargp="wel_gr", upper_bound = 1.5, lower_bound=0.5,
geostruct=grid_gs)
pst = pf.build_pst()
cov = pf.build_prior(fmt="none")
x = cov.x.copy()
x[x==0] = np.NaN
plt.imshow(x[-49:,-49:])
Explanation: Those are our constant-in-space but correlated in time wel rate parameters - snap!
To complement those stress-period-level constant multipliers, let's add a set of multipliers, one for each pumping well, that is broadcast across all stress periods (and let's add spatial correlation for these):
End of explanation
tpl_filename = os.path.join(template_ws,"special_pars.dat.tpl")
with open(tpl_filename,'w') as f:
f.write("ptf ~\n")
f.write("special_par1 ~ special_par1 ~\n")
f.write("special_par2 ~ special_par2 ~\n")
pf.pst.add_parameters(tpl_filename,pst_path=".")
Explanation: The upper left block is the constant-in-space but correlated-in-time wel rate multiplier parameters, while the lower right block is the constant-in-time but correlated-in-space wel rate multiplier parameters. Boom!
After building the control file
At this point, we can do some additional, problem-specific modifications that would typically be done. Note that any modifications made after calling PstFrom.build_pst() will only exist in memory - you need to call pf.pst.write() to record these changes to the control file on disk. Also note that if you call PstFrom.build_pst() again after making some changes, these changes will be lost.
Additional parameters in existing template files
In many cases, you will have additional odd-ball parameters that aren't in list or array file format that you want to include in the pest control file. To demonstrate how this works, let's make up a template file:
End of explanation
par = pf.pst.parameter_data
par.loc[pf.pst.par_names[0],"partrans"] = "tied"
par.loc[pf.pst.par_names[0],"partied"] = pf.pst.par_names[1]
Explanation: Tying parameters
Let's say you want to tie some parameters in the control file. This happens through the Pst.parameter_data dataframe. Here let's tie the first parameter in the control file to the second:
End of explanation
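The same pandas mechanics scale up if you ever need to tie a whole group at once - a hedged sketch (commented out; the group name is purely illustrative) follows:
# grp = par.loc[par.pargp == "hk_layer_1", "parnme"]
# par.loc[grp.iloc[1:], "partrans"] = "tied"
# par.loc[grp.iloc[1:], "partied"] = grp.iloc[0]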
par.loc[pf.pst.par_names[5:10],"parlbnd"]
par.loc[pf.pst.par_names[5:10],"parlbnd"] = 0.25
par.loc[pf.pst.par_names[5:10],"parlbnd"]
Explanation: Manipulating parameter bounds
While you can pass parameter bound information to PstFrom.add_parameters(), in many cases, you may want to change the bounds for individual parameters before building the prior parameter covariance matrix and/or generating the prior parameter ensemble. This can be done through the PstFrom.pst.parameter_data dataframe:
End of explanation
pe = pf.draw(num_reals=100,use_specsim=True)
pe.to_csv(os.path.join(template_ws,"prior.csv"))
print(pe.loc[:,pst.adj_par_names[0]])
pe.loc[:,pst.adj_par_names[0]]._df.hist()
Explanation: Setting observation values and weights
So far, we have automated the setup for pest(++). But one critical task remains and there is not an easy way to automate it: setting the actual observed values and weights in the * observation data section. PstFrom and Pst will both try to read existing model output files that correspond to instruction files and put those simulated values into the * observation data section for the observed values (the obsval quantity). However, if you have actual observation data and you want to use pest(++) to try to match these data, then you need to get these values into the * observation data section and you will probably also need to adjust the weight quantities as well. You can do this operation with pandas or you can save the control file in "version 2" format, which will write the * observation data section (along with the other sections) as a CSV file, which can be imported into any number of spreadsheet programs.
Generating a prior parameter ensemble
This is crazy easy - using the previously defined correlation structures, we can draw from the block diagonal covariance matrix (and use spectral simulation for the grid-scale parameters):
End of explanation
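Since the ensemble was written to a plain CSV just above, it can be sanity checked later with nothing more than pandas (illustrative only):
import os
import pandas as pd
reloaded = pd.read_csv(os.path.join(template_ws, "prior.csv"), index_col=0)
print(reloaded.shape)  # (realizations, parameters)
print(reloaded.iloc[:, 0].describe())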
# load the mf6 model with flopy to get the spatial reference
sim = flopy.mf6.MFSimulation.load(sim_ws=tmp_model_ws)
m = sim.get_model("freyberg6")
# work out the spatial rediscretization factor
redis_fac = m.dis.nrow.data / 40
# where the pest interface will be constructed
template_ws = tmp_model_ws.split('_')[1] + "_template"
# instantiate PstFrom object
pf = pyemu.utils.PstFrom(original_d=tmp_model_ws, new_d=template_ws,
remove_existing=True,
longnames=True, spatial_reference=m.modelgrid,
zero_based=False,start_datetime="1-1-2018")
# add observations from the sfr observation output file
df = pd.read_csv(os.path.join(tmp_model_ws, "sfr.csv"), index_col=0)
pf.add_observations("sfr.csv", insfile="sfr.csv.ins", index_cols="time",
use_cols=list(df.columns.values),
prefix="sfr")
# add observations for the heads observation output file
df = pd.read_csv(os.path.join(tmp_model_ws, "heads.csv"), index_col=0)
pf.add_observations("heads.csv", insfile="heads.csv.ins",
index_cols="time", use_cols=list(df.columns.values),
prefix="hds")
# the geostruct object for grid-scale parameters
grid_v = pyemu.geostats.ExpVario(contribution=1.0,a=500)
grid_gs = pyemu.geostats.GeoStruct(variograms=grid_v)
# the geostruct object for pilot-point-scale parameters
pp_v = pyemu.geostats.ExpVario(contribution=1.0, a=2000)
pp_gs = pyemu.geostats.GeoStruct(variograms=pp_v)
# the geostruct for recharge grid-scale parameters
rch_v = pyemu.geostats.ExpVario(contribution=1.0, a=1000)
rch_gs = pyemu.geostats.GeoStruct(variograms=rch_v)
# the geostruct for temporal correlation
temporal_v = pyemu.geostats.ExpVario(contribution=1.0,a=60)
temporal_gs = pyemu.geostats.GeoStruct(variograms=temporal_v)
# import flopy as part of the forward run process
pf.extra_py_imports.append('flopy')
# use the idomain array for masking parameter locations
ib = m.dis.idomain[0].array
# define a dict that contains file name tags and lower/upper bound information
tags = {"npf_k_":[0.1,10.],"npf_k33_":[.1,10],"sto_ss":[.1,10],
"sto_sy":[.9,1.1],"rch_recharge":[.5,1.5]}
dts = pd.to_datetime("1-1-2018") + \
pd.to_timedelta(np.cumsum(sim.tdis.perioddata.array["perlen"]),unit="d")
# loop over each tag, bound info pair
for tag,bnd in tags.items():
lb,ub = bnd[0],bnd[1]
# find all array based files that have the tag in the name
arr_files = [f for f in os.listdir(template_ws) if tag in f
and f.endswith(".txt")]
if len(arr_files) == 0:
print("warning: no array files found for ",tag)
continue
# make sure each array file in nrow X ncol dimensions (not wrapped, sigh)
for arr_file in arr_files:
arr = np.loadtxt(os.path.join(template_ws,arr_file)).reshape(ib.shape)
np.savetxt(os.path.join(template_ws,arr_file),arr,fmt="%15.6E")
# if this is the recharge tag
if "rch" in tag:
# add one set of grid-scale parameters for all files
pf.add_parameters(filenames=arr_files, par_type="grid",
par_name_base="rch_gr",pargp="rch_gr",
zone_array=ib, upper_bound=ub,
lower_bound=lb,geostruct=rch_gs)
# add one constant parameter for each array, and
# assign it a datetime so we can work out the
# temporal correlation
for arr_file in arr_files:
kper = int(arr_file.split('.')[1].split('_')[-1]) - 1
pf.add_parameters(filenames=arr_file,par_type="constant",
par_name_base=arr_file.split('.')[1]+"_cn",
pargp="rch_const",zone_array=ib,upper_bound=ub,
lower_bound=lb,geostruct=temporal_gs,
datetime=dts[kper])
# otherwise...
else:
# for each array add both grid-scale and pilot-point scale parameters
for arr_file in arr_files:
pf.add_parameters(filenames=arr_file,par_type="grid",
par_name_base=arr_file.split('.')[1]+"_gr",
pargp=arr_file.split('.')[1]+"_gr",zone_array=ib,
upper_bound=ub,lower_bound=lb,
geostruct=grid_gs)
pf.add_parameters(filenames=arr_file, par_type="pilotpoints",
par_name_base=arr_file.split('.')[1]+"_pp",
pargp=arr_file.split('.')[1]+"_pp",
zone_array=ib,upper_bound=ub,lower_bound=lb,
pp_space=int(5 * redis_fac),geostruct=pp_gs)
# get all the list-type files associated with the wel package
list_files = [f for f in os.listdir(tmp_model_ws) if
"freyberg6.wel_stress_period_data_"
in f and f.endswith(".txt")]
# for each wel-package list-type file
for list_file in list_files:
kper = int(list_file.split(".")[1].split('_')[-1]) - 1
# add spatially constant, but temporally correlated parameter
pf.add_parameters(filenames=list_file,par_type="constant",
par_name_base="twel_mlt_{0}".format(kper),
pargp="twel_mlt".format(kper),index_cols=[0,1,2],
use_cols=[3],upper_bound=1.5,lower_bound=0.5,
datetime=dts[kper], geostruct=temporal_gs)
# add temporally indep, but spatially correlated grid-scale
# parameters, one per well
pf.add_parameters(filenames=list_file, par_type="grid",
par_name_base="wel_grid_{0}".format(kper),
pargp="wel_{0}".format(kper), index_cols=[0, 1, 2],
use_cols=[3],upper_bound=1.5, lower_bound=0.5)
# add grid-scale parameters for SFR reach conductance.
# Use layer, row, col and reach number in the
# parameter names
pf.add_parameters(filenames="freyberg6.sfr_packagedata.txt",
par_name_base="sfr_rhk",
pargp="sfr_rhk", index_cols=[0,1,2,3],
use_cols=[9], upper_bound=10.,
lower_bound=0.1,
par_type="grid")
# add model run command
pf.mod_sys_cmds.append("mf6")
# build pest control file
pst = pf.build_pst('freyberg.pst')
# draw from the prior and save the ensemble in binary format
pe = pf.draw(100, use_specsim=True)
pe.to_binary(os.path.join(template_ws, "prior.jcb"))
# set some algorithmic controls
pst.control_data.noptmax = 0
pst.pestpp_options["additional_ins_delimiters"] = ","
# write the control file
pst.write(os.path.join(pf.new_d, "freyberg.pst"))
# run with noptmax = 0
pyemu.os_utils.run("{0} freyberg.pst".format(
os.path.join("pestpp-ies")), cwd=pf.new_d)
# make sure it ran
res_file = os.path.join(pf.new_d, "freyberg.base.rei")
assert os.path.exists(res_file), res_file
pst.set_res(res_file)
print(pst.phi)
# if successful, set noptmax = -1 for prior-based Monte Carlo
pst.control_data.noptmax = -1
# define what file has the prior parameter ensemble
pst.pestpp_options["ies_par_en"] = "prior.jcb"
# write the updated pest control file
pst.write(os.path.join(pf.new_d, "freyberg.pst"))
Explanation: Industrial strength control file setup
This functionality mimics the demonstration in the PstFrom manuscript.
End of explanation |
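From here, actually running the prior Monte Carlo is just a matter of launching pestpp-ies against the updated control file. A single-machine sketch is shown below (commented out - in practice you would normally parallelize this across many workers):
# pyemu.os_utils.run("pestpp-ies freyberg.pst", cwd=pf.new_d)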
159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize the hyperparameter tuning process
Author
Step1: Introduction
KerasTuner prints the logs to screen including the values of the
hyperparameters in each trial for the user to monitor the progress. However,
reading the logs is not intuitive enough to sense the influences of
hyperparameters have on the results, Therefore, we provide a method to
visualize the hyperparameter values and the corresponding evaluation results
with interactive figures using TensorBaord.
TensorBoard is a useful tool for
visualizing the machine learning experiments. It can monitor the losses and
metrics during the model training and visualize the model architectures.
Running KerasTuner with TensorBoard will give you additional features for
visualizing hyperparameter tuning results using its HParams plugin.
We will use a simple example of tuning a model for the MNIST image
classification dataset to show how to use KerasTuner with TensorBoard.
The first step is to download and format the data.
Step2: Then, we write a build_model function to build the model with hyperparameters
and return the model. The hyperparameters include the type of model to use
(multi-layer perceptron or convolutional neural network), the number of layers,
the number of units or filters, whether to use dropout.
Step3: We can do a quick test of the models to check if it build successfully for both
CNN and MLP.
Step4: Initialize the RandomSearch tuner with 10 trials and using validation
accuracy as the metric for selecting models.
Step5: Start the search by calling tuner.search(...). To use TensorBoard, we need
to pass a keras.callbacks.TensorBoard instance to the callbacks. | Python Code:
!pip install keras-tuner -q
Explanation: Visualize the hyperparameter tuning process
Author: Haifeng Jin<br>
Date created: 2021/06/25<br>
Last modified: 2021/06/05<br>
Description: Using TensorBoard to visualize the hyperparameter tuning process in KerasTuner.
End of explanation
import numpy as np
import keras_tuner
from tensorflow import keras
from tensorflow.keras import layers
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Normalize the pixel values to the range of [0, 1].
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# Add the channel dimension to the images.
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
# Print the shapes of the data.
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
Explanation: Introduction
KerasTuner prints the logs to screen including the values of the
hyperparameters in each trial for the user to monitor the progress. However,
reading the logs is not intuitive enough to sense the influence the
hyperparameters have on the results. Therefore, we provide a method to
visualize the hyperparameter values and the corresponding evaluation results
with interactive figures using TensorBoard.
TensorBoard is a useful tool for
visualizing the machine learning experiments. It can monitor the losses and
metrics during the model training and visualize the model architectures.
Running KerasTuner with TensorBoard will give you additional features for
visualizing hyperparameter tuning results using its HParams plugin.
We will use a simple example of tuning a model for the MNIST image
classification dataset to show how to use KerasTuner with TensorBoard.
The first step is to download and format the data.
End of explanation
def build_model(hp):
inputs = keras.Input(shape=(28, 28, 1))
# Model type can be MLP or CNN.
model_type = hp.Choice("model_type", ["mlp", "cnn"])
x = inputs
if model_type == "mlp":
x = layers.Flatten()(x)
# Number of layers of the MLP is a hyperparameter.
for i in range(hp.Int("mlp_layers", 1, 3)):
# Number of units of each layer are
# different hyperparameters with different names.
output_node = layers.Dense(
units=hp.Int(f"units_{i}", 32, 128, step=32), activation="relu",
)(x)
else:
# Number of layers of the CNN is also a hyperparameter.
for i in range(hp.Int("cnn_layers", 1, 3)):
x = layers.Conv2D(
hp.Int(f"filters_{i}", 32, 128, step=32),
kernel_size=(3, 3),
activation="relu",
)(x)
x = layers.MaxPooling2D(pool_size=(2, 2))(x)
x = layers.Flatten()(x)
    # A hyperparameter for whether to use a dropout layer.
if hp.Boolean("dropout"):
x = layers.Dropout(0.5)(x)
# The last layer contains 10 units,
# which is the same as the number of classes.
outputs = layers.Dense(units=10, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
# Compile the model.
model.compile(
loss="sparse_categorical_crossentropy", metrics=["accuracy"], optimizer="adam",
)
return model
Explanation: Then, we write a build_model function to build the model with hyperparameters
and return the model. The hyperparameters include the type of model to use
(multi-layer perceptron or convolutional neural network), the number of layers,
the number of units or filters, and whether to use dropout.
End of explanation
# Initialize the `HyperParameters` and set the values.
hp = keras_tuner.HyperParameters()
hp.values["model_type"] = "cnn"
# Build the model using the `HyperParameters`.
model = build_model(hp)
# Test if the model runs with our data.
model(x_train[:100])
# Print a summary of the model.
model.summary()
# Do the same for MLP model.
hp.values["model_type"] = "mlp"
model = build_model(hp)
model(x_train[:100])
model.summary()
Explanation: We can do a quick test of the models to check if they build successfully for both
CNN and MLP.
End of explanation
tuner = keras_tuner.RandomSearch(
build_model,
max_trials=10,
# Do not resume the previous search in the same directory.
overwrite=True,
objective="val_accuracy",
# Set a directory to store the intermediate results.
directory="/tmp/tb",
)
Explanation: Initialize the RandomSearch tuner with 10 trials, using validation
accuracy as the metric for selecting models.
End of explanation
tuner.search(
x_train,
y_train,
validation_split=0.2,
epochs=2,
# Use the TensorBoard callback.
# The logs will be write to "/tmp/tb_logs".
callbacks=[keras.callbacks.TensorBoard("/tmp/tb_logs")],
)
Explanation: Start the search by calling tuner.search(...). To use TensorBoard, we need
to pass a keras.callbacks.TensorBoard instance to the callbacks.
End of explanation |
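Once the search has written its logs, the results (including the HParams dashboard) can be viewed directly in the notebook - a minimal sketch, assuming the TensorBoard notebook extension is available and reusing the log directory passed to the callback above:
%load_ext tensorboard
%tensorboard --logdir /tmp/tb_logs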
160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.
Step1: Let's show the symbols data, to see how good the recommender has to be.
Step2: Let's run the trained agent, with the test set
First a non-learning test
Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few). | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent import Agent
from functools import partial
NUM_THREADS = 1
LOOKBACK = 252*2 + 28
STARTING_DAYS_AHEAD = 20
POSSIBLE_FRACTIONS = [0.0, 1.0]
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
n_levels=10)
agents = [Agent(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.9999,
dyna_iterations=0,
name='Agent_{}'.format(i)) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
Explanation: In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.
End of explanation
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
Explanation: Let's show the symbols data, to see how good the recommender has to be.
End of explanation
TEST_DAYS_AHEAD = 20
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
Explanation: Let's run the trained agent, with the test set
First a non-learning test: this scenario would be worse than what is possible (in fact, the q-learner can learn from past samples in the test set without compromising the causality).
End of explanation
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
import pickle
with open('../../data/simple_q_learner_fast_learner_10000_states.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
Explanation: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
End of explanation |
161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Non-linear power flow after LOPF
In this example, the dispatch of generators is optimised using the linear OPF, then a non-linear power flow is run on the resulting dispatch.
Data sources
Grid
Step1: Plot the distribution of the load and of generating tech
Step2: Run Linear Optimal Power Flow on the first day of 2011.
To approximate n-1 security and allow room for reactive power flows, don't allow any line to be loaded above 70% of their thermal rating
Step3: There are some infeasibilities without small extensions
Step4: We performing a linear OPF for one day, 4 snapshots at a time.
Step5: With the linear load flow, there is the following per unit loading
Step6: Let's have a look at the marginal prices
Step7: Curtailment variable
By considering how much power is available and how much is generated, you can see what share is curtailed
Step8: Non-Linear Power Flow
Now perform a full Newton-Raphson power flow on the first hour. For the PF, set the P to the optimised P.
Step9: Set all buses to PV, since we don't know what Q set points are
Step10: Now, perform the non-linear PF.
Step11: Any failed to converge?
Step12: With the non-linear load flow, there is the following per unit loading of the full thermal rating.
Step13: Let's inspect the voltage angle differences across the lines have (in degrees)
Step14: Plot the reactive power | Python Code:
import pypsa
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
%matplotlib inline
network = pypsa.examples.scigrid_de(from_master=True)
Explanation: Non-linear power flow after LOPF
In this example, the dispatch of generators is optimised using the linear OPF, then a non-linear power flow is run on the resulting dispatch.
Data sources
Grid: based on SciGRID Version 0.2 which is based on OpenStreetMap.
Load size and location: based on Landkreise (NUTS 3) GDP and population.
Load time series: from ENTSO-E hourly data, scaled up uniformly by factor 1.12 (a simplification of the methodology in Schumacher, Hirth (2015)).
Conventional power plant capacities and locations: BNetzA list.
Wind and solar capacities and locations: EEG Stammdaten, based on http://www.energymap.info/download.html, which represents capacities at the end of 2014. Units without PLZ are removed.
Wind and solar time series: REatlas, Andresen et al, "Validation of Danish wind time series from a new global renewable energy atlas for energy system analysis," Energy 93 (2015) 1074 - 1088.
Where SciGRID nodes have been split into 220kV and 380kV substations, all load and generation is attached to the 220kV substation.
Warnings
The data behind the notebook is no longer supported. See https://github.com/PyPSA/pypsa-eur for a newer model that covers the whole of Europe.
This dataset is ONLY intended to demonstrate the capabilities of PyPSA and is NOT (yet) accurate enough to be used for research purposes.
Known problems include:
Rough approximations have been made for missing grid data, e.g. 220kV-380kV transformers and connections between close sub-stations missing from OSM.
There appears to be some unexpected congestion in parts of the network, which may mean for example that the load attachment method (by Voronoi cell overlap with Landkreise) isn't working, particularly in regions with a high density of substations.
Attaching power plants to the nearest high voltage substation may not reflect reality.
There is no proper n-1 security in the calculations - this can either be simulated with a blanket e.g. 70% reduction in thermal limits (as done here) or a proper security constrained OPF (see e.g. http://www.pypsa.org/examples/scigrid-sclopf.ipynb).
The borders and neighbouring countries are not represented.
Hydroelectric power stations are not modelled accurately.
The marginal costs are illustrative, not accurate.
Only the first day of 2011 is in the github dataset, which is not representative. The full year of 2011 can be downloaded at http://www.pypsa.org/examples/scigrid-with-load-gen-trafos-2011.zip.
The ENTSO-E total load for Germany may not be scaled correctly; it is scaled up uniformly by factor 1.12 (a simplification of the methodology in Schumacher, Hirth (2015), which suggests monthly factors).
Biomass from the EEG Stammdaten are not read in at the moment.
Power plant start up costs, ramping limits/costs, minimum loading rates are not considered.
End of explanation
fig, ax = plt.subplots(
1, 1, subplot_kw={"projection": ccrs.EqualEarth()}, figsize=(8, 8)
)
load_distribution = (
network.loads_t.p_set.loc[network.snapshots[0]].groupby(network.loads.bus).sum()
)
network.plot(bus_sizes=1e-5 * load_distribution, ax=ax, title="Load distribution");
network.generators.groupby("carrier")["p_nom"].sum()
network.storage_units.groupby("carrier")["p_nom"].sum()
techs = ["Gas", "Brown Coal", "Hard Coal", "Wind Offshore", "Wind Onshore", "Solar"]
n_graphs = len(techs)
n_cols = 3
if n_graphs % n_cols == 0:
n_rows = n_graphs // n_cols
else:
n_rows = n_graphs // n_cols + 1
fig, axes = plt.subplots(
nrows=n_rows, ncols=n_cols, subplot_kw={"projection": ccrs.EqualEarth()}
)
size = 6
fig.set_size_inches(size * n_cols, size * n_rows)
for i, tech in enumerate(techs):
i_row = i // n_cols
i_col = i % n_cols
ax = axes[i_row, i_col]
gens = network.generators[network.generators.carrier == tech]
gen_distribution = (
gens.groupby("bus").sum()["p_nom"].reindex(network.buses.index, fill_value=0.0)
)
network.plot(ax=ax, bus_sizes=2e-5 * gen_distribution)
ax.set_title(tech)
fig.tight_layout()
Explanation: Plot the distribution of the load and of generating tech
End of explanation
contingency_factor = 0.7
network.lines.s_max_pu = contingency_factor
Explanation: Run Linear Optimal Power Flow on the first day of 2011.
To approximate n-1 security and allow room for reactive power flows, don't allow any line to be loaded above 70% of its thermal rating
End of explanation
network.lines.loc[["316", "527", "602"], "s_nom"] = 1715
Explanation: There are some infeasibilities without small extensions
End of explanation
group_size = 4
network.storage_units.state_of_charge_initial = 0.0
for i in range(int(24 / group_size)):
# set the initial state of charge based on previous round
if i:
network.storage_units.state_of_charge_initial = (
network.storage_units_t.state_of_charge.loc[
network.snapshots[group_size * i - 1]
]
)
network.lopf(
network.snapshots[group_size * i : group_size * i + group_size],
solver_name="cbc",
pyomo=False,
)
p_by_carrier = network.generators_t.p.groupby(network.generators.carrier, axis=1).sum()
p_by_carrier.drop(
(p_by_carrier.max()[p_by_carrier.max() < 1700.0]).index, axis=1, inplace=True
)
p_by_carrier.columns
colors = {
"Brown Coal": "brown",
"Hard Coal": "k",
"Nuclear": "r",
"Run of River": "green",
"Wind Onshore": "blue",
"Solar": "yellow",
"Wind Offshore": "cyan",
"Waste": "orange",
"Gas": "orange",
}
# reorder
cols = [
"Nuclear",
"Run of River",
"Brown Coal",
"Hard Coal",
"Gas",
"Wind Offshore",
"Wind Onshore",
"Solar",
]
p_by_carrier = p_by_carrier[cols]
c = [colors[col] for col in p_by_carrier.columns]
fig, ax = plt.subplots(figsize=(12, 6))
(p_by_carrier / 1e3).plot(kind="area", ax=ax, linewidth=4, color=c, alpha=0.7)
ax.legend(ncol=4, loc="upper left")
ax.set_ylabel("GW")
ax.set_xlabel("")
fig.tight_layout()
fig, ax = plt.subplots(figsize=(12, 6))
p_storage = network.storage_units_t.p.sum(axis=1)
state_of_charge = network.storage_units_t.state_of_charge.sum(axis=1)
p_storage.plot(label="Pumped hydro dispatch", ax=ax, linewidth=3)
state_of_charge.plot(label="State of charge", ax=ax, linewidth=3)
ax.legend()
ax.grid()
ax.set_ylabel("MWh")
ax.set_xlabel("")
fig.tight_layout()
now = network.snapshots[4]
Explanation: We perform a linear OPF for one day, 4 snapshots at a time.
End of explanation
loading = network.lines_t.p0.loc[now] / network.lines.s_nom
loading.describe()
fig, ax = plt.subplots(subplot_kw={"projection": ccrs.EqualEarth()}, figsize=(9, 9))
network.plot(
ax=ax,
line_colors=abs(loading),
line_cmap=plt.cm.jet,
title="Line loading",
bus_sizes=1e-3,
bus_alpha=0.7,
)
fig.tight_layout();
Explanation: With the linear load flow, there is the following per unit loading:
End of explanation
network.buses_t.marginal_price.loc[now].describe()
fig, ax = plt.subplots(subplot_kw={"projection": ccrs.PlateCarree()}, figsize=(8, 8))
plt.hexbin(
network.buses.x,
network.buses.y,
gridsize=20,
C=network.buses_t.marginal_price.loc[now],
cmap=plt.cm.jet,
zorder=3,
)
network.plot(ax=ax, line_widths=pd.Series(0.5, network.lines.index), bus_sizes=0)
cb = plt.colorbar(location="bottom")
cb.set_label("Locational Marginal Price (EUR/MWh)")
fig.tight_layout()
Explanation: Let's have a look at the marginal prices
End of explanation
carrier = "Wind Onshore"
capacity = network.generators.groupby("carrier").sum().at[carrier, "p_nom"]
p_available = network.generators_t.p_max_pu.multiply(network.generators["p_nom"])
p_available_by_carrier = p_available.groupby(network.generators.carrier, axis=1).sum()
p_curtailed_by_carrier = p_available_by_carrier - p_by_carrier
p_df = pd.DataFrame(
{
carrier + " available": p_available_by_carrier[carrier],
carrier + " dispatched": p_by_carrier[carrier],
carrier + " curtailed": p_curtailed_by_carrier[carrier],
}
)
p_df[carrier + " capacity"] = capacity
p_df["Wind Onshore curtailed"][p_df["Wind Onshore curtailed"] < 0.0] = 0.0
fig, ax = plt.subplots(figsize=(10, 4))
p_df[[carrier + " dispatched", carrier + " curtailed"]].plot(
kind="area", ax=ax, linewidth=3
)
p_df[[carrier + " available", carrier + " capacity"]].plot(ax=ax, linewidth=3)
ax.set_xlabel("")
ax.set_ylabel("Power [MW]")
ax.set_ylim([0, 40000])
ax.legend()
fig.tight_layout()
Explanation: Curtailment variable
By considering how much power is available and how much is generated, you can see what share is curtailed:
End of explanation
network.generators_t.p_set = network.generators_t.p
network.storage_units_t.p_set = network.storage_units_t.p
Explanation: Non-Linear Power Flow
Now perform a full Newton-Raphson power flow on the first hour. For the PF, set the P to the optimised P.
End of explanation
network.generators.control = "PV"
# Need some PQ buses so that Jacobian doesn't break
f = network.generators[network.generators.bus == "492"]
network.generators.loc[f.index, "control"] = "PQ"
Explanation: Set all buses to PV, since we don't know what Q set points are
End of explanation
info = network.pf();
Explanation: Now, perform the non-linear PF.
End of explanation
(~info.converged).any().any()
Explanation: Any failed to converge?
End of explanation
(network.lines_t.p0.loc[now] / network.lines.s_nom).describe()
Explanation: With the non-linear load flow, there is the following per unit loading of the full thermal rating.
End of explanation
df = network.lines.copy()
for b in ["bus0", "bus1"]:
df = pd.merge(
df, network.buses_t.v_ang.loc[[now]].T, how="left", left_on=b, right_index=True
)
s = df[str(now) + "_x"] - df[str(now) + "_y"]
(s * 180 / np.pi).describe()
Explanation: Let's inspect the voltage angle differences across the lines (in degrees)
End of explanation
fig, ax = plt.subplots(subplot_kw={"projection": ccrs.EqualEarth()}, figsize=(9, 9))
q = network.buses_t.q.loc[now]
bus_colors = pd.Series("r", network.buses.index)
bus_colors[q < 0.0] = "b"
network.plot(
bus_sizes=1e-4 * abs(q),
ax=ax,
bus_colors=bus_colors,
title="Reactive power feed-in (red=+ve, blue=-ve)",
);
Explanation: Plot the reactive power
End of explanation |
162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
문서 전처리
모든 데이터 분석 모형은 숫자로 구성된 고정 차원 벡터를 독립 변수로 하고 있으므로 문서(document)를 분석을 하는 경우에도 숫자로 구성된 특징 벡터(feature vector)를 문서로부터 추출하는 과정이 필요하다. 이러한 과정을 문서 전처리(document preprocessing)라고 한다.
BOW (Bag of Words)
문서를 숫자 벡터로 변환하는 가장 기본적인 방법은 BOW (Bag of Words) 이다. BOW 방법에서는 전체 문서 ${D_1, D_2, \ldots, D_n}$ 들를 구성하는 고정된 단어장(vocabulary) ${W_1, W_2, \ldots, W_m}$ 를 만들고 $D_i$라는 개별 문서에 단어장에 해당하는 단어들이 포함되어 있는지를 표시하는 방법이다.
$$ \text{ if word $W_j$ in document $D_i$ }, \;\; \rightarrow x_{ij} = 1 $$
Scikit-Learn 의 문서 전처리 기능
Scikit-Learn 의 feature_extraction.text 서브 패키지는 다음과 같은 문서 전처리용 클래스를 제공한다.
CountVectorizer
Step1: 문서 처리 옵션
CountVectorizer는 다양한 인수를 가진다. 그 중 중요한 것들은 다음과 같다.
stop_words
Step2: 토큰(token)
토큰은 문서에서 단어장을 생성할 때 하나의 단어가 되는 단위를 말한다. analyzer, tokenizer, token_pattern 등의 인수로 조절할 수 있다.
Step3: n-그램
n-그램은 단어장 생성에 사용할 토큰의 크기를 결정한다. 1-그램은 토큰 하나만 단어로 사용하며 2-그램은 두 개의 연결된 토큰을 하나의 단어로 사용한다.
Step4: 빈도수
max_df, min_df 인수를 사용하여 문서에서 토큰이 나타난 횟수를 기준으로 단어장을 구성할 수도 있다. 토큰의 빈도가 max_df로 지정한 값을 초과 하거나 min_df로 지정한 값보다 작은 경우에는 무시한다. 인수 값은 정수인 경우 횟수, 부동소수점인 경우 비중을 뜻한다.
Step5: TF-IDF
TF-IDF(Term Frequency – Inverse Document Frequency) 인코딩은 단어를 갯수 그대로 카운트하지 않고 모든 문서에 공통적으로 들어있는 단어의 경우 문서 구별 능력이 떨어진다고 보아 가중치를 축소하는 방법이다.
구제적으로는 문서 $d$(document)와 단어 $t$ 에 대해 다음과 같이 계산한다.
$$ \text{tf-idf}(d, t) = \text{tf}(d, t) \cdot \text{idf}(d, t) $$
여기에서
$\text{tf}(d, t)$
Step6: Hashing Trick
CountVectorizer는 모든 작업을 in-memory 상에서 수행하므로 데이터 양이 커지면 속도가 느려지거나 실행이 불가능해진다. 이 때
HashingVectorizer를 사용하면 Hashing Trick을 사용하여 메모리 및 실행 시간을 줄일 수 있다.
Step7: 형태소 분석기 이용
Step8: 예 | Python Code:
from sklearn.feature_extraction.text import CountVectorizer
corpus = [
'This is the first document.',
'This is the second second document.',
'And the third one.',
'Is this the first document?',
'The last document?',
]
vect = CountVectorizer()
vect.fit(corpus)
vect.vocabulary_
vect.transform(['This is the second document.']).toarray()
vect.transform(['Something completely new.']).toarray()
vect.transform(corpus).toarray()
Explanation: Document Preprocessing
Every data analysis model takes a fixed-dimension numeric vector as its independent variable, so analyzing a document also requires extracting a numeric feature vector from the document. This process is called document preprocessing.
BOW (Bag of Words)
The most basic way to convert a document into a numeric vector is BOW (Bag of Words). In the BOW method, we build a fixed vocabulary ${W_1, W_2, \ldots, W_m}$ covering the whole document set ${D_1, D_2, \ldots, D_n}$ and then mark, for each individual document $D_i$, whether the words of the vocabulary appear in it.
$$ \text{ if word $W_j$ in document $D_i$ }, \;\; \rightarrow x_{ij} = 1 $$
Document preprocessing tools in Scikit-Learn
The feature_extraction.text subpackage of Scikit-Learn provides the following document preprocessing classes.
CountVectorizer:
Counts the words in a set of documents and builds a count matrix.
TfidfVectorizer:
Counts the words in a set of documents and builds a count matrix with the word weights adjusted using the TF-IDF scheme.
HashingVectorizer:
Uses the hashing trick to build a count matrix quickly.
End of explanation
vect = CountVectorizer(stop_words=["and", "is", "the", "this"]).fit(corpus)
vect.vocabulary_
vect = CountVectorizer(stop_words="english").fit(corpus)
vect.vocabulary_
Explanation: Document processing options
CountVectorizer takes a variety of arguments. The most important ones are the following.
stop_words : string {'english'}, list, or None (default)
A list of stop words. If 'english', the built-in English stop word list is used.
analyzer : string {'word', 'char', 'char_wb'} or callable
Word n-grams, character n-grams, or character n-grams within word boundaries
tokenizer : callable or None (default)
Function used to generate tokens.
token_pattern : string
Regular expression that defines a token
ngram_range : (min_n, max_n) tuple
The n-gram range
max_df : integer, or float in [0.0, 1.0]. Default 1
Maximum frequency for a word to be included in the vocabulary
min_df : integer, or float in [0.0, 1.0]. Default 1
Minimum frequency for a word to be included in the vocabulary
vocabulary : dict or list
The vocabulary
Stop Words
Stop words are words that can be ignored when building the vocabulary from documents. Typically these are articles and conjunctions in English, or postpositions (josa) in Korean. They are controlled with the stop_words argument.
End of explanation
vect = CountVectorizer(analyzer="char").fit(corpus)
vect.vocabulary_
import nltk
nltk.download("punkt")
vect = CountVectorizer(tokenizer=nltk.word_tokenize).fit(corpus)
vect.vocabulary_
vect = CountVectorizer(token_pattern="t\w+").fit(corpus)
vect.vocabulary_
Explanation: Tokens
A token is the unit that becomes a single word when building the vocabulary from documents. It can be controlled with arguments such as analyzer, tokenizer and token_pattern.
End of explanation
vect = CountVectorizer(ngram_range=(2,2)).fit(corpus)
vect.vocabulary_
vect = CountVectorizer(ngram_range=(1,2), token_pattern="t\w+").fit(corpus)
vect.vocabulary_
Explanation: n-grams
The n-gram setting determines the size of the tokens used to build the vocabulary. A 1-gram uses a single token as a word, while a 2-gram uses two consecutive tokens as one word.
End of explanation
vect = CountVectorizer(max_df=4, min_df=2).fit(corpus)
vect.vocabulary_, vect.stop_words_
vect.transform(corpus).toarray().sum(axis=0)
Explanation: Frequency
The max_df and min_df arguments can also be used to build the vocabulary based on how often a token appears in the documents. Tokens whose frequency exceeds the value given by max_df, or falls below the value given by min_df, are ignored. An integer value means a count, while a floating point value means a proportion.
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
tfidv = TfidfVectorizer().fit(corpus)
tfidv.transform(corpus).toarray()
Explanation: TF-IDF
TF-IDF (Term Frequency – Inverse Document Frequency) encoding does not use raw word counts; instead, it shrinks the weight of words that appear in most documents, on the grounds that such words have little power to distinguish documents.
Concretely, for a document $d$ and a word (term) $t$ it is computed as follows.
$$ \text{tf-idf}(d, t) = \text{tf}(d, t) \cdot \text{idf}(d, t) $$
where
$\text{tf}(d, t)$: the frequency of the word in the document
$\text{idf}(d, t)$ : inverse document frequency
$$ \text{idf}(d, t) = \log \dfrac{n_d}{1 + \text{df}(t)} $$
$n_d$ : the total number of documents
$\text{df}(t)$: the number of documents that contain the word $t$
End of explanation
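As a small numeric illustration of the idf term defined above (a hand computation with numpy, independent of scikit-learn's own smoothing conventions): with $n_d = 5$ documents, a word found in 4 of them gets a much smaller idf than a word found in only 1.
import numpy as np
n_d = 5                  # total number of documents
for df_t in [1, 4]:      # number of documents containing the word
    print(df_t, np.log(n_d / (1 + df_t)))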
from sklearn.datasets import fetch_20newsgroups
twenty = fetch_20newsgroups()
len(twenty.data)
%time CountVectorizer().fit(twenty.data).transform(twenty.data)
from sklearn.feature_extraction.text import HashingVectorizer
hv = HashingVectorizer(n_features=10)
%time hv.transform(twenty.data)
Explanation: Hashing Trick
CountVectorizer does all of its work in memory, so as the amount of data grows it becomes slow or even infeasible. In that case, using
HashingVectorizer with the hashing trick can reduce both memory use and run time.
End of explanation
corpus = ["imaging", "image", "imagination", "imagine", "buys", "buying", "bought"]
vect = CountVectorizer().fit(corpus)
vect.vocabulary_
from sklearn.datasets import fetch_20newsgroups
twenty = fetch_20newsgroups()
docs = twenty.data[:100]
vect = CountVectorizer(stop_words="english", token_pattern="wri\w+").fit(docs)
vect.vocabulary_
from nltk.stem import SnowballStemmer
class StemTokenizer(object):
def __init__(self):
self.s = SnowballStemmer('english')
self.t = CountVectorizer(stop_words="english", token_pattern="wri\w+").build_tokenizer()
def __call__(self, doc):
return [self.s.stem(t) for t in self.t(doc)]
vect = CountVectorizer(tokenizer=StemTokenizer()).fit(docs)
vect.vocabulary_
Explanation: Using a morphological analyzer
End of explanation
import urllib2
import json
import string
from konlpy.utils import pprint
from konlpy.tag import Hannanum
hannanum = Hannanum()
req = urllib2.Request("https://www.datascienceschool.net/download-notebook/708e711429a646818b9dcbb581e0c10a/")
opener = urllib2.build_opener()
f = opener.open(req)
json = json.loads(f.read())
cell = ["\n".join(c["source"]) for c in json["cells"] if c["cell_type"] == u"markdown"]
docs = [w for w in hannanum.nouns(" ".join(cell)) if ((not w[0].isnumeric()) and (w[0] not in string.punctuation))]
vect = CountVectorizer().fit(docs)
count = vect.transform(docs).toarray().sum(axis=0)
plt.bar(range(len(count)), count)
plt.show()
pprint(zip(vect.get_feature_names(), count))
Explanation: Example
End of explanation |
163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Euler's method
We look at numerically solving differential equations. Most scientific software packages already include a wide variety of numerical integrators. Here we'll write our own simple version and compare it to the built in solutions.
Here's the built in solution using the ode integrator already available.
Step1: Now we implement Euler's method
Step2: Now lets print the solutions | Python Code:
%matplotlib inline
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def input(t):
f = np.cos(t)
return f
def msd(state, t, m, c, k):
x, xd = state
pos_dot = xd
vel_dot = 1/m*(input(t) - c*xd - k*x)
state_dot = [pos_dot, vel_dot]
return state_dot
num_steps = 100
tf = 10
t = np.linspace(0,tf,num_steps)
x0 = [0,0]
m = 2
c = 2
k = 1
sol_ode = odeint(msd, x0, t, args=(m, c, k))
Explanation: Euler's method
We look at numerically solving differential equations. Most scientific software packages already include a wide variety of numerical integrators. Here we'll write our own simple version and compare it to the built in solutions.
Here's the built in solution using the ode integrator already available.
End of explanation
sol_euler = np.zeros((num_steps,2))
delta_t = tf/(num_steps-1)
sol_euler[0,:] = x0
for ii in range(num_steps-1):
sol_euler[ii+1,0] = sol_euler[ii,0] + sol_euler[ii,1]*delta_t
a = 1/m*(input(t[ii])-c*sol_euler[ii,1] - k*sol_euler[ii,0])
sol_euler[ii+1,1] = sol_euler[ii,1]+a*delta_t
Explanation: Now we implement Euler's method
End of explanation
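To make the comparison quantitative (a small illustrative check, not part of the original plots), we can look at the largest position error of the Euler solution relative to odeint:
max_err = np.max(np.abs(sol_euler[:, 0] - sol_ode[:, 0]))
print("max position error vs odeint:", max_err)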
plt.figure(figsize=(16,8))
plt.plot(t,sol_ode[:,0],label='ODE')
plt.plot(t,sol_euler[:,0],label='Euler')
plt.xlabel('Time')
plt.ylabel('Position')
plt.grid(True)
plt.legend()
plt.show()
Explanation: Now let's plot the solutions
End of explanation |
164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Introduction to Tensors
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Tensors are multi-dimensional arrays with a uniform type (called a dtype). You can see all supported dtypes at tf.dtypes.DType.
If you're familiar with NumPy, tensors are (kind of) like np.arrays.
All tensors are immutable like Python numbers and strings
Step3: A "vector" or "rank-1" tensor is like a list of values. A vector has one axis
Step4: A "matrix" or "rank-2" tensor has two axes
Step5: <table>
<tr>
<th>A scalar, shape
Step6: There are many ways you might visualize a tensor with more than two axes.
<table>
<tr>
<th colspan=3>A 3-axis tensor, shape
Step7: Tensors often contain floats and ints, but have many other types, including
Step8: Tensors are used in all kinds of operations (ops).
Step9: About shapes
Tensors have shapes. Some vocabulary
Step10: <table>
<tr>
<th colspan=2>A rank-4 tensor, shape
Step11: While axes are often referred to by their indices, you should always keep track of the meaning of each. Often axes are ordered from global to local
Step12: Indexing with a scalar removes the axis
Step13: Indexing with a
Step14: Multi-axis indexing
Higher rank tensors are indexed by passing multiple indices.
The exact same rules as in the single-axis case apply to each axis independently.
Step15: Passing an integer for each index, the result is a scalar.
Step16: You can index using any combination of integers and slices
Step17: Here is an example with a 3-axis tensor
Step18: <table>
<tr>
<th colspan=2>Selecting the last feature across all locations in each example in the batch </th>
</tr>
<tr>
<td>
<img src="images/tensor/index1.png" alt="A 3x2x5 tensor with all the values at the index-4 of the last axis selected.">
</td>
<td>
<img src="images/tensor/index2.png" alt="The selected values packed into a 2-axis tensor.">
</td>
</tr>
</table>
Read the tensor slicing guide to learn how you can apply indexing to manipulate individual elements in your tensors.
Manipulating Shapes
Reshaping a tensor is of great utility.
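For example, a tiny self-contained sketch (plain TensorFlow, nothing document-specific assumed):
import tensorflow as tf
x = tf.constant([[1], [2], [3]])
print(x.shape)                      # (3, 1)
print(tf.reshape(x, [1, 3]).shape)  # (1, 3)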
Step19: You can reshape a tensor into a new shape. The tf.reshape operation is fast and cheap as the underlying data does not need to be duplicated.
Step20: The data maintains its layout in memory and a new tensor is created, with the requested shape, pointing to the same data. TensorFlow uses C-style "row-major" memory ordering, where incrementing the rightmost index corresponds to a single step in memory.
Step21: If you flatten a tensor you can see what order it is laid out in memory.
Step22: Typically the only reasonable use of tf.reshape is to combine or split adjacent axes (or add/remove 1s).
For this 3x2x5 tensor, reshaping to (3x2)x5 or 3x(2x5) are both reasonable things to do, as the slices do not mix
Step23: <table>
<th colspan=3>
Some good reshapes.
</th>
<tr>
<td>
<img src="images/tensor/reshape-before.png" alt="A 3x2x5 tensor">
</td>
<td>
<img src="images/tensor/reshape-good1.png" alt="The same data reshaped to (3x2)x5">
</td>
<td>
<img src="images/tensor/reshape-good2.png" alt="The same data reshaped to 3x(2x5)">
</td>
</tr>
</table>
Reshaping will "work" for any new shape with the same total number of elements, but it will not do anything useful if you do not respect the order of the axes.
Swapping axes in tf.reshape does not work; you need tf.transpose for that.
Step24: <table>
<th colspan=3>
Some bad reshapes.
</th>
<tr>
<td>
<img src="images/tensor/reshape-bad.png" alt="You can't reorder axes, use tf.transpose for that">
</td>
<td>
<img src="images/tensor/reshape-bad4.png" alt="Anything that mixes the slices of data together is probably wrong.">
</td>
<td>
<img src="images/tensor/reshape-bad2.png" alt="The new shape must fit exactly.">
</td>
</tr>
</table>
You may run across not-fully-specified shapes. Either the shape contains a None (an axis-length is unknown) or the whole shape is None (the rank of the tensor is unknown).
Except for tf.RaggedTensor, such shapes will only occur in the context of TensorFlow's symbolic, graph-building APIs
Step25: Broadcasting
Broadcasting is a concept borrowed from the equivalent feature in NumPy. In short, under certain conditions, smaller tensors are "stretched" automatically to fit larger tensors when running combined operations on them.
The simplest and most common case is when you attempt to multiply or add a tensor to a scalar. In that case, the scalar is broadcast to be the same shape as the other argument.
Step26: Likewise, axes with length 1 can be stretched out to match the other arguments. Both arguments can be stretched in the same computation.
In this case a 3x1 matrix is element-wise multiplied by a 1x4 matrix to produce a 3x4 matrix. Note how the leading 1 is optional
Step27: <table>
<tr>
<th>A broadcasted add
Step28: Most of the time, broadcasting is both time and space efficient, as the broadcast operation never materializes the expanded tensors in memory.
You see what broadcasting looks like using tf.broadcast_to.
Step29: Unlike a mathematical op, for example, broadcast_to does nothing special to save memory. Here, you are materializing the tensor.
It can get even more complicated. This section of Jake VanderPlas's book Python Data Science Handbook shows more broadcasting tricks (again in NumPy).
tf.convert_to_tensor
Most ops, like tf.matmul and tf.reshape take arguments of class tf.Tensor. However, you'll notice in the above case, Python objects shaped like tensors are accepted.
Most, but not all, ops call convert_to_tensor on non-tensor arguments. There is a registry of conversions, and most object classes like NumPy's ndarray, TensorShape, Python lists, and tf.Variable will all convert automatically.
See tf.register_tensor_conversion_function for more details, and if you have your own type you'd like to automatically convert to a tensor.
Ragged Tensors
A tensor with variable numbers of elements along some axis is called "ragged". Use tf.ragged.RaggedTensor for ragged data.
For example, This cannot be represented as a regular tensor
Step30: Instead create a tf.RaggedTensor using tf.ragged.constant
Step31: The shape of a tf.RaggedTensor will contain some axes with unknown lengths
Step32: String tensors
tf.string is a dtype, which is to say you can represent data as strings (variable-length byte arrays) in tensors.
The strings are atomic and cannot be indexed the way Python strings are. The length of the string is not one of the axes of the tensor. See tf.strings for functions to manipulate them.
Here is a scalar string tensor
Step33: And a vector of strings
Step34: In the above printout the b prefix indicates that tf.string dtype is not a unicode string, but a byte-string. See the Unicode Tutorial for more about working with unicode text in TensorFlow.
If you pass unicode characters they are utf-8 encoded.
Step35: Some basic functions with strings can be found in tf.strings, including tf.strings.split.
Step36: <table>
<tr>
<th>Three strings split, shape
Step37: Although you can't use tf.cast to turn a string tensor into numbers, you can convert it into bytes, and then into numbers.
Step38: The tf.string dtype is used for all raw bytes data in TensorFlow. The tf.io module contains functions for converting data to and from bytes, including decoding images and parsing csv.
Sparse tensors
Sometimes, your data is sparse, like a very wide embedding space. TensorFlow supports tf.sparse.SparseTensor and related operations to store sparse data efficiently.
<table>
<tr>
<th>A `tf.SparseTensor`, shape | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import numpy as np
Explanation: Introduction to Tensors
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/tensor"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/tensor.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/tensor.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/tensor.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
End of explanation
# This will be an int32 tensor by default; see "dtypes" below.
rank_0_tensor = tf.constant(4)
print(rank_0_tensor)
Explanation: Tensors are multi-dimensional arrays with a uniform type (called a dtype). You can see all supported dtypes at tf.dtypes.DType.
If you're familiar with NumPy, tensors are (kind of) like np.arrays.
All tensors are immutable like Python numbers and strings: you can never update the contents of a tensor, only create a new one.
Basics
Let's create some basic tensors.
Here is a "scalar" or "rank-0" tensor . A scalar contains a single value, and no "axes".
End of explanation
# Let's make this a float tensor.
rank_1_tensor = tf.constant([2.0, 3.0, 4.0])
print(rank_1_tensor)
Explanation: A "vector" or "rank-1" tensor is like a list of values. A vector has one axis:
End of explanation
# If you want to be specific, you can set the dtype (see below) at creation time
rank_2_tensor = tf.constant([[1, 2],
[3, 4],
[5, 6]], dtype=tf.float16)
print(rank_2_tensor)
Explanation: A "matrix" or "rank-2" tensor has two axes:
End of explanation
# There can be an arbitrary number of
# axes (sometimes called "dimensions")
rank_3_tensor = tf.constant([
[[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]],
[[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]],
[[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]],])
print(rank_3_tensor)
Explanation: <table>
<tr>
<th>A scalar, shape: <code>[]</code></th>
<th>A vector, shape: <code>[3]</code></th>
<th>A matrix, shape: <code>[3, 2]</code></th>
</tr>
<tr>
<td>
<img src="images/tensor/scalar.png" alt="A scalar, the number 4" />
</td>
<td>
<img src="images/tensor/vector.png" alt="The line with 3 sections, each one containing a number."/>
</td>
<td>
<img src="images/tensor/matrix.png" alt="A 3x2 grid, with each cell containing a number.">
</td>
</tr>
</table>
Tensors may have more axes; here is a tensor with three axes:
End of explanation
np.array(rank_2_tensor)
rank_2_tensor.numpy()
Explanation: There are many ways you might visualize a tensor with more than two axes.
<table>
<tr>
<th colspan=3>A 3-axis tensor, shape: <code>[3, 2, 5]</code></th>
<tr>
<tr>
<td>
<img src="images/tensor/3-axis_numpy.png"/>
</td>
<td>
<img src="images/tensor/3-axis_front.png"/>
</td>
<td>
<img src="images/tensor/3-axis_block.png"/>
</td>
</tr>
</table>
You can convert a tensor to a NumPy array either using np.array or the tensor.numpy method:
End of explanation
a = tf.constant([[1, 2],
[3, 4]])
b = tf.constant([[1, 1],
[1, 1]]) # Could have also said `tf.ones([2,2])`
print(tf.add(a, b), "\n")
print(tf.multiply(a, b), "\n")
print(tf.matmul(a, b), "\n")
print(a + b, "\n") # element-wise addition
print(a * b, "\n") # element-wise multiplication
print(a @ b, "\n") # matrix multiplication
Explanation: Tensors often contain floats and ints, but have many other types, including:
complex numbers
strings
The base tf.Tensor class requires tensors to be "rectangular"---that is, along each axis, every element is the same size. However, there are specialized types of tensors that can handle different shapes:
Ragged tensors (see RaggedTensor below)
Sparse tensors (see SparseTensor below)
You can do basic math on tensors, including addition, element-wise multiplication, and matrix multiplication.
End of explanation
c = tf.constant([[4.0, 5.0], [10.0, 1.0]])
# Find the largest value
print(tf.reduce_max(c))
# Find the index of the largest value
print(tf.math.argmax(c))
# Compute the softmax
print(tf.nn.softmax(c))
Explanation: Tensors are used in all kinds of operations (ops).
End of explanation
rank_4_tensor = tf.zeros([3, 2, 4, 5])
Explanation: About shapes
Tensors have shapes. Some vocabulary:
Shape: The length (number of elements) of each of the axes of a tensor.
Rank: Number of tensor axes. A scalar has rank 0, a vector has rank 1, a matrix is rank 2.
Axis or Dimension: A particular dimension of a tensor.
Size: The total number of items in the tensor, the product of the shape vector's elements.
Note: Although you may see reference to a "tensor of two dimensions", a rank-2 tensor does not usually describe a 2D space.
Tensors and tf.TensorShape objects have convenient properties for accessing these:
End of explanation
print("Type of every element:", rank_4_tensor.dtype)
print("Number of axes:", rank_4_tensor.ndim)
print("Shape of tensor:", rank_4_tensor.shape)
print("Elements along axis 0 of tensor:", rank_4_tensor.shape[0])
print("Elements along the last axis of tensor:", rank_4_tensor.shape[-1])
print("Total number of elements (3*2*4*5): ", tf.size(rank_4_tensor).numpy())
Explanation: <table>
<tr>
<th colspan=2>A rank-4 tensor, shape: <code>[3, 2, 4, 5]</code></th>
</tr>
<tr>
<td>
<img src="images/tensor/shape.png" alt="A tensor shape is like a vector.">
<td>
<img src="images/tensor/4-axis_block.png" alt="A 4-axis tensor">
</td>
</tr>
</table>
End of explanation
rank_1_tensor = tf.constant([0, 1, 1, 2, 3, 5, 8, 13, 21, 34])
print(rank_1_tensor.numpy())
Explanation: While axes are often referred to by their indices, you should always keep track of the meaning of each. Often axes are ordered from global to local: The batch axis first, followed by spatial dimensions, and features for each location last. This way feature vectors are contiguous regions of memory.
<table>
<tr>
<th>Typical axis order</th>
</tr>
<tr>
<td>
<img src="images/tensor/shape2.png" alt="Keep track of what each axis is. A 4-axis tensor might be: Batch, Width, Height, Features">
</td>
</tr>
</table>
Indexing
Single-axis indexing
TensorFlow follows standard Python indexing rules, similar to indexing a list or a string in Python, and the basic rules for NumPy indexing.
indexes start at 0
negative indices count backwards from the end
colons, :, are used for slices: start:stop:step
End of explanation
print("First:", rank_1_tensor[0].numpy())
print("Second:", rank_1_tensor[1].numpy())
print("Last:", rank_1_tensor[-1].numpy())
Explanation: Indexing with a scalar removes the axis:
End of explanation
print("Everything:", rank_1_tensor[:].numpy())
print("Before 4:", rank_1_tensor[:4].numpy())
print("From 4 to the end:", rank_1_tensor[4:].numpy())
print("From 2, before 7:", rank_1_tensor[2:7].numpy())
print("Every other item:", rank_1_tensor[::2].numpy())
print("Reversed:", rank_1_tensor[::-1].numpy())
Explanation: Indexing with a : slice keeps the axis:
End of explanation
print(rank_2_tensor.numpy())
Explanation: Multi-axis indexing
Higher rank tensors are indexed by passing multiple indices.
The exact same rules as in the single-axis case apply to each axis independently.
End of explanation
# Pull out a single value from a 2-rank tensor
print(rank_2_tensor[1, 1].numpy())
Explanation: Passing an integer for each index, the result is a scalar.
End of explanation
# Get row and column tensors
print("Second row:", rank_2_tensor[1, :].numpy())
print("Second column:", rank_2_tensor[:, 1].numpy())
print("Last row:", rank_2_tensor[-1, :].numpy())
print("First item in last column:", rank_2_tensor[0, -1].numpy())
print("Skip the first row:")
print(rank_2_tensor[1:, :].numpy(), "\n")
Explanation: You can index using any combination of integers and slices:
End of explanation
print(rank_3_tensor[:, :, 4])
Explanation: Here is an example with a 3-axis tensor:
End of explanation
# Shape returns a `TensorShape` object that shows the size along each axis
x = tf.constant([[1], [2], [3]])
print(x.shape)
# You can convert this object into a Python list, too
print(x.shape.as_list())
Explanation: <table>
<tr>
<th colspan=2>Selecting the last feature across all locations in each example in the batch </th>
</tr>
<tr>
<td>
<img src="images/tensor/index1.png" alt="A 3x2x5 tensor with all the values at the index-4 of the last axis selected.">
</td>
<td>
<img src="images/tensor/index2.png" alt="The selected values packed into a 2-axis tensor.">
</td>
</tr>
</table>
Read the tensor slicing guide to learn how you can apply indexing to manipulate individual elements in your tensors.
Manipulating Shapes
Reshaping a tensor is of great utility.
End of explanation
# You can reshape a tensor to a new shape.
# Note that you're passing in a list
reshaped = tf.reshape(x, [1, 3])
print(x.shape)
print(reshaped.shape)
Explanation: You can reshape a tensor into a new shape. The tf.reshape operation is fast and cheap as the underlying data does not need to be duplicated.
End of explanation
print(rank_3_tensor)
Explanation: The data maintains its layout in memory and a new tensor is created, with the requested shape, pointing to the same data. TensorFlow uses C-style "row-major" memory ordering, where incrementing the rightmost index corresponds to a single step in memory.
End of explanation
# A `-1` passed in the `shape` argument says "Whatever fits".
print(tf.reshape(rank_3_tensor, [-1]))
Explanation: If you flatten a tensor you can see what order it is laid out in memory.
End of explanation
print(tf.reshape(rank_3_tensor, [3*2, 5]), "\n")
print(tf.reshape(rank_3_tensor, [3, -1]))
Explanation: Typically the only reasonable use of tf.reshape is to combine or split adjacent axes (or add/remove 1s).
For this 3x2x5 tensor, reshaping to (3x2)x5 or 3x(2x5) are both reasonable things to do, as the slices do not mix:
End of explanation
# Bad examples: don't do this
# You can't reorder axes with reshape.
print(tf.reshape(rank_3_tensor, [2, 3, 5]), "\n")
# This is a mess
print(tf.reshape(rank_3_tensor, [5, 6]), "\n")
# This doesn't work at all
try:
tf.reshape(rank_3_tensor, [7, -1])
except Exception as e:
print(f"{type(e).__name__}: {e}")
Explanation: <table>
<th colspan=3>
Some good reshapes.
</th>
<tr>
<td>
<img src="images/tensor/reshape-before.png" alt="A 3x2x5 tensor">
</td>
<td>
<img src="images/tensor/reshape-good1.png" alt="The same data reshaped to (3x2)x5">
</td>
<td>
<img src="images/tensor/reshape-good2.png" alt="The same data reshaped to 3x(2x5)">
</td>
</tr>
</table>
Reshaping will "work" for any new shape with the same total number of elements, but it will not do anything useful if you do not respect the order of the axes.
Swapping axes in tf.reshape does not work; you need tf.transpose for that.
End of explanation
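Since tf.transpose is mentioned above but not shown, here is a minimal sketch (not part of the original guide) using the rank_3_tensor defined earlier:
# perm gives the new axis order; unlike tf.reshape this actually reorders the data.
transposed = tf.transpose(rank_3_tensor, perm=[1, 0, 2])
print(transposed.shape)  # (2, 3, 5): axes 0 and 1 swapped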
the_f64_tensor = tf.constant([2.2, 3.3, 4.4], dtype=tf.float64)
the_f16_tensor = tf.cast(the_f64_tensor, dtype=tf.float16)
# Now, cast to an uint8 and lose the decimal precision
the_u8_tensor = tf.cast(the_f16_tensor, dtype=tf.uint8)
print(the_u8_tensor)
Explanation: <table>
<th colspan=3>
Some bad reshapes.
</th>
<tr>
<td>
<img src="images/tensor/reshape-bad.png" alt="You can't reorder axes, use tf.transpose for that">
</td>
<td>
<img src="images/tensor/reshape-bad4.png" alt="Anything that mixes the slices of data together is probably wrong.">
</td>
<td>
<img src="images/tensor/reshape-bad2.png" alt="The new shape must fit exactly.">
</td>
</tr>
</table>
You may run across not-fully-specified shapes. Either the shape contains a None (an axis-length is unknown) or the whole shape is None (the rank of the tensor is unknown).
Except for tf.RaggedTensor, such shapes will only occur in the context of TensorFlow's symbolic, graph-building APIs:
tf.function
The keras functional API.
More on DTypes
To inspect a tf.Tensor's data type use the Tensor.dtype property.
When creating a tf.Tensor from a Python object you may optionally specify the datatype.
If you don't, TensorFlow chooses a datatype that can represent your data. TensorFlow converts Python integers to tf.int32 and Python floating point numbers to tf.float32. Otherwise TensorFlow uses the same rules NumPy uses when converting to arrays.
You can cast from type to type.
End of explanation
x = tf.constant([1, 2, 3])
y = tf.constant(2)
z = tf.constant([2, 2, 2])
# All of these are the same computation
print(tf.multiply(x, 2))
print(x * y)
print(x * z)
Explanation: Broadcasting
Broadcasting is a concept borrowed from the equivalent feature in NumPy. In short, under certain conditions, smaller tensors are "stretched" automatically to fit larger tensors when running combined operations on them.
The simplest and most common case is when you attempt to multiply or add a tensor to a scalar. In that case, the scalar is broadcast to be the same shape as the other argument.
End of explanation
# These are the same computations
x = tf.reshape(x,[3,1])
y = tf.range(1, 5)
print(x, "\n")
print(y, "\n")
print(tf.multiply(x, y))
Explanation: Likewise, axes with length 1 can be stretched out to match the other arguments. Both arguments can be stretched in the same computation.
In this case a 3x1 matrix is element-wise multiplied by a 1x4 matrix to produce a 3x4 matrix. Note how the leading 1 is optional: The shape of y is [4].
End of explanation
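For contrast, when the axis lengths differ and neither is 1, broadcasting cannot apply and the operation fails. A small sketch of that failure mode (added here for illustration):
try:
  tf.constant([1, 2, 3]) * tf.constant([1, 2, 3, 4])  # shapes [3] and [4] are incompatible
except Exception as e:
  print(f"{type(e).__name__}: {e}")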
x_stretch = tf.constant([[1, 1, 1, 1],
[2, 2, 2, 2],
[3, 3, 3, 3]])
y_stretch = tf.constant([[1, 2, 3, 4],
[1, 2, 3, 4],
[1, 2, 3, 4]])
print(x_stretch * y_stretch) # Again, operator overloading
Explanation: <table>
<tr>
<th>A broadcasted add: a <code>[3, 1]</code> times a <code>[1, 4]</code> gives a <code>[3,4]</code> </th>
</tr>
<tr>
<td>
<img src="images/tensor/broadcasting.png" alt="Adding a 3x1 matrix to a 4x1 matrix results in a 3x4 matrix">
</td>
</tr>
</table>
Here is the same operation without broadcasting:
End of explanation
print(tf.broadcast_to(tf.constant([1, 2, 3]), [3, 3]))
Explanation: Most of the time, broadcasting is both time and space efficient, as the broadcast operation never materializes the expanded tensors in memory.
You see what broadcasting looks like using tf.broadcast_to.
End of explanation
ragged_list = [
[0, 1, 2, 3],
[4, 5],
[6, 7, 8],
[9]]
try:
tensor = tf.constant(ragged_list)
except Exception as e:
print(f"{type(e).__name__}: {e}")
Explanation: Unlike a mathematical op, for example, broadcast_to does nothing special to save memory. Here, you are materializing the tensor.
It can get even more complicated. This section of Jake VanderPlas's book Python Data Science Handbook shows more broadcasting tricks (again in NumPy).
tf.convert_to_tensor
Most ops, like tf.matmul and tf.reshape take arguments of class tf.Tensor. However, you'll notice in the above case, Python objects shaped like tensors are accepted.
Most, but not all, ops call convert_to_tensor on non-tensor arguments. There is a registry of conversions, and most object classes like NumPy's ndarray, TensorShape, Python lists, and tf.Variable will all convert automatically.
See tf.register_tensor_conversion_function for more details, and if you have your own type you'd like to automatically convert to a tensor.
Ragged Tensors
A tensor with variable numbers of elements along some axis is called "ragged". Use tf.ragged.RaggedTensor for ragged data.
For example, this cannot be represented as a regular tensor:
<table>
<tr>
<th>A `tf.RaggedTensor`, shape: <code>[4, None]</code></th>
</tr>
<tr>
<td>
<img src="images/tensor/ragged.png" alt="A 2-axis ragged tensor, each row can have a different length.">
</td>
</tr>
</table>
End of explanation
ragged_tensor = tf.ragged.constant(ragged_list)
print(ragged_tensor)
Explanation: Instead create a tf.RaggedTensor using tf.ragged.constant:
End of explanation
print(ragged_tensor.shape)
Explanation: The shape of a tf.RaggedTensor will contain some axes with unknown lengths:
End of explanation
# Tensors can be strings, too. Here is a scalar string.
scalar_string_tensor = tf.constant("Gray wolf")
print(scalar_string_tensor)
Explanation: String tensors
tf.string is a dtype, which is to say you can represent data as strings (variable-length byte arrays) in tensors.
The strings are atomic and cannot be indexed the way Python strings are. The length of the string is not one of the axes of the tensor. See tf.strings for functions to manipulate them.
Here is a scalar string tensor:
End of explanation
# If you have three string tensors of different lengths, this is OK.
tensor_of_strings = tf.constant(["Gray wolf",
"Quick brown fox",
"Lazy dog"])
# Note that the shape is (3,). The string length is not included.
print(tensor_of_strings)
Explanation: And a vector of strings:
<table>
<tr>
<th>A vector of strings, shape: <code>[3,]</code></th>
</tr>
<tr>
<td>
<img src="images/tensor/strings.png" alt="The string length is not one of the tensor's axes.">
</td>
</tr>
</table>
End of explanation
tf.constant("🥳👍")
Explanation: In the above printout the b prefix indicates that tf.string dtype is not a unicode string, but a byte-string. See the Unicode Tutorial for more about working with unicode text in TensorFlow.
If you pass unicode characters they are utf-8 encoded.
End of explanation
# You can use split to split a string into a set of tensors
print(tf.strings.split(scalar_string_tensor, sep=" "))
# ...but it turns into a `RaggedTensor` if you split up a tensor of strings,
# as each string might be split into a different number of parts.
print(tf.strings.split(tensor_of_strings))
Explanation: Some basic functions with strings can be found in tf.strings, including tf.strings.split.
End of explanation
text = tf.constant("1 10 100")
print(tf.strings.to_number(tf.strings.split(text, " ")))
Explanation: <table>
<tr>
<th>Three strings split, shape: <code>[3, None]</code></th>
</tr>
<tr>
<td>
<img src="images/tensor/string-split.png" alt="Splitting multiple strings returns a tf.RaggedTensor">
</td>
</tr>
</table>
And tf.string.to_number:
End of explanation
byte_strings = tf.strings.bytes_split(tf.constant("Duck"))
byte_ints = tf.io.decode_raw(tf.constant("Duck"), tf.uint8)
print("Byte strings:", byte_strings)
print("Bytes:", byte_ints)
# Or split it up as unicode and then decode it
unicode_bytes = tf.constant("アヒル 🦆")
unicode_char_bytes = tf.strings.unicode_split(unicode_bytes, "UTF-8")
unicode_values = tf.strings.unicode_decode(unicode_bytes, "UTF-8")
print("\nUnicode bytes:", unicode_bytes)
print("\nUnicode chars:", unicode_char_bytes)
print("\nUnicode values:", unicode_values)
Explanation: Although you can't use tf.cast to turn a string tensor into numbers, you can convert it into bytes, and then into numbers.
End of explanation
# Sparse tensors store values by index in a memory-efficient manner
sparse_tensor = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
values=[1, 2],
dense_shape=[3, 4])
print(sparse_tensor, "\n")
# You can convert sparse tensors to dense
print(tf.sparse.to_dense(sparse_tensor))
Explanation: The tf.string dtype is used for all raw bytes data in TensorFlow. The tf.io module contains functions for converting data to and from bytes, including decoding images and parsing csv.
Sparse tensors
Sometimes, your data is sparse, like a very wide embedding space. TensorFlow supports tf.sparse.SparseTensor and related operations to store sparse data efficiently.
<table>
<tr>
<th>A `tf.SparseTensor`, shape: <code>[3, 4]</code></th>
</tr>
<tr>
<td>
<img src="images/tensor/sparse.png" alt="An 3x4 grid, with values in only two of the cells.">
</td>
</tr>
</table>
End of explanation |
165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DAT210x - Programming with Python for DS
Module5- Lab6
Step1: A Convenience Function
This method is for your visualization convenience only. You aren't expected to know how to put this together yourself, although you should be able to follow the code by now
Step2: The Assignment
Use the same code from Module4/assignment4.ipynb to load up the face_data.mat file into a dataframe called df. Be sure to calculate the num_pixels value, and to rotate the images to being right-side-up instead of sideways. This was demonstrated in the Lab Assignment 4 code.
Step3: Load up your face_labels dataset. It only has a single column, and you're only interested in that single column. You will have to slice the column out so that you have access to it as a "Series" rather than as a "Dataframe". This was discussed in the the "Slicin'" lecture of the "Manipulating Data" reading on the course website. Use an appropriate indexer to take care of that. Be sure to print out the labels and compare what you see to the raw face_labels.csv so you know you loaded it correctly.
Step4: Do train_test_split. Use the same code as on the EdX platform in the reading material, but set the random_state=7 for reproducibility, and the test_size to 0.15 (15%). Your labels are actually passed in as a series (instead of as an NDArray) so that you can access their underlying indices later on. This is necessary so you can find your samples in the original dataframe. The convenience methods we've written for you that handle drawing expect this, so that they can plot your testing data as images rather than as points
Step5: Dimensionality Reduction
Step6: Implement KNeighborsClassifier here. You can use any K value from 1 through 20, so play around with it and attempt to get good accuracy. Fit the classifier against your training data and labels.
Step7: Calculate and display the accuracy of the testing set (data_test and label_test)
Step8: Let's chart the combined decision boundary, the training data as 2D plots, and the testing data as small images so we can visually validate performance
Step9: After submitting your answers, experiment with using using PCA instead of ISOMap. Are the results what you expected? Also try tinkering around with the test/train split percentage from 10-20%. Notice anything? | Python Code:
import random, math
import pandas as pd
import numpy as np
import scipy.io
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
plt.style.use('ggplot') # Look Pretty
# Leave this alone until indicated:
Test_PCA = False
Explanation: DAT210x - Programming with Python for DS
Module5- Lab6
End of explanation
def Plot2DBoundary(DTrain, LTrain, DTest, LTest):
# The dots are training samples (img not drawn), and the pics are testing samples (images drawn)
    # Play around with the K values. This is a very controlled dataset, so it should be possible to get perfect classification on the testing entries
# Play with the K for isomap, play with the K for neighbors.
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title('Transformed Boundary, Image Space -> 2D')
padding = 0.1 # Zoom out
resolution = 1 # Don't get too detailed; smaller values (finer rez) will take longer to compute
colors = ['blue','green','orange','red']
# ------
# Calculate the boundaries of the mesh grid. The mesh grid is
# a standard grid (think graph paper), where each point will be
# sent to the classifier (KNeighbors) to predict what class it
# belongs to. This is why KNeighbors has to be trained against
# 2D data, so we can produce this countour. Once we have the
# label for each point on the grid, we can color it appropriately
# and plot it.
x_min, x_max = DTrain[:, 0].min(), DTrain[:, 0].max()
y_min, y_max = DTrain[:, 1].min(), DTrain[:, 1].max()
x_range = x_max - x_min
y_range = y_max - y_min
x_min -= x_range * padding
y_min -= y_range * padding
x_max += x_range * padding
y_max += y_range * padding
# Using the boundaries, actually make the 2D Grid Matrix:
xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution),
np.arange(y_min, y_max, resolution))
# What class does the classifier say about each spot on the chart?
# The values stored in the matrix are the predictions of the model
# at said location:
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the mesh grid as a filled contour plot:
    plt.contourf(xx, yy, Z, cmap=plt.cm.terrain, zorder=-100)
# ------
# When plotting the testing images, used to validate if the algorithm
# is functioning correctly, size them as 5% of the overall chart size
x_size = x_range * 0.05
y_size = y_range * 0.05
# First, plot the images in your TEST dataset
img_num = 0
for index in LTest.index:
# DTest is a regular NDArray, so you'll iterate over that 1 at a time.
x0, y0 = DTest[img_num,0]-x_size/2., DTest[img_num,1]-y_size/2.
x1, y1 = DTest[img_num,0]+x_size/2., DTest[img_num,1]+y_size/2.
# DTest = our images isomap-transformed into 2D. But we still want
# to plot the original image, so we look to the original, untouched
# dataset (at index) to get the pixels:
        img = df.iloc[index, :].values.reshape(num_pixels, num_pixels)
ax.imshow(img,
aspect='auto',
cmap=plt.cm.gray,
interpolation='nearest',
zorder=100000,
extent=(x0, x1, y0, y1),
alpha=0.8)
img_num += 1
# Plot your TRAINING points as well... as points rather than as images
for label in range(len(np.unique(LTrain))):
indices = np.where(LTrain == label)
ax.scatter(DTrain[indices, 0], DTrain[indices, 1], c=colors[label], alpha=0.8, marker='o')
# Plot
plt.show()
Explanation: A Convenience Function
This method is for your visualization convenience only. You aren't expected to know how to put this together yourself, although you should be able to follow the code by now:
End of explanation
# .. your code here ..
Explanation: The Assignment
Use the same code from Module4/assignment4.ipynb to load up the face_data.mat file into a dataframe called df. Be sure to calculate the num_pixels value, and to rotate the images to being right-side-up instead of sideways. This was demonstrated in the Lab Assignment 4 code.
End of explanation
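One possible sketch of this step (the file path, the .mat key name 'images', and the stored orientation are assumptions carried over from the earlier lab, so adjust them to your copy of the data):
mat = scipy.io.loadmat('Datasets/face_data.mat')      # assumed path
df = pd.DataFrame(mat['images']).T                    # assumed key; one image per row after transposing
num_images, num_pixels = df.shape
num_pixels = int(math.sqrt(num_pixels))               # images are square
# Rotate each image so it is right-side-up instead of sideways:
for i in range(num_images):
    df.loc[i, :] = df.loc[i, :].values.reshape(num_pixels, num_pixels).T.reshape(-1)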
# .. your code here ..
Explanation: Load up your face_labels dataset. It only has a single column, and you're only interested in that single column. You will have to slice the column out so that you have access to it as a "Series" rather than as a "Dataframe". This was discussed in the the "Slicin'" lecture of the "Manipulating Data" reading on the course website. Use an appropriate indexer to take care of that. Be sure to print out the labels and compare what you see to the raw face_labels.csv so you know you loaded it correctly.
End of explanation
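A sketch of one way to do that (the CSV path and the assumption that the file has no header row may need adjusting):
label_df = pd.read_csv('Datasets/face_labels.csv', header=None)
labels = label_df.iloc[:, 0]   # slice the single column out as a Series, keeping its index
print(labels)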
# .. your code here ..
Explanation: Do train_test_split. Use the same code as on the EdX platform in the reading material, but set the random_state=7 for reproducibility, and the test_size to 0.15 (15%). Your labels are actually passed in as a series (instead of as an NDArray) so that you can access their underlying indices later on. This is necessary so you can find your samples in the original dataframe. The convenience methods we've written for you that handle drawing expect this, so that they can plot your testing data as images rather than as points:
End of explanation
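A minimal sketch of the split described above (train_test_split lives in sklearn.model_selection; the variable names match what the plotting helper expects):
from sklearn.model_selection import train_test_split
data_train, data_test, label_train, label_test = train_test_split(
    df, labels, test_size=0.15, random_state=7)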
if Test_PCA:
# INFO: PCA is used *before* KNeighbors to simplify your high dimensionality
# image samples down to just 2 principal components! A lot of information
# (variance) is lost during the process, as I'm sure you can imagine. But
# you have to drop the dimension down to two, otherwise you wouldn't be able
# to visualize a 2D decision surface / boundary. In the wild, you'd probably
# leave in a lot more dimensions, which is better for higher accuracy, but
# worse for visualizing the decision boundary;
#
# Your model should only be trained (fit) against the training data (data_train)
# Once you've done this, you need use the model to transform both data_train
# and data_test from their original high-D image feature space, down to 2D
# TODO: Implement PCA here. ONLY train against your training data, but
# transform both your training + test data, storing the results back into
# data_train, and data_test.
# .. your code here ..
else:
# INFO: Isomap is used *before* KNeighbors to simplify your high dimensionality
    # image samples down to just 2 components! A lot of information is
    # lost during the process, as I'm sure you can imagine. But if you have
# non-linear data that can be represented on a 2D manifold, you probably will
# be left with a far superior dataset to use for classification. Plus by
# having the images in 2D space, you can plot them as well as visualize a 2D
# decision surface / boundary. In the wild, you'd probably leave in a lot more
# dimensions, which is better for higher accuracy, but worse for visualizing the
# decision boundary;
# Your model should only be trained (fit) against the training data (data_train)
# Once you've done this, you need use the model to transform both data_train
# and data_test from their original high-D image feature space, down to 2D
# TODO: Implement Isomap here. ONLY train against your training data, but
# transform both your training + test data, storing the results back into
# data_train, and data_test.
# .. your code here ..
Explanation: Dimensionality Reduction
End of explanation
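For the Isomap branch in the cell above, a possible sketch (n_components must be 2 for the 2D plotting; n_neighbors=5 is just a starting value to tune):
from sklearn.manifold import Isomap
iso = Isomap(n_neighbors=5, n_components=2)
iso.fit(data_train)                     # train only on the training data
data_train = iso.transform(data_train)
data_test = iso.transform(data_test)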
# .. your code here ..
Explanation: Implement KNeighborsClassifier here. You can use any K value from 1 through 20, so play around with it and attempt to get good accuracy. Fit the classifier against your training data and labels.
End of explanation
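One way this could look (K=5 is an arbitrary starting point; the classifier is stored in a variable named model because the Plot2DBoundary helper above calls model.predict):
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=5)
model.fit(data_train, label_train)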
# .. your code here ..
Explanation: Calculate and display the accuracy of the testing set (data_test and label_test):
End of explanation
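For example, assuming the classifier from the previous step is named model (score returns the mean accuracy on the given test data and labels):
print(model.score(data_test, label_test))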
Plot2DBoundary(data_train, label_train, data_test, label_test)
Explanation: Let's chart the combined decision boundary, the training data as 2D plots, and the testing data as small images so we can visually validate performance:
End of explanation
# .. your code changes above ..
Explanation: After submitting your answers, experiment with using PCA instead of Isomap. Are the results what you expected? Also try tinkering around with the test/train split percentage from 10-20%. Notice anything?
End of explanation |
166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Linear Regression
Step1: From
Step2: Next, let's create vectors of our ages and heights.
Step3: Now let's visualize our data to make sure that linear regression is appropriate for predicting its distributions.
Step5: Our data looks pretty linear. We can now calculate the slope and intercept of the line of least squares. We abstract numpy's least squares function using a function of our own.
Step6: To use our leastSquares function, we input our age and height vectors as our x and y arguments. Next, let's call leastSquares to get the slope and intercept, and use the slope and intercept to calculate the size of our alpha (intercept) and beta (slope) ranges.
Step7: Now we can visualize the slope and intercept on the same plot as the data to make sure it is working correctly.
Step8: Looks great! Based on the plot above, we are confident that bayesian linear regression will give us reasonable distributions for predicting future values. Now we need to create our hypotheses. Each hypothesis will consist of a range of intercepts (alphas), slopes (betas) and sigmas.
Step10: Next make a least squares class that inherits from Suite and Joint where likelihood is calculated based on error from data. The likelihood function will depend on the data and normal distributions for each hypothesis.
Step11: Now instantiate a LeastSquaresHypos suite with our hypos.
Step12: And update the suite with our data.
Step13: We can now plot marginal distributions to visualize the probability distribution for each of our hypotheses for intercept, slope, and sigma values. Our hypotheses were carefully picked based on ranges that we found worked well, which is why all the intercepts, slopes, and sigmas that are important to this dataset are included in our hypotheses.
Step16: Next, we want to sample random data from our hypotheses. To do this, we will make two functions, getHeights and getRandomData. getRandomData calls getHeights to obtain random height values.
Step17: Now we take 10000 random samples of pairs of months and heights. Here we want at least 10000 items so that we can get very smooth sampling.
Step18: Next, we want to get the intensity of the data at locations. We do that by adding the randomly sampled values to buckets. This gives us intensity values for a grid of pixels in our sample range.
Step21: Since density plotting is much simpler in Mathematica, we have written these funcitons to export all our data to csv files and plot them in Mathematica. | Python Code:
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint, EvalNormalPdf
import thinkplot
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Bayesian Linear Regression:
Computational Bayes final project.
Nathan Yee
Uma Desai
The first example, used to gain understanding, is taken from Cypress Frankenfeld:
http://allendowney.blogspot.com/2015/04/two-hour-marathon-by-2041-probably.html
End of explanation
df = pd.read_csv('ageVsHeight.csv', skiprows=0, delimiter='\t')
df
Explanation: From: http://lib.stat.cmu.edu/DASL/Datafiles/Ageandheight.html
The height of a child is not stable but increases over time. Since the pattern of growth varies from child to child, one way to understand the general growth pattern is by using the average of several children's heights, as presented in this data set. The scatterplot of height versus age is almost a straight line, showing a linear growth pattern. The straightforward relationship between height and age provides a simple illustration of linear relationships, correlation, and simple regression.
Description: Mean heights of a group of children in Kalama, an Egyptian village that is the site of a study of nutrition in developing countries. The data were obtained by measuring the heights of all 161 children in the village each month over several years.
Age: Age in months
Height: Mean height in centimeters for children at this age
Let's start by loading our data into a Pandas dataframe to see what we're working with.
End of explanation
ages = np.array(df['age'])
heights = np.array(df['height'])
Explanation: Next, let's create vectors of our ages and heights.
End of explanation
plt.plot(ages, heights, 'o', label='Original data', markersize=10)
Explanation: Now let's visualize our data to make sure that linear regression is appropriate for predicting its distributions.
End of explanation
def leastSquares(x, y):
    """leastSquares takes in two arrays of values. Then it returns the slope and intercept
    of the least squares fit of the two.

    Args:
        x (numpy array): numpy array of values.
        y (numpy array): numpy array of values.

    Returns:
        slope, intercept (tuple): returns a tuple of floats.
    """
A = np.vstack([x, np.ones(len(x))]).T
slope, intercept = np.linalg.lstsq(A, y)[0]
return slope, intercept
Explanation: Our data looks pretty linear. We can now calculate the slope and intercept of the line of least squares. We abstract numpy's least squares function using a function of our own.
End of explanation
slope, intercept = leastSquares(ages, heights)
print(slope, intercept)
alpha_range = .03 * intercept
beta_range = .05 * slope
Explanation: To use our leastSquares function, we input our age and height vectors as our x and y arguments. Next, let's call leastSquares to get the slope and intercept, and use the slope and intercept to calculate the size of our alpha (intercept) and beta (slope) ranges.
End of explanation
plt.plot(ages, heights, 'o', label='Original data', markersize=10)
plt.plot(ages, slope*ages + intercept, 'r', label='Fitted line')
plt.legend()
plt.show()
Explanation: Now we can visualize the slope and intercept on the same plot as the data to make sure it is working correctly.
End of explanation
alphas = np.linspace(intercept - alpha_range, intercept + alpha_range, 20)
betas = np.linspace(slope - beta_range, slope + beta_range, 20)
sigmas = np.linspace(2, 4, 15)
hypos = ((alpha, beta, sigma) for alpha in alphas
for beta in betas for sigma in sigmas)
data = [(age, height) for age in ages for height in heights]
Explanation: Looks great! Based on the plot above, we are confident that bayesian linear regression will give us reasonable distributions for predicting future values. Now we need to create our hypotheses. Each hypothesis will consist of a range of intercepts (alphas), slopes (betas) and sigmas.
End of explanation
class leastSquaresHypos(Suite, Joint):
def Likelihood(self, data, hypo):
Likelihood calculates the probability of a particular line (hypo)
based on data (ages Vs height) of our original dataset. This is
done with a normal pmf as each hypo also contains a sigma.
Args:
data (tuple): tuple that contains ages (float), heights (float)
hypo (tuple): intercept (float), slope (float), sigma (float)
Returns:
P(data|hypo)
intercept, slope, sigma = hypo
total_likelihood = 1
for age, measured_height in data:
hypothesized_height = slope * age + intercept
error = measured_height - hypothesized_height
total_likelihood *= EvalNormalPdf(error, mu=0, sigma=sigma)
return total_likelihood
Explanation: Next make a least squares class that inherits from Suite and Joint where likelihood is calculated based on error from data. The likelihood function will depend on the data and normal distributions for each hypothesis.
End of explanation
LeastSquaresHypos = leastSquaresHypos(hypos)
Explanation: Now instantiate a LeastSquaresHypos suite with our hypos.
End of explanation
for item in data:
LeastSquaresHypos.Update([item])
LeastSquaresHypos[LeastSquaresHypos.MaximumLikelihood()]
Explanation: And update the suite with our data.
End of explanation
marginal_intercepts = LeastSquaresHypos.Marginal(0)
thinkplot.hist(marginal_intercepts)
marginal_slopes = LeastSquaresHypos.Marginal(1)
thinkplot.hist(marginal_slopes)
marginal_sigmas = LeastSquaresHypos.Marginal(2)
thinkplot.hist(marginal_sigmas)
Explanation: We can now plot marginal distributions to visualize the probability distribution for each of our hypotheses for intercept, slope, and sigma values. Our hypotheses were carefully picked based on ranges that we found worked well, which is why all the intercepts, slopes, and sigmas that are important to this dataset are included in our hypotheses.
End of explanation
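A quick numeric summary of these marginals can also be useful; a small added sketch (Mean() is a method of the Pmf objects returned by Marginal()):
print('Posterior mean intercept:', marginal_intercepts.Mean())
print('Posterior mean slope:', marginal_slopes.Mean())
print('Posterior mean sigma:', marginal_sigmas.Mean())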
def getHeights(hypo_samples, random_months):
    """getHeights takes in random hypos and random months and returns the
    corresponding random heights.
    """
random_heights = np.zeros(len(random_months))
for i in range(len(random_heights)):
intercept = hypo_samples[i][0]
slope = hypo_samples[i][1]
sigma = hypo_samples[i][2]
month = random_months[i]
random_heights[i] = np.random.normal((slope * month + intercept), sigma, 1)
return random_heights
def getRandomData(start_month, end_month, n, LeastSquaresHypos):
    """
    start_month (int): Starting x range of our data
    end_month (int): Ending x range of our data
    n (int): Number of samples
    LeastSquaresHypos (Suite): Contains the hypos we want to sample
    """
random_hypos = LeastSquaresHypos.Sample(n)
random_months = np.random.uniform(start_month, end_month, n)
random_heights = getHeights(random_hypos, random_months)
return random_months, random_heights
Explanation: Next, we want to sample random data from our hypotheses. To do this, we will make two functions, getHeights and getRandomData. getRandomData calls getHeights to obtain random height values.
End of explanation
num_samples = 10000
random_months, random_heights = getRandomData(18, 40, num_samples, LeastSquaresHypos)
Explanation: Now we take 10000 random samples of pairs of months and heights. Here we want at least 10000 items so that we can get very smooth sampling.
End of explanation
num_buckets = 70 #num_buckets^2 is actual number
# create horizontal and vertical linearly spaced ranges as buckets.
hori_range, hori_step = np.linspace(18, 40 , num_buckets, retstep=True)
vert_range, vert_step = np.linspace(65, 100, num_buckets, retstep=True)
hori_step = hori_step / 2
vert_step = vert_step / 2
# store each bucket as a tuple in a the buckets dictionary.
buckets = dict()
keys = [(hori, vert) for hori in hori_range for vert in vert_range]
# set each bucket as empty
for key in keys:
buckets[key] = 0
# loop through the randomly sampled data
for month, height in zip(random_months, random_heights):
# check each bucket and see if randomly sampled data
for key in buckets:
if month > key[0] - hori_step and month < key[0] + hori_step:
if height > key[1] - vert_step and height < key[1] + vert_step:
buckets[key] += 1
break # can only fit in a single bucket
pcolor_months = []
pcolor_heights = []
pcolor_intensities = []
for key in buckets:
pcolor_months.append(key[0])
pcolor_heights.append(key[1])
pcolor_intensities.append(buckets[key])
print(len(pcolor_months), len(pcolor_heights), len(pcolor_intensities))
plt.plot(random_months, random_heights, 'o', label='Random Sampling')
plt.plot(ages, heights, 'o', label='Original data', markersize=10)
plt.plot(ages, slope*ages + intercept, 'r', label='Fitted line')
# plt.legend()
plt.show()
Explanation: Next, we want to get the intensity of the data at locations. We do that by adding the randomly sampled values to buckets. This gives us intensity values for a grid of pixels in our sample range.
End of explanation
def append_to_file(path, data):
    """append_to_file appends a line of data to the specified file, then adds a newline.

    Args:
        path (string): the file path
        data (string): the line to append

    Return:
        VOID
    """
with open(path, 'a') as file:
file.write(data + '\n')
def delete_file_contents(path):
    """delete_file_contents deletes the contents of a file.

    Args:
        path (string): the file path

    Return:
        VOID
    """
with open(path, 'w'):
pass
def intensityCSV(x, y, z):
file_name = 'intensityData.csv'
delete_file_contents(file_name)
for xi, yi, zi in zip(x, y, z):
append_to_file(file_name, "{}, {}, {}".format(xi, yi, zi))
def monthHeightCSV(ages, heights):
file_name = 'monthsHeights.csv'
delete_file_contents(file_name)
for month, height in zip(ages, heights):
append_to_file(file_name, "{}, {}".format(month, height))
def fittedLineCSV(ages, slope, intercept):
file_name = 'fittedLineCSV.csv'
delete_file_contents(file_name)
for age in ages:
append_to_file(file_name, "{}, {}".format(age, slope*age + intercept))
def makeCSVData(pcolor_months, pcolor_heights, pcolor_intensities, ages, heights, slope, intercept):
intensityCSV(pcolor_months, pcolor_heights, pcolor_intensities)
monthHeightCSV(ages, heights)
fittedLineCSV(ages, slope, intercept)
makeCSVData(pcolor_months, pcolor_heights, pcolor_intensities, ages, heights, slope, intercept)
Explanation: Since density plotting is much simpler in Mathematica, we have written these functions to export all our data to CSV files and plot them in Mathematica.
End of explanation |
167 | Given the following text description, write Python code to implement the functionality described.
Description:
Find a triplet that sums to a given value
Returns true if there is a triplet with sum equal to 'sum' present in A[]; also prints the triplet ; Sort the elements ; Now fix the first element one by one and find the other two elements ; To find the other two elements, start two index variables from the two corners of the array and move them toward each other ; index of the first element in the remaining elements ; index of the last element ; A[i] + A[l] + A[r] > sum ; If we reach here, then no triplet was found ; Driver program to test the above function
| Python Code:
def find3Numbers(A, arr_size, sum):
    # Note: 'sum' shadows the built-in sum(); kept to match the description above.
    # Sort the elements
    A.sort()
    # Now fix the first element one by one and find the other two with two pointers
    for i in range(0, arr_size - 2):
        l = i + 1            # index of the first element in the remaining elements
        r = arr_size - 1     # index of the last element
        while l < r:
            if A[i] + A[l] + A[r] == sum:
                print("Triplet is", A[i], ",", A[l], ",", A[r])
                return True
            elif A[i] + A[l] + A[r] < sum:
                l += 1
            else:            # A[i] + A[l] + A[r] > sum
                r -= 1
    # If we reach here, then no triplet was found
    return False


A = [1, 4, 45, 6, 10, 8]
sum = 22
arr_size = len(A)
find3Numbers(A, arr_size, sum)
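For reference, sorting costs O(n log n) and the two-pointer scan is O(n) per fixed first element, so the overall running time is O(n^2) with O(1) extra space.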
|
168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3 Types, Functions and Flow Control
Data types
Step1: Numbers
Step2: If this should not make sense, you can print some documentation
Step3: Strings
Step4: Slicing
Step5: String Operations
Step6: capitalizing strings
Step7: Environments like Jupyter and Spyder allow you to explore the methods (like .capitalize() or .upper()) by typing x. and pressing tab.
Formatting
You can also format strings, e.g. to display rounded numbers
Step8: With python 3.6 this became even more readable
Step9: Lists
Step10: Tuples
Tuples are immutable and can be thought of as read-only lists.
Step11: Dictionaries
Dictionaries are lists with named entries. There are also named tuples (collections.namedtuple), which behave like immutable records with named fields. Use OrderedDict from collections if you need to preserve the order.
Step12: When duplicate keys are encountered during assignment, the last assignment wins
Step13: Finding the total number of items in the dictionary
Step14: Produces a printable string representation of a dictionary
Step16: Functions
Step17: Flow Control
In general, statements are executed sequentially
Step18: A loop becomes an infinite loop if a condition never becomes FALSE. You must use caution when using while loops because of the possibility that this condition never resolves to a FALSE value. This results in a loop that never ends. Such a loop is called an infinite loop.
An infinite loop might be useful in client/server programming where the server needs to run continuously so that client programs can communicate with it as and when required.
for Loop
Executes a sequence of statements multiple times and abbreviates the code that manages the loop variable.
Step19: Sometimes one also needs the index of the element, e.g. to plot a subset of data on different subplots. Then enumerate provides an elegant ("pythonic") way
Step20: In principle one could also iterate over an index going from 0 to the number of elements
Step21: for loops can be elegantly integrated for creating lists
Step22: This is equivalent to the following loop
Step23: Nested Loops
The Python programming language allows you to use one loop inside another loop.
A final note on loop nesting is that you can put any type of loop inside any other type of loop. For example, a for loop can be inside a while loop or vice versa.
Step24: if
The if statement will evaluate the code only if a given condition is met (used with comparison and logical operators such as ==, <, >, <=, >=, not, is, in, etc.).
Optionally we can introduce an else statement to execute alternative code when the condition is not met.
Step25: We can also use one or more elif statements to check multiple expressions for TRUE and execute a block of code as soon as one of the conditions evaluates to TRUE
Step26: else statements can also be used with while and for loops (the code will be executed at the end)
You can also use nested if statements.
break
It terminates the current loop and resumes execution at the next statement. The break statement can be used in both while and for loops. If you are using nested loops, the break statement stops the execution of the innermost loop and start executing the next line of code after the block.
Step27: continue
The continue statement rejects all the remaining statements in the current iteration of the loop and moves the control back to the top of the loop (like a "skip").
The continue statement can be used in both while and for loops.
Step28: pass
The pass statement is a null operation; nothing happens when it executes. The pass is also useful in places where your code will eventually go, but has not been written yet
Step29: pass and continuecould seem similar but they are not. The printed message "This is pass block", wouldn't have been printed if continue had been used instead. pass does nothing, continue goes to the next iteration. | Python Code:
x_int = 3
x_float = 3.
x_string = 'three'
x_list = [3, 'three']
type(x_float)
type(x_string)
type(x_list)
Explanation: 3 Types, Functions and Flow Control
Data types
End of explanation
abs(-1)
import math
math.floor(4.5)
math.exp(1)
math.log(1)
math.log10(10)
math.sqrt(9)
round(4.54,1)
Explanation: Numbers
End of explanation
round?
Explanation: If this should not make sense, you can print some documentation:
End of explanation
string = 'Hello World!'
string2 = "This is also allowed, helps if you want 'this' in a string and vice versa"
len(string)
Explanation: Strings
End of explanation
print(string)
print(string[0])
print(string[2:5])
print(string[2:])
print(string[:5])
print(string * 2)
print(string + 'TEST')
print(string[-1])
Explanation: Slicing
End of explanation
print(string/2)
print(string - 'TEST')
print(string**2)
Explanation: String Operations
End of explanation
x = 'test'
x.capitalize()
x.find('e')
x = 'TEST'
x.lower()
Explanation: capitalizing strings:
End of explanation
print('Pi is {:06.2f}'.format(3.14159))
print('Space can be filled using {:_>10}'.format(x))
Explanation: Environments like Jupyter and Spyder allow you to explore the methods (like .capitalize() or .upper()) by typing x. and pressing tab.
Formatting
You can also format strings, e.g. to display rounded numbers
End of explanation
print(f'{x} 1 2 3')
Explanation: With Python 3.6 this became even more readable using f-strings
End of explanation
x_list
x_list[0]
x_list.append('III')
x_list
x_list.append('III')
x_list
del x_list[-1]
x_list
y_list = ['john', '2.', '1']
y_list + x_list
x_list*2
z_list=[4,78,3]
max(z_list)
min(z_list)
sum(z_list)
z_list.count(4)
z_list.append(4)
z_list.count(4)
z_list.sort()
z_list
z_list.reverse()
z_list
Explanation: Lists
End of explanation
y_tuple = ('john', '2.', '1')
type(y_tuple)
y_list
y_list[0] = 'Erik'
y_list
y_tuple[0] = 'Erik'
Explanation: Tuples
Tuples are immutable and can be thought of as read-only lists.
End of explanation
tinydict = {'name': 'john', 'code':6734, 'dept': 'sales'}
type(tinydict)
print(tinydict)
print(tinydict.keys())
print(tinydict.values())
tinydict['code']
tinydict['surname']
tinydict['dept'] = 'R&D' # update existing entry
tinydict['surname'] = 'Sloan' # Add new entry
tinydict['surname']
del tinydict['code'] # remove entry with key 'code'
tinydict['code']
tinydict.clear()
del tinydict
Explanation: Dictionaries
Dictionaries are lists with named entries. There are also named tuples (collections.namedtuple), which behave like immutable records with named fields. Use OrderedDict from collections if you need to preserve the order.
End of explanation
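A small illustration of the two alternatives mentioned above (an added sketch, not in the original notebook):
from collections import namedtuple, OrderedDict

Point = namedtuple('Point', ['x', 'y'])
p = Point(1, 2)
print(p.x, p.y)          # fields are read-only: p.x = 5 would raise AttributeError

od = OrderedDict([('b', 2), ('a', 1)])
print(list(od.keys()))   # keys stay in insertion order: ['b', 'a']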
dic = {'Name': 'Zara', 'Age': 7, 'Name': 'Manni'}
dic
Explanation: When duplicate keys are encountered during assignment, the last assignment wins
End of explanation
len(dic)
Explanation: Finding the total number of items in the dictionary:
End of explanation
str(dic)
Explanation: Produces a printable string representation of a dictionary:
End of explanation
def mean(mylist):
    """Calculate the mean of the elements in mylist"""
number_of_items = len(mylist)
sum_of_items = sum(mylist)
return sum_of_items / number_of_items
type(mean)
z_list
mean(z_list)
help(mean)
mean?
Explanation: Functions
End of explanation
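# Note: the triple-quoted string directly below the def line is the docstring;
# it is what help(mean) and mean? displayed above. A second small example:
def variance(mylist):
    """Calculate the (population) variance of the elements in mylist"""
    m = mean(mylist)
    return sum((item - m) ** 2 for item in mylist) / len(mylist)
help(variance)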
count = 0
while (count < 9):
print('The count is: ' + str(count))
count += 1
print('Good bye!')
Explanation: Flow Control
In general, statements are executed sequentially: The first statement in a function is executed first, followed by the second, and so on. There may be a situation when you need to execute a block of code several number of times. In Python a block is delimitated by intendation, i.e. all lines starting at the same space are one block.
Programming languages provide various control structures that allow for more complicated execution paths.
A loop statement allows us to execute a statement or group of statements multiple times.
while Loop
Repeats a statement or group of statements while a given condition is TRUE. It tests the condition before executing the loop body.
End of explanation
fruits = ['banana', 'apple', 'mango']
for fruit in fruits: # Second Example
print('Current fruit :', fruit)
Explanation: A loop becomes an infinite loop if its condition never becomes FALSE. You must use caution when using while loops because of the possibility that this condition never resolves to a FALSE value. This results in a loop that never ends. Such a loop is called an infinite loop.
An infinite loop might be useful in client/server programming where the server needs to run continuously so that client programs can communicate with it as and when required.
for Loop
Executes a sequence of statements multiple times and abbreviates the code that manages the loop variable.
End of explanation
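# Aside, illustrating the infinite-loop note above: loop "forever" with
# while True and leave explicitly with break once a stopping condition holds.
count = 0
while True:
    count += 1
    if count >= 3:
        break
print('left the loop after ' + str(count) + ' iterations')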
for index, fruit in enumerate(fruits):
print('Current fruit :', fruits[index])
Explanation: Sometimes one also needs the index of the element, e.g. to plot a subset of data on different subplots. Then enumerate provides an elegant ("pythonic") way:
End of explanation
for index in range(len(fruits)):
print('Current fruit:', fruits[index])
Explanation: In principle one could also iterate over an index going from 0 to the number of elements:
End of explanation
fruits_with_b = [fruit for fruit in fruits if fruit.startswith('b')]
fruits_with_b
Explanation: for loops can be elegantly folded into a single expression (a list comprehension) for creating lists
End of explanation
fruits_with_b = []
for fruit in fruits:
if fruit.startswith('b'):
fruits_with_b.append(fruit)
fruits_with_b
Explanation: This is equivalent to the following loop:
End of explanation
for x in range(1, 3):
for y in range(1, 4):
print(f'{x} * {y} = {x*y}')
Explanation: Nested Loops
The Python programming language allows you to use one loop inside another loop.
A final note on loop nesting is that you can put any type of loop inside of any other type of loop. For example a for loop can be inside a while loop or vice versa.
End of explanation
x = 'Mark'
if x in ['Mark', 'Jack', 'Mary']:
print('present!')
else:
print('absent!')
x = 'Tom'
if x in ['Mark', 'Jack', 'Mary']:
print('present!')
else:
print('absent!')
Explanation: if
The if statement evaluates its code block only if a given condition is met (conditions are built with comparison and logical operators such as ==, <, >, <=, >=, not, is, in, etc.).
Optionally we can introduce an else statement to execute alternative code when the condition is not met.
End of explanation
x = 'Tom'
if x in ['Mark', 'Jack', 'Mary']:
print('present in list A!')
elif x in ['Tom', 'Dick', 'Harry']:
print('present in list B!')
else:
print('absent!')
Explanation: We can also use one or more elif statements to check multiple expressions for TRUE and execute a block of code as soon as one of the conditions evaluates to TRUE
End of explanation
var = 10
while var > 0:
print('Current variable value: ' + str(var))
var = var -1
if var == 5:
break
print('Good bye!')
Explanation: else statements can also be used with while and for loops (the else block is executed when the loop finishes without hitting a break); a short example is sketched after this block.
You can also use nested if statements.
break
It terminates the current loop and resumes execution at the next statement. The break statement can be used in both while and for loops. If you are using nested loops, the break statement stops the execution of the innermost loop and starts executing the next line of code after the block.
End of explanation
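# The loop else mentioned above, sketched briefly: the else block runs only
# when the loop finishes without hitting break.
for fruit in fruits:
    if fruit == 'cherry':
        print('found cherry!')
        break
else:
    print('no cherry in the list')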
for letter in 'Python':
if letter == 'h':
continue
print('Current Letter: ' + letter)
Explanation: continue
The continue statement rejects all the remaining statements in the current iteration of the loop and moves the control back to the top of the loop (like a "skip").
The continue statement can be used in both while and for loops.
End of explanation
for letter in 'Python':
if letter == 'h':
pass
print('This is pass block')
print('Current Letter: ' + letter)
print('Good bye!')
Explanation: pass
The pass statement is a null operation; nothing happens when it executes. pass is also useful as a placeholder in places where your code will eventually go but has not been written yet.
End of explanation
for letter in 'Python':
if letter == 'h':
continue
print('This is pass block')
print('Current Letter: ' + letter)
print('Good bye!')
Explanation: pass and continue could seem similar but they are not. The printed message "This is pass block" would not have been printed if continue had been used instead. pass does nothing, while continue skips to the next iteration.
End of explanation |
169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Get the Data
Step2: Get the regions and color them
Step5: Build the plot
We build it using html to use the html slider widget instead of the ipython widget. This makes the plot more re-useable and means the slider will work on NBViewer.
Step6: Embed in your own template | Python Code:
# Links via http://www.gapminder.org/data/
population_url = "http://spreadsheets.google.com/pub?key=phAwcNAVuyj0XOoBL_n5tAQ&output=xls"
fertility_url = "http://spreadsheets.google.com/pub?key=phAwcNAVuyj0TAlJeCEzcGQ&output=xls"
life_expectancy_url = "http://spreadsheets.google.com/pub?key=tiAiXcrneZrUnnJ9dBU-PAw&output=xls"
def get_data(url):
    # Get the data from the url and return only 1964 - 2013
df = pd.read_excel(url, index_col=0)
df = df.unstack().unstack()
df = df[(df.index >= 1964) & (df.index <= 2013)]
df = df.unstack().unstack()
return df
fertility_df = get_data(fertility_url)
life_expectancy_df = get_data(life_expectancy_url)
population_df = get_data(population_url)
fertility_df.to_hdf('fertility_df.hdf', 'df')
life_expectancy_df.to_hdf('life_expectancy_df.hdf', 'df')
population_df.to_hdf('population_df.hdf', 'df')
fertility_df = pd.read_hdf('fertility_df.hdf', 'df')
life_expectancy_df = pd.read_hdf('life_expectancy_df.hdf', 'df')
population_df = pd.read_hdf('population_df.hdf', 'df')
# have common countries across all data
fertility_df = fertility_df.drop(fertility_df.index.difference(life_expectancy_df.index))
population_df = population_df.drop(population_df.index.difference(life_expectancy_df.index))
# get a size value based on population, but don't let it get too small
population_df_size = np.sqrt(population_df/np.pi)/200
min_size = 3
population_df_size = population_df_size.where(population_df_size >= min_size).fillna(min_size)
Explanation: Get the Data
End of explanation
regions_url = "https://docs.google.com/spreadsheets/d/1OxmGUNWeADbPJkQxVPupSOK5MbAECdqThnvyPrwG5Os/pub?gid=1&output=xls"
regions_df = pd.read_excel(regions_url, index_col=0)
regions_df = regions_df.drop(regions_df.index.difference(life_expectancy_df.index))
regions_df.Group = regions_df.Group.astype('category')
cats = list(regions_df.Group.cat.categories)
def get_color(r):
    # Map each row's region group to its colour in the Spectral6 palette
    return Spectral6[cats.index(r.Group)]
regions_df['region_color'] = regions_df.apply(get_color, axis=1)
Explanation: Get the regions and color them
End of explanation
# Set up the data.
#
# We make a dictionary of sources that can then be passed to the callback so they are ready for JS object to use.
#
# Dictionary_of_sources is:
# {
# 1962: '_1962',
# 1963: '_1963',
# ....
# }
# We turn this into a string and replace '_1962' with _1962. So the end result is js_source_array:
# '{1962: _1962, 1963: _1963, ....}'
#
# When this is passed into the callback and then accessed at runtime,
# the _1962, _1963 are replaced with the actual source objects that are passed in as args.
sources = {}
years = list(fertility_df.columns)
region_color = regions_df['region_color']
region_color.name = 'region_color'
for year in years:
fertility = fertility_df[year]
fertility.name = 'fertility'
life = life_expectancy_df[year]
life.name = 'life'
population = population_df_size[year]
population.name = 'population'
new_df = pd.concat([fertility, life, population, region_color], axis=1)
#new_df = pd.concat([fertility, life, population], axis=1)
sources['_' + str(year)] = ColumnDataSource(new_df)
dictionary_of_sources = dict(zip([x for x in years], ['_%s' % x for x in years]))
js_source_array = str(dictionary_of_sources).replace("'", "")
# Set up the plot
xdr = Range1d(1, 9)
ydr = Range1d(20, 100)
plot = Plot(
x_range=xdr,
y_range=ydr,
title="",
plot_width=800,
plot_height=400,
outline_line_color=None,
toolbar_location=None,
)
AXIS_FORMATS = dict(
minor_tick_in=None,
minor_tick_out=None,
major_tick_in=None,
major_label_text_font_size="10pt",
major_label_text_font_style="normal",
axis_label_text_font_size="10pt",
axis_line_color='#AAAAAA',
major_tick_line_color='#AAAAAA',
major_label_text_color='#666666',
major_tick_line_cap="round",
axis_line_cap="round",
axis_line_width=1,
major_tick_line_width=1,
)
xaxis = LinearAxis(SingleIntervalTicker(interval=1), axis_label="Live births per woman", **AXIS_FORMATS)
yaxis = LinearAxis(SingleIntervalTicker(interval=20), axis_label="Average life expectancy (years)", **AXIS_FORMATS)
plot.add_layout(xaxis, 'below')
plot.add_layout(yaxis, 'left')
# Add the year in background (add before circle)
text_source = ColumnDataSource({'year': ['%s' % years[0]]})
text = Text(x=2, y=35, text='year', text_font_size='150pt', text_color='#EEEEEE')
plot.add_glyph(text_source, text)
# Add the circle
renderer_source = sources['_%s' % years[0]]
circle_glyph = Circle(
x='fertility', y='life', size='population',
fill_color='region_color', fill_alpha=0.8,
line_color='#7c7e71', line_width=0.5, line_alpha=0.5)
circle_renderer = plot.add_glyph(renderer_source, circle_glyph)
# Add the hover (only against the circle and not other plot elements)
tooltips = "@index"
plot.add_tools(HoverTool(tooltips=tooltips, renderers=[circle_renderer]))
# Add the slider
code = """
var year = slider.get('value'),
sources = %s,
new_source_data = sources[year].get('data');
renderer_source.set('data', new_source_data);
renderer_source.trigger('change');
text_source.set('data', {'year': [String(year)]});
text_source.trigger('change');
""" % js_source_array
callback = Callback(args=sources, code=code)
slider = Slider(start=years[0], end=years[-1], value=1, step=1, title="Year", callback=callback)
callback.args["slider"] = slider
callback.args["renderer_source"] = renderer_source
callback.args["text_source"] = text_source
# Add the legend
text_x = 7
text_y = 95
text_properties = dict(x=text_x, text_font_size='10pt', text_color='#666666')
circle_properties = dict(size=10, line_color=None, fill_alpha=0.8)
for i, region in enumerate(cats):
plot.add_glyph(Text(y=text_y, text=[region], **text_properties))
plot.add_glyph(Circle(x=text_x - 0.1, y=text_y + 2, fill_color=Spectral6[i], **circle_properties))
text_y = text_y - 5
# Add a play button
callback = Callback(args={'slider': slider}, code="""
var cur_year = slider.get('value');
if (cur_year == 1) {
cur_year = %d;
}
console.log(cur_year);
console.log(slider);
""" % years[0])
play = Toggle(label="Play", callback=callback)
# Stick the plot and the slider together
layout = vplot(plot, hplot(play, slider))
Explanation: Build the plot
We build it using HTML so we can use the HTML slider widget instead of the ipython widget. This makes the plot more reusable and means the slider will work on NBViewer.
End of explanation
with open('my_template.html', 'r') as f:
template = Template(f.read())
script, div = components(layout)
html = template.render(
title="Bokeh - Gapminder demo",
plot_script=script,
plot_div=div,
)
display(HTML(html))
Explanation: Embed in your own template
End of explanation |
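# For reference, a minimal sketch of a template the render() call above could
# work with (an assumption, not the actual my_template.html used here; a real
# template would also need to load the Bokeh CSS/JS resources). It reuses the
# same Template class imported for the render above.
minimal_template = Template(
    "<!DOCTYPE html><html><head><title>{{ title }}</title></head>"
    "<body>{{ plot_div }}{{ plot_script }}</body></html>"
)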
170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Binary logistic regression using Sklearn
Step2: PyTorch data and model
Step3: BFGS
Step4: SGD
Step5: Momentum
Step6: SPS | Python Code:
!pip install git+https://github.com/IssamLaradji/sps.git
import sklearn
import scipy
import scipy.optimize
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
import itertools
import time
from functools import partial
import os
import numpy as np
from scipy.special import logsumexp
np.set_printoptions(precision=3)
np.set_printoptions(formatter={"float": lambda x: "{0:0.3f}".format(x)})
import torch
import torch.nn as nn
import torchvision
print("torch version {}".format(torch.__version__))
if torch.cuda.is_available():
print(torch.cuda.get_device_name(0))
print("current device {}".format(torch.cuda.current_device()))
else:
print("Torch cannot find GPU")
def set_seed(seed):
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if use_cuda else "cpu")
# torch.backends.cudnn.benchmark = True
Explanation: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/supplements/sps_logreg_torch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Stochastic Polyak Stepsize
https://github.com/IssamLaradji/sps/
Setup
End of explanation
# Fit the model using sklearn
import sklearn.datasets
from sklearn.model_selection import train_test_split
iris = sklearn.datasets.load_iris()
X = iris["data"]
y = (iris["target"] == 2).astype(int)  # 1 if Iris-Virginica, else 0
N, D = X.shape # 150, 4
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
from sklearn.linear_model import LogisticRegression
# We set C to a large number to turn off regularization.
# We don't fit the bias term to simplify the comparison below.
log_reg = LogisticRegression(solver="lbfgs", C=1e5, fit_intercept=False)
log_reg.fit(X_train, y_train)
# extract probability of class 1
pred_sklearn_train = log_reg.predict_proba(X_train)[:, 1]
pred_sklearn_test = log_reg.predict_proba(X_test)[:, 1]
w_mle_sklearn = np.ravel(log_reg.coef_)
print(w_mle_sklearn)
Explanation: Binary logistic regression using Sklearn
End of explanation
from torch.utils.data import DataLoader, TensorDataset
# data. By default, numpy uses double but torch uses float
X_train_t = torch.tensor(X_train, dtype=torch.float)
y_train_t = torch.tensor(y_train, dtype=torch.float)
X_test_t = torch.tensor(X_test, dtype=torch.float)
y_test_t = torch.tensor(y_test, dtype=torch.float)
# To make things interesting, we pick a batchsize of B=33, which is not divisible by N=100
dataset = TensorDataset(X_train_t, y_train_t)
B = 33
dataloader = DataLoader(dataset, batch_size=B, shuffle=True)
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.linear = torch.nn.Linear(D, 1, bias=False)
def forward(self, x):
y_pred = torch.sigmoid(self.linear(x))
return y_pred[:, 0] # (N,1) -> (N)
def criterion(ypred, ytrue, L2reg=0):
loss = torch.nn.BCELoss(reduction="mean")(ypred, ytrue)
w = 0.0
for p in model.parameters():
w += (p**2).sum()
loss += L2reg * w
return loss
Explanation: PyTorch data and model
End of explanation
set_seed(0)
model = Model()
loss_trace = []
optimizer = torch.optim.LBFGS(model.parameters(), history_size=10)
def closure():
optimizer.zero_grad()
y_pred = model(X_train_t)
loss = criterion(y_pred, y_train_t, L2reg=0)
loss.backward()
return loss.item()
max_iter = 10
for i in range(max_iter):
loss = optimizer.step(closure)
loss_trace.append(loss)
plt.figure()
plt.plot(loss_trace)
pred_sgd_train = model(X_train_t).detach().numpy()
pred_sgd_test = model(X_test_t).detach().numpy()
print("predicitons on test set using sklearn")
print(pred_sklearn_test)
print("predicitons on test set using sgd")
print(pred_sgd_test)
set_seed(0)
model = Model()
loss_trace = []
optimizer = torch.optim.LBFGS(model.parameters(), history_size=10)
def closure():
optimizer.zero_grad()
y_pred = model(X_train_t)
loss = criterion(y_pred, y_train_t, L2reg=1e-4)
loss.backward()
return loss.item()
max_iter = 10
for i in range(max_iter):
loss = optimizer.step(closure)
loss_trace.append(loss)
plt.figure()
plt.plot(loss_trace)
pred_sgd_train = model(X_train_t).detach().numpy()
pred_sgd_test = model(X_test_t).detach().numpy()
print("predicitons on test set using sklearn")
print(pred_sklearn_test)
print("predicitons on test set using sgd")
print(pred_sgd_test)
Explanation: BFGS
End of explanation
nepochs = 100
learning_rate = 1e-1
loss_trace = []
set_seed(0)
model = Model()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0)
for epoch in range(nepochs):
for X, y in dataloader:
y_pred = model(X)
loss = criterion(y_pred, y, L2reg=0)
loss_trace.append(loss)
optimizer.zero_grad()
loss.backward()
optimizer.step()
plt.figure()
plt.plot(loss_trace)
plt.ylim([0, 2])
pred_sgd_train = model(X_train_t).detach().numpy()
pred_sgd_test = model(X_test_t).detach().numpy()
print("predicitons on test set using sklearn")
print(pred_sklearn_test)
print("predicitons on test set using sgd")
print(pred_sgd_test)
Explanation: SGD
End of explanation
nepochs = 100
loss_trace = []
set_seed(0)
model = Model()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
for epoch in range(nepochs):
for X, y in dataloader:
y_pred = model(X)
loss = criterion(y_pred, y, L2reg=0)
loss_trace.append(loss)
optimizer.zero_grad()
loss.backward()
optimizer.step()
plt.figure()
plt.plot(loss_trace)
plt.ylim([0, 2])
pred_sgd_train = model(X_train_t).detach().numpy()
pred_sgd_test = model(X_test_t).detach().numpy()
print("predicitons on test set using sklearn")
print(pred_sklearn_test)
print("predicitons on test set using sgd")
print(pred_sgd_test)
Explanation: Momentum
End of explanation
import sps
set_seed(0)
model = Model()
score_list = []
opt = sps.Sps(model.parameters(), c=0.5, eta_max=1, adapt_flag="constant")
# , fstar_flag=True)
# c=0.2 blows up
nepochs = 100
for epoch in range(nepochs):
for X, y in dataloader:
def closure():
loss = criterion(model(X), y, L2reg=1e-4)
loss.backward()
return loss
opt.zero_grad()
loss = opt.step(closure=closure)
loss_trace.append(loss)
# Record metrics
score_dict = {"epoch": epoch}
score_dict["step_size"] = opt.state.get("step_size", {})
score_dict["step_size_avg"] = opt.state.get("step_size_avg", {})
score_dict["train_loss"] = loss
score_list += [score_dict]
import pandas as pd
df = pd.DataFrame(score_list)
df.head()
plt.figure()
plt.plot(df["train_loss"])
plt.ylim([0, 2])
pred_sgd_train = model(X_train_t).detach().numpy()
pred_sgd_test = model(X_test_t).detach().numpy()
print("predicitons on test set using sklearn")
print(pred_sklearn_test)
print("predicitons on test set using sgd")
print(pred_sgd_test)
plt.figure()
plt.plot(df["step_size"])
Explanation: SPS
End of explanation |
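# For reference (a sketch only; see the sps repository for the exact details):
# the step sizes plotted above follow the stochastic Polyak rule, roughly
#     step_size = min( f_i(w) / (c * ||grad f_i(w)||**2), eta_max )
# for the current mini-batch loss f_i, so the step shrinks as the loss
# approaches zero (or f_i^* when fstar_flag is used).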
171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Numerical Summaries of Data
Congratulations, you have some data. Now what? Well, ideally, you'll have a research question to match. In practice, that's not always true or even possible.
So, in the following tutorial, we're going to present some methods to explore your data, using an example data set from SDSS. Most of these methods focus on summarizing the data in some way, that is, compress the information in your data set in such a way that it will be more useful to you than the raw data. There are many ways to achieve this sort of compression; we'll only showcase a few of them. In general, one can summarize data numerically, that is, compute a set of numbers that describe the data in a meaningful way that trades loss of information with descriptiveness. One can also summarize data sets visually, in the form of graphs. Here, we will explore the former category (although we won't get away without making any graphs here, either!). The following tutorial will explore visual summaries of data in more detail.
Many of these methods may seem familiar, but we'll try to teach you something new, too!
At the same time, you'll also get to know some of the most useful packages for numerical computation in python
Step2: Summarizing Data
There are a few very simple, but often used ways to summarize data
Step3: Note that if you have multivariate data (i.e. several dimensions), you can use the axis keyword to specify which dimension to average over
Step4: Note
Step5: Both the median and mean are useful, but be aware that unless the underlying distribution that created your sample is symmetric
Step6: For a (large enough) sample drawn from the distribution the mean and median will not be the same, and more importantly, in this case, they will also not be equal to the most probable value (the top of the distribution). The latter is also called the mode. There's no intrinsic nifty way to compute the mode of a distribution (or sample). In practice, for a univariate sample, you might make a histogram of the sample (that is, define finite bins over the width of your sample, and count all elements that fall into each bin), and then find the bin with the most counts. Note that for creating a histogram, you need to pick the number of your bins (or, equivalently, the width of each bin). This can be tricky, as we'll discuss in more detail in the visualization part of this tutorial.
Step7: Mean, median and mode all tell us something about the center of the sample. However, it tells us nothing about the spread of the sample
Step8: Note
Step9: Mean and variance give us general information about the center and the spread of the sample, but tell us nothing about the shape.
If we want yet more information about how the data are distributed, we'll have to introduce yet another concept
Step10: Using this, we can now compute the Tukey five-number summary
Step11: Finally, if you're working with a pandas DataFrame, you can use the method describe to print some statistical summaries as well
Step12: Let's think more about the mean of a sample. Imagine we actually have a reasonably good idea of what the mean should be. We might then want to ask whether the sample we collected is consistent with that predetermined theoretical value. To do that, we can take the difference between the sample mean and the theoretical value, but we'll need to normalize it by the standard error. After all, the sample mean could be far from the theoretical prediction, but if the spread in the sample is large, that might not mean much.
$t = \frac{\bar{x} - \mu}{\sqrt{(s_x^2/n)}}$
This number is also called the Student t-statistic for univariate data.
This is, of course, not hard to write down in python with the functions for mean and variance we've learned above, but as above, we can make use of functionality in scipy to achieve the same result
Step13: need to write some explanation here
Statistical Distributions in scipy
Aside from the two examples above, scipy.stats has a large amount of functionality that can be helpful when exploring data sets.
We will not go into the theory of probabilities and probability distributions here. Some of that, you will learn later this week. But before we start, some definitions
Step14: We've now created an object that defines a normal (Gaussian) distribution with a scale parameter of 0.4 and a center of 2. What can we do with this?
For example, draw a random sample
Step15: We have sampled 100 numbers from that distribution!
We can also compute the PDF in a given range
Step16: or the CDF
Step17: Mean, median, var and std work as methods for the distribution, too
Step18: Easier than that, we can get it to just give us the moments of the distribution
Step19: For computing the $n$th-order non-central moment, use the moment method
Step20: For more advanced purposes, we can also compute the survival function (1-CDF) and the percent point function (1/CDF) of the distribution
Step21: To compute a confidence interval around the median of the distribution, use the method interval
Step22: Finally, if you have some data that you think was drawn from the distribution, you may use the fit method to fit the distribution's parameters to the sample
Step23: Continuous distributions function in exactly the same way, except that instead of a pdf method, they will have a pmf method
Step24: Exercise
Step25: Note that the matrix is symmetric about the diagonal. Also, the values on the diagonal are just the variance
Step26: To compute the actual correlation between two samples, compute the correlation coefficient, which is the ratio of sample covariance and the product of the individual standard deviations
Step27: The correlation coefficient can range between -1 and 1. If two samples are only offset by a scaling factor (perfect correlation), then $r = 1$. $r = 0$ implies that there is no correlation.
Another way to compare two samples is to compare the means of the two samples. To do this, we can use a generalization of the Student's t-test above to higher dimensions for two samples $x$ and $y$
Step28: Note that the t-statistic tests whether the means of the two samples are the same. We can also test whether a sample is likely to be produced by a reference distribution (single-sample Kolmogorov-Smirnov test) or whether two samples are produced by the same, underlying (unknown) distribution (2-sample Kolmogorov-Smirnov test).
The one-sample KS-test (test sample against a reference distribution) can take, for example, the cdf method of any of the distributions defined in scipy.stats.
Step29: Analogously, for the 2-sample KS-test
Step30: There are many more statistical tests to compare distributions and samples, too many to showcase them all here. Some of them are implemented in scipy.stats, so I invite you to look there for your favourite test!
Linear Regression
Perhaps you've found a high correlation coefficient in your data. Perhaps you already expect some kind of functional dependence of your (bivariate) data set.
In this case, linear regression comes in handy. Here, we'll only give a very brief overview of how to do it (and introduce you to scipy.optimize). Later in the week, you'll learn how to do parameter estimation properly.
Doing simple linear regression in python requires two steps
Step31: Now we'll take two variables, in this case the g- and the r-band magnitudes, and use curve_fit from scipy.optimize to do linear regression
Step32: curve_fit returns first the best-fit parameters, and also the covariance matrix of the parameters
Step33: Let's plot this to see how well our model's done!
Step34: Note that standard linear regression assumes a model that's strictly linear in the coefficients (that is, for example a power law of the type $f(x) = ax^b$ wouldn't be), as well as errors on the data that are independent and identically distributed (they are homoscedastic), curve_fit allows for non-linear models as well as heteroscedastic errors
Step37: The errors that I made up for the example above depend on the value of each data point. Therefore, for a higher r-band magnitude, the error will be larger, too.
Note that curve_fit still assumes your measurement errors are Gaussian, which will allow you to use (generalized) least-squares to do the optimization. If this is not true, you will need to define your own likelihood. The likelihood is the probability of obtaining a data set under the assumption of a model and some model parameters. What that means in detail we'll worry about more on Thursday and Friday. Today, I'll just give you an example of how to set up a likelihood (not the only one, mind you) and use other methods in scipy.optimize to do the minimization
Step38: In practice, we want to maximize the likelihood, but optimization routines always minimize a function. Therefore, we actually want to minimize the log-likelihood.
With the class we've set up above, we can now put this into scipy.optimize.minimize. This function provides a combined interface to the numerous optimization routines available in scipy.optimize. Some of these allow for constrained optimization (that is, optimization where the possible parameter range is restricted), others don't. Each optimization routine may have its own keyword parameters. I invite you to look at the documentation here http
Step39: The output will be an object that contains the necessary information, for example the best-fit parameters, the covariance matrix (for some, not all methods), a status message describing whether the optimization was successful, and the value of the likelihood function at the end of the optimization | Python Code:
import os
import requests
# get some CSV data from the SDSS SQL server
URL = "http://skyserver.sdss.org/dr12/en/tools/search/x_sql.aspx"
cmd = """
SELECT TOP 10000
p.u, p.g, p.r, p.i, p.z, s.class, s.z, s.zerr
FROM
PhotoObj AS p
JOIN
SpecObj AS s ON s.bestobjid = p.objid
WHERE
p.u BETWEEN 0 AND 19.6 AND
p.g BETWEEN 0 AND 20 AND
  (s.class = 'STAR' OR s.class = 'GALAXY' OR s.class = 'QSO')
"""
if not os.path.exists('all_colors.csv'):
cmd = ' '.join(map(lambda x: x.strip(), cmd.split('\n')))
response = requests.get(URL, params={'cmd': cmd, 'format':'csv'})
with open('all_colors.csv', 'w') as f:
f.write(response.text)
import pandas as pd
df = pd.read_csv("all_colors.csv",skiprows=1)
df
Explanation: Numerical Summaries of Data
Congratulations, you have some data. Now what? Well, ideally, you'll have a research question to match. In practice, that's not always true or even possible.
So, in the following tutorial, we're going to present some methods to explore your data, using an example data set from SDSS. Most of these methods focus on summarizing the data in some way, that is, compress the information in your data set in such a way that it will be more useful to you than the raw data. There are many ways to achieve this sort of compression; we'll only showcase a few of them. In general, one can summarize data numerically, that is, compute a set of numbers that describe the data in a meaningful way that trades loss of information with descriptiveness. One can also summarize data sets visually, in the form of graphs. Here, we will explore the former category (although we won't get away without making any graphs here, either!). The following tutorial will explore visual summaries of data in more detail.
Many of these methods may seem familiar, but we'll try to teach you something new, too!
At the same time, you'll also get to know some of the most useful packages for numerical computation in python: numpy and scipy.
Before we start, let's load some astronomy data!
End of explanation
import numpy as np
galaxies = df[df["class"]=="GALAXY"]
x = np.array(galaxies["r"])
x_mean = np.mean(x)
print("The mean of the r-band magnitude of the galaxies in the sample is %.3f"%x_mean)
Explanation: Summarizing Data
There are a few very simple, but often used ways to summarize data: the arithmetic mean and median, along the standard deviation, variance or standard error. For a first look at what your data looks like, these can be incredibly useful tools, but be aware of their limitations, too!
The arithmetic mean of a (univariate) sample $x = {x_1, x_2, ..., x_n}$ with n elements is defined as
$\bar{x} = \frac{1}{n}\sum_{i=1}^{n}{x_i}$ .
In python, you can of course define your own function to do this, but much faster is the implementation in numpy:
End of explanation
x_multi = np.array(galaxies[["u", "g", "r"]])
print(x_multi.shape)
## global average
print(np.mean(x_multi))
## average over the sample for each colour
print(np.mean(x_multi, axis=0))
## average over all colours for each galaxy in the sample
print(np.mean(x_multi, axis=1))
Explanation: Note that if you have multivariate data (i.e. several dimensions), you can use the axis keyword to specify which dimension to average over:
End of explanation
x_med = np.median(x)
print("The median of the r-band magnitude of the sample of galaxies is %.3f"%x_med)
Explanation: Note: which dimension you want to average over depends on the shape of your array, so be careful (and check the shape of the output, if in doubt!).
The average is nice to have, since it's a measure for the "center of gravity" of your sample, however, it is also prone to be strongly affected by outliers! In some cases, the median, the middle of the sample, can be a better choice. For a sample of length $n$, the median is either the $(n+1)/2$-th value, if $n$ is odd, or the mean of the middle two values $n/2$ and $(n+1)/2$, if $n$ is even.
Again, numpy allows for easy and quick computation:
End of explanation
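# Tiny illustration of the outlier sensitivity mentioned above: a single
# extreme value drags the mean around, while the median barely moves.
toy = np.array([1., 2., 3., 4., 100.])
print("mean of toy sample: %.2f"%np.mean(toy))
print("median of toy sample: %.2f"%np.median(toy))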
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.stats
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(16,6))
xcoord = np.linspace(0,2,100)
ax1.plot(xcoord, scipy.stats.norm(1.0, 0.3).pdf(xcoord))
ax1.set_title("Symmetric distribution")
ax2.plot(xcoord, scipy.stats.lognorm(1.0, 0.1).pdf(xcoord))
ax2.set_title("Asymmetric distribution")
Explanation: Both the median and mean are useful, but be aware that unless the underlying distribution that created your sample is symmetric:
End of explanation
h, edges = np.histogram(x, bins=30, range=[16.,19.6], density=False)
## find the index where the binned counts have their maximum
h_max = np.where(h == np.max(h))[0]
## these are the maximum binned counts
max_counts = h[h_max]
## find the middle of the bin with the maximum counts
edge_min = edges[h_max]
edge_max = edges[h_max+1]
edge_mean = np.mean([edge_min, edge_max])
print("The mode of the distribution is located at %.4f"%edge_mean)
Explanation: For a (large enough) sample drawn from the distribution the mean and median will not be the same, and more importantly, in this case, they will also not be equal to the most probable value (the top of the distribution). The latter is also called the mode. There's no intrinsic nifty way to compute the mode of a distribution (or sample). In practice, for a univariate sample, you might make a histogram of the sample (that is, define finite bins over the width of your sample, and count all elements that fall into each bin), and then find the bin with the most counts. Note that for creating a histogram, you need to pick the number of your bins (or, equivalently, the width of each bin). This can be tricky, as we'll discuss in more detail in the visualization part of this tutorial.
End of explanation
x_var = np.var(x)
x_std = np.std(x)
print("The variance of the r-band magnitude for the galaxies in the sample is %.4f"%x_var)
print("The standard deviation of the r-band magnitude for the galaxies in the sample is %.4f"%x_std)
x_var_multi = np.var(x_multi, axis=0)
x_std_multi = np.std(x_multi, axis=0)
print("The variance of the for all three bands for the galaxies in the sample is "+str(x_var_multi))
print("The standard deviation for all three bands for the galaxies in the sample is " + str(x_std_multi))
Explanation: Mean, median and mode all tell us something about the center of the sample. However, it tells us nothing about the spread of the sample: are most values close to the mean (high precision) or are they scattered very far (low precision)?
For this, we can use the variance, the squared deviations from the mean:
$s^2_x = \frac{1}{n-1}\sum_{i=1}^{n}{(x_i - \bar{x})^2}$
or its square root, the standard deviation: $s_x = \sqrt{s^2_x}$.
Similarly to the mean, there are functions in numpy for the mean and the standard deviation, and again, it is possible to specify the axis along which to compute either:
End of explanation
se = np.sqrt(np.var(x)/x.shape[0])
print("The standard error on the mean for the r-band galaxy magnitudes is %.4f"%se)
Explanation: Note: If your data set contains NaN values, you can use the functions nanmean, nanvar and nanstd instead, which will compute the mean, variance and standard deviation, respectively, while ignoring any NaN values in your data.
What is the error on the mean? That depends on the size of your sample! Imagine you pick many samples from the population of possible outcomes. If each sample has a large number of elements, then you are more likely to find similar means for each of your samples than if the sample size is small.
Given that we might not have many samples from the population (but perhaps only one SDSS data set!), we'd like to quantify how well we can specify the mean. We do this by dividing the variance by the number of data points and taking the square root to get the standard error:
End of explanation
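# Quick demonstration of the nan-aware versions mentioned above:
x_with_nan = np.array([1.0, 2.0, np.nan, 4.0])
print(np.mean(x_with_nan))     # nan, a single missing value spoils the result
print(np.nanmean(x_with_nan))  # NaNs are ignored
print(np.nanstd(x_with_nan))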
from scipy.stats.mstats import mquantiles
q = mquantiles(x, prob=[0.25, 0.5, 0.75])
print("The 0.25, 0.5 and 0.75 of the r-band galaxy magnitudes are " + str(q))
Explanation: Mean and variance give us general information about the center and the spread of the sample, but tell us nothing about the shape.
If we want yet more information about how the data are distributed, we'll have to introduce yet another concept: the quantile the $\alpha$-quantile of a sample is the point below which a fraction $\alpha$ of the data occur.
For example, the $0.25$-quantile is the point below which $25\%$ of the sample is located. Note that the 0.5-quantile is the median.
The 0.25, 0.5 and 0.75 quantiles are also called the first, second and third quantile, respectively. The difference between the 0.25 and 0.75 quantile are called the interquartile range (remember that! We'll come back to it!).
This time, there's no nifty numpy function (that I know of) that we can use. Instead, we'll turn to scipy instead, which contains much functionality for statistics:
End of explanation
def tukey_five_number(x):
x_min = np.min(x)
x_max = np.max(x)
q = mquantiles(x, prob=[0.25, 0.5, 0.75])
return np.hstack([x_min, q, x_max])
print("The Tukey five-number summary of the galaxy r-band magnitudes is: " + str(tukey_five_number(x)))
Explanation: Using this, we can now compute the Tukey five-number summary: a collection of five numbers that contains first quartil, median, second quartile as well as the minimum and maximum values in the sample. Together, they give a reasonably good first impression of how the data are distributed:
End of explanation
df.describe()
Explanation: Finally, if you're working with a pandas DataFrame, you can use the method describe to print some statistical summaries as well:
End of explanation
import scipy.stats
## some theoretical prediction
mu = 16.8
## compute the t-statistic.
t = scipy.stats.ttest_1samp(x, mu)
t_statistic = t.statistic
p_value = t.pvalue
print("The t-statistic for the galaxy r-band magnitude is %.4f"%t_statistic)
print("The p-value for that t-statistic is " + str(p_value))
Explanation: Let's think more about the mean of a sample. Imagine we actually have a reasonably good idea of what the mean should be. We might then want to ask whether the sample we collected is consistent with that predetermined theoretical value. To do that, we can take the difference between the sample mean and the theoretical value, but we'll need to normalize it by the standard error. After all, the sample mean could be far from the theoretical prediction, but if the spread in the sample is large, that might not mean much.
$t = \frac{\bar{x} - \mu}{\sqrt{(s_x^2/n)}}$
This number is also called the Student t-statistic for univariate data.
This is, of course, not hard to write down in python with the functions for mean and variance we've learned above, but as above, we can make use of functionality in scipy to achieve the same result:
End of explanation
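# Writing the t-statistic down by hand, as mentioned above (scipy uses the
# unbiased sample variance, hence ddof=1):
t_manual = (np.mean(x) - mu) / np.sqrt(np.var(x, ddof=1) / len(x))
print("t-statistic computed by hand: %.4f"%t_manual)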
## continuous (normal) distribution
loc = 2.0
scale = 0.4
dist = scipy.stats.norm(loc=loc, scale=scale)
dist
Explanation: scipy.stats.ttest_1samp returns both the t-statistic and the associated two-sided p-value; a small p-value indicates that the sample mean is unlikely to be consistent with the assumed value $\mu$.
Statistical Distributions in scipy
Aside from the two examples above, scipy.stats has a large amount of functionality that can be helpful when exploring data sets.
We will not go into the theory of probabilities and probability distributions here. Some of that, you will learn later this week. But before we start, some definitions:
random variable: technically, a function that maps sample space (e.g. "red", "green", "blue") of some process onto real numbers (e.g. 1, 2, 3)
we will distinguish between continuous random variables (which can take all real values in the support of the distribution) and discrete random variables (which may only take certain values, e.g. integers).
the probability mass function (PMF) for discrete variables maps the probability of a certain outcome to that outcome
in analogy, the probability density function (PDF) for continuous random variables is the probability mass in an interval, divided by the size of that interval (in the limit of the interval size going to zero)
the cumulative distribution function (CDF) at a point x of a distribution is the probability of a random variable X being smaller than x: it translates to the integral (sum) over the PDF (PMF) from negative infinity up to that point x for a continuous (discrete) distribution.
scipy.stats defines a large number of both discrete and continuous probability distributions that will likely satisfy most of your requirements. For example, let's define a standard normal distribution.
End of explanation
s = dist.rvs(100)
print(s)
Explanation: We've now created an object that defines a normal (Gaussian) distribution with a scale parameter of 0.4 and a center of 2. What can we do with this?
For example, draw a random sample:
End of explanation
xcoord = np.linspace(0,4,500)
pdf = dist.pdf(xcoord)
plt.plot(xcoord, pdf)
Explanation: We have sampled 100 numbers from that distribution!
We can also compute the PDF in a given range:
End of explanation
cdf = dist.cdf(xcoord)
plt.plot(xcoord, cdf)
Explanation: or the CDF:
End of explanation
print("distribution mean %.4f"%dist.mean())
print("distribution median %.4f"%dist.median())
print("distribution standard deviation %.4f"%dist.std())
print("distribution variance %.4f"%dist.var())
Explanation: Mean, median, var and std work as methods for the distribution, too:
End of explanation
dist.stats(moments='mvsk')
Explanation: Easier than that, we can get it to just give us the moments of the distribution: the mean, the variance, the skewness and the kurtosis (below, replace 'mvsk' by any combination of those four letters to get it to print any of the four):
End of explanation
dist.moment(1)
Explanation: For computing the $n$th-order non-central moment, use the moment method:
End of explanation
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16,6))
ax1.plot(xcoord, dist.sf(xcoord))
ax1.set_title("Survival function")
ax2.plot(xcoord, dist.ppf(xcoord))
ax2.set_title("Percent point function")
Explanation: For more advanced purposes, we can also compute the survival function (1-CDF) and the percent point function (the inverse of the CDF) of the distribution:
End of explanation
dist.interval(0.6)
Explanation: To compute a confidence interval around the median of the distribution, use the method interval:
End of explanation
normal_data = np.random.normal(4.0, 0.2, size=1000)
data_loc, data_scale = scipy.stats.norm.fit(normal_data)
print("the location and scale parameters of the fit distribution are %.3f and %.3f, respectively."%(data_loc, data_scale))
Explanation: Finally, if you have some data that you think was drawn from the distribution, you may use the fit method to fit the distribution's parameters to the sample:
End of explanation
## set the distribution
dist = scipy.stats.poisson(10)
## make an x-coordinate: the Poisson distribution is only defined
## for non-negative integer numbers!
xcoord = np.arange(0,50,1.0)
## plot the results
plt.scatter(xcoord, dist.pmf(xcoord))
Explanation: Discrete distributions work in exactly the same way, except that instead of a pdf method, they have a pmf method
End of explanation
cov = np.cov(x_multi.T)
print(cov)
Explanation: Exercise: Take the galaxy data we've used above and fit a distribution of your choice (see http://docs.scipy.org/doc/scipy/reference/stats.html for a list of all of them) and compare the parameters of your distribution to the sample mean and variance (if they're comparable).
Multivariate data
Of course, most data sets aren't univariate. As we've seen above, we can use the same functions that we've used to compute mean, median, variance and standard deviation for multi-variate data sets in the same way as for single-valued data, except that we need to specify the axis along which to compute the operation.
However, these functions will compute the mean, variance etc. for each dimension in the data set independently. This unfortunately tells us nothing about whether the data vary with each other, that is, whether they are correlated in any way. One way to look at whether data vary with each other is by computing the covariance. Let's take our slightly expanded galaxy data set (with three colours) from above, and compute the covariance between the three magnitude bands.
Because of the way I've set up the array above, we need to take the transpose of it in order to get the covariance between the bands (and not between the samples!).
End of explanation
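# One possible sketch for the exercise above: fit a normal distribution to the
# r-band magnitudes and compare its parameters to the sample mean and standard
# deviation (note that the fit returns the maximum-likelihood scale, i.e. the
# standard deviation computed with ddof=0).
fit_loc, fit_scale = scipy.stats.norm.fit(x)
print("fitted loc %.3f vs sample mean %.3f"%(fit_loc, np.mean(x)))
print("fitted scale %.3f vs sample std %.3f"%(fit_scale, np.std(x)))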
x_var = np.var(x_multi, axis=0)
print(x_var)
x_var_cov = np.diag(cov)
print(x_var_cov)
Explanation: Note that the matrix is symmetric about the diagonal. Also, the values on the diagonal are just the variance:
End of explanation
r = np.corrcoef(x_multi.T)
print(r)
Explanation: To compute the actual correlation between two samples, compute the correlation coefficient, which is the ratio of sample covariance and the product of the individual standard deviations:
$s_{xy} = \frac{1}{n-1}\sum{(x_i - \bar{x})(y_i - \bar{y})}$ ;
$r = \frac{s_{xy}}{s_x s_y}$
End of explanation
t = scipy.stats.ttest_ind(x_multi[:,0], x_multi[:,1])
t_stat = t.statistic
t_pvalue = t.pvalue
print("The t-statistic for the two bands is %.4f"%t_stat)
print("The p-value for that t-statistic is " + str(t_pvalue))
Explanation: The correlation coefficient can range between -1 and 1. If two samples are only offset by a scaling factor (perfect correlation), then $r = 1$. $r = 0$ implies that there is no correlation.
Another way to compare two samples is to compare the means of the two samples. To do this, we can use a generalization of the Student's t-test above to higher dimensions for two samples $x$ and $y$:
$t = \frac{\bar{x} - \bar{y}}{\sqrt{\left(\frac{s_x^2}{n} + \frac{s_y^2}{n}\right)}}$.
In python:
End of explanation
ks = scipy.stats.kstest(x, scipy.stats.norm(np.mean(x), np.var(x)).cdf)
print("The KS statistic for the sample and a normal distribution is " + str(ks.statistic))
print("The corresponding p-value is " + str(ks.pvalue))
Explanation: Note that the t-statistic tests whether the means of the two samples are the same. We can also test whether a sample is likely to be produced by a reference distribution (single-sample Kolmogorov-Smirnov test) or whether two samples are produced by the same, underlying (unknown) distribution (2-sample Kolmogorov-Smirnov test).
The one-sample KS-test (test sample against a reference distribution) can take, for example, the cdf method of any of the distributions defined in scipy.stats.
End of explanation
ks = scipy.stats.ks_2samp(x_multi[:,0], x_multi[:,1])
print("The KS statistic for the u and g-band magnitudes is " + str(ks.statistic))
print("The corresponding p-value is " + str(ks.pvalue))
Explanation: Analogously, for the 2-sample KS-test:
End of explanation
def straight(x, a, b):
return a*x+b
Explanation: There are many more statistical tests to compare distributions and samples, too many to showcase them all here. Some of them are implemented in scipy.stats, so I invite you to look there for your favourite test!
Linear Regression
Perhaps you've found a high correlation coefficient in your data. Perhaps you already expect some kind of functional dependence of your (bivariate) data set.
In this case, linear regression comes in handy. Here, we'll only give a very brief overview of how to do it (and introduce you to scipy.optimize). Later in the week, you'll learn how to do parameter estimation properly.
Doing simple linear regression in python requires two steps: one, you need a linear (in the coefficients) model, and a way to optimize the parameters with respect to the data.
Here's where scipy.optimize comes in really handy. But first, let's build a model:
End of explanation
import scipy.optimize
x = np.array(galaxies["r"])
y = np.array(galaxies["g"])
popt, pcov = scipy.optimize.curve_fit(straight, x, y, p0=[1.,2.])
Explanation: Now we'll take two variables, in this case the g- and the r-band magnitudes, and use curve_fit from scipy.optimize to do linear regression:
End of explanation
print("The best fit slope and intercept are " + str(popt))
print("The covariance matrix of slope and intercept is " + str(pcov))
Explanation: curve_fit returns first the best-fit parameters, and also the covariance matrix of the parameters:
End of explanation
fig = plt.figure(figsize=(10,6))
ax = fig.add_subplot(111)
ax.scatter(x,y, color="orange", alpha=0.7)
ax.plot(x, straight(x, *popt), color="black")
ax.set_xlabel("r-band magnitude")
ax.set_ylabel("g-band magnitude")
Explanation: Let's plot this to see how well our model's done!
End of explanation
## make up some heteroscedastic errors:
yerr = np.random.normal(size=y.shape[0])*y/10.
popt, pcov = scipy.optimize.curve_fit(straight, x, y, sigma=yerr)
fig = plt.figure(figsize=(10,6))
ax = fig.add_subplot(111)
ax.errorbar(x,y, fmt="o", yerr=yerr, color="orange", alpha=0.7)
ax.plot(x, straight(x, *popt), color="black")
ax.set_xlabel("r-band magnitude")
ax.set_ylabel("g-band magnitude")
Explanation: Note that standard linear regression assumes a model that's strictly linear in the coefficients (that is, for example a power law of the type $f(x) = ax^b$ wouldn't be), as well as errors on the data that are independent and identically distributed (they are homoscedastic). curve_fit allows for non-linear models as well as heteroscedastic errors:
End of explanation
import scipy.special

logmin = -10000000000.0
class PoissonLikelihood(object):
def __init__(self, x, y, func):
        """Set up the model"""
self.x = x
self.y = y
self.func = func
def loglikelihood(self, pars, neg=True):
        """Set up a simple Poisson likelihood"""
m = self.func(self.x, *pars)
logl = np.sum(-m + self.y*np.log(m) - scipy.special.gammaln(self.y + 1.))
## catch infinite values and NaNs
if np.isinf(logl) or np.isnan(logl):
logl = logmin
## can choose whether to return log(L) or -log(L)
if neg:
return -logl
else:
return logl
def __call__(self, pars, neg=True):
return self.loglikelihood(pars, neg)
loglike = PoissonLikelihood(x, y, straight)
ll = loglike([1., 3.], neg=True)
print("The log-likelihood for our trial parameters is " + str(ll))
Explanation: The errors that I made up for the example above depend on the value of each data point. Therefore, for a higher r-band magnitude, the error will be larger, too.
Note that curve_fit still assumes your measurement errors are Gaussian, which will allow you to use (generalized) least-squares to do the optimization. If this is not true, you will need to define your own likelihood. The likelihood is the probability of obtaining a data set under the assumption of a model and some model parameters. What that means in detail we'll worry about more on Thursday and Friday. Today, I'll just give you an example of how to set up a likelihood (not the only one, mind you) and use other methods in scipy.optimize to do the minimization:
End of explanation
res = scipy.optimize.minimize(loglike, [1, 3], method="L-BFGS-B")
Explanation: In practice, we want to maximize the likelihood, but optimization routines always minimize a function. Therefore, we actually minimize the negative log-likelihood (which is why the likelihood class above returns -log(L) when neg=True).
With the class we've set up above, we can now put this into scipy.optimize.minimize. This function provides a combined interface to the numerous optimization routines available in scipy.optimize. Some of these allow for constrained optimization (that is, optimization where the possible parameter range is restricted), others don't. Each optimization routine may have its own keyword parameters. I invite you to look at the documentation here http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html for more details.
End of explanation
res
print("Was the optimization successful? " + str(res.success))
print("The best-fit parameters are: " + str(res.x))
print("The likelihood value at the best-fit parameters is: " + str(res.fun))
Explanation: The output will be an object that contains the necessary information, for example the best-fit parameters, the covariance matrix (for some, not all methods), a status message describing whether the optimization was successful, and the value of the likelihood function at the end of the optimization:
End of explanation |
172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Definition of model variables
Model domain / grid parameters
Step1: Definition of wave parameters
Step2: Definition of sediment parameters
Step3: Model initialisation function
Loading the bathymetry file
Step4: Defining wave region computational grid
Step5: Initialising wave boundary conditions
Step6: Waves evolution
The waves are then transformed from deep to shallow water assuming shore-parallel depth contours. The orientation of wave fronts is determine by wave celerity and refraction due to depth variations.
Travel time in the domain is calculated from Huygen's principle (using an order $\sqrt{5}$ approximation).
Assuming no refraction or loss of energy due to bottom friction, wave power P is conserved from deep to shallow water.
Step7: Visualisation of wave characteristics
Step8: Sediment entrainment, transport & deposition
Sediment entrainment relates to wave induced shear stress. The transport is computed according to both
wave direction and longshore transport. Deposition is dependent of shear stress and diffusion.
Step9: Visualisation of sediment transport characteristics
Step10: Saving wave/sedimentation data | Python Code:
file1='../data/gbr_south.csv'
file2='../data/topoGBR1000.csv'
# Bathymetric filename
bfile = file1
# Resolution factor
rfac = 4
Explanation: Definition of model variables
Model domain / grid parameters
End of explanation
# Wave heights (m)
H0 = 2
# Define wave source direction at boundary
# (angle in degrees counterclock wise from horizontal axis)
dir = 300
# Maximum depth for wave influence (m)
wbase = 20
# Sea level position (m)
slvl = 0.
Explanation: Definition of wave parameters
End of explanation
# Mean grain size diameter in m
d50 = 0.0001
# Steps used to perform sediment transport
tsteps = 1000
# Steps used to perform sediment diffusion
dsteps = 1000
Explanation: Definition of sediment parameters
End of explanation
#help(ocean.wave.__init__)
wavesed = ocean.wave(filename=bfile,wavebase=wbase,resfac=rfac,dia=d50)
Explanation: Model initialisation function
Loading the bathymetry file
End of explanation
t0 = time.clock()
wavesed.findland(slvl)
print 'Wave region computation took (s):',time.clock()-t0
Explanation: Defining wave region computational grid
End of explanation
wdir = wavesed.wavesource(dir)
Explanation: Initialising wave boundary conditions
End of explanation
#help(wavesed.cmptwaves)
t0 = time.clock()
wavesed.cmptwaves(src=wdir, h0=H0, sigma=1.)
print 'Wave parameters computation took (s): ',time.clock()-t0
Explanation: Waves evolution
The waves are then transformed from deep to shallow water assuming shore-parallel depth contours. The orientation of wave fronts is determined by wave celerity and refraction due to depth variations.
Travel time in the domain is calculated from Huygens' principle (using an order $\sqrt{5}$ approximation).
Assuming no refraction or loss of energy due to bottom friction, wave power P is conserved from deep to shallow water.
End of explanation
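# For context (a standard linear wave theory reminder, not part of the library
# calls here): the celerity follows from the dispersion relation
#     omega**2 = g * k * tanh(k * h)
# so waves slow down as the water depth h decreases, which is what drives the
# refraction of the wave fronts computed by cmptwaves().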
#help(wavesed.plotData)
size = (15,30)
i1 = 0
i2 = -1
j1 = 0
j2 = -1
# Zooming to a specific region
# i1 = 170
# i2 = 260
# j1 = 0
# j2 = 70
wavesed.plotData(data='bathy', figsize=size, vmin=0, vmax=0,
fontsize=10, imin=i1, imax=i2, jmin=j1, jmax=j2)
wavesed.plotData(data='travel', figsize=size, tstep=400, vmin=0, vmax=0,
fontsize=10, imin=i1, imax=i2, jmin=j1, jmax=j2)
wavesed.plotData(data='wcelerity', figsize=size, vmin=0, vmax=15,
fontsize=10, stream=3, imin=i1, imax=i2, jmin=j1, jmax=j2)
wavesed.plotData(data='ubot', figsize=size, vmin=0, vmax=2,
fontsize=10, imin=i1, imax=i2, jmin=j1, jmax=j2)
wavesed.plotData(data='shear', figsize=size, vmin=-0.5, vmax=0.5,
fontsize=10, imin=i1, imax=i2, jmin=j1, jmax=j2)
Explanation: Visualisation of wave characteristics
End of explanation
#help(wavesed.cmptsed)
t0 = time.clock()
wavesed.cmptsed(sigma=1.,tsteps=tsteps,dsteps=dsteps)
print 'Sediment erosion/deposition computation took (s): ',time.clock()-t0
Explanation: Sediment entrainment, transport & deposition
Sediment entrainment relates to wave-induced shear stress. The transport is computed according to both
wave direction and longshore transport. Deposition is dependent on shear stress and diffusion.
End of explanation
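For reference, the link between orbital velocity, bed shear stress and entrainment is often expressed through a Shields-type criterion; the snippet below is illustrative only (assumed friction factor and critical Shields number, not the wavesed parameterisation):
# Illustrative sketch only; constants are assumptions, not the wavesed formulation.
import numpy as np
rho_w, rho_s = 1025., 2650.    # water / sediment density (kg/m3)
g_acc, d = 9.81, 0.0001        # gravity, grain diameter d50 (m)
fw = 0.03                      # assumed wave friction factor
theta_cr = 0.047               # assumed critical Shields parameter
ub = np.array([0.1, 0.5, 1.0, 2.0])            # near-bed orbital velocities (m/s)
tau = 0.5 * rho_w * fw * ub**2                 # wave-induced bed shear stress (Pa)
theta = tau / ((rho_s - rho_w) * g_acc * d)    # Shields parameter
print(tau)
print(theta > theta_cr)                        # True where sediment is entrained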
#help(wavesed.plotData)
size = (10,20)
# i1 = 0
# i2 = -1
# j1 = 0
# j2 = -1
# Zooming to a specific region
i1 = 600
i2 = 1200
j1 = 0
j2 = 500
wavesed.plotData(data='fbathy', figsize=size, vmin=0, vmax=0,
fontsize=10, stream=0, imin=i1, imax=i2, jmin=j1, jmax=j2)
wavesed.plotData(data='erodep', figsize=size, vmin=-2., vmax=2.,
fontsize=10, stream=0, imin=i1, imax=i2, jmin=j1, jmax=j2)
Explanation: Visualisation of sediment transport characteristics
End of explanation
#waveparams.outputCSV(filename='erodep.csv', seddata=1)
Explanation: Saving wave/sedimentation data
End of explanation |
173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using FISSA with Suite2p
suite2p is a blind source separation toolbox for cell detection and signal extraction.
Here we illustrate how to use suite2p to detect cell locations, and then use FISSA to remove neuropil signals from the ROI signals.
The suite2p parts of this tutorial are based on their Jupyter notebook example.
Note that the below results are not representative of either suite2p or FISSA performance, as we are using a very small example dataset.
Reference
Step1: Run suite2p
Step2: Load the relevant data from the analysis
Step3: Run FISSA with the defined ROIs and data
Step4: Plot the resulting ROI signals | Python Code:
# FISSA toolbox
import fissa
# suite2p toolbox
import suite2p
# For plotting our results, use numpy and matplotlib
import matplotlib.pyplot as plt
import numpy as np
Explanation: Using FISSA with Suite2p
suite2p is a blind source separation toolbox for cell detection and signal extraction.
Here we illustrate how to use suite2p to detect cell locations, and then use FISSA to remove neuropil signals from the ROI signals.
The suite2p parts of this tutorial are based on their Jupyter notebook example.
Note that the below results are not representative of either suite2p or FISSA performance, as we are using a very small example dataset.
Reference:
Pachitariu, M., Stringer, C., Dipoppa, M., Schröder, S., Rossi, L. F., Dalgleish, H., Carandini, M. & Harris, K. D. (2017). Suite2p: beyond 10,000 neurons with standard two-photon microscopy. bioRxiv: 061507; doi: 10.1101/061507.
Import packages
End of explanation
# Set your options for running
ops = suite2p.default_ops() # populates ops with the default options
# Provide an h5 path in 'h5py' or a tiff path in 'data_path'
# db overwrites any ops (allows for experiment specific settings)
db = {
"h5py": [], # a single h5 file path
"h5py_key": "data",
"look_one_level_down": False, # whether to look in ALL subfolders when searching for tiffs
"data_path": ["exampleData/20150529"], # a list of folders with tiffs
# (or folder of folders with tiffs if look_one_level_down is True,
# or subfolders is not empty)
"save_path0": "./", # save path
"subfolders": [], # choose subfolders of 'data_path' to look in (optional)
"fast_disk": "./", # specify where the binary file will be stored (should be an SSD)
"reg_tif": True, # save the motion corrected tiffs
"tau": 0.7, # timescale of gcamp6f
"fs": 1, # sampling rate
"spatial_scale": 10, # rough guess of spatial scale cells
"batch_size": 288, # length in frames of each trial
}
# Run one experiment
opsEnd = suite2p.run_s2p(ops=ops, db=db)
Explanation: Run suite2p
End of explanation
# Extract the motion corrected tiffs (make sure that the reg_tif option is set to true, see above)
images = "./suite2p/plane0/reg_tif"
# Load the detected regions of interest
stat = np.load("./suite2p/plane0/stat.npy", allow_pickle=True) # cell stats
ops = np.load("./suite2p/plane0/ops.npy", allow_pickle=True).item()
iscell = np.load("./suite2p/plane0/iscell.npy", allow_pickle=True)[:, 0]
# Get image size
Lx = ops["Lx"]
Ly = ops["Ly"]
# Get the cell ids
ncells = len(stat)
cell_ids = np.arange(ncells) # assign each cell an ID, starting from 0.
cell_ids = cell_ids[iscell == 1] # only take the ROIs that are actually cells.
num_rois = len(cell_ids)
# Generate ROI masks in a format usable by FISSA (in this case, a list of masks)
rois = [np.zeros((Ly, Lx), dtype=bool) for n in range(num_rois)]
for i, n in enumerate(cell_ids):
# i is the position in cell_ids, and n is the actual cell number
ypix = stat[n]["ypix"][~stat[n]["overlap"]]
xpix = stat[n]["xpix"][~stat[n]["overlap"]]
rois[i][ypix, xpix] = 1
Explanation: Load the relevant data from the analysis
End of explanation
output_folder = "fissa_suite2p_example"
experiment = fissa.Experiment(images, [rois[:ncells]], output_folder)
experiment.separate()
Explanation: Run FISSA with the defined ROIs and data
End of explanation
# Fetch the colormap object for Cynthia Brewer's Paired color scheme
cmap = plt.get_cmap("Paired")
# Select which trial (TIFF index) to plot
trial = 0
# Plot the mean image and ROIs from the FISSA experiment
plt.figure(figsize=(7, 7))
plt.imshow(experiment.means[trial], cmap="gray")
XLIM = plt.xlim()
YLIM = plt.ylim()
for i_roi in range(len(experiment.roi_polys)):
# Plot border around ROI
for contour in experiment.roi_polys[i_roi, trial][0]:
plt.plot(
contour[:, 1],
contour[:, 0],
color=cmap((i_roi * 2 + 1) % cmap.N),
)
# ROI co-ordinates are half a pixel outside the image,
# so we reset the axis limits
plt.xlim(XLIM)
plt.ylim(YLIM)
plt.show()
# Plot all ROIs and trials
# Get the number of ROIs and trials
n_roi = experiment.result.shape[0]
n_trial = experiment.result.shape[1]
# Find the maximum signal intensities for each ROI
roi_max_raw = [
np.max([np.max(experiment.raw[i_roi, i_trial][0]) for i_trial in range(n_trial)])
for i_roi in range(n_roi)
]
roi_max_result = [
np.max([np.max(experiment.result[i_roi, i_trial][0]) for i_trial in range(n_trial)])
for i_roi in range(n_roi)
]
roi_max = np.maximum(roi_max_raw, roi_max_result)
# Plot our figure using subplot panels
plt.figure(figsize=(16, 2.5 * n_roi))
for i_roi in range(n_roi):
for i_trial in range(n_trial):
# Make subplot axes
i_subplot = 1 + i_roi * n_trial + i_trial
plt.subplot(n_roi, n_trial, i_subplot)
# Plot the data
plt.plot(
experiment.raw[i_roi][i_trial][0, :],
label="Raw (suite2p)",
color=cmap(i_roi * 2 % cmap.N),
)
plt.plot(
experiment.result[i_roi][i_trial][0, :],
label="FISSA",
color=cmap((i_roi * 2 + 1) % cmap.N),
)
# Labels and boiler plate
plt.ylim([-0.05 * roi_max[i_roi], roi_max[i_roi] * 1.05])
if i_trial == 0:
plt.ylabel(
"ROI {}\n\nSignal intensity\n(candela per unit area)".format(i_roi)
)
if i_roi == 0:
plt.title("Trial {}".format(i_trial + 1))
plt.legend()
if i_roi == n_roi - 1:
plt.xlabel("Time (frame number)")
plt.show()
Explanation: Plot the resulting ROI signals
End of explanation |
174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EITC Housing Benefits
In this notebook, we try to estimate the cost of a hypothetical EITC program to aid with rental housing payments. The aim is to supply a family with enough EITC to have them pay no more than 30% of their income for housing. Namely, the algorithm we implement is as follows
Step2: Caveat
Step3: Fair Housing Share
EITC data contain the amount of EITC benefits a family receives for a given income level. The goal here is to get a family's total income and what would be considered a fair share of income for housing. This is where we perform following aforementioned calculation
Step4: Mapping children to bedrooms
In order to estimate relate FMR data to family types, we must map the number of bedrooms to the number of children and marriage. Here, we assume that marriage does not make a difference, thus, the mapping only varies on the number of children in the family. Presently, the following is implemented
Step5: Calculate amount of EITC housing aid each family type in each metro receives
Step6: Calculate families of each type in each income bin
Step7: Calculate total cost
Step8: Supplemental Questions
varying parameters on original sim
number metros where families earning 52,400 recieve EITC housing aid
what if no one receives more aid than the median income earning family
Varying parameters
Step9: Test various cases
Step11: Capping EITC benefits at the city median income
Step12: Get Number of metros where families earn 52,400
Step13: Make some additional output tables
20 top and bottom cities that contribute most in terms of cost
Step14: A metric of EITC housing recipients. Using indexers defined at the top | Python Code:
##Load modules and set data path:
import pandas as pd
import numpy as np
import numpy.ma as ma
import re
data_path = "C:/Users/SpiffyApple/Documents/USC/RaphaelBostic/EITChousing"
output_container = {}
#################################################################
################### load tax data ###############################
#upload tax data
tx_o = pd.read_csv("/".join([data_path, "metro_TY13.csv"]))
#there is some weird formatting in the data due to excel -> fix:
tx_o['cbsa'] = tx_o.cbsa.apply(lambda x: re.sub("=|\"", "",x))
#numerous numerical column entries have commas -> fix:
tx_o.iloc[:,3:] = tx_o.iloc[:,3:].replace(",", "",regex=True).astype(np.int64)
#set the cbsa as the dataframe index:
tx_o.set_index('cbsa', inplace=True)
#for convenience, extract cols corresponding with number of ppl filed for EITC in each income bracket
eagi_cols = tx_o.columns[tx_o.columns.str.contains("EAGI")]
#cut the 50-60 bin number in half according to assumptions
tx_o['EAGI50_13'] = tx_o['EAGI50_13']/2
#tx_o['EAGI40_13'] = tx_o['EAGI40_13']/2
Explanation: EITC Housing Benefits
In this notebook, we try to estimate the cost of a hypothetical EITC program to aid with rental housing payments. The aim is to supply a family with enough EITC to have them pay no more than 30% of their income for housing. Namely, the algorithm we implement is as follows:
Let $y_i$ denote the income of family $i$. Furthermore, suppose that family $i$ has characteristics married $s \in \{0,1\}=S$ and children $c \in \{0,\dots, 3\}=C$. For convenience, let the family type $t\in C \times S$. Suppose $y_i$ is such that it qualifies family $i$ for $e_i = e(y_i,s,c)$ amount of EITC benefits. Finally, suppose that family $i$ lives in location $l$ and pays rent $r$ for a $b\in \{0,\dots,4\}$ bedroom housing unit. Then, under our scheme, family $i$ will receive additional EITC housing benefits $h$ of:
$$h_{l,t,y} = \max\left(r_{l,b} - 0.3\,(y_i+e_i(y_i,s,c)),\; 0 \right)$$
Data
We have 3 datasets containing the following information:
Number of EITC recipients in 356 metros per income bracket.
Number of EITC recipients in 356 metros per number of children.
Amount of EITC benefits one receives for various income levels and family characteristics (i.e., marriage and number of children).
The FMRs in 356 metros per number of bedrooms of the housing unit.
Assumptions
Two datasets, taxes and FMRs, are merged based on their CBSA codes, but not all of the codes match. Namely, the tax data contains 381 metros but only 351 of them match to the FMR data. We are able to match an additional 5 CBSA codes for a total of 356 metros. However, there is no guarantee that the additional 5 metros match precisely with the metros of the Brookings data. For example, the FMR data contain the CBSA 'Santa Barbara-Santa Maria-Goleta' while the tax data has 'Santa Maria-Santa Barbara'. Clearly, the FMR area is larger, but in adding the 5 metros we disregard this discrepancy.
We have a distribution of filers' incomes in each metro area. The distributions are binned in increments of \$5k up to \$40k and then at \$10k increments. Since we only have the number of people who filed within each bin, we assume a uniform distribution of incomes in each bin and use the bin mean as the representative income for that bin. However, the highest income for which EITC benefits are given is \$52,400, yet we must rely on the \$55k bin mean. The latter will overestimate the income and number of EITC recipients in that bin. One proposal to mitigate this is to divide the \$50-\$60k bin in half and to use \$52,500 as the median and half the number of \$50-\$60k filers as the filers.
We do not have any figures on marriage but need to differentiate between married and unmarried families. Consequently, we employ 1 - 50.2% = 49.8% (a figure from a Christian Science article) as the proportion of adults who are married. In doing so, we must assume that marriage rates do not vary across the income distribution, number of children, or metros.
Both FMR and EITC data are for year 2014 whereas the tax data is for year 2013. Thus, we assume that FMRs and EITC benefits do not differ drastically between 2013 and 2014.
We only have numbers aggregated based on children and based on income bins but not across both; therefore, to estimate the proportion of families with each child type in each income bin, we assume that the number of children a family has does not vary across the income distributions.
Consider the following variables:
the number $n_{c,l}$ of EITC-eligible families in metro $l$ with number of children $c\in \{0,1,2,3+\}$
the number $m_{b,l}$ of EITC filers in each metro $l$ and income bin $b \in \{1,2,\dots, 10\}$
The tax data contains two sets of numbers: 1) the number $n_l = \sum_{c} n_{c,l}$ of EITC-eligible people in each metro for each family type, and 2) the number $m_l = \sum_{b} m_{b,l}$ of families who filed for EITC in each income bracket. Note $n_l \geq m_l \quad \forall l$. To acquire the number of family types in each income bracket, we calculate
$$p_{l,c} = \frac{n_{l,c}}{n_{l}}$$
then compute
$$k_{c,l,b} = p_{l,c} \times m_{b,l}$$
where $k_{c,l,b}$ is the number of families with children $c$ in income bin $b$ in metro $l$. Of course, for this we must assume that the proportions among the eligible hold equally among those who actually file. Finally, let $T(p_{l,c})$ be the total cost calculated using eligible counts and let $T(p_{l,b})$ be the total cost calculated using bin proportions; then
$$T(p_{l,c}) \geq T(p_{l,b})$$
Unlike in the tax data, in which all cbsa codes are unique, the FMR data sometimes contains multiple entries for the same cbsa, albeit over different jurisdictions. Most of the repeats of cbsa codes in the FMR data are also duplicates but, on occasion, there are deviations within a particular cbsa code such as NY - 35620. To give some weight to differentials within cbsa FMR entries, we take the mean within duplicated cbsa codes.
End of explanation
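As a quick sanity check on the two formulas above, here is a toy worked example (all numbers are hypothetical and not taken from the data):
# Hypothetical numbers for illustration only.
annual_rent = 9600.                     # e.g. an $800/month two-bedroom FMR
income, eitc_benefit = 20000., 3000.
housing_aid = max(annual_rent - 0.3 * (income + eitc_benefit), 0.)
print(housing_aid)                      # 9600 - 6900 = 2700
# allocating the filers of one income bin across child counts via p_{l,c} * m_{b,l}
n_children = {'0 kids': 400., '1 kid': 300., '2 kids': 200., '3+ kids': 100.}
n_total = sum(n_children.values())
m_bin = 250.                            # hypothetical filers in one income bin
k = {c: n / n_total * m_bin for c, n in n_children.items()}
print(k)                                # e.g. 100 families with 0 kids in this bin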
#################################################################
###################### Read EITC data ###########################
##parse the EITC data:
eitc = pd.ExcelFile("/".join([data_path, "EITC Calculator-2-14.xlsx"]))
sheets = eitc.sheet_names
eitc.close()
eitc_dict = pd.read_excel("/".join([data_path, "EITC Calculator-2-14.xlsx"]), sheetname = sheets[9:17], skiprows = 14 )
eitc_o = pd.concat(eitc_dict)
#################################################################
################### Process eitc_o data ###########################
eitc_o = eitc_o.iloc[:,[0,40]]
eitc_o.dropna(inplace=True)
eitc_o = eitc_o.loc[eitc_o[2014]>0,:]
eitc_o.reset_index(level=0, inplace=True)
eitc_o.reset_index(drop=True, inplace=True)
#eitc_o['num_kids'] = eitc_o.level_0.str.extract("(\d)", expand=False)
#eitc_o['married'] = eitc_o.level_0.str.contains("Married").astype(int)
#eitc_o.to_csv("./EITC_benefits.csv")
#################################################################
################# Read & process fmr data ######################
#read in fmr data
fmr_o1 = pd.read_excel("/".join([data_path, "FY2014_4050_RevFinal.xls"]))
#drop non Metro areas:
fmr_o1 = fmr_o1[fmr_o1.Metro_code.str.contains("METRO")]
#extract cbsa code:
fmr_o1['cbsa'] = fmr_o1.Metro_code.str.extract("RO(\d{4,5})[MN]", expand=False)
#edit FMR CBSA codes to match tax CBSA codes:
cbsa_chng_map = {'14060':'14010', '29140':'29200', '31100':'31080', '42060':'42200', '44600':'48260'}#, '36061':'35620'}
fmr_o1.cbsa.replace(cbsa_chng_map, inplace=True)
#fetch columns that pertain to FMR figures
fmr_cols = fmr_o1.columns[fmr_o1.columns.str.contains("fmr\d")]
#clean up the area names by removing "MSA and HUD Metro FMR Area"
fmr_o1['Areaname'] = fmr_o1.Areaname.apply(lambda x: re.sub(" MSA| HUD Metro FMR Area", "", x))
#drop duplicates based on cbsa code:
#fmr_o = fmr_o.drop_duplicates('cbsa')
fmr_o2 = fmr_o1.groupby("cbsa").mean()[fmr_cols]
fmr_o = pd.concat([fmr_o2, fmr_o1[['cbsa', 'Areaname']].drop_duplicates("cbsa").set_index("cbsa")],axis=1)
#set an interpratable index
#fmr_o.set_index("cbsa", inplace=True)
#subset to only matching cbsa codes between tax and fmr data
common_cbsa = fmr_o.index.intersection(tx_o.index)
fmr_o = fmr_o.loc[common_cbsa]
tx_o = tx_o.loc[common_cbsa]
fmr_o[fmr_cols] = (fmr_o[fmr_cols])*12
print("The number of CBSAs between tax and fmr data matches?:", fmr_o.shape[0] == tx_o.shape[0])
######################################
##0. Define function to calculate eitc
######################################
def calc_haus_eitc(group, income, fmr):
    '''
    INPUT:
    1. group - (pd.df) subset of data frame by family type
    2. income - (int64) total income the family earns
    OUTPUT:
    1. aid - (pd.Series) a series containing eitc housing aid for each family type for a given income
    DESCRIPTION:
    The function is basically max(r - (y+e)*.3, 0), but if we are at income levels that don't qualify
    a family type for EITC benefits then we need to output a corresponding vector of NaN values so that
    the groupby operation doesn't error out on its concatenation step.
    '''
details = group[group['Nominal Earnings'] == income]
if details.shape[0] > 0:
aid = fmr[details.r_type]-details.haus_share.iloc[0]
aid[aid<0] = 0
else:
aid = pd.DataFrame(np.array([np.nan]*fmr.shape[0]))
aid.index = fmr.index
#aid.columns = [group.r_type.iloc[0]]
aid.columns = ['aid']
return(aid)
Explanation: Caveat:
The cell below is the longest running part of this code because we must interact with excel, individually load 9 sheets, and then concatenate them into a single dataset.
End of explanation
#calculate fair share of income for housing
eitc = eitc_o[['Nominal Earnings','level_0', 2014]].copy()
eitc['total_income'] = eitc['Nominal Earnings']+eitc[2014]
percent_of_income=.3
eitc['haus_share'] = eitc.total_income*percent_of_income
#remove "Nominal" from family description.
eitc.level_0.replace(", Nominal", "", regex=True, inplace=True)
######################################
##I. Make a vector of income bin means
######################################
min_earn = 2500
mid_earn = 37500
step = 5000
income_vect = np.linspace(min_earn,mid_earn,((mid_earn-min_earn)/step+1))
add_vect = [45000,52400]
income_vect = np.concatenate([income_vect, add_vect])
Explanation: Fair Housing Share
EITC data contain the amount of EITC benefits a family receives for a given income level. The goal here is to get a family's total income and what would be considered a fair share of income for housing. This is where we perform the aforementioned calculation:
$$\text{total income} = y_i + e_i(s,c)$$
$$\text{housing share} = 0.3 \times \text{total income}$$
End of explanation
##Map FMR data to EITC data (Variable)
#assigned bedrooms to child counts
repl_dict ={'Married, 0 Kid':'fmr1', 'Married, 1 Kid':'fmr2', 'Married, 2 Kids':'fmr2',
'Married, 3 Kids':'fmr3', 'Single, 0 Kid':'fmr1', 'Single, 1 Kid':'fmr2',
'Single, 2 Kids':'fmr2', 'Single, 3 Kids':'fmr3'}
eitc['r_type'] = eitc.level_0.replace(repl_dict)
haus_share = eitc[['level_0', 'haus_share', 'r_type', 'Nominal Earnings']]
#reformat monthly fmr to annual cost of rent
fmr_discount = 1
Explanation: Mapping children to bedrooms
In order to relate FMR data to family types, we must map the number of bedrooms to the number of children and marital status. Here, we assume that marriage does not make a difference; thus, the mapping only varies with the number of children in the family. Presently, the following is implemented:
$$
\begin{array}{ccc}
\text{num of children} & & \text{num of bedrooms} \\
0 & \to & 1 \\
1 & \to & 2 \\
2 & \to & 2 \\
3+ & \to & 3
\end{array}
$$
End of explanation
################################################
##II. Group by family type and loop over incomes
################################################
groups = haus_share.groupby(by = 'level_0')
#place eachincome is placed into a dictionary entry
aid_incomes = {}
for income in income_vect:
aid_per_income = groups.apply(calc_haus_eitc, income=income, fmr=fmr_o*fmr_discount)
aid_incomes[income] = aid_per_income.unstack(level=0)
#concatenate dictionary into a 2-indexed data frame (flattened 3D)
one_family_aid = pd.concat(aid_incomes)
one_family_aid.columns = one_family_aid.columns.levels[1]
Explanation: Calculate amount of EITC housing aid each family type in each metro receives
End of explanation
#################################################################
############### 0. pre-checks and params ########################
#it doesn't seem that the total eligible for eitc matches the distributional count of incomes
print("Prop accounted for in income distribution counts\n",(tx_o[eagi_cols].sum(axis=1)/tx_o.eitc13).quantile(np.linspace(0,1,5)))
tx = tx_o.copy()
#################################################################
################ I. compute proportions #########################
#set proportion of ppl married - some Christian Science article (not sure if credible source)
prop_married = 1-50.2/100
eqc_cols = tx.columns[tx.columns.str.contains("EQC\d_")]
chld_prop = tx[eqc_cols].div(tx.eitc13,axis=0)
m_chld_prop = chld_prop*prop_married
s_chld_prop = chld_prop - m_chld_prop
m_chld_prop.columns = m_chld_prop.columns + "_married"
s_chld_prop.columns = s_chld_prop.columns + "_single"
tx = pd.concat([tx, m_chld_prop,s_chld_prop],axis=1)
eqc_cols = tx.columns[tx.columns.str.contains('EQC\d_13_married|EQC\d_13_single', regex=True)]
#################################################################
############### II. multiply to across ##########################
#here I make a 3D matrix with metros, bins, types on each axis
#then flatten it into a 2D data frame.
#Implicit broadcasting across two 2D matrices into a 3D matrix
C_3D=np.einsum('ij,ik->jik',tx[eagi_cols],tx[eqc_cols])
#flatten into a pandas dataframe
C_2D=pd.Panel(np.rollaxis(C_3D,2)).to_frame()
C_2D.columns = one_family_aid.columns
C_2D.index = one_family_aid.index
pop_figs = C_2D.sum(axis=1).groupby(level=1).sum().round(0)
Explanation: Calculate families of each type in each income bin
End of explanation
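The np.einsum('ij,ik->jik', ...) call above can be opaque, so here is the same broadcasting pattern on tiny made-up arrays:
# Illustrative sketch only: for each metro i, multiply its income-bin counts (j)
# by its family-type proportions (k), giving a [bin, metro, type] array of counts.
import numpy as np
bins_by_metro = np.array([[10., 20.],     # 2 metros x 2 income bins (EAGI-style counts)
                          [30., 40.]])
type_props = np.array([[0.25, 0.75],      # 2 metros x 2 family types (EQC-style proportions)
                       [0.50, 0.50]])
counts = np.einsum('ij,ik->jik', bins_by_metro, type_props)
print(counts.shape)   # (2 bins, 2 metros, 2 types)
print(counts[0, 0])   # metro 0, bin 0 split across types: [2.5, 7.5]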
##################################################################
############### aggregate aid and filers #########################
disaggregated =np.multiply(C_2D, one_family_aid)
#summing once gives us metro-level totals -> summing that gives us total
total = disaggregated.sum(axis=1).sum()
output_container['base_sim'] = total
print("Total EITC Housing Aid cost: %.2f billion" %(output_container['base_sim']/1e9))
Explanation: Calculate total cost
End of explanation
def vary_params(prcnt_of_income = .3,fmr_discount = 1,prop_married=prop_married, repl_dict = repl_dict, fmr=fmr_o[fmr_cols], tx = tx_o):
######################################
##I. Make a vector of income bin means
######################################
min_earn = 2500
mid_earn = 37500
step = 5000
income_vect = np.linspace(min_earn,mid_earn,((mid_earn-min_earn)/step+1))
add_vect = [45000,52400]
income_vect = np.concatenate([income_vect, add_vect])
#calculate fair share of income for housing
eitc = eitc_o[['Nominal Earnings','level_0', 2014]].copy()
eitc['total_income'] = eitc['Nominal Earnings']+eitc[2014]
eitc['haus_share'] = eitc.total_income*prcnt_of_income
#remove "Nominal" from family description.
eitc.level_0.replace(", Nominal", "", regex=True, inplace=True)
eitc['r_type'] = eitc.level_0.replace(repl_dict)
haus_share = eitc[['level_0', 'haus_share', 'r_type', 'Nominal Earnings']]
#reformat monthly fmr to annual cost of rent
#fmr = fmr*fmr_discount
################################################
##II. Group by family type and loop over incomes
################################################
groups = haus_share.groupby(by = 'level_0')
#place eachincome is placed into a dictionary entry
aid_incomes = {}
for income in income_vect:
aid_per_income = groups.apply(calc_haus_eitc, income=income, fmr = fmr*fmr_discount)
aid_incomes[income] = aid_per_income.unstack(level=0)
#concatenate dictionary into a 2-indexed data frame (flattened 3D)
one_family_aid = pd.concat(aid_incomes)
one_family_aid.columns = one_family_aid.columns.levels[1]
#################################################################
################ I. compute proportions #########################
eqc_cols = tx.columns[tx.columns.str.contains("EQC\d_")]
chld_prop = tx[eqc_cols].div(tx.eitc13,axis=0)
m_chld_prop = chld_prop*prop_married
s_chld_prop = chld_prop - m_chld_prop
m_chld_prop.columns = m_chld_prop.columns + "_married"
s_chld_prop.columns = s_chld_prop.columns + "_single"
tx = pd.concat([tx, m_chld_prop,s_chld_prop],axis=1)
eqc_cols = tx.columns[tx.columns.str.contains('EQC\d_13_married|EQC\d_13_single', regex=True)]
#################################################################
############### II. multiply to across ##########################
#here I make a 3D matrix with metros, bins, types on each axis
#then flatten it into a 2D data frame.
#Implicit broadcasting across two 2D matrices into a 3D matrix
C_3D=np.einsum('ij,ik->jik',tx[eagi_cols],tx[eqc_cols])
#flatten into a pandas dataframe
C_2D=pd.Panel(np.rollaxis(C_3D,2)).to_frame()
C_2D.columns = one_family_aid.columns
C_2D.index = one_family_aid.index
##################################################################
############### aggregate aid and filers #########################
disaggregated =np.multiply(C_2D, one_family_aid)
#summing once gives us metro-level totals -> summing that gives us total
total = disaggregated.sum(axis=1).sum()
return(total)
#Check NY area which is '35620'
disaggregated.groupby(level=1).sum().loc['35620'].sum()
#test out sim function:
print("%.2f" %(vary_params()/1e9))
Explanation: Supplemental Questions
varying parameters on original sim
number of metros where families earning 52,400 receive EITC housing aid
what if no one receives more aid than the median income earning family
Varying parameters
End of explanation
output_container['low_marr_prop'] = vary_params(prop_married = .4)
output_container['sub_fmr_rent_80'] = vary_params(fmr_discount=.8)
output_container['sub_fmr_rent_90'] = vary_params(fmr_discount=.9)
output_container['haus_share_40'] = vary_params(prcnt_of_income = .4)
output_container['haus_share_50'] = vary_params(prcnt_of_income = .5)
repl_dict_studio ={'Married, 0 Kid':'fmr0', 'Married, 1 Kid':'fmr2', 'Married, 2 Kids':'fmr2',
'Married, 3 Kids':'fmr3', 'Single, 0 Kid':'fmr0', 'Single, 1 Kid':'fmr2',
'Single, 2 Kids':'fmr2', 'Single, 3 Kids':'fmr3'}
output_container['tight_living'] = vary_params(repl_dict = repl_dict_studio)
output_container['sub_fmr_rent_90_haus_share_40'] = vary_params(prcnt_of_income = .4, fmr_discount = .9)
output_container
Explanation: Test various cases:
marriage proportion is 40%
support only 80% and 90% of FMR
allow a higher income proportion to go to housing: 40% and 50%
put singles with no children into studios
End of explanation
quantiles = np.divide(C_3D,np.cumsum(C_3D, axis=0))
#find the median income values
# Cool stuff: in essence, we have the following algorithm:
# - consider only values greater than .5 in the cumulative percentiles of each income bin
# - find the minimum on the masked array -> these are the medians
# - use the rows set as true as the identifiers of the median income in the index
masked_quantiles = ma.masked_where(quantiles<=.5,quantiles)
median_idx = masked_quantiles.argmin(0)
median_idx = np.where(masked_quantiles == masked_quantiles.min(axis=0))
median_mat = np.zeros(quantiles.shape, dtype=bool)
median_mat[median_idx] = True
median_mat.shape
#flatten into a pandas dataframe
median_2D=pd.Panel(np.rollaxis(median_mat,2)).to_frame()
median_2D.columns = one_family_aid.columns
median_2D.index = one_family_aid.index
#just double check the results from above
np.divide(tx[eagi_cols].cumsum(axis=1),tx[eagi_cols].sum(axis=1).reshape(357,1)).head(n=3)
#the amount of aid the median income family in each metro is getting
median_metro_aid = one_family_aid[median_2D].reset_index(level=0, drop=True).dropna()
#the amount of aid the median income family in each metro is getting
median_metro_aid = one_family_aid[median_2D].reset_index(level=0, drop=True).dropna()
comparison = pd.concat([median_metro_aid]*10)
#inefficient way of making the comparison matrix the same size as the original matrix
comparison.index = one_family_aid.index
#also inefficient but copy in order to not lose the original matrix
X = one_family_aid.copy()
#set the values that are above the median to np.nan
X[comparison <= one_family_aid] = np.nan
#fill nan values with the median values
X.fillna(comparison, inplace=True)
##################################################################
############### aggregate aid and filers #########################
disaggregated_med =np.multiply(C_2D, X)
#summing once gives us metro-level totals -> summing that gives us total
output_container['median_capped'] = disaggregated_med.sum(axis=1).sum()
print("Total EITC Housing Aid cost: %.2f billion" %(output_container['median_capped']/1e9))
outcome_series = (pd.Series(output_container)/1e9).round(2).to_frame()
outcome_series.columns = ["total cost"]
outcome_series.sort_values(by='total cost')
Explanation: Capping EITC benefits at the city median income
End of explanation
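The masked-array logic above boils down to finding, for each metro, the income bin that contains the median filer; a toy version of that idea for a single metro:
# Illustrative sketch only: locating the bin that contains the median filer.
import numpy as np
bin_counts = np.array([50., 80., 40., 30.])          # filers per income bin (toy numbers)
cum_share = np.cumsum(bin_counts) / bin_counts.sum() # cumulative share of filers
median_bin = int(np.argmax(cum_share >= 0.5))        # first bin crossing 50%
print(cum_share)      # [0.25 0.65 0.85 1.  ]
print(median_bin)     # 1 -> the second bin holds the median filer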
#Calc number of metros where family earning ...
max_inc_aid = (one_family_aid.loc[52400]>0).sum(axis=1).sum()
print("Number of metros where family earning 52400 gets EITC housing aid: %d" %max_inc_aid)
expens_cbsas = one_family_aid.loc[52400,'Married, 3 Kids'].loc[(one_family_aid.loc[52400, "Married, 3 Kids"]>0)].index
fmr_o.loc[expens_cbsas].Areaname[:5]
Explanation: Get Number of metros where families earn 52,400
End of explanation
##make the top 20 and bottom 20 table of expenditures
cost_per_metro = disaggregated.sum(axis=1).unstack(level=0).sum(axis=1)
cost_per_metro = pd.concat([cost_per_metro, pop_figs], axis=1)
cost_per_metro.columns = ['cost', 'recipients']
cost_per_metro['cost_per_recipient'] = (cost_per_metro.cost/cost_per_metro.recipients).round(0)
cost_per_metro.head()
cost_per_metro.sort_values("cost", inplace=True)
bottom_20_cbsa = cost_per_metro.iloc[:20].index
bottom_20_costs = pd.concat([fmr_o.loc[bottom_20_cbsa].Areaname, cost_per_metro.iloc[:20][['cost','cost_per_recipient']]],axis=1)
bottom_20_costs.columns = ['Metro','Cost', 'CPR']
bottom_20_costs.Cost =(bottom_20_costs.Cost/1e6).round(2)
bottom_20_costs.rename(columns = {'Cost':"Cost in millions"},inplace=True)
bottom_20_costs.head()
top_20_cbsa = cost_per_metro.iloc[-20:].index
top_20_costs = pd.concat([fmr_o.loc[top_20_cbsa].Areaname, cost_per_metro.iloc[-20:][['cost','cost_per_recipient']]],axis=1)
top_20_costs.columns = ['Metro','Cost', 'CPR']
top_20_costs.Cost = (top_20_costs.Cost/1e9).round(2)
top_20_costs.rename(columns = {'Cost':"Cost in billions"},inplace=True)
pd.concat([top_20_costs.reset_index(drop=True),bottom_20_costs.reset_index(drop=True)],axis=1)
Explanation: Make some additional output tables
20 top and bottom cities that contribute most in terms of cost
End of explanation
yes_numbers = np.multiply(C_2D, one_family_aid>0)
non_receivers = 1-(np.divide(yes_numbers,C_2D).fillna(0).sum(axis=1)/8).unstack(level=0).sum(axis=1)/10
non_receivers.sort_values(inplace=True)
print("Average proportion of non-qualifiers: %.02f" %non_receivers.mean())
top_low_prop=pd.concat([fmr_o.loc[non_receivers.iloc[-20:].index].Areaname, non_receivers.iloc[-20:]], axis=1).reset_index(drop=True)
top_low_prop.columns = ['metro','prop of non-recipients']
top_low_prop
Explanation: A metric of EITC housing recipients. Using indexers defined at the top:
$$
\frac{1}{|T|}\sum_{t\in T} \frac{ \sum_y I(h_{l,t,y}=0) }{ \sum_y I(h_{l,t,y} \geq 0 ) }
$$
End of explanation |
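On a toy aid matrix, this metric is just the share of (income bin, family type) cells that receive no aid:
# Illustrative sketch only, for one metro (rows = income bins, columns = family types).
import numpy as np
aid = np.array([[0., 120., 0.],
                [0.,   0., 40.]])
prop_non_recipients = np.mean(aid == 0)   # share of cells with no aid
print(prop_non_recipients)                # 4 of the 6 cells receive nothing -> ~0.67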
175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<hr>
<h1>Predicting Benign and Malignant Classes in Mammograms Using Thresholded Data</h1>
<p>Jay Narhan</p>
June 2017
This is an application of the best performing models but using thresholded data instead of differenced data. See JN_DC_Diff_Diagnosis.ipynb for more background and details on the problem.
<hr>
Step1: <h2>Reproducible Research</h2>
Step2: Class Balancing
Here - I look at a modified version of SMOTE, growing the under-represented class via synthetic augmentation, until there is a balance among the categories
Step3: Create the Training and Test Datasets
Step4: <h2>Support Vector Machine Model</h2>
Step6: <h2>CNN Modelling Using VGG16 in Transfer Learning</h2>
Step7: <h2>Core CNN Modelling</h2>
Prep and package the data for Keras processing
Step8: Heavy Regularization | Python Code:
import os
import sys
import time
import numpy as np
from tqdm import tqdm
import sklearn.metrics as skm
from sklearn import metrics
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from skimage import color
import keras.callbacks as cb
import keras.utils.np_utils as np_utils
from keras import applications
from keras import regularizers
from keras.models import Sequential
from keras.constraints import maxnorm
from keras.preprocessing.image import ImageDataGenerator
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers import Activation, Dense, Dropout, Flatten, GaussianNoise
from matplotlib import pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10,10)
np.set_printoptions(precision=2)
sys.path.insert(0, '../helper_modules/')
import jn_bc_helper as bc
Explanation: <hr>
<h1>Predicting Benign and Malignant Classes in Mammograms Using Thresholded Data</h1>
<p>Jay Narhan</p>
June 2017
This is an application of the best performing models but using thresholded data instead of differenced data. See JN_DC_Diff_Diagnosis.ipynb for more background and details on the problem.
<hr>
End of explanation
%%python
import os
os.system('python -V')
os.system('python ../helper_modules/Package_Versions.py')
SEED = 7
np.random.seed(SEED)
CURR_DIR = os.getcwd()
DATA_DIR = '/Users/jnarhan/Dropbox/Breast_Cancer_Data/Data_Thresholded/ALL_IMGS/'
AUG_DIR = '/Users/jnarhan/Dropbox/Breast_Cancer_Data/Data_Thresholded/AUG_DIAGNOSIS_IMGS/'
meta_file = '../../Meta_Data_Files/meta_data_all.csv'
PATHO_INX = 6 # Column number of pathology label in meta_file
FILE_INX = 1 # Column number of File name in meta_file
meta_data, _ = tqdm( bc.load_meta(meta_file, patho_idx=PATHO_INX, file_idx=FILE_INX,
balanceByRemoval=False, verbose=False) )
# Minor addition to reserve records in meta data for which we actually have images:
meta_data = bc.clean_meta(meta_data, DATA_DIR)
# Only work with benign and malignant classes:
for k,v in meta_data.items():
if v not in ['benign', 'malignant']:
del meta_data[k]
bc.pprint('Loading data')
cats = bc.bcLabels(['benign', 'malignant'])
# For smaller images supply tuple argument for a parameter 'imgResize':
# X_data, Y_data = bc.load_data(meta_data, DATA_DIR, cats, imgResize=(150,150))
X_data, Y_data = tqdm( bc.load_data(meta_data, DATA_DIR, cats) )
cls_cnts = bc.get_clsCnts(Y_data, cats)
bc.pprint('Before Balancing')
for k in cls_cnts:
print '{0:10}: {1}'.format(k, cls_cnts[k])
Explanation: <h2>Reproducible Research</h2>
End of explanation
datagen = ImageDataGenerator(rotation_range=5, width_shift_range=.01, height_shift_range=0.01,
data_format='channels_first')
X_data, Y_data = bc.balanceViaSmote(cls_cnts, meta_data, DATA_DIR, AUG_DIR, cats,
datagen, X_data, Y_data, seed=SEED, verbose=True)
Explanation: Class Balancing
Here, I look at a modified version of SMOTE, growing the under-represented class via synthetic augmentation until there is a balance among the categories:
End of explanation
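bc.balanceViaSmote is a custom helper whose internals are not shown here; purely to illustrate the general idea, a minimal sketch of balancing by generating augmented copies of the minority class with the same ImageDataGenerator might look like this:
# Illustrative sketch only; not the implementation of bc.balanceViaSmote.
import numpy as np
def augment_minority(X_min, n_needed, datagen, seed=7):
    # X_min: minority-class images, shape (n, channels, height, width)
    flow = datagen.flow(X_min, batch_size=1, seed=seed)
    extra = [next(flow)[0] for _ in range(n_needed)]
    return np.concatenate([X_min, np.array(extra)], axis=0)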
X_train, X_test, Y_train, Y_test = train_test_split(X_data, Y_data,
test_size=0.20, # deviation given small data set
random_state=SEED,
stratify=zip(*Y_data)[0])
print 'Size of X_train: {:>5}'.format(len(X_train))
print 'Size of X_test: {:>5}'.format(len(X_test))
print 'Size of Y_train: {:>5}'.format(len(Y_train))
print 'Size of Y_test: {:>5}'.format(len(Y_test))
print X_train.shape
print X_test.shape
print Y_train.shape
print Y_test.shape
data = [X_train, X_test, Y_train, Y_test]
Explanation: Create the Training and Test Datasets
End of explanation
X_train_svm = X_train.reshape( (X_train.shape[0], -1))
X_test_svm = X_test.reshape( (X_test.shape[0], -1))
SVM_model = SVC(gamma=0.001)
SVM_model.fit( X_train_svm, Y_train)
predictOutput = SVM_model.predict(X_test_svm)
svm_acc = metrics.accuracy_score(y_true=Y_test, y_pred=predictOutput)
print 'SVM Accuracy: {: >7.2f}%'.format(svm_acc * 100)
print 'SVM Error: {: >10.2f}%'.format(100 - svm_acc * 100)
svm_matrix = skm.confusion_matrix(y_true=Y_test, y_pred=predictOutput)
numBC = bc.reverseDict(cats)
class_names = numBC.values()
plt.figure(figsize=(8,6))
bc.plot_confusion_matrix(svm_matrix, classes=class_names, normalize=True,
title='SVM Normalized Confusion Matrix Using Thresholded \n')
plt.tight_layout()
plt.savefig('../../figures/jn_SVM_Diagnosis_CM_Threshold_20170609.png', dpi=100)
plt.figure(figsize=(8,6))
bc.plot_confusion_matrix(svm_matrix, classes=class_names, normalize=False,
title='SVM Normalized Confusion Matrix Using Thresholded \n')
plt.tight_layout()
bc.cat_stats(svm_matrix)
Explanation: <h2>Support Vector Machine Model</h2>
End of explanation
def VGG_Prep(img_data):
    '''
    :param img_data: training or test images of shape [#images, height, width]
    :return: the array transformed to the correct shape for the VGG network,
             shape = [#images, height, width, 3]; transforms to rgb and reshapes
    '''
images = np.zeros([len(img_data), img_data.shape[1], img_data.shape[2], 3])
for i in range(0, len(img_data)):
im = (img_data[i] * 255) # Original imagenet images were not rescaled
im = color.gray2rgb(im)
images[i] = im
return(images)
def vgg16_bottleneck(data, modelPath, fn_train_feats, fn_train_lbls, fn_test_feats, fn_test_lbls):
# Loading data
X_train, X_test, Y_train, Y_test = data
print('Preparing the Training Data for the VGG_16 Model.')
X_train = VGG_Prep(X_train)
print('Preparing the Test Data for the VGG_16 Model')
X_test = VGG_Prep(X_test)
print('Loading the VGG_16 Model')
# "model" excludes top layer of VGG16:
model = applications.VGG16(include_top=False, weights='imagenet')
# Generating the bottleneck features for the training data
print('Evaluating the VGG_16 Model on the Training Data')
bottleneck_features_train = model.predict(X_train)
# Saving the bottleneck features for the training data
featuresTrain = os.path.join(modelPath, fn_train_feats)
labelsTrain = os.path.join(modelPath, fn_train_lbls)
print('Saving the Training Data Bottleneck Features.')
np.save(open(featuresTrain, 'wb'), bottleneck_features_train)
np.save(open(labelsTrain, 'wb'), Y_train)
# Generating the bottleneck features for the test data
print('Evaluating the VGG_16 Model on the Test Data')
bottleneck_features_test = model.predict(X_test)
# Saving the bottleneck features for the test data
featuresTest = os.path.join(modelPath, fn_test_feats)
labelsTest = os.path.join(modelPath, fn_test_lbls)
print('Saving the Test Data Bottleneck Feaures.')
np.save(open(featuresTest, 'wb'), bottleneck_features_test)
np.save(open(labelsTest, 'wb'), Y_test)
# Locations for the bottleneck and labels files that we need
train_bottleneck = '2Class_Lesions_VGG16_bottleneck_features_train_threshold.npy'
train_labels = '2Class_Lesions_VGG16_labels_train_threshold.npy'
test_bottleneck = '2Class_Lesions_VGG16_bottleneck_features_test_threshold.npy'
test_labels = '2Class_Lesions_VGG16_labels_test_threshold.npy'
modelPath = os.getcwd()
top_model_weights_path = './weights/'
np.random.seed(SEED)
vgg16_bottleneck(data, modelPath, train_bottleneck, train_labels, test_bottleneck, test_labels)
def train_top_model(train_feats, train_lab, test_feats, test_lab, model_path, model_save, epoch = 50, batch = 64):
start_time = time.time()
train_bottleneck = os.path.join(model_path, train_feats)
train_labels = os.path.join(model_path, train_lab)
test_bottleneck = os.path.join(model_path, test_feats)
test_labels = os.path.join(model_path, test_lab)
history = bc.LossHistory()
X_train = np.load(train_bottleneck)
Y_train = np.load(train_labels)
Y_train = np_utils.to_categorical(Y_train, num_classes=2)
X_test = np.load(test_bottleneck)
Y_test = np.load(test_labels)
Y_test = np_utils.to_categorical(Y_test, num_classes=2)
model = Sequential()
model.add(Flatten(input_shape=X_train.shape[1:]))
model.add( Dropout(0.7))
model.add( Dense(256, activation='relu', kernel_constraint= maxnorm(3.)) )
model.add( Dropout(0.5))
# Softmax for probabilities for each class at the output layer
model.add( Dense(2, activation='softmax'))
model.compile(optimizer='rmsprop', # adadelta
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(X_train, Y_train,
epochs=epoch,
batch_size=batch,
callbacks=[history],
validation_data=(X_test, Y_test),
verbose=2)
print "Training duration : {0}".format(time.time() - start_time)
score = model.evaluate(X_test, Y_test, batch_size=16, verbose=2)
print "Network's test score [loss, accuracy]: {0}".format(score)
print 'CNN Error: {:.2f}%'.format(100 - score[1] * 100)
bc.save_model(model_save, model, "jn_VGG16_Diagnosis_top_weights_threshold.h5")
return model, history.losses, history.acc, score
np.random.seed(SEED)
(trans_model, loss_cnn, acc_cnn, test_score_cnn) = train_top_model(train_feats=train_bottleneck,
train_lab=train_labels,
test_feats=test_bottleneck,
test_lab=test_labels,
model_path=modelPath,
model_save=top_model_weights_path,
epoch=100)
plt.figure(figsize=(10,10))
bc.plot_losses(loss_cnn, acc_cnn)
plt.savefig('../../figures/epoch_figures/jn_Transfer_Diagnosis_Threshold_20170609.png', dpi=100)
print 'Transfer Learning CNN Accuracy: {: >7.2f}%'.format(test_score_cnn[1] * 100)
print 'Transfer Learning CNN Error: {: >10.2f}%'.format(100 - test_score_cnn[1] * 100)
predictOutput = bc.predict(trans_model, np.load(test_bottleneck))
trans_matrix = skm.confusion_matrix(y_true=Y_test, y_pred=predictOutput)
plt.figure(figsize=(8,6))
bc.plot_confusion_matrix(trans_matrix, classes=class_names, normalize=True,
title='Transfer CNN Normalized Confusion Matrix Using Thresholded \n')
plt.tight_layout()
plt.savefig('../../figures/TMP_jn_Transfer_Diagnosis_CM_Threshold_20170609.png', dpi=100)
plt.figure(figsize=(8,6))
bc.plot_confusion_matrix(trans_matrix, classes=class_names, normalize=False,
title='Transfer CNN Normalized Confusion Matrix Using Thresholded \n')
plt.tight_layout()
bc.cat_stats(trans_matrix)
Explanation: <h2>CNN Modelling Using VGG16 in Transfer Learning</h2>
End of explanation
data = [X_train, X_test, Y_train, Y_test]
X_train, X_test, Y_train, Y_test = bc.prep_data(data, cats)
data = [X_train, X_test, Y_train, Y_test]
print X_train.shape
print X_test.shape
print Y_train.shape
print Y_test.shape
Explanation: <h2>Core CNN Modelling</h2>
Prep and package the data for Keras processing:
End of explanation
def diff_model_v7_reg(numClasses, input_shape=(3, 150,150), add_noise=False, noise=0.01, verbose=False):
model = Sequential()
if (add_noise):
model.add( GaussianNoise(noise, input_shape=input_shape))
model.add( Convolution2D(filters=16,
kernel_size=(5,5),
data_format='channels_first',
padding='same',
activation='relu'))
else:
model.add( Convolution2D(filters=16,
kernel_size=(5,5),
data_format='channels_first',
padding='same',
activation='relu',
input_shape=input_shape))
# model.add( Dropout(0.7))
model.add( Dropout(0.5))
model.add( Convolution2D(filters=32, kernel_size=(3,3),
data_format='channels_first', padding='same', activation='relu'))
model.add( MaxPooling2D(pool_size= (2,2), data_format='channels_first'))
# model.add( Dropout(0.4))
model.add( Dropout(0.25))
model.add( Convolution2D(filters=32, kernel_size=(3,3),
data_format='channels_first', activation='relu'))
model.add( Convolution2D(filters=64, kernel_size=(3,3),
data_format='channels_first', padding='same', activation='relu',
kernel_regularizer=regularizers.l2(0.01)))
model.add( MaxPooling2D(pool_size= (2,2), data_format='channels_first'))
model.add( Convolution2D(filters=64, kernel_size=(3,3),
data_format='channels_first', activation='relu',
kernel_regularizer=regularizers.l2(0.01)))
#model.add( Dropout(0.4))
model.add( Dropout(0.25))
model.add( Convolution2D(filters=128, kernel_size=(3,3),
data_format='channels_first', padding='same', activation='relu',
kernel_regularizer=regularizers.l2(0.01)))
model.add( MaxPooling2D(pool_size= (2,2), data_format='channels_first'))
model.add( Convolution2D(filters=128, kernel_size=(3,3),
data_format='channels_first', activation='relu',
kernel_regularizer=regularizers.l2(0.01)))
#model.add(Dropout(0.4))
model.add( Dropout(0.25))
model.add( Flatten())
model.add( Dense(128, activation='relu', kernel_constraint= maxnorm(3.)) )
# model.add( Dropout(0.4))
model.add( Dropout(0.25))
model.add( Dense(64, activation='relu', kernel_constraint= maxnorm(3.)) )
# model.add( Dropout(0.4))
model.add( Dropout(0.25))
# Softmax for probabilities for each class at the output layer
model.add( Dense(numClasses, activation='softmax'))
if verbose:
print( model.summary() )
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
return model
diff_model7_noise_reg = diff_model_v7_reg(len(cats),
input_shape=(X_train.shape[1], X_train.shape[2], X_train.shape[3]),
add_noise=True, verbose=True)
np.random.seed(SEED)
(cnn_model, loss_cnn, acc_cnn, test_score_cnn) = bc.run_network(model=diff_model7_noise_reg, earlyStop=True,
data=data,
epochs=50, batch=64)
plt.figure(figsize=(10,10))
bc.plot_losses(loss_cnn, acc_cnn)
plt.savefig('../../figures/epoch_figures/jn_Core_CNN_Diagnosis_Threshold_20170609.png', dpi=100)
bc.save_model(dir_path='./weights/', model=cnn_model, name='jn_Core_CNN_Diagnosis_Threshold_20170609')
print 'Core CNN Accuracy: {: >7.2f}%'.format(test_score_cnn[1] * 100)
print 'Core CNN Error: {: >10.2f}%'.format(100 - test_score_cnn[1] * 100)
predictOutput = bc.predict(cnn_model, X_test)
cnn_matrix = skm.confusion_matrix(y_true=[val.argmax() for val in Y_test], y_pred=predictOutput)
plt.figure(figsize=(8,6))
bc.plot_confusion_matrix(cnn_matrix, classes=class_names, normalize=True,
title='CNN Normalized Confusion Matrix Using Thresholded \n')
plt.tight_layout()
plt.savefig('../../figures/jn_Core_CNN_Diagnosis_Threshold_201706090.png', dpi=100)
plt.figure(figsize=(8,6))
bc.plot_confusion_matrix(cnn_matrix, classes=class_names, normalize=False,
title='CNN Raw Confusion Matrix Using Thresholded \n')
plt.tight_layout()
bc.cat_stats(cnn_matrix)
Explanation: Heavy Regularization
End of explanation |
176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fast chem_evol
Created by Benoit Côté
This notebook presents and tests the pre_calculate_SSPs implementation in chem_evol.py. When many timesteps are required in OMEGA, the computational time becomes significant for a simple one-zone chemical evolution model. This fast version pre-calculates the simple stellar populations (SSPs) and the interpolation coefficients. This means the code does not need to fold the yields with the initial mass function (IMF) at each timestep. However, this option prevents you from modifying the IMF properties during a simulation.
Step1: Original Version
Step2: Fast Version
Step3: By using the dt_in_SSPs array, the OMEGA timesteps can be different from the SSP timesteps. If dt_in_SSPs is not provided when running OMEGA, each SSP will have the same timesteps as OMEGA.
Step4: Comparison Between the Original and Fast Versions | Python Code:
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from NuPyCEE import omega
from NuPyCEE import sygma
Explanation: Fast chem_evol
Created by Benoit Côté
This notebook presents and tests the pre_calculate_SSPs implementation in chem_evol.py. When many timesteps are required in OMEGA, the computational time becomes significant for a simple one-zone chemical evolution model. This fast version pre-calculates the simple stellar populations (SSPs) and the interpolation coefficients. This means the code does not need to fold the yields with the initial mass function (IMF) at each timestep. However, this option prevents you from modifying the IMF properties during a simulation.
End of explanation
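To illustrate why this helps, here is a toy version of the underlying bookkeeping: once the ejecta of a single SSP is tabulated against age, the galactic ejecta at any time is an interpolation of that table weighted by the star formation history, with no re-folding of yields with the IMF (all numbers below are made up):
# Illustrative sketch only; toy numbers, not the chem_evol implementation.
import numpy as np
t_ssp = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0])       # SSP ages (Gyr)
ej_ssp = np.array([0.0, 0.02, 0.05, 0.08, 0.10, 0.12])  # cumulative ejecta per Msun formed
sfr = np.array([5.0, 3.0, 1.0])                          # Msun/yr in three past steps
t_form = np.array([0.0, 1.0, 2.0])                       # formation times (Gyr)
dt = 1.0e9                                               # step length (yr)
t_now = 3.0
ages = t_now - t_form
# total mass ejected by all past populations at t_now, via interpolation of the SSP table
ejected = np.sum(sfr * dt * np.interp(ages, t_ssp, ej_ssp))
print(ejected)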
# Run original OMEGA with 1000 timesteps (this may take a minute ..)
o_ori = omega.omega(galaxy='milky_way', special_timesteps=1000)
Explanation: Original Version
End of explanation
# Let's create the timestep template for each SSP
# Here, you can decide any type of timestep option
s_dt_template = sygma.sygma(special_timesteps=1000)
# Copy the SSP timestep array
dt_in_SSPs = s_dt_template.history.timesteps
Explanation: Fast Version
End of explanation
# Let's pre-calculate the SSPs.
# Here I choose a very low number of OMEGA timesteps.
# I do this because I only want to use this instance
# to copy the SSPs, so I won't have to recalculate
# them each time I want to run an OMEGA simulation (in the fast version).
# You can ignore the Warning notice.
o_for_SSPs = omega.omega(special_timesteps=2, pre_calculate_SSPs=True, dt_in_SSPs=dt_in_SSPs)
# Let's copy the SSPs array
SSPs_in = [o_for_SSPs.ej_SSP, o_for_SSPs.ej_SSP_coef, o_for_SSPs.dt_ssp, o_for_SSPs.t_ssp]
# SSPs_in[0] --> Mass (in log) ejected for each isotope. It's an array in the form of [nb Z][nb SSP dt][isotope]
# SSPs_in[1] --> Interpolation coefficients of each isotope
# SSPs_in[2] --> List of timesteps
# SSPs_in[3] --> List of galactic ages
# Finally, let's run the fast version (1000 timesteps).
# This should be ~3 times faster
o_fast = omega.omega(galaxy='milky_way', special_timesteps=1000, pre_calculate_SSPs=True, SSPs_in=SSPs_in)
Explanation: By using the dt_in_SSPs array, the OMEGA timesteps can be different from the SSP timesteps. If dt_in_SSPs is not provided when running OMEGA, each SSP will have the same timesteps as OMEGA.
End of explanation
%matplotlib nbagg
o_ori.plot_spectro(color='b', label='Original')
o_fast.plot_spectro(color='g', shape='--', label='Fast')
plt.xscale('log')
%matplotlib nbagg
o_ori.plot_mass( specie='Mg', color='b', markevery=100000, label='Original')
o_fast.plot_mass(specie='Mg', color='g', markevery=100, shape='--', label='Fast')
plt.ylabel('Mass of Mg [Msun]')
%matplotlib nbagg
o_ori.plot_spectro( xaxis='[O/H]', yaxis='[Ca/O]', color='b', \
markevery=100000, label='Original')
o_fast.plot_spectro(xaxis='[O/H]', yaxis='[Ca/O]', color='g', \
shape='--', markevery=300, label='Fast')
plt.ylim(-2.0,1.0)
Explanation: Comparison Between the Original and Fast Versions
End of explanation |
177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Flint, MI water crisis - part II
Student names
Type the names of everybody in your group here!
Learning Goals (why are we asking you to do this?)
As discussed in last class, there are two main reasons
Step1: What you should now have is a data frame called flint_data, which contains water quality data with many elements for three different dates (2015-08-01, 2016-03-01, and 2016-07-01), and with three different bottles per date (bottle1, bottle2, bottle3, corresponding to the three samples). You can see all of the columns by typing flint_data.columns - note that all quantities are in parts per billion, not mg/L! Note also that the sample numbers in the leftmost column may be misleading - this is a result of combining together several datasets.
Step3: Feedback (this is required) | Python Code:
# THIS CELL READS IN THE FLINT DATASET - DO NOT CHANGE ANYTHING!
# Make plots inline
%matplotlib inline
# Make inline plots vector graphics instead of raster graphics
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'svg')
# import modules for plotting and data analysis
import matplotlib.pyplot # as plt
import pandas
import numpy as np
import functools
def add_bottle_id_column(data, key_name):
data['bottleID'] = np.repeat(key_name, data.shape[0])
return data
'''
Loads the flint water quality dataset from the spreadsheet.
This manipulation is necessary because (1) the data is in a spreadsheet
rather than a CSV file or something else, and (2) the data is spread out
across multiple sheets in the spreadsheet.
'''
def load_flint_water_data():
flint_water_data = pandas.read_excel(
# NOTE: uncomment the following line and comment out the one after that if
# you have problems getting this to run on a Windows machine.
# io = “https://github.com/ComputationalModeling/flint-water-data/raw/f6093bba145b1745b68bac2964b341fa30f3a08a/Flint%20Lead%20Kits%20ICP%20Data.xlsx”,
io = "Flint_Lead_Kits_ICP_Data.xlsx",
sheetname = [
"Sub_B1-8.15",
"Sub_B2-8.15",
"Sub_B3-8.15",
"Sub_B1-3.16",
"Sub_B2-3.16",
"Sub_B3-3.16",
"Sub_B1-7.16",
"Sub_B2-7.16",
"Sub_B3-7.16"],
header = 0,
skiprows = 3,
names = [
"Sample",
"208Pb",
"",
"23Na",
"25Mg",
"27Al",
"28Si",
"31P",
"PO4",
"34S",
"35Cl",
"39K",
"43Ca",
"47Ti",
"51V",
"52Cr",
"54Fe",
"55Mn",
"59Co",
"60Ni",
"65Cu",
"66Zn",
"75As",
"78Se",
"88Sr",
"95Mo",
"107Ag",
"111Cd",
"112Sn",
"137Ba",
"238U"
]
)
data_with_id = [
add_bottle_id_column(value, key)
for key, value
in flint_water_data.items()]
# collapse dataframes into one long dataframe
flint_water_data = functools.reduce(lambda x,y: x.append(y), data_with_id)
return flint_water_data
def add_date_and_bottle_number(flint_data):
flint_data['bottle_number'] = flint_data['bottleID'].apply(lambda x: x.split('-')[0])
flint_data['date_collected'] = flint_data['bottleID'].apply(lambda x: x.split('-')[1])
return(flint_data)
bottle_map = {
'Sub_B1': 'bottle1',
'Sub_B2': 'bottle2',
'Sub_B3': 'bottle3'
}
date_map = {
'8.15': '2015-08-01',
'3.16': '2016-03-01',
'7.16': '2016-07-01'
}
flint_data = load_flint_water_data()
flint_data = add_date_and_bottle_number(flint_data)
flint_data = flint_data.replace(
{'bottle_number': bottle_map,
'date_collected': date_map })
flint_data['date_collected'] = pandas.DatetimeIndex(flint_data['date_collected'])
flint_data = flint_data.drop('bottleID', axis = 1)
# the end result is that you have a data frame called "flint_data"
Explanation: The Flint, MI water crisis - part II
Student names
Type the names of everybody in your group here!
Learning Goals (why are we asking you to do this?)
As discussed in last class, there are two main reasons:
Because data analysis is something that can and should be used for (among other things) improving local and federal government, and serving humanity.
Because data analysis and visualization are two of the most important parts of modeling and understanding a system.
In today's class, we'll be pursuing both of these objectives.
Today's activity
We'll be looking at the Flint Water Quality dataset, as we did in the last class. However, now we'll be looking at lead content data for the same houses, but sampled three times over almost a year.
Your goal for today is to answer this question: Is the water quality for the residents of Flint, Michigan, getting better, getting worse, or not substantially changing?
The dataset you have been provided is a much larger and richer dataset than you worked with in the last class session, and while it has data for fewer houses (162 total) it includes data for more elements as well as 3 bottles (immediately, after 45 seconds, after 2 minutes) for three different dates: August 2015, March 2016, and July 2016, for a total of 1458 records.
You are deliberately being given very little in the way of concrete guidance for this project. Talk with your group members about how you want to do your analysis, and then create whatever plots, charts, or statistical analyses are necessary in order to answer the question. Use the rest of this notebook to show your work!
Note: make sure that the Excel spreadsheet Flint_Lead_Kits_ICP_Data.xlsx is in the same directory as this notebook. You may also wish to look at the "pandas cheat sheet" notebook and your pre-class assignment for inspiration, as well as the "10 minutes to Pandas" tutorial.
The cell below is code that we wrote to read in the Flint water quality dataset - do not change it! Add your code in the cells below it.
Click here for the Pandas cheat sheet
End of explanation
# Put all of your code, plots, and explanations here.
# If necessary, insert additional cells below this one!
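# One possible starting point (a sketch, not a complete analysis): compare lead
# (208Pb) levels across the three collection dates. The column names below are
# the ones created by the loading code above; 15 ppb is the EPA action level for lead.
lead_by_date = flint_data.groupby('date_collected')['208Pb']
print(lead_by_date.describe())
print(lead_by_date.apply(lambda s: (s > 15).mean()))  # fraction of samples above 15 ppb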
Explanation: What you should now have is a data frame called flint_data, which contains water quality data with many elements for three different dates (2015-08-01, 2016-03-01, and 2016-07-01), and with three different bottles per date (bottle1, bottle2, bottle3, corresponding to the three samples). You can see all of the columns by typing flint_data.columns - note that all quantities are in parts per billion, not mg/L! Note also that the sample numbers in the leftmost column may be misleading - this is a result of combining together several datasets.
End of explanation
from IPython.display import HTML
HTML("""
<iframe
    src="https://goo.gl/forms/DjojWCmPAwzlHrpQ2?embedded=true"
    width="80%"
    height="1200px"
    frameborder="0"
    marginheight="0"
    marginwidth="0">
    Loading...
</iframe>
""")
Explanation: Feedback (this is required)
End of explanation |
178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encoder-Decoder Analysis
Model Architecture
Step1: Perplexity on Each Dataset
Step2: Loss vs. Epoch
Step3: Perplexity vs. Epoch
Step4: Generations
Step5: BLEU Analysis
Step6: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations of each other, giving their BLEU score. We expect very low scores for the ground truth, while high scores can expose hyper-common generations
Step7: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU: we expect low scores for the ground truth, while hyper-common generations raise the scores | Python Code:
report_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_bow_200_512_04drb/encdec_noing15_bow_200_512_04drb.json'
log_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_bow_200_512_04drb/encdec_noing15_bow_200_512_04drb_logs.json'
import json
import matplotlib.pyplot as plt
with open(report_file) as f:
report = json.loads(f.read())
with open(log_file) as f:
logs = json.loads(f.read())
print 'Encoder: \n\n', report['architecture']['encoder']
print 'Decoder: \n\n', report['architecture']['decoder']
Explanation: Encoder-Decoder Analysis
Model Architecture
End of explanation
print('Train Perplexity: ', report['train_perplexity'])
print('Valid Perplexity: ', report['valid_perplexity'])
print('Test Perplexity: ', report['test_perplexity'])
Explanation: Perplexity on Each Dataset
End of explanation
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: Loss vs. Epoch
End of explanation
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
Explanation: Perplexity vs. Epoch
End of explanation
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
for i, sample in enumerate(report['train_samples']):
print_sample(sample, report['best_bleu_matches_train'][i] if 'best_bleu_matches_train' in report else None)
for i, sample in enumerate(report['valid_samples']):
print_sample(sample, report['best_bleu_matches_valid'][i] if 'best_bleu_matches_valid' in report else None)
for i, sample in enumerate(report['test_samples']):
print_sample(sample, report['best_bleu_matches_test'][i] if 'best_bleu_matches_test' in report else None)
Explanation: Generations
End of explanation
def print_bleu(blue_struct):
print 'Overall Score: ', blue_struct['score'], '\n'
print '1-gram Score: ', blue_struct['components']['1']
print '2-gram Score: ', blue_struct['components']['2']
print '3-gram Score: ', blue_struct['components']['3']
print '4-gram Score: ', blue_struct['components']['4']
# Training Set BLEU Scores
print_bleu(report['train_bleu'])
# Validation Set BLEU Scores
print_bleu(report['valid_bleu'])
# Test Set BLEU Scores
print_bleu(report['test_bleu'])
# All Data BLEU Scores
print_bleu(report['combined_bleu'])
Explanation: BLEU Analysis
End of explanation
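The scores above are read from a precomputed report. As a hedged sketch (not the original pipeline), a comparable corpus BLEU could be recomputed over the handful of stored training samples with NLTK, assuming it is installed; names and tokenisation are illustrative.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction
hypotheses = [s['generated'].split(' ') for s in report['train_samples']]
references = [[s['gold'].split(' ')] for s in report['train_samples']]
score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method1)
print('Recomputed corpus BLEU over stored train samples: {0}'.format(score))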
# Training Set BLEU n-pairs Scores
print_bleu(report['n_pairs_bleu_train'])
# Validation Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_valid'])
# Test Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_test'])
# Combined n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_all'])
# Ground Truth n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_gold'])
Explanation: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations of each other, giving their BLEU score. We expect very low scores for the ground truth, while high scores can expose hyper-common generations
End of explanation
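A minimal sketch of the n-pairs idea described above, run here over only the stored samples rather than 1000 random pairs from the full outputs; the helper name and sample size are illustrative, and at least two stored samples are assumed.
import random
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
def n_pairs_bleu(texts, n=100, seed=0):
    # score randomly chosen generation pairs against each other as if they were translations
    rng = random.Random(seed)
    smooth = SmoothingFunction().method1
    scores = []
    for _ in range(n):
        hyp, ref = rng.sample(texts, 2)
        scores.append(sentence_bleu([ref.split(' ')], hyp.split(' '),
                                    smoothing_function=smooth))
    return sum(scores) / float(len(scores))
generations = [s['generated'] for s in report['train_samples']]
print('n-pairs BLEU over stored train samples: {0}'.format(n_pairs_bleu(generations)))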
print 'Average (Train) Generated Score: ', report['average_alignment_train']
print 'Average (Valid) Generated Score: ', report['average_alignment_valid']
print 'Average (Test) Generated Score: ', report['average_alignment_test']
print 'Average (All) Generated Score: ', report['average_alignment_all']
print 'Average Gold Score: ', report['average_alignment_gold']
Explanation: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU: we expect low scores for the ground truth, while hyper-common generations raise the scores
End of explanation |
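The averages above were computed offline; purely as an illustration of the metric (not the original implementation), a simple token-level Smith-Waterman score averaged over random generation pairs could be sketched as follows, again over only the stored samples.
import random
def smith_waterman_score(a_tokens, b_tokens, match=2, mismatch=-1, gap=-1):
    # classic local-alignment dynamic programme; returns the best local score
    prev = [0] * (len(b_tokens) + 1)
    best = 0
    for i in range(1, len(a_tokens) + 1):
        curr = [0] * (len(b_tokens) + 1)
        for j in range(1, len(b_tokens) + 1):
            step = match if a_tokens[i - 1] == b_tokens[j - 1] else mismatch
            curr[j] = max(0, prev[j - 1] + step, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best
generations = [s['generated'].split(' ') for s in report['train_samples']]
rng = random.Random(0)
pairs = [rng.sample(generations, 2) for _ in range(100)]
avg = sum(smith_waterman_score(x, y) for x, y in pairs) / float(len(pairs))
print('Average pairwise alignment score (stored train samples): {0}'.format(avg))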
179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classify Data
Create a classifier for different kinds of plankton using supervised machine learning
Executing this Notebook requires a personal STOQS database. Follow the steps to build your own development system — this will take about an hour and depends on a good connection to the Internet. Once your server is up log into it (after a cd ~/Vagrants/stoqsvm) and activate your virtual environment with the usual commands
Step1: Build up command-line parameters so that we can call methods on our Classifier() object c
Step2: Load the labeled data, normalize, and split into train and test sets (borrowing from classify.py's createClassifier() method)
Step3: Setup plotting
Step4: Plot classifier comparisons as in http | Python Code:
from contrib.analysis.classify import Classifier
c = Classifier()
Explanation: Classify Data
Create a classifier for different kinds of plankton using supervised machine learning
Executing this Notebook requires a personal STOQS database. Follow the steps to build your own development system — this will take about an hour and depends on a good connection to the Internet. Once your server is up log into it (after a cd ~/Vagrants/stoqsvm) and activate your virtual environment with the usual commands:
vagrant ssh -- -X
cd /vagrant/dev/stoqsgit
source venv-stoqs/bin/activate
Then load the stoqs_september2013 database with the commands:
cd stoqs
ln -s mbari_campaigns.py campaigns.py
export DATABASE_URL=postgis://stoqsadm:CHANGEME@127.0.0.1:5432/stoqs
loaders/load.py --db stoqs_september2013
loaders/load.py --db stoqs_september2013 --updateprovenance
Loading this database can take over a day as there are over 40 million measurements from 22 different platforms. You may want to edit the stoqs/loaders/CANON/loadCANON_september2013.py file and comment all but the loadDorado() method calls at the end of the file. You can also set a stride value or use the --test option to create a stoqs_september2013_t database, in which case you'll need to set the STOQS_CAMPAIGNS environment variable:
export STOQS_CAMPAIGNS=stoqs_september2013_t
Use the stoqs/contrib/analysis/classify.py script to create some labeled data that we will learn from:
contrib/analysis/classify.py --createLabels --groupName Plankton \
--database stoqs_september2013 --platform dorado \
--start 20130916T124035 --end 20130919T233905 \
--inputs bbp700 fl700_uncorr --discriminator salinity \
--labels diatom dino1 dino2 sediment \
--mins 33.33 33.65 33.70 33.75 --maxes 33.65 33.70 33.75 33.93 --clobber -v
A little explanation is probably warranted here. The Dorado missions on 16-19 September 2013 sampled distinct water types in Monterey Bay that are easily identified by ranges of salinity. These water types contain different kinds of particles as identified by bbp700 (backscatter) and fl700_uncorr (chlorophyll). The previous command "labeled" MeasuredParameters in the database according to our understanding of the optical properties of diatoms, dinoflagellates, and sediment. This works for this data set because of the particular oceanographic conditions at the time.
This Notebook demonstrates creating a classification algorithm from these labeled data and addresses Issue 227 on GitHub. To be able to execute the cells and experiment with different algorithms and parameters launch Jupyter Notebook with:
cd contrib/notebooks
../../manage.py shell_plus --notebook
navigate to this file and open it. You will then be able to execute the cells and experiment with different settings and code.
Use code from the classify module to read data from the database:
End of explanation
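The --createLabels command above encodes a simple salinity-to-label mapping. Purely to illustrate that mapping (this is not the classify.py implementation), a pandas version might look like the sketch below; the dataframe and its salinity column are assumptions.
import pandas as pd
salinity_bins = [33.33, 33.65, 33.70, 33.75, 33.93]
label_names = ['diatom', 'dino1', 'dino2', 'sediment']
def label_by_salinity(df):
    # assign a water-type label to each row based on its salinity range
    df = df.copy()
    df['label'] = pd.cut(df['salinity'], bins=salinity_bins, labels=label_names)
    return df.dropna(subset=['label'])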
from argparse import Namespace
ns = Namespace()
ns.database = 'stoqs_september2013_t'
ns.classifier='Decision_Tree'
ns.inputs=['bbp700', 'fl700_uncorr']
ns.labels=['diatom', 'dino1', 'dino2', 'sediment']
ns.test_size=0.4
ns.train_size=0.4
ns.verbose=True
c.args = ns
Explanation: Build up command-line parameters so that we can call methods on our Classifier() object c
End of explanation
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
X, y = c.loadLabeledData('Labeled Plankton', classes=('diatom', 'sediment'))
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=c.args.test_size, train_size=c.args.train_size)
Explanation: Load the labeled data, normalize, and split into train and test sets (borrowing from classify.py's createClassifier() method)
End of explanation
%pylab inline
import pylab as plt
from matplotlib.colors import ListedColormap
plt.rcParams['figure.figsize'] = (27, 3)
Explanation: Setup plotting
End of explanation
for i, (name, clf) in enumerate(c.classifiers.items()):
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, .02),
np.arange(y_min, y_max, .02))
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
ax = plt.subplot(1, len(c.classifiers) + 1, i + 1)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, m_max]x[y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
# Put the result into a color plot
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)
# Plot also the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(name)
ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
size=15, horizontalalignment='right')
Explanation: Plot classifier comparisons as in http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html
End of explanation |
180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Least squares fitting of models to data
This is a quick introduction to statsmodels for physical scientists (e.g. physicists, astronomers) or engineers.
Why is this needed?
Because most of statsmodels was written by statisticians and they use a different terminology and sometimes methods, making it hard to know which classes and functions are relevant and what their inputs and outputs mean.
Step2: Linear models
Assume you have data points with measurements y at positions x as well as measurement errors y_err.
How can you use statsmodels to fit a straight line model to this data?
For an extensive discussion see Hogg et al. (2010), "Data analysis recipes
Step3: To fit a straight line use the weighted least squares class WLS ... the parameters are called
Step4: Check against scipy.optimize.curve_fit
Step6: Check against self-written cost function
Step7: Non-linear models | Python Code:
import numpy as np
import pandas as pd
import statsmodels.api as sm
Explanation: Least squares fitting of models to data
This is a quick introduction to statsmodels for physical scientists (e.g. physicists, astronomers) or engineers.
Why is this needed?
Because most of statsmodels was written by statisticians and they use a different terminology and sometimes methods, making it hard to know which classes and functions are relevant and what their inputs and outputs mean.
End of explanation
data = """
x y y_err
201 592 61
244 401 25
47 583 38
287 402 15
203 495 21
58 173 15
210 479 27
202 504 14
198 510 30
158 416 16
165 393 14
201 442 25
157 317 52
131 311 16
166 400 34
160 337 31
186 423 42
125 334 26
218 533 16
146 344 22
"""
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
data = pd.read_csv(StringIO(data), delim_whitespace=True).astype(float)
# Note: for the results we compare with the paper here, they drop the first four points
data.head()
Explanation: Linear models
Assume you have data points with measurements y at positions x as well as measurement errors y_err.
How can you use statsmodels to fit a straight line model to this data?
For an extensive discussion see Hogg et al. (2010), "Data analysis recipes: Fitting a model to data" ... we'll use the example data given by them in Table 1.
So the model is f(x) = a * x + b and on Figure 1 they print the result we want to reproduce ... the best-fit parameter and the parameter errors for a "standard weighted least-squares fit" for this data are:
* a = 2.24 +- 0.11
* b = 34 +- 18
End of explanation
exog = sm.add_constant(data["x"])
endog = data["y"]
weights = 1.0 / (data["y_err"] ** 2)
wls = sm.WLS(endog, exog, weights)
results = wls.fit(cov_type="fixed scale")
print(results.summary())
Explanation: To fit a straight line use the weighted least squares class WLS ... the parameters are called:
* exog = sm.add_constant(x)
* endog = y
* weights = 1 / y_err**2 (i.e., inverse variance, as in the code below)
Note that exog must be a 2-dimensional array with x as a column and an extra column of ones. Adding this column of ones means you want to fit the model y = a * x + b, leaving it off means you want to fit the model y = a * x.
And you have to use the option cov_type='fixed scale' to tell statsmodels that you really have measurement errors with an absolute scale. If you do not, statsmodels will treat the weights as relative weights between the data points and internally re-scale them so that the best-fit model will have chi**2 / ndf = 1.
End of explanation
# You can use `scipy.optimize.curve_fit` to get the best-fit parameters and parameter errors.
from scipy.optimize import curve_fit
def f(x, a, b):
return a * x + b
xdata = data["x"]
ydata = data["y"]
p0 = [0, 0] # initial parameter estimate
sigma = data["y_err"]
popt, pcov = curve_fit(f, xdata, ydata, p0, sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print("a = {0:10.3f} +- {1:10.3f}".format(popt[0], perr[0]))
print("b = {0:10.3f} +- {1:10.3f}".format(popt[1], perr[1]))
Explanation: Check against scipy.optimize.curve_fit
End of explanation
# You can also use `scipy.optimize.minimize` and write your own cost function.
# This does not give you the parameter errors though ... you'd have
# to estimate the HESSE matrix separately ...
from scipy.optimize import minimize
def chi2(pars):
    """Cost function."""
y_model = pars[0] * data["x"] + pars[1]
chi = (data["y"] - y_model) / data["y_err"]
return np.sum(chi ** 2)
result = minimize(fun=chi2, x0=[0, 0])
popt = result.x
print("a = {0:10.3f}".format(popt[0]))
print("b = {0:10.3f}".format(popt[1]))
Explanation: Check against self-written cost function
End of explanation
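The comment above notes that scipy.optimize.minimize does not return parameter errors directly. As a rough sketch (an approximation only, since BFGS merely estimates the inverse Hessian), they can be recovered from the optimizer result like this:
import numpy as np
from scipy.optimize import minimize
result = minimize(fun=chi2, x0=[0, 0], method="BFGS")
# for a chi-square cost function the parameter covariance is ~ 2 * inverse Hessian of chi2
pcov_approx = 2.0 * result.hess_inv
perr_approx = np.sqrt(np.diag(pcov_approx))
print("a = {0:10.3f} +- {1:10.3f}".format(result.x[0], perr_approx[0]))
print("b = {0:10.3f} +- {1:10.3f}".format(result.x[1], perr_approx[1]))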
# TODO: we could use the examples from here:
# http://probfit.readthedocs.org/en/latest/api.html#probfit.costfunc.Chi2Regression
Explanation: Non-linear models
End of explanation |
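The cell above is only a TODO placeholder pointing at the probfit examples. As a stand-in illustration (not taken from probfit), here is one way to fit a simple non-linear model with scipy.optimize.curve_fit on synthetic data; the model, data, and parameter values are made up for the sketch.
import numpy as np
from scipy.optimize import curve_fit
def f_exp(x, a, b, c):
    return a * np.exp(-b * x) + c
rng = np.random.RandomState(42)
x_synth = np.linspace(0, 4, 50)
y_err_synth = 0.2 * np.ones_like(x_synth)
y_synth = 2.5 * np.exp(-1.3 * x_synth) + 0.5 + y_err_synth * rng.randn(x_synth.size)
popt, pcov = curve_fit(f_exp, x_synth, y_synth, p0=[1, 1, 1],
                       sigma=y_err_synth, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip("abc", popt, perr):
    print("{0} = {1:8.3f} +- {2:8.3f}".format(name, val, err))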
181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Before we get started, a couple of reminders to keep in mind when using iPython notebooks
Step2: Fixing Data Types
Step4: Note when running the above cells that we are actively changing the contents of our data variables. If you try to run these cells multiple times in the same session, an error will occur.
Investigating the Data
Step5: Problems in the Data
Step6: Missing Engagement Records
Step7: Checking for More Problem Records
Step8: Tracking Down the Remaining Problems
Step9: Refining the Question
Step10: Getting Data from First Week
Step11: Exploring Student Engagement
Step12: Debugging Data Analysis Code
Step16: Lessons Completed in First Week
Step17: Number of Visits in First Week
Step18: Splitting out Passing Students
Step19: Comparing the Two Student Groups
Step20: Making Histograms
Step21: Improving Plots and Sharing Findings | Python Code:
import unicodecsv
## Longer version of code (replaced with shorter, equivalent version below)
# enrollments = []
# f = open('enrollments.csv', 'rb')
# reader = unicodecsv.DictReader(f)
# for row in reader:
# enrollments.append(row)
# f.close()
with open('enrollments.csv', 'rb') as f:
reader = unicodecsv.DictReader(f)
enrollments = list(reader)
#####################################
# 1 #
#####################################
## Read in the data from daily_engagement.csv and project_submissions.csv
## and store the results in the below variables.
## Then look at the first row of each table.
def csv_loader(file_name):
    """Reads a CSV using unicodecsv module and returns a list"""
with open(file_name, "rb") as f:
reader = unicodecsv.DictReader(f)
return list(reader)
daily_engagement = csv_loader("daily_engagement.csv")
print(daily_engagement[0])
project_submissions = csv_loader("project_submissions.csv")
print(project_submissions[0])
Explanation: Before we get started, a couple of reminders to keep in mind when using iPython notebooks:
Remember that you can see from the left side of a code cell when it was last run if there is a number within the brackets.
When you start a new notebook session, make sure you run all of the cells up to the point where you last left off. Even if the output is still visible from when you ran the cells in your previous session, the kernel starts in a fresh state so you'll need to reload the data, etc. on a new session.
The previous point is useful to keep in mind if your answers do not match what is expected in the lesson's quizzes. Try reloading the data and run all of the processing steps one by one in order to make sure that you are working with the same variables and data that are at each quiz stage.
Load Data from CSVs
End of explanation
from datetime import datetime as dt
# Takes a date as a string, and returns a Python datetime object.
# If there is no date given, returns None
def parse_date(date):
if date == '':
return None
else:
return dt.strptime(date, '%Y-%m-%d')
# Takes a string which is either an empty string or represents an integer,
# and returns an int or None.
def parse_maybe_int(i):
if i == '':
return None
else:
return int(i)
# Clean up the data types in the enrollments table
for enrollment in enrollments:
enrollment['cancel_date'] = parse_date(enrollment['cancel_date'])
enrollment['days_to_cancel'] = parse_maybe_int(enrollment['days_to_cancel'])
enrollment['is_canceled'] = enrollment['is_canceled'] == 'True'
enrollment['is_udacity'] = enrollment['is_udacity'] == 'True'
enrollment['join_date'] = parse_date(enrollment['join_date'])
enrollments[0]
# Clean up the data types in the engagement table
for engagement_record in daily_engagement:
engagement_record['lessons_completed'] = int(float(engagement_record['lessons_completed']))
engagement_record['num_courses_visited'] = int(float(engagement_record['num_courses_visited']))
engagement_record['projects_completed'] = int(float(engagement_record['projects_completed']))
engagement_record['total_minutes_visited'] = float(engagement_record['total_minutes_visited'])
engagement_record['utc_date'] = parse_date(engagement_record['utc_date'])
daily_engagement[0]
# Clean up the data types in the submissions table
for submission in project_submissions:
submission['completion_date'] = parse_date(submission['completion_date'])
submission['creation_date'] = parse_date(submission['creation_date'])
project_submissions[0]
Explanation: Fixing Data Types
End of explanation
#####################################
# 2 #
#####################################
## Find the total number of rows and the number of unique students (account keys)
## in each table.
# Part 1
print(
len(enrollments),
len(daily_engagement),
len(project_submissions)
)
# Part 2
def get_unique_students(file_name):
    """Returns the set of unique account keys in the given data"""
    unique_students = set()
    for e in file_name:
        unique_students.add(e["account_key"])
    return unique_students
u_enrollments = get_unique_students(enrollments)
u_daily_engagement = get_unique_students(daily_engagement)
u_project_submissions = get_unique_students(project_submissions)
print(
len(u_enrollments),
len(u_daily_engagement),
len(u_project_submissions)
)
Explanation: Note when running the above cells that we are actively changing the contents of our data variables. If you try to run these cells multiple times in the same session, an error will occur.
Investigating the Data
End of explanation
#####################################
# 3 #
#####################################
## Rename the "acct" column in the daily_engagement table to "account_key".
for engagement_record in daily_engagement:
engagement_record['account_key'] = engagement_record['acct']
del[engagement_record['acct']]
Explanation: Problems in the Data
End of explanation
#####################################
# 4 #
#####################################
## Find any one student enrollments where the student is missing from the daily engagement table.
## Output that enrollment.
for e in enrollments:
if e["account_key"] not in u_daily_engagement:
print("\n", e)
Explanation: Missing Engagement Records
End of explanation
#####################################
# 5 #
#####################################
## Find the number of surprising data points (enrollments missing from
## the engagement table) that remain, if any.
for ix, e in enumerate(enrollments):
if e["account_key"] not in u_daily_engagement and e["join_date"] != e["cancel_date"]:
print("\n", "Index: %i" % ix, "\n Correspoinding record: \n %s" % e)
Explanation: Checking for More Problem Records
End of explanation
# Create a set of the account keys for all Udacity test accounts
udacity_test_accounts = set()
for enrollment in enrollments:
if enrollment['is_udacity']:
udacity_test_accounts.add(enrollment['account_key'])
len(udacity_test_accounts)
# Given some data with an account_key field, removes any records corresponding to Udacity test accounts
def remove_udacity_accounts(data):
non_udacity_data = []
for data_point in data:
if data_point['account_key'] not in udacity_test_accounts:
non_udacity_data.append(data_point)
return non_udacity_data
# Remove Udacity test accounts from all three tables
non_udacity_enrollments = remove_udacity_accounts(enrollments)
non_udacity_engagement = remove_udacity_accounts(daily_engagement)
non_udacity_submissions = remove_udacity_accounts(project_submissions)
print(
len(non_udacity_enrollments),
len(non_udacity_engagement),
len(non_udacity_submissions))
Explanation: Tracking Down the Remaining Problems
End of explanation
#####################################
# 6 #
#####################################
## Create a dictionary named paid_students containing all students who either
## haven't canceled yet or who remained enrolled for more than 7 days. The keys
## should be account keys, and the values should be the date the student enrolled.
paid_students = dict()
for e in non_udacity_enrollments:
# check wether days_to_cancel == None or days_to_cancel > 7
if e["days_to_cancel"] == None or e["days_to_cancel"] > 7:
# store account key and join date in temporary variables
temp_key = e["account_key"]
temp_date = e["join_date"]
# check wether account key already exists in temp variable or if join date > existing join date
if temp_key not in paid_students or temp_date > paid_students[temp_key]:
# add account_key and enrollment_date to
paid_students[temp_key] = temp_date
len(paid_students)
Explanation: Refining the Question
End of explanation
# Takes a student's join date and the date of a specific engagement record,
# and returns True if that engagement record happened within one week
# of the student joining.
def within_one_week(join_date, engagement_date):
time_delta = engagement_date - join_date
return time_delta.days >= 0 and time_delta.days < 7
def remove_free_trial_cancels(data):
new_data = []
for data_point in data:
if data_point['account_key'] in paid_students:
new_data.append(data_point)
return new_data
paid_enrollments = remove_free_trial_cancels(non_udacity_enrollments)
paid_engagement = remove_free_trial_cancels(non_udacity_engagement)
paid_submissions = remove_free_trial_cancels(non_udacity_submissions)
#####################################
# 7 #
#####################################
## Create a list of rows from the engagement table including only rows where
## the student is one of the paid students you just found, and the date is within
## one week of the student's join date.
paid_engagement_in_first_week = []
# loop over engagements
for e in non_udacity_engagement:
# check if student is in paid students and if engagement date is valid
if e["account_key"] in paid_students and within_one_week(paid_students[e["account_key"]], e["utc_date"]) == True:
paid_engagement_in_first_week.append(e)
len(paid_engagement_in_first_week)
Explanation: Getting Data from First Week
End of explanation
from collections import defaultdict
# Create a dictionary of engagement grouped by student.
# The keys are account keys, and the values are lists of engagement records.
engagement_by_account = defaultdict(list)
for engagement_record in paid_engagement_in_first_week:
account_key = engagement_record['account_key']
engagement_by_account[account_key].append(engagement_record)
# Create a dictionary with the total minutes each student spent in the classroom during the first week.
# The keys are account keys, and the values are numbers (total minutes)
total_minutes_by_account = {}
for account_key, engagement_for_student in engagement_by_account.items():
total_minutes = 0
for engagement_record in engagement_for_student:
total_minutes += engagement_record['total_minutes_visited']
total_minutes_by_account[account_key] = total_minutes
import numpy as np
# Summarize the data about minutes spent in the classroom
total_minutes = list(total_minutes_by_account.values())
print('Mean:', np.mean(total_minutes))
print('Standard deviation:', np.std(total_minutes))
print('Minimum:', np.min(total_minutes))
print('Maximum:', np.max(total_minutes))
Explanation: Exploring Student Engagement
End of explanation
#####################################
# 8 #
#####################################
## Go through a similar process as before to see if there is a problem.
## Locate at least one surprising piece of data, output it, and take a look at it.
for k,v in total_minutes_by_account.items():
if v > 7200:
print("\n", "account key: ", k, "value: ", v)
print(
paid_engagement_in_first_week["account_key" == 460],
paid_engagement_in_first_week["account_key" == 140],
paid_engagement_in_first_week["account_key" == 108],
paid_engagement_in_first_week["account_key" == 78]
)
Explanation: Debugging Data Analysis Code
End of explanation
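One concrete way (a sketch, not the notebook's own answer) to pull up the day-by-day records for the most surprising account flagged above:
suspect_key = max(total_minutes_by_account, key=total_minutes_by_account.get)
print('account with the most recorded minutes:', suspect_key)
for record in engagement_by_account[suspect_key]:
    print(record['utc_date'], record['total_minutes_visited'])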
#####################################
# 9 #
#####################################
## Adapt the code above to find the mean, standard deviation, minimum, and maximum for
## the number of lessons completed by each student during the first week. Try creating
## one or more functions to re-use the code above.
def group_data(data, key_name):
    """Given data in dict form and a key, the function returns a grouped data set"""
grouped_data = defaultdict(list)
for e in data:
key = e[key_name]
grouped_data[key].append(e)
return grouped_data
engagement_by_account = group_data(paid_engagement_in_first_week, "account_key")
def sum_grouped_data(data, field_name):
    """Given data in dict form and a field name, the function returns the sum of the field name per key"""
summed_data = {}
for key, values in data.items():
total = 0
for value in values:
total += value[field_name]
summed_data[key] = total
return summed_data
total_lessons_per_account = sum_grouped_data(engagement_by_account, "lessons_completed")
def describe_data(data):
    """Given a dataset the function prints mean, std. deviation, min and max, and plots a histogram"""
print(
"Mean: %f" % np.mean(data),
"Standard deviation: %f" % np.std(data),
"Min: %f" % np.min(data),
"Max: %f" % np.max(data))
plt.hist(data)
describe_data(list(total_lessons_per_account.values()))
Explanation: Lessons Completed in First Week
End of explanation
######################################
# 10 #
######################################
## Find the mean, standard deviation, minimum, and maximum for the number of
## days each student visits the classroom during the first week.
for el in paid_engagement_in_first_week:
if el["num_courses_visited"] > 0:
el["has_visited"] = 1
else:
el["has_visited"] = 0
engagement_by_account = group_data(paid_engagement_in_first_week, "account_key")
total_visits_per_day_per_account = sum_grouped_data(engagement_by_account, "has_visited")
describe_data(list(total_visits_per_day_per_account.values()))
Explanation: Number of Visits in First Week
End of explanation
######################################
# 11 #
######################################
## Create two lists of engagement data for paid students in the first week.
## The first list should contain data for students who eventually pass the
## subway project, and the second list should contain data for students
## who do not.
subway_project_lesson_keys = ['746169184', '3176718735']
passing_engagement = []
non_passing_engagement = []
# loop over project submission data
for el in paid_submissions:
# check if project submission account key is in engagement data
if el["account_key"] in paid_engagement:
print(e["account_key"])
# check if lesson key is in subway_project_lesson key
if el["lesson_key"] in subway_project_lesson_keys:
print(e["lesson_key"])
# check if assigned_rating is PASSED or DISTINCTION
if el["assigned_rating"] in ["PASSED", "DISTINCTION"]:
print(e["assigned_rating"])
# if so, add record to passing_engagement list
passing_engagement.append(el)
# else add record to non_passing_engagement list
else:
non_passing_engagement.append(el)
print("Passing: ", len(passing_engagement), "Not passing: ", len(non_passing_engagement))
subway_project_lesson_keys = ['746169184', '3176718735']
pass_subway_project = set()
for el in paid_submissions:
if ((el["lesson_key"] in subway_project_lesson_keys) and
(el["assigned_rating"] == 'PASSED' or el["assigned_rating"] == 'DISTINCTION')):
pass_subway_project.add(el['account_key'])
len(pass_subway_project)
passing_engagement = []
non_passing_engagement = []
for el in paid_engagement_in_first_week:
if el['account_key'] in pass_subway_project:
passing_engagement.append(el)
else:
non_passing_engagement.append(el)
print(len(passing_engagement))
print(len(non_passing_engagement))
Explanation: Splitting out Passing Students
End of explanation
######################################
# 12 #
######################################
## Compute some metrics you're interested in and see how they differ for
## students who pass the subway project vs. students who don't. A good
## starting point would be the metrics we looked at earlier (minutes spent
## in the classroom, lessons completed, and days visited).
# prepare passing data
passing_engagement_grouped = group_data(passing_engagement, "account_key")
non_passing_engagement_grouped = group_data(non_passing_engagement, "account_key")
passing_minutes = sum_grouped_data(passing_engagement_grouped, "total_minutes_visited")
passing_lessons = sum_grouped_data(passing_engagement_grouped, "lessons_completed")
passing_days = sum_grouped_data(passing_engagement_grouped, "has_visited")
passing_projects = sum_grouped_data(passing_engagement_grouped, "projects_completed")
# prepare non passing data
non_passing_minutes = sum_grouped_data(non_passing_engagement_grouped, "total_minutes_visited")
non_passing_lessons = sum_grouped_data(non_passing_engagement_grouped, "lessons_completed")
non_passing_days = sum_grouped_data(non_passing_engagement_grouped, "has_visited")
non_passing_projects = sum_grouped_data(non_passing_engagement_grouped, "projects_completed")
# compare
print("Minutes", "\n")
describe_data(list(passing_minutes.values()))
describe_data(list(non_passing_minutes.values()))
print("\n", "Lessons", "\n")
describe_data(list(passing_lessons.values()))
describe_data(list(non_passing_lessons.values()))
print("\n", "Days", "\n")
describe_data(list(passing_days.values()))
describe_data(list(non_passing_days.values()))
print("\n", "Projects", "\n")
describe_data(list(passing_projects.values()))
describe_data(list(non_passing_projects.values()))
passing_engagement[0:2]
Explanation: Comparing the Two Student Groups
End of explanation
######################################
# 13 #
######################################
## Make histograms of the three metrics we looked at earlier for both
## students who passed the subway project and students who didn't. You
## might also want to make histograms of any other metrics you examined.
# setup
%matplotlib inline
import matplotlib.pyplot as plt
# minutes passing
plt.title("Passing students by minute")
plt.hist(list(passing_minutes.values()))
# minutes non-passing
plt.title("_NON_ Passing students by minute")
plt.hist(list(non_passing_minutes.values()))
# lessons
plt.title("Passing students by lessons")
plt.hist(list(passing_lessons.values()))
# lessons non-passing
plt.title("_NON_ Passing students by lessons")
plt.hist(list(non_passing_lessons.values()))
# days
plt.title("Passing students by days")
plt.hist(list(passing_days.values()))
# days non-passing
plt.title("_NON_ Passing students by days")
plt.hist(list(non_passing_days.values()))
Explanation: Making Histograms
End of explanation
######################################
# 14 #
######################################
## Make a more polished version of at least one of your visualizations
## from earlier. Try importing the seaborn library to make the visualization
## look better, adding axis labels and a title, and changing one or more
## arguments to the hist() function.
import seaborn as sns
# seaborn only
plt.title("_NON_ Passing students by days with S-E-A-B-O-R-N")
plt.xlabel("days spent in the classroom")
plt.ylabel("frequency")
plt.hist(list(non_passing_days.values()), bins=8)
Explanation: Improving Plots and Sharing Findings
End of explanation |
182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Data
Step2: Create An Index Using The Column 'pid' As The Unique ID | Python Code:
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
Explanation: Title: Create An Index For A Table
Slug: create_index_for_a_table
Summary: Create An Index For A Table in SQL.
Date: 2016-05-01 12:00
Category: SQL
Tags: Basics
Authors: Chris Albon
Note: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you are not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL, so your version might have some differences in syntax.
For more, check out Learning SQL by Alan Beaulieu.
End of explanation
%%sql
-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (632, 'Stacy Miller', 23, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (621, 'Betty Bob', NULL, 'F', 'Petaluma', 1);
INSERT INTO criminals VALUES (162, 'Jaden Ado', 49, 'M', NULL, 0);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (411, 'Bob Iton', NULL, 'M', 'San Francisco', 0);
Explanation: Create Data
End of explanation
%%sql
-- Create a index called uid
CREATE INDEX uid
-- For the table 'criminals' and the column 'pid'
ON criminals (pid)
Explanation: Create An Index Using The Column 'pid' As The Unique ID
End of explanation |
183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1.9 Finding Commonalities in Two Dictionaries
How do you find what two dictionaries have in common (the same keys or the same values)?
Step1: To find what two dictionaries have in common, simply perform set operations on the results returned by their keys() or items() methods
Step2: These operations can also be used to alter or filter dictionary contents <br> if you want to build a new dict from an existing one while excluding certain keys, a dict comprehension (shown below) does the job
Step3: Of course 'w' does not appear in a at all, so including it in the exclusion set does no harm
A dict is a mapping between a set of keys and values <br>the keys() method of a dict returns a keys-view object exposing its keys
Keys view (dict.keys())
Supports set operations such as union, intersection, and difference (so there is no need to convert the keys to a set first)
Items view (dict.items())
Supports set operations
Values view (dict.values())
Does not support set operations; if you need them, convert the values to a set first
1.10 Removing Duplicates from a Sequence while Preserving Order
How do you eliminate duplicate values in a sequence while keeping the order of the remaining items?
If the values in the sequence are hashable, this can be done with a set and a generator
Step4: The approach above only works when the elements of the sequence are hashable (i.e., immutable)
Step5: Here the key argument specifies a function that converts sequence elements into a hashable type
Step6: If you only want to remove duplicates, converting the sequence to a set is enough
Step7: However, that scrambles the original order of the elements in the result<br>using a generator makes the function more general; below it is applied to a file to remove duplicate lines
Step8: somefile itself contains 10 lines of zlxs, yet the result shows a single line<br>the key function mimics the behaviour of the key argument of the built-in sorted, min, and max functions, often with the help of a lambda
1.11 Naming a Slice
Your program has become an unreadable mess of hard-coded slice indices
Step9: Improve the readability and maintainability of the code
Step10: Here a is a slice object, and its start, stop, and step can be accessed individually
Step11: 1.12 Determining the Most Frequently Occurring Items in a Sequence
How do you find the most frequently occurring items in a sequence?
The collections.Counter class is designed for exactly this problem, and its most_common() method returns them directly
Step12: As input, a Counter accepts any sequence of hashable items; under the hood a Counter is a dict that maps elements to the number of times they occur
Step13: If there is another list of words and you want to add its frequencies to the existing counts
Step14: In the example above, morewords also contains the word eyes, so the for loop makes the Counter count it again; you can also use the update method
Step15: A little-known feature of Counter instances is that they can be combined with mathematical operations | Python Code:
a = {
'x':1,
'y':2,
'z':3
}
b = {
'w':10,
'x':11,
'y':2
}
# In a ::: x : 1 ,y : 2
# In b ::: x : 11,y : 2
Explanation: 1.9 Finding Commonalities in Two Dictionaries
How do you find what two dictionaries have in common (the same keys or the same values)?
End of explanation
# Find keys in common
kc = a.keys() & b.keys()
print('Keys common to both a and b:',kc)
# Find keys in a that are not in b
knb = a.keys() - b.keys()
print('Keys in a that are not in b:',knb)
# Find (key value) pair in common
kv = a.items() & b.items()
print('(key, value) pairs common to both a and b:',kv)
Explanation: To find what two dictionaries have in common, simply perform set operations on the results returned by their keys() or items() methods
End of explanation
# Make a new dictionary with certain keys remove
c = {key : a[key] for key in a.keys() - {'z','w'}}
# c excludes the entries for keys 'z' and 'w'
print(c)
Explanation: These operations can also be used to alter or filter dictionary contents <br> if you want to build a new dict from an existing one while excluding certain keys, a dict comprehension (shown below) does the job
End of explanation
def dedupe(items):
seen = set()
for item in items:
if item not in seen:
yield item
seen.add(item)
a = [1,3,5,6,7,8,1,5,1,10]
list(dedupe(a))
len(a)- len(_)
Explanation: Of course 'w' does not appear in a at all, so including it in the exclusion set does no harm
A dict is a mapping between a set of keys and values <br>the keys() method of a dict returns a keys-view object exposing its keys
Keys view (dict.keys())
Supports set operations such as union, intersection, and difference (so there is no need to convert the keys to a set first)
Items view (dict.items())
Supports set operations
Values view (dict.values())
Does not support set operations; if you need them, convert the values to a set first
1.10 Removing Duplicates from a Sequence while Preserving Order
How do you eliminate duplicate values in a sequence while keeping the order of the remaining items?
If the values in the sequence are hashable, this can be done with a set and a generator
End of explanation
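Following up on the note above that a values view does not support set operations directly, a quick illustration with two throwaway dicts (da and db, mirroring a and b from section 1.9) — new names are used here so nothing later in the notebook is clobbered:
da = {'x': 1, 'y': 2, 'z': 3}
db = {'w': 10, 'x': 11, 'y': 2}
# convert the values views to sets explicitly before intersecting
common_values = set(da.values()) & set(db.values())
print(common_values)  # {2}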
def dedupe2(items,key=None):
seen = set()
for item in items:
val = item if key is None else key(item)
if val not in seen:
yield item
seen.add(val)
Explanation: The approach above only works when the elements of the sequence are hashable (i.e., immutable)
End of explanation
b = [{'x':1,'y':2},{'x':1,'y':3},{'x':1,'y':2},{'x':1,'y':4}]
bd = list(dedupe2(b,key=lambda d: (d['x'],d['y'])))
bd2 = list(dedupe2(b,key=lambda d: (d['x'])))
print("Deduplicated on the (d['x'], d['y']) key:")
bd
# only these 3 remain under this key; {'x':1,'y':2} appeared twice, so its duplicate was removed
print("Deduplicated on the d['x'] key:")
bd2
# under this key every element of b has the same 'x': 1, so only the first occurrence is kept
Explanation: Here the key argument specifies a function that converts sequence elements into a hashable type
End of explanation
ra = [1,1,1,1,1,1,1]
set(ra)
Explanation: If you only want to remove duplicates, converting the sequence to a set is enough
End of explanation
with open('somefile.txt','r') as f:
for line in dedupe2(f):
print(line)
Explanation: However, that scrambles the original order of the elements in the result<br>using a generator makes the function more general; below it is applied to a file to remove duplicate lines
End of explanation
ns = ''
for nn in range(10):
ns += str(nn)
ns = ns*3
Explanation: somefile itself contains 10 lines of zlxs, yet the result shows a single line<br>the key function mimics the behaviour of the key argument of the built-in sorted, min, and max functions, often with the help of a lambda
1.11 Naming a Slice
Your program has become an unreadable mess of hard-coded slice indices
End of explanation
items = [1,2,3,4,5,6,7,8,9]
a = slice(2,4)
items[2:4]
items[a]
items[a] =[10,11]
items
del items[a]
items
Explanation: Improve the readability and maintainability of the code
End of explanation
s = slice(5,10,2)
print('s start:',s.start)
print('s stop:',s.stop)
print('s step:',s.step)
st = 'helloworld'
a.indices(len(s))
Explanation: Here a is a slice object, and its start, stop, and step can be accessed individually
End of explanation
words = [
'look', 'into', 'my', 'eyes', 'look', 'into', 'my', 'eyes',
'the', 'eyes', 'the', 'eyes', 'the', 'eyes', 'not', 'around', 'the',
'eyes', "don't", 'look', 'around', 'the', 'eyes', 'look', 'into',
'my', 'eyes', "you're", 'under'
]
# find the most frequently occurring words in the words list
from collections import Counter
word_counts = Counter(words)
# the three most frequent words
top_three = word_counts.most_common(3)
print(top_three)
Explanation: 1.12 Determining the Most Frequently Occurring Items in a Sequence
How do you find the most frequently occurring items in a sequence?
The collections.Counter class is designed for exactly this problem, and its most_common() method returns them directly
End of explanation
word_counts['not']
word_counts['eyes']
Explanation: As input, a Counter accepts any sequence of hashable items; under the hood a Counter is a dict that maps elements to the number of times they occur
End of explanation
morewords = ['why','are','you','looking','in','eyes']
for word in morewords:
word_counts[word] += 1
word_counts['eyes']
Explanation: If there is another list of words and you want to add its frequencies to the existing counts
End of explanation
word_counts.update(morewords)
word_counts['eyes']
Explanation: In the example above, morewords also contains the word eyes, so the for loop makes the Counter count it again; you can also use the update method
End of explanation
a = Counter(words)
b = Counter(morewords)
a
b
# combine counts
# merge
c = a + b
print(c)
# Subtract counts
# subtraction
d = a -b
print(d)
Explanation: A little-known feature of Counter instances is that they can be combined with mathematical operations
End of explanation |
184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
Step1: Distribution of Passengers
Gender - Analysis | Graph
<a id="Gender - Analysis | Graph"></a>
Distribution of Genders in Passenger Population
Step2: Distribution of Genders in pClass populations
Step3: Age - Analysis | Graph
<a id="Age - Analysis | Graph"></a>
Step4: Distribution of Age in passenger population
Step5: Distribution of Age in pClass population
Step6: Distribution of passengers into adult and children age groups (Child = less than 21 years of age)
Reference
Step7: Distribution of child and adult (male and female) age groups by age
Step8: Distribution of child and adult (male and female) by pClass
Step9: Alone or Family
<a id="Alone or Family"></a>
Step10: Locational
Step11: Locational
Step12: Survival Graph Comparison
Survival Count (Overall)
<a id="Survival Count (Overall)"></a>
Step13: Survival by Gender
<a id="Survival by Gender"></a>
Step14: Survival by Pclass
<a id="Survival by Pclass"></a>
Step15: Survival Pclass and Gender
<a id="Survival Pclass and Gender"></a>
Step16: Survival By Pclass and Age Group (Adult (Male / Female) / Child)
<a id="Survival By Pclass and Age Group (Adult (Male / Female) / Child)"></a>
Step17: Survival by Age Distribution
<a id="Survival by Age Distribution"></a>
Step18: Survival by Alone or with Family
<a id="Survival by Alone or with Family"></a>
Step19: Survival pClass by Age Distribution
<a id="Survival pClass by Age Distribution"></a>
Step20: Survival Gender by Age Distribution
<a id="Survival Gender by Age Distribution"></a>
Step21: Process CSV - Generation of Estimation Survival Table | Python Code:
# Imports for pandas, and numpy
import numpy as np
import pandas as pd
# imports for seaborn and matplotlib to allow graphing
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid")
%matplotlib inline
# import Titanic CSV - NOTE: adjust file path as necessary
dTitTrain_DF = pd.read_csv('train.csv')
# Clearing of columns not necessary for statistical analysis
dTitTrain_DF = dTitTrain_DF.drop(["Name", "Ticket"], axis=1)
dTitTrain_DF.info()
dTitTrain_DF.describe()
titAge = dTitTrain_DF.dropna(subset=['Age'])
# Distribution gender (adult and male)
ACmenData = dTitTrain_DF[dTitTrain_DF.Sex == 'male']
ACwomenData = dTitTrain_DF[dTitTrain_DF.Sex == 'female']
ACmenDataCount = float(ACmenData['Sex'].count())
ACwomenDataCount = float(ACwomenData['Sex'].count())
# Gender Specific DFs
AmenData = dTitTrain_DF[dTitTrain_DF.Sex == 'male'][dTitTrain_DF.Age >= 21]
AwomenData = dTitTrain_DF[dTitTrain_DF.Sex == 'female'][dTitTrain_DF.Age >= 21]
AmenDataCount = float(AmenData['Sex'].count())
AwomenDataCount = float(AwomenData['Sex'].count())
# print(menDataCount)
# print(womenDataCount)
# Age Specific Groups
adultData = titAge[titAge.Age >= 21]
childData = titAge[titAge.Age < 21]
adultDataCount = float(adultData['Age'].count())
childDataCount = float(childData['Age'].count())
#print(childDataCount)
#print(adultDataCount)
# Pclass
titClass1 = dTitTrain_DF[dTitTrain_DF.Pclass == 1]
titClass2 = dTitTrain_DF[dTitTrain_DF.Pclass == 2]
titClass3 = dTitTrain_DF[dTitTrain_DF.Pclass == 3]
# Alone or Family
dTitTrain_DF['SoloOrFamily'] = dTitTrain_DF.SibSp + dTitTrain_DF.Parch
dTitTrain_DF['SoloOrFamily'].loc[dTitTrain_DF['SoloOrFamily'] > 0] = 'Family'
dTitTrain_DF['SoloOrFamily'].loc[dTitTrain_DF['SoloOrFamily'] == 0] = 'Alone'
# Survivor Column (Yes or no)
dTitTrain_DF['Survivor']= dTitTrain_DF.Survived.map({0:'No', 1:'Yes'})
titCabin = dTitTrain_DF.dropna(subset=['Cabin'])
# Locational Data Groups
titDecks = titCabin['Cabin']
def deckGrab(tDK, cabLetter):
deckLevels = []
for level in tDK:
deckLevels.append(level[0])
TDF = pd.DataFrame(deckLevels)
TDF.columns = ['Cabin']
TDF = TDF[TDF.Cabin == cabLetter]
return TDF
def deckCount(tDK, cabLetter):
TDF = deckGrab(tDK, cabLetter)
return TDF[TDF.Cabin == cabLetter].count()['Cabin']
# print(deckCount(titDecks, "A"))
# print(deckCount(titDecks, "B"))
# print(deckCount(titDecks, "C"))
# print(deckCount(titDecks, "D"))
# print(deckCount(titDecks, "E"))
# print(deckCount(titDecks, "F"))
# print(deckCount(titDecks, "G"))
# embarked
titCherbourg = dTitTrain_DF[dTitTrain_DF.Embarked == 'C']
titQueenstown = dTitTrain_DF[dTitTrain_DF.Embarked == 'Q']
titSouthampton = dTitTrain_DF[dTitTrain_DF.Embarked == 'S']
Explanation: Table of Contents:
Distribution of Passengers:
<a href="#Data Organization (Data Wrangling)">Data Organization (Data Wrangling)</a>
<a href="#Gender - Analysis | Graph">Gender</a>
<a href="#Age - Analysis | Graph">Age</a>
<a href="#Alone or Family">Alone or Family</a>
<a href="#Locational: Cabin Analysis | Graph">Locational (Cabin)</a>
<a href="#Locational: Disembark Analysis | Graph">Locational (Disembark)</a>
Survival Graph Comparison:
<a href="#Survival Count (Overall)">Survival Count (Overall)</a>
<a href="#Survival by Gender">Survival By Gender</a>
<a href="#Survival by Pclass">Survival By Pclass</a>
<a href="#Survival Pclass and Gender">Survival By Pclass and Gender</a>
<a href="#Survival By Pclass and Age Group (Adult (Male / Female) / Child)">Survival By Pclass and Age Group (Adult (Male / Female) / Child)</a>
<a href="#Survival by Age Distribution">Survival by Age Distribution</a>
<a href="#Survival by Alone or with Family">Survival by Alone or with Family</a>
<a href="#Survival pClass by Age Distribution">Survival pClass by Age Distribution</a>
<a href="#Survival Gender by Age Distribution">Survival Gender by Age Distribution</a>
Process CSV - Generation of Estimation Survival Table
In Process
<a id="Data Organization (Data Wrangling)"></a>
Data Organization (Data Wrangling)
End of explanation
printG = "Men account for " + str(ACmenDataCount) + " and " + "Women account for " + str(ACwomenDataCount) + " (Total Passengers: " + str(dTitTrain_DF.count()['Age']) + ")"
print(printG)
gSSC = sns.factorplot('Sex', data=dTitTrain_DF, kind='count')
gSSC.despine(left=True)
gSSC.set_ylabels("count of passengers")
Explanation: Distribution of Passengers
Gender - Analysis | Graph
<a id="Gender - Analysis | Graph"></a>
Distribution of Genders in Passenger Population
End of explanation
gGCSC= sns.factorplot('Pclass',order=[1,2,3], data=dTitTrain_DF, hue='Sex', kind='count')
gGCSC.despine(left=True)
gGCSC.set_ylabels("count of passengers")
Explanation: Distribution of Genders in pClass populations
End of explanation
printA = "Youngest Passenger in the passenger list was " + str(titAge['Age'].min()) + " years of age." \
+ "\n" + "Oldest Passenger in the passenger list was " + str(titAge['Age'].max()) + " years of age." \
+ "\n" + "Mean of Passengers ages in the passenger list is " + str(titAge['Age'].mean()) + " years of age."
print(printA)
Explanation: Age - Analysis | Graph
<a id="Age - Analysis | Graph"></a>
End of explanation
titAge['Age'].hist(bins=80)
Explanation: Distribution of Age in passenger population
End of explanation
gCPS = sns.FacetGrid(titAge,hue='Pclass', aspect=4, hue_order=[1,2,3])
gCPS.map(sns.kdeplot,'Age', shade=True)
gCPS.set(xlim=(0,titAge['Age'].max()))
gCPS.add_legend()
Explanation: Distribution of Age in pClass population
End of explanation
# splits passengers into 3 categories (male of female if considered adult, and child if below 21 of age)
def minorOrAdult(passenger):
age, sex = passenger
if age < 21:
return 'child'
else:
return sex
# adds new column to dataframe that distinguishes a passenger as a child or an adult
dTitTrain_DF['PersonStatus'] = dTitTrain_DF[['Age', 'Sex']].apply(minorOrAdult, axis=1)
dTitTrain_DF['PersonStatus'].value_counts()
Explanation: Distribution of passengers into adult and children age groups (Child = less than 21 years of age)
Reference:
Source: http://history.stackexchange.com/questions/17481/what-was-the-age-of-majority-in-1900-united-states
By the common law the age of majority is fixed at twenty-one years for both sexes, and, in the absence of any statute to >the contrary, every person under that age, whether male or female, is an infant. (21)
-- The American and English Encyclopedia of Law, Garland and McGeehee, 1900
By the common law, every person is, technically, an infant, until he is twenty-one years old; and, in legal presumption, is >not of sufficient discretion to contract an obligation at an earlier age.
-- Institutes of the Lawes of England by Coke (1628-1644). The laws on infants are at 171b.
End of explanation
gACPS = sns.FacetGrid(dTitTrain_DF, hue='PersonStatus', aspect=4, hue_order=['child', 'male', 'female'])
gACPS.map(sns.kdeplot,'Age', shade=True)
gACPS.set(xlim=(0,titAge['Age'].max()))
gACPS.add_legend()
Explanation: Distribution of child and adult (male and female) age groups by age
End of explanation
gGAC= sns.factorplot('Pclass', order=[1,2,3], data=dTitTrain_DF, hue='PersonStatus', kind='count',hue_order=['child','male','female'])
gGAC.despine(left=True)
gGAC.set_ylabels("count of passengers")
Explanation: Distribution of child and adult (male and female) by pClass
End of explanation
sns.factorplot('SoloOrFamily', data=dTitTrain_DF, kind='count')
print("Alone: " + str(dTitTrain_DF[dTitTrain_DF.SoloOrFamily == "Alone"].count()['SoloOrFamily']))
print("Family: " + str(dTitTrain_DF[dTitTrain_DF.SoloOrFamily == "Family"].count()['SoloOrFamily']))
Explanation: Alone or Family
<a id="Alone or Family"></a>
End of explanation
def prepareDeckGraph(titDecksDF):
deckLevels = []
for level in titDecksDF:
deckLevels.append(level[0])
T_DF = pd.DataFrame(deckLevels)
T_DF.columns = ['Cabin']
T_DF = T_DF[T_DF.Cabin != 'T']
return T_DF
gTD_DF = prepareDeckGraph(titDecks)
sns.factorplot('Cabin', order=['A','B','C','D','E','F','G'], data=gTD_DF, kind='count')
print("A: " + str(deckCount(titDecks, "A")))
print("B: " + str(deckCount(titDecks, "B")))
print("C: " + str(deckCount(titDecks, "C")))
print("D: " + str(deckCount(titDecks, "D")))
print("E: " + str(deckCount(titDecks, "E")))
print("F: " + str(deckCount(titDecks, "F")))
print("G: " + str(deckCount(titDecks, "G")))
Explanation: Locational: Cabin Analysis | Graph
<a id="Locational: Cabin Analysis | Graph"></a>
End of explanation
sns.factorplot('Embarked', order=['C','Q','S'], data=dTitTrain_DF, hue='Pclass', kind='count', hue_order=[1,2,3])
# titCherbourg
# titQueenstown
# titSouthampton
print("Total:")
print("Cherbourg: " + str(titCherbourg.count()['Embarked']))
print("Queenstown: " + str(titQueenstown.count()['Embarked']))
print("Southampton: " + str(titSouthampton.count()['Embarked']))
print("")
print("Cherbourg: ")
print("Pclass 1 - " + str(titCherbourg[titCherbourg.Pclass == 1].count()['Embarked']))
print("Pclass 2 - " + str(titCherbourg[titCherbourg.Pclass == 2].count()['Embarked']))
print("Pclass 3 - " + str(titCherbourg[titCherbourg.Pclass == 3].count()['Embarked']))
print("")
print("Queenstown: ")
print("Pclass 1 - " + str(titQueenstown[titQueenstown.Pclass == 1].count()['Embarked']))
print("Pclass 2 - " + str(titQueenstown[titQueenstown.Pclass == 2].count()['Embarked']))
print("Pclass 3 - " + str(titQueenstown[titQueenstown.Pclass == 3].count()['Embarked']))
print("")
print("Southampton: ")
print("Pclass 1 - " + str(titSouthampton[titSouthampton.Pclass == 1].count()['Embarked']))
print("Pclass 2 - " + str(titSouthampton[titSouthampton.Pclass == 2].count()['Embarked']))
print("Pclass 3 - " + str(titSouthampton[titSouthampton.Pclass == 3].count()['Embarked']))
Explanation: Locational: Disembark Analysis | Graph
<a id="Locational: Disembark Analysis | Graph"></a>
End of explanation
# Survivors Overall
gSOA = sns.factorplot('Survivor', data=dTitTrain_DF, kind='count')
gSOA.despine(left=True)
gSOA.set_ylabels("count of passengers")
print("Survivor: " + str(dTitTrain_DF[dTitTrain_DF.Survivor == "Yes"].count()['Survivor']))
print("Non-Survivor: " + str(dTitTrain_DF[dTitTrain_DF.Survivor == "No"].count()['Survivor']))
Explanation: Survival Graph Comparison
Survival Count (Overall)
<a id="Survival Count (Overall)"></a>
End of explanation
# Series probability - access probability of survived in men and women
menProb = ACmenData.groupby('Sex').Survived.mean()
womenProb = ACwomenData.groupby('Sex').Survived.mean()
menPercent = menProb[0]*100
womenPercent = womenProb[0]*100
print("Men Survivalbility: ")
print(menProb[0])
print("Women Survivalbility: ")
print(womenProb[0])
gSSP = sns.factorplot("Sex", "Survived", data=dTitTrain_DF, kind="bar", size=5)
gSSP.despine(left=True)
gSSP.set_ylabels("survival probability")
Explanation: Survival by Gender
<a id="Survival by Gender"></a>
End of explanation
# Determines the probability of survival for a given Pclass
def define_pClassProb(dataFrameIN, numClass):
classEntries = dataFrameIN[dataFrameIN.Pclass == numClass]
sClassEntries = classEntries[classEntries.Survived == 1]
cClassEntries = (classEntries.count(numeric_only=True)['Pclass']).astype(float)
cSClassEntries = (sClassEntries.count(numeric_only=True)['Pclass']).astype(float)
return (cSClassEntries/cClassEntries)
print("Class 1 Survivality: ")
print(define_pClassProb(dTitTrain_DF, 1))
print("Class 2 Survivality: ")
print(define_pClassProb(dTitTrain_DF, 2))
print("Class 3 Survivality: ")
print(define_pClassProb(dTitTrain_DF, 3))
gCS = sns.factorplot("Pclass", "Survived",order=[1,2,3],data=dTitTrain_DF, kind="bar", size=5)
gCS.despine(left=True)
gCS.set_ylabels("survival probability")
print("Class 1 Survivality: ")
print(define_pClassProb(dTitTrain_DF, 1))
print("Class 2 Survivality: ")
print(define_pClassProb(dTitTrain_DF, 2))
print("Class 3 Survivality: ")
print(define_pClassProb(dTitTrain_DF, 3))
sns.factorplot("Pclass", "Survived",order=[1,2,3], data=dTitTrain_DF, kind='point')
Explanation: Survival by Pclass
<a id="Survival by Pclass"></a>
End of explanation
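As a quick cross-check (a pandas idiom, not part of the original workflow), the same per-class survival rates can be read off in one line; the numbers should match define_pClassProb above.
# Hypothetical cross-check: mean of the 0/1 Survived column per Pclass
dTitTrain_DF.groupby('Pclass')['Survived'].mean()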
# determines the probability of survival for genders in a given Pclass
def define_pClassProbSex(dataFrameIN, numClass, sex):
classEntries = dataFrameIN[dataFrameIN.Pclass == numClass][dataFrameIN.Sex == sex]
sClassEntries = classEntries[classEntries.Survived == 1]
cClassEntries = (classEntries.count(numeric_only=True)['Pclass']).astype(float)
cSClassEntries = (sClassEntries.count(numeric_only=True)['Pclass']).astype(float)
return (cSClassEntries/cClassEntries)
print("Class 1 Survivality(MALE): ")
print(define_pClassProbSex(dTitTrain_DF, 1, 'male'))
print("Class 1 Survivality(FEMALE): ")
print(define_pClassProbSex(dTitTrain_DF, 1, 'female'))
print("Class 2 Survivality(MALE): ")
print(define_pClassProbSex(dTitTrain_DF, 2, 'male'))
print("Class 2 Survivality(FEMALE): ")
print(define_pClassProbSex(dTitTrain_DF, 2, 'female'))
print("Class 3 Survivality(MALE): ")
print(define_pClassProbSex(dTitTrain_DF, 3, 'male'))
print("Class 3 Survivality(FEMALE): ")
print(define_pClassProbSex(dTitTrain_DF, 3, 'female'))
gGCSP = sns.factorplot("Pclass", "Survived",order=[1,2,3],data=dTitTrain_DF,hue='Sex', kind='bar')
gGCSP.despine(left=True)
gGCSP.set_ylabels("survival probability")
sns.factorplot("Pclass", "Survived", hue='Sex',order=[1,2,3], data=dTitTrain_DF, kind='point')
Explanation: Survival Pclass and Gender
<a id="Survival Pclass and Gender"></a>
End of explanation
#Determine probability of survival of children in a given Pclass
def define_pClassChildProb(dataFrameIN, numClass):
ChildDF = dataFrameIN[dataFrameIN.Pclass == numClass][dataFrameIN.PersonStatus == 'child']
ChildSurvived = dataFrameIN[dataFrameIN.Pclass == numClass][dataFrameIN.PersonStatus == 'child'][dataFrameIN.Survivor == 'Yes']
totalCChild = ChildDF.count()['PassengerId'].astype(float)
CChildSurvived = ChildSurvived.count()['PassengerId'].astype(float)
return CChildSurvived/totalCChild
def define_pClassAdultProb(dataFrameIN, numClass, sex):
AdultDF = dataFrameIN[dataFrameIN.Pclass == numClass][dataFrameIN.PersonStatus == sex]
AdultSurvived = dataFrameIN[dataFrameIN.Pclass == numClass][dataFrameIN.PersonStatus == sex][dataFrameIN.Survivor == 'Yes']
totalCAdult = AdultDF.count()['PassengerId'].astype(float)
CAdultSurvived = AdultSurvived.count()['PassengerId'].astype(float)
return CAdultSurvived/totalCAdult
print("PClass 1 Survival Child: ")
print(define_pClassChildProb(dTitTrain_DF, 1))
print("PClass 1 Survival Female: ")
print(define_pClassAdultProb(dTitTrain_DF, 1, 'female'))
print("PClass 1 Survival Male: ")
print(define_pClassAdultProb(dTitTrain_DF, 1, 'male'))
print("-----------")
print("PClass 2 Survival Child: ")
print(define_pClassChildProb(dTitTrain_DF, 2))
print("PClass 2 Survival Female: ")
print(define_pClassAdultProb(dTitTrain_DF, 2, 'female'))
print("PClass 2 Survival Male: ")
print(define_pClassAdultProb(dTitTrain_DF, 2, 'male'))
print("-----------")
print("PClass 3 Survival Child: ")
print(define_pClassChildProb(dTitTrain_DF, 3))
print("PClass 3 Survival Female: ")
print(define_pClassAdultProb(dTitTrain_DF, 3, 'female'))
print("PClass 3 Survival Male: ")
print(define_pClassAdultProb(dTitTrain_DF, 3, 'male'))
sns.factorplot("Pclass", "Survived", hue='PersonStatus',order=[1,2,3], data=dTitTrain_DF, kind='point')
Explanation: Survival By Pclass and Age Group (Adult (Male / Female) / Child)
<a id="Survival By Pclass and Age Group (Adult (Male / Female) / Child)"></a>
End of explanation
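A hedged shortcut for the same breakdown (assuming the PersonStatus column built earlier) is a pivot table; it should reproduce the child/female/male rates printed above.
# Hypothetical one-liner: survival rate by Pclass and PersonStatus
dTitTrain_DF.pivot_table(values='Survived', index='Pclass', columns='PersonStatus', aggfunc='mean')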
#sns.lmplot('Age', 'Survived', data=dTitTrain_DF)
pSBA = sns.boxplot(data=dTitTrain_DF, x='Survived', y='Age')
pSBA.set(title='Age Distribution by Survival',
xlabel = 'Survival',
ylabel = 'Age Distrobution',
xticklabels = ['Died', 'Survived'])
Explanation: Survival by Age Distribution
<a id="Survival by Age Distribution"></a>
End of explanation
# Using the SoloOrFamily column created earlier in the passenger distributions section, create separate dataframes for
# passengers traveling alone and passengers traveling with family
familyPass = dTitTrain_DF[dTitTrain_DF['SoloOrFamily'] == "Family"]
alonePass = dTitTrain_DF[dTitTrain_DF['SoloOrFamily'] == "Alone"]
# Creates a list of surviving family and alone passengers
AFamilyPass = familyPass[familyPass.Survivor == "Yes"]
AAlonePass = alonePass[alonePass.Survivor == "Yes"]
# Determines the probability of survival for passengers that traveled alone and with family
pAF = float(AFamilyPass['SoloOrFamily'].count()) / float(familyPass['SoloOrFamily'].count())
pAA = float(AAlonePass['SoloOrFamily'].count()) / float(alonePass['SoloOrFamily'].count())
print("Probability of Survival being with Family: ")
print(pAF)
print("")
print("Probability of Survival being alone: ")
print(pAA)
gSSP = sns.factorplot("SoloOrFamily", "Survived", data=dTitTrain_DF, kind="bar", size=5)
gSSP.despine(left=True)
gSSP.set_ylabels("survival probability")
Explanation: Survival by Alone or with Family
<a id="Survival by Alone or with Family"></a>
End of explanation
#sns.lmplot('Age', 'Survived',hue='Pclass', data=dTitanic_DF, hue_order=[1,2,3])
pACSB = sns.boxplot(data = dTitTrain_DF.dropna(subset = ['Age']).sort_values('Pclass'), x='Pclass', y='Age', hue='Survivor')
pACSB.set(title='Age by Class and Survival - Box Plot', xlabel='Pclass')
pACSB.legend(bbox_to_anchor=(1.05, .7), loc=2, title = 'Survived',borderaxespad=0.)
Explanation: Survival pClass by Age Distribution
<a id="Survival pClass by Age Distribution"></a>
End of explanation
#sns.lmplot('Age', 'Survived', hue='Sex' ,data=dTitanic_DF)
pAGSB = sns.boxplot(data=dTitTrain_DF.dropna(subset = ['Age']), x= 'Sex', y= 'Age', hue='Survivor')
pAGSB.set(title='Age by Gender and Survival - Box Plot')
pAGSB.legend(bbox_to_anchor=(1.05, .7), loc=2, title = 'Survived',borderaxespad=0.)
Explanation: Survival Gender by Age Distribution
<a id="Survival Gender by Age Distribution"></a>
End of explanation
# Determining better odds which will be compared to test group (First comparison - Pclass and age group)
import csv
# # Manual - Age Group and gender adult with highest above 49%
# print(define_pClassChildProb(dTitTrain_DF, 1))
# print(define_pClassAdultProb(dTitTrain_DF, 1, 'female'))
# print(define_pClassChildProb(dTitTrain_DF, 2))
# print(define_pClassAdultProb(dTitTrain_DF, 2, 'female'))
# print(define_pClassAdultProb(dTitTrain_DF, 3, 'female'))
# #sibsp and parch
test_file = open('test.csv', 'r')
test_file_object = csv.reader(test_file)
header = next(test_file_object)
prediction_file = open("genderPclassbasedmodel.csv", "w")
prediction_file_object = csv.writer(prediction_file)
prediction_file_object.writerow(["PassengerId", "Survived"])
for row in test_file_object: # For each row in test.csv
    weight = 0.0
    # Pclass is read from the csv as a string, so cast before comparing
    if int(row[1]) == 1:
        weight = weight + 9
    elif int(row[1]) == 2:
        weight = weight + 8
    else:
        weight = weight + 5
    if row[3] == 'female':
        weight = weight + 8
    else:
        weight = weight + 2
    # Age can be blank in test.csv; treat missing ages as adults
    if row[4] != '' and float(row[4]) < 21:
        # child
        weight = weight + 6
    else:
        # adult
        weight = weight + 5
    # SibSp + Parch gives the number of accompanying family members
    aFam = int(row[5]) + int(row[6])
    if aFam > 0:
        weight = weight + 5
    else:
        weight = weight + 3
    weightScore = weight/40.0
    print(str(weightScore))
    if weightScore >= .5:
        prediction_file_object.writerow([row[0],'1'])
    else:
        prediction_file_object.writerow([row[0],'0'])
    #prediction_file_object.writerow([row[0],'1'])
    #prediction_file_object.writerow([row[0],'0'])
test_file.close()
prediction_file.close()
Explanation: Process CSV - Generation of Estimation Survival Table
End of explanation |
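As a rough sketch of a less manual baseline (the feature choice and LogisticRegression are assumptions, not the author's method), the training frame could also be handed to scikit-learn and its in-sample accuracy compared against the hand-tuned weights:
# Minimal sketch: encode Sex, impute Age, fit a logistic regression on the training data
from sklearn.linear_model import LogisticRegression
feats = dTitTrain_DF[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch']].copy()
feats['Sex'] = (feats['Sex'] == 'female').astype(int)
feats['Age'] = feats['Age'].fillna(feats['Age'].median())
clf = LogisticRegression()
clf.fit(feats, dTitTrain_DF['Survived'])
print(clf.score(feats, dTitTrain_DF['Survived']))  # in-sample accuracy only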
185 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Clean Up
Let's clean this up just a little!
Step2: Step 2
Step3: Decomposition
ETS decomposition allows us to see the individual parts!
Step5: Testing for Stationarity
We can use the Augmented Dickey-Fuller unit root test.
In statistics and econometrics, an augmented Dickey–Fuller test (ADF) tests the null hypothesis that a unit root is present in a time series sample. The alternative hypothesis is different depending on which version of the test is used, but is usually stationarity or trend-stationarity.
Basically, we are trying to decide whether to accept the Null Hypothesis H0 (that the time series has a unit root, indicating it is non-stationary) or reject H0 and go with the Alternative Hypothesis (that the time series has no unit root and is stationary).
We end up deciding this based on the p-value return.
A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.
Let's run the Augmented Dickey-Fuller test on our data
Step6: Important Note!
We have now realized that our data is seasonal (it is also pretty obvious from the plot itself). This means we need to use Seasonal ARIMA on our model. If our data was not seasonal, it means we could use just ARIMA on it. We will take this into account when differencing our data! Typically financial stock data won't be seasonal, but that is kind of the point of this section, to show you common methods, that won't work well on stock finance data!
Differencing
The first difference of a time series is the series of changes from one period to the next. We can do this easily with pandas. You can continue to take the second difference, third difference, and so on until your data is stationary.
First Difference
Step7: Second Difference
Step8: Seasonal Difference
Step9: Seasonal First Difference
Step10: Autocorrelation and Partial Autocorrelation Plots
An autocorrelation plot (also known as a Correlogram ) shows the correlation of the series with itself, lagged by x time units. So the y axis is the correlation and the x axis is the number of time units of lag.
So imagine taking your time series of length T, copying it, and deleting the first observation of copy #1 and the last observation of copy #2. Now you have two series of length T−1 for which you calculate a correlation coefficient. This is the value of of the vertical axis at x=1x=1 in your plots. It represents the correlation of the series lagged by one time unit. You go on and do this for all possible time lags x and this defines the plot.
You will run these plots on your differenced/stationary data. There is a lot of great information for identifying and interpreting ACF and PACF here and here.
Autocorrelation Interpretation
The actual interpretation and how it relates to ARIMA models can get a bit complicated, but there are some basic common methods we can use for the ARIMA model. Our main priority here is to try to figure out whether we will use the AR or MA components for the ARIMA model (or both!) as well as how many lags we should use. In general you would use either AR or MA, using both is less common.
If the autocorrelation plot shows positive autocorrelation at the first lag (lag-1), then it suggests to use the AR terms in relation to the lag
If the autocorrelation plot shows negative autocorrelation at the first lag, then it suggests using MA terms.
<font color='red'> Important Note! </font>
Here we will be showing running the ACF and PACF on multiple differenced data sets that have been made stationary in different ways, typically you would just choose a single stationary data set and continue all the way through with that.
The reason we use two here is to show you the two typical types of behaviour you would see when using ACF.
Step11: Pandas also has this functionality built in, but only for ACF, not PACF. So I recommend using statsmodels, as ACF and PACF is more core to its functionality than it is to pandas' functionality.
Step12: Partial Autocorrelation
In general, a partial correlation is a conditional correlation.
It is the correlation between two variables under the assumption that we know and take into account the values of some other set of variables.
For instance, consider a regression context in which y = response variable and x1, x2, and x3 are predictor variables. The partial correlation between y and x3 is the correlation between the variables determined taking into account how both y and x3 are related to x1 and x2.
Formally, this is relationship is defined as
Step13: Interpretation
Typically a sharp drop after lag "k" suggests an AR-k model should be used. If there is a gradual decline, it suggests an MA model.
Final Thoughts on Autocorrelation and Partial Autocorrelation
Identification of an AR model is often best done with the PACF.
For an AR model, the theoretical PACF “shuts off” past the order of the model. The phrase “shuts off” means that in theory the partial autocorrelations are equal to 0 beyond that point. Put another way, the number of non-zero partial autocorrelations gives the order of the AR model. By the “order of the model” we mean the most extreme lag of x that is used as a predictor.
Identification of an MA model is often best done with the ACF rather than the PACF.
For an MA model, the theoretical PACF does not shut off, but instead tapers toward 0 in some manner. A clearer pattern for an MA model is in the ACF. The ACF will have non-zero autocorrelations only at lags involved in the model.
Final ACF and PACF Plots
We've run quite a few plots, so let's just quickly get our "final" ACF and PACF plots. These are the ones we will be referencing in the rest of the notebook below.
Step14: Using the Seasonal ARIMA model
Finally we can use our ARIMA model now that we have an understanding of our data!
Step15: p,d,q parameters
p
Step16: Prediction of Future Values
Firts we can get an idea of how well our model performs by just predicting for values that we actually already know
Step17: Forecasting
This requires more time periods, so let's create them with pandas onto our original dataframe! | Python Code:
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('monthly-milk-production-pounds-p.csv')
df.head()
df.tail()
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
<center>Copyright Pierian Data 2017</center>
<center>For more information, visit us at www.pieriandata.com</center>
Warning! This is a complicated topic! Remember that this is an optional notebook to go through and that to fully understand it you should read the supplemental links and watch the full explanatory walkthrough video. This notebook and the video lectures are not meant to be a full comprehensive overview of ARIMA, but instead a walkthrough of what you can use it for, so you can later understand why it may or may not be a good choice for Financial Stock Data.
ARIMA and Seasonal ARIMA
Autoregressive Integrated Moving Averages
The general process for ARIMA models is the following:
* Visualize the Time Series Data
* Make the time series data stationary
* Plot the Correlation and AutoCorrelation Charts
* Construct the ARIMA Model
* Use the model to make predictions
Let's go through these steps!
Step 1: Get the Data (and format it)
We will be using some data about monthly milk production, full details on it can be found here.
Its saved as a csv for you already, let's load it up:
End of explanation
df.columns = ['Month', 'Milk in pounds per cow']
df.head()
# Weird last value at bottom causing issues
df.drop(168,
axis = 0,
inplace = True)
df['Month'] = pd.to_datetime(df['Month'])
df.head()
df.set_index('Month',inplace=True)
df.head()
df.describe().transpose()
Explanation: Clean Up
Let's clean this up just a little!
End of explanation
df.plot()
timeseries = df['Milk in pounds per cow']
timeseries.rolling(12).mean().plot(label='12 Month Rolling Mean')
timeseries.rolling(12).std().plot(label='12 Month Rolling Std')
timeseries.plot()
plt.legend()
timeseries.rolling(12).mean().plot(label = '12 Month Rolling Mean')
timeseries.plot()
plt.legend()
Explanation: Step 2: Visualize the Data
Let's visualize this data with a few methods.
End of explanation
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(df['Milk in pounds per cow'],
freq = 12)
fig = plt.figure()
fig = decomposition.plot()
fig.set_size_inches(15, 8)
Explanation: Decomposition
ETS decomposition allows us to see the individual parts!
End of explanation
df.head()
from statsmodels.tsa.stattools import adfuller
result = adfuller(df['Milk in pounds per cow'])
print('Augmented Dickey-Fuller Test:')
labels = ['ADF Test Statistic',
'p-value',
'#Lags Used',
'Number of Observations Used']
for value,label in zip(result,labels):
print(label+' : '+str(value) )
if result[1] <= 0.05:
print("strong evidence against the null hypothesis, reject the null hypothesis. Data has no unit root and is stationary")
else:
print("weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary ")
# Store in a function for later use!
def adf_check(time_series):
    """Pass in a time series, returns an ADF report"""
result = adfuller(time_series)
print('Augmented Dickey-Fuller Test:')
labels = ['ADF Test Statistic',
'p-value',
'#Lags Used',
'Number of Observations Used']
for value,label in zip(result,labels):
print(label+' : '+str(value) )
if result[1] <= 0.05:
print("strong evidence against the null hypothesis, reject the null hypothesis. Data has no unit root and is stationary")
else:
print("weak evidence against null hypothesis, time series has a unit root, indicating it is non-stationary ")
Explanation: Testing for Stationarity
We can use the Augmented Dickey-Fuller unit root test.
In statistics and econometrics, an augmented Dickey–Fuller test (ADF) tests the null hypothesis that a unit root is present in a time series sample. The alternative hypothesis is different depending on which version of the test is used, but is usually stationarity or trend-stationarity.
Basically, we are trying to decide whether to accept the Null Hypothesis H0 (that the time series has a unit root, indicating it is non-stationary) or reject H0 and go with the Alternative Hypothesis (that the time series has no unit root and is stationary).
We end up deciding this based on the p-value return.
A small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.
Let's run the Augmented Dickey-Fuller test on our data:
End of explanation
df['Milk First Difference'] = df['Milk in pounds per cow'] - df['Milk in pounds per cow'].shift(1)
adf_check(df['Milk First Difference'].dropna())
df['Milk First Difference'].plot()
Explanation: Important Note!
We have now realized that our data is seasonal (it is also pretty obvious from the plot itself). This means we need to use Seasonal ARIMA on our model. If our data was not seasonal, it means we could use just ARIMA on it. We will take this into account when differencing our data! Typically financial stock data won't be seasonal, but that is kind of the point of this section, to show you common methods, that won't work well on stock finance data!
Differencing
The first difference of a time series is the series of changes from one period to the next. We can do this easily with pandas. You can continue to take the second difference, third difference, and so on until your data is stationary.
First Difference
End of explanation
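As a small aside (an equivalent pandas idiom, not taken from the original notebook), the shift-and-subtract pattern used below is the same as calling .diff(1) on the series:
# Hypothetical equivalence: .diff(1) matches subtracting a shifted copy of the series
df['Milk in pounds per cow'].diff(1).head()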
# Sometimes it would be necessary to do a second difference
# This is just for show, we didn't need to do a second difference in our case
df['Milk Second Difference'] = df['Milk First Difference'] - df['Milk First Difference'].shift(1)
adf_check(df['Milk Second Difference'].dropna())
df['Milk Second Difference'].plot()
Explanation: Second Difference
End of explanation
df['Seasonal Difference'] = df['Milk in pounds per cow'] - df['Milk in pounds per cow'].shift(12)
df['Seasonal Difference'].plot()
# Seasonal Difference by itself was not enough!
adf_check(df['Seasonal Difference'].dropna())
Explanation: Seasonal Difference
End of explanation
# You can also do seasonal first difference
df['Seasonal First Difference'] = df['Milk First Difference'] - df['Milk First Difference'].shift(12)
df['Seasonal First Difference'].plot()
adf_check(df['Seasonal First Difference'].dropna())
Explanation: Seasonal First Difference
End of explanation
from statsmodels.graphics.tsaplots import plot_acf,plot_pacf
# Duplicate plots
# Check out: https://stackoverflow.com/questions/21788593/statsmodels-duplicate-charts
# https://github.com/statsmodels/statsmodels/issues/1265
fig_first = plot_acf(df["Milk First Difference"].dropna())
fig_seasonal_first = plot_acf(df["Seasonal First Difference"].dropna())
Explanation: Autocorrelation and Partial Autocorrelation Plots
An autocorrelation plot (also known as a Correlogram ) shows the correlation of the series with itself, lagged by x time units. So the y axis is the correlation and the x axis is the number of time units of lag.
So imagine taking your time series of length T, copying it, and deleting the first observation of copy #1 and the last observation of copy #2. Now you have two series of length T−1 for which you calculate a correlation coefficient. This is the value of the vertical axis at x=1 in your plots. It represents the correlation of the series lagged by one time unit. You go on and do this for all possible time lags x and this defines the plot.
You will run these plots on your differenced/stationary data. There is a lot of great information for identifying and interpreting ACF and PACF here and here.
Autocorrelation Interpretation
The actual interpretation and how it relates to ARIMA models can get a bit complicated, but there are some basic common methods we can use for the ARIMA model. Our main priority here is to try to figure out whether we will use the AR or MA components for the ARIMA model (or both!) as well as how many lags we should use. In general you would use either AR or MA, using both is less common.
If the autocorrelation plot shows positive autocorrelation at the first lag (lag-1), then it suggests using AR terms in relation to that lag.
If the autocorrelation plot shows negative autocorrelation at the first lag, then it suggests using MA terms.
<font color='red'> Important Note! </font>
Here we will be showing running the ACF and PACF on multiple differenced data sets that have been made stationary in different ways, typically you would just choose a single stationary data set and continue all the way through with that.
The reason we use two here is to show you the two typical types of behaviour you would see when using ACF.
End of explanation
from pandas.plotting import autocorrelation_plot
autocorrelation_plot(df['Seasonal First Difference'].dropna())
Explanation: Pandas also has this functionality built in, but only for ACF, not PACF. So I recommend using statsmodels, as ACF and PACF is more core to its functionality than it is to pandas' functionality.
End of explanation
result = plot_pacf(df["Seasonal First Difference"].dropna())
Explanation: Partial Autocorrelation
In general, a partial correlation is a conditional correlation.
It is the correlation between two variables under the assumption that we know and take into account the values of some other set of variables.
For instance, consider a regression context in which y = response variable and x1, x2, and x3 are predictor variables. The partial correlation between y and x3 is the correlation between the variables determined taking into account how both y and x3 are related to x1 and x2.
Formally, this relationship is defined as:
$\frac{\text{Covariance}(y, x_3|x_1, x_2)}{\sqrt{\text{Variance}(y|x_1, x_2)\text{Variance}(x_3| x_1, x_2)}}$
Check out this link for full details on this.
We can then plot this relationship:
End of explanation
fig = plt.figure(figsize = (12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(df['Seasonal First Difference'].iloc[13:],
lags = 40,
ax = ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(df['Seasonal First Difference'].iloc[13:],
lags = 40,
ax = ax2)
Explanation: Interpretation
Typically a sharp drop after lag "k" suggests an AR-k model should be used. If there is a gradual decline, it suggests an MA model.
Final Thoughts on Autocorrelation and Partial Autocorrelation
Identification of an AR model is often best done with the PACF.
For an AR model, the theoretical PACF “shuts off” past the order of the model. The phrase “shuts off” means that in theory the partial autocorrelations are equal to 0 beyond that point. Put another way, the number of non-zero partial autocorrelations gives the order of the AR model. By the “order of the model” we mean the most extreme lag of x that is used as a predictor.
Identification of an MA model is often best done with the ACF rather than the PACF.
For an MA model, the theoretical PACF does not shut off, but instead tapers toward 0 in some manner. A clearer pattern for an MA model is in the ACF. The ACF will have non-zero autocorrelations only at lags involved in the model.
Final ACF and PACF Plots
We've run quite a few plots, so let's just quickly get our "final" ACF and PACF plots. These are the ones we will be referencing in the rest of the notebook below.
End of explanation
# For non-seasonal data
from statsmodels.tsa.arima_model import ARIMA
# I recommend you glance over this!
#
help(ARIMA)
Explanation: Using the Seasonal ARIMA model
Finally we can use our ARIMA model now that we have an understanding of our data!
End of explanation
# We have seasonal data!
model = sm.tsa.statespace.SARIMAX(df['Milk in pounds per cow'],
order = (0,1,0),
seasonal_order = (1,1,1,12))
results = model.fit()
print(results.summary())
results.resid.plot()
results.resid.plot(kind = 'kde')
Explanation: p,d,q parameters
p: The number of lag observations included in the model.
d: The number of times that the raw observations are differenced, also called the degree of differencing.
q: The size of the moving average window, also called the order of moving average.
End of explanation
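One common way to pick these orders, sketched here as an assumption rather than what was done in the original notebook, is a small grid search keeping the combination with the lowest AIC:
# Minimal sketch of an AIC-based order search (ranges kept tiny so it stays fast)
import itertools
best = None
for p, d, q in itertools.product(range(2), range(2), range(2)):
    try:
        fit = sm.tsa.statespace.SARIMAX(df['Milk in pounds per cow'],
                                        order=(p, d, q),
                                        seasonal_order=(1, 1, 1, 12)).fit(disp=False)
        if best is None or fit.aic < best[0]:
            best = (fit.aic, (p, d, q))
    except Exception:
        continue  # some order combinations fail to converge
print(best)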
df['forecast'] = results.predict(start = 150,
end = 168,
dynamic = True)
df[['Milk in pounds per cow','forecast']].plot(figsize = (12, 8))
Explanation: Prediction of Future Values
First, we can get an idea of how well our model performs by just predicting for values that we actually already know:
End of explanation
df.tail()
# https://pandas.pydata.org/pandas-docs/stable/timeseries.html
# Alternatives
# pd.date_range(df.index[-1],periods=12,freq='M')
from pandas.tseries.offsets import DateOffset
future_dates = [df.index[-1] + DateOffset(months = x) for x in range(0,24) ]
future_dates
future_dates_df = pd.DataFrame(index = future_dates[1:],
columns = df.columns)
future_df = pd.concat([df,future_dates_df])
future_df.head()
future_df.tail()
future_df['forecast'] = results.predict(start = 168,
end = 188,
dynamic= True)
future_df[['Milk in pounds per cow', 'forecast']].plot(figsize = (12, 8))
Explanation: Forecasting
This requires more time periods, so let's create them with pandas onto our original dataframe!
End of explanation |
186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Introduction
This notebook generates the various data representations in lecture 12. It is easy to generalize this to other applications.
Step2: Stem and leaf
The code below shows how to generate a stem and leaf display.
Step3: Boxplot | Python Code:
from __future__ import division
import matplotlib.pyplot as plt
import matplotlib as mpl
import palettable
import numpy as np
import math
import seaborn as sns
from collections import defaultdict
%matplotlib inline
# Here, we customize the various matplotlib parameters for font sizes and define a color scheme.
# As mentioned in the lecture, the typical defaults in most software are not optimal from a
# data presentation point of view. You need to work hard at selecting these parameters to ensure
# an effective data presentation.
colors = palettable.colorbrewer.qualitative.Set1_4.mpl_colors
mpl.rcParams['lines.linewidth'] = 2
mpl.rcParams['lines.color'] = 'r'
mpl.rcParams['axes.titlesize'] = 32
mpl.rcParams['axes.labelsize'] = 30
mpl.rcParams['axes.labelsize'] = 30
mpl.rcParams['xtick.labelsize'] = 24
mpl.rcParams['ytick.labelsize'] = 24
data = """105 221 183 186 121 181 180 143
97 154 153 174 120 168 167 141
245 228 174 199 181 158 176 110
163 131 154 115 160 208 158 133
207 180 190 193 194 133 156 123
134 178 76 167 184 135 229 146
218 157 101 171 165 172 158 169
199 151 142 163 145 171 148 158
160 175 149 87 160 237 150 135
196 201 200 176 150 170 118 149"""
data = [[int(x) for x in d.split()] for d in data.split("\n")]
d = np.array(data).flatten()
min_val = d.min()
max_val = d.max()
start_val = math.floor(min_val / 10) * 10
mean = np.average(d)
median = np.median(d)
print("Min value = %d, Max value = %d." % (min_val, max_val))
print("Mean = %.1f" % mean)
print("Median = %.1f" % median)
print("Standard deviation = %.1f" % np.sqrt(np.var(d)))
freq, bins = np.histogram(d, bins=np.arange(70, 260, 20))
plt.figure(figsize=(12,8))
bins = np.arange(70, 260, 20)
plt.hist(np.array(d), bins=bins)
plt.xticks(bins + 10, ["%d-%d" % (bins[i], bins[i+1]) for i in range(len(bins) - 1)], rotation=-45)
ylabel = plt.ylabel("f")
xlabel = plt.xlabel("Compressive strength (psi)")
Explanation: Introduction
This notebook generates the various data representations in lecture 12. It is easy to generalize this to other applications.
End of explanation
def generate_stem_and_leaf(data):
stem_and_leaf = defaultdict(list)
for i in data:
k = int(math.floor(i / 10))
v = int(i % 10)
stem_and_leaf[k].append(v)
for k in sorted(stem_and_leaf.keys()):
print("%02d | %s" % (k, " ".join(["%d" % i for i in stem_and_leaf[k]])))
generate_stem_and_leaf(d)
Explanation: Stem and leaf
The code below shows how to generate a stem and leaf display.
End of explanation
plt.figure(figsize=(12,8))
ax = sns.boxplot(y=d, color="c")
ax = sns.swarmplot(y=d, color=".25") # We add the swarm plot as well to show all data points.
ylabel = plt.ylabel("Compressive Strength (psi)")
Explanation: Boxplot
End of explanation |
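A hedged companion to the box plot (not in the original notebook) is the five-number summary it is drawn from:
# Minimum, quartiles, median and maximum of the compressive strength sample
print(np.percentile(d, [0, 25, 50, 75, 100]))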
187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dense Communities in Networks
In this problem, we study the problem of finding dense communities in networks. Assume $G$ = ($V,E$) is an undirected graph (e.g., representing a social network).
For any subset $S \subseteq V$ , we let the induced edge set (denoted by $E[S]$) to be the set of edges both of whose endpoints belong to $S$.
For any $v \in S$, we let $deg_S(v) = |\{u \in S \mid (u,v) \in E\}|$.
Then, we define the density of $S$ to be $$\rho(S)=\frac{|E[S]|}{|S|}$$
Finally, the maximum density of the graph $G$ is the density of the densest induced subgraph of $G$, defined as $$\rho^\ast(G) = \max\limits_{S \subseteq V}{\rho(S)}$$
Our goal is to find an induced subgraph of $G$ whose density is not much smaller than $\rho^\ast(G)$. Such a set is very densely connected, and hence may indicate a community in the network represented by $G$. Also, since the graphs of interest are usually very large in practice, we would like the algorithm to be highly scalable. We consider the following algorithm
Step1: Part 1
The number of iterations needed to find a dense subgraph depends upon the value of $\epsilon$. Show how many iterations it takes to calculate the first dense subgraph when $\epsilon$ = {0.1, 0.5, 1, 2}
Step2: Part 2
When $\epsilon$ = 0.05, plot separate graphs showing $\rho(S_i)$, |$E(S_i)$| and |$S_i$| as a function of $i$ where $i$ is the iteration of the while loop.
Step3: Single Iteration Plots
Step4: Part 3
The algorithm above only describes how to find one dense component (or community). It is also possible to find multiple components by running the above algorithm to find the first dense component and then deleting all vertices (and edges) belonging to that component. To find the next dense component, run the same algorithm on the modified graph. Plot separate graphs showing $\rho(\tilde{S_j})$, |$E[\tilde{S_j}]$| and |$\tilde{S_j}$| as a function of $j$ where $j$ is the current community that has been found. You can stop if 20 communities have already been found. Have $\epsilon$ = 0.05.
Note
Step5: Community Plots | Python Code:
%pylab inline
import pylab
def initialVertices(numNodes):
V = set();
for i in range(numNodes):
V.add(i);
return V;
def initializeDeg(deg, S):
for s in S:
deg[s] = 0;
def computeDegree(S, edgesfile):
fedges = open(edgesfile, 'r');
deg = {};
initializeDeg(deg, S)
for line in fedges:
        split = list(map(int, line.split()))  # list() so it can be indexed under Python 3
if split[0] in S and split[1] in S:
deg[split[0]] += 1;
deg[split[1]] += 1;
return deg;
def computeDensitySet(S, edgesfile):
fedges = open(edgesfile, 'r');
num = 0;
for line in fedges:
        split = list(map(int, line.split()))  # list() so it can be indexed under Python 3
if split[0] in S and split[1] in S:
num += 1;
return float(num) / len(S)
def computeASet(S, deg, maxDens):
A = set();
for s in S:
if deg[s] <= maxDens:
A.add(s);
return A;
def findDenseCommunity(V, edgesfile, eps):
tS = V;
densTS = computeDensitySet(tS, edgesfile);
S = V;
densS = densTS;
numIter = 0;
while len(S) > 0:
numIter += 1;
maxDens = 2 * (1 + eps) * densS;
deg = computeDegree(S, edgesfile);
A = computeASet(S, deg, maxDens);
S = S - A;
if len(S) == 0:
break;
densS = computeDensitySet(S, edgesfile);
if densS > densTS:
tS = S;
densTS = densS;
return tS, numIter;
def writeMapToFile(values, filename):
f = open(filename, 'w');
for k in values.keys():
f.write(k);
f.write(' ');
f.write(str(values[k]));
f.write('\n');
def writeVectorToFile(vec, filename):
f = open(filename, 'w');
for i in range(len(vec)):
f.write(str(i));
f.write(' ');
f.write(str(vec[i]));
f.write('\n');
def countEdges(edgesfile, S):
fedges = open(edgesfile, 'r');
res = 0;
for line in fedges:
        split = list(map(int, line.split()))  # list() so it can be indexed under Python 3
if split[0] in S and split[1] in S:
res += 1;
return res;
def findDenseCommunityModified(V, edgesfile, eps):
tS = V;
densTS = computeDensitySet(tS, edgesfile);
S = V;
densS = densTS;
numIter = 0;
sizeS = [];
eVec = [];
rhoVec = [];
while len(S) > 0:
rhoVec.append(densS);
sizeS.append(len(S));
eVec.append(countEdges(edgesfile, S));
numIter += 1;
maxDens = 2 * (1 + eps) * densS;
deg = computeDegree(S, edgesfile);
A = computeASet(S, deg, maxDens);
S = S - A;
if len(S) == 0:
break;
densS = computeDensitySet(S, edgesfile);
if densS > densTS:
tS = S;
densTS = densS;
return tS, numIter, rhoVec, eVec, sizeS;
edgesfile = 'livejournal-undirected.txt';
numNodes = 499923;
numEdges = 7794290;
V = initialVertices(numNodes);
Explanation: Dense Communities in Networks
In this problem, we study the problem of finding dense communities in networks. Assume $G$ = ($V,E$) is an undirected graph (e.g., representing a social network).
For any subset $S \subseteq V$ , we let the induced edge set (denoted by $E[S]$) to be the set of edges both of whose endpoints belong to $S$.
For any $v \in S$, we let $deg_S(v) = |{u \in S | (u,v) \in E}|$.
Then, we define the density of $S$ to be $$\rho(S)=\frac{|E[S]|}{|S|}$$
Finally, the maximum density of the graph $G$ is the density of the densest induced subgraph of $G$, defined as $$\rho^\ast(G) = \max\limits_{S \subseteq V}{\rho(S)}$$
Our goal is to find an induced subgraph of $G$ whose density is not much smaller than $\rho^\ast(G)$. Such a set is very densely connected, and hence may indicate a community in the network represented by $G$. Also, since the graphs of interest are usually very large in practice, we would like the algorithm to be highly scalable. We consider the following algorithm:
Require: G=(V,E) and ε>0
S ̃,S ← V
while S = ̸= ∅ do
A(S) := {i ∈ S | deg_S(i) ≤ 2(1+ε)ρ(S)}
S ← S \ A(S)
if ρ(S) > ρ(S ̃) then
S ̃ ← S
end if
end while
return S ̃
The basic idea in the algorithm is that the nodes with low degrees do not contribute much to the density of a dense subgraph, hence they can be removed without significantly influencing the density.
An undirected graph specified as a list of edges, is provided at 'livejournal-undirected.txt.zip.' The data set consists of 499923 vertices and 7794290 edges. Treat each line in the file as an undirected edge connecting the two given vertices. Implement the algorithm described above using a streaming model. For this problem, let the density of an empty set be 0.
End of explanation
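Before trusting the streaming implementation on 7.8M edges, a toy sanity check (purely illustrative, not part of the assignment data) makes the density definition concrete:
# Toy example: a triangle {0,1,2} plus a pendant vertex 3
toy_edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
def toy_density(S, edges):
    return sum(1 for u, v in edges if u in S and v in S) / float(len(S))
print(toy_density({0, 1, 2, 3}, toy_edges), toy_density({0, 1, 2}, toy_edges))  # 1.0 and 1.0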
epsVec = [0.1, 0.5, 1, 2];
iterVec = {}
for i in range(len(epsVec)):
tS, numIter = findDenseCommunity(V, edgesfile, epsVec[i]);
iterVec[epsVec[i]] = numIter;
print(iterVec)
Explanation: Part 1
The number of iterations needed to find a dense subgraph depends upon the value of $\epsilon$. Show how many iterations it takes to calculate the first dense subgraph when $\epsilon$ = {0.1, 0.5, 1, 2}
End of explanation
eps = 0.05;
tS, numIter, rhoVec, eVec, sizeS = findDenseCommunityModified(V, edgesfile, eps);
writeVectorToFile(rhoVec, 'rhoVecii');
writeVectorToFile(eVec, 'eVecii');
writeVectorToFile(sizeS, 'sizeSii');
print(numIter);
Explanation: Part 2
When $\epsilon$ = 0.05, plot separate graphs showing $\rho(S_i)$, |$E(S_i)$| and |$S_i$| as a function of $i$ where $i$ is the iteration of the while loop.
End of explanation
rhoVii = pylab.loadtxt('rhoVecii')
eVii = pylab.loadtxt('eVecii')
sizSii = pylab.loadtxt('sizeSii')
f2 = pylab.figure(figsize=(9,18))
p1 = f2.add_subplot(311)
p1.plot(rhoVii[:,0], rhoVii[:,1])
p1.set_xlabel("Iteration #")
p1.set_ylabel("rho(S)")
p2 = f2.add_subplot(312)
p2.plot(eVii[:,0], eVii[:,1])
p2.set_xlabel("Iteration #")
p2.set_ylabel("|S|")
p3 = f2.add_subplot(313)
p3.plot(sizSii[:,0], sizSii[:,1])
p3.set_xlabel("Iteration #")
p3.set_ylabel("|E(S)|")
Explanation: Single Iteration Plots
End of explanation
eps = 0.05;
numCommunities = 20;
def findSeveralDenseCommunities(V, edgesfile, eps, numCommunities):
rhoVec = [];
sizeTS = [];
eVec = [];
for i in range(numCommunities):
print(str(i));
tS, numIter = findDenseCommunity(V, edgesfile, eps);
sizeTS.append(len(tS));
eVec.append(countEdges(edgesfile, tS));
rhoVec.append(computeDensitySet(tS, edgesfile));
V = V - tS;
return rhoVec, eVec, sizeTS;
rhoVec, eVec, sizeTS = findSeveralDenseCommunities(V, edgesfile, eps, numCommunities);
writeVectorToFile(rhoVec, 'rhoVeciii');
writeVectorToFile(eVec, 'eVeciii');
writeVectorToFile(sizeTS, 'sizeTSiii');
Explanation: Part 3
The algorithm above only describes how to find one dense component (or community). It is also possible to find multiple components by running the above algorithm to find the first dense component and then deleting all vertices (and edges) belonging to that component. To find the next dense component, run the same algorithm on the modified graph. Plot separate graphs showing $\rho(\tilde{S_j})$, |$E[\tilde{S_j}]$| and |$\tilde{S_j}$| as a function of $j$ where $j$ is the current community that has been found. You can stop if 20 communities have already been found. Have $\epsilon$ = 0.05.
Note: the simulation can take over an hour because you have to run the algorithm to find communities 20 times.
End of explanation
rhoViii = pylab.loadtxt('rhoVeciii')
eViii = pylab.loadtxt('eVeciii')
sTSii = pylab.loadtxt('sizeTSiii')
f3 = pylab.figure(figsize=(9,18))
p1 = f3.add_subplot(311)
p1.plot(rhoViii[:,0], rhoViii[:,1])
p1.set_xlabel("Community #")
p1.set_ylabel("rho(S~)")
p2 = f3.add_subplot(312)
p2.plot(eViii[:,0], eViii[:,1])
p2.set_xlabel("Community #")
p2.set_ylabel("|S~|")
p3 = f3.add_subplot(313)
p3.plot(sTSii[:,0], sTSii[:,1])
p3.set_xlabel("Community #")
p3.set_ylabel("|E(S~)|")
Explanation: Community Plots
End of explanation |
188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook was created by Sergey Tomin for Workshop
Step1: Change RF parameters for the comparison with ASTRA
Step2: Initializing SpaceCharge
Step3: Comparison with ASTRA
Beam tracking with ASTRA was performed by Igor Zagorodnov (DESY). | Python Code:
# the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
%matplotlib inline
from time import time
# this python library provides generic shallow (copy) and deep copy (deepcopy) operations
from copy import deepcopy
# import from Ocelot main modules and functions
from ocelot import *
# import from Ocelot graphical modules
from ocelot.gui.accelerator import *
# import injector lattice
from ocelot.test.workshop.injector_lattice import *
# load beam distribution
# this function convert Astra beam distribution to Ocelot format - ParticleArray. ParticleArray is designed for tracking.
# in order to work with converters we have to import specific module from ocelot.adaptors
from ocelot.adaptors.astra2ocelot import *
Explanation: This notebook was created by Sergey Tomin for Workshop: Designing future X-ray FELs. Source and license info is on GitHub. August 2016.
Tutorial N3. Space Charge.
Second order tracking with space charge effect of the 200k particles.
As an example, we will use lattice file (converted to Ocelot format) of the European XFEL Injector.
The space charge forces are calculated by solving the Poisson equation in the bunch frame.
Then the Lorentz transformed electromagnetic field is applied as a kick in the laboratory frame.
For the solution of the Poisson equation we use an integral representation of the electrostatic potential by convolution of the free-space Green's function with the charge distribution. The convolution equation is solved with the help of the Fast Fourier Transform (FFT). The same algorithm for solution of the 3D Poisson equation is used, for example, in ASTRA.
This example will cover the following topics:
Initialization of the Space Charge objects and the places of their applying
tracking of second order with space charge effect.
Requirements
injector_lattice.py - input file, the European XFEL Injector lattice.
beam_6MeV.ast - input file, initial beam distribution in ASTRA format (was obtained from s2e simulation performed with ASTRA).
Import of modules
End of explanation
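As a rough illustration of the Green's-function-by-FFT idea described above (a generic numpy/scipy sketch on a made-up grid, not Ocelot's actual solver), the free-space potential of a charge blob can be obtained by convolving the density with a regularized 1/(4*pi*r) kernel:
# Minimal sketch: potential (up to 1/eps0) of a gaussian charge blob via FFT convolution
import numpy as np
from scipy.signal import fftconvolve
n, h = 32, 1.0e-3                                        # grid points per axis, mesh step [m]
ax = (np.arange(n) - n / 2) * h
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
rho = np.exp(-(X**2 + Y**2 + Z**2) / (2 * (5 * h)**2))   # toy charge density
r = np.sqrt(X**2 + Y**2 + Z**2)
green = 1.0 / (4 * np.pi * np.maximum(r, 0.5 * h))       # regularized free-space kernel
phi = fftconvolve(rho, green, mode="same") * h**3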
phi1=18.7268
V1=18.50662e-3/np.cos(phi1*pi/180)
C_A1_1_1_I1.v = V1; C_A1_1_1_I1.phi = phi1
C_A1_1_2_I1.v = V1; C_A1_1_2_I1.phi = phi1
C_A1_1_3_I1.v = V1; C_A1_1_3_I1.phi = phi1
C_A1_1_4_I1.v = V1; C_A1_1_4_I1.phi = phi1
C_A1_1_5_I1.v = V1; C_A1_1_5_I1.phi = phi1
C_A1_1_6_I1.v = V1; C_A1_1_6_I1.phi = phi1
C_A1_1_7_I1.v = V1; C_A1_1_7_I1.phi = phi1
C_A1_1_8_I1.v = V1; C_A1_1_8_I1.phi = phi1
phi13=180
V13=-20.2E-3/8/np.cos(phi13*pi/180)
C3_AH1_1_1_I1.v=V13; C3_AH1_1_1_I1.phi=phi13
C3_AH1_1_2_I1.v=V13; C3_AH1_1_2_I1.phi=phi13
C3_AH1_1_3_I1.v=V13; C3_AH1_1_3_I1.phi=phi13
C3_AH1_1_4_I1.v=V13; C3_AH1_1_4_I1.phi=phi13
C3_AH1_1_5_I1.v=V13; C3_AH1_1_5_I1.phi=phi13
C3_AH1_1_6_I1.v=V13; C3_AH1_1_6_I1.phi=phi13
C3_AH1_1_7_I1.v=V13; C3_AH1_1_7_I1.phi=phi13
C3_AH1_1_8_I1.v=V13; C3_AH1_1_8_I1.phi=phi13
p_array_init = astraBeam2particleArray(filename='beam_6MeV.ast')
bins_start, hist_start = get_current(p_array_init, charge=p_array_init.q_array[0], num_bins=200)
plt.title("current: end")
plt.plot(bins_start*1000, hist_start)
plt.xlabel("s, mm")
plt.ylabel("I, A")
plt.grid(True)
plt.show()
# initialization of tracking method
method = MethodTM()
# for second order tracking we have to choose SecondTM
method.global_method = SecondTM
# for first order tracking uncomment next line
# method.global_method = TransferMap
# we will start simulation from point 3.2 from the gun. For this purpose marker was created (start_sim=Marker())
# and placed in 3.2 m after gun
# Q_38_I1 is quadrupole between RF cavities 1.3 GHz and 3.9 GHz
# C3_AH1_1_8_I1 is the last section of the 3.9 GHz cavity
lat = MagneticLattice(cell, start=start_sim, stop=Q_38_I1, method=method)
Explanation: Change RF parameters for the comparison with ASTRA
End of explanation
sc1 = SpaceCharge()
sc1.nmesh_xyz = [63, 63, 63]
sc1.low_order_kick = False
sc1.step = 1
sc5 = SpaceCharge()
sc5.nmesh_xyz = [63, 63, 63]
sc5.step = 5
sc5.low_order_kick = False
navi = Navigator(lat)
# add physics processes from the first element to the last of the lattice
navi.add_physics_proc(sc1, lat.sequence[0], C_A1_1_2_I1)
navi.add_physics_proc(sc5, C_A1_1_2_I1, lat.sequence[-1])
# definiing of unit step in [m]
navi.unit_step = 0.02
# deep copy of the initial beam distribution
p_array = deepcopy(p_array_init)
start = time()
tws_track, p_array = track(lat, p_array, navi)
print("time exec: ", time() - start, "sec")
# you can change top_plot argument, for example top_plot=["alpha_x", "alpha_y"]
plot_opt_func(lat, tws_track, top_plot=["E"], fig_name=0, legend=False)
plt.show()
Explanation: Initializing SpaceCharge
End of explanation
sa, bx_sc, by_sc, bx_wo_sc, by_wo_sc = np.loadtxt("astra_sim.txt", usecols=(0, 1, 2, 3, 4), unpack=True)
s = [tw.s for tw in tws_track]
bx = [tw.beta_x for tw in tws_track]
by = [tw.beta_y for tw in tws_track]
ax = plot_API(lat, legend=False)
ax.plot(s, bx, "r", label="Ocelot, bx")
ax.plot(sa-3.2, bx_sc, "b-",label="ASTRA, bx")
ax.plot(s, by, "r", label="Ocelot, by")
ax.plot(sa-3.2, by_sc, "b-",label="ASTRA, by")
ax.legend()
plt.show()
Explanation: Comparison with ASTRA
Beam tracking with ASTRA was performed by Igor Zagorodnov (DESY).
End of explanation |
189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visual Diagnosis of Text Analysis with Baleen
This notebook has been created as part of the Yellowbrick user study. I hope to explore how visual methods might improve the workflow of text classification on a small to medium sized corpus.
Dataset
The dataset used in this study is a sample of the Baleen Corpus. The Baleen corpus has been ingesting RSS feeds on the hour from a variety of topical feeds since March 2016, including news, hobbies, and political documents and currently has over 1.2M posts from 373 feeds. Baleen (an open source system) has a sister library called Minke that provides multiprocessing support for dealing with Gigabytes worth of text.
The dataset I'll use in this study is a sample of the larger data set that contains 68,052 or roughly 6% of the total corpus. For this test, I've chosen to use the preprocessed corpus, which means I won't have to do any tokenization, but can still apply normalization techniques. The corpus is described as follows
Step4: Loading Data
In order to load data, I'd typically use a CorpusReader. However, for the sake of simplicity, I'll load data using some simple Python generator functions. I need to create two primary methods, the first loads the documents using pickle, and the second returns the vector of targets for supervised learning.
Step8: Feature Extraction and Normalization
In order to conduct analyses with Scikit-Learn, I'll need some helper transformers to modify the loaded data into a form that can be used by the sklearn.feature_extraction text transformers. I'll be mostly using the CountVectorizer and TfidfVectorizer, so these normalizer transformers and identity functions help a lot.
Step9: Corpus Analysis
At this stage, I'd like to get a feel for what was in my corpus, so that I can start thinking about how to best vectorize the text and do different types of counting. With the Yellowbrick 0.3.3 release, support has been added for two text visualizers, which I think I will test out at scale using this corpus.
Step10: Classification
The primary task for this kind of corpus is classification - sentiment analysis, etc. | Python Code:
%matplotlib inline
import os
import sys
import nltk
import pickle
# To import yellowbrick
sys.path.append("../..")
Explanation: Visual Diagnosis of Text Analysis with Baleen
This notebook has been created as part of the Yellowbrick user study. I hope to explore how visual methods might improve the workflow of text classification on a small to medium sized corpus.
Dataset
The dataset used in this study is a sample of the Baleen Corpus. The Baleen corpus has been ingesting RSS feeds on the hour from a variety of topical feeds since March 2016, including news, hobbies, and political documents and currently has over 1.2M posts from 373 feeds. Baleen (an open source system) has a sister library called Minke that provides multiprocessing support for dealing with Gigabytes worth of text.
The dataset I'll use in this study is a sample of the larger data set that contains 68,052 or roughly 6% of the total corpus. For this test, I've chosen to use the preprocessed corpus, which means I won't have to do any tokenization, but can still apply normalization techniques. The corpus is described as follows:
Baleen corpus contains 68,052 files in 12 categories.
Structured as:
1,200,378 paragraphs (17.639 mean paragraphs per file)
2,058,635 sentences (1.715 mean sentences per paragraph).
Word count of 44,821,870 with a vocabulary of 303,034 (147.910 lexical diversity).
Category Counts:
books: 1,700 docs
business: 9,248 docs
cinema: 2,072 docs
cooking: 733 docs
data science: 692 docs
design: 1,259 docs
do it yourself: 2,620 docs
gaming: 2,884 docs
news: 33,253 docs
politics: 3,793 docs
sports: 4,710 docs
tech: 5,088 docs
This is quite a lot of data, so for now we'll simply create a classifier for the "hobbies" categories: e.g. books, cinema, cooking, diy, gaming, and sports.
Note: this data set is not currently publically available, but I am happy to provide it on request.
End of explanation
CORPUS_ROOT = os.path.join(os.getcwd(), "data")
CATEGORIES = ["books", "cinema", "cooking", "diy", "gaming", "sports"]
def fileids(root=CORPUS_ROOT, categories=CATEGORIES):
    """Fetch the paths, filtering on categories (pass None for all)."""
for name in os.listdir(root):
dpath = os.path.join(root, name)
if not os.path.isdir(dpath):
continue
if categories and name in categories:
for fname in os.listdir(dpath):
yield os.path.join(dpath, fname)
def documents(root=CORPUS_ROOT, categories=CATEGORIES):
    """Load the pickled documents and yield one at a time."""
for path in fileids(root, categories):
with open(path, 'rb') as f:
yield pickle.load(f)
def labels(root=CORPUS_ROOT, categories=CATEGORIES):
    """Yield the label associated with each document, one at a time."""
for path in fileids(root, categories):
dpath = os.path.dirname(path)
yield dpath.split(os.path.sep)[-1]
Explanation: Loading Data
In order to load data, I'd typically use a CorpusReader. However, for the sake of simplicity, I'll load data using some simple Python generator functions. I need to create two primary methods, the first loads the documents using pickle, and the second returns the vector of targets for supervised learning.
End of explanation
from nltk.corpus import wordnet as wn
from nltk.stem import WordNetLemmatizer
from unicodedata import category as ucat
from nltk.corpus import stopwords as swcorpus
from sklearn.base import BaseEstimator, TransformerMixin
def identity(args):
    """The identity function is used as the "tokenizer" for
    pre-tokenized text. It just passes back its arguments."""
return args
def is_punctuation(token):
    """Returns true if all characters in the token are
    unicode punctuation (works for most punct)."""
return all(
ucat(c).startswith('P')
for c in token
)
def wnpos(tag):
    """Returns the wn part of speech tag from the penn treebank tag."""
return {
"N": wn.NOUN,
"V": wn.VERB,
"J": wn.ADJ,
"R": wn.ADV,
}.get(tag[0], wn.NOUN)
class TextNormalizer(BaseEstimator, TransformerMixin):
def __init__(self, stopwords='english', lowercase=True, lemmatize=True, depunct=True):
self.stopwords = frozenset(swcorpus.words(stopwords)) if stopwords else frozenset()
self.lowercase = lowercase
self.depunct = depunct
self.lemmatizer = WordNetLemmatizer() if lemmatize else None
def fit(self, docs, labels=None):
return self
def transform(self, docs):
for doc in docs:
yield list(self.normalize(doc))
def normalize(self, doc):
for paragraph in doc:
for sentence in paragraph:
for token, tag in sentence:
if token.lower() in self.stopwords:
continue
if self.depunct and is_punctuation(token):
continue
if self.lowercase:
token = token.lower()
if self.lemmatizer:
token = self.lemmatizer.lemmatize(token, wnpos(tag))
yield token
Explanation: Feature Extraction and Normalization
In order to conduct analyses with Scikit-Learn, I'll need some helper transformers to modify the loaded data into a form that can be used by the sklearn.feature_extraction text transformers. I'll be mostly using the CountVectorizer and TfidfVectorizer, so these normalizer transformers and identity functions help a lot.
End of explanation
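A tiny smoke test (the tagged sentence below is made up) helps confirm the normalizer drops stopwords and punctuation and lemmatizes by part of speech:
# Expected output: [['dog', 'run']]
sample_doc = [[[("The", "DT"), ("dogs", "NNS"), ("were", "VBD"), ("running", "VBG"), ("!", ".")]]]
print(list(TextNormalizer().transform(sample_doc)))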
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from yellowbrick.text import FreqDistVisualizer
visualizer = Pipeline([
('norm', TextNormalizer()),
('count', CountVectorizer(tokenizer=lambda x: x, preprocessor=None, lowercase=False)),
('viz', FreqDistVisualizer())
])
visualizer.fit_transform(documents(), labels())
visualizer.named_steps['viz'].show()
vect = Pipeline([
('norm', TextNormalizer()),
('count', CountVectorizer(tokenizer=lambda x: x, preprocessor=None, lowercase=False)),
])
docs = vect.fit_transform(documents(), labels())
viz = FreqDistVisualizer()
viz.fit(docs, vect.named_steps['count'].get_feature_names())
viz.show()
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from yellowbrick.text import TSNEVisualizer
vect = Pipeline([
('norm', TextNormalizer()),
('tfidf', TfidfVectorizer(tokenizer=lambda x: x, preprocessor=None, lowercase=False)),
])
docs = vect.fit_transform(documents(), labels())
viz = TSNEVisualizer()
viz.fit(docs, labels())
viz.show()
Explanation: Corpus Analysis
At this stage, I'd like to get a feel for what was in my corpus, so that I can start thinking about how to best vectorize the text and do different types of counting. With the Yellowbrick 0.3.3 release, support has been added for two text visualizers, which I think I will test out at scale using this corpus.
End of explanation
from sklearn.model_selection import train_test_split as tts
docs_train, docs_test, labels_train, labels_test = tts(docs, list(labels()), test_size=0.2)
from sklearn.linear_model import LogisticRegression
from yellowbrick.classifier import ClassBalance, ClassificationReport, ROCAUC
logit = LogisticRegression()
logit.fit(docs_train, labels_train)
logit_balance = ClassBalance(logit, classes=set(labels_test))
logit_balance.score(docs_test, labels_test)
logit_balance.show()
logit_balance = ClassificationReport(logit, classes=set(labels_test))
logit_balance.score(docs_test, labels_test)
logit_balance.show()
logit_balance = ClassificationReport(LogisticRegression())
logit_balance.fit(docs_train, labels_train)
logit_balance.score(docs_test, labels_test)
logit_balance.show()
logit_balance = ROCAUC(logit)
logit_balance.score(docs_test, labels_test)
logit_balance.show()
Explanation: Classification
The primary task for this kind of corpus is classification - sentiment analysis, etc.
End of explanation |
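As a hedged follow-up (not part of the original study), a cross-validated score gives a more stable estimate than the single split used above:
# Five-fold cross-validated accuracy on the TF-IDF features
from sklearn.model_selection import cross_val_score
print(cross_val_score(LogisticRegression(), docs, list(labels()), cv=5))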
190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Define the Data Generating Distribution
To examine different approximations to the true loss surface of a particular data generating distribution $P(X,Y)$ we must first define it. We will work backwards by first defining a relationship between random variables and then construct the resulting distribution. Let
$$y=mx+\epsilon$$
where $X \sim U[0,1]$ ($p(x)=1$) and $\epsilon \sim \mathcal{N}(0,s^2)$ is the noise term. We immediately see that $y$ can be interpreted as the result of a reparameterization, thus given a particular observation $X=x$ the random variable $Y$ is also distributed normally $\mathcal{N}(mx,s^2)$ with the resulting pdf.
$$p(y|x) = \frac{1}{\sqrt{2\pi s^2}}\exp\bigg(-\frac{(y-mx)^2}{2 s^2}\bigg)$$
In this way, we can trivially define the joint pdf
$$p(x,y) = p(y|x)p(x) = p(y|x) = \frac{1}{\sqrt{2\pi s^2}}\exp\bigg(-\frac{(y-mx)^2}{2 s^2}\bigg)$$
Visualizing the Joint Distribution
We can create observations from $P(X,Y)$ via ancestral sampling, i.e. we first draw a sample $x\sim p(x)$ and then use it to draw a sample $y \sim p(y|x)$ resulting in $(x,y) \sim P(X,Y)$.
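A minimal numpy sketch of this two-step sampling procedure (the slope and noise level below are illustrative values only):
import numpy as np
m, s = 2.7, 0.2
x = np.random.uniform()          #x ~ U[0,1]
y = np.random.normal(m * x, s)   #y | x ~ N(m*x, s^2)
print(x, y)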
Step1: Fitting a Line
We now wish to fit a line to the points drawn from $P(X,Y)$. To do this we must introduce a model and a loss function. Our model is a line with a y-intercept of $0$, $y=\theta x$, and we use the standard sum of squared errors (SSE) loss function
$$ L(\theta) = \frac{1}{N}\sum\limits_{i=1}^N (\theta x_i-y_i)^2$$
Examining Loss Surfaces
The above SSE is where most introductions to machine learning start. However, we can look a little bit closer. Not all loss surfaces are created equally because in practice the loss functions are calculated from only a sample of the true distribution. To see this, let us first take a look at what the true loss surface is
\begin{align}
L_{\text{True}}(\theta) &= \mathop{\mathbb{E}}_{(x,y)\sim P}[(\theta x-y)^2] \
&= \theta^2 \mathbb{E}[x^2]-2\theta\mathbb{E}[xy] +\mathbb{E}[y^2] \
\end{align}
Let us begin with the first expectation.
\begin{align}
\mathbb{E}[x^2] &= \int_0^1 \int_{-\infty}^{\infty} x^2 p(x,y) dy dx \
&= \int_0^1 x^2 \bigg(\int_{-\infty}^{\infty} p(y|x) dy \bigg) dx \
&= \int_0^1 x^2 dx \
&= \frac{1}{3}
\end{align}
The expectation of $xy$ follows a similar pattern
\begin{align}
\mathbb{E}[xy] &= \int_0^1 \int_{-\infty}^{\infty} xy p(x,y) dy dx \
&= \int_0^1 x \bigg(\int_{-\infty}^{\infty} yp(y|x) dy \bigg) dx \
&= \int_0^1 x(mx) dx \
&= m\int_0^1 x^2 dx \
&= \frac{m}{3}
\end{align}
As well as the final expectation...
\begin{align}
\mathbb{E}_{P(X,Y)}[y^2] &= \int_0^1 \int_{-\infty}^{\infty} y^2 p(x,y) dy dx \
&= \int_0^1 \mathbb{E}_{P(Y|X)}[y^2] dx
\end{align}
Here we use the fact that
$$\mathbb{E}_{P(Y|X)}[y^2] = Var[y]+(\mathbb{E}_{P(Y|X)}[y])^2$$
to arrive at
\begin{align}
\mathbb{E}_{P(X,Y)}[y^2] &= \int_0^1 s^2+(mx)^2 dx \
&= s^2 + \frac{m^2}{3}
\end{align}
We now substitute all three results into the definition of $L_{\text{True}}$
\begin{align}
L_{\text{True}}(\theta) &= \frac{1}{3}\theta^2 -\frac{2m}{3}\theta + \frac{m^2}{3} + s^2 \
&= \frac{1}{3}(\theta - m)^2 + s^2
\end{align}
where we see the classic results that
$$\text{argmin}_{\theta} [L_{\text{True}}(\theta)] = m$$
and
$$L_{\text{True}}(m) = s^2$$
showing us that the best we can do in minimizing the loss is governed by the gaussian noise injected into the data
Visualize True Loss Surface
Step2: Approximate Loss Surfaces
The question now becomes, what does the loss surface look like when we only include a finite number of observations from the data generating distribution $P(X,Y)$? We can find an expression for it by expanding the previous defintion of the SSE above
\begin{align}
L(\theta) &= \frac{1}{N}\sum\limits_{i=1}^N (\theta x_i-y_i)^2 \
&= \theta^2 \frac{1}{N}\sum\limits_{i=1}^N x_i^2 -2\theta \frac{1}{N}\sum\limits_{i=1}^N x_i y_i + \frac{1}{N}\sum\limits_{i=1}^N y_i^2 \
\end{align}
Step3: Now we can examine what different approximations to the loss surface look like relative to the true loss surface
Step4: Now let us approximate the derivative of the true loss surface. We begin with calculating the true derivative
\begin{align}
L_{\text{True}}(\theta) &= \frac{1}{3}(\theta - m)^2 + s^2 \
\frac{\partial L_{\text{True}}}{\partial \theta}(\theta) &= \frac{2}{3}(\theta-m)
\end{align}
Meanwhile the derivative of the approximate loss function is
\begin{align}
L(\theta) &= \theta^2 \frac{1}{N}\sum\limits_{i=1}^N x_i^2 -2\theta \frac{1}{N}\sum\limits_{i=1}^N x_i y_i + \frac{1}{N}\sum\limits_{i=1}^N y_i^2 \
\frac{\partial L}{\partial \theta}(\theta) &= \theta \frac{2}{N}\sum\limits_{i=1}^N x_i^2 -\frac{2}{N}\sum\limits_{i=1}^N x_i y_i
\end{align}
and we immediately see that our approximation to the minimum of this loss function is
$$\theta^* = \frac{\sum\limits_{i=1}^N x_i y_i}{\sum\limits_{i=1}^N x_i^2}$$
We will draw samples in the range from $[0,N]$. For each sample size, we will use 70% of it to approximate the minimum of that surface using $\theta^*$ above. Then we will approximate the gradient of the loss surface at $\theta^*$ using the remaining 30% of the data, as well as with all N of the data. For each case we will calculate the error between it and the true gradient of the loss surface. We will then plot both of the error rates as a function of sample size. | Python Code:
import numpy as np
class P():
def __init__(self, m, s):
self.m = m # Slope of line
self.s = s # Standard deviation of injected noise
def sample(self, size):
x = np.random.uniform(size=size)
y = []
for xi in x:
y.append(np.random.normal(self.m*xi,self.s))
return (x,y)
import matplotlib.pyplot as plt
m = 2.7
s = 0.2
p = P(m,s)
x,y = p.sample(50)
plt.figure(figsize=(7,7))
plt.plot([0,1],[0,m],label="Conditional Expectation", c='k')
plt.scatter(x,y, label="Samples")
plt.xlabel("x",fontsize=18)
plt.ylabel('y', fontsize=18)
plt.legend(fontsize=16)
plt.show()
Explanation: Define the Data Generating Distribution
To examine different approximations to the true loss surface of a particular data generating distribution $P(X,Y)$ we must first define it. We will work backwards by first defining a relationship between random variables and then construct the resulting distribution. Let
$$y=mx+\epsilon$$
where $X \sim U[0,1]$ ($p(x)=1$) and $\epsilon \sim \mathcal{N}(0,s^2)$ is the noise term. We immediately see that $y$ can be interpreted as the result of a reparameterization; thus, given a particular observation $X=x$, the random variable $Y$ is also normally distributed, $\mathcal{N}(mx,s^2)$, with the resulting pdf.
$$p(y|x) = \frac{1}{\sqrt{2\pi s^2}}\exp\bigg(-\frac{(y-mx)^2}{2 s^2}\bigg)$$
In this way, we can trivially define the joint pdf
$$p(x,y) = p(y|x)p(x) = p(y|x) = \frac{1}{\sqrt{2\pi s^2}}\exp\bigg(-\frac{(y-mx)^2}{2 s^2}\bigg)$$
Visualizing the Joint Distribution
We can create observations from $P(X,Y)$ via ancestral sampling, i.e. we first draw a sample $x\sim p(x)$ and then use it to draw a sample $y \sim p(y|x)$ resulting in $(x,y) \sim P(X,Y)$.
End of explanation
def true_loss(theta):
return 1/3*(theta-m)**2 + s**2
thetas = np.linspace(m-2,m+2,1000)
plt.figure(figsize=(7,5))
plt.plot(thetas, true_loss(thetas),c='k',label="$\mathcal{L}_D$")
plt.plot([2.7,2.7],[0,1.38],c='r',ls='dashed',label="$m$")
plt.xlabel("x",fontsize=16)
plt.ylabel("y",fontsize=16)
plt.legend(fontsize=17)
plt.show()
Explanation: Fitting a Line
We now wish to fit a line to the points drawn from $P(X,Y)$. To do this we must introduce a model and a loss function. Our model is a line with a y-intercept of $0$, $y=\theta x$, and we use the standard sum of squared errors (SSE) loss function
$$ L(\theta) = \frac{1}{N}\sum\limits_{i=1}^N (\theta x_i-y_i)^2$$
Examining Loss Surfaces
The above SSE is where most introductions to machine learning start. However, we can look a little bit closer. Not all loss surfaces are created equally because in practice the loss functions are calculated from only a sample of the true distribution. To see this, let us first take a look at what the true loss surface is
\begin{align}
L_{\text{True}}(\theta) &= \mathop{\mathbb{E}}_{(x,y)\sim P}[(\theta x-y)^2] \
&= \theta^2 \mathbb{E}[x^2]-2\theta\mathbb{E}[xy] +\mathbb{E}[y^2] \
\end{align}
Let us begin with the first expectation.
\begin{align}
\mathbb{E}[x^2] &= \int_0^1 \int_{-\infty}^{\infty} x^2 p(x,y) dy dx \
&= \int_0^1 x^2 \bigg(\int_{-\infty}^{\infty} p(y|x) dy \bigg) dx \
&= \int_0^1 x^2 dx \
&= \frac{1}{3}
\end{align}
The expectation of $xy$ follows a similar pattern
\begin{align}
\mathbb{E}[xy] &= \int_0^1 \int_{-\infty}^{\infty} xy p(x,y) dy dx \
&= \int_0^1 x \bigg(\int_{-\infty}^{\infty} yp(y|x) dy \bigg) dx \
&= \int_0^1 x(mx) dx \
&= m\int_0^1 x^2 dx \
&= \frac{m}{3}
\end{align}
As well as the final expectation...
\begin{align}
\mathbb{E}_{P(X,Y)}[y^2] &= \int_0^1 \int_{-\infty}^{\infty} y^2 p(x,y) dy dx \
&= \int_0^1 \mathbb{E}_{P(Y|X)}[y^2] dx
\end{align}
Here we use the fact that
$$\mathbb{E}_{P(Y|X)}[y^2] = Var[y]+(\mathbb{E}_{P(Y|X)}[y])^2$$
to arrive at
\begin{align}
\mathbb{E}_{P(X,Y)}[y^2] &= \int_0^1 s^2+(mx)^2 dx \
&= s^2 + \frac{m^2}{3}
\end{align}
We now substitute all three results into the definition of $L_{\text{True}}$
\begin{align}
L_{\text{True}}(\theta) &= \frac{1}{3}\theta^2 -\frac{2m}{3}\theta + \frac{m^2}{3} + s^2 \
&= \frac{1}{3}(\theta - m)^2 + s^2
\end{align}
where we see the classic results that
$$\text{argmin}_{\theta} [L_{\text{True}}(\theta)] = m$$
and
$$L_{\text{True}}(m) = s^2$$
showing us that the best we can do in minimizing the loss is governed by the gaussian noise injected into the data
Visualize True Loss Surface
End of explanation
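Before approximating anything, a quick Monte Carlo sanity check of the closed-form result above (a sketch that reuses the p, m, and s defined earlier; the sample size is arbitrary):
x_chk, y_chk = p.sample(100000)
emp_loss_at_m = np.mean((m * np.asarray(x_chk) - np.asarray(y_chk)) ** 2)
print(emp_loss_at_m, s**2)  #the empirical loss at theta=m should be close to s**2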
def approx_loss(theta, x_vals, y_vals):
x_sq = np.power(x_vals,2).mean()
xy = (x_vals*y_vals).mean()
y_sq = np.power(y_vals,2).mean()
return theta**2*x_sq - 2*theta*xy + y_sq
Explanation: Approximate Loss Surfaces
The question now becomes, what does the loss surface look like when we only include a finite number of observations from the data generating distribution $P(X,Y)$? We can find an expression for it by expanding the previous defintion of the SSE above
\begin{align}
L(\theta) &= \frac{1}{N}\sum\limits_{i=1}^N (\theta x_i-y_i)^2 \
&= \theta^2 \frac{1}{N}\sum\limits_{i=1}^N x_i^2 -2\theta \frac{1}{N}\sum\limits_{i=1}^N x_i y_i + \frac{1}{N}\sum\limits_{i=1}^N y_i^2 \
\end{align}
End of explanation
plt.figure(figsize=(7,5))
plt.plot([m,m],[-0.1,1.0],ls='dashed',c='r',label='m')
plt.plot(thetas, true_loss(thetas),c='k',label='$\mathcal{L}_D$')
sample_sizes = [5,10,20,50]
colors = ['b','y','g','c','m']
for ss,color in zip(sample_sizes,colors):
x,y = p.sample(ss)
plt.plot(thetas, approx_loss(thetas,x,y), color, label="{} Samples".format(ss))
plt.legend()
plt.ylim(0.0,s**2 + s)
plt.ylabel("Mean Squared Error")
plt.xlim(m-1.0,m+1.0)
plt.xlabel("$m$")
plt.show()
Explanation: Now we can examine what different approximations to the loss surface look like relative to the true loss surface
End of explanation
def partial_L(theta,x,y):
x_sq = np.power(x,2).mean()
xy = (x*y).mean()
return 2*(theta*x_sq-xy)
def partial_L_True(theta):
return 2/3*(theta-m)
def approx_argmin(x,y):
x_sq = np.power(x,2).sum()
xy = (x*y).sum()
return xy/x_sq
err_tot = []
err_test = []
grad = False
for samp_size in range(10,1000):
x,y = p.sample(samp_size)
index = int(samp_size*0.7)
theta_min = approx_argmin(x[:index], y[:index])
pL_test = partial_L(theta_min, x[index:], y[index:])
pL_tot = partial_L(theta_min, x, y)
pL_T = partial_L_True(theta_min)
err_tot.append(np.abs(pL_T-pL_tot))
err_test.append(np.abs(pL_T-pL_test))
plt.plot(err_tot, c='r', label='Err Tot')
plt.plot(err_test, c='b', label='Err Test', alpha=0.5)
plt.ylabel('Error')
plt.xlabel('Sample Size')
plt.title('Error in Approx Loss Geometry at $\\theta^*$')
plt.legend()
plt.show()
err_tot = []
err_test = []
err_train = []
grad = False
for samp_size in range(10,1000):
x,y = p.sample(samp_size)
index = int(samp_size*0.7)
theta_min = approx_argmin(x[:index], y[:index])
loss_test = approx_loss(theta_min, x[index:], y[index:])
loss_tot = approx_loss(theta_min, x, y)
loss_train = approx_loss(theta_min, x[:index], y[:index])
loss_true = true_loss(theta_min)
err_test.append(np.abs(loss_true-loss_test))
err_tot.append(np.abs(loss_true-loss_tot))
err_train.append(np.abs(loss_true-loss_train))
plt.plot(err_tot, c='r', label='Err Tot')
plt.plot(err_test, c='b', label='Err Test', alpha=0.5)
plt.plot(err_train, c='g', label='Err Train', alpha=0.5)
plt.ylabel('Error')
plt.xlabel('Sample Size')
plt.title('Error in Approx Loss Geometry at $\\theta^*$')
plt.legend()
plt.show()
Explanation: Now let us approximate the derivative of the true loss surface. We begin with calculating the true derivative
\begin{align}
L_{\text{True}}(\theta) &= \frac{1}{3}(\theta - m)^2 + s^2 \
\frac{\partial L_{\text{True}}}{\partial \theta}(\theta) &= \frac{2}{3}(\theta-m)
\end{align}
Meanwhile the derivative of the approximate loss function is
\begin{align}
L(\theta) &= \theta^2 \frac{1}{N}\sum\limits_{i=1}^N x_i^2 -2\theta \frac{1}{N}\sum\limits_{i=1}^N x_i y_i + \frac{1}{N}\sum\limits_{i=1}^N y_i^2 \
\frac{\partial L}{\partial \theta}(\theta) &= \theta \frac{2}{N}\sum\limits_{i=1}^N x_i^2 -\frac{2}{N}\sum\limits_{i=1}^N x_i y_i
\end{align}
and we immediately see that our approximation to the minimum of this loss function is
$$\theta^* = \frac{\sum\limits_{i=1}^N x_i y_i}{\sum\limits_{i=1}^N x_i^2}$$
We will draw samples in the range from $[0,N]$. For each sample size, we will use 70% of it to approximate the minimum of that surface using $\theta^*$ above. Then we will approximate the gradient of the loss surface at $\theta^*$ using the remaining 30% of the data, as well as with all N of the data. For each case we will calculate the error between it and the true gradient of the loss surface. We will then plot both of the error rates as a function of sample size.
End of explanation |
191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Convolutional Generative Adversarial Network (DCGAN) Tutorial
This tutorial walks through an implementation of DCGAN as described in Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.
To learn more about generative adversarial networks, see my Medium post on them.
Step1: We will be using the MNIST dataset. input_data is a library that downloads the dataset and unzips it automatically. It can be acquired from Github here
Step2: Helper Functions
Step3: Defining the Adversarial Networks
Generator Network
The generator takes a vector of random numbers and transforms it into a 32x32 image. Each layer in the network involves a strided transpose convolution, batch normalization, and rectified nonlinearity. Tensorflow's slim library allows us to easily define each of these layers.
Step4: Discriminator Network
The discriminator network takes as input a 32x32 image and transforms it into a single valued probability of being generated from real-world data. Again we use tf.slim to define the convolutional layers, batch normalization, and weight initialization.
Step5: Connecting them together
Step6: Training the network
Now that we have fully defined our network, it is time to train it!
Step7: Using a trained network
Once we have a trained model saved, we may want to use it to generate new images, and explore the representation it has learned. | Python Code:
#Import the libraries we will need.
import tensorflow as tf
import numpy as np
import input_data
import matplotlib.pyplot as plt
import tensorflow.contrib.slim as slim
import os
import scipy.misc
import scipy
Explanation: Deep Convolutional Generative Adversarial Network (DCGAN) Tutorial
This tutorial walks through an implementation of DCGAN as described in Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.
To learn more about generative adversarial networks, see my Medium post on them.
End of explanation
mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)
Explanation: We will be using the MNIST dataset. input_data is a library that downloads the dataset and unzips it automatically. It can be acquired from Github here: https://gist.github.com/awjuliani/1d21151bc17362bf6738c3dc02f37906
End of explanation
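As a quick check of what the loader returns (an illustrative sketch; the flattened 28x28 images are reshaped and padded to 32x32 later, inside the training loop):
sample_batch, _ = mnist.train.next_batch(1)
print(sample_batch.shape)  #(1, 784) -- each row is a flattened 28x28 image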
#This function performs a leaky relu activation, which is needed for the discriminator network.
def lrelu(x, leak=0.2, name="lrelu"):
with tf.variable_scope(name):
f1 = 0.5 * (1 + leak)
f2 = 0.5 * (1 - leak)
return f1 * x + f2 * abs(x)
#The below functions are taken from carpedm20's implementation https://github.com/carpedm20/DCGAN-tensorflow
#They allow for saving sample images from the generator to follow progress
def save_images(images, size, image_path):
return imsave(inverse_transform(images), size, image_path)
def imsave(images, size, path):
return scipy.misc.imsave(path, merge(images, size))
def inverse_transform(images):
return (images+1.)/2.
def merge(images, size):
h, w = images.shape[1], images.shape[2]
img = np.zeros((h * size[0], w * size[1]))
for idx, image in enumerate(images):
i = idx % size[1]
j = idx // size[1]
img[j*h:j*h+h, i*w:i*w+w] = image
return img
Explanation: Helper Functions
End of explanation
def generator(z):
zP = slim.fully_connected(z,4*4*256,normalizer_fn=slim.batch_norm,\
activation_fn=tf.nn.relu,scope='g_project',weights_initializer=initializer)
zCon = tf.reshape(zP,[-1,4,4,256])
gen1 = slim.convolution2d_transpose(\
zCon,num_outputs=64,kernel_size=[5,5],stride=[2,2],\
padding="SAME",normalizer_fn=slim.batch_norm,\
activation_fn=tf.nn.relu,scope='g_conv1', weights_initializer=initializer)
gen2 = slim.convolution2d_transpose(\
gen1,num_outputs=32,kernel_size=[5,5],stride=[2,2],\
padding="SAME",normalizer_fn=slim.batch_norm,\
activation_fn=tf.nn.relu,scope='g_conv2', weights_initializer=initializer)
gen3 = slim.convolution2d_transpose(\
gen2,num_outputs=16,kernel_size=[5,5],stride=[2,2],\
padding="SAME",normalizer_fn=slim.batch_norm,\
activation_fn=tf.nn.relu,scope='g_conv3', weights_initializer=initializer)
g_out = slim.convolution2d_transpose(\
gen3,num_outputs=1,kernel_size=[32,32],padding="SAME",\
biases_initializer=None,activation_fn=tf.nn.tanh,\
scope='g_out', weights_initializer=initializer)
return g_out
Explanation: Defining the Adversarial Networks
Generator Network
The generator takes a vector of random numbers and transforms it into a 32x32 image. Each layer in the network involves a strided transpose convolution, batch normalization, and rectified nonlinearity. Tensorflow's slim library allows us to easily define each of these layers.
End of explanation
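For orientation, the spatial resolution is intended to grow through the generator as follows (a sketch of the expected shapes given the strides above, not output from the original notebook):
#z (100,) -> zCon (4, 4, 256) -> gen1 (8, 8, 64) -> gen2 (16, 16, 32) -> gen3 (32, 32, 16) -> g_out (32, 32, 1)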
def discriminator(bottom, reuse=False):
dis1 = slim.convolution2d(bottom,16,[4,4],stride=[2,2],padding="SAME",\
biases_initializer=None,activation_fn=lrelu,\
reuse=reuse,scope='d_conv1',weights_initializer=initializer)
dis2 = slim.convolution2d(dis1,32,[4,4],stride=[2,2],padding="SAME",\
normalizer_fn=slim.batch_norm,activation_fn=lrelu,\
reuse=reuse,scope='d_conv2', weights_initializer=initializer)
dis3 = slim.convolution2d(dis2,64,[4,4],stride=[2,2],padding="SAME",\
normalizer_fn=slim.batch_norm,activation_fn=lrelu,\
reuse=reuse,scope='d_conv3',weights_initializer=initializer)
d_out = slim.fully_connected(slim.flatten(dis3),1,activation_fn=tf.nn.sigmoid,\
reuse=reuse,scope='d_out', weights_initializer=initializer)
return d_out
Explanation: Discriminator Network
The discriminator network takes as input a 32x32 image and transforms it into a single valued probability of being generated from real-world data. Again we use tf.slim to define the convolutional layers, batch normalization, and weight initialization.
End of explanation
tf.reset_default_graph()
z_size = 100 #Size of z vector used for generator.
#This initializaer is used to initialize all the weights of the network.
initializer = tf.truncated_normal_initializer(stddev=0.02)
#These two placeholders are used for input into the generator and discriminator, respectively.
z_in = tf.placeholder(shape=[None,z_size],dtype=tf.float32) #Random vector
real_in = tf.placeholder(shape=[None,32,32,1],dtype=tf.float32) #Real images
Gz = generator(z_in) #Generates images from random z vectors
Dx = discriminator(real_in) #Produces probabilities for real images
Dg = discriminator(Gz,reuse=True) #Produces probabilities for generator images
#These functions together define the optimization objective of the GAN.
d_loss = -tf.reduce_mean(tf.log(Dx) + tf.log(1.-Dg)) #This optimizes the discriminator.
g_loss = -tf.reduce_mean(tf.log(Dg)) #This optimizes the generator.
tvars = tf.trainable_variables()
#The below code is responsible for applying gradient descent to update the GAN.
trainerD = tf.train.AdamOptimizer(learning_rate=0.0002,beta1=0.5)
trainerG = tf.train.AdamOptimizer(learning_rate=0.0002,beta1=0.5)
d_grads = trainerD.compute_gradients(d_loss,tvars[9:]) #Only update the weights for the discriminator network.
g_grads = trainerG.compute_gradients(g_loss,tvars[0:9]) #Only update the weights for the generator network.
update_D = trainerD.apply_gradients(d_grads)
update_G = trainerG.apply_gradients(g_grads)
Explanation: Connecting them together
End of explanation
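An optional sanity check of the wiring (an illustrative sketch, not part of the original tutorial):
print(Gz.get_shape())  #expected (?, 32, 32, 1) -- generated images
print(Dx.get_shape())  #expected (?, 1) -- probability that a real image is real
print(Dg.get_shape())  #expected (?, 1) -- probability that a generated image is real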
batch_size = 128 #Size of image batch to apply at each iteration.
iterations = 500000 #Total number of iterations to use.
sample_directory = './figs' #Directory to save sample images from generator in.
model_directory = './models' #Directory to save trained model to.
init = tf.initialize_all_variables()
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(init)
for i in range(iterations):
zs = np.random.uniform(-1.0,1.0,size=[batch_size,z_size]).astype(np.float32) #Generate a random z batch
xs,_ = mnist.train.next_batch(batch_size) #Draw a sample batch from MNIST dataset.
xs = (np.reshape(xs,[batch_size,28,28,1]) - 0.5) * 2.0 #Transform it to be between -1 and 1
xs = np.lib.pad(xs, ((0,0),(2,2),(2,2),(0,0)),'constant', constant_values=(-1, -1)) #Pad the images so the are 32x32
_,dLoss = sess.run([update_D,d_loss],feed_dict={z_in:zs,real_in:xs}) #Update the discriminator
_,gLoss = sess.run([update_G,g_loss],feed_dict={z_in:zs}) #Update the generator, twice for good measure.
_,gLoss = sess.run([update_G,g_loss],feed_dict={z_in:zs})
if i % 10 == 0:
print "Gen Loss: " + str(gLoss) + " Disc Loss: " + str(dLoss)
z2 = np.random.uniform(-1.0,1.0,size=[batch_size,z_size]).astype(np.float32) #Generate another z batch
newZ = sess.run(Gz,feed_dict={z_in:z2}) #Use new z to get sample images from generator.
if not os.path.exists(sample_directory):
os.makedirs(sample_directory)
#Save sample generator images for viewing training progress.
save_images(np.reshape(newZ[0:36],[36,32,32]),[6,6],sample_directory+'/fig'+str(i)+'.png')
if i % 1000 == 0 and i != 0:
if not os.path.exists(model_directory):
os.makedirs(model_directory)
saver.save(sess,model_directory+'/model-'+str(i)+'.cptk')
print "Saved Model"
Explanation: Training the network
Now that we have fully defined our network, it is time to train it!
End of explanation
sample_directory = './figs' #Directory to save sample images from generator in.
model_directory = './models' #Directory to load trained model from.
batch_size_sample = 36
init = tf.initialize_all_variables()
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(init)
#Reload the model.
print 'Loading Model...'
ckpt = tf.train.get_checkpoint_state(model_directory)
saver.restore(sess,ckpt.model_checkpoint_path)
zs = np.random.uniform(-1.0,1.0,size=[batch_size_sample,z_size]).astype(np.float32) #Generate a random z batch
newZ = sess.run(Gz,feed_dict={z_in:zs}) #Use new z to get sample images from generator.
if not os.path.exists(sample_directory):
os.makedirs(sample_directory)
save_images(np.reshape(newZ[0:batch_size_sample],[36,32,32]),[6,6],sample_directory+'/fig_test.png') #Save a grid of generated samples.
Explanation: Using a trained network
Once we have a trained model saved, we may want to use it to generate new images, and explore the representation it has learned.
End of explanation |
192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-3', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: UHH
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:42
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
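For illustration, a completed property cell would follow the same pattern as above (the value below is a hypothetical placeholder, not a real CMIP6 model name):
# DOC.set_id('cmip6.toplevel.key_properties.model_name')
# DOC.set_value("EXAMPLE-MODEL-1-0")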
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LAB 6
Step1: Step 3
Step2: Run the below cell, and copy the output into the Google Cloud Shell | Python Code:
%%bash
# Check your project name
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
import os
os.environ["BUCKET"] = "your-bucket-id-here" # Recommended: use your project name
Explanation: LAB 6: Serving baby weight predictions
Learning Objectives
Deploy a web application that consumes your model service on Cloud AI Platform.
Introduction
Verify that you have previously Trained your Keras model and Deployed it predicting with Keras model on Cloud AI Platform. If not, go back to 5a_train_keras_ai_platform_babyweight.ipynb and 5b_deploy_keras_ai_platform_babyweight.ipynb create them.
In the previous notebook, we deployed our model to CAIP. In this notebook, we'll make a Flask app to show how our models can interact with a web application which could be deployed to App Engine with the Flexible Environment.
Step 1: Review Flask App code in application folder
Let's start with what our users will see. In the application folder, we have prebuilt the components for web application. In the templates folder, the <a href="application/templates/index.html">index.html</a> file is the visual GUI our users will make predictions with.
It works by using an HTML form to make a POST request to our server, passing along the values captured by the input tags.
The form will render a little strangely in the notebook since the notebook environment does not run javascript, nor do we have our web server up and running. Let's get to that!
Step 2: Set environment variables
End of explanation
%%bash
gsutil -m rm -r gs://$BUCKET/baby_app
gsutil -m cp -r application/ gs://$BUCKET/baby_app
Explanation: Step 3: Complete application code in application/main.py
We can set up our server with python using Flask. Below, we've already built out most of the application for you.
The @app.route() decorator defines a function to handle web requests. Let's say our website is www.example.com. With how our @app.route("/") function is defined, our server will render our <a href="application/templates/index.html">index.html</a> file when users go to www.example.com/ (which is the default route for a website).
So, when a user pings our server with www.example.com/predict, they would use @app.route("/predict", methods=["POST"]) to make a prediction. The data that gets sent over the internet isn't a dictionary, but a string like below:
name1=value1&name2=value2 where name corresponds to the name on the input tag of our html form, and the value is what the user entered. Thankfully, Flask makes it easy to transform this string into a dictionary with request.form.to_dict(), but we still need to transform the data into a format our model expects. We've done this with the gender2str and the plurality2str utility functions.
Ok! Let's set up a webserver to take in the form inputs, process them into features, and send these features to our model on Cloud AI Platform to generate predictions to serve back to users.
Fill in the TODO comments in <a href="application/main.py">application/main.py</a>. Give it a go first and review the solutions folder if you get stuck.
Note: AppEngine test configurations have already been set for you in the file <a href="application/app.yaml">application/app.yaml</a>. Review app.yaml documentation for additional configuration options.
Step 4: Deploy application
So how do we know that it works? We'll have to deploy our website and find out! Notebooks aren't made for website deployment, so we'll move our operation to the Google Cloud Shell.
By default, the shell doesn't have Flask installed, so copy over the following command to install it.
python3 -m pip install --user Flask==0.12.1
Next, we'll need to copy our web app to the Cloud Shell. We can use Google Cloud Storage as an inbetween.
End of explanation
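For orientation, here is a minimal sketch of the kind of predict handler the lab asks you to complete. It is not the official solution: the feature names, the gender2str/plurality2str helpers and the Cloud AI Platform call are placeholders for whatever application/main.py actually uses — fill in the TODOs from the lab itself.
```python
# Minimal, hypothetical sketch of the /predict handler -- not the official solution.
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/")
def index():
    # Default route: render the form the user fills in.
    return render_template("index.html")

@app.route("/predict", methods=["POST"])
def predict():
    # "name1=value1&name2=value2" from the HTML form, turned into a dict.
    data = request.form.to_dict()
    # Transform form values into the feature format the model expects
    # (the lab code does this with helpers such as gender2str / plurality2str).
    features = {
        "is_male": data.get("is_male"),                    # placeholder feature names
        "mother_age": float(data.get("mother_age", 0)),
        "plurality": data.get("plurality"),
        "gestation_weeks": float(data.get("gestation_weeks", 0)),
    }
    # TODO: send `features` to the deployed Cloud AI Platform model
    # and pass the predicted weight back to the template.
    return render_template("index.html", prediction=None)
```
End of explanation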
%%bash
echo rm -r baby_app/
echo mkdir baby_app/
echo gsutil cp -r gs://$BUCKET/baby_app ./
echo python3 baby_app/main.py
Explanation: Run the below cell, and copy the output into the Google Cloud Shell
End of explanation |
194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of physical analysis with IPython
Step1: Reading simulation data
Step2: Looking at data, taking first rows
Step3: Plotting some feature
Step5: Adding interesting features
for each particle we compute its P, PT and energy (under the assumption that it is a kaon)
Step6: Adding features of $B$
We are able to compute 4-momentum of B, given 4-momenta of produced particles
Step7: looking at result (with added features)
Step9: Dalitz plot
computing Dalitz variables and checking that there are no resonances in the simulation
Step11: Working with real data
Preselection
Step12: adding features
Step14: additional preselection
which uses added features
Step15: Adding Dalitz plot for real data
Step16: Ordering dalitz variables
let's reorder particles so the first Dalitz variable is always greater
Step17: Binned dalitz plot
let's plot the same in bins, as physicists like
Step18: Looking at local CP-asymmetry
adding one more column
Step19: Leaving only signal region in mass
Step20: counting number of positively and negatively charged B particles
Step21: Estimating significance of deviation (approximately)
we will assume that $N_{+} + N_{-}$ is fixed, and under the null hypothesis each observation is positive or negative with $p=0.5$.
So, under these assumptions $N_{+}$ is distributed as binomial random variable.
Step23: Subtracting background
using RooFit to fit mixture of exponential (bkg) and gaussian (signal) distributions.
Based on the fit, we estimate number of events in mass region
Step24: Computing asymmetry with subtracted background | Python Code:
%pylab inline
import numpy
import pandas
import root_numpy
folder = '/moosefs/notebook/datasets/Manchester_tutorial/'
Explanation: Example of physical analysis with IPython
End of explanation
def load_data(filenames, preselection=None):
# not setting treename, it's detected automatically
data = root_numpy.root2array(filenames, selection=preselection)
return pandas.DataFrame(data)
sim_data = load_data(folder + 'PhaseSpaceSimulation.root', preselection=None)
Explanation: Reading simulation data
End of explanation
sim_data.head()
Explanation: Looking at data, taking first rows:
End of explanation
# hist data will contain all information from histogram
hist_data = hist(sim_data.H1_PX, bins=40, range=[-100000, 100000])
Explanation: Plotting some feature
End of explanation
def add_momenta_and_energy(dataframe, prefix, compute_energy=False):
    """Add P, PT and E of the particle with the given prefix, e.g. 'H1_'."""
pt_squared = dataframe[prefix + 'PX'] ** 2. + dataframe[prefix + 'PY'] ** 2.
dataframe[prefix + 'PT'] = numpy.sqrt(pt_squared)
p_squared = pt_squared + dataframe[prefix + 'PZ'] ** 2.
dataframe[prefix + 'P'] = numpy.sqrt(p_squared)
if compute_energy:
E_squared = p_squared + dataframe[prefix + 'M'] ** 2.
dataframe[prefix + 'E'] = numpy.sqrt(E_squared)
for prefix in ['H1_', 'H2_', 'H3_']:
# setting Kaon mass to each of particles:
sim_data[prefix + 'M'] = 493
add_momenta_and_energy(sim_data, prefix, compute_energy=True)
Explanation: Adding interesting features
for each particle we compute its P, PT and energy (under the assumption that it is a kaon)
End of explanation
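Explicitly, with the kaon mass $m_K \approx 493$ MeV assigned above, the quantities computed for each particle are
$$
p_T=\sqrt{p_x^2+p_y^2},\qquad p=\sqrt{p_x^2+p_y^2+p_z^2},\qquad E=\sqrt{p^2+m_K^2}.
$$
End of explanation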
def add_B_features(data):
for axis in ['PX', 'PY', 'PZ', 'E']:
data['B_' + axis] = data['H1_' + axis] + data['H2_' + axis] + data['H3_' + axis]
add_momenta_and_energy(data, prefix='B_', compute_energy=False)
data['B_M'] = data.eval('(B_E ** 2 - B_PX ** 2 - B_PY ** 2 - B_PZ ** 2) ** 0.5')
add_B_features(sim_data)
Explanation: Adding features of $B$
We are able to compute 4-momentum of B, given 4-momenta of produced particles
End of explanation
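The B candidate's 4-momentum is simply the sum of the daughters', and its invariant mass follows from it:
$$
E_B=\sum_{i=1}^{3}E_i,\qquad \vec p_B=\sum_{i=1}^{3}\vec p_i,\qquad M_B=\sqrt{E_B^2-\vert\vec p_B\vert^2}.
$$
End of explanation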
sim_data.head()
_ = hist(sim_data['B_M'], range=[5260, 5280], bins=100)
Explanation: looking at result (with added features)
End of explanation
def add_dalitz_variables(data):
    """Add Dalitz variables; the daughter products are named H1, H2, H3."""
for i, j in [(1, 2), (1, 3), (2, 3)]:
momentum = pandas.DataFrame()
for axis in ['E', 'PX', 'PY', 'PZ']:
momentum[axis] = data['H{}_{}'.format(i, axis)] + data['H{}_{}'.format(j, axis)]
data['M_{}{}'.format(i,j)] = momentum.eval('(E ** 2 - PX ** 2 - PY ** 2 - PZ ** 2) ** 0.5')
add_dalitz_variables(sim_data)
scatter(sim_data.M_12, sim_data.M_13, alpha=0.05)
Explanation: Dalitz plot
computing Dalitz variables and checking that there are no resonances in the simulation
End of explanation
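For reference, the Dalitz variables computed above are the pairwise invariant masses of the daughters,
$$
M_{ij}^2=(E_i+E_j)^2-\vert\vec p_i+\vec p_j\vert^2,\qquad (i,j)\in\{(1,2),(1,3),(2,3)\}.
$$
End of explanation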
preselection = """
H1_IPChi2 > 1 && H2_IPChi2 > 1 && H3_IPChi2 > 1
&& H1_IPChi2 + H2_IPChi2 + H3_IPChi2 > 500
&& B_VertexChi2 < 12
&& H1_ProbPi < 0.5 && H2_ProbPi < 0.5 && H3_ProbPi < 0.5
&& H1_ProbK > 0.9 && H2_ProbK > 0.9 && H3_ProbK > 0.9
&& !H1_isMuon
&& !H2_isMuon
&& !H3_isMuon
"""
preselection = preselection.replace('\n', '')
real_data = load_data([folder + 'B2HHH_MagnetDown.root', folder + 'B2HHH_MagnetUp.root'], preselection=preselection)
Explanation: Working with real data
Preselection
End of explanation
for prefix in ['H1_', 'H2_', 'H3_']:
# setting Kaon mass:
real_data[prefix + 'M'] = 493
add_momenta_and_energy(real_data, prefix, compute_energy=True)
add_B_features(real_data)
_ = hist(real_data.B_M, bins=50)
Explanation: adding features
End of explanation
momentum_preselection = """
(H1_PT > 100) && (H2_PT > 100) && (H3_PT > 100)
&& (H1_PT + H2_PT + H3_PT > 4500)
&& H1_P > 1500 && H2_P > 1500 && H3_P > 1500
&& B_M > 5050 && B_M < 6300
"""
momentum_preselection = momentum_preselection.replace('\n', '').replace('&&', '&')
real_data = real_data.query(momentum_preselection)
_ = hist(real_data.B_M, bins=50)
Explanation: additional preselection
which uses added features
End of explanation
add_dalitz_variables(real_data)
# check that 2nd and 3rd particle have same sign
numpy.mean(real_data.H2_Charge * real_data.H3_Charge)
scatter(real_data['M_12'], real_data['M_13'], alpha=0.1)
xlabel('M_12'), ylabel('M_13')
show()
# lazy way for plots
real_data.plot('M_12', 'M_13', kind='scatter', alpha=0.1)
Explanation: Adding Dalitz plot for real data
End of explanation
scatter(numpy.maximum(real_data['M_12'], real_data['M_13']),
numpy.minimum(real_data['M_12'], real_data['M_13']),
alpha=0.1)
xlabel('max(M12, M13)'), ylabel('min(M12, M13)')
show()
Explanation: Ordering dalitz variables
let's reorder particles so the first Dalitz variable is always greater
End of explanation
hist2d(numpy.maximum(real_data['M_12'], real_data['M_13']),
numpy.minimum(real_data['M_12'], real_data['M_13']),
bins=8)
colorbar()
xlabel('max(M12, M13)'), ylabel('min(M12, M13)')
show()
Explanation: Binned dalitz plot
let's plot the same in bins, as physicists like
End of explanation
real_data['B_Charge'] = real_data.H1_Charge + real_data.H2_Charge + real_data.H3_Charge
hist(real_data.B_M[real_data.B_Charge == +1].values, bins=30, range=[5050, 5500], alpha=0.5)
hist(real_data.B_M[real_data.B_Charge == -1].values, bins=30, range=[5050, 5500], alpha=0.5)
pass
Explanation: Looking at local CP-asymmetry
adding one more column
End of explanation
signal_charge = real_data.query('B_M > 5200 & B_M < 5320').B_Charge
Explanation: Leaving only signal region in mass
End of explanation
n_plus = numpy.sum(signal_charge == +1)
n_minus = numpy.sum(signal_charge == -1)
print n_plus, n_minus, n_plus - n_minus
print 'asymmetry = ', (n_plus - n_minus) / float(n_plus + n_minus)
Explanation: counting number of positively and negatively charged B particles
End of explanation
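The quantity printed above is the raw charge asymmetry
$$
A_{\mathrm{raw}}=\frac{N^{+}-N^{-}}{N^{+}+N^{-}}.
$$
End of explanation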
# computing properties of n_plus according to H_0 hypothesis.
n_mean = len(signal_charge) * 0.5
n_std = numpy.sqrt(len(signal_charge) * 0.25)
print 'significance = ', (n_plus - n_mean) / n_std
Explanation: Estimating significance of deviation (approximately)
we will assume that $N_{+} + N_{-}$ is fixed, and that under the null hypothesis each observed candidate is positive or negative with probability $p=0.5$.
So, under these assumptions $N_{+}$ is distributed as a binomial random variable.
End of explanation
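If scipy is available (it is not imported elsewhere in this notebook), the Gaussian approximation above can be cross-checked with an exact binomial test; a small sketch:
```python
# Sketch of an exact cross-check of the Gaussian approximation, assuming scipy is installed.
from scipy import stats

n_total = n_plus + n_minus
# two-sided p-value for a split at least this asymmetric under p = 0.5
p_value = stats.binom_test(n_plus, n_total, p=0.5)
print(p_value)
```
End of explanation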
# Lots of ROOT imports for fitting and plotting
from rootpy import asrootpy, log
from rootpy.plotting import Hist, Canvas, set_style, get_style
from ROOT import (RooFit, RooRealVar, RooDataHist, RooArgList, RooArgSet,
RooAddPdf, TLatex, RooGaussian, RooExponential )
def compute_n_signal_by_fitting(data_for_fit):
    """Compute the amount of signal within the region [x_min, x_max].

    Returns: canvas with the fit, n_signal in the mass region.
    """
# fit limits
hmin, hmax = data_for_fit.min(), data_for_fit.max()
hist = Hist(100, hmin, hmax, drawstyle='EP')
root_numpy.fill_hist(hist, data_for_fit)
# Declare observable x
x = RooRealVar("x","x", hmin, hmax)
dh = RooDataHist("dh","dh", RooArgList(x), RooFit.Import(hist))
frame = x.frame(RooFit.Title("D^{0} mass"))
# this will show histogram data points on canvas
dh.plotOn(frame, RooFit.MarkerColor(2), RooFit.MarkerSize(0.9), RooFit.MarkerStyle(21))
# Signal PDF
mean = RooRealVar("mean", "mean", 5300, 0, 6000)
width = RooRealVar("width", "width", 10, 0, 100)
gauss = RooGaussian("gauss","gauss", x, mean, width)
# Background PDF
cc = RooRealVar("cc", "cc", -0.01, -100, 100)
exp = RooExponential("exp", "exp", x, cc)
# Combined model
d0_rate = RooRealVar("D0_rate", "rate of D0", 0.9, 0, 1)
model = RooAddPdf("model","exp+gauss",RooArgList(gauss, exp), RooArgList(d0_rate))
# Fitting model
result = asrootpy(model.fitTo(dh, RooFit.Save(True)))
mass = result.final_params['mean'].value
hwhm = result.final_params['width'].value
# this will show fit overlay on canvas
model.plotOn(frame, RooFit.Components("exp"), RooFit.LineStyle(3), RooFit.LineColor(3))
model.plotOn(frame, RooFit.LineColor(4))
# Draw all frames on a canvas
canvas = Canvas()
frame.GetXaxis().SetTitle("m_{K#pi#pi} [GeV]")
frame.GetXaxis().SetTitleOffset(1.2)
frame.Draw()
# Draw the mass and error label
label = TLatex(0.6, 0.8, "m = {0:.2f} #pm {1:.2f} GeV".format(mass, hwhm))
label.SetNDC()
label.Draw()
# Calculate the rate of background below the signal curve inside (x_min, x_max)
x_min, x_max = 5200, 5330
x.setRange(hmin, hmax)
bkg_total = exp.getNorm(RooArgSet(x))
sig_total = gauss.getNorm(RooArgSet(x))
x.setRange(x_min, x_max)
bkg_level = exp.getNorm(RooArgSet(x))
sig_level = gauss.getNorm(RooArgSet(x))
bkg_ratio = bkg_level / bkg_total
sig_ratio = sig_level / sig_total
n_elements = hist.GetEntries()
# TODO - normally get parameter form fit_result
sig_part = (d0_rate.getVal())
bck_part = (1 - d0_rate.getVal())
# estimating ratio of signal and background
bck_sig_ratio = (bkg_ratio * n_elements * bck_part) / (sig_ratio * n_elements * sig_part)
# n_events in (x_min, x_max)
n_events_in_mass_region = numpy.sum((data_for_fit > x_min) & (data_for_fit < x_max))
n_signal_in_mass_region = n_events_in_mass_region / (1. + bck_sig_ratio)
return canvas, n_signal_in_mass_region
B_mass_range = [5050, 5500]
mass_for_fitting_plus = real_data.query('(B_M > 5050) & (B_M < 5500) & (B_Charge == +1)').B_M
mass_for_fitting_minus = real_data.query('(B_M > 5050) & (B_M < 5500) & (B_Charge == -1)').B_M
canvas_plus, n_positive_signal = compute_n_signal_by_fitting(mass_for_fitting_plus)
canvas_plus
canvas_minus, n_negative_signal = compute_n_signal_by_fitting(mass_for_fitting_minus)
canvas_minus
Explanation: Subtracting background
using RooFit to fit a mixture of exponential (background) and Gaussian (signal) distributions.
Based on the fit, we estimate the number of signal events in the mass region
End of explanation
print n_positive_signal, n_negative_signal
print (n_positive_signal - n_negative_signal) / (n_positive_signal + n_negative_signal)
n_mean = 0.5 * (n_positive_signal + n_negative_signal)
n_std = numpy.sqrt(0.25 * (n_positive_signal + n_negative_signal))
print (n_positive_signal - n_mean) / n_std
Explanation: Computing asymmetry with subtracted background
End of explanation |
195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python for Probability, Statistics, and Machine Learning
Step1: Useful Inequalities
In practice, few quantities can be analytically calculated. Some knowledge
of bounding inequalities helps find the ballpark for potential solutions. This
sections discusses three key inequalities that are important for
probability, statistics, and machine learning.
Markov's Inequality
Let $X$ be a non-negative random variable
and suppose that $\mathbb{E}(X) < \infty$. Then,
for any $t>0$,
$$
\mathbb{P}(X>t)\leq \frac{\mathbb{E}(X)}{t}
$$
This is a foundational inequality that is
used as a stepping stone to other inequalities. It is easy
to prove. Because $X>0$, we have the following,
$$
\begin{align}
\mathbb{E}(X)&=\int_0^\infty x f_x(x)dx =\underbrace{\int_0^t x f_x(x)dx}_{\text{omit this}}+\int_t^\infty x f_x(x)dx \\
&\ge\int_t^\infty x f_x(x)dx \ge t\int_t^\infty x f_x(x)dx = t \mathbb{P}(X>t)
\end{align}
$$
The step that establishes the inequality is the part where the
$\int_0^t x f_x(x)dx$ is omitted. For a particular $f_x(x)$ that my be
concentrated around the $[0,t]$ interval, this could be a lot to throw out.
For that reason, the Markov Inequality is considered a loose inequality,
meaning that there is a substantial gap between both sides of the inequality.
For example, as shown in Figure, the
$\chi^2$ distribution has a lot of its mass on the left, which would be omitted
in the Markov Inequality. Figure shows
the two curves established by the Markov Inequality. The gray shaded region is
the gap between the two terms and indicates that looseness of the bound
(fatter shaded region) for this case.
<!-- dom
Step2: To get the left side of the Chebyshev inequality, we
have to write this out as the following conditional probability,
Step3: This is because of certain limitations in the statistics module at
this point in its development regarding the absolute value function. We could
take the above expression, which is a function of $t$ and attempt to compute
the integral, but that would take a very long time (the expression is very long
and complicated, which is why we did not print it out above). This is because
Sympy is a pure-python module that does not utilize any C-level optimizations
under the hood. In this situation, it's better to use the built-in cumulative
density function as in the following (after some rearrangement of the terms),
Step4: To plot this, we can evaluated at a variety of t values by using
the .subs substitution method, but it is more convenient to use the
lambdify method to convert the expression to a function.
Step5: Then, we can evaluate this function using something like | Python Code:
from pprint import pprint
import textwrap
import sys, re
Explanation: Python for Probability, Statistics, and Machine Learning
End of explanation
import sympy
import sympy.stats as ss
t=sympy.symbols('t',real=True)
x=ss.ChiSquared('x',1)
Explanation: Useful Inequalities
In practice, few quantities can be analytically calculated. Some knowledge
of bounding inequalities helps find the ballpark for potential solutions. This
section discusses three key inequalities that are important for
probability, statistics, and machine learning.
Markov's Inequality
Let $X$ be a non-negative random variable
and suppose that $\mathbb{E}(X) < \infty$. Then,
for any $t>0$,
$$
\mathbb{P}(X>t)\leq \frac{\mathbb{E}(X)}{t}
$$
This is a foundational inequality that is
used as a stepping stone to other inequalities. It is easy
to prove. Because $X>0$, we have the following,
$$
\begin{align}
\mathbb{E}(X)&=\int_0^\infty x f_x(x)dx =\underbrace{\int_0^t x f_x(x)dx}_{\text{omit this}}+\int_t^\infty x f_x(x)dx \\
&\ge\int_t^\infty x f_x(x)dx \ge t\int_t^\infty x f_x(x)dx = t \mathbb{P}(X>t)
\end{align}
$$
The step that establishes the inequality is the part where the
$\int_0^t x f_x(x)dx$ is omitted. For a particular $f_x(x)$ that may be
concentrated around the $[0,t]$ interval, this could be a lot to throw out.
For that reason, the Markov Inequality is considered a loose inequality,
meaning that there is a substantial gap between both sides of the inequality.
For example, as shown in Figure, the
$\chi^2$ distribution has a lot of its mass on the left, which would be omitted
in the Markov Inequality. Figure shows
the two curves established by the Markov Inequality. The gray shaded region is
the gap between the two terms and indicates that looseness of the bound
(fatter shaded region) for this case.
<!-- dom:FIGURE: [fig-probability/ProbabilityInequalities_001.png, width=500 frac=0.75] The $\chi_1^2$ density has much of its weight on the left, which is excluded in the establishment of the Markov Inequality. <div id="fig:ProbabilityInequalities_001"></div> -->
<!-- begin figure -->
<div id="fig:ProbabilityInequalities_001"></div>
<p>The $\chi_1^2$ density has much of its weight on the left, which is excluded in the establishment of the Markov Inequality.</p>
<img src="fig-probability/ProbabilityInequalities_001.png" width=500>
<!-- end figure -->
<!-- dom:FIGURE: [fig-probability/ProbabilityInequalities_002.png, width=500 frac=0.75] The shaded area shows the region between the curves on either side of the Markov Inequality. <div id="fig:ProbabilityInequalities_002"></div> -->
<!-- begin figure -->
<div id="fig:ProbabilityInequalities_002"></div>
<p>The shaded area shows the region between the curves on either side of the Markov Inequality.</p>
<img src="fig-probability/ProbabilityInequalities_002.png" width=500>
<!-- end figure -->
Chebyshev's Inequality
Chebyshev's Inequality drops out directly from the Markov Inequality. Let
$\mu=\mathbb{E}(X)$ and $\sigma^2=\mathbb{V}(X)$. Then, we have
$$
\mathbb{P}(\vert X-\mu\vert \ge t) \le \frac{\sigma^2}{t^2}
$$
Note that if we normalize so that $Z=(X-\mu)/\sigma$, we
have $\mathbb{P}(\vert Z\vert \ge k) \le 1/k^2$. In particular,
$\mathbb{P}(\vert Z\vert \ge 2) \le 1/4$. We can illustrate this
inequality using Sympy statistics module,
End of explanation
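Before running the computation, note what the bound predicts here: for the $\chi_1^2$ variable defined above, $\mu=1$ and $\sigma^2=2$, so Chebyshev gives
$$
\mathbb{P}(\vert X-1\vert \ge t)\le \frac{2}{t^2},
$$
e.g. at $t=2$ the right-hand side is $1/2$, while the exact left-hand side computed below comes out much smaller.
End of explanation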
r = ss.P((x-1) > t,x>1)+ss.P(-(x-1) > t,x<1)
Explanation: To get the left side of the Chebyshev inequality, we
have to write this out as the following conditional probability,
End of explanation
w=(1-ss.cdf(x)(t+1))+ss.cdf(x)(1-t)
Explanation: This is because of certain limitations in the statistics module at
this point in its development regarding the absolute value function. We could
take the above expression, which is a function of $t$ and attempt to compute
the integral, but that would take a very long time (the expression is very long
and complicated, which is why we did not print it out above). This is because
Sympy is a pure-python module that does not utilize any C-level optimizations
under the hood. In this situation, it's better to use the built-in cumulative
density function as in the following (after some rearrangement of the terms),
End of explanation
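As an aside (not in the original text), if scipy is available the same tail probability can be checked numerically against the Sympy expression:
```python
# Hypothetical numerical cross-check of w(t), assuming scipy is installed.
from scipy.stats import chi2

def w_numeric(t):
    # P(X > t + 1) + P(X < 1 - t) for X ~ chi-square with 1 degree of freedom
    return chi2.sf(t + 1.0, df=1) + chi2.cdf(1.0 - t, df=1)

print(w_numeric(1.0))
```
End of explanation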
fw=sympy.lambdify(t,w)
Explanation: To plot this, we can evaluate it at a variety of t values by using
the .subs substitution method, but it is more convenient to use the
lambdify method to convert the expression to a function.
End of explanation
map(fw,[0,1,2,3,4])
Explanation: Then, we can evaluate this function using something like
End of explanation |
196 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Instalação do Pygame
O programa abaixo usa o Pygame e descreve como se pode criar uma janela de titulo "O PyGame é fixe!"
Para isso crie um novo ficheiro no menu File do IDLE, seleccionando new window. Escreva o código abaixo no editor do IDLE e grave com o nome fixepygame.py. Depois corra o programa premindo F5 ou seleccionando Run > Run Module do menu no topo do editor de ficheiros.
Step1: O processo de carregamento e inicialização do pygame é muito simples. O pygame é uma colecção de módulos numa única biblioteca. Alguns destes módulos estão escritos em C, outros em Python. Alguns são opcionais, podendo não estar sempre presentes.
Init - Iniciar o motor de jogo
Antes de fazer seja o que for deve iniciar o motor de jogo. A forma mais usual de o fazer é através de
Step2: pygame.draw.circle - desenha um círculo centrada num ponto
pygame.draw.circle(Surface, color, pos, radius, width=0)
Step3: pygame.mouse
Módulo do pygame que permite trabalhar com o rato. As funções descritas permitem obter o estado actual do rato, podendo alterar o cursor do sistema para o rato.
Quando um ecrã é criado, a fila de eventos começa a receber os eventos gerados pelo rato. Os botões do rato geram eventos pygame.MOUSEBUTTONDOWN e pygame.MOUSEBUTTONUP sempre que são premidos ou soltos. Estes eventos contêm um atributo que permite distinguir qual o botão que foi premido. A roda do rato gera um evento pygame.MOUSEBUTTONDOWN sempre que for rodada. Sempre que o rato é movido este gera um evento pygame.MOUSEMOTION.
Step4: pygame.mouse
pygame.mouse.get_pressed - Devolve o estado dos botões do rato.
pygame.mouse.get_pressed()
Step5: Problema 2
Step6: Uso do teclado
Os scripts apresentados tratam eventos QUIT, que é fundamental, a menos que queira janelas imortais! O pygame permite tratar outros eventos como o movimento do rato e teclas primidas.
Nos exemplos anteriores chamamos pygame.event.get() para aceder à lista dos eventos. Dando possibilidade a tratar através do ciclo for todos os eventos à sua medida.
Os objectos na lista pygame.event.get() contêm atributos próprios que permitem descreve-los. A única coisa comum aos eventos é o seu tipo. Na lista abaixo apresentamos os diferentes tipos de eventos.
Evento | Objectivos | Parâmetros
-------|------------|-----------
QUIT | Fecho de janela |
ACTIVEEVENT | Janela está activa ou escondida | gain, state
KEYDOWN | A tecla foi primida | unicode, key, mod
KEYUP | A tecla foi largada | pos, button
MOUSEMOTION | O rato foi movido | pos, rel, buttons
MOUSEBUTTONDOWN | Um botão do rato foi primido | pos, button
MOUSEBUTTONUP | Um botão foi largado | pos, button
VIDEORESIZE | A janela foi redimensionada| size, w, h
USEREVENT | Um evento do utilizador | code
O teclado gera eventos KEYDOWN quando uma tecla é primida e KEYUP quando uma tecla é largada. Abaixo tratamos estes eventos
Step7: pygame.image - Módulo para transferir imagens.
O módulo contém funções para carregar e gravar imagens, bem como para transformar superfícies para formatos usados noutros módulos.
Note que, não existe um classe imagem, uma imagem é carregada como uma superfície. A classe superfície pode ser manipulada (desenhar linhas, definir pixeis, capturar regiões, etc.).
Por defeito este módulo apenas carrega imagens BMP não comprimidas. Quando definido com suporte total de imagens, a função pygame.image.load permite carregar imagens em formato
Step8: Exemplo
Step9: Problema 3
Step10: Problema 4 | Python Code:
# Importa o modulo com o pygame
import pygame_sdl2 as pygame
# Inicia o motor de jogo
pygame.init()
# Indica as dimensões da janela
size=[700,500]
screen=pygame.display.set_mode(size)
# Escreve titulo na Janela
pygame.display.set_caption("O PyGame é fixe!")
# Usado para controlar a velocidade com que a janela é actualizada
clock=pygame.time.Clock()
#Para entrar em loop, até que a janela seja fechada.
done=False
while done==False:
for event in pygame.event.get(): # O utilizador actua na janela
if event.type == pygame.QUIT: # Se o utilizador escolheu fechar janela
done=True # o jogo deve terminar.
# Actualiza a janela para esta nova composição da cena.
pygame.display.flip()
# Limita a 20 o número de frames por segundo
clock.tick(20)
# Para terminar o motor de jogo
pygame.quit ()
Explanation: Installing Pygame
The program below uses Pygame and shows how to create a window titled "O PyGame é fixe!"
To do this, create a new file from the IDLE File menu by selecting New Window. Type the code below into the IDLE editor and save it as fixepygame.py. Then run the program by pressing F5 or selecting Run > Run Module from the menu at the top of the editor.
End of explanation
import pygame
from random import *
white = ( 255, 255, 255)
def desenha_fundo(screen):
# Limpa a janela e define a cor do fundo
screen.fill(white)
pygame.init()
screen = pygame.display.set_mode((640, 480))
done = False
clock=pygame.time.Clock()
while done == False:
for event in pygame.event.get():
if event.type == pygame.QUIT:
done = True
desenha_fundo(screen)
for count in range(10):
random_color = (randint(0,255), randint(0,255), randint(0,255))
random_pos = (randint(0,639), randint(0,479))
random_size = (639-randint(random_pos[0],639), 479-randint (random_pos[1],479))
pygame.draw.rect(screen, random_color, [random_pos,random_size])
pygame.display.flip()
clock.tick(2)
pygame.quit()
Explanation: Loading and initialising pygame is very simple. Pygame is a collection of modules in a single library. Some of these modules are written in C, others in Python. Some are optional and may not always be present.
Init - Starting the game engine
Before doing anything else you must start the game engine. The most common way to do this is:
pygame.init()
This initialises all the modules for you. Not every module needs to be initialised, but this way all the ones that do will be.
Quit - Shutting down the game engine
Initialised modules are shut down with the quit() method.
A script does not have to end with pygame.quit(). However, when it is run from IDLE, if the script does not shut down the pygame modules the graphics window is not closed and the editor may hang.
Modules
Pygame has the following modules, of which we will use only a very small number of functions:
cdrom - Controls access to CD-ROM drives, allowing audio playback.
cursors - Loads images for the mouse cursor.
display - Controls the graphics window.
draw - Draws graphic primitives (lines, circles, ...).
event - Event handling and control of the event queue.
font - Creates and renders text fonts.
image - Loads and saves images.
joystick - Joystick input.
key - Keyboard input.
mouse - Mouse input.
movie - Plays mpeg multimedia files.
sndarray - Manipulates sounds with the Numeric module.
surfarray - Manipulates images with the Numeric module.
time - Time control.
transform - Graphical transformations: scaling, rotation and flipping.
pygame.draw
pygame.draw is the pygame module for drawing shapes. We highlight some of its methods below.
pygame.draw.rect - draws a rectangle
pygame.draw.rect(Surface, color, Rect, width=0): return Rect
where Rect describes a rectangular area. The width argument is the stroke width; if width is zero the rectangle is filled.
End of explanation
# Importa o modulo com o pygame
import pygame
# Definição das cores a usar
black = ( 0, 0, 0)
white = ( 255, 255, 255)
blue = ( 50, 50, 255)
green = ( 0, 255, 0)
dkgreen = ( 0, 100, 0)
red = ( 255, 0, 0)
purple = (0xBF,0x0F,0xB5)
brown = (0x55,0x33,0x00)
# Função para desenhar Fundo
def desenha_fundo(screen):
# Limpa a janela e define a cor do fundo
screen.fill(white)
# Inicia o motor de jogo
pygame.init()
# Indica as dimensões da janela
size=[700,500]
screen=pygame.display.set_mode(size)
#Para entrar em loop, até que a janela seja fechada.
done=False
# Usado para controlar a velocidade com que a janela é actualizada
clock=pygame.time.Clock()
while done==False:
for event in pygame.event.get(): # O utilizador actua na janela
if event.type == pygame.QUIT: # Se o utilizador escolheu fechar janela
done=True # o jogo deve terminar.
desenha_fundo(screen)
# Determina as coordenadas do ponteiro. O resultado de pygame.mouse.get_pos()
# é uma lista com dois elementos [x,y].
pos = pygame.mouse.get_pos()
# As coordenadas do ponteiro.
print pos
# Actualiza a janela para esta nova composição da cena.
pygame.display.flip()
# Limita a 20 o número de frames por segundo
clock.tick(1)
pygame.quit()
# Inicia o motor de jogo
pygame.init()
# Indica as dimensões da janela
size=[700,500]
screen=pygame.display.set_mode(size)
#Para entrar em loop, até que a janela seja fechada.
done=False
def desenha_elemento(screen,x,y):
pygame.draw.rect(screen,green,[0+x,0+y,30,10],0)
pygame.draw.circle(screen,black,[15+x,5+y],7,0)
while done==False:
for event in pygame.event.get(): # O utilizador actua na janela
if event.type == pygame.QUIT: # Se o utilizador escolheu fechar janela
done=True # o jogo deve terminar.
desenha_fundo(screen)
# Determina as coordenadas do ponteiro. O resultado de pygame.mouse.get_pos()
# é uma lista com dois elementos [x,y].
pos = pygame.mouse.get_pos()
# A componente x e y das coordenadas do ponteiro.
x=pos[0]
y=pos[1]
# Desenha elemento onde está o ponteiro.
desenha_elemento(screen,x,y)
# Actualiza a janela para esta nova composição da cena.
pygame.display.flip()
# Limita a 20 o número de frames por segundo
clock.tick(20)
# Para terminar o motor de jogo
pygame.quit ()
Explanation: pygame.draw.circle - draws a circle centred on a point
pygame.draw.circle(Surface, color, pos, radius, width=0): return Rect
The pos argument defines the centre of the circle and radius its radius. The width argument sets the stroke width; if width is zero the circle is filled.
pygame.draw.line - draws a line segment
pygame.draw.line(Surface, color, start_pos, end_pos, width=1): return Rect
The start_pos and end_pos arguments define the start and end points of the line.
End of explanation
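A small usage sketch of pygame.draw.line (the event loop is omitted here, so the window closes as soon as the script ends):
```python
# Minimal sketch: a 3-pixel-wide black line from (100, 100) to (300, 200).
import pygame
pygame.init()
screen = pygame.display.set_mode([700, 500])
screen.fill((255, 255, 255))
pygame.draw.line(screen, (0, 0, 0), [100, 100], [300, 200], 3)
pygame.display.flip()
```
End of explanation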
# Inicia o motor de jogo
pygame.init()
# Indica as dimensões da janela
size=[700,500]
screen=pygame.display.set_mode(size)
def desenha_elemento(screen,x,y):
pygame.draw.rect(screen,green,[0+x,0+y,30,10],0)
pygame.draw.circle(screen,black,[15+x,5+y],7,0)
# Função para desenhar Fundo
def desenha_fundo(screen):
# Limpa a janela e define a cor do fundo
screen.fill(white)
#Para entrar em loop, até que a janela seja fechada.
done=False
while done==False:
desenha_fundo(screen)
for event in pygame.event.get(): # O utilizador actua na janela
if event.type == pygame.QUIT: # Se o utilizador escolheu fechar janela
done=True # o jogo deve terminar.
if event.type == pygame.MOUSEMOTION:
# A componente x e y das coordenadas do ponteiro.
x=event.pos[0]
y=event.pos[1]
# Desenha elemento onde está o ponteiro.
desenha_elemento(screen,x,y)
# Actualiza a janela para esta nova composição da cena.
pygame.display.flip()
# Limita a 20 o número de frames por segundo
clock.tick(20)
# Para terminar o motor de jogo
pygame.quit ()
Explanation: pygame.mouse
The pygame module for working with the mouse. The functions described here let you query the current state of the mouse and change the system mouse cursor.
Once a display is created, the event queue starts receiving the events generated by the mouse. The mouse buttons generate pygame.MOUSEBUTTONDOWN and pygame.MOUSEBUTTONUP events whenever they are pressed or released. These events carry an attribute identifying which button was pressed. The mouse wheel generates a pygame.MOUSEBUTTONDOWN event whenever it is rolled. Whenever the mouse is moved it generates a pygame.MOUSEMOTION event.
End of explanation
# Importa o modulo com o pygame
import pygame
# Definição das cores a usar
black = ( 0, 0, 0)
white = ( 255, 255, 255)
blue = ( 50, 50, 255)
green = ( 0, 255, 0)
dkgreen = ( 0, 100, 0)
red = ( 255, 0, 0)
purple = (0xBF,0x0F,0xB5)
brown = (0x55,0x33,0x00)
# Função para desenhar Fundo
def desenha_fundo(screen):
# Limpa a janela e define a cor do fundo
screen.fill(white)
# Desena um circulo
def desenha_elemento(screen,x,y):
pygame.draw.circle(screen,black,[x+5,y+5],10,0)
# Inicia o motor de jogo
pygame.init()
# Indica as dimensões da janela
size=[700,500]
screen=pygame.display.set_mode(size)
# Inicia lista de pontos
point_list = []
#Para entrar em loop, até que a janela seja fechada.
done=False
# Usado para controlar a velocidade com que a janela é actualizada
clock=pygame.time.Clock()
while done==False:
for event in pygame.event.get(): # O utilizador actua na janela
if event.type == pygame.QUIT: # Se o utilizador escolheu fechar janela
done=True # o jogo deve terminar.
desenha_fundo(screen)
# Determina as coordenadas do ponteiro. O resultado de pygame.mouse.get_pos()
# é uma lista com dois elementos [x,y].
pos = pygame.mouse.get_pos()
# A componente x e y das coordenadas do ponteiro.
x=pos[0]
y=pos[1]
mousestat= pygame.mouse.get_pressed() #teclas do rato
# adiciona ponto à lista de pontos
if mousestat[0]: # tecla esquerda do rato
point_list.append((x,y))
# Desenha circuloas nos pontos seleccionados.
for (x,y) in point_list:
desenha_elemento(screen,x,y)
# Actualiza a janela para esta nova composição da cena.
pygame.display.flip()
# Limita a 20 o número de frames por segundo
clock.tick(20)
# Para terminar o motor de jogo
pygame.quit ()
Explanation: pygame.mouse
pygame.mouse.get_pressed - Returns the state of the mouse buttons.
pygame.mouse.get_pressed(): return (button1, button2, button3)
Returns a sequence of booleans representing the state of all the mouse buttons. A True value means the button was pressed at the time the function was called.
pygame.mouse.get_pos - Returns the current position of the mouse cursor.
pygame.mouse.get_pos(): return (x, y)
Returns the coordinates of the mouse cursor. The coordinates are relative to the top-left corner of the display. Although the cursor may be outside the window, the coordinates returned by this function are clamped to window coordinates.
pygame.mouse.get_rel - Amount of movement
pygame.mouse.get_rel(): return (x, y)
Returns the displacement in x and y since the last call to the function, limited to the edges of the window.
pygame.mouse.set_pos - Positions the mouse cursor
pygame.mouse.set_pos([x, y]): return None
Sets the mouse position in the window. If the mouse cursor is visible it jumps to the new position.
pygame.mouse.set_visible - Hides or shows the cursor
pygame.mouse.set_visible(bool): return bool
If the argument is True the mouse cursor becomes visible. Returns the previous state.
Problem 1:
In the graphics window, a circle should be drawn centred on each point you select with the mouse.
End of explanation
# Importa o modulo com o pygame
import pygame
# Definição das cores a usar
black = ( 0, 0, 0)
white = ( 255, 255, 255)
blue = ( 50, 50, 255)
green = ( 0, 255, 0)
dkgreen = ( 0, 100, 0)
red = ( 255, 0, 0)
purple = (0xBF,0x0F,0xB5)
brown = (0x55,0x33,0x00)
# Função para desenhar Fundo
def desenha_fundo(screen):
# Limpa a janela e define a cor do fundo
screen.fill(white)
# Desenha linha
def desenha_elemento(screen,x_1,y_1,x_2,y_2):
pygame.draw.line(screen,black,[x_1,y_1],[x_2,y_2],3)
# Inicia o motor de jogo
pygame.init()
# Indica as dimensões da janela
size=[700,500]
screen=pygame.display.set_mode(size)
# Inicia lista de pontos
point_list = []
#Para entrar em loop, até que a janela seja fechada.
done=False
# Usado para controlar a velocidade com que a janela é actualizada
clock=pygame.time.Clock()
while done==False:
for event in pygame.event.get(): # O utilizador actua na janela
if event.type == pygame.QUIT: # Se o utilizador escolheu fechar janela
done=True # o jogo deve terminar.
desenha_fundo(screen)
# Determina as coordenadas do ponteiro. O resultado de pygame.mouse.get_pos()
# é uma lista com dois elementos [x,y].
pos = pygame.mouse.get_pos()
# A componente x e y das coordenadas do ponteiro.
x=pos[0]
y=pos[1]
mousestat= pygame.mouse.get_pressed() #teclas do rato
# adiciona ponto à lista de pontos
if mousestat[0]: # tecla esquerda do rato
point_list.append((x,y))
# remove ponto da lista de pontos
if mousestat[2] and len(point_list)>0: # tecla direita do rato
point_list.pop()
# Desenha linha entre pontos.
if len(point_list)>1:
p_1 = point_list[0]
for p_2 in point_list[1:]:
(x_p_1,y_p_1)=p_1
(x_p_2,y_p_2)=p_2
desenha_elemento(screen,x_p_1,y_p_1,x_p_2,y_p_2)
p_1=p_2
# Actualiza a janela para esta nova composição da cena.
pygame.display.flip()
# Limita a 20 o número de frames por segundo
clock.tick(20)
# Para terminar o motor de jogo
pygame.quit()
Explanation: Problem 2:
In the graphics window, every time you select a point its coordinates should be stored. Use these points to draw a polygonal line. Note that, on the second point selected, a line should be drawn from the first point to the second; on the third point, a line should be added from the second to the third point, and so on. Use the left mouse button to select points.
Additionally, make it so that the last segment drawn can be removed with the right mouse button.
End of explanation
-
Explanation: Using the keyboard
The scripts presented so far handle QUIT events, which is essential unless you want immortal windows! Pygame also lets you handle other events, such as mouse movement and key presses.
In the previous examples we called pygame.event.get() to access the list of pending events, which makes it possible to handle every event as needed inside a for loop.
The objects in the pygame.event.get() list carry attributes that describe them. The only thing common to all events is their type. The list below shows the different event types.
Event | Purpose | Parameters
-------|------------|-----------
QUIT | The window was closed |
ACTIVEEVENT | The window was activated or hidden | gain, state
KEYDOWN | A key was pressed | unicode, key, mod
KEYUP | A key was released | key, mod
MOUSEMOTION | The mouse was moved | pos, rel, buttons
MOUSEBUTTONDOWN | A mouse button was pressed | pos, button
MOUSEBUTTONUP | A mouse button was released | pos, button
VIDEORESIZE | The window was resized | size, w, h
USEREVENT | A user-defined event | code
The keyboard generates KEYDOWN events when a key is pressed and KEYUP events when it is released. These events are handled in the examples below:
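A minimal hedged sketch of KEYDOWN/KEYUP handling inside the usual event loop (the loop and the done flag are assumed to exist as in the examples above; the key constants are the standard pygame names):
for event in pygame.event.get():
    if event.type == pygame.QUIT:
        done = True
    elif event.type == pygame.KEYDOWN:
        if event.key == pygame.K_LEFT:
            print("left arrow pressed")    # event.unicode and event.mod are also available
    elif event.type == pygame.KEYUP:
        if event.key == pygame.K_LEFT:
            print("left arrow released")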
End of explanation
# pygame.font
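# Função auxiliar: desenha o texto 'txt' na posição 'pos', usando a fonte por
# omissão do pygame com tamanho 25; assume que pygame.init() já foi chamado e
# que 'screen' e a cor 'black' estão definidos noutro ponto do programa.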
def texto(pos,txt):
font = pygame.font.Font(None, 25)
text = font.render(txt,True,black)
screen.blit(text, pos)
Explanation: pygame.image - Module for transferring images.
The module contains functions to load and save images, as well as to convert surfaces to the formats used by other modules.
Note that there is no image class: an image is loaded as a surface. The Surface class can then be manipulated (drawing lines, setting pixels, capturing regions, etc.).
By default this module only loads uncompressed BMP images. When built with full image support, the pygame.image.load function can load images in the following formats:
JPG, PNG, GIF (non-animated), BMP, PCX, TGA (uncompressed), TIF, PBM
and can save images in the BMP, TGA, PNG and JPEG formats.
In this module we will only use the pygame.image.load function.
pygame.image.load - Loads an image from a file.
pygame.image.load(filename): return Surface
pygame.image.load(fileobj, namehint=""): return Surface
Loads an image from a file. The argument can be a filename or a file-like object.
Pygame automatically determines the image type (e.g. GIF or bitmap) and creates a new surface from the data. In some cases it needs to know the file extension (e.g., GIF images have the ".gif" extension); if you pass a file-like object, you may need to supply the original filename as the second argument.
The returned surface keeps the same colour format and alpha transparency as the file it came from. Normally you will want to call Surface.convert to normalise the pixel format.
For per-pixel alpha transparency, as in .png images, call the convert_alpha() method after loading so that transparent pixels are preserved.
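For instance, a hedged one-line sketch (the file name fno.png is borrowed from the example further below):
ovni = pygame.image.load("fno.png").convert_alpha()  # keeps the PNG's per-pixel alpha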
Another way to define transparency in an image is the Surface.set_colorkey method: we choose the colour in the surface that should be treated as transparent. For example
jogador = pygame.image.load("jogador1.gif").convert()
jogador.set_colorkey((255,255,255))
makes every pixel of colour (255,255,255) in the jogador surface transparent.
For portability across platforms (Linux, Windows, ...) you should build paths with os.path.join(). For example
superficie = pygame.image.load(os.path.join('data', 'blabla.png'))
Although loading an image creates a surface, sometimes you need to create a generic surface, usually to process images or to build shapes inside the program. For example
superficie = pygame.Surface((256, 256))
creates a generic 256x256 surface, initially black. Images can then be composed onto such surfaces.
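A hedged sketch of composing onto such a generic surface; it assumes a 'screen' created with pygame.display.set_mode, and the sprite name is borrowed from the example below:
canvas = pygame.Surface((256, 256))                    # generic surface, initially black
canvas.fill((255, 255, 255))                           # repaint it white
sprite = pygame.image.load("fno.png").convert_alpha()  # load a sprite with alpha
canvas.blit(sprite, (10, 10))                          # compose the sprite onto the canvas
screen.blit(canvas, (0, 0))                            # and the canvas onto the window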
End of explanation
import pygame
black = [ 0, 0, 0]
white = [255,255,255]
blue = [ 0, 0,255]
green = [ 0,255, 0]
red = [255, 0, 0]
pygame.init()
def texto(pos,txt):
font = pygame.font.Font(None, 25)
text = font.render(txt,True,black)
screen.blit(text, pos)
screen=pygame.display.set_mode([400,500])
pygame.display.set_caption("MDP Game - OVNI -2012/13")
done=False
xfno=-50 #inicia x do OVNI
yfno=50 #inicia y do OVNI
xOld=0 #inicia coordenadas
yOld=0
pontos = 0 #inicia pontos
balas=[]
clock = pygame.time.Clock()
background = pygame.image.load("sky.jpg").convert()
fno = pygame.image.load("fno.png").convert() #imagem do OVNI
fno.set_colorkey(white) # Define Cor que se assume como transparente
jogador=[] #inicia a lista de imagens do jogador
jogador.append(pygame.image.load("play1.gif").convert())
jogador.append(pygame.image.load("play2.gif").convert())
jogador.append(pygame.image.load("play3.gif").convert())
balas_sound = pygame.mixer.Sound("pickup.wav") #som da bala a sair
ponto_sound = pygame.mixer.Sound("SCREECH.wav") #som do OVNI
while done==False:
for event in pygame.event.get():
if event.type == pygame.QUIT:
done=True
screen.fill(blue)
screen.blit(background, [0,0])
# Rato
pos = pygame.mouse.get_pos() #coordenadas do rato
#posição do rato
xR=pos[0]
yR=pos[1]
mousestat= pygame.mouse.get_pressed() #teclas do rato
# Jogador:
if yR<250: # Limita yR
yR=250
elif yR>450:
yR=450
if xR<0: # Limita xR
xR=0
elif xR>350:
xR=350
if xR<xOld:
screen.blit(jogador[1], [xR,yR]) #rato a mover para a esquerda (play2.gif)
elif xR>xOld:
screen.blit(jogador[2], [xR,yR]) #rato a mover para a direita (play3.gif)
else:
screen.blit(jogador[0], [xR,yR]) #está parado ou move na vertical
xOld=xR #actualiza posição
yOld=yR
# adiciona bala na lista balas
if mousestat[0]:
balas.append([xR+30,yR])
balas_sound.play()
if xfno>450: #chega ao fim da janela
xfno=-50
else:
xfno=xfno+15
screen.blit(fno, [xfno,yfno]) #desenha OVNI
#move bala
newbalas=[]
#update da posição das bolas na lista balas
for bala in balas:
if bala[1]>0: #a bala ainda está dentro da janela (não chegou ao topo)
#print(newbalas)
xB=bala[0] #coordenadas da bola
yB=bala[1]
pygame.draw.circle(screen,black,[xB,yB],5) #pinta bola
if xB>xfno and xB<xfno+50 and yB>yfno and yB<yfno+30: #bola bate no OVNI
pygame.draw.circle(screen,red,[xB,yB],20)
pontos=pontos+1
ponto_sound.play()
else:
bala[1]= bala[1]-10 #move uma bala para cima
newbalas.append(bala) #mantém a bala na lista (as que saem ou acertam não são copiadas)
balas=newbalas
texto([50,450],'Pontos: '+str(pontos)) #imprime pontos
pygame.display.flip()
clock.tick(10)
pygame.quit ()
Explanation: Example: The invader:
Start pygame with a 400x500 graphics window.
Add a background image (sky.jpg).
sky.jpg|
--------|
<img src="sky.jpg" width = 200/>|
Move the image of a UFO (fno.png) across the top of the window, from left to right, with white imposed as the transparent colour.
<img src="fno.png" width = 100/>
Use the lower half of the window to move the player's image, controlled by the mouse. The images are in play1.gif, play2.gif and play3.gif, and the image shown must depend on the mouse movement: play1.gif when the mouse is not moving, play2.gif when the mouse moves from right to left, and play3.gif when it moves from left to right.
play1.gif | play2.gif | play3.gif
-------|--------|--------
<img src="play1.gif" width = 100/>|<img src="play2.gif" width = 100/>|<img src="play3.gif" width = 100/>
The player fires a circle upwards whenever the left mouse button is used. Note: projectiles that are no longer visible must be removed from the auxiliary structure.
Put a score counter on the screen and increment it whenever a projectile intersects the UFO (see the note on collision testing after this list).
Whenever a shot is fired, and whenever a projectile hits the UFO, different sounds must be played.
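Side note (an addition, not part of the original exercise): the code above tests the hit with explicit coordinate comparisons; an equivalent formulation uses pygame.Rect, assuming the UFO sprite is roughly 50x30 pixels as in that manual test:
ovni_rect = pygame.Rect(xfno, yfno, 50, 30)   # bounding box of the UFO
if ovni_rect.collidepoint(xB, yB):            # same check as the explicit comparisons
    pontos = pontos + 1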
End of explanation
import pygame
# inicialização do módulo pygame
pygame.init()
# criação de uma janela
#Cor
white=[255,255,255]
black=[0,0,0]
largura = 17*30
altura = 17*30
# pygame.font
def texto(screen,pos,txt):
font = pygame.font.Font(None, 25)
text = font.render(txt,True,black)
screen.blit(text, pos)
# Função para desenhar Fundo
def desenha_fundo(screen):
# Limpa a janela e define a cor do fundo
screen.fill(white)
numero=1
for linha in range(17):
desenha_elemento(screen,0,linha*30,largura,linha*30)
desenha_elemento(screen,linha*30,0,linha*30,altura)
for coluna in range(17):
texto(screen,(coluna*30,linha*30+5),str(numero))
numero=numero+1
# Desenha linha
def desenha_elemento(screen,x_1,y_1,x_2,y_2):
pygame.draw.line(screen,black,[x_1,y_1],[x_2,y_2],3)
tamanho = (largura, altura)
screen = pygame.display.set_mode(tamanho)
#Para entrar em loop, até que a janela seja fechada.
done=False
# Usado para controlar a velocidade com que a janela é actualizada
clock=pygame.time.Clock()
while done==False:
for event in pygame.event.get(): # O utilizador actua na janela
if event.type == pygame.QUIT: # Se o utilizador escolheu fechar janela
done=True # o jogo deve terminar.
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_LEFT:
move_x = -1
elif event.key == pygame.K_RIGHT:
move_x = +1
elif event.key == pygame.K_UP:
move_y = -1
elif event.key == pygame.K_DOWN:
move_y = +1
desenha_fundo(screen)
# Actualiza a janela para esta nova composição da cena.
pygame.display.flip()
# Limita a 20 o número de frames por segundo
clock.tick(20)
# Para terminar o motor de jogo
pygame.quit()
Explanation: Problem 3:
Create a 17x17 board of 30x30-pixel squares, where each square has an associated number.
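A hedged extra, not required by the exercise: with the numbering used in the code above (labels drawn row by row, starting at 1, in 30-pixel cells), the number of the square under a pixel (x, y) can be recovered as:
coluna = x // 30
linha = y // 30
numero = linha * 17 + coluna + 1   # matches the order in which desenha_fundo draws the labels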
End of explanation
import pygame
# inicialização do módulo pygame
pygame.init()
# criação de uma janela
#Cor
white=[255,255,255]
black=[0,0,0]
largura = 17*30
altura = 17*30
# pygame.font
def texto(screen,pos,txt):
font = pygame.font.Font(None, 25)
text = font.render(txt,True,black)
screen.blit(text, pos)
# Função para desenhar Fundo
def desenha_fundo(screen):
# Limpa a janela e define a cor do fundo
screen.fill(white)
numero=1
for linha in range(17):
desenha_elemento(screen,0,linha*30,largura,linha*30)
desenha_elemento(screen,linha*30,0,linha*30,altura)
for coluna in range(17):
texto(screen,(coluna*30,linha*30+5),str(numero))
numero=numero+1
# Desenha linha
def desenha_elemento(screen,x_1,y_1,x_2,y_2):
pygame.draw.line(screen,black,[x_1,y_1],[x_2,y_2],3)
jogador=[] #incia lista de imagens do utilizador
jogador.append(pygame.image.load("play1.gif").convert())
jogador.append(pygame.image.load("play2.gif").convert())
jogador.append(pygame.image.load("play3.gif").convert())
tamanho = (largura, altura)
screen = pygame.display.set_mode(tamanho)
#Para entrar em loop, até que a janela seja fechada.
done=False
# Usado para controlar a velocidade com que a janela é actualizada
clock=pygame.time.Clock()
move_coluna=0
move_linha=0
pos_x=100
pos_y=100
while done==False:
for event in pygame.event.get(): # O utilizador actua na janela
if event.type == pygame.QUIT: # Se o utilizador escolheu fechar janela
done=True # o jogo deve terminar.
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_LEFT:
move_coluna = -1
elif event.key == pygame.K_RIGHT:
move_coluna = +1
elif event.key == pygame.K_UP:
move_linha = -1
elif event.key == pygame.K_DOWN:
move_linha = +1
desenha_fundo(screen)
pos_x=pos_x+move_coluna*30 # avança um quadrado (30 pixéis) por tecla premida
pos_y=pos_y+move_linha*30
move_coluna=0 # repõe o deslocamento, para mover apenas uma posição por tecla
move_linha=0
screen.blit(jogador[0], (pos_x,pos_y))
# Actualiza a janela para esta nova composição da cena.
pygame.display.flip()
# Limita a 20 o número de frames por segundo
clock.tick(20)
# Para terminar o motor de jogo
pygame.quit()
# aula 08 - pygame
# importar o módulo pygame
# se a execução deste import em python 3 der algum erro é porque o pygame
# não está bem instalado
import pygame
# inicialização do módulo pygame
pygame.init()
# criação de uma janela
largura = 600
altura = 400
tamanho = (largura, altura)
janela = pygame.display.set_mode(tamanho)
# número de imagens por segundo
frame_rate = 10
# relógio para controlo do frame rate
clock = pygame.time.Clock()
# nova imagem a mostrar
nova_frame = None
# cor de fundo (tuplo com os valores Red, Green, Blue entre 0 e 255)
RED = (255, 0, 0)
# número da frame
numero = 0
# posicao do número da frame
posicao_numero_x = int(largura/2)
posicao_numero_y = int(altura/2)
# tipo de letra do número da frame
# tamanho
font_size = 25
# fonte pré-definida
font = pygame.font.Font(None, font_size)
# suavização
antialias = True
# cor (tuplo com os valores Red, Green, Blue entre 0 e 255)
WHITE = (255, 255, 255)
# ler uma imagem (aqui um PNG) para uma superfície
isel_surface = pygame.image.load("fno.png").convert()
# variável de controlo do ciclo principal
fim = False
# função de processamento de eventos pygame
def processar_eventos_pygame():
global fim
global numero
global posicao_numero_x
global posicao_numero_y
# ciclo para processar os eventos pygame
for event in pygame.event.get():
# evento fechar a janela gráfica
if event.type == pygame.QUIT:
fim = True
# evento mouse click botão esquerdo (código = 1)
elif event.type == pygame.MOUSEBUTTONUP and event.button == 1:
(x, y) = event.pos
numero = 0
posicao_numero_x = x
posicao_numero_y = y
# evento teclas setas
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_LEFT:
posicao_numero_x = posicao_numero_x - 10
elif event.key == pygame.K_RIGHT:
posicao_numero_x = posicao_numero_x + 10
elif event.key == pygame.K_UP:
posicao_numero_y = posicao_numero_y - 10
elif event.key == pygame.K_DOWN:
posicao_numero_y = posicao_numero_y + 10
# função para construção de uma nova frame de raíz. Não é usada
# nenhuma informação gráfica da frame anterior.
def construir_nova_frame():
global nova_frame
# criar uma nova frame
nova_frame = pygame.Surface(tamanho)
# cor de fundo
nova_frame.fill(RED)
# inserir uma imagem
nova_frame.blit(isel_surface, (0, 0))
# inserir o número da frame
numero_surface = font.render(str(numero), antialias, WHITE)
nova_frame.blit(numero_surface, (posicao_numero_x, posicao_numero_y))
# ciclo principal
while not(fim):
processar_eventos_pygame()
construir_nova_frame()
# actualizar pygame com a nova imagem
janela.blit(nova_frame, (0, 0))
pygame.display.flip()
# esperar o tempo necessário para cumprir o frame rate
# só deve ser chamado uma vez por frame
clock.tick(frame_rate)
# actualizar numero da frame
numero = numero + 1
# fechar a janela. pygame.quit() só é necessário quando o programa é executado
# a partir do IDLE, que mantém uma referência para a janela aberta; sem esta
# chamada a janela não fecha quando o programa termina.
pygame.quit()
Explanation: Problem 4:
Control a piece using the cursor keys. Every time a key is pressed the player must move one position on the board, horizontally or vertically. Whenever a key is used the player faces the selected direction, represented by the image play1.gif, play2.gif or play3.gif.
End of explanation |
197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tuples
Tuples are just like lists, but you can't change their values. The values that you give it first up, are the values that you are stuck with for the rest of the program. Again, each value is numbered starting from zero, for easy reference. Example
Step1: Lists
Lists are what they seem - a list of values. Each one of them is numbered, starting from zero - the first one is numbered zero, the second 1, the third 2, etc. You can remove values from the list, and add new values to the end. Example
Step2: Dictionaries
Dictionaries are similar to what their name suggests - a dictionary. In a dictionary, you have an 'index' of words, and for each of them a definition. In python, the word is called a 'key', and the definition a 'value'. The values in a dictionary aren't numbered - tare similar to what their name suggests - a dictionary. In a dictionary, you have an 'index' of words, and for each of them a definition. In python, the word is called a 'key', and the definition a 'value'. The values in a dictionary aren't numbered - they aren't in any specific order, either - the key does the same thing. You can add, remove, and modify the values in dictionaries. Example | Python Code:
months = ('January','February','March','April','May','June',\
'July','August','September','October','November','December')
Explanation: Tuples
Tuples are just like lists, but you can't change their values. The values that you give it first up, are the values that you are stuck with for the rest of the program. Again, each value is numbered starting from zero, for easy reference. Example: the names of the months of the year.
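A small hedged illustration reusing the months tuple above: indexing works exactly like a list, but assignment fails because tuples are immutable.
print(months[0])        # 'January' - indexing starts at zero
# months[0] = 'Jan'     # would raise TypeError: 'tuple' object does not support item assignment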
End of explanation
cats = ['Tom', 'Snappy', 'Kitty', 'Jessie', 'Chester']
print (cats[2])
cats.append('Catherine')
Explanation: Lists
Lists are what they seem - a list of values. Each one of them is numbered, starting from zero - the first one is numbered zero, the second 1, the third 2, etc. You can remove values from the list, and add new values to the end. Example: Your many cats' names.
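To illustrate the removal mentioned above, a hedged addition reusing the cats list:
cats.remove('Snappy')   # remove an element by value
del cats[0]             # or remove an element by its position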
End of explanation
#Make the phone book:
phonebook = {'Andrew Parson':8806336, \
'Emily Everett':6784346, 'Peter Power':7658344, \
'Lewis Lame':1122345}
#Add the person 'Ram' to the phonebook:
phonebook['Ram'] = 1234567
del phonebook['Andrew Parson']
Explanation: Dictionaries
Dictionaries are similar to what their name suggests - a dictionary. In a dictionary, you have an 'index' of words, and for each of them a definition. In python, the word is called a 'key', and the definition a 'value'. The values in a dictionary aren't numbered - they aren't in any specific order, either - the key does the same thing. You can add, remove, and modify the values in dictionaries. Example: telephone book.
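To round out the add/remove operations shown in the code above, a hedged example of looking up and modifying an entry:
print(phonebook['Lewis Lame'])      # look up a value by its key
phonebook['Lewis Lame'] = 1122334   # modify an existing entry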
End of explanation |
198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's get some JSON data from the web - both a point layer and a polygon GeoJson dataset with some population data.
Step1: And take a look at what our data looks like
Step2: Look how far the minimum and maximum values for the density are from the top and bottom quartile breakpoints! We have some outliers in our data that are well outside the meat of most of the distribution. Let's look into this to find the culprits within the sample.
Step3: Looks like Washington D.C. and Alaska were the culprits on each end of the range. Washington was more dense than the next most dense state, New Jersey, than the least dense state, Alaska was from Wyoming, however. Washington D.C. has a has a relatively small land area for the amount of people that live there, so it makes sense that it's pretty dense. And Alaska has a lot of land area, but not much of it is habitable for humans.
<br><br>
However, we're looking at all of the states in the US to look at things on a more regional level. That high figure at the top of our range for Washington D.C. will really hinder the ability for us to differentiate between the other states, so let's account for that in the min and max values for our color scale, by getting the quantile values close to the end of the range. Anything higher or lower than those values will just fall into the 'highest' and 'lowest' bins for coloring.
Step4: This looks better. Our min and max values for the colorscale are much closer to the mean value now. Let's run with these values, and make a colorscale. I'm just going to use a sequential light-to-dark color palette from the ColorBrewer.
Step5: Let's narrow down these cities to United states cities, by using GeoPandas' spatial join functionality between two GeoDataFrame objects, using the Point 'within' Polygon functionality.
Step6: Ok, now we have a new GeoDataFrame with our top 20 populated cities. Let's see the top 5.
Step7: Alright, let's build a map! | Python Code:
import geopandas
states = geopandas.read_file(
"https://rawcdn.githack.com/PublicaMundi/MappingAPI/main/data/geojson/us-states.json",
driver="GeoJSON",
)
cities = geopandas.read_file(
"https://d2ad6b4ur7yvpq.cloudfront.net/naturalearth-3.3.0/ne_50m_populated_places_simple.geojson",
driver="GeoJSON",
)
Explanation: Let's get some JSON data from the web - both a point layer and a polygon GeoJson dataset with some population data.
End of explanation
states.describe()
Explanation: And take a look at what our data looks like:
End of explanation
states_sorted = states.sort_values(by="density", ascending=False)
states_sorted.head(5).append(states_sorted.tail(5))[["name", "density"]]
Explanation: Look how far the minimum and maximum values for the density are from the top and bottom quartile breakpoints! We have some outliers in our data that are well outside the meat of most of the distribution. Let's look into this to find the culprits within the sample.
End of explanation
def rd2(x):
return round(x, 2)
minimum, maximum = states["density"].quantile([0.05, 0.95]).apply(rd2)
mean = round(states["density"].mean(), 2)
print(f"minimum: {minimum}", f"maximum: {maximum}", f"Mean: {mean}", sep="\n\n")
Explanation: Looks like Washington D.C. and Alaska were the culprits on each end of the range. Washington D.C. was much further from the next most dense state, New Jersey, than the least dense state, Alaska, was from Wyoming, however. Washington D.C. has a relatively small land area for the amount of people that live there, so it makes sense that it's pretty dense. And Alaska has a lot of land area, but not much of it is habitable for humans.
<br><br>
However, we're looking at all of the states in the US to look at things on a more regional level. That high figure at the top of our range for Washington D.C. will really hinder the ability for us to differentiate between the other states, so let's account for that in the min and max values for our color scale, by getting the quantile values close to the end of the range. Anything higher or lower than those values will just fall into the 'highest' and 'lowest' bins for coloring.
End of explanation
import branca
colormap = branca.colormap.LinearColormap(
colors=["#f2f0f7", "#cbc9e2", "#9e9ac8", "#756bb1", "#54278f"],
index=states["density"].quantile([0.2, 0.4, 0.6, 0.8]),
vmin=minimum,
vmax=maximum,
)
colormap.caption = "Population Density in the United States"
colormap
Explanation: This looks better. Our min and max values for the colorscale are much closer to the mean value now. Let's run with these values, and make a colorscale. I'm just going to use a sequential light-to-dark color palette from the ColorBrewer.
End of explanation
us_cities = geopandas.sjoin(cities, states, how="inner", op="within")
pop_ranked_cities = us_cities.sort_values(by="pop_max", ascending=False)[
["nameascii", "pop_max", "geometry"]
].iloc[:20]
Explanation: Let's narrow down these cities to United States cities, by using GeoPandas' spatial join functionality between two GeoDataFrame objects, using the Point 'within' Polygon functionality.
End of explanation
pop_ranked_cities.head(5)
Explanation: Ok, now we have a new GeoDataFrame with our top 20 populated cities. Let's see the top 5.
End of explanation
import folium
from folium.plugins import Search
m = folium.Map(location=[38, -97], zoom_start=4)
def style_function(x):
return {
"fillColor": colormap(x["properties"]["density"]),
"color": "black",
"weight": 2,
"fillOpacity": 0.5,
}
stategeo = folium.GeoJson(
states,
name="US States",
style_function=style_function,
tooltip=folium.GeoJsonTooltip(
fields=["name", "density"], aliases=["State", "Density"], localize=True
),
).add_to(m)
citygeo = folium.GeoJson(
pop_ranked_cities,
name="US Cities",
tooltip=folium.GeoJsonTooltip(
fields=["nameascii", "pop_max"], aliases=["", "Population Max"], localize=True
),
).add_to(m)
statesearch = Search(
layer=stategeo,
geom_type="Polygon",
placeholder="Search for a US State",
collapsed=False,
search_label="name",
weight=3,
).add_to(m)
citysearch = Search(
layer=citygeo,
geom_type="Point",
placeholder="Search for a US City",
collapsed=True,
search_label="nameascii",
).add_to(m)
folium.LayerControl().add_to(m)
colormap.add_to(m)
m
Explanation: Alright, let's build a map!
End of explanation |
199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Boletins de ocorrência registrados na Grande São Paulo em 2016
Neste passo-a-passo analisaremos um conjunto de dados referente às ocorrências policiais registradas na Grande São Paulo durante o ano de 2016. Os dados estão disponíveis aqui. A tabela a seguir apresenta os campos e uma breve descrição deles.
| Campo | Descrição |
|
Step1: Algumas correções
O conjunto de dados possui algumas inconsistências.
Remover os espaços no nome de municípios.
Step2: Análise preliminar
Todo problema em ciências de dados começa com a análise preliminar do conjunto a ser analisado.
Número de ocorrências ao longo do tempo
Vamos agrupar as ocorrências registradas por cidade e verificar a evolução ao longo do tempo.
Step3: Mapa de ocorrências
Onde acontecem as principais ocorrências? | Python Code:
import numpy
import pandas
from matplotlib import pyplot
%matplotlib inline
pyplot.style.use('fivethirtyeight')
pyplot.rcParams['figure.figsize'] = [11, 8]
url = '../dat/boletins_ocorrencia_sp_clean_2016.csv.gz'
dat = pandas.read_csv(url, encoding='ISO-8859-1', low_memory=False)
dat.describe()
dat.head()
Explanation: Boletins de ocorrência registrados na Grande São Paulo em 2016
Neste passo-a-passo analisaremos um conjunto de dados referente às ocorrências policiais registradas na Grande São Paulo durante o ano de 2016. Os dados estão disponíveis aqui. A tabela a seguir apresenta os campos e uma breve descrição deles.
| Campo | Descrição |
|:----------------------|:------------------------------------------------------------------|
|NUM_BO | Número do BO |
|ANO_BO | Ano da ocorrencia |
|ID_DELEGACIA | Código da delegacia responsável pelo registro da ocorrencia |
|NOME_DEPARTAMENTO | Departamento responsável pelo registro |
|NOME_SECCIONAL | Delegacia Seccional responsável pelo registro |
|DELEGACIA | Delegacia responsável pelo registro |
|NOME_DEPARTAMENTO_CIRC | Departamento responsável pela área onde houve a ocorrencia |
|NOME_SECCIONAL_CIRC | Delegacia Seccional responsável pela área onde houve a ocorrencia |
|NOME_DELEGACIA_CIRC | Delegacia responsável pela área onde houve a ocorrencia |
|ANO | Ano do registro |
|MES | Mês do registro |
|DATA_OCORRENCIA_BO | Data do fato |
|HORA_OCORRENCIA_BO | Hora do fato |
|FLAG_STATUS | Indica se é crime consumado ou tentado |
|RUBRICA | Natureza juridica da ocorrencia |
|DESDOBRAMENTO | Desdobramentos juridicos envolvidos na ocorrencia |
|CONDUTA | Tipo de local ou circunstancia que qualifica a ocorrencia |
|LATITUDE | Coordenadas geograficas |
|LONGITUDE | Coordenadas geograficas |
|CIDADE | Municipio de registro da ocorrencia |
|LOGRADOURO | Logradouro do fato |
|NUMERO_LOGRADOURO | Número do Logradouro do fato |
|FLAG_STATUS | Indica se é crime consumado ou tentado |
|DESCR_TIPO_PESSOA | Indica o tipo de envolvimento da pessoa na ocorrencia |
|CONT_PESSOA | Indica ordem de registro da pessoa no BO |
|SEXO_PESSOA | Sexo da pessoa relacionada |
|IDADE_PESSOA | Idade da pessoa relacionada |
|COR | Cor/raça da pessoa relacionada |
|DESCR_PROFISSAO | Profissão da pessoa |
|DESCR_GRAU_INSTRUCAO |Grau de intrução da pessoa |
Pré-processamento
Vamos inicializar o sistema para a análise: bibliotecas e conjunto de dados.
End of explanation
dat['CIDADE'] = dat['CIDADE'].map(lambda x: x.strip())
Explanation: Algumas correções
O conjunto de dados possui algumas inconsistências.
Remover os espaços no nome de municípios.
End of explanation
table = pandas.pivot_table(dat, index=['CIDADE'], columns=['ANO'],
values='NUM_BO', aggfunc=numpy.count_nonzero)
ax = table.plot(title='Distribuição de ocorrências por município', kind='bar',
legend=False)
pyplot.tight_layout()
table = pandas.pivot_table(dat, index=['ANO', 'MES'], columns=['CIDADE'],
values='NUM_BO', aggfunc=numpy.count_nonzero)
ax = table.plot(title='Série temporal de boletins de ocorrência por município',
legend=False)
pyplot.tight_layout()
table = pandas.pivot_table(dat, index=['CONDUTA'], values='NUM_BO',
aggfunc=numpy.count_nonzero)
ax = table.plot(kind='bar', title='Número de ocorrências por conduta', legend=False)
pyplot.tight_layout()
table = pandas.pivot_table(dat, index=['ANO', 'MES'], columns=['CONDUTA'],
values='NUM_BO', aggfunc=numpy.count_nonzero)
table.plot(title='Série temporal de boletins de ocorrência por conduta', legend=False)
Explanation: Análise preliminar
Todo problema em ciências de dados começa com a análise preliminar do conjunto a ser analisado.
Número de ocorrências ao longo do tempo
Vamos agrupar as ocorrências registradas por cidade e verificar a evolução ao longo do tempo.
End of explanation
lon, lat = dat['LONGITUDE'], dat['LATITUDE']
pyplot.plot(lon, lat, '.')
pyplot.show()
Explanation: Mapa de ocorrências
Onde acontecem as principais ocorrências?
End of explanation |