| text_prompt (string, 168–30.3k chars) | code_prompt (string, 67–124k chars) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can easily see that only a few survival times are right-censored (Status is False), i.e., most veterans died during the study period (Status is True).
Step2: Using the formula from above, we can compute $\hat{S}(t=11) = \frac{3}{5}$, but not $\hat{S}(t=30)$, because we don't know whether the 4th patient is still alive at $t = 30$, all we know is that when we last checked at $t = 25$, the patient was still alive.
Step3: The estimated curve is a step function, with steps occurring at time points where one or more patients died. From the plot we can see that most patients died in the first 200 days, as indicated by the steep slope of the estimated survival function in the first 200 days.
Step4: Roughly half the patients received the alternative treatment.
Step5: Unfortunately, the results are inconclusive: the difference between the two estimated survival functions is too small to confidently determine whether or not the drug affects survival.
Step6: In this case, we observe a pronounced difference between two groups. Patients with squamous or large cells seem to have a better prognosis compared to patients with small or adeno cells.
Step7: Survival models in scikit-survival follow the same rules as estimators in scikit-learn, i.e., they have a fit method, which expects a data matrix and a structured array of survival times and binary event indicators.
Step8: The result is a vector of coefficients, one for each variable, where each value corresponds to the log hazard ratio.
Step9: Using the fitted model, we can predict a patient-specific survival function, by passing an appropriate data matrix to the estimator's predict_survival_function method.
Step10: Similar to kaplan_meier_estimator, the predict_survival_function method returns a sequence of step functions, which we can plot.
Step11: Measuring the Performance of Survival Models
Step12: or alternatively
Step13: Our model's c-index indicates that the model clearly performs better than random, but is also far from perfect.
Step14: Karnofsky_score is the best variable, whereas Months_from_Diagnosis and Prior_therapy='yes' have almost no predictive power on their own.
Step15: Next, we need to define the range of parameters we want to explore during grid search. Here, we want to optimize the parameter k of the SelectKBest class and allow k to vary from 1 feature to all 8 features.
Step16: The results show that it is sufficient to select the 3 most predictive features.
|
<ASSISTANT_TASK:>
Python Code:
from sksurv.datasets import load_veterans_lung_cancer
data_x, data_y = load_veterans_lung_cancer()
data_y
import pandas as pd
pd.DataFrame.from_records(data_y[[11, 5, 32, 13, 23]], index=range(1, 6))
%matplotlib inline
import matplotlib.pyplot as plt
from sksurv.nonparametric import kaplan_meier_estimator
time, survival_prob = kaplan_meier_estimator(data_y["Status"], data_y["Survival_in_days"])
plt.step(time, survival_prob, where="post")
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
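# (Added example, not part of the original notebook.) As a sanity check of the hand
# computation in the description, we can run the Kaplan-Meier estimator on just the
# 5-patient sample shown above; the step at t = 11 should come out as 3/5 = 0.6.
sample = data_y[[11, 5, 32, 13, 23]]
time_s, survival_prob_s = kaplan_meier_estimator(sample["Status"], sample["Survival_in_days"])
print(list(zip(time_s, survival_prob_s)))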
data_x["Treatment"].value_counts()
for treatment_type in ("standard", "test"):
    mask_treat = data_x["Treatment"] == treatment_type
    time_treatment, survival_prob_treatment = kaplan_meier_estimator(
        data_y["Status"][mask_treat],
        data_y["Survival_in_days"][mask_treat])
    plt.step(time_treatment, survival_prob_treatment, where="post",
             label="Treatment = %s" % treatment_type)
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
plt.legend(loc="best")
for value in data_x["Celltype"].unique():
    mask = data_x["Celltype"] == value
    time_cell, survival_prob_cell = kaplan_meier_estimator(data_y["Status"][mask],
                                                           data_y["Survival_in_days"][mask])
    plt.step(time_cell, survival_prob_cell, where="post",
             label="%s (n = %d)" % (value, mask.sum()))
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
plt.legend(loc="best")
from sksurv.preprocessing import OneHotEncoder
data_x_numeric = OneHotEncoder().fit_transform(data_x)
data_x_numeric.head()
from sksurv.linear_model import CoxPHSurvivalAnalysis
estimator = CoxPHSurvivalAnalysis()
estimator.fit(data_x_numeric, data_y)
pd.Series(estimator.coef_, index=data_x_numeric.columns)
x_new = pd.DataFrame.from_dict({
1: [65, 0, 0, 1, 60, 1, 0, 1],
2: [65, 0, 0, 1, 60, 1, 0, 0],
3: [65, 0, 1, 0, 60, 1, 0, 0],
4: [65, 0, 1, 0, 60, 1, 0, 1]},
columns=data_x_numeric.columns, orient='index')
x_new
import numpy as np
pred_surv = estimator.predict_survival_function(x_new)
time_points = np.arange(1, 1000)
for i, surv_func in enumerate(pred_surv):
    plt.step(time_points, surv_func(time_points), where="post",
             label="Sample %d" % (i + 1))
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
plt.legend(loc="best")
from sksurv.metrics import concordance_index_censored
prediction = estimator.predict(data_x_numeric)
result = concordance_index_censored(data_y["Status"], data_y["Survival_in_days"], prediction)
result[0]
estimator.score(data_x_numeric, data_y)
import numpy as np
def fit_and_score_features(X, y):
    n_features = X.shape[1]
    scores = np.empty(n_features)
    m = CoxPHSurvivalAnalysis()
    for j in range(n_features):
        Xj = X[:, j:j+1]
        m.fit(Xj, y)
        scores[j] = m.score(Xj, y)
    return scores
scores = fit_and_score_features(data_x_numeric.values, data_y)
pd.Series(scores, index=data_x_numeric.columns).sort_values(ascending=False)
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import Pipeline
pipe = Pipeline([('encode', OneHotEncoder()),
('select', SelectKBest(fit_and_score_features, k=3)),
('model', CoxPHSurvivalAnalysis())])
from sklearn.model_selection import GridSearchCV, KFold
param_grid = {'select__k': np.arange(1, data_x_numeric.shape[1] + 1)}
cv = KFold(n_splits=3, random_state=1, shuffle=True)
gcv = GridSearchCV(pipe, param_grid, return_train_score=True, cv=cv)
gcv.fit(data_x, data_y)
results = pd.DataFrame(gcv.cv_results_).sort_values(by='mean_test_score', ascending=False)
results.loc[:, ~results.columns.str.endswith("_time")]
pipe.set_params(**gcv.best_params_)
pipe.fit(data_x, data_y)
encoder, transformer, final_estimator = [s[1] for s in pipe.steps]
pd.Series(final_estimator.coef_, index=encoder.encoded_columns_[transformer.get_support()])
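# (Added example, not part of the original notebook.) With the refitted pipeline we can
# score data or rank patients by predicted risk; as a sketch we simply reuse the training
# data here, so the printed c-index is an optimistic in-sample estimate.
print("c-index of the selected model (in-sample):", pipe.score(data_x, data_y))
risk_scores = pipe.predict(data_x)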
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Option 2
Step2: Option 3
Step3: Step 3. Extract the Coefficients
Step4: Note that the coefficients are all of order 1e-5, and that the coefficient for NIR is significantly higher than the coefficient for blue. This is a big deal if your use case involves performing band math because a pixel with a NIR/blue ratio of 1.0 in the radiance image will have a NIR/blue ratio of 3.35/1.929=1.73 in the reflectance image.
Step 4
Step5: As a quick check of our calculations, let's print the minimum and maximum values in both the original (radiance) values, as well as our new (reflectance) values.
Step6: Step 5. Save the Reflectance Image
|
<ASSISTANT_TASK:>
Python Code:
# To use Planet's CLI from this Notebook, begin your line as follows:
!planet data
# < add your own code here >
# To use Planet's API, you'll probably begin by importing your favorite HTTP toolkit, e.g.:
import requests
from requests.auth import HTTPBasicAuth
# Your Planet API key is available in this Notebook as an env variable, e.g.:
import os
PLANET_API_KEY = os.getenv('PL_API_KEY')
# < add your own code here >
import rasterio
import numpy as np
filename = None
# < update above with your own code >
# Load blue, green, red and NIR bands - note all PlanetScope 4-band images have band order BGRN
with rasterio.open(filename) as src:
    band_blue_radiance = None
    # < update above with your own code >
    # ...
    # < add your own code here >
from xml.dom import minidom
xmldoc = minidom.parse("data/20170623_180038_0f34_3B_AnalyticMS_metadata.xml")
nodes = xmldoc.getElementsByTagName("ps:bandSpecificMetadata")
# XML parser refers to bands by numbers 1-4
coeffs = {}
for node in nodes:
    bn = node.getElementsByTagName("ps:bandNumber")[0].firstChild.data
    if bn in ['1', '2', '3', '4']:
        i = int(bn)
        value = node.getElementsByTagName("ps:reflectanceCoefficient")[0].firstChild.data
        coeffs[i] = float(value)
print("Conversion coefficients:", coeffs)
# Multiply the current values in each band by the TOA reflectance coefficients
band_blue_reflectance = None
# < update above with your own code >
# ...
# < add your own code here >
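# (Added sketch, not part of the original notebook.) Assuming the radiance bands were read
# with src.read() above (e.g. band_blue_radiance = src.read(1)), the conversion is simply a
# per-band multiplication by the XML coefficients, for example:
#
#     band_blue_reflectance  = band_blue_radiance  * coeffs[1]
#     band_green_reflectance = band_green_radiance * coeffs[2]
#     band_red_reflectance   = band_red_radiance   * coeffs[3]
#     band_nir_reflectance   = band_nir_radiance   * coeffs[4]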
import numpy as np
# < add your own code here >
# get the metadata of original GeoTIFF:
# < add your own code here >
# set the source metadata as kwargs we'll use to write the new data:
# < add your own code here >
# update the 'dtype' value to 'uint16':
# < add your own code here >
# As noted above, scale reflectance values by a factor of 10k:
scale = 10000
blue_ref_scaled = None
# < update above with your own code >
# ...
# < add your own code here >
# Compute new min & max values for the scaled red band & print them, just for comparison
# < add your own code here >
# set the type for each band to uint16
# hint: remember, bands are just numpy arrays!
from rasterio import uint16
# < add your own code here >
# Finally, write band calculations to a new raster file 'data/reflectance.tif'
# Hint: rasterio.write_band() to write individual bands
# < add your own code here >
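# (Added sketch, not part of the original notebook.) One possible way to finish this step,
# assuming `kwargs` holds the updated source metadata and the four *_ref_scaled arrays were
# computed above (the names are placeholders taken from the hints in this notebook):
#
#     with rasterio.open('data/reflectance.tif', 'w', **kwargs) as dst:
#         dst.write_band(1, blue_ref_scaled.astype(uint16))
#         dst.write_band(2, green_ref_scaled.astype(uint16))
#         dst.write_band(3, red_ref_scaled.astype(uint16))
#         dst.write_band(4, nir_ref_scaled.astype(uint16))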
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Business Problem
Step2: Step 1
Step3: Exercise 1
Step4: Step 2
Step5: Step 2b
Step6: Step 3
Step7: And here's what the expression for the 8th-degree non-linear model looks like
Step8: Right. Back to the relatively simpler non-linear model we'd like to try
Step9: Step 3c
Step10: In general this is going to be far from the actual values; so we know that the values for $W$ in $W_{opt}$ must be quite far from the optimal values for $W$ -- the values that will minimize the cost of getting it wrong.
Step 4
Step11: We don't know yet if this is high or low -- we'd have to try out a whole bunch of $W$ values. Or better yet, we can pick an iterative method and implement it.
Step 5
Step12: Step 6
Step13: What does this prediction surface look like?
Step14: Non-Linear Regression in Orange - Demonstration
|
<ASSISTANT_TASK:>
Python Code:
# Use the functions from another notebook in this notebook
%run Shared-Functions.ipynb
# Import our usual libraries
import numpy as np
import pandas as pd
import math
import matplotlib.pyplot as plt
%matplotlib inline
import os
# Load the housing prices dataset
file_url = data_dir + os.sep + "portland-house-prices.txt"
# Load the data into a dataframe
data2 = pd.read_csv(file_url, header=None, names=['Size', 'Bedrooms', 'Price'])
data2.head()
import seaborn as sns
sns.set(style='whitegrid', context='notebook')
cols = ['Size', 'Bedrooms', 'Price']
sns.pairplot(data2[cols], size=4)
plt.show()
# From https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html
# 3D scatter plot of size, bedrooms and price
from mpl_toolkits.mplot3d import Axes3D
threeD = plt.figure(figsize=(12,8)).gca(projection='3d')
threeD.scatter(data2['Bedrooms'], data2['Size'], data2['Price'])
threeD.set_xlabel('Bedrooms', fontsize=14)
threeD.set_ylabel('Size', fontsize=14)
threeD.set_zlabel('Price', fontsize=14)
plt.show()
# Normalize the scales in the dataset
# NOTE: the output doesn't have to be rescaled but we've done that here anyway
# If you know statistics: What we're doing is rewriting each value in terms of standard deviation units
data2Norm = (data2 - data2.mean()) / data2.std()
data2Norm.head()
# In Orange use the Preprocessor widget
# Number of columns in the dataset
cols = data2Norm.shape[1]
# Inputs are our first two columns
X = data2Norm.iloc[:, 0:cols-1]
# Add an initial column of 1s to X to keep the notation simple
# X.insert(0,'x0',1)
# First few rows of features (remember it's scaled)
X.head()
# The output -- the price of a house
# Don't need to normalize the output
#y = data2['Price']
y = data2.iloc[:, cols-1:cols]
# First few house prices in the dataset
y.head()
# First drop the Bedrooms column from the data set
# Notice -- this is the raw data -- no scaling yet.
data3 = data2.drop('Bedrooms', axis = 1)
data3.head()
# Getting a handle on the simplified dataset
data3.describe()
# Using non-linear (polynomial) models to "fit" the simplified dataset
X_p = data3['Size']
y_p = data3['Price']
xx = np.linspace(500, 5000, 100)
#print(xx)
# fit the data with a first degree polynomial
z1 = np.polyfit(X_p, y_p, 1)
p1 = np.poly1d(z1)
# fit the data with a 2nd degree polynomial
z2 = np.polyfit(X_p, y_p, 2)
p2 = np.poly1d(z2) # construct the polynomial (note: that's a one in "poly1d")
# fit the data with a 3rd degree polynomial
z3 = np.polyfit(X_p, y_p, 3)
p3 = np.poly1d(z3) # construct the polynomial
# fit the data with a 4th degree polynomial
z4 = np.polyfit(X_p, y_p, 4)
p4 = np.poly1d(z4) # construct the polynomial
# fit the data with a 8th degree polynomial - just for the heck of it :-)
z8 = np.polyfit(X_p, y_p, 8)
p8 = np.poly1d(z8) # construct the polynomial
# fit the data with a 16th degree polynomial - just for the heck of it :-)
z16 = np.polyfit(X_p, y_p, 16)
p16 = np.poly1d(z16) # construct the polynomial
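# (Added example, not part of the original notebook.) np.poly1d objects are callable, so we
# can compare what the different fits predict for, say, a 2000 sq. ft. house:
print("linear fit at 2000 sq. ft.:    ", p1(2000))
print("8th-degree fit at 2000 sq. ft.:", p8(2000))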
plt.figure(figsize=(12,8))
plt.plot(X_p, y_p, 'o', label='data')
plt.title('Non-Linear Ways to Fit Housing Prices', fontsize=14)
plt.xlabel('Size')
plt.ylabel('Price')
plt.plot(xx, p1(xx), 'b-', label='linear')
#plt.plot(xx, p2(xx), 'g-', label='2nd degree')
#plt.plot(xx, p3(xx), 'y-', label='3rd degree')
#plt.plot(xx, p4(xx), 'r-', label='4th degree')
plt.plot(xx, p8(xx), 'c-', label='8th degree')
plt.plot(xx, p16(xx), 'm-', label='16th degree')
plt.legend(loc=2)
plt.axis([500,5000,150000,800000]); # Useful for higher degrees of polynomials
# What does the 8th-degree polynomial expression look like?
print(p8)
# Add a column of bedroom values squared
data2Norm.insert(2, '# Bedrooms Squared', np.power(data2Norm['Bedrooms'], 2))
# Our new dataset will be:
X_2 = data2Norm.drop(['Bedrooms', 'Price'], axis=1)
X_2.head()
# Get a handle on our transformed dataset built for a non-linear model.
X_2.describe()
# X_2 * W for the first 5 rows of X_2 (more accurately: X_2 * W transpose)
df_addOnes(X_2.iloc[0:5]) * np.matrix('-10;1;0')
# Initialize the parameter values, the learning rate and the number of interations
W_init = [-1, 1.4, 0.5]
learning_rate = 0.005 # the learning rate
num_iters = 10 # number of iterations
# Run gradient descent
# Outputs generated by our model for the first 5 inputs with the specific w values below
W_opt, final_penalty, running_w, running_penalty = gradientDescent(X_2, y, W_init, num_iters, learning_rate)
# These are initial predictions
# Compare these outputs to the actual values of y in the dataset (after de-scaling)
(df_addOnes(X_2.iloc[0:5]) * np.matrix(W_opt))
# Compute the cost for a given set of W values over the entire dataset
# Get X and y in to matrix form
penalty(X_2, y, W_init, squaredPenalty)
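# (Added sketch, not part of the original notebook.) gradientDescent and penalty come from
# Shared-Functions.ipynb, which isn't shown here. A minimal batch-gradient-descent update for
# a squared-error penalty -- what that helper is assumed to compute -- would look like:
#
#     X_m = np.matrix(df_addOnes(X_2))       # n x 3 design matrix with a leading column of 1s
#     y_m = np.matrix(y)                     # n x 1 vector of outputs
#     W = np.matrix(W_init, dtype=float).T   # 3 x 1 parameter vector
#     for _ in range(num_iters):
#         grad = X_m.T * (X_m * W - y_m) / len(y)
#         W = W - learning_rate * grad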
# Set hyper-parameters
num_iters = 2000 # number of iterations
learning_rate = 0.0001 # the learning rate
# Run gradient descent and capture the progression of cost values and the ultimate optimal W values
%time W_opt, final_penalty, running_w, running_penalty = gradientDescent(X_2, y, W_init, num_iters, learning_rate)
# Get the optimal W values and the last few W values and cost values
W_opt, final_penalty, running_w[-5:], running_penalty[-5:]
# How the cost changes as the number of iterations increase
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(np.arange(num_iters), running_penalty, 'g')
ax.set_xlabel('Number of Iterations')
ax.set_ylabel('Cost')
plt.xlim(0,num_iters)
ax.set_title('Cost vs. Iterations Over the Dataset');
# Run gradient descent for a few different values of the learning rate
learning_rates = [0.0001, 0.003, 0.005, 0.01, 0.03, 0.1]
gdResults = [gradientDescent(X_2, y, W_init, num_iters, learning_rates[i]) for i in range(len(learning_rates))]
#gdResults
# For each learning rate, get the progression of costs
# for each iteration
penalty_list = [gdResults[i][3] for i in range(len(gdResults))]
penalty_list[0]
# How the cost of the transformation varies with the learning rate
plot_color_list = ['g', 'r', 'c', 'b', 'm', 'y']
fig, ax = plt.subplots(figsize=(12,8))
[ax.plot(np.arange(num_iters), penalty_list[i], plot_color_list[i], label=learning_rates[i]) for i in range(len(learning_rates))]
ax.set_xlabel('Iterations')
ax.set_ylabel('Cost')
ax.legend()
plt.xlim(0,100)
ax.set_title('Cost vs. Learning Rate');
# Change size and num_bedrooms to make distinct predictions
size = 30000
num_bedrooms = 5
# Remember we've run the model using rescaled house sizes and number of bedrooms
# So we should scale the inputs down and then scale the prediction up when we're done
size_scaled = (size - data2.mean()[0])/data2.std()[0]
beds_scaled = (num_bedrooms - data2.mean()[1])/data2.std()[1]
# This is our model -- we're just using it here to make a calculation
pred_price = (W_opt[0] * 1) + (W_opt[1] * size_scaled) + (W_opt[2] * beds_scaled)
# Get the optimal W values into the right form for display
W_opt_display = np.array(W_opt.squeeze()).squeeze()
# Format and print the result
print("Predicted Price: ", '${:8,.0f}'.format(math.ceil(pred_price)))
print("Optimal Parameter Values: {}".format(W_opt))
'''
from https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html
======================
3D surface (color map)
======================
Demonstrates plotting a 3D surface colored with the coolwarm color map.
The surface is made opaque by using antialiased=False.
Also demonstrates using the LinearLocator and custom formatting for the
z axis tick labels.
'''
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np
fig = plt.figure(figsize=(14,10))
ax = fig.gca(projection='3d')
# Make data.
X = np.arange(-5, 5, 1)
Y = np.arange(-5, 5, 1)
X, Y = np.meshgrid(X, Y)
#R = np.sqrt(X**2 + Y**2)
#Z = np.sin(R)
Z = W_opt_display[0] + W_opt_display[1] * X + W_opt_display[2] * Y
# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=True)
plt.xlabel('Size in Square Feet (Scaled)', fontsize=12)
plt.ylabel('# Bedrooms Squared (Scaled)', fontsize=12)
plt.title('Predicted Price', fontsize=14)
# Customize the z axis.
#ax.set_zlim(100000, 1000000)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.0f'))
# Add a color bar which maps values to colors.
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
'''
From https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html
=================
3D wireframe plot
=================
A very basic demonstration of a wireframe plot.
'''
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(14,10))
ax = fig.add_subplot(111, projection='3d')
# Grab some test data.
#X, Y, Z = axes3d.get_test_data(0.05)
X = np.arange(-5, 5, 1)
Y = np.arange(-5, 5, 1)
Z = W_opt_display[0] + W_opt_display[1] * X + W_opt_display[2] * Y
# Plot a basic wireframe.
ax.plot_wireframe(X, Y, Z, rstride=1, cstride=1)
plt.show()
# More complicated models -- what the dataset looks like
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: The surface of interest will be $U(q) = -\log{f(q)}$
Step4: Tuning parameters
Step5: Banana-shaped target distribution
Step9: NUTS Sampler
Step10: Testing on the banana
|
<ASSISTANT_TASK:>
Python Code:
# NOTE: the original notebook most likely relied on `%pylab inline` for its namespace;
# the explicit imports below are an assumption added so that the cells are self-contained.
import copy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from numpy import array, exp
from scipy.stats import multivariate_normal, uniform, norm
from scipy.misc import derivative
dtarget = lambda x: multivariate_normal.pdf(x, mean=(3, 10), cov=[[1, 0], [0, 1]])
x1 = np.linspace(-6, 12, 101)
x2 = np.linspace(-11, 31, 101)
X, Y = np.meshgrid(x1, x2)
Z = np.array(map(dtarget, zip(X.flat, Y.flat))).reshape(101, 101)
plt.figure(figsize=(10,7))
plt.contour(X, Y, Z)
plt.xlim(0, 6)
plt.ylim(7, 13)
plt.show()
def HMC_one_step(U, current_q, Eps, L, m=1):
    """
    One step of the Hamiltonian Monte Carlo.

    Parameters
    ----------
    U : callable
        A function that takes a single argument, the position.
    q : array-like
        Current position.
    Eps : float
        The step size, epsilon.
    L : int
        Number of leapfrog steps.
    m : float
        Mass of the particle.

    Returns
    -------
    q_out : array
        Path from ``q`` to the proposed position.
    """
    q = copy.copy(current_q)
    Nq = len(q)
    p = multivariate_normal.rvs([0. for i in xrange(Nq)])
    current_p = copy.copy(p)
    out = {}
    out['p'] = np.zeros((L, Nq))
    out['p'][0,:] = copy.copy(p)
    out['q'] = np.zeros((L, Nq))
    out['q'][0,:] = copy.copy(q)
    for i in xrange(1, L):
        p -= Eps*derivative(U, q, 0.01)/2.
        q += (Eps/m)*p
        out['q'][i, :] = copy.copy(q)
        p -= Eps*derivative(U, q, 0.01)/2.
        out['p'][i, :] = copy.copy(p)
    current_U = U(current_q)
    current_K = (current_p**2).sum()/2.
    proposed_U = U(q)
    proposed_K = (p**2).sum()/2.
    if uniform.rvs() < exp(current_U - proposed_U + current_K - proposed_K):
        out['value'] = q
    else:
        out['value'] = current_q
    return out
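# (Added note, not part of the original notebook.) The loop above is the standard leapfrog
# integrator followed by a Metropolis correction:
#   p <- p - (eps/2) * dU/dq(q);  q <- q + (eps/m) * p;  p <- p - (eps/2) * dU/dq(q)
# and the proposal (q, p) is accepted with probability
#   min(1, exp(U(q_current) - U(q_proposed) + K(p_current) - K(p_proposed))).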
plt.figure(figsize=(10,7))
plt.contour(X, Y, Z)
U = lambda x: -1.*np.log(dtarget(x))
chain = HMC_one_step(U, np.array([4., 10.]), Eps=0.2, L=10, m=2)['q']
plt.plot(chain[:, 0], chain[:, 1], 'ro')
plt.plot(chain[:, 0], chain[:, 1], 'r-')
plt.plot(chain[0, 0], chain[0,1], 'bo')
plt.xlim(0, 6)
plt.ylim(7, 13)
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
def HMC(dtarget, start, Eps=0.2, L=10, m=2, N=1000, num_chains=4):
    """
    Perform an HMC simulation.

    Parameters
    ----------
    dtarget : callable
        Target PDF.
    """
    # Invert the target PDF into a concave surface.
    neg_log_dtarget = lambda x: -1.*np.log(dtarget(x))
    # If only one starting position is provided, use it for all chains.
    if len(start.shape) == 1:
        start = np.array([np.array(start) for i in xrange(num_chains)])
    chains = []
    for j in xrange(num_chains):
        chain = [start[j, :]]
        for i in xrange(N):
            proposal = HMC_one_step(neg_log_dtarget,
                                    copy.copy(chain[-1]),
                                    Eps, L, m)['value']
            chain.append(proposal)
        chains.append(np.array(chain))
    return np.array(chains)
def Gelman(chains):
    if len(chains.shape) == 3:
        N_p = chains.shape[2]
    else:
        N_p = 1
    generate = lambda ptn: np.array([np.array([np.array([ptn(p, i, c)
                                                         for p in xrange(N_p)
                                                         for i in xrange(chains.shape[1])])
                                               for c in xrange(chains.shape[0])])])
    params = generate(lambda p, i, c: 'x{0}'.format(p))
    iters = generate(lambda p, i, c: i)
    labels = generate(lambda p, i, c: c)
    data = zip(chains.flat, params.flat, iters.flat, labels.flat)
    dataframe = pd.DataFrame(data, columns=('Value', 'Parameter', 'Iteration', 'Chain'))
    xbar = dataframe.groupby('Parameter').Value.mean()
    m = chains.shape[0]
    xbar_i = dataframe.groupby(('Parameter', 'Chain')).Value.mean()
    s2_i = dataframe.groupby(('Parameter', 'Chain')).Value.var()
    n = dataframe.groupby(('Parameter', 'Chain')).Value.count().mean()
    W = s2_i.mean()
    B = (n/(m-1.)) * ((xbar_i - xbar)**2).sum()
    sigma2_hat = W*(n-1.)/n + B/n
    R_hat = np.sqrt(sigma2_hat/W)
    n_eff = m*n*sigma2_hat/B  # an estimate of the effective number of independent samples
    return R_hat, n_eff
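# (Added note, not part of the original notebook.) The function above computes the
# Gelman-Rubin potential scale reduction factor:
#   W = mean within-chain variance,  B = (n/(m-1)) * sum_j (xbar_j - xbar)^2,
#   sigma2_hat = ((n-1)/n) * W + B/n,  R_hat = sqrt(sigma2_hat / W).
# Values of R_hat close to 1 suggest the m chains have mixed well.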
chains = HMC(dtarget, array([4., 10.]), Eps=0.2, L=5, N=1000)
plt.figure(figsize=(10,7))
plt.contour(X, Y, Z)
plt.plot(chains[0][:, 0], chains[0][:, 1], alpha=0.5)
plt.plot(chains[1][:, 0], chains[1][:, 1], alpha=0.5)
plt.plot(chains[2][:, 0], chains[2][:, 1], alpha=0.5)
plt.plot(chains[3][:, 0], chains[3][:, 1], alpha=0.5)
plt.xlim(0, 6)
plt.ylim(7, 13)
plt.show()
plt.subplot(211)
for i in xrange(chains.shape[0]):
    plt.plot(chains[i,:,0])
plt.ylabel('x1')
plt.subplot(212)
for i in xrange(chains.shape[0]):
    plt.plot(chains[i,:,1])
plt.ylabel('x2')
Gelman(chains)
dtarget = lambda x: exp( (-x[0]**2)/200. - 0.5*(x[1]+(0.05*x[0]**2) - 100.*0.05)**2)
x1 = np.linspace(-20, 20, 101)
x2 = np.linspace(-15, 10, 101)
X, Y = np.meshgrid(x1, x2)
Z = np.array(map(dtarget, zip(X.flat, Y.flat))).reshape(101, 101)
plt.figure(figsize=(10,7))
plt.contour(X, Y, Z)
plt.show()
start = np.array([[uniform.rvs(loc=-10., scale=15.),
uniform.rvs(loc=0., scale=10)]
for i in xrange(4)])
chains = HMC(dtarget, start, Eps=0.7, L=12, m=2, N=10000)
plt.figure(figsize=(10,7))
plt.contour(X, Y, Z)
plt.plot(chains[0][:, 0], chains[0][:, 1], alpha=0.5)
plt.plot(chains[1][:, 0], chains[1][:, 1], alpha=0.5)
plt.plot(chains[2][:, 0], chains[2][:, 1], alpha=0.5)
plt.plot(chains[3][:, 0], chains[3][:, 1], alpha=0.5)
plt.show()
plt.subplot(211)
plt.title(Gelman(chains)[0])
for i in xrange(chains.shape[0]):
    plt.plot(chains[i,:,0])
plt.ylabel('x1')
plt.subplot(212)
for i in xrange(chains.shape[0]):
    plt.plot(chains[i,:,1])
plt.ylabel('x2')
plt.tight_layout()
plt.show()
def Leapfrog(U, theta, r, Eps, m=1.):
    """
    Slightly different update rules, since the negative log of the
    target PDF is not used.
    """
    gradient = lambda U, theta: derivative(U, theta, 0.01)
    r += (Eps/2.)*gradient(U, theta)
    theta += (Eps/m)*r
    r += (Eps/2.)*gradient(U, theta)
    return copy.copy(theta), copy.copy(r)
def BuildTree(U, theta, r, u, v, j, Eps, m=1., delta_max=1000):
    """
    Recursive tree-building.

    TODO: Make this less ugly.
    """
    if j == 0:
        # Take one leapfrog step in the direction v.
        theta_p, r_p = Leapfrog(U, theta, r, v*Eps, m=m)
        n_p = float(u <= exp(U(theta_p) - np.dot(0.5*r_p, r_p)))
        s_p = float(u < exp(delta_max + U(theta_p) - np.dot(0.5*r_p, r_p)))
        return theta_p, r_p, theta_p, r_p, theta_p, n_p, s_p
    else:
        # Recursion -- implicitly build the left and right subtrees.
        rargs = (u, v, j-1., Eps)
        rkwargs = {'m': m}
        theta_n, r_n, theta_f, r_f, theta_p, n_p, s_p = BuildTree(U, theta, r, *rargs, **rkwargs)
        if s_p == 1:
            if v == -1:
                theta_n, r_n, null, null, theta_dp, n_dp, s_dp = BuildTree(U, theta_n, r_n, *rargs, **rkwargs)
            else:
                null, null, theta_f, r_f, theta_dp, n_dp, s_dp = BuildTree(U, theta_f, r_f, *rargs, **rkwargs)
            try:
                if uniform.rvs() <= (n_dp/(n_p + n_dp)):
                    theta_p = copy.copy(theta_dp)
            except ZeroDivisionError:
                pass
            s_p = s_p*s_dp*int(np.dot((theta_f - theta_n), r_n) >= 0)*int(np.dot((theta_f - theta_n), r_f) >= 0)
            n_p += n_dp
        return theta_n, r_n, theta_f, r_f, theta_p, n_p, s_p
def NUTS_one_step(U, theta_last, Eps, m=1.):
    """
    TODO: clean up all the copies -- stop being so paranoid.
    """
    r_not = norm.rvs(0, 1., size=len(theta_last))
    u = uniform.rvs(0, exp(U(theta_last) - np.dot(0.5*r_not, r_not)))
    # Initialize.
    theta_m = copy.copy(theta_last)
    theta_n, theta_f = copy.copy(theta_last), copy.copy(theta_last)
    r_n, r_f = copy.copy(r_not), copy.copy(r_not)
    j = 0.
    s = 1.
    n = 1.
    while s == 1.:
        v_j = np.random.choice(np.array([-1., 1.]))  # Choose a direction.
        if v_j == -1:
            theta_n, r_n, null, null, theta_p, n_p, s_p = BuildTree(U, theta_n, r_n, u, v_j, j, Eps, m=m)
        else:
            null, null, theta_f, r_f, theta_p, n_p, s_p = BuildTree(U, theta_f, r_f, u, v_j, j, Eps, m=m)
        if s_p == 1:
            try:
                if uniform.rvs() <= min(1., (n_p/n)):
                    theta_m = copy.copy(theta_p)
            except ZeroDivisionError:
                pass
        s = s_p*int(np.dot((theta_f - theta_n), r_n) >= 0)*int(np.dot((theta_f - theta_n), r_f) >= 0)
        j += 1.
    return theta_m
NUTS_one_step(lambda x: np.log(dtarget(x)), np.array([3.2, 9.1]), 0.02)
def NUTS(dtarget, theta_not, Eps, num_iters=1000, delta_max=1000, m=1.):
    U = lambda x: np.log(dtarget(x))
    theta = [theta_not]
    for i in xrange(num_iters):
        theta_i = NUTS_one_step(U, theta[-1], Eps, m=m)
        theta.append(theta_i)
    return theta
start = np.array([[uniform.rvs(loc=-10., scale=15.),
uniform.rvs(loc=0., scale=10)]
for i in xrange(4)])
chains = np.array([ np.array(NUTS(dtarget, start[i, :], Eps=0.55, m=1.5, num_iters=10000)) for i in xrange(start.shape[0])])
plt.figure(figsize=(10,7))
plt.contour(X, Y, Z)
for i in xrange(chains.shape[0]):
    plt.scatter(chains[i, :, 0], chains[i, :, 1], alpha=0.5, s=0.02)
plt.show()
plt.subplot(211)
plt.title(Gelman(chains)[0])
for i in xrange(chains.shape[0]):
    plt.plot(chains[i, :, 0])
plt.ylabel('x1')
plt.subplot(212)
for i in xrange(chains.shape[0]):
    plt.plot(chains[i, :, 1])
plt.ylabel('x2')
plt.tight_layout()
plt.show()
plt.hist(chains[0,:,0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'sandbox-2', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sending an email is a piece of cake, provided the right library is loaded...
Step2: ... but if we take it a bit further, we can hook our doorbell project up to sending an email via the cloud!
|
<ASSISTANT_TASK:>
Python Code:
MAIL_SERVER = "mail.****.com"
FROM_ADDRESS = "noreply@****.com"
TO_ADDRESS = "my_friend@****.com"
from sender import Mail
mail = Mail(MAIL_SERVER)
mail.fromaddr = ("Geheime aanbidder", FROM_ADDRESS)
mail.send_message("Raspberry Pi heeft een boontje voor je", to=TO_ADDRESS, body="Hey lekker ding! Zin in een smoothie?")
APPKEY = "******"
mail.fromaddr = ("Uw deurbel", FROM_ADDRESS)
mail_to_addresses = {
"Donald Duck":"dd@****.com",
"Maleficent":"mf@****.com",
"BozeWolf":"bw@****.com"
}
def on_message(sender, channel, message):
    boodschap = "{}: Er is aangebeld bij {}".format(channel, message)
    print(boodschap)
    mail.send_message("Raspberry Pi alert!", to=mail_to_addresses[message], body=boodschap)
import ortc
oc = ortc.OrtcClient()
oc.cluster_url = "http://ortc-developers.realtime.co/server/2.1"
def on_connected(sender):
    print('Connected')
    oc.subscribe('deurbel', True, on_message)
oc.set_on_connected_callback(on_connected)
oc.connect(APPKEY)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bouncing ball
Step3: Le code Javscript qui suit va être ajouté à celui du notebook. Il s'active dès que le curseur passe au-dessus du cadre qui précède.
Step4: Vous devriez voir quelque chose comme ceci
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import HTML, Javascript
HTML("""<canvas id="canvas" style="border: 1px solid" width="600" height="300"></canvas>""")
Javascript("""
var canvas = document.getElementById('canvas');
var ctx = canvas.getContext('2d');
var raf;
var ball = {
    x: 100,
    y: 100,
    vx: 5,
    vy: 2,
    radius: 25,
    color: 'blue',
    draw: function() {
        ctx.beginPath();
        ctx.arc(this.x, this.y, this.radius, 0, Math.PI * 2, true);
        ctx.closePath();
        ctx.fillStyle = this.color;
        ctx.fill();
    }
};
function draw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ball.draw();
    ball.x += ball.vx;
    ball.y += ball.vy;
    ball.vy *= .99;
    ball.vy += .25;
    if (ball.y + ball.vy > canvas.height ||
        ball.y + ball.vy < 0) {
        ball.vy = -ball.vy;
    }
    if (ball.x + ball.vx > canvas.width ||
        ball.x + ball.vx < 0) {
        ball.vx = -ball.vx;
    }
    raf = window.requestAnimationFrame(draw);
}
canvas.addEventListener('mouseover', function(e) {
    raf = window.requestAnimationFrame(draw);
});
canvas.addEventListener('mouseout', function(e) {
    window.cancelAnimationFrame(raf);
});
ball.draw();
""")
from pyquickhelper.helpgen import NbImage
NbImage("balle.png", width=200)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset
Step2: Looking at the data info, we can see that there are missing values in the Age, Cabin, and Embarked columns. Although Cabin is an important feature (the position of a cabin in the ship can affect the chance that people in that cabin survive), we will skip it in this tutorial for simplicity. The dataset contains many categorical columns and two numerical columns, Age and Fare. Let's first look at the distribution of the categorical columns
Step3: Prepare data
Step4: Looking closer at the data, we can observe that each name contains a title. We know that age is correlated with the title of the name
Step5: We will make a new column Title, where rare titles are merged into one group Misc..
Step6: Now, it is ready to turn the dataframe, which includes categorical values, into numpy arrays. We also perform standardization (a good practice for regression models) for Age column.
Step7: Modelling
Step8: Note that in the model, the prior for age is dist.Normal(age_mu, age_sigma), where the values of age_mu and age_sigma depend on title. Because there are missing values in age, we will encode those missing values in the latent parameter age_impute. Then we can replace NaN entries in age with the vector age_impute.
Step9: To double-check that the assumption "age is correlated with title" is reasonable, let's look at the inferred age by title. Recall that we performed standardization on age, so here we need to scale back to the original domain.
Step10: The inferred result confirms our assumption that Age is correlated with Title
Step11: So far so good; we have a lot of information about the regression coefficients together with the imputed values and their uncertainties. Let's inspect those results a bit
Step12: This is a pretty good result using a simple logistic regression model. Let's see how the model performs if we don't use Bayesian imputation here.
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
# first, we need some imports
import os
from IPython.display import set_matplotlib_formats
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from jax import numpy as jnp
from jax import random
from jax.scipy.special import expit
import numpyro
from numpyro import distributions as dist
from numpyro.distributions import constraints
from numpyro.infer import MCMC, NUTS, Predictive
plt.style.use("seaborn")
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
assert numpyro.__version__.startswith("0.9.1")
train_df = pd.read_csv(
"https://raw.githubusercontent.com/agconti/kaggle-titanic/master/data/train.csv"
)
train_df.info()
train_df.head()
for col in ["Survived", "Pclass", "Sex", "SibSp", "Parch", "Embarked"]:
print(train_df[col].value_counts(), end="\n\n")
train_df.SibSp.clip(0, 1, inplace=True)
train_df.Parch.clip(0, 2, inplace=True)
train_df.Embarked.fillna("S", inplace=True)
train_df.Name.str.split(", ").str.get(1).str.split(" ").str.get(0).value_counts()
train_df["Title"] = (
train_df.Name.str.split(", ")
.str.get(1)
.str.split(" ")
.str.get(0)
.apply(lambda x: x if x in ["Mr.", "Miss.", "Mrs.", "Master."] else "Misc.")
)
title_cat = pd.CategoricalDtype(
categories=["Mr.", "Miss.", "Mrs.", "Master.", "Misc."], ordered=True
)
embarked_cat = pd.CategoricalDtype(categories=["S", "C", "Q"], ordered=True)
age_mean, age_std = train_df.Age.mean(), train_df.Age.std()
data = dict(
age=train_df.Age.pipe(lambda x: (x - age_mean) / age_std).values,
pclass=train_df.Pclass.values - 1,
title=train_df.Title.astype(title_cat).cat.codes.values,
sex=(train_df.Sex == "male").astype(int).values,
sibsp=train_df.SibSp.values,
parch=train_df.Parch.values,
embarked=train_df.Embarked.astype(embarked_cat).cat.codes.values,
)
survived = train_df.Survived.values
# compute the age mean for each title
age_notnan = data["age"][jnp.isfinite(data["age"])]
title_notnan = data["title"][jnp.isfinite(data["age"])]
age_mean_by_title = jnp.stack([age_notnan[title_notnan == i].mean() for i in range(5)])
def model(
age, pclass, title, sex, sibsp, parch, embarked, survived=None, bayesian_impute=True
):
b_pclass = numpyro.sample("b_Pclass", dist.Normal(0, 1).expand([3]))
b_title = numpyro.sample("b_Title", dist.Normal(0, 1).expand([5]))
b_sex = numpyro.sample("b_Sex", dist.Normal(0, 1).expand([2]))
b_sibsp = numpyro.sample("b_SibSp", dist.Normal(0, 1).expand([2]))
b_parch = numpyro.sample("b_Parch", dist.Normal(0, 1).expand([3]))
b_embarked = numpyro.sample("b_Embarked", dist.Normal(0, 1).expand([3]))
# impute age by Title
isnan = np.isnan(age)
age_nanidx = np.nonzero(isnan)[0]
if bayesian_impute:
age_mu = numpyro.sample("age_mu", dist.Normal(0, 1).expand([5]))
age_mu = age_mu[title]
age_sigma = numpyro.sample("age_sigma", dist.Normal(0, 1).expand([5]))
age_sigma = age_sigma[title]
age_impute = numpyro.sample(
"age_impute",
dist.Normal(age_mu[age_nanidx], age_sigma[age_nanidx]).mask(False),
)
age = jnp.asarray(age).at[age_nanidx].set(age_impute)
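        # .mask(False) on the "age_impute" site above zeroes out its log-density,
        # so the imputed values are scored only once, through the observed "age"
        # site below, which supplies their Normal(age_mu, age_sigma) prior.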
numpyro.sample("age", dist.Normal(age_mu, age_sigma), obs=age)
else:
# fill missing data by the mean of ages for each title
age_impute = age_mean_by_title[title][age_nanidx]
age = jnp.asarray(age).at[age_nanidx].set(age_impute)
a = numpyro.sample("a", dist.Normal(0, 1))
b_age = numpyro.sample("b_Age", dist.Normal(0, 1))
logits = a + b_age * age
logits = logits + b_title[title] + b_pclass[pclass] + b_sex[sex]
logits = logits + b_sibsp[sibsp] + b_parch[parch] + b_embarked[embarked]
numpyro.sample("survived", dist.Bernoulli(logits=logits), obs=survived)
mcmc = MCMC(NUTS(model), num_warmup=1000, num_samples=1000)
mcmc.run(random.PRNGKey(0), **data, survived=survived)
mcmc.print_summary()
age_by_title = age_mean + age_std * mcmc.get_samples()["age_mu"].mean(axis=0)
dict(zip(title_cat.categories, age_by_title))
train_df.groupby("Title")["Age"].mean()
posterior = mcmc.get_samples()
survived_pred = Predictive(model, posterior)(random.PRNGKey(1), **data)["survived"]
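# posterior predictive draws are 0/1 samples; averaging them over axis 0 gives the
# posterior survival probability per passenger, which is thresholded at 0.5 below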
survived_pred = (survived_pred.mean(axis=0) >= 0.5).astype(jnp.uint8)
print("Accuracy:", (survived_pred == survived).sum() / survived.shape[0])
confusion_matrix = pd.crosstab(
pd.Series(survived, name="actual"), pd.Series(survived_pred, name="predict")
)
confusion_matrix / confusion_matrix.sum(axis=1)
mcmc.run(random.PRNGKey(2), **data, survived=survived, bayesian_impute=False)
posterior_1 = mcmc.get_samples()
survived_pred_1 = Predictive(model, posterior_1)(random.PRNGKey(2), **data)["survived"]
survived_pred_1 = (survived_pred_1.mean(axis=0) >= 0.5).astype(jnp.uint8)
print("Accuracy:", (survived_pred_1 == survived).sum() / survived.shape[0])
confusion_matrix = pd.crosstab(
pd.Series(survived, name="actual"), pd.Series(survived_pred_1, name="predict")
)
confusion_matrix / confusion_matrix.sum(axis=1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading and plotting the data
Step2: TODO: Implement the basic functions below: the sigmoid activation, the output (prediction) formula, the log-loss error formula, and the gradient descent update step
Step3: Training function
Step4: Time to train the algorithm!
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#Some helper functions for plotting and drawing lines
def plot_points(X, y):
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')
def display(m, b, color='g--'):
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
x = np.arange(-10, 10, 0.1)
plt.plot(x, m*x+b, color)
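# data.csv is expected to hold three unnamed columns: two feature columns (0, 1)
# and a 0/1 label column (2), matching how it is unpacked below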
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
plot_points(X,y)
plt.show()
# Implement the following functions
# Activation (sigmoid) function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
# Output (prediction) formula: sigmoid of the weighted sum of the inputs
def output_formula(features, weights, bias):
    return sigmoid(np.dot(features, weights) + bias)
# Error (log-loss) formula
def error_formula(y, output):
    return -y * np.log(output) - (1 - y) * np.log(1 - output)
# Gradient descent step: nudge weights and bias along the negative gradient of the log-loss
def update_weights(x, y, weights, bias, learnrate):
    d_error = y - output_formula(x, weights, bias)
    weights += learnrate * d_error * x
    bias += learnrate * d_error
    return weights, bias
np.random.seed(44)
epochs = 100
learnrate = 0.01
def train(features, targets, epochs, learnrate, graph_lines=False):
errors = []
n_records, n_features = features.shape
last_loss = None
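    # initialize weights with standard deviation 1/sqrt(n_features), a common
    # heuristic to keep the initial sigmoid outputs away from saturation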
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
bias = 0
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features, targets):
output = output_formula(x, weights, bias)
error = error_formula(y, output)
weights, bias = update_weights(x, y, weights, bias, learnrate)
# Printing out the log-loss error on the training set
out = output_formula(features, weights, bias)
loss = np.mean(error_formula(targets, out))
errors.append(loss)
if e % (epochs / 10) == 0:
print("\n========== Epoch", e,"==========")
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
predictions = out > 0.5
accuracy = np.mean(predictions == targets)
print("Accuracy: ", accuracy)
if graph_lines and e % (epochs / 100) == 0:
display(-weights[0]/weights[1], -bias/weights[1])
# Plotting the solution boundary
plt.title("Solution boundary")
display(-weights[0]/weights[1], -bias/weights[1], 'black')
# Plotting the data
plot_points(features, targets)
plt.show()
# Plotting the error
plt.title("Error Plot")
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.plot(errors)
plt.show()
train(X, y, epochs, learnrate, True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Extract Records from XML
Step3: Save the Records as a Pandas DataFrame
|
<ASSISTANT_TASK:>
Python Code:
%%time
# flag to indicate whether we're inside a <text> tag as we iterate over the file line by line
in_text = False
# Set of tags to remove
text_to_remove = set(['<code>', '</code>', '<code/>', '<br>', '<br/>', '</br>'])
with open('EFWritingData.xml') as f_in, open('EFWritingData_new.xml', 'w') as f_out:
for line in f_in:
if '<text>' in line:
# We can't handle another <text> tag while we're already inside, so raise an error.
assert not in_text, 'Found nested text tag!'
in_text = True
if in_text:
for invalid_tag in text_to_remove:
if invalid_tag in line:
line = line.replace(invalid_tag, '')
if '</text>' in line:
in_text = False
f_out.write(line)
%%time
import xml.etree.ElementTree as ET
root = ET.parse('EFWritingData_new.xml')
def xml_to_record(node):
    """Given one <writing> node, return a flat record with all the relevant sub-elements and attributes."""
article_id, level, unit = [int(node.attrib[x]) for x in ('id', 'level', 'unit')]
topic = node.find('topic')
topic_id, topic_text = int(topic.attrib['id']), topic.text
grade = int(node.find('grade').text)
text = node.find('text').text.strip()
return {
'article_id': article_id,
'level' : level,
'unit' : unit,
'topic_id' : topic_id,
'topic_text': topic_text,
'grade' : grade,
'text' : text
}
content_nodes = root.findall('.//writing')
input_recs = list(map(xml_to_record, content_nodes))
import pandas as pd
input_df = pd.DataFrame.from_records(input_recs)
input_df.head()
%%time
input_df.to_pickle('input.pkl')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, let's implement some helper functions to visualize intermediate results
Step2: BaseImageAugmentationLayer Introduction
Step3: Our layer overrides BaseImageAugmentationLayer.augment_image(). This method is
Step4: Next, let's augment it and visualize the result
Step5: Looks great! We can also call our layer on batched inputs
Step7: Adding Random Behavior with the FactorSampler API.
Step8: Now, we can configure the random behavior of our RandomBlueTint layer.
Step9: Each image is augmented differently with a random factor sampled from the range
Step11: As you can see, the augmentations are now drawn from a normal distribution.
Step12: To make use of these new methods, you will need to feed your inputs in with a
Step13: Now if we call our layer on the inputs
Step14: Both the inputs and labels are augmented.
Step16: Note that this is an incredibly weak augmentation!
Step17: Now our elephants are only slightly blue tinted. This is the expected behavior when
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow import keras
import keras_cv
from tensorflow.keras import layers
from keras_cv import utils
from keras_cv.layers import BaseImageAugmentationLayer
import matplotlib.pyplot as plt
tf.autograph.set_verbosity(0)
def imshow(img):
img = img.astype(int)
plt.axis("off")
plt.imshow(img)
plt.show()
def gallery_show(images):
images = images.astype(int)
for i in range(9):
image = images[i]
plt.subplot(3, 3, i + 1)
plt.imshow(image.astype("uint8"))
plt.axis("off")
plt.show()
class RandomBlueTint(keras_cv.layers.BaseImageAugmentationLayer):
def augment_image(self, image, transformation=None):
# image is of shape (height, width, channels)
[*others, blue] = tf.unstack(image, axis=-1)
blue = tf.clip_by_value(blue + 100, 0.0, 255.0)
return tf.stack([*others, blue], axis=-1)
SIZE = (300, 300)
elephants = tf.keras.utils.get_file(
"african_elephant.jpg", "https://i.imgur.com/Bvro0YD.png"
)
elephants = tf.keras.utils.load_img(elephants, target_size=SIZE)
elephants = tf.keras.utils.img_to_array(elephants)
imshow(elephants)
layer = RandomBlueTint()
augmented = layer(elephants)
imshow(augmented.numpy())
layer = RandomBlueTint()
augmented = layer(tf.expand_dims(elephants, axis=0))
imshow(augmented.numpy()[0])
class RandomBlueTint(keras_cv.layers.BaseImageAugmentationLayer):
    """RandomBlueTint randomly applies a blue tint to images.
Args:
factor: A tuple of two floats, a single float or a
`keras_cv.FactorSampler`. `factor` controls the extent to which the
image is blue shifted. `factor=0.0` makes this layer perform a no-op
operation, while a value of 1.0 uses the degenerated result entirely.
Values between 0 and 1 result in linear interpolation between the original
image and a fully blue image.
Values should be between `0.0` and `1.0`. If a tuple is used, a `factor` is
sampled between the two values for every image augmented. If a single float
is used, a value between `0.0` and the passed float is sampled. In order to
ensure the value is always the same, please pass a tuple with two identical
    floats: `(0.5, 0.5)`.
    """
def __init__(self, factor, **kwargs):
super().__init__(**kwargs)
self.factor = utils.parse_factor(factor)
def augment_image(self, image, transformation=None):
[*others, blue] = tf.unstack(image, axis=-1)
blue_shift = self.factor() * 255
blue = tf.clip_by_value(blue + blue_shift, 0.0, 255.0)
return tf.stack([*others, blue], axis=-1)
many_elephants = tf.repeat(tf.expand_dims(elephants, axis=0), 9, axis=0)
layer = RandomBlueTint(factor=0.5)
augmented = layer(many_elephants)
gallery_show(augmented.numpy())
many_elephants = tf.repeat(tf.expand_dims(elephants, axis=0), 9, axis=0)
factor = keras_cv.NormalFactorSampler(
mean=0.3, stddev=0.1, min_value=0.0, max_value=1.0
)
layer = RandomBlueTint(factor=factor)
augmented = layer(many_elephants)
gallery_show(augmented.numpy())
class RandomBlueTint(keras_cv.layers.BaseImageAugmentationLayer):
    """RandomBlueTint randomly applies a blue tint to images.
Args:
factor: A tuple of two floats, a single float or a
`keras_cv.FactorSampler`. `factor` controls the extent to which the
image is blue shifted. `factor=0.0` makes this layer perform a no-op
operation, while a value of 1.0 uses the degenerated result entirely.
Values between 0 and 1 result in linear interpolation between the original
image and a fully blue image.
Values should be between `0.0` and `1.0`. If a tuple is used, a `factor` is
sampled between the two values for every image augmented. If a single float
is used, a value between `0.0` and the passed float is sampled. In order to
ensure the value is always the same, please pass a tuple with two identical
    floats: `(0.5, 0.5)`.
    """
def __init__(self, factor, **kwargs):
super().__init__(**kwargs)
self.factor = utils.parse_factor(factor)
def get_random_transformation(self, **kwargs):
# kwargs holds {"images": image, "labels": label, etc...}
return self.factor() * 255
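        # the value returned here is passed as `transformation` to each augment_*
        # method for the same sample, so image and label share one random draw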
def augment_image(self, image, transformation=None, **kwargs):
[*others, blue] = tf.unstack(image, axis=-1)
blue = tf.clip_by_value(blue + transformation, 0.0, 255.0)
return tf.stack([*others, blue], axis=-1)
def augment_label(self, label, transformation=None, **kwargs):
# you can use transformation somehow if you want
if transformation > 100:
# i.e. maybe class 2 corresponds to blue images
return 2.0
return label
def augment_bounding_boxes(self, bounding_boxes, transformation=None, **kwargs):
# you can also perform no-op augmentations on label types to support them in
# your pipeline.
return bounding_boxes
labels = tf.constant([[1, 0]])
inputs = {"images": elephants, "labels": labels}
layer = RandomBlueTint(factor=(0.6, 0.6))
augmented = layer(inputs)
print(augmented["labels"])
layer = RandomBlueTint(factor=(0.1, 0.1))
elephants_0_1 = elephants / 255
print("min and max before augmentation:", elephants_0_1.min(), elephants_0_1.max())
augmented = layer(elephants_0_1)
print(
"min and max after augmentation:",
(augmented.numpy()).min(),
augmented.numpy().max(),
)
imshow((augmented * 255).numpy().astype(int))
class RandomBlueTint(keras_cv.layers.BaseImageAugmentationLayer):
    """RandomBlueTint randomly applies a blue tint to images.
Args:
        value_range: a tuple or a list of two elements. The first value
represents the lower bound for values in passed images, the second represents
the upper bound. Images passed to the layer should have values within
`value_range`.
factor: A tuple of two floats, a single float or a
`keras_cv.FactorSampler`. `factor` controls the extent to which the
image is blue shifted. `factor=0.0` makes this layer perform a no-op
operation, while a value of 1.0 uses the degenerated result entirely.
Values between 0 and 1 result in linear interpolation between the original
image and a fully blue image.
Values should be between `0.0` and `1.0`. If a tuple is used, a `factor` is
sampled between the two values for every image augmented. If a single float
is used, a value between `0.0` and the passed float is sampled. In order to
ensure the value is always the same, please pass a tuple with two identical
    floats: `(0.5, 0.5)`.
    """
def __init__(self, value_range, factor, **kwargs):
super().__init__(**kwargs)
self.value_range = value_range
self.factor = utils.parse_factor(factor)
def get_random_transformation(self, **kwargs):
# kwargs holds {"images": image, "labels": label, etc...}
return self.factor() * 255
def augment_image(self, image, transformation=None, **kwargs):
image = utils.transform_value_range(image, self.value_range, (0, 255))
[*others, blue] = tf.unstack(image, axis=-1)
blue = tf.clip_by_value(blue + transformation, 0.0, 255.0)
result = tf.stack([*others, blue], axis=-1)
result = utils.transform_value_range(result, (0, 255), self.value_range)
return result
def augment_label(self, label, transformation=None, **kwargs):
# you can use transformation somehow if you want
if transformation > 100:
# i.e. maybe class 2 corresponds to blue images
return 2.0
return label
def augment_bounding_boxes(self, bounding_boxes, transformation=None, **kwargs):
# you can also perform no-op augmentations on label types to support them in
# your pipeline.
return bounding_boxes
layer = RandomBlueTint(value_range=(0, 1), factor=(0.1, 0.1))
elephants_0_1 = elephants / 255
print("min and max before augmentation:", elephants_0_1.min(), elephants_0_1.max())
augmented = layer(elephants_0_1)
print(
"min and max after augmentation:",
augmented.numpy().min(),
augmented.numpy().max(),
)
imshow((augmented * 255).numpy().astype(int))
class UnVectorizable(keras_cv.layers.BaseImageAugmentationLayer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
# this disables BaseImageAugmentationLayer's Auto Vectorization
self.auto_vectorize = False
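        # with auto-vectorization off, the base layer maps augment_image over the
        # batch one sample at a time instead of vectorizing the call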
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading the data
Step2: Computing the statistical properties
Step3: Plotting the data
|
<ASSISTANT_TASK:>
Python Code:
%pylab --no-import-all inline
from scipy.stats import linregress, pearsonr
all_sets = list()
for i in range(0, 8, 2):
x, y = np.loadtxt("anscombe.dat", usecols=(i, i+1), skiprows=1, unpack=True)
all_sets.append((x, y))
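# anscombe.dat stores the four sets as consecutive (x, y) column pairs,
# which is why the loop steps through the columns two at a time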
print(all_sets[0][0])
print(all_sets[0][1])
def show_stat(data):
x, y = data
print("moyenne x : %4.2f" % x.mean())
print("variance x : %4.2f" % np.var(x))
print("moyenne y : %4.2f" % y.mean())
print("variance y : %4.2f" % np.var(y))
cor, p = pearsonr(x, y)
print("corrélation : %5.3f" % cor)
a, b, r, p_value, std_err = linregress(x, y)
print("regression linéaire : %3.1f x + %3.1f (r^2 = %4.2f)" % (a, b, r**2))
for i, data in enumerate(all_sets):
print("\nset %d" % i)
print("------")
show_stat(data)
fig = plt.figure(figsize=(10, 8))
fig.suptitle("Quartet d'Anscombe", size=20)
for i, data in enumerate(all_sets):
ax = plt.subplot(2, 2, i + 1)
x, y = data
ax.plot(x, y, marker="o", color="C3", linestyle="", label="set %d" % (i+1))
ax.set_ylabel("y%d" % (i+1), size=14)
ax.set_xlabel("x%d" % (i+1), size=14)
a, b, r, p_value, std_err = linregress(x, y)
ax.plot([0, 20], [b, a*20 + b], color="C0")
ax.set_xlim(0, 20)
ax.set_ylim(0, 15)
ax.legend(loc="lower right", fontsize=18)
ax.grid(True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Date Data
Step2: Create A Rolling Time Window Of Two Rows
|
<ASSISTANT_TASK:>
Python Code:
# Load library
import pandas as pd
# Create datetimes
time_index = pd.date_range('01/01/2010', periods=5, freq='M')
# Create data frame, set index
df = pd.DataFrame(index=time_index)
# Create feature
df['Stock_Price'] = [1,2,3,4,5]
# Calculate rolling mean
df.rolling(window=2).mean()
# Identify max value in rolling time window
df.rolling(window=2).max()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: First we'll define the system parameters
Step3: And then create three light curve datasets at the same times, but in different passbands
Step4: Now we'll set some atmosphere and limb-darkening options
Step5: And flip the extinction constraint so we can provide E(B-V).
Step6: For comparison, we'll run a model without extinction
Step7: and then another model with extinction
Step8: Lastly, we'll convert the model fluxes into magnitudes and format the figures.
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import matplotlib
matplotlib.rcParams['text.usetex'] = True
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
from matplotlib import gridspec
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger('error')
b = phoebe.default_binary()
b.set_value('period', component='binary', value=1000.0*u.d)
b.set_value('teff', component='primary', value=17000*u.K)
b.set_value('teff', component='secondary', value=4000*u.K)
b.set_value('requiv', component='primary', value=4.22173036*u.solRad)
b.set_value('requiv', component='secondary', value=40.732435*u.solRad)
b.flip_constraint('mass@primary', solve_for='sma@binary')
b.set_value('mass', component='primary', value=6.5*u.solMass)
b.flip_constraint('mass@secondary', solve_for='q')
b.set_value('mass', component='secondary', value=1.9145*u.solMass)
times = phoebe.linspace(-20, 20, 101)
b.add_dataset('lc', times=times, dataset='B', passband="Johnson:B")
b.add_dataset('lc', times=times, dataset='R', passband="Cousins:R")
b.add_dataset('lc', times=times, dataset='KEP', passband="Kepler:mean")
b.set_value_all('atm', 'ck2004')
b.set_value_all('gravb_bol', 0.0)
b.set_value_all('ld_mode_bol', 'manual')
b.set_value_all('ld_func_bol', 'linear')
b.set_value_all('ld_coeffs_bol', [0.0])
b.flip_constraint('ebv', solve_for='Av')
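# with the constraint flipped, E(B-V) becomes the free parameter and Av is
# computed from it instead of the other way around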
b.set_value('ebv', 0.0)
b.run_compute(distortion_method='rotstar', irrad_method='none', model='noext')
b.set_value('ebv', 1.0)
b.run_compute(distortion_method='rotstar', irrad_method='none', model='ext')
Bextmags=-2.5*np.log10(b['value@fluxes@B@ext@model'])
Bnoextmags=-2.5*np.log10(b['value@fluxes@B@noext@model'])
Bextmags_norm=Bextmags-Bextmags.min()+1
Bnoextmags_norm=Bnoextmags-Bnoextmags.min()+1
Bresid=Bextmags_norm-Bnoextmags_norm
Rextmags=-2.5*np.log10(b['value@fluxes@R@ext@model'])
Rnoextmags=-2.5*np.log10(b['value@fluxes@R@noext@model'])
Rextmags_norm=Rextmags-Rextmags.min()+1
Rnoextmags_norm=Rnoextmags-Rnoextmags.min()+1
Rresid=Rextmags_norm-Rnoextmags_norm
fig=plt.figure(figsize=(12,6))
gs=gridspec.GridSpec(2,2,height_ratios=[4,1],width_ratios=[1,1])
ax=plt.subplot(gs[0,0])
ax.plot(b['value@times@B@noext@model']/1000,Bnoextmags_norm,color='k',linestyle="--")
ax.plot(b['value@times@B@ext@model']/1000,Bextmags_norm,color='k',linestyle="-")
ax.set_ylabel('Magnitude')
ax.set_xticklabels([])
ax.set_xlim([-0.02,0.02])
ax.set_ylim([3.5,0.8])
ax.set_title('(a) Johnson B')
ax2=plt.subplot(gs[0,1])
ax2.plot(b['value@times@R@noext@model']/1000,Rnoextmags_norm,color='k',linestyle="--")
ax2.plot(b['value@times@R@ext@model']/1000,Rextmags_norm,color='k',linestyle="-")
ax2.set_ylabel('Magnitude')
ax2.set_xticklabels([])
ax2.set_xlim([-0.02,0.02])
ax2.set_ylim([3.5,0.8])
ax2.set_title('(b) Cousins Rc')
ax_1=plt.subplot(gs[1,0])
ax_1.plot(b['value@times@B@noext@model']/1000,Bresid,color='k',linestyle='-')
ax_1.set_ylabel(r'$\Delta m$')
ax_1.set_xlabel('Phase')
ax_1.set_xlim([-0.02,0.02])
ax_1.set_ylim([0.05,-0.3])
ax_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
ax2_1=plt.subplot(gs[1,1])
ax2_1.plot(b['value@times@R@noext@model']/1000,Rresid,color='k',linestyle='-')
ax2_1.set_ylabel(r'$\Delta m$')
ax2_1.set_xlabel('Phase')
ax2_1.set_xlim([-0.02,0.02])
ax2_1.set_ylim([0.05,-0.3])
ax2_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
plt.tight_layout()
fig.canvas.draw()
KEPextmags=-2.5*np.log10(b['value@fluxes@KEP@ext@model'])
KEPnoextmags=-2.5*np.log10(b['value@fluxes@KEP@noext@model'])
KEPextmags_norm=KEPextmags-KEPextmags.min()+1
KEPnoextmags_norm=KEPnoextmags-KEPnoextmags.min()+1
KEPresid=KEPextmags_norm-KEPnoextmags_norm
fig=plt.figure(figsize=(6,6))
gs=gridspec.GridSpec(2,1,height_ratios=[4,1])
ax=plt.subplot(gs[0])
ax.plot(b['value@times@KEP@noext@model']/1000,KEPnoextmags_norm,color='k',linestyle="--")
ax.plot(b['value@times@KEP@ext@model']/1000,KEPextmags_norm,color='k',linestyle="-")
ax.set_ylabel('Magnitude')
ax.set_xticklabels([])
ax.set_xlim([-0.02,0.02])
ax.set_ylim([3.5,0.8])
ax.set_title('Kepler K')
ax_1=plt.subplot(gs[1])
ax_1.plot(b['value@times@KEP@noext@model']/1000,KEPresid,color='k',linestyle='-')
ax_1.set_ylabel(r'$\Delta m$')
ax_1.set_xlabel('Phase')
ax_1.set_xlim([-0.02,0.02])
ax_1.set_ylim([0.05,-0.3])
ax_1.axhline(y=0., linestyle='dashed',color='k',linewidth=0.5)
plt.tight_layout()
fig.canvas.draw()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We load the data in a Pandas dataframe as always and specify our column names.
Step2: We'll have a look at our dataset
Step3: This dataset contains textual values rather than the integers or floats our classifier expects, so we'll use LabelEncoder to transform the data
Step4: Our data now looks like
Step5: Our class values range from 1 to 7, each denoting a specific animal type.
Step6: As usual, we split our dataset into 60% training and 40% testing
Step7: Finding Feature importances with forests of trees
Step9: The accuracy is found to be 87%, which is quite good for such limited data. However, accuracy alone might not be a perfect measure of our model's performance, so we also use cross-validation
|
<ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn import cross_validation, metrics
from sklearn.naive_bayes import BernoulliNB
from time import time
from sklearn import preprocessing
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_auc_score , classification_report
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.metrics import precision_score, recall_score, accuracy_score, classification_report
# read .csv from provided dataset
csv_filename="zoo.data"
# df=pd.read_csv(csv_filename,index_col=0)
df=pd.read_csv(csv_filename,
names=["Animal", "Hair" , "Feathers" , "Eggs" , "Milk" , "Airborne",
"Aquatic" , "Predator" , "Toothed" , "Backbone", "Breathes" , "Venomous",
"Fins", "Legs", "Tail", "Domestic", "Catsize", "Type" ])
df.head()
df.tail()
df['Animal'].unique()
#Convert animal labels to numbers
le_animals = preprocessing.LabelEncoder()
df['animals'] = le_animals.fit_transform(df.Animal)
#Get binarized Legs columns
df['Legs'] = pd.get_dummies(df.Legs)
#types = pd.get_dummies(df.Type)
df.head()
df['Type'].unique()
features=(list(df.columns[1:]))
features
features.remove('Type')
X = df[features]
y = df['Type']
X.head()
# split dataset to 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)
X_train.shape, y_train.shape
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
# Build a classification task using 3 informative features
# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250,
random_state=0)
forest.fit(X, y)
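# feature_importances_ averages the impurity-based importance over all 250 trees;
# the spread across trees computed below is used as error bars in the plot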
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d - %s (%f) " % (f + 1, indices[f], features[indices[f]], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure(num=None, figsize=(14, 10), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
importances[indices[:5]]
for f in range(5):
print("%d. feature %d - %s (%f)" % (f + 1, indices[f], features[indices[f]] ,importances[indices[f]]))
best_features = []
for i in indices[:5]:
best_features.append(features[i])
# Plot the top 5 feature importances of the forest
plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(5), importances[indices][:5],
color="r", yerr=std[indices][:5], align="center")
plt.xticks(range(5), best_features)
plt.xlim([-1, 5])
plt.show()
t4=time()
print ("NaiveBayes")
nb = BernoulliNB()
clf_nb=nb.fit(X_train,y_train)
print ("Acurracy: ", clf_nb.score(X_test,y_test))
t5=time()
print ("time elapsed: ", t5-t4)
tt4=time()
print ("cross result========")
scores = cross_validation.cross_val_score(nb, X,y, cv=3)
print (scores)
print (scores.mean())
tt5=time()
print ("time elapsed: ", tt5-tt4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This function will compile the average read-coverage for each locus and sample and will select the n loci with the best coverage across all samples.
Step2: We can also plot the results of the selection in a more detailed view, by showing the contig and read coverage of only the selected loci
Step3: And here are the corresponding legends
Step4: The plotting scripts also automatically output a text file that contains the corresponding locus name for each exon index in the plot. The text file is stored in the same folder as the input data; below we show the first lines of the file (note that the locus names have been assigned numerical IDs by secapr, see the file reference_fasta_header_info.txt in the output folder of the secapr find_target_contigs function)
Step5: Aligning selected loci
Step6: The read-coverage in the set of selected loci however is rather good for all/most samples
|
<ASSISTANT_TASK:>
Python Code:
%%bash
source activate secapr_env
secapr locus_selection -h
import sys
sys.path.append("../../src")
import plot_contig_data_function as secapr_plot
contig_input_file = '../../data/processed/target_contigs/match_table.txt'
alignment_folder = '../../data/processed/alignments/contig_alignments'
read_cov_file_selected = '../../data/processed/selected_loci/overview_selected_loci.txt'
secapr_plot.plot_contigs_alignments_read_cov(contig_input_file,alignment_folder,read_cov_file_selected,norm_value=10)
%matplotlib inline
selected_loci = secapr_plot.plot_contigs_alignments_read_cov(contig_input_file,alignment_folder,read_cov_file_selected,reduce=True,norm_value=10)
#selected_loci.savefig(os.path.join('/Users/tobias/GitHub/seqcap_processor/data/processed/selected_loci_50/','contig_exon_coverage_matrix_reduced.png'), dpi = 500)
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
legend = secapr_plot.plot_heatmap_legend(0,10,width=.75,height=2, font_size=14)
contig = secapr_plot.general_scale_bar(2,tick_labels=['No','Yes'],x0=.1,x1=.25,plot_height=.5,plot_width=.3,font_size = 14,color1='white',color2=(0.031372549019607843, 0.25098039215686274, 0.50588235294117645),height=1,width=.75,plot_label='Contig present')
align = secapr_plot.general_scale_bar(2,tick_labels=['No','Yes'],x0=.1,x1=.25,plot_height=.5,plot_width=.3,font_size = 14,color1='white',color2=(0.0, 0.26666666666666666, 0.10588235294117647),height=1,width=.75,plot_label='Alignment present')
plt.show(contig)
plt.show(align)
plt.show(legend)
import pandas as pd
pd.read_csv('../../data/processed/selected_loci/locus_index_overview.txt',sep='\t',header=None).head(10)
%%bash
cd ../../
python src/heatmap_plot.py
%%html
<div>
<a href="https://plot.ly/~tobiashofmann/48/?share_key=wC4zjzzzXVpyZ4iRjUja28" target="_blank" title="plot from API (24)" style="display: block; text-align: center;"><img src="https://plot.ly/~tobiashofmann/48.png?share_key=wC4zjzzzXVpyZ4iRjUja28" alt="plot from API (24)" style="max-width: 100%;width: 600px;" width="600" onerror="this.onerror=null;this.src='https://plot.ly/404.png';" /></a>
<script data-plotly="tobiashofmann:48" sharekey-plotly="wC4zjzzzXVpyZ4iRjUja28" src="https://plot.ly/embed.js" async></script>
</div>
%%html
<div>
<a href="https://plot.ly/~tobiashofmann/50/?share_key=VZLFvEmzO1oJ3VGD3SUc8g" target="_blank" title="plot from API (25)" style="display: block; text-align: center;"><img src="https://plot.ly/~tobiashofmann/50.png?share_key=VZLFvEmzO1oJ3VGD3SUc8g" alt="plot from API (25)" style="max-width: 100%;width: 600px;" width="600" onerror="this.onerror=null;this.src='https://plot.ly/404.png';" /></a>
<script data-plotly="tobiashofmann:50" sharekey-plotly="VZLFvEmzO1oJ3VGD3SUc8g" src="https://plot.ly/embed.js" async></script>
</div>
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first example model
Step2: 2. Creating and calibrating the model
Step3: 3. Computing and visualizing the SGI
Step4: Second Example
Step5: 2. Add linear trend
Step6: 3. Computing and plotting the SGI
Step7: What about human influenced groundwater systems?
Step8: 2. SGI with and without groundwater pumping
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt
ps.set_log_level("ERROR")
ps.show_versions()
# Load input data
head = pd.read_csv("../data/B32C0639001.csv", parse_dates=['date'],
index_col='date', squeeze=True)
evap = ps.read_knmi("../data/etmgeg_260.txt", variables="EV24").series * 1e3
rain = ps.read_knmi("../data/etmgeg_260.txt", variables="RH").series * 1e3
# Plot input data
ps.plots.series(head, [rain, evap]);
# Create the basic Pastas model
ml = ps.Model(head)
# Add a recharge model
rch = ps.rch.FlexModel()
rm = ps.RechargeModel(rain, evap, recharge=rch, rfunc=ps.Exponential, name="rch")
ml.add_stressmodel(rm)
# Solve the model
ml.solve(noise=True, tmin="1990", report=False)
ml.plots.results(figsize=(10, 6));
# Compute the SGI
sim = ml.simulate(tmin="1990")
sgi = ps.stats.sgi(sim.resample("W").mean())
ci = ml.fit.prediction_interval(n=10)
# Make the plot
fig, [ax1, ax2] = plt.subplots(2, 1, figsize=(10,5), sharex=True)
# Upper subplot
sim.plot(ax=ax1, zorder=10)
ml.oseries.series.plot(ax=ax1, linestyle=" ", marker=".", color="k")
ax1.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color="gray")
ax1.legend(["Simulation", "Observations", "Prediction interval"], ncol=3)
# Lower subplot
sgi.plot(ax=ax2, color="k")
ax2.axhline(0, linestyle="--", color="k")
droughts = sgi.to_numpy(copy=True)
droughts[droughts > 0] = 0
ax2.fill_between(sgi.index, 0, droughts, color="C0")
# Dress up the plot
ax1.set_ylabel("GWL [m]")
ax1.set_title("Groundwater levels")
ax2.set_ylabel("SGI [-]")
ax2.set_title("Standardized Groundwater Index")
# Loads heads and create Pastas model
head2 = ps.read_dino("../data/B32C0609001_1.csv")
ml2 = ps.Model(head2)
# Add a recharge model
rch = ps.rch.FlexModel()
rm = ps.RechargeModel(rain, evap, recharge=rch, rfunc=ps.Exponential, name="rch")
ml2.add_stressmodel(rm)
# Solve and plot the model
ml2.solve(noise=True, tmin="1990", report=False)
ml2.plots.results(figsize=(10, 6));
# Add a linear trend
tm = ps.LinearTrend("1990", "2020", name="trend")
ml2.add_stressmodel(tm)
# Solve the model
#ml2.solve(noise=False, tmin="1990", report=False) # Get better initial estimated first
ml2.solve(noise=True, tmin="1990", report=False)
ml2.plots.results(figsize=(10, 6));
# Compute the SGI
sim = ml2.simulate(tmin="1990")
sgi = ps.stats.sgi(sim.resample("W").mean())
ci = ml2.fit.prediction_interval(n=10)
# Make the plot
fig, [ax1, ax2] = plt.subplots(2, 1, figsize=(10,5), sharex=True)
# Upper subplot
sim.plot(ax=ax1, zorder=10)
ml2.oseries.series.plot(ax=ax1, linestyle=" ", marker=".", color="k")
ax1.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color="gray")
ax1.legend(["Simulation", "Observations", "Prediction interval"], ncol=3)
# Lower subplot
sgi.plot(ax=ax2, color="k")
ax2.axhline(0, linestyle="--", color="k")
droughts = sgi.to_numpy(copy=True)
droughts[droughts > 0] = 0
ax2.fill_between(sgi.index, 0, droughts, color="C0")
# Dress up the plot
ax1.set_ylabel("GWL [m]")
ax1.set_title("Groundwater levels")
ax2.set_ylabel("SGI [-]")
ax2.set_title("Standardized Groundwater Index");
# Load input data
head = pd.read_csv("data_notebook_9/head.csv", parse_dates=True, index_col=0, squeeze=True)
prec = pd.read_csv("data_notebook_9/prec.csv", parse_dates=True, index_col=0, squeeze=True)
evap = pd.read_csv("data_notebook_9/evap.csv", parse_dates=True, index_col=0, squeeze=True)
well = pd.read_csv("data_notebook_9/well.csv", parse_dates=True, index_col=0, squeeze=True)
# Create the Pastas model
ml3 = ps.Model(head, name="heads")
# Add recharge and a well
sm = ps.RechargeModel(prec, evap, ps.Exponential,
name='rch', recharge=ps.rch.FlexModel())
wm = ps.StressModel(well, ps.Exponential, well.name,
up=False, settings="well")
ml3.add_stressmodel([sm, wm])
# Solve the model
ml3.solve(noise=True, report=False)
ml3.plots.results(figsize=(10, 6));
# Compute the SGI
sim = ml3.simulate(tmin="1940")
sgi = ps.stats.sgi(sim.resample("M").mean())
recharge = ml3.get_contribution("rch", tmin="1940")
sgi2 = ps.stats.sgi(recharge.resample("M").mean())
#ci = ml3.fit.prediction_interval()
# Make the plot
fig, [ax1, ax2, ax3] = plt.subplots(3, 1, figsize=(10,6), sharex=True)
sim.plot(ax=ax1, x_compat=True)
(recharge+ml3.get_parameters("constant")).plot(ax=ax1, linestyle="--")
ml3.oseries.series.plot(ax=ax1, linestyle=" ", marker=".", zorder=-1,
markersize=2, color="k", x_compat=True)
#ax1.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color="gray")
ax1.legend(["Simulation", "Simulation w/o pumping"], ncol=1)
sgi.plot(ax=ax2, color="k", x_compat=True)
ax2.axhline(0, linestyle="--", color="k")
droughts = sgi.to_numpy(copy=True)
droughts[droughts > 0] = 0
ax2.fill_between(sgi.index, 0, droughts, color="C0")
sgi2.plot(ax=ax3, color="k", x_compat=True)
ax3.axhline(0, linestyle="--", color="k")
droughts = sgi2.to_numpy(copy=True)
droughts[droughts > 0] = 0
ax3.fill_between(sgi2.index, 0, droughts, color="C1")
ax1.set_ylabel("GWL [m]")
ax1.set_title("Groundwater levels")
ax2.set_ylabel("SGI [-]")
ax2.set_title("SGI With Groundwater pumping")
ax3.set_ylabel("SGI [-]")
ax3.set_title("SGI under 'Natural conditions'")
plt.xlim("1940", "2016");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Small parameters
Step2: The need for spin
Step3: We now see that we resolved the first problem
Step4: Of course we didn't break bulk-edge correspondence. Majoranas in our system would have to have a spin, which isn't possible. That in turn means that they cannot appear, and that means that the system cannot be gapped.
Step5: Yep, it does
Step6: Here the vertical line denotes the critical value of Zeeman field at which the wire becomes topological.
Step7: When $\mu$ is very negative we see two split electron bands at positive energy corresponding to two spin orientations.
Step8: Outlook
|
<ASSISTANT_TASK:>
Python Code:
MoocVideo("GQLfs4i22ms", src_location="2.1-intro")
# matplotlib.rcParams['axes.color_cycle'] = 'k'
f = plt.figure(figsize=[9, 3.5])
ax0 = f.add_subplot(1, 2, 1)
ax0.set_xlabel('$k$')
ax0.set_xticks([-1.0, 0.0, 1.0])
ax0.set_ylabel('$E$')
ax0.set_xlim([-1.03, 1.03])
ax0.set_ylim([-1.0, 1.0])
ax0.set_yticks([-1.0, 0.0, 1.0])
ax0.set_title('Trivial bandstructure')
ax1 = f.add_subplot(1, 2, 2)
ax1.set_xlabel('$k$')
ax1.set_xticks([-1.0, 0.0, 1.0])
ax1.set_ylabel('$E$')
ax1.set_xlim([-1.03, 1.03])
ax1.set_ylim([-1.0, 1.0])
ax1.set_yticks([-1.0, 0.0, 1.0])
ax1.set_title('Topological bandstructure')
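# NOTE: spinful_kitaev_chain, spinorbit_band, spinorbit_band_gap and the MoocVideo /
# StaticInteract helpers used below are assumed to be defined earlier in the course
# materials; they are not part of this snippet.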
sys0 = spinful_kitaev_chain(L=None, t=1.0, delta=0.1, mu=-0.3, B=0)
sys1 = spinful_kitaev_chain(L=None, t=1.0, delta=0.1, mu=0.3, B=0)
ax0.set_color_cycle(['k'])
ax1.set_color_cycle(['k'])
kwant.plotter.bands(sys0, momenta=201, ax=ax0)
kwant.plotter.bands(sys1, momenta=201, ax=ax1)
# Plot of the Kitaev dispersion with Zeeman as a parameter.
# mu should be a few times larger than the gap, so that there is a mexican hat dispersion.
def kitaev_bands(B):
sys = spinful_kitaev_chain(None, 1.0, 0.1, 0.3, B)
momenta = np.linspace(-1.03, 1.03, 201)
bands = kwant.physics.Bands(sys)
energies = [bands(k) for k in momenta]
fig = plt.figure()
plt.plot(momenta, energies, 'k-')
plt.xlabel('$k$')
plt.xticks([-1.0, 0.0, 1.0])
plt.ylabel('$E$')
plt.xlim([-1.03, 1.03])
plt.ylim([-1.0, 1.0])
plt.yticks([-1.0, 0.0, 1.0])
plt.title('Band structure')
return fig
StaticInteract(lambda B: kitaev_bands(B*0.05),
B = RangeWidget(0, 8))
StaticInteract(lambda B: spinorbit_band(1.0, 0.0, 0.1, 0.04*B, 0.0),
B = RangeWidget(0, 5, 1))
StaticInteract(lambda alpha: spinorbit_band(1., 0.1, 0.1, 0.2, 0.05*alpha),
alpha=RangeWidget(0, 8, 1, default=0))
B_vals = np.linspace(0, 0.3, 71)
StaticInteract(lambda mu: spinorbit_band_gap(mu*0.025, 1.0, 0.1, B_vals),
mu=RangeWidget(-2, 6, 1))
StaticInteract(lambda mu: spinorbit_band(t=1.0, mu=0.025*mu + 0.02, delta=0.025, B=0.07, alpha=0.8),
mu=RangeWidget(-8, 8, 0.5, default=-8))
question = ("What happens if we align the magnetic field $B$ along the $y$-direction instead of the $z$-direction?")
answers = ["Then we do not need spin-orbit coupling anymore in order to get Majoranas.",
"Then spin projection along $y$ direction is conserved, so we can't get Majoranas.",
"It's impossible, because a magnetic field can only be applied along $z$.",
"Then the spin-orbit term is automatically modified to point along the $z$ direction, so nothing really changes."]
explanation = ("If both the magnetic field and the spin orbit coupling point in the $y$ direction, " +
"then the Hamiltonian commutes with $\sigma_y$, and spin projection along $y$ is a good quantum number. " +
"So we are back to the problem that a gap at finite momentum does not open, " +
"and we do not get a topological phase supporting Majoranas.")
MoocMultipleChoiceAssessment(question, answers, correct_answer=1, explanation=explanation)
MoocVideo("MsFyJBAMFLI", src_location="2.1-summary")
MoocDiscussion("Questions", "Majoranas in nanowires")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A complex wave is a time series of complex numbers. As such, it has a real part and an imaginary part at every time point. To visualize a complex wave, we can use a 3D representation. If we plot the imaginary part vs. time, we see a sine wave, and if we plot the real part vs. time, we see a cosine wave. Looking through the real axis you will see the cosine, and through the imaginary axis the sine.
Step2: If you try to compute the Fourier transform with only real-valued sines (or cosines), you will get a result
Step3: With the same amplitudes and phases but different frequencies, the dot product is zero. Only when the steps are one or 0.5. It is
|
<ASSISTANT_TASK:>
Python Code:
t = np.arange(0,np.pi, 1/30000)
freq = 2 # in Hz
phi = 0
amp = 1
k = 2*np.pi*freq*t + phi
cwv = amp * np.exp(-1j* k) # complex sine wave
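# by Euler's formula exp(-1j*k) = cos(k) - 1j*sin(k), so the real part of cwv is a
# cosine and the imaginary part a (sign-flipped) sine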
fig, ax = plt.subplots(2,1, figsize=(8,4), sharex=True)
ax[0].plot(t, np.real(cwv), lw=1.5)
ax[0].plot(t, np.imag(cwv), lw=0.5, color='orange')
ax[0].set_title('real (cosine)', color='C0')
ax[1].plot(t, np.imag(cwv), color='orange', lw=1.5)
ax[1].plot(t, np.real(cwv), lw=0.5, color='C0')
ax[1].set_title('imaginary (sine)', color='orange')
for myax in ax:
myax.set_yticks(range(-2,2,1))
myax.set_xlabel('Time (sec)')
myax.set_ylabel('Amplitude (AU)')
from mpl_toolkits.mplot3d import Axes3D # <--- This is important for 3d plotting
fig = plt.figure()
ax = fig.gca(projection ='3d')
ax.plot(t, cwv.real, cwv.imag)
ax.set_xlabel('Time (s)'), ax.set_ylabel('Real part'), ax.set_zlabel('Imaginary part')
dt = 1/30000 # sampling interval in sec
t = np.arange(0,4, dt)
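# NOTE: `mysine` is used below but not defined in this snippet; a minimal assumed
# definition, consistent with how it is called, would be:
def mysine(t, amp, freq, phi):
    return amp * np.sin(2 * np.pi * freq * t + phi)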
myparams1 = dict(amp = 2, freq = 5, phi = np.pi/2)
myparams2 = dict(amp = 2, freq = 5, phi = np.pi/2)
sinew1 = mysine(t, **myparams1)
sinew2 = mysine(t, **myparams2)
fig, ax = plt.subplots(1,1, figsize=(16,4))
ax.plot(t, sinew1, lw = 2)
ax.plot(t, sinew2, color='orange', lw=2)
ax.set_ylim(-10,10)
ax.text(3, 7.5, '{:2.4f}'.format(np.dot(sinew1, sinew2)), fontsize=15)
#
myparams1 = dict(amp = 2, freq = 5, phi = np.pi/2)
myparams2 = dict(amp = 2, freq = 5, phi = 2*np.pi/2) # orthogonal
sinew1 = mysine(t, **myparams1)
sinew2 = mysine(t, **myparams2)
fig, ax = plt.subplots(1,1, figsize=(16,4))
ax.plot(t, sinew1, lw = 2)
ax.plot(t, sinew2, color='orange', lw=2)
ax.set_ylim(-10,10)
ax.text(3, 7.5, '{:2.4f}'.format(np.dot(sinew1, sinew2)), fontsize=15);
t = np.arange(-1., 1., 1/1000.)
theta = 2*np.pi/4
morlet = lambda f : np.sin(2*np.pi*f*t + theta) * np.exp( (-t**2)/ 0.1) # Gaussian is exp(-t^2/stdev)
signal = morlet(5)
fval = np.arange(2,10,0.5)
fig, ax = plt.subplots(2,1, figsize=(16,8))
dotlist = list()
for i in fval:
dotlist.append(np.dot(signal,morlet(i)))
ax[0].plot(t, morlet(i), color='gray', alpha=.3)
ax[0].plot(t, signal, lw = 2)
ax[0].set_xlabel('Time (sec)')
ax[1].stem(fval, dotlist)
ax[1].set_ylabel('Dot product')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.4 Data Structures
Step2: What datatype is the counts object?
Step3: 2.3 Series - String indexes
Step4: 2.4 Series - Dictionary
Step5: Make pandas Series from RNA-counts dict
Step6: 2.5.1 Series - Referencing Elements - Integer
Step7: Get the 2nd through 4th elements
Step8: 2.5.2 Series - Referencing Elements - String
Step9: Get the counts for FOXA1, GATA2 and BRCA2
Step10: 2.5.3 Series - Referencing Elements - array/index values
Step11: Get the index of the rna_counts matrix
Step12: 2.5.4 Series - Referencing Elements - labels
Step13: 2.6 Series - array operations
Step14: Select genes that have greater than 20 counts
Step15: 2.7 Series - null values
Step16: Find movies with no opening revenue
Step17: Find movies with opening revenue
Step18: Display only movie with no opening revenue
Step19: 2.8 Series - auto alignment
Step20: Combine counts for 2 cells (rna_counts and rna_counts_cell2)
Step21: Adding Series combined values with the same label in the resulting series
Step23: Agenda
Step24: What datatype is df_mcu?
Step25: 3.3 Dataframe - specifying indices and columns
Step26: 3.4 Dataframe - from nested dict of dicts
Step27: 3.5 Dataframe - number of rows and columns
Step28: Get the (rows, cols) of the Dataframe
Step29: 3.6 Dataframe - index, columns and values
Step30: Get the row index values
Step31: Get values of the Dataframe only
Step32: 3.7 Dataframe - Selecting Columns and Rows
Step33: Select values in a list of columns
Step34: Use slice to get the first n rows (NumPy style indexing)
Step35: Can combine slice and column selection to select the first n rows
Step36: Order of column and slice doesn't matter
Step37: 3.7.2 Dataframe - Selecting Columns and Rows - Integer based selection
Step38: A list or array of integers
Step39: A slice object with ints
Step40: 3.7.3 Dataframe - Selecting Columns and Rows - Label based selection
Step41: A list or array of labels
Step42: A slice object with labels
Step43: 3.7.4 Dataframe - Selecting Columns and Rows - Data Filtering
Step44: Filter and select on multiple conditions
Step45: 3.8 Dataframe - Adding and Deleting data
Step46: Add a new row
Step47: Drop an existing column
Step48: Drop an existing row
Step49: Columns, Rows or individual elements can be modified similarly using loc or iloc.
Step52: Agenda
Step53: Agenda
Step54: 5.2 Summarizing and Computing Descriptive Statistics - Sorting
Step55: Sort Dataframe by columns in descending order
Step56: 5.3 Summarizing and Computing Descriptive Statistics - Descriptive statistics
Step57: mean()
Step58: var()
Step59: 5.4 Summarizing and Computing Descriptive Statistics - Missing Data
Step60: - Fill Nan with default value
Step61: - Use inplace to modify the dataframe instead of returning a new object
Step64: Agenda
Step65: 6.3 Grouped and apply - Apply
Step66: - Apply a new function that subtracts two times the min from the max in every column
Step69: Introduction to Pandas
Step70: Plot the Number of Medals in barchart by country using
Step71: Other useful plots
|
<ASSISTANT_TASK:>
Python Code:
# Import Pandas and Numpy
import pandas as pd
import numpy as np
# Make Series of count data and visaulize series
counts = pd.Series([223, 43, 53, 24, 43])
counts
# What datatype is the counts object?
type(counts)
# Make Series of count data with Gene Symbols
rna_counts = pd.Series([50, 10, 12, 29, 4], index=['BRCA2', 'GATA2', 'Myc', 'FOXA1', 'ERCC2'])
rna_counts
# Construct second sample RNA-counts dict
cell2_counts = {'BRCA2':5, 'GATA2':20, 'Myc':45, 'FOXA1':10, 'ERCC2':0, 'BRCA1': 20}
cell2_counts
# Make pandas Series from RNA-counts dict
rna_counts_cell2 = pd.Series(cell2_counts)
rna_counts_cell2
# Access the 1st element of counts data
counts[0]
# Get the 2nd through 4th elements
counts[1:4]
# Get the counts for Myc Gene
rna_counts['Myc']
# Get the Counts for FOXA1, GATA2 and BRCA2
rna_counts[['FOXA1', 'GATA2', 'BRCA2']]
# Get the values in the counts matrix
counts.values
# Get the index of the rna_counts matrix
rna_counts.index
rna_counts.name = 'RNA Counts'
rna_counts.index.name = 'Symbol'
rna_counts
# Select Genes that have greater than 20 counts
rna_counts > 20
# Select genes that have greater than 20 counts
rna_counts[rna_counts > 20]
# Make Movie Database with missing values
mcu_opening = {'Black Panther': 202003951, 'Thor: Ragnarok': 122744989, 'Spider-Man: Homecoming': 117027503,
'Guardians of the Galaxy Vol. 2': 146510104, 'Doctor Strange': 85058311,
'Captain America: Civil War': 179139142}
mcu_movies = ['Ant-Man and the Wasp', 'Avengers: Infinity War', 'Black Panther', 'Thor: Ragnarok',
'Spider-Man: Homecoming', 'Guardians of the Galaxy Vol. 2', 'Doctor Strange', 'Captain America: Civil War']
mcu_series = pd.Series(mcu_opening, index=mcu_movies)
mcu_series
# Find movies with no opening revenue
pd.isnull(mcu_series)
# Good opportunity to use Boolean filter get index and only movie names
mcu_series[pd.isnull(mcu_series)].index.values
# Find movies with opening revenue
pd.notnull(mcu_series)
# Display only movies with no opening revenue
mcu_series[pd.isnull(mcu_series)].index.values
rna_counts
rna_counts_cell2
# Combine counts for 2 cells
rna_counts + rna_counts_cell2
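# the two Series are aligned on their index labels before adding; a label present
# in only one of them (here BRCA1) produces NaN in the result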
# Sample Python data and labels:
students = ['Anastasia', 'Dima', 'Katherine', 'James', 'Emily', 'Michael', 'Matthew', 'Laura', 'Kevin', 'Jonas']
test_scores = [12.5, 9, 16.5, np.nan, 9, 20, 14.5, np.nan, 8, 19]
s_scores = pd.Series(test_scores, index=students)
# Which Students have scores greater than 15?
s_scores[s_scores > 15].index.values
# Bonus: How would you use get the Students scores greater than 15 and less than 20?
s_scores[(s_scores > 15) & (s_scores < 20)]
# What is the mean, median and max test scores?
s_scores.mean()
s_scores.median()
s_scores.max()
# Make Dataframe of Marvel data
mcu_data = {'Title': ['Ant-Man and the Wasp', 'Avengers: Infinity War', 'Black Panther', 'Thor: Ragnarok',
'Spider-Man: Homecoming', 'Guardians of the Galaxy Vol. 2'],
'Year':[2018, 2018, 2018, 2017, 2017, 2017],
'Studio':['Beuna Vista', 'Beuna Vista', 'Beuna Vista', 'Beuna Vista', 'Sony', 'Beuna Vista'],
'Rating': [np.nan, np.nan, 0.96, 0.92, 0.92, 0.83]}
df_mcu = pd.DataFrame(mcu_data)
df_mcu
# What datatype is df_mcu?
type(df_mcu)
# Assign column order and index based on Marvel Cinemetic Universe Movie Number
mcu_index = ['mcu_20','mcu_19', 'mcu_18', 'mcu_17', 'mcu_16', 'mcu_15']
mcu_columns = ['Title', 'Year', 'Studio', 'Rating']
df_mcu = pd.DataFrame(mcu_data, columns = mcu_columns, index = mcu_index)
df_mcu
# Make Dataframe of population
pop = {'Nevada': {2001: 2.9, 2002: 2.9}, 'Ohio': {2002: 3.6, 2001: 1.7, 2000: 1.5}}
df_pop = pd.DataFrame(pop)
df_pop
# Get the number of rows in a Dataframe
len(df_mcu)
# Get the (rows, cols) of the Dataframe
df_mcu.shape
# Get the column headers
df_mcu.columns
# Get the row index values
df_mcu.index
# Get values of the Dataframe only
df_mcu.values
# Select values in a single column
df_mcu['Title']
# Select values in a list of columns
df_mcu[['Title', 'Rating']]
# Use slice to get the first n rows (NumPy style indexing)
df_mcu[:2]
# Can combine slice and column selection to select the first n rows
df_mcu['Title'][:2]
df_mcu[:4]['Year']
# Return values in the first row
df_mcu.iloc[0]
# Return values in the first row and second column
df_mcu.iloc[0,1]
# Return values in the 3,5 and 6th rows
df_mcu.iloc[[2,4,5]]
# Return values in the first row and columns 2 and 3
df_mcu.iloc[:2, 1:3]
# Select all values of the 20th Movie
df_mcu.loc['mcu_20']
# Select all values of the 20th, 17th and 15th movie, which uses row index values,
# Not to be confused with df_mcu[['Title', 'Rating']] which uses column headers
df_mcu.loc[['mcu_20', 'mcu_17', 'mcu_15']]
# Select the Year and Rating
df_mcu.loc[:, ['Year', 'Rating']]
# Filter for Rating < .95
df_mcu.loc[df_mcu['Rating'] < .95, :]
# Filter for Rating < .95 and Sudio is Sony
# Reuse the bitwise comparator seen earlier but with OR instead of AND.
df_mcu.loc[(df_mcu['Rating'] < .95) | (df_mcu['Studio'] == 'Sony'), :]
# Add new predicted rating to Dataframe
df_mcu['Predicted Rating'] = np.random.random(len(df_mcu))
df_mcu
# Add a new row for a new movie
new_row = pd.Series(['Captain Marvel', 2019, 'BeunaVista', np.nan, np.random.random(1)[0]], index=df_mcu.columns, name= 'mcu_21' )
df_mcu.append(new_row)
# Drop the Rating Column
df_mcu.drop('Rating', axis=1)
# Drop the 17 and 19th movies
df_mcu.drop(['mcu_15', 'mcu_17'])
# Sample Python data:
exam_data = {'Names': ['Anastasia', 'Dima', 'Katherine', 'James', 'Emily', 'Michael', 'Matthew', 'Laura', 'Kevin', 'Jonas'],
'Scores': [12.5, 9, 16.5, np.nan, 9, 20, 14.5, np.nan, 8, 19],
'Attempts': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'Qualify': ['yes', 'no', 'yes', 'no', 'no', 'yes', 'yes', 'no', 'no', 'yes']}
student_data = pd.DataFrame(exam_data)
student_data
# Select the students that qualify, based on the Qualify column value 'yes'
student_data[student_data['Qualify'] == 'yes']
# Add a new column of, Grade Level, to indicate which Grade in High school the students are in?
grade_level = ['9th', '9th', '10th', '11th', '12th', '9th', '10th', '11th', '12th', '11th']
student_data['Grade'] = grade_level
student_data
student_data.columns
# Add a new student named, Jack? (Hint: Need to use ignore_index=True)
jack = pd.Series(['Jack', 17, 2, 'yes', '9th'], index=student_data.columns)  # values ordered to match the columns
student_data.append(jack, ignore_index=True)
# Add a new column of Pass that is either 0 or 1 based on the column Qualify. (Hint: use numpy.where)
student_data['Pass'] = np.where(student_data['Qualify'] == 'yes', 1, 0)
student_data
# Read in Winter Olympic Medal Winners
winter_olympics = pd.read_csv('data/winter_olympics.csv')
winter_olympics.head()
# Get the First 3 lines of a Dataframe
winter_olympics.head(3)
# Sort Dataframe by rows in ascending order
df_mcu.sort_index(axis=0, ascending=True)
# Sort Dataframe by column in descending order
df_mcu.sort_values(by=['Rating', 'Predicted Rating'], ascending=False)
# Summary Statistics for the Dataframe
df_mcu.describe()
# Mean of the Rating and Predicted Rating Columns
df_mcu.loc[:,['Rating', 'Predicted Rating']].mean()
# Get the variance of the Rating column
df_mcu.loc[:,['Rating']].var()
# Drop rows with NaN values
df_mcu.dropna()
# File Nan in Dataframe with default value
df_mcu.fillna(0)
# File Nan in Dataframe with default value in place
df_mcu.fillna(0, inplace=True)
df_mcu
# What is the median score of the students on the exam?
student_data['Scores'].median()
# Deduct 4 points from everyone that attempted the exam 2 or more times. Replace all Nan scores with 0. (Passing is 12 points)
student_data.loc[student_data['Attempts'] >= 2, 'Scores'] -= 4
student_data
# Compute the mean. Would the class as a whole pass the test?
student_data['Scores'].mean()
# Are there any students that will fail now?
student_data[(student_data['Qualify'] == 'yes')]
# Groupby year of release and get mean Rating
df_mcu.groupby('Year').mean()
# Apply square to every value in a dataframe
test_data = np.arange(9).reshape(3,-1)
df_test = pd.DataFrame(test_data, index=['r1', 'r2', 'r3'], columns=['c1', 'c2', 'c3'])
df_test
df_test.applymap(np.square)
# Define max minus min function
def max_minus_min(x):
    return max(x) - min(x)
# Apply the function, which subtracts the min from the max, to every column
df_test.apply(max_minus_min)
# Group students by attempts and find the average score?
student_data.groupby('Attempts')['Scores'].mean()
# Group students by their pass result and report the variance in scores?
student_data.groupby('Pass')['Scores'].var()
# Import maplotlib and setup to display plots notebook
import matplotlib.pyplot as plt
%matplotlib inline
# Plot the Number of Medals in barchart by country using: plot.bar()
winter_olympics.groupby(['Country'])['Medal'].count().plot.bar()
# In the Winter olympics which country has the most Biathlon medals?
winter_sport_medal = winter_olympics.groupby(['Country','Sport'])['Medal'].count()
winter_sport_medal.head(10)
winter_sport_medal.loc[:, 'Biathlon'].sort_values(ascending=False).head()
# In the Winter olympics which country has the most Skiing medals?
winter_sport_medal.loc[:, 'Skiing'].sort_values(ascending=False).head()
# And in which event do they have the most Gold medals?
winter_sport_disc_medal = winter_olympics.groupby(['Country','Sport', 'Discipline', 'Medal'])['Medal'].count()
winter_sport_disc_medal.head()
winter_sport_disc_medal['NOR','Skiing'].sort_values(ascending=False)
# Import the Summer Olympic dataset located in ('data/summer_olypmics.csv')
summer_olympics = pd.read_csv('data/summer_olympics.csv')
#Which Olympian has the most medals?
summer_olympics.groupby(['Athlete'])['Medal'].count().sort_values(ascending=False).head()
# Which Olympian has the most Gold medals and for which Country?
summer_olympics.groupby(['Athlete','Medal','Country'])['Medal'].count().sort_values(ascending=False).head()
# Which Olympian has the most Gold medals and in which Discipline?
summer_olympics.groupby(['Athlete','Medal','Discipline'])['Medal'].count().sort_values(ascending=False).head()
# Which rows have no values and why?
summer_olympics[pd.isnull(summer_olympics).any(axis=1)]
# Import the example RNA-seq Count Data in ('data/RNAseq_count_table.txt')
rna_counts = pd.read_csv('data/RNAseq_count_table.txt', sep = '\t', index_col = 0)
rna_counts.head()
# Calculate CPM for each Sample.
rna_cpm = rna_counts.divide(rna_counts.sum(axis=0)).multiply(1000000)
rna_cpm.head()
# Which Gene has the highest average CPM?
rna_cpm.mean(axis=1).sort_values(ascending=False).head()
# What is the correlation between SRR1550986 and SRR1550987?
rna_cpm.corr()
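# A minimal follow-up (sketch): pull the single pairwise value out of the correlation matrix,
# assuming both sample columns are present in rna_cpm.
rna_cpm['SRR1550986'].corr(rna_cpm['SRR1550987'])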
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: read and assign dictionary values for computed datasets
Step2: view of first 5 rows of calculated dataset corresponding to proposal 1
Step3: assign reusable variables for functions below
Step4: assign proposal variable (string)
Step5: assign p variable as the proposal dataframe to use for many examples below
Step6: view of columns in p dataset
Step7: view of colors represented by color dictionary 'job_colors' value
Step8: assign sample employee and employee list variables for some functions
Step9: age vs. list percentage for a specific month
Step10: employees from each group holding a specific job level
Step11: integrated distribution of all jobs, all employee groups
Step12: slicing and groupby month
Step13: custom plot example, eg job count over time, standalone vs. proposal
Step14: custom color list creator demo
Step15: seaborn method of viewing a custom color list
|
<ASSISTANT_TASK:>
Python Code:
%%time
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import cm
import functions as f
import matplotlib_charting as mp
#plt.style.use('bmh')
#plt.style.use('fivethirtyeight')
sns.set_style('whitegrid')
%matplotlib inline
pd.set_option('notebook_repr_html', False)
pd.set_option('max_columns', 50)
pd.set_option('expand_frame_repr', True)
pd.set_option('max_rows', 18)
pd.set_option('display.precision', 3)
pd.options.display.width = 150
pd.options.display.max_colwidth=0
pd.set_option('display.float_format', lambda x: '%.4f' % x)
np.set_printoptions(linewidth=200, edgeitems=10)
plt.rcParams['figure.figsize']=(12.0, 9.0)
plt.rcParams['font.size']=14
plt.rcParams['axes.labelsize']=14
plt.rcParams['xtick.labelsize']=13
plt.rcParams['ytick.labelsize']=13
plt.rcParams['axes.titlesize']=16
plt.rcParams['lines.linewidth']=3
plt.rcParams['legend.fontsize']=14
plt.rcParams['legend.fancybox']=True
#%%time
#f.save_and_load_dill_folder(load_case='sample3')
#f.save_and_load_dill_folder()
%%time
ds_dict = f.load_datasets()
%%time
sdict = pd.read_pickle('dill/dict_settings.pkl')
adict = pd.read_pickle('dill/dict_attr.pkl')
cdict = pd.read_pickle('dill/dict_color.pkl')
tdict = pd.read_pickle('dill/dict_job_tables.pkl')
%%time
ds_dict['p1'].head()
%%time
enhanced_jobs = sdict['enhanced_jobs']
job_levels = sdict['num_of_job_levels']
eg_colors = cdict['eg_colors']
job_colors = cdict['job_colors']
p_dict = sdict['p_dict']
p_dict_verbose = sdict['p_dict_verbose']
job_strs = sdict['job_strs']
ret_age = sdict['ret_age']
starting_date = sdict['starting_date']
job_dict = sdict['jd']
job_levels
%%time
proposal = 'p1'
%%time
p = ds_dict[proposal]
sdict['num_of_job_levels']
p.columns
ds_dict['standalone'].columns
%%time
sns.palplot(job_colors)
%%time
# sample employee - halfway down the list from first month
first_month = p[p.mnum==0]
sample_emp = first_month.iloc[int(len(first_month) / 2)].empkey
%%time
# sample employee list - median of each employee group
sample_emp_list = []
g = p[p.mnum==0][['eg', 'empkey']].groupby('eg')
egs = pd.unique(p.eg)
for eg in egs:
sample_emp_list.append(g.get_group(eg).iloc[int(len(g.get_group(eg)) * .5)].empkey)
g =[]
%%time
mp.percent_diff_bins('p1', 'standalone', 2, kind='bar', ds_dict=ds_dict, num_display_colors=25,
#attr3='ldate', oper3='<=', val3='1998-05-31',
#attr2='s_lmonths', oper2='>=', val2=200,
#attr1='ret_mark', oper1='==', val1=1,
#cmap_pos='terrain', cmap_neg='terrain',
chart_style='ticks',
title_size=14, man_plotlim=None, invert_barh=False, legend_size=12.5)
%%time
mp.eg_attributes('p1', 'date', 'cat_order',
sdict, adict, cdict,
ds_dict=ds_dict,
mnum=0,
ret_only=True,
eg_list=[2],
chart_style='ticks',
#attr1='ldate', oper1='<=', val1='1990-12-31',
#attr2='ldate', oper2='>=', val2='1986-12-31',
q_eglist_only=False,
xquant_lines=True,
yquant_lines=True,
xsize=14, ysize=11)
%%time
mp.eg_attributes('p1', 'retdate', 'ldate',
sdict, adict, cdict,
ds_dict=ds_dict,
mnum=0,
#markersize=7,
marker_alpha=.75,
tick_size=14,
qtick_size=13,
label_size=16,
ret_only=True,
eg_list=[1,2,3],
attr1='ldate', oper1='<=', val1='1999-12-31',
chart_style='ticks',
q_eglist_only=True,
xquant_lines=True,
yquant_lines=True,
xsize=16, ysize=14)
%%time
# uncomment the two lines of arguments below for an example of attribute filtering
mp.quantile_years_in_position('p1', 'standalone', job_levels, 40,
job_strs, p_dict,
job_colors, ds_dict=ds_dict,
flip_x=True, flip_y=False,
xsize=12, ysize=16, plot_differential=True,
chart_style='ticks', grid_alpha=.35,
#attr1='ldate', oper1='<=', val1='1997-12-31',
#attr2='ldate', oper2='>=', val2='1986-12-31',
rotate=True, custom_color=False, cm_name='Set1',
normalize_yr_scale=True, gain_loss_bg=True)
%%time
mnum = 60 # month number
mp.age_vs_spcnt('p1', [1,2,3], mnum, eg_colors,
p_dict, ret_age,
ds_dict=ds_dict,
attr1='ldate', oper1='<=', val1='1997-12-31',
attr2='jnum', oper2='>=', val2=5)
%%time
jnum = 4 # job number
job_p = p[p.jnum==jnum]
mp.age_vs_spcnt(p, [1,2,3], 0, eg_colors,
p_dict, ret_age,
ds_dict=ds_dict,
size=20,
xsize=10, ysize=8,
image_dir='im')
%%time
jnum = 4 # job number
job_p = p[p.jnum==jnum]
mp.age_vs_spcnt(job_p, [1,2,3], 42, eg_colors,
p_dict, ret_age,
ds_dict=ds_dict)
%%time
mp.multiline_plot_by_emp('p1', 'jobp', 'date', sample_emp_list,
job_levels, ret_age,
eg_colors, job_strs, sdict,
adict, ds_dict=ds_dict)
%%time
mp.multiline_plot_by_emp('p1', 'mpay', 'spcnt', sample_emp_list,
job_levels, ret_age,
eg_colors, job_strs, sdict,
adict, ds_dict=ds_dict)
%%time
#samp_emps = [20011251, 10012608]
samp_emps = [10012608, 20011251]
mp.multiline_plot_by_emp('p1', 'jnum', 'date', samp_emps,
job_levels, ret_age,
eg_colors, job_strs, sdict,
adict, ds_dict=ds_dict, plot_jobp=True, xsize=14, ysize=10)
%%time
mp.multiline_plot_by_emp('p1', 'jnum', 'date', sample_emp_list,
job_levels, ret_age,
eg_colors, job_strs, sdict,
adict, chart_style='ticks', ds_dict=ds_dict)
%%time
mp.eg_attributes('p3', 'cat_order', 'ylong',
sdict, adict, cdict,
ds_dict=ds_dict,
mnum=40,
#ret_only=True,
eg_list=[1,2,3],
chart_style='ticks',
attr1='ldate', oper1='<=', val1='1999-12-31',
q_eglist_only=True,
xquant_lines=True,
yquant_lines=True,
xsize=14, ysize=11)
%%time
mp.eg_attributes('p1', 'lspcnt', 'jobp',
sdict, adict, cdict,
ds_dict=ds_dict,
mnum=60,
#ret_only=True,
eg_list=[1,2,3],
chart_style='ticks',
attr1='ldate', oper1='<=', val1='1999-12-31',
q_eglist_only=True,
xquant_lines=True,
yquant_lines=True,
xsize=14, ysize=11)
%%time
mp.eg_attributes('p1', 'spcnt', 'cat_order',
sdict, adict, cdict,
ds_dict=ds_dict,
mnum=60,
#ret_only=True,
eg_list=[1,2,3],
chart_style='ticks',
markersize=0,
linestyle='solid',
linewidth=4,
attr1='ldate', oper1='<=', val1='1999-12-31',
q_eglist_only=True,
xquant_lines=True,
yquant_lines=True,
xsize=14, ysize=11)
%%time
# if a FutureWarning is triggered, it is due to an update to matplotlib 3.0 which
# affects the seaborn violinplot function. This warning is expected to be corrected
# when the seaborn library is updated to meet the matplotlib 3.0 specifications.
mp.violinplot_by_eg(p, 's_lmonths', ret_age,
cdict,
adict, ds_dict=ds_dict,
saturation=.6,
chart_style='darkgrid',
mnum=60, scale='count')
%%time
mp.age_kde_dist('p1', eg_colors,
p_dict, ret_age,
ds_dict=ds_dict, mnum=0)
%%time
mp.eg_diff_boxplot(['p1', 'p2', 'p3'], 'standalone', [2, 3, 1],
cdict['eg_colors_lgt'],
job_levels,
job_levels + 1,
adict, ds_dict=ds_dict,
#attr1='ldate', oper1='<=', val1='1986-12-31',
measure='spcnt', comparison='',
linewidth=.75,
year_clip=2035, chart_style='darkgrid')
%%time
mp.eg_boxplot(['standalone', 'p1', 'p2', 'p3'], [1,2,3], cdict['eg_colors_lgt'], job_levels + 1,
adict,
#attr1='retdate', oper1='<=', val1='2022-12-31',
show_whiskers=True, xsize=16, ysize=8,
measure='spcnt', ds_dict=ds_dict, chart_style='darkgrid')
%%time
mp.stripplot_dist_in_category('p1', job_levels,
sdict['stripplot_full_time_pcnt'],
eg_colors,
job_colors, sdict['job_strs_dict'],
adict,
p_dict, mnum=35, ds_dict=ds_dict, bg_alpha=.15,
#attr1='age', oper1='>=', val1='55',
#attr2='ldate', oper2='<=', val2='1995-12-31',
#fur_color=[0.5, 0.5, 0.5, .5],
ysize=11, xsize=4)
df1 = ds_dict['p1']
df1[df1.mnum==40]['fur'].sum()
%%time
mp.multiline_plot_by_emp('p1', 'spcnt', 'date', [10013738, 20011150, 30010419],
job_levels, ret_age,
eg_colors, job_strs, sdict,
adict, line_alpha=.75, through_date='2031-01-01',
chart_style='ticks', ds_dict=ds_dict, plot_jobp=True)
%%time
mp.job_level_progression('p1', [20011150, 10013738, 30010419], '2031-01-31',
sdict, cdict,
eg_colors, job_colors,
ds_dict=ds_dict, job_bands_alpha=.15)
%%time
mp.job_level_progression('p1', sample_emp_list, '2030-12-31',
sdict, cdict,
eg_colors, job_colors,
ds_dict=ds_dict)
%%time
mp.differential_scatter(['p1', 'p2', 'p3'], 'standalone', 'cat_order',
[1, 2, 3], adict, cdict, p_dict, ds_dict=ds_dict, prop_order=True,
attr1='ret_mark', oper1='==', val1=1,
show_scatter=True, show_lin_reg=False,
show_mean=False, mean_len=80,
dot_size=20, lin_reg_order=12,
ylimit=False, ylim=5,
bright_bg=True,
chart_style='whitegrid')
%%time
# where is an employee located on a proposed list compared to employees from other
# groups who share the same attribute, such as longevity date?
# positive numbers mean better location
# areas are outlined by respective employee group color(s)
mp.cohort_differential('p2', 2, sdict, cdict, adict,
#attr1='age', oper1='<', val1=50,
ds_dict=ds_dict, mnum=0, measure='ldate',
sort_xax_by_measure=True, single_eg_compare=None,
compare_value='1988-12-31', bg_color=None, #'#ffffe6',
xsize=16, ysize=12)
%%time
mp.parallel(['p1', 'p2'], 'standalone', [1, 2, 3],
'spcnt', [0, 40, 80, 120, 200],
job_levels,
eg_colors, sdict, adict,
ds_dict=ds_dict, facecolor='#fffff5',
chart_style='whitegrid',
#attr1='ldate', oper1='<=', val1='1997-12-31',
left=0, xsize=5, ysize=5)
%%time
mp.job_grouping_over_time('p1', [1,2,3], job_strs,
job_colors, p_dict,
rets_only=True, ds_dict=ds_dict,
#attr1='ldate', oper1='<=', val1='1997-12-31',
time_group='A', plt_kind='bar',
display_yrs=40, legend_loc=4, xsize=11,
ysize=7)
%%time
mp.rows_of_color('p1', 62, ['jnum'], eg_colors,
job_colors, sdict, ds_dict=ds_dict,
cols=120, job_only=False, jnum=4,
cell_border=True, eg_border_color='.75', eg_list=[1,2,3],
job_border_color='.75', xsize=14, ysize=9, fur_color='c')
%%time
mp.rows_of_color('p1', 62, ['eg', 'fur'], eg_colors,
job_colors, sdict, ds_dict=ds_dict,
cols=150, job_only=False, jnum=6,
cell_border=True, eg_border_color='.3',
job_border_color='.55', xsize=15, ysize=9)
%%time
mp.rows_of_color('p1', 62, ['eg'], eg_colors,
job_colors, sdict, ds_dict=ds_dict,
cols=100, job_only=True, jnum=4,
attr1='age', oper1='>=', val1='60',
cell_border=False, eg_border_color='.6',
job_border_color='.85', xsize=15, ysize=9)
%%time
mp.rows_of_color('p1', 62, ['eg', 'fur'], eg_colors,
job_colors, sdict, ds_dict=ds_dict,
cols=100, job_only=True, jnum=4,
#attr1='ldate', oper1='<=', val1='1987-12-31',
cell_border=False, eg_border_color='.3',
job_border_color='.85', xsize=15, ysize=9)
%%time
mp.quantile_bands_over_time('p1', 3, 'lspcnt', bins=20, ds_dict=ds_dict,
year_clip=None, kind='area',
quantile_ticks=True,
quantile_alpha=.65, cm_name='nipy_spectral',
grid_alpha=.5,
alt_bg_color=False, bg_color='#faf6eb')
%%time
mp.job_transfer('p1', 'standalone', 1,
job_colors, job_levels,
job_strs, p_dict,
ds_dict=ds_dict, chart_style='ticks',
gb_period='M')
%%time
d1 = ds_dict['p1']
d2 = ds_dict['standalone']
attr = 'doh'
value = '1989-12-31'
mp.job_transfer(d1[d1[attr] <= value], d2[d2[attr] <= value],
1, job_colors, job_levels,
job_strs, p_dict,
ds_dict=ds_dict,
gb_period='M')
%%time
mp.eg_multiplot_with_cat_order('p1', 35, 'cat_order', 'spcnt',
job_strs, job_colors,
job_levels, sdict, adict, cdict,
egs=[1,2,3], ds_dict=ds_dict,
job_bands_alpha=.2, xsize=14,
plot_scatter=True, s=15, a=.7, lw=0,
tick_size=11)
mp.make_color_list()
mp.make_color_list(num_of_colors=4, cm_name_list=['Accent'])
ds_dict['p1'].columns
%%time
mp.diff_range(['p1', 'p2', 'p3'], 'standalone', 'jobp', [1, 2, 3],
adict, ds_dict=ds_dict, cm_name='Set1',
#attr1='ldate', oper1='<=', val1='1991-12-31',
#attr2='ldate', oper2='>=', val2='1985-12-31',
show_range=False, show_mean=True,
year_clip=2030, range_alpha=.3, chart_style='ticks',
tick_size=13, suptitle_size=16, title_size=16, label_size=16, legend_size=14, image_dir='im', image_format='png')
%%time
mp.job_count_charts('p1', 'standalone', sdict, eg_colors,
eg_list=[1, 2, 3], ds_dict=ds_dict,
#attr1='lname', oper1='<=', val1='jones',
plot_egs_sep=True, plot_total=True, year_max=2050)
%%time
mp.emp_quick_glance(sample_emp, 'p1', ds_dict=ds_dict)
%%time
mp.single_emp_compare(sample_emp, 'cat_order',
['p1', 'p2', 'p3', 'standalone'],
'date', job_strs, eg_colors,
p_dict, job_levels, adict,
chart_style='darkgrid',
ds_dict=ds_dict)
%%time
mp.job_time_change(['p1'], 'standalone', [1, 2, 3], job_colors,
sdict['job_strs_dict'], job_levels, adict,
'lspcnt', ds_dict=ds_dict, chart_style='ticks',
#attr1='ldate', oper1='<=', val1='1987-12-31',
bg_color='#ffffff',
xsize=12, ysize=8, experimental=False)
%%time
mp.group_average_and_median('p1', 'standalone', [1, 2, 3],
eg_colors, 'cat_order', job_levels,
sdict, adict, chart_style='ticks',
attr1='ldate', oper1='<=', val1='1989-12-31',
show_full_yscale=False,
ds_dict=ds_dict)
%%time
mp.group_average_and_median('p1', 'standalone', [1, 2, 3],
eg_colors, 'jnum', job_levels,
sdict, adict,
ds_dict=ds_dict,
chart_style='ticks',
attr1='age', oper1='>', val1='50')
%%time
# filter entire dataset to include only those employees who were 50 or older during month 0
job_slice = mp.slice_ds_by_filtered_index('p1', ds_dict=ds_dict, mnum=0, attr='age', attr_oper='>=', attr_val=50)
%%time
# use the filtered dataset from the cell above as input (job_slice).
# this chart shows average job category order number for employees who were at least 50 at month 0 and
# who have a longevity date in 1999 or earlier...
mp.group_average_and_median(job_slice, 'standalone', [1, 2, 3],
eg_colors, 'cat_order', job_levels,
sdict, adict,
attr1='ldate', oper1='<=', val1='1999-12-31',
ds_dict=ds_dict, job_labels=False)
%%time
mp.stripplot_eg_density('p1', 100, eg_colors,
mnum_order=True,
dot_size=2.6,
#attr1='age', oper1='>=', val1='62',
ds_dict=ds_dict, xsize=4)
p.columns
%%time
mp.job_count_bands(['standalone', 'p1', 'p2', 'p3'], [1, 2, 3], job_colors,
sdict, ds_dict=ds_dict, emp_list=sample_emp_list,
#attr1='ldate', oper1='<=', val1='1990-12-31',
chart_style='white', xsize=13, ysize=8)
%%time
mp.quantile_bands_over_time(p, 1, 'lspcnt', bins=40, quantile_ticks=False, cm_name='Set2')
%%time
# cat_order progression for the median of each 2.5% segment (40 quantiles)
# of employee group 2 (standalone dataset)
mp.quantile_groupby(['standalone'], [2], 'cat_order', 40,
eg_colors, job_colors, sdict, adict, job_dict,
ds_dict=ds_dict, show_job_bands=False,
num_cat_order_yticks=20,
custom_color=False,
through_date='2037-12-31')
%%time
# same as above, but as affected by integration proposal p1,
# with a delayed implementation date
mp.quantile_groupby(['p1'], [2], 'cat_order', 40,
eg_colors, job_colors, sdict, adict,
job_dict,
ds_dict=ds_dict,
num_cat_order_yticks=20,
show_job_bands=False, custom_color=False,
through_date='2037-12-31')
%%time
# same as above, but with the addition of job bands
mp.quantile_groupby(['p1'], [2], 'cat_order', 200,
eg_colors, job_colors, sdict, adict,
job_dict,
ds_dict=ds_dict,
num_cat_order_yticks=20,
show_job_bands=True, custom_color=False,
through_date='2037-12-31')
%%time
# cat_order progression for the median of each .5% segment (200 quantiles)
# of employee group 2 (p1 dataset)
# custom color example
mp.quantile_groupby(['p1'], [2], 'cat_order', 200,
eg_colors, job_colors, sdict, adict,
job_dict,
groupby_method='median',
ds_dict=ds_dict, show_job_bands=True,
line_width=1, custom_color=True, chart_style='whitegrid',
line_alpha=.9,
num_cat_order_yticks=20,
show_grid=True, cm_name='tab20c',
through_date='2037-12-31')
%%time
# cat_order progression comparison for the median of each 10% segment (10 quantiles)
# of employee group 2 for standalone (solid lines) and p1 (dashed lines) datasets.
# custom color example
mp.quantile_groupby(['standalone', 'p1'], [2], 'cat_order', 10,
eg_colors, job_colors, sdict, adict,
job_dict,
groupby_method='median',
ds_dict=ds_dict, show_job_bands=False,
num_cat_order_yticks=20,
line_width=2.5, custom_color=True, chart_style='whitegrid',
line_alpha=1,
show_grid=True, cm_name='Dark2',
through_date='2037-12-31')
%%time
eg_num = 2
fur_color = '#404040'
j_colors = job_colors[:]
j_colors[-1] = fur_color
stand = ds_dict['standalone']
p = ds_dict['p1']
base_jobs = stand[stand.eg==eg_num].groupby(['date', 'jnum']).size().astype(int).unstack()
p_jobs = p[p.eg==eg_num].groupby(['date', 'jnum']).size().astype(int).unstack()
base_jobs_cols = list(base_jobs.columns)
p_jobs_cols = list(p_jobs.columns)
#-----------------------------------------------------------------------
# Baseline job counts chart:
base_jobs.plot(color=list(j_colors[i - 1] for i in base_jobs_cols))
fig = plt.gcf()
fig.set_size_inches(12, 7)
ax = plt.gca()
ax.margins(x=0, y=0)
# Shrink current axis by 20%
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
# legend position to right
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_title('baseline job counts, eg ' + str(eg_num))
#------------------------------------------------------------------------
# Proposal job counts chart:
p_jobs.plot(color=list(j_colors[i - 1] for i in p_jobs_cols))
fig = plt.gcf()
fig.set_size_inches(12, 7)
ax = plt.gca()
ax.margins(x=0, y=0)
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_title('proposal job counts, eg ' + str(eg_num))
plt.show()
%%time
# get all available colormap names
sorted(m for m in plt.cm.datad if not m.endswith("_r"))
# use a colormap name(s) within the cm_name_list variable
# the make_color_list function can do several things, here it returns color lists...
mp.make_color_list(num_of_colors=10, cm_name_list=['Paired', 'cool'])
# same colormap inputs, this time with seaborn palplot output.
# the function produces lists of colors which may then be used for plotting.
mp.make_color_list(num_of_colors=10, cm_name_list=['Paired', 'cool'], return_list=False)
sns.palplot([(0.65, 0.80, 0.89, 1.0), (0.12, 0.47, 0.70, 1.0), (0.69, 0.87, 0.54, 1.0)])
sns.palplot(['r', 'g', 'b'])
eg_color_dict = {1: 'k', 2: 'b', 3: 'red'}
prop_dict = {0: 'standalone', 1: 'p1', 2: 'p2', 3: 'p3', 4: 'edit', 5: 'hybrid'}
prop = 1
this_p = ds_dict[prop_dict[prop]].copy()
%%time
this_p['start_age'] = f.make_starting_val_column(this_p, 'age', inplace=False)
%%time
f.make_eg_pcnt_column(this_p)
this_p.columns
eg_list = [2, 3]
this_p.start_age.max()
tp = this_p[this_p.retdate <= '2025-12-31']
fig, ax = plt.subplots()
m1 = 'start_age'
m2 = 'jobp'
for eg in eg_list:
filt_df = tp[(tp.eg==eg) & (tp.ret_mark==1)][[m1, m2]]
filt_df.plot(x=m1, y=m2,
kind='scatter', ax=ax,
color=eg_color_dict[eg],
alpha=.15, s=60)
ax.invert_yaxis()
if m2 in ['jnum', 'jobp']:
max_jnum = min(int(filt_df[m2].max()) + 2, 18)
ax.set_yticks(np.arange(1, max_jnum, 1))
#ax.invert_xaxis()
tp = this_p[this_p.retdate <= '2045-12-31']
fig, ax = plt.subplots()
m1 = 'eg_start_pcnt'
m2 = 'cat_order'
for eg in eg_list:
tp[(tp.eg==eg) & (tp.ret_mark==1)][[m1, m2]].plot(x=m1, y=m2,
kind='scatter', ax=ax,
color=eg_color_dict[eg],
alpha=.15, s=60)
ax.invert_yaxis()
ax.invert_xaxis()
np.unique(this_p[this_p.mnum==80]['jnum'], return_counts=True)
mp.cond_test('p1', [2], enhanced_jobs, job_colors, job_dict, [1, 3], ds_dict=ds_dict, max_mnum=110)
#uncomment lines below to run, this one takes a long time to calculate...
#sns.swarmplot(x='eg', y='cat_order', data=p[p.mnum==0]);
#plt.gca().invert_yaxis()
#plt.gcf().set_size_inches(10, 10)
chart_style = 'whitegrid'
with sns.axes_style(chart_style):
fig, ax = plt.subplots()
sns.stripplot(x='eg', y='cat_order', data=p[(p.ret_mark==1) & (p.ldate <= '1987-12-31') & (p.retdate > '2017-12-31')], jitter=.5, size=5, alpha=.65, ax=ax);
fig.set_size_inches(10, 10)
ax.invert_yaxis()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For this homework we are going to use new data downloaded in the last week. The file names are
Step2: So, for example, the file name for Barack Obama is BarackObama_tweets.txt. The first step is to rewrite the above loop to download all five files and save them to your local directory.
Step3: Step 2
Step4: Let's test this on a simple tweet to make sure everything is working. (Note
Step5: Step 3
Step6: I assume you got a different answer (maybe 0.18181818181812)? Take a moment and see if you can figure out why.
Step7: That is interesting. The original tweet had 10 words (what we would expect) and the tweet with the doublespace has one extra word? Where did that come from? Well, let's look by printing the entire list instead of just the length (We can do this because the list is short).
Step8: See the empty string represented by the two quotations? It seems that the split() function is adding a "word" which is completely empty between the double spaces. If you think about it, it makes sense, but this can cause trouble. The same problem is occuring in both the positive_words and negative_words lists. You can check this with the following command.
Step9: This means that anytime there is a doublespace in a tweet that will be counted as both a positive and negative word. This may not be a problem for the "total" emotion (since the net is zero) - however, the extra words impact the total words in the tweet. The best way to avoid this problem is to just remove the doublespaces. For example
Step10: Fix your above function to take into account the empty strings. Make sure you consider all cases
Step11: Test your code here and make sure it returns 0.2 for both tests
Step12: Step 4
Step13: Assuming we wrote this function correctly we can use it to test the tweets in an entire file. For example, the following should work
Step14: Step 5
Step15: Step 6
Step16: Step 7
Step17: Question 3
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from string import punctuation
import urllib.request
files=['negative.txt','positive.txt']
path='http://www.unc.edu/~ncaren/haphazard/'
for file_name in files:
urllib.request.urlretrieve(path+file_name,file_name)
pos_sent = open("positive.txt").read()
positive_words=pos_sent.split('\n')
neg_sent = open("negative.txt").read()
negative_words=neg_sent.split('\n')
twitter_names = [ 'BarackObama', 'realDonaldTrump','HillaryClinton', 'BernieSanders', 'tedcruz']
# Write your code to download the tweet files from each of the politicians.
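# One possible answer (sketch). It assumes the tweet files live at the same base path used
# above and are named <TwitterName>_tweets.txt as described.
for name in twitter_names:
    urllib.request.urlretrieve(path + name + '_tweets.txt', name + '_tweets.txt')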
def tweetcount(tweet, positive_words, negative_words):
#Put your code here and modify the return statement
return 0.5
tweetcount('We have some delightful new food in the cafeteria. Awesome!!!', positive_words, negative_words)
tweetcount('We have some delightful new food in the cafeteria.  Awesome!!!', positive_words, negative_words)  # same tweet, but with a double space
tweet = 'We have some delightful new food in the cafeteria. Awesome!!!';
tweet_processed=tweet.lower()
for p in punctuation:
tweet_processed=tweet_processed.replace(p,'')
words=tweet_processed.split(' ')
print(len(words))
tweet = 'We have some delightful new food in the cafeteria.  Awesome!!!'  # note the double space
tweet_processed=tweet.lower()
for p in punctuation:
tweet_processed=tweet_processed.replace(p,'')
words=tweet_processed.split(' ')
print(len(words))
tweet = 'We have some delightful new food in the cafeteria.  Awesome!!!'  # double-space version again
tweet_processed=tweet.lower()
for p in punctuation:
tweet_processed=tweet_processed.replace(p,'')
words=tweet_processed.split(' ')
print(words)
tweet = 'We have some delightful new food in the cafeteria.  Awesome!!!'  # double-space version, so words will contain ''
tweet_processed=tweet.lower()
for p in punctuation:
tweet_processed=tweet_processed.replace(p,'')
words=tweet_processed.split(' ')
print(words)
if '' in positive_words:
    print('empty string in positive words')
if '' in negative_words:
    print('empty string in negative words')
words.remove('')
print(words)
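# An alternative sketch: split() with no argument collapses any run of whitespace,
# so it never produces empty-string "words" in the first place.
words_no_blanks = tweet_processed.split()
print(words_no_blanks)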
#Put your new function code here.
def tweetcount(tweet, positive_words, negative_words):
#Put your code here
return 0.5
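# A sketch of one possible implementation (hypothetical name, not the official answer key):
# score a tweet as (positive matches - negative matches) / number of real words,
# skipping the empty strings that repeated spaces produce.
def tweetcount_example(tweet, positive_words, negative_words):
    tweet_processed = tweet.lower()
    for p in punctuation:
        tweet_processed = tweet_processed.replace(p, '')
    words = [w for w in tweet_processed.split(' ') if w != '']
    if not words:
        return 0
    positives = sum(1 for w in words if w in positive_words)
    negatives = sum(1 for w in words if w in negative_words)
    return (positives - negatives) / len(words)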
tweetcount('We have some delightful new food in the cafeteria. Awesome!!!', positive_words, negative_words)
tweetcount('We have some delightful new food in the cafeteria.  Awesome!!!', positive_words, negative_words)  # double-space version should also return 0.2
def average_tweet(tweets_list, positive_words, negative_words):
total = 0
## put your code here. Note that this is not a trick question.
## You may be able to do this in just a couple lines of code.
return total / len(tweets_list)
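# A sketch of one possible answer (hypothetical name), assuming tweetcount above has been
# completed so that it returns a real score for each tweet:
def average_tweet_example(tweets_list, positive_words, negative_words):
    total = 0
    for tweet in tweets_list:
        total += tweetcount(tweet, positive_words, negative_words)
    return total / len(tweets_list)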
tweets = open('BarackObama_tweets.txt').read()
tweets = tweets.split('\n')
average_tweet(tweets, positive_words, negative_words)
average_tweets = []
# Put your loop here. Again, if we call the functions, this code should be fairly short.
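# Sketch of the loop (one possible answer), assuming the five tweet files downloaded
# earlier are in the working directory and average_tweet has been completed:
for name in twitter_names:
    tweets_for_name = open(name + '_tweets.txt').read().split('\n')
    average_tweets.append(average_tweet(tweets_for_name, positive_words, negative_words))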
plt.bar(range(len(average_tweets)), average_tweets);
#Put your code here
fig,ax = plt.subplots(1, 1, figsize=(20, 10))
# Center labels
barlist = plt.bar(range(len(average_tweets)), average_tweets, align='center');
plt.xticks(range(len(average_tweets)), twitter_names, fontsize=20);
plt.yticks(fontsize=20)
plt.xlabel('Candidate', fontsize=24)
plt.ylabel('Emotion', fontsize=24)
barlist[1].set_color('r')
barlist[4].set_color('r')
# Put your code here. Add additional cells as necessary!
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting HYCOM Global Ocean Forecast Data
Step2: Let's choose a location near Oahu, Hawaii...
Step3: Important! You'll need to replace apikey below with your actual Planet OS API key, which you'll find on the Planet OS account settings page.
Step4: Show the available variables and their contexts...
Step5: Now let's extract data for all variables and create a different plot for each...
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import dateutil.parser
import datetime
from urllib.request import urlopen, Request
import simplejson as json
def extract_reference_time(API_data_loc):
    """Find reference time that corresponds to most complete forecast. Should be the earliest value."""
reftimes = set()
for i in API_data_loc['entries']:
reftimes.update([i['axes']['reftime']])
reftimes=list(reftimes)
if len(reftimes)>1:
reftime = reftimes[0] if dateutil.parser.parse(reftimes[0])<dateutil.parser.parse(reftimes[1]) else reftimes[1]
else:
reftime = reftimes[0]
return reftime
location = 'Hawaii Oahu'
if location == 'Est':
longitude = 24.+45./60
latitude = 59+25/60.
elif location == 'Au':
longitude = 149. + 7./60
latitude = -35.-18./60
elif location == "Hawaii Oahu":
latitude = 21.205
longitude = -158.35
elif location == 'Somewhere':
    longitude = -20.
    latitude = 10.
apikey = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
API_url = "http://api.planetos.com/v1/datasets/hycom_glbu0.08_91.2_global_0.08d/point?lon={0}&lat={1}&count=10000&verbose=false&apikey={2}".format(longitude,latitude,apikey)
request = Request(API_url)
response = urlopen(request)
API_data = json.loads(response.read())
varlist = []
print("{0:<50} {1}".format("Variable","Context"))
print()
for k,v in set([(j,i['context']) for i in API_data['entries'] for j in i['data'].keys()]):
print("{0:<50} {1}".format(k,v))
varlist.append(k)
reftime = extract_reference_time(API_data)
vardict = {}
for i in varlist:
vardict['time_'+i]=[]
vardict['data_'+i]=[]
for i in API_data['entries']:
#print(i['context'])
reftime = extract_reference_time(API_data)
for j in i['data']:
if reftime == i['axes']['reftime']:
if j != 'surf_el':
if i['axes']['z'] < 1.:
vardict['data_'+j].append(i['data'][j])
vardict['time_'+j].append(dateutil.parser.parse(i['axes']['time']))
else:
vardict['data_'+j].append(i['data'][j])
vardict['time_'+j].append(dateutil.parser.parse(i['axes']['time']))
for i in varlist:
fig = plt.figure(figsize=(15,3))
plt.title(i)
ax = fig.add_subplot(111)
plt.plot(vardict['time_'+i],vardict['data_'+i],color='r')
ax.set_ylabel(i)
print(API_data['entries'][0]['data'])
print(API_data['entries'][0]['axes'])
print(API_data['entries'][0]['context'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import Dataset
Step2: See Data
Step3: Looks good now.
Step4: Vectorise the features so that text-based (categorical) columns are transformed into one-hot encoded features
Step5: Shows that any campaign after 20 is useless. Hence no customer must be approached more than 20 times.
Step6: Optimize using campaign
Step7: The market-acceptable conversion ratio is 2-10%, so it is fair to say that any call after the 6th call is not acceptable, since it has a conversion ratio of less than 1.6%
Step8: No extraordinary patterns are visible within individual age groups compared to the numbers over all ages
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.feature_extraction import DictVectorizer
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from pandas.plotting import scatter_matrix
data = pd.read_csv("bank-additional-full.csv")
data = pd.read_csv("bank-additional-full.csv", sep=';')
data.head()
data_dict = data.T.to_dict().values()
vec = DictVectorizer()
signal_array = vec.fit_transform(data_dict).toarray()
feature_names = vec.get_feature_names()
# signal_array = data.as_matrix() #spits out a numpy matrix
# feature_names = list(data)
df = pd.DataFrame(signal_array,columns=feature_names)
df.head()
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
X = signal_array[:,:-2]
X = np.hstack((X[:,:14],X[:,15:]))
y = signal_array[:,-1]
# Build a forest and compute the feature importances
forest = RandomForestClassifier(n_estimators=250,
random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %s (%f)" % (f + 1, feature_names[indices[f]], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
ax = sns.regplot(x="age", y="y=yes", order=3, data=df, truncate=True)
ax = sns.regplot(x="campaign", y="y=yes", order=1, data=df, truncate=True)
df.loc[(df['campaign'] >15) & (df['y=yes']==1)]
ax = sns.regplot(x="emp.var.rate", y="y=yes", order=1, data=df, truncate=True)
ax = sns.regplot(x="nr.employed", y="y=yes", order=1, data=df, truncate=True)
ax = sns.regplot(x="age", y="y=yes", order=1, data=df, truncate=True)
ax = sns.regplot(x="duration", y="y=yes", order=1, data=df, truncate=True)
ax = sns.regplot(x="contact=telephone", y="y=yes", order=1, data=df, truncate=True)
ax = sns.regplot(x="month=sep", y="y=yes", order=1, data=df, truncate=True)
ax = sns.regplot(x="cons.conf.idx", y="y=yes", order=1, data=df, truncate=True)
ax = sns.regplot(x="poutcome=nonexistent", y="y=yes", order=1, data=df, truncate=True)
ax = sns.regplot(x="education=unknown", y="y=yes", order=1, data=df, truncate=True)
ax = sns.regplot(x="euribor3m", y="y=yes", order=1, data=df, truncate=True)
ax = sns.regplot(x="housing=unknown", y="y=yes", order=1, data=df, truncate=True)
ax = sns.regplot(x="housing=yes", y="y=yes", order=1, data=df, truncate=True)
# Total Conversion ratio
sum(df['y=yes'])/sum(df['campaign'])
# Now let's see the efficiency of every additional call
print("Nth Call \t Efficiency")
for i in range(1,30):
goo = sum(df.loc[df['campaign']==i]['y=yes']) / float(df.loc[df['campaign'] >= i].shape[0])
print (str((i))+" \t\t "+str(goo))
print("For age upto 30")
print "Nth Call \t Efficiency"
for i in range(1,30):
num = float(df[(df['age'] <= 30) & (df['campaign']==i) & (df['y=yes']==1)].shape[0])
den = 1+float(df[(df['age'] <= 30) & (df['campaign'] >= i)].shape[0])
print (str((i))+" \t\t "+str(num/den))
print("For age between 30-40")
print "Nth Call \t Efficiency"
for i in range(1,30):
num = float(df[(df['age'] <= 40) & (df['age'] > 30) & (df['campaign']==i) & (df['y=yes']==1)].shape[0])
den = 1+float(df[(df['age'] <= 40) & (df['age'] > 30) & (df['campaign'] >= i)].shape[0])
print (str((i))+" \t\t "+str(num/den))
print("For age between 40-50")
print "Nth Call \t Efficiency"
for i in range(1,30):
num = float(df[(df['age'] <= 50) & (df['age'] > 40) & (df['campaign']==i) & (df['y=yes']==1)].shape[0])
den = 1+float(df[(df['age'] <= 50) & (df['age'] > 40) & (df['campaign'] >= i)].shape[0])
print (str((i))+" \t\t "+str(num/den))
print("For age between 50-60")
print "Nth Call \t Efficiency"
for i in range(1,30):
num = float(df[(df['age'] <= 60) & (df['age'] > 50) & (df['campaign']==i) & (df['y=yes']==1)].shape[0])
den = 1+float(df[(df['age'] <= 60) & (df['age'] > 50) & (df['campaign'] >= i)].shape[0])
print (str((i))+" \t\t "+str(num/den))
print("For age above 60")
print "Nth Call \t Efficiency"
for i in range(1,30):
num = float(df[(df['age'] > 60) & (df['campaign']==i) & (df['y=yes']==1)].shape[0])
den = float(df[(df['age'] > 60) & (df['campaign'] >= i)].shape[0])+1
print (str((i))+" \t\t "+str(num/den))
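# A more compact equivalent of the repeated per-age-group blocks above (sketch only;
# the (60, 200) pair stands in for "above 60"):
def efficiency_by_age(frame, lo, hi, max_call=30):
    in_group = (frame['age'] > lo) & (frame['age'] <= hi)
    for i in range(1, max_call):
        num = float(frame[in_group & (frame['campaign'] == i) & (frame['y=yes'] == 1)].shape[0])
        den = 1 + float(frame[in_group & (frame['campaign'] >= i)].shape[0])
        print(str(i) + " \t\t " + str(num / den))

for lo, hi in [(0, 30), (30, 40), (40, 50), (50, 60), (60, 200)]:
    print("For ages " + str(lo) + "-" + str(hi))
    efficiency_by_age(df, lo, hi)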
# Calculate how many calls were made in total
total_calls = sum(df['campaign'])
print(total_calls)
# Calculate how many calls were made after the 6th call
extra_calls = sum(df[df['campaign']>6]['campaign']) - 6*df[df['campaign']>6].shape[0]
print(extra_calls)
# Calculate reduction in marketing cost
reduction=100*extra_calls/total_calls
print(reduction)
total_sales=float(df[df['y=yes']==1].shape[0])
print(total_sales)
less_costly_sales=float(df[(df['campaign'] <= 6) & (df['y=yes']==1)].shape[0])
print(less_costly_sales)
sales_percent=100*less_costly_sales/total_sales
print(sales_percent)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Access one particular element
Step2: The two notations look identical since they return the same result. Nevertheless, mat[2][3] creates a temporary array and then extracts a single element. The elements are not copied, but an intermediate object is created.
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import numpy
mat = numpy.zeros((5, 5))
for i in range(mat.shape[0]):
for j in range(mat.shape[1]):
mat[i, j] = i * 10 + j
mat
mat[2, 3], mat[2][3]
%timeit mat[2, 3]
%timeit mat[2][3]
mat[2]
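# Quick check (sketch): mat[2] is itself a full ndarray row, the intermediate object
# mentioned in the explanation above.
type(mat[2]), mat[2].shape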
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <div class="alert alert-info" style="font-size
Step2: Reversing this operation, is reset_index
Step3: Selecting data based on the index
Step4: But the row or column indexer can also be a list, slice, boolean array (see next section), ..
Step5: <div class="alert alert-danger">
Step6: The different indexing methods can also be used to assign data
Step7: <div class="alert alert-info" style="font-size
Step8: <div class="alert alert-success">
Step9: <div class="alert alert-success">
Step10: <div class="alert alert-success">
Step11: <div class="alert alert-success">
Step12: The next exercise uses the titanic data set
Step13: <div class="alert alert-success">
Step14: We will later see an easier way to calculate both averages at the same time with groupby.
Step15: Pitfall
Step16: When updating values in a DataFrame, you can run into the infamous "SettingWithCopyWarning" and issues with chained indexing.
Step17: However, we get a warning, and we can also see that the original dataframe did not change
Step18: The warning message explains that we should use .loc[row_indexer,col_indexer] = value instead. That is what we just learned in this notebook, so we can do
Step19: And now the dataframe actually changed
Step20: To explain why the original df[df['population'] > 50]['population'] = 50 didn't work, we can do the "chained indexing" in two explicit steps
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
# redefining the example dataframe
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
countries = countries.set_index('country')
countries
countries.reset_index('country')
countries.loc['Germany', 'area']
countries.loc['France':'Germany', ['area', 'population']]
countries.iloc[0:2,1:3]
countries2 = countries.copy()
countries2.loc['Belgium':'Germany', 'population'] = 10
countries2
countries['density'] = countries['population']*1000000 / countries['area']
countries.loc[countries['density'] > 300, ['capital', 'population']]
countries['density_ratio'] = countries['density'] / countries['density'].mean()
countries
countries.loc['United Kingdom', 'capital'] = 'Cambridge'
countries
countries[(countries['density'] > 100) & (countries['density'] < 300)]
df = pd.read_csv("data/titanic.csv")
df.head()
df.loc[df['Sex'] == 'male', 'Age'].mean()
df.loc[df['Sex'] == 'female', 'Age'].mean()
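# The groupby shortcut mentioned in the notebook text: both averages at once
df.groupby('Sex')['Age'].mean()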
population = countries['population']
s1 = population[['Belgium', 'France']]
s2 = population[['France', 'Germany']]
s1
s2
s1 + s2
df = countries.copy()
df[df['population'] > 50]['population'] = 50
df
df.loc[df['population'] > 50, 'population'] = 50
df
temp = df[df['population'] > 50]
temp['population'] = 50
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'hadgem3-gc31-hh', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What we got was the annual trend
Step2: The west side is congested in the morning, while the east side is congested in the evening.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
from module_code.data import get_fremont_data
df = get_fremont_data()
df.head()
df.resample('W').sum().plot() # ugly looking graphs. Change to seaborn.
# resample daily and find the rolling sum of 365 days.
ax = df.resample('D').sum().rolling(365).sum().plot()
ax = df.resample('D').sum().rolling(365).sum().plot()
ax.set_ylim(0, None)
df.groupby(df.index.time).mean().plot()
pivoted = df.pivot_table('Total', index=df.index.time, columns=df.index.date)
pivoted.iloc[:5,:5]
pivoted.plot(legend = False, alpha = 0.01)
get_fremont_data??
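# `get_fremont_data??` above displays the helper's source, which is not reproduced here.
# Purely as a hypothetical sketch (the URL, column names, and function body are
# assumptions, not taken from module_code), the helper presumably does something like:
import os
from urllib.request import urlretrieve
import pandas as pd
FREMONT_URL = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'  # assumed endpoint
def get_fremont_data_sketch(filename='Fremont.csv', url=FREMONT_URL, force_download=False):
    Download (if needed), cache, and tidy the Fremont bridge bicycle-counter data.
    if force_download or not os.path.exists(filename):
        urlretrieve(url, filename)
    data = pd.read_csv(filename, index_col='Date', parse_dates=True)
    data.columns = ['West', 'East']              # assumed short names for the two sidewalks
    data['Total'] = data['West'] + data['East']  # 'Total' is the column the notebook pivots on
    return data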
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After Objax is installed, you can import all necessary modules
Step2: Tensors
Step3: From a NumPy array
Step4: From another JAX tensor
Step5: Using ones or zeros
Step6: As a result of a mathematical operation performed on other tensors
Step7: Tensor Properties
Step8: Converting tensors to numpy array
Step9: Tensors are immutable
Step10: Instead of updating an existing tensor, a new tensor should be created with updated elements. Updates of individual tensor elements is done using
Step11: More details about per-element updates of tensors can be found in JAX documentation.
Step12: There are multiple primitives for doing so
Step13: Objax Variables and Modules
Step14: Modules keep track of all variables they own, including variables in sub-modules. The .vars() method lists all the module's variables.
Step15: If the __call__ method of the module takes tensors as input and outputs tensors, then it can act as a mathematical function. In the general case __call__ can be a multivariate vector-valued function.
Step16: The way jn.dot works allows us to run code on 2D tensors as well. In this case SimpleModule will treat the input as a batch of vectors, perform the dot product on each of them and return a vector with the results
Step17: For comparison, here is the result of calling module m on each row of tensor x
Step18: How to compute gradients
Step19: Then we create an objax.GradValues module which computes the gradients of loss_fn. We need to pass the function itself to the constructor of objax.GradValues as well as a VarCollection with the variables that loss_fn depends on
Step20: gv is a module which returns the gradients of loss_fn and the values of loss_fn for the given input
Step21: In the example above, grads is a list of gradients with respect to all variables from module_vars. The order of gradients in the grads list is the same as the order of corresponding variables in module_vars. So grads[0] is the gradient of the function w.r.t. m.v1 and grads[1] is the gradient w.r.t. m.v2.
Step22: Objax.Jit can compile not only modules, but also functions and callables. In this case a variable collection should be passed to objax.Jit
Step23: There is no need to use JIT if you only need to compute a single JAX operation. However JIT can give significant speedups when multiple Objax/JAX operations are chained together. The next tutorial will show examples of how JIT is used in practice.
|
<ASSISTANT_TASK:>
Python Code:
%pip --quiet install objax
import jax.numpy as jn
import numpy as np
import objax
# Providing explicit values
jn.array([[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0]])
arr = np.array([1.0, 2.0, 3.0])
jn.array(arr)
another_tensor = jn.array([[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0]])
jn.array(another_tensor)
jn.ones((3, 4))
jn.zeros((4, 5))
t1 = jn.array([[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0]])
t2 = jn.ones(t1.shape) * 3
t1 + t2
t = jn.array([[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0]])
print('Number of dimensions: ', t.ndim)
print('Shape: ', t.shape)
print('Data type: ', t.dtype)
np.array(t)
print('Original tensor t:\n', t)
try:
t[0, 0] = -5.0 # This line will fail
except Exception as e:
print(f'Exception {e}')
print('Tensor t after failed attempt to update:\n', t)
import jax.ops
print('Original tensor t:\n', t)
new_t = jax.ops.index_update(t, jax.ops.index[0, 0], -5.0)
print('Tensor t after update stays the same:\n', t)
print('Tensor new_t has updated value:\n', new_t)
x = objax.random.normal((3, 4))
print(x)
print('Random integers:', objax.random.randint((4,), low=0, high=10))
print('Random normal:', objax.random.normal((4,), mean=1.0, stddev=2.0))
print('Random truncated normal: ', objax.random.truncated_normal((4,), stddev=2.0))
print('Random uniform: ', objax.random.uniform((4,)))
class SimpleModule(objax.Module):
def __init__(self, length):
self.v1 = objax.TrainVar(objax.random.normal((length,)))
self.v2 = objax.TrainVar(jn.ones((2,)))
def __call__(self, x):
return jn.dot(x, self.v1)
m = SimpleModule(3)
module_vars = m.vars()
print('type(module_vars): ', type(module_vars))
print('isinstance(module_vars, dict): ', isinstance(module_vars, dict))
print()
print('Variable names and shapes:')
print(module_vars)
print()
print('Variable names and values:')
for k, v in module_vars.items():
print(f'{k} {v.value}')
x = jn.ones((3,))
y = m(x)
print('Input: ', x)
print('Output: ', y)
x = jn.array([[1., 1., 1.],
[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
y = m(x)
print('Input:\n', x)
print('Output:\n', y)
print('Sequentially calling module on each row of 2D tensor:')
for idx in range(x.shape[0]):
row_value = x[idx]
out_value = m(row_value)
print(f'm( {row_value} ) = {out_value}')
def loss_fn(x):
return m(x).sum()
print('loss_fn(x) = ', loss_fn(x))
# Construct a module which computes gradients
gv = objax.GradValues(loss_fn, module_vars)
# gv returns both gradients and values of original function
grads, value = gv(x)
print('Gradients:')
for g, var_name in zip(grads, module_vars.keys()):
print(g, ' w.r.t. ', var_name)
print()
print('Value: ', value)
jit_m = objax.Jit(m)
y = jit_m(x)
print('Input:\n', x)
print('Output:\n', y)
def loss_fn(x, y):
return ((m(x) - y) ** 2).sum()
jit_loss_fn = objax.Jit(loss_fn, module_vars)
x = objax.random.normal((2, 3))
y = jn.array((-1.0, 1.0))
print('x:\n ', x)
print('y:\n', y)
print('loss_fn(x, y): ', loss_fn(x, y))
print('jit_loss_fn(x, y): ', jit_loss_fn(x, y))
x = objax.random.normal((100, 3))
# gv is a module define above which compute gradients
jit_gv = objax.Jit(gv)
print('Timing for jit_gv:')
%timeit jit_gv(x)
print('Timing for gv:')
%timeit gv(x)
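# The gradients computed by gv line up with the variables in module_vars, as noted in the
# text. As an illustrative extension (not part of the original tutorial; it assumes the
# standard objax.optimizer.SGD API and picks an arbitrary learning rate of 0.01), those
# gradients can drive a training step:
opt = objax.optimizer.SGD(module_vars)
def train_op(x):
    grads, value = gv(x)   # gradients w.r.t. m.v1 and m.v2, plus the loss value
    opt(0.01, grads)       # in-place SGD update of the TrainVars
    return value
# JIT-compile the whole step; the optimizer has state, so include its variables too.
train_op = objax.Jit(train_op, gv.vars() + opt.vars())
for step in range(3):
    print('step', step, 'loss:', train_op(x))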
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 13.3. External Mixture
Step59: 14. Optical Radiative Properties --> Radiative Scheme
Step60: 14.2. Shortwave Bands
Step61: 14.3. Longwave Bands
Step62: 15. Optical Radiative Properties --> Cloud Interactions
Step63: 15.2. Twomey
Step64: 15.3. Twomey Minimum Ccn
Step65: 15.4. Drizzle
Step66: 15.5. Cloud Lifetime
Step67: 15.6. Longwave Bands
Step68: 16. Model
Step69: 16.2. Processes
Step70: 16.3. Coupling
Step71: 16.4. Gas Phase Precursors
Step72: 16.5. Scheme Type
Step73: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-am4', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
DOC.set_value("whole atmosphere")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Bulk aerosol model")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
DOC.set_value("Other: 3d mass/volume mixing ratio for aerosols")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(16)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
DOC.set_value("Other: uses atmosphericchemistry time stepping")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("N/A")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("N/A")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
DOC.set_value("Advection (horizontal)")
DOC.set_value("Advection (vertical)")
DOC.set_value("Ageing")
DOC.set_value("Dry deposition")
DOC.set_value("Heterogeneous chemistry")
DOC.set_value("Oxidation (gas phase)")
DOC.set_value("Oxidation (in cloud)")
DOC.set_value("Sedimentation")
DOC.set_value("Wet deposition (impaction scavenging)")
DOC.set_value("Wet deposition (nucleation scavenging)")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
DOC.set_value("Clouds")
DOC.set_value("Other: heterogeneouschemistry")
DOC.set_value("Other: landsurface")
DOC.set_value("Radiation")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
DOC.set_value("DMS")
DOC.set_value("SO2")
DOC.set_value("Terpene")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
DOC.set_value("Bin")
DOC.set_value("Bulk")
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
DOC.set_value("Organic")
DOC.set_value("Other: bc (black carbon / soot)")
DOC.set_value("POM (particulate organic matter)")
DOC.set_value("SOA (secondary organic aerosols)")
DOC.set_value("Sulphate")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: load ref genomes
Step2: find orthologs in each genome
Step3: predict MHC-I and MHC-II epitopes
Step4: conservation
Step5: Find conserved predicted epitopes in all proteins
Step6: Compare predictions to Mateus experimental results
Step7: check epitope selection method
|
<ASSISTANT_TASK:>
Python Code:
import os, math, time, pickle, subprocess
from importlib import reload
from collections import OrderedDict, defaultdict
import numpy as np
import pandas as pd
pd.set_option('display.width', 180)
import epitopepredict as ep
from epitopepredict import base, sequtils, plotting, peptutils, analysis
from IPython.display import display, HTML, Image
%matplotlib inline
import matplotlib as mpl
import pylab as plt
import pathogenie
from Bio import SeqIO,AlignIO
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
labels = {'sars':'NC_004718.3','scov2':'NC_045512.2','229E':'NC_002645.1','NL63':'NC_005831.2','OC43':'NC_006213.1','HKU1':'NC_006577.2'}
genomes = []
for l in labels:
df = ep.genbank_to_dataframe(labels[l]+'.gb',cds=True)
df['label'] = l
genomes.append(df)
genomes = pd.concat(genomes)
scov2_df = genomes[genomes.label=='scov2']
scov2_df = scov2_df.drop_duplicates('locus_tag')
#print (genomes[['label','gene','product','length']])
def get_seqs(gene):
seqs = []
sub = genomes[genomes['gene']==gene]
for i,r in sub.iterrows():
s=SeqRecord(Seq(r.translation),id=r.label)
seqs.append(s)
return seqs
seqs=get_seqs('S')
pathogenie.tools.dataframe_to_fasta(genomes, idkey='locus_tag', descrkey='product', outfile='proteins.fa')
pathogenie.tools.make_blast_database('proteins.fa', dbtype='prot')
def get_orthologs(gene):
sub = scov2_df[scov2_df['gene']==gene].iloc[0]
rec = SeqRecord(Seq(sub.translation),id=sub.gene)
bl = pathogenie.tools.blast_sequences('proteins.fa', [rec], maxseqs=10, evalue=1e-4,
cmd='blastp', threads=4)
bl = bl.drop_duplicates('sseqid')
#print (bl.sseqid)
found = genomes[genomes.locus_tag.isin(bl.sseqid)].drop_duplicates('locus_tag')
#print (found)
recs = pathogenie.tools.dataframe_to_seqrecords(found,
seqkey='translation',idkey='label',desckey='product')
return recs
seqs = get_orthologs('S')
aln = pathogenie.clustal_alignment(seqs=seqs)
print (aln)
spikesars = SeqIO.to_dict(seqs)['sars'].seq
spikesars
sc2 = ep.genbank_to_dataframe('NC_045512.2.gb',cds=True)
sc2 = sc2.drop_duplicates('gene')
m1_alleles = ep.get_preset_alleles('broad_coverage_mhc1')
m2_alleles = ep.get_preset_alleles('mhc2_supertypes')
P1 = base.get_predictor('netmhcpan')
P1.predict_sequences(sc2, alleles=m1_alleles,threads=10,path='scov2_netmhcpan',length=9,overwrite=False)
P1.load(path='scov2_netmhcpan')
pb1 = P1.promiscuous_binders(n=3, cutoff=.95)
P2 = base.get_predictor('netmhciipan')
P2.predict_sequences(sc2, alleles=m2_alleles,threads=10,path='scov2_netmhciipan',length=15,overwrite=False,verbose=True)
P3 = base.get_predictor('tepitope')
P3.predict_sequences(sc2, alleles=m2_alleles,threads=10,path='scov2_tepitope',length=15,overwrite=False)
P3.load(path='scov2_tepitope')
P2.load(path='scov2_netmhciipan')
pb2 = P2.promiscuous_binders(n=3, cutoff=.95, limit=70)
#rb2 = P2.promiscuous_binders(n=3, cutoff_method='rank', cutoff=40)
pb2.name.value_counts()
#pb3 = P3.promiscuous_binders(n=3, cutoff=50, cutoff_method='rank', limit=50)
#pb3.name.value_counts()
import difflib
def get_conservation(x, w):
m = difflib.get_close_matches(x, w, n=1, cutoff=.67)
if len(m)==0:
return 0
else:
m=m[0]
s = difflib.SequenceMatcher(None, x, m)
return s.ratio()
def find_epitopes_conserved(pb,gene,locus_tag):
seqs = get_orthologs(gene)
df = pb[pb.name==locus_tag]
#print (df)
print (len(seqs),len(df))
s=seqs[0]
for s in seqs:
if s.id == 'scov2':
continue
w,ss = peptutils.create_fragments(seq=str(s.seq), length=11)
df.loc[:,s.id] = df.peptide.apply(lambda x: get_conservation(x, w),1)
df.loc[:,'total'] = df[df.columns[8:]].sum(1)
df = df.sort_values('total',ascending=False)
df = df[df.total>0]
df = df.round(2)
return df
df = find_epitopes_conserved(pb2, 'S','GU280_gp02')
#df.to_csv('S_netmhciipan_conserved.csv')
res=[]
for i,r in scov2_df.iterrows():
print (r.locus_tag,r.gene)
df = find_epitopes_conserved(pb2,r.gene,r.locus_tag)
df['gene'] = r.gene
res.append(df)
res = pd.concat(res).sort_values('total',ascending=False).dropna().reset_index()
print (len(res),len(pb2))
res.to_csv('scov2_netmhciipan_conserved.csv')
cols = ['gene','peptide','pos','alleles','sars','229E','NL63','OC43','HKU1']
h=res[:30][cols].style.background_gradient(cmap="ocean_r",subset=['sars','229E','NL63','OC43','HKU1']).set_precision(2)
#res[:30][cols]
s1 = pd.read_csv('mateus_hcov_reactive.csv')
hits=[]
w = list(res.peptide)
for i,r in s1.iterrows():
m = difflib.get_close_matches(r.Sequence, w, n=2, cutoff=.6)
#print (r.Sequence,m,r.Protein)
if len(m)>0:
hits.append(m)
else:
hits.append(None)
s1['hit'] = hits
display(s1)
print (len(s1.hit.dropna())/len(s1))
reload(base)
P = base.get_predictor('tepitope')
P.predict_sequences(sc2, alleles=m2_alleles[:4],names=['GU280_gp01','GU280_gp02','GU280_gp03','GU280_gp04'],threads=10,length=9)
pb= P.promiscuous_binders(n=2, cutoff=.98, limit=20)
pb.name.value_counts()
rb= P.promiscuous_binders(n=3, cutoff_method='rank',cutoff=30,limit=20)
rb.name.value_counts()
sb= P.promiscuous_binders(n=2, cutoff_method='score',cutoff=3.5,limit=20)
sb.name.value_counts()
from matplotlib_venn import venn3
ax = venn3((set(pb.peptide),set(rb.peptide),set(sb.peptide)), set_labels = ('default', 'ranked', 'score'))
b=P.get_binders(cutoff=10, cutoff_method='rank')
func = max
s=b.groupby(['peptide','pos','name']).agg({'allele': pd.Series.count,
'core': base.first, P.scorekey:[func,np.mean],
'rank': np.median})
s.columns = s.columns.get_level_values(1)
s.rename(columns={'max': P.scorekey, 'count': 'alleles','median':'median_rank',
'first':'core'}, inplace=True)
s = s.reset_index()
s
s.name.value_counts()
s=s.groupby('name').head(10)
s.name.value_counts()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Examples
Step2: Use NLTK word tokenization and normalize as POS
Step3: Use NLTK word tokenization and strip vowels and punctuation to normalize
|
<ASSISTANT_TASK:>
Python Code:
def tokenize(input):
pass
def normalize(input):
pass
sample = "Hello, Mom!"
tokens = tokenize(sample)
# print(tokens)
normalized = [normalize(token) for token in tokens]
print(normalized)
def tokenize(input): # tokenize on white space
return input.split()
def normalize(input): # normalize as lower case
return (input, input.lower())
sample = "Hello, Mom!"
tokens = tokenize(sample)
# print(tokens)
normalized = [normalize(token) for token in tokens]
print(normalized)
import nltk
def tokenize(input): # use NLTK word tokenization
return nltk.word_tokenize(input)
def normalize(input): # normalize as POS
pos = nltk.pos_tag([input]) # since it’s a single word, make it a list
print(pos)
return (pos)
sample = "Hello, Mom!"
tokens = tokenize(sample)
# print(tokens)
normalized = [normalize(token) for token in tokens]
print(normalized)
import re
def tokenize(input): # use NLTK word tokenization
return nltk.word_tokenize(input)
def normalize(input): # normalize as POS
return (input, re.sub('[AEIOUaeiou]','',input))
sample = "Hello, Mom!"
tokens = tokenize(sample)
# print(tokens)
normalized = [normalize(token) for token in tokens]
print(normalized)
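# Step3 in the description mentions stripping punctuation as well as vowels, but
# normalize() above only removes vowels. An illustrative variant (not part of the
# original notebook) that drops both could look like this:
def normalize_stripped(token):
    return (token, re.sub(r'[AEIOUaeiou]|[^\w\s]', '', token))
print([normalize_stripped(token) for token in tokens])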
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Great, we're good to go. Now let's take a look at the data. See the README in this directory for background on how we generated the CSV file we'll work with here.
Step2: Okay, that's a pretty big file
Step3: It might be obvious from the header row, but we're looking at record identifiers, the two indicators, all of the subfields present in the subject represented as a single string (e.g. "ay0" means subfields a, y, and 0 are present with values), and the value of subfield 2. You might notice that the bibid values repeat; in this case we are looking at three different subjects for bibid 2164503, four for 2164506, and two for 2164508. Perfectly normal.
Step4: Looks good, but we lost our record ordering - not a big deal, but it might be helpful to get them back in order. csvkit's csvsort can help with that
Step5: We can confirm that our sample has at least some examples of multiple subjects per record, then, by noting that bibid 2091975 appears twice. Good enough for now - the sorting won't matter to Spark, which will be counting in parallel anyway, but it's reassuring that our sample is useful.
Step6: And verify that the data loaded correctly
Step7: Looks good. Let's split the data up now, and start to count things up. First we have to parse the CSV structure, then we pull out the tag (index position 1) and put that into a tuple with the value 1.
Step8: Now we have a long list of tuples like [('650', 1), ('651', 1), ('650', 1), ...]. We structure the data this way because that makes it easy to run this next step, adding it all up, using the Python operator "add" within the reduceByKey method. We pipe that result into takeOrdered to get a sorted list of tags by count. Note that we use -v for a descending count.
Step9: Let's turn that into a function that we can use for the whole dataset.
Step10: Now getting the count across all the data is just a matter of loading the full set in and passing it to the function.
Step11: What exactly did Spark just do?
Step12: This shows us the several stages that went into the job in the timeline at top - the two skinny blue blobs at far right are the completed stages, one after the other. You can see them spelled out in detail in the "Completed Stages" table at the bottom. reduceByKey came first, and took 33 seconds, with six tasks. You can see how the stages flowed with the data partitioning across cores in the "DAG Visualization" image in the middle.
Step13: At the individual stage of reduceByKey, we can see the graph of partitioning the data across the four cores, then ultimately moving the data into the pair RDD (the tuples with the counts that got added up). The timeline below shows those reduce jobs occuring across the cores, with the data partitioned into four jobs over approximately 1,400,000 records and then another two tasks across approximately 1,300,000 records, each of which starts (look close at the green bars) as soon as the first and second of the earlier tasks complete. If this laptop had six cores or more, we can imagine that Spark would have executed them all simulaneously.
Step14: It works - so now we can vary the columns it's using to count things up just by passing in a different splitter. Let's look at just tag + indicator combinations first.
Step15: It seems to be working just fine, so let's run it over the full dataset, pulling out the top combinations.
Step16: Let's take a better look at that using matplotlib.
Step17: Let's go a little further and add the subfields to the mix.
Step18: This would look a lot better with the y-tick labels aligned to the left, but I couldn't quite get that right without some awkward code. (Oh well... not the main point here.)
Step19: I'll leave scraping LC's Subject Heading and Source Term Codes and mapping the dl element values to a table we can cross-reference as an exercise for the reader.
Step20: A majority of subjects use indicator 2 of '7', that a source is specified in subfield 2. Let's look at that next.
Step21: Indeed, it is almost exactly the same, with just a few counts off by a few items.
|
<ASSISTANT_TASK:>
Python Code:
sc
!ls -lh combined.csv
!wc combined.csv
!head combined.csv
!gshuf -n 100000 -o sample.csv combined.csv
!wc sample.csv
!head sample.csv
!csvsort --no-header-row -c1 sample.csv > sample-sorted.csv
!head sample-sorted.csv
subjects_sample = sc.textFile("sample.csv")
subjects_sample.count()
tag_pairs = subjects_sample.map(lambda l: l.split(",")).map(lambda l: (l[1], 1))
from operator import add
tag_counts = tag_pairs.reduceByKey(add).takeOrdered(25, key=lambda (k, v): -v)
tag_counts
def count_tags(subjects_rdd):
pairs = subjects_rdd.map(lambda l: l.split(',')).map(lambda l: (l[1], 1))
return pairs.reduceByKey(add).takeOrdered(25, key=lambda (k, v): -v)
subjects_all = sc.textFile("combined.csv")
%time count_tags(subjects_all)
from IPython.display import Image
Image("ui-job.png")
Image("ui-stages.png")
def counter(subjects_rdd, splitter, limit=25):
pairs = subjects_rdd.map(lambda l: l.split(',')).map(splitter)
return pairs.reduceByKey(add).takeOrdered(limit, key=lambda (k, v): -v)
splitter = lambda l: (l[1], 1)
counter(subjects_sample, splitter, 10)
counter(subjects_sample, lambda l: ((l[1], l[2], l[3]), 1), 5)
top_subjects_indicators = counter(subjects_all, lambda l: ((l[1], l[2], l[3]), 1))
%matplotlib inline
top = [("%s-%s-%s" % (k[0], '_' if k[1] == ' ' else k[1], '_' if k[2] == ' ' else k[2]), v)
for k, v in top_subjects_indicators]
import matplotlib.pyplot as plt
import numpy as np
y_pos = np.arange(len(top))
plt.barh(y_pos, [v for k, v in reversed(top)],
align='center', color='steelblue', edgecolor='none')
plt.xlabel('Count')
plt.xlim([0, top[0][1] * 1.05])
plt.gca().xaxis.grid(True)
plt.ylabel('Tag and indicators')
plt.yticks(y_pos, [k for k, v in reversed(top)])
plt.ylim([-1, 25])
plt.tick_params(axis='y', labelsize=14)
plt.title('Counts of tags + indicator combinations')
plt.show()
top_subfields = counter(subjects_all, lambda l: ((l[1], l[2], l[3], l[4]), 1))
top = [("%s-%s-%s-%s" % (k[0], '_' if k[1] == ' ' else k[1], '_' if k[2] == ' ' else k[2], k[3]), v)
for k, v in top_subfields]
y_pos = np.arange(len(top))
plt.barh(y_pos, [v for k, v in reversed(top)],
align='center', color='steelblue', edgecolor='none')
plt.xlabel('Count')
plt.xlim([0, top[0][1] * 1.05])
plt.gca().xaxis.grid(True)
plt.ylabel('Tag, indicators and subfield')
plt.yticks(y_pos, [k for k, v in reversed(top)])
plt.ylim([-1, 25])
plt.tick_params(axis='y', labelsize=14)
plt.title('Counts of tag + indicator + subfield combinations')
plt.show()
sf2 = counter(subjects_all, lambda l: (l[5], 1))
sf2
pairs = subjects_all.map(lambda l: l.split(','))
pairs = pairs.filter(lambda l: l[3] != ' ')
pairs = pairs.map(lambda l: (l[3], 1))
pairs.reduceByKey(add).takeOrdered(25, key=lambda (k, v): -v)
sf2 = subjects_all.map(lambda l: l.split(','))
sf2 = sf2.filter(lambda l: l[3] == '7')
sf2 = sf2.map(lambda l: (l[5], 1))
sf2.reduceByKey(add).takeOrdered(100, key=lambda (k, v): -v)
from pyspark.mllib.stat import Statistics
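# A possible use of Statistics (a sketch, not from the original notebook): summary
# stats over the number of subject fields per record. colStats expects an RDD of
# numeric vectors, so each per-record count is wrapped in a one-element array.
per_record = subjects_all.map(lambda l: l.split(',')).map(lambda l: (l[0], 1)).reduceByKey(add)
summary = Statistics.colStats(per_record.map(lambda kv: np.array([float(kv[1])])))
print(summary.mean(), summary.max(), summary.variance())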
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Potential Calculation
Step2: Longrange Potential
Step3: Parameter for Na+ and Cl-
Step4: Calculate Lennard Jones Potential of Na+ and Cl-
Step5: Testing
Step6: test_shortrange_ewald( )
Step7: test_lj_potential( )
Step8: test_longrange_potential( )
Step9: Random Test-System for short-range Potentials
Step10: Random particle test for Coulomb-potential
Step11: Random particle test for the different shortrange calculations
Step12: Random particle test for Lennard-Jones potential
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import particlesim.api
import particlesim.test_total_potential as test
import particlesim.total_potential as pot
import time
import particlesim.helpers_for_tests as create
from mpl_toolkits.mplot3d import Axes3D
def time_short():
shortrange_time_total = 0
for i in range(100):
system_conf = create.create_system_configuration(100, 10)
potential = pot.TotalPotential(system_conf)
start = time.time()
potential.shortrange_energy(system_conf.xyz)
shortrange_time_total += time.time() - start
return shortrange_time_total / 100
print("Time for short-range potential calculation = ", time_short(), "s")
def time_long():
longrange_time_total = 0
for i in range(100):
system_conf = create.create_system_configuration(100, 10)
potential = pot.TotalPotential(system_conf)
start = time.time()
potential.longrange_energy(system_conf.xyz)
longrange_time_total += time.time() - start
return longrange_time_total / 100
print("Time for long-range potential calculation = ", time_long(), "s")
def particle_config(box_size = 10.2, charges= 1, sigma = 1.0, epsilon = 1.0, r_lim = 0.95, lennard_jones = True, coulomb = False):
particle_1 = [0.,0,0]
particle_2 = [box_size - r_lim,0,0]
particle_positions = np.array([particle_1, particle_2])
system_configuration = particlesim.api.SystemConfiguration(particle_positions,charges=charges, sigmas = sigma, epsilons = epsilon,
box_size = box_size)
distance, pot = [], []
while particle_positions[0][0] <= particle_positions[1][0]-r_lim:
particle_positions[0][0] += 0.05
r = np.linalg.norm(particle_positions[0] - particle_positions[1])
if r > box_size/2:
r -= box_size
distance.append(r)
pot.append(system_configuration.potential(xyz_trial = particle_positions,
lennard_jones=lennard_jones, coulomb = coulomb))
distance = distance[int(len(distance)/2):]+distance[:int(len(distance)/2)]
pot = pot[int(len(pot)/2):] + pot[:int(len(pot)/2)]
return distance[::-1], pot[::-1]
sigma_na = 1.21496
epsilon_na = 0.0469
q_na = +1.0
sigma_cl = 2.02234
epsilon_cl = 0.15
q_cl = -1.0
sigmas=[sigma_na, sigma_cl]
epsilons = [epsilon_na, epsilon_cl]
charges = [q_na, q_cl]
distance_na, pot_na = particle_config(sigma=sigma_na, epsilon=epsilon_na, lennard_jones=True, coulomb=False)
distance_cl, pot_cl = particle_config(sigma=sigma_cl, epsilon=epsilon_cl, lennard_jones=True, coulomb=False)
distance_mix, pot_mix = particle_config(sigma=sigmas,epsilon=epsilons, lennard_jones=True, coulomb=False)
distance_coulomb, pot_coulomb = particle_config(charges=charges, sigma=sigmas, epsilon=epsilons, r_lim=0.1,
lennard_jones= False, coulomb=True)
distance_LJ_C, pot_LJ_C = particle_config(charges=charges, sigma=sigmas, epsilon=epsilons, r_lim=0.7,
lennard_jones= True, coulomb=True)
fig, axes = plt.subplots(3,1, figsize=(10,10))
fig.tight_layout(h_pad=5)
axes[0].plot(distance_na,pot_na, label='Na+')
axes[0].plot(distance_cl,pot_cl, label='Cl-')
axes[0].plot(distance_mix,pot_mix, label='Na+ and Cl-')
axes[0].set_xlabel(r"$r$", fontsize=15)
axes[0].set_xlim([-5,5])
axes[0].set_ylim([-0.5, 1.5])
axes[0].set_title(r"Lennard Jones Potential")
axes[0].set_xlabel(r"Particle Particle Distances in $\AA$", fontsize=12)
axes[0].set_ylabel(r"Energy in kcal/mol", fontsize=12)
axes[0].legend()
axes[1].plot(distance_coulomb, pot_coulomb, label='q_Na = +1.0, q_Cl = -1.0')
axes[1].set_xlabel(r"$r$", fontsize=15)
axes[1].set_xlim([-5,5])
axes[1].set_ylim([-1000, 0])
axes[1].set_title(r"Coulomb Potential")
axes[1].set_xlabel(r"Particle Particle Distances in $\AA$", fontsize=12)
axes[1].set_ylabel(r"Energy in kcal/mol", fontsize=12)
axes[2].plot(distance_LJ_C, pot_LJ_C,'c', label='q_Na = +1.0, q_Cl = -1.0')
axes[2].set_xlabel(r"$r$", fontsize=15)
axes[2].set_xlim([-5,5])
axes[2].set_ylim([-1000, 1000])
axes[2].set_title(r"Coulomb and Lennard Jones Potential")
axes[2].set_xlabel(r"Particle Particle Distances in $\AA$", fontsize=12)
axes[2].set_ylabel(r"Energy in kcal/mol", fontsize=12)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter([0], [0], [0] ,c='r')
ax.scatter([0], [0], [1] ,c='b')
ax.set_xlim([0, 10])
ax.set_ylim([0, 10])
ax.set_zlim([0, 10])
print(test.test_shortrange_ewald())
print(test.test_lj_potential())
print(test.test_longrange_potential())
Repetitions = 10
particle_box = 3
boxsize = 20
n = 20
real_space_cutoff = 8.  # hyphens are not valid in Python identifiers
x_Na = np.random.uniform(0, particle_box, int(n/2))
y_Na = np.random.uniform(0, particle_box, int(n/2))
z_Na = np.random.uniform(0, particle_box, int(n/2))
x_Cl = np.random.uniform(0, particle_box, int(n/2))
y_Cl = np.random.uniform(0, particle_box, int(n/2))
z_Cl = np.random.uniform(0, particle_box, int(n/2))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x_Na, y_Na, z_Na, c='r')
ax.scatter(x_Cl, y_Cl, z_Cl, c='b')
ax.set_xlim([0, boxsize])
ax.set_ylim([0, boxsize])
ax.set_zlim([0, boxsize])
print(test.test_coulomb_random())
print(test.test_shortrange_with_different_neighbouring())
print(test.test_lennard_jones_rondom())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple graph analytics for the Twitter stream
Step2: Building the directed graph
Step3: Most retweeted users
Step4: Top 10 Pageranked users
Step5: Basic network viz
|
<ASSISTANT_TASK:>
Python Code:
from kafka import KafkaConsumer
import uuid
import json
consumer = KafkaConsumer(bootstrap_servers='',
value_deserializer=lambda s: json.loads(s, encoding='utf-8'),
auto_offset_reset='smallest',
group_id=uuid.uuid4())
consumer.subscribe(['tweets'])
limit = 500
consumer.poll(max_records=limit)
count = 0
data = []
for msg in consumer:
data.append(msg.value)
count += 1
if count >= limit:
break
len(data)
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
%matplotlib inline
import networkx as nx
graph = nx.DiGraph()
for tweet in data:
if tweet.get('retweet') == 'Y':
name = tweet.get('name')
original_name = tweet.get('original_name')
followers = tweet.get('followers')
if name not in graph: graph.add_node(name, retweets = 0)
if original_name not in graph:
graph.add_node(original_name, retweets = 1)
else:
graph.node[original_name]['retweets'] = graph.node[original_name]['retweets'] +1
graph.add_edge(original_name, name)
top10_retweets = sorted([(node,graph.node[node]['retweets']) for node in graph.nodes()], key = lambda x: -x[1])[0:10]
top10_retweets
pr = nx.pagerank(graph)
colors = [pr[node] for node in graph.nodes()]
top10_pr = sorted([(k, v) for k, v in pr.items()], key=lambda x: -x[1])[0:10]  # sort descending so these really are the top 10
label_dict = dict([(k[0],k[0]) for k in top10_pr])
top10_pr
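# A complementary view (sketch, not from the original notebook): since edges point
# from the original author to the retweeter, out-degree counts how many distinct
# users retweeted each account; it usually tracks the retweet counts above.
top10_outdegree = sorted(dict(graph.out_degree()).items(), key=lambda x: -x[1])[:10]
top10_outdegree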
plt.figure(figsize=(11,11))
plt.axis('off')
weights = [10*(graph.node[node]['retweets'] + 1) for node in graph.nodes()]
nx.draw_networkx(graph, node_size = weights, width = .1, linewidths = .1, with_labels=True,
node_color = colors, cmap = 'RdYlBu',
labels = label_dict)
consumer.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Display variables
Step2: Defining the Euler updates (gradient descent)
Step3: Dirac GAN
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pylab as plt
from scipy.integrate import odeint
from scipy.integrate import solve_ivp
from scipy.misc import derivative
import seaborn as sns
def set_up_fonts():
sns.reset_orig()
import matplotlib
matplotlib.rcParams["pdf.fonttype"] = 42
matplotlib.rcParams["ps.fonttype"] = 42
hw = 10
hl = 6
minshaft = 2
scale = 1.5
color = ["blue", "red", "green", "orange", "magenta"]
set_up_fonts()
def euler_alternating(fn, v, t):
last_t = t[0]
vs = [v]
num_dims = len(v)
last_v = list(v)
for current_t in t[1:]:
delta_t = current_t - last_t
for i in range(num_dims):
interim_v = last_v + delta_t * np.array(fn(current_t, last_v))
last_v[i] = interim_v[i]
last_t = current_t
vs.append(last_v.copy())
return np.array(vs)
def euler(fn, v, t):
last_t = t[0]
vs = [v]
last_v = v
for current_t in t[1:]:
current_v = last_v + (current_t - last_t) * np.array(fn(current_t, last_v))
last_t = current_t
last_v = current_v
vs.append(current_v)
return np.array(vs)
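# Quick sanity check of the explicit Euler integrator above (a sketch, not part of
# the original notebook): integrate dx/dt = -x and compare against the exact exp(-t).
check_t = np.arange(0, 5, 0.01)
check_sol = euler(lambda t, v: [-v[0]], np.array([1.0]), check_t)
print("max abs Euler error vs exp(-t):", np.max(np.abs(check_sol[:, 0] - np.exp(-check_t))))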
grad_f = lambda x: 1.0 / (1 + np.exp(-x))
vect0 = [(1, 1)]
# Write the problem in a way compatible with solve_ivp.
# Return the gradients for each player.
def system(t, vect):
x, y = vect
return [-grad_f(x * y) * y, grad_f(x * y) * x]
t = np.arange(0, 100, 0.2)
plot = plt.figure()
v = vect0[0]
sol = solve_ivp(system, (0, 200), v, t_eval=t, dense_output=True, method="RK45")
sol = sol.sol(t).T
widths = np.linspace(0, 2, sol.size)
plt.quiver(
sol[:-1, 0],
sol[:-1, 1],
sol[1:, 0] - sol[:-1, 0],
sol[1:, 1] - sol[:-1, 1],
scale_units="xy",
angles="xy",
scale=2,
color=color[0],
linewidths=widths,
edgecolors=color[0],
label="Continuous dynamics",
headwidth=hw,
headlength=hl,
minshaft=2,
)
plt.title("Dirac GAN", fontsize=16)
plt.plot(v[0], v[1], "go", markersize=10)
plt.plot(0, 0, "rx", markersize=12)
plt.plot(0, 0, "rx", markersize=12, label="equilibruim (0, 0)")
plt.legend(loc="upper right", bbox_to_anchor=(0.8, 1), fontsize=13, framealpha=0)
plt.xlabel(r"$\phi$", fontsize=16)
plt.ylabel(r"$\theta$", fontsize=16)
plt.xticks([])
plt.yticks([])
plt.xlim((-4, 4))
plt.ylim((-3, 4.5))
disc_lr = 0.1
gen_lr = 0.1
vect0 = [(1, 1)]
t = np.arange(0, 100, disc_lr)
plot = plt.figure()
v = vect0[0]
sol = euler(system, v, t)
widths = np.linspace(0, 2, sol.size)
plt.quiver(
sol[:-1, 0],
sol[:-1, 1],
sol[1:, 0] - sol[:-1, 0],
sol[1:, 1] - sol[:-1, 1],
scale_units="xy",
angles="xy",
scale=2,
color=color[0],
linewidths=widths,
edgecolors=color[0],
label="Simultaneous gradient descent",
headwidth=hw,
headlength=hl,
minshaft=2,
)
plt.title("Dirac GAN", fontsize=16)
plt.plot(v[0], v[1], "go", markersize=10)
plt.plot(0, 0, "rx", markersize=12, label="equilibruim (0, 0)")
plt.legend(loc="upper right", bbox_to_anchor=(0.8, 1), fontsize=13, framealpha=0)
plt.xlabel(r"$\phi$", fontsize=16)
plt.ylabel(r"$\theta$", fontsize=16)
plt.xticks([])
plt.yticks([])
plt.xlim((-4, 4))
plt.ylim((-3, 4.5))
plt.vlines(0, 0, 10, lw=3, colors="b", label=r"$p^*$")
plt.vlines(2, 0, 10, lw=3, colors="g", label=r"$q_{\theta}$", linestyles="--")
plt.hlines(0, -1, 10, lw=2, colors="k")
xlim = np.linspace(-0.5, 2.5, 50)
plt.plot(xlim, 1.7 * xlim, color="r", label=r"$D_{\phi}(x) = \phi x$", ls="-.")
plt.xlim(-0.5, 2.5)
plt.yticks([])
plt.xticks([])
plt.legend(framealpha=0, loc="upper center", fontsize=14)
lr = 0.1
vect0 = [(1, 1)]
t = np.arange(0, 100, lr)
plot = plt.figure()
v = vect0[0]
sol = euler_alternating(system, v, t)
widths = np.linspace(0, 2, sol.size)
plt.quiver(
sol[:-1, 0],
sol[:-1, 1],
sol[1:, 0] - sol[:-1, 0],
sol[1:, 1] - sol[:-1, 1],
scale_units="xy",
angles="xy",
scale=2,
color=color[0],
linewidths=widths,
edgecolors=color[0],
label="Alternating gradient descent",
headwidth=hw,
headlength=hl,
minshaft=2,
)
plt.title("Dirac GAN", fontsize=16)
plt.plot(v[0], v[1], "go", markersize=10)
plt.plot(0, 0, "rx", markersize=12, label="equilibruim (0, 0)")
plt.legend(loc="upper right", bbox_to_anchor=(0.8, 1), fontsize=13, framealpha=0)
plt.xlabel(r"$\phi$", fontsize=16)
plt.ylabel(r"$\theta$", fontsize=16)
plt.xticks([])
plt.yticks([])
plt.xlim((-4, 4))
plt.ylim((-3, 4.5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As before, we use the convention that np stands for numpy. Because Series and DataFrame are used so often, importing them into the namespace directly is more convenient
Step2: As the output above shows, the string representation of a Series has the index on the left and the values on the right. Since we did not specify an index for the data, an integer index from 0 to N-1 (N being the length of the data) is created automatically. You can get the array representation and the index object through the Series' values and index attributes:
Step3: In most cases we prefer to customize the index so it is more convenient to work with. For example
Step4: As shown above, compared with an ordinary numpy array, we can use the index to select a single value or a set of values from a Series.
Step5: The above shows that array operations in pandas preserve the link between the index and the values.
Step6: If the data is stored in a Python dict, a Series can be created directly from that dict. In particular, if only a dict is passed, the resulting Series' index is the dict's keys (in sorted order)
Step7: In the example above, the three values in sdata that match the cities index are found and placed in the corresponding positions, but since no sdata value is found for Osaka, its value is NaN (not a number), which pandas uses to represent missing or NA values. In this book we use missing or NA to denote missing data. The pandas isnull and notnull functions can be used to detect missing data.
Step8: Series also has similar instance methods:
Step9: Later in this chapter we will explain in detail how to handle missing data.
Step10: We will cover the data alignment features in detail later; for now we skip them.
Step11: We can modify a Series' index by direct assignment
Step12: Quiz
Step13: DataFrame
Step14: As with Series, the index is added automatically and the data from the dict is arranged in order.
Step15: As with Series, if a requested column is not present in the data, NA values are returned.
Step16: Using dict-like notation or attribute access, a column of a DataFrame can be retrieved as a Series
Step17: As we can see, the returned Series has the same index as the original DataFrame, and its name attribute has been set accordingly.
Step18: Columns can be modified by assignment. For example, we can modify the NaN-valued history column of frame2
Step19: When assigning a list or array to a column, its length must match the length of the DataFrame (in fact it must not exceed the DataFrame's length). If a Series is assigned, it is matched exactly to the DataFrame's index and any holes are filled with missing values:
Step20: By assigning to a column that does not exist, we can add a column to the DataFrame, for example
Step21: We can also use the del keyword to delete a column
Step22: Important: a column returned by indexing is only a view of the underlying data, not a copy, so any in-place modification of the returned Series is reflected in the source DataFrame. A column can be copied explicitly with the Series copy method.
Step23: If it is passed to DataFrame, it is interpreted as: the outer dict keys become the columns and the inner keys become the row index
Step24: We can also transpose the result
Step25: The inner dict keys are unioned and sorted to form the final index. If an index is specified explicitly, no automatic sorting happens, for example:
Step26: Here I review the DataFrame material above, using the data from the original book as a reference.
Step27: Now let's see how to add a new column named area to the table df above and fill it with random numbers
Step28: Note, however, that the assign method above does not modify the original DataFrame, whereas assigning to a new column directly does modify it, because in a DataFrame any modification of a Series returned by indexing is reflected in the original DataFrame. If you want the second approach without changing the original DataFrame, first create an explicit copy.
Step29: The above added a column to an existing DataFrame. We can also add a column when constructing the DataFrame from the data dict above. For example:
Step30: Above I showed how to add a column to an existing or newly constructed DataFrame. Now let's see how to easily retrieve a particular column of a DataFrame with dict-like notation. I introduce two ways here:
Step31: Columns can be modified by assignment. For example, we can fill frame2's 'debt' column with new data:
Step32: Note that when assigning a list or array to a DataFrame column, the length must equal the DataFrame's length. That is why I used len() earlier to get the length before adding a new column to an existing DataFrame.
Step33: Here I mention once more how to create a DataFrame from a nested dict:
Step34: Indexing into a DataFrame
Step35: Selection with loc and iloc
Step36: Now, let's use the integer indexing method iloc to select the same subset as selected in the last cell by loc
Step37: As you can see above, we have many ways to select and rearrange the data contained in a pandas object
Step38: Integer indexes
Step39: Arithmetic and data alignment
Step40: Because part of the index of s1 and s2 does not overlap, NaN appears for those labels. Such missing values propagate in the same way through pandas' other arithmetic operations, i.e. they show up as NaN.
Step41: In the sum of the two dataframes above, every index and column that appears in either df1 or df2 appears in the result; for the labels that do not overlap, the result is NaN.
Step42: If the two dataframes share no common index or column at all, their arithmetic yields a dataframe of all NaN values. For example
Step43: Compared with the previous cell, we change the approach and use fill_value to pass an argument (here, I use 0)
Step44: Table
Step45: Operations between DataFrame and Series
Step46: This is called broadcasting. A dedicated chapter later introduces broadcasting. Operations between DataFrame and Series work similarly.
Step47: By default, arithmetic between DataFrame and Series matches the Series' index to the DataFrame's columns and then broadcasts down the rows
Step48: But what happens if the Series' index and the DataFrame's columns do not correspond one to one? Let's look at an example
Step49: As seen above, if the Series' index and the DataFrame's columns do not fully overlap, the two operands are reindexed to form the union
Step50: Function application and mapping
Step51: Another frequent operation is applying a function to the 1D arrays formed by each row or column. The DataFrame apply method does exactly that
Step52: Conveniently for Python users, most array statistics (e.g. sum, mean) are also DataFrame methods, so apply is usually not needed.
Step53: Besides a scalar, the function passed to apply can also return a Series made up of multiple values:
Step54: The reason why we call it 'applymap' is that Series itself has a 'map' method for applying an element-wise function
Step55: Sorting and Ranking
Step56: Note that the Series above was re-sorted according to its index.
Step57: Above we sorted by index. To sort a Series by its values, use sort_values (formerly the order method)
Step58: The missing values are sorted to the end of the Series by default, see example
Step59: For a DataFrame, we can sort by the values in one or more columns as needed, by passing one or more column names to the by option of sort_values
Step60: Rank values start at 1 and go up to the number of valid data points in the array. By default the rank method breaks ties by assigning each group the mean rank:
Step61: In addition, we can assign ranks according to the order in which the values appear in the original data:
Step62: We can also rank in descending order
Step63: Axis indexes with duplicate labels
Step64: Using 'is_unique' can check whether the labels are unique or not
Step65: For an index with duplicate values, data selection behaves a little differently.
Step66: Here is another example with a DataFrame, although the usage is the same as for Series
Step67: Summarizing and computing descriptive statistics
Step68: Calling the DataFrame sum method returns a Series containing the column sums
Step69: !!! Attention
Step70: In pandas, some methods such as idxmin and idxmax return indirect statistics, for example the index at which the minimum or maximum value is reached. See the example below
Step71: For non-numeric data, describe produces a different kind of summary statistics:
Step72: Correlation and covariance
Step73: The corr method computes the correlation of the overlapping, non-NA, index-aligned values of two Series. I have also used this in my own research. See the book for the details.
Step74: Unique values, value counts, and membership
Step75: If sorting is needed,
Step76: As shown above, the result is sorted by frequency in descending order by default. value_counts is also a top-level pandas method that can be used on any array
Step77: In some cases, you may want to compute a histogram on multiple related columns in a DataFrame
Step78: Passing pd.value_counts to the DataFrame's apply function gives
Step79: Handling missing data
Step80: Python's built-in None value is also treated as NA
Step81: Common NA-handling methods: dropna, fillna, isnull, notnull.
Step82: The same goal can also be reached with boolean indexing
Step83: fillna can be used to fill in missing data. For example
Step84: Hierarchical indexing
Step85: The above is a simple dataset with a two-level index. To select a subset of the data, we can use:
Step86: Going further, we can index on the second level:
Step87: Or
Step88: Hierarchical indexing is very helpful for data processing in real life and work.
Step89: The inverse operation of unstack is stack
Step90: For a DataFrame, both axes can have a hierarchical index:
Step91: This makes it convenient to select data from this DataFrame
Step92: Reordering the levels
Step93: To give the re-leveled data above a sorted order, we can also use level-based sorting (sortlevel, or sort_index with level= in newer pandas)
Step94: Summary statistics by level
Step95: These actually use pandas' groupby machinery, which will be introduced further later.
Step96: To make the change on the original DataFrame in place, use
Step97: In the operation above, when we set a column as the row index it is automatically removed from the columns. If we want to keep it, we can add the drop option
Step98: Conversely to set_index, reset_index moves the hierarchical index levels back into the columns.
Step99: Visualization
Step100: Pandas for data analysis | scipy 2017 Tutorial | Daniel Chen
|
<ASSISTANT_TASK:>
Python Code:
from pandas import Series, DataFrame
import pandas as pd
from pandas import Series
a = [4, 5, 2, -4]
obj = Series(a)
obj
obj2=obj+obj
obj2
obj.values
obj.index # note: with Python 2 the displayed form may look slightly different
obj2 = Series([4, 5, 2 ,-4], index=['d', 'b',
'a', 'c'])
obj2.values
obj2.index
obj2['a']
obj2[['a', 'b']]
import pandas as pd
from pandas import Series, DataFrame
import numpy as np
obj11 = Series(np.arange(10))
print('value of obj11 is', obj11.values)
print('index of obj11 is', obj11.index)
obj13 = Series([1,2,3,4], index=['a','b','c','d'])
print('index of obj13 is', obj13.index)
print('value of obj13 is', obj13.values)
print('print the values that are large than 3\n', obj13[obj13>2])
print(obj13['b'])
print(obj13[['b','c']])
print(obj11[2])
obj13=obj13*2
print(obj13)
'b' in obj13
'e' in obj13
sdata = {'Kyoto': 35000, "Tokyo": 40000, "Sendai": 30000, "Nagoya": 45000}
sdata
obj4 = Series(sdata)
print(obj4)
cities = ['Kyoto', 'Osaka', 'Sendai', 'Nagoya']
obj5=Series(sdata, index=cities)
print(obj5)
pd.isnull(obj5)
pd.notnull(obj5)
obj5.isnull()
obj5.notnull()
obj4
obj5
obj5+obj4
obj4.name = 'population'
obj4.index.name = 'cities'
obj4
obj5.name = 'area'
obj5.index.name = 'cities'
obj5
obj5.index=['A', 'B', 'C', 'D']
print(obj5.index)
print(obj5)
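# reindex (a small sketch to contrast with overwriting .index above): it returns a
# *new* Series conformed to the given labels, filling any missing label with NaN.
obj5.reindex(['A', 'B', 'C', 'D', 'E'])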
import numpy as np
import pandas as pd
from pandas import Series
from numpy import random
import string
series_value = random.randn(26)
letter_string = string.ascii_lowercase[:]
letter_list = [i for i in letter_string]
sample_series = Series(series_value, index=letter_list)
sample_series.index.name = 'lower letters'
sample_series.name = 'random values'
print(sample_series)
import pandas as pd
from pandas import DataFrame, Series
data = {'province':['Zhejiang', 'Zhejiang', 'Zhejiang', 'Hubei', 'Hubei'],'area':[101,101,101,185,185],
'pop':[98,98,98,100,100]}
frame1 = DataFrame(data)
frame1
DataFrame(data, columns=['province','area', 'pop'])
frame2 = DataFrame(data,columns=['province', 'pop', 'area', 'history'],index=[1,2,3,4,5])
frame2
frame2.columns
frame2.index
frame2['province'] #标记方式
frame2.province #属性方式
frame2.ix[3] #ix is deprecated in newer pandas; use iloc instead
frame2['history']=2000
frame2
val = Series([2000,3000,2100],index=[1,3,5])
frame2['history']=val
frame2 # rows 1, 3, 5 were changed; 2 and 4 are not in val, so they become NaN
frame2['estern']=frame2.province=="Zhejiang" # assign a boolean value to the 'estern' column
frame2 # frame2 now indeed has an extra 'estern' column
del frame2['estern']
frame2.columns # estern coulumn was delted
frame2["area"]=frame2["area"]+1
frame2
pop = {'Nevada':{2001:2.4, 2002:2.9},
'Ohio':{2000:1.5, 2001: 1.7, 2002:3.6}}
frame3 = DataFrame(pop)
frame3
frame3.T
DataFrame(pop, index=[2001,2002,2000])
DataFrame(pop, columns=["Ohio", "Nevada"])
DataFrame(pop, columns=["Ohio", "Nevada"],index=[2002,2001,2000])
#Here we create a DataFrame from a dict of lists; later I will also mention creating one from a nested dict
import pandas as pd
import numpy as np
from pandas import DataFrame
data = {'state':['Ohio','Ohio','Ohio','Nevada','Nevada'],
'year':[2000,2001,2002,2001,2002],
'pop':[1.5, 1.7, 3.6, 2.4, 2.9]}
frame = DataFrame(data)
frame
frame.iloc[1]
# Now let's see how to arrange this table in the column order 'year', 'state', 'pop'
f = DataFrame(data, columns = ['year', 'state', 'pop'])
f
f.sort_values(by=['year'])
#Similarly, we can sort by population from smallest to largest.
f.sort_values(by=['pop'])
#Conversely, sort by population from largest to smallest
df = f.sort_values(by=['pop'],ascending=False) #ascend means ‘go up'.
df
sLength = len(df['year']) # get the number of rows
print(sLength)
df1=df.assign(area=pd.Series(np.random.randn(sLength)).values)
df1
# Another approach is to assign to a new column name directly, which also adds the column:
df['area'] = pd.Series(np.random.randn(sLength)).values
df
df2=df.copy()
print(df2)
print('\n')
del df2['area'] #delet a column in df2
print(df2)
print('\n')
print(df)
frame2 = DataFrame(data, columns=['year','state','pop','debt'], index =['one',
'two',
'three',
'four',
'five'])
frame2
frame2.columns
#Method 1
frame2['pop']
#Method 2
frame2.year # frame2.pop gives an unexpected result, apparently because pop is already a DataFrame method name
# Above we retrieved columns from the DataFrame; rows can likewise be retrieved by position or label, for example with the ix indexer
frame2.ix['three']
frame2['debt']=16.5
frame2
frame2['debt']=np.arange(5.) #give a list to this 'debt' columns
frame2
frame2['debt']=np.arange(6.) #give a list to this 'debt' columns
frame2
#Now let's try assigning with a Series instead
from pandas import Series
val = Series([-1.2, -1.5, -1.7], index = ['two', 'four', 'five'])
frame2['debt']=val
frame2
import pandas as pd
import numpy as np
obj = pd.Series(np.arange(4.), index=['a', 'b', 'c', 'd'])
obj
type(obj)
obj[obj<2]
data = pd.DataFrame(np.arange(16).reshape(4,4),
index=['Ohio', 'Colorado','Utah', 'New York'],
columns=['one', 'two', 'three', 'four'])
data
data['two']
data[['two', 'one']]
# Slice the DataFrame by rows
data[::2]
data[:2]
# Select data with a boolean array
data[data['three'] > 5]
data[data['three'] > 3]
data.loc['Colorado']
data.loc['Colorado', ['two', 'four']]
data.iloc[1,[1, 3]]
data.loc[:"Colorado"]
data.loc[:"Colorado"][:1]
data.loc["Colorado"]
# Compared to the last cell, I remove the comma :
# Hence only the row of 'Colorado'
data.loc[:"Colorado", ['two', 'three']]
data
data.iloc[[1,2],[1, 3]]
data.iloc[:, 1:3][data.three > 6]
data.at['Utah', 'two']
data.iat[2, 1]
data.reindex
data.get_value
data.set_value
#For example
import numpy as np
data_list = np.arange(10).tolist()
data_list
data_list[:2]
data_df = pd.DataFrame(np.arange(9).reshape((3,3)),
index=['0', '1', '2'],
columns=['a', 'b', 'c'])
data_df
data_df.iloc[:2]
series_test = data_df.a
series_test
series = series_test[:]*0.1
type(series)
type(series_test)
ser = pd.Series(np.arange(3.))
print(type(ser))
ser
ser.loc[:2.]
import pandas as pd
import numpy as np
from numpy.random import randn
s1 = pd.Series(randn(4), index=['a', 'b', 'c', 'd'])
s2 = pd.Series(randn(4), index=['e', 'b', 'c', 'd'])
s2
s1 + s2
import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.arange(9).reshape((3,3)),
columns = list('bcd'),
index=['Ohio', 'Texas', 'Colorado'])
df2 = pd.DataFrame(np.arange(12).reshape((4,3)),
columns = list('bde'),
index=['Utah', 'Ohio', 'Texas', 'Oregon'])
df1
df2
df1 + df2
df1 = pd.DataFrame({'A': [1,2]})
df2 = pd.DataFrame({'B': [3,4]})
print(df1)
print(df2)
df1 + df2
df1.add(df2, fill_value=0)
1/df1
df1.rdiv(1)
df1.floordiv(df2, fill_value=4)
# about floordiv, you can see https://www.quora.com/What-does-floor-division-in-Python-do
df1.mul(df2,fill_value=1)
df1.pow(df2, fill_value=3)
arr = np.arange(12.).reshape((3,4))
arr
arr[0]
arr - arr[0]
frame = pd.DataFrame(np.arange(12.).reshape((4,3)),
columns = list('bde'),
index = ['Utah', 'Ohio', 'Texas', 'Oregon'])
frame
series = frame.iloc[0]
print(series)
frame - series
series2 = pd.Series(range(3), index= ['b', 'e', 'f'])
frame + series2
series3 = frame['d']
series
frame.sub(series3, axis=0) # the axis number passed in (0) is the axis to match on; here we match the DataFrame's row index and broadcast across the columns
from pandas import DataFrame
import numpy as np
frame = DataFrame(np.random.randn(4,3), columns = list('bde'),
index = ['Utah', 'Ohio', 'Texas', 'Oregon'])
frame
np.abs(frame)
f = lambda x: x.max() - x.min()
frame.apply(f)
# alternatively, we can use frame.apply(f, axis='index')
# or frame.apply(f, axis=0)
frame.apply(f, axis=1)
# alternatively, we can use frame.apply(f, axis='columns')
# or frame.apply(f, axis=1)
np.sum(frame, axis='index')
np.sum(frame, axis='columns')
# using this function
# we can print the maximum or minimum for each column of a DataFrame
from pandas import Series
def F(x):
return Series([x.min(), x.max()], index =
['min', 'max'])
frame.apply(F)
# Element-wise Python functions can also be used.
# In the following example, I show how to format the floating-point values, i.e. keep a specified number of digits
import pandas as pd
from pandas import DataFrame, Series
import numpy as np
frame = DataFrame(np.random.randn(3,4), columns = [list('abcd')],
index=[list('123')])
format = lambda x: '%.2f' % x
print(frame)
df = frame.applymap(format)
df
series = Series([1.111,2.111,3.111], index=['a', 'c', 'b'])
series.map(format)
from pandas import DataFrame, Series
obj = Series(range(4), index = ['d', 'a', 'b', 'c'])
obj
obj.sort_index()
import numpy as np
frame = DataFrame(np.random.randn(4,3), columns=[list('dbc')],
index=np.arange(4).tolist())
frame
frame.sort_index(axis=0, ascending=False)
frame.sort_index(axis=1, ascending=True)
obj = Series([3,12,-222,3])
obj.sort_values()
# note that in python 2.7: obj.order()
# I highly recommend using the new feature in python3
import pandas as pd
obj = pd.Series([45, np.nan, 72, 83, np.nan, -33 ])
obj
obj.sort_values()
frame = pd.DataFrame({'b':[4, 7, -3, 2], 'a':[0, 1.1, 0, 1.1]})
frame
frame.sort_values(by='b')
frame.sort_values(by=['a', 'b'])
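# A related shortcut (sketch, not in the original text): nlargest/nsmallest pick the
# top/bottom rows by a column without sorting the whole frame.
frame.nlargest(2, 'b')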
obj = Series([7, -5, 7, 4, 2, 0, 4])
obj.rank() # the rank of each value within the original vector; ties receive the average rank
obj.rank(method = 'first') # note the difference from the previous cell: ties are broken by order of appearance
obj.rank(ascending=False, method='max') # use the maximum rank of the whole tied group
obj.rank(ascending=False, method='min') # use the minimum rank of the whole tied group
obj = Series(range(5), index=[list('aabbc')])
obj
obj.index.is_unique
a_series = obj['a']
print(type(a_series))
a_series
c_value = obj['c']
print(type(c_value))
print(c_value)
obj['c']
df = pd.DataFrame(np.random.randn(4,3), index=[list('aabb')])
df
df.loc['b']
import pandas as pd
import numpy as np
df = pd.DataFrame([[1.4, np.nan], [7.1, -4.5],
[np.nan, np.nan], [0.75, -1.3]],
index = [list('abcd')],
columns = ['one', 'two'])
df
df.sum()
#Passing axis = 'columns' or axis = 1 sums across the columns instead
df.sum(axis=1)
# If we do not want NaN values to be skipped in the computation, we can use
df.sum(axis=1, skipna=False) # there is also a level option for MultiIndex cases, which I have not needed yet
df.idxmax()
# Other methods are accumulations, such as cumsum
df.cumsum()
# Some methods are neither reductions nor accumulations. For example,
# describe, which is used very frequently in practice because it gives a quick overview of a dataset
df.describe()
obj = pd.Series(['a', 'a', 'b', 'c']*4)
obj
obj.describe()
import pandas_datareader.data as web
import pandas as pd
all_data = {ticker: web.get_data_robinhood(ticker)
for ticker in ['AAPL']}
price = pd.DataFrame({ticker: data['low_price'] #Adj Close
for ticker, data in all_data.items()})
# volume = pd.DataFrame({ticker: data['symbol']
# for ticker, data in all_data.items()})
price.head(10)
price_aapl = price['AAPL']
type(price_aapl)
price_float_appl = pd.to_numeric(price_aapl)
returns = price_float_appl.pct_change()
returns.tail()
returns_copy = returns.copy()
returns.corr(returns_copy)
returns.cov(returns_copy)
obj = pd.Series(['c', 'a', 'd', 'a', 'a', 'b', 'b', 'c', 'c'])
obj
uniques = obj.unique()
uniques
sorted(uniques)
obj.value_counts()
pd.value_counts(obj.values, sort=False)
#value_counts returns a Series
#containing unique values as its index and frequencies as its values,
#ordered count in descending order
obj
mask = obj.isin(['b', 'c'])
mask
type(mask)
obj[mask]
data = pd.DataFrame({'Qu1': [1, 3, 4, 3, 4],
'Qu2': [2, 3, 1, 2, 3],
'Qu3': [1, 5, 2, 4, 4]})
data
result = data.apply(pd.value_counts).fillna(0) # count each value per column; worth revisiting later
result
import pandas as pd
import numpy as np
string_data = pd.Series(['abc', 'dh', np.nan, 'xh'])
string_data
string_data[0] = None
string_data.isnull()
from numpy import nan as NA
data = pd.Series([1, NA, 3, NA, 7])
data
data.dropna()
data[data.notnull()]
# For a DataFrame things are a bit more complicated: you must decide whether to drop rows/columns that are all NA or that contain any NA.
# By default, dropna drops any row containing a missing value:
data = pd.DataFrame([[1., 6.5, 3], [1., NA, NA],
[NA, NA, NA], [NA, 6.5, 3.]])
data
data.dropna()
data.dropna(how='all') # by default axis=0, so a row is dropped only when it is entirely NA
data.dropna(how='all', axis=1) # a column is dropped only when it is entirely NA
newdata = data.fillna(2.0)
newdata
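# A couple of fillna variations (sketch, not from the original text): a dict fills
# each column with its own value, and method='ffill' propagates the last valid
# observation forward.
data.fillna({0: 0.0, 1: 0.5, 2: -1.0})
data.fillna(method='ffill')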
import pandas as pd
import numpy as np
data = pd.Series(np.random.randn(10),
index = [['a', 'a',
'a', 'b',
'b', 'b',
'c', 'c',
'd', 'd'],
[1, 2, 3, 1, 2, 3, 1, 2, 2, 3]])
data
data['b']
data['b'][1]
data['b', 1]
data.unstack()
data.unstack().stack()
frame = pd.DataFrame(np.arange(12).reshape(4,3),
index=[['a', 'a',
'b', 'b'], [1, 2, 1, 2]],
columns=[['Ohio', 'Ohio', 'Colorado'],
['Green', 'Red', 'Green']])
frame # the axis labels are printed below
frame.index.names = ['key1', 'key2']
frame.columns.names = ['state', 'color']
frame
frame['Ohio']
frame.swaplevel('key1', 'key2')
frame.swaplevel('key1', 'key2').sort_index(level='key2')
frame.sum(level='color', axis=1)
frame.sum(level='key2', axis=0)
frame = pd.DataFrame({'a': range(7), 'b': range(7, 0, -1),
'c': ['one', 'one', 'one', 'two', 'two', 'two', 'two'],
'd': [0, 1, 2, 0, 1, 2, 3]})
frame_back = frame.copy()
frame_back
frame2 = frame.set_index(['c'])
frame3 = frame.set_index(['c', 'd'])
frame2
frame3
frame_back.set_index(['c'], inplace=True)
frame_back
frame.set_index(['c'], drop=False)
frame2.reset_index()
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
ts = pd.Series(np.random.randn(1000), index = pd.date_range(
'1/1/2000', periods = 1000))
ts = ts.cumsum()
ts.plot()
df = pd.DataFrame(np.random.randn(1000, 4),
index = ts.index, columns = list('ABCD'))
df = df.cumsum()
plt.figure(); df.plot();
df3 = pd.DataFrame(np.random.randn(1000,2), columns
= ['B', 'C']).cumsum()
df3['A'] = pd.Series(list(range(len(df))))
#pass logy to get a log-scale y-axis
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = np.exp(ts.cumsum())
ts.plot(logy=True)
df.A.plot()
df.B.plot(secondary_y=True, style='g')
#To plot some columns in a DataFrame, give the column names to the secondary_y keyword:
plt.figure()
ax = df.plot(secondary_y=['A', 'B'])
ax.set_ylabel('CD scale')
ax.right_ax.set_ylabel('AB scale')
plt.figure()
# plt.show()
df.plot(secondary_y=['A', 'B'], mark_right=False) #set false to turn off the automatic marking
import pandas as pd
#check the version of the pandas you are using
pd.__version__
print('the version of pandas in my computer is', pd.__version__)
pd.read_csv('./chapter05/data/gapminder.tsv', delimiter='\t').head(10)
pd.read_csv('./chapter05/data/gapminder.tsv', delimiter='\t').head() # with no argument, head defaults to 5, so only the first five rows are printed
#let us specify df for the imported file
df = pd.read_csv("./chapter05/data/gapminder.tsv", delimiter='\t')
df
df.head(10)
type(df)
df.shape
df.shape() #you will see an error because as you see above, df.shape is a tuple
df.info()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Python standard function dir(obj) gets all member names of an object. Lest's see what are in the FORTH kernel vm
Step2: I only want you to see that there are very few properties and methods in this FORTH kernel object and many of them are conventional FORTH tokens like code, endcode, comma, compiling, dictionary, here, last, stack, pop, push, tos, rpop, rstack, rtos, tib, ntib, tick, and words.
Step3: vm.dictate() method is the way project-k VM receives your commands (a string). It actually is also the way we feed it an entire FORTH source code file. Everything given to vm.dictate() is like a command line you type to the FORTH system as simple as only a number
Step4: The first line above dictates project-k VM to push 123 onto the data stack and the second line views the data stack. We can even cascade these two lines into one
Step5: because vm.dictate() returns the vm object itself.
Step6: Do you know what we have just done? We defined a new FORTH code word! By the way, we can use any character in a word name except white space. This is a FORTH convention.
Step7: In the above definition, vm.words is a Python dictionary (not a FORTH dictionary) defined as a property of the project-k VM object; it is something like an array of all recent words in the current vocabulary named forth, which is the only vocabulary that comes with the FORTH kernel. A FORTH 'vocabulary' is simply a key in that Python dictionary
Step8: This example demonstrates how to use built-in methods push(), pop(), nexttoken() and the stack property (or global variable). As shown in above definitions, we can omit vm. so vm.push, vm.stack are simplified to push, stack because code ... end-code definitions are right in the VM name space. Now let's try these new words
Step9: The + command can certainly concatenate strings together and also can add numbers because Python's + operator works that way. Please try it with integers and floating point numbers
|
<ASSISTANT_TASK:>
Python Code:
# In case you are not familiar with Jupyter Notebook, click here and press Ctrl+Enter to run this cell.
import projectk as vm
vm
print(dir(vm))
vm.stack
vm.dictate("123")
vm.stack
vm.dictate("456").stack
vm.dictate("code hi! print('Hello World!!') end-code"); # define the "hi!" comamnd where print() is a standard python function
vm.dictate("hi!");
vm.dictate("code words print([w.name for w in vm.words['forth'][1:]]) end-code")
vm.dictate("words");
vm.dictate("code + push(pop(1)+pop()) end-code"); # pop two operands from FORTH data stack and push back the result
vm.dictate("code .s print(stack) end-code"); # print the FORTH data stack
vm.dictate('code s" push(nexttoken(\'"\'));nexttoken() end-code'); # get a string
vm.dictate('words'); # list all recent words
vm.stack = [] # clear the data stack
vm.dictate(' s" Forth "') # get the string 'Forth '
vm.dictate(' s" is the easist "') # get the string 'is the easist '
vm.dictate(' s" programming langage."') # get the string 'programing language.'
vm.dictate('.s'); # view the data stack
print(vm.dictate('+').stack) # concatenate top two strings
print(vm.dictate('+').stack) # concatenate the reset
print(vm.dictate('123 456 + ').pop()); # Push 123, push 456, add them
print(vm.dictate('1.23 45.6 + ').pop());
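# Following the same pattern as the '+' word above (a sketch, not from the original
# text), we can define a '*' word with push/pop and use it the same way.
vm.dictate("code * push(pop(1)*pop()) end-code");
print(vm.dictate('6 7 *').pop());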
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load USGS data into a dataframe
|
<ASSISTANT_TASK:>
Python Code:
# Import the libraries that we'll be using
import numpy as np
import pandas as pd
import hydropy as hp
# Set the notebook to plot graphs in the output cells.
%matplotlib inline
# Use HydroCloud.org to find a stream gauge to investigate.
# Click on the red points to find the site number.
from IPython.display import HTML
HTML('<iframe src=https://hydrocloud.org/ width=700 height=400></iframe>')
# Create a Pandas dataframe using the USGS daily discharge for Herring Run.
herring = hp.get_usgs('01585200', 'dv', '2011-01-01', '2016-01-01')
# List the first few values from the top of the dataframe.
herring.head()
# Calculate some basic statistics for the dataframe.
herring.describe()
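# A further (illustrative) step using plain pandas on the returned dataframe:
# aggregate the daily values to monthly means, assuming the index is a DatetimeIndex
# as hp.get_usgs normally provides.
monthly_means = herring.resample('M').mean()
monthly_means.head()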
# For more advanced analysis, use the HydroAnalysis class.
my_analysis = hp.HydroAnalysis(herring)
# Plot discharge on a logarithmic scale for the Y axis.
my_analysis.plot(figsize=(16,6), logy=True)
## Finding Help
# Use help() to learn more about a particular function.
help(hp.get_usgs)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <br>
Step2: <br>
Step3: <br>
Step4: BEGIN Solution
Step5: <br>
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import torch
%matplotlib inline
import matplotlib.pyplot as plt
import numdifftools as nd
from scipy.optimize import minimize
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
### BEGIN Solution
def p_star_w(w):
x = np.array([2/3, 1/6, 1/6], dtype=np.float64)
E = np.array([[1, -0.25, 0.75], [-0.25, 1, 0.5], [0.75, 0.5, 2]], dtype=np.float64)
left_part = 1/(1+np.exp(- w.T @ x))
right_part = multivariate_normal(mean=[0,0,0], cov=E).pdf(w)
return left_part * right_part
def log_star_w(w):
return -np.log(p_star_w(w))
w_0 = minimize(log_star_w, np.array([1,2,1], dtype=np.float64)).x
Hessian = nd.Hessian(log_star_w)
A = Hessian(w_0)
Z_p = p_star_w(w_0) * np.sqrt((2*np.pi)**3/np.linalg.det(A))
print("The value of intergral:", Z_p)
### END Solution
import torch
from torch.autograd import Variable, grad
### BEGIN Solution
def pt_p_star_w(w):
x = np.array([2/3, 1/6, 1/6], dtype=np.float64)
E = np.array([[1, -0.25, 0.75], [-0.25, 1, 0.5], [0.75, 0.5, 2]], dtype=np.float64)
left_part = torch.sigmoid(torch.dot(w, Variable(torch.from_numpy(x).type(torch.FloatTensor))))
right_part = 1 / (( 2 * np.pi )**(3/2) * np.linalg.det(E)**(1/2)) *\
torch.exp(-0.5 * w @ Variable(torch.from_numpy(np.linalg.inv(E)).type(torch.FloatTensor))@w)
return left_part * right_part
def pt_log_star_w(w):
return -torch.log(pt_p_star_w(w))
def hessian_diag(func, w):
w = Variable(torch.FloatTensor(w), requires_grad=True)
grad_params = torch.autograd.grad(func(w), w, create_graph=True)
hessian = [torch.autograd.grad(grad_params[0][i], w, create_graph=True)[0].data.numpy() \
for i in range(3)]
return np.diagonal(hessian)*np.eye(3)
A = hessian_diag(pt_log_star_w, w_0)
pt_Z_p = (np.sqrt((2*np.pi)**3 / np.linalg.det(A)) *\
pt_p_star_w(Variable(torch.from_numpy(w_0).type(torch.FloatTensor)))).data.numpy()
print('Integral value is', pt_Z_p)
### END Solution
from scipy.integrate import tplquad
### BEGIN Solution
def p_star_w_adapter(x, y, z):
return p_star_w(np.array([x,y,z]))
acc_Z_p = tplquad(p_star_w_adapter, -10, 10, -10, 10, -10, 10)
print("Laplace method: %.05f" % abs(acc_Z_p[0] - Z_p))
print("Diag. Hessian Approx: %.05f" % abs(acc_Z_p[0] - pt_Z_p))
### END Solution
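# Extra sanity check (a sketch, not part of the original solution): since
# p*(w) = sigmoid(w.x) * N(w; 0, E), the normalising constant equals
# E_{w ~ N(0, E)}[sigmoid(w.x)], which a plain Monte Carlo average estimates.
x_mc = np.array([2/3, 1/6, 1/6])
E_mc = np.array([[1, -0.25, 0.75], [-0.25, 1, 0.5], [0.75, 0.5, 2]])
w_mc = np.random.multivariate_normal(np.zeros(3), E_mc, size=200000)
print("Monte Carlo estimate:", np.mean(1.0 / (1.0 + np.exp(-w_mc @ x_mc))))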
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
### BEGIN Solution
df = pd.read_csv('data/monthly_co2_mlo.csv')
df = df.replace(-99.99, np.nan).dropna()
df.head(10)
y = df['CO2 [ppm]']
X = df.drop(['CO2 [ppm]'], axis=1)
X['year'] -= 1958
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, shuffle=False, test_size=0.25)
X.head(10)
### END Solution
scaler = StandardScaler()
y_test_min = np.min(y_train.values)
y_test_abs = np.max(y_train.values) - np.min(y_train.values)
y_train_scaled = scaler.fit_transform(y_train.values.reshape(-1, 1))
y_test_scaled = scaler.transform(y_test.values.reshape(-1, 1))
plt.figure(figsize=(14, 5))
plt.plot(X_train['year'], y_train_scaled)
plt.plot(X_test['year'], y_test_scaled)
plt.axvline(x=0.75 * np.max([np.max(X_train['year'].values), np.max(X_test['year'].values)]), c='black', ls='-')
plt.grid()
plt.ylabel(r'${CO}_2$', size=18)
plt.xlabel('Train and test split', size=18)
plt.show()
from GPy.models import GPRegression
from GPy.kern import RBF, Poly, StdPeriodic, White, Linear
from sklearn.metrics import r2_score
### BEGIN Solution
kernels = RBF(input_dim=1, variance=1., lengthscale=10.) + \
Poly(input_dim=1) + \
StdPeriodic(input_dim=1) + \
White(input_dim=1) + \
Linear(input_dim=1)
gpr = GPRegression(X_train['year'].values.reshape(-1, 1), y_train_scaled, kernels)
gpr.plot(figsize=(13,4))
plt.show()
### END Solution
predicted = gpr.predict(X_test['year'].values.reshape(-1, 1))
plt.figure(figsize=(13,4))
plt.plot(scaler.inverse_transform(y_test_scaled), scaler.inverse_transform(y_test_scaled), label='x = y', c='r')
plt.scatter(scaler.inverse_transform(predicted[0]), scaler.inverse_transform(y_test_scaled), label="")
plt.title("QQ - plot", size=16)
plt.xlabel("True value", size=16)
plt.ylabel("Predicted values", size=16)
plt.legend()
plt.show()
r2_score(y_test_scaled, predicted[0])  # r2_score expects (y_true, y_pred)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So how does this work behind the scenes?
|
<ASSISTANT_TASK:>
Python Code:
from pubchempy import get_compounds
for compound in get_compounds('glucose', 'name'):
print(compound.cid)
print(compound.isomeric_smiles)
from pubchempy import Compound
vioxx = Compound.from_cid(5090)
print(vioxx.molecular_formula)
print(vioxx.molecular_weight)
print(vioxx.xlogp)
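# A couple of other Compound attributes that pubchempy exposes (illustrative sketch):
print(vioxx.iupac_name)
print(vioxx.synonyms[:5])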
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h1>Audio Applications</h1>
Step2: <h1>Biphonation Overview</h1>
Step3: The code below will extract a subsection of the signal and perform a sliding window embedding + 1D persistent homology. Using the interactive plot of the audio waveform above, find two different time ranges to plot
Step4: <h1>Music Analysis</h1>
Step5: <BR><BR>
Step6: Unfortunately, the sample rate is just too high and the signal is just too messy for this algorithm to work. We will have to do some more sophisticated preprocessing before applying the algorithm
Step7: You might notice that there are vertical streaks in a semi-periodic pattern. These correspond to "broadband percussive events," or, on other words, likely onsets for beats when drums occur. An audio novelty function is derived from a spectrogram by looking at the difference between successive frames to try to pick up on this. The code below extracts the audio novelty function and displays it for the same audio snippet.
Step8: Not only is the audio novelty function a cleaner signal, but it is also at a much lower sample rate. Since the "hop size" between each spectrogram window is 256 samples, the temporal resolution is coarser by that factor.
Step9: <h2>Speech Example</h2>
|
<ASSISTANT_TASK:>
Python Code:
##Do all of the imports and setup inline plotting
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from scipy.interpolate import InterpolatedUnivariateSpline
from ripser import ripser
from persim import plot_diagrams
import scipy.io.wavfile
from IPython.display import clear_output
def getSlidingWindow(x, dim, Tau, dT):
    """
    Return a sliding window of a time series,
    using arbitrary sampling. Use linear interpolation
    to fill in values in windows not on the original grid

    Parameters
    ----------
    x: ndarray(N)
        The original time series
    dim: int
        Dimension of sliding window (number of lags+1)
    Tau: float
        Length between lags, in units of time series
    dT: float
        Length between windows, in units of time series

    Returns
    -------
    X: ndarray(N, dim)
        All sliding windows stacked up
    """
N = len(x)
NWindows = int(np.floor((N-dim*Tau)/dT))
if NWindows <= 0:
print("Error: Tau too large for signal extent")
return np.zeros((3, dim))
X = np.zeros((NWindows, dim))
spl = InterpolatedUnivariateSpline(np.arange(N), x)
for i in range(NWindows):
idxx = dT*i + Tau*np.arange(dim)
start = int(np.floor(idxx[0]))
end = int(np.ceil(idxx[-1]))+2
# Only take windows that are within range
if end >= len(x):
X = X[0:i, :]
break
X[i, :] = spl(idxx)
return X
#Read in the audio file. Fs is the sample rate, and
#X is the audio signal
Fs, X = scipy.io.wavfile.read("horsewhinnie.wav")
plt.figure()
plt.plot(np.arange(len(X))/float(Fs), X)
plt.xlabel("Time (Seconds)")
plt.title("Horse Whinnie Waveform")
plt.show()
from IPython.display import Audio
# load a remote WAV file
Audio('horsewhinnie.wav')
#These variables are used to adjust the window size
F0 = 493 #First fundamental frequency
G0 = 1433 #Second fundamental frequency
###TODO: Modify this variable (time in seconds)
time = 0.91
#Step 1: Extract an audio snippet starting at the chosen time
SigLen = 512 #The number of samples to take after the start time
iStart = int(round(time*Fs))
x = X[iStart:iStart + SigLen]
W = int(round(Fs/G0))
#Step 2: Get the sliding window embedding
Y = getSlidingWindow(x, W, 2, 2)
#Mean-center and normalize
Y = Y - np.mean(Y, 1)[:, None]
Y = Y/np.sqrt(np.sum(Y**2, 1))[:, None]
#Step 3: Do the 1D rips filtration
PDs = ripser(Y, maxdim=1)['dgms']
PD = PDs[1]
#Step 4: Figure out the second largest persistence
sP = 0
sPIdx = 0
if PD.shape[0] > 1:
Pers = PD[:, 1] - PD[:, 0]
sPIdx = np.argsort(-Pers)[1]
sP = Pers[sPIdx]
#Step 5: Plot the results
plt.figure(figsize=(8, 4))
plt.subplot(121)
plt.title("Starting At %g Seconds"%time)
plt.plot(time + np.arange(SigLen)/Fs, x)
plt.xlabel("Time")
plt.subplot(122)
plot_diagrams(PDs)
plt.plot([PD[sPIdx, 0]]*2, PD[sPIdx, :], 'r')
plt.scatter(PD[sPIdx, 0], PD[sPIdx, 1], 20, 'r')
plt.title("Second Largest Persistence: %g"%sP)
Fs, X = scipy.io.wavfile.read("journey.wav") #Don't Stop Believing
X = X/(2.0**15) #Loaded in as 16 bit shorts, convert to float
plt.figure()
plt.plot(np.arange(len(X))/float(Fs), X)
plt.xlabel("Time (Seconds)")
plt.title("Don't Stop Believin")
plt.show()
Audio('journey.wav')
#Sliding window code here assumes integer x, dim, and Tau so no interpolation
#is needed (for computational efficiency)
def getSlidingWindowInteger(x, dim, Tau, dT):
N = len(x)
NWindows = int(np.floor((N-dim*Tau)/dT)) #The number of windows
if NWindows <= 0:
print("Error: Tau too large for signal extent")
return np.zeros((3, dim))
X = np.zeros((NWindows, dim)) #Create a 2D array which will store all windows
idx = np.arange(N)
for i in range(NWindows):
#Figure out the indices of the samples in this window
idxx = np.array(dT*i + Tau*np.arange(dim), dtype=np.int32)
X[i, :] = x[idxx]
return X
#Note that dim*Tau here spans a half a second of audio,
#since Fs is the sample rate
dim = round(Fs/200)
Tau = 100
dT = Fs/100
Y = getSlidingWindowInteger(X[0:Fs*3], dim, Tau, dT)
print("Y.shape = ", Y.shape)
#Mean-center and normalize
Y = Y - np.mean(Y, 1)[:, None]
Y = Y/np.sqrt(np.sum(Y**2, 1))[:, None]
PDs = ripser(Y, maxdim=1)['dgms']
pca = PCA()
Z = pca.fit_transform(Y)
plt.figure(figsize=(8, 4))
plt.subplot(121)
plt.title("2D PCA")
plt.scatter(Z[:, 0], Z[:, 1])
plt.subplot(122)
plot_diagrams(PDs)
plt.title("Persistence Diagram")
plt.show()
from MusicFeatures import *
#Compute the power spectrogram and audio novelty function
winSize = 512
hopSize = 256
plt.figure()
(S, novFn) = getAudioNoveltyFn(X, Fs, winSize, hopSize)
plt.imshow(np.log(S.T), cmap = 'afmhot', aspect = 'auto')
plt.title('Log-frequency power spectrogram')
plt.show()
plt.figure(figsize=(8, 4))
#Plot the spectrogram again
plt.subplot(211)
plt.imshow(np.log(S.T), cmap = 'afmhot', aspect = 'auto')
plt.ylabel('Frequency Bin')
plt.title('Log-frequency power spectrogram')
#Plot the audio novelty function
plt.subplot(212)
plt.plot(np.arange(len(novFn))*hopSize/float(Fs), novFn)
plt.xlabel("Time (Seconds)")
plt.ylabel('Audio Novelty')
plt.xlim([0, len(novFn)*float(hopSize)/Fs])
plt.show()
(S, novFn) = getAudioNoveltyFn(X, Fs, winSize, hopSize)
#Take the first 3 seconds of the novelty function
fac = int(Fs/hopSize)
novFn = novFn[fac*4:fac*7]
#Make sure the window size is half of a second, noting that
#the audio novelty function has been downsampled by a "hopSize" factor
dim = 20
Tau = (Fs/2)/(float(hopSize)*dim)
dT = 1
Y = getSlidingWindowInteger(novFn, dim, Tau, dT)
print("Y.shape = ", Y.shape)
#Mean-center and normalize
Y = Y - np.mean(Y, 1)[:, None]
Y = Y/np.sqrt(np.sum(Y**2, 1))[:, None]
PDs = ripser(Y, maxdim=1)['dgms']
pca = PCA()
Z = pca.fit_transform(Y)
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.title("2D PCA")
plt.scatter(Z[:, 0], Z[:, 1])
plt.subplot(122)
plot_diagrams(PDs)
plt.title("Persistence Diagram")
plt.show()
#Read in the audio file. Fs is the sample rate, and
#X is the audio signal
Fs, X = scipy.io.wavfile.read("speech.wav")
X = X/(2.0**15)
(S, novFn) = getAudioNoveltyFn(X, Fs, winSize, hopSize)
plt.figure()
plt.plot(np.arange(len(novFn))*hopSize/float(Fs), novFn)
plt.xlabel("Time (Seconds)")
plt.title("Audio Novelty Function for Speech")
Audio('speech.wav')
plt.show()
#Get the novelty function for the first three seconds, and use the
#exact same parameters as before
novFn = novFn[0:int((Fs/hopSize)*3)]
dim = 20
Tau = (Fs/2)/(float(hopSize)*dim)
dT = 1
Y = getSlidingWindowInteger(novFn, dim, Tau, dT)
print("Y.shape = ", Y.shape)
#Mean-center and normalize
Y = Y - np.mean(Y, 1)[:, None]
Y = Y/np.sqrt(np.sum(Y**2, 1))[:, None]
PDs = ripser(Y, maxdim=1)['dgms']
pca = PCA()
Z = pca.fit_transform(Y)
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.title("2D PCA")
plt.scatter(Z[:, 0], Z[:, 1])
plt.subplot(122)
plot_diagrams(PDs[1], labels=['H1'])
plt.title("Persistence Diagram")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
Step9: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step11: Embedding
Step12: LSTM cell
Step13: RNN forward pass
Step14: Output
Step15: Validation accuracy
Step16: Batching
Step17: Training
Step18: Testing
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('reviews.txt', 'r') as f:
reviews = f.read()
with open('labels.txt', 'r') as f:
labels = f.read()
reviews[:1000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
# Filter out that review with 0 length
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
    features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
split_frac = 0.8
split_idx = int(len(features) * split_frac)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x) * 0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
n_words = len(vocab) + 1  # +1 to account for the 0 index used for padding
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
    inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
    labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs_)
with graph.as_default():
# Your basic LSTM cell
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    # Add dropout to the cell
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
    outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Passband Options
Step3: As you might expect, if you want to pass different values to different components, simply provide them in a dictionary.
Step4: Note here that we didn't explicitly override the defaults for '_default', so they used the phoebe-wide defaults. If you wanted to set a value for the ld_coeffs of any star added in the future, you would have to provide a value for '_default' in the dictionary as well.
Step5: This syntax may seem a bit bulky - but alternatively you can add the dataset without providing values and then change the values individually using dictionary access or set_value.
Step6: Enabling and Disabling Datasets
Step7: All of these by default use the period in the top-level of the current hierarchy,
Step8: or
Step9: Although it isn't possible to attach data in phase-space, it is possible to tell PHOEBE at which phases to compute the model by setting compute_phases. Note that this overrides the value of times when the model is computed.
Step10: The usage of compute_phases (as well as compute_times) will be discussed in further detail in the compute tutorial and the advanced
Step11: Removing Datasets
Step12: The simplest way to remove a dataset is by its dataset tag
Step13: But remove_dataset also takes any other tag(s) that could be sent to filter.
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc',
times=[0,1],
dataset='lc01',
overwrite=True)
print(b.get_parameter(qualifier='times', dataset='lc01'))
print(b.filter(qualifier='ld_mode', dataset='lc01'))
b.add_dataset('lc',
times=[0,1],
ld_mode='manual',
ld_func={'primary': 'logarithmic', 'secondary': 'quadratic'},
dataset='lc01',
overwrite=True)
print(b.filter(qualifier='ld_func', dataset='lc01'))
print(b.filter('ld_func@lc01', check_default=False))
times, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True)
b.add_dataset('lc',
times=times,
fluxes=fluxes,
sigmas=sigmas,
dataset='lc01',
overwrite=True)
print(b.get_ephemeris())
print(b.to_phase(0.0))
print(b.to_time(-0.25))
print(b.to_phase(b.get_value(qualifier='times')))
print(b.to_phase('times@lc01'))
b.add_dataset('lc',
compute_phases=np.linspace(0,1,11),
dataset='lc01',
overwrite=True)
b.add_dataset('lc',
times=[0],
dataset='lc01',
overwrite=True)
print(b['compute_phases@lc01'])
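# Since compute_phases is tied to compute_times by a constraint, the derived times can be
# inspected as well (a quick check, assuming the dataset added above):
print(b.filter(qualifier='compute_times', dataset='lc01'))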
b.flip_constraint('compute_phases', dataset='lc01', solve_for='compute_times')
b.set_value('compute_phases', dataset='lc01', value=np.linspace(0,1,101))
print(b.datasets)
b.remove_dataset('lc01')
print(b.datasets)
b.remove_dataset(kind='rv')
print(b.datasets)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Simple addition on the GPU
Step3: Plot the Sigmoid function
Step4: Timing Numpy vs. PyCUDA ...
|
<ASSISTANT_TASK:>
Python Code:
# !pip install pycuda
%reset -f
import pycuda
from pycuda import compiler
import pycuda.driver as cuda
import numpy
import numpy as np
from pycuda.compiler import SourceModule
cuda.init()
print("%d device(s) found." % cuda.Device.count())
for ordinal in range(cuda.Device.count()):
dev = cuda.Device(ordinal)
print "Device #%d: %s" % (ordinal, dev.name())
print cuda
! watch --color -n1.0 gpustat
import pycuda.autoinit
# a = np.random.uniform(low=1, high=20, size=(10,))
a = numpy.arange(-100000, 100000, 1)
a = a.astype(numpy.float32)
ARR_SIZE = numpy.int32(a.shape[-1])
print(ARR_SIZE)
a_gpu = cuda.mem_alloc(a.nbytes)
xout_gpu = cuda.mem_alloc(a.nbytes)
cuda.memcpy_htod(a_gpu, a)
xout_gpu=cuda.mem_alloc_like(a)
# size_gpu=cuda.mem_alloc_like(size)
mod = SourceModule(
__global__ void sigmoid(float* a, float* b, int size)
{
int index = blockDim.x * blockIdx.x + threadIdx.x;
if (index < size)
b[index] = 1.0f / (1.0f + exp(-1.0f * a[index]));
}
)
func = mod.get_function("sigmoid")
def sigmoidGPU():
    block_size = 1024
    grid_size = int(np.ceil(a.shape[0] / float(block_size)))
    func(a_gpu, xout_gpu, ARR_SIZE, block=(block_size, 1, 1), grid=(grid_size, 1, 1))
a_sigmoid = numpy.empty_like(a)
cuda.memcpy_dtoh(a_sigmoid, xout_gpu)
return a_sigmoid
# print sigmoidGPU()
from scipy.special import expit
y = expit(a)
# print ("__________________________________")
# print y
import matplotlib.pyplot as plt
plt.plot(a,y)
plt.text(4,0.8,r'$\sigma(x)=\frac{1}{1+e^{-x}}$',fontsize=15)
plt.legend(loc='lower right')
plt.show()
import timeit
n_iter = ARR_SIZE
rounds = 1000 # for timeit
print('numpy', timeit.timeit(lambda: expit(a), number=rounds))
print('pycuda', timeit.timeit(lambda: sigmoidGPU(), number=rounds))
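# Quick correctness check (a sketch, assuming a, sigmoidGPU and expit from the cells above):
gpu_result = sigmoidGPU()
print('max abs difference vs scipy expit:', np.max(np.abs(gpu_result - expit(a))))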
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem statement
Step2: We check that the function does raise an exception when the two strings have different lengths.
Step3: Q2
Step4: Q3
Step5: Q4
Step6: The second version is more convoluted: it is a recursive variant that slightly modifies the contrived version above.
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
def distance_meme_longueur(m1, m2):
if len(m1) != len(m2):
raise ValueError("m1 et m2 sont de longueurs différentes")
d = 0
for c1, c2 in zip(m1, m2):
if c1 != c2:
d += 1
return d
distance_meme_longueur('abcef', 'abcde')
try:
distance_meme_longueur('a', 'bb')
except Exception as e:
print(e)
def distance(m1, m2):
if len(m1) < len(m2):
return distance(m2, m1)
if len(m1) == len(m2):
return distance_meme_longueur(m1, m2)
d = len(m1) - len(m2)
mind = [distance_meme_longueur(m1[i:i+len(m2)], m2)
for i in range(0, d)]
return d + min(mind)
distance('aa', 'aa'), distance('aa', 'aaa'), distance('aa', 'bbb')
def distance_alambiquee(m1, m2):
mini = None
for i in range(len(m1)):
for j in range(len(m2)):
d = distance(m1[:i], m2[:j]) + distance(m1[i:], m2[j:])
if mini is None or d < mini:
mini = d
        # "verlan" option: also try the swapped split, with a 0.5 penalty
d = distance(m1[:i], m2[j:]) + distance(m1[i:], m2[:j]) + 0.5
if d < mini:
mini = d
return mini
(distance('abc', 'ac'),
distance_alambiquee('abc', 'ac'),
distance_alambiquee('abc', 'ca'),
distance_alambiquee('b', 'b'))
def levenstein(m1, m2):
d = {}
d[0,0] = 0
for i in range(len(m1) + 1):
d[i, 0] = i
for j in range(len(m2) + 1):
d[0, j] = j
for i in range(1, len(m1) + 1):
for j in range(1, len(m2) + 1):
d[i, j] = min(d[i-1,j] +1, d[i,j-1] +1,
d[i-1, j-1] + (1 if m1[i-1] != m2[j-1] else 0))
return d[len(m1), len(m2)]
levenstein('abc', 'ac')
def distance_alambiquee_levenstein(m1, m2):
mini = None
for i in range(len(m1)):
for j in range(len(m2)):
if i > 0 and i < len(m1) - 1 and j > 0 and j < len(m2) - 1:
d1 = distance_alambiquee_levenstein(m1[:i], m2[:j])
d2 = distance_alambiquee_levenstein(m1[i:], m2[j:])
else:
d1 = distance(m1[:i], m2[:j])
d2 = distance(m1[i:], m2[j:])
d = d1 + d2
if mini is None or d < mini:
mini = d
return mini
(distance_alambiquee('abcde', 'ace'),
levenstein('abcde', 'ace'),
distance_alambiquee_levenstein('abcde', 'ace'))
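# A small side-by-side comparison of the three approaches on a few word pairs
# (assumes the functions defined above are still in scope):
for w1, w2 in [("python", "pythons"), ("chien", "niche"), ("kitten", "sitting")]:
    print(w1, w2,
          distance(w1, w2),
          distance_alambiquee(w1, w2),
          levenstein(w1, w2))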
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Input
Step2: Vectorizer
Step3: Models
Step4: Pipelines
Step5: Gridsearch
Step6: Training
Step7: Evaluation
Step8: Visual Inspection
Step9: Conclusion
Step10: Conclusion
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import load_files
corpus = load_files("../data/")
doc_count = len(corpus.data)
print("Doc count:", doc_count)
assert doc_count == 56, "Wrong number of documents loaded, should be 56 (56 stories)"
from helpers.tokenizer import TextWrangler
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
bow_stem = CountVectorizer(strip_accents="ascii", tokenizer=TextWrangler(kind="stem"))
X_bow_stem = bow_stem.fit_transform(corpus.data)
tfidf_stem = TfidfVectorizer(strip_accents="ascii", tokenizer=TextWrangler(kind="stem"))
X_tfidf_stem = tfidf_stem.fit_transform(corpus.data)
from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD, NMF
n_topics = 5
lda = LatentDirichletAllocation(n_components=n_topics,
learning_decay=0.5, learning_offset=1.,
random_state=23)
lsa = TruncatedSVD(n_components=n_topics, random_state=23)
nmf = NMF(n_components=n_topics, solver="mu", beta_loss="kullback-leibler", alpha=0.1, random_state=23)
lda_params = {"lda__learning_decay": [0.5, 0.7, 0.9],
"lda__learning_offset": [1., 5., 10.]}
from sklearn.pipeline import Pipeline
lda_pipe = Pipeline([
("bow", bow_stem),
("lda", lda)
])
lsa_pipe = Pipeline([
("tfidf", tfidf_stem),
("lsa", lsa)
])
nmf_pipe = Pipeline([
("tfidf", tfidf_stem),
("nmf", nmf)
])
from sklearn.model_selection import GridSearchCV
lda_model = GridSearchCV(lda_pipe, param_grid=lda_params, cv=5, n_jobs=-1)
#lda_model.fit(corpus.data)
#lda_model.best_params_
lda_pipe.fit(corpus.data)
nmf_pipe.fit(corpus.data)
lsa_pipe.fit(corpus.data)
print("LDA")
print("Log Likelihood:", lda_pipe.score(corpus.data))
def df_topic_model(vectorizer, model, n_words=20):
keywords = np.array(vectorizer.get_feature_names())
topic_keywords = []
for topic_weights in model.components_:
top_keyword_locs = (-topic_weights).argsort()[:n_words]
topic_keywords.append(keywords.take(top_keyword_locs))
df_topic_keywords = pd.DataFrame(topic_keywords)
df_topic_keywords.columns = ['Word '+str(i) for i in range(df_topic_keywords.shape[1])]
df_topic_keywords.index = ['Topic '+str(i) for i in range(df_topic_keywords.shape[0])]
return df_topic_keywords
print("LDA")
df_topic_model(vectorizer=bow_stem, model=lda_pipe.named_steps.lda, n_words=15)
print("LSA")
df_topic_model(vectorizer=tfidf_stem, model=lsa_pipe.named_steps.lsa, n_words=15)
print("NMF")
df_topic_model(vectorizer=tfidf_stem, model=nmf_pipe.named_steps.nmf, n_words=15)
import pyLDAvis
from pyLDAvis.sklearn import prepare
pyLDAvis.enable_notebook()
prepare(lda_pipe.named_steps.lda, X_bow_stem, bow_stem, mds="tsne")
prepare(nmf_pipe.named_steps.nmf, X_tfidf_stem, tfidf_stem, mds="tsne")
df_topic_word_lda = df_topic_model(vectorizer=bow_stem, model=lda_pipe.named_steps.lda, n_words=10)
df_topic_word_lsa = df_topic_model(vectorizer=tfidf_stem, model=lsa_pipe.named_steps.lsa, n_words=10)
df_topic_word_nmf = df_topic_model(vectorizer=tfidf_stem, model=nmf_pipe.named_steps.nmf, n_words=10)
def jaccard_index(list1, list2):
s1 = set(list1)
s2 = set(list2)
jaccard_index = len(s1.intersection(s2)) / len(s1.union(s2))
return jaccard_index
sims_lda_lsa, sims_lda_nmf, sims_lsa_nmf = {}, {}, {}
assert df_topic_word_lda.shape[0] == df_topic_word_lsa.shape[0] == df_topic_word_nmf.shape[0], "n_topics mismatch"
for ix, row in df_topic_word_lda.iterrows():
l1 = df_topic_word_lda.loc[ix, :].values.tolist()
l2 = df_topic_word_lsa.loc[ix, :].values.tolist()
l3 = df_topic_word_nmf.loc[ix, :].values.tolist()
sims_lda_lsa[ix] = jaccard_index(l1, l2)
sims_lda_nmf[ix] = jaccard_index(l1, l3)
sims_lsa_nmf[ix] = jaccard_index(l2, l3)
df_jaccard_sims = pd.DataFrame([sims_lda_lsa, sims_lda_nmf, sims_lsa_nmf])
df_jaccard_sims.index = ["LDA vs LSA", "LDA vs NMF", "LSA vs NMF"]
df_jaccard_sims["mean_sim"] = df_jaccard_sims.mean(axis=1)
df_jaccard_sims
nmf_topic_distr = nmf_pipe.transform(corpus.data)
collections_map = {0: "His Last Bow", 1: "The Adventures of Sherlock Holmes",
2: "The Case-Book of Sherlock_Holmes", 3: "The Memoirs of Sherlock Holmes",
4: "The Return of Sherlock Holmes"}
# Titles created from dominant words in topics
novel_collections_map = {0: "The Whispering Ways Sherlock Holmes Waits to Act on Waste",
1: "Vengeful Wednesdays: Unexpected Incidences on the Tapering Train by Sherlock Holmes",
2: "A Private Journey of Sherlock Holmes: Thirteen Unfolded Veins on the Move",
3: "Sherlock Holmes Tumbling into the hanging arms of Scylla",
4: "The Shooking Jaw of Sherlock Holmes in the Villa of the Baronet"}
print("Novel Sherlock Holmes Short Stories Collections:")
for _,title in novel_collections_map.items():
print("*", title)
topics = ["Topic" + str(i) for i in range(n_topics)]
docs = [" ".join(f_name.split("/")[-1].split(".")[0].split("_"))
for f_name in corpus.filenames]
df_document_topic = pd.DataFrame(np.round(nmf_topic_distr, 3), columns=topics, index=docs)
df_document_topic["assigned_topic"] = np.argmax(df_document_topic.values, axis=1)
df_document_topic["orig_collection"] = [collections_map[item] for item in corpus.target]
df_document_topic["novel_collection"] = [novel_collections_map.get(item, item)
for item in df_document_topic.assigned_topic.values]
df_novel_assignment = df_document_topic.sort_values("assigned_topic").loc[:, ["orig_collection",
"novel_collection"]]
df_novel_assignment
from yellowbrick.text import TSNEVisualizer
tsne = TSNEVisualizer()
tsne.fit(X_tfidf_stem, df_document_topic.novel_collection)
tsne.poof()
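# An optional summary (sketch): how many stories land in each novel collection,
# based on the df_document_topic frame built above.
print(df_document_topic["novel_collection"].value_counts())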
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The use of response functions
Step2: Scaling of the step response functions
Step3: Parameter settings
Step4: Comparison to classical analytical response functions
Step5: Hantush step function compared to classic Hantush function
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt
ps.show_versions()
# Default Settings
cutoff = 0.999
meanstress = 1
up = True
responses = {}
exp = ps.Exponential(up=up, meanstress=meanstress, cutoff=cutoff)
responses["Exponential"] = exp
gamma = ps.Gamma(up=up, meanstress=meanstress, cutoff=cutoff)
responses["Gamma"] = gamma
hantush = ps.Hantush(up=up, meanstress=meanstress, cutoff=cutoff)
responses["Hantush"] = hantush
polder = ps.Polder(up=up, meanstress=meanstress, cutoff=cutoff)
responses["Polder"] = polder
fourp = ps.FourParam(up=up, meanstress=meanstress, cutoff=cutoff)
responses["FourParam"] = fourp
DoubleExp = ps.DoubleExponential(up=up, meanstress=meanstress, cutoff=cutoff)
responses["DoubleExponential"] = DoubleExp
parameters = pd.DataFrame()
fig, [ax1, ax2] = plt.subplots(1,2, sharex=True, figsize=(10,3))
for name, response in responses.items():
p = response.get_init_parameters(name)
parameters = parameters.append(p)
ax1.plot(response.block(p.initial), label=name)
ax2.plot(response.step(p.initial), label=name)
ax1.set_title("Block response")
ax2.set_title("Step responses")
ax1.set_xlabel("Time [days]")
ax2.set_xlabel("Time [days]")
ax1.legend()
plt.xlim(1e-1, 500)
plt.show()
A = 1
a = 50
b = 0.4
plt.figure(figsize=(16, 8))
for i, n in enumerate([-0.5, 1e-6, 0.5, 1, 1.5]):
plt.subplot(2, 3, i + 1)
plt.title(f'n={n:0.1f}')
fp = fourp.step([A, n, a, b], dt=1, cutoff=0.95)
plt.plot(np.arange(1, len(fp) + 1), fp, 'C0', label='4-param')
e = exp.step([A, a], dt=1, cutoff=0.95)
plt.plot(np.arange(1, len(e) + 1), e, 'C1', label='exp')
if n > 0:
g = gamma.step([A, n, a], dt=1, cutoff=0.95)
plt.plot(np.arange(1, len(g) + 1), g, 'C2', label='gamma')
h = hantush.step([A, a, b], dt=1, cutoff=0.95) / hantush.gain([A, a, b])
plt.plot(np.arange(1, len(h) + 1), h, 'C3', label='hantush')
p = polder.step([A, a, b], dt=1, cutoff=0.95) / polder.gain([A, a, b])
plt.plot(np.arange(1, len(p) + 1), p, 'C4', label='polder')
plt.xlim(0, 200)
plt.legend()
if n > 0:
print('fp, e, g, h, p:', fp[-1], e[-1], g[-1], h[-1], p[-1])
else:
print('fp, e, h, p:', fp[-1], e[-1], h[-1], p[-1])
plt.axhline(0.95, linestyle=':')
parameters
from scipy.special import erfc
def polder_classic(t, x, T, S, c):
X = x / (2 * np.sqrt(T * c))
Y = np.sqrt(t / (c * S))
rv = 0.5 * np.exp(2 * X) * erfc(X / Y + Y) + \
0.5 * np.exp(-2 * X) * erfc(X / Y - Y)
return rv
delh = 2
T = 20
c = 5000
S = 0.01
x = 400
x / np.sqrt(c * T)
t = np.arange(1, 121)
h_polder_classic = np.zeros(len(t))
for i in range(len(t)):
h_polder_classic[i] = delh * polder_classic(t[i], x=x, T=T, S=S, c=c)
#
A = delh
a = c * S
b = x ** 2 / (4 * T * c)
pd = polder.step([A, a, b], dt=1, cutoff=0.95)
#
plt.plot(t, h_polder_classic, label='Polder classic')
plt.plot(np.arange(1, len(pd) + 1), pd, label='Polder Pastas', linestyle="--")
plt.legend()
from scipy.integrate import quad
def integrand_hantush(y, r, lab):
return np.exp(-y - r ** 2 / (4 * lab ** 2 * y)) / y
def hantush_classic(t=1, r=1, Q=1, T=100, S=1e-4, c=1000):
lab = np.sqrt(T * c)
u = r ** 2 * S / (4 * T * t)
F = quad(integrand_hantush, u, np.inf, args=(r, lab))[0]
return -Q / (4 * np.pi * T) * F
c = 1000 # d
S = 0.01 # -
T = 100 # m^2/d
r = 500 # m
Q = 20 # m^3/d
#
t = np.arange(1, 45)
h_hantush_classic = np.zeros(len(t))
for i in range(len(t)):
h_hantush_classic[i] = hantush_classic(t[i], r=r, Q=20, T=T, S=S, c=c)
#
a = c * S
b = r ** 2 / (4 * T * c)
ht = hantush.step([1, a, b], dt=1, cutoff=0.99) * (-Q / (2 * np.pi * T))
#
plt.plot(t, h_hantush_classic, label='Hantush classic')
plt.plot(np.arange(1, len(ht) + 1), ht, '--', label='Hantush Pastas')
plt.legend();
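# A quick numeric check (sketch): both curves start at t=1 with daily steps, so compare
# them on their overlapping range.
n = min(len(t), len(ht))
print("max abs difference:", np.max(np.abs(h_hantush_classic[:n] - ht[:n])))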
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Simulating a Drag Race
Step15: Running the Race
Step17: Let's Race!
Step18: Now, let's build a race with these cars.
Step19: The race is ready to be run. If we print the result, we'll see the winner!
Step20: Who won? Go back and change the distance to see if anything changes.
|
<ASSISTANT_TASK:>
Python Code:
class Car:
Our Car class
def __init__(self,
year,
make,
model,
top_speed,
acceleration
):
Car Constructor function
self.year = year
self.make = make
self.model = model
self.top_speed = top_speed
self.acceleration = acceleration
self.current_speed = 0
self.odometer = 0
self.velocity_history = []
def __str__(self):
return str(self.year) + " " + self.make + " " + self.model
def advance(self):
Speed up the car...if possible
Also, we need to increment the odometer
if self.current_speed < self.top_speed:
self.current_speed += self.acceleration
# The Governor
if self.current_speed > self.top_speed:
self.current_speed = self.top_speed
# Add speed to history
self.velocity_history.append(self.current_speed)
# Advance odometer
self.odometer += (self.current_speed / 60 ) / 60
class Race:
Our Race between two cars
def __init__(self, racers, distance):
Construct the race
self.racers = racers
self.distance = distance
self.elapsed_time = 0
def tick(self):
Advance the race by 1 unit of time
Return the current leader
self.elapsed_time += 1
for car in self.racers:
car.advance()
leader = self.racers[0]
for car in self.racers[1:]:
if car.odometer > leader.odometer:
leader = car
return leader
def run(self):
Run the race
# Switch variable for if there's a winner
finish_line = False
# Advance time until we have a winner
while not finish_line:
leader = self.tick()
# Check if the leader crossed the finish line
# If yes, return that Car
if leader.odometer >= self.distance:
return leader
demon = Car(2017,"Dodge","Demon",168,25.22)
tesla = Car(2017,"Tesla","P100D",155,26.37)
print(demon)
print(tesla)
race = Race([demon, tesla],0.25)
print(race.run())
# Don't forget to run this code first!
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
output_notebook()
# Create our figure
p = figure(plot_width=500, plot_height=500)
tesla_x = list(range(len(tesla.velocity_history)))
demon_x = list(range(len(demon.velocity_history)))
p.multi_line([tesla_x,demon_x],
[tesla.velocity_history,demon.velocity_history],
color=["blue","red"],
line_width=4,
)
p.title.text = "Tesla (Blue) vs. Demon (Red)"
show(p)
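# For fun (sketch): rerun the race with a third car, using fresh Car objects since the
# instances above already carry odometer/velocity state. The Hellcat figures are made up.
race2 = Race([Car(2017, "Dodge", "Demon", 168, 25.22),
              Car(2017, "Tesla", "P100D", 155, 26.37),
              Car(2016, "Dodge", "Hellcat", 199, 13.0)],
             0.25)
print(race2.run())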
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Read in the hanford.csv file
Step2: County
Step3: 3. Calculate the basic descriptive statistics on the data
Step4: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step5: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step6: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
Step7: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import pandas as pd
import matplotlib.pyplot as plt # package for doing plotting (necessary for adding the line)
import statsmodels.formula.api as smf # package we'll be using for linear regression
%matplotlib inline
df = pd.read_csv('../data/hanford.csv')
df
df.describe()
correlation = df.corr()
print(correlation)
df.plot(kind='scatter', x='Exposure', y='Mortality')
lm = smf.ols(formula="Mortality~Exposure",data=df).fit()
lm.params
intercept, height = lm.params
# Function using the built math.
def simplest_predictor(exposure, height, intercept):
height = float(height)
intercept = float(intercept)
exposure = float(exposure)
return height*exposure+intercept
# Input the data
exposure = input("Please enter the exposure: ")
print("The mortality rate for your exposure lies at", simplest_predictor(exposure,height,intercept), ".")
df.plot(kind="scatter",x="Exposure",y="Mortality")
plt.plot(df["Exposure"],height*df["Exposure"]+intercept,"-",color="darkgrey") #we create the best fit line from the values in the fit model
def predicting_mortality_rate(exposure):
    return intercept + height * float(exposure)
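# Prediction for the exposure index of 10 asked for in step 7, using the helper above:
print(predicting_mortality_rate(10))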
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check data api connection
Step3: Data API Search
Step5: Geometry helper
Step6: Make a geometry dict for coordinates in San Francisco
Step7: Try getting item ids
Step9: Getting Webtiles
Step10: Testing tile urls
Step11: Display a tile
Step13: Animate tiles over time
|
<ASSISTANT_TASK:>
Python Code:
import os
# Setup the API Key from the `PL_API_KEY` environment variable
PLANET_API_KEY = os.getenv('PL_API_KEY')
# If you're following along with this notebook, you can enter your API Key on the following line, and uncomment it:
# PLANET_API_KEY = 'YOUR_KEY_HERE'
assert PLANET_API_KEY, 'api key not set'
# Import helper modules
import json
import mercantile
import requests
# Setup Planet Data API base URL
API_URL = "https://api.planet.com/data/v1"
# Setup the session
session = requests.Session()
# Authenticate
session.auth = (PLANET_API_KEY, "")
# Make a GET request to the Planet Data API
resp = session.get(API_URL)
if not resp.ok:
print("Something is wrong:", resp.content)
from datetime import datetime
def get_item_ids(geometry, item_type='PSScene', start_date=None, end_date=None, limit=100):
Get Planet Data API item_id values for matching filters.
Args:
geometry: geojson geometry dict
item_type: item_type (see https://developers.planet.com/docs/api/items-assets/#item-types)
start_date: inclusive lower bound ISO 8601 datetime string (include items captured on or after this date)
        end_date: exclusive upper bound ISO 8601 datetime string (include items captured before this date)
limit: max number of ids to return
Returns:
item_ids: list of id strings
# Data API Geometry Filter
geometry_filter = {
"type": "GeometryFilter",
"field_name": "geometry",
"config": geometry
}
# use a default end_date of the current time
if not end_date:
end_date = datetime.utcnow().isoformat() + 'Z'
date_filter = {
"type": "DateRangeFilter", # Type of filter -> Date Range
"field_name": "acquired", # The field to filter on: "acquired" -> Date on which the "image was taken"
"config": {
"lt": end_date, # "lt" -> Less than
}
}
# start_date is optional
if start_date:
# greater than or equal to start date
date_filter["config"]["gte"] = start_date
# combine geometry and date filters with an AndFilter
and_filter = {
"type": "AndFilter",
"config": [geometry_filter, date_filter]
}
quick_url = "{}/quick-search".format(API_URL)
# Setup the request
filter_request = {
"item_types" : [item_type],
"filter" : and_filter
}
# get ids from search results
resp = session.post(quick_url, json=filter_request)
results = resp.json()
ids = [f['id'] for f in results['features']]
# follow pagination links until we hit the limit
while len(ids) < limit and results['_links'].get('next'):
        results = session.get(results['_links'].get('next')).json()
more_ids = [f['id'] for f in results['features']]
ids += more_ids
return ids[:limit]
def coords_to_geometry(lat, lon):
Given latitude and longitude floats, construct a geojson geometry dict
return {
"type": "Point",
"coordinates": [lon, lat]
}
geom = coords_to_geometry(37.77493, -122.41942)
print(geom)
get_item_ids(geom, start_date="2019-01-01T00:00:00.000Z", end_date="2019-10-01T00:00:00.000Z", limit=5)
def get_tile_urls(lat, lon, zoom=15, item_type='PSScene', start_date='2019-01-01T00:00:00.000Z', end_date='2019-10-01T00:00:00.000Z', limit=5):
Get webtile urls for given coordinates, zoom, and matching filters.
Args:
lat: latitude float
lon: longitude float
zoom: zoom level int (usually between 1 and 15)
item_type: item_type (see https://developers.planet.com/docs/api/items-assets/#item-types)
start_date: inclusive lower bound ISO 8601 datetime string (include items captured on or after this date)
end_date: exclusive lower bound ISO 8601 datetime string (include items captured before this date)
limit: max number of ids to return
Returns:
item_ids: list of id strings
geom = coords_to_geometry(lat, lon)
item_ids = get_item_ids(geom, item_type=item_type, start_date=start_date, end_date=end_date, limit=limit)
tile = mercantile.tile(lon, lat, zoom)
tile_url_template = 'https://tiles.planet.com/data/v1/{item_type}/{item_id}/{z}/{x}/{y}.png?api_key={api_key}'
return [tile_url_template.format(item_type=item_type, item_id=i, x=tile.x, y=tile.y, z=zoom, api_key=PLANET_API_KEY) for i in item_ids]
tile_urls = get_tile_urls(37.77493, -122.41942, limit=5)
for url in tile_urls:
print(url)
print()
from IPython.display import Image
resp = requests.get(tile_urls[0])
Image(resp.content)
%matplotlib inline
from IPython.display import HTML
import random
import time
def animate(urls, delay=1.0, loops=1):
Display an animated loop of images
Args:
urls: list of image url strings
delay: how long in seconds to display each image
loops: how many times to repeat the image sequence
disp_id = str(random.random())
display("placeholder", display_id=disp_id)
for loop in range(loops):
for frame_url in urls:
htmlDisplay = f'<img src="{frame_url}" class="mySlides">'
display(HTML(htmlDisplay), display_id=disp_id, update=True)
time.sleep(delay)
animate(tile_urls, delay=0.5, loops=3)
tile_urls = get_tile_urls(37.77493, -122.41942, limit=100)
animate(tile_urls, delay=1, loops=3)
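# A small extra (sketch): save the fetched tiles to disk so the animation can be rebuilt
# offline; the file names below are arbitrary.
for i, url in enumerate(tile_urls[:5]):
    resp = requests.get(url)
    if resp.ok:
        with open('tile_{:03d}.png'.format(i), 'wb') as f:
            f.write(resp.content)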
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize database
Step2: Generate data
Step3: Insert data
Step4: Inspect data
Step5: Does the data match?
Step6: Did the triggers work?
Step7: Were the vantage points inserted?
Step8: Terminate database
Step9: Reload database
Step10: ... once more, with feeling!
Step11: Does the data match?
Step12: Were the vantage points re-loaded?
Step13: Does the trigger still work?
Step14: Terminate database
Step15: Additional Feature
Step16: Is there any data?
Step17: What does it look like?
Step18: Can we run a vantage point search?
Step19: How do the stocks compare?
Step20: Can we run an iSAX tree similarity search?
Step21: How do the stocks compare?
Step22: How do the methods compare?
Step23: Show me another graph!
Step24: What is an iSAX tree, really?
Step25: Don't forget to close the server!
|
<ASSISTANT_TASK:>
Python Code:
# database parameters
ts_length = 100
data_dir = '../db_files'
db_name = 'default'
dir_path = data_dir + '/' + db_name + '/'
# clear file system for testing
if not os.path.exists(dir_path):
os.makedirs(dir_path)
filelist = [dir_path + f for f in os.listdir(dir_path)]
for f in filelist:
os.remove(f)
# load the server
server = subprocess.Popen(['python', '../go_server_persistent.py',
'--ts_length', str(ts_length), '--data_dir', data_dir, '--db_name', db_name])
time.sleep(5) # make sure it loads completely
# load the webserver
webserver = subprocess.Popen(['python', '../go_webserver.py'])
time.sleep(5) # make sure it loads completely
# initialize the web interface
web_interface = WebInterface()
# check that the database is empty
web_interface.select()
# generate sample time series
num_ts = 50
mus = np.random.uniform(low=0.0, high=1.0, size=num_ts)
sigs = np.random.uniform(low=0.05, high=0.4, size=num_ts)
jits = np.random.uniform(low=0.05, high=0.2, size=num_ts)
# initialize dictionaries for time series and their metadata
primary_keys = []
tsdict = {}
metadict = {}
# fill dictionaries with randomly generated entries for database
for i, m, s, j in zip(range(num_ts), mus, sigs, jits):
meta, tsrs = tsmaker(m, s, j) # generate data
pk = "ts-{}".format(i) # generate primary key
primary_keys.append(pk) # keep track of all primary keys
tsdict[pk] = tsrs # store time series data
metadict[pk] = meta # store metadata
# to assist with later testing
ts_keys = sorted(tsdict.keys())
# randomly choose time series as vantage points
num_vps = 5
vpkeys = list(np.random.choice(ts_keys, size=num_vps, replace=False))
vpdist = ['d_vp_{}'.format(i) for i in vpkeys]
# add stats trigger
web_interface.add_trigger('stats', 'insert_ts', ['mean', 'std'], None)
# insert the time series
for k in tsdict:
web_interface.insert_ts(k, tsdict[k])
# upsert the metadata
for k in tsdict:
web_interface.upsert_meta(k, metadict[k])
# add the vantage points
for i in range(num_vps):
web_interface.insert_vp(vpkeys[i])
# select all database entries; all metadata fields
results = web_interface.select(fields=[])
# we have the right number of database entries
assert len(results) == num_ts
# we have all the right primary keys
assert sorted(results.keys()) == ts_keys
# check that all the time series and metadata matches
for k in tsdict:
results = web_interface.select(fields=['ts'], md={'pk': k})
assert results[k]['ts'] == tsdict[k]
results = web_interface.select(fields=[], md={'pk': k})
for field in metadict[k]:
assert metadict[k][field] == results[k][field]
# check that the trigger has executed as expected (allowing for rounding errors)
for k in tsdict:
results = web_interface.select(fields=['mean', 'std'], md={'pk': k})
assert np.round(results[k]['mean'], 4) == np.round(tsdict[k].mean(), 4)
assert np.round(results[k]['std'], 4) == np.round(tsdict[k].std(), 4)
# check that the vantage points match
print('Vantage points selected:', vpkeys)
print('Vantage points in database:',
web_interface.select(fields=None, md={'vp': True}, additional={'sort_by': '+pk'}).keys(), '\n')
# check that the vantage point distance fields have been created
results = web_interface.select(fields=vpdist, additional={'sort_by': '+pk', 'limit': 1})
print('Vantage point distance fields:', vpdist)
print('Vantage point distance example:')
print(results)
os.kill(server.pid, signal.SIGINT)
time.sleep(5) # give it time to terminate
os.kill(webserver.pid, signal.SIGINT)
time.sleep(5) # give it time to terminate
web_interface = None
server = subprocess.Popen(['python', '../go_server_persistent.py',
'--ts_length', str(ts_length), '--data_dir', data_dir, '--db_name', db_name])
time.sleep(5) # give it time to load fully
webserver = subprocess.Popen(['python', '../go_webserver.py'])
time.sleep(5) # give it time to load fully
web_interface = WebInterface()
# select all database entries; all metadata fields
results = web_interface.select(fields=[])
# we have the right number of database entries
assert len(results) == num_ts
# we have all the right primary keys
assert sorted(results.keys()) == ts_keys
# check that all the time series and metadata matches
for k in tsdict:
results = web_interface.select(fields=['ts'], md={'pk': k})
assert results[k]['ts'] == tsdict[k]
results = web_interface.select(fields=[], md={'pk': k})
for field in metadict[k]:
assert metadict[k][field] == results[k][field]
# check that the vantage points match
print('Vantage points selected:', vpkeys)
print('Vantage points in database:',
web_interface.select(fields=None, md={'vp': True}, additional={'sort_by': '+pk'}).keys())
# check that the trigger is still there by loading new data
# create test time series
_, test = tsmaker(np.random.uniform(low=0.0, high=1.0),
np.random.uniform(low=0.05, high=0.4),
np.random.uniform(low=0.05, high=0.2))
# insert test time series
web_interface.insert_ts('test', test)
# check that mean and standard deviation have been calculated
web_interface.select(fields=['mean', 'std'], md={'pk': 'test'})
# terminate processes before exiting
os.kill(server.pid, signal.SIGINT)
time.sleep(5) # give it time to terminate
web_interface = None
webserver.terminate()
# load the database server
server = subprocess.Popen(['python', '../go_server_persistent.py',
'--ts_length', str(num_days), '--data_dir', '../db_files', '--db_name', 'stock_prices'])
time.sleep(5) # make sure it loads completely
# load the database webserver
webserver = subprocess.Popen(['python', '../go_webserver.py'])
time.sleep(5) # make sure it loads completely
# initialize the web interface
web_interface = WebInterface()
# number of stocks
len(web_interface.select())
# let's look at the first 10 stocks
web_interface.select(fields=['ts'], additional={'sort_by': '+pk', 'limit': 10})
# pick a stock
stock = np.random.choice(stocks_exclude)
print('Stock:', stock)
# run the vantage point similarity search
result = web_interface.vp_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]), 1)
stock_match = list(result)[0]
stock_ts = web_interface.select(fields=['ts'], md={'pk': stock_match})[stock_match]['ts']
print('Most similar stock:', stock_match)
# visualize similarity
plt.plot(stock_data_exclude[stock], label='Query:' + stock)
plt.plot(stock_ts.values(), label='Result:' + stock_match)
plt.xticks([])
plt.legend(loc='best')
plt.title('Daily Stock Price Similarity')
plt.show()
# pick the stock
stock = np.random.choice(stocks_exclude)
print('Stock:', stock)
# run the isax tree similarity search
result = web_interface.isax_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]))
if result == 'ERROR: NO_MATCH':
print('Could not find a similar stock.')
else:
stock_match = list(result)[0]
stock_ts = web_interface.select(fields=['ts'], md={'pk': stock_match})[stock_match]['ts']
print('Most similar stock:', stock_match)
# visualize similarity
plt.plot(stock_data_exclude[stock], label='Query:' + stock)
plt.plot(stock_ts.values(), label='Result:' + stock_match)
plt.xticks([])
plt.legend(loc='best')
plt.title('Daily Stock Price Similarity')
plt.show()
# pick the stock
stock = np.random.choice(stocks_exclude)
print('Stock:', stock)
# run the vantage point similarity search
result = web_interface.vp_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]), 1)
match_vp = list(result)[0]
ts_vp = web_interface.select(fields=['ts'], md={'pk': match_vp})[match_vp]['ts']
print('VP search result:', match_vp)
# run the isax similarity search
result = web_interface.isax_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]))
if result == 'ERROR: NO_MATCH':
print('iSAX search result: Could not find a similar stock.')
else:
match_isax = list(result)[0]
ts_isax = web_interface.select(fields=['ts'], md={'pk': match_isax})[match_isax]['ts']
print('iSAX search result:', match_isax)
# visualize similarity
plt.plot(stock_data_exclude[stock], label='Query:' + stock)
plt.plot(ts_vp.values(), label='Result:' + match_vp)
plt.plot(ts_isax.values(), label='Result:' + match_isax)
plt.xticks([])
plt.legend(loc='best')
plt.title('Daily Stock Price Similarity')
plt.show()
print(web_interface.isax_tree())
# terminate processes before exiting
os.kill(server.pid, signal.SIGINT)
time.sleep(5) # give it time to terminate
web_interface = None
webserver.terminate()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0 - Naive Face Verification
Step3: Expected Output
Step4: Expected Output
Step5: Here're some examples of distances between the encodings between three individuals
Step7: Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.
Step8: Younes is trying to enter the Happy House and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture
Step9: Expected Output
Step11: Expected Output
Step12: Younes is at the front-door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your who_it_is() algorithm identifies Younes.
|
<ASSISTANT_TASK:>
Python Code:
from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=np.nan)
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
print("Total Params:", FRmodel.count_params())
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha = 0.2):
Implementation of the triplet loss as defined by formula (3)
Arguments:
y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
Returns:
loss -- real number, value of the loss
anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
### START CODE HERE ### (≈ 4 lines)
# Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1
pos_dist = tf.reduce_sum(tf.square(y_pred[1] - y_pred[0]), axis = -1)
# Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1
neg_dist = tf.reduce_sum(tf.square(y_pred[2] - y_pred[0]), axis = -1)
# Step 3: subtract the two previous distances and add alpha.
basic_loss = pos_dist - neg_dist + alpha
# Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
### END CODE HERE ###
return loss
with tf.Session() as test:
tf.set_random_seed(1)
y_true = (None, None, None)
y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),
tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))
loss = triplet_loss(y_true, y_pred)
print("loss = " + str(loss.eval()))
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
Function that verifies if the person on the "image_path" image is "identity".
Arguments:
image_path -- path to an image
identity -- string, name of the person you'd like to verify the identity. Has to be a resident of the Happy house.
database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).
model -- your Inception model instance in Keras
Returns:
dist -- distance between the image_path and the image of "identity" in the database.
door_open -- True, if the door should open. False otherwise.
### START CODE HERE ###
# Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)
encoding = img_to_encoding(image_path, model)
# Step 2: Compute distance with identity's image (≈ 1 line)
dist = np.linalg.norm(encoding - database[identity])
# Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
if dist < 0.7:
print("It's " + str(identity) + ", welcome home!")
        door_open = True
else:
print("It's not " + str(identity) + ", please go away")
        door_open = False
### END CODE HERE ###
return dist, door_open
verify("images/camera_0.jpg", "younes", database, FRmodel)
verify("images/camera_2.jpg", "kian", database, FRmodel)
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
Implements face recognition for the happy house by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- your Inception model instance in Keras
Returns:
min_dist -- the minimum distance between image_path encoding and the encodings from the database
identity -- string, the name prediction for the person on image_path
### START CODE HERE ###
## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)
encoding = img_to_encoding(image_path, model)
## Step 2: Find the closest encoding ##
# Initialize "min_dist" to a large value, say 100 (≈1 line)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current "emb" from the database. (≈ 1 line)
dist = np.linalg.norm(np.subtract(encoding ,database[name]))
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
if dist < min_dist:
min_dist = dist
identity = name
### END CODE HERE ###
if min_dist > 0.7:
print("Not in the database.")
else:
print ("it's " + str(identity) + ", the distance is " + str(min_dist))
return min_dist, identity
who_is_it("images/camera_0.jpg", database, FRmodel)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flower power
Step2: ConvNet Codes
Step3: Below I'm running images through the VGG network in batches.
Step4: Building the Classifier
Step5: Data prep
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Step10: Training
Step11: Testing
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
|
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
labels_vecs = lb.fit_transform(labels)# Your one-hot encoded labels array here
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
splitter = ss.split(codes, labels)
train_idx, val_idx = next(splitter)
# 我们取一半的作为validation
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
fc = tf.contrib.layers.fully_connected(inputs_, 256,
weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer())
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None,
weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer())
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# print(predicted); when computing argmax we must specify axis=1 to take the max of each row
# print(labels_,logits)
def get_batches(x, y, n_batches=10):
Return a generator that yields batches from arrays x and y.
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
epochs = 100
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in get_batches(train_x, train_y):
feed = {inputs_: x,
labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
print(prediction)
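# The class with the highest predicted probability (assumes lb and prediction from above):
print("Predicted flower type:", lb.classes_[np.argmax(prediction)])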
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up corpus
Step2: Set up two topic models
Step3: Using U_Mass Coherence
Step4: View the pipeline parameters for one coherence model
Step5: Interpreting the topics
Step6: Using C_V coherence
Step7: Pipeline parameters for C_V coherence
Step8: Print coherence values
Step9: Support for wrappers
Step10: Support for other topic models
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import os
import logging
import json
import warnings
try:
raise ImportError
import pyLDAvis.gensim
CAN_VISUALIZE = True
pyLDAvis.enable_notebook()
from IPython.display import display
except ImportError:
ValueError("SKIP: please install pyLDAvis")
CAN_VISUALIZE = False
import numpy as np
from gensim.models import CoherenceModel, LdaModel, HdpModel
from gensim.models.wrappers import LdaVowpalWabbit, LdaMallet
from gensim.corpora import Dictionary
warnings.filterwarnings('ignore') # To ignore all warnings that arise here to enhance clarity
texts = [['human', 'interface', 'computer'],
['survey', 'user', 'computer', 'system', 'response', 'time'],
['eps', 'user', 'interface', 'system'],
['system', 'human', 'system', 'eps'],
['user', 'response', 'time'],
['trees'],
['graph', 'trees'],
['graph', 'minors', 'trees'],
['graph', 'minors', 'survey']]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
goodLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=50, num_topics=2)
badLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=1, num_topics=2)
goodcm = CoherenceModel(model=goodLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')
badcm = CoherenceModel(model=badLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')
print(goodcm)
if CAN_VISUALIZE:
prepared = pyLDAvis.gensim.prepare(goodLdaModel, corpus, dictionary)
display(pyLDAvis.display(prepared))
if CAN_VISUALIZE:
prepared = pyLDAvis.gensim.prepare(badLdaModel, corpus, dictionary)
display(pyLDAvis.display(prepared))
print(goodcm.get_coherence())
print(badcm.get_coherence())
goodcm = CoherenceModel(model=goodLdaModel, texts=texts, dictionary=dictionary, coherence='c_v')
badcm = CoherenceModel(model=badLdaModel, texts=texts, dictionary=dictionary, coherence='c_v')
print(goodcm)
print(goodcm.get_coherence())
print(badcm.get_coherence())
# Replace with path to your Vowpal Wabbit installation
vw_path = '/usr/local/bin/vw'
# Replace with path to your Mallet installation
home = os.path.expanduser('~')
mallet_path = os.path.join(home, 'mallet-2.0.8', 'bin', 'mallet')
model1 = LdaVowpalWabbit(vw_path, corpus=corpus, num_topics=2, id2word=dictionary, passes=50)
model2 = LdaVowpalWabbit(vw_path, corpus=corpus, num_topics=2, id2word=dictionary, passes=1)
cm1 = CoherenceModel(model=model1, corpus=corpus, coherence='u_mass')
cm2 = CoherenceModel(model=model2, corpus=corpus, coherence='u_mass')
print(cm1.get_coherence())
print(cm2.get_coherence())
model1 = LdaMallet(mallet_path, corpus=corpus, num_topics=2, id2word=dictionary, iterations=50)
model2 = LdaMallet(mallet_path, corpus=corpus, num_topics=2, id2word=dictionary, iterations=1)
cm1 = CoherenceModel(model=model1, texts=texts, coherence='c_v')
cm2 = CoherenceModel(model=model2, texts=texts, coherence='c_v')
print(cm1.get_coherence())
print(cm2.get_coherence())
hm = HdpModel(corpus=corpus, id2word=dictionary)
# To get the topic words from the model
topics = []
for topic_id, topic in hm.show_topics(num_topics=10, formatted=False):
topic = [word for word, _ in topic]
topics.append(topic)
topics[:2]
# Initialize CoherenceModel using `topics` parameter
cm = CoherenceModel(topics=topics, corpus=corpus, dictionary=dictionary, coherence='u_mass')
cm.get_coherence()
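# An optional extra check (a sketch; the per-topic API may differ between
# gensim versions): inspect the coherence of each HDP topic individually.
print(cm.get_coherence_per_topic())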
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup - virtualenv jupyter kernel
Step2: Setup - Initialize Django
Step3: Setup R
|
<ASSISTANT_TASK:>
Python Code:
import datetime
import six
print( "packages imported at " + str( datetime.datetime.now() ) )
%pwd
%run ../django_init.py
# start to support python 3:
from __future__ import unicode_literals
from __future__ import division
#==============================================================================#
# ! imports
#==============================================================================#
# grouped by functional area, then alphabetical order by package, then
# alphabetical order by name of thing being imported.
# context_analysis imports
from context_analysis.reliability.reliability_names_analyzer import ReliabilityNamesAnalyzer
#==============================================================================#
# ! logic
#==============================================================================#
# declare variables
my_analysis_instance = None
label = ""
indices_to_process = -1
result_status = ""
# make reliability instance
my_analysis_instance = ReliabilityNamesAnalyzer()
# database connection information - 2 options... Enter it here:
#my_analysis_instance.db_username = ""
#my_analysis_instance.db_password = ""
#my_analysis_instance.db_host = "localhost"
#my_analysis_instance.db_name = "sourcenet"
# Or set up the following properties in Django_Config, inside the django admins.
# All have application of: "sourcenet-db-admin":
# - db_username
# - db_password
# - db_host
# - db_port
# - db_name
# run the analyze method, see what happens.
#label = "prelim_reliability_test"
#indices_to_process = 3
#label = "prelim_reliability_combined_human"
#indices_to_process = 3
#label = "name_data_test_combined_human"
#indices_to_process = 3
#label = "prelim_reliability_combined_human_final"
#indices_to_process = 3
#label = "prelim_reliability_combined_all"
#indices_to_process = 4
#label = "prelim_reliability_combined_all_final"
#indices_to_process = 4
#label = "prelim_reliability_test_human"
#indices_to_process = 3
#label = "prelim_reliability_test_all"
#indices_to_process = 4
label = "prelim_month"
indices_to_process = 2
result_status = my_analysis_instance.analyze_reliability_names( label, indices_to_process )
print( "result status: {status_string}".format( status_string = result_status ) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Modeling with MKS
Step2: The function make_cahn_hilliard generates n_samples random microstructures, X, and the associated updated microstructures, y, after one time step. The following cell plots one of these microstructures along with its update.
Step3: Calibrate Influence Coefficients
Step4: We are now going to calibrate the influence coefficients while varying the number of local states from 2 up to 20. Each of these models will then predict the evolution of the concentration fields. Mean square error will be used to compared the results with the testing dataset to evaluate how the MKS model's performance changes as we change the number of local states.
Step5: Next we will calibrate the influence coefficients while varying the number of local states and compute the mean squared error. The following demonstrates how to use Scikit-learn's GridSearchCV to optimize n_states as a hyperparameter. Of course, the best fit is always with a larger value of n_states. Increasing this parameter does not overfit the data.
Step6: As expected the accuracy of the MKS model monotonically increases as we increase n_states, but accuracy doesn't improve significantly as n_states gets larger than signal digits.
Step7: Here are the first 4 influence coefficients.
Step8: Predict Microstructure Evolution
Step9: In order to move forward in time, we need to feed the concentration back into the Cahn-Hilliard simulation and the MKS model.
Step10: Let's take a look at the concentration fields.
Step11: The MKS model was able to capture the microstructure evolution with 6 local states.
Step12: Once again we are going to march forward in time by feeding the concentration fields back into the Cahn-Hilliard simulation and the MKS model.
Step13: Let's take a look at the results.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
import pymks
from pymks.datasets import make_cahn_hilliard
n = 41
n_samples = 400
dt = 1e-2
np.random.seed(99)
X, y = make_cahn_hilliard(n_samples=n_samples, size=(n, n), dt=dt)
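# Quick sanity check (illustrative): both arrays should have shape
# (n_samples, n, n), i.e. (400, 41, 41) for the settings above.
print(X.shape, y.shape)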
from pymks.tools import draw_concentrations
# import sklearn
# from sklearn.cross_validation import train_test_split
# split_shape = (X.shape[0],) + (np.product(X.shape[1:]),)
# X_train, X_test, y_train, y_test = train_test_split(X.reshape(split_shape), y.reshape(split_shape),
# test_size=0.5, random_state=3)
# from pymks import MKSLocalizationModel
# from pymks.bases import PrimitiveBasis
# from sklearn.grid_search import GridSearchCV
# parameters_to_tune = {'n_states': np.arange(2, 11)}
# prim_basis = PrimitiveBasis(2, [-1, 1])
# model = MKSLocalizationModel(prim_basis)
# gs = GridSearchCV(model, parameters_to_tune, cv=5, fit_params={'size': (n, n)})
# gs.fit(X_train, y_train)
# print(gs.best_estimator_)
# print(gs.score(X_test, y_test))
from pymks.tools import draw_gridscores
# model = MKSLocalizationModel(basis=PrimitiveBasis(6, [-1, 1]))
# model.fit(X, y)
from pymks.tools import draw_coeff
# from pymks.datasets.cahn_hilliard_simulation import CahnHilliardSimulation
# np.random.seed(191)
# phi0 = np.random.normal(0, 1e-9, (1, n, n))
# ch_sim = CahnHilliardSimulation(dt=dt)
# phi_sim = phi0.copy()
# phi_pred = phi0.copy()
# time_steps = 10
# for ii in range(time_steps):
# ch_sim.run(phi_sim)
# phi_sim = ch_sim.response
# phi_pred = model.predict(phi_pred)
from pymks.tools import draw_concentrations_compare
# m = 3 * n
# phi0 = np.random.normal(0, 1e-9, (1, m, m))
# phi_sim = phi0.copy()
# phi_pred = phi0.copy()
# for ii in range(1000):
# ch_sim.run(phi_sim)
# phi_sim = ch_sim.response
# phi_pred = model.predict(phi_pred)
# from pymks.tools import draw_concentrations_compare
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise: Data Exploration
Step2: Preparing the Data
Step3: For highly skewed feature distributions such as 'capital-gain' and 'capital-loss', it is common practice to apply a <a href="https
Step4: Normalizing Numerical Features
Step5: Exercise: Data Preprocessing
Step6: Shuffle and Split Data
Step7: Evaluating Model Performance
Step8: Supervised Learning Models
Step9: Exercise: Initial Model Evaluation
Step10: Improving Results
Step11: Question 5 - Final Model Evaluation
Step12: Question 7 - Extracting Feature Importance
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries needed for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualization code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Census dataset
data = pd.read_csv("census.csv")
# Success - Display the first record
display(data.head(n=1))
# TODO: Total number of records
n_records = len(data['income'])
# TODO: Number of individuals making more than $50,000
n_greater_50k = len(data[data['income'] == '>50K'])
# TODO: Number of individuals making at most $50,000
n_at_most_50k = len(data[data['income'] == '<=50K'])
# TODO: Percentage of individuals making more than $50,000
greater_percent = (n_greater_50k + 0.0) / n_records * 100
# Print the results
print "Total number of records: {}".format(n_records)
print "Individuals making more than $50,000: {}".format(n_greater_50k)
print "Individuals making at most $50,000: {}".format(n_at_most_50k)
print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent)
# Split the data into features and the target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# Visualize the skewed continuous features of the original data
vs.distribution(data)
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log-transformed distributions
vs.distribution(features_raw, transformed = True)
# Import MinMaxScaler from sklearn.preprocessing
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(data[numerical])
# Show an example of a record with scaling applied
display(features_raw.head(n = 1))
# TODO: One-hot encode the 'features_raw' data using pandas.get_dummies()
features = pd.get_dummies(features_raw)
income_mapping = {
    '<=50K': 0,
    '>50K': 1
}
# TODO: Encode the 'income_raw' data to numerical values
income = income_raw.map(income_mapping)
# Print the number of features after one-hot encoding
encoded = list(features.columns)
print "{} total features after one-hot encoding.".format(len(encoded))
# Uncomment the following line to see the encoded feature names
#print encoded
# Import train_test_split
from sklearn.cross_validation import train_test_split
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
# TODO: Calculate accuracy
accuracy = n_greater_50k / float(n_records)
precision = accuracy
recall = 1.0
# TODO: Calculate the F-score using the formula above with beta = 0.5
fscore = (1 + 0.5**2) * precision * recall / ( 0.5 ** 2 * precision + recall)
# Print the results
print "Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore)
# TODO: Import two evaluation metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
    # TODO: Fit the learner to the training data using slicing with 'sample_size'
    start = time() # Get start time
    learner = learner.fit(X_train[:sample_size], y_train[:sample_size])
    end = time() # Get end time
    # TODO: Calculate the training time
    results['train_time'] = end - start
    # TODO: Get the predictions on the test set,
    # then get predictions on the first 300 training samples
    start = time() # Get start time
    predictions_test = learner.predict(X_test)
    predictions_train = learner.predict(X_train[0:300])
    end = time() # Get end time
    # TODO: Calculate the total prediction time
    results['pred_time'] = end - start
    # TODO: Compute accuracy on the first 300 training samples
    results['acc_train'] = accuracy_score(y_train[0:300], predictions_train)
    # TODO: Compute accuracy on the test set
    results['acc_test'] = accuracy_score(y_test, predictions_test)
    # TODO: Compute F-score on the first 300 training samples
    results['f_train'] = fbeta_score(y_train[0:300], predictions_train, average='macro', beta=0.5)
    # TODO: Compute F-score on the test set
    results['f_test'] = fbeta_score(y_test, predictions_test, average='macro', beta=0.5)
    # Success
    print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size)
    # Return the results
    return results
# TODO: Import three supervised learning models from sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
# TODO: Initialize the three models
clf_A = LogisticRegression(random_state=0)
clf_B = SVC(kernel='linear', C=1.0, random_state=0)
clf_C = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=0)
# TODO: Calculate the number of samples for 1%, 10%, and 100% of the training data
samples_1 = int(len(X_train) * 0.01)
samples_10 = int(len(X_train) * 0.1)
samples_100 = int(len(X_train) * 1)
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# Visualize the evaluation results of the three chosen models
vs.evaluate(results, accuracy, fscore)
# TODO: Import 'GridSearchCV', 'make_scorer', and any other libraries you need
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.svm import SVC
# TODO: Initialize the classifier
clf = SVC(random_state=1)
# TODO: Create the list of parameters you wish to tune
parameters = [{
'C': [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0],
'kernel': ['linear']
},{
'C': [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0],
'gamma': [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0],
'kernel': ['rbf']
}]
# TODO: Create an fbeta_score scoring object
scorer = make_scorer(fbeta_score, beta=0.5)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method
grid_obj = GridSearchCV(
estimator=clf,
param_grid=parameters,
scoring=scorer,
cv=10
)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_obj.fit(X_train, y_train)
# Get the best estimator
best_clf = grid_obj.best_estimator_
# Make predictions using the unoptimized model
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the scores before and after tuning
print "Unoptimized model\n------"
print "Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5))
print "\nOptimized Model\n------"
print "Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
# TODO: Import a supervised learning model that has 'feature_importances_'
from sklearn.ensemble import RandomForestClassifier
# TODO: Train the supervised model on the training set
model = RandomForestClassifier(n_estimators=10000, random_state=0)
model.fit(X_train, y_train)
# TODO: Extract the feature importances
importances = model.feature_importances_
# Plot
vs.feature_plot(importances, X_train, y_train)
# Import functionality for cloning a model
from sklearn.base import clone
# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]
# Train the "best" model found from the earlier grid search on the reduced data
clf = (clone(best_clf)).fit(X_train_reduced, y_train)
# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)
# Report the final model's scores for each version of the data
print "Final Model trained on full data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
print "\nFinal Model trained on reduced data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first step in the Schulze method is to calculate the pairwise preferences of the voters regarding the candidates.
Step2: The second step in the Schulze method is to create an election graph.
Step3: We can then see the adjacency matrix for our election example
Step4: With the adjacency matrix available, we can implement the calculation of the strongest paths.
Step5: We now apply calc_strongest_paths(p) to our example
Step6: The final step in the Schulze algorithm is finding, for each candidate the candidates that are less popular.
Step7: Finally, we can find the winner of the election
Step8: Candidate A wins over C.
Step9: The printout of the preferences dictionary is less elegant than the printout of the preferences matrix that we had before.
Step10: We then create the election graph.
Step11: In this way we save space.
Step12: We'll use again defaultdicts to implement calc_strongest_paths(p, candidates).
Step13: We now apply calc_strongest_paths(p, candidates) to our example
Step14: Finally, we calculate the results.
Step15: So, here are the results again
Step16: Floyd-Warshall All Pairs Shortest Paths
Step17: We'll use the algorithm on the familiar traffic_grid_graph.txt algorithm.
Step18: We go ahead and read it
Step19: Our implementation of the Floyd-Warshall algorithms requires an adjacency matrix as input.
Step20: We do the conversion, and then we run the Floyd-Warshall algorithm.
Step21: You may have noticed than the distance of a node to itself has been set to MAX_INT.
|
<ASSISTANT_TASK:>
Python Code:
import csv
import pprint
with open('ballots.csv') as ballots_file:
reader = csv.reader(ballots_file)
ballots = list(reader)
pprint.pprint(ballots, width=30)
from collections import defaultdict
candidates = {
'A': 0,
'B': 1,
'C': 2,
'D': 3
}
def calc_pairwise_prefs(ballots, candidates):
# Initialize p to 0.
p = [ [0 for j in candidates.keys() ] for i in candidates.keys() ]
# Take each ballot in turn.
for ballot in ballots:
# Take each candidate in the ballot.
for i, c_i in enumerate(ballot):
# Take all following candidates in the ballot.
for c_j in ballot[i+1:]:
# Add to the score of c_i vs c_j.
p[candidates[c_i]][candidates[c_j]] += 1
return p
p = calc_pairwise_prefs(ballots, candidates)
pprint.pprint(p, width=20)
def create_election_graph(p):
n = len(p)
g = [ [-1 for j in range(n) ] for i in range(n) ]
for i in range(n):
for j in range(n):
if p[i][j] > p[j][i]:
g[i][j] = p[i][j] - p[j][i]
return g
g = create_election_graph(p)
pprint.pprint(g, width=20)
def calc_strongest_paths(p):
n = len(p)
# Initialize strongest paths array.
s = [ [ -1 for j in range(n) ] for i in range(n) ]
# Initialize predecessors array.
pred = [ [ -1 for j in range(n) ] for i in range(n) ]
# Initially the strength of the path s[i][j] is simply
# the difference in the weights between p[i][j]
# and p[j][i].
for i in range(n):
for j in range(n):
if p[i][j] > p[j][i]:
s[i][j] = p[i][j] - p[j][i]
pred[i][j] = i
# For each k, i, j, such that the path from i to j
# can be strengthened by taking the detour from i to k
# and k to j adjust the path and the predecessor.
# This can happen at most n times.
for k in range(n):
for i in range(n):
if i != k:
for j in range(n):
if j != i:
if s[i][j] < min(s[i][k], s[k][j]):
s[i][j] = min(s[i][k], s[k][j])
pred[i][j] = pred[k][j]
return (s, pred)
s, pred = calc_strongest_paths(p)
print('strongest paths')
pprint.pprint(s, width=30)
print('predecessors')
pprint.pprint(pred, width=30)
def calc_results(s):
n = len(s)
wins = [ [] for i in range(n) ]
for i in range(n):
for j in range(n):
if i != j:
if s[i][j] > s[j][i]:
wins[i].append(j)
return wins
wins = calc_results(s)
print(wins)
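# A small illustrative helper (not part of the original): map the indices in
# `wins` back to candidate names so the outcome is easier to read.
index_to_name = {v: k for k, v in candidates.items()}
for i, beaten in enumerate(wins):
    print(index_to_name[i], 'wins against', [index_to_name[j] for j in beaten])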
from collections import defaultdict
def calc_pairwise_prefs(ballots):
p = defaultdict(int)
for ballot in ballots:
for i, c_i in enumerate(ballot):
for c_j in ballot[i+1:]:
p[(c_i, c_j)] += 1
return p
p = calc_pairwise_prefs(ballots)
pprint.pprint(p)
p = calc_pairwise_prefs(ballots)
import itertools
candidates = ['A', 'B', 'C', 'D']
def print_matrix(candidates, matrix, col_width=5):
print(' ', end="")
num_candidates = len(candidates)
for candidate in candidates:
print(f'{candidate:^{col_width}}', end="")
i = 0
for c1, c2 in itertools.product(candidates, repeat=2):
if i % num_candidates == 0:
print()
print(f'{candidates[i // num_candidates]:<2}', end="")
print(f'{matrix[(c1, c2)]:^{col_width}}', end="")
i += 1
print()
print_matrix(candidates, p, 5)
def create_election_graph(p):
g = defaultdict(lambda:-1)
for (c_i, c_j), pref in p.items():
if pref > p[(c_j, c_i)]:
g[(c_i, c_j)] = pref - p[(c_j, c_i)]
return g
g = create_election_graph(p)
print_matrix(candidates, g, 5)
def calc_strongest_paths(p, candidates):
# Initialize strongest paths dict.
s = defaultdict(lambda:-1)
# Initialize predecessors dict.
pred = defaultdict(lambda:-1)
# Initially the strength of the path from c_i to c_j is simply
# the difference in the weights between s[(c_i, c_j)]
# and s[(c_j, c_i)].
for (c_i, c_j), pref in p.items():
if pref > p[(c_j, c_i)]:
s[(c_i, c_j)] = pref - p[(c_j, c_i)]
pred[(c_i, c_j)] = c_i
# For each c_k, c_i, c_j, such that the path from c_i to c_j
# can be strengthened by taking the detour from c_i to c_k
# and then to c_k and c_j adjust the path and the predecessor.
# This can happen at most as many times as there are candidates.
for c_k in candidates:
for c_i in candidates:
if c_i != c_k:
for c_j in candidates:
if c_j != c_i:
if s[(c_i, c_j)] < min(s[(c_i, c_k)], s[(c_k, c_j)]):
s[(c_i, c_j)] = min(s[(c_i, c_k)], s[(c_k, c_j)])
pred[(c_i, c_j)] = pred[(c_k, c_j)]
return (s, pred)
s, pred = calc_strongest_paths(p, candidates)
print('strongest paths')
print_matrix(candidates, s, 5)
print('predecessors')
print_matrix(candidates, pred, 5)
def calc_results(s):
wins = defaultdict(list)
for (c_i, c_j), v in s.items():
if s[(c_i, c_j)] > s[(c_j, c_i)]:
wins[c_i].append(c_j)
return wins
wins = calc_results(s)
pprint.pprint(wins)
import sys
MAX_INT = sys.maxsize
def floyd_warshall(w):
n = len(w)
# Initialize distances matrix.
dist = [ [ MAX_INT for j in range(n) ] for i in range(n) ]
# Initialize predecessors matrix.
pred = [ [ -1 for j in range(n) ] for i in range(n) ]
# Initially the length of the path from i to j is simply
# the weight between w[i][j], if it exists, and then
# i is the predecessor of j.
for i in range(n):
for j in range(n):
if w[i][j] != 0:
dist[i][j] = w[i][j]
pred[i][j] = i
# For each k, i, j, such that the path from i to j
# can be shortened by taking the detour from i to k
# and k to j adjust the path and the predecessor.
# This can happen at most n times.
for k in range(n):
for i in range(n):
if i != k:
for j in range(n):
if j != i:
if (dist[i][k] != MAX_INT and
dist[k][j] != MAX_INT and
dist[i][j] > dist[i][k] + dist[k][j]):
dist[i][j] = dist[i][k] + dist[k][j]
pred[i][j] = pred[k][j]
return (dist, pred)
def read_graph(filename, directed=False):
graph = {}
with open(filename) as input_file:
for line in input_file:
parts = line.split()
if len(parts) != 3:
continue # not a valid line, ignore
[n1, n2, w] = [ int (x) for x in parts ]
if n1 not in graph:
graph[n1] = []
if n2 not in graph:
graph[n2] = []
graph[n1].append((n2, w))
if not directed:
graph[n2].append((n1, w))
return graph
g = read_graph('traffic_grid_graph.txt')
pprint.pprint(g)
def adjlist_to_matrix(g):
m = [ [ MAX_INT for j in g.keys() ] for i in g.keys() ]
for u in g.keys():
m[u][u] = 0
for u in g.keys():
for (v, w) in g[u]:
m[u][v] = w
return m
m = adjlist_to_matrix(g)
dist, pred = floyd_warshall(m)
for s in sorted(g.keys()):
print('starting node:', s)
print(pred[s])
print(dist[s])
for i in range(len(dist)):
dist[i][i] = 0
for s in sorted(g.keys()):
print('starting node:', s)
print(pred[s])
print(dist[s])
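# An illustrative sketch (not part of the original): reconstruct an actual
# shortest path by walking the predecessors matrix backwards from the target.
def reconstruct_path(pred, s, t):
    if s != t and pred[s][t] == -1:
        return []  # t is unreachable from s
    path = [t]
    while t != s:
        t = pred[s][t]
        path.append(t)
    return list(reversed(path))
nodes = sorted(g.keys())
print(reconstruct_path(pred, nodes[0], nodes[-1]))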
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As you can see in the plot below, the data is not linearly separable.
Step2: Model and Cost Function
Step3: Cost Function
Step4: Momentum
Step5: Code Implementation
Step6: Visualization of the trained classifier
Step7: Transformation of the input domain
|
<ASSISTANT_TASK:>
Python Code:
# Python imports
import numpy as np # Matrix and vector computation package
import sklearn.datasets # To generate the dataset
import matplotlib.pyplot as plt # Plotting library
from matplotlib.colors import colorConverter, ListedColormap # Some plotting functions
from mpl_toolkits.mplot3d import Axes3D # 3D plots
from matplotlib import cm # Colormaps
# Allow matplotlib to plot inside this notebook
%matplotlib inline
# Set the seed of the numpy random number generator so that the tutorial is reproducable
np.random.seed(seed=1)
# Generate the dataset
X, t = sklearn.datasets.make_circles(n_samples=100, shuffle=False, factor=0.4, noise=0.1)
T = np.zeros((100,2)) # Define target matrix
T[t==1,1] = 1
T[t==0,0] = 1
# Separate the red and blue points for plotting
x_red = X[t==0]
x_blue = X[t==1]
print('shape of X: {}'.format(X.shape))
print('shape of T: {}'.format(T.shape))
# Plot both classes on the x1, x2 plane
plt.plot(x_red[:,0], x_red[:,1], 'ro', label='class red')
plt.plot(x_blue[:,0], x_blue[:,1], 'bo', label='class blue')
plt.grid()
plt.legend(loc=1)
plt.xlabel('$x_1$', fontsize=15)
plt.ylabel('$x_2$', fontsize=15)
plt.axis([-1.5, 1.5, -1.5, 1.5])
plt.title('red vs blue classes in the input space')
plt.show()
# Define the logistic function. - for hidden layer activation.
def logistic(z):
return 1 / (1 + np.exp(-z))
# Define the softmax function
def softmax(z):
return np.exp(z) / np.sum(np.exp(z), axis=1, keepdims=True)
# Function to compute the hidden activations
def hidden_activations(X, Wh, bh):
return logistic(X.dot(Wh) + bh)
# Define output layer feedforward
def output_activations(H, Wo, bo):
return softmax(H.dot(Wo) + bo)
# Define the neural network function
def nn(X, Wh, bh, Wo, bo):
return output_activations(hidden_activations(X, Wh, bh), Wo, bo)
# Define the neural network prediction function that only returns
# 1 or 0 depending on the predicted class
def nn_predict(X, Wh, bh, Wo, bo):
return np.around(nn(X, Wh, bh, Wo, bo))
# Define the cost function
def cost(Y, T):
return - np.multiply(T, np.log(Y)).sum()
# Define the error function at the output
def error_output(Y, T):
return Y - T
# Define the gradient function for the weight parameters at the output layer
def gradient_weight_out(H, Eo):
return H.T.dot(Eo)
# Define the gradient function for the bias parameters at the output layer
def gradient_bias_out(Eo):
return np.sum(Eo, axis=0, keepdims=True)
# Define the error function at the hidden layer
def error_hidden(H, Wo, Eo):
# H * (1-H) * (E . Wo^T)
return np.multiply(np.multiply(H,(1 - H)), Eo.dot(Wo.T))
# Define the gradient function for the weight parameters at the hidden layer
def gradient_weight_hidden(X, Eh):
return X.T.dot(Eh)
# Define the gradient function for the bias parameters at the output layer
def gradient_bias_hidden(Eh):
return np.sum(Eh, axis=0, keepdims=True)
# Define the update function to update the network parameters over 1 iteration
def backprop_gradients(X, T, Wh, bh, Wo, bo):
# Compute the output of the network
# Compute the activations of the layers
H = hidden_activations(X, Wh, bh)
Y = output_activations(H, Wo, bo)
# Compute the gradients of the output layer
Eo = error_output(Y, T)
JWo = gradient_weight_out(H, Eo)
Jbo = gradient_bias_out(Eo)
# Compute the gradients of the hidden layer
Eh = error_hidden(H, Wo, Eo)
JWh = gradient_weight_hidden(X, Eh)
Jbh = gradient_bias_hidden(Eh)
return [JWh, Jbh, JWo, Jbo]
def update_velocity(X, T, ls_of_params, Vs, momentum_term, learning_rate):
# ls_of_params = [Wh, bh, Wo, bo]
# Js = [JWh, Jbh, JWo, Jbo]
Js = backprop_gradients(X, T, *ls_of_params)
return [momentum_term * V - learning_rate * J for V,J in zip(Vs, Js)]
def update_params(ls_of_params, Vs):
# ls_of_params = [Wh, bh, Wo, bo]
# Vs = [VWh, Vbh, VWo, Vbo]
return [P + V for P,V in zip(ls_of_params, Vs)]
# Run backpropagation
# Initialize weights and biases
init_var = 0.1
# Initialize hidden layer parameters
bh = np.random.randn(1, 3) * init_var
Wh = np.random.randn(2, 3) * init_var
# Initialize output layer parameters
bo = np.random.randn(1, 2) * init_var
Wo = np.random.randn(3, 2) * init_var
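# A brief numerical gradient check (an illustrative sketch, not part of the
# original notebook): compare the analytic backprop gradient of one hidden
# weight against a central finite-difference estimate before training.
eps = 1e-4
grad_analytic = backprop_gradients(X, T, Wh, bh, Wo, bo)[0][0, 0]
Wh_plus, Wh_minus = Wh.copy(), Wh.copy()
Wh_plus[0, 0] += eps
Wh_minus[0, 0] -= eps
grad_numeric = (cost(nn(X, Wh_plus, bh, Wo, bo), T) -
                cost(nn(X, Wh_minus, bh, Wo, bo), T)) / (2 * eps)
print('gradient check: analytic = {:.6f}, numerical = {:.6f}'.format(
    grad_analytic, grad_numeric))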
# Parameters are already initialized randomly above (the same values used for the gradient check)
# Set the learning rate
learning_rate = 0.02
momentum_term = 0.9
# define the velocities Vs = [VWh, Vbh, VWo, Vbo]
Vs = [np.zeros_like(M) for M in [Wh, bh, Wo, bo]]
# Start the gradient descent updates and plot the iterations
nb_of_iterations = 300 # number of gradient descent updates
lr_update = learning_rate / nb_of_iterations # learning rate update rule
ls_costs = [cost(nn(X, Wh, bh, Wo, bo), T)] # list of cost over the iterations
for i in range(nb_of_iterations):
# Update the velocities and the parameters
Vs = update_velocity(X, T, [Wh, bh, Wo, bo], Vs, momentum_term, learning_rate)
Wh, bh, Wo, bo = update_params([Wh, bh, Wo, bo], Vs)
ls_costs.append(cost(nn(X, Wh, bh, Wo, bo), T))
# Plot the cost over the iterations
plt.plot(ls_costs, 'b-')
plt.xlabel('iteration')
plt.ylabel('$\\xi$', fontsize=15)
plt.title('Decrease of cost over backprop iteration')
plt.grid()
plt.show()
# Plot the resulting decision boundary
# Generate a grid over the input space to plot the color of the
# classification at that grid point
nb_of_xs = 200
xs1 = np.linspace(-2, 2, num=nb_of_xs)
xs2 = np.linspace(-2, 2, num=nb_of_xs)
xx, yy = np.meshgrid(xs1, xs2) # create the grid
# Initialize and fill the classification plane
classification_plane = np.zeros((nb_of_xs, nb_of_xs))
for i in range(nb_of_xs):
for j in range(nb_of_xs):
pred = nn_predict(np.asmatrix([xx[i,j], yy[i,j]]), Wh, bh, Wo, bo)
classification_plane[i,j] = pred[0,0]
# Create a color map to show the classification colors of each grid point
cmap = ListedColormap([
colorConverter.to_rgba('w', alpha=0.30),
colorConverter.to_rgba('g', alpha=0.30)])
# Plot the classification plane with decision boundary and input samples
plt.contourf(xx, yy, classification_plane, cmap=cmap)
# Plot both classes on the x1, x2 plane
plt.plot(x_red[:,0], x_red[:,1], 'ro', label='class red')
plt.plot(x_blue[:,0], x_blue[:,1], 'bo', label='class blue')
plt.grid()
plt.legend(loc=1)
plt.xlabel('$x_1$', fontsize=15)
plt.ylabel('$x_2$', fontsize=15)
plt.axis([-1.5, 1.5, -1.5, 1.5])
plt.title('red vs blue classification boundary')
plt.show()
# Plot the projection of the input onto the hidden layer
# Define the projections of the blue and red classes
H_blue = hidden_activations(x_blue, Wh, bh)
H_red = hidden_activations(x_red, Wh, bh)
# Plot the error surface
fig = plt.figure()
ax = Axes3D(fig)
ax.plot(np.ravel(H_blue[:,2]), np.ravel(H_blue[:,1]), np.ravel(H_blue[:,0]), 'bo')
ax.plot(np.ravel(H_red[:,2]), np.ravel(H_red[:,1]), np.ravel(H_red[:,0]), 'ro')
ax.set_xlabel('$h_1$', fontsize=15)
ax.set_ylabel('$h_2$', fontsize=15)
ax.set_zlabel('$h_3$', fontsize=15)
ax.view_init(elev=10, azim=-40)
plt.title('Projection of the input X onto the hidden layer H')
plt.grid()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the height and weight data (weights_heights.csv, attached to the assignment) into a Pandas DataFrame object
Step2: Most often, the first thing to do after reading data is to look at the first few records. This helps catch data-reading errors (for example, when instead of 10 columns you get a single one whose name contains 9 semicolons). It also lets you get acquainted with the data and, at a minimum, look at the features and their nature (quantitative, categorical, etc.).
Step3: Arguments
Step4: One effective method of preliminary data analysis is plotting the pairwise dependencies of the features. This produces $m \times m$ plots (m is the number of features), with histograms of the feature distributions on the diagonal and scatter plots of pairs of features off the diagonal. This can be done with the $scatter_matrix$ method of a Pandas DataFrame or with the pairplot function of the Seaborn library.
Step5: [3]. Build a figure showing the pairwise dependencies of the features 'Height', 'Weight' and 'BMI'. Use the pairplot method of the Seaborn library.
Step6: During preliminary data analysis one often needs to study how a quantitative feature depends on a categorical one (say, salary on an employee's gender). Box-and-whisker plots - Seaborn boxplots - help here. A box plot is a compact way to show the statistics of a real-valued feature (mean and quartiles) across the values of a categorical feature. It also helps track down "outliers" - observations whose value of the feature differs strongly from the others.
Step7: [5]. Build a scatter plot of height versus weight using the plot method of a Pandas DataFrame with the argument kind='scatter'. Label the figure.
Step8: Task 2. Minimizing the squared error
Step9: So, we are solving the problem
Step10: Minimizing the squared error function is a relatively simple task because the function is convex. Many optimization methods exist for such problems. Let us look at how the error function depends on one parameter (the slope of the line) when the other parameter (the intercept) is fixed.
Step11: Now let us use an optimization method to find the "optimal" slope of the line approximating the height-weight dependence, with the coefficient fixed at $w_0 = 50$.
Step12: When analysing multidimensional data, people often want an intuitive picture of the nature of the data through visualization. Alas, with more than 3 features such pictures cannot be drawn. In practice, to visualize data in 2D and 3D, 2 or, respectively, 3 principal components are extracted from the data (we will see exactly how this is done later in the course) and the data is displayed in the plane or in a volume.
Step13: We create objects of type matplotlib.figure.Figure (the figure) and matplotlib.axes._subplots.Axes3DSubplot (the axes).
Step14: [10]. Build a 3D plot of the error function computed in item 6 as a function of the parameters $w_0$ and $w_1$. Label the $x$ axis "Intercept", the $y$ axis "Slope", and the $z$ axis "Error".
Step15: [11]. Using the minimize method from scipy.optimize, find the minimum of the function defined in item 6 for parameter $w_0$ in the range [-100,100] and $w_1$ in the range [-5, 5]. The starting point is ($w_0$, $w_1$) = (0, 0). Use the L-BFGS-B optimization method (the method argument of minimize). On the plot from item 5 of Task 1, draw the line corresponding to the optimal values of the parameters $w_0$ and $w_1$ found. Label the axes and the plot.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv('weights_heights.csv', index_col='Index')
data.plot(y='Height', kind='hist',
color='red', title='Height (inch.) distribution')
plt.show()
data.head(5)
data.plot(y='Weight', kind='hist', color='green', title='Weight (pnd.) distribution')
plt.show()
def make_bmi(height_inch, weight_pound):
METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462
return (weight_pound / KILO_TO_POUND) / \
(height_inch / METER_TO_INCH) ** 2
data['BMI'] = data.apply(lambda row: make_bmi(row['Height'],
row['Weight']), axis=1)
sns.pairplot(data, size=3)
plt.show()
def weight_category(weight):
if weight < 120:
return 1
elif weight >= 150:
return 3
else:
return 2
data['weight_cat'] = data['Weight'].apply(weight_category)
dataBoxplot = sns.boxplot(x='weight_cat', y='Height', data=data)
dataBoxplot.set(xlabel=u'Weight category', ylabel=u'Height')
plt.show()
dataPlot = data.plot(x='Weight', y='Height', kind='scatter', title=u'Height vs. weight')
dataPlot.set(xlabel=u'Weight', ylabel=u'Height')
plt.show()
def squaredError(w0, w1):
rData, cData = data.shape
error = np.zeros( rData )
error = (data['Height'] - (w0 + w1 * data['Weight'])) ** 2
return error.sum()
# lambda function for a straight line
lineFunc = lambda x, w0, w1: w0 + w1 * x
# array of points along the X axis
pointsNum = 100
xLines = np.linspace(0, 200, pointsNum)
# array of line coefficient values
wLine = np.array( [[60., 0.05], [50, 0.16]] )
rwLine, cwLine = wLine.shape
# array of points along the Y axis
yLines = np.zeros( (rwLine, pointsNum) )
for i in xrange(rwLine):
    yLines[i] = np.array( lineFunc(xLines, wLine[i, 0], wLine[i, 1]) )
# plotting
plt.scatter(data['Weight'], data['Height'], alpha=0.5, c='red', label='Data points')
for i in xrange(rwLine):
    text = 'w0: ' + str(wLine[i, 0]) + ', w1: ' + str(wLine[i, 1])
    plt.plot(xLines, yLines[i], linewidth=3.0, label=text)
plt.legend()
plt.axis( [75, 175, 60, 75] )
plt.title(u'Height vs. weight')
plt.xlabel(u'Weight')
plt.ylabel(u'Height')
plt.show()
# vary the parameter w1
numw1Iter = 100
w1Iter = np.linspace(-5., 5., numw1Iter)
# error for each value of w1
errw1Iter = np.zeros( (numw1Iter) )
for i in xrange(numw1Iter):
    errw1Iter[i] = squaredError(50., w1Iter[i])
# plotting
plt.plot(w1Iter, errw1Iter)
plt.title(u'Error function\nas a function of w1 with w0 = 50')
plt.xlabel(u'w1')
plt.ylabel(u'Error')
plt.show()
from scipy.optimize import minimize_scalar
optw1Res = minimize_scalar(lambda w: squaredError(50., w), bounds=(-5, 5))
optw1 = optw1Res.x
print 'Optimal w1 value for w0 = 50:', round(optw1, 3)
# values of the linear approximation for the optimal w1
yLinesOpt = np.array( lineFunc(xLines, 50., optw1) )
# plotting
dataPlot = data.plot(x='Weight', y='Height', kind='scatter', title=u'Height vs. weight', label=u'Data points')
dataPlot.set(xlabel=u'Weight', ylabel=u'Height')
plt.plot(xLines, yLinesOpt, c='red', label=u'Optimal line')
plt.legend()
plt.axis( [75, 175, 60, 75] )
plt.title(u'Height vs. weight')
plt.xlabel(u'Weight')
plt.ylabel(u'Height')
plt.show()
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d') # get current axis
# Create NumPy arrays with the point coordinates along the X and Y axes.
# Use the meshgrid method, which builds a coordinate matrix from the
# coordinate vectors. Define the desired function Z(x, y).
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
Z = np.sin(np.sqrt(X**2 + Y**2))
# Finally, use the *plot_surface* method of the Axes3DSubplot
# object. Also label the axes.
surf = ax.plot_surface(X, Y, Z)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
fig = plt.figure()
ax = fig.gca(projection='3d') # get current axis
# Create NumPy arrays with the point coordinates along the X and Y axes.
# Use the meshgrid method, which builds a coordinate matrix from the
# coordinate vectors. Define the desired function Z(x, y).
X = np.arange(0., 100., 1)
Y = np.arange(-5., 5., 0.5)
X, Y = np.meshgrid(X, Y)
squaredErrorVect = np.vectorize(squaredError)
Z = np.array( squaredErrorVect(X.ravel(), Y.ravel()) )
Z.shape = X.shape
# Finally, use the *plot_surface* method of the Axes3DSubplot
# object. Also label the axes.
surf = ax.plot_surface(X, Y, Z)
ax.set_xlabel('Intercept')
ax.set_ylabel('Slope')
ax.set_zlabel('Error')
plt.show()
from scipy.optimize import minimize
squaredErrorOneArg = lambda w: squaredError(w[0], w[1])
bnds = ((-100., 100.), (-5., 5.))
x0 = (0., 0.)
optww1Res = minimize(squaredErrorOneArg, x0, bounds=bnds, method='L-BFGS-B')
print optww1Res
optw0 = optww1Res.x[0]
optw1 = optww1Res.x[1]
print 'Optimal ( w0, w1 ) value: (', round(optw0, 3), ',', round(optw1, 3), ')'
# values of the linear approximation for the optimal (w0, w1)
yLinesOpt = np.array( lineFunc(xLines, optw0, optw1) )
# plotting
dataPlot = data.plot(x='Weight', y='Height', kind='scatter', title=u'Height vs. weight', label=u'Data points')
dataPlot.set(xlabel=u'Weight', ylabel=u'Height')
plt.plot(xLines, yLinesOpt, c='red', label=u'Optimal line')
plt.legend()
plt.axis( [75, 175, 60, 75] )
plt.title(u'Height vs. weight')
plt.xlabel(u'Weight')
plt.ylabel(u'Height')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What is Numba?
Step2: Now, test the function, first with two scalar integers
Step3: 1b) With Numpy, we can use our function to add not just scalars, but vectors as well. Using your favorite array creation routine, create two integer arrays with ten elements each, called a and b, and use your add function to add them.
Step4: Okay, so our function can add things. Now, let's use Numba's jit function to create a Numba version of our addition function
Step5: More commonly, you will use jit as a decorator, by adding @jit to the line above your function definition, but the above version shows you that at heart, @jit is just a python function that takes other functions as its argument!
Step6: Profiling
Step7: What's going on here? %timeit is running our function many times, and then reporting the average time it takes to run. This is generally a better approach than timing a single function execution, because it accounts for random events that may cause any given run to perform poorly.
Step8: So, scalars are faster than arrays (makes sense), and python's addition function is better than ours (seems reasonable). Now, let's see how fast our pre-compiled Numba addition function is.
Step9: Hold on - our new pre-compiled function is running even slower than the original python version! What's going on here?
Step10: 2b) Use the %timeit function to determine whether the Numba version of the hypotenuse function is better than the original Python implementation.
Step11: 2c) Numba functions can call other functions, provided they are also Numba functions. Below is a function that loops through two numpy arrays and puts their sum into an output array. Modify the following function to calculate the hypotenuse instead.
Step13: Problem 3 - Fun with Fractals
Step14: Want to see what you made? Run the following code to plot the image. Feel free to pick your favorite matplotlib color map
Step16: 3b) There is more than one type of fractal in the world, however! Below is a function that determines membership in the Mandelbrot set. Modify the function below using Numba, then modify the code above to produce a new pretty picture.
Step17: Problem 4 - Typing
Step18: This tells us the array contains integers, and each integer has been assigned 64bits in memory (or equivalently, 8 bytes). Most python functions are defined to work on arbitrary types, so that if you use the + operator, for example, you can add integers, floats, complex numbers, or even strings! However, this flexibility comes at a cost, performance-wise. Numba, on the other hand, compiles each function based on the types of its arguments, and infers the type of the result. You can see this if you run the inspect_types function on a numba function
Step19: 4a) Numba has inferred the types for this function based on how we've used it. Try out your numba_add function with two floating point numbers, then re-inspect the types of the Numba function. Are they the same?
Step20: So far we have been using what Numba refers to as "lazy" (or "call-time") decoration. Basically, we've been letting Numba do the work of figuring out how we're using the function and inferring the types for us. Alternatively, if we know how we are going to use a given function, we can use "eager" (or "compile-time") decoration. To do this, we make use of the vectorize decorator. For example, if we want to make an integer-only version of our addition function, we could write
Step21: Numpy Universal Functions
Step22: 4b) Try your ufunc out with a new target, 'parallel'. How does the speed compare? What if the array size is much larger?
Step25: Problem 5 - Direct Summation
Step27: Next, we define a function to calculate the particle interaction via direct summation
Step28: All that's left is to create a list of random particles with assigned masses
Step29: 5a) Run the direct summation code and determine how long it takes with 10, 100, 1000 particles. Is there a relationship?
Step30: Add answer here.
Step31: You can access an individual "attribute" like this
Step32: 5b) Write a jit function create_n_random_particles that takes the arguments n (number of particles), m (mass of every particle) and a domain within which to generate a random number (as in the class above).
Step33: Now we'll create our array of particles using the new function.
Step34: We don't have a distance method anymore, so we need to write a function to take care of that.
Step35: 5d) Modify the direct_sum function above to instead work on a NumPy array of particles. Loop over each element in the array and calculate its total potential. Time the result and compare it to your previous version of this function.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
import math
from numba import jit, njit, vectorize
def add(x, y):
    # add code here
    return x + y
# add code here
# add code here
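# One possible solution sketch for 1a/1b (the placeholder cells above are left
# for the reader): test with scalars, then with two 10-element integer arrays.
print(add(1, 2))
a = np.arange(10)
b = np.arange(10, 20)
print(add(a, b))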
numba_add = jit(add)
# add code here
%timeit add(1,2)
# add code here
# add code here
# add code here
def hypotenuse(x, y):
x = abs(x);
y = abs(y);
t = min(x, y);
x = max(x, y);
t = t / x;
return x * math.sqrt(1+t*t)
# add code here
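# One possible sketch for 2a/2b (assumes the obvious jitted wrapper name):
numba_hypotenuse = jit(hypotenuse)
numba_hypotenuse(3.0, 4.0)   # the first call triggers compilation
%timeit hypotenuse(3.0, 4.0)
%timeit numba_hypotenuse(3.0, 4.0)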
@njit # this is an alias for @jit(nopython=True)
def ex_func(x, y, out):
for i in range(x.shape[0]):
out[i] = x[i] + y[i] # change this line
in1 = np.arange(10, dtype=np.float64)
in2 = 2 * in1 + 1
out = np.empty_like(in1)
print('in1:', in1)
print('in2:', in2)
ex_func(in1, in2, out)
print('out:', out)
# This test will fail until you fix the ex1 function
np.testing.assert_almost_equal(out, np.hypot(in1, in2))
# add code here
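# One possible solution sketch for 3a: compile the kernel below in nopython mode.
@njit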
def julia(x, y, max_iters):
    """Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the Julia
    set given a fixed number of iterations."""
i = 0
c = complex(-0.8, 0.156)
a = complex(x,y)
for i in range(max_iters):
a = a*a + c
if (a.real*a.real + a.imag*a.imag) > 1000:
return 0
return 255
# add code here
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = julia(real, imag, iters)
image[y, x] = color
return image
image = np.zeros((500, 750), dtype=np.uint8)
%timeit create_fractal(-2.0, 2.0, -1.0, 1.0, image, 200)
plt.imshow(image)
plt.viridis()
plt.show()
# add code here
def mandel(x, y, max_iters):
    """Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the Mandelbrot
    set given a fixed number of iterations."""
i = 0
c = complex(x,y)
z = 0.0j
for i in range(max_iters):
z = z*z + c
if (z.real*z.real + z.imag*z.imag) >= 4:
return i
return 255
# add code here
image = np.zeros((500, 750), dtype=np.uint8)
create_fractal(-2.0, 2.0, -1.0, 1.0, image, 200)
plt.imshow(image)
plt.viridis()
plt.show()
a.dtype
numba_add.inspect_types()
#Add code here
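# One possible sketch for 4a: call the jitted function with floats, then
# re-inspect the recorded type signatures (an int64 and a float64 version).
print(numba_add(1.5, 2.5))
numba_add.inspect_types()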
@vectorize(['int64(int64, int64)'], target='cpu')
def add_ufunc(x, y):
return x + y
%timeit add_ufunc(a,b)
# add code here
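# One possible sketch for 4b (same signature, 'parallel' target; the benefit
# usually only shows up for much larger arrays):
@vectorize(['int64(int64, int64)'], target='parallel')
def add_ufunc_parallel(x, y):
    return x + y
big_a = np.arange(1000000, dtype=np.int64)
big_b = big_a * 2
%timeit add_ufunc(big_a, big_b)
%timeit add_ufunc_parallel(big_a, big_b)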
class Point():
    """Arguments:
        domain: the domain of random generated coordinates x,y,z,
                default=1.0
    Attributes:
        x, y, z: coordinates of the point
    """
def __init__(self, domain=1.0):
self.x = domain * np.random.random()
self.y = domain * np.random.random()
self.z = domain * np.random.random()
def distance(self, other):
return ((self.x - other.x)**2 +
(self.y - other.y)**2 +
(self.z - other.z)**2)**.5
class Particle(Point):
    """Attributes:
        m: mass of the particle
        phi: the potential of the particle
    """
def __init__(self, domain=1.0, m=1.0):
Point.__init__(self, domain)
self.m = m
self.phi = 0.
def direct_sum(particles):
    """Calculate the potential at each particle
    using the direct summation method.
    Arguments:
        particles: the list of particles
    """
for i, target in enumerate(particles):
for source in (particles[:i] + particles[i+1:]):
r = target.distance(source)
target.phi += source.m / r
n = 1000
particles = [Particle(m = 1 / n) for i in range(n)]
# add code here
particle_dtype = np.dtype({'names':['x','y','z','m','phi'],
'formats':[np.double,
np.double,
np.double,
np.double,
np.double]})
myarray = np.ones(3, dtype=particle_dtype)
myarray
myarray[0]['x'] = 2.0
# add code here
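# One possible sketch for 5b, written with plain NumPy on the structured dtype
# defined above (the exercise asks for a jitted version; adding @njit is left
# to the reader).
def create_n_random_particles(n, m, domain=1.0):
    parts = np.zeros(n, dtype=particle_dtype)
    parts['x'] = domain * np.random.random(n)
    parts['y'] = domain * np.random.random(n)
    parts['z'] = domain * np.random.random(n)
    parts['m'] = m
    parts['phi'] = 0.0
    return parts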
particles = create_n_random_particles(1000, .001, 1)
particles[:3]
# add code here
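# One possible sketch for the distance helper, using field access on the
# structured records (again, decorating with @njit is left as in the exercise).
def distance(p1, p2):
    return ((p1['x'] - p2['x'])**2 +
            (p1['y'] - p2['y'])**2 +
            (p1['z'] - p2['z'])**2)**0.5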
distance(particles[0], particles[1])
%%timeit
distance(particles[0], particles[1])
# add code here
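# One possible sketch for 5d (plain Python; compare against a @njit-decorated
# copy to answer the timing question):
def direct_sum_array(particles):
    for i in range(particles.shape[0]):
        for j in range(particles.shape[0]):
            if i != j:
                particles[i]['phi'] += particles[j]['m'] / distance(particles[i], particles[j])
%timeit direct_sum_array(particles.copy())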
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: But I don't want to scare people with spooky, scary math
Step2: For this example, we will only use the Sepal Length and Sepal Width features, so we can plot the data in a 2D grid.
Step3: K-Nearest Neighbors
Step4: First, we'll create a random sample of 25 data points
Step5: Let's look at a random sample of the Iris dataset. Suppose I were to ask you to classify the green dot as either red, yellow, or blue. What would you say?
Step6: Your intuition is that the green point will have a label similar to the points surrounding it. This is the idea behind the K-Nearest Neighbors algorithm.
Step7: We created a K-NN classifier that uses 13 neighbors to decide each test point. Let's see how it would label the space.
Step8: K-NN is easy to understand and implement, but because it requires us to test each new data point against each point in the training set, it is slow and scales poorly to larger datasets.
Step9: Logistic Regression is the classification cousin of linear regression. Our boundaries are made up of lines.
Step10: Compared to KNN, we can create more concise decision boundaries for the data points.
Step11: Logistic regression is quick and simple, but sometimes does not perform well due to the strict partitioning of the decision boundary.
Step12: Well, we can put a line down the middle that separates them.
Step13: But now if we add another red ball, the boundary does not hold
Step14: Could this problem have been avoided? Is there a way to choose the line in the initial problem to prevent this?
Step15: Ta-Da!
Step16: But let's say we have an arrangement of balls that no line can separate. What can we do?
Step17: One of the big breakthroughs in machine learning is the kernel function. Using the kernel function, we can map the input data into a new feature space.
Step18: After mapping our points to a higher dimensional space, we can map this line back down to our original space.
Step19: Support Vector Machine
Step20: Let's create a new dataset
Step21: If we want to separate the two clusters of points with lines, there are several lines we can choose from. How would we determine which line is the best?
Step22: Well... we can look at how big a margin each line has. The larger the margin, the more "confident" we are that the data is well separated.
Step24: Now, let's create a Support Vector Machine
Step25: We can plot the decision boundary of the SVM along with its margin.
Step26: Now, you may be wondering why this algorithm is called a Support Vector Machine. If you look back at the plot, you will notice there are three data points on the margin.
Step27: These points are called the Support vectors. The support vectors are used by the algorithm to determine the margin boundary. Now, let's look at a more interactive demo.
Step28: Let's look at a different dataset. In this dataset, it is obvious that the data points are not linearly separable.
Step29: We can use a kernel function to map the data to a higher dimension.
Step30: Let's see what the data looks like in this higher dimensional space
Step31: Now, we can map this boundary back to our original space
Step32: SVMs are a mathematically beautiful algorithm that are guaranteed to find a wide margin decision boundary, but they do not perform well on large datasets due to scaling issues.
Step33: Decision Trees are commonly used by business analysts because they are easy to understand.
Step34: Like the previous algorithms, the decision tree can partition up the space of points.
Step35: Notice how the boundaries of the decision tree are more form-fitting compared to the decision boundaries of our previous algorithms. One issue with decision trees is that they overfit to the data.
Step36: The Decision Tree is easy to interpret, but if there are too many features, the tree may become too large very quickly.
Step37: One Decision Tree is good, but what if we created a bunch of them and pooled their results together!
Step38: The idea of combining multiple models together is a good idea, but as we can see from above, it has some overfitting issues.
|
<ASSISTANT_TASK:>
Python Code:
YouTubeVideo("IFACrIx5SZ0", start = 85, end = 95)
from sklearn.datasets import load_iris
iris = load_iris()
x_ind = 0
y_ind = 1
X = iris.data[:,(x_ind, y_ind)]
labels = iris.target
print X.shape
print labels.shape
# this formatter will label the colorbar with the correct target names
formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])
plt.scatter(X[:, x_ind], X[:, y_ind],
c=labels, cmap=plt.cm.get_cmap('RdYlBu', 3))
plt.colorbar(ticks=[0, 1, 2], format=formatter)
plt.clim(-0.5, 2.5)
plt.xlabel(iris.feature_names[x_ind])
plt.ylabel(iris.feature_names[y_ind]);
from sklearn import neighbors
sample_size = 25
sample_size_zeroInd = sample_size - 1
rand_sample = np.random.randint(0,150, (sample_size,1))
x_sample = X[rand_sample].reshape((sample_size,2))
label_sample = labels[rand_sample]
plt.scatter(x_sample[:sample_size_zeroInd, 0], x_sample[:sample_size_zeroInd, 1],
c=label_sample[:sample_size_zeroInd], cmap=plt.cm.get_cmap('RdYlBu', 3), s = 40)
plt.colorbar(ticks=[0, 1, 2], format = formatter)
plt.clim(-0.5, 2.5)
plt.xlabel(iris.feature_names[x_ind])
plt.ylabel(iris.feature_names[y_ind]);
plt.scatter(x_sample[sample_size_zeroInd, x_ind], x_sample[sample_size_zeroInd, y_ind], s = 100, c ='g', alpha = 0.5)
n_neighbors = 13
h = 0.02
clf = neighbors.KNeighborsClassifier(n_neighbors)
clf.fit(X, labels)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.get_cmap('RdYlBu', 3), alpha = 0.2)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=labels, cmap=plt.cm.get_cmap('RdYlBu', 3), s = 40)
plt.colorbar(ticks=[0, 1, 2], format = formatter)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xlabel(iris.feature_names[x_ind])
plt.ylabel(iris.feature_names[y_ind]);
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1e5)
clf.fit(X, labels)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(4, 3))
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.get_cmap('RdYlBu', 3), alpha = 0.2)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=labels, edgecolors='k', cmap=plt.cm.get_cmap('RdYlBu', 3), s = 40)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
Image(url = 'http://i.imgur.com/zDBbD.png')
Image(url ='http://i.imgur.com/aLZlG.png')
Image(url = 'http://i.imgur.com/kxWgh.png')
Image(url ='http://i.imgur.com/ePy4V.png')
Image(url ='http://i.imgur.com/BWYYZ.png')
Image(url = 'http://i.imgur.com/R9967.png')
Image(url = 'http://i.imgur.com/WuxyO.png')
Image(url = 'http://i.imgur.com/gWdPX.png')
from sklearn.datasets.samples_generator import make_blobs
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring');
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
from sklearn.svm import SVC # "Support Vector Classifier"
clf = SVC(kernel='linear')
clf.fit(X, y)
def plot_svc_decision_function(clf, ax=None):
Plot the decision function for a 2D SVC
if ax is None:
ax = plt.gca()
x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
Y, X = np.meshgrid(y, x)
P = np.zeros_like(X)
for i, xi in enumerate(x):
for j, yj in enumerate(y):
P[i, j] = clf.decision_function([[xi, yj]])[0]  # pass a single sample as a 2D array
# plot the margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf);
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
from IPython.html.widgets import interact
def plot_svm(N=10):
X, y = make_blobs(n_samples=200, centers=2,
random_state=0, cluster_std=0.60)
X = X[:N]
y = y[:N]
clf = SVC(kernel='linear')
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plt.xlim(-1, 4)
plt.ylim(-1, 6)
plot_svc_decision_function(clf, plt.gca())
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none')
interact(plot_svm, N=[10, 200], kernel='linear');
from sklearn.datasets.samples_generator import make_circles
X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf);
r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))
from mpl_toolkits import mplot3d
def plot_3D(elev=30, azim=30):
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='spring')
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
interact(plot_3D, elev=[-90, 90], azip=(-180, 180));
clf = SVC(kernel='rbf')
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
from sklearn.tree import DecisionTreeClassifier
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
def visualize_tree(estimator, X, y, boundaries=True,
xlim=None, ylim=None):
estimator.fit(X, y)
if xlim is None:
xlim = (X[:, 0].min() - 0.1, X[:, 0].max() + 0.1)
if ylim is None:
ylim = (X[:, 1].min() - 0.1, X[:, 1].max() + 0.1)
x_min, x_max = xlim
y_min, y_max = ylim
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),
np.linspace(y_min, y_max, 100))
Z = estimator.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, alpha=0.2, cmap='rainbow')
plt.clim(y.min(), y.max())
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow')
plt.axis('off')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.clim(y.min(), y.max())
# Plot the decision boundaries
def plot_boundaries(i, xlim, ylim):
if i < 0:
return
tree = estimator.tree_
if tree.feature[i] == 0:
plt.plot([tree.threshold[i], tree.threshold[i]], ylim, '-k')
plot_boundaries(tree.children_left[i],
[xlim[0], tree.threshold[i]], ylim)
plot_boundaries(tree.children_right[i],
[tree.threshold[i], xlim[1]], ylim)
elif tree.feature[i] == 1:
plt.plot(xlim, [tree.threshold[i], tree.threshold[i]], '-k')
plot_boundaries(tree.children_left[i], xlim,
[ylim[0], tree.threshold[i]])
plot_boundaries(tree.children_right[i], xlim,
[tree.threshold[i], ylim[1]])
if boundaries:
plot_boundaries(0, plt.xlim(), plt.ylim())
clf = DecisionTreeClassifier()
plt.figure()
visualize_tree(clf, X[:200], y[:200], boundaries=False)
plt.figure()
visualize_tree(clf, X[-200:], y[-200:], boundaries=False)
from sklearn.ensemble import RandomForestClassifier
def fit_randomized_tree(random_state=0):
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=2.0)
clf = DecisionTreeClassifier(max_depth=15)
rng = np.random.RandomState(random_state)
i = np.arange(len(y))
rng.shuffle(i)
visualize_tree(clf, X[i[:250]], y[i[:250]], boundaries=False,
xlim=(X[:, 0].min(), X[:, 0].max()),
ylim=(X[:, 1].min(), X[:, 1].max()))
from IPython.html.widgets import interact
interact(fit_randomized_tree, random_state=[0, 100]);
clf = RandomForestClassifier(n_estimators=100, random_state=0)
visualize_tree(clf, X, y, boundaries=False);
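# A small, hedged check (not part of the original notebook) of the overfitting
# discussion above: compare accuracy on the training data against a held-out split
# for a single decision tree and for the random forest. The 70/30 split is an
# arbitrary assumption; on older scikit-learn the import lives in
# sklearn.cross_validation instead of sklearn.model_selection.
from sklearn.model_selection import train_test_split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [('single tree', DecisionTreeClassifier()),
                    ('random forest', RandomForestClassifier(n_estimators=100, random_state=0))]:
    model.fit(X_tr, y_tr)
    print('%s: train acc %.3f, test acc %.3f'
          % (name, model.score(X_tr, y_tr), model.score(X_te, y_te)))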
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Everything in a single tool - tellurium
Step2: Antimony is a language analogous to SBML (the Systems Biology Markup Language) but human-readable. Therefore the concepts that are present in SBML can be found in antimony too. The definition of a compartment is one such SBML concept!
Step 1
Step3: Ok... But that's just boring. Let's define something truly special and add a second compartment which is located in your previous one
Step4: Awesome! But be aware... circular definitions are not allowed.
Step5: To specify units you can write the value and the unit combined in the same line
Step6: Or simply change the units
Step7: Step 2
Step8: Smart! You already initialized your species. But you forgot to specify the location of your species! Add these lines
Step9: Step 3
Step10: Syntax in antimony is subtle
Step11: Or set a time-dependent event. You may notice that time is special and is the already predefined model time
Step12: Ok, now you may want to test your model and simulate it. You should definitely do this before we continue. Make sure that your model is correctly defined at the top of this jupyter notebook (you may execute the cell again!).
|
<ASSISTANT_TASK:>
Python Code:
import tellurium as te; te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline
antimony_model = '''J0: -> y; -x;J1: -> x; y;x = 1.0;y = 0.2;'''
r = te.loada(antimony_model)
r.simulate(0,100,1000)
r.plot()
import tellurium as te
model = ''''''
model_backup = '''
model example
# UNITS:
#
#unit alienliters = 0.123 liters
#unit fL = 1.e-15 liters
# COMPARTMENTS:
#
compartment cell = 100;
compartment mitochondria = 10;
mitochondria in cell;
#mitochondria has fL;
#cell has fL;
# INITIAL VALUES
#
TOM1 = 10;
A_c = 100;
A_m = 1;
I = 1;
TOM1 in cell;
A_c in cell;
A_m in mitochondria;
I in cell;
# REACTIONS
#
T0: A_c + TOM1 -> 2 A_m + TOM1; kineticLaw;
kineticLaw := k1 * TOM1 * (A_c - A_m)/I
k1 = 0.01
# EVENTS:
#E1: at (A_c < 50): k1 = 0;
#E2: at (time>300): TOM1 = 20, A_c = 120;
end
'''
r = te.loada(model)
def aa(line, cell):
global model
#print line,cell
model = model + str(cell) + "\n"
te.loada(str(model))
get_ipython().register_magic_function(aa, "cell")
%%aa # %%aa: A-ppend to A-ntimony model, only for educational purpose!
%%aa
%%aa
%%aa
%%aa
%%aa
%%aa
%%aa
%%aa
%%aa
def draw(model):
diagram = te.visualization.SBMLDiagram(model.getSBML())
diagram.draw()
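# A hedged sketch (not part of the original notebook): once the %%aa cells above
# have assembled a complete antimony model in the `model` string, it can be
# loaded and simulated directly. The time span and number of points are arbitrary.
r = te.loada(model)
result = r.simulate(0, 500, 100)
r.plot(result)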
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Download the data from the source website if necessary.
Step4: Read the data into a string.
Step5: Build the dictionary and replace rare words with UNK token.
Step6: Function to generate a training batch for the skip-gram model.
Step7: Train a skip-gram model.
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
from sklearn.manifold import TSNE
from itertools import compress
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
Download a file if not present, and make sure it's the right size.
if not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified %s' % filename)
else:
print(statinfo.st_size)
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('text8.zip', 31344016)
def read_data(filename):
Extract the first file enclosed in a zip file as a list of words
with zipfile.ZipFile(filename) as f:
data = tf.compat.as_str(f.read(f.namelist()[0])).split()
return data
words = read_data(filename)
print('Data size %d' % len(words))
vocabulary_size = 50000
def build_dataset(words):
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
if word in dictionary:
index = dictionary[word]
else:
index = 0 # dictionary['UNK']
unk_count = unk_count + 1
data.append(index)
count[0][1] = unk_count
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reverse_dictionary
data, count, dictionary, reverse_dictionary = build_dataset(words)
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10])
del words # Hint to reduce memory.
data_index = 0
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
def generate_batch_cbow(batch_size, skip_window):
global data_index
surrounding_words = 2 * skip_window # words surrounding the target
assert batch_size % surrounding_words == 0
total_labels = batch_size // surrounding_words  # integer division; used below as a shape and range bound
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(total_labels, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(total_labels):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
labels[i, 0] = buffer[target] # label the target
for j in range(surrounding_words):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * surrounding_words + j] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.
surrounding_words = 2 * skip_window
total_labels = batch_size // surrounding_words
graph = tf.Graph()
with graph.as_default(), tf.device('/cpu:0'):
# Input data.
train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[total_labels, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
softmax_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Model.
# Look up embeddings for inputs.
embed = tf.nn.embedding_lookup(embeddings, train_dataset)
mask = np.zeros(batch_size, dtype=np.int32)
mask_index = -1
for i in range(batch_size):
if i % surrounding_words == 0:
mask_index = mask_index + 1
mask[i] = mask_index
embed_filtered = tf.segment_sum(embed, mask)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=embed_filtered,
labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))
# Optimizer.
# Note: The optimizer will optimize the softmax_weights AND the embeddings.
# This is because the embeddings are defined as a variable quantity and the
# optimizer's `minimize` method will by default modify all variable quantities
# that contribute to the tensor it is passed.
# See docs on `tf.train.Optimizer.minimize()` for more details.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
batch_data, batch_labels = generate_batch_cbow(
batch_size, skip_window)
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print('Average loss at step %d: %f' % (step, average_loss))
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = reverse_dictionary[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
final_embeddings = normalized_embeddings.eval()
num_points = 400
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
def plot(embeddings, labels):
assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
pylab.figure(figsize=(15,15)) # in inches
for i, label in enumerate(labels):
x, y = embeddings[i,:]
pylab.scatter(x, y)
pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
ha='right', va='bottom')
pylab.show()
words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)
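# A small, hedged helper (not part of the original notebook): query the nearest
# neighbours of an arbitrary word in the trained embedding space. The word
# 'three' is just an example and must be present in `dictionary`.
def nearest_words(word, top_k=8):
    idx = dictionary[word]
    sims = final_embeddings.dot(final_embeddings[idx])  # cosine similarity (rows are unit norm)
    nearest = (-sims).argsort()[1:top_k + 1]
    return [reverse_dictionary[i] for i in nearest]
print(nearest_words('three'))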
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Since we know that wire AB is the D_wire, wire CD is the V_wire, and wire EF is the H_wire, build new wires with the data
Step2: Build wires from external data files (json)
|
<ASSISTANT_TASK:>
Python Code:
from phantasy.apps import Wire, WireScannerData
from phantasy import Line
import numpy as np
# e.g.1
#data = np.loadtxt('data/case2/data_cor_2.dat')
#direction = (-45, 0.2, 120)
#h0, v0 = 15, -10
# e.g.2
data = np.loadtxt('data/case1/datafiles/demo3.dat')
direction = (225, 0.2, 120)
h0, v0 = 15, 10
hx, hy = data[:,2], data[:,3]
vx, vy = data[:,0], data[:,1]
dx, dy = data[:,4], data[:,5]
h_wire = Wire('H', line=Line((0,0), (1,0)))
# NOTE: define line property to indicate wire position
h_wire.direction = direction
h_wire.xstream = hx
h_wire.ystream = hy
h_wire.line = Line((0, h0), (1, h0))
v_wire = Wire('V', line=Line((0,0), (0,1)))
v_wire.direction = direction
v_wire.xstream = vx
v_wire.ystream = vy
v_wire.line = Line((v0, 0), (v0, 1))
d_wire = Wire('D', line=Line((0,1),(1,0)))
d_wire.direction = direction
d_wire.xstream = dx
d_wire.ystream = dy
wsd = WireScannerData()
wsd.H_wire = h_wire
wsd.V_wire = v_wire
wsd.D_wire = d_wire
res = wsd.process(fit_model='gaussian')
res
res = wsd.process()
res
wsd.report()
x,y,z = wsd.restructure(mcov=res['mcov'], p0=(res['x0'],res['y0']), fac=2, zoom=10, ngrid=500, level=100)
import json
with open('ws-0001.json', 'rb') as fp:
data = json.load(fp)
direction = (225, 0.2, 120)
h0, v0 = 15, 10
h_wire = Wire('H', line=Line((0,0), (1,0)))
h_wire.direction = direction
h_wire.xstream = data['EF']['x']
h_wire.ystream = data['EF']['y']
h_wire.line = Line((0, h0), (1, h0))
v_wire = Wire('V', line=Line((0,0), (0,1)))
v_wire.direction = direction
v_wire.xstream = data['CD']['x']
v_wire.ystream = data['CD']['y']
v_wire.line = Line((v0, 0), (v0, 1))
d_wire = Wire('D', line=Line((0,1), (1,0)))
d_wire.direction = direction
d_wire.xstream = data['AB']['x']
d_wire.ystream = data['AB']['y']
wsd = WireScannerData()
wsd.H_wire = h_wire
wsd.V_wire = v_wire
wsd.D_wire = d_wire
res = wsd.process()
wsd.report()
x,y,z = wsd.restructure(mcov=res['mcov'], p0=(res['x0'],res['y0']), fac=2, zoom=10, ngrid=500, level=100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fitting a decaying oscillation
Step2: Now, use curve_fit to fit this model and determine the estimates and uncertainties for the parameters
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
# YOUR CODE HERE
data = np.load('decay_osc.npz')
time=data['tdata']
y=data['ydata']
dy=data['dy']
plt.errorbar(time, y, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y');
plt.plot(time,y)
assert True # leave this to grade the data import and raw data plot
# YOUR CODE HERE
def ymodel(time, A, lamda, w, sigma):
    # curve_fit expects the independent variable (time) as the first argument
    return A * np.exp(-lamda * time) * np.cos(w * time) + sigma

# p0 kept from the original attempt; it may need tuning for the fit to converge
popt, pcov = opt.curve_fit(ymodel, time, y, p0=[.1, 7, -1, .1], sigma=dy, absolute_sigma=True)
plt.plot(time, y, '.')
plt.plot(time, ymodel(time, *popt))
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
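# A hedged follow-up (assumes popt and pcov from the curve_fit call above):
# report the best-fit values with 1-sigma uncertainties taken from the diagonal
# of the covariance matrix, as the grading cell above asks for.
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(['A', 'lambda', 'omega', 'delta'], popt, perr):
    print('{0} = {1:.3f} +/- {2:.3f}'.format(name, val, err))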
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, let's try to instantiate the best predictor that was found
Step2: Let's see the range of the test set (to check that no data from the recommender test set is in the training set for the predictor)
Step3: Good!
Step4: Let's test the saved predictor... just in case.
Step5: Looks good to me.
Step6: Now, let's predict one step
Step7: Just for fun, let's see some predictions...
|
<ASSISTANT_TASK:>
Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
from sklearn.externals import joblib
best_params_df = pd.read_pickle('../../data/best_params_final_df.pkl')
best_params_df
import predictor.feature_extraction as fe
from predictor.linear_predictor import LinearPredictor
import utils.misc as misc
import predictor.evaluation as ev
ahead_days = 1
# Get some parameters
train_days = int(best_params_df.loc[ahead_days, 'train_days'])
GOOD_DATA_RATIO, \
train_val_time, \
base_days, \
step_days, \
ahead_days, \
SAMPLES_GOOD_DATA_RATIO, \
x_filename, \
y_filename = misc.unpack_params(best_params_df.loc[ahead_days,:])
pid = 'base{}_ahead{}'.format(base_days, ahead_days)
# Get the datasets
x_train = pd.read_pickle('../../data/x_{}.pkl'.format(pid))
y_train = pd.read_pickle('../../data/y_{}.pkl'.format(pid))
x_test = pd.read_pickle('../../data/x_{}_test.pkl'.format(pid)).sort_index()
y_test = pd.DataFrame(pd.read_pickle('../../data/y_{}_test.pkl'.format(pid))).sort_index()
# Let's cut the training set to use only the required number of samples
end_date = x_train.index.levels[0][-1]
start_date = fe.add_market_days(end_date, -train_days)
x_sub_df = x_train.loc[(slice(start_date,None),slice(None)),:]
y_sub_df = pd.DataFrame(y_train.loc[(slice(start_date,None),slice(None))])
# Create the estimator and train
estimator = LinearPredictor()
estimator.fit(x_sub_df, y_sub_df)
# Get the training and test predictions
y_train_pred = estimator.predict(x_sub_df)
y_test_pred = estimator.predict(x_test)
# Get the training and test metrics for each symbol
metrics_train = ev.get_metrics_df(y_sub_df, y_train_pred)
metrics_test = ev.get_metrics_df(y_test, y_test_pred)
# Show the mean metrics
metrics_df = pd.DataFrame(columns=['train', 'test'])
metrics_df['train'] = metrics_train.mean()
metrics_df['test'] = metrics_test.mean()
print('Mean metrics: \n{}\n{}'.format(metrics_df,'-'*70))
# Plot the metrics in time
metrics_train_time = ev.get_metrics_in_time(y_sub_df, y_train_pred, base_days + ahead_days)
metrics_test_time = ev.get_metrics_in_time(y_test, y_test_pred, base_days + ahead_days)
plt.plot(metrics_train_time[2], metrics_train_time[0], label='train', marker='.')
plt.plot(metrics_test_time[2], metrics_test_time[0], label='test', marker='.')
plt.title('$r^2$ metrics')
plt.legend()
plt.figure()
plt.plot(metrics_train_time[2], metrics_train_time[1], label='train', marker='.')
plt.plot(metrics_test_time[2], metrics_test_time[1], label='test', marker='.')
plt.title('MRE metrics')
plt.legend()
print('The first training day for the predictor is: {}.'.format(start_date))
print('The last training day for the predictor is: {}.'.format(fe.add_market_days(end_date, base_days)))
print('The testing data for the recommender')
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
total_data_test_df.head()
print('The first TEST day for the recommender is: {}'.format(total_data_test_df.index[-0]))
joblib.dump(estimator, '../../data/best_predictor.pkl')
estimator_reloaded = joblib.load('../../data/best_predictor.pkl')
# Get the training and test predictions
y_train_pred = estimator_reloaded.predict(x_sub_df)
y_test_pred = estimator_reloaded.predict(x_test)
# Get the training and test metrics for each symbol
metrics_train = ev.get_metrics_df(y_sub_df, y_train_pred)
metrics_test = ev.get_metrics_df(y_test, y_test_pred)
# Show the mean metrics
metrics_df = pd.DataFrame(columns=['train', 'test'])
metrics_df['train'] = metrics_train.mean()
metrics_df['test'] = metrics_test.mean()
print('Mean metrics: \n{}\n{}'.format(metrics_df,'-'*70))
# Plot the metrics in time
metrics_train_time = ev.get_metrics_in_time(y_sub_df, y_train_pred, base_days + ahead_days)
metrics_test_time = ev.get_metrics_in_time(y_test, y_test_pred, base_days + ahead_days)
plt.plot(metrics_train_time[2], metrics_train_time[0], label='train', marker='.')
plt.plot(metrics_test_time[2], metrics_test_time[0], label='test', marker='.')
plt.title('$r^2$ metrics')
plt.legend()
plt.figure()
plt.plot(metrics_train_time[2], metrics_train_time[1], label='train', marker='.')
plt.plot(metrics_test_time[2], metrics_test_time[1], label='test', marker='.')
plt.title('MRE metrics')
plt.legend()
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()[['Close', 'Volume']]
data_train_df.head()
def generate_samples(data_df):
start_date = data_df.index[0]
close_sample = pd.DataFrame(data_df['Close'].values, columns=[start_date]).T
close_sample = close_sample / close_sample.iloc[0,0]
volume_sample = pd.DataFrame(data_df['Volume'].values, columns=[start_date]).T
volume_sample = volume_sample / volume_sample.iloc[0,0]
return close_sample, volume_sample
data_df = data_train_df[:112]
start_date = data_df.index[0]
close_sample = pd.DataFrame(data_df['Close'].values, columns=[start_date]).T
close_sample = close_sample / close_sample.iloc[0,0]
volume_sample = pd.DataFrame(data_df['Volume'].values, columns=[start_date]).T
volume_sample = volume_sample / volume_sample.iloc[0,0]
close_sample
close_sample, volume_sample = generate_samples(data_df)
close_sample
volume_sample
history_df = data_train_df[:112]
estimator_close = joblib.load('../../data/best_predictor.pkl')
estimator_volume = joblib.load('../../data/best_volume_predictor.pkl')
h_history_df = history_df.copy()
def predict_one_step(h_history_df, keep=False):
close_sample, volume_sample = generate_samples(h_history_df)
estimated_close = estimator_close.predict(close_sample).iloc[0,0] * h_history_df['Close'].iloc[0]
estimated_volume = estimator_volume.predict(volume_sample).iloc[0,0] * h_history_df['Volume'].iloc[0]
predicted_date = fe.add_market_days(h_history_df.index[-1], 1)
h_history_df = h_history_df.drop(h_history_df.index[0])
h_history_df.loc[predicted_date,:] = {'Close': estimated_close,'Volume': estimated_volume}
return h_history_df
close_sample, volume_sample = generate_samples(h_history_df)
estimated_close = estimator_close.predict(close_sample).iloc[0,0] * h_history_df['Close'].iloc[0]
estimated_volume = estimator_volume.predict(volume_sample).iloc[0,0] * h_history_df['Volume'].iloc[0]
estimator_close.predict(close_sample).iloc[0,0]
predicted_date = fe.add_market_days(h_history_df.index[-1], 1)
predicted_date
history_df
h_history_df = h_history_df.drop(h_history_df.index[0])
h_history_df.loc[predicted_date,:] = {'Close': estimated_close,'Volume': estimated_volume}
h_history_df
h_history_df = history_df.copy()
for i in range(20):
h_history_df = predict_one_step(h_history_df.copy())
h_history_df = history_df.copy()
predicted_df = pd.DataFrame()
for i in range(112):
h_history_df = predict_one_step(h_history_df.copy())
predicted_df = predicted_df.append(h_history_df.iloc[-1])
predicted_df
real_df = history_df.append(data_train_df[112:224])
plt.plot(real_df.index, real_df['Close'], 'b', label='real')
plt.plot(predicted_df.index, predicted_df['Close'], 'r', label='predicted')
plt.legend()
plt.show()
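# A hedged extra check (not part of the original notebook): quantify how far the
# iterated one-step predictions drift from the real series on the dates that the
# two frames share.
common_dates = predicted_df.index.intersection(real_df.index)
if len(common_dates) > 0:
    print('r^2 over the shared window: %.3f' % r2_score(
        real_df.loc[common_dates, 'Close'], predicted_df.loc[common_dates, 'Close']))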
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: That's so much line noise. Like what. Look at that default assignment. Like, I get why the annotations are inlined with the signature. But they're just ugly. Meanwhile, here's a similar Haskell function with the type signature
Step2: Nice. We retain that information in a dictionary out of the way. What if we could combine these two things?
Step3: I like this. It stays out of the way, it uses a decorator (never enough decorators). Let's check out the __annotations__ dict
Step4: Uh....Well, it didn't fill anything in. What did it do? Well, it attaches the signature to the docstring...
Step5: That's...nice. I'm actually perfectly content with this solution currently. But wouldn't it be cool? (this is a phrase that only precedes mischief and trouble). Wouldn't it be cool if that Haskell type was somehow transformed into a Python annotations dictionary and on the other end we'd be able to inspect the annotation and get this
Step6: However, this is complicated because what if we had a higher order function? The Haskell type signature looks like this
Step7: Oops. But this works
|
<ASSISTANT_TASK:>
Python Code:
def my_func(a: int, b: str = 'hello') -> tuple:
return (a, b)
my_func(1, 'wut')
my_func.__annotations__
from pynads.utils.decorators import annotate
@annotate(type="Int -> String -> (Int, String)")
def my_func(a, b='hello'):
return (a, b)
my_func.__annotations__
print(my_func.__doc__)
{'a': 'Int',
'b': 'String',
'returns': '(Int, String)'
}
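# A rough, hedged sketch (not part of pynads) of the idea above: naively split a
# Haskell-style type string on '->' and zip it against the parameter names, with
# the final piece stored under 'returns'. As the text notes, this naive split
# breaks for higher-order types such as (a -> b) -> [a] -> [b].
def parse_type(params, type_str):
    parts = [p.strip() for p in type_str.split('->')]
    notes = dict(zip(params, parts[:-1]))
    notes['returns'] = parts[-1]
    return notes
parse_type(['a', 'b'], "Int -> String -> (Int, String)")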
from pynads.do import do, mreturn
from pynads import List
@do(monad=List)
def chessboard(ranks, files):
r = yield List(*ranks)
f = yield List(*files)
mreturn((r,f))
#chessboard('abcdefgh', range(1,9))
def chessboard(ranks, files):
return List(*ranks) >> (lambda r:
List(*files) >> (lambda f:
List.unit((r,f)) ))
chessboard('abcdefgh', range(1,9))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's add on an activation signal to both voxels
Step2: How can we address this problem? A general solution is to first run a general linear model to remove the task effect and then compute the correlation on the residuals.
Step3: What happens if we get the hemodynamic model wrong? Let's use the temporal derivative model to generate an HRF that is lagged compared to the canonical.
Step4: Let's see if using a more flexible basis set, like an FIR model, will allow us to get rid of the task-induced correlation.
|
<ASSISTANT_TASK:>
Python Code:
import os,sys
import numpy
%matplotlib inline
import matplotlib.pyplot as plt
sys.path.insert(0,'../utils')
from mkdesign import create_design_singlecondition
from nipy.modalities.fmri.hemodynamic_models import spm_hrf,compute_regressor
from make_data import make_continuous_data
data=make_continuous_data(N=200)
print('correlation without activation:',numpy.corrcoef(data.T)[0,1])
plt.plot(range(data.shape[0]),data[:,0],color='blue')
plt.plot(range(data.shape[0]),data[:,1],color='red')
design_ts,design=create_design_singlecondition(blockiness=1.0,offset=30,blocklength=20,deslength=data.shape[0])
regressor,_=compute_regressor(design,'spm',numpy.arange(0,len(design_ts)))
regressor*=50.
data_act=data+numpy.hstack((regressor,regressor))
plt.plot(range(data.shape[0]),data_act[:,0],color='blue')
plt.plot(range(data.shape[0]),data_act[:,1],color='red')
print ('correlation with activation:',numpy.corrcoef(data_act.T)[0,1])
X=numpy.vstack((regressor.T,numpy.ones(data.shape[0]))).T
beta_hat=numpy.linalg.inv(X.T.dot(X)).dot(X.T).dot(data_act)
y_est=X.dot(beta_hat)
resid=data_act - y_est
print ('correlation of residuals:',numpy.corrcoef(resid.T)[0,1])
regressor_td,_=compute_regressor(design,'spm_time',numpy.arange(0,len(design_ts)))
regressor_lagged=regressor_td.dot(numpy.array([1,0.5]))*50
plt.plot(regressor_lagged)
plt.plot(regressor)
data_lagged=data+numpy.vstack((regressor_lagged,regressor_lagged)).T
beta_hat_lag=numpy.linalg.inv(X.T.dot(X)).dot(X.T).dot(data_lagged)
plt.subplot(211)
y_est_lag=X.dot(beta_hat_lag)
plt.plot(y_est)
plt.plot(data_lagged)
resid=data_lagged - y_est_lag
print ('correlation of residuals:',numpy.corrcoef(resid.T)[0,1])
plt.subplot(212)
plt.plot(resid)
regressor_fir,_=compute_regressor(design,'fir',numpy.arange(0,len(design_ts)),fir_delays=range(28))
regressor_fir.shape
X_fir=numpy.vstack((regressor_fir.T,numpy.ones(data.shape[0]))).T
beta_hat_fir=numpy.linalg.inv(X_fir.T.dot(X_fir)).dot(X_fir.T).dot(data_lagged)
plt.subplot(211)
y_est_fir=X_fir.dot(beta_hat_fir)
plt.plot(y_est)
plt.plot(data_lagged)
resid=data_lagged - y_est_fir
print ('correlation of residuals:',numpy.corrcoef(resid.T)[0,1])
plt.subplot(212)
plt.plot(resid)
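# A hedged convenience wrapper (not part of the original) for the procedure used
# above: regress a design matrix out of the data and return the residual correlation,
# so the canonical-HRF and FIR models can be compared side by side.
def residual_correlation(design_matrix, data_matrix):
    beta = numpy.linalg.pinv(design_matrix).dot(data_matrix)
    resid_matrix = data_matrix - design_matrix.dot(beta)
    return numpy.corrcoef(resid_matrix.T)[0, 1]
print('canonical HRF residual correlation:', residual_correlation(X, data_lagged))
print('FIR residual correlation:', residual_correlation(X_fir, data_lagged))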
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As shown earlier in the slides,
Step2: As you can see above, product spaces (probability spaces) get large very quickly.
Step3: A Function on the Outcome Space
Step4: Functions of Random Variables,
Step5: There are 126 values of $\omega$ for which $S(\omega) = 10$. Since all the $\omega$ are equally likely, the chance that $S$ has the value 10 is 126/7776.
Step6: The contents of the table – all the possible values of the random variable, along with all their probabilities – are called the probability distribution of $S$ , or just distribution of $S$ for short. The distribution shows how the total probability of 100% is distributed over all the possible values of $S$ .
Step7: That's 1 in a computing environment, and it is true in general for the distribution of any random variable.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from itertools import product
# from IPython.core.display import HTML
# css = open('media/style-table.css').read() + open('media/style-notebook.css').read()
# HTML('<style>{}</style>'.format(css))
one_toss = np.array(['H', 'T'])
two_tosses = list(product(one_toss, repeat=2))
two_tosses
# For three tosses, just change the number of repetitions:
three_tosses = list(product(one_toss, repeat=3))
three_tosses
three_toss_probs = (1/8)*np.ones(8)
three_toss_space = pd.DataFrame({
'Omega':three_tosses,
'P(omega)':three_toss_probs
})
three_toss_space
die = np.arange(1, 7, 1)
five_rolls = list(product(die, repeat=5))
# five_rolls = [list(i) for i in product(die, repeat=5)]
five_roll_probs = (1/6**5)**np.ones(6**5)
five_roll_space = pd.DataFrame({
'Omega':five_rolls,
'P(omega)':five_roll_probs
})
five_roll_space
five_rolls_sum = pd.DataFrame({
'Omega':five_rolls,
'S(omega)':five_roll_space['Omega'].map(lambda val: sum(val)),
'P(omega)':five_roll_probs
})
five_rolls_sum
five_rolls_sum[five_rolls_sum['S(omega)']==10]
dist_S = five_rolls_sum.drop('Omega', axis=1).groupby('S(omega)', as_index=False).sum()
dist_S
dist_S.iloc[:,1].sum()
dist_S.iloc[:,0], dist_S.iloc[:,1]
s = dist_S.iloc[:,0]
p_s = dist_S.iloc[:,1]
dist_S = pd.concat([s,p_s],axis=1)
dist_S
dist_S.plot(x="S(omega)",y="P(omega)", kind="bar")
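# A hedged follow-up (not part of the original notebook): use the distribution of S
# to compute its expectation and variance directly from the table above.
expected_S = (dist_S['S(omega)'] * dist_S['P(omega)']).sum()
var_S = ((dist_S['S(omega)'] - expected_S) ** 2 * dist_S['P(omega)']).sum()
print('E[S] =', expected_S, ' Var[S] =', var_S)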
from prob140 import Plot
!pip install sympy
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading the Data
Step2: Sanitizing the Data
Step3: Analysis and Visualization (V1)
Step4: Analysis and Visualization (V2)
Step5: HOLY SHIT
|
<ASSISTANT_TASK:>
Python Code:
# Imports
import sys
import pandas as pd
import csv
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (20.0, 10.0)
# %load util.py
#!/usr/bin/python
# Util file to import in all of the notebooks to allow for easy code re-use
# Calculate Percent of Attendees that did not speak
def percent_silent(df):
total = len(df)
silent = 0
for row in df.iteritems():
if row[1] == 0:
silent = silent + 1
percent = {}
percent['TOTAL'] = total
percent['SILENT'] = silent
percent['VERBOSE'] = total - silent
return percent
# Calculate Percent of Attendees that left
def percent_left(df):
total = len(df)
left = 0
for row in df.iteritems():
if row[1] == 0:
left = left + 1
percent = {}
percent['TOTAL'] = total
percent['LEFT'] = left
percent['STAYED'] = total - left
return percent
# Calculate Percent of Attendees along gender
def percent_gender(df):
total = len(df)
female = 0
for row in df.iteritems():
if row[1] == 1:
female = female + 1
percent = {}
percent['TOTAL'] = total
percent['FEMALE'] = female
percent['MALE'] = total - female
return percent
# Calculate Percent of Talking points by
def percent_talking_gender(df):
total = 0
male = 0
female = 0
for talks, gender in df.itertuples(index=False):
if talks > 0:
total = total + 1
if gender == 0:
male = male + 1
elif gender == 1:
female = female + 1
percent = {}
percent['TOTAL'] = total
percent['FEMALE'] = female
percent['MALE'] = male
return percent
# Read
data = pd.read_csv('data/1_solid.csv')
# Display
data
# Convert GENDER to Binary (sorry, i know...)
data.loc[data["GENDER"] == "M", "GENDER"] = 0
data.loc[data["GENDER"] == "F", "GENDER"] = 1
# Convert STAYED to 1 and Left/Late to 0
data.loc[data["STAYED"] == "Y", "STAYED"] = 1
data.loc[data["STAYED"] == "N", "STAYED"] = 0
data.loc[data["STAYED"] == "L", "STAYED"] = 0
# We should now see the data in numeric values
data
# Run Describe to give us some basic Min/Max/Mean/Std values
data.describe()
# Run Value_Counts in order to see some basic grouping by attribute
vc_talks = data['TALKS'].value_counts()
vc_talks
vc_gender = data['GENDER'].value_counts()
vc_gender
vc_stayed = data['STAYED'].value_counts()
vc_stayed
# Now let's do some basic plotting with MatPlotLib
data.plot()
data.plot(kind='bar')
fig1, ax1 = plt.subplots()
ax1.pie(data['TALKS'], autopct='%1.f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
data_hostless = data.drop(data.index[0])
data_hostless.head()
data_hostless.describe()
dh_vc_talks = data_hostless['TALKS'].value_counts()
dh_vc_talks
dh_vc_gender = data_hostless['GENDER'].value_counts()
dh_vc_gender
dh_vc_stayed = data_hostless['STAYED'].value_counts()
dh_vc_stayed
data_hostless.plot()
data_hostless.plot(kind='bar')
fig1, ax1 = plt.subplots()
ax1.pie(data_hostless['TALKS'], autopct='%1.f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
# Percentage of attendees that were silent during the talk
silent = percent_silent(data['TALKS'])
silent
fig1, ax1 = plt.subplots()
sizes = [silent['SILENT'], silent['VERBOSE']]
labels = 'Silent', 'Talked'
explode = (0.05, 0)
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.0f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
# Percentage of attendees that left early during the talk
left = percent_left(data['STAYED'])
left
fig1, ax1 = plt.subplots()
sizes = [left['LEFT'], left['STAYED']]
labels = 'Left', 'Stayed'
explode = (0.1, 0)
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.0f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
# Percentage of attendees that were Male vs. Female (see notes above around methodology)
gender = percent_gender(data['GENDER'])
gender
fig1, ax1 = plt.subplots()
sizes = [gender['FEMALE'], gender['MALE']]
labels = 'Female', 'Male'
explode = (0.1, 0)
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.0f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
# Calculate Percent of Talking points by GENDER
distribution = percent_talking_gender(data[['TALKS','GENDER']])
distribution
fig1, ax1 = plt.subplots()
sizes = [distribution['FEMALE'], distribution['MALE']]
labels = 'Female Speakers', 'Male Speakers'
explode = (0.1, 0)
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.0f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
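# A hedged extra cut of the data (not part of the original notebook): compare the
# share of each gender that spoke at all, which controls for the uneven attendance
# numbers seen above. GENDER is 0 for male and 1 for female after the earlier recoding.
spoke = data[data['TALKS'] > 0]['GENDER'].value_counts()
attended = data['GENDER'].value_counts()
print('fraction of men who spoke:   %.2f' % (spoke.get(0, 0) / float(attended.get(0, 1))))
print('fraction of women who spoke: %.2f' % (spoke.get(1, 0) / float(attended.get(1, 1))))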
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Genera archivos con la extensión .ipynb que se guardan en el directorio local.
Step2: La representación de los objetos es más legible
Step3: y contrasten contra la siguiente representación
Step4: Puedo probar "pedacitos" de código (por ejemplo de tutoriales)
Step5: Puedo ver los errores o tracebacks de una manera simple y visualmente informativa
Step6: Otras ventajas a la hora de trabajar con código
Step8: Ayuda integrada
Step9: "Atajos" interactivos (aliases, magics)
Step11: Pero tambien tenemos cell magics (%%) a través de las cuales el notebook soporta correr código en otros lenguajes (entre otras tareas
Step12: Puedo interaccionar fácilmente con el sistema operativo
Step13: Por ejemplo, puedo generar figuras programáticamente utilizando matplotlib
Step14: Documentar el flujo de trabajo.
Step16: Imágenes
Step17: Video
Step19: HTML
Step20: Websites
Step22: Music
Step24: Javascript
Step25: Loading remote code
Step27: Rendering LaTeX
Step28: Pandas
Step29: The IPython kernel/client model
Step30: 4. nbconvert
Step31: Slides
Step32: More info
|
<ASSISTANT_TASK:>
Python Code:
from IPython.nbformat import current
with open('test_slides.ipynb') as f:
nb = current.read(f,'json')
nb.worksheets[0].cells[18:19]
print "Hola Scipy..."
from numpy.random import randn
data = {i : randn() for i in range(10)}
data
>>> from numpy.random import randn
>>> data = {i : randn() for i in range(7)}
>>> print data # small trick to see it the way it would look in the console
>>> the_world_is_flat = 1
>>> if the_world_is_flat:
... print "Hello Scipy Conference!"
for x in range(10):
print(x)
In [5]: [x*x for x in range(7)]
%run non_existent_file
x = 1
y = 4
z = y/(1-x)
z
import scipy as sp
sp.
?
def suma(a, b):
Demo of a function that adds things...
return a + b
suma?
suma??
%magic
%lsmagic
%quickref
%%bash
mkdir temporal/
touch temporal/mi_programa.py
cat temporal/mi_programa.py
%%writefile temporal/mi_programa.py
def mi_funcion(x, y):
Demo
return x / y
x = 4
y = 2
resultado = mi_funcion(x, y)
%run temporal/mi_programa.py
resultado
!pwd
files = !ls
print "My current directory's files:"
print files
!echo $files
!echo {files[0].upper()}
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 3*np.pi, 500)
y = np.sin(x**2)
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(x, y, 'r')
axes.set_xlabel('x')
axes.set_ylabel('sin x^2')
axes.set_title('Una figura simple');
from IPython.display import display
from IPython.display import Image
Image(filename='logo.png')
from IPython.display import SVG
SVG(filename='python-logo.svg')
from IPython.display import YouTubeVideo
YouTubeVideo('MIAKOMzRl1I')
from IPython.display import HTML
s = <table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
h = HTML(s)
display(h)
from IPython.display import IFrame
IFrame('http://www.damian.oquanta.info', 800, 600)
%load soln/soundcloud.py
from IPython.display import HTML
h = HTML(<iframe width="100%" height="166" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=http%3A%2F%2Fapi.soundcloud.com%2Ftracks%2F94543639"></iframe>)
display(h)
from IPython.display import Javascript
# fetch d3 from cloudflare
Javascript($.getScript('//cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js'))
%%html
<style type="text/css">
circle {
fill: rgb(31, 119, 180);
fill-opacity: .25;
stroke: rgb(31, 119, 180);
stroke-width: 1px;
}
.leaf circle {
fill: #ff7f0e;
fill-opacity: 1;
}
text {
font: 10px sans-serif;
}
</style>
%%javascript
// This unhides the output area
container.show();
// element is the jQuery element we will append to
var e = element.get(0);
var diameter = 600,
format = d3.format(",d");
var pack = d3.layout.pack()
.size([diameter - 4, diameter - 4])
.value(function(d) { return d.size; });
var svg = d3.select(e).append("svg")
.attr("width", diameter)
.attr("height", diameter)
.append("g")
.attr("transform", "translate(2,2)");
d3.json("flare.json", function(error, root) {
var node = svg.datum(root).selectAll(".node")
.data(pack.nodes)
.enter().append("g")
.attr("class", function(d) { return d.children ? "node" : "leaf node"; })
.attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });
node.append("title")
.text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); });
node.append("circle")
.attr("r", function(d) { return d.r; });
node.filter(function(d) { return !d.children; }).append("text")
.attr("dy", ".3em")
.style("text-anchor", "middle")
.text(function(d) { return d.name.substring(0, d.r / 3); });
});
d3.select(self.frameElement).style("height", diameter + "px");
%load http://matplotlib.sourceforge.net/mpl_examples/api/collections_demo.py
#!/usr/bin/env python
'''Demonstration of LineCollection, PolyCollection, and
RegularPolyCollection with autoscaling.
For the first two subplots, we will use spirals. Their
size will be set in plot units, not data units. Their positions
will be set in data units by using the "offsets" and "transOffset"
kwargs of the LineCollection and PolyCollection.
The third subplot will make regular polygons, with the same
type of scaling and positioning as in the first two.
The last subplot illustrates the use of "offsets=(xo,yo)",
that is, a single tuple instead of a list of tuples, to generate
successively offset curves, with the offset given in data
units. This behavior is available only for the LineCollection.
'''
import matplotlib.pyplot as plt
from matplotlib import collections, transforms
from matplotlib.colors import colorConverter
import numpy as np
nverts = 50
npts = 100
# Make some spirals
r = np.array(range(nverts))
theta = np.array(range(nverts)) * (2*np.pi)/(nverts-1)
xx = r * np.sin(theta)
yy = r * np.cos(theta)
spiral = list(zip(xx,yy))
# Make some offsets
rs = np.random.RandomState([12345678])
xo = rs.randn(npts)
yo = rs.randn(npts)
xyo = list(zip(xo, yo))
# Make a list of colors cycling through the rgbcmyk series.
colors = [colorConverter.to_rgba(c) for c in ('r','g','b','c','y','m','k')]
fig, axes = plt.subplots(2,2)
((ax1, ax2), (ax3, ax4)) = axes # unpack the axes
col = collections.LineCollection([spiral], offsets=xyo,
transOffset=ax1.transData)
trans = fig.dpi_scale_trans + transforms.Affine2D().scale(1.0/72.0)
col.set_transform(trans) # the points to pixels transform
# Note: the first argument to the collection initializer
# must be a list of sequences of x,y tuples; we have only
# one sequence, but we still have to put it in a list.
ax1.add_collection(col, autolim=True)
# autolim=True enables autoscaling. For collections with
# offsets like this, it is neither efficient nor accurate,
# but it is good enough to generate a plot that you can use
# as a starting point. If you know beforehand the range of
# x and y that you want to show, it is better to set them
# explicitly, leave out the autolim kwarg (or set it to False),
# and omit the 'ax1.autoscale_view()' call below.
# Make a transform for the line segments such that their size is
# given in points:
col.set_color(colors)
ax1.autoscale_view() # See comment above, after ax1.add_collection.
ax1.set_title('LineCollection using offsets')
# The same data as above, but fill the curves.
col = collections.PolyCollection([spiral], offsets=xyo,
transOffset=ax2.transData)
trans = transforms.Affine2D().scale(fig.dpi/72.0)
col.set_transform(trans) # the points to pixels transform
ax2.add_collection(col, autolim=True)
col.set_color(colors)
ax2.autoscale_view()
ax2.set_title('PolyCollection using offsets')
# 7-sided regular polygons
col = collections.RegularPolyCollection(7,
sizes = np.fabs(xx)*10.0, offsets=xyo,
transOffset=ax3.transData)
trans = transforms.Affine2D().scale(fig.dpi/72.0)
col.set_transform(trans) # the points to pixels transform
ax3.add_collection(col, autolim=True)
col.set_color(colors)
ax3.autoscale_view()
ax3.set_title('RegularPolyCollection using offsets')
# Simulate a series of ocean current profiles, successively
# offset by 0.1 m/s so that they form what is sometimes called
# a "waterfall" plot or a "stagger" plot.
nverts = 60
ncurves = 20
offs = (0.1, 0.0)
yy = np.linspace(0, 2*np.pi, nverts)
ym = np.amax(yy)
xx = (0.2 + (ym-yy)/ym)**2 * np.cos(yy-0.4) * 0.5
segs = []
for i in range(ncurves):
xxx = xx + 0.02*rs.randn(nverts)
curve = list(zip(xxx, yy*100))
segs.append(curve)
col = collections.LineCollection(segs, offsets=offs)
ax4.add_collection(col, autolim=True)
col.set_color(colors)
ax4.autoscale_view()
ax4.set_title('Successive data offsets')
ax4.set_xlabel('Zonal velocity component (m/s)')
ax4.set_ylabel('Depth (m)')
# Reverse the y-axis so depth increases downward
ax4.set_ylim(ax4.get_ylim()[::-1])
plt.show()
from IPython.display import Latex
Latex(r\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\,
\frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\,
\frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray})
import pandas
pandas.set_option('display.notebook_repr_html', True)
%%writefile data.csv
Date,Open,High,Low,Close,Volume,Adj Close
2012-06-01,569.16,590.00,548.50,584.00,14077000,581.50
2012-05-01,584.90,596.76,522.18,577.73,18827900,575.26
2012-04-02,601.83,644.00,555.00,583.98,28759100,581.48
2012-03-01,548.17,621.45,516.22,599.55,26486000,596.99
2012-02-01,458.41,547.61,453.98,542.44,22001000,540.12
2012-01-03,409.40,458.24,409.00,456.48,12949100,454.53
df = pandas.read_csv('data.csv')
df
%connect_info
%qtconsole
%%bash
ipython nbconvert IPython_fu_talk.ipynb --to html
import subprocess
command = "ipython nbconvert test_slides.ipynb --to slides --post serve"
subprocess.Popen(command.split())
IFrame('http://www.damian.oquanta.info/posts/blogging-with-nikola-and-ipython.html', 800, 600)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Names scores
Step2: Non-abundant sums
Step3: Lexicographic permutations
Step4: 1000-digit Fibonacci number
Step5: Reciprocal cycles
Step6: Quadratic primes
Step7: Number spiral diagonals
Step8: Distinct powers
Step9: Digit fifth powers
|
<ASSISTANT_TASK:>
Python Code:
from euler import timer, Seq
from math import sqrt
def d(n):
return (range(2, int(sqrt(n))+1)
>> Seq.filter (lambda x: n%x == 0)
>> Seq.map (lambda x: x if x*x == n else n/x + x)
>> Seq.sum) + 1
def isAmicable(a):
b = d(a)
return (a == d(b)) and (a <> b)
def p021():
return (range(1, 10001)
>> Seq.filter(isAmicable)
>> Seq.sum)
timer(p021)
from euler import timer, Seq
def score(s):
return s >> Seq.map(lambda x: ord(x) - 64) >> Seq.sum
def p022():
return (
open('data/p022.txt').read().split(',')
>> Seq.map(lambda x: x.strip('"'))
>> Seq.sort
>> Seq.mapi(lambda (i,x): score(x)*(i+1))
>> Seq.sum)
timer(p022)
from euler import FactorInteger, Seq, timer
from operator import mul
def divisor_sum(n):
return (
FactorInteger(n)
>> Seq.map(lambda (p,a): (p**(a+1) - 1)/(p-1))
>> Seq.reduce(mul)
) - n
def p023():
max_n = 28123
abundants = range(12, max_n+1) >> Seq.filter(lambda n: n < divisor_sum(n)) >> Seq.toList
abundant_sums = (abundants
>> Seq.collect(lambda a: abundants
>> Seq.map(lambda b: a+b)
>> Seq.takeWhile(lambda x: x < (max_n+1)))
>> Seq.toSet)
return max_n * (max_n + 1) / 2 - sum(abundant_sums)
timer(p023)
from math import factorial
from euler import timer
def p024():
numbers = range(10)
def loop(remainder, acc):
k = len(numbers) - 1
if k==0:
return acc + str(numbers[0])
else:
next = numbers[remainder / factorial(k)]
numbers.remove(next)
return loop((remainder%(factorial(k))),(acc + str(next)))
return loop(999999,"")
timer(p024)
from math import log10
from euler import timer, Seq
def p025():
return (
Seq.unfold(lambda (a,b):(b, (b,a+b)), (0,1))
>> Seq.findIndex(lambda x: log10(x) > 999)
) + 1
timer(p025)
from euler import timer, Seq
def cycle(denom):
if denom==2 or denom==5:
return 0
elif denom%2==0:
return cycle(denom/2)
elif denom%5==0:
return cycle(denom/5)
else:
return (
Seq.initInfinite(lambda x: x+1)
>> Seq.map (lambda x: 10 ** x - 1)
>> Seq.findIndex(lambda x: x%denom==0)
) + 1
def p026():
return range(1, 1001) >> Seq.maxBy(cycle)
timer(p026)
from euler import is_prime, Seq, timer, primes
def primes_generated(x):
a,b = x
return (
Seq.initInfinite(lambda n: n*n + a*n + b)
>> Seq.takeWhile(is_prime)
>> Seq.length)
def p027():
primes_1000 = (primes()
>> Seq.takeWhile(lambda x: x<1000)
>> Seq.toList)
a,b = ([(a,b) for a in range(-999,1000)
for b in primes_1000]
>> Seq.maxBy(primes_generated))
return a*b
timer(p027)
from euler import timer
def p028():
n = 1001
def collect(depth, start, acc):
if (depth > n/2):
return acc
else:
return collect(depth+1, start+8*depth, acc+4*start+20*depth)
return collect(1,1,1)
timer(p028)
from euler import timer
def p029():
return (set(a **b for a in range(2,101) for b in range(2,101))
>> Seq.length)
timer(p029)
from euler import timer
def p030():
def is_sum(n):
return (
str(n)
>> Seq.map(lambda x: int(x) ** 5)
>> Seq.sum
) == n
max_n = (
((Seq.unfold(lambda x: (x, x+1), 1)
>> Seq.find(lambda x: 10 ** x - 1 > x * 9 ** 5)
) - 1) * 9 ** 5)
return (
range(2, max_n + 1)
>> Seq.filter(is_sum)
>> Seq.sum)
timer(p030)
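# A hedged, dependency-free cross-check (not part of the original) of problem 25:
# find the index of the first Fibonacci number with 1000 digits by direct iteration,
# without the euler/Seq helpers used above.
def p025_check():
    a, b, i = 1, 1, 2
    while len(str(b)) < 1000:
        a, b, i = b, a + b, i + 1
    return i
timer(p025_check)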
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need to create metabolites as well. If we were using an existing model, we could use Model.get_by_id to get the appropriate Metabolite objects instead.
Step2: Side note
Step3: The gene_reaction_rule is a boolean representation of the gene requirements for this reaction to be active as described in Schellenberger et al 2011 Nature Protocols 6(9)
Step4: At this point in time, the model is still empty
Step5: We will add the reaction to the model, which will also add all associated metabolites and genes
Step6: We can iterate through the model objects to observe the contents
Step7: Objective
Step8: The created objective is a symbolic algebraic expression and we can examine it by printing it
Step9: which here shows that the solver will maximize the flux in the forward direction.
Step10: The model is valid with no COBRA or SBML errors or warnings.
Step11: Boundary reactions are defined on metabolites. First we add two metabolites to the model then
Step12: To create a demand reaction instead of a sink use type demand instead of sink.
Step13: A neat trick to get all metabolic reactions is
|
<ASSISTANT_TASK:>
Python Code:
from cobra import Model, Reaction, Metabolite
model = Model('example_model')
reaction = Reaction('R_3OAS140')
reaction.name = '3 oxoacyl acyl carrier protein synthase n C140 '
reaction.subsystem = 'Cell Envelope Biosynthesis'
reaction.lower_bound = 0. # This is the default
reaction.upper_bound = 1000. # This is the default
ACP_c = Metabolite(
'ACP_c',
formula='C11H21N2O7PRS',
name='acyl-carrier-protein',
compartment='c')
omrsACP_c = Metabolite(
'M3omrsACP_c',
formula='C25H45N2O9PRS',
name='3-Oxotetradecanoyl-acyl-carrier-protein',
compartment='c')
co2_c = Metabolite('co2_c', formula='CO2', name='CO2', compartment='c')
malACP_c = Metabolite(
'malACP_c',
formula='C14H22N2O10PRS',
name='Malonyl-acyl-carrier-protein',
compartment='c')
h_c = Metabolite('h_c', formula='H', name='H', compartment='c')
ddcaACP_c = Metabolite(
'ddcaACP_c',
formula='C23H43N2O8PRS',
name='Dodecanoyl-ACP-n-C120ACP',
compartment='c')
reaction.add_metabolites({
malACP_c: -1.0,
h_c: -1.0,
ddcaACP_c: -1.0,
co2_c: 1.0,
ACP_c: 1.0,
omrsACP_c: 1.0
})
reaction.reaction # This gives a string representation of the reaction
reaction.gene_reaction_rule = '( STM2378 or STM1197 )'
reaction.genes
print(f'{len(model.reactions)} reactions initially')
print(f'{len(model.metabolites)} metabolites initially')
print(f'{len(model.genes)} genes initially')
model.add_reactions([reaction])
# The objects have been added to the model
print(f'{len(model.reactions)} reactions')
print(f'{len(model.metabolites)} metabolites')
print(f'{len(model.genes)} genes')
# Iterate through the the objects in the model
print("Reactions")
print("---------")
for x in model.reactions:
print("%s : %s" % (x.id, x.reaction))
print("")
print("Metabolites")
print("-----------")
for x in model.metabolites:
print('%9s : %s' % (x.id, x.formula))
print("")
print("Genes")
print("-----")
for x in model.genes:
associated_ids = (i.id for i in x.reactions)
print("%s is associated with reactions: %s" %
(x.id, "{" + ", ".join(associated_ids) + "}"))
model.objective = 'R_3OAS140'
print(model.objective.expression)
print(model.objective.direction)
import tempfile
from pprint import pprint
from cobra.io import write_sbml_model, validate_sbml_model
with tempfile.NamedTemporaryFile(suffix='.xml') as f_sbml:
write_sbml_model(model, filename=f_sbml.name)
report = validate_sbml_model(filename=f_sbml.name)
pprint(report)
print("exchanges", model.exchanges)
print("demands", model.demands)
print("sinks", model.sinks)
model.add_metabolites([
Metabolite(
'glycogen_c',
name='glycogen',
compartment='c'
),
Metabolite(
'co2_e',
name='CO2',
compartment='e'
),
])
# create exchange reaction
model.add_boundary(model.metabolites.get_by_id("co2_e"), type="exchange")
# create exchange reaction
model.add_boundary(model.metabolites.get_by_id("glycogen_c"), type="sink")
# Now we have an additional exchange and sink reaction in the model
print("exchanges", model.exchanges)
print("sinks", model.sinks)
print("demands", model.demands)
# boundary reactions
model.boundary
# metabolic reactions
set(model.reactions) - set(model.boundary)
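# Optional sanity check (a sketch, not part of the original tutorial): the model can
# already be solved with flux balance analysis. Because the precursors of R_3OAS140
# have no supply reactions, the expected optimum here is zero flux.
solution = model.optimize()
print(solution.objective_value)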
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Source estimation methods such as MNE require a noise estimation from the
Step2: The definition of noise depends on the paradigm. In MEG it is quite common
Step3: Now that you have the covariance matrix in a Python object, you can save it to a
Step4: Note that this method also attenuates the resting state activity in your
Step5: Plot the covariance matrices
Step6: How should I regularize the covariance matrix?
Step7: This procedure evaluates the noise covariance quantitatively by how well it
Step8: This plot displays both the whitened evoked signals for each channel and
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import mne
from mne.datasets import sample
data_path = sample.data_path()
raw_empty_room_fname = op.join(
data_path, 'MEG', 'sample', 'ernoise_raw.fif')
raw_empty_room = mne.io.read_raw_fif(raw_empty_room_fname)
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(raw_fname)
raw.set_eeg_reference()
raw.info['bads'] += ['EEG 053'] # bads + 1 more
noise_cov = mne.compute_raw_covariance(raw_empty_room, tmin=0, tmax=None)
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.0,
baseline=(-0.2, 0.0))
noise_cov_baseline = mne.compute_covariance(epochs)
noise_cov.plot(raw_empty_room.info, proj=True)
noise_cov_baseline.plot(epochs.info)
cov = mne.compute_covariance(epochs, tmax=0., method='auto')
evoked = epochs.average()
evoked.plot_white(cov)
covs = mne.compute_covariance(epochs, tmax=0., method=('empirical', 'shrunk'),
return_estimators=True)
evoked = epochs.average()
evoked.plot_white(covs)
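# Step3 above mentions saving the covariance object to disk; a minimal sketch
# (the file name is arbitrary, but MNE expects it to end in -cov.fif):
cov_fname = 'ernoise-cov.fif'
mne.write_cov(cov_fname, noise_cov)
noise_cov_loaded = mne.read_cov(cov_fname)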
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: II. Process text into INDRA statements using REACH parser
Step2: III. Assemble an INDRA model
Step3: What do the statements look like?
Step4: Import some necessities
Step5: Instantiate a CyJSAssembler
Step6: add the statements into the assembler
Step7: make_model
Step8: set_context
Step9: save_model
Step10: Display the assembled model
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import, print_function, unicode_literals
from builtins import dict, str
statement_path = 'ras_pathway.txt'
txt = open(statement_path, 'rt').read()
print(txt)
from indra import reach
rp = reach.process_text(txt, offline=False)
st = rp.statements
from indra.preassembler.grounding_mapper import GroundingMapper, gm
GM = GroundingMapper(gm)
mapped_stmts = GM.map_agents(st, do_rename=True)
import pickle
pickle.dump(mapped_stmts, open("st.pkl", "wb"))
import pickle
st = pickle.load(open("st.pkl", "rb"))
st
from indra.statements import *
from indra.assemblers import CyJSAssembler
cja = CyJSAssembler()
cja.add_statements(st)
cja.make_model(grouping=True,
drop_virtual_edges=False,
add_edge_weights=True)
cja.set_context(cell_type = 'LOXIMVI_SKIN',
bin_expression = True,
n_bins = 9)
cja.save_model('cyjs/model.js')
%%html
<iframe width="100%" height="600" src="cyjs/index.html"></iframe>
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create dataframe
Step2: Make plot
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
raw_data = {'first_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'pre_score': [4, 24, 31, 2, 3],
'mid_score': [25, 94, 57, 62, 70],
'post_score': [5, 43, 23, 23, 51]}
df = pd.DataFrame(raw_data, columns = ['first_name', 'pre_score', 'mid_score', 'post_score'])
df
# Create a figure with a single subplot
f, ax = plt.subplots(1, figsize=(10,5))
# Set bar width at 1
bar_width = 1
# positions of the left bar-boundaries
bar_l = [i for i in range(len(df['pre_score']))]
# positions of the x-axis ticks (center of the bars as bar labels)
tick_pos = [i+(bar_width/2) for i in bar_l]
# Create the total score for each participant
totals = [i+j+k for i,j,k in zip(df['pre_score'], df['mid_score'], df['post_score'])]
# Create the percentage of the total score the pre_score value for each participant was
pre_rel = [i / j * 100 for i,j in zip(df['pre_score'], totals)]
# Create the percentage of the total score the mid_score value for each participant was
mid_rel = [i / j * 100 for i,j in zip(df['mid_score'], totals)]
# Create the percentage of the total score the post_score value for each participant was
post_rel = [i / j * 100 for i,j in zip(df['post_score'], totals)]
# Create a bar chart in position bar_1
ax.bar(bar_l,
# using pre_rel data
pre_rel,
# labeled
label='Pre Score',
# with alpha
alpha=0.9,
# with color
color='#019600',
# with bar width
width=bar_width,
# with border color
edgecolor='white'
)
# Create a bar chart in position bar_1
ax.bar(bar_l,
# using mid_rel data
mid_rel,
# with pre_rel
bottom=pre_rel,
# labeled
label='Mid Score',
# with alpha
alpha=0.9,
# with color
color='#3C5F5A',
# with bar width
width=bar_width,
# with border color
edgecolor='white'
)
# Create a bar chart in position bar_1
ax.bar(bar_l,
# using post_rel data
post_rel,
# with pre_rel and mid_rel on bottom
bottom=[i+j for i,j in zip(pre_rel, mid_rel)],
# labeled
label='Post Score',
# with alpha
alpha=0.9,
# with color
color='#219AD8',
# with bar width
width=bar_width,
# with border color
edgecolor='white'
)
# Set the ticks to be first names
plt.xticks(tick_pos, df['first_name'])
ax.set_ylabel("Percentage")
ax.set_xlabel("")
# Set the borders of the graphic
plt.xlim([min(tick_pos)-bar_width, max(tick_pos)+bar_width])
plt.ylim(-10, 110)
# rotate axis labels
plt.setp(plt.gca().get_xticklabels(), rotation=45, horizontalalignment='right')
# show plot
plt.show()
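# Alternative sketch: pandas can draw the same stacked percentage bars directly once
# the scores are converted to row-wise percentages.
scores = df.set_index('first_name')[['pre_score', 'mid_score', 'post_score']]
pct = scores.div(scores.sum(axis=1), axis=0) * 100
pct.plot(kind='bar', stacked=True, figsize=(10, 5))
plt.ylabel('Percentage')
plt.show()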
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let try it with this function $f(x)$ on $[a,b]=[1993,2015]$
Step2: The first value is the numerical value of the integral $\int_{a}^{b} f(x) \mathrm{d}x$ and the second value is an estimate of the numerical error.
Step3: It converges quite quickly to the "true" value as given by scipy.integrate.quad.
Step4: It converges quite quickly to the "true" value as given by scipy.integrate.quad.
Step5: It converges quite quickly to the "true" value as given by scipy.integrate.quad.
Step6: We already see that the recursive version is much slower than the dynamic programming one!
Step7: And now the same code executed by an external Pypy interpreter (Python 2.7.13 and PyPy 5.8.0 with GCC 5.4.0)
Step8: This version uses the improved memoization trick (no dictionary), but uses nested lists and not numpy arrays; I didn't bother to install numpy in my Pypy installation (even though it should be possible).
Step9: It fails! Almost as always when trying Numba, it fails cryptically, too bad. I don't want to spend time debugging this.
Step10: It seems to work well, like the Python implementation. We get the same numerical result
Step11: Let's try a less naive version using a fixed-size array instead of a dictionary (as we did before for the Python version).
Step12: Benchmark between Python, Pypy and Julia
Step13: And now the same code executed by an external Pypy interpreter (Python 2.7.13 and PyPy 5.8.0 with GCC 5.4.0)
Step14: And finally with Julia
Step15: On this first test, it doesn't look faster than Pypy...
Step16: Oh, this time it finally seems faster. Really faster? Yes, about 3 to 4 times faster than Pypy.
Step17: And now the same code executed by an external Pypy interpreter (Python 2.7.13 and PyPy 5.8.0 with GCC 5.4.0)
Step18: And finally with Julia
Step19: Still not faster than Pypy... So what is the goal of Julia?
|
<ASSISTANT_TASK:>
Python Code:
from scipy.integrate import quad
import math
f = lambda x: (12*x+1)/(1+math.cos(x)**2)
a, b = 1993, 2017
quad(f, a, b)
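# quad() returns a pair (value, estimated absolute error); unpacking it makes the
# two quantities from the note above explicit.
value, abs_error = quad(f, a, b)
print(value, abs_error)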
def romberg_rec(f, xmin, xmax, n=8, m=None):
    if m is None:  # "if not m" would wrongly treat m == 0 as missing
m = n
assert n >= m
if n == 0 and m == 0:
return ((xmax - xmin) / 2.0) * (f(xmin) + f(xmax))
elif m == 0:
h = (xmax - xmin) / float(2**n)
N = (2**(n - 1)) + 1
term = math.fsum(f(xmin + ((2 * k) - 1) * h) for k in range(1, N))
return (term * h) + (0.5) * romberg_rec(f, xmin, xmax, n - 1, 0)
else:
return (1.0 / ((4**m) - 1)) * ((4**m) * romberg_rec(f, xmin, xmax, n, m - 1) - romberg_rec(f, xmin, xmax, n - 1, m - 1))
romberg_rec(f, a, b, n=0) # really not accurate!
romberg_rec(f, a, b, n=1) # alreay pretty good!
romberg_rec(f, a, b, n=2)
romberg_rec(f, a, b, n=3)
romberg_rec(f, a, b, n=8) # Almost the exact value.
romberg_rec(f, a, b, n=10) # Almost the exact value.
romberg_rec(f, a, b, n=12) # Almost the exact value.
def romberg(f, xmin, xmax, n=8, m=None):
assert xmin <= xmax
if m is None:
m = n
assert n >= m >= 0
# First value:
r = {(0, 0): 0.5 * (xmax - xmin) * (f(xmax) + f(xmin))}
# One side of the triangle:
for i in range(1, n + 1):
h_i = (xmax - xmin) / float(2**i)
xsamples = [xmin + ((2 * k - 1) * h_i) for k in range(1, 1 + 2**(i - 1))]
r[(i, 0)] = (0.5 * r[(i - 1, 0)]) + h_i * math.fsum(f(x) for x in xsamples)
# All the other values:
for j in range(1, m + 1):
for i in range(j, n + 1):
try:
r[(i, j)] = (((4**j) * r[(i, j - 1)]) - r[(i - 1, j - 1)]) / float((4**j) - 1)
except:
raise ValueError("romberg() with n = {}, m = {} and i = {}, j = {} was an error.".format(n, m, i, j))
return r[(n, m)]
romberg(f, a, b, n=0) # really not accurate!
romberg(f, a, b, n=1) # alreay pretty good!
romberg(f, a, b, n=2)
romberg(f, a, b, n=3)
romberg(f, a, b, n=8) # Almost the exact value.
romberg(f, a, b, n=10) # Almost the exact value.
romberg(f, a, b, n=12) # Almost the exact value.
import numpy as np
def romberg_better(f, xmin, xmax, n=8, m=None):
assert xmin <= xmax
if m is None:
m = n
assert n >= m >= 0
# First value:
r = np.zeros((n+1, m+1))
r[0, 0] = (xmax - xmin) * (f(xmax) + f(xmin)) / 2.
# One side of the triangle:
for i in range(1, n + 1):
h_i = (xmax - xmin) / 2.**i
r[i, 0] = (0.5 * r[i - 1, 0]) + h_i * math.fsum(
f(xmin + ((2 * k - 1) * h_i))
for k in range(1, 1 + 2**(i - 1))
)
# All the other values:
for j in range(1, m + 1):
for i in range(j, n + 1):
r[i, j] = (((4.**j) * r[i, j - 1]) - r[i - 1, j - 1]) / ((4.**j) - 1.)
return r[n, m]
romberg_better(f, a, b, n=0) # really not accurate!
romberg_better(f, a, b, n=1) # alreay pretty good!
romberg_better(f, a, b, n=2)
romberg_better(f, a, b, n=3)
romberg_better(f, a, b, n=8) # Almost the exact value.
romberg_better(f, a, b, n=10) # Almost the exact value.
romberg_better(f, a, b, n=12) # Almost the exact value.
%timeit quad(f, a, b)
%timeit romberg_rec(f, a, b, n=10)
%timeit romberg(f, a, b, n=10)
%timeit romberg_better(f, a, b, n=10)
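# Cross-check sketch: scipy shipped its own Romberg implementation at the time, which
# should agree with the hand-written versions above.
from scipy.integrate import romberg as scipy_romberg
scipy_romberg(f, a, b, divmax=12)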
%%time
import numpy as np
import math
import random
f = lambda x: (12*x+1)/(1+math.cos(x)**2)
# Same code
def romberg(f, xmin, xmax, n=8, m=None):
assert xmin <= xmax
if m is None:
m = n
assert n >= m >= 0
# First value:
r = np.zeros((n+1, m+1))
r[0, 0] = (xmax - xmin) * (f(xmax) + f(xmin)) / 2.
# One side of the triangle:
for i in range(1, n + 1):
h_i = (xmax - xmin) / 2.**i
r[i, 0] = (0.5 * r[i - 1, 0]) + h_i * math.fsum(
f(xmin + ((2 * k - 1) * h_i))
for k in range(1, 1 + 2**(i - 1))
)
# All the other values:
for j in range(1, m + 1):
for i in range(j, n + 1):
r[i, j] = (((4.**j) * r[i, j - 1]) - r[i - 1, j - 1]) / ((4.**j) - 1.)
return r[n, m]
for _ in range(100000):
a = random.randint(-2000, 2000)
b = a + random.randint(0, 100)
romberg(f, a, b)
%%time
%%pypy
import math
import random
f = lambda x: (12*x+1)/(1+math.cos(x)**2)
# Same code
def romberg_pypy(f, xmin, xmax, n=8, m=None):
assert xmin <= xmax
if m is None:
m = n
assert n >= m >= 0
# First value:
r = [[0 for _ in range(n+1)] for _ in range(m+1)]
r[0][0] = (xmax - xmin) * (f(xmax) + f(xmin)) / 2.
# One side of the triangle:
for i in range(1, n + 1):
h_i = (xmax - xmin) / 2.**i
r[i][0] = (0.5 * r[i - 1][0]) + h_i * math.fsum(
f(xmin + ((2 * k - 1) * h_i))
for k in range(1, 1 + 2**(i - 1))
)
# All the other values:
for j in range(1, m + 1):
for i in range(j, n + 1):
r[i][j] = (((4.**j) * r[i][j - 1]) - r[i - 1][j - 1]) / ((4.**j) - 1.)
return r[n][m]
for _ in range(100000):
a = random.randint(-2000, 2000)
b = a + random.randint(0, 100)
romberg_pypy(f, a, b)
from numba import jit
@jit
def romberg_numba(f, xmin, xmax, n=8):
assert xmin <= xmax
m = n
# First value:
r = {(0, 0): 0.5 * (xmax - xmin) * (f(xmax) + f(xmin))}
# One side of the triangle:
for i in range(1, n + 1):
h_i = (xmax - xmin) / float(2**i)
xsamples = [xmin + ((2 * k - 1) * h_i) for k in range(1, 1 + 2**(i - 1))]
r[(i, 0)] = (0.5 * r[(i - 1, 0)]) + h_i * math.fsum(f(x) for x in xsamples)
# All the other values:
for j in range(1, m + 1):
for i in range(j, n + 1):
try:
r[(i, j)] = (((4**j) * r[(i, j - 1)]) - r[(i - 1, j - 1)]) / float((4**j) - 1)
except:
raise ValueError("romberg() with n = {}, m = {} and i = {}, j = {} was an error.".format(n, m, i, j))
return r[(n, m)]
romberg_numba(f, a, b, n=8) # Almost the exact value.
%%script julia
function f(x)
(12*x + 1) / (1 + cos(x)^2)
end
a = 1993
b = 2017
function romberg_julia(f, xmin, xmax; n=8)
m = n
# First value:
r = Dict()
r[(0, 0)] = (xmax - xmin) * (f(xmax) + f(xmin)) / 2.
# One side of the triangle:
for i in 1 : n
h_i = (xmax - xmin) / (2^i)
sum_f_x = 0
for k in 1 : 2^(i - 1)
sum_f_x += f(xmin + ((2 * k - 1) * h_i))
end
r[(i, 0)] = (r[(i - 1, 0)] / 2.) + (h_i * sum_f_x)
end
# All the other values:
for j in 1 : m
for i in j : n
r[(i, j)] = (((4^j) * r[(i, j - 1)]) - r[(i - 1, j - 1)]) / (4^j - 1.)
end
end
r[(n, m)]
end
println(romberg_julia(f, a, b, n=0)) # really not accurate!
println(romberg_julia(f, a, b, n=1)) # alreay pretty good!
println(romberg_julia(f, a, b, n=2))
println(romberg_julia(f, a, b, n=3))
println(romberg_julia(f, a, b, n=8)) # Almost the exact value.
println(romberg_julia(f, a, b, n=10)) # Almost the exact value.
println(romberg_julia(f, a, b, n=12)) # Almost the exact value.
f = lambda x: (12*x+1)/(1+math.cos(x)**2)
a, b = 1993, 2017
quad(f, a, b)
romberg(f, a, b, n=12)
%%script julia
function f(x)
(12*x + 1) / (1 + cos(x)^2)
end
a = 1993
b = 2017
function romberg_julia_better(f, xmin, xmax; n=8)
m = n
# First value:
r = zeros((n+1, m+1)) # https://docs.julialang.org/en/latest/stdlib/arrays/#Base.zeros
r[1, 1] = (xmax - xmin) * (f(xmax) + f(xmin)) / 2.
# One side of the triangle:
for i in 1 : n
h_i = (xmax - xmin) / (2^i)
sum_f_x = 0
for k in 1 : 2^(i - 1)
sum_f_x += f(xmin + ((2 * k - 1) * h_i))
end
r[i + 1, 1] = (r[i, 1] / 2.) + (h_i * sum_f_x)
end
# All the other values:
for j in 1 : m
for i in j : n
r[i + 1, j + 1] = (((4.^j) * r[i + 1, j]) - r[i, j]) / (4.^j - 1.)
end
end
r[n + 1, m + 1]
end
println(romberg_julia_better(f, a, b, n=0)) # really not accurate!
println(romberg_julia_better(f, a, b, n=1)) # alreay pretty good!
println(romberg_julia_better(f, a, b, n=2))
println(romberg_julia_better(f, a, b, n=3))
println(romberg_julia_better(f, a, b, n=8)) # Almost the exact value.
println(romberg_julia_better(f, a, b, n=10)) # Almost the exact value.
println(romberg_julia_better(f, a, b, n=12)) # Almost the exact value.
%%time
import numpy as np
import math
import random
f = lambda x: (12*x+1)/(1+math.cos(x)**2)
# Same code
def romberg(f, xmin, xmax, n=8, m=None):
assert xmin <= xmax
if m is None:
m = n
assert n >= m >= 0
# First value:
r = np.zeros((n+1, m+1))
r[0, 0] = (xmax - xmin) * (f(xmax) + f(xmin)) / 2.
# One side of the triangle:
for i in range(1, n + 1):
h_i = (xmax - xmin) / 2.**i
r[i, 0] = (0.5 * r[i - 1, 0]) + h_i * math.fsum(
f(xmin + ((2 * k - 1) * h_i))
for k in range(1, 1 + 2**(i - 1))
)
# All the other values:
for j in range(1, m + 1):
for i in range(j, n + 1):
r[i, j] = (((4.**j) * r[i, j - 1]) - r[i - 1, j - 1]) / ((4.**j) - 1.)
return r[n, m]
for _ in range(100000):
a = random.randint(-2000, 2000)
b = a + random.randint(0, 100)
romberg(f, a, b)
%%time
%%pypy
import math
import random
f = lambda x: (12*x+1)/(1+math.cos(x)**2)
# Same code
def romberg_pypy(f, xmin, xmax, n=8, m=None):
assert xmin <= xmax
if m is None:
m = n
assert n >= m >= 0
# First value:
r = [[0 for _ in range(n+1)] for _ in range(m+1)]
r[0][0] = (xmax - xmin) * (f(xmax) + f(xmin)) / 2.
# One side of the triangle:
for i in range(1, n + 1):
h_i = (xmax - xmin) / 2.**i
r[i][0] = (0.5 * r[i - 1][0]) + h_i * math.fsum(
f(xmin + ((2 * k - 1) * h_i))
for k in range(1, 1 + 2**(i - 1))
)
# All the other values:
for j in range(1, m + 1):
for i in range(j, n + 1):
r[i][j] = (((4.**j) * r[i][j - 1]) - r[i - 1][j - 1]) / ((4.**j) - 1.)
return r[n][m]
for _ in range(100000):
a = random.randint(-2000, 2000)
b = a + random.randint(0, 100)
romberg_pypy(f, a, b)
%%time
%%script julia
function f(x)
(12*x + 1) / (1 + cos(x)^2)
end
function romberg_julia(f, xmin, xmax; n=8)
m = n
# First value:
r = Dict()
r[(0, 0)] = (xmax - xmin) * (f(xmax) + f(xmin)) / 2.
# One side of the triangle:
for i in 1 : n
h_i = (xmax - xmin) / (2^i)
sum_f_x = 0
for k in 1 : 2^(i - 1)
sum_f_x += f(xmin + ((2 * k - 1) * h_i))
end
r[(i, 0)] = (r[(i - 1, 0)] / 2.) + (h_i * sum_f_x)
end
# All the other values:
for j in 1 : m
for i in j : n
r[(i, j)] = (((4^j) * r[(i, j - 1)]) - r[(i - 1, j - 1)]) / (4^j - 1.)
end
end
r[(n, m)]
end
for _ in 1:100000
a = rand(-2000:2000)
b = a + rand(0:100)
romberg_julia(f, a, b)
end
%%time
%%script julia
function f(x)
(12*x + 1) / (1 + cos(x)^2)
end
function romberg_julia_better(f, xmin, xmax; n=8)
m = n
# First value:
r = zeros((n+1, m+1)) # https://docs.julialang.org/en/latest/stdlib/arrays/#Base.zeros
r[1, 1] = (xmax - xmin) * (f(xmax) + f(xmin)) / 2.
# One side of the triangle:
for i in 1 : n
h_i = (xmax - xmin) / (2^i)
sum_f_x = 0
for k in 1 : 2^(i - 1)
sum_f_x += f(xmin + ((2 * k - 1) * h_i))
end
r[i + 1, 1] = (r[i, 1] / 2.) + (h_i * sum_f_x)
end
# All the other values:
for j in 1 : m
for i in j : n
r[i + 1, j + 1] = (((4.^j) * r[i + 1, j]) - r[i, j]) / (4.^j - 1.)
end
end
r[n + 1, m + 1]
end
for _ in 1:100000
a = rand(-2000:2000)
b = a + rand(0:100)
romberg_julia_better(f, a, b)
end
%%time
import numpy as np
import math
import random
f = lambda x: (2.0 / math.sqrt(math.pi)) * math.exp(-x**2)
# Same code
def romberg(f, xmin, xmax, n=8, m=None):
assert xmin <= xmax
if m is None:
m = n
assert n >= m >= 0
# First value:
r = np.zeros((n+1, m+1))
r[0, 0] = (xmax - xmin) * (f(xmax) + f(xmin)) / 2.
# One side of the triangle:
for i in range(1, n + 1):
h_i = (xmax - xmin) / 2.**i
r[i, 0] = (0.5 * r[i - 1, 0]) + h_i * math.fsum(
f(xmin + ((2 * k - 1) * h_i))
for k in range(1, 1 + 2**(i - 1))
)
# All the other values:
for j in range(1, m + 1):
for i in range(j, n + 1):
r[i, j] = (((4.**j) * r[i, j - 1]) - r[i - 1, j - 1]) / ((4.**j) - 1.)
return r[n, m]
for _ in range(100000):
a = 0
b = 1
romberg(f, a, b)
print(romberg(f, a, b))
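# The integrand above is the derivative of the error function, so the value printed
# just above should match math.erf(1.0) -- a handy closed-form check.
print(math.erf(1.0))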
%%time
%%pypy
import math
import random
f = lambda x: (2.0 / math.sqrt(math.pi)) * math.exp(-x**2)
# Same code
def romberg_pypy(f, xmin, xmax, n=8, m=None):
assert xmin <= xmax
if m is None:
m = n
assert n >= m >= 0
# First value:
r = [[0 for _ in range(n+1)] for _ in range(m+1)]
r[0][0] = (xmax - xmin) * (f(xmax) + f(xmin)) / 2.
# One side of the triangle:
for i in range(1, n + 1):
h_i = (xmax - xmin) / 2.**i
r[i][0] = (0.5 * r[i - 1][0]) + h_i * math.fsum(
f(xmin + ((2 * k - 1) * h_i))
for k in range(1, 1 + 2**(i - 1))
)
# All the other values:
for j in range(1, m + 1):
for i in range(j, n + 1):
r[i][j] = (((4.**j) * r[i][j - 1]) - r[i - 1][j - 1]) / ((4.**j) - 1.)
return r[n][m]
for _ in range(100000):
a = 0
b = 1
romberg_pypy(f, a, b)
print(romberg_pypy(f, a, b))
%%time
%%script julia
function f(x)
(2.0 / sqrt(pi)) * exp(-x^2)
end
function romberg_julia(f, xmin, xmax; n=8)
m = n
# First value:
r = Dict()
r[(0, 0)] = (xmax - xmin) * (f(xmax) + f(xmin)) / 2.
# One side of the triangle:
for i in 1 : n
h_i = (xmax - xmin) / (2^i)
sum_f_x = 0
for k in 1 : 2^(i - 1)
sum_f_x += f(xmin + ((2 * k - 1) * h_i))
end
r[(i, 0)] = (r[(i - 1, 0)] / 2.) + (h_i * sum_f_x)
end
# All the other values:
for j in 1 : m
for i in j : n
r[(i, j)] = (((4^j) * r[(i, j - 1)]) - r[(i - 1, j - 1)]) / (4^j - 1.)
end
end
r[(n, m)]
end
for _ in 1:100000
a = 0
b = 1
romberg_julia(f, a, b)
end
println(romberg_julia(f, 0, 1))
%%time
%%script julia
function f(x)
(2.0 / sqrt(pi)) * exp(-x^2)
end
function romberg_julia_better(f, xmin, xmax; n=8)
m = n
# First value:
r = zeros((n+1, m+1)) # https://docs.julialang.org/en/latest/stdlib/arrays/#Base.zeros
r[1, 1] = (xmax - xmin) * (f(xmax) + f(xmin)) / 2.
# One side of the triangle:
for i in 1 : n
h_i = (xmax - xmin) / (2^i)
sum_f_x = 0
for k in 1 : 2^(i - 1)
sum_f_x += f(xmin + ((2 * k - 1) * h_i))
end
r[i + 1, 1] = (r[i, 1] / 2.) + (h_i * sum_f_x)
end
# All the other values:
for j in 1 : m
for i in j : n
r[i + 1, j + 1] = (((4.^j) * r[i + 1, j]) - r[i, j]) / (4.^j - 1.)
end
end
r[n + 1, m + 1]
end
for _ in 1:100000
a = 0
b = 1
romberg_julia_better(f, a, b)
end
println(romberg_julia_better(f, 0, 1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Those are our option and market parameters
Step2: We now define our payoff function using a closure
Step3: We now generate a set of Standard Gaussian variables $z$ as a basis for our simulation...
Step4: ...and transform it into a Gaussian variable with the right mean and standard deviation, i.e. a variable that is distributed according to $N(forward,vol)$
Step5: We first look at the histogram of the spot prices $x$
Step6: We now determine the payoff values from our draws of the final spot price. Note that we need to use the map command rather than simply writing po = payoff(x). The reason for this is that this latter form is not compatible with the if statement in our payoff function. We also already compute the forward value of the option, which is simply the average payoff over all simulations.
Step7: Now we produce the histogram of the payoffs
Step8: In the next step we compute our "Greeks", i.e. a number of derivatives of the forward value with respect to the underlying parameters. What is crucial here is that those derivatives are calculated on the same draw of random numbers $z$, otherwise the Monte Carlo sampling error will dwarf the signal. The sensitivities we compute are to increase / decrease the forward by one currency unit (for Delta and Gamma) and to increase the volatility by one currency unit (for Vega)
Step9: Licence and version
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
strike = 100
forward = 110
vol = 50
def call(k=100):
def payoff(spot):
if spot > k:
return spot - k
else:
return 0
return payoff
payoff = call(k=strike)
#payoff(110)
N = 10000
z = np.random.standard_normal((N))
#z
x = forward + vol * z
#x
hg0=np.histogram(x, bins=50)
xvals0 = np.delete(hg0[1], -1)
plt.bar(xvals0,hg0[0], width=0.5*(xvals0[1]-xvals0[0]))
plt.title('forward distribution')
plt.xlabel('forward')
plt.ylabel('occurrences')
plt.show()
po = list(map(payoff,x))
fv = np.mean(po)
#po
hg = np.histogram(po,bins=50)
xvals = np.delete(hg[1], -1)
plt.bar(xvals,hg[0], width=0.9*(xvals[1]-xvals[0]))
plt.title('payout distribution')
plt.xlabel('payout')
plt.ylabel('occurrences')
plt.show()
x = forward + 1 + vol * z
po = list(map(payoff,x))
fv_plus = np.mean(po)
x = forward - 1 + vol * z
po = list(map(payoff,x))
fv_minus = np.mean(po)
x = forward + (vol + 1) * z
po = list(map(payoff,x))
fv_volp = np.mean(po)
print ("Forward = %f" % forward)
print ("Strike = %f" % strike)
print ("Volatility = %f" % vol)
print ("PV = %f" % fv)
print ("Delta = %f" % ((fv_plus - fv_minus)/2))
print ("Gamma = %f" % ((fv_plus + fv_minus - 2 * fv)))
print ("Vega = %f" % ((fv_volp - fv)))
import sys
print(sys.version)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the model
Step2: By default the call SIMUS() creates a solver that internally uses the PuLP solver to solve the linear programs. Other available solvers are
Step3: If you check the last column, the ranking is
Step4: For example, the attribute stages stores all the linear programs executed by SIMUS
Step5: The attribute stage_results stores the normalized matrix of efficient results
Step6: References
|
<ASSISTANT_TASK:>
Python Code:
# first lets import the DATA class
from skcriteria import Data
data = Data(
# the alternative matrix
mtx=[[250, 120, 20, 800],
[130, 200, 40, 1000],
[350, 340, 15, 600]],
# optimal sense
criteria=[max, max, min, max],
# names of alternatives and criteria
anames=["Prj 1", "Prj 2", "Prj 3"],
cnames=["jobs", "green", "fin", "env"])
# show the data object
data
# import the class
from skcriteria.madm.simus import SIMUS
# create the new simus and
dm = SIMUS()
# store the decision inside the dec variable
dec = dm.decide(data, b=[None, 500, None, None])
# let's see the decision
dec
dec.e_
dec._e.stages
dec.e_.stage_results
import datetime as dt
import skcriteria
print("Scikit-Criteria version:", skcriteria.VERSION)
print("Running datetime:", dt.datetime.now())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CNN Encoder using PyTorch
Step2: CNN Decoder using PyTorch
Step3: PyTorch Lightning AutoEncoder
Step4: Arguments
Step5: Weights and Biases Callback
Step6: Training an AE
|
<ASSISTANT_TASK:>
Python Code:
import torch
import torchvision
import wandb
import time
from torch import nn
from einops import rearrange
from argparse import ArgumentParser
from pytorch_lightning import LightningModule, Trainer, Callback
from pytorch_lightning.loggers import WandbLogger
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR
class Encoder(nn.Module):
def __init__(self, n_features=1, kernel_size=3, n_filters=16, feature_dim=16):
super().__init__()
self.conv1 = nn.Conv2d(n_features, n_filters, kernel_size=kernel_size, stride=2)
self.conv2 = nn.Conv2d(n_filters, n_filters*2, kernel_size=kernel_size, stride=2)
self.conv3 = nn.Conv2d(n_filters*2, n_filters*4, kernel_size=kernel_size, stride=2)
self.fc1 = nn.Linear(256, feature_dim)
def forward(self, x):
y = nn.ReLU()(self.conv1(x))
y = nn.ReLU()(self.conv2(y))
y = nn.ReLU()(self.conv3(y))
y = rearrange(y, 'b c h w -> b (c h w)')
y = self.fc1(y)
return y
# use this to get the correct input shape for fc1.
encoder = Encoder(n_features=1)
x = torch.Tensor(1, 1, 28, 28)
h = encoder(x)
print("h.shape:", h.shape)
class Decoder(nn.Module):
def __init__(self, kernel_size=3, n_filters=64, feature_dim=16, output_size=28, output_channels=1):
super().__init__()
self.init_size = output_size // 2**2 - 1
self.fc1 = nn.Linear(feature_dim, self.init_size**2 * n_filters)
# output size of conv2dtranspose is (h-1)*2 + 1 + (kernel_size - 1)
self.conv1 = nn.ConvTranspose2d(n_filters, n_filters//2, kernel_size=kernel_size, stride=2)
self.conv2 = nn.ConvTranspose2d(n_filters//2, n_filters//4, kernel_size=kernel_size, stride=2)
self.conv3 = nn.ConvTranspose2d(n_filters//4, output_channels, kernel_size=kernel_size-1)
def forward(self, x):
B, _ = x.shape
y = self.fc1(x)
y = rearrange(y, 'b (c h w) -> b c h w', b=B, h=self.init_size, w=self.init_size)
y = nn.ReLU()(self.conv1(y))
y = nn.ReLU()(self.conv2(y))
y = nn.Sigmoid()(self.conv3(y))
return y
decoder = Decoder()
x_tilde = decoder(h)
print("x_tilde.shape:", x_tilde.shape)
def noise_collate_fn(batch):
x, _ = zip(*batch)
x = torch.stack(x, dim=0)
# mean=0.5, std=0.5 normal noise
noise = torch.normal(0.5, 0.5, size=x.shape)
xn = x + noise
xn = torch.clamp(xn, 0, 1)
return xn, x
def clean_collate_fn(batch):
x, _ = zip(*batch)
x = torch.stack(x, dim=0)
return x, x
class LitAEMNISTModel(LightningModule):
def __init__(self, feature_dim=16, lr=0.001, batch_size=64,
num_workers=4, max_epochs=30, denoise=False, **kwargs):
super().__init__()
self.save_hyperparameters()
self.encoder = Encoder(feature_dim=feature_dim)
self.decoder = Decoder(feature_dim=feature_dim)
self.loss = nn.MSELoss()
self.denoise = denoise
def forward(self, x):
h = self.encoder(x)
x_tilde = self.decoder(h)
return x_tilde
# this is called during fit()
def training_step(self, batch, batch_idx):
x_in, x = batch
x_tilde = self.forward(x_in)
loss = self.loss(x_tilde, x)
return {"loss": loss}
# calls to self.log() are recorded in wandb
def training_epoch_end(self, outputs):
avg_loss = torch.stack([x["loss"] for x in outputs]).mean()
self.log("train_loss", avg_loss, on_epoch=True)
# this is called at the end of an epoch
def test_step(self, batch, batch_idx):
x_in, x = batch
x_tilde = self.forward(x_in)
loss = self.loss(x_tilde, x)
return {"x_in" : x_in, "x_tilde" : x_tilde, "test_loss" : loss,}
# this is called at the end of all epochs
def test_epoch_end(self, outputs):
avg_loss = torch.stack([x["test_loss"] for x in outputs]).mean()
self.log("test_loss", avg_loss, on_epoch=True, prog_bar=True)
# validation is the same as test
def validation_step(self, batch, batch_idx):
return self.test_step(batch, batch_idx)
def validation_epoch_end(self, outputs):
return self.test_epoch_end(outputs)
# we use Adam optimizer
def configure_optimizers(self):
optimizer = Adam(self.parameters(), lr=self.hparams.lr)
# this decays the learning rate to 0 after max_epochs using cosine annealing
scheduler = CosineAnnealingLR(optimizer, T_max=self.hparams.max_epochs)
return [optimizer], [scheduler]
# this is called after model instatiation to initiliaze the datasets and dataloaders
def setup(self, stage=None):
self.train_dataloader()
self.test_dataloader()
# build train and test dataloaders using MNIST dataset
# we use simple ToTensor transform
def train_dataloader(self):
collate_fn = noise_collate_fn if self.denoise else clean_collate_fn
return torch.utils.data.DataLoader(
torchvision.datasets.MNIST(
"./data", train=True, download=True,
transform=torchvision.transforms.ToTensor()
),
batch_size=self.hparams.batch_size,
shuffle=True,
num_workers=self.hparams.num_workers,
pin_memory=True,
collate_fn=collate_fn
)
def test_dataloader(self):
collate_fn = noise_collate_fn if self.denoise else clean_collate_fn
return torch.utils.data.DataLoader(
torchvision.datasets.MNIST(
"./data", train=False, download=True,
transform=torchvision.transforms.ToTensor()
),
batch_size=self.hparams.batch_size,
shuffle=False,
num_workers=self.hparams.num_workers,
pin_memory=True,
collate_fn=collate_fn
)
def val_dataloader(self):
return self.test_dataloader()
def get_args():
parser = ArgumentParser(description="PyTorch Lightning AE MNIST Example")
parser.add_argument("--max-epochs", type=int, default=30, help="num epochs")
parser.add_argument("--batch-size", type=int, default=64, help="batch size")
parser.add_argument("--lr", type=float, default=0.001, help="learning rate")
parser.add_argument("--feature-dim", type=int, default=2, help="ae feature dimension")
# if denoise is true
parser.add_argument("--denoise", action="store_true", help="train a denoising AE")
parser.add_argument("--devices", default=1)
parser.add_argument("--accelerator", default='gpu')
parser.add_argument("--num-workers", type=int, default=4, help="num workers")
args = parser.parse_args("")
return args
class WandbCallback(Callback):
def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
# process first 10 images of the first batch
if batch_idx == 0:
x, _ = batch
n = 10
outputs = outputs["x_tilde"]
columns = ['image']
if pl_module.denoise:
columns += ['denoised']
key = "mnist-ae-denoising"
else:
columns += ["reconstruction"]
key = "mnist-ae-reconstruction"
data = [[wandb.Image(x_i), wandb.Image(x_tilde_i)] for x_i, x_tilde_i in list(zip(x[:n], outputs[:n]))]
wandb_logger.log_table(key=key, columns=columns, data=data)
if __name__ == "__main__":
args = get_args()
ae = LitAEMNISTModel(feature_dim=args.feature_dim, lr=args.lr,
batch_size=args.batch_size, num_workers=args.num_workers,
denoise=args.denoise, max_epochs=args.max_epochs)
ae.setup()
wandb_logger = WandbLogger(project="ae-mnist")
start_time = time.time()
trainer = Trainer(accelerator=args.accelerator,
devices=args.devices,
max_epochs=args.max_epochs,
logger=wandb_logger,
callbacks=[WandbCallback()])
trainer.fit(ae)
elapsed_time = time.time() - start_time
print("Elapsed time: {}".format(elapsed_time))
wandb.finish()
# decoder as a generative model
import matplotlib.pyplot as plt
decoder = ae.decoder
decoder.eval()
with torch.no_grad():
# generate a tensor of random noise with size 1, feature_dim
x_in = torch.randn(1, args.feature_dim)
x_tilde = decoder.forward(x_in)
plt.imshow(x_tilde[0].detach().numpy().reshape(28, 28), cmap="gray")
plt.axis('off')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Step3: Set up your Google Cloud project
Step4: Region
Step5: Timestamp
Step6: Create a Cloud Storage bucket
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Set up variables
Step10: Initialize Vertex SDK for Python
Step11: Set pre-built containers
Step12: Set machine type
Step13: Examine the training package
Step14: Store training script on your Cloud Storage bucket
Step15: Train a model (training.create-python-pre-built-container)
Step16: Run the custom training job
Step17: The custom training job will take some time to complete.
Step18: Make batch predictions (predictions.batch-prediction)
Step19: Make the batch input file
Step20: Make the batch prediction request
Step21: Batch prediction request will take 25-30 mins to complete.
Step22: Make online predictions (predictions.deploy-model-api)
Step23: Model deployment will take some time to complete.
Step24: Make the prediction
Step25: Undeploy the model
|
<ASSISTANT_TASK:>
Python Code:
# import necessary libraries
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "<your-project>" # replace with your project ID
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1"
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
BUCKET_NAME = "gs://<your-bucket>" # replace bucket name
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
import google.cloud.aiplatform as aip
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
TRAIN_VERSION = "xgboost-cpu.1-1"
DEPLOY_VERSION = "xgboost-cpu.1-1"
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
import os
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Iris tabular classification\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
%%writefile custom/trainer/task.py
# Single Instance Training for Iris
import datetime
import os
import subprocess
import sys
import pandas as pd
import xgboost as xgb
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
args = parser.parse_args()
# Download data
iris_data_filename = 'iris_data.csv'
iris_target_filename = 'iris_target.csv'
data_dir = 'gs://cloud-samples-data/ai-platform/iris'
# gsutil outputs everything to stderr so we need to divert it to stdout.
subprocess.check_call(['gsutil', 'cp', os.path.join(data_dir,
iris_data_filename),
iris_data_filename], stderr=sys.stdout)
subprocess.check_call(['gsutil', 'cp', os.path.join(data_dir,
iris_target_filename),
iris_target_filename], stderr=sys.stdout)
# Load data into pandas, then use `.values` to get NumPy arrays
iris_data = pd.read_csv(iris_data_filename).values
iris_target = pd.read_csv(iris_target_filename).values
# Convert one-column 2D array into 1D array for use with XGBoost
iris_target = iris_target.reshape((iris_target.size,))
# Load data into DMatrix object
dtrain = xgb.DMatrix(iris_data, label=iris_target)
# Train XGBoost model
bst = xgb.train({}, dtrain, 20)
# Export the classifier to a file
model_filename = 'model.bst'
bst.save_model(model_filename)
# Upload the saved model file to Cloud Storage
gcs_model_path = os.path.join(args.model_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],
stderr=sys.stdout)
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_iris.tar.gz
# TODO
# constructs a Custom Training Job using a Python script
job = aip.CustomTrainingJob(
display_name="iris_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
job.run(
replica_count=1, machine_type=TRAIN_COMPUTE, base_output_dir=MODEL_DIR, sync=True
)
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
# TODO
model = aip.Model.upload(
display_name="iris_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
sync=False,
)
model.wait()
INSTANCES = [[1.4, 1.3, 5.1, 2.8], [1.5, 1.2, 4.7, 2.4]]
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
for i in INSTANCES:
f.write(str(i) + "\n")
! gsutil cat $gcs_input_uri
MIN_NODES = 1
MAX_NODES = 1
# TODO
batch_predict_job = model.batch_predict(
job_display_name="iris_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="jsonl",
predictions_format="jsonl",
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=False,
)
print(batch_predict_job)
import json
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
DEPLOYED_NAME = "iris-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
# TODO
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
INSTANCE = [1.4, 1.3, 5.1, 2.8]
instances_list = [INSTANCE]
prediction = endpoint.predict(instances_list)
print(prediction)
endpoint.undeploy_all()
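# Optional cleanup sketch (only if these resources are no longer needed): delete the
# endpoint, model, jobs and -- if desired -- the staging bucket.
delete_bucket = False
endpoint.delete()
model.delete()
batch_predict_job.delete()
job.delete()
if delete_bucket:
    ! gsutil -m rm -r $BUCKET_NAME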
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
Step7: Problem 2
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
|
<ASSISTANT_TASK:>
Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
def download(url, file):
Download file from <url>
:param url: URL to file
:param file: Local file path
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
# TODO: Implement Min-Max scaling for grayscale image data
a = 0.1
b = 0.9
x_min = 0
x_max = 255
return a + ((image_data - x_min) * (b - a)) / (x_max - x_min)
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
# features =
# labels =
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
# weights =
# biases =
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
# epochs =
# learning_rate =
epochs = 5
learning_rate = 0.05
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TensorFlow Addons losses
Step2: Prepare the data
Step3: Build the model
Step4: Train and evaluate
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -U tensorflow-addons
import io
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa
import tensorflow_datasets as tfds
def _normalize_img(img, label):
img = tf.cast(img, tf.float32) / 255.
return (img, label)
train_dataset, test_dataset = tfds.load(name="mnist", split=['train', 'test'], as_supervised=True)
# Build your input pipelines
train_dataset = train_dataset.shuffle(1024).batch(32)
train_dataset = train_dataset.map(_normalize_img)
test_dataset = test_dataset.batch(32)
test_dataset = test_dataset.map(_normalize_img)
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(filters=64, kernel_size=2, padding='same', activation='relu', input_shape=(28,28,1)),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation=None), # No activation on final dense layer
tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1)) # L2 normalize embeddings
])
# Compile the model
model.compile(
optimizer=tf.keras.optimizers.Adam(0.001),
loss=tfa.losses.TripletSemiHardLoss())
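# TripletSemiHardLoss mines semi-hard negatives within each batch: negatives that are farther
# from the anchor than the positive but still inside the margin, which usually trains more
# stably than always picking the hardest negatives.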
# Train the network
history = model.fit(
train_dataset,
epochs=5)
# Evaluate the network
results = model.predict(test_dataset)
# Save test embeddings for visualization in projector
np.savetxt("vecs.tsv", results, delimiter='\t')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for img, labels in tfds.as_numpy(test_dataset):
[out_m.write(str(x) + "\n") for x in labels]
out_m.close()
try:
from google.colab import files
files.download('vecs.tsv')
files.download('meta.tsv')
except:
pass
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load our data and take a look at its state
Step2: It is easy to see that the training dataset is missing values for the passengers' age, cabin and port of embarkation.
Step3: Let's look at the overall dependence of the chance of survival on the port of embarkation.
Step4: Look at other possible relationships that could tell us where these passengers boarded the ship.
Step5: Now let's fix the empty fare field in the test dataset.
Step6: Let's look at all passengers with similar values of the other features.
Step7: We conclude that the fare was most likely of about this size.
Step8: Now let's deal with the Age field in the training dataset. It deserves more attention, since it is a very important feature that strongly affects passenger survival.
Step9: The names contain titles, and we can make use of them too, since social status can be an important survival feature.
Step10: Instead of the two fields indicating parents/children (Parch) and siblings/spouses (SibSp) on board, let's create a single FamilySize field
Step11: Sex is also a very important feature, but if you have watched the movie Titanic you probably remember "Women and children first." So let's create a new feature that takes both sex and age into account
Step12: Having made sure that our data is now in order, we move on to dropping what we don't need.
Step13: We have categorical variables and we should encode them. pandas already has the get_dummies function for this
Step14: Let's create a function that plots the learning curve as a function of the number of training samples.
Step15: Split our training dataset in two so that, before submitting the model, we can make sure it does not overfit our data (i.e. cross-validation)
Step16: Let's look at a random forest model. We specify ordinary parameters first and later use GridSearchCV to pick the optimal ones. Finally, we look at what we got
Step17: Repeat all the procedures described above, which we did for the random forest, now for logistic regression.
Step18: Choose the model we like best and submit it to Kaggle.
|
<ASSISTANT_TASK:>
Python Code:
# pandas
import pandas as pd
from pandas import DataFrame
import re
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve, train_test_split, GridSearchCV
from sklearn.metrics import make_scorer, accuracy_score
train_df = pd.read_csv("titanic/train.csv")
test_df = pd.read_csv("titanic/test.csv")
test_df.head()
train_df.info()
print("----------------------------")
test_df.info()
# Embarked
train_df[train_df.Embarked.isnull()]
# plot
#sns.factorplot('Embarked','Survived', data=train_df,size=4,aspect=3)
fig, (axis1,axis2,axis3) = plt.subplots(1,3,figsize=(15,5))
sns.countplot(x='Embarked', data=train_df, ax=axis1)
sns.countplot(x='Survived', hue="Embarked", data=train_df, order=[1,0], ax=axis2)
# group by embarked, and get the mean for survived passengers for each value in Embarked
embark_perc = train_df[["Embarked", "Survived"]].groupby(['Embarked'],as_index=False).mean()
sns.barplot(x='Embarked', y='Survived', data=embark_perc,order=['S','C','Q'],ax=axis3)
train_df.loc[train_df.Ticket == '113572']
print( 'C == ' + str( len(train_df.loc[train_df.Pclass == 1].loc[train_df.Fare > 75].loc[train_df.Fare < 85].loc[train_df.Embarked == 'C']) ) )
print( 'S == ' + str( len(train_df.loc[train_df.Pclass == 1].loc[train_df.Fare > 75].loc[train_df.Fare < 85].loc[train_df.Embarked == 'S']) ) )
train_df.loc[train_df.Embarked.isnull(), 'Embarked'] = 'C'
train_df.loc[train_df.Embarked.isnull()]
test_df[test_df.Fare.isnull()]
fig = plt.figure(figsize=(8, 5))
ax = fig.add_subplot(111)
test_df[(test_df.Pclass==3)&(test_df.Embarked=='S')].Fare.hist(bins=100, ax=ax)
plt.xlabel('Fare')
plt.ylabel('Frequency')
plt.title('Histogram of Fare, Plcass 3 and Embarked S')
print ("The top 5 most common value of Fare")
test_df[(test_df.Pclass==3)&(test_df.Embarked=='S')].Fare.value_counts().head()
test_df.loc[test_df.Fare.isnull(), 'Fare'] = 8.05
test_df.loc[test_df.Fare.isnull()]
test_df.loc[test_df.Age.isnull()].head()
fig, (axis1,axis2) = plt.subplots(1,2,figsize=(15,4))
axis1.set_title('Original Age values')
axis2.set_title('New Age values')
# mean, standard deviation and number of missing Age values in the training dataset
average_age_train = train_df["Age"].mean()
std_age_train = train_df["Age"].std()
count_nan_age_train = train_df["Age"].isnull().sum()
# mean, standard deviation and number of missing Age values in the test dataset
average_age_test = test_df["Age"].mean()
std_age_test = test_df["Age"].std()
count_nan_age_test = test_df["Age"].isnull().sum()
# generate random ages in the range (mean - std, mean + std)
rand_1 = np.random.randint(average_age_train - std_age_train, average_age_train + std_age_train, size = count_nan_age_train)
rand_2 = np.random.randint(average_age_test - std_age_test, average_age_test + std_age_test, size = count_nan_age_test)
# plot the histogram of the original Age values (dropping NaNs and converting to ints)
train_df['Age'].dropna().astype(int).hist(bins=70, ax=axis1)
test_df['Age'].dropna().astype(int).hist(bins=70, ax=axis1)
# fill the missing Age values with the generated random values
train_df.loc[np.isnan(train_df["Age"]), "Age"] = rand_1
test_df.loc[np.isnan(test_df["Age"]), "Age"] = rand_2
# convert floats to ints
train_df['Age'] = train_df['Age'].astype(int)
test_df['Age'] = test_df['Age'].astype(int)
# histogram of the new Age values
train_df['Age'].hist(bins=70, ax=axis2)
test_df['Age'].hist(bins=70, ax=axis2)
# A few more plots
# survival density as a function of age
facet = sns.FacetGrid(train_df, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train_df['Age'].max()))
facet.add_legend()
# average survival rate by age
fig, axis1 = plt.subplots(1,1,figsize=(18,4))
average_age = train_df[["Age", "Survived"]].groupby(['Age'],as_index=False).mean()
sns.barplot(x='Age', y='Survived', data=average_age)
train_df.info()
test_df.info()
Title_Dictionary = {
"Capt": "Officer",
"Col": "Officer",
"Major": "Officer",
"Jonkheer": "Nobel",
"Don": "Nobel",
"Sir" : "Nobel",
"Dr": "Officer",
"Rev": "Officer",
"the Countess":"Nobel",
"Dona": "Nobel",
"Mme": "Mrs",
"Mlle": "Miss",
"Ms": "Mrs",
"Mr" : "Mr",
"Mrs" : "Mrs",
"Miss" : "Miss",
"Master" : "Master",
"Lady" : "Nobel"
}
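# Collapse the raw honorifics extracted from the Name column into a few broader
# social-status groups (Officer, Nobel, Mr, Mrs, Miss, Master) used as the Title feature.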
train_df['Title'] = train_df['Name'].apply(lambda x: Title_Dictionary[x.split(',')[1].split('.')[0].strip()])
test_df['Title'] = test_df['Name'].apply(lambda x: Title_Dictionary[x.split(',')[1].split('.')[0].strip()])
train_df.head(100)
train_df['FamilySize'] = train_df['SibSp'] + train_df['Parch']
test_df['FamilySize'] = test_df['SibSp'] + test_df['Parch']
train_df.head()
def get_person(passenger):
age,sex = passenger
return 'child' if age < 16 else sex
train_df['Person'] = train_df[['Age','Sex']].apply(get_person,axis=1)
test_df['Person'] = test_df[['Age','Sex']].apply(get_person,axis=1)
train_df.head()
train_df.info()
print("----------------------------")
test_df.info()
train_df.drop(labels=['PassengerId', 'Name', 'Cabin', 'Ticket', 'SibSp', 'Parch', 'Sex'], axis=1, inplace=True)
test_df.drop(labels=['Name', 'Cabin', 'Ticket', 'SibSp', 'Parch', 'Sex'], axis=1, inplace=True)
train_df.head()
dummies_person_train = pd.get_dummies(train_df['Person'],prefix='Person')
dummies_embarked_train = pd.get_dummies(train_df['Embarked'], prefix= 'Embarked')
dummies_title_train = pd.get_dummies(train_df['Title'], prefix= 'Title')
dummies_pclass_train = pd.get_dummies(train_df['Pclass'], prefix= 'Pclass')
train_df = pd.concat([train_df, dummies_person_train, dummies_embarked_train, dummies_title_train, dummies_pclass_train], axis=1)
train_df = train_df.drop(['Person','Embarked','Title', 'Pclass'], axis=1)
train_df.head()
dummies_person_test = pd.get_dummies(test_df['Person'],prefix='Person')
dummies_embarked_test = pd.get_dummies(test_df['Embarked'], prefix= 'Embarked')
dummies_title_test = pd.get_dummies(test_df['Title'], prefix= 'Title')
dummies_pclass_test = pd.get_dummies(test_df['Pclass'], prefix= 'Pclass')
test_df = pd.concat([test_df, dummies_person_test, dummies_embarked_test, dummies_title_test, dummies_pclass_test], axis=1)
test_df = test_df.drop(['Person','Embarked','Title', 'Pclass'], axis=1)
test_df.head()
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5), scoring='accuracy'):
plt.figure(figsize=(10,6))
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel(scoring)
train_sizes, train_scores, test_scores = learning_curve(estimator, X, y, cv=cv, scoring=scoring,
n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
X = train_df.drop(['Survived'], axis=1)
y = train_df.Survived
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size = 0.3)
# Choose the type of classifier.
clf = RandomForestClassifier()
# Choose some parameter combinations to try
parameters = {'n_estimators': [4, 6, 9],
'max_features': ['log2', 'sqrt','auto'],
'criterion': ['entropy', 'gini'],
'max_depth': [2, 3, 5, 10],
'min_samples_split': [2, 3, 5],
'min_samples_leaf': [1,5,8]
}
# Type of scoring used to compare parameter combinations
acc_scorer = make_scorer(accuracy_score)
# Run the grid search
grid_obj = GridSearchCV(clf, parameters, scoring=acc_scorer)
grid_obj = grid_obj.fit(X_train, y_train)
# Set the clf to the best combination of parameters
clf = grid_obj.best_estimator_
# Fit the best algorithm to the data.
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
print(accuracy_score(y_test, predictions))
plot_learning_curve(clf, 'Random Forest', X, y, cv=4);
from sklearn.model_selection import KFold
def run_kfold(clf):
    kf = KFold(n_splits=10)
outcomes = []
fold = 0
    for train_index, test_index in kf.split(X):
fold += 1
X_train, X_test = X.values[train_index], X.values[test_index]
y_train, y_test = y.values[train_index], y.values[test_index]
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
outcomes.append(accuracy)
print("Fold {0} accuracy: {1}".format(fold, accuracy))
mean_outcome = np.mean(outcomes)
print("Mean Accuracy: {0}".format(mean_outcome))
run_kfold(clf)
from sklearn.linear_model import LogisticRegression
lg = LogisticRegression(random_state=42, penalty='l1')
parameters = {'C':[0.5]}
# Type of scoring used to compare parameter combinations
acc_scorer_lg = make_scorer(accuracy_score)
# Run the grid search
grid_obj_lg = GridSearchCV(lg, parameters, scoring=acc_scorer_lg)
grid_obj_lg = grid_obj_lg.fit(X_train, y_train)
# Set the clf to the best combination of parameters
lg = grid_obj_lg.best_estimator_
# Fit the best algorithm to the data.
lg.fit(X_train, y_train)
predictions_lg = lg.predict(X_test)
print(accuracy_score(y_test, predictions_lg))
plot_learning_curve(lg, 'Logistic Regression', X, y, cv=4);
ids = test_df['PassengerId']
predictions = clf.predict(test_df.drop('PassengerId', axis=1))
output = pd.DataFrame({ 'PassengerId' : ids, 'Survived': predictions })
output.to_csv('titanic-predictions.csv', index = False)
output.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'UserId': [1, 1, 1, 2, 3, 3],
'ProductId': [1, 4, 7, 4, 2, 1],
'Quantity': [6, 1, 3, 2, 7, 2]})
def g(df):
l = int(0.2 * len(df))
dfupdate = df.sample(l, random_state=0)
dfupdate.Quantity = 0
df.update(dfupdate)
return df
df = g(df.copy())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As we can see, the data we retrieved is a special data object called networkx.classes.multidigraph.MultiDiGraph.
Step2: Now we can see that our graph contains the vertices or nodes (blue circles) and the edges (gray lines) that connect these nodes to each other.
Step3: Now only the streets where it is possible to drive a car have been retrieved. Let's confirm
Step4: So we can see that we have many columns in our GeoDataFrame. Most of the columns are self-explanatory, but the following table describes all of them.
Step5: Shortest path analysis
Step6: Next, we need a street network that contains at least the origin and destination points. The simplest alternative would be to load the entire street network of the city of Fortaleza, but that would add extra computational cost. Using our knowledge, let's narrow down the search network.
Step7: Now let's use OSMnx to get the graph inside our polygon. For that we can use the graph_from_polygon() function. Remember that we only want the driving network, so we will use the network_type parameter.
Step8: To compute the shortest route in the graph, the origin and destination nodes must be present in the graph. So let's find the node closest to our origin point and to our destination point.
Step9: Now that we have the origin and destination nodes, let's compute the shortest path on the street network using Dijkstra's algorithm. For that we can use networkx's shortest_path function.
Step10: As a result, we get a list of all the nodes that lie on the shortest path.
Step11: To compute the length of our route in meters, we will first create a LineString from our route, then change the projection, and finally use the length method
Step12: As can be seen, the coordinates are in the Mercator projection, which is the same one used by Google (from which we took our coordinates). To be able to compute the distance in meters, let's first transform the data to the SIRGAS 2000 / UTM zone 24S projection (EPSG 31984)
Step13: Finally, we can create our LineString using the projection appropriate for our task.
|
<ASSISTANT_TASK:>
Python Code:
import osmnx as ox
import matplotlib.pyplot as plt
%matplotlib inline
# Specify the name that is used to seach for the data
place_name = "Brasil, Ceará, Fortaleza"
# Fetch OSM street network from the location
graph = ox.graph_from_place(place_name)
type(graph)
# Plot the streets
fig, ax = ox.plot_graph(graph)
# Specify the name that is used to seach for the data
place_name = "Brasil, Ceará, Fortaleza"
# Fetch OSM street network from the location
graph_drive = ox.graph_from_place(place_name, network_type='drive')
fig, ax = ox.plot_graph(graph_drive)
# Retrieve only edges from the graph
nodes, edges = ox.graph_to_gdfs(graph_drive)
# Check the data type
print('Data type of edges: ', type(edges))
# Check columns
print(edges.columns)
edges.head(5)
# plot only the road's network
edges.plot()
from shapely.geometry import Point
# approximate coord of computer department: -3.746088, -38.574236
# approximate coord of benfica shopping: -3.739631, -38.540785
origin = Point(-3.746088, -38.574236)
destiny = Point(-3.739631, -38.540785)
import geopandas as gpd
# filename of the ais data
ais_fp = 'data/ais.shp'
# load the ais data
ais_gdf = gpd.read_file(ais_fp)
# separate only the desired data
poly_ais_5 = ais_gdf.loc[ais_gdf['AIS'] == 5, 'geometry'].values[0]
poly_ais_6 = ais_gdf.loc[ais_gdf['AIS'] == 6, 'geometry'].values[0]
poly = poly_ais_5.union(poly_ais_6)
poly
graph = ox.graph_from_polygon(poly, network_type='drive')
fig, ax = ox.plot_graph(graph)
# Get the x and y coordinates
origin_xy = (origin.x, origin.y)
dest_xy = (destiny.x, destiny.y)
# Find the closest origin and target nodes from the graph (the ids of them)
orig_node = ox.get_nearest_node(graph, origin_xy, method='euclidean')
target_node = ox.get_nearest_node(graph, dest_xy, method='euclidean')
# Show the results
print(orig_node)
print(target_node)
import networkx as nx
# Calculate the shortest path
route = nx.shortest_path(G=graph, source=orig_node, target=target_node, weight='length')
# Show what we have
print(route)
# Plot the shortest path
fig, ax = ox.plot_graph_route(graph, route, orig_dest_size=20)
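# Illustrative addition: networkx can also return the total cost of the shortest path directly.
# OSM edge lengths are stored in meters, so this gives the route length without building a
# LineString first (the projection-based computation below is kept for comparison).
route_length_m = nx.shortest_path_length(G=graph, source=orig_node, target=target_node, weight='length')
print("Route length from edge weights: {:.1f} m".format(route_length_m))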
# Retrieve only edges from the graph
nodes, edges = ox.graph_to_gdfs(graph)
# Get the nodes along the shortest path
route_nodes = nodes.loc[route]
route_nodes.head()
# Change projection of the data
new_epsg = 31984 # more accurate for northeast region
route_nodes_proj = route_nodes.to_crs(epsg=new_epsg)
route_nodes_proj.head()
from shapely.geometry import LineString, Point
# Create a geometry for the shortest path
route_line = LineString(list(route_nodes_proj.geometry.values))
print("Route length in meters: {:.3f} m".format(route_line.length))
route_line
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear Regression
Step2: Ridge Regression (L2 penalty)
Step3: Lasso (L1 penalty)
Step4: Linear models for classification
Step5: Multi-Class linear classification
Step6: Exercises
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
X, y, true_coefficient = make_regression(n_samples=80, n_features=30, n_informative=10, noise=100, coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)
print(X_train.shape)
print(y_train.shape)
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression().fit(X_train, y_train)
print("R^2 on training set: %f" % linear_regression.score(X_train, y_train))
print("R^2 on test set: %f" % linear_regression.score(X_test, y_test))
from sklearn.metrics import r2_score
print(r2_score(np.dot(X, true_coefficient), y))
plt.figure(figsize=(10, 5))
coefficient_sorting = np.argsort(true_coefficient)[::-1]
plt.plot(true_coefficient[coefficient_sorting], "o", label="true")
plt.plot(linear_regression.coef_[coefficient_sorting], "o", label="linear regression")
plt.legend()
from sklearn.linear_model import Ridge
ridge_models = {}
training_scores = []
test_scores = []
for alpha in [100, 10, 1, .01]:
ridge = Ridge(alpha=alpha).fit(X_train, y_train)
training_scores.append(ridge.score(X_train, y_train))
test_scores.append(ridge.score(X_test, y_test))
ridge_models[alpha] = ridge
plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [100, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([100, 10, 1, .01]):
plt.plot(ridge_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
from sklearn.linear_model import Lasso
lasso_models = {}
training_scores = []
test_scores = []
for alpha in [30, 10, 1, .01]:
lasso = Lasso(alpha=alpha).fit(X_train, y_train)
training_scores.append(lasso.score(X_train, y_train))
test_scores.append(lasso.score(X_test, y_test))
lasso_models[alpha] = lasso
plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [30, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([30, 10, 1, .01]):
plt.plot(lasso_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
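# Illustrative addition: the L1 penalty drives coefficients exactly to zero, so counting the
# zero entries shows how the sparsity of the Lasso solution grows with alpha.
for alpha in [30, 10, 1, .01]:
    n_zero = np.sum(lasso_models[alpha].coef_ == 0)
    print("alpha = %5.2f -> %d of %d coefficients are exactly zero" % (alpha, n_zero, len(lasso_models[alpha].coef_)))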
from plots import plot_linear_svc_regularization
plot_linear_svc_regularization()
from sklearn.datasets import make_blobs
plt.figure()
X, y = make_blobs(random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=y)
from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X, y)
print(linear_svm.coef_.shape)
print(linear_svm.intercept_.shape)
plt.scatter(X[:, 0], X[:, 1], c=y)
line = np.linspace(-15, 15)
for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):
plt.plot(line, -(line * coef[0] + intercept) / coef[1])
plt.ylim(-10, 15)
plt.xlim(-10, 8)
# %load solutions/linear_models.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part B
Step2: Part C
Step3: Part D
Step4: Part E
Step5: Part F
Step6: Part G
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
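# Illustrative sketch (not part of the original exercise): one possible set of implementations
# that the tests below assume. The exercise may intend explicit loops rather than NumPy
# shortcuts; these minimal versions simply satisfy the assertions.
def dot(a, b):
    "Dot product of two 1-D arrays, accumulated element by element."
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def subarray(x, indices):
    "Return the elements of x at the given integer indices."
    return x[indices]

def less_than(x, t):
    "Return all elements of x strictly less than t, as a flat array."
    return x[x < t]

def greater_than(x, t):
    "Return all elements of x strictly greater than t, as a flat array."
    return x[x > t]

def in_between(x, lo, hi):
    "Return all elements of x strictly between lo and hi, as a flat array."
    return x[(x > lo) & (x < hi)]

def not_in_between(x, lo, hi):
    "Return all elements of x outside the open interval (lo, hi), as a flat array."
    return x[(x < lo) | (x > hi)]

def reverse_array(x):
    "Return a reversed copy of x, built element by element (no [::-1] slicing)."
    out = np.zeros_like(x)
    for i in range(len(x)):
        out[i] = x[len(x) - 1 - i]
    return out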
np.random.seed(57442)
x1 = np.random.random(10)
x2 = np.random.random(10)
np.testing.assert_allclose(x1.dot(x2), dot(x1, x2))
np.random.seed(495835)
x1 = np.random.random(100)
x2 = np.random.random(100)
np.testing.assert_allclose(x1.dot(x2), dot(x1, x2))
import numpy as np
np.random.seed(5381)
x1 = np.random.random(43)
i1 = np.random.randint(0, 43, 10)
a1 = np.array([ 0.24317871, 0.16900041, 0.20687451, 0.38726974, 0.49798077,
0.32797843, 0.18801287, 0.29021025, 0.65418547, 0.78651195])
np.testing.assert_allclose(a1, subarray(x1, i1), rtol = 1e-5)
x2 = np.random.random(74)
i2 = np.random.randint(0, 74, 5)
a2 = np.array([ 0.96372034, 0.84256813, 0.08188566, 0.71852542, 0.92384611])
np.testing.assert_allclose(a2, subarray(x2, i2), rtol = 1e-5)
import numpy as np
np.random.seed(85928)
x = np.random.random((10, 20, 30))
t = 0.001
y = np.array([ 0.0005339 , 0.00085714, 0.00091265, 0.00037283])
np.testing.assert_allclose(y, less_than(x, t))
np.random.seed(8643)
x2 = np.random.random((100, 100, 10))
t2 = 0.0001
y2 = np.array([ 2.91560413e-06, 6.80065620e-06, 3.63294064e-05,
7.50659065e-05, 1.61602031e-06, 9.37205052e-05])
np.testing.assert_allclose(y2, less_than(x2, t2), rtol = 1e-05)
import numpy as np
np.random.seed(592582)
x = np.random.random((10, 20, 30))
t = 0.999
y = np.array([ 0.99910167, 0.99982779, 0.99982253, 0.9991043 ])
np.testing.assert_allclose(y, greater_than(x, t))
np.random.seed(689388)
x2 = np.random.random((100, 100, 10))
t2 = 0.9999
y2 = np.array([ 0.99997265, 0.99991169, 0.99998906, 0.99999012, 0.99992325,
0.99993289, 0.99996637, 0.99996416, 0.99992627, 0.99994388,
0.99993102, 0.99997486, 0.99992968, 0.99997598])
np.testing.assert_allclose(y2, greater_than(x2, t2), rtol = 1e-05)
import numpy as np
np.random.seed(7472)
x = np.random.random((10, 20, 30))
lo = 0.499
hi = 0.501
y = np.array([ 0.50019884, 0.50039172, 0.500711 , 0.49983418, 0.49942259,
0.4994417 , 0.49979261, 0.50029046, 0.5008376 , 0.49985266,
0.50015914, 0.50068227, 0.50060399, 0.49968918, 0.50091042,
0.50063015, 0.50050032])
np.testing.assert_allclose(y, in_between(x, lo, hi))
import numpy as np
np.random.seed(14985)
x = np.random.random((30, 40, 50))
lo = 0.49999
hi = 0.50001
y = np.array([ 0.50000714, 0.49999045])
np.testing.assert_allclose(y, in_between(x, lo, hi))
import numpy as np
np.random.seed(475185)
x = np.random.random((10, 20, 30))
lo = 0.001
hi = 0.999
y = np.array([ 9.52511605e-04, 8.62993716e-04, 3.70243252e-04,
9.99945849e-01, 7.21751759e-04, 9.36931041e-04,
5.10792605e-04, 6.44911672e-04])
np.testing.assert_allclose(y, not_in_between(x, lo, hi))
np.random.seed(51954)
x = np.random.random((30, 40, 50))
lo = 0.00001
hi = 0.99999
y = np.array([ 8.46159001e-06, 9.99998669e-01, 9.99993873e-01,
5.58488698e-06, 9.99993348e-01])
np.testing.assert_allclose(y, not_in_between(x, lo, hi))
import numpy as np
np.random.seed(5748)
x1 = np.random.random(75)
y1 = x1[::-1] # Sorry, you're not allowed to do this!
np.testing.assert_allclose(y1, reverse_array(x1))
np.random.seed(68382)
x2 = np.random.random(581)
y2 = x2[::-1] # Sorry, you're not allowed to do this!
np.testing.assert_allclose(y2, reverse_array(x2))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Performing the SVD and counting the number of singular values that are greater than $10^{-9}$
Step3: With only three nonzero singular values, we could reconstruct the matrix with very little data - just three singular values and six vectors.
Step4: The effect of the noise is clear in the image.
Step5: We can see that much of the noise in the image has been eliminated.
Step6: We could work with the colour image, but it is simpler to work with a gray scale image because then we have only one value for the colour intensity at each pixel rather than three (red/green/blue).
Step7: We can convert the image to a regular matrix with values between 0 and 255, with each entry corresponding to a pixel in the image. Creating the matrix and inspecting the first four rows and three columns (top left corner of the image)
Step8: Now, maybe we can discard information associated with small singular values without perceiving any visual change in the image. To explore this, we compute the SVD of the gray scale image
Step9: The argument full_matrices=False tells NumPy to not store all the redundant zero terms in the $\boldsymbol{\Sigma}$ array. This is the normal approach in practice, but not in most textbooks. Note that NumPy returns the singular values as a one-dimensional array, not as a matrix.
Step10: We can now try compressing the image. We first try retaining only the largest 25% of the singular values
Step11: We have discarded 3/4 of the singular values, but can barely perceive a difference in the image.
Step12: Let's try retaining just 10% of the singular values
Step13: Even with only 10% of the singular values retained, it is hard to perceive a difference between the images. Next we try keeping only 2%
Step14: We now see some clear image degradation, but the image is still recognisable. We'll try one more case where we retain only 0.5% of the singular values.
Step15: The image quality is now quite poor.
Step16: We can extract the red, green and blue components to have a look
Step17: We now compute an SVD for the matrix of each colour
Step18: Compressing the matrix for each colouring separately and then reconstructing the three-dimensional array
Step19: Comparing the compressed and original images side-by-side
Step20: Retaining 10% of the singular values for each colour, we can see some artifacts in the compressed image, which indicates that using the SVD for each colour independently is probably not a good idea.
Step21: Compute the SVD of each channel
Step22: Compress each channel, and display compressed channels in gray scale
Step23: Combine compressed channels
Step24: Interactive compression
Step25: Effective rank
Step26: Clearly the first two rows are linearly dependent and the rank of this matrix is 2. We can verify this using NumPy
Step27: We now add some noise in the range $(0, 10^{-6})$ to the matrix entries
Step28: We now test the rank
Step29: The problem is that we have a 'data set' that is linearly dependent, but this is being masked by very small measurement noise.
Step30: If we define the effective rank as the number of singular values that are greater than the noise level, the effective rank of $\boldsymbol{A}$ is 2.
Step31: To find the polynomial coefficients we want to solve
Step32: It is clear by inspection that $\boldsymbol{A}$ is not full rank, and is rank 2.
Step33: We can see that two of the singular values are zero. To find a least-squares fit to the data with minimal $\| \boldsymbol{c}\|_{2}$ we compute
Step34: Computing the least-squares solution from $\hat{\boldsymbol{c}} = \boldsymbol{V}_{1} \boldsymbol{\Sigma}^{+} \boldsymbol{U}_{1}^{T}\boldsymbol{b}$
Step35: The solution is $f(x, y) = 1 + 2x$, which in this case in fact interpolates the data points. Plotting the function, we have a plane that passes through the points.
Step36: We now try adding some noise to the sample positions and the measured values. The Vandermonde matrix is no longer singular so we can solve $\boldsymbol{A} \boldsymbol{c} = \boldsymbol{f}$ to get the polynomial coefficients
Step37: We now see significant coefficients for the $y$ and $xy$ terms in the interpolating polynomial just as a consequence of adding a small amount of noise. Plotting the surface and the points, we see the dramatic impact of the noise.
Step38: Performing an SVD on the matrix with noise and printing the singular values
Step39: We see that two of the values are considerably smaller than the others. If we set these to zero and follow the least-squares procedure for the rank-deficient problem
Step40: We see that the fitting polynomial is very close to the noise-free case.
Step41: We collect the data in a $200 \times 2$ matrix $\boldsymbol{X}$ (200 measurements, 2 variables)
Step42: We can compute the covariance matrix $\boldsymbol{C}$ by making the columns of $\boldsymbol{X}$ zero mean and computing $\boldsymbol{X}^{T}\boldsymbol{X}/(N-1)$
Step43: The covariance matrix is square and symmetric, so we can diagonalise it by computing the eigenvalues and eigenvectors.
Step44: Plotting the data set and the principal directions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
A = np.ones((100, 200))
A[33:33 + 4, 33:133] = 0.0
A[78:78 + 4, 33:133] = 0.0
A[33:78+4, 33:33+4] = 0.0
A[33:78+4, 129:129+4] = 0.0
plt.imshow(A, cmap='gray', interpolation='none')
plt.show()
U, s, V = np.linalg.svd(A, full_matrices=False)
print("Number of singular values greater than 1.0e-9: {}".format((s > 1.0e-9).sum()))
A = np.ones((100, 200))
A = A - 1.0e-1*np.random.rand(100, 200)
A[33:33 + 4, 33:133] = 0.0
A[78:78 + 4, 33:133] = 0.0
A[33:78+4, 33:33+4] = 0.0
A[33:78+4, 129:129+4] = 0.0
plt.imshow(A, cmap='gray', interpolation='none');
# Compute SVD of nois matrix
U, s, V = np.linalg.svd(A, full_matrices=False)
# Set any singular values less than 1.0 equation zero
s[s < 1.0] = 0.0
# Reconstruct low rank approximation and display
A_denoised = np.dot(U, np.dot(np.diag(s), V))
plt.imshow(A_denoised, cmap='gray', interpolation='none')
plt.show();
from urllib.request import urlopen
url = "https://github.com/garth-wells/notebooks-3M1/raw/master/photo/2020-1.png"
img_colour = Image.open(urlopen(url))
img_colour = img_colour.convert('RGB')
print("Image size (pixels):", img_colour.size)
print("Image array shape: ", np.array(img_colour).shape)
plt.figure(figsize=(15, 15/1.77))
plt.imshow(img_colour);
img_bw = img_colour.convert('L')
plt.figure(figsize=(15, 15/1.77))
plt.imshow(img_bw, cmap='gray');
print("Image array shape: {}".format(img_bw.size))
plt.savefig("bw.pdf")
img_array = np.array(img_bw)
print("Image shape:", img_array.shape)
print(img_array[:4, :3])
U, s, V = np.linalg.svd(img_array, full_matrices=False)
print("Number of singular values: {}".format(len(s)))
print("Max, min singular values: {}, {}".format(s[0], s[-1]))
plt.xlabel('$i$')
plt.ylabel('$\sigma_i$')
plt.title('Singular values')
plt.yscale('log')
plt.plot(s, 'bo');
plt.savefig("bw-svd.pdf")
# Compute num_sigma/4 (25%) and zero values
r = int(0.25*len(s))
# Re-construct low rank approximation (this may look a little cryptic, but we use the below
# expression to avoid unecessary computation)
compressed = U[:,:r].dot(s[:r, np.newaxis]*V[:r,:])
compressed = compressed.astype(int)
# Plot compressed and original image
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(18, 18/1.77));
axes[0].set_title('Compressed image with largest 25% of singular values retained')
axes[0].imshow(compressed, cmap='gray');
axes[1].set_title('Original image')
axes[1].imshow(img_array, cmap='gray');
def compress_image(U, s, V, f):
    "Compress image where 0 < f <= 1 is the fraction of singular values to retain"
r = int(f*len(s))
return (U[:,:r].dot(s[:r, np.newaxis]*V[:r,:])).astype(int)
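# Illustrative addition (not part of the original notebook): the rank-r approximation needs only
# r*(m + n + 1) numbers (r columns of U, r rows of V and r singular values) instead of the m*n
# pixel values, so we can estimate how much storage a given compression level saves.
def storage_fraction(shape, f, num_singular_values):
    "Fraction of the original m*n storage needed when a fraction f of singular values is kept"
    m, n = shape
    r = int(f*num_singular_values)
    return r*(m + n + 1)/(m*n)
print("Fraction of original storage when keeping 10% of the singular values:",
      round(storage_fraction(img_array.shape, 0.1, len(s)), 3))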
# Compress image/matrix
compressed = compress_image(U, s, V, 0.1)
# Plot compressed and original image
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))
axes[0].set_title('Compressed image with largest 10% of singular values retained')
axes[0].imshow(compressed, cmap='gray');
axes[1].set_title('Original image')
axes[1].imshow(img_array, cmap='gray');
plt.savefig("bw-0-10.pdf")
# Compress image/matrix
compressed = compress_image(U, s, V, 0.02)
# Plot compressed and original image
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))
axes[0].set_title('Compressed image with largest 2% of singular values retained')
axes[0].imshow(compressed, cmap='gray');
axes[1].set_title('Original image')
axes[1].imshow(img_array, cmap='gray');
plt.savefig("bw-0-02.pdf")
# Compress image/matrix
compressed = compress_image(U, s, V, 0.005)
# Plot compressed and original image
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))
axes[0].set_title('Compressed image with largest 0.5% of singular values retained')
axes[0].imshow(compressed, cmap='gray');
axes[1].set_title('Original image')
axes[1].imshow(img_array, cmap='gray');
plt.savefig("bw-0-005.pdf")
print("Image array shape: {}".format(img_colour.size))
plt.figure(figsize=(20,20/1.77))
plt.title('This is a photo of 2020 3M1 class members')
plt.imshow(img_colour);
# Display red, green and blue channels by zeroing other channels
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 20/1.77))
img_array = np.array(img_colour)
# Zero the g/b channels
red = img_array.copy()
red[:,:,(1,2)] = 0.0
axes[0].imshow(red);
# Zero the r/b channels
green = img_array.copy()
green[:,:,(0,2)] = 0.0
axes[1].imshow(green);
# Zero the r/g channels
blue = img_array.copy()
blue[:,:,(0,1)] = 0.0
axes[2].imshow(blue);
# Compute SVD for each colour
U, s, V = [0]*3, [0]*3, [0]*3
for i in range(3):
U[i], s[i], V[i] = np.linalg.svd(img_array[:, :, i], full_matrices=False)
# Compress each colour separately
compressed = [compress_image(U[i], s[i], V[i], 0.1) for i in range(3)]
# Reconstruct 3D RGB array and filter any values outside of (0, 1)
compressed = np.dstack(compressed)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))
axes[0].set_title('Image with largest 10% of singular values retained')
axes[0].imshow(compressed, interpolation="nearest");
axes[1].set_title('Original image')
axes[1].imshow(img_colour);
img_colour_ycbcr = np.array(img_colour.convert("YCbCr"))
# Display Luminance(Y), Blue Chroma(Cb) and Red Chroma(Cr) channels
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 20/1.77))
Y = img_colour_ycbcr[:,:,0]
axes[0].imshow(Y, cmap='gray');
Cb = img_colour_ycbcr[:,:,1]
axes[1].imshow(Cb, cmap='gray');
Cr = img_colour_ycbcr[:,:,2]
axes[2].imshow(Cr, cmap='gray');
# Compute SVD for each channel
U, s, V = [0]*3, [0]*3, [0]*3
for i in range(3):
U[i], s[i], V[i] = np.linalg.svd(img_colour_ycbcr[:, :, i], full_matrices=False)
# Compress each component separately
compressed = [compress_image(U[0], s[0], V[0], 0.05),
compress_image(U[1], s[1], V[1], 0.005),
compress_image(U[2], s[2], V[2], 0.005)]
# Reconstruct 3D YCbCr array
compressed = np.dstack(compressed)
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 20/1.77))
Y = compressed[:,:,0]
axes[0].imshow(Y, cmap='gray');
Cb = compressed[:,:,1]
axes[1].imshow(Cb, cmap='gray');
Cr = compressed[:,:,2]
axes[2].imshow(Cr, cmap='gray');
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))
axes[0].set_title('Image with largest 20% of brightness singular values retained and 0.5% colours')
im = Image.fromarray(np.uint8(compressed), mode="YCbCr")
axes[0].imshow(im)
axes[1].set_title('Original image')
axes[1].imshow(img_colour);
from ipywidgets import widgets
from ipywidgets import interact
url = "https://github.com/garth-wells/notebooks-3M1/raw/master/photo/IMG_20190117_141222563.png"
img = Image.open(urlopen(url))
img_colour_ycbcr = np.array(img.convert("YCbCr"))
# Compute SVD for each channel
U0, s0, V0 = [0]*3, [0]*3, [0]*3
for i in range(3):
U0[i], s0[i], V0[i] = np.linalg.svd(img_colour_ycbcr[:, :, i], full_matrices=False)
@interact(ratio_Y=(0.005, 0.4, 0.02),
ratio_Cb=(0.001, 0.1, 0.01),
ratio_Cr=(0.001, 0.1, 0.01))
def plot_image(ratio_Y=0.1, ratio_Cb=0.01, ratio_Cr=0.01):
compressed = [compress_image(U0[0], s0[0], V0[0], ratio_Y),
compress_image(U0[1], s0[1], V0[1], ratio_Cb),
compress_image(U0[2], s0[2], V0[2], ratio_Cr)]
# Reconstruct 3D YCbCr array
compressed = np.dstack(compressed)
img_compressed = Image.fromarray(np.uint8(compressed), mode="YCbCr")
# Show
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))
axes[0].set_title('Compressed image')
axes[0].imshow(img_compressed)
axes[1].set_title('Original image')
axes[1].imshow(img)
A = np.array([[1, 1, 1], [2, 2, 2], [1, 0 ,1]])
print(A)
print("Rank of A is: {}".format(np.linalg.matrix_rank(A)))
np.random.seed(10)
A = A + 1.0e-6*np.random.rand(A.shape[0], A.shape[1])
print("Rank of A (with noise) is: {}".format(np.linalg.matrix_rank(A)))
U, s, V = np.linalg.svd(A)
print("The singular values of A (with noise) are: {}".format(s))
x, y, f = np.zeros(4), np.zeros(4), np.zeros(4)
x[0], y[0], f[0] = 1.0, 0.0, 3.0
x[1], y[1], f[1] = 2.0, 0.0, 5.0
x[2], y[2], f[2] = 3.0, 0.0, 7.0
x[3], y[3], f[3] = 4.0, 0.0, 9.0
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('f')
ax.scatter(x, y, f)
plt.show()
A = np.polynomial.polynomial.polyvander2d(y, x, [1, 1])
print(A)
U, s, V = np.linalg.svd(A)
print(s)
# Create view of U with last two columns removed
U1 = U[:, :2]
# Create view of V with last two columns removed
V1 = V[:2,:]
# Create Sigma^{+} by inverting the nonzero singular values and
# discarding the zero singular values
S1 = np.diag(1.0/s[:-2])
print(S1)
c = np.transpose(V1).dot(S1.dot(U1.T).dot(f))
print(c)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Plot points
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_zlabel('$f$')
ax.scatter(x, y, f)
# Plot surface
X = np.arange(0, 5, 0.2)
Y = np.arange(-5, 5, 0.2)
X, Y = np.meshgrid(X, Y)
Z = 1.0 + 2.0*X + 0.0*Y
surf = ax.plot_surface(X, Y, Z, rstride=5, cstride=5, alpha=0.1)
ax.view_init(elev=30, azim=80)
plt.show()
np.random.seed(20)
xn = x + 1.0e-3*(1.0 - np.random.rand(len(x)))
yn = y + 1.0e-3*(1.0 - np.random.rand(len(y)))
fn = f + 1.0e-3*(1.0 - np.random.rand(len(f)))
A = np.polynomial.polynomial.polyvander2d(yn, xn, [1, 1])
c = np.linalg.solve(A, fn)
print(c)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Plot points
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_zlabel('$f$')
ax.scatter(xn, yn, fn)
# Plot surface
X = np.arange(0, 5, 0.2)
Y = np.arange(-5, 5, 0.2)
X, Y = np.meshgrid(X, Y)
Z = c[0] + c[1]*X + c[2]*Y + c[3]*X*Y
surf = ax.plot_surface(X, Y, Z, rstride=5, cstride=5, alpha=0.1)
ax.view_init(elev=30, azim=80)
plt.show()
U, s, V = np.linalg.svd(A)
print(s)
# Create view of U with last two columns removed
U1 = U[:, :2]
# Create view of V with last two columns removed
V1 = V[:2,:]
# Create \Sigma^{+}
S1 = np.diag(1.0/s[:-2])
c = np.transpose(V1).dot(S1.dot(U1.T).dot(f))
print(c)
np.random.seed(1)
x0 = np.random.randn(200) + 5.0
x1 = 1.5*x0 + np.random.rand(len(x0))
ax = plt.axes()
ax.scatter(x0, x1, alpha=0.5);
ax.set_xlabel('$x_{1}$');
ax.set_ylabel('$x_{2}$');
X = np.column_stack((x0, x1))
for c in range(X.shape[1]):
X[:,c] = X[:,c] - np.mean(X[:,c])
C = (X.T).dot(X)/(len(x0)-1.0)
U, s, V = np.linalg.svd(C)
print(s)
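# C is symmetric positive semi-definite, so its SVD coincides with its eigendecomposition:
# the singular values are the variances along the principal directions, and the rows of V
# are the principal directions plotted below.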
ax = plt.axes()
ax.set_aspect(1.0);
ax.set_ylim(-4.0, 4.0);
ax.set_xlabel('$x_{1}$')
ax.set_ylabel('$x_{2}$')
ax.quiver(V[0, 0], V[0, 1], angles='xy',scale_units='xy',scale=0.3);
ax.quiver(V[1, 0], V[1, 1], angles='xy',scale_units='xy',scale=1);
ax.scatter(X[:,0], X[:,1], alpha=0.2);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the data
Step2: One-sample tests
Step3: Sign test
Step4: Wilcoxon signed-rank test
Step5: Permutation test
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import itertools
from scipy import stats
from statsmodels.stats.descriptivestats import sign_test
from statsmodels.stats.weightstats import zconfint
%pylab inline
mouses_data = pd.read_csv('mirror_mouses.txt', header = None)
mouses_data.columns = ['proportion_of_time']
mouses_data
mouses_data.describe()
pylab.hist(mouses_data.proportion_of_time)
pylab.show()
print('95%% confidence interval for the mean time: [%f, %f]' % zconfint(mouses_data))
print("M: %d, p-value: %f" % sign_test(mouses_data, 0.5))
m0 = 0.5
stats.wilcoxon(mouses_data.proportion_of_time - m0)
def permutation_t_stat_1sample(sample, mean):
t_stat = sum(map(lambda x: x - mean, sample))
return t_stat
permutation_t_stat_1sample(mouses_data.proportion_of_time, 0.5)
def permutation_zero_distr_1sample(sample, mean, max_permutations = None):
    centered_sample = list(map(lambda x: x - mean, sample))
if max_permutations:
signs_array = set([tuple(x) for x in 2 * np.random.randint(2, size = (max_permutations,
len(sample))) - 1 ])
else:
signs_array = itertools.product([-1, 1], repeat = len(sample))
distr = [sum(centered_sample * np.array(signs)) for signs in signs_array]
return distr
pylab.hist(permutation_zero_distr_1sample(mouses_data.proportion_of_time, 0.5), bins = 15)
pylab.show()
def permutation_test(sample, mean, max_permutations = None, alternative = 'two-sided'):
if alternative not in ('two-sided', 'less', 'greater'):
raise ValueError("alternative not recognized\n"
"should be 'two-sided', 'less' or 'greater'")
t_stat = permutation_t_stat_1sample(sample, mean)
zero_distr = permutation_zero_distr_1sample(sample, mean, max_permutations)
if alternative == 'two-sided':
return sum([1. if abs(x) >= abs(t_stat) else 0. for x in zero_distr]) / len(zero_distr)
if alternative == 'less':
return sum([1. if x <= t_stat else 0. for x in zero_distr]) / len(zero_distr)
if alternative == 'greater':
return sum([1. if x >= t_stat else 0. for x in zero_distr]) / len(zero_distr)
print("p-value: %f" % permutation_test(mouses_data.proportion_of_time, 0.5))
print("p-value: %f" % permutation_test(mouses_data.proportion_of_time, 0.5, 10000))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Connect to server
Step2: Random binary network
Step3: Random weighted network
Step4: Lobster network
Step5: Coloring by degree
Step6: Graph of clustering
|
<ASSISTANT_TASK:>
Python Code:
import os
from lightning import Lightning
from numpy import random, asarray, linspace, corrcoef
from colorsys import hsv_to_rgb
from sklearn import datasets
import networkx as nx
lgn = Lightning(ipython=True, host='http://public.lightning-viz.org')
n = 100
G = nx.random_regular_graph(3,n)
mat = nx.adjacency_matrix(G).todense()
lgn.force(mat)
G = nx.random_regular_graph(3,100)
mat = asarray(nx.adjacency_matrix(G).todense()) * (((random.rand(100,100))*5) ** 2)
lgn.force(mat)
G = nx.random_lobster(60, 0.5, 0.0, seed=44)
mat = nx.adjacency_matrix(G).todense()
n = len(G.nodes())
c = [list(asarray(hsv_to_rgb(float(x) / n, 0.8, 1.0))*255) if y > 1 else [150, 150, 150] for (x, y) in G.degree()]
lgn.force(mat, color=c)
G = nx.random_geometric_graph(50, 0.1)
mat = nx.adjacency_matrix(G).todense()
g = [d for _, d in G.degree()]
lgn.force(mat, group=g, size=(asarray(g) + 1.5)*3)
d, g = datasets.make_blobs(n_features=5, n_samples=50, centers=5, cluster_std=2.0, random_state=100)
c = corrcoef(d)
c[c<0.9] = 0
lgn.force(c, group=g)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 3 initiation mutant simulations
Step2: Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 4 double mutant simulations
Step3: Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 5 CTC distance mutant simulations
Step4: Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 11 CTA distance mutant simulations
Step5: Fit Run 13 stall strength to reproduce measured single mutant YFP rates during serine starvation for Run 14 initiation mutant simulations
Step6: Fit Run 13 stall strength to reproduce measured single mutant YFP rates during serine starvation for Run 15 double mutant simulations
Step7: Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 16 multiple CTA mutant simulations
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import re
import os
import numpy as np
import simulation_utils
from scipy.interpolate import interp1d
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_initiation_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=2)
simulationdata.drop(
['files'], axis=1).to_csv(
'../rawdata/simulations/run2_data.tsv', sep='\t', index_label='index')
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run2_data.tsv', index_col=0)
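# For every candidate premature-termination rate, the loop below interpolates the simulated
# relationship between stall strength and the pause-site ratio (ps_ratio), and inverts it at
# the measured, normalised single-mutant YFP rate to obtain a fitted stall strength.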
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant.
lower())]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant]['measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run2_fit_stallstrength_for_initiation_'
+ title + '.tsv',
sep='\t',
index_label='mutant')
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_double_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=2)
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run2_data.tsv', index_col=0)
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant)]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant]['measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run2_fit_stallstrength_for_double_' +
title + '.tsv',
sep='\t',
index_label='mutant')
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_distance_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=2)
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run2_data.tsv', index_col=0)
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant.
lower())]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant]['measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run2_fit_stallstrength_for_ctc_distance_'
+ title + '.tsv',
sep='\t',
index_label='mutant')
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_cta_distance_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=2)
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run2_data.tsv', index_col=0)
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant.
lower())]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant]['measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run2_fit_stallstrength_for_cta_distance_'
+ title + '.tsv',
sep='\t',
index_label='mutant')
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_serine_initiation_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=13)
simulationdata.drop(
['files'], axis=1).to_csv(
'../rawdata/simulations/run13_data.tsv', sep='\t', index_label='index')
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run13_data.tsv', index_col=0)
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant.
lower())]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant]['measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run13_serine_fit_stallstrength_for_initiation_'
+ title + '.tsv',
sep='\t',
index_label='mutant')
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_serine_double_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=13)
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run13_data.tsv', index_col=0)
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
if mutant == 'tcg8': # I did not use TCG8 for double mutants
continue
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant.
lower())]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant]['measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run13_serine_fit_stallstrength_for_double_'
+ title + '.tsv',
sep='\t',
index_label='mutant')
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_leucine_multiple_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=2)
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run2_data.tsv', index_col=0)
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant)]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant, 'measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run2_fit_stallstrength_for_leucine_multiple_'
+ title + '.tsv',
sep='\t',
index_label='mutant')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you want to embed plots inside the IPython notebook, you need to turn on this option.
Step2: Generate the data
Step3: Parameters of the joint distribution of $X$ and $e$.
Step4: Regression level and slope.
Step5: Simulate $X$ and $e$ from multivariate normal distribution.
Step6: OLS Estimator
Step7: Compute the estimates
Step8: For each simulated sample we compute the estimate and append it to the list. The IPython "magic" %%time will show how much time was spent in the current block.
Step9: Same for the second implementation, which uses arrays instead of matrices.
Step10: Analyze the results
Step11: Plot the histogram.
Step12: Monte Carlo experiment
Step13: Plot densities of the estimates.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
from numpy.linalg import inv, lstsq
sns.set_context('notebook')
%matplotlib inline
N, S = 100, 1000
mean = [0,0]
rho = .1
cov = [[1, rho], [rho, 1]]
alpha, beta = 2, 3
def simulate_data(mean, cov, alpha, beta, size):
X, e = np.random.multivariate_normal(mean, cov, size).T
Y = alpha + beta * X + e
return Y, X
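# Added illustration (not part of the original notebook): rho sets the
# correlation between the regressor X and the error e, so for rho != 0 the OLS
# slope estimates computed below are not expected to center exactly on the
# true beta. A single small draw from the data-generating process:
y_demo, x_demo = simulate_data(mean, cov, alpha, beta, 5)
print(np.round(x_demo, 3), np.round(y_demo, 3))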
Y, X = simulate_data(mean, cov, alpha, beta, (N, S))
def ols_matrix(Y,X):
Y = np.matrix(Y).T
X = np.matrix(np.vstack((np.ones_like(X), X))).T
beta = np.array(inv(X.T * X) * (X.T * Y))
return float(beta[1])
def ols_array(Y,X):
X = np.vstack((np.ones_like(X), X)).T
Qxx = np.dot(X.T, X)
Qxy = np.dot(X.T, Y)
beta = np.dot(inv(Qxx), Qxy)
return float(beta[1])
def ols_lstsq(Y,X):
X = np.vstack((np.ones_like(X), X)).T
beta = lstsq(X,Y)[0]
return float(beta[1])
beta_hat1, beta_hat2, beta_hat3 = [], [], []
%%time
for y, x in zip(Y, X):
beta_hat1.append(ols_matrix(y, x))
%%time
for y, x in zip(Y, X):
beta_hat2.append(ols_array(y, x))
%%time
for y, x in zip(Y, X):
beta_hat3.append(ols_lstsq(y, x))
print(np.array_equal(beta_hat1, beta_hat2))
print(np.array_equal(beta_hat1, beta_hat3))
print(np.allclose(beta_hat1, beta_hat3, atol=1e-20))
plt.figure(figsize = (12, 6))
plt.hist(beta_hat1, 50, histtype='stepfilled', density=True, lw=0, alpha=.5, label='Density')
plt.axvline(beta, color='red', lw=5, label='True')
plt.axvline(np.mean(beta_hat1), color='black', lw=5, label='Mean Estimate')
plt.xlabel(r'$\hat{\beta}$')
plt.ylabel('%')
plt.legend()
plt.show()
nobs = [250, 500, 1000, 2000]
beta_hat = []
for N in nobs:
Y, X = simulate_data(mean, cov, alpha, beta, (N, S))
temp = []
for y, x in zip(Y, X):
temp.append(ols_array(y, x))
beta_hat.append(temp)
beta_hat = np.array(beta_hat)
plt.figure(figsize=(8, 5))
for i in range(len(nobs)):
    sns.kdeplot(beta_hat[i], alpha=.4, lw=3, fill=True, label=nobs[i])
plt.axvline(beta, color='red', lw=3, label='True')
plt.xlabel(r'$\hat{\beta}$')
plt.ylabel('%')
plt.legend(title='N')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load WWW dataset with sparse matrices
Step2: Compute growth rate of WWW dataset with varying size of nodes
Step3: Fit the growth curve
Step4: Plot the empirical and fitted growth curves
|
<ASSISTANT_TASK:>
Python Code:
import os
import pickle
import time
from collections import defaultdict
import matplotlib.pyplot as plt
import numpy as np
from scipy.sparse import csc_matrix, csr_matrix, dok_matrix
from scipy.optimize import curve_fit
%matplotlib inline
n_e = 325729
def getWWWdataset(n_e = 325729, shuffle=True):
if shuffle:
node_idx = np.arange(n_e)
np.random.shuffle(node_idx)
node_dic = {i:node_idx[i] for i in range(n_e)}
else:
node_dic = {i:i for i in range(n_e)}
row_list = list()
col_list = list()
with open('../data/www/www.dat.txt', 'r') as f:
for line in f.readlines():
row, col = line.split()
row = int(row.strip())
col = int(col.strip())
row_list.append(node_dic[row])
col_list.append(node_dic[col])
return row_list, col_list
if not os.path.exists('www_growth.pkl'):
n_e = 325729
n_link = defaultdict(list)
n_samples = 10
for si in range(n_samples):
row_list, col_list = getWWWdataset()
www_row = csr_matrix((np.ones(len(row_list)), (row_list, col_list)), shape=(n_e, n_e))
www_col = csc_matrix((np.ones(len(row_list)), (row_list, col_list)), shape=(n_e, n_e))
n_link[0].append(0)
for i in range(1, n_e):
            # count links between node i and the previously added nodes
cnt = 0
cnt += www_row.getrow(i)[:,:i].nnz
cnt += www_col.getcol(i)[:i-1,:].nnz
n_link[i].append(cnt + n_link[i-1][-1])
pickle.dump(n_link, open('www_growth.pkl', 'wb'))
else:
n_link = pickle.load(open('www_growth.pkl', 'rb'))
avg_cnt = [np.mean(n_link[i]) for i in range(n_e)]
def func(x, a, b, c):
return c*x**a + b
def poly2(x, a, b, c):
return c*x**2 + b*x + a
popt, pcov = curve_fit(func, np.arange(n_e), avg_cnt)
fitted_t = func(np.arange(n_e), *popt)
popt2, pcov2 = curve_fit(poly2, np.arange(n_e), avg_cnt)
fitted_t2 = poly2(np.arange(n_e), *popt2)
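# Added illustration (not in the original notebook): curve_fit also returns the
# covariance of the fitted parameters, so rough one-sigma uncertainties for the
# power-law fit follow from the diagonal of pcov.
perr = np.sqrt(np.diag(pcov))
print('power law: a = %.3f +/- %.3f, b = %.1f +/- %.1f, c = %.5f +/- %.5f'
      % (popt[0], perr[0], popt[1], perr[1], popt[2], perr[2]))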
plt.figure(figsize=(16,6))
plt.subplot(1,2,1)
plt.plot(avg_cnt, label='empirical')
plt.plot(fitted_t, label='$y=%.5f x^{%.2f} + %.2f$' % (popt[2], popt[0], popt[1]))
plt.plot(fitted_t2, label='$y=%.5f x^2 + %.5f x + %.2f$' % (popt2[2], popt2[1], popt2[0]))
plt.legend(loc='upper left')
plt.title('# of nodes vs # of links')
plt.xlabel('# nodes')
plt.ylabel('# links')
plt.subplot(1,2,2)
plt.plot(avg_cnt, label='empirical')
plt.plot(fitted_t, label='$y=%.5f x^{%.2f} + %.2f$' % (popt[2], popt[0], popt[1]))
plt.plot(fitted_t2, label='$y=%.5f x^2 + %.5f x + %.2f$' % (popt2[2], popt2[1], popt2[0]))
plt.legend(loc='upper left')
plt.title('# of nodes vs # of links (Magnified)')
plt.xlabel('# nodes')
plt.ylabel('# links')
plt.axis([100000,150000,100000,350000])
row_list, col_list = getWWWdataset()
www_row = csr_matrix((np.ones(len(row_list)), (row_list, col_list)), shape=(n_e, n_e))
www_col = csc_matrix((np.ones(len(row_list)), (row_list, col_list)), shape=(n_e, n_e))
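# Added comment: total degree of each node = out-degree (row sums of the
# adjacency matrix) + in-degree (column sums).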
entity_degree = (www_row.sum(1) + www_col.sum(0).T).tolist()
e_list = np.arange(n_e)
np.random.shuffle(e_list)
one_entity = [entity_degree[ei][0] == 1 for ei in e_list]
cumsum = np.cumsum(one_entity)
plt.figure(figsize=(8,6))
plt.plot(cumsum)
plt.xlabel('# of entities')
plt.ylabel('# of entities of degree one')
plt.title('# of entities of degree one in WWW')
plt.axis([0, n_e, 0, n_e])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Defining the model
Step2: Training the network
Step3: We then define a function that plots the prediction of the model for a given set of weights, along with the training data.
Step4: We can then use the function interact of ipywidgets to call this function with adjustable weights
Step5: Performing predictions on the test sets
Step6: We can now check the accuracy of our model on the first point for instance
Step7: More generally, by using pandas syntax, we can perform predictions on the whole dataset
|
<ASSISTANT_TASK:>
Python Code:
# Import packages
import pandas as pd
import numpy as np
from ipywidgets import interact
import matplotlib.pyplot as plt
%matplotlib inline
def probability_setosa( petal_length, petal_width, w0, w1, w2 ):
"Return the probability that a given specimen belongs to the species setosa"
# Compute sum of features times weights
x = w0 + w1*petal_width + w2*petal_length
# Apply non-linear function: sigmoid
p = 1./( 1. + np.exp( -x ) )
return( p )
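# Added check: a zero weighted sum gives a sigmoid output of 0.5, i.e. maximal
# uncertainty about the class.
print(probability_setosa(0., 0., 0., 1., 1.))  # expected: 0.5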
df = pd.read_csv('./data/setosa/train.csv')
df.head(10)
def plot_model( w0, w1, w2 ):
"Plot the model, along with the training data."
# Calculate the probability on a mesh
petal_width_mesh, petal_length_mesh = \
np.meshgrid( np.linspace(0,3,100), np.linspace(0,8,100) )
    p = probability_setosa( petal_length_mesh, petal_width_mesh, w0, w1, w2 )
    # Plot the probability on the mesh
    plt.clf()
    plt.imshow( p, extent=[0,3,0,8], origin='lower',
               vmin=0, vmax=1, cmap='RdBu', aspect='auto', alpha=0.5 )
# Plot the data points
plt.scatter( df['petal width (cm)'], df['petal length (cm)'], c=df['setosa'], cmap='RdBu')
plt.xlabel('petal width (cm)')
plt.ylabel('petal length (cm)')
cb = plt.colorbar()
cb.set_label('setosa')
interact( plot_model, w0=(-4.,5.), w1=(-2.,2.), w2=(-2., 3.))
# Optimal weights: fill these values
# (placeholder values so that the cells below run; replace them with the
# weights you found with the interactive widget above)
w0 = 5.
w1 = -2.
w2 = -2.
df_test = pd.read_csv('./data/setosa/test.csv')
df_test.head(10)
probability_setosa( 4.2, 1.5, w0, w1, w2 )
df_test['probability_setosa_predicted'] = \
probability_setosa( df_test['petal length (cm)'], df_test['petal width (cm)'], w0, w1, w2 )
df_test
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Drop na
Step2: Create X and y
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
pd.set_option('display.max_columns', 500)
import zipfile
with zipfile.ZipFile('../datasets/KaggleCredit2.csv.zip', 'r') as z:
f = z.open('KaggleCredit2.csv')
data = pd.read_csv(f, index_col=0)
data.head()
data.shape
data.isnull().sum(axis=0)
data.dropna(inplace=True)
data.shape
y = data['SeriousDlqin2yrs']
X = data.drop('SeriousDlqin2yrs', axis=1)
y.mean()
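# Added note: y is a 0/1 indicator, so y.mean() is the fraction of positive
# (SeriousDlqin2yrs == 1) cases, i.e. a rough measure of class imbalance.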
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df1 = pd.DataFrame({'id': [1, 2, 3, 4, 5],
'city': ['bj', 'bj', 'sh', 'sh', 'sh'],
'district': ['ft', 'ft', 'hp', 'hp', 'hp'],
'date': ['2019/1/1', '2019/1/1', '2019/1/1', '2019/1/1', '2019/1/1'],
'value': [1, 5, 9, 13, 17]})
df2 = pd.DataFrame({'id': [3, 4, 5, 6, 7],
'date': ['2019/2/1', '2019/2/1', '2019/2/1', '2019/2/1', '2019/2/1'],
'value': [1, 5, 9, 13, 17]})
def g(df1, df2):
return pd.concat([df1,df2.merge(df1[['id','city','district']], how='left', on='id')],sort=False).reset_index(drop=True)
result = g(df1.copy(),df2.copy())
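# Added for illustration: rows coming from df2 inherit 'city' and 'district'
# from df1 via the merge on 'id'; ids present only in df2 (6 and 7) end up
# with NaN in those columns.
print(result)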
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Overriding root-finding in the generated C++ code -- Chemical kinetics
Step3: There are some linking issues with boost's program options in the below (commented) cells.
Step4: Time to reach steady state
|
<ASSISTANT_TASK:>
Python Code:
from operator import mul
from functools import reduce
import subprocess
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
from pyodesys.results import Result
from pyodesys.symbolic import SymbolicSys
from pyodesys.native import native_sys
from pyodesys.native.util import parse_standalone_output
from _chem_kinet import get_odesys
sp.init_printing()
%matplotlib inline
# Fe+3 + SCN- <-> FeSCN+2
stoich_reac, stoich_prod = [(1, 1, 0), (0, 0, 1)], [(0, 0, 1), (1, 1, 0)]
kineticsys = get_odesys(stoich_reac, stoich_prod, 'Fe+3 SCN- FeSCN+2'.split(), SymbolicSys,
steady_state_root=True, latex_names=['%s' % s for s in 'Fe^{3+} SCN^- FeSCN^{2+}'.split()])
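# Added for illustration (attribute name assumed from pyodesys' SymbolicSys
# API): the symbolic right-hand-side expressions can be inspected directly.
kineticsys.exprs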
def integrate_and_plot(odesys, plot=True, **kwargs):
tend = 2
result = odesys.integrate(tend, [1e-2, 2e-3, 0], [800, 8], integrator='cvode', **kwargs)
if plot:
fig, axes = plt.subplots(1, 2, figsize=(14, 4))
if result.xout[-1] != tend:
axes[0].axvline(result.xout[-1], linestyle='--', label='t = %.4f' % result.xout[-1])
result.plot(ax=axes[0])
result.plot(ax=axes[1], deriv=True)
        axes[1].set_yscale('symlog', linthresh=1e-9)
axes[1].axhline(1e-9, linestyle='--')
axes[1].axhline(-1e-9, linestyle='--')
for ax in axes:
ax.set_xlim([0, tend])
return result
integrate_and_plot(kineticsys)
integrate_and_plot(kineticsys, atol=1e-14, rtol=1e-14)
integrate_and_plot(kineticsys, atol=1e-14, rtol=1e-14, return_on_root=True)
kineticsys.roots
native_override = {
    'p_nroots': """ return 1; """,
    'p_roots': """
        const indextype ny = get_ny();
        std::vector<realtype> f(ny);
        realtype tot=0.0;
        rhs(x, y, &f[0]);
        for (indextype i=0; i<ny; ++i){
            tot += std::min(std::abs(f[i]/m_atol[i]), std::abs(f[i]/y[i]/m_rtol));
        }
        out[0] = tot/ny - m_special_settings[0];
        this->nrev++;
        return AnyODE::Status::success;
"""
}
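# Added note: the overridden p_roots implements a steady-state criterion -- it
# evaluates the RHS, forms a tolerance-scaled measure of |dy/dt| averaged over
# the state variables and subtracts the factor passed via special_settings, so
# the solver reports a root once the system is effectively stationary.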
native_extend={
'p_constructor': [
'if (special_settings.size() != 1) std::cerr << "len(special_settings) != 1" << std::endl;'
]
}
nativesys = native_sys['cvode'].from_other(
kineticsys, namespace_override=native_override, namespace_extend=native_extend)
for path in nativesys._native._written_files:
if path.endswith('.cpp'):
print(path)
print('...\n' + ''.join(open(path).readlines()[-20:]))
print("")
#standalone_prog = nativesys.as_standalone('chem_kinet', compile_kwargs=dict(options=['warn', 'pic', 'openmp', 'debug']))
#standalone_prog
#p = subprocess.Popen([standalone_prog, '--return-on-root', '1', '--special-settings', '1000'],
# stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
#out, err = p.communicate(input='2 1e-2 2e-3 0 800 8 0 0 0 0 1'.encode('utf-8'))
#retc = p.wait()
#assert retc == 0
#print(err.decode('utf-8'))
#res_sa, = [Result(*args, kineticsys) for args in parse_standalone_output(out.decode('utf-8').split('\n'))]
#res_sa.plot()
native = native_sys['cvode'].from_other(kineticsys, namespace_override=native_override)
def plot_tss_conv(factor, tols, ax):
tol_kw = dict(plot=False, return_on_root=True, nsteps=2000, special_settings=[factor])
tss = [integrate_and_plot(native, atol=tol, rtol=tol, **tol_kw).xout[-1] for tol in tols]
ax.semilogx(tols, tss, label=factor)
fig, ax = plt.subplots(figsize=(14, 6))
tols = np.logspace(-15, -10, 200)
for factor in [1e2, 1e3, 1e4, 1.1e4, 1e5, 1e6, 1e7]:
plot_tss_conv(factor, tols, ax)
ax.legend()
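# Added note: the curves above show how the reported time to reach steady state
# depends on the integration tolerances for different special_settings factors,
# i.e. how robust the root-based stopping criterion is.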
<END_TASK>
|