Train an unconstrained (non-monotonic) calibrated linear model
nomon_linear_estimator = optimize_learning_rates( train_df=law_train_df, val_df=law_val_df, test_df=law_test_df, monotonicity=0, learning_rates=LEARNING_RATES, batch_size=BATCH_SIZE, num_epochs=NUM_EPOCHS, get_input_fn=get_input_fn_law, get_feature_columns_and_configs=get_feature_col...
site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb
tensorflow/docs-l10n
apache-2.0
Train a monotonic calibrated linear model
mon_linear_estimator = optimize_learning_rates( train_df=law_train_df, val_df=law_val_df, test_df=law_test_df, monotonicity=1, learning_rates=LEARNING_RATES, batch_size=BATCH_SIZE, num_epochs=NUM_EPOCHS, get_input_fn=get_input_fn_law, get_feature_columns_and_configs=get_feature_colum...
site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb
tensorflow/docs-l10n
apache-2.0
Training other unconstrained models We have shown that a TFL calibrated linear model can be trained to be monotonic in both LSAT score and GPA without sacrificing much accuracy. But how does the calibrated linear model compare with other types of models, such as deep neural networks (DNNs) or gradient boosted trees (GBTs)? Do DNNs and GBTs appear to produce reasonably fair outputs? To address this question, we next train an unconstrained DNN and GBT. In fact, we will observe that both the DNN and the GBT readily violate monotonicity in LSAT score and undergraduate GPA. Train an unconstrained deep neural network (DNN) model This architecture was previously tuned to achieve high validation accuracy.
feature_names = ['ugpa', 'lsat'] dnn_estimator = tf.estimator.DNNClassifier( feature_columns=[ tf.feature_column.numeric_column(feature) for feature in feature_names ], hidden_units=[100, 100], optimizer=tf.keras.optimizers.Adam(learning_rate=0.008), activation_fn=tf.nn.relu) dnn_estimator...
site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb
tensorflow/docs-l10n
apache-2.0
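One way to see the monotonicity violations mentioned above is to probe a model's predictions along a grid of one feature while holding everything else fixed. The sketch below uses a hypothetical `count_monotonicity_violations` helper and a toy predictor (neither is from the tutorial) to count adjacent grid points where the prediction decreases:

```python
import numpy as np

def count_monotonicity_violations(predict_fn, grid):
    """Count adjacent grid points where the prediction decreases
    even though the input feature increases."""
    preds = np.array([predict_fn(x) for x in grid])
    return int(np.sum(np.diff(preds) < 0))

# Toy predictor: increases overall but dips for small x.
def toy_predict(x):
    return x - 0.5 * np.sin(4 * x)

grid = np.linspace(0.0, 1.0, 101)
violations = count_monotonicity_violations(toy_predict, grid)
# violations > 0: the toy model is not monotonic on [0, 1]
```

The same probe, applied to the trained DNN or GBT predictions over an LSAT grid, would reveal the violations the tutorial plots.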
Train an unconstrained gradient boosted trees (GBT) model This tree structure was previously tuned to achieve high validation accuracy.
tree_estimator = tf.estimator.BoostedTreesClassifier( feature_columns=[ tf.feature_column.numeric_column(feature) for feature in feature_names ], n_batches_per_layer=2, n_trees=20, max_depth=4) tree_estimator.train( input_fn=get_input_fn_law( law_train_df, num_epochs=NUM_EPOCHS,...
site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb
tensorflow/docs-l10n
apache-2.0
Case Study 2: Credit Default The second case study we consider in this tutorial is predicting an individual's probability of credit default. We will use the Default of Credit Card Clients dataset from the UCI repository. This data was collected from 30,000 Taiwanese credit card holders and contains a binary label indicating whether a holder defaulted on a payment within a time window. The features include marital status, gender, education level, and, for each month from April to September 2005, how far behind the holder was on their existing bill payments. As in the first case study, we again illustrate the use of monotonicity constraints to avoid unfair penalties: if the model were used to determine a person's credit score, it would feel unfair to many if, all else being equal, they were penalized for paying their bills earlier. We therefore apply a monotonicity...
# Load data file. credit_file_name = 'credit_default.csv' credit_file_path = os.path.join(DATA_DIR, credit_file_name) credit_df = pd.read_csv(credit_file_path, delimiter=',') # Define label column name. CREDIT_LABEL = 'default'
site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb
tensorflow/docs-l10n
apache-2.0
Split the data into train/validation/test sets
credit_train_df, credit_val_df, credit_test_df = split_dataset(credit_df)
site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb
tensorflow/docs-l10n
apache-2.0
Visualize the data distribution First, we visualize the distribution of the data. We will plot the mean and standard error of the observed default rate for people with different marital statuses and repayment statuses. Repayment status indicates the number of months a person has paid back their loan (as of April 2005).
def get_agg_data(df, x_col, y_col, bins=11): xbins = pd.cut(df[x_col], bins=bins) data = df[[x_col, y_col]].groupby(xbins).agg(['mean', 'sem']) return data def plot_2d_means_credit(input_df, x_col, y_col, x_label, y_label): plt.rcParams['font.family'] = ['serif'] _, ax = plt.subplots(nrows=1, ncols=1) plt...
site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb
tensorflow/docs-l10n
apache-2.0
Train a calibrated linear model to predict credit default rates Next, we will train a calibrated linear model with TFL to predict whether a person will default on a loan. The two input features will be the person's marital status and the number of months of loan payments the person has made as of April (repayment status). The training label will be whether the person has ever defaulted on a loan. We first train a calibrated linear model without any constraints. Then we train a calibrated linear model with monotonicity constraints and observe the difference in model output and accuracy. Helper functions for configuring the credit default dataset features These helper functions are specific to the credit default case study.
def get_input_fn_credit(input_df, num_epochs, batch_size=None): """Gets TF input_fn for credit default models.""" return tf.compat.v1.estimator.inputs.pandas_input_fn( x=input_df[['MARRIAGE', 'PAY_0']], y=input_df['default'], num_epochs=num_epochs, batch_size=batch_size or len(input_df), ...
site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb
tensorflow/docs-l10n
apache-2.0
Helper functions for visualizing trained model outputs
def plot_predictions_credit(input_df, estimator, x_col, x_label='Repayment Status (April)', y_label='Predicted default probability'): predictions = get_predicted_probabilities( estimator=estimator, in...
site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb
tensorflow/docs-l10n
apache-2.0
Train an unconstrained (non-monotonic) calibrated linear model
nomon_linear_estimator = optimize_learning_rates( train_df=credit_train_df, val_df=credit_val_df, test_df=credit_test_df, monotonicity=0, learning_rates=LEARNING_RATES, batch_size=BATCH_SIZE, num_epochs=NUM_EPOCHS, get_input_fn=get_input_fn_credit, get_feature_columns_and_configs=get...
site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb
tensorflow/docs-l10n
apache-2.0
Train a monotonic calibrated linear model
mon_linear_estimator = optimize_learning_rates( train_df=credit_train_df, val_df=credit_val_df, test_df=credit_test_df, monotonicity=1, learning_rates=LEARNING_RATES, batch_size=BATCH_SIZE, num_epochs=NUM_EPOCHS, get_input_fn=get_input_fn_credit, get_feature_columns_and_configs=get_f...
site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb
tensorflow/docs-l10n
apache-2.0
As shown above, the error from the model is not normally distributed. The error is skewed to the right, similar to the raw data itself. Additionally, the distribution of the error terms is not consistent; it is heteroscedastic. Inspect the Data and Transform The data is skewed to the right. The data is transformed by tak...
plt.hist(actual) plt.show() sqrt_actual = np.sqrt(actual) plt.hist(sqrt_actual) plt.show()
CorrectingForAssumptions.ipynb
Jackie789/JupyterNotebooks
gpl-3.0
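The square-root transform's effect on skewness can be checked numerically. Below is a minimal sketch with synthetic exponential data; the `skewness` helper and the data are illustrative, not taken from the notebook:

```python
import numpy as np

def skewness(x):
    """Sample skewness: the third standardized moment."""
    x = np.asarray(x, dtype=float)
    return float(np.mean(((x - x.mean()) / x.std()) ** 3))

rng = np.random.default_rng(0)
raw = rng.exponential(scale=2.0, size=10_000)  # strongly right-skewed
transformed = np.sqrt(raw)                     # sqrt compresses the right tail

skew_raw = skewness(raw)           # theory: 2 for an exponential
skew_sqrt = skewness(transformed)  # theory: ~0.63 (Rayleigh-like)
```

The transformed sample is still mildly right-skewed, but much less so — the same qualitative improvement the histograms above show.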
That's a little better. Has this helped the multivariate normality? Yes.
# Extract predicted values. predicted = regr.predict(X).ravel() actual = data['Sales'] # Calculate the error, also called the residual. corr_residual = sqrt_actual - predicted plt.hist(corr_residual) plt.title('Residual counts') plt.xlabel('Residual') plt.ylabel('Count') plt.show()
CorrectingForAssumptions.ipynb
Jackie789/JupyterNotebooks
gpl-3.0
Transforming the data into the sqrt of the data lessened the skewness to the right, and allowed the error from the model to be more normally-distributed. Let's see if our transformation helped the problem with heteroscedasticity. Homoscedasticity
plt.scatter(predicted, corr_residual) plt.xlabel('Predicted') plt.ylabel('Residual') plt.axhline(y=0) plt.title('Residual vs. Predicted') plt.show()
CorrectingForAssumptions.ipynb
Jackie789/JupyterNotebooks
gpl-3.0
Import the usual libraries You can load those libraries with the code cell below:
# import matplotlib and numpy # use "inline" instead of "notebook" for non-interactive # use widget for jupyterlab needs ipympl to be installed import sys if 'google.colab' in sys.modules: %pylab --no-import-all notebook else: %pylab --no-import-all widget from s...
jupyter_notebooks/image_registration.ipynb
pycroscopy/pycroscopy
mit
Load an image stack : Please load an image stack. <br> A stack of images is used to reduce noise, but before the images can be added they have to be aligned to compensate for drift and other microscope instabilities. Here you select (with the open_file_dialog parameter) whether an open file dialog appears in the code cell b...
if 'google.colab' in sys.modules: from google.colab import drive drive.mount("/content/drive") drive_directory = 'drive/MyDrive/' else: drive_directory = '.' file_widget = open_file_dialog(drive_directory) file_widget
jupyter_notebooks/image_registration.ipynb
pycroscopy/pycroscopy
mit
Plot Image Stack Either we load the file selected in the widget above, or a file dialog window appears. This is the point at which the notebook can be repeated with a new file: either select a file above again (without running the code cell above) or open a file dialog here. Note that the open file dialog might not appear in...
try: main_dataset.h5_dataset.file.close() except: pass dm3_reader = DM3Reader(file_widget.selected) main_dataset = dm3_reader.read() if main_dataset.data_type.name != 'IMAGE_STACK': print(f"Please load an image stack for this notebook, this is an {main_dataset.data_type}") print(main_dataset) main_dat...
jupyter_notebooks/image_registration.ipynb
pycroscopy/pycroscopy
mit
Complete Registration This takes a while: depending on your computer, between 1 and 10 minutes.
## Do all of registration notebook_tags ={'notebook': __notebook__, 'notebook_version': __notebook_version__} non_rigid_registered, rigid_registered_dataset = px.image.complete_registration(main_dataset) non_rigid_registered.plot() non_rigid_registered
jupyter_notebooks/image_registration.ipynb
pycroscopy/pycroscopy
mit
Check Drift
scale_x = (rigid_registered_dataset.x[1]-rigid_registered_dataset.x[0])*1. drift = rigid_registered_dataset.metadata['drift'] x = np.linspace(0,drift.shape[0]-1,drift.shape[0]) polynom_degree = 2 # 1 is linear fit, 2 is parabolic fit, ... line_fit_x = np.polyfit(x, drift[:,0], polynom_degree) poly_x = np.poly1d(line_f...
jupyter_notebooks/image_registration.ipynb
pycroscopy/pycroscopy
mit
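The drift check above fits a low-degree polynomial to the per-frame drift track with np.polyfit. A self-contained sketch of that step on synthetic drift data (the `frames` and `drift_x` arrays are made up for illustration):

```python
import numpy as np

# Synthetic drift track: a slow quadratic trend, one sample per frame.
frames = np.arange(20, dtype=float)
drift_x = 0.05 * frames**2 + 0.3 * frames + 1.0

polynom_degree = 2  # 1 is a linear fit, 2 is a parabolic fit, ...
line_fit_x = np.polyfit(frames, drift_x, polynom_degree)
poly_x = np.poly1d(line_fit_x)

# Residuals are ~0 here because the data is exactly quadratic.
residual_x = drift_x - poly_x(frames)
```

np.polyfit returns coefficients highest degree first, so `line_fit_x[0]` recovers the quadratic term; in the notebook the fitted polynomial is plotted against the measured drift.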
Appendix Demon Registration Here we use the Diffeomorphic Demons non-rigid registration as provided by SimpleITK. Please cite: * SimpleITK, and * T. Vercauteren, X. Pennec, A. Perchant and N. Ayache, "Diffeomorphic Demons Using ITK's Finite Difference Solver Hierarchy", The Insight Journal, 2007. This non-rigid registrat...
import SimpleITK as sitk def DemonReg(cube, verbose = False): """ Diffeomorphic Demon Non-Rigid Registration Usage: DemReg = DemonReg(cube, verbose = False) Input: cube: stack of image after rigid registration and cropping Output: DemReg: stack of images with non-rigid re...
jupyter_notebooks/image_registration.ipynb
pycroscopy/pycroscopy
mit
2 - Dataset First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables X and Y.
X, Y = load_planar_dataset()
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.
# Visualize the data: plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
You have: - a numpy-array (matrix) X that contains your features (x1, x2) - a numpy-array (vector) Y that contains your labels (red:0, blue:1). Let's first get a better sense of what our data is like. Exercise: How many training examples do you have? In addition, what is the shape of the variables X and Y? Hin...
### START CODE HERE ### (≈ 3 lines of code) shape_X = X.shape shape_Y = Y.shape m = Y.shape[1] # training set size ### END CODE HERE ### print ('The shape of X is: ' + str(shape_X)) print ('The shape of Y is: ' + str(shape_Y)) print ('I have m = %d training examples!' % (m))
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
Expected Output: <table style="width:20%"> <tr> <td>**shape of X**</td> <td> (2, 400) </td> </tr> <tr> <td>**shape of Y**</td> <td>(1, 400) </td> </tr> <tr> <td>**m**</td> <td> 400 </td> </tr> </table> 3 - Simple Logistic Regression Before building a full neural network, le...
# Train the logistic regression classifier clf = sklearn.linear_model.LogisticRegressionCV(); clf.fit(X.T, Y.T);
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
You can now plot the decision boundary of these models. Run the code below.
# Plot the decision boundary for logistic regression plot_decision_boundary(lambda x: clf.predict(x), X, Y) plt.title("Logistic Regression") # Print accuracy LR_predictions = clf.predict(X.T) print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*1...
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
Expected Output: <table style="width:20%"> <tr> <td>**Accuracy**</td> <td> 47% </td> </tr> </table> Interpretation: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now! 4 - Neural Network model Logistic regress...
# GRADED FUNCTION: layer_sizes def layer_sizes(X, Y): """ Arguments: X -- input dataset of shape (input size, number of examples) Y -- labels of shape (output size, number of examples) Returns: n_x -- the size of the input layer n_h -- the size of the hidden layer n_y -- the size o...
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
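For reference, a minimal layer_sizes that reproduces the assessment shapes in the expected output below might look like this — a sketch, assuming the hidden layer size is fixed at 4 for the test case:

```python
import numpy as np

def layer_sizes(X, Y, n_h=4):
    """Input size from X's rows, output size from Y's rows;
    the hidden size is fixed at 4 for the assessment case."""
    n_x = X.shape[0]
    n_y = Y.shape[0]
    return n_x, n_h, n_y

X_assess = np.zeros((5, 3))  # 5 input features, 3 examples
Y_assess = np.zeros((2, 3))  # 2 outputs, 3 examples
n_x, n_h, n_y = layer_sizes(X_assess, Y_assess)  # (5, 4, 2)
```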
Expected Output (these are not the sizes you will use for your network, they are just used to assess the function you've just coded). <table style="width:20%"> <tr> <td>**n_x**</td> <td> 5 </td> </tr> <tr> <td>**n_h**</td> <td> 4 </td> </tr> <tr> <td>**n_y**</td> <td> 2 </td> ...
# GRADED FUNCTION: initialize_parameters def initialize_parameters(n_x, n_h, n_y): """ Argument: n_x -- size of the input layer n_h -- size of the hidden layer n_y -- size of the output layer Returns: params -- python dictionary containing your parameters: W1 -- wei...
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
Expected Output: <table style="width:90%"> <tr> <td>**W1**</td> <td> [[-0.00416758 -0.00056267] [-0.02136196 0.01640271] [-0.01793436 -0.00841747] [ 0.00502881 -0.01245288]] </td> </tr> <tr> <td>**b1**</td> <td> [[ 0.] [ 0.] [ 0.] [ 0.]] </td> </tr> <tr> <td>**W2**</td> <td>...
# GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Argument: X -- input data of size (n_x, m) parameters -- python dictionary containing your parameters (output of initialization function) Returns: A2 -- The sigmoid output of the second activation cache ...
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
Expected Output: <table style="width:55%"> <tr> <td> -0.000499755777742 -0.000496963353232 0.000438187450959 0.500109546852 </td> </tr> </table> Now that you have computed $A^{[2]}$ (in the Python variable "A2"), which contains $a^{2}$ for every example, you can compute the cost function as follows: $$J = - \...
# GRADED FUNCTION: compute_cost def compute_cost(A2, Y, parameters): """ Computes the cross-entropy cost given in equation (13) Arguments: A2 -- The sigmoid output of the second activation, of shape (1, number of examples) Y -- "true" labels vector of shape (1, number of examples) paramete...
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
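The cross-entropy cost can be sketched as a standalone function (illustrative values; the assignment's version also takes a parameters argument that the computation itself does not need):

```python
import numpy as np

def compute_cost(A2, Y):
    """Cross-entropy cost J = -(1/m) * sum(Y*log(A2) + (1-Y)*log(1-A2))."""
    m = Y.shape[1]
    logprobs = Y * np.log(A2) + (1 - Y) * np.log(1 - A2)
    return float(-np.sum(logprobs) / m)

A2 = np.array([[0.5, 0.5, 0.5]])  # an uninformative classifier
Y = np.array([[1, 0, 1]])
cost = compute_cost(A2, Y)  # every term contributes log(2) ~ 0.693
```

An output of 0.5 on every example gives exactly log(2), which is why untrained classifiers on balanced data start near a cost of 0.69.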
Expected Output: <table style="width:20%"> <tr> <td>**cost**</td> <td> 0.692919893776 </td> </tr> </table> Using the cache computed during forward propagation, you can now implement backward propagation. Question: Implement the function backward_propagation(). Instructions: Backpropagation is usually the...
# GRADED FUNCTION: backward_propagation def backward_propagation(parameters, cache, X, Y): """ Implement the backward propagation using the instructions above. Arguments: parameters -- python dictionary containing our parameters cache -- a dictionary containing "Z1", "A1", "Z2" and "A2". ...
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
Expected output: <table style="width:80%"> <tr> <td>**dW1**</td> <td> [[ 0.01018708 -0.00708701] [ 0.00873447 -0.0060768 ] [-0.00530847 0.00369379] [-0.02206365 0.01535126]] </td> </tr> <tr> <td>**db1**</td> <td> [[-0.00069728] [-0.00060606] [ 0.000364 ] [ 0.00151207]] </td> </tr> ...
# GRADED FUNCTION: update_parameters def update_parameters(parameters, grads, learning_rate = 1.2): """ Updates parameters using the gradient descent update rule given above Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradie...
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
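The gradient descent update rule theta <- theta - learning_rate * d_theta, applied to every entry of the parameter dictionary, can be sketched as follows (toy parameter and gradient values for illustration):

```python
import numpy as np

def update_parameters(parameters, grads, learning_rate=1.2):
    """Gradient descent step: theta <- theta - learning_rate * d_theta."""
    return {name: value - learning_rate * grads['d' + name]
            for name, value in parameters.items()}

parameters = {'W1': np.array([[1.0, 2.0]]), 'b1': np.array([[0.5]])}
grads = {'dW1': np.array([[0.1, 0.2]]), 'db1': np.array([[0.5]])}
updated = update_parameters(parameters, grads, learning_rate=1.0)
# updated['W1'] == [[0.9, 1.8]], updated['b1'] == [[0.0]]
```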
Expected Output: <table style="width:80%"> <tr> <td>**W1**</td> <td> [[-0.00643025 0.01936718] [-0.02410458 0.03978052] [-0.01653973 -0.02096177] [ 0.01046864 -0.05990141]]</td> </tr> <tr> <td>**b1**</td> <td> [[ -1.02420756e-06] [ 1.27373948e-05] [ 8.32996807e-07] [ -3.20136836e-06]]<...
# GRADED FUNCTION: nn_model def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False): """ Arguments: X -- dataset of shape (2, number of examples) Y -- labels of shape (1, number of examples) n_h -- size of the hidden layer num_iterations -- Number of iterations in gradient descent loo...
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
Expected Output: <table style="width:90%"> <tr> <td>**W1**</td> <td> [[-4.18494056 5.33220609] [-7.52989382 1.24306181] [-4.1929459 5.32632331] [ 7.52983719 -1.24309422]]</td> </tr> <tr> <td>**b1**</td> <td> [[ 2.32926819] [ 3.79458998] [ 2.33002577] [-3.79468846]]</td> </tr> <tr...
# GRADED FUNCTION: predict def predict(parameters, X): """ Using the learned parameters, predicts a class for each example in X Arguments: parameters -- python dictionary containing your parameters X -- input data of size (n_x, m) Returns predictions -- vector of predictions of o...
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
Expected Output: <table style="width:40%"> <tr> <td>**predictions mean**</td> <td> 0.666666666667 </td> </tr> </table> It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
# Build a model with a n_h-dimensional hidden layer parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True) # Plot the decision boundary plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y) plt.title("Decision Boundary for hidden layer size " + str(4))
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
Expected Output: <table style="width:40%"> <tr> <td>**Cost after iteration 9000**</td> <td> 0.218607 </td> </tr> </table>
# Print accuracy predictions = predict(parameters, X) print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
Expected Output: <table style="width:15%"> <tr> <td>**Accuracy**</td> <td> 90% </td> </tr> </table> Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic ...
# This may take about 2 minutes to run plt.figure(figsize=(16, 32)) hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50] for i, n_h in enumerate(hidden_layer_sizes): plt.subplot(5, 2, i+1) plt.title('Hidden Layer of size %d' % n_h) parameters = nn_model(X, Y, n_h, num_iterations = 5000) plot_decision_boundary(...
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
Interpretation: - The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data. - The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting. - Y...
# Datasets noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets() datasets = {"noisy_circles": noisy_circles, "noisy_moons": noisy_moons, "blobs": blobs, "gaussian_quantiles": gaussian_quantiles} ### START CODE HERE ### (choose your dataset) dat...
course_1/week_3/assignment_1/planar_data_classification_with_one_hidden_layer_v1.ipynb
ImAlexisSaez/deep-learning-specialization-coursera
mit
We want to add these comets to a REBOUND simulation. The first thing to do is set the units, which have to be consistent throughout. Here we have a table in AU and days, so we'll use the Gaussian gravitational constant (AU, days, solar masses).
sim = rebound.Simulation() k = 0.01720209895 # Gaussian constant sim.G = k**2
ipython_examples/HyperbolicOrbits.ipynb
dtamayo/rebound
gpl-3.0
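A quick sanity check on these units: with G = k squared, Kepler's third law should give a period of about one year for a 1 AU orbit around one solar mass. A minimal sketch:

```python
import math

k = 0.01720209895  # Gaussian gravitational constant
G = k ** 2         # units: AU^3 / (solar mass * day^2)

# Kepler's third law: T = 2*pi*sqrt(a^3 / (G*M)).
# A 1 AU orbit around 1 solar mass should take about one year, in days.
a, M = 1.0, 1.0
period_days = 2 * math.pi * math.sqrt(a ** 3 / (G * M))  # ~365.26
```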
We also set the simulation time to the epoch at which the elements are valid:
sim.t = epoch_of_elements
ipython_examples/HyperbolicOrbits.ipynb
dtamayo/rebound
gpl-3.0
We then add the giant planets in our Solar System to the simulation. You could for example query JPL HORIZONS for the states of the planets at each comet's corresponding epoch of observation (see Horizons.ipynb). Here we set up toy masses and orbits for Jupiter & Saturn:
sim.add(m=1.) # Sun sim.add(m=1.e-3, a=5.) # Jupiter sim.add(m=3.e-4, a=10.) # Saturn
ipython_examples/HyperbolicOrbits.ipynb
dtamayo/rebound
gpl-3.0
Let's write a function that takes a comet from the table and adds it to our simulation:
def addOrbit(sim, comet_elem): tracklet_id, e, q, inc, Omega, argperi, t_peri, epoch_of_observation = comet_elem sim.add(primary=sim.particles[0], a = q/(1.-e), e = e, inc = inc*np.pi/180., # have to convert to radians Omega = Omega*np.pi/180., omega...
ipython_examples/HyperbolicOrbits.ipynb
dtamayo/rebound
gpl-3.0
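The conversions inside addOrbit — semi-major axis a = q/(1-e) from the tabulated perihelion distance, and degrees to radians for the angles — can be sketched in isolation (hypothetical helper name and illustrative values, not from the notebook):

```python
import math

def elements_from_table(e, q, inc_deg, Omega_deg, argperi_deg):
    """Convert a table row's perihelion distance q and eccentricity e to a
    semi-major axis a = q / (1 - e), and its angles from degrees to radians.
    For hyperbolic orbits (e > 1), a comes out negative, as REBOUND expects."""
    deg = math.pi / 180.0
    a = q / (1.0 - e)
    return a, inc_deg * deg, Omega_deg * deg, argperi_deg * deg

a, inc, Omega, omega = elements_from_table(
    e=1.2, q=2.0, inc_deg=90.0, Omega_deg=180.0, argperi_deg=0.0)
# a == -10 AU: a hyperbolic orbit
```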
By default, REBOUND adds and outputs particles in Jacobi orbital elements. Typically orbital elements for comets are heliocentric. Mixing the two will give you relative errors in elements, positions etc. of order the mass ratio of Jupiter to the Sun ($\sim 0.001$) which is why we pass the additional primary=sim.parti...
addOrbit(sim, comets[0]) %matplotlib inline fig = rebound.OrbitPlot(sim, trails=True)
ipython_examples/HyperbolicOrbits.ipynb
dtamayo/rebound
gpl-3.0
Now we just integrate until whatever final time we’re interested in. Here it's the epoch at which we observe the comet, which is the last column in our table:
tfinal = comets[0][-1] sim.integrate(tfinal) fig = rebound.OrbitPlot(sim, trails=True)
ipython_examples/HyperbolicOrbits.ipynb
dtamayo/rebound
gpl-3.0
REBOUND automatically figures out whether you want to integrate forward or backward in time. For fun, let's add all the comets to a single simulation:
sim = rebound.Simulation() sim.G = k**2 sim.t = epoch_of_elements sim.add(m=1.) # Sun sim.add(m=1.e-3, a=5.) # Jupiter sim.add(m=3.e-4, a=10.) # Saturn for comet in comets: addOrbit(sim, comet) fig = rebound.OrbitPlot(sim, trails=True)
ipython_examples/HyperbolicOrbits.ipynb
dtamayo/rebound
gpl-3.0
1. Data preparation You are going to read the graph from an adjacency matrix saved in earlier exercises.
call_adjmatrix = pd.read_csv('./call.adjmatrix', index_col=0) call_graph = nx.from_numpy_matrix(call_adjmatrix.values) # Display call graph object. plt.figure(figsize=(10,10)) plt.axis('off') pos = graphviz_layout(call_graph, prog='dot') nx.draw_networkx(call_graph, pos=pos, node_color='#11DD11', with_labels...
module_4/M4_NB3_NetworkClustering.ipynb
getsmarter/bda
mit
2. Hierarchical clustering This notebook makes use of a hierarchical clustering algorithm, as implemented in Scipy. The following example uses the average distance measure. Since the graph is weighted, you can also use the single linkage inter-cluster distance measure (see exercises).
def create_hc(G, linkage='average'): """ Creates hierarchical cluster of graph G from distance matrix """ path_length=nx.all_pairs_shortest_path_length(G) distances=np.zeros((G.order(),G.order())) for u,p in dict(path_length).items(): for v,d in p.items(): distanc...
module_4/M4_NB3_NetworkClustering.ipynb
getsmarter/bda
mit
Below is a demonstration of hierarchical clustering when applied to the call graph.
# Perform hierarchical clustering using 'average' linkage. Z = create_hc(call_graph, linkage='average')
module_4/M4_NB3_NetworkClustering.ipynb
getsmarter/bda
mit
The dendrogram corresponding to the partitioned graph is obtained as follows:
hierarchy.dendrogram(Z) plt.show()
module_4/M4_NB3_NetworkClustering.ipynb
getsmarter/bda
mit
You will notice that the full dendrogram is unwieldy and difficult to read. Fortunately, the dendrogram method has a feature that allows one to show only the last $p$ merged clusters, where $p$ is the desired number of merged clusters to display.
plt.title('Hierarchical Clustering Dendrogram (pruned)') plt.xlabel('sample index (or leaf size)') plt.ylabel('distance') hierarchy.dendrogram( Z, truncate_mode='lastp', # show only the last p merged clusters p=10, # show only the last p merged clusters show_leaf_counts=True, # numbe...
module_4/M4_NB3_NetworkClustering.ipynb
getsmarter/bda
mit
This dendrogram can help explain what happens as a result of the agglomerative method of hierarchical clustering. Starting at the bottom-most level, each node is assigned its own cluster. The closest pair of nodes (according to a distance function) are then merged into a new cluster. The distance matrix is recomputed, ...
fancy_dendrogram( Z, truncate_mode='lastp', p=12, leaf_rotation=90., leaf_font_size=12.0, show_contracted=False, annotate_above=10, max_d=3.5) plt.show() opt_clust = 3 opt_clust
module_4/M4_NB3_NetworkClustering.ipynb
getsmarter/bda
mit
You can now assign the data to these "opt_clust" clusters.
cluster_assignments = get_cluster_membership(Z, maxclust=opt_clust)
module_4/M4_NB3_NetworkClustering.ipynb
getsmarter/bda
mit
The partitioned graph, corresponding to the dendrogram above, can now be visualized.
clust = list(set(cluster_assignments.values())) clust cluster_centers = sorted(set(cluster_assignments.values())) freq = [list(cluster_assignments.values()).count(x) for x in cluster_centers] # Create a DataFrame object containing the list of cluster centers and the number of objects in each cluster df = pd.DataFrame({'clust...
module_4/M4_NB3_NetworkClustering.ipynb
getsmarter/bda
mit
<br> <div class="alert alert-info"> <b>Exercise 1 Start.</b> </div> Instructions How many clusters are obtained after the final step of a generic agglomerative clustering algorithm (before post-processing)? Note: Post-processing involves determining the optimal clusters for the problem at hand. Based on your answe...
# Your code here.
module_4/M4_NB3_NetworkClustering.ipynb
getsmarter/bda
mit
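As a sketch of the generic algorithm behind Exercise 1 (not the notebook's solution): starting from one cluster per node, each step merges the closest pair, so n nodes take n - 1 merges and the final step always leaves a single cluster. A toy single-linkage pass on 1-D points:

```python
def agglomerate(points):
    """Toy single-linkage agglomeration on 1-D points: repeatedly merge
    the closest pair of clusters until only one cluster remains."""
    clusters = [[p] for p in points]
    merges = 0
    while len(clusters) > 1:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: min(abs(a - b)
                               for a in clusters[ij[0]]
                               for b in clusters[ij[1]]))
        clusters[i] += clusters.pop(j)  # j > i, so index i stays valid
        merges += 1
    return clusters, merges

clusters, merges = agglomerate([0.0, 0.1, 5.0, 5.1, 9.0])
# 5 points -> 4 merges -> exactly 1 cluster left
```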
<br> <div class="alert alert-info"> <b>Exercise 2 [Advanced] End.</b> </div> Exercise complete: This is a good time to "Save and Checkpoint". 3. Community detection Community detection is an important component in the analysis of large and complex networks. Identifying these subgraph structures helps in understand...
# Create the spectral partition using the spectral clustering function from Scikit-Learn. spectral_partition = spc(call_adjmatrix.values, 9, assign_labels='discretize') pos = graphviz_layout(call_graph, prog='dot') nx.draw_networkx_nodes(call_graph, pos, cmap=plt.cm.RdYlBu, node_color=spectral_partition) nx.draw_...
module_4/M4_NB3_NetworkClustering.ipynb
getsmarter/bda
mit
<br> <div class="alert alert-info"> <b>Exercise 3 [Advanced] Start.</b> </div> Instructions Compute the size of each of the clusters obtained using the spectral graph partitioning method.
# Your code here.
module_4/M4_NB3_NetworkClustering.ipynb
getsmarter/bda
mit
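One way to approach Exercise 3 (a sketch, not the official solution): the spectral partition is just one label per node, so the cluster sizes are a frequency count over those labels. With hypothetical labels standing in for the spc output:

```python
from collections import Counter

# Hypothetical cluster labels, one per node, as produced by spc(...) above.
spectral_partition = [0, 2, 1, 0, 0, 1, 2, 2, 2]

cluster_sizes = Counter(spectral_partition)
sizes_in_label_order = [cluster_sizes[c] for c in sorted(cluster_sizes)]
# cluster 0 has 3 nodes, cluster 1 has 2, cluster 2 has 4
```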
Household class Below is a rough draft of the household class. It has only one component, the constructor: the class constructor "initializes" or "creates" the household when we call Household(). This is in the __init__ method.
class Household(object): """ Household class, which encapsulates the entire behavior of a household. """ def __init__(self, model, household_id, adopted=False, threshold=1): """ Constructor for HH class. By default, * not adopted * threshold = 1 ...
samples/cscs530-w2015-midterm-sample1.ipynb
mjbommar/cscs-530-w2016
bsd-2-clause
Model class Below, we will define our model class. This can be broken up as follows: - constructor: class constructor, which "initializes" or "creates" the model when we call Model(). This is in the __init__ method. - setup_network: sets up graph - setup_households: sets up households - get_neighborhood: defines a function...
class Model(object): """ Model class, which encapsulates the entire behavior of a single "run" in network model. """ def __init__(self, network, alpha, HH_adopted, HH_not_adopted): """ Class constructor. """ # Set our model parameters self.network = network ...
samples/cscs530-w2015-midterm-sample1.ipynb
mjbommar/cscs-530-w2016
bsd-2-clause
Wrapper with parameter sweep Below is the code that wraps the model. It does the following: - Loops through all villages we wish to examine - Pulls network data from a CSV and puts it in the appropriate format - Loops through all possible pairs of nodes within each village - Sweeps through alpha and number...
## cycle through villages: ## (need to create village list where each item points to a different csv file) num_samples = 2000 for fn in village_list: village = np.genfromtxt(fn, delimiter=",") network = from_numpy_matrix(village) for HH_adopted in itertools.combinations(nx.nodes(network),2): HH_no...
samples/cscs530-w2015-midterm-sample1.ipynb
mjbommar/cscs-530-w2016
bsd-2-clause
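The sweep structure described above — every unordered pair of nodes crossed with each alpha value — can be sketched with itertools.combinations (toy node set and alpha grid standing in for the village network and the real sweep):

```python
import itertools

# Toy stand-ins for nx.nodes(network) and the swept alpha values.
nodes = range(5)
alphas = [0.1, 0.5, 1.0]

runs = [(HH_adopted, alpha)
        for HH_adopted in itertools.combinations(nodes, 2)
        for alpha in alphas]
# C(5, 2) = 10 seed pairs x 3 alphas = 30 runs
```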
Suppose we want to study the environmental temperature for plankton drifting around a peninsula. We have a dataset with surface ocean velocities and the corresponding sea surface temperature stored in netcdf files in the folder "Peninsula_data". Besides the velocity fields, we load the temperature field using extra_fie...
# Velocity and temperature fields fieldset = FieldSet.from_parcels("Peninsula_data/peninsula", extra_fields={'T': 'T'}, allow_time_extrapolation=True) # Particle locations and initial time npart = 10 # number of particles to be released lon = 3e3 * np.ones(npart) lat = np.linspace(3e3 , 45e3, npart, dtype=np.float32)...
parcels/examples/tutorial_sampling.ipynb
OceanPARCELS/parcels
mit
To sample the temperature field, we need to create a new class of particles where temperature is a Variable. As an argument for the Variable class, we need to provide the initial values for the particles. The easiest option is to access fieldset.T, but this option has some drawbacks.
class SampleParticle(JITParticle): # Define a new particle class temperature = Variable('temperature', initial=fieldset.T) # Variable 'temperature' initialised by sampling the temperature pset = ParticleSet(fieldset=fieldset, pclass=SampleParticle, lon=lon, lat=lat, time=time)
parcels/examples/tutorial_sampling.ipynb
OceanPARCELS/parcels
mit
Using fieldset.T leads to the WARNING displayed above because Variable accesses the fieldset in the slower SciPy mode. Another problem can occur when using the repeatdt argument instead of time: <a id='repeatdt_error'></a>
repeatdt = delta(hours=3) pset = ParticleSet(fieldset=fieldset, pclass=SampleParticle, lon=lon, lat=lat, repeatdt=repeatdt)
parcels/examples/tutorial_sampling.ipynb
OceanPARCELS/parcels
mit
Since the initial time is not defined, the Variable class does not know at what time to access the temperature field. The solution to this initialisation problem is to leave the initial value zero and sample the initial condition in JIT mode with the sampling Kernel:
class SampleParticleInitZero(JITParticle): # Define a new particle class temperature = Variable('temperature', initial=0) # Variable 'temperature' initially zero pset = ParticleSet(fieldset=fieldset, pclass=SampleParticleInitZero, lon=lon, lat=lat, time=time) def SampleT(particle, fieldset, time): ...
parcels/examples/tutorial_sampling.ipynb
OceanPARCELS/parcels
mit
To sample the initial values, we can execute the sampling kernel over the entire particleset with dt = 0, so that time does not increase.
pset.execute(sample_kernel, dt=0) # by only executing the sample kernel we record the initial temperature of the particles output_file = pset.ParticleFile(name="InitZero.nc", outputdt=delta(hours=1)) pset.execute(AdvectionRK4 + sample_kernel, runtime=delta(hours=30), dt=delta(minutes=5), output_file=outpu...
parcels/examples/tutorial_sampling.ipynb
OceanPARCELS/parcels
mit
The particle dataset now contains the particle trajectories and the corresponding environmental temperature
Particle_data = xr.open_dataset("InitZero.nc") plt.figure() ax = plt.axes() ax.set_ylabel('Y') ax.set_xlabel('X') ax.set_ylim(1000, 49000) ax.set_xlim(1000, 99000) ax.plot(Particle_data.lon.transpose(), Particle_data.lat.transpose(), c='k', zorder=1) T_scatter = ax.scatter(Particle_data.lon, Particle_data.lat, c=Parti...
parcels/examples/tutorial_sampling.ipynb
OceanPARCELS/parcels
mit
Sampling initial values In some simulations only the particles' initial value within the field is of interest: the variable does not need to be known along the entire trajectory. To reduce computation we can specify the to_write argument to the temperature Variable. This argument can have three values: True, False or 'onc...
class SampleParticleOnce(JITParticle): # Define a new particle class temperature = Variable('temperature', initial=0, to_write='once') # Variable 'temperature' pset = ParticleSet(fieldset=fieldset, pclass=SampleParticleOnce, lon=lon, lat=lat, time=time) pset.execute(sample_kernel, dt=0) # by only exe...
parcels/examples/tutorial_sampling.ipynb
OceanPARCELS/parcels
mit
Since all the particles are released at the same x-position and the temperature field is invariant in the y-direction, all particles have an initial temperature of 0.4$^\circ$C
Particle_data = xr.open_dataset("WriteOnce.nc") plt.figure() ax = plt.axes() ax.set_ylabel('Y') ax.set_xlabel('X') ax.set_ylim(1000, 49000) ax.set_xlim(1000, 99000) ax.plot(Particle_data.lon.transpose(), Particle_data.lat.transpose(), c='k', zorder=1) T_scatter = ax.scatter(Particle_data.lon, Particle_data.lat, ...
parcels/examples/tutorial_sampling.ipynb
OceanPARCELS/parcels
mit
Sampling with repeatdt Some experiments require large sets of particles to be released repeatedly at the same locations. The particleset object has the repeatdt option for this, but when you want to sample the initial values this introduces some problems, as we have seen here. For more advanced control over the repeate...
outputdt = delta(hours=1).total_seconds() # write the particle data every hour repeatdt = delta(hours=6).total_seconds() # release each set of particles six hours later runtime = delta(hours=24).total_seconds() pset = ParticleSet(fieldset=fieldset, pclass=SampleParticleInitZero, lon=[], lat=[], time=[]) # Using Sampl...
parcels/examples/tutorial_sampling.ipynb
OceanPARCELS/parcels
mit
In each iteration of the loop, spanning six hours, we have added ten particles.
Particle_data = xr.open_dataset("RepeatLoop.nc") print(Particle_data.time[:,0].values / np.timedelta64(1, 'h')) # The initial hour at which each particle is released assert np.allclose(Particle_data.time[:,0].values / np.timedelta64(1, 'h'), [int(k/10)*6 for k in range(40)])
parcels/examples/tutorial_sampling.ipynb
OceanPARCELS/parcels
mit
Let's check if the initial temperatures were sampled correctly for all particles
print(Particle_data.temperature[:,0].values) assert np.allclose(Particle_data.temperature[:,0].values, Particle_data.temperature[:,0].values[0])
parcels/examples/tutorial_sampling.ipynb
OceanPARCELS/parcels
mit
And see if the sampling of the temperature field is done correctly along the trajectories
Release0 = Particle_data.where(Particle_data.time[:,0]==np.timedelta64(0, 's')) # the particles released at t = 0 plt.figure() ax = plt.axes() ax.set_ylabel('Y') ax.set_xlabel('X') ax.set_ylim(1000, 49000) ax.set_xlim(1000, 99000) ax.plot(Release0.lon.transpose(), Release0.lat.transpose(), c='k', zorder=1) T_scatter =...
parcels/examples/tutorial_sampling.ipynb
OceanPARCELS/parcels
mit
Compute degree-days First we compute heating degree-days for different base temperatures. More information on the computation of degree-days can be found in this demo.
%matplotlib inline # resample weather data to daily values and compute degree-days dfw = dfw.resample('D').mean() dfw_HDD = og.library.weather.compute_degree_days(ts=dfw['temperature'], heating_base_temperatures=range(8, 18, 2), ...
notebooks/Multi-variable Linear Regression Demo.ipynb
opengridcc/opengrid
apache-2.0
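Independent of the opengrid helper used above, the underlying degree-day arithmetic can be sketched in plain Python: for a heating base temperature, each day contributes the shortfall of the daily mean temperature below that base (never negative). The daily temperatures below are made-up values for illustration.

```python
# Heating degree-days: per day, the shortfall of the daily mean
# temperature below the base temperature, floored at zero.
def heating_degree_days(daily_means, base_temp):
    return sum(max(0.0, base_temp - t) for t in daily_means)

# Hypothetical week of daily mean temperatures in degrees Celsius
daily_means = [4.0, 6.5, 10.0, 12.0, 15.0, 18.0, 7.5]

# Same range of base temperatures as in the cell above
for base in range(8, 18, 2):
    print(base, heating_degree_days(daily_means, base))
```

Higher base temperatures accumulate more degree-days, which is why the regression below is run against several candidate bases.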
Create a monthly model for the gas consumption
# resample to monthly data and plot df_month = df_day.resample('MS').sum() # create the model mvlr = og.MultiVarLinReg(df_month, endog='313b') print(mvlr.fit.summary()) mvlr.plot()
notebooks/Multi-variable Linear Regression Demo.ipynb
opengridcc/opengrid
apache-2.0
<br>
# Import Titanic data (local CSV) titanic = h2o.import_file("kaggle_titanic.csv") titanic.head(5) # Convert 'Survived' and 'Pclass' to categorical values titanic['Survived'] = titanic['Survived'].asfactor() titanic['Pclass'] = titanic['Pclass'].asfactor() # Define features (or predictors) manually features = ['Pclass...
introduction_to_machine_learning/py_04b_classification_ensembles.ipynb
woobe/h2o_tutorials
mit
<br> Define Search Criteria for Random Grid Search
# define the criteria for random grid search search_criteria = {'strategy': "RandomDiscrete", 'max_models': 9, 'seed': 1234}
introduction_to_machine_learning/py_04b_classification_ensembles.ipynb
woobe/h2o_tutorials
mit
<br> Step 1: Build GBM Models using Random Grid Search and Extract the Best Model
# define the range of hyper-parameters for GBM grid search # 27 combinations in total hyper_params = {'sample_rate': [0.7, 0.8, 0.9], 'col_sample_rate': [0.7, 0.8, 0.9], 'max_depth': [3, 5, 7]} # Set up GBM grid search # Add a seed for reproducibility gbm_rand_grid = H2OGridSearch( ...
introduction_to_machine_learning/py_04b_classification_ensembles.ipynb
woobe/h2o_tutorials
mit
<br> Step 2: Build DRF Models using Random Grid Search and Extract the Best Model
# define the range of hyper-parameters for DRF grid search # 27 combinations in total hyper_params = {'sample_rate': [0.5, 0.6, 0.7], 'col_sample_rate_per_tree': [0.7, 0.8, 0.9], 'max_depth': [3, 5, 7]} # Set up DRF grid search # Add a seed for reproducibility drf_rand_grid = H2OGridSea...
introduction_to_machine_learning/py_04b_classification_ensembles.ipynb
woobe/h2o_tutorials
mit
<br> Model Stacking
# Define a list of models to be stacked # i.e. best model from each grid all_ids = [best_gbm_model_id, best_drf_model_id] # Set up Stacked Ensemble ensemble = H2OStackedEnsembleEstimator(model_id = "my_ensemble", base_models = all_ids) # use .train to start model stacking # GLM ...
introduction_to_machine_learning/py_04b_classification_ensembles.ipynb
woobe/h2o_tutorials
mit
<br> Comparison of Model Performance on Test Data
print('Best GBM model from Grid (AUC) : ', best_gbm_from_rand_grid.model_performance(titanic_test).auc()) print('Best DRF model from Grid (AUC) : ', best_drf_from_rand_grid.model_performance(titanic_test).auc()) print('Stacked Ensembles (AUC) : ', ensemble.model_performance(titanic_test).auc())
introduction_to_machine_learning/py_04b_classification_ensembles.ipynb
woobe/h2o_tutorials
mit
NOTE: You may ignore specific incompatibility errors and warnings. These components and issues do not impact your ability to complete the lab. Download the .whl file for tensorflow-transform. We will pass this file to the Beam pipeline options so it is installed on the Dataflow workers.
!pip download tensorflow-transform==0.15.0 --no-deps
notebooks/feature_engineering/solutions/5_tftransform_taxifare.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
<b>Restart the kernel</b> (click on the reload button above).
%%bash pip freeze | grep -e 'flow\|beam' import shutil import tensorflow as tf import tensorflow_transform as tft print(tf.__version__) import os PROJECT = !gcloud config get-value project PROJECT = PROJECT[0] BUCKET = PROJECT REGION = "us-central1" os.environ["PROJECT"] = PROJECT os.environ["BUCKET"] = BUCKET os...
notebooks/feature_engineering/solutions/5_tftransform_taxifare.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Input source: BigQuery Get data from BigQuery but defer the majority of filtering etc. to Beam. Note that the dayofweek column is now strings.
from google.cloud import bigquery def create_query(phase, EVERY_N): """Creates a query with the proper splits. Args: phase: int, 1=train, 2=valid. EVERY_N: int, take an example EVERY_N rows. Returns: Query string with the proper splits. """ base_query = """ WITH dayna...
notebooks/feature_engineering/solutions/5_tftransform_taxifare.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Let's pull this query down into a Pandas DataFrame and take a look at some of the statistics.
df_valid = bigquery.Client().query(query).to_dataframe() display(df_valid.head()) df_valid.describe()
notebooks/feature_engineering/solutions/5_tftransform_taxifare.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Create ML dataset using tf.transform and Dataflow Let's use Cloud Dataflow to read in the BigQuery data and write it out as TFRecord files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried o...
import datetime import apache_beam as beam import tensorflow as tf import tensorflow_metadata as tfmd import tensorflow_transform as tft from tensorflow_transform.beam import impl as beam_impl def is_valid(inputs): """Check to make sure the inputs are valid. Args: inputs: dict, dictionary of TableRo...
notebooks/feature_engineering/solutions/5_tftransform_taxifare.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
This will take 10-15 minutes. You cannot go on in this lab until your Dataflow job has successfully completed.
%%bash # ls preproc_tft gsutil ls gs://${BUCKET}/taxifare/preproc_tft/
notebooks/feature_engineering/solutions/5_tftransform_taxifare.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Train off preprocessed data Now that we have our data ready and verified it is in the correct location we can train our taxifare model locally.
%%bash rm -r ./taxi_trained export PYTHONPATH=${PYTHONPATH}:$PWD python3 -m tft_trainer.task \ --train_data_path="gs://${BUCKET}/taxifare/preproc_tft/train*" \ --eval_data_path="gs://${BUCKET}/taxifare/preproc_tft/eval*" \ --output_dir=./taxi_trained \ !ls $PWD/taxi_trained/export/exporter
notebooks/feature_engineering/solutions/5_tftransform_taxifare.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Now let's create fake data in JSON format and use it to serve a prediction with gcloud ai-platform local predict
%%writefile /tmp/test.json {"dayofweek":0, "hourofday":17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403, "passengers": 2.0} %%bash sudo find "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine" -name '*.pyc' -delete %%bash model_dir=$(ls $PWD/taxi_...
notebooks/feature_engineering/solutions/5_tftransform_taxifare.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Remarks The crucial step (<i>cf.</i> <b>Quick Sort</b>) that determines whether we have best case or worst case performance is the choice of the pivot – if we are really lucky we will get a value that cuts down the list the algorithm needs to search very substantially at each step.<br/><br/> The algorithm is divide-and...
def basicStringSearch(searchString, target): searchIndex = 0 lenT = len(target) lenS = len(searchString) while searchIndex + lenT <= lenS: targetIndex = 0 while targetIndex < lenT and target[targetIndex] == searchString[ targetIndex + searchIndex]: target...
Notebook/M269 Unit 4 Notes -- Search.ipynb
BoasWhip/Black
mit
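The halving behaviour described in the remarks above — each comparison cutting down the portion of the list still to be searched — is exactly what a binary search over a sorted list does. A minimal sketch (not code from the unit):

```python
# Binary search on a sorted list: each comparison halves the remaining
# search range, giving O(log n) comparisons in the worst case.
def binary_search(sorted_list, item):
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2  # the "pivot" position
        if sorted_list[mid] == item:
            return mid
        elif sorted_list[mid] < item:
            low = mid + 1   # discard the lower half
        else:
            high = mid - 1  # discard the upper half
    return -1  # not found

print(binary_search([17, 26, 31, 54, 77, 93], 54))
```

Because the midpoint is always chosen as the pivot, the best-case/worst-case gap that plagues Quick Sort's pivot choice does not arise here: the range shrinks by half on every iteration regardless of the data.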
Remarks It becomes immediately apparent when implemented that this algorithm would consist of two nested loops, leading to complexity $O(mn) > O(m^2)$.<br/><br/> We know that if the character in $S$ following the failed comparison with $T$ is not in $T$ then there is no need to slide along one place to do another comparis...
def buildShiftTable(target, alphabet): shiftTable = {} for character in alphabet: shiftTable[character] = len(target) + 1 for i in range(len(target)): char = target[i] shift = len(target) - i shiftTable[char] = shift return shiftTable def quickSearch (searchString, t...
Notebook/M269 Unit 4 Notes -- Search.ipynb
BoasWhip/Black
mit
Tests
theAlphabet = {'G', 'A', 'C', 'T'} stringToSearch = 'ATGAATACCCACCTTACAGAAACCTGGGAAAAGGCAATAAATATTATAAAAGGTGAACTTACAGAAGTAA' for thetarget in ['ACAG', 'AAGTAA', 'CCCC']: print(quickSearch(stringToSearch, thetarget, theAlphabet))
Notebook/M269 Unit 4 Notes -- Search.ipynb
BoasWhip/Black
mit
Remarks The basic brute-force algorithm we wrote first will work fine with relatively short search strings but, as with all algorithms, inputs of huge size may overwhelm it. For example, DNA strings can be billions of bases long, so algorithmic efficiency can be vital. We noted already that the complexity of the basic ...
prefixTable = [0, 1, 0, 0, 0, 1, 2, 3, 4, 0, 0, 0, 1, 2]
Notebook/M269 Unit 4 Notes -- Search.ipynb
BoasWhip/Black
mit
Code
# Helper function for kmpSearch() def buildPrefixTable(target): #The first line of code just builds a list that has len(target) #items all of which are given the default value 0 prefixTable = [0] * len(target) q = 0 for p in range(1, len(target)): while q > 0 and target[q] != target...
Notebook/M269 Unit 4 Notes -- Search.ipynb
BoasWhip/Black
mit
Tests
stringToSearch = 'ATGAATACCCACCTTACAGAAACCTGGGAAAAGGCAATAAATATTATAAAAGGTGAACTTACAGAAGTAA' for thetarget in ['ACAG', 'AAGTAA', 'CCCC']: print(kmpSearch(stringToSearch, thetarget))
Notebook/M269 Unit 4 Notes -- Search.ipynb
BoasWhip/Black
mit
Remarks What about the complexity of the KMP algorithm? Computing the prefix table takes significant effort but in fact there is an efficient algorithm for doing it. Overall, the KMP algorithm has complexity $O(m + n)$. Since $n$ is usually enormously larger than $m$ (think of searching a DNA string of billions of base...
set_of_integers = [54, 26, 93, 17, 77, 31] hash_function = lambda x: [y % 11 for y in x] hash_vals = hash_function(set_of_integers) hash_vals
Notebook/M269 Unit 4 Notes -- Search.ipynb
BoasWhip/Black
mit
Once the hash values have been computed, we can insert each item into the hash table at the designated position: <img src="http://interactivepython.org/courselib/static/pythonds/_images/hashtable2.png"> Now when we want to search for an item, we simply use the hash function to compute the slot name for the item and the...
word = 4365554601 word = str(word) step = 2 slots = 11 folds = [int(word[n: n+2]) for n in range(0, len(word), step)] print(folds) print(sum(folds)) print(sum(folds)%slots)
Notebook/M269 Unit 4 Notes -- Search.ipynb
BoasWhip/Black
mit
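The insert-then-search cycle described above can be sketched with the remainder method on the same six integers. This is a minimal sketch that assumes no collisions (no two items hash to the same slot), which holds for this particular set.

```python
# Minimal remainder-method hash table with 11 slots, assuming no
# collisions: each item goes into slot item % 11, and a search simply
# recomputes the same slot and compares.
slots = 11
table = [None] * slots

for item in [54, 26, 93, 17, 77, 31]:
    table[item % slots] = item

def contains(item):
    return table[item % slots] == item

print(table)
print(contains(93), contains(20))
```

Both insertion and lookup cost a single hash computation plus one slot access, which is the $O(1)$ behaviour that makes hashing attractive compared with the searching algorithms above.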
Another numerical technique for constructing a hash function is called the <b>mid-square method</b>. We first square the item, and then extract <i>some portion</i> of the resulting digits. For example, if the item were $44$, we would first compute $44^2=1,936$. By extracting the middle two digits, $93$, and performing ...
set_of_integers = [54, 26, 93, 17, 77, 31] hash_function = lambda x: [int(str(y**2)[1:-1])%11 for y in x] hash_vals = hash_function(set_of_integers) hash_vals
Notebook/M269 Unit 4 Notes -- Search.ipynb
BoasWhip/Black
mit
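The worked example from the text ($44^2 = 1{,}936$, middle digits $93$, then the remainder step) can be checked directly. This simple string-slicing version assumes the square has at least three digits.

```python
# Mid-square method: square the item, strip the first and last digit
# to keep the middle portion, then apply the remainder step.
def mid_square(item, slots=11):
    squared = str(item ** 2)
    middle = int(squared[1:-1])
    return middle % slots

print(mid_square(44))  # 44**2 = 1936 -> "93" -> 93 % 11
```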
We can also create hash functions for character-based items such as strings. The word “cat” can be thought of as a sequence of ordinal values. We can sum these Unicode values and then take the remainder from division by $11$:
word = 'cat' sum([ord(l) for l in word]) % 11
Notebook/M269 Unit 4 Notes -- Search.ipynb
BoasWhip/Black
mit
To avoid collisions between anagrams, we could apply positional weights:
sum([(ord(word[x]) * (x + 1)) for x in range(len(word))]) % 11
Notebook/M269 Unit 4 Notes -- Search.ipynb
BoasWhip/Black
mit
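The effect of the positional weights can be seen by comparing "cat" with its anagram "act" — a small check, not from the original notes:

```python
# Unweighted ordinal sum: anagrams collide, because addition is
# order-insensitive.
def plain_hash(word, slots=11):
    return sum(ord(c) for c in word) % slots

# Positionally weighted sum: character order now matters.
def weighted_hash(word, slots=11):
    return sum(ord(c) * (i + 1) for i, c in enumerate(word)) % slots

print(plain_hash('cat'), plain_hash('act'))        # same slot
print(weighted_hash('cat'), weighted_hash('act'))  # different slots
```

Weighting does not eliminate collisions in general — distinct words can still land in the same slot — but it removes the systematic collisions that anagrams would otherwise cause.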
You may be able to think of a number of additional ways to compute hash values for items in a collection. The important thing to remember is that the hash function has to be efficient so that it does not become the dominant part of the storage and search process. If the hash function is too complex, then it becomes mor...
set_of_integers = [123456, 431941, 789012, 60375] print(set_of_integers) set_of_integers = [((int(str(x)[0:2]) + int(str(x)[2:4]) + int(str(x)[4:])) % 80) -1 for x in set_of_integers] print(set_of_integers)
Notebook/M269 Unit 4 Notes -- Search.ipynb
BoasWhip/Black
mit
Using default legends
from cartoframes.viz import default_legend Map([ Layer('countries', legends=default_legend('Countries')), Layer('global_power_plants', legends=default_legend('Global Power Plants')), Layer('world_rivers', legends=default_legend('World Rivers')) ])
docs/examples/data_visualization/layers/add_multiple_layers.ipynb
CartoDB/cartoframes
bsd-3-clause
Adding a Layer Selector
from cartoframes.viz import default_legend Map([ Layer('countries', title='Countries', legends=default_legend()), Layer('global_power_plants', title='Global Power Plants', legends=default_legend()), Layer('world_rivers', title='World Rivers', legends=default_legend()) ], layer_selector=True)
docs/examples/data_visualization/layers/add_multiple_layers.ipynb
CartoDB/cartoframes
bsd-3-clause