| column | type | min length | max length |
|---|---|---|---|
| markdown | string | 0 | 1.02M |
| code | string | 0 | 832k |
| output | string | 0 | 1.02M |
| license | string | 3 | 36 |
| path | string | 6 | 265 |
| repo_name | string | 6 | 127 |
Performance of homemade model
plt.figure(figsize=(12,4))
plt.subplot(1,3,1)
plt.scatter(xTest[:,0], xTest[:,1], c=labelEst0, cmap=pltcolors.ListedColormap(testColors), marker='x', alpha=0.2);
plt.xlabel('x0')
plt.ylabel('x1')
plt.grid()
plt.title('Estimated')
cb = plt.colorbar()
loc = np.arange(0,1,1./len(testColors))
cb.set_ticks(loc)
cb.set_ticklabels([0,1]);
plt.subplot(1,3,2)
plt.hist(labelEst0, 10, density=True, alpha=0.5)
plt.title('Bernoulli parameter =' + str(np.mean(labelEst0)))
plt.subplot(1,3,3)
plt.scatter(xTest[:,0], xTest[:,1], c=labelTest, cmap=pltcolors.ListedColormap(colors), marker='x', alpha=0.1);
plt.xlabel('x0')
plt.ylabel('x1')
plt.grid()
plt.title('Generator')
cb = plt.colorbar()
loc = np.arange(0,1,1./len(colors))
cb.set_ticks(loc)
cb.set_ticklabels([0,1]);
accuracy0 = np.sum(labelTest == labelEst0)/N
print('Accuracy =', accuracy0)
Accuracy = 0.9265
MIT
classification/ClassificationContinuous2Features-KNN.ipynb
tonio73/data-science
Precision $p(y = 1 \mid \hat{y} = 1)$
print('Precision =', np.sum(labelTest[labelEst0 == 1])/np.sum(labelEst0))
Precision = 0.9505783385909569
MIT
classification/ClassificationContinuous2Features-KNN.ipynb
tonio73/data-science
Recall $p(\hat{y} = 1 \mid y = 1)$
print('Recall =', np.sum(labelTest[labelEst0 == 1])/np.sum(labelTest))
Recall = 0.900398406374502
MIT
classification/ClassificationContinuous2Features-KNN.ipynb
tonio73/data-science
Confusion matrix
plotConfusionMatrix(labelTest, labelEst0, np.array(['Blue', 'Red'])); print(metrics.classification_report(labelTest, labelEst0))
              precision    recall  f1-score   support

       False       0.90      0.95      0.93       996
        True       0.95      0.90      0.92      1004

    accuracy                           0.93      2000
   macro avg       0.93      0.93      0.93      2000
weighted avg       0.93      0.93      0.93      2000
MIT
classification/ClassificationContinuous2Features-KNN.ipynb
tonio73/data-science
This non-parametric model has the best performance of all models used so far, including the two-layer neural network. The main drawback is the amount of computation required for each sample to predict: the method is hardly usable for sample sizes over 10k.

Using SciKit Learn

References:
- SciKit documentation
- https://stackabuse.com/k-nearest-neighbors-algorithm-in-python-and-scikit-learn/
model1 = SkKNeighborsClassifier(n_neighbors=k)
model1.fit(xTrain, labelTrain)
labelEst1 = model1.predict(xTest)
print('Accuracy =', model1.score(xTest, labelTest))
plt.hist(labelEst1*1.0, 10, density=True, alpha=0.5)
plt.title('Bernoulli parameter =' + str(np.mean(labelEst1)));
_____no_output_____
MIT
classification/ClassificationContinuous2Features-KNN.ipynb
tonio73/data-science
Confusion matrix (plot)
plotConfusionMatrix(labelTest, labelEst1, np.array(['Blue', 'Red']));
_____no_output_____
MIT
classification/ClassificationContinuous2Features-KNN.ipynb
tonio73/data-science
Classification report
print(metrics.classification_report(labelTest, labelEst1))
              precision    recall  f1-score   support

       False       0.90      0.95      0.93       996
        True       0.95      0.90      0.92      1004

    accuracy                           0.93      2000
   macro avg       0.93      0.93      0.93      2000
weighted avg       0.93      0.93      0.93      2000
MIT
classification/ClassificationContinuous2Features-KNN.ipynb
tonio73/data-science
ROC curve
logit_roc_auc = metrics.roc_auc_score(labelTest, labelEst1) fpr, tpr, thresholds = metrics.roc_curve(labelTest, model1.predict_proba(xTest)[:,1]) plt.plot(fpr, tpr, label='KNN classification (area = %0.2f)' % logit_roc_auc) plt.plot([0, 1], [0, 1],'r--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic') plt.legend(loc="lower right");
_____no_output_____
MIT
classification/ClassificationContinuous2Features-KNN.ipynb
tonio73/data-science
Network DEA function example

This demonstrates how to use the Network DEA function and shows the results after running it. ※ The example code and CSV data are stored [here](https://github.com/wurmen/DEA/tree/master/Functions/network_data%26code) and can be downloaded for testing.
import network_function  # load the .py file containing the Network DEA functions (here named "network_function.py")
_____no_output_____
MIT
Functions/network_data&code/Network_DEA_function_example.ipynb
PO-LAB/DEA
- Read the files into the required format.
- X and Z_input are stored in network_data_input.csv: X is in columns 2-3 of the file, Z_input in columns 4-5.
- Y and Z_output are stored in network_data_output.csv: Y is in column 2, Z_output in columns 3-4.
- The system has 3 processes. Reading is done with the csv2dict_for_network_dea() function, which returns the list of DMUs, the input/output data of the overall system and of each process, and the number of processes.
DMU,X,Z_input,p_n=network_function.csv2dict_for_network_dea('network_data_input.csv', v1_range=[2,3], v2_range=[4,5], p_n=3)
DMU,Y,Z_output,p_n=network_function.csv2dict_for_network_dea('network_data_output.csv', v1_range=[2,2], v2_range=[3,4], p_n=3)
_____no_output_____
MIT
Functions/network_data&code/Network_DEA_function_example.ipynb
PO-LAB/DEA
- Feed the data converted by the file-reading step above into the network DEA function, with the weight lower bound set to 1e-11.
network_function.network(DMU,X,Y,Z_input,Z_output,p_n,var_lb=1e-11)
The efficiency of DMU A:0.523
The efficiency and inefficiency of Process 0 for DMU A:1.0000 and 0
The efficiency and inefficiency of Process 1 for DMU A:0.7500 and 0.09091
The efficiency and inefficiency of Process 2 for DMU A:0.3462 and 0.3864
The efficiency of DMU B:0.595
The efficiency and inefficiency of Process 0 for DMU B:0.8333 and 0.07143
The efficiency and inefficiency of Process 1 for DMU B:1.0000 and 0
The efficiency and inefficiency of Process 2 for DMU B:0.5088 and 0.3333
The efficiency of DMU C:0.568
The efficiency and inefficiency of Process 0 for DMU C:0.5000 and 0.1364
The efficiency and inefficiency of Process 1 for DMU C:0.4000 and 0.2727
The efficiency and inefficiency of Process 2 for DMU C:0.9474 and 0.02273
The efficiency of DMU D:0.482
The efficiency and inefficiency of Process 0 for DMU D:0.5625 and 0.125
The efficiency and inefficiency of Process 1 for DMU D:0.8000 and 0.07143
The efficiency and inefficiency of Process 2 for DMU D:0.3333 and 0.3214
The efficiency of DMU E:0.800
The efficiency and inefficiency of Process 0 for DMU E:0.8333 and 0.06667
The efficiency and inefficiency of Process 1 for DMU E:0.5000 and 0.1333
The efficiency and inefficiency of Process 2 for DMU E:1.0000 and 0
MIT
Functions/network_data&code/Network_DEA_function_example.ipynb
PO-LAB/DEA
=====================================================================
Compute Phase Slope Index (PSI) in source space for a visual stimulus
=====================================================================

This example demonstrates how the Phase Slope Index (PSI) [1]_ can be computed
in source space based on single trial dSPM source estimates. In addition,
the example shows advanced usage of the connectivity estimation routines
by first extracting a label time course for each epoch and then combining
the label time course with the single trial source estimates to compute the
connectivity.

The result clearly shows how the activity in the visual label precedes more
widespread activity (a positive PSI means the label time course is leading).

References
----------
.. [1] Nolte et al. "Robustly Estimating the Flow Direction of Information in
   Complex Physical Systems", Physical Review Letters, vol. 100, no. 23,
   pp. 1-4, Jun. 2008.
# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu> # # License: BSD (3-clause) import numpy as np import mne from mne.datasets import sample from mne.minimum_norm import read_inverse_operator, apply_inverse_epochs from mne.connectivity import seed_target_indices, phase_slope_index print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects' fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif' fname_raw = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' fname_event = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' fname_label = data_path + '/MEG/sample/labels/Vis-lh.label' event_id, tmin, tmax = 4, -0.2, 0.3 method = "dSPM" # use dSPM method (could also be MNE or sLORETA) # Load data inverse_operator = read_inverse_operator(fname_inv) raw = mne.io.read_raw_fif(fname_raw) events = mne.read_events(fname_event) # pick MEG channels picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True, exclude='bads') # Read epochs epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13, eog=150e-6)) # Compute inverse solution and for each epoch. Note that since we are passing # the output to both extract_label_time_course and the phase_slope_index # functions, we have to use "return_generator=False", since it is only possible # to iterate over generators once. snr = 1.0 # use lower SNR for single epochs lambda2 = 1.0 / snr ** 2 stcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method, pick_ori="normal", return_generator=True) # Now, we generate seed time series by averaging the activity in the left # visual corex label = mne.read_label(fname_label) src = inverse_operator['src'] # the source space used seed_ts = mne.extract_label_time_course(stcs, label, src, mode='mean_flip', verbose='error') # Combine the seed time course with the source estimates. There will be a total # of 7500 signals: # index 0: time course extracted from label # index 1..7499: dSPM source space time courses stcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method, pick_ori="normal", return_generator=True) comb_ts = list(zip(seed_ts, stcs)) # Construct indices to estimate connectivity between the label time course # and all source space time courses vertices = [src[i]['vertno'] for i in range(2)] n_signals_tot = 1 + len(vertices[0]) + len(vertices[1]) indices = seed_target_indices([0], np.arange(1, n_signals_tot)) # Compute the PSI in the frequency range 8Hz..30Hz. We exclude the baseline # period from the connectivity estimation fmin = 8. fmax = 30. tmin_con = 0. sfreq = raw.info['sfreq'] # the sampling frequency psi, freqs, times, n_epochs, _ = phase_slope_index( comb_ts, mode='multitaper', indices=indices, sfreq=sfreq, fmin=fmin, fmax=fmax, tmin=tmin_con) # Generate a SourceEstimate with the PSI. This is simple since we used a single # seed (inspect the indices variable to see how the PSI scores are arranged in # the output) psi_stc = mne.SourceEstimate(psi, vertices=vertices, tmin=0, tstep=1, subject='sample') # Now we can visualize the PSI using the plot method. We use a custom colormap # to show signed values v_max = np.max(np.abs(psi)) brain = psi_stc.plot(surface='inflated', hemi='lh', time_label='Phase Slope Index (PSI)', subjects_dir=subjects_dir, clim=dict(kind='percent', pos_lims=(95, 97.5, 100))) brain.show_view('medial') brain.add_label(fname_label, color='green', alpha=0.7)
_____no_output_____
BSD-3-Clause
0.15/_downloads/plot_mne_inverse_psi_visual.ipynb
drammock/mne-tools.github.io
LinkedIn - Get contact from profile

**Tags:** linkedin profile contact naas_drivers

Input

Import library
from naas_drivers import linkedin
_____no_output_____
BSD-3-Clause
LinkedIn/LinkedIn_Get_contact_from_profile.ipynb
vivard/awesome-notebooks
Get your cookies

How to get your cookies?
LI_AT = 'YOUR_COOKIE_LI_AT'  # EXAMPLE AQFAzQN_PLPR4wAAAXc-FCKmgiMit5FLdY1af3-2
JSESSIONID = 'YOUR_COOKIE_JSESSIONID'  # EXAMPLE ajax:8379907400220387585
_____no_output_____
BSD-3-Clause
LinkedIn/LinkedIn_Get_contact_from_profile.ipynb
vivard/awesome-notebooks
Enter profile URL
PROFILE_URL = "PROFILE_URL"
_____no_output_____
BSD-3-Clause
LinkedIn/LinkedIn_Get_contact_from_profile.ipynb
vivard/awesome-notebooks
Model

Get the information returned in a dataframe.

**Available columns:**
- PROFILE_URN : LinkedIn unique profile id
- PROFILE_ID : LinkedIn public profile id
- EMAIL
- CONNECTED_AT
- BIRTHDATE
- TWITER
- ADDRESS
- WEBSITES
- INTERESTS
df = linkedin.connect(LI_AT, JSESSIONID).profile.get_contact(PROFILE_URL)
_____no_output_____
BSD-3-Clause
LinkedIn/LinkedIn_Get_contact_from_profile.ipynb
vivard/awesome-notebooks
Output Display result
df
_____no_output_____
BSD-3-Clause
LinkedIn/LinkedIn_Get_contact_from_profile.ipynb
vivard/awesome-notebooks
Table of Contents
%matplotlib inline import math,sys,os,numpy as np from numpy.random import random from matplotlib import pyplot as plt, rcParams, animation, rc from __future__ import print_function, division from ipywidgets import interact, interactive, fixed from ipywidgets.widgets import * rc('animation', html='html5') rcParams['figure.figsize'] = 3, 3 %precision 4 np.set_printoptions(precision=4, linewidth=100) def lin(a,b,x): return a*x+b a=3. b=8. n=30 x = random(n) y = lin(a,b,x) x y plt.scatter(x,y) def sse(y,y_pred): return ((y-y_pred)**2).sum() def loss(y,a,b,x): return sse(y, lin(a,b,x)) def avg_loss(y,a,b,x): return np.sqrt(loss(y,a,b,x)/n) a_guess=-1. b_guess=1. avg_loss(y, a_guess, b_guess, x) lr=0.01 # d[(y-(a*x+b))**2,b] = 2 (b + a x - y) = 2 (y_pred - y) # d[(y-(a*x+b))**2,a] = 2 x (b + a x - y) = x * dy/db def upd(): global a_guess, b_guess y_pred = lin(a_guess, b_guess, x) dydb = 2 * (y_pred - y) dyda = x*dydb a_guess -= lr*dyda.mean() b_guess -= lr*dydb.mean() ?animation.FuncAnimation fig = plt.figure(dpi=100, figsize=(5, 4)) plt.scatter(x,y) line, = plt.plot(x,lin(a_guess,b_guess,x)) plt.close() def animate(i): line.set_ydata(lin(a_guess,b_guess,x)) for i in range(10): upd() return line, ani = animation.FuncAnimation(fig, animate, np.arange(0, 40), interval=100) ani
_____no_output_____
Apache-2.0
deeplearning1/nbs/sgd-intro.ipynb
shabeer/fastai_courses
📝 Exercise M7.03

As with the classification metrics exercise, we will evaluate the regression metrics within a cross-validation framework to get familiar with the syntax.

We will use the Ames house prices dataset.
import pandas as pd
import numpy as np

ames_housing = pd.read_csv("../datasets/house_prices.csv")
data = ames_housing.drop(columns="SalePrice")
target = ames_housing["SalePrice"]
data = data.select_dtypes(np.number)
target /= 1000
_____no_output_____
CC-BY-4.0
notebooks/metrics_ex_02.ipynb
leonsor/scikit-learn-mooc
Note

If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.

The first step will be to create a linear regression model.
# Write your code here.
from sklearn.linear_model import LinearRegression

linreg = LinearRegression()
_____no_output_____
CC-BY-4.0
notebooks/metrics_ex_02.ipynb
leonsor/scikit-learn-mooc
Then, use the `cross_val_score` to estimate the generalization performance of the model. Use a `KFold` cross-validation with 10 folds. Make the use of the $R^2$ score explicit by assigning the parameter `scoring` (even though it is the default score).
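The solution below passes `cv=10`; for a regressor this is an unshuffled `KFold`, so an explicit equivalent of what the exercise asks for looks like the following sketch (assuming `linreg`, `data` and `target` from the cells above):

```python
# Explicit KFold cross-validation with 10 folds, scored with R2.
from sklearn.model_selection import KFold, cross_val_score

cv = KFold(n_splits=10)  # same folds as passing cv=10 for a regressor
scores = cross_val_score(linreg, data, target, cv=cv, scoring="r2")
print(f"R2 score: {scores.mean():.3f} +/- {scores.std():.3f}")
```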
from sklearn.model_selection import cross_val_score

scores = cross_val_score(linreg, data, target, cv=10, scoring='r2')
print(f"R2 score: {scores.mean():.3f} +/- {scores.std():.3f}")

# Write your code here.
from sklearn.model_selection import cross_validate

result_linreg_r2 = cross_validate(linreg, data, target, cv=10, scoring="r2")
result_reg_r2_df = pd.DataFrame(result_linreg_r2)
result_reg_r2_df
print(f"R2 result for linreg: {result_reg_r2_df['test_score'].mean():.3f} +/- {result_reg_r2_df['test_score'].std():.3f}")
R2 result for linreg: 0.794 +/- 0.109
CC-BY-4.0
notebooks/metrics_ex_02.ipynb
leonsor/scikit-learn-mooc
Then, instead of using the $R^2$ score, use the mean absolute error. You need to refer to the documentation for the `scoring` parameter.
# Write your code here.
result_linreg_mae = cross_validate(linreg, data, target, cv=10, scoring="neg_mean_absolute_error")
result_reg_mae_df = pd.DataFrame(result_linreg_mae)
result_reg_mae_df

scores = cross_val_score(linreg, data, target, cv=10, scoring='neg_mean_absolute_error')
scores = -scores
print(f"Mean Absolute Error: {scores.mean():.3f} +/- {scores.std():.3f}")
print(f"Mean Absolute Error result for linreg: {-result_reg_mae_df['test_score'].mean():.3f} +/- {result_reg_mae_df['test_score'].std():.3f}")
Mean Absolute Error result for linreg: 21.892 +/- 2.346
CC-BY-4.0
notebooks/metrics_ex_02.ipynb
leonsor/scikit-learn-mooc
Finally, use the `cross_validate` function and compute multiple scores/errors at once by passing a list of scorers to the `scoring` parameter. You can compute the $R^2$ score and the mean absolute error for instance.
# Write your code here.
scoring = ["r2", "neg_mean_absolute_error"]
result_linreg_duo = cross_validate(linreg, data, target, cv=10, scoring=scoring)
scores = {"R2": result_linreg_duo["test_r2"],
          "MAE": -result_linreg_duo["test_neg_mean_absolute_error"]}
scores_df = pd.DataFrame(scores)
scores_df
result_linreg_duo
_____no_output_____
CC-BY-4.0
notebooks/metrics_ex_02.ipynb
leonsor/scikit-learn-mooc
Flights data preparation
from pyspark.sql import SQLContext from pyspark.sql import DataFrame from pyspark.sql import Row from pyspark.sql.types import * import pandas as pd import StringIO import matplotlib.pyplot as plt hc = sc._jsc.hadoopConfiguration() hc.set("hive.execution.engine", "mr")
_____no_output_____
Apache-2.0
integration-tests/examples/test_templates/jupyter/template_preparation_pyspark.ipynb
AdamsDisturber/incubator-dlab
Function to parse CSV
import csv def parseCsv(csvStr): f = StringIO.StringIO(csvStr) reader = csv.reader(f, delimiter=',') row = reader.next() return row scsv = '"02Q","Titan Airways"' row = parseCsv(scsv) print row[0] print row[1] working_storage = 'WORKING_STORAGE' output_directory = 'jupyter/py2' protocol_name = 'PROTOCOL_NAME://'
_____no_output_____
Apache-2.0
integration-tests/examples/test_templates/jupyter/template_preparation_pyspark.ipynb
AdamsDisturber/incubator-dlab
Parse and convert Carrier data to parquet
carriersHeader = 'Code,Description' carriersText = sc.textFile(protocol_name + working_storage + "/jupyter_dataset/carriers.csv").filter(lambda x: x != carriersHeader) carriers = carriersText.map(lambda s: parseCsv(s)) \ .map(lambda s: Row(code=s[0], description=s[1])).cache().toDF() carriers.write.mode("overwrite").parquet(protocol_name + working_storage + "/" + output_directory + "/carriers") sqlContext.registerDataFrameAsTable(carriers, "carriers") carriers.limit(20).toPandas()
_____no_output_____
Apache-2.0
integration-tests/examples/test_templates/jupyter/template_preparation_pyspark.ipynb
AdamsDisturber/incubator-dlab
Parse and convert Airport data to parquet
airportsHeader= '"iata","airport","city","state","country","lat","long"' airports = sc.textFile(protocol_name + working_storage + "/jupyter_dataset/airports.csv") \ .filter(lambda x: x != airportsHeader) \ .map(lambda s: parseCsv(s)) \ .map(lambda p: Row(iata=p[0], \ airport=p[1], \ city=p[2], \ state=p[3], \ country=p[4], \ lat=float(p[5]), \ longt=float(p[6])) \ ).cache().toDF() airports.write.mode("overwrite").parquet(protocol_name + working_storage + "/" + output_directory + "/airports") sqlContext.registerDataFrameAsTable(airports, "airports") airports.limit(20).toPandas()
_____no_output_____
Apache-2.0
integration-tests/examples/test_templates/jupyter/template_preparation_pyspark.ipynb
AdamsDisturber/incubator-dlab
Parse and convert Flights data to parquet
flightsHeader = 'Year,Month,DayofMonth,DayOfWeek,DepTime,CRSDepTime,ArrTime,CRSArrTime,UniqueCarrier,FlightNum,TailNum,ActualElapsedTime,CRSElapsedTime,AirTime,ArrDelay,DepDelay,Origin,Dest,Distance,TaxiIn,TaxiOut,Cancelled,CancellationCode,Diverted,CarrierDelay,WeatherDelay,NASDelay,SecurityDelay,LateAircraftDelay' flights = sc.textFile(protocol_name + working_storage + "/jupyter_dataset/2008.csv.bz2") \ .filter(lambda x: x!= flightsHeader) \ .map(lambda s: parseCsv(s)) \ .map(lambda p: Row(Year=int(p[0]), \ Month=int(p[1]), \ DayofMonth=int(p[2]), \ DayOfWeek=int(p[3]), \ DepTime=p[4], \ CRSDepTime=p[5], \ ArrTime=p[6], \ CRSArrTime=p[7], \ UniqueCarrier=p[8], \ FlightNum=p[9], \ TailNum=p[10], \ ActualElapsedTime=p[11], \ CRSElapsedTime=p[12], \ AirTime=p[13], \ ArrDelay=int(p[14].replace("NA", "0")), \ DepDelay=int(p[15].replace("NA", "0")), \ Origin=p[16], \ Dest=p[17], \ Distance=long(p[18]), \ TaxiIn=p[19], \ TaxiOut=p[20], \ Cancelled=p[21], \ CancellationCode=p[22], \ Diverted=p[23], \ CarrierDelay=int(p[24].replace("NA", "0")), \ CarrierDelayStr=p[24], \ WeatherDelay=int(p[25].replace("NA", "0")), \ WeatherDelayStr=p[25], \ NASDelay=int(p[26].replace("NA", "0")), \ SecurityDelay=int(p[27].replace("NA", "0")), \ LateAircraftDelay=int(p[28].replace("NA", "0")))) \ .toDF() flights.write.mode("ignore").parquet(protocol_name + working_storage + "/" + output_directory + "/flights") sqlContext.registerDataFrameAsTable(flights, "flights") flights.limit(10).toPandas()[["ArrDelay","CarrierDelay","CarrierDelayStr","WeatherDelay","WeatherDelayStr","Distance"]]
_____no_output_____
Apache-2.0
integration-tests/examples/test_templates/jupyter/template_preparation_pyspark.ipynb
AdamsDisturber/incubator-dlab
# Last amended: 30th March, 2021 # Myfolder: github/hadoop # Objective: # i) Install hadoop on colab # (current version is 3.2.2) # ii) Experiments with hadoop # iii) Install spark on colab # iv) Access hadoop file from spark # v) Install koalas on colab # # # Java 8 install: https://stackoverflow.com/a/58191107 # Hadoop install: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html # Spark install: https://stackoverflow.com/a/64183749 # https://www.analyticsvidhya.com/blog/2020/11/a-must-read-guide-on-how-to-work-with-pyspark-on-google-colab-for-data-scientists/
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
Install hadoop

If it takes too long, it means it is awaiting input from you regarding overwriting ssh keys.

Define functions

No downloads. Just function definitions.
# 1.0 How to set environment variable
import os
import time
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
ssh_install()
# 2.0 Function to install ssh client and sshd (Server)
def ssh_install():
    print("\n--1. Download and install ssh server----\n")
    ! sudo apt-get remove openssh-client openssh-server
    ! sudo apt install openssh-client openssh-server
    print("\n--2. Restart ssh server----\n")
    ! service ssh restart
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
Java install
# 3.0 Function to download and install java 8 def install_java(): ! rm -rf /usr/java print("\n--Download and install Java 8----\n") !apt-get install -y openjdk-8-jdk-headless -qq > /dev/null # install openjdk os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" # set environment variable !update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java !update-alternatives --set javac /usr/lib/jvm/java-8-openjdk-amd64/bin/javac !mkdir -p /usr/java ! ln -s "/usr/lib/jvm/java-8-openjdk-amd64" "/usr/java" ! mv "/usr/java/java-8-openjdk-amd64" "/usr/java/latest" !java -version #check java version !javac -version
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
hadoop install
# 4.0 Function to download and install hadoop def hadoop_install(): print("\n--5. Download hadoop tar.gz----\n") ! wget -c https://mirrors.estointernet.in/apache/hadoop/common/hadoop-3.2.2/hadoop-3.2.2.tar.gz print("\n--6. Transfer downloaded content and unzip tar.gz----\n") ! mv /content/hadoop* /opt/ ! tar -xzf /opt/hadoop-3.2.2.tar.gz --directory /opt/ print("\n--7. Create hadoop folder----\n") ! rm -r /app/hadoop/tmp ! mkdir -p /app/hadoop/tmp print("\n--8. Check folder for files----\n") ! ls -la /opt
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
hadoop config
# 5.0 Function for setting hadoop configuration def hadoop_config(): print("\n--Begin Configuring hadoop---\n") print("\n=============================\n") print("\n--9. core-site.xml----\n") ! cat /opt/hadoop-3.2.2/etc/hadoop/core-site.xml print("\n--10. Amend core-site.xml----\n") ! echo '<?xml version="1.0" encoding="UTF-8"?>' > /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' <configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' <name>fs.defaultFS</name>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' <value>hdfs://localhost:9000</value>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' <name>hadoop.tmp.dir</name>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' <value>/app/hadoop/tmp</value>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' <description>A base for other temporary directories.</description>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml # Added following regarding safemode from here: # https://stackoverflow.com/a/33800253 ! echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' <name>dfs.safemode.threshold.pct</name>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' <value>0</value>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml ! echo ' </configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/core-site.xml print("\n--11. Amended core-site.xml----\n") ! cat /opt/hadoop-3.2.2/etc/hadoop/core-site.xml print("\n--12. yarn-site.xml----\n") !cat /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml !echo '<?xml version="1.0" encoding="UTF-8"?>' > /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml !echo '<configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml !echo ' <name>yarn.nodemanager.aux-services</name>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml !echo ' <value>mapreduce_shuffle</value>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml !echo ' <name>yarn.nodemanager.vmem-check-enabled</name>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml !echo ' <value>false</value>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml !echo ' </configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml print("\n--13. Amended yarn-site.xml----\n") !cat /opt/hadoop-3.2.2/etc/hadoop/yarn-site.xml print("\n--14. mapred-site.xml----\n") !cat /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml print("\n--15. 
Amend mapred-site.xml----\n") !echo '<?xml version="1.0"?>' > /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo '<configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' <name>mapreduce.framework.name</name>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' <value>yarn</value>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' <name>yarn.app.mapreduce.am.env</name>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' <name>mapreduce.map.env</name>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' <name>mapreduce.reduce.env</name>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml !echo '</configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml print("\n--16, Amended mapred-site.xml----\n") !cat /opt/hadoop-3.2.2/etc/hadoop/mapred-site.xml print("\n---17. hdfs-site.xml----\n") !cat /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml print("\n---18. Amend hdfs-site.xml----\n") !echo '<?xml version="1.0" encoding="UTF-8"?> ' > /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml !echo '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml !echo '<configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml !echo ' <name>dfs.replication</name>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml !echo ' <value>1</value>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml !echo ' <property>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml !echo ' <name>dfs.block.size</name>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml !echo ' <value>16777216</value>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml !echo ' <description>Block size</description>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml !echo ' </property>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml !echo '</configuration>' >> /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml print("\n---19. Amended hdfs-site.xml----\n") !cat /opt/hadoop-3.2.2/etc/hadoop/hdfs-site.xml print("\n---20. hadoop-env.sh----\n") # https://stackoverflow.com/a/53140448 !cat /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh ! echo 'export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"' >> /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh ! echo 'export HDFS_NAMENODE_USER="root"' >> /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh ! echo 'export HDFS_DATANODE_USER="root"' >> /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh ! echo 'export HDFS_SECONDARYNAMENODE_USER="root"' >> /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh ! 
echo 'export YARN_RESOURCEMANAGER_USER="root"' >> /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh ! echo 'export YARN_NODEMANAGER_USER="root"' >> /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh print("\n---21. Amended hadoop-env.sh----\n") !cat /opt/hadoop-3.2.2/etc/hadoop/hadoop-env.sh
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
ssh keys
# 6.0 Function to set up ssh passphrase
def set_keys():
    print("\n---22. Generate SSH keys----\n")
    ! cd ~ ; pwd
    ! cd ~ ; ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    ! cd ~ ; cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    ! cd ~ ; chmod 0600 ~/.ssh/authorized_keys
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
Set environment
# 7.0 Function to set up environmental variables def set_env(): print("\n---23. Set Environment variables----\n") # 'export' command does not work in colab # https://stackoverflow.com/a/57240319 os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" #set environment variable os.environ["JRE_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64/jre" os.environ["HADOOP_HOME"] = "/opt/hadoop-3.2.2" os.environ["HADOOP_CONF_DIR"] = "/opt/hadoop-3.2.2/etc/hadoop" os.environ["LD_LIBRARY_PATH"] += ":/opt/hadoop-3.2.2/lib/native" os.environ["PATH"] += ":/opt/hadoop-3.2.2/bin:/opt/hadoop-3.2.2/sbin"
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
Install all function
# 8.0 Function to call all functions
def install_hadoop():
    print("\n--Install java----\n")
    ssh_install()
    install_java()
    hadoop_install()
    hadoop_config()
    set_keys()
    set_env()
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
Begin install

Start downloading, installing and configuring. Takes around 2 minutes.
# 9.0 Start installation
start = time.time()
install_hadoop()
end = time.time()
print("\n---Time taken----\n")
print((end- start)/60)
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
Format hadoop
# 10.0 Format hadoop
print("\n---24. Format namenode----\n")
!hdfs namenode -format
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
Start and test hadoop

If namenode is in safemode, use the command: `!hdfs dfsadmin -safemode leave`

Start hadoop

If start fails with 'Connection refused', run `ssh_install()` once again.
# 11.0 Start namenode
#      If this fails, run
#      ssh_install() below
#      and start hadoop again:
print("\n---25. Start namenode----\n")
! start-dfs.sh
#ssh_install()
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
Start yarn
# 11.1 Start yarn
! start-yarn.sh
Starting resourcemanager Starting nodemanagers
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
If `start-dfs.sh` fails, issue the following three commands, one after another (collected as a runnable cell below):

`! sudo apt-get remove openssh-client openssh-server`
`! sudo apt-get install openssh-client openssh-server`
`! service ssh restart`

And then try to start hadoop again, as: `start-dfs.sh`

Test hadoop

If in safe mode, leave safe mode as: `!hdfs dfsadmin -safemode leave`
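For convenience, the same recovery commands as a single cell; only needed when `start-dfs.sh` fails, and the safe-mode command only if the namenode is stuck in safe mode:

```python
# Re-install and restart ssh when start-dfs.sh fails, then retry starting hadoop.
! sudo apt-get remove openssh-client openssh-server
! sudo apt-get install openssh-client openssh-server
! service ssh restart
! start-dfs.sh
# If the namenode is still in safe mode:
! hdfs dfsadmin -safemode leave
```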
# 11.1
print("\n---26. Make folders in hadoop----\n")
! hdfs dfs -mkdir /user
! hdfs dfs -mkdir /user/ashok

# 11.2 Run hadoop commands
! hdfs dfs -ls /
! hdfs dfs -ls /user

# 11.3 Stopping hadoop
#      Gives some errors
#      But hadoop stops
#!stop-dfs.sh
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
Run the `ssh_install()` again if hadoop fails to start with `start-dfs.sh` and then try to start hadoop again.

Install spark

Define functions

`findspark`: PySpark isn't on `sys.path` by default, but that doesn't mean it can't be used as a regular library. You can address this by either symlinking pyspark into your site-packages, or adding `pyspark` to `sys.path` at runtime. `findspark` does the latter.
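For illustration only, a rough hand-rolled equivalent of what `findspark.init()` does (adding pyspark and its bundled py4j to `sys.path`), to be run after Spark has been installed below; the Spark path matches the one used in this notebook, and the py4j zip name is looked up with `glob` since its version varies per release:

```python
# Hand-rolled equivalent of findspark.init(): put pyspark and py4j on sys.path.
import glob
import os
import sys

spark_home = os.environ.get("SPARK_HOME", "/opt/spark-3.1.1-bin-hadoop3.2")
sys.path.insert(0, os.path.join(spark_home, "python"))
sys.path.insert(0, glob.glob(os.path.join(spark_home, "python", "lib", "py4j-*-src.zip"))[0])

import pyspark  # should now import without findspark
print(pyspark.__version__)
```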
# 1.0 Function to download and unzip spark def spark_koalas_install(): print("\n--1.1 Install findspark----\n") !pip install -q findspark print("\n--1.2 Install databricks Koalas----\n") !pip install koalas print("\n--1.3 Download Apache tar.gz----\n") ! wget -c https://mirrors.estointernet.in/apache/spark/spark-3.1.1/spark-3.1.1-bin-hadoop3.2.tgz print("\n--1.4 Transfer downloaded content and unzip tar.gz----\n") ! mv /content/spark* /opt/ ! tar -xzf /opt/spark-3.1.1-bin-hadoop3.2.tgz --directory /opt/ print("\n--1.5 Check folder for files----\n") ! ls -la /opt # 1.1 Function to set environment def set_spark_env(): print("\n---2. Set Environment variables----\n") os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" os.environ["JRE_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64/jre" os.environ["SPARK_HOME"] = "/opt/spark-3.1.1-bin-hadoop3.2" os.environ["LD_LIBRARY_PATH"] += ":/opt/spark-3.1.1-bin-hadoop3.2/lib/native" os.environ["PATH"] += ":/opt/spark-3.1.1-bin-hadoop3.2/bin:/opt/spark-3.1.1-bin-hadoop3.2/sbin" print("\n---2.1. Check Environment variables----\n") # Check ! echo $PATH ! echo $LD_LIBRARY_PATH # 1.2 Function to configure spark def spark_conf(): print("\n---3. Configure spark to access hadoop----\n") !mv /opt/spark-3.1.1-bin-hadoop3.2/conf/spark-env.sh.template /opt/spark-3.1.1-bin-hadoop3.2/conf/spark-env.sh !echo "HADOOP_CONF_DIR=/opt/hadoop-3.2.2/etc/hadoop/" >> /opt/spark-3.1.1-bin-hadoop3.2/conf/spark-env.sh print("\n---3.1 Check ----\n") #!cat /opt/spark-3.1.1-bin-hadoop3.2/conf/spark-env.sh
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
Install spark
# 2.0 Call all the three functions
def install_spark():
    spark_koalas_install()
    set_spark_env()
    spark_conf()

# 2.1
install_spark()
--1.1 Install findspark---- --1.2 Install databricks Koalas---- Collecting koalas [?25l Downloading https://files.pythonhosted.org/packages/40/de/87c016a3e5055251ed117c86eb3b0de2381518c7acae54e115711ff30ceb/koalas-1.7.0-py3-none-any.whl (1.4MB)  |████████████████████████████████| 1.4MB 5.6MB/s [?25hRequirement already satisfied: numpy<1.20.0,>=1.14 in /usr/local/lib/python3.7/dist-packages (from koalas) (1.19.5) Requirement already satisfied: pyarrow>=0.10 in /usr/local/lib/python3.7/dist-packages (from koalas) (3.0.0) Requirement already satisfied: pandas<1.2.0,>=0.23.2 in /usr/local/lib/python3.7/dist-packages (from koalas) (1.1.5) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas<1.2.0,>=0.23.2->koalas) (2018.9) Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas<1.2.0,>=0.23.2->koalas) (2.8.1) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas<1.2.0,>=0.23.2->koalas) (1.15.0) Installing collected packages: koalas Successfully installed koalas-1.7.0 --1.3 Download Apache tar.gz---- --2021-03-30 11:29:04-- https://mirrors.estointernet.in/apache/spark/spark-3.1.1/spark-3.1.1-bin-hadoop3.2.tgz Resolving mirrors.estointernet.in (mirrors.estointernet.in)... 43.255.166.254, 2403:8940:3:1::f Connecting to mirrors.estointernet.in (mirrors.estointernet.in)|43.255.166.254|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 228721937 (218M) [application/octet-stream] Saving to: ‘spark-3.1.1-bin-hadoop3.2.tgz’ spark-3.1.1-bin-had 100%[===================>] 218.13M 11.9MB/s in 22s 2021-03-30 11:29:27 (9.91 MB/s) - ‘spark-3.1.1-bin-hadoop3.2.tgz’ saved [228721937/228721937] --1.4 Transfer downloaded content and unzip tar.gz---- --1.5 Check folder for files---- total 609576 drwxr-xr-x 1 root root 4096 Mar 30 11:29 . drwxr-xr-x 1 root root 4096 Mar 30 11:26 .. drwxr-xr-x 1 root root 4096 Mar 18 13:31 google drwxr-xr-x 10 1000 1000 4096 Mar 30 11:26 hadoop-3.2.2 -rw-r--r-- 1 root root 395448622 Jan 13 18:48 hadoop-3.2.2.tar.gz drwxr-xr-x 4 root root 4096 Mar 18 13:25 nvidia drwxr-xr-x 13 1000 1000 4096 Feb 22 02:11 spark-3.1.1-bin-hadoop3.2 -rw-r--r-- 1 root root 228721937 Feb 22 02:45 spark-3.1.1-bin-hadoop3.2.tgz ---2. Set Environment variables---- ---2.1. Check Environment variables---- /usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/opt/hadoop-3.2.2/bin:/opt/hadoop-3.2.2/sbin:/opt/spark-3.1.1-bin-hadoop3.2/bin:/opt/spark-3.1.1-bin-hadoop3.2/sbin /usr/local/nvidia/lib:/usr/local/nvidia/lib64:/opt/hadoop-3.2.2/lib/native:/opt/spark-3.1.1-bin-hadoop3.2/lib/native ---3. Configure spark to access hadoop---- ---3.1 Check ----
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
Test spark

Hadoop should have been started.

Call some libraries
# 3.0 Just call some libraries to test import pandas as pd import numpy as np # 3.1 Get spark in sys.path import findspark findspark.init() # 3.2 Call other spark libraries # Just to test from pyspark.sql import SparkSession import databricks.koalas as ks from pyspark.ml.feature import VectorAssembler from pyspark.ml.regression import LinearRegression # 3.1 Build spark session spark = SparkSession. \ builder. \ master("local[*]"). \ getOrCreate() # 4.0 Pandas DataFrame pdf = pd.DataFrame({ 'x1': ['a','a','b','b', 'b', 'c', 'd','d'], 'x2': ['apple', 'orange', 'orange','orange', 'peach', 'peach','apple','orange'], 'x3': [1, 1, 2, 2, 2, 4, 1, 2], 'x4': [2.4, 2.5, 3.5, 1.4, 2.1,1.5, 3.0, 2.0], 'y1': [1, 0, 1, 0, 0, 1, 1, 0], 'y2': ['yes', 'no', 'no', 'yes', 'yes', 'yes', 'no', 'yes'] }) # 4.1 pdf # 4.2 Transform to Spark DataFrame df = spark.createDataFrame(pdf) df.show() # 4.3 Create a csv file # and tranfer it to hdfs !echo "a,b,c,d" > /content/airports.csv !echo "5,4,6,7" >> /content/airports.csv !echo "2,3,4,5" >> /content/airports.csv !echo "8,9,0,1" >> /content/airports.csv !echo "2,3,4,1" >> /content/airports.csv !echo "1,2,2,1" >> /content/airports.csv !echo "0,1,2,6" >> /content/airports.csv !echo "9,3,1,8" >> /content/airports.csv !ls -la /content # 4.4 !hdfs dfs -rm -f /user/ashok/airports.csv !hdfs dfs -put /content/airports.csv /user/ashok/ !hdfs dfs -ls /user/ashok # 5.0 Read file directly from hadoop airports_df = spark.read.csv( "/user/ashok/airports.csv", inferSchema = True, header = True ) # 5.1 Show file airports_df.show()
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
Test Koalas

Hadoop should have been started.

Create a koalas dataframe
# 6.0 # If namenode is in safemode, first use: # hdfs dfsadmin -safemode leave kdf = ks.DataFrame( { 'a': [1, 2, 3, 4, 5, 6], 'b': [100, 200, 300, 400, 500, 600], 'c': ["one", "two", "three", "four", "five", "six"] }, index=[10, 20, 30, 40, 50, 60] ) # 6.1 And show kdf # 6.2 Pandas DataFrame pdf = pd.DataFrame({'x':range(3), 'y':['a','b','b'], 'z':['a','b','b']}) # 6.2.1 Transform to koalas DataFrame df = ks.from_pandas(pdf) # 6.3 Rename koalas dataframe columns df.columns = ['x', 'y', 'z1'] # 6.4 Do some operations on koalas DF, in place: df['x2'] = df.x * df.x # 6.6 Finally show koalas df df # 6.7 Read csv file from hadoop # and create koalas df ks.read_csv("/user/ashok/airports.csv").head(10) ###################
_____no_output_____
MIT
hadoop & spark/hadoop_spark_install_on_Colab.ipynb
harnalashok/hadoop
OBJECTIVE

Predict $\rho$, $\sigma_a$ and $\sigma_c$ from $E_r$, $F_r$ and $T_r$ at the right edge of the domain, at all times.

PREPARATION

Imports
%reset -f
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from ast import literal_eval as l_eval

np.set_printoptions(precision = 3)
_____no_output_____
MIT
src/notebook/.ipynb_checkpoints/reseaux_de_neurones-checkpoint.ipynb
desmond-rn/projet-inverse
Loading the data
# """ VERSION COLAB """ # # to load data from my personal github repo (update it if we have to) # import os # if not os.path.exists("assets"): # print("Data wansn't here. Let's download it!") # !git clone https://github.com/desmond-rn/assets.git # else: # print("Data already here. Let's update it!") # %cd assets # # %rm -rf assets # !git pull https://github.com/desmond-rn/assets.git # %cd .. # print("\n") # !ls assets/dataframes/inverse # df_path = "assets/dataframes/inverse/df_temporal.csv" # """ VERSION JUPYTER """ # to load data locally %ls "../../data" df_t_path = "../../data/df_temporal.csv" df_s_path = "../../data/df_spatial.csv"
Volume in drive C has no label. Volume Serial Number is 2248-85E1 Directory of C:\Users\Roussel\Dropbox\Unistra\SEMESTRE 2\Projet & Stage\Inverse\REPO\data 21-Jun-20 12:53 PM <DIR> . 21-Jun-20 12:53 PM <DIR> .. 21-Jun-20 12:53 PM <DIR> anim 24-Jun-20 02:07 PM 14,895 case_1_spatial.csv 24-Jun-20 02:07 PM 30,388 case_1_temporal.csv 24-Jun-20 02:07 PM 10,946 case_2_spatial.csv 24-Jun-20 02:07 PM 22,409 case_2_temporal.csv 24-Jun-20 02:07 PM 10,901 case_3_spatial.csv 21-Jun-20 12:53 PM 62,201 dataframe_1.csv 21-Jun-20 12:53 PM 89,950 dataframe_2.csv 21-Jun-20 12:53 PM 1,396 df_1.csv 22-Jun-20 07:11 AM 47,585 df_1_test.csv 21-Jun-20 12:53 PM 1,486 df_2.csv 22-Jun-20 07:11 AM 54,818 df_2_test.csv 24-Jun-20 02:07 PM 3,643,332 df_spatial.csv 24-Jun-20 02:07 PM 4,813,222 df_temporal.csv 21-Jun-20 12:53 PM 14,757 fichier_export_csv.csv 21-Jun-20 12:53 PM <DIR> img 21-Jun-20 12:53 PM <DIR> video 14 File(s) 8,818,286 bytes 5 Dir(s) 103,518,060,544 bytes free
MIT
src/notebook/.ipynb_checkpoints/reseaux_de_neurones-checkpoint.ipynb
desmond-rn/projet-inverse
Temporal data
types = {'rho_expr':str, 'sigma_a_expr':str, 'sigma_c_expr':str, 'E_x_0_expr':str, 'F_x_0_expr':str, 'T_x_0_expr':str}
converters={'t':l_eval, 'E_l':l_eval, 'F_l':l_eval, 'T_l':l_eval, 'E_r':l_eval, 'F_r':l_eval, 'T_r':l_eval}  # we want to convert the str to lists

df_t = pd.read_csv(df_t_path, thousands=',', dtype=types, converters=converters)
df_t.head(2)
_____no_output_____
MIT
src/notebook/.ipynb_checkpoints/reseaux_de_neurones-checkpoint.ipynb
desmond-rn/projet-inverse
Spatial data
types = {'rho_expr':str, 'sigma_a_expr':str, 'sigma_c_expr':str, 'E_x_0_expr':str, 'F_x_0_expr':str, 'T_x_0_expr':str}
converters={'x':l_eval, 'rho':l_eval, 'sigma_a':l_eval, 'sigma_c':l_eval, 'E_0':l_eval, 'F_0':l_eval, 'T_0':l_eval, 'E':l_eval, 'F':l_eval, 'T':l_eval}

df_s = pd.read_csv(df_s_path, thousands=',', dtype=types, converters=converters)
df_s.head(2)
_____no_output_____
MIT
src/notebook/.ipynb_checkpoints/reseaux_de_neurones-checkpoint.ipynb
desmond-rn/projet-inverse
Prerequisites for this training

All inputs must be similar in a certain number of their parameters.
t_f = 0.005
x_min = 0
x_max = 1

for i in range(len(df_t)):
    assert df_t.loc[i, 't_f'] == 0.005
    assert df_t.loc[i, 'E_0_expr'] == "0.01372*(5^4)"
    # etc...
    assert df_t.loc[i, 'x_min'] == x_min
    assert df_t.loc[i, 'x_max'] == x_max
_____no_output_____
MIT
src/notebook/.ipynb_checkpoints/reseaux_de_neurones-checkpoint.ipynb
desmond-rn/projet-inverse
Visualization
""" Visualisons les signaux sur la droite et la densite sur le domaine """ def plot_inputs(ax, df_t, index): t = np.array(df_t.loc[index, 't']) # inputs E_r = np.array(df_t.loc[index, 'E_r']) F_r = np.array(df_t.loc[index, 'F_r']) T_r = np.array(df_t.loc[index, 'T_r']) # plot ax[0].plot(t, E_r, 'b', label='énergie à droite', lw=3) ax[0].set_ylim(8.275, 8.875) ax[0].set_xlabel('t') ax[0].legend() ax[1].plot(t, F_r, 'y', label='flux à droite', lw=3) ax[1].set_ylim(-0.25, 0.25) ax[1].set_xlabel('t') ax[1].legend() ax[2].plot(t, T_r, 'r', label='température à droite', lw=3) ax[2].set_ylim(4.96, 5.04) ax[2].set_xlabel('t') ax[2].legend() def plot_output(ax, df_s, index): x = np.array(df_s.loc[index, 'x']) rho = np.array(df_s.loc[index, 'rho']) # plot ax.plot(x, rho, 'm--', label='densité') ax.set_ylim(0.5, 10.5) ax.set_xlabel('x') ax.legend() def plot_io(index): fig, ax = plt.subplots(2, 3, figsize=(12, 6)) fig.delaxes(ax[1][0]) fig.delaxes(ax[1][2]) plot_inputs(ax[0], df_t, index) plot_output(ax[1, 1], df_s, index) plt.tight_layout() index = 0 plot_io(index)
_____no_output_____
MIT
src/notebook/.ipynb_checkpoints/reseaux_de_neurones-checkpoint.ipynb
desmond-rn/projet-inverse
Creation of the inputs X

For each of the signals E_r, F_r and T_r, we first need to:
- Trim the signal to keep only its end
- Resample the signal to keep only 20, or even 50, time steps
""" Permet de couper le debut du signal, parite toujours constante. Retourne la fraction de fin """ def trim(input, ratio): len_input = len(input) len_output = int(len_input*ratio) return input[len_input-len_output:] """ Fonction pour extraire n pas d'iterations """ def resample(input, len_output): len_input = len(input) output = [] for i in np.arange(0, len_input, len_input//len_output): output.append(input[i]) return np.array(output)[1:] """ Testons avec un exemple """ t = np.array(df_t.loc[index, 't']) E_r = np.array(df_t.loc[index, 'E_r']) ratio, len_output = 1/2, 20 t = resample(trim(t, ratio), len_output) E_r = resample(trim(E_r, ratio), len_output) fig, ax = plt.subplots(1, 1, figsize=(6, 4)) ax.plot(t, E_r, 'b', label='énergie à droite coupé et reechantilloné', lw=3) ax.set_ylim(8.275, 8.875) ax.set_xlabel('t') ax.legend(); """ Generation les inputs X """ size = len(df_t) X = np.empty(shape=(size, 3, len_output), dtype=float) for i in range(size): X[i][0] = resample(trim(df_t.loc[i, 'E_r'], ratio), len_output) X[i][1] = resample(trim(df_t.loc[i, 'F_r'], ratio), len_output) X[i][2] = resample(trim(df_t.loc[i, 'T_r'], ratio), len_output) print("X shape =", X.shape)
X shape = (103, 3, 20)
MIT
src/notebook/.ipynb_checkpoints/reseaux_de_neurones-checkpoint.ipynb
desmond-rn/projet-inverse
Creation of the outputs y

For the signal rho, we first need to:
- Detect the position, height and width of each rectangular pulse (the "niches" in the code)
""" Calcule les decalages a droite et a gauche d'un signal """ def decay(signal): signal_right = np.zeros_like(signal) signal_right[1:] = signal[:-1] signal_right[0] = signal[0] signal_left = np.zeros_like(signal) signal_left[:-1] = signal[1:] signal_left[-1] = signal[-1] return signal_left, signal_right """ Fonction de lissage laplacien 3-means d'un signal """ def smooth(signal): signal_left, signal_right = decay(signal) return (signal + signal_left + signal_right) / 3. """ Pour eliminer les tres tres faibles valeurs dans un signal """ def sharpen(signal, precision): return np.where(abs(signal) < precision, np.zeros_like(signal), signal) """ Pour afficher un signal et sa derivee seconde """ def plot_signal(ax, signal): signal_left, signal_right = decay(signal) diff = -2*signal + signal_right + signal_left diff = sharpen(diff, 1e-4) ax[0].plot(signal, 'm--', label='signal') ax[1].plot(diff[1:-1], 'c--', label='derivee seconde du signal'); ax[0].legend() ax[1].legend() """ Une fonction pour detecter la position, hauteur et largeur des crenaux """ def detect_niches(signal): signal_left, signal_right = decay(signal) diff = -2*signal + signal_right + signal_left diff = sharpen(diff, 1e-4) # zero_crossings = [] # les points de traverse du 0 niches = [] # les crenaux detectes prev = diff[0] next = diff[2] ended = False # indique si on aretrouve la fin d'un crenau start = 1 end = 1 step = 1 # pas de recherche i = step len_signal = len(diff) while i < len_signal-step: prev = diff[i-step] val = diff[i] next = diff[i+step] if prev > 0. and next < 0.: # zero_crossings.append(i) start = i ended = False if i == len_signal-step-1 and ended == False: prev = -1. next = 1. if prev < 0. and next > 0. and ended==False: # zero_crossings.append(i) end = i niche_width = end - start # largeur relative a N = len_signal niche_center = (end + start) // 2 # position relative a N niche_height = signal[niche_center] # hauteur du crenaux niches.append((niche_center, niche_height, niche_width)) ended = True # print(i, ended) # print(prev, next) i += 1 return niches """ Testons avec un exemple """ signal = np.zeros(500) signal[100:170] = 5. # ajout des crenaux signal[250:265] = 3. signal[325:375] = 10. for i in range(25): # lissage du signal signal = smooth(signal) fig, ax = plt.subplots(1, 2, figsize=(12, 4)) plot_signal(ax, signal) niches = detect_niches(signal) print("Position, hauteur et largeur des creneaux detectes") for el in niches: print(" -", el) """ Testons sur un vrai rho """ # signal = np.array(df_s.loc[4, 'rho']) # fig, ax = plt.subplots(1, 2, figsize=(12, 4)) # plot_signal(ax, signal) # niches = detect_niches(signal) # for el in niches: # print(" -", el) """ Pour creer les y, il faut normaliser par rapport a l'abcisse du domaine """ y = np.empty(shape=(size, 3), dtype=float) for i in range(size): x = np.array(df_s.loc[i, 'x']) rho = np.array(df_s.loc[i, 'rho']) niche = detect_niches(rho)[0] # on suppose qu'il ny a qu'un seul créneau dx = (x_max - x_min) / df_s.loc[i, 'N'] # xmin = 0, xmax = 1 bien sur. condition necessaire pour cette etude y[i][0] = x[niche[0]] # position relative a x y[i][1] = niche[1] # hauteur y[i][2] = niche[2]*dx # largeur # print(i, niche) # print(i, y[i]) print("y shape =", np.shape(y))
y shape = (103, 3)
MIT
src/notebook/.ipynb_checkpoints/reseaux_de_neurones-checkpoint.ipynb
desmond-rn/projet-inverse
Splitting the data into train, test and val sets
len_train, len_val = 60, 20

X_train = X[:len_train]
X_val = X[len_train:len_train+len_val]
X_test = X[len_train+len_val:]

y_train = y[:len_train]
y_val = y[len_train:len_train+len_val]
y_test = y[len_train+len_val:]

print("X shapes =", np.shape(X_train), np.shape(X_val), np.shape(X_test))
print("y shapes =", np.shape(y_train), np.shape(y_val), np.shape(y_test))
X shapes = (60, 3, 20) (20, 3, 20) (23, 3, 20) y shapes = (60, 3) (20, 3) (23, 3)
MIT
src/notebook/.ipynb_checkpoints/reseaux_de_neurones-checkpoint.ipynb
desmond-rn/projet-inverse
Testing via the Python module

For this we need a `directory`: a folder with data to validate, which consists of:
* a folder `datasets` containing one or more GeoPackages with HyDAMO layers
* a file `validation_rules.json` containing the validation rules

Because we want to determine the surface level (maaiveldhoogte) on the HyDAMO objects, we define a `coverage`. This is a Python dictionary. Each `key` is an identifier for the coverage that can be referenced in `validation_rules.json`. The `value` points to a folder containing:
* GeoTiffs
* index.shp with the outline of each GeoTiff
coverage = {"AHN": r"../tests/data/dtm"} directory = r"../tests/data/tasks/test_profielen"
_____no_output_____
MIT
notebooks/test_profielen.ipynb
d2hydro/HyDAMOValidatieModule
We import the validator and create a HyDAMO validator that writes GeoPackages, CSVs and GeoJSONs. We also assign the coverage.
from hydamo_validation import validator

hydamo_validator = validator(output_types=["geopackage", "csv", "geojson"],
                             coverages=coverage,
                             log_level="INFO")
_____no_output_____
MIT
notebooks/test_profielen.ipynb
d2hydro/HyDAMOValidatieModule
Now we can validate our `directory`. This takes about 20-30 seconds.
datamodel, layer_summary, result_summary = hydamo_validator(directory=directory, raise_error=True)
profielgroep is empty (!) INFO:hydamo_validation.validator:finished in 3.58 seconds
MIT
notebooks/test_profielen.ipynb
d2hydro/HyDAMOValidatieModule
We look at the summary of the result.
result_summary.to_dict()
_____no_output_____
MIT
notebooks/test_profielen.ipynb
d2hydro/HyDAMOValidatieModule
Phase 3 - deployment

This notebook provides an overview of how to deploy the model and predict the CPE in two ways.
- The model was built/exported in the last notebook (Phase_2_Advanced_Analytics__predictions). This notebook shows another option to save/export the model using the H2O Flow UI and complements that information with deployment for predictions.

The predictions will be presented in 2 ways:
- Batch process
- Online / real-time predictions

Export model: export the GBM model (best performance) using the H2O Flow UI as detailed below
from IPython.display import Image
Image(filename='./data/H2O-FLOW-UI-GBM-MODEL.PNG')

from IPython.display import Image
Image(filename='./data/H2O-FLOW-UI-GBM-MODEL-download.PNG')
_____no_output_____
MIT
Marketing_Campaign_optimization/bin/Phase_3_Deployment_options.ipynb
ThiagoBarsante/DataScience_projects
Sample of new campaigns to be predicted
import pandas as pd

df = pd.read_csv('./GBM_MODEL/New_campaings_for_predictions.csv')
df.tail(10)
_____no_output_____
MIT
Marketing_Campaign_optimization/bin/Phase_3_Deployment_options.ipynb
ThiagoBarsante/DataScience_projects
Important attention point
- All information will be provided for prediction (the base information available in the simulated/demo data); however, only the relevant features were used during the model build, as detailed in the notebook Phase_2_Advanced_Analytics__predictions.
- For example, LineItemsID is just an index number, does not provide relevant information and is not used for prediction.

Batch prediction: generate predictions for new data

To execute the prediction as presented below it is not necessary to have a running H2O cluster. The process shown below was executed in 2 steps to show the process in detail, but in a production environment it should be executed in just one step.

Simulation in 2 steps:
- Step 1. Batch process to run the Java program
- Step 2. Python program to link the new data and the predictions with the CPE

Any programming language can be used to run the prediction and get the results (such as R, Python, Java, C, ...); a Python sketch is given below, followed by the Java command used here.

Run the batch Java process to generate/score the CPE predictions
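As noted above, any language can drive the exported MOJO. A minimal Python sketch, assuming the `h2o` pip package is installed and exposes the offline scoring helper `h2o.mojo_predict_pandas` (no running H2O cluster needed); the file locations under `./GBM_MODEL/` are assumptions based on the surrounding cells:

```python
# Offline MOJO scoring from Python, mirroring the java PredictCsv call shown below.
import h2o
import pandas as pd

new_campaigns = pd.read_csv('./GBM_MODEL/New_campaings_for_predictions.csv')

predictions = h2o.mojo_predict_pandas(
    dataframe=new_campaigns,
    mojo_zip_path='./GBM_MODEL/GBM_log_CPE_model.zip',       # assumed location of the exported MOJO
    genmodel_jar_path='./GBM_MODEL/h2o-genmodel.jar',         # assumed location of the genmodel jar
)
print(predictions.head())
```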
## To generate prediction (CPE) for new data just run the command
## EXAMPLE
## java -Xmx4g -XX:ReservedCodeCacheSize=256m -cp <h2o-genmodel.jar_EXPORTED_ABOVE> hex.genmodel.tools.PredictCsv --mojo <GBM_log_CPE_model.zip_EXPORTED_ABOVE> --input INPUT_FILE_FOR_PREDICTION.csv --output OUTUPUT_FILE_WITH_PREDICTIONS_FOR_CPE__EXPORT_EXPORT_PREDICTIONS.csv --decimal

## REAL PREDICTION
## java -Xmx4g -XX:ReservedCodeCacheSize=256m -cp h2o-genmodel.jar hex.genmodel.tools.PredictCsv --mojo GBM_log_CPE_model.zip --input New_campaings_for_predictions.csv --output New_campaings_for_predictions__EXPORT_EXPORT_PREDICTIONS.csv --decimal

from IPython.display import Image
Image(filename='./data/Batch-prediction-h2o.PNG')
_____no_output_____
MIT
Marketing_Campaign_optimization/bin/Phase_3_Deployment_options.ipynb
ThiagoBarsante/DataScience_projects
Synchronize all information - new campaign data and the new CPE predictions
- Remember that the prediction was done on a logarithmic scale, so the result must now be reversed with the exponential function.
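A quick sanity check of the inverse transform, assuming (as the code below implies) that the model was trained on log(1 + CPE):

```python
# If the target was y = log(1 + CPE), then CPE = exp(y) - 1 recovers the original scale.
import numpy as np

cpe = 12.345
assert np.isclose(np.exp(np.log(1 + cpe)) - 1, cpe)
```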
CPE_predictions = pd.read_csv('./GBM_MODEL/New_campaings_for_predictions__EXPORT_EXPORT_PREDICTIONS.csv')
CPE_predictions.tail()

import numpy as np
df['CPE_predition_LOG'] = CPE_predictions['predict']
df['CPE_predition'] = round(np.exp(CPE_predictions['predict']) -1, 3)
df.tail()
_____no_output_____
MIT
Marketing_Campaign_optimization/bin/Phase_3_Deployment_options.ipynb
ThiagoBarsante/DataScience_projects
Online prediction: Generate predictions for new data The online prediction could be implemented using different architectures, such as 1. Serverless function such as Amazon AWS Lambda + API Gateway https://aws.amazon.com/lambda/?nc2=h_ql_prod_fs_lbd 2. Java program that uses the POJO/MOJO model for online prediction http://docs.h2o.ai/h2o/latest-stable/h2o-docs/productionizing.html#step-2-compile-and-run-the-mojo 3. Microservices architecture using Docker (Python + Flask app + NGINX for load balancing) It could be implemented as an on-premise solution or even using cloud solutions such as container orchestration with GKE (Google Kubernetes Engine) https://cloud.google.com/kubernetes-engine/ The solution presented below shows the prediction done through one JSON payload passed to the URL &emsp; This API could be deployed with any of the 3 options detailed above (a minimal illustrative sketch of such an endpoint is included after this example)
from IPython.display import Image Image(filename='./data/Online-Prediction.PNG')
_____no_output_____
MIT
Marketing_Campaign_optimization/bin/Phase_3_Deployment_options.ipynb
ThiagoBarsante/DataScience_projects
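To make option 3 above (a Python + Flask microservice) more concrete, here is a minimal illustrative sketch of a JSON-over-HTTP prediction endpoint. It is not the service shown in the screenshot; `score_cpe` is a hypothetical placeholder for the real scoring backend (for example the exported MOJO scored through the h2o-genmodel tooling), and only the log-scale reversal with `np.exp(...) - 1` mirrors the notebook above.
```
# Illustrative sketch only - a hedged outline of a Flask prediction endpoint.
# `score_cpe` is a hypothetical placeholder for whatever scoring mechanism is chosen.
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

def score_cpe(features: dict) -> float:
    """Placeholder: return the model prediction (log scale) for one campaign."""
    raise NotImplementedError("plug in the MOJO / H2O scoring call here")

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()                # one campaign as a JSON object
    cpe_log = score_cpe(payload)                # prediction on the logarithmic scale
    cpe = round(float(np.exp(cpe_log) - 1), 3)  # reverse the log transform, as done above
    return jsonify({"CPE_prediction_LOG": cpe_log, "CPE_prediction": cpe})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```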
Comprehensive Example
# Enabling the `widget` backend. # This requires jupyter-matplotlib a.k.a. ipympl. # ipympl can be install via pip or conda. %matplotlib widget import matplotlib.pyplot as plt import numpy as np # Testing matplotlib interactions with a simple plot fig = plt.figure() plt.plot(np.sin(np.linspace(0, 20, 100))); # Always hide the toolbar fig.canvas.toolbar_visible = False # Put it back to its default fig.canvas.toolbar_visible = 'fade-in-fade-out' # Change the toolbar position fig.canvas.toolbar_position = 'top' # Hide the Figure name at the top of the figure fig.canvas.header_visible = False # Hide the footer fig.canvas.footer_visible = False # Disable the resizing feature fig.canvas.resizable = False # If true then scrolling while the mouse is over the canvas will not move the entire notebook fig.canvas.capture_scroll = True
_____no_output_____
BSD-3-Clause
docs/examples/full-example.ipynb
martinRenou/jupyter-matplotlib
You can also call `display` on `fig.canvas` to display the interactive plot anywhere in the notebook
fig.canvas.toolbar_visible = True display(fig.canvas)
_____no_output_____
BSD-3-Clause
docs/examples/full-example.ipynb
martinRenou/jupyter-matplotlib
Or you can `display(fig)` to embed the current plot as a png
display(fig)
_____no_output_____
BSD-3-Clause
docs/examples/full-example.ipynb
martinRenou/jupyter-matplotlib
3D plotting
from mpl_toolkits.mplot3d import axes3d fig = plt.figure() ax = fig.add_subplot(111, projection='3d') # Grab some test data. X, Y, Z = axes3d.get_test_data(0.05) # Plot a basic wireframe. ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10) plt.show()
_____no_output_____
BSD-3-Clause
docs/examples/full-example.ipynb
martinRenou/jupyter-matplotlib
Subplots
# A more complex example from the matplotlib gallery np.random.seed(0) n_bins = 10 x = np.random.randn(1000, 3) fig, axes = plt.subplots(nrows=2, ncols=2) ax0, ax1, ax2, ax3 = axes.flatten() colors = ['red', 'tan', 'lime'] ax0.hist(x, n_bins, density=1, histtype='bar', color=colors, label=colors) ax0.legend(prop={'size': 10}) ax0.set_title('bars with legend') ax1.hist(x, n_bins, density=1, histtype='bar', stacked=True) ax1.set_title('stacked bar') ax2.hist(x, n_bins, histtype='step', stacked=True, fill=False) ax2.set_title('stack step (unfilled)') # Make a multiple-histogram of data-sets with different length. x_multi = [np.random.randn(n) for n in [10000, 5000, 2000]] ax3.hist(x_multi, n_bins, histtype='bar') ax3.set_title('different sample sizes') fig.tight_layout() plt.show() fig.canvas.toolbar_position = 'right' fig.canvas.toolbar_visible = False
_____no_output_____
BSD-3-Clause
docs/examples/full-example.ipynb
martinRenou/jupyter-matplotlib
Interactions with other widgets and layouting When you want to embed the figure into a layout of other widgets you should call `plt.ioff()` before creating the figure, otherwise `plt.figure()` will trigger a display of the canvas automatically and outside of your layout. Without using `ioff` Here we will end up with the figure being displayed twice. The button won't do anything; it is just placed as an example of layouting.
import ipywidgets as widgets # ensure we are interactive mode # this is default but if this notebook is executed out of order it may have been turned off plt.ion() fig = plt.figure() ax = fig.gca() ax.imshow(Z) widgets.AppLayout( center=fig.canvas, footer=widgets.Button(icon='check'), pane_heights=[0, 6, 1] )
_____no_output_____
BSD-3-Clause
docs/examples/full-example.ipynb
martinRenou/jupyter-matplotlib
Fixing the double display with `ioff` If we make sure interactive mode is off when we create the figure, then the figure will only display where we want it to. There is ongoing work to allow usage of `ioff` as a context manager, see the [ipympl issue](https://github.com/matplotlib/ipympl/issues/220) and the [matplotlib issue](https://github.com/matplotlib/matplotlib/issues/17013) (a short sketch of the context-manager form, for newer matplotlib versions, is included after this example)
plt.ioff() fig = plt.figure() plt.ion() ax = fig.gca() ax.imshow(Z) widgets.AppLayout( center=fig.canvas, footer=widgets.Button(icon='check'), pane_heights=[0, 6, 1] )
_____no_output_____
BSD-3-Clause
docs/examples/full-example.ipynb
martinRenou/jupyter-matplotlib
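As a side note not covered by the notebook above: in newer matplotlib versions (3.4 and later, an assumption to check against your installed version), `plt.ioff()` can be used directly as a context manager, which avoids the explicit off/on toggling shown above. A minimal sketch:
```
# Sketch assuming matplotlib >= 3.4, where plt.ioff() also works as a context manager;
# the figure created inside the block is not auto-displayed, so it only appears
# where the canvas widget is placed in the layout.
import matplotlib.pyplot as plt
import ipywidgets as widgets

with plt.ioff():
    fig_cm = plt.figure()

ax = fig_cm.gca()
ax.plot([0, 1, 2, 3], [0, 1, 4, 9])

widgets.AppLayout(
    center=fig_cm.canvas,
    footer=widgets.Button(icon='check'),
    pane_heights=[0, 6, 1]
)
```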
Interacting with other widgets Changing a line plot with a slider
# When using the `widget` backend from ipympl, # fig.canvas is a proper Jupyter interactive widget, which can be embedded in # an ipywidgets layout. See https://ipywidgets.readthedocs.io/en/stable/examples/Layout%20Templates.html # One can bound figure attributes to other widget values. from ipywidgets import AppLayout, FloatSlider plt.ioff() slider = FloatSlider( orientation='horizontal', description='Factor:', value=1.0, min=0.02, max=2.0 ) slider.layout.margin = '0px 30% 0px 30%' slider.layout.width = '40%' fig = plt.figure() fig.canvas.header_visible = False fig.canvas.layout.min_height = '400px' plt.title('Plotting: y=sin({} * x)'.format(slider.value)) x = np.linspace(0, 20, 500) lines = plt.plot(x, np.sin(slider.value * x)) def update_lines(change): plt.title('Plotting: y=sin({} * x)'.format(change.new)) lines[0].set_data(x, np.sin(change.new * x)) fig.canvas.draw() fig.canvas.flush_events() slider.observe(update_lines, names='value') AppLayout( center=fig.canvas, footer=slider, pane_heights=[0, 6, 1] )
_____no_output_____
BSD-3-Clause
docs/examples/full-example.ipynb
martinRenou/jupyter-matplotlib
Update image data in a performant manner Two useful tricks to improve performance when updating an image displayed with matplotlib are to: 1. Use the `set_data` method instead of calling imshow 2. Precompute and then index the array
# precomputing all images x = np.linspace(0,np.pi,200) y = np.linspace(0,10,200) X,Y = np.meshgrid(x,y) parameter = np.linspace(-5,5) example_image_stack = np.sin(X)[None,:,:]+np.exp(np.cos(Y[None,:,:]*parameter[:,None,None])) plt.ioff() fig = plt.figure() plt.ion() im = plt.imshow(example_image_stack[0]) def update(change): im.set_data(example_image_stack[change['new']]) fig.canvas.draw_idle() slider = widgets.IntSlider(value=0, min=0, max=len(parameter)-1) slider.observe(update, names='value') widgets.VBox([slider, fig.canvas])
_____no_output_____
BSD-3-Clause
docs/examples/full-example.ipynb
martinRenou/jupyter-matplotlib
Debugging widget updates and matplotlib callbacks If an error is raised in the `update` function then it will not always display in the notebook, which can make debugging difficult. This same issue is also true for matplotlib callbacks on user events such as mouse movement, for example see [issue](https://github.com/matplotlib/ipympl/issues/116). There are two ways to see the output: 1. In JupyterLab the output will show up in the Log Console (View > Show Log Console) 2. Using `ipywidgets.Output` Here is an example of using an `Output` to capture errors in the update function from the previous example. To induce errors we changed the slider limits so that out-of-bounds errors will occur: From: `slider = widgets.IntSlider(value=0, min=0, max=len(parameter)-1)` To: `slider = widgets.IntSlider(value=0, min=0, max=len(parameter)+10)` If you move the slider all the way to the right you should see errors from the Output widget
plt.ioff() fig = plt.figure() plt.ion() im = plt.imshow(example_image_stack[0]) out = widgets.Output() @out.capture() def update(change): with out: if change['name'] == 'value': im.set_data(example_image_stack[change['new']]) fig.canvas.draw_idle slider = widgets.IntSlider(value=0, min=0, max=len(parameter)+10) slider.observe(update) display(widgets.VBox([slider, fig.canvas])) display(out)
_____no_output_____
BSD-3-Clause
docs/examples/full-example.ipynb
martinRenou/jupyter-matplotlib
Lambda School Data Science*Unit 2, Sprint 3, Module 1*--- Wrangle ML datasets- [ ] Continue to clean and explore your data. - [ ] For the evaluation metric you chose, what score would you get just by guessing?- [ ] Can you make a fast, first model that beats guessing?**We recommend that you use your portfolio project dataset for all assignments this sprint.****But if you aren't ready yet, or you want more practice, then use the New York City property sales dataset for today's assignment.** Follow the instructions below, to just keep a subset for the Tribeca neighborhood, and remove outliers or dirty data. [Here's a video walkthrough](https://youtu.be/pPWFw8UtBVg?t=584) you can refer to if you get stuck or want hints!- Data Source: [NYC OpenData: NYC Citywide Rolling Calendar Sales](https://data.cityofnewyork.us/dataset/NYC-Citywide-Rolling-Calendar-Sales/usep-8jbt)- Glossary: [NYC Department of Finance: Rolling Sales Data](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) Your code starts here:
!wget 'https://raw.githubusercontent.com/washingtonpost/data-school-shootings/master/school-shootings-data.csv' import pandas as pd df = pd.read_csv('school-shootings-data.csv') print(df.shape) df.head() # Replace shooting type with 'other' for rows not 'targeted' or 'indiscriminate' df['shooting_type'] = df['shooting_type'].replace(['accidental', 'unclear', 'targeted and indiscriminate', 'public suicide', 'hostage suicide', 'accidental or targeted', 'public suicide (attempted)'], 'other') # Fill missing value with 'other' df['shooting_type'] = df['shooting_type'].fillna('other') # Majority class baseline 59% df['shooting_type'].value_counts(normalize=True) from sklearn.model_selection import train_test_split # Create train, test train, test = train_test_split(df, train_size=0.80, random_state=21, stratify=df['shooting_type']) train.shape, test.shape def wrangle(df): # Avoid SettingWithCopyWarning df = df.copy() # Remove commas from numbers df['white'] = df['white'].str.replace(",", "") # Change from object to int df['white'] = pd.to_numeric(df['white']) # Remove commas from numbers df['enrollment'] = df['enrollment'].str.replace(",", "") # Change from object to int df['enrollment'] = pd.to_numeric(df['enrollment']) # Fill missing values for these specific columns df.fillna({'white': 0, 'black': 0, 'hispanic': 0, 'asian': 0, 'american_indian_alaska_native': 0, 'hawaiian_native_pacific_islander': 0, 'two_or_more': 0, 'district_name': 'Unknown', 'time': '12:00 PM', 'lat': 33.612910, 'long': -86.682000, 'staffing': 60.42, 'low_grade': '9', 'high_grade': '12'}, inplace=True) # Drop columns with 200+ missing values df = df.drop(columns=['deceased_notes1', 'age_shooter2', 'gender_shooter2', 'race_ethnicity_shooter2', 'shooter_relationship2', 'shooter_deceased2', 'deceased_notes2']) # Drop unusable variance df = df.drop(columns=['uid', 'nces_school_id', 'nces_district_id', 'weapon', 'weapon_source', 'state_fips', 'county_fips', 'ulocale', 'lunch', 'age_shooter1', 'gender_shooter1', 'race_ethnicity_shooter1', 'shooter_relationship1', 'shooter_deceased1']) # Change date to datettime df['date'] = pd.to_datetime(df['date']) return df train = wrangle(train) test = wrangle(test) train.shape, test.shape !pip install category_encoders==2.* import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.model_selection import cross_val_score from sklearn.pipeline import make_pipeline from sklearn.ensemble import RandomForestClassifier from sklearn.preprocessing import StandardScaler from sklearn.feature_selection import f_classif, SelectKBest from sklearn.linear_model import Ridge target = 'shooting_type' features = train.columns.drop([target, 'date']) X_train = train[features] y_train = train[target] X_test = test[features] y_test = test[target] pipeline = make_pipeline( ce.OrdinalEncoder(), StandardScaler(), RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=21) ) k = 20 scores = cross_val_score(pipeline, X_train, y_train, cv=k) print(f'MAE for {k} folds:', scores) scores.mean() from sklearn.tree import DecisionTreeClassifier target = 'shooting_type' features = train.columns.drop([target, 'date', ]) X_train = train[features] y_train = train[target] X_test = test[features] y_test = test[target] pipeline = make_pipeline( ce.OrdinalEncoder(), DecisionTreeClassifier(max_depth=3) ) pipeline.fit(X_train, y_train) print('Test Accuracy:', pipeline.score(X_test, y_test))
Test Accuracy: 0.5416666666666666
MIT
module2-wrangle-ml-datasets/LS_DS12_232_assignment.ipynb
jdz014/DS-Unit-2-Applied-Modeling
Interactive single compartment HH example To run this interactive Jupyter Notebook, please click on the rocket icon 🚀 in the top panel. For more information, please see {ref}`how to use this documentation `. Please uncomment the line below if you use Google Colab. (It does not include these packages by default.)
#%pip install pyneuroml neuromllite NEURON import math from neuroml import NeuroMLDocument from neuroml import Cell from neuroml import IonChannelHH from neuroml import GateHHRates from neuroml import BiophysicalProperties from neuroml import MembraneProperties from neuroml import ChannelDensity from neuroml import HHRate from neuroml import SpikeThresh from neuroml import SpecificCapacitance from neuroml import InitMembPotential from neuroml import IntracellularProperties from neuroml import IncludeType from neuroml import Resistivity from neuroml import Morphology, Segment, Point3DWithDiam from neuroml import Network, Population from neuroml import PulseGenerator, ExplicitInput import numpy as np from pyneuroml import pynml from pyneuroml.lems import LEMSSimulation
_____no_output_____
CC-BY-4.0
source/Userdocs/NML2_examples/HH_single_compartment.ipynb
NeuroML/Documentation
Declare the model Create ion channels
def create_na_channel(): """Create the Na channel. This will create the Na channel and save it to a file. It will also validate this file. returns: name of the created file """ na_channel = IonChannelHH(id="na_channel", notes="Sodium channel for HH cell", conductance="10pS", species="na") gate_m = GateHHRates(id="na_m", instances="3", notes="m gate for na channel") m_forward_rate = HHRate(type="HHExpLinearRate", rate="1per_ms", midpoint="-40mV", scale="10mV") m_reverse_rate = HHRate(type="HHExpRate", rate="4per_ms", midpoint="-65mV", scale="-18mV") gate_m.forward_rate = m_forward_rate gate_m.reverse_rate = m_reverse_rate na_channel.gate_hh_rates.append(gate_m) gate_h = GateHHRates(id="na_h", instances="1", notes="h gate for na channel") h_forward_rate = HHRate(type="HHExpRate", rate="0.07per_ms", midpoint="-65mV", scale="-20mV") h_reverse_rate = HHRate(type="HHSigmoidRate", rate="1per_ms", midpoint="-35mV", scale="10mV") gate_h.forward_rate = h_forward_rate gate_h.reverse_rate = h_reverse_rate na_channel.gate_hh_rates.append(gate_h) na_channel_doc = NeuroMLDocument(id="na_channel", notes="Na channel for HH neuron") na_channel_fn = "HH_example_na_channel.nml" na_channel_doc.ion_channel_hhs.append(na_channel) pynml.write_neuroml2_file(nml2_doc=na_channel_doc, nml2_file_name=na_channel_fn, validate=True) return na_channel_fn def create_k_channel(): """Create the K channel This will create the K channel and save it to a file. It will also validate this file. :returns: name of the K channel file """ k_channel = IonChannelHH(id="k_channel", notes="Potassium channel for HH cell", conductance="10pS", species="k") gate_n = GateHHRates(id="k_n", instances="4", notes="n gate for k channel") n_forward_rate = HHRate(type="HHExpLinearRate", rate="0.1per_ms", midpoint="-55mV", scale="10mV") n_reverse_rate = HHRate(type="HHExpRate", rate="0.125per_ms", midpoint="-65mV", scale="-80mV") gate_n.forward_rate = n_forward_rate gate_n.reverse_rate = n_reverse_rate k_channel.gate_hh_rates.append(gate_n) k_channel_doc = NeuroMLDocument(id="k_channel", notes="k channel for HH neuron") k_channel_fn = "HH_example_k_channel.nml" k_channel_doc.ion_channel_hhs.append(k_channel) pynml.write_neuroml2_file(nml2_doc=k_channel_doc, nml2_file_name=k_channel_fn, validate=True) return k_channel_fn def create_leak_channel(): """Create a leak channel This will create the leak channel and save it to a file. It will also validate this file. :returns: name of leak channel nml file """ leak_channel = IonChannelHH(id="leak_channel", conductance="10pS", notes="Leak conductance") leak_channel_doc = NeuroMLDocument(id="leak_channel", notes="leak channel for HH neuron") leak_channel_fn = "HH_example_leak_channel.nml" leak_channel_doc.ion_channel_hhs.append(leak_channel) pynml.write_neuroml2_file(nml2_doc=leak_channel_doc, nml2_file_name=leak_channel_fn, validate=True) return leak_channel_fn
_____no_output_____
CC-BY-4.0
source/Userdocs/NML2_examples/HH_single_compartment.ipynb
NeuroML/Documentation
Create cell
def create_cell(): """Create the cell. :returns: name of the cell nml file """ # Create the nml file and add the ion channels hh_cell_doc = NeuroMLDocument(id="cell", notes="HH cell") hh_cell_fn = "HH_example_cell.nml" hh_cell_doc.includes.append(IncludeType(href=create_na_channel())) hh_cell_doc.includes.append(IncludeType(href=create_k_channel())) hh_cell_doc.includes.append(IncludeType(href=create_leak_channel())) # Define a cell hh_cell = Cell(id="hh_cell", notes="A single compartment HH cell") # Define its biophysical properties bio_prop = BiophysicalProperties(id="hh_b_prop") # notes="Biophysical properties for HH cell") # Membrane properties are a type of biophysical properties mem_prop = MembraneProperties() # Add membrane properties to the biophysical properties bio_prop.membrane_properties = mem_prop # Append to cell hh_cell.biophysical_properties = bio_prop # Channel density for Na channel na_channel_density = ChannelDensity(id="na_channels", cond_density="120.0 mS_per_cm2", erev="50.0 mV", ion="na", ion_channel="na_channel") mem_prop.channel_densities.append(na_channel_density) # Channel density for k channel k_channel_density = ChannelDensity(id="k_channels", cond_density="360 S_per_m2", erev="-77mV", ion="k", ion_channel="k_channel") mem_prop.channel_densities.append(k_channel_density) # Leak channel leak_channel_density = ChannelDensity(id="leak_channels", cond_density="3.0 S_per_m2", erev="-54.3mV", ion="non_specific", ion_channel="leak_channel") mem_prop.channel_densities.append(leak_channel_density) # Other membrane properties mem_prop.spike_threshes.append(SpikeThresh(value="-20mV")) mem_prop.specific_capacitances.append(SpecificCapacitance(value="1.0 uF_per_cm2")) mem_prop.init_memb_potentials.append(InitMembPotential(value="-65mV")) intra_prop = IntracellularProperties() intra_prop.resistivities.append(Resistivity(value="0.03 kohm_cm")) # Add to biological properties bio_prop.intracellular_properties = intra_prop # Morphology morph = Morphology(id="hh_cell_morph") # notes="Simple morphology for the HH cell") seg = Segment(id="0", name="soma", notes="Soma segment") # We want a diameter such that area is 1000 micro meter^2 # surface area of a sphere is 4pi r^2 = 4pi diam^2 diam = math.sqrt(1000 / math.pi) proximal = distal = Point3DWithDiam(x="0", y="0", z="0", diameter=str(diam)) seg.proximal = proximal seg.distal = distal morph.segments.append(seg) hh_cell.morphology = morph hh_cell_doc.cells.append(hh_cell) pynml.write_neuroml2_file(nml2_doc=hh_cell_doc, nml2_file_name=hh_cell_fn, validate=True) return hh_cell_fn
_____no_output_____
CC-BY-4.0
source/Userdocs/NML2_examples/HH_single_compartment.ipynb
NeuroML/Documentation
Create a network
def create_network(): """Create the network :returns: name of network nml file """ net_doc = NeuroMLDocument(id="network", notes="HH cell network") net_doc_fn = "HH_example_net.nml" net_doc.includes.append(IncludeType(href=create_cell())) # Create a population: convenient to create many cells of the same type pop = Population(id="pop0", notes="A population for our cell", component="hh_cell", size=1) # Input pulsegen = PulseGenerator(id="pg", notes="Simple pulse generator", delay="100ms", duration="100ms", amplitude="0.08nA") exp_input = ExplicitInput(target="pop0[0]", input="pg") net = Network(id="single_hh_cell_network", note="A network with a single population") net_doc.pulse_generators.append(pulsegen) net.explicit_inputs.append(exp_input) net.populations.append(pop) net_doc.networks.append(net) pynml.write_neuroml2_file(nml2_doc=net_doc, nml2_file_name=net_doc_fn, validate=True) return net_doc_fn
_____no_output_____
CC-BY-4.0
source/Userdocs/NML2_examples/HH_single_compartment.ipynb
NeuroML/Documentation
Plot the data we record
def plot_data(sim_id): """Plot the sim data. Load the data from the file and plot the graph for the membrane potential using the pynml generate_plot utility function. :sim_id: ID of simulaton """ data_array = np.loadtxt(sim_id + ".dat") pynml.generate_plot([data_array[:, 0]], [data_array[:, 1]], "Membrane potential", show_plot_already=False, save_figure_to=sim_id + "-v.png", xaxis="time (s)", yaxis="membrane potential (V)") pynml.generate_plot([data_array[:, 0]], [data_array[:, 2]], "channel current", show_plot_already=False, save_figure_to=sim_id + "-i.png", xaxis="time (s)", yaxis="channel current (A)") pynml.generate_plot([data_array[:, 0], data_array[:, 0]], [data_array[:, 3], data_array[:, 4]], "current density", labels=["Na", "K"], show_plot_already=False, save_figure_to=sim_id + "-iden.png", xaxis="time (s)", yaxis="current density (A_per_m2)")
_____no_output_____
CC-BY-4.0
source/Userdocs/NML2_examples/HH_single_compartment.ipynb
NeuroML/Documentation
Create and run the simulation Create the simulation, run it, record data, and plot the recorded information.
def main(): """Main function Include the NeuroML model into a LEMS simulation file, run it, plot some data. """ # Simulation bits sim_id = "HH_single_compartment_example_sim" simulation = LEMSSimulation(sim_id=sim_id, duration=300, dt=0.01, simulation_seed=123) # Include the NeuroML model file simulation.include_neuroml2_file(create_network()) # Assign target for the simulation simulation.assign_simulation_target("single_hh_cell_network") # Recording information from the simulation simulation.create_output_file(id="output0", file_name=sim_id + ".dat") simulation.add_column_to_output_file("output0", column_id="pop0[0]/v", quantity="pop0[0]/v") simulation.add_column_to_output_file("output0", column_id="pop0[0]/iChannels", quantity="pop0[0]/iChannels") simulation.add_column_to_output_file("output0", column_id="pop0[0]/na/iDensity", quantity="pop0[0]/hh_b_prop/membraneProperties/na_channels/iDensity/") simulation.add_column_to_output_file("output0", column_id="pop0[0]/k/iDensity", quantity="pop0[0]/hh_b_prop/membraneProperties/k_channels/iDensity/") # Save LEMS simulation to file sim_file = simulation.save_to_file() # Run the simulation using the default jNeuroML simulator pynml.run_lems_with_jneuroml(sim_file, max_memory="2G", nogui=True, plot=False) # Plot the data plot_data(sim_id) if __name__ == "__main__": main()
pyNeuroML >>> Written LEMS Simulation HH_single_compartment_example_sim to file: LEMS_HH_single_compartment_example_sim.xml pyNeuroML >>> Generating plot: Membrane potential
CC-BY-4.0
source/Userdocs/NML2_examples/HH_single_compartment.ipynb
NeuroML/Documentation
Amazon SageMaker Feature Store: Encrypt Data in your Online or Offline Feature Store using KMS key This notebook demonstrates how to enable encryption for your data in your online or offline Feature Store using a KMS key. We start by showing how to programmatically create a KMS key, and how to apply it to the feature store creation process for data encryption. The last portion of this notebook demonstrates how to verify that your KMS key is being used to encrypt your data in your feature store. Overview 1. Create a KMS key. - How to create a KMS key programmatically using the KMS client from boto3? 2. Attach role to your KMS key. - Attach the required entries to your policy for data encryption in your feature store. 3. Create an online or offline feature store and apply it to your feature store creation process. - How to enable encryption for your online store? - How to enable encryption for your offline store? 4. How to verify that your data is encrypted in your online or offline store? Prerequisites This notebook uses both `boto3` and Python SDK libraries, and the `Python 3 (Data Science)` kernel. This notebook also works with Studio, Jupyter, and JupyterLab. Library Dependencies: * sagemaker>=2.0.0 * numpy * pandas
import sagemaker import sys import boto3 import pandas as pd import numpy as np import json original_version = sagemaker.__version__ %pip install 'sagemaker>=2.0.0'
_____no_output_____
Apache-2.0
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
Set up
sagemaker_session = sagemaker.Session() s3_bucket_name = sagemaker_session.default_bucket() prefix = "sagemaker-featurestore-kms-demo" role = sagemaker.get_execution_role() region = sagemaker_session.boto_region_name
_____no_output_____
Apache-2.0
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
Create a KMS client using boto3. Note that you can access your boto session through your sagemaker session, e.g., `sagemaker_session`.
kms = sagemaker_session.boto_session.client("kms")
_____no_output_____
Apache-2.0
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
KMS Policy Template Below is the policy template you will use for creating a KMS key. You will specify your role to grant it access to various KMS operations that will be used in the back-end for encrypting your data in your Online or Offline Feature Store. **Note**: You will need to substitute your Account number in for `123456789012` in the policy below for these lines: `arn:aws:cloudtrail:*:123456789012:trail/*` (a short sketch of looking the account number up programmatically is included after this example). It is important to understand that the policy below will grant admin privileges for Customer Managed Keys (CMK) around viewing and revoking grants, decrypt and encrypt permissions on CloudTrail, and full access permissions through Feature Store. Also, note that the Feature Store Service creates additional grants that are used for encryption purposes for your online store.
policy = { "Version": "2012-10-17", "Id": "key-policy-feature-store", "Statement": [ { "Sid": "Allow access through Amazon SageMaker Feature Store for all principals in the account that are authorized to use Amazon SageMaker Feature Store", "Effect": "Allow", "Principal": {"AWS": role}, "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant", "kms:RetireGrant", "kms:ReEncryptFrom", "kms:ReEncryptTo", "kms:GenerateDataKey", "kms:ListAliases", "kms:ListGrants", ], "Resource": ["*"], "Condition": {"StringLike": {"kms:ViaService": "sagemaker.*.amazonaws.com"}}, }, { "Sid": "Allow administrators to view the CMK and revoke grants", "Effect": "Allow", "Principal": {"AWS": [role]}, "Action": ["kms:Describe*", "kms:Get*", "kms:List*", "kms:RevokeGrant"], "Resource": ["*"], }, { "Sid": "Enable CloudTrail Encrypt Permissions", "Effect": "Allow", "Principal": {"Service": "cloudtrail.amazonaws.com", "AWS": [role]}, "Action": "kms:GenerateDataKey*", "Resource": "*", "Condition": { "StringLike": { "kms:EncryptionContext:aws:cloudtrail:arn": [ "arn:aws:cloudtrail:*:123456789012:trail/*", "arn:aws:cloudtrail:*:123456789012:trail/*", ] } }, }, { "Sid": "Enable CloudTrail log decrypt permissions", "Effect": "Allow", "Principal": {"AWS": [role]}, "Action": "kms:Decrypt", "Resource": ["*"], "Condition": {"Null": {"kms:EncryptionContext:aws:cloudtrail:arn": "false"}}, }, ], }
_____no_output_____
Apache-2.0
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
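Rather than hard-coding the account number, one option (an addition, not part of the original notebook) is to look it up with STS and substitute it into the CloudTrail ARN condition of the `policy` dictionary defined above. A minimal sketch:
```
# Sketch: resolve the current AWS account id and patch the CloudTrail ARN pattern
# in the policy above, instead of hard-coding 123456789012.
account_id = sagemaker_session.boto_session.client("sts").get_caller_identity()["Account"]
cloudtrail_arn_pattern = "arn:aws:cloudtrail:*:{}:trail/*".format(account_id)

for statement in policy["Statement"]:
    string_like = statement.get("Condition", {}).get("StringLike", {})
    if "kms:EncryptionContext:aws:cloudtrail:arn" in string_like:
        string_like["kms:EncryptionContext:aws:cloudtrail:arn"] = [cloudtrail_arn_pattern]

print(cloudtrail_arn_pattern)
```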
Create your new KMS key using the policy above and your KMS client.
try: new_kms_key = kms.create_key( Policy=json.dumps(policy), Description="string", KeyUsage="ENCRYPT_DECRYPT", CustomerMasterKeySpec="SYMMETRIC_DEFAULT", Origin="AWS_KMS", ) AliasName = "my-new-kms-key" ## provide a unique alias name kms.create_alias( AliasName="alias/" + AliasName, TargetKeyId=new_kms_key["KeyMetadata"]["KeyId"] ) print(new_kms_key) except Exception as e: print("Error {}".format(e))
_____no_output_____
Apache-2.0
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
Now that we have our KMS key created and the necessary operations added to our role, we load in our data.
customer_data = pd.read_csv("data/feature_store_introduction_customer.csv") orders_data = pd.read_csv("data/feature_store_introduction_orders.csv") customer_data.head() orders_data.head() customer_data.dtypes orders_data.dtypes
_____no_output_____
Apache-2.0
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
Creating Feature Groups We first start by creating feature group names for customer_data and orders_data. Following this, we create two Feature Groups, one for customer_data and another for orders_data.
from time import gmtime, strftime, sleep customers_feature_group_name = "customers-feature-group-" + strftime("%d-%H-%M-%S", gmtime()) orders_feature_group_name = "orders-feature-group-" + strftime("%d-%H-%M-%S", gmtime())
_____no_output_____
Apache-2.0
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
Instantiate a FeatureGroup object for customers_data and orders_data.
from sagemaker.feature_store.feature_group import FeatureGroup customers_feature_group = FeatureGroup( name=customers_feature_group_name, sagemaker_session=sagemaker_session ) orders_feature_group = FeatureGroup( name=orders_feature_group_name, sagemaker_session=sagemaker_session ) import time current_time_sec = int(round(time.time())) record_identifier_feature_name = "customer_id"
_____no_output_____
Apache-2.0
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
Append EventTime feature to your data frame. This parameter is required, and time stamps each data point.
customer_data["EventTime"] = pd.Series([current_time_sec] * len(customer_data), dtype="float64") orders_data["EventTime"] = pd.Series([current_time_sec] * len(orders_data), dtype="float64") customer_data.head() orders_data.head()
_____no_output_____
Apache-2.0
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
Load feature definitions to your feature group.
customers_feature_group.load_feature_definitions(data_frame=customer_data) orders_feature_group.load_feature_definitions(data_frame=orders_data)
_____no_output_____
Apache-2.0
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
How to create an Online or Offline Feature Store that uses your KMS key for encryption? Below we create two feature groups, `customers_feature_group` and `orders_feature_group` respectively, and explain how to use your KMS key to securely encrypt your data in your online or offline feature store. How to create an Online Feature store with your KMS key? To encrypt data in your online feature store, set `enable_online_store` to be `True` and specify your KMS key as parameter `online_store_kms_key_id`. You will need to substitute your Account number in `arn:aws:kms:us-east-1:123456789012:key/` replacing `123456789012` with your Account number.```customers_feature_group.create( s3_uri=f"s3://{s3_bucket_name}/{prefix}", record_identifier_name=record_identifier_feature_name, event_time_feature_name="EventTime", role_arn=role, enable_online_store=True, online_store_kms_key_id = 'arn:aws:kms:us-east-1:123456789012:key/'+ new_kms_key['KeyMetadata']['KeyId'])orders_feature_group.create( s3_uri=f"s3://{s3_bucket_name}/{prefix}", record_identifier_name=record_identifier_feature_name, event_time_feature_name="EventTime", role_arn=role, enable_online_store=True, online_store_kms_key_id = 'arn:aws:kms:us-east-1:123456789012:key/'+new_kms_key['KeyMetadata']['KeyId'])``` How to create an Offline Feature store with your KMS key? Similar to the above, set `enable_online_store` to be `False` and then specify your KMS key as parameter `offline_store_kms_key_id`. You will need to substitute your Account number in `arn:aws:kms:us-east-1:123456789012:key/` replacing `123456789012` with your Account number.```customers_feature_group.create( s3_uri=f"s3://{s3_bucket_name}/{prefix}", record_identifier_name=record_identifier_feature_name, event_time_feature_name="EventTime", role_arn=role, enable_online_store=False, offline_store_kms_key_id = 'arn:aws:kms:us-east-1:123456789012:key/'+ new_kms_key['KeyMetadata']['KeyId'])orders_feature_group.create( s3_uri=f"s3://{s3_bucket_name}/{prefix}", record_identifier_name=record_identifier_feature_name, event_time_feature_name="EventTime", role_arn=role, enable_online_store=False, offline_store_kms_key_id = 'arn:aws:kms:us-east-1:123456789012:key/'+new_kms_key['KeyMetadata']['KeyId'])``` For this example we create an online feature store that encrypts your data using your KMS key. **Note**: You will need to substitute your Account number in `arn:aws:kms:us-east-1:123456789012:key/` replacing `123456789012` with your Account number.
customers_feature_group.create( s3_uri=f"s3://{s3_bucket_name}/{prefix}", record_identifier_name=record_identifier_feature_name, event_time_feature_name="EventTime", role_arn=role, enable_online_store=False, offline_store_kms_key_id="arn:aws:kms:us-east-1:123456789012:key/" + new_kms_key["KeyMetadata"]["KeyId"], ) orders_feature_group.create( s3_uri=f"s3://{s3_bucket_name}/{prefix}", record_identifier_name=record_identifier_feature_name, event_time_feature_name="EventTime", role_arn=role, enable_online_store=False, offline_store_kms_key_id="arn:aws:kms:us-east-1:123456789012:key/" + new_kms_key["KeyMetadata"]["KeyId"], )
_____no_output_____
Apache-2.0
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
How to verify that your KMS key is being used to encrypt your data in your Online or Offline Feature Store? Online Store Verification To demonstrate that your data is being encrypted in your Online store, use your `kms` client from `boto3` to list the grants under your KMS key. It should show 'SageMakerFeatureStore-' and the name of the feature group you created, and should list these operations under Operations: `['Decrypt','Encrypt','GenerateDataKey','ReEncryptFrom','ReEncryptTo','CreateGrant','RetireGrant','DescribeKey']` An alternative way for you to check that your data is encrypted in your Online store is to check [Cloud Trails](https://console.aws.amazon.com/cloudtrail/) and navigate to your account name. Once here, under General details you should see that SSE-KMS encryption is enabled, with your AWS KMS key shown below it. Below is a screenshot showing this: ![Cloud Trails](images/cloud-trails.png) Offline Store Verification To verify that your data is being encrypted in your Offline store, you must navigate to your S3 bucket through the [Console](https://console.aws.amazon.com/s3/home?region=us-east-1) and then navigate to your prefix, offline store, feature group name and into the /data/ folder. Once here, select a parquet file, which is the file containing your feature group data. For this example, the directory path in S3 was this: `Amazon S3/MYBUCKET/PREFIX/123456789012/sagemaker/region/offline-store/customers-feature-group-23-22-44-47/data/year=2021/month=03/day=23/hour=22/20210323T224448Z_IdfObJjhpqLQ5rmG.parquet.` After selecting the parquet file, navigate to Server-side encryption settings. It should mention that Default encryption is enabled and reference (SSE-KMS) under server-side encryption. If this shows, then your data is being encrypted in the offline store. Below is a screenshot of what this should look like in the console: ![Feature Store Policy](images/s3-sse-enabled.png) For this example since we created a secure Online store using our KMS key, below we use `list_grants` to check that our feature group and required grants are present under operations. (A short programmatic check of an offline-store object's encryption settings is sketched after this example.)
kms.list_grants( KeyId="arn:aws:kms:us-east-1:123456789012:key/" + new_kms_key["KeyMetadata"]["KeyId"] )
_____no_output_____
Apache-2.0
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
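As a programmatic complement to the console walkthrough above, the offline-store objects can also be checked with the S3 API. This is a sketch (an addition, not part of the original notebook): the offline-store prefix layout follows the example path shown above, and `head_object` should report `aws:kms` encryption with your key once encrypted offline-store files have been written.
```
# Sketch: inspect the server-side encryption settings of one offline-store parquet object.
# The prefix layout (prefix/account/sagemaker/region/offline-store/) follows the example
# path shown above; adjust it if your offline store was written elsewhere.
s3 = sagemaker_session.boto_session.client("s3")
account_id = sagemaker_session.boto_session.client("sts").get_caller_identity()["Account"]

offline_prefix = f"{prefix}/{account_id}/sagemaker/{region}/offline-store/"
listing = s3.list_objects_v2(Bucket=s3_bucket_name, Prefix=offline_prefix)

for obj in listing.get("Contents", [])[:1]:
    head = s3.head_object(Bucket=s3_bucket_name, Key=obj["Key"])
    print(obj["Key"])
    print("ServerSideEncryption:", head.get("ServerSideEncryption"))  # "aws:kms" expected
    print("SSEKMSKeyId:", head.get("SSEKMSKeyId"))
```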
Clean Up Resources Remove the Feature Groups we created.
customers_feature_group.delete() orders_feature_group.delete() # preserve original sagemaker version %pip install 'sagemaker=={}'.format(original_version)
_____no_output_____
Apache-2.0
sagemaker-featurestore/feature_store_kms_key_encryption.ipynb
Amirosimani/amazon-sagemaker-examples
Hyperparameter tuning In the previous section, we did not discuss the parameters of random forest and gradient-boosting. However, there are a couple of things to keep in mind when setting these. This notebook gives crucial information regarding how to set the hyperparameters of both random forest and gradient boosting decision tree models. Caution! For the sake of clarity, no cross-validation will be used to estimate the testing error. We are only showing the effect of the parameters on the validation set of what should be the inner cross-validation. Random forest The main parameter to tune for random forest is the `n_estimators` parameter. In general, the more trees in the forest, the better the generalization performance will be. However, it will slow down the fitting and prediction time. The goal is to balance computing time and generalization performance when setting the number of estimators, especially when putting such a learner into production. The `max_depth` parameter could also be tuned. Sometimes, there is no need to have fully grown trees. However, be aware that with random forest, trees are generally deep since we are seeking to overfit the learners on the bootstrap samples because this will be mitigated by combining them. Assembling underfitted trees (i.e. shallow trees) might also lead to an underfitted forest.
from sklearn.datasets import fetch_california_housing from sklearn.model_selection import train_test_split data, target = fetch_california_housing(return_X_y=True, as_frame=True) target *= 100 # rescale the target in k$ data_train, data_test, target_train, target_test = train_test_split( data, target, random_state=0) import pandas as pd from sklearn.model_selection import GridSearchCV from sklearn.ensemble import RandomForestRegressor param_grid = { "n_estimators": [10, 20, 30], "max_depth": [3, 5, None], } grid_search = GridSearchCV( RandomForestRegressor(n_jobs=2), param_grid=param_grid, scoring="neg_mean_absolute_error", n_jobs=2, ) grid_search.fit(data_train, target_train) columns = [f"param_{name}" for name in param_grid.keys()] columns += ["mean_test_score", "rank_test_score"] cv_results = pd.DataFrame(grid_search.cv_results_) cv_results["mean_test_score"] = -cv_results["mean_test_score"] cv_results[columns].sort_values(by="rank_test_score")
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hyperparameters.ipynb
lesteve/scikit-learn-mooc
We can observe that in our grid-search, the largest `max_depth` together with the largest `n_estimators` led to the best generalization performance. Gradient-boosting decision trees For gradient-boosting, parameters are coupled, so we cannot set the parameters one after the other anymore. The important parameters are `n_estimators`, `max_depth`, and `learning_rate`. Let's first discuss the `max_depth` parameter. We saw in the section on gradient-boosting that the algorithm fits the error of the previous tree in the ensemble. Thus, fitting fully grown trees will be detrimental. Indeed, the first tree of the ensemble would perfectly fit (overfit) the data and thus no subsequent tree would be required, since there would be no residuals. Therefore, the tree used in gradient-boosting should have a low depth, typically between 3 and 8 levels. Having very weak learners at each step will help reduce overfitting. With this consideration in mind, the deeper the trees, the faster the residuals will be corrected and fewer learners are required. Therefore, `n_estimators` should be increased if `max_depth` is lower. Finally, we have overlooked the impact of the `learning_rate` parameter until now. When fitting the residuals, we would like the tree to try to correct all possible errors or only a fraction of them. The learning rate allows you to control this behaviour. A small learning-rate value would only correct the residuals of very few samples. If a large learning rate is set (e.g., 1), we would fit the residuals of all samples. So, with a very low learning rate, we will need more estimators to correct the overall error. However, a too-large learning rate tends to produce an overfitted ensemble, similar to having a too-large tree depth.
from sklearn.ensemble import GradientBoostingRegressor param_grid = { "n_estimators": [10, 30, 50], "max_depth": [3, 5, None], "learning_rate": [0.1, 1], } grid_search = GridSearchCV( GradientBoostingRegressor(), param_grid=param_grid, scoring="neg_mean_absolute_error", n_jobs=2 ) grid_search.fit(data_train, target_train) columns = [f"param_{name}" for name in param_grid.keys()] columns += ["mean_test_score", "rank_test_score"] cv_results = pd.DataFrame(grid_search.cv_results_) cv_results["mean_test_score"] = -cv_results["mean_test_score"] cv_results[columns].sort_values(by="rank_test_score")
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hyperparameters.ipynb
lesteve/scikit-learn-mooc
Germany: SK Mainz (Rheinland-Pfalz)* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Rheinland-Pfalz-SK-Mainz.ipynb)
import datetime import time start = datetime.datetime.now() print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}") %config InlineBackend.figure_formats = ['svg'] from oscovida import * overview(country="Germany", subregion="SK Mainz", weeks=5); overview(country="Germany", subregion="SK Mainz"); compare_plot(country="Germany", subregion="SK Mainz", dates="2020-03-15:"); # load the data cases, deaths = germany_get_region(landkreis="SK Mainz") # get population of the region for future normalisation: inhabitants = population(country="Germany", subregion="SK Mainz") print(f'Population of country="Germany", subregion="SK Mainz": {inhabitants} people') # compose into one table table = compose_dataframe_summary(cases, deaths) # show tables with up to 1000 rows pd.set_option("max_rows", 1000) # display the table table
_____no_output_____
CC-BY-4.0
ipynb/Germany-Rheinland-Pfalz-SK-Mainz.ipynb
oscovida/oscovida.github.io
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Rheinland-Pfalz-SK-Mainz.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and " f"deaths at {fetch_deaths_last_execution()}.") # to force a fresh download of data, run "clear_cache()" print(f"Notebook execution took: {datetime.datetime.now()-start}")
_____no_output_____
CC-BY-4.0
ipynb/Germany-Rheinland-Pfalz-SK-Mainz.ipynb
oscovida/oscovida.github.io
Auto assume puan kam
case_1 =['มะนาวต่างดุ๊ด', 'กาเป็นหมู', 'ก้างใหญ่', 'อะหรี่ดอย', 'นอนแล้ว', 'ตะปู', 'นักเรียน', 'ขนม', 'เรอทัก', 'สวัสดี', ['เป็ด','กิน','ไก่'], 'ภูมิหล่อ'] for k in case_1: print('input: ',k) print('output: ',puan_kam(k)) print('===========')
input: มะนาวต่างดุ๊ด output: [['มุด', 'นาว', 'ต่าง', 'ด๊ะ'], ['มะ', 'นุด', 'ต่าง', 'ดาว']] =========== input: กาเป็นหมู output: ['กู', 'เป็น', 'หมา'] =========== input: ก้างใหญ่ output: ['ใก้', 'หญ่าง'] =========== input: อะหรี่ดอย output: ['อะ', 'หร่อย', 'ดี'] =========== input: นอนแล้ว output: ['แนว', 'ล้อน'] =========== input: ตะปู output: ['ตู', 'ปะ'] =========== input: นักเรียน output: ['เนียน', 'รัก'] =========== input: ขนม check this case not sure output: ['ขม', 'หนะ'] =========== input: เรอทัก output: ['รัก', 'เทอ'] =========== input: สวัสดี output: ['สะ', 'วี', 'ดัส'] =========== input: ['เป็ด', 'กิน', 'ไก่'] output: ['เป็ด', 'ไก', 'กิ่น'] =========== input: ภูมิหล่อ output: ['ภะ', 'หมอ', 'หลู่'] ===========
MIT
notebooks/Example.ipynb
Theerit/kampuan_api
Puan all case
for k in case_1: print(k) print(puan_kam_all(k)) print('===========')
มะนาวต่างดุ๊ด {0: ['มุด', 'นาว', 'ต่าง', 'ด๊ะ'], 1: ['มะ', 'นุด', 'ต่าง', 'ดาว']} =========== กาเป็นหมู {0: ['กู', 'เป็น', 'หมา'], 1: ['กา', 'ปู', 'เหม็น']} =========== ก้างใหญ่ {0: ['ใก้', 'หญ่าง'], 1: ['ใก้', 'หญ่าง']} =========== อะหรี่ดอย {0: ['ออย', 'หรี่', 'ดะ'], 1: ['อะ', 'หร่อย', 'ดี']} =========== นอนแล้ว {0: ['แนว', 'ล้อน'], 1: ['แนว', 'ล้อน']} =========== ตะปู {0: ['ตู', 'ปะ'], 1: ['ตู', 'ปะ']} =========== นักเรียน {0: ['เนียน', 'รัก'], 1: ['เนียน', 'รัก']} =========== ขนม check this case not sure check this case not sure {0: ['ขม', 'หนะ'], 1: ['ขม', 'หนะ']} =========== เรอทัก {0: ['รัก', 'เทอ'], 1: ['รัก', 'เทอ']} =========== สวัสดี {0: ['ซี', 'หวัส', 'ดะ'], 1: ['สะ', 'วี', 'ดัส']} =========== ['เป็ด', 'กิน', 'ไก่'] {0: ['ไป่', 'กิน', 'เก็ด'], 1: ['เป็ด', 'ไก', 'กิ่น']} =========== ภูมิหล่อ {0: ['ผ่อ', 'หูมิ', 'ละ'], 1: ['ภะ', 'หมอ', 'หลู่']} ===========
MIT
notebooks/Example.ipynb
Theerit/kampuan_api