Use the QuantileDiscretizer model to split our continuous variable into 5 buckets (see the numBuckets parameter).
discretizer = ft.QuantileDiscretizer( numBuckets=5, inputCol='continuous_var', outputCol='discretized')
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Let's see what we got.
data_discretized = discretizer.fit(data).transform(data)

data_discretized \
    .groupby('discretized')\
    .mean('continuous_var')\
    .sort('discretized')\
    .collect()
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Standardizing continuous variables Create a vector representation of our continuous variable (as it is only a single float)
vectorizer = ft.VectorAssembler( inputCols=['continuous_var'], outputCol= 'continuous_vec')
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Build a normalizer and a pipeline.
normalizer = ft.StandardScaler( inputCol=vectorizer.getOutputCol(), outputCol='normalized', withMean=True, withStd=True ) pipeline = Pipeline(stages=[vectorizer, normalizer]) data_standardized = pipeline.fit(data).transform(data)
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Classification We will now use the RandomForestClassifier to model the chances of survival for an infant. First, we need to cast the label feature to DoubleType.
import pyspark.sql.functions as func
import pyspark.sql.types as typ

births = births.withColumn(
    'INFANT_ALIVE_AT_REPORT',
    func.col('INFANT_ALIVE_AT_REPORT').cast(typ.DoubleType())
)

births_train, births_test = births \
    .randomSplit([0.7, 0.3], seed=666)
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
We are ready to build our model.
classifier = cl.RandomForestClassifier(
    numTrees=5,
    maxDepth=5,
    labelCol='INFANT_ALIVE_AT_REPORT')

pipeline = Pipeline(
    stages=[
        encoder,
        featuresCreator,
        classifier])

model = pipeline.fit(births_train)
test = model.transform(births_test)
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Let's now see how the RandomForestClassifier model performs compared to the LogisticRegression.
evaluator = ev.BinaryClassificationEvaluator(
    labelCol='INFANT_ALIVE_AT_REPORT')

print(evaluator.evaluate(test, {evaluator.metricName: "areaUnderROC"}))
print(evaluator.evaluate(test, {evaluator.metricName: "areaUnderPR"}))
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Let's test how well a single tree would do, then.
classifier = cl.DecisionTreeClassifier( maxDepth=5, labelCol='INFANT_ALIVE_AT_REPORT') pipeline = Pipeline(stages=[ encoder, featuresCreator, classifier] ) model = pipeline.fit(births_train) test = model.transform(births_test) evaluator = ev.BinaryClassificationEvaluator( labelCo...
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
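The decision-tree cell above is truncated in this dump; a hedged sketch of the full single-tree comparison, reusing the encoder, featuresCreator, and evaluator setup from the earlier random-forest cells, might look like this:

# Hypothetical completion of the truncated cell above: train a single decision tree
# and score it with the same binary-classification evaluator as before.
classifier = cl.DecisionTreeClassifier(
    maxDepth=5,
    labelCol='INFANT_ALIVE_AT_REPORT')

pipeline = Pipeline(stages=[encoder, featuresCreator, classifier])

model = pipeline.fit(births_train)
test = model.transform(births_test)

evaluator = ev.BinaryClassificationEvaluator(
    labelCol='INFANT_ALIVE_AT_REPORT')
print(evaluator.evaluate(test, {evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(test, {evaluator.metricName: 'areaUnderPR'}))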
Clustering In this example we will use the k-means model to find similarities in the births data.
import pyspark.ml.clustering as clus

kmeans = clus.KMeans(k=5, featuresCol='features')

pipeline = Pipeline(stages=[
    encoder,
    featuresCreator,
    kmeans])

model = pipeline.fit(births_train)
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Having estimated the model, let's see if we can find some differences between clusters.
test = model.transform(births_test)

test \
    .groupBy('prediction') \
    .agg({
        '*': 'count',
        'MOTHER_HEIGHT_IN': 'avg'
    }).collect()
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
In the field of NLP, problems such as topic extraction rely on clustering to detect documents with similar topics. First, let's create our dataset.
text_data = spark.createDataFrame([ ['''To make a computer do anything, you have to write a computer program. To write a computer program, you have to tell the computer, step by step, exactly what you want it to do. The computer then "executes" the program, following each step mechanically, to a...
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
First, we will once again use the RegexTokenizer and the StopWordsRemover models.
tokenizer = ft.RegexTokenizer(
    inputCol='documents',
    outputCol='input_arr',
    pattern='\s+|[,.\"]')

stopwords = ft.StopWordsRemover(
    inputCol=tokenizer.getOutputCol(),
    outputCol='input_stop')
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Next in our pipeline is the CountVectorizer.
stringIndexer = ft.CountVectorizer(
    inputCol=stopwords.getOutputCol(),
    outputCol="input_indexed")

tokenized = stopwords \
    .transform(
        tokenizer \
            .transform(text_data)
    )

stringIndexer \
    .fit(tokenized)\
    .transform(tokenized)\
    .select('input_indexed')\
    .take(2)
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
We will use the LDA model - the Latent Dirichlet Allocation model - to extract the topics.
clustering = clus.LDA(k=2, optimizer='online', featuresCol=stringIndexer.getOutputCol())
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Let's put these pieces together.
pipeline = Pipeline(stages=[
    tokenizer,
    stopwords,
    stringIndexer,
    clustering])
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Let's see if we have properly uncovered the topics.
topics = pipeline \
    .fit(text_data) \
    .transform(text_data)

topics.select('topicDistribution').collect()
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Regression In this section we will try to predict the MOTHER_WEIGHT_GAIN.
features = ['MOTHER_AGE_YEARS','MOTHER_HEIGHT_IN', 'MOTHER_PRE_WEIGHT','DIABETES_PRE', 'DIABETES_GEST','HYP_TENS_PRE', 'HYP_TENS_GEST', 'PREV_BIRTH_PRETERM', 'CIG_BEFORE','CIG_1_TRI', 'CIG_2_TRI', 'CIG_3_TRI' ]
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
First, we will collate all the features together and use the ChiSqSelector to select only the top 6 most important features.
featuresCreator = ft.VectorAssembler(
    inputCols=[col for col in features[1:]],
    outputCol='features')

selector = ft.ChiSqSelector(
    numTopFeatures=6,
    outputCol="selectedFeatures",
    labelCol='MOTHER_WEIGHT_GAIN')
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
In order to predict the weight gain we will use the gradient boosted trees regressor.
import pyspark.ml.regression as reg

regressor = reg.GBTRegressor(
    maxIter=15,
    maxDepth=3,
    labelCol='MOTHER_WEIGHT_GAIN')
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Finally, again, we put it all together into a Pipeline.
pipeline = Pipeline(stages=[
    featuresCreator,
    selector,
    regressor])

weightGain = pipeline.fit(births_train)
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Having created the weightGain model, let's see if it performs well on our testing data.
evaluator = ev.RegressionEvaluator(
    predictionCol="prediction",
    labelCol='MOTHER_WEIGHT_GAIN')

print(evaluator.evaluate(
    weightGain.transform(births_test),
    {evaluator.metricName: 'r2'}))
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
First we need to download the Caltech256 dataset.
DATASET_URL = r"http://homes.esat.kuleuven.be/~tuytelaa/"\ "unsup/unsup_caltech256_dense_sift_1000_bow.tar.gz" DATASET_DIR = "../../../projects/weiyen/data" filename = os.path.split(DATASET_URL)[1] dest_path = os.path.join(DATASET_DIR, filename) if os.path.exists(dest_path): print("{} exists. Skipping download......
mclearn/knfst/python/test.ipynb
chengsoonong/mclass-sky
bsd-3-clause
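The download cell above is truncated; a hedged sketch of how the download and extraction could be completed using only the standard library (urllib and tarfile), reusing the DATASET_URL, DATASET_DIR, and dest_path names from the cell above:

# Hypothetical completion of the truncated download cell above.
import os
import tarfile
import urllib.request

if os.path.exists(dest_path):
    print("{} exists. Skipping download...".format(dest_path))
else:
    urllib.request.urlretrieve(DATASET_URL, dest_path)

# Extract the bag-of-words archive next to the downloaded tarball
with tarfile.open(dest_path, 'r:gz') as tar:
    tar.extractall(DATASET_DIR)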
Calculate multi-class KNFST model for multi-class novelty detection. INPUT: K: NxN kernel matrix containing similarities of the N training samples; labels: Nx1 column vector containing multi-class labels of the N training samples. OUTPUT: proj: Projection of KNFST; target_points: The projections of training data into the nu...
ds = datasets.load_files(path)
ds.data = np.vstack([np.fromstring(txt, sep='\t') for txt in ds.data])
data = ds.data
target = ds.target
mclearn/knfst/python/test.ipynb
chengsoonong/mclass-sky
bsd-3-clause
Select a few "known" classes
classes = np.unique(target) num_class = len(classes) num_known = 5 known = np.random.choice(classes, num_known) mask = np.array([y in known for y in target]) X_train = data[mask] y_train = target[mask] idx = y_train.argsort() X_train = X_train[idx] y_train = y_train[idx] print(X_train.shape) print(y_train.shape) d...
mclearn/knfst/python/test.ipynb
chengsoonong/mclass-sky
bsd-3-clause
Train the model, and obtain the projection and class target points.
def learn(K, labels): classes = np.unique(labels) if len(classes) < 2: raise Exception("KNFST requires 2 or more classes") n, m = K.shape if n != m: raise Exception("Kernel matrix must be quadratic") centered_k = KernelCenterer().fit_transform(K) basis_values, basis_vec...
mclearn/knfst/python/test.ipynb
chengsoonong/mclass-sky
bsd-3-clause
The case where the X values are slightly changed (a case where smoothing must be used)
X1 = np.array([[1,0,0],[1,0,1], [0,1,1],[0,1,0],[0,0,1],[1,1,1]]) y01 = np.zeros(2) y11 = np.ones(4) y1 = np.hstack([y01, y11]) clf_bern1 = BernoulliNB().fit(X1, y1) fc1 = clf_bern1.feature_count_ fc1 np.repeat(clf_bern1.class_count_[:, np.newaxis], 3, axis=1) fc1 / np.repeat(clf_bern1.class_count_[:, np.newaxis], ...
ํ†ต๊ณ„, ๋จธ์‹ ๋Ÿฌ๋‹ ๋ณต์Šต/160620์›”_17์ผ์ฐจ_๋‚˜์ด๋ธŒ ๋ฒ ์ด์ฆˆ Naive Bayes/2.์‹ค์ „ ์˜ˆ์ œ.ipynb
kimkipyo/dss_git_kkp
mit
๋‹คํ•ญ์˜ ๊ฒฝ์šฐ ์‹ค์Šต ์˜ˆ์ œ
X = np.array([[4,4,2],[4,3,3], [6,3,1],[4,6,0],[0,4,1],[1,3,1],[1,1,3],[0,3,2]]) y0 = np.zeros(4) y1 = np.ones(4) y = np.hstack([y0, y1]) print(X) print(y) from sklearn.naive_bayes import MultinomialNB clf_mult = MultinomialNB().fit(X, y) clf_mult.classes_ clf_mult.class_count_ fc = clf_mult.feature_count_ fc np.r...
ํ†ต๊ณ„, ๋จธ์‹ ๋Ÿฌ๋‹ ๋ณต์Šต/160620์›”_17์ผ์ฐจ_๋‚˜์ด๋ธŒ ๋ฒ ์ด์ฆˆ Naive Bayes/2.์‹ค์ „ ์˜ˆ์ œ.ipynb
kimkipyo/dss_git_kkp
mit
Problem 1. Given the following features and target, solve the following problems using the Bernoulli naive Bayes method.
X = np.array([
    [1, 0, 0],
    [1, 0, 1],
    [0, 0, 1],
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 1],
    [0, 0, 1],
    [0, 1, 0],
])
y = np.array([0,0,0,0,1,1,1,1])
ํ†ต๊ณ„, ๋จธ์‹ ๋Ÿฌ๋‹ ๋ณต์Šต/160620์›”_17์ผ์ฐจ_๋‚˜์ด๋ธŒ ๋ฒ ์ด์ฆˆ Naive Bayes/2.์‹ค์ „ ์˜ˆ์ œ.ipynb
kimkipyo/dss_git_kkp
mit
(1) Find the prior distribution p(y). p(y=0) = 0.5, p(y=1) = 0.5
py0, py1 = (y==0).sum()/len(y), (y==1).sum()/len(y) py0, py1
ํ†ต๊ณ„, ๋จธ์‹ ๋Ÿฌ๋‹ ๋ณต์Šต/160620์›”_17์ผ์ฐจ_๋‚˜์ด๋ธŒ ๋ฒ ์ด์ฆˆ Naive Bayes/2.์‹ค์ „ ์˜ˆ์ œ.ipynb
kimkipyo/dss_git_kkp
mit
(2) With the smoothing parameter alpha = 0, compute the likelihood p(x|y) for the following x_new and the conditional probability distribution p(y|x) (these are not normalized values!). * x_new = [1 1 0] <img src="1.png.jpg" style="width:70%; margin: 0 auto 0 auto;">
x_new = np.array([1, 1, 0]) theta0 = X[y==0, :].sum(axis=0)/len(X[y==0, :]) theta0 theta1 = X[y==1, :].sum(axis=0)/len(X[y==1, :]) theta1 likelihood0 = (theta0**x_new).prod()*((1-theta0)**(1-x_new)).prod() likelihood0 likelihood1 = (theta1**x_new).prod()*((1-theta1)**(1-x_new)).prod() likelihood1 px = likelihood0 ...
ํ†ต๊ณ„, ๋จธ์‹ ๋Ÿฌ๋‹ ๋ณต์Šต/160620์›”_17์ผ์ฐจ_๋‚˜์ด๋ธŒ ๋ฒ ์ด์ฆˆ Naive Bayes/2.์‹ค์ „ ์˜ˆ์ œ.ipynb
kimkipyo/dss_git_kkp
mit
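The likelihood cell above is cut off; a minimal sketch of the remaining (unnormalized) posterior step, reusing the py0 and py1 priors computed earlier:

# Hypothetical completion: posterior p(y|x_new) for the Bernoulli model
px = likelihood0 * py0 + likelihood1 * py1
posterior0 = likelihood0 * py0 / px
posterior1 = likelihood1 * py1 / px
posterior0, posterior1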
(3) With the smoothing factor alpha = 0.5, solve problem (2) again. <img src="22.png.jpg" style="width:70%; margin: 0 auto 0 auto;">
theta0 = (X[y==0, :].sum(axis=0) + 0.5*np.ones(3))/(len(X[y==0,:])+1) theta0 theta1 = (X[y==1, :].sum(axis=0) + 0.5*np.ones(3))/(len(X[y==1,:])+1) theta1 x_new = np.array([1, 1, 0]) likelihood0 = (theta0**x_new).prod()*((1-theta0)**(1-x_new)).prod() likelihood0 likelihood1 = (theta1**x_new).prod()*((1-theta1)**(1-x...
ํ†ต๊ณ„, ๋จธ์‹ ๋Ÿฌ๋‹ ๋ณต์Šต/160620์›”_17์ผ์ฐจ_๋‚˜์ด๋ธŒ ๋ฒ ์ด์ฆˆ Naive Bayes/2.์‹ค์ „ ์˜ˆ์ œ.ipynb
kimkipyo/dss_git_kkp
mit
Problem 2. Solve (1), (2), and (3) of Problem 1 again using the multinomial naive Bayes (Multinomial Naive Bayesian) method. (1) Find the prior distribution p(y). p(y = 0) = 0.5, p(y = 1) = 0.5 (2) With the smoothing factor alpha = 0, compute the likelihood p(x|y) for the following x_new and the conditional probability distribution p(y|x) (these are not normalized values!). * x_new = [2 3 1] <img src="3.png.jpg" style="width:70%; margin: 0 au...
x_new = np.array([2, 3, 1]) theta0 = X[y==0, :].sum(axis=0)/X[y==0, :].sum() theta0 theta1 = X[y==1, :].sum(axis=0)/X[y==1, :].sum() theta1 likelihood0 = (theta0**x_new).prod() likelihood0 likelihood1 = (theta1**x_new).prod() likelihood1 px = likelihood0 * py0 + likelihood1 * py1 px likelihood0 * py0 / px, likeli...
ํ†ต๊ณ„, ๋จธ์‹ ๋Ÿฌ๋‹ ๋ณต์Šต/160620์›”_17์ผ์ฐจ_๋‚˜์ด๋ธŒ ๋ฒ ์ด์ฆˆ Naive Bayes/2.์‹ค์ „ ์˜ˆ์ œ.ipynb
kimkipyo/dss_git_kkp
mit
(3) With the smoothing factor alpha = 0.5, solve problem (2) again. <img src="4.png.jpg" style="width:70%; margin: 0 auto 0 auto;">
theta0 = (X[y==0, :].sum(axis=0) + 0.5*np.ones(3))/ (X[y==0, :].sum() + 1.5) theta0 theta1 = (X[y==1, :].sum(axis=0) + 0.5*np.ones(3))/ (X[y==1, :].sum() + 1.5) theta1 likelihood0 = (theta0**x_new).prod() likelihood0 likelihood1 = (theta1**x_new).prod() likelihood1 px = likelihood0 * py0 + likelihood1 * py1 px lik...
ํ†ต๊ณ„, ๋จธ์‹ ๋Ÿฌ๋‹ ๋ณต์Šต/160620์›”_17์ผ์ฐจ_๋‚˜์ด๋ธŒ ๋ฒ ์ด์ฆˆ Naive Bayes/2.์‹ค์ „ ์˜ˆ์ œ.ipynb
kimkipyo/dss_git_kkp
mit
import a LiDAR swath
swath = np.genfromtxt('../../PhD/python-phd/swaths/is6_f11_pass1_aa_nr2_522816_523019_c.xyz')

import pandas as pd
columns = ['time', 'X', 'Y', 'Z', 'I', 'A', 'x_u', 'y_u', 'z_u', '3D_u']
swath = pd.DataFrame(swath, columns=columns)
swath[1:5]
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
Now load up the aircraft trajectory
air_traj = np.genfromtxt('../../PhD/is6_f11/trajectory/is6_f11_pass1_local_ice_rot.3dp')
columns = ['time', 'X', 'Y', 'Z', 'R', 'P', 'H', 'x_u', 'y_u', 'z_u', 'r_u', 'p_u', 'h_u']
air_traj = pd.DataFrame(air_traj, columns=columns)
air_traj[1:5]
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
take a quick look at the data
fig = plt.figure(figsize = ([30/2.54, 6/2.54])) ax0 = fig.add_subplot(111) a0 = ax0.scatter(swath['Y'], swath['X'], c=swath['Z'] - np.min(swath['Z']), cmap = 'gist_earth', vmin=0, vmax=10, edgecolors=None,lw=0, s=0.6) a1 = ax0.scatter(air_traj['Y'], air_traj['X'], c=air_traj['Z'], cmap = 'Reds', ...
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
Making an HDF file out of those points
import h5py #create a file instance, with the intention to write it out lidar_test = h5py.File('lidar_test.hdf5', 'w') swath_data = lidar_test.create_group('swath_data') swath_data.create_dataset('GPS_SOW', data=swath['time']) #some data swath_data.create_dataset('UTM_X', data=swath['X']) swath_data.create_dataset(...
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
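The HDF-creation cell above is truncated; a hedged sketch of the swath group it starts to build, assuming h5py and the swath DataFrame loaded earlier (the UTM_Y and Z datasets are inferred from the queries used later in the notebook):

# Hypothetical sketch of the truncated cell: write swath columns into an HDF5 group.
import h5py

# create a file instance, with the intention to write it out
lidar_test = h5py.File('lidar_test.hdf5', 'w')

swath_data = lidar_test.create_group('swath_data')
swath_data.create_dataset('GPS_SOW', data=swath['time'])
swath_data.create_dataset('UTM_X', data=swath['X'])
swath_data.create_dataset('UTM_Y', data=swath['Y'])
swath_data.create_dataset('Z', data=swath['Z'])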
That's some swath data, now some trajectory data at a different sampling rate
traj_data = lidar_test.create_group('traj_data') #some attributes traj_data.attrs['flight'] = 11 traj_data.attrs['pass'] = 1 traj_data.attrs['source'] = 'RAPPLS flight 11, SIPEX-II 2012' #some data traj_data.create_dataset('pos_x', data = air_traj['X']) traj_data.create_dataset('pos_y', data = air_traj['Y']) traj_da...
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
close and write the file out
lidar_test.close()
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
OK, that's an arbitrary HDF file built. The generated file is substantially smaller than the combined sources - 158 MB from 193, with no attention paid to optimisation. The .LAZ version of the input text file here is 66 MB. More compact, but we can't query it directly - and we have to fake fields! Everything in the swat...
photo = np.genfromtxt('/Users/adam/Documents/PhD/is6_f11/photoscan/is6_f11_photoscan_Cloud.txt',skip_header=1) columns = ['X', 'Y', 'Z', 'R', 'G', 'B'] photo = pd.DataFrame(photo[:,0:6], columns=columns) #create a file instance, with the intention to write it out lidar_test = h5py.File('lidar_test.hdf5', 'r+') photo...
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
Storage is a bit less efficient here: ASCII cloud, 2.1 GB; .LAZ format with the same data, 215 MB; HDF file containing LiDAR, trajectory, and the 3D photo cloud, 1.33 GB. So, there's probably a case for keeping super dense clouds in different files (along with all their ancillary data). Note that .LAZ is able to store all the data ...
from netCDF4 import Dataset

thedata = Dataset('lidar_test.hdf5', 'r')
thedata
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
There are the two groups - swath_data and traj_data
swath = thedata['swath_data'] swath utm_xy = np.column_stack((swath['UTM_X'],swath['UTM_Y'])) idx = np.where((utm_xy[:,0] > -100) & (utm_xy[:,0] < 200) & (utm_xy[:,1] > -100) & (utm_xy[:,1] < 200) ) chunk_z = swath['Z'][idx] chunk_z.size max(chunk_z) chunk_x = swath['UTM_X'][idx] chunk_x.size chunk_y = swath['U...
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
That gave us a small chunk of LiDAR points, without loading the whole point dataset. Neat! ...but being continually dissatisfied, we want more! Let's get just the corresponding trajectory:
traj = thedata['traj_data'] traj
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
Because there's essentially no X extent for the flight data, only the Y coordinate of the flight data is needed...
pos_y = traj['pos_y']
idx = np.where((pos_y[:] > -100.) & (pos_y[:] < 200.))
cpos_x = traj['pos_x'][idx]
cpos_y = traj['pos_y'][idx]
cpos_z = traj['pos_z'][idx]
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
Now plot the flight line and LiDAR together
plt.scatter(chunk_x, chunk_y, c=chunk_z, lw=0, s=3, cmap='gist_earth')
plt.scatter(cpos_x, cpos_y, c=cpos_z, lw=0, s=5, cmap='Oranges')
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
...and prove that we are looking at a trajectory and some LiDAR
from mpl_toolkits.mplot3d import Axes3D #set up a plot plt_az=310 plt_elev = 40. plt_s = 3 cb_fmt = '%.1f' cmap1 = plt.get_cmap('gist_earth', 10) #make a plot fig = plt.figure() fig.set_size_inches(35/2.51, 20/2.51) ax0 = fig.add_subplot(111, projection='3d') a0 = ax0.scatter(chunk_x, chunk_y, (chunk_z-min(chunk_z))...
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
plot coloured by point uncertainty
#set up a plot plt_az=310 plt_elev = 40. plt_s = 3 cb_fmt = '%.1f' cmap1 = plt.get_cmap('gist_earth', 30) #make a plot fig = plt.figure() fig.set_size_inches(35/2.51, 20/2.51) ax0 = fig.add_subplot(111, projection='3d') a0 = ax0.scatter(chunk_x, chunk_y, (chunk_z-min(chunk_z))*2, c=np.ndarray.tolist(...
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
now pull in the photogrammetry cloud. This gets a little messy, since it appears we still need to grab X and Y dimensions - so still 20 x 10^6 x 2 points. Better than 20 x 10^6 x 6, but I wonder if I'm missing something about indexing.
photo = thedata['3d_photo'] photo photo_xy = np.column_stack((photo['UTM_X'],photo['UTM_Y'])) idx_p = np.where((photo_xy[:,0] > 0) & (photo_xy[:,0] < 100) & (photo_xy[:,1] > 0) & (photo_xy[:,1] < 100) ) plt.scatter(photo['UTM_X'][idx_p], photo['UTM_Y'][idx_p], c = photo['Z'][idx_p],\ cmap='hot',vmi...
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
This is kind of a clunky plot - but you get the idea (I hope). LiDAR is in blues, the 100 x 100 photogrammetry patch in orange, trajectory in orange. Different data sources, different resolutions, extracted using pretty much the same set of queries.
print('LiDAR points: {0}\nphotogrammetry points: {1}\ntrajectory points: {2}'. format(len(chunk_x), len(p_x), len(cpos_x) ))
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
Lists always have their order preserved in Python, so you can guarantee that shopping_list[0] will have the value "bread". Tuples A tuple is another of the standard Python data structures. They behave in a similar way to the list but have one key difference: they are immutable. Let's look at what this means. A more detail...
# A tuple is declared with the curved brackets () instead of the [] for a list
my_tuple = (1,2,'cat','dog')

# But since a tuple is immutable the next line will not run
my_tuple[0] = 4
lesson4.ipynb
trsherborne/learn-python
mit
So what can we learn from this? Once you declare a tuple, the object cannot be changed. For this reason, tuples have more optimised methods when you use them, so they can be more efficient and faster in your code. A closer look at using Tuples
# A tuple might be immutable but can contain mutable objects my_list_tuple = ([1,2,3],[4,5,6]) # This won't work # my_list_tuple[0] = [3,2,1] # But this will! my_list_tuple[0][0:3] = [3,2,1] print(my_list_tuple) # You can add tuples together t1 = (1,2,3) t1 += (4,5,6) print(t1) t2 = (10,20,30) t3 = (40,50,60) prin...
lesson4.ipynb
trsherborne/learn-python
mit
Question - Write a function which swaps two elements using tuples
# TO DO
def my_swap_function(a,b):
    # write here!
    return b,a
# END TO DO

a = 1
b = 2
x = my_swap_function(a,b)
print(x)
lesson4.ipynb
trsherborne/learn-python
mit
Dictionaries Dictionaries are perhaps the most useful and hardest to grasp data structure from the basic set in Python. Dictionaries are not iterable in the same sense as lists and tuples, and using them requires a different approach. Dictionaries are sometimes called hash maps, hash tables or maps in other programming ...
# Declare a dictionary using the {} brackets or the dict() method my_dict = {} # Add new items to the dictionary by stating the key as the index and the value my_dict['bananas'] = 'this is a fruit and a berry' my_dict['apples'] = 'this is a fruit' my_dict['avocados'] = 'this is a berry' print(my_dict) # So now we ca...
lesson4.ipynb
trsherborne/learn-python
mit
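Since the cell above is cut off, here is a small hypothetical illustration of the point about iteration: a dictionary is traversed by its keys (or key/value pairs) rather than by position.

# Hypothetical illustration: iterate over a dictionary via its key/value pairs.
my_dict = {'bananas': 'this is a fruit and a berry',
           'apples': 'this is a fruit',
           'avocados': 'this is a berry'}

for key, value in my_dict.items():
    print('{}: {}'.format(key, value))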
Wrapping everything up, we can create a list of dictionaries with multiple fields and iterate over a dictionary
# Declare a list europe = [] # Create dicts and add to lists germany = {"name": "Germany", "population": 81000000,"speak_german":True} europe.append(germany) luxembourg = {"name": "Luxembourg", "population": 512000,"speak_german":True} europe.append(luxembourg) uk = {"name":"United Kingdom","population":64100000,"spea...
lesson4.ipynb
trsherborne/learn-python
mit
Question - Add at least 3 more countries to the europe list and use a for loop to get a new list of every country which speaks German
# TO DO - You might need more than just a for loop! # END TO DO
lesson4.ipynb
trsherborne/learn-python
mit
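One possible solution sketch for the exercise above, assuming the europe list of country dictionaries built earlier (the population figures are approximate and only for illustration):

# Hypothetical solution sketch for the exercise above.
france = {"name": "France", "population": 66000000, "speak_german": False}
austria = {"name": "Austria", "population": 8700000, "speak_german": True}
switzerland = {"name": "Switzerland", "population": 8300000, "speak_german": True}
europe.extend([france, austria, switzerland])

# Collect the names of every country that speaks German
german_speaking = []
for country in europe:
    if country["speak_german"]:
        german_speaking.append(country["name"])
print(german_speaking)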
A peek at Pandas We've seen some of the standard data structures in Python. We will briefly look at Pandas now, a powerful data manipulation library which is a sensible next step for organising your data when you need something more complex than the standard Python data structures. The core of Pandas is th...
# We import the Pandas packages using the import statement we've seen before import pandas as pd # To create a Pandas DataFrame from a simpler data structure we use the following routine europe_df = pd.DataFrame.from_dict(europe) print(type(europe_df)) # Running this cell as is provides the fancy formatting of Pan...
lesson4.ipynb
trsherborne/learn-python
mit
With that out of the way, let's load the MNIST data set and scale the images to a range between 0 and 1. If you haven't already downloaded the data set, the Keras load_data function will download the data directly from S3 on AWS.
# Loads the training and test data sets (ignoring class labels)
(x_train, _), (x_test, _) = mnist.load_data()

# Scales the training and test data to range between 0 and 1.
max_value = float(x_train.max())
x_train = x_train.astype('float32') / max_value
x_test = x_test.astype('float32') / max_value
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
The data set consists of 3D arrays with 60K training and 10K test images. The images have a resolution of 28 x 28 (pixels).
x_train.shape, x_test.shape
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
To work with the images as vectors, let's reshape the 3D arrays as matrices. In doing so, we'll reshape the 28 x 28 images into vectors of length 784.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
(x_train.shape, x_test.shape)
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Simple Autoencoder Let's start with a simple autoencoder for illustration. The encoder and decoder functions are each fully-connected neural layers. The encoder function uses a ReLU activation function, while the decoder function uses a sigmoid activation function. So what are the encoder and the decoder layers doing? ...
# input dimension = 784 input_dim = x_train.shape[1] encoding_dim = 32 compression_factor = float(input_dim) / encoding_dim print("Compression factor: %s" % compression_factor) autoencoder = Sequential() autoencoder.add( Dense(encoding_dim, input_shape=(input_dim,), activation='relu') ) autoencoder.add( Dense...
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
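The model-definition cell above is truncated; based on the description (a single ReLU encoding layer of width 32 and a sigmoid decoding layer back to the 784 inputs), a minimal Keras sketch might be:

# Hypothetical sketch of the truncated simple autoencoder definition.
from keras.models import Sequential
from keras.layers import Dense

input_dim = x_train.shape[1]   # 784
encoding_dim = 32
compression_factor = float(input_dim) / encoding_dim
print("Compression factor: %s" % compression_factor)

autoencoder = Sequential()
autoencoder.add(Dense(encoding_dim, input_shape=(input_dim,), activation='relu'))
autoencoder.add(Dense(input_dim, activation='sigmoid'))
autoencoder.summary()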
Encoder Model We can extract the encoder model from the first layer of the autoencoder model. The reason we want to extract the encoder model is to examine what an encoded image looks like.
input_img = Input(shape=(input_dim,))
encoder_layer = autoencoder.layers[0]
encoder = Model(input_img, encoder_layer(input_img))

encoder.summary()
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Okay, now we're ready to train our first autoencoder. We'll iterate on the training data in batches of 256 in 50 epochs. Let's also use the Adam optimizer and per-pixel binary crossentropy loss. The purpose of the loss function is to reconstruct an image similar to the input image. I want to call out something that may...
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
We've successfully trained our first autoencoder. With a mere 50,992 parameters, our autoencoder model can compress an MNIST digit down to 32 floating-point digits. Not that impressive, but it works. To check out the encoded images and the reconstructed image quality, we randomly sample 10 test images. I really like ho...
num_images = 10 np.random.seed(42) random_test_images = np.random.randint(x_test.shape[0], size=num_images) encoded_imgs = encoder.predict(x_test) decoded_imgs = autoencoder.predict(x_test) plt.figure(figsize=(18, 4)) for i, image_idx in enumerate(random_test_images): # plot original image ax = plt.subplot(3...
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Deep Autoencoder Above, we used single fully-connected layers for both the encoding and decoding models. Instead, we can stack multiple fully-connected layers to make each of the encoder and decoder functions deep. You know because deep learning. In this next model, we'll use 3 fully-connected layers for the encoding m...
autoencoder = Sequential() # Encoder Layers autoencoder.add(Dense(4 * encoding_dim, input_shape=(input_dim,), activation='relu')) autoencoder.add(Dense(2 * encoding_dim, activation='relu')) autoencoder.add(Dense(encoding_dim, activation='relu')) # Decoder Layers autoencoder.add(Dense(2 * encoding_dim, activation='rel...
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Encoder Model Like we did above, we can extract the encoder model from the autoencoder. The encoder model consists of the first 3 layers in the autoencoder, so let's extract them to visualize the encoded images.
input_img = Input(shape=(input_dim,)) encoder_layer1 = autoencoder.layers[0] encoder_layer2 = autoencoder.layers[1] encoder_layer3 = autoencoder.layers[2] encoder = Model(input_img, encoder_layer3(encoder_layer2(encoder_layer1(input_img)))) encoder.summary() autoencoder.compile(optimizer='adam', loss='binary_crossent...
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
As with the simple autoencoder, we randomly sample 10 test images (the same ones as before). The reconstructed digits look much better than those from the single-layer autoencoder. This observation aligns with the reduction in validation loss after adding multiple layers to the autoencoder.
num_images = 10 np.random.seed(42) random_test_images = np.random.randint(x_test.shape[0], size=num_images) encoded_imgs = encoder.predict(x_test) decoded_imgs = autoencoder.predict(x_test) plt.figure(figsize=(18, 4)) for i, image_idx in enumerate(random_test_images): # plot original image ax = plt.subplot(3...
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Convolutional Autoencoder Now that we've explored deep autoencoders, let's use a convolutional autoencoder instead, given that the input objects are images. What this means is our encoding and decoding models will be convolutional neural networks instead of fully-connected networks. Again, Keras makes this very easy fo...
x_train = x_train.reshape((len(x_train), 28, 28, 1))
x_test = x_test.reshape((len(x_test), 28, 28, 1))
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
To build the convolutional autoencoder, we'll make use of Conv2D and MaxPooling2D layers for the encoder and Conv2D and UpSampling2D layers for the decoder. The encoded images are transformed to a 3D array of dimensions 4 x 4 x 8, but to visualize the encoding, we'll flatten it to a vector of length 128. I tried to use...
autoencoder = Sequential() # Encoder Layers autoencoder.add(Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=x_train.shape[1:])) autoencoder.add(MaxPooling2D((2, 2), padding='same')) autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same')) autoencoder.add(MaxPooling2D((2, 2), padding='sam...
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
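The convolutional model cell above is also truncated; a sketch consistent with the description (a Conv2D/MaxPooling2D encoder down to a 4 x 4 x 8 encoding, flattened to length 128, followed by a Conv2D/UpSampling2D decoder) might look like the following. The layer name 'flatten_1' is chosen so the later get_layer('flatten_1') call matches; the exact filter counts are assumptions in the spirit of the standard Keras autoencoder tutorial, not the author's confirmed architecture.

# Hypothetical sketch of the truncated convolutional autoencoder.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D, Flatten, Reshape

autoencoder = Sequential()

# Encoder: 28x28x1 -> 4x4x8, flattened to a 128-dimensional encoding
autoencoder.add(Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=x_train.shape[1:]))
autoencoder.add(MaxPooling2D((2, 2), padding='same'))
autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(MaxPooling2D((2, 2), padding='same'))
autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(MaxPooling2D((2, 2), padding='same'))
autoencoder.add(Flatten(name='flatten_1'))
autoencoder.add(Reshape((4, 4, 8)))

# Decoder: back up to 28x28x1
autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Conv2D(16, (3, 3), activation='relu'))   # valid padding: 16x16 -> 14x14
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Conv2D(1, (3, 3), activation='sigmoid', padding='same'))

autoencoder.summary()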
Encoder Model To extract the encoder model for the autoencoder, we're going to use a slightly different approach than before. Rather than extracting the first 6 layers, we're going to create a new Model with the same input as the autoencoder, but the output will be that of the flattening layer. As a side note, this is ...
encoder = Model(inputs=autoencoder.input, outputs=autoencoder.get_layer('flatten_1').output) encoder.summary() autoencoder.compile(optimizer='adam', loss='binary_crossentropy') autoencoder.fit(x_train, x_train, epochs=100, batch_size=128, validation_data=(x_test, x_test)...
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
The reconstructed digits look even better than before. This is no surprise given an even lower validation loss. Other than slightly improved reconstruction, check out how the encoded image has changed. What's even cooler is that the encoded images of the 9's look similar, as do those of the 8's. This similarity was far less...
num_images = 10 np.random.seed(42) random_test_images = np.random.randint(x_test.shape[0], size=num_images) encoded_imgs = encoder.predict(x_test) decoded_imgs = autoencoder.predict(x_test) plt.figure(figsize=(18, 4)) for i, image_idx in enumerate(random_test_images): # plot original image ax = plt.subplot(3...
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Denoising Images with the Convolutional Autoencoder Earlier, I mentioned that autoencoders are useful for denoising data including images. When I learned about this concept in grad school, my mind was blown. This simple task helped me realize data can be manipulated in very useful ways and that the dirty data we often ...
x_train_noisy = x_train + np.random.normal(loc=0.0, scale=0.5, size=x_train.shape) x_train_noisy = np.clip(x_train_noisy, 0., 1.) x_test_noisy = x_test + np.random.normal(loc=0.0, scale=0.5, size=x_test.shape) x_test_noisy = np.clip(x_test_noisy, 0., 1.) num_images = 10 np.random.seed(42) random_test_images = np.rand...
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Convolutional Autoencoder - Take 2 Well, those images are terrible. They remind me of the mask from the movie Scream. Okay, so let's try that again. This time we're going to build a ConvNet with a lot more parameters and forego visualizing the encoding layer. The network will be a bit larger and slower to train, but t...
autoencoder = Sequential() # Encoder Layers autoencoder.add(Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=x_train.shape[1:])) autoencoder.add(MaxPooling2D((2, 2), padding='same')) autoencoder.add(Conv2D(32, (3, 3), activation='relu', padding='same')) autoencoder.add(MaxPooling2D((2, 2), padding='sa...
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
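The Take 2 cell is truncated before the training step; the key point of a denoising autoencoder is to fit on the noisy inputs with the clean images as targets. A minimal sketch, assuming the same compile and fit settings used earlier in the notebook:

# Hypothetical training step for the denoising autoencoder: noisy inputs, clean targets.
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train_noisy, x_train,
                epochs=100,
                batch_size=128,
                shuffle=True,
                validation_data=(x_test_noisy, x_test))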
Set up Network
import network

# 784 (28 x 28 pixel images) input neurons; 30 hidden neurons; 10 output neurons
net = network.Network([784, 30, 10])
neural-networks-and-deep-learning/src/run_network.ipynb
the-deep-learners/study-group
mit
Train Network
# Use stochastic gradient descent over 30 epochs, with mini-batch size of 10, learning rate of 3.0
net.SGD(training_data, 30, 10, 3.0, test_data=test_data)
neural-networks-and-deep-learning/src/run_network.ipynb
the-deep-learners/study-group
mit
Exercise: Create network with just two layers
two_layer_net = network.Network([784, 10]) two_layer_net.SGD(training_data, 10, 10, 1.0, test_data=test_data) two_layer_net.SGD(training_data, 10, 10, 2.0, test_data=test_data) two_layer_net.SGD(training_data, 10, 10, 3.0, test_data=test_data) two_layer_net.SGD(training_data, 10, 10, 4.0, test_data=test_data) two_...
neural-networks-and-deep-learning/src/run_network.ipynb
the-deep-learners/study-group
mit
The data can be obtained from the World Bank web site, but here we work with a slightly cleaned-up version of the data:
data = sm.datasets.fertility.load_pandas().data data.head()
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Here we construct a DataFrame that contains only the numerical fertility rate data and set the index to the country names. We also drop all the countries with any missing data.
columns = list(map(str, range(1960, 2012)))
data.set_index("Country Name", inplace=True)
dta = data[columns]
dta = dta.dropna()
dta.head()
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
There are two ways to use PCA to analyze a rectangular matrix: we can treat the rows as the "objects" and the columns as the "variables", or vice-versa. Here we will treat the fertility measures as "variables" used to measure the countries as "objects". Thus the goal will be to reduce the yearly fertility rate values...
ax = dta.mean().plot(grid=False)
ax.set_xlabel("Year", size=17)
ax.set_ylabel("Fertility rate", size=17)
ax.set_xlim(0, 51)
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Next we perform the PCA:
pca_model = PCA(dta.T, standardize=False, demean=True)
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Based on the eigenvalues, we see that the first PC dominates, with perhaps a small amount of meaningful variation captured in the second and third PC's.
fig = pca_model.plot_scree(log_scale=False)
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Next we will plot the PC factors. The dominant factor is monotonically increasing. Countries with a positive score on the first factor will increase faster (or decrease slower) compared to the mean shown above. Countries with a negative score on the first factor will decrease faster than the mean. The second factor...
fig, ax = plt.subplots(figsize=(8, 4)) lines = ax.plot(pca_model.factors.iloc[:, :3], lw=4, alpha=0.6) ax.set_xticklabels(dta.columns.values[::10]) ax.set_xlim(0, 51) ax.set_xlabel("Year", size=17) fig.subplots_adjust(0.1, 0.1, 0.85, 0.9) legend = fig.legend(lines, ["PC 1", "PC 2", "PC 3"], loc="center right") legend.d...
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
To better understand what is going on, we will plot the fertility trajectories for sets of countries with similar PC scores. The following convenience function produces such a plot.
idx = pca_model.loadings.iloc[:, 0].argsort()
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
First we plot the five countries with the greatest scores on PC 1. These countries have a higher rate of fertility increase than the global mean (which is decreasing).
def make_plot(labels): fig, ax = plt.subplots(figsize=(9, 5)) ax = dta.loc[labels].T.plot(legend=False, grid=False, ax=ax) dta.mean().plot(ax=ax, grid=False, label="Mean") ax.set_xlim(0, 51) fig.subplots_adjust(0.1, 0.1, 0.75, 0.9) ax.set_xlabel("Year", size=17) ax.set_ylabel("Fertility", si...
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Here are the five countries with the greatest scores on factor 2. These are countries that reached peak fertility around 1980, later than much of the rest of the world, followed by a rapid decrease in fertility.
idx = pca_model.loadings.iloc[:, 1].argsort()
make_plot(dta.index[idx[-5:]])
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Finally we have the countries with the most negative scores on PC 2. These are the countries where the fertility rate declined much faster than the global mean during the 1960's and 1970's, then flattened out.
make_plot(dta.index[idx[:5]])
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
We can also look at a scatterplot of the first two principal component scores. We see that the variation among countries is fairly continuous, except perhaps that the two countries with highest scores for PC 2 are somewhat separated from the other points. These countries, Oman and Yemen, are unique in having a sharp ...
fig, ax = plt.subplots()
pca_model.loadings.plot.scatter(x="comp_00", y="comp_01", ax=ax)
ax.set_xlabel("PC 1", size=17)
ax.set_ylabel("PC 2", size=17)

dta.index[pca_model.loadings.iloc[:, 1] > 0.2].values
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Dataset We are using CelebA Dataset which is a large-scale face attributes dataset with more than 200K celebrity images, each with 40 attribute annotations. The images in this dataset cover large pose variations and background clutter. We randomly downsample it by a factor of 30 for computational reasons.
#N=int(len(imgfiles)/30)
N=len(imgfiles)
print("Number of images = {}".format(N))
test = imgfiles[0:N]
test[1]
projects/reports/face_manifold/NTDS_Project.ipynb
mdeff/ntds_2017
mit
Loading the data
sample_path = imgfiles[0] sample_im = load_image(sample_path) sample_im = np.array(sample_im) img_shape = (sample_im.shape[0],sample_im.shape[1]) ims = np.zeros((N, sample_im.shape[1]*sample_im.shape[0])) for i, filepath in enumerate(test): im = load_image(filepath) im = np.array(im) im = im.mean(axis=2) ...
projects/reports/face_manifold/NTDS_Project.ipynb
mdeff/ntds_2017
mit
Learning the Manifold We are using Isomap for dimensionality reduction as we believe that the face image data lies on a structured manifold in a higher dimension and thus is embeddable in a much lower dimension without much loss of information. Further, Isomap is a graph based technique which aligns with our scope.
#iso = manifold.Isomap(n_neighbors=2, n_components=3, max_iter=500, n_jobs=-1) #Z = iso.fit_transform(ims) #don't run, can load from pickle as in below cells #saving the learnt embedding #with open('var6753_n2_d3.pkl', 'wb') as f: #model learnt with n_neighbors=2 and n_components=3 # pickle.dump(Z,f) #with op...
projects/reports/face_manifold/NTDS_Project.ipynb
mdeff/ntds_2017
mit
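The cell above keeps the Isomap fit commented out and truncated; a minimal sketch of the fit it describes, assuming sklearn's manifold module and the ims image matrix built earlier (this is slow on the full data set, which is presumably why the authors cached the result in a pickle):

# Hypothetical sketch of the Isomap embedding described above.
from sklearn import manifold

iso = manifold.Isomap(n_neighbors=2, n_components=3, max_iter=500, n_jobs=-1)
Z = iso.fit_transform(ims)
print(Z.shape)   # (N, 3) low-dimensional embedding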
Regeneration from Lower Dimensional Space While traversing the chosen path, we are also sub sampling in the lower dimensional space in order to create smooth transitions in the video. We naturally expect smoothness as points closer in the lower dimensional space should correspond to similar images. Since we do not have...
#Mapping the regressor from low dimension space to high dimension space lin = ExtraTreeRegressor(max_depth=19) lin.fit(Z, ims) lin.score(Z, ims) pred = lin.predict(Z[502].reshape(1, -1)); fig_new, [ax1,ax2] = plt.subplots(1,2) ax1.imshow(ims[502].reshape(*img_shape), cmap = 'gray') ax1.set_title('Original') ax2.imsh...
projects/reports/face_manifold/NTDS_Project.ipynb
mdeff/ntds_2017
mit
Please check the generated video in the same enclosing folder. Observing the output of the tree regressor we notice sudden jumps in the reconstructed video. We suspect that these discontinuities are either an artefact of the isomap embedding in a much lower dimension or because of the reconstruction method. To investi...
norm_vary = list() norm_im = list() lbd = np.linspace(0, 1, 101) person1=12 person2=14 for i in range(101): test = (lbd[i] * Z[person2]) + ((1-lbd[i]) * Z[person1]) norm_vary.append(norm(test)) pred = lin.predict(test.reshape(1, -1)) im = Image.fromarray(pred.reshape(*img_shape)) norm_im.append(norm...
projects/reports/face_manifold/NTDS_Project.ipynb
mdeff/ntds_2017
mit
Even after extensive hyperparameter tuning, we are unable to learn a reasonable regressor, hence we use the convex combination approach in the high-dimensional space. Method 2 Instead of choosing a path from the graph, we manually choose a set of points which visibly lie on a 2D manifold. For regeneration of sub-sampled points, we use conv...
#Interesting paths with N4D3 model #imlist = [1912,3961,2861,4870,146,6648] #imlist = [3182,5012,5084,1113,2333,1375] #imlist = [5105,5874,4255,2069,1178] #imlist = [3583,2134,1034, 3917,3704, 5920,6493] #imlist = [1678,6535,6699,344,6677,5115,6433] #Interesting paths with N2D3 model imlist = [1959,3432,6709,4103, 48...
projects/reports/face_manifold/NTDS_Project.ipynb
mdeff/ntds_2017
mit
Description A synchronous machine has a synchronous reactance of $1.0\,\Omega$ per phase and an armature resistance of $0.1\,\Omega$ per phase. If $\vec{E}_A = 460\,V\angle-10°$ and $\vec{V}_\phi = 480\,V\angle0°$, is this machine a motor or a generator? How much power P is this machine consuming from or supplying to ...
Ea = 460               # [V]
EA_angle = -10/180*pi  # [rad]
EA = Ea * (cos(EA_angle) + 1j*sin(EA_angle))

Vphi = 480              # [V]
VPhi_angle = 0/180*pi   # [rad]
VPhi = Vphi*exp(1j*VPhi_angle)

Ra = 0.1  # [Ohm]
Xs = 1.0  # [Ohm]
Chapman/Ch5-Problem_5-10.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
SOLUTION This machine is a motor, consuming power from the power system, because $\vec{E}_A$ is lagging $\vec{V}_\phi$. It is also consuming reactive power, because $E_A \cos{\delta} < V_\phi$. The current flowing in this machine is: $$\vec{I}_A = \frac{\vec{V}_\phi - \vec{E}_A}{R_A + jX_s}$$
IA = (VPhi - EA) / (Ra + Xs*1j)
IA_angle = arctan(IA.imag/IA.real)

print('IA = {:.1f} A ∠ {:.2f}°'.format(abs(IA), IA_angle/pi*180))
Chapman/Ch5-Problem_5-10.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
Therefore the real power consumed by this motor is: $$P = 3 V_\phi I_A \cos{\theta}$$
theta = abs(IA_angle)
P = 3 * abs(VPhi) * abs(IA) * cos(theta)

print('''
P = {:.1f} kW
============'''.format(P/1e3))
Chapman/Ch5-Problem_5-10.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
and the reactive power consumed by this motor is: $$Q = 3V_\phi I_A \sin{\theta}$$
Q = 3 * abs(VPhi) * abs(IA) * sin(theta)

print('''
Q = {:.1f} kvar
============='''.format(Q/1e3))
Chapman/Ch5-Problem_5-10.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
Define categorical data types
s = ["Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_Hist...
.ipynb_checkpoints/data-exploration-life-insurance-checkpoint.ipynb
ramabrahma/data-sci-int-capstone
gpl-3.0
Importing life insurance data set The following variables are all categorical (nominal): Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, Ins...
#Import training data d = pd.read_csv('prud_files/train.csv') def normalize_df(d): min_max_scaler = preprocessing.MinMaxScaler() x = d.values.astype(np.float) return pd.DataFrame(min_max_scaler.fit_transform(x)) # Import training data d = pd.read_csv('prud_files/train.csv') #Separation into groups df_...
.ipynb_checkpoints/data-exploration-life-insurance-checkpoint.ipynb
ramabrahma/data-sci-int-capstone
gpl-3.0
Grouping of various categorical data sets Histograms and descriptive statistics for Risk Response, Ins_Age, BMI, Wt
plt.figure(0) plt.title("Categorical - Histogram for Risk Response") plt.xlabel("Risk Response (1-7)") plt.ylabel("Frequency") plt.hist(df.Response) plt.savefig('images/hist_Response.png') print df.Response.describe() print "" plt.figure(1) plt.title("Continuous - Histogram for Ins_Age") plt.xlabel("Normalized Ins_Ag...
.ipynb_checkpoints/data-exploration-life-insurance-checkpoint.ipynb
ramabrahma/data-sci-int-capstone
gpl-3.0
Histograms and descriptive statistics for Product_Info_1-7
for i in range(1,8): print "The iteration is: "+str(i) print df['Product_Info_'+str(i)].describe() print "" plt.figure(i) if(i == 4): plt.title("Continuous - Histogram for Product_Info_"+str(i)) plt.xlabel("Normalized value: [0,1]") plt.ylabel("Frequency") else...
.ipynb_checkpoints/data-exploration-life-insurance-checkpoint.ipynb
ramabrahma/data-sci-int-capstone
gpl-3.0