| markdown | code | path | repo_name | license |
|---|---|---|---|---|
Q8. Compute softmax cross entropy between logits and labels. | logits = tf.random_normal(shape=[2, 5, 10])
labels = tf.convert_to_tensor(np.random.randint(0, 10, size=[2, 5]), tf.int32)
labels = tf.one_hot(labels, depth=10)
output = tf.nn....
with tf.Session() as sess:
    print(sess.run(output)) | programming/Python/tensorflow/exercises/Neural_Network_Part2.ipynb | diegocavalca/Studies | cc0-1.0 |
Embeddings
Q9. Map tensor x to the embedding. | tf.reset_default_graph()
x = tf.constant([0, 2, 1, 3, 4], tf.int32)
embedding = tf.constant([0, 0.1, 0.2, 0.3, 0.4], tf.float32)
output = tf.nn....
with tf.Session() as sess:
    print(sess.run(output)) | programming/Python/tensorflow/exercises/Neural_Network_Part2.ipynb | diegocavalca/Studies | cc0-1.0 |
Choose most probable B-events | _, take_indices = numpy.unique(data[event_id_column], return_index=True)
figure(figsize=[15, 5])
subplot(1, 2, 1)
hist(data.Bmass.values[take_indices], bins=100)
title('B mass hist')
xlabel('mass')
subplot(1, 2, 2)
hist(data.N_sig_sw.values[take_indices], bins=100, normed=True)
title('sWeights hist')
xlabel('signal ... | Stefania_files/track-based-tagging-track-sign-usage.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
Define B-like events for training
Events with low sWeight will still be used, but only to test quality. | sweight_threshold = 1.
data_sw_passed = data[data.N_sig_sw > sweight_threshold]
data_sw_not_passed = data[data.N_sig_sw <= sweight_threshold]
get_events_statistics(data_sw_passed)
_, take_indices = numpy.unique(data_sw_passed[event_id_column], return_index=True)
figure(figsize=[15, 5])
subplot(1, 2, 1)
hist(data_sw_p... | Stefania_files/track-based-tagging-track-sign-usage.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
Main idea:
find tracks which can help reconstruct the sign of the B if you know the track sign.
label = signB * signTrack
* the highest output means that the B has the same sign as the track
* the lowest output means that the B has the opposite sign to the track
Define features | features = list(set(data.columns) - {'index', 'run', 'event', 'i', 'signB', 'N_sig_sw', 'Bmass', 'mult',
'PIDNNp', 'PIDNNpi', 'label', 'thetaMin', 'Dist_phi', event_id_column,
'mu_cut', 'e_cut', 'K_cut', 'ID', 'diff_phi', 'group_column'})
featu... | Stefania_files/track-based-tagging-track-sign-usage.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
PID pairs scatters | figure(figsize=[15, 16])
bins = 60
step = 3
for i, (feature1, feature2) in enumerate(combinations(['PIDNNk', 'PIDNNm', 'PIDNNe', 'PIDNNp', 'PIDNNpi'], 2)):
subplot(4, 3, i + 1)
Z, (x, y) = numpy.histogramdd(data_sw_passed[[feature1, feature2]].values, bins=bins, range=([0, 1], [0, 1]))
pcolor(numpy.log(Z).T... | Stefania_files/track-based-tagging-track-sign-usage.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
count of tracks | _, n_tracks = numpy.unique(data_sw_passed[event_id_column], return_counts=True)
hist(n_tracks, bins=100)
title('Number of tracks')
# plt.savefig('img/tracks_number_less_PID.png' , format='png') | Stefania_files/track-based-tagging-track-sign-usage.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
DT | from rep.estimators import XGBoostClassifier
xgb_base = XGBoostClassifier(n_estimators=100, colsample=0.7, eta=0.01, nthreads=12,
subsample=0.1, max_depth=6)
xgb_folding = FoldingGroupClassifier(xgb_base, n_folds=2, random_state=11,
train_features=feat... | Stefania_files/track-based-tagging-track-sign-usage.ipynb | tata-antares/tagging_LHCb | apache-2.0 |
Triangular mesh generation
In the last lesson we learned how to create a quad mesh by Transfinite Interpolation to accurately approximate the strong topography of a sea dike. We can use this mesh for spectral element modelling. But what should we do if we need a triangular mesh, for example for finite element modelling... | # Import Libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Here, I introduce a new library, which is useful
# to define the fonts and size of a figure in a notebook
from pylab import rcParams
# Get rid of a Matplotlib deprecation warning
import warnings
warnings.filterwarnings("ignore... | 02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb | daniel-koehn/Theory-of-seismic-waves-II | gpl-3.0 |
This quad mesh is already able to accurately describe the free-surface topography.
Triangular mesh generation
If we need a triangular mesh, for example for finite element or finite volume modelling, we could apply Delaunay triangulation to the node point distribution of the Yigma Tepe TFI mesh. For further details re... | # Reshape X and Z vector
x = X.flatten()
z = Z.flatten()
# Assemble x and z vector into NX*NZ x 2 matrix
points = np.vstack([x,z]).T | 02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb | daniel-koehn/Theory-of-seismic-waves-II | gpl-3.0 |
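The flatten/vstack step above can be checked on a tiny grid. This sketch uses illustrative coordinate arrays, not the notebook's TFI mesh:

```python
import numpy as np

# A 2x3 grid of node coordinates, as produced by e.g. np.meshgrid
X = np.array([[0.0, 1.0, 2.0],
              [0.0, 1.0, 2.0]])
Z = np.array([[0.0, 0.0, 0.0],
              [1.0, 1.0, 1.0]])

# Flatten each coordinate array, then stack into an (NX*NZ) x 2 point list
points = np.vstack([X.flatten(), Z.flatten()]).T

print(points.shape)   # (6, 2): one row per mesh node
```

Each row of `points` is one `(x, z)` node, which is exactly the layout the SciPy triangulation routines expect.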
Next, we compute the Voronoi diagram for the mesh points. This describes the partitioning of a plane with n points into convex polygons such that each polygon contains exactly one generating point and every point in a given polygon is closer to its generating point than to any other. | # calculate and plot Voronoi diagram for mesh points
from scipy.spatial import Voronoi, voronoi_plot_2d
vor = Voronoi(points)
plt.figure(figsize=(12,6))
ax = plt.subplot(111, aspect='equal')
voronoi_plot_2d(vor, ax=ax)
plt.title("Part of Yigma Tepe (Voronoi diagram)" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.xli... | 02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb | daniel-koehn/Theory-of-seismic-waves-II | gpl-3.0 |
The Delaunay triangulation creates triangles by connecting the points in neighbouring Voronoi cells. | # Apply Delaunay triangulation to the quad mesh node points
from scipy.spatial import Delaunay
tri = Delaunay(points)
plt.figure(figsize=(12,6))
ax = plt.subplot(111, aspect='equal')
voronoi_plot_2d(vor, ax=ax)
plt.triplot(points[:,0], points[:,1], tri.simplices.copy(), linewidth=3, color='b')
plt.title("Part of Yigm... | 02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb | daniel-koehn/Theory-of-seismic-waves-II | gpl-3.0 |
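As a minimal sanity check of what `scipy.spatial.Delaunay` returns, four points forming a convex quadrilateral (illustrative coordinates, not the Yigma Tepe mesh) triangulate into exactly two simplices:

```python
import numpy as np
from scipy.spatial import Delaunay

# Four points in general position whose convex hull is a quadrilateral
pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.5, 2.5]])
tri = Delaunay(pts)

# Two triangles cover the quadrilateral; each simplex is a triple of point indices
print(tri.simplices.shape)  # (2, 3)
```

The `simplices` array indexes back into `pts`, which is why `plt.triplot` takes both the point coordinates and the simplices.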
Let's take a look at the final mesh for the Yigma Tepe model | # Plot triangular mesh
plt.triplot(points[:,0], points[:,1], tri.simplices.copy())
plt.title("Yigma Tepe Delaunay mesh" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
plt.show() | 02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb | daniel-koehn/Theory-of-seismic-waves-II | gpl-3.0 |
The regular triangulation within the tumulus looks reasonable. However, the Delaunay triangulation also added unwanted triangles above the topography. To solve this problem we have to use constrained Delaunay triangulation in order to restrict the triangulation to the model below the free-surface topography. Unfortunat... | # import triangulate library
from triangle import triangulate, show_data, plot as tplot
import triangle | 02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb | daniel-koehn/Theory-of-seismic-waves-II | gpl-3.0 |
In order to use the constrained Delaunay triangulation, we obviously have to define the constraining vertex points lying on the boundaries of our model. In this case it is quite easy, because the TFI mesh is regular.
OK, perhaps not so easy, because we have to be sure that no redundant points are in the final list and... | # Estimate boundary points
# surface topography
surf = np.vstack([X[9,:-2],Z[9,:-2]]).T
# right model boundary
right = np.vstack([X[1:,69],Z[1:,69]]).T
# bottom model boundary
bottom = np.vstack([X[0,1:],Z[0,1:]]).T
# left model boundary
left = np.vstack([X[:-2,0],Z[:-2,0]]).T
# assemble model boundary
model_stac... | 02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb | daniel-koehn/Theory-of-seismic-waves-II | gpl-3.0 |
The above code looks a little bit chaotic, but you can check that the points in the resulting array model_bound are correctly sorted and contain no redundant points. | plt.plot(model_bound[:,0],model_bound[:,1],'bo')
plt.title("Yigma Tepe model boundary" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
plt.show() | 02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb | daniel-koehn/Theory-of-seismic-waves-II | gpl-3.0 |
Good, now we have defined the model boundary points. Time for some constrained Delaunay triangulation ... | # define vertices (no redundant points)
vert = model_bound
# apply Delaunay triangulation to vertices
tri = triangle.delaunay(vert)
# define vertex markers
vertm = np.array(np.zeros((len(vert),1)),dtype='int32')
# define how the vertices are connected, e.g. point 0 is connected to point 1,
# point 1 to point 2 and ... | 02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb | daniel-koehn/Theory-of-seismic-waves-II | gpl-3.0 |
Very good, compared to the SciPy Delaunay triangulation, no triangles are added above the topography. However, most triangles have very small minimum angles, which would lead to serious numerical issues in later finite element modelling runs. So in the next step we restrict the minimum angle to 20° using the option q20... | cncfq20dt = triangulate(A,'pq20D')
ax = plt.subplot(111, aspect='equal')
tplot.plot(ax,**cncfq20dt) | 02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb | daniel-koehn/Theory-of-seismic-waves-II | gpl-3.0 |
Finally, we want a more even distribution of the triangle sizes. This can be achieved by imposing a maximum area on the triangles with the option a20. | cncfq20adt = triangulate(A,'pq20a20D')
ax = plt.subplot(111, aspect='equal')
tplot.plot(ax,**cncfq20adt) | 02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb | daniel-koehn/Theory-of-seismic-waves-II | gpl-3.0 |
All hypotheses discussed herein will be expressed with Gaussian / normal distributions. Let's look at the properties of this distribution.
Start by plotting it. We'll set the mean to 0 and the width to 1: the standard normal distribution. | x = np.arange(-10, 10, 0.001)
plt.plot(x,norm.pdf(x,0,1)) # final arguments are mean and width | error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
Now look at the cumulative distribution function of the standard normal, which integrates from negative infinity up to the function argument, on a unit-normalized distribution. | norm.cdf(0) | error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
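Two quick checks on the standard normal CDF: by symmetry exactly half the mass lies below 0, and about 68.3% lies within one standard deviation of the mean:

```python
from scipy.stats import norm

# Half the mass of a symmetric distribution lies below its mean
print(norm.cdf(0))                 # 0.5

# Mass within one standard deviation of the mean (the "one-sigma" rule)
print(norm.cdf(1) - norm.cdf(-1))  # ~0.6827
```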
The function also accepts a list. | norm.cdf([-1., 0, 1]) | error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
Now let's be more explicit about the parameters of the distribution. | mu = 0
sigma = 1
n = norm(loc=mu, scale=sigma)
n.cdf([-1., 0, 1])
sigma=2
mu = 0
n = norm(loc=mu, scale=sigma)
n.cdf([-1., 0, 1]) | error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
In addition to exploring properties of the exact function, we can sample points from it. | [normal() for _ in range(5)] | error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
We can also approximate the exact distribution by sampling a large number of points from it. | size = 1000000
num_bins = 300
plt.hist([normal() for _ in range(size)],num_bins)
plt.xlim([-10,10])
| error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
Data samples
If we have a sample of points, we can summarize them in a model-nonspecific way by calculating the mean.
Here, we draw them from a Gaussian for convenience. | n = 10
my_sample = [normal() for _ in range(n)]
my_sample_mean = np.mean(my_sample)
print(my_sample_mean) | error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
Now let's generate a large number of data samples and plot the corresponding distribution of sample means. | n = 10
means_10 = []
for _ in range(10000):
my_sample = [normal() for _ in range(n)]
my_sample_mean = np.mean(my_sample)
means_10.append(my_sample_mean)
plt.hist(means_10,100)
plt.xlim([-1.5,1.5])
plt.xlabel("P(mean(X))")
plt.show()
n = 100
means_100 = []
for _ in range(10000):
my_sample = [norma... | error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
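The shrinking spread seen above follows the familiar $\sigma/\sqrt{n}$ rule for the standard error of the mean. A quick numerical check, written as a sketch with NumPy's vectorized sampling rather than the list comprehensions used in the notebook:

```python
import numpy as np

rng = np.random.default_rng(0)

stds = {}
for n in (10, 100):
    # 10000 samples of size n from the standard normal; one mean per sample
    means = rng.standard_normal((10000, n)).mean(axis=1)
    stds[n] = means.std()
    # The spread of the sample mean should be close to 1/sqrt(n)
    print(n, stds[n], 1 / np.sqrt(n))
```

Going from n = 10 to n = 100 shrinks the spread by roughly a factor of $\sqrt{10}$, matching the histograms above.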
Note that as the number of data points increases, the variation of the sample mean decreases.
Notation: the variable containing all possible n-sized sets of samples is called $X$. A specific $X$, like the one actually observed in an experiment, is called $X_0$.
What can we say about the data?
are the data consistent with h... | def d(X=[0], mu = 0, sigma = 1):
X_bar = np.mean(X)
return (X_bar - mu) / sigma * np.sqrt(len(X))
n = 10
my_sample = [normal() for _ in range(n)]
d(my_sample) | error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
Let's numerically determine the sampling distribution under the hypothesis: $H_0$: $\mu = 0, \sigma = 1$ | size = 100000
n = 10
d_sample = []
for _ in range(size):
my_sample = [normal() for _ in range(n)] # get a sample of size n
d_sample.append(d(my_sample)) # add test statistic for this sample to the list
plt.hist(d_sample,100)
plt.xlabel("P(d(X);H0)")
| error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
With this sampling distribution (which can be calculated exactly), we know exactly how likely a particular result $d(X_0)$ is. We also know how likely it is to observe a result that is even less probable than $d(X_0)$, $P(d(X) > d(X_0); \mu)$.
Rejecting the null
This probability is the famous p-value. When the p-value... | # look at the distributions of sample means for two hypotheses
def make_histograms(mu0=0,mu1=1,num_samples=10000,n=100,sigma=1):
#d0_sample = []
#d1_sample = []
m0_sample = []
m1_sample = []
for _ in range(num_samples):
H0_sample = [normal(loc=mu0,scale=sigma) for _ in range(n)] # get a s... | error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
Now, imagine that we observe $\bar X_0 = 0.4$. The probability of $\bar X > 0.4$ is less than $2\%$ under $H_0$, so let's say we've rejected $H_0$.
Question: what regions of $\mu$ (defined as $\mu > \mu_1$) have been severely tested?
$SEV(\mu>\mu_1) = P(d(X)<d(X_0);!(\mu>\mu_1)) = P(d(X)<d(X_0); \mu<=\mu_1)$ ---> $P(d(... | # severity for the interval: mu > mu_1
# note that we calculate the probability in terms of the _lower bound_ of the interval,
# since it will provide the _lowest_ severity
def severity(mu_1=0, x=[0], mu0=0, sigma=sigma, n=100):
# find the mean of the observed data
x_bar = np.mean(x)
# calculate the test... | error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
Calculate the severity of an outcome that lies below the lower bound of a range of alternative hypotheses ($\mu > \mu_1$). | sigma = 2
mu_1 = 0.2
x = [0.4]
severity(mu_1=mu_1,x=x,sigma=sigma)
num_samples = 10000
n = 100
mu0 = 0
mu1 = 0.2
sigma=2
make_histograms(mu0=mu0,mu1=mu1,num_samples=num_samples,n=n,sigma=sigma) | error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
Calculate the severity for a set of observations. | x_bar_values = [[0.4],[0.6],[1.]]
color_indices = ["b","k","r"]
for x,color_idx in zip(x_bar_values,color_indices):
mu_values = scipy.linspace(0,1,100)
sev = [severity(mu_1=mu_1,x=x,sigma=sigma) for mu_1 in mu_values]
plt.plot(mu_values,sev,color_idx,label=x)
plt.ylim(0,1.1)
plt.ylabel("severity fo... | error_statistics-101/error_stats_and_severity.ipynb | gitreset/Data-Science-45min-Intros | unlicense |
Create a mock light curve | lc = MockLC(SimulationSetup('M', 0.1, 0.0, 0.0, 'short_transit', cteff=5500, know_orbit=True))
lc.create(wnsigma=[0.001, 0.001, 0.001, 0.001], rnsigma=0.00001, rntscale=0.5, nights=1);
lc.plot(); | notebooks/contamination/example_1b.ipynb | hpparvi/PyTransit | gpl-2.0 |
If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features.
Convert to Numpy Array
Although SFrames offer a number of benefits to u... | print sales.head()
import numpy as np # note this allows us to refer to numpy as np instead | Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb | subhankarb/Machine-Learning-PlayGround | apache-2.0 |
Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output.
Since the derivative of a sum is the sum of the derivatives we can com... | def feature_derivative(errors, feature):
# Assume that errors and feature are both numpy arrays of the same length (number of data points)
# compute twice the dot product of these vectors as 'derivative' and return the value
derivative = 2 * np.dot(errors, feature)
return derivative | Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb | subhankarb/Machine-Learning-PlayGround | apache-2.0 |
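The factor of 2 in feature_derivative can be verified against a finite-difference approximation of the RSS cost. This check is illustrative and not part of the assignment; the data here is random:

```python
import numpy as np

def feature_derivative(errors, feature):
    # Derivative of RSS with respect to one weight: 2 * (errors . feature)
    return 2 * np.dot(errors, feature)

rng = np.random.default_rng(1)
feature = rng.random(50)
output = rng.random(50)
w = 0.7                        # a single weight for a single feature

errors = w * feature - output  # predictions minus observed outputs
analytic = feature_derivative(errors, feature)

# Central finite difference of RSS(w) = sum((w*feature - output)^2)
eps = 1e-6
rss = lambda w_: np.sum((w_ * feature - output) ** 2)
numeric = (rss(w + eps) - rss(w - eps)) / (2 * eps)

print(analytic, numeric)       # the two should agree closely
```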
Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of de... | from math import sqrt # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
converged = False
weights = np.array(initial_weights) # make sure it's a numpy array
cou... | Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb | subhankarb/Machine-Learning-PlayGround | apache-2.0 |
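Since the cell above is truncated, here is a minimal sketch of the descent loop it describes, run on a tiny synthetic problem. The step size and tolerance are chosen for this toy data, not the assignment's house-price data:

```python
import numpy as np

def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
    weights = np.array(initial_weights, dtype=float)
    while True:
        predictions = feature_matrix @ weights
        errors = predictions - output
        # gradient_j = 2 * errors . feature_j, assembled for all features at once
        gradient = 2 * feature_matrix.T @ errors
        weights -= step_size * gradient
        # Stop when the gradient magnitude falls below the tolerance
        if np.sqrt(np.sum(gradient ** 2)) < tolerance:
            return weights

rng = np.random.default_rng(0)
x = rng.random(100)
features = np.c_[np.ones(100), x]   # constant term plus one feature
true_weights = np.array([1.0, 2.0])
y = features @ true_weights         # noiseless target

w = regression_gradient_descent(features, y, [0.0, 0.0], step_size=4e-3, tolerance=1e-6)
print(w)   # close to [1, 2]
```

With a noiseless target the loop recovers the generating weights almost exactly, which is a useful smoke test before running on real data.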
Now compute your predictions using test_simple_feature_matrix and your weights from above. | predictions = predict_output(test_simple_feature_matrix, simple_weights)
print predictions | Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb | subhankarb/Machine-Learning-PlayGround | apache-2.0 |
Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 1 (round to nearest dollar)? | print predictions[0] | Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb | subhankarb/Machine-Learning-PlayGround | apache-2.0 |
Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output). | rss = ((predictions - test_output) ** 2).sum()
print rss | Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb | subhankarb/Machine-Learning-PlayGround | apache-2.0 |
Running a multiple regression
Now we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters: | model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, multi_output) = get_numpy_data(train_data, model_features, my_output)
initial_weights = np.array([-100000., 1., 1.])
step_size = 4e-12
tolerance = 1e9 | Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb | subhankarb/Machine-Learning-PlayGround | apache-2.0 |
Use the above parameters to estimate the model weights. Record these values for your quiz. | multi_weights = regression_gradient_descent(feature_matrix, multi_output, initial_weights, step_size, tolerance)
print multi_weights | Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb | subhankarb/Machine-Learning-PlayGround | apache-2.0 |
Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first! | (test_multi_feature_matrix, multi_output) = get_numpy_data(test_data, model_features, my_output)
multi_predictions = predict_output(test_multi_feature_matrix, multi_weights)
print multi_predictions | Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb | subhankarb/Machine-Learning-PlayGround | apache-2.0 |
Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 2 (round to nearest dollar)? | print multi_predictions[0] | Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb | subhankarb/Machine-Learning-PlayGround | apache-2.0 |
Quiz Question: Which estimate was closer to the true price for the 1st house on the Test data set, model 1 or model 2?
Now use your predictions and the output to compute the RSS for model 2 on TEST data. | print 'prediction from first model is $356134 and prediction from 2nd model is $366651' | Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb | subhankarb/Machine-Learning-PlayGround | apache-2.0 |
Quiz Question: Which model (1 or 2) has lowest RSS on all of the TEST data? | rss = ((multi_predictions - multi_output) ** 2).sum()
print rss
print 'RSS from first model is 2.75400047593e+14 and RSS from 2nd model is 2.70263446465e+14' | Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb | subhankarb/Machine-Learning-PlayGround | apache-2.0 |
Again, none of these are beautiful, but for mean and standard deviation I think that magnetic_field_y and magnetic_field_z will be the most helpful.
That gives us a "who made the cut" feature list:
attitude_roll
attitude_pitch
attitude_yaw
rotation_rate_x
rotation_rate_y
gravity_z
user_acc_y
user_acc_z
magnetic_field_y... | # http://stackoverflow.com/questions/17315737/split-a-large-pandas-dataframe
# input - df: a Dataframe, chunkSize: the chunk size
# output - a list of DataFrame
# purpose - splits the DataFrame into smaller of max size chunkSize (last may be smaller)
def splitDataFrameIntoSmaller(df, chunkSize = 1000):
listOfDf = ... | .ipynb_checkpoints/.ipynb_checkpoints/A3-checkpoint.ipynb | eherold/PersonalInformatics | mit |
Now it's time to add those features | # This is where the feature data will go. The array for each activity will have length 30.
walk_featured = []
drive_featured = []
static_featured = []
upstairs_featured = []
run_featured = []
# Populate the features
for df in walk_chunked:
features = df.mean()[['attitude_roll','rotation_rate_x','user_acc_z','user_... | .ipynb_checkpoints/.ipynb_checkpoints/A3-checkpoint.ipynb | eherold/PersonalInformatics | mit |
Running Cross-Validation | # Create and run cross-validation on a K-Nearest Neighbors classifier
knn = KNeighborsClassifier()
knn_scores = cross_val_score(knn, all_featured, target, cv = 5)
print 'K-NEAREST NEIGHBORS CLASSIFIER'
print knn_scores
# Create and run cross-validation on a Logistic Regression classifier
lr = LogisticRegression()
lr_s... | .ipynb_checkpoints/.ipynb_checkpoints/A3-checkpoint.ipynb | eherold/PersonalInformatics | mit |
What if I don't know how to use a function? You can access the documentation with:
? <module>.<function>
Let's look at the documentation of math.log10 | ?? math.log10 | notebooks/python_intro.ipynb | mined-gatech/pymks_overview | mit |
Use the cell below to manipulate the array we just created. | B + B | notebooks/python_intro.ipynb | mined-gatech/pymks_overview | mit |
Let's do some simple matrix multiplication using np.dot.
$$ \mathbf{A} \overrightarrow{x} = \overrightarrow{y}$$
First checkout the documentation of np.dot. | ? np.dot
N = 5
A = np.eye(N) * 2
x = np.arange(N)
print('A =')
print(A)
print('x =')
print(x)
y = np.dot(A, x)
print('y =')
print(y) | notebooks/python_intro.ipynb | mined-gatech/pymks_overview | mit |
Use the cell below to call another function from NumPy.
Scikit-Learn
Scikit-Learn, a.k.a. sklearn, is a scientific toolkit (there are many others) for machine learning and is built on SciPy and NumPy.
Below is an example from scikit-learn for linear regression.
This example also uses the plotting library matplotlib to... | %matplotlib inline
# Code source: Jaques Grobler
# License: BSD 3 clause
import matplotlib.pyplot as plt
from sklearn import datasets, linear_model
# Load the diabetes dataset
diabetes = datasets.load_diabetes()
# Use only one feature
diabetes_X = diabetes.data[:, np.newaxis]
diabetes_X_temp = diabetes_X[:, :, 2]... | notebooks/python_intro.ipynb | mined-gatech/pymks_overview | mit |
Series
Pandas' Series class extends NumPy's ndarray with a labelled index. The key to using Series is to understand how to use its index. | # Create a Series with auto-generated indices
pd.Series(data=[100, 101, 110, 111], dtype=np.int8)
# Create a Series with custom indices
pd.Series(data=[100, 101, 110, 111], index=['a', 'b', 'c', 'd'], dtype=np.int8)
# Create a Series using a dictionary
d = {'a' : 100, 'b': 101, 'c': 110, 'd': 111}
pd.Series(data=d,... | pandas.ipynb | sheikhomar/ml | mit |
Arithmetic | day1 = pd.Series(data=[400, 600, 400], index=['breakfast', 'lunch', 'dinner'], dtype=np.int16)
day1
day2 = pd.Series(data=[350, 500, 150], index=['breakfast', 'lunch', 'snack'], dtype=np.int16)
day2
# Note that only values of matched indices are added together.
day1 + day2 | pandas.ipynb | sheikhomar/ml | mit |
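If the NaN produced for unmatched labels is unwanted, the add method takes a fill_value that substitutes a default for the missing side. A small self-contained illustration on the same data:

```python
import numpy as np
import pandas as pd

day1 = pd.Series([400, 600, 400], index=['breakfast', 'lunch', 'dinner'], dtype=np.int16)
day2 = pd.Series([350, 500, 150], index=['breakfast', 'lunch', 'snack'], dtype=np.int16)

# Unmatched labels ('dinner', 'snack') are treated as 0 instead of producing NaN
total = day1.add(day2, fill_value=0)
print(total)
```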
DataFrame
A DataFrame is container for tabular data. Basically, a DataFrame is just a collection of Series that share the same index. | def init_df():
return pd.DataFrame(data=np.arange(1,17).reshape(4,4), index='w x y z'.split(), columns='A B C D'.split())
df = init_df()
df | pandas.ipynb | sheikhomar/ml | mit |
Creating and deleting | # Create a new column based on another column
df['E'] = df['A'] ** 2
df
# Create a new DataFrame, where certain columns are excluded.
df.drop(['A', 'E'], axis=1)
# Remove a column permanently
df.drop('E', axis=1, inplace=True)
df | pandas.ipynb | sheikhomar/ml | mit |
Querying | # Select column 'A'
df['A']
# Note that all columns are stored as Series objects
type(df['A'])
# Selecting multiple columns, we get a new DataFrame object
df[['A', 'D']]
# Select a row by its label
df.loc['x']
# Select a row by its numerical index position
df.iloc[0]
# Select the value of the first cell
df.loc['w... | pandas.ipynb | sheikhomar/ml | mit |
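loc and iloc address the same cells through different keys: labels versus integer positions. On the DataFrame built above (reconstructed here so the snippet stands alone):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(data=np.arange(1, 17).reshape(4, 4),
                  index='w x y z'.split(), columns='A B C D'.split())

# Label-based and position-based selection reach the same cell
print(df.loc['w', 'A'], df.iloc[0, 0])   # both 1
print(df.loc['y', 'C'], df.iloc[2, 2])   # both 11
```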
Indices | # Reset the index to a numerical value
# Note that the old index will become
# a column in our DataFrame.
df.reset_index()
# Set a new index.
df['Country'] = 'CA DE DK NO'.split()
df.set_index('Country')
# To overrides the old index use following line instead:
# df.set_index('Country', inplace=True) | pandas.ipynb | sheikhomar/ml | mit |
Hierarchical indexing | outside = 'p p p q q q'.split()
inside = [1, 2, 3, 1, 2, 3]
hierarchical_index = list(zip(outside, inside))
multi_index = pd.MultiIndex.from_tuples(hierarchical_index, names='outside inside'.split())
multi_index
df = pd.DataFrame(data=np.random.randn(6,2), index=multi_index, columns=['Column 1', 'Column 2'])
df
# Sel... | pandas.ipynb | sheikhomar/ml | mit |
Cross section is used when we need to select data at a particular level. | # Select rows whose inside index is equal 1
df.xs(1, level='I') | pandas.ipynb | sheikhomar/ml | mit |
Dealing with missing data | d = {'A': [1, 2, np.nan], 'B': [1, np.nan, np.nan], 'C': [1, 2, 3]}
df = pd.DataFrame(d)
df
# Drop any rows with missing values
df.dropna()
# Keep only the rows with at least 2 non-na values:
df.dropna(thresh=2) | pandas.ipynb | sheikhomar/ml | mit |
The subset parameter can be used to restrict an operation to specific columns instead of all columns. For instance, when dropping rows with missing values, subset specifies the list of columns to inspect.
For instance, df.dropna(thresh=1, subset=['A','B']) will drop all rows with fewer than 1 non-NA value in only ... | # Drop any columns with missing values
df.dropna(axis=1)
# Replace missing values
df.fillna(0)
# Replace missing values with the mean of the column
df['A'].fillna(value=df['A'].mean()) | pandas.ipynb | sheikhomar/ml | mit |
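To make the subset behaviour concrete, here is the same small frame rebuilt so the snippet stands alone: restricting dropna to column A keeps the rows whose A is present, even though column B has more gaps:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 2, np.nan], 'B': [1, np.nan, np.nan], 'C': [1, 2, 3]})

# Only column A is inspected for missing values; B's NaNs are ignored
kept = df.dropna(subset=['A'])
print(kept.shape)   # (2, 3)
```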
Grouping | columns = 'Id EmployeeName JobTitle TotalPay Year'.split()
salaries_df = pd.read_csv('data/sf-salaries-subset.csv', index_col='Id', usecols=columns)
salaries_df.head()
# Group by job title
salaries_by_job_df = salaries_df.groupby('JobTitle')
# Get some statistics on the TotalPay column
salaries_by_job_df['TotalPay'].... | pandas.ipynb | sheikhomar/ml | mit |
Combining DataFrames | df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B':... | pandas.ipynb | sheikhomar/ml | mit |
The merge function is useful if we want to combine DataFrames like we join tables using SQL. | left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
'key2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', ... | pandas.ipynb | sheikhomar/ml | mit |
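A self-contained merge on a single key shows the SQL-join behaviour the text describes; the example data here is illustrative:

```python
import pandas as pd

left = pd.DataFrame({'key': ['K0', 'K1', 'K2'], 'A': ['A0', 'A1', 'A2']})
right = pd.DataFrame({'key': ['K0', 'K1', 'K3'], 'B': ['B0', 'B1', 'B3']})

# Inner join keeps only keys present in both frames
inner = pd.merge(left, right, on='key')
print(inner)

# Outer join keeps every key, filling the gaps with NaN
outer = pd.merge(left, right, on='key', how='outer')
print(outer.shape)   # (4, 3)
```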
The join function is used to combine the columns of DataFrames that may have different indices. It works exactly like the merge function except the keys that we join on are on the indices instead of the columns. | left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
left.join(right)
left.join(right,... | pandas.ipynb | sheikhomar/ml | mit |
Operations | df = pd.DataFrame({'col1':[1,2,3,4],
'col2':[444,555,666,444],
'col3':['abc','def','ghi','xyz']})
df.head()
# Find the unique values in col2
df['col2'].unique()
# Find the number of unique values in col2
df['col2'].nunique()
# Find the unique values in col2
df['col2'].value_coun... | pandas.ipynb | sheikhomar/ml | mit |
Reading data from HTML | data = pd.read_html('https://borsen.dk/kurser/danske_aktier/c20_cap.html', thousands='.', decimal=',')
df = data[0]
# Show information about the data
df.info()
df.columns
df.columns = ['Aktie', '%', '+/-', 'Kurs', 'ATD%', 'Bud', 'Udbud', 'Omsætning']
df.info()
df['Omsætning'][0] | pandas.ipynb | sheikhomar/ml | mit |
Loading up the raw data | mldb.put('/v1/procedures/import_reddit', {
"type": "import.text",
"params": {
"dataFileUrl": "file://mldb/mldb_test_data/reddit.csv.zst",
'delimiter':'',
'quoteChar':'',
'outputDataset': 'reddit_raw',
'runOnCreation': True
}
})
| container_files/demos/Mapping Reddit.ipynb | mldbai/mldb | apache-2.0 |
And here is what our raw dataset looks like. The lineText column will need to be parsed: it's comma-delimited, with the first token being a user ID and the remaining tokens being the set of subreddits that user contributed to. | mldb.query("select * from reddit_raw limit 5") | container_files/demos/Mapping Reddit.ipynb | mldbai/mldb | apache-2.0 |
Transforming the raw data into a sparse matrix
We will create and run a Procedure of type transform. The tokenize function will project out the subreddit names into columns. | mldb.put('/v1/procedures/reddit_import', {
"type": "transform",
"params": {
"inputData": "select tokenize(lineText, {offset: 1, value: 1}) as * from reddit_raw",
"outputDataset": "reddit_dataset",
"runOnCreation": True
}
}) | container_files/demos/Mapping Reddit.ipynb | mldbai/mldb | apache-2.0 |
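The `tokenize(lineText, {offset: 1, value: 1})` call drops the leading user ID and maps each remaining token to the value 1. A rough, hypothetical plain-Python equivalent of that per-line parsing (not MLDB's implementation):

```python
def parse_line(line_text):
    """Drop the first comma-separated token (the user ID) and
    map each remaining subreddit name to 1."""
    tokens = line_text.split(",")
    return {subreddit: 1 for subreddit in tokens[1:]}

# duplicate subreddits collapse to a single key, mirroring value: 1
row = parse_line("603,politics,AskReddit,politics")
```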
Here is the resulting dataset: it's a sparse matrix with a row per user and a column per subreddit, where the cells are 1 if the row's user was a contributor to the column's subreddit, and null otherwise. | mldb.query("select * from reddit_dataset limit 5") | container_files/demos/Mapping Reddit.ipynb | mldbai/mldb | apache-2.0 |
Dimensionality Reduction with Singular Value Decomposition (SVD)
We will create and run a Procedure of type svd.train. | mldb.put('/v1/procedures/reddit_svd', {
"type" : "svd.train",
"params" : {
"trainingData" : """
SELECT
COLUMN EXPR (AS columnName() ORDER BY rowCount() DESC, columnName() LIMIT 4000)
FROM reddit_dataset
""",
"columnOutputDataset" : "reddit_svd_em... | container_files/demos/Mapping Reddit.ipynb | mldbai/mldb | apache-2.0 |
The result of this operation is a new dataset with a row per subreddit for the 4000 most-active subreddits and columns representing coordinates for that subreddit in a 100-dimensional space.
Note: the row names are the subreddit names followed by ".numberEquals.1" because the SVD training procedure interpreted the inp... | mldb.query("select * from reddit_svd_embedding limit 5") | container_files/demos/Mapping Reddit.ipynb | mldbai/mldb | apache-2.0 |
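Conceptually, a truncated SVD keeps only the top singular vectors; a toy numpy illustration of embedding the columns (subreddits) of a 0/1 matrix into k dimensions, which sketches the idea rather than MLDB's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# 50 users x 8 subreddits, binary contribution matrix
X = (rng.random((50, 8)) > 0.7).astype(float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3
# each column (subreddit) gets a k-dimensional coordinate
embedding = (np.diag(s[:k]) @ Vt[:k]).T
```

The real procedure does this at scale for the 4000 most-active subreddits with 100 dimensions.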
Clustering with K-Means
We will create and run a Procedure of type kmeans.train. | mldb.put('/v1/procedures/reddit_kmeans', {
"type" : "kmeans.train",
"params" : {
"trainingData" : "select * from reddit_svd_embedding",
"outputDataset" : "reddit_kmeans_clusters",
"numClusters" : 20,
"runOnCreation": True
}
})
| container_files/demos/Mapping Reddit.ipynb | mldbai/mldb | apache-2.0 |
The result of this operation is a simple dataset which associates each row in the input (i.e. each subreddit) to one of 20 clusters. | mldb.query("select * from reddit_kmeans_clusters limit 5") | container_files/demos/Mapping Reddit.ipynb | mldbai/mldb | apache-2.0 |
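The k-means idea itself is simple: alternate between assigning points to their nearest center and moving each center to the mean of its assigned points. A minimal numpy sketch (assumed toy data, not MLDB's implementation):

```python
import numpy as np

def kmeans(points, k, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    # initialize centers from k distinct data points
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, centers = kmeans(pts, k=2)
```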
2-d Dimensionality Reduction with t-SNE
We will create and run a Procedure of type tsne.train. | mldb.put('/v1/procedures/reddit_tsne', {
"type" : "tsne.train",
"params" : {
"trainingData" : "select * from reddit_svd_embedding",
"rowOutputDataset" : "reddit_tsne_embedding",
"runOnCreation": True
}
})
| container_files/demos/Mapping Reddit.ipynb | mldbai/mldb | apache-2.0 |
The result is similar to the SVD step above: we get a row per subreddit and the columns are coordinates, but this time in a 2-dimensional space appropriate for visualization. | mldb.query("select * from reddit_tsne_embedding limit 5") | container_files/demos/Mapping Reddit.ipynb | mldbai/mldb | apache-2.0 |
Counting the number of users per subreddit
We will create and run a Procedure of type transform on the transpose of the original input dataset. | mldb.put('/v1/procedures/reddit_count_users', {
"type": "transform",
"params": {
"inputData": "select columnCount() as numUsers from transpose(reddit_dataset)",
"outputDataset": "reddit_user_counts",
"runOnCreation": True
}
}) | container_files/demos/Mapping Reddit.ipynb | mldbai/mldb | apache-2.0 |
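In pandas terms, counting contributors per subreddit on the user-by-subreddit matrix is just a count of non-null entries per column — a hypothetical small-scale equivalent of the transpose-and-count procedure:

```python
import pandas as pd

# rows are users, columns are subreddits; a cell is 1 if the user
# contributed to that subreddit, and null otherwise
matrix = pd.DataFrame(
    {"politics": [1, None, 1], "funny": [1, 1, None]},
    index=["user1", "user2", "user3"],
)
num_users = matrix.count()  # non-null entries per column
```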
We appended "|1" to the row names in this dataset to allow the merge operation below to work well. | mldb.query("select * from reddit_user_counts limit 5") | container_files/demos/Mapping Reddit.ipynb | mldbai/mldb | apache-2.0 |
Querying and Visualizing the output
We'll use the Query API to get the data into a Pandas DataFrame and then use Bokeh to visualize it.
In the query below we renamed the rows to get rid of the "|1" which the SVD appended to each subreddit name and we filter out rows where cluster is null because we only clustered the... | df = mldb.query("""
select c.* as *, m.* as *, quantize(m.x, 7) as grid_x, quantize(m.y, 7) as grid_y
named c.rowName()
from merge(reddit_tsne_embedding, reddit_kmeans_clusters) as m
join reddit_user_counts as c on c.rowName() = m.rowPathElement(0)
where m.cluster is not null
order by c.n... | container_files/demos/Mapping Reddit.ipynb | mldbai/mldb | apache-2.0 |
Create a soil layer, which defines the median value. | soil_type = pysra.site.DarendeliSoilType(18.0, plas_index=0, ocr=1, stress_mean=50) | examples/example-05.ipynb | arkottke/pysra | mit |
Create the simulated nonlinear curves | n = 10
correlation = 0
simulated = []
for name, model in zip(
["Darendeli (2001)", "EPRI SPID (2014)"],
[
pysra.variation.DarendeliVariation(correlation),
pysra.variation.SpidVariation(correlation),
],
):
simulated.append((name, [model(soil_type) for _ in range(n)])) | examples/example-05.ipynb | arkottke/pysra | mit |
Compare the uncertainty models. | fig, axes = plt.subplots(2, 2, sharex=True, sharey="row", subplot_kw={"xscale": "log"})
for i, (name, sims) in enumerate(simulated):
for j, prop in enumerate(["mod_reduc", "damping"]):
axes[j, i].plot(
getattr(soil_type, prop).strains,
np.transpose([getattr(s, prop).values for s in ... | examples/example-05.ipynb | arkottke/pysra | mit |
1. Inference
detect.py runs YOLOv3 inference on a variety of sources, downloading models automatically from the latest YOLOv3 release, and saving results to runs/detect. Example inference sources are:
<img src="https://user-images.githubusercontent.com/26833433/114307955-5c7e4e80-9ae2-11eb-9f50-a90e39bee53f.png" width=... | !python detect.py --weights yolov3.pt --img 640 --conf 0.25 --source data/images/
Image(filename='runs/detect/exp/zidane.jpg', width=600) | components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb | robocomp/robocomp-robolab | gpl-3.0 |
2. Test
Test a model's accuracy on COCO val or test-dev datasets. Models are downloaded automatically from the latest YOLOv3 release. To show results by class use the --verbose flag. Note that pycocotools metrics may be ~1% better than the equivalent repo metrics, as is visible below, due to slight differences in mAP c... | # Download COCO val2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017val.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip
# Run YOLOv3 on COCO val2017
!python test.py --weights yolov3.pt --data coco.yaml --img 640 --iou 0.65 | components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb | robocomp/robocomp-robolab | gpl-3.0 |
COCO test-dev2017
Download COCO test2017 dataset (7GB - 40,000 images), to test model accuracy on test-dev set (20,000 images, no labels). Results are saved to a *.json file which should be zipped and submitted to the evaluation server at https://competitions.codalab.org/competitions/20794. | # Download COCO test-dev2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017labels.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip # unzip labels
!f="test2017.zip" && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f && rm $f # 7GB, 41k image... | components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb | robocomp/robocomp-robolab | gpl-3.0 |
3. Train
Download COCO128, a small 128-image tutorial dataset, start tensorboard and train YOLOv3 from a pretrained checkpoint for 3 epochs (note actual training is typically much longer, around 300-1000 epochs, depending on your dataset). | # Download COCO128
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip | components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb | robocomp/robocomp-robolab | gpl-3.0 |
Train a YOLOv3 model on COCO128 with --data coco128.yaml, starting from pretrained --weights yolov3.pt, or from randomly initialized --weights '' --cfg yolov3.yaml. Models are downloaded automatically from the latest YOLOv3 release, and COCO, COCO128, and VOC datasets are downloaded automatically on first use.
All trai... | # Tensorboard (optional)
%load_ext tensorboard
%tensorboard --logdir runs/train
# Weights & Biases (optional)
%pip install -q wandb
import wandb
wandb.login()
# Train YOLOv3 on COCO128 for 3 epochs
!python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov3.pt --nosave --cache | components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb | robocomp/robocomp-robolab | gpl-3.0 |
4. Visualize
Weights & Biases Logging 🌟 NEW
Weights & Biases (W&B) is now integrated with YOLOv3 for real-time visualization and cloud logging of training runs. This allows for better run comparison and introspection, as well as improved visibility and collaboration for teams. To enable W&B, pip install wandb, and then tr... | Image(filename='runs/train/exp/train_batch0.jpg', width=800) # train batch 0 mosaics and labels
Image(filename='runs/train/exp/test_batch0_labels.jpg', width=800) # test batch 0 labels
Image(filename='runs/train/exp/test_batch0_pred.jpg', width=800) # test batch 0 predictions | components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb | robocomp/robocomp-robolab | gpl-3.0 |
<img src="https://user-images.githubusercontent.com/26833433/83667642-90fcb200-a583-11ea-8fa3-338bbf7da194.jpeg" width="750">
train_batch0.jpg shows train batch 0 mosaics and labels
<img src="https://user-images.githubusercontent.com/26833433/83667626-8c37fe00-a583-11ea-997b-0923fe59b29b.jpeg" width="750">
test_batch0_... | from utils.plots import plot_results
plot_results(save_dir='runs/train/exp') # plot all results*.txt as results.png
Image(filename='runs/train/exp/results.png', width=800) | components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb | robocomp/robocomp-robolab | gpl-3.0 |
<img src="https://user-images.githubusercontent.com/26833433/97808309-8182b180-1c66-11eb-8461-bffe1a79511d.png" width="800">
Environments
YOLOv3 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Google Colab and Kaggle not... | # Re-clone repo
%cd ..
%rm -rf yolov3 && git clone https://github.com/ultralytics/yolov3
%cd yolov3
# Reproduce
for x in 'yolov3', 'yolov3-spp', 'yolov3-tiny':
!python test.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.25 --iou 0.45 # speed
!python test.py --weights {x}.pt --data coco.yaml --img 640 --c... | components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb | robocomp/robocomp-robolab | gpl-3.0 |
Import Data
Cerebral Cortex provides a set of predefined data import routines that fit typical use cases. The most common is the CSV data parser, csv_data_parser. These parsers are easy to write and can be extended to support most types of data. Additionally, the data importer, import_data, needs to be brought into this... | iot_stream = CC.read_csv(file_path="sample_data/data.csv", stream_name="some-sample-iot-stream", column_names=["timestamp", "some_vals", "version", "user"]) | jupyter_demo/import_and_analyse_data.ipynb | MD2Korg/CerebralCortex | bsd-2-clause |
View Imported Data | iot_stream.show(4) | jupyter_demo/import_and_analyse_data.ipynb | MD2Korg/CerebralCortex | bsd-2-clause |
Document Data | stream_metadata = Metadata()
stream_metadata.set_name("iot-data-stream").set_description("This is randomly generated data for demo purposes.") \
.add_dataDescriptor(
DataDescriptor().set_name("timestamp").set_type("datetime").set_attribute("description", "UTC timestamp of data point collection.")) \
.ad... | jupyter_demo/import_and_analyse_data.ipynb | MD2Korg/CerebralCortex | bsd-2-clause |
View Metadata | iot_stream.metadata | jupyter_demo/import_and_analyse_data.ipynb | MD2Korg/CerebralCortex | bsd-2-clause |
How to write an algorithm
This section provides an example of how to write a simple smoothing algorithm and apply it to the data that was just imported.
Import the necessary modules | from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import StructField, StructType, StringType, FloatType, TimestampType, IntegerType
from pyspark.sql.functions import minute, second, mean, window
from pyspark.sql import functions as F
import numpy as np | jupyter_demo/import_and_analyse_data.ipynb | MD2Korg/CerebralCortex | bsd-2-clause |
Define the Schema
This schema defines what the computation module will return to the execution context for each row or window in the datastream. | from pyspark.sql.types import StructField, StructType, StringType, DoubleType, IntegerType, TimestampType
schema = StructType([
StructField("timestamp", TimestampType()),
StructField("some_vals", DoubleType()),
StructField("version", IntegerType()),
StructField("user", StringType())
]) | jupyter_demo/import_and_analyse_data.ipynb | MD2Korg/CerebralCortex | bsd-2-clause |
Write a user defined function
The user-defined function (UDF) is one of two mechanisms available for distributed data processing within the Apache Spark framework.
The pandas_udf Python decorator assigns the recently defined schema as the return type of the UDF. The method, smooth_algo, accepts a pandas DataFrame for each group and returns it with some_vals divided by the group mean. | @pandas_udf(schema, PandasUDFType.GROUPED_MAP)
def smooth_algo(df):
some_vals_mean = df["some_vals"].mean()
df["some_vals"] = df["some_vals"]/some_vals_mean
return df | jupyter_demo/import_and_analyse_data.ipynb | MD2Korg/CerebralCortex | bsd-2-clause |
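The normalization inside the UDF can be checked in plain pandas, outside of Spark: dividing a column by its mean yields values whose mean is 1. A stand-alone check with made-up values:

```python
import pandas as pd

df = pd.DataFrame({"some_vals": [2.0, 4.0, 6.0]})
# same per-group logic as the UDF body: divide by the column mean
df["some_vals"] = df["some_vals"] / df["some_vals"].mean()
# → values become [0.5, 1.0, 1.5], which average to 1.0
```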