markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
d) Single matrix operations In this section we describe a few operations that can be done over matrices: d.1) Transpose A very common operation is the transpose. If you are used to seeing matrix notation, you should know what this operation is. Take a matrix with 2 dimensions:$$ X = \begin{bmatrix} a & b \\ c & d \\ \end{bmatrix} $$Transposing the matrix means reflecting its entries across its diagonal:$$ X^T = \begin{bmatrix} a & c \\ b & d \\ \end{bmatrix} $$This means that the rows of X become its columns and vice-versa. You can obtain the transpose of a matrix by using either `.T` on a matrix or calling `numpy.transpose`: | m1 = np.array([[ .1, 1., 2.], [ 3., .24, 4.], [ 6., 2., 5.]])
print('Initial matrix: \n{}'.format(m1))
m1_transposed = m1.transpose()
print('Transposed matrix with `transpose` \n{}'.format(m1_transposed))
m1_transposed = m1.T
print('Transposed matrix with `T` \n{}'.format(m1_transposed)) | Transposed matrix with `T`
[[0.1 3. 6. ]
[1. 0.24 2. ]
[2. 4. 5. ]]
| MIT | S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb | claury/sidecar-academy-batch2 |
A few examples of non-square matrices. In these, you'll see that the shape (a, b) gets inverted to (b, a): | m1 = np.array([[ .1, 1., 2., 5.], [ 3., .24, 4., .6]])
print('Initial matrix: \n{}'.format(m1))
m1_transposed = m1.T
print('Transposed matrix: \n{}'.format(m1_transposed))
m1 = np.array([[ .1, 1.], [2., 5.], [ 3., .24], [4., .6]])
print('Initial matrix: \n{}'.format(m1))
m1_transposed = m1.T
print('Transposed matrix: \n{}'.format(m1_transposed))
| Initial matrix:
[[0.1 1. ]
[2. 5. ]
[3. 0.24]
[4. 0.6 ]]
Transposed matrix:
[[0.1 2. 3. 4. ]
[1. 5. 0.24 0.6 ]]
| MIT | S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb | claury/sidecar-academy-batch2 |
For vectors represented as matrices, this means transforming from a row vector (1, N) to a column vector (N, 1) or vice-versa: | v1 = np.array([ .1, 1., 2.])
v1_reshaped = v1.reshape((1, -1))
print('Row vector as 2-d array: {}'.format(v1_reshaped))
print('Shape: {}'.format(v1_reshaped.shape))
v1_transposed = v1_reshaped.T
print('Transposed (column vector as 2-d array): \n{}'.format(v1_transposed))
print('Shape: {}'.format(v1_transposed.shape))
v1 = np.array([ 3., .23, 2., .6])
v1_reshaped = v1.reshape((-1, 1))
print('Column vector as 2-d array: \n{}'.format(v1_reshaped))
print('Shape: {}'.format(v1_reshaped.shape))
v1_transposed = v1_reshaped.T
print('Transposed (row vector as 2-d array): {}'.format(v1_transposed))
print('Shape: {}'.format(v1_transposed.shape))
| Column vector as 2-d array:
[[3. ]
[0.23]
[2. ]
[0.6 ]]
Shape: (4, 1)
Transposed (row vector as 2-d array): [[3. 0.23 2. 0.6 ]]
Shape: (1, 4)
| MIT | S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb | claury/sidecar-academy-batch2 |
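As a side note, `np.newaxis` is a common alternative to `reshape` for turning a 1-D array into an explicit row or column vector; a minimal sketch:

```python
import numpy as np

v1 = np.array([.1, 1., 2.])

row = v1[np.newaxis, :]   # shape (1, 3), equivalent to v1.reshape((1, -1))
col = v1[:, np.newaxis]   # shape (3, 1), equivalent to v1.reshape((-1, 1))

print(row.shape, col.shape)         # (1, 3) (3, 1)
print(np.array_equal(row.T, col))   # True: transposing the row vector gives the column vector
```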
d.2) Statistics operators Numpy also allows us to perform several operations over the rows and columns of a matrix, such as: sum, mean, max, min, ... The most important thing to take into account when using these is to know exactly along which direction we are performing the operations. We can perform, for example, a `max` operation over the whole matrix, obtaining the maximum over all of the matrix's values. Or we might want this value for each row, or for each column. Check the following examples: | m1 = np.array([[ .1, 1.], [2., 5.], [ 3., .24], [4., .6]])
print('Initial matrix: \n{}'.format(m1)) | Initial matrix:
[[0.1 1. ]
[2. 5. ]
[3. 0.24]
[4. 0.6 ]]
| MIT | S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb | claury/sidecar-academy-batch2 |
Operating over all of the matrix's values: | print('Total sum of matrix elements: {}'.format(m1.sum()))
print('Maximum of all matrix elements: {}'.format(m1.max()))
print('Minimum of all matrix elements: {}'.format(m1.min()))
print('Mean of all matrix elements: {}'.format(m1.mean())) | Total sum of matrix elements: 15.94
Maximum of all matrix elements: 5.0
Minimum of all matrix elements: 0.1
Mean of all matrix elements: 1.9925
| MIT | S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb | claury/sidecar-academy-batch2 |
Operating across the rows (axis=0) - produces a row with the sum/max/min/mean of each column: | print('Sum of each column: {}'.format(m1.sum(axis=0)))
print('Maximum of each column: {}'.format(m1.max(axis=0)))
print('Minimum of each column: {}'.format(m1.min(axis=0)))
print('Mean of each column: {}'.format(m1.mean(axis=0))) | Sum of each column: [9.1 6.84]
Maximum of each column: [4. 5.]
Minimum of each column: [0.1 0.24]
Mean of each column: [2.275 1.71 ]
| MIT | S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb | claury/sidecar-academy-batch2 |
Operating across the columns (axis=1) - produces one value (sum/max/min/mean) for each row: | print('Sum of each row: {}'.format(m1.sum(axis=1)))
print('Maximum of each row: {}'.format(m1.max(axis=1)))
print('Minimum of each row: {}'.format(m1.min(axis=1)))
print('Mean of each row: {}'.format(m1.mean(axis=1))) | Sum of each row: [1.1 7. 3.24 4.6 ]
Maximum of each row: [1. 5. 3. 4.]
Minimum of each row: [0.1 2. 0.24 0.6 ]
Mean of each row: [0.55 3.5 1.62 2.3 ]
| MIT | S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb | claury/sidecar-academy-batch2 |
As an example, imagine that you have a matrix of shape (n_samples, n_features), where each row represents all the features for one sample. Then, to average over the samples, we do: | m1 = np.array([[ .1, 1.], [2., 5.], [ 3., .24], [4., .6]])
print('Initial matrix: \n{}'.format(m1))
print('\n')
print('Sample 1: {}'.format(m1[0, :]))
print('Sample 2: {}'.format(m1[1, :]))
print('Sample 3: {}'.format(m1[2, :]))
print('Sample 4: {}'.format(m1[3, :]))
print('\n')
print('Average over samples: \n{}'.format(m1.mean(axis=0))) | Initial matrix:
[[0.1 1. ]
[2. 5. ]
[3. 0.24]
[4. 0.6 ]]
Sample 1: [0.1 1. ]
Sample 2: [2. 5.]
Sample 3: [3. 0.24]
Sample 4: [4. 0.6]
Average over samples:
[2.275 1.71 ]
| MIT | S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb | claury/sidecar-academy-batch2 |
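Building on the (n_samples, n_features) layout above, a common use of per-column statistics is feature standardisation; a minimal sketch:

```python
import numpy as np

m1 = np.array([[.1, 1.], [2., 5.], [3., .24], [4., .6]])  # (n_samples, n_features)

# Mean and standard deviation per feature, i.e. per column (axis=0)
mu = m1.mean(axis=0)
sigma = m1.std(axis=0)

m1_standardised = (m1 - mu) / sigma    # the per-column stats are applied to every row
print(m1_standardised.mean(axis=0))    # approximately [0, 0]
print(m1_standardised.std(axis=0))     # approximately [1, 1]
```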
Other statistical functions behave in a similar manner, so it is important to know how to work with the axes of these objects. e) Multiple matrix operations e.1) Element-wise operations Several of the available operations work at the element level, that is, if we have two matrices A and B:$$ A = \begin{bmatrix} a & b \\ c & d \\ \end{bmatrix} $$and $$ B = \begin{bmatrix} e & f \\ g & h \\ \end{bmatrix} $$an element-wise operation produces a matrix:$$ Op(A, B) = \begin{bmatrix} Op(a,e) & Op(b,f) \\ Op(c,g) & Op(d,h) \\ \end{bmatrix} $$You can perform sum and difference, but also element-wise multiplication and division. These are implemented with the regular operators `+`, `-`, `*`, `/`. Check out the examples below: | m1 = np.array([[ .1, 1., 2., 5.], [ 3., .24, 4., .6]])
m2 = np.array([[ .1, 4., .25, .1], [ 2., 1.5, .42, -1.]])
print('Matrix 1: \n{}'.format(m1))
print('Matrix 2: \n{}'.format(m2))
print('\n')
print('Sum: \n{}'.format(m1 + m2))
print('\n')
print('Difference: \n{}'.format(m1 - m2))
print('\n')
print('Multiplication: \n{}'.format(m1*m2))
print('\n')
print('Division: \n{}'.format(m1/m2)) | Matrix 1:
[[0.1 1. 2. 5. ]
[3. 0.24 4. 0.6 ]]
Matrix 2:
[[ 0.1   4.    0.25  0.1 ]
 [ 2.    1.5   0.42 -1.  ]]
Sum:
[[ 0.2 5. 2.25 5.1 ]
[ 5. 1.74 4.42 -0.4 ]]
Difference:
[[ 0. -3. 1.75 4.9 ]
[ 1. -1.26 3.58 1.6 ]]
Multiplication:
[[ 0.01 4. 0.5 0.5 ]
[ 6. 0.36 1.68 -0.6 ]]
Division:
[[ 1. 0.25 8. 50. ]
[ 1.5 0.16 9.52380952 -0.6 ]]
| MIT | S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb | claury/sidecar-academy-batch2 |
For these operations, ideally your matrices should have the same dimensions. An exception to this is when one of the operands can be [broadcasted](https://numpy.org/doc/stable/user/basics.broadcasting.html) over the other. However, we won't cover that in these examples. e.2) Matrix multiplication Although you've seen how to perform element-wise multiplication with the basic operators, one of the most common matrix operations is matrix multiplication, where the output is not the result of an element-wise combination of the inputs, but a linear combination between rows of the first matrix and columns of the second. In other words, element (i, j) of the resulting matrix is the dot product between row i of the first matrix and column j of the second. For example, if the first row of the first matrix is [1, 2, 3] and the first column of the second is [7, 9, 11], the corresponding element of the product breaks down to:$$ 58 = 1 \times 7 + 2 \times 9 + 3 \times 11 $$Numpy already provides this function, so check out the following examples: | m1 = np.array([[ .1, 1., 2., 5.], [ 3., .24, 4., .6]])
m2 = np.array([[ .1, 4.], [.25, .1], [ 2., 1.5], [.42, -1.]])
print('Matrix 1: \n{}'.format(m1))
print('Matrix 2: \n{}'.format(m2))
print('\n')
print('Matrix multiplication: \n{}'.format(np.matmul(m1, m2)))
m1 = np.array([[ .1, 4.], [.25, .1], [ 2., 1.5], [.42, -1.]])
m2 = np.array([[ .1, 1., 2.], [ 3., .24, 4.]])
print('Matrix 1: \n{}'.format(m1))
print('Matrix 2: \n{}'.format(m2))
print('\n')
print('Matrix multiplication: \n{}'.format(np.matmul(m1, m2)))
| Matrix 1:
[[ 0.1 4. ]
[ 0.25 0.1 ]
[ 2. 1.5 ]
[ 0.42 -1. ]]
Matrix 2:
[[0.1  1.   2.  ]
 [3.   0.24 4.  ]]
Matrix multiplication:
[[12.01 1.06 16.2 ]
[ 0.325 0.274 0.9 ]
[ 4.7 2.36 10. ]
[-2.958 0.18 -3.16 ]]
| MIT | S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb | claury/sidecar-academy-batch2 |
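The 58 in the dot-product breakdown above comes from a classic textbook example; the matrices below are one possible pair that reproduces it (an illustrative choice, not taken from this notebook):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[ 7,  8],
              [ 9, 10],
              [11, 12]])

C = np.matmul(A, B)
print(C)                  # [[ 58  64]
                          #  [139 154]]
print(1*7 + 2*9 + 3*11)   # 58, which is element (0, 0) of the product
```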
Notice that in both operations the matrix multiplication of shapes `(k, l)` and `(m, n)` yields a matrix of dimensions `(k, n)`. Additionally, for this operation to be possible, the inner dimensions need to match, that is, `l == m`. See what happens if we try to multiply matrices with incompatible dimensions: | m1 = np.array([[ .1, 4., 3.], [.25, .1, 1.], [ 2., 1.5, .5], [.42, -1., 4.3]])
m2 = np.array([[ .1, 1., 2.], [ 3., .24, 4.]])
print('Matrix 1: \n{}'.format(m1))
print('Shape: {}'.format(m1.shape))
print('Matrix 2: \n{}'.format(m2))
print('Shape: {}'.format(m2.shape))
print('\n')
try:
m3 = np.matmul(m1, m2)
except Exception as e:
print('Matrix multiplication raised the following error: {}'.format(e))
| Matrix 1:
[[ 0.1 4. 3. ]
[ 0.25 0.1 1. ]
[ 2. 1.5 0.5 ]
[ 0.42 -1. 4.3 ]]
Shape: (4, 3)
Matrix 2:
[[0.1  1.   2.  ]
 [3.   0.24 4.  ]]
Shape: (2, 3)
Matrix multiplication raised the following error: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 2 is different from 3)
| MIT | S01 - Bootcamp and Binary Classification/SLU07 - Regression with Linear Regression/Example notebook.ipynb | claury/sidecar-academy-batch2 |
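As a side note, the `@` operator (Python 3.5+) is equivalent to `np.matmul`, and a shape mismatch like the one above can often be resolved by transposing one operand so that the inner dimensions agree; a minimal sketch reusing the two matrices from the cell above:

```python
import numpy as np

m1 = np.array([[.1, 4., 3.], [.25, .1, 1.], [2., 1.5, .5], [.42, -1., 4.3]])  # shape (4, 3)
m2 = np.array([[.1, 1., 2.], [3., .24, 4.]])                                  # shape (2, 3)

# (4, 3) @ (3, 2) -> (4, 2): transposing m2 makes the inner dimensions match
result = m1 @ m2.T
print(result.shape)   # (4, 2)
```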
numpy | def add(image, c):
return uint8(np.clip(float64(image) + c, 0, 255)) | _____no_output_____ | MIT | util/imutil.ipynb | shoulderhu/azure-image-ipy |
matplotlib | def matplot(img, title=None, cmap=None, figsize=None):
col = len(img)
if figsize is None:
plt.figure(figsize=(col * 4, col * 4))
else:
plt.figure(figsize=figsize)
for i, j in enumerate(img):
plt.subplot(1, col, i + 1)
plt.axis("off")
if title != None:
plt.title(title[i])
if cmap != None and cmap[i] != "":
plt.imshow(j, cmap=cmap[i])
else:
imshow(j) | _____no_output_____ | MIT | util/imutil.ipynb | shoulderhu/azure-image-ipy |
Chapter 2 | def imread(fname):
return io.imread(os.path.join("/home/nbuser/library/", "Image", "read", fname))
def imsave(fname, image):
io.imsave(os.path.join("/home/nbuser/library/", "Image", "save", fname), image) | _____no_output_____ | MIT | util/imutil.ipynb | shoulderhu/azure-image-ipy |
Chapter 3 | def spatial_resolution(image, scale):
return rescale(rescale(image, 1 / scale), scale, order=0)
def grayslice(image, n):
image = img_as_ubyte(image)
v = 256 // n
return image // v * v | _____no_output_____ | MIT | util/imutil.ipynb | shoulderhu/azure-image-ipy |
Chapter 4 | def imhist(image, equal=False):
if equal:
image = img_as_ubyte(equalize_hist(image))
f = plt.figure()
f.show(plt.hist(image.flatten(), bins=256)) | _____no_output_____ | MIT | util/imutil.ipynb | shoulderhu/azure-image-ipy |
Chapter 5 | def unsharp(alpha=0.2):
A1 = array([[-1, 1, -1],
[1, 1, 1],
[-1, 1, -1]], dtype=float64)
A2 = array([[0, -1, 0],
[-1, 5, -1],
[0, -1, 0]], dtype=float64)
return (alpha * A1 + A2) / (alpha + 1) | _____no_output_____ | MIT | util/imutil.ipynb | shoulderhu/azure-image-ipy |
Chapter 6 | ne = array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
bi = array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4
bc = array([[1, 4, 6, 4, 1],
[4, 16, 24, 16, 4],
[6, 24, 35, 24, 6],
[4, 16, 24, 16, 4],
[1, 4, 6, 4, 1]]) / 64
def zeroint(img):
r, c = img.shape
res = zeros((r*2, c*2))
res[::2, ::2] = img
return res
def spatial_filtering(img, p, filt):
    # Enlarge the image by a factor of p by repeatedly zero-interleaving and filtering.
    for i in range(int(log2(p))):
        img_zi = zeroint(img)
        img_sf = correlate(img_zi, filt, mode="reflect")
        img = img_sf  # feed the enlarged image back in so repeated doublings reach factor p
    return img_sf | _____no_output_____ | MIT | util/imutil.ipynb | shoulderhu/azure-image-ipy |
Chapter 7 | def fftformat(F):
for f in F:
print("%8.4f %+.4fi" % (f.real, f.imag))
def fftshow(f, type="log"):
if type == "log":
return rescale_intensity(np.log(1 + abs(f)), out_range=(0, 1))
elif type == "abs":
return rescale_intensity(abs(f), out_range=(0, 1))
def circle_mask(img, type, lh, D=15, n=2, sigma=10):
r, c = img.shape
arr = arange(-r / 2, r / 2)
arc = arange(-c / 2, c / 2)
x, y = np.meshgrid(arr, arc)
if type == "ideal":
if lh == "low":
return x**2 + y**2 < D**2
elif lh == "high":
return x**2 + y**2 > D**2
elif type == "butterworth":
if lh == "low":
return 1 / (1 + (np.sqrt(2) - 1) * ((x**2 + y**2) / D**2)**n)
elif lh == "high":
return 1 / (1 + (D**2 / (x**2 + y**2))**n)
elif type == "gaussian":
g = np.exp(-(x**2 + y**2) / sigma**2)
if lh == "low":
return g / g.max()
elif lh == "high":
return 1 - g / g.max()
def fft_filter(img, type, lh, D=15, n=2, sigma=10):
f = fftshift(fft2(img))
c = circle_mask(img, type, lh, D, n, sigma)
fc = f * c
return fftshow(f), c, fftshow(fc), fftshow(ifft2(fc), "abs") | _____no_output_____ | MIT | util/imutil.ipynb | shoulderhu/azure-image-ipy |
Chapter 8 | def periodic_noise(img, s=None):
if "numpy" not in str(type(s)):
r, c = img.shape
x, y = np.mgrid[0:r, 0:c].astype(float64)
s = np.sin(x / 3 + y / 3) + 1
return (2 * img_as_float(img) + s / 2) / 3
def outlier_filter(img, D=0.5):
av = array([[1, 1, 1],
[1, 0, 1],
[1, 1, 1]]) / 8
img_av = convolve(img, av)
r = abs(img - img_av) > D
return r * img_av + (1 - r) * img
def image_average(img, n):
x, y = img.shape
t = zeros((x, y, n))
for i in range(n):
t[:, :, i] = random_noise(img, "gaussian")
return np.mean(t, 2)
def pseudo_median(x):
MAXMIN = 0
MINMAX = 255
for i in range(len(x) - 2):
MAXMIN = max(MAXMIN, min(x[i:i+3]))
MINMAX = min(MINMAX, max(x[i:i+3]))
return 0.5 * (MAXMIN + MINMAX)
def periodic_filter(img, type="band", k=1):
r, c = img.shape
x_mid, y_mid = r // 2, c // 2
f = fftshift(fft2(img))
f2 = img_as_ubyte(fftshow(f, "abs"))
f2[x_mid, y_mid] = 0
x, y = np.where(f2 == f2.max())
d = np.sqrt((x[0] - x_mid)**2 + (y[0] - y_mid)**2)
if type == "band":
x, y = np.meshgrid(arange(0, r), arange(0, c))
z = np.sqrt((x - x_mid)**2 + (y - y_mid)**2)
br = (z < np.floor(d - k)) | (z > np.ceil(d + k))
fc = f * br
elif type == "criss":
fc = np.copy(f)
fc[x, :] = 0
fc[:, y] = 0
fci = ifft2(fc)
return fftshow(f), fftshow(fc), fftshow(fci, "abs")
def fft_inverse(img, c, type="low", D2=15, n2=2, d=0.01):
f = fftshift(fft2(img_as_ubyte(img)))
if type == "low":
c2 = circle_mask(img, "butterworth", "low", D2, n2, 10)
fb = f / c * c2
elif type == "con":
c2 = np.copy(c)
c2[np.where(c2 < d)] = 1
fb = f / c2
return c2, fftshow(ifft2(fb), "abs")
def deblur(img, m, type="con",d=0.02):
m2 = zeros_like(img, dtype=float64)
r, c = m.shape
m2[0:r, 0:c] = m
mf = fft2(m2)
if type == "div":
bmi = ifft2(fft2(img) / mf)
bmu = fftshow(bmi, "abs")
elif type == "con":
mf[np.where(abs(mf) < d)] = 1
bmi = abs(ifft2(fft2(img) / mf))
bmu = img_as_ubyte(bmi / bmi.max())
bmu = rescale_intensity(bmu, in_range=(0, 128))
return bmu | _____no_output_____ | MIT | util/imutil.ipynb | shoulderhu/azure-image-ipy |
Chapter 9 | def threshold_adaptive(img, cut):
r, c = img.shape
w = c // cut
starts = range(0, c - 1, w)
ends = range(w, c + 1, w)
z = zeros((r, c))
for i in range(cut):
tmp = img[:, starts[i]:ends[i]]
z[:, starts[i]:ends[i]] = tmp > threshold_otsu(tmp)
return z
def zerocross(img):
r, c = img.shape
z = np.zeros_like(img)
for i in range(1, r - 1):
for j in range(1, c - 1):
if (img[i][j] < 0 and (img[i - 1][j] > 0 or img[i + 1][j] > 0 or img[i][j - 1] > 0 or img[i][j + 1] > 0)) or \
(img[i][j] == 0 and (img[i - 1][j] * img[i + 1][j] < 0 or img[i][j - 1] * img[i][j + 1] < 0)):
z[i][j] = 1
return z
def laplace_zerocross(img):
return zerocross(ndi.laplace(float64(img), mode="constant"))
def marr_hildreth(img, sigma=0.5):
return zerocross(ndi.gaussian_laplace(float64(img), sigma=sigma)) | _____no_output_____ | MIT | util/imutil.ipynb | shoulderhu/azure-image-ipy |
Chapter 10 | sq = square(3)
cr = array([[0, 1, 0],
[1, 1, 1],
[0, 1, 0]])
sq
cr
def internal_boundary(a, b):
'''
A - (A erosion B)
'''
return a - binary_erosion(a, b)
def external_boundary(a, b):
'''
(A dilation B) - A
'''
return binary_dilation(a, b) - a
def morphological_gradient(a, b):
'''
(A dilation B) - (A erosion B)
'''
return binary_dilation(a, b) * 1 - binary_erosion(a, b)
def hit_or_miss(t, b1):
'''
(A erosion B1) and (not A erosion B2)
'''
r, c = b1.shape
b2 = ones((r + 2, c + 2))
b2[1:r+1, 1:c+1] = 1 - b1
t = img_as_bool(t)
tb1 = binary_erosion(t, b1)
tb2 = binary_erosion(1 - t, b2)
x, y = np.where((tb1 & tb2) == 1)
tb3 = np.zeros_like(tb1)
tb3[x, y] = 1
return x, y, tb1, tb2, tb3
def bwskel(img, kernel=sq):
skel = zeros_like(img, dtype=bool)
e = (np.copy(img) > 0) * 1
while e.max() > 0:
o = binary_opening(e, kernel) * 1
skel = skel | (e & (1 - o))
e = binary_erosion(e, kernel) * 1
return skel | _____no_output_____ | MIT | util/imutil.ipynb | shoulderhu/azure-image-ipy |
Methods We've already seen a few examples of methods when learning about Object and Data Structure Types in Python. Methods are essentially functions built into objects. Later on in the course we will learn how to create our own objects and methods using Object Oriented Programming (OOP) and classes. Methods will perform specific actions on the object and can also take arguments, just like a function. This lecture will serve as just a brief introduction to methods and get you thinking about overall design methods that we will touch back upon when we reach OOP in the course. Methods are in the form: object.method(arg1,arg2,etc...) You'll later see that we can think of methods as having an argument 'self' referring to the object itself. You can't see this argument but we will be using it later on in the course during the OOP lectures. Let's take a quick look at an example of the various methods a list has: | # Create a simple list
l = [1,2,3,4,5] | _____no_output_____ | BSD-3-Clause | notebooks/Complete-Python-Bootcamp-master/Methods.ipynb | sheldon-cheah/cppkernel |
Fortunately, with IPython and the Jupyter Notebook we can quickly see all the possible methods using the tab key. The methods for a list are: append, count, extend, insert, pop, remove, reverse and sort. Let's try out a few of them: append() allows us to add elements to the end of a list: | l.append(6)
l | _____no_output_____ | BSD-3-Clause | notebooks/Complete-Python-Bootcamp-master/Methods.ipynb | sheldon-cheah/cppkernel |
Great! Now how about count()? The count() method will count the number of occurrences of an element in a list. | # Check how many times 2 shows up in the list
l.count(2) | _____no_output_____ | BSD-3-Clause | notebooks/Complete-Python-Bootcamp-master/Methods.ipynb | sheldon-cheah/cppkernel |
You can always use Shift+Tab in the Jupyter Notebook to get more help about the method. In general Python, you can use the help() function: | help(l.count) | Help on built-in function count:
count(...)
L.count(value) -> integer -- return number of occurrences of value
| BSD-3-Clause | notebooks/Complete-Python-Bootcamp-master/Methods.ipynb | sheldon-cheah/cppkernel |
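A few of the other list methods listed above, shown as a minimal sketch:

```python
l = [1, 2, 3, 4, 5, 6]

l.insert(0, 0)    # insert the value 0 at index 0 -> [0, 1, 2, 3, 4, 5, 6]
last = l.pop()    # remove and return the last element (6)
l.reverse()       # reverse in place -> [5, 4, 3, 2, 1, 0]
l.sort()          # sort in place    -> [0, 1, 2, 3, 4, 5]

print(l, last)    # [0, 1, 2, 3, 4, 5] 6
```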
Exercise 3: Order of Execution This Jupyter notebook has been written to partner with Lesson 1 - Machine Learning Toolkit | print('Hello World!!')
print(hello_world)
hello_world = 'Hello World!!!!!!!!!!!!' | _____no_output_____ | MIT | Chapter 1 - Machine Learning Toolkit/Exercise 3 - Order of Execution.ipynb | doc-E-brown/Applied-Supervised-Learning-with-Python |
Task 4: Support Vector Machines _All credit for the code examples of this notebook goes to the book "Hands-On Machine Learning with Scikit-Learn & TensorFlow" by A. Geron. Modifications were made and text was added by K. Zoch in preparation for the hands-on sessions._ Setup First, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: | # Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Function to save a figure. This also decides that all output files
# should be stored in the subdirectory 'output/SVMs'.
PROJECT_ROOT_DIR = "."
EXERCISE = "SVMs"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "output", EXERCISE, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300) | _____no_output_____ | Apache-2.0 | task_7_SVMs.ipynb | knutzk/handson-ml |
Large margin *vs* margin violations This code example contains two linear support vector machine classifiers ([LinearSVC](https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html)), which are initialised with different values of the hyperparameter C. The dataset used is the iris dataset also shown in the lecture (iris virginica vs. iris versicolor). Try a few different values for C and compare the results! What effect do different values of C have on: (1) the width of the street, (2) the number of outliers, (3) the number of support vectors? | import numpy as np
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
# Load the dataset and store the necessary features/labels in X/y.
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
# Initialise a scaler and the two SVC instances.
scaler = StandardScaler()
svm_clf1 = LinearSVC(C=1, loss="hinge", max_iter=10000, random_state=42)
svm_clf2 = LinearSVC(C=100, loss="hinge", max_iter=10000, random_state=42)
# Create pipelines to automatically scale the input.
scaled_svm_clf1 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf1),
])
scaled_svm_clf2 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf2),
])
# Perform the actual fit of the two models.
scaled_svm_clf1.fit(X, y)
scaled_svm_clf2.fit(X, y)
# Convert to unscaled parameters
b1 = svm_clf1.decision_function([-scaler.mean_ / scaler.scale_])
b2 = svm_clf2.decision_function([-scaler.mean_ / scaler.scale_])
w1 = svm_clf1.coef_[0] / scaler.scale_
w2 = svm_clf2.coef_[0] / scaler.scale_
svm_clf1.intercept_ = np.array([b1])
svm_clf2.intercept_ = np.array([b2])
svm_clf1.coef_ = np.array([w1])
svm_clf2.coef_ = np.array([w2])
# Find support vectors (LinearSVC does not do this automatically)
t = y * 2 - 1
support_vectors_idx1 = (t * (X.dot(w1) + b1) < 1).ravel()
support_vectors_idx2 = (t * (X.dot(w2) + b2) < 1).ravel()
svm_clf1.support_vectors_ = X[support_vectors_idx1]
svm_clf2.support_vectors_ = X[support_vectors_idx2]
# Now do the plotting.
def plot_svc_decision_boundary(svm_clf, xmin, xmax):
w = svm_clf.coef_[0]
b = svm_clf.intercept_[0]
# At the decision boundary, w0*x0 + w1*x1 + b = 0
# => x1 = -w0/w1 * x0 - b/w1
x0 = np.linspace(xmin, xmax, 200)
decision_boundary = -w[0]/w[1] * x0 - b/w[1]
margin = 1/w[1]
gutter_up = decision_boundary + margin
gutter_down = decision_boundary - margin
svs = svm_clf.support_vectors_
plt.scatter(svs[:, 0], svs[:, 1], s=180, facecolors='#FFAAAA')
plt.plot(x0, decision_boundary, "k-", linewidth=2)
plt.plot(x0, gutter_up, "k--", linewidth=2)
plt.plot(x0, gutter_down, "k--", linewidth=2)
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^", label="Iris-Virginica")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs", label="Iris-Versicolor")
plot_svc_decision_boundary(svm_clf1, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.title("$C = {}$".format(svm_clf1.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
plt.subplot(122)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plot_svc_decision_boundary(svm_clf2, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.title("$C = {}$".format(svm_clf2.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
save_fig("regularization_plot") | _____no_output_____ | Apache-2.0 | task_7_SVMs.ipynb | knutzk/handson-ml |
Polynomial features vs. polynomial kernels Let's create a non-linear dataset, for which we can compare two approaches: (1) adding polynomial features to the model, (2) using a polynomial kernel (see exercise sheet). First, create some random data. | from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, noise=0.15, random_state=42)
def plot_dataset(X, y, axes):
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.axis(axes)
plt.grid(True, which='both')
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.show() | _____no_output_____ | Apache-2.0 | task_7_SVMs.ipynb | knutzk/handson-ml |
Now let's first look at a linear SVM classifier that uses polynomial features. We will implement them through a pipeline including scaling of the inputs. What happens if you increase the degree of the polynomial features? Does the model get better? How is the computing time affected? Hint: you might have to increase the `max_iter` parameter for higher degrees. | from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
polynomial_svm_clf = Pipeline([
("poly_features", PolynomialFeatures(degree=3)),
("scaler", StandardScaler()),
("svm_clf", LinearSVC(C=10, loss="hinge", max_iter=1000, random_state=42))
])
polynomial_svm_clf.fit(X, y)
def plot_predictions(clf, axes):
x0s = np.linspace(axes[0], axes[1], 100)
x1s = np.linspace(axes[2], axes[3], 100)
x0, x1 = np.meshgrid(x0s, x1s)
X = np.c_[x0.ravel(), x1.ravel()]
y_pred = clf.predict(X).reshape(x0.shape)
y_decision = clf.decision_function(X).reshape(x0.shape)
plt.contourf(x0, x1, y_pred, cmap=plt.cm.brg, alpha=0.2)
plt.contourf(x0, x1, y_decision, cmap=plt.cm.brg, alpha=0.1)
plot_predictions(polynomial_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
save_fig("moons_polynomial_svc_plot")
plt.show() | _____no_output_____ | Apache-2.0 | task_7_SVMs.ipynb | knutzk/handson-ml |
Now let's try the same without polynomial features, but with a polynomial kernel instead. What is the fundamental difference between these two approaches? How do they scale in terms of computing time: (1) as a function of the number of features, (2) as a function of the number of instances? 1. Try out different degrees for the polynomial kernel. Do you expect any changes in the computing time? How does the model itself change in the plot? 2. Try different values for the `coef0` parameter. Can you guess what it controls? You should be able to see different behaviour for different degrees in the kernel. 3. Try different values for the hyperparameter C, which controls margin violations. | from sklearn.svm import SVC
# Let's make one pipeline with polynomial kernel degree 3.
poly_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5))
])
poly_kernel_svm_clf.fit(X, y)
# And another pipeline with polynomial kernel degree 10.
poly100_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=10, coef0=100, C=5))
])
poly100_kernel_svm_clf.fit(X, y)
# Now start the plotting.
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_predictions(poly_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=3, r=1, C=5$", fontsize=18)
plt.subplot(122)
plot_predictions(poly100_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=10, r=100, C=5$", fontsize=18)
save_fig("moons_kernelized_polynomial_svc_plot")
plt.show() | _____no_output_____ | Apache-2.0 | task_7_SVMs.ipynb | knutzk/handson-ml |
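To get a rough feeling for the computing-time question above, one can simply time the two approaches on the same data; a minimal sketch (timings vary by machine), assuming the pipelines `polynomial_svm_clf` and `poly_kernel_svm_clf` from the cells above are defined:

```python
import time

for name, clf in [('polynomial features + LinearSVC', polynomial_svm_clf),
                  ('polynomial kernel SVC', poly_kernel_svm_clf)]:
    t0 = time.time()
    clf.fit(X, y)
    print('{}: {:.3f} s'.format(name, time.time() - t0))
```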
Gaussian kernels Before trying the following piece of code, which implements Gaussian RBF (Radial Basis Function) kernels, remember the _similarity features_ that were discussed in the lecture: 1. What are similarity features? What is the idea of adding a "landmark"? 2. If similarity features help to increase the power of the model, why should we be careful to just add a similarity feature for _each_ instance of the dataset? 3. How does the kernel trick (once again) save the day in this case? 4. What does the `gamma` parameter control? Below you will find a code implementation which creates a set of four plots with different values for gamma and hyperparameter C. Try different values for both. Which direction _increases_ regularisation of the model? In which direction would you go to avoid underfitting? In which to avoid overfitting? | from sklearn.svm import SVC
# Set up multiple values for gamma and hyperparameter C
# and create a list of value pairs.
gamma1, gamma2 = 0.1, 5
C1, C2 = 0.001, 1000
hyperparams = (gamma1, C1), (gamma1, C2), (gamma2, C1), (gamma2, C2)
# Store multiple SVM classifiers in a list with these sets of
# hyperparameters. For all of them, use a pipeline to allow
# scaling of the inputs.
svm_clfs = []
for gamma, C in hyperparams:
rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=gamma, C=C))
])
rbf_kernel_svm_clf.fit(X, y)
svm_clfs.append(rbf_kernel_svm_clf)
# Now do the plotting.
plt.figure(figsize=(11, 7))
for i, svm_clf in enumerate(svm_clfs):
plt.subplot(221 + i)
plot_predictions(svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
gamma, C = hyperparams[i]
plt.title(r"$\gamma = {}, C = {}$".format(gamma, C), fontsize=16)
save_fig("moons_rbf_svc_plot")
plt.show() | _____no_output_____ | Apache-2.0 | task_7_SVMs.ipynb | knutzk/handson-ml |
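To make the idea of similarity features more concrete, here is a minimal sketch that computes the Gaussian RBF similarity of each instance in `X` to two hand-picked landmarks; the landmark positions and gamma value are illustrative choices, not taken from the notebook:

```python
import numpy as np

def rbf_similarity(X, landmark, gamma):
    # phi(x, l) = exp(-gamma * ||x - l||^2), one similarity value per instance
    return np.exp(-gamma * np.sum((X - landmark) ** 2, axis=1))

landmarks = np.array([[-0.5, 0.0], [1.5, 0.5]])   # two arbitrary landmark points
gamma = 5.0

features = np.column_stack([rbf_similarity(X, lm, gamma) for lm in landmarks])
print(features.shape)   # (n_samples, 2): two new similarity features per instance
```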
Regression The following code implements the support vector regression class from Scikit-Learn ([SVR](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html)). Here are a couple of questions (some of which require changes to the code, others are just conceptual): 1. Quick recap: whereas the SVC class tries to make a classification decision, what is the job of this regression class? How is the output different? 2. Try different values for the hyperparameter C. What does it control? 3. What should the margin of a 'good' SVR model look like? Should it be broad? Should it be narrow? How does the parameter epsilon affect this? | # Generate some random data (degree = 2).
np.random.seed(42)
m = 100
X = 2 * np.random.rand(m, 1) - 1
y = (0.2 + 0.1 * X + 0.5 * X**2 + np.random.randn(m, 1)/10).ravel()
# Import the support vector regression class and create two
# instances with different hyperparameters.
from sklearn.svm import SVR
svm_poly_reg1 = SVR(kernel="poly", degree=2, C=100, epsilon=0.1, gamma="auto")
svm_poly_reg2 = SVR(kernel="poly", degree=2, C=0.01, epsilon=0.1, gamma="auto")
svm_poly_reg1.fit(X, y)
svm_poly_reg2.fit(X, y)
# Now do the plotting.
def plot_svm_regression(svm_reg, X, y, axes):
x1s = np.linspace(axes[0], axes[1], 100).reshape(100, 1)
y_pred = svm_reg.predict(x1s)
plt.plot(x1s, y_pred, "k-", linewidth=2, label=r"$\hat{y}$")
plt.plot(x1s, y_pred + svm_reg.epsilon, "k--")
plt.plot(x1s, y_pred - svm_reg.epsilon, "k--")
plt.scatter(X[svm_reg.support_], y[svm_reg.support_], s=180, facecolors='#FFAAAA')
plt.plot(X, y, "bo")
plt.xlabel(r"$x_1$", fontsize=18)
plt.legend(loc="upper left", fontsize=18)
plt.axis(axes)
plt.figure(figsize=(9, 4))
plt.subplot(121)
plot_svm_regression(svm_poly_reg1, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg1.degree, svm_poly_reg1.C, svm_poly_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
plt.subplot(122)
plot_svm_regression(svm_poly_reg2, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg2.degree, svm_poly_reg2.C, svm_poly_reg2.epsilon), fontsize=18)
save_fig("svm_with_polynomial_kernel_plot")
plt.show() | _____no_output_____ | Apache-2.0 | task_7_SVMs.ipynb | knutzk/handson-ml |
Create TensorFlow Deep Neural Network Model **Learning Objective** - Create a DNN model using the high-level Estimator API Introduction We'll begin by modeling our data using a Deep Neural Network. To achieve this we will use the high-level Estimator API in Tensorflow. Have a look at the various models available through the Estimator API in [the documentation here](https://www.tensorflow.org/api_docs/python/tf/estimator). Start by setting the environment variables related to your project. | PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.14" # TF version for CMLE to use
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
ls *.csv | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/05_review/3_tensorflow_dnn.ipynb | Glairly/introduction_to_tensorflow |
Create TensorFlow model using TensorFlow's Estimator API We'll begin by writing an input function to read the data and define the csv column names and label column. We'll also set the default csv column values and set the number of training steps. | import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
CSV_COLUMNS = "weight_pounds,is_male,mother_age,plurality,gestation_weeks".split(',')
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
TRAIN_STEPS = 1000 | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/05_review/3_tensorflow_dnn.ipynb | Glairly/introduction_to_tensorflow |
Create the input functionNow we are ready to create an input function using the Dataset API. | def read_dataset(filename_pattern, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(records = value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename = filename_pattern)
# Create dataset from file list
dataset = (tf.data.TextLineDataset(filenames = file_list) # Read text file
.map(map_func = decode_csv)) # Transform each elem by applying decode_csv fn
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(count = num_epochs).batch(batch_size = batch_size)
return dataset
return _input_fn | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/05_review/3_tensorflow_dnn.ipynb | Glairly/introduction_to_tensorflow |
Create the feature columnsNext, we define the feature columns | def get_categorical(name, values):
return tf.feature_column.indicator_column(
categorical_column = tf.feature_column.categorical_column_with_vocabulary_list(key = name, vocabulary_list = values))
def get_cols():
# Define column types
return [\
get_categorical("is_male", ["True", "False", "Unknown"]),
tf.feature_column.numeric_column(key = "mother_age"),
get_categorical("plurality",
["Single(1)", "Twins(2)", "Triplets(3)",
"Quadruplets(4)", "Quintuplets(5)","Multiple(2+)"]),
tf.feature_column.numeric_column(key = "gestation_weeks")
] | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/05_review/3_tensorflow_dnn.ipynb | Glairly/introduction_to_tensorflow |
Create the Serving Input function To predict with the TensorFlow model, we also need a serving input function. This will allow us to serve prediction later using the predetermined inputs. We will want all the inputs from our user. | def serving_input_fn():
feature_placeholders = {
"is_male": tf.placeholder(dtype = tf.string, shape = [None]),
"mother_age": tf.placeholder(dtype = tf.float32, shape = [None]),
"plurality": tf.placeholder(dtype = tf.string, shape = [None]),
"gestation_weeks": tf.placeholder(dtype = tf.float32, shape = [None])
}
features = {
key: tf.expand_dims(input = tensor, axis = -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = feature_placeholders) | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/05_review/3_tensorflow_dnn.ipynb | Glairly/introduction_to_tensorflow |
Create the model and run training and evaluationLastly, we'll create the estimator to train and evaluate. In the cell below, we'll set up a `DNNRegressor` estimator and the train and evaluation operations. | def train_and_evaluate(output_dir):
EVAL_INTERVAL = 300
run_config = tf.estimator.RunConfig(
save_checkpoints_secs = EVAL_INTERVAL,
keep_checkpoint_max = 3)
estimator = tf.estimator.DNNRegressor(
model_dir = output_dir,
feature_columns = get_cols(),
hidden_units = [64, 32],
config = run_config)
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset("train.csv", mode = tf.estimator.ModeKeys.TRAIN),
max_steps = TRAIN_STEPS)
exporter = tf.estimator.LatestExporter(name = "exporter", serving_input_receiver_fn = serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset("eval.csv", mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 60, # start evaluating after N seconds
throttle_secs = EVAL_INTERVAL, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator = estimator, train_spec = train_spec, eval_spec = eval_spec) | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/05_review/3_tensorflow_dnn.ipynb | Glairly/introduction_to_tensorflow |
Finally, we train the model! | # Run the model
shutil.rmtree(path = "babyweight_trained_dnn", ignore_errors = True) # start fresh each time
train_and_evaluate("babyweight_trained_dnn") | _____no_output_____ | Apache-2.0 | courses/machine_learning/deepdive/05_review/3_tensorflow_dnn.ipynb | Glairly/introduction_to_tensorflow |
Compare different DEMs for individual glaciers For most glaciers in the world there are several digital elevation models (DEMs) which cover the respective glacier. In OGGM we have currently implemented 10 different open access DEMs to choose from. Some are regional and only available in certain areas (e.g. Greenland or Antarctica) and some cover almost the entire globe. For more information, visit the [rgitools documentation about DEMs](https://rgitools.readthedocs.io/en/latest/dems.html). This notebook allows you to see which of the DEMs are available for a selected glacier and how they compare to each other. That way it is easy to spot systematic differences and also invalid points in the DEMs. Input parameters This notebook can be run as a script with parameters using [papermill](https://github.com/nteract/papermill), but it is not necessary. The following cell contains the parameters you can choose from: | # The RGI Id of the glaciers you want to look for
# Use the original shapefiles or the GLIMS viewer to check for the ID: https://www.glims.org/maps/glims
rgi_id = 'RGI60-11.00897'
# The default is to test for all sources available for this glacier
# Set to a list of source names to override this
sources = None
# Where to write the plots. Default is in the current working directory
plot_dir = ''
# The RGI version to use
# V62 is an unofficial modification of V6 with only minor, backwards compatible modifications
prepro_rgi_version = 62
# Size of the map around the glacier. Currently only 10 and 40 are available
prepro_border = 10
# Degree of processing level. Currently only 1 is available.
from_prepro_level = 1 | _____no_output_____ | BSD-3-Clause | notebooks/dem_comparison.ipynb | pat-schmitt/tutorials |
Check input and set up | # The sources can be given as parameters
if sources is not None and isinstance(sources, str):
sources = sources.split(',')
# Plotting directory as well
if not plot_dir:
plot_dir = './' + rgi_id
import os
plot_dir = os.path.abspath(plot_dir)
import pandas as pd
import numpy as np
from oggm import cfg, utils, workflow, tasks, graphics, GlacierDirectory
import xarray as xr
import geopandas as gpd
import salem
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import AxesGrid
import itertools
from oggm.utils import DEM_SOURCES
from oggm.workflow import init_glacier_directories
# Make sure the plot directory exists
utils.mkdir(plot_dir);
# Use OGGM to download the data
cfg.initialize()
cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-DEMS', reset=True)
cfg.PARAMS['use_intersects'] = False | _____no_output_____ | BSD-3-Clause | notebooks/dem_comparison.ipynb | pat-schmitt/tutorials |
Download the data using OGGM utility functions Note that you could reach the same goal by downloading the data manually from https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.4/rgitopo/ | # URL of the preprocessed GDirs
gdir_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.4/rgitopo/'
# We use OGGM to download the data
gdir = init_glacier_directories([rgi_id], from_prepro_level=1, prepro_border=10,
prepro_rgi_version='62', prepro_base_url=gdir_url)[0] | _____no_output_____ | BSD-3-Clause | notebooks/dem_comparison.ipynb | pat-schmitt/tutorials |
Read the DEMs and store them all in a dataset | if sources is None:
sources = [src for src in os.listdir(gdir.dir) if src in utils.DEM_SOURCES]
print('RGI ID:', rgi_id)
print('Available DEM sources:', sources)
print('Plotting directory:', plot_dir)
# We use xarray to store the data
ods = xr.Dataset()
for src in sources:
demfile = os.path.join(gdir.dir, src) + '/dem.tif'
with xr.open_rasterio(demfile) as ds:
data = ds.sel(band=1).load() * 1.
ods[src] = data.where(data > -100, np.NaN)
sy, sx = np.gradient(ods[src], gdir.grid.dx, gdir.grid.dx)
ods[src + '_slope'] = ('y', 'x'), np.arctan(np.sqrt(sy**2 + sx**2))
with xr.open_rasterio(gdir.get_filepath('glacier_mask')) as ds:
ods['mask'] = ds.sel(band=1).load()
# Decide on the number of plots and figure size
ns = len(sources)
x_size = 12
n_cols = 3
n_rows = -(-ns // n_cols)
y_size = x_size / n_cols * n_rows | _____no_output_____ | BSD-3-Clause | notebooks/dem_comparison.ipynb | pat-schmitt/tutorials |
Raw topography data | smap = salem.graphics.Map(gdir.grid, countries=False)
smap.set_shapefile(gdir.read_shapefile('outlines'))
smap.set_plot_params(cmap='topo')
smap.set_lonlat_contours(add_tick_labels=False)
smap.set_plot_params(vmin=np.nanquantile([ods[s].min() for s in sources], 0.25),
vmax=np.nanquantile([ods[s].max() for s in sources], 0.75))
fig = plt.figure(figsize=(x_size, y_size))
grid = AxesGrid(fig, 111,
nrows_ncols=(n_rows, n_cols),
axes_pad=0.7,
cbar_mode='each',
cbar_location='right',
cbar_pad=0.1
)
for i, s in enumerate(sources):
data = ods[s]
smap.set_data(data)
ax = grid[i]
smap.visualize(ax=ax, addcbar=False, title=s)
if np.isnan(data).all():
grid[i].cax.remove()
continue
cax = grid.cbar_axes[i]
smap.colorbarbase(cax)
# take care of uneven grids
if ax != grid[-1]:
grid[-1].remove()
grid[-1].cax.remove()
plt.savefig(os.path.join(plot_dir, 'dem_topo_color.png'), dpi=150, bbox_inches='tight') | _____no_output_____ | BSD-3-Clause | notebooks/dem_comparison.ipynb | pat-schmitt/tutorials |
Shaded relief | fig = plt.figure(figsize=(x_size, y_size))
grid = AxesGrid(fig, 111,
nrows_ncols=(n_rows, n_cols),
axes_pad=0.7,
cbar_mode='none',
cbar_location='right',
cbar_pad=0.1
)
smap.set_plot_params(cmap='Blues')
smap.set_shapefile()
for i, s in enumerate(sources):
data = ods[s].copy().where(np.isfinite(ods[s]), 0)
smap.set_data(data * 0)
ax = grid[i]
smap.set_topography(data)
smap.visualize(ax=ax, addcbar=False, title=s)
# take care of uneven grids
if ax != grid[-1]:
grid[-1].remove()
grid[-1].cax.remove()
plt.savefig(os.path.join(plot_dir, 'dem_topo_shade.png'), dpi=150, bbox_inches='tight') | _____no_output_____ | BSD-3-Clause | notebooks/dem_comparison.ipynb | pat-schmitt/tutorials |
Slope | fig = plt.figure(figsize=(x_size, y_size))
grid = AxesGrid(fig, 111,
nrows_ncols=(n_rows, n_cols),
axes_pad=0.7,
cbar_mode='each',
cbar_location='right',
cbar_pad=0.1
)
smap.set_topography();
smap.set_plot_params(vmin=0, vmax=0.7, cmap='Blues')
for i, s in enumerate(sources):
data = ods[s + '_slope']
smap.set_data(data)
ax = grid[i]
smap.visualize(ax=ax, addcbar=False, title=s + ' (slope)')
cax = grid.cbar_axes[i]
smap.colorbarbase(cax)
# take care of uneven grids
if ax != grid[-1]:
grid[-1].remove()
grid[-1].cax.remove()
plt.savefig(os.path.join(plot_dir, 'dem_slope.png'), dpi=150, bbox_inches='tight') | _____no_output_____ | BSD-3-Clause | notebooks/dem_comparison.ipynb | pat-schmitt/tutorials |
Some simple statistics about the DEMs | df = pd.DataFrame()
for s in sources:
df[s] = ods[s].data.flatten()[ods.mask.data.flatten() == 1]
dfs = pd.DataFrame()
for s in sources:
dfs[s] = ods[s + '_slope'].data.flatten()[ods.mask.data.flatten() == 1]
df.describe() | _____no_output_____ | BSD-3-Clause | notebooks/dem_comparison.ipynb | pat-schmitt/tutorials |
Comparison matrix plot | # Table of differences between DEMS
df_diff = pd.DataFrame()
done = []
for s1, s2 in itertools.product(sources, sources):
if s1 == s2:
continue
if (s2, s1) in done:
continue
df_diff[s1 + '-' + s2] = df[s1] - df[s2]
done.append((s1, s2))
# Decide on plot levels
max_diff = df_diff.quantile(0.99).max()
base_levels = np.array([-8, -5, -3, -1.5, -1, -0.5, -0.2, -0.1, 0, 0.1, 0.2, 0.5, 1, 1.5, 3, 5, 8])
if max_diff < 10:
levels = base_levels
elif max_diff < 100:
levels = base_levels * 10
elif max_diff < 1000:
levels = base_levels * 100
else:
levels = base_levels * 1000
levels = [l for l in levels if abs(l) < max_diff]
if max_diff > 10:
levels = [int(l) for l in levels]
levels
smap.set_plot_params(levels=levels, cmap='PuOr', extend='both')
smap.set_shapefile(gdir.read_shapefile('outlines'))
fig = plt.figure(figsize=(14, 14))
grid = AxesGrid(fig, 111,
nrows_ncols=(ns - 1, ns - 1),
axes_pad=0.3,
cbar_mode='single',
cbar_location='right',
cbar_pad=0.1
)
done = []
for ax in grid:
ax.set_axis_off()
for s1, s2 in itertools.product(sources, sources):
if s1 == s2:
continue
if (s2, s1) in done:
continue
data = ods[s1] - ods[s2]
ax = grid[sources.index(s1) * (ns - 1) + sources[1:].index(s2)]
ax.set_axis_on()
smap.set_data(data)
smap.visualize(ax=ax, addcbar=False)
done.append((s1, s2))
ax.set_title(s1 + '-' + s2, fontsize=8)
cax = grid.cbar_axes[0]
smap.colorbarbase(cax);
plt.savefig(os.path.join(plot_dir, 'dem_diffs.png'), dpi=150, bbox_inches='tight') | _____no_output_____ | BSD-3-Clause | notebooks/dem_comparison.ipynb | pat-schmitt/tutorials |
Comparison scatter plot | import seaborn as sns
sns.set(style="ticks")
l1, l2 = (utils.nicenumber(df.min().min(), binsize=50, lower=True),
utils.nicenumber(df.max().max(), binsize=50, lower=False))
def plot_unity(xdata, ydata, **kwargs):
points = np.linspace(l1, l2, 100)
plt.gca().plot(points, points, color='k', marker=None,
linestyle=':', linewidth=3.0)
g = sns.pairplot(df.dropna(how='all', axis=1).dropna(), plot_kws=dict(s=50, edgecolor="C0", linewidth=1));
g.map_offdiag(plot_unity)
for asx in g.axes:
for ax in asx:
ax.set_xlim((l1, l2))
ax.set_ylim((l1, l2))
plt.savefig(os.path.join(plot_dir, 'dem_scatter.png'), dpi=150, bbox_inches='tight') | _____no_output_____ | BSD-3-Clause | notebooks/dem_comparison.ipynb | pat-schmitt/tutorials |
Table statistics | df.describe()
df.corr()
df_diff.describe()
df_diff.abs().describe() | _____no_output_____ | BSD-3-Clause | notebooks/dem_comparison.ipynb | pat-schmitt/tutorials |
Created from https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/random_cut_forest/random_cut_forest.ipynb | import boto3
import botocore
import sagemaker
import sys
bucket = 'tdk-awsml-sagemaker-data.io-dev' # <--- specify a bucket you have access to
prefix = ''
execution_role = sagemaker.get_execution_role()
# check if the bucket exists
try:
boto3.Session().client('s3').head_bucket(Bucket=bucket)
except botocore.exceptions.ParamValidationError as e:
print('Hey! You either forgot to specify your S3 bucket'
' or you gave your bucket an invalid name!')
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == '403':
print("Hey! You don't have permission to access the bucket, {}.".format(bucket))
elif e.response['Error']['Code'] == '404':
print("Hey! Your bucket, {}, doesn't exist!".format(bucket))
else:
raise
else:
print('Training input/output will be stored in: s3://{}/{}'.format(bucket, prefix))
%%time
import pandas as pd
import urllib.request
data_filename = 'nyc_taxi.csv'
data_source = 'https://raw.githubusercontent.com/numenta/NAB/master/data/realKnownCause/nyc_taxi.csv'
urllib.request.urlretrieve(data_source, data_filename)
taxi_data = pd.read_csv(data_filename, delimiter=',')
from sagemaker import RandomCutForest
session = sagemaker.Session()
# specify general training job information
rcf = RandomCutForest(role=execution_role,
train_instance_count=1,
train_instance_type='ml.m5.large',
data_location='s3://{}/{}/'.format(bucket, prefix),
output_path='s3://{}/{}/output'.format(bucket, prefix),
num_samples_per_tree=512,
num_trees=50)
# automatically upload the training data to S3 and run the training job
# TK - had to modify this line to use to_numpy() instead of as_matrix()
rcf.fit(rcf.record_set(taxi_data.value.to_numpy().reshape(-1,1)))
rcf_inference = rcf.deploy(
initial_instance_count=1,
instance_type='ml.m5.large',
)
print('Endpoint name: {}'.format(rcf_inference.endpoint))
from sagemaker.predictor import csv_serializer, json_deserializer
rcf_inference.content_type = 'text/csv'
rcf_inference.serializer = csv_serializer
rcf_inference.accept = 'application/json'
rcf_inference.deserializer = json_deserializer
# TK - had to modify this line to use to_numpy() instead of as_matrix()
taxi_data_numpy = taxi_data.value.to_numpy().reshape(-1,1)
print(taxi_data_numpy[:6])
results = rcf_inference.predict(taxi_data_numpy[:6])
sagemaker.Session().delete_endpoint(rcf_inference.endpoint) | _____no_output_____ | MIT | code/sagemaker_rcf.ipynb | tkeech1/aws_ml |
Predicting employee attrition rate in organizations Using PyCaret Step 1: Importing the data | import numpy as np
import pandas as pd
from pycaret.regression import *
train_csv = '../dataset/Train.csv'
test_csv = '../dataset/Test.csv'
train_data = pd.read_csv(train_csv)
test_data = pd.read_csv(test_csv) | _____no_output_____ | MIT | notebooks/.ipynb_checkpoints/pycaret-final-checkpoint.ipynb | ChandrakanthNethi/predict-the-employee-attrition-rate-in-organizations |
Step 2: Setup | reg = setup(train_data, target='Attrition_rate', ignore_features=['Employee_ID']) |
Setup Succesfully Completed!
| MIT | notebooks/.ipynb_checkpoints/pycaret-final-checkpoint.ipynb | ChandrakanthNethi/predict-the-employee-attrition-rate-in-organizations |
Step 3: Tuning the models | compare_models() | _____no_output_____ | MIT | notebooks/.ipynb_checkpoints/pycaret-final-checkpoint.ipynb | ChandrakanthNethi/predict-the-employee-attrition-rate-in-organizations |
Step 4: Selecting a model | model = create_model('br')
print(model) | BayesianRidge(alpha_1=1e-06, alpha_2=1e-06, alpha_init=None,
compute_score=False, copy_X=True, fit_intercept=True,
lambda_1=1e-06, lambda_2=1e-06, lambda_init=None, n_iter=300,
normalize=False, tol=0.001, verbose=False)
| MIT | notebooks/.ipynb_checkpoints/pycaret-final-checkpoint.ipynb | ChandrakanthNethi/predict-the-employee-attrition-rate-in-organizations |
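Optionally, the selected model could also be hyperparameter-tuned with PyCaret's `tune_model` before predicting; a minimal sketch of that extra step (not run in this notebook), assuming a PyCaret version whose `tune_model` accepts the trained model object:

```python
# Hyperparameter tuning sketch; in PyCaret 2.x tune_model takes the trained model object.
tuned_model = tune_model(model)
print(tuned_model)
```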
Step 5: Predicting on test data | predictions = predict_model(model, data = test_data)
predictions
predictions.rename(columns={"Label": "Attrition_rate"}, inplace=True)
predictions[['Employee_ID', 'Attrition_rate']].to_csv('../predictions.csv', index=False) | _____no_output_____ | MIT | notebooks/.ipynb_checkpoints/pycaret-final-checkpoint.ipynb | ChandrakanthNethi/predict-the-employee-attrition-rate-in-organizations |
from google.colab import drive
drive.mount('/content/drive')
pip install nilearn
pip install tables
pip install git+https://www.github.com/farizrahman4u/keras-contrib.git
pip install SimpleITK
#pip install tensorflow==1.4
import tensorflow as tf
from tensorflow.python.framework import ops
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
def cross_entropy_loss_v1(y_true, y_pred, sample_weight=None, eps=1e-6):
"""
:param y_pred: output 5D tensor, [batch size, dim0, dim1, dim2, class]
:param y_true: 4D GT tensor, [batch size, dim0, dim1, dim2]
:param eps: avoid log0
:return: cross entropy loss
"""
log_y = tf.log(y_pred + eps)
num_samples = tf.cast(tf.reduce_prod(tf.shape(y_true)), "float32")
label_one_hot = tf.one_hot(indices=y_true, depth=y_pred.shape[-1], axis=-1, dtype=tf.float32)
if sample_weight is not None:
# ce = mean(- weight * y_true * log(y_pred)).
label_one_hot = label_one_hot * sample_weight
cross_entropy = - tf.reduce_sum(label_one_hot * log_y) / num_samples
return cross_entropy
def cross_entropy_loss(y_true, y_pred, sample_weight=None):
# may not use one_hot when use tf.keras.losses.CategoricalCrossentropy
y_true = tf.one_hot(indices=y_true, depth=y_pred.shape[-1], axis=-1, dtype=tf.float32)
if sample_weight is not None:
# ce = mean(weight * y_true * log(y_pred)).
y_true = y_true * sample_weight
return tf.keras.losses.BinaryCrossentropy()(y_true, y_pred)
def cross_entropy_loss_with_weight(y_true, y_pred, sample_weight_per_c=None, eps=1e-6):
# for simple calculate this batch.
# if possible, get weight per epoch before training.
num_dims, num_classes = [len(y_true.shape), y_pred.shape.as_list()[-1]]
if sample_weight_per_c is None:
print('use batch to calculate weight')
num_lbls_in_ygt = tf.cast(tf.reduce_prod(tf.shape(y_true)), dtype="float32")
num_lbls_in_ygt_per_c = tf.bincount(arr=tf.cast(y_true, tf.int32), minlength=num_classes, maxlength=num_classes,
dtype="float32") # without the min/max, length of vector can change.
sample_weight_per_c = (1. / (num_lbls_in_ygt_per_c + eps)) * (num_lbls_in_ygt / num_classes)
sample_weight_per_c = tf.reshape(sample_weight_per_c, [1] * num_dims + [num_classes])
    # Using cross_entropy_loss here gives a negative value, while cross_entropy_loss and cross_entropy_loss_v1
    # give the same result when no weight is used. I guess there may be some error when the batch
    # distribution is very different from the epoch distribution.
return cross_entropy_loss_v1(y_true, y_pred, sample_weight=sample_weight_per_c)
def dice_coef(y_true, y_pred, eps=1e-6):
    # Problem: when gt class-0 >> class-1, the predicted p(class-0) >> p(class-1).
    # E.g. gt = [0, 0, 0, 0, 1], pred = [[1, 0], [1, 0], [1, 0], [1, 0], [1, 0]]: dice = 2 * 4 / (5 + 5) = 0.8,
    # so the model can just predict all 0 and still score well - a class-imbalance problem.
    # Only calculating on gt == 1 can fix my problem, but a multi-class task needs a weight, like the ce loss above.
y_true = tf.one_hot(indices=y_true, depth=y_pred.shape[-1], axis=-1, dtype=tf.float32)
abs_x_and_y = 2 * tf.reduce_sum(y_true * y_pred)
abs_x_plus_abs_y = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
return (abs_x_and_y + eps) / (abs_x_plus_abs_y + eps)
def dice_coef_loss(y_true, y_pred):
return 1. - dice_coef(y_true, y_pred)
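A minimal NumPy sketch (illustration only) of the imbalance example described in the `dice_coef` comment above, confirming the 2 * 4 / (5 + 5) = 0.8 value:

```python
import numpy as np

# Ground truth: four class-0 voxels and one class-1 voxel; the model predicts class 0 everywhere.
y_true = np.array([0, 0, 0, 0, 1])
y_pred = np.array([[1., 0.]] * 5)       # shape (5, 2), all probability on class 0

y_true_1h = np.eye(2)[y_true]           # one-hot encode, shape (5, 2)
dice = 2 * np.sum(y_true_1h * y_pred) / (np.sum(y_true_1h) + np.sum(y_pred))
print(dice)                             # 0.8, even though the class-1 voxel is completely missed
```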
import numpy as np
from keras import backend as K
from keras.engine import Input, Model
from keras.layers import Conv3D, MaxPooling3D, UpSampling3D, Activation, BatchNormalization, PReLU, Conv3DTranspose as Deconvolution3D
from keras.optimizers import Adam
#from unet3d.metrics import dice_coefficient_loss, get_label_dice_coefficient_function, dice_coefficient
K.set_image_data_format("channels_first")
try:
from keras.engine import merge
except ImportError:
from keras.layers.merge import concatenate
def unet_model_3d(input_shape, pool_size=(2, 2, 2), n_labels=1, initial_learning_rate=0.00001, deconvolution=False,
depth=4, n_base_filters=32, include_label_wise_dice_coefficients=False, metrics=dice_coef,
batch_normalization=False, activation_name="sigmoid"):
"""
    Builds the 3D UNet Keras model.
:param metrics: List metrics to be calculated during model training (default is dice coefficient).
:param include_label_wise_dice_coefficients: If True and n_labels is greater than 1, model will report the dice
coefficient for each label as metric.
:param n_base_filters: The number of filters that the first layer in the convolution network will have. Following
layers will contain a multiple of this number. Lowering this number will likely reduce the amount of memory required
to train the model.
:param depth: indicates the depth of the U-shape for the model. The greater the depth, the more max pooling
layers will be added to the model. Lowering the depth may reduce the amount of memory required for training.
:param input_shape: Shape of the input data (n_chanels, x_size, y_size, z_size). The x, y, and z sizes must be
divisible by the pool size to the power of the depth of the UNet, that is pool_size^depth.
:param pool_size: Pool size for the max pooling operations.
:param n_labels: Number of binary labels that the model is learning.
:param initial_learning_rate: Initial learning rate for the model. This will be decayed during training.
    :param deconvolution: If set to True, will use transpose convolution (deconvolution) instead of up-sampling. This
    increases the amount of memory required during training.
:return: Untrained 3D UNet Model
"""
inputs = Input(input_shape)
current_layer = inputs
levels = list()
# add levels with max pooling
for layer_depth in range(depth):
layer1 = create_convolution_block(input_layer=current_layer, n_filters=n_base_filters*(2**layer_depth),
batch_normalization=batch_normalization)
layer2 = create_convolution_block(input_layer=layer1, n_filters=n_base_filters*(2**layer_depth)*2,
batch_normalization=batch_normalization)
if layer_depth < depth - 1:
current_layer = MaxPooling3D(pool_size=pool_size)(layer2)
levels.append([layer1, layer2, current_layer])
else:
current_layer = layer2
levels.append([layer1, layer2])
# add levels with up-convolution or up-sampling
for layer_depth in range(depth-2, -1, -1):
up_convolution = get_up_convolution(pool_size=pool_size, deconvolution=deconvolution,
n_filters=current_layer._keras_shape[1])(current_layer)
concat = concatenate([up_convolution, levels[layer_depth][1]], axis=1)
current_layer = create_convolution_block(n_filters=levels[layer_depth][1]._keras_shape[1],
input_layer=concat, batch_normalization=batch_normalization)
current_layer = create_convolution_block(n_filters=levels[layer_depth][1]._keras_shape[1],
input_layer=current_layer,
batch_normalization=batch_normalization)
final_convolution = Conv3D(n_labels, (1, 1, 1))(current_layer)
act = Activation(activation_name)(final_convolution)
model = Model(inputs=inputs, outputs=act)
if not isinstance(metrics, list):
metrics = [metrics]
if include_label_wise_dice_coefficients and n_labels > 1:
label_wise_dice_metrics = [get_label_dice_coefficient_function(index) for index in range(n_labels)]
if metrics:
metrics = metrics + label_wise_dice_metrics
else:
metrics = label_wise_dice_metrics
    model.compile(optimizer=Adam(lr=initial_learning_rate), loss=dice_coef_loss, metrics=metrics)
return model
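# Sanity-check sketch (assumption: in the encoder above, pooling is applied depth-1 times),
# so each spatial dimension of an input patch should be divisible by pool_size**(depth-1):
_patch_shape, _depth, _pool = (64, 64, 64), 4, 2
assert all(dim % _pool ** (_depth - 1) == 0 for dim in _patch_shape), \
    "patch shape is incompatible with the chosen depth/pool size"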
def create_convolution_block(input_layer, n_filters, batch_normalization=False, kernel=(3, 3, 3), activation=None,
padding='same', strides=(1, 1, 1), instance_normalization=False):
"""
:param strides:
:param input_layer:
:param n_filters:
:param batch_normalization:
:param kernel:
:param activation: Keras activation layer to use. (default is 'relu')
:param padding:
:return:
"""
layer = Conv3D(n_filters, kernel, padding=padding, strides=strides)(input_layer)
if batch_normalization:
layer = BatchNormalization(axis=1)(layer)
elif instance_normalization:
try:
from keras_contrib.layers.normalization.instancenormalization import InstanceNormalization
except ImportError:
raise ImportError("Install keras_contrib in order to use instance normalization."
"\nTry: pip install git+https://www.github.com/farizrahman4u/keras-contrib.git")
layer = InstanceNormalization(axis=1)(layer)
if activation is None:
return Activation('relu')(layer)
else:
return activation()(layer)
def compute_level_output_shape(n_filters, depth, pool_size, image_shape):
"""
Each level has a particular output shape based on the number of filters used in that level and the depth or number
of max pooling operations that have been done on the data at that point.
:param image_shape: shape of the 3d image.
:param pool_size: the pool_size parameter used in the max pooling operation.
:param n_filters: Number of filters used by the last node in a given level.
:param depth: The number of levels down in the U-shaped model a given node is.
:return: 5D vector of the shape of the output node
"""
output_image_shape = np.asarray(np.divide(image_shape, np.power(pool_size, depth)), dtype=np.int32).tolist()
return tuple([None, n_filters] + output_image_shape)
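# Example (sketch): the feature-map shape expected two poolings down for a (144, 144, 144) image
print(compute_level_output_shape(n_filters=128, depth=2, pool_size=(2, 2, 2),
                                 image_shape=(144, 144, 144)))  # -> (None, 128, 36, 36, 36)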
def get_up_convolution(n_filters, pool_size, kernel_size=(2, 2, 2), strides=(2, 2, 2),
deconvolution=False):
if deconvolution:
return Deconvolution3D(filters=n_filters, kernel_size=kernel_size,
strides=strides)
else:
return UpSampling3D(size=pool_size)
import os
import glob
#from unet3d.data import write_data_to_file, open_data_file
#from unet3d.generator import get_training_and_validation_generators
#from unet3d.model import unet_model_3d
#from unet3d.training import load_old_model, train_model
config = dict()
config["pool_size"] = (2, 2, 2) # pool size for the max pooling operations
config["image_shape"] = (144, 144, 144) # This determines what shape the images will be cropped/resampled to.
config["patch_shape"] = (64, 64, 64) # switch to None to train on the whole image
config["labels"] = (1, 2, 4) # the label numbers on the input image
config["n_labels"] = len(config["labels"])
config["all_modalities"] = ["t1", "t1ce", "flair", "t2"]
config["training_modalities"] = config["all_modalities"] # change this if you want to only use some of the modalities
config["nb_channels"] = len(config["training_modalities"])
if "patch_shape" in config and config["patch_shape"] is not None:
config["input_shape"] = tuple([config["nb_channels"]] + list(config["patch_shape"]))
else:
config["input_shape"] = tuple([config["nb_channels"]] + list(config["image_shape"]))
config["truth_channel"] = config["nb_channels"]
config["deconvolution"] = True # if False, will use upsampling instead of deconvolution
config["batch_size"] = 6
config["validation_batch_size"] = 12
config["n_epochs"] = 500 # cutoff the training after this many epochs
config["patience"] = 10 # learning rate will be reduced after this many epochs if the validation loss is not improving
config["early_stop"] = 50 # training will be stopped after this many epochs without the validation loss improving
config["initial_learning_rate"] = 0.00001
config["learning_rate_drop"] = 0.5 # factor by which the learning rate will be reduced
config["validation_split"] = 0.8 # portion of the data that will be used for training
config["flip"] = False # augments the data by randomly flipping an axis during
config["permute"] = True # data shape must be a cube. Augments the data by permuting in various directions
config["distort"] = None # switch to None if you want no distortion
config["augment"] = config["flip"] or config["distort"]
config["validation_patch_overlap"] = 0 # if > 0, during training, validation patches will be overlapping
config["training_patch_start_offset"] = (16, 16, 16) # randomly offset the first patch index by up to this offset
config["skip_blank"] = True # if True, then patches without any target will be skipped
config["data_file"] = os.path.abspath("/content/drive/My Drive/Brats2019/data.h5")
config["model_file"] = os.path.abspath("/content/drive/My Drive/Brats2019/tumor_segmentation_model.h5")
config["training_file"] = os.path.abspath("/content/drive/My Drive/Brats2019/pkl/training_ids.pkl")
config["validation_file"] = os.path.abspath("/content/drive/My Drive/Brats2019/pkl/validation_ids.pkl")
config["overwrite"] = False # If True, will previous files. If False, will use previously written files.
def fetch_training_data_files():
training_data_files = list()
    for subject_dir in glob.glob(os.path.join(os.getcwd(), "data", "preprocessed", "*", "*")):  # __file__ is undefined in a notebook, so use the working directory
subject_files = list()
for modality in config["training_modalities"] + ["truth"]:
subject_files.append(os.path.join(subject_dir, modality + ".nii.gz"))
training_data_files.append(tuple(subject_files))
return training_data_files
def main(overwrite=False):
# convert input images into an hdf5 file
if overwrite or not os.path.exists(config["data_file"]):
training_files = fetch_training_data_files()
write_data_to_file(training_files, config["data_file"], image_shape=config["image_shape"])
data_file_opened = open_data_file(config["data_file"])
if not overwrite and os.path.exists(config["model_file"]):
model = load_old_model(config["model_file"])
else:
# instantiate new model
model = unet_model_3d(input_shape=config["input_shape"],
pool_size=config["pool_size"],
n_labels=config["n_labels"],
initial_learning_rate=config["initial_learning_rate"],
deconvolution=config["deconvolution"])
# get training and testing generators
train_generator, validation_generator, n_train_steps, n_validation_steps = get_training_and_validation_generators(
data_file_opened,
batch_size=config["batch_size"],
data_split=config["validation_split"],
overwrite=overwrite,
validation_keys_file=config["validation_file"],
training_keys_file=config["training_file"],
n_labels=config["n_labels"],
labels=config["labels"],
patch_shape=config["patch_shape"],
validation_batch_size=config["validation_batch_size"],
validation_patch_overlap=config["validation_patch_overlap"],
training_patch_start_offset=config["training_patch_start_offset"],
permute=config["permute"],
augment=config["augment"],
skip_blank=config["skip_blank"],
augment_flip=config["flip"],
augment_distortion_factor=config["distort"])
# run training
train_model(model=model,
model_file=config["model_file"],
training_generator=train_generator,
validation_generator=validation_generator,
steps_per_epoch=n_train_steps,
validation_steps=n_validation_steps,
initial_learning_rate=config["initial_learning_rate"],
learning_rate_drop=config["learning_rate_drop"],
learning_rate_patience=config["patience"],
early_stopping_patience=config["early_stop"],
n_epochs=config["n_epochs"])
data_file_opened.close()
if __name__ == "__main__":
main(overwrite=config["overwrite"]) | _____no_output_____ | MIT | test.ipynb | sima97/unihobby |
 $$\vec{v}+\vec{w}=(x_1,x_2)+(y_1,y_2)=(x_1 +y_1, x_2 +y_2)$$$$\vec{v}-\vec{w}=(x_1,x_2)-(y_1,y_2)=(x_1 -y_1, x_2 -y_2)$$  | #Algebra lineal se enfoca en matrices y vectores en python
import numpy as np #importar numpy
M = np.array([[1, 2, 3],[4, 5, 6],[7, 8, 9]])#Matriz
v = np.array([[1],[2],[3]])# Vector que es de una sola columna
v1=np.array([1,2,3])#Vector fila
print(M)
print(v)
print(v1)
print (M.shape)
print (v.shape)  # shows that the vector has 3 elements
v_single_dim = np.array([1, 2, 3])
print (v_single_dim.shape)
print(v+v)  # sum of two vectors
# each element is added to the corresponding element
print(3*v)  # multiplication by a scalar value
# a scalar is a single value
# multiplies each of the elements by 3
# Another way to create matrices
v1 = np.array([1, 2, 3])  # arrays can be created and then stacked together
v2 = np.array([4, 5, 6])
v3 = np.array([7, 8, 9])
M = np.vstack([v1, v2, v3])  # stacking the vectors forms a matrix
print(M)
M
# Indexing matrices
print (M[:2, 1:3])  # slices of the matrix can be taken to extract elements
v
#Indexing vectors
print(v[1,0])  # element at row 1, column 0
print(v[1:,0])  # from index 1 onwards
#similar to lists, but rows and columns can be extracted
lista=[[1,2],[3,4],[4,6]]
#DIFFERENCES WITH LISTS
#vectors add element-wise, whereas adding lists concatenates them into a longer list
print(v+v)
print(lista+lista)
#ARRAYS ALLOW MATRIX OPERATIONS
v*3
lista*3 | _____no_output_____ | MIT | Repaso_algebra_LinealHeidy.ipynb | 1966hs/MujeresDigitales |
    | v.T# TRANSPUESTA
print (M.dot(v))
print (v.T.dot(v))
v1=np.array([3,-3,1])
v2=np.array([4,9,2])
print (np.cross(v1, v2, axisa=0, axisb=0).T)
print (np.multiply(M, v))
print (np.multiply(v, v))
| [[14]
[32]
[50]]
[[14]]
[-15 -2 39]
[[ 1 2 3]
[ 8 10 12]
[21 24 27]]
[[1]
[4]
[9]]
| MIT | Repaso_algebra_LinealHeidy.ipynb | 1966hs/MujeresDigitales |
Transpose $$C_{mxn}=A_{nxm}^T$$ $$c_{ij}=a_{ji}$$ $$(A+B)^T = A^T + B^T$$ $$(AB)^T = B^T A^T$$ If $A=A^T$ then A is **symmetric** | M
print(M.T)  # transpose
print(v.T)
# the determinant is used to obtain the scalar value associated with the matrix | _____no_output_____ | MIT | Repaso_algebra_LinealHeidy.ipynb | 1966hs/MujeresDigitales
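A quick numerical check of the transpose rules stated above (a sketch with small example matrices, not part of the original notebook):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [5., 2.]])

print(np.allclose((A + B).T, A.T + B.T))  # (A+B)^T = A^T + B^T -> True
print(np.allclose((A @ B).T, B.T @ A.T))  # (AB)^T = B^T A^T   -> True
```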
  | np.identity(3)
#hacer una matriz que multiplcarse por si misma da la matriz identidad
#que todo lo diagonal es 1 y el resto 0
v1 = np.array([3, 0, 2])
v2 = np.array([2, 0, -2])
v3 = np.array([0, 1, 1])
M = np.vstack([v1, v2, v3])  # build the matrix by stacking those vectors
print (np.linalg.inv(M))  # invert the matrix
print (np.linalg.det(M))  # determinant of the matrix
print (np.linalg.inv(M))  # inverse
print (np.linalg.det(M))  # determinant | 10.000000000000002
| MIT | Repaso_algebra_LinealHeidy.ipynb | 1966hs/MujeresDigitales |
**Definition of Variables** | a= np.array([1,1,1])
b= np.array([2,2,2])
#Element-wise multiplication
#(performed element by element)
a*b
#element-wise multiplication method:
np.multiply(a,b)
#Matrix multiplication method
#2*1+2*1+2*1
np.matmul(a,b)
#Dot product method
#similar to matrix multiplication
np.dot(a,b)
#Cross product method
#since the vectors are parallel, the cross product is the zero vector
np.cross(a,b)
#Cross product with orthogonal vectors
#the result is orthogonal to both inputs, pointing along the z axis
np.cross(np.array([1,0,0]), np.array([0,1,0])) | _____no_output_____ | MIT | Repaso_algebra_LinealHeidy.ipynb | 1966hs/MujeresDigitales |
**Definition of Matrices** | a = np.array([[1,2], [2,3]])
b = np.array([[3,4],[5,6]])
print(a)
print(b)
#Element-wise multiplication
a*b
#point-wise (element-wise) multiplication
#Element-wise multiplication method
np.multiply(a,b)
#Matrix multiplication method
#1*3+2*5=13
#1*4+2*6=16
#2*3+3*5=21
#2*4+3*6=26
np.matmul(a,b)
#Dot product method | _____no_output_____ | MIT | Repaso_algebra_LinealHeidy.ipynb | 1966hs/MujeresDigitales
**Matrix Inversion** | a= np.array([[1,1,1],[0,2,5],[2,5,-1]])  # this is the matrix
b= np.linalg.inv(a)  # here the matrix is inverted
b
np.matmul(a,b)  # multiplying the matrix by its inverse gives the identity matrix
# inverting a matrix and multiplying it by the original yields the identity
v1= np.array([3,0,2])
v2=np.array([2,0,-2])
v3=np.array([0,1,1])
M=np.vstack([v1,v2,v3])
M
M_inv = np.linalg.inv(M)  # INVERT IT
M_inv | _____no_output_____ | MIT | Repaso_algebra_LinealHeidy.ipynb | 1966hs/MujeresDigitales |
Eigenvalues and eigenvectors An eigenvalue $\lambda$ and an eigenvector $\vec{u}$ satisfy $$Au = \lambda u$$ where A is a square matrix. Rearranging the equation above gives the system $$Au -\lambda u = (A- \lambda I)u =0$$ which has a nontrivial solution if and only if $det(A-\lambda I)=0$ 1. The eigenvalues are the roots of the characteristic polynomial given by that determinant 2. Substituting each eigenvalue into $$Au = \lambda u$$ and solving yields the associated eigenvector | # WE HAVE A TWO-DIMENSIONAL SPACE, AND WHAT THE MATRIX DOES
# IS DISTORT THAT SPACE
v1 = np.array([0, 1])
v2 = np.array([-2, -3])
M = np.vstack([v1, v2])
eigvals, eigvecs= np.linalg.eig(M)
print(eigvals)  # characteristic values of the matrix
print(eigvecs)
# an eigenvalue is a scalar that satisfies the characteristic equation above
A=np.array([[-81,16],[-420,83]])
A
eigvals,eigvecs=np.linalg.eig(A)
eigvals | _____no_output_____ | MIT | Repaso_algebra_LinealHeidy.ipynb | 1966hs/MujeresDigitales |
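As a quick check (a sketch, reusing the same matrix values as above), each pair returned by `np.linalg.eig` should satisfy $Au = \lambda u$:

```python
import numpy as np

A = np.array([[-81., 16.], [-420., 83.]])
eigvals, eigvecs = np.linalg.eig(A)
for k in range(len(eigvals)):
    u = eigvecs[:, k]                          # eigenvectors are the columns of eigvecs
    print(np.allclose(A @ u, eigvals[k] * u))  # True for every eigenpair
```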
Setup | from day1 import puzzle1
from day1 import puzzle1slow
from day1 import puzzle1maybefaster
from day2 import puzzle2
from day2 import puzzle2slow
from day2 import puzzle2maybefaster
sample_input1 = [1721, 979, 366, 299, 675, 1456]
input1 = open("input1.txt", "r").read()
input1_list = [int(x) for x in input1.split("\n") if x] | _____no_output_____ | MIT | AdventofCode2020/timings/Timing Day 1.ipynb | evan-freeman/puzzles |
Puzzle Timings | %%timeit
puzzle1(input1_list)
%%timeit
puzzle1maybefaster(input1_list)
%%timeit
puzzle1slow(input1_list)
%%timeit
puzzle2(input1_list)
%%timeit
puzzle2maybefaster(input1_list)
%%timeit
puzzle2slow(input1_list) | 393 ms ± 60.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
| MIT | AdventofCode2020/timings/Timing Day 1.ipynb | evan-freeman/puzzles |
Sample Input Timings | %%timeit
puzzle1(sample_input1)
%%timeit
puzzle1slow(sample_input1)
%%timeit
puzzle1maybefaster(sample_input1)
%%timeit
puzzle2(sample_input1)
%%timeit
puzzle2maybefaster(sample_input1)
%%timeit
puzzle2slow(sample_input1) | 6.02 µs ± 166 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
| MIT | AdventofCode2020/timings/Timing Day 1.ipynb | evan-freeman/puzzles |
📃 Solution of Exercise M6.01 The aim of this notebook is to investigate if we can tune the hyperparameters of a bagging regressor and evaluate the gain obtained. We will load the California housing dataset and split it into a training and a testing set. | from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
data, target = fetch_california_housing(as_frame=True, return_X_y=True)
target *= 100 # rescale the target in k$
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0, test_size=0.5) | _____no_output_____ | CC-BY-4.0 | notebooks/M6-ensemble_sol_01.ipynb | datagistips/scikit-learn-mooc |
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC. Create a `BaggingRegressor` and provide a `DecisionTreeRegressor` to its parameter `base_estimator`. Train the regressor and evaluate its statistical performance on the testing set using the mean absolute error. | from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor
tree = DecisionTreeRegressor()
bagging = BaggingRegressor(base_estimator=tree, n_jobs=-1)
bagging.fit(data_train, target_train)
target_predicted = bagging.predict(data_test)
print(f"Basic mean absolute error of the bagging regressor:\n"
f"{mean_absolute_error(target_test, target_predicted):.2f} k$")
abs(target_test - target_predicted).mean() | _____no_output_____ | CC-BY-4.0 | notebooks/M6-ensemble_sol_01.ipynb | datagistips/scikit-learn-mooc |
Now, create a `RandomizedSearchCV` instance using the previous model and tune the important parameters of the bagging regressor. Find the best parameters and check if you are able to find a set of parameters that improve the default regressor, still using the mean absolute error as a metric. Tip: You can list the bagging regressor's parameters using the `get_params` method. | for param in bagging.get_params().keys():
print(param)
from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV
param_grid = {
"n_estimators": randint(10, 30),
"max_samples": [0.5, 0.8, 1.0],
"max_features": [0.5, 0.8, 1.0],
"base_estimator__max_depth": randint(3, 10),
}
search = RandomizedSearchCV(
bagging, param_grid, n_iter=20, scoring="neg_mean_absolute_error"
)
_ = search.fit(data_train, target_train)
import pandas as pd
columns = [f"param_{name}" for name in param_grid.keys()]
columns += ["mean_test_score", "std_test_score", "rank_test_score"]
cv_results = pd.DataFrame(search.cv_results_)
cv_results = cv_results[columns].sort_values(by="rank_test_score")
cv_results["mean_test_score"] = -cv_results["mean_test_score"]
cv_results
target_predicted = search.predict(data_test)
print(f"Mean absolute error after tuning of the bagging regressor:\n"
f"{mean_absolute_error(target_test, target_predicted):.2f} k$") | Mean absolute error after tuning of the bagging regressor:
40.29 k$
| CC-BY-4.0 | notebooks/M6-ensemble_sol_01.ipynb | datagistips/scikit-learn-mooc |
Recommendations with MovieTweetings: Collaborative FilteringOne of the most popular methods for making recommendations is **collaborative filtering**. In collaborative filtering, you are using the collaboration of user-item recommendations to assist in making new recommendations. There are two main methods of performing collaborative filtering:1. **Neighborhood-Based Collaborative Filtering**, which is based on the idea that we can either correlate items that are similar to provide recommendations or we can correlate users to one another to provide recommendations.2. **Model Based Collaborative Filtering**, which is based on the idea that we can use machine learning and other mathematical models to understand the relationships that exist amongst items and users to predict ratings and provide ratings.In this notebook, you will be working on performing **neighborhood-based collaborative filtering**. There are two main methods for performing collaborative filtering:1. **User-based collaborative filtering:** In this type of recommendation, users related to the user you would like to make recommendations for are used to create a recommendation.2. **Item-based collaborative filtering:** In this type of recommendation, first you need to find the items that are most related to each other item (based on similar ratings). Then you can use the ratings of an individual on those similar items to understand if a user will like the new item.In this notebook you will be implementing **user-based collaborative filtering**. However, it is easy to extend this approach to make recommendations using **item-based collaborative filtering**. First, let's read in our data and necessary libraries.**NOTE**: Because of the size of the datasets, some of your code cells here will take a while to execute, so be patient! | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tests as t
from scipy.sparse import csr_matrix
from IPython.display import HTML
%matplotlib inline
# Read in the datasets
movies = pd.read_csv('movies_clean.csv')
reviews = pd.read_csv('reviews_clean.csv')
del movies['Unnamed: 0']
del reviews['Unnamed: 0']
print(reviews.head()) | user_id movie_id rating timestamp date month_1 \
0 1 68646 10 1381620027 2013-10-12 23:20:27 0
1 1 113277 10 1379466669 2013-09-18 01:11:09 0
2 2 422720 8 1412178746 2014-10-01 15:52:26 0
3 2 454876 8 1394818630 2014-03-14 17:37:10 0
4 2 790636 7 1389963947 2014-01-17 13:05:47 0
month_2 month_3 month_4 month_5 ... month_9 month_10 month_11 \
0 0 0 0 0 ... 0 1 0
1 0 0 0 0 ... 0 0 0
2 0 0 0 0 ... 0 1 0
3 0 0 0 0 ... 0 0 0
4 0 0 0 0 ... 0 0 0
month_12 year_2013 year_2014 year_2015 year_2016 year_2017 year_2018
0 0 1 0 0 0 0 0
1 0 1 0 0 0 0 0
2 0 0 1 0 0 0 0
3 0 0 1 0 0 0 0
4 0 0 1 0 0 0 0
[5 rows x 23 columns]
| MIT | lessons/Recommendations/1_Intro_to_Recommendations/4_Collaborative Filtering - Solution.ipynb | callezenwaka/DSND_Term2 |
Measures of SimilarityWhen using **neighborhood** based collaborative filtering, it is important to understand how to measure the similarity of users or items to one another. There are a number of ways in which we might measure the similarity between two vectors (which might be two users or two items). In this notebook, we will look specifically at two measures used to compare vectors:* **Pearson's correlation coefficient**Pearson's correlation coefficient is a measure of the strength and direction of a linear relationship. The value for this coefficient is a value between -1 and 1 where -1 indicates a strong, negative linear relationship and 1 indicates a strong, positive linear relationship. If we have two vectors x and y, we can define the correlation between the vectors as:$$CORR(x, y) = \frac{\text{COV}(x, y)}{\text{STDEV}(x)\text{ }\text{STDEV}(y)}$$where $$\text{STDEV}(x) = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$$and $$\text{COV}(x, y) = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})$$where n is the length of the vector, which must be the same for both x and y and $\bar{x}$ is the mean of the observations in the vector. We can use the correlation coefficient to indicate how alike two vectors are to one another, where the closer to 1 the coefficient, the more alike the vectors are to one another. There are some potential downsides to using this metric as a measure of similarity. You will see some of these throughout this workbook.* **Euclidean distance**Euclidean distance is a measure of the straightline distance from one vector to another. Because this is a measure of distance, larger values are an indication that two vectors are different from one another (which is different than Pearson's correlation coefficient).Specifically, the euclidean distance between two vectors x and y is measured as:$$ \text{EUCL}(x, y) = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}$$Different from the correlation coefficient, no scaling is performed in the denominator. Therefore, you need to make sure all of your data are on the same scale when using this metric.**Note:** Because measuring similarity is often based on looking at the distance between vectors, it is important in these cases to scale your data or to have all data be in the same scale. In this case, we will not need to scale data because they are all on a 10 point scale, but it is always something to keep in mind!------------ User-Item MatrixIn order to calculate the similarities, it is common to put values in a matrix. In this matrix, users are identified by each row, and items are represented by columns.  In the above matrix, you can see that **User 1** and **User 2** both used **Item 1**, and **User 2**, **User 3**, and **User 4** all used **Item 2**. However, there are also a large number of missing values in the matrix for users who haven't used a particular item. A matrix with many missing values (like the one above) is considered **sparse**.Our first goal for this notebook is to create the above matrix with the **reviews** dataset. However, instead of 1 values in each cell, you should have the actual rating. The users will indicate the rows, and the movies will exist across the columns. To create the user-item matrix, we only need the first three columns of the **reviews** dataframe, which you can see by running the cell below. | user_items = reviews[['user_id', 'movie_id', 'rating']]
user_items.head() | _____no_output_____ | MIT | lessons/Recommendations/1_Intro_to_Recommendations/4_Collaborative Filtering - Solution.ipynb | callezenwaka/DSND_Term2 |
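As a quick illustration (a sketch with made-up ratings, not part of the original data), both similarity measures can be computed directly with NumPy:

```python
import numpy as np

# Ratings two users gave to the same five movies (made-up values)
x = np.array([10., 8., 9., 6., 7.])
y = np.array([9., 7., 9., 5., 8.])

corr = np.corrcoef(x, y)[0, 1]   # Pearson's correlation coefficient
dist = np.linalg.norm(x - y)     # Euclidean distance between the rating vectors

print(corr, dist)
```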
Creating the User-Item MatrixIn order to create the user-items matrix (like the one above), I personally started by using a [pivot table](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html). However, I quickly ran into a memory error (a common theme throughout this notebook). I will help you navigate around many of the errors I had, and achieve useful collaborative filtering results! _____`1.` Create a matrix where the users are the rows, the movies are the columns, and the ratings exist in each cell, or a NaN exists in cells where a user hasn't rated a particular movie. If you get a memory error (like I did), [this link here](https://stackoverflow.com/questions/39648991/pandas-dataframe-pivot-memory-error) might help you! | # Create user-by-item matrix
user_by_movie = user_items.groupby(['user_id', 'movie_id'])['rating'].max().unstack() | _____no_output_____ | MIT | lessons/Recommendations/1_Intro_to_Recommendations/4_Collaborative Filtering - Solution.ipynb | callezenwaka/DSND_Term2 |
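If the dense approach above ever runs out of memory, one alternative (a sketch using the `csr_matrix` import from earlier; note that any duplicate user/movie pairs would be summed rather than taking the max) is to build a sparse user-by-movie matrix from integer codes:

```python
user_codes = user_items['user_id'].astype('category').cat.codes
movie_codes = user_items['movie_id'].astype('category').cat.codes
sparse_user_by_movie = csr_matrix((user_items['rating'], (user_codes, movie_codes)))
print(sparse_user_by_movie.shape)
```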
Check your results below to make sure your matrix is ready for the upcoming sections. | assert movies.shape[0] == user_by_movie.shape[1], "Oh no! Your matrix should have {} columns, and yours has {}!".format(movies.shape[0], user_by_movie.shape[1])
assert reviews.user_id.nunique() == user_by_movie.shape[0], "Oh no! Your matrix should have {} rows, and yours has {}!".format(reviews.user_id.nunique(), user_by_movie.shape[0])
print("Looks like you are all set! Proceed!")
HTML('<img src="images/greatjob.webp">') | _____no_output_____ | MIT | lessons/Recommendations/1_Intro_to_Recommendations/4_Collaborative Filtering - Solution.ipynb | callezenwaka/DSND_Term2 |
`2.` Now that you have a matrix of users by movies, use this matrix to create a dictionary where the key is each user and the value is an array of the movies each user has rated. | # Create a dictionary with users and corresponding movies seen
def movies_watched(user_id):
'''
INPUT:
user_id - the user_id of an individual as int
OUTPUT:
movies - an array of movies the user has watched
'''
movies = user_by_movie.loc[user_id][user_by_movie.loc[user_id].isnull() == False].index.values
return movies
def create_user_movie_dict():
'''
INPUT: None
OUTPUT: movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids
Creates the movies_seen dictionary
'''
n_users = user_by_movie.shape[0]
movies_seen = dict()
for user1 in range(1, n_users+1):
# assign list of movies to each user key
movies_seen[user1] = movies_watched(user1)
return movies_seen
movies_seen = create_user_movie_dict() | _____no_output_____ | MIT | lessons/Recommendations/1_Intro_to_Recommendations/4_Collaborative Filtering - Solution.ipynb | callezenwaka/DSND_Term2 |
`3.` If a user hasn't rated more than 2 movies, we consider these users "too new". Create a new dictionary that only contains users who have rated more than 2 movies. This dictionary will be used for all the final steps of this workbook. | # Remove individuals who have watched 2 or fewer movies - don't have enough data to make recs
def create_movies_to_analyze(movies_seen, lower_bound=2):
'''
INPUT:
movies_seen - a dictionary where each key is a user_id and the value is an array of movie_ids
lower_bound - (an int) a user must have more movies seen than the lower bound to be added to the movies_to_analyze dictionary
OUTPUT:
movies_to_analyze - a dictionary where each key is a user_id and the value is an array of movie_ids
The movies_seen and movies_to_analyze dictionaries should be the same except that the output dictionary has removed
'''
movies_to_analyze = dict()
for user, movies in movies_seen.items():
if len(movies) > lower_bound:
movies_to_analyze[user] = movies
return movies_to_analyze
movies_to_analyze = create_movies_to_analyze(movies_seen)
# Run the tests below to check that your movies_to_analyze matches the solution
assert len(movies_to_analyze) == 23512, "Oops! It doesn't look like your dictionary has the right number of individuals."
assert len(movies_to_analyze[2]) == 23, "Oops! User 2 didn't match the number of movies we thought they would have."
assert len(movies_to_analyze[7]) == 3, "Oops! User 7 didn't match the number of movies we thought they would have."
print("If this is all you see, you are good to go!") | _____no_output_____ | MIT | lessons/Recommendations/1_Intro_to_Recommendations/4_Collaborative Filtering - Solution.ipynb | callezenwaka/DSND_Term2 |
Calculating User SimilaritiesNow that you have set up the **movies_to_analyze** dictionary, it is time to take a closer look at the similarities between users. Below is the pseudocode for how I thought about determining the similarity between users:```for user1 in movies_to_analyze for user2 in movies_to_analyze see how many movies match between the two users if more than two movies in common pull the overlapping movies compute the distance/similarity metric between ratings on the same movies for the two users store the users and the distance metric```However, this took a very long time to run, and other methods of performing these operations did not fit on the workspace memory!Therefore, rather than creating a dataframe with all possible pairings of users in our data, your task for this question is to look at a few specific examples of the correlation between ratings given by two users. For this question consider you want to compute the [correlation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html) between users.`4.` Using the **movies_to_analyze** dictionary and **user_by_movie** dataframe, create a function that computes the correlation between the ratings of similar movies for two users. Then use your function to compare your results to ours using the tests below. | def compute_correlation(user1, user2):
'''
INPUT
user1 - int user_id
user2 - int user_id
OUTPUT
the correlation between the matching ratings between the two users
'''
# Pull movies for each user
movies1 = movies_to_analyze[user1]
movies2 = movies_to_analyze[user2]
# Find Similar Movies
sim_movs = np.intersect1d(movies1, movies2, assume_unique=True)
# Calculate correlation between the users
df = user_by_movie.loc[(user1, user2), sim_movs]
corr = df.transpose().corr().iloc[0,1]
return corr #return the correlation
# Test your function against the solution
assert compute_correlation(2,2) == 1.0, "Oops! The correlation between a user and itself should be 1.0."
assert round(compute_correlation(2,66), 2) == 0.76, "Oops! The correlation between user 2 and 66 should be about 0.76."
assert np.isnan(compute_correlation(2,104)), "Oops! The correlation between user 2 and 104 should be a NaN."
print("If this is all you see, then it looks like your function passed all of our tests!") | _____no_output_____ | MIT | lessons/Recommendations/1_Intro_to_Recommendations/4_Collaborative Filtering - Solution.ipynb | callezenwaka/DSND_Term2 |
Why the NaN's?If the function you wrote passed all of the tests, then you have correctly set up your function to calculate the correlation between any two users. `5.` But one question is, why are we still obtaining **NaN** values? As you can see in the code cell above, users 2 and 104 have a correlation of **NaN**. Why? Think and write your ideas here about why these NaNs exist, and use the cells below to do some coding to validate your thoughts. You can check other pairs of users and see that there are actually many NaNs in our data - 2,526,710 of them in fact. These NaN's ultimately make the correlation coefficient a less than optimal measure of similarity between two users.```In the denominator of the correlation coefficient, we calculate the standard deviation for each user's ratings. The ratings for user 2 are all the same rating on the movies that match with user 104. Therefore, the standard deviation is 0. Because a 0 is in the denominator of the correlation coefficient, we end up with a **NaN** correlation coefficient. Therefore, a different approach is likely better for this particular situation.``` | # Which movies did both user 2 and user 104 see?
set_2 = set(movies_to_analyze[2])
set_104 = set(movies_to_analyze[104])
set_2.intersection(set_104)
# What were the ratings for each user on those movies?
print(user_by_movie.loc[2, set_2.intersection(set_104)])
print(user_by_movie.loc[104, set_2.intersection(set_104)]) | _____no_output_____ | MIT | lessons/Recommendations/1_Intro_to_Recommendations/4_Collaborative Filtering - Solution.ipynb | callezenwaka/DSND_Term2 |
`6.` Because the correlation coefficient proved to be less than optimal for relating user ratings to one another, we could instead calculate the euclidean distance between the ratings. I found [this post](https://stackoverflow.com/questions/1401712/how-can-the-euclidean-distance-be-calculated-with-numpy) particularly helpful when I was setting up my function. This function should be very similar to your previous function. When you feel confident with your function, test it against our results. | def compute_euclidean_dist(user1, user2):
'''
INPUT
user1 - int user_id
user2 - int user_id
OUTPUT
the euclidean distance between user1 and user2
'''
# Pull movies for each user
movies1 = movies_to_analyze[user1]
movies2 = movies_to_analyze[user2]
# Find Similar Movies
sim_movs = np.intersect1d(movies1, movies2, assume_unique=True)
# Calculate euclidean distance between the users
df = user_by_movie.loc[(user1, user2), sim_movs]
dist = np.linalg.norm(df.loc[user1] - df.loc[user2])
return dist #return the euclidean distance
# Read in solution euclidean distances"
import pickle
df_dists = pd.read_pickle("data/Term2/recommendations/lesson1/data/dists.p")
# Test your function against the solution
assert compute_euclidean_dist(2,2) == df_dists.query("user1 == 2 and user2 == 2")['eucl_dist'][0], "Oops! The distance between a user and itself should be 0.0."
assert round(compute_euclidean_dist(2,66), 2) == round(df_dists.query("user1 == 2 and user2 == 66")['eucl_dist'][1], 2), "Oops! The distance between user 2 and 66 should be about 2.24."
assert np.isnan(compute_euclidean_dist(2,104)) == np.isnan(df_dists.query("user1 == 2 and user2 == 104")['eucl_dist'][4]), "Oops! The distance between user 2 and 104 should be 2."
print("If this is all you see, then it looks like your function passed all of our tests!") | _____no_output_____ | MIT | lessons/Recommendations/1_Intro_to_Recommendations/4_Collaborative Filtering - Solution.ipynb | callezenwaka/DSND_Term2 |
Using the Nearest Neighbors to Make RecommendationsIn the previous question, you read in **df_dists**. Therefore, you have a measure of distance between each user and every other user. This dataframe holds every possible pairing of users, as well as the corresponding euclidean distance.Because of the **NaN** values that exist within the correlations of the matching ratings for many pairs of users, as we discussed above, we will proceed using **df_dists**. You will want to find the users that are 'nearest' each user. Then you will want to find the movies the closest neighbors have liked to recommend to each user.I made use of the following objects:* df_dists (to obtain the neighbors)* user_items (to obtain the movies the neighbors and users have rated)* movies (to obtain the names of the movies)`7.` Complete the functions below, which allow you to find the recommendations for any user. There are five functions which you will need:* **find_closest_neighbors** - this returns a list of user_ids from closest neighbor to farthest neighbor using euclidean distance* **movies_liked** - returns an array of movie_ids* **movie_names** - takes the output of movies_liked and returns a list of movie names associated with the movie_ids* **make_recommendations** - takes a user id and goes through closest neighbors to return a list of movie names as recommendations* **all_recommendations** = loops through every user and returns a dictionary of with the key as a user_id and the value as a list of movie recommendations | def find_closest_neighbors(user):
'''
INPUT:
user - (int) the user_id of the individual you want to find the closest users
OUTPUT:
closest_neighbors - an array of the id's of the users sorted from closest to farthest away
'''
# I treated ties as arbitrary and just kept whichever was easiest to keep using the head method
# You might choose to do something less hand wavy
closest_users = df_dists[df_dists['user1']==user].sort_values(by='eucl_dist').iloc[1:]['user2']
closest_neighbors = np.array(closest_users)
return closest_neighbors
def movies_liked(user_id, min_rating=7):
'''
INPUT:
user_id - the user_id of an individual as int
min_rating - the minimum rating considered while still a movie is still a "like" and not a "dislike"
OUTPUT:
movies_liked - an array of movies the user has watched and liked
'''
movies_liked = np.array(user_items.query('user_id == @user_id and rating > (@min_rating -1)')['movie_id'])
return movies_liked
def movie_names(movie_ids):
'''
INPUT
movie_ids - a list of movie_ids
OUTPUT
movies - a list of movie names associated with the movie_ids
'''
movie_lst = list(movies[movies['movie_id'].isin(movie_ids)]['movie'])
return movie_lst
def make_recommendations(user, num_recs=10):
'''
INPUT:
user - (int) a user_id of the individual you want to make recommendations for
num_recs - (int) number of movies to return
OUTPUT:
recommendations - a list of movies - if there are "num_recs" recommendations return this many
otherwise return the total number of recommendations available for the "user"
which may just be an empty list
'''
# I wanted to make recommendations by pulling different movies than the user has already seen
# Go in order from closest to farthest to find movies you would recommend
# I also only considered movies where the closest user rated the movie as a 9 or 10
# movies_seen by user (we don't want to recommend these)
movies_seen = movies_watched(user)
closest_neighbors = find_closest_neighbors(user)
# Keep the recommended movies here
recs = np.array([])
# Go through the neighbors and identify movies they like the user hasn't seen
for neighbor in closest_neighbors:
neighbs_likes = movies_liked(neighbor)
#Obtain recommendations for each neighbor
new_recs = np.setdiff1d(neighbs_likes, movies_seen, assume_unique=True)
# Update recs with new recs
recs = np.unique(np.concatenate([new_recs, recs], axis=0))
# If we have enough recommendations exit the loop
if len(recs) > num_recs-1:
break
# Pull movie titles using movie ids
recommendations = movie_names(recs)
return recommendations
def all_recommendations(num_recs=10):
'''
INPUT
num_recs (int) the (max) number of recommendations for each user
OUTPUT
all_recs - a dictionary where each key is a user_id and the value is an array of recommended movie titles
'''
# All the users we need to make recommendations for
users = np.unique(df_dists['user1'])
n_users = len(users)
#Store all recommendations in this dictionary
all_recs = dict()
# Make the recommendations for each user
for user in users:
all_recs[user] = make_recommendations(user, num_recs)
return all_recs
all_recs = all_recommendations(10)
# This loads our solution dictionary so you can compare results - FULL PATH IS "data/Term2/recommendations/lesson1/data/all_recs.p"
all_recs_sol = pd.read_pickle("data/Term2/recommendations/lesson1/data/all_recs.p")
assert all_recs[2] == make_recommendations(2), "Oops! Your recommendations for user 2 didn't match ours."
assert all_recs[26] == make_recommendations(26), "Oops! It actually wasn't possible to make any recommendations for user 26."
assert all_recs[1503] == make_recommendations(1503), "Oops! Looks like your solution for user 1503 didn't match ours."
print("If you made it here, you now have recommendations for many users using collaborative filtering!")
HTML('<img src="images/greatjob.webp">') | _____no_output_____ | MIT | lessons/Recommendations/1_Intro_to_Recommendations/4_Collaborative Filtering - Solution.ipynb | callezenwaka/DSND_Term2 |
Now What?If you made it this far, you have successfully implemented a solution to making recommendations using collaborative filtering. `8.` Let's do a quick recap of the steps taken to obtain recommendations using collaborative filtering. | # Check your understanding of the results by correctly filling in the dictionary below
a = "pearson's correlation and spearman's correlation"
b = 'item based collaborative filtering'
c = "there were too many ratings to get a stable metric"
d = 'user based collaborative filtering'
e = "euclidean distance and pearson's correlation coefficient"
f = "manhattan distance and euclidean distance"
g = "spearman's correlation and euclidean distance"
h = "the spread in some ratings was zero"
i = 'content based recommendation'
sol_dict = {
'The type of recommendation system implemented here was a ...': d,
'The two methods used to estimate user similarity were: ': e,
'There was an issue with using the correlation coefficient. What was it?': h
}
t.test_recs(sol_dict) | _____no_output_____ | MIT | lessons/Recommendations/1_Intro_to_Recommendations/4_Collaborative Filtering - Solution.ipynb | callezenwaka/DSND_Term2 |
Additionally, let's take a closer look at some of the results. There are two solution files that you read in to check your results, and you created these objects* **df_dists** - a dataframe of user1, user2, euclidean distance between the two users* **all_recs_sol** - a dictionary of all recommendations (key = user, value = list of recommendations) `9.` Use these two objects along with the cells below to correctly fill in the dictionary below and complete this notebook! | a = 567
b = 1503
c = 1319
d = 1325
e = 2526710
f = 0
g = 'Use another method to make recommendations - content based, knowledge based, or model based collaborative filtering'
sol_dict2 = {
'For how many pairs of users were we not able to obtain a measure of similarity using correlation?': e,
'For how many pairs of users were we not able to obtain a measure of similarity using euclidean distance?': f,
'For how many users were we unable to make any recommendations for using collaborative filtering?': c,
'For how many users were we unable to make 10 recommendations for using collaborative filtering?': d,
'What might be a way for us to get 10 recommendations for every user?': g
}
t.test_recs2(sol_dict2)
# Use the cells below for any work you need to do!
# Users without recs
users_without_recs = []
for user, movie_recs in all_recs.items():
if len(movie_recs) == 0:
users_without_recs.append(user)
len(users_without_recs)
# NaN euclidean distance values
df_dists['eucl_dist'].isnull().sum()
# Users with fewer than 10 recs
users_with_less_than_10recs = []
for user, movie_recs in all_recs.items():
if len(movie_recs) < 10:
users_with_less_than_10recs.append(user)
len(users_with_less_than_10recs) | _____no_output_____ | MIT | lessons/Recommendations/1_Intro_to_Recommendations/4_Collaborative Filtering - Solution.ipynb | callezenwaka/DSND_Term2 |
Feature Engineering notebook This is a demo notebook to play with the feature engineering toolkit. In this notebook we will see some of the toolkit's capabilities, such as filling missing values, PCA, random projections, normalizing values, etc. | %load_ext autoreload
%autoreload 1
%matplotlib inline
from Pipeline import Pipeline
from Compare import Compare
from StructuredData.LoadCSV import LoadCSV
from StructuredData.MissingValues import MissingValues
from StructuredData.Normalize import Normalize
from StructuredData.Factorize import Factorize
from StructuredData.PCAFeatures import PCAFeatures
from StructuredData.RandomProjection import RandomProjection
csv_path = './DemoData/synthetic_classification.csv'
df = LoadCSV(csv_path)()
df.head(5) | _____no_output_____ | MIT | Feature_Engineering_Toolkit_demo_features_v1.ipynb | jassimran/Feature-Engineering-Toolkit |
Filling missing values By default, the median of the column's values is used to fill in the missing values | pipelineObj = Pipeline([MissingValues()])
new_df = pipelineObj(df, '0')
new_df.head(5) | _____no_output_____ | MIT | Feature_Engineering_Toolkit_demo_features_v1.ipynb | jassimran/Feature-Engineering-Toolkit |
However, the imputation type is a configurable parameter that can be customized as needed. | pipelineObj = Pipeline([MissingValues(imputation_type = 'mean')])
new_df = pipelineObj(df, '0')
new_df.head(5) | _____no_output_____ | MIT | Feature_Engineering_Toolkit_demo_features_v1.ipynb | jassimran/Feature-Engineering-Toolkit |
Normalize data By default, min-max normalization is applied. Please note that an assertion has been set so that normalization can't be applied if there are missing values in that column. This is part of the validation phase | pipelineObj = Pipeline([MissingValues(), Normalize(['1','2', '3'])])
new_df = pipelineObj(df, '0')
df.head(5) | _____no_output_____ | MIT | Feature_Engineering_Toolkit_demo_features_v1.ipynb | jassimran/Feature-Engineering-Toolkit |
Factorize data Encode the object as an enumerated type or categorical variable for columns 4 and 8, but we must remove missing values before factorizing | pipelineObj = Pipeline([MissingValues(), Factorize(['4','8'])])
new_df = pipelineObj(df, '0')
new_df.head(5) | _____no_output_____ | MIT | Feature_Engineering_Toolkit_demo_features_v1.ipynb | jassimran/Feature-Engineering-Toolkit |
Principal Component Analysis Use n_components to play around with how many dimensions you want to keep. Please note that assertions will check that the data frame has no missing values before applying PCA. In the example below, the pipeline first removes missing values and then applies PCA. | pipelineObj = Pipeline([MissingValues(imputation_type = 'mean'), PCAFeatures(n_components = 5)])
pca_df = pipelineObj(df, '0')
pca_df.head(5) | _____no_output_____ | MIT | Feature_Engineering_Toolkit_demo_features_v1.ipynb | jassimran/Feature-Engineering-Toolkit |
Random Projections Use n_components to play around with how many dimensions you want to keep. Please note that assertions will check that the data frame has no missing values before applying random projections. The type of projection can be specified as an argument; by default GaussianRandomProjection is applied. In the example below, the pipeline first removes missing values and then applies a sparse random projection. As of now, 'auto' deduction of the number of dimensions sufficient to represent the features with minimal loss of information has not been implemented, hence the default number of output columns is 2 (use n_components to specify a custom value) | pipelineObj = Pipeline([MissingValues(imputation_type = 'mean'), RandomProjection(n_components = 6, proj_type = 'Sparse')])
new_df = pipelineObj(df, '0')
new_df.head() | _____no_output_____ | MIT | Feature_Engineering_Toolkit_demo_features_v1.ipynb | jassimran/Feature-Engineering-Toolkit |
Download the modified CSV At any point, the new transformed features can be downloaded using the command below | csv_path = './DemoData/synthetic_classification_transformed.csv'
new_df.to_csv(csv_path) | _____no_output_____ | MIT | Feature_Engineering_Toolkit_demo_features_v1.ipynb | jassimran/Feature-Engineering-Toolkit |
Figure 4: NIRCam Grism + Filter Sensitivities ($1^{st}$ order) *** Table of Contents1. [Information](Information)2. [Imports](Imports)3. [Data](Data)4. [Generate the First Order Grism + Filter Sensitivity Plot](Generate-the-First-Order-Grism-+-Filter-Sensitivity-Plot)5. [Issues](Issues)6. [About this Notebook](About-this-Notebook)*** Information JDox links: * [NIRCam Grisms](https://jwst-docs.stsci.edu/display/JTI/NIRCam+GrismsNIRCamGrisms-Sensitivity) * Figure 4. NIRCam grism + filter sensitivities ($1^{st}$ order) Imports | import os
import pylab
import numpy as np
from astropy.io import ascii, fits
from astropy.table import Table
from scipy.optimize import fmin
from scipy.interpolate import interp1d
import requests
import matplotlib.pyplot as plt
%matplotlib inline | _____no_output_____ | BSD-3-Clause | nircam_jdox/nircam_grisms/figure4_sensitivity.ipynb | aliciacanipe/nircam_jdox |
Data Data Location: The data is stored in a NIRCam JDox Box folder here:[ST-INS-NIRCAM -> JDox -> nircam_grisms](https://stsci.box.com/s/wu9mo54vi957x50rdirlcg9zkkr3xiaw) | files = [('https://stsci.box.com/shared/static/i0a9dkp02nnuw6w0xcfd7b42ctxfb8es.fits', 'NIRCam.F250M.R.A.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/vfnyk9veote92dz1edpbu83un5n20rsw.fits', 'NIRCam.F250M.R.A.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/ssvltwzt7f4y5lfvch2o1prdk5hb2gz2.fits', 'NIRCam.F250M.R.B.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/56wjvzx1jf2i5yg7l1gg77vtvi01ec5p.fits', 'NIRCam.F250M.R.B.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/v1621dcm44be21n381mbgd2hzxxqrb2e.fits', 'NIRCam.F277W.R.A.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/8slec91wj6ety6d8qvest09msklpypi8.fits', 'NIRCam.F277W.R.A.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/r42hdv64x6skqqszv24qkxohiijitqcf.fits', 'NIRCam.F277W.R.B.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/3vye6ni05i3kdqyd5vs1jk2q59yyms2e.fits', 'NIRCam.F277W.R.B.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/twcxbe6lxrjckqph980viiijv8fpmm8b.fits', 'NIRCam.F300M.R.A.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/bpvluysg3zsl3q4b4l5rj5nue84ydjem.fits', 'NIRCam.F300M.R.A.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/15x7rbwngsxiubbexy7zcezxqm3ndq54.fits', 'NIRCam.F300M.R.B.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/a7tqdp0feqcttw3d9vaioy7syzfsftz6.fits', 'NIRCam.F300M.R.B.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/i76sb53pthieh4kn62fpxhcxn8lreffj.fits', 'NIRCam.F322W2.R.A.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/wgbyfi3ofs7i19b7zsf2iceupzkbkokq.fits', 'NIRCam.F322W2.R.A.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/jhk3deym5wbc68djtcahy3otk2xfjdb5.fits', 'NIRCam.F322W2.R.B.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/zu3xqnicbyfjn54yb4kgzvnglanf13ak.fits', 'NIRCam.F322W2.R.B.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/e2srtf52wnh6vvxsy2aiknbcr8kx2xr5.fits', 'NIRCam.F335M.R.A.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/bav3tswdd7lemsyd53bnpj4b6yke5bgd.fits', 'NIRCam.F335M.R.A.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/81wm768mjemzj84w1ogzqddgmrk3exvt.fits', 'NIRCam.F335M.R.B.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/fhopmyongqifibdtwt3qr682lwdjaf7a.fits', 'NIRCam.F335M.R.B.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/j9gd8bclethgex40o7qi1e79hgj2hsyt.fits', 'NIRCam.F356W.R.A.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/s23novi3p6qwm9f9hj9wutgju08be776.fits', 'NIRCam.F356W.R.A.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/41fnmswn1ttnwts6jj5fu73m4hs6icxd.fits', 'NIRCam.F356W.R.B.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/wx3rvjt0mvf0hnhv4wvqcmxu61gamwmm.fits', 'NIRCam.F356W.R.B.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/e0p6vkiow4jlp49deqkji9kekzdt4oon.fits', 'NIRCam.F360M.R.A.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/xbh0rjjvxn0x22k9ktiyikol7c4ep6ka.fits', 'NIRCam.F360M.R.A.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/e7artuotyv8l9wfoa3rk1k00o5mv8so8.fits', 'NIRCam.F360M.R.B.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/9r5bmick13ti22l6hcsw0uod75vqartw.fits', 'NIRCam.F360M.R.B.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/tqd1uqsf8nj12he5qa3hna0zodnlzfea.fits', 'NIRCam.F410M.R.A.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/4szffesvswh0h8fjym5m5ht37sj0jzrl.fits', 'NIRCam.F410M.R.A.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/iur0tpbts23lc5rn5n0tplzndlkoudel.fits', 'NIRCam.F410M.R.B.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/rvz8iznsnl0bsjrqiw7rv74jj24b0otb.fits', 'NIRCam.F410M.R.B.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/sv3g82qbb4u2umksgu5zdl7rp569sdi7.fits', 'NIRCam.F430M.R.A.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/mmqv1pkuzpj6abtufxxfo960z2v1oygc.fits', 'NIRCam.F430M.R.A.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/84q83haic2h6eq5c6p2frkybz551hp8d.fits', 'NIRCam.F430M.R.B.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/3osceplhq6kmvmm2a72jsgrg6z1ggw1p.fits', 'NIRCam.F430M.R.B.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/kitx7gdo5kool6jus2g19vdy7q7hmxck.fits', 'NIRCam.F444W.R.A.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/ug7y93v0en9c84hfp6d3vtjogmjou9u3.fits', 'NIRCam.F444W.R.A.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/0p9h9ofayq8q6dbfsccf3tn5lvxxod9i.fits', 'NIRCam.F444W.R.B.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/34hbqzibt5h72hm0rj9wylttj7m9wd19.fits', 'NIRCam.F444W.R.B.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/vj0rkyebg0afny1khdyiho4mktmtsi1q.fits', 'NIRCam.F460M.R.A.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/ky1z1dpewsjqab1o9hstihrec7h52oq4.fits', 'NIRCam.F460M.R.A.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/s93cwpcvnxfjwqbulnkh9ts9ln0fu9cz.fits', 'NIRCam.F460M.R.B.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/1178in8zg462es1fkl0mgcbpgp6kgb6t.fits', 'NIRCam.F460M.R.B.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/b855uj293klac8hnoqhrnv8ei0rcvudj.fits', 'NIRCam.F480M.R.A.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/werzjlp3ybxk2ovg6u689zsfpts2t8w3.fits', 'NIRCam.F480M.R.A.2nd.sensitivity.fits'),
('https://stsci.box.com/shared/static/yrh5mylru1upbo5rifbz77acn8k1ud6i.fits', 'NIRCam.F480M.R.B.1st.sensitivity.fits'),
('https://stsci.box.com/shared/static/oxu6jsg9cn9yqkh3nh646fx0flhw8rej.fits', 'NIRCam.F480M.R.B.2nd.sensitivity.fits')]
def download_file(url, file_name, output_directory='./', overwrite=False):
"""Download a file from Box given the direct URL
Parameters
----------
url : str
URL to the file to be downloaded
file_name : str
The name of the file being downloaded
output_directory : str
Directory to download file_name into
overwrite : str
If False and the file to download already exists, the download
will be skipped. If True, the file will be downloaded regardless
of whether it already exists in output_directory
Returns
-------
download_filename : str
Name of the downloaded file
"""
download_filename = os.path.join(output_directory, file_name)
if not os.path.isfile(download_filename) or overwrite is True:
print("Downloading {}".format(file_name))
with requests.get(url, stream=True) as response:
if response.status_code != 200:
raise RuntimeError("Wrong URL - {}".format(url))
with open(download_filename, 'wb') as f:
for chunk in response.iter_content(chunk_size=2048):
if chunk:
f.write(chunk)
else:
print("{} already exists. Skipping download.".format(download_filename))
return download_filename | _____no_output_____ | BSD-3-Clause | nircam_jdox/nircam_grisms/figure4_sensitivity.ipynb | aliciacanipe/nircam_jdox |
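As a quick illustration of the helper above, the call below fetches a single sensitivity file using the first Box entry from the table (the URL/file-name pair is taken from that list; the default output directory is used so no extra setup is needed). This is a hedged usage sketch, not a step the notebook itself performs here:

```python
# Usage sketch for download_file; the URL and file name come from the table
# of Box links above, and the default output directory ('./') is used.
local_path = download_file(
    'https://stsci.box.com/shared/static/jhk3deym5wbc68djtcahy3otk2xfjdb5.fits',
    'NIRCam.F322W2.R.B.1st.sensitivity.fits')
print(local_path)
```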
Load the data (The next cell assumes you downloaded the data into your ```Users/$(logname)/``` home directory) | if os.environ.get('LOGNAME') is None:
    raise ValueError("WARNING: LOGNAME environment variable not set!")

box_directory = os.path.join("/Users/", os.environ['LOGNAME'], "box_data")
box_directory

if not os.path.isdir(box_directory):
    try:
        os.mkdir(box_directory)
    except:
        raise OSError("Unable to create {}".format(box_directory))

for file_info in files:
    file_url, filename = file_info
    outfile = download_file(file_url, filename, output_directory=box_directory)
grism = "R"
mod = "A"
filters = ["F250M","F277W","F300M","F322W2","F335M","F356W","F360M","F410M","F430M","F444W","F460M","F480M"]
filenames = []
for fil in filters:
    filenames.append(os.path.join(box_directory, "NIRCam.%s.%s.%s.1st.sensitivity.fits" % (fil,grism,mod)))
filenames | _____no_output_____ | BSD-3-Clause | nircam_jdox/nircam_grisms/figure4_sensitivity.ipynb | aliciacanipe/nircam_jdox |
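The file names follow a single pattern, `NIRCam.<filter>.<grism>.<module>.<order>.sensitivity.fits`, so the lists for the other module or for the second order can be built the same way. A small sketch, assuming the files were downloaded into `box_directory` as above:

```python
# Sketch: same naming pattern with module B and the 2nd order; only the
# grism/module/order fields change, the filter list is reused as-is.
grism_b, mod_b, order_b = "R", "B", "2nd"
filenames_b_2nd = [os.path.join(box_directory,
                                "NIRCam.%s.%s.%s.%s.sensitivity.fits" % (fil, grism_b, mod_b, order_b))
                   for fil in filters]
filenames_b_2nd
```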
Generate the First Order Grism + Filter Sensitivity Plot Define some convenience functions | def find_nearest(array,value):
    # Return the element of `array` closest to `value`
    idx = (np.abs(array-value)).argmin()
    return array[idx]

def find_mid(w,s,w0,thr=0.05):
    # Find the wavelength near the initial guess w0 where the sensitivity
    # curve s(w) crosses the threshold thr (5% of peak by default)
    fct = interp1d(w,s,bounds_error=None,fill_value='extrapolate')
    def func(x):
        return np.abs(fct(x)-thr)
    res = fmin(func,w0)
    return res[0] | _____no_output_____ | BSD-3-Clause | nircam_jdox/nircam_grisms/figure4_sensitivity.ipynb | aliciacanipe/nircam_jdox |
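To make what `find_mid` returns concrete, here is a small self-contained check on a synthetic, Gaussian-shaped curve (illustrative numbers only, not NIRCam values). The 5%-of-peak crossings of this curve sit near 2.64 and 4.36 microns, and `find_mid` should recover them when started from each end of the wavelength range:

```python
import numpy as np

# Synthetic "bandpass": a Gaussian centred at 3.5 microns with sigma 0.35.
w_demo = np.linspace(2.3, 4.7, 1000)
s_demo = np.exp(-0.5*((w_demo - 3.5)/0.35)**2)

# Search each half of the range separately, as the plotting cell below does.
blue = w_demo < 3.5
red = w_demo > 3.5
w_blue_edge = find_mid(w_demo[blue], s_demo[blue], w_demo.min(), thr=0.05)
w_red_edge = find_mid(w_demo[red], s_demo[red], w_demo.max(), thr=0.05)
print(w_blue_edge, w_red_edge)   # roughly 2.64 and 4.36
```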
Create the plots | f, ax1 = plt.subplots(1, figsize=(15, 10))
NUM_COLORS = len(filters)
cm = pylab.get_cmap('tab10')
grism = "R"
mod = "A"
# Loop over the filters together with their sensitivity files so each
# curve gets its own legend label
for i, fil, fname in zip(range(NUM_COLORS), filters, filenames):
    color = cm(1.*i/NUM_COLORS)
    d = fits.open(fname)
    w = d[1].data["WAVELENGTH"]
    s = d[1].data["SENSITIVITY"]/(1e17)   # scale to units of 1e17
    ax1.plot(w, s, label=fil, lw=4, color=color)
ax1.legend(fontsize=16)
miny,maxy = ax1.get_ylim()
minx,maxx = ax1.get_xlim()
ax1.set_ylim(miny,2.15)
ax1.set_xlim(2.1,maxx)
ax1.tick_params(labelsize=18)
f.text(0.5, 0.04, 'Wavelength ($\mu m$)', ha='center', fontsize=22)
f.text(0.03, 0.5, 'Sensitivity ('+r'$1 \times 10^{17}\ \frac{e^{-} s^{-1}}{erg s^{-1} cm^{-2} A^{-1}}$'+')', va='center', rotation='vertical', fontsize=22) | _____no_output_____ | BSD-3-Clause | nircam_jdox/nircam_grisms/figure4_sensitivity.ipynb | aliciacanipe/nircam_jdox |
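If this figure needs to be exported rather than only displayed inline, one possible final step is a `savefig` call. The file name and resolution below are assumptions for illustration, not values specified by the notebook:

```python
# Illustrative export step; the output file name and dpi are assumptions.
f.savefig('NIRCam_modA_grismR_1st_order_sensitivity.png', dpi=150, bbox_inches='tight')
```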
Figure option 2: filter name positions | f, ax1 = plt.subplots(1, figsize=(15, 10))
thr = 0.05 # 5% of peak boundaries
NUM_COLORS = len(filters)
cm = pylab.get_cmap('tab10')
for i,fil,fname in zip(range(NUM_COLORS),filters,filenames):
    color = cm(1.*i/NUM_COLORS)
    d = fits.open(fname)
    w = d[1].data["WAVELENGTH"]
    s = d[1].data["SENSITIVITY"]/(1e17)
    # Locate the blue and red 5%-of-peak edges of each bandpass with find_mid,
    # searching each half of the wavelength range separately
    wmin,wmax = np.min(w),np.max(w)
    vg = w<(wmax+wmin)/2.
    w1 = find_mid(w[vg],s[vg],wmin,thr)
    vg = w>(wmax+wmin)/2.
    w2 = find_mid(w[vg],s[vg],wmax,thr)
    # Place each filter label just above its curve at the bandpass midpoint,
    # with small per-filter offsets to keep neighbouring labels from overlapping
    if fil == 'F356W':
        ax1.text((w2+w1)/2 -0.04, s[np.where(w == find_nearest(w, (w2+w1)/2))]+0.25, fil, ha='center',color=color,fontsize=16,weight='bold')
    elif fil == 'F335M':
        ax1.text((w2+w1)/2 -0.03, s[np.where(w == find_nearest(w, (w2+w1)/2))]+0.22, fil, ha='center',color=color,fontsize=16,weight='bold')
    elif fil == 'F460M':
        ax1.text((w2+w1)/2+0.15, s[np.where(w == find_nearest(w, (w2+w1)/2))]+0.12, fil, ha='center',color=color,fontsize=16,weight='bold')
    elif fil == 'F480M':
        ax1.text((w2+w1)/2+0.15, s[np.where(w == find_nearest(w, (w2+w1)/2))]+0.1, fil, ha='center',color=color,fontsize=16,weight='bold')
    else:
        ax1.text((w2+w1)/2 -0.04, s[np.where(w == find_nearest(w, (w2+w1)/2))]+0.2, fil, ha='center',color=color,fontsize=16,weight='bold')
    ax1.plot(w,s,label=fil,lw=4,color=color)
miny,maxy = ax1.get_ylim()
minx,maxx = ax1.get_xlim()
ax1.set_ylim(miny,2.15)
ax1.set_xlim(2.1,maxx)
ax1.tick_params(labelsize=18)
f.text(0.5, 0.04, 'Wavelength ($\mu m$)', ha='center', fontsize=22)
f.text(0.03, 0.5, 'Sensitivity ('+r'$1 \times 10^{17}\ \frac{e^{-} s^{-1}}{erg\ s^{-1} cm^{-2} A^{-1}}$'+')', va='center', rotation='vertical', fontsize=22) | _____no_output_____ | BSD-3-Clause | nircam_jdox/nircam_grisms/figure4_sensitivity.ipynb | aliciacanipe/nircam_jdox |
**Version 2**: disable unfreezing for speed. Setup for pytorch/xla on TPU | import os
import collections
from datetime import datetime, timedelta
os.environ["XRT_TPU_CONFIG"] = "tpu_worker;0;10.0.0.2:8470"
_VersionConfig = collections.namedtuple('_VersionConfig', 'wheels,server')
VERSION = "torch_xla==nightly"
CONFIG = {
'torch_xla==nightly': _VersionConfig('nightly', 'XRT-dev{}'.format(
(datetime.today() - timedelta(1)).strftime('%Y%m%d')))}[VERSION]
DIST_BUCKET = 'gs://tpu-pytorch/wheels'
TORCH_WHEEL = 'torch-{}-cp36-cp36m-linux_x86_64.whl'.format(CONFIG.wheels)
TORCH_XLA_WHEEL = 'torch_xla-{}-cp36-cp36m-linux_x86_64.whl'.format(CONFIG.wheels)
TORCHVISION_WHEEL = 'torchvision-{}-cp36-cp36m-linux_x86_64.whl'.format(CONFIG.wheels)
!export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
!apt-get install libomp5 -y
!apt-get install libopenblas-dev -y
!pip uninstall -y torch torchvision
!gsutil cp "$DIST_BUCKET/$TORCH_WHEEL" .
!gsutil cp "$DIST_BUCKET/$TORCH_XLA_WHEEL" .
!gsutil cp "$DIST_BUCKET/$TORCHVISION_WHEEL" .
!pip install "$TORCH_WHEEL"
!pip install "$TORCH_XLA_WHEEL"
!pip install "$TORCHVISION_WHEEL" |
The following NEW packages will be installed:
libomp5
0 upgraded, 1 newly installed, 0 to remove and 32 not upgraded.
Need to get 228 kB of archives.
After this operation, 750 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 libomp5 amd64 3.9.1-1 [228 kB]
Fetched 228 kB in 0s (5208 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libomp5:amd64.
(Reading database ... 59973 files and directories currently installed.)
Preparing to unpack .../libomp5_3.9.1-1_amd64.deb ...
Unpacking libomp5:amd64 (3.9.1-1) ...
Setting up libomp5:amd64 (3.9.1-1) ...
Processing triggers for libc-bin (2.24-11+deb9u4) ...
The following additional packages will be installed:
libopenblas-base
The following NEW packages will be installed:
libopenblas-base libopenblas-dev
0 upgraded, 2 newly installed, 0 to remove and 32 not upgraded.
Need to get 7602 kB of archives.
After this operation, 91.5 MB of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 libopenblas-base amd64 0.2.19-3 [3793 kB]
Get:2 http://deb.debian.org/debian stretch/main amd64 libopenblas-dev amd64 0.2.19-3 [3809 kB]
Fetched 7602 kB in 0s (35.9 MB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libopenblas-base.
(Reading database ... 59978 files and directories currently installed.)
Preparing to unpack .../libopenblas-base_0.2.19-3_amd64.deb ...
Unpacking libopenblas-base (0.2.19-3) ...
Selecting previously unselected package libopenblas-dev.
Preparing to unpack .../libopenblas-dev_0.2.19-3_amd64.deb ...
Unpacking libopenblas-dev (0.2.19-3) ...
Processing triggers for libc-bin (2.24-11+deb9u4) ...
Setting up libopenblas-base (0.2.19-3) ...
update-alternatives: using /usr/lib/openblas-base/libblas.so.3 to provide /usr/lib/libblas.so.3 (libblas.so.3) in auto mode
update-alternatives: using /usr/lib/openblas-base/liblapack.so.3 to provide /usr/lib/liblapack.so.3 (liblapack.so.3) in auto mode
Setting up libopenblas-dev (0.2.19-3) ...
update-alternatives: using /usr/lib/openblas-base/libblas.so to provide /usr/lib/libblas.so (libblas.so) in auto mode
update-alternatives: using /usr/lib/openblas-base/liblapack.so to provide /usr/lib/liblapack.so (liblapack.so) in auto mode
Processing triggers for libc-bin (2.24-11+deb9u4) ...
Found existing installation: torch 1.4.0
Uninstalling torch-1.4.0:
Successfully uninstalled torch-1.4.0
Found existing installation: torchvision 0.5.0
Uninstalling torchvision-0.5.0:
Successfully uninstalled torchvision-0.5.0
Copying gs://tpu-pytorch/wheels/torch-nightly-cp36-cp36m-linux_x86_64.whl...
Operation completed over 1 objects/77.8 MiB.
Copying gs://tpu-pytorch/wheels/torch_xla-nightly-cp36-cp36m-linux_x86_64.whl...
Operation completed over 1 objects/112.7 MiB.
Copying gs://tpu-pytorch/wheels/torchvision-nightly-cp36-cp36m-linux_x86_64.whl...
Operation completed over 1 objects/2.5 MiB.
Processing ./torch-nightly-cp36-cp36m-linux_x86_64.whl
ERROR: fastai 1.0.60 requires torchvision, which is not installed.
ERROR: catalyst 20.2.1 requires torchvision>=0.2.1, which is not installed.
ERROR: allennlp 0.9.0 has requirement spacy<2.2,>=2.1.0, but you'll have spacy 2.2.3 which is incompatible.
Installing collected packages: torch
Successfully installed torch-1.5.0a0+e0b90b8
Processing ./torch_xla-nightly-cp36-cp36m-linux_x86_64.whl
Installing collected packages: torch-xla
Successfully installed torch-xla-0.8+f1455a7
Processing ./torchvision-nightly-cp36-cp36m-linux_x86_64.whl
Requirement already satisfied: numpy in /opt/conda/lib/python3.6/site-packages (from torchvision==nightly) (1.18.1)
Requirement already satisfied: pillow>=4.1.1 in /opt/conda/lib/python3.6/site-packages (from torchvision==nightly) (5.4.1)
Requirement already satisfied: torch in /opt/conda/lib/python3.6/site-packages (from torchvision==nightly) (1.5.0a0+e0b90b8)
Requirement already satisfied: six in /opt/conda/lib/python3.6/site-packages (from torchvision==nightly) (1.14.0)
Installing collected packages: torchvision
Successfully installed torchvision-0.6.0a0+b2e9565
| MIT | image/2. Flower Classification with TPUs/kaggle/fast-pytorch-xla-for-tpu-with-multiprocessing.ipynb | nishchalnishant/Completed_Kaggle_competitions |
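Once the wheels above are installed, a minimal sanity check (a sketch, assuming the `XRT_TPU_CONFIG` set earlier points at a reachable TPU worker) is to request an XLA device and run a small tensor op on it:

```python
import torch
import torch_xla.core.xla_model as xm

# Acquire the default XLA (TPU) device and force a tiny computation on it;
# if the runtime is reachable this prints something like "xla:1" and a number.
device = xm.xla_device()
t = torch.randn(2, 2, device=device)
print(device, t.sum().item())
```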