We apply another type of normalization to 0-1 just for the purposes of plotting the image. If we didn't do this, the range of our values would be somewhere between -1 and 1, and matplotlib would not be able to interpret the entire range of values. By rescaling our -1 to 1 valued images to 0-1, we can visualize it bet...
norm_imgs_show = (norm_imgs - np.min(norm_imgs)) / (np.max(norm_imgs) - np.min(norm_imgs))
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(norm_imgs_show, 'normalized.png'))
session-1/session-1.ipynb
goddoe/CADL
apache-2.0
<a name="part-five---convolve-the-dataset"></a>
Part Five - Convolve the Dataset

<a name="instructions-4"></a>
Instructions

Using tensorflow, we'll attempt to convolve your dataset with one of the kernels we created during the lesson, and then in the next part, we'll take the sum of the convolved output to use for sort...
# First build 3 kernels for each input color channel
ksize = ...
kernel = np.concatenate([utils.gabor(ksize)[:, :, np.newaxis] for i in range(3)], axis=2)

# Now make the kernels into the shape: [ksize, ksize, 3, 1]:
kernel_4d = ...
assert(kernel_4d.shape == (ksize, ksize, 3, 1))
We'll perform the convolution with the 4d tensor in kernel_4d. This is a ksize x ksize x 3 x 1 tensor, where each input color channel corresponds to one filter with 1 output. Each filter looks like:
plt.figure(figsize=(5, 5))
plt.imshow(kernel_4d[:, :, 0, 0], cmap='gray')
plt.imsave(arr=kernel_4d[:, :, 0, 0], fname='kernel.png', cmap='gray')
Perform the convolution with the 4d tensors: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
convolved = utils.convolve(...
convolved_show = (convolved - np.min(convolved)) / (np.max(convolved) - np.min(convolved))
print(convolved_show.shape)
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(convolved_show[..., 0], 'convolved.png'), cmap='gray')
What we've just done is build a "hand-crafted" feature detector: the Gabor kernel. This kernel is built to respond to a particular orientation (horizontal edges) and a particular scale. It also responds equally to R, G, and B color channels, as that is how we have told the convolve operation to work: use the same kerne...
# Create a set of operations using tensorflow which could
# provide you for instance the sum or mean value of every
# image in your dataset:

# First flatten our convolved images so instead of many 3d images,
# we have many 1d vectors.
# This should convert our 4d representation of N x H x W x C to a
# 2d representatio...
What does your sorting reveal? Could you imagine how the same sorting over many more images might reveal the thing your dataset sought to represent? It is likely that the representations you wanted to find are hidden within "higher layers", i.e., "deeper features" of the image, and that these "low level" features, edges essen...
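The flatten-reduce-sort pipeline described above can be sketched with NumPy alone; here `convolved` is a hypothetical random stand-in for your convolved dataset (N x H x W x C), not the actual assignment data:

```python
import numpy as np

# Hypothetical stand-in for a convolved dataset of N images (N x H x W x C).
convolved = np.random.rand(10, 8, 8, 1)

# Flatten each 3d image into a 1d vector: N x H x W x C -> N x (H*W*C).
flattened = convolved.reshape(convolved.shape[0], -1)

# Reduce each image to a single scalar, e.g. the sum of its convolved values.
values = flattened.sum(axis=1)

# np.argsort gives the image indices ordered by that scalar.
idxs = np.argsort(values)
sorted_imgs = convolved[idxs]
```

Sorting by the sum of a Gabor-convolved image roughly orders the dataset by how much horizontal edge energy each image contains.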
utils.build_submission('session-1.zip', ('dataset.png', 'mean.png', 'std.png', 'normalized.png', 'kernel.png', 'convolved.png', 'sorted.png', ...
Shor's algorithm
"""Install Cirq.""" try: import cirq except ImportError: print("installing cirq...") !pip install --quiet cirq print("installed cirq.") """Imports for the notebook.""" import fractions import math import random import numpy as np import sympy from typing import Callable, List, Optional, Sequence, Unio...
docs/tutorials/shor.ipynb
quantumlib/Cirq
apache-2.0
Order finding

Factoring an integer $n$ can be reduced to finding the period of the <i>modular exponential function</i> (to be defined). Finding this period can be accomplished (with high probability) by finding the <i>order</i> of a randomly chosen element of the multiplicative group modulo $n$.

Let $n$ be a positive i...
"""Function to compute the elements of Z_n.""" def multiplicative_group(n: int) -> List[int]: """Returns the multiplicative group modulo n. Args: n: Modulus of the multiplicative group. """ assert n > 1 group = [1] for x in range(2, n): if math.gcd(x, n) == 1: gr...
For example, the multiplicative group modulo $n = 15$ is shown below.
"""Example of a multiplicative group.""" n = 15 print(f"The multiplicative group modulo n = {n} is:") print(multiplicative_group(n))
One can check that this set of elements indeed forms a group (under ordinary multiplication).

Classical order finding

A function for classically computing the order $r$ of an element $x \in \mathbb{Z}_n$ is provided below. This function simply computes the sequence

$$ x^2 \text{ mod } n $$
$$ x^3 \text{ mod } n $$
$$ ...
"""Function for classically computing the order of an element of Z_n.""" def classical_order_finder(x: int, n: int) -> Optional[int]: """Computes smallest positive r such that x**r mod n == 1. Args: x: Integer whose order is to be computed, must be greater than one and belong to the multipli...
An example of computing $r$ for a given $x \in \mathbb{Z}_n$ and given $n$ is shown in the code block below.
"""Example of (classically) computing the order of an element.""" n = 15 # The multiplicative group is [1, 2, 4, 7, 8, 11, 13, 14]. x = 8 r = classical_order_finder(x, n) # Check that the order is indeed correct. print(f"x^r mod n = {x}^{r} mod {n} = {x**r % n}")
The quantum part of Shor's algorithm is order finding, but done via a quantum circuit, which we'll discuss below.

Quantum order finding

Quantum order finding is essentially quantum phase estimation with unitary $U$ that computes the modular exponential function $f_x(z)$ for some randomly chosen $x \in \mathbb{Z}_n$. Th...
"""Example of defining an arithmetic (quantum) operation in Cirq.""" class Adder(cirq.ArithmeticOperation): """Quantum addition.""" def __init__(self, target_register, input_register): self.input_register = input_register self.target_register = target_register def registers(self): ...
Now that we have the operation defined, we can use it in a circuit. The cell below creates two qubit registers, then sets the first register to be $|10\rangle$ (in binary) and the second register to be $|01\rangle$ (in binary) via $X$ gates. Then, we use the Adder operation, then measure all the qubits. Since $10 + 01 ...
"""Example of using an Adder in a circuit.""" # Two qubit registers. qreg1 = cirq.LineQubit.range(2) qreg2 = cirq.LineQubit.range(2, 4) # Define the circuit. circ = cirq.Circuit( cirq.ops.X.on(qreg1[0]), cirq.ops.X.on(qreg2[1]), Adder(input_register=qreg1, target_register=qreg2), cirq.measure_each(*qre...
In the output of this code block, we first see the circuit which shows the initial $X$ gates, the Adder operation, then the final measurements. Next, we see the measurement outcomes which are all the bitstring $1011$ as expected. It is also possible to see the unitary of the adder operation, which we do below. Here, we...
"""Example of the unitary of an Adder operation.""" cirq.unitary( Adder(target_register=cirq.LineQubit.range(2), input_register=1) ).real
We can understand this unitary as follows. The $i$th column of the unitary is the state $|i + 1 \text{ mod } 4\rangle$. For example, if we look at the $0$th column of the unitary, we see the state $|i + 1 \text{ mod } 4\rangle = |0 + 1 \text{ mod } 4\rangle = |1\rangle$. If we look at the $1$st column of the unitary, w...
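That column structure is easy to verify numerically. Here is a minimal sketch in plain NumPy (not Cirq) of the same add-one-mod-4 permutation unitary:

```python
import numpy as np

dim = 4  # a two-qubit register has dimension 4

# Column i of the unitary is the state |i + 1 mod 4>.
U = np.zeros((dim, dim))
for i in range(dim):
    U[(i + 1) % dim, i] = 1.0

# Column 0 is |0 + 1 mod 4> = |1>, column 1 is |2>, and so on.
print(U[:, 0])  # -> [0. 1. 0. 0.]

# A permutation matrix like this is unitary: U U^T = I.
print(np.allclose(U @ U.T, np.eye(dim)))  # -> True
```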
"""Defines the modular exponential operation used in Shor's algorithm.""" class ModularExp(cirq.ArithmeticOperation): """Quantum modular exponentiation. This class represents the unitary which multiplies base raised to exponent into the target modulo the given modulus. More precisely, it represents the ...
In the apply method, we see that we evaluate (target * base**exponent) % modulus. The target and the exponent depend on the values of the respective qubit registers, and the base and modulus are constant -- namely, the modulus is $n$ and the base is some $x \in \mathbb{Z}_n$. The total number of qubits we will use is ...
"""Create the target and exponent registers for phase estimation, and see the number of qubits needed for Shor's algorithm. """ n = 15 L = n.bit_length() # The target register has L qubits. target = cirq.LineQubit.range(L) # The exponent register has 2L + 3 qubits. exponent = cirq.LineQubit.range(L, 3 * L + 3) # Dis...
As with the simple adder operation, this modular exponential operation has a unitary which we can display (memory permitting) as follows.
"""See (part of) the unitary for a modular exponential operation.""" # Pick some element of the multiplicative group modulo n. x = 5 # Display (part of) the unitary. Uncomment if n is small enough. # cirq.unitary(ModularExp(target, exponent, x, n))
Using the modular exponential operation in a circuit

The quantum part of Shor's algorithm is just phase estimation with the unitary $U$ corresponding to the modular exponential operation. The following cell defines a function which creates the circuit for Shor's algorithm using the ModularExp operation we defined abo...
"""Function to make the quantum circuit for order finding.""" def make_order_finding_circuit(x: int, n: int) -> cirq.Circuit: """Returns quantum circuit which computes the order of x modulo n. The circuit uses Quantum Phase Estimation to compute an eigenvalue of the unitary U|y⟩ = |y * x mod n⟩ ...
Using this function, we can visualize the circuit for a given $x$ and $n$ as follows.
"""Example of the quantum circuit for period finding.""" n = 15 x = 7 circuit = make_order_finding_circuit(x, n) print(circuit)
As previously described, we put the exponent register into an equal superposition via Hadamard gates. The $X$ gate on the last qubit in the target register is used for phase kickback. The modular exponential operation performs the sequence of controlled unitaries in phase estimation, then we apply the inverse quantum F...
"""Measuring Shor's period finding circuit.""" circuit = make_order_finding_circuit(x=5, n=6) res = cirq.sample(circuit, repetitions=8) print("Raw measurements:") print(res) print("\nInteger in exponent register:") print(res.data)
We interpret each measured bitstring as an integer, but what do these integers tell us? In the next section we look at how to classically post-process to interpret them.

Classical post-processing

The integer we measure is close to $s / r$ where $r$ is the order of $x \in \mathbb{Z}_n$ and $0 \le s < r$ is an integer. W...
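Recovering $r$ from the measured integer can be sketched with Python's built-in fractions module; the register size and measurement value below are made-up illustrations, not outputs of the circuit above:

```python
from fractions import Fraction

n = 15           # modulus whose group we work in
bits = 8         # hypothetical size of the exponent register
measured = 128   # hypothetical measurement outcome

# measured / 2^bits approximates s / r; limit_denominator performs the
# continued-fraction step, keeping the denominator below n.
f = Fraction(measured, 2**bits).limit_denominator(n)
r = f.denominator  # candidate for the order

print(f, r)  # -> 1/2 2
```

The candidate still has to be checked (e.g. that $x^r \equiv 1 \pmod n$), since the measured integer only approximates $s/r$ with high probability.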
def process_measurement(result: cirq.Result, x: int, n: int) -> Optional[int]:
    """Interprets the output of the order finding circuit.

    Specifically, it determines s/r such that exp(2πis/r) is an eigenvalue
    of the unitary

        U|y⟩ = |xy mod n⟩    0 <= y < n
        U|y⟩ = |y⟩           n <= y

    then ...
The next code block shows an example of creating an order finding circuit, executing it, then using the classical postprocessing function to determine the order. Recall that the quantum part of the algorithm succeeds with some probability. If the order is None, try re-running the cell a few times.
"""Example of the classical post-processing.""" # Set n and x here n = 6 x = 5 print(f"Finding the order of x = {x} modulo n = {n}\n") measurement = cirq.sample(circuit, repetitions=1) print("Raw measurements:") print(measurement) print("\nInteger in exponent register:") print(measurement.data) r = process_measureme...
You should see that the order of $x = 5$ in $\mathbb{Z}_6$ is $r = 2$. Indeed, $5^2 \text{ mod } 6 = 25 \text{ mod } 6 = 1$.

Quantum order finder

We can now define a streamlined function for the quantum version of order finding using the functions we have previously written. The quantum order finder below creates the ...
def quantum_order_finder(x: int, n: int) -> Optional[int]:
    """Computes smallest positive r such that x**r mod n == 1.

    Args:
        x: integer whose order is to be computed, must be greater than one
           and belong to the multiplicative group of integers modulo n
           (which consists of positiv...
This completes our quantum implementation of an order finder, and the quantum part of Shor's algorithm.

The complete factoring algorithm

We can use this quantum order finder (or the classical order finder) to complete Shor's algorithm. In the following code block, we add a few pre-processing steps which: (1) Check if $...
"""Functions for factoring from start to finish.""" def find_factor_of_prime_power(n: int) -> Optional[int]: """Returns non-trivial factor of n if n is a prime power, else None.""" for k in range(2, math.floor(math.log2(n)) + 1): c = math.pow(n, 1 / k) c1 = math.floor(c) if c1**k == n: ...
The function find_factor uses the quantum_order_finder by default, in which case it is executing Shor's algorithm. As previously mentioned, due to the large memory requirements for classically simulating this circuit, we cannot run Shor's algorithm for $n \ge 15$. However, we can use the classical order finder as a sub...
"""Example of factoring via Shor's algorithm (order finding).""" # Number to factor n = 184573 # Attempt to find a factor p = find_factor(n, order_finder=classical_order_finder) q = n // p print("Factoring n = pq =", n) print("p =", p) print("q =", q) """Check the answer is correct.""" p * q == n
Manifold Learning

One weakness of PCA is that it cannot detect non-linear features. A set of algorithms known as manifold learning has been developed to address this deficiency. A canonical dataset used in manifold learning is the S-curve:
import matplotlib.pyplot as plt
from sklearn.datasets import make_s_curve
from mpl_toolkits.mplot3d import Axes3D

X, y = make_s_curve(n_samples=1000)

ax = plt.axes(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], X[:, 2], c=y)
ax.view_init(10, -60);
notebooks/21.Unsupervised_learning-Non-linear_dimensionality_reduction.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
This is a 2-dimensional dataset embedded in three dimensions, but it is embedded in such a way that PCA cannot discover the underlying data orientation:
from sklearn.decomposition import PCA

X_pca = PCA(n_components=2).fit_transform(X)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y);
Manifold learning algorithms, available in the sklearn.manifold submodule, are however able to recover the underlying 2-dimensional manifold:
from sklearn.manifold import Isomap

iso = Isomap(n_neighbors=15, n_components=2)
X_iso = iso.fit_transform(X)
plt.scatter(X_iso[:, 0], X_iso[:, 1], c=y);
Manifold learning on the digits data

We can apply manifold learning techniques to much higher dimensional datasets, for example the digits data that we saw before:
from sklearn.datasets import load_digits

digits = load_digits()

fig, axes = plt.subplots(2, 5, figsize=(10, 5),
                         subplot_kw={'xticks': (), 'yticks': ()})
for ax, img in zip(axes.ravel(), digits.images):
    ax.imshow(img, interpolation="none", cmap="gray")
We can visualize the dataset using a linear technique, such as PCA. We saw this already provides some intuition about the data:
# build a PCA model
pca = PCA(n_components=2)
pca.fit(digits.data)

# transform the digits data onto the first two principal components
digits_pca = pca.transform(digits.data)

colors = ["#476A2A", "#7851B8", "#BD3430", "#4A2D4E", "#875525",
          "#A83683", "#4E655E", "#853541", "#3A3120", "#535D8E"]
plt.figure(figsi...
Using a more powerful, nonlinear technique can provide much better visualizations, though. Here, we are using the t-SNE manifold learning method:
from sklearn.manifold import TSNE

tsne = TSNE(random_state=42)
# use fit_transform instead of fit, as TSNE has no transform method:
digits_tsne = tsne.fit_transform(digits.data)

plt.figure(figsize=(10, 10))
plt.xlim(digits_tsne[:, 0].min(), digits_tsne[:, 0].max() + 1)
plt.ylim(digits_tsne[:, 1].min(), digits_tsne[:, ...
t-SNE has a somewhat longer runtime than other manifold learning algorithms, but the result is quite striking. Keep in mind that this algorithm is purely unsupervised and does not know about the class labels. Still, it is able to separate the classes very well (though the classes four, one and nine have been split into...
# %load solutions/21A_isomap_digits.py

# %load solutions/21B_tsne_classification.py
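As one possible approach to the first exercise (a sketch, not the course's own solution file), Isomap can be applied to the digits data in the same way t-SNE was used above:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap

digits = load_digits()

# Reduce the 64-dimensional digits data to 2 dimensions with Isomap.
iso = Isomap(n_neighbors=15, n_components=2)
digits_iso = iso.fit_transform(digits.data)

plt.figure(figsize=(10, 10))
plt.scatter(digits_iso[:, 0], digits_iso[:, 1], c=digits.target, cmap='tab10')
plt.xlabel('isomap component 1')
plt.ylabel('isomap component 2');
```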
1 - Problem Statement

1.1 - Dataset and Preprocessing

Run the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
data = open('dinos.txt', 'r').read()
data = data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
Course 5/Dinosaurus Island Character level language model final v3.ipynb
ShubhamDebnath/Coursera-Machine-Learning
mit
The characters are a-z (26 characters) plus the "\n" (newline) character, which in this assignment plays a role similar to the `<EOS>` ("End of sentence") token we discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a py...
char_to_ix = {ch: i for i, ch in enumerate(sorted(chars))}
ix_to_char = {i: ch for i, ch in enumerate(sorted(chars))}
print(ix_to_char)
1.2 - Overview of the model

Your model will have the following structure:

- Initialize parameters
- Run the optimization loop
    - Forward propagation to compute the loss function
    - Backward propagation to compute the gradients with respect to the loss function
    - Clip the gradients to avoid exploding gradients
    - Using the gradient...
### GRADED FUNCTION: clip

def clip(gradients, maxValue):
    '''
    Clips the gradients' values between minimum and maximum.

    Arguments:
    gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
    maxValue -- everything above this number is set to this number, and everything...
Expected output:

<table>
<tr><td> **gradients["dWaa"][1][2]** </td><td> 10.0 </td></tr>
<tr><td> **gradients["dWax"][3][1]** </td><td> -10.0 </td></tr>
<tr><td> **gradients["dWya"][1][2]** </td><td> 0.29713815361 </td></tr>
<...
# GRADED FUNCTION: sample

def sample(parameters, char_to_ix, seed):
    """
    Sample a sequence of characters according to a sequence of probability
    distributions output of the RNN

    Arguments:
    parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
    char_to_ix -- python dictio...
Expected output:

<table>
<tr><td> **list of sampled indices:** </td><td> [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0] </td></tr>
<tr>
<...
# GRADED FUNCTION: optimize

def optimize(X, Y, a_prev, parameters, learning_rate=0.01):
    """
    Execute one step of the optimization to train the model.

    Arguments:
    X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
    Y -- list of integers, exactly the...
Expected output:

<table>
<tr><td> **Loss** </td><td> 126.503975722 </td></tr>
<tr><td> **gradients["dWaa"][1][2]** </td><td> 0.194709315347 </td></tr>
<tr><td> **np.argmax(gradients["dWax"])** </td><td> 93 </td></tr>
<tr><td> **gra...
# GRADED FUNCTION: model

def model(data, ix_to_char, char_to_ix, num_iterations=35000, n_a=50, dino_names=7, vocab_size=27):
    """
    Trains the model and generates dinosaur names.

    Arguments:
    data -- text corpus
    ix_to_char -- dictionary that maps the index to a character
    char_to_ix -- ...
Run the following cell; you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
parameters = model(data, ix_to_char, char_to_ix)
Conclusion

You can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see ...
from __future__ import print_function

from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import p...
To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called "The Sonnets". Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run generate_output, which will prompt asking you for an in...
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])

# Run this cell to try with different inputs without having to re-train the model
generate_output()
Introduction

The HyperModel class in KerasTuner provides a convenient way to define your search space in a reusable object. You can override HyperModel.build() to define and hypertune the model itself. To hypertune the training process (e.g. by selecting the proper batch size, number of training epochs, or data augment...
import keras_tuner
import tensorflow as tf
from tensorflow import keras
import numpy as np

x_train = np.random.rand(1000, 28, 28, 1)
y_train = np.random.randint(0, 10, (1000, 1))
x_val = np.random.rand(1000, 28, 28, 1)
y_val = np.random.randint(0, 10, (1000, 1))
guides/ipynb/keras_tuner/custom_tuner.ipynb
keras-team/keras-io
apache-2.0
Then, we subclass the HyperModel class as MyHyperModel. In MyHyperModel.build(), we build a simple Keras model to do image classification for 10 different classes. MyHyperModel.fit() accepts several arguments. Its signature is shown below:

    def fit(self, hp, model, x, y, validation_data, callbacks=None, **kwargs)...
class MyHyperModel(keras_tuner.HyperModel):
    def build(self, hp):
        """Builds a convolutional model."""
        inputs = keras.Input(shape=(28, 28, 1))
        x = keras.layers.Flatten()(inputs)
        x = keras.layers.Dense(
            units=hp.Choice("units", [32, 64, 128]), activation="relu"
        )(x)...
Now, we can initialize the tuner. Here, we use Objective("my_metric", "min") as our metric to be minimized. The objective name should be consistent with the one you use as the key in the logs passed to the 'on_epoch_end()' method of the callbacks. The callbacks need to use this value in the logs to find the best epoch ...
tuner = keras_tuner.RandomSearch(
    objective=keras_tuner.Objective("my_metric", "min"),
    max_trials=2,
    hypermodel=MyHyperModel(),
    directory="results",
    project_name="custom_training",
    overwrite=True,
)
We start the search by passing the arguments we defined in the signature of MyHyperModel.fit() to tuner.search().
tuner.search(x=x_train, y=y_train, validation_data=(x_val, y_val))
Finally, we can retrieve the results.
best_hps = tuner.get_best_hyperparameters()[0]
print(best_hps.values)

best_model = tuner.get_best_models()[0]
best_model.summary()
Calculate the Nonredundant Read Fraction (NRF)

SAM format example:

    SRR585264.8766235  0  1  4  15  35M  *  0  0  CTTAAACAATTATTCCCCCTGCAAACATTTTCAAT  GGGGGGGGGGGGGGGGGGGGGGFGGGGGGGGGGGG  XT:A:U  NM:i:1  X0:i:1  X1:i:6  XM:i:1  XO:i:0  XG:i:0  MD:Z:8T26

Import the required...
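Each SAM record is one whitespace-separated line; the fields we need later are the flag (column 2), the reference name (column 3) and the start coordinate (column 4). A quick sketch on the example record above (optional tags omitted):

```python
# The example SAM record from above, without the trailing optional tags.
line = ("SRR585264.8766235 0 1 4 15 35M * 0 0 "
        "CTTAAACAATTATTCCCCCTGCAAACATTTTCAAT "
        "GGGGGGGGGGGGGGGGGGGGGGFGGGGGGGGGGGG")

# The slice [1:4] picks out flag, reference name and start coordinate.
flag, ref, start = line.split()[1:4]
print(flag, ref, start)  # -> 0 1 4
```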
import subprocess
import random

import matplotlib.pyplot as plt
import numpy as np
jupyter/ndolgikh/.ipynb_checkpoints/NGSchool_python-checkpoint.ipynb
NGSchool2016/ngschool2016-materials
gpl-3.0
Make the figures prettier and bigger
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (10, 5)
Parse the SAM file and extract the unique start coordinates. First, store the file name in a variable:
file = "/ngschool/chip_seq/bwa/input.sorted.bam"
Next we read the file using samtools. From each read we need to store the flag, chromosome name and start coordinate.
p = subprocess.Popen(["samtools", "view", "-q10", "-F260", file],
                     stdout=subprocess.PIPE)
coords = []
for line in p.stdout:
    flag, ref, start = line.decode('utf-8').split()[1:4]
    coords.append([flag, ref, start])
coords[:3]
What is the total number of our unique reads?
len(coords)
Randomly sample the coordinates to get 1M for NRF calculations
random.seed(1234)
sample = random.sample(coords, 1000000)
len(sample)
How many of those coordinates are unique? (We will use the Python set object, which keeps only unique items.)
uniqueStarts = {'watson': set(), 'crick': set()}
for coord in sample:
    flag, ref, start = coord
    if int(flag) & 16:
        uniqueStarts['crick'].add((ref, start))
    else:
        uniqueStarts['watson'].add((ref, start))
How many on the Watson strand?
len(uniqueStarts['watson'])
And on the Crick?
len(uniqueStarts['crick'])
Calculate the NRF
NRF_input = (len(uniqueStarts['watson']) + len(uniqueStarts['crick'])) * 1.0 / len(sample)
print(NRF_input)
Let's create a function from what we did above and apply it to all of our files! To use our function on the real sequencing datasets (not only on a small subset), we need to optimize our method a bit: we will use the Python module numpy.
def calculateNRF(filePath, pickSample=True, sampleSize=10000000, seed=1234):
    p = subprocess.Popen(['samtools', 'view', '-q10', '-F260', filePath],
                         stdout=subprocess.PIPE)
    coordType = np.dtype({'names': ['flag', 'ref', 'start'],
                          'formats': ['uint16', 'U10', 'ui...
Calculate the NRF for the ChIP-seq sample
NRF_chip = calculateNRF("/ngschool/chip_seq/bwa/sox2_chip.sorted.bam", sampleSize=1000000)
print(NRF_chip)
Plot the NRF!
plt.bar([0, 2], [NRF_input, NRF_chip], width=1)
plt.xlim([-0.5, 3.5])
plt.xticks([0.5, 2.5], ['Input', 'ChIP'])
plt.xlabel('Sample')
plt.ylabel('NRF')
plt.ylim([0, 1.25])
plt.yticks(np.arange(0, 1.2, 0.2))
plt.plot((-0.5, 3.5), (0.8, 0.8), 'red', linestyle='dashed')
plt.show()
Calculate the Signal Extraction Scaling

Load the results from the coverage calculations
countList = []
with open('/ngschool/chip_seq/bedtools/input_coverage.bed', 'r') as covFile:
    for line in covFile:
        countList.append(int(line.strip('\n').split('\t')[3]))
countList[0:6]
countList[-15:]
Let's see where our reads align in the genome. Plot the distribution of tags along the genome.
plt.plot(range(len(countList)), countList)
plt.xlabel('Bin number')
plt.ylabel('Bin coverage')
plt.xlim([0, len(countList)])
plt.show()
Now sort the list: order the windows based on the tag count.
countList.sort()
countList[0:6]
Sum all the aligned tags
countSum = sum(countList)
countSum
Calculate the cumulative fraction of tags along the ordered windows.
countFraction = []
for i, count in enumerate(countList):
    if i == 0:
        countFraction.append(count * 1.0 / countSum)
    else:
        countFraction.append((count * 1.0 / countSum) + countFraction[i - 1])
Look at the last five items of the list:
countFraction[-5:]
Calculate the number of windows.
winNumber = len(countFraction)
winNumber
Calculate each window's position as a fraction of the whole.
winFraction = []
for i in range(winNumber):
    winFraction.append(i*1.0 / winNumber)
Look at the last five items of our new list:
winFraction[-5:]
Now prepare the function!
def calculateSES(filePath):
    countList = []
    with open(filePath, 'r') as covFile:
        for line in covFile:
            countList.append(int(line.strip('\n').split('\t')[3]))
    plt.plot(range(len(countList)), countList)
    plt.xlabel('Bin number')
    plt.ylabel('Bin coverage')
    plt.xlim([0, len(countLis...
Use our function to calculate the signal extraction scaling for the Sox2 ChIP sample:
chipSes = calculateSES("/ngschool/chip_seq/bedtools/sox2_chip_coverage.bed")
Now we can plot the calculated fractions for both the input and ChIP sample:
plt.plot(winFraction, countFraction, label='input')
plt.plot(chipSes[0], chipSes[1], label='Sox2 ChIP')
plt.ylim([0, 1])
plt.xlabel('Ordered window fraction')
plt.ylabel('Genome coverage fraction')
plt.legend(loc='best')
plt.show()
The data contains one event per row and has 5 variables:

- user_id: Identifier for each user.
- event_timestamp: The time each event happened.
- lat: The latitude of the user when the event occurred.
- lon: The longitude of the user when the event occurred.
- event_type: The type of event that occurred: login, level, buy_coins ...
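For readers without the original export, a toy frame with the same five columns can stand in (every value below is invented; only the schema matches the description):

```python
import pandas as pd

# hypothetical events matching the described schema
data = pd.DataFrame({
    'user_id': [1, 1, 2, 2, 2],
    'event_timestamp': pd.to_datetime([
        '2017-07-01 10:00', '2017-07-01 10:05',
        '2017-07-02 09:00', '2017-07-02 09:10', '2017-07-02 09:20']),
    'lat': [40.71, 40.71, 34.05, 34.05, 34.05],
    'lon': [-74.01, -74.01, -118.24, -118.24, -118.24],
    'event_type': ['login', 'buy_coins', 'login', 'level', 'megapack'],
})
print(data.shape)  # (5, 5)
```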
apply_ex = data.groupby('user_id').apply(len)
print(apply_ex.head())
posts/product_data_071317.ipynb
dtrimarco/blog
mit
The output here is a pandas Series with each user_id as the index and the count of the number of events as values. Now to try the same thing with transform.
transform_ex = data.groupby('user_id').transform(len)
print(transform_ex.head())
What the heck happened here? This odd DataFrame highlights a key difference: apply by default returns an object with one element per group and transform returns an object of the exact same size as the input object. Unless specified, it operates column by column in order. How about we clean this up a bit and create a ne...
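The size difference is easy to verify on a tiny frame (toy data, not the event log described earlier):

```python
import pandas as pd

df = pd.DataFrame({'user_id': [1, 1, 2], 'x': [10, 20, 30]})

per_group = df.groupby('user_id')['x'].apply(len)       # one row per group
same_size = df.groupby('user_id')['x'].transform(len)   # one row per input row

print(len(per_group), len(same_size))  # 2 3
```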
data['event_count'] = data.groupby('user_id')['user_id'].transform(len)
print(data.head(7))
Much better. All we had to do was assign to the new event_count column and then specify the ['user_id'] column after the groupby statement. Whether you would prefer to have this additional column of repeating values depends on what you intend to do with the data afterwards. Let's assume this is acceptable. Now for some...
def add_value(x):
    if x == 'buy_coins':
        y = 1.00
    elif x == 'megapack':
        y = 10.00
    else:
        y = 0.0
    return y
Here we've defined a very simple custom function that assigns values to each of the four event types. Now to apply it to our data.
data['event_value'] = data['event_type'].apply(add_value)
print(data.head(7))
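The same mapping can be done without a named function by handing .map a dict and filling unmatched types with 0.0; a sketch on a standalone series (the event names match the ones above, the series itself is invented):

```python
import pandas as pd

events = pd.Series(['login', 'buy_coins', 'level', 'megapack'])

# value per event type; anything unlisted becomes NaN, then 0.0
event_value = events.map({'buy_coins': 1.00, 'megapack': 10.00}).fillna(0.0)

print(list(event_value))  # [0.0, 1.0, 0.0, 10.0]
```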
A Convenience Function
def plotDecisionBoundary(model, X, y):
    fig = plt.figure()
    ax = fig.add_subplot(111)
    padding = 0.6
    resolution = 0.0025
    colors = ['royalblue', 'forestgreen', 'ghostwhite']

    # Calculate the boundaries
    x_min, x_max = X[:, 0].min(), X[:, 0].max()
    y_min, y_max = X[:, 1].min(), X[:, 1].max()
    ...
Module5/Module5 - Lab5.ipynb
authman/DAT210x
mit
The Assignment

Load up the dataset into a variable called X. Check .head and dtypes to make sure you're loading your data properly--don't fail on the 1st step!
# .. your code here ..
Copy the wheat_type series slice out of X, and into a series called y. Then drop the original wheat_type column from the X:
# .. your code here ..
Do a quick, "ordinal" conversion of y. In actuality our classification isn't ordinal, but just as an experiment...
# .. your code here ..
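One way to sketch the ordinal conversion, assuming y holds the wheat_type strings (the series below is invented; category codes follow alphabetical order by default):

```python
import pandas as pd

# stand-in for the wheat_type series pulled out of X (hypothetical labels)
y = pd.Series(['kama', 'canadian', 'rosa', 'kama'])

# map each label to an integer code (categories ordered alphabetically)
y = y.astype('category').cat.codes
print(list(y))  # [1, 0, 2, 1]
```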
Do some basic nan munging. Fill each row's nans with the mean of the feature:
# .. your code here ..
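Mean imputation per feature can be sketched like this (a toy frame stands in for X):

```python
import numpy as np
import pandas as pd

# toy feature matrix with missing values (hypothetical)
X = pd.DataFrame({'a': [1.0, np.nan, 3.0], 'b': [np.nan, 4.0, 6.0]})

# fill each column's NaNs with that column's mean
X = X.fillna(X.mean())
print(X['a'].tolist())  # [1.0, 2.0, 3.0]
```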
Split X into training and testing data sets using train_test_split(). Use 0.33 test size, and use random_state=1. This is important so that your answers are verifiable. In the real world, you wouldn't specify a random_state:
# .. your code here ..
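A sketch of the split with the parameters named above, on synthetic arrays standing in for the real X and y:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(30).reshape(15, 2)  # hypothetical feature matrix
y = np.arange(15)                 # hypothetical labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=1)

print(len(X_train), len(X_test))  # 10 5
```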
Create an instance of SKLearn's Normalizer class and then train it using its .fit() method against your training data. The reason you only fit against your training data is that in a real-world situation, you'll only have your training data to train with! In this lab setting, you have both train+test data; but in th...
# .. your code here ..
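A sketch of the fit-on-train-only discipline with Normalizer (synthetic arrays; Normalizer happens to be stateless, but the same pattern applies to scalers that do learn parameters):

```python
import numpy as np
from sklearn.preprocessing import Normalizer

X_train = np.array([[3.0, 4.0], [1.0, 0.0]])  # hypothetical training features
X_test = np.array([[0.0, 2.0]])               # hypothetical testing features

norm = Normalizer().fit(X_train)     # fit against training data only
X_train_n = norm.transform(X_train)  # each row rescaled to unit length
X_test_n = norm.transform(X_test)    # transformed with the same preprocessor

print(X_train_n[0])  # [0.6 0.8]
```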
With your trained pre-processor, transform both your training AND testing data. Any testing data has to be transformed with the preprocessor that has been fit against your training data, so that it exists in the same feature-space as the original data used to train your models.
# .. your code here ..
Just like your preprocessing transformation, create a PCA transformation as well. Fit it against your training data, and then project your training and testing features into PCA space using the PCA model's .transform() method. This has to be done because the only way to visualize the decision boundary in 2D would be if...
# .. your code here ..
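A sketch of fitting PCA on the training split and projecting both splits into two components (random synthetic data):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X_train = rng.rand(20, 5)  # hypothetical training features
X_test = rng.rand(5, 5)    # hypothetical testing features

pca = PCA(n_components=2).fit(X_train)  # fit against training data only
X_train_2d = pca.transform(X_train)
X_test_2d = pca.transform(X_test)

print(X_train_2d.shape, X_test_2d.shape)  # (20, 2) (5, 2)
```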
Create and train a KNeighborsClassifier. Start with K=9 neighbors. Be sure to train your classifier against the pre-processed, PCA-transformed training data above! You do not, of course, need to transform your labels.
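A sketch of the classifier step with K=9, on synthetic 2-D points standing in for the PCA-projected features (the accuracy printed depends on the random data, so no expected value is shown):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(1)
X_train = rng.rand(40, 2)                    # hypothetical 2-D features
y_train = (X_train[:, 0] > 0.5).astype(int)  # a trivially learnable label

knn = KNeighborsClassifier(n_neighbors=9)
knn.fit(X_train, y_train)

# .score runs predict internally and returns mean accuracy
print(knn.score(X_train, y_train))
```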
# .. your code here ..

# I hope your KNeighbors classifier model from earlier was named 'knn'
# If not, adjust the following line:
plotDecisionBoundary(knn, X_train, y_train)
Display the accuracy score of your test data/labels, computed by your KNeighbors model. You do NOT have to run .predict before calling .score, since .score will take care of running your predictions for you automatically.
# .. your code here ..
Bonus

Instead of the ordinal conversion, try to get this assignment working with a proper Pandas get_dummies for feature encoding. You might have to update some of the plotDecisionBoundary() code.
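A minimal get_dummies sketch on an invented label series (column names come from the labels present, sorted alphabetically):

```python
import pandas as pd

wheat = pd.Series(['kama', 'canadian', 'rosa'])
dummies = pd.get_dummies(wheat)  # one indicator column per label

print(list(dummies.columns))  # ['canadian', 'kama', 'rosa']
```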
plt.show()
<a id='step2'></a>
2. Specify all variables for the module and scene

Below is a list of all of the possible parameters for makeModule. Scene and simulation parameters are also organized below. This will be a complete simulation in terms of the parameters you can modify. The below routine creates a HEXAG...
simulationname = 'tutorial_4'

## SceneDict Parameters
gcr = 0.33         # ground cover ratio, = module_height / pitch
albedo = 0.28      # 'concrete' # ground albedo
hub_height = 2.35  # we could also pass clearance_height.
azimuth_ang = 90   # Modules will be facing East.
lat = 37.5
lon = -77.6
nMods = 4          # doing a smal...
docs/tutorials/4 - Medium Level Example - Debugging your Scene with Custom Objects (Fixed Tilt 2-up with Torque Tube + CLEAN Routine + CustomObject).ipynb
NREL/bifacial_radiance
bsd-3-clause
<a id='step3'></a>
3. Create the Radiance Object and generate the Sky
demo = bifacial_radiance.RadianceObj(simulationname, path=str(testfolder))  # Create a RadianceObj 'object'
demo.setGround(albedo)  # input albedo number or material name like 'concrete'.
                        # To see options, run this without any input.
epwfile = demo.getEPW(lat, lon)  # pull TMY data for any global lat/lon
metdata = demo.rea...
<a id='step4'></a>
4. Calculating tracker angle/geometry for a specific timestamp

This trick is useful if you are trying to use the fixed-tilt steps in bifacial_radiance to model a tracker for one specific point in time (if you take a picture of a tracker, it looks fixed, right? Well then). We assigned a 10 degree til...
# Some tracking parameters that won't be needed after getting this angle:
axis_azimuth = 180
axis_tilt = 0
limit_angle = 60
backtrack = True

tilt = demo.getSingleTimestampTrackerAngle(metdata, timestamp, gcr, axis_azimuth,
                                           axis_tilt, limit_angle, backtrack)
print("\n NEW Calculated Tilt: %s " % tilt)
<a id='step5'></a>
5. Making the Module & the Scene, Visualize and run Analysis
# Making module with all the variables
module = demo.makeModule(name=module_type, x=x, y=y, bifi=1, zgap=zgap,
                         ygap=ygap, xgap=xgap, numpanels=numpanels)
module.addTorquetube(diameter=diameter, material=material, tubetype=tubetype,
                     visible=True, axisofrotation=True)

# create...
At this point you should be able to go into a command window (cmd.exe) and check the geometry. It should look like the image at the beginning of the journal. Example:

rvu -vf views\front.vp -e .01 -pe 0.02 -vp -2 -12 14.5 tutorial_4.oct
## Uncomment the line below to run rvu from the Jupyter notebook instead of your terminal.
## Simulation will stop until you close the rvu window.
#!rvu -vf views\front.vp -e .01 tutorial_4.oct
And then proceed happily with your analysis:
analysis = bifacial_radiance.AnalysisObj(octfile, demo.name)  # return an analysis object including the scan dimensions for back irradiance
sensorsy = 200  # setting this very high to see a detailed profile of the irradiance,
                # including the shadow of the torque tube on the rear side of the module.
frontscan, backscan =...
<a id='step6'></a>
6. Calculate Bifacial Ratio (clean results)

Although we could calculate a bifacial ratio average at this point, the value would be misleading: because the scene includes a torque tube and a ygap, some of the generated sensors fall on the torque tube, the sky, and/or the ground. To calculate the...
resultFile = 'results/irr_tutorial_4.csv'
results_loaded = bifacial_radiance.load.read1Result(resultFile)
print("Printing the dataframe containing the results just calculated in %s: " % resultFile)
results_loaded
print("Looking at only 1 sensor in the middle -- position 100 out of the 200 sensors sampled:")
results_load...
As an example, we can see above that sensor 100 falls on the hextube, and in the sky. We need to remove these readings to calculate the real bifacial_gain from the irradiance falling onto the modules. To do this we use cleanResult from the load.py module in bifacial_radiance. This finds the invalid materials and sets the irradia...
# Cleaning Results:
# remove invalid materials and set the irradiance values to NaN
clean_results = bifacial_radiance.load.cleanResult(results_loaded)
print("Sampling the same location as before to see what the results are now:")
clean_results.loc[100]
print('CORRECT Annual bifacial ratio average: %0.3f' % (clean...
<a id='step7'></a>
7. Add Custom Elements to your Scene

Example: Marker at 0,0 position

This shows how to add a custom element, in this case a cube, that will be placed in the center of your already created scene to mark the 0,0 location. This can be added at any point after makeScene has been run once. Notice that i...
name = 'MyMarker'
text = '! genbox black CenterMarker 0.1 0.1 4 | xform -t -0.05 -0.05 0'
customObject = demo.makeCustomObject(name, text)
This should have created a MyMarker.rad object in your objects folder. But creating the object does not automatically add it to the scene, so let's now add the customObject to the scene. We are not going to translate it or anything because we want it at the center, but you can pass translation, rotation, and any other ...
demo.appendtoScene(scene.radfiles, customObject, '!xform -rz 0')

# makeOct combines all of the ground, sky and object files into a .oct file.
octfile = demo.makeOct(demo.getfilelist())
appendtoScene appended the name of the custom object we created, together with the xform transformation we included as text, to the scene .rad file. Then makeOct merged this new scene with the ground and sky files into the octfile. At this point you should be able to go into a command window (cmd.exe) and check the geometry, and the marker should b...
## Uncomment the line below to run rvu from the Jupyter notebook instead of your terminal.
## Simulation will stop until you close the rvu window.
#!rvu -vf views\front.vp -e .01 tutorial_4.oct
Lorenz Attractor - 3D line and point plotting demo

The Lorenz attractor is the solution of a system of three coupled ordinary differential equations; we will use it to demonstrate mayavi's 3D plotting ability. We will look at some ways to make plotting lots of data more efficient.
# setup parameters for Lorenz equations
sigma = 10
beta = 8/3.
rho = 28

def lorenz(x, t):
    dx = np.zeros(3)
    dx[0] = -sigma*x[0] + sigma*x[1]
    dx[1] = rho*x[0] - x[1] - x[0]*x[2]
    dx[2] = -beta*x[2] + x[0]*x[1]
    return dx

# solve for a specific particle
# initial condition
y0 = np.ones(3) + .01

# time ste...
code_examples/python_mayavi/mayavi_intermediate.ipynb
thehackerwithin/berkeley
bsd-3-clause