Let's also plot the cost function and the gradients.
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Interpretation: You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the te...
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
    print("learning rate is: " + str(i))
    models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y,
                           num_iterations=1500, learning_rate=i, print_cost=False)
    print('\n' + "--------------------------------------------...
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Interpretation: - Different learning rates give different costs and thus different predictions results. - If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). - A lower cost doesn't...
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg"   # change this to the name of your image file
## END CODE HERE ##

# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num...
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
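Note that scipy.ndimage.imread and scipy.misc.imresize used above were removed in later SciPy releases. Below is a dependency-light sketch of the same preprocessing idea (nearest-neighbour resize, flatten, scale to [0, 1]); preprocess_image is an illustrative name, and the synthetic random array stands in for a real image file:

```python
import numpy as np

def preprocess_image(image, num_px):
    """Flatten an (H, W, 3) uint8 image into a (num_px*num_px*3, 1) column
    vector scaled to [0, 1] -- the shape logistic-regression predict() expects.
    A crude nearest-neighbour resize keeps this numpy-only."""
    h, w = image.shape[:2]
    rows = np.arange(num_px) * h // num_px
    cols = np.arange(num_px) * w // num_px
    resized = image[rows][:, cols]            # (num_px, num_px, 3)
    return resized.reshape(num_px * num_px * 3, 1) / 255.

# Synthetic stand-in for ndimage.imread(fname)
fake_image = np.random.randint(0, 256, size=(100, 120, 3), dtype=np.uint8)
flat = preprocess_image(fake_image, num_px=64)
print(flat.shape)   # (12288, 1)
```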
Explore the Data Play around with view_sentence_range to view different parts of the data.
view_sentence_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentenc...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence fr...
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionar...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
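The function body is truncated above, so here is a pure-Python sketch of what text_to_ids needs to do per the instructions (append the <EOS> id only to target sentences). The toy vocabularies below are made up for illustration; the real ones come from the preprocessing helpers:

```python
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """Convert newline-separated sentences to lists of word ids.
    Only the *target* sentences get the <EOS> id appended."""
    source_id_text = [[source_vocab_to_int[w] for w in line.split()]
                      for line in source_text.split('\n')]
    target_id_text = [[target_vocab_to_int[w] for w in line.split()]
                      + [target_vocab_to_int['<EOS>']]
                      for line in target_text.split('\n')]
    return source_id_text, target_id_text

# Toy vocabularies (hypothetical)
src_vocab = {'new': 0, 'jersey': 1}
tgt_vocab = {'<EOS>': 0, 'new': 1, 'jersey': 2}
src_ids, tgt_ids = text_to_ids('new jersey', 'new jersey', src_vocab, tgt_vocab)
print(src_ids, tgt_ids)   # [[0, 1]] [[1, 2, 0]]
```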
Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file.
""" DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU.
""" DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__ve...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Build the Neural Network You'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model

Input Implement the model_inputs() f...
def model_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate, keep probability)
    """
    inputs = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='targets')
    learning_r...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    ...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
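Setting TensorFlow aside, the transformation itself is just "drop the last id in each row, prepend the <GO> id". A numpy sketch of the same reshaping (process_decoding_input_np and GO_ID are illustrative names, not part of the project code):

```python
import numpy as np

GO_ID = 1   # stand-in for target_vocab_to_int['<GO>']

def process_decoding_input_np(target_data, go_id):
    """Drop the final word id from each batch row and prepend the <GO> id,
    mirroring what tf.strided_slice + tf.concat do in the TF-1.x version."""
    batch_size = target_data.shape[0]
    trimmed = target_data[:, :-1]
    go_col = np.full((batch_size, 1), go_id, dtype=target_data.dtype)
    return np.concatenate([go_col, trimmed], axis=1)

batch = np.array([[11, 12, 13],
                  [21, 22, 23]])
print(process_decoding_input_np(batch, GO_ID))
# [[ 1 11 12]
#  [ 1 21 22]]
```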
Encoding Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :return: RNN state
    """
    enc_cell = tf.contrib.rnn.M...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length,
                         decoding_scope, output_fn, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded ...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, maximum_length, vocab_size,
                         decoding_scope, output_fn, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder R...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create an RNN cell for decoding using rnn_size and num_layers. Create the output function using a lambda to transform its input, logits, to class logits. Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_le...
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size,
                   sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob):
    """
    Create decoding layer
    :param dec_embed_input: Decoder embedded input
    :param dec_embeddings: Decoder embeddings
    :param encoder_...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Build the Neural Network Apply the functions you implemented above to:
- Apply embedding to the input data for the encoder.
- Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
- Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function...
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length,
                  source_vocab_size, target_vocab_size, enc_embedding_size,
                  dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Inpu...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Neural Network Training Hyperparameters Tune the following parameters:
- Set epochs to the number of epochs.
- Set batch_size to the batch size.
- Set rnn_size to the size of the RNNs.
- Set num_layers to the number of layers.
- Set encoding_embedding_size to the size of the embedding for the encoder.
- Set decoding_embedding_siz...
# Number of Epochs
epochs = 4
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 384
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.6
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Build the Graph Build the graph using the neural network you implemented.
""" DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default():...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], ...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Save Parameters Save the batch_size and save_path parameters for inference.
""" DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path)
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Checkpoint
""" DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params()
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences:
- Convert the sentence to lowercase
- Convert words into ids using vocab_to_int
- Convert words not in the vocabulary to the <UNK> word id
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    return [vocab_to_int[word] if word in vocab_to_int else vocab_to_int['<UNK>'] ...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
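Since the cell above is truncated, here is a self-contained sketch of the same one-liner with a toy vocabulary to show the lowercasing and the <UNK> fallback (the vocabulary values are made up):

```python
def sentence_to_seq(sentence, vocab_to_int):
    """Lowercase the sentence and map each word to its id,
    falling back to the <UNK> id for out-of-vocabulary words."""
    return [vocab_to_int[word] if word in vocab_to_int else vocab_to_int['<UNK>']
            for word in sentence.lower().split()]

vocab = {'<UNK>': 0, 'he': 1, 'saw': 2, 'truck': 3}
print(sentence_to_seq('He saw a TRUCK', vocab))   # [1, 2, 0, 3]
```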
Translate This will translate translate_sentence from English to French.
translate_sentence = 'he saw a old yellow truck .'

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)

loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_path +...
py3/project-4/dlnd_language_translation.ipynb
jjonte/udacity-deeplearning-nd
unlicense
TensorFlow example Note: Change the action from update to create if you are deploying the model for the first time.
kfserving_op = components.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/65bed9b6d1d676ef2d541a970d3edc0aee12400d/components/kubeflow/kfserving/component.yaml'
)

@dsl.pipeline(
    name='kfserving pipeline',
    description='A pipeline for kfserving.'
)
def kfservingPipeline(
    action =...
docs/samples/pipelines/kfs-pipeline-v1alpha2.ipynb
kubeflow/kfserving-lts
apache-2.0
Custom model example
kfserving_op = components.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/65bed9b6d1d676ef2d541a970d3edc0aee12400d/components/kubeflow/kfserving/component.yaml'
)

@dsl.pipeline(
    name='kfserving pipeline',
    description='A pipeline for kfserving.'
)
def kfservingPipeline(
    action =...
docs/samples/pipelines/kfs-pipeline-v1alpha2.ipynb
kubeflow/kfserving-lts
apache-2.0
First we want to download a video, so that we can compare the algorithmic result against the original video. The file is downloaded if it does not already exist in the working directory. Next, it will create a directory of the same name, and unzip the file contents (Campus.zip to Campus/filename).
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

try:
    from urllib2 import urlopen
except ImportError:
    from urllib.request import urlopen
from scipy.io import loadmat, savemat
import os

ext = {"water": 'WaterSurface.zip',
       "fountain": 'Fountain.zip',
       "campus": 'Campus.zip',
       ...
blogsite/posts/robust-matrix-decomposition.ipynb
kastnerkyle/kastnerkyle.github.io-nikola
bsd-3-clause
The code below will read in all the .bmp images downloaded and unzipped from the website, convert them to grayscale, and scale the result between 0 and 1. Eventually, I plan to do a "full-color" version of this testing, but for now grayscale will have to suffice.
from scipy import misc
import numpy as np
from glob import glob

def rgb2gray(rgb):
    r, g, b = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2]
    gray = 0.2989 * r + 0.5870 * g + 0.1140 * b
    return gray / 255.

fdir = bname(ext[example])
names = sorted(glob(fdir + "/*.bmp"))
d1, d2, channels = misc.imread(names[0]).sha...
blogsite/posts/robust-matrix-decomposition.ipynb
kastnerkyle/kastnerkyle.github.io-nikola
bsd-3-clause
Robust PCA Robust Principal Component Analysis (PCA) is an extension of PCA. Rather than attempting to solve $X = L$, where $L$ is typically a low-rank approximation ($N \times M$, vs. $N \times P$, $M < P$), Robust PCA solves the factorization problem $X = L + S$, where $L$ is a low-rank approximation, and $S$ is a sp...
import numpy as np
from numpy.linalg import norm, svd

def inexact_augmented_lagrange_multiplier(X, lmbda=.01, tol=1e-3, maxiter=100, verbose=True):
    """
    Inexact Augmented Lagrange Multiplier
    """
    Y = X
    norm_two = norm(Y.ravel(), 2)
    norm_inf = norm(Y.ravel...
blogsite/posts/robust-matrix-decomposition.ipynb
kastnerkyle/kastnerkyle.github.io-nikola
bsd-3-clause
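The inexact ALM solver truncated above alternates two proximal steps; the one that produces the low-rank term $L$ is singular value thresholding (soft-thresholding the singular values). A minimal numpy sketch, where svt is my name for the operator rather than one from the post:

```python
import numpy as np
from numpy.linalg import svd

def svt(X, tau):
    """Singular value thresholding: shrink each singular value by tau.
    This is the proximal operator of the nuclear norm, used for L."""
    U, s, Vt = svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 30))  # rank-5 matrix

L_zero = svt(X, tau=1e6)   # a huge threshold kills every singular value
L_same = svt(X, tau=0.0)   # a zero threshold reconstructs X exactly
print(np.allclose(L_zero, 0), np.allclose(L_same, X))
```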
GoDec The code below contains an implementation of the GoDec algorithm, which attempts to solve the problem $X = L + S + G$, with $L$ low-rank, $S$ sparse, and $G$ as a component of Gaussian noise. By allowing the decomposition to expand to 3 matrix components, the algorithm is able to more effectively differentiate th...
import numpy as np
from numpy.linalg import norm
from scipy.linalg import qr

def wthresh(a, thresh):
    # Soft wavelet threshold
    res = np.abs(a) - thresh
    return np.sign(a) * ((res > 0) * res)

# Default threshold of .03 is assumed to be for input in the range 0-1...
# original matlab had 8 out of 255, which is a...
blogsite/posts/robust-matrix-decomposition.ipynb
kastnerkyle/kastnerkyle.github.io-nikola
bsd-3-clause
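wthresh above soft-thresholds entrywise, which is how GoDec enforces sparsity on $S$. A quick sanity check on a toy vector (the values are chosen arbitrarily):

```python
import numpy as np

def wthresh(a, thresh):
    # Soft wavelet threshold: shrink magnitudes toward zero by `thresh`,
    # zeroing anything whose magnitude is below it
    res = np.abs(a) - thresh
    return np.sign(a) * ((res > 0) * res)

a = np.array([-1.0, -0.02, 0.0, 0.02, 1.0])
print(wthresh(a, 0.03))
```

Entries within ±0.03 of zero are removed entirely; the survivors are pulled 0.03 closer to zero, which is what makes the operator "soft" rather than a hard cutoff.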
A Momentary Lapse of Reason Now it is time to do something a little unreasonable - we can actually take all of this data, reshape it into a series of images, and plot it as a video inside the IPython notebook! The first step is to generate the frames for the video as .png files, as shown below.
import os
import sys
import matplotlib.pyplot as plt
from scipy.io import loadmat
import numpy as np
from matplotlib import cm
import matplotlib

# demo inspired by / stolen from @kuantkid on GitHub - nice work!
def mlabdefaults():
    matplotlib.rcParams['lines.linewidth'] = 1.5
    matplotlib.rcParams['savefig.dpi'] =...
blogsite/posts/robust-matrix-decomposition.ipynb
kastnerkyle/kastnerkyle.github.io-nikola
bsd-3-clause
Echoes The code below will display HTML5 video for each of the videos generated in the previous step, and embed it in the IPython notebook. There are "echoes" of people, which are much more pronounced in the Robust PCA video than the GoDec version, likely due to the increased flexibility of an independent Gaussian term....
from IPython.display import HTML
from base64 import b64encode

def html5_video(alg, frames):
    # This *should* support all browsers...
    framesz = 250
    info = {"mp4": {"ext": "mp4", "encoded": '', "size": (frames * framesz, framesz)}}
    html_output = []
    for k in info.keys():
        f = open("%s_animation.%s"...
blogsite/posts/robust-matrix-decomposition.ipynb
kastnerkyle/kastnerkyle.github.io-nikola
bsd-3-clause
If these videos freeze for some reason, just hit refresh and they should start playing.
html5_video("IALM", 3)
html5_video("GoDec", 4)
blogsite/posts/robust-matrix-decomposition.ipynb
kastnerkyle/kastnerkyle.github.io-nikola
bsd-3-clause
Load CTCF ChIP-seq peaks for HFF from ENCODE This approach makes use of the narrowPeak schema for bioframe.read_table.
ctcf_peaks = bioframe.read_table(
    "https://www.encodeproject.org/files/ENCFF401MQL/@@download/ENCFF401MQL.bed.gz",
    schema='narrowPeak')
ctcf_peaks[0:5]
docs/tutorials/tutorial_assign_motifs_to_peaks.ipynb
open2c/bioframe
mit
Get CTCF motifs from JASPAR
### CTCF motif: http://jaspar.genereg.net/matrix/MA0139.1/
jaspar_url = 'http://expdata.cmmt.ubc.ca/JASPAR/downloads/UCSC_tracks/2022/hg38/'
jaspar_motif_file = 'MA0139.1.tsv.gz'
ctcf_motifs = bioframe.read_table(jaspar_url + jaspar_motif_file, schema='jaspar', skiprows=1)
ctcf_motifs[0:4]
docs/tutorials/tutorial_assign_motifs_to_peaks.ipynb
open2c/bioframe
mit
Overlap peaks & motifs
df_peaks_motifs = bioframe.overlap(ctcf_peaks,ctcf_motifs, suffixes=('_1','_2'), return_index=True)
docs/tutorials/tutorial_assign_motifs_to_peaks.ipynb
open2c/bioframe
mit
There are often multiple motifs overlapping one ChIP-seq peak, and a substantial number of peaks without motifs:
# note that counting motifs per peak can also be handled directly with bioframe.count_overlaps
# but since we re-use df_peaks_motifs below we instead use the pandas operations directly
motifs_per_peak = df_peaks_motifs.groupby(["index_1"])["index_2"].count().values
plt.hist(motifs_per_peak, np.arange(0, np.max(motifs_pe...
docs/tutorials/tutorial_assign_motifs_to_peaks.ipynb
open2c/bioframe
mit
assign the strongest motif to each peak
# since idxmax does not currently take NA, fill with -1
df_peaks_motifs['pval_2'] = df_peaks_motifs['pval_2'].fillna(-1)
idxmax_peaks_motifs = df_peaks_motifs.groupby(["chrom_1", "start_1", "end_1"])["pval_2"].idxmax().values
df_peaks_maxmotif = df_peaks_motifs.loc[idxmax_peaks_motifs]
df_peaks_maxmotif['pval_2'].repla...
docs/tutorials/tutorial_assign_motifs_to_peaks.ipynb
open2c/bioframe
mit
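The fillna/groupby/idxmax pattern in the cell above can be checked on a toy frame. The column names mirror the '_1'/'_2' suffixes produced by bioframe.overlap; the data below is made up for illustration:

```python
import pandas as pd

# Toy stand-in for the overlap output: peak A (chr1) is hit by two motifs,
# peak B (chr2) by none, so its pval_2 is NaN.
df = pd.DataFrame({
    'chrom_1': ['chr1', 'chr1', 'chr2'],
    'start_1': [100, 100, 500],
    'end_1':   [200, 200, 600],
    'pval_2':  [5.0, 9.0, None],
})

# idxmax skips NA rows, so fill with a sentinel first, then restore NA after
df['pval_2'] = df['pval_2'].fillna(-1)
best = df.loc[df.groupby(['chrom_1', 'start_1', 'end_1'])['pval_2'].idxmax()].copy()
best['pval_2'] = best['pval_2'].replace(-1, pd.NA)
print(best)   # one row per peak: the motif with the largest pval_2
```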
stronger peaks tend to have stronger motifs:
plt.rcParams['font.size'] = 12
df_peaks_maxmotif['fc_1'] = df_peaks_maxmotif['fc_1'].values.astype('float')
plt.scatter(df_peaks_maxmotif['fc_1'].values, df_peaks_maxmotif['pval_2'].values,
            5, alpha=0.5, lw=0)
plt.xlabel('ENCODE CTCF peak strength, fc')
plt.ylabel('JASPAR CTCF motif strength \n (-log10 pval ...
docs/tutorials/tutorial_assign_motifs_to_peaks.ipynb
open2c/bioframe
mit
We can also ask the reverse question: how many motifs overlap a ChIP-seq peak?
df_motifs_peaks = bioframe.overlap(ctcf_motifs, ctcf_peaks, how='left', suffixes=('_1','_2'))
m = df_motifs_peaks.sort_values('pval_1')
plt.plot(m['pval_1'].values[::-1],
         np.cumsum(pd.isnull(m['chrom_2'].values[::-1]) == 0) / np.arange(1, len(m) + 1))
plt.xlabel('pval')
plt.ylabel('probability motif overlaps a peak');
docs/tutorials/tutorial_assign_motifs_to_peaks.ipynb
open2c/bioframe
mit
filter peaks overlapping blacklisted regions: do any of our peaks overlap blacklisted genomic regions?
blacklist = bioframe.read_table(
    'https://www.encodeproject.org/files/ENCFF356LFX/@@download/ENCFF356LFX.bed.gz',
    schema='bed3')
blacklist[0:3]
docs/tutorials/tutorial_assign_motifs_to_peaks.ipynb
open2c/bioframe
mit
there appears to be a small spike in the number of peaks close to blacklist regions
closest_to_blacklist = bioframe.closest(ctcf_peaks, blacklist)
plt.hist(closest_to_blacklist['distance'].astype('Float64').astype('float'),
         np.arange(0, 1e4, 100));
docs/tutorials/tutorial_assign_motifs_to_peaks.ipynb
open2c/bioframe
mit
to be safe, let's remove anything +/- 1kb from a blacklisted region
# first let's select the columns we want for our final dataframe of peaks with motifs
df_peaks_maxmotif = df_peaks_maxmotif[
    ['chrom_1', 'start_1', 'end_1', 'fc_1',
     'chrom_2', 'start_2', 'end_2', 'pval_2', 'strand_2']]
# then rename columns for convenience when subtrac...
docs/tutorials/tutorial_assign_motifs_to_peaks.ipynb
open2c/bioframe
mit
there it is! we now have a dataframe containing positions of CTCF ChIP peaks, including the strongest motif underlying each peak, after conservative filtering for proximity to blacklisted regions
df_peaks_maxmotif_clean.iloc[7:15]
docs/tutorials/tutorial_assign_motifs_to_peaks.ipynb
open2c/bioframe
mit
1. Your first steps with Python 1.1 Introduction Python is a general purpose programming language. It is used extensively for scientific computing, data analytics and visualization, web development and software development. It has a wide user base and excellent library support. There are many ways to use and interact...
# change this cell into a Markdown cell. Then type something here and execute it (Shift-Enter)
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
2.2 Your first script It is a time honoured tradition that your very first program should be to print "Hello world!" How is this achieved in Python?
'''Make sure you are in "edit" mode and that this cell is for Coding
(you should see the In [ ]: on the left of the cell).
'''
print("Hello world!")
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
Notice that Hello world! is printed at the bottom of the cell as an output. In general, this is how output of Python code is displayed to you. print is a special function in Python. Its purpose is to display output to the console. Notice that we pass an argument, in this case a string "Hello world!", to the function...
# print your name in this cell.
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
2.3 Commenting Commenting is a way to annotate and document code. There are two ways to do this: inline using the # character, or by using ''' <documentation block> ''', the latter being multi-line and hence used mainly for documenting functions or classes. Comments enclosed using ''' ''' style commenting are ac...
# Addition
5+3
# Subtraction
8-9
# Multiplication
3*12
# Division
48/12
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
Note the floating point answer. In previous versions of Python, / meant floor division for integers; this is no longer the case in Python 3.
# Exponentiation. Limited precision though!
16**0.5
# Residue class modulo n
5%2
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
In the above, 5%2 means: return the remainder after 5 is divided by 2 (which is indeed 1). 3.1.1 Precedence A note on arithmetic precedence. As one expects, () have the highest precedence, followed by * and /. Addition and subtraction have the lowest precedence.
# Guess the output before executing this cell. Come on, don't cheat!
6%(1+3)
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
It is interesting to note that the % operator is not distributive. 3.1.2 Variables In general, one does not have to declare variables in Python before using them. We merely need to assign numbers to variables. In the computer, this means that a certain place in memory has been allocated to store that particular number. ...
# Assignment
x = 1
y = 2
x + y
x / y
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
Notice that after assignment, I can access the variables in a different cell. However, if you reassign a variable to a different number, the old values for that variable are overwritten.
x = 5
x + y - 2
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
Now try clicking back to the cell x+y and re-executing it. What do you think the answer will be? Even though that cell is above our reassignment cell, re-executing it means running that block of code with the latest values for the variables. It is for this reason that one must be very careful with the...
# For example
x = x + 1
print(x)
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
So what happened here? Well, if we recall x originally was assigned 5. Therefore x+1 would give us 6. This value is then reassigned to the exact same location in memory represented by the variable x. So now that piece of memory contains the value 6. We then use the print function to display the content of x. As this i...
# reset x to 5
x = 5
x += 1
print(x)

x = 5
# What do you think the values of x will be for x -= 1, x *= 2 or x /= 2?
# Test it out in the space below
print(x)
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
3.1.3 Floating point precision All of the above applies equally to floating point numbers (or real numbers). However, we must be mindful of floating point precision.
0.1+0.2
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
The following excerpt from the Python documentation explains what is happening quite clearly. To be fair, even our decimal system is inadequate to represent rational numbers like 1/3, 1/11 and so on. 3.2 Strings Strings are basically text. These are enclosed in ' ' or " ". The reason for having two ways of denoting st...
# Noting the difference between printing quoted variables (strings)
# and printing the variable itself.
x = 5
print(x)
print('x')
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
In the second print function, the text 'x' is printed while in the first print function, it is the contents of x which is printed to the console. 3.2.1 String formatting Strings can be assigned to variables just like numbers. And these can be recalled in a print function.
my_name = 'Tang U-Liang'
print(my_name)

# String formatting: Using the %
age = 35
print('Hello doctor, my name is %s. I am %d years old. I weigh %.1f kg' % (my_name, age, 70.25))

# or using .format method
print("Hi, I'm {name}. Please register {name} for this conference".format(name=my_name))
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
When using % to indicate string substitution, take note of the common formatting placeholders:
- %s to substitute strings
- %d for printing integer substitutions
- %.1f to print a floating point number to 1 decimal place (the value is rounded to that precision)

The utility of the .format method arises when the same stri...
fruit = 'Apple'
drink = 'juice'
print(fruit+drink)   # concatenation

# Don't like the lack of spacing between words?
print(fruit+' '+drink)
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
Use [] to access specific letters in the string. Python uses 0 indexing. So the first letter is accessed by my_string[0] while my_string[1] accesses the second letter.
print(fruit[0])
print(fruit[1])
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
Slicing is a way of getting specific subsets of the string. If you let $x_n$ denote the $n+1$-th letter (note zero indexing) in a string (and by letter this includes whitespace characters as well!) then writing my_string[i:j] returns a subset $$x_i, x_{i+1}, \ldots, x_{j-1}$$ of letters in a string. That means the slice [i...
favourite_drink = fruit+' '+drink
print("Printing the first to 3rd letter.")
print(favourite_drink[0:3])
print("\nNow I want to print the second to seventh letter:")
print(favourite_drink[1:7])
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
Notice the use of \n in the second print function. This is called a newline character, which does exactly what its name says. Also, in the third print function, notice the apparent separation between e and j. It is actually not a gap: the sixth letter is a whitespace character ' '. Slicing also utilizes arithmetic progressio...
print(favourite_drink[0:7:2])

# Here's a trick, try this out
print(favourite_drink[3:0:-1])
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
So what happened above? Well, [3:0:-1] means that starting from the 4-th letter $x_3$, which is 'l', return a substring including $x_{2}, x_{1}$ as well. Note that the progression does not include $x_0 =$ 'A' because the stopping point j is non-inclusive. The slice [:j] or [i:] means take substrings starting from the b...
# Write your answer here and check it with the output below
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
Answer: eciuj elppA 3.3 The type function All objects in python are instances of classes. It is useful sometimes to find out what type of object we are looking at, especially if it has been assigned to a variable. For this we use the type function.
x = 5.0
type(x)
type(favourite_drink)
type(True)
type(500)
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
4. list, here's where the magic begins Lists are the fundamental data structure in Python. These are analogous to arrays in C or Java. If you use R, lists are analogous to vectors (and not R lists). Declaring a list is as simple as using square brackets [ ] to enclose a list of objects (or variables) separated by commas.
# Here's a list called staff containing his name, his age and current remuneration
staff = ['Andy', 28, 980.15]
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
4.1 Properties of list objects and indexing One of the fundamental properties we can ask about lists is how many objects they contain. We use the len (short for length) function to do that.
len(staff)
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
Perhaps you want to recover that staff's name. It's in the first position of the list.
staff[0]
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
Notice that Python still outputs to the console even though we did not use the print function. Actually, the print function prints a particularly "nice" string representation of the object, which is why Andy would be printed without the quotation marks if print were used. Can you find me Andy's age now?
# type your answer here and run the cell
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
The same slicing rules for strings apply to lists as well. If we wanted Andy's age and wage, we would type staff[1:3]
staff[1:3]
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
This returns us a sub-list containing Andy's age and remuneration. 4.2 Nested lists Lists can also contain other lists. This ability to have a nested structure in lists gives it flexibility.
nested_list = ['apples', 'banana', [1.50, 0.40]]
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
Notice that if I type nested_list[2], Python will return the list [1.50, 0.40]. This can be accessed again using indexing (or slicing) notation [ ].
# Accessing items from within a nested list structure.
print(nested_list[2])

# Assigning nested_list[2] to a variable. The variable price represents a list
price = nested_list[2]
print(type(price))

# Getting the smaller of the two floats
print(nested_list[2][1])
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
4.3 List methods Right now, let us look at four very useful list methods. Methods are basically operations which modify lists. These are: pop which allows us to remove an item in a list. So for example if $x_0, x_1, \ldots, x_n$ are items in a list, calling my_list.pop(r) will modify the list so that it contains on...
# append
staff.append('Finance')
print(staff)

# pop away the information about his salary
andys_salary = staff.pop(2)
print(andys_salary)
print(staff)

# oops, made a mistake, I want to reinsert information about his salary
staff.insert(3, andys_salary)
print(staff)

contacts = [99993535, "andy@company.com"]
staf...
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
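The list methods described above can be sketched end to end; the list contents follow the running Andy example (extend is shown appending a contacts list item by item):

```python
staff = ['Andy', 28, 980.15]

staff.append('Finance')      # add one item to the end
salary = staff.pop(2)        # remove the item at index 2 and return it (980.15)
staff.insert(2, salary)      # put it back at index 2
staff.extend([99993535, 'andy@company.com'])  # append every item of another list

print(staff)  # ['Andy', 28, 980.15, 'Finance', 99993535, 'andy@company.com']
```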
4.3.1 Your first programming challenge Move information for Andy's email to the second position (i.e. index 1) in the list staff in one line of code
staff = ['Andy', 28, 'Finance', 980.15, 99993535, 'andy@company.com']
staff

# type your answer here

print(staff)
Day 1 - Unit 1.1.ipynb
uliang/First-steps-with-the-Python-language
mit
Unit Test The following unit test is expected to fail until you solve the challenge.
# %load test_sort_stack.py
from random import randint
from nose.tools import assert_equal


class TestSortStack(object):

    def get_sorted_stack(self, numbers):
        stack = MyStack()
        for x in numbers:
            stack.push(x)
        sorted_stack = stack.sort()
        return sorted_stack

    def test_s...
interactive-coding-challenges/stacks_queues/sort_stack/sort_stack_challenge.ipynb
saashimi/code_guild
mit
Using PCA to extract features Now we'll take a look at unsupervised learning on a facial recognition example. This uses a dataset available within scikit-learn consisting of a subset of the Labeled Faces in the Wild data. Note that this is a relatively large download (~200MB) so it may take a while to execute.
from sklearn import datasets

lfw_people = datasets.fetch_lfw_people(min_faces_per_person=70, resize=0.4,
                                       data_home='datasets')
lfw_people.data.shape
notebooks/03.2 Methods - Unsupervised Preprocessing.ipynb
rhiever/scipy_2015_sklearn_tutorial
cc0-1.0
Let's visualize these faces to see what we're working with:
fig = plt.figure(figsize=(8, 6))

# plot several images
for i in range(15):
    ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[])
    ax.imshow(lfw_people.images[i], cmap=plt.cm.bone)
notebooks/03.2 Methods - Unsupervised Preprocessing.ipynb
rhiever/scipy_2015_sklearn_tutorial
cc0-1.0
We'll do a typical train-test split on the images before performing unsupervised learning:
from sklearn.cross_validation import train_test_split

X_train, X_test, y_train, y_test = train_test_split(lfw_people.data,
                                                    lfw_people.target,
                                                    random_state=0)
print(X_train.shape, X_test.shape)
notebooks/03.2 Methods - Unsupervised Preprocessing.ipynb
rhiever/scipy_2015_sklearn_tutorial
cc0-1.0
Feature Reduction Using Principal Component Analysis We can use PCA to reduce the original 1850 features of the face images to a manageable size, while maintaining most of the information in the dataset. Here it is useful to use a variant of PCA called RandomizedPCA, which is an approximation of PCA that can be much f...
from sklearn import decomposition

pca = decomposition.RandomizedPCA(n_components=150, whiten=True)
pca.fit(X_train)
notebooks/03.2 Methods - Unsupervised Preprocessing.ipynb
rhiever/scipy_2015_sklearn_tutorial
cc0-1.0
One interesting part of PCA is that it computes the "mean" face, which can be interesting to examine:
plt.imshow(pca.mean_.reshape((50, 37)), cmap=plt.cm.bone)
notebooks/03.2 Methods - Unsupervised Preprocessing.ipynb
rhiever/scipy_2015_sklearn_tutorial
cc0-1.0
The principal components measure deviations about this mean along orthogonal axes. It is also interesting to visualize these principal components:
print(pca.components_.shape)

fig = plt.figure(figsize=(16, 6))
for i in range(30):
    ax = fig.add_subplot(3, 10, i + 1, xticks=[], yticks=[])
    ax.imshow(pca.components_[i].reshape((50, 37)), cmap=plt.cm.bone)
notebooks/03.2 Methods - Unsupervised Preprocessing.ipynb
rhiever/scipy_2015_sklearn_tutorial
cc0-1.0
The components ("eigenfaces") are ordered by their importance from top-left to bottom-right. We see that the first few components seem to primarily take care of lighting conditions; the remaining components pull out certain identifying features: the nose, eyes, eyebrows, etc. With this projection computed, we can now p...
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print(X_train_pca.shape)
print(X_test_pca.shape)
notebooks/03.2 Methods - Unsupervised Preprocessing.ipynb
rhiever/scipy_2015_sklearn_tutorial
cc0-1.0
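The fit/transform steps above can be sketched without scikit-learn: a minimal centred-SVD version of PCA (components and projection only, no whitening), using random stand-in data in place of the face matrix:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(100, 20)  # stand-in for the (n_samples, 1850) face matrix

n_components = 5
mean = X.mean(axis=0)       # the analogue of pca.mean_, the "mean face"
Xc = X - mean               # centre the data before the SVD
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:n_components]   # rows are the principal axes (pca.components_)

X_pca = Xc @ components.T        # project onto the top components
print(X_pca.shape)               # (100, 5)
```

The component rows come out orthonormal, which is what lets the projection be a plain matrix product.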
Compute MxNE with time-frequency sparse prior The TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA) that promotes focal (sparse) sources (such as dipole fitting techniques) [1] [2]. The benefit of this approach is that: it is spatio-temporal without assuming stationarity (sources properties can...
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#         Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de>
#
# License: BSD (3-clause)

import numpy as np

import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.inverse_sparse impor...
0.18/_downloads/a35e576fa66929a73782579dc334f91a/plot_time_frequency_mixed_norm_inverse.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Run solver
# alpha parameter is between 0 and 100 (100 gives 0 active source)
alpha = 40.  # general regularization parameter
# l1_ratio parameter between 0 and 1 promotes temporal smoothness
# (0 means no temporal regularization)
l1_ratio = 0.03  # temporal regularization parameter

loose, depth = 0.2, 0.9  # loose orientation &...
0.18/_downloads/a35e576fa66929a73782579dc334f91a/plot_time_frequency_mixed_norm_inverse.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plot dipole activations
plot_dipole_amplitudes(dipoles)

# Plot dipole location of the strongest dipole with MRI slices
idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles])
plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample',
                      subjects_dir=subjects_dir, mode='orthoview',
                      ...
0.18/_downloads/a35e576fa66929a73782579dc334f91a/plot_time_frequency_mixed_norm_inverse.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Show the evoked response and the residual for gradiometers
ylim = dict(grad=[-120, 120])
evoked.pick_types(meg='grad', exclude='bads')
evoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim,
            proj=True, time_unit='s')

residual.pick_types(meg='grad', exclude='bads')
residual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim,
              ...
0.18/_downloads/a35e576fa66929a73782579dc334f91a/plot_time_frequency_mixed_norm_inverse.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Generate stc from dipoles
stc = make_stc_from_dipoles(dipoles, forward['src'])
0.18/_downloads/a35e576fa66929a73782579dc334f91a/plot_time_frequency_mixed_norm_inverse.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
View in 2D and 3D ("glass" brain like 3D plot)
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
                             opacity=0.1,
                             fig_name="TF-MxNE (cond %s)" % condition,
                             modes=['sphere'], scale_factors=[1.])

time_label = 'TF-MxNE time=%0.2f ms'
clim = dict(kind='value', lims=[10e-9, 15e-9, 20e-9])
brain = ...
0.18/_downloads/a35e576fa66929a73782579dc334f91a/plot_time_frequency_mixed_norm_inverse.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
As before, we can use a native Python function to organize the definition of our TFF into a reusable component.
from mantle import DFF

class TFF(m.Circuit):
    IO = ['O', m.Out(m.Bit)] + m.ClockInterface()

    @classmethod
    def definition(io):
        # instance a DFF to hold the state of the toggle flip-flop - this needs to be done first
        dff = DFF()
        # compute the next state as the not of the old state
        dff.O ...
notebooks/tutorial/icestick/TFF.ipynb
phanrahan/magmathon
mit
Then we simply call this function inside our definition of the IceStick main.
from loam.boards.icestick import IceStick

icestick = IceStick()
icestick.Clock.on()
icestick.J3[0].rename('J3').output().on()

main = icestick.DefineMain()
main.J3 <= TFF().O
m.EndDefine()
notebooks/tutorial/icestick/TFF.ipynb
phanrahan/magmathon
mit
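The behaviour being built, an output that toggles on every clock edge, can be modelled in plain Python as a sanity check (a software sketch, not magma code):

```python
class ToggleFF:
    """Software model of a toggle flip-flop: O flips on every clock tick."""

    def __init__(self):
        self.O = 0  # register starts at 0, like the DFF reset state

    def tick(self):
        self.O ^= 1  # next state is the NOT of the old state
        return self.O

tff = ToggleFF()
print([tff.tick() for _ in range(6)])  # [1, 0, 1, 0, 1, 0]
```

The output alternates at half the clock rate, which is exactly what the logic analyzer should show on pin J3.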
We'll compile and build our program using the standard flow.
m.compile("build/tff", main)

%%bash
cd build
yosys -q -p 'synth_ice40 -top main -blif tff.blif' tff.v
arachne-pnr -q -d 1k -o tff.txt -p tff.pcf tff.blif
icepack tff.txt tff.bin
#iceprog tff.bin
notebooks/tutorial/icestick/TFF.ipynb
phanrahan/magmathon
mit
Let's inspect the generated verilog.
%cat build/tff.v
notebooks/tutorial/icestick/TFF.ipynb
phanrahan/magmathon
mit
We can verify that our implementation is functioning correctly by using a logic analyzer.
%cat build/tff.pcf
notebooks/tutorial/icestick/TFF.ipynb
phanrahan/magmathon
mit
=================================================================== Support Vector Regression (SVR) using linear and non-linear kernels =================================================================== Toy example of 1D regression using linear, polynomial and RBF kernels.
print(__doc__)

import numpy as np
from sklearn.svm import SVR
import matplotlib.pyplot as plt
labwork/lab2/sci-learn/non_linear_regression.ipynb
chaitra8/ml_lab_ecsc_306
apache-2.0
Generate sample data
X = np.sort(5 * np.random.rand(40, 1), axis=0)
y = np.sin(X).ravel()
labwork/lab2/sci-learn/non_linear_regression.ipynb
chaitra8/ml_lab_ecsc_306
apache-2.0
Add noise to targets
y[::5] += 3 * (0.5 - np.random.rand(8))
labwork/lab2/sci-learn/non_linear_regression.ipynb
chaitra8/ml_lab_ecsc_306
apache-2.0
Fit regression model
svr_rbf = SVR(kernel='rbf', C=1e3, gamma=0.1)
svr_lin = SVR(kernel='linear', C=1e3)
svr_poly = SVR(kernel='poly', C=1e3, degree=2)
y_rbf = svr_rbf.fit(X, y).predict(X)
y_lin = svr_lin.fit(X, y).predict(X)
y_poly = svr_poly.fit(X, y).predict(X)
labwork/lab2/sci-learn/non_linear_regression.ipynb
chaitra8/ml_lab_ecsc_306
apache-2.0
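The three models above differ only in the similarity function each kernel applies. The RBF kernel used here, k(x, z) = exp(-gamma * ||x - z||^2), can be sketched directly in NumPy:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=0.1):
    """Gram matrix of the RBF kernel between rows of X and rows of Z."""
    # squared distances via the expansion ||x-z||^2 = ||x||^2 + ||z||^2 - 2 x.z
    sq = (X**2).sum(1)[:, None] + (Z**2).sum(1)[None, :] - 2 * X @ Z.T
    return np.exp(-gamma * sq)

X = np.array([[0.0], [1.0], [2.0]])
K = rbf_kernel(X, X, gamma=0.1)
print(np.round(K, 3))  # diagonal is 1: each point is maximally similar to itself
```

Larger gamma makes the similarity fall off faster with distance, which is why the RBF fit can track the sine wave more closely than the linear kernel.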
Look at the results
lw = 2
plt.scatter(X, y, color='darkorange', label='data')
plt.plot(X, y_rbf, color='navy', lw=lw, label='RBF model')
plt.plot(X, y_lin, color='c', lw=lw, label='Linear model')
plt.plot(X, y_poly, color='cornflowerblue', lw=lw, label='Polynomial model')
plt.xlabel('data')
plt.ylabel('target')
plt.title('...
labwork/lab2/sci-learn/non_linear_regression.ipynb
chaitra8/ml_lab_ecsc_306
apache-2.0
The code is organised in cells and, if you want, you can change the code in the cells and run it again. Change the sum above and run it again. ... go ahead, I'll wait here ... "Pffff, I can do that with any calculator" True, but this is only the beginning; let's try something ...
print("Hallo allemaal!")
notebooks/nl-be/101 - Intro - Python leren kennen en IPython gebruiken.ipynb
RaspberryJamBe/ipython-notebooks
cc0-1.0
If the command produces a result, IPython will print this "output" below the cell. And if you forget something or mistype, it gets angry:
print("Dit lukt dus niet"
notebooks/nl-be/101 - Intro - Python leren kennen en IPython gebruiken.ipynb
RaspberryJamBe/ipython-notebooks
cc0-1.0
Python then tries to explain what is going wrong, but that is not always 100% clear. Can you figure out what goes wrong above? -Tip: the "Hallo allemaal" command might help; it is only a small oversight, but a computer can get completely confused by something like that- OK,...
a = 'Dit is een tekst'   # text must be enclosed in quotation marks '...'
a = "Dit is een tekst"   # but double quotation marks "..." are also allowed (as long as you don't mix them)
# oh, and everything after a # is a comment; Python simply skips it
b = 13
c = 273.15               # voo...
notebooks/nl-be/101 - Intro - Python leren kennen en IPython gebruiken.ipynb
RaspberryJamBe/ipython-notebooks
cc0-1.0
See, there is no result, so IPython prints nothing, but the variables are in memory all the same, just look:
print(a, b, c)
notebooks/nl-be/101 - Intro - Python leren kennen en IPython gebruiken.ipynb
RaspberryJamBe/ipython-notebooks
cc0-1.0
Methods and "dot notation" Some "things" or objects that you use in Python come with a kind of superpowers in the form of methods that you can call. You do this by putting a dot after the object and then typing the method (careful: to call a function you must always ...
# e.g. converting a text to uppercase:
a.upper()
notebooks/nl-be/101 - Intro - Python leren kennen en IPython gebruiken.ipynb
RaspberryJamBe/ipython-notebooks
cc0-1.0
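A few of the string methods reachable through dot notation, sketched on the same string value as the variable a above:

```python
a = 'Dit is een tekst'

print(a.upper())                     # 'DIT IS EEN TEKST'
print(a.lower())                     # 'dit is een tekst'
print(a.split())                     # ['Dit', 'is', 'een', 'tekst']
print(a.replace('tekst', 'string'))  # 'Dit is een string'
```

Note the parentheses after each method name: without them you get the method object itself instead of calling it.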
By pressing the <TAB> key after the dot, IPython will show a list of available methods; put your cursor after the dot and press <TAB> to try it out:
a.
notebooks/nl-be/101 - Intro - Python leren kennen en IPython gebruiken.ipynb
RaspberryJamBe/ipython-notebooks
cc0-1.0