Dataset columns: markdown (string, 0–1.02M chars), code (string, 0–832k chars), output (string, 0–1.02M chars), license (string, 3–36 chars), path (string, 6–265 chars), repo_name (string, 6–127 chars).
In the previous notebook we used `model = KNeighborsClassifier()`. All scikit-learn models can be created without arguments, which means that you don't need to understand the details of the model to use it in scikit-learn. One of the `KNeighborsClassifier` parameters is `n_neighbors`. It controls the number of neighbors we are going to use to make a prediction for a new data point. What is the default value of the `n_neighbors` parameter? Hint: look at the help inside your notebook with `KNeighborsClassifier?` or on the [scikit-learn website](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html). Create a `KNeighborsClassifier` model with `n_neighbors=50`.
# Write your code here.
_____no_output_____
CC-BY-4.0
notebooks/02_numerical_pipeline_ex_00.ipynb
khanfarhan10/scikit-learn-mooc
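A hedged sketch of one possible answer for this step (for reference, the default value of `n_neighbors` is 5; the exercise asks for 50):

```python
# Create a k-nearest neighbors classifier that votes over 50 neighbors
from sklearn.neighbors import KNeighborsClassifier

model = KNeighborsClassifier(n_neighbors=50)
```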
Fit this model on the data and target loaded above
# Write your code here.
_____no_output_____
CC-BY-4.0
notebooks/02_numerical_pipeline_ex_00.ipynb
khanfarhan10/scikit-learn-mooc
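One way to do this, assuming `data` and `target` were loaded earlier in the notebook as stated in the prompt:

```python
# Fit the 50-nearest-neighbors model on the numerical features and the target
model.fit(data, target)
```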
Use your model to make predictions on the first 10 data points inside the data. Do they match the actual target values?
# Write your code here.
_____no_output_____
CC-BY-4.0
notebooks/02_numerical_pipeline_ex_00.ipynb
khanfarhan10/scikit-learn-mooc
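A possible sketch, assuming `data` is a pandas DataFrame and `target` a pandas Series:

```python
# Predict on the first 10 rows and compare with the recorded labels
first_predictions = model.predict(data[:10])
print(first_predictions)
print(target[:10].values)
print((first_predictions == target[:10].values).sum(), "predictions match out of 10")
```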
Compute the accuracy on the training data.
# Write your code here.
_____no_output_____
CC-BY-4.0
notebooks/02_numerical_pipeline_ex_00.ipynb
khanfarhan10/scikit-learn-mooc
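The estimator's `score` method returns the accuracy directly; a minimal sketch:

```python
# Accuracy of the model on the data it was trained on
train_accuracy = model.score(data, target)
print(f"Training accuracy: {train_accuracy:.3f}")
```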
Now load the test data from `"../datasets/adult-census-numeric-test.csv"` and compute the accuracy on the test data.
# Write your code here.
_____no_output_____
CC-BY-4.0
notebooks/02_numerical_pipeline_ex_00.ipynb
khanfarhan10/scikit-learn-mooc
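A hedged sketch of this final step; the target column name `class` is an assumption based on the training file used earlier in the MOOC:

```python
import pandas as pd

# Load the held-out test set and separate features from the target
adult_census_test = pd.read_csv("../datasets/adult-census-numeric-test.csv")
target_test = adult_census_test["class"]              # assumed target column name
data_test = adult_census_test.drop(columns="class")   # remaining numerical features

# Accuracy on data the model has never seen
test_accuracy = model.score(data_test, target_test)
print(f"Test accuracy: {test_accuracy:.3f}")
```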
The next step in the gap analysis is to calculate the Turbine Ideal Energy (TIE) for the wind farm based on SCADA data
%load_ext autoreload
%autoreload 2
_____no_output_____
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
This notebook provides an overview and walk-through of the turbine ideal energy (TIE) method in OpenOA. The TIE metric is defined as the amount of electricity generated by all turbines at a wind farm operating under normal conditions (i.e., not subject to downtime or significant underperformance, but subject to wake losses and moderate turbine performance losses). The approach to calculate TIE is to:

1. Filter out underperforming data from the power curve for each turbine,
2. Develop a statistical relationship between the remaining power data and key atmospheric variables from a long-term reanalysis product,
3. Long-term correct the period-of-record power data using the above statistical relationship,
4. Sum up the long-term corrected power data across all turbines to get TIE for the wind farm.

Here we use different reanalysis products to capture the uncertainty around the modeled wind resource. We also consider uncertainty due to power data accuracy and the power curve filtering choices for identifying normal turbine performance made by the analyst. In this example, the process for estimating TIE is illustrated both with and without uncertainty quantification.
# Import required packages
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

from project_ENGIE import Project_Engie
from operational_analysis.methods import turbine_long_term_gross_energy
_____no_output_____
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
In the call below, make sure the appropriate path to the CSV input files is specified. In this example, the CSV files are located directly in the 'examples/data/la_haute_borne' folder.
# Load plant object
project = Project_Engie('./data/la_haute_borne/')

# Load and prepare the wind farm data
project.prepare()

# Let's take a look at the columns in the SCADA data frame
project._scada.df.columns
_____no_output_____
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
TIE calculation without uncertainty quantification

Next we create a TIE object which will contain the analysis to be performed. The method has the ability to calculate uncertainty in the TIE metric through a Monte Carlo sampling of filtering thresholds, power data, and reanalysis product choices. For now, we turn this option off and run the method a single time.
ta = turbine_long_term_gross_energy.TurbineLongTermGrossEnergy(project)
INFO:operational_analysis.methods.turbine_long_term_gross_energy:Initializing TurbineLongTermGrossEnergy Object INFO:operational_analysis.methods.turbine_long_term_gross_energy:Note: uncertainty quantification will NOT be performed in the calculation INFO:operational_analysis.methods.turbine_long_term_gross_energy:Processing SCADA data into dictionaries by turbine (this can take a while)
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
All of the steps in the TIE calculation process are pulled under a single run() function. These steps include:

1. Processing reanalysis data to daily averages,
2. Filtering the SCADA data,
3. Fitting the daily reanalysis data to daily SCADA data using a Generalized Additive Model (GAM),
4. Applying the GAM results to calculate long-term TIE for the wind farm.

By setting UQ = False (the default argument value), we must manually specify key filtering thresholds that would otherwise be sampled from a range of values through Monte Carlo. Specifically, we must set thresholds applied to the bin_filter() function in the toolkits.filtering class of OpenOA.
# Specify filter threshold values to be used
wind_bin_thresh = 2.0     # Exclude data outside 2 m/s of the median for each power bin
max_power_filter = 0.90   # Don't apply bin filter above 0.9 of turbine capacity
_____no_output_____
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
We also must decide how to deal with missing data when computing daily sums of energy production from each turbine. Here we set the threshold at 0.9 (i.e., if more than 90% of SCADA data are available for a given day, scale up the daily energy by the fraction of data missing; if less than 90% of the data are available, exclude that day from the analysis).
# Set the correction threshold to 90%
correction_threshold = 0.90
_____no_output_____
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
Now we'll call the run() method to calculate TIE, choosing two reanalysis products to be used in the TIE calculation process.
# We can choose to save key plots to a file by setting enable_plotting = True and
# specifying a directory to save the images. For now we turn off this feature.
ta.run(reanal_subset = ['era5', 'merra2'],
       enable_plotting = False,
       plot_dir = None,
       wind_bin_thresh = wind_bin_thresh,
       max_power_filter = max_power_filter,
       correction_threshold = correction_threshold)
0%| | 0/2 [00:00<?, ?it/s]INFO:operational_analysis.methods.turbine_long_term_gross_energy:Filtering turbine data INFO:operational_analysis.methods.turbine_long_term_gross_energy:Processing reanalysis data to daily averages INFO:operational_analysis.methods.turbine_long_term_gross_energy:Processing scada data to daily sums 0it [00:00, ?it/s] 4it [00:00, 27.11it/s] INFO:operational_analysis.methods.turbine_long_term_gross_energy:Setting up daily data for model fitting INFO:operational_analysis.methods.turbine_long_term_gross_energy:Fitting model data /Users/esimley/opt/anaconda3/lib/python3.7/site-packages/scipy/linalg/basic.py:1321: RuntimeWarning: internal gelsd driver lwork query error, required iwork dimension not returned. This is likely the result of LAPACK bug 0038, fixed in LAPACK 3.2.2 (released July 21, 2010). Falling back to 'gelss' driver. x, resids, rank, s = lstsq(a, b, cond=cond, check_finite=False) INFO:operational_analysis.methods.turbine_long_term_gross_energy:Applying fitting results to calculate long-term gross energy 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 1/2 [00:02<00:02, 2.02s/it]INFO:operational_analysis.methods.turbine_long_term_gross_energy:Filtering turbine data INFO:operational_analysis.methods.turbine_long_term_gross_energy:Processing reanalysis data to daily averages INFO:operational_analysis.methods.turbine_long_term_gross_energy:Processing scada data to daily sums 0it [00:00, ?it/s] 4it [00:00, 25.93it/s] INFO:operational_analysis.methods.turbine_long_term_gross_energy:Setting up daily data for model fitting INFO:operational_analysis.methods.turbine_long_term_gross_energy:Fitting model data INFO:operational_analysis.methods.turbine_long_term_gross_energy:Applying fitting results to calculate long-term gross energy 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:03<00:00, 1.93s/it] INFO:operational_analysis.methods.turbine_long_term_gross_energy:Run completed
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
Now that we've finished the TIE calculation, let's examine results
ta._plant_gross

# What is the long-term annual TIE for the whole plant?
print('Long-term turbine ideal energy is %s GWh/year' % np.round(np.mean(ta._plant_gross/1e6), 1))
Long-term turbine ideal energy is 13.7 GWh/year
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
The long-term TIE value of 13.7 GWh/year is based on the mean TIE resulting from the two reanalysis products considered. Next, we can examine how well the filtering worked by examining the power curves for each turbine using the plot_filtered_power_curves() function.
# Currently saving figures in examples folder. The folder where figures are saved can be changed if desired.
ta.plot_filtered_power_curves(save_folder = "./", output_to_terminal = True)
_____no_output_____
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
Overall these are very clean power curves, and the filtering algorithms seem to have done a good job of catching the most egregious outliers. Now let's look at the daily data and how well the power curve fit worked
# Currently saving figures in examples folder. The folder where figures are saved can be changed if desired.
ta.plot_daily_fitting_result(save_folder = "./", output_to_terminal = True)
_____no_output_____
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
Overall the fit looks good. The modeled data sometimes estimate higher energy at low wind speeds compared to the observed data, but keep in mind that the model is fit to long-term wind speed, wind direction, and air density, whereas we are only showing the relationship to wind speed here.

Note that 'imputed' means daily power data that were missing for a specific turbine but were calculated by establishing statistical relationships between that turbine and its neighbors. This is necessary since a wind farm often has one turbine down and, without imputation, very little daily data would be left if we excluded days when a turbine was down.

TIE calculation including uncertainty quantification

Now we will create a TIE object for calculating TIE and quantifying the uncertainty in our estimate. The method estimates uncertainty in the TIE metric through a Monte Carlo sampling of filtering thresholds, power data, and reanalysis product choices.

Note that we set the number of Monte Carlo simulations to only 100 in this example because of the relatively high computational effort required to perform a single iteration. In practice, a larger number of simulations is recommended (the default value is 2000).
ta = turbine_long_term_gross_energy.TurbineLongTermGrossEnergy(
    project,
    UQ = True,     # enable uncertainty quantification
    num_sim = 100  # number of Monte Carlo simulations to perform
)
INFO:operational_analysis.methods.turbine_long_term_gross_energy:Initializing TurbineLongTermGrossEnergy Object INFO:operational_analysis.methods.turbine_long_term_gross_energy:Note: uncertainty quantification will be performed in the calculation INFO:operational_analysis.methods.turbine_long_term_gross_energy:Processing SCADA data into dictionaries by turbine (this can take a while)
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
With uncertainty quantification enabled (UQ = True), we can specify the assumed uncertainty of the SCADA power data (0.5% by default) and ranges of two key filtering thresholds from which the Monte Carlo simulations will sample. Specifically, these thresholds are applied to the bin_filter() function in the toolkits.filtering class of OpenOA.

Note that the following parameters are the default values used in the run() method.
uncertainty_scada=0.005  # Assumed uncertainty of SCADA power data (0.5%)

# Range of filter threshold values to be used by Monte Carlo simulations

# Data outside of a range of wind speeds from 1 to 3 m/s of the median for each power bin are considered
wind_bin_thresh=(1, 3)

# The bin filter will be applied up to fractions of turbine capacity from 80% to 90%
max_power_filter=(0.8, 0.9)
_____no_output_____
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
We will consider a range of availability thresholds for dealing with missing data when computing daily sums of energy production from each turbine (i.e., if more than the given threshold of SCADA data are available for a given day, scale up the daily energy by the fraction of data missing; if less than the given threshold of data are available, exclude that day from the analysis). Here we set the range of thresholds as 85% to 95%.
correction_threshold=(0.85, 0.95)
_____no_output_____
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
Now we'll call the run() method to calculate TIE with uncertainty quantification, again choosing two reanalysis products to be used in the TIE calculation process.

Note that without uncertainty quantification (UQ = False), a separate TIE value is calculated for each reanalysis product specified. However, when UQ = True, the reanalysis product is treated as another Monte Carlo sampling parameter. Thus, the impact of different reanalysis products is considered to be part of the overall uncertainty in TIE.
# We can choose to save key plots to a file by setting enable_plotting = True and
# specifying a directory to save the images. For now we turn off this feature.
ta.run(reanal_subset = ['era5', 'merra2'],
       enable_plotting = False,
       plot_dir = None,
       uncertainty_scada = uncertainty_scada,
       wind_bin_thresh = wind_bin_thresh,
       max_power_filter = max_power_filter,
       correction_threshold = correction_threshold)
_____no_output_____
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
Now that we've finished the Monte Carlo TIE calculation simulations, let's examine results
np.mean(ta._plant_gross)
np.std(ta._plant_gross)

# Mean long-term annual TIE for whole plant
print('Mean long-term turbine ideal energy is %s GWh/year' % np.round(np.mean(ta._plant_gross/1e6), 1))

# Uncertainty in long-term annual TIE for whole plant
print('Uncertainty in long-term turbine ideal energy is %s GWh/year, or %s percent'
      % (np.round(np.std(ta._plant_gross/1e6), 1),
         np.round(100*np.std(ta._plant_gross)/np.mean(ta._plant_gross), 1)))
Mean long-term turbine ideal energy is 13.7 GWh/year Uncertainty in long-term turbine ideal energy is 0.1 GWh/year, or 0.8 percent
BSD-3-Clause
examples/03_turbine_ideal_energy.ipynb
nbodini/OpenOA
Code Review 1

Purpose: To introduce the group to looking at code analytically
Created By: Hawley Helmbrecht
Creation Date: 10-12-21

Introduction to Analyzing Code

All snippets within this section are taken from the Hitchhiker's Guide to Python (https://docs.python-guide.org/writing/style/)

Example 1: Explicit Code
# Bad: implicit, relies on *args and locals()
def make_complex(*args):
    x, y = args
    return dict(**locals())

# Good: explicit arguments and return value
def make_complex(x, y):
    return {'x': x, 'y': y}
_____no_output_____
MIT
code_reviews/Code_review_1.ipynb
Nance-Lab/textile
Example 2: One Statement per Line
# Bad: multiple statements crowded onto single lines
print('one'); print('two')

if x == 1: print('one')

if <complex comparison> and <other complex comparison>:
    # do something

# Good: one statement per line
print('one')
print('two')

if x == 1:
    print('one')

cond1 = <complex comparison>
cond2 = <other complex comparison>
if cond1 and cond2:
    # do something
_____no_output_____
MIT
code_reviews/Code_review_1.ipynb
Nance-Lab/textile
Intro to PEP 8

Example 1: Limit all lines to a maximum of 79 characters.
# Wrong:
income = (gross_wages + taxable_interest + (dividends - qualified_dividends) - ira_deduction - student_loan_interest)

# Correct:
income = (gross_wages
          + taxable_interest
          + (dividends - qualified_dividends)
          - ira_deduction
          - student_loan_interest)
_____no_output_____
MIT
code_reviews/Code_review_1.ipynb
Nance-Lab/textile
Example 2: Line breaks around binary operators
# Wrong:
# operators sit far away from their operands
income = (gross_wages +
          taxable_interest +
          (dividends - qualified_dividends) -
          ira_deduction -
          student_loan_interest)

# Correct:
# easy to match operators with operands
income = (gross_wages
          + taxable_interest
          + (dividends - qualified_dividends)
          - ira_deduction
          - student_loan_interest)
_____no_output_____
MIT
code_reviews/Code_review_1.ipynb
Nance-Lab/textile
Example 3: Import formatting
# Correct:
import os
import sys

# Wrong:
import sys, os
_____no_output_____
MIT
code_reviews/Code_review_1.ipynb
Nance-Lab/textile
Let's look at some code! scikit-image's Otsu threshold code! (https://github.com/scikit-image/scikit-image/blob/main/skimage/filters/thresholding.py)
def threshold_otsu(image=None, nbins=256, *, hist=None):
    """Return threshold value based on Otsu's method.

    Either image or hist must be provided. If hist is provided, the actual
    histogram of the image is ignored.

    Parameters
    ----------
    image : (N, M[, ..., P]) ndarray, optional
        Grayscale input image.
    nbins : int, optional
        Number of bins used to calculate histogram. This value is ignored for
        integer arrays.
    hist : array, or 2-tuple of arrays, optional
        Histogram from which to determine the threshold, and optionally a
        corresponding array of bin center intensities. If no hist provided,
        this function will compute it from the image.

    Returns
    -------
    threshold : float
        Upper threshold value. All pixels with an intensity higher than
        this value are assumed to be foreground.

    References
    ----------
    .. [1] Wikipedia, https://en.wikipedia.org/wiki/Otsu's_Method

    Examples
    --------
    >>> from skimage.data import camera
    >>> image = camera()
    >>> thresh = threshold_otsu(image)
    >>> binary = image <= thresh

    Notes
    -----
    The input image must be grayscale.
    """
    if image is not None and image.ndim > 2 and image.shape[-1] in (3, 4):
        warn(f'threshold_otsu is expected to work correctly only for '
             f'grayscale images; image shape {image.shape} looks like '
             f'that of an RGB image.')

    # Check if the image has more than one intensity value; if not, return that
    # value
    if image is not None:
        first_pixel = image.ravel()[0]
        if np.all(image == first_pixel):
            return first_pixel

    counts, bin_centers = _validate_image_histogram(image, hist, nbins)

    # class probabilities for all possible thresholds
    weight1 = np.cumsum(counts)
    weight2 = np.cumsum(counts[::-1])[::-1]
    # class means for all possible thresholds
    mean1 = np.cumsum(counts * bin_centers) / weight1
    mean2 = (np.cumsum((counts * bin_centers)[::-1]) / weight2[::-1])[::-1]

    # Clip ends to align class 1 and class 2 variables:
    # The last value of ``weight1``/``mean1`` should pair with zero values in
    # ``weight2``/``mean2``, which do not exist.
    variance12 = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2

    idx = np.argmax(variance12)
    threshold = bin_centers[idx]

    return threshold
_____no_output_____
MIT
code_reviews/Code_review_1.ipynb
Nance-Lab/textile
What do you observe about the code that makes it pythonic?
Do the pythonic conventions make it easier to understand?
_____no_output_____
MIT
code_reviews/Code_review_1.ipynb
Nance-Lab/textile
Python Crash Course Exercises - Solutions

Exercises

Answer the questions or complete the tasks outlined in bold below; use the specific method described if applicable.

**What is 7 to the power of 4?**
7**4
_____no_output_____
MIT
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
**Split this string:** `s = "Hi there Sam!"` **into a list.**
s = 'Hi there Sam!'
s.split()
_____no_output_____
MIT
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
**Given the variables:** `planet = "Earth"` and `diameter = 12742`, **use .format() to print the following string:** The diameter of Earth is 12742 kilometers.
planet = "Earth"
diameter = 12742

print("The diameter of {} is {} kilometers.".format(planet, diameter))
The diameter of Earth is 12742 kilometers.
MIT
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
** Given this nested list, use indexing to grab the word "hello" **
lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]

lst[-3][1][2][0]
_____no_output_____
MIT
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
**Given this nested dictionary, grab the word "hello". Be prepared, this will be annoying/tricky.**
d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}

d['k1'][3]['tricky'][3]['target'][3]
_____no_output_____
MIT
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
** What is the main difference between a tuple and a list? **
# A tuple is immutable, whereas a list is mutable
_____no_output_____
MIT
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
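A small illustration (a sketch added here, not part of the original solutions) of what that difference means in practice:

```python
my_list = [1, 2, 3]
my_list[0] = 99        # lists can be modified in place

my_tuple = (1, 2, 3)
try:
    my_tuple[0] = 99   # tuples cannot; this raises a TypeError
except TypeError as err:
    print(err)
```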
**Create a function that grabs the email website domain from a string in the form:** user@domain.com. **So for example, passing "user@domain.com" would return: domain.com**
def domainGet(email):
    return email.split('@')[-1]

domainGet('user@domain.com')
_____no_output_____
MIT
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
** Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization. **
def findDog(st):
    return 'dog' in st.lower().split()

findDog('Is there a dog here?')
_____no_output_____
MIT
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
** Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases. **
def countDog(st):
    count = 0
    for word in st.lower().split():
        if word == 'dog':
            count += 1
    return count

countDog('This dog runs faster than the other dog dude!')
_____no_output_____
MIT
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
**Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example:** `seq = ['soup','dog','salad','cat','great']` **should be filtered down to:** `['soup','salad']`
seq = ['soup','dog','salad','cat','great']

list(filter(lambda word: word[0]=='s', seq))
_____no_output_____
MIT
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
Final Problem

**You are driving a little too fast, and a police officer stops you. Write a function to return one of 3 possible results: "No ticket", "Small ticket", or "Big Ticket". If your speed is 60 or less, the result is "No Ticket". If speed is between 61 and 80 inclusive, the result is "Small Ticket". If speed is 81 or more, the result is "Big Ticket". Unless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 higher in all cases.**
def caught_speeding(speed, is_birthday):
    if is_birthday:
        speeding = speed - 5
    else:
        speeding = speed

    if speeding > 80:
        return 'Big Ticket'
    elif speeding > 60:
        return 'Small Ticket'
    else:
        return 'No Ticket'

caught_speeding(81, True)
caught_speeding(81, False)
_____no_output_____
MIT
03-Python Crash Course Exercises - Solutions.ipynb
avinash-nahar/Learning
TV Script Generation

In this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script, based on patterns it recognizes in this training data.

Get the Data

The data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text.

* As a first step, we'll load in this data and look at some samples.
* Then, you'll be tasked with defining and training an RNN to generate a new script!
""" DON'T MODIFY ANYTHING IN THIS CELL """ # load in data import helper data_dir = './data/Seinfeld_Scripts.txt' text = helper.load_data(data_dir)
_____no_output_____
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Explore the Data

Play around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
view_line_range = (2, 12)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))

lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))

print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
Dataset Stats Roughly the number of unique words: 46367 Number of lines: 109233 Average number of words in each line: 5.544240293684143 The lines 2 to 12: jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother. george: are you through? jerry: you do of course try on, when you buy? george: yes, it was purple, i liked it, i dont actually recall considering the buttons. jerry: oh, you dont recall?
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
---

Implement Pre-processing Functions

The first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:

- Lookup Table
- Tokenize Punctuation

Lookup Table

To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:

- Dictionary to go from the words to an id, we'll call `vocab_to_int`
- Dictionary to go from the id to word, we'll call `int_to_vocab`

Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
import problem_unittests as tests
from collections import Counter

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # TODO: Implement Function
    # reference source: inspired/copied from course samples
    word_counts = Counter(text)
    sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
    int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}

    # return tuple
    return vocab_to_int, int_to_vocab

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
Tests Passed
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Tokenize Punctuation

We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.

Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:

- Period ( **.** )
- Comma ( **,** )
- Quotation Mark ( **"** )
- Semicolon ( **;** )
- Exclamation mark ( **!** )
- Question mark ( **?** )
- Left Parentheses ( **(** )
- Right Parentheses ( **)** )
- Dash ( **-** )
- Return ( **\n** )

This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenized dictionary where the key is the punctuation and the value is the token
    """
    # TODO: Implement Function
    retval = {
        ".": "||Period||",
        ",": "||Comma||",
        "\"": "||QuotationMark||",
        ";": "||Semicolon||",
        "!": "||ExclamationMark||",
        "?": "||QuestionMark||",
        "(": "||LeftParentheses||",
        ")": "||RightParentheses||",
        "-": "||Dash||",
        "\n": "||Return||",
    }
    return retval

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
Tests Passed
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Pre-process all the data and save it

Running the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
""" DON'T MODIFY ANYTHING IN THIS CELL """ # pre-process training data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
_____no_output_____
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() len(int_text)
_____no_output_____
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Build the Neural Network

In this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions.

Check Access to GPU
""" DON'T MODIFY ANYTHING IN THIS CELL """ import torch # Check for a GPU train_on_gpu = torch.cuda.is_available() if not train_on_gpu: print('No GPU found. Please use a GPU to train your neural network.')
_____no_output_____
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Input

Let's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.

You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.

```
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)
```

Batching

Implement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.

> You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.

For example, say we have these as input:

```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
```

Your first `feature_tensor` should contain the values:

```
[1, 2, 3, 4]
```

And the corresponding `target_tensor` should just be the next "word"/tokenized word value:

```
5
```

This should continue with the second `feature_tensor`, `target_tensor` being:

```
[2, 3, 4, 5]  # features
6             # target
```
from torch.utils.data import TensorDataset, DataLoader

nb_samples = 6
features = torch.randn(nb_samples, 10)
labels = torch.empty(nb_samples, dtype=torch.long).random_(10)

dataset = TensorDataset(features, labels)
loader = DataLoader(
    dataset,
    batch_size=2
)

for batch_idx, (x, y) in enumerate(loader):
    print(x.shape, y.shape)

print(features)


from torch.utils.data import TensorDataset, DataLoader

def batch_data(words, sequence_length, batch_size):
    """
    Batch the neural network data using DataLoader
    :param words: The word ids of the TV scripts
    :param sequence_length: The sequence length of each batch
    :param batch_size: The size of each batch; the number of sequences in a batch
    :return: DataLoader with batched data
    """
    # TODO: Implement function
    batch = len(words)//batch_size
    words = words[:batch*batch_size]

    feature_tensors, target_tensors = [], []
    for ndx in range(len(words) - sequence_length):
        feature_tensors += [words[ndx:ndx+sequence_length]]
        target_tensors += [words[ndx+sequence_length]]

    feature_tensors = torch.LongTensor(feature_tensors)
    target_tensors = torch.LongTensor(target_tensors)

    data = TensorDataset(feature_tensors, target_tensors)
    data_loader = torch.utils.data.DataLoader(data,
                                              batch_size=batch_size,
                                              shuffle=True)

    # return a dataloader
    return data_loader

# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
_____no_output_____
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Test your dataloader

You'll have to modify this code to test a batching function, but it should look fairly similar.

Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.

Your code should return something like the following (likely in a different order, if you shuffled your data):

```
torch.Size([10, 5])
tensor([[ 28,  29,  30,  31,  32],
        [ 21,  22,  23,  24,  25],
        [ 17,  18,  19,  20,  21],
        [ 34,  35,  36,  37,  38],
        [ 11,  12,  13,  14,  15],
        [ 23,  24,  25,  26,  27],
        [  6,   7,   8,   9,  10],
        [ 38,  39,  40,  41,  42],
        [ 25,  26,  27,  28,  29],
        [  7,   8,   9,  10,  11]])

torch.Size([10])
tensor([ 33,  26,  22,  39,  16,  28,  11,  43,  30,  12])
```

Sizes

Your sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10).

Values

You should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=6, batch_size=10)

data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()

print(sample_x.shape)
print(sample_x)
print(sample_y.shape)
print(sample_y)
torch.Size([10, 6]) tensor([[ 13, 14, 15, 16, 17, 18], [ 20, 21, 22, 23, 24, 25], [ 30, 31, 32, 33, 34, 35], [ 2, 3, 4, 5, 6, 7], [ 16, 17, 18, 19, 20, 21], [ 24, 25, 26, 27, 28, 29], [ 0, 1, 2, 3, 4, 5], [ 38, 39, 40, 41, 42, 43], [ 7, 8, 9, 10, 11, 12], [ 18, 19, 20, 21, 22, 23]]) torch.Size([10]) tensor([ 19, 26, 36, 8, 22, 30, 6, 44, 13, 24])
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
---

Build the Neural Network

Implement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class:

- `__init__` - The initialize function.
- `init_hidden` - The initialization function for an LSTM/GRU hidden state
- `forward` - Forward propagation function.

The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.

**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.

Hints

1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`
2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:

```
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
```
# reference source: inspired/copied from course samples
import numpy as np

def one_hot_encode(arr, n_labels):
    arr = arr.cpu().numpy()
    # Initialize the encoded array
    one_hot = np.zeros((np.multiply(*arr.shape), n_labels), dtype=np.float32)
    # Fill the appropriate elements with ones
    one_hot[np.arange(one_hot.shape[0]), arr.flatten()] = 1.
    # Finally reshape it to get back to the original array
    one_hot = one_hot.reshape((*arr.shape, n_labels))
    if(train_on_gpu):
        return torch.from_numpy(one_hot).cuda()
    else:
        return torch.from_numpy(one_hot)

# check that the function works as expected
test_seq = np.array([[3, 5, 1]])
test_seq = torch.from_numpy(test_seq)
print(test_seq)

one_hot = one_hot_encode(test_seq, 8)
print(one_hot)


import torch.nn as nn

class RNN(nn.Module):

    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
        """
        Initialize the PyTorch RNN Module
        :param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
        :param output_size: The number of output dimensions of the neural network
        :param embedding_dim: The size of embeddings, should you choose to use them
        :param hidden_dim: The size of the hidden layer outputs
        :param dropout: dropout to add in between LSTM/GRU layers
        """
        super(RNN, self).__init__()
        # TODO: Implement function

        # set class variables
        self.input_dim = vocab_size
        self.hidden_dim = hidden_dim
        self.output_dim = output_size
        self.n_layers = n_layers
        self.dropout_prob = dropout
        self.embedding_dim = embedding_dim

        ## define model layers
        self.embed = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, self.hidden_dim, self.n_layers,
                            dropout=self.dropout_prob, batch_first=True)
        self.dropout = nn.Dropout(dropout)
        # final fully connected
        self.fc = nn.Linear(self.hidden_dim, self.output_dim)

    def forward(self, nn_input, hidden):
        """
        Forward propagation of the neural network
        :param nn_input: The input to the neural network
        :param hidden: The hidden state
        :return: Two Tensors, the output of the neural network and the latest hidden state
        """
        # TODO: Implement function

        ## outputs and the new hidden state
        # nn_input = one_hot_encode(nn_input, self.input_dim)
        embedding = self.embed(nn_input)
        lstm_output, hidden = self.lstm(embedding, hidden)
        # lstm_output, hidden = self.lstm(nn_input, hidden)  # without embedding

        out = self.dropout(lstm_output)
        # stack the outputs of the lstm to pass to your fully-connected layer
        out = out.contiguous().view(-1, self.hidden_dim)
        out = self.fc(out)

        ## From notes above:
        # The output of this model should be the last batch of word scores after a complete sequence has been processed.
        # That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.

        # reshape into (batch_size, seq_length, output_size)
        out = out.view(self.batch_size, -1, self.output_dim)
        # get last batch
        out = out[:, -1]

        # return one batch of output word scores and the hidden state
        return out, hidden

    def init_hidden(self, batch_size):
        '''
        Initialize the hidden state of an LSTM/GRU
        :param batch_size: The batch_size of the hidden state
        :return: hidden state of dims (n_layers, batch_size, hidden_dim)
        '''
        # Implement function
        self.batch_size = batch_size

        weight = next(self.parameters()).data

        # two new tensors with sizes n_layers x batch_size x n_hidden
        # initialize hidden state with zero weights, and move to GPU if available
        if (train_on_gpu):
            hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
                      weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
        else:
            hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
                      weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())

        return hidden

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
Tests Passed
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Define forward and backpropagation

Use the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:

```
loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)
```

And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.

**If a GPU is available, you should move your data to that GPU device, here.**
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    """
    Forward and backward propagation on the neural network
    :param decoder: The PyTorch Module that holds the neural network
    :param decoder_optimizer: The PyTorch optimizer for the neural network
    :param criterion: The PyTorch loss function
    :param inp: A batch of input to the neural network
    :param target: The target output for the batch of input
    :return: The loss and the latest hidden state Tensor
    """
    # TODO: Implement Function

    # one hot encoding? required for non embedded case only

    # zero accumulated gradients
    rnn.zero_grad()

    # To avoid retain_graph=True, inspired from course discussions
    hidden = (hidden[0].detach(), hidden[1].detach())

    # move data to GPU, if available
    if(train_on_gpu):
        inp = inp.cuda()
        target = target.cuda()

    output, hidden = rnn(inp, hidden)

    loss = criterion(output, target)  # target.view(batch_size*sequence_length)

    # perform backpropagation and optimization
    # loss.backward(retain_graph=True)  # Removed due to high resource consumption
    loss.backward()  # did not get any advantage

    # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
    # nn.utils.clip_grad_norm_(rnn.parameters(), clip) ?

    optimizer.step()

    # return the loss over a batch and the hidden state produced by our model
    return loss.item(), hidden

# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
Tests Passed
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Neural Network Training

With the structure of the network complete and data ready to be fed into the neural network, it's time to train it.

Train Loop

The training loop is implemented for you in the `train_rnn` function defined in the next cell. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
""" DON'T MODIFY ANYTHING IN THIS CELL """ def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100): batch_losses = [] rnn.train() print("Training for %d epoch(s), %d batch size, %d show every..." % (n_epochs, batch_size, show_every_n_batches)) for epoch_i in range(1, n_epochs + 1): # initialize hidden state hidden = rnn.init_hidden(batch_size) for batch_i, (inputs, labels) in enumerate(train_loader, 1): # make sure you iterate over completely full batches, only n_batches = len(train_loader.dataset)//batch_size if(batch_i > n_batches): break # forward, back prop loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden) # record loss batch_losses.append(loss) # printing loss stats if batch_i % show_every_n_batches == 0: print('Epoch: {:>4}/{:<4} Loss: {}'.format( epoch_i, n_epochs, np.average(batch_losses))) batch_losses = [] # returns a trained rnn return rnn #modified version with detailed printing, global loss for loaded network (rnn), and saving network def train_rnn_copy(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100, myGlobalLoss=10): batch_losses = [] rnn.train() print("Training for %d epoch(s), %d batch size, show every %d, global loss %.4f..." % (n_epochs, batch_size, show_every_n_batches, myGlobalLoss)) for epoch_i in range(1, n_epochs + 1): # initialize hidden state hidden = rnn.init_hidden(batch_size) for batch_i, (inputs, labels) in enumerate(train_loader, 1): # make sure you iterate over completely full batches, only n_batches = len(train_loader.dataset)//batch_size if(batch_i > n_batches): break # forward, back prop loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden) # record loss batch_losses.append(loss) # printing loss stats if batch_i % show_every_n_batches == 0: avgLoss = np.average(batch_losses) print('Epoch: {:>4}/{:<4} Batch: {:>4}/{:<4} Loss: {}'.format( epoch_i, n_epochs, batch_i, n_batches, np.average(batch_losses))) batch_losses = [] if(myGlobalLoss > avgLoss): print('Global Loss {} ---> {}. Saving...'.format(myGlobalLoss, avgLoss)) myGlobalLoss = avgLoss #saved at batch level for quick testing and restart #should be moved to epoch level to avoid saving semi-trained network helper.save_model('./save/trained_rnn_mid_we', rnn) # returns a trained rnn return rnn
_____no_output_____
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Hyperparameters

Set and train the neural network with the following parameters:

- Set `sequence_length` to the length of a sequence.
- Set `batch_size` to the batch size.
- Set `num_epochs` to the number of epochs to train for.
- Set `learning_rate` to the learning rate for an Adam optimizer.
- Set `vocab_size` to the number of unique tokens in our vocabulary.
- Set `output_size` to the desired size of the output.
- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.
- Set `hidden_dim` to the hidden dimension of your RNN.
- Set `n_layers` to the number of layers/cells in your RNN.
- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.

If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
# Data params

# Sequence Length, # of words in a sequence
sequence_length = 10

# Batch Size
if(train_on_gpu):
    batch_size = 512  #128 #64
else:
    batch_size = 5

# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)

# Training parameters
myGlobalLoss = 5
myDropout = 0.5  #0.8

# Number of Epochs
num_epochs = 10  #5 #50

# Learning Rate
learning_rate = 0.001  #0.002 #0.005 #0.001

# Model parameters

# Vocab size
vocab_size = len(vocab_to_int)+1

# Output size
output_size = vocab_size

# Embedding Dimension
embedding_dim = 300  #256 #200

# Hidden Dimension, Usually larger is better performance wise. Common values are 128, 256, 512
hidden_dim = 512  #256

# Number of RNN Layers, Typically between 1-3
n_layers = 2

# Show stats for every n number of batches
if(train_on_gpu):
    show_every_n_batches = 200
else:
    show_every_n_batches = 1
_____no_output_____
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Train

In the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train.

> **You should aim for a loss less than 3.5.**

You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
# for debugging purposes
# import os
# os.environ['CUDA_LAUNCH_BLOCKING'] = "1"

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=myDropout)
if train_on_gpu:
    rnn.cuda()

# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()

try:
    rnn = helper.load_model('./save/trained_rnn_mid_we')
    print("loaded mid save model")
except:
    try:
        rnn = helper.load_model('./save/trained_rnn')
        print("failed mid save.. loaded global model")
    except:
        print("could not load any model")
finally:
    print(rnn)

# training the model
trained_rnn = train_rnn_copy(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches, myGlobalLoss)

# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
could not load any model RNN( (dropout): Dropout(p=0.5) (embed): Embedding(21389, 300) (lstm): LSTM(300, 512, num_layers=2, batch_first=True, dropout=0.5) (fc): Linear(in_features=512, out_features=21389, bias=True) ) Training for 10 epoch(s), 512 batch size, show every 200, global loss 5.0000... Epoch: 1/10 Batch: 200/1741 Loss: 5.5300157618522645 Epoch: 1/10 Batch: 400/1741 Loss: 4.861690397262573 Global Loss 5 ---> 4.861690397262573. Saving...
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?

**Answer:**

- Tried multiple combinations of hyperparameters to get optimum results.
- sequence_length: Tried different sequence lengths between 5-30. Higher sequence lengths took more time to train. Therefore, used 10, which gave satisfactory results.
- batch size: A higher batch size gave better results. Due to GPU memory limitations, used 512 with embedding. When tried without embedding, the maximum size (again due to memory limitations) was 128.
- embedding layer: To begin with, for experimentation purposes, did not use embedding. Later, when the embedding was used, memory savings and time speedups were recorded.
- learning rate: Tried different learning rates. During initial investigations, higher learning rates (~0.01) did not converge well to a satisfactory solution. Also tried decreasing the learning rate (manually) after a few epochs to see marginal improvements. Then tried between 0.001 and 0.0005; 0.001 gave the best results, so it was used.
- hidden dim: Increasing hidden_dim decreased the loss, but due to memory limitations used 512.
- n_layers: A value between 1-3 is recommended. 2 was a good choice and gave good results.

---

Checkpoint

After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
""" DON'T MODIFY ANYTHING IN THIS CELL """ import torch import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() trained_rnn = helper.load_model('./save/trained_rnn')
_____no_output_____
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Generate TV Script

With the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section.

Generate Text

To generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ import torch.nn.functional as F def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100): """ Generate text using the neural network :param decoder: The PyTorch Module that holds the trained neural network :param prime_id: The word id to start the first prediction :param int_to_vocab: Dict of word id keys to word values :param token_dict: Dict of puncuation tokens keys to puncuation values :param pad_value: The value used to pad a sequence :param predict_len: The length of text to generate :return: The generated text """ rnn.eval() # create a sequence (batch_size=1) with the prime_id current_seq = np.full((1, sequence_length), pad_value) current_seq[-1][-1] = prime_id predicted = [int_to_vocab[prime_id]] for _ in range(predict_len): if train_on_gpu: current_seq = torch.LongTensor(current_seq).cuda() else: current_seq = torch.LongTensor(current_seq) # initialize the hidden state hidden = rnn.init_hidden(current_seq.size(0)) # get the output of the rnn output, _ = rnn(current_seq, hidden) # get the next word probabilities p = F.softmax(output, dim=1).data if(train_on_gpu): p = p.cpu() # move to cpu # use top_k sampling to get the index of the next word top_k = 5 p, top_i = p.topk(top_k) top_i = top_i.numpy().squeeze() # select the likely next word index with some element of randomness p = p.numpy().squeeze() word_i = np.random.choice(top_i, p=p/p.sum()) # retrieve that word from the dictionary word = int_to_vocab[word_i] predicted.append(word) # the generated word becomes the next "current sequence" and the cycle can continue current_seq = np.roll(current_seq, -1, 1) current_seq[-1][-1] = word_i gen_sentences = ' '.join(predicted) # Replace punctuation tokens for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' gen_sentences = gen_sentences.replace(' ' + token.lower(), key) gen_sentences = gen_sentences.replace('\n ', '\n') gen_sentences = gen_sentences.replace('( ', '(') # return all the sentences return gen_sentences
_____no_output_____
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Generate a New Script

It's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:

- "jerry"
- "elaine"
- "george"
- "kramer"

You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
# run the cell multiple times to get different results!
gen_length = 400       # modify the length to your preference
prime_word = 'jerry'   # name for starting the script

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:51: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
Save your favorite scripts

Once you have a script that you like (or find interesting), save it to a text file!
# save script to a text file
f = open("generated_script_1.txt", "w")
f.write(generated_script)
f.close()
_____no_output_____
MIT
project-tv-script-generation/dlnd_tv_script_generation.ipynb
ankursial/Deep-Learning-ND
**Recursion and Higher Order Functions**

Today we're tackling recursion, and touching on higher-order functions in Python.

A **recursive** function is one that calls itself. A classic example: the Fibonacci sequence. The Fibonacci sequence was originally described to model population growth, and is self-referential in its definition.

The nth Fibonacci number is defined in terms of the previous two:

- F(n) = F(n-1) + F(n-2)
- F(1) = 0
- F(2) = 1

Another classic example: Factorial:

- n! = n(n-1)(n-2)(n-3) ... 1

or:

- n! = n*(n-1)!

Let's look at an implementation of the factorial and of the Fibonacci sequence in Python:
def factorial(n):
    if n == 1:
        return 1
    else:
        return n*factorial(n-1)

print(factorial(5))


def fibonacci(n):
    if n == 1:
        return 0
    elif n == 2:
        return 1
    else:
        # print('working on number ' + str(n))
        return fibonacci(n-1)+fibonacci(n-2)

fibonacci(7)
120
BSD-3-Clause
cycle_2_fancy_functions/cycle_2_lecture_recursion_higher_post_recording.ipynb
magicicada/cs1px_2020
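As a quick worked illustration (added here, not part of the original notes), tracing factorial(4) by hand shows how the recursion unwinds: factorial(4) = 4 * factorial(3) = 4 * 3 * factorial(2) = 4 * 3 * 2 * factorial(1) = 4 * 3 * 2 * 1 = 24. Each call waits for the smaller call to finish before doing its own multiplication.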
There are two very important parts of these functions: a base case (or two) and a recursive case. When designing recursive functions it can help to think about these two cases! The base case is the case when we know we are done, and can just return a value (e.g. in `fibonacci` above there are two base cases, `n == 1` and `n == 2`). The recursive case is the case when we make the recursive call - that is, we call the function again. Let's write a function that counts down from a parameter n to zero, and then prints "Blastoff!".
def countdown(n): # base case if n == 0: print('Blastoff!') # recursive case else: print(n) countdown(n-1) countdown(10)
10 9 8 7 6 5 4 3 2 1 Blastoff!
BSD-3-Clause
cycle_2_fancy_functions/cycle_2_lecture_recursion_higher_post_recording.ipynb
magicicada/cs1px_2020
Let's write a recursive function that adds up the elements of a list:
def add_up_list(my_list): # base case if len(my_list) == 0: return 0 # recursive case else: first_elem = my_list[0] return first_elem + add_up_list(my_list[1:]) my_list = [1, 2, 1, 3, 4] print(add_up_list(my_list))
11
BSD-3-Clause
cycle_2_fancy_functions/cycle_2_lecture_recursion_higher_post_recording.ipynb
magicicada/cs1px_2020
**Higher-order functions** are functions that take a function as an argument or return a function. We will talk briefly about functions that take a function as an argument. Let's look at an example.
def h(x): return x+4 def g(x): return x**2 def doItTwice(f, x): return f(f(x)) print(doItTwice(h, 3)) print(doItTwice(g, 3))
11 81
BSD-3-Clause
cycle_2_fancy_functions/cycle_2_lecture_recursion_higher_post_recording.ipynb
magicicada/cs1px_2020
A common reason for using a higher-order function is to apply a parameter-specified function repeatedly over a data structure (like a list or a dictionary). Let's look at an example function that applies a parameter function to every element of a list; a dictionary version is sketched just after it:
def sampleFunction1(x): return 2*x def sampleFunction2(x): return x % 2 def applyToAll(func, myList): newList = [] for element in myList: newList.append(func(element)) return newList aList = [2, 3, 4, 5] print(applyToAll(sampleFunction1, aList)) print(applyToAll(sampleFunction2, aList))
[4, 6, 8, 10] [0, 1, 0, 1]
BSD-3-Clause
cycle_2_fancy_functions/cycle_2_lecture_recursion_higher_post_recording.ipynb
magicicada/cs1px_2020
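The same idea works for a dictionary's values. Here is a minimal sketch (the helper name and the example dictionary are made up for illustration; they are not part of the original lecture code):

def applyToAllValues(func, myDict):
    # build a new dictionary with func applied to every value
    newDict = {}
    for key in myDict:
        newDict[key] = func(myDict[key])
    return newDict

marks = {'alice': 3, 'bob': 4}
print(applyToAllValues(sampleFunction1, marks))   # {'alice': 6, 'bob': 8}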
Something like this `applyToAll` function is built into Python, and is called `map`.
def sampleFunction1(x): return 2*x def sampleFunction2(x): return x % 2 aList = [2, 3, 4, 5] print(list(map(sampleFunction1, aList))) bList = [2, 3, 4, 5] print(list(map(sampleFunction2, bList)))
[4, 6, 8, 10] [0, 1, 0, 1]
BSD-3-Clause
cycle_2_fancy_functions/cycle_2_lecture_recursion_higher_post_recording.ipynb
magicicada/cs1px_2020
Python has quite a few built-in functions (some higher-order, some not). You can find lots of them here: https://docs.python.org/3.3/library/functions.html (I **will not** by default require you to remember those for an exam!!) Example: zip does something that may be familiar from last week's lab.
x = [1, 2, 3] y = [4, 5, 6] zipped = zip(x, y) print(list(zipped))
[(1, 4), (2, 5), (3, 6)]
BSD-3-Clause
cycle_2_fancy_functions/cycle_2_lecture_recursion_higher_post_recording.ipynb
magicicada/cs1px_2020
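Two more built-ins that take a function as an argument, shown as small added examples (standard Python only, not from the original notes): sorted can take a key function, and filter keeps the elements for which a function returns True.

print(sorted(['banana', 'fig', 'apple'], key=len))        # ['fig', 'apple', 'banana'] - sorted by word length
print(list(filter(lambda x: x % 2 == 0, [1, 2, 3, 4])))   # [2, 4] - keep only the even numbers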
Introduction to `pandas`
import numpy as np import pandas as pd
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Series and Data Frames Series objects A `Series` is like a vector. All elements must have the same type (or be null).
s = pd.Series([1,1,2,3] + [None]) s
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Size
s.size
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Unique Counts
s.value_counts()
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Special types of series Strings
words = 'the quick brown fox jumps over the lazy dog'.split() s1 = pd.Series([' '.join(item) for item in zip(words[:-1], words[1:])]) s1 s1.str.upper() s1.str.split() s1.str.split().str[1]
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Categories
s2 = pd.Series(['Asian', 'Asian', 'White', 'Black', 'White', 'Hispanic']) s2 s2 = s2.astype('category') s2 s2.cat.categories s2.cat.codes
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
DataFrame objects A `DataFrame` is like a matrix. Columns in a `DataFrame` are `Series`. - Each column in a DataFrame represents a **variable** - Each row in a DataFrame represents an **observation** - Each cell in a DataFrame represents a **value**
df = pd.DataFrame(dict(num=[1,2,3] + [None])) df df.num
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Index Row and column identifiers are of `Index` type. Somewhat confusingly, index is also a synonym for the row identifiers.
df.index
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Setting a column as the row index
df df1 = df.set_index('num') df1
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Making an index into a column
df1.reset_index()
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Columns This is just a different index object.
df.columns
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Getting raw values Sometimes you just want a `numpy` array, and not a `pandas` object.
df.values
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
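A small added note: in current pandas versions the same array can also be obtained with `to_numpy()`, which the pandas documentation recommends over `.values`:

df.to_numpy()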
Creating Data Frames Manual
from collections import OrderedDict n = 5 dates = pd.date_range(start='now', periods=n, freq='d') df = pd.DataFrame(OrderedDict(pid=np.random.randint(100, 999, n), weight=np.random.normal(70, 20, n), height=np.random.normal(170, 15, n), date=dates, )) df
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
From file You can read in data from many different file types - plain text, JSON, spreadsheets, databases, etc. Functions to read in data look like `read_X` where X is the data type; a few of the other readers are sketched after the example below.
%%file measures.txt pid weight height date 328 72.654347 203.560866 2018-11-11 14:16:18.148411 756 34.027679 189.847316 2018-11-12 14:16:18.148411 185 28.501914 158.646074 2018-11-13 14:16:18.148411 507 17.396343 180.795993 2018-11-14 14:16:18.148411 919 64.724301 173.564725 2018-11-15 14:16:18.148411 df = pd.read_table('measures.txt') df
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
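Other readers follow the same `read_X` pattern. The calls below are a sketch only - the file names are hypothetical and the lines are shown commented out:

# pd.read_csv('measures.csv')       # comma-separated text
# pd.read_json('measures.json')     # JSON
# pd.read_excel('measures.xlsx')    # spreadsheets (needs an engine such as openpyxl)
# pd.read_sql(query, connection)    # databases, given a query string and an open connection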
Indexing Data Frames Implicit defaults If you provide a slice, it is assumed that you are asking for rows.
df[1:3]
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
If you provide a single value or list, it is assumed that you are asking for columns.
df[['pid', 'weight']]
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Extracting a column Dictionary style access
df['pid']
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Property style access This only works for column names that are also valid Python identifiers (i.e., no spaces, dashes, or keywords).
df.pid
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Indexing by location This is similar to `numpy` indexing.
df.iloc[1:3, :] df.iloc[1:3, [True, False, True]]
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Indexing by name
df.loc[1:3, 'weight':'height']
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
**Warning**: When using `loc`, the row slice indicates row names, not positions.
df1 = df.copy() df1.index = df.index + 1 df1 df1.loc[1:3, 'weight':'height']
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
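To make the contrast explicit (a small added check, not in the original notebook), positional indexing with `iloc` ignores the shifted labels and still counts from zero:

df1.iloc[1:3, 1:3]   # rows in positions 1 and 2 (labels 2 and 3), columns weight and height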
Structure of a Data Frame Data types
df.dtypes
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Converting data types Using `astype` on one column
df.pid = df.pid.astype('category')
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Using `astype` on multiple columns
df = df.astype(dict(weight=float, height=float))
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Using a conversion function
df.date = pd.to_datetime(df.date)
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Check
df.dtypes
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Basic properties
df.size df.shape df.describe()
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Inspection
df.head(n=3) df.tail(n=3) df.sample(n=3) df.sample(frac=0.5)
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
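`info()` is another handy inspection method (a small added aside): it reports the dtypes, the non-null counts, and the memory footprint in one view.

df.info()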
Selecting, Renaming and Removing Columns Selecting columns
df.filter(items=['pid', 'date']) df.filter(regex='.*ght')
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Note that you can also use regular string methods on the columns
df.loc[:, df.columns.str.contains('d')]
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Renaming columns
df.rename(dict(weight='w', height='h'), axis=1) orig_cols = df.columns df.columns = list('abcd') df df.columns = orig_cols df
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Removing columns
df.drop(['pid', 'date'], axis=1) df.drop(columns=['pid', 'date']) df.drop(columns=df.columns[df.columns.str.contains('d')])
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Selecting, Renaming and Removing Rows Selecting rows
df[df.weight.between(60,70)] df[(69 <= df.weight) & (df.weight < 70)] df[df.date.between(pd.to_datetime('2018-11-13'), pd.to_datetime('2018-11-15 23:59:59'))]
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
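The first selection above can also be written with `query`, which takes the condition as a string (an equivalent formulation, added for illustration):

df.query('60 <= weight <= 70')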
Renaming rows
df.rename({i:letter for i,letter in enumerate('abcde')}) df.index = ['the', 'quick', 'brown', 'fox', 'jumps'] df df = df.reset_index(drop=True) df
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Dropping rows
df.drop([1,3], axis=0)
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Dropping duplicated data
df['something'] = [1,1,None,2,None] df.loc[df.something.duplicated()] df.drop_duplicates(subset='something')
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
Dropping missing data
df df.something.fillna(0) df.something.ffill() df.something.bfill() df.something.interpolate() df.dropna()
_____no_output_____
BSD-3-Clause
notebooks/03_Using_Pandas_Annotated.ipynb
abbarcenasj/bios-823-2019
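`dropna` also accepts a `subset` argument (added example), restricting the check to particular columns so that only rows missing those values are dropped:

df.dropna(subset=['something'])   # drop only the rows where 'something' is missing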