-> Converting time to datetime to make it easier to manipulate
dataset['Data/Hora'] = dataset['Data/Hora'].str.replace("/","-") dataset['Data/Hora'] = pd.to_datetime(dataset['Data/Hora'])
_____no_output_____
MIT
drafts/exercises/ibovespa.ipynb
ItamarRocha/introduction-to-AI
-> Visualizing the data
dataset.head()
_____no_output_____
MIT
drafts/exercises/ibovespa.ipynb
ItamarRocha/introduction-to-AI
-> creating date dataframe and splitting its features
date = dataset.iloc[:, 0:1]
date['day'] = date['Data/Hora'].dt.day
date['month'] = date['Data/Hora'].dt.month
date['year'] = date['Data/Hora'].dt.year
date = date.drop(columns=['Data/Hora'])
-> removing useless columns
dataset = dataset.drop(columns = ['Data/Hora','Unnamed: 7','Unnamed: 8','Unnamed: 9'])
_____no_output_____
MIT
drafts/exercises/ibovespa.ipynb
ItamarRocha/introduction-to-AI
-> transforming attributes to the correct format
for key, value in dataset.head().iteritems():
    dataset[key] = dataset[key].str.replace(".", "").str.replace(",", ".").astype(float)
"""
for key, value in date.head().iteritems():
    dataset[key] = date[key]
"""
_____no_output_____
MIT
drafts/exercises/ibovespa.ipynb
ItamarRocha/introduction-to-AI
-> Means
dataset.mean()
_____no_output_____
MIT
drafts/exercises/ibovespa.ipynb
ItamarRocha/introduction-to-AI
-> plotting graphics
plt.boxplot(dataset['Volume'])
plt.title('boxplot')
plt.xlabel('volume')
plt.ylabel('valores')
plt.ticklabel_format(style='sci', axis='y', useMathText=True)
dataset['Maxima'].median()
dataset['Minima'].mean()
_____no_output_____
MIT
drafts/exercises/ibovespa.ipynb
ItamarRocha/introduction-to-AI
-> Trimmed mean
from scipy import stats m = stats.trim_mean(dataset['Minima'], 0.1) print(m)
99109.76692307692
MIT
drafts/exercises/ibovespa.ipynb
ItamarRocha/introduction-to-AI
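To see what the trimmed mean does, here is a minimal sketch on hypothetical toy data (not the Ibovespa columns): `stats.trim_mean(x, 0.1)` drops the lowest and highest 10% of the values before averaging, which damps the effect of outliers.

import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 100])  # hypothetical series with one outlier
print(x.mean())                 # 14.5, pulled up by the outlier
print(stats.trim_mean(x, 0.1))  # 5.5, the mean after cutting 10% from each tail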
-> variance and standard deviation
v = dataset['Cotacao'].var()
print(v)
d = dataset['Cotacao'].std()
print(d)
m = dataset['Cotacao'].mean()
print(m)
99674.05773437498
MIT
drafts/exercises/ibovespa.ipynb
ItamarRocha/introduction-to-AI
-> covariance of the attributes; but first apply a standard scaler to make the values easier to read, then convert back to a pandas DataFrame. Correlation shows the relationship between two variables and how strongly they are related on a normalized scale, while covariance shows how the two variables vary together in their original units.
from sklearn.preprocessing import StandardScaler sc = StandardScaler() dataset_cov = sc.fit_transform(dataset) dataset_cov = pd.DataFrame(dataset_cov) dataset_cov.cov()
_____no_output_____
MIT
drafts/exercises/ibovespa.ipynb
ItamarRocha/introduction-to-AI
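As a minimal illustration of the distinction above (a sketch on a hypothetical toy DataFrame, not the notebook's data): covariance is expressed in the product of the columns' units, while correlation is normalized to the range [-1, 1].

import pandas as pd

toy = pd.DataFrame({'a': [1.0, 2.0, 3.0, 4.0],
                    'b': [10.0, 19.0, 31.0, 42.0]})  # hypothetical values

print(toy.cov())   # scale-dependent: units of 'a' times units of 'b'
print(toy.corr())  # scale-free: Pearson correlation, close to 1 here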
-> plotting the graph may make it easier to observe the correlation
corr = dataset.corr()
corr.style.background_gradient(cmap='coolwarm')
pd.plotting.scatter_matrix(dataset, figsize=(6, 6))
plt.show()
plt.matshow(dataset.corr())
plt.xticks(range(len(dataset.columns)), dataset.columns)
plt.yticks(range(len(dataset.columns)), dataset.columns)
plt.colorbar()
plt.show()
_____no_output_____
MIT
drafts/exercises/ibovespa.ipynb
ItamarRocha/introduction-to-AI
Exercise 02 - Functions and Getting Help
1. Complete Your Very First Function
Complete the body of the following function according to its docstring. *HINT*: Python has a built-in function `round`.
def round_to_two_places(num):
    """Return the given number rounded to two decimal places.

    >>> round_to_two_places(3.14159)
    3.14
    """
    # Replace this body with your own code.
    # ("pass" is a keyword that does literally nothing. We used it as a placeholder
    # so that it will not raise any errors, because after we begin a code block,
    # Python requires at least one line of code.)
    pass

def round_to_two_places(num):
    num = round(num, 2)
    print('The number after rounded to two decimal places is: ', num)

round_to_two_places(3.4455)
The number after rounded to two decimal places is: 3.45
MIT
python-for-data/Ex02 - Functions and Getting Help.ipynb
hoaintp/atom-assignments
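Note that the docstring (and its doctest) expects the rounded value to be returned rather than printed; a minimal returning variant, shown only as a sketch, could be:

def round_to_two_places(num):
    """Return the given number rounded to two decimal places.

    >>> round_to_two_places(3.14159)
    3.14
    """
    return round(num, 2)  # round() with ndigits=2 keeps two decimal places

print(round_to_two_places(3.14159))  # 3.14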
2. Explore the Built-in Function
The help for `round` says that `ndigits` (the second argument) may be negative. What do you think will happen when it is? Try some examples in the following cell. Can you think of a case where this would be useful?
print(round(122.3444, -3))
print(round(122.3456, -2))
print(round(122.5454, -1))
print(round(122.13432, 0))
# With ndigits <= 0, rounding moves leftward from the decimal point
# (to the nearest ten, hundred, thousand, ...).
0.0 100.0 120.0 122.0
MIT
python-for-data/Ex02 - Functions and Getting Help.ipynb
hoaintp/atom-assignments
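One case where a negative `ndigits` is handy (a purely illustrative sketch, not part of the exercise): reporting an approximate figure to the nearest thousand or ten thousand.

population = 128_537  # hypothetical value
print(round(population, -3))  # 129000: rounded to the nearest thousand
print(round(population, -4))  # 130000: rounded to the nearest ten thousand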
3. More Functions
In the candy-sharing problem, friends Alice, Bob, and Carol tried to split candies evenly. For the sake of their friendship, any candies left over would be smashed. For example, if they collectively bring home 91 candies, they will take 30 each and smash 1. Below is a simple function that will calculate the number of candies to smash for *any* number of total candies.
**Your task**:
- Modify it so that it optionally takes a second argument representing the number of friends the candies are being split between. If no second argument is provided, it should assume 3 friends, as before.
- Update the docstring to reflect this new behaviour.
def to_smash(total_candies, n=3):
    """Return the number of leftover candies that must be smashed
    after distributing the given number of candies evenly between
    `n` friends (3 by default).

    >>> to_smash(91)
    1
    """
    return total_candies % n

print('#no. of candies to smash = ', to_smash(31))
print('#no. of candies to smash = ', to_smash(32, 5))
#no. of candies to smash = 1 #no. of candies to smash = 2
MIT
python-for-data/Ex02 - Functions and Getting Help.ipynb
hoaintp/atom-assignments
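A quick check of the default-argument behaviour described above (a small sketch reusing `to_smash` from the previous cell): omitting the second argument falls back to 3 friends, matching the docstring example.

print(to_smash(91))     # 91 % 3 == 1, using the default of 3 friends
print(to_smash(91, 4))  # 91 % 4 == 3, with an explicit number of friends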
4. Taste some Errors
It may not be fun, but reading and understanding **error messages** will help you improve your problem-solving skills. Each code cell below contains some commented-out buggy code. For each cell:
1. Read the code and predict what you think will happen when it's run.
2. Then uncomment the code and run it to see what happens. *(**Tip**: In the kernel editor, you can highlight several lines and press `ctrl`+`/` to toggle commenting.)*
3. Fix the code (so that it accomplishes its intended purpose without throwing an exception).
round_to_two_places(9.9999)

x = -10
y = 5
# Which of the two variables above has the smallest absolute value?
smallest_abs = min(abs(x), abs(y))
print(smallest_abs)

def f(x):
    y = abs(x)
    return y

print(f(5))
5
MIT
python-for-data/Ex02 - Functions and Getting Help.ipynb
hoaintp/atom-assignments
5. More and more Functions
For this question, we'll be using two functions imported from Python's `time` module.
Time Function
The [time](https://docs.python.org/3/library/time.html#time.time) function returns the number of seconds that have passed since the Epoch (aka [Unix time](https://en.wikipedia.org/wiki/Unix_time)). Try it out below. Each time you run it, you should get a slightly larger number.
# Importing the function 'time' from the module of the same name. # (We'll discuss imports in more depth later) from time import time t = time() print(t, "seconds since the Epoch")
1621529220.6860213 seconds since the Epoch
MIT
python-for-data/Ex02 - Functions and Getting Help.ipynb
hoaintp/atom-assignments
Sleep Function
We'll also be using a function called [sleep](https://docs.python.org/3/library/time.html#time.sleep), which makes us wait some number of seconds while it does nothing in particular. (Sounds useful, right?) You can see it in action by running the cell below:
from time import sleep duration = 5 print("Getting sleepy. See you in", duration, "seconds") sleep(duration) print("I'm back. What did I miss?")
Getting sleepy. See you in 5 seconds I'm back. What did I miss?
MIT
python-for-data/Ex02 - Functions and Getting Help.ipynb
hoaintp/atom-assignments
Your Own Function
With the help of these functions, complete the function **`time_call`** below according to its docstring.
def time_call(fn, arg):
    """Return the amount of time the given function takes (in seconds)
    when called with the given argument.
    """
    from time import time
    start_time = time()
    fn(arg)
    end_time = time()
    duration = end_time - start_time
    return duration
_____no_output_____
MIT
python-for-data/Ex02 - Functions and Getting Help.ipynb
hoaintp/atom-assignments
How would you verify that `time_call` is working correctly? Think about it...
#solution? use sleep function?
_____no_output_____
MIT
python-for-data/Ex02 - Functions and Getting Help.ipynb
hoaintp/atom-assignments
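One way to verify `time_call`, sketched here with `sleep`: since `sleep(d)` takes roughly `d` seconds, the measured duration should come out close to the argument (the 0.1 s tolerance below is an assumption, not an exact bound).

from time import sleep

measured = time_call(sleep, 2)
print(measured)
assert abs(measured - 2) < 0.1  # about 2 seconds, allowing for scheduling jitter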
6. 🌶️ Reuse your Function
*Note: this question depends on a working solution to the previous question.*
Complete the function below according to its docstring.
def slowest_call(fn, arg1, arg2, arg3):
    """Return the amount of time taken by the slowest of the following
    function calls: fn(arg1), fn(arg2), fn(arg3)
    """
    # The slowest call is the one with the largest elapsed time, so use max.
    slowest = max(time_call(fn, arg1), time_call(fn, arg2), time_call(fn, arg3))
    return slowest

print(slowest_call(sleep, 1, 2, 3))
1.012155294418335
MIT
python-for-data/Ex02 - Functions and Getting Help.ipynb
hoaintp/atom-assignments
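A simple sanity check for `slowest_call` (an illustrative sketch using `sleep`): the slowest of the three calls should take roughly as long as the largest argument.

from time import sleep

result = slowest_call(sleep, 1, 2, 3)
print(result)
assert abs(result - 3) < 0.1  # the slowest of sleep(1), sleep(2), sleep(3) is about 3 s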
Core
> API details.
#hide from nbdev.showdoc import * # export from attrdict import AttrDict from fastcore.basics import Path import subprocess
_____no_output_____
Apache-2.0
00_core.ipynb
mgfrantz/dessiccate
Running bash commands in Python
# export
def run_bash(cmd, return_output=True):
    process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
    output, error = process.communicate()
    if not error:
        print(output.decode('utf-8'))
        if return_output:
            return output.decode('utf-8')
    else:
        print(error.decode('utf-8'))
        if return_output:
            return error.decode('utf-8')

out = run_bash('ls')
00_core.ipynb 01_plotting.ipynb 02_pandas.ipynb CONTRIBUTING.md LICENSE MANIFEST.in Makefile README.md build conda dessiccate dessiccate.egg-info dist docker-compose.yml docs index.ipynb settings.ini setup.py
Apache-2.0
00_core.ipynb
mgfrantz/dessiccate
Setting up in colab
If you're in colab, you may not have the proper packages installed. Running this function will set you up to work in colab.
# export def colab_setup(): """ Sets up for development in Google Colab. Repo must be cloned in drive/colab/ directory. """ try: from google.colab import drive print('Running in colab') drive.mount('/content/drive', force_remount=True) _ = run_bash("pip install -Uqq nbdev") import os os.chdir('/content/drive/MyDrive/colab/dessiccate/') print("Working in", os.getcwd()) _ = run_bash('pip install -e . --quiet') except: import os print("Working in", os.getcwd()) print('Running locally') colab_setup()
Working in /Users/michaelfrantz/Google Drive/colab/dessiccate Running locally
Apache-2.0
00_core.ipynb
mgfrantz/dessiccate
Path
Often, you want to create a new directory. Even if all you have is a file path, you can now call `mkdir_if_not_exists` to create the parent directory.
# export
def mkdir_if_not_exists(self, parents=True):
    """
    Creates the directory of the path if it doesn't exist.
    If the path is a file, will not make the file itself,
    but will create the parent directory.
    """
    if self.is_dir():
        p = self
    else:
        p = self.parent
    if not p.exists():
        p.mkdir(parents=parents)

Path.mkdir_if_not_exists = mkdir_if_not_exists

path = Path('testdir/test.txt')
assert not path.exists()
path.mkdir_if_not_exists()
assert not path.exists()
assert path.parent.exists()
path.parent.rmdir()
assert not path.parent.exists()
_____no_output_____
Apache-2.0
00_core.ipynb
mgfrantz/dessiccate
Patricia Bay (station 7277): 48.6536 N, 123.4515 W
get_tidal_stations(-123.4515, 48.6536, model_lons, model_lats, wind_lons, wind_lats, wave_lons, wave_lats, t_mask, wave_mask, size=10)
_____no_output_____
Apache-2.0
notebooks/Tidal Station Locations.ipynb
SalishSeaCast/analysis-susan
Woodwards Landing (station 7610): 49.1251 N, 123.0754 W
get_tidal_stations(-123.0754, 49.1251, model_lons, model_lats, wind_lons, wind_lats, wave_lons, wave_lats, t_mask, wave_mask, size=10)
_____no_output_____
Apache-2.0
notebooks/Tidal Station Locations.ipynb
SalishSeaCast/analysis-susan
New Westminster (station 7654): 49.203683 N, 122.90535 W
get_tidal_stations(-122.90535, 49.203683, model_lons, model_lats, wind_lons, wind_lats, wave_lons, wave_lats, t_mask, wave_mask, size=10)
_____no_output_____
Apache-2.0
notebooks/Tidal Station Locations.ipynb
SalishSeaCast/analysis-susan
Sandy Cove (station 7786): 49.34 N, 123.23 W
get_tidal_stations(-123.23, 49.34, model_lons, model_lats, wind_lons, wind_lats, wave_lons, wave_lats, t_mask, wave_mask, size=10)
_____no_output_____
Apache-2.0
notebooks/Tidal Station Locations.ipynb
SalishSeaCast/analysis-susan
Port Renfrew (station 8525, check): 48.555 N, 124.421 W
get_tidal_stations(-124.421, 48.555, model_lons, model_lats, wind_lons, wind_lats, wave_lons, wave_lats, t_mask, wave_mask, size=10)
_____no_output_____
Apache-2.0
notebooks/Tidal Station Locations.ipynb
SalishSeaCast/analysis-susan
Victoria (station 7120): 48.424666 N, 123.3707 W
get_tidal_stations(-123.3707, 48.424666, model_lons, model_lats, wind_lons, wind_lats, wave_lons, wave_lats, t_mask, wave_mask, size=10)
_____no_output_____
Apache-2.0
notebooks/Tidal Station Locations.ipynb
SalishSeaCast/analysis-susan
Sand Heads (station 7594): 49.125 N, 123.195 W. From Marlene's email: 49° 06′ 21.1857″, -123° 18′ 12.4789″. We are using grid point 426, 292; the end of the jetty is 429, 295.
lat_sh = 49+6/60.+21.1857/3600. lon_sh = -(123+18/60.+12.4789/3600.) print(lon_sh, lat_sh) get_tidal_stations(lon_sh, lat_sh, model_lons, model_lats, wind_lons, wind_lats, wave_lons, wave_lats, t_mask, wave_mask, size=20)
-123.3034663611111 49.10588491666667
Apache-2.0
notebooks/Tidal Station Locations.ipynb
SalishSeaCast/analysis-susan
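The arithmetic in the cell above is the standard degrees/minutes/seconds to decimal-degrees conversion; a small hypothetical helper (not part of the notebook) makes the formula explicit.

def dms_to_decimal(degrees, minutes, seconds=0.0, negative=False):
    """Convert degrees/minutes/seconds to decimal degrees (illustrative helper)."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if negative else value

# The Sand Heads longitude quoted from the email:
print(dms_to_decimal(123, 18, 12.4789, negative=True))  # about -123.30347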
Nanaimo (station 7917): 49.17 N, 123.93 W
get_tidal_stations(-123.93, 49.17, model_lons, model_lats, wind_lons, wind_lats, wave_lons, wave_lats, t_mask, wave_mask, size=10)
_____no_output_____
Apache-2.0
notebooks/Tidal Station Locations.ipynb
SalishSeaCast/analysis-susan
In our code it's at 484, 208 with lon, lat at -123.93 and 49.16; leave as is for now.
Boundary Bay (guesstimated from the map): -122.925, 49.0
get_tidal_stations(-122.925, 49.0, model_lons, model_lats, wind_lons, wind_lats, wave_lons, wave_lats, t_mask, wave_mask, size=15)
_____no_output_____
Apache-2.0
notebooks/Tidal Station Locations.ipynb
SalishSeaCast/analysis-susan
Squamish: 49° 41.675′ N, 123° 09.299′ W
print (49+41.675/60, -(123+9.299/60.)) print (model_lons.shape) get_tidal_stations(-(123+9.299/60.), 49.+41.675/60., model_lons, model_lats, wind_lons, wind_lats, wave_lons, wave_lats, t_mask, wave_mask, size=10)
49.694583333333334 -123.15498333333333 (898, 398)
Apache-2.0
notebooks/Tidal Station Locations.ipynb
SalishSeaCast/analysis-susan
Half Moon Bay: 49° 30.687′ N, 123° 54.726′ W
print (49+30.687/60, -(123+54.726/60.)) get_tidal_stations(-(123+54.726/60.), 49.+30.687/60., model_lons, model_lats, wind_lons, wind_lats, wave_lons, wave_lats, t_mask, wave_mask, size=10)
49.51145 -123.9121
Apache-2.0
notebooks/Tidal Station Locations.ipynb
SalishSeaCast/analysis-susan
Friday Harbour: -123.016667, 48.55
get_tidal_stations(-123.016667, 48.55, model_lons, model_lats, wind_lons, wind_lats, wave_lons, wave_lats, t_mask, wave_mask, size=10)
_____no_output_____
Apache-2.0
notebooks/Tidal Station Locations.ipynb
SalishSeaCast/analysis-susan
Neah Bay: -124.6, 48.4
get_tidal_stations(-124.6, 48.4, model_lons, model_lats, wind_lons, wind_lats, wave_lons, wave_lats, t_mask, wave_mask, size=10) from salishsea_tools import places
_____no_output_____
Apache-2.0
notebooks/Tidal Station Locations.ipynb
SalishSeaCast/analysis-susan
Plotting massive data sets
This notebook plots about half a million LIDAR points around Toronto from the KITTI data set. ([Source](http://www.cvlibs.net/datasets/kitti/raw_data.php)) The data is meant to be played over time. With pydeck, we can render these points and interact with them.
Cleaning the data
First we need to import the data. Each row of data represents one x/y/z coordinate for a point in space at a point in time, with each frame representing about 115,000 points. We also need to scale the points to plot closely on a map. These point coordinates are not given in latitude and longitude, so as a workaround we'll plot them very close to (0, 0) on the earth. In future versions of pydeck, other viewports, like a flat plane, will be supported out-of-the-box. For now, we'll make do with scaling the points.
import pandas as pd all_lidar = pd.concat([ pd.read_csv('https://raw.githubusercontent.com/ajduberstein/kitti_subset/master/kitti_1.csv'), pd.read_csv('https://raw.githubusercontent.com/ajduberstein/kitti_subset/master/kitti_2.csv'), pd.read_csv('https://raw.githubusercontent.com/ajduberstein/kitti_subset/master/kitti_3.csv'), pd.read_csv('https://raw.githubusercontent.com/ajduberstein/kitti_subset/master/kitti_4.csv'), ]) # Filter to one frame of data lidar = all_lidar[all_lidar['source'] == 136] lidar.loc[: , ['x', 'y']] = lidar[['x', 'y']] / 10000
_____no_output_____
MIT
bindings/pydeck/examples/04 - Plotting massive data sets.ipynb
jcready/deck.gl
Plotting the data
We'll define a single `PointCloudLayer` and plot it. Pydeck by default expects the input of `get_position` to be a string name indicating a single position value. For convenience, you can pass in a string indicating the X/Y/Z coordinate, here `get_position='[x, y, z]'`. You also have access to a small expression parser: in our `get_position` function here, we scale the z coordinate by a factor of 10. Using `pydeck.data_utils.compute_view`, we'll zoom to the approximate center of the data.
import pydeck as pdk point_cloud = pdk.Layer( 'PointCloudLayer', lidar[['x', 'y', 'z']], get_position=['x', 'y', 'z * 10'], get_normal=[0, 0, 1], get_color=[255, 0, 100, 200], pickable=True, auto_highlight=True, point_size=1) view_state = pdk.data_utils.compute_view(lidar[['x', 'y']], 0.9) view_state.max_pitch = 360 view_state.pitch = 80 view_state.bearing = 120 r = pdk.Deck( point_cloud, initial_view_state=view_state, map_provider=None, ) r.show() import time from collections import deque # Choose a handful of frames to loop through frame_buffer = deque([42, 56, 81, 95]) print('Press the stop icon to exit') while True: current_frame = frame_buffer[0] lidar = all_lidar[all_lidar['source'] == current_frame] r.layers[0].get_position = '@@=[x / 10000, y / 10000, z * 10]' r.layers[0].data = lidar.to_dict(orient='records') frame_buffer.rotate() r.update() time.sleep(0.5)
_____no_output_____
MIT
bindings/pydeck/examples/04 - Plotting massive data sets.ipynb
jcready/deck.gl
Seq2Seq with Attention for Korean-English Neural Machine Translation
- Network architecture based on this [paper](https://arxiv.org/abs/1409.0473)
- Fit to run on Google Colaboratory
import os import io import tarfile import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import torchtext from torchtext.data import Dataset from torchtext.data import Example from torchtext.data import Field from torchtext.data import BucketIterator
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
1. Upload Data to Colab Workspace
Upload the following three local files to the virtual machine. The originals can also be found [here](https://github.com/jungyeul/korean-parallel-corpora/tree/master/korean-english-news-v1/):
- korean-english-park.train.tar.gz
- korean-english-park.dev.tar.gz
- korean-english-park.test.tar.gz
# Check the current working directory & create a 'data' folder
!echo 'Current working directory:' ${PWD}
!mkdir -p data/
!ls -al

# Upload the local data files
from google.colab import files
uploaded = files.upload()

# Move them under the 'data' folder and check that they arrived
!mv *.tar.gz data/
!ls -al data/
total 8864 drwxr-xr-x 2 root root 4096 Aug 1 00:25 . drwxr-xr-x 1 root root 4096 Aug 1 00:25 .. -rw-r--r-- 1 root root 113461 Aug 1 00:23 korean-english-park.dev.tar.gz -rw-r--r-- 1 root root 229831 Aug 1 00:23 korean-english-park.test.tar.gz -rw-r--r-- 1 root root 8718893 Aug 1 00:24 korean-english-park.train.tar.gz
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
2. Check Packages
KoNLPy (installation required)
# Install Java 1.8 & KoNLPy
!apt-get update
!apt-get install g++ openjdk-8-jdk python-dev python3-dev
!pip3 install JPype1-py3
!pip3 install konlpy

from konlpy.tag import Okt
ko_tokens = Okt().pos('트위터 데이터로 학습한 형태소 분석기가 잘 실행이 되는지 확인해볼까요?')  # list of (word, POS tag) tuples
ko_tokens = [t[0] for t in ko_tokens]  # only keep the words
print(ko_tokens)
del ko_tokens  # no longer needed, so delete it
/usr/local/lib/python3.6/dist-packages/jpype/_core.py:210: UserWarning: ------------------------------------------------------------------------------- Deprecated: convertStrings was not specified when starting the JVM. The default behavior in JPype will be False starting in JPype 0.8. The recommended setting for new code is convertStrings=False. The legacy value of True was assumed for this session. If you are a user of an application that reported this warning, please file a ticket with the developer. ------------------------------------------------------------------------------- """)
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Spacy (already installed)
# Check whether spacy is installed
!pip show spacy

# Check whether the English model is installed (it is downloaded automatically if missing)
!python -m spacy download en_core_web_sm

import spacy
spacy_en = spacy.load('en_core_web_sm')
en_tokens = [t.text for t in spacy_en.tokenizer('Check that spacy tokenizer works.')]
print(en_tokens)
del en_tokens  # no longer needed, so delete it
['Check', 'that', 'spacy', 'tokenizer', 'works', '.']
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
3. Define Tokenizing Functions
For each language, write a function that takes a sentence and returns a list of smaller units (words or morphemes).
- Korean: konlpy.tag.Okt() (renamed from Twitter())
- English: spacy.tokenizer
Korean Tokenizer
#from konlpy.tag import Okt class KoTokenizer(object): """For Korean.""" def __init__(self): self.tokenizer = Okt() def tokenize(self, text): tokens = self.tokenizer.pos(text) tokens = [t[0] for t in tokens] return tokens # Usage example print(KoTokenizer().tokenize('전처리는 언제나 지겨워요.'))
['전', '처리', '는', '언제나', '지겨워요', '.']
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
English Tokenizer
#import spacy class EnTokenizer(object): """For English.""" def __init__(self): self.spacy_en = spacy.load('en_core_web_sm') def tokenize(self, text): tokens = [t.text for t in self.spacy_en.tokenizer(text)] return tokens # Usage example print(EnTokenizer().tokenize("What I cannot create, I don't understand."))
['What', 'I', 'can', 'not', 'create', ',', 'I', 'do', "n't", 'understand', '.']
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
4. Data Preprocessing
Load data
# Current working directory & list of files !echo 'Current working directory:' ${PWD} !ls -al DATA_DIR = './data/' print('Data directory exists:', os.path.isdir(DATA_DIR)) print('List of files:') print(*os.listdir(DATA_DIR), sep='\n') def get_data_from_tar_gz(filename): """ Retrieve contents from a `tar.gz` file without extraction. Arguments: filename: path to `tar.gz` file. Returns: dict, (name, content) pairs """ assert os.path.exists(filename) out = {} with tarfile.open(filename, 'r:gz') as tar: for member in tar.getmembers(): lang = member.name.split('.')[-1] # ex) korean-english-park.train.ko -> ko f = tar.extractfile(member) if f is not None: content = f.read().decode('utf-8') content = content.splitlines() out[lang] = content assert isinstance(out, dict) return out # Each 'xxx_data' is a dictionary with keys; 'ko', 'en' train_dict= get_data_from_tar_gz(os.path.join(DATA_DIR, 'korean-english-park.train.tar.gz')) # train dev_dict = get_data_from_tar_gz(os.path.join(DATA_DIR, 'korean-english-park.dev.tar.gz')) # dev test_dict = get_data_from_tar_gz(os.path.join(DATA_DIR, 'korean-english-park.test.tar.gz')) # test # Some samples (ko) train_dict['ko'][100:105] # Some samples (en) train_dict['en'][100:105]
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Define Datasets
#from torchtext.data import Dataset #from torchtext.data import Example class KoEnTranslationDataset(Dataset): """A dataset for Korean-English Neural Machine Translation.""" @staticmethod def sort_key(ex): return torchtext.data.interleave_keys(len(ex.src), len(ex.trg)) def __init__(self, data_dict, field_dict, source_lang='ko', max_samples=None, **kwargs): """ Only 'ko' and 'en' supported for `language` Arguments: data_dict: dict of (`language`, text) pairs. field_dict: dict of (`language`, Field instance) pairs. source_lang: str, default 'ko'. Other kwargs are passed to the constructor of `torchtext.data.Dataset`. """ if not all(k in ['ko', 'en'] for k in data_dict.keys()): raise KeyError("Check data keys.") if not all(k in ['ko', 'en'] for k in field_dict.keys()): raise KeyError("Check field keys.") if source_lang == 'ko': fields = [('src', field_dict['ko']), ('trg', field_dict['en'])] src_data = data_dict['ko'] trg_data = data_dict['en'] elif source_lang == 'en': fields = [('src', field_dict['en']), ('trg', field_dict['ko'])] src_data = data_dict['en'] trg_data = data_dict['ko'] else: raise NotImplementedError if not len(src_data) == len(trg_data): raise ValueError('Inconsistent number of instances between two languages.') examples = [] for i, (src_line, trg_line) in enumerate(zip(src_data, trg_data)): src_line = src_line.strip() trg_line = trg_line.strip() if src_line != '' and trg_line != '': examples.append( torchtext.data.Example.fromlist( [src_line, trg_line], fields ) ) i += 1 if max_samples is not None: if i >= max_samples: break super(KoEnTranslationDataset, self).__init__(examples, fields, **kwargs)
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Define Fields
- Instantiate tokenizers, one for each language.
- The 'tokenize' argument of `Field` requires a tokenizing function.
#from torchtext.data import Field ko_tokenizer = KoTokenizer() # korean tokenizer en_tokenizer = EnTokenizer() # english tokenizer # Field instance for korean KOREAN = Field( init_token='<sos>', eos_token='<eos>', tokenize=ko_tokenizer.tokenize, batch_first=True, lower=False ) # Field instance for english ENGLISH = Field( init_token='<sos>', eos_token='<eos>', tokenize=en_tokenizer.tokenize, batch_first=True, lower=True ) # Store Field instances in a dictionary field_dict = { 'ko': KOREAN, 'en': ENGLISH, }
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Instantiate datasets: one for each set (train, dev, test)
# Reduce the training data to shorten training time
MAX_TRAIN_SAMPLES = 10000

# Instantiate with data
train_set = KoEnTranslationDataset(train_dict, field_dict, max_samples=MAX_TRAIN_SAMPLES)
print('Train set ready.')
print('#. examples:', len(train_set.examples))

dev_set = KoEnTranslationDataset(dev_dict, field_dict)
print('Dev set ready...')
print('#. examples:', len(dev_set.examples))

test_set = KoEnTranslationDataset(test_dict, field_dict)
print('Test set ready...')
print('#. examples:', len(test_set.examples))

# Training example (KO, source language)
train_set.examples[50].src

# Training example (EN, target language)
train_set.examples[50].trg
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Build Vocabulary
- Build one vocabulary per language, using the `Field` instances.
- A smaller minimum frequency (`MIN_FREQ`) gives a larger vocabulary.
- A larger minimum frequency (`MIN_FREQ`) gives a smaller vocabulary.
MIN_FREQ = 2  # TODO: try different values

# Build vocab for Korean
KOREAN.build_vocab(train_set, dev_set, test_set, min_freq=MIN_FREQ)  # ko
print('Size of source vocab (ko):', len(KOREAN.vocab))

# Check indices of some important tokens
tokens = ['<unk>', '<pad>', '<sos>', '<eos>']
for token in tokens:
    print(f"{token} -> {KOREAN.vocab.stoi[token]}")

# Build vocab for English
ENGLISH.build_vocab(train_set, dev_set, test_set, min_freq=MIN_FREQ)  # en
print('Size of target vocab (en):', len(ENGLISH.vocab))

# Check indices of some important tokens (this time in the English vocab)
tokens = ['<unk>', '<pad>', '<sos>', '<eos>']
for token in tokens:
    print(f"{token} -> {ENGLISH.vocab.stoi[token]}")
<unk> -> 0 <pad> -> 1 <sos> -> 2 <eos> -> 3
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
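To see the trade-off described above, one could rebuild the vocabulary at a few `MIN_FREQ` values and compare sizes; a rough sketch using the objects already defined (note that `build_vocab` overwrites the Field's vocabulary, so finish with the value you actually want):

for freq in (1, 2, 5):
    KOREAN.build_vocab(train_set, dev_set, test_set, min_freq=freq)
    print(f"min_freq={freq} -> Korean vocab size: {len(KOREAN.vocab)}")

# Rebuild with the chosen value so later cells see the intended vocabulary
KOREAN.build_vocab(train_set, dev_set, test_set, min_freq=MIN_FREQ)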
Configure Device
- Under *'Runtime' -> 'Change runtime type'*, select **GPU** as the hardware accelerator.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print('Device to use:', device)
Device to use: cuda
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Create Data Iterators
- They return the data in mini-batches.
- One iterator must be defined for each of `train_set`, `dev_set`, and `test_set`.
- `BATCH_SIZE` must be specified.
- `torchtext.data.BucketIterator` composes each mini-batch from observations of similar length.
- The effect of [Bucketing](https://medium.com/@rashmi.margani/how-to-speed-up-the-training-of-the-sequence-model-using-bucketing-techniques-9e302b0fd976): it minimizes padding within a mini-batch, reducing wasted computation.
BATCH_SIZE = 128 #from torchtext.data import BucketIterator # Train iterator train_iterator = BucketIterator( train_set, batch_size=BATCH_SIZE, train=True, shuffle=True, device=device ) print(f'Number of minibatches per epoch: {len(train_iterator)}') #from torchtext.data import BucketIterator # Dev iterator dev_iterator = BucketIterator( dev_set, batch_size=100, train=False, shuffle=False, device=device ) print(f'Number of minibatches per epoch: {len(dev_iterator)}') #from torchtext.data import BucketIterator # Test iterator test_iterator = BucketIterator( test_set, batch_size=200, train=False, shuffle=False, device=device ) print(f'Number of minibatches per epoch: {len(test_iterator)}') train_batch = next(iter(train_iterator)) print('a batch of source examples has shape:', train_batch.src.size()) # (b, s) print('a batch of target examples has shape:', train_batch.trg.size()) # (b, s) # Checking first sample in mini-batch (KO, source lang) ko_indices = train_batch.src[0] ko_tokens = [KOREAN.vocab.itos[i] for i in ko_indices] for t, i in zip(ko_tokens, ko_indices): print(f"{t} ({i})") del ko_indices, ko_tokens # Checking first sample in mini-batch (EN, target lang) en_indices = train_batch.trg[0] en_tokens = [ENGLISH.vocab.itos[i] for i in en_indices] for t, i in zip(en_tokens, en_indices): print(f"{t} ({i})") del en_indices, en_tokens del train_batch # 더 이상 필요 없으니까 삭제
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
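A rough way to observe the effect of bucketing (an illustrative sketch using objects defined above): measure what fraction of a target batch consists of `<pad>` tokens; with `BucketIterator` grouping sequences of similar length, this fraction should stay small.

pad_idx = ENGLISH.vocab.stoi['<pad>']
batch = next(iter(train_iterator))
pad_fraction = (batch.trg == pad_idx).float().mean().item()
print(f"Fraction of <pad> tokens in this target batch: {pad_fraction:.3f}")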
5. Building Seq2Seq Model Hyperparameters
# Hyperparameters INPUT_DIM = len(KOREAN.vocab) OUTPUT_DIM = len(ENGLISH.vocab) ENC_EMB_DIM = DEC_EMB_DIM = 100 ENC_HID_DIM = DEC_HID_DIM = 60 USE_BIDIRECTIONAL = False
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Encoder
class Encoder(nn.Module): """ Learns an embedding for the source text. Arguments: input_dim: int, size of input language vocabulary. emb_dim: int, size of embedding layer output. enc_hid_dim: int, size of encoder hidden state. dec_hid_dim: int, size of decoder hidden state. bidirectional: uses bidirectional RNNs if True. default is False. """ def __init__(self, input_dim, emb_dim, enc_hid_dim, dec_hid_dim, bidirectional=False): super(Encoder, self).__init__() self.input_dim = input_dim self.emb_dim = emb_dim self.enc_hid_dim = enc_hid_dim self.dec_hid_dim = dec_hid_dim self.bidirectional = bidirectional self.embedding = nn.Embedding( num_embeddings=self.input_dim, embedding_dim=self.emb_dim ) self.rnn = nn.GRU( input_size=self.emb_dim, hidden_size=self.enc_hid_dim, bidirectional=self.bidirectional, batch_first=True ) self.rnn_output_dim = self.enc_hid_dim if self.bidirectional: self.rnn_output_dim *= 2 self.fc = nn.Linear(self.rnn_output_dim, self.dec_hid_dim) self.dropout = nn.Dropout(.2) def forward(self, src): """ Arguments: src: 2d tensor of shape (batch_size, input_seq_len) Returns: outputs: 3d tensor of shape (batch_size, input_seq_len, num_directions * enc_h) hidden: 2d tensor of shape (b, dec_h). This tensor will be used as the initial hidden state value of the decoder (h0 of decoder). """ assert len(src.size()) == 2, 'Input requires dimension (batch_size, seq_len).' # Shape: (b, s, h) embedded = self.embedding(src) embedded = self.dropout(embedded) outputs, hidden = self.rnn(embedded) if self.bidirectional: # (2, b, enc_h) -> (b, 2 * enc_h) hidden = torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1) else: # (1, b, enc_h) -> (b, enc_h) hidden = hidden.squeeze(0) # (b, num_directions * enc_h) -> (b, dec_h) hidden = self.fc(hidden) hidden = torch.tanh(hidden) return outputs, hidden
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Attention
class Attention(nn.Module): def __init__(self, enc_hid_dim, dec_hid_dim, encoder_is_bidirectional=False): super(Attention, self).__init__() self.enc_hid_dim = enc_hid_dim self.dec_hid_dim = dec_hid_dim self.encoder_is_bidirectional = encoder_is_bidirectional self.attention_input_dim = enc_hid_dim + dec_hid_dim if self.encoder_is_bidirectional: self.attention_input_dim += enc_hid_dim # 2 * h_enc + h_dec self.linear = nn.Linear(self.attention_input_dim, dec_hid_dim) self.v = nn.Parameter(torch.rand(dec_hid_dim)) def forward(self, hidden, encoder_outputs): """ Arguments: hidden: 2d tensor with shape (batch_size, dec_hid_dim). encoder_outputs: 3d tensor with shape (batch_size, input_seq_len, enc_hid_dim). if encoder is bidirectional, expects (batch_size, input_seq_len, 2 * enc_hid_dim). """ # Shape check assert hidden.dim() == 2 assert encoder_outputs.dim() == 3 batch_size, seq_len, _ = encoder_outputs.size() # (b, dec_h) -> (b, s, dec_h) hidden = hidden.unsqueeze(1).expand(-1, seq_len, -1) # concat; shape results in (b, s, enc_h + dec_h). # if encoder is bidirectional, (b, s, 2 * h_enc + h_dec). concat = torch.cat((hidden, encoder_outputs), dim=2) # concat; shape is (b, s, dec_h) concat = self.linear(concat) concat = torch.tanh(concat) # tile v; (dec_h, ) -> (b, dec_h, 1) v = self.v.repeat(batch_size, 1).unsqueeze(2) # attn; (b, s, dec_h) @ (b, dec_h, 1) -> (b, s, 1) -> (b, s) attn_scores = torch.bmm(concat, v).squeeze(-1) assert attn_scores.dim() == 2 # Final shape check: (b, s) return F.softmax(attn_scores, dim=1)
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Decoder
class Decoder(nn.Module): """ Unlike the encoder, a single forward pass of a `Decoder` instance is defined for only a single timestep. Arguments: output_dim: int, emb_dim: int, enc_hid_dim: int, dec_hid_dim: int, attention_module: torch.nn.Module, encoder_is_bidirectional: False """ def __init__(self, output_dim, emb_dim, enc_hid_dim, dec_hid_dim, attention_module, encoder_is_bidirectional=False): super(Decoder, self).__init__() self.emb_dim = emb_dim self.enc_hid_dim = enc_hid_dim self.dec_hid_dim = dec_hid_dim self.output_dim = output_dim self.encoder_is_bidirectional = encoder_is_bidirectional if isinstance(attention_module, nn.Module): self.attention_module = attention_module else: raise ValueError self.rnn_input_dim = enc_hid_dim + emb_dim # enc_h + dec_emb_dim if self.encoder_is_bidirectional: self.rnn_input_dim += enc_hid_dim # 2 * enc_h + dec_emb_dim self.embedding = nn.Embedding(output_dim, emb_dim) self.rnn = nn.GRU( input_size=self.rnn_input_dim, hidden_size=dec_hid_dim, bidirectional=False, batch_first=True, ) out_input_dim = 2 * dec_hid_dim + emb_dim # hidden + dec_hidden_dim + dec_emb_dim self.out = nn.Linear(out_input_dim, output_dim) self.dropout = nn.Dropout(.2) def forward(self, inp, hidden, encoder_outputs): """ Arguments: inp: 1d tensor with shape (batch_size, ) hidden: 2d tensor with shape (batch_size, dec_hid_dim). This `hidden` tensor is the hidden state vector from the previous timestep. encoder_outputs: 3d tensor with shape (batch_size, seq_len, enc_hid_dim). If encoder_is_bidirectional is True, expects shape (batch_size, seq_len, 2 * enc_hid_dim). """ assert inp.dim() == 1 assert hidden.dim() == 2 assert encoder_outputs.dim() == 3 # (batch_size, ) -> (batch_size, 1) inp = inp.unsqueeze(1) # (batch_size, 1) -> (batch_size, 1, emb_dim) embedded = self.embedding(inp) embedded = self.dropout(embedded) # attention probabilities; (batch_size, seq_len) attn_probs = self.attention_module(hidden, encoder_outputs) # (batch_size, 1, seq_len) attn_probs = attn_probs.unsqueeze(1) # (b, 1, s) @ (b, s, enc_hid_dim) -> (b, 1, enc_hid_dim) weighted = torch.bmm(attn_probs, encoder_outputs) # (batch_size, 1, emb_dim + enc_hid_dim) rnn_input = torch.cat((embedded, weighted), dim=2) # output; (batch_size, 1, dec_hid_dim) # new_hidden; (1, batch_size, dec_hid_dim) output, new_hidden = self.rnn(rnn_input, hidden.unsqueeze(0)) embedded = embedded.squeeze(1) # (b, 1, emb) -> (b, emb) output = output.squeeze(1) # (b, 1, dec_h) -> (b, dec_h) weighted = weighted.squeeze(1) # (b, 1, dec_h) -> (b, dec_h) # output; (batch_size, emb + 2 * dec_h) -> (batch_size, output_dim) output = self.out(torch.cat((output, weighted, embedded), dim=1)) return output, new_hidden.squeeze(0)
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Seq2Seq
class Seq2Seq(nn.Module): def __init__(self, encoder, decoder, device): super(Seq2Seq, self).__init__() self.encoder = encoder self.decoder = decoder self.device = device def forward(self, src, trg, teacher_forcing_ratio=.5): batch_size, max_seq_len = trg.size() trg_vocab_size = self.decoder.output_dim # An empty tesnor to store decoder outputs (time index first for indexing) outputs_shape = (max_seq_len, batch_size, trg_vocab_size) outputs = torch.zeros(outputs_shape).to(self.device) encoder_outputs, hidden = self.encoder(src) # first input to the decoder is '<sos>' # trg; shape (batch_size, seq_len) initial_dec_input = output = trg[:, 0] # get first timestep token for t in range(1, max_seq_len): output, hidden = self.decoder(output, hidden, encoder_outputs) outputs[t] = output # Save output for timestep t, for 1 <= t <= max_len top1_val, top1_idx = output.max(dim=1) teacher_force = torch.rand(1).item() >= teacher_forcing_ratio output = trg[:, t] if teacher_force else top1_idx # Switch batch and time dimensions for consistency (batch_first=True) outputs = outputs.permute(1, 0, 2) # (s, b, trg_vocab) -> (b, s, trg_vocab) return outputs
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Build Model
# Define encoder enc = Encoder( input_dim=INPUT_DIM, emb_dim=ENC_EMB_DIM, enc_hid_dim=ENC_HID_DIM, dec_hid_dim=DEC_HID_DIM, bidirectional=USE_BIDIRECTIONAL ) print(enc) # Define attention layer attn = Attention( enc_hid_dim=ENC_HID_DIM, dec_hid_dim=DEC_HID_DIM, encoder_is_bidirectional=USE_BIDIRECTIONAL ) print(attn) # Define decoder dec = Decoder( output_dim=OUTPUT_DIM, emb_dim=DEC_EMB_DIM, enc_hid_dim=ENC_HID_DIM, dec_hid_dim=DEC_HID_DIM, attention_module=attn, encoder_is_bidirectional=USE_BIDIRECTIONAL ) print(dec) model = Seq2Seq(enc, dec, device).to(device) print(model) def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'The model has {count_parameters(model):,} trainable parameters.')
The model has 5,500,930 trainable parameters.
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
6. Train
Optimizer: use `optim.Adam` or `optim.RMSprop`.
optimizer = optim.Adam(model.parameters(), lr=0.001) #optimizer = optim.RMSprop(model.parameters(), lr=0.01)
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Loss function
# Padding indices should not be considered when loss is calculated. PAD_IDX = ENGLISH.vocab.stoi['<pad>'] criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
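A minimal illustration (hypothetical tensors, not real model outputs) of what `ignore_index=PAD_IDX` does: target positions equal to the padding index are skipped, and the loss is averaged over the remaining positions only.

logits = torch.randn(3, len(ENGLISH.vocab))        # three fake timesteps
targets = torch.tensor([5, PAD_IDX, 7])            # the middle position is padding
loss_all = nn.CrossEntropyLoss()(logits, targets)  # counts the pad position
loss_masked = criterion(logits, targets)           # ignores the pad position
print(loss_all.item(), loss_masked.item())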
Train function
def train(seq2seq_model, iterator, optimizer, criterion, grad_clip=1.0): seq2seq_model.train() epoch_loss = .0 for i, batch in enumerate(iterator): print('.', end='') src = batch.src trg = batch.trg optimizer.zero_grad() decoder_outputs = seq2seq_model(src, trg, teacher_forcing_ratio=.5) seq_len, batch_size, trg_vocab_size = decoder_outputs.size() # (b, s, trg_vocab) # (b-1, s, trg_vocab) decoder_outputs = decoder_outputs[:, 1:, :] # ((b-1) * s, trg_vocab) decoder_outputs = decoder_outputs.contiguous().view(-1, trg_vocab_size) # ((b-1) * s, ) trg = trg[:, 1:].contiguous().view(-1) loss = criterion(decoder_outputs, trg) loss.backward() # Gradient clipping; remedy for exploding gradients torch.nn.utils.clip_grad_norm_(seq2seq_model.parameters(), grad_clip) optimizer.step() epoch_loss += loss.item() return epoch_loss / len(iterator)
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Evaluate function
def evaluate(seq2seq_model, iterator, criterion): seq2seq_model.eval() epoch_loss = 0. with torch.no_grad(): for i, batch in enumerate(iterator): print('.', end='') src = batch.src trg = batch.trg decoder_outputs = seq2seq_model(src, trg, teacher_forcing_ratio=0.) seq_len, batch_size, trg_vocab_size = decoder_outputs.size() # (b, s, trg_vocab) # (b-1, s, trg_vocab) decoder_outputs = decoder_outputs[:, 1:, :] # ((b-1) * s, trg_vocab) decoder_outputs = decoder_outputs.contiguous().view(-1, trg_vocab_size) # ((b-1) * s, ) trg = trg[:, 1:].contiguous().view(-1) loss = criterion(decoder_outputs, trg) epoch_loss += loss.item() return epoch_loss / len(iterator)
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Epoch time measure function
def epoch_time(start_time, end_time): """Returns elapsed time in mins & secs.""" elapsed_time = end_time - start_time elapsed_mins = int(elapsed_time / 60) elapsed_secs = int(elapsed_time - (elapsed_mins * 60)) return elapsed_mins, elapsed_secs
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Train for multiple epochs
NUM_EPOCHS = 50 import time import math best_dev_loss = float('inf') for epoch in range(NUM_EPOCHS): start_time = time.time() train_loss = train(model, train_iterator, optimizer, criterion) dev_loss = evaluate(model, dev_iterator, criterion) end_time = time.time() epoch_mins, epoch_secs = epoch_time(start_time, end_time) if dev_loss < best_dev_loss: best_dev_loss = dev_loss torch.save(model.state_dict(), './best_model.pt') print("\n") print(f"Epoch: {epoch + 1:>02d} | Time: {epoch_mins}m {epoch_secs}s") print(f"Train Loss: {train_loss:>.4f} | Train Perplexity: {math.exp(train_loss):7.3f}") print(f"Dev Loss: {dev_loss:>.4f} | Dev Perplexity: {math.exp(dev_loss):7.3f}")
......................................................................................... Epoch: 01 | Time: 1m 19s Train Loss: 7.2537 | Train Perplexity: 1413.394 Dev Loss: 6.5596 | Dev Perplexity: 705.983 ......................................................................................... Epoch: 02 | Time: 1m 18s Train Loss: 6.5319 | Train Perplexity: 686.695 Dev Loss: 6.4354 | Dev Perplexity: 623.532 ......................................................................................... Epoch: 03 | Time: 1m 18s Train Loss: 6.4319 | Train Perplexity: 621.383 Dev Loss: 6.3587 | Dev Perplexity: 577.470 ......................................................................................... Epoch: 04 | Time: 1m 18s Train Loss: 6.3550 | Train Perplexity: 575.378 Dev Loss: 6.2845 | Dev Perplexity: 536.183 ......................................................................................... Epoch: 05 | Time: 1m 19s Train Loss: 6.2784 | Train Perplexity: 532.939 Dev Loss: 6.2367 | Dev Perplexity: 511.187 ......................................................................................... Epoch: 06 | Time: 1m 20s Train Loss: 6.2436 | Train Perplexity: 514.711 Dev Loss: 6.2160 | Dev Perplexity: 500.680 ......................................................................................... Epoch: 07 | Time: 1m 19s Train Loss: 6.1707 | Train Perplexity: 478.508 Dev Loss: 6.1750 | Dev Perplexity: 480.602 ......................................................................................... Epoch: 08 | Time: 1m 19s Train Loss: 6.1252 | Train Perplexity: 457.256 Dev Loss: 6.1175 | Dev Perplexity: 453.724 ......................................................................................... Epoch: 09 | Time: 1m 18s Train Loss: 6.0602 | Train Perplexity: 428.447 Dev Loss: 6.0776 | Dev Perplexity: 435.961 ......................................................................................... Epoch: 10 | Time: 1m 17s Train Loss: 6.0100 | Train Perplexity: 407.500 Dev Loss: 6.0515 | Dev Perplexity: 424.754 ......................................................................................... Epoch: 11 | Time: 1m 18s Train Loss: 5.9789 | Train Perplexity: 395.025 Dev Loss: 6.0203 | Dev Perplexity: 411.700 ......................................................................................... Epoch: 12 | Time: 1m 17s Train Loss: 5.9032 | Train Perplexity: 366.219 Dev Loss: 5.9970 | Dev Perplexity: 402.239 ......................................................................................... Epoch: 13 | Time: 1m 18s Train Loss: 5.8394 | Train Perplexity: 343.577 Dev Loss: 5.9690 | Dev Perplexity: 391.100 ......................................................................................... Epoch: 14 | Time: 1m 18s Train Loss: 5.7811 | Train Perplexity: 324.115 Dev Loss: 5.9306 | Dev Perplexity: 376.392 ......................................................................................... Epoch: 15 | Time: 1m 18s Train Loss: 5.7331 | Train Perplexity: 308.923 Dev Loss: 5.9141 | Dev Perplexity: 370.223 ......................................................................................... Epoch: 16 | Time: 1m 17s Train Loss: 5.7038 | Train Perplexity: 300.002 Dev Loss: 5.8974 | Dev Perplexity: 364.074 ......................................................................................... 
Epoch: 17 | Time: 1m 17s Train Loss: 5.6431 | Train Perplexity: 282.331 Dev Loss: 5.8884 | Dev Perplexity: 360.836 ......................................................................................... Epoch: 18 | Time: 1m 18s Train Loss: 5.5801 | Train Perplexity: 265.111 Dev Loss: 5.8606 | Dev Perplexity: 350.934 ......................................................................................... Epoch: 19 | Time: 1m 18s Train Loss: 5.5536 | Train Perplexity: 258.167 Dev Loss: 5.8534 | Dev Perplexity: 348.428 ......................................................................................... Epoch: 20 | Time: 1m 18s Train Loss: 5.4865 | Train Perplexity: 241.412 Dev Loss: 5.8389 | Dev Perplexity: 343.409 ......................................................................................... Epoch: 21 | Time: 1m 18s Train Loss: 5.4370 | Train Perplexity: 229.756 Dev Loss: 5.8224 | Dev Perplexity: 337.769 ......................................................................................... Epoch: 22 | Time: 1m 18s Train Loss: 5.4458 | Train Perplexity: 231.791 Dev Loss: 5.8218 | Dev Perplexity: 337.593 ......................................................................................... Epoch: 23 | Time: 1m 18s Train Loss: 5.3683 | Train Perplexity: 214.504 Dev Loss: 5.8152 | Dev Perplexity: 335.373 ......................................................................................... Epoch: 24 | Time: 1m 18s Train Loss: 5.3330 | Train Perplexity: 207.066 Dev Loss: 5.8054 | Dev Perplexity: 332.090 ......................................................................................... Epoch: 25 | Time: 1m 18s Train Loss: 5.2826 | Train Perplexity: 196.878 Dev Loss: 5.8117 | Dev Perplexity: 334.199 ......................................................................................... Epoch: 26 | Time: 1m 18s Train Loss: 5.2420 | Train Perplexity: 189.039 Dev Loss: 5.7976 | Dev Perplexity: 329.503 ......................................................................................... Epoch: 27 | Time: 1m 19s Train Loss: 5.2041 | Train Perplexity: 182.024 Dev Loss: 5.8016 | Dev Perplexity: 330.841 ......................................................................................... Epoch: 28 | Time: 1m 17s Train Loss: 5.1639 | Train Perplexity: 174.838 Dev Loss: 5.7970 | Dev Perplexity: 329.298 ......................................................................................... Epoch: 29 | Time: 1m 17s Train Loss: 5.1491 | Train Perplexity: 172.281 Dev Loss: 5.8014 | Dev Perplexity: 330.763 ......................................................................................... Epoch: 30 | Time: 1m 19s Train Loss: 5.0821 | Train Perplexity: 161.110 Dev Loss: 5.7841 | Dev Perplexity: 325.084 ......................................................................................... Epoch: 31 | Time: 1m 17s Train Loss: 5.1006 | Train Perplexity: 164.126 Dev Loss: 5.7953 | Dev Perplexity: 328.739 ......................................................................................... Epoch: 32 | Time: 1m 19s Train Loss: 5.0535 | Train Perplexity: 156.566 Dev Loss: 5.7965 | Dev Perplexity: 329.147 ......................................................................................... Epoch: 33 | Time: 1m 18s Train Loss: 4.9971 | Train Perplexity: 147.984 Dev Loss: 5.7983 | Dev Perplexity: 329.726 ......................................................................................... 
Epoch: 34 | Time: 1m 17s Train Loss: 4.9565 | Train Perplexity: 142.093 Dev Loss: 5.7988 | Dev Perplexity: 329.910 ......................................................................................... Epoch: 35 | Time: 1m 18s Train Loss: 4.9383 | Train Perplexity: 139.528 Dev Loss: 5.8000 | Dev Perplexity: 330.293 ......................................................................................... Epoch: 36 | Time: 1m 19s Train Loss: 4.8999 | Train Perplexity: 134.283 Dev Loss: 5.8093 | Dev Perplexity: 333.389 ......................................................................................... Epoch: 37 | Time: 1m 18s Train Loss: 4.8997 | Train Perplexity: 134.245 Dev Loss: 5.8123 | Dev Perplexity: 334.372 ......................................................................................... Epoch: 38 | Time: 1m 17s Train Loss: 4.8393 | Train Perplexity: 126.383 Dev Loss: 5.8158 | Dev Perplexity: 335.573 ......................................................................................... Epoch: 39 | Time: 1m 19s Train Loss: 4.8277 | Train Perplexity: 124.927 Dev Loss: 5.8190 | Dev Perplexity: 336.649 ......................................................................................... Epoch: 40 | Time: 1m 18s Train Loss: 4.8214 | Train Perplexity: 124.136 Dev Loss: 5.8254 | Dev Perplexity: 338.790 ......................................................................................... Epoch: 41 | Time: 1m 17s Train Loss: 4.7666 | Train Perplexity: 117.515 Dev Loss: 5.8272 | Dev Perplexity: 339.422 ......................................................................................... Epoch: 42 | Time: 1m 17s Train Loss: 4.7465 | Train Perplexity: 115.185 Dev Loss: 5.8347 | Dev Perplexity: 341.961 ......................................................................................... Epoch: 43 | Time: 1m 18s Train Loss: 4.7185 | Train Perplexity: 112.002 Dev Loss: 5.8403 | Dev Perplexity: 343.890 ......................................................................................... Epoch: 44 | Time: 1m 18s Train Loss: 4.6992 | Train Perplexity: 109.863 Dev Loss: 5.8445 | Dev Perplexity: 345.327 ......................................................................................... Epoch: 45 | Time: 1m 18s Train Loss: 4.6552 | Train Perplexity: 105.133 Dev Loss: 5.8490 | Dev Perplexity: 346.876 ......................................................................................... Epoch: 46 | Time: 1m 17s Train Loss: 4.6396 | Train Perplexity: 103.507 Dev Loss: 5.8575 | Dev Perplexity: 349.859 ......................................................................................... Epoch: 47 | Time: 1m 17s Train Loss: 4.6115 | Train Perplexity: 100.631 Dev Loss: 5.8613 | Dev Perplexity: 351.196 ......................................................................................... Epoch: 48 | Time: 1m 18s Train Loss: 4.5860 | Train Perplexity: 98.106 Dev Loss: 5.8655 | Dev Perplexity: 352.655 ......................................................................................... Epoch: 49 | Time: 1m 19s Train Loss: 4.5769 | Train Perplexity: 97.214 Dev Loss: 5.8753 | Dev Perplexity: 356.114 ......................................................................................... Epoch: 50 | Time: 1m 19s Train Loss: 4.5624 | Train Perplexity: 95.810 Dev Loss: 5.8867 | Dev Perplexity: 360.214
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Save last model (overfitted)
torch.save(model.state_dict(), './last_model.pt')
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
7. Test
Function to convert indices to original text strings
def indices_to_text(src_or_trg, lang_field): assert src_or_trg.dim() == 1, f'{src_or_trg.dim()}' #(seq_len, ) assert isinstance(lang_field, torchtext.data.Field) assert hasattr(lang_field, 'vocab') return [lang_field.vocab.itos[t] for t in src_or_trg]
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Function to make predictions- Returns a list of examples, where each example is a (src, trg, prediction) tuple.
def predict(seq2seq_model, iterator): seq2seq_model.eval() out = [] with torch.no_grad(): for i, batch in enumerate(iterator): src = batch.src trg = batch.trg decoder_outputs = seq2seq_model(src, trg, teacher_forcing_ratio=0.) seq_len, batch_size, trg_vocab_size = decoder_outputs.size() # (b, s, trg_vocab) # Discard initial decoder input (index = 0) #decoder_outputs = decoder_outputs[:, 1:, :] decoder_predictions = decoder_outputs.argmax(dim=-1) # (b, s) for i, pred in enumerate(decoder_predictions): out.append((src[i], trg[i], pred)) return out
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Load best model
!ls -al # Load model model.load_state_dict(torch.load('./best_model.pt'))
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
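If the checkpoint was saved from a GPU session and is later loaded on a CPU-only machine, `torch.load` needs a `map_location`; a hedged sketch:

state_dict = torch.load('./best_model.pt', map_location=device)  # device falls back to 'cpu' when CUDA is absent
model.load_state_dict(state_dict)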
Make predictions
# Make prediction test_predictions = predict(model, dev_iterator) for i, prediction in enumerate(test_predictions): src, trg, pred = prediction src_text = indices_to_text(src, lang_field=KOREAN) trg_text = indices_to_text(trg, lang_field=ENGLISH) pred_text = indices_to_text(pred, lang_field=ENGLISH) print('source:\n', src_text) print('target:\n', trg_text) print('prediction:\n', pred_text) print('-' * 160) if i > 5: break
source: ['<sos>', '오랫동안', '이탈리아', '전역', '이', '곤혹', '스러웠다', '.', '<eos>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>'] target: ['<sos>', 'naples', ',', 'italy', '(', 'cnn', ')', 'for', 'years', ',', 'it', "'s", 'been', 'a', 'national', 'embarrassment', '.', '<eos>', '<pad>'] prediction: ['<unk>', 'the', 'was', 'the', '.', 'cnn', ')', '.', 'the', '.', '<eos>', '.', '.', '.', '.', '.', '.', '<eos>', '<eos>'] ---------------------------------------------------------------------------------------------------------------------------------------------------------------- source: ['<sos>', 'bank', '-', '<unk>', 'company', '은행', '지주회사', '<eos>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>'] target: ['<sos>', '“', 'gm', "'s", 'financing', 'arm', ',', 'gmac', ',', 'has', 'been', 'declared', 'a', 'bank', '-', 'holding', 'company', '.', '<eos>'] prediction: ['<unk>', 'the', 'the', 'to', '<unk>', 'to', '<unk>', '<unk>', '<unk>', '<unk>', 'been', '<unk>', '.', '<unk>', '.', '<unk>', '<unk>', '.', '<eos>'] ---------------------------------------------------------------------------------------------------------------------------------------------------------------- source: ['<sos>', '미', '연', '방법', '원', '이', '로비스트', '박동선', '씨', '의', '보석', '신청', '을', '기각', '했다', '.', '<eos>'] target: ['<sos>', 'u.s.', 'government', 'prosecutors', 'are', 'against', 'bail', 'being', 'set', 'for', 'south', 'korean', 'lobbyist', 'tongsun', 'park', '.', '<eos>', '<pad>', '<pad>'] prediction: ['<unk>', 'the', 'officials', 'has', ',', '<unk>', 'the', ',', 'recounted', 'up', 'the', 'korea', 'peninsula', '.', '.', '.', '<eos>', '<eos>', '.'] ---------------------------------------------------------------------------------------------------------------------------------------------------------------- source: ['<sos>', 'GM', '의', '자회사', '가', '연방', '준비', '제도로', '부터', '재정', '적', '지원', '을', '받게', '되었습니다', '.', '<eos>'] target: ['<sos>', 'a', 'division', 'of', 'general', 'motors', 'is', 'getting', 'some', 'financial', 'help', 'from', 'the', 'federal', 'reserve', ':', '<eos>', '<pad>', '<pad>'] prediction: ['<unk>', 'the', 'few', 'of', 'the', 'motors', 'to', 'a', 'to', 'of', 'crisis', '.', 'the', '.', '.', '.', '<eos>', '.', '.'] ---------------------------------------------------------------------------------------------------------------------------------------------------------------- source: ['<sos>', '검찰', '은', '명단', '의', '모든', '사람', '이', '공모자', '로', '여겨지고', '있지는', '않다고', '전', '했다', '.', '<eos>'] target: ['<sos>', 'a', 'prosecutor', 'said', 'not', 'everyone', 'on', 'the', 'list', 'was', 'considered', 'a', 'co', '-', '<unk>', '.', '<eos>', '<pad>', '<pad>'] prediction: ['<unk>', 'the', 'statement', 'said', 'that', 'the', ',', 'the', ',', 'of', 'a', 'to', '<unk>', '-', 'old', '.', '<eos>', '<eos>', '.'] ---------------------------------------------------------------------------------------------------------------------------------------------------------------- source: ['<sos>', '이', '날', '테러', '현장', '인근', '에', '주차', '한', '차량', '에서', '폭탄', '이', '발견', '됐다', '.', '<eos>'] target: ['<sos>', 'a', 'bomb', 'was', 'discovered', 'in', 'a', 'parked', 'car', 'near', 'the', 'site', 'of', 'the', 'attack', '.', '<eos>', '<pad>', '<pad>'] prediction: ['<unk>', 'the', 'police', 'exploded', 'a', 'in', 'the', '<unk>', ',', 'bomb', 'the', ',', ',', 'the', ',', ',', '<eos>', '<eos>', 'police'] 
---------------------------------------------------------------------------------------------------------------------------------------------------------------- source: ['<sos>', '그러나', '합성', '테스토스테론', '제', '는', '남성', '을', '위', '해서만', '승인', '을', '받은', '의약품', '이다', '.', '<eos>'] target: ['<sos>', '<unk>', 'testosterone', ',', 'however', ',', 'has', 'been', 'approved', 'only', 'for', 'use', 'with', 'men', '.', '<eos>', '<pad>', '<pad>', '<pad>'] prediction: ['<unk>', 'but', ',', ',', 'the', ',', 'the', 'been', 'to', 'to', 'condemn', 'the', '.', 'the', '.', '<eos>', '<eos>', '.', '.'] ----------------------------------------------------------------------------------------------------------------------------------------------------------------
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
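The prediction cell above relies on an `indices_to_text` helper that is not shown in this excerpt. A minimal sketch of what such a helper could look like is given below, assuming `lang_field` is a torchtext `Field` whose built vocabulary exposes an index-to-token list as `vocab.itos`; this is an illustration, not the repository's confirmed implementation.

```python
def indices_to_text(indices, lang_field):
    """Map a sequence of vocabulary indices back to their tokens.

    Assumes `lang_field` is a torchtext Field with a built vocabulary,
    so `lang_field.vocab.itos` is the index-to-token list.
    """
    return [lang_field.vocab.itos[int(idx)] for idx in indices]
```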
8. Download Model
!ls -al from google.colab import files print('Downloading models...') # Known bug; if using Firefox, a print statement in the same cell is necessary. files.download('./best_model.pt') files.download('./last_model.pt')
Downloading models...
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
9. Discussions
_____no_output_____
MIT
colab/NMT-Seq2SeqWithAttention.ipynb
drlego9/NMT-pytorch
Imports and Paths
import urllib3 http = urllib3.PoolManager() from urllib import request from bs4 import BeautifulSoup, Comment import pandas as pd from datetime import datetime # from shutil import copyfile # import time import json
_____no_output_____
MIT
notebooks/bgg_weekly_crawler.ipynb
MichoelSnow/BGG
Load in previous list of games
df_gms_lst = pd.read_csv('../data/bgg_top2000_2018-10-06.csv') df_gms_lst.columns metadata_dict = {"title": "BGG Top 2000", "subtitle": "Board Game Geek top 2000 games rankings", "description": "Board Game Geek top 2000 games rankings and other info", "id": "mseinstein/bgg_top2000", "licenses": [{"name": "CC-BY-SA-4.0"}], "resources":[ {"path": "bgg_top2000_2018-10-06.csv", "description": "Board Game Geek top 2000 games on 2018-10-06" } ] } with open('../data/kaggle/dataset-metadata.json', 'w') as fp: json.dump(metadata_dict, fp)
_____no_output_____
MIT
notebooks/bgg_weekly_crawler.ipynb
MichoelSnow/BGG
Get the IDs of the top 2000 board games
pg_gm_rnks = 'https://boardgamegeek.com/browse/boardgame/page/' def extract_gm_id(soup): rows = soup.find('div', {'id': 'collection'}).find_all('tr')[1:] id_list = [] for row in rows: id_list.append(int(row.find_all('a')[1]['href'].split('/')[2])) return id_list def top_2k_gms(pg_gm_rnks): gm_ids = [] for pg_num in range(1,21): pg = request.urlopen(f'{pg_gm_rnks}{str(pg_num)}') soup = BeautifulSoup(pg, 'html.parser') gm_ids += extract_gm_id(soup) return gm_ids gm_ids = top_2k_gms(pg_gm_rnks) len(gm_ids)
_____no_output_____
MIT
notebooks/bgg_weekly_crawler.ipynb
MichoelSnow/BGG
Extract the info for each game in the top 2k using the extracted game IDs
bs_pg = 'https://www.boardgamegeek.com/xmlapi2/' bs_pg_gm = f'{bs_pg}thing?type=boardgame&stats=1&ratingcomments=1&page=1&pagesize=10&id=' def extract_game_item(item): gm_dict = {} field_int = ['yearpublished', 'minplayers', 'maxplayers', 'playingtime', 'minplaytime', 'maxplaytime', 'minage'] field_categ = ['boardgamecategory', 'boardgamemechanic', 'boardgamefamily','boardgamedesigner', 'boardgameartist', 'boardgamepublisher'] field_rank = [x['friendlyname'] for x in item.find_all('rank')] field_stats = ['usersrated', 'average', 'bayesaverage', 'stddev', 'median', 'owned', 'trading', 'wanting', 'wishing', 'numcomments', 'numweights', 'averageweight'] gm_dict['name'] = item.find('name')['value'] gm_dict['id'] = item['id'] gm_dict['num_of_rankings'] = int(item.find('comments')['totalitems']) for i in field_int: field_val = item.find(i) if field_val is None: gm_dict[i] = -1 else: gm_dict[i] = int(field_val['value']) for i in field_categ: gm_dict[i] = [x['value'] for x in item.find_all('link',{'type':i})] for i in field_rank: field_val = item.find('rank',{'friendlyname':i}) if field_val is None or field_val['value'] == 'Not Ranked': gm_dict[i.replace(' ','')] = -1 else: gm_dict[i.replace(' ','')] = int(field_val['value']) for i in field_stats: field_val = item.find(i) if field_val is None: gm_dict[i] = -1 else: gm_dict[i] = float(field_val['value']) return gm_dict len(gm_ids) gm_list = [] idx_split = 4 idx_size = int(len(gm_ids)/idx_split) for i in range(idx_split): idx = str(gm_ids[i*idx_size:(i+1)*idx_size]).replace(' ','')[1:-1] pg = request.urlopen(f'{bs_pg_gm}{str(idx)}') item_ct = 0 xsoup = BeautifulSoup(pg, 'xml') # while item_ct < 500: # xsoup = BeautifulSoup(pg, 'xml') # item_ct = len(xsoup.find_all('item')) gm_list += [extract_game_item(x) for x in xsoup.find_all('item')] # break df2 = pd.DataFrame(gm_list) df2.shape df2.head() df2.loc[df2["Children'sGameRank"].notnull(),:].head().T df2.isnull().sum() gm_list = [] idx_split = 200 idx_size = int(len(gm_ids)/idx_split) for i in range(idx_split): idx = str(gm_ids[i*idx_size:(i+1)*idx_size]).replace(' ','')[1:-1] break # pg = request.urlopen(f'{bs_pg_gm}{str(idx)}') # item_ct = 0 # xsoup = BeautifulSoup(pg, 'xml') # # while item_ct < 500: # # xsoup = BeautifulSoup(pg, 'xml') # # item_ct = len(xsoup.find_all('item')) # gm_list += [extract_game_item(x) for x in xsoup.find_all('item')] # # break # df2 = pd.DataFrame(gm_list) # df2.shape idx def create_df_gm_ranks(gm_ids, bs_pg_gm): gm_list = [] idx_split = 4 idx_size = int(len(gm_ids)/idx_split) for i in range(idx_split): idx = str(gm_ids[i*idx_size:(i+1)*idx_size]).replace(' ','')[1:-1] pg = request.urlopen(f'{bs_pg_gm}{str(idx)}') xsoup = BeautifulSoup(pg, 'xml') gm_list += [extract_game_item(x) for x in xsoup.find_all('item')] df = pd.DataFrame(gm_list) return df df = create_df_gm_ranks(gm_ids, bs_pg_gm) df2.to_csv(f'../data/kaggle/{str(datetime.now().date())}_bgg_top{len(gm_ids)}.csv', index=False) with open('../data/kaggle/dataset-metadata.json', 'rb') as f: meta_dict = json.load(f) meta_dict['resources'].append({ 'path': f'{str(datetime.now().date())}_bgg_top{len(gm_ids)}.csv', 'description': f'Board Game Geek top 2000 games on {str(datetime.now().date())}' }) meta_dict meta_dict['title'] = 'Board Game Geek (BGG) Top 2000' meta_dict['resources'][-1]['path'] = '2018-12-15_bgg_top2000.csv' meta_dict['resources'][-1]['description']= 'Board Game Geek top 2000 games on 2018-12-15' with open('../data/kaggle/dataset-metadata.json', 'w') as fp: json.dump(meta_dict, fp)
_____no_output_____
MIT
notebooks/bgg_weekly_crawler.ipynb
MichoelSnow/BGG
Code for kaggle: `kaggle datasets version -m "week of 2018-10-20" -p .\ -d`
meta_dict gm_list = [] idx_split = 4 idx_size = int(len(gm_ids)/idx_split) for i in range(idx_split): idx = str(gm_ids[i*idx_size:(i+1)*idx_size]).replace(' ','')[1:-1] break idx2 = '174430,161936,182028,167791,12333,187645,169786,220308,120677,193738,84876,173346,180263,115746,3076,102794,205637' pg = request.urlopen(f'{bs_pg_gm}{str(idx)}') xsoup = BeautifulSoup(pg, 'xml') aa = xsoup.find_all('item') len(aa) http.urlopen() r = http.request('GET', f'{bs_pg_gm}{str(idx)}') xsoup2 = BeautifulSoup(r.data, 'xml') bb = xsoup.find_all('item') len(bb)
_____no_output_____
MIT
notebooks/bgg_weekly_crawler.ipynb
MichoelSnow/BGG
Artificial Intelligence Nanodegree Voice User Interfaces Project: Speech Recognition with Neural Networks---In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'(IMPLEMENTATION)'** in the header indicate that the following blocks of code will require additional functionality which you must provide. Please be sure to read the instructions carefully! > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.The rubric contains _optional_ "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.--- Introduction In this notebook, you will build a deep neural network that functions as part of an end-to-end automatic speech recognition (ASR) pipeline! Your completed pipeline will accept raw audio as input and return a predicted transcription of the spoken language. The full pipeline is summarized in the figure below.- **STEP 1** is a pre-processing step that converts raw audio to one of two feature representations that are commonly used for ASR. - **STEP 2** is an acoustic model which accepts audio features as input and returns a probability distribution over all potential transcriptions. After learning about the basic types of neural networks that are often used for acoustic modeling, you will engage in your own investigations, to design your own acoustic model!- **STEP 3** in the pipeline takes the output from the acoustic model and returns a predicted transcription. Feel free to use the links below to navigate the notebook:- [The Data](thedata)- [**STEP 1**](step1): Acoustic Features for Speech Recognition- [**STEP 2**](step2): Deep Neural Networks for Acoustic Modeling - [Model 0](model0): RNN - [Model 1](model1): RNN + TimeDistributed Dense - [Model 2](model2): CNN + RNN + TimeDistributed Dense - [Model 3](model3): Deeper RNN + TimeDistributed Dense - [Model 4](model4): Bidirectional RNN + TimeDistributed Dense - [Models 5+](model5) - [Compare the Models](compare) - [Final Model](final)- [**STEP 3**](step3): Obtain Predictions The DataWe begin by investigating the dataset that will be used to train and evaluate your pipeline.
[LibriSpeech](http://www.danielpovey.com/files/2015_icassp_librispeech.pdf) is a large corpus of English-read speech, designed for training and evaluating models for ASR. The dataset contains 1000 hours of speech derived from audiobooks. We will work with a small subset in this project, since larger-scale data would take a long while to train. However, after completing this project, if you are interested in exploring further, you are encouraged to work with more of the data that is provided [online](http://www.openslr.org/12/).In the code cells below, you will use the `vis_train_features` module to visualize a training example. The supplied argument `index=0` tells the module to extract the first example in the training set. (You are welcome to change `index=0` to point to a different training example, if you like, but please **DO NOT** amend any other code in the cell.) The returned variables are:- `vis_text` - transcribed text (label) for the training example.- `vis_raw_audio` - raw audio waveform for the training example.- `vis_mfcc_feature` - mel-frequency cepstral coefficients (MFCCs) for the training example.- `vis_spectrogram_feature` - spectrogram for the training example. - `vis_audio_path` - the file path to the training example.
%load_ext autoreload %autoreload 1 %pip install python_speech_features !rm -rf AIND-VUI-Capstone/ !git clone https://github.com/RomansWorks/AIND-VUI-Capstone !cp -r ./AIND-VUI-Capstone/* . !wget https://filebin.net/archive/s14yfd2p3q0sj1r2/zip !unzip zip !7z x capstone-ds.zip !mv aind-vui-capstone-ds-processed/* . !rm zip capstone-ds.* from data_generator import vis_train_features # extract label and audio features for a single training example vis_text, vis_raw_audio, vis_mfcc_feature, vis_spectrogram_feature, vis_audio_path = vis_train_features()
There are 2136 total training examples.
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
The following code cell visualizes the audio waveform for your chosen example, along with the corresponding transcript. You also have the option to play the audio in the notebook!
from IPython.display import Markdown, display from data_generator import vis_train_features, plot_raw_audio from IPython.display import Audio %matplotlib inline # plot audio signal plot_raw_audio(vis_raw_audio) # print length of audio signal display(Markdown('**Shape of Audio Signal** : ' + str(vis_raw_audio.shape))) # print transcript corresponding to audio clip display(Markdown('**Transcript** : ' + str(vis_text))) # play the audio file Audio(vis_audio_path)
_____no_output_____
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
STEP 1: Acoustic Features for Speech RecognitionFor this project, you won't use the raw audio waveform as input to your model. Instead, we provide code that first performs a pre-processing step to convert the raw audio to a feature representation that has historically proven successful for ASR models. Your acoustic model will accept the feature representation as input.In this project, you will explore two possible feature representations. _After completing the project_, if you'd like to read more about deep learning architectures that can accept raw audio input, you are encouraged to explore this [research paper](https://pdfs.semanticscholar.org/a566/cd4a8623d661a4931814d9dffc72ecbf63c4.pdf). SpectrogramsThe first option for an audio feature representation is the [spectrogram](https://www.youtube.com/watch?v=_FatxGN3vAM). In order to complete this project, you will **not** need to dig deeply into the details of how a spectrogram is calculated; but, if you are curious, the code for calculating the spectrogram was borrowed from [this repository](https://github.com/baidu-research/ba-dls-deepspeech). The implementation appears in the `utils.py` file in your repository.The code that we give you returns the spectrogram as a 2D tensor, where the first (_vertical_) dimension indexes time, and the second (_horizontal_) dimension indexes frequency. To speed the convergence of your algorithm, we have also normalized the spectrogram. (You can see this quickly in the visualization below by noting that the mean value hovers around zero, and most entries in the tensor assume values close to zero.)
from data_generator import plot_spectrogram_feature # plot normalized spectrogram plot_spectrogram_feature(vis_spectrogram_feature) # print shape of spectrogram display(Markdown('**Shape of Spectrogram** : ' + str(vis_spectrogram_feature.shape)))
_____no_output_____
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
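For intuition, a normalized log spectrogram along the lines described above can be sketched with `scipy.signal.spectrogram`; this is only an illustrative sketch, and the window/step sizes are assumed values rather than the exact parameters of the `utils.py` routine borrowed from ba-dls-deepspeech.

```python
import numpy as np
from scipy import signal

def normalized_spectrogram(audio, sample_rate=16000, window_ms=20, step_ms=10, eps=1e-14):
    # Short-time Fourier transform: rows index time windows, columns index frequency bins
    nperseg = int(sample_rate * window_ms / 1000)
    noverlap = nperseg - int(sample_rate * step_ms / 1000)
    _, _, spec = signal.spectrogram(audio, fs=sample_rate, nperseg=nperseg, noverlap=noverlap)
    feats = np.log(spec.T.astype(np.float64) + eps)        # log compression, shape (time, freq)
    return (feats - feats.mean()) / (feats.std() + eps)    # normalize to ~zero mean, unit variance
```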
Mel-Frequency Cepstral Coefficients (MFCCs)The second option for an audio feature representation is [MFCCs](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum). You do **not** need to dig deeply into the details of how MFCCs are calculated, but if you would like more information, you are welcome to peruse the [documentation](https://github.com/jameslyons/python_speech_features) of the `python_speech_features` Python package. Just as with the spectrogram features, the MFCCs are normalized in the supplied code.The main idea behind MFCC features is the same as spectrogram features: at each time window, the MFCC feature yields a feature vector that characterizes the sound within the window. Note that the MFCC feature is much lower-dimensional than the spectrogram feature, which could help an acoustic model to avoid overfitting to the training dataset.
from data_generator import plot_mfcc_feature # plot normalized MFCC plot_mfcc_feature(vis_mfcc_feature) # print shape of MFCC display(Markdown('**Shape of MFCC** : ' + str(vis_mfcc_feature.shape)))
_____no_output_____
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
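A comparable MFCC pipeline can be sketched with the `python_speech_features` package installed earlier; the 13-coefficient default below is an assumption, since the exact parameters used by the project's `data_generator` are not shown here.

```python
import numpy as np
from python_speech_features import mfcc

def normalized_mfcc(audio, sample_rate=16000, numcep=13, eps=1e-14):
    feats = mfcc(audio, samplerate=sample_rate, numcep=numcep)  # shape (time, numcep)
    return (feats - feats.mean()) / (feats.std() + eps)         # normalize like the spectrogram
```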
When you construct your pipeline, you will be able to choose to use either spectrogram or MFCC features. If you would like to see different implementations that make use of MFCCs and/or spectrograms, please check out the links below:- This [repository](https://github.com/baidu-research/ba-dls-deepspeech) uses spectrograms.- This [repository](https://github.com/mozilla/DeepSpeech) uses MFCCs.- This [repository](https://github.com/buriburisuri/speech-to-text-wavenet) also uses MFCCs.- This [repository](https://github.com/pannous/tensorflow-speech-recognition/blob/master/speech_data.py) experiments with raw audio, spectrograms, and MFCCs as features. STEP 2: Deep Neural Networks for Acoustic ModelingIn this section, you will experiment with various neural network architectures for acoustic modeling. You will begin by training five relatively simple architectures. **Model 0** is provided for you. You will write code to implement **Models 1**, **2**, **3**, and **4**. If you would like to experiment further, you are welcome to create and train more models under the **Models 5+** heading. All models will be specified in the `sample_models.py` file. After importing the `sample_models` module, you will train your architectures in the notebook.After experimenting with the five simple architectures, you will have the opportunity to compare their performance. Based on your findings, you will construct a deeper architecture that is designed to outperform all of the shallow models.For your convenience, we have designed the notebook so that each model can be specified and trained on separate occasions. That is, say you decide to take a break from the notebook after training **Model 1**. Then, you need not re-execute all prior code cells in the notebook before training **Model 2**. You need only re-execute the code cell below, that is marked with **`RUN THIS CODE CELL IF YOU ARE RESUMING THE NOTEBOOK AFTER A BREAK`**, before transitioning to the code cells corresponding to **Model 2**.
##################################################################### # RUN THIS CODE CELL IF YOU ARE RESUMING THE NOTEBOOK AFTER A BREAK # ##################################################################### # allocate 50% of GPU memory (if you like, feel free to change this) # from keras.backend.tensorflow_backend import set_session import tensorflow as tf # config = tf.ConfigProto() # config.gpu_options.per_process_gpu_memory_fraction = 0.5 # set_session(tf.Session(config=config)) from tensorflow.keras.optimizers import Adam, SGD # watch for any changes in the sample_models module, and reload it automatically %load_ext autoreload %autoreload 2 # import NN architectures for speech recognition from sample_models import * # import function for training acoustic model from train_utils import train_model
The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
Model 0: RNNGiven their effectiveness in modeling sequential data, the first acoustic model you will use is an RNN. As shown in the figure below, the RNN we supply to you will take the time sequence of audio features as input.At each time step, the speaker pronounces one of 28 possible characters, including each of the 26 letters in the English alphabet, along with a space character (" "), and an apostrophe (').The output of the RNN at each time step is a vector of probabilities with 29 entries, where the $i$-th entry encodes the probability that the $i$-th character is spoken in the time sequence. (The extra 29th character is an empty "character" used to pad training examples within batches containing uneven lengths.) If you would like to peek under the hood at how characters are mapped to indices in the probability vector, look at the `char_map.py` file in the repository. The figure below shows an equivalent, rolled depiction of the RNN that shows the output layer in greater detail. The model has already been specified for you in Keras. To import it, you need only run the code cell below.
model_0 = simple_rnn_model(input_dim=161) # change to 13 if you would like to use MFCC features
Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= the_input (InputLayer) [(None, None, 161)] 0 rnn (GRU) (None, None, 29) 16704 softmax (Activation) (None, None, 29) 0 ================================================================= Total params: 16,704 Trainable params: 16,704 Non-trainable params: 0 _________________________________________________________________ None
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
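Consistent with the summary above (a single 29-unit GRU feeding a softmax, with 16,704 parameters), `simple_rnn_model` can be sketched as below; treat this as an illustration of the described architecture rather than the repository's verbatim code.

```python
from tensorflow.keras.layers import Input, GRU, Activation
from tensorflow.keras.models import Model

def simple_rnn_model_sketch(input_dim, output_dim=29):
    input_data = Input(name='the_input', shape=(None, input_dim))
    rnn = GRU(output_dim, return_sequences=True, name='rnn')(input_data)
    y_pred = Activation('softmax', name='softmax')(rnn)
    model = Model(inputs=input_data, outputs=y_pred)
    model.output_length = lambda x: x  # a pure RNN preserves the temporal length
    return model
```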
As explored in the lesson, you will train the acoustic model with the [CTC loss](http://www.cs.toronto.edu/~graves/icml_2006.pdf) criterion. Custom loss functions take a bit of hacking in Keras, and so we have implemented the CTC loss function for you, so that you can focus on trying out as many deep learning architectures as possible :). If you'd like to peek at the implementation details, look at the `add_ctc_loss` function within the `train_utils.py` file in the repository.To train your architecture, you will use the `train_model` function within the `train_utils` module; it has already been imported in one of the above code cells. The `train_model` function takes three **required** arguments:- `input_to_softmax` - a Keras model instance.- `pickle_path` - the name of the pickle file where the loss history will be saved.- `save_model_path` - the name of the HDF5 file where the model will be saved.If we have already supplied values for `input_to_softmax`, `pickle_path`, and `save_model_path`, please **DO NOT** modify these values. There are several **optional** arguments that allow you to have more control over the training process. You are welcome to, but not required to, supply your own values for these arguments.- `minibatch_size` - the size of the minibatches that are generated while training the model (default: `20`).- `spectrogram` - Boolean value dictating whether spectrogram (`True`) or MFCC (`False`) features are used for training (default: `True`).- `mfcc_dim` - the size of the feature dimension to use when generating MFCC features (default: `13`).- `optimizer` - the Keras optimizer used to train the model (default: `SGD(lr=0.02, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)`). - `epochs` - the number of epochs to use to train the model (default: `20`). If you choose to modify this parameter, make sure that it is *at least* 20.- `verbose` - controls the verbosity of the training output in the `model.fit_generator` method (default: `1`).- `sort_by_duration` - Boolean value dictating whether the training and validation sets are sorted by (increasing) duration before the start of the first epoch (default: `False`).The `train_model` function defaults to using spectrogram features; if you choose to use these features, note that the acoustic model in `simple_rnn_model` should have `input_dim=161`. Otherwise, if you choose to use MFCC features, the acoustic model should have `input_dim=13`.We have chosen to use `GRU` units in the supplied RNN. If you would like to experiment with `LSTM` or `SimpleRNN` cells, feel free to do so here. If you change the `GRU` units to `SimpleRNN` cells in `simple_rnn_model`, you may notice that the loss quickly becomes undefined (`nan`) - you are strongly encouraged to check this for yourself! This is due to the [exploding gradients problem](http://www.wildml.com/2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients/). We have already implemented [gradient clipping](https://arxiv.org/pdf/1211.5063.pdf) in your optimizer to help you avoid this issue.__IMPORTANT NOTE:__ If you notice that your gradient has exploded in any of the models below, feel free to explore more with gradient clipping (the `clipnorm` argument in your optimizer) or swap out any `SimpleRNN` cells for `LSTM` or `GRU` cells. You can also try restarting the kernel to restart the training process.
from tensorflow.keras.optimizers import Adam train_model(input_to_softmax=model_0, pickle_path='model_0.pickle', save_model_path='model_0.h5', minibatch_size=25, optimizer=Adam(learning_rate=0.1, clipnorm=5), #SGD(lr=0.002, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5), spectrogram=True) # change to False if you would like to use MFCC features
/content/train_utils.py:77: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators. callbacks=[checkpointer], verbose=verbose)
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
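For reference, wiring CTC loss into a Keras acoustic model is usually done with `keras.backend.ctc_batch_cost`, roughly as sketched below; the tensor names are assumptions and the project's own `add_ctc_loss` in `train_utils.py` may differ in detail.

```python
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Lambda
from tensorflow.keras.models import Model

def ctc_lambda_func(args):
    y_pred, labels, input_length, label_length = args
    return K.ctc_batch_cost(labels, y_pred, input_length, label_length)

def add_ctc_loss_sketch(input_to_softmax):
    # Extra inputs carrying the transcriptions and the true sequence lengths
    labels = Input(name='the_labels', shape=(None,), dtype='float32')
    input_lengths = Input(name='input_length', shape=(1,), dtype='int64')
    label_lengths = Input(name='label_length', shape=(1,), dtype='int64')
    # Map acoustic-feature lengths to softmax-output lengths via the model's output_length
    output_lengths = Lambda(input_to_softmax.output_length)(input_lengths)
    loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')(
        [input_to_softmax.output, labels, output_lengths, label_lengths])
    return Model(inputs=[input_to_softmax.input, labels, input_lengths, label_lengths],
                 outputs=loss_out)
```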
(IMPLEMENTATION) Model 1: RNN + TimeDistributed DenseRead about the [TimeDistributed](https://keras.io/layers/wrappers/) wrapper and the [BatchNormalization](https://keras.io/layers/normalization/) layer in the Keras documentation. For your next architecture, you will add [batch normalization](https://arxiv.org/pdf/1510.01378.pdf) to the recurrent layer to reduce training times. The `TimeDistributed` layer will be used to find more complex patterns in the dataset. The unrolled snapshot of the architecture is depicted below.The next figure shows an equivalent, rolled depiction of the RNN that shows the (`TimeDistrbuted`) dense and output layers in greater detail. Use your research to complete the `rnn_model` function within the `sample_models.py` file. The function should specify an architecture that satisfies the following requirements:- The first layer of the neural network should be an RNN (`SimpleRNN`, `LSTM`, or `GRU`) that takes the time sequence of audio features as input. We have added `GRU` units for you, but feel free to change `GRU` to `SimpleRNN` or `LSTM`, if you like!- Whereas the architecture in `simple_rnn_model` treated the RNN output as the final layer of the model, you will use the output of your RNN as a hidden layer. Use `TimeDistributed` to apply a `Dense` layer to each of the time steps in the RNN output. Ensure that each `Dense` layer has `output_dim` units.Use the code cell below to load your model into the `model_1` variable. Use a value for `input_dim` that matches your chosen audio features, and feel free to change the values for `units` and `activation` to tweak the behavior of your recurrent layer.
model_1 = rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features units=200, activation='relu')
_____no_output_____
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
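One possible way to satisfy these requirements (GRU, then batch normalization, then a `TimeDistributed` dense layer and softmax) is sketched below; it is a reference sketch of the described architecture, not the only valid answer for `sample_models.py`.

```python
from tensorflow.keras.layers import (Input, GRU, BatchNormalization,
                                     TimeDistributed, Dense, Activation)
from tensorflow.keras.models import Model

def rnn_model_sketch(input_dim, units, activation, output_dim=29):
    input_data = Input(name='the_input', shape=(None, input_dim))
    rnn = GRU(units, activation=activation, return_sequences=True, name='rnn')(input_data)
    bn_rnn = BatchNormalization(name='bn_rnn')(rnn)           # batch normalization on the RNN output
    time_dense = TimeDistributed(Dense(output_dim))(bn_rnn)   # dense layer applied at every time step
    y_pred = Activation('softmax', name='softmax')(time_dense)
    model = Model(inputs=input_data, outputs=y_pred)
    model.output_length = lambda x: x
    return model
```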
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_1.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_1.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
train_model(input_to_softmax=model_1, pickle_path='model_1.pickle', save_model_path='model_1.h5', optimizer=Adam(clipvalue=0.5, clipnorm=1.0), spectrogram=True) # change to False if you would like to use MFCC features
_____no_output_____
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
(IMPLEMENTATION) Model 2: CNN + RNN + TimeDistributed DenseThe architecture in `cnn_rnn_model` adds an additional level of complexity, by introducing a [1D convolution layer](https://keras.io/layers/convolutional/conv1d). This layer incorporates many arguments that can be (optionally) tuned when calling the `cnn_rnn_model` module. We provide sample starting parameters, which you might find useful if you choose to use spectrogram audio features. If you instead want to use MFCC features, these arguments will have to be tuned. Note that the current architecture only supports values of `'same'` or `'valid'` for the `conv_border_mode` argument.When tuning the parameters, be careful not to choose settings that make the convolutional layer overly small. If the temporal length of the CNN layer is shorter than the length of the transcribed text label, your code will throw an error.Before running the code cell below, you must modify the `cnn_rnn_model` function in `sample_models.py`. Please add batch normalization to the recurrent layer, and provide the same `TimeDistributed` layer as before.
model_2 = cnn_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features filters=200, kernel_size=11, conv_stride=2, conv_border_mode='valid', units=100)
_____no_output_____
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
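The convolutional front end described here can be sketched as below; the recurrent batch normalization and the `TimeDistributed` head are the additions requested in the text, and the whole block is one possible arrangement rather than the canonical solution. It reuses `cnn_output_length` from `sample_models.py` to track how the convolution shortens the temporal dimension.

```python
from tensorflow.keras.layers import (Input, Conv1D, BatchNormalization, GRU,
                                     TimeDistributed, Dense, Activation)
from tensorflow.keras.models import Model
from sample_models import cnn_output_length

def cnn_rnn_model_sketch(input_dim, filters, kernel_size, conv_stride,
                         conv_border_mode, units, output_dim=29):
    input_data = Input(name='the_input', shape=(None, input_dim))
    conv_1d = Conv1D(filters, kernel_size, strides=conv_stride,
                     padding=conv_border_mode, activation='relu',
                     name='conv1d')(input_data)
    bn_cnn = BatchNormalization(name='bn_conv_1d')(conv_1d)
    rnn = GRU(units, activation='relu', return_sequences=True, name='rnn')(bn_cnn)
    bn_rnn = BatchNormalization(name='bn_rnn')(rnn)           # requested addition
    time_dense = TimeDistributed(Dense(output_dim))(bn_rnn)   # requested addition
    y_pred = Activation('softmax', name='softmax')(time_dense)
    model = Model(inputs=input_data, outputs=y_pred)
    # The convolution changes the temporal length, so output_length is no longer the identity
    model.output_length = lambda x: cnn_output_length(x, kernel_size, conv_border_mode, conv_stride)
    return model
```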
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_2.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_2.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
train_model(input_to_softmax=model_2, pickle_path='model_2.pickle', save_model_path='model_2.h5', optimizer=Adam(clipvalue=0.5), spectrogram=True) # change to False if you would like to use MFCC features
_____no_output_____
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
(IMPLEMENTATION) Model 3: Deeper RNN + TimeDistributed DenseReview the code in `rnn_model`, which makes use of a single recurrent layer. Now, specify an architecture in `deep_rnn_model` that utilizes a variable number `recur_layers` of recurrent layers. The figure below shows the architecture that should be returned if `recur_layers=2`. In the figure, the output sequence of the first recurrent layer is used as input for the next recurrent layer.Feel free to change the supplied values of `units` to whatever you think performs best. You can change the value of `recur_layers`, as long as your final value is greater than 1. (As a quick check that you have implemented the additional functionality in `deep_rnn_model` correctly, make sure that the architecture that you specify here is identical to `rnn_model` if `recur_layers=1`.)
model_3 = deep_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features units=100, recur_layers=2)
_____no_output_____
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
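The variable-depth requirement can be met by stacking recurrent layers (each followed by batch normalization) in a loop, as in this sketch; with `recur_layers=1` it reduces to the single-layer architecture, matching the sanity check suggested above.

```python
from tensorflow.keras.layers import (Input, GRU, BatchNormalization,
                                     TimeDistributed, Dense, Activation)
from tensorflow.keras.models import Model

def deep_rnn_model_sketch(input_dim, units, recur_layers, output_dim=29):
    input_data = Input(name='the_input', shape=(None, input_dim))
    layer = input_data
    for i in range(recur_layers):
        layer = GRU(units, activation='relu', return_sequences=True,
                    name='rnn_{}'.format(i + 1))(layer)
        layer = BatchNormalization(name='bn_rnn_{}'.format(i + 1))(layer)
    time_dense = TimeDistributed(Dense(output_dim))(layer)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    model = Model(inputs=input_data, outputs=y_pred)
    model.output_length = lambda x: x
    return model
```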
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_3.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_3.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
train_model(input_to_softmax=model_3, pickle_path='model_3.pickle', save_model_path='model_3.h5', optimizer=Adam(clipvalue=0.5), spectrogram=True) # change to False if you would like to use MFCC features
_____no_output_____
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
(IMPLEMENTATION) Model 4: Bidirectional RNN + TimeDistributed DenseRead about the [Bidirectional](https://keras.io/layers/wrappers/) wrapper in the Keras documentation. For your next architecture, you will specify an architecture that uses a single bidirectional RNN layer, before a (`TimeDistributed`) dense layer. The added value of a bidirectional RNN is described well in [this paper](http://www.cs.toronto.edu/~hinton/absps/DRNN_speech.pdf).> One shortcoming of conventional RNNs is that they are only able to make use of previous context. In speech recognition, where whole utterances are transcribed at once, there is no reason not to exploit future context as well. Bidirectional RNNs (BRNNs) do this by processing the data in both directions with two separate hidden layers which are then fed forwards to the same output layer.Before running the code cell below, you must complete the `bidirectional_rnn_model` function in `sample_models.py`. Feel free to use `SimpleRNN`, `LSTM`, or `GRU` units. When specifying the `Bidirectional` wrapper, use `merge_mode='concat'`.
model_4 = bidirectional_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features units=100)
Model: "model_17" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= the_input (InputLayer) [(None, None, 161)] 0 bidi (Bidirectional) (None, None, 200) 157800 batch_normalization_9 (Batc (None, None, 200) 800 hNormalization) time_distributed_9 (TimeDis (None, None, 29) 5829 tributed) softmax (Activation) (None, None, 29) 0 ================================================================= Total params: 164,429 Trainable params: 164,029 Non-trainable params: 400 _________________________________________________________________ None
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
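Matching the summary printed above (a bidirectional GRU with `merge_mode='concat'`, batch normalization, and a `TimeDistributed` dense head), one possible sketch of `bidirectional_rnn_model` is:

```python
from tensorflow.keras.layers import (Input, GRU, Bidirectional, BatchNormalization,
                                     TimeDistributed, Dense, Activation)
from tensorflow.keras.models import Model

def bidirectional_rnn_model_sketch(input_dim, units, output_dim=29):
    input_data = Input(name='the_input', shape=(None, input_dim))
    bidir_rnn = Bidirectional(GRU(units, return_sequences=True),
                              merge_mode='concat', name='bidi')(input_data)
    bn_rnn = BatchNormalization()(bidir_rnn)
    time_dense = TimeDistributed(Dense(output_dim))(bn_rnn)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    model = Model(inputs=input_data, outputs=y_pred)
    model.output_length = lambda x: x  # bidirectional RNNs also preserve the temporal length
    return model
```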
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_4.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_4.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
train_model(input_to_softmax=model_4, pickle_path='model_4.pickle', save_model_path='model_4.h5', optimizer=Adam(clipvalue=0.5), spectrogram=True) # change to False if you would like to use MFCC features
/content/train_utils.py:77: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators. callbacks=[checkpointer], verbose=verbose)
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
(OPTIONAL IMPLEMENTATION) Models 5+If you would like to try out more architectures than the ones above, please use the code cell below. Please continue to follow the same convention for saving the models; for the $i$-th sample model, please save the loss at **`model_i.pickle`** and saving the trained model at **`model_i.h5`**.
## (Optional) TODO: Try out some more models! ### Feel free to use as many code cells as needed. model_5 = dilated_double_cnn_rnn_model(input_dim=161, filters=200, kernel_size=6, conv_border_mode='valid', units=200, dilation=2) train_model(input_to_softmax=model_5, pickle_path='model_5.pickle', save_model_path='model_5.h5', optimizer=Adam(clipvalue=0.5, amsgrad=True), spectrogram=True)
Model: "model_11" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= the_input (InputLayer) [(None, None, 161)] 0 conv_1d_1 (Conv1D) (None, None, 200) 193400 conv_1d_2 (Conv1D) (None, None, 100) 240100 rnn (GRU) (None, None, 200) 181200 batch_normalization_6 (Batc (None, None, 200) 800 hNormalization) time_distributed_6 (TimeDis (None, None, 29) 5829 tributed) softmax (Activation) (None, None, 29) 0 ================================================================= Total params: 621,329 Trainable params: 620,929 Non-trainable params: 400 _________________________________________________________________ None Epoch 1/20
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
Compare the ModelsExecute the code cell below to evaluate the performance of the drafted deep learning models. The training and validation loss are plotted for each model.
from glob import glob import numpy as np import _pickle as pickle import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline sns.set_style(style='white') # obtain the paths for the saved model history all_pickles = sorted(glob("results/*.pickle")) # extract the name of each model model_names = [item[8:-7] for item in all_pickles] # extract the loss history for each model valid_loss = [pickle.load( open( i, "rb" ) )['val_loss'] for i in all_pickles] train_loss = [pickle.load( open( i, "rb" ) )['loss'] for i in all_pickles] # save the number of epochs used to train each model num_epochs = [len(valid_loss[i]) for i in range(len(valid_loss))] fig = plt.figure(figsize=(16,5)) # plot the training loss vs. epoch for each model ax1 = fig.add_subplot(121) for i in range(len(all_pickles)): ax1.plot(np.linspace(1, num_epochs[i], num_epochs[i]), train_loss[i], label=model_names[i]) # clean up the plot ax1.legend() ax1.set_xlim([1, max(num_epochs)]) plt.xlabel('Epoch') plt.ylabel('Training Loss') # plot the validation loss vs. epoch for each model ax2 = fig.add_subplot(122) for i in range(len(all_pickles)): ax2.plot(np.linspace(1, num_epochs[i], num_epochs[i]), valid_loss[i], label=model_names[i]) # clean up the plot ax2.legend() ax2.set_xlim([1, max(num_epochs)]) plt.xlabel('Epoch') plt.ylabel('Validation Loss') plt.show()
_____no_output_____
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
__Question 1:__ Use the plot above to analyze the performance of each of the attempted architectures. Which performs best? Provide an explanation regarding why you think some models perform better than others. __Answer:__ (IMPLEMENTATION) Final ModelNow that you've tried out many sample models, use what you've learned to draft your own architecture! While your final acoustic model should not be identical to any of the architectures explored above, you are welcome to merely combine the explored layers above into a deeper architecture. It is **NOT** necessary to include new layer types that were not explored in the notebook.However, if you would like some ideas for even more layer types, check out these ideas for some additional, optional extensions to your model:- If you notice your model is overfitting to the training dataset, consider adding **dropout**! To add dropout to [recurrent layers](https://faroit.github.io/keras-docs/1.0.2/layers/recurrent/), pay special attention to the `dropout_W` and `dropout_U` arguments. This [paper](http://arxiv.org/abs/1512.05287) may also provide some interesting theoretical background.- If you choose to include a convolutional layer in your model, you may get better results by working with **dilated convolutions**. If you choose to use dilated convolutions, make sure that you are able to accurately calculate the length of the acoustic model's output in the `model.output_length` lambda function. You can read more about dilated convolutions in Google's [WaveNet paper](https://arxiv.org/abs/1609.03499). For an example of a speech-to-text system that makes use of dilated convolutions, check out this GitHub [repository](https://github.com/buriburisuri/speech-to-text-wavenet). You can work with dilated convolutions [in Keras](https://keras.io/layers/convolutional/) by paying special attention to the `padding` argument when you specify a convolutional layer.- If your model makes use of convolutional layers, why not also experiment with adding **max pooling**? Check out [this paper](https://arxiv.org/pdf/1701.02720.pdf) for example architecture that makes use of max pooling in an acoustic model.- So far, you have experimented with a single bidirectional RNN layer. Consider stacking the bidirectional layers, to produce a [deep bidirectional RNN](https://www.cs.toronto.edu/~graves/asru_2013.pdf)!All models that you specify in this repository should have `output_length` defined as an attribute. This attribute is a lambda function that maps the (temporal) length of the input acoustic features to the (temporal) length of the output softmax layer. This function is used in the computation of CTC loss; to see this, look at the `add_ctc_loss` function in `train_utils.py`. To see where the `output_length` attribute is defined for the models in the code, take a look at the `sample_models.py` file. You will notice this line of code within most models:```model.output_length = lambda x: x```The acoustic model that incorporates a convolutional layer (`cnn_rnn_model`) has a line that is a bit different:```model.output_length = lambda x: cnn_output_length( x, kernel_size, conv_border_mode, conv_stride)```In the case of models that use purely recurrent layers, the lambda function is the identity function, as the recurrent layers do not modify the (temporal) length of their input tensors. 
However, convolutional layers are more complicated and require a specialized function (`cnn_output_length` in `sample_models.py`) to determine the temporal length of their output.You will have to add the `output_length` attribute to your final model before running the code cell below. Feel free to use the `cnn_output_length` function, if it suits your model.
# specify the model model_end = final_model()
Model: "model_7" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= the_input (InputLayer) [(None, None, 161)] 0 conv1d (Conv1D) (None, None, 400) 708800 batch_normalization_6 (Batc (None, None, 400) 1600 hNormalization) bidi (Bidirectional) (None, None, 400) 722400 batch_normalization_7 (Batc (None, None, 400) 1600 hNormalization) time_distributed_3 (TimeDis (None, None, 29) 11629 tributed) dropout_3 (Dropout) (None, None, 29) 0 softmax (Activation) (None, None, 29) 0 ================================================================= Total params: 1,446,029 Trainable params: 1,444,429 Non-trainable params: 1,600 _________________________________________________________________ None
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
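For convolutional layers, the temporal output length follows standard convolution arithmetic; a sketch of a `cnn_output_length`-style helper is below. The repository's own version in `sample_models.py` may differ in detail (e.g., in how dilation is exposed), so treat this as a reference formula.

```python
def cnn_output_length_sketch(input_length, filter_size, border_mode, stride, dilation=1):
    """Temporal length of a 1D convolution's output."""
    if input_length is None:
        return None
    dilated_filter_size = filter_size + (filter_size - 1) * (dilation - 1)
    if border_mode == 'same':
        output_length = input_length
    elif border_mode == 'valid':
        output_length = input_length - dilated_filter_size + 1
    else:
        raise ValueError("border_mode must be 'same' or 'valid'")
    return (output_length + stride - 1) // stride  # ceiling division by the stride
```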
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_end.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_end.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
train_model(input_to_softmax=model_end, pickle_path='model_end.pickle', save_model_path='model_end.h5', optimizer=Adam(clipvalue=0.5, amsgrad=True), spectrogram=True) # change to False if you would like to use MFCC features
/content/train_utils.py:77: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators. callbacks=[checkpointer], verbose=verbose)
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
__Question 2:__ Describe your final model architecture and your reasoning at each step. __Answer:__ STEP 3: Obtain PredictionsWe have written a function for you to decode the predictions of your acoustic model. To use the function, please execute the code cell below.
import numpy as np from data_generator import AudioGenerator from keras import backend as K from utils import int_sequence_to_text from IPython.display import Audio def get_predictions(index, partition, input_to_softmax, model_path): """ Print a model's decoded predictions Params: index (int): The example you would like to visualize partition (str): One of 'train' or 'validation' input_to_softmax (Model): The acoustic model model_path (str): Path to saved acoustic model's weights """ # load the train and test data data_gen = AudioGenerator() data_gen.load_train_data() data_gen.load_validation_data() # obtain the true transcription and the audio features if partition == 'validation': transcr = data_gen.valid_texts[index] audio_path = data_gen.valid_audio_paths[index] data_point = data_gen.normalize(data_gen.featurize(audio_path)) elif partition == 'train': transcr = data_gen.train_texts[index] audio_path = data_gen.train_audio_paths[index] data_point = data_gen.normalize(data_gen.featurize(audio_path)) else: raise Exception('Invalid partition! Must be "train" or "validation"') # obtain and decode the acoustic model's predictions input_to_softmax.load_weights(model_path) prediction = input_to_softmax.predict(np.expand_dims(data_point, axis=0)) output_length = [input_to_softmax.output_length(data_point.shape[0])] pred_ints = (K.eval(K.ctc_decode( prediction, output_length)[0][0])+1).flatten().tolist() # play the audio file, and display the true and predicted transcriptions print('-'*80) Audio(audio_path) print('True transcription:\n' + '\n' + transcr) print('-'*80) print('Predicted transcription:\n' + '\n' + ''.join(int_sequence_to_text(pred_ints))) print('-'*80)
_____no_output_____
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
Use the code cell below to obtain the transcription predicted by your final model for the first example in the training dataset.
get_predictions(index=0, partition='train', input_to_softmax=final_model(), model_path='model_end.h5')
_____no_output_____
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
Use the next code cell to visualize the model's prediction for the first example in the validation dataset.
get_predictions(index=0, partition='validation', input_to_softmax=final_model(), model_path='model_end.h5')
_____no_output_____
MIT
vui_notebook.ipynb
RomansWorks/AIND-VUI-Capstone
Logistic Regression Lastly, we shall see whether Google Play Store had enough data to predict the popularity of a trading app or of the store's top paid games. This is done by splitting the number of installs into two dummy variables: for trading apps, variable 1 corresponds to more than 1,000,000 downloads and variable 0 to fewer; for the paid games, variable 1 corresponds to more than 670,545 downloads and variable 0 to the remaining apps. Now we shall create a logistic regression model using an 80/20 ratio between the training sample and the testing sample
from sklearn.model_selection import cross_val_score from sklearn.linear_model import LogisticRegression from sklearn.metrics import classification_report, confusion_matrix from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt def Log_reg(x,y): model = LogisticRegression(solver='liblinear',C=10, random_state=0).fit(x,y) print("Model accuracy",model.score(x,y)) cm = confusion_matrix(y, model.predict(x)) fig, ax = plt.subplots(figsize=(8, 8)) ax.imshow(cm) ax.grid(False) ax.xaxis.set(ticks=(0, 1), ticklabels=('Predicted 0s', 'Predicted 1s')) ax.yaxis.set(ticks=(0, 1), ticklabels=('Actual 0s', 'Actual 1s')) ax.set_ylim(1.5, -0.5) for i in range(2): for j in range(2): ax.text(j, i, cm[i, j], ha='center', va='center', color='black') plt.title('Confusion Matrix') plt.show() print(classification_report(y, model.predict(x))) scores = cross_val_score(model, x,y, cv=10) print('Cross-Validation Accuracy Scores', scores) scores = pd.Series(scores) print("Mean Accuracy: ",scores.mean()) import pandas as pd import numpy as np #path = "D:\Java\VS-CodPitonul\\GAME.xlsx" #df = pd.read_excel (path, sheet_name='Sheet1') path = "D:\Java\VS-CodPitonul\\Trading_Apps.xlsx" df = pd.read_excel (path, sheet_name='Results') ''' RO: Folosește dropna daca ai valori lipsă, altfel îți va da eroare ENG: Use dropna only if you have missing values else you will recive an error message ''' #df = df.dropna() #Log_reg(df[['Score','Ratings','Reviews','Months_From_Release','Price']],df['Instalari_Bin']) #For GAME.xlsx Log_reg(df[['Score','Ratings','Reviews','Months_From_Release']],df['Instalari_Bin']) #For Trading_Apps.xlsx
Model accuracy 0.847457627118644
Apache-2.0
Code/LogisticRegression.ipynb
IulianRo3/Predicting-GooglePlayStore-Apps-Succes-Through-Logistic-Regression
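The spreadsheets already contain the `Instalari_Bin` column, so the cell above never shows how it was built. A hedged sketch of the binning and of an 80/20 split is given below; the `Installs` column name is an assumption about the raw data, and the snippet assumes `df` has been loaded as in the cell above.

```python
from sklearn.model_selection import train_test_split

# Hypothetical raw install counts; thresholds follow the description above
df['Instalari_Bin'] = (df['Installs'] > 1_000_000).astype(int)    # trading apps
# df['Instalari_Bin'] = (df['Installs'] > 670_545).astype(int)    # paid-games variant

features = df[['Score', 'Ratings', 'Reviews', 'Months_From_Release']]
target = df['Instalari_Bin']
X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=0)              # 80/20 split
```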
Import libraries and data The dataset was obtained from the capstone project description (direct link [here](https://d3c33hcgiwev3.cloudfront.net/_429455574e396743d399f3093a3cc23b_capstone.zip?Expires=1530403200&Signature=FECzbTVo6TH7aRh7dXXmrASucl~Cy5mlO94P7o0UXygd13S~Afi38FqCD7g9BOLsNExNB0go0aGkYPtodekxCGblpc3I~R8TCtWRrys~2gciwuJLGiRp4CfNtfp08sFvY9NENaRb6WE2H4jFsAo2Z2IbXV~llOJelI3k-9Waj~M_&Key-Pair-Id=APKAJLTNE6QMUY6HBC5A)) and split manually into separate CSV files. The files were stored in my personal GitHub account (folder link [here](https://github.com/caiomiyashiro/RecommenderSystemsNotebooks/tree/master/data/capstone)); you can download them and paste them inside your working directory in order for this notebook to run.
import pandas as pd import numpy as np
_____no_output_____
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr
Preprocess data Float data came with ',' as the decimal separator in the CSV, and Python works with '.', so the numbers were treated as text. In order to convert them to numbers, I first replaced all the commas with periods ('.') and then converted the columns to float.
items = pd.read_csv('data/capstone/Capstone Data - Office Products - Items.csv', index_col=0) actual_ratings = pd.read_csv('data/capstone/Capstone Data - Office Products - Ratings.csv', index_col=0) content_based = pd.read_csv('data/capstone/Capstone Data - Office Products - CBF.csv', index_col=0) user_user = pd.read_csv('data/capstone/Capstone Data - Office Products - User-User.csv', index_col=0) item_item = pd.read_csv('data/capstone/Capstone Data - Office Products - Item-Item.csv', index_col=0) matrix_fact = pd.read_csv('data/capstone/Capstone Data - Office Products - MF.csv', index_col=0) pers_bias = pd.read_csv('data/capstone/Capstone Data - Office Products - PersBias.csv', index_col=0) items[['Availability','Price']] = items[['Availability','Price']].apply(lambda col: col.apply(lambda elem: str(elem).replace(',', '.'))).astype(float) # preprocess content_based = content_based.apply(lambda col: col.apply(lambda elem: str(elem).replace(',', '.'))).astype(float) user_user = user_user.apply(lambda col: col.apply(lambda elem: str(elem).replace(',', '.'))).astype(float) item_item = item_item.apply(lambda col: col.apply(lambda elem: str(elem).replace(',', '.'))).astype(float) matrix_fact = matrix_fact.apply(lambda col: col.apply(lambda elem: str(elem).replace(',', '.'))).astype(float) pers_bias = pers_bias.apply(lambda col: col.apply(lambda elem: str(elem).replace(',', '.'))).astype(float) print('items.shape = ' + str(items.shape)) print('actual_ratings.shape = ' + str(actual_ratings.shape)) print('content_based.shape = ' + str(content_based.shape)) print('user_user.shape = ' + str(user_user.shape)) print('item_item.shape = ' + str(item_item.shape)) print('matrix_fact.shape = ' + str(matrix_fact.shape)) print('pers_bias.shape = ' + str(pers_bias.shape)) actual_ratings.head()
items.shape = (200, 7) actual_ratings.shape = (200, 100) content_based.shape = (200, 100) user_user.shape = (200, 100) item_item.shape = (200, 100) matrix_fact.shape = (200, 100) pers_bias.shape = (200, 100)
MIT
notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
sparsh-ai/reco-tut-asr