When the optimization finishes, you should receive a message indicating whether the method converged or the maximum number of iterations was reached. In one-dimensional examples, you can visualize the result of the optimization as follows.
myBopt.plot_acquisition()
myBopt.plot_convergence()
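For readers landing here without the earlier cells, a minimal one-dimensional setup that produces such a `myBopt` object might look like the following sketch (the Forrester test function and the domain are illustrative assumptions, not necessarily the exact cells that precede this point):

```python
import GPyOpt

# Illustrative 1-D problem: optimize GPyOpt's built-in Forrester function on [0, 1].
f_true = GPyOpt.objective_examples.experiments1d.forrester()
bounds = [{'name': 'var_1', 'type': 'continuous', 'domain': (0, 1)}]
myBopt = GPyOpt.methods.BayesianOptimization(f=f_true.f, domain=bounds)
myBopt.run_optimization(max_iter=15)  # prints a convergence message when done
```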
In problems of any dimension, two evaluation plots are available:

* The distance between the last two observations.
* The value of $f$ at the best location prior to each iteration.

To see these plots, just run the following cell.
myBopt.plot_convergence()
Now let's make a video to track what the algorithm is doing in each iteration. Let's use the LCB in this case with parameter equal to 2.

4. Two-dimensional example

Next, we try a two-dimensional example. In this case we minimize the six-hump camel function
$$f(x_1,x_2) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1x_2 + (-4 + 4x_2^2)x_2^2,$$
in $[-3,3]\times[-2,2]$. This function has two global minima, at $(0.0898,-0.7126)$ and $(-0.0898,0.7126)$. As in the previous case, we create the function, which is already in GPyOpt. This time we generate observations of the function perturbed with white noise of $\mathrm{sd}=0.1$.
# create the objective function
f_true = GPyOpt.objective_examples.experiments2d.sixhumpcamel()
f_sim = GPyOpt.objective_examples.experiments2d.sixhumpcamel(sd=0.1)
bounds = [{'name': 'var_1', 'type': 'continuous', 'domain': f_true.bounds[0]},
          {'name': 'var_2', 'type': 'continuous', 'domain': f_true.bounds[1]}]
f_true.plot()
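To sanity-check the formula above, it is easy to evaluate the six-hump camel function directly with NumPy; at the two quoted minima the value should be approximately $-1.0316$ (this check is ours, not part of the original notebook):

```python
import numpy as np

def sixhumpcamel(x1, x2):
    # f(x1, x2) exactly as written above
    return (4 - 2.1 * x1**2 + x1**4 / 3) * x1**2 + x1 * x2 + (-4 + 4 * x2**2) * x2**2

print(sixhumpcamel(0.0898, -0.7126))   # approx -1.0316
print(sixhumpcamel(-0.0898, 0.7126))   # approx -1.0316
```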
We create the GPyOpt object. The cell below uses the Expected Improvement (EI) acquisition function to solve the problem; note that the `acquisition_weight` argument only takes effect for weighted acquisitions such as the Lower Confidence Bound (LCB).
# Create the Bayesian optimization object
myBopt2D = GPyOpt.methods.BayesianOptimization(f_sim.f,
                                               domain=bounds,
                                               model_type='GP',
                                               acquisition_type='EI',
                                               normalize_Y=True,
                                               acquisition_weight=2)
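For comparison, the LCB acquisition mentioned above would be configured the same way, with `acquisition_weight` acting as its exploration parameter; a hedged sketch (the variable name is ours):

```python
# Same problem with the LCB acquisition; a larger acquisition_weight explores more.
myBopt2D_lcb = GPyOpt.methods.BayesianOptimization(f_sim.f,
                                                   domain=bounds,
                                                   model_type='GP',
                                                   acquisition_type='LCB',
                                                   normalize_Y=True,
                                                   acquisition_weight=2)
```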
We run the optimization for at most 40 iterations (capped at 60 seconds) and then show the evaluation plot and the acquisition function.
# run the optimization
max_iter = 40  # maximum number of iterations
max_time = 60  # maximum time in seconds
myBopt2D.run_optimization(max_iter, max_time, verbosity=False)
Finally, we plot the acquisition function and the convergence plot.
myBopt2D.plot_acquisition()
myBopt2D.plot_convergence()
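Besides the plots, the best location and value found so far can be read off the optimizer object; recent GPyOpt releases expose them as the attributes below (verify against your installed version):

```python
print("best x found:", myBopt2D.x_opt)     # should be near one of the two global minima
print("best f(x) found:", myBopt2D.fx_opt)
```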
!pip install alpaca_trade_api
Collecting alpaca_trade_api
  Downloading alpaca_trade_api-1.5.0-py3-none-any.whl (45 kB)
Requirement already satisfied: numpy>=1.11.1 in /usr/local/lib/python3.7/dist-packages (from alpaca_trade_api) (1.19.5)
Collecting aiohttp==3.7.4
  Downloading aiohttp-3.7.4-cp37-cp37m-manylinux2014_x86_64.whl (1.3 MB)
Collecting PyYAML==5.4.1
  Downloading PyYAML-5.4.1-cp37-cp37m-manylinux1_x86_64.whl (636 kB)
Requirement already satisfied: urllib3<2,>1.24 in /usr/local/lib/python3.7/dist-packages (from alpaca_trade_api) (1.24.3)
Collecting msgpack==1.0.2
  Downloading msgpack-1.0.2-cp37-cp37m-manylinux1_x86_64.whl (273 kB)
Collecting deprecation==2.1.0
  Downloading deprecation-2.1.0-py2.py3-none-any.whl (11 kB)
Requirement already satisfied: pandas>=0.18.1 in /usr/local/lib/python3.7/dist-packages (from alpaca_trade_api) (1.3.5)
Collecting websockets<10,>=8.0
  Downloading websockets-9.1-cp37-cp37m-manylinux2010_x86_64.whl (103 kB)
Collecting websocket-client<2,>=0.56.0
  Downloading websocket_client-1.2.3-py3-none-any.whl (53 kB)
Requirement already satisfied: requests<3,>2 in /usr/local/lib/python3.7/dist-packages (from alpaca_trade_api) (2.23.0)
Collecting multidict<7.0,>=4.5
  Downloading multidict-6.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (94 kB)
Collecting async-timeout<4.0,>=3.0
  Downloading async_timeout-3.0.1-py3-none-any.whl (8.2 kB)
Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp==3.7.4->alpaca_trade_api) (21.4.0)
Requirement already satisfied: chardet<4.0,>=2.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp==3.7.4->alpaca_trade_api) (3.0.4)
Collecting yarl<2.0,>=1.0
  Downloading yarl-1.7.2-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (271 kB)
Requirement already satisfied: typing-extensions>=3.6.5 in /usr/local/lib/python3.7/dist-packages (from aiohttp==3.7.4->alpaca_trade_api) (3.10.0.2)
Requirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from deprecation==2.1.0->alpaca_trade_api) (21.3)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.18.1->alpaca_trade_api) (2018.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.18.1->alpaca_trade_api) (2.8.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas>=0.18.1->alpaca_trade_api) (1.15.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>2->alpaca_trade_api) (2021.10.8)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>2->alpaca_trade_api) (2.10)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->deprecation==2.1.0->alpaca_trade_api) (3.0.7)
Installing collected packages: multidict, yarl, async-timeout, websockets, websocket-client, PyYAML, msgpack, deprecation, aiohttp, alpaca-trade-api
  Attempting uninstall: PyYAML
    Found existing installation: PyYAML 3.13
    Uninstalling PyYAML-3.13:
      Successfully uninstalled PyYAML-3.13
  Attempting uninstall: msgpack
    Found existing installation: msgpack 1.0.3
    Uninstalling msgpack-1.0.3:
      Successfully uninstalled msgpack-1.0.3
Successfully installed PyYAML-5.4.1 aiohttp-3.7.4 alpaca-trade-api-1.5.0 async-timeout-3.0.1 deprecation-2.1.0 msgpack-1.0.2 multidict-6.0.2 websocket-client-1.2.3 websockets-9.1 yarl-1.7.2
Features to consider: targets only predict a sell within market hours, i.e. at 1530 the target is predicting the price for 1100 the next day. Data from pre- and post-market is taken into consideration, and a sell or buy will be indicated if the price will fluctuate after close.
# Import dependencies
import numpy as np
import pandas as pd
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.autograd import Variable
from torch.nn import Linear, ReLU, CrossEntropyLoss, Sequential, Conv2d, MaxPool2d, Module, Softmax, BatchNorm2d, Dropout
from torch.optim import Adam, SGD
from torch.utils.tensorboard import SummaryWriter
from torchsummary import summary
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from tqdm.notebook import tqdm
import alpaca_trade_api as tradeapi
from datetime import datetime, timedelta, tzinfo, timezone, time
import os.path
import ast
import threading
import math
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler
import warnings

random_seed = 182
torch.manual_seed(random_seed)

PAPER_API_KEY = "PKE39LILN9SL1FMJMFV7"
PAPER_SECRET_KEY = "TkU7fXH6WhP15MewgWlSnQG5RUoHGOPQ7yqlD6xq"
PAPER_BASE_URL = 'https://paper-api.alpaca.markets'

api = tradeapi.REST(PAPER_API_KEY, PAPER_SECRET_KEY, PAPER_BASE_URL, api_version='v2')


def prepost_train_test_validate_offset_data(api, ticker, interval, train_days=180,
                                            test_days=60, validate_days=30, offset_days=0):
    ticker_data_dict = {}
    monthly_data_dict = {}
    interval_loop_data = pd.DataFrame()
    stock_data = None
    days_to_collect = train_days + test_days + validate_days + offset_days
    TZ = 'US/Eastern'
    start = pd.to_datetime((datetime.now() - timedelta(days=days_to_collect)).strftime("%Y-%m-%d %H:%M"), utc=True)
    end = pd.to_datetime(datetime.now().strftime("%Y-%m-%d %H:%M"), utc=True)
    stock_data = api.get_bars(ticker, interval, start=start.isoformat(), end=end.isoformat(), adjustment="raw").df
    interval_loop_data = interval_loop_data.append(stock_data)
    df_start_ref = interval_loop_data.index[0]
    start_str_ref = pd.to_datetime(start, utc=True)
    # Keep paging backwards until the collected data reaches the requested start date
    while start_str_ref.value < (pd.to_datetime(df_start_ref, utc=True) - pd.Timedelta(days=2.5)).value:
        end_new = pd.to_datetime(interval_loop_data.index[0].strftime("%Y-%m-%d %H:%M"), utc=True).isoformat()
        stock_data_new = api.get_bars(ticker, interval, start=start, end=end_new, adjustment="raw").df
        # stock_data_new = stock_data_new.reset_index()
        interval_loop_data = interval_loop_data.append(stock_data_new).sort_values(by=['index'], ascending=True)
        df_start_ref = interval_loop_data.index[0]
    stock_yr_min_df = interval_loop_data.copy()
    stock_yr_min_df["Open"] = stock_yr_min_df['open']
    stock_yr_min_df["High"] = stock_yr_min_df["high"]
    stock_yr_min_df["Low"] = stock_yr_min_df["low"]
    stock_yr_min_df["Close"] = stock_yr_min_df["close"]
    stock_yr_min_df["Volume"] = stock_yr_min_df["volume"]
    stock_yr_min_df["VolumeWeightedAvgPrice"] = stock_yr_min_df["vwap"]
    stock_yr_min_df["Time"] = stock_yr_min_df.index.tz_convert(TZ)
    stock_yr_min_df.index = stock_yr_min_df.index.tz_convert(TZ)
    final_df = stock_yr_min_df.filter(["Time", "Open", "High", "Low", "Close", "Volume",
                                       "VolumeWeightedAvgPrice"], axis=1)
    first_day = final_df.index[0]
    traintest_day = final_df.index[-1] - pd.Timedelta(days=test_days + validate_days + offset_days)
    valtest_day = final_df.index[-1] - pd.Timedelta(days=test_days + offset_days)
    last_day = final_df.index[-1] - pd.Timedelta(days=offset_days)
    training_df = final_df.loc[first_day:traintest_day]  # (data_split - pd.Timedelta(days=1))]
    validate_df = final_df.loc[traintest_day:valtest_day]
    testing_df = final_df.loc[valtest_day:last_day]
    full_train = final_df.loc[first_day:last_day]
    offset_df = final_df.loc[last_day:]
    return training_df, validate_df, testing_df, full_train, offset_df, final_df, traintest_day, valtest_day


from datetime import date

train_start = date(2017, 1, 1)
train_end = date(2020, 3, 29)
train_delta = train_end - train_start
print(f'Number of days of Training Data {train_delta.days}')
val_day_num = 400
print(f'Number of days of Validation Data {val_day_num}')
test_start = train_end + timedelta(val_day_num)
test_end = date.today()
test_delta = (test_end - test_start)
print(f'Number of days of Holdout Test Data {test_delta.days}')

ticker = "CORN"                    # Ticker symbol to test
interval = "5Min"                  # Interval of bars
train_day_int = train_delta.days   # Size of training set
val_day_int = val_day_num          # Size of validation set
test_day_int = test_delta.days     # Size of test set
offset_day_int = 0                 # Number of days to offset the training data

train_raw, val_raw, test_raw, full_raw, offset_raw, complete_raw, traintest_day, testval_day = \
    prepost_train_test_validate_offset_data(api, ticker, interval,
                                            train_days=train_day_int,
                                            test_days=test_day_int,
                                            validate_days=val_day_int,
                                            offset_days=offset_day_int)


def timeFilterAndBackfill(df):
    """
    Prep df to be filled out for each trading day:
      - Time frame: 0930-1930
      - Backfill NaNs
      - Adjust Volume to zero if no trading data is present
        (assumption: there were no trades during that time)
    We build overlapping arrays by 30 min to give ourselves more
    opportunities to predict during a given trading day.
    """
    df = df.between_time('07:29', '17:29')  # initial sorting of data
    TZ = 'US/Eastern'  # define the correct timezone
    start_dateTime = pd.Timestamp(year=df.index[0].year, month=df.index[0].month,
                                  day=df.index[0].day, hour=7, minute=25, tz=TZ)
    end_dateTime = pd.Timestamp(year=df.index[-1].year, month=df.index[-1].month,
                                day=df.index[-1].day, hour=17, minute=35, tz=TZ)
    # build a blank index that has every 5 min interval represented
    dateTime_index = pd.date_range(start_dateTime, end_dateTime, freq='5min').tolist()
    dateTime_index_df = pd.DataFrame()
    dateTime_index_df["Time"] = dateTime_index
    filtered_df = pd.merge_asof(dateTime_index_df, df, on='Time').set_index("Time").between_time('09:29', '17:29')
    # create the close array by back filling NA, to represent no change in close
    closeset_list = []
    prev_c = None
    for c in filtered_df["Close"]:
        if prev_c == None:
            if math.isnan(c):
                prev_c = 0
                closeset_list.append(0)
            else:
                prev_c = c
                closeset_list.append(c)
        elif prev_c != None:
            if c == prev_c:
                closeset_list.append(c)
            elif math.isnan(c):
                closeset_list.append(prev_c)
            else:
                closeset_list.append(c)
                prev_c = c
    filtered_df["Close"] = closeset_list
    # create the volume
    volumeset_list = []
    prev_v = None
    for v in filtered_df["Volume"]:
        if prev_v == None:
            if math.isnan(v):
                prev_v = 0
                volumeset_list.append(0)
            else:
                prev_v = v
                volumeset_list.append(v)
        elif prev_v != None:
            if v == prev_v:
                volumeset_list.append(0)
                prev_v = v
            elif math.isnan(v):
                volumeset_list.append(0)
                prev_v = 0
            else:
                volumeset_list.append(v)
                prev_v = v
    filtered_df["Volume"] = volumeset_list
    # same treatment for the volume-weighted average price
    adjvolumeset_list = []
    prev_v = None
    for v in filtered_df["VolumeWeightedAvgPrice"]:
        if prev_v == None:
            if math.isnan(v):
                prev_v = 0
                adjvolumeset_list.append(0)
            else:
                prev_v = v
                adjvolumeset_list.append(v)
        elif prev_v != None:
            if v == prev_v:
                adjvolumeset_list.append(0)
                prev_v = v
            elif math.isnan(v):
                adjvolumeset_list.append(0)
                prev_v = 0
            else:
                adjvolumeset_list.append(v)
                prev_v = v
    filtered_df["VolumeWeightedAvgPrice"] = adjvolumeset_list
    preped_df = filtered_df.backfill()
    return preped_df


train_raw[275:300]


def buildTargets_VolOnly(full_df=full_raw, train_observations=train_raw.shape[0],
                         val_observations=val_raw.shape[0], test_observations=test_raw.shape[0],
                         alph=.55, volity_int=10):
    """
    Take a complete set of train, val, and test data and return the targets.
    Volatility is calculated over the 252 5-min increments.
    The target shift looks 2 hours ahead of the current time.
    """
    returns = np.log(full_df['Close'] / (full_df['Close'].shift()))
    returns.fillna(0, inplace=True)
    volatility = returns.rolling(window=(volity_int)).std() * np.sqrt(volity_int)
    return volatility
    # return train_targets, val_targets, test_targets, full_targets


volatility = buildTargets_VolOnly()

fig = plt.figure(figsize=(15, 7))
ax1 = fig.add_subplot(1, 1, 1)
volatility.plot(ax=ax1, color="red")
ax1.set_xlabel('Date')
ax1.set_ylabel('Volatility', color="red")
ax1.set_title(f'Annualized volatility for {ticker}')
ax2 = ax1.twinx()
full_raw.Close.plot(ax=ax2, color="blue")
ax2.set_ylabel('Close', color="blue")
ax2.axvline(x=full_raw.index[train_raw.shape[0]])
ax2.axvline(x=full_raw.index[val_raw.shape[0] + train_raw.shape[0]])
plt.show()

train = timeFilterAndBackfill(train_raw)
val = timeFilterAndBackfill(val_raw)
test = timeFilterAndBackfill(test_raw)

# keep weekdays only
train = train[train.index.dayofweek <= 4].copy()
val = val[val.index.dayofweek <= 4].copy()
test = test[test.index.dayofweek <= 4].copy()

# where no volume traded, flatten OHLC to the close
train["Open"] = np.where((train["Volume"] == 0), train["Close"], train["Open"])
train["High"] = np.where((train["Volume"] == 0), train["Close"], train["High"])
train["Low"] = np.where((train["Volume"] == 0), train["Close"], train["Low"])
val["Open"] = np.where((val["Volume"] == 0), val["Close"], val["Open"])
val["High"] = np.where((val["Volume"] == 0), val["Close"], val["High"])
val["Low"] = np.where((val["Volume"] == 0), val["Close"], val["Low"])
test["Open"] = np.where((test["Volume"] == 0), test["Close"], test["Open"])
test["High"] = np.where((test["Volume"] == 0), test["Close"], test["High"])
test["Low"] = np.where((test["Volume"] == 0), test["Close"], test["Low"])


def strided_axis0(a, L, overlap=1):
    if L == overlap:
        raise Exception("Overlap arg must be smaller than length of windows")
    S = L - overlap
    nd0 = ((len(a) - L) // S) + 1
    if nd0 * S - S != len(a) - L:
        warnings.warn("Not all elements were covered")
    m, n = a.shape
    s0, s1 = a.strides
    return np.lib.stride_tricks.as_strided(a, shape=(nd0, L, n), strides=(S * s0, s0, s1))


# OLDER CODE WITHOUT OVERLAP OF LABELING
# def blockshaped(arr, nrows, ncols):
#     """
#     Return an array of shape (n, nrows, ncols) where
#     n * nrows * ncols = arr.size
#     If arr is a 2D array, the returned array should look like n subblocks with
#     each subblock preserving the "physical" layout of arr.
#     """
#     h, w = arr.shape
#     assert h % nrows == 0, f"{h} rows is not evenly divisible by {nrows}"
#     assert w % ncols == 0, f"{w} cols is not evenly divisible by {ncols}"
#     return np.flip(np.rot90((arr.reshape(h//nrows, nrows, -1, ncols)
#                              .swapaxes(1, 2)
#                              .reshape(-1, nrows, ncols)), axes=(1, 2)), axis=1)


def blockshaped(arr, nrows, ncols, overlapping_5min_intervals=12):
    """
    Return an array of shape (n, nrows, ncols) where
    n * nrows * ncols = arr.size
    If arr is a 2D array, the returned array should look like n subblocks with
    each subblock preserving the "physical" layout of arr.
    """
    h, w = arr.shape
    assert h % nrows == 0, f"{h} rows is not evenly divisible by {nrows}"
    assert w % ncols == 0, f"{w} cols is not evenly divisible by {ncols}"
    return np.flip(np.rot90((strided_axis0(arr, 24, overlap=overlapping_5min_intervals)
                             .reshape(-1, nrows, ncols)), axes=(1, 2)), axis=1)


train_tonp = train[["Open", "High", "Low", "Close", "Volume"]]
val_tonp = val[["Open", "High", "Low", "Close", "Volume"]]
test_tonp = test[["Open", "High", "Low", "Close", "Volume"]]
train_array = train_tonp.to_numpy()
val_array = val_tonp.to_numpy()
test_array = test_tonp.to_numpy()

X_train_pre_final = blockshaped(train_array, 24, 5, overlapping_5min_intervals=12)
X_val_pre_final = blockshaped(val_array, 24, 5, overlapping_5min_intervals=12)
X_test_pre_final = blockshaped(test_array, 24, 5, overlapping_5min_intervals=12)
# X_train_pre_final = blockshaped(train_array, 24, 5)
# X_val_pre_final = blockshaped(val_array, 24, 5)
# X_test_pre_final = blockshaped(test_array, 24, 5)

X_train_pre_final[0]


# create target from OHLC and Volume data
def buildTargets(obs_array, alph=.55, volity_int=10):
    """
    Take a complete set of train, val, and test data and return the targets.
    Volatility is calculated over the 24 5-min increments.
    The target shift looks 2 hours ahead of the current time.

    shift_2hour = the amount of time the data interval takes to equal 2 hours
                  (i.e. a 5 min data interval is equal to 24)
    alph        = the alpha value for calculating the shift in price
    volity_int  = the number of increments used to calculate volatility
    """
    target_close_list = []
    for arr in obs_array:
        target_close_list.append(arr[3][-1])
    target_close_df = pd.DataFrame()
    target_close_df["Close"] = target_close_list
    target_close_df["Volitility"] = target_close_df["Close"].rolling(volity_int).std()
    # print(len(volatility), len(target_close_df["Close"]))
    targets = [2] * len(target_close_df.Close)
    targets = np.where(target_close_df.Close.shift() >= (target_close_df.Close * (1 + alph * target_close_df["Volitility"])), 1, targets)
    targets = np.where(target_close_df.Close.shift() <= (target_close_df.Close * (1 - alph * target_close_df["Volitility"])), 0, targets)
    return targets


volity_val = 10
alph = .015
y_train_pre_final = buildTargets(X_train_pre_final, alph=alph, volity_int=volity_val)
y_val_pre_final = buildTargets(X_val_pre_final, alph=alph, volity_int=volity_val)
y_test_pre_final = buildTargets(X_test_pre_final, alph=alph, volity_int=volity_val)


def get_class_distribution(obj):
    count_dict = {"up": 0, "flat": 0, "down": 0}
    for i in obj:
        if i == 1:
            count_dict['up'] += 1
        elif i == 0:
            count_dict['down'] += 1
        elif i == 2:
            count_dict['flat'] += 1
        else:
            print("Check classes.")
    return count_dict


bfig, axes = plt.subplots(nrows=1, ncols=3, figsize=(25, 7))
# Train
sns.barplot(data=pd.DataFrame.from_dict([get_class_distribution(y_train_pre_final)]).melt(),
            x="variable", y="value", hue="variable", ax=axes[0]).set_title('Class Distribution in Train Set')
# Validation
sns.barplot(data=pd.DataFrame.from_dict([get_class_distribution(y_val_pre_final)]).melt(),
            x="variable", y="value", hue="variable", ax=axes[1]).set_title('Class Distribution in Val Set')
# Test
sns.barplot(data=pd.DataFrame.from_dict([get_class_distribution(y_test_pre_final)]).melt(),
            x="variable", y="value", hue="variable", ax=axes[2]).set_title('Class Distribution in Test Set')


def createFinalData_RemoveLateAfternoonData(arr, labels):
    assert arr.shape[0] == len(labels), "X data do not match length of y labels"
    step_count = 0
    filtered_y_labels = []
    for i in range(arr.shape[0]):
        if i == 0:
            final_arr = arr[i]
            filtered_y_labels.append(labels[i])
            # print(f'Appending index {i}, step_count: {step_count}')
            step_count += 1
        elif i == 1:
            final_arr = np.stack((final_arr, arr[i]))
            filtered_y_labels.append(labels[i])
            step_count += 1
        elif step_count == 0:
            final_arr = np.vstack((final_arr, arr[i][None]))
            filtered_y_labels.append(labels[i])
            # print(f'Appending index {i}, step_count: {step_count}')
            step_count += 1
        elif (step_count) % 5 == 0:
            # print(f'skipping {i} array, step_count: {step_count}')
            step_count += 1
        elif (step_count) % 6 == 0:
            # print(f'skipping {i} array, step_count: {step_count}')
            step_count += 1
        elif (step_count) % 7 == 0:
            # print(f'skipping {i} array, step_count: {step_count}')
            step_count = 0
        else:
            final_arr = np.vstack((final_arr, arr[i][None]))
            filtered_y_labels.append(labels[i])
            # print(f'Appending index {i}, step_count: {step_count}')
            step_count += 1
    return final_arr, filtered_y_labels


X_train, y_train = createFinalData_RemoveLateAfternoonData(X_train_pre_final, y_train_pre_final)
X_val, y_val = createFinalData_RemoveLateAfternoonData(X_val_pre_final, y_val_pre_final)
X_test, y_test = createFinalData_RemoveLateAfternoonData(X_test_pre_final, y_test_pre_final)

y_train = np.array(y_train)
y_val = np.array(y_val)
y_test = np.array(y_test)

# Check if arrays are made correctly
train[12:48]
np.set_printoptions(threshold=200)
y_train_pre_final[0:24]

######
# Code for scaling at a later date
######
# from sklearn.preprocessing import MinMaxScaler
scalers = {}
for i in range(X_train.shape[1]):
    scalers[i] = MinMaxScaler()
    X_train[:, i, :] = scalers[i].fit_transform(X_train[:, i, :])
for i in range(X_val.shape[1]):
    scalers[i] = MinMaxScaler()
    X_val[:, i, :] = scalers[i].fit_transform(X_val[:, i, :])
for i in range(X_test.shape[1]):
    scalers[i] = MinMaxScaler()
    X_test[:, i, :] = scalers[i].fit_transform(X_test[:, i, :])

# get_class_distribution (defined above) is reused to plot the final label distributions
bfig, axes = plt.subplots(nrows=1, ncols=3, figsize=(25, 7))
# Train
sns.barplot(data=pd.DataFrame.from_dict([get_class_distribution(y_train)]).melt(),
            x="variable", y="value", hue="variable", ax=axes[0]).set_title('Class Distribution in Train Set')
# Validation
sns.barplot(data=pd.DataFrame.from_dict([get_class_distribution(y_val)]).melt(),
            x="variable", y="value", hue="variable", ax=axes[1]).set_title('Class Distribution in Val Set')
# Test
sns.barplot(data=pd.DataFrame.from_dict([get_class_distribution(y_test)]).melt(),
            x="variable", y="value", hue="variable", ax=axes[2]).set_title('Class Distribution in Test Set')

###### ONLY EXECUTE FOR 2D CNN #####
X_train = X_train.reshape(X_train.shape[0], 1, X_train.shape[1], X_train.shape[2])
X_val = X_val.reshape(X_val.shape[0], 1, X_val.shape[1], X_val.shape[2])
X_test = X_test.reshape(X_test.shape[0], 1, X_test.shape[1], X_test.shape[2])

print(f'X Train Length {X_train.shape}, y Train Label Length {y_train.shape}')
print(f'X Val Length {X_val.shape}, y Val Label Length {y_val.shape}')
print(f'X Test Length {X_test.shape}, y Test Label Length {y_test.shape}')
X Train Length (4220, 1, 5, 24), y Train Label Length (4220,) X Val Length (1430, 1, 5, 24), y Val Label Length (1430,) X Test Length (1025, 1, 5, 24), y Test Label Length (1025,)
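The target construction buried in `buildTargets` above boils down to a volatility-banded three-class rule. A standalone sketch of just that rule (the function and variable names are ours, for illustration only):

```python
import numpy as np
import pandas as pd

def label_moves(close, alph=0.015, window=10):
    """Label bars up (1), down (0), or flat (2) relative to a rolling-volatility band."""
    close = pd.Series(close)
    vol = close.rolling(window).std()
    labels = np.full(len(close), 2)  # default: flat
    labels = np.where(close.shift() >= close * (1 + alph * vol), 1, labels)  # up
    labels = np.where(close.shift() <= close * (1 - alph * vol), 0, labels)  # down
    return labels
```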
2D CNN Build Model
trainset = TensorDataset(torch.from_numpy(X_train).float(), torch.from_numpy(y_train).long())
valset = TensorDataset(torch.from_numpy(X_val).float(), torch.from_numpy(y_val).long())
testset = TensorDataset(torch.from_numpy(X_test).float(), torch.from_numpy(y_test).long())
trainset

batch_size = 1

# train_data = []
# for i in range(len(X_train)):
#     train_data.append([X_train[i].astype('float'), y_train[i]])
train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=False)
i1, l1 = next(iter(train_loader))
print(i1.shape)

# val_data = []
# for i in range(len(X_val)):
#     val_data.append([X_val[i].astype('float'), y_val[i]])
val_loader = torch.utils.data.DataLoader(valset, batch_size=batch_size, shuffle=False)
i1, l1 = next(iter(val_loader))
print(i1.shape)

test_loader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False)
i1, l1 = next(iter(test_loader))
print(i1.shape)

# Get next batch of training windows
windows, labels = next(iter(train_loader))
print(windows)
windows = windows.numpy()
# print the labels in the batch corresponding to the windows
for idx in range(batch_size):
    print(labels[idx])

# Set up dict for dataloaders
dataloaders = {'train': train_loader, 'val': val_loader}
# Store size of training and validation sets
dataset_sizes = {'train': len(trainset), 'val': len(valset)}
# Get class names associated with labels
classes = [0, 1, 2]


class StockShiftClassification(nn.Module):
    def __init__(self):
        super(StockShiftClassification, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=(1, 3), stride=1, padding=1)
        self.pool1 = nn.MaxPool2d((1, 4), 4)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=(1, 3), stride=1, padding=1)
        self.pool2 = nn.MaxPool2d((1, 3), 3)
        self.conv3 = nn.Conv2d(64, 128, kernel_size=(1, 3), stride=1, padding=1)
        self.pool3 = nn.MaxPool2d((1, 2), 2)
        self.fc1 = nn.Linear(256, 1000)  # calculate this
        self.fc2 = nn.Linear(1000, 500)
        # self.fc3 = nn.Linear(500, 3)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.pool1(x)
        x = F.relu(self.conv2(x))
        x = self.pool2(x)
        x = F.relu(self.conv3(x))
        x = self.pool3(x)
        # print(x.size(1))
        x = x.view(x.size(0), -1)
        # Linear layers
        x = self.fc1(x)
        x = self.fc2(x)
        # x = self.fc3(x)
        output = x  # F.softmax(x, dim=1)
        return output


# Instantiate the model
net = StockShiftClassification().float()
# Display a summary of the layers of the model and output shape after each layer
summary(net, (windows.shape[1:]), batch_size=batch_size, device="cpu")


def train_model(model, criterion, optimizer, train_loaders, device, num_epochs=50, scheduler=None):
    # NOTE: the original default was scheduler=onecycle_scheduler, which is only
    # defined later in the notebook; None avoids a NameError at definition time.
    model = model.to(device)            # Send model to GPU if available
    writer = SummaryWriter()            # Instantiate TensorBoard
    iter_num = {'train': 0, 'val': 0}   # Track total number of iterations
    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)
        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()   # Set model to training mode
            else:
                model.eval()    # Set model to evaluate mode
            running_loss = 0.0
            running_corrects = 0
            # Get the input windows and labels, and send to GPU if available
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)
                # Zero the weight gradients
                optimizer.zero_grad()
                # Forward pass to get outputs and calculate loss
                # Track gradient only for training data
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    # print(outputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)
                    # Backpropagation to get the gradients with respect to each weight
                    # Only if in train
                    if phase == 'train':
                        loss.backward()
                        # Update the weights
                        optimizer.step()
                # Convert loss into a scalar and add it to running_loss
                running_loss += loss.item() * inputs.size(0)
                # Track number of correct predictions
                running_corrects += torch.sum(preds == labels.data)
                # Iterate count of iterations
                iter_num[phase] += 1
                # Write loss for batch to TensorBoard
                writer.add_scalar("{} / batch loss".format(phase), loss.item(), iter_num[phase])
            # scheduler.step()
            # Calculate and display average loss and accuracy for the epoch
            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]
            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
            # Write loss and accuracy for epoch to TensorBoard
            writer.add_scalar("{} / epoch loss".format(phase), epoch_loss, epoch)
            writer.add_scalar("{} / epoch accuracy".format(phase), epoch_acc, epoch)
    writer.close()
    return


# Train the model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Cross entropy loss combines softmax and nn.NLLLoss() in one single class.
weights = torch.tensor([1.5, 2.25, 1.]).to(device)
criterion_weighted = nn.CrossEntropyLoss(weight=weights)
criterion = nn.CrossEntropyLoss()
# Define optimizer
# optimizer = optim.SGD(net.parameters(), lr=0.001)
optimizer = optim.Adam(net.parameters(), lr=0.001, weight_decay=0.00001)
n_epochs = 10  # For demo purposes. Use epochs>100 for actual training
onecycle_scheduler = optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.01, base_momentum=0.8,
                                                   steps_per_epoch=len(train_loader), epochs=n_epochs)
train_model(net, criterion, optimizer, dataloaders, device, num_epochs=n_epochs)  # , scheduler=onecycle_scheduler)


def test_model(model, val_loader, device):
    # Turn autograd off
    with torch.no_grad():
        # Set the model to evaluation mode
        model = model.to(device)
        model.eval()
        # Set up lists to store true and predicted values
        y_true = []
        test_preds = []
        # Calculate the predictions on the test set and add to list
        for data in val_loader:
            inputs, labels = data[0].to(device), data[1].to(device)
            # Feed inputs through model to get raw scores
            logits = model.forward(inputs)
            # print(f'Logits: {logits}')
            # Convert raw scores to probabilities (not strictly necessary since
            # we just care about the argmax in this case)
            probs = F.log_softmax(logits, dim=1)
            # print(f'Probs after LogSoft: {probs}')
            # Get discrete predictions using argmax
            preds = np.argmax(probs.cpu().numpy(), axis=1)
            # Add predictions and actuals to lists
            test_preds.extend(preds)
            y_true.extend(labels)
        # Calculate the accuracy
        test_preds = np.array(test_preds)
        y_true = np.array(y_true)
        test_acc = np.sum(test_preds == y_true) / y_true.shape[0]
        # Recall for each class
        recall_vals = []
        for i in range(3):
            class_idx = np.argwhere(y_true == i)
            total = len(class_idx)
            correct = np.sum(test_preds[class_idx] == i)
            recall = correct / total
            recall_vals.append(recall)
    return test_acc, recall_vals


# Calculate the validation set accuracy and recall for each class
acc, recall_vals = test_model(net, val_loader, device)
print('Test set accuracy is {:.3f}'.format(acc))
for i in range(3):
    print('For class {}, recall is {}'.format(classes[i], recall_vals[i]))


from sklearn.metrics import confusion_matrix

def plot_confusion_matrix(cm, target_names, title='Confusion matrix', cmap=None, normalize=True):
    """
    Given a sklearn confusion matrix (cm), make a nice plot.

    Arguments
    ---------
    cm:           confusion matrix from sklearn.metrics.confusion_matrix
    target_names: given classification classes such as [0, 1, 2];
                  the class names, for example: ['high', 'medium', 'low']
    title:        the text to display at the top of the matrix
    cmap:         the gradient of the values displayed from matplotlib.pyplot.cm
                  see http://matplotlib.org/examples/color/colormaps_reference.html
                  plt.get_cmap('jet') or plt.cm.Blues
    normalize:    if False, plot the raw numbers; if True, plot the proportions

    Usage
    -----
    plot_confusion_matrix(cm=cm,                       # confusion matrix created by
                                                       # sklearn.metrics.confusion_matrix
                          normalize=True,              # show proportions
                          target_names=y_labels_vals,  # list of names of the classes
                          title=best_estimator_name)   # title of graph

    Citation
    --------
    http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
    """
    import matplotlib.pyplot as plt
    import numpy as np
    import itertools

    accuracy = np.trace(cm) / np.sum(cm).astype('float')
    misclass = 1 - accuracy
    if cmap is None:
        cmap = plt.get_cmap('Blues')
    plt.figure(figsize=(8, 6))
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    if target_names is not None:
        tick_marks = np.arange(len(target_names))
        plt.xticks(tick_marks, target_names, rotation=45)
        plt.yticks(tick_marks, target_names)
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    thresh = cm.max() / 1.5 if normalize else cm.max() / 2
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        if normalize:
            plt.text(j, i, "{:0.4f}".format(cm[i, j]),
                     horizontalalignment="center",
                     color="white" if cm[i, j] > thresh else "black")
        else:
            plt.text(j, i, "{:,}".format(cm[i, j]),
                     horizontalalignment="center",
                     color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
    plt.show()


nb_classes = 9

# Initialize the prediction and label lists (tensors)
predlist = torch.zeros(0, dtype=torch.long, device='cpu')
lbllist = torch.zeros(0, dtype=torch.long, device='cpu')
with torch.no_grad():
    for i, (inputs, classes) in enumerate(dataloaders['val']):
        inputs = inputs.to(device)
        classes = classes.to(device)
        outputs = net.forward(inputs)
        _, preds = torch.max(outputs, 1)
        # Append batch prediction results
        predlist = torch.cat([predlist, preds.view(-1).cpu()])
        lbllist = torch.cat([lbllist, classes.view(-1).cpu()])

# Confusion matrix
conf_mat = confusion_matrix(lbllist.numpy(), predlist.numpy())
plot_confusion_matrix(conf_mat, [0, 1, 2])

from sklearn.metrics import precision_score
precision_score(lbllist.numpy(), predlist.numpy(), average='weighted')

from sklearn.metrics import classification_report
print(classification_report(lbllist.numpy(), predlist.numpy(), target_names=["down", "up", "flat"], digits=4))

train_x = torch.from_numpy(X_train).float()
train_y = torch.from_numpy(y_train).long()
val_x = torch.from_numpy(X_val).float()
val_y = torch.from_numpy(y_val).long()

# defining the model
model = net
# defining the optimizer
optimizer = Adam(model.parameters(), lr=0.07)
# defining the loss function
criterion = CrossEntropyLoss()
# checking if GPU is available
if torch.cuda.is_available():
    model = model.cuda()
    criterion = criterion.cuda()

from torch.autograd import Variable

def train(epoch, train_x, train_y, val_x, val_y):
    model.train()
    tr_loss = 0
    # getting the training set
    x_train, y_train = Variable(train_x), Variable(train_y)
    # getting the validation set
    x_val, y_val = Variable(val_x), Variable(val_y)
    # converting the data into GPU format
    if torch.cuda.is_available():
        x_train = x_train.cuda()
        y_train = y_train.cuda()
        x_val = x_val.cuda()
        y_val = y_val.cuda()
    # clearing the gradients of the model parameters
    optimizer.zero_grad()
    # prediction for training and validation set
    output_train = model(x_train)
    output_val = model(x_val)
    # computing the training and validation loss
    loss_train = criterion(output_train, y_train)
    loss_val = criterion(output_val, y_val)
    train_losses.append(loss_train.item())  # .item() so the plot below gets scalars
    val_losses.append(loss_val.item())
    # computing the updated weights of all the model parameters
    loss_train.backward()
    optimizer.step()
    tr_loss = loss_train.item()
    if epoch % 2 == 0:
        # printing the validation loss
        print('Epoch : ', epoch + 1, '\t', 'loss :', loss_val)

# defining the number of epochs
n_epochs = 100
# empty list to store training losses
train_losses = []
# empty list to store validation losses
val_losses = []
# training the model (the float/long tensors, not the raw numpy arrays, are passed here)
for epoch in range(n_epochs):
    train(epoch, train_x, train_y, val_x, val_y)

# plotting the training and validation loss
plt.plot(train_losses, label='Training loss')
plt.plot(val_losses, label='Validation loss')
plt.legend()
plt.show()

from sklearn.metrics import accuracy_score
from tqdm import tqdm

with torch.no_grad():
    output = model(val_x.cuda())  # the tensor version of X_val (assumes a GPU, as the original did)
softmax = torch.exp(output).cpu()
prob = list(softmax.numpy())
predictions = np.argmax(prob, axis=1)
# accuracy on validation set
accuracy_score(y_val, predictions)

# defining the number of epochs
n_epochs = 25
# empty list to store training losses
train_losses = []
# empty list to store validation losses
val_losses = []
# training the model (the original called train(epoch) with no data arguments, which would fail)
for epoch in range(n_epochs):
    train(epoch, train_x, train_y, val_x, val_y)

# train_model as defined above is reused here; the original redefined an identical
# copy with the scheduler argument removed (verbatim duplicate dropped).

# Train the model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Cross entropy loss combines softmax and nn.NLLLoss() in one single class.
weights = torch.tensor([1.75, 2.25, 1.]).to(device)
criterion_weighted = nn.CrossEntropyLoss(weight=weights)
criterion = nn.CrossEntropyLoss()
# Define optimizer
# optimizer = optim.SGD(net.parameters(), lr=0.001)
optimizer = optim.Adam(net.parameters(), lr=0.001, weight_decay=0.00001)
n_epochs = 10  # For demo purposes. Use epochs>100 for actual training
# onecycle_scheduler = optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.01, base_momentum=0.8,
#                                                    steps_per_epoch=len(train_loader), epochs=n_epochs)
train_model(net, criterion, optimizer, dataloaders, device, num_epochs=n_epochs)  # , scheduler=onecycle_scheduler)


def test_model(model, val_loader, device):
    # Turn autograd off
    with torch.no_grad():
        # Set the model to evaluation mode
        model.eval()
        # Set up lists to store true and predicted values
        y_true = []
        test_preds = []
        # Calculate the predictions on the test set and add to list
        for data in val_loader:
            inputs, labels = data[0].to(device), data[1].to(device)
            # Feed inputs through model to get raw scores
            logits = model.forward(inputs)
            # print(f'Logits: {logits}')
            # Convert raw scores to probabilities (argmax is unaffected)
            probs = F.softmax(logits, dim=0)
            # print(f'Probs after LogSoft: {probs}')
            # Get discrete predictions using argmax
            preds = np.argmax(probs.cpu().numpy(), axis=1)
            # Add predictions and actuals to lists
            test_preds.extend(preds)
            y_true.extend(labels)
        # Calculate the accuracy
        test_preds = np.array(test_preds)
        y_true = np.array(y_true)
        test_acc = np.sum(test_preds == y_true) / y_true.shape[0]
        # Recall for each class (only the down/up classes here)
        recall_vals = []
        for i in range(2):
            class_idx = np.argwhere(y_true == i)
            total = len(class_idx)
            correct = np.sum(test_preds[class_idx] == i)
            recall = correct / total
            recall_vals.append(recall)
    return test_acc, recall_vals


# Calculate the test set accuracy and recall for each class
acc, recall_vals = test_model(net, test_loader, device)
print('Test set accuracy is {:.3f}'.format(acc))
for i in range(2):
    print('For class {}, recall is {}'.format(classes[i], recall_vals[i]))


import time

def train(model, optimizer, loss_fn, train_dl, val_dl, epochs=100, device='cpu'):
    print('train() called: model=%s, opt=%s(lr=%f), epochs=%d, device=%s\n' %
          (type(model).__name__, type(optimizer).__name__,
           optimizer.param_groups[0]['lr'], epochs, device))
    history = {}  # Collects per-epoch loss and acc like Keras' fit().
    history['loss'] = []
    history['val_loss'] = []
    history['acc'] = []
    history['val_acc'] = []
    start_time_sec = time.time()
    for epoch in range(1, epochs + 1):
        # --- TRAIN AND EVALUATE ON TRAINING SET -----------------------------
        model.train()
        train_loss = 0.0
        num_train_correct = 0
        num_train_examples = 0
        for batch in train_dl:
            optimizer.zero_grad()
            x = batch[0].to(device)
            y = batch[1].to(device)
            yhat = model(x)
            loss = loss_fn(yhat, y)
            loss.backward()
            optimizer.step()
            train_loss += loss.data.item() * x.size(0)
            num_train_correct += (torch.max(yhat, 1)[1] == y).sum().item()
            num_train_examples += x.shape[0]
        train_acc = num_train_correct / num_train_examples
        train_loss = train_loss / len(train_dl.dataset)
        # --- EVALUATE ON VALIDATION SET -------------------------------------
        model.eval()
        val_loss = 0.0
        num_val_correct = 0
        num_val_examples = 0
        for batch in val_dl:
            x = batch[0].to(device)
            y = batch[1].to(device)
            yhat = model(x)
            loss = loss_fn(yhat, y)
            val_loss += loss.data.item() * x.size(0)
            num_val_correct += (torch.max(yhat, 1)[1] == y).sum().item()
            num_val_examples += y.shape[0]
        val_acc = num_val_correct / num_val_examples
        val_loss = val_loss / len(val_dl.dataset)
        if epoch == 1 or epoch % 10 == 0:
            print('Epoch %3d/%3d, train loss: %5.2f, train acc: %5.2f, val loss: %5.2f, val acc: %5.2f' %
                  (epoch, epochs, train_loss, train_acc, val_loss, val_acc))
        history['loss'].append(train_loss)
        history['val_loss'].append(val_loss)
        history['acc'].append(train_acc)
        history['val_acc'].append(val_acc)
    # END OF TRAINING LOOP
    end_time_sec = time.time()
    total_time_sec = end_time_sec - start_time_sec
    time_per_epoch_sec = total_time_sec / epochs
    print()
    print('Time total:     %5.2f sec' % (total_time_sec))
    print('Time per epoch: %5.2f sec' % (time_per_epoch_sec))
    return history


y_flat_num = y_train[np.where(y_train == 2)].size
y_down_weight = round((y_flat_num / y_train[np.where(y_train == 0)].size) * 1.2, 3)
y_up_weight = round((y_flat_num / y_train[np.where(y_train == 1)].size) * 1.5, 3)
print(y_down_weight, y_up_weight, 1)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = net.to(device)
criterion = nn.CrossEntropyLoss()
# weights = torch.tensor([y_down_weight, y_up_weight, 1.]).to(device)
# criterion_weighted = nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.Adam(net.parameters(), lr=0.001, weight_decay=0.00001)
epochs = 20
history = train(model=model, optimizer=optimizer, loss_fn=criterion,
                train_dl=train_loader, val_dl=test_loader, epochs=epochs, device=device)

import matplotlib.pyplot as plt
acc = history['acc']
val_acc = history['val_acc']
loss = history['loss']
val_loss = history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'b', label='Training acc')
plt.plot(epochs, val_acc, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()


def test_model(model, val_loader, device):
    # Turn autograd off
    with torch.no_grad():
        # Set the model to evaluation mode
        model = model.to(device)
        model.eval()
        # Set up lists to store true and predicted values
        y_true = []
        test_preds = []
        # Calculate the predictions on the test set and add to list
        for data in val_loader:
            inputs, labels = data[0].to(device), data[1].to(device)
            # Feed inputs through model to get raw scores
            logits = model.forward(inputs)
            # print(f'Logits: {logits}')
            # Convert raw scores to probabilities (dim=1 softmaxes over classes;
            # the original omitted dim, which is deprecated)
            probs = F.softmax(logits, dim=1)
            # print(f'Probs after LogSoft: {probs}')
            # Get discrete predictions using argmax
            preds = np.argmax(probs.cpu().numpy(), axis=1)
            # Add predictions and actuals to lists
            test_preds.extend(preds)
            y_true.extend(labels)
        # Calculate the accuracy
        test_preds = np.array(test_preds)
        y_true = np.array(y_true)
        test_acc = np.sum(test_preds == y_true) / y_true.shape[0]
        # Recall for each class
        recall_vals = []
        for i in range(2):
            class_idx = np.argwhere(y_true == i)
            total = len(class_idx)
            correct = np.sum(test_preds[class_idx] == i)
            recall = correct / total
            recall_vals.append(recall)
    return test_acc, recall_vals


# Calculate the test set accuracy and recall for each class
acc, recall_vals = test_model(model, test_loader, device)
print('Test set accuracy is {:.3f}'.format(acc))
for i in range(2):
    print('For class {}, recall is {}'.format(classes[i], recall_vals[i]))

from sklearn.metrics import confusion_matrix
# plot_confusion_matrix as defined earlier is reused here (verbatim duplicate dropped).

nb_classes = 2

# Initialize the prediction and label lists (tensors)
predlist = torch.zeros(0, dtype=torch.long, device='cpu')
lbllist = torch.zeros(0, dtype=torch.long, device='cpu')
with torch.no_grad():
    for i, (inputs, classes) in enumerate(dataloaders['val']):
        # print(inputs)
        inputs = inputs.to(device)
        classes = classes.to(device)
        outputs = model.forward(inputs)
        # print(outputs)
        _, preds = torch.max(outputs, 1)
        # Append batch prediction results
        predlist = torch.cat([predlist, preds.view(-1).cpu()])
        lbllist = torch.cat([lbllist, classes.view(-1).cpu()])

# Confusion matrix
conf_mat = confusion_matrix(lbllist.numpy(), predlist.numpy())
plot_confusion_matrix(conf_mat, [0, 1])

from sklearn.metrics import precision_score
precision_score(lbllist.numpy(), predlist.numpy(), average='weighted')

from sklearn.metrics import classification_report
print(classification_report(lbllist.numpy(), predlist.numpy(), target_names=["down", "up"], digits=4))
Spark on Kubernetes

Preparing the notebook: https://towardsdatascience.com/make-kubeflow-into-your-own-data-science-workspace-cc8162969e29

Setup service account permissions: https://github.com/kubeflow/kubeflow/issues/4306 (issue with launching spark-operator from a Jupyter notebook).

Run these commands in your shell (not in the notebook):

```shell
export NAMESPACE=  # set this to your namespace (value elided in the source)
kubectl create serviceaccount spark -n ${NAMESPACE}
kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=${NAMESPACE}:spark --namespace=${NAMESPACE}
```

Python version

> Note: Make sure your driver Python and executor Python versions match.
> Otherwise, you will see an error message like the one below.

Exception: Python in worker has different version 3.7 than that in driver 3.6, PySpark cannot run with different minor versions. Please check environment variables `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` are correctly set.
import sys
print(sys.version)
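One way to guard against the driver/executor mismatch described above, at least in a client-mode setup where the driver and the executor image ship the same Python, is to pin both sides explicitly before the session is created; `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` are the standard PySpark environment variables for this (using `sys.executable` is our suggestion, not part of the original notebook):

```python
import os
import sys

# Point both the driver and the workers at the same Python interpreter.
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable
print(sys.version_info[:2])
```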
Client Mode
import findspark, pyspark, socket
import random  # needed by inside() below; missing from the original cell
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession

findspark.init()
localIpAddress = socket.gethostbyname(socket.gethostname())
conf = SparkConf().setAppName('sparktest1')
conf.setMaster('k8s://https://kubernetes.default.svc:443')
conf.set("spark.submit.deployMode", "client")
conf.set("spark.executor.instances", "2")
conf.set("spark.driver.host", localIpAddress)
conf.set("spark.driver.port", "7778")
conf.set("spark.kubernetes.namespace", "yahavb")
conf.set("spark.kubernetes.container.image", "seedjeffwan/spark-py:v2.4.6")
conf.set("spark.kubernetes.pyspark.pythonVersion", "3")
conf.set("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
conf.set("spark.kubernetes.executor.annotation.sidecar.istio.io/inject", "false")
sc = pyspark.context.SparkContext.getOrCreate(conf=conf)
# The following works as well:
# spark = SparkSession.builder.config(conf=conf).getOrCreate()

num_samples = 100000

def inside(p):
    x, y = random.random(), random.random()
    return x*x + y*y < 1

count = sc.parallelize(range(0, num_samples)).filter(inside).count()
sc.stop()
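The collected `count` can be turned into the usual Monte Carlo estimate of Ο€; since `count` is a plain Python int once `.count()` returns, this follow-up works even after `sc.stop()` (the print is our addition):

```python
# Monte Carlo estimate: the fraction of points inside the quarter circle approximates pi/4.
print("Pi is roughly", 4.0 * count / num_samples)
```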
Cluster Mode

Java
%%bash
/opt/spark-2.4.6/bin/spark-submit --master "k8s://https://kubernetes.default.svc:443" \
  --deploy-mode cluster \
  --name spark-java-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=30 \
  --conf spark.kubernetes.namespace=yahavb \
  --conf spark.kubernetes.driver.annotation.sidecar.istio.io/inject=false \
  --conf spark.kubernetes.executor.annotation.sidecar.istio.io/inject=false \
  --conf spark.kubernetes.container.image=seedjeffwan/spark:v2.4.6 \
  --conf spark.kubernetes.driver.pod.name=spark-java-pi-driver \
  --conf spark.kubernetes.executor.request.cores=4 \
  --conf spark.kubernetes.node.selector.computetype=gpu \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.4.6.jar 262144

%%bash
kubectl -n yahavb delete po `kubectl -n yahavb get po | grep spark-java-pi-driver | awk '{print $1}'`
Python
%%bash
/opt/spark-2.4.6/bin/spark-submit --master "k8s://https://kubernetes.default.svc:443" \
  --deploy-mode cluster \
  --name spark-python-pi \
  --conf spark.executor.instances=50 \
  --conf spark.kubernetes.container.image=seedjeffwan/spark-py:v2.4.6 \
  --conf spark.kubernetes.driver.pod.name=spark-python-pi-driver \
  --conf spark.kubernetes.namespace=yahavb \
  --conf spark.kubernetes.driver.annotation.sidecar.istio.io/inject=false \
  --conf spark.kubernetes.executor.annotation.sidecar.istio.io/inject=false \
  --conf spark.kubernetes.pyspark.pythonVersion=3 \
  --conf spark.kubernetes.executor.request.cores=4 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  /opt/spark/examples/src/main/python/pi.py 64000

%%bash
kubectl -n yahavb delete po `kubectl -n yahavb get po | grep spark-python-pi-driver | awk '{print $1}'`
Designing the maze
# assumes numpy (np) and the repo's Maze class were imported in an earlier cell
arr = np.array([[0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0],
                [0,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
                [0,1,0,0,1,0,0,1,1,1,1,1,0,1,1,0,1,1,1,0],
                [0,1,0,0,1,0,0,0,0,0,1,0,0,1,0,0,1,0,0,0],
                [0,0,0,0,1,0,0,1,1,1,0,0,1,1,0,0,1,0,0,0],
                [0,0,0,0,0,0,0,1,0,0,0,1,0,1,0,1,1,0,1,1],
                [1,1,1,0,1,1,0,1,0,0,1,0,0,1,0,0,1,0,0,0],
                [0,0,1,0,1,0,1,0,0,1,0,0,0,0,0,0,1,0,1,0],
                [0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,1,0,1,0],
                [0,0,0,0,1,0,0,1,0,0,0,0,0,1,1,1,0,0,0,0],
                [1,0,1,1,1,0,1,0,0,1,0,0,0,1,0,0,0,1,0,0],
                [1,0,1,1,1,0,1,0,0,1,0,0,1,1,0,0,0,1,0,0],
                [1,0,0,1,0,0,1,0,0,1,0,0,1,0,0,1,0,0,0,0],
                [0,0,0,0,1,0,1,0,0,1,1,0,1,0,0,0,1,1,1,0],
                [0,0,1,1,1,0,1,0,0,1,0,1,0,0,1,1,0,0,0,0],
                [0,1,1,0,0,0,0,1,0,1,0,0,1,1,0,1,0,1,1,1],
                [0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,1,0,0,0,0],
                [0,0,1,1,1,0,1,1,0,0,1,0,1,0,0,1,1,0,0,0],
                [1,0,0,0,1,0,1,0,0,0,1,0,1,0,0,1,1,1,0,0],
                [1,1,0,0,0,0,1,0,0,0,1,0,1,0,0,0,0,1,0,0]
                ], dtype=float)

# Position of the rat
rat = (0, 0)
# If cheese is None, the cheese is placed in the bottom-right cell of the maze
cheese = None
# The Maze object takes the maze array, the rat position, and the cheese position
maze = Maze(arr, rat, cheese)
maze.show_maze()
Defining an agent (a SarsaAgent, because it uses SARSA to solve the maze)
agent = SarsaAgent(maze)
Making the agent play episodes and learn
agent.learn(episodes=1000)
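For reference, the heart of what a SARSA agent does inside `learn()` is the on-policy temporal-difference update below; the real `SarsaAgent` internals live in this repo and may differ, so treat this as an illustrative sketch only:

```python
# Illustrative SARSA update (names hypothetical; not the repo's actual code).
# After taking action a in state s, observing reward r, landing in s_next,
# and choosing the next action a_next from the current policy:
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    td_target = r + gamma * Q[s_next][a_next]   # bootstrap on the action actually chosen
    Q[s][a] += alpha * (td_target - Q[s][a])    # move Q(s, a) toward the TD target
```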
Plotting the maze
nrow = maze.nrow
ncol = maze.ncol
fig = plt.figure()
ax = fig.gca()
ax.set_xticks(np.arange(0.5, ncol, 1))
ax.set_yticks(np.arange(0.5, nrow, 1))
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.grid('on')
img = ax.imshow(maze.maze, cmap="gray")
a = 5
Making an animation of the maze solution
# likely imported earlier in the notebook; added here so the cell is self-contained
import matplotlib.animation as animation
from IPython.display import HTML

def gen_func():
    # replay the learned policy step by step, yielding a frame per move
    maze = Maze(arr, rat, cheese)
    done = False
    while not done:
        row, col, _ = maze.state
        cell = (row, col)
        action = agent.get_policy(cell)
        maze.step(action)
        done = maze.get_status()
        yield maze.get_canvas()

def update_plot(canvas):
    img.set_data(canvas)

anim = animation.FuncAnimation(fig, update_plot, gen_func)
HTML(anim.to_html5_video())
anim.save("big_maze.gif", animation.PillowWriter())
_____no_output_____
MIT
run.ipynb
burhanusman/RL-Maze
VAE MNIST example: BO in a latent space In this tutorial, we use the MNIST dataset and some standard PyTorch examples to show a synthetic problem where the input to the objective function is a `28 x 28` image. The main idea is to train a [variational auto-encoder (VAE)](https://arxiv.org/abs/1312.6114) on the MNIST dataset and run Bayesian Optimization in the latent space. We also refer readers to [this tutorial](http://krasserm.github.io/2018/04/07/latent-space-optimization/), which discusses [the method](https://arxiv.org/abs/1610.02415) of jointly training a VAE with a predictor (e.g., classifier), and shows a similar tutorial for the MNIST setting.
import os import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets # transforms device = torch.device("cuda" if torch.cuda.is_available() else "cpu") dtype = torch.float
_____no_output_____
MIT
tutorials/vae_mnist.ipynb
Igevorse/botorch
Problem setup Let's first define our synthetic expensive-to-evaluate objective function. We assume that it takes the following form:$$\text{image} \longrightarrow \text{image classifier} \longrightarrow \text{scoring function} \longrightarrow \text{score}.$$The classifier is a convolutional neural network (CNN) trained using the architecture of the [PyTorch CNN example](https://github.com/pytorch/examples/tree/master/mnist).
class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5, 1) self.conv2 = nn.Conv2d(20, 50, 5, 1) self.fc1 = nn.Linear(4 * 4 * 50, 500) self.fc2 = nn.Linear(500, 10) def forward(self, x): x = F.relu(self.conv1(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2, 2) x = x.view(-1, 4*4*50) x = F.relu(self.fc1(x)) x = self.fc2(x) return F.log_softmax(x, dim=1)
_____no_output_____
MIT
tutorials/vae_mnist.ipynb
Igevorse/botorch
We next instantiate the CNN for digit recognition and load a pre-trained model. Here, you may have to change `PRETRAINED_LOCATION` to the location of the `pretrained_models` folder on your machine.
PRETRAINED_LOCATION = "./pretrained_models" cnn_model = Net().to(device) cnn_state_dict = torch.load(os.path.join(PRETRAINED_LOCATION, "mnist_cnn.pt"), map_location=device) cnn_model.load_state_dict(cnn_state_dict);
_____no_output_____
MIT
tutorials/vae_mnist.ipynb
Igevorse/botorch
Our VAE model follows the [PyTorch VAE example](https://github.com/pytorch/examples/tree/master/vae), except that we use the same data transform from the CNN tutorial for consistency. We then instantiate the model and again load a pre-trained model. To train these models, we refer readers to the PyTorch Github repository.
class VAE(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 400) self.fc21 = nn.Linear(400, 20) self.fc22 = nn.Linear(400, 20) self.fc3 = nn.Linear(20, 400) self.fc4 = nn.Linear(400, 784) def encode(self, x): h1 = F.relu(self.fc1(x)) return self.fc21(h1), self.fc22(h1) def reparameterize(self, mu, logvar): std = torch.exp(0.5*logvar) eps = torch.randn_like(std) return mu + eps*std def decode(self, z): h3 = F.relu(self.fc3(z)) return torch.sigmoid(self.fc4(h3)) def forward(self, x): mu, logvar = self.encode(x.view(-1, 784)) z = self.reparameterize(mu, logvar) return self.decode(z), mu, logvar vae_model = VAE().to(device) vae_state_dict = torch.load(os.path.join(PRETRAINED_LOCATION, "mnist_vae.pt"), map_location=device) vae_model.load_state_dict(vae_state_dict);
_____no_output_____
MIT
tutorials/vae_mnist.ipynb
Igevorse/botorch
We now define the scoring function that maps digits to scores. The function below prefers the digit '3'.
def score(y): """Returns a 'score' for each digit from 0 to 9. It is modeled as a squared exponential centered at the digit '3'. """ return torch.exp(-2 * (y - 3)**2)
_____no_output_____
MIT
tutorials/vae_mnist.ipynb
Igevorse/botorch
Given the scoring function, we can now write our overall objective, which, as discussed above, starts with an image and outputs a score. Let's say the objective computes the expected score given the probabilities from the classifier.
def score_image_recognition(x): """The input x is an image and an expected score based on the CNN classifier and the scoring function is returned. """ with torch.no_grad(): probs = torch.exp(cnn_model(x)) # b x 10 scores = score(torch.arange(10, device=device, dtype=dtype)).expand(probs.shape) return (probs * scores).sum(dim=1)
_____no_output_____
MIT
tutorials/vae_mnist.ipynb
Igevorse/botorch
Finally, we define a helper function `decode` that takes a batch of points in the 20-dimensional latent space and decodes them into `28 x 28` images (note that, unlike during VAE training, no reparameterization is performed here). We use batched Bayesian optimization to search over this latent space.
def decode(train_x): with torch.no_grad(): decoded = vae_model.decode(train_x) return decoded.view(train_x.shape[0], 1, 28, 28)
_____no_output_____
MIT
tutorials/vae_mnist.ipynb
Igevorse/botorch
Model initialization and initial random batch We use a `SingleTaskGP` to model the score of an image generated by a latent representation. The model is initialized with points drawn from $[-6, 6]^{20}$.
from botorch.models import SingleTaskGP from gpytorch.mlls.exact_marginal_log_likelihood import ExactMarginalLogLikelihood bounds = torch.tensor([[-6.0] * 20, [6.0] * 20], device=device, dtype=dtype) def initialize_model(n=5): # generate training data train_x = (bounds[1] - bounds[0]) * torch.rand(n, 20, device=device, dtype=dtype) + bounds[0] train_obj = score_image_recognition(decode(train_x)) best_observed_value = train_obj.max().item() # define models for objective and constraint model = SingleTaskGP(train_X=train_x, train_Y=train_obj) model = model.to(train_x) mll = ExactMarginalLogLikelihood(model.likelihood, model) mll = mll.to(train_x) return train_x, train_obj, mll, model, best_observed_value
_____no_output_____
MIT
tutorials/vae_mnist.ipynb
Igevorse/botorch
Define a helper function that performs the essential BO step The helper function below takes an acquisition function as an argument, optimizes it, and returns the batch $\{x_1, x_2, \ldots x_q\}$ along with the observed function values. For this example, we'll use a small batch of $q=3$.
from botorch.optim import joint_optimize BATCH_SIZE = 3 def optimize_acqf_and_get_observation(acq_func): """Optimizes the acquisition function, and returns a new candidate and a noisy observation""" # optimize candidates = joint_optimize( acq_function=acq_func, bounds=bounds, q=BATCH_SIZE, num_restarts=10, raw_samples=200, ) # observe new values new_x = candidates.detach() new_obj = score_image_recognition(decode(new_x)) return new_x, new_obj
_____no_output_____
MIT
tutorials/vae_mnist.ipynb
Igevorse/botorch
Perform Bayesian Optimization loop with qEI The Bayesian optimization "loop" for a batch size of $q$ simply iterates the following steps: (1) given a surrogate model, choose a batch of points $\{x_1, x_2, \ldots x_q\}$, (2) observe $f(x)$ for each $x$ in the batch, and (3) update the surrogate model. We run `N_BATCH=50` iterations. The acquisition function is approximated using `MC_SAMPLES=2000` samples. We also initialize the model with 5 randomly drawn points.
from botorch import fit_gpytorch_model from botorch.acquisition.monte_carlo import qExpectedImprovement from botorch.acquisition.sampler import SobolQMCNormalSampler seed=1 torch.manual_seed(seed) N_BATCH = 50 MC_SAMPLES = 2000 best_observed = [] # call helper function to initialize model train_x, train_obj, mll, model, best_value = initialize_model(n=5) best_observed.append(best_value)
_____no_output_____
MIT
tutorials/vae_mnist.ipynb
Igevorse/botorch
We are now ready to run the BO loop (this may take a few minutes, depending on your machine).
import warnings
warnings.filterwarnings("ignore")

print("\nRunning BO ", end='')

from matplotlib import pyplot as plt

# run N_BATCH rounds of BayesOpt after the initial random batch
for iteration in range(N_BATCH):

    # fit the model
    fit_gpytorch_model(mll)

    # define the qEI acquisition module using a QMC sampler
    qmc_sampler = SobolQMCNormalSampler(num_samples=MC_SAMPLES, seed=seed)
    qEI = qExpectedImprovement(model=model, sampler=qmc_sampler, best_f=best_value)

    # optimize and get new observation
    new_x, new_obj = optimize_acqf_and_get_observation(qEI)

    # update training points
    train_x = torch.cat((train_x, new_x))
    train_obj = torch.cat((train_obj, new_obj))

    # update progress
    best_value = score_image_recognition(decode(train_x)).max().item()
    best_observed.append(best_value)

    # reinitialize the model so it is ready for fitting on the next iteration
    model.set_train_data(train_x, train_obj, strict=False)

    print(".", end='')
Running BO ..................................................
MIT
tutorials/vae_mnist.ipynb
Igevorse/botorch
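Since `best_observed` records the running best score after the initial batch and after every BO iteration, a quick convergence plot is a natural sanity check. This is an illustrative addition using only variables defined above, not part of the original tutorial.

# Illustrative sanity check: best observed score versus BO iteration,
# using the best_observed list populated in the loop above.
plt.figure(figsize=(8, 5))
plt.plot(range(len(best_observed)), best_observed, marker='.')
plt.xlabel('iteration (0 = initial random batch)')
plt.ylabel('best observed score')
plt.title('qEI progress in the VAE latent space')
plt.show()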
EI recommends the best point observed so far. We can visualize what the images corresponding to recommended points *would have* been if the BO process ended at various times. Here, we show the progress of the algorithm by examining the images at 0%, 10%, 25%, 50%, 75%, and 100% completion. The first image is the best image found through the initial random batch.
import numpy as np from matplotlib import pyplot as plt %matplotlib inline fig, ax = plt.subplots(1, 6, figsize=(14, 14)) percentages = np.array([0, 10, 25, 50, 75, 100], dtype=np.float32) inds = (N_BATCH * BATCH_SIZE * percentages / 100 + 4).astype(int) for i, ax in enumerate(ax.flat): b = torch.argmax(score_image_recognition(decode(train_x[:inds[i],:])), dim=0) img = decode(train_x[b].view(1, -1)).squeeze().cpu() ax.imshow(img, alpha=0.8, cmap='gray')
_____no_output_____
MIT
tutorials/vae_mnist.ipynb
Igevorse/botorch
Ensemble Learning Sometimes aggregates or ensembles of many different opinions on a question can perform as well as or better than a single expert asked the same question. This is known as the *wisdom of the crowd*: the aggregate opinion of people on a question predicts some outcome as well as or better than a single isolated expert. Likewise, for machine learning predictors, a similar effect can often occur. The aggregate performance of multiple predictors can often make a small but significant improvement over a single classifier or regression predictor on a complex set of data. A group of machine learning predictors is called an *ensemble*, and thus this technique of combining the predictions of an ensemble is known as *Ensemble Learning*. For example, we could train a group of Decision Tree classifiers, each on a different random subset of the training data. To make an ensemble prediction, you just obtain the predictions of all individual trees, then predict the class that gets the most votes. Such an ensemble of Decision Trees is called a *Random Forest*, and despite the relative simplicity of decision tree predictors, it can be surprisingly powerful as a ML predictor. Voting Classifiers Say you have several classifiers for the same classification problem (say a Logistic Classifier, an SVM, a Decision Tree and a KNN classifier, and perhaps a few more). The simplest way to create an ensemble classifier is to aggregate the predictions of each classifier and predict the class that gets the most votes. This majority-vote classifier is called a *hard voting* classifier. Somewhat surprisingly, this voting classifier often achieves a higher accuracy than the best classifier in the ensemble. In fact, even if each classifier is a *weak learner* (meaning it only does slightly better than random guessing), the ensemble can still be a *strong learner* (achieving high accuracy). The key to making good ensemble predictors is that you need a sufficient number of learners (even of weak learners), but also, maybe more importantly, the learners need to be sufficiently diverse, where "diverse" is a bit fuzzy to define; in general the classifiers must be as independent as possible, so that even if they are weak predictors, they are weak in different and diverse ways. The coin-flip simulation below illustrates this, and a quick binomial check of the weak-learner intuition follows it.
def flip_unfair_coin(num_flips, head_ratio):
    """Simulate flipping an unbalanced coin.

    We return a numpy array of size num_flips, with 1 to represent a head
    and 0 a tail flip. We generate a head or tail result by comparing a draw
    from a standard uniform distribution to the head_ratio probability
    threshold.
    """
    # array of correct size to hold resulting simulated flips
    flips = np.empty(num_flips)

    # flip the coin the number of indicated times
    for flip in range(num_flips):
        flips[flip] = np.random.random() < head_ratio

    # return the resulting coin flip trials
    return flips

def running_heads_ratio(flips):
    """Given a sequence of flips, where 1 represents a "Head" and 0 a "Tail"
    flip, return an array of the running ratio of heads to total flips.
    """
    # array of correct size to hold the heads ratio seen at each point in the flips sequence
    num_flips = flips.shape[0]
    head_ratios = np.empty(num_flips)

    # keep track of number of heads seen so far, the ratio is num_heads / num_flips
    num_heads = 0.0

    # calculate ratio for each flips instance in the sequence
    for flip in range(num_flips):
        num_heads += flips[flip]
        head_ratios[flip] = num_heads / (flip + 1)

    # return the resulting sequence of head ratios seen in the flips
    return head_ratios

NUM_FLIPPERS = 10
NUM_FLIPS = 10000
HEAD_PERCENT = 0.51

# create NUM_FLIPPERS separate sequences of flippers
flippers = np.empty( (NUM_FLIPPERS, NUM_FLIPS) )
for flipper in range(NUM_FLIPPERS):
    flips = flip_unfair_coin(NUM_FLIPS, HEAD_PERCENT)
    head_ratios = running_heads_ratio(flips)
    flippers[flipper] = head_ratios

# create an ensemble, in this case we will average the individual flippers
ensemble = flippers.mean(axis=0)

# plot the resulting head ratio for our flippers
flips = np.arange(1, NUM_FLIPS+1)
for flipper in range(NUM_FLIPPERS):
    plt.plot(flips, flippers[flipper], alpha=0.25)
plt.plot(flips, ensemble, 'b-', alpha=1.0, label='ensemble decision')
plt.ylim([0.42, 0.58])
plt.plot([1, NUM_FLIPS], [HEAD_PERCENT, HEAD_PERCENT], 'k--', label='51 %')
plt.plot([1, NUM_FLIPS], [0.5, 0.5], 'k-', label='50 %')
plt.xlabel('Number of coin tosses')
plt.ylabel('Heads ratio')
plt.legend();
_____no_output_____
CC-BY-3.0
lectures/ng/Lecture-09-Ensembles-Random-Forests.ipynb
tgrasty/CSCI574-Machine-Learning
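To put a number on the weak-learner intuition: if the voters were truly independent, the chance that a majority of them is correct follows a binomial distribution. The quick check below is an illustrative aside (not part of the original lecture).

# Illustrative check: probability that a majority vote of n independent
# classifiers, each correct with probability p, gives the right answer.
from scipy.stats import binom

n, p = 1000, 0.51
p_majority = 1 - binom.cdf(n // 2, n, p)  # P(more than 500 of 1000 correct)
print(f"P(majority correct) = {p_majority:.3f}")  # roughly 0.73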
Scikit-Learn Voting Classifier The following code is an example of creating a voting classifier in Scikit-Learn. We are using the moons dataset shown. Here we create 3 separate classifiers by hand, a logistic regressor, a decision tree, and a support vector classifier (SVC). Notice we specify 'hard' voting for the voting classifier, which, as we discussed, is the simple method of choosing the class with the most votes. (This is a binary classification so 2 out of 3 or 3 out of 3 are the only possibilities. For a multiclass classification, in case of a tie vote, the voting classifier may fall back to the probability scores the classifiers give, assuming they provide probability/confidence measures of their prediction).
# helper functions to visualize decision boundaries for 2-feature classification tasks
# create a scatter plot of the artificial multiclass dataset
from matplotlib import cm

# visualize the blobs using matplotlib. An example of a function we can reuse, since later
# we want to plot the decision boundaries along with the scatter plot data
def plot_multiclass_data(X, y):
    """Create a scatter plot of a set of multiclass data.

    We assume that X has 2 features so that we can plot on a 2D grid, and that
    y are integer labels [0,1,2,...] with a unique integer label for each class
    of the dataset.

    Parameters
    ----------
    X - A (m,2) shaped number array of m samples each with 2 features
    y - A (m,) shaped vector of integers with the labeled classes of each of
        the X input features
    """
    # hardcoded to handle only up to 8 classes
    markers = ['o', '^', 's', 'd', '*', 'p', 'P', 'v']
    #colors = ['r', 'g', 'b', 'c', 'm', 'y', 'k']

    # determine number of samples in the data
    m = X.shape[0]

    # determine the class labels
    labels = np.unique(y)
    #colors = cm.rainbow(np.linspace(0.0, 1.0, labels.size))
    colors = cm.Set1.colors

    # loop to plot each class
    for label, marker, color in zip(labels, markers, colors):
        X_label = X[y == label]
        y_label = y[y == label]
        label_text = 'Class %s' % label
        plt.plot(X_label[:,0], X_label[:,1],
                 marker=marker, markersize=8.0, markeredgecolor='k',
                 color=color, alpha=0.5, linestyle='',
                 label=label_text)

    plt.xlabel('Feature 1')
    plt.ylabel('Feature 2')
    plt.legend();

def plot_multiclass_decision_boundaries(model, X, y):
    """Use a mesh/grid to create a contour plot that will show the decision
    boundaries reached by a trained scikit-learn classifier.

    We expect that the model passed in is a trained scikit-learn classifier
    that supports/implements a predict() method, that will return predictions
    for the given set of X data.

    Parameters
    ----------
    model - A trained scikit-learn classifier that supports prediction using
            a predict() method
    X - A (m,2) shaped number array of m samples each with 2 features
    """
    from matplotlib.colors import ListedColormap

    # determine the class labels
    labels = np.unique(y)
    #colors = cm.rainbow(np.linspace(0.0, 1.0, labels.size))
    #colors = cm.Set1.colors
    newcmp = ListedColormap(plt.cm.Set1.colors[:len(labels)])

    # create the mesh of points to use for the contour plot
    h = .02  # step size in the mesh
    x_min, x_max = X[:, 0].min() - h, X[:, 0].max() + h
    y_min, y_max = X[:, 1].min() - h, X[:, 1].max() + h
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))

    # create the predictions over the mesh using the trained model's predict() function
    Z = model.predict(np.c_[xx.ravel(), yy.ravel()])

    # Create the actual contour plot, which will show the decision boundaries
    Z = Z.reshape(xx.shape)
    plt.contourf(xx, yy, Z, cmap=newcmp, alpha=0.33)
    #plt.colorbar()

from sklearn.datasets import make_moons

X, y = make_moons(n_samples=2500, noise=0.3)

# we will split data using a 75%/25% train/test split this time
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

from sklearn.ensemble import VotingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

log_clf = LogisticRegression(solver='lbfgs', C=5.0)
tree_clf = DecisionTreeClassifier(max_depth=10)
svm_clf = SVC(gamma=100.0, C=1.0)

voting_clf = VotingClassifier(
    estimators=[('lr', log_clf), ('tree', tree_clf), ('svc', svm_clf)],
    voting='hard'
)
voting_clf.fit(X_train, y_train)

plot_multiclass_decision_boundaries(voting_clf, X, y)
plot_multiclass_data(X, y)
_____no_output_____
CC-BY-3.0
lectures/ng/Lecture-09-Ensembles-Random-Forests.ipynb
tgrasty/CSCI574-Machine-Learning
Let's look at each classifier's accuracy on the test set, including the ensemble voting classifier:
from sklearn.metrics import accuracy_score for clf in (log_clf, tree_clf, svm_clf, voting_clf): clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
LogisticRegression 0.8576 DecisionTreeClassifier 0.8768 SVC 0.9056 VotingClassifier 0.904
CC-BY-3.0
lectures/ng/Lecture-09-Ensembles-Random-Forests.ipynb
tgrasty/CSCI574-Machine-Learning
The voting classifier will usually outperform all the individual classifiers, if the data is sufficiently nonseparable to make the problem relatively hard (e.g. with less random noise in the moons data set, you can sometimes get really good performance with random forest and/or SVC, which will exceed the voting classifier). If all classifiers are able to estimate class probabilities (i.e. in `scikit-learn` they support the `predict_proba()` method), then you can tell `scikit-learn` to predict the class with the highest class probability, averaged over all individual classifiers. You can think of this as each classifier having its vote weighted by its confidence in the prediction. This is called *soft voting*. It often achieves higher performance than hard voting because it gives more weight to highly confident votes. All you need to do is replace `voting='hard'` with `voting='soft'` and ensure that all classifiers can estimate class probabilities. If you recall, support vector machine classifiers (`SVC`) do not estimate class probabilities by default, but if you set the `SVC` `probability` hyperparameter to `True`, the `SVC` class will use cross-validation to estimate class probabilities. This slows training, but it makes the `predict_proba()` method valid for `SVC`, and since both logistic regression and decision trees support this confidence estimate, we can then use soft voting for the voting classifier.
log_clf = LogisticRegression(solver='lbfgs', C=5.0) tree_clf = DecisionTreeClassifier(max_depth=8) svm_clf = SVC(gamma=1000.0, C=1.0, probability=True) # enable probability estimates for svm classifier voting_clf = VotingClassifier( estimators=[('lr', log_clf), ('tree', tree_clf), ('svc', svm_clf)], voting='soft' # use soft voting this time ) voting_clf.fit(X_train, y_train) plot_multiclass_decision_boundaries(voting_clf, X, y) plot_multiclass_data(X, y) from sklearn.metrics import accuracy_score for clf in (log_clf, tree_clf, svm_clf, voting_clf): clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
LogisticRegression 0.8576 DecisionTreeClassifier 0.8944 SVC 0.8464 VotingClassifier 0.8976
CC-BY-3.0
lectures/ng/Lecture-09-Ensembles-Random-Forests.ipynb
tgrasty/CSCI574-Machine-Learning
Bagging and Pasting One way to get a diverse set of classifiers is to use very different training algorithms. The previous voting classifier was an example of this, where we used 3 very different kinds of classifiers for the voting ensemble. Another approach is to use the same training algorithm for every predictor, but to train each one on a different random subset of the training set. When sampling is performed with replacement, this method is called *bagging* (short for *bootstrap aggregating*). When sampling is performed without replacement, it is called *pasting*. In other words, both approaches are similar. In both cases you are sampling the training data to build multiple instances of a classifier, and in both cases a training item could be sampled and used to train multiple instances in the collection of classifiers that is produced. In bagging, it is additionally possible for a training sample to be sampled multiple times in the training of the same predictor. This type of bootstrap aggregation is a kind of data enhancement, and it is used in other contexts as well in ML to artificially increase the size of the training set. Once all predictors are trained, the ensemble can make predictions for a new instance by simply aggregating the predictions of all the predictors. The aggregation function is typically the *statistical mode* (i.e. the most frequent prediction, just like hard voting) for classification, or the average for regression. Each individual predictor has a higher bias than if it were trained on the original training set (because you don't use all of the training data on an individual bagged/pasted classifier). But the aggregation overall usually reduces both bias and variance in the final performance. Generally the net result is that the ensemble has a similar bias but a lower variance than a single predictor trained on the whole original training set. Computationally, bagging and pasting are very attractive because in theory and in practice all of the classifiers can be trained in parallel. Thus if you have a large number of CPU cores, or even a distributed memory computing cluster, you can independently train the individual classifiers all in parallel. Scikit-Learn Bagging and Pasting Examples The ensemble API in `scikit-learn` for performing bagging and/or pasting is relatively simple. As with the voting classifier, we specify which type of classifier we want to use. But since bagging/pasting trains multiple classifiers all of this type, we only have to specify 1. The `n_jobs` parameter tells `scikit-learn` the number of CPU cores to use for training and predictions (-1 tells `scikit-learn` to use all available cores). The following trains an ensemble of 500 decision tree classifiers (`n_estimators`), each trained on 100 training instances randomly sampled from the training set with replacement (`bootstrap=True`). If you want to use pasting instead, simply set `bootstrap=False`. **NOTE**: The `BaggingClassifier` automatically performs soft voting instead of hard voting if the base classifier can estimate class probabilities (i.e. it has a `predict_proba()` method).
from sklearn.ensemble import BaggingClassifier from sklearn.tree import DecisionTreeClassifier bag_clf = BaggingClassifier( DecisionTreeClassifier(max_leaf_nodes=20), n_estimators=500, max_samples=100, bootstrap=True, n_jobs=-1 ) bag_clf.fit(X_train, y_train) y_pred = bag_clf.predict(X_test) print(bag_clf.__class__.__name__, accuracy_score(y_test, y_pred)) plot_multiclass_decision_boundaries(bag_clf, X, y) plot_multiclass_data(X, y)
_____no_output_____
CC-BY-3.0
lectures/ng/Lecture-09-Ensembles-Random-Forests.ipynb
tgrasty/CSCI574-Machine-Learning
Out-of-Bag Evaluation With bagging, some instances may be sampled several times for any given predictor, while others may not be sampled at all. By default a `BaggingClassifier` samples `m` training instances with replacement, where `m` is the size of the training set. This means that only about 63% of the training instances are sampled on average for each predictor: the chance a given instance is never drawn in `m` samples is $(1 - 1/m)^m$, which approaches $e^{-1} \approx 0.37$ for large `m`. The remaining 37% of the training instances that are not sampled are called *out-of-bag* (oob) instances. **NOTE**: they are not the same 37% for each resulting predictor; each predictor has a different oob set. Since a predictor never sees the oob instances during training, it can be evaluated on these instances, without the need for a separate validation set or cross-validation. You can evaluate the ensemble itself by averaging out the oob evaluations for each predictor. In `scikit-learn` you can set `oob_score=True` when creating a `BaggingClassifier` to request an automatic oob evaluation after training (a quick numerical check of the 63% figure follows the example below):
bag_clf = BaggingClassifier( DecisionTreeClassifier(), n_estimators=500, bootstrap=True, n_jobs=-1, oob_score=True ) bag_clf.fit(X_train, y_train) print(bag_clf.oob_score_) y_pred = bag_clf.predict(X_test) print(accuracy_score(y_test, y_pred))
0.9056 0.8976
CC-BY-3.0
lectures/ng/Lecture-09-Ensembles-Random-Forests.ipynb
tgrasty/CSCI574-Machine-Learning
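As an illustrative aside (not from the original lecture), the ~63% in-bag figure above is easy to verify numerically:

# Quick numerical check of the out-of-bag fraction: the probability a given
# training instance is never drawn in m samples with replacement is
# (1 - 1/m)**m, which tends to 1/e ~ 0.368 as m grows.
m = len(X_train)  # size of the training set used above
p_oob = (1 - 1 / m) ** m
print(f"P(out-of-bag) = {p_oob:.3f}, in-bag fraction = {1 - p_oob:.3f}")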
The oob decision function for each training instance is also available through the `oob_decision_function_` variable.
bag_clf.oob_decision_function_
_____no_output_____
CC-BY-3.0
lectures/ng/Lecture-09-Ensembles-Random-Forests.ipynb
tgrasty/CSCI574-Machine-Learning
Random Patches and Random Subspaces The default behavior of the bagging/pasting classifier is to sample only the training instances. However, it can also be useful to build classifiers that only use some of the features of the input data. We have looked at methods for adding features, for example by adding polynomial combinations of the feature inputs. But often for big data, we might have thousands or even millions of input features. In that case, it can very well be that some or many of the features are not really all that useful, or even somewhat harmful, to building a truly good and general classifier. So one approach when we have a large number of features is to build multiple classifiers (using bagging/pasting) on sampled subsets of the features. In the `scikit-learn` `BaggingClassifier` this is controlled by two hyperparameters: `max_features` and `bootstrap_features`. They work the same as `max_samples` and `bootstrap`, but for feature sampling instead of instance sampling. Thus each predictor will be trained on a random subset of the input features. This is particularly useful when dealing with high-dimensional inputs. Sampling from both training instances and features simultaneously is called the *Random Patches* method. Keeping all training instances, but sampling features, is called the *Random Subspaces* method. (A short illustrative sketch of a Random Patches ensemble follows the random forest example below.) Random Forests As we have already mentioned, a *Random Forest* is simply an ensemble of decision trees, generally trained via the bagging method, typically with `max_samples` set to the size of the training set. We could create a random forest by hand using `scikit-learn`'s `BaggingClassifier` on a decision tree, which is in fact what we just did in the previous section: our previous ensemble was an example of a random forest classifier. But in `scikit-learn`, instead of building the ensemble somewhat by hand, you can instead use the `RandomForestClassifier` class, which is more convenient and which has default hyperparameter settings optimized for random forests. The following code trains a random forest classifier with 500 trees (each limited to a maximum of 16 leaf nodes), using all available CPU cores:
from sklearn.ensemble import RandomForestClassifier rnd_clf = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, n_jobs=-1) rnd_clf.fit(X_train, y_train) y_pred = rnd_clf.predict(X_test) print(accuracy_score(y_test, y_pred))
0.8976
CC-BY-3.0
lectures/ng/Lecture-09-Ensembles-Random-Forests.ipynb
tgrasty/CSCI574-Machine-Learning
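Picking up the Random Patches discussion from above, here is a short illustrative sketch (not from the original lecture) of a `BaggingClassifier` that samples both training instances and features. On this 2-feature moons data the feature sampling has little practical effect; the pattern matters for high-dimensional inputs.

# Illustrative Random Patches-style ensemble: sample instances AND features.
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

patches_clf = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=500,
    max_samples=0.8, bootstrap=True,            # sample training instances
    max_features=0.5, bootstrap_features=True,  # also sample input features
    n_jobs=-1
)
patches_clf.fit(X_train, y_train)
print(accuracy_score(y_test, patches_clf.predict(X_test)))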
A random forest classifier has all of the hyperparameters of a `DecisionTreeClassifier` (to control how trees are grown), plus all of the hyperparameters of a `BaggingClassifier` to control the ensemble itself. The random forest algorithm introduces extra randomness when growing trees. Instead of searching for the very best feature when splitting a node, it searches for the best feature among a random subset of features. This results in a greater tree diversity, which trades a higher bias for a lower variance, generally yielding a better overall ensemble model. The following `BaggingClassifier` is roughly equivalent to the previous `RandomForestClassifier`:
bag_clf = BaggingClassifier( DecisionTreeClassifier(splitter='random', max_leaf_nodes=16), n_estimators=500, max_samples=1.0, bootstrap=True, n_jobs=-1 ) bag_clf.fit(X_train, y_train) y_pred = bag_clf.predict(X_test) print(accuracy_score(y_test, y_pred))
0.8992
CC-BY-3.0
lectures/ng/Lecture-09-Ensembles-Random-Forests.ipynb
tgrasty/CSCI574-Machine-Learning
Extra-Trees When growing a tree in a random forest, at each node only a random subset of features is considered for splitting, as we just discussed. It is possible to make trees even more random by also using random thresholds for each feature rather than searching for the best possible thresholds. A forest of such extremely random trees is called an *Extremely Randomized Trees* ensemble (or *Extra-Trees* for short). You can create an extra-trees classifier using `scikit-learn`'s `ExtraTreesClassifier` class; its API is identical to the `RandomForestClassifier` class (a short sketch follows the feature importance example below). **TIP:** It is hard to tell in advance whether a random forest or an extra-trees ensemble will perform better or worse on a given set of data. Generally the only way to know is to try both and compare them using cross-validation. Feature Importance Lastly, if you look at a single decision tree, important features are likely to appear closer to the root of the tree, while unimportant features will often appear closer to the leaves (or not at all). Therefore another use of random forests is to get an estimate of the importance of the features when making classification predictions. We can get an estimate of a feature's importance by computing the average depth at which it appears across all trees in a random forest. `scikit-learn` computes this automatically for every feature after training. You can access the result using the `feature_importances_` variable. For example, if we build a `RandomForestClassifier` on the iris data set (with 4 features), we can output each feature's estimated importance.
from sklearn.datasets import load_iris iris = load_iris() rnd_clf = RandomForestClassifier(n_estimators=500, n_jobs=-1) rnd_clf.fit(iris['data'], iris['target']) for name, score in zip(iris['feature_names'], rnd_clf.feature_importances_): print(name, score)
sepal length (cm) 0.0945223465095581 sepal width (cm) 0.022011194440888737 petal length (cm) 0.4251170433221595 petal width (cm) 0.4583494157273937
CC-BY-3.0
lectures/ng/Lecture-09-Ensembles-Random-Forests.ipynb
tgrasty/CSCI574-Machine-Learning
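As promised above, here is a short illustrative Extra-Trees sketch (not from the original lecture); it reuses the moons train/test split from earlier, and the API mirrors `RandomForestClassifier`:

# Illustrative Extra-Trees ensemble on the earlier moons train/test split.
from sklearn.ensemble import ExtraTreesClassifier

ext_clf = ExtraTreesClassifier(n_estimators=500, max_leaf_nodes=16, n_jobs=-1)
ext_clf.fit(X_train, y_train)
print(accuracy_score(y_test, ext_clf.predict(X_test)))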
It seems the most important feature is petal length, followed closely by petal width. Sepal length and especially sepal width are relatively less important. Boosting *Boosting* (originally called *hypothesis boosting*) refers to any ensemble method that can combine several weak learners into a strong learner. But unlike the ensembles we looked at before, the general idea is to train the predictors sequentially, each trying to correct its predecessor. There are many boosting methods, the most popular being *AdaBoost* (short for *Adaptive Boosting*) and *Gradient Boosting*. AdaBoost Gradient Boost Stacking Stacking works similarly to the voting ensembles we have looked at. Multiple independent classifiers are trained in parallel and aggregated. But instead of using a trivial aggregation method (like hard voting), we train yet another model to perform the aggregation. This final model (called a *blender* or *meta learner*) takes the other trained predictors' output as input and makes a final prediction from them. `scikit-learn` does not support stacking directly (unlike voting ensembles and boosting), but it is not too difficult to hand roll basic implementations of stacking from the `scikit-learn` APIs (a short sketch follows below).
import sys sys.path.append("../../src") # add our class modules to the system PYTHON_PATH from ml_python_class.custom_funcs import version_information version_information()
Module Versions -------------------- ------------------------------------------------------------ matplotlib: ['3.3.0'] numpy: ['1.18.5'] pandas: ['1.0.5']
CC-BY-3.0
lectures/ng/Lecture-09-Ensembles-Random-Forests.ipynb
tgrasty/CSCI574-Machine-Learning
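Following up on the stacking discussion, here is a short illustrative hand-rolled stacking sketch (not from the original lecture). It assumes the moons `X_train`/`X_test` split from earlier is still in scope; half of the training data is held out to train the blender on the base models' predicted probabilities.

# Illustrative hand-rolled stacking: base models + a logistic blender.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# hold out part of the training set; the blender must be trained on
# predictions for data the base models did not see
X_base, X_blend, y_base, y_blend = train_test_split(X_train, y_train, test_size=0.5, random_state=42)

base_models = [
    LogisticRegression(solver='lbfgs', C=5.0),
    DecisionTreeClassifier(max_depth=10),
    SVC(gamma=100.0, C=1.0, probability=True),
]
for model in base_models:
    model.fit(X_base, y_base)

# blender features: each base model's predicted probability of class 1
blend_features = np.column_stack([m.predict_proba(X_blend)[:, 1] for m in base_models])
blender = LogisticRegression(solver='lbfgs')
blender.fit(blend_features, y_blend)

# evaluate the stacked ensemble on the held-out test set
test_features = np.column_stack([m.predict_proba(X_test)[:, 1] for m in base_models])
print(accuracy_score(y_test, blender.predict(test_features)))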
class Student:
    def __init__(self, Name, Student_No, Age, School, Course):
        self.Name = Name
        self.Student_No = Student_No
        self.Age = Age
        self.School = School
        self.Course = Course

    def self(self):
        print("Name: ", self.Name)
        print("Student Number: ", self.Student_No)
        print("Age : ", self.Age)
        print("School: ", self.School)
        print("Course: ", self.Course)

Myself = Student("Gencianeo Sunvick A.", 202117757, 18, "Adamson University", "BS in Computer Engineering")
Myself.self()
Name: Gencianeo Sunvick A. Student Number: 202117757 Age : 18 School: Adamson University Course: BS in Computer Engineering
Apache-2.0
Gencianeo_Sunvick_A_Prelim1.ipynb
Sunvick/OPP-58001
Problem Find the smallest difference between two arrays. The function should take in two arrays and find the pair of numbers, one from each array, whose absolute difference is closest to zero.
def smallest_difference(array_one, array_two):
    """
    Complexity:
        Time: O(n log(n) + m log(m)) where n = length of first array,
              m = length of second array
              (the n log(n) comes from sorting using an optimal sorting algorithm)
        Space: O(1)
    """
    # first, we sort the arrays
    array_one.sort()
    array_two.sort()

    # init pointers that we'll use for each array
    idx_one = 0
    idx_two = 0
    current_diff = float('inf')
    smallest_diff = float('inf')

    while idx_one < len(array_one) and idx_two < len(array_two):
        first_num = array_one[idx_one]
        second_num = array_two[idx_two]

        # find absolute difference
        current_diff = abs(first_num - second_num)

        if first_num < second_num:
            # increment the index of first array
            idx_one += 1
        elif second_num < first_num:
            # increment the index of second array
            idx_two += 1
        else:
            return [first_num, second_num]

        if smallest_diff > current_diff:
            smallest_diff = current_diff
            smallest_pair = [first_num, second_num]

    return smallest_pair

array1 = [2, 1, 3, 5, 4]
array2 = [4, 5, 6, 3, 2]
smallest_difference(array1, array2)
_____no_output_____
MIT
arrays/smallest_difference.ipynb
codacy-badger/algorithms-1
Neuromatch Academy: Week 1, Day 2, Tutorial 2 Tutorial objectives We are investigating a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks: **Framing the question** 1. finding a phenomenon and a question to ask about it 2. understanding the state of the art 3. determining the basic ingredients 4. formulating specific, mathematically defined hypotheses **Implementing the model** 5. selecting the toolkit 6. planning the model 7. implementing the model **Model testing** 8. completing the model 9. testing and evaluating the model **Publishing** 10. publishing models We did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook). Utilities Setup and Convenience Functions Please run the following **3** chunks to have functions and data available.
#@title Utilities and setup # set up the environment for this tutorial import time # import time import numpy as np # import numpy import scipy as sp # import scipy from scipy.stats import gamma # import gamma distribution import math # import basic math functions import random # import basic random number generator functions import matplotlib.pyplot as plt # import matplotlib from IPython import display fig_w, fig_h = (12, 8) plt.rcParams.update({'figure.figsize': (fig_w, fig_h)}) plt.style.use('ggplot') %matplotlib inline #%config InlineBackend.figure_format = 'retina' from scipy.signal import medfilt # make #@title Convenience functions: Plotting and Filtering # define some convenience functions to be used later def my_moving_window(x, window=3, FUN=np.mean): ''' Calculates a moving estimate for a signal Args: x (numpy.ndarray): a vector array of size N window (int): size of the window, must be a positive integer FUN (function): the function to apply to the samples in the window Returns: (numpy.ndarray): a vector array of size N, containing the moving average of x, calculated with a window of size window There are smarter and faster solutions (e.g. using convolution) but this function shows what the output really means. This function skips NaNs, and should not be susceptible to edge effects: it will simply use all the available samples, which means that close to the edges of the signal or close to NaNs, the output will just be based on fewer samples. By default, this function will apply a mean to the samples in the window, but this can be changed to be a max/min/median or other function that returns a single numeric value based on a sequence of values. ''' # if data is a matrix, apply filter to each row: if len(x.shape) == 2: output = np.zeros(x.shape) for rown in range(x.shape[0]): output[rown,:] = my_moving_window(x[rown,:],window=window,FUN=FUN) return output # make output array of the same size as x: output = np.zeros(x.size) # loop through the signal in x for samp_i in range(x.size): values = [] # loop through the window: for wind_i in range(int(-window), 1): if ((samp_i+wind_i) < 0) or (samp_i+wind_i) > (x.size - 1): # out of range continue # sample is in range and not nan, use it: if not(np.isnan(x[samp_i+wind_i])): values += [x[samp_i+wind_i]] # calculate the mean in the window for this point in the output: output[samp_i] = FUN(values) return output def my_plot_percepts(datasets=None, plotconditions=False): if isinstance(datasets,dict): # try to plot the datasets # they should be named... # 'expectations', 'judgments', 'predictions' fig = plt.figure(figsize=(8, 8)) # set aspect ratio = 1? 
not really plt.ylabel('perceived self motion [m/s]') plt.xlabel('perceived world motion [m/s]') plt.title('perceived velocities') # loop through the entries in datasets # plot them in the appropriate way for k in datasets.keys(): if k == 'expectations': expect = datasets[k] plt.scatter(expect['world'],expect['self'],marker='*',color='xkcd:green',label='my expectations') elif k == 'judgments': judgments = datasets[k] for condition in np.unique(judgments[:,0]): c_idx = np.where(judgments[:,0] == condition)[0] cond_self_motion = judgments[c_idx[0],1] cond_world_motion = judgments[c_idx[0],2] if cond_world_motion == -1 and cond_self_motion == 0: c_label = 'world-motion condition judgments' elif cond_world_motion == 0 and cond_self_motion == 1: c_label = 'self-motion condition judgments' else: c_label = 'condition [%d] judgments'%condition plt.scatter(judgments[c_idx,3],judgments[c_idx,4], label=c_label, alpha=0.2) elif k == 'predictions': predictions = datasets[k] for condition in np.unique(predictions[:,0]): c_idx = np.where(predictions[:,0] == condition)[0] cond_self_motion = predictions[c_idx[0],1] cond_world_motion = predictions[c_idx[0],2] if cond_world_motion == -1 and cond_self_motion == 0: c_label = 'predicted world-motion condition' elif cond_world_motion == 0 and cond_self_motion == 1: c_label = 'predicted self-motion condition' else: c_label = 'condition [%d] prediction'%condition plt.scatter(predictions[c_idx,4],predictions[c_idx,3], marker='x', label=c_label) else: print("datasets keys should be 'hypothesis', 'judgments' and 'predictions'") if plotconditions: # this code is simplified but only works for the dataset we have: plt.scatter([1],[0],marker='<',facecolor='none',edgecolor='xkcd:black',linewidths=2,label='world-motion stimulus',s=80) plt.scatter([0],[1],marker='>',facecolor='none',edgecolor='xkcd:black',linewidths=2,label='self-motion stimulus',s=80) plt.legend(facecolor='xkcd:white') plt.show() else: if datasets is not None: print('datasets argument should be a dict') raise TypeError def my_plot_motion_signals(): dt = 1/10 a = gamma.pdf( np.arange(0,10,dt), 2.5, 0 ) t = np.arange(0,10,dt) v = np.cumsum(a*dt) fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col', sharey='row', figsize=(14,6)) fig.suptitle('Sensory ground truth') ax1.set_title('world-motion condition') ax1.plot(t,-v,label='visual [$m/s$]') ax1.plot(t,np.zeros(a.size),label='vestibular [$m/s^2$]') ax1.set_xlabel('time [s]') ax1.set_ylabel('motion') ax1.legend(facecolor='xkcd:white') ax2.set_title('self-motion condition') ax2.plot(t,-v,label='visual [$m/s$]') ax2.plot(t,a,label='vestibular [$m/s^2$]') ax2.set_xlabel('time [s]') ax2.set_ylabel('motion') ax2.legend(facecolor='xkcd:white') plt.show() def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False, addaverages=False): wm_idx = np.where(judgments[:,0] == 0) sm_idx = np.where(judgments[:,0] == 1) opticflow = opticflow.transpose() wm_opticflow = np.squeeze(opticflow[:,wm_idx]) sm_opticflow = np.squeeze(opticflow[:,sm_idx]) vestibular = vestibular.transpose() wm_vestibular = np.squeeze(vestibular[:,wm_idx]) sm_vestibular = np.squeeze(vestibular[:,sm_idx]) X = np.arange(0,10,.1) fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row', figsize=(15,10)) fig.suptitle('Sensory signals') my_axes[0][0].plot(X,wm_opticflow, color='xkcd:light red', alpha=0.1) my_axes[0][0].plot([0,10], [0,0], ':', color='xkcd:black') if addaverages: my_axes[0][0].plot(X,np.average(wm_opticflow, axis=1), color='xkcd:red', alpha=1) 
my_axes[0][0].set_title('world-motion optic flow') my_axes[0][0].set_ylabel('[motion]') my_axes[0][1].plot(X,sm_opticflow, color='xkcd:azure', alpha=0.1) my_axes[0][1].plot([0,10], [0,0], ':', color='xkcd:black') if addaverages: my_axes[0][1].plot(X,np.average(sm_opticflow, axis=1), color='xkcd:blue', alpha=1) my_axes[0][1].set_title('self-motion optic flow') my_axes[1][0].plot(X,wm_vestibular, color='xkcd:light red', alpha=0.1) my_axes[1][0].plot([0,10], [0,0], ':', color='xkcd:black') if addaverages: my_axes[1][0].plot(X,np.average(wm_vestibular, axis=1), color='xkcd:red', alpha=1) my_axes[1][0].set_title('world-motion vestibular signal') my_axes[1][0].set_xlabel('time [s]') my_axes[1][0].set_ylabel('[motion]') my_axes[1][1].plot(X,sm_vestibular, color='xkcd:azure', alpha=0.1) my_axes[1][1].plot([0,10], [0,0], ':', color='xkcd:black') if addaverages: my_axes[1][1].plot(X,np.average(sm_vestibular, axis=1), color='xkcd:blue', alpha=1) my_axes[1][1].set_title('self-motion vestibular signal') my_axes[1][1].set_xlabel('time [s]') if returnaxes: return my_axes else: plt.show() def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct): plt.figure(figsize=(12,8)) plt.title('threshold effects') plt.plot([min(thresholds),max(thresholds)],[0,0],':',color='xkcd:black') plt.plot([min(thresholds),max(thresholds)],[0.5,0.5],':',color='xkcd:black') plt.plot([min(thresholds),max(thresholds)],[1,1],':',color='xkcd:black') plt.plot(thresholds, world_prop, label='world motion') plt.plot(thresholds, self_prop, label='self motion') plt.plot(thresholds, prop_correct, color='xkcd:purple', label='correct classification') plt.xlabel('threshold') plt.ylabel('proportion correct or classified as self motion') plt.legend(facecolor='xkcd:white') plt.show() def my_plot_predictions_data(judgments, predictions): conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2]))) veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4])) velpredict = np.concatenate((predictions[:,3],predictions[:,4])) # self: conditions_self = np.abs(judgments[:,1]) veljudgmnt_self = judgments[:,3] velpredict_self = predictions[:,3] # world: conditions_world = np.abs(judgments[:,2]) veljudgmnt_world = judgments[:,4] velpredict_world = predictions[:,4] fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row', figsize=(12,5)) ax1.scatter(veljudgmnt_self,velpredict_self, alpha=0.2) ax1.plot([0,1],[0,1],':',color='xkcd:black') ax1.set_title('self-motion judgments') ax1.set_xlabel('observed') ax1.set_ylabel('predicted') ax2.scatter(veljudgmnt_world,velpredict_world, alpha=0.2) ax2.plot([0,1],[0,1],':',color='xkcd:black') ax2.set_title('world-motion judgments') ax2.set_xlabel('observed') ax2.set_ylabel('predicted') plt.show() #@title Data generation code (needs to go on OSF and deleted here) def my_simulate_data(repetitions=100, conditions=[(0,-1),(+1,0)] ): """ Generate simulated data for this tutorial. You do not need to run this yourself. 
Args: repetitions: (int) number of repetitions of each condition (default: 30) conditions: list of 2-tuples of floats, indicating the self velocity and world velocity in each condition (default: returns data that is good for exploration: [(-1,0),(0,+1)] but can be flexibly extended) The total number of trials used (ntrials) is equal to: repetitions * len(conditions) Returns: dict with three entries: 'judgments': ntrials * 5 matrix 'opticflow': ntrials * 100 matrix 'vestibular': ntrials * 100 matrix The default settings would result in data where first 30 trials reflect a situation where the world (other train) moves in one direction, supposedly at 1 m/s (perhaps to the left: -1) while the participant does not move at all (0), and 30 trials from a second condition, where the world does not move, while the participant moves with 1 m/s in the opposite direction from where the world is moving in the first condition (0,+1). The optic flow should be the same, but the vestibular input is not. """ # reproducible output np.random.seed(1937) # set up some variables: ntrials = repetitions * len(conditions) # the following arrays will contain the simulated data: judgments = np.empty(shape=(ntrials,5)) opticflow = np.empty(shape=(ntrials,100)) vestibular = np.empty(shape=(ntrials,100)) # acceleration: a = gamma.pdf(np.arange(0,10,.1), 2.5, 0 ) # divide by 10 so that velocity scales from 0 to 1 (m/s) # max acceleration ~ .308 m/s^2 # not realistic! should be about 1/10 of that # velocity: v = np.cumsum(a*.1) # position: (not necessary) #x = np.cumsum(v) ################################# # REMOVE ARBITRARY SCALING & CORRECT NOISE PARAMETERS vest_amp = 1 optf_amp = 1 # we start at the first trial: trialN = 0 # we start with only a single velocity, but it should be possible to extend this for conditionno in range(len(conditions)): condition = conditions[conditionno] for repetition in range(repetitions): # # generate optic flow signal OF = v * np.diff(condition) # optic flow: difference between self & world motion OF = (OF * optf_amp) # fairly large spike range OF = OF + (np.random.randn(len(OF)) * .1) # adding noise # generate vestibular signal VS = a * condition[0] # vestibular signal: only self motion VS = (VS * vest_amp) # less range VS = VS + (np.random.randn(len(VS)) * 1.) # acceleration is a smaller signal, what is a good noise level? 
# store in matrices, corrected for sign #opticflow[trialN,:] = OF * -1 if (np.sign(np.diff(condition)) < 0) else OF #vestibular[trialN,:] = VS * -1 if (np.sign(condition[1]) < 0) else VS opticflow[trialN,:], vestibular[trialN,:] = OF, VS ######################################################### # store conditions in judgments matrix: judgments[trialN,0:3] = [ conditionno, condition[0], condition[1] ] # vestibular SD: 1.0916052957046194 and 0.9112684509277528 # visual SD: 0.10228834313079663 and 0.10975472557444346 # generate judgments: if (abs(np.average(np.cumsum(medfilt(VS/vest_amp,5)*.1)[70:90])) < 1): ########################### # NO self motion detected ########################### selfmotion_weights = np.array([.01,.01]) # there should be low/no self motion worldmotion_weights = np.array([.01,.99]) # world motion is dictated by optic flow else: ######################## # self motion DETECTED ######################## #if (abs(np.average(np.cumsum(medfilt(VS/vest_amp,15)*.1)[70:90]) - np.average(medfilt(OF,15)[70:90])) < 5): if True: #################### # explain all self motion by optic flow selfmotion_weights = np.array([.01,.99]) # there should be lots of self motion, but determined by optic flow worldmotion_weights = np.array([.01,.01]) # very low world motion? else: # we use both optic flow and vestibular info to explain both selfmotion_weights = np.array([ 1, 0]) # motion, but determined by vestibular signal worldmotion_weights = np.array([ 1, 1]) # very low world motion? # integrated_signals = np.array([ np.average( np.cumsum(medfilt(VS/vest_amp,15))[90:100]*.1 ), np.average((medfilt(OF/optf_amp,15))[90:100]) ]) selfmotion = np.sum(integrated_signals * selfmotion_weights) worldmotion = np.sum(integrated_signals * worldmotion_weights) #print(worldmotion,selfmotion) judgments[trialN,3] = abs(selfmotion) judgments[trialN,4] = abs(worldmotion) # this ends the trial loop, so we increment the counter: trialN += 1 return {'judgments':judgments, 'opticflow':opticflow, 'vestibular':vestibular} simulated_data = my_simulate_data() judgments = simulated_data['judgments'] opticflow = simulated_data['opticflow'] vestibular = simulated_data['vestibular']
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
Micro-tutorial 6 - planning the model
#@title Video: Planning the model from IPython.display import YouTubeVideo video = YouTubeVideo(id='daEtkVporBE', width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video
Video available at https://youtube.com/watch?v=daEtkVporBE
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
**Goal:** Identify the key components of the model and how they work together. Our goal all along has been to model our perceptual estimates of sensory data. Now that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done and in what order? The figure below shows a generic model we will use to guide our code construction. ![Model as code](https://i.ibb.co/hZdHmkk/modelfigure.jpg) Our model will have: * **inputs**: the values the system has available - for this tutorial the sensory information in a trial. We want to gather these together and plan how to process them. * **parameters**: unless we are lucky, our functions will have unknown parameters - we want to identify these and plan for them. * **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial. Ideally these are directly comparable to our data. * **Model functions**: A set of functions that perform the hypothesized computations. >Using Python (with Numpy and Scipy) we will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data. Recap of what we've accomplished so far: To model perceptual estimates from our sensory data, we need to 1. _integrate_ to ensure sensory information is in appropriate units 2. _reduce noise and set timescale_ by filtering 3. _threshold_ to model detection Remember the kind of operations we identified: * integration: `np.cumsum()` * filtering: `my_moving_window()` * threshold: `if` with a comparison (`>` or `<`) and `else` We will collect all the components we've developed and design the code by: 1. **identifying the key functions** we need 2. **sketching the operations** needed in each. **_Planning our model:_** We know what we want the model to do, but we need to plan and organize the model into functions and operations. We're providing a draft of the first function. For each of the two other code chunks, write mostly comments and help text first. This should put into words what role each of the functions plays in the overall model, implementing one of the steps decided above. _______ Below is the main function with a detailed explanation of what the function is supposed to do: what input is expected, and what output will be generated. The code is not complete, and only returns nans for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). The goal of this function is to define the top level of a simulation model which: * receives all input * loops through the cases * calls functions that compute predicted values for each case * outputs the predictions **TD 6.1**: Complete main model function The function `my_train_illusion_model()` below should call one other function: `my_perceived_motion()`. What input do you think this function should get? **Complete main model function**
def my_train_illusion_model(sensorydata, params): ''' Generate output predictions of perceived self-motion and perceived world-motion velocity based on input visual and vestibular signals. Args (Input variables passed into function): sensorydata: (dict) dictionary with two named entries: opticflow: (numpy.ndarray of float) NxM array with N trials on rows and M visual signal samples in columns vestibular: (numpy.ndarray of float) NxM array with N trials on rows and M vestibular signal samples in columns params: (dict) dictionary with named entries: threshold: (float) vestibular threshold for credit assignment filterwindow: (list of int) determines the strength of filtering for the visual and vestibular signals, respectively integrate (bool): whether to integrate the vestibular signals, will be set to True if absent FUN (function): function used in the filter, will be set to np.mean if absent samplingrate (float): the number of samples per second in the sensory data, will be set to 10 if absent Returns: dict with two entries: selfmotion: (numpy.ndarray) vector array of length N, with predictions of perceived self motion worldmotion: (numpy.ndarray) vector array of length N, with predictions of perceived world motion ''' # sanitize input a little if not('FUN' in params.keys()): params['FUN'] = np.mean if not('integrate' in params.keys()): params['integrate'] = True if not('samplingrate' in params.keys()): params['samplingrate'] = 10 # number of trials: ntrials = sensorydata['opticflow'].shape[0] # set up variables to collect output selfmotion = np.empty(ntrials) worldmotion = np.empty(ntrials) # loop through trials? for trialN in range(ntrials): #these are our sensory variables (inputs) vis = sensorydata['opticflow'][trialN,:] ves = sensorydata['vestibular'][trialN,:] ######################################################## # generate output predicted perception: ######################################################## #our inputs our vis, ves, and params selfmotion[trialN], worldmotion[trialN] = [np.nan, np.nan] ######################################################## # replace above with # selfmotion[trialN], worldmotion[trialN] = my_perceived_motion( ???, ???, params=params) # and fill in question marks ######################################################## # comment this out when you've filled raise NotImplementedError("Student excercise: generate predictions") return {'selfmotion':selfmotion, 'worldmotion':worldmotion} # uncomment the following lines to run the main model function: ## here is a mock version of my_perceived motion. ## so you can test my_train_illusion_model() #def my_perceived_motion(*args, **kwargs): #return np.random.rand(2) ##let's look at the preditions we generated for two sample trials (0,100) ##we should get a 1x2 vector of self-motion prediction and another for world-motion #sensorydata={'opticflow':opticflow[[0,100],:0], 'vestibular':vestibular[[0,100],:0]} #params={'threshold':0.33, 'filterwindow':[100,50]} #my_train_illusion_model(sensorydata=sensorydata, params=params)
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
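As promised above, here is a toy walk-through of the three operations, integrate, filter, and threshold, chained on a 1-D trace. This is only an illustrative sketch: the numbers are invented, and a plain running mean stands in for the tutorial's `my_moving_window()`.

```python
import numpy as np

# made-up vestibular trace: acceleration samples at 10 Hz
acc = np.array([0.0, 0.2, 0.4, 0.4, 0.2, 0.0, 0.0, 0.0])
dt = 1 / 10  # sampling interval in seconds

# 1. integrate acceleration into velocity
vel = np.cumsum(acc * dt)

# 2. reduce noise with a running mean (stand-in for my_moving_window)
window = 3
vel_filtered = np.convolve(vel, np.ones(window) / window, mode='same')

# 3. threshold the final value to decide whether we perceived self motion
threshold = 0.05
estimate = vel_filtered[-1]
selfmotion = estimate if estimate > threshold else 0.0
print(selfmotion)
```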
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_685e0a13.py)

**TD 6.2**: Draft perceived motion functions

Now we draft a set of functions, the first of which is used in the main model function (see above) and serves to generate perceived velocities. The other two are used in the first one. Only write help text and/or comments; you don't have to write the whole function. Each time, ask yourself these questions:
* what sensory data is necessary?
* what other input does the function need, if any?
* which operations are performed on the input?
* what is the output?

(The number of arguments is correct.)

**Template perceived motion**
# fill in the input arguments the function should have: # write the help text for the function: def my_perceived_motion(arg1, arg2, arg3): ''' Short description of the function Args: argument 1: explain the format and content of the first argument argument 2: explain the format and content of the second argument argument 3: explain the format and content of the third argument Returns: what output does the function generate? Any further description? ''' # structure your code into two functions: "my_selfmotion" and "my_worldmotion" # write comments outlining the operations to be performed on the inputs by each of these functions # use the elements from micro-tutorials 3, 4, and 5 (found in W1D2 Tutorial Part 1) # # # # what kind of output should this function produce? return output
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
We've completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs.**Perceived motion function**
#Full perceived motion function def my_perceived_motion(vis, ves, params): ''' Takes sensory data and parameters and returns predicted percepts Args: vis (numpy.ndarray): 1xM array of optic flow velocity data ves (numpy.ndarray): 1xM array of vestibular acceleration data params: (dict) dictionary with named entries: see my_train_illusion_model() for details Returns: [list of floats]: prediction for perceived self-motion based on vestibular data, and prediction for perceived world-motion based on perceived self-motion and visual data ''' # estimate self motion based on only the vestibular data # pass on the parameters selfmotion = my_selfmotion(ves=ves, params=params) # estimate the world motion, based on the selfmotion and visual data # pass on the parameters as well worldmotion = my_worldmotion(vis=vis, selfmotion=selfmotion, params=params) return [selfmotion, worldmotion]
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
**Template calculate self motion**Put notes in the function below that describe the inputs, the outputs, and steps that transform the output from the input using elements from micro-tutorials 3,4,5.
def my_selfmotion(arg1, arg2): ''' Short description of the function Args: argument 1: explain the format and content of the first argument argument 2: explain the format and content of the second argument Returns: what output does the function generate? Any further description? ''' # what operations do we perform on the input? # use the elements from micro-tutorials 3, 4, and 5 # 1. # 2. # 3. # 4. # what output should this function produce? return output
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_181325a9.py) **Template calculate world motion**Put notes in the function below that describe the inputs, the outputs, and steps that transform the output from the input using elements from micro-tutorials 3,4,5.
def my_worldmotion(arg1, arg2, arg3): ''' Short description of the function Args: argument 1: explain the format and content of the first argument argument 2: explain the format and content of the second argument argument 3: explain the format and content of the third argument Returns: what output does the function generate? Any further description? ''' # what operations do we perform on the input? # use the elements from micro-tutorials 3, 4, and 5 # 1. # 2. # 3. # what output should this function produce? return output
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_8f913582.py) Micro-tutorial 7 - implement model
#@title Video: implement the model from IPython.display import YouTubeVideo video = YouTubeVideo(id='gtSOekY8jkw', width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video
Video available at https://youtube.com/watch?v=gtSOekY8jkw
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
**Goal:** We write the components of the model in actual code.

For the operations we picked, there are functions ready to use:
* integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples)
* filtering: `my_moving_window(data, window)` (window: int, default 3)
* average: `np.mean(data)`
* threshold: `if (value > threshold):` ... `else:` ...

(A short demonstration of per-trial integration with `np.cumsum` follows after the next cell.)

**TD 7.1:** Write code to estimate self motion

Use these operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world motion below, which we've filled in for you!

**Template finish self motion function**
def my_selfmotion(ves, params):
    '''
    Estimates self motion for one vestibular signal

    Args:
      ves (numpy.ndarray): 1xM array with a vestibular signal
      params (dict): dictionary with named entries:
        see my_train_illusion_model() for details

    Returns:
      (float): an estimate of self motion in m/s
    '''

    ### uncomment the code below and fill in with your code

    ## 1. integrate vestibular signal
    #ves = np.cumsum(ves*(1/params['samplingrate']))

    ## 2. running window function to accumulate evidence:
    #selfmotion = YOUR CODE HERE

    ## 3. take final value of self-motion vector as our estimate
    #selfmotion =

    ## 4. compare to threshold. Hint: the threshold is stored in params['threshold']
    ## if selfmotion is higher than threshold: return value
    ## if it's lower than threshold: return 0
    #if YOUR CODE HERE
    #selfmotion = YOUR CODE HERE

    # comment this out when you've filled this in
    raise NotImplementedError("Student exercise: estimate my_selfmotion")

    return output
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
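As mentioned above, here is a quick demonstration of the per-trial integration. The trial matrix is made up; the point is only that `axis=1` integrates along samples, separately for every trial row.

```python
import numpy as np

# two made-up trials (rows), five acceleration samples each, at 10 Hz
ves = np.array([[0.0, 1.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 0.5, 0.5, 0.0]])
samplingrate = 10

# rows are integrated independently:
vel = np.cumsum(ves * (1 / samplingrate), axis=1)
print(vel)
# [[0.   0.1  0.2  0.2  0.2 ]
#  [0.   0.   0.05 0.1  0.1 ]]
```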
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_3ea16348.py) Estimate world motionWe have completed the `my_worldmotion()` function for you.**World motion function**
# World motion function def my_worldmotion(vis, selfmotion, params): ''' Short description of the function Args: vis (numpy.ndarray): 1xM array with the optic flow signal selfmotion (float): estimate of self motion params (dict): dictionary with named entries: see my_train_illusion_model() for details Returns: (float): an estimate of world motion in m/s ''' # running average to smooth/accumulate sensory evidence visualmotion = my_moving_window(vis, window=params['filterwindows'][1], FUN=np.mean) # take final value visualmotion = visualmotion[-1] # subtract selfmotion from value worldmotion = visualmotion + selfmotion # return final value return worldmotion
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
Micro-tutorial 8 - completing the model
#@title Video: completing the model from IPython.display import YouTubeVideo video = YouTubeVideo(id='-NiHSv4xCDs', width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video
Video available at https://youtube.com/watch?v=-NiHSv4xCDs
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
**Goal:** Make sure the model can speak to the hypothesis. Eliminate all the parameters that do not speak to the hypothesis.

Now that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more.

To test this, we will run the model, store the output, and plot the model's perceived self motion against its perceived world motion, as we did with the actual perceptual judgments (it even uses the same plotting function).

**TD 8.1:** See if the model produces illusions
#@title Run to plot model predictions of motion estimates # prepare to run the model again: data = {'opticflow':opticflow, 'vestibular':vestibular} params = {'threshold':0.6, 'filterwindows':[100,50], 'FUN':np.mean} modelpredictions = my_train_illusion_model(sensorydata=data, params=params) # process the data to allow plotting... predictions = np.zeros(judgments.shape) predictions[:,0:3] = judgments[:,0:3] predictions[:,3] = modelpredictions['selfmotion'] predictions[:,4] = modelpredictions['worldmotion'] *-1 my_plot_percepts(datasets={'predictions':predictions}, plotconditions=True)
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
**Questions:*** Why is the data distributed this way? How does it compare to the plot in TD 1.2?* Did you expect to see this?* Where do the model's predicted judgments for each of the two conditions fall?* How does this compare to the behavioral data?However, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two sets of data. Does this mean the model can help us understand the phenomenon? Micro-tutorial 9 - testing and evaluating the model
#@title Video: Background from IPython.display import YouTubeVideo video = YouTubeVideo(id='5vnDOxN3M_k', width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video
Video available at https://youtube.com/watch?v=5vnDOxN3M_k
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorials 1 and 4 help here.

There are multiple ways to evaluate a model. Aside from the obvious fact that we want to gain insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data.

Quantify model quality with $R^2$

Let's look at how well our model matches the actual judgment data.
#@title Run to plot predictions over data my_plot_predictions_data(judgments, predictions)
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't or vice versa.We will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: "R-squared"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\rho$). Just run the chunk below:
#@title Run to calculate R^2 conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2]))) veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4])) velpredict = np.concatenate((predictions[:,3],predictions[:,4])) slope, intercept, r_value, p_value, std_err = sp.stats.linregress(conditions,veljudgmnt) print('conditions -> judgments R^2: %0.3f'%( r_value**2 )) slope, intercept, r_value, p_value, std_err = sp.stats.linregress(veljudgmnt,velpredict) print('predictions -> judgments R^2: %0.3f'%( r_value**2 ))
conditions -> judgments R^2: 0.032 predictions -> judgments R^2: 0.256
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
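As a quick aside on the $R^2$ just computed: on made-up numbers, the squared `rvalue` from `linregress` coincides with the squared Pearson correlation from `np.corrcoef`, which is exactly what the "square of the correlation coefficient" definition above says. A small illustrative sketch:

```python
import numpy as np
from scipy import stats

# made-up paired measurements
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.array([1.2, 1.9, 3.3, 3.8, 5.1])

r_linregress = stats.linregress(a, b).rvalue
r_corrcoef = np.corrcoef(a, b)[0, 1]
print(r_linregress**2, r_corrcoef**2)  # identical up to floating point
```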
These $R^2$ values express how well the experimental conditions explain the participants' judgments, and how well the model's predicted judgments explain the participants' judgments.

You will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow!

Perhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained better by the model's predictions than by the actual conditions. In other words: the model tends to have the same illusions as the participants.

**TD 9.1** Varying the threshold parameter to improve the model

In the code below, see if you can find a better value for the threshold parameter, to reduce errors in the model's predictions.

**Testing thresholds**
# Testing thresholds def test_threshold(threshold=0.33): # prepare to run model data = {'opticflow':opticflow, 'vestibular':vestibular} params = {'threshold':threshold, 'filterwindows':[100,50], 'FUN':np.mean} modelpredictions = my_train_illusion_model(sensorydata=data, params=params) # get predictions in matrix predictions = np.zeros(judgments.shape) predictions[:,0:3] = judgments[:,0:3] predictions[:,3] = modelpredictions['selfmotion'] predictions[:,4] = modelpredictions['worldmotion'] *-1 # get percepts from participants and model conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2]))) veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4])) velpredict = np.concatenate((predictions[:,3],predictions[:,4])) # calculate R2 slope, intercept, r_value, p_value, std_err = sp.stats.linregress(veljudgmnt,velpredict) print('predictions -> judgments R2: %0.3f'%( r_value**2 )) test_threshold(threshold=0.5)
predictions -> judgments R2: 0.267
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
**TD 9.2:** Credit assignment of self motion

When we look at the figure in **TD 8.1**, we can see that one cluster sits very close to (1,0), just like in the actual data. The cluster of points at (1,0) comes from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4).

Let's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here.

**Template function for credit assignment of self motion**
# Template binary self-motion estimates
def my_selfmotion(ves, params):
    '''
    Estimates self motion for one vestibular signal

    Args:
      ves (numpy.ndarray): 1xM array with a vestibular signal
      params (dict): dictionary with named entries:
        see my_train_illusion_model() for details

    Returns:
      (float): an estimate of self motion in m/s
    '''

    # integrate signal:
    ves = np.cumsum(ves*(1/params['samplingrate']))

    # use running window to accumulate evidence:
    selfmotion = my_moving_window(ves, window=params['filterwindows'][0], FUN=params['FUN'])

    ## take the final value as our estimate:
    selfmotion = selfmotion[-1]

    ##########################################
    # this last part will have to be changed
    # compare to threshold, set to 0 if lower and else...
    if selfmotion < params['threshold']:
        selfmotion = 0

    #uncomment the lines below and fill in with your code
    #else:
        #YOUR CODE HERE

    # comment this out when you've filled this in
    raise NotImplementedError("Student exercise: modify with credit assignment")

    return selfmotion
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_90571e21.py) The function you just wrote will be used when we run the model again below.
#@title Run model credit assignment of self motion

# prepare to run the model again:
data = {'opticflow':opticflow, 'vestibular':vestibular}
params = {'threshold':0.33, 'filterwindows':[100,50], 'FUN':np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)

# now process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:,0:3] = judgments[:,0:3]
predictions[:,3] = modelpredictions['selfmotion']
predictions[:,4] = modelpredictions['worldmotion'] *-1
my_plot_percepts(datasets={'predictions':predictions}, plotconditions=False)
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved:
#@title Run to calculate R^2 for model with self motion credit assignment conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2]))) veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4])) velpredict = np.concatenate((predictions[:,3],predictions[:,4])) my_plot_predictions_data(judgments, predictions) slope, intercept, r_value, p_value, std_err = sp.stats.linregress(conditions,veljudgmnt) print('conditions -> judgments R2: %0.3f'%( r_value**2 )) slope, intercept, r_value, p_value, std_err = sp.stats.linregress(velpredict,veljudgmnt) print('predictions -> judgments R2: %0.3f'%( r_value**2 ))
_____no_output_____
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
While the model still predicts velocity judgments better than the conditions do (i.e., the model predicts illusions in somewhat similar cases), the $R^2$ values are actually worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread.

Interpret the model's meaning

Here's what you should have learned:

1. A noisy vestibular acceleration signal can give rise to illusory motion.
2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do.
3. Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis.

_It's always possible to refine our models to improve the fits._

There are many ways to try to do this. A few examples: we could implement a full sensory cue integration model, perhaps with Kalman filters (Week 2, Day 3), or we could add prior knowledge (at what time do the trains depart?). However, we decided that for now we have learned enough, so it's time to write it up.

Micro-tutorial 10 - publishing the model
#@title Video: Background from IPython.display import YouTubeVideo video = YouTubeVideo(id='kf4aauCr5vA', width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video
Video available at https://youtube.com/watch?v=kf4aauCr5vA
CC-BY-4.0
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
sanchobarriga/course-content
Fix merge conflicts

> Fix merge conflicts in jupyter notebooks

When working with jupyter notebooks (which are json files behind the scenes) and GitHub, it is very common that a merge conflict (which adds new lines to the notebook source file) will break some notebooks you are working on. This module defines the function `fix_conflicts` to fix those notebooks for you, and attempts to automatically merge standard conflicts. The remaining ones will be delimited by markdown cells like this:

Walk cells
#hide tst_nb="""{ "cells": [ { "cell_type": "code", <<<<<<< HEAD "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "z=3\n", "z" ] }, { "cell_type": "code", "execution_count": 7, ======= "execution_count": 5, >>>>>>> a7ec1b0bfb8e23b05fd0a2e6cafcb41cd0fb1c35 "metadata": {}, "outputs": [ { "data": { "text/plain": [ "6" ] }, <<<<<<< HEAD "execution_count": 7, ======= "execution_count": 5, >>>>>>> a7ec1b0bfb8e23b05fd0a2e6cafcb41cd0fb1c35 "metadata": {}, "output_type": "execute_result" } ], "source": [ "x=3\n", "y=3\n", "x+y" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }"""
_____no_output_____
Apache-2.0
nbs/05_merge.ipynb
aviadr1/nbdev
This is an example of a broken notebook, which we defined in `tst_nb`. The json format is broken by the lines automatically added by git. Such a file can't be opened again in jupyter notebook, leaving the user with no choice but to fix the text file manually.
print(tst_nb)
{ "cells": [ { "cell_type": "code", <<<<<<< HEAD "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "z=3 ", "z" ] }, { "cell_type": "code", "execution_count": 7, ======= "execution_count": 5, >>>>>>> a7ec1b0bfb8e23b05fd0a2e6cafcb41cd0fb1c35 "metadata": {}, "outputs": [ { "data": { "text/plain": [ "6" ] }, <<<<<<< HEAD "execution_count": 7, ======= "execution_count": 5, >>>>>>> a7ec1b0bfb8e23b05fd0a2e6cafcb41cd0fb1c35 "metadata": {}, "output_type": "execute_result" } ], "source": [ "x=3 ", "y=3 ", "x+y" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 2 }
Apache-2.0
nbs/05_merge.ipynb
aviadr1/nbdev
Note that in this example, the second conflict is easily solved: it just concerns the execution count of the second cell and can be solved by choosing either option without really impacting your notebook. This is the kind of conflict `fix_conflicts` will (by default) fix automatically. The first conflict is more complicated, as it spans two cells and there is a cell present in one version but not the other. Such a conflict (and generally the ones where the inputs of the cells change from one version to the other) isn't automatically fixed, but `fix_conflicts` will return a proper json file where the annotations introduced by git are placed in markdown cells.

The first step to do this is to walk the raw text file to extract the cells. We can't read it as JSON since it's broken, so we have to parse the text.
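As a quick confirmation that a plain parse indeed fails, here is an illustrative snippet using the `tst_nb` string defined above:

```python
import json

# the conflict markers git inserted are not valid JSON, so a normal
# parse raises; this is why we walk the raw text instead
try:
    json.loads(tst_nb)
except json.JSONDecodeError as e:
    print("cannot parse:", e)
```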
#export def extract_cells(raw_txt): "Manually extract cells in potential broken json `raw_txt`" lines = raw_txt.split('\n') cells = [] i = 0 while not lines[i].startswith(' "cells"'): i+=1 i += 1 start = '\n'.join(lines[:i]) while lines[i] != ' ],': while lines[i] != ' {': i+=1 j = i while not lines[j].startswith(' }'): j+=1 c = '\n'.join(lines[i:j+1]) if not c.endswith(','): c = c + ',' cells.append(c) i = j+1 end = '\n'.join(lines[i:]) return start,cells,end
_____no_output_____
Apache-2.0
nbs/05_merge.ipynb
aviadr1/nbdev
This function returns the beginning of the text (before the cells are defined), the list of cells and the end of the text (after the cells are defined).
start,cells,end = extract_cells(tst_nb) test_eq(len(cells), 3) test_eq(cells[0], """ { "cell_type": "code", <<<<<<< HEAD "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "3" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "z=3\n", "z" ] },""") #hide #Test the whole text is there #We add a , to the last cell (because we might add some after for merge conflicts at the end, so we need to remove it) test_eq(tst_nb, '\n'.join([start] + cells[:-1] + [cells[-1][:-1]] + [end]))
_____no_output_____
Apache-2.0
nbs/05_merge.ipynb
aviadr1/nbdev
When walking the broken cells, we will add conflict markers before and after the cells with conflicts, as markdown cells. To do that, we use this function.
#export def get_md_cell(txt): "A markdown cell with `txt`" return ''' { "cell_type": "markdown", "metadata": {}, "source": [ "''' + txt + '''" ] },''' tst = ''' { "cell_type": "markdown", "metadata": {}, "source": [ "A bit of markdown" ] },''' assert get_md_cell("A bit of markdown") == tst #export conflicts = '<<<<<<< ======= >>>>>>>'.split() #export def _split_cell(cell, cf, names): "Split `cell` between `conflicts` given state in `cf`, save `names` of branches if seen" res1,res2 = [],[] for line in cell.split('\n'): if line.startswith(conflicts[cf]): if names[cf//2] is None: names[cf//2] = line[8:] cf = (cf+1)%3 continue if cf<2: res1.append(line) if cf%2==0: res2.append(line) return '\n'.join(res1),'\n'.join(res2),cf,names #hide tst = '\n'.join(['a', f'{conflicts[0]} HEAD', 'b', conflicts[1], 'c', f'{conflicts[2]} lala', 'd']) v1,v2,cf,names = _split_cell(tst, 0, [None,None]) assert v1 == 'a\nb\nd' assert v2 == 'a\nc\nd' assert cf == 0 assert names == ['HEAD', 'lala'] #hide tst = '\n'.join(['a', f'{conflicts[0]} HEAD', 'b', conflicts[1], 'c', f'{conflicts[2]} lala', 'd', f'{conflicts[0]} HEAD', 'e']) v1,v2,cf,names = _split_cell(tst, 0, [None,None]) assert v1 == 'a\nb\nd\ne' assert v2 == 'a\nc\nd' assert cf == 1 assert names == ['HEAD', 'lala'] #hide tst = '\n'.join(['a', f'{conflicts[0]} HEAD', 'b', conflicts[1], 'c', f'{conflicts[2]} lala', 'd', f'{conflicts[0]} HEAD', 'e', conflicts[1]]) v1,v2,cf,names = _split_cell(tst, 0, [None,None]) assert v1 == 'a\nb\nd\ne' assert v2 == 'a\nc\nd' assert cf == 2 assert names == ['HEAD', 'lala'] #hide tst = '\n'.join(['b', conflicts[1], 'c', f'{conflicts[2]} lala', 'd']) v1,v2,cf,names = _split_cell(tst, 1, ['HEAD',None]) assert v1 == 'b\nd' assert v2 == 'c\nd' assert cf == 0 assert names == ['HEAD', 'lala'] #hide tst = '\n'.join(['c', f'{conflicts[2]} lala', 'd']) v1,v2,cf,names = _split_cell(tst, 2, ['HEAD',None]) assert v1 == 'd' assert v2 == 'c\nd' assert cf == 0 assert names == ['HEAD', 'lala'] #export _re_conflict = re.compile(r'^<<<<<<<', re.MULTILINE) #hide assert _re_conflict.search('a\nb\nc') is None assert _re_conflict.search('a\n<<<<<<<\nc') is not None #export def same_inputs(t1, t2): "Test if the cells described in `t1` and `t2` have the same inputs" if len(t1)==0 or len(t2)==0: return False try: c1,c2 = json.loads(t1[:-1]),json.loads(t2[:-1]) return c1['source']==c2['source'] except Exception as e: return False ts = [''' { "cell_type": "code", "source": [ "'''+code+'''" ] },''' for code in ["a=1", "b=1", "a=1"]] assert same_inputs(ts[0],ts[2]) assert not same_inputs(ts[0], ts[1]) #export def analyze_cell(cell, cf, names, prev=None, added=False, fast=True, trust_us=True): "Analyze and solve conflicts in `cell`" if cf==0 and _re_conflict.search(cell) is None: return cell,cf,names,prev,added old_cf = cf v1,v2,cf,names = _split_cell(cell, cf, names) if fast and same_inputs(v1,v2): if old_cf==0 and cf==0: return (v2 if trust_us else v1),cf,names,prev,added v1,v2 = (v2,v2) if trust_us else (v1,v1) res = [] if old_cf == 0: added=True res.append(get_md_cell(f'`{conflicts[0]} {names[0]}`')) res.append(v1) if cf ==0: res.append(get_md_cell(f'`{conflicts[1]}`')) if prev is not None: res += prev res.append(v2) res.append(get_md_cell(f'`{conflicts[2]} {names[1]}`')) prev = None else: prev = [v2] if prev is None else prev + [v2] return '\n'.join([r for r in res if len(r) > 0]),cf,names,prev,added
_____no_output_____
Apache-2.0
nbs/05_merge.ipynb
aviadr1/nbdev
This is the main function used to walk through the cells of a notebook. `cell` is the cell we're at, `cf` the conflict state: `0` if we're not in any conflict, `1` if we are inside the first part of a conflict (between `<<<<<<<` and `=======`) and `2` for the second part of a conflict. `names` contains the names of the branches (they start at `[None,None]` and get updated as we pass along conflicts). `prev` contains a copy of what should be included at the start of the second version (if `cf=1` or `cf=2`). `added` starts at `False` and keeps track of whether we added any markdown cells (this flag allows us to know if a fast merge didn't leave any conflicts at the end). `fast` and `trust_us` are passed along by `fix_conflicts`: if `fast` is `True`, we don't point out conflict between cells if the inputs in the two versions are the same. Instead we merge using the local or remote branch, depending on `trust_us`.The function then returns the updated text (with one or several cells, depending on the conflicts to solve), the updated `cf`, `names`, `prev` and `added`.
tst = '\n'.join(['a', f'{conflicts[0]} HEAD', 'b', conflicts[1], 'c']) c,cf,names,prev,added = analyze_cell(tst, 0, [None,None], None, False,fast=False) test_eq(c, get_md_cell('`<<<<<<< HEAD`')+'\na\nb') test_eq(cf, 2) test_eq(names, ['HEAD', None]) test_eq(prev, ['a\nc']) test_eq(added, True)
_____no_output_____
Apache-2.0
nbs/05_merge.ipynb
aviadr1/nbdev
In this example, we enter cell `tst` with no conflict state. At the end of the cell, we are still in the second part of the conflict, hence `cf=2`. The result returns a marker for the branch head, then the whole cell in version 1 (`a` + `b`). We save `a` (prior to the conflict, hence common to the two versions) and `c` (only in version 2) for the next cell in `prev` (which should contain the resolution of this conflict).

Main function
#export
def fix_conflicts(fname, fast=True, trust_us=True):
    "Fix broken notebook in `fname`"
    fname=Path(fname)
    shutil.copy(fname, fname.with_suffix('.ipynb.bak'))
    with open(fname, 'r') as f: raw_text = f.read()
    start,cells,end = extract_cells(raw_text)
    res = [start]
    cf,names,prev,added = 0,[None,None],None,False
    for cell in cells:
        c,cf,names,prev,added = analyze_cell(cell, cf, names, prev, added, fast=fast, trust_us=trust_us)
        res.append(c)
    if res[-1].endswith(','): res[-1] = res[-1][:-1]
    with open(f'{fname}', 'w') as f: f.write('\n'.join([r for r in res+[end] if len(r) > 0]))
    if fast and not added: print("Successfully merged conflicts!")
    else: print("One or more conflict remains in the notebook, please inspect manually.")
_____no_output_____
Apache-2.0
nbs/05_merge.ipynb
aviadr1/nbdev
The function will begin by backing up the notebook `fname` to `fname.ipynb.bak` in case something goes wrong. Then it parses the broken json, solving conflicts in cells. If `fast=True`, every conflict that only involves metadata or outputs of cells will be solved automatically, by using the local (`trust_us=True`) or the remote (`trust_us=False`) branch. Otherwise, or for conflicts involving the inputs of cells, the json will be repaired by including the two versions of the conflicted cell(s), with markdown cells indicating the conflicts. You will be able to open the notebook again and search for the conflicts (look for `<<<<<<<`), then fix them as you wish.

If `fast=True`, the function will print a message indicating whether the notebook was fully merged or if conflicts remain.
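As a hypothetical end-to-end check (not part of the library's tests; the file name here is made up), we can write the broken `tst_nb` string from earlier to a throwaway file and let `fix_conflicts` repair it in place:

```python
# hypothetical usage sketch: save the broken notebook defined earlier,
# then repair it in place (a .ipynb.bak backup is written first)
Path('broken.ipynb').write_text(tst_nb)
fix_conflicts('broken.ipynb', fast=True, trust_us=True)
```

Export -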
#hide from nbdev.export import notebook2script notebook2script()
Converted 00_export.ipynb. Converted 01_sync.ipynb. Converted 02_showdoc.ipynb. Converted 03_export2html.ipynb. Converted 04_test.ipynb. Converted 05_merge.ipynb. Converted 06_cli.ipynb. Converted 07_clean.ipynb. Converted 08_flag_tests.ipynb. Converted 99_search.ipynb. Converted index.ipynb. Converted tutorial.ipynb.
Apache-2.0
nbs/05_merge.ipynb
aviadr1/nbdev
We download the data and combine it into a single table.
import numpy as np import pandas as pd import json from datetime import datetime from datetime import date from math import sqrt from zipfile import ZipFile from os import listdir from os.path import isfile, join filesDir = "/content/drive/MyDrive/training_data" csvFiles = [join(filesDir, f) for f in listdir(filesDir) if (isfile(join(filesDir, f)) and 'csv' in f)] data = pd.DataFrame() for file in csvFiles: if 'acc' in file: with ZipFile(file, 'r') as zipObj: listOfFileNames = zipObj.namelist() for fileName in listOfFileNames: if 'chest' in fileName: with zipObj.open(fileName) as csvFile: newData = pd.read_csv(csvFile) newData['type'] = str(csvFile.name).replace('_',' ').replace('.',' ').split()[1] data = data.append(newData) # newData = pd.read_csv(csvFile) # newColumns = [col for col in newData.columns if col not in data.columns] # print(newColumns) # if data.empty or not newColumns: # newData['type'] = str(csvFile.name).replace('_',' ').replace('.',' ').split()[1] # data = data.append(newData) # else: # for index, newRow in newData.iterrows(): # print(newRow['attr_time']) # print(data.iloc[[0]]['attr_time']) # print(len(data[data['attr_time'] < newRow['attr_time']])) # existingRow = data[data['attr_time'] <= newRow['attr_time']].iloc[-1] # existingRow[newColumns] = newRow[newColumns] # data = data.sort_values(by=['attr_time']) #print(data) data = data.sort_values(by=['attr_time']) print(data) # heart = pd.read_csv('https://raw.githubusercontent.com/Ivan-Nebogatikov/HumanActivityRecognition/master/datasets/2282_3888_bundle_archive/heart.csv') # heart['timestamp'] = heart['timestamp'].map(lambda x: datetime.strptime(x, "%Y-%m-%d %H:%M:%S.%f")) # heart = heart.sort_values(by='timestamp') # def getHeart(x): # dt = datetime.strptime(x, "%Y-%m-%d %H:%M:%S.%f") # f = heart[heart['timestamp'] < dt] # lastValue = f.iloc[[-1]]['values'].tolist()[0] # intValue = list(json.loads(lastValue.replace('\'', '"')))[0] # return intValue # acc = pd.read_csv('https://raw.githubusercontent.com/Ivan-Nebogatikov/HumanActivityRecognition/master/datasets/2282_3888_bundle_archive/acc.csv') # acc['heart'] = acc['timestamp'].map(lambda x: getHeart(x)) # print(acc) # def change(x): # if x == 'Pause' or x == 'Movie': # x = 'Watching TV' # if x == 'Shop': # x = 'Walk' # if x == 'Football': # x = 'Running' # if x == 'Meeting' or x == 'Work' or x == 'Picnic ' or x == 'In vehicle' or x == 'In bus' : # x = 'Sitting' # if x == 'On bus stop': # x = 'Walk' # if x == 'Walking&party' or x == 'Shopping& wearing' or x == 'At home': # x = 'Walk' # return x # acc['act'] = acc['act'].map(lambda x: change(x)) # labels = np.array(acc['act']) # arrays = acc['values'].map(lambda x: getValue(x)) # x = getDiff(list(arrays.map(lambda x: np.double(x[0])))) # y = getDiff(list(arrays.map(lambda x: np.double(x[1])))) # z = getDiff(list(arrays.map(lambda x: np.double(x[2])))) # dist = list(map(lambda a, b, c: sqrt(a*a+b*b+c*c), x, y, z)) labels = np.array(data['type'])
_____no_output_____
Apache-2.0
Processing.ipynb
Ivan-Nebogatikov/HumanActivityRecognitionOutliersDetection
data['time_diff'] = data['attr_time'].diff()
indMin = int(data[['time_diff']].idxmin())
print(indMin)

t_j = data.iloc[indMin]['attr_time']
print(t_j)
t_j1 = data.iloc[indMin+1]['attr_time']
diff = t_j1 - t_j
print(diff)

# interpolated = []

# linear interpolation of each axis onto the reference time grid
# !!! the second term should use the value from the next row (row + 1)
data['attr_x_i'] = data.apply(lambda row: (t_j1 - row['attr_time']) * row['attr_x'] / diff + (row['attr_time'] - t_j) * row['attr_x'] / diff, axis=1)
data['attr_y_i'] = data.apply(lambda row: (t_j1 - row['attr_time']) * row['attr_y'] / diff + (row['attr_time'] - t_j) * row['attr_y'] / diff, axis=1)
data['attr_z_i'] = data.apply(lambda row: (t_j1 - row['attr_time']) * row['attr_z'] / diff + (row['attr_time'] - t_j) * row['attr_z'] / diff, axis=1)

# # for i, row in data.iterrows():
# #     t_i = row['attr_time']
# #     def axis(value): (t_j1 - t_i) * value / (t_j1 - t_j) + (t_i + t_j) * value / (t_j1 + t_j)
# #     interpolated.append([row["id"], row['attr_time'], axis(row['attr_x']), axis(row['attr_y']), axis(row['attr_z']), row['type'], row['time_diff']])

print(data)

# estimate gravity per axis with a 5-sample running mean
data['g_x'] = data['attr_x_i'].rolling(window=5).mean()
data['g_y'] = data['attr_y_i'].rolling(window=5).mean()
data['g_z'] = data['attr_z_i'].rolling(window=5).mean()
print(data['g_x'])

import numpy as np

# double cross product: removes the component of a along g
def acc(a, g):
    return np.cross(np.cross(a, g) / np.dot(g, g), g)

# split the interpolated acceleration into a component relative to the
# gravity estimate (a_tv) and the remainder (a_th)
data['a_tv'] = data.apply(lambda row: acc([row.attr_x_i, row.attr_y_i, row.attr_z_i], [row.g_x, row.g_y, row.g_z]), axis=1)
data['a_th'] = data.apply(lambda row: [row.attr_x_i - row.a_tv[0], row.attr_y_i - row.a_tv[1], row.attr_z_i - row.a_tv[2]], axis=1)
print(data['a_tv'])
print(data['a_th'])
0 [nan, nan, nan] 1 [nan, nan, nan] 2 [nan, nan, nan] 3 [nan, nan, nan] 4 [0.5486020271807942, 9.603077026582291, 1.2872... ... 31414 [-3.162400964945532, 8.992966616538839, 1.3876... 31415 [-3.108340203427508, 8.924675918347459, 1.6835... 31416 [-3.2279468040838215, 8.943086012271925, 1.807... 31417 [-3.347224001114585, 8.939447319366597, 1.3910... 31418 [-3.119910674017178, 9.025276258131134, 1.3772... Name: a_th, Length: 221613, dtype: object
Apache-2.0
Processing.ipynb
Ivan-Nebogatikov/HumanActivityRecognitionOutliersDetection
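A small sanity check on the decomposition computed above (a sketch, using the first row where the gravity estimate is defined): by construction `a_tv + a_th` recovers the interpolated signal, and the double cross product makes `a_tv` orthogonal to the gravity estimate `g`.

```python
row = data.dropna(subset=['g_x']).iloc[0]
a_raw = np.array([row.attr_x_i, row.attr_y_i, row.attr_z_i])
g_est = np.array([row.g_x, row.g_y, row.g_z])

print(np.allclose(np.array(row.a_tv) + np.array(row.a_th), a_raw))  # True
print(np.isclose(np.dot(np.array(row.a_tv), g_est), 0.0))           # True
```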
Π’ΡΠΏΠΎΠΌΠΎΠ³Π°Ρ‚Π΅Π»ΡŒΠ½Π°Ρ функция для Π²Ρ‹Π²ΠΎΠ΄Π° Ρ€Π΅Π·ΡƒΠ»ΡŒΡ‚Π°Ρ‚ΠΎΠ²
import pandas as pd import numpy as np from scipy import interp from sklearn.metrics import accuracy_score from sklearn.metrics import precision_recall_fscore_support from sklearn.metrics import roc_curve, auc from sklearn.preprocessing import LabelBinarizer def class_report(y_true, y_pred, y_score=None, average='micro'): if y_true.shape != y_pred.shape: print("Error! y_true %s is not the same shape as y_pred %s" % ( y_true.shape, y_pred.shape) ) return accuracy = accuracy_score(y_true, y_pred) print("Accuracy:", accuracy) lb = LabelBinarizer() if len(y_true.shape) == 1: lb.fit(y_true) #Value counts of predictions labels, cnt = np.unique( y_pred, return_counts=True) n_classes = 5 pred_cnt = pd.Series(cnt, index=labels) metrics_summary = precision_recall_fscore_support( y_true=y_true, y_pred=y_pred, labels=labels) avg = list(precision_recall_fscore_support( y_true=y_true, y_pred=y_pred, average='weighted')) metrics_sum_index = ['precision', 'recall', 'f1-score', 'support'] class_report_df = pd.DataFrame( list(metrics_summary), index=metrics_sum_index, columns=labels) support = class_report_df.loc['support'] total = support.sum() class_report_df['avg / total'] = avg[:-1] + [total] class_report_df = class_report_df.T class_report_df['pred'] = pred_cnt class_report_df['pred'].iloc[-1] = total if not (y_score is None): fpr = dict() tpr = dict() roc_auc = dict() for label_it, label in enumerate(labels): fpr[label], tpr[label], _ = roc_curve( (y_true == label).astype(int), y_score[:, label_it]) roc_auc[label] = auc(fpr[label], tpr[label]) if average == 'micro': if n_classes <= 2: fpr["avg / total"], tpr["avg / total"], _ = roc_curve( lb.transform(y_true).ravel(), y_score[:, 1].ravel()) else: fpr["avg / total"], tpr["avg / total"], _ = roc_curve( lb.transform(y_true).ravel(), y_score.ravel()) roc_auc["avg / total"] = auc( fpr["avg / total"], tpr["avg / total"]) elif average == 'macro': # First aggregate all false positive rates all_fpr = np.unique(np.concatenate([ fpr[i] for i in labels] )) # Then interpolate all ROC curves at this points mean_tpr = np.zeros_like(all_fpr) for i in labels: mean_tpr += interp(all_fpr, fpr[i], tpr[i]) # Finally average it and compute AUC mean_tpr /= n_classes fpr["macro"] = all_fpr tpr["macro"] = mean_tpr roc_auc["avg / total"] = auc(fpr["macro"], tpr["macro"]) class_report_df['AUC'] = pd.Series(roc_auc) print(class_report_df) return accuracy
_____no_output_____
Apache-2.0
Processing.ipynb
Ivan-Nebogatikov/HumanActivityRecognitionOutliersDetection
ΠžΠΏΡ€Π΅Π΄Π΅Π»ΡΠ΅ΠΌ Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΈ для прСдсказания с использованиСм классификатора ΠΈ с использованиСм Π½Π΅ΡΠΊΠΎΠ»ΡŒΠΊΠΈΡ… классификаторов
from sklearn.metrics import accuracy_score from sklearn.metrics import classification_report from sklearn.metrics import roc_auc_score from sklearn import metrics from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier from sklearn.neural_network import MLPClassifier import pandas as pd from sklearn.model_selection import cross_val_score from sklearn.metrics import plot_confusion_matrix import matplotlib.pyplot as plt from sklearn.utils import shuffle def Predict(x, classifier = RandomForestClassifier(n_estimators = 400, random_state = 3, class_weight='balanced')): train_features, test_features, train_labels, test_labels = train_test_split(x, labels, test_size = 0.15, random_state = 242) print('Training Features Shape:', train_features.shape) print('Testing Features Shape:', test_features.shape) print("\n") classifier.fit(train_features, train_labels); x_shuffled, labels_shuffled = shuffle(np.array(x), np.array(labels)) scores = cross_val_score(classifier, x_shuffled, labels_shuffled, cv=7) print("%f accuracy with a standard deviation of %f" % (scores.mean(), scores.std())) predictions = list(classifier.predict(test_features)) pred_prob = classifier.predict_proba(test_features) accuracy = class_report( y_true=test_labels, y_pred=np.asarray(predictions), y_score=pred_prob, average='micro') if hasattr(classifier, 'feature_importances_'): print(classifier.feature_importances_) plot_confusion_matrix(classifier, test_features, test_labels) plt.xticks(rotation = 90) plt.style.library['seaborn-darkgrid'] plt.show() return [accuracy, scores.mean(), scores.std()] def PredictWithClassifiers(data, classifiers): accuracies = {} for name, value in classifiers.items(): accuracy = Predict(data, value) accuracies[name] = accuracy print("\n") df = pd.DataFrame({(k, v[0], v[1], v[2]) for k, v in accuracies.items()}, columns=["Method", "Accuracy", "Mean", "Std"]) print(df)
_____no_output_____
Apache-2.0
Processing.ipynb
Ivan-Nebogatikov/HumanActivityRecognitionOutliersDetection
ΠžΠΏΡ€Π΅Π΄Π΅Π»ΡΠ΅ΠΌ Π½Π°Π±ΠΎΡ€ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΠ΅ΠΌΡ‹Ρ… классификаторов
from sklearn import svm from sklearn.naive_bayes import GaussianNB from sklearn.gaussian_process import GaussianProcessClassifier from sklearn.gaussian_process.kernels import RBF from sklearn.ensemble import AdaBoostClassifier methods = { "MLP" : MLPClassifier(random_state=1, max_iter=300), "K-neigh" : KNeighborsClassifier(), # default k = 5 "Random Forest" : RandomForestClassifier(n_estimators = 400, random_state = 3, class_weight='balanced'), "Bayes" : GaussianNB(), "AdaBoost" : AdaBoostClassifier(), "SVM" : svm.SVC(probability=True, class_weight='balanced') } frame = pd.DataFrame(data['a_th'].to_list(), columns=['x','y','z']).fillna(0) print(frame) feature_list = list(frame.columns) print(frame) PredictWithClassifiers(frame, methods)
x y z 0 0.000000 0.000000 0.000000 1 0.000000 0.000000 0.000000 2 0.000000 0.000000 0.000000 3 0.000000 0.000000 0.000000 4 0.548602 9.603077 1.287286 ... ... ... ... 221608 -3.162401 8.992967 1.387654 221609 -3.108340 8.924676 1.683505 221610 -3.227947 8.943086 1.807846 221611 -3.347224 8.939447 1.391044 221612 -3.119911 9.025276 1.377277 [221613 rows x 3 columns] x y z 0 0.000000 0.000000 0.000000 1 0.000000 0.000000 0.000000 2 0.000000 0.000000 0.000000 3 0.000000 0.000000 0.000000 4 0.548602 9.603077 1.287286 ... ... ... ... 221608 -3.162401 8.992967 1.387654 221609 -3.108340 8.924676 1.683505 221610 -3.227947 8.943086 1.807846 221611 -3.347224 8.939447 1.391044 221612 -3.119911 9.025276 1.377277 [221613 rows x 3 columns] Training Features Shape: (188371, 3) Testing Features Shape: (33242, 3) 0.860793 accuracy with a standard deviation of 0.001170 Accuracy: 0.8628542205643464 precision recall f1-score support pred AUC climbingdown 0.947478 0.963002 0.955177 3784.0 3846.0 0.998307 climbingup 0.730908 0.793458 0.760900 4861.0 5277.0 0.965126 jumping 0.736196 0.392799 0.512273 611.0 326.0 0.965964 lying 0.996778 0.978082 0.987342 4745.0 4656.0 0.999438 running 0.820877 0.712463 0.762837 4702.0 4081.0 0.956360 sitting 0.936003 0.945902 0.940927 4917.0 4969.0 0.994198 standing 0.894068 0.930101 0.911728 4764.0 4956.0 0.994421 walking 0.754044 0.796418 0.774652 4858.0 5131.0 0.966664 avg / total 0.863435 0.862854 0.861296 33242.0 33242.0 0.987875
Apache-2.0
Processing.ipynb
Ivan-Nebogatikov/HumanActivityRecognitionOutliersDetection
SF Salaries Exercise - SolutionsWelcome to a quick exercise for you to practice your pandas skills! We will be using the [SF Salaries Dataset](https://www.kaggle.com/kaggle/sf-salaries) from Kaggle! Just follow along and complete the tasks outlined in bold below. The tasks will get harder and harder as you go along. ** Import pandas as pd.**
import pandas as pd
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
** Read Salaries.csv as a dataframe called sal.**
sal = pd.read_csv('Salaries.csv')
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
** Check the head of the DataFrame. **
sal.head()
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
** Use the .info() method to find out how many entries there are.**
sal.info() # 148654 Entries
<class 'pandas.core.frame.DataFrame'> RangeIndex: 148654 entries, 0 to 148653 Data columns (total 13 columns): Id 148654 non-null int64 EmployeeName 148654 non-null object JobTitle 148654 non-null object BasePay 148045 non-null float64 OvertimePay 148650 non-null float64 OtherPay 148650 non-null float64 Benefits 112491 non-null float64 TotalPay 148654 non-null float64 TotalPayBenefits 148654 non-null float64 Year 148654 non-null int64 Notes 0 non-null float64 Agency 148654 non-null object Status 0 non-null float64 dtypes: float64(8), int64(2), object(3) memory usage: 14.7+ MB
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
**What is the average BasePay ?**
sal['BasePay'].mean()
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
** What is the highest amount of OvertimePay in the dataset ? **
sal['OvertimePay'].max()
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
** What is the job title of JOSEPH DRISCOLL ? Note: Use all caps, otherwise you may get an answer that doesn't match up (there is also a lowercase Joseph Driscoll). **
sal[sal['EmployeeName']=='JOSEPH DRISCOLL']['JobTitle']
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
** How much does JOSEPH DRISCOLL make (including benefits)? **
sal[sal['EmployeeName']=='JOSEPH DRISCOLL']['TotalPayBenefits']
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
** What is the name of highest paid person (including benefits)?**
sal[sal['TotalPayBenefits']== sal['TotalPayBenefits'].max()] #['EmployeeName'] # or # sal.loc[sal['TotalPayBenefits'].idxmax()]
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
** What is the name of lowest paid person (including benefits)? Do you notice something strange about how much he or she is paid?**
sal[sal['TotalPayBenefits']== sal['TotalPayBenefits'].min()] #['EmployeeName'] # or # sal.loc[sal['TotalPayBenefits'].idxmax()]['EmployeeName'] ## ITS NEGATIVE!! VERY STRANGE sal.groupby('Year')['BasePay'].mean()
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
** What was the average (mean) BasePay of all employees per year? (2011-2014) ? **
sal.groupby('Year').mean()['BasePay']
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
** How many unique job titles are there? **
sal['JobTitle'].nunique() sal.head(0) # Get the col headers only sal['JobTitle'].value_counts().head()
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
** What are the top 5 most common jobs? **
sal['JobTitle'].value_counts().head(5)

# sum(sal[sal['Year']==2013]['JobTitle'].value_counts() == 1)
# sal[sal['Year']==2013]

yr_cond = sal['Year']==2013 # Return a series of all rows in sal and whether they have 2013 in their 'Year' column
yr_filtered_sal = sal[yr_cond] # Return sal filtered to only the rows that meet above criteria

job_counts = yr_filtered_sal['JobTitle'].value_counts()
job_counts_is_one = (job_counts == 1)
sum(job_counts_is_one)
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
** How many Job Titles were represented by only one person in 2013? (e.g. Job Titles with only one occurence in 2013?) **
sum(sal[sal['Year']==2013]['JobTitle'].value_counts() == 1) # pretty tricky way to do this... sal def chief_check(title): if 'chief' in title.lower(): return True else: return False sum(sal['JobTitle'].apply(lambda title: chief_check(title)))
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
** How many people have the word Chief in their job title? (This is pretty tricky) **
def chief_string(title): if 'chief' in title.lower(): return True else: return False sal['Title_len'] = sal['JobTitle'].apply(len) sal[['Title_len','TotalPayBenefits']].corr() sum(sal['JobTitle'].apply(lambda x: chief_string(x)))
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
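For reference, here is a vectorized alternative to the apply/lambda approach above: pandas' string accessor can do the case-insensitive match directly. (A small sketch; it relies on the `sal` frame already loaded above.)

```python
# count titles containing 'chief', ignoring case, without a Python loop
sum(sal['JobTitle'].str.contains('chief', case=False))
```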
** Bonus: Is there a correlation between length of the Job Title string and Salary? **
sal['title_len'] = sal['JobTitle'].apply(len) sal[['title_len','TotalPayBenefits']].corr() # No correlation.
_____no_output_____
MIT
Python/data_science/data_analysis/04-Pandas-Exercises/02-SF Salaries Exercise - Solutions.ipynb
Kenneth-Macharia/Learning
Sequence Reconstruction [PASSED]

Check whether the original sequence org can be uniquely reconstructed from the sequences in seqs. The org sequence is a permutation of the integers from 1 to n, with 1 ≀ n ≀ 10^4. Reconstruction means building a shortest common supersequence of the sequences in seqs (i.e., a shortest sequence such that all sequences in seqs are subsequences of it). Determine whether there is only one sequence that can be reconstructed from seqs, and whether it is the org sequence.

Example 1
- Input: org: [1,2,3], seqs: [[1,2],[1,3]]
- Output: false
- Explanation: [1,2,3] is not the only sequence that can be reconstructed, because [1,3,2] is also a valid sequence that can be reconstructed.

Example 2
- Input: org: [1,2,3], seqs: [[1,2]]
- Output: false
- Explanation: The reconstructed sequence can only be [1,2].

Example 3
- Input: org: [1,2,3], seqs: [[1,2],[1,3],[2,3]]
- Output: true
- Explanation: The sequences [1,2], [1,3], and [2,3] can uniquely reconstruct the original sequence [1,2,3].

Example 4
- Input: org: [4,1,5,2,6,3], seqs: [[5,2,6,3],[4,1,5,2]]
- Output: true

Solution Intuition

Take the first number from each list. From the set of first numbers, find the one that appears only in first position across all lists. Add this number to the result and delete it from all lists. Repeat until all input lists are empty; the reconstruction is unique only if there is exactly one such candidate at every step.

Implementation
def is_unique_first(x: int, seqs: list):
    # x qualifies as a first element only if it never appears in a
    # non-first position of any remaining sequence
    return len([s for s in seqs if x in s[1:]]) == 0

def reconstruct(orgs: list, seqs: list) -> bool:
    res = []
    while seqs:
        first: set = set([lst[0] for lst in seqs if is_unique_first(lst[0], seqs)])
        # exactly one candidate is required at every step for the
        # reconstruction to be unique
        if len(first) != 1:
            return False
        else:
            elem = list(first)[0]
            res.append(elem)
            seqs = [[x for x in lst if x != elem] for lst in seqs]
            seqs = [s for s in seqs if s]
    return res == orgs

reconstruct([1,2,3], [[1,2],[1,3]])

reconstruct([1,2,3], [[1,2],[1,3],[2,3]])

reconstruct([4,1,5,2,6,3], [[5,2,6,3],[4,1,5,2]])
_____no_output_____
MIT
python-data-structures/interviews/goog-2021-02-03.ipynb
dimastatz/courses
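The solution above re-scans the remaining lists on every step. A more standard linear-time formulation, sketched below as an alternative (not the submitted solution), treats consecutive pairs in seqs as graph edges and runs Kahn's topological sort, requiring exactly one zero-indegree node at every step.

```python
from collections import defaultdict, deque

def sequence_reconstruction(org: list, seqs: list) -> bool:
    # build a graph from consecutive pairs in every sequence
    graph = defaultdict(set)
    indegree = defaultdict(int)
    nodes = set()
    for seq in seqs:
        nodes.update(seq)
        for a, b in zip(seq, seq[1:]):
            if b not in graph[a]:
                graph[a].add(b)
                indegree[b] += 1
    if nodes != set(org):
        return False

    # Kahn's algorithm; uniqueness requires exactly one choice per step
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        if len(queue) > 1:
            return False  # more than one candidate -> not unique
        n = queue.popleft()
        order.append(n)
        for m in graph[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    return order == org

print(sequence_reconstruction([1, 2, 3], [[1, 2], [1, 3]]))            # False
print(sequence_reconstruction([1, 2, 3], [[1, 2], [1, 3], [2, 3]]))    # True
print(sequence_reconstruction([4, 1, 5, 2, 6, 3], [[5, 2, 6, 3], [4, 1, 5, 2]]))  # True
```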
$\newcommand{\xv}{\mathbf{x}} \newcommand{\wv}{\mathbf{w}} \newcommand{\yv}{\mathbf{y}} \newcommand{\zv}{\mathbf{z}} \newcommand{\uv}{\mathbf{u}} \newcommand{\vv}{\mathbf{v}} \newcommand{\Chi}{\mathcal{X}} \newcommand{\R}{\rm I\!R} \newcommand{\sign}{\text{sign}} \newcommand{\Tm}{\mathbf{T}} \newcommand{\Xm}{\mathbf{X}} \newcommand{\Zm}{\mathbf{Z}} \newcommand{\Im}{\mathbf{I}} \newcommand{\Um}{\mathbf{U}} \newcommand{\Vm}{\mathbf{V}} \newcommand{\muv}{\boldsymbol\mu} \newcommand{\Sigmav}{\boldsymbol\Sigma} \newcommand{\Lambdav}{\boldsymbol\Lambda}$

Machine Learning Methodology

For machine learning algorithms, we learned how to set goals to optimize and how to reach or approach the optimal solutions. Now, let us discuss how to evaluate the learned models. There will be many different aspects that we need to consider, not simply accuracy, so we will further discuss techniques to make machine learning models better.

Performance Measurement, Overfitting, Regularization, and Cross-Validation

In machine learning, *what is a good measure to assess the quality of a machine learning model?* Let us step back from what we have learned in class about ML techniques and think about this. In previous lectures, we have discussed various measures such as `root mean square error (RMSE)`, `mean square error (MSE)`, and `mean absolute error (MAE)` for **regression problems**, and `accuracy`, `confusion matrix`, `precision/recall`, `F1-score`, `receiver operating characteristic (ROC) curve`, and others for **classification** (a short sketch computing a few of these appears after the next code cell). For your reference, here are resources on diverse metrics for different categories of machine learning.

* Regression: https://arxiv.org/pdf/1809.03006.pdf
* Classification: As we have a cheatsheet already, here is a comprehensive version from an ICMLA tutorial. https://www.icmla-conference.org/icmla11/PE_Tutorial.pdf
* Clustering: https://scikit-learn.org/stable/modules/clustering.html#clustering-performance-evaluation

Anyway, are these measures good enough to say a specific model is better than another? Let us take a look at the following code examples and think about something that we are missing.
import numpy as np import matplotlib.pyplot as plt %matplotlib inline from copy import deepcopy as copy x = np.arange(3) t = copy(x) def plot_data(): plt.plot(x, t, "o", markersize=10) plot_data()
_____no_output_____
MIT
reading_assignments/5_Note-ML Methodology.ipynb
biqar/Fall-2020-ITCS-8156-MachineLearning
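As promised above, here is a short sketch that puts names on a few of the measures just listed, on made-up numbers; it assumes scikit-learn is available (as it is later in this notebook).

```python
import numpy as np
from sklearn import metrics

# made-up regression targets and predictions
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

mse = metrics.mean_squared_error(y_true, y_pred)
print('MSE :', mse)
print('RMSE:', np.sqrt(mse))
print('MAE :', metrics.mean_absolute_error(y_true, y_pred))

# made-up binary classification labels and predictions
c_true = [0, 1, 1, 0, 1]
c_pred = [0, 1, 0, 0, 1]
print('accuracy:', metrics.accuracy_score(c_true, c_pred))
print('F1      :', metrics.f1_score(c_true, c_pred))
print(metrics.confusion_matrix(c_true, c_pred))
```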
I know that it is silly to apply a linear regression on this obvious model, but let us try. :)
## Least squares solution: fill in the code here to fit and plot as in the instructor's output
import numpy as np

# First create X1 by adding a column of 1's to x
N = x.shape[0]
X1 = np.c_[np.ones((N, 1)), x]

# Next, use inverse (or the solve / lstsq functions) to get w*
w = np.linalg.inv(X1.transpose().dot(X1)).dot(X1.transpose()).dot(t)
# print(w)

y = X1.dot(w)
plot_data()
plt.plot(y)
_____no_output_____
MIT
reading_assignments/5_Note-ML Methodology.ipynb
biqar/Fall-2020-ITCS-8156-MachineLearning
Can we try a nonlinear model on this data? Why not? We can make a nonlinear model by simply adding higher-degree terms such as square, cubic, quartic, and so on.

$$ f(\xv; \wv) = w_0 + w_1 \xv + w_2 \xv^2 + w_3 \xv^3 + \cdots $$

This is called *polynomial regression*, as we make the input features nonlinear by extending them to a higher-dimensional space with higher polynomial-degree terms. For instance, your input feature $(1, x)$ is extended to $(1, x, x^2, x^3)$ for a cubic polynomial regression model. After this input transformation, you can simply use least squares or least mean squares to find the weights, as the model is still linear with respect to the weight $\wv$. Let us make the polynomial regression model and fit it to the data above with least squares.
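Before the implementation, note that this feature expansion is exactly a Vandermonde matrix, which NumPy can build directly. A quick illustration on a stand-in input (named `x_demo` so as not to overwrite the notebook's `x`):

```python
import numpy as np

x_demo = np.arange(3)
# columns are x**0, x**1, x**2, x**3: the cubic feature expansion above
print(np.vander(x_demo, N=4, increasing=True))
# [[1 0 0 0]
#  [1 1 1 1]
#  [1 2 4 8]]
```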
# Polynomial regression
def poly_regress(x, d=3, t=None, **params):
    bnorm = params.pop('normalize', False)

    #######################################################################################
    # Transform input features: append polynomial terms (from the bias when i=0) up to degree d
    X_poly = []
    for i in range(d+1):
        X_poly.append(x**i)
    X_poly = np.vstack(X_poly).T

    # normalize (skipping the bias column)
    if bnorm:
        mu, sd = np.mean(X_poly[:, 1:, None], axis=0), np.std(X_poly[:, 1:, None], axis=0)
        X_poly[:, 1:] = (X_poly[:, 1:] - mu.flat) / sd.flat

    # least squares
    if t is not None:
        # least squares solution via the normal equations
        w = np.linalg.inv(X_poly.transpose().dot(X_poly)).dot(X_poly.transpose()).dot(t)
        if bnorm:
            return X_poly, mu, sd, w
        return X_poly, w

    if bnorm:
        return X_poly, mu, sd
    return X_poly
_____no_output_____
MIT
reading_assignments/5_Note-ML Methodology.ipynb
biqar/Fall-2020-ITCS-8156-MachineLearning
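As a quick cross-check of the feature transform, the same polynomial design matrix can be built with np.vander. This is just a sketch; increasing=True puts the bias column first, matching the loop above.

# Equivalent feature transform: columns [1, x, x^2, ..., x^d]
d = 3
X_vander = np.vander(x, N=d+1, increasing=True)
print(np.allclose(X_vander, poly_regress(x, d)))  # expected: True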
The poly_regress() function trains on the data when a target t is given, after transforming the input x, as in the following example. The function also returns the transformed input X_poly.
Xp, wp = poly_regress(x, 3, t)
print(wp.shape)
print(Xp.shape)

yp = Xp @ wp
plot_data()
plt.plot(x, y)
plt.plot(x, yp)
(4,)
(3, 4)
MIT
reading_assignments/5_Note-ML Methodology.ipynb
biqar/Fall-2020-ITCS-8156-MachineLearning
Hmm... They both look good on this data. Then, what is the difference? Let us take a look at how they change if we add test data. Comparing the MSE on the training points, they are equivalent. Let us expand the data for testing and see how different they are. Here, we use the other mode of the poly_regress() function: without passing a target, it simply transforms the test input into polynomial features.
xtest = np.arange(11) - 5
Xptest = poly_regress(xtest, 3)
yptest = Xptest @ wp

X1test = np.vstack((np.ones(len(xtest)), xtest)).T
ytest = X1test @ w

plot_data()
plt.plot(xtest, ytest)
plt.plot(xtest, yptest)
_____no_output_____
MIT
reading_assignments/5_Note-ML Methodology.ipynb
biqar/Fall-2020-ITCS-8156-MachineLearning
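To put numbers on the comparison, here is a small sketch computing the MSE of both models, first on the three training points and then on the extended inputs, under the made-up assumption that the true trend stays t = x outside the training range.

def mse(pred, target):
    return np.mean((pred - target)**2)

# On the training points, both models are essentially exact
print("train MSE, linear:", mse(X1 @ w, t))
print("train MSE, cubic :", mse(Xp @ wp, t))

# On the extended inputs, assume the true trend remains t = x
t_test = xtest
print("test MSE, linear:", mse(ytest, t_test))
print("test MSE, cubic :", mse(yptest, t_test))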
Here the orange line is the linear model and the green line is the 3rd-degree polynomial regression. Which model looks better? What is your pick?

Learning Curve

From the above example, we realized that the model evaluation we have discussed so far is not enough. First, let us consider how well a learned model generalizes to new data with respect to the number of training samples. We assume that the test data are drawn from the same distribution over the example space as the training data. In such a plot, we can compare two learning algorithms and find which one generalizes better than the other. Also, during training, we have access to the training error (or empirical loss). This often does not look similar to the test error (the generalization loss above). Let us take a look at the example in the Geron textbook.
import os
import pandas as pd
import sklearn

def prepare_country_stats(oecd_bli, gdp_per_capita):
    oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
    oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
    gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
    gdp_per_capita.set_index("Country", inplace=True)
    full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita,
                                  left_index=True, right_index=True)
    full_country_stats.sort_values(by="GDP per capita", inplace=True)
    remove_indices = [0, 1, 6, 8, 33, 34, 35]
    keep_indices = list(set(range(36)) - set(remove_indices))
    return (full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices],
            full_country_stats[["GDP per capita", 'Life satisfaction']])

!curl https://raw.githubusercontent.com/ageron/handson-ml/master/datasets/lifesat/oecd_bli_2015.csv > oecd_bli_2015.csv
!curl https://raw.githubusercontent.com/ageron/handson-ml/master/datasets/lifesat/gdp_per_capita.csv > gdp_per_capita.csv

# Load the data
oecd_bli = pd.read_csv("oecd_bli_2015.csv", thousands=',')
gdp_per_capita = pd.read_csv("gdp_per_capita.csv", thousands=',', delimiter='\t',
                             encoding='latin1', na_values="n/a")

# Prepare the data
country_stats, full_country_stats = prepare_country_stats(oecd_bli, gdp_per_capita)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]

# Visualize the data
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(6.5,4))
plt.axis([0, 60000, 0, 10])
plt.show()
_____no_output_____
MIT
reading_assignments/5_Note-ML Methodology.ipynb
biqar/Fall-2020-ITCS-8156-MachineLearning
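As in the Geron example, a plain linear model can be fit to this partial data. The sketch below assumes scikit-learn's LinearRegression, and the GDP value used for the prediction is hypothetical.

from sklearn.linear_model import LinearRegression

# Fit a linear model on the partial (trimmed) data
lin_model = LinearRegression()
lin_model.fit(X, y)

# Predict life satisfaction for a hypothetical GDP per capita of 22,000
print(lin_model.predict([[22000.0]]))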
This data looks like it follows a linear trend. Now, let us extend the x-axis further, to 110K, and see how it looks.
# Visualize the full data
full_country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(12,4))
plt.axis([0, 110000, 0, 10])
plt.show()
_____no_output_____
MIT
reading_assignments/5_Note-ML Methodology.ipynb
biqar/Fall-2020-ITCS-8156-MachineLearning
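Finally, here is a sketch of the learning-curve idea from above, using scikit-learn's learning_curve utility (an assumption on tooling, not part of the original notebook) to plot training and validation error against the number of training samples on the full data.

from sklearn.model_selection import learning_curve
from sklearn.linear_model import LinearRegression

X_full = np.c_[full_country_stats["GDP per capita"]]
y_full = np.c_[full_country_stats["Life satisfaction"]]

train_sizes, train_scores, val_scores = learning_curve(
    LinearRegression(), X_full, y_full.ravel(),
    train_sizes=np.linspace(0.2, 1.0, 5), cv=5,
    scoring="neg_mean_squared_error")

# learning_curve returns negated MSE; flip the sign and average over the CV folds
plt.plot(train_sizes, -train_scores.mean(axis=1), "o-", label="training MSE")
plt.plot(train_sizes, -val_scores.mean(axis=1), "o-", label="validation MSE")
plt.xlabel("number of training samples")
plt.ylabel("MSE")
plt.legend()
plt.show()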