repo_name: stringlengths 6-77
path: stringlengths 8-215
license: stringclasses (15 values)
cells: sequence
types: sequence
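Each row below pairs a notebook's cell sources (cells) with their cell types (types), keyed by repo_name, path, and license. As a minimal sketch of consuming such a dump, assuming the rows have been exported to a local JSON Lines file (the filename notebooks.jsonl is a hypothetical placeholder, not part of this dataset):

import json

# Hypothetical local export of the rows below, one JSON object per line.
with open("notebooks.jsonl") as f:
    for line in f:
        row = json.loads(line)
        # cells and types are parallel sequences: one entry per notebook cell.
        for source, cell_type in zip(row["cells"], row["types"]):
            if cell_type == "code":
                print(row["repo_name"], row["path"], row["license"], len(source))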
metpy/MetPy
v1.0/_downloads/0eff36d3fdf633f2a71ae3e92fdeb5b8/Simple_Sounding.ipynb
bsd-3-clause
[ "%matplotlib inline", "Simple Sounding\nUse MetPy as straightforward as possible to make a Skew-T LogP plot.", "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nimport metpy.calc as mpcalc\nfrom metpy.cbook import get_test_data\nfrom metpy.plots import add_metpy_logo, SkewT\nfrom metpy.units import units\n\n# Change default to be better for skew-T\nplt.rcParams['figure.figsize'] = (9, 9)\n\n# Upper air data can be obtained using the siphon package, but for this example we will use\n# some of MetPy's sample data.\n\ncol_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']\n\ndf = pd.read_fwf(get_test_data('jan20_sounding.txt', as_file_obj=False),\n skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)\n\n# Drop any rows with all NaN values for T, Td, winds\ndf = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed'\n ), how='all').reset_index(drop=True)", "We will pull the data out of the example dataset into individual variables and\nassign units.", "p = df['pressure'].values * units.hPa\nT = df['temperature'].values * units.degC\nTd = df['dewpoint'].values * units.degC\nwind_speed = df['speed'].values * units.knots\nwind_dir = df['direction'].values * units.degrees\nu, v = mpcalc.wind_components(wind_speed, wind_dir)\n\nskew = SkewT()\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\nskew.plot_barbs(p, u, v)\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\nskew.ax.set_ylim(1000, 100)\n\n# Add the MetPy logo!\nfig = plt.gcf()\nadd_metpy_logo(fig, 115, 100)\n\n# Example of defining your own vertical barb spacing\nskew = SkewT()\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\n\n# Set spacing interval--Every 50 mb from 1000 to 100 mb\nmy_interval = np.arange(100, 1000, 50) * units('mbar')\n\n# Get indexes of values closest to defined interval\nix = mpcalc.resample_nn_1d(p, my_interval)\n\n# Plot only values nearest to defined interval values\nskew.plot_barbs(p[ix], u[ix], v[ix])\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\nskew.ax.set_ylim(1000, 100)\n\n# Add the MetPy logo!\nfig = plt.gcf()\nadd_metpy_logo(fig, 115, 100)\n\n# Show the plot\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code" ]
AhmedHani/Kaggle-Machine-Learning-Competitions
Easy/PokerRuleInduction/PokerRuleInduction.ipynb
mit
[ "Poker Induction Rule Problem\nThe problem is about the famous Cards Game, Poker.\nIn Poker, there is something called Poker Hands, The hand consists of 5 cards which determines the score of each player. Tranditionally in our life, the rules for calculating the hands can be seen here https://en.wikipedia.org/wiki/List_of_poker_hands\nStraight flush > Four of a kind > Full house > Flush > Straight > Three of a kind > Two pair > One pair > High card(Nothing).\nWell, the problem considers us as Aliens that don't know anything about this game and its rules, It wants from us to predict the rules of the games given a dataset that contains the 5 cards and the class(Poker Hands).\nDataset\nThe dataset is taken from here https://archive.ics.uci.edu/ml/datasets/Poker+Hand, we have 10 features and a label for each record. Each consecutive pair represents a card (Suit {Heart, Spade, Diamond, Club}, Rank {Ace, 2, 3, ... Q, K}). The label is represented as numerical value (0 - 9) which indciates the poker hand.\n1: One pair; one pair of equal ranks within five cards\n2: Two pairs; two pairs of equal ranks within five cards \n3: Three of a kind; three equal ranks within five cards \n4: Straight; five cards, sequentially ranked with no gaps\n5: Flush; five cards with the same suit \n6: Full house; pair + different rank three of a kind \n7: Four of a kind; four equal ranks within five cards \n8: Straight flush; straight + flush \n9: Royal flush; {Ace, King, Queen, Jack, Ten} + flush \nSolution\nWhen solving problems on Kaggle using Python, make sure that you have Pandas, NumPy, SciPy, Scikit-learn libraries installed in your Python package, they contains many utilities and built-in algorithms that make the solving easier.\nFirst, let's import the modules that we will use in the project.", "import pandas as pnd\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.neighbors import KNeighborsClassifier", "Using the awesome Panda library, we can parse the .csv file of the training set and hold the data in a table", "def getTrainingData():\n print(\"Get training data ...\\n\")\n\n trainingData = pnd.read_csv(\"./train.csv\")\n trainingData['id'] = range(1, len(trainingData) + 1) #For 1-base index\n\n return trainingData", "Second, We need to extract the features and the label from the table", "trainingData = getTrainingData()\nlabels = trainingData['hand']\nfeatures = trainingData.drop(['id', 'hand'], axis=1)", "When dealing with Machine Learning algorithms, you need to calculate the effiency of the algorithm with the data, this could be done using several techniques such as K-Fold cross validation https://en.wikipedia.org/wiki/Cross-validation_(statistics)#k-fold_cross-validation and Precision and recall https://en.wikipedia.org/wiki/Precision_and_recall.\nI've used K-fold cross validation for this problem.", "def kFoldCrossValidation(kFold):\n trainingData = getTrainingData()\n label = trainingData['hand']\n features = trainingData.drop(['id'], axis=1)\n crossValidationResult = dict()\n\n print(\"Start Cross Validation ...\\n\")\n\n randomForest = RandomForestClassifier(n_estimators=100)\n kNearestNeighbour = KNeighborsClassifier(n_neighbors=100)\n crossValidationResult['RF'] = cross_val_score(randomForest, trainingData, label, cv=kFold).mean()\n crossValidationResult['KNN'] = cross_val_score(kNearestNeighbour, trainingData, label, cv=kFold).mean()\n\n print(\"KNN: %s\\n\" % 
str(crossValidationResult['KNN']))\n print(\"RF: %s\\n\" % str(crossValidationResult['RF']))\n print(\"\\n\")\n\n return crossValidationResult['KNN'], crossValidationResult['RF']", "I've decided to use K Nearest Neighbour and Random Forest according to the recommendation and the benchmark of the problem. Above, I've created instances from the Random Forest and K Nearest Neighbour modules, then get the score of each one to help me to decide which one is better.", "if __name__ == '__main__':\n trainingData = getTrainingData()\n labels = trainingData['hand']\n features = trainingData.drop(['id', 'hand'], axis=1)\n\n KNN, RF = kFoldCrossValidation(5)\n classifier = None\n\n if KNN > RF:\n classifier = KNeighborsClassifier(n_neighbors=100)\n else:\n classifier = RandomForestClassifier(n_estimators=10, n_jobs=-1)\n\n testData, result = getTestData()\n\n print(\"Classification in progress ...\\n\")\n\n classifier.fit(features, labels)\n result.insert(1, 'hand', classifier.predict(testData))\n result.to_csv(\"./results.csv\", index=False)\n\n print(\"Classification Ends ...\\n\")", "I've made a condition to decide which classifier will be used according to the calculated score in the previous step.\nResult\nThe best score I've got is 0.5624 from Random Forest which is close to the benchmark of this algorithm which is 0.62408. But the best score for this problem is 1.0000.\nWell, this is was the first problem I've ever solved on Kaggle (Year ago), I was trying to begin and learn how to use Python libraries and submit the result, so, I haven't tried to improve the accuracy. Later, I've found that to improve the accuracy, we may make some preprocessing on data, tuning the parameters, regularization and other things that help on getting better accuracy.\nImportant Note\nDon't use any module as a black box when you don't know how it works, the libraries are made to use it to avoid wasting the time on re-code them again, but, you MUST know what is your code doing." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
WenboTien/Crime_data_analysis
exploratory_data_analysis/.ipynb_checkpoints/UCIrvine_Crime_data_analysis-checkpoint.ipynb
mit
[ "%pylab inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport sklearn\n\nfrom scipy import stats, optimize\nfrom sklearn.preprocessing import Imputer\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.linear_model import Lasso, LinearRegression, Ridge\n\nfrom sklearn.base import clone\nfrom itertools import combinations\nfrom sklearn.metrics import explained_variance_score, r2_score, median_absolute_error\n\nprint('The scikit-learn version is {}.'.format(sklearn.__version__))\nprint('The pandas version is {}.'.format(pd.__version__))\nprint('The numpy version is {}.'.format(np.__version__))", "Read the CSV\nWe use pandas read_csv(path/to/csv) method to read the csv file. Next, replace the missing values with np.NaN i.e. Not a Number. This way we can count the number of missing values per column.", "df = pd.read_csv('../datasets/UCIrvineCrimeData.csv');\ndf = df.replace('?',np.NAN)\nfeatures = [x for x in df.columns if x not in ['state', 'community', 'communityname', 'county'\n , 'ViolentCrimesPerPop']]", "Find the number of missing values in every column", "df.isnull().sum()", "Eliminating samples or features with missing values\nOne of the easiest ways to deal with missing values is to simply remove the corresponding features(columns) or samples(rows) from the dataset entirely. Rows with missing values can be easily dropped via the dropna method.", "df.dropna()", "Similarly, we can drop columns that have atleast one NaN in any row by setting the axis argument to 1:", "df.dropna(axis=1);", "The dropna() method supports additional parameters that can come in handy.", "#only drop rows where all columns are null\ndf.dropna(how='all');\n\n# drop rows that have not at least 4 non-NaN values\ndf.dropna(thresh=4);\n\n# only drop rows where NaN appear in specific columns (here :'community')\ndf.dropna(subset=['community']);", "Imputing missing values\nOften, the removal of samples or dropping of entire feature columns is simply not feasible, because we might lost too much valuable data. In this case, we can use different interpolation techniques to estimate the missing values from the othere training samples in our dataset. One of the most common interpolation technique is mean interpolation, where we simply replace the missing value by the mean value of the entire feature column. A convenient way to achieve this is using the Imputer class from the scikit-learn as shown in the following code.", "imr = Imputer(missing_values='NaN', strategy='mean', axis=0)\nimr = imr.fit(df[features])\nimputed_data = imr.transform(df[features]);", "Sklearn fundamentals\nA convenient way to randomly partition the dataset into a separate test & training dataset is to use the train_test_split function from scikit-learn's cross_validation submodule", "#df = df.drop([\"communityname\", \"state\", \"county\", \"community\"], axis=1)\nX, y = imputed_data, df['ViolentCrimesPerPop']\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0);", "First, we assigned the NumPy array representation of features columns to the variable X, and we assigned the predicted variable to the variable y. Then we used the train_test_split function to randomly split X and y into separate training & test datasets. 
By setting test_size=0.3 we assigned 30 percent of samples to X_test and the remaining 70 percent to X_train.\nSequential Feature Selection algorithm : Sequential Backward Algorithm(SBS)\nSequential feature selection algorithms are a family of greedy search algorithms that can reduce an initial d-dimensional feature space into a k-dimensional feature subspace where k < d. The idea is to select the most relevant subset of features to improve computational efficieny and reduce generalization error", "class SBS():\n def __init__(self, estimator, features, \n scoring=r2_score, test_size=0.25,\n random_state=1):\n self.scoring = scoring\n self.estimator = estimator\n self.features = features\n self.test_size = test_size\n self.random_state = random_state\n \n def fit(self, X, y):\n X_train, X_test, y_train, y_test = train_test_split(X, \n y, \n test_size = self.test_size,\n random_state = self.random_state)\n dim = X_train.shape[1]\n self.indices_ = tuple(range(dim))\n self.subsets_ = [self.indices_]\n score = self._calc_score(X_train, y_train, X_test, y_test, self.indices_)\n self.scores_ = [score]\n \n while dim > self.features:\n scores = []\n subsets = []\n for p in combinations(self.indices_, r=dim-1):\n score = self._calc_score(X_train, y_train, X_test, y_test, p)\n scores.append(score)\n subsets.append(p)\n best = np.argmax(score)\n self.indices_ = subsets[best]\n self.subsets_.append(self.indices_)\n dim -= 1\n self.scores_.append(scores[best])\n print self.scores_\n self.k_score_ = self.scores_[-1]\n return self\n \n def transform(self, X):\n return X[:, self.indices_]\n \n def _calc_score(self, X_train, y_train, X_test, y_test, indices):\n self.estimator.fit(X_train[:, indices], y_train)\n y_pred = self.estimator.predict(X_test[:, indices])\n score = self.scoring(y_test, y_pred)\n return score\n\nclf = LinearRegression()\nsbs = SBS(clf, features=1)\nsbs.fit(X_train, y_train)\n\nk_feat = [len(k) for k in sbs.subsets_]\nplt.plot(k_feat, sbs.scores_, marker='o')\nplt.ylim([-1, 1])\nplt.ylabel('Accuracy')\nplt.xlabel('Number of Features')\nplt.grid()\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
WNoxchi/Kaukasos
FAI_old/lesson3/L3HW_MNIST.ipynb
mit
[ "I. Imports", "import keras\nimport numpy as np\n\nfrom keras.datasets import mnist\nfrom keras.optimizers import Adam\nfrom keras.models import Sequential\nfrom keras.preprocessing import image\nfrom keras.layers.core import Dense\nfrom keras.layers.core import Lambda\nfrom keras.layers.core import Flatten\nfrom keras.layers.core import Dropout\nfrom keras.layers.pooling import MaxPooling2D\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.normalization import BatchNormalization\nfrom keras.utils.np_utils import to_categorical", "I want to import Vgg16 as well because I'll want it's low-level features", "# import os, sys\n# sys.path.insert(1, os.path.join('../utils/'))", "Actually, looks like Vgg's ImageNet weights won't be needed.", "# from vgg16 import Vgg16\n# vgg = Vgg16()", "II. Load Data", "(x_train, y_train), (x_test, y_test) = mnist.load_data()", "III. Preprocessing\nKeras Convolutional layers expect color channels, so expand an empty dimension in the input data, to account for no colors.", "x_train = np.expand_dims(x_train, 1) # can also enter <axis=1> for <1>\nx_test = np.expand_dims(x_test, 1)\nx_train.shape", "One-Hot Encoding the outputs:", "y_train, y_test = to_categorical(y_train), to_categorical(y_test)", "Since this notebook's models are all mimicking Vgg16, the input data should be preprocessed in the same way: in this case normalized by subtracting the mean and dividing by the standard deviation. It turns out this is a good idea generally.", "x_mean = x_train.mean().astype(np.float32)\nx_stdv = x_train.std().astype(np.float32)\ndef norm_input(x): return (x - x_mean) / x_stdv", "Create Data Batch Generator\nImageDataGenerator with no arguments will return a generator. Later, when data is augmented, it'll be told how to do so. I don't know what batch-size should be set to: in Lecture it was 64.", "gen = image.ImageDataGenerator()\ntrn_batches = gen.flow(x_train, y_train, batch_size=64)\ntst_batches = gen.flow(x_test, y_test, batch_size=64)", "General workflow, going forward:\n* Define the model's architecture.\n* Run 1 Epoch at default learning rate (0.01 ~ 0.001 depending on optimizer) to get it started.\n* Jack up the learning to 0.1 (as high as you'll ever want to go) and run 1 Epoch, possibly more if you can get away with it.\n* Lower the learning rate by a factor of 10 and run for a number of Epochs -- repeat until model begins to overfit (acc > valacc)\nPoints on internal architecture:\n* Each model will have a data-preprocessing Lambda layer, which normalizes the input and assigns a shape of (1 color-channel x 28 pixels x 28 pixels)\n* Weights are flattened before entering FC layers\n* Convolutional Layers will come in 2 pairs (because this is similar to the Vgg model). \n* Convol layer-pairs will start with 32 3x3 filters and double to 64 3x3 layers\n* A MaxPooling Layer comes after each Convol-pair.\n* When Batch-Normalization is applied, it is done after every layer but last (excluding MaxPooling).\n* Final layer is always an FC softmax layer with 10 outputs for our 10 digits.\n* Dropout, when applied, should increase toward later layers.\n* Optimizer used in Lecture was Adam(), all layers but last use a ReLU activation, loss function is categorical cross-entropy.\n1. 
Linear Model\naka 'Dense', 'Fully-Connected'", "def LinModel():\n model = Sequential([\n Lambda(norm_input, input_shape=(1, 28, 28)),\n Flatten(),\n Dense(10, activation='softmax')\n ])\n model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n return model\n\nLinear_model = LinModel()\nLinear_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,\n validation_data=tst_batches, nb_val_samples=trn_batches.n)\n\nLinear_model.optimizer.lr=0.1\nLinear_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n\nLinear_model.optimizer.lr=0.01\nLinear_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n\nLinear_model.optimizer.lr=0.001\nLinear_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=8,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)", "2. Single Dense Layer\nThis is what people in the 80s & 90s thought of as a 'Neural Network': a single Fully-Connected hidden layer. I don't yet know why the hidden layer is ouputting 512 units. For natural-image recognition it's 4096. I'll see whether a ReLU or Softmax hidden layer works better.\nBy the way, the training and hyper-parameter tuning process should be automated. I want to use a NN to figure out how to do that for me.", "def FCModel():\n model = Sequential([\n Lambda(norm_input, input_shape=(1, 28, 28)),\n Dense(512, activation='relu'),\n Flatten(),\n Dense(10, activation='softmax')\n ])\n model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n return model\n\nFC_model = FCModel()\nFC_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n\nFC_model.optimizer=0.1\nFC_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n\nFC_model.optimizer=0.01\nFC_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)", "With an accuracy of 0.9823 and validation accuracy of 0.9664, the model's starting to overfit significantly and hit its limits, so it's time to go on to the next technique.\n3. Basic 'VGG' style Convolutional Neural Network\nI'm specifying an output shape equal to the input shape, to suppress the warnings keras was giving me; and it stated it was defaulting to that anyway. Or maybe I should've written output_shape=input_shape\nAha: yes it's as I thought. See this thread -- output_shape warnings were added to Keras, and neither vgg16.py (nor I until now) were specifying output_shape. It's fine.\nThe first time I ran this, I forgot to have 2 pairs of Conv layers. At the third λr=0.01 epoch I had acc/val of 0.9964, 0.9878\nAlso noticing: in lecture JH was using a GPU which I think was an NVidia Titan X. I'm using an Intel Core i5 CPU on a MacBook Pro. His epochs took on average 6 seconds, mine are taking 180~190. Convolutions are also the most computationally-intensive part of the NN being built here.\nInterestingly, the model with 2 Conv-layer pairs is taking avg 160s. 
Best Acc/Val: 0.9968/0.9944\nFinal: 0.9975/0.9918 - massive overfitting", "def ConvModel():\n model = Sequential([\n Lambda(norm_input, input_shape=(1, 28, 28), output_shape=(1, 28, 28)),\n Convolution2D(32, 3, 3, activation='relu'),\n Convolution2D(32, 3, 3, activation='relu'),\n MaxPooling2D(),\n Convolution2D(64, 3, 3, activation='relu'),\n Convolution2D(64, 3, 3, activation='relu'),\n MaxPooling2D(),\n Flatten(),\n Dense(512, activation='relu'),\n Dense(10, activation='softmax')\n ])\n model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n return model\n\nCNN_model = ConvModel()\nCNN_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n\nCNN_model.optimizer=0.1\nCNN_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n\nCNN_model.optimizer=0.01\nCNN_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n\n# Running again until validation accuracy stops increasing\nCNN_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)", "4. Data Augmentation", "gen = image.ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,\n height_shift_range=0.08, zoom_range=0.08)\ntrn_batches = gen.flow(x_train, y_train, batch_size=64)\ntst_batches = gen.flow(x_test, y_test, batch_size=64)\n\nCNN_Aug_model = ConvModel()\nCNN_Aug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n# upping LR\nprint(\"Learning Rate, η = 0.1\")\nCNN_Aug_model.optimizer.lr=0.1\nCNN_Aug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n# brining LR back down for more epochs\nprint(\"Learning Rate, η = 0.01\")\nCNN_Aug_model.optimizer.lr=0.01\nCNN_Aug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n\n# 4 more epochs at η=0.01\nCNN_Aug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)", "5. 
Batch Normalization + Data Augmentation\nSee this thread for info on BatchNorm axis.", "def ConvModelBN():\n model = Sequential([\n Lambda(norm_input, input_shape=(1, 28, 28), output_shape=(1, 28, 28)),\n Convolution2D(32, 3, 3, activation='relu'),\n BatchNormalization(axis=1),\n Convolution2D(32, 3, 3, activation='relu'),\n MaxPooling2D(),\n BatchNormalization(axis=1),\n Convolution2D(64, 3, 3, activation='relu'),\n BatchNormalization(axis=1),\n Convolution2D(64, 3, 3, activation='relu'),\n MaxPooling2D(),\n Flatten(),\n BatchNormalization(),\n Dense(512, activation='relu'),\n BatchNormalization(),\n Dense(10, activation='softmax')\n ])\n model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n return model\n\nCNN_BNAug_model = ConvModelBN()\nCNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\nprint(\"Learning Rate, η = 0.1\")\nCNN_BNAug_model.optimizer=0.1\nCNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=2, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\nprint(\"Learning Rate, η = 0.01\")\nCNN_BNAug_model.optimizer=0.01\nCNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=6, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n\n# some more training at 0.1 and 0.01:\nprint(\"Learning Rate, η = 0.1\")\nCNN_BNAug_model.optimizer=0.1\nCNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\nprint(\"Learning Rate, η = 0.01\")\nCNN_BNAug_model.optimizer=0.01\nCNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=6, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)", "6. Dropout + Batch Normalization + Data Augmentation", "def ConvModelBNDo():\n model = Sequential([\n Lambda(norm_input, input_shape=(1, 28, 28), output_shape=(1, 28, 28)),\n Convolution2D(32, 3, 3, activation='relu'),\n BatchNormalization(axis=1),\n Convolution2D(32, 3, 3, activation='relu'),\n MaxPooling2D(),\n BatchNormalization(axis=1),\n Convolution2D(64, 3, 3, activation='relu'),\n BatchNormalization(axis=1),\n Convolution2D(64, 3, 3, activation='relu'),\n MaxPooling2D(),\n Flatten(),\n BatchNormalization(),\n Dense(512, activation='relu'),\n BatchNormalization(),\n Dropout(0.5),\n Dense(10, activation='softmax')\n ])\n model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])\n return model\n\nCNN_BNDoAug_model = ConvModelBNDo()\nCNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\nprint(\"Learning Rate, η = 0.1\")\nCNN_BNDoAug_model.optimizer.lr=0.1\nCNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\nprint(\"Learning Rate, η = 0.01\")\nCNN_BNDoAug_model.optimizer.lr=0.01\nCNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=6, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n\n# 6 more epochs at 0.01\nCNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=6, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n\nprint(\"Learning Rate η = 0.001\")\nCNN_BNDoAug_model.optimizer.lr=0.001\nCNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=12, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)", "7. 
Ensembling\nDefine a function to automatically train a model:", "# I'll set it to display progress at the start of each LR-change\ndef train_model():\n model = ConvModelBNDo()\n model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n \n model.optimizer.lr=0.1\n model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n model.fit_generator(trn_batches, trn_batches.n, nb_epoch=3, verbose=0,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n \n model.optimizer.lr=0.01\n model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n model.fit_generator(trn_batches, trn_batches.n, nb_epoch=11, verbose=0,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n \n model.optimizer.lr=0.001\n model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n model.fit_generator(trn_batches, trn_batches.n, nb_epoch=11, verbose=0,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)\n return model\n\n# Running a little test on the GPU now\ntestmodel = ConvModelBNDo()\ntestmodel.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,\n validation_data=tst_batches, nb_val_samples=tst_batches.n)", "I finally got my GPU running on my workstation. Decided to leave the ghost of Bill Gates alone and put Ubuntu Linux on the second harddrive. This nvidia GTX 870M takes 17 seconds to get through the 60,000 images. The Core i5 on my Mac took an average of 340. A 20x speed up. This also means, at those numbers, a 6-strong ensemble running the regime in train_model() will take about 49 minutes and 18 seconds, instead of 16 hours and 26 minutes. You can see what the motivation was, for me to spend ~9 hours today and get the GPU working. It's a warm feeling, knowing your computer isn't just good for playing DOOM, but'll be doing its share of work real soon.\nSo, onward:\nCreate an array of models", "# this'll take some time\nmodels = [train_model() for m in xrange(6)]", "Save the models' weights -- bc this wasn't computationally cheap", "from os import getcwd\npath = getcwd() + 'data/mnist/'\nmodel_path = path + 'models/'\nfor i,m in enumerate(models):\n m.save_weights(model_path + 'MNIST_CNN' + str(i) + '.pkl')", "Create an array of predictions from the models on the test-set. I'm using a batch size of 256 because that's what was done in lecture, and prediction is such an easier task that I think the large size just helps things go faster.", "ensemble_preds = np.stack([m.predict(x_test, batch_size=256) for m in models])", "Finally, take the average of the predictions:", "avg_preds = ensemble_preds.mean(axis=0)\n\nkeras.metrics.categorical_accuracy(y_test, avg_preds).eval()", "Boom. 0.99699.. ~ 99.7% accuracy. Same as achieved in lecture; took roughly 50 minutes to train. 
Unfortunately I didn't have the h5py module installed when I ran this, so the weight's can't be saved easily -- simple fix of rerunning after install.\nTrying the above again, this time having h5py installed.", "# this'll take some time\nmodels = [train_model() for m in xrange(6)]\n\nfrom os import getcwd\nimport os\npath = getcwd() + '/data/mnist/'\nmodel_path = path + 'models/'\n\nif not os.path.exists(path):\n os.mkdir('data')\n os.mkdir('data/mnist')\nif not os.path.exists(model_path): os.mkdir(model_path)\n\nfor i,m in enumerate(models):\n m.save_weights(model_path + 'MNIST_CNN' + str(i) + '.pkl')\n\nensemble_preds = np.stack([m.predict(x_test, batch_size=256) for m in models])\navg_preds = ensemble_preds.mean(axis=0)\n\nkeras.metrics.categorical_accuracy(y_test, avg_preds).eval()", "And that's it. 99.71% -- 19 May 2017 - Wayne H Nixalo" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google-research/google-research
socraticmodels/SocraticModels_MSR_VTT.ipynb
apache-2.0
[ "Copyright 2021 Google LLC.\nSPDX-License-Identifier: Apache-2.0\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nSocratic Models: MSR-VTT Video-to-Text Retrieval\nSocratic Models (SMs) is a framework that composes multiple pre-existing foundation models (e.g., large language models, visual language models, audio-language models) to provide results for new multimodal tasks, without any model finetuning.\nThis colab runs SMs for zero-shot video-to-text retrieval on the MSR-VTT Full and 1k-A test sets. Specifically, this augments Portillo-Quintero et al. 2021 with audio information by using an ALM for speech-to-text, summarizing the transcriptions with a causal LM (e.g., GPT-3), and re-ranking CLIP (VLM) matching scores against captions with a masked LM (e.g., RoBERTa) on the summaries.\nThis is a reference implementation of one task demonstrated in the work: Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language\nDisclaimer: this colab uses CLIP and GPT-3 as foundation models, and may be subject to unwanted biases. This code should be used with caution (and checked for correctness) in downstream applications.\nQuick Start:\nStep 1. Register for an OpenAI API key to use GPT-3 (there's a free trial) and enter it below\nStep 2. Menu > Change runtime type > Hardware accelerator > \"GPU\"\nStep 3. Menu > Runtime > Run all", "openai_api_key = \"your-api-key\"", "Setup\nThis installs a few dependencies: PyTorch, CLIP, GPT-3.", "!pip install -U --no-cache-dir gdown --pre\n!pip install -U sentence-transformers\n!pip install openai ftfy\n!nvidia-smi # Show GPU info.\n\nimport json\nimport os\n\nimport numpy as np\nimport openai\nimport pandas as pd\nimport pickle\nfrom sentence_transformers import SentenceTransformer\nfrom sentence_transformers import util as st_utils\nimport torch\n\nopenai.api_key = openai_api_key\n\n# From: https://github.com/Deferf/CLIP_Video_Representation\nif not os.path.exists('MSRVTT_test_dict_CLIP_text.pt'):\n !gdown 1-3tpfZzo1_D18WdrioQzc-iogEl-KSnA -O \"MSRVTT_test_dict_CLIP_text.pt\"\nif not os.path.exists('MSRVTT_test_dict_CLIP_visual.pt'):\n !gdown 1Gp3_I_OvcKwjOQmn334-T4wfwQk29TCp -O \"MSRVTT_test_dict_CLIP_visual.pt\"\nif not os.path.exists('test_videodatainfo.json'):\n !gdown 1BzTt1Bf-XJSUXxBfJVxLL3mYWLZ6odsw -O \"test_videodatainfo.json\"\nif not os.path.exists('JS_test_dict_CLIP_text.pt'):\n !gdown --id 15mvFQxrWLNvBvFg4_9rr_Kqyzsy9dudj -O \"JS_test_dict_CLIP_text.pt\"\n\n# Load generated video transcriptions from Google cloud speed-to-text API.\nif not os.path.exists('video_id_to_gcloud_transcription_full.json'):\n !gdown 1LTmvtf9zzw61O7D8YUqdS2mbql76nO6E -O \"video_id_to_gcloud_transcription_full.json\"\n\n# Load generated summaries from LM (comment this out to generate your own with GPT-3).\nif not os.path.exists('msr_full_summaries.pkl'):\n !gdown 1ESXkRv3-3Kz1jZTNtkIhBXME6k1Jr9SW -O \"msr_full_summaries.pkl\"\n\n# Import helper functions from Portillo-Quintero et al. 
2021\n!git clone https://github.com/Deferf/Experiments\n%cd Experiments\nfrom metrics import rank_at_k_precomputed,stack_encoded_dict,generate_sim_tensor,tensor_video_to_text_sim,tensor_text_to_video_metrics,normalize_matrix,pad_dict,list_recall\n%cd \"/content\"", "Load RoBERTa (masked LM)", "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nroberta_model = SentenceTransformer('stsb-roberta-large').to(device)", "Wrap GPT-3 (causal LM)", "gpt_version = \"text-davinci-002\"\ndef prompt_llm(prompt, max_tokens=64, temperature=0, stop=None):\n response = openai.Completion.create(engine=gpt_version, prompt=prompt, max_tokens=max_tokens, temperature=temperature, stop=stop)\n return response[\"choices\"][0][\"text\"].strip()", "Evaluate on MSR-Full", "# Load raw text captions from MSR-Full.\nwith open('test_videodatainfo.json', 'r') as j:\n msr_full_info = json.loads(j.read())\nmsr_full_vid_id_to_captions = {}\nfor info in msr_full_info['sentences']:\n if info['video_id'] not in msr_full_vid_id_to_captions:\n msr_full_vid_id_to_captions[info['video_id']] = []\n msr_full_vid_id_to_captions[info['video_id']].append(info['caption'])\n\n# Reproduce original results with original eval code.\nmsr_full_vid_id_to_clip_vid_feats = torch.load(\"/content/MSRVTT_test_dict_CLIP_visual.pt\", map_location=\"cpu\")\nmsr_full_vid_ids_to_clip_text_feats = torch.load(\"/content/MSRVTT_test_dict_CLIP_text.pt\", map_location=\"cpu\")\nmsr_full_vid_ids = list(msr_full_vid_ids_to_clip_text_feats.keys())\nmsr_full_sim_tensor = generate_sim_tensor(msr_full_vid_ids_to_clip_text_feats, msr_full_vid_id_to_clip_vid_feats, msr_full_vid_ids)\nmsr_full_vid_text_sim = tensor_video_to_text_sim(msr_full_sim_tensor)\nmsr_full_metrics_vtt = rank_at_k_precomputed(msr_full_vid_text_sim)\nprint(msr_full_metrics_vtt)\n\n# Transcription results from gCloud API.\nwith open('video_id_to_gcloud_transcription_full.json', 'r') as j:\n msr_full_vid_id_to_transcript = json.loads(j.read())\n \n# Sort video IDs by transcription length.\nnum_transcripts = 0\ntranscript_lengths = []\nfor i in msr_full_vid_ids:\n if msr_full_vid_id_to_transcript[i] is None:\n transcript_lengths.append(0)\n else:\n num_transcripts += 1\n transcript_lengths.append(len(msr_full_vid_id_to_transcript[i]))\nmsr_full_sorted_vid_ids = [msr_full_vid_ids[i] for i in np.argsort(transcript_lengths)[::-1]]\n\n# Summarize transcriptions with LLM.\nif os.path.exists('msr_full_summaries.pkl'):\n msr_full_vid_id_to_summary = pickle.load(open('msr_full_summaries.pkl', 'rb'))\nelse:\n\n # Zero-shot LLM: summarize transcriptions.\n msr_full_vid_id_to_summary = {}\n for vid_id in msr_full_sorted_vid_ids:\n transcript = msr_full_vid_id_to_transcript[vid_id]\n print('Video ID:', vid_id)\n print('Transcript:', transcript)\n \n if transcript is not None:\n transcript = transcript.strip()\n prompt = 'I am an intelligent video captioning bot.'\n prompt += f'\\nI hear a person saying: \"{transcript}\".'\n prompt += f\"\\nQ: What's a short video caption for this video? 
A: In this video,\"\n print('Prompt:', prompt)\n summary = prompt_llm(prompt, temperature=0, stop='.')\n print('Summary:', summary)\n msr_full_vid_id_to_summary[vid_id] = summary\n \n pickle.dump(msr_full_vid_id_to_summary, open(f'msr_full_summaries.pkl', 'wb'))\n\n# Compute RoBERTa features for all captions.\nmsr_full_vid_id_to_roberta_feats = {}\nfor vid_id in msr_full_sorted_vid_ids:\n msr_full_vid_id_to_roberta_feats[vid_id] = roberta_model.encode(msr_full_vid_id_to_captions[vid_id], convert_to_tensor=True, device=device)\n\ntopk = 100 # Pre-rank with top-100 from Portillo.\ncombine_clip_roberta = True # Combine CLIP (text-video) x RoBERTa (text-text) scores?\nportillo_vid_id_to_topk_vid_ids = {}\nsocratic_vid_id_to_topk_vid_ids = {}\nmsr_full_all_clip_text_feats = torch.cat([msr_full_vid_ids_to_clip_text_feats[i] for i in msr_full_sorted_vid_ids], dim=0).cpu().numpy()\nfor vid_id in msr_full_sorted_vid_ids:\n \n # Get Portillo top-K captions.\n vid_feats = msr_full_vid_id_to_clip_vid_feats[vid_id] # CLIP features for all frames of the video\n vid_feat = normalize_matrix(torch.mean(vid_feats, dim = 0, keepdim = True)).cpu().numpy()\n clip_scores = msr_full_all_clip_text_feats @ vid_feat.T\n clip_scores = clip_scores.squeeze()\n clip_scores = clip_scores.reshape(-1, 20)\n clip_scores = np.max(clip_scores, axis=1)\n sorted_idx = np.argsort(clip_scores).squeeze()[::-1]\n portillo_topk_vid_ids = [msr_full_sorted_vid_ids[i] for i in sorted_idx[:topk]]\n portillo_vid_id_to_topk_vid_ids[vid_id] = portillo_topk_vid_ids\n\n # If no LLM summary, default to Portillo ranking.\n socratic_vid_id_to_topk_vid_ids[vid_id] = portillo_topk_vid_ids\n if vid_id not in msr_full_vid_id_to_summary:\n continue\n\n # Get RoBERTa scores between LLM summary and captions.\n summary = msr_full_vid_id_to_summary[vid_id]\n summary_feat = roberta_model.encode([summary], convert_to_tensor=True, device=device)\n caption_feats = torch.cat([msr_full_vid_id_to_roberta_feats[i] for i in portillo_topk_vid_ids], dim=0)\n roberta_scores = st_utils.pytorch_cos_sim(caption_feats, summary_feat).detach().cpu().numpy().squeeze()\n roberta_scores = roberta_scores.reshape(-1, 20)\n roberta_scores = np.max(roberta_scores, axis=1)\n\n # Re-rank top-K with RoBERTa scores.\n sort_idx = np.argsort(roberta_scores, kind='stable').squeeze()[::-1]\n socratic_vid_id_to_topk_vid_ids[vid_id] = [portillo_topk_vid_ids[i] for i in sort_idx]\n\n # Combine CLIP (text-video) x RoBERTa (text-text) scores.\n if combine_clip_roberta:\n clip_scores = np.sort(clip_scores, kind='stable').squeeze()[::-1][:topk]\n scores = clip_scores * roberta_scores\n sort_idx = np.argsort(scores, kind='stable').squeeze()[::-1]\n socratic_vid_id_to_topk_vid_ids[vid_id] = [portillo_topk_vid_ids[i] for i in sort_idx] # Override ranking from only LLM\n\n# Return R@1, R@5, R@10.\ndef get_recall(vid_ids, socratic_subset, k=[1, 5, 10]):\n recall = []\n rank = []\n for vid_id in vid_ids:\n sorted_vid_ids = portillo_vid_id_to_topk_vid_ids[vid_id]\n if vid_id in socratic_subset:\n sorted_vid_ids = socratic_vid_id_to_topk_vid_ids[vid_id]\n recall.append([(vid_id in sorted_vid_ids[:i]) for i in k])\n rank.append(sorted_vid_ids.index(vid_id) + 1 if vid_id in sorted_vid_ids else len(sorted_vid_ids))\n mdr = np.median(rank)\n return np.mean(np.float32(recall) * 100, axis=0), mdr\n \nsubset_size = 1007 # Subset of long transcripts.\n \n# Portillo only.\nrecall, mdr = get_recall(msr_full_sorted_vid_ids, msr_full_sorted_vid_ids[:0])\nprint(f'R@1: {recall[0]:.1f}\\tR@5: 
{recall[1]:.1f}\\tR@10: {recall[2]:.1f}\\tMdR: {mdr}')\n \n# Socratic + Portillo.\nrecall, mdr = get_recall(msr_full_sorted_vid_ids, msr_full_sorted_vid_ids[:subset_size])\nprint(f'R@1: {recall[0]:.1f}\\tR@5: {recall[1]:.1f}\\tR@10: {recall[2]:.1f}\\tMdR: {mdr}')\n \n# Portillo only on long transcripts.\nrecall, mdr = get_recall(msr_full_sorted_vid_ids[:subset_size], msr_full_sorted_vid_ids[:0])\nprint(f'R@1: {recall[0]:.1f}\\tR@5: {recall[1]:.1f}\\tR@10: {recall[2]:.1f}\\tMdR: {mdr}')\n \n# Socratic + Portillo on long transcripts.\nrecall, mdr = get_recall(msr_full_sorted_vid_ids[:subset_size], msr_full_sorted_vid_ids[:subset_size])\nprint(f'R@1: {recall[0]:.1f}\\tR@5: {recall[1]:.1f}\\tR@10: {recall[2]:.1f}\\tMdR: {mdr}')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
choderalab/assaytools
examples/direct-fluorescence-assay/2c Bayesian fit for two component binding - simulated data- WITH EMCEE.ipynb
lgpl-2.1
[ "Bayesian fit for two component binding - simulated data - WITH EMCEE\nIn this notebook we see how well we can reproduce Kd from simulated experimental data with our Bayesian methods, these are the same as those used in the quickmodel.py script. We will be testing to see if sampling using the emcee library improves our equilibration time and ultimately, the results.", "import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom scipy import optimize\nimport seaborn as sns\n\n%pylab inline", "We use the same setup here as we do in the 'Simulating Experimental Fluorescence Binding Data' notebook.", "# We define a Kd,\nKd = 2e-9 # M\n\n# a protein concentration,\nPtot = 1e-9 * np.ones([12],np.float64) # M\n\n# and a gradient of ligand concentrations for our experiment.\nLtot = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(12)]) # M\n\ndef two_component_binding(Kd, Ptot, Ltot):\n \"\"\"\n Parameters\n ----------\n Kd : float\n Dissociation constant\n Ptot : float\n Total protein concentration\n Ltot : float\n Total ligand concentration\n \n Returns\n -------\n P : float\n Free protein concentration\n L : float\n Free ligand concentration\n PL : float\n Complex concentration\n \"\"\"\n \n PL = 0.5 * ((Ptot + Ltot + Kd) - np.sqrt((Ptot + Ltot + Kd)**2 - 4*Ptot*Ltot)) # complex concentration (uM)\n P = Ptot - PL; # free protein concentration in sample cell after n injections (uM) \n L = Ltot - PL; # free ligand concentration in sample cell after n injections (uM) \n \n return [P, L, PL]\n\n[L, P, PL] = two_component_binding(Kd, Ptot, Ltot)\n\n# y will be complex concentration\n# x will be total ligand concentration\nplt.semilogx(Ltot,PL, 'o')\nplt.xlabel('$[L]_{tot}$ / M')\nplt.ylabel('$[PL]$ / M')\nplt.ylim(0,1.3e-9)\nplt.axhline(Ptot[0],color='0.75',linestyle='--',label='$[P]_{tot}$')\nplt.legend();", "Now make this a fluorescence experiment", "# Making max 1400 relative fluorescence units, and scaling all of PL (complex concentration) \n# to that, adding some random noise\nnpoints = len(Ltot)\nsigma = 10.0 # size of noise\nF_PL_i = (1400/1e-9)*PL + sigma * np.random.randn(npoints)\n\n# y will be complex concentration\n# x will be total ligand concentration\nplt.semilogx(Ltot,F_PL_i, 'ro')\nplt.xlabel('$[L]_{tot}$ / M')\nplt.ylabel('$Fluorescendce$')\nplt.legend();\n\n#Let's add an F_background just so we don't ever go below zero\nF_background = 40\n#We also need to model fluorescence for our ligand\nF_L_i = F_background + (.4/1e-8)*Ltot + sigma * np.random.randn(npoints)\n\n#Let's also add these to our complex fluorescence readout\nF_PL_i = F_background + ((1400/1e-9)*PL + sigma * np.random.randn(npoints)) + ((.4/1e-8)*L + sigma * np.random.randn(npoints))\n\n# y will be complex concentration\n# x will be total ligand concentration\nplt.semilogx(Ltot,F_PL_i, 'ro')\nplt.semilogx(Ltot,F_L_i, 'ko')\nplt.xlabel('$[L]_{tot}$ / M')\nplt.ylabel('$Fluorescence$')\nplt.legend();\n\n# We know errors from our pipetting instruments.\nP_error = 0.35\nL_error = 0.08\n\nassay_volume = 100e-6 # assay volume, L\n\ndPstated = P_error * Ptot\ndLstated = L_error * Ltot\n\n# Now we'll use our Bayesian modeling scheme from assaytools.\nfrom assaytools import pymcmodels\npymc_model = pymcmodels.make_model(Ptot, dPstated, Ltot, dLstated,\n top_complex_fluorescence=F_PL_i,\n top_ligand_fluorescence=F_L_i,\n use_primary_inner_filter_correction=True,\n use_secondary_inner_filter_correction=True,\n assay_volume=assay_volume, DG_prior='uniform')\n\nmcmc = pymcmodels.run_mcmc(pymc_model)\n\nimport 
matplotlib.patches as mpatches #this is for plotting with color patches\n\ndef mcmc_three_plots(pymc_model,mcmc,Lstated):\n\n sns.set(style='white')\n sns.set_context('talk')\n \n import pymbar\n [t,g,Neff_max] = pymbar.timeseries.detectEquilibration(mcmc.DeltaG.trace())\n \n interval= np.percentile(a=mcmc.DeltaG.trace()[t:], q=[2.5, 50.0, 97.5])\n [hist,bin_edges] = np.histogram(mcmc.DeltaG.trace()[t:],bins=40,normed=True)\n binwidth = np.abs(bin_edges[0]-bin_edges[1])\n\n #set colors for 95% interval\n clrs = [(0.7372549019607844, 0.5098039215686274, 0.7411764705882353) for xx in bin_edges]\n idxs = bin_edges.argsort()\n idxs = idxs[::-1]\n gray_before = idxs[bin_edges[idxs] < interval[0]]\n gray_after = idxs[bin_edges[idxs] > interval[2]]\n for idx in gray_before:\n clrs[idx] = (.5,.5,.5)\n for idx in gray_after:\n clrs[idx] = (.5,.5,.5)\n \n plt.clf();\n plt.figure(figsize=(12,3));\n\n plt.subplot(131)\n property_name = 'top_complex_fluorescence'\n complex = getattr(pymc_model, property_name)\n property_name = 'top_ligand_fluorescence'\n ligand = getattr(pymc_model, property_name)\n for top_complex_fluorescence_model in mcmc.top_complex_fluorescence_model.trace()[::10]:\n plt.semilogx(Lstated, top_complex_fluorescence_model, marker='.',color='silver')\n for top_ligand_fluorescence_model in mcmc.top_ligand_fluorescence_model.trace()[::10]:\n plt.semilogx(Lstated, top_ligand_fluorescence_model, marker='.',color='lightcoral', alpha=0.2)\n plt.semilogx(Lstated, complex.value, 'ko',label='complex')\n plt.semilogx(Lstated, ligand.value, marker='o',color='firebrick',linestyle='None',label='ligand')\n #plt.xlim(.5e-8,5e-5)\n plt.xlabel('$[L]_T$ (M)');\n plt.yticks([])\n plt.ylabel('fluorescence');\n plt.legend(loc=0);\n\n plt.subplot(132)\n plt.bar(bin_edges[:-1]+binwidth/2,hist,binwidth,color=clrs, edgecolor = \"white\");\n sns.kdeplot(mcmc.DeltaG.trace()[t:],bw=.4,color=(0.39215686274509803, 0.7098039215686275, 0.803921568627451),shade=False)\n plt.axvline(x=interval[0],color=(0.5,0.5,0.5),linestyle='--')\n plt.axvline(x=interval[1],color=(0.5,0.5,0.5),linestyle='--')\n plt.axvline(x=interval[2],color=(0.5,0.5,0.5),linestyle='--')\n plt.axvline(x=np.log(Kd),color='k')\n plt.xlabel('$\\Delta G$ ($k_B T$)',fontsize=16);\n plt.ylabel('$P(\\Delta G)$',fontsize=16);\n #plt.xlim(-15,-8)\n hist_legend = mpatches.Patch(color=(0.7372549019607844, 0.5098039215686274, 0.7411764705882353), \n label = '$\\Delta G$ = %.3g [%.3g,%.3g] $k_B T$' \n %(interval[1],interval[0],interval[2]) )\n plt.legend(handles=[hist_legend],fontsize=10,loc=0,frameon=True);\n\n plt.subplot(133)\n plt.plot(range(0,t),mcmc.DeltaG.trace()[:t], 'g.',label='equil. at %s'%t)\n plt.plot(range(t,len(mcmc.DeltaG.trace())),mcmc.DeltaG.trace()[t:], '.')\n plt.xlabel('MCMC sample');\n plt.ylabel('$\\Delta G$ ($k_B T$)');\n plt.legend(loc=2);\n\n plt.tight_layout();\n \n return [t,interval,hist,bin_edges,binwidth]\n\nKd\n\nprint 'Real Kd is 2nm or %s k_B T.' %np.log(Kd)\n\n[t,interval,hist,bin_edges,binwidth] = mcmc_three_plots(pymc_model,mcmc,Ltot)", "That works, but the equilibration seems to happen quite late in our sampling! 
Let's look at some of the other parameters.", "well_area = 0.1586 # well area, cm^2 # half-area wells were used here\npath_length = assay_volume / well_area\n\nfrom assaytools import plots\nplots.plot_mcmc_results(Ltot, Ptot, path_length, mcmc)", "Now let's see if we can get better results using the newly implemented emcee option.\nFollowing instructions as described here: http://twiecki.github.io/blog/2013/09/23/emcee-pymc/", "mcmc_emcee = pymcmodels.run_mcmc_emcee(pymc_model)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
liviu-/notebooks
notebooks/super-resolution_coordinates.ipynb
mit
[ "Super-Resolution using Coordinate Nets\nThis is a quick experiment focused on upscaling images using neural nets. Please check out the accompanying blog post for a high-level view of the idea.", "# There are several libraries to install\n#!pip3 install tensorflow numpy matplotlib scikit-image\n\n%matplotlib inline\n\nimport itertools\n\nimport tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport skimage\nfrom skimage import io, transform\n\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 10, 7", "Load image\nThe code expects a local image filepath through the image_path variable below.", "\"\"\"User Parameters\"\"\"\n# The train image will be scaled to a square of dimensions `train_size x train_size`\ntrain_size = 32\n# When generating the image, the network will generate for an image of\n# size `test_size x test_size`\ntest_size = 2048\n# Path to load the image you want upscaled\nimage_path = '../img/colors.jpg'\n\nif not image_path:\n print('Please specify an image for training the network')\nelse:\n image = transform.resize(io.imread(image_path), (train_size, train_size))\n # Just a quick line to get rid of the alpha channel if it exists\n # (e.g. for transparent png files)\n image = image if len(image.shape) < 3 or image.shape[2] == 3 else image[:,:,:3]\n io.imshow(image)", "Model\nFor simplicity, the model below is an MLP created with TF.\nInput\nThe input is just a matrix of floats of shape (None, 2):\n- 2 refers to the 2 x, y coordinates\n- and None is just a placeholder that allows for training multiple coordinates at one time for speed (i.e. using batches of unknown size)", "X = tf.placeholder('float32', (None, 2)) ", "Architecture\nAn MLP with several fully connected layers. The architecture was inspired from here.", "def model(X, w):\n h1 = tf.nn.tanh(tf.matmul(X, w['h1']))\n h2 = tf.nn.tanh(tf.matmul(h1, w['h2']))\n h3 = tf.nn.tanh(tf.matmul(h2, w['h3']))\n h4 = tf.nn.tanh(tf.matmul(h3, w['h4']))\n h5 = tf.nn.tanh(tf.matmul(h4, w['h4']))\n h6 = tf.nn.tanh(tf.matmul(h5, w['h4']))\n h7 = tf.nn.tanh(tf.matmul(h6, w['h4']))\n h8 = tf.nn.tanh(tf.matmul(h7, w['h4'])) \n return tf.nn.sigmoid(tf.matmul(h8, w['out']))\n\ndef init_weights(shape):\n return tf.Variable(tf.truncated_normal(shape, stddev=0.1))\n\n# (None, None) refers to (batch_size, n_colors)\nY = tf.placeholder(\"float32\", (None, None))\n\nw = {\n 'h1': init_weights([2, 20]),\n 'h2': init_weights([20, 20]),\n 'h3': init_weights([20, 20]),\n 'h4': init_weights([20, 20]),\n 'h5': init_weights([20, 20]),\n 'h6': init_weights([20, 20]),\n 'h7': init_weights([20, 20]),\n 'h8': init_weights([20, 20]),\n 'out': init_weights([20, 3]),\n}\n\nout = model(X, w)", "Training\nThe model is trained to minimise MSE (common loss for regression problems) and uses Adam as an optimiser (any other optimiser will likely also work).", "cost = tf.reduce_mean(tf.squared_difference(out, Y))\ntrain_op = tf.train.AdamOptimizer().minimize(cost)\n\n# Feel free to adjust the number of epochs to your liking.\nn_epochs = 5e+4\n\n# Create function to generate a coordinate matrix (i.e. 
matrix of normalised coordinates)\n# Pardon my lambda \ngenerate_coord = lambda size: (\n np.array(list(itertools.product(np.linspace(0,1,size),np.linspace(0,1,size)))).reshape(size ** 2, 2))\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n # Training data\n x = generate_coord(train_size)\n # Labels\n reshaped_image = np.array(image.reshape(train_size ** 2, -1))\n \n for epoch in range(int(n_epochs + 1)):\n _, c = sess.run([train_op, cost], feed_dict={X: x, Y: reshaped_image})\n \n # Print progress\n if epoch % (n_epochs/10) == 0:\n print('{:0.0%} \\t Loss: {}'.format(epoch/n_epochs, c).expandtabs(7))\n \n # Generate\n new_image = sess.run(out, feed_dict={X: generate_coord(test_size)})", "Evaluation\naka plotting the generated image and carefully considering whether it meets the desired standards or there's a need for readjusting either the hyperparameters or the expectations.", "plt.imshow(new_image.reshape(test_size, test_size, -1))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tabakg/potapov_interpolation
graph_to_matrices.ipynb
gpl-3.0
[ "In this notebook we come up with an easy way for the user to input networks to the Potapov Interpolation package, as well as a nice way to visualize the resulting system.", "import networkx as nx\nimport numpy as np\nfrom numpy import linalg as la\n\nfrom sympy import init_printing\ninit_printing() \n\nimport matplotlib.pyplot as plt\n%pylab inline", "Passive Linear Time Delays Networks\nThe first component of the project takes as inputs a description of a model, which can be thought of as a graph where the nodes and edges have some special properties. These properties are outlines below.", "G = nx.DiGraph(selfloops=True)", "We use a directed graph with various properties along the nodes and edges. The direction describes the propagation of signals in the system.\nThere are three kinds of nodes: inputs nodes, internal nodes, and output nodes. There is the same number of input and output nodes (say n). The number of internal nodes may be different. Each internal node has an associated matrix describing its relationship between its incoming and outgoing signals. It suffices for now to take $2 \\times 2$ matrices of the form $\\begin{pmatrix} t && -r \\ r && t \\end{pmatrix}$ corresponding to a beamsplitter, where $r$ and $t$ are the reflectivity and transmissivity of the beamsplitter, respectively. These satisfy $r^2+t^2 = 1$.\nIn general we may want other matrices, but it's not really necessary.\nIf the signal along several edges is thought of as a vector, multiplying by the matrix from the left represents the signal traveling through the element. This formalism works only for linear networks.\nLet's make an example graph:", "rs = np.asarray([0.9,0.5,0.8]) ## some sample values \nts = np.sqrt(1.-rs**2) ## ts are determined from rs\n\nN = 2 ## number of input nodes\n\nfor i in range(N): ## make the input and output nodes\n G.add_node(i*2,label='x_in_'+str(i))\n G.add_node(i*2+1,label='x_out_'+str(i))\nfor i,(r,t) in enumerate(zip(rs,ts)): ## make the remaining nodes\n G.add_node(2*N+i,label='x_'+str(i),M=np.matrix([[t,-r],[r,t]]))\n\nG.nodes(data=True) ## display the nodes\n\nnum_nodes = len(G.nodes(data=True))", "Each (directed) edge $j$ has a time delay $\\tau_j$. In general a delay line may have an additional phase shift $\\exp(i\\theta_j)$ which is determined by a number $\\theta_j$.\nWe will also include a pair of indices for each edge. The first index corresponds to the previous node and the second index corresponds to the next node. The indices indicate enumerations of the edges with respect to the input and output nodes, respectively. 
If the previous or next node is an input or output node of the graph, the index will be $0$.\nFor now, let's assume that only internal edges have nonzero delays.\nFor the visualization, it would be nice if for a given node, the incoming and outgoing edges with the same index value would appear as a straight line, since this physically means the signal is being transmitted without reflecting.", "## edges to inputs\nG.add_edge(0,4,delay=0.,indices=(0,0),theta=0.,edge_type = 'input',edge_num=0)\nG.add_edge(2,6,delay=0.,indices=(0,1),theta=0.,edge_type = 'input',edge_num=1)\n\n## edges to outputs\nG.add_edge(4,1,delay=0.,indices=(1,0),theta=0.,edge_type = 'output',edge_num=2)\nG.add_edge(6,3,delay=0.,indices=(0,0),theta=0.,edge_type = 'output',edge_num=3)\n\n## internal edges\nG.add_edge(4,5,delay=1.,indices=(0,0),theta=0.,edge_type = 'internal',edge_num=4)\nG.add_edge(5,4,delay=1.,indices=(1,1),theta=0.,edge_type = 'internal',edge_num=5)\nG.add_edge(5,6,delay=1.,indices=(0,0),theta=0.,edge_type = 'internal',edge_num=6)\nG.add_edge(6,5,delay=1.,indices=(1,1),theta=0.,edge_type = 'internal',edge_num=7)\n\nG.edges(data=True)\n\n## I can make a diagram for the graph, output to file\nA=nx.to_agraph(G)\nA.draw('file.ps',prog='neato')", "Convert the network of nodes and edges to the framework used in the paper.\nThis would take the graph structure above and generate matrices $M1,M2,M2,M3$ in the notation used in Potapov_Code.Time_Delay_Network.py. This would allow generating an instance of Time_Delay_Network.", "internal_edges = {(edge[0],edge[1]):edge[2] for edge in G.edges(data=True) if edge[2]['edge_type'] == 'internal'}\nm = len(internal_edges)\n\n# input_edges = [edge for edge in G.edges(data=True) if edge[2]['edge_type'] == 'input']\n# output_edges = [edge for edge in G.edges(data=True) if edge[2]['edge_type'] == 'output']\n\nM1 = np.zeros((m,m))\ninternal_node_range = range(2*N,num_nodes)\ninternal_connections = []\nfor i in internal_node_range: ## internal nodes\n outgoing_connections = nx.edges(G,[i])\n internal_connections += [connection for connection in outgoing_connections if connection[1] in internal_node_range]\n\nfor i in internal_connections:\n for j in internal_connections:\n if i[1] == j[0]:\n matrix_indices = G.edge[i[0]][i[1]]['indices'][0], G.edge[j[0]][j[1]]['indices'][1]\n M1[internal_edges[j]['edge_num']-2*N,internal_edges[i]['edge_num']-2*N] = G.node[i[1]]['M'][matrix_indices]\n\nM1\n\nall_connections = []\nfor i in range(num_nodes): ## internal nodes\n outgoing_connections = nx.edges(G,[i])\n all_connections += [connection for connection in outgoing_connections if connection[1] in range(num_nodes)]\n\nall_edges = {(edge[0],edge[1]):edge[2] for edge in G.edges(data=True)}\nm_all = len(all_edges)\n\nU = np.zeros((m_all,m_all))\n\nfor i in all_connections:\n for j in all_connections:\n if i[1] == j[0]:\n matrix_indices = G.edge[i[0]][i[1]]['indices'][0], G.edge[j[0]][j[1]]['indices'][1]\n U[all_edges[j]['edge_num'],all_edges[i]['edge_num']] = G.node[i[1]]['M'][matrix_indices]\n\n## should coincide with M1\n\nM1 = U[4:8,4:8]\n\nM4 = U[:4,:4]\n\nM3 = U[8:16,4:8]\n\nM4 = U[8:16,8:16]", "Usage Description\nUsing the run_Potapov function of this method generates the variables that will be used for the first part of the visualization. Those are contained in an instance of the Time_Delay_Network. Specifically, the outputs we will want to plot are (1) Time_Delay_Network.roots (2) Time_Delay_Network.spatial_modes. 
\nThe roots $r_1,...,r_n$ are a list of complex numbers corresponding to the modes indexed by $1,...,n$. The imaginary part of root $r_k$ corresponds to the frequency of mode $k$, and the real part of $r_k$ gives the decay coefficient of mode $k$.\nThe spatial_modes are a list $v_1,...,v_n$ of complex-valued vectors. Each vector $v_k$ in the list corresponds to a mode $k$, in the same order as the roots. Each vector has the same length as the number of time delays of the network, $\\tau_1,...,\\tau_m$. The $l_{th}$ component $v_{k,l}$ of vector $v_k$ indicates the spatially normalized amplitude of mode $k$ along the delay $\\tau_l$. \nWhat would be cool is to be able to select one or many modes $1,...,k,...,n$ and to illustrate the spatial component of the signal of the selected modes along the graph. Specifically, the frequency of the root could correspond to a color or a periodic sinusoidal shape (higher frequency would be more blue or a shorter period), or both. The absolute value of the spatial mode component could be indicated by the thickness of the signal along each time delay. A phase shift could be indicated by a shift in the frequency of a sinusoidal signal.", "import Potapov_Code\n\nNetwork = Potapov_Code.Time_Delay_Network.Example3() ## an example network with hardcoded values\n\nNetwork.run_Potapov(commensurate_roots=True) ## run the analysis\n\nroots = Network.roots ## roots\nplt.scatter(map(lambda z: z.real, roots), map(lambda z: z.imag, roots))\n\nNetwork.spatial_modes ## the spatial modes" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
artttt/RiverConflation
conflationExample.ipynb
mit
[ "Please be aware that this example needs 5 - 6 GB of RAM and approx. 40 minutes to run", "%load_ext autoreload\n%autoreload 2\n\nimport riverConflation as rc\nimport os\nimport networkx as nx\n\n#both datasets used for testing can be downloaded here\n#http://www.bom.gov.au/water/geofabric/download.shtml\n#note the geodatabases had to be upgraded in ArcCatalog to V10+ so that gdal/fiona can read them.\nnetGDB1 = r'C:\\data\\temp\\SH_Network_GDB_V2_1_1\\SH_Network_GDB\\SH_Network.gdb'\nnetGDB2 =r'C:\\data\\temp\\SH_Network_GDB_V3_0_PG\\SH_Network_GDB\\SH_Network.gdb'\npkl_path = r'C:\\data\\temp'\n#checks all nodes within at least this distance for a conflation\n#its a time saving optimisation and should be set on the large size to avoid missing the best result\n#A guide to setting this is: set it a bit bigger then the largest expected separation of conflated nodes\n#for small networks networks just set it very large\n#in projection units (1degree approx 100km)\n# a few kms is probably reasonable usually. Set low here for faster demo.\nsearchRadius = 0.01\n#number of potential matches to keep for each feature\nmaxMatchKeep = 10", "First thing is to read in your data. This example uses the Australian Geofabric V2 and V3 data. Other datasets would need their own customised data prep code.\nIn the next 2 steps ignore the duplicate catchment warnings for the case of testing the code.\nI havent dealt with all the minor details of the geofabric quite right.", "DG2 = rc.read_geofabric_data(netGDB2)\nrc.remove_geofabric_catch_duplicates(DG2)\nnx.write_gpickle(DG2, os.path.join(pkl_path, 'PG_conflation2.p'))\nDG2_idx = rc.build_index(DG2)\n\nDG1 = rc.read_geofabric_data(netGDB1,DG2_idx.bounds)\nrc.remove_geofabric_catch_duplicates(DG1)\nnx.write_gpickle(DG1, os.path.join(pkl_path, 'PG_conflation.p'))", "you can start from here by loading in the pickles that were created with the above code earlier.\nRun the imports and global variables code at the top first though", "DG1 = nx.read_gpickle(os.path.join(pkl_path, 'PG_conflation.p'))\nDG2 = nx.read_gpickle(os.path.join(pkl_path, 'PG_conflation2.p'))\nDG2_idx = rc.build_index(DG2)\n# starting from pickles = 1 minute 2GB\n#starting from scratch = 12 minutes\n\n#This is done seperate to finding matches because it takes a while so its nice to split it out for debugging\n#%%timeit -r1 -n1\nrc.build_overlaps(DG1,DG2,DG2_idx)\n# 15 minutes 1GB\n\nrc.catch_area(DG1)\nrc.catch_area(DG2)\n# 1 min", "The next step is to sum up areas to find the catchment overlap for every combination. We are only interested in the best or a short list of the overlaps that match well\nThe simple approach is a brute force exhustive test of all combinations. This works well for a few thousand (75 minutes for 17k x 17k) sub catchments in each graph however it would not scale well as network sizes increase.\nThere are a few ways to reduce the set of catchments to test for a match. One issue to keep in mind is to not make assumptions about how similar the two networks topology might be. 
The approach taken is to use a tunable spatial proximity limit that should be set to a size expected to ensure that the best matches fall within that radius; setting it too small would cause missed matches, while setting it too large would just take longer.\nThere is also a limit on the pairs of catchments searched, based on area similarity. This works well because good matches have to, by definition, be of a similar size.\nGenerally, if these parameters are set conservatively, the results are not very sensitive to them; they mainly affect processing time.", "#%%timeit -n1 -r1\nrc.upstream_edge_set(DG2)\n#<10sec approx 2GB\n\nsizeRatio=0.5\nmatches = rc.find_all_matches(DG1,DG2,DG2_idx,searchRadius,sizeRatio,maxMatchKeep)\n# depends on the search radius used.\n# 8 minutes with searchRadius=0.005 (500m) and a sizeRatio=0\n# 7.5 minutes with searchRadius=0.01 (1km) and a sizeRatio=0.5\n\nbest = rc.best_match(matches)\n\n# simple outputs\n# more complete outputs still to be re-implemented... stay tuned.\n\nrc.write_debug_lines(DG1,DG2,best,os.path.join(pkl_path, 'debug_lines.shp'))\n\n# a more refined match of nodes considering each inflow to a confluence.\n# find_all_matches needs to have saved a shortlist of matches by setting maxMatchKeep to something like 10\nnode_matches = rc.confluence_matches(DG1,matches)\nrc.write_debug_lines_confluence_matches(DG1,DG2,node_matches,os.path.join(pkl_path, 'debug_lines_nodes.shp'))\n\nrc.write_catch(DG1,os.path.join(pkl_path, 'catch1.shp'))\nrc.write_catch(DG2,os.path.join(pkl_path, 'catch2.shp'))\nrc.write_stream(DG1,os.path.join(pkl_path, 'stream1.shp'))\nrc.write_stream(DG2,os.path.join(pkl_path, 'stream2.shp'))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
learn1do1/learn1do1.github.io
python_notebooks/Watershed Problem.ipynb
mit
[ "Watershed Problem\nGiven a 2D surface in 3D space, find the distinct watersheds in it. \nIn other words, if you have a three-dimensional landcsape, you want to find the different regions where all rainfall flows down to the same final location. \nYour input will be a n by n matrix of integers, where the high integers denote peaks and hills while low integers denote valleys.", "heights = [[1,2,3,4],[1,2,1,0],[1,0,1,0],[0,0,1,4]]\n\nclass position():\n def __init__(self, coordinates, height):\n self.coordinates = coordinates\n self.height = height\n \n def __repr__(self):\n return ','.join([str(self.coordinates), str(self.height)])", "To kick us off, I will draw this landscape, using a matplotlib heatmap function that shows the highest altitude in red, down to the lowest altitudes in blue:", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.dates\nplt.rcParams['figure.figsize'] = (20.0, 8.0)\nplt.figure()\nplt.imshow(heights , interpolation='nearest', cmap='jet')\nplt.title('heights')\nplt.show()", "We have to make a decision. Should the water always flow down the steepest slope? Let's assume yes, even tho it may upset Ian Malcolm from Jurassic Park:\n\nShould it pool together when the slope is 0? This describes the 3 adjacent blocks of height Zero in the heights matrix, drawn above. I'd argue yes. In order to guarantee that adjacent blocks of equal height pool together, I will initialize the slopes array to have a slope of -1.", "watersheds = [[None] * len(heights) for x in range(len(heights))]\nslopes = [[-1] * len(heights) for x in range(len(heights))]", "The watershed matrix stores an integer for each cell. When Cells in that matrix that share the same integer, it means they belong to the same watershed.\nThe slopes matrix stores the steepest slope that the water can flow in each cell", "import operator\n\ndef initialize_positions(heights):\n positions = [] \n for i in range(len(heights)):\n for j in range(len(heights)):\n positions.append(position((i,j), heights[i][j]))\n positions.sort(key=operator.attrgetter('height'))\n return positions\n\npositions = initialize_positions(heights)", "Our strategy is to sort positions from deepest to highest. Starting at the deepest, let's find all adjacent positions that would flow into it. We determine those positions by using the flow_up function. 
We continue this search from each of the new positions we have just moved up to, until every cell in the slopes array has been visited.", "# Will return all neighbors where the slope to the current position is steeper than we have yet seen.\ndef flow_up(heights, (i, j)):\n up_coordinates = set()\n neighbor_coordinates = set()\n local_height = heights[i][j]\n \n # look up, down, left, right\n neighbor_coordinates.add((max(i - 1, 0),j))\n neighbor_coordinates.add((min(i + 1, len(heights) - 1),j))\n neighbor_coordinates.add((i,max(j - 1, 0)))\n neighbor_coordinates.add((i,min(j + 1, len(heights) - 1)))\n \n for c in neighbor_coordinates:\n slope = heights[c[0]][c[1]] - local_height\n if slope > slopes[c[0]][c[1]]:\n slopes[c[0]][c[1]] = slope\n up_coordinates.add(c)\n return up_coordinates\n\ndef main(): \n for k, position in enumerate(positions):\n if watersheds[position.coordinates[0]][position.coordinates[1]] == None:\n new_coordinates = [position.coordinates]\n while len(new_coordinates) > 0:\n for (i, j) in new_coordinates:\n watersheds[i][j] = k\n past_coordinates = list(new_coordinates)\n new_coordinates = set()\n for coordinates in past_coordinates:\n new_coordinates.update(flow_up(heights, coordinates))\n \nmain()\nprint watersheds\nprint slopes\n\nplt.rcParams['figure.figsize'] = (20.0, 8.0)\nplt.figure()\nplt.imshow(heights , interpolation='nearest', cmap='jet')\nplt.title('heights')\nplt.figure()\nplt.imshow(slopes , interpolation='nearest', cmap='jet')\nplt.title('slopes')\nplt.figure()\nplt.imshow(watersheds , interpolation='nearest', cmap='jet')\nplt.title('watersheds')", "Let's do a simple test of our functions. Let's give it a landscape that looks like a wide staircase and make sure the output is just a single watershed.", "n = 10\n\nheights = [[x] * n for x in range(n)]\nwatersheds = [[None] * len(heights) for x in range(len(heights))]\nslopes = [[-1] * len(heights) for x in range(len(heights))]\npositions = initialize_positions(heights)\npositions.sort(key=operator.attrgetter('height'))\nmain()\nplt.figure()\nplt.imshow(heights , interpolation='nearest', cmap='jet')\nplt.rcParams['figure.figsize'] = (20.0, 8.0)\nplt.title('heights')\nplt.figure()\nplt.imshow(slopes , interpolation='nearest', cmap='jet')\nplt.title('slopes')\nplt.figure()\nplt.imshow(watersheds , interpolation='nearest', cmap='jet')\nplt.title('watersheds')", "It's interesting in this single-watershed case to see how simple the slopes object becomes. Either water is spreading in a flat basin (slope of 0) or it is flowing down the staircase (slope of 1). \nNow we can showcase the watershed code on a random input landscape", "import random\nheights = [[random.randint(0, n) for x in range(n)] for x in range(n)]\n\nwatersheds = [[None] * len(heights) for x in range(len(heights))]\nslopes = [[-1] * len(heights) for x in range(len(heights))]\npositions = initialize_positions(heights)\nmain()\nplt.figure()\nplt.imshow(heights , interpolation='nearest', cmap='jet')\nplt.rcParams['figure.figsize'] = (20.0, 8.0)\nplt.title('heights')\nplt.figure()\nplt.imshow(slopes , interpolation='nearest', cmap='jet')\nplt.title('slopes')\nplt.figure()\nplt.imshow(watersheds , interpolation='nearest', cmap='jet')\nplt.title('watersheds')", "It becomes apparent in this case that we have the deepest blue represent the deepest well. This is because we sorted the positions by height at the beginning." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nntisapeh/intro_programming
notebooks/if_statements.ipynb
mit
[ "If Statements\nBy allowing you to respond selectively to different situations and conditions, if statements open up whole new possibilities for your programs. In this section, you will learn how to test for certain conditions, and then respond in appropriate ways to those conditions.\nPrevious: Introducing Functions | \nHome |\nNext: While Loops and Input\nContents\n\nWhat is an if statement?\nExample\n\n\nLogical tests\nEquality\nInequality\nOther inequalities\nChecking if an item is in a list\nExercises\n\n\nThe if-elif...else chain\nSimple if statements\nif-else statements\nif-elif...else chains\nExercises\n\n\nMore than one passing test\nTrue and False values\nOverall Challenges\n\nWhat is an if statement?\nAn if statement tests for a condition, and then responds to that condition. If the condition is true, then whatever action is listed next gets carried out. You can test for multiple conditions at the same time, and respond appropriately to each condition.\nExample\nHere is an example that shows a number of the desserts I like. It lists those desserts, but lets you know which one is my favorite.", "# A list of desserts I like.\ndesserts = ['ice cream', 'chocolate', 'apple crisp', 'cookies']\nfavorite_dessert = 'apple crisp'\n\n# Print the desserts out, but let everyone know my favorite dessert.\nfor dessert in desserts:\n if dessert == favorite_dessert:\n # This dessert is my favorite, let's let everyone know!\n print(\"%s is my favorite dessert!\" % dessert.title())\n else:\n # I like these desserts, but they are not my favorite.\n print(\"I like %s.\" % dessert)", "What happens in this program?\n\nThe program starts out with a list of desserts, and one dessert is identified as a favorite.\nThe for loop runs through all the desserts.\nInside the for loop, each item in the list is tested.\nIf the current value of dessert is equal to the value of favorite_dessert, a message is printed that this is my favorite.\nIf the current value of dessert is not equal to the value of favorite_dessert, a message is printed that I just like the dessert.\n\n\n\nYou can test as many conditions as you want in an if statement, as you will see in a little bit.\ntop\nLogical Tests\nEvery if statement evaluates to True or False. True and False are Python keywords, which have special meanings attached to them. You can test for the following conditions in your if statements:\n\nequality (==)\ninequality (!=)\nother inequalities\ngreater than (>)\ngreater than or equal to (>=)\nless than (<)\nless than or equal to (<=)\n\n\nYou can test if an item is in a list.\n\nWhitespace\nRemember learning about PEP 8? There is a section of PEP 8 that tells us it's a good idea to put a single space on either side of all of these comparison operators. If you're not sure what this means, just follow the style of the examples you see below.\nEquality\nTwo items are equal if they have the same value. You can test for equality between numbers, strings, and a number of other objects which you will learn about later. Some of these results may be surprising, so take a careful look at the examples below.\nIn Python, as in many programming languages, two equals signs tests for equality.\nWatch out! 
Be careful of accidentally using one equals sign, which can really throw things off because that one equals sign actually sets your item to the value you are testing for!", "5 == 5\n\n3 == 5 \n\n5 == 5.0\n\n'eric' == 'eric'\n\n'Eric' == 'eric'\n\n'Eric'.lower() == 'eric'.lower()\n\n'5' == 5\n\n'5' == str(5)", "top\nInequality\nTwo items are inequal if they do not have the same value. In Python, we test for inequality using the exclamation point and one equals sign.\nSometimes you want to test for equality and if that fails, assume inequality. Sometimes it makes more sense to test for inequality directly.", "3 != 5\n\n5 != 5\n\n'Eric' != 'eric'", "top\nOther Inequalities\ngreater than", "5 > 3", "greater than or equal to", "5 >= 3\n\n3 >= 3", "less than", "3 < 5", "less than or equal to", "3 <= 5\n\n3 <= 3", "top\nChecking if an item is in a list\nYou can check if an item is in a list using the in keyword.", "vowels = ['a', 'e', 'i', 'o', 'u']\n'a' in vowels\n\nvowels = ['a', 'e', 'i', 'o', 'u']\n'b' in vowels", "<a id=\"Exercises-logical\"></a>\nExercises\n\nTrue and False\n\nWrite a program that consists of at least ten lines, each of which has a logical statement on it. The output of your program should be 5 Trues and 5 Falses.\nNote: You will probably need to write print(5 &gt; 3), not just 5 &gt; 3.\n\ntop\nThe if-elif...else chain\nYou can test whatever series of conditions you want to, and you can test your conditions in any combination you want.\nSimple if statements\nThe simplest test has a single if statement, and a single statement to execute if the condition is True.", "dogs = ['willie', 'hootz', 'peso', 'juno']\n\nif len(dogs) > 3:\n print(\"Wow, we have a lot of dogs here!\")", "In this situation, nothing happens if the test does not pass.", "###highlight=[2]\ndogs = ['willie', 'hootz']\n\nif len(dogs) > 3:\n print(\"Wow, we have a lot of dogs here!\")", "Notice that there are no errors. The condition len(dogs) &gt; 3 evaluates to False, and the program moves on to any lines after the if block.\nif-else statements\nMany times you will want to respond in two possible ways to a test. If the test evaluates to True, you will want to do one thing. If the test evaluates to False, you will want to do something else. The if-else structure lets you do that easily. Here's what it looks like:", "dogs = ['willie', 'hootz', 'peso', 'juno']\n\nif len(dogs) > 3:\n print(\"Wow, we have a lot of dogs here!\")\nelse:\n print(\"Okay, this is a reasonable number of dogs.\")", "Our results have not changed in this case, because if the test evaluates to True only the statements under the if statement are executed. The statements under else area only executed if the test fails:", "###highlight=[2]\ndogs = ['willie', 'hootz']\n\nif len(dogs) > 3:\n print(\"Wow, we have a lot of dogs here!\")\nelse:\n print(\"Okay, this is a reasonable number of dogs.\")", "The test evaluated to False, so only the statement under else is run.\nif-elif...else chains\nMany times, you will want to test a series of conditions, rather than just an either-or situation. You can do this with a series of if-elif-else statements\nThere is no limit to how many conditions you can test. You always need one if statement to start the chain, and you can never have more than one else statement. 
But you can have as many elif statements as you want.", "dogs = ['willie', 'hootz', 'peso', 'monty', 'juno', 'turkey']\n\nif len(dogs) >= 5:\n print(\"Holy mackerel, we might as well start a dog hostel!\")\nelif len(dogs) >= 3:\n print(\"Wow, we have a lot of dogs here!\")\nelse:\n print(\"Okay, this is a reasonable number of dogs.\")", "It is important to note that in situations like this, only the first test is evaluated. In an if-elif-else chain, once a test passes the rest of the conditions are ignored.", "###highlight=[2]\ndogs = ['willie', 'hootz', 'peso', 'monty']\n\nif len(dogs) >= 5:\n print(\"Holy mackerel, we might as well start a dog hostel!\")\nelif len(dogs) >= 3:\n print(\"Wow, we have a lot of dogs here!\")\nelse:\n print(\"Okay, this is a reasonable number of dogs.\")", "The first test failed, so Python evaluated the second test. That test passed, so the statement corresponding to len(dogs) &gt;= 3 is executed.", "###highlight=[2]\ndogs = ['willie', 'hootz']\n\nif len(dogs) >= 5:\n print(\"Holy mackerel, we might as well start a dog hostel!\")\nelif len(dogs) >= 3:\n print(\"Wow, we have a lot of dogs here!\")\nelse:\n print(\"Okay, this is a reasonable number of dogs.\")", "In this situation, the first two tests fail, so the statement in the else clause is executed. Note that this statement would be executed even if there are no dogs at all:", "###highlight=[2]\ndogs = []\n\nif len(dogs) >= 5:\n print(\"Holy mackerel, we might as well start a dog hostel!\")\nelif len(dogs) >= 3:\n print(\"Wow, we have a lot of dogs here!\")\nelse:\n print(\"Okay, this is a reasonable number of dogs.\")", "Note that you don't have to take any action at all when you start a series of if statements. You could simply do nothing in the situation that there are no dogs by replacing the else clause with another elif clause:", "###highlight=[8]\ndogs = []\n\nif len(dogs) >= 5:\n print(\"Holy mackerel, we might as well start a dog hostel!\")\nelif len(dogs) >= 3:\n print(\"Wow, we have a lot of dogs here!\")\nelif len(dogs) >= 1:\n print(\"Okay, this is a reasonable number of dogs.\")", "In this case, we only print a message if there is at least one dog present. Of course, you could add a new else clause to respond to the situation in which there are no dogs at all:", "###highlight=[10,11]\ndogs = []\n\nif len(dogs) >= 5:\n print(\"Holy mackerel, we might as well start a dog hostel!\")\nelif len(dogs) >= 3:\n print(\"Wow, we have a lot of dogs here!\")\nelif len(dogs) >= 1:\n print(\"Okay, this is a reasonable number of dogs.\")\nelse:\n print(\"I wish we had a dog here.\")", "As you can see, the if-elif-else chain lets you respond in very specific ways to any given situation.\n<a id=\"Exercises-elif\"></a>\nExercises\n\nThree is a Crowd\n\nMake a list of names that includes at least four people.\nWrite an if test that prints a message about the room being crowded, if there are more than three people in your list.\nModify your list so that there are only two people in it. Use one of the methods for removing people from the list, don't just redefine the list.\nRun your if test again. There should be no output this time, because there are less than three people in the list.\nBonus: Store your if test in a function called something like crowd_test.\n\nThree is a Crowd - Part 2\n\nSave your program from Three is a Crowd under a new name.\nAdd an else statement to your if tests. 
If the else statement is run, have it print a message that the room is not very crowded.\n\nSix is a Mob\n\nSave your program from Three is a Crowd - Part 2 under a new name.\nAdd some names to your list, so that there are at least six people in the list.\nModify your tests so that\nIf there are more than 5 people, a message is printed about there being a mob in the room.\nIf there are 3-5 people, a message is printed about the room being crowded.\nIf there are 1 or 2 people, a message is printed about the room not being crowded.\nIf there are no people in the room, a message is printed abou the room being empty.\n\n\n\ntop\nMore than one passing test\nIn all of the examples we have seen so far, only one test can pass. As soon as the first test passes, the rest of the tests are ignored. This is really good, because it allows our code to run more efficiently. Many times only one condition can be true, so testing every condition after one passes would be meaningless.\nThere are situations in which you want to run a series of tests, where every single test runs. These are situations where any or all of the tests could pass, and you want to respond to each passing test. Consider the following example, where we want to greet each dog that is present:", "dogs = ['willie', 'hootz']\n\nif 'willie' in dogs:\n print(\"Hello, Willie!\")\nif 'hootz' in dogs:\n print(\"Hello, Hootz!\")\nif 'peso' in dogs:\n print(\"Hello, Peso!\")\nif 'monty' in dogs:\n print(\"Hello, Monty!\")", "If we had done this using an if-elif-else chain, only the first dog that is present would be greeted:", "###highlight=[6,7,8,9,10,11]\ndogs = ['willie', 'hootz']\n\nif 'willie' in dogs:\n print(\"Hello, Willie!\")\nelif 'hootz' in dogs:\n print(\"Hello, Hootz!\")\nelif 'peso' in dogs:\n print(\"Hello, Peso!\")\nelif 'monty' in dogs:\n print(\"Hello, Monty!\")", "Of course, this could be written much more cleanly using lists and for loops. See if you can follow this code.", "dogs_we_know = ['willie', 'hootz', 'peso', 'monty', 'juno', 'turkey']\ndogs_present = ['willie', 'hootz']\n\n# Go through all the dogs that are present, and greet the dogs we know.\nfor dog in dogs_present:\n if dog in dogs_we_know:\n print(\"Hello, %s!\" % dog.title())", "This is the kind of code you should be aiming to write. It is fine to come up with code that is less efficient at first. When you notice yourself writing the same kind of code repeatedly in one program, look to see if you can use a loop or a function to make your code more efficient.\ntop\nTrue and False values\nEvery value can be evaluated as True or False. The general rule is that any non-zero or non-empty value will evaluate to True. If you are ever unsure, you can open a Python terminal and write two lines to find out if the value you are considering is True or False. Take a look at the following examples, keep them in mind, and test any value you are curious about. 
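\nA related shortcut worth knowing (it is not used in the examples below, but it is handy at the terminal): the built-in bool() function reports the same True/False evaluation in a single call.\n```python\nprint(bool(''), bool(0), bool('hello'))\n# False False True\n```\n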
I am using a slightly longer test just to make sure something gets printed each time.", "###highlight=[2]\nif 0:\n print(\"This evaluates to True.\")\nelse:\n print(\"This evaluates to False.\")\n\n###highlight=[2]\nif 1:\n print(\"This evaluates to True.\")\nelse:\n print(\"This evaluates to False.\")\n\n###highlight=[2,3]\n# Arbitrary non-zero numbers evaluate to True.\nif 1253756:\n print(\"This evaluates to True.\")\nelse:\n print(\"This evaluates to False.\")\n\n###highlight=[2,3]\n# Negative numbers are not zero, so they evaluate to True.\nif -1:\n print(\"This evaluates to True.\")\nelse:\n print(\"This evaluates to False.\")\n\n###highlight=[2,3]\n# An empty string evaluates to False.\nif '':\n print(\"This evaluates to True.\")\nelse:\n print(\"This evaluates to False.\")\n\n###highlight=[2,3]\n# Any other string, including a space, evaluates to True.\nif ' ':\n print(\"This evaluates to True.\")\nelse:\n print(\"This evaluates to False.\")\n\n###highlight=[2,3]\n# Any other string, including a space, evaluates to True.\nif 'hello':\n print(\"This evaluates to True.\")\nelse:\n print(\"This evaluates to False.\")\n\n###highlight=[2,3]\n# None is a special object in Python. It evaluates to False.\nif None:\n print(\"This evaluates to True.\")\nelse:\n print(\"This evaluates to False.\")", "top\nOverall Challenges\nAlien Points\n\nMake a list of ten aliens, each of which is one color: 'red', 'green', or 'blue'.\nYou can shorten this to 'r', 'g', and 'b' if you want, but if you choose this option you have to include a comment explaining what r, g, and b stand for.\n\n\nRed aliens are worth 5 points, green aliens are worth 10 points, and blue aliens are worth 20 points.\nUse a for loop to determine the number of points a player would earn for destroying all of the aliens in your list.\nhint\n\ntop\n\nPrevious: Introducing Functions | \nHome |\nNext: While Loops and Input\nHints\nThese are placed at the bottom, so you can have a chance to solve exercises without seeing any hints.\nAlien Invaders\n\nAfter you define your list of aliens, set a variable called current_score or current_points equal to 0.\nInside your for loop, write a series of if tests to determine how many points to add to the current score.\nTo keep a running total, use the syntax current_score = current_score + points, where points is the number of points for the current alien." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
seg/2016-ml-contest
LA_Team/Facies_classification_LA_TEAM_05.ipynb
apache-2.0
[ "Facies classification using Machine Learning\nLA Team Submission 5 ##\nLukas Mosser, Alfredo De la Fuente\nIn this approach for solving the facies classfication problem ( https://github.com/seg/2016-ml-contest. ) we will explore the following statregies:\n- Features Exploration: based on Paolo Bestagini's work, we will consider imputation, normalization and augmentation routines for the initial features.\n- Model tuning: \nLibraries\nWe will need to install the following libraries and packages.", "%%sh\npip install pandas\npip install scikit-learn\npip install tpot\n\nfrom __future__ import print_function\nimport numpy as np\n%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import KFold , StratifiedKFold\nfrom classification_utilities import display_cm, display_adj_cm\nfrom sklearn.metrics import confusion_matrix, f1_score\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import LeavePGroupsOut\nfrom sklearn.multiclass import OneVsOneClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom scipy.signal import medfilt", "Data Preprocessing", "#Load Data\ndata = pd.read_csv('../facies_vectors.csv')\n\n# Parameters\nfeature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']\nfacies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']\nfacies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']\n\n# Store features and labels\nX = data[feature_names].values \ny = data['Facies'].values \n\n# Store well labels and depths\nwell = data['Well Name'].values\ndepth = data['Depth'].values\n\n# Fill 'PE' missing values with mean\nimp = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)\nimp.fit(X)\nX = imp.transform(X)", "We procceed to run Paolo Bestagini's routine to include a small window of values to acount for the spatial component in the log analysis, as well as the gradient information with respect to depth. 
This will be our prepared training dataset.", "# Feature windows concatenation function\ndef augment_features_window(X, N_neig):\n \n # Parameters\n N_row = X.shape[0]\n N_feat = X.shape[1]\n\n # Zero padding\n X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))\n\n # Loop over windows\n X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))\n for r in np.arange(N_row)+N_neig:\n this_row = []\n for c in np.arange(-N_neig,N_neig+1):\n this_row = np.hstack((this_row, X[r+c]))\n X_aug[r-N_neig] = this_row\n\n return X_aug\n\n\n# Feature gradient computation function\ndef augment_features_gradient(X, depth):\n \n # Compute features gradient\n d_diff = np.diff(depth).reshape((-1, 1))\n d_diff[d_diff==0] = 0.001\n X_diff = np.diff(X, axis=0)\n X_grad = X_diff / d_diff\n \n # Compensate for last missing value\n X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))\n \n return X_grad\n\n\n# Feature augmentation function\ndef augment_features(X, well, depth, N_neig=1):\n \n # Augment features\n X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))\n for w in np.unique(well):\n w_idx = np.where(well == w)[0]\n X_aug_win = augment_features_window(X[w_idx, :], N_neig)\n X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])\n X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)\n \n # Find padded rows\n padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])\n \n return X_aug, padded_rows\n\nX_aug, padded_rows = augment_features(X, well, depth)\n\n# Initialize model selection methods\nlpgo = LeavePGroupsOut(2)\n\n# Generate splits\nsplit_list = []\nfor train, val in lpgo.split(X, y, groups=data['Well Name']):\n hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)\n hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)\n if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):\n split_list.append({'train':train, 'val':val})\n \n \ndef preprocess():\n \n # Preprocess data to use in model\n X_train_aux = []\n X_test_aux = []\n y_train_aux = []\n y_test_aux = []\n \n # For each data split\n split = split_list[5]\n \n # Remove padded rows\n split_train_no_pad = np.setdiff1d(split['train'], padded_rows)\n\n # Select training and validation data from current split\n X_tr = X_aug[split_train_no_pad, :]\n X_v = X_aug[split['val'], :]\n y_tr = y[split_train_no_pad]\n y_v = y[split['val']]\n\n # Select well labels for validation data\n well_v = well[split['val']]\n\n # Feature normalization\n scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)\n X_tr = scaler.transform(X_tr)\n X_v = scaler.transform(X_v)\n \n X_train_aux.append( X_tr )\n X_test_aux.append( X_v )\n y_train_aux.append( y_tr )\n y_test_aux.append ( y_v )\n \n X_train = np.concatenate( X_train_aux )\n X_test = np.concatenate ( X_test_aux )\n y_train = np.concatenate ( y_train_aux )\n y_test = np.concatenate ( y_test_aux )\n \n return X_train , X_test , y_train , y_test ", "Data Analysis\nIn this section we will run a Cross Validation routine", "from tpot import TPOTClassifier\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = preprocess()\n\ntpot = TPOTClassifier(generations=5, population_size=20, \n verbosity=2,max_eval_time_mins=20,\n max_time_mins=100,scoring='f1_micro',\n random_state = 17)\ntpot.fit(X_train, y_train)\nprint(tpot.score(X_test, y_test))\ntpot.export('FinalPipeline.py')\n\nfrom sklearn.ensemble import RandomForestClassifier, VotingClassifier\nfrom 
sklearn.model_selection import train_test_split\nfrom sklearn.naive_bayes import BernoulliNB\nfrom sklearn.pipeline import make_pipeline, make_union\nfrom sklearn.preprocessing import FunctionTransformer\nimport xgboost as xgb\nfrom xgboost.sklearn import XGBClassifier\n\n# Train and test a classifier\ndef train_and_test(X_tr, y_tr, X_v, well_v):\n \n # Feature normalization\n scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)\n X_tr = scaler.transform(X_tr)\n X_v = scaler.transform(X_v)\n \n # Train classifier\n #clf = make_pipeline(make_union(VotingClassifier([(\"est\", ExtraTreesClassifier(criterion=\"gini\", max_features=1.0, n_estimators=500))]), FunctionTransformer(lambda X: X)), XGBClassifier(learning_rate=0.73, max_depth=10, min_child_weight=10, n_estimators=500, subsample=0.27))\n #clf = make_pipeline( KNeighborsClassifier(n_neighbors=5, weights=\"distance\") ) \n #clf = make_pipeline(MaxAbsScaler(),make_union(VotingClassifier([(\"est\", RandomForestClassifier(n_estimators=500))]), FunctionTransformer(lambda X: X)),ExtraTreesClassifier(criterion=\"entropy\", max_features=0.0001, n_estimators=500))\n # * clf = make_pipeline( make_union(VotingClassifier([(\"est\", BernoulliNB(alpha=60.0, binarize=0.26, fit_prior=True))]), FunctionTransformer(lambda X: X)),RandomForestClassifier(n_estimators=500))\n clf = make_pipeline ( XGBClassifier(learning_rate=0.12, max_depth=3, min_child_weight=10, n_estimators=150, seed = 17, colsample_bytree = 0.9) )\n clf.fit(X_tr, y_tr)\n \n # Test classifier\n y_v_hat = clf.predict(X_v)\n \n # Clean isolated facies for each well\n for w in np.unique(well_v):\n y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=5)\n \n return y_v_hat", "Prediction", "#Load testing data\ntest_data = pd.read_csv('../validation_data_nofacies.csv')\n\n# Prepare training data\nX_tr = X\ny_tr = y\n\n# Augment features\nX_tr, padded_rows = augment_features(X_tr, well, depth)\n\n# Removed padded rows\nX_tr = np.delete(X_tr, padded_rows, axis=0)\ny_tr = np.delete(y_tr, padded_rows, axis=0) \n\n# Prepare test data\nwell_ts = test_data['Well Name'].values\ndepth_ts = test_data['Depth'].values\nX_ts = test_data[feature_names].values\n\n# Augment features\nX_ts, padded_rows = augment_features(X_ts, well_ts, depth_ts)\n\n# Predict test labels\ny_ts_hat = train_and_test(X_tr, y_tr, X_ts, well_ts)\n\n# Save predicted labels\ntest_data['Facies'] = y_ts_hat\ntest_data.to_csv('Prediction_XX_Final.csv')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ethen8181/machine-learning
python/class.ipynb
mit
[ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Working-with-Python-Classes\" data-toc-modified-id=\"Working-with-Python-Classes-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Working with Python Classes</a></span><ul class=\"toc-item\"><li><span><a href=\"#Public,-Private,-Protected\" data-toc-modified-id=\"Public,-Private,-Protected-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Public, Private, Protected</a></span></li><li><span><a href=\"#Class-Decorators\" data-toc-modified-id=\"Class-Decorators-1.2\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Class Decorators</a></span><ul class=\"toc-item\"><li><span><a href=\"#@Property\" data-toc-modified-id=\"@Property-1.2.1\"><span class=\"toc-item-num\">1.2.1&nbsp;&nbsp;</span>@Property</a></span></li><li><span><a href=\"#@classmethod-and-@staticmethod\" data-toc-modified-id=\"@classmethod-and-@staticmethod-1.2.2\"><span class=\"toc-item-num\">1.2.2&nbsp;&nbsp;</span>@classmethod and @staticmethod</a></span></li></ul></li></ul></li><li><span><a href=\"#Reference\" data-toc-modified-id=\"Reference-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Reference</a></span></li></ul></div>", "# code for loading the format for the notebook\nimport os\n\n# path : store the current path to convert back to it later\npath = os.getcwd()\nos.chdir(os.path.join('..', 'notebook_format'))\n\nfrom formats import load_style\nload_style(plot_style=False)\n\nos.chdir(path)\n\n# 1. magic to print version\n# 2. magic so that the notebook will reload external python modules\n%load_ext watermark\n%load_ext autoreload \n%autoreload 2\n\n%watermark -a 'Ethen' -d -t -v", "Working with Python Classes\nEncapsulation is seen as the bundling of data with the methods that operate on that data. It is often accomplished by providing two kinds of methods for attributes: The methods for retrieving or accessing the values of attributes are called getter methods. Getter methods do not change the values of attributes, they just return the values. The methods used for changing the values of attributes are called setter methods. \nPublic, Private, Protected\nThere are two ways to restrict the access to class attributes:\n\nprotected. First, we can prefix an attribute name with a leading underscore \"_\". This marks the attribute as protected. It tells users of the class not to use this attribute unless, somebody writes a subclass.\nprivate. Second, we can prefix an attribute name with two leading underscores \"__\". The attribute is now inaccessible and invisible from outside. It's neither possible to read nor write to those attributes except inside of the class definition itself.", "class A:\n \n def __init__(self):\n self.__priv = \"I am private\"\n self._prot = \"I am protected\"\n self.pub = \"I am public\"\n\nx = A()\nprint(x.pub)\n\n# Whenever we assign or retrieve any object attribute \n# Python searches it in the object's __dict__ dictionary\nprint(x.__dict__)", "When the Python compiler sees a private attribute, it actually transforms the actual name to _[Class name]__[private attribute name]. However, this still does not prevent the end-user from accessing the attribute. Thus in Python land, it is more common to use public and protected attribute, write proper docstrings and assume that everyone is a consenting adult, i.e. 
won't do anything with the protected method unless they know what they are doing.\nClass Decorators\n\n@property The Pythonic way to introduce attributes is to make them public, and not introduce getters and setters to retrieve or change them.\n@classmethod To add additional constructor to the class.\n@staticmethod To attach functions to classes so people won't misuse them in wrong places.\n\n@Property\nLet's assume one day we decide to make a class that could store the temperature in degree Celsius. The temperature will be a private method, so our end-users won't have direct access to it.\nThe class will also implement a method to convert the temperature into degree Fahrenheit. And we also want to implement a value constraint to the temperature, so that it cannot go below -273 degree Celsius. One way of doing this is to define a getter and setter interfaces to manipulate it.", "class Celsius:\n \n def __init__(self, temperature = 0):\n self.set_temperature(temperature)\n\n def to_fahrenheit(self):\n return (self.get_temperature() * 1.8) + 32\n\n def get_temperature(self):\n return self._temperature\n\n def set_temperature(self, value):\n if value < -273:\n raise ValueError('Temperature below -273 is not possible')\n \n self._temperature = value\n\n# c = Celsius(-277) # this returns an error\nc = Celsius(37)\nc.get_temperature()", "Instead of that, now the property way. Where we define the @property and the @[attribute name].setter.", "class Celsius:\n \n def __init__(self, temperature = 0):\n self._temperature = temperature\n\n def to_fahrenheit(self):\n return (self.temperature * 1.8) + 32\n \n # have access to the value like it is an attribute instead of a method\n @property\n def temperature(self):\n return self._temperature\n \n # like accessing the attribute with an extra layer of error checking\n @temperature.setter\n def temperature(self, value):\n if value < -273:\n raise ValueError('Temperature below -273 is not possible')\n \n print('Setting value')\n self._temperature = value\n\nc = Celsius(37)\n\n# much easier to access then the getter, setter way\nprint(c.temperature)\n\n# note that you can still access the private attribute\n# and violate the temperature checking, \n# but then it's the users fault not yours\nc._temperature = -300\nprint(c._temperature)\n\n# accessing the attribute will return the ValueError error\n# c.temperature = -300", "@classmethod and @staticmethod\n@classmethods create alternative constructors for the class. An example of this behavior is there are different ways to construct a dictionary.", "print(dict.fromkeys(['raymond', 'rachel', 'mathew']))\n\nimport time\n\nclass Date:\n # Primary constructor\n def __init__(self, year, month, day):\n self.year = year\n self.month = month\n self.day = day\n\n # Alternate constructor\n @classmethod\n def today(cls):\n t = time.localtime()\n return cls(t.tm_year, t.tm_mon, t.tm_mday)\n\n# Primary\na = Date(2012, 12, 21) \nprint(a.__dict__)\n\n# Alternate\nb = Date.today() \nprint(b.__dict__)", "The cls is critical, as it is an object that holds the class itself. This makes them work with inheritance.", "class NewDate(Date):\n pass\n\n# Creates an instance of Date (cls=Date)\nc = Date.today() \nprint(c.__dict__)\n\n# Creates an instance of NewDate (cls=NewDate)\nd = NewDate.today() \nprint(d.__dict__)", "The purpose of @staticmethod is to attach functions to classes. 
We do this to improve the findability of the function and to make sure that people are using the function in the appropriate context.", "class Date:\n # Primary constructor\n def __init__(self, year, month, day):\n self.year = year\n self.month = month\n self.day = day\n\n # Alternate constructor\n @classmethod\n def today(cls):\n t = time.localtime()\n return cls(t.tm_year, t.tm_mon, t.tm_mday)\n \n # the logic belongs with the Date class\n @staticmethod\n def show_tomorrow_date():\n t = time.localtime()\n return t.tm_year, t.tm_mon, t.tm_mday + 1\n\nDate.show_tomorrow_date()", "For those interested, the following link contains a much more in-depth introduction to @classmethod and @staticmethod. Blog: Python's Instance, Class, and Static Methods Demystified\nReference\n\nPython Tutorials: Python @property \nOnline Python Course Notes: Properties vs. Getters and Setters" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
james-prior/euler
euler-018-maximum-path-sum-i-20161225.ipynb
mit
[ "Project Euler\nMaximum path sum I\nProblem 18\nBy starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23. \n<center>\n3\n7 4\n 2 4 6\n 8 5 9 3\n</center>\nThat is, 3 + 7 + 4 + 9 = 23. \nFind the maximum total from top to bottom of the triangle below: \n<center>\n75\n 95 64\n 17 47 82\n 18 35 87 10\n 20 04 82 47 65\n 19 01 23 75 03 34\n 88 02 77 73 07 63 67\n 99 65 04 28 06 16 70 92\n 41 41 26 56 83 40 80 70 33\n 41 48 72 33 47 32 37 16 94 29\n 53 71 44 65 25 43 91 52 97 51 14\n 70 11 33 28 77 73 17 78 39 68 17 57\n 91 71 52 38 17 14 91 43 58 50 27 29 48\n 63 66 04 68 89 53 67 30 73 16 69 87 40 31\n 04 62 98 27 23 09 70 98 73 93 38 53 60 04 23\n</center>\nNOTE: As there are only 16384 routes, it is possible to solve this problem by trying every route. However, Problem 67, is the same challenge with a triangle containing one-hundred rows; it cannot be solved by brute force, and requires a clever method! ;o)", "t4 = [\n [3],\n [7, 4],\n [2, 4, 6],\n [8, 5, 9, 3],\n]\nt4\n\nt15 = [\n [75],\n [95, 64],\n [17, 47, 82],\n [18, 35, 87, 10],\n [20, 4, 82, 47, 65],\n [19, 1, 23, 75, 3, 34],\n [88, 2, 77, 73, 7, 63, 67],\n [99, 65, 4, 28, 6, 16, 70, 92],\n [41, 41, 26, 56, 83, 40, 80, 70, 33],\n [41, 48, 72, 33, 47, 32, 37, 16, 94, 29],\n [53, 71, 44, 65, 25, 43, 91, 52, 97, 51, 14],\n [70, 11, 33, 28, 77, 73, 17, 78, 39, 68, 17, 57],\n [91, 71, 52, 38, 17, 14, 91, 43, 58, 50, 27, 29, 48],\n [63, 66, 4, 68, 89, 53, 67, 30, 73, 16, 69, 87, 40, 31],\n [ 4, 62, 98, 27, 23, 9, 70, 98, 73, 93, 38, 53, 60, 4, 23],\n]\nlen(t15)\n\nfrom copy import deepcopy\n\ndef foo(t):\n t = deepcopy(t)\n for i in range(len(t))[::-1]:\n r = t[i]\n try:\n nr = t[i+1]\n except IndexError:\n for j in range(len(t[i])):\n t[i][j] = (t[i][j], None)\n else:\n for j in range(len(t[i])):\n dir = (t[i+1][j+1][0] > t[i+1][j+0][0])\n t[i][j] = (t[i][j] + t[i+1][j+dir][0], dir)\n return t[0][0][0]\n\nn = t4\n%timeit foo(n)\nfoo(n)\n\nn = t15\n%timeit foo(n)\nfoo(n)", "Let's try a somewhat functional approach.\nIt is much easier to understand.\nI like that.", "def foo(t):\n old_row = []\n for row in t:\n stagger_max = map(max, zip([0] + old_row, old_row + [0]))\n old_row = list(map(sum, zip(stagger_max, row)))\n \n return max(old_row)\n\nn = t4\n%timeit foo(n)\nfoo(n)\n\nn = t15\n%timeit foo(n)\nfoo(n)", "Try tuples instead of lists.\nIt's a little bit faster and still readable.\nThat's a good combination.", "def foo(t):\n old_row = tuple()\n for row in t:\n stagger_max = map(max, zip((0,) + old_row, old_row + (0,)))\n old_row = tuple(map(sum, zip(stagger_max, row)))\n \n return max(old_row)\n\nn = t4\n%timeit foo(n)\nfoo(n)\n\nn = t15\n%timeit foo(n)\nfoo(n)", "Convert t4 and t15 to be tuples instead of lists.\nThis does not affect readability.\nIt is faster yet.", "t4 = tuple(tuple(row) for row in t4)\nt15 = tuple(tuple(row) for row in t15)\n\ndef foo(t):\n old_row = tuple()\n for row in t:\n stagger_max = map(max, zip((0,) + old_row, old_row + (0,)))\n old_row = tuple(map(sum, zip(stagger_max, row)))\n \n return max(old_row)\n\nn = t4\n%timeit foo(n)\nfoo(n)\n\nn = t15\n%timeit foo(n)\nfoo(n)", "I like cell 7 the most.\nFor me, its lists are more readable than tuples." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ianhamilton117/deep-learning
intro-to-rnns/Anna_KaRNNa_Exercises.ipynb
mit
[ "Anna KaRNNa\nIn this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">", "import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf", "First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.", "with open('anna.txt', 'r') as f:\n text=f.read()\nvocab = set(text)\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nencoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)", "Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.", "text[:100]", "And we can see the characters encoded as integers.", "encoded[:100]", "Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.", "len(vocab)", "Making training mini-batches\nHere is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:\n<img src=\"assets/sequence_batching@1x.png\" width=500px>\n<br>\nWe have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.\nThe first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \\times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.\nAfter that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \\times (M * K)$ where $K$ is the number of batches.\nNow that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \\times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. 
You'll usually see the first input character used as the last target character, so something like this:\npython\ny[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]\nwhere x is the input batch and y is the target batch.\nThe way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.\n\nExercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.", "def get_batches(arr, n_seqs, n_steps):\n '''Create a generator that returns batches of size\n n_seqs x n_steps from arr.\n \n Arguments\n ---------\n arr: Array you want to make batches from\n n_seqs: Batch size, the number of sequences per batch\n n_steps: Number of sequence steps per batch\n '''\n # Get the number of characters per batch and number of batches we can make\n characters_per_batch = \n n_batches = \n \n # Keep only enough characters to make full batches\n arr = \n \n # Reshape into n_seqs rows\n arr = \n \n for n in range(0, arr.shape[1], n_steps):\n # The features\n x = \n # The targets, shifted by one\n y = \n yield x, y", "Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.", "batches = get_batches(encoded, 10, 50)\nx, y = next(batches)\n\nprint('x\\n', x[:10, :10])\nprint('\\ny\\n', y[:10, :10])", "If you implemented get_batches correctly, the above output should look something like \n```\nx\n [[55 63 69 22 6 76 45 5 16 35]\n [ 5 69 1 5 12 52 6 5 56 52]\n [48 29 12 61 35 35 8 64 76 78]\n [12 5 24 39 45 29 12 56 5 63]\n [ 5 29 6 5 29 78 28 5 78 29]\n [ 5 13 6 5 36 69 78 35 52 12]\n [63 76 12 5 18 52 1 76 5 58]\n [34 5 73 39 6 5 12 52 36 5]\n [ 6 5 29 78 12 79 6 61 5 59]\n [ 5 78 69 29 24 5 6 52 5 63]]\ny\n [[63 69 22 6 76 45 5 16 35 35]\n [69 1 5 12 52 6 5 56 52 29]\n [29 12 61 35 35 8 64 76 78 28]\n [ 5 24 39 45 29 12 56 5 63 29]\n [29 6 5 29 78 28 5 78 29 45]\n [13 6 5 36 69 78 35 52 12 43]\n [76 12 5 18 52 1 76 5 58 52]\n [ 5 73 39 6 5 12 52 36 5 78]\n [ 5 29 78 12 79 6 61 5 59 63]\n [78 69 29 24 5 6 52 5 63 76]]\n ``\n although the exact numbers will be different. Check to make sure the data is shifted over one step fory`.\nBuilding the model\nBelow is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.\n<img src=\"assets/charRNN.png\" width=500px>\nInputs\nFirst off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. 
To make a scalar, you create a placeholder without giving it a size.\n\nExercise: Create the input placeholders in the function below.", "def build_inputs(batch_size, num_steps):\n ''' Define placeholders for inputs, targets, and dropout \n \n Arguments\n ---------\n batch_size: Batch size, number of sequences per batch\n num_steps: Number of sequence steps in a batch\n \n '''\n # Declare placeholders we'll feed into the graph\n inputs = \n targets = \n \n # Keep probability placeholder for drop out layers\n keep_prob = \n \n return inputs, targets, keep_prob", "LSTM Cell\nHere we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.\nWe first create a basic LSTM cell with\npython\nlstm = tf.contrib.rnn.BasicLSTMCell(num_units)\nwhere num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with \npython\ntf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\nYou pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this\npython\ntf.contrib.rnn.MultiRNNCell([cell]*num_layers)\nThis might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like\n```python\ndef build_cell(num_units, keep_prob):\n lstm = tf.contrib.rnn.BasicLSTMCell(num_units)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\nreturn drop\n\ntf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])\n```\nEven though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.\nWe also need to create an initial cell state of all zeros. This can be done like so\npython\ninitial_state = cell.zero_state(batch_size, tf.float32)\nBelow, we implement the build_lstm function to create these LSTM cells and the initial state.", "def build_lstm(lstm_size, num_layers, batch_size, keep_prob):\n ''' Build LSTM cell.\n \n Arguments\n ---------\n keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability\n lstm_size: Size of the hidden layers in the LSTM cells\n num_layers: Number of LSTM layers\n batch_size: Batch size\n\n '''\n ### Build the LSTM Cell\n # Use a basic LSTM cell\n lstm = \n \n # Add dropout to the cell outputs\n drop = \n \n # Stack up multiple LSTM layers, for deep learning\n cell = \n initial_state = \n \n return cell, initial_state", "RNN Output\nHere we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.\nIf our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \\times M \\times L$. 
The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \\times M \\times L$. \nWe are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \\times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \\times L$.\nOnce we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.\n\nExercise: Implement the output layer in the function below.", "def build_output(lstm_output, in_size, out_size):\n ''' Build a softmax layer, return the softmax output and logits.\n \n Arguments\n ---------\n \n lstm_output: List of output tensors from the LSTM layer\n in_size: Size of the input tensor, for example, size of the LSTM cells\n out_size: Size of this softmax layer\n \n '''\n\n # Reshape output so it's a bunch of rows, one row for each step for each sequence.\n # Concatenate lstm_output over axis 1 (the columns)\n seq_output = \n # Reshape seq_output to a 2D tensor with lstm_size columns\n x = \n \n # Connect the RNN outputs to a softmax layer\n with tf.variable_scope('softmax'):\n # Create the weight and bias variables here\n softmax_w = \n softmax_b = \n \n # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch\n # of rows of logit outputs, one for each step and sequence\n logits = \n \n # Use softmax to get the probabilities for predicted characters\n out = \n \n return out, logits", "Training loss\nNext up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \\times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \\times C$.\nThen we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.\n\nExercise: Implement the loss calculation in the function below.", "def build_loss(logits, targets, lstm_size, num_classes):\n ''' Calculate the loss from the logits and the targets.\n \n Arguments\n ---------\n logits: Logits from final fully connected layer\n targets: Targets for supervised learning\n lstm_size: Number of LSTM hidden units\n num_classes: Number of classes in targets\n \n '''\n \n # One-hot encode targets and reshape to match logits, one row per sequence per step\n y_one_hot = \n y_reshaped = \n \n # Softmax cross entropy loss\n loss = \n \n return loss", "Optimizer\nHere we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. 
LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.", "def build_optimizer(loss, learning_rate, grad_clip):\n ''' Build optmizer for training, using gradient clipping.\n \n Arguments:\n loss: Network loss\n learning_rate: Learning rate for optimizer\n \n '''\n \n # Optimizer for training, using gradient clipping to control exploding gradients\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n \n return optimizer", "Build the network\nNow we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN. \n\nExercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.", "class CharRNN:\n \n def __init__(self, num_classes, batch_size=64, num_steps=50, \n lstm_size=128, num_layers=2, learning_rate=0.001, \n grad_clip=5, sampling=False):\n \n # When we're using this network for sampling later, we'll be passing in\n # one character at a time, so providing an option for that\n if sampling == True:\n batch_size, num_steps = 1, 1\n else:\n batch_size, num_steps = batch_size, num_steps\n\n tf.reset_default_graph()\n \n # Build the input placeholder tensors\n self.inputs, self.targets, self.keep_prob = \n\n # Build the LSTM cell\n cell, self.initial_state = \n\n ### Run the data through the RNN layers\n # First, one-hot encode the input tokens\n x_one_hot = \n \n # Run each sequence step through the RNN with tf.nn.dynamic_rnn \n outputs, state =\n self.final_state = state\n \n # Get softmax predictions and logits\n self.prediction, self.logits = \n \n # Loss and optimizer (with gradient clipping)\n self.loss = \n self.optimizer = ", "Hyperparameters\nHere are the hyperparameters for the network.\n\nbatch_size - Number of sequences running through the network in one pass.\nnum_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.\nlstm_size - The number of units in the hidden layers.\nnum_layers - Number of hidden LSTM layers to use\nlearning_rate - Learning rate for training\nkeep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this.\n\nHere's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.\n\nTips and Tricks\nMonitoring Validation Loss vs. 
Training Loss\nIf you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:\n\nIf your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.\nIf your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)\n\nApproximate number of parameters\nThe two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:\n\nThe number of parameters in your model. This is printed when you start training.\nThe size of your dataset. 1MB file is approximately 1 million characters.\n\nThese two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:\n\nI have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.\nI have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.\n\nBest models strategy\nThe winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.\nIt is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.\nBy the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.", "batch_size = 10 # Sequences per batch\nnum_steps = 50 # Number of sequence steps per batch\nlstm_size = 128 # Size of hidden layers in LSTMs\nnum_layers = 2 # Number of LSTM layers\nlearning_rate = 0.01 # Learning rate\nkeep_prob = 0.5 # Dropout keep probability", "Time for training\nThis is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.\nHere I'm saving checkpoints with the format\ni{iteration number}_l{# hidden layer units}.ckpt\n\nExercise: Set the hyperparameters above to train the network. 
Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.", "epochs = 20\n# Save every N iterations\nsave_every_n = 200\n\nmodel = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,\n lstm_size=lstm_size, num_layers=num_layers, \n learning_rate=learning_rate)\n\nsaver = tf.train.Saver(max_to_keep=100)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/______.ckpt')\n counter = 0\n for e in range(epochs):\n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for x, y in get_batches(encoded, batch_size, num_steps):\n counter += 1\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: keep_prob,\n model.initial_state: new_state}\n batch_loss, new_state, _ = sess.run([model.loss, \n model.final_state, \n model.optimizer], \n feed_dict=feed)\n \n end = time.time()\n print('Epoch: {}/{}... '.format(e+1, epochs),\n 'Training Step: {}... '.format(counter),\n 'Training loss: {:.4f}... '.format(batch_loss),\n '{:.4f} sec/batch'.format((end-start)))\n \n if (counter % save_every_n == 0):\n saver.save(sess, \"checkpoints/i{}_l{}.ckpt\".format(counter, lstm_size))\n \n saver.save(sess, \"checkpoints/i{}_l{}.ckpt\".format(counter, lstm_size))", "Saved checkpoints\nRead up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables", "tf.train.get_checkpoint_state('checkpoints')", "Sampling\nNow that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. 
To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.", "def pick_top_n(preds, vocab_size, top_n=5):\n p = np.squeeze(preds)\n p[np.argsort(p)[:-top_n]] = 0\n p = p / np.sum(p)\n c = np.random.choice(vocab_size, 1, p=p)[0]\n return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n samples = [c for c in prime]\n model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)\n saver = tf.train.Saver()\n with tf.Session() as sess:\n saver.restore(sess, checkpoint)\n new_state = sess.run(model.initial_state)\n for c in prime:\n x = np.zeros((1, 1))\n x[0,0] = vocab_to_int[c]\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.prediction, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n\n for i in range(n_samples):\n x[0,0] = c\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.prediction, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n \n return ''.join(samples)", "Here, pass in the path to a checkpoint and sample from the network.", "tf.train.latest_checkpoint('checkpoints')\n\ncheckpoint = tf.train.latest_checkpoint('checkpoints')\nsamp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = 'checkpoints/i200_l512.ckpt'\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = 'checkpoints/i600_l512.ckpt'\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = 'checkpoints/i1200_l512.ckpt'\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
vascotenner/holoviews
doc/Tutorials/Columnar_Data.ipynb
bsd-3-clause
[ "In this Tutorial we will explore how to work with columnar data in HoloViews. Columnar data has a fixed list of column headings, with values stored in an arbitrarily long list of rows. Spreadsheets, relational databases, CSV files, and many other typical data sources fit naturally into this format. HoloViews defines an extensible system of interfaces to load, manipulate, and visualize this kind of data, as well as allowing conversion of any of the non-columnar data types into columnar data for analysis or data interchange.\nBy default HoloViews will use one of three storage formats for columnar data:\n\nA pure Python dictionary containing each column.\nA purely NumPy-based format for numeric data.\nPandas DataFrames", "import numpy as np\nimport pandas as pd\nimport holoviews as hv\nfrom IPython.display import HTML\nhv.notebook_extension()", "Simple Dataset\nUsually when working with data we have one or more independent variables, taking the form of categories, labels, discrete sample coordinates, or bins. These variables are what we refer to as key dimensions (or kdims for short) in HoloViews. The observer or dependent variables, on the other hand, are referred to as value dimensions (vdims), and are ordinarily measured or calculated given the independent variables. The simplest useful form of a Dataset object is therefore a column 'x' and a column 'y' corresponding to the key dimensions and value dimensions respectively. An obvious visual representation of this data is a Table:", "xs = range(10)\nys = np.exp(xs)\n\ntable = hv.Table((xs, ys), kdims=['x'], vdims=['y'])\ntable", "However, this data has many more meaningful visual representations, and therefore the first important concept is that Dataset objects are interchangeable as long as their dimensionality allows it, meaning that you can easily create the different objects from the same data (and cast between the objects once created):", "hv.Scatter(table) + hv.Curve(table) + hv.Bars(table)", "Each of these three plots uses the same data, but represents a different assumption about the semantic meaning of that data -- the Scatter plot is appropriate if that data consists of independent samples, the Curve plot is appropriate for samples chosen from an underlying smooth function, and the Bars plot is appropriate for independent categories of data. Since all these plots have the same dimensionality, they can easily be converted to each other, but there is normally only one of these representations that is semantically appropriate for the underlying data. For this particular data, the semantically appropriate choice is Curve, since the y values are samples from the continuous function exp.\nAs a guide to which Elements can be converted to each other, those of the same dimensionality here should be interchangeable, because of the underlying similarity of their columnar representation:\n\n0D: BoxWhisker, Spikes, Distribution*, \n1D: Scatter, Curve, ErrorBars, Spread, Bars, BoxWhisker, Regression*\n2D: Points, HeatMap, Bars, BoxWhisker, Bivariate*\n3D: Scatter3D, Trisurface, VectorField, BoxWhisker, Bars\n\n* - requires Seaborn\nThis categorization is based only on the kdims, which define the space in which the data has been sampled or defined. An Element can also have any number of value dimensions (vdims), which may be mapped onto various attributes of a plot such as the color, size, and orientation of the plotted items. 
For a reference of how to use these various Element types, see the Elements Tutorial.\nData types and Constructors\nAs discussed above, Dataset provide an extensible interface to store and operate on data in different formats. All interfaces support a number of standard constructors.\nStorage formats\nDataset types can be constructed using one of three supported formats, (a) a dictionary of columns, (b) an NxD array with N rows and D columns, or (c) pandas dataframes:", "print(repr(hv.Scatter({'x': xs, 'y': ys}) +\n hv.Scatter(np.column_stack([xs, ys])) +\n hv.Scatter(pd.DataFrame({'x': xs, 'y': ys}))))", "Literals\nIn addition to the main storage formats, Dataset Elements support construction from three Python literal formats: (a) An iterator of y-values, (b) a tuple of columns, and (c) an iterator of row tuples.", "print(repr(hv.Scatter(ys) + hv.Scatter((xs, ys)) + hv.Scatter(zip(xs, ys))))", "For these inputs, the data will need to be copied to a new data structure, having one of the three storage formats above. By default Dataset will try to construct a simple array, falling back to either pandas dataframes (if available) or the dictionary-based format if the data is not purely numeric. Additionally, the interfaces will try to maintain the provided data's type, so numpy arrays and pandas DataFrames will therefore always be parsed by the array and dataframe interfaces first respectively.", "df = pd.DataFrame({'x': xs, 'y': ys, 'z': ys*2})\nprint(type(hv.Scatter(df).data))", "Dataset will attempt to parse the supplied data, falling back to each consecutive interface if the previous could not interpret the data. The default list of fallbacks and simultaneously the list of allowed datatypes is:", "hv.Dataset.datatype", "To select a particular storage format explicitly, supply one or more allowed datatypes:", "print(type(hv.Scatter((xs, ys), datatype=['array']).data))\nprint(type(hv.Scatter((xs, ys), datatype=['dictionary']).data))\nprint(type(hv.Scatter((xs, ys), datatype=['dataframe']).data))", "Sharing Data\nSince the formats with labelled columns do not require any specific order, each Element can effectively become a view into a single set of data. By specifying different key and value dimensions, many Elements can show different values, while sharing the same underlying data source.", "overlay = hv.Scatter(df, kdims='x', vdims='y') * hv.Scatter(df, kdims='x', vdims='z')\noverlay", "We can quickly confirm that the data is actually shared:", "overlay.Scatter.I.data is overlay.Scatter.II.data", "For columnar data, this approach is much more efficient than creating copies of the data for each Element, and allows for some advanced features like linked brushing in the Bokeh backend.\nConverting to raw data\nColumn types make it easy to export the data to the three basic formats: arrays, dataframes, and a dictionary of columns.\nArray", "table.array()", "Pandas DataFrame", "HTML(table.dframe().head().to_html())", "Dataset dictionary", "table.columns()", "Creating tabular data from Elements using the .table and .dframe methods\nIf you have data in some other HoloViews element and would like to use the columnar data features, you can easily tabularize any of the core Element types into a Table Element, using the .table() method. Similarly, the .dframe() method will convert an Element into a pandas DataFrame. 
These methods are very useful if you want to then transform the data into a different Element type, or to perform different types of analysis.\nTabularizing simple Elements\nFor a simple example, we can create a Curve of an exponential function and convert it to a Table with the .table method, with the same result as creating the Table directly from the data as done earlier on this Tutorial:", "xs = np.arange(10)\ncurve = hv.Curve(zip(xs, np.exp(xs)))\ncurve * hv.Scatter(zip(xs, curve)) + curve.table()", "Similarly, we can get a pandas dataframe of the Curve using curve.dframe(). Here we wrap that call as raw HTML to allow automated testing of this notebook, but just calling curve.dframe() would give the same result visually:", "HTML(curve.dframe().to_html())", "Although 2D image-like objects are not inherently well suited to a flat columnar representation, serializing them by converting to tabular data is a good way to reveal the differences between Image and Raster elements. Rasters are a very simple type of element, using array-like integer indexing of rows and columns from their top-left corner as in computer graphics applications. Conversely, Image elements are a higher-level abstraction that provides a general-purpose continuous Cartesian coordinate system, with x and y increasing to the right and upwards as in mathematical applications, and each point interpreted as a sample representing the pixel in which it is located (and thus centered within that pixel). Given the same data, the .table() representation will show how the data is being interpreted (and accessed) differently in the two cases (as explained in detail in the Continuous Coordinates Tutorial):", "%%opts Points (s=200) [size_index=None]\nextents = (-1.6,-2.7,2.0,3)\nnp.random.seed(42)\nmat = np.random.rand(3, 3)\n\nimg = hv.Image(mat, bounds=extents)\nraster = hv.Raster(mat)\n\nimg * hv.Points(img) + img.table() + \\\nraster * hv.Points(raster) + raster.table()", "Tabularizing space containers\nEven deeply nested objects can be deconstructed in this way, serializing them to make it easier to get your raw data out of a collection of specialized Element types. Let's say we want to make multiple observations of a noisy signal. We can collect the data into a HoloMap to visualize it and then call .table() to get a columnar object where we can perform operations or transform it to other Element types. Deconstructing nested data in this way only works if the data is homogenous. In practical terms, the requirement is that your data structure contains Elements (of any types) in these Container types: NdLayout, GridSpace, HoloMap, and NdOverlay, with all dimensions consistent throughout (so that they can all fit into the same set of columns).\nLet's now go back to the Image example. We will now collect a number of observations of some noisy data into a HoloMap and display it:", "obs_hmap = hv.HoloMap({i: hv.Image(np.random.randn(10, 10), bounds=(0,0,3,3))\n for i in range(3)}, key_dimensions=['Observation'])\nobs_hmap", "Now we can serialize this data just as before, where this time we get a four-column (4D) table. The key dimensions of both the HoloMap and the Images, as well as the z-values of each Image, are all merged into a single table. 
We can visualize the samples we have collected by converting it to a Scatter3D object.", "%%opts Layout [fig_size=150] Scatter3D [color_index=3 size_index=None] (cmap='hot' edgecolor='k' s=50)\nobs_hmap.table().to.scatter3d() + obs_hmap.table()", "Here the z dimension is shown by color, as in the original images, and the other three dimensions determine where the datapoint is shown in 3D. This way of deconstructing will work for any data structure that satisfies the conditions described above, no matter how nested. If we vary the amount of noise while continuing to performing multiple observations, we can create an NdLayout of HoloMaps, one for each level of noise, and animated by the observation number.", "from itertools import product\nextents = (0,0,3,3)\nerror_hmap = hv.HoloMap({(i, j): hv.Image(j*np.random.randn(3, 3), bounds=extents)\n for i, j in product(range(3), np.linspace(0, 1, 3))},\n key_dimensions=['Observation', 'noise'])\nnoise_layout = error_hmap.layout('noise')\nnoise_layout", "And again, we can easily convert the object to a Table:", "%%opts Table [fig_size=150]\nnoise_layout.table()", "Applying operations to the data\nSorting by columns\nOnce data is in columnar form, it is simple to apply a variety of operations. For instance, Dataset can be sorted by their dimensions using the .sort() method. By default, this method will sort by the key dimensions, but any other dimension(s) can be supplied to specify sorting along any other dimensions:", "bars = hv.Bars((['C', 'A', 'B', 'D'], [2, 7, 3, 4]))\nbars + bars.sort() + bars.sort(['y'])", "Working with categorical or grouped data\nData is often grouped in various ways, and the Dataset interface provides various means to easily compare between groups and apply statistical aggregates. We'll start by generating some synthetic data with two groups along the x-axis and 4 groups along the y axis.", "n = np.arange(1000)\nxs = np.repeat(range(2), 500)\nys = n%4\nzs = np.random.randn(1000)\ntable = hv.Table((xs, ys, zs), kdims=['x', 'y'], vdims=['z'])\ntable", "Since there are repeat observations of the same x- and y-values, we have to reduce the data before we display it or else use a datatype that supports plotting distributions in this way. The BoxWhisker type allows doing exactly that:", "%%opts BoxWhisker [aspect=2 fig_size=200 bgcolor='w']\nhv.BoxWhisker(table)", "Aggregating/Reducing dimensions\nMost types require the data to be non-duplicated before being displayed. For this purpose, HoloViews makes it easy to aggregate and reduce the data. These two operations are simple inverses of each other--aggregate computes a statistic for each group in the supplied dimensions, while reduce combines all the groups except the supplied dimensions. Supplying only a function and no dimensions will simply aggregate or reduce all available key dimensions.", "%%opts Bars [show_legend=False] {+axiswise}\nhv.Bars(table).aggregate(function=np.mean) + hv.Bars(table).reduce(x=np.mean)", "(A) aggregates over both the x and y dimension, computing the mean for each x/y group, while (B) reduces the x dimension leaving just the mean for each group along y.\nCollapsing multiple Dataset Elements\nWhen multiple observations are broken out into a HoloMap they can easily be combined using the collapse method. Here we create a number of Curves with increasingly larger y-values. By collapsing them with a function and a spreadfn we can compute the mean curve with a confidence interval. 
We then simply cast the collapsed Curve to a Spread and Curve Element to visualize them.", "hmap = hv.HoloMap({i: hv.Curve(np.arange(10)*i) for i in range(10)})\ncollapsed = hmap.collapse(function=np.mean, spreadfn=np.std)\nhv.Spread(collapsed) * hv.Curve(collapsed) + collapsed.table()", "Working with complex data\nIn the last section we only scratched the surface of what the Dataset interface can do. When it really comes into its own is when working with high-dimensional datasets. As an illustration, we'll load a dataset of some macro-economic indicators for OECD countries from 1964-1990, cached on the HoloViews website.", "macro_df = pd.read_csv('http://assets.holoviews.org/macro.csv', '\\t')\n\ndimensions = {'unem': 'Unemployment',\n 'capmob': 'Capital Mobility',\n 'gdp': 'GDP Growth', \n 'trade': 'Trade',\n 'year': 'Year', \n 'country': 'Country'}\n\nmacro_df = macro_df.rename(columns=dimensions)", "We'll also take this opportunity to set default options for all the following plots.", "%output dpi=100\noptions = hv.Store.options()\nopts = hv.Options('plot', aspect=2, fig_size=250, show_frame=False, show_grid=True, legend_position='right')\noptions.NdOverlay = opts\noptions.Overlay = opts", "Loading the data\nAs we saw above, we can supply a dataframe to any Dataset type. When dealing with so many dimensions it would be cumbersome to supply all the dimensions explicitly, but luckily Dataset can easily infer the dimensions from the dataframe itself. We simply supply the kdims, and it will infer that all other numeric dimensions should be treated as value dimensions (vdims).", "macro = hv.Table(macro_df, kdims=['Year', 'Country'])", "To get an overview of the data we'll quickly sort it and then view the data for one year.", "%%opts Table [aspect=1.5 fig_size=300]\nmacro = macro.sort()\nmacro[1988]", "Most of the examples above focus on converting a Table to simple Element types, but HoloViews also provides powerful container objects to explore high-dimensional data, such as HoloMap, NdOverlay, NdLayout, and GridSpace. HoloMaps work as a useful interchange format from which you can conveniently convert to the other container types using its .overlay(), .layout(), and .grid() methods. This way we can easily create an overlay of GDP Growth curves by year for each country. Here Year is a key dimension and GDP Growth a value dimension. We are then left with the Country dimension, which we can overlay using the .overlay() method.", "%%opts Curve (color=Palette('Set3'))\ngdp_curves = macro.to.curve('Year', 'GDP Growth')\ngdp_curves.overlay('Country')", "Now that we've extracted the gdp_curves, we can apply some operations to them. As in the simpler example above we will collapse the HoloMap of Curves using a number of functions to visualize the distribution of GDP Growth rates over time. First we find the mean curve with np.std as the spreadfn and cast the result to a Spread type, then we compute the min, mean and max curve in the same way and put them all inside an Overlay.", "%%opts Overlay [bgcolor='w' legend_position='top_right'] Curve (color='k' linewidth=1) Spread (facecolor='gray' alpha=0.2)\nhv.Spread(gdp_curves.collapse('Country', np.mean, np.std), label='std') *\\\nhv.Overlay([gdp_curves.collapse('Country', fn).relabel(name)(style=dict(linestyle=ls))\n for name, fn, ls in [('max', np.max, '--'), ('mean', np.mean, '-'), ('min', np.min, '--')]])", "Many HoloViews Element types support multiple kdims, including HeatMap, Points, Scatter, Scatter3D, and Bars. 
Bars in particular allows you to lay out your data in groups, categories and stacks. By supplying the index of that dimension as a plotting option you can choose to lay out your data as groups of bars, categories in each group, and stacks. Here we choose to lay out the trade surplus of each country with groups for each year, no categories, and stacked by country. Finally, we choose to color the Bars for each item in the stack.", "%opts Bars [bgcolor='w' aspect=3 figure_size=450 show_frame=False]\n\n%%opts Bars [category_index=2 stack_index=0 group_index=1 legend_position='top' legend_cols=7 color_by=['stack']] (color=Palette('Dark2'))\nmacro.to.bars(['Country', 'Year'], 'Trade', [])", "This plot contains a lot of data, and so it's probably a good idea to focus on specific aspects of it, telling a simpler story about them. For instance, using the .select method we can then customize the palettes (e.g. to use consistent colors per country across multiple analyses).\nPalettes can customized by selecting only a subrange of the underlying cmap to draw the colors from. The Palette draws samples from the colormap using the supplied sample_fn, which by default just draws linear samples but may be overriden with any function that draws samples in the supplied ranges. By slicing the Set1 colormap we draw colors only from the upper half of the palette and then reverse it.", "%%opts Bars [padding=0.02 color_by=['group']] (alpha=0.6, color=Palette('Set1', reverse=True)[0.:.2])\ncountries = {'Belgium', 'Netherlands', 'Sweden', 'Norway'}\nmacro.to.bars(['Country', 'Year'], 'Unemployment').select(Year=(1978, 1985), Country=countries)", "Many HoloViews Elements support multiple key and value dimensions. A HeatMap is indexed by two kdims, so we can visualize each of the economic indicators by year and country in a Layout. Layouts are useful for heterogeneous data you want to lay out next to each other.\nBefore we display the Layout let's apply some styling; we'll suppress the value labels applied to a HeatMap by default and substitute it for a colorbar. Additionally we up the number of xticks that are drawn and rotate them by 90 degrees to avoid overlapping. Flipping the y-axis ensures that the countries appear in alphabetical order. Finally we reduce some of the margins of the Layout and increase the size.", "%opts HeatMap [show_values=False xticks=40 xrotation=90 aspect=1.2 invert_yaxis=True colorbar=True]\n%opts Layout [figure_size=120 aspect_weight=0.5 hspace=0.8 vspace=0]\n\nhv.Layout([macro.to.heatmap(['Year', 'Country'], value)\n for value in macro.data.columns[2:]]).cols(2)", "Another way of combining heterogeneous data dimensions is to map them to a multi-dimensional plot type. Scatter Elements, for example, support multiple vdims, which may be mapped onto the color and size of the drawn points in addition to the y-axis position. \nAs for the Curves above we supply 'Year' as the sole key dimension and rely on the Table to automatically convert the Country to a map dimension, which we'll overlay. However this time we select both GDP Growth and Unemployment, to be plotted as points. 
To get a sensible chart, we adjust the scaling_factor for the points to get a reasonable distribution in sizes and apply a categorical Palette so we can distinguish each country.", "%%opts Scatter [scaling_method='width' scaling_factor=2] (color=Palette('Set3') edgecolors='k')\ngdp_unem_scatter = macro.to.scatter('Year', ['GDP Growth', 'Unemployment'])\ngdp_unem_scatter.overlay('Country')", "In this way we can plot any dimension against any other dimension, very easily allowing us to iterate through different ways of revealing relationships in the dataset.", "%%opts NdOverlay [legend_cols=2] Scatter [size_index=1] (color=Palette('Blues'))\nmacro.to.scatter('GDP Growth', 'Unemployment', ['Year']).overlay()", "This view, for example, immediately highlights the high unemployment rates of the 1980s.\nSince all HoloViews Elements are composable, we can generate complex figures just by applying the * operator. We'll simply reuse the GDP curves we generated earlier, combine them with the scatter points (which indicate the unemployment rate by size) and annotate the data with some descriptions of what happened economically in these years.", "%%opts Curve (color='k') Scatter [color_index=2 size_index=2 scaling_factor=1.4] (cmap='Blues' edgecolors='k')\n\nmacro_overlay = gdp_curves * gdp_unem_scatter\nannotations = hv.Arrow(1973, 8, 'Oil Crisis', 'v') * hv.Arrow(1975, 6, 'Stagflation', 'v') *\\\nhv.Arrow(1979, 8, 'Energy Crisis', 'v') * hv.Arrow(1981.9, 5, 'Early Eighties\\n Recession', 'v')\nmacro_overlay * annotations", "Since we didn't map the country to some other container type, we get a widget allowing us to view the plot separately for each country, reducing the forest of curves we encountered before to manageable chunks. \nWhile looking at the plots individually like this allows us to study trends for each country, we may want to lay out a subset of the countries side by side, e.g. for non-interactive publications. We can easily achieve this by selecting the countries we want to view and and then applying the .layout method. We'll also want to restore the square aspect ratio so the plots compose nicely.", "%opts Overlay [aspect=1]\n\n%%opts NdLayout [figure_size=100] Scatter [color_index=2] (cmap='Reds')\ncountries = {'United States', 'Canada', 'United Kingdom'}\n(gdp_curves * gdp_unem_scatter).select(Country=countries).layout('Country')", "Finally, let's combine some plots for each country into a Layout, giving us a quick overview of each economic indicator for each country:", "%%opts Layout [fig_size=100] Scatter [color_index=2] (cmap='Reds')\n(macro_overlay.relabel('GDP Growth', depth=1) +\\\nmacro.to.curve('Year', 'Unemployment', ['Country'], group='Unemployment',) +\\\nmacro.to.curve('Year', 'Trade', ['Country'], group='Trade') +\\\nmacro.to.scatter('GDP Growth', 'Unemployment', ['Country'])).cols(2)", "As you can see, columnar data makes a huge range of analyses and visualizations quite straightforward! You can use these tools with many of the Elements and Containers available in HoloViews, to easily express what you want to visualize." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
alvaroing12/CADL
session-3/lecture-3.ipynb
apache-2.0
[ "Session 3: Unsupervised and Supervised Learning\n<p class=\"lead\">\nParag K. Mital<br />\n<a href=\"https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info\">Creative Applications of Deep Learning w/ Tensorflow</a><br />\n<a href=\"https://www.kadenze.com/partners/kadenze-academy\">Kadenze Academy</a><br />\n<a href=\"https://twitter.com/hashtag/CADL\">#CADL</a>\n</p>\n\n<a name=\"learning-goals\"></a>\nLearning Goals\n\nBuild an autoencoder w/ linear and convolutional layers\nUnderstand how one hot encodings work\nBuild a classification network w/ linear and convolutional layers\n\n<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->\n\n\nIntroduction\nUnsupervised vs. Supervised Learning\nAutoencoders\nMNIST\nFully Connected Model\nConvolutional Autoencoder\nDenoising Autoencoder\nVariational Autoencoders\n\n\nPredicting Image Labels\nOne-Hot Encoding\nUsing Regression for Classification\nFully Connected Network\nConvolutional Networks\n\n\nSaving/Loading Models\nCheckpoint\nProtobuf\n\n\nWrap Up\nReading\n\n<!-- /MarkdownTOC -->\n\n<a name=\"introduction\"></a>\nIntroduction\nIn the last session we created our first neural network.\nWe saw that in order to create a neural network, we needed to define a cost function which would allow gradient descent to optimize all the parameters in our network <TODO: Insert animation of gradient descent from previous session>. We also saw how neural networks become much more expressive by introducing series of linearities followed by non-linearities, or activation functions. <TODO: Insert graphic of activation functions from previous session>.\nWe then explored a fun application of neural networks using regression to learn to paint color values given x, y positions. This allowed us to build up a sort of painterly like version of an image.\nIn this session, we'll see how to use some simple deep nets with about 3 or 4 layers capable of performing unsupervised and supervised learning, and I'll explain those terms in a bit. The components we learn here will let us explore data in some very interesting ways.\n<a name=\"unsupervised-vs-supervised-learning\"></a>\nUnsupervised vs. Supervised Learning\nMachine learning research in deep networks performs one of two types of learning. You either have a lot of data and you want the computer to reason about it, maybe to encode the data using less data, and just explore what patterns there might be. That's useful for clustering data, reducing the dimensionality of the data, or even for generating new data. That's generally known as unsupervised learning. In the supervised case, you actually know what you want out of your data. You have something like a label or a class that is paired with every single piece of data. In this first half of this session, we'll see how unsupervised learning works using something called an autoencoder and how it can be extended using convolution.. Then we'll get into supervised learning and show how we can build networks for performing regression and classification. By the end of this session, hopefully all of that will make a little more sense. Don't worry if it doesn't yet! Really the best way to learn is to put this stuff into practice in the homeworks.\n<a name=\"autoencoders\"></a>\nAutoencoders\n<TODO: Graphic of autoencoder network diagram>\nAn autoencoder is a type of neural network that learns to encode its inputs, often using much less data. 
It does so in a way that it can still output the original input with just the encoded values. For it to learn, it does not require \"labels\" as its output. Instead, it tries to output whatever it was given as input. So in goes an image, and out should also go the same image. But it has to be able to retain all the details of the image, even after possibly reducing the information down to just a few numbers.\nWe'll also explore how this method can be extended and used to cluster or organize a dataset, or to explore latent dimensions of a dataset that explain some interesting ideas. For instance, we'll see how with handwritten numbers, we will be able to see how each number can be encoded in the autoencoder without ever telling it which number is which.\n<TODO: place teaser of MNIST video learning>\nBut before we get there, we're going to need to develop an understanding of a few more concepts.\nFirst, imagine a network that takes as input an image. The network can be composed of either matrix multiplications or convolutions to any number of filters or dimensions. At the end of any processing, the network has to be able to recompose the original image it was input.\nIn the last session, we saw how to build a network capable of taking 2 inputs representing the row and column of an image, and predicting 3 outputs, the red, green, and blue colors. Instead if having 2 inputs, we'll now have an entire image as an input, the brightness of every pixel in our image. And as output, we're going to have the same thing, the entire image being output.\n<a name=\"mnist\"></a>\nMNIST\nLet's first get some standard imports:", "# imports\n%matplotlib inline\n# %pylab osx\nimport tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nimport matplotlib.cm as cmx\n# Some additional libraries which we'll use just\n# to produce some visualizations of our training\nfrom libs.utils import montage\nfrom libs import gif\nimport IPython.display as ipyd\nplt.style.use('ggplot')\n\n# Bit of formatting because I don't like the default inline code style:\nfrom IPython.core.display import HTML\nHTML(\"\"\"<style> .rendered_html code { \n padding: 2px 4px;\n color: #c7254e;\n background-color: #f9f2f4;\n border-radius: 4px;\n} </style>\"\"\")", "Then we're going to try this with the MNIST dataset, which I've included a simple interface for in the libs module.", "from libs.datasets import MNIST\nds = MNIST()", "Let's take a look at what this returns:", "# ds.<tab>", "So we can see that there are a few interesting accessors. ... we're not going to worry about the labels until a bit later when we talk about a different type of model which can go from the input image to predicting which label the image is. But for now, we're going to focus on trying to encode the image and be able to reconstruct the image from our encoding. let's take a look at the images which are stored in the variable X. Remember, in this course, we'll always use the variable X to denote the input to a network. and we'll use the variable Y to denote its output.", "print(ds.X.shape)", "So each image has 784 features, and there are 70k of them. If we want to draw the image, we're going to have to reshape it to a square. 28 x 28 is 784. 
So we're just going to reshape it to a square so that we can see all the pixels arranged in rows and columns instead of one giant vector.", "plt.imshow(ds.X[0].reshape((28, 28)))\n\n# Let's get the first 1000 images of the dataset and reshape them\nimgs = ds.X[:1000].reshape((-1, 28, 28))\n\n# Then create a montage and draw the montage\nplt.imshow(montage(imgs), cmap='gray')", "Let's take a look at the mean of the dataset:", "# Take the mean across all images\nmean_img = np.mean(ds.X, axis=0)\n\n# Then plot the mean image.\nplt.figure()\nplt.imshow(mean_img.reshape((28, 28)), cmap='gray')", "And the standard deviation", "# Take the std across all images\nstd_img = np.std(ds.X, axis=0)\n\n# Then plot the std image.\nplt.figure()\nplt.imshow(std_img.reshape((28, 28)))", "So recall from session 1 that these two images are really saying what's more or less constant across every image, and what's changing. We're going to try and use an autoencoder to try to encode everything that could possibly change in the image.\n<a name=\"fully-connected-model\"></a>\nFully Connected Model\nTo try and encode our dataset, we are going to build a series of fully connected layers that get progressively smaller. So in neural net speak, every pixel is going to become its own input neuron. And from the original 784 neurons, we're going to slowly reduce that information down to smaller and smaller numbers. It's often standard practice to use other powers of 2 or 10. I'll create a list of the number of dimensions we'll use for each new layer.", "dimensions = [512, 256, 128, 64]", "So we're going to reduce our 784 dimensions down to 512 by multiplying them by a 784 x 512 dimensional matrix. Then we'll do the same thing again using a 512 x 256 dimensional matrix, to reduce our dimensions down to 256 dimensions, and then again to 128 dimensions, then finally to 64. To get back to the size of the image, we're just going to do the reverse. But we're going to use the exact same matrices. We do that by taking the transpose of the matrix, which reshapes the matrix so that the rows become columns, and vice-versa. So our last matrix which was 128 rows x 64 columns, when transposed, becomes 64 rows x 128 columns.\nSo by sharing the weights in the network, we're only really learning half of the network, and those 4 matrices are going to make up the bulk of our model. We just have to find out what they are using gradient descent.\nWe're first going to create placeholders for our tensorflow graph. We're going to set the first dimension to None. This is something special for placeholders which tells tensorflow \"let this dimension be any possible value\". 1, 5, 100, 1000, it doesn't matter. We're going to pass our entire dataset in minibatches. So we'll send 100 images at a time. But we'd also like to be able to send in only 1 image and see what the prediction of the network is. 
That's why we let this dimension be flexible in the graph.", "# So the number of features is the second dimension of our inputs matrix, 784\nn_features = ds.X.shape[1]\n\n# And we'll create a placeholder in the tensorflow graph that will be able to get any number of n_feature inputs.\nX = tf.placeholder(tf.float32, [None, n_features])", "Now we're going to create a network which will perform a series of multiplications on X, followed by adding a bias, and then wrapping all of this in a non-linearity:", "# let's first copy our X placeholder to the name current_input\ncurrent_input = X\nn_input = n_features\n\n# We're going to keep every matrix we create so let's create a list to hold them all\nWs = []\n\n# We'll create a for loop to create each layer:\nfor layer_i, n_output in enumerate(dimensions):\n\n # just like in the last session,\n # we'll use a variable scope to help encapsulate our variables\n # This will simply prefix all the variables made in this scope\n # with the name we give it.\n with tf.variable_scope(\"encoder/layer/{}\".format(layer_i)):\n\n # Create a weight matrix which will increasingly reduce\n # down the amount of information in the input by performing\n # a matrix multiplication\n W = tf.get_variable(\n name='W',\n shape=[n_input, n_output],\n initializer=tf.random_normal_initializer(mean=0.0, stddev=0.02))\n\n # Now we'll multiply our input by our newly created W matrix\n # and add the bias\n h = tf.matmul(current_input, W)\n\n # And then use a relu activation function on its output\n current_input = tf.nn.relu(h)\n\n # Finally we'll store the weight matrix so we can build the decoder.\n Ws.append(W)\n\n # We'll also replace n_input with the current n_output, so that on the\n # next iteration, our new number inputs will be correct.\n n_input = n_output", "So now we've created a series of multiplications in our graph which take us from our input of batch size times number of features which started as None x 784, and then we're multiplying it by a series of matrices which will change the size down to None x 64.", "print(current_input.get_shape())", "In order to get back to the original dimensions of the image, we're going to reverse everything we just did. Let's see how we do that:", "# We'll first reverse the order of our weight matrices\nWs = Ws[::-1]\n\n# then reverse the order of our dimensions\n# appending the last layers number of inputs.\ndimensions = dimensions[::-1][1:] + [ds.X.shape[1]]\nprint(dimensions)\n\nfor layer_i, n_output in enumerate(dimensions):\n # we'll use a variable scope again to help encapsulate our variables\n # This will simply prefix all the variables made in this scope\n # with the name we give it.\n with tf.variable_scope(\"decoder/layer/{}\".format(layer_i)):\n\n # Now we'll grab the weight matrix we created before and transpose it\n # So a 3072 x 784 matrix would become 784 x 3072\n # or a 256 x 64 matrix, would become 64 x 256\n W = tf.transpose(Ws[layer_i])\n\n # Now we'll multiply our input by our transposed W matrix\n h = tf.matmul(current_input, W)\n\n # And then use a relu activation function on its output\n current_input = tf.nn.relu(h)\n\n # We'll also replace n_input with the current n_output, so that on the\n # next iteration, our new number inputs will be correct.\n n_input = n_output", "After this, our current_input will become the output of the network:", "Y = current_input", "Now that we have the output of the network, we just need to define a training signal to train the network with. 
To do that, we create a cost function which will measure how well the network is doing:", "# We'll first measure the average difference across every pixel\ncost = tf.reduce_mean(tf.squared_difference(X, Y), 1)\nprint(cost.get_shape())", "And then take the mean again across batches:", "cost = tf.reduce_mean(cost)", "We can now train our network just like we did in the last session. We'll need to create an optimizer which takes a parameter learning_rate. And we tell it that we want to minimize our cost, which is measuring the difference between the output of the network and the input.", "learning_rate = 0.001\noptimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)", "Now we'll create a session to manage the training in minibatches:", "# %%\n# We create a session to use the graph\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())", "Now we'll train:", "# Some parameters for training\nbatch_size = 100\nn_epochs = 5\n\n# We'll try to reconstruct the same first 100 images and show how\n# The network does over the course of training.\nexamples = ds.X[:100]\n\n# We'll store the reconstructions in a list\nimgs = []\nfig, ax = plt.subplots(1, 1)\nfor epoch_i in range(n_epochs):\n for batch_X, _ in ds.train.next_batch():\n sess.run(optimizer, feed_dict={X: batch_X - mean_img})\n recon = sess.run(Y, feed_dict={X: examples - mean_img})\n recon = np.clip((recon + mean_img).reshape((-1, 28, 28)), 0, 255)\n img_i = montage(recon).astype(np.uint8)\n imgs.append(img_i)\n ax.imshow(img_i, cmap='gray')\n fig.canvas.draw()\n print(epoch_i, sess.run(cost, feed_dict={X: batch_X - mean_img}))\ngif.build_gif(imgs, saveto='ae.gif', cmap='gray')\n\nipyd.Image(url='ae.gif?{}'.format(np.random.rand()),\n height=500, width=500)", "<a name=\"convolutional-autoencoder\"></a>\nConvolutional Autoencoder\nTo get even better encodings, we can also try building a convolutional network. Why would a convolutional network perform any different to a fully connected one? Let's see what we were doing in the fully connected network. For every pixel in our input, we have a set of weights corresponding to every output neuron. Those weights are unique to each pixel. Each pixel gets its own row in the weight matrix. That really doesn't make a lot of sense, since we would guess that nearby pixels are probably not going to be so different. And we're not really encoding what's happening around that pixel, just what that one pixel is doing.\nIn a convolutional model, we're explicitly modeling what happens around a pixel. And we're using the exact same convolutions no matter where in the image we are. But we're going to use a lot of different convolutions.\nRecall in session 1 we created a Gaussian and Gabor kernel and used this to convolve an image to either blur it or to accentuate edges. Armed with what you know now, you could try to train a network to learn the parameters that map an untouched image to a blurred or edge filtered version of it. What you should find is the kernel will look sort of what we built by hand. I'll leave that as an excercise for you.\nBut in fact, that's too easy really. That's just 1 filter you would have to learn. 
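If you do want to try that single-filter exercise, a minimal sketch might look something like the following (this is not part of the original lecture: the blurred target and the placeholder names here are assumptions, and you would have to create the blurred images yourself, e.g. with the Gaussian kernel from session 1):\n```python\n# Hypothetical sketch: learn a single 5 x 5 kernel that maps an image to its blurred version.\nX_img = tf.placeholder(tf.float32, [None, 28, 28, 1])\nY_blur = tf.placeholder(tf.float32, [None, 28, 28, 1])\nW_kernel = tf.get_variable('W_kernel', shape=[5, 5, 1, 1],\n    initializer=tf.random_normal_initializer(mean=0.0, stddev=0.02))\nY_pred = tf.nn.conv2d(X_img, W_kernel, strides=[1, 1, 1, 1], padding='SAME')\ncost_blur = tf.reduce_mean(tf.squared_difference(Y_pred, Y_blur))\ntrain_blur = tf.train.AdamOptimizer(0.001).minimize(cost_blur)\n# After training on (image, blurred image) pairs, W_kernel should end up looking\n# roughly like the Gaussian kernel we built by hand in session 1.\n```\n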
We're going to see how we can use many convolutional filters, way more than 1, and how it will help us to encode the MNIST dataset.\nTo begin we'll need to reset the current graph and start over.", "from tensorflow.python.framework.ops import reset_default_graph\nreset_default_graph()\n\n# And we'll create a placeholder in the tensorflow graph that will be able to get any number of n_feature inputs.\nX = tf.placeholder(tf.float32, [None, n_features])", "Since X is currently [batch, height*width], we need to reshape it to a\n4-D tensor to use it in a convolutional graph. Remember back to the first session that in order to perform convolution, we have to use 4-dimensional tensors describing the:\nN x H x W x C\nWe'll reshape our input placeholder by telling the shape parameter to be these new dimensions. However, since our batch dimension is None, we cannot reshape without using the special value -1, which says that the size of that dimension should be computed so that the total size remains constant. Since we haven't defined the batch dimension's shape yet, we use -1 to denote that this dimension should not change size.", "X_tensor = tf.reshape(X, [-1, 28, 28, 1])", "We'll now set up the first convolutional layer. Remember from Session 2 that the weight matrix for convolution should be\n[height x width x input_channels x output_channels]\nThink a moment about how this is different to the fully connected network. In the fully connected network, every pixel was being multiplied by its own weight to every other neuron. With a convolutional network, we use the extra dimensions to allow the same set of filters to be applied everywhere across an image. This is also known in the literature as weight sharing, since we're sharing the weights no matter where in the input we are. That's unlike the fully connected approach, which has unique weights for every pixel. What's more is after we've performed the convolution, we've retained the spatial organization of the input. We still have dimensions of height and width. That's again unlike the fully connected network which effectively shuffles or takes into account information from everywhere, not at all caring about where anything is. That can be useful or not depending on what we're trying to achieve. Often, it is something we might want to do after a series of convolutions to encode translation invariance. Don't worry about that for now. With MNIST especially we won't need to do that since all of the numbers are in the same position.\nNow with our tensor ready, we're going to do what we've just done with the fully connected autoencoder. Except, instead of performing matrix multiplications, we're going to create convolution operations. To do that, we'll need to decide on a few parameters including the filter size, how many convolution filters we want, and how many layers we want. 
I'll start with a fairly small network, and let you scale this up in your own time.", "n_filters = [16, 16, 16]\nfilter_sizes = [4, 4, 4]", "Now we'll create a loop to create every layer's convolution, storing the convolution operations we create so that we can do the reverse.", "current_input = X_tensor\n\n# notice instead of having 784 as our input features, we're going to have\n# just 1, corresponding to the number of channels in the image.\n# We're going to use convolution to find 16 filters, or 16 channels of information in each spatial location we perform convolution at.\nn_input = 1\n\n# We're going to keep every matrix we create so let's create a list to hold them all\nWs = []\nshapes = []\n\n# We'll create a for loop to create each layer:\nfor layer_i, n_output in enumerate(n_filters):\n # just like in the last session,\n # we'll use a variable scope to help encapsulate our variables\n # This will simply prefix all the variables made in this scope\n # with the name we give it.\n with tf.variable_scope(\"encoder/layer/{}\".format(layer_i)):\n # we'll keep track of the shapes of each layer\n # As we'll need these for the decoder\n shapes.append(current_input.get_shape().as_list())\n\n # Create a weight matrix which will increasingly reduce\n # down the amount of information in the input by performing\n # a matrix multiplication\n W = tf.get_variable(\n name='W',\n shape=[\n filter_sizes[layer_i],\n filter_sizes[layer_i],\n n_input,\n n_output],\n initializer=tf.random_normal_initializer(mean=0.0, stddev=0.02))\n\n # Now we'll convolve our input by our newly created W matrix\n h = tf.nn.conv2d(current_input, W,\n strides=[1, 2, 2, 1], padding='SAME')\n\n # And then use a relu activation function on its output\n current_input = tf.nn.relu(h)\n\n # Finally we'll store the weight matrix so we can build the decoder.\n Ws.append(W)\n\n # We'll also replace n_input with the current n_output, so that on the\n # next iteration, our new number inputs will be correct.\n n_input = n_output", "Now with our convolutional encoder built and the encoding weights stored, we'll reverse the whole process to decode everything back out to the original image.", "# We'll first reverse the order of our weight matrices\nWs.reverse()\n# and the shapes of each layer\nshapes.reverse()\n# and the number of filters (which is the same but could have been different)\nn_filters.reverse()\n# and append the last filter size which is our input image's number of channels\nn_filters = n_filters[1:] + [1]\n\nprint(n_filters, filter_sizes, shapes)\n\n# and then loop through our convolution filters and get back our input image\n# we'll enumerate the shapes list to get us there\nfor layer_i, shape in enumerate(shapes):\n # we'll use a variable scope to help encapsulate our variables\n # This will simply prefix all the variables made in this scope\n # with the name we give it.\n with tf.variable_scope(\"decoder/layer/{}\".format(layer_i)):\n\n # Create a weight matrix which will increasingly reduce\n # down the amount of information in the input by performing\n # a matrix multiplication\n W = Ws[layer_i]\n\n # Now we'll convolve by the transpose of our previous convolution tensor\n h = tf.nn.conv2d_transpose(current_input, W,\n tf.stack([tf.shape(X)[0], shape[1], shape[2], shape[3]]),\n strides=[1, 2, 2, 1], padding='SAME')\n\n # And then use a relu activation function on its output\n current_input = tf.nn.relu(h)", "Now we have the reconstruction through the network:", "Y = current_input\nY = tf.reshape(Y, [-1, 
n_features])", "We can measure the cost and train exactly like before with the fully connected network:", "cost = tf.reduce_mean(tf.reduce_mean(tf.squared_difference(X, Y), 1))\nlearning_rate = 0.001\n\n# pass learning rate and cost to optimize\noptimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)\n\n# Session to manage vars/train\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\n\n# Some parameters for training\nbatch_size = 100\nn_epochs = 5\n\n# We'll try to reconstruct the same first 100 images and show how\n# The network does over the course of training.\nexamples = ds.X[:100]\n\n# We'll store the reconstructions in a list\nimgs = []\nfig, ax = plt.subplots(1, 1)\nfor epoch_i in range(n_epochs):\n for batch_X, _ in ds.train.next_batch():\n sess.run(optimizer, feed_dict={X: batch_X - mean_img})\n recon = sess.run(Y, feed_dict={X: examples - mean_img})\n recon = np.clip((recon + mean_img).reshape((-1, 28, 28)), 0, 255)\n img_i = montage(recon).astype(np.uint8)\n imgs.append(img_i)\n ax.imshow(img_i, cmap='gray')\n fig.canvas.draw()\n print(epoch_i, sess.run(cost, feed_dict={X: batch_X - mean_img}))\ngif.build_gif(imgs, saveto='conv-ae.gif', cmap='gray')\n\nipyd.Image(url='conv-ae.gif?{}'.format(np.random.rand()),\n height=500, width=500)", "<a name=\"denoising-autoencoder\"></a>\nDenoising Autoencoder\nThe denoising autoencoder is a very simple extension to an autoencoder. Instead of seeing the input, it is corrupted, for instance by masked noise. but the reconstruction loss is still measured on the original uncorrupted image. What this does is lets the model try to interpret occluded or missing parts of the thing it is reasoning about. It would make sense for many models, that not every datapoint in an input is necessary to understand what is going on. Denoising autoencoders try to enforce that, and as a result, the encodings at the middle most layer are often far more representative of the actual classes of different objects.\nIn the resources section, you'll see that I've included a general framework autoencoder allowing you to use either a fully connected or convolutional autoencoder, and whether or not to include denoising. If you interested in the mechanics of how this works, I encourage you to have a look at the code.\n<a name=\"variational-autoencoders\"></a>\nVariational Autoencoders\nA variational autoencoder extends the traditional autoencoder by using an additional layer called the variational layer. It is actually two networks that are cleverly connected using a simple reparameterization trick, to help the gradient flow through both networks during backpropagation allowing both to be optimized.\nWe dont' have enough time to get into the details, but I'll try to quickly explain: it tries to optimize the likelihood that a particular distribution would create an image, rather than trying to optimize simply the L2 loss at the end of the network. Or put another way it hopes that there is some distribution that a distribution of image encodings could be defined as. This is a bit tricky to grasp, so don't worry if you don't understand the details. The major difference to hone in on is that instead of optimizing distance in the input space of pixel to pixel distance, which is actually quite arbitrary if you think about it... why would we care about the exact pixels being the same? 
Human vision would not care for most cases, if there was a slight translation of our image, then the distance could be very high, but we would never be able to tell the difference. So intuitively, measuring error based on raw pixel to pixel distance is not such a great approach.\nInstead of relying on raw pixel differences, the variational autoencoder tries to optimize two networks. One which says that given my pixels, I am pretty sure I can encode them to the parameters of some well known distribution, like a set of Gaussians, instead of some artbitrary density of values. And then I can optimize the latent space, by saying that particular distribution should be able to represent my entire dataset, and I try to optimize the likelihood that it will create the images I feed through a network. So distance is somehow encoded in this latent space. Of course I appreciate that is a difficult concept so forgive me for not being able to expand on it in more details.\nBut to make up for the lack of time and explanation, I've included this model under the resources section for you to play with! Just like the \"vanilla\" autoencoder, this one supports both fully connected, convolutional, and denoising models.\nThis model performs so much better than the vanilla autoencoder. In fact, it performs so well that I can even manage to encode the majority of MNIST into 2 values. The following visualization demonstrates the learning of a variational autoencoder over time.\n<mnist visualization>\nThere are of course a lot more interesting applications of such a model. You could for instance, try encoding a more interesting dataset, such as CIFAR which you'll find a wrapper for in the libs/datasets module.\n<TODO: produce GIF visualization madness>\nOr the celeb faces dataset:\n<celeb dataset>\nOr you could try encoding an entire movie. We tried it with the copyleft movie, \"Sita Sings The Blues\". Every 2 seconds, we stored an image of this movie, and then fed all of these images to a deep variational autoencoder. This is the result.\n<show sita sings the blues training images>\nAnd I'm sure we can get closer with deeper nets and more train time. But notice how in both celeb faces and sita sings the blues, the decoding is really blurred. That is because of the assumption of the underlying representational space. We're saying the latent space must be modeled as a gaussian, and those factors must be distributed as a gaussian. This enforces a sort of discretization of my representation, enforced by the noise parameter of the gaussian. In the last session, we'll see how we can avoid this sort of blurred representation and get even better decodings using a generative adversarial network.\nFor now, consider the applications that this method opens up. Once you have an encoding of a movie, or image dataset, you are able to do some very interesting things. You have effectively stored all the representations of that movie, although its not perfect of course. But, you could for instance, see how another movie would be interpretted by the same network. That's similar to what Terrance Broad did for his project on reconstructing blade runner and a scanner darkly, though he made use of both the variational autoencoder and the generative adversarial network. We're going to look at that network in more detail in the last session.\nWe'll also look at how to properly handle very large datasets like celeb faces or the one used here to create the sita sings the blues autoencoder. 
Taking every 60th frame of Sita Sings The Blues gives you about 300k images. And that's a lot of data to try and load in all at once. We had to size it down considerably, and make use of what's called a tensorflow input pipeline. I've included all the code for training this network, which took about 1 day on a fairly powerful machine, but I will not get into the details of the image pipeline bits until session 5 when we look at generative adversarial networks. I'm delaying this because we'll need to learn a few things along the way before we can build such a network.\n<a name=\"predicting-image-labels\"></a>\nPredicting Image Labels\nWe've just seen a variety of types of autoencoders and how they are capable of compressing information down to its innermost layer while still being able to retain most of the interesting details. Considering that the CelebNet dataset was nearly 200 thousand images of 64 x 64 x 3 pixels, and we're able to express those with just an inner layer of 50 values, that's just magic basically. Magic.\nOkay, let's move on now to a different type of learning often called supervised learning. Unlike what we just did, which is work with a set of data and not have any idea what that data should be labeled as, we're going to explicitly tell the network what we want it to be labeled by saying what the network should output for a given input. In the previous case, we just had a set of Xs, our images. Now, we're going to have Xs and Ys given to us, and use the Xs to try and output the Ys.\nWith MNIST, the outputs of each image are simply what numbers are drawn in the input image. The wrapper for grabbing this dataset from the libs module takes an additional parameter which I didn't talk about called one_hot.", "from libs import datasets\n# ds = datasets.MNIST(one_hot=True)", "To see what this is doing, let's compare setting it to false versus true:", "ds = datasets.MNIST(one_hot=False)\n# let's look at the first label\nprint(ds.Y[0])\n# okay and what does the input look like\nplt.imshow(np.reshape(ds.X[0], (28, 28)), cmap='gray')\n# great it is just the label of the image\n\nplt.figure()\n# Let's look at the next one just to be sure\nprint(ds.Y[1])\n# Yea the same idea\nplt.imshow(np.reshape(ds.X[1], (28, 28)), cmap='gray')", "And now let's look at what the one hot version looks like:", "ds = datasets.MNIST(one_hot=True)\nplt.figure()\nplt.imshow(np.reshape(ds.X[0], (28, 28)), cmap='gray')\nprint(ds.Y[0])\n# array([ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.])\n# Woah a bunch more numbers. 10 to be exact, which is also the number\n# of different labels in the dataset.\nplt.imshow(np.reshape(ds.X[1], (28, 28)), cmap='gray')\nprint(ds.Y[1])\n# array([ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])", "So instead of having a number from 0-9, we have 10 numbers corresponding to the digits, 0-9, and each value is either 0 or 1. Whichever digit the image represents is the one that is 1.\nTo summarize, we have all of the images of the dataset stored as:\nn_observations x n_features tensor (n-dim array)", "print(ds.X.shape)", "And labels stored as n_observations x n_labels where each observation is a one-hot vector, where only one element is 1 indicating which class or label it is.", "print(ds.Y.shape)\nprint(ds.Y[0])", "<a name=\"one-hot-encoding\"></a>\nOne-Hot Encoding\nRemember in the last session, we saw how to build a network capable of taking 2 inputs representing the row and column of an image, and predicting 3 outputs, the red, green, and blue colors. 
Just like in our unsupervised model, instead of having 2 inputs, we'll now have 784 inputs, the brightness of every pixel in our image. And instead of 3 outputs, like in our painting network from last session, or the 784 outputs we had in our unsupervised MNIST network, we'll now have 10 outputs representing the one-hot encoding of its label.\nSo why don't we just have 1 output? A number from 0-9? Wouldn't having 10 different outputs instead of just 1 be harder to learn? Consider how we normally train the network. We have to give it a cost which it will use to minimize. What could our cost be if our output was just a single number, 0-9? We would still have the true label, and the predicted label. Could we just take the subtraction of the two values? e.g. the network predicted 0, but the image was really the number 8. Okay so then our distance could be:", "# cost = tf.reduce_sum(tf.abs(y_pred - y_true))", "But in this example, the cost would be 8. If the image was a 4, and the network predicted a 0 again, the cost would be 4... but isn't the network still just as wrong, not half as much as when the image was an 8? In a one-hot encoding, the cost would be 1 for both, meaning they are both just as wrong. So we're able to better measure the cost, by separating each class's label into its own dimension.\n<a name=\"using-regression-for-classification\"></a>\nUsing Regression for Classification\nThe network we build will be trained to output values between 0 and 1. They won't output exactly a 0 or 1. But rather, they are able to produce any value. 0, 0.1, 0.2, ... and that means the networks we've been using are actually performing regression. In regression, the output is \"continuous\", rather than \"discrete\". The difference is this: a discrete output means the network can only output one of a few things. Like, 0, 1, 2, or 3, and that's it. But a continuous output means it can output any real number.\nIn order to perform what's called classification, we're just simply going to look at whichever value is the highest in our one hot encoding. In order to do that a little better, we're actually going interpret our one hot encodings as probabilities by scaling the total output by their sum. What this does is allows us to understand that as we grow more confident in one prediction, we should grow less confident in all other predictions. We only have so much certainty to go around, enough to add up to 1. If we think the image might also be the number 1, then we lose some certainty of it being the number 0.\nIt turns out there is a better cost function that simply measuring the distance between two vectors when they are probabilities. It's called cross entropy:\n\\begin{align}\n\\Large{H(x) = -\\sum{y_{\\text{t}}(x) * \\log(y_{\\text{p}}(x))}}\n\\end{align}\nWhat this equation does is measures the similarity of our prediction with our true distribution, by exponentially increasing error whenever our prediction gets closer to 1 when it should be 0, and similarly by exponentially increasing error whenever our prediction gets closer to 0, when it should be 1. I won't go into more detail here, but just know that we'll be using this measure instead of a normal distance measure.\n<a name=\"fully-connected-network\"></a>\nFully Connected Network\nDefining the Network\nLet's see how our one hot encoding and our new cost function will come into play. 
We'll create our network for predicting image classes in pretty much the same way we've created previous networks:\nWe will have as input to the network 28 x 28 values.", "import tensorflow as tf\nfrom libs import datasets\nds = datasets.MNIST(split=[0.8, 0.1, 0.1])\nn_input = 28 * 28", "As output, we have our 10 one-hot-encoding values", "n_output = 10", "We're going to create placeholders for our tensorflow graph. We're going to set the first dimension to None. Remember from our unsupervised model, this is just something special for placeholders which tells tensorflow \"let this dimension be any possible value\". 1, 5, 100, 1000, it doesn't matter. Since we're going to pass our entire dataset in batches we'll need this to be say 100 images at a time. But we'd also like to be able to send in only 1 image and see what the prediction of the network is. That's why we let this dimension be flexible.", "X = tf.placeholder(tf.float32, [None, n_input])", "For the output, we'll have None again, since for every input, we'll have the same number of images that have outputs.", "Y = tf.placeholder(tf.float32, [None, n_output])", "Now we'll connect our input to the output with a linear layer. Instead of relu, we're going to use softmax. This will perform our exponential scaling of the outputs and make sure the output sums to 1, making it a probability.", "# We'll use the linear layer we created in the last session, which I've stored in the libs file:\n# NOTE: The lecture used an older version of this function which had a slightly different definition.\nfrom libs import utils\nY_pred, W = utils.linear(\n x=X,\n n_output=n_output,\n activation=tf.nn.softmax,\n name='layer1')", "And then we write our loss function as the cross entropy. And then we'll give our optimizer the cross_entropy measure just like we would with GradientDescent. The formula for cross entropy is:\n\\begin{align}\n\\Large{H(x) = -\\sum{\\text{Y}{\\text{true}} * log(\\text{Y}{pred})}}\n\\end{align}", "# We add 1e-12 because the log is undefined at 0.\ncross_entropy = -tf.reduce_sum(Y * tf.log(Y_pred + 1e-12))\noptimizer = tf.train.AdamOptimizer(0.001).minimize(cross_entropy)", "To determine the correct class from our regression output, we have to take the maximum index.", "predicted_y = tf.argmax(Y_pred, 1)\nactual_y = tf.argmax(Y, 1)", "We can then measure the accuracy by seeing whenever these are equal. Note, this is just for us to see, and is not at all used to \"train\" the network!", "correct_prediction = tf.equal(predicted_y, actual_y)\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))", "Training the Network\nThe rest of the code will be exactly the same as before. We chunk the training dataset into batch_size chunks, and let these images help train the network over a number of iterations.", "sess = tf.Session()\nsess.run(tf.global_variables_initializer())\n\n# Now actually do some training:\nbatch_size = 50\nn_epochs = 5\nfor epoch_i in range(n_epochs):\n for batch_xs, batch_ys in ds.train.next_batch():\n sess.run(optimizer, feed_dict={\n X: batch_xs,\n Y: batch_ys\n })\n valid = ds.valid\n print(sess.run(accuracy,\n feed_dict={\n X: valid.images,\n Y: valid.labels\n }))\n\n# Print final test accuracy:\ntest = ds.test\nprint(sess.run(accuracy,\n feed_dict={\n X: test.images,\n Y: test.labels\n }))", "What we should see is the accuracy being printed after each \"epoch\", or after every run over the entire dataset. 
Since we're using batches, we use the notion of an \"epoch\" to denote whenever we've gone through the entire dataset.\n<a name=\"inspecting-the-network\"></a>\nInspecting the Trained Network\nLet's try and now inspect how the network is accomplishing this task. We know that our network is a single matrix multiplication of our 784 pixel values. The weight matrix, W, should therefore have 784 rows. As outputs, it has 10 values. So the matrix is composed in the linear function as n_input x n_output values. So the matrix is 784 rows x 10 columns.\n<TODO: graphic w/ wacom showing network and matrix multiplication and pulling out single neuron/column>\nIn order to get this matrix, we could have had our linear function return the tf.Tensor. But since everything is part of the tensorflow graph, and we've started using nice names for all of our operations, we can actually find this tensor using tensorflow:", "# We first get the graph that we used to compute the network\ng = tf.get_default_graph()\n\n# And can inspect everything inside of it\n[op.name for op in g.get_operations()]", "Looking at the names of the operations, we see there is one linear/W. But this is the tf.Operation. Not the tf.Tensor. The tensor is the result of the operation. To get the result of the operation, we simply add \":0\" to the name of the operation:", "W = g.get_tensor_by_name('layer1/W:0')", "We can use the existing session to compute the current value of this tensor:", "W_arr = np.array(W.eval(session=sess))\nprint(W_arr.shape)", "And now we have our tensor! Let's try visualizing every neuron, or every column of this matrix:", "fig, ax = plt.subplots(1, 10, figsize=(20, 3))\nfor col_i in range(10):\n ax[col_i].imshow(W_arr[:, col_i].reshape((28, 28)), cmap='coolwarm')", "We're going to use the coolwarm color map, which will use \"cool\" values, or blue-ish colors for low values. And \"warm\" colors, red, basically, for high values. So what we begin to see is that there is a weighting of all the input values, where pixels that are likely to describe that number are being weighted high, and pixels that are not likely to describe that number are being weighted low. By summing all of these multiplications together, the network is able to begin to predict what number is in the image. This is not a very good network though, and the representations it learns could still do a much better job. We were only right about 93% of the time according to our accuracy. State of the art models will get about 99.9% accuracy.\n<a name=\"convolutional-networks\"></a>\nConvolutional Networks\nTo get better performance, we can build a convolutional network. We've already seen how to create a convolutional network with our unsupervised model. We're going to make the same modifications here to help us predict the digit labels in MNIST.\nDefining the Network\nI'll first reset the current graph, so we can build a new one. We'll use tensorflow's nice helper function for doing this.", "from tensorflow.python.framework.ops import reset_default_graph\nreset_default_graph()", "And just to confirm, let's see what's in our graph:", "# We first get the graph that we used to compute the network\ng = tf.get_default_graph()\n\n# And can inspect everything inside of it\n[op.name for op in g.get_operations()]", "Great. 
Empty.\nNow let's get our dataset, and create some placeholders like before:", "# We'll have placeholders just like before which we'll fill in later.\nds = datasets.MNIST(one_hot=True, split=[0.8, 0.1, 0.1])\nX = tf.placeholder(tf.float32, [None, 784])\nY = tf.placeholder(tf.float32, [None, 10])", "Since X is currently [batch, height*width], we need to reshape to a\n4-D tensor to use it in a convolutional graph. Remember, in order to perform convolution, we have to use 4-dimensional tensors describing the:\nN x H x W x C\nWe'll reshape our input placeholder by telling the shape parameter to be these new dimensions and we'll use -1 to denote this dimension should not change size.", "X_tensor = tf.reshape(X, [-1, 28, 28, 1])", "We'll now setup the first convolutional layer. Remember that the weight matrix for convolution should be\n[height x width x input_channels x output_channels]\nLet's create 32 filters. That means every location in the image, depending on the stride I set when we perform the convolution, will be filtered by this many different kernels. In session 1, we convolved our image with just 2 different types of kernels. Now, we're going to let the computer try to find out what 32 filters helps it map the input to our desired output via our training signal.", "filter_size = 5\nn_filters_in = 1\nn_filters_out = 32\nW_1 = tf.get_variable(\n name='W',\n shape=[filter_size, filter_size, n_filters_in, n_filters_out],\n initializer=tf.random_normal_initializer())", "Bias is always [output_channels] in size.", "b_1 = tf.get_variable(\n name='b',\n shape=[n_filters_out],\n initializer=tf.constant_initializer())", "Now we can build a graph which does the first layer of convolution:\nWe define our stride as batch x height x width x channels. This has the effect of resampling the image down to half of the size.", "h_1 = tf.nn.relu(\n tf.nn.bias_add(\n tf.nn.conv2d(input=X_tensor,\n filter=W_1,\n strides=[1, 2, 2, 1],\n padding='SAME'),\n b_1))", "And just like the first layer, add additional layers to create a deep net.", "n_filters_in = 32\nn_filters_out = 64\nW_2 = tf.get_variable(\n name='W2',\n shape=[filter_size, filter_size, n_filters_in, n_filters_out],\n initializer=tf.random_normal_initializer())\nb_2 = tf.get_variable(\n name='b2',\n shape=[n_filters_out],\n initializer=tf.constant_initializer())\nh_2 = tf.nn.relu(\n tf.nn.bias_add(\n tf.nn.conv2d(input=h_1,\n filter=W_2,\n strides=[1, 2, 2, 1],\n padding='SAME'),\n b_2))", "4d -> 2d", "# We'll now reshape so we can connect to a fully-connected/linear layer:\nh_2_flat = tf.reshape(h_2, [-1, 7 * 7 * n_filters_out])", "Create a fully-connected layer:", "# NOTE: This uses a slightly different version of the linear function than the lecture!\nh_3, W = utils.linear(h_2_flat, 128, activation=tf.nn.relu, name='fc_1')", "And one last fully-connected layer which will give us the correct number of outputs, and use a softmax to expoentially scale the outputs and convert them to a probability:", "# NOTE: This uses a slightly different version of the linear function than the lecture!\nY_pred, W = utils.linear(h_3, n_output, activation=tf.nn.softmax, name='fc_2')", "<TODO: Draw as graphical representation>\nTraining the Network\nThe rest of the training process is the same as the previous network. 
We'll define loss/eval/training functions:", "cross_entropy = -tf.reduce_sum(Y * tf.log(Y_pred + 1e-12))\noptimizer = tf.train.AdamOptimizer().minimize(cross_entropy)", "Monitor accuracy:", "correct_prediction = tf.equal(tf.argmax(Y_pred, 1), tf.argmax(Y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))", "And create a new session to actually perform the initialization of all the variables:", "sess = tf.Session()\nsess.run(tf.global_variables_initializer())", "Then we'll train in minibatches and report accuracy:", "batch_size = 50\nn_epochs = 10\nfor epoch_i in range(n_epochs):\n for batch_xs, batch_ys in ds.train.next_batch():\n sess.run(optimizer, feed_dict={\n X: batch_xs,\n Y: batch_ys\n })\n valid = ds.valid\n print(sess.run(accuracy,\n feed_dict={\n X: valid.images,\n Y: valid.labels\n }))\n\n# Print final test accuracy:\ntest = ds.test\nprint(sess.run(accuracy,\n feed_dict={\n X: test.images,\n Y: test.labels\n }))", "<TODO: Fun timelapse of waiting>\nInspecting the Trained Network\nLet's take a look at the kernels we've learned using the following montage function, similar to the one we've been using for creating image montages, except this one is suited for the dimensions of convolution kernels instead of 4-d images. So it has the height and width first, unlike images which have batch then height then width. We'll use this function to visualize every convolution kernel in the first and second layers of our network.", "from libs.utils import montage_filters\nW1 = sess.run(W_1)\nplt.figure(figsize=(10, 10))\nplt.imshow(montage_filters(W1), cmap='coolwarm', interpolation='nearest')", "What we're looking at are all of the convolution kernels that have been learned. Compared to the previous network we've learned, it is much harder to understand what's happening here. But let's try and explain these a little more. The kernels that have been automatically learned here are responding to edges of different scales, orientations, and rotations. It's likely these are really describing parts of letters, or the strokes that make up letters. Put another way, they are trying to get at the \"information\" in the image by seeing what changes.\nThat's a pretty fundamental idea. That information would be things that change. Of course, there are filters for things that aren't changing as well. Some filters may even seem to respond to things that are mostly constant. However, if our network has learned a lot of filters that look like that, it's likely that the network hasn't really learned anything at all. The flip side of this is if the filters all look more or less random. That's also a bad sign.\nLet's try looking at the second layer's kernels:", "W2 = sess.run(W_2)\nplt.imshow(montage_filters(W2 / np.max(W2)), cmap='coolwarm')", "It's really difficult to know what's happening here. There are many more kernels in this layer. They've already passed through a set of filters and an additional non-linearity. How can we really know what the network is doing to learn its objective function? The important thing for now is to see that most of these filters are different, and that they are not all constant or uniformly activated. That means it's really doing something, but we aren't really sure yet how to see how that effects the way we think of and perceive the image. In the next session, we'll learn more about how we can start to interrogate these deeper representations and try to understand what they are encoding. 
Along the way, we'll learn some pretty amazing tricks for producing entirely new aesthetics that eventually led to the \"deep dream\" viral craze.\n<a name=\"savingloading-models\"></a>\nSaving/Loading Models\nTensorflow provides a few ways of saving/loading models. The easiest way is to use a checkpoint. Though, this really useful while you are training your network. When you are ready to deploy or hand out your network to others, you don't want to pass checkpoints around as they contain a lot of unnecessary information, and it also requires you to still write code to create your network. Instead, you can create a protobuf which contains the definition of your graph and the model's weights. Let's see how to do both:\n<a name=\"checkpoint\"></a>\nCheckpoint\nCreating a checkpoint requires you to have already created a set of operations in your tensorflow graph. Once you've done this, you'll create a session like normal and initialize all of the variables. After this, you create a tf.train.Saver which can restore a previously saved checkpoint, overwriting all of the variables with your saved parameters.", "import os\n\nsess = tf.Session()\ninit_op = tf.global_variables_initializer()\nsaver = tf.train.Saver()\nsess.run(init_op)\nif os.path.exists(\"model.ckpt\"):\n saver.restore(sess, \"model.ckpt\")\n print(\"Model restored.\")", "Creating the checkpoint is easy. After a few iterations of training, depending on your application say between 1/10 of the time to train the full model, you'll want to write the saved model. You can do this like so:", "save_path = saver.save(sess, \"./model.ckpt\")\nprint(\"Model saved in file: %s\" % save_path)", "<a name=\"protobuf\"></a>\nProtobuf\nThe second way of saving a model is really useful for when you don't want to pass around the code for producing the tensors or computational graph itself. It is also useful for moving the code to deployment or for use in the C++ version of Tensorflow. To do this, you'll want to run an operation to convert all of your trained parameters into constants. Then, you'll create a second graph which copies the necessary tensors, extracts the subgraph, and writes this to a model. The summarized code below shows you how you could use a checkpoint to restore your models parameters, and then export the saved model as a protobuf.", "path='./'\nckpt_name = './model.ckpt'\nfname = 'model.tfmodel'\ndst_nodes = ['Y']\ng_1 = tf.Graph()\nwith tf.Session(graph=g_1) as sess:\n x = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))\n # Replace this with some code which will create your tensorflow graph:\n net = create_network()\n sess.run(tf.global_variables_initializer())\n saver.restore(sess, ckpt_name)\n graph_def = tf.python.graph_util.convert_variables_to_constants(\n sess, sess.graph_def, dst_nodes)\ng_2 = tf.Graph()\nwith tf.Session(graph=g_2) as sess:\n tf.train.write_graph(\n tf.python.graph_util.extract_sub_graph(\n graph_def, dst_nodes), path, fname, as_text=False)", "When you wanted to import this model, now you wouldn't need to refer to the checkpoint or create the network by specifying its placeholders or operations. 
Instead, you'd use the import_graph_def operation like so:", "with open(\"model.tfmodel\", mode='rb') as f:\n graph_def = tf.GraphDef()\n graph_def.ParseFromString(f.read())\n\ntf.import_graph_def(net['graph_def'], name='model')", "<a name=\"wrap-up\"></a>\nWrap Up\nIn the next session, we'll learn some very powerful techniques for exploring the representations learned by these kernels, and how we can better understand what they are learning. We'll look at state of the art deep networks for image recognition and interrogate what they've learned using techniques that led the public to Deep Dream.\n<a name=\"reading\"></a>\nReading\nBourlard, H.; Kamp, Y. (1988). \"Auto-association by multilayer perceptrons and singular value decomposition\". Biological Cybernetics 59 (4–5): 291–294.\nG. E. Hinton, R. R. Salakhutdinov. Reducing the Dimensionality of Data with Neural Networks. Science, 28 Jul 2006. Vol. 313, Issue 5786, pp. 504-507. \nDOI: 10.1126/science.1127647. http://science.sciencemag.org/content/313/5786/504.abstract\nBengio, Y. (2009). \"Learning Deep Architectures for AI\". Foundations and Trends in Machine Learning 2. doi:10.1561/2200000006\nVincent, Pascal; Larochelle, Hugo; Lajoie, Isabelle; Bengio, Yoshua; Manzagol, Pierre-Antoine (2010). \"Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion\". The Journal of Machine Learning Research 11: 3371–3408.\nAuto-Encoding Variational Bayes, Kingma, D.P. and Welling, M., ArXiv e-prints, 2013 http://arxiv.org/abs/1312.6114" ]
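A note on the denoising autoencoder described in this notebook: the lesson explains masking noise in prose and points to the libs framework for the implementation, so the following is only a minimal sketch of the corruption step, not the author's code. The helper name corrupt_with_mask and the keep_prob parameter are assumptions, and a full denoising setup would also need a separate placeholder for the clean reconstruction target, since the graph built above reconstructs its own input X.

import numpy as np

def corrupt_with_mask(batch, keep_prob=0.5, seed=0):
    # zero out a random fraction (1 - keep_prob) of the input values;
    # the reconstruction loss is still measured against the clean batch
    rng = np.random.RandomState(seed)
    mask = rng.binomial(n=1, p=keep_prob, size=batch.shape)
    return batch * mask

# hypothetical use inside the training loop shown earlier:
# noisy = corrupt_with_mask(batch_X - mean_img)
# sess.run(optimizer, feed_dict={X: noisy})  # plus a clean-target placeholder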
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
hyzhak/mle
mle.ipynb
mit
[ "Maximum likelihood Estimation (MLE)\nbased on http://python-for-signal-processing.blogspot.com/2012/10/maximum-likelihood-estimation-maximum.html\nSimulate coin flipping\n\nBernoulli distribution \nis the probability distribution of a random variable which takes the value 1 with probability $p$ and the value 0 with probability $q = 1 - p$\nscipy.stats.bernoulli", "import numpy as np\nfrom scipy.stats import bernoulli \n\nnp.random.seed(123456789)\n\np_true = 1/2 # this is the value we will try to estimate from the observed data\nfp = bernoulli(p_true)\n\ndef sample(n=10):\n \"\"\"\n simulate coin flipping\n \"\"\"\n return fp.rvs(n) # flip it n times\n\nxs = sample(100) # generate some samples", "Find maximum of Bernoulli distribution\nSingle experiment\n$$\\phi(x) = p ^ {x} * (1 - p) ^ { 1 - x }$$\nSeries of experiments\n$$\\mathcal{L}(p|x) = \\prod_{i=1}^{n} p^{x_{i}}*(p-1)^{1-x_{i}}$$\nHints\n\nsympy.diff()\nsympy.expand()\nsympy.expand_log()\nsympy.solve()\nsympy.symbols()\nsympy gotchas", "import sympy\nfrom sympy.abc import x\n\np = sympy.symbols('p', positive=True)\nphi = p ** x * (1 - p) ** (1 - x)\nL = np.prod([phi.subs(x, i) for i in xs]) # objective function to maximize\nlog_L = sympy.expand_log(sympy.log(L))\nsol = sympy.solve(sympy.diff(log_L, p), p)[0]\n\nimport matplotlib.pyplot as plt\n\nx_space = np.linspace(1/100, 1, 100, endpoint=False)\n\nplt.plot(x_space,\n list(map(sympy.lambdify(p, log_L, 'numpy'), x_space)),\n sol,\n log_L.subs(p, sol),\n 'o',\n p_true,\n log_L.subs(p, p_true),\n 's',\n )\nplt.xlabel('$p$', fontsize=18)\nplt.ylabel('Likelihood', fontsize=18)\nplt.title('Estimate not equal to true value', fontsize=18)\nplt.grid(True)\nplt.show()", "Empirically examine the behavior of the maximum likelihood estimator\n\nevalf()", "def estimator_gen(niter=10, ns=100):\n \"\"\"\n generate data to estimate distribution of maximum likelihood estimator'\n \"\"\"\n x = sympy.symbols('x', real=True)\n phi = p**x*(1-p)**(1-x)\n for i in range(niter):\n xs = sample(ns) # generate some samples from the experiment\n L = np.prod([phi.subs(x,i) for i in xs]) # objective function to maximize\n log_L = sympy.expand_log(sympy.log(L)) \n sol = sympy.solve(sympy.diff(log_L, p), p)[0]\n yield float(sol.evalf())\n \nentries = list(estimator_gen(100)) # this may take awhile, depending on how much data you want to generate\nplt.hist(entries) # histogram of maximum likelihood estimator\nplt.title('$\\mu={:3.3f},\\sigma={:3.3f}$'.format(np.mean(entries), np.std(entries)), fontsize=18)\nplt.show()", "Dynamic of MLE by length sample sequence", "def estimator_dynamics(ns_space, num_tries = 20):\n for ns in ns_space:\n estimations = list(estimator_gen(num_tries, ns))\n yield np.mean(estimations), np.std(estimations)\n \nns_space = list(range(10, 100, 5))\nentries = list(estimator_dynamics(ns_space))\nentries_mean = list(map(lambda e: e[0], entries))\nentries_std = list(map(lambda e: e[1], entries))\n\nplt.errorbar(ns_space, entries_mean, entries_std, fmt='-o')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
statsmaths/stat665
lectures/lec22/.ipynb_checkpoints/notebook22-checkpoint.ipynb
gpl-2.0
[ "Problem Set 8 Review & Transfer Learning with word2vec\nImport various modules that we need for this notebook (now using Keras 1.0.0)", "%pylab inline\n\nimport copy\n\nimport numpy as np\nimport pandas as pd\nimport sys\nimport os\nimport re\n\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Dropout, Activation\nfrom keras.optimizers import SGD, RMSprop\nfrom keras.layers.normalization import BatchNormalization\nfrom keras.layers.wrappers import TimeDistributed\nfrom keras.preprocessing.text import Tokenizer\nfrom keras.preprocessing import sequence\nfrom keras.layers.embeddings import Embedding\nfrom keras.layers.recurrent import SimpleRNN, LSTM, GRU\n\nfrom sklearn.svm import SVC\nfrom sklearn.linear_model import LogisticRegression\nfrom gensim.models import word2vec\n", "I. Problem Set 8, Part 1\nLet's work through a solution to the first part of problem set 8, where you applied various techniques to the STL-10 dataset.", "dir_in = \"../../../class_data/stl10/\"\nX_train = np.genfromtxt(dir_in + 'X_train_new.csv', delimiter=',')\nY_train = np.genfromtxt(dir_in + 'Y_train.csv', delimiter=',')\nX_test = np.genfromtxt(dir_in + 'X_test_new.csv', delimiter=',')\nY_test = np.genfromtxt(dir_in + 'Y_test.csv', delimiter=',')", "And construct a flattened version of it, for the linear model case:", "Y_train_flat = np.zeros(Y_train.shape[0])\nY_test_flat = np.zeros(Y_test.shape[0])\nfor i in range(10):\n Y_train_flat[Y_train[:,i] == 1] = i\n Y_test_flat[Y_test[:,i] == 1] = i", "(1) neural network\nWe now build and evaluate a neural network.", "model = Sequential()\n\nmodel.add(Dense(1024, input_shape = (X_train.shape[1],)))\nmodel.add(Activation(\"relu\"))\nmodel.add(BatchNormalization())\nmodel.add(Dropout(0.5))\n\nmodel.add(Dense(1024))\nmodel.add(Activation(\"relu\"))\nmodel.add(BatchNormalization())\nmodel.add(Dropout(0.5))\n\nmodel.add(Dense(1024))\nmodel.add(Activation(\"relu\"))\nmodel.add(BatchNormalization())\nmodel.add(Dropout(0.5))\n\nmodel.add(Dense(10))\nmodel.add(Activation('softmax'))\n\nrms = RMSprop()\nmodel.compile(loss='categorical_crossentropy', optimizer=rms,\n metrics=['accuracy'])\n\nmodel.fit(X_train, Y_train, batch_size=32, nb_epoch=5, verbose=1)\n\ntest_rate = model.evaluate(X_test, Y_test)[1]\nprint(\"Test classification rate %0.05f\" % test_rate)", "(2) support vector machine\nAnd now, a basic linear support vector machine.", "svc_obj = SVC(kernel='linear', C=1)\nsvc_obj.fit(X_train, Y_train_flat)\n\npred = svc_obj.predict(X_test)\npd.crosstab(pred, Y_test_flat)\nc_rate = sum(pred == Y_test_flat) / len(pred)\nprint(\"Test classification rate %0.05f\" % c_rate)", "(3) penalized logistc model\nAnd finally, an L1 penalized model:", "lr = LogisticRegression(penalty = 'l1')\nlr.fit(X_train, Y_train_flat)\n\npred = lr.predict(X_test)\npd.crosstab(pred, Y_test_flat)\nc_rate = sum(pred == Y_test_flat) / len(pred)\nprint(\"Test classification rate %0.05f\" % c_rate)", "II. 
Problem Set 8, Part 2\nNow, let's read in the Chicago crime dataset and see how well we can get a neural network to perform on it.", "dir_in = \"../../../class_data/chi_python/\"\nX_train = np.genfromtxt(dir_in + 'chiCrimeMat_X_train.csv', delimiter=',')\nY_train = np.genfromtxt(dir_in + 'chiCrimeMat_Y_train.csv', delimiter=',')\nX_test = np.genfromtxt(dir_in + 'chiCrimeMat_X_test.csv', delimiter=',')\nY_test = np.genfromtxt(dir_in + 'chiCrimeMat_Y_test.csv', delimiter=',')", "Now, build a neural network for the model", "model = Sequential()\n\nmodel.add(Dense(1024, input_shape = (434,)))\nmodel.add(Activation(\"relu\"))\nmodel.add(BatchNormalization())\nmodel.add(Dropout(0.2))\n\nmodel.add(Dense(1024))\nmodel.add(Activation(\"relu\"))\nmodel.add(BatchNormalization())\nmodel.add(Dropout(0.2))\n\nmodel.add(Dense(1024))\nmodel.add(Activation(\"relu\"))\nmodel.add(BatchNormalization())\nmodel.add(Dropout(0.2))\n\nmodel.add(Dense(5))\nmodel.add(Activation('softmax'))\n\nrms = RMSprop()\nmodel.compile(loss='categorical_crossentropy', optimizer=rms,\n metrics=['accuracy'])\n\n# downsample, if need be:\nnum_sample = X_train.shape[0]\n\nmodel.fit(X_train[:num_sample], Y_train[:num_sample], batch_size=32,\n nb_epoch=10, verbose=1)\n\ntest_rate = model.evaluate(X_test, Y_test)[1]\nprint(\"Test classification rate %0.05f\" % test_rate)", "III. Transfer Learning IMDB Sentiment analysis\nNow, let's use the word2vec embeddings on the IMDB sentiment analysis corpus. This will allow us to use a significantly larger vocabulary of words. I'll start by reading in the IMDB corpus again from the raw text.", "path = \"../../../class_data/aclImdb/\"\n\nff = [path + \"train/pos/\" + x for x in os.listdir(path + \"train/pos\")] + \\\n [path + \"train/neg/\" + x for x in os.listdir(path + \"train/neg\")] + \\\n [path + \"test/pos/\" + x for x in os.listdir(path + \"test/pos\")] + \\\n [path + \"test/neg/\" + x for x in os.listdir(path + \"test/neg\")]\n\nTAG_RE = re.compile(r'<[^>]+>')\n\ndef remove_tags(text):\n return TAG_RE.sub('', text)\n \ninput_label = ([1] * 12500 + [0] * 12500) * 2\ninput_text = []\n\nfor f in ff:\n with open(f) as fin:\n pass\n input_text += [remove_tags(\" \".join(fin.readlines()))]", "I'll fit a significantly larger vocabulary this time, as the embeddings are basically given for us.", "num_words = 5000\nmax_len = 400\ntok = Tokenizer(num_words)\ntok.fit_on_texts(input_text[:25000])\n\nX_train = tok.texts_to_sequences(input_text[:25000])\nX_test = tok.texts_to_sequences(input_text[25000:])\ny_train = input_label[:25000]\ny_test = input_label[25000:]\n\nX_train = sequence.pad_sequences(X_train, maxlen=max_len)\nX_test = sequence.pad_sequences(X_test, maxlen=max_len)\n\nwords = []\nfor iter in range(num_words):\n words += [key for key,value in tok.word_index.items() if value==iter+1]\n\nloc = \"/Users/taylor/files/word2vec_python/GoogleNews-vectors-negative300.bin\"\nw2v = word2vec.Word2Vec.load_word2vec_format(loc, binary=True)\n\nweights = np.zeros((num_words,300))\nfor idx, w in enumerate(words):\n try:\n weights[idx,:] = w2v[w]\n except KeyError as e:\n pass\n\nmodel = Sequential()\n\nmodel.add(Embedding(num_words, 300, input_length=max_len))\nmodel.add(Dropout(0.5))\n\nmodel.add(GRU(16,activation='relu'))\n\nmodel.add(Dense(128))\nmodel.add(Dropout(0.5))\nmodel.add(Activation('relu'))\n\nmodel.add(Dense(1))\nmodel.add(Activation('sigmoid'))\n\nmodel.layers[0].set_weights([weights])\nmodel.layers[0].trainable = False\n\nmodel.compile(loss='binary_crossentropy', optimizer='rmsprop', 
metrics=['accuracy'])\n\nmodel.fit(X_train, y_train, batch_size=32, nb_epoch=10, verbose=1,\n validation_data=(X_test, y_test))" ]
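A small aside on the label handling in Part 1 of this notebook: the explicit loop that builds Y_train_flat and Y_test_flat can be written as an argmax over the one-hot columns. This is only an equivalent shortcut, and it assumes Y_train and Y_test are the 10-column one-hot matrices loaded above:

import numpy as np

# the class index is the position of the 1 in each one-hot row
Y_train_flat = np.argmax(Y_train, axis=1)
Y_test_flat = np.argmax(Y_test, axis=1)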
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pycam/python-basic
live/python_basic_1_4_live.ipynb
unlicense
[ "Quick recap\n\nCollections: Lists and String", "# list\nmy_list = [1, 4, 5, 9]\nprint(my_list)\n\ntype(my_list)\n\n# accessing each element by index\nprint(my_list[2])\n\nlen(my_list)\n\n# assigning new value\nmy_list[1] = 12\nprint(my_list)\n\n# append an element at the end\nmy_list.append(7)\nprint(my_list)\n\nhelp(list)\n\n# String\nmy_name = 'Anne' # it is also a tuple of characters\nmy_name[2]\n\nlen(my_name)\n\n# sequence string separated by space\nseq = 'AAA TTT CCC GGG'\nprint(seq.split())\n\n?str.split\n\n?str.join\n\n','.join(my_name)", "Session 1.4\n\nCollections Sets and dictionaries", "# Sets\nmy_set = set([1, 2, 3, 3, 3, 4])\nprint(my_set)\n\nlen(my_set)\n\nmy_set.add(3) # sets are unordered\nprint(my_set)\n\nmy_set.remove(3)\nprint(my_set)\n\n# set operation using union | or intersection &\nmy_first_set = set([1, 2, 4, 6, 8])\nmy_second_set = set([8, 9, 10])\nmy_first_set | my_second_set\n\nmy_first_set & my_second_set", "Exercises 1.4.1\nGiven the protein sequence \"MPISEPTFFEIF\", find the unique amino acids in the sequence.", "# Dictionnaries are collections of key/value pairs\nmy_dict = {'A': 'Adenine', 'C': 'Cytosine', 'T': 'Thymine', 'G': 'Guanine'}\nprint(my_dict)\n\nmy_dict['C']\n\nmy_dict['N']\n\n?my_dict.get\n\nmy_dict.get('N', 'unknown')\n\nprint(my_dict)\nlen(my_dict)\n\ntype(my_dict)\n\n'T' in my_dict\n\n# Assign new key/value pair\nmy_dict['Y'] = 'Pyrimidine'\nprint(my_dict)\n\nmy_dict['Y'] = 'Cytosine or Thymine'\nprint(my_dict)\n\ndel my_dict['Y']\nprint(my_dict)\n\nhelp(dict)\n\nmy_dict.keys()\n\nlist(my_dict.keys())\n\nmy_dict.values()\n\nmy_dict.items()", "Exercises 1.4.2\n\nCreate a codon sequence \"GTT GCA CCA CAA CCG\"\nBuild a genetic code dictionary using the DNA codon table\nPrint each codon and its correspondong amino acid\n\nExercise 1.4.3\n\nCount and store in a dictionary the abundance of different amino acid types present in this lysozyme protein sequence (http://www.uniprot.org/uniprot/B2R4C5.fasta)\n\nTomorrow\n\nConditional execution\nLoops\nFiles" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nhuntwalker/gatspy
examples/FastLombScargle.ipynb
bsd-2-clause
[ "Fast Lomb-Scargle Periodograms in Python\nThe Lomb-Scargle Periodogram is a well-known method of finding periodicity in irregularly-sampled time-series data.\nThe common implementation of the periodogram is relatively slow: for $N$ data points, a frequency grid of $\\sim N$ frequencies is required and the computation scales as $O[N^2]$.\nIn a 1989 paper, Press and Rybicki presented a faster technique which makes use of fast Fourier transforms to reduce this cost to $O[N\\log N]$ on a regular frequency grid.\nThe gatspy package implement this in the LombScargleFast object, which we'll explore below.\nBut first, we'll motivate why this algorithm is needed at all.\nWe'll start this notebook with some standard imports:", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# use seaborn's default plotting styles for matplotlib\nimport seaborn; seaborn.set()", "To begin, let's make a function which will create $N$ noisy, irregularly-spaced data points containing a periodic signal, and plot one realization of that data:", "def create_data(N, period=2.5, err=0.1, rseed=0):\n rng = np.random.RandomState(rseed)\n t = np.arange(N, dtype=float) + 0.3 * rng.randn(N)\n y = np.sin(2 * np.pi * t / period) + err * rng.randn(N)\n return t, y, err\n\nt, y, dy = create_data(100, period=20)\nplt.errorbar(t, y, dy, fmt='o');", "From this, our algorithm should be able to identify any periodicity that is present.\nChoosing the Frequency Grid\nThe Lomb-Scargle Periodogram works by evaluating a power for a set of candidate frequencies $f$. So the first question is, how many candidate frequencies should we choose?\nIt turns out that this question is very important. If you choose the frequency spacing poorly, it may lead you to miss strong periodic signal in the data!\nFrequency spacing\nFirst, let's think about the frequency spacing we need in our grid. If you're asking about a candidate frequency $f$, then data with range $T$ contains $T \\cdot f$ complete cycles. If our error in frequency is $\\delta f$, then $T\\cdot\\delta f$ is the error in number of cycles between the endpoints of the data.\nIf this error is a significant fraction of a cycle, this will cause problems. This givs us the criterion\n$$\nT\\cdot\\delta f \\ll 1\n$$\nCommonly, we'll choose some oversampling factor around 5 and use $\\delta f = (5T)^{-1}$ as our frequency grid spacing.\nFrequency limits\nNext, we need to choose the limits of the frequency grid. On the low end, $f=0$ is suitable, but causes some problems – we'll go one step away and use $\\delta f$ as our minimum frequency.\nBut on the high end, we need to make a choice: what's the highest frequency we'd trust our data to be sensitive to?\nAt this point, many people are tempted to mis-apply the Nyquist-Shannon sampling theorem, and choose some version of the Nyquist limit for the data.\nBut this is entirely wrong! The Nyquist frequency applies for regularly-sampled data, but irregularly-sampled data can be sensitive to much, much higher frequencies, and the upper limit should be determined based on what kind of signals you are looking for.\nStill, a common (if dubious) rule-of-thumb is that the high frequency is some multiple of what Press & Rybicki call the \"average\" Nyquist frequency,\n$$\n\\hat{f}_{Ny} = \\frac{N}{2T}\n$$\nWith this in mind, we'll use the following function to determine a suitable frequency grid:", "def freq_grid(t, oversampling=5, nyquist_factor=3):\n T = t.max() - t.min()\n N = len(t)\n \n df = 1. 
/ (oversampling * T)\n fmax = 0.5 * nyquist_factor * N / T\n N = int(fmax // df)\n return df + df * np.arange(N)", "Now let's use the gatspy tools to plot the periodogram:", "t, y, dy = create_data(100, period=2.5)\nfreq = freq_grid(t)\nprint(len(freq))\n\nfrom gatspy.periodic import LombScargle\nmodel = LombScargle().fit(t, y, dy)\nperiod = 1. / freq\npower = model.periodogram(period)\nplt.plot(period, power)\nplt.xlim(0, 5);", "The algorithm finds a strong signal at a period of 2.5.\nTo demonstrate explicitly that the Nyquist rate doesn't apply in irregularly-sampled data, let's use a period below the averaged sampling rate and show that we can find it:", "t, y, dy = create_data(100, period=0.3)\nperiod = 1. / freq_grid(t, nyquist_factor=10)\n\nmodel = LombScargle().fit(t, y, dy)\npower = model.periodogram(period)\nplt.plot(period, power)\nplt.xlim(0, 1);", "With a data sampling rate of approximately $1$ time unit, we easily find a period of $0.3$ time units. The averaged Nyquist limit clearly does not apply for irregularly-spaced data!\nNevertheless, short of a full analysis of the temporal window function, it remains a useful milepost in estimating the upper limit of frequency.\nScaling with $N$\nWith these rules in mind, we see that the size of the frequency grid is approximately\n$$\nN_f = \\frac{f_{max}}{\\delta f} \\propto \\frac{N/(2T)}{1/T} \\propto N\n$$\nSo for $N$ data points, we will require some multiple of $N$ frequencies (with a constant of proportionality typically on order 10) to suitably explore the frequency space.\nThis is the source of the $N^2$ scaling of the typical periodogram: finding periods in $N$ datapoints requires a grid of $\\sim 10N$ frequencies, and $O[N^2]$ operations.\nWhen $N$ gets very, very large, this becomes a problem.\nFast Periodograms with LombScargleFast\nFinally we get to the meat of this discussion.\nIn a 1989 paper, Press and Rybicki proposed a clever method whereby a Fast Fourier Transform is used on a grid extirpolated from the original data, such that this problem can be solved in $O[N\\log N]$ time. 
The gatspy package contains a pure-Python implementation of this algorithm, and we'll explore it here.\nIf you're interested in seeing how the algorithm works in Python, check out the code in the gatspy source.\nIt's far more readable and understandable than the Fortran source presented in Press et al.\nFor convenience, the implementation has a periodogram_auto method which automatically selects a frequency/period range based on an oversampling factor and a nyquist factor:", "from gatspy.periodic import LombScargleFast\nhelp(LombScargleFast.periodogram_auto)\n\nfrom gatspy.periodic import LombScargleFast\n\nt, y, dy = create_data(100)\nmodel = LombScargleFast().fit(t, y, dy)\nperiod, power = model.periodogram_auto()\nplt.plot(period, power)\nplt.xlim(0, 5);", "Here, to illustrate the different computational scalings, we'll evaluate the computational time for a number of inputs, using LombScargleAstroML (a fast implementation of the $O[N^2]$ algorithm) and LombScargleFast, which is the fast FFT-based implementation:", "from time import time\nfrom gatspy.periodic import LombScargleAstroML, LombScargleFast\n \n\ndef get_time(N, Model):\n t, y, dy = create_data(N)\n \n model = Model().fit(t, y, dy)\n t0 = time()\n model.periodogram_auto()\n t1 = time()\n result = t1 - t0\n \n # for fast operations, we should do several and take the median\n if result < 0.1:\n N = min(50, 0.5 / result)\n times = []\n for i in range(5):\n t0 = time()\n model.periodogram_auto()\n t1 = time()\n times.append(t1 - t0)\n result = np.median(times)\n return result\n\nN_obs = list(map(int, 10 ** np.linspace(1, 4, 5)))\ntimes1 = [get_time(N, LombScargleAstroML) for N in N_obs]\ntimes2 = [get_time(N, LombScargleFast) for N in N_obs]\n\nplt.loglog(N_obs, times1, label='Naive Implementation')\nplt.loglog(N_obs, times2, label='FFT Implementation')\nplt.xlabel('N observations')\nplt.ylabel('t (sec)')\nplt.legend(loc='upper left');", "For fewer than 100 observations, the naive implementation wins out, but as the number of points grows, we observe the clear trends in scaling: $O[N^2]$ for the Naive method, and $O[N\\log N]$ for the fast method. We could push this plot higher, but the trends are already clear: for $10^5$ points, while the FFT method would complete in a couple seconds, the Naive method would take nearly two hours! Who's got the time for that plot?" ]
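To make the grid-size argument in this notebook concrete: with the defaults used in freq_grid (oversampling of 5, nyquist_factor of 3), the number of candidate frequencies is fmax / df = 0.5 * nyquist_factor * oversampling * N, and the time span T cancels out of the ratio. For the N = 100 example that is about 750 frequencies, consistent with the roughly 10*N rule of thumb quoted earlier. A quick arithmetic check, assuming those defaults:

N = 100
oversampling, nyquist_factor = 5, 3

# df = 1 / (oversampling * T) and fmax = 0.5 * nyquist_factor * N / T,
# so T drops out of the ratio fmax / df
n_freq = int(0.5 * nyquist_factor * oversampling * N)
print(n_freq)  # ~750 candidate frequencies, i.e. O(N)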
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
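A quick numerical check of the claim in the periodogram notebook above that the frequency grid itself grows linearly with the number of observations, which is what turns the naive periodogram into an $O[N^2]$ computation. The grid rules mirror the ones used there, df of order 1/(oversampling T) and f_max of order N/(2T); the oversampling and Nyquist factors below are illustrative choices, not necessarily the gatspy defaults.

```python
import numpy as np

oversampling, nyquist_factor = 5, 3        # illustrative choices

for N in (100, 1000, 10000):
    t = np.sort(100 * np.random.rand(N))   # irregular observation times
    T = t.max() - t.min()
    df = 1.0 / (oversampling * T)          # frequency resolution scales as 1/T
    fmax = 0.5 * nyquist_factor * N / T    # pseudo-Nyquist limit scales as N/T
    n_freq = int(fmax // df)
    print(N, n_freq, n_freq / N)           # the ratio stays roughly constant
```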
daniel-koehn/Theory-of-seismic-waves-II
04_FD_stability_dispersion/1_fd_stability_dispersion.ipynb
gpl-3.0
[ "Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 parts of this notebook are from this Jupyter notebook by Heiner Igel (@heinerigel), Lion Krischer (@krischer) and Taufiqurrahman (@git-taufiqurrahman) which is a supplemenatry material to the book Computational Seismology: A Practical Introduction, additional modifications by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi", "# Execute this cell to load the notebook's style sheet, then ignore it\nfrom IPython.core.display import HTML\ncss_file = '../style/custom.css'\nHTML(open(css_file, \"r\").read())", "FD stability and dispersion\nIn the last lesson we developed a 1D acoustic FD modelling code. For the given modelling parameters, the code worked flawlessly and delivered modelled seismograms, which are in good agreement with the analytical solution. In this lesson we want to investigate how to choose optimum time steps dt and spatial grid point distances dx, to get stable and accurate FD modelling results. We start, by revisiting a simplified version of our 1D acoustic FD modelling code ...", "# Import Libraries \n# ----------------\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom pylab import rcParams\n\n# Ignore Warning Messages\n# -----------------------\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\n# Definition of modelling parameters\n# ----------------------------------\nxmax = 500 # maximum spatial extension of the 1D model (m)\ndx = 0.5 # grid point distance in x-direction\n\ntmax = 1.001 # maximum recording time of the seismogram (s)\ndt = 0.0010 # time step\n\nvp0 = 333. # P-wave speed in medium (m/s)\n\n# acquisition geometry\nxr = 365.0 # receiver position (m)\nxsrc = 249.5 # source position (m)\n\nf0 = 25. # dominant frequency of the source (Hz)\nt0 = 4. / f0 # source time shift (s)", "Comparison of numerical with analytical solution\nIn the function below we solve the homogeneous 1D acoustic wave equation by the 3-point spatial/temporal difference operator and compare the numerical results with the analytical solution. To play a little bit more with the modelling parameters, I restricted the input parameters to dt and dx. The number of spatial grid points and time steps, as well as the discrete source and receiver positions are estimated within this function.", "# 1D Wave Propagation (Finite Difference Solution) \n# ------------------------------------------------\ndef FD_1D_acoustic(dt,dx):\n \n nx = (int)(xmax/dx) # number of grid points in x-direction\n print('nx = ',nx)\n \n nt = (int)(tmax/dt) # maximum number of time steps \n print('nt = ',nt)\n \n ir = (int)(xr/dx) # receiver location in grid in x-direction \n isrc = (int)(xsrc/dx) # source location in grid in x-direction\n\n # Source time function (Gaussian)\n # -------------------------------\n src = np.zeros(nt + 1)\n time = np.linspace(0 * dt, nt * dt, nt)\n\n # 1st derivative of a Gaussian\n src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))\n\n # Analytical solution\n # -------------------\n G = time * 0.\n\n # Initialize coordinates\n # ----------------------\n x = np.arange(nx)\n x = x * dx # coordinate in x-direction\n\n for it in range(nt): # Calculate Green's function (Heaviside function)\n if (time[it] - np.abs(x[ir] - x[isrc]) / vp0) >= 0:\n G[it] = 1. 
/ (2 * vp0)\n Gc = np.convolve(G, src * dt)\n Gc = Gc[0:nt]\n lim = Gc.max() # get limit value from the maximum amplitude\n \n # Initialize empty pressure arrays\n # --------------------------------\n p = np.zeros(nx) # p at time n (now)\n pold = np.zeros(nx) # p at time n-1 (past)\n pnew = np.zeros(nx) # p at time n+1 (present)\n d2px = np.zeros(nx) # 2nd space derivative of p\n\n # Initialize model (assume homogeneous model)\n # -------------------------------------------\n vp = np.zeros(nx)\n vp = vp + vp0 # initialize wave velocity in model\n\n # Initialize empty seismogram\n # ---------------------------\n seis = np.zeros(nt) \n \n # Calculate Partial Derivatives\n # -----------------------------\n for it in range(nt):\n \n # FD approximation of spatial derivative by 3 point operator\n for i in range(1, nx - 1):\n d2px[i] = (p[i + 1] - 2 * p[i] + p[i - 1]) / dx ** 2\n\n # Time Extrapolation\n # ------------------\n pnew = 2 * p - pold + vp ** 2 * dt ** 2 * d2px\n\n # Add Source Term at isrc\n # -----------------------\n # Absolute pressure w.r.t analytical solution\n pnew[isrc] = pnew[isrc] + src[it] / dx * dt ** 2\n \n # Remap Time Levels\n # -----------------\n pold, p = p, pnew\n \n # Output of Seismogram\n # -----------------\n seis[it] = p[ir] \n \n # Compare FD Seismogram with analytical solution\n # ---------------------------------------------- \n # Define figure size\n rcParams['figure.figsize'] = 12, 5\n plt.plot(time, seis, 'b-',lw=3,label=\"FD solution\") # plot FD seismogram\n Analy_seis = plt.plot(time,Gc,'r--',lw=3,label=\"Analytical solution\") # plot analytical solution\n plt.xlim(time[0], time[-1])\n plt.ylim(-lim, lim)\n plt.title('Seismogram')\n plt.xlabel('Time (s)')\n plt.ylabel('Amplitude')\n plt.legend()\n plt.grid()\n plt.show() \n\ndx = 0.5 # grid point distance in x-direction\ndt = 0.0010 # time step\nFD_1D_acoustic(dt,dx)", "This is the same result, we achieved in the last lesson. Now, you might get the smart idea to save some computation time by increasing the timestep dt. Let's try it ...", "dx = 0.5 # grid point distance in x-direction\n#dt = 0.0010 # old time step\ndt = 0.0015023 # time step\nFD_1D_acoustic(dt,dx)", "Oops, maybe this idea was not so smart at all, because the modelling becomes unstable. Instead of increasing the time step dt, we could try to increase the spatial discretization dx to save computation time ...", "# dx = 0.5 # old grid point distance in x-direction\ndx = 7.0 # new grid point distance in x-direction\ndt = 0.0010 # time step\nFD_1D_acoustic(dt,dx)", "Hmm, the accurracy of the FD modelling result compared to the analytical solution is clearly deterioated, when the spatial grid point $dx$ is increased. And why does the P-body wave becomes dispersive? More generally, how do I choose $dx$ and $dt$ without using a trial-and-error approach, which requires a lot of computation time, especially when considering 3D modelling. To understand the underlying problems, we will investigate the stability and numerical dispersion of the FD method in the next two sections.\nStability of 1D acoustic wave equation finite difference approximation\nTo analyse the stability of the finite difference approximation of the 1D acoustic wave equation:\n\\begin{equation}\n \\frac{p_{j}^{n+1} - 2 p_{j}^n + p_{j}^{n-1}}{\\mathrm{d}t^2} \\ = \\ vp_{j}^2 \\frac{p_{j+1}^{n} - 2 p_{j}^n + p_{j-1}^{n}}{\\mathrm{d}x^2},\n\\end{equation}\nwe use an approach introduced by the famous mathematician and pioneer of computational sciences John von Neumann. 
For the von Neumann Analysis, we assume harmonic plane wave solutions for the pressure wavefield like:\n\\begin{equation}\np = exp(i(kx-\\omega t)),\\nonumber\n\\end{equation}\nwith $i^2=-1$, the wavenumber $k$ and circular frequency $\\omega$. Using the discrete \nspatial coordinates:\n$x_j = j dx,$\nand times \n$t_n = n dt.$\nWe can calculate discrete plane wave solutions at the discrete locations and times in eq. (1), for example at grid point j and time n: \n\\begin{equation}\np_j^n = exp(i(kjdx-\\omega n dt)),\\nonumber\n\\end{equation}\nor at grid point j and time n+1:\n\\begin{align}\np_j^{n+1} &= exp(i(kjdx-\\omega (n+1) dt))\\nonumber\\\n&= exp(-i\\omega dt)\\; exp(i(kjdx-\\omega n dt))\\nonumber\\\n&= exp(-i\\omega dt)\\; p_j^n,\\nonumber\\\n\\end{align}\nor at the grid point j and time n-1:\n\\begin{align}\np_j^{n-1} &= exp(i(kjdx-\\omega (n-1) dt))\\nonumber\\\n&= exp(i\\omega dt)\\; exp(i(kjdx-\\omega n dt))\\nonumber\\\n&= exp(i\\omega dt)\\; p_j^n.\\nonumber\\\n\\end{align}\nSimilar approximations can be estimated for time n at the spatial grid points j+1:\n\\begin{align}\np_{j+1}^{n} &= exp(i(k(j+1)dx-\\omega n dt))\\nonumber\\\n&= exp(ik dx)\\; exp(i(kjdx-\\omega n dt))\\nonumber\\\n&= exp(ik dx)\\; p_j^n,\\nonumber\\\n\\end{align}\nand a grid point j-1:\n\\begin{align}\np_{j-1}^{n} &= exp(i(k(j-1)dx-\\omega n dt))\\nonumber\\\n&= exp(-ik dx)\\; exp(i(kjdx-\\omega n dt))\\nonumber\\\n&= exp(-ik dx)\\; p_j^n.\\nonumber\\\n\\end{align}\nInserting the discrete pressure wavefield solutions $p_j^{n+1}$, $p_j^{n-1}$, $p_{j+1}^{n}$ and $p_{j-1}^{n}$ in eq. (1), we get after some minor rearrangement:\n\\begin{equation}\nexp(-i\\omega dt)p_j^n - 2 p_j^n + exp(i\\omega dt)p_j^n = vp_j^2 \\frac{dt^2}{dx^2}\\biggl(exp(-ik dx)p_j^n - 2 p_j^n + exp(ik dx)p_j^n\\biggr).\\nonumber\n\\end{equation}\nAssuming that $p_j^n \\ne 0$, we can divide the RHS and LHS by $p_j^n$\n\\begin{equation}\nexp(-i\\omega dt) - 2 + exp(i\\omega dt) = vp_j^2 \\frac{dt^2}{dx^2}\\biggl(exp(-ik dx) - 2 + exp(ik dx)\\biggr).\\nonumber\n\\end{equation}\nBy further dividing RHS and LHS by 2, we get:\n\\begin{equation}\n\\frac{exp(i\\omega dt) + exp(-i\\omega dt)}{2} - 1 = vp_j^2 \\frac{dt^2}{dx^2}\\biggl(\\frac{exp(ik dx) + exp(-ik dx)}{2} - 1\\biggr).\\nonumber\n\\end{equation}\nUsing the definition \n\\begin{equation}\n\\cos(x) = \\frac{exp(ix) + exp(-ix)}{2},\\nonumber\n\\end{equation}\nwe can simplify this expression to:\n\\begin{equation}\ncos(\\omega dt) - 1 = vp_j^2 \\frac{dt^2}{dx^2}\\biggl(cos(k dx) - 1\\biggr).\\nonumber\n\\end{equation}\nAfter some further rearrangements and division of both sides by 2, leads to:\n\\begin{equation}\n\\frac{1 - cos(\\omega dt)}{2} = vp_j^2 \\frac{dt^2}{dx^2}\\biggl(\\frac{1 - cos(k dx)}{2}\\biggr).\\nonumber\n\\end{equation}\nWith the relation \n\\begin{equation}\nsin^2\\biggl(\\frac{x}{2}\\biggr) = \\frac{1-cos(x)}{2}, \\nonumber\n\\end{equation}\nwe get \n\\begin{equation}\nsin^2\\biggl(\\frac{\\omega dt}{2}\\biggr) = vp_j^2 \\frac{dt^2}{dx^2}\\biggl(sin^2\\biggl(\\frac{k dx}{2}\\biggr)\\biggr).\\nonumber\n\\end{equation}\nTaking the square root of both sides finally leads to \n\\begin{equation}\nsin\\frac{\\omega dt}{2} = vp_j \\frac{dt}{dx}\\biggl(sin\\frac{k dx}{2}\\biggr).\n\\end{equation}\nThis result is quite interesting. Notice, that the amplitude of the sine functions $sin(x)$ on the LHS and RHS vary between -1 and 1. 
However, if the factor on the RHS\n\\begin{equation}\n\\epsilon = vp_j \\frac{dt}{dx} \\nonumber\n\\end{equation}\nis larger 1 ($\\epsilon>1$), you get only imaginary solutions, while the real part is zero. Consequently, the numerical scheme becomes unstable. Therefore, the criterion\n\\begin{equation}\n\\epsilon = vp_j \\frac{dt}{dx} \\le 1 \\nonumber\n\\end{equation}\nhas to be satisfied. This very important stability criterion was first described by the german-american mathematicians Richard Courant, Kurt Friedrichs and Hans Lewy in this paper from 1928. The Courant-Friedrichs-Lewy criterion or in short CFL-criterion, can also be rearranged to the time step dt, assuming that we have defined a spatial grid point distance dx:\n\\begin{equation}\ndt \\le \\frac{dx}{vp_j}. \\nonumber\n\\end{equation}\nThis criterion is only correct for the FD solution of the 1D acoustic wave equation using the 3-point spatial/temporal FD operators and an explicit time-stepping scheme (eq.(1)).\nMore generally, we can write the Courant criterion as \n\\begin{equation}\ndt \\le \\frac{dx}{\\zeta vp_j}, \\nonumber\n\\end{equation}\nwhere the factor $\\zeta$ depends on the used FD operator, dimension of the problem (1D, 2D, 3D) and the overall algorithm. Even though the CFL criterion strictly depends on the P-wave velocity at a specific grid point, in most cases the maximum velocity $v_{max}$ in the medium is used to estimate a constant time step $dt$ for the whole FD modelling run:\n\\begin{equation}\ndt \\le \\frac{dx}{\\zeta v_{max}}, \\nonumber\n\\end{equation}\nwhere $v_{max}$ is the maximum P-wave velocity in the acoustic case or the maximum S-wave velocity for the SH-problem. While the fulfillment of the CFL criterion leads to a stable simulation, it does not guarantee accurate modelling results. \nThe CFL criterion allows us to estimate an appropriate time step dt based on the maximum velocity in the model and the spatial grid point distance. But how do we choose the spatial gridpoint distance dx?\nNumerical grid dispersion\nIn the modelling examples at the beginning of this Jupyter notebook, we have seen that the modelled wavefield can become subject to dispersion, when choosing a too large spatial grid point distance. The result of the von Neumann analysis can also explain this behaviour. Starting from eq. (2)\n\\begin{equation}\nsin\\frac{\\omega dt}{2} = \\epsilon\\; sin\\frac{k dx}{2}, \\nonumber\n\\end{equation}\nwe apply the $arcsin$ to both sides\n\\begin{equation}\n\\frac{\\omega dt}{2} = arcsin\\biggl(\\epsilon\\; sin\\frac{k dx}{2} \\biggr)\\nonumber\n\\end{equation}\nand multiply the result by $\\frac{2}{dt}$, we get\n\\begin{equation}\n\\omega = \\frac{2}{dt}arcsin\\biggl(\\epsilon\\; sin\\frac{k dx}{2}\\biggr)\\nonumber\n\\end{equation}\nInserting this $\\omega-k$ dependence into the definition of the phase velocity \n\\begin{equation}\nv_{phase} = \\frac{\\omega}{k},\\nonumber\n\\end{equation}\nleads to \n\\begin{equation}\nv_{phase} = \\frac{2}{k dt}arcsin\\biggl(\\epsilon\\; sin\\frac{k dx}{2}\\biggr).\\nonumber\n\\end{equation}\nAs you can see, the phase velocity of the numerical FD solution is a function of the wavenumber k. Therefore, it can be subject to dispersion. 
To investigate this problem in more detail, we rewrite the phase velocity.\nWith the wavenumber $k=\\frac{2 \\pi}{\\lambda}$, where $\\lambda$ denotes the wavelength, we get:\n\\begin{equation}\nv_{phase} = \\frac{\\lambda}{\\pi dt}arcsin\\biggl(\\epsilon\\; sin\\frac{\\pi dx}{\\lambda}\\biggr).\\nonumber\n\\end{equation}\nFrom the definition of $\\epsilon = vp_0 \\frac{dt}{dx}$, we can replace $dt$ by $dt = \\frac{\\epsilon dx}{vp_0}$ in the phase velocity:\n\\begin{equation}\nv_{phase} = \\frac{\\lambda vp_0}{\\pi \\epsilon dx}arcsin\\biggl(\\epsilon\\; sin\\frac{\\pi dx}{\\lambda}\\biggr).\\nonumber\n\\end{equation}\nIntroducing the number of grid points per wavelength $N_\\lambda = \\frac{\\lambda}{dx}$, we finally get:\n\\begin{equation}\nv_{phase} = \\frac{N_\\lambda vp_0}{\\pi \\epsilon}arcsin\\biggl(\\epsilon\\; sin\\frac{\\pi }{N_\\lambda}\\biggr).\\nonumber\n\\end{equation}\nLet's plot this result for $N_\\lambda$ between 2 and 12, the homogeneous P-wave velocity $vp0\\;=\\;333\\;m/s$, and $\\epsilon$ values form 0.7 to 1.0 ...", "Nwave = np.arange(2,12,0.25) # numbers per wavelength\nvp0 = 333.0 # P-wave velocity (m/s)\n\ndef dispersion_1D(eps):\n \n vp_phase = (vp0*Nwave/(np.pi*eps)) * np.arcsin(eps*np.sin(np.pi/Nwave)) \n \n return vp_phase\n\nvp_eps_1 = dispersion_1D(1.0)\nvp_eps_2 = dispersion_1D(0.9)\nvp_eps_3 = dispersion_1D(0.8)\nvp_eps_4 = dispersion_1D(0.7)\n\nplt.plot(Nwave, vp_eps_1, 'b-',lw=3,label=r\"$\\epsilon=1$\")\nplt.plot(Nwave, vp_eps_2, 'r-',lw=3,label=r\"$\\epsilon=0.9$\") \nplt.plot(Nwave, vp_eps_3, 'g-',lw=3,label=r\"$\\epsilon=0.8$\") \nplt.plot(Nwave, vp_eps_4, 'k-',lw=3,label=r\"$\\epsilon=0.7$\") \nplt.title('Grid dispersion')\nplt.xlabel('Number of grid points per wavelength')\nplt.ylabel('Phase velocity $v_{phase}$, m/s')\nplt.legend()\nplt.grid()\nplt.show() ", "Notice, that no grid dispersion occurs in the case of $\\epsilon=1$. Keep in mind though that this only true for the homogeneous medium. Realistic modelling problems have a variable P-wave velocity, so we have not a constant $\\epsilon$ within the model.\nFor all values $\\epsilon<1$, numerical dispersion can occur, if the sampling of the spatial model is too small, especially when using only the Nyquist criterion $N_\\lambda = 2$. For the 1D acoustic wave equation, the dispersion is minimized for $N_\\lambda$ values between 8-12.\nMore generally, we can define the grid dispersion criterion for the spatial gridpoint distance\n\\begin{equation}\ndx \\le \\frac{\\lambda_{min}}{N_\\lambda} = \\frac{v_{min}}{N_\\lambda f_{max}},\\nonumber\n\\end{equation}\nwhere $N_\\lambda$ depends on the used FD operator, numerical scheme and also wave type, $v_{min}$ is the minimum P- or S-wave velocity in the model and $f_{max}$ the maximum frequency of the source wavelet. 
\nFinally, let's apply the dispersion and stability criteria to our test problem in order to find optimum dt and dx values ...", "# calculate dx according to the grid dispersion criterion\nNlam = 12 # number of grid points per wavelength\nfmax = 50.0 # fmax = 2 * f0 (Hz)\ndx = vp0 / (Nlam*fmax) # spatial gridpoint distance (m)\nprint('dx = ', dx)\n\n# calculate dt according to the CFL criterion\ndt = dx / vp0 # time step (s)\n\n# check CFL criterion\nepsilon = vp0 * dt / dx\nprint('epsilon = ', epsilon)\nif(epsilon>1.0):\n print('Warning: CFL condition is violated!')\nprint('dt = ', dt)\n\nFD_1D_acoustic(dt,dx)", "What we learned:\n\nEstimation of the Courant-Friedrichs-Lewy (CFL) stability criterion $dt \\le \\frac{dx}{v_{max}}$ for the 1D acoustic wave equation using the von Neumann analysis\nDispersion analysis of the 1D acoustic wave equation\nGrid dispersion criterion: $dx \\le \\frac{v_{min}}{N_\\lambda f_{max}}$\nOptimize FD modelling parameters by using the grid dispersion and CFL conditions" ]
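The two rules collected under "What we learned" in the notebook above can be bundled into a small helper so that dx and dt always come from the model and source parameters rather than from trial and error. This is a sketch for the homogeneous 1D example only: v_min = v_max = vp0, the CFL factor zeta is 1 for this 3-point scheme, and N_lambda = 12 as suggested by the dispersion plot.

```python
def fd_grid_parameters(vmin, vmax, fmax, n_lambda=12, zeta=1.0):
    """Return (dx, dt) satisfying the grid-dispersion and CFL criteria."""
    dx = vmin / (n_lambda * fmax)   # dispersion: dx <= v_min / (N_lambda * f_max)
    dt = dx / (zeta * vmax)         # stability:  dt <= dx / (zeta * v_max)
    return dx, dt

dx, dt = fd_grid_parameters(vmin=333.0, vmax=333.0, fmax=2 * 25.0)
print(dx, dt, 333.0 * dt / dx)      # epsilon = 1 for this homogeneous model
```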
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
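One practical aside on the implementation in the finite-difference notebook above: the Python loop that fills d2px is the most expensive part of FD_1D_acoustic. The same 3-point stencil can be written with NumPy slicing, which produces identical values (up to round-off) and usually runs much faster; this is only a performance sketch, the algorithm itself is unchanged.

```python
import numpy as np

def second_derivative(p, dx):
    """3-point stencil for d2p/dx2, leaving the two boundary points at zero."""
    d2px = np.zeros_like(p)
    d2px[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx ** 2
    return d2px

# inside the time loop the explicit update stays exactly the same:
# pnew = 2 * p - pold + vp ** 2 * dt ** 2 * second_derivative(p, dx)
```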
cydcowley/Imperial-Visualizations
visuals_maths/2D_Transformations/notebook/2D_transformations.ipynb
mit
[ "<img src=\"imperial_logo.png\" width=\"275\" align=\"left\"><p style=\"text-align: right\">Created by Ryo Kurashina<br>Email: rk2014@ic.ac.uk<br><a>HTML Version (This will be a link)</a></p><br>\n2D Transformations\nLearning Objectives:\n\nUnderstand basic types of matrix transformations.\nBe able to implement these transformations on Python to create animations on Plotly.\n\nTable of Contents\n\nIntroduction\nRotation Matrices\nScaling Matrices\nCustom Matrices\nSkew Matrices\nDeterminants\n\n1. Introduction\nA general matrix transformation in 2D can be written as: $$A:I!R^2 \\mapsto I!R^2$$<br>$$A \\begin{pmatrix}x\\y\\end{pmatrix}=\\begin{pmatrix}a&b\\c&d\\end{pmatrix}\\begin{pmatrix}x\\y\\end{pmatrix}=\n\\begin{pmatrix}ax+by\\cx+dy\\end{pmatrix}$$<br>\nOn this IPython Notebook we will be looking at particular cases of these matrix transformations and how they transform vectors from a geometric point of view.\n2. Rotation Matrices\nIf we consider any point in the $x$-$y$ plane to be written in terms of its $\\mathbf{\\hat{i}},\\,\\mathbf{\\hat{j}}$ unit vectors: \n<br><br>\n$$ \\begin{pmatrix}x \\ y \\end{pmatrix} = x\\begin{pmatrix} 1 \\ 0 \\end{pmatrix} + y\\begin{pmatrix} 0 \\ 1 \\end{pmatrix} \\qquad (1)$$\n<br>\nThen rotation of both of these unit vectors by an amount $\\theta$ would lead to the unit vectors being mapped to:\n<br><br>\n$$ R_{\\theta} : \\begin{pmatrix} 1 \\ 0 \\end{pmatrix} \\mapsto \\begin{pmatrix} \\cos\\theta \\ \\sin\\theta\\end{pmatrix}, \n\\; R_{\\theta} : \\begin{pmatrix} 0 \\ 1 \\end{pmatrix} \\mapsto \\begin{pmatrix} -\\sin\\theta \\ \\cos\\theta\\end{pmatrix} \\qquad (2)$$\n<br> \nNow, if we want to rotate an arbitrary vector by an amount $\\theta$ then we can combine $(1)$ and $(2)$ to get:\n<br><br>\n$$ R_{\\theta} : \\begin{pmatrix} x \\ y \\end{pmatrix} \\mapsto x\\begin{pmatrix} \\cos\\theta \\ \\sin\\theta\\end{pmatrix} +y\\begin{pmatrix} -\\sin\\theta \\ \\cos\\theta\\end{pmatrix} $$ \n<br>\nWhich is equivalent to the matrix or <b>linear</b> transformation:\n<br><br>\n$$ R_{\\theta} \\begin{pmatrix} x \\ y \\end{pmatrix} = \\begin{pmatrix} \\cos\\theta & -\\sin\\theta \\ \\sin\\theta & \\cos\\theta\\end{pmatrix}\\begin{pmatrix} x \\ y \\end{pmatrix} $$", "# Import libraries/packages to be used (HIT SHIFT + ENTER TO RUN CELL)\nimport numpy as np\nimport math as ma \nimport plotly.figure_factory as ff\nfrom plotly.offline import download_plotlyjs,init_notebook_mode,plot,iplot\nimport plotly.graph_objs as go\ninit_notebook_mode(connected=True)", "Now, let's apply the theory of rotation matrices to write some code which will rotate a vector by amount $\\theta$. The function rotmat(th) returns the rotation matrix.", "def rotmat(th):\n rotator = np.array([[ma.cos(th), -ma.sin(th)],[ma.sin(th), ma.cos(th)]])\n return rotator", "This function rotation(th, vec) takes in a rotation angle and vector input and returns a tuple of numpy arrays which can be animated to create a \"smooth transition\" of the rotation using Plotly Animate.", "def rotation(th, vec):\n # Parameters \n t = np.linspace(0,1,50)\n tt = th*t\n # Rotation matrix\n BigR = np.identity(2)\n for i in range(len(tt)-1):\n BigR = np.vstack((BigR,rotmat(tt[i+1])))\n newvec = np.matmul(BigR,vec)\n x = newvec[::2]\n y = newvec[1::2]\n return (x,y)", "In the cell below, enter a rotation angle and vector inside the rotation() function which has some inputs inside already and hit shift enter to generate an animation of the rotation! (<b>N.B. 
Don't worry too much if you're not familiar with the plotly syntax, it's more important you understand what the matrices are doing, the cell will run itself after you choose the input arguments and hit Shift + Enter</b>)", "# Enter a 2D vector here...\nvec = [1,0]\n# Enter rotation angle here...\nth = 4\n(x0,y0) = rotation(th, vec)\nx0 = list(x0)\ny0 = list(y0)\n\n# Syntax for plotly, see documentation for more info\ndata = [{\"x\": [x0[i],0], \"y\": [y0[i],0], \"frame\": i} for i in range(len(x0))]\n\nfigure = {'data': [{'x': data[0]['x'], 'y': data[0]['y']}],\n 'layout': {'xaxis': {'range': [-2, 2], 'autorange': False},\n 'yaxis': {'range': [-2, 2], 'autorange': False},\n 'height': 600,\n 'width': 600,\n 'title': 'Rotation Animation',\n 'updatemenus': [{'type': 'buttons',\n 'buttons': [{'label': 'Play',\n 'method': 'animate',\n 'args': [None, dict(frame=dict(duration=50, redraw=False), \n transition=dict(duration=50),\n fromcurrent=True,\n mode='immediate')]}]}]\n },\n 'frames': [{'data': [{'x': data[i]['x'], 'y': data[i]['y']}]} for i in range(len(x0))]\n }\n# Plot\niplot(figure)", "3. Scaling Matrices\nNow we are familiar with rotation matrices, we will move onto another type of matrix transformation known as a \"scaling\" matrix. Scaling matrices have the form:\n<br>\n<br>\n$$ \\text{Scale} = \\begin{pmatrix} s1 & 0 \\ 0 & s2 \\end{pmatrix} $$\n<br>\nNow let's look at what this matrix does to an arbitrary vector $(x, y)$:\n<br><br>\n$$ \\begin{pmatrix} s1 & 0 \\ 0 & s2 \\end{pmatrix}\\begin{pmatrix} x \\ y\\end{pmatrix} = s1\\begin{pmatrix}x\\0\\end{pmatrix}+s2\\begin{pmatrix}0\\y\\end{pmatrix}$$\n<br>\nAs we can see, this \"scale\" matrix scales the vector in the $x$-direction by a factor $s1$ and scales the vector in the $y$-direction by a factor s2. Now we write a function scale(vec, *args) which takes in a vector input as well as an additional 1 OR 2 arguments. 
If one is given, then a matrix which scales both $x$ and $y$ directions equally is returned while if 2 are given then a matrix which scales by the arguments given is returned.", "# Input vector, scale 1, scale 2 as arguments\ndef scale(vec, *args):\n assert len(vec)==2, \"Please provide a 2D vector for the first argument\"\n assert len(args)==1 or len(args)==2, \"Please provide 1 or 2 scale arguments\"\n t = np.linspace(1,args[0],50)\n # If only one scale argument given then scale in both directions by same amount\n if len(args) == 1:\n x = vec[0]*t\n y = vec[1]*t\n return(x,y)\n # If two scale arguments given then scale individual directions\n else:\n s = np.linspace(1,args[1],50)\n x = vec[0]*t\n y = vec[1]*s\n return(x,y)", "Now try it for yourself by running the function with your own inputs, by default 2 scale arguments have been inputted but you can try 1 if you like as well.", "# Again input vector here\nvec = [1,1]\n# Arguments here\ns1 = 2\ns2 = 3\n(x1,y1) = scale(vec, s1, s2)\nx1 = list(x1)\ny1 = list(y1)\n\n# Plotly syntax again\ndata = [{\"x\": [x1[i],0], \"y\": [y1[i],0], \"frame\": i} for i in range(len(x1))]\n\nfigure = {'data': [{'x': data[0]['x'], 'y': data[0]['y']}],\n 'layout': {'xaxis': {'range': [-2, 2], 'autorange': False},\n 'yaxis': {'range': [-2, 2], 'autorange': False},\n 'height': 600,\n 'width': 600,\n 'title': 'Scale Animation',\n 'updatemenus': [{'type': 'buttons',\n 'buttons': [{'label': 'Play',\n 'method': 'animate',\n 'args': [None, dict(frame=dict(duration=50, redraw=False), \n transition=dict(duration=50),\n fromcurrent=True,\n mode='immediate')]}]}]\n },\n 'frames': [{'data': [{'x': data[i]['x'], 'y': data[i]['y']}]} for i in range(len(x1))]\n }\n\niplot(figure)", "4. Custom Matrix\nNow we have explained some basic matrix transformations, feel free to use the following code to create your own 2x2 matrix transformations.", "# Custom 2D transformation\ndef custom(vec):\n print(\"Enter values for 2x2 matrix [[a,b],[c,d]] \")\n a = input(\"Enter a value for a: \")\n b = input(\"Enter a value for b: \")\n c = input(\"Enter a value for c: \")\n d = input(\"Enter a value for d: \")\n try:\n a = float(a)\n except ValueError:\n print(\"Enter a float or integer for a\")\n try:\n b = float(b)\n except ValueError:\n print(\"Enter a float or integer for b\")\n try:\n c = float(c)\n except ValueError:\n print(\"Enter a float or integer for c\")\n try:\n d = float(d)\n except ValueError:\n print(\"Enter a float or integer for d\")\n \n A = [[a,b],[c,d]]\n t = np.linspace(0,1,50)\n w = np.matmul(A,vec)-vec\n x = [vec[0]+tt*w[0] for tt in t]\n y = [vec[1]+tt*w[1] for tt in t]\n \n return(x,y)\n\n(x2,y2) = custom([1,1])\nx2 = list(x2)\ny2 = list(y2)\n\ndata = [{\"x\": [x2[i],0], \"y\": [y2[i],0], \"frame\": i} for i in range(len(x2))]\n\nfigure = {'data': [{'x': data[0]['x'], 'y': data[0]['y']}],\n 'layout': {'xaxis': {'range': [-2, 2], 'autorange': False},\n 'yaxis': {'range': [-2, 2], 'autorange': False},\n 'height': 600,\n 'width': 600,\n 'title': 'Custom Animation',\n 'updatemenus': [{'type': 'buttons',\n 'buttons': [{'label': 'Play',\n 'method': 'animate',\n 'args': [None, dict(frame=dict(duration=50, redraw=False), \n transition=dict(duration=50),\n fromcurrent=True,\n mode='immediate')]}]}]\n },\n 'frames': [{'data': [{'x': data[i]['x'], 'y': data[i]['y']}]} for i in range(len(x2))]\n }\n\niplot(figure)", "5. Skew Matrices\nFor the next matrix we will use a slightly different approach to visualize what this transformation does. 
Instead of taking one vector and following what the matrix does to it, we will take 3 vectors ((1, 0), (1, 1) and (0, 1)) and look at what the matrix does to the entire area captured between these 3 points and the origin (i.e. the unit box). Why is this? <br>\nWell, matrix transformations are linear transformations and any point inside the box is a linear combination of $\\mathbf{\\hat{i}},\\,\\mathbf{\\hat{j}}$ unit vectors. Consider a matrix $A$ acting upon a vector (x,y). <br><br>\n$$ A \\begin{pmatrix}x\\y\\end{pmatrix} = \\begin{pmatrix}a&b\\c&d\\end{pmatrix}\\begin{pmatrix}x\\y\\end{pmatrix} =\nx\\begin{pmatrix}a\\c\\end{pmatrix}+y\\begin{pmatrix}b\\d\\end{pmatrix}\n$$ <br>\nAs we can see, the $\\mathbf{\\hat{i}},\\,\\mathbf{\\hat{j}}$ unit vectors are mapped to vectors $(a,\\,c)$ and $(b,\\,d)$ , respectively, so any points inside the unit square are mapped inside the parallelogram formed by the 2 vectors $(a,\\,c)$ and $(b,\\,d)$, (see the <b>Parallelipiped</b> visualization for more info). To visualize this, let's write a function which returns a skew matrix and see how it deforms the unit square. It's okay if you're not sure what a skew matrix is or what it does as you'll see what happens when we make the animation.", "def skew(axis, vec):\n t = np.linspace(0,1,50)\n # Skew in x-direction\n if axis == 0:\n A = [[1,1],[0,1]]\n w = np.matmul(A,vec)-vec\n x = [vec[0]+tt*w[0] for tt in t]\n y = [vec[1]+tt*w[1] for tt in t]\n return(x, y)\n # Skew in y-direction\n elif axis == 1:\n A = [[1,0],[1,1]]\n w = np.matmul(A,vec)-vec\n x = [vec[0]+tt*w[0] for tt in t]\n y = [vec[1]+tt*w[1] for tt in t]\n return(x, y)\n else: \n return ValueError('Axis must be 0 or 1')", "Now we write a function which will take 6 arrays in total (2 for (1, 0), 2 for (0, 1) and 2 for (1, 1)) and shows an animation of how the 3 vectors are transformed. Remember that we can forget about the origin as it is always mapped to itself (this is a standard property of linear transformations).", "# Function that returns data in a format to be used by plotly and then plots it \ndef sqtransformation(x0,x1,x2,y0,y1,y2):\n data = [{\"x\": [0,x0[i],x1[i],x2[i],0], \"y\": [0,y0[i],y1[i],y2[i],0], \"frame\": i} for i in range(len(x0))]\n\n figure = {'data': [{'x': data[0]['x'], 'y': data[0]['y'], 'fill':'tonexty'}],\n 'layout': {'xaxis': {'range': [-2, 2], 'autorange': False},\n 'yaxis': {'range': [-2, 2], 'autorange': False},\n 'height': 600,\n 'width': 600,\n 'title': 'Square Animation',\n 'updatemenus': [{'type': 'buttons',\n 'buttons': [{'label': 'Play',\n 'method': 'animate',\n 'args': [None, dict(frame=dict(duration=50, redraw=False), \n transition=dict(duration=50),\n fromcurrent=True,\n mode='immediate')]}]}]\n },\n 'frames': [{'data': [{'x': data[i]['x'], 'y': data[i]['y']}]} for i in range(len(x0))]\n }\n\n iplot(figure)\n\n# Transform the 3 vectors that form the unit box. \n(x0,y0) = skew(1,[1,0])\n(x1,y1) = skew(1,[1,1])\n(x2,y2) = skew(1,[0,1])\n\nsqtransformation(x0,x1,x2,y0,y1,y2)", "So a skew transformation in 2D can be seen as a \"shear\" where the box is pushed into a parallelogram.\nA good exercise might be to combine the above script as well as the functions we have already written into making one wrapper function which will transform a square using any of the transformations we have discussed above (see html version of this pynb).\n6. 
Determinants\nThe determinant of a 2 x 2 matrix is defined to be:\n$$ |A| = \begin{vmatrix}a_1&a_2\b_1&b_2\end{vmatrix} = a_1b_2-a_2b_1$$ <br>\nNow if we take the magnitude of the cross product of two 3D vectors $\vec{a}=(a_1,\,a_2,\,0)$ and $\vec{b}=(b_1,\,b_2,\,0)$ with a zero $z$-component, recall that this is the area of a parallelogram formed by $\vec{a}$ and $\vec{b}$ (see Parallelepiped visualisation), then we get: \n$$ \mid\vec{a}\times\vec{b}\mid = \begin{vmatrix}\mathbf{\hat{i}}&\mathbf{\hat{j}}&\mathbf{\hat{k}}\a_1&a_2&0\b_1&b_2&0\end{vmatrix} = a_1b_2-a_2b_1 $$ <br>\nSo for two vectors which lie on the $x-y$ plane, the absolute value of the cross product is equal to the area of the parallelogram formed. However, any two vectors in 3D are always coplanar so this result is always true for two general 3D vectors, since we can always rotate coordinate systems such that the two vectors lie on the $x-y$ plane (Google isometries for more info), without changing the area of the parallelogram." ]
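The determinant-as-area statement in the section above is easy to verify numerically: map the unit vectors through a matrix, compute the parallelogram area spanned by their images, and compare with |det A|. The matrix below is an arbitrary example, independent of the plotly animation code in the notebook.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 1.5]])             # arbitrary example matrix

a = A @ np.array([1.0, 0.0])           # image of i-hat (first column of A)
b = A @ np.array([0.0, 1.0])           # image of j-hat (second column of A)

area = abs(a[0] * b[1] - a[1] * b[0])  # |a1*b2 - a2*b1|, parallelogram area
print(area, abs(np.linalg.det(A)))     # both print 2.5
```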
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
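Two further properties of the rotation matrices from the notebook above that can be checked in a couple of lines: composing two rotations is the same as rotating by the summed angle, and a rotation has determinant 1, so it preserves area. The small helper below re-implements the same matrix as rotmat so that the snippet is self-contained.

```python
import numpy as np

def rot(th):
    """Same 2D rotation matrix as rotmat(th) in the notebook."""
    return np.array([[np.cos(th), -np.sin(th)],
                     [np.sin(th),  np.cos(th)]])

a, b = 0.4, 1.1
print(np.allclose(rot(a) @ rot(b), rot(a + b)))   # composition adds angles -> True
print(np.isclose(np.linalg.det(rot(a)), 1.0))     # det = 1: rotations preserve area
```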
smharper/openmc
examples/jupyter/mdgxs-part-i.ipynb
mit
[ "This IPython Notebook introduces the use of the openmc.mgxs module to calculate multi-energy-group and multi-delayed-group cross sections for an infinite homogeneous medium. In particular, this Notebook introduces the the following features:\n\nCreation of multi-delayed-group cross sections for an infinite homogeneous medium\nCalculation of delayed neutron precursor concentrations\n\nIntroduction to Multi-Delayed-Group Cross Sections (MDGXS)\nMany Monte Carlo particle transport codes, including OpenMC, use continuous-energy nuclear cross section data. However, most deterministic neutron transport codes use multi-group cross sections defined over discretized energy bins or energy groups. Furthermore, kinetics calculations typically separate out parameters that involve delayed neutrons into prompt and delayed components and further subdivide delayed components by delayed groups. An example is the energy spectrum for prompt and delayed neutrons for U-235 and Pu-239 computed for a light water reactor spectrum.", "from IPython.display import Image\nImage(filename='images/mdgxs.png', width=350)", "A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The openmc.mgxs Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sections with arbitrary energy discretizations and different delayed group models (e.g. 6, 7, or 8 delayed group models) for fine-mesh heterogeneous deterministic neutron transport applications.\nBefore proceeding to illustrate how one may use the openmc.mgxs module, it is worthwhile to define the general equations used to calculate multi-energy-group and multi-delayed-group cross sections. This is only intended as a brief overview of the methodology used by openmc.mgxs - we refer the interested reader to the large body of literature on the subject for a more comprehensive understanding of this complex topic.\nIntroductory Notation\nThe continuous real-valued microscopic cross section may be denoted $\\sigma_{n,x}(\\mathbf{r}, E)$ for position vector $\\mathbf{r}$, energy $E$, nuclide $n$ and interaction type $x$. Similarly, the scalar neutron flux may be denoted by $\\Phi(\\mathbf{r},E)$ for position $\\mathbf{r}$ and energy $E$. Note: Although nuclear cross sections are dependent on the temperature $T$ of the interacting medium, the temperature variable is neglected here for brevity.\nSpatial and Energy Discretization\nThe energy domain for critical systems such as thermal reactors spans more than 10 orders of magnitude of neutron energies from 10$^{-5}$ - 10$^7$ eV. The multi-group approximation discretization divides this energy range into one or more energy groups. In particular, for $G$ total groups, we denote an energy group index $g$ such that $g \\in {1, 2, ..., G}$. The energy group indices are defined such that the smaller group the higher the energy, and vice versa. The integration over neutron energies across a discrete energy group is commonly referred to as energy condensation.\nThe delayed neutrons created from fissions are created from > 30 delayed neutron precursors. Modeling each of the delayed neutron precursors is possible, but this approach has not recieved much attention due to large uncertainties in certain precursors. Therefore, the delayed neutrons are often combined into \"delayed groups\" that have a set time constant, $\\lambda_d$. 
Some cross section libraries use the same group time constants for all nuclides (e.g. JEFF 3.1) while other libraries use different time constants for all nuclides (e.g. ENDF/B-VII.1). Multi-delayed-group cross sections can either be created with the entire delayed group set, a subset of delayed groups, or integrated over all delayed groups.\nMulti-group cross sections are computed for discretized spatial zones in the geometry of interest. The spatial zones may be defined on a structured and regular fuel assembly or pin cell mesh, an arbitrary unstructured mesh or the constructive solid geometry used by OpenMC. For a geometry with $K$ distinct spatial zones, we designate each spatial zone an index $k$ such that $k \\in {1, 2, ..., K}$. The volume of each spatial zone is denoted by $V_{k}$. The integration over discrete spatial zones is commonly referred to as spatial homogenization.\nGeneral Scalar-Flux Weighted MDGXS\nThe multi-group cross sections computed by openmc.mgxs are defined as a scalar flux-weighted average of the microscopic cross sections across each discrete energy group. This formulation is employed in order to preserve the reaction rates within each energy group and spatial zone. In particular, spatial homogenization and energy condensation are used to compute the general multi-group cross section. For instance, the delayed-nu-fission multi-energy-group and multi-delayed-group cross section, $\\nu_d \\sigma_{f,x,k,g}$, can be computed as follows:\n$$\\nu_d \\sigma_{n,x,k,g} = \\frac{\\int_{E_{g}}^{E_{g-1}}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r} \\nu_d \\sigma_{f,x}(\\mathbf{r},E')\\Phi(\\mathbf{r},E')}{\\int_{E_{g}}^{E_{g-1}}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\Phi(\\mathbf{r},E')}$$\nThis scalar flux-weighted average microscopic cross section is computed by openmc.mgxs for only the delayed-nu-fission and delayed neutron fraction reaction type at the moment. These double integrals are stochastically computed with OpenMC's tally system - in particular, filters on the energy range and spatial zone (material, cell, universe, or mesh) define the bounds of integration for both numerator and denominator.\nMulti-Group Prompt and Delayed Fission Spectrum\nThe energy spectrum of neutrons emitted from fission is denoted by $\\chi_{n}(\\mathbf{r},E' \\rightarrow E'')$ for incoming and outgoing energies $E'$ and $E''$, respectively. Unlike the multi-group cross sections $\\sigma_{n,x,k,g}$ considered up to this point, the fission spectrum is a probability distribution and must sum to unity. The outgoing energy is typically much less dependent on the incoming energy for fission than for scattering interactions. As a result, it is common practice to integrate over the incoming neutron energy when computing the multi-group fission spectrum. The fission spectrum may be simplified as $\\chi_{n}(\\mathbf{r},E)$ with outgoing energy $E$.\nComputing the cumulative energy spectrum of emitted neutrons, $\\chi_{n}(\\mathbf{r},E)$, has been presented in the mgxs-part-i.ipynb notebook. Here, we will present the energy spectrum of prompt and delayed emission neutrons, $\\chi_{n,p}(\\mathbf{r},E)$ and $\\chi_{n,d}(\\mathbf{r},E)$, respectively. Unlike the multi-group cross sections defined up to this point, the multi-group fission spectrum is weighted by the fission production rate rather than the scalar flux. This formulation is intended to preserve the total fission production rate in the multi-group deterministic calculation. 
In order to mathematically define the multi-group fission spectrum, we denote the microscopic fission cross section as $\\sigma_{n,f}(\\mathbf{r},E)$ and the average number of neutrons emitted from fission interactions with nuclide $n$ as $\\nu_{n,p}(\\mathbf{r},E)$ and $\\nu_{n,d}(\\mathbf{r},E)$ for prompt and delayed neutrons, respectively. The multi-group fission spectrum $\\chi_{n,k,g,d}$ is then the probability of fission neutrons emitted into energy group $g$ and delayed group $d$. There are not prompt groups, so inserting $p$ in place of $d$ just denotes all prompt neutrons. \nSimilar to before, spatial homogenization and energy condensation are used to find the multi-energy-group and multi-delayed-group fission spectrum $\\chi_{n,k,g,d}$ as follows:\n$$\\chi_{n,k,g',d} = \\frac{\\int_{E_{g'}}^{E_{g'-1}}\\mathrm{d}E''\\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\chi_{n,d}(\\mathbf{r},E'\\rightarrow E'')\\nu_{n,d}(\\mathbf{r},E')\\sigma_{n,f}(\\mathbf{r},E')\\Phi(\\mathbf{r},E')}{\\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\nu_{n,d}(\\mathbf{r},E')\\sigma_{n,f}(\\mathbf{r},E')\\Phi(\\mathbf{r},E')}$$\nThe fission production-weighted multi-energy-group and multi-delayed-group fission spectrum for delayed neutrons is computed using OpenMC tallies with energy in, energy out, and delayed group filters. Alternatively, the delayed group filter can be omitted to compute the fission spectrum integrated over all delayed groups.\nThis concludes our brief overview on the methodology to compute multi-energy-group and multi-delayed-group cross sections. The following sections detail more concretely how users may employ the openmc.mgxs module to power simulation workflows requiring multi-group cross sections for downstream deterministic calculations.\nGenerate Input Files", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport openmc\nimport openmc.mgxs as mgxs", "First we need to define materials that will be used in the problem. Let's create a material for the homogeneous medium.", "# Instantiate a Material and register the Nuclides\ninf_medium = openmc.Material(name='moderator')\ninf_medium.set_density('g/cc', 5.)\ninf_medium.add_nuclide('H1', 0.03)\ninf_medium.add_nuclide('O16', 0.015)\ninf_medium.add_nuclide('U235', 0.0001)\ninf_medium.add_nuclide('U238', 0.007)\ninf_medium.add_nuclide('Pu239', 0.00003)\ninf_medium.add_nuclide('Zr90', 0.002)", "With our material, we can now create a Materials object that can be exported to an actual XML file.", "# Instantiate a Materials collection and export to XML\nmaterials_file = openmc.Materials([inf_medium])\nmaterials_file.export_to_xml()", "Now let's move on to the geometry. This problem will be a simple square cell with reflective boundary conditions to simulate an infinite homogeneous medium. 
The first step is to create the outer bounding surfaces of the problem.", "# Instantiate boundary Planes\nmin_x = openmc.XPlane(boundary_type='reflective', x0=-0.63)\nmax_x = openmc.XPlane(boundary_type='reflective', x0=0.63)\nmin_y = openmc.YPlane(boundary_type='reflective', y0=-0.63)\nmax_y = openmc.YPlane(boundary_type='reflective', y0=0.63)", "With the surfaces defined, we can now create a cell that is defined by intersections of half-spaces created by the surfaces.", "# Instantiate a Cell\ncell = openmc.Cell(cell_id=1, name='cell')\n\n# Register bounding Surfaces with the Cell\ncell.region = +min_x & -max_x & +min_y & -max_y\n\n# Fill the Cell with the Material\ncell.fill = inf_medium", "We now must create a geometry and export it to XML.", "# Create Geometry and set root Universe\nopenmc_geometry = openmc.Geometry([cell])\n\n# Export to \"geometry.xml\"\nopenmc_geometry.export_to_xml()", "Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.", "# OpenMC simulation parameters\nbatches = 50\ninactive = 10\nparticles = 5000\n\n# Instantiate a Settings object\nsettings_file = openmc.Settings()\nsettings_file.batches = batches\nsettings_file.inactive = inactive\nsettings_file.particles = particles\nsettings_file.output = {'tallies': True}\n\n# Create an initial uniform spatial source distribution over fissionable zones\nbounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]\nuniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)\nsettings_file.source = openmc.Source(space=uniform_dist)\n\n# Export to \"settings.xml\"\nsettings_file.export_to_xml()", "Now we are ready to generate multi-group cross sections! First, let's define a 100-energy-group structure and 1-energy-group structure using the built-in EnergyGroups class. We will also create a 6-delayed-group list.", "# Instantiate a 100-group EnergyGroups object\nenergy_groups = mgxs.EnergyGroups()\nenergy_groups.group_edges = np.logspace(-3, 7.3, 101)\n\n# Instantiate a 1-group EnergyGroups object\none_group = mgxs.EnergyGroups()\none_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]])\n\ndelayed_groups = list(range(1,7))", "We can now use the EnergyGroups object and delayed group list, along with our previously created materials and geometry, to instantiate some MGXS objects from the openmc.mgxs module. In particular, the following are subclasses of the generic and abstract MGXS class:\n\nTotalXS\nTransportXS\nAbsorptionXS\nCaptureXS\nFissionXS\nNuFissionMatrixXS\nKappaFissionXS\nScatterXS\nScatterMatrixXS\nChi\nInverseVelocity\n\nA separate abstract MDGXS class is used for cross-sections and parameters that involve delayed neutrons. The subclasses of MDGXS include:\n\nDelayedNuFissionXS\nChiDelayed\nBeta\nDecayRate\n\nThese classes provide us with an interface to generate the tally inputs as well as perform post-processing of OpenMC's tally data to compute the respective multi-group cross sections. \nIn this case, let's create the multi-group chi-prompt, chi-delayed, and prompt-nu-fission cross sections with our 100-energy-group structure and multi-group delayed-nu-fission and beta cross sections with our 100-energy-group and 6-delayed-group structures. 
\nThe prompt chi and nu-fission data can actually be gathered using the Chi and FissionXS classes, respectively, by passing in a value of True for the optional prompt parameter upon initialization.", "# Instantiate a few different sections\nchi_prompt = mgxs.Chi(domain=cell, groups=energy_groups, by_nuclide=True, prompt=True)\nprompt_nu_fission = mgxs.FissionXS(domain=cell, groups=energy_groups, by_nuclide=True, nu=True, prompt=True)\nchi_delayed = mgxs.ChiDelayed(domain=cell, energy_groups=energy_groups, by_nuclide=True)\ndelayed_nu_fission = mgxs.DelayedNuFissionXS(domain=cell, energy_groups=energy_groups, delayed_groups=delayed_groups, by_nuclide=True)\nbeta = mgxs.Beta(domain=cell, energy_groups=energy_groups, delayed_groups=delayed_groups, by_nuclide=True)\ndecay_rate = mgxs.DecayRate(domain=cell, energy_groups=one_group, delayed_groups=delayed_groups, by_nuclide=True)\n\nchi_prompt.nuclides = ['U235', 'Pu239']\nprompt_nu_fission.nuclides = ['U235', 'Pu239']\nchi_delayed.nuclides = ['U235', 'Pu239']\ndelayed_nu_fission.nuclides = ['U235', 'Pu239']\nbeta.nuclides = ['U235', 'Pu239']\ndecay_rate.nuclides = ['U235', 'Pu239']", "Each multi-group cross section object stores its tallies in a Python dictionary called tallies. We can inspect the tallies in the dictionary for our Decay Rate object as follows.", "decay_rate.tallies", "The Beta object includes tracklength tallies for the 'nu-fission' and 'delayed-nu-fission' scores in the 100-energy-group and 6-delayed-group structure in cell 1. Now that each MGXS and MDGXS object contains the tallies that it needs, we must add these tallies to a Tallies object to generate the \"tallies.xml\" input file for OpenMC.", "# Instantiate an empty Tallies object\ntallies_file = openmc.Tallies()\n\n# Add chi-prompt tallies to the tallies file\ntallies_file += chi_prompt.tallies.values()\n\n# Add prompt-nu-fission tallies to the tallies file\ntallies_file += prompt_nu_fission.tallies.values()\n\n# Add chi-delayed tallies to the tallies file\ntallies_file += chi_delayed.tallies.values()\n\n# Add delayed-nu-fission tallies to the tallies file\ntallies_file += delayed_nu_fission.tallies.values()\n\n# Add beta tallies to the tallies file\ntallies_file += beta.tallies.values()\n\n# Add decay rate tallies to the tallies file\ntallies_file += decay_rate.tallies.values()\n\n# Export to \"tallies.xml\"\ntallies_file.export_to_xml()", "Now we a have a complete set of inputs, so we can go ahead and run our simulation.", "# Run OpenMC\nopenmc.run()", "Tally Data Processing\nOur simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.", "# Load the last statepoint file\nsp = openmc.StatePoint('statepoint.50.h5')", "In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. By default, a Summary object is automatically linked when a StatePoint is loaded. This is necessary for the openmc.mgxs module to properly process the tally data.\nThe statepoint is now ready to be analyzed by our multi-group cross sections. 
We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.", "# Load the tallies from the statepoint into each MGXS object\nchi_prompt.load_from_statepoint(sp)\nprompt_nu_fission.load_from_statepoint(sp)\nchi_delayed.load_from_statepoint(sp)\ndelayed_nu_fission.load_from_statepoint(sp)\nbeta.load_from_statepoint(sp)\ndecay_rate.load_from_statepoint(sp)", "Voila! Our multi-group cross sections are now ready to rock 'n roll!\nExtracting and Storing MGXS Data\nLet's first inspect our delayed-nu-fission cross section by printing it to the screen after condensing the cross section down to one group.", "delayed_nu_fission.get_condensed_xs(one_group).get_xs()", "Since the openmc.mgxs module uses tally arithmetic under-the-hood, the cross section is stored as a \"derived\" Tally object. This means that it can be queried and manipulated using all of the same methods supported for the Tally class in the OpenMC Python API. For example, we can construct a Pandas DataFrame of the multi-group cross section data.", "df = delayed_nu_fission.get_pandas_dataframe()\ndf.head(10)\n\ndf = decay_rate.get_pandas_dataframe()\ndf.head(12)", "Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.", "beta.export_xs_data(filename='beta', format='excel')", "The following code snippet shows how to export the chi-prompt and chi-delayed MGXS to the same HDF5 binary data store.", "chi_prompt.build_hdf5_store(filename='mdgxs', append=True)\nchi_delayed.build_hdf5_store(filename='mdgxs', append=True)", "Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations\nFinally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a \"derived\" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally arithmetic to compute the delayed neutron precursor concentrations using the Beta, DelayedNuFissionXS, and DecayRate objects. The delayed neutron precursor concentrations are modeled using the following equations:\n$$\\frac{\\partial}{\\partial t} C_{k,d} (t) = \\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r} \\beta_{k,d} (t) \\nu_d \\sigma_{f,x}(\\mathbf{r},E',t)\\Phi(\\mathbf{r},E',t) - \\lambda_{d} C_{k,d} (t) $$\n$$C_{k,d} (t=0) = \\frac{1}{\\lambda_{d}} \\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r} \\beta_{k,d} (t=0) \\nu_d \\sigma_{f,x}(\\mathbf{r},E',t=0)\\Phi(\\mathbf{r},E',t=0) $$\nFirst, let's investigate the decay rates for U235 and Pu239.
The fraction of the delayed neutron precursors remaining as a function of time after fission for each delayed group and fissioning isotope have been plotted below.", "# Get the decay rate data\ndr_tally = decay_rate.xs_tally\ndr_u235 = dr_tally.get_values(nuclides=['U235']).flatten()\ndr_pu239 = dr_tally.get_values(nuclides=['Pu239']).flatten()\n\n# Compute the exponential decay of the precursors\ntime = np.logspace(-3,3)\ndr_u235_points = np.exp(-np.outer(dr_u235, time))\ndr_pu239_points = np.exp(-np.outer(dr_pu239, time))\n\n# Create a plot of the fraction of the precursors remaining as a f(time)\ncolors = ['b', 'g', 'r', 'c', 'm', 'k']\nlegend = []\nfig = plt.figure(figsize=(8,6))\nfor g,c in enumerate(colors):\n plt.semilogx(time, dr_u235_points [g,:], color=c, linestyle='--', linewidth=3)\n plt.semilogx(time, dr_pu239_points[g,:], color=c, linestyle=':' , linewidth=3)\n legend.append('U-235 $t_{1/2}$ = ' + '{0:1.2f} seconds'.format(np.log(2) / dr_u235[g]))\n legend.append('Pu-239 $t_{1/2}$ = ' + '{0:1.2f} seconds'.format(np.log(2) / dr_pu239[g]))\n\nplt.title('Delayed Neutron Precursor Decay Rates')\nplt.xlabel('Time (s)')\nplt.ylabel('Fraction Remaining')\nplt.legend(legend, loc=1, bbox_to_anchor=(1.55, 0.95))", "Now let's compute the initial concentration of the delayed neutron precursors:", "# Use tally arithmetic to compute the precursor concentrations\nprecursor_conc = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \\\n delayed_nu_fission.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / \\\n decay_rate.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True)\n\n# Get the Pandas DataFrames for inspection\nprecursor_conc.get_pandas_dataframe()", "We can plot the delayed neutron fractions for each nuclide.", "energy_filter = [f for f in beta.xs_tally.filters if type(f) is openmc.EnergyFilter]\nbeta_integrated = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True)\nbeta_u235 = beta_integrated.get_values(nuclides=['U235'])\nbeta_pu239 = beta_integrated.get_values(nuclides=['Pu239'])\n\n# Reshape the betas\nbeta_u235.shape = (beta_u235.shape[0])\nbeta_pu239.shape = (beta_pu239.shape[0])\n\ndf = beta_integrated.summation(filter_type=openmc.DelayedGroupFilter, remove_filter=True).get_pandas_dataframe()\nprint('Beta (U-235) : {:.6f} +/- {:.6f}'.format(df[df['nuclide'] == 'U235']['mean'][0], df[df['nuclide'] == 'U235']['std. dev.'][0]))\nprint('Beta (Pu-239): {:.6f} +/- {:.6f}'.format(df[df['nuclide'] == 'Pu239']['mean'][1], df[df['nuclide'] == 'Pu239']['std. 
dev.'][1]))\n\nbeta_u235 = np.append(beta_u235[0], beta_u235)\nbeta_pu239 = np.append(beta_pu239[0], beta_pu239)\n\n# Create a step plot for the MGXS\nplt.plot(np.arange(0.5, 7.5, 1), beta_u235, drawstyle='steps', color='b', linewidth=3)\nplt.plot(np.arange(0.5, 7.5, 1), beta_pu239, drawstyle='steps', color='g', linewidth=3)\n\nplt.title('Delayed Neutron Fraction (beta)')\nplt.xlabel('Delayed Group')\nplt.ylabel('Beta(fraction total neutrons)')\nplt.legend(['U-235', 'Pu-239'])\nplt.xlim([0,7])", "We can also plot the energy spectrum for fission emission of prompt and delayed neutrons.", "chi_d_u235 = np.squeeze(chi_delayed.get_xs(nuclides=['U235'], order_groups='decreasing'))\nchi_d_pu239 = np.squeeze(chi_delayed.get_xs(nuclides=['Pu239'], order_groups='decreasing'))\nchi_p_u235 = np.squeeze(chi_prompt.get_xs(nuclides=['U235'], order_groups='decreasing'))\nchi_p_pu239 = np.squeeze(chi_prompt.get_xs(nuclides=['Pu239'], order_groups='decreasing'))\n\nchi_d_u235 = np.append(chi_d_u235 , chi_d_u235[0])\nchi_d_pu239 = np.append(chi_d_pu239, chi_d_pu239[0])\nchi_p_u235 = np.append(chi_p_u235 , chi_p_u235[0])\nchi_p_pu239 = np.append(chi_p_pu239, chi_p_pu239[0])\n\n# Create a step plot for the MGXS\nplt.semilogx(energy_groups.group_edges, chi_d_u235 , drawstyle='steps', color='b', linestyle='--', linewidth=3)\nplt.semilogx(energy_groups.group_edges, chi_d_pu239, drawstyle='steps', color='g', linestyle='--', linewidth=3)\nplt.semilogx(energy_groups.group_edges, chi_p_u235 , drawstyle='steps', color='b', linestyle=':', linewidth=3)\nplt.semilogx(energy_groups.group_edges, chi_p_pu239, drawstyle='steps', color='g', linestyle=':', linewidth=3)\n\nplt.title('Energy Spectrum for Fission Neutrons')\nplt.xlabel('Energy (eV)')\nplt.ylabel('Fraction on emitted neutrons')\nplt.legend(['U-235 delayed', 'Pu-239 delayed', 'U-235 prompt', 'Pu-239 prompt'],loc=2)\nplt.xlim(1.0e3, 20.0e6)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
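The precursor balance equation quoted near the end of the notebook above can also be sanity-checked with a tiny time integrator that is completely independent of OpenMC. The production term and decay constant below are illustrative placeholders, not values taken from the simulation; the only point is that the concentration relaxes to production/lambda, which is exactly the initial-condition formula used for the tally arithmetic.

```python
lam = 0.08            # decay constant of one delayed group (1/s), illustrative
production = 1.0e-3   # delayed-neutron production rate, treated as constant (illustrative)

dt, C = 0.1, 0.0      # start with no precursors
for _ in range(5000):                      # integrate dC/dt = production - lam * C
    C += dt * (production - lam * C)       # forward-Euler step

print(C, production / lam)                 # both approach the equilibrium value P / lam
```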
mne-tools/mne-tools.github.io
stable/_downloads/775a4c9edcb81275d5a07fdad54343dc/channel_epochs_image.ipynb
bsd-3-clause
[ "%matplotlib inline", "Visualize channel over epochs as an image\nThis will produce what is sometimes called an event related\npotential / field (ERP/ERF) image.\nTwo images are produced, one with a good channel and one with a channel\nthat does not show any evoked field.\nIt is also demonstrated how to reorder the epochs using a 1D spectral\nembedding as described in :footcite:GramfortEtAl2010.", "# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n#\n# License: BSD-3-Clause\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()", "Set parameters", "meg_path = data_path / 'MEG' / 'sample'\nraw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'\nevent_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'\nevent_id, tmin, tmax = 1, -0.2, 0.4\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Set up pick list: EEG + MEG - bad channels (modify to your needs)\nraw.info['bads'] = ['MEG 2443', 'EEG 053']\n\n# Create epochs, here for gradiometers + EOG only for simplicity\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=('grad', 'eog'), baseline=(None, 0), preload=True,\n reject=dict(grad=4000e-13, eog=150e-6))", "Show event-related fields images", "# and order with spectral reordering\n# If you don't have scikit-learn installed set order_func to None\nfrom sklearn.manifold import spectral_embedding # noqa\nfrom sklearn.metrics.pairwise import rbf_kernel # noqa\n\n\ndef order_func(times, data):\n this_data = data[:, (times > 0.0) & (times < 0.350)]\n this_data /= np.sqrt(np.sum(this_data ** 2, axis=1))[:, np.newaxis]\n return np.argsort(spectral_embedding(rbf_kernel(this_data, gamma=1.),\n n_components=1, random_state=0).ravel())\n\n\ngood_pick = 97 # channel with a clear evoked response\nbad_pick = 98 # channel with no evoked response\n\n# We'll also plot a sample time onset for each trial\nplt_times = np.linspace(0, .2, len(epochs))\n\nplt.close('all')\nmne.viz.plot_epochs_image(epochs, [good_pick, bad_pick], sigma=.5,\n order=order_func, vmin=-250, vmax=250,\n overlay_times=plt_times, show=True)", "References\n.. footbibliography::" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
wegamekinglc/alpha-mind
notebooks/Example 6 - Target Volatility Builder.ipynb
mit
[ "请在环境变量中设置DB_URI指向数据库", "%matplotlib inline\nimport os\nimport numpy as np\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nfrom PyFin.api import *\nfrom alphamind.api import *\nfrom alphamind.strategy.strategy import Strategy, RunningSetting\nfrom alphamind.portfolio.meanvariancebuilder import target_vol_builder\n\nplt.style.use('ggplot')", "1. Single Day Analysis", "ref_date = '2020-01-02'\nengine = SqlEngine(os.environ['DB_URI'])\nuniverse = Universe('hs300')\n\ncodes = engine.fetch_codes(ref_date, universe)\ntotal_data = engine.fetch_data(ref_date, 'EMA5D', codes, 300, industry='sw', risk_model='short')\nall_styles = risk_styles + industry_styles + ['COUNTRY']\n\nrisk_cov = total_data['risk_cov'][all_styles].values\nfactor = total_data['factor']\nrisk_exposure = factor[all_styles].values\nspecial_risk = factor['srisk'].values", "Portfolio Construction\n\nusing EPS factor as alpha factor;\nshort selling is forbiden;\ntarget of volatility for the activate weight is setting at 2.5% annually level.", "er = factor['EMA5D'].fillna(factor[\"EMA5D\"].median()).values\nbm = factor['weight'].values\nlbound = np.zeros(len(er))\nubound = bm + 0.01\ncons_mat = np.ones((len(er), 1))\nrisk_targets = (bm.sum(), bm.sum())\ntarget_vol = 0.025\nrisk_model = dict(cov=None, factor_cov=risk_cov/10000, factor_loading=risk_exposure, idsync=special_risk ** 2 / 10000.)\n\nstatus, p_er, p_weight = \\\n target_vol_builder(er, risk_model, bm, lbound, ubound, cons_mat, risk_targets, target_vol)\n \nsec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000. + np.diag(special_risk ** 2) / 10000\n\n# check the result\nprint(f\"total weight is {p_weight.sum(): .4f}\")\nprint(f\"portfolio activate weight forecasting vol is {np.sqrt((p_weight - bm) @ sec_cov @ (p_weight - bm)):.4f}\")\nprint(f\"portfolio er: {p_weight @ er:.4f} comparing with benchmark er: {bm @ er:.4f}\")", "2. 
Portfolio Construction: 2020-01 ~ 2020-02", "\"\"\"\nBack test parameter settings\n\"\"\"\n\nstart_date = '2020-01-01'\nend_date = '2020-02-21'\n\nfreq = '10b'\nneutralized_risk = industry_styles\nindustry_name = 'sw'\nindustry_level = 1\nrisk_model = 'short'\nbatch = 0\nhorizon = map_freq(freq)\nuniverse = Universe('hs300')\ndata_source = os.environ['DB_URI']\nbenchmark_code = 300\ntarget_vol = 0.05\nweights_bandwidth = 0.02\n\n\"\"\"\nFactor Model\n\"\"\"\n\nalpha_factors = {'f01': CSRank(LAST('EMA5D'))}\nweights = dict(f01=1.)\nalpha_model = ConstLinearModel(features=alpha_factors, weights=weights)\n\ndata_meta = DataMeta(freq=freq,\n universe=universe,\n batch=batch,\n neutralized_risk=neutralized_risk,\n risk_model='short',\n pre_process=[winsorize_normal, standardize],\n post_process=[standardize],\n warm_start=0,\n data_source=data_source)\n\n\"\"\"\nConstraints settings\n\"\"\"\n\nconstraint_risk = ['SIZE', 'SIZENL', 'BETA']\ntotal_risk_names = constraint_risk + ['benchmark', 'total']\n\nb_type = []\nl_val = []\nu_val = []\n\nprevious_pos = pd.DataFrame()\nrets = []\nturn_overs = []\nleverags = []\n\nfor name in total_risk_names:\n if name == 'benchmark':\n b_type.append(BoundaryType.RELATIVE)\n l_val.append(0.8)\n u_val.append(1.0)\n else:\n b_type.append(BoundaryType.ABSOLUTE)\n l_val.append(0.0)\n u_val.append(0.0)\n\nbounds = create_box_bounds(total_risk_names, b_type, l_val, u_val)\n\n\"\"\"\nRunning Settings\n\"\"\"\nrunning_setting = RunningSetting(weights_bandwidth=weights_bandwidth,\n rebalance_method='tv',\n bounds=bounds,\n target_vol=target_vol)\n\n\"\"\"\nStrategy run\n\"\"\"\nstrategy = Strategy(alpha_model,\n data_meta,\n universe=universe,\n start_date=start_date,\n end_date=end_date,\n freq=freq,\n benchmark=benchmark_code)\nstrategy.prepare_backtest_data()\nret_df, positions = strategy.run(running_setting)\n\nret_df[['excess_return', 'turn_over']].cumsum().plot(figsize=(14, 7),\n title='Fixed freq rebalanced with target vol \\\n at {2}: {0} with benchmark {1}'.format(freq, benchmark_code, target_vol),\n secondary_y='turn_over')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
UIUC-iSchool-DataViz/spring2017
week05/examples_week05.ipynb
mit
[ "Image Plotting\nThis week, we will do just a very short bit of image plotting. We'll be using a Koala scan.", "%matplotlib inline\n\nimport h5py\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport ipywidgets\nplt.rcParams[\"figure.figsize\"] = (12, 10)", "We're going to load the data in using h5py from an hdf5 file. HDF5 is a file format that allows for very simple storage of numerical data; in this particular case, we'll be loading in a 3D array, and then examining it.", "f = h5py.File(\"/srv/nbgrader/data/koala.hdf5\", \"r\")\nprint(list(f.keys()))", "Here, we load in the data by reading from the key koala that we just found.", "koala = f[\"/koala\"][:]\nprint(koala.shape)", "We'll use subplots to show the maximum value along each of the three axes, along with a histogram of all the values. The .max() function here accepts and axis argument, which means \"max along a given axis.\"", "for i in range(3):\n plt.subplot(2,2,i+1)\n plt.imshow(koala.max(axis=i), interpolation='nearest', origin='lower', cmap='viridis')\nplt.subplot(2,2,4)\nplt.hist(koala.ravel(), bins = 32, log = True)", "We'll make a slicer, too -- this one is along the x value. Note how we take a floating point value and turn that into an index to make the image.", "def xslicer(coord = 0.5):\n # We're accepting a float here, so we convert that into the right index we want\n ind = int(coord * koala.shape[0])\n plt.imshow(koala[ind,:,:], interpolation = 'nearest', origin='lower')\n\nipywidgets.interact(xslicer, coord = (0.0, 1.0, 0.01))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jeanpat/DeepFISH
notebooks/Clean Dataset from their spurious pixels.ipynb
gpl-3.0
[ "Cleaning the groundtruth images from their spurious pixels", "import os\nimport h5py\nfrom matplotlib import pyplot as plt\n\nimport numpy as np\nfrom numpy import newaxis\nfrom skimage import morphology as mo\nfrom scipy.ndimage import distance_transform_bf as distance\n\ndef distanceTransform(bIm):\n #from pythonvision.org\n dist = distance(bIm)\n dist = dist.max() - dist\n dist -= dist.min()\n dist = dist/float(dist.ptp()) * 255\n dist = dist.astype(np.uint8)\n return dist\n\ndef clean_ground_truth(gd_lab, size = 2):\n \"\"\"Remove spurious pixels from badly labelled groundtruth image\n returns three binary images (One hot shot) first and a labelled image.\n \"\"\"\n mask = gd_lab > 0\n dmap = distanceTransform(mask)\n \n cleaned_lab1 = mo.binary_opening(gd_lab == 1, selem = mo.disk(size))\n cleaned_lab2 = mo.binary_opening(gd_lab == 2, selem = mo.disk(size))\n cleaned_lab3 = mo.binary_opening(gd_lab == 3, selem = mo.disk(size))\n \n seeds = cleaned_lab1+2*cleaned_lab2+3*cleaned_lab3\n seg = mo.watershed(dmap, markers = seeds, mask = 1*mask)\n chrom_lab1 = seg == 1\n chrom_lab2 = seg == 2\n overlapp = seg == 3\n \n return chrom_lab1, chrom_lab2, overlapp, seg\n\n", "Download the dataset from its repository at github\nhttps://github.com/jeanpat/DeepFISH/tree/master/dataset", "!wget https://github.com/jeanpat/DeepFISH/blob/master/dataset/LowRes_13434_overlapping_pairs.h5 \n\nfilename = './LowRes_13434_overlapping_pairs.h5'\nh5f = h5py.File(filename,'r')\npairs = h5f['dataset_1'][:]\nh5f.close()\nprint('dataset is a numpy array of shape:', pairs.shape)\n\nN = 11508\ngrey = pairs[N,:,:,0]\ng_truth = pairs[N,:,:,1]\nl1, l2, l3, seg = clean_ground_truth(g_truth, size = 1)\n\ntest = np.dstack((grey, g_truth))\nprint(test.shape)\nt2 = np.stack((test,test))\nprint(t2.shape)", "Let's compare the groundtruth image befor and after cleaning", "plt.figure(figsize=(20,10))\n\nplt.subplot(251,xticks=[],yticks=[])\nplt.imshow(grey, cmap=plt.cm.gray)\nplt.subplot(252,xticks=[],yticks=[])\nplt.imshow(g_truth, cmap=plt.cm.flag_r)\nplt.subplot(253,xticks=[],yticks=[])\nplt.imshow(g_truth == 1, cmap=plt.cm.flag_r)\nplt.subplot(254,xticks=[],yticks=[])\nplt.imshow(g_truth == 2, cmap=plt.cm.flag_r)\nplt.subplot(255,xticks=[],yticks=[])\nplt.imshow(g_truth == 3, cmap=plt.cm.flag_r)\n\n#plt.subplot(256,xticks=[],yticks=[])\n#plt.imshow(mo.white_tophat(grey, selem = mo.disk(2)) > 0, cmap=plt.cm.jet)\nplt.subplot(257,xticks=[],yticks=[])\nplt.imshow(l1+2*l2+3*l3, cmap=plt.cm.flag_r)\nplt.subplot(258,xticks=[],yticks=[])\nplt.imshow(l1, cmap=plt.cm.flag_r)\nplt.subplot(259,xticks=[],yticks=[])\nplt.imshow(l2, cmap=plt.cm.flag_r)\nplt.subplot(2,5,10,xticks=[],yticks=[])\nplt.imshow(l3, cmap=plt.cm.flag_r)", "Clean the whole dataset", "new_data = np.zeros((1,94,93,2), dtype = int)\nN = pairs.shape[0]#10\nfor idx in range(N):\n g_truth = pairs[idx,:,:,1]\n grey = pairs[idx,:,:,0]\n _, _, _, seg = clean_ground_truth(g_truth, size = 1)\n paired = np.dstack((grey, seg))\n #\n #https://stackoverflow.com/questions/7372316/how-to-make-a-2d-numpy-array-a-3d-array/7372678\n #\n new_data = np.concatenate((new_data, paired[newaxis,:, :, :]))\nnew_data = new_data[1:,:,:,:]\n\nplt.figure(figsize=(20,10))\nN=10580\ngrey = new_data[N,:,:,0]\ng_truth = new_data[N,:,:,1]\n\nplt.subplot(121,xticks=[],yticks=[])\nplt.imshow(grey, cmap=plt.cm.gray)\nplt.subplot(122,xticks=[],yticks=[])\nplt.imshow(g_truth, cmap=plt.cm.flag_r)\n", "Save the dataset using hdf5 format", "filename = './Cleaned_LowRes_13434_overlapping_pairs.h5'\nhf = 
h5py.File(filename,'w')\nhf.create_dataset('13434_overlapping_chrom_pairs_LowRes', data=new_data, compression='gzip', compression_opts=9)\nhf.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
r2rahul/numericalanalysis
codes/driver_kmeans.ipynb
gpl-2.0
[ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns; sns.set() # for plot styling\nimport numpy as np\nfrom sklearn.datasets.samples_generator import make_blobs\nimport random\nimport csv\nimport math\nimport numpy as np\nimport sys", "K-means Clustering non-distributed implementation", "X, y_true = make_blobs(n_samples=300, centers=4,\n cluster_std=0.60, random_state=0)\n# Save simulated data to be used in MapReduce code\nnp.savetxt(\"kmeans_simulated_data.txt\", X, fmt='%.18e', delimiter=' ')\nplt.scatter(X[:, 0], X[:, 1], s=50);\n\n# Write modules for the simulation of local K-Meanss\ndef assign_clusters(X, m):\n clusters = {}\n labels = []\n for x in X:\n #Calculate pair wise distance from each centroid\n pair_dist = [(i[0], np.linalg.norm(x-m[i[0]])) for i in enumerate(m)]\n #Sort and select the minimum distance centroid\n best_centroid = min(pair_dist, key=lambda t:t[1])[0]\n labels.append(best_centroid)\n try:\n clusters[best_centroid].append(x)\n except KeyError:\n clusters[best_centroid] = [x]\n return(clusters, labels)\n\ndef evaluate_cluster_mean(clusters):\n new_centroid = []\n keys = sorted(clusters.keys())\n for k in keys:\n #Calculate new centroid\n new_centroid.append(np.mean(clusters[k], axis = 0))\n return(new_centroid)\n \ndef check_convergence(new_centroid, old_centroid):\n #Check if new and old centroid have changed or not\n error = np.all(np.array(new_centroid) == np.array(old_centroid))\n return(error)\n\ndef driver_kmeans(X, K):\n # Initialize Random K centres\n old_centroid = random.sample(list(X), K)\n new_centroid = random.sample(list(X), K)\n #Saving centroid co-ordinates for the comparison with MapReduce code\n np.savetxt(\"kmeans_cache.txt\", new_centroid, fmt='%.18e', delimiter=' ')\n counter = 0\n while not check_convergence(new_centroid, old_centroid):\n old_centroid = new_centroid\n #Map points to nearest centroid\n clusters, labels = assign_clusters(X, new_centroid)\n # Find new centroids\n new_centroid = evaluate_cluster_mean(clusters)\n counter += 1\n return(new_centroid, clusters, labels, counter)\n\n#Driver code intialize the mapreduce code\n#Not used in the current implementation, added for completion\ndef init_kmeans(X, K):\n centroid = random.sample(list(X), K)\n init_centroid = np.array([np.concatenate(([i[0]], i[1])) for i in enumerate(centroid)])\n np.savetxt(\"kmeans_cache.txt\", init_centroid, fmt='%.18e', delimiter=' ')\n\ncenters, d, labels, counter = driver_kmeans(X, 4)\nplt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis')\ncx = [i[0] for i in centers]\ncy = [i[1] for i in centers]\nplt.scatter(cx, cy, c='black', s=200, alpha=0.5);", "Simulating MapReduce K-Means Algorithm\nMapper Script\n\nAssumes mapper data input in tidy format and all varibales are properly encoded", "%%writefile mapper_kmeans.py\n\nimport sys\nimport csv\nimport math\nimport numpy as np\n\n#Read the centroids iteratively and its co-ordinates\nwith open('kmeans_cache.txt', 'r') as f:\n fp = csv.reader(f, delimiter = \" \")\n m = np.array([[float(i) for i in j] for j in fp])\n \n \n# input comes from STDIN (standard input)\nfor line in sys.stdin:\n # remove leading and trailing whitespace\n line = line.strip().split()\n features = np.array([float(j) for j in line])\n # Calculate the pair wise distance\n pair_dist = [(i[0], np.linalg.norm(features - m[i[0]])) for i in enumerate(m)]\n #Sort and select the minimum distance centroid\n best_centroid = min(pair_dist, key=lambda t:t[1])[0]\n #emit cluster id and coressponding values\n 
out_features = \",\".join([str(k) for k in features])\n print('{}\\t{}'.format(best_centroid, out_features))", "Reducer Script", "%%writefile reducer_kmeans.py\nfrom operator import itemgetter\nimport sys\nimport numpy as np\n\ncurrent_cluster = None\ncurrent_val = 0\n\n# input comes from STDIN\nfor line in sys.stdin:\n # remove leading and trailing whitespace\n line = line.strip()\n cluster, value = line.split('\\t', 1)\n #Convert value to float\n try:\n value = [float(i) for i in value.split(',')]\n except ValueError:\n #Accounts for error in value inputs. Skips the error lines\n continue\n \n #Cluster id as key and corresponding value is passed here\n if current_cluster == cluster:\n current_val = np.vstack((current_val, value))\n else:\n if current_cluster:\n #Updates the centroids\n center = [str(i) for i in np.mean(current_val, axis = 0)]\n print('{}'.format(\" \".join(center)))\n \n current_val = value\n current_cluster= cluster\n\n# To print the last line/clutster id\nif current_cluster == cluster:\n #Updates the centroids\n center = [str(i) for i in np.mean(current_val, axis = 0)]\n print('{}'.format(\" \".join(center)))", "Simulate Job Chaining with Shell Script\n\nfor loop iterates over each reducer output . \nInside for loop Centroid co-ordinates are updated in kmeans_cache.txt at each iteration . \nFinal output is stored in kmeans_cache.txt", "%%sh\n#Initialize the initial clusters\nfor i in `seq 1 20`;\ndo \necho 'Iteration Number = '$i\ncat kmeans_simulated_data.txt | python mapper_kmeans.py | sort | python reducer_kmeans.py > kmeans_temp.txt\nmv kmeans_temp.txt kmeans_cache.txt\ndone", "Test MapReduce Implementation", "#Check if the centroid calculated in the non-distributed and distributed method are in same range\ndef check_mapreduce(centroid_non, centroid_dist):\n #Check if new and old centroid have changed or not\n error = np.all(np.array(centroid_non) == np.array(centroid_dist))\n #error calculation second way: Relative Error\n num_error = np.linalg.norm(np.array(centroid_non) - np.array(centroid_dist))\n return(error, num_error)\n\n#Read the final centroid file\nwith open('kmeans_cache.txt', 'r') as f:\n fp = csv.reader(f, delimiter = \" \")\n centroid_map = np.array([[float(i) for i in j] for j in fp])\n\nflag, relative_error = check_mapreduce(centers, centroid_map)\n\nif flag:\n print(\"Test Succeded: Both MapReduce and local algorithm returns the same centroids\")\nelif relative_error < 1e-6:\n msg = \"Test Succeded: Both MapReduce and local algorithm returns the same centroids with tolerance = \"\n print('{}\\t{}'.format(msg, relative_error))\nelse:\n errmsg = '''Check MapReduce code, perhaps check if both\n Mapreduce and Local are initalized from same centroids\n Rerun both the codes multiple time to verify'''\n print(errmsg)", "Next Steps\n\nCreating output for labeled data for further analysis in reducer \nCreating output for parameters for the model validation like cost value for selecting optimal k \n\nReferences\n\nKumar, S., Dr. (2018, May 03). Shailesh Kumar - MapReduce and the \"Art of Thinking Parallel\". Retrieved May 6, 2018, from https://vimeo.com/72168757 \nKumar, S., Dr. (2018, May 03). Https://www.slideshare.net/hyderabadscalability/map-reduce-and-the-art-of-thinking-parallel-dr-shailesh-kumar. Retrieved May 6, 2018, from https://vimeo.com/72168757 \nLeskovec, J., Rajaraman, A., & Ullman, J. D. (2016). Mining of massive datasets. Delhi: Cambridge University Press. \nGuttag, J. (2017). 
Introduction to computation and programming using Python: With application to understanding data. Cambridge, MA: The MIT Press. \nVanderPlas, Jake. “In Depth: k-Means Clustering.” Python Data Science Handbook, retrieved 9 May 2018, from https://jakevdp.github.io/PythonDataScienceHandbook/05.11-k-means.html." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
davebshow/DH3501
class19.ipynb
mit
[ "<div align=\"left\">\n<h4><a href=\"index.ipynb\">RETURN TO INDEX</a></h4>\n</div>\n<div align=\"center\">\n<h1><a href=\"index.ipynb\">DH3501: Advanced Social Networks</a><br/><br/><em>Class 19</em>: Bargaining, Stability, and Balance in Networks</h1>\n</div>\n\n<div style=\"float:left\">\n<b>Western University</b><br/>\n<b>Department of Modern Languages and Literatures</b><br/>\n<b>Digital Humanities – DH 3501</b><br/>\n<br/>\n<b>Instructor</b>: David Brown<br/>\n<b>E-mail</b>: <a href=\"mailto:dbrow52@uwo.ca\">dbrow52@uwo.ca</a><br/>\n<b>Office</b>: AHB 1R14<br/>\n</div>\n<div style=\"float:left\">\n<img style=\"width:200px; margin-left:100px\" src=\"http://www.bsr.org/images/blog/networks.jpg\" />\n</div>\n\nWhat determines an individual's power?\n\n\nIs it an individual characteristic?\n\n\nA network property?\n\n\n\"Indeed, as Richard Emerson has observed in his fundamental work on this subject, power is not so much a property of an individual as it is a property of a relation between two individuals -- it makes more sense to study the conditions under which one person has power over another, rather than simply asserting that a particular person is \"powerful\". E & K, 340", "%matplotlib inline\nimport networkx as nx\nimport matplotlib.pyplot as plt\ng = nx.Graph([(\"A\", \"B\")])\nnx.draw_networkx(g)", "Value in relationship\nIf we assume that a relationsip holds some sort of value, how is that value divided?\n\n\nThink about it...what kinds of value could a relationship hold?\n\n\nIf we think about power in terms of an imbalance in social exchange, how is the value of a relationship distributed based on the power of the individuals of the network?\n\n\nWhere does power come from?\n\n\nNetwork Exchange Theory addresses questions of social imbalance and its relation to network structure.\n<img style=\"float:left; width: 400px\" src=\"img/Nelson_and_bart.gif\" />\nPrinciples of power", "g.add_edges_from([(\"B\", \"C\"), (\"B\", \"D\"), (\"D\", \"E\")])\nnx.draw_networkx(g)", "Dependence - if relationships confer value, nodes A and C are completely dependent on node B for value.\n\n\nExclusion - node B can easily exclude node A or C from the value conferred by the network.\n\n\nSatiation - at a certain point, nodes like B begin to see diminishing returns and only maintains relations from which they can receive an unequal share of the value.\n\n\nBetweenness - can confer power, this sort of centrality allows nodes like B to take advantages of structural holes and also control the flow of information througout the network. Note: high betweenness does not always confer an advantage in bargaining situations (as we will soon see).\n\n\nExperimental methodology: Riddle me this...\n<img style=\"float:left; width: 300px\" src=\"img/experiment_comic.jpg\" />\nRecall the experimental methodology typically used to study power and exchange? Get together with your pods and refresh your memories...there are five steps.\n\n\nAre the results of these experiments considered to be robust?\n\n\nWhy or why not (according to E & K)?\n\n\nApplication: The following visualizations show 4 commonly tested paths. 
What were the experimental results for each path?", "g = nx.Graph([(\"A\", \"B\")])\nnx.draw_networkx(g)\nplt.title(\"2-Node Path\")\n\ng = nx.Graph([(\"A\", \"B\"), (\"B\", \"C\")])\nnx.draw_networkx(g)\nplt.title(\"3-Node Path\")\n\ng = nx.Graph([(\"A\", \"B\"), (\"B\", \"C\"), (\"C\", \"D\")])\nnx.draw_networkx(g)\nplt.title(\"4-Node Path\")\n\ng = nx.Graph([(\"A\", \"B\"), (\"B\", \"C\"), (\"C\", \"D\"), (\"D\", \"E\")])\nnx.draw_networkx(g)\nplt.title(\"5-Node Path\")", "How about power in a network that looks like this?", "g = nx.Graph([(\"A\", \"B\"), (\"B\", \"C\"), (\"B\", \"D\"), (\"C\", \"D\")])\nnx.draw_networkx(g)\nplt.title(\"Triangle with outlier\")", "Or this?", "g = nx.Graph([(\"A\", \"B\"), (\"B\", \"C\"), (\"C\", \"A\")])\nnx.draw_networkx(g)\nplt.title(\"Triangle\")", "The Nash bargaining solution\n\n\nWhat happens when exchanges take place on arbitrary networks?\n\n\nWhat happens if we try to formalize a mathematical framework?\n\n\nEver see A Beautiful Mind?\n<img style=\"float:left; width: 400px\" src=\"img/beautmind.jpg\" />\nDon't worry! We won't start hallucinating Russian conspiracies!\nIn order to draw a distinction between equal and asymmetrical power distribution over an edge; between strong power and weak power, and between networks that stabilize and networks that don't, we can use a solution proposed by John Nash, the Nash bargaining solution.\n<img style=\"float:left; width: 400px\" src=\"img/nashbar.png\" />\nAssume that\n\n$value\\ of\\ edge \\gt 1$\n\nand \n\n$x + y \\leq 1$ \n\nThe negotiation (in terms of competition) becomes about how to split the surplus:\n\n$1 - x - y$\n\nIf A and B have equal power, we expect that each individual receives half of the surplus, so the outcome is equidependent:\n\n$x + \\frac{1}{2}s = \\frac{x + 1 - y}{2}\\ to\\ A,\\ and\\ y + \\frac{1}{2}s = \\frac{y + 1 - x}{2}\\ to\\ B$\n\nHow did perception of status affect the outcomes of empirical studies of the Nash bargaining solution?\nThe ultimatum game\n\n\n(i) Person A is given a dollar and told to propose a division of it to person B. That is, A should propose how much she keeps for herself, and how much she gives to B.\n\n\n(ii) Person B is then given the option of approving or rejecting the proposed division.\n\n\n(iii) If B approves, each person keeps the proposed amount. If B rejects, then each person gets nothing.\n\n\nWhat are the experimental results of the ultimatum game? Would you walk away with only 1% if you were person B?\nBalance and stability\nHow do we predict the outcomes of network exchanges on an arbitrary graph?\nWell...what is an outcome?\n\n\nA matching on the network that specifies who exchanges with whom. A node may at most complete one exchange, others may be left out and complete none.\n\n\nA number that is associated with each node that indicates how much value the node receives from its exchange. If two nodes are matched in the outcome, the sum of their values must equal 1. 
An unmatched node has the value of 0.\n\n\nStability\n\nStability is a state in which no node X can propose an offer to some other node Y that makes both X and Y better off -- thus \"stealing\" node Y away from an existing agreement.\n\n<img style=\"float:left; width: 400px\" src=\"img/stability.png\" />\nBased on the above definition of stability, why are (a) and (c) not stable outcomes?\nAn instability can be defined as follows:\n\nGiven an outcome consisting of a matching and values for the nodes, an instability in this outcome is an edge not in the matching, joining two nodes X and Y, such that the sum of X's value and Y's value is less than 1.\n\nTherefore, an outcome of network exchange is stable if and only if it contains no instabilities.\nThis sort of prediction is limited in that humans do not usually achieve the kinds of extremes presented by this model; furthermore, it creates ambiguity in more complex situations as in (c) and (d) above.\nBalanced outcomes\nTo account for these limitations, we can consider outcomes from the perspective of the Nash bargaining solution. Here we will consider related nodes in the network as the providers of outside options.\nWe can define a balanced outcome as follows:\n\nAn outcome is balanced if, for each edge in the matching, the split of the money represents the Nash bargaining outcome for the two nodes involved, given the best outside options for each node provided by the values in the rest of the network.\n\nApplication: Calculate the Nash bargaining outcome for each node in the image below\n<img style=\"float:left; width: 400px\" src=\"img/balance.png\" />\nRemember all balanced outcomes are stable, but not all stable outcomes are balanced!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Dans-labs/dariah
static/tools/.ipynb_checkpoints/from_filemaker-checkpoint.ipynb
mit
[ "Importing InKind from FileMaker\nWe use an XML export of the various tables in the FileMaker Inkind database.\nThe XML will be read, field definitions will be extracted from it, the data will be read.\nWe do the following:\n* adapt the table and field organization;\n* adjust the field types and the values, especially for datetime and currency;\n* generate value tables and cross tables;\n* add extra information for countries, so that they can be visualized on a map\n* link values to existing tables;\n* write SQL create statements and insert statements\n* import a moderately denormalized version of the data into MongoDB", "import os,sys,re,collections,json\nfrom os.path import splitext, basename\nfrom functools import reduce\nfrom glob import glob\nfrom lxml import etree\nfrom datetime import datetime\nfrom pymongo import MongoClient\nfrom bson.objectid import ObjectId", "Locations", "HOME_DIR = os.path.expanduser('~').replace('\\\\', '/')\nBASE_DIR = '{}/Documents/DANS/projects/has/dacs'.format(HOME_DIR)\nFM_DIR = '{}/fm'.format(BASE_DIR)\nFMNS = '{http://www.filemaker.com/fmpxmlresult}'\nCONFIG_DIR = '.'", "Config\nAll configuration in a big yaml file", "with open('{}/config.yaml')", "Data description\nMain source tables and fields to skip", "CONFIG = yaml.load('''\nmainTables:\n- contrib\n- country\n''')\n\nmainTables = ('contrib', 'country')\n\nSKIP_FIELDS = dict(\n contrib=set('''\ndateandtime_ciozero\nikid\nikid_base\nfind_country_id\nfind_type\ngnewpassword\ngnewpassword2\ngoldpassword\nhelp_description\nhelp_text\nmessage\nmessage_allert\nteller\ntotal_costs_total\nwhois\n'''.strip().split()),\n country=set('''\n'''.strip().split()),\n)", "Fields to merge", "MERGE_FIELDS = dict(\n contrib=dict(\n academic_entity_url=['academic_entity_url_2'],\n contribution_url=['contribution_url_2'],\n contact_person_mail=['contact_person_mail_2'],\n type_of_inkind=['other_type_of_inkind'],\n vcc11_name=[\n 'vcc12_name',\n 'vcc21_name',\n 'vcc22_name',\n 'vcc31_name',\n 'vcc32_name',\n 'vcc41_name',\n 'vcc42_name',\n ],\n vcc_head_decision_vcc11=[\n 'vcc_head_decision_vcc12',\n 'vcc_head_decision_vcc21',\n 'vcc_head_decision_vcc22',\n 'vcc_head_decision_vcc31',\n 'vcc_head_decision_vcc32',\n 'vcc_head_decision_vcc41',\n 'vcc_head_decision_vcc42',\n ],\n ),\n country=dict(),\n)", "Fields to rename", "MAP_FIELDS = dict(\n contrib=dict(\n approved='approved',\n academic_entity_url='urlAcademic',\n contribution_url='urlContribution',\n contact_person_mail='contactPersonEmail',\n contact_person_name='contactPersonName',\n costs_description='costDescription',\n costs_total='costTotal',\n country='country',\n creation_date_time='dateCreated',\n creator='creator',\n dateandtime_approval='dateApproved',\n dateandtime_cioapproval='dateApprovedCIO',\n description_of_contribution='description',\n disciplines_associated='discipline',\n last_modifier='modifiedBy',\n modification_date_time='dateModified',\n other_keywords='keyword',\n submit='submitted',\n tadirah_research_activities='tadirahActivity',\n tadirah_research_objects='tadirahObject',\n tadirah_research_techniques='tadirahTechnique',\n title='title',\n total_costs_total='costTotalTotal',\n type_of_inkind='typeContribution',\n vcc='vcc',\n vcc11_name='reviewerName',\n vcc_head_decision='vccDecision',\n vcc_head_decision_vcc11='reviewerDecision',\n vcchead_approval='vccApproval',\n vcchead_disapproval='vccDisApproval',\n year='year',\n ),\n country=dict(\n countrycode='iso',\n countryname='name',\n member_dariah='isMember',\n ),\n)", "Fields to split 
into multiple values", "generic = re.compile('[ \\t]*[\\n+][ \\t\\n]*') # split on newlines (with surrounding white space)\ngenericComma = re.compile('[ \\t]*[\\n+,;][ \\t\\n]*') # split on newlines or commas (with surrounding white space)\n\nSPLIT_FIELDS=dict(\n contrib=dict(\n discipline=generic,\n keyword=genericComma,\n typeContribution=generic,\n tadirahActivity=generic,\n tadirahObject=generic,\n tadirahTechnique=generic,\n vcc=generic,\n ),\n country=dict(),\n)", "Fields to hack", "STRIP_NUM = re.compile('^[0-9]\\s*\\.?\\s+')\n\ndef stripNum(v): return STRIP_NUM.sub('', v)\n \nHACK_FIELDS=dict(\n contrib=dict(\n tadirahActivity=stripNum,\n ),\n country=dict(),\n)", "Fields to decompose into several fields", "DECOMPOSE_FIELDS=dict(\n contrib=dict(\n typeContribution='typeContributionOther',\n ),\n country=dict(),\n)", "Custom field types", "FIELD_TYPE = dict(\n contrib=dict(\n costTotal='valuta',\n dateCreated='datetime',\n dateModified='datetime',\n dateApproved='datetime',\n dateApprovedCIO='datetime',\n contactPersonEmail='email',\n submitted='bool',\n approved='bool',\n reviewerDecision='bool',\n vccApproval='bool',\n vccDecision='bool',\n vccDisApproval='bool',\n ),\n country=dict(\n isMember='bool',\n ),\n)", "Default values", "DEFAULT_VALUES=dict(\n contrib=dict(\n dateCreated=datetime(2000,1,1,0,0,0),\n creator=\"admin\",\n type_of_inkind=\"General\",\n ),\n country=dict(),\n)", "Fields to move to other tables", "MOVE_FIELDS=dict(\n contrib=dict(\n assessment=set('''\napproved\ndateApproved\ndateApprovedCIO\nsubmitted\nreviewerName\nreviewerDecision\nvccDecision\nvccApproval\nvccDisApproval\n '''.strip().split()),\n ),\n country=dict(),\n)", "Fields to value lists", "MAKE_VALUE_LISTS = dict(\n contrib=set('''\nkeyword\nyear\n'''.strip().split()),\n)\nVALUE_LISTS = dict(\n contrib=set('''\ndiscipline\nkeyword\ntadirahActivity\ntadirahObject\ntadirahTechnique\ntypeContribution\ntypeContributionOther:typeContribution\nvcc\nyear\n'''.strip().split()),\n)\n\nMOVE_MISSING = dict(\n contrib='description',\n)", "Field values\nPatterns for value types", "# Source field types, including types assigned by type overriding (see FIELD_TYPE_OVERRIDE above).\n# These will be translated into appropriate SQL field types\n\nTYPES = {'bool', 'number', 'decimal', 'text', 'valuta', 'email', 'date', 'datetime'}\n\n# dates are already in ISO (date2_pattern).\n# If we encounter other dates, we could use date_pattern instead)\n# datetimes are not in iso, they will be transformed to iso.\n\nDECIMAL_PATTERN = re.compile(\n r'^-?[0-9]+\\.?[0-9]*'\n)\nDATE_PATTERN = re.compile(\n r'^\\s*([0-9]{2})/([0-9]{2})/([0-9]{4})$'\n)\nDATE2_PATTERN = re.compile(\n r'^\\s*([0-9]{4})-([0-9]{2})-([0-9]{2})$'\n)\nDATETIME_PATTERN = re.compile(\n r'^\\s*([0-9]{2})/([0-9]{2})/([0-9]{4})\\s+([0-9]{2}):([0-9]{2})(?::([0-9]{2}))?$'\n)\n\n# meaningless values will be translated into None\nNULL_VALUES = {\n 'http://',\n 'https://',\n '@',\n}\n\nBOOL_VALUES = {\n True: {'Yes', 'YES', 'yes', 1, '1', True},\n False: {'No', 'NO', 'no', 0, '0', 'NULL', False},\n}", "Date and Time values", "def date_repl(match):\n [d,m,y] = list(match.groups())\n return '{}-{}-{}'.format(y,m,d)\n \ndef date2_repl(match):\n [y,m,d] = list(match.groups())\n return '{}-{}-{}'.format(y,m,d)\n \ndef datetime_repl(match):\n [d,m,y,hr,mn,sc] = list(match.groups())\n return '{}-{}-{}T{}:{}:{}'.format(y,m,d,hr,mn,sc or '00')\n\ndef dt(v_raw, i, t, fname):\n if not DATE2_PATTERN.match(v_raw):\n warning(\n 'table `{}` field `{}` record {}: not a valid date: 
\"{}\"'.format(\n t, fname, i, v_raw\n ))\n return v_raw\n return datetime(*map(int, re.split('[:T-]', DATE2_PATTERN.sub(date2_repl, v_raw))))\n\ndef dtm(v_raw, i, t, fname):\n if not DATETIME_PATTERN.match(v_raw):\n warning(\n 'table `{}` field `{}` record {}: not a valid date time: \"{}\"'.format(\n t, fname, i, v_raw\n ))\n return v_raw\n return datetime(*map(int, re.split('[:T-]', DATETIME_PATTERN.sub(datetime_repl, v_raw))))", "Boolean, numeric and string values", "def bools(v_raw, i, t, fname):\n if v_raw in BOOL_VALUES[True]: return True\n if v_raw in BOOL_VALUES[False]: return False\n warning(\n 'table `{}` field `{}` record {}: not a boolean value: \"{}\"'.format(\n t, fname, i, v_raw\n ))\n return v_raw\n\ndef num(v_raw, i, t, fname):\n if type(v_raw) is int: return v_raw\n if v_raw.isdigit(): return int(v_raw)\n warning(\n 'table `{}` field `{}` record {}: not an integer: \"{}\"'.format(\n t, fname, i, v_raw\n ))\n return v_raw\n\ndef decimal(v_raw, i, t, fname):\n if type(v_raw) is float: return v_raw\n if v_raw.isdigit(): return float(v_raw)\n if DECIMAL_PATTERN.match(v_raw): return float(v_raw)\n warning(\n 'table `{}` field `{}` record {}: not an integer: \"{}\"'.format(\n t, fname, i, v_raw\n ))\n return v_raw\n\ndef email(v_raw, i, t, fname):\n return v_raw.replace('mailto:', '', 1) if v_raw.startswith('mailto:') else v_raw", "Money values", "def money(v_raw, i, t, fname):\n note = ',' in v_raw or '.' in v_raw\n v = v_raw.strip().lower().replace(' ','').replace('€', '').replace('euro', '').replace('\\u00a0', '')\n for p in range(2,4): # interpret . or , as decimal point if less than 3 digits follow it\n if len(v) >= p and v[-p] in '.,': \n v_i = v[::-1]\n if v_i[p-1] == ',': v_i = v_i.replace(',', 'D', 1)\n elif v_i[p-1] == '.': v_i = v_i.replace('.', 'D', 1)\n v = v_i[::-1]\n v = v.replace('.','').replace(',','')\n v = v.replace('D', '.')\n if not v.replace('.','').isdigit():\n if len(set(v) & set('0123456789')):\n warning(\n 'table `{}` field `{}` record {}: not a decimal number: \"{}\" <= \"{}\"'.format(\n t, fname, i, v, v_raw,\n ))\n money_warnings.setdefault('{}:{}'.format(t, fname), {}).setdefault(v, set()).add(v_raw)\n v = None\n else:\n v = None\n money_notes.setdefault('{}:{}'.format(t, fname), {}).setdefault('NULL', set()).add(v_raw)\n elif note:\n money_notes.setdefault('{}:{}'.format(t, fname), {}).setdefault(v, set()).add(v_raw)\n return None if v == None else float(v)", "Clean up field values", "def sanitize(t, i, fname, value):\n if fname == '_id': return value\n (ftype, fmult) = allFields[t][fname]\n newValue = []\n for v_raw in value:\n if v_raw == None or v_raw in NULL_VALUES: continue\n elif ftype == 'text': v = v_raw\n elif ftype == 'bool': v = bools(v_raw, i, t, fname)\n elif ftype == 'number': v = num(v_raw, i, t, fname)\n elif ftype == 'decimal': v = decimal(v_raw, i, t, fname)\n elif ftype == 'email': v = email(v_raw, i, t, fname)\n elif ftype == 'valuta': v = money(v_raw, i, t, fname)\n elif ftype == 'date': v = dt(v_raw, i, t, fname)\n elif ftype == 'datetime': v = dtm(v_raw, i, t, fname)\n else: v = v_raw\n if v != None and (fmult <= 1 or v != ''): newValue.append(v)\n if len(newValue) == 0:\n defValue = DEFAULT_VALUES.get(t, {}).get(fname, None)\n if defValue != None:\n newValue = [defValue]\n return newValue", "Show information", "def info(x): sys.stdout.write('{}\\n'.format(x))\ndef warning(x): sys.stderr.write('{}\\n'.format(x))\n\ndef showFields():\n for (mt, defs) in sorted(allFields.items()):\n info(mt)\n for (fname, fdef) in 
sorted(defs.items()):\n info('{:>25}: {:<10} ({})'.format(fname, *fdef))\n\ndef showdata(rows):\n for row in rows:\n for f in sorted(row.items()):\n info('{:>20} = {}'.format(*f))\n info('o-o-o-o-o-o-o-o-o-o-o-o')\n\ndef showData():\n for (mt, rows) in sorted(allData.items()):\n info('o-o-o-o-o-o-o TABLE {} with {} rows o-o-o-o-o-o-o-o '.format(mt, len(rows)))\n showdata(rows[0:2])\n\ndef showMoney():\n for tf in sorted(money_notes):\n for v in sorted(money_notes[tf]):\n info('{} \"{}\" <= {}'.format(\n tf, v,\n ' | '.join(money_notes[tf][v]),\n ))", "Read FM fields", "def readFmFields():\n for mt in mainTables:\n infile = '{}/{}.xml'.format(FM_DIR, mt)\n root = etree.parse(infile, parser).getroot()\n fieldroots = [x for x in root.iter(FMNS+'METADATA')]\n fieldroot = fieldroots[0]\n fields = []\n fieldDefs = {}\n for x in fieldroot.iter(FMNS+'FIELD'):\n fname = x.get('NAME').lower().replace(' ','_').replace(':', '_')\n ftype = x.get('TYPE').lower()\n fmult = int(x.get('MAXREPEAT'))\n fields.append(fname)\n fieldDefs[fname] = [ftype, fmult]\n rawFields[mt] = fields\n allFields[mt] = fieldDefs\n\n for f in SKIP_FIELDS[mt]:\n del allFields[mt][f]\n\n for (f, mfs) in MERGE_FIELDS[mt].items():\n allFields[mt][f][1] += 1\n for mf in mfs:\n del allFields[mt][mf]\n allFields[mt] = dict((MAP_FIELDS[mt][f], v) for (f,v) in allFields[mt].items())\n for f in SPLIT_FIELDS[mt]:\n allFields[mt][f][1] += 1\n for (f, fo) in DECOMPOSE_FIELDS[mt].items():\n allFields[mt][fo] = allFields[mt][f]\n allFields[mt][f] = [allFields[mt][f][0], 1]\n for (f, t) in FIELD_TYPE[mt].items():\n allFields[mt][f][0] = t", "Read FM data", "def readFmData():\n for mt in mainTables:\n infile = '{}/{}.xml'.format(FM_DIR, mt)\n root = etree.parse(infile, parser).getroot()\n dataroots = [x for x in root.iter(FMNS+'RESULTSET')]\n dataroot = dataroots[0]\n rows = []\n rowsRaw = []\n fields = rawFields[mt]\n for (i, r) in enumerate(dataroot.iter(FMNS+'ROW')):\n rowRaw = []\n for c in r.iter(FMNS+'COL'):\n data = [x.text.strip() for x in c.iter(FMNS+'DATA') if x.text != None]\n rowRaw.append(data)\n if len(rowRaw) != len(fields):\n warning('row {}: fields encountered = {}, should be {}'.format(len(row), len(fields)))\n rowsRaw.append(dict((f,v) for (f, v) in zip(fields, rowRaw)))\n row = dict((f,v) for (f, v) in zip(fields, rowRaw) if f not in SKIP_FIELDS[mt])\n for (f, mfs) in MERGE_FIELDS[mt].items():\n for mf in mfs:\n row[f].extend(row[mf])\n del row[mf]\n row = dict((MAP_FIELDS[mt][f], v) for (f,v) in row.items())\n for (f, spl) in SPLIT_FIELDS[mt].items():\n row[f] = reduce(lambda x,y: x+y, [spl.split(v) for v in row[f]], [])\n for (f, hack) in HACK_FIELDS[mt].items():\n row[f] = [hack(v) for v in row[f]]\n for (f, fo) in DECOMPOSE_FIELDS[mt].items():\n row[fo] = row[f][1:]\n row[f] = [row[f][0]] if len(row[f]) else []\n row['_id'] = ObjectId()\n #info('\\n'.join('{}={}'.format(*x) for x in sorted(row.items())))\n for (f, v) in row.items(): row[f] = sanitize(mt, i, f, v)\n rows.append(row)\n allData[mt] = rows\n rawData[mt] = rowsRaw\n\n if money_warnings:\n for tf in sorted(money_warnings):\n for v in sorted(money_warnings[tf]):\n warning('{} \"{}\" <= {}'.format(\n tf, v,\n ' | '.join(money_warnings[tf][v]),\n ))", "Split tables into several tables by column groups", "def moveFields():\n for mt in mainTables:\n for (omt, mfs) in MOVE_FIELDS[mt].items():\n for mf in mfs:\n allFields.setdefault(omt, dict())[mf] = allFields[mt][mf]\n del allFields[mt][mf]\n allFields.setdefault(omt, dict)['{}_id'.format(mt)] = ('id', 1)\n\n for 
row in allData[mt]:\n for (omt, mfs) in MOVE_FIELDS[mt].items():\n orow = dict((mf, row[mf]) for mf in mfs)\n orow['_id'] = ObjectId()\n orow['{}_id'.format(mt)] = row['_id']\n allData.setdefault(omt, []).append(orow)\n for mf in mfs: del row[mf]", "Value Lists", "def readLists():\n valueLists = dict()\n for path in glob('{}/*.txt'.format(FM_DIR)):\n tname = basename(splitext(path)[0])\n data = []\n with open(path) as fh:\n for line in fh:\n data.append(line.rstrip().split('\\t'))\n valueLists[tname] = data\n\n for (vList, data) in valueLists.items():\n if vList == 'countryExtra':\n mapping = dict((x[0], x[1:]) for x in data)\n else:\n mapping = dict((i+1, x[0]) for (i, x) in enumerate(data))\n valueDict[vList] = mapping\n allFields[vList] = dict(\n _id=('id', 1),\n value=('string', 1),\n )\n \n for mt in allData:\n fs = MAKE_VALUE_LISTS.get(mt, set())\n for f in fs:\n valSet = set()\n for row in allData[mt]:\n values = row.get(f, [])\n if type(values) is not list:\n values = [values]\n valSet |= set(values)\n valueDict[f] = dict((i+1, x) for (i, x) in enumerate(sorted(valSet)))\n allFields[f] = dict(\n _id=('id', 1),\n value=('string', 1),\n )", "Country table", "def countryTable():\n extraInfo = valueDict['countryExtra']\n idMapping = dict()\n\n for row in allData['country']:\n for f in row:\n if type(row[f]) is list: row[f] = row[f][0]\n iso = row['iso']\n row['_id'] = ObjectId()\n idMapping[iso] = row['_id']\n (name, lat, long) = extraInfo[iso]\n row['latitude'] = lat\n row['longitude'] = long\n\n for row in allData['contrib']:\n newValue = []\n for iso in row['country']:\n newValue.append(dict(_id=idMapping[iso], iso=iso, value=extraInfo[iso][0]))\n row['country'] = newValue\n \n allFields['country']['_id'] = ('id', 1)\n allFields['country']['iso'] = ('string', 1)\n allFields['country']['latitude'] = ('float', 1)\n allFields['country']['longitude'] = ('float', 1)\n", "User table", "def userTable():\n idMapping = dict()\n existingUsers = []\n testUsers = [\n dict(eppn='suzan', email='suzan1@test.eu', mayLogin=True, authority='local', \n firstName='Suzan', lastName='Karelse'),\n dict(eppn='marie', email='suzan2@test.eu', mayLogin=True, authority='local',\n firstName='Marie', lastName='Pieterse'),\n dict(eppn='gertjan', email='gertjan@test.eu', mayLogin=False, authority='local',\n firstName='Gert Jan', lastName='Klein-Holgerink'),\n dict(eppn='lisa', email='lisa@test.eu', mayLogin=True, authority='local',\n firstName='Lisa', lastName='de Leeuw'),\n dict(eppn='dirk', email='dirk@test.eu', mayLogin=True, authority='local',\n firstName='Dirk', lastName='Roorda'),\n ] \n\n users = collections.defaultdict(set)\n eppnSet = set()\n for c in allData['contrib']:\n crs = c.get('creator', []) + c.get('modifiedBy', [])\n for cr in crs:\n eppnSet.add(cr)\n idMapping = dict((eppn, ObjectId()) for eppn in sorted(eppnSet))\n for c in allData['contrib']:\n c['creator'] = [dict(_id=idMapping[cr]) for cr in c['creator']]\n\n if 'modifiedBy' not in c:\n c['modifiedBy'] = []\n else:\n c['modifiedBy'] = [dict(_id=idMapping[cr]) for cr in c['modifiedBy']]\n\n users = dict((i, eppn) for (eppn, i) in idMapping.items())\n for (i, eppn) in sorted(users.items()):\n existingUsers.append(dict(_id=i, eppn=eppn, mayLogin=False, authority='legacy'))\n\n for u in testUsers:\n u['_id'] = ObjectId()\n idMapping[u['eppn']] = u['_id']\n existingUsers.append(u)\n inGroups = [\n dict(eppn='DirkRoorda@dariah.eu', authority='DARIAH', group='system'),\n dict(eppn='LisaDeLeeuw@dariah.eu', authority='DARIAH', group='office'),\n 
dict(eppn='suzan', authority='local', group='auth'),\n dict(eppn='marie', authority='local', group='auth'),\n dict(eppn='gertjan', authority='local', group='auth'),\n dict(eppn='lisa', authority='local', group='office'),\n dict(eppn='dirk', authority='local', group='system'),\n ]\n inGroups = [dict(tuple(ig.items())+(('_id', ObjectId()),)) for ig in inGroups]\n allData['user'] = existingUsers\n allData['group'] = inGroups\n \n allFields['user'] = dict(\n _id=('id', 1),\n eppn=('string', 1),\n email=('email', 1),\n mayLogin=('bool', 1),\n authority=('string', 1),\n firstName=('string', 1),\n lastName=('string', 1),\n )\n allFields['group'] = dict(\n _id=('id', 1),\n eppn=('string', 1),\n authority=('string', 1),\n group=('string', 1),\n )\n uidMapping.update(idMapping)", "Related tables", "def relTables():\n def norm(x): return x.strip().lower()\n \n relIndex = dict()\n for mt in sorted(VALUE_LISTS):\n rows = allData[mt]\n for f in sorted(VALUE_LISTS[mt]):\n comps = f.split(':')\n if len(comps) == 2:\n (f, fAs) = comps\n else:\n fAs = f\n relInfo = valueDict[fAs]\n if not fAs in relIndex:\n idMapping = dict((i, ObjectId()) for i in relInfo)\n allData[fAs] = [dict(_id=idMapping[i], value=v) for (i, v) in relInfo.items()]\n relIndex[fAs] = dict((norm(v), (idMapping[i], v)) for (i, v) in relInfo.items())\n for row in rows:\n newValue = []\n for v in row[f]:\n rnv = norm(v)\n (i, nv) = relIndex[fAs].get(rnv, (\"-1\", None))\n if nv == None:\n target = MOVE_MISSING[mt]\n if target not in row: row[target] = ['']\n row[target][0] += '\\nMOVED FROM {}: {}'.format(f, v)\n else: newValue.append(dict(_id=i, value=nv))\n row[f] = newValue ", "Test tweaks\nTweaks for testing purposes.", "def testTweaks():\n mt = 'contrib'\n myContribs = {'3DHOP', 'AAI'}\n my = uidMapping['dirk']\n for row in allData[mt]:\n title = row.get('title', [None])\n if len(title) == 0: title = [None]\n if title[0] in myContribs:\n row['creator'] = [dict(_id=my)]", "Insert into MongoDB", "def importMongo():\n client = MongoClient()\n client.drop_database('dariah')\n db = client.dariah\n for (mt, rows) in allData.items():\n info(mt)\n db[mt].insert_many(rows)", "The whole pipeline", "money_warnings = {}\nmoney_notes = {}\nvalueDict = dict()\nrawFields = dict()\nallFields = dict()\nrawData = dict()\nallData = dict()\nuidMapping = dict()\n\nparser = etree.XMLParser(remove_blank_text=True, ns_clean=True)\nreadFmFields()\nreadFmData()\nreadLists()\nmoveFields()\ncountryTable()\nuserTable()\nrelTables()\ntestTweaks()\nimportMongo()\n#showData()\n#showMoney()", "To import the bson dump in another mongodb installation, use the commandline to dump the dariah database here\nmongodump -d dariah -o dariah\n\nand to import it elsewhere.\nmongorestore --drop -d dariah dariah", "valueDict.keys()\n\nvalueDict['keywords']", "Exploration\nThe process has finished, but here is space to explore the data, in order to find patterns, regularities, and, more importantly, irregularities.\nFirst step: create csv files of the data and combine them into an excel sheet.", "import xlsxwriter\n\nEXPORT_DIR = os.path.expanduser('~/Downloads')\nEXPORT_ORIG = '{}/contribFromFileMaker.xlsx'.format(EXPORT_DIR)\nEXPORT_MONGO = '{}/contribInMongoDB.xlsx'.format(EXPORT_DIR)\n\nworkbook = xlsxwriter.Workbook(EXPORT_ORIG, {'strings_to_urls': False})\nfor mt in rawData:\n worksheet = workbook.add_worksheet(mt)\n for (f, field) in enumerate(rawFields[mt]):\n worksheet.write(0, f, field)\n for (r, row) in enumerate(rawData[mt]):\n for (f, field) in 
enumerate(rawFields[mt]):\n val = row[field]\n val = [] if val == None else val if type(val) is list else [val]\n val = '|'.join(val)\n worksheet.write(r+1, f, val)\nworkbook.close()\n\nworkbook = xlsxwriter.Workbook(EXPORT_MONGO, {'strings_to_urls': False})\nfor mt in allData:\n worksheet = workbook.add_worksheet(mt)\n fields = sorted(allFields[mt])\n for (f, field) in enumerate(fields):\n worksheet.write(0, f, field)\n for (r, row) in enumerate(allData[mt]):\n for (f, field) in enumerate(fields):\n fmt = None\n val = row.get(field, [])\n (ftype, fmult) = allFields[mt][field]\n val = [] if val == None else [val] if type(val) is not list else val\n exportVal = []\n for v in val:\n if type(v) is dict:\n exportVal.append(','.join(str(vv) for vv in v.values()))\n elif ftype == 'date' or ftype == 'datetime':\n exportVal.append(v if type(v) is str else v.isoformat())\n else:\n exportVal.append(str(v))\n worksheet.write(r+1, f, ' | '.join(exportVal))\nworkbook.close()\n\nshowFields()\n\nclient = MongoClient()\ndbm = client.dariah\nfor d in dbm.contrib.find({'title': '3DHOP'}).limit(2):\n print('=' * 50)\n for f in sorted(d):\n print('{}={}'.format(f, d[f]))", "Here is a query to get all 'type_of_inkind' values for contributions.", "for c in dbm.contrib.distinct('typeContribution', {}):\n print(c)", "Here are the users:", "for c in dbm.users.find({}):\n print(c)", "Here are the countries:", "for c in dbm.country.find({'isMember': True}):\n print(c)\n\nfor c in dbm.contrib.distinct('country', {}):\n print(c)", "Let us get related data: the type_of_inkind of all contributions. For each contribution we need only the ids of the related type_of_inkind values.", "for d in dbm.contrib.find({}, {'typeContribution': True}).limit(10):\n print(d)\n\nfor d in dbm.contrib.find({}, {'country': True}).limit(10):\n print(d)\n\nx = dict(_id=5, value='66')\ny = dict(_id=5, value='66')\nx == y" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
newworldnewlife/TensorFlow-Tutorials
03C_Keras_API.ipynb
mit
[ "TensorFlow Tutorial #03-C\nKeras API\nby Magnus Erik Hvass Pedersen\n/ GitHub / Videos on YouTube\nIntroduction\nTutorial #02 showed how to implement a Convolutional Neural Network in TensorFlow. We made a few helper-functions for creating the layers in the network. It is essential to have a good high-level API because it makes it much easier to implement complex models, and it lowers the risk of errors.\nThere are several of these builder API's available for TensorFlow: PrettyTensor (Tutorial #03), Layers API (Tutorial #03-B), and several others. But they were never really finished and now they seem to be more or less abandoned by their developers.\nThis tutorial is about the Keras API which is already highly developed with very good documentation - and the development continues. It seems likely that Keras will be the standard API for TensorFlow in the future so it is recommended that you use it instead of the other APIs.\nThe author of Keras has written a blog-post on his API design philosophy which you should read.\nFlowchart\nThe following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. See Tutorial #02 for a more detailed description of convolution.\nThere are two convolutional layers, each followed by a down-sampling using max-pooling (not shown in this flowchart). Then there are two fully-connected layers ending in a softmax-classifier.\n\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nimport math", "We need to import several things from Keras. Note the long import-statements. This might be a bug. Hopefully it will be possible to write shorter and more elegant lines in the future.", "# from tf.keras.models import Sequential # This does not work!\nfrom tensorflow.python.keras.models import Sequential\nfrom tensorflow.python.keras.layers import InputLayer, Input\nfrom tensorflow.python.keras.layers import Reshape, MaxPooling2D\nfrom tensorflow.python.keras.layers import Conv2D, Dense, Flatten", "This was developed using Python 3.6 (Anaconda) and TensorFlow version:", "tf.__version__\n\ntf.keras.__version__", "Load Data\nThe MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.", "from tensorflow.examples.tutorials.mnist import input_data\ndata = input_data.read_data_sets('data/MNIST/', one_hot=True)", "The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.", "print(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(data.train.labels)))\nprint(\"- Test-set:\\t\\t{}\".format(len(data.test.labels)))\nprint(\"- Validation-set:\\t{}\".format(len(data.validation.labels)))", "The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.", "data.test.cls = np.argmax(data.test.labels, axis=1)", "Data Dimensions\nThe data dimensions are used in several places in the source-code below. 
They are defined once so we can use these variables instead of numbers throughout the source-code below.", "# We know that MNIST images are 28 pixels in each dimension.\nimg_size = 28\n\n# Images are stored in one-dimensional arrays of this length.\nimg_size_flat = img_size * img_size\n\n# Tuple with height and width of images used to reshape arrays.\n# This is used for plotting the images.\nimg_shape = (img_size, img_size)\n\n# Tuple with height, width and depth used to reshape arrays.\n# This is used for reshaping in Keras.\nimg_shape_full = (img_size, img_size, 1)\n\n# Number of colour channels for the images: 1 channel for gray-scale.\nnum_channels = 1\n\n# Number of classes, one class for each of 10 digits.\nnum_classes = 10", "Helper-function for plotting images\nFunction used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.", "def plot_images(images, cls_true, cls_pred=None):\n assert len(images) == len(cls_true) == 9\n \n # Create figure with 3x3 sub-plots.\n fig, axes = plt.subplots(3, 3)\n fig.subplots_adjust(hspace=0.3, wspace=0.3)\n\n for i, ax in enumerate(axes.flat):\n # Plot image.\n ax.imshow(images[i].reshape(img_shape), cmap='binary')\n\n # Show true and predicted classes.\n if cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true[i])\n else:\n xlabel = \"True: {0}, Pred: {1}\".format(cls_true[i], cls_pred[i])\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Plot a few images to see if data is correct", "# Get the first images from the test-set.\nimages = data.test.images[0:9]\n\n# Get the true classes for those images.\ncls_true = data.test.cls[0:9]\n\n# Plot the images and labels using our helper-function above.\nplot_images(images=images, cls_true=cls_true)", "Helper-function to plot example errors\nFunction for plotting examples of images from the test-set that have been mis-classified.", "def plot_example_errors(cls_pred):\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # Boolean array whether the predicted class is incorrect.\n incorrect = (cls_pred != data.test.cls)\n\n # Get the images from the test-set that have been\n # incorrectly classified.\n images = data.test.images[incorrect]\n \n # Get the predicted classes for those images.\n cls_pred = cls_pred[incorrect]\n\n # Get the true classes for those images.\n cls_true = data.test.cls[incorrect]\n \n # Plot the first 9 images.\n plot_images(images=images[0:9],\n cls_true=cls_true[0:9],\n cls_pred=cls_pred[0:9])", "PrettyTensor API\nThis is how the Convolutional Neural Network was implemented in Tutorial #03 using the PrettyTensor API. It is shown here for easy comparison to the Keras implementation below.", "if False:\n x_pretty = pt.wrap(x_image)\n\n with pt.defaults_scope(activation_fn=tf.nn.relu):\n y_pred, loss = x_pretty.\\\n conv2d(kernel=5, depth=16, name='layer_conv1').\\\n max_pool(kernel=2, stride=2).\\\n conv2d(kernel=5, depth=36, name='layer_conv2').\\\n max_pool(kernel=2, stride=2).\\\n flatten().\\\n fully_connected(size=128, name='layer_fc1').\\\n softmax_classifier(num_classes=num_classes, labels=y_true)", "Sequential Model\nThe Keras API has two modes of constructing Neural Networks. 
The simplest is the Sequential Model which only allows for the layers to be added in sequence.", "# Start construction of the Keras Sequential model.\nmodel = Sequential()\n\n# Add an input layer which is similar to a feed_dict in TensorFlow.\n# Note that the input-shape must be a tuple containing the image-size.\nmodel.add(InputLayer(input_shape=(img_size_flat,)))\n\n# The input is a flattened array with 784 elements,\n# but the convolutional layers expect images with shape (28, 28, 1)\nmodel.add(Reshape(img_shape_full))\n\n# First convolutional layer with ReLU-activation and max-pooling.\nmodel.add(Conv2D(kernel_size=5, strides=1, filters=16, padding='same',\n activation='relu', name='layer_conv1'))\nmodel.add(MaxPooling2D(pool_size=2, strides=2))\n\n# Second convolutional layer with ReLU-activation and max-pooling.\nmodel.add(Conv2D(kernel_size=5, strides=1, filters=36, padding='same',\n activation='relu', name='layer_conv2'))\nmodel.add(MaxPooling2D(pool_size=2, strides=2))\n\n# Flatten the 4-rank output of the convolutional layers\n# to 2-rank that can be input to a fully-connected / dense layer.\nmodel.add(Flatten())\n\n# First fully-connected / dense layer with ReLU-activation.\nmodel.add(Dense(128, activation='relu'))\n\n# Last fully-connected / dense layer with softmax-activation\n# for use in classification.\nmodel.add(Dense(num_classes, activation='softmax'))", "Model Compilation\nThe Neural Network has now been defined and must be finalized by adding a loss-function, optimizer and performance metrics. This is called model \"compilation\" in Keras.\nWe can either define the optimizer using a string, or if we want more control of its parameters then we need to instantiate an object. For example, we can set the learning-rate.", "from tensorflow.python.keras.optimizers import Adam\n\noptimizer = Adam(lr=1e-3)", "For a classification-problem such as MNIST which has 10 possible classes, we need to use the loss-function called categorical_crossentropy. The performance metric we are interested in is the classification accuracy.", "model.compile(optimizer=optimizer,\n loss='categorical_crossentropy',\n metrics=['accuracy'])", "Training\nNow that the model has been fully defined with loss-function and optimizer, we can train it. This function takes numpy-arrays and performs the given number of training epochs using the given batch-size. An epoch is one full use of the entire training-set. So for 10 epochs we would iterate randomly over the entire training-set 10 times.", "model.fit(x=data.train.images,\n y=data.train.labels,\n epochs=1, batch_size=128)", "Evaluation\nNow that the model has been trained we can test its performance on the test-set. This also uses numpy-arrays as input.", "result = model.evaluate(x=data.test.images,\n y=data.test.labels)", "We can print all the performance metrics for the test-set.", "for name, value in zip(model.metrics_names, result):\n print(name, value)", "Or we can just print the classification accuracy.", "print(\"{0}: {1:.2%}\".format(model.metrics_names[1], result[1]))", "Prediction\nWe can also predict the classification for new images. We will just use some images from the test-set but you could load your own images into numpy arrays and use those instead.", "images = data.test.images[0:9]", "These are the true class-number for those images. 
This is only used when plotting the images.", "cls_true = data.test.cls[0:9]", "Get the predicted classes as One-Hot encoded arrays.", "y_pred = model.predict(x=images)", "Get the predicted classes as integers.", "cls_pred = np.argmax(y_pred,axis=1)\n\nplot_images(images=images,\n cls_true=cls_true,\n cls_pred=cls_pred)", "Examples of Mis-Classified Images\nWe can plot some examples of mis-classified images from the test-set.\nFirst we get the predicted classes for all the images in the test-set:", "y_pred = model.predict(x=data.test.images)", "Then we convert the predicted class-numbers from One-Hot encoded arrays to integers.", "cls_pred = np.argmax(y_pred,axis=1)", "Plot some of the mis-classified images.", "plot_example_errors(cls_pred)", "Functional Model\nThe Keras API can also be used to construct more complicated networks using the Functional Model. This may look a little confusing at first, because each call to the Keras API will create and return an instance that is itself callable. It is not clear whether it is a function or an object - but we can call it as if it is a function. This allows us to build computational graphs that are more complex than the Sequential Model allows.", "# Create an input layer which is similar to a feed_dict in TensorFlow.\n# Note that the input-shape must be a tuple containing the image-size.\ninputs = Input(shape=(img_size_flat,))\n\n# Variable used for building the Neural Network.\nnet = inputs\n\n# The input is an image as a flattened array with 784 elements.\n# But the convolutional layers expect images with shape (28, 28, 1)\nnet = Reshape(img_shape_full)(net)\n\n# First convolutional layer with ReLU-activation and max-pooling.\nnet = Conv2D(kernel_size=5, strides=1, filters=16, padding='same',\n activation='relu', name='layer_conv1')(net)\nnet = MaxPooling2D(pool_size=2, strides=2)(net)\n\n# Second convolutional layer with ReLU-activation and max-pooling.\nnet = Conv2D(kernel_size=5, strides=1, filters=36, padding='same',\n activation='relu', name='layer_conv2')(net)\nnet = MaxPooling2D(pool_size=2, strides=2)(net)\n\n# Flatten the output of the conv-layer from 4-dim to 2-dim.\nnet = Flatten()(net)\n\n# First fully-connected / dense layer with ReLU-activation.\nnet = Dense(128, activation='relu')(net)\n\n# Last fully-connected / dense layer with softmax-activation\n# so it can be used for classification.\nnet = Dense(num_classes, activation='softmax')(net)\n\n# Output of the Neural Network.\noutputs = net", "Model Compilation\nWe have now defined the architecture of the model with its input and output. We now have to create a Keras model and compile it with a loss-function and optimizer, so it is ready for training.", "from tensorflow.python.keras.models import Model", "Create a new instance of the Keras Functional Model. We give it the inputs and outputs of the Convolutional Neural Network that we constructed above.", "model2 = Model(inputs=inputs, outputs=outputs)", "Compile the Keras model using the rmsprop optimizer and with a loss-function for multiple categories. The only performance metric we are interested in is the classification accuracy, but you could use a list of metrics here.", "model2.compile(optimizer='rmsprop',\n loss='categorical_crossentropy',\n metrics=['accuracy'])", "Training\nThe model has now been defined and compiled so it can be trained using the same fit() function as used in the Sequential Model above. 
This also takes numpy-arrays as input.", "model2.fit(x=data.train.images,\n y=data.train.labels,\n epochs=1, batch_size=128)", "Evaluation\nOnce the model has been trained we can evaluate its performance on the test-set. This is the same syntax as for the Sequential Model.", "result = model2.evaluate(x=data.test.images,\n y=data.test.labels)", "The result is a list of values, containing the loss-value and all the metrics we defined when we compiled the model. Note that 'accuracy' is now called 'acc' which is a small inconsistency.", "for name, value in zip(model.metrics_names, result):\n print(name, value)", "We can also print the classification accuracy as a percentage:", "print(\"{0}: {1:.2%}\".format(model.metrics_names[1], result[1]))", "Examples of Mis-Classified Images\nWe can plot some examples of mis-classified images from the test-set.\nFirst we get the predicted classes for all the images in the test-set:", "y_pred = model2.predict(x=data.test.images)", "Then we convert the predicted class-numbers from One-Hot encoded arrays to integers.", "cls_pred = np.argmax(y_pred, axis=1)", "Plot some of the mis-classified images.", "plot_example_errors(cls_pred)", "Save & Load Model\nNOTE: You need to install h5py for this to work!\nTutorial #04 was about saving and restoring the weights of a model using native TensorFlow code. It was an absolutely horrible API! Fortunately, Keras makes this very easy.\nThis is the file-path where we want to save the Keras model.", "path_model = 'model.keras'", "Saving a Keras model with the trained weights is then just a single function call, as it should be.", "model2.save(path_model)", "Delete the model from memory so we are sure it is no longer used.", "del model2", "We need to import this Keras function for loading the model.", "from tensorflow.python.keras.models import load_model", "Loading the model is then just a single function-call, as it should be.", "model3 = load_model(path_model)", "We can then use the model again e.g. to make predictions. We get the first 9 images from the test-set and their true class-numbers.", "images = data.test.images[0:9]\n\ncls_true = data.test.cls[0:9]", "We then use the restored model to predict the class-numbers for those images.", "y_pred = model3.predict(x=images)", "Get the class-numbers as integers.", "cls_pred = np.argmax(y_pred, axis=1)", "Plot the images with their true and predicted class-numbers.", "plot_images(images=images,\n cls_pred=cls_pred,\n cls_true=cls_true)", "Visualization of Layer Weights and Outputs\nHelper-function for plotting convolutional weights", "def plot_conv_weights(weights, input_channel=0):\n # Get the lowest and highest values for the weights.\n # This is used to correct the colour intensity across\n # the images so they can be compared with each other.\n w_min = np.min(weights)\n w_max = np.max(weights)\n\n # Number of filters used in the conv. 
layer.\n num_filters = weights.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot all the filter-weights.\n for i, ax in enumerate(axes.flat):\n # Only plot the valid filter-weights.\n if i<num_filters:\n # Get the weights for the i'th filter of the input channel.\n # See new_conv_layer() for details on the format\n # of this 4-dim tensor.\n img = weights[:, :, input_channel, i]\n\n # Plot image.\n ax.imshow(img, vmin=w_min, vmax=w_max,\n interpolation='nearest', cmap='seismic')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Get Layers\nKeras has a simple way of listing the layers in the model.", "model3.summary()", "We count the indices to get the layers we want.\nThe input-layer has index 0.", "layer_input = model3.layers[0]", "The first convolutional layer has index 2.", "layer_conv1 = model3.layers[2]\nlayer_conv1", "The second convolutional layer has index 4.", "layer_conv2 = model3.layers[4]", "Convolutional Weights\nNow that we have the layers we can easily get their weights.", "weights_conv1 = layer_conv1.get_weights()[0]", "This gives us a 4-rank tensor.", "weights_conv1.shape", "Plot the weights using the helper-function from above.", "plot_conv_weights(weights=weights_conv1, input_channel=0)", "We can also get the weights for the second convolutional layer and plot them.", "weights_conv2 = layer_conv2.get_weights()[0]\n\nplot_conv_weights(weights=weights_conv2, input_channel=0)", "Helper-function for plotting the output of a convolutional layer", "def plot_conv_output(values):\n # Number of filters used in the conv. layer.\n num_filters = values.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot the output images of all the filters.\n for i, ax in enumerate(axes.flat):\n # Only plot the images for valid filters.\n if i<num_filters:\n # Get the output image of using the i'th filter.\n img = values[0, :, :, i]\n\n # Plot image.\n ax.imshow(img, interpolation='nearest', cmap='binary')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Input Image\nHelper-function for plotting a single image.", "def plot_image(image):\n plt.imshow(image.reshape(img_shape),\n interpolation='nearest',\n cmap='binary')\n\n plt.show()", "Plot an image from the test-set which will be used as an example below.", "image1 = data.test.images[0]\nplot_image(image1)", "Output of Convolutional Layer - Method 1\nThere are different ways of getting the output of a layer in a Keras model. This method uses a so-called K-function which turns a part of the Keras model into a function.", "from tensorflow.python.keras import backend as K\n\noutput_conv1 = K.function(inputs=[layer_input.input],\n outputs=[layer_conv1.output])", "We can then call this function with the input image. Note that the image is wrapped in two lists because the function expects an array of that dimensionality. 
Likewise, the function returns an array with one more dimensionality than we want so we just take the first element.", "layer_output1 = output_conv1([[image1]])[0]\nlayer_output1.shape", "We can then plot the output of all 16 channels of the convolutional layer.", "plot_conv_output(values=layer_output1)", "Output of Convolutional Layer - Method 2\nKeras also has another method for getting the output of a layer inside the model. This creates another Functional Model using the same input as the original model, but the output is now taken from the convolutional layer that we are interested in.", "output_conv2 = Model(inputs=layer_input.input,\n outputs=layer_conv2.output)", "This creates a new model-object where we can call the typical Keras functions. To get the output of the convoloutional layer we call the predict() function with the input image.", "layer_output2 = output_conv2.predict(np.array([image1]))\nlayer_output2.shape", "We can then plot the images for all 36 channels.", "plot_conv_output(values=layer_output2)", "Conclusion\nThis tutorial showed how to use the so-called Keras API for easily building Convolutional Neural Networks in TensorFlow. Keras is by far the most complete and best designed API for TensorFlow.\nThis tutorial also showed how to use Keras to save and load a model, as well as getting the weights and outputs of convolutional layers.\nIt seems likely that Keras will be the standard API for TensorFlow in the future, for the simple reason that is already very good and it is constantly being improved. So it is recommended that you use Keras.\nExercises\nThese are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.\nYou may want to backup this Notebook before making any changes.\n\nTrain for more epochs. Does it improve the classification accuracy?\nChange the activation function to sigmoid for some of the layers.\nCan you find a simple way of changing the activation function for all the layers?\nPlot the output of the max-pooling layers instead of the conv-layers.\nReplace the 2x2 max-pooling layers with stride=2 in the convolutional layers. Is there a difference in classification accuracy? What if you optimize it again and again? The difference is random, so how would you measure if there really is a difference? What are the pros and cons of using max-pooling vs. stride in the conv-layer?\nChange the parameters for the layers, e.g. the kernel, depth, size, etc. 
What is the difference in time usage and classification accuracy?\nAdd and remove some convolutional and fully-connected layers.\nWhat is the simplest network you can design that still performs well?\nChange the Functional Model so it has another convolutional layer that connects in parallel to the existing conv-layers before going into the dense layers.\nChange the Functional Model so it outputs the predicted class both as a One-Hot encoded array and as an integer, so we don't have to use numpy.argmax() afterwards.\nRemake the program yourself without looking too much at this source-code.\nExplain to a friend how the program works.\n\nLicense (MIT)\nCopyright (c) 2016-2017 by Magnus Erik Hvass Pedersen\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." ]
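The notebook above builds and inspects its network through the tensorflow.python.keras import path, which is internal to TensorFlow 1.x. Purely as a minimal sketch (assuming TensorFlow 2.x, where tf.keras is the public API), the same architecture and compilation step could look like this; the layer sizes follow the notebook, everything else is an assumption:

```python
import tensorflow as tf

# Sketch under the TF 2.x assumption stated above -- not the tutorial's own code.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(784,)),
    tf.keras.layers.Reshape((28, 28, 1)),
    tf.keras.layers.Conv2D(16, kernel_size=5, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),
    tf.keras.layers.Conv2D(36, kernel_size=5, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```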
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jseabold/statsmodels
examples/notebooks/mediation_survival.ipynb
bsd-3-clause
[ "Mediation analysis with duration data\nThis notebook demonstrates mediation analysis when the\nmediator and outcome are duration variables, modeled\nusing proportional hazards regression. These examples\nare based on simulated data.", "import pandas as pd\nimport numpy as np\nimport statsmodels.api as sm\nfrom statsmodels.stats.mediation import Mediation", "Make the notebook reproducible.", "np.random.seed(3424)", "Specify a sample size.", "n = 1000", "Generate an exposure variable.", "exp = np.random.normal(size=n)", "Generate a mediator variable.", "def gen_mediator():\n mn = np.exp(exp)\n mtime0 = -mn * np.log(np.random.uniform(size=n))\n ctime = -2 * mn * np.log(np.random.uniform(size=n))\n mstatus = (ctime >= mtime0).astype(np.int)\n mtime = np.where(mtime0 <= ctime, mtime0, ctime)\n return mtime0, mtime, mstatus", "Generate an outcome variable.", "def gen_outcome(otype, mtime0):\n if otype == \"full\":\n lp = 0.5*mtime0\n elif otype == \"no\":\n lp = exp\n else:\n lp = exp + mtime0\n mn = np.exp(-lp)\n ytime0 = -mn * np.log(np.random.uniform(size=n))\n ctime = -2 * mn * np.log(np.random.uniform(size=n))\n ystatus = (ctime >= ytime0).astype(np.int)\n ytime = np.where(ytime0 <= ctime, ytime0, ctime)\n return ytime, ystatus", "Build a dataframe containing all the relevant variables.", "def build_df(ytime, ystatus, mtime0, mtime, mstatus):\n df = pd.DataFrame({\"ytime\": ytime, \"ystatus\": ystatus,\n \"mtime\": mtime, \"mstatus\": mstatus,\n \"exp\": exp})\n return df", "Run the full simulation and analysis, under a particular\npopulation structure of mediation.", "def run(otype):\n\n mtime0, mtime, mstatus = gen_mediator()\n ytime, ystatus = gen_outcome(otype, mtime0)\n df = build_df(ytime, ystatus, mtime0, mtime, mstatus)\n\n outcome_model = sm.PHReg.from_formula(\"ytime ~ exp + mtime\", status=\"ystatus\", data=df)\n mediator_model = sm.PHReg.from_formula(\"mtime ~ exp\", status=\"mstatus\", data=df)\n\n med = Mediation(outcome_model, mediator_model, \"exp\", \"mtime\",\n outcome_predict_kwargs={\"pred_only\": True})\n med_result = med.fit(n_rep=20)\n print(med_result.summary())", "Run the example with full mediation", "run(\"full\")", "Run the example with partial mediation", "run(\"partial\")", "Run the example with no mediation", "run(\"no\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
luzhijun/Optimization
CMA-ES/分段数对效果的影响/分段数对拟合效果的影响.ipynb
apache-2.0
[ "说明\n 目的: 为探讨在同一个样本点集下,cma-es算法对$f(x)$拟合效果与分段数的关系,实验记录每个分段方案下cma-es过程的迭代时间,关系矩阵变化过程,计算结果,cpu计算时间等进行对比。\n 函数: \n<center>$ f(x)=10sin0.6x+uniform(-1.5,1.5)gauss(0,5),x \\in[-7,7)$ </center>\n 分段: 为保证每段点数的一致性,通过$f(x)$在定义域内均匀分布300个样本点,段数分别为$PN(partition\\ number) \\in {5,10,15,20,25,30}$,对应CMA问题的求解维度从15至90,每段点数不小于10.\n评价函数:\n1. $M1$:分段测试函数的mse;\n2. $M2$:在f1基础上增加间断点连续性判断指标:$\\sum_{i=1}^{k} (e^{\\Delta Y_i-\\alpha}-1)$,其中$\\Delta Y_i$ 代表拟合分段函数在间断点处的左右间断点差的绝对值,$\\alpha $默认代表$Y$值域范围的$1\\%$大小;\n3. $M3$:在f2基础上增加间断点一阶导数评价指标:$\\sum_{i=1}^{k}(e^{\\frac{\\Delta \\sigma_i-\\beta}{10e}}-1)$,其中$\\Delta \\sigma_i$代表左右间断点处的左右导数的$arctan$差的绝对值,$\\beta$如未特殊说明全局默认为10度($\\frac {\\pi}{18}$); \n实验\n迭代次数与分段数关系图表", "import makeData as md\n%pylab inline\nplt.rc('figure', figsize=(16, 9))\nX=md.loadData('result.tl')\n\nimport numpy as np\nimport pandas as pd\nb=[]\nfor i in range(5,35,5):\n temp=[]\n for j in range(3):\n temp.append(X[i][j]['iter'])\n b.append(temp)\nbs=np.array(b).T\nind=range(5,35,5)\nd={'M1':pd.Series(bs[0],index=ind),\n 'M2':pd.Series(bs[1],index=ind),\n 'M3':pd.Series(bs[2],index=ind)}\ndf = pd.DataFrame(d)\ndf.columns.name='function'\ndf.index.name='partition'\ndf\n\ndf.plot(kind='bar',fontsize=20)\nleg = plt.gca().get_legend()\nltext = leg.get_texts()\nplt.setp(ltext, fontsize='20')\nplt.title(\"iteration counts\",fontsize=16)\ndf.plot()\nleg = plt.gca().get_legend()\nltext = leg.get_texts()\nplt.setp(ltext, fontsize='20')\nplt.title(\"iteration counts\",fontsize=16)", "cpu计算耗时与分段数关系图表", "b=[]\nfor i in range(5,35,5):\n temp=[]\n for j in range(3):\n temp.append(X[i][j]['time'])\n b.append(temp)\nbs=np.array(b).T\nind=range(5,35,5)\nd={'M1':pd.Series(bs[0],index=ind),\n 'M2':pd.Series(bs[1],index=ind),\n 'M3':pd.Series(bs[2],index=ind)}\ndf = pd.DataFrame(d)\ndf.columns.name='function'\ndf.index.name='partition'\ndf\n\ndf.plot(kind='bar',fontsize=20)\nleg = plt.gca().get_legend()\nltext = leg.get_texts()\nplt.setp(ltext, fontsize='20')\nplt.title(\"CPU time \",fontsize=16)\ndf.plot()\nleg = plt.gca().get_legend()\nltext = leg.get_texts()\nplt.setp(ltext, fontsize='20')\nplt.title(\"CPU time\",fontsize=16)", "随分段数变化拟合情况" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
phungkh/phys202-2015-work
assignments/assignment06/ProjectEuler17.ipynb
mit
[ "Project Euler: Problem 17\nhttps://projecteuler.net/problem=17\nIf the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.\nIf all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?\nNOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of \"and\" when writing out numbers is in compliance with British usage.\nFirst write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above", "def number_to_words(n):\n a={1:'one',2:'two',3:'three',4:'four',5:'five',6:'six',7:'seven',8:'eight',9:'nine',10:'ten',11:'eleven',12:'twelve',13:'thirteen',\n 14:'fourteen',15:'fifteen',16:'sixteen',17:'seventeen',18:'eighteen',19:'nineteen',20:'twenty',30:'thirty',40:'forty',\n 50:'fifty',60:'sixty',70:'seventy',80:'eighty',90:'ninety',100:'hundred'\n }\n \n if n<=19:\n return(a[n])\n \n elif n>=21 and n<=29:\n b=str(n) # turn n into a string \n return(a[20]+\"-\"+a[int(b[1])]) #turn n back to a integer and take the 1st digit as the index in our dictionary\n \n \n elif n>=31 and n<=39:\n b=str(n)\n return(a[30]+\"-\"+a[int(b[1])])\n \n \n elif n>=41 and n<=49:\n b=str(n)\n return(a[40]+\"-\"+a[int(b[1])])\n \n elif n>=51 and n<=59:\n b=str(n)\n return(a[50]+\"-\"+a[int(b[1])])\n \n elif n>=61 and n<=69:\n b=str(n)\n return(a[60]+\"-\"+a[int(b[1])])\n \n elif n>=71 and n<=79: \n b=str(n)\n return(a[70]+\"-\"+a[int(b[1])])\n \n elif n>=81 and n<=89:\n b=str(n)\n return(a[80]+\"-\"+a[int(b[1])])\n \n elif n>=91 and n<=99:\n b=str(n)\n return(a[90]+\"-\"+a[int(b[1])])\n elif n==100:\n return'one hundred'\n elif n==200:\n return'two hundred'\n elif n==300:\n return'three hundred'\n elif n==400:\n return'four hundred'\n elif n==500:\n return'five hundred'\n elif n==600:\n return'six hundred'\n elif n==700:\n return'seven hundred'\n elif n==800:\n return'eight hundred'\n elif n==900:\n return'nine hundred'\n elif n==1000:\n return'one thousand'\n \n elif n>=101:\n b=str(n) #if n=139, then b='139'\n # c is the last two digits, so in this case, = '39'\n c=int(b[1:])\n \n return(a[int(b[0])]+\" \"+a[100]+\" \"+\"and\"+\" \"+number_to_words(c))\n \n else:\n return(a[n]) # for numbers like 20,30,40,50...\n \nnumber_to_words(149)\n\n\nZ = list(range(1,1001))\nX=[]\nfor i in Z:\n X.append(number_to_words(i))\n \n\n \n", "Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.", "Z = list(range(1,1001))\nX=[]\nfor i in Z:\n X.append(number_to_words(i)) # LIST OF ALL THE WORDS!\n \nassert number_to_words(10)=='ten'\nassert number_to_words(55)=='fifty-five'\nassert number_to_words(99)=='ninety-nine'\nassert number_to_words(155)=='one hundred and fifty-five'\nassert number_to_words(777)=='seven hundred and seventy-seven'\n \n\n\nassert True # use this for grading the number_to_words tests.", "Now define a count_letters(n) that returns the number of letters used to write out the words for all of the the numbers 1 to n inclusive.", "def count_letters(n):\n \"\"\"Count the number of letters used to write out the words for 1-n inclusive.\"\"\"\n number_of_characters= ' '.join(number_to_words(x) for x in range(1,n+1))\n count=0\n for i in number_of_characters:\n if i !='-'and i !=' ':\n count+=1\n return count\n \n\n \n ", "Now write a set of 
assert tests for your count_letters function that verifies that it is working as expected.", "assert count_letters(1)==3\nassert count_letters(5)==19\n\nassert True # use this for grading the count_letters tests.", "Finally, use your count_letters function to solve the original question.", "print(count_letters(1000))\n\nassert True # use this for grading the answer to the original question." ]
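A quick check of the helpers against the two worked examples in the problem statement (342 should use 23 letters, 115 should use 20), reusing number_to_words and count_letters from the cells above:

```python
for n, expected in [(342, 23), (115, 20)]:
    word = number_to_words(n)
    n_letters = sum(1 for ch in word if ch not in ' -')
    assert n_letters == expected, (n, word, n_letters)

print(count_letters(1000))   # the well-known answer to Project Euler 17 is 21124
```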
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
msampathkumar/kaggle-quora-tensorflow
references/sentiment-rnn/Sentiment RNN.ipynb
apache-2.0
[ "Sentiment Analysis with an RNN\nIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.\nThe architecture for this network is shown below.\n<img src=\"assets/network_diagram.png\" width=400px>\nHere, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.\nFrom the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.\nWe don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.", "import numpy as np\nimport tensorflow as tf\n\nwith open('../sentiment_network/reviews.txt', 'r') as f:\n reviews = f.read()\nwith open('../sentiment_network/labels.txt', 'r') as f:\n labels = f.read()\n\nreviews[:2000]", "Data preprocessing\nThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.\nYou can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \\n. To deal with those, I'm going to split the text into each review using \\n as the delimiter. Then I can combined all the reviews back together into one big string.\nFirst, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.", "from string import punctuation\nall_text = ''.join([c for c in reviews if c not in punctuation])\nreviews = all_text.split('\\n')\n\nall_text = ' '.join(reviews)\nwords = all_text.split()\n\nall_text[:2000]\n\nwords[:100]", "Encoding the words\nThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.\n\nExercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. 
Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.\nAlso, convert the reviews to integers and store the reviews in a new list called reviews_ints.", "reviews[:2]\n\nfrom collections import Counter\nwords_dummy = ['qwe','ert','yui', 'fgh', 'dfg', 'kjg','fgh', 'dfg', 'kjg']\ncounts_dummy = Counter(words_dummy)\nprint(counts_dummy)\nv = enumerate(counts_dummy,1)\nprint(list(v))\nprint(counts_dummy.get('qwe'))\n\nvocab_dummy = sorted(counts_dummy, key=counts_dummy.get, reverse=True)\nvocab_to_int_dummy = {word: ii for ii, word in enumerate(vocab_dummy, 1)}\nprint(vocab_dummy)\nprint(vocab_to_int_dummy)\n\n\ncounts = Counter(words)\nvocab = sorted(counts, key=counts.get, reverse=True)\nvocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}\n\nreviews_ints = []\nfor each in reviews:\n reviews_ints.append([vocab_to_int[word] for word in each.split()])\n\nlabels = labels.split('\\n')\nlabels = np.array([1 if each == 'positive' else 0 for each in labels])", "Encoding the labels\nOur labels are \"positive\" or \"negative\". To use these labels in our network, we need to convert them to 0 and 1.\n\nExercise: Convert labels from positive and negative to 1 and 0, respectively.", "review_lens = Counter([len(x) for x in reviews_ints])\nprint(\"Zero-length reviews: {}\".format(review_lens[0]))\nprint(\"Maximum review length: {}\".format(max(review_lens)))\n\nprint(max(review_lens))\nprint(min(review_lens))\n# max?", "Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.\n\nExercise: First, remove the review with zero length from the reviews_ints list.", "# Filter out that review with 0 length\nreviews_ints = [each for each in reviews_ints if len(each) > 0]", "Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector.\n\nThis isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.", "seq_len = 200\nfeatures = np.zeros((len(reviews), seq_len), dtype=int)\nfor i, row in enumerate(reviews_ints):\n features[i, -len(row):] = np.array(row)[:seq_len]\n\nfeatures[:10,:100]", "Training, Validation, Test\nWith our data in nice shape, we'll split it into training, validation, and test sets.\n\nExercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. 
The rest of the data will be split in half to create the validation and testing data.", "split_frac = 0.8\nsplit_idx = int(len(features)*0.8)\ntrain_x, val_x = features[:split_idx], features[split_idx:]\ntrain_y, val_y = labels[:split_idx], labels[split_idx:]\n\ntest_idx = int(len(val_x)*0.5)\nval_x, test_x = val_x[:test_idx], val_x[test_idx:]\nval_y, test_y = val_y[:test_idx], val_y[test_idx:]\n\nprint(\"\\t\\t\\tFeature Shapes:\")\nprint(\"Train set: \\t\\t{}\".format(train_x.shape), \n \"\\nValidation set: \\t{}\".format(val_x.shape),\n \"\\nTest set: \\t\\t{}\".format(test_x.shape))", "With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like:\nFeature Shapes:\nTrain set: (20000, 200) \nValidation set: (2500, 200) \nTest set: (2501, 200)\nBuild the graph\nHere, we'll build the graph. First up, defining the hyperparameters.\n\nlstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.\nlstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.\nbatch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.\nlearning_rate: Learning rate", "lstm_size = 256\nlstm_layers = 1\nbatch_size = 250\nlearning_rate = 0.001", "For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.\n\nExercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.", "n_words = len(vocab)\n\n# Create the graph object\ngraph = tf.Graph()\n# Add nodes to the graph\nwith graph.as_default():\n inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')\n labels_ = tf.placeholder(tf.int32, [None, None], name='labels')\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')", "Embedding\nNow we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.\n\nExercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer as 200 units, the function will return a tensor with size [batch_size, 200].", "# Size of the embedding vectors (number of units in the embedding layer)\nembed_size = 300 \n\nwith graph.as_default():\n embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, inputs_)", "LSTM cell\n<img src=\"assets/network_diagram.png\" width=400px>\nNext, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). 
Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.\nTo create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:\ntf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=&lt;function tanh at 0x109f1ef28&gt;)\nyou can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like \nlstm = tf.contrib.rnn.BasicLSTMCell(num_units)\nto create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like\ndrop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\nMost of the time, you're network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:\ncell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\nHere, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.\nSo the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell.\n\nExercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.\n\nHere is a tutorial on building RNNs that will help you out.", "with graph.as_default():\n # Your basic LSTM cell\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n \n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n \n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\n \n # Getting an initial state of all zeros\n initial_state = cell.zero_state(batch_size, tf.float32)", "RNN forward pass\n<img src=\"assets/network_diagram.png\" width=400px>\nNow we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.\noutputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)\nAbove I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.\n\nExercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. 
Remember that we're actually passing in vectors from the embedding layer, embed.", "with graph.as_default():\n outputs, final_state = tf.nn.dynamic_rnn(cell, embed,\n initial_state=initial_state)", "Output\nWe only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], the calculate the cost from that and labels_.", "with graph.as_default():\n predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)\n cost = tf.losses.mean_squared_error(labels_, predictions)\n \n optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)", "Validation accuracy\nHere we can add a few nodes to calculate the accuracy which we'll use in the validation pass.", "with graph.as_default():\n correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)\n accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batching\nThis is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].", "def get_batches(x, y, batch_size=100):\n \n n_batches = len(x)//batch_size\n x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]\n for ii in range(0, len(x), batch_size):\n yield x[ii:ii+batch_size], y[ii:ii+batch_size]", "Training\nBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.", "epochs = 10\n\nwith graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=graph) as sess:\n sess.run(tf.global_variables_initializer())\n iteration = 1\n for e in range(epochs):\n state = sess.run(initial_state)\n \n for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 0.5,\n initial_state: state}\n loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)\n \n if iteration%5==0:\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Train loss: {:.3f}\".format(loss))\n\n if iteration%25==0:\n val_acc = []\n val_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for x, y in get_batches(val_x, val_y, batch_size):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: val_state}\n batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)\n val_acc.append(batch_acc)\n print(\"Val acc: {:.3f}\".format(np.mean(val_acc)))\n iteration +=1\n saver.save(sess, \"checkpoints/sentiment.ckpt\")", "Testing", "test_acc = []\nwith tf.Session(graph=graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n test_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: test_state}\n batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)\n test_acc.append(batch_acc)\n print(\"Test accuracy: {:.3f}\".format(np.mean(test_acc)))" ]
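The notebook above targets the TensorFlow 1.x contrib API. Purely as a minimal sketch (assuming TensorFlow 2.x, where tf.keras replaces the contrib RNN cells), the same embedding → LSTM → sigmoid architecture with the notebook's sizes could be written as:

```python
import tensorflow as tf

def build_sentiment_model(n_words, embed_size=300, lstm_size=256):
    # Sketch under the TF 2.x assumption above; the dropout placement differs
    # slightly from the notebook's DropoutWrapper on the cell outputs.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(n_words + 1, embed_size),  # +1: word ids start at 1
        tf.keras.layers.LSTM(lstm_size, dropout=0.5),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
```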
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cdawei/flickr-photo
src/trajectory_construction.ipynb
gpl-2.0
[ "Trajectory Construction\nOur goal is to reconstruct users' travel trajectories from photos with geo-tag and timestamp around Melbourne area. To do this, we use the YFCC100M dataset dataset which is a collection of photos/videos uploaded to Flickr between 2004 and 2014. \nThe original YFCC100M dataset contains 100 million photos/videos and does not provide any trajectories about users. In this notebook, we will describe how we reconstruct user trajectories from the original dataset.\n(*We will not distinguish a photo and video in the rest of this notebook)\nThree major steps for constructing trajectories:\n\n\nStep 1. Extract photos taken near Melbourne area from original YFCC100M dataset (filtering_bigbox.py)\n\nThe YFCC100M dataset contains 100 million photos which requires huge computational cost to handle.\nSince we are only interested in trajectories near Melbourne, we will extract photos near Melbourne to reduce further computational cost from next steps.\n\n\n\nStep 2. Extract candidate trajectories based on the extracted photos (generate_tables.py)\n\nFrom the extracted photos in Step 1, we will reconstruct user trajectories by using geo-tag and timestamp of photos.\nBasically, we group photos by user, sort the grouped photos by timestamp, and then link sequentially consecutive photos to construct trajectory\n\n\n\nStep 3. Filter out some abnormal trajectories using various criteria (this notebook)\n\nThe candidate trajectories may contain abnormal, meaningless, improbable trajectories.\nDue to serveral reasons such as GPS errors, invalid time-stamp, \nIn this step, we will remove those abnormal trajectories by using various criteria\nData cleaning/noise filtering\n\n\n\nBelow we will describe the details of each step with source codes.\nTable of Contents\n\nStep 1. Extract relevant photos from YFCC100M dataset\n1.1. Basic stats of initial dataset\n1.2. Scatter plot of extracted points\n\n\nStep 2. Extract candidate trajectories from extracted points\n2.1. Basic statistics about the candidate trajectories\n2.2. Scatter plot of points in all candidate trajectories\n\n\nStep 3. Filter Abnormal Trajectory\n3.1. Filter by Travel time\n3.2. Filter by Travel distance\n3.3. Filter by Travel Speed\n3.3.1. Drop trajectory by average speed\n3.3.2. Drop trajectory by point-to-point speed\n\n\n\n\n4. Final Trajectory\n4.1. Basic Stats\n\n\n\n We place all parameters used to generate trajectories next. 
Modifing some parameter and rerun this notebook will generate different trajectory files\n<a id='argument_list'></a>", "%matplotlib inline\n\nimport os\nimport matplotlib\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom datetime import datetime\nimport generate_tables\n\n# data files\ndata_dir = '../data/'\ntable0 = 'Melb_photos_bigbox.csv'\ntable1 = 'trajectory_photos.csv'\ntable2 = 'trajectory_stats.csv'\nraw_table = os.path.join(data_dir, table0)\nphoto_table = os.path.join(data_dir, table1)\ntraj_table = os.path.join(data_dir, table2)\n\n# parameters for generating trajectories from extracted photos\ntime_gap = 8 # hour\nminimum_photo = 1 # minimum number of photos for each trajectory\nlng_min = 144.597363 # small bounding box\nlat_min = -38.072257\nlng_max = 145.360413\nlat_max = -37.591764\n\n# parameters for filtering trajectories\nminimum_distance = 0.5 # km\nspeed_filter = 1 #(0: filter by average speed, 1: filter by point-to-point speed)\nmaximum_speed = 100 # km/h\nmaximum_duration = 1440 # minutes\nminimum_duration = 30 #minute", "<a id='first_step'></a>\nStep 1. Extract relevant photos from YFCC100M dataset\nThe original YFCC100M dataset contains 100million photos. We are only interested in user behavior near Melbourne area. Therefore, we first extract the photos belongs to the below region to reduce the further computational cost.\nThe latitudes and longitudes of this region is described in data/Melbourne-bbox.kml file. \n\nThis process can be done by src/filtering_bigbox.py.\nfiltering_bigbox.py file takes the original YFCC100M file to extract photos and videos from above region as well as a time window [2000-01-01 00:00:00, 2015-03-05 23:59:59], then generates a cvs file containing:\n\nPhoto/video ID\nNSID (user ID)\nDate\nLongitude\nLatitude\nAccuracy (GPS accuracy)\nPhoto/video URL\nPhoto/video identifier (0 = photo, 1 = video)\n\nThe usage of this file is :\n\npython filtering_bigbox.py YFCC100M_DATA_FILE\n\nwhich will generate filtered output out.YFCC100M_DATA_FILE file.\nThe original YFCC100M data files are not incorporated in this repository. But we incorporate the filtered output in data/Melb_photos_bigbox.csv\n1.1. Basic stats of initial dataset\nHere are some basic statistics after extracting relevant photos from the YFCC100M.", "raw = pd.read_csv(raw_table, parse_dates=[2], skipinitialspace=True)\nprint('Number of photos:', raw['Photo_ID'].shape[0])\nprint('Number of users: ', raw['User_ID'].unique().shape[0])\nraw[['Longitude', 'Latitude', 'Accuracy']].describe() ", "1.2. Scatter plot of extracted points\nWe also plot the location of extracted photos. The high density area represents the areas where a lot of photos has been taken.", "plt.figure(figsize=[8, 8])\nplt.xlabel('Longitude')\nplt.ylabel('Latitude')\nplt.scatter(raw['Longitude'], raw['Latitude'])", "Step 2. Extract candidate trajectories from extracted points\nFrom the extracted photos, we reconstruct user trajectories using geo-tag and timestamp of photos as follows:\n\nStep 2.1: Group the extracted photos by user\nStep 2.2: Sort the grouped photos by timestamp\nStep 2.3: Split the sorted photos into trajectories if the time gap between two consecutive photos is greater than 8 (\\$time_gap) hours\nStep 2.4: We plot the trajectories on map. Keep trajectories at least one (\\$minimum_photo) photo is taken from the central Melbourne area below. 
To make sure that the travel is not far from Melbourne\n\n\nsrc/generate_tables.py will generate the inital trajectories using arguments.\nThe usage of this file is :\n\n`python generate_tables.py extracted_points_file lng_min lat_min lng_max lat_max minimum_photo time_gap\n\nwith arguments:\n\nextracted_points_file = the output of src/filtering_bigbox.py\nlng_min = min longitude of target region\nlat_min = min latitude of target region\nlng_max = max longtidue of target region\nlat_max = max latitude of targer region\nminimum_photo = minimum number of photos for each trajectory\ntime_gap = Split the sorted photos into trajectories if the time gap between two consecutive photos is greater than this", "extracted_points_file = raw_table # outputfile path of extracted points\n%run generate_tables $extracted_points_file $lng_min $lat_min $lng_max $lat_max $minimum_photo $time_gap ", "This will result two data files: 1 trajectory data file (\\$photo table), and 2 trajectory statistic file (\\$traj_table). \n\ntrajectory data file: each entry(line) of this file reprsents single photo with additional information about the photo:\n * Trajectory_ID: trajectory ID of entry (multiple entries belong to the same trajectory will have the same trajectory ID)\n * Photo_ID: Unique Photo ID of entry\n * User_ID: User ID\n * Timestamp: Timestamp of when the photo was taken\n * Longitude: Longitude of entry \n * Latitude: Latitude of entry\n * Accuracy: GPS Accuracy level (16 - the most accurate, 1 - the least accurate)\n * Marker: 0 if the entry is photo, 1 if the entry is video\n * URL: flickr URL to the entry\ntrajectory statistic file: each entry(line) of this file represents single trajectory with addtional information about the trajectory:\n * Trajectory_ID: Unique trajectory ID\n * User_ID: User ID\n * #Photo: Number of photos in the trajectory\n * Start_Time: When the first photo was taken\n * Travel_Distance(km): Sum of the distances between sequantially consecutive photos (Euclidean Distance)\n * Total_Time(min): The time gap between the first photo and the last photo\n * Average_Speed(km/h): Travel_Distances(km)/Total_Time(h)\n\nWe read these files by using pandas library for further processing:", "traj = pd.read_csv(photo_table, parse_dates=[3], skipinitialspace=True)\ntraj_stats = pd.read_csv(traj_table, parse_dates=[3], skipinitialspace=True)", "2.1. Basic statistics about the candidate trajectories\nHere are the basic statistics about candidate trajectories from src/generate tables.py:", "num_photo = traj['Photo_ID'].shape[0]\nnum_user = traj_stats['User_ID'].unique().shape[0]\nnum_traj = traj_stats['Trajectory_ID'].shape[0]\nprint('Number of photos:', num_photo)\nprint('Number of users: ', num_user)\nprint('Number of trajectories:', num_traj)\nprint('Average number of photos per user:', num_photo / num_user)\nprint('Average number of trajectories per user:', num_traj / num_user)\n\ntraj_stats[['#Photo', 'Travel_Distance(km)', 'Total_Time(min)', 'Average_Speed(km/h)']].describe()", "2.2. Scatter plot of points in all candidate trajectories\nWe plot the location of extracted photos in the cadidate trajectories. The high density area represents the areas where a lot of photos has been taken.", "plt.figure(figsize=[8, 8])\nplt.xlabel('Longitude')\nplt.ylabel('Latitude')\nplt.scatter(traj['Longitude'], traj['Latitude'])", "2.3. 
Histograms of number of photos in trajectories, total time/distance and average speed of trajectories", "plt.figure(figsize=[18, 10])\nplt.subplot(2,2,1)\nplt.xlabel('#Photo')\nplt.ylabel('#Trajectory')\nplt.title('Histogram of #Photo in trajectories')\nax0 = traj_stats['#Photo'].hist(bins=50)\nax0.set_yscale('log')\n\nplt.subplot(2,2,2)\nplt.xlabel('Travel Distance (km)')\nplt.ylabel('#Trajectory')\nplt.title('Histogram of Travel Distance of Trajectories')\nax1 = traj_stats['Travel_Distance(km)'].hist(bins=50)\nax1.set_yscale('log')\n\nplt.subplot(2,2,3)\nplt.xlabel('Total Time (minute)')\nplt.ylabel('#Trajectory')\nplt.title('Histogram of Total Time of Trajectories')\nax2 = traj_stats['Total_Time(min)'].hist(bins=50)\nax2.set_yscale('log')\n\nplt.subplot(2,2,4)\nplt.xlabel('Average Speed (km/h)')\nplt.ylabel('#Trajectory')\nplt.title('Histogram of Average Speed of Trajectories')\nax3 = traj_stats['Average_Speed(km/h)'].hist(bins=50)\nax3.set_yscale('log')", "As these histogram indicates, there are several abnormal trajectories in this dataset. For example, some trajectories span several days, some trajectory shows improbably high speed (3500000 km/h), and travel distance of some trajectories is almost zero which might not be interesting as a travel trajectory. \nIn the following section, we will provide guidelines to filter out these abnormal trajectories.\n3. Filter Trajectory\nAfter getting an initial list of trajectories, we further filter out improbable trajectories with various criteria.\nWe use four different criteria as follows:\n\nTravel time: Some suspicious trajectory span over more than several days. We remove trajectories spanning more than several days or only few minutes. (maximum_duration, minimum_duration).\nTravel distance: Trajectories consist of photos taken from single location is not meaningful as a trajectory. We remove these trajectories (minimum_distance)\nTravel speed: Due to the GPS error, there are some trajectories in which a user moves unbelievably fast speed. We remove these trajectories, but try to recover as much information as possible from some trajectories.\n\nThe list of arguments we used to generate final trajectories are available at the top of the notebook.\n3.1. Filter by travel time\nFirst, we filter out trajectories which have suspiciously long or short travel times. We want to see the one-day long travel trajectories of users, and also want to avoid the trajectory that are captured in very short time.\nIn this step, we filtered out the trajectories of which travel time is greather than maximum_duration or less than minimum_duration.", "traj_stats1 = traj_stats[traj_stats['Total_Time(min)'] < maximum_duration]\ntraj_stats1 = traj_stats1[traj_stats1['Total_Time(min)'] > minimum_duration]\ntraj1 = traj[traj['Trajectory_ID'].isin(traj_stats1['Trajectory_ID'])]", "3.1.1. Histogram of travel time\nHere's the histogram of travel time before and after filtering. We removed several trajectories of which travel time is less than 30 min and greater than 24 hours.", "plt.figure(figsize=[18, 5])\nplt.subplot(1,2,1)\nplt.xlabel('Travel_Time(min)')\nplt.ylabel('#traj')\nplt.title('Before Filtering')\nax0 = traj_stats['Total_Time(min)'].hist(bins=50)\nax0.set_yscale('log')\n\nplt.subplot(1,2,2)\nplt.xlabel('Travel_Time(min)')\nplt.ylabel('#traj')\nplt.title('After filtering')\nax1 = traj_stats1['Total_Time(min)'].hist(bins=50)\nax1.set_yscale('log')", "3.2. 
Filter by travel distance\nTo be a meaningful trajectory, the travel distance of trajactory spans at least several hundred meters. Extremely short travel distance only shows the interesting area where the photo has been taken.\nTo get the trajectory, we filter out the trajectories of which travel distance is less than minimum_distance.", "traj_stats2 = traj_stats1[traj_stats1['Travel_Distance(km)'] > minimum_distance]\ntraj2 = traj[traj['Trajectory_ID'].isin(traj_stats2['Trajectory_ID'])]", "3.2.1. Histogram of trajectory length\nHere's the histogram of travel distances before and after filtering. Trajectories with very short travel distance has been removed from our dataset.", "plt.figure(figsize=[18, 5])\nplt.subplot(1,2,1)\nplt.xlabel('Travel_Distance(km)')\nplt.ylabel('#traj')\nplt.title('Before Filtering')\nax1 = traj_stats1['Travel_Distance(km)'].hist(bins=50)\nax1.set_yscale('log')\n\nplt.subplot(1,2,2)\nplt.xlabel('Travel_Distance(km)')\nplt.ylabel('#traj')\nplt.title('After filtering')\nax2 = traj_stats2['Travel_Distance(km)'].hist(bins=50)\nax2.set_yscale('log')\n\ntraj_stats_new = traj_stats2\ntraj_new = traj2", "3.3. Filter by travel speed\nSome trajectories have suspiciously high speed. It may caused by various reasons. For example, errors in GPS system or errors in time stampmight yeild super sonic users. \nThere are two (or more) alternative ways to filter out trajectory which has suspiciously high speed.\nHere, we provide two filtering method: (the switch to use one of the methods can be set at the top of the notebook)\n\nFiltered by average speed\nFiltered by speed of adjacency points\n\n3.3.1. Drop trajectory by average speed\nWe check average speed of every trajectory, and then throw out all trajectories of which average speed is less than predefined maximum_speed", "if speed_filter == 0:\n traj_stats_new = traj_stats_new[traj_stats_new['Average_Speed(km/h)'] < maximum_speed]\n traj_new = traj_new[traj_new['Trajectory_ID'].isin(traj_stats_new['Trajectory_ID'])]", "3.4.1.1. Histogram of trajectory speed", "if speed_filter == 0:\n plt.figure(figsize=[18, 5])\n plt.subplot(1,2,1)\n plt.xlabel('Average_Speed(km/h)')\n plt.ylabel('#traj')\n ax = traj_stats_new['Average_Speed(km/h)'].hist(bins=50)\n ax.set_yscale('log')", "3.3.2. Drop trajectory by point-to-point speed\nThe first approach might be inefficient when the improbable speed occurs by GPS calibration error. To keep as much information as possible, we propose more sophisticated method to recover information from abnormal trajectories.\nThere are four cases of improbably fast trajectory might be happened\n\nThe first point of trajectory is far away from the rest of the trajectory (GPS calibrating/entering building etc..)\nThe last point of trajectory is far away from the rest of the trajectory\nOne or more middle points of trajectory are far way from the rest of the trajectory (GPS error)\nMixture of previous three cases\n\nThe first and second cases are easy to recover by cutting the corresponding point. But it seems we could not easily decide which point(s) should be cut for third and fourth cases. 
We've decided to remove trajectories in case 3 and 4.\nCompute point-to-point speed before filtering", "speeds = []\nif speed_filter == 1: \n for tid in traj_stats_new['Trajectory_ID']:\n photos = traj_new[traj_new['Trajectory_ID'] == tid]\n if photos.shape[0] < 2: continue\n for i in range(len(photos.index)-1):\n idx1 = photos.index[i]\n idx2 = photos.index[i+1]\n dist = generate_tables.calc_dist(photos.loc[idx1, 'Longitude'], photos.loc[idx1, 'Latitude'], \\\n photos.loc[idx2, 'Longitude'], photos.loc[idx2, 'Latitude'])\n seconds = (photos.loc[idx1, 'Timestamp'] - photos.loc[idx2, 'Timestamp']).total_seconds()\n if seconds == 0: continue\n speed = dist * 60. * 60. / abs(seconds)\n speeds.append(speed)", "Histogram of point-to-point speed before filtering", "#S = [100, 150, 200, 250, 500, 1000, 1236, 100000]\nS = [100, 150, 200, 250] \nif speed_filter == 1:\n p2pspeeds = pd.Series(speeds)\n plt.figure(figsize=[18,20])\n for it in range(len(S)):\n plt.subplot(4,2,it+1)\n plt.xlabel('Point-to-Point Speed (km/h)')\n plt.ylabel('#Point-pair')\n plt.title('Speed < ' + str(S[it]) + ' km/h')\n ax = p2pspeeds[p2pspeeds < S[it]].hist(bins=50)\n ax.set_yscale('log')", "Drop the first/last point in a trajectories for case1/case2, drop enter trajectories for case3 and case4", "if speed_filter == 1:\n # raise an exception if assigning value to a copy (instead of the original data) of DataFrame\n pd.set_option('mode.chained_assignment','raise') \n traj_stats_new = traj_stats_new.copy()\n traj_new = traj_new.copy()\n\n indicator_traj = pd.Series(data=np.ones(traj_stats_new.shape[0], dtype=np.bool), index=traj_stats_new.index)\n indicator_photo = pd.Series(data=np.ones(traj_new.shape[0], dtype=np.bool), index=traj_new.index)\n cnt1 = 0\n cnt2 = 0\n cnt34 = 0\n for i in traj_stats_new['Trajectory_ID'].index:\n tid = traj_stats_new.loc[i, 'Trajectory_ID']\n photos = traj_new[traj_new['Trajectory_ID'] == tid]\n if photos.shape[0] <= 2:\n if traj_stats_new.loc[i, 'Average_Speed(km/h)'] > maximum_speed: # drop the trajectory\n indicator_traj.loc[i] = False\n indicator_photo.loc[photos.index] = False\n continue\n # trajectory: 1-->2-->...-->3-->4, 2 and 3 could be the same\n idx1 = photos.index[0]\n idx2 = photos.index[1]\n idx3 = photos.index[-2]\n idx4 = photos.index[-1]\n d12 = generate_tables.calc_dist(photos.loc[idx1, 'Longitude'], photos.loc[idx1, 'Latitude'], \\\n photos.loc[idx2, 'Longitude'], photos.loc[idx2, 'Latitude'])\n d24 = traj_stats_new.loc[i, 'Travel_Distance(km)'] - d12\n t12 = abs((photos.loc[idx1, 'Timestamp'] - photos.loc[idx2, 'Timestamp']).total_seconds())\n t24 = abs((photos.loc[idx2, 'Timestamp'] - photos.loc[idx4, 'Timestamp']).total_seconds())\n # check case 1\n if t12 == 0 or (d12 * 60. * 60. / t12) > maximum_speed: #photo1-->photo2, inf speed or large speed\n if t24 == 0 or abs(d24) < 1e-3 or (d24 * 60. * 60. / t24) > maximum_speed: # drop the trajectory\n indicator_traj.loc[i] = False\n indicator_photo.loc[photos.index] = False\n continue\n else: # case 1, drop the first photo, update trajectory statistics\n assert(d24 > 0.)\n #traj_stats.ix[i]['Start_Time'] = photos.ix[idx2]['Timestamp'] # SettingWithCopyWarning\n indicator_photo.loc[idx1] = False\n traj_stats_new.loc[i, 'Start_Time'] = photos.loc[idx2, 'Timestamp']\n traj_stats_new.loc[i, 'Travel_Distance(km)'] = d24\n traj_stats_new.loc[i, 'Total_Time(min)'] = t24 / 60.\n traj_stats_new.loc[i, 'Average_Speed(km/h)'] = d24 * 60. * 60. 
/ t24\n cnt1 += 1\n continue\n # check case 2\n d34 = generate_tables.calc_dist(photos.loc[idx3, 'Longitude'], photos.loc[idx3, 'Latitude'], \\\n photos.loc[idx4, 'Longitude'], photos.loc[idx4, 'Latitude'])\n d13 = traj_stats_new.loc[i, 'Travel_Distance(km)'] - d34\n t34 = abs((photos.loc[idx3, 'Timestamp'] - photos.loc[idx4, 'Timestamp']).total_seconds())\n t13 = abs((photos.loc[idx1, 'Timestamp'] - photos.loc[idx3, 'Timestamp']).total_seconds())\n if t34 == 0 or (d34 * 60. * 60. / t34) > maximum_speed: #photo3-->photo4, inf speed or large speed\n if t13 == 0 or abs(d13) < 1e-3 or (d13 * 60. * 60. / t13) > maximum_speed: # drop the trajectory\n indicator_traj.loc[i] = False\n indicator_photo.loc[photos.index] = False\n continue\n else: # case 2, drop the last photo, update trajectory statistics\n assert(d13 > 0.)\n #traj_stats.ix[i]['Travel_Distance(km)'] = d13 # SettingWithCopyWarning\n indicator_photo.loc[idx4] = False\n traj_stats_new.loc[i, 'Travel_Distance(km)'] = d13\n traj_stats_new.loc[i, 'Total_Time(min)'] = d13 / 60.\n traj_stats_new.loc[i, 'Average_Speed(km/h)'] = d13 * 60. * 60. / t13\n cnt2 += 1\n continue\n \n # case 3 or 4, drop trajectory\n if traj_stats_new.loc[i, 'Average_Speed(km/h)'] > maximum_speed:\n indicator_traj.loc[i] = False\n indicator_photo.loc[photos.index] = False\n cnt34 += 1\n \n print('Number of trajectories in case 1:', cnt1)\n print('Number of trajectories in case 2:', cnt2)\n print('Number of trajectories in case 3 & 4:', cnt34)\n\n traj_new = traj_new[indicator_photo]\n traj_stats_new = traj_stats_new[indicator_traj]", "Compute point-to-point speed after filtering", "speeds_new = []\nif speed_filter == 1:\n for tid in traj_stats_new['Trajectory_ID']:\n photos = traj_new[traj_new['Trajectory_ID'] == tid]\n if photos.shape[0] < 2: continue\n for i in range(len(photos.index)-1):\n idx1 = photos.index[i]\n idx2 = photos.index[i+1]\n dist = generate_tables.calc_dist(photos.loc[idx1, 'Longitude'], photos.loc[idx1, 'Latitude'], \\\n photos.loc[idx2, 'Longitude'], photos.loc[idx2, 'Latitude'])\n seconds = (photos.loc[idx1, 'Timestamp'] - photos.loc[idx2, 'Timestamp']).total_seconds()\n if seconds == 0: continue\n speed = dist * 60. * 60. / abs(seconds)\n speeds_new.append(speed)", "Histogram of point-to-point speed after filtering", "#S = [100, 150, 200, 250, 500, 1000, 1236, 100000]\nS = [100, 150, 200, 250]\nif speed_filter == 1:\n p2pspeeds_new = pd.Series(speeds_new)\n plt.figure(figsize=[18,20])\n for it in range(len(S)):\n plt.subplot(4,2,it+1)\n plt.xlabel('Point-to-Point Speed(km/h)')\n plt.ylabel('#Point-pair')\n plt.title('Speed < ' + str(S[it]) + ' km/h')\n ax = p2pspeeds_new[p2pspeeds_new < S[it]].hist(bins=50)\n ax.set_yscale('log')", "4. Final Trajectory\nIn this section, we will show some basic statistics about our final trajectory data.\n4.1. Basic Stats\nMore detail analysis will be included in filckr_analysis.ipynb and slides. 
Here we show simple stats from the final result.", "num_photo = traj_new['Photo_ID'].shape[0]\nnum_user = traj_stats_new['User_ID'].unique().shape[0]\nnum_traj = traj_stats_new['Trajectory_ID'].shape[0]\nprint('Number of photos:', num_photo)\nprint('Number of users: ', num_user)\nprint('Number of trajectories:', num_traj)\nprint('Average number of photos per user:', num_photo / num_user)\nprint('Average number of trajectories per user:', num_traj / num_user)\n\ntraj_stats_new[['#Photo', 'Travel_Distance(km)', 'Total_Time(min)', 'Average_Speed(km/h)']].describe()", "Histograms of number of photos in trajectories, total time/distance and average speed of trajectories", "plt.figure(figsize=[18, 10])\nplt.subplot(2,2,1)\nplt.xlabel('#Photo')\nplt.ylabel('#Trajectory')\nplt.title('Histogram of #Photo in trajectories after Filtering')\nax0 = traj_stats_new['#Photo'].hist(bins=50)\nax0.set_yscale('log')\n\nplt.subplot(2,2,2)\nplt.xlabel('Travel Distance (km)')\nplt.ylabel('#Trajectory')\nplt.title('Histogram of Travel Distance of Trajectories after Filtering')\nax1 = traj_stats_new['Travel_Distance(km)'].hist(bins=50)\nax1.set_yscale('log')\n\nplt.subplot(2,2,3)\nplt.xlabel('Total Time (minute)')\nplt.ylabel('#Trajectory')\nplt.title('Histogram of Total Time of Trajectories after Filtering')\nax2 = traj_stats_new['Total_Time(min)'].hist(bins=50)\nax2.set_yscale('log')\n\nplt.subplot(2,2,4)\nplt.xlabel('Average Speed (km/h)')\nplt.ylabel('#Trajectory')\nplt.title('Histogram of Average Speed of Trajectories after Filtering')\nax3 = traj_stats_new['Average_Speed(km/h)'].hist(bins=50)\nax3.set_yscale('log')", "Save final trajectories to the data folder", "file1 = os.path.join(data_dir + table1)\nfile2 = os.path.join(data_dir + table2)\ntraj_new.to_csv(file1, index=False)\ntraj_stats_new.to_csv(file2, index=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
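The point-to-point speed checks in this notebook rely on the project's generate_tables.calc_dist helper and nested Python loops. The sketch below is a rough, self-contained version of the same computation: a plain haversine function stands in for calc_dist, the column names (Longitude, Latitude, Timestamp) are assumed to match the per-trajectory frames used above, and the demo data is invented.

```python
# Sketch only: `haversine_km` stands in for the project's generate_tables.calc_dist,
# and the demo DataFrame is made up for illustration.
import numpy as np
import pandas as pd

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in km between (lon, lat) pairs, element-wise."""
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def point_to_point_speeds(photos):
    """Speeds in km/h between consecutive photos of one trajectory (sorted by time)."""
    lon = photos['Longitude'].to_numpy()
    lat = photos['Latitude'].to_numpy()
    t = pd.to_datetime(photos['Timestamp']).to_numpy().astype('datetime64[s]').astype(float)
    dist_km = haversine_km(lon[:-1], lat[:-1], lon[1:], lat[1:])
    dt_h = np.abs(np.diff(t)) / 3600.0
    with np.errstate(divide='ignore', invalid='ignore'):
        speeds = dist_km / dt_h
    return np.where(dt_h > 0, speeds, np.inf)  # zero time gap counts as an infinite speed

# A 3-photo trajectory with one implausible jump in the last hop
demo = pd.DataFrame({'Longitude': [151.21, 151.22, 152.50],
                     'Latitude': [-33.86, -33.86, -33.80],
                     'Timestamp': ['2014-05-01 10:00', '2014-05-01 10:10', '2014-05-01 10:12']})
print(point_to_point_speeds(demo))  # the second hop would fail a 200 km/h cutoff
```

Comparing each hop against maximum_speed reproduces the per-pair test in the filtering loop above; only the distance helper and the demo data here are invented.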
mne-tools/mne-tools.github.io
stable/_downloads/8ea2bfc401dbdff70c284d271d62fa8c/label_from_stc.ipynb
bsd-3-clause
[ "%matplotlib inline", "Generate a functional label from source estimates\nThreshold source estimates and produce a functional label. The label\nis typically the region of interest that contains high values.\nHere we compare the average time course in the anatomical label obtained\nby FreeSurfer segmentation and the average time course from the\nfunctional label. As expected the time course in the functional\nlabel yields higher values.", "# Author: Luke Bloy <luke.bloy@gmail.com>\n# Alex Gramfort <alexandre.gramfort@inria.fr>\n# License: BSD-3-Clause\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.minimum_norm import read_inverse_operator, apply_inverse\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nfname_inv = (\n data_path / 'MEG' / 'sample' / 'sample_audvis-meg-oct-6-meg-inv.fif')\nfname_evoked = data_path / 'MEG' / 'sample' / 'sample_audvis-ave.fif'\nsubjects_dir = data_path / 'subjects'\nsubject = 'sample'\n\nsnr = 3.0\nlambda2 = 1.0 / snr ** 2\nmethod = \"dSPM\" # use dSPM method (could also be MNE or sLORETA)\n\n# Compute a label/ROI based on the peak power between 80 and 120 ms.\n# The label bankssts-lh is used for the comparison.\naparc_label_name = 'bankssts-lh'\ntmin, tmax = 0.080, 0.120\n\n# Load data\nevoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))\ninverse_operator = read_inverse_operator(fname_inv)\nsrc = inverse_operator['src'] # get the source space\n\n# Compute inverse solution\nstc = apply_inverse(evoked, inverse_operator, lambda2, method,\n pick_ori='normal')\n\n# Make an STC in the time interval of interest and take the mean\nstc_mean = stc.copy().crop(tmin, tmax).mean()\n\n# use the stc_mean to generate a functional label\n# region growing is halted at 60% of the peak value within the\n# anatomical label / ROI specified by aparc_label_name\nlabel = mne.read_labels_from_annot(subject, parc='aparc',\n subjects_dir=subjects_dir,\n regexp=aparc_label_name)[0]\nstc_mean_label = stc_mean.in_label(label)\ndata = np.abs(stc_mean_label.data)\nstc_mean_label.data[data < 0.6 * np.max(data)] = 0.\n\n# 8.5% of original source space vertices were omitted during forward\n# calculation, suppress the warning here with verbose='error'\nfunc_labels, _ = mne.stc_to_label(stc_mean_label, src=src, smooth=True,\n subjects_dir=subjects_dir, connected=True,\n verbose='error')\n\n# take first as func_labels are ordered based on maximum values in stc\nfunc_label = func_labels[0]\n\n# load the anatomical ROI for comparison\nanat_label = mne.read_labels_from_annot(subject, parc='aparc',\n subjects_dir=subjects_dir,\n regexp=aparc_label_name)[0]\n\n# extract the anatomical time course for each label\nstc_anat_label = stc.in_label(anat_label)\npca_anat = stc.extract_label_time_course(anat_label, src, mode='pca_flip')[0]\n\nstc_func_label = stc.in_label(func_label)\npca_func = stc.extract_label_time_course(func_label, src, mode='pca_flip')[0]\n\n# flip the pca so that the max power between tmin and tmax is positive\npca_anat *= np.sign(pca_anat[np.argmax(np.abs(pca_anat))])\npca_func *= np.sign(pca_func[np.argmax(np.abs(pca_anat))])", "plot the time courses....", "plt.figure()\nplt.plot(1e3 * stc_anat_label.times, pca_anat, 'k',\n label='Anatomical %s' % aparc_label_name)\nplt.plot(1e3 * stc_func_label.times, pca_func, 'b',\n label='Functional %s' % aparc_label_name)\nplt.legend()\nplt.show()", "plot brain in 3D with mne.viz.Brain if available", "brain = stc_mean.plot(hemi='lh', 
subjects_dir=subjects_dir)\nbrain.show_view('lateral')\n\n# show both labels\nbrain.add_label(anat_label, borders=True, color='k')\nbrain.add_label(func_label, borders=True, color='b')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
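Two small conventions in the notebook above are easy to miss: vertices below 60% of the peak absolute value are zeroed before stc_to_label, and each pca_flip time course is multiplied by the sign of its largest-magnitude sample so that the peak comes out positive. A minimal NumPy-only illustration of both conventions (synthetic arrays, no MNE objects involved) might look like this:

```python
# Synthetic stand-ins: `vertex_values` plays the role of |stc_mean_label.data|
# and `pca_tc` the role of a pca_flip label time course.
import numpy as np

rng = np.random.default_rng(0)

vertex_values = np.abs(rng.normal(size=20))
thresholded = vertex_values.copy()
thresholded[vertex_values < 0.6 * vertex_values.max()] = 0.0  # keep only the peak region
print('vertices kept:', np.count_nonzero(thresholded), 'of', vertex_values.size)

pca_tc = rng.normal(size=100)
pca_tc *= np.sign(pca_tc[np.argmax(np.abs(pca_tc))])  # flip so the extremum is positive
assert pca_tc[np.argmax(np.abs(pca_tc))] > 0
```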
letsgoexploring/teaching
winter2017/econ129/python/Econ129_Class_12_Complete.ipynb
mit
[ "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n%matplotlib inline", "Class 12: Stochastic Time Series Processes\nSimulating normal random variables with NumPy\nThe numpy.random module has a bunch of functions for generating random variables and evaluating probability and cumulative density functions for a wide variety of probability distributions. Learn more about the module here:\nhttps://docs.scipy.org/doc/numpy/reference/routines.random.html\nWe're going to make use of the numpy.random.normal() function to create arrays of random draws from the normal distribution. The function takes three arguments:\n* loc: the mean of the distribution (default=0)\n* scale: the standard deviation of the distribution (default=1)\n* size: how many numbers to draw (default = None)\nEvidently the default is to draw numbers from the standard normal distribution.", "# Create an array with 5 draws from the normal(0,1) distribution and print\nnp.random.normal(size=5)\n\n# Create an array with 5 draws from the normal(0,1) distribution and print\nnp.random.normal(size=5)", "Computers by definition cannot generate truly random numbers. The Mersenne Twister is a widely-used algorithm for generating pseudo random numbers from a deterministic process. That is, while the numbers generated from the algorithm are not random in the literal sense, they exhibit distributional qualities that make them indistinguishable from truly random numbers.\nA nice feature of pseudo random numbers is that they can be replicated by specifying the seed, or starting point, for the random number generating algorithm.", "# Set the seed for the random number generator\nnp.random.seed(129)\n\n# Create an array with 5 draws from the normal(0,1) distribution and print\nnp.random.normal(size=5)", "Example\nDraw 500 values each from the $\\mathcal{N}(0,1)$ and $\\mathcal{N}(0,2^2)$ distributions. Plot.", "# Set the seed for the random number generator\nnp.random.seed(129)\n\n# Create two arrays:\n# x: 500 draws from the normal(0,1) distribution\n# y: 500 draws from the normal(0,2) distribution\nx = np.random.normal(loc=0,scale=1,size=500)\ny = np.random.normal(loc=0,scale=2,size=500)\n\n# Plot\nplt.plot(x,lw=3,alpha = 0.6,label='$\\sigma=1$')\nplt.plot(y,lw=3,alpha = 0.6,label='$\\sigma=2$')\nplt.grid(linestyle=':')\nplt.legend(ncol=2,loc='lower right')", "The white noise process\nIn the previous example, we created two variables that stored draws from normal distributions with means of zero but with different standard deviations. Both of the variables were simulations of white noise processes. A white noise process is a random variable $\\epsilon_t$ with constant mean and constant variance. We are concerned only with zero-mean white noise processes, and we'll often denote that a variable is a zero-mean white noise process with the following shorthand notation:\n\\begin{align}\n\\epsilon_t & \\sim \\text{WN}(0,\\sigma^2),\n\\end{align}\nwhere $\\sigma^2$ is the variance of the process. Strictly speaking, a white noise process can follow any distribution as long as the mean and variance are constant, but we'll concentrate exclusively on white noise processes drawn from the normal distribution.\nThe AR(1) process\nA random variable $X_t$ is an autoregressive process of order 1, or AR(1) process, if it can be written in the following form:\n\\begin{align}\nX_t & = (1-\\rho)\\mu + \\rho X_{t-1} + \\epsilon_t,\n\\end{align}\nwhere $\\rho$ and $\\mu$ are constants and $\\epsilon_t \\sim \\text{WN}(0,\\sigma^2)$. 
The AR(1) process is the stochastic analog of the first-order difference equation.\nExample\nSimulate an AR(1) process for 51 periods using the following parameter values:\n\begin{align}\n\rho & = 0.5\\\n\mu & = 1 \\\n\sigma & = 1\n\end{align}", "# Simulate an AR(1) process for 51 periods. Set the RNG seed to 129\nnp.random.seed(129)\n\nT = 51\nx0=0\nmu=1\nrho=0.5\nsigma=1\n\nx = np.zeros(T)\nx[0] = x0\n\n# draw random numbers for white noise process\neps= np.random.normal(loc=0,scale=sigma,size=T-1)\nfor t in range(T-1):\n x[t+1] = mu*(1-rho) + rho*x[t] + eps[t]\n \n# Plot\nplt.plot(x,lw=3,alpha = 0.6,label='$\sigma=1$')\nplt.grid(linestyle=':')", "Example\nSimulate an AR(1) process for 51 periods using the following parameter values:\n\begin{align}\n\rho & = 1.5\\\n\mu & = 1 \\\n\sigma & = 1\n\end{align}", "# Simulate an AR(1) process for 51 periods. Set the RNG seed to 129\nnp.random.seed(129)\n\nT = 51\nx0=0\nmu=1\nrho=1.5\nsigma=1\n\nx = np.zeros(T)\nx[:] = np.NAN\nx[0] = x0\n\n# draw random numbers for white noise process\neps= np.random.normal(loc=0,scale=sigma,size=T-1)\nfor t in range(T-1):\n x[t+1] = mu*(1-rho) + rho*x[t] + eps[t]\n \n# Plot\nplt.plot(x,lw=3,alpha = 0.6,label='$\sigma=1$')\nplt.grid(linestyle=':')", "Notice that if $-1< \rho < 1$, then $\mu$ is the expected value of the process. That is, when $-1< \rho < 1$, the process will fluctuate around $\mu$. But if $\rho>1$ or $\rho<-1$, the process will explode away from $\mu$.", "def ar1(mu=0,rho=0,sigma=1,x0=0,T=25):\n '''Function for simulating an AR(1) process for T periods\n \n Args:\n mu (float): mean of the AR(1) process\n rho (float): autoregressive parameter\n sigma (float): standard deviation of the white noise process\n x0 (float): initial value of the process\n T (int): number of periods to simulate\n \n Returns:\n numpy array\n '''\n \n # initialize x array\n x = np.zeros(T)\n x[0] = x0\n \n # draw random numbers for white noise process\n eps= np.random.normal(loc=0,scale=sigma,size=T-1)\n for t in range(T-1):\n x[t+1] = mu*(1-rho) + rho*x[t] + eps[t]\n \n return x", "Example\nConstruct a $2\times2$ grid of AR(1) processes simulated for 51 periods with $\sigma = 1$ and $\mu = 0$.\nUse the following values for $\rho$:\n* Top-left: $\rho=0$\n* Top-right: $\rho=0.5$\n* Lower-left: $\rho=0.9$\n* Lower-right: $\rho=-0.5$\nBe sure to use the same seed for each simulation so you can see how changing $\rho$ affects the output.", "fig = plt.figure(figsize=(12,8))\n\nnp.random.seed(129)\ny = ar1(mu=0,rho=0,sigma=1,x0=0,T=51)\nax1 = fig.add_subplot(2,2,1)\nax1.plot(y,lw=3,alpha=0.7)\nax1.set_title('$X_t = \epsilon_t$')\nax1.grid()\n\nnp.random.seed(129)\ny = ar1(mu=0,rho=0.5,sigma=1,x0=0,T=51)\nax2 = fig.add_subplot(2,2,2)\nax2.plot(y,lw=3,alpha=0.7)\nax2.set_title('$X_t = 0.5\cdot X_{t-1} + \epsilon_t$')\nax2.grid()\n\nnp.random.seed(129)\ny = ar1(mu=0,rho=0.9,sigma=1,x0=0,T=51)\nax3 = fig.add_subplot(2,2,3)\nax3.plot(y,lw=3,alpha=0.7)\nax3.set_title('$X_t = 0.9\cdot X_{t-1} + \epsilon_t$')\nax3.grid()\n\nnp.random.seed(129)\ny = ar1(mu=0,rho=-0.5,sigma=1,x0=0,T=51)\nax4 = fig.add_subplot(2,2,4)\nax4.plot(y,lw=3,alpha=0.7)\nax4.set_title('$X_t = -0.5\cdot X_{t-1} + \epsilon_t$')\nax4.grid()", "The random walk process\nThe random walk process is an AR(1) process with $\rho=1$:\n\begin{align}\nX_t = X_{t-1} + \epsilon_t\n\end{align}\nThe random walk process has an important place in finance since the evidence suggests that stock prices follow a 
random walk process.\nExample\nSimulate 7 random walk processes for 501 periods. Set $\sigma = 1$. Plot all 7 simulated processes on the same axes.", "np.random.seed(129)\nfor i in range(7):\n plt.plot(ar1(rho=1,T=501))\n \nplt.grid()\nplt.title('Seven random walk processes')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
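A quick consistency check that could be added here (a sketch that reuses the ar1() function defined above): for $-1 < \rho < 1$ the process is stationary with mean $\mu$ and variance $\sigma^2/(1-\rho^2)$, so a long simulation should land close to both values. The 200000-period length is arbitrary.

```python
# Sketch: relies on the ar1() helper defined earlier in this notebook.
import numpy as np

np.random.seed(129)
mu, rho, sigma = 1.0, 0.5, 1.0
x = ar1(mu=mu, rho=rho, sigma=sigma, x0=mu, T=200000)

print('sample mean:', x.mean(), ' theory:', mu)
print('sample variance:', x.var(), ' theory:', sigma**2 / (1 - rho**2))
```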
adrn/TwoFace
notebooks/figures/HighK-unimodal.ipynb
mit
[ "import os\nfrom os import path\n\n# Third-party\nfrom astropy.io import fits\nfrom astropy.stats import median_absolute_deviation\nfrom astropy.table import Table, QTable, join\nfrom astropy.time import Time\nimport astropy.units as u\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom matplotlib.gridspec import GridSpec\nimport numpy as np\n%matplotlib inline\nimport h5py\nimport pandas as pd\nfrom sqlalchemy import func\nimport tqdm\n\nfrom thejoker import JokerSamples\n\nfrom twoface.config import TWOFACE_CACHE_PATH\nfrom twoface.samples_analysis import unimodal_P, MAP_sample\nfrom twoface.db import (db_connect, AllStar, AllVisit, AllVisitToAllStar, NessRG,\n StarResult, Status, JokerRun)\nfrom twoface.plot import plot_two_panel, plot_phase_fold, plot_data_orbits, _RV_LBL\nfrom twoface.mass import get_m2_min, mf, asini, a2sini, stellar_radius\n\nplot_path = '../../paper/1-catalog/figures/'\ntable_path = '../../paper/1-catalog/tables/'\n\nSession, _ = db_connect(path.join(TWOFACE_CACHE_PATH, 'apogee.sqlite'))\nsession = Session()\n\nsamples_file = path.join(TWOFACE_CACHE_PATH, 'apogee-jitter.hdf5')\nmcmc_samples_file = path.join(TWOFACE_CACHE_PATH, 'apogee-jitter-mcmc.hdf5')\n\nrun = session.query(JokerRun).limit(1).one()\njoker_pars = run.get_joker_params()\n\nhigh_K_stars = session.query(AllStar).join(StarResult).filter(StarResult.status_id>0).filter(StarResult.high_K).all()\nlen(high_K_stars)\n\nmean_R = []\nmax_R = []\nwith h5py.File(mcmc_samples_file, 'r') as f:\n for k in f.keys():\n R = f[k]['chain-stats/gelman_rubin'][:]\n mean_R.append(np.mean(R))\n max_R.append(np.max(R))\n\n# mean_R_old = []\n# max_R_old = []\n# with h5py.File('../../cache/apogee-jitter-mcmc-old.hdf5', 'r') as f:\n# for k in f.keys():\n# R = f[k]['chain-stats/gelman_rubin'][:]\n# mean_R_old.append(np.mean(R))\n# max_R_old.append(np.max(R))\n\n# bins = np.linspace(0.9, 2.2, 32)\n# plt.hist(mean_R, bins=bins, alpha=0.3)\n# plt.hist(mean_R_old, bins=bins, alpha=0.3);\n# plt.axvline(1.1)\n\n# print((np.array(mean_R) < 1.1).sum(),\n# (np.array(mean_R_old) < 1.1).sum())\n\n# plt.figure()\n# plt.hist(max_R, bins=bins, alpha=0.3)\n# plt.hist(max_R_old, bins=bins, alpha=0.3);\n# plt.axvline(1.1)\n\n# print((np.array(max_R) < 1.1).sum(),\n# (np.array(max_R_old) < 1.1).sum())", "Make the catalog:\nFor all high-K stars, classify as unimodal or not based on TheJoker samples. 
Then do same for MCMC samples, AND the selections:", "unimodal_thejoker = []\nwith h5py.File(samples_file, 'r') as f:\n for star in tqdm.tqdm(high_K_stars):\n samples = JokerSamples.from_hdf5(f[star.apogee_id])\n\n data = star.apogeervdata()\n unimodal_thejoker.append(unimodal_P(samples, data))\n\nunimodal_thejoker = np.array(unimodal_thejoker)\nunimodal_thejoker.sum()\n\nunimodal_mcmc = []\nconverged_mcmc = []\nwith h5py.File(mcmc_samples_file, 'r') as f:\n for star in tqdm.tqdm(high_K_stars):\n if star.apogee_id not in f: \n unimodal_mcmc.append(False)\n converged_mcmc.append(True)\n continue\n \n R = f[star.apogee_id]['chain-stats/gelman_rubin'][:]\n converged_mcmc.append(np.mean(R) <= 1.1)\n \n samples = JokerSamples.from_hdf5(f[star.apogee_id])\n\n data = star.apogeervdata()\n unimodal_mcmc.append(unimodal_P(samples, data))\n \nunimodal_mcmc = np.array(unimodal_mcmc)\nconverged_mcmc = np.array(converged_mcmc)\nunimodal_mcmc.sum(), converged_mcmc.sum()\n\nunimodal_mask = unimodal_thejoker | unimodal_mcmc\nunimodal_converged_mask = unimodal_thejoker & (unimodal_mcmc & converged_mcmc)\nunimodal_converged_idx = np.where(unimodal_converged_mask)[0]\nunimodal_mask.sum(), unimodal_converged_mask.sum()\n\nunimodal_stars = np.array(high_K_stars)[unimodal_mask]\nunimodal_converged = converged_mcmc[unimodal_mask]\n\nrows = dict()\nrows['APOGEE_ID'] = []\nfor k in JokerSamples._valid_keys:\n rows[k] = []\n rows[k + '_err'] = []\nrows['t0'] = []\nrows['converged'] = []\nrows['Gelman-Rubin'] = []\n\nwith h5py.File(mcmc_samples_file, 'r') as mcmc_f, h5py.File(samples_file, 'r') as joker_f:\n for i, star in tqdm.tqdm(enumerate(unimodal_stars)):\n data = star.apogeervdata()\n if star.apogee_id in mcmc_f: # and unimodal_converged[i]:\n samples = JokerSamples.from_hdf5(mcmc_f[star.apogee_id])\n R = mcmc_f[star.apogee_id]['chain-stats/gelman_rubin'][:]\n else:\n samples = JokerSamples.from_hdf5(joker_f[star.apogee_id])\n R = np.full(7, np.nan)\n \n rows['APOGEE_ID'].append(star.apogee_id)\n MAP = MAP_sample(data, samples, joker_pars)\n for k in samples.keys():\n rows[k].append(MAP[k])\n \n# if unimodal_converged[i]:\n# rows[k+'_err'].append(1.5 * median_absolute_deviation(samples[k]))\n# else:\n# rows[k+'_err'].append(np.nan * samples[k].unit)\n rows[k+'_err'].append(1.5 * median_absolute_deviation(samples[k]))\n \n rows['t0'].append(data.t0.tcb.mjd)\n rows['converged'].append(unimodal_converged[i])\n rows['Gelman-Rubin'].append(R)\n \nfor k in rows:\n if hasattr(rows[k][0], 'unit'):\n rows[k] = u.Quantity(rows[k])\n \nrows['t0'] = Time(rows['t0'], format='mjd', scale='tcb')\n\ntbl = Table(rows, masked=True)", "Add Ness masses to table:", "ness_tbl = Table.read('../../data/NessRG.fits')\nness_tbl.rename_column('2MASS', 'APOGEE_ID')\nness_tbl = ness_tbl[np.isin(ness_tbl['APOGEE_ID'], tbl['APOGEE_ID'])]\n\n# trim the duplicates...\n_, unq_idx = np.unique(ness_tbl['APOGEE_ID'], return_index=True)\nness_tbl = ness_tbl[unq_idx]", "Compute m2_min, a2sini, R1 using Ness mass", "def stddev(vals):\n return 1.5 * median_absolute_deviation(vals, ignore_nan=True)\n\nrnd = np.random.RandomState(seed=42)\nN = rnd.normal\n\ntbl['M1'] = np.full(len(tbl), np.nan) * u.Msun\ntbl['M1_err'] = np.full(len(tbl), np.nan) * u.Msun\ntbl['M2_min'] = np.full(len(tbl), np.nan) * u.Msun\ntbl['M2_min_err'] = np.full(len(tbl), np.nan) * u.Msun\ntbl['q_min'] = np.full(len(tbl), np.nan)\ntbl['q_min_err'] = np.full(len(tbl), np.nan)\n\ntbl['R1'] = np.full(len(tbl), np.nan) * u.Rsun\ntbl['R1_err'] = np.full(len(tbl), np.nan) * 
u.Rsun\ntbl['a_sini'] = np.full(len(tbl), np.nan) * u.au\ntbl['a_sini_err'] = np.full(len(tbl), np.nan) * u.au\ntbl['a2_sini'] = np.full(len(tbl), np.nan) * u.au\ntbl['a2_sini_err'] = np.full(len(tbl), np.nan) * u.au\n\nn_samples = 8192\nfor i, row in tqdm.tqdm(enumerate(tbl)):\n ness_row = ness_tbl[ness_tbl['APOGEE_ID'] == row['APOGEE_ID']]\n if len(ness_row) == 0:\n continue\n \n star = AllStar.get_apogee_id(session, row['APOGEE_ID'])\n \n m1_samples = np.exp(N(ness_row['lnM'], ness_row['e_logM'], size=n_samples)) * u.Msun\n loggs = N(star.logg, star.logg_err, n_samples)\n \n Ps = N(row['P'], row['P_err'], n_samples) * tbl['P'].unit\n Ks = N(row['K'], row['K_err'], n_samples) * tbl['K'].unit\n es = N(row['e'], row['e_err'], n_samples)\n \n# else:\n# Ps = ([row['P']] * n_samples) * tbl['P'].unit\n# Ks = ([row['K']] * n_samples) * tbl['K'].unit\n# es = np.array([row['e']] * n_samples)\n \n \n mass_func = mf(P=Ps, K=Ks, e=es)\n m2_mins = get_m2_min(m1_samples, mass_func)\n asinis = asini(Ps, es, Ks, m1_samples, m2_mins)\n a2sinis = a2sini(Ps, es, Ks, m1_samples, m2_mins)\n R1s = stellar_radius(loggs, m1_samples).to(u.Rsun)\n \n tbl['M1'][i] = np.median(m1_samples).to(u.Msun).value\n tbl['M2_min'][i] = np.nanmedian(m2_mins).to(u.Msun).value\n tbl['a_sini'][i] = np.nanmedian(asinis).to(u.au).value\n tbl['a2_sini'][i] = np.nanmedian(a2sinis).to(u.au).value\n tbl['R1'][i] = np.nanmedian(R1s).to(u.Rsun).value\n \n tbl['M1_err'][i] = stddev(m1_samples).to(u.Msun).value\n tbl['M2_min_err'][i] = stddev(m2_mins).to(u.Msun).value\n tbl['a_sini_err'][i] = stddev(asinis).to(u.au).value\n tbl['a2_sini_err'][i] = stddev(a2sinis).to(u.au).value\n tbl['R1_err'][i] = stddev(R1s).to(u.Rsun).value\n \ntbl['q_min'] = (u.Quantity(tbl['M2_min']) / u.Quantity(tbl['M1'])).decompose()\ntbl['q_min_err'] = tbl['q_min'] * \\\n np.sqrt((tbl['M2_min_err']/tbl['M2_min'])**2 + \n (tbl['M1_err']/tbl['M1'])**2) \n\nmask_ = np.isnan(tbl['M1']) | np.isnan(tbl['M2_min'])\ntbl['M1'].mask = mask_\ntbl['M1_err'].mask = mask_\ntbl['M2_min'].mask = mask_\ntbl['M2_min_err'].mask = mask_", "Add Ness columns following our columns:", "tbl_with_ness = join(tbl, ness_tbl, keys='APOGEE_ID', join_type='outer')\nassert len(tbl_with_ness) == len(tbl)", "Now we load the APOGEE AllStar table to join the APOGEE data with our orbits:", "allstar_tbl = fits.getdata('/Users/adrian/data/APOGEE_DR14/allStar-l31c.2.fits')\nallstar_tbl = allstar_tbl[np.isin(allstar_tbl['APOGEE_ID'], tbl['APOGEE_ID'])]\n\n# trim the duplicates...\n_, unq_idx = np.unique(allstar_tbl['APOGEE_ID'], return_index=True)\nallstar_tbl = allstar_tbl[unq_idx]\nassert len(allstar_tbl) == len(tbl)\n\nallstar_tbl = Table(allstar_tbl)\nallstar_tbl.rename_column('K', 'KS')\nallstar_tbl.rename_column('K_ERR', 'KS_ERR')\n\nfull_catalog = join(tbl_with_ness, allstar_tbl, keys='APOGEE_ID')\nfull_catalog[:1]", "Add binary flags \"DR14RC\" if in DR14 RC catalog, \"TINGRC\" if in Yuan-Sen's recent paper:", "from astropy.io import ascii\n\nrcdr14 = Table.read('/Users/adrian/data/APOGEE_DR14/apogee-rc-DR14.fits')\nrcting = ascii.read('../../data/ting-2018.txt')\n\n(rcting['Classification'] == 'RC_Pristine').sum()\n\nfull_catalog['DR14RC'] = np.isin(full_catalog['APOGEE_ID'], rcdr14['APOGEE_ID'])\nfull_catalog['TINGRC'] = np.isin(full_catalog['APOGEE_ID'], rcting[rcting['Classification'] == 'RC_Pristine']['Designation'])\n# full_catalog['TINGRC'] = np.isin(full_catalog['APOGEE_ID'], rcting['Designation'])\n\nlen(full_catalog), full_catalog['DR14RC'].sum(), 
full_catalog['TINGRC'].sum()\n\nfull_catalog['M1'][full_catalog['M1'].mask] = np.nan\nfull_catalog['M2_min'][full_catalog['M2_min'].mask] = np.nan\n\nfor name in full_catalog.colnames[:30]:\n c1 = '\\\\texttt{{{0}}}'.format(name.replace('_', '\\\\_'))\n try:\n c2 = '{0:latex_inline}'.format(full_catalog[name].unit)\n except TypeError:\n c2 = ''\n except AttributeError:\n c2 = ''\n \n if len(c1) < 26:\n c1 = c1 + ' '*(26 - len(c1))\n \n if len(c2) < 24:\n c2 = c2 + ' '*(24 - len(c2))\n \n print('{0} & {1} & <description> \\\\\\\\'.format(c1, c2))", "TODO: describe in README with data to use QTable.read('', astropy_native=True)\nBy-eye vetting:\nPlot all of the stars, see what orbits look like bad (2) or questionable (1) fits:", "# _path = '../../plots/unimodal/'\n# os.makedirs(_path, exist_ok=True)\n\n# units = dict()\n# for c in full_catalog.colnames:\n# if full_catalog[c].unit is not None:\n# units[c] = full_catalog[c].unit\n# else:\n# units[c] = 1.\n \n# for row in full_catalog:\n# apogee_id = row['APOGEE_ID']\n# star = AllStar.get_apogee_id(session, apogee_id)\n# data = star.apogeervdata()\n \n# row = row[JokerSamples._valid_keys]\n# sample = JokerSamples(**{c: row[c]*units[c] for c in row.colnames})\n# sample.t0 = data.t0\n \n# fig, axes = plt.subplots(1, 2, figsize=(12, 5), sharey=True)\n \n# plot_data_orbits(data, sample[None], highlight_P_extrema=False, \n# ax=axes[0], plot_kwargs=dict(alpha=1., linewidth=1.))\n# plot_phase_fold(data, sample, ax=axes[1], label=False)\n# axes[1].set_xlabel('phase')\n# axes[0].set_title(apogee_id)\n# fig.tight_layout()\n# fig.savefig(path.join(_path, '{0}.png'.format(apogee_id)), dpi=200)\n# plt.close(fig)\n\n# unimodal:\ncheck = np.array([\n '2M05224382+4300425',\n '2M08505498+1156503',\n '2M08510723+1153019',\n '2M08512530+1202563', \n '2M09522871+3811487', \n '2M10264342+1340172', \n '2M10513288-0250550',\n '2M13011859+2844170',\n '2M13162279+1739074',\n '2M13175687+7151180',\n '2M13484871+1913474',\n '2M14574438+2106271',\n '2M15054553+2220325',\n '2M15101168+6708289', \n '2M16342938-1248117',\n '2M18012240-0920302',\n '2M18343302+1949166',\n '2M18481414-0251133', \n '2M17223366+4850318',\n '2M15184139+0206004',\n '2M21260907+1100178',\n '2M17105698+4301117'\n])\n\n# Suspect:\n# SUSPECT_BROAD_LINES, or SUSPECT_RV_COMBINATIONS\nsuspect = full_catalog['APOGEE_ID'][(full_catalog['STARFLAG'] & np.sum(2**np.array([16]))) != 0]\ncheck = check[~np.isin(check, suspect)]\nprint(len(suspect), len(check))\n\nclean_flag = np.zeros(len(full_catalog), dtype=int)\nclean_flag[np.isin(full_catalog['APOGEE_ID'], check)] = 1\nclean_flag[np.isin(full_catalog['APOGEE_ID'], suspect)] = 2\nfull_catalog['clean_flag'] = clean_flag\n\n(full_catalog['clean_flag'] == 0).sum()\n\nfull_catalog.write(path.join(table_path, 'highK-unimodal.fits'), overwrite=True)\n\ntest = QTable.read(path.join(table_path, 'highK-unimodal.fits'), \n astropy_native=True, character_as_bytes=False)", "Make paper figure:", "full_catalog = Table.read(path.join(table_path, 'highK-unimodal.fits'))\n\narr = np.array(full_catalog[full_catalog['converged'] & np.isfinite(full_catalog['Gelman-Rubin'][:, 0])]['APOGEE_ID'],\n dtype='U20')\n\nnp.random.seed(42)\n\nrc = {\n 'axes.labelsize': 18,\n 'xtick.labelsize': 14,\n 'ytick.labelsize': 14\n}\n \nsubset = full_catalog[full_catalog['converged'] & np.isfinite(full_catalog['Gelman-Rubin'][:, 0])]\nrand_subset = np.random.choice(len(subset), size=8, replace=False)\nrand_subset = rand_subset[np.argsort(subset['e'][rand_subset])]\n\nwith h5py.File(samples_file, 
'r') as jok_f, h5py.File(mcmc_samples_file, 'r') as mcmc_f:\n with mpl.rc_context(rc):\n fig, axes = plt.subplots(4, 2, figsize=(8, 10), sharex=True)\n\n for i, idx in enumerate(rand_subset):\n ax = axes.flat[i]\n \n apogee_id = subset[idx]['APOGEE_ID']\n star = AllStar.get_apogee_id(session, apogee_id)\n data = star.apogeervdata()\n\n if apogee_id in mcmc_f:\n f = mcmc_f\n print('mcmc')\n else:\n f = jok_f\n print('thejoker')\n\n samples = JokerSamples.from_hdf5(f[star.apogee_id])\n samples.t0 = data.t0\n\n if len(samples) > 1:\n sample = MAP_sample(data, samples, joker_pars)\n else:\n sample = samples[0]\n\n fig = plot_phase_fold(data, sample, ax=ax, \n jitter_errorbar=True, label=False)\n xlim = ax.get_xlim()\n ylim = (data.rv.value.min(), data.rv.value.max())\n yspan = ylim[1]-ylim[0]\n ylim = ax.set_ylim(ylim[0]-0.35*yspan, ylim[1]+0.35*yspan)\n\n text = ('{0}, '.format(star.apogee_id) + \n '$P = {0.value:.2f}$ {0.unit:latex}, '.format(sample['P']) + \n '$e = {0:.2f}$'.format(sample['e']))\n ax.text(xlim[0] + (xlim[1]-xlim[0])/15,\n ylim[1] - (ylim[1]-ylim[0])/20,\n text, fontsize=10, va='top', ha='left')\n # _ = plot_two_panel(data, samples)\n\n ax.set_xlim(-0.02, 1.02)\n\n for i in [0,1]:\n axes[-1, i].set_xlabel(r'phase, $\\frac{M-M_0}{2\\pi}$')\n\n for i in range(4):\n axes[i, 0].set_ylabel(_RV_LBL.format(u.km/u.s))\n\n fig.suptitle('High-$K$, unimodal', \n x=0.55, y=0.96, fontsize=18)\n fig.tight_layout()\n fig.subplots_adjust(top=0.92)\n fig.savefig(path.join(plot_path, 'highK-unimodal.pdf'))", "For my own sake, make the same for unconverged stars:", "np.random.seed(123)\n\nrc = {\n 'axes.labelsize': 18,\n 'xtick.labelsize': 14,\n 'ytick.labelsize': 14\n}\n \nsubset = full_catalog[np.logical_not(full_catalog['converged'])]\nrand_subset = np.random.choice(len(subset), size=8, replace=False)\nrand_subset = rand_subset[np.argsort(subset['e'][rand_subset])]\n\nwith h5py.File(samples_file, 'r') as jok_f, h5py.File(mcmc_samples_file, 'r') as mcmc_f:\n with mpl.rc_context(rc):\n fig, axes = plt.subplots(4, 2, figsize=(8, 10), sharex=True)\n\n for i, idx in enumerate(rand_subset):\n ax = axes.flat[i]\n\n star = AllStar.get_apogee_id(session, subset[idx]['APOGEE_ID'])\n data = star.apogeervdata()\n\n if apogee_id in mcmc_f:\n f = mcmc_f\n print('mcmc')\n else:\n f = jok_f\n print('thejoker')\n\n samples = JokerSamples.from_hdf5(jok_f[star.apogee_id])\n samples.t0 = data.t0\n\n if len(samples) > 1:\n sample = MAP_sample(data, samples, joker_pars)\n else:\n sample = samples[0]\n\n fig = plot_phase_fold(data, sample, ax=ax, \n jitter_errorbar=True, label=False)\n xlim = ax.get_xlim()\n ylim = (data.rv.value.min(), data.rv.value.max())\n yspan = ylim[1]-ylim[0]\n ylim = ax.set_ylim(ylim[0]-0.35*yspan, ylim[1]+0.35*yspan)\n\n text = ('{0}, '.format(star.apogee_id) + \n '$P = {0.value:.2f}$ {0.unit:latex}, '.format(sample['P']) + \n '$e = {0:.2f}$'.format(sample['e']))\n ax.text(xlim[0] + (xlim[1]-xlim[0])/15,\n ylim[1] - (ylim[1]-ylim[0])/20,\n text, fontsize=10, va='top', ha='left')\n # _ = plot_two_panel(data, samples)\n\n ax.set_xlim(-0.02, 1.02)\n\n for i in [0,1]:\n axes[-1, i].set_xlabel(r'phase, $\\frac{M-M_0}{2\\pi}$')\n\n for i in range(4):\n axes[i, 0].set_ylabel(_RV_LBL.format(u.km/u.s))\n\n fig.suptitle('Example stars from the high-$K$, unimodal sample', \n x=0.55, y=0.96, fontsize=18)\n fig.tight_layout()\n fig.subplots_adjust(top=0.92)", "Bulk properties", "full_catalog['converged'].sum(), len(full_catalog)-full_catalog['converged'].sum()\n\n# 
plt.hist(full_catalog['e'][~full_catalog['converged']], bins='auto');\nplt.hist(full_catalog['e'], bins='auto');", "", "emcee_converged = full_catalog[full_catalog['emcee_converged']]\n\n_path = '../../plots/emcee_converged'\nos.makedirs(_path, exist_ok=True)\n\nwith h5py.File(mcmc_samples_file, 'r') as mcmc_f, h5py.File(samples_file, 'r') as f:\n for row in emcee_converged:\n star = AllStar.get_apogee_id(session, row['APOGEE_ID'])\n data = star.apogeervdata()\n \n if star.apogee_id in mcmc_f:\n samples = JokerSamples.from_hdf5(mcmc_f[star.apogee_id])\n print('mcmc')\n else:\n samples = JokerSamples.from_hdf5(f[star.apogee_id])\n print('thejoker')\n \n samples.t0 = data.t0\n \n fig = plot_two_panel(data, samples, \n plot_data_orbits_kw=dict(n_times=16384, \n highlight_P_extrema=False))\n fig.axes[0].set_title(star.apogee_id)\n fig.tight_layout()\n fig.savefig(path.join(_path, '{0}.png'.format(star.apogee_id)), dpi=200)\n plt.close(fig)", "By-eye vetting: these ones are suspicious", "suspicious_ids = ['2M05224382+4300425',\n '2M08505498+1156503',\n '2M10264342+1340172',\n '2M10513288-0250550',\n '2M14574438+2106271',\n '2M16131259+5043080',\n '2M17121495+3211467',\n '2M17212080+6003296',\n '2M18571262-0328064',\n '2M21260907+1100178',\n '2M21374395+4304268']\n\nderp = emcee_converged[~np.isin(emcee_converged['APOGEE_ID'], suspicious_ids)]\n\nderp = full_catalog\n\nfig, ax = plt.subplots(1, 1, figsize=(6,6))\n\nax.errorbar(derp['P'], derp['LOGG'],\n xerr=derp['P_err'], yerr=derp['LOGG_ERR'],\n marker='o', linestyle='none', alpha=0.8)\n\nax.set_xscale('log')\nax.set_xlim(0.8, 2000)\nax.set_ylim(4., 0)\nax.set_xlabel('P')\nax.set_ylabel('logg')\n\n# -----\n\nfig, ax = plt.subplots(1, 1, figsize=(6,6))\n\nax.errorbar(derp['P'], derp['e'],\n xerr=derp['P_err'], yerr=derp['e_err'],\n marker='o', linestyle='none', alpha=0.8)\n\nax.set_xscale('log')\nax.set_xlim(0.8, 2000)\nax.set_ylim(0, 1)\nax.set_xlabel('P')\nax.set_ylabel('e')\n\n# -----\n\nfig, axes = plt.subplots(1, 2, figsize=(10, 5))\n\nax = axes[0]\nax.errorbar(derp['M1'], derp['M2_min']/derp['M1'],\n xerr=derp['M1_err'], yerr=np.sqrt(derp['M1_err']**2+derp['M2_min_err']**2),\n marker='o', linestyle='none', alpha=0.8)\nax.set_xlabel('M1')\nax.set_ylabel('M2/M1')\n\nax = axes[1]\nmass_ratio = derp['M2_min']/derp['M1']\nax.hist(mass_ratio[np.isfinite(mass_ratio)], bins='auto')\nax.set_xlabel('M2/M1')\n\nwith h5py.File(mcmc_samples_file, 'r') as mcmc_f, h5py.File(samples_file, 'r') as f:\n for row in derp[rc_mask & (derp['P'] < 20)]:\n star = AllStar.get_apogee_id(session, row['APOGEE_ID'])\n data = star.apogeervdata()\n \n if star.apogee_id in mcmc_f:\n samples = JokerSamples.from_hdf5(mcmc_f[star.apogee_id])\n print('mcmc')\n else:\n samples = JokerSamples.from_hdf5(f[star.apogee_id])\n print('thejoker')\n \n samples.t0 = data.t0\n \n fig = plot_two_panel(data, samples, \n plot_data_orbits_kw=dict(n_times=16384, \n highlight_P_extrema=False))\n fig.axes[0].set_title('P = {0:.2f}'.format(samples['P'][0]))\n fig.tight_layout()\n\nderp[rc_mask & (derp['P'] < 20)]" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
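For readers without the twoface package, the M2_min column above is the usual minimum companion mass implied by the spectroscopic binary mass function. The sketch below spells out that arithmetic with standard formulas; the helper names and the example numbers are invented for illustration and are not the twoface implementation.

```python
# Stand-alone sketch (not twoface.mass): the binary mass function is
# f(M) = P K^3 (1 - e^2)^(3/2) / (2 pi G), and the minimum companion mass
# solves M2^3 = f(M) (M1 + M2)^2, i.e. the sin(i) = 1 case.
import numpy as np
import astropy.units as u
from astropy.constants import G
from scipy.optimize import brentq

def mass_function(P, K, e):
    return (P * K**3 * (1 - e**2)**1.5 / (2 * np.pi * G)).to(u.Msun)

def m2_min(M1, f):
    m1 = M1.to_value(u.Msun)
    fv = f.to_value(u.Msun)
    g = lambda m2: m2**3 - fv * (m1 + m2)**2   # single positive root for fv > 0
    return brentq(g, 1e-6, 1e4) * u.Msun

f = mass_function(100 * u.day, 10 * u.km / u.s, 0.3)   # made-up orbit
print(f, m2_min(1.3 * u.Msun, f))                      # made-up primary mass
```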
antonpavlov/traffic-sign-recognition
datasetProcessor.ipynb
mit
[ "Dataset Processor\nThis notebook contains steps for dataset analisys and preparation to be used in training of a neural network.", "import tensorflow as tf\nprint(tf.__version__)", "Be aware of version compatibility. This copybook uses functions form Trensorflow package version 1.3.0 and higher.\nImports", "# Some important imports\nimport math\nimport numpy as np\nimport colorsys\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport random\nimport pickle", "Load data", "# If your files are named differently or placed in a different folder, please update lines below.\ntraining_file =\"./raw_data/train.p\" \nvalidation_file = \"./raw_data/valid.p\"\ntesting_file = \"./raw_data/test.p\"\n\nwith open(training_file, mode='rb') as f:\n train = pickle.load(f)\nwith open(validation_file, mode='rb') as f:\n valid = pickle.load(f)\nwith open(testing_file, mode='rb') as f:\n test = pickle.load(f)\n \nX_train, y_train = train['features'], train['labels']\nX_valid, y_valid = valid['features'], valid['labels']\nX_test, y_test = test['features'], test['labels']\n\n# Make sure that the number of features equals the number of labels\nassert(len(X_train) == len(y_train))\nassert(len(X_valid) == len(y_valid))\nassert(len(X_test) == len(y_test))", "Basic Summary", "# Number of training examples\nn_train = X_train.shape[0]\n# Number of training labels\nn_train_lables = y_train.shape[0]\n\n# Number of validation examples\nn_validation = X_valid.shape[0]\n# Number of validation labels\nn_validation_labels = y_valid.shape[0]\n\n# Number of testing examples\nn_test = X_test.shape[0]\n# Number of test labels\nn_test_labels = y_test.shape[0]\n\n# The shape of an traffic sign image\ntrain_image_shape = [X_train.shape[1], X_train.shape[2], X_train.shape[3]]\nvalid_image_shape = [X_valid.shape[1], X_valid.shape[2], X_valid.shape[3]]\ntest_image_shape = [X_test.shape[1], X_test.shape[2], X_test.shape[3]]\n\n# Number of unique classes/labels in the dataset.\nn_classes = len(set(train['labels']))\n\nprint(\"Number of training examples =\", n_train)\nprint(\"Number of training labels =\", n_train_lables)\nprint()\nprint(\"Number of validation examples =\", n_validation)\nprint(\"Number of validation labels =\", n_validation)\nprint()\nprint(\"Number of testing examples =\", n_test)\nprint(\"Number of testing labels =\", n_test)\nprint()\nprint(\"Training image data shape =\", train_image_shape)\nprint(\"Validation image data shape =\", valid_image_shape)\nprint(\"Test image data shape =\", test_image_shape)\nprint()\nprint(\"Number of classes =\", n_classes)", "Some exploratory visualizations", "n_pics_row = 5\nn_pic_col = 10\n\nplots = []\nfor i in range(n_pics_row):\n for j in range(n_pic_col):\n ax = plt.subplot2grid((n_pics_row,n_pic_col), (i,j))\n ax.imshow(X_train[random.randint(0, n_train)][:][:][:], cmap='gray')\n ax.set_xticks([])\n ax.set_yticks([])\nplt.show()\n\n# Frequencies of training data per class\nplt.hist(y_train, bins = np.arange(n_classes)) # arguments are passed to np.histogram\nplt.title(\"Frequencies of classes in training set\")\nplt.show()\n\n# Frequencies of validation data per class\nplt.hist(y_valid, bins = np.arange(n_classes)) # arguments are passed to np.histogram\nplt.title(\"Frequencies of classes in validation set\")\nplt.show()\n\n# Frequencies of test data per class\nplt.hist(y_test, bins = np.arange(n_classes)) # arguments are passed to np.histogram\nplt.title(\"Frequencies of classes in testing set\")\nplt.show()", "Note: in terms of frequencies, it can be confirmed that 
the dataset was divided correctly. Training, validation and testing data have similar histograms of class frequencies. \nNormalize all data", "def normalize_img(image_data):\n \"\"\"\n Normalize the image data with Min-Max scaling to a range of [0.1, 0.9],\n :param image_data: The image data to be normalized,\n :return: Normalized image data.\n \"\"\"\n a = 0.1\n b = 0.9\n scale_min = 0\n scale_max = 255\n return a + (((image_data - scale_min)*(b - a))/(scale_max - scale_min))\n\nX_train_norm = normalize_img(X_train)\nX_valid_norm = normalize_img(X_valid)\nX_test_norm = normalize_img(X_test)", "Transform normalized RGB image to grayscale", "tf.reset_default_graph()\nX_train2gray = tf.image.rgb_to_grayscale(X_train_norm)\n\nwith tf.Session() as sess:\n X_train_gray = sess.run(X_train2gray)", "Create rotated images from normalized original data\nAt this point, the training data will be extended with rotated images (-15, +15 deg).", "tf.reset_default_graph()\nX_train_rotated_ccw = tf.contrib.image.rotate(X_train_norm, 15 * math.pi / 180, interpolation='BILINEAR')\nX_train_rotated_cw = tf.contrib.image.rotate(X_train_norm, -15 * math.pi / 180, interpolation='BILINEAR')\n\nwith tf.Session() as sess:\n rotated_images_ccw = sess.run(X_train_rotated_ccw)\n rotated_images_cw = sess.run(X_train_rotated_cw) \n\ntf.reset_default_graph()\nrotated_ccw2gray = tf.image.rgb_to_grayscale(rotated_images_ccw) # Ready to export\nrotated_cw2gray = tf.image.rgb_to_grayscale(rotated_images_cw) # Ready to export\n\nwith tf.Session() as sess:\n rotated_images_ccw_gray = sess.run(rotated_ccw2gray)\n rotated_images_cw_gray = sess.run(rotated_cw2gray)\n\n# Copy labels for rotated images\nrotated_ccw_labels = y_train\nrotated_cw_labels = y_train", "Modify brightness randomly\nMake a copy of training data and modify randomly a brightness of each image.", "# Time consuming task! Function is sequential. TODO: optimize it.\ndef random_brightness(image):\n \"\"\"\n Modify image bightness with following formula: brightness = 0.2 + np.random.uniform(),\n :param image: The image data to be processed,\n :return: Modified image data\n \"\"\"\n result = image\n for i in range(image.shape[0]):\n one_image = image[i][:][:][:]\n brightness = 0.2 + np.random.uniform()\n for x in range(one_image.shape[0]):\n for y in range(one_image.shape[1]):\n h, s, v = colorsys.rgb_to_hsv(one_image[x][y][0], one_image[x][y][1], one_image[x][y][2])\n v = v * brightness\n one_image[x][y][0], one_image[x][y][1], one_image[x][y][2] = colorsys.hsv_to_rgb(h, s, v)\n result[i][:][:][:] = one_image[:][:][:]\n return result\n\n## Create a copy of original dataset and modify imeges' brightness\nX_train_bright = random_brightness(X_train_norm)\ny_train_bright = y_train", "Convert processed images to grayscale.", "tf.reset_default_graph()\nX_train_bright2gray = tf.image.rgb_to_grayscale(X_train_bright)\n\nwith tf.Session() as sess:\n X_train_bright_gray = sess.run(X_train_bright2gray)", "Add random noise", "# Time consuming task! Function is sequential. 
TODO: optimize it.\ndef random_noise(image):\n result = image\n for i in range(image.shape[0]):\n one_image = image[i][:][:][:]\n for x in range(one_image.shape[0]):\n for y in range(one_image.shape[1]):\n brightness = np.random.uniform(low=0.0, high=0.3) # be careful with upper limit -> impact validation \n h, s, v = colorsys.rgb_to_hsv(one_image[x][y][0], one_image[x][y][1], one_image[x][y][2])\n v = v * brightness\n one_image[x][y][0], one_image[x][y][1], one_image[x][y][2] = colorsys.hsv_to_rgb(h, s, v)\n result[i][:][:][:] = one_image[:][:][:]\n return result\n\nX_train_noise = random_noise(X_train_norm)\ny_train_noise = y_train\n\ntf.reset_default_graph()\nX_train_noise2gray = tf.image.rgb_to_grayscale(X_train_noise)\n\nwith tf.Session() as sess:\n X_train_noise_gray = sess.run(X_train_noise2gray)", "Concatenate all training data together", "X_train_ready = X_train_gray\ny_train_ready = y_train\n\nX_train_ready = np.append(X_train_ready, rotated_images_ccw_gray, axis=0)\ny_train_ready = np.append(y_train_ready, rotated_ccw_labels, axis=0)\n\nX_train_ready = np.append(X_train_ready, rotated_images_cw_gray, axis=0)\ny_train_ready = np.append(y_train_ready, rotated_cw_labels, axis=0)\n\nX_train_ready = np.append(X_train_ready, X_train_bright_gray, axis=0)\ny_train_ready = np.append(y_train_ready, y_train_bright, axis=0)\n\nX_train_ready = np.append(X_train_ready, X_train_noise_gray, axis=0)\ny_train_ready = np.append(y_train_ready, y_train_noise, axis=0)", "Convert to grayscale validation and test data", "tf.reset_default_graph()\nX_valid_gray = tf.image.rgb_to_grayscale(X_valid_norm) # Ready to export\nX_test_gray = tf.image.rgb_to_grayscale(X_test_norm) # Ready to export\n\nwith tf.Session() as sess:\n X_valid_ready = sess.run(X_valid_gray)\n X_test_ready = sess.run(X_test_gray)\n\n# Propagate their labels\ny_valid_ready = y_valid \ny_test_ready = y_test\n\nprint(\"Training dataset shape: \", X_train_ready.shape)\nprint(\"Validation dataset shape: \", X_valid_ready.shape)\nprint(\"Test dataset shape: \", X_test_ready.shape)\n\n# Make sure that the number of features equals the number of labels\nassert(len(X_train_ready) == len(y_train_ready))\nassert(len(X_valid_ready) == len(y_valid_ready))\nassert(len(X_test_ready) == len(y_test_ready))\n\nwith open('./train_data/aug_train_features_ready2.pickle', 'wb') as output:\n pickle.dump(X_train_ready, output)\n\nwith open('./train_data/aug_train_labels_ready2.pickle', 'wb') as output:\n pickle.dump(y_train_ready, output)\n \nwith open('./train_data/aug_valid_features_ready2.pickle', 'wb') as output:\n pickle.dump(X_valid_ready, output)\n\nwith open('./train_data/aug_valid_labels_ready2.pickle', 'wb') as output:\n pickle.dump(y_valid_ready, output)\n \nwith open('./train_data/aug_test_features_ready2.pickle', 'wb') as output:\n pickle.dump(X_test_ready, output)\n\nwith open('./train_data/aug_test_labels_ready2.pickle', 'wb') as output:\n pickle.dump(y_test_ready, output)", "Observation: tensor graph needs to be reset all the time in order to avoid 2 GB limit overflow; at the beginning, functions were ment to work with RGB images. Repeat RGB to grayscale conversion is not so elegant and time consuming. This notebook may be optimized and gain significant performance in future." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
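The random_brightness() and random_noise() cells above are flagged as slow because they loop over every pixel through colorsys. Multiplying all three RGB channels of an image by a single factor scales V while leaving hue and saturation unchanged, so the per-image brightness jitter can be done with one broadcasted multiply. The sketch below uses a hypothetical function name and adds a clip to [0, 1], which the original loop does not do:

```python
# Vectorized stand-in for the sequential random_brightness() above.
import numpy as np

def random_brightness_vectorized(images, seed=None):
    """images: float array of shape (N, H, W, 3) in [0, 1]; returns a jittered copy."""
    rng = np.random.RandomState(seed)
    factors = 0.2 + rng.uniform(size=(images.shape[0], 1, 1, 1))  # one factor per image
    return np.clip(images * factors, 0.0, 1.0)

# e.g. X_train_bright = random_brightness_vectorized(X_train_norm, seed=0)
```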
google-research/google-research
nested_rhat/rhat_locker.ipynb
apache-2.0
[ "$\\hat R$ locker\nThis notebook serves as a sandbox to understand the potential of the nested-$\\hat R$ diagnostic. The underlying idea is to gather short chains into a long \"super chains\" and then check that the super chains are mixing. We'll motivate this idea and work out details, benefits and limitations. \nCopyright 2021 Google LLC.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Import tf first to enable eager mode.\nimport tensorflow as tf\ntf.executing_eagerly()\n\n# TODO (charlesm93): check which of these actually need to be imported.\n\nimport numpy as np\nfrom matplotlib.pyplot import *\n# %config InlineBackend.figure_format = 'retina'\n# matplotlib.pyplot.style.use(\"dark_background\")\n\nimport jax\nfrom jax import random\nfrom jax import numpy as jnp\n\nfrom colabtools import adhoc_import\n\nfrom inference_gym import using_jax as gym\n\nfrom tensorflow_probability.spinoffs.fun_mc import using_jax as fun_mcmc\n\n\n# import tensorflow as tf\nfrom tensorflow_probability.python.internal import prefer_static as ps\nfrom tensorflow_probability.python.internal import unnest\n\n\nimport tensorflow_probability as _tfp\ntfp = _tfp.substrates.jax\ntfd = tfp.distributions\ntfb = tfp.bijectors\n\ntfp_np = _tfp.substrates.numpy\ntfd_np = tfp_np.distributions \n\n# set font size for matplot lib\nfont = {'family' : 'normal',\n 'weight' : 'bold',\n 'size' : 14}\n\nmatplotlib.rc('font', **font)\n\ntf.executing_eagerly()", "Set up problem", "# options: Bananas, GermanCredit, Brownian\nproblem_name = 'Bananas'\n\nif (problem_name == 'Bananas'):\n target = gym.targets.VectorModel(gym.targets.Banana(),\n flatten_sample_transformations=True)\n num_dimensions = target.event_shape[0] \n init_step_size = 1.\n\nif (problem_name == 'GermanCredit'):\n # This problem seems to require that we load TF datasets first.\n import tensorflow_datasets\n target = gym.targets.VectorModel(gym.targets.GermanCreditNumericSparseLogisticRegression(),\n flatten_sample_transformations=True)\n num_dimensions = target.event_shape[0]\n init_step_size = 0.02\n\nif (problem_name == 'Brownian'):\n target = gym.targets.BrownianMotionMissingMiddleObservations()\n target = gym.targets.VectorModel(target,\n flatten_sample_transformations = True)\n num_dimensions = target.event_shape[0]\n init_step_size = 0.01\n\ndef target_log_prob_fn(x):\n \"\"\"Unnormalized, unconstrained target density.\n\n This is a thin wrapper that applies the default bijectors so that we can\n ignore any constraints.\n \"\"\"\n y = target.default_event_space_bijector(x)\n fldj = target.default_event_space_bijector.forward_log_det_jacobian(x)\n return target.unnormalized_log_prob(y) + fldj\n\n# NOTE: use a large factor to get overdispered initializations.\n# NOTE: don't set offset to 0 when the target mean is 0.\n# CHECK: what scale should we use? 
Poor inits can make the problem much more\n# difficult.\n# NOTE: we probably want inits that allow us to get decent estimates\n# in the long regime\n\n# if (problem_name == 'Bananas'):\nif (problem_name == 'Bananas'):\n offset = 2\n def initialize (shape, key = random.PRNGKey(37272709)):\n return 3 * random.normal(key, shape + (num_dimensions,)) + offset\n\nif (problem_name == 'GermanCredit'):\n offset = 0.1\n def initialize (shape, key = random.PRNGKey(37272709)):\n return 0.5 * random.normal(key, shape + (num_dimensions,)) + offset\n\n# offset = 0.5\n# def initialize (shape, key = random.PRNGKey(37272709)):\n# return 0.01 * random.normal(key, shape + (num_dimensions,)) + offset\n", "Run MCMC\nWe consider two regimes: the \"long\" regime in which a few chains are run for many warmup and sampling iterations, and the \"short\" regime, wherein many chains are run for a few warmup and sampling iterations. Note that in the short regime we're willing to not warmup our chains (i.e. possibly adapt step size, trajectory length, mass matrix) as well as in the long regime, the hope being that the variance decreases enough because we're running many chains.", "# Transition kernel for long regime\nnum_chains_long = 4\nif (problem_name == 'GermanCredit'):\n num_warmup_long, num_sampling_long = 500, 1000\nif (problem_name == 'Bananas'):\n num_warmup_long, num_sampling_long = 200, 1000\ntotal_samples_long = num_warmup_long + num_sampling_long\n\n# CHECK: is this the transition kernel we want to use?\n# REMARK: the step size is picked based on the model we're fitting\nif (problem_name == 'Bananas' or problem_name == 'GermanCredit'): \n kernel_long = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)\n kernel_long = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_long, num_warmup_long)\n kernel_long = tfp.mcmc.DualAveragingStepSizeAdaptation(\n kernel_long, num_warmup_long, target_accept_prob = 0.75,\n reduce_fn=tfp.math.reduce_log_harmonic_mean_exp)\n\n# Follow the inference gym tutorial\n# NOTE: transition kernel below is untested.\nif (problem_name == 'Brownian'):\n kernel_long = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)\n # Adapt step size.\n kernel_long = tfp.mcmc.DualAveragingStepSizeAdaptation(\n kernel_long, num_warmup_long, # int(num_samples // 2 * 0.8),\n target_accept_prob = 0.9)\n # Adapt trajectory length.\n kernel_long = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(\n kernel_long,\n num_adaptation_steps = num_warmup_long) # int(num_steps // 2 * 0.8))\n\n\n# TODO: work out what an appropriate transition kernel for this problem would be.\n# if (problem_name == 'GermanCredit'):\n# kernel_long = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)\n# kernel_long = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_long, num_warmup_long)\n# kernel_long = tfp.mcmc.DualAveragingStepSizeAdaptation(\n# kernel_long, num_warmup_long, target_accept_prob = 0.75,\n# reduce_fn=tfp.math.reduce_log_harmonic_mean_exp)\n\ninitial_state = initialize((num_chains_long,))\n\n# initial_state = initialize((num_chains_long,))\nresult_long = tfp.mcmc.sample_chain(\n total_samples_long, initial_state, kernel = kernel_long, seed = random.PRNGKey(1954))\n\n# Transition kernel for short regime\n# CHECK: how many warmup iterations should we use here?\n# Suggested options: 512, 1024, 2048, 2500\nnum_chains_short = 512\nnum_super_chains = 4\n\nif (problem_name == 'GermanCredit'):\n num_warmup_short, 
num_sampling_short = 1000, 1000\nif (problem_name == 'Bananas'):\n num_warmup_short, num_sampling_short = 100, 1000 # 100, 1000\ntotal_samples_short = num_warmup_short + num_sampling_short\n\nif (problem_name == 'Bananas' or problem_name == 'GermanCredit'):\n kernel_short = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)\n kernel_short = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_short, num_warmup_short)\n kernel_short = tfp.mcmc.DualAveragingStepSizeAdaptation(\n kernel_short, num_warmup_short, target_accept_prob = 0.75, #0.75,\n reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)\n\ndifferent_location = False\n\nif (different_location):\n # initialize each chain at a different location \n initial_state = initialize((num_chains_short,))\n\nelse:\n # Chains within a super chain are all initialized at the same location\n # Here we use the same initial points as in the long regime.\n initial_state = initial_state # initialize((num_super_chains,))\n initial_state = np.repeat(initial_state, num_chains_short // num_super_chains,\n axis = 0)\n\n\n\nresult_short = tfp.mcmc.sample_chain(\n total_samples_short, initial_state, kernel = kernel_short,\n seed = random.PRNGKey(1954))\n", "Analyze results\nSquared error for Monte Carlo estimate of the mean and variance", "# Get some estimates of the mean and variance.\ntry:\n mean_est = target.sample_transformations['identity'].ground_truth_mean\nexcept:\n print('no ground truth mean')\n mean_est = (result.all_states[num_warmup:, :]).mean(0).mean(0)\ntry:\n var_est = target.sample_transformations['identity'].ground_truth_standard_deviation**2\nexcept:\n print('no ground truth std dev')\n var_est = ((result.all_states[num_warmup:, :]**2).mean(0).mean(0) -\n mean_est**2)\n\njnp.linalg.norm(var_est[0] / 100)", "As a first step plot the squared error based on a Monte Carlo estimator that discards the first half of the samples, and doesn't discriminate between warmup and sampling iterations. We also plot the target precision whith the \"true\" variance -- when available for instance via the inference gym -- divided by 100. 
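(Recall that the squared Monte Carlo standard error of a posterior mean scales roughly as $\mathrm{Var} / \mathrm{ESS}$, so dividing the posterior variance by 100 is, under that assumption, the squared error of an estimate built from about 100 effective draws.) 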
This is the precision we expect our Monte Carlo estimates to reach with an effective sample size of 100.", "\n# Map MCMC samples from the unconstrained space to the original space\n# CHECK: does this mess up the banana example?\nresult_state_long = target.default_event_space_bijector(result_long.all_states)\nresult_state_short = target.default_event_space_bijector(result_short.all_states)\n\ndef mc_est(x, axis = 0):\n \"\"\"Computes the running sample mean based on sampling iterations, with\n warmup iterations discarded.\n By default, we focus on the first parameter.\n \"\"\"\n # NOTE: why discard half of the samples?\n cum_x = np.cumsum(x, axis)\n return ((cum_x[1::2] - cum_x[:cum_x.shape[0]//2]) /\n np.arange(1, cum_x.shape[0] // 2 + 1).reshape([-1] + [1] * (len(cum_x.shape) - 1)))\n\nlong_error = mc_est(result_state_long.mean(1) - mean_est)\nshort_error = mc_est(result_state_short.mean(1) - mean_est)\n\ntrue_var_available = True\nif (true_var_available):\n target_precision = jnp.linalg.norm(var_est[0] / 100)\nelse:\n target_precision = jnp.linalg.norm(long_error[len(long_error) - 1], axis = -1)\n\nfigure(figsize = [6, 6])\nsemilogy(jnp.linalg.norm(long_error, axis = -1), label = '4 chains')\nsemilogy(jnp.linalg.norm(short_error, axis = -1), label = '1024 chains')\nhlines(target_precision, 0, total_samples_long / 2,\n linestyles = '--', \n label = 'Target: Var / 100')\nylabel(\"Squared error for Mean estimate\")\nxlabel(\"Iterations (excluding warmup)\")\nlegend(loc = 'best')\nshow()", "I don't think the variance of the variance is stored in the inference gym, although it's probably possible to access this information using the error in the variance estimate. For now, we'll use the final result reported by the long chain as the target precision.", "long_var_error = mc_est(result_state_long.var(1)) - var_est\nshort_var_error = mc_est(result_state_short.var(1)) - var_est\n\nlong_var_estimate = jnp.linalg.norm(long_var_error[len(long_var_error) - 1], axis = -1)\n\nfigure(figsize = [6, 6])\nsemilogy(jnp.linalg.norm(long_var_error, axis = -1), label = 'long')\nsemilogy(jnp.linalg.norm(short_var_error, axis = -1), label = 'short')\nhlines(long_var_estimate, 0, total_samples_long / 2,\n linestyles = '--', \n label = 'long var estimate')\nylabel(\"Squared error for Variance estimate\")\nlegend(loc = 'best')\nshow()\n\n# NOTE: why are the estimates in the long regime so poor??", "Repeat the above, using a Monte Carlo estimator based on sampling iterations with only the warmup samples discard.", "if (False):\n print(result_state_long[num_warmup_long:, :, :].mean(0)[0][0])\n print(result_state_short[num_warmup_short:, :, :].mean(0)[0][0])\n print(mean_est[0])\n print(long_error[len(long_error) - 1][0])\n print(short_error[len(short_error) - 1][0])\n\n result_state_long[num_warmup_long:, :, :].mean(1).shape\n\ndef mc_est_warm(x, axis = 0):\n \"\"\" compute running average without discarding half of the samples.\"\"\"\n return np.cumsum(x, axis) / np.arange(1, x.shape[0] + 1).reshape([-1] + [1] * (len(x.shape) - 1))\n\ndiscard_warmup = True\n\nif (discard_warmup):\n long_error = mc_est_warm(result_state_long[num_warmup_long:, :, :].mean(1)) - mean_est\n short_error = mc_est_warm(result_state_short[num_warmup_short:, :, :].mean(1)) - mean_est\nelse:\n long_error = result_state_long[num_warmup_long:, :, :].mean(1) - mean_est\n short_error = result_state_short[num_warmup_short:, :, :].mean(1) - mean_est\n\ntrue_var_available = True\nif (true_var_available):\n target_precision = 
jnp.linalg.norm(var_est[0] / 100)\nelse:\n  target_precision = jnp.linalg.norm(long_error[len(long_error) - 1], axis = -1)\n\nfigure(figsize = [6, 6])\nsemilogy(jnp.linalg.norm(long_error, axis = -1), label = '4 chains')\nsemilogy(jnp.linalg.norm(short_error, axis = -1), label = '512 chains')\nhlines(target_precision, 0, num_sampling_long,\n       linestyles = '--', \n       label = 'target: var / 100')\nylabel(\"Squared error for Mean estimate\")\nxlabel(\"Sampling iterations (i.e. warmup excluded)\")\nlegend(loc = 'best')\nshow()", "Remark: if after one iteration we are below the target precision, then we're probably running a warmup which is too long and / or running too many chains.", "long_var_error = mc_est_warm(result_state_long[num_warmup_long:, :, :].var(1)) - var_est\nshort_var_error = mc_est_warm(result_state_short[num_warmup_short:, :, :].var(1)) - var_est\n\nlong_var_mc_estimate = jnp.linalg.norm(long_var_error[len(long_var_error) - 1], axis = -1)\n\nfigure(figsize = [6, 6])\nsemilogy(jnp.linalg.norm(long_var_error, axis = -1), label = 'long')\nsemilogy(jnp.linalg.norm(short_var_error, axis = -1), label = 'short')\nhlines(long_var_mc_estimate, 0, num_sampling_long,\n       linestyles = '--', \n       label = 'long MC estimate')\nylabel(\"Squared error for Variance estimate\")\nlegend(loc = 'best')\nshow()", "Staring at the plot above, it's clear that the short regime reaches a reasonable precision in fewer iterations than the long regime, even though the long regime warms up chains for many more iterations. The dotted line represents the Monte Carlo estimate using all the samples from the long regime. We'll use this as our target precision.", "if (False):\n  print(long_mc_estimate)\n  print(jnp.linalg.norm(short_error, axis = -1)[0:10])\n  print(long_var_estimate)\n  print(jnp.linalg.norm(short_var_error, axis = -1)[0:10])\n\n# Identify the number of iterations after which the short regime matches\n# the precision of the long regime.\n# TODO: find a better criterion\n\nitem_index = np.where(jnp.linalg.norm(short_error, axis = -1) <= target_precision)\ntarget_iter_mean = item_index[0][0]\nprint(\"Reasonable precision for mean reached in\", target_iter_mean + 1, \"iteration(s).\")\n\nitem_index = np.where(jnp.linalg.norm(short_var_error, axis = -1) <= long_var_estimate)\ntarget_iter_var = item_index[0][0]\nprint(\"Reasonable precision for variance reached in\", target_iter_var + 1, \"iteration(s).\")", "Check for convergence\nLet's first examine whether we're past the transient bias regime (we should be since we're discarding the warmup phase).", "# Plot last-sample estimators\nfigure(figsize = [6, 6])\nsemilogy(jnp.linalg.norm(result_state_long.mean(1) - var_est, axis=-1),\n         label='Long mean Error')\nsemilogy(jnp.linalg.norm(result_state_short.mean(1) - mean_est, axis=-1),\n         label='Short Mean Error')\nhlines(jnp.sqrt(var_est.sum() / 100), 0, total_samples_long, label='Norm of Posterior Scales / 10')\nlegend(loc='best')\nxlabel('Iteration')\nylabel('Norm of Error of Estimate')\ntitle(target.name)\nxlim([0, 200])\nshow()\n\n# NOTE: Not sure what's going on here.", "Doing our due diligence, let's look at the samples returned by both methods, after discarding the warmup iterations.", "plot(result_long.all_states[num_warmup_short:, :, 0].flatten(), \n     result_long.all_states[num_warmup_short:, :, 1].flatten(), '.', alpha = 0.2)\ntitle('Long regime')\nshow()\nplot(result_long.all_states[num_warmup_long:total_samples_long, :10, 1])\nshow()\n\n# NOTE: (for Banana problem) With 4 samples after warmup we already see 
samples spread\n# out accross the parameter space.\nnum_samples_plot = 4 # target_iter_mean\n\nplot(result_short.all_states[num_warmup_short:num_samples_plot + num_warmup_short, :, 0].flatten(), \n result_short.all_states[num_warmup_short:num_samples_plot + num_warmup_short, :, 1].flatten(), '.', alpha = 0.2)\n\ntitle('Short regime')\nshow()\nplot(result_short.all_states[num_warmup_short:100 + num_warmup_short, [10, 20, 100, 500, 1000], 1])\nshow()\n\n# REMARK: the mixing for the banana problem is slow. This is obvious if we\n# only plot the first few samples of each chain.\nnum_samples_plot = 4 # target_iter_mean\nplot(result_short.all_states[num_warmup_short:, :, 0].flatten(), \n result_short.all_states[num_warmup_short:, :, 1].flatten(), '.', alpha = 0.2)\ntitle('Short regime')\nshow()\nplot(result_short.all_states[:, [1, 200, 400, 600, 800, 1000], 1])\nshow()", "Let's compute $\\hat R$ as a function of iteration and pay attention to how quickly $\\hat R$ goes to 1 in both regimes.", "# NOTE: the warmup is not stored.\n# NOTE: compute rhat for the samples on the original space, since these are\n# the quantities of interest.\ndef compute_rhat(result_state, num_samples, num_warmup = 0):\n return tfp.mcmc.potential_scale_reduction(result_state[num_warmup:num_warmup + num_samples + 1],\n independent_chain_ndims = 1).T\n\n# TODO: do this without a for loop\n# WARNING: this cell takes a minute to run\n# TODO: use a single variable num_sampling, instead of num_sampling_long and\n# num_sampling_var.\nrhat_long = np.array([])\nrhat_short = np.array([])\nrange_iter = range(2, num_sampling_long, 10) # range(2, num_samples, 8)\n\n# NOTE: depending on the problem, it can be interesting to look at both.\n# However, to be consistent with earlier analysis, the warmup samples should\n# be discarded.\ndiscard_warmup = True\n\nfor i in range_iter:\n if (discard_warmup):\n discard_long = num_warmup_long\n discard_short = num_warmup_short\n else:\n discard_long = 0\n discard_short = 0\n rhat_long = np.append(rhat_long, \n compute_rhat(result_state_long, i, discard_long)[0, ])\n rhat_short = np.append(rhat_short,\n compute_rhat(result_state_short, i, discard_short)[0, ])\n", "Remark: the $\\hat R$ estimate can be quite noisy, especially when computed with a small number of samples. One manifestation of this is the fact that $\\hat R < 1$. In the German credit score model, $\\hat R$ is as low as 0.6!! When this is the case, $\\hat R$ will typically be large for other parameters. Hence, inspecting many parameters (presumably all of interest) can safeguard us against crying \"victory\" too early.\nThis type of noise can explain why the change in $\\hat R$ isn't always quite monotone, sometimes with an increase at first, and then the expected decrease.", "result_snip = result_state_long[num_warmup_long:num_warmup_long + 2]\ntfp.mcmc.potential_scale_reduction(result_snip, independent_chain_ndims = 1).T\n\n# Plot result\nfigure(figsize = [6, 6])\nsemilogy(np.array(range_iter), rhat_long - 1, label = '4 chains')\nsemilogy(np.array(range_iter), rhat_short - 1, label = '512 chains')\nlegend(loc = 'best')\nxlim([0, 500])\nylabel(\"Rhat - 1\")\nshow()", "(Banana example) As expected, $\\hat R$ decreases with the number of iterations per chain, although crucially not with the total number of samples! As one might suspect, the short regime produces a less noisy estimate of $\\hat R$. To be more precise, we expect $\\hat R$ to decrease with the effective sample size per chain. 
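Heuristically, the between-chain variance of the chain means scales like the posterior variance divided by the per-chain effective sample size, which suggests $\hat R \approx \sqrt{1 + 1 / \mathrm{ESS}_{\mathrm{chain}}}$ no matter how many chains we run. 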
Since the long regime benefits from a longer warmup, the effective sample size per iteration should be better, although it might not make a difference in this example.\nCrucially, $\hat R$ as a convergence diagnostic isn't sensitive to the fact that we are running many chains (although the estimator does become less noisy...).", "# Compare Rhat at the point where both methods have reached a comparable squared\n# error.\n# NOTE: not super reliable -- sometimes rhat is noisy and goes to 1 (or below)\n# before jumping back up...\nindex = np.where(range_iter > target_iter_mean)[0][0]\nprint(\"Rhat for short regime after hitting target precision:\", rhat_short[index])\nprint(\"Rhat for long regime after hitting target precision:\", rhat_long[len(rhat_long) - 1])\n", "Proposition: Concerned with how noisy $\hat R$ might be, let's use a bootstrap scheme to get a standard deviation on the estimator. The short regime should be amenable to this, since we can resample chains. Unfortunately, if we sample with replacement, we underestimate the between-chain variance, because some of the chains are identical. One idea is to randomly sample a subset of the chains without replacement and compute $\hat R$.\nThis will overestimate the uncertainty in our calculations, since we have reduced the sample size.", "n_bootstrap_samples = 64 # 64\nrhat_estimates = np.array([])\nn_sampling_iter = max(range_iter[index], 2) # max(target_iter_mean, 2) # range_iter[index]\n\nfor i in range(1, n_bootstrap_samples):\n  choose_samples_randomly = True\n  if (choose_samples_randomly):\n    bootstrap_sample = np.random.choice(np.array(range(1, num_chains_short + 1)),\n                                        n_bootstrap_samples, replace = False)\n                                        # num_chains_short // 16, replace = False)\n  else:\n    bootstrap_sample = np.array(range(1 + (i - 1) * n_bootstrap_samples, i * n_bootstrap_samples))\n  # print(bootstrap_sample)\n  # print(result_state_short[:, bootstrap_sample, :].shape)\n  rhat_estimates = np.append(rhat_estimates,\n                             compute_rhat(result_state_short[:, bootstrap_sample, :], n_sampling_iter, num_warmup_short)[0, ])\n\nprint(\"Mean rhat (short) = \", rhat_estimates.mean(), \"+/-\", rhat_estimates.std())\n", "Nested $\hat R$\nTo remedy the identified issue, we propose to pool chains together in the short regime, thereby building super-chains, and then checking that the super chains are mixing.\nWe index each sample by $n$ the iteration, $m$ the chain, and $k$ the cluster of chains, and write $\theta^{(n, m, k)}$. 
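Throughout, $n = 1, \dots, N$ indexes iterations, $m = 1, \dots, M$ the chains within a super chain, and $k = 1, \dots, K$ the super chains; replacing an index with a dot denotes averaging over it, so that, e.g., $\bar \theta^{(.mk)}$ is the mean of chain $m$ in super chain $k$. 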
The within-chain variance is estimated by\n$$\n s^2_{km} = \\frac{1}{N - 1} \\sum_{n = 1}^N \\left (\\theta^{(nmk)} - \\bar \\theta^{(.mk)} \\right)^2.\n$$\nNext the between-chain variance, or within super chain variance is\n\\begin{eqnarray}\n s^2_{k.} & = & \\frac{1}{M - 1} \\sum_{m = 1}^M \\left (\\bar \\theta^{(.mk)} - \\bar \\theta^{(..k)} \\right)^2,\n\\end{eqnarray}\nand the total variance for a super chain is\n\\begin{eqnarray}\n S^2_k & = & \\frac{1}{M - 1} \\sum_{m = 1}^M \\left (\\bar \\theta^{(.mk)} - \\bar \\theta^{(..k)} \\right)^2 + \\frac{1}{M (N - 1)} \\sum_{m = 1}^M \\sum_{n = 1}^N \\left (\\theta^{(nmk)} - \\bar \\theta^{(.mk)} \\right)^2 \\\n & = & s^2_{k.} + \\frac{1}{M} \\sum_{m = 1}^M s^2_{km}\n\\end{eqnarray}\nNotice that this calculation accounts for the fact the super-chain is made up of multiple chains.\nFinally the within-super-chain variance is estimated as\n$$\nW = \\frac{1}{K} \\sum_{k = 1}^K S^2_k.\n$$\nNow it remains to compute the between super-chain variance\n$$\nB = \\frac{1}{K - 1} \\sum_{k = 1}^K \\left (\\bar \\theta^{(..k)} - \\bar \\theta^{(...)} \\right)^2,\n$$\nyielding an estimate of the posterior variance\n$$\n \\widehat{\\mathrm{var}}^+(\\theta) = B + W,\n$$\nwhich very much looks like the posterior variance estimate used in the in the long regime, except that I've been a bit more consistent about making the estimator unbiased. We then compute\n$$\n \\hat R = \\sqrt{\\frac{\\widehat{\\mathrm{var}}^+(\\theta)}{W}}.\n$$\nRemark. The $\\theta$ can be replaced by the rank-normalized $z$ as presrcribed by Vehtari et al 2020.\nImplementation of nested-$\\hat R$ using TensorFlow.", "# Remark: eager execution is disabled and would have to be enabled at the\n# start of the program. I however suspect this would interfere with\n# TensorFlow probability.\ntf.executing_eagerly()\n\n# Follow procedure described in source code for potential scale reduction.\n# NOTE: some of the tf argument need to be adjusted (e.g. keepdims = False,\n# instead of True). 
Not quite sure why.\n# QUESTION: can these be accessed as internal functions of tf?\n# TODO: following Pavel's example, rewrite this without using tf.\n# TODO: add error message when the number of samples is less than 2.\n\n# REMARK: this function doesn't seem to work, returns NaN.\n# As a result, can only use _reduce_variance with biased = False.\ndef _axis_size(x, axis = None):\n \"\"\"Get number of elements of `x` in `axis`, as type `x.dtype`.\"\"\"\n if axis is None:\n return ps.cast(ps.size(x), x.dtype)\n return ps.cast(\n ps.reduce_prod(\n ps.gather(ps.shape(x), axis)), x.dtype)\n\ndef _reduce_variance(x, axis=None, biased=True, keepdims=False):\n with tf.name_scope('reduce_variance'):\n x = tf.convert_to_tensor(x, name='x')\n mean = tf.reduce_mean(x, axis=axis, keepdims=True)\n biased_var = tf.reduce_mean(\n tf.math.squared_difference(x, mean), axis=axis, keepdims=keepdims)\n if biased:\n return biased_var\n n = _axis_size(x, axis)\n return (n / (n - 1.)) * biased_var\n\ndef nested_rhat(result_state, num_super_chain):\n used_samples = result_state.shape[0]\n num_sub_chains = result_state.shape[1] // num_super_chains\n num_dimensions = result_state.shape[2]\n\n chain_states = result_state.reshape(used_samples, -1, num_sub_chains,\n num_dimensions)\n\n state = tf.convert_to_tensor(chain_states, name = 'state')\n mean_chain = tf.reduce_mean(state, axis = 0)\n mean_super_chain = tf.reduce_mean(state, axis = [0, 2])\n variance_chain = _reduce_variance(state, axis = 0, biased = False)\n variance_super_chain = _reduce_variance(mean_chain, axis = 1, biased = False) \\\n + tf.reduce_mean(variance_chain, axis = 1)\n\n W = tf.reduce_mean(variance_super_chain, axis = 0)\n B = _reduce_variance(mean_super_chain, axis = 0, biased = False)\n\n return tf.sqrt((W + B) / W)\n", "CASE 1 (sanity check): $\\hat R$ after a few iterations\nThe super chains are such that they have the same number of samples as the chains in the long regime. Because of the slow mixing, 4 iterations per chain is not enough to overcome the transient bias and the nested Rhat is high, even though each super chain has many iterations. Note we're looking at the first warmup iterations.", "# num_super_chains = 4\n# super_chain_size = num_chains_short // num_super_chains # 250\nused_samples = 4 # total_samples_long // super_chain_size # 4\nresult_state = result_short.all_states[0:used_samples, :, :]\n\nprint(\"short rhat: \", nested_rhat(result_state, num_super_chains))", "CASE 2: $\\hat R$ after \"enough\" iterations\nThe number of iterations in each chain corresponds to the number of samples required by the short regime to match the precision for the mean attained by the long regime after 1000 sampling iterations (meaning we've discarded the warmup iterations). 
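Note that each super chain now pools $M \times N$ draws -- here 128 chains contributing 2 draws each -- so the super-chain means are estimated far more precisely than any individual chain mean. 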
The diagnostic is quite happy, even though there are only two iterations per chain.", "result_state.shape\ntarget_iter_mean\n\nused_samples = max(target_iter_mean, 2)\nresult_state = result_short.all_states[num_warmup_short:num_warmup_short + used_samples, :, :]\n\nprint(\"short nested-rhat: \", nested_rhat(result_state, num_super_chains)[0])\nprint(\"short rhat: \", rhat_short[index])\nprint(\"long rhat: \", rhat_long[len(rhat_long) - 1])\n\nprint(range_iter)\n\n# Let's find out how quickly nested-rhat compared to traditional rhat goes down.\nnested_rhat_short = np.array([])\nfor i in range_iter:\n nested_rhat_short = np.append(nested_rhat_short, \n nested_rhat(result_short.all_states[num_warmup_short:num_warmup_short + i, :, :],\n num_super_chains).numpy()[0])\n\nfigure(figsize = [6, 6])\nsemilogy(np.array(range_iter), rhat_long - 1, label = '$\\hat R$, 4 chains')\nsemilogy(np.array(range_iter), rhat_short - 1, label = '$\\hat R$, 512 chains')\nsemilogy(np.array(range_iter), nested_rhat_short - 1, label = '$n \\hat R$, 512 chains')\nlegend(loc = 'best')\nxlim([0, 1000])\nylabel(\"Rhat - 1\")\nxlabel(\"Post-warmup sampling iterations\")\nshow()\n\n\nthreshold = 1.1\nindex_classic = np.where((rhat_short < threshold) & (rhat_short > 1.))\nif (len(index_classic[0]) > 0):\n print(\"Rhat =\", threshold, \"after\",range_iter[index_classic[0][0]], \"iterations.\")\nelse:\n print(\"Rhat doesn't hit the target threshold = \", threshold, \".\")\n\n\nindex_short = np.where((nested_rhat_short < threshold) & (nested_rhat_short > 1.))\nif (len(index_short[0]) > 0):\n print(\"Nested Rhat =\", threshold, \"after\", range_iter[index_short[0][0]], \"iterations.\")\nelse:\n print(\"Nested Rhat doesn't hit the target threshold = \", threshold, \".\")\n\nthreshold = 1.01\nindex_classic = np.where((rhat_short < threshold) & (rhat_short > 1.))\nif (len(index_classic[0]) > 0):\n print(\"Rhat =\", threshold, \"after\",range_iter[index_classic[0][0]], \"iterations.\")\nelse:\n print(\"Rhat doesn't hit the target threshold = \", threshold, \".\")\n\nindex_short = np.where((nested_rhat_short < threshold) & (nested_rhat_short > 1.))\nif (len(index_short[0]) > 0):\n print(\"Nested Rhat =\", threshold, \"after\", range_iter[index_short[0][0]], \"iterations.\")\nelse:\n print(\"Nested Rhat doesn't hit the target threshold = \", threshold, \".\")", "Effective sample size\nWe'll now compute the effective sample size. We might in fact expect the classic diagnostic to work relatively well.", "ess_long = np.sum(tfp.mcmc.effective_sample_size(\n result_state_long[num_warmup_long:, : , :]), axis = 0)\n\ness_short = np.sum(tfp.mcmc.effective_sample_size(\n result_state_short[num_warmup_short:, :, :]), axis = 0)\n\ness_short_target = np.sum(tfp.mcmc.effective_sample_size(\n result_state_short[num_warmup_short:num_warmup_short + 3, :, :]), axis = 0)\n\n# NOTE: it seems we need at least 3 samples to compute the ess estimate...\n\nprint(\"Ess long (discarding warmup): \", ess_long[0])\nprint(\"Ess short (discarding warmup): \", ess_short[0])\nprint(\"Ess short (when hitting target precision): \", ess_short_target[0])", "Adaptive warmup length\nPlaying around a little, we find that once the algorithm is properly warmed up, the short regime can reach good precision in very few iterations. 
The primary limitation hence becomes the warm up time.\nProper warmup means (i) we've overcomed the transient bias and have already moved across the \"typical set\" -- it isn't enough to be in the \"typical set\" if where we are is determined by our starting point -- and (ii) our algorithm tuned well-enough such that it can explore every part of the parameter space in a reasonable time and has a relatively short relaxation time. The first item is essential to both sampling regimes, though intuitively, it seems we might be able to compromise on the second item in the short regime.\nIn many cases, the number of warmup samples is determined ahead of time when calling the algorithm. Ideally we'd stop the warmup once we have suitable tuning parameters and then move to the sampling phase. Zhang et al (2020) propose to run warmups over short windows of $w = 100$ iterations and compute $\\hat R$ and the ESS at the end of each of window to check if we should continue warming up. Once both diagnostic estimates are passed a certain threshold, the warmup ends and the sampling begins. In theory, this scheme can be adapted to the short regime by replacing $\\hat R$ with the nested $\\hat R$.\nMy guess is that by using nested $\\hat R$ and the classic ESS (computed using many independent chains) we'll implicitly compromise on item (ii) -- so a priori, the described warmup method requires little adjustment.", "# Define function to extract the adapted parameters\n# (Follow what's done in the inference gym tutorial)\n# REMARK: if we pass only initial step size, only one step size is adapted for \n# the whole transition kernel (as opposed to one step size per chain).\n# REMARK: we won't use this scheme. Instead, we'll pass the whole transition.\nfrom tensorflow_probability.python.internal.unnest import get_innermost\n\n\n# NOTE: presumable we're not going to use this, and instead get the full\n# kernel result back.\ndef trace_fn(_, pkr):\n return (\n get_innermost(pkr, 'step_size'),\n get_innermost(pkr, 'num_leapfrog_steps')\n # get_innermost(pkr, 'max_trajectory_length')\n )\n\n\ndef forge_chain (target_rhat, warmup_window_size, kernel_cold, initial_state,\n max_num_steps, seed, monitor = False,\n use_nested_rhat = True, use_log_joint = False,\n num_super_chains = 4):\n # store certain variables\n rhat_forge = np.array([])\n warmup_is_acceptable = False\n store_results = []\n\n warmup_iteration = 0\n\n current_state = initial_state\n final_kernel_args = None\n\n while (not warmup_is_acceptable and warmup_iteration <= max_num_steps):\n warmup_iteration += 1\n\n # 1) Run MCMC on short warmup window\n result_cold, target_log_prob, final_kernel_args = tfp.mcmc.sample_chain(\n num_results = warmup_window_size,\n current_state = current_state,\n kernel = kernel_cold,\n previous_kernel_results = final_kernel_args,\n seed = kernel_seed,\n trace_fn = lambda _, pkr: unnest.get_innermost(pkr, 'target_log_prob'),\n return_final_kernel_results = True)\n\n if (warmup_iteration == 1) : \n store_results = result_cold\n else : \n store_results = np.append(store_results, result_cold, axis = 0)\n\n current_state = result_cold[-1]\n\n # 2) Check if warmup is acceptable\n if (used_nested_rhat):\n if (use_log_joint):\n shape_lp = target_log_prob.shape\n rhat_warmup = nested_rhat(target_log_prob.reshape(shape_lp[0], shape_lp[1], 1),\n num_super_chains)\n else:\n rhat_warmup = max(nested_rhat(result_cold, num_super_chains))\n else:\n if (use_log_joint):\n rhat_warmup = tfp.mcmc.potential_scale_reduction(target_log_prob)\n else:\n 
rhat_warmup = max(tfp.mcmc.potential_scale_reduction(result_cold))\n # ess_warmup = np.sum(tfp.mcmc.effective_sample_size(result_cold), axis = 0)\n\n # print(rhat_warmup)\n\n if (rhat_warmup < target_rhat): warmup_is_acceptable = True\n # if (max(rhat_warmup) < 1.01 and min(ess_warmup) > 100): warmup_is_acceptable = True\n\n if (monitor):\n print(\"step:\", final_kernel_args.step)\n # print(\"max rhat:\", max(rhat_warmup))\n # print(\"min ess warmup:\" , min(ess_warmup))\n # print(\"step size:\", step_size)\n # print(\"number of leapfrog steps:\", num_leapfrog_steps)\n \n save_values = True\n if (save_values):\n rhat_forge = np.append(rhat_forge, rhat_warmup)\n # While loop ends\n\n return store_results, final_kernel_args, rhat_forge\n\n\n# Set up adaptive warmup scheme\nwarmup_window_size = 5\ntarget_rhat = 1.01\ntarget_ess = 100\nmax_num_steps = 1000 // warmup_window_size\ncurrent_state = initial_state\nnum_leapfrog_steps = 1\nwarmup_iteration = 0\nkernel_seed = random.PRNGKey(1957)\n\nused_nested_rhat = True\n\n# define kernel using most recent step size\nkernel_cold = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)\nkernel_cold = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_cold, warmup_window_size)\nkernel_cold = tfp.mcmc.DualAveragingStepSizeAdaptation(\n kernel_cold, warmup_window_size, target_accept_prob = 0.75,\n reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)\n\nkernel_warm = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)\nkernel_warm = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm, 0)\nkernel_warm = tfp.mcmc.DualAveragingStepSizeAdaptation(\n kernel_warm, warmup_window_size, target_accept_prob = 0.75,\n reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)\n\n\nresult_cold, final_kernel_args, rhat_forge = \\\n forge_chain(target_rhat = target_rhat,\n warmup_window_size = warmup_window_size,\n kernel_cold = kernel_cold,\n initial_state = initial_state,\n max_num_steps = max_num_steps,\n seed = random.PRNGKey(1954), monitor = False,\n use_nested_rhat = True,\n use_log_joint = True)\n\n\nprint(\"iterations:\", len(rhat_forge) * warmup_window_size)\nprint(rhat_forge)\nprint(target_rhat)\n# print(tfp.mcmc.potential_scale_reduction(result_cold[-50]))\n# print(nested_rhat(result_short.all_states[num_warmup_short:num_warmup_short + 5, :, :], num_super_chains))\n\n# Run sampling iterations\n# def trace_fn(_, pkr):\n# return (\n# get_innermost(pkr, 'unnormalized_log_prob'))\n\ncurrent_state = result_cold[-1]\n\nresult_warm, target_log_prob, final_kernel_args_warm = tfp.mcmc.sample_chain(\n num_results = 5,\n current_state = current_state,\n kernel = kernel_warm, # kernel_cold\n previous_kernel_results = final_kernel_args,\n seed = random.PRNGKey(100001),\n return_final_kernel_results = True,\n trace_fn = lambda _, pkr: unnest.get_innermost(pkr, 'target_log_prob'))\n\nprint(tfp.mcmc.potential_scale_reduction(target_log_prob))\n# print(nested_rhat(target_log_prob, num_super_chains))\nshape_lp = target_log_prob.shape\nlp__ = target_log_prob.reshape(shape_lp[0], shape_lp[1], 1)\nlp__.shape\nprint(nested_rhat(lp__, num_super_chains))\n\nprint(tfp.mcmc.potential_scale_reduction(result_warm))\nnested_rhat(result_warm, num_super_chain = num_super_chains)\n\n# options: result_cold[result_cold.shape[0] - 30:], result_state_short, result_warm, store_results\nstates_to_read = result_warm\n\nprint(\"mean estimate:\", np.mean(states_to_read.mean(0), axis = 0))\nprint(\"variance estimate:\", 
np.mean(states_to_read.var(1), axis = 0))\nprint(nested_rhat(states_to_read, num_super_chain = 4))\nprint(tfp.mcmc.potential_scale_reduction(states_to_read))\nprint(mean_est)\nprint(var_est)\n\n# Check output of the last run\n\nplot(result_warm[:, :, 0].flatten(), \n     result_warm[:, :, 1].flatten(), '.', alpha = 0.2)\ntitle('Long regime')\nshow()\nplot(result_warm[:, :30, 1])\nshow()\n\n# Compare to output we get with uninterrupted run.\n# (Examine the iterations before the warmup ends)\nchain_state_short = result_short.all_states[num_warmup_short - 10:num_warmup_short - 10 + warmup_window_size, :, :]\nplot(chain_state_short[:, :, 0].flatten(),\n     chain_state_short[:, :, 1].flatten(), '.', alpha = 0.2)\nshow()\n\nplot(chain_state_short[:, :30, 1])\nshow()", "Experiment with window size\nThe code below returns the length of the warmup phase, simulated across several seeds. This can give us a sense of how long the warmup phase is on average for different seeds. Be mindful that when using too many seeds with a lot of chains, the GPU can run out of memory. The motivation is to check how stable the warmup strategy is when using different window sizes.", "target_rhat = 1.01\nwarmup_window_size = 30\nmax_num_steps = 1000 // warmup_window_size\n\niteration_after_warmup = np.array([])\n\nfor seed in jax.random.split(jax.random.PRNGKey(0), 10):\n  initial_state = initialize((num_super_chains,), key = seed)\n  initial_state = np.repeat(initial_state, num_chains_short // num_super_chains,\n                            axis = 0)\n\n  result_cold, final_kernel_args, rhat_forge = \\\n    forge_chain(target_rhat = target_rhat,\n                warmup_window_size = warmup_window_size,\n                kernel_cold = kernel_cold,\n                initial_state = initial_state,\n                max_num_steps = max_num_steps,\n                seed = seed, monitor = False,\n                use_nested_rhat = True,\n                use_log_joint = False)\n  \n  iteration_after_warmup = np.append(iteration_after_warmup,\n                                     len(rhat_forge) * warmup_window_size)\n\n\n# print(iteration_after_warmup)\nprint(rhat_forge)\nprint(iteration_after_warmup.mean())\nprint(iteration_after_warmup.std())", "Results for the Banana problem\nApplying the code above for the banana problem with\ntarget_rhat = 1.01\nuse_nested_rhat = True\nuse_log_joint = False\nwe estimate the length of the warmup phase for different window sizes:\nw = 10, length = 62 +/- 16.12\nw = 15, length = 72 +/- 17.41\nw = 20, length = 86 +/- 18\nw = 30, length = 90 +/- 13.75\nw = 60, length = 120 +/- 0.0\nTaking into consideration the different granularities, we find the results to be fairly consistent with one another.\nLet's go back to the original case where we use $\hat R$ and ESS as our stopping criterion. Given the approximate one-to-one map between $\hat R$ and ESS per chain, the two criteria are somewhat redundant, so I'll focus on $\hat R$. When picking the window size, we must contend with the following trade-off:\n* if the window size is too short, we're unlikely to produce a large enough ESS per chain to hit the target $\hat R$, and this could mean a never-ending warmup phase, or one that only stops once we exceed a maximum number of steps.\n* if the window size is too large, we may jump past the optimal point. It's also worth noting that the first window is unlikely to yield satisfactory results, because the initial estimates are overdispersed and biased. \nThe first item is largely mitigated by using nested-$\hat R$, since we're then less dependent on the ESS per chain. 
The second item could be addressed by using a path-finder to initialize the chains and/or by discarding some of the early iterations in a window when computing the diagnostics. \nOne final remark is that using $\\hat R$ on the log joint distribution yielded somewhat optimistic results. As Pavel puts it: \"log_joint is a pretty bad metric. Generally, for convergence, you prefer to measure the least constrainted directions, and log_joint is typically not that.\"\nDraft Code", " \nresult_cold, _, final_kernel_args = tfp.mcmc.sample_chain(\n num_results = 100,\n current_state = initial_state,\n kernel = kernel_cold,\n previous_kernel_results = None,\n seed = random.PRNGKey(1954),\n return_final_kernel_results = True)\n\nresult_warm, _, final_kernel_args = tfp.mcmc.sample_chain(\n num_results = 50,\n current_state = result_cold[-1],\n kernel = kernel_warm,\n previous_kernel_results = final_kernel_args,\n seed = random.PRNGKey(1954),\n return_final_kernel_results = True)\n\n\nnested_rhat(result_warm[1:3], 4)\n\nwarmup_window_size = 200\ncurrent_state = initial_state\n\nkernel_warm = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)\nkernel_warm = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm, warmup_window_size)\nkernel_warm = tfp.mcmc.DualAveragingStepSizeAdaptation(\n kernel_warm, warmup_window_size, target_accept_prob = 0.75,\n reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)\n\n# result_warm, (step_size_saved, num_leapfrog_steps_saved) = tfp.mcmc.sample_chain(\n# warmup_window_size, current_state, kernel = kernel_warm,\n# seed = random.PRNGKey(1954), trace_fn = trace_fn)\n\nresult_warm, kernel_args, final_kernel_args = tfp.mcmc.sample_chain(\n warmup_window_size, current_state, kernel = kernel_warm,\n seed = random.PRNGKey(1954), return_final_kernel_results = True)\n\n\n# step_size = step_size_saved[warmup_window_size - 1]\n# current_state = result_warm[warmup_window_size - 1, :, :]\n# num_leapfrog_steps = num_leapfrog_steps_saved[warmup_window_size - 1]\n\ntfp.mcmc.potential_scale_reduction(result_warm[:, :, :])\n\n# kernel_warm2 = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, step_size, num_leapfrog_steps)\n# kernel_warm2 = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm2, warmup_window_size)\n# kernel_warm2 = tfp.mcmc.DualAveragingStepSizeAdaptation(\n# kernel_warm2, warmup_window_size, target_accept_prob = 0.75,\n# reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)\n\n# result_warm2, (step_size_saved) = tfp.mcmc.sample_chain(\n# warmup_window_size, current_state, kernel = kernel_warm2,\n# seed = random.PRNGKey(1954), trace_fn = trace_fn)\n\nresult_warm2 = tfp.mcmc.sample_chain(\n num_results = warmup_window_size, \n kernel = kernel_warm,\n current_state = current_state,\n previous_kernel_results = final_kernel_args,\n seed = random.PRNGKey(1953)\n)\n\ntfp.mcmc.potential_scale_reduction(result_warm2.all_states[:, :, :])\n\nprint(problem_name)\n\nprint(max(rhat_warmup))\nprint(min(ess_warmup))\n# print(len(step_size))\n# print(step_size[0][warmup_window_size - 1])\nmax(tfp.mcmc.potential_scale_reduction(result_warm))\n\n# Define kernel for warmup windows (should be the same in the long and short regime)\nwarmup_window_size = 10\n\nif (problem_name == 'Bananas' or problem_name == 'GermanCredit'):\n kernel_warm_init = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)\n kernel_warm_init = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm_init, warmup_window_size)\n 
kernel_warm_init = tfp.mcmc.DualAveragingStepSizeAdaptation(\n kernel_warm_init, warmup_window_size, target_accept_prob = 0.75, #0.75,\n reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)\n\n\nresult_warm, (step_size, max_trajectory_length) = tfp.mcmc.sample_chain(\n warmup_window_size, initial_state, kernel = kernel_warm_init, seed = random.PRNGKey(1954),\n trace_fn = trace_fn)\n\nprint(step_size[len(step_size) - 1])\nprint(max_trajectory_length[len(max_trajectory_length) - 1])\nprint(initial_state.shape)\nprint(result_warm[warmup_window_size, :, :])\n\n# To run next window, define a new transition kernel\n# REMARK: the maximum trajectory length isn't, if my understanding is correct,\n# a tuning parameter; rather something that get's calculated at each step. So\n# there's no need to pass it on.\nkernel_warm = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, step_size[len(step_size) - 1], 1)\nkernel_warm = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm, warmup_window_size)\nkernel_warm = tfp.mcmc.DualAveragingStepSizeAdaptation(\n kernel_warm_init, warmup_window_size, target_accept_prob = 0.75,\n reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)\n\nresult_warm2, (step_size, max_trajectory_length) = tfp.mcmc.sample_chain(\n warmup_window_size, initial_state, kernel = kernel_warm, seed = random.PRNGKey(1954),\n trace_fn = trace_fn)\n\nprint(result_warm.shape)\nprint(step_size.shape)\nprint(max_trajectory_length.shape) \n\nstep_size[len(step_size) - 1]\n\n# nested_rhat(result_short.all_states, num_super_chains)\n\n## Sandbox\n\n# Pool chains into super chains\n# num_super_chains = 4 # num_chains_short // num_chains_long\n# num_sub_chains = num_chains_short // num_super_chains\n# used_samples = num_samples # 5 # 2 * target_iter_mean # target_iter_mean\n# result_state = result_short.all_states[0:used_samples, :, :]\n# chain_states = result_state.reshape(used_samples, num_sub_chains,\n# -1, num_dimensions)\n\n# independent_chains_ndims = 1\n# sample_ndims = 1\n# sample_axis = tf.range(0, sample_ndims)\n# chain_axis \n\n # used_samples = result_state.shape[0]\n # num_sub_chains = result_state.shape[1] // num_super_chains\n # num_dimensions = result_state.shape[2]\n\n # chain_states = result_state.reshape(used_samples, -1, num_sub_chains,\n # num_dimensions)\n\n # state = tf.convert_to_tensor(chain_states, name = 'state')\n\n # mean_chain = tf.reduce_mean(state, axis = 0)\n # mean_super_chain = tf.reduce_mean(state, axis = [0, 2])\n # variance_chain = _reduce_variance(state, axis = 0, biased = False)\n # variance_super_chain = _reduce_variance(mean_chain, axis = 1, biased = False) \\\n # + tf.reduce_mean(variance_chain, axis = 1)\n\n # W = tf.reduce_mean(variance_super_chain, axis = 0)\n # B = _reduce_variance(mean_super_chain, axis = 0, biased = False)\n\n # rhat = tf.sqrt((W + B) / W)\n\n # print(rhat)\n\n # print(mean_chain.shape)\n # print(mean_super_chain.shape)\n # print(\"mean_super_chain: \", mean_super_chain)\n # print(variance_chain.shape)\n # print(variance_super_chain.shape)\n\n# print(state.shape) # (5, 250, 4, 2)\n# print(result_state.shape) # (5, 1000, 2)\n\n# # 'manually' compute the mean of each super chain.\n# print(np.mean(result_state[:, 0:250, 0]))\n# print(np.mean(result_state[:, 250:500, 0]))\n# print(np.mean(result_state[:, 500:750, 0]))\n# print(np.mean(result_state[:, 750:1000, 0]))\n\n# # compute the means after reshaping the results. 
Get agreement!\n# print(np.mean(chain_states[:, 0, :, 0]))\n# print(np.mean(chain_states[:, 1, :, 0]))\n# print(np.mean(chain_states[:, 2, :, 0]))\n# print(np.mean(chain_states[:, 3, :, 0]))\n\n# print(result_state[:, 250, 0])\n# print(chain_states[:, 0, 1, 0])\n\n# simple_chain = np.array([[0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5]])\n# simple_chain.shape # (4, 6)\n\n# chain_reshape = simple_chain.reshape(4, 2, -1)\n# chain_reshape.shape # (4, 2, 3)\n# np.mean(chain_reshape, axis = 0) # returns mean for each chain\n# np.mean(chain_reshape[:, 0, :]) # 1\n# np.mean(chain_reshape[:, 1, :]) # 4\n\n# np.mean(simple_chain[:, 0:2]) # 1\n# np.mean(simple_chain[:, 3:6]) # 4 -- but it seems index should be 3:5\n# # simple_chain[:, 3:6]\n\n# ## Sandbox\n\n# tf.compat.v1.disable_eager_execution() # need to disable eager in TF2.x\n\n# state = result_short.all_states[1:range_iter[index], :, :]\n# n = state.shape[0]\n# m = state.shape[1]\n\n# sample_ndims = 1\n# independent_chains_ndims = 1\n# sample_axis = tf.range(0, sample_ndims) # CHECK\n# chain_axis = 0\n# sample_and_chain_axis = tf.range(0, sample_ndims + independent_chains_ndims) # CHECK\n\n\n# with tf.name_scope('potential_scale_reduction_single_state'):\n# state = tf.convert_to_tensor(state, name = 'state')\n\n# # CHECK: do we need to define a tf scope?\n# n_samples = tf.compat.dimension_value(state.shape[0])\n\n# # n = _axis_size(state, sample_axis)\n# # m = _axis_size(state, chain_axis)\n\n# # NOTE: These lines prompt the error message once the session is run.\n# # x = tf.reduce_mean(state, axis=sample_axis, keepdims=True)\n# # x_tf = tf.convert_to_tensor(x, name = 'x')\n# # n_tf = _axis_size(x_tf)\n\n# b_div_n = _reduce_variance(\n# tf.reduce_mean(state, axis = 0, keepdims = False),\n# sample_and_chain_axis, # sample and chain axis\n# biased = False\n# )\n\n# w = tf.reduce_mean(\n# _reduce_variance(state, sample_axis, keepdims = False, \n# biased = False),\n# axis = sample_and_chain_axis\n# )\n\n# # TODO: work out n and m from the number of chains being passed.\n# # n = target_iter_mean\n# # m = num_chains\n# sigma_2_plus = ((n - 1) / n) * w + b_div_n\n# rhat = ((m + 1.) / m) * sigma_2_plus / w - (n - 1.) / (m * n)\n\n\n# # Launch the graph in a session. (TensorFlow uses differed action,\n# # so need to explicitly request evaluation)\n# sess = tf.compat.v1.Session()\n\n# print(sess.run(rhat))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/machine_learning_in_the_enterprise/solutions/sdk-custom-image-classification-batch.ipynb
apache-2.0
[ "Vertex AI Custom Image Classification Model for Batch Prediction\nOverview\nIn this notebook, you learn how to use the Vertex SDK for Python to train and deploy a custom image classification model for batch prediction.\nLearning Objective\n\nCreate a Vertex AI custom job for training a model.\nTrain a TensorFlow model.\nMake a batch prediction.\nClean up resources.\n\nIntroduction\nIn this notebook, you will create a custom-trained model from a Python script in a Docker container using the Vertex SDK for Python, and then do a prediction on the deployed model by sending data. Alternatively, you can create custom-trained models using gcloud command-line tool, or online using the Cloud Console.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. \nMake sure to enable the Vertex AI API and Compute Engine API.\nInstallation\nInstall the latest (preview) version of Vertex SDK for Python.", "# Setup your dependencies\nimport os\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# Google Cloud Notebook requires dependencies to be installed with '--user'\nUSER_FLAG = \"\"\nif IS_GOOGLE_CLOUD_NOTEBOOK:\n USER_FLAG = \"--user\"\n\n# Upgrade the specified package to the newest available version\n! pip install {USER_FLAG} --upgrade google-cloud-aiplatform", "Install the latest GA version of google-cloud-storage library as well.", "# Upgrade the specified package to the newest available version\n! pip install {USER_FLAG} --upgrade google-cloud-storage", "Install the pillow library for loading images.", "# Upgrade the specified package to the newest available version\n! pip install {USER_FLAG} --upgrade pillow", "Install the numpy library for manipulation of image data.", "# Upgrade the specified package to the newest available version\n! pip install {USER_FLAG} --upgrade numpy", "Please ignore the incompatible errors.\nRestart the kernel\nOnce you've installed everything, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Set your project ID\nIf you don't know your project ID, you may be able to get your project ID using gcloud.", "import os\n\nPROJECT_ID = \"\"\n\nif not os.getenv(\"IS_TESTING\"):\n # Get your Google Cloud project ID from gcloud\n shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID: \", PROJECT_ID)", "Otherwise, set your project ID here.", "if PROJECT_ID == \"\" or PROJECT_ID is None:\n PROJECT_ID = \"qwiklabs-gcp-00-f25b80479c89\" # @param {type:\"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. 
To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.", "# Import necessary libraries\nfrom datetime import datetime\n\n# Use a timestamp to ensure unique resources\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you submit a training job using the Cloud SDK, you upload a Python package\ncontaining your training code to a Cloud Storage bucket. Vertex AI runs\nthe code from this package. In this tutorial, Vertex AI also saves the\ntrained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model resources.\nSet the name of your Cloud Storage bucket below. It must be unique across all\nCloud Storage buckets.\nYou may also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Make sure to choose a region where Vertex AI services are\navailable. You may\nnot use a Multi-Regional Storage bucket for training with Vertex AI.", "# Fill in your bucket name and region\nBUCKET_NAME = \"gs://qwiklabs-gcp-00-f25b80479c89\" # @param {type:\"string\"}\nREGION = \"us-central1\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://qwiklabs-gcp-00-f25b80479c89\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al $BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport Vertex SDK for Python\nImport the Vertex SDK for Python into your Python environment and initialize it.", "# Import necessary libraries\nimport os\nimport sys\n\nfrom google.cloud import aiplatform\nfrom google.cloud.aiplatform import gapic as aip\n\naiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)", "Set hardware accelerators\nYou can set hardware accelerators for both training and prediction.\nSet the variables TRAIN_CPU/TRAIN_NCPU and DEPLOY_CPU/DEPLOY_NCPU to use a container image supporting a CPU and the number of CPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:\n(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)\n\nSee the locations where accelerators are available.\nOtherwise specify (None, None) to use a container image to run on a CPU.\nNote: TensorFlow releases earlier than 2.3 for GPU support fail to load the custom model in this tutorial. This issue is caused by static graph operations that are generated in the serving function. This is a known issue, which is fixed in TensorFlow 2.3. 
If you encounter this issue with your own custom models, use a container image for TensorFlow 2.3 or later with GPU support.\nFor this lab we will use a container image to run on a CPU.", "TRAIN_CPU, TRAIN_NCPU = (None, None)\n\nDEPLOY_CPU, DEPLOY_NCPU = (None, None)", "Set pre-built containers\nVertex AI provides pre-built containers to run training and prediction.\nFor the latest list, see Pre-built containers for training and Pre-built containers for prediction", "TRAIN_VERSION = \"tf-cpu.2-1\"\nDEPLOY_VERSION = \"tf2-cpu.2-1\"\n\nTRAIN_IMAGE = \"gcr.io/cloud-aiplatform/training/{}:latest\".format(TRAIN_VERSION)\nDEPLOY_IMAGE = \"gcr.io/cloud-aiplatform/prediction/{}:latest\".format(DEPLOY_VERSION)\n\nprint(\"Training:\", TRAIN_IMAGE, TRAIN_CPU, TRAIN_NCPU)\nprint(\"Deployment:\", DEPLOY_IMAGE, DEPLOY_CPU, DEPLOY_NCPU)", "Set machine types\nNext, set the machine types to use for training and prediction.\n\nSet the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction.\nmachine type\nn1-standard: 3.75GB of memory per vCPU\nn1-highmem: 6.5GB of memory per vCPU\nn1-highcpu: 0.9 GB of memory per vCPU\n\n\nvCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]\n\nNote: The following is not supported for training:\n\nstandard: 2 vCPUs\nhighcpu: 2, 4 and 8 vCPUs\n\nNote: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.", "# Set the machine type\nMACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nTRAIN_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Train machine type\", TRAIN_COMPUTE)\n\nMACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nDEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Deploy machine type\", DEPLOY_COMPUTE)", "Tutorial\nNow you are ready to start creating your own custom-trained model with CIFAR10.\nTrain a model\nThere are two ways you can train a custom model using a container image:\n\n\nUse a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.\n\n\nUse your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.\n\n\nDefine the command args for the training script\nPrepare the command-line arguments to pass to your training script.\n- args: The command line arguments to pass to the corresponding Python module. In this example, they will be:\n - \"--epochs=\" + EPOCHS: The number of epochs for training.\n - \"--steps=\" + STEPS: The number of steps (batches) per epoch.\n - \"--distribute=\" + TRAIN_STRATEGY\" : The training distribution strategy to use for single or distributed training.\n - \"single\": single device.\n - \"mirror\": all GPU devices on a single compute instance.\n - \"multi\": all GPU devices on all compute instances.", "# Define the command arguments for the training script\nJOB_NAME = \"custom_job_\" + TIMESTAMP\nMODEL_DIR = \"{}/{}\".format(BUCKET_NAME, JOB_NAME)\n\nif not TRAIN_NCPU or TRAIN_NCPU < 2:\n TRAIN_STRATEGY = \"single\"\nelse:\n TRAIN_STRATEGY = \"mirror\"\n\nEPOCHS = 20\nSTEPS = 100\n\nCMDARGS = [\n \"--epochs=\" + str(EPOCHS),\n \"--steps=\" + str(STEPS),\n \"--distribute=\" + TRAIN_STRATEGY,\n]", "Training script\nIn the next cell, you will write the contents of the training script, task.py. In summary:\n\nGet the directory where to save the model artifacts from the environment variable AIP_MODEL_DIR. 
This variable is set by the training service.\nLoads CIFAR10 dataset from TF Datasets (tfds).\nBuilds a model using TF.Keras model API.\nCompiles the model (compile()).\nSets a training distribution strategy according to the argument args.distribute.\nTrains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps\nSaves the trained model (save(MODEL_DIR)) to the specified model directory.", "%%writefile task.py\n# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10\n\nimport tensorflow_datasets as tfds\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\nimport argparse\nimport os\nimport sys\ntfds.disable_progress_bar()\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--lr', dest='lr',\n default=0.01, type=float,\n help='Learning rate.')\nparser.add_argument('--epochs', dest='epochs',\n default=10, type=int,\n help='Number of epochs.')\nparser.add_argument('--steps', dest='steps',\n default=200, type=int,\n help='Number of steps per epoch.')\nparser.add_argument('--distribute', dest='distribute', type=str, default='single',\n help='distributed training strategy')\nargs = parser.parse_args()\n\nprint('Python Version = {}'.format(sys.version))\nprint('TensorFlow Version = {}'.format(tf.__version__))\nprint('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))\nprint('DEVICES', device_lib.list_local_devices())\n\n# Single Machine, single compute device\nif args.distribute == 'single':\n if tf.test.is_gpu_available():\n strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n else:\n strategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")\n# Single Machine, multiple compute device\nelif args.distribute == 'mirror':\n strategy = tf.distribute.MirroredStrategy()\n# Multiple Machine, multiple compute device\nelif args.distribute == 'multi':\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n\n# Multi-worker configuration\nprint('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))\n\n# Preparing dataset\nBUFFER_SIZE = 10000\nBATCH_SIZE = 64\n\ndef make_datasets_unbatched():\n # Scaling CIFAR10 data from (0, 255] to (0., 1.]\n def scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255.0\n return image, label\n\n datasets, info = tfds.load(name='cifar10',\n with_info=True,\n as_supervised=True)\n return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()\n\n\n# Build the Keras model\ndef build_and_compile_cnn_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Conv2D(32, 3, activation='relu'),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n model.compile(\n loss=tf.keras.losses.sparse_categorical_crossentropy,\n optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),\n metrics=['accuracy'])\n return model\n\n# Train the model\nNUM_WORKERS = strategy.num_replicas_in_sync\n# Here the batch size scales up by number of workers since\n# `tf.data.Dataset.batch` expects the global batch size.\nGLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS\nMODEL_DIR = os.getenv(\"AIP_MODEL_DIR\")\n\ntrain_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)\n\nwith strategy.scope():\n # Creation of dataset, and model building/compiling need to be within\n # `strategy.scope()`.\n model = build_and_compile_cnn_model()\n\nmodel.fit(x=train_dataset, 
epochs=args.epochs, steps_per_epoch=args.steps)\nmodel.save(MODEL_DIR)", "Train the model\nDefine your custom training job on Vertex AI.\nUse the CustomTrainingJob class to define the job, which takes the following parameters:\n\ndisplay_name: The user-defined name of this training pipeline.\nscript_path: The local path to the training script.\ncontainer_uri: The URI of the training container image.\nrequirements: The list of Python package dependencies of the script.\nmodel_serving_container_image_uri: The URI of a container that can serve predictions for your model — either a prebuilt container or a custom container.\n\nUse the run function to start training, which takes the following parameters:\n\nargs: The command line arguments to be passed to the Python script.\nreplica_count: The number of worker replicas.\nmodel_display_name: The display name of the Model if the script produces a managed Model.\nmachine_type: The type of machine to use for training.\naccelerator_type: The hardware accelerator type.\naccelerator_count: The number of accelerators to attach to a worker replica.\n\nThe run function creates a training pipeline that trains and creates a Model object. After the training pipeline completes, the run function returns the Model object.", "# TODO\n# Define your custom training job and use the run function to start the training\njob = aiplatform.CustomTrainingJob(\n display_name=JOB_NAME,\n script_path=\"task.py\",\n container_uri=TRAIN_IMAGE,\n requirements=[\"tensorflow_datasets==1.3.0\"],\n model_serving_container_image_uri=DEPLOY_IMAGE,\n)\n\nMODEL_DISPLAY_NAME = \"cifar10-\" + TIMESTAMP\n\n# TODO\n# Start the training\nif TRAIN_CPU:\n model = job.run(\n model_display_name=MODEL_DISPLAY_NAME,\n args=CMDARGS,\n replica_count=1,\n machine_type=TRAIN_COMPUTE,\n accelerator_type=TRAIN_CPU.name,\n accelerator_count=TRAIN_NCPU,\n )\nelse:\n model = job.run(\n model_display_name=MODEL_DISPLAY_NAME,\n args=CMDARGS,\n replica_count=1,\n machine_type=TRAIN_COMPUTE,\n accelerator_count=0,\n )", "Make a batch prediction request\nSend a batch prediction request to your deployed model.\nGet test data\nDownload images from the CIFAR dataset and preprocess them.\nDownload the test images\nDownload the provided set of images from the CIFAR dataset:", "# Download the images\n! gsutil -m cp -r gs://cloud-samples-data/ai-platform-unified/cifar_test_images .", "Preprocess the images\nBefore you can run the data through the endpoint, you need to preprocess it to match the format that your custom model defined in task.py expects.\nx_test:\nNormalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single byte integer pixel with a 32-bit floating point number between 0 and 1.\ny_test:\nYou can extract the labels from the image filenames. 
Each image's filename format is \"image_{LABEL}_{IMAGE_NUMBER}.jpg\"", "import numpy as np\nfrom PIL import Image\n\n# Load image data\nIMAGE_DIRECTORY = \"cifar_test_images\"\n\nimage_files = [file for file in os.listdir(IMAGE_DIRECTORY) if file.endswith(\".jpg\")]\n\n# Decode JPEG images into numpy arrays\nimage_data = [\n np.asarray(Image.open(os.path.join(IMAGE_DIRECTORY, file))) for file in image_files\n]\n\n# Scale and convert to expected format\nx_test = [(image / 255.0).astype(np.float32).tolist() for image in image_data]\n\n# Extract labels from image name\ny_test = [int(file.split(\"_\")[1]) for file in image_files]", "Prepare data for batch prediction\nBefore you can run the data through batch prediction, you need to save the data into one of a few possible formats.\nFor this tutorial, use JSONL as it's compatible with the 3-dimensional list that each image is currently represented in. To do this:\n\nIn a file, write each instance as JSON on its own line.\nUpload this file to Cloud Storage.\n\nFor more details on batch prediction input formats: https://cloud.google.com/vertex-ai/docs/predictions/batch-predictions#batch_request_input", "import json\n\nBATCH_PREDICTION_INSTANCES_FILE = \"batch_prediction_instances.jsonl\"\n\nBATCH_PREDICTION_GCS_SOURCE = (\n BUCKET_NAME + \"/batch_prediction_instances/\" + BATCH_PREDICTION_INSTANCES_FILE\n)\n\n# Write instances at JSONL\nwith open(BATCH_PREDICTION_INSTANCES_FILE, \"w\") as f:\n for x in x_test:\n f.write(json.dumps(x) + \"\\n\")\n\n# Upload to Cloud Storage bucket\n! gsutil cp $BATCH_PREDICTION_INSTANCES_FILE $BATCH_PREDICTION_GCS_SOURCE\n\nprint(\"Uploaded instances to: \", BATCH_PREDICTION_GCS_SOURCE)", "Send the prediction request\nTo make a batch prediction request, call the model object's batch_predict method with the following parameters: \n- instances_format: The format of the batch prediction request file: \"jsonl\", \"csv\", \"bigquery\", \"tf-record\", \"tf-record-gzip\" or \"file-list\"\n- prediction_format: The format of the batch prediction response file: \"jsonl\", \"csv\", \"bigquery\", \"tf-record\", \"tf-record-gzip\" or \"file-list\"\n- job_display_name: The human readable name for the prediction job.\n - gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.\n- gcs_destination_prefix: The Cloud Storage path that the service will write the predictions to.\n- model_parameters: Additional filtering parameters for serving prediction results.\n- machine_type: The type of machine to use for training.\n- accelerator_type: The hardware accelerator type.\n- accelerator_count: The number of accelerators to attach to a worker replica.\n- starting_replica_count: The number of compute instances to initially provision.\n- max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.\nCompute instance scaling\nYou can specify a single instance (or node) to process your batch prediction request. This tutorial uses a single node, so the variables MIN_NODES and MAX_NODES are both set to 1.\nIf you want to use multiple nodes to process your batch prediction request, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. 
Refer to the pricing page to understand the costs of autoscaling with multiple nodes.", "MIN_NODES = 1\nMAX_NODES = 1\n\n# The name of the job\nBATCH_PREDICTION_JOB_NAME = \"cifar10_batch-\" + TIMESTAMP\n\n# Folder in the bucket to write results to\nDESTINATION_FOLDER = \"batch_prediction_results\"\n\n# The Cloud Storage bucket to upload results to\nBATCH_PREDICTION_GCS_DEST_PREFIX = BUCKET_NAME + \"/\" + DESTINATION_FOLDER\n\n# TODO\n# Make SDK batch_predict method call\nbatch_prediction_job = model.batch_predict(\n instances_format=\"jsonl\",\n predictions_format=\"jsonl\",\n job_display_name=BATCH_PREDICTION_JOB_NAME,\n gcs_source=BATCH_PREDICTION_GCS_SOURCE,\n gcs_destination_prefix=BATCH_PREDICTION_GCS_DEST_PREFIX,\n model_parameters=None,\n machine_type=DEPLOY_COMPUTE,\n accelerator_type=DEPLOY_CPU,\n accelerator_count=DEPLOY_NCPU,\n starting_replica_count=MIN_NODES,\n max_replica_count=MAX_NODES,\n sync=True,\n)", "Retrieve batch prediction results\nWhen the batch prediction is done processing, you can finally view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated when you created the batch prediction job. The predictions are located in a subdirectory starting with the name prediction. Within that directory, there is a file named prediction.results-xxxx-of-xxxx.\nLet's display the contents. You will get a row for each prediction. The row is the softmax probability distribution for the corresponding CIFAR10 classes.", "RESULTS_DIRECTORY = \"prediction_results\"\nRESULTS_DIRECTORY_FULL = RESULTS_DIRECTORY + \"/\" + DESTINATION_FOLDER\n\n# Create missing directories\nos.makedirs(RESULTS_DIRECTORY, exist_ok=True)\n\n# Get the Cloud Storage paths for each result\n! gsutil -m cp -r $BATCH_PREDICTION_GCS_DEST_PREFIX $RESULTS_DIRECTORY\n\n# Get most recently modified directory\nlatest_directory = max(\n [\n os.path.join(RESULTS_DIRECTORY_FULL, d)\n for d in os.listdir(RESULTS_DIRECTORY_FULL)\n ],\n key=os.path.getmtime,\n)\n\n# Get downloaded results in directory\nresults_files = []\nfor dirpath, subdirs, files in os.walk(latest_directory):\n for file in files:\n if file.startswith(\"prediction.results\"):\n results_files.append(os.path.join(dirpath, file))\n\n# Consolidate all the results into a list\nresults = []\nfor results_file in results_files:\n # Download each result\n with open(results_file, \"r\") as file:\n results.extend([json.loads(line) for line in file.readlines()])", "Evaluate results\nYou can then run a quick evaluation on the prediction results:\n\nnp.argmax: Convert each list of confidence levels to a label\nCompare the predicted labels to the actual labels\nCalculate accuracy as correct/total\n\nTo improve the accuracy, try training for a higher number of epochs.", "# Evaluate the results\ny_predicted = [np.argmax(result[\"prediction\"]) for result in results]\n\ncorrect = sum(y_predicted == np.array(y_test))\naccuracy = len(y_predicted)\nprint(\n f\"Correct predictions = {correct}, Total predictions = {accuracy}, Accuracy = {correct/accuracy}\"\n)", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nTraining Job\nModel\nCloud Storage Bucket", "delete_training_job = True\ndelete_model = True\n\n# Warning: Setting this to true will delete everything in your bucket\ndelete_bucket = False\n\n# TODO\n# Delete the training 
job\nif delete_training_job:\n    job.delete()\n\n# TODO\n# Delete the model\nif delete_model:\n    model.delete()\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n    ! gsutil -m rm -r $BUCKET_NAME" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rnder/data-science-from-scratch
notebook/ch20_natural_language_processing.ipynb
unlicense
[ "20. 자연어처리\n1) 워드 클라우드\n\n단어의 크기를 단어의 빈도 수에 비례하도록 하여 단어를 아름답게 배치", "import math, random, re\nfrom collections import defaultdict, Counter\nfrom bs4 import BeautifulSoup\nimport requests\nimport matplotlib.pyplot as plt\n\n#데이터 과학 관련 키워드목록, 빈도 0~100\ndata = [ (\"big data\", 100, 15), (\"Hadoop\", 95, 25), (\"Python\", 75, 50),\n (\"R\", 50, 40), (\"machine learning\", 80, 20), (\"statistics\", 20, 60),\n (\"data science\", 60, 70), (\"analytics\", 90, 3),\n (\"team player\", 85, 85), (\"dynamic\", 2, 90), (\"synergies\", 70, 0),\n (\"actionable insights\", 40, 30), (\"think out of the box\", 45, 10),\n (\"self-starter\", 30, 50), (\"customer focus\", 65, 15),\n (\"thought leadership\", 35, 35)]", "아주 멋있어 보이기는 하지만, 딱히 어떤 정보를 제공하지는 않는다.\n단어가 구인 광고에 등장하는 빈도를 가로축,\n단어가 이력서에 등장하는 빈도를 세로축", "def text_size(total):\n \"\"\"equals 8 if total is 0, 28 if total is 200\"\"\"\n return 8 + total / 200 * 20\n\nfor word, job_popularity, resume_popularity in data:\n plt.text(job_popularity, resume_popularity, word,\n ha='center', va='center',\n size=text_size(job_popularity + resume_popularity))\nplt.xlabel(\"Popularity on Job Postings\")\nplt.ylabel(\"Popularity on Resumes\")\nplt.axis([0, 100, 0, 100])\nplt.show()", "2) n-gram 모델", "#유니코드 따옴표를 일반 아스키 따옴표로 변환\ndef fix_unicode(text):\n return text.replace(u\"\\u2019\", \"'\")\n\ndef get_document():\n\n url = \"http://radar.oreilly.com/2010/06/what-is-data-science.html\"\n html = requests.get(url).text\n soup = BeautifulSoup(html, 'html5lib')\n\n #content = soup.find(\"div\", \"entry-content\") # NoneType Error\n content = soup.find(\"div\", \"article-body\") # find article-body div\n \n regex = r\"[\\w']+|[\\.]\" # 단어나 마침표에 해당하는 문자열\n\n document = []\n\n for paragraph in content(\"p\"):\n words = re.findall(regex, fix_unicode(paragraph.text))\n document.extend(words)\n\n return document\n\ndocument = get_document()\n#document\n\n###+순차적으로 등장하는 단어들에 대한 정보를 얻기 위함?\na = [\"We've\",'all','heard', 'it']\nb = [\"We've\",'all','heard', 'it']\nlist(zip(a,b))\n\nbigrams = list(zip(document, document[1:]))\ntransitions = defaultdict(list)\nfor prev, current in bigrams:\n transitions[prev].append(current)\n\n#transitions\ntransitions\n\ntransitions['.']\n\n#시작 단어를 선택해야 하는데,, 마침표 다음에 등장하는 단어들중 임의로 하나를 선택하는것도 방법.\ndef generate_using_bigrams(transitions):\n current = \".\" # 다음단어가 문장의 시작이라는 것을 의미\n result = []\n while True:\n next_word_candidates = transitions[current] # bigrams (current, _)\n current = random.choice(next_word_candidates) # choose one at random\n result.append(current) # append it to results\n if current == \".\": return \" \".join(result) # if \".\" 종료\n\nrandom.seed(0)\nprint(\"bigram sentences\")\nfor i in range(10):\n print(i, generate_using_bigrams(transitions))\nprint()\n#터무니 없는 문장이지만, 데이터 과학과 관련되어 보일법한 웹사이트를 만들때 사용할 만한 것들이기도 하다...?", "bigram : 두개의 연속적인 단어\ntrigram : 3개의 연속적인 단어를 보는..(n-gram도 있디만 3개 정도만 봐도 충분..)", "###+순차적으로 등장하는 단어들에 대한 정보를 얻기 위함?\na = [\"We've\",'all','heard', 'it']\nb = [\"We've\",'all','heard', 'it']\nb = [\"We've\",'all','heard', 'it']\nlist(zip(a,b))\n\n#trigrams : 직전 두개의 단어에 의해 다음 단어가 결정됨\ntrigrams = list(zip(document, document[1:], document[2:]))\ntrigram_transitions = defaultdict(list)\nstarts = []\n\nfor prev, current, next in trigrams:\n if prev == \".\": # 만약 이전단어가 마침표 였다면\n starts.append(current) # 이제 새로운 단어의 시작을 의미\n trigram_transitions[(prev, current)].append(next)\n\n#운장은 앞서 바이그램과 비슷한 방식으로 생성할 수 있다\ndef generate_using_trigrams(starts, trigram_transitions):\n current = random.choice(starts) # choose a random 
starting word\n prev = \".\" # and precede it with a '.'\n result = [current]\n while True:\n next_word_candidates = trigram_transitions[(prev, current)]\n next = random.choice(next_word_candidates)\n\n prev, current = current, next\n result.append(current)\n\n if current == \".\":\n return \" \".join(result)\n\nprint(\"trigram sentences\")\nfor i in range(10):\n print(i, generate_using_trigrams(starts, trigram_transitions))\nprint()\n#조금 더 괜찮은 문장..", "trigram을 사용하면 다음 단어를 생성하는 각 단계에서 선택할 수 있는 단어의 수가 bigram을 사용할 때마다 훨씬 적어졌고, 선택할 수 있는 단어가 딱 하나만 존재하는 경우도 많았을 것이다.\n즉, 이미 어떤 문서상에 존재했던 문장(또는 긴문구)하나를 그대로 생성했을 가능성도 있다.\n이는 데이터 과학에 대한 더 많은 에세이들을 모으고, 이를 토대로 n-gram 모델을 구축하는 것을 의미!\n\n<p><span style=\"color:blue\">**3) 문법**</span></p>\n\n\n문법에 기반하여 말이 되는 문장을 생성하는 것\n품사란 무엇이며, 그것들을 어떻게 조합하면 문장이 되는지..\n명사 다음에는 항상 동사가 따른다...는 방식", "#항목 앞에 밑줄이 있으면 더 확장할 수 있는 규칙이고, 나머지는 종결어 라고하자.\n# 예, '_s'는 문장(sentence) 규칙을 의미, '_NP'는 명사구(noun phrase), '_VP'는 동사구\ngrammar = {\n \"_S\" : [\"_NP _VP\"],\n \"_NP\" : [\"_N\",\n \"_A _NP _P _A _N\"],\n \"_VP\" : [\"_V\",\n \"_V _NP\"],\n \"_N\" : [\"data science\", \"Python\", \"regression\"],\n \"_A\" : [\"big\", \"linear\", \"logistic\"],\n \"_P\" : [\"about\", \"near\"],\n \"_V\" : [\"learns\", \"trains\", \"tests\", \"is\"]\n}", "~~~\n['_S']\n['_NP','_VP'] \n['_N','_VP'] \n['Python','_VP'] \n['Python','_V','_NP'] \n['Python','trains','_NP'] \n['Python','trains','_A','_NP','_P','_A','_N'] \n['Python','trains','logistic','_NP','_P','_A','_N']\n['Python','trains','logistic','_N','_P','_A','_N'] \n['Python','trains','logistic','data science','_P','_A','_N'] \n['Python','trains','logistic','data science','about','_A', '_N'] \n['Python','trains','logistic','data science','about','logistic','_N'] \n['Python','trains','logistic','data science','about','logistic','Python'] \n~~~", "# 특정 항목이 종결어인지 아닌지?\ndef is_terminal(token):\n return token[0] != \"_\"\n\n# 각 항목을 대체 가능한 다른 항목 또는 항목들로 변환시키는 함수\ndef expand(grammar, tokens):\n for i, token in enumerate(tokens):\n\n # 종결어는 건너뜀\n if is_terminal(token): continue\n\n # 종결어가 아닌 단어는 대체할 수 있는 항목을 임의로 선택\n replacement = random.choice(grammar[token])\n\n if is_terminal(replacement):\n tokens[i] = replacement\n else:\n tokens = tokens[:i] + replacement.split() + tokens[(i+1):]\n # 새로운 단어의 list에 expand를 적용\n return expand(grammar, tokens)\n\n # 이제 모든 단어가 종결어 이기때문에 종료\n return tokens\n\ndef generate_sentence(grammar):\n return expand(grammar, [\"_S\"])\n\nprint(\"grammar sentences\")\nfor i in range(10):\n print(i, \" \".join(generate_sentence(grammar)))\nprint()", "<p><span style=\"color:blue\">**5) 토픽 모델링**</span></p>", "#단어의 분포에 따라 각 토픽에 weight를 할당\ndef sample_from(weights):\n '''i를 weight[i] / sum(weight)의 확률로 반환'''\n total = sum(weights)\n rnd = total * random.random() # 0과 total 사이를 균일하게 선택\n for i, w in enumerate(weights):\n rnd -= w # return the smallest i such that\n if rnd <= 0: return i # sum(weights[:(i+1)]) >= rnd", "~~~\n결국, weight가 [1,1,3] 이라면 \n1/5의 확룔로 0, \n1/5의 확률로 1, \n3/5의 확률로 2를 반환\n~~~", "documents = [\n [\"Hadoop\", \"Big Data\", \"HBase\", \"Java\", \"Spark\", \"Storm\", \"Cassandra\"],\n [\"NoSQL\", \"MongoDB\", \"Cassandra\", \"HBase\", \"Postgres\"],\n [\"Python\", \"scikit-learn\", \"scipy\", \"numpy\", \"statsmodels\", \"pandas\"],\n [\"R\", \"Python\", \"statistics\", \"regression\", \"probability\"],\n [\"machine learning\", \"regression\", \"decision trees\", \"libsvm\"],\n [\"Python\", \"R\", \"Java\", \"C++\", \"Haskell\", \"programming languages\"],\n [\"statistics\", \"probability\", \"mathematics\", 
\"theory\"],\n [\"machine learning\", \"scikit-learn\", \"Mahout\", \"neural networks\"],\n [\"neural networks\", \"deep learning\", \"Big Data\", \"artificial intelligence\"],\n [\"Hadoop\", \"Java\", \"MapReduce\", \"Big Data\"],\n [\"statistics\", \"R\", \"statsmodels\"],\n [\"C++\", \"deep learning\", \"artificial intelligence\", \"probability\"],\n [\"pandas\", \"R\", \"Python\"],\n [\"databases\", \"HBase\", \"Postgres\", \"MySQL\", \"MongoDB\"],\n [\"libsvm\", \"regression\", \"support vector machines\"]\n]\n\n#총 K=4개의 토픽을 반환해 보자!\nK = 4\n\n#각 토픽이 각 문서에 할당되는 횟수 (Counter는 각각의 문서를 의미)\ndocument_topic_counts = [Counter()\n for _ in documents]\n\n#각 단어가 각 토픽에 할당되는 횟수 (Counter는 각 토픽을 의미)\ntopic_word_counts = [Counter() for _ in range(K)]\n\n#각 토픽에 할당죄는 총 단어수 (각각의 숫자는 각 토픽을 의미)\ntopic_counts = [0 for _ in range(K)]\n\n#각 문서에 포함되는 총 단어수 (각각의 숫자는 각 문서를 의미)\ndocument_lengths = [len(d) for d in documents]\n\n#단어 종류의 수\ndistinct_words = set(word for document in documents for word in document)\nW = len(distinct_words)\n\n#총 문서의 수\nD = len(documents)\n\n# documents[3]의 문서중 토픽 1과 관련 있는 단어의 수를 구하면.\ndocument_topic_counts[3][1]\n\n#npl라는 단어가 토픽 2와 연관지어 나오는 횟수는?\ntopic_word_counts[2][\"nlp\"]\n\ndef p_topic_given_document(topic, d, alpha=0.1):\n \"\"\"문서 d의 모든 단어 중에서 topic에 속하는\n 단어의 비율 (smoothing을 더한 비율)\"\"\"\n\n return ((document_topic_counts[d][topic] + alpha) /\n (document_lengths[d] + K * alpha))\n\ndef p_word_given_topic(word, topic, beta=0.1):\n \"\"\"topic에 속한 단어 중에서 word의 비율 (smoothing을 더한 비율)\"\"\"\n\n return ((topic_word_counts[topic][word] + beta) /\n (topic_counts[topic] + W * beta))\n\ndef topic_weight(d, word, k):\n \"\"\"문서와 문서의 단어가 주어지면, k번째 토픽의 weight를 반환\"\"\"\n\n return p_word_given_topic(word, k) * p_topic_given_document(k, d)\n\ndef choose_new_topic(d, word):\n return sample_from([topic_weight(d, word, k)\n for k in range(K)])\n\nrandom.seed(0)\ndocument_topics = [[random.randrange(K) for word in document]\n for document in documents]\n\nfor d in range(D):\n for word, topic in zip(documents[d], document_topics[d]):\n document_topic_counts[d][topic] += 1\n topic_word_counts[topic][word] += 1\n topic_counts[topic] += 1\n\nfor iter in range(1000):\n for d in range(D):\n for i, (word, topic) in enumerate(zip(documents[d],\n document_topics[d])):\n\n # remove this word / topic from the counts\n # so that it doesn't influence the weights\n document_topic_counts[d][topic] -= 1\n topic_word_counts[topic][word] -= 1\n topic_counts[topic] -= 1\n document_lengths[d] -= 1\n\n # choose a new topic based on the weights\n new_topic = choose_new_topic(d, word)\n document_topics[d][i] = new_topic\n\n # and now add it back to the counts\n document_topic_counts[d][new_topic] += 1\n topic_word_counts[new_topic][word] += 1\n topic_counts[new_topic] += 1\n document_lengths[d] += 1 \n\n#토픽의 의미를 찾기위해 각 토픽에 대해 가장 영향력이 높은(weight 값이 큰) 단어들이 무언인지 보자\nfor k, word_counts in enumerate(topic_word_counts):\n for word, count in word_counts.most_common():\n if count > 0: print(k, word, count)\n\n# 단어들을 보고 다음고 ㅏ같이 이름을 지정해주자\ntopic_names = [\"Big Data and programming languages\",\n \"databases\",\n \"machine learning\",\n \"statistics\"]\n\n#사용자의 관심사가 무엇인지 알아볼 수 있다.\nfor document, topic_counts in zip(documents, document_topic_counts):\n print(document)\n for topic, count in topic_counts.most_common():\n if count > 0:\n print(topic_names[topic], count)\n print()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tabakg/potapov_interpolation
Testing_make_nonlinear_interaction.ipynb
gpl-3.0
[ "In this ipython notebook we will consider our method of constructing nonlinear interactions. We examine the resulting terms and confirm they have the expected behavior.", "import Potapov_Code.Roots as Roots\nimport Potapov_Code.Potapov as Potapov\nimport Potapov_Code.Time_Delay_Network as Time_Delay_Network\nimport Potapov_Code.Time_Sims as Time_Sims\nimport Potapov_Code.functions as functions\nimport Potapov_Code.tests as tests\n\nimport numpy as np\nimport numpy.linalg as la\nfrom scipy.integrate import ode\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef contour_plot(Mat,func = abs):\n '''\n Make a simple plot to view a matrix\n\n Args:\n Mat (compelx-valued matrix): a matrix to view\n func (optional[function]): a function to apply to each component\n\n Generates a plot of the matrix.\n '''\n fig = plt.figure()\n ax = fig.add_subplot(111)\n cax = ax.matshow(func(Mat), interpolation='nearest')\n fig.colorbar(cax)\n plt.show()\n\n\nEx = Time_Delay_Network.Example1(r = 0.7, max_linewidth=35.)\nEx.run_Potapov()\nE = Ex.E\nroots = Ex.roots\nM1 = Ex.M1\ndelays = Ex.delays\nmodes = functions.spatial_modes(roots,M1,E)\n\nvecs = Ex.vecs\n\nroots.sort(key = lambda z: z.imag)\n\n roots\n\nroot_z = lambda z: 1j*z ## a fake root we will vary\n\nroots_to_use = lambda z: [root_z(z).imag,roots[0].imag,roots[-1].imag]\n\nmodes_to_use = modes\n\ndelay_indices = 0\n\nplus_or_minus_arr = [1,1,1]\n\nx = np.linspace(-10000, 10000,4000)\n\nreload(Potapov_Code.functions)\n\nf = lambda z: functions.make_nonlinear_interaction(roots_to_use(z),modes_to_use,Ex.delays,0,\n 0,0.01,plus_or_minus_arr)\n\nplt.plot(x, [abs(f(z))**2 for z in x])\n\nEx = Time_Delay_Network.Example3(r1 = 0.7, r3 = 0.7, max_linewidth=35.)\nEx.run_Potapov()\nE = Ex.E\nroots = Ex.roots\nM1 = Ex.M1\ndelays = Ex.delays\nmodes = functions.spatial_modes(roots,M1,E)\n\nvecs = Ex.vecs\n\nroots.sort(key = lambda z: z.imag)\n\n roots\n\nroots[-1]+roots[-2]\n\nroot_z = lambda z: 1j*z ## a fake root we will vary\n\nroots_to_use = lambda z: [root_z(z).imag,roots[0].imag,roots[1].imag]\n\nmodes_to_use = [modes[0],modes[0], modes[0]]\n\ndelay_indices = 0\n\nplus_or_minus_arr = [1,1,1]\n\nx = np.linspace(-800,1000,4000)\n\nreload(Potapov_Code.functions)\n\nf = lambda z: functions.make_nonlinear_interaction(roots_to_use(z),modes_to_use,Ex.delays,0,\n 0,0.01,plus_or_minus_arr)\n\n### by default, indices_of_refraction=[1,1,1]", "We will vary the fake root we introduced to obtain the phase-mismatch diagram. That is, the phase mismatch $\\delta k$ is going to be some linear function of $z$.", "plt.plot(x, [abs(f(z))**2 for z in x])", "What happens when we change the indices of refraction for the different modes? The phase-mismatch will shift depending on where the new $\\delta k = 0$ occurs. The width of the peak may also change if the indices of refraction are large.", "indices_of_refraction=[3.,5.,10.]\n\nf = lambda z: functions.make_nonlinear_interaction(roots_to_use(z),\n modes_to_use,Ex.delays,0,0,0.1,plus_or_minus_arr,indices_of_refraction=indices_of_refraction)\n\nplt.plot(x, [abs(f(z))**2 for z in x])", "Generating a Hamiltonian from a model\nIn this section we will use example 3 to generate a Hamiltonian with nonlinaer coefficients resulting when inserting a nonlinearity in a circuit. 
We will assume that the nonlinearity is inserting at the delay line of index 0 corresponding to $\\tau_1$.", "import sympy as sp\nimport itertools\nfrom qnet.algebra.circuit_algebra import *\n\nEx = Time_Delay_Network.Example3(r1 = 0.9, r3 = 0.9, max_linewidth=35.,max_freq=25.)\nEx.run_Potapov()\nE = Ex.E\nroots = Ex.roots\nM1 = Ex.M1\ndelays = Ex.delays\nmodes = functions.spatial_modes(roots,M1,E)\n\nroots\n\n## nonlinearity information\n\ndelay_index = 0 \nstart_nonlin = 0.\nduration_nonlin = .1\n\nNONLIN_WEIGHT = 10.\n\nm = len(roots)\n\nindices = range(m)\n\nchi_order = 3 ## i.e. chi-3 nonlinearity\n\nplus_minus_combinations = list(itertools.combinations(range(chi_order + 1), 2)) ## pick which fields are annihilated\n\nlist_of_pm_arr = []\nfor tup in plus_minus_combinations:\n ls = [1]*(chi_order+1)\n for i in tup:\n ls[i]=-1\n list_of_pm_arr.append(ls)\n\na = [sp.symbols('a_'+str(i)) for i in range(m)]\na_H = [sp.symbols('a^H_'+str(i)) for i in range(m)]\n\nA,B,C,D = Potapov.get_Potapov_ABCD(Ex.roots,Ex.vecs,Ex.T,z=0.)\n\n#Omega = (A-A.H)/(2j) #### closed dynamics only. i.e. not damping\n\nOmega = -1j*A ## full dynamics\n\nH_lin_sp = 0\n## with sympy only\nfor i in range(m):\n for j in range(m):\n H_lin_sp += a_H[i]*a[j]*Omega[i,j]\n\ndef make_nonlin_term_sp(combination,pm_arr):\n '''\n Make symbolic term\n With sympy only\n '''\n r = 1\n for index,sign in zip(combination,pm_arr):\n if sign == 1:\n r*= a_H[index]\n else:\n r *= a[index]\n return r", "Let's impose a large 'index of refraction'. In the future we will replaces this by better conditions for phase-mismatch, including realistic values. For now, this will narrow the gain versus $\\Delta k$ function so that few interaction terms remain.", "def weight(combination,pm_arr):\n roots_to_use = np.array([roots[i].imag for i in combination])\n modes_to_use = [modes[i] for i in combination]\n return functions.make_nonlinear_interaction(roots_to_use, modes_to_use, delays, delay_indices,\n start_nonlin,duration_nonlin,pm_arr,\n indices_of_refraction = [1000.]*len(combination),\n eps=1e-12,)\n\n## TODO: add a priori check to restrict exponential growth\nweights = {}\n\ncount = 0\n\nfor pm_arr in list_of_pm_arr:\n field_combinations = itertools.combinations_with_replacement(range(m), chi_order+1)\n for combination in field_combinations:\n count += 1\n weights[tuple(combination),tuple(pm_arr)] = weight(combination,pm_arr) \nprint count\n\nplt.hist([abs(x) for x in [weights[key] for key in weights] ],bins=100);", "As we see above, most of the interactions are negligible. 
Let's drop them out.", "significant_weight_keys = [key for key in weights if abs(weights[key]) > 1e-4]\n\nsignificant_weights = dict((key,weights[key]) for key in significant_weight_keys)\n\nsignificant_weights = {k:v for k,v in weights.iteritems() if abs(v) > 1e-4} ## more elegant \n\nlen(significant_weights)\n\nH_nonlin_sp = 0 ## with sympy only\n\nfor combination,pm_arr in significant_weights:\n H_nonlin_sp += make_nonlin_term_sp(combination,pm_arr)*significant_weights[combination,pm_arr]\n\nH_sp = H_lin_sp + H_nonlin_sp*NONLIN_WEIGHT\n\ndef make_sp_conj(A):\n '''\n Returns the symbolic conjugate of A.\n Args:\n A (symbolic expression in symbols a[i] and a_H[i])\n Returns:\n The complex conjugate of A\n '''\n A_H = sp.conjugate(H_sp)\n for i in range(len(a)):\n A_H = A_H.subs(sp.conjugate(a[i]),a_H[i])\n A_H = A_H.subs(sp.conjugate(a_H[i]),a[i])\n return A_H\n\ndef make_eq_motion(H_sp):\n '''\n Input is a tuple or list, output is a matrix vector\n '''\n A_H = make_sp_conj(H_sp)\n diff_ls = [1j*sp.diff(H_sp,var) for var in a_H] + [-1j*sp.diff(A_H,var) for var in a]\n fs = [sp.lambdify( tuple(a+a_H),expression) for expression in diff_ls ]\n return lambda arr: (np.asmatrix([ f(* arr ) for f in fs])).T\n\n A_H = sp.conjugate(H_sp)\n for i in range(len(a)):\n A_H = A_H.subs(sp.conjugate(a[i]),a_H[i])\n A_H = A_H.subs(sp.conjugate(a_H[i]),a[i])\n\neq_mot = make_eq_motion(H_sp)\n\ndef double_up(M1,M2=None):\n if M2 == None:\n M2 = np.zeros_like(M1)\n top = np.hstack([M1,M2])\n bottom = np.hstack([np.conj(M2),np.conj(M1)])\n return np.vstack([top,bottom])\n\nA_d,C_d,D_d = map(double_up,(A,C,D))\n\nB_d = -double_up(C.H)\n\ndef make_f(eq_mot,B,a_in):\n '''\n Nonlinear equations of motion\n '''\n return lambda t,a: np.asarray(eq_mot(a)+B*a_in(t)).T[0]\n\ndef make_f_lin(A,B,a_in):\n '''\n Linear equations of motion\n '''\n return lambda t,a: np.asarray(A*np.asmatrix(a).T+B*a_in(t)).T[0]\n\na_in = lambda t: np.asmatrix([1.]*4).T\n\nf = make_f(eq_mot,B_d,a_in)\n\nf_lin = make_f_lin(A_d,B_d,a_in)\n\neq_res = eq_mot([1.]*10)\nprint eq_res\n\nmat_res = A_d*np.asmatrix([1.]*10).T\nprint mat_res\n\n## compute L2 error between \nnp.sqrt(sum(np.asarray(abs(eq_res - mat_res))**2))\n\nr = ode(f).set_integrator('zvode', method='bdf')\nr_lin = ode(f_lin).set_integrator('zvode', method='bdf')\n\ny0 = np.asmatrix([0.]*10).T\nt0=0.\n\nr.set_initial_value(y0, t0)\nr_lin.set_initial_value(y0, t0)\n\nt1 = 100\ndt = 0.01\n\nY = []\n\nwhile r.successful() and r.t < t1:\n r.integrate(r.t+dt)\n u = a_in(r.t)\n Y.append(C_d*r.y+D_d*u)\n\nY_lin = []\n\nwhile r_lin.successful() and r_lin.t < t1:\n r_lin.integrate(r_lin.t+dt)\n u = a_in(r_lin.t)\n Y_lin.append(C_d*r_lin.y+D_d*u)\n\nfor i in range(4):\n plt.plot([(y).real[i][0,0] for y in Y ])\n \nfor i in range(4):\n plt.plot([(y).real[i][0,0] for y in Y_lin ])", "When Making the nonlinear terms above zero, we find agreement with the linear equtions of motion.\nUsing the symbolic packages is kind of slow. For classical simulations maybe we can avoid that. We just need to extract the equations of motion, which should end up being sparse in the interaction terms.\nTODO: implement without sympy, e.g. 
with Julia\n\nTesting different cases with make_nonlinear_interaction\nmaking sure different exceptions get caught", "roots_to_use = np.array([roots[i] for i in combination])\nmodes_to_use = [modes[i] for i in combination]\n\ndef call_make_non_lin():\n return functions.make_nonlinear_interaction(roots_to_use, modes_to_use, delays, delay_indices,\n start_nonlin,duration_nonlin,pm_arr,\n indices_of_refraction,\n eps=1e-12,func=lambda z : z.imag)\n\ncall_make_non_lin()\n\nindices_of_refraction = 1000.\ncall_make_non_lin()\n\nstart_nonlin = -1 ## this shouldn't happen\ncall_make_non_lin()\n\nstart_nonlin = [1]*len(roots_to_use)\ncall_make_non_lin()\n\nstart_nonlin = 0.00001\nduration_nonlin = .099\ncall_make_non_lin()\n\nstart_nonlin = [0.00001]*len(roots_to_use)\nduration_nonlin = .099\ncall_make_non_lin()", "Unused methods below", "## consolidated weights do not take into account which modes are createad or annihilated.\n\nconsolidated_weightes = {}\nfor key in significant_weights:\n if not key[0] in consolidated_weightes:\n consolidated_weightes[key[0]] = significant_weights[key]\n else:\n consolidated_weightes[key[0]] += significant_weights[key]\n\n## QNET annihilation and creation operators\n\na_ = [Destroy(local_space('fock', namespace = str(i))) for i in range(m)]\n\n## Make linear Hamiltonian with QNET\n\nH_lin = sum([a_[i].dag()*a_[i]*Omega[i,i] for i in range(m)]) ## with QNET\ndef make_nonlin_term(combination,pm_arr):\n '''\n Make symbolic term\n With QNET\n '''\n r = 1\n for index,sign in zip(combination,pm_arr):\n if sign == 1:\n r*= a_[index].dag()\n else:\n r *= a_[index]\n return r\n\n## Make nonlinear Hamiltonian in QNET \n\nH_nonlin = 0 ## with QNET\n\nfor combination,pm_arr in significant_weights:\n H_nonlin += make_nonlin_term(combination,pm_arr)*significant_weights[combination,pm_arr]\n \nH_qnet = H_lin+H_nonlin" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
dev/_downloads/bcaf3ed1f43ea7377c6c0b00137d728f/custom_inverse_solver.ipynb
bsd-3-clause
[ "%matplotlib inline", "Source localization with a custom inverse solver\nThe objective of this example is to show how to plug a custom inverse solver\nin MNE in order to facilate empirical comparison with the methods MNE already\nimplements (wMNE, dSPM, sLORETA, eLORETA, LCMV, DICS, (TF-)MxNE etc.).\nThis script is educational and shall be used for methods\nevaluations and new developments. It is not meant to be an example\nof good practice to analyse your data.\nThe example makes use of 2 functions apply_solver and solver\nso changes can be limited to the solver function (which only takes three\nparameters: the whitened data, the gain matrix and the number of orientations)\nin order to try out another inverse algorithm.", "import numpy as np\nfrom scipy import linalg\nimport mne\nfrom mne.datasets import sample\nfrom mne.viz import plot_sparse_source_estimates\n\n\ndata_path = sample.data_path()\nmeg_path = data_path / 'MEG' / 'sample'\nfwd_fname = meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif'\nave_fname = meg_path / 'sample_audvis-ave.fif'\ncov_fname = meg_path / 'sample_audvis-shrunk-cov.fif'\nsubjects_dir = data_path / 'subjects'\ncondition = 'Left Auditory'\n\n# Read noise covariance matrix\nnoise_cov = mne.read_cov(cov_fname)\n# Handling average file\nevoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))\nevoked.crop(tmin=0.04, tmax=0.18)\n\nevoked = evoked.pick_types(eeg=False, meg=True)\n# Handling forward solution\nforward = mne.read_forward_solution(fwd_fname)", "Auxiliary function to run the solver", "def apply_solver(solver, evoked, forward, noise_cov, loose=0.2, depth=0.8):\n \"\"\"Call a custom solver on evoked data.\n\n This function does all the necessary computation:\n\n - to select the channels in the forward given the available ones in\n the data\n - to take into account the noise covariance and do the spatial whitening\n - to apply loose orientation constraint as MNE solvers\n - to apply a weigthing of the columns of the forward operator as in the\n weighted Minimum Norm formulation in order to limit the problem\n of depth bias.\n\n Parameters\n ----------\n solver : callable\n The solver takes 3 parameters: data M, gain matrix G, number of\n dipoles orientations per location (1 or 3). A solver shall return\n 2 variables: X which contains the time series of the active dipoles\n and an active set which is a boolean mask to specify what dipoles are\n present in X.\n evoked : instance of mne.Evoked\n The evoked data\n forward : instance of Forward\n The forward solution.\n noise_cov : instance of Covariance\n The noise covariance.\n loose : float in [0, 1] | 'auto'\n Value that weights the source variances of the dipole components\n that are parallel (tangential) to the cortical surface. If loose\n is 0 then the solution is computed with fixed orientation.\n If loose is 1, it corresponds to free orientations.\n The default value ('auto') is set to 0.2 for surface-oriented source\n space and set to 1.0 for volumic or discrete source space.\n depth : None | float in [0, 1]\n Depth weighting coefficients. 
If None, no depth weighting is performed.\n\n Returns\n -------\n stc : instance of SourceEstimate\n The source estimates.\n \"\"\"\n # Import the necessary private functions\n from mne.inverse_sparse.mxne_inverse import \\\n (_prepare_gain, is_fixed_orient,\n _reapply_source_weighting, _make_sparse_stc)\n\n all_ch_names = evoked.ch_names\n\n # Handle depth weighting and whitening (here is no weights)\n forward, gain, gain_info, whitener, source_weighting, mask = _prepare_gain(\n forward, evoked.info, noise_cov, pca=False, depth=depth,\n loose=loose, weights=None, weights_min=None, rank=None)\n\n # Select channels of interest\n sel = [all_ch_names.index(name) for name in gain_info['ch_names']]\n M = evoked.data[sel]\n\n # Whiten data\n M = np.dot(whitener, M)\n\n n_orient = 1 if is_fixed_orient(forward) else 3\n X, active_set = solver(M, gain, n_orient)\n X = _reapply_source_weighting(X, source_weighting, active_set)\n\n stc = _make_sparse_stc(X, active_set, forward, tmin=evoked.times[0],\n tstep=1. / evoked.info['sfreq'])\n\n return stc", "Define your solver", "def solver(M, G, n_orient):\n \"\"\"Run L2 penalized regression and keep 10 strongest locations.\n\n Parameters\n ----------\n M : array, shape (n_channels, n_times)\n The whitened data.\n G : array, shape (n_channels, n_dipoles)\n The gain matrix a.k.a. the forward operator. The number of locations\n is n_dipoles / n_orient. n_orient will be 1 for a fixed orientation\n constraint or 3 when using a free orientation model.\n n_orient : int\n Can be 1 or 3 depending if one works with fixed or free orientations.\n If n_orient is 3, then ``G[:, 2::3]`` corresponds to the dipoles that\n are normal to the cortex.\n\n Returns\n -------\n X : array, (n_active_dipoles, n_times)\n The time series of the dipoles in the active set.\n active_set : array (n_dipoles)\n Array of bool. Entry j is True if dipole j is in the active set.\n We have ``X_full[active_set] == X`` where X_full is the full X matrix\n such that ``M = G X_full``.\n \"\"\"\n inner = np.dot(G, G.T)\n trace = np.trace(inner)\n K = linalg.solve(inner + 4e-6 * trace * np.eye(G.shape[0]), G).T\n K /= np.linalg.norm(K, axis=1)[:, None]\n X = np.dot(K, M)\n\n indices = np.argsort(np.sum(X ** 2, axis=1))[-10:]\n active_set = np.zeros(G.shape[1], dtype=bool)\n for idx in indices:\n idx -= idx % n_orient\n active_set[idx:idx + n_orient] = True\n X = X[active_set]\n return X, active_set", "Apply your custom solver", "# loose, depth = 0.2, 0.8 # corresponds to loose orientation\nloose, depth = 1., 0. # corresponds to free orientation\nstc = apply_solver(solver, evoked, forward, noise_cov, loose, depth)", "View in 2D and 3D (\"glass\" brain like 3D plot)", "plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),\n opacity=0.1)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
scko823/web-scraping-selenium-example
costco-rental.ipynb
mit
[ "from selenium import webdriver\n#import urllib you can use urllib to send web request to websites and get back html text as response\nimport pandas as pd\nfrom bs4 import BeautifulSoup\nfrom selenium.webdriver.common.keys import Keys\nfrom lxml import html\nimport numpy\n# import dependencies\n\nbrowser = webdriver.Firefox() #I only tested in firefox\nbrowser.get('http://costcotravel.com/Rental-Cars')\nbrowser.implicitly_wait(5)#wait for webpage download\n\nbrowser.find_element_by_id('pickupLocationTextWidget').send_keys(\"PHX\");\n\nbrowser.find_element_by_css_selector('.sayt-result').click()\n\n\nbrowser.find_element_by_id(\"pickupDateWidget\").send_keys('08/27/2016')#you can't send it directly, need to clear first\n\nbrowser.find_element_by_id(\"pickupDateWidget\").clear()\n\nbrowser.find_element_by_id(\"pickupDateWidget\").send_keys('08/27/2016')\n\nbrowser.find_element_by_id(\"dropoffDateWidget\").clear()\n\nbrowser.find_element_by_id(\"dropoffDateWidget\").send_keys('08/31/2016',Keys.RETURN)\n\nbrowser.find_element_by_css_selector('#pickupTimeWidget option[value=\"03:00 PM\"]').click() #select time \n\nbrowser.find_element_by_css_selector('#dropoffTimeWidget option[value=\"03:00 PM\"]').click()\n\nbrowser.find_element_by_link_text('SEARCH').click() #click the red button !!\n\nn = browser.page_source #grab the page source", "The follow code is same as before, but you can send the commands all in one go. \nHowever, there are implicit wait for the driver so it can do AJAX request and render the page for elements\nalso, you can you find_element_by_xpath method", "\n# browser = webdriver.Firefox() #I only tested in firefox\n# browser.get('http://costcotravel.com/Rental-Cars')\n# browser.implicitly_wait(5)#wait for webpage download\n# browser.find_element_by_id('pickupLocationTextWidget').send_keys(\"PHX\");\n# browser.implicitly_wait(5) #wait for the airport suggestion box to show\n# browser.find_element_by_xpath('//li[@class=\"sayt-result\"]').click() \n# #click the airport suggestion box \n\n# browser.find_element_by_xpath('//input[@id=\"pickupDateWidget\"]').send_keys('08/27/2016')\n# browser.find_element_by_xpath('//input[@id=\"dropoffDateWidget\"]').send_keys('08/30/2016',Keys.RETURN)\n\n# browser.find_element_by_xpath('//select[@id=\"pickupTimeWidget\"]/option[@value=\"09:00 AM\"]').click()\n# browser.find_element_by_xpath('//select[@id=\"dropoffTimeWidget\"]/option[@value=\"05:00 PM\"]').click()\n# browser.implicitly_wait(5) #wait for the clicks to be completed\n# browser.find_element_by_link_text('SEARCH').click()\n# #click the search box\n\n# time.sleep(8) #wait for firefox to download and render the page\n# n = browser.page_source #grab the html source code\n\ntype(n) #the site use unicode\n\nsoup = BeautifulSoup(n,'lxml') #use BeautifulSoup to parse the source\n\nprint \"--------------first 1000 characters:--------------\\n\"\nprint soup.prettify()[:1000]\nprint \"\\n--------------last 1000 characters:--------------\"\nprint soup.prettify()[-1000:]\n\ntable = soup.find('div',{'class':'rentalCarTableDetails'}) #find the table\n\nprint \"--------------first 1000 characters:--------------\\n\"\nprint table.prettify()[:1000]\nprint \"\\n--------------last 1000 characters:--------------\"\nprint table.prettify()[-1000:]\n\ntr = table.select('tr') #let's look at one of the row\n\ntype(tr)\n\n#lets look at first three row\nfor i in tr[0:3]:\n print i.prettify()\n print \"-----------------------------------\"", "let play with one of the row", "row = tr[3] 
\n\nrow.find('th',{'class':'tar'}).text.encode('utf-8')\n\nrow\n\nrow.contents[4].text #1. this is unicode, 2. the dollar sign is in the way\n\n'Car' in 'Econ Car' #use this string logic to filter out unwanted data\n\nrows = [i for i in tr if (('Price' not in i.contents[0].text and 'Fees' not in i.contents[0].text and 'Location' not in i.contents[0].text and i.contents[0].text !='') and len(i.contents[0].text)<30)]\n# use this crazy list comprehension to get the data we want \n#1. don't want the text 'Price' in the first column\n#2. don't want the text 'Fee' in the first column\n#3. don't want the text 'Location' in the first column\n#4. the text length of first column must be less than 30 characters long\n\nrows[0].contents[0].text #just exploring here...\n\nrows[0].contents[4].text #need to get rid of the $....\n\nrows[3].contents[0].text #need to make it utf-8\n\n#process the data\nprices = {} \nfor i in rows:\n #print the 1st column text\n print i.contents[0].text.encode('utf-8')\n prices[i.contents[0].text.encode('utf-8')] = [i.contents[1].text.encode('utf-8'),i.contents[2].text.encode('utf-8'), i.contents[3].text.encode('utf-8'),i.contents[4].text.encode('utf-8')]\n\nprices\n\niteritems = prices.iteritems() \n#call .iteritems() on a dictionary will give you a generator which you can iter over\n\niteritems.next() #run me five times\n\nfor name, priceList in prices.iteritems():\n newPriceList = []\n for i in priceList:\n newPriceList.append(i.replace('$',''))\n prices[name] = newPriceList\n\nprices\n\ndata = pd.DataFrame.from_dict(prices, orient='index') #get a pandas DataFrame from the prices dictionary\n\ndata\n\ndata = data.replace('Not Available', numpy.nan) #replace the 'Not Available' data point to numpy.nan\n\ndata = pd.to_numeric(data, errors='coerce') #cast to numeric data\n\ndata\n\ndata.columns= ['Alamo','Avis','Budget','Enterprise'] #set column names\n\ndata\n\ndata.notnull() #check for missing data \n\ndata.min(axis=1, skipna=True) #look at the cheapest car in each class", "From this point on, you can set up to run every night and email yourself results etc." ]
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
dotsdl/msmbuilder
examples/tica-example.ipynb
lgpl-2.1
[ "This example compares two methods for dimensionality reduction:\ntICA and PCA.", "%matplotlib inline\nfrom __future__ import print_function\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport simtk.openmm as mm\nfrom msmbuilder.decomposition import tICA, PCA", "First, let's use OpenMM to run some dynamics on the 3D potential energy function \n$$E(x,y,z) = 5 \\cdot (x-1)^2 \\cdot (x+1)^2 + y^2 + z^2$$\nFrom looking at this equation, we can see that along the $x$ dimension,\nthe potential is a double-well, whereas along the $y$ and $z$ dimensions,\nwe've just got a harmonic potential. So, we should expect that $x$ is the slow\ndegree of freedom, whereas the system should equilibrate rapidly along $y$ and $z$.", "def propagate(n_steps=10000):\n \"Simulate some dynamics\"\n system = mm.System()\n system.addParticle(1)\n force = mm.CustomExternalForce('5*(x-1)^2*(x+1)^2 + y^2 + z^2')\n force.addParticle(0, [])\n system.addForce(force)\n integrator = mm.LangevinIntegrator(500, 1, 0.02)\n context = mm.Context(system, integrator)\n context.setPositions([[0, 0, 0]])\n context.setVelocitiesToTemperature(500)\n x = np.zeros((n_steps, 3))\n for i in range(n_steps):\n x[i] = context.getState(getPositions=True).getPositions(asNumpy=True)._value\n integrator.step(1)\n return x", "Okay, let's run the dynamics. The first plot below shows the $x$, $y$ and $z$ coordinate vs. time for the trajectory, and\nthe second plot shows each of the 1D and 2D marginal distributions.", "trajectory = propagate(10000)\n\nylabels = ['x', 'y', 'z']\nfor i in range(3):\n plt.subplot(3, 1, i+1)\n plt.plot(trajectory[:, i])\n plt.ylabel(ylabels[i])\nplt.xlabel('Simulation time')\nplt.show()", "Note that the variance of $x$ is much lower than the variance in $y$ or $z$, despite it's bi-modal distribution.", "# fit the two models\ntica = tICA(n_components=1, lag_time=100)\npca = PCA(n_components=1)\ntica.fit([trajectory])\npca.fit([trajectory])\n\nplt.subplot(1,2,1)\nplt.title('1st tIC')\nplt.bar([1,2,3], tica.components_[0], color='b')\nplt.xticks([1.5,2.5,3.5], ['x', 'y', 'z'])\nplt.subplot(1,2,2)\nplt.title('1st PC')\nplt.bar([1,2,3], pca.components_[0], color='r')\nplt.xticks([1.5,2.5,3.5], ['x', 'y', 'z'])\nplt.show()\n\nprint('1st tIC', tica.components_ / np.linalg.norm(tica.components_))\nprint('1st PC ', pca.components_ / np.linalg.norm(pca.components_))", "Note that the first tIC \"finds\" a projection that just resolves the $x$ coordinate, whereas PCA doesn't." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
stefanbuenten/nanodegree
p2/L1_Starter_Code.ipynb
mit
[ "Before we get started, a couple of reminders to keep in mind when using iPython notebooks:\n\nRemember that you can see from the left side of a code cell when it was last run if there is a number within the brackets.\nWhen you start a new notebook session, make sure you run all of the cells up to the point where you last left off. Even if the output is still visible from when you ran the cells in your previous session, the kernel starts in a fresh state so you'll need to reload the data, etc. on a new session.\nThe previous point is useful to keep in mind if your answers do not match what is expected in the lesson's quizzes. Try reloading the data and run all of the processing steps one by one in order to make sure that you are working with the same variables and data that are at each quiz stage.\n\nLoad Data from CSVs", "import unicodecsv\n\n## Longer version of code (replaced with shorter, equivalent version below)\n\n# enrollments = []\n# f = open('enrollments.csv', 'rb')\n# reader = unicodecsv.DictReader(f)\n# for row in reader:\n# enrollments.append(row)\n# f.close()\n\nwith open('enrollments.csv', 'rb') as f:\n reader = unicodecsv.DictReader(f)\n enrollments = list(reader)\n\n#####################################\n# 1 #\n#####################################\n\n## Read in the data from daily_engagement.csv and project_submissions.csv \n## and store the results in the below variables.\n## Then look at the first row of each table.\n\ndef csv_loader(file_name):\n \"\"\"\n Reads a CSV using unicodecsv module and returns a list\n \"\"\"\n with open(file_name, \"rb\") as f:\n reader = unicodecsv.DictReader(f)\n return list(reader)\n\ndaily_engagement = csv_loader(\"daily_engagement.csv\")\nprint(daily_engagement[0])\nproject_submissions = csv_loader(\"project_submissions.csv\")\nprint(project_submissions[0])", "Fixing Data Types", "from datetime import datetime as dt\n\n# Takes a date as a string, and returns a Python datetime object. 
\n# If there is no date given, returns None\ndef parse_date(date):\n if date == '':\n return None\n else:\n return dt.strptime(date, '%Y-%m-%d')\n \n# Takes a string which is either an empty string or represents an integer,\n# and returns an int or None.\ndef parse_maybe_int(i):\n if i == '':\n return None\n else:\n return int(i)\n\n# Clean up the data types in the enrollments table\nfor enrollment in enrollments:\n enrollment['cancel_date'] = parse_date(enrollment['cancel_date'])\n enrollment['days_to_cancel'] = parse_maybe_int(enrollment['days_to_cancel'])\n enrollment['is_canceled'] = enrollment['is_canceled'] == 'True'\n enrollment['is_udacity'] = enrollment['is_udacity'] == 'True'\n enrollment['join_date'] = parse_date(enrollment['join_date'])\n \nenrollments[0]\n\n# Clean up the data types in the engagement table\nfor engagement_record in daily_engagement:\n engagement_record['lessons_completed'] = int(float(engagement_record['lessons_completed']))\n engagement_record['num_courses_visited'] = int(float(engagement_record['num_courses_visited']))\n engagement_record['projects_completed'] = int(float(engagement_record['projects_completed']))\n engagement_record['total_minutes_visited'] = float(engagement_record['total_minutes_visited'])\n engagement_record['utc_date'] = parse_date(engagement_record['utc_date'])\n \ndaily_engagement[0]\n\n# Clean up the data types in the submissions table\nfor submission in project_submissions:\n submission['completion_date'] = parse_date(submission['completion_date'])\n submission['creation_date'] = parse_date(submission['creation_date'])\n\nproject_submissions[0]", "Note when running the above cells that we are actively changing the contents of our data variables. If you try to run these cells multiple times in the same session, an error will occur.\nInvestigating the Data", "#####################################\n# 2 #\n#####################################\n\n## Find the total number of rows and the number of unique students (account keys)\n## in each table.\n\n# Part 1\nprint(\n len(enrollments),\n len(daily_engagement),\n len(project_submissions)\n )\n\n\n\n# Part 2\ndef get_unique_students(file_name):\n \"\"\"\n Retrieves a list of unique account keys from the specified file\n \"\"\"\n unqiue_students = set()\n for e in file_name:\n unqiue_students.add(e[\"account_key\"])\n return unqiue_students\n \nu_enrollments = get_unique_students(enrollments)\nu_daily_engagement = get_unique_students(daily_engagement)\nu_project_submissions = get_unique_students(project_submissions)\n\nprint(\n len(u_enrollments),\n len(u_daily_engagement),\n len(u_project_submissions)\n )", "Problems in the Data", "#####################################\n# 3 #\n#####################################\n\n## Rename the \"acct\" column in the daily_engagement table to \"account_key\".\nfor engagement_record in daily_engagement:\n engagement_record['account_key'] = engagement_record['acct']\n del[engagement_record['acct']]", "Missing Engagement Records", "#####################################\n# 4 #\n#####################################\n\n## Find any one student enrollments where the student is missing from the daily engagement table.\n## Output that enrollment.\n\nfor e in enrollments:\n if e[\"account_key\"] not in u_daily_engagement:\n print(\"\\n\", e)", "Checking for More Problem Records", "#####################################\n# 5 #\n#####################################\n\n## Find the number of surprising data points (enrollments missing from\n## the engagement table) 
that remain, if any.\n\nfor ix, e in enumerate(enrollments):\n if e[\"account_key\"] not in u_daily_engagement and e[\"join_date\"] != e[\"cancel_date\"]:\n print(\"\\n\", \"Index: %i\" % ix, \"\\n Correspoinding record: \\n %s\" % e)", "Tracking Down the Remaining Problems", "# Create a set of the account keys for all Udacity test accounts\nudacity_test_accounts = set()\nfor enrollment in enrollments:\n if enrollment['is_udacity']:\n udacity_test_accounts.add(enrollment['account_key'])\nlen(udacity_test_accounts)\n\n# Given some data with an account_key field, removes any records corresponding to Udacity test accounts\ndef remove_udacity_accounts(data):\n non_udacity_data = []\n for data_point in data:\n if data_point['account_key'] not in udacity_test_accounts:\n non_udacity_data.append(data_point)\n return non_udacity_data\n\n# Remove Udacity test accounts from all three tables\nnon_udacity_enrollments = remove_udacity_accounts(enrollments)\nnon_udacity_engagement = remove_udacity_accounts(daily_engagement)\nnon_udacity_submissions = remove_udacity_accounts(project_submissions)\n\nprint(\n len(non_udacity_enrollments),\n len(non_udacity_engagement),\n len(non_udacity_submissions))", "Refining the Question", "#####################################\n# 6 #\n#####################################\n\n## Create a dictionary named paid_students containing all students who either\n## haven't canceled yet or who remained enrolled for more than 7 days. The keys\n## should be account keys, and the values should be the date the student enrolled.\n\npaid_students = dict()\n\nfor e in non_udacity_enrollments:\n # check wether days_to_cancel == None or days_to_cancel > 7\n if e[\"days_to_cancel\"] == None or e[\"days_to_cancel\"] > 7:\n # store account key and join date in temporary variables\n temp_key = e[\"account_key\"]\n temp_date = e[\"join_date\"]\n # check wether account key already exists in temp variable or if join date > existing join date\n if temp_key not in paid_students or temp_date > paid_students[temp_key]:\n # add account_key and enrollment_date to\n paid_students[temp_key] = temp_date\n \nlen(paid_students)", "Getting Data from First Week", "# Takes a student's join date and the date of a specific engagement record,\n# and returns True if that engagement record happened within one week\n# of the student joining.\ndef within_one_week(join_date, engagement_date):\n time_delta = engagement_date - join_date\n return time_delta.days >= 0 and time_delta.days < 7\n\ndef remove_free_trial_cancels(data):\n new_data = []\n for data_point in data:\n if data_point['account_key'] in paid_students:\n new_data.append(data_point)\n return new_data\n\npaid_enrollments = remove_free_trial_cancels(non_udacity_enrollments)\npaid_engagement = remove_free_trial_cancels(non_udacity_engagement)\npaid_submissions = remove_free_trial_cancels(non_udacity_submissions)\n\n#####################################\n# 7 #\n#####################################\n\n## Create a list of rows from the engagement table including only rows where\n## the student is one of the paid students you just found, and the date is within\n## one week of the student's join date.\n\npaid_engagement_in_first_week = []\n\n# loop over engagements\nfor e in non_udacity_engagement:\n # check if student is in paid students and if engagement date is valid\n if e[\"account_key\"] in paid_students and within_one_week(paid_students[e[\"account_key\"]], e[\"utc_date\"]) == True:\n paid_engagement_in_first_week.append(e)\n 
\nlen(paid_engagement_in_first_week)", "Exploring Student Engagement", "from collections import defaultdict\n\n# Create a dictionary of engagement grouped by student.\n# The keys are account keys, and the values are lists of engagement records.\nengagement_by_account = defaultdict(list)\nfor engagement_record in paid_engagement_in_first_week:\n account_key = engagement_record['account_key']\n engagement_by_account[account_key].append(engagement_record)\n\n# Create a dictionary with the total minutes each student spent in the classroom during the first week.\n# The keys are account keys, and the values are numbers (total minutes)\ntotal_minutes_by_account = {}\nfor account_key, engagement_for_student in engagement_by_account.items():\n total_minutes = 0\n for engagement_record in engagement_for_student:\n total_minutes += engagement_record['total_minutes_visited']\n total_minutes_by_account[account_key] = total_minutes\n\nimport numpy as np\n\n# Summarize the data about minutes spent in the classroom\ntotal_minutes = list(total_minutes_by_account.values())\n\nprint('Mean:', np.mean(total_minutes))\nprint('Standard deviation:', np.std(total_minutes))\nprint('Minimum:', np.min(total_minutes))\nprint('Maximum:', np.max(total_minutes))", "Debugging Data Analysis Code", "#####################################\n# 8 #\n#####################################\n\n## Go through a similar process as before to see if there is a problem.\n## Locate at least one surprising piece of data, output it, and take a look at it.\n\nfor k,v in total_minutes_by_account.items():\n if v > 7200:\n print(\"\\n\", \"account key: \", k, \"value: \", v)\n\nprint(\n paid_engagement_in_first_week[\"account_key\" == 460],\n paid_engagement_in_first_week[\"account_key\" == 140],\n paid_engagement_in_first_week[\"account_key\" == 108],\n paid_engagement_in_first_week[\"account_key\" == 78]\n)", "Lessons Completed in First Week", "#####################################\n# 9 #\n#####################################\n\n## Adapt the code above to find the mean, standard deviation, minimum, and maximum for\n## the number of lessons completed by each student during the first week. Try creating\n## one or more functions to re-use the code above.\n\ndef group_data(data, key_name):\n \"\"\"\n Given data in dict form and a key, the function returns a grouped data set\n \"\"\"\n grouped_data = defaultdict(list)\n \n for e in data:\n key = e[key_name]\n grouped_data[key].append(e)\n return grouped_data\n\nengagement_by_account = group_data(paid_engagement_in_first_week, \"account_key\")\n\ndef sum_grouped_data(data, field_name):\n \"\"\"\n Given data in dict form and a field name, the function returns sum of the field name per key\n \"\"\"\n summed_data = {}\n \n for key, values in data.items():\n total = 0\n for value in values:\n total += value[field_name]\n summed_data[key] = total\n return summed_data\n\ntotal_lessons_per_account = sum_grouped_data(engagement_by_account, \"lessons_completed\")\n \ndef describe_data(data):\n \"\"\"\n Given a dataset the function returns mean, std. 
deviation, min and max\n \"\"\"\n print(\n \"Mean: %f\" % np.mean(data),\n \"Standard deviation: %f\" % np.std(data),\n \"Min: %f\" % np.min(data),\n \"Max: %f\" % np.max(data))\n plt.hist(data)\n \ndescribe_data(list(total_lessons_per_account.values()))", "Number of Visits in First Week", "######################################\n# 10 #\n######################################\n\n## Find the mean, standard deviation, minimum, and maximum for the number of\n## days each student visits the classroom during the first week.\n\nfor el in paid_engagement_in_first_week:\n if el[\"num_courses_visited\"] > 0:\n el[\"has_visited\"] = 1\n else:\n el[\"has_visited\"] = 0\n\nengagement_by_account = group_data(paid_engagement_in_first_week, \"account_key\")\ntotal_visits_per_day_per_account = sum_grouped_data(engagement_by_account, \"has_visited\")\ndescribe_data(list(total_visits_per_day_per_account.values()))", "Splitting out Passing Students", "######################################\n# 11 #\n######################################\n\n## Create two lists of engagement data for paid students in the first week.\n## The first list should contain data for students who eventually pass the\n## subway project, and the second list should contain data for students\n## who do not.\n\nsubway_project_lesson_keys = ['746169184', '3176718735']\n\npassing_engagement = []\nnon_passing_engagement = []\n\n# loop over project submission data\nfor el in paid_submissions:\n\n # check if project submission account key is in engagement data\n if el[\"account_key\"] in paid_engagement:\n\n print(e[\"account_key\"])\n \n # check if lesson key is in subway_project_lesson key\n if el[\"lesson_key\"] in subway_project_lesson_keys:\n \n print(e[\"lesson_key\"])\n \n # check if assigned_rating is PASSED or DISTINCTION\n if el[\"assigned_rating\"] in [\"PASSED\", \"DISTINCTION\"]:\n \n print(e[\"assigned_rating\"])\n \n # if so, add record to passing_engagement list\n passing_engagement.append(el)\n \n # else add record to non_passing_engagement list\n else:\n non_passing_engagement.append(el)\n \nprint(\"Passing: \", len(passing_engagement), \"Not passing: \", len(non_passing_engagement)) \n\nsubway_project_lesson_keys = ['746169184', '3176718735']\n\npass_subway_project = set()\n\nfor el in paid_submissions: \n if ((el[\"lesson_key\"] in subway_project_lesson_keys) and\n (el[\"assigned_rating\"] == 'PASSED' or el[\"assigned_rating\"] == 'DISTINCTION')):\n pass_subway_project.add(el['account_key'])\n\nlen(pass_subway_project)\n\npassing_engagement = []\nnon_passing_engagement = []\n\nfor el in paid_engagement_in_first_week:\n if el['account_key'] in pass_subway_project:\n passing_engagement.append(el)\n else:\n non_passing_engagement.append(el)\n\nprint(len(passing_engagement))\nprint(len(non_passing_engagement))", "Comparing the Two Student Groups", "######################################\n# 12 #\n######################################\n\n## Compute some metrics you're interested in and see how they differ for\n## students who pass the subway project vs. students who don't. 
A good\n## starting point would be the metrics we looked at earlier (minutes spent\n## in the classroom, lessons completed, and days visited).\n\n# prepare passing data\npassing_engagement_grouped = group_data(passing_engagement, \"account_key\")\nnon_passing_engagement_grouped = group_data(non_passing_engagement, \"account_key\")\n\npassing_minutes = sum_grouped_data(passing_engagement_grouped, \"total_minutes_visited\")\npassing_lessons = sum_grouped_data(passing_engagement_grouped, \"lessons_completed\")\npassing_days = sum_grouped_data(passing_engagement_grouped, \"has_visited\")\npassing_projects = sum_grouped_data(passing_engagement_grouped, \"projects_completed\")\n\n# prepare non passing data\nnon_passing_minutes = sum_grouped_data(non_passing_engagement_grouped, \"total_minutes_visited\")\nnon_passing_lessons = sum_grouped_data(non_passing_engagement_grouped, \"lessons_completed\")\nnon_passing_days = sum_grouped_data(non_passing_engagement_grouped, \"has_visited\")\nnon_passing_projects = sum_grouped_data(non_passing_engagement_grouped, \"projects_completed\")\n\n# compare\nprint(\"Minutes\", \"\\n\")\ndescribe_data(list(passing_minutes.values()))\ndescribe_data(list(non_passing_minutes.values()))\n\nprint(\"\\n\", \"Lessons\", \"\\n\")\ndescribe_data(list(passing_lessons.values()))\ndescribe_data(list(non_passing_lessons.values()))\n\nprint(\"\\n\", \"Days\", \"\\n\")\ndescribe_data(list(passing_days.values()))\ndescribe_data(list(non_passing_days.values()))\n\nprint(\"\\n\", \"Projects\", \"\\n\")\ndescribe_data(list(passing_projects.values()))\ndescribe_data(list(non_passing_projects.values()))\n\npassing_engagement[0:2]", "Making Histograms", "######################################\n# 13 #\n######################################\n\n## Make histograms of the three metrics we looked at earlier for both\n## students who passed the subway project and students who didn't. You\n## might also want to make histograms of any other metrics you examined.\n\n# setup\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# minutes passing\nplt.title(\"Passing students by minute\")\nplt.hist(list(passing_minutes.values()))\n\n# minutes non-passing\nplt.title(\"_NON_ Passing students by minute\")\nplt.hist(list(non_passing_minutes.values()))\n\n# lessons\nplt.title(\"Passing students by lessons\")\nplt.hist(list(passing_lessons.values()))\n\n# lessons non-passing\nplt.title(\"_NON_ Passing students by lessons\")\nplt.hist(list(non_passing_lessons.values()))\n\n# days\nplt.title(\"Passing students by days\")\nplt.hist(list(passing_days.values()))\n\n# days non-passing\nplt.title(\"_NON_ Passing students by days\")\nplt.hist(list(non_passing_days.values()))", "Improving Plots and Sharing Findings", "######################################\n# 14 #\n######################################\n\n## Make a more polished version of at least one of your visualizations\n## from earlier. Try importing the seaborn library to make the visualization\n## look better, adding axis labels and a title, and changing one or more\n## arguments to the hist() function.\nimport seaborn as sns\n\n# seaborn only\nplt.title(\"_NON_ Passing students by days with S-E-A-B-O-R-N\")\nplt.xlabel(\"days spent in the classroom\")\nplt.ylabel(\"frequency\")\nplt.hist(list(non_passing_days.values()), bins=8)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pk-ai/training
machine-learning/deep-learning/udacity/ud730/4_convolutions.ipynb
mit
[ "Deep Learning\nAssignment 4\nPreviously in 2_fullyconnected.ipynb and 3_regularization.ipynb, we trained fully connected networks to classify notMNIST characters.\nThe goal of this assignment is make the neural network convolutional.", "# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport numpy as np\nimport tensorflow as tf\nfrom six.moves import cPickle as pickle\nfrom six.moves import range\n\npickle_file = 'notMNIST.pickle'\n\nwith open(pickle_file, 'rb') as f:\n save = pickle.load(f)\n train_dataset = save['train_dataset']\n train_labels = save['train_labels']\n valid_dataset = save['valid_dataset']\n valid_labels = save['valid_labels']\n test_dataset = save['test_dataset']\n test_labels = save['test_labels']\n del save # hint to help gc free up memory\n print('Training set', train_dataset.shape, train_labels.shape)\n print('Validation set', valid_dataset.shape, valid_labels.shape)\n print('Test set', test_dataset.shape, test_labels.shape)", "Reformat into a TensorFlow-friendly shape:\n- convolutions need the image data formatted as a cube (width by height by #channels)\n- labels as float 1-hot encodings.", "image_size = 28\nnum_labels = 10\nnum_channels = 1 # grayscale\n\nimport numpy as np\n\ndef reformat(dataset, labels):\n dataset = dataset.reshape(\n (-1, image_size, image_size, num_channels)).astype(np.float32)\n labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n return dataset, labels\ntrain_dataset, train_labels = reformat(train_dataset, train_labels)\nvalid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\ntest_dataset, test_labels = reformat(test_dataset, test_labels)\nprint('Training set', train_dataset.shape, train_labels.shape)\nprint('Validation set', valid_dataset.shape, valid_labels.shape)\nprint('Test set', test_dataset.shape, test_labels.shape)\n\ndef accuracy(predictions, labels):\n return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n / predictions.shape[0])", "Let's build a small network with two convolutional layers, followed by one fully connected layer. 
Convolutional networks are more expensive computationally, so we'll limit its depth and number of fully connected nodes.", "batch_size = 16\npatch_size = 5\ndepth = 16\nnum_hidden = 64\n\ngraph = tf.Graph()\n\nwith graph.as_default():\n\n # Input data.\n tf_train_dataset = tf.placeholder(\n tf.float32, shape=(batch_size, image_size, image_size, num_channels))\n tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n tf_valid_dataset = tf.constant(valid_dataset)\n tf_test_dataset = tf.constant(test_dataset)\n \n # Variables.\n layer1_weights = tf.Variable(tf.truncated_normal(\n [patch_size, patch_size, num_channels, depth], stddev=0.1))\n layer1_biases = tf.Variable(tf.zeros([depth]))\n layer2_weights = tf.Variable(tf.truncated_normal(\n [patch_size, patch_size, depth, depth], stddev=0.1))\n layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))\n layer3_weights = tf.Variable(tf.truncated_normal(\n [image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1))\n layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))\n layer4_weights = tf.Variable(tf.truncated_normal(\n [num_hidden, num_labels], stddev=0.1))\n layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))\n \n # Model.\n def model(data):\n conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')\n hidden = tf.nn.relu(conv + layer1_biases)\n conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')\n hidden = tf.nn.relu(conv + layer2_biases)\n shape = hidden.get_shape().as_list()\n reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])\n hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)\n return tf.matmul(hidden, layer4_weights) + layer4_biases\n \n # Training computation.\n logits = model(tf_train_dataset)\n loss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))\n \n # Optimizer.\n optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)\n \n # Predictions for the training, validation, and test data.\n train_prediction = tf.nn.softmax(logits)\n valid_prediction = tf.nn.softmax(model(tf_valid_dataset))\n test_prediction = tf.nn.softmax(model(tf_test_dataset))\n\nnum_steps = 1001\n\nwith tf.Session(graph=graph) as session:\n tf.global_variables_initializer().run()\n print('Initialized')\n for step in range(num_steps):\n offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n batch_data = train_dataset[offset:(offset + batch_size), :, :, :]\n batch_labels = train_labels[offset:(offset + batch_size), :]\n feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n _, l, predictions = session.run(\n [optimizer, loss, train_prediction], feed_dict=feed_dict)\n if (step % 50 == 0):\n print('Minibatch loss at step %d: %f' % (step, l))\n print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))\n print('Validation accuracy: %.1f%%' % accuracy(\n valid_prediction.eval(), valid_labels))\n print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))", "Problem 1\nThe convolutional model above uses convolutions with stride 2 to reduce the dimensionality. 
Replace the strides by a max pooling operation (nn.max_pool()) of stride 2 and kernel size 2.\n\nAdding 2x2 max pooling with stride 2 and increasing num_steps improved test accuracy from 89.8% to 93.3%", "batch_size = 16\npatch_size = 5\ndepth = 16\nnum_hidden = 64\n\ngraph = tf.Graph()\n\nwith graph.as_default():\n\n    # Input data.\n    tf_train_dataset = tf.placeholder(\n        tf.float32, shape=(batch_size, image_size, image_size, num_channels))\n    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n    tf_valid_dataset = tf.constant(valid_dataset)\n    tf_test_dataset = tf.constant(test_dataset)\n\n    # Variables.\n    layer1_weights = tf.Variable(tf.truncated_normal(\n        [patch_size, patch_size, num_channels, depth], stddev=0.1))\n    layer1_biases = tf.Variable(tf.zeros([depth]))\n    layer2_weights = tf.Variable(tf.truncated_normal(\n        [patch_size, patch_size, depth, depth], stddev=0.1))\n    layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))\n    layer3_weights = tf.Variable(tf.truncated_normal(\n        [image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1))\n    layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))\n    layer4_weights = tf.Variable(tf.truncated_normal(\n        [num_hidden, num_labels], stddev=0.1))\n    layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))\n\n    # Model.\n    def model(data):\n        # Convolutions keep stride 1; the 2x2 max pooling with stride 2 now does the\n        # downsampling, so the spatial size is still image_size // 4 at the dense layer.\n        conv = tf.nn.conv2d(data, layer1_weights, [1, 1, 1, 1], padding='SAME')\n        hidden = tf.nn.relu(conv + layer1_biases)\n        pool = tf.nn.max_pool(hidden, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')\n        conv = tf.nn.conv2d(pool, layer2_weights, [1, 1, 1, 1], padding='SAME')\n        hidden = tf.nn.relu(conv + layer2_biases)\n        pool = tf.nn.max_pool(hidden, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')\n        shape = pool.get_shape().as_list()\n        reshape = tf.reshape(pool, [shape[0], shape[1] * shape[2] * shape[3]])\n        hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)\n        return tf.matmul(hidden, layer4_weights) + layer4_biases\n\n    # Training computation.\n    logits = model(tf_train_dataset)\n    loss = tf.reduce_mean(\n        tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))\n\n    # Optimizer.\n    optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)\n\n    # Predictions for the training, validation, and test data.\n    train_prediction = tf.nn.softmax(logits)\n    valid_prediction = tf.nn.softmax(model(tf_valid_dataset))\n    test_prediction = tf.nn.softmax(model(tf_test_dataset))\n\nnum_steps = 5001\n\nwith tf.Session(graph=graph) as session:\n    tf.global_variables_initializer().run()\n    print('Initialized')\n    for step in range(num_steps):\n        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n        batch_data = train_dataset[offset:(offset + batch_size), :, :, :]\n        batch_labels = train_labels[offset:(offset + batch_size), :]\n        feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n        _, l, predictions = session.run(\n            [optimizer, loss, train_prediction], feed_dict=feed_dict)\n        if (step % 250 == 0):\n            print('Minibatch loss at step %d: %f' % (step, l))\n            print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))\n            print('Validation accuracy: %.1f%%' % accuracy(\n                valid_prediction.eval(), valid_labels))\n    print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))", "Problem 2\nTry to get the best performance you can using a convolutional net. 
Look for example at the classic LeNet5 architecture, adding Dropout, and/or adding learning rate decay." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Sbozzolo/Open-Source-Tools-for-Physics
seminar_3/Introduction to Python.ipynb
gpl-3.0
[ "Introduction to Python\nBasic interaction\nPython can be used as a calculator since there is no need to declare types and most of the operations behave as expected (like the int division). The power operator is not ^ but **.", "2 + 2\n\n3 / 2\n\n2 ** 8", "Variables can be defined freely and change type. There is a very handy print function (this is very different from Python2!). The format function can be used to customize the output. More at https://pyformat.info/", "a = 42\nb = 256\nz = 2 + 3j\nw = 5 - 6j\nprint(\"I multiply\", a, \"and\", b, \"and I get\", a * b)\nprint(\"Compex numbers!\", z + w)\nprint(\"Real:\", z.real)\n# Variables as objects (in Python everything is an object)\nprint(\"Abs:\", abs(z))\n\nalmost_pi = 3.14\nbetter_pi = 3.14159265358979323846264338327950288419716939937510\nc = 299792458\nprint(\"Look at his scientific notation {:.2E} or ar this nice rounding {:.3f}\".format(c, better_pi))", "Note that Python does not require semicolons to terminate an instruction (but they don't harm) but require the indendation to be respected. (After for, if, while, def, class, ...)", "for i in range(5):\n if (not i%2 == 0 or i == 0):\n print(i)", "Strucutred Data\nIt's easy to work with variables of different nature. There are three kinds of structured variable: tuple (), lists [], and dicts {}. Tuples are immutable (ofter output of functions is given as a tuple). Lists are the usual arrays (multidimensional). Dictionaries are associative arrays with keywords.", "a = 5\na = \"Hello, World\"\n# Multiple assignation\nb, c = \"Hello\", \"World\"\nprint(a)\nprint(b, c)\n\ntuple_example = (1,2,3)\nprint(\"Tuple\", tuple_example[0])\n# tuple_example[1] = 3\n\nlist_example = [1,2,3]\nprint(\"List 1\", list_example[0])\nlist_example[1] = 4\nprint(\"List 2\", list_example[1])\n\ndict_example = {'one' : 1,\n 'two' : 2,\n 'three' : 3\n }\nprint(\"Dict\", dict_example['one'])", "Lists are very useful as most of the methods are build it, like for sorting, reversing, inserting, deleting, slicing, ...", "random_numbers = [1,64,78,13,54,34, \"Ravioli\"]\nprint(\"Length:\", len(random_numbers))\ntrue_random = random_numbers[0:5]\nprint(\"Sliced:\", true_random)\nprint(\"Sorted:\", sorted(true_random))\nprint(\"Max:\", max(true_random))\n\nrandom_numbers.remove(\"Ravioli\")\nprint(\"Removed:\", random_numbers)\n\nmulti_list = [\"A string\", [\"a\", \"list\"], (\"A\", \"Tuple\"), 5]\n\nprint(\"Concatenated list\", random_numbers + multi_list)", "CAVEAT: List can be dangerous and have unexpected behavior due to the default copy method (like pointers pointing to the same area of memory)", "cool_numbers = [0, 11, 42]\nother_numbers = cool_numbers\n\nprint(other_numbers)\n\ncool_numbers.append(300)\n\nprint(other_numbers)", "To avoid this problem usually slicing is used.", "cool_numbers = [0, 11, 42]\nother_numbers = cool_numbers[:]\n\nprint(other_numbers)\n\ncool_numbers.append(300)\n\nprint(other_numbers)", "String are considered list and slicing can be applied on strings, with a sleek behavior with respect to indeces:", "s = \"GNU Emacs\"\n# No problem with \"wrong\" index\nprint(s[4:100])\n# Backwards!\nprint(s[-9:-6])", "With a for loop it is possible to iterate over lists. 
(But attention not to modify the list over which for is iterating!)", "for num in cool_numbers:\n print(\"I like the number\", num)", "List can generate other list via list comprehension which is a functional way to operate on a list or a subset defined by if statements.", "numbers = [0, 1, 2, 3, 4, 5, 6, 7]\n\n# Numbers via list comprehension\n\nnumbers = [i for i in range(0,8)]\nprint(\"Numbers:\", numbers)\n\neven = [x for x in numbers if x%2 == 0]\nodd = [x for x in numbers if not x in even]\nprint(\"Even:\", even)\nprint(\"Odd:\", odd)", "Functions\nPython can have user-defined functions. There are some details about passing by reference or passing by value (what Python actually does is passing by assignment, details here: https://docs.python.org/3/faq/programming.html#how-do-i-write-a-function-with-output-parameters-call-by-reference). There are no return and arguments type but there is no overloading.", "def say_hello(to = \"Gabriele\"):\n print(\"Hello\", to)\n \nsay_hello()\nsay_hello(\"Albert\")\n\n\ndef sum_and_difference(a, b):\n return (a + b, a - b)\n\n(sum, diff) = sum_and_difference(10, 15)\nprint(\"Sum: {}, Diff: {}\".format(sum, diff))\n\ndef usless_box(a,b,c,d,e,f):\n return a,b,c,d,e,f\n\nfirst, _, _, _, _, _ = usless_box(100, 0, 1, 2, 3, 4)\n\nprint(first)", "A very useful construct is try-except that can be used to handle errors.", "hey = \"String\"\nohi = 6\n\ntry:\n print(hey/3)\nexcept:\n print(\"Error in hey!\")\n \ntry:\n print(ohi/3)\nexcept:\n print(\"Error in ohi!\")", "NOTE: Prefer this name convenction (no CamelCase) and space over tabs\nThere is full support to OOP with Ineheritance, Encapsulation and Polymorphism. (https://docs.python.org/3/tutorial/classes.html)\nShipped with battery included\nFor Python there exist a huge number of modules that extend the potentiality of Python. Here are some examples:\nOS\nos is a module for interacting with the system and with files", "# Modules have to be imported\n# In this way I import thw whole module\nimport os\n# To access an object inside the module I have to prepend the name\n\n# In this way I import only a function but I don't have to prepend the \n# module's name\nfrom os import getcwd \n\nprint(os.getcwd())\nprint(getcwd())", "os with Python's capability for manipulating string is a very simple way to interact with files and dir", "dir = \"test\"\nfiles = os.listdir(dir)\nprint(files)\n\n# Sorting\nfiles.sort()\nprint(files)\n\n# I take the subset starting with d and not ending with 10 and that are not directories\n\ndfiles = [f for f in files if f.startswith(\"d\") and not f.endswith(\"10\") and not os.path.isdir(f)]\n\nprint(dfiles)\n\nfor f in dfiles:\n data = f.split(\"_\")\n n1 = data[1]\n n2 = data[2]\n print(\"From the name of the file {} I have extrected {} {}\".format(f, n1, n2))", "Sys (and argparse)\nsys is another module for interactive with the system or to obtain information about it, in particular by means of the command line. \nargparse is a module for defining flags and arguments.", "import sys\n\n# sys provides the simplest way to pass command line arguments to a python script\nprint(sys.argv[0])\n\n\n# argparse is more flexible but requires also more setup", "NumPy\nNumpy is a module that provides a framework for numerical application. It defines new type of data highly optimized (NumPy is written in C) and provides simple interfaces for importing data from files and manipulate them. 
It is well integrated with the other scientific libraries for Python as it serves as base in many cases (SciPy, Matplotlib, Pandas, ...) Its fundamental object is the numpy array.\nWith good (enough) documentation!", "# Standard import\nimport numpy as np\n\n# Array from list\nnum = [0,1,2]\nprint(\"List:\", num)\n\nx = np.array(num)\nprint(\"Array:\", x)\n\ny = np.random.randint(3, size = (3))\nprint(\"Random\", y)\n\nz = np.array([x,y])\nprint(\"z:\", z)\nprint(\"Shape\", z.shape)\nzres = z.reshape(3,2)\nprint(\"z reshaped:\", zres)\n# Attention: numpy does not alter any object!\n\n# Operation behave well on arrays\ny3 = y + 3\nprint(\"y + 3:\", y3)\nprint(\"y squared:\", y**2)\n\n# Many built-in operations\nprint(\"Scalar product:\", np.dot(x,y))\n\n# Handy way to create an equispaced array\nxx = np.linspace(0, 15, 16)\nprint(\"xx:\", xx)\n\nyy = np.array([x**2 for x in xx])\nprint(\"yy:\", yy)\n\nzz = yy.reshape(4,4)\nprint(\"zz\", zz)\nprint(\"Eigenvalues:\", np.linalg.eigvals(zz))", "NumPy offers tools for:\n- Linear algebra\n- Logic functions\n- Datatypes\n- Constant of nature\n- Matematical functions (also special, as Hermite, Legendre...)\n- Polynomials\n- Statistics\n- Sorting, searching and counting\n- Fourier Transform\n- Random generation\n- Integration with C/C++ and Fortran code", "# Example: Polynomail x^2 + 2 x + 1\np = np.poly1d([1, 2, 1])\nprint(p)\n\n# Evaluate it at 1\nprint(\"p(1):\", p(1))\n\n# Find the roots\nprint(\"Roots:\", p.r)\n\n# Take derivative\nprint(\"Deriv:\", np.polyder(p))", "Interaction with files is really simple", "arr = np.random.random(10)\n\n# Prints a single column file, for arrays print many columns\nnp.savetxt(\"array.dat\", arr)\n\nfiles = os.listdir(\".\")\n\nprint([f for f in files if f == \"array.dat\"])\n\ndata = np.loadtxt(\"array.dat\")\n\nprint(data)", "It is possible to save data compressed in a gzip by appending tar.gz to the name of the file (in this case array.dat.tar.gz).\nREMEMBER:\n- To create: tar cvzf archive.tar.gz folder\n- To extract: tar xvzf archive.tar.gz\nMatplotlib\nMatplotlib is the tool for plotting and graphics", "import matplotlib.pyplot as plt\n\nplt.plot(arr)\nplt.ylabel('Some numbers')\nplt.xlabel('An index')\nplt.title(\"The title!\")\nplt.show()", "Matplotlib has a seamless integration with NumPy", "x = np.linspace(0,2 * np.pi, 100)\ny = np.sin(x)\nz = np.cos(x)\n\nplt.plot(x, y, \"r-\", x, z, \"g-\")\nplt.show()", "Matplotlib has a great library of examples (https://matplotlib.org/examples/) that in particular contains many of the most common plots (histograms, contour, scatter, pie, ...)", "# Plot of the Lorenz Attractor based on Edward Lorenz's 1963 \"Deterministic\n# Nonperiodic Flow\" publication.\n# http://journals.ametsoc.org/doi/abs/10.1175/1520-0469%281963%29020%3C0130%3ADNF%3E2.0.CO%3B2\n#\n# Note: Because this is a simple non-linear ODE, it would be more easily\n# done using SciPy's ode solver, but this approach depends only\n# upon NumPy.\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\n\ndef lorenz(x, y, z, s=10, r=28, b=2.667):\n x_dot = s*(y - x)\n y_dot = r*x - y - x*z\n z_dot = x*y - b*z\n return x_dot, y_dot, z_dot\n\n\ndt = 0.01\nstepCnt = 10000\n\n# Need one more for the initial values\nxs = np.empty((stepCnt + 1,))\nys = np.empty((stepCnt + 1,))\nzs = np.empty((stepCnt + 1,))\n\n# Setting initial values\nxs[0], ys[0], zs[0] = (0., 1., 1.05)\n\n# Stepping through \"time\".\nfor i in range(stepCnt):\n # Derivatives of the X, Y, Z state\n 
x_dot, y_dot, z_dot = lorenz(xs[i], ys[i], zs[i])\n xs[i + 1] = xs[i] + (x_dot * dt)\n ys[i + 1] = ys[i] + (y_dot * dt)\n zs[i + 1] = zs[i] + (z_dot * dt)\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\n\nax.plot(xs, ys, zs, lw=0.5)\nax.set_xlabel(\"X Axis\")\nax.set_ylabel(\"Y Axis\")\nax.set_zlabel(\"Z Axis\")\nax.set_title(\"Lorenz Attractor\")\n\nplt.show()", "SciPy\nSciPy is a module that relies on NumPy and provides many ready-made tools used in science. Examples:\n- Optimization\n- Integration\n- Interpolation\n- Signal processing\n- Statistics\nExample, minimize: $f\\left(\\mathbf{x}\\right)=\\sum_{i=1}^{N-1}100\\left(x_{i}-x_{i-1}^{2}\\right)^{2}+\\left(1-x_{i-1}\\right)^{2}.$", "import numpy as np\nfrom scipy.optimize import minimize\n\ndef rosen(x):\n \"\"\"The Rosenbrock function\"\"\"\n return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)\n\nx0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])\nres = minimize(rosen, x0, method='nelder-mead', options={'xtol': 1e-8, 'disp': True})\n\nprint(res.x)", "A complete example -- Dice rolls\nScripted!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rldotai/rlbench
rlbench/off_policy_comparison-short.ipynb
gpl-3.0
[ "%load_ext autoreload\n%autoreload 2\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport json\nimport numpy as np\nfrom toolz import pluck\n\nimport algos\nimport features\nimport parametric\nimport policy\nimport chicken\nfrom agents import OffPolicyAgent\nfrom rlbench import *\n\ndef run_contextual(agent, env, max_steps, params=dict()):\n ret = list()\n \n # reset the environment, get initial state, perform the run\n t = 0\n env.reset()\n s = env.state\n while not env.is_terminal() and t < max_steps:\n actions = env.actions\n a = agent.choose(s, actions)\n r, sp = env.do(a)\n \n # update the agent\n agent.update(s, a, r, sp, **params)\n \n # get the information from the agent\n ret.append(agent.get_context(s, a, r, sp))\n \n # prepare for next timestep\n t += 1\n s = sp\n\n # return the result\n return ret ", "True Values\nThe \"true\" values can be computed analytically in this case, so we did so.\nWe can also compute the distribution for weighting the errors.", "def compute_value_dct(theta_lst, features):\n return [{s: np.dot(theta, x) for s, x in features.items()} for theta in theta_lst]\n\ndef compute_values(theta_lst, X):\n return [np.dot(X, theta) for theta in theta_lst]\n\ndef compute_errors(value_lst, error_func):\n return [error_func(v) for v in value_lst]\n\ndef rmse_factory(true_values, d=None):\n true_values = np.ravel(true_values)\n \n # sensible default for weighting distribution\n if d is None:\n d = np.ones_like(true_values)\n else:\n d = np.ravel(d)\n assert(len(d) == len(true_values))\n \n # the actual root-mean square error\n def func(v):\n diff = true_values - v\n return np.sqrt(np.mean(d*diff**2))\n return func", "Comparing the Errors\nFor each algorithm, we get the associated experiment, and calculate the errors at each timestep, averaged over the runs performed with that algorithm.", "# define the experiment\nnum_states = 8\nnum_features = 6\nnum_active = 3\nnum_runs = 10\nmax_steps = 10000\n\n\n# set up environment\nenv = chicken.Chicken(num_states)\n\n# Define the target policy\npol_pi = policy.FixedPolicy({s: {0: 1} for s in env.states})\n# Define the behavior policy\npol_mu = policy.FixedPolicy({s: {0: 1} if s < 4 else {0: 0.5, 1: 0.5} for s in env.states})\n\n# state-dependent gamma\ngm_dct = {s: 0.9 for s in env.states}\ngm_dct[0] = 0\ngm_func = parametric.MapState(gm_dct)\ngm_p_func = parametric.MapNextState(gm_dct)\n\n# set up algorithm parameters\nupdate_params = {\n 'alpha': 0.02,\n 'beta': 0.002,\n 'gm': gm_func,\n 'gm_p': gm_p_func,\n 'lm': 0.0,\n 'lm_p': 0.0,\n 'interest': 1.0,\n}\n\n\n# Run all available algorithms \ndata = dict()\n\nfor name, alg in algos.algo_registry.items(): \n print(name)\n \n run_lst = []\n for i in range(num_runs):\n print(\"Run: %d\"%i, end=\"\\r\")\n episode_data = dict()\n \n # Want to use random features\n phi = features.RandomBinary(num_features, num_active)\n episode_data['features'] = {s: phi(s) for s in env.states}\n \n # Set up the agent\n _update_params = update_params.copy()\n if name == 'ETD':\n _update_params['alpha'] = 0.002\n\n agent = OffPolicyAgent(alg(phi.length), pol_pi, pol_mu, phi, _update_params)\n \n # Run the experiment\n episode_data['steps'] = run_contextual(agent, env, max_steps)\n \n run_lst.append(episode_data)\n data[name] = run_lst\n\n# True values & associated stationary distribution\ntheta_ls = np.array([ 0.4782969, 0.531441 , 0.59049, 0.6561, 0.729, 0.81, 0.9, 1.])\nd_pi = np.ones(num_states)/num_states\nD_pi = np.diag(d_pi)\n \n# define the error/objective function\nerr_func = 
rmse_factory(theta_ls, d=d_pi)\nbaseline = err_func(np.zeros(num_states))\n\nfor name, experiment in data.items():\n print(name)\n errors = []\n for episode in experiment:\n feats = experiment[0]['features']\n X = np.array([feats[k] for k in sorted(feats.keys())])\n steps = experiment[0]['steps']\n thetas = list(pluck('theta', steps))\n\n # compute the values at each step\n val_lst = compute_values(thetas, X)\n # compute the errors at each step\n err_lst = compute_errors(val_lst, err_func)\n errors.append(err_lst)\n \n # calculate the average error\n clipped_errs = np.clip(errors, 0, 100) \n avg_err = np.mean(clipped_errs, axis=0)\n \n # plot the errors \n fig, ax = plt.subplots()\n ax.plot(avg_err)\n \n \n # format the graph\n ax.set_ylim(1e-2, 2)\n ax.axhline(baseline, c='red')\n \n ax.set_yscale('log')\n plt.show()" ]
[ "code", "markdown", "code", "markdown", "code" ]
spulido99/NetworksAnalysis
santiagoangee/Ejercicios 1.2 Weak Ties & Random Networks.ipynb
mit
[ "Ejercicios Weak Ties & Random Networks\nEjercicios básicos de redes\nEjercicio Clustering Coeficient\niCalcule el coeficiente de clustering para cada nodo y en la red (sin dirección)", "edges = set([(1,2), (2,3), (2,4), (2,5), (4,5), (4,6), (5,6), (4,7)])\n\nfrom IPython.core.debugger import Tracer\nimport collections\nimport numpy as np\n\n\"\"\" Without NetworkX \"\"\"\n\nedges = set([(1,2), (2,3), (2,4), (2,5), (4,5), (4,6), (5,6), (4,7)])\n\ndef edges_to_graph(edges):\n edges = list(edges)\n graph = {}\n \n for i in range(0,len(edges)):\n \n if graph.get(edges[i][0], None):\n graph[edges[i][0]].add(edges[i][1])\n else:\n if len(edges[i]) == 2:\n graph[edges[i][0]] = set([edges[i][1]])\n else:\n graph[edges[i][0]] = set([])\n \n if len(edges[i]) == 2:\n if graph.get(edges[i][1], None):\n graph[edges[i][1]].add(edges[i][0])\n else:\n graph[edges[i][1]] = set([edges[i][0]])\n\n return graph\n\nG = edges_to_graph(edges)\n\n\ndef graph_to_tuples(graph):\n \n output_graph = []\n for node, neighbours in graph.items():\n output_graph.append((node,list(neighbours)))\n return output_graph\n\n\ndef element_neighbours(tuple_graph, element):\n \n \n for index, item in enumerate(tuple_graph):\n if element == item[0]:\n return item[1]\n \n raise IndexNotFoundError('Error: the requested element was not found')\n\n\ndef clustering_coefficient(graph):\n \n tuple_graph = graph_to_tuples(graph)\n L = np.zeros((len(tuple_graph),), dtype=np.int)\n\n for i in range(0, len(tuple_graph)):\n element_at_i = tuple_graph[i][0]\n for j in range(0, len(tuple_graph[i][1])-1):\n current = tuple_graph[i][1][j]\n for k in range(j+1, len(tuple_graph[i][1])):\n comparison = tuple_graph[i][1][k]\n # Search if there is a link\n if comparison in element_neighbours(tuple_graph, current):\n L[i] += 1\n\n C = {}\n \n for i in range(len(tuple_graph)):\n k = len(tuple_graph[i][1])\n if k >= 2:\n C[tuple_graph[i][0]] = float(2*L[i])/(k*(k-1))\n else:\n C[tuple_graph[i][0]] = 0.0\n \n return C\n\n\ndef average_clustering(graph):\n C = clustering_coefficient(graph)\n return float(sum(C.values()))/len(C)\n\nprint(clustering_coefficient(G))\nprint(average_clustering(G))\n\nimport networkx as nx\n\nG = nx.Graph()\nG.add_edges_from(edges)\n\nprint(nx.clustering(G))\nprint(nx.average_clustering(G))\n \n \n \n ", "Ejercicio Weigthed Netwroks\nCree una red no direccionada con los siguientes pesos.\n(a, b) = 0.3\n(a, c) = 1.0\n(a, d) = 0.9\n(a, e) = 1.0\n(a, f) = 0.4\n(c, f) = 0.2\n(b, h) = 0.2\n(f, j) = 0.8\n(f, g) = 0.9\n(j, g) = 0.6\n(g, k) = 0.4\n(g, h) = 0.2\n(k, h) = 1.0", "# To create a weighted, undirected graph, the edges must be provided in the form: (node1, node2, weight)\n\nedges = [('a', 'b', 0.3), ('a', 'c', 1.0), ('a', 'd', 0.9), ('a', 'e', 1.0), ('a', 'f', 0.4),\n ('c', 'f', 0.2), ('b', 'h', 0.2), ('f', 'j', 0.8), ('f', 'g', 0.9), ('j', 'g', 0.6),\n ('g', 'k', 0.4), ('g', 'h', 0.2), ('k', 'h', 1.0)]\n\ndef edges_to_weighted_graph(edges):\n edges = list(edges)\n graph = {}\n \n for i in range(0,len(edges)):\n \n if graph.get(edges[i][0], None):\n graph[edges[i][0]].add((edges[i][1], edges[i][2]))\n else:\n if len(edges[i]) == 3:\n graph[edges[i][0]] = set([(edges[i][1],edges[i][2])])\n else:\n graph[edges[i][0]] = set([])\n \n if len(edges[i]) == 3:\n if graph.get(edges[i][1], None):\n graph[edges[i][1]].add((edges[i][0],edges[i][2]))\n else:\n graph[edges[i][1]] = set([(edges[i][0],edges[i][2])])\n\n return graph\n\ngraph = edges_to_weighted_graph(edges)\n\nprint (graph)\n\n\"\"\" With NetworkX \"\"\"\n\nFG = 
nx.Graph()\n\nFG.add_weighted_edges_from(edges)\n\nprint (str(FG))", "Imprima la matriz de adyasencia", "def adjacency_matrix(graph):\n keys = list(graph.keys())\n keys.sort()\n \n adj_matrix = np.zeros((len(keys),len(keys)))\n \n for node, edges in graph.items():\n for edge in edges:\n adj_matrix[keys.index(node)][keys.index(edge[0])] = edge[1]\n \n return (adj_matrix, keys)\n\nprint (adjacency_matrix(graph))\n\n\"\"\" With NetworkX \"\"\"\nA = nx.adjacency_matrix(FG)\n\nprint (A)", "Ejercicio Weak & Strong ties\nCon la misma red anterior asuma que un link debil es inferior a 0.5, cree un código que calcule si se cumple la propiedad \"strong triadic closure\"", "def weighted_element_neighbours(tuple_graph, element):\n \n for index, item in enumerate(tuple_graph):\n if element[0] == item[0]:\n neighbours = [i[0] for i in item[1]]\n return neighbours\n \n raise IndexNotFoundError('Error: the requested element was not found')\n \n\ndef weighted_graph_to_tuples(graph):\n \n output_graph = []\n for node, neighbours in graph.items():\n output_graph.append((node,list(neighbours)))\n return output_graph\n\n\ndef triadic_closure(graph):\n \n tuple_graph = weighted_graph_to_tuples(graph)\n L = np.zeros((len(tuple_graph),), dtype=np.int)\n\n for i in range(0, len(tuple_graph)):\n element_at_i = tuple_graph[i][0]\n for j in range(0, len(tuple_graph[i][1])-1):\n current = tuple_graph[i][1][j]\n weight_current = current[1]\n if weight_current >= 0.5:\n for k in range(j+1, len(tuple_graph[i][1])):\n comparison = tuple_graph[i][1][k]\n weight_comparison = comparison[1]\n if weight_comparison >= 0.5:\n # Search if there is a link\n if not comparison[0] in weighted_element_neighbours(tuple_graph, current):\n return False\n\n return True\n\nprint(triadic_closure(graph))\n\nedges2 = [('a','b',0.1),('a','c',0.5),('a','d',0.9),('a','e',0.6),('c','d',0.1),('c','e',0.4),('d','e',0.9)]\n\ngraph2 = edges_to_weighted_graph(edges2)\n\nprint(triadic_closure(graph2))\n\n\n\"\"\" With NetworkX \"\"\"\n\n", "Cambie un peso de los links anteriores para que se deje de cumplir la propiedad y calcule si es cierto. Explique.\nEscriba un código que detecte puntes locales y que calcule el span de cada puente local", "import copy\n\"\"\" The following code is thought for unweighted graphs \"\"\"\n\nedges3 = [(1,2),(1,3),(1,5),(5,6),(2,6),(2,1),(2,4)]\nedges4 = [('a','b'),('a','c'),('a','d'),('a','e'),('a','f'),\n ('b','h'),('c','d'),('c','e'),('c','f'),('d','e'),\n ('f','j'),('f','g'),('j','g'),('g','k'),('g','h'),\n ('k','h')]\n\n\"\"\" This function was taken from Python Software Foundation.\n Python Patterns - Implementing Graphs. 
https://www.python.org/doc/essays/graphs/ \n (Visited in march 2017) \"\"\"\ndef find_shortest_path(graph, start, end, path=[]):\n path = path + [start]\n if start == end:\n return path\n if not start in graph:\n return None\n shortest = None\n for next in graph[start]:\n if next not in path:\n newpath = find_shortest_path(graph, next, end, path)\n if newpath:\n if not shortest or len(newpath) < len(shortest):\n shortest = newpath\n return shortest\n\n# Returns a tuple containing two values:\n# Input: an undirected graph G in form of a dict\n# (True, span) if there is a local bridge (span > 2) between two nodes\n# (True, None) if there is a bridge between two nodes\n# (False, None) otherwise\n\ndef bridge(graph, start, end):\n if not end in graph[start]:\n return (False, None)\n \n new_graph = copy.deepcopy(graph)\n new_graph[start] = graph[start] - {end}\n new_graph[end] = graph[end] - {start}\n span_path = find_shortest_path(new_graph, start, end)\n \n if not span_path:\n # Global bridge\n return (True, None)\n \n path_length = len(span_path) - 1\n if path_length > 2:\n return (True, path_length)\n elif path_length == 2:\n return (False, path_length)\n elif path_length == 1:\n raise MultiGraphNotAllowedError('Error: Multigraphs are not allowed')\n else:\n raise ReflexiveRelationsNotAllowedError('Error: Reflexive relations are not allowed')\n \n\ngraph3 = edges_to_graph(edges3)\n\n# Return the places of the graph where there is a bridge and the\n# span of each bridge as a vector of tuples in the form (start, end, span)\n\ndef local_bridges(graph):\n nodes = list(graph.keys())\n result = []\n for i in range(0, len(nodes)-1):\n node1 = nodes[i]\n for j in range(i+1, len(nodes)):\n node2 = nodes[j]\n brd = bridge(graph, nodes[i], nodes[j])\n if brd[0] and brd[1] != None:\n result.append((nodes[i],nodes[j],{'span':brd[1]}))\n \n return result\n \nbrds = local_bridges(graph3)\nprint(brds)\n\ngraph4 = edges_to_graph(edges4)\n\nprint(local_bridges(graph4))\n\ndef distance_matrix(graph):\n keys = list(graph.keys())\n keys.sort()\n \n d_matrix = np.zeros((len(keys),len(keys)))\n \n for i in range(0, len(keys)):\n for j in range(0, len(keys)):\n start = keys[i]\n end = keys[j]\n path = find_shortest_path(graph, start, end)\n d_matrix[i][j] = len(path)-1\n \n return (d_matrix, keys)\n\n\"\"\" With NetworkX \"\"\"\n\n", "Ejercicio Random Networks\ngenere 1000 redes aleatorias N = 12, p = 1/6 y grafique la distribución del número de enlaces", "import random\nimport seaborn as sns\n\n%matplotlib inline\n\nN = 12\np = float(1)/6\n\n\ndef random_network_links(N, p):\n edges = []\n \n for i in range(0, N-1):\n for j in range(i+1, N):\n rand = random.random()\n if rand <= p:\n edges.append((i+1,j+1))\n \n return edges\n\ndef random_network_links2(N, p):\n \n edges = []\n adj_matrix = np.zeros((N,N), dtype=int)\n \n for i in range(0, N-1):\n for j in range(i+1, N):\n rand = random.random()\n if rand <= p:\n edges.append((i+1,j+1))\n adj_matrix[i][j] = 1\n adj_matrix[j][i] = 1\n \n for i in range(0, N):\n if sum(adj_matrix[i]) == 0:\n edges.append((i+1,))\n \n return edges\n\n# Returns a number of random networks in the form of a list of edges\ndef random_networks(number_of_networks, N, p):\n \n networks = []\n for i in range(0, number_of_networks):\n networks.append(random_network_links2(N,p))\n \n return networks\n\ndef len_edges(edges_graph):\n result = 0\n for edge in edges_graph:\n if len(edge) == 2:\n result += 1\n return result\n\nnetworks1 = random_networks(1000,N,p)\nlen_edges1 = [len_edges(i) for i 
in networks1]\nax = sns.distplot(len_edges1)\n \n\"\"\" With NetworkX \"\"\"\n\ndef random_networks_nx(number_of_networks, N, p):\n \n networks = []\n for i in range(0, number_of_networks):\n G_ran = nx.gnp_random_graph(N,p)\n networks.append(G_ran)\n \n return networks\n\nnetworks2 = random_networks_nx(1000,N,p)\nlen_edges2 = [len(G.edges()) for G in networks2]\n\nsns.distplot(len_edges2)\n ", "Grafique la distribución del promedio de grados en cada una de las redes generadas del ejercicio anterior", "% matplotlib inline\n# Transform the list of lists of edges to a list of dicts, this is done to\n# calculate the average degree distribution in the next methods\n\nnetworks1_graph = [edges_to_graph(edges) for edges in networks1]\n\ndef degrees(graph):\n degrees = {}\n for node, links in graph.items():\n degrees[node] = len(links)\n return degrees\n\ndef avg_degree(graph):\n dgrs = degrees(graph)\n return float(sum(dgrs.values()))/len(dgrs)\n\navg_degrees1 = [avg_degree(network) for network in networks1_graph]\n\nax = sns.distplot(avg_degrees1)\n\n\n\"\"\" With NetworkX \"\"\"\ndef avg_degree_nx(graph):\n graph_degrees = graph.degree()\n return float(sum(graph_degrees.values()))/len(graph_degrees)\n\navg_degrees2 = [avg_degree_nx(network) for network in networks2]\n\nsns.distplot(avg_degrees2)", "Haga lo mismo para redes con 100 nodos", "% matplotlib inline\n\nnetworks100_1 = random_networks(1000, 100, p)\nnetworks100_2 = random_networks_nx(1000,100,p)\n\nlen_edges100_1 = [len_edges(i) for i in networks100_1]\n\nax = sns.distplot(len_edges100_1)\nlen_edges100_2 = [len(G.edges()) for G in networks100_2]\n\nsns.distplot(len_edges100_2)\n\nnetworks100_1_graph = [edges_to_graph(edges) for edges in networks100_1]\navg_degrees100_1 = [avg_degree(network) for network in networks100_1_graph]\n\navg_degrees100_2 = [avg_degree_nx(network) for network in networks100_2]\n\nax = sns.distplot(avg_degrees100_1)\nsns.distplot(avg_degrees100_2)\n", "Ejercicio Random Networks - Componente Gigante\nGrafique como crece el tamaño del componente más grande de una red aleatoria con N=100 nodos y diferentes valores de p\n(grafique con promedio de grado entre 0 y 4 cada 0.05)", "\"\"\" The following code snippet was taken from Mann, Edd. 
Depth-First Search and Breadth-First Search in Python.\n http://eddmann.com/posts/depth-first-search-and-breadth-first-search-in-python/ \"\"\"\n\ngraph5 = copy.deepcopy(graph4)\n\ngraph5['m'] = {'n'}\ngraph5['n'] = {'m'}\n\ndef bfs(graph, start):\n visited, queue = set(), collections.deque([start])\n while queue:\n vertex = queue.popleft()\n if vertex not in visited:\n visited.add(vertex)\n queue.extend(graph[vertex] - visited)\n return visited\n\n# return a list of lists of nodes of 'graph' each one being the nodes that\n# define a specific connected component of of 'graph'\n\ndef connected_components(graph):\n components = []\n nodes = set(graph.keys())\n while len(nodes):\n root = next(iter(nodes))\n visited = bfs(graph, root)\n components.append(visited)\n nodes = nodes - visited\n \n return components\n\n# Returns a set containing the nodes of a graph's biggest component\ndef biggest_component_nodes(graph):\n components = connected_components(graph)\n lengths = [len(component) for component in components]\n \n max_component = 0\n max_index = -1\n for i in range(0, len(lengths)):\n if lengths[i] > max_component:\n max_component = lengths[i]\n max_index = i\n \n return components[max_index]\n\n# Returns a subgraph containing the biggest connected component of 'graph'\ndef biggest_component(graph):\n nodes = biggest_component_nodes(graph)\n nodes = list(nodes)\n subgraph = {k:graph[k] for k in nodes if k in graph}\n \n return subgraph\n\n\n# Plot results\nimport matplotlib.pyplot as plt\nimport plotly.plotly as py\nfrom plotly.graph_objs import Scatter, Figure, Layout\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\n\ninit_notebook_mode(connected=True)\n\ndef plot_giant_component_growth(N):\n p_vector = []\n avg_degree_vector = []\n \n p = 0.0\n while p <= 1:\n p_vector.append(p)\n network = random_network_links2(N,p)\n network = edges_to_graph(network)\n \n component = biggest_component(network)\n \n avg_degree_vector.append(avg_degree(component))\n p += 0.05\n \n plt.plot(p_vector, avg_degree_vector, \"o\")\n\nplot_giant_component_growth(100)\n", "Grafique cuál es el porcentaje de nodos del componente más grande para diferentes valores de p", "def plot_giant_component_growth_nodes(N):\n p_vector = []\n node_percentages = []\n \n p = 0.0\n while p <= 1:\n p_vector.append(p)\n network = random_network_links2(N,p)\n network = edges_to_graph(network)\n \n component = biggest_component(network)\n component_percentage = float(len(component))/len(network)\n \n node_percentages.append(component_percentage)\n p += 0.001\n \n plt.plot(p_vector, node_percentages, \"o\")\n \nplot_giant_component_growth_nodes(100)", "Identifique para que valores de p el componente mas grande esta totalmente interconectado", "def identify_p_value_for_total_connection(N):\n p = 0.0\n while p <= 1:\n network = random_network_links2(N,p)\n network = edges_to_graph(network)\n \n component = biggest_component(network)\n component_percentage = float(len(component))/len(network)\n \n if component_percentage == 1:\n return p\n p += 0.001\n \n return 1 # Default value for a totally connected component\n \nidentify_p_value_for_total_connection(100)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
machinelearningnanodegree/stanford-cs231
solutions/pranay/assignment1/knn.ipynb
mit
[ "k-Nearest Neighbor (kNN) exercise\nComplete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.\nThe kNN classifier consists of two stages:\n\nDuring training, the classifier takes the training data and simply remembers it\nDuring testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples\nThe value of k is cross-validated\n\nIn this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.", "# Run some setup code for this notebook.\n\nimport random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n\n# This is a bit of magic to make matplotlib figures appear inline in the notebook\n# rather than in a new window.\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\n# Load the raw CIFAR-10 data.\ncifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\nX_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n\n# As a sanity check, we print out the size of the training and test data.\nprint 'Training data shape: ', X_train.shape\nprint 'Training labels shape: ', y_train.shape\nprint 'Test data shape: ', X_test.shape\nprint 'Test labels shape: ', y_test.shape\n\n# Visualize some examples from the dataset.\n# We show a few examples of training images from each class.\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nnum_classes = len(classes)\nsamples_per_class = 7\nfor y, cls in enumerate(classes):\n idxs = np.flatnonzero(y_train == y)\n idxs = np.random.choice(idxs, samples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt_idx = i * num_classes + y + 1\n plt.subplot(samples_per_class, num_classes, plt_idx)\n plt.imshow(X_train[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls)\nplt.show()\n\n# Subsample the data for more efficient code execution in this exercise\nnum_training = 5000\nmask = range(num_training)\nX_train = X_train[mask]\ny_train = y_train[mask]\n\nnum_test = 500\nmask = range(num_test)\nX_test = X_test[mask]\ny_test = y_test[mask]\n\n# Reshape the image data into rows\nX_train = np.reshape(X_train, (X_train.shape[0], -1))\nX_test = np.reshape(X_test, (X_test.shape[0], -1))\nprint X_train.shape, X_test.shape\n\nfrom cs231n.classifiers import KNearestNeighbor\n\n# Create a kNN classifier instance. \n# Remember that training a kNN classifier is a noop: \n# the Classifier simply remembers the data and does no further processing \nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)", "We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: \n\nFirst we must compute the distances between all test examples and all train examples. 
\nGiven these distances, for each test example we find the k nearest examples and have them vote for the label\n\nLets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.\nFirst, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.", "# Open cs231n/classifiers/k_nearest_neighbor.py and implement\n# compute_distances_two_loops.\n\n# Test your implementation:\ndists = classifier.compute_distances_two_loops(X_test)\nprint dists.shape\n\n# We can visualize the distance matrix: each row is a single test example and\n# its distances to training examples\nplt.imshow(dists, interpolation='none')\nplt.show()", "Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)\n\nWhat in the data is the cause behind the distinctly bright rows?\nWhat causes the columns?\n\nYour Answer: fill this in.", "# Now implement the function predict_labels and run the code below:\n# We use k = 1 (which is Nearest Neighbor).\ny_test_pred = classifier.predict_labels(dists, k=1)\n\n# Compute and print the fraction of correctly predicted examples\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)", "You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5:", "y_test_pred = classifier.predict_labels(dists, k=5)\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)", "You should expect to see a slightly better performance than with k = 1.", "# Now lets speed up distance matrix computation by using partial vectorization\n# with one loop. Implement the function compute_distances_one_loop and run the\n# code below:\ndists_one = classifier.compute_distances_one_loop(X_test)\n\n# To ensure that our vectorized implementation is correct, we make sure that it\n# agrees with the naive implementation. There are many ways to decide whether\n# two matrices are similar; one of the simplest is the Frobenius norm. In case\n# you haven't seen it before, the Frobenius norm of two matrices is the square\n# root of the squared sum of differences of all elements; in other words, reshape\n# the matrices into vectors and compute the Euclidean distance between them.\ndifference = np.linalg.norm(dists - dists_one, ord='fro')\nprint 'Difference was: %f' % (difference, )\nif difference < 0.001:\n print 'Good! The distance matrices are the same'\nelse:\n print 'Uh-oh! The distance matrices are different'\n\n# Now implement the fully vectorized version inside compute_distances_no_loops\n# and run the code\ndists_two = classifier.compute_distances_no_loops(X_test)\n\n# check that the distance matrix agrees with the one we computed before:\ndifference = np.linalg.norm(dists - dists_two, ord='fro')\nprint 'Difference was: %f' % (difference, )\nif difference < 0.001:\n print 'Good! 
The distance matrices are the same'\nelse:\n print 'Uh-oh! The distance matrices are different'\n\n# Let's compare how fast the implementations are\ndef time_function(f, *args):\n \"\"\"\n Call a function f with args and return the time (in seconds) that it took to execute.\n \"\"\"\n import time\n tic = time.time()\n f(*args)\n toc = time.time()\n return toc - tic\n\ntwo_loop_time = time_function(classifier.compute_distances_two_loops, X_test)\nprint 'Two loop version took %f seconds' % two_loop_time\n\none_loop_time = time_function(classifier.compute_distances_one_loop, X_test)\nprint 'One loop version took %f seconds' % one_loop_time\n\nno_loop_time = time_function(classifier.compute_distances_no_loops, X_test)\nprint 'No loop version took %f seconds' % no_loop_time\n\n# you should see significantly faster performance with the fully vectorized implementation", "Cross-validation\nWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.", "num_folds = 5\nk_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]\n\nX_train_folds = []\ny_train_folds = []\n################################################################################\n# TODO: #\n# Split up the training data into folds. After splitting, X_train_folds and #\n# y_train_folds should each be lists of length num_folds, where #\n# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #\n# Hint: Look up the numpy array_split function. #\n################################################################################\npass\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# A dictionary holding the accuracies for different values of k that we find\n# when running cross-validation. After running cross-validation,\n# k_to_accuracies[k] should be a list of length num_folds giving the different\n# accuracy values that we found when using that value of k.\nk_to_accuracies = {}\n\n\n################################################################################\n# TODO: #\n# Perform k-fold cross validation to find the best value of k. For each #\n# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #\n# where in each case you use all but one of the folds as training data and the #\n# last fold as a validation set. Store the accuracies for all fold and all #\n# values of k in the k_to_accuracies dictionary. 
#\n################################################################################\npass\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out the computed accuracies\nfor k in sorted(k_to_accuracies):\n for accuracy in k_to_accuracies[k]:\n print 'k = %d, accuracy = %f' % (k, accuracy)\n\n# plot the raw observations\nfor k in k_choices:\n accuracies = k_to_accuracies[k]\n plt.scatter([k] * len(accuracies), accuracies)\n\n# plot the trend line with error bars that correspond to standard deviation\naccuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])\naccuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])\nplt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)\nplt.title('Cross-validation on k')\nplt.xlabel('k')\nplt.ylabel('Cross-validation accuracy')\nplt.show()\n\n# Based on the cross-validation results above, choose the best value for k, \n# retrain the classifier using all the training data, and test it on the test\n# data. You should be able to get above 28% accuracy on the test data.\nbest_k = 1\n\nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)\ny_test_pred = classifier.predict(X_test, k=best_k)\n\n# Compute and display the accuracy\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.13/_downloads/plot_sensor_permutation_test.ipynb
bsd-3-clause
[ "%matplotlib inline", "Permutation T-test on sensor data\nOne tests if the signal significantly deviates from 0\nduring a fixed time window of interest. Here computation\nis performed on MNE sample dataset between 40 and 60 ms.", "# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\n\nimport mne\nfrom mne import io\nfrom mne.stats import permutation_t_test\nfrom mne.datasets import sample\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id = 1\ntmin = -0.2\ntmax = 0.5\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Set up pick list: MEG + STI 014 - bad channels (modify to your needs)\ninclude = [] # or stim channel ['STI 014']\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more\n\n# pick MEG Gradiometers\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,\n include=include, exclude='bads')\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6))\ndata = epochs.get_data()\ntimes = epochs.times\n\ntemporal_mask = np.logical_and(0.04 <= times, times <= 0.06)\ndata = np.mean(data[:, :, temporal_mask], axis=2)\n\nn_permutations = 50000\nT0, p_values, H0 = permutation_t_test(data, n_permutations, n_jobs=1)\n\nsignificant_sensors = picks[p_values <= 0.05]\nsignificant_sensors_names = [raw.ch_names[k] for k in significant_sensors]\n\nprint(\"Number of significant sensors : %d\" % len(significant_sensors))\nprint(\"Sensors names : %s\" % significant_sensors_names)", "View location of significantly active sensors", "evoked = mne.EvokedArray(-np.log10(p_values)[:, np.newaxis],\n epochs.info, tmin=0.)\n\n# Extract mask and indices of active sensors in layout\nstats_picks = mne.pick_channels(evoked.ch_names, significant_sensors_names)\nmask = p_values[:, np.newaxis] <= 0.05\n\nevoked.plot_topomap(ch_type='grad', times=[0], scale=1,\n time_format=None, cmap='Reds', vmin=0., vmax=np.max,\n unit='-log10(p)', cbar_fmt='-%0.1f', mask=mask,\n size=3, show_names=lambda x: x[4:] + ' ' * 20)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
brooksandrew/simpleblog
_ipynb/2017-12-01-sleeping-giant-rural-postman-problem.ipynb
mit
[ "import sys\nsys.path.append(\"..\") # Adds higher directory to python modules path.\n\n%load_ext autoreload\n%autoreload 2", "This problem originated from a blog post I wrote for DataCamp on graph optimization here. The algorithm I sketched out there for solving the Chinese Problem on the Sleeping Giant state park trail network has since been formalized into the postman_problems python library. I've also added the Rural Postman solver that is implemented here.\nSo the three main enhancements in this post from the original DataCamp article and my second iteration published here updating to networkx 2.0 are:\n1. OpenStreetMap for graph data and visualization.\n2. Implementing the Rural Postman algorithm to consider optional edges.\n3. Leveraging the postman_problems library.\nThis code, notebook and data for this post can be found in the postman_problems_examples repo.\nThe motivation and background around this problem is written up more thoroughly in the previous posts and postman_problems.\nTable of Contents\n\nTable of Contents\n{:toc}", "import mplleaflet\nimport networkx as nx\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom collections import Counter\n\n# can be found in https://github.com/brooksandrew/postman_problems_examples\nfrom osm2nx import read_osm, haversine\nfrom graph import contract_edges, create_rpp_edgelist\n\nfrom postman_problems.tests.utils import create_mock_csv_from_dataframe\nfrom postman_problems.solver import rpp, cpp\nfrom postman_problems.stats import calculate_postman_solution_stats", "Create Graph from OSM", "# load OSM to a directed NX\ng_d = read_osm('sleepinggiant.osm') \n\n# create an undirected graph\ng = g_d.to_undirected()", "Adding edges that don't exist on OSM, but should", "g.add_edge('2318082790', '2318082832', id='white_horseshoe_fix_1')", "Adding distance to OSM graph\nUsing the haversine formula to calculate distance between each edge.", "for e in g.edges(data=True):\n e[2]['distance'] = haversine(g.node[e[0]]['lon'], \n g.node[e[0]]['lat'], \n g.node[e[1]]['lon'], \n g.node[e[1]]['lat'])", "Create graph of required trails only\nA simple heuristic with a couple tweaks is all we need to create the graph with required edges:\n\nKeep any edge with 'Trail' in the name attribute.\nManually remove the handful of trails that are not part of the required Giant Master route.", "g_t = g.copy()\n\nfor e in g.edges(data=True):\n \n # remove non trails\n name = e[2]['name'] if 'name' in e[2] else ''\n if ('Trail' not in name.split()) or (name is None):\n g_t.remove_edge(e[0], e[1])\n \n # remove non Sleeping Giant trails\n elif name in [\n 'Farmington Canal Linear Trail', \n 'Farmington Canal Heritage Trail', \n 'Montowese Trail',\n '(white blazes)']:\n g_t.remove_edge(e[0], e[1])\n\n# cleaning up nodes left without edges\nfor n in nx.isolates(g_t.copy()):\n g_t.remove_node(n)", "Viz Sleeping Giant Trails\nAll trails required for the Giant Master:", "fig, ax = plt.subplots(figsize=(1,8))\n\npos = {k: (g_t.node[k]['lon'], g_t.node[k]['lat']) for k in g_t.nodes()} \nnx.draw_networkx_edges(g_t, pos, width=2.5, edge_color='black', alpha=0.7)\n\nmplleaflet.save_html(fig, 'maps/sleepinggiant_trails_only.html')", "<iframe src=\"https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/sleepinggiant_trails_only.html\" height=\"400\" width=\"750\"></iframe>\n\nConnect Edges\nIn order to run the RPP algorithm from postman_problems, the required edges of the graph must form a single connected component. 
We're almost there with the Sleeping Giant trail map as-is, so we'll just connect a few components manually. \nHere's an example of a few floating components (southwest corner of park):\n<img src=\"https://github.com/brooksandrew/postman_problems_examples/raw/master/sleepinggiant/fig/sleepinggiant_disconnected_components.png\" width=\"500\">\nOpenStreetMap makes finding these edge (way) IDs simple. Once grabbing the ? cursor, you can click on any edge to retrieve IDs and attributes. \n<img src=\"https://github.com/brooksandrew/postman_problems_examples/raw/master/sleepinggiant/fig/osm_edge_lookup.png\" width=\"1000\">\nDefine OSM edges to add and remove from graph", "edge_ids_to_add = [\n '223082783', \n '223077827', \n '40636272', \n '223082785', \n '222868698',\n '223083721',\n '222947116',\n '222711152',\n '222711155',\n '222860964',\n '223083718',\n '222867540',\n 'white_horseshoe_fix_1'\n]\n\nedge_ids_to_remove = [\n '17220599'\n]", "Add attributes for supplementary edges", "for e in g.edges(data=True):\n way_id = e[2].get('id').split('-')[0]\n if way_id in edge_ids_to_add:\n g_t.add_edge(e[0], e[1], **e[2])\n g_t.add_node(e[0], lat=g.node[e[0]]['lat'], lon=g.node[e[0]]['lon'])\n g_t.add_node(e[1], lat=g.node[e[1]]['lat'], lon=g.node[e[1]]['lon'])\n if way_id in edge_ids_to_remove:\n if g_t.has_edge(e[0], e[1]):\n g_t.remove_edge(e[0], e[1])\n \nfor n in nx.isolates(g_t.copy()):\n g_t.remove_node(n)", "Ensuring that we're left with one single connected component:", "len(list(nx.connected_components(g_t)))", "Viz Connected Component\nThe map below visualizes the required edges and nodes of interest (intersections and dead-ends where degree != 2):", "fig, ax = plt.subplots(figsize=(1,12))\n\n# edges\npos = {k: (g_t.node[k].get('lon'), g_t.node[k].get('lat')) for k in g_t.nodes()} \nnx.draw_networkx_edges(g_t, pos, width=3.0, edge_color='black', alpha=0.6)\n\n# nodes (intersections and dead-ends)\npos_x = {k: (g_t.node[k]['lon'], g_t.node[k]['lat']) for k in g_t.nodes() if (g_t.degree(k)==1) | (g_t.degree(k)>2)} \nnx.draw_networkx_nodes(g_t, pos_x, nodelist=pos_x.keys(), node_size=35.0, node_color='red', alpha=0.9)\n\nmplleaflet.save_html(fig, 'maps/trails_only_intersections.html')", "<iframe src=\"https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/trails_only_intersections.html\" height=\"400\" width=\"750\"></iframe>\n\nViz Trail Color\nBecause we can and it's pretty.", "name2color = {\n 'Green Trail': 'green',\n 'Quinnipiac Trail': 'blue',\n 'Tower Trail': 'black',\n 'Yellow Trail': 'yellow',\n 'Red Square Trail': 'red',\n 'White/Blue Trail Link': 'lightblue',\n 'Orange Trail': 'orange',\n 'Mount Carmel Avenue': 'black',\n 'Violet Trail': 'violet',\n 'blue Trail': 'blue',\n 'Red Triangle Trail': 'red',\n 'Blue Trail': 'blue',\n 'Blue/Violet Trail Link': 'purple',\n 'Red Circle Trail': 'red',\n 'White Trail': 'gray',\n 'Red Diamond Trail': 'red',\n 'Yellow/Green Trail Link': 'yellowgreen',\n 'Nature Trail': 'forestgreen',\n 'Red Hexagon Trail': 'red',\n None: 'black'\n}\n\nfig, ax = plt.subplots(figsize=(1,10))\n \npos = {k: (g_t.node[k]['lon'], g_t.node[k]['lat']) for k in g_t.nodes()} \ne_color = [name2color[e[2].get('name')] for e in g_t.edges(data=True)]\nnx.draw_networkx_edges(g_t, pos, width=3.0, edge_color=e_color, alpha=0.5)\nnx.draw_networkx_nodes(g_t, pos_x, nodelist=pos_x.keys(), node_size=30.0, node_color='black', alpha=0.9)\n\nmplleaflet.save_html(fig, 'maps/trails_only_color.html', tiles='cartodb_positron')", "<iframe 
src=\"https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/trails_only_color.html\" height=\"400\" width=\"750\"></iframe>\n\nCheck distance\nThis is strikingly close (within 0.25 miles) to what I calculated manually with some guess work from the SG trail map on the first pass at this problem here, before leveraging OSM.", "print('{:0.2f} miles of required trail.'.format(sum([e[2]['distance']/1609.34 for e in g_t.edges(data=True)])))", "Contract Edges\nWe could run the RPP algorithm on the graph as-is with >5000 edges. However, we can simplify computation by contracting edges into logical trail segments first. More details on the intuition and methodology in the 50 states post.", "print('Number of edges in trail graph: {}'.format(len(g_t.edges())))\n\n# intialize contracted graph\ng_tc = nx.MultiGraph()\n\n# add contracted edges to graph\nfor ce in contract_edges(g_t, 'distance'):\n start_node, end_node, distance, path = ce\n \n contracted_edge = {\n 'start_node': start_node,\n 'end_node': end_node,\n 'distance': distance,\n 'name': g[path[0]][path[1]].get('name'),\n 'required': 1,\n 'path': path\n }\n \n g_tc.add_edge(start_node, end_node, **contracted_edge)\n g_tc.node[start_node]['lat'] = g.node[start_node]['lat']\n g_tc.node[start_node]['lon'] = g.node[start_node]['lon']\n g_tc.node[end_node]['lat'] = g.node[end_node]['lat']\n g_tc.node[end_node]['lon'] = g.node[end_node]['lon']", "Edge contraction reduces the number of edges fed to the RPP algorithm by a factor of ~40.", "print('Number of edges in contracted trail graoh: {}'.format(len(g_tc.edges())))", "Solve CPP\nFirst, let's see how well the Chinese Postman solution works.\nCreate CPP edgelist", "# create list with edge attributes and \"from\" & \"to\" nodes\ntmp = []\nfor e in g_tc.edges(data=True):\n tmpi = e[2].copy() # so we don't mess w original graph\n tmpi['start_node'] = e[0]\n tmpi['end_node'] = e[1]\n tmp.append(tmpi)\n \n# create dataframe w node1 and node2 in order\neldf = pd.DataFrame(tmp) \neldf = eldf[['start_node', 'end_node'] + list(set(eldf.columns)-{'start_node', 'end_node'})]\n\n# create edgelist mock CSV\nelfn = create_mock_csv_from_dataframe(eldf)", "Start node\nThe route is designed to start at the far east end of the park on the Blue trail (node '735393342'). 
While the CPP and RPP solutions will return a Eulerian circuit (loop back to the starting node), we could truncate this last long doublebacking segment when actually running the route\n<img src=\"https://github.com/brooksandrew/postman_problems_examples/raw/master/sleepinggiant/fig/sleepinggiant_starting_node.png\" width=\"600\">\nSolve", "circuit_cpp, gcpp = cpp(elfn, start_node='735393342')", "CPP Stats\n(distances in meters)", "cpp_stats = calculate_postman_solution_stats(circuit_cpp)\ncpp_stats\n\nprint('Miles in CPP solution: {:0.2f}'.format(cpp_stats['distance_walked']/1609.34))", "Solve RPP\nWith the CPP as benchmark, let's see how well we do when we allow for optional edges in the route.", "%%time\ndfrpp = create_rpp_edgelist(g_tc, \n graph_full=g, \n edge_weight='distance', \n max_distance=2500)", "Required vs optional edge counts\n(1=required and 0=optional)", "Counter( dfrpp['required'])", "Solve RPP", "# create mockfilename\nelfn = create_mock_csv_from_dataframe(dfrpp)\n\n%%time\n# solve\ncircuit_rpp, grpp = rpp(elfn, start_node='735393342')", "RPP Stats\n(distances in meters)", "rpp_stats = calculate_postman_solution_stats(circuit_rpp)\nrpp_stats", "Leveraging the optional roads and trails, we're able to shave a about 3 miles off the CPP route. Total mileage checks in at 30.71, just under a 50K (30.1 miles).", "print('Miles in RPP solution: {:0.2f}'.format(rpp_stats['distance_walked']/1609.34))", "Viz RPP Solution", "# hack to convert 'path' from str back to list. Caused by `create_mock_csv_from_dataframe`\nfor e in circuit_rpp:\n if type(e[3]['path']) == str:\n exec('e[3][\"path\"]=' + e[3][\"path\"])", "Create graph from RPP solution", "g_tcg = g_tc.copy()\n\n# calc shortest path between optional nodes and add to graph\nfor e in circuit_rpp:\n granular_type = 'trail' if e[3]['required'] else 'optional'\n \n # add granular optional edges to g_tcg\n path = e[3]['path']\n for pair in list(zip(path[:-1], path[1:])):\n if (g_tcg.has_edge(pair[0], pair[1])) and (g_tcg[pair[0]][pair[1]][0].get('granular_type') == 'optional'):\n g_tcg[pair[0]][pair[1]][0]['granular_type'] = 'trail'\n else:\n g_tcg.add_edge(pair[0], pair[1], granular='True', granular_type=granular_type)\n \n # add granular nodes from optional edge paths to g_tcg\n for n in path:\n g_tcg.add_node(n, lat=g.node[n]['lat'], lon=g.node[n]['lon'])", "Viz: RPP optional edges\nThe RPP algorithm picks up some logical shortcuts using the optional trails and a couple short stretches of road.\n\n<font color='black'>black</font>: required trails\n<font color='blue'>blue</font>: optional trails and roads", "fig, ax = plt.subplots(figsize=(1,8))\n\npos = {k: (g_tcg.node[k].get('lon'), g_tcg.node[k].get('lat')) for k in g_tcg.nodes()} \n\nel_opt = [e for e in g_tcg.edges(data=True) if e[2].get('granular_type') == 'optional'] \nnx.draw_networkx_edges(g_tcg, pos, edgelist=el_opt, width=6.0, edge_color='blue', alpha=1.0)\n\nel_tr = [e for e in g_tcg.edges(data=True) if e[2].get('granular_type') == 'trail']\nnx.draw_networkx_edges(g_tcg, pos, edgelist=el_tr, width=3.0, edge_color='black', alpha=0.8)\n\nmplleaflet.save_html(fig, 'maps/rpp_solution_opt_edges.html', tiles='cartodb_positron')", "<iframe src=\"https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/rpp_solution_opt_edges.html\" height=\"400\" width=\"750\"></iframe>\n\nViz: RPP edges counts", "## Create graph directly from rpp_circuit and original graph w lat/lon (g)\ncolor_seq = [None, 'black', 'magenta', 'orange', 'yellow']\ngrppviz = 
nx.MultiGraph()\n\nfor e in circuit_rpp:\n for n1, n2 in zip(e[3]['path'][:-1], e[3]['path'][1:]):\n if grppviz.has_edge(n1, n2):\n grppviz[n1][n2][0]['linewidth'] += 2\n grppviz[n1][n2][0]['cnt'] += 1\n else: \n grppviz.add_edge(n1, n2, linewidth=2.5)\n grppviz[n1][n2][0]['color_st'] = 'black' if g_t.has_edge(n1, n2) else 'red'\n grppviz[n1][n2][0]['cnt'] = 1\n grppviz.add_node(n1, lat=g.node[n1]['lat'], lon=g.node[n1]['lon'])\n grppviz.add_node(n2, lat=g.node[n2]['lat'], lon=g.node[n2]['lon']) \n\nfor e in grppviz.edges(data=True):\n e[2]['color_cnt'] = color_seq[1] if 'cnt' not in e[2] else color_seq[e[2]['cnt'] ]\n ", "Edge walks per color: \n<font color='black'>black</font>: 1 <br>\n<font color='magenta'>magenta</font>: 2 <br>", "fig, ax = plt.subplots(figsize=(1,10))\n\npos = {k: (grppviz.node[k]['lon'], grppviz.node[k]['lat']) for k in grppviz.nodes()} \ne_width = [e[2]['linewidth'] for e in grppviz.edges(data=True)]\ne_color = [e[2]['color_cnt'] for e in grppviz.edges(data=True)]\nnx.draw_networkx_edges(grppviz, pos, width=e_width, edge_color=e_color, alpha=0.7)\n\nmplleaflet.save_html(fig, 'maps/rpp_solution_edge_cnts.html', tiles='cartodb_positron')", "<iframe src=\"https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/rpp_solution_edge_cnts.html\" height=\"400\" width=\"750\"></iframe>\n\nCreate geojson solution\nUsed for the forthcoming D3 route animation.", "geojson = {'features':[], 'type': 'FeatureCollection'}\ntime = 0\npath = list(reversed(circuit_rpp[0][3]['path']))\n\nfor e in circuit_rpp:\n if e[3]['path'][0] != path[-1]: \n path = list(reversed(e[3]['path']))\n else:\n path = e[3]['path']\n \n for n in path:\n time += 1\n doc = {'type': 'Feature',\n 'properties': {\n 'latitude': g.node[n]['lat'],\n 'longitude': g.node[n]['lon'],\n 'time': time,\n 'id': e[3].get('id')\n },\n 'geometry':{\n 'type': 'Point',\n 'coordinates': [g.node[n]['lon'], g.node[n]['lat']]\n }\n }\n geojson['features'].append(doc)\n \n\nwith open('circuit_rpp.geojson','w') as f:\n json.dump(geojson, f)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/zh-cn/hub/tutorials/object_detection.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Hub Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================", "对象检测\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://tensorflow.google.cn/hub/tutorials/object_detection\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">View 在 TensorFlow.org 上查看</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/object_detection.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 中运行 </a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/object_detection.ipynb\"> <img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\"> 在 GitHub 上查看源代码</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/object_detection.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\">下载笔记本</a></td>\n <td><a href=\"https://tfhub.dev/s?q=google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1%20OR%20google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1\"><img src=\"https://tensorflow.google.cn/images/hub_logo_32px.png\">查看 TF Hub 模型</a></td>\n</table>\n\n此 Colab 演示如何使用经过训练的 TF-Hub 模块执行对象检测。\n设置", "#@title Imports and function definitions\n\n# For running inference on the TF-Hub module.\nimport tensorflow as tf\n\nimport tensorflow_hub as hub\n\n# For downloading the image.\nimport matplotlib.pyplot as plt\nimport tempfile\nfrom six.moves.urllib.request import urlopen\nfrom six import BytesIO\n\n# For drawing onto the image.\nimport numpy as np\nfrom PIL import Image\nfrom PIL import ImageColor\nfrom PIL import ImageDraw\nfrom PIL import ImageFont\nfrom PIL import ImageOps\n\n# For measuring the inference time.\nimport time\n\n# Print Tensorflow version\nprint(tf.__version__)\n\n# Check available GPU devices.\nprint(\"The following GPU devices are available: %s\" % tf.test.gpu_device_name())", "使用示例\n用于下载图像和可视化的辅助函数。\n为了实现最简单的必需功能,根据 TF 对象检测 API 改编了可视化代码。", "def display_image(image):\n fig = plt.figure(figsize=(20, 15))\n plt.grid(False)\n plt.imshow(image)\n\n\ndef download_and_resize_image(url, new_width=256, new_height=256,\n display=False):\n _, filename = tempfile.mkstemp(suffix=\".jpg\")\n response = urlopen(url)\n image_data = response.read()\n image_data = BytesIO(image_data)\n pil_image = Image.open(image_data)\n pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.ANTIALIAS)\n pil_image_rgb = pil_image.convert(\"RGB\")\n pil_image_rgb.save(filename, format=\"JPEG\", quality=90)\n print(\"Image downloaded to %s.\" % filename)\n if display:\n display_image(pil_image)\n return filename\n\n\ndef 
draw_bounding_box_on_image(image,\n ymin,\n xmin,\n ymax,\n xmax,\n color,\n font,\n thickness=4,\n display_str_list=()):\n \"\"\"Adds a bounding box to an image.\"\"\"\n draw = ImageDraw.Draw(image)\n im_width, im_height = image.size\n (left, right, top, bottom) = (xmin * im_width, xmax * im_width,\n ymin * im_height, ymax * im_height)\n draw.line([(left, top), (left, bottom), (right, bottom), (right, top),\n (left, top)],\n width=thickness,\n fill=color)\n\n # If the total height of the display strings added to the top of the bounding\n # box exceeds the top of the image, stack the strings below the bounding box\n # instead of above.\n display_str_heights = [font.getsize(ds)[1] for ds in display_str_list]\n # Each display_str has a top and bottom margin of 0.05x.\n total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights)\n\n if top > total_display_str_height:\n text_bottom = top\n else:\n text_bottom = top + total_display_str_height\n # Reverse list and print from bottom to top.\n for display_str in display_str_list[::-1]:\n text_width, text_height = font.getsize(display_str)\n margin = np.ceil(0.05 * text_height)\n draw.rectangle([(left, text_bottom - text_height - 2 * margin),\n (left + text_width, text_bottom)],\n fill=color)\n draw.text((left + margin, text_bottom - text_height - margin),\n display_str,\n fill=\"black\",\n font=font)\n text_bottom -= text_height - 2 * margin\n\n\ndef draw_boxes(image, boxes, class_names, scores, max_boxes=10, min_score=0.1):\n \"\"\"Overlay labeled boxes on an image with formatted scores and label names.\"\"\"\n colors = list(ImageColor.colormap.values())\n\n try:\n font = ImageFont.truetype(\"/usr/share/fonts/truetype/liberation/LiberationSansNarrow-Regular.ttf\",\n 25)\n except IOError:\n print(\"Font not found, using default font.\")\n font = ImageFont.load_default()\n\n for i in range(min(boxes.shape[0], max_boxes)):\n if scores[i] >= min_score:\n ymin, xmin, ymax, xmax = tuple(boxes[i])\n display_str = \"{}: {}%\".format(class_names[i].decode(\"ascii\"),\n int(100 * scores[i]))\n color = colors[hash(class_names[i]) % len(colors)]\n image_pil = Image.fromarray(np.uint8(image)).convert(\"RGB\")\n draw_bounding_box_on_image(\n image_pil,\n ymin,\n xmin,\n ymax,\n xmax,\n color,\n font,\n display_str_list=[display_str])\n np.copyto(image, np.array(image_pil))\n return image", "应用模块\n从 Open Images v4 加载公共图像,并在本地保存和显示。", "# By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg\nimage_url = \"https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg\" #@param\ndownloaded_image_path = download_and_resize_image(image_url, 1280, 856, True)", "选择对象检测模块并应用于下载的图像。模块包括:\n\nFasterRCNN+InceptionResNet V2:高准确率。\nssd+mobilenet V2:小而快。", "module_handle = \"https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1\" #@param [\"https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1\", \"https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1\"]\n\ndetector = hub.load(module_handle).signatures['default']\n\ndef load_img(path):\n img = tf.io.read_file(path)\n img = tf.image.decode_jpeg(img, channels=3)\n return img\n\ndef run_detector(detector, path):\n img = load_img(path)\n\n converted_img = tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...]\n start_time = time.time()\n result = detector(converted_img)\n end_time = time.time()\n\n result = {key:value.numpy() for key,value in result.items()}\n\n print(\"Found %d objects.\" % len(result[\"detection_scores\"]))\n 
print(\"Inference time: \", end_time-start_time)\n\n image_with_boxes = draw_boxes(\n img.numpy(), result[\"detection_boxes\"],\n result[\"detection_class_entities\"], result[\"detection_scores\"])\n\n display_image(image_with_boxes)\n\nrun_detector(detector, downloaded_image_path)", "更多图像\n使用时间跟踪对部分其他图像进行推理。", "image_urls = [\n # Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg\n \"https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg\",\n # By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg\n \"https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg\",\n # Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg\n \"https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg\",\n ]\n\ndef detect_img(image_url):\n start_time = time.time()\n image_path = download_and_resize_image(image_url, 640, 480)\n run_detector(detector, image_path)\n end_time = time.time()\n print(\"Inference time:\",end_time-start_time)\n\ndetect_img(image_urls[0])\n\ndetect_img(image_urls[1])\n\ndetect_img(image_urls[2])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pgmpy/pgmpy_notebook
notebooks/1. Introduction to Probabilistic Graphical Models.ipynb
mit
[ "Introduction to Probabilitic Graphical Models", "from IPython.display import Image", "Contents\n\nWhat is machine learning\nDifferent ways of learning from data\nWhy probabilistic graphical models\nMajor types of PGMs\n\n1. What is machine learning\nMachine learning is a scientific discipline that explores the construction and study of algorithms that can learn from data. Such algorithms operate by building a model from example inputs and using that to make predictions or decisions, rather than following strictly static program instructions.\nWe can take an example of predicting the type of flower based on the sepal length and width of the flower. Let's say we have some data (discretized iris data set on sepal length and width). The dataset looks something like this:", "%run ../scripts/1/discretize.py\ndata", "2. Different ways of learning from data\nNow let's say we want to predict the type of flower for a new given data point. There are multiple ways to solve this problem. We will consider these two ways in some detail: \n\nWe could find a function which can directly map an input value to it's class label. \nWe can find the probability distributions over the variables and then use this distribution to answer queries about the new data point.\n\nThere are a lot of algorithms for finding a mapping function. For example linear regression tries to find a linear equation which explains the data. Support vector machine tries to find a plane which separates the data points. Decision Tree tries to find a set of simple greater than and less than equations to classify the data. Let's try to apply Decision Tree on this data set.\nWe can plot the data and it looks something like this:", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Adding a little bit of noise so that it's easier to visualize\ndata_with_noise = data.iloc[:, :2] + np.random.normal(loc=0, scale=0.1, size=(150, 2))\nplt.scatter(data_with_noise.length, data_with_noise.width, c=[ \"bgr\"[k] for k in data.iloc[:,2] ], s=200, alpha=0.3)", "In the plot we can easily see that the blue points are concentrated on the top-left corner, green ones in bottom left and red ones in top right. 
\nNow let's try to train a Decision Tree on this data.", "from sklearn.tree import DecisionTreeClassifier\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(data[['length', 'width']].values, data.type.values, test_size=0.2)\n\nclassifier = DecisionTreeClassifier(max_depth=4)\nclassifier.fit(X_train, y_train)\nclassifier.predict(X_test)\n\nclassifier.score(X_test, y_test)", "So, in this case we got a classification accuracy of 60%.\nNow let's move on to our second approach, using a probabilistic model.\nThe most obvious way to do this classification task would be to compute a Joint Probability Distribution over all these variables and then marginalize and reduce over it according to our new data point to get the probabilities of the classes.", "X_train, X_test = data[:120], data[120:]\n\nX_train\n\n# Computing the joint probability distribution over the training data\njoint_prob = X_train.groupby(['length', 'width', 'type']).size() / 120\njoint_prob\n\n# Predicting values\n\n# Selecting just the feature variables.\nX_test_features = X_test.iloc[:, :2].values\nX_test_actual_results = X_test.iloc[:, 2].values\n\npredicted_values = []\nfor i in X_test_features:\n    predicted_values.append(joint_prob[i[0], i[1]].idxmax())\n    \npredicted_values = np.array(predicted_values)\npredicted_values\n\n# Comparing results with the actual data.\npredicted_values == X_test_actual_results\n\nscore = (predicted_values == X_test_actual_results).sum() / 30\nprint(score)", "Why Probabilistic Graphical Models\nIn the previous example we saw how Bayesian Inference works. We construct a Joint Distribution over the data and then condition on the observed variable to compute the posterior distribution. We then query this posterior distribution to predict the values of new data points.\nBut the problem with this method is that the Joint Probability Distribution is exponential in the number of states (cardinality) of each variable. So, for problems with many features or features of high cardinality, inference becomes a difficult task because of computational limitations. For example, for 10 random variables each having 10 states, the size of the Joint Distribution would be 10^10.\nProbabilistic Graphical Models (PGM): PGM is a technique of compactly representing a Joint Probability Distribution over random variables by exploiting the (conditional) independencies between the variables. PGM also provides us with methods for efficiently doing inference over these joint distributions.\nEach graphical model is characterized by a graph structure (which can be directed, undirected or both) and a set of parameters associated with the graph.\nThe problem in the above example can be represented using a Bayesian Model (a type of graphical model) as:", "Image(filename='../images/1/Iris_BN.png')", "In this case the parameters of the network would be $P(L)$, $P(W)$ and $P(T | L, W)$. So, we will need to store 5 values for $L$, 3 values for $W$ and 45 values for $P(T | L, W)$. That is a total of 45 + 5 + 3 = 53 values to completely parameterize the network, which is actually more than the 45 values we need for $P(T, L, W)$. But in the case of bigger networks, graphical models help in saving space. We can take the example of the student network shown below:", "Image(filename='../images/1/student.png')", "Considering that $D$ has cardinality of 2, $I$ has cardinality of 2, $S$ has cardinality of 2, $G$ has cardinality of 3 and $L$ has cardinality of 2. 
Also the parameters in this network would be $P(D)$, $P(I)$, $P(S | I)$, $P(G | D, I)$, $P(L | G)$. So, the number of values needed would be 2 for $P(D)$, 2 for $P(I)$, 12 for $P(G | D, I)$, 6 for $P(L | G)$ and 4 for $P(S | I)$, for a total of 4 + 6 + 12 + 2 + 2 = 26, compared to the 2 * 2 * 3 * 2 * 2 = 48 required for the Joint Distribution over all the variables. \nTypes of Graphical Models\nThere are mainly 2 types of graphical models:\n\n\nBayesian Models: A Bayesian Model consists of a directed graph and Conditional Probability Distributions (CPDs) associated with each node. Each CPD is of the form $P(node | parents(node))$ where $parents(node)$ are the parents of the node in the graph structure.\n\n\nMarkov Models: A Markov Model consists of an undirected graph and is parameterized by Factors. Factors represent how much 2 or more variables agree with each other." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jvitria/DeepLearningBBVA2016
1. Basic Concepts I.ipynb
mit
[ "<small><i>June 2016 - This notebook was created by Jordi Vitrià. Source and license info are in the folder.</i></small>", "import warnings\nwarnings.filterwarnings('ignore')\n\nimport numpy as np\nimport seaborn as sns\nimport bokeh.plotting as bp\nimport matplotlib.pyplot as plt\nfrom sklearn.datasets.samples_generator import make_regression \nfrom scipy import stats \nfrom bokeh.models import WheelZoomTool, ResetTool, PanTool\n%matplotlib inline", "Basic Concepts I\nWhat is \"learning from data\"?\n\nIn general Learning from Data is a scientific discipline that is concerned with the design and development of algorithms that allow computers to infer (from data) a model that allows compact representation (unsupervised learning) and/or good generalization (supervised learning).\n\nThis is an important technology because it enables computational systems to adaptively improve their performance with experience accumulated from the observed data. \nMost of these algorithms are based on the iterative solution of a mathematical problem that involves data and model. If there was an analytical solution to the problem, this should be the adopted one, but this is not the case for most of the cases.\nSo, the most common strategy for learning from data is based on solving a system of equations as a way to find a series of parameters of the model that minimizes a mathematical problem. This is called optimization.\nThe most important technique for solving optimization problems is gradient descend.\nPreliminary: Nelder-Mead method for function minimization.\n\nSee \"An Interactive Tutorial on Numerical Optimization\": http://www.benfrederickson.com/numerical-optimization/\n\nThe most simple thing we can try to minimize a function $f(x)$ would be to sample two points relatively near each other, and just repeatedly take a step down away from the largest value. \nThe Nelder-Mead method dynamically adjusts the step size based off the loss of the new point. If the new point is better than any previously seen value, it expands the step size to accelerate towards the bottom. Likewise if the new point is worse it contracts the step size to converge around the minima. The usual settings are to half the step size when contracting and double the step size when expanding. \nThis method can be easily extended into higher dimensional examples, all thats required is taking one more point than there are dimensions - and then reflecting the worst point around the rest of the points to take a step down.\nGradient descend (for hackers) for function minimization: 1-D\nLet's suppose that we have a function $f: \\Re \\rightarrow \\Re$. For example: \n$$f(x) = x^2$$\nOur objective is to find the argument $x$ that minimizes this function (for maximization, consider $-f(x)$). To this end, the critical concept is the derivative.\nThe derivative of $f$ of a variable $x$, $f'(x)$ or $\\frac{\\mathrm{d}f}{\\mathrm{d}x}$, is a measure of the rate at which the value of the function changes with respect to the change of the variable. 
It is defined as the following limit:\n$$ f'(x) = \\lim_{h \\rightarrow 0} \\frac{f(x + h) - f(x)}{h} $$\nThe derivative specifies how to scale a small change in the input in order to obtain the corresponding change in the output: \n$$ f(x + h) \\approx f(x) + h f'(x)$$", "# numerical derivative at a point x\n\ndef f(x):\n return x**2\n\ndef fin_dif(x, f, h = 0.00001):\n '''\n This method returns the derivative of f at x\n by using the finite difference method\n '''\n return (f(x+h) - f(x))/h\n\nx = 2.0\nprint \"{:2.4f}\".format(fin_dif(x,f))", "The limit as $h$ approaches zero, if it exists, should represent the slope of the tangent line to $(x, f(x))$. \nFor values that are not zero it is only an approximation.", "for h in np.linspace(0.0, 1.0 , 5):\n print \"{:3.6f}\".format(f(5+h)), \"{:3.6f}\".format(f(5)+h*fin_dif(5,f))\n\nx = np.linspace(-1.5,-0.5, 100)\nf = [i**2 for i in x]\nplt.plot(x,f, 'r-')\nplt.plot([-1.5, -0.5], [2, 0.0], 'k-', lw=2)\nplt.plot([-1.4, -1.0], [1.96, 1.0], 'b-', lw=2)\nplt.plot([-1],[1],'o')\nplt.plot([-1.4],[1.96],'o')\nplt.text(-1.0, 1.2, r'$x,f(x)$')\nplt.text(-1.4, 2.2, r'$(x-h),f(x-h)$')\nplt.gcf().set_size_inches((12,6))\nplt.grid()\nplt.show", "It can be shown that the “centered difference formula\" is better when computing numerical derivatives:\n$$ \\lim_{h \\rightarrow 0} \\frac{f(x + h) - f(x - h)}{2h} $$\nThe error in the \"finite difference\" approximation can be derived from Taylor's theorem and, assuming that $f$ is differentiable, is $O(h)$. In the case of “centered difference\" the error is $O(h^2)$.\nThe derivative tells how to chage $x$ in order to make a small improvement in $f$. \nThen, we can follow these steps to decrease the value of the function:\n\nStart from a random $x$ value.\nCompute the derivative $f'(x) = \\lim_{h \\rightarrow 0} \\frac{f(x + h) - f(x - h)}{2h}$.\nWalk a small step in the opposite direction of the derivative, because we know that $f(x - h \\mbox{ sign}(f'(x))$ is less than $f(x)$ for small enough $h$. \n\nThe search for the minima ends when the derivative is zero because we have no more information about which direction to move. $x$ is a critical o stationary point if $f'(x)=0$. \n\nA minimum (maximum) is a critical point where $f(x)$ is lower (higher) than at all neighboring points. \nThere is a third class of critical points: saddle points.\n\nIf $f$ is a convex function, this should be the minimum (maximum) of our functions. 
In other cases it could be a local minimum (maximum) or a saddle point.", "W = 400\nH = 250\nbp.output_notebook()\n\nx = np.linspace(-15,15,100)\ny = x**2\n\nTOOLS = [WheelZoomTool(), ResetTool(), PanTool()]\n\ns1 = bp.figure(width=W, plot_height=H, \n title='Local minimum of function', \n tools=TOOLS)\ns1.line(x, y, color=\"navy\", alpha=0.5, line_width=3)\ns1.circle(0, 0, size =10, color=\"orange\")\ns1.title_text_font_size = '12pt'\ns1.yaxis.axis_label_text_font_size = \"14pt\"\ns1.xaxis.axis_label_text_font_size = \"14pt\"\n\nbp.show(s1)\n\nx = np.linspace(-15,15,100)\ny = -x**2\n\nTOOLS = [WheelZoomTool(), ResetTool(), PanTool()]\n\n\ns1 = bp.figure(width=W, plot_height=H, \n title='Local maximum of function', \n tools=TOOLS)\ns1.line(x, y, color=\"navy\", alpha=0.5, line_width=3)\ns1.circle(0, 0, size =10, color=\"orange\")\ns1.title_text_font_size = '12pt'\ns1.yaxis.axis_label_text_font_size = \"14pt\"\ns1.xaxis.axis_label_text_font_size = \"14pt\"\n\nbp.show(s1)\n\nx = np.linspace(-15,15,100)\ny = x**3\n\nTOOLS = [WheelZoomTool(), ResetTool(), PanTool()]\n\n\ns1 = bp.figure(width=W, plot_height=H, \n title='Saddle point of function', \n tools=TOOLS)\ns1.line(x, y, color=\"navy\", alpha=0.5, line_width=3)\ns1.circle(0, 0, size =10, color=\"orange\")\ns1.title_text_font_size = '12pt'\ns1.yaxis.axis_label_text_font_size = \"14pt\"\ns1.xaxis.axis_label_text_font_size = \"14pt\"\n\nbp.show(s1)", "There are two problems with numerical derivatives:\n+ It is approximate.\n+ It is very slow to evaluate (two function evaluations: $f(x + h) , f(x - h)$ ).\nOur knowledge from Calculus could help!\nWe know that we can get an analytical expression of the derivative for some functions. \nFor example, let's suppose we have a simple quadratic function, $f(x)=x^2−6x+5$, and we want to find the minimum of this function. \nFirst approach\nWe can solve this analytically using Calculus, by finding the derivate $f'(x) = 2x-6$ and setting it to zero:\n\\begin{equation}\n\\begin{split}\n2x-6 & = & 0 \\\n2x & = & 6 \\\nx & = & 3 \\\n\\end{split}\n\\end{equation}", "x = np.linspace(-10,20,100)\ny = x**2 - 6*x + 5\n \nTOOLS = [WheelZoomTool(), ResetTool(), PanTool()]\n\n\ns1 = bp.figure(width=W, plot_height=H, \n tools=TOOLS)\ns1.line(x, y, color=\"navy\", alpha=0.5, line_width=3)\ns1.circle(3, 3**2 - 6*3 + 5, size =10, color=\"orange\")\ns1.title_text_font_size = '12pt'\ns1.yaxis.axis_label_text_font_size = \"14pt\"\ns1.xaxis.axis_label_text_font_size = \"14pt\"\n\nbp.show(s1)", "Second approach\nTo find the local minimum using gradient descend: you start at a random point, and move into the direction of steepest descent relative to the derivative:\n\nStart from a random $x$ value.\nCompute the derivative $f'(x)$ analitically.\nWalk a small step in the opposite direction of the derivative. \n\nIn this example, let's suppose we start at $x=15$. The derivative at this point is $2×15−6=24$. \nBecause we're using gradient descent, we need to subtract the gradient from our $x$-coordinate: $f(x - f'(x))$. 
However, notice that $15−24$ gives us $−9$, clearly overshooting over target of $3$.", "x = np.linspace(-10,20,100)\ny = x**2 - 6*x + 5\nstart = 15\n\nTOOLS = [WheelZoomTool(), ResetTool(), PanTool()]\n\n\ns1 = bp.figure(width=W, plot_height=H, \n tools=TOOLS)\ns1.line(x, y, color=\"navy\", alpha=0.5, line_width=3)\ns1.circle(start, start**2 - 6*start + 5, size =10, color=\"orange\")\n\nd = 2 * start - 6\nend = start - d\n\ns1.circle(end, end**2 - 6*end + 5, size =10, color=\"red\")\ns1.title_text_font_size = '12pt'\ns1.yaxis.axis_label_text_font_size = \"14pt\"\ns1.xaxis.axis_label_text_font_size = \"14pt\"\n\nbp.show(s1)", "To fix this, we multiply the gradient by a step size. This step size (often called alpha) has to be chosen carefully, as a value too small will result in a long computation time, while a value too large will not give you the right result (by overshooting) or even fail to converge. \nIn this example, we'll set the step size to 0.01, which means we'll subtract $24×0.01$ from $15$, which is $14.76$. \nThis is now our new temporary local minimum: We continue this method until we either don't see a change after we subtracted the derivative step size, or until we've completed a pre-set number of iterations.", "old_min = 0\ntemp_min = 15\nstep_size = 0.01\nprecision = 0.0001\n \ndef f_derivative(x):\n import math\n return 2*x -6\n\nmins = []\ncost = []\n\nwhile abs(temp_min - old_min) > precision:\n old_min = temp_min \n gradient = f_derivative(old_min) \n move = gradient * step_size\n temp_min = old_min - move\n cost.append((3-temp_min)**2)\n mins.append(temp_min)\n\n# rounding the result to 2 digits because of the step size\nprint \"Local minimum occurs at {:3.2f}.\".format(round(temp_min,2))", "An important feature of gradient descent is that there should be a visible improvement over time: In this example, we simply plotted the squared distance from the local minima calculated by gradient descent and the true local minimum, cost, against the iteration during which it was calculated. As we can see, the distance gets smaller over time, but barely changes in later iterations.", "TOOLS = [WheelZoomTool(), ResetTool(), PanTool()]\n\n\nx, y = (zip(*enumerate(cost)))\ns1 = bp.figure(width=W, \n height=H, \n title='Squared distance to true local minimum', \n# title_text_font_size='14pt', \n tools=TOOLS,\n x_axis_label = 'Iteration',\n y_axis_label = 'Distance'\n)\ns1.line(x, y, color=\"navy\", alpha=0.5, line_width=3)\ns1.title_text_font_size = '16pt'\ns1.yaxis.axis_label_text_font_size = \"14pt\"\ns1.xaxis.axis_label_text_font_size = \"14pt\"\n\n\nbp.show(s1)", "From derivatives to gradient: $n$-dimensional function minimization.\nLet's consider a $n$-dimensional function $f: \\Re^n \\rightarrow \\Re$. For example: \n$$f(\\mathbf{x}) = \\sum_{n} x_n^2$$\nOur objective is to find the argument $\\mathbf{x}$ that minimizes this function.\nThe gradient of $f$ is the vector whose components are the $n$ partial derivatives of $f$. It is thus a vector-valued function. 
\nThe gradient points in the direction of the greatest rate of increase of the function.\n$$\\nabla {f} = (\\frac{\\partial f}{\\partial x_1}, \\dots, \\frac{\\partial f}{\\partial x_n})$$", "def f(x):\n return sum(x_i**2 for x_i in x)\n\ndef fin_dif_partial_centered(x, f, i, h=1e-6):\n w1 = [x_j + (h if j==i else 0) for j, x_j in enumerate(x)]\n w2 = [x_j - (h if j==i else 0) for j, x_j in enumerate(x)]\n return (f(w1) - f(w2))/(2*h)\n\ndef fin_dif_partial_old(x, f, i, h=1e-6):\n w1 = [x_j + (h if j==i else 0) for j, x_j in enumerate(x)]\n return (f(w1) - f(x))/h\n\ndef gradient_centered(x, f, h=1e-6):\n return[round(fin_dif_partial_centered(x,f,i,h), 10) for i,_ in enumerate(x)]\n\ndef gradient_old(x, f, h=1e-6):\n return[round(fin_dif_partial_old(x,f,i,h), 10) for i,_ in enumerate(x)]\n\nx = [1.0,1.0,1.0]\n\nprint f(x), gradient_centered(x,f)\nprint f(x), gradient_old(x,f) ", "The function we have evaluated, $f({\\mathbf x}) = x_1^2+x_2^2+x_3^2$, is $3$ at $(1,1,1)$ and the gradient vector at this point is $(2,2,2)$. \nThen, we can follow this steps to maximize (or minimize) the function:\n\nStart from a random $\\mathbf{x}$ vector.\nCompute the gradient vector.\nWalk a small step in the opposite direction of the gradient vector.\n\n\nIt is important to be aware that this gradient computation is very expensive: if $\\mathbf{x}$ has dimension $n$, we have to evaluate $f$ at $2*n$ points.\n\nHow to use the gradient.\n$f(x) = \\sum_i x_i^2$, takes its mimimum value when all $x$ are 0. \nLet's check it for $n=3$:", "def euc_dist(v1,v2):\n import numpy as np\n import math\n v = np.array(v1)-np.array(v2)\n return math.sqrt(sum(v_i ** 2 for v_i in v))", "Let's start by choosing a random vector and then walking a step in the opposite direction of the gradient vector. We will stop when the difference between the new solution and the old solution is less than a tolerance value.", "# choosing a random vector\n\nimport random\nimport numpy as np\n\nx = [random.randint(-10,10) for i in range(3)]\nx\n\ndef step(x,grad,alpha):\n return [x_i - alpha * grad_i for x_i, grad_i in zip(x,grad)]\n\ntol = 1e-15\nalpha = 0.01\nwhile True:\n grad = gradient_centered(x,f)\n next_x = step(x,grad,alpha)\n if euc_dist(next_x,x) < tol:\n break\n x = next_x\nprint [round(i,10) for i in x]", "Alpha\nThe step size, alpha, is a slippy concept: if it is too small we will slowly converge to the solution, if it is too large we can diverge from the solution. \nThere are several policies to follow when selecting the step size:\n\nConstant size steps. In this case, the size step determines the precision of the solution.\nDecreasing step sizes.\nAt each step, select the optimal step.\n\nThe last policy is good, but too expensive. In this case we would consider a fixed set of values:", "step_size = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]", "Learning from data\nIn general, we have:\n\nA dataset $(\\mathbf{x},y)$. \nA target function $f_\\mathbf{w}$, that we want to minimize, representing the discrepancy between our data and the model we want to fit. The model is represented by a set of parameters $\\mathbf{w}$. \nThe gradient of the target function, $g_f$. \n\nIn the most common case $f$ represents the errors from a data representation model $M$. 
To fit the model is to find the optimal parameters $\\mathbf{w}$ that minimize the following expression:\n$$ f_\\mathbf{w} = \\sum_{i} (y_i - M(\\mathbf{x}_i,\\mathbf{w}))^2 $$\nFor example, $(\\mathbf{x},y)$ can represent:\n\n$\\mathbf{x}$: the behavior of a \"Candy Crush\" player; $y$: monthly payments. \n$\\mathbf{x}$: sensor data about your car engine; $y$: probability of engine error.\n$\\mathbf{x}$: finantial data of a bank customer; $y$: customer rating.\n\n\nIf $y$ is a real value, it is called a regression problem.\nIf $y$ is binary/categorical, it is called a classification problem. \n\nLet's suppose that $M(\\mathbf{x},\\mathbf{w}) = \\mathbf{w} \\cdot \\mathbf{x}$. \nBatch gradient descend\nWe can implement gradient descend in the following way (batch gradient descend):", "# f = 2x\nx = range(100)\ny = [2*i for i in x]\n\n# f_target = Sum (y - wx)**2\ndef target_f(x,y,w):\n import numpy as np\n return np.sum((np.array(y) - np.array(x) * w)**2.0)\n\n# gradient_f = Sum 2wx**2 - 2xy\ndef gradient_f(x,y,w):\n import numpy as np\n return np.sum(2*w*(np.array(x)**2) - 2*np.array(x)*np.array(y))\n\ndef step(w,grad,alpha):\n return w - alpha * grad\n\ndef min_batch(target_f, gradient_f, x, y, toler = 1e-6):\n import random\n alphas = [100, 10, 1, 0.1, 0.001, 0.00001]\n w = random.random()\n val = target_f(x,y,w)\n print \"First w:\", w, \"First Val:\", val, \"\\n\"\n i = 0\n while True:\n i += 1\n gradient = gradient_f(x,y,w)\n next_ws = [step(w, gradient, alpha) for alpha in alphas]\n next_vals = [target_f(x,y,w) for w in next_ws]\n min_val = min(next_vals)\n next_w = next_ws[next_vals.index(min_val)] \n next_val = target_f(x,y,next_w)\n print i, \"w: {:4.4f}\".format(w), \"Val:{:4.4f}\".format(val), \"Gradient:\", gradient \n if (abs(val - next_val) < toler) or (i>200):\n return w\n else:\n w, val = next_w, next_val\n \nmin_batch(target_f, gradient_f, x, y)\n\n# Exercise: \n# 1. Consider a set of 100 data points and explain the behavior of the algorithm. \n# 2. How could we fix this behavior?", "Stochastic Gradient Descend\nThe last function evals the whole dataset $(\\mathbf{x}_i,y_i)$ at every step. \nIf the dataset is large, this strategy is too costly. In this case we will use a strategy called SGD (Stochastic Gradient Descend).\nWhen learning from data, the cost function is additive: it is computed by adding sample reconstruction errors. \nThen, we can compute the estimate the gradient (and move towards the minimum) by using only one data sample (or a small data sample).\nThus, we will find the minimum by iterating this gradient estimation over the dataset.\nA full iteration over the dataset is called epoch. 
During an epoch, data must be used in a random order.\nIf we apply this method we have some theoretical guarantees to find the minimum.", "import numpy as np\nx = range(10)\ny = [2*i for i in x]\ndata = zip(x,y)\n\ndef in_random_order(data):\n import random\n indexes = [i for i,_ in enumerate(data)]\n random.shuffle(indexes)\n for i in indexes:\n yield data[i]\n \nfor (x_i,y_i) in in_random_order(data):\n print x_i,y_i \n\ndef gradient_f_SGD(x,y,w):\n import numpy as np\n return 2*w*(np.array(x)**2) - 2*np.array(x)*np.array(y)\n\ndef SGD(target_f, gradient_f, x, y, alpha_0=0.01):\n import numpy as np\n import random\n data = zip(x,y)\n w = random.random()\n alpha = alpha_0\n min_w, min_val = float('inf'), float('inf')\n iteration_no_increase = 0\n while iteration_no_increase < 100:\n val = sum(target_f(x_i, y_i, w) for x_i,y_i in data)\n if val < min_val:\n min_w, min_val = w, val\n iteration_no_increase = 0\n alpha = alpha_0\n else:\n iteration_no_increase += 1\n alpha *= 0.9\n for x_i, y_i in in_random_order(data):\n gradient_i = gradient_f(x_i, y_i, w)\n w = np.array(w) - (alpha * np.array(gradient_i))\n return min_w\n\nprint \"w:\", SGD(target_f, gradient_f_SGD, x, y, 0.01)", "Exercise: Gradient Descent and Linear Regression\nThe linear regression model assumes a linear relationship between data:\n$$ y_i = w_1 x_i + w_0 $$\nLet's generate a more realistic dataset (with noise), where $w_1 = 2$ and $w_0 = 0$:", "import numpy as np\nx = np.random.uniform(0,1,20)\n\ndef f(x): return x*2\n\nnoise_variance =0.2\nnoise = np.random.randn(x.shape[0])*noise_variance\ny = f(x) + noise\n\nplt.plot(x, y, 'o', label='y')\nplt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)')\nplt.xlabel('$x$', fontsize=15)\nplt.ylabel('$t$', fontsize=15)\nplt.ylim([0,2])\nplt.title('inputs (x) vs targets (y)')\nplt.grid()\nplt.legend(loc=2)\nplt.gcf().set_size_inches((10,6))\nplt.show()\n\n# Our model y = x * w\ndef nn(x, w): return x * w\n\n# Our cost function\ndef cost(y, t): return ((t - y)**2).sum()\n\nws = np.linspace(0, 4, num=100) \ncost_ws = np.vectorize(lambda w: cost(nn(x, w) , y))(ws) \n\n# Ploting the cost function\nplt.plot(ws, cost_ws, 'r-')\nplt.xlabel('$w$', fontsize=15)\nplt.ylabel('Cost', fontsize=15)\nplt.title('Cost vs. $w$')\nplt.grid()\nplt.gcf().set_size_inches((10,6))\nplt.show()", "Complete the following code and look at the plot of the first gradient descent updates. 
Explore the behavior of the proposed learning rates.", "def gradient(w, x, y): \n return 2 * x * (nn(x, w) - y)\n\ndef step(w_k, x, y, learning_rate):\n return learning_rate * gradient(w_k, x, y).sum()\n\nw = 0.01\n\n# define a learning_rate \nlearning_rate = 0.1\n\nnb_of_iterations = 20 \nw_cost = [(w, cost(nn(x, w), y))] \nfor i in range(nb_of_iterations):\n # Here your code \n w_cost.append((w, cost(nn(x, w), y))) \n \nfor i in range(0, len(w_cost)):\n print('w({}): {:.4f} \\t cost: {:.4f}'.format(i, w_cost[i][0], w_cost[i][1]))\n\n# Plotting the first gradient descent updates\nplt.plot(ws, cost_ws, 'r-') # Plot the error curve\n# Plot the updates\nfor i in range(1, len(w_cost)-2):\n w1, c1 = w_cost[i-1]\n w2, c2 = w_cost[i]\n plt.plot(w1, c1, 'bo')\n plt.plot([w1, w2],[c1, c2], 'b-')\n plt.text(w1, c1+0.5, '$w({})$'.format(i)) \n# Plot the last weight, axis, and show figure\nw1, c1 = w_cost[len(w_cost)-3]\nplt.plot(w1, c1, 'bo')\nplt.text(w1, c1+0.5, '$w({})$'.format(nb_of_iterations)) \nplt.xlabel('$w$', fontsize=15)\nplt.ylabel('$\\\\xi$', fontsize=15)\nplt.title('Gradient descent updates plotted on cost function')\nplt.grid()\nplt.gcf().set_size_inches((10,6))\nplt.show()\n\nw = 0\nnb_of_iterations = 10 \nfor i in range(nb_of_iterations):\n dw = step(w, x, y, learning_rate) \n w = w - dw \n \n\nplt.plot(x, y, 'o', label='t')\nplt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)')\nplt.plot([0, 1], [0*w, 1*w], 'r-', label='fitted line')\nplt.xlabel('input x')\nplt.ylabel('target t')\nplt.ylim([0,2])\nplt.title('input vs. target')\nplt.grid()\nplt.legend(loc=2)\nplt.gcf().set_size_inches((10,6))\nplt.show()", "Mini-batch Gradient Descent\nIn code, general batch gradient descent looks something like this:\npython\nnb_epochs = 100\nfor i in range(nb_epochs):\n grad = evaluate_gradient(target_f, data, w)\n w = w - learning_rate * grad\nFor a pre-defined number of epochs, we first compute the gradient vector of the target function for the whole dataset w.r.t. our parameter vector. \nStochastic gradient descent (SGD) in contrast performs a parameter update for each training example and label:\npython\nnb_epochs = 100\nfor i in range(nb_epochs):\n np.random.shuffle(data)\n for example in data:\n grad = evaluate_gradient(target_f, example, w)\n w = w - learning_rate * grad\nMini-batch gradient descent finally takes the best of both worlds and performs an update for every mini-batch of $n$ training examples:\npython\nnb_epochs = 100\nfor i in range(nb_epochs):\n np.random.shuffle(data)\n for batch in get_batches(data, batch_size=50):\n grad = evaluate_gradient(target_f, batch, w)\n w = w - learning_rate * grad\nMinibatch SGD has the advantage that it works with a slightly less noisy estimate of the gradient. However, as the minibatch size increases, the number of updates done per computation done decreases (eventually it becomes very inefficient, like batch gradient descent). 
\nThere is an optimal trade-off (in terms of computational efficiency) that may vary depending on the data distribution and the particulars of the class of function considered, as well as how computations are implemented.\nLoss Functions\nLoss functions $L(y, f(\\mathbf{x})) = \\sum_i \\ell(y_i, f(\\mathbf{x_i}))$ represent the price paid for inaccuracy of predictions in classification/regression problems.\nIn classification this function is often the zero-one loss, that is, $ \\ell(y_i, f(\\mathbf{x_i}))$ is zero when $y_i = f(\\mathbf{x}_i)$ and one otherwise.\nThis function is discontinuous with flat regions and is thus extremely hard to optimize using gradient-based methods. For this reason it is usual to consider a proxy to the loss called a surrogate loss function. For computational reasons this is usually a convex function. Here we have some examples:\nSquare / Euclidean Loss (Linear Regression)\nIn regression problems, the most common loss function is the square loss function:\n$$ L(y, f(\\mathbf{x})) = \\sum_i (y_i - f(\\mathbf{x}_i))^2 $$\nThe square loss function can be re-written and utilized for classification:\n$$ L(y, f(\\mathbf{x})) = \\sum_i (1 - y_i f(\\mathbf{x}_i))^2 $$\nHinge / Margin Loss (Support Vector Machines)\nThe hinge loss function is defined as:\n$$ L(y, f(\\mathbf{x})) = \\sum_i \\mbox{max}(0, 1 - y_i f(\\mathbf{x}_i)) $$\nThe hinge loss provides a relatively tight, convex upper bound on the 0–1 Loss.\n<img src=\"images/loss_functions.png\">\nLogistic Loss (Logistic Regression)\nThis function displays a similar convergence rate to the hinge loss function, and since it is continuous, gradient descent methods can be utilized. \n$$ L(y, f(\\mathbf{x})) = \\sum_i \\log(1 + \\exp(-y_i f(\\mathbf{x}_i))) $$\nSigmoid Cross-Entropy Loss (Softmax classifier)\nCross-Entropy is a loss function that is widely used for training multiclass problems. We'll focus on models that assume that classes are mutually exclusive. In this case, our labels have this form $\\mathbf{y}_i =(1.0,0.0,0.0)$. If our model predicts a different distribution, say $ f(\\mathbf{x}_i)=(0.4,0.1,0.5)$, then we'd like to nudge the parameters so that $f(\\mathbf{x}_i)$ gets closer to $\\mathbf{y}_i$.\nC. Shannon showed that if you want to send a series of messages composed of symbols from an alphabet with distribution $y$ ($y_j$ is the probability of the $j$-th symbol), then to use the smallest number of bits on average, you should assign $\\log(\\frac{1}{y_j})$ bits to the $j$-th symbol. \nThe optimal number of bits is known as entropy:\n$$ H(\\mathbf{y}) = \\sum_j y_j \\log\\frac{1}{y_j} = - \\sum_j y_j \\log y_j$$\nCross entropy is the number of bits we'll need if we encode symbols by using a wrong distribution $\\hat y$:\n$$ H(y, \\hat y) = - \\sum_j y_j \\log \\hat y_j $$ \nIn our case, the real distribution is $\\mathbf{y}$ and the \"wrong\" one is $f(\\mathbf{x}_i)$. So, minimizing cross entropy with respect to our model parameters will result in the model that best approximates our labels when they are considered as a probability distribution. \nCross entropy is used in combination with the Softmax classifier. In order to classify $\\mathbf{x}_i$ we could take the index corresponding to the max value of $f(\\mathbf{x}_i)$, but Softmax gives a slightly more intuitive output (normalized class probabilities) and also has a probabilistic interpretation:\n$$ P(\\mathbf{y}_i = j \\mid \\mathbf{x_i}) = \\frac{e^{f_j(\\mathbf{x_i})}}{\\sum_k e^{f_k(\\mathbf{x_i})} } $$\nwhere $f_k$ is a linear classifier, and the per-example cross-entropy loss is $-\\log$ of the probability assigned to the correct class. 
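\nAs a quick numerical illustration of these surrogate losses, here is a small NumPy sketch (written for this note, not taken from any library; the variable names are ours) that evaluates each loss on a vector of margins $y_i f(\\mathbf{x}_i)$ and reproduces the cross entropy of the example distributions above:\npython\nimport numpy as np\n\nmargins = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # values of y_i * f(x_i)\nzero_one = (margins <= 0).astype(float)  # 0-1 loss\nsquare = (1 - margins)**2  # square loss, classification form\nhinge = np.maximum(0, 1 - margins)  # hinge loss\nlogistic = np.log(1 + np.exp(-margins))  # logistic loss\n\n# softmax cross entropy for the example y_i = (1.0, 0.0, 0.0) and f(x_i) = (0.4, 0.1, 0.5)\ny = np.array([1.0, 0.0, 0.0])\nf = np.array([0.4, 0.1, 0.5])\nsoftmax = np.exp(f) / np.exp(f).sum()\ncross_entropy = -(y * np.log(softmax)).sum()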
\nAdvanced gradient descent\n\nSee \"An Interactive Tutorial on Numerical Optimization\": http://www.benfrederickson.com/numerical-optimization/\n\nMomentum\nSGD has trouble navigating ravines, i.e. areas where the surface curves much more steeply in one dimension than in another, which are common around local optima. In these scenarios, SGD oscillates across the slopes of the ravine while only making hesitant progress along the bottom towards the local optimum.\n<img src=\"images/ridge2.png\">\nMomentum is a method that helps accelerate SGD in the relevant direction and dampens oscillations. It does this by adding a fraction of the update vector of the past time step to the current update vector:\n$$ v_t = m v_{t-1} + \\alpha \\nabla_w f $$\n$$ w = w - v_t $$\nThe momentum $m$ is commonly set to $0.9$.\nNesterov\nHowever, a ball that rolls down a hill, blindly following the slope, is highly unsatisfactory. We'd like to have a smarter ball, a ball that has a notion of where it is going so that it knows to slow down before the hill slopes up again.\nNesterov accelerated gradient (NAG) is a way to give our momentum term this kind of prescience. We know that we will use our momentum term $m v_{t-1}$ to move the parameters $w$. Computing \n$w - m v_{t-1}$ thus gives us an approximation of the next position of the parameters (the gradient is missing for the full update), a rough idea of where our parameters are going to be. We can now effectively look ahead by calculating the gradient not w.r.t. our current parameters $w$ but w.r.t. the approximate future position of our parameters:\n$$ w_{new} = w - m v_{t-1} $$\n$$ v_t = m v_{t-1} + \\alpha \\nabla_{w_{new}} f $$\n$$ w = w - v_t $$\nAdagrad\nAll previous approaches manipulated the learning rate globally and equally for all parameters. Tuning the learning rates is an expensive process, so much work has gone into devising methods that can adaptively tune the learning rates, and even do so per parameter. \nAdagrad is an algorithm for gradient-based optimization that does just this: It adapts the learning rate to the parameters, performing larger updates for infrequent and smaller updates for frequent parameters.\n$$ c = c + (\\nabla_w f)^2 $$\n$$ w = w - \\frac{\\alpha}{\\sqrt{c}} \\nabla_w f $$ \nRMSProp\nThe RMSProp update adjusts the Adagrad method in a very simple way in an attempt to reduce its aggressive, monotonically decreasing learning rate. In particular, it uses a moving average of squared gradients instead, giving:\n$$ c = \\beta c + (1 - \\beta)(\\nabla_w f)^2 $$\n$$ w = w - \\frac{\\alpha}{\\sqrt{c}} \\nabla_w f $$ \nwhere $\\beta$ is a decay rate that controls the size of the moving average.\n<img src=\"images/g1.gif\">\n(Image credit: Alec Radford) \n<img src=\"images/g2.gif\">\n(Image credit: Alec Radford)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
emjotde/UMZ
Cwiczenia/01/Uczenie Maszynowe - Ćwiczenia 1.3 - NumPy, algebra liniowa.ipynb
cc0-1.0
[ "1.3 NumPy - Algebra liniowa\nNumPy jest pakietem szczególnie przydatnym do obliczeń w dziedzinie algebry liniowej. W uczeniu maszynowym algebra liniowa będzie miała duże znaczenie. \nWektor o wymiarach $1 \\times N$ \n$$\n X =\n \\begin{pmatrix}\n x_{1} \\\n x_{2} \\\n \\vdots \\\n x_{N}\n \\end{pmatrix} \n$$\ni jego transpozycję $\\mathbf{x}^{T} = (x_{1}, x_{2},\\ldots,x_{N})$ można wyrazić w Pythonie w następujący sposób:", "import numpy as np\nx = np.array([[1,2,3]]).T\nxt = x.T\nx.shape\n\nxt.shape", "Macierz kolumnowa w NumPy.\n$$X =\n \\begin{pmatrix}\n 3 \\\n 4 \\\n 5 \\\n 6\n \\end{pmatrix}$$", "x = np.array([[3,4,5,6]]).T\nx", "A macierz wierszowa w NumPy.\n$$ X =\n \\begin{pmatrix}\n 3 & 4 & 5 & 6\n \\end{pmatrix}$$", "x = np.array([[3,4,5,6]])\nx", "Obiekty typu matrix\nMacierze ogólne omówiliśmy już w poprzednich dokumentach:\n$$A_{m,n} =\n \\begin{pmatrix}\n a_{1,1} & a_{1,2} & \\cdots & a_{1,n} \\\n a_{2,1} & a_{2,2} & \\cdots & a_{2,n} \\\n \\vdots & \\vdots & \\ddots & \\vdots \\\n a_{m,1} & a_{m,2} & \\cdots & a_{m,n}\n \\end{pmatrix}$$\nOprócz obiektów typu array istnieje wyspecjalizowany obiekt matrix, dla którego operacje * (mnożenie) oraz **-1 (odwracanie) są określone w sposób właściwy dla macierzy (w przeciwieństwu do operacji elementowych dla obietków array).", "x = np.array([1,2,3,4,5,6,7,8,9]).reshape(3,3)\nx\n\nX = np.matrix(x)\nX", "Operacje na macierzach\nWyznacznik", "a = np.array([[3,-9],[2,5]])\nnp.linalg.det(a)", "Macierz odwrotna", "A = np.array([[-4,-2],[5,5]])\nA\n\ninvA = np.linalg.inv(A)\ninvA\n\nnp.round(np.dot(A,invA))", "Ponieważ $AA^{-1} = A^{-1}A = I$.\nWartości i wektory własne", "a = np.diag((1, 2, 3))\na\n\nw,v = np.linalg.eig(a)\nw\n\nv", "Zadania 1.3\nZapisz i oblicz za pomocą NumPy\n1. iloczn macierzy $A$ z wektorem $\\vec{x}$:\n$$\\begin{align}\n A \\vec{x} &= \\left[\n \\begin{array}{rrr}\n 1 & -1 & 2\\\n 0 & -3 & 1\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{l}\n 2\\1\\0\n \\end{array}\n \\right]\n =\n \\left[\n \\begin{array}{r}\n 1\\\n -3\n \\end{array}\n \\right].\n\\end{align}$$\n\n\niloczyn macierzy $A$ i $B$:\n$$\\begin{align}\n AB &=\\left[\n \\begin{array}{rrr}\n 0 & 4 & -2\\\n -4 & -3 & 0\n \\end{array}\n \\right] \n \\left[\n \\begin{array}{rr}\n 0 &1\\\n 1 & -1\\\n 2 & 3\n \\end{array}\n \\right]\n =\n \\left[\n \\begin{array}{rr}\n 0 & -10\\\n -3 & -1\n \\end{array}\n \\right].\n \\end{align}$$\n\n\nPokaż, że dla powyśzych macierzy $A$ i $B$ prawdą jest, że $(AB)^T = B^TA^T$. \n\nOblicz $\\det(AB)$ (wyznacznik iloczynu $AB$).\nCzym rózni się operacja A**-1 dla obiektów typu array i matrix? Pokaż na przykładzie. \nDla macierzy $X = \\left[\n \\begin{array}{rrr}\n 1 & 2 & 3\\\n 1 & 3 & 6 \\\n \\end{array}\n \\right]$ oraz wektora $\\vec{y} = \\left[\n \\begin{array}{r}\n 5 \\\n 6 \\\n \\end{array}\n \\right]$ oblicz wynikowy wektor: \n$$\\vec{\\theta} = (X^TX)^{-1}X^T\\vec{y} = \\left[\n \\begin{array}{r}\n -11.75\\\n 8.5 \\\n -0.6875 \\\n \\end{array}\n \\right]$$. Wykonaj te same obliczenia raz na obiektach typu array i raz na obiektach typu matrix. W przypadku obiektów typu matrix użyj możliwie krótki zapis." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
aaai2018-paperid-62/aaai2018-paperid-62
paper_selection.ipynb
mit
[ "Sampling of papers from conferences\nThis Jupyter notebook includes the procedure used to select a random sample from the population of accepted papers for each conference.\nAccepted conference papers\nSampling of papers is based on the listing of accepted papers at the following locations:\nAAAI-14 http://www.aaai.org/Library/AAAI/aaai14contents.php\nAAAI-16 http://www.aaai.org/Library/AAAI/aaai16contents.php\nIJCAI-13 http://ijcai-13.org/program/accepted_papers\nIJCAI-16 http://ijcai-16.org/index.php/welcome/view/accepted_papers\nThese listings were used to generate the files available in the data/ folder. Each conference is represented by a textfile containing the papers accepted to the conference's main and special tracks. Each line in the textfiles represent a paper, including its title and the authors. Example:\nCausality based Propagation History Ranking in Social Networks Zheng Wang, Chaokun Wang, Jisheng Pei, Xiaojun Ye and Philip S. Yu \nIntervention Strategies for Increasing Engagement in Volunteer-Based Crowdsourcing Avi Segal, Kobi Gal, Ece Kamar, Eric Horvitz, Alex Bowyer and Grant Miller\n\nPapers are available through AAAI Publications for all but IJCAI-16 (at the time of writing):\nAAAI-14 http://www.aaai.org/ocs/index.php/AAAI/AAAI14/schedConf/presentations\nAAAI-16 http://www.aaai.org/ocs/index.php/AAAI/AAAI16/schedConf/presentations\nIJCAI-13 http://www.aaai.org/ocs/index.php/IJCAI/IJCAI13/schedConf/presentations\nFor IJCAI-16, see the proceedings at: http://www.ijcai.org/Proceedings/2016\nFirst, the accepted papers are loaded from files.", "from glob import glob\n\naccepted_papers = {}\ntrack_files = glob('data/accepted*'.format(dir))\nfor file in track_files:\n conference = file.split('_')[-1].strip('.txt')\n accepted_papers[conference] = []\n with open(file, 'r') as f:\n for line in f:\n accepted_papers[conference].append(line)", "The resulting dictionary accepted_papers contains a list of the accepted papers for each conference.", "for conference, papers in sorted(accepted_papers.items()):\n print('{conference} includes {papers} accepted papers.'.format(\n conference=conference, papers=len(papers)))", "Selection\nA sample population of 100 papers is selected from each conference using Python's pseudo-random number module. As per the documentation on random.sample \"The resulting list is in selection order so that all sub-slices will also be valid random samples.\" The seed is set to the unix timestamp for Jan 10 14:46:40 2017 UTC: 1484059600.", "import random\nrandom.seed(1484059600)\n\nk = 100\nsamples = {}\n\n# The order is set explicitly due to originally not sorting\n# accepted_papers.items().\nconferences = ['aaai-16', 'aaai-14', 'ijcai-13', 'ijcai-16']\n\nfor conference in conferences:\n samples[conference] = random.sample(accepted_papers[conference], k)", "Note that when originally generating the samples, the dictionary was iterated by the use of Python 3's dict.items() view. The order is not guaranteed. 
Due to the original generation not being sorted, the iteration needs to be set explicitly so future runs generate the same original sample populations.\nThe generated random samples are permanently stored to files in the ../data/ directory (Github: https://github.com/sidgek/msoppgave/tree/master/data/.", "for conference, papers in samples.items():\n outputfile = 'data/sampled_{conference}'.format(conference=conference)\n with open(outputfile, 'w') as f:\n for line in papers:\n f.write(line)", "Versions\nHere's a generated output to keep track of software versions used to run this Jupyter notebook.", "import IPython\nimport platform\n\nprint('Python version: {}'.format(platform.python_version()))\nprint('IPython version: {}'.format(IPython.__version__))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
napsternxg/gensim
docs/notebooks/topic_coherence-movies.ipynb
gpl-3.0
[ "Benchmark testing of coherence pipeline on Movies dataset\nHow to find how well coherence measure matches your manual annotators\nIntroduction: For the validation of any model adapted from a paper, it is of utmost importance that the results of benchmark testing on the datasets listed in the paper match between the actual implementation (palmetto) and gensim. This coherence pipeline has been implemented from the work done by Roeder et al. The paper can be found here.\nApproach :\n1. In this notebook, we'll use the Movies dataset mentioned in the paper. This dataset along with the topics on which the coherence is calculated and the gold (human) ratings on these topics can be found here.\n2. We will then calculate the coherence on these topics using the pipeline implemented in gensim.\n3. Once we have all our coherence values on these topics we will calculate the correlation with the human ratings using pearson's r.\n4. We will compare this final correlation value with the values listed in the paper and see if the pipeline is working as expected.", "from __future__ import print_function\n\nimport re\nimport os\n\nfrom scipy.stats import pearsonr\nfrom datetime import datetime\n\nfrom gensim.models import CoherenceModel\nfrom gensim.corpora.dictionary import Dictionary\nfrom smart_open import smart_open", "Download the dataset (movie.zip) and gold standard data (topicsMovie.txt and goldMovie.txt) from the link and plug in the locations below.", "base_dir = os.path.join(os.path.expanduser('~'), \"workshop/nlp/data/\")\ndata_dir = os.path.join(base_dir, 'wiki-movie-subset')\nif not os.path.exists(data_dir):\n raise ValueError(\"SKIP: Please download the movie corpus.\")\n\nref_dir = os.path.join(base_dir, 'reference')\ntopics_path = os.path.join(ref_dir, 'topicsMovie.txt')\nhuman_scores_path = os.path.join(ref_dir, 'goldMovie.txt')\n\n%%time\n\ntexts = []\nfile_num = 0\npreprocessed = 0\nlisting = os.listdir(data_dir)\n\nfor fname in listing:\n file_num += 1\n if 'disambiguation' in fname:\n continue # discard disambiguation and redirect pages\n elif fname.startswith('File_'):\n continue # discard images, gifs, etc.\n elif fname.startswith('Category_'):\n continue # discard category articles\n \n # Not sure how to identify portal and redirect pages,\n # as well as pages about a single year.\n # As a result, this preprocessing differs from the paper.\n \n with smart_open(os.path.join(data_dir, fname), 'rb') as f:\n for line in f:\n # lower case all words\n lowered = line.lower()\n #remove punctuation and split into seperate words\n words = re.findall(r'\\w+', lowered, flags = re.UNICODE | re.LOCALE)\n texts.append(words)\n \n preprocessed += 1\n if file_num % 10000 == 0:\n print('PROGRESS: %d/%d, preprocessed %d, discarded %d' % (\n file_num, len(listing), preprocessed, (file_num - preprocessed)))\n\n%%time\n\ndictionary = Dictionary(texts)\ncorpus = [dictionary.doc2bow(text) for text in texts]", "Cross validate the numbers\nAccording to the paper the number of documents should be 108,952 with a vocabulary of 1,625,124. The difference is because of a difference in preprocessing. 
However the results obtained are still very similar.", "print(len(corpus))\nprint(dictionary)\n\ntopics = [] # list of 100 topics\nwith smart_open(topics_path, 'rb') as f:\n topics = [line.split() for line in f if line]\nlen(topics)\n\nhuman_scores = []\nwith smart_open(human_scores_path, 'rb') as f:\n for line in f:\n human_scores.append(float(line.strip()))\nlen(human_scores)", "Deal with any vocabulary mismatch.", "# We first need to filter out any topics that contain terms not in our dictionary\n# These may occur as a result of preprocessing steps differing from those used to\n# produce the reference topics. In this case, this only occurs in one topic.\ninvalid_topic_indices = set(\n i for i, topic in enumerate(topics)\n if any(t not in dictionary.token2id for t in topic)\n)\nprint(\"Topics with out-of-vocab terms: %s\" % ', '.join(map(str, invalid_topic_indices)))\nusable_topics = [topic for i, topic in enumerate(topics) if i not in invalid_topic_indices]", "Start off with u_mass coherence measure.", "%%time\n\ncm = CoherenceModel(topics=usable_topics, corpus=corpus, dictionary=dictionary, coherence='u_mass')\nu_mass = cm.get_coherence_per_topic()\nprint(\"Calculated u_mass coherence for %d topics\" % len(u_mass))", "Start c_v coherence measure\nThis is expected to take much more time since c_v uses a sliding window to perform probability estimation and uses the cosine similarity indirect confirmation measure.", "%%time\n\ncm = CoherenceModel(topics=usable_topics, texts=texts, dictionary=dictionary, coherence='c_v')\nc_v = cm.get_coherence_per_topic()\nprint(\"Calculated c_v coherence for %d topics\" % len(c_v))", "Start c_uci and c_npmi coherence measures\nc_v and c_uci and c_npmi all use the boolean sliding window approach of estimating probabilities. Since the CoherenceModel caches the accumulated statistics, calculation of c_uci and c_npmi are practically free after calculating c_v coherence. These two methods are simpler and were shown to correlate less with human judgements than c_v but more so than u_mass.", "%%time\n\ncm.coherence = 'c_uci'\nc_uci = cm.get_coherence_per_topic()\nprint(\"Calculated c_uci coherence for %d topics\" % len(c_uci))\n\n%%time\n\ncm.coherence = 'c_npmi'\nc_npmi = cm.get_coherence_per_topic()\nprint(\"Calculated c_npmi coherence for %d topics\" % len(c_npmi))\n\nfinal_scores = [\n score for i, score in enumerate(human_scores)\n if i not in invalid_topic_indices\n]\nlen(final_scores)", "The values in the paper were:\nu_mass correlation : 0.093\nc_v correlation : 0.548\nc_uci correlation : 0.473\nc_npmi correlation : 0.438\nOur values are also very similar to these values which is good. This validates the correctness of our pipeline, as we can reasonably attribute the differences to differences in preprocessing.", "for our_scores in (u_mass, c_v, c_uci, c_npmi):\n print(pearsonr(our_scores, final_scores)[0])", "Where do we go now?\n\nThe time required for completing all of these operations can be improved a lot by cythonising them.\nPreprocessing can be improved for this notebook by following the exact process mentioned in the reference paper. Specifically: All corpora as well as the complete Wikipedia used as reference corpus are preprocessed using lemmatization and stop word removal. Additionally, we removed portal and category articles, redirection and disambiguation pages as well as articles about single years. Note: we tried lemmatizing and found that significantly more of the reference topics had out-of-vocabulary terms." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
desihub/desispec
doc/nb/Lya-tsnr-signal.ipynb
bsd-3-clause
[ "import pandas\nimport numpy as np\nimport pylab as pl\nimport astropy.io.fits as fits\nimport matplotlib.pyplot as plt\n\nfrom astropy.table import Table, join\nfrom astropy.convolution import convolve, Gaussian1DKernel\nfrom desiutil.dust import mwdust_transmission", "Set where to write Lya ensemble dflux.", "outdir = '/global/homes/m/mjwilson/sandbox/lya-signal/desimodel/0.14.0/data/tsnr'", "Load a set of Vi'd Lya QSOs.", "dat = fits.open('/project/projectdirs/desi/spectro/redux/cascades/tiles/80609/deep/coadd-0-80609-deep.fits')\n\ndat.info()\n\nvi = pandas.read_csv('/project/projectdirs/desi/sv/vi/TruthTables/Blanc/QSO/desi-vi_QSO_tile80609_nightdeep_merged_all_210210_ADDING_object_info.csv')\n\nvi\n\nisin = (vi['best_spectype'] == 'QSO') & (vi['best_quality'] >= 2.5) & (vi['best_z'] >= 2.1)\n\nvi = vi[isin]\n\nvi\n\ntids = vi['TARGETID']\n\ngauss_kernel = Gaussian1DKernel(15)\n\nisin = np.isin(dat['FIBERMAP'].data['TARGETID'], tids)\n\nfmap_ids = dat['FIBERMAP'].data['TARGETID'][isin]\n\nnin = np.count_nonzero(fmap_ids)\n\ngmags = 22.5 - 2.5*np.log10(dat['FIBERMAP'].data['FLUX_G'][isin] / mwdust_transmission(dat['FIBERMAP'].data['EBV'][isin], 'G', dat['FIBERMAP'].data['PHOTSYS'][isin]))\n\ngmags", "Our QSOs", "fig, axes = plt.subplots(nin, 1, figsize=(5, 5 * nin))\n\nfor band in ['B','R','Z']:\n for i, x in enumerate(dat['{}_FLUX'.format(band)].data[isin]):\n axes[i].plot(dat['{}_WAVELENGTH'.format(band)].data, convolve(x, gauss_kernel), lw=0.5)\n axes[i].set_ylim(bottom=-0.5)", "Take closest to g=22 to be our reference.", "idx = np.where(np.abs(gmags - 22.) == np.abs(gmags - 22.).min())[0][0]\n\n# Force 7 \nidx = 7 \n\n# Closest to 22.\nmaster_fluxes = {'gmag': gmags[idx], 'tid': fmap_ids[idx]}\n\nfor band in ['B', 'R', 'Z']:\n master_fluxes[band] = {'wave': dat['{}_WAVELENGTH'.format(band)].data,\n 'smoothflux': convolve(dat['{}_FLUX'.format(band)].data[isin][idx], gauss_kernel),\n 'ivar': dat['{}_IVAR'.format(band)].data[isin][idx]}\n\nmaster_fluxes['tid']\n\nmaster_fluxes['gmag']\n\nvi[vi['TARGETID'] == master_fluxes['tid']]\n\nmaster_fluxes['z'] = vi[vi['TARGETID'] == master_fluxes['tid']]['best_z']\n\nmaster_fluxes['continuum'] = 0.43\n\npl.plot(master_fluxes['B']['wave'], master_fluxes['B']['smoothflux'])\npl.axhline(master_fluxes['continuum'], c='k', lw=0.5)\n\npl.xlabel('Wavelength [A]')\npl.ylabel('1.e-17 ergs/s/cm2/A')", "Later we use this (by eye) 'continuum' as our asymptotic 'signal' normalization at the blue end.\nGet a QSO n(z)", "# https://desi.lbl.gov/svn/code/desimodel/tags/0.14.0/data/targets/nz_qso.dat; \n# Number per sq. deg. per dz=0.1\n# Note: Cascades\nzlo, zhi, Nz = np.loadtxt('/global/common/software/desi/cori/desiconda/20200801-1.4.0-spec/code/desimodel/0.14.0/data/targets/nz_qso.dat', unpack=True)\n\nzmid = 0.5 * (zlo + zhi)\n\nNz /= Nz.max()\n\npl.plot(zmid, Nz, c='k', lw=0.5)\npl.xlabel('z')\n\nzs = np.random.uniform(0.0, 5.0, 500000)\nzs = np.sort(zs)\n\n# pl.hist(zs, bins=np.arange(0.0, 5.0, 0.1))\n\ndraws = np.random.uniform(0.0, 1.0, 500000)\n\nidx = np.digitize(zs, bins=np.arange(0.0, 5.1, 0.1))\n\nprobs = np.zeros_like(idx, dtype=np.float)\n\nfor i, uid in enumerate(np.unique(idx)[:-1]):\n probs[idx == uid] = Nz[i]\n\ndraws\n\nprobs\n\nisin = draws <= probs\n\nqso_zs = zs[isin]", "Here we've drawn an ensemble of zs from this distribution.", "pl.plot(zmid, 5000. * Nz, c='k', lw=0.5)\npl.hist(qso_zs, bins=np.arange(0.0, 5.0, 0.05), alpha=0.5)\npl.xlabel('z')\n\nlya_zs = qso_zs[qso_zs > 2.1]\n\nlya_zs \n\n# lya_zs = lya_zs[:2]\n\n# 1216. * (1. 
+ lya_zs)\n\nnlya = len(lya_zs)", "Our 'signal' will be unity bluer than Lya for a given redshift (zero otherwise). We then stack across the ensemble.", "tracer = 'LYA'\n\nhdr = fits.Header()\nhdr['NMODEL'] = nlya\nhdr['TRACER'] = tracer\nhdr['FILTER'] = 'decam2014-g'\nhdr['ZLO'] = 2.1\n \nhdu_list = [fits.PrimaryHDU(header=hdr)]\n\nfor band in ['b', 'r', 'z']:\n wave = dat['{}_WAVELENGTH'.format(band)].data\n nwave = wave[:,None] * np.ones(nlya, dtype=float)[None,:]\n \n weight = np.zeros(shape=(len(wave), nlya), dtype=float)\n \n for i, z in zip(range(nlya), lya_zs):\n weight[nwave[:,i] < (1. + z) * 1216., i] = 1.0\n\n mweight = np.mean(weight, axis=1) \n \n zpivot = 2.4 \n zfactor = (wave / (1. + zpivot) / 1216.)**0.95 \n zweight = zfactor * mweight\n \n mweight = np.expand_dims(master_fluxes['continuum'] * mweight, axis=0)\n zweight = np.expand_dims(master_fluxes['continuum'] * zweight, axis=0)\n\n if band =='b':\n pl.plot(wave, mweight[0], c='k', linestyle='--', label='No z weight')\n pl.plot(wave, zweight[0], c='k', label='z weight')\n \n else:\n pl.plot(wave, mweight[0], c='k', linestyle='--', label='')\n pl.plot(wave, zweight[0], c='k', label='')\n \n hdu_list.append(fits.ImageHDU(wave, name='WAVE_{}'.format(band.upper())))\n hdu_list.append(fits.ImageHDU(zweight, name='DFLUX_{}'.format(band.upper())))\n\nhdu_list = fits.HDUList(hdu_list)\nhdu_list.writeto('{}/tsnr-ensemble-{}.fits'.format(outdir, tracer.lower()), overwrite=True)\n\npl.xlabel('Wavelength [A]')\npl.ylabel('1.e-17 ergs/s/cm2/A')\npl.legend(frameon=False, loc=1)\n\nprint('Written to {}/tsnr-ensemble-{}.fits'.format(outdir, tracer.lower()))", "Finally, here we've used our reference continuum from above as the blue end normalization and write to disk at outdir.\nCheck against QSO tsnr.", "ens = fits.open('/global/common/software/desi/cori/desiconda/20200801-1.4.0-spec/code/desimodel/0.14.0/data/tsnr/tsnr-ensemble-qso.fits')\n\nens.info()\n\nens['DFLUX_B'].shape", "TODO: Non-critical (as resampled in tsnr.py), but followup on why the wavelength ranges to the other ensembles does not match reduction wavelengths now.\nDone." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/graphics
tensorflow_graphics/notebooks/non_rigid_deformation.ipynb
apache-2.0
[ "Copyright 2019 Google LLC.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Non-rigid surface deformation\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/non_rigid_deformation.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/non_rigid_deformation.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nNon-rigid surface deformation is a technique that, among other things, can be used to interactively manipulate meshes or to deform a template mesh to fit to a point-cloud. When manipulating meshes, this can for instance allow users to move the hand of a character, and have the rest of the arm deform in a realistic manner. It is interesting to note that the deformation can also be performed over the scale of parts or the entire mesh.\n\nThis notebook illustrates how to use Tensorflow Graphics to perform deformations similiar to the one contained in the above image. \nSetup & Imports\nIf Tensorflow Graphics is not installed on your system, the following cell can install the Tensorflow Graphics package for you.", "!pip install tensorflow_graphics", "Now that Tensorflow Graphics is installed, let's import everything needed to run the demo contained in this notebook.", "import numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_graphics.geometry.deformation_energy import as_conformal_as_possible\nfrom tensorflow_graphics.geometry.representation.mesh import utils as mesh_utils\nfrom tensorflow_graphics.geometry.transformation import quaternion\nfrom tensorflow_graphics.math.optimizer import levenberg_marquardt\nfrom tensorflow_graphics.notebooks import threejs_visualization\nfrom tensorflow_graphics.notebooks.resources import triangulated_stripe", "In this example, we build a mesh that corresponds to a flat and rectangular surface. 
Using the sliders, you can control the position of the deformation constraints applied to that surface, which respectively correspond to all the points along the left boundary, center, and right boundary of the mesh.", "mesh_rest_pose = triangulated_stripe.mesh\nconnectivity = mesh_utils.extract_unique_edges_from_triangular_mesh(triangulated_stripe.mesh['faces'])\ncamera = threejs_visualization.build_perspective_camera(\n field_of_view=40.0, position=(0.0, -5.0, 5.0))\nwidth = 500\nheight = 500\n_ = threejs_visualization.triangular_mesh_renderer([mesh_rest_pose],\n width=width,\n height=height,\n camera=camera)\n\n###############\n# UI controls #\n###############\n#@title Constraints on the deformed pose { vertical-output: false, run: \"auto\" }\nconstraint_1_z = 0 #@param { type: \"slider\", min: -1, max: 1 , step: 0.05 }\nconstraint_2_z = -1 #@param { type: \"slider\", min: -1, max: 1 , step: 0.05 }\nconstraint_3_z = 0 #@param { type: \"slider\", min: -1, max: 1 , step: 0.05 }\n\nvertices_rest_pose = tf.Variable(mesh_rest_pose['vertices'])\nvertices_deformed_pose = np.copy(mesh_rest_pose['vertices'])\nnum_vertices = vertices_deformed_pose.shape[0]\n\n# Adds the user-defined constraints\nvertices_deformed_pose[0, 2] = constraint_1_z\nvertices_deformed_pose[num_vertices // 2, 2] = constraint_1_z\nvertices_deformed_pose[num_vertices // 4, 2] = constraint_2_z\nvertices_deformed_pose[num_vertices // 2 + num_vertices // 4, 2] = constraint_2_z\nvertices_deformed_pose[num_vertices // 2 - 1, 2] = constraint_3_z\nvertices_deformed_pose[-1, 2] = constraint_3_z\n\nmesh_deformed_pose = {\n 'vertices': vertices_deformed_pose,\n 'faces': mesh_rest_pose['faces']\n}\n\nvertices_deformed_pose = tf.Variable(vertices_deformed_pose)\n\n# Builds a camera and render the mesh.\ncamera = threejs_visualization.build_perspective_camera(\n field_of_view=40.0, position=(0.0, -5.0, 5.0))\n_ = threejs_visualization.triangular_mesh_renderer([mesh_rest_pose],\n width=width,\n height=height,\n camera=camera)\n_ = threejs_visualization.triangular_mesh_renderer([mesh_deformed_pose],\n width=width,\n height=height,\n camera=camera)\n\ngeometries = threejs_visualization.triangular_mesh_renderer(\n [mesh_deformed_pose], width=width, height=height, camera=camera)\n\n\n################\n# Optimization #\n################\ndef update_viewer_callback(iteration, objective_value, variables):\n \"\"\"Callback to be called at each step of the optimization.\"\"\"\n geometries[0].getAttribute('position').copyArray(\n variables[0].numpy().ravel().tolist())\n geometries[0].getAttribute('position').needsUpdate = True\n geometries[0].computeVertexNormals()\n\n\ndef deformation_energy(vertices_deformed_pose, rotation):\n \"\"\"As conformal as possible deformation energy.\"\"\"\n return as_conformal_as_possible.energy(\n vertices_rest_pose,\n vertices_deformed_pose,\n rotation,\n connectivity,\n aggregate_loss=False)\n\n\ndef soft_constraints(vertices_deformed_pose):\n \"\"\"Soft constrains forcing results to obey the user-defined constraints.\"\"\"\n weight = 10.0\n return (\n weight * (vertices_deformed_pose[0, 2] - constraint_1_z),\n weight * (vertices_deformed_pose[num_vertices // 2, 2] - constraint_1_z),\n weight * (vertices_deformed_pose[num_vertices // 4, 2] - constraint_2_z),\n weight * (vertices_deformed_pose[num_vertices // 2 + num_vertices // 4, 2] -\n constraint_2_z),\n weight *\n (vertices_deformed_pose[num_vertices // 2 - 1, 2] - constraint_3_z),\n weight * (vertices_deformed_pose[-1, 2] - constraint_3_z),\n )\n\n\ndef 
fitting_energy(vertices_deformed_pose, rotation):\n deformation = deformation_energy(vertices_deformed_pose, rotation)\n constraints = soft_constraints(vertices_deformed_pose)\n return tf.concat((deformation, constraints), axis=0)\n\n\nrotations = tf.Variable(quaternion.from_euler(np.zeros((num_vertices, 3))))\n\nmax_iterations = 15 #@param { isTemplate: true, type: \"integer\" }\n_ = levenberg_marquardt.minimize(\n residuals=fitting_energy,\n variables=(vertices_deformed_pose, rotations),\n max_iterations=int(max_iterations),\n callback=update_viewer_callback)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
seg/2016-ml-contest
PA_Team/PA_Team_Submission_4-revised.ipynb
apache-2.0
[ "Submission 4 from <a href=\"http://petroanalytix.com/\">PetroAnalytix Team</a>\nIn this notebook, we try NN with several ideas/code from other contestant:\n* Alan Richardson (Ausar Geophysical) - PE imputation, method changed using MLPRegressor\n* <a href=\"https://home.deib.polimi.it/bestagini/\">Paolo Bestagini</a> - Feature augmentation\n* Model spearation between Marine and Non Marine", "import numpy as np\nnp.random.seed(1337)\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport time as tm\n\nimport pandas as pd\n\nfrom keras.models import Sequential, Model\nfrom keras.constraints import maxnorm\nfrom keras.layers import Dense, Dropout, Activation\nfrom keras.utils import np_utils\n\nfrom sklearn.metrics import f1_score, recall_score, accuracy_score, confusion_matrix\nfrom sklearn.model_selection import LeaveOneGroupOut\nfrom sklearn import preprocessing\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\n%matplotlib inline", "Load dataset", "training_data = pd.read_csv('../data/facies_vectors.csv')", "Utilities function", "def accuracy(conf):\n total_correct = 0.\n nb_classes = conf.shape[0]\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n acc = total_correct/sum(sum(conf))\n return acc\n\nadjacent_facies = np.array([[1], [0, 2], [1], [4], [3, 5], [4, 6, 7], [5, 7], [5, 6, 8], [6, 7]])\n\n\ndef accuracy_adjacent(conf, adjacent_facies):\n nb_classes = conf.shape[0]\n total_correct = 0.\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n for j in adjacent_facies[i]:\n total_correct += conf[i][j]\n return total_correct / sum(sum(conf))\n\n# 1=sandstone 2=c_siltstone 3=f_siltstone \n# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite\n# 8=packstone 9=bafflestone\nfacies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',\n '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']\n\nfacies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',\n 'WS', 'D','PS', 'BS']\n#facies_color_map is a dictionary that maps facies labels\n#to their respective colors\nfacies_color_map = {}\nfor ind, label in enumerate(facies_labels):\n facies_color_map[label] = facies_colors[ind]\n\ndef label_facies(row, labels):\n return labels[ row['Facies'] -1]\n \ntraining_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)\n\ndef make_facies_log_plot(logs, facies_colors):\n #make sure logs are sorted by depth\n logs = logs.sort_values(by='Depth')\n cmap_facies = colors.ListedColormap(\n facies_colors[0:len(facies_colors)], 'indexed')\n \n ztop=logs.Depth.min(); zbot=logs.Depth.max()\n \n cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)\n \n f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))\n ax[0].plot(logs.GR, logs.Depth, '.g')\n ax[1].plot(logs.ILD_log10, logs.Depth, '.')\n ax[2].plot(logs.DeltaPHI, logs.Depth, '.', color='0.5')\n ax[3].plot(logs.PHIND, logs.Depth, '.', color='r')\n ax[4].plot(logs.PE, logs.Depth, '.', color='black')\n im=ax[5].imshow(cluster, interpolation='none', aspect='auto',\n cmap=cmap_facies,vmin=1,vmax=9)\n \n divider = make_axes_locatable(ax[5])\n cax = divider.append_axes(\"right\", size=\"20%\", pad=0.05)\n cbar=plt.colorbar(im, cax=cax)\n cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', \n 'SiSh', ' MS ', ' WS ', ' D ', \n ' PS ', ' BS ']))\n cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')\n \n for i in range(len(ax)-1):\n ax[i].set_ylim(ztop,zbot)\n 
ax[i].invert_yaxis()\n ax[i].grid()\n ax[i].locator_params(axis='x', nbins=3)\n \n ax[0].set_xlabel(\"GR\")\n ax[0].set_xlim(logs.GR.min(),logs.GR.max())\n ax[1].set_xlabel(\"ILD_log10\")\n ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())\n ax[2].set_xlabel(\"DeltaPHI\")\n ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())\n ax[3].set_xlabel(\"PHIND\")\n ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())\n ax[4].set_xlabel(\"PE\")\n ax[4].set_xlim(logs.PE.min(),logs.PE.max())\n ax[5].set_xlabel('Facies')\n \n ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])\n ax[4].set_yticklabels([]); ax[5].set_yticklabels([])\n ax[5].set_xticklabels([])\n f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)", "Extract data", "X = training_data.drop(['Formation', 'Well Name', 'Depth', 'Facies', 'FaciesLabels'], axis=1).values\ny = training_data['Facies'].values - 1\n\nwells = training_data[\"Well Name\"].values", "Modified imputation method using MLPRegressor", "from sklearn.neural_network import MLPRegressor\n\nreg = MLPRegressor()\nDataImpAll = training_data.drop(['Formation', 'Well Name', 'Depth', 'FaciesLabels'], axis=1).copy()\nDataImp = DataImpAll.dropna(axis = 0, inplace=False)\nXimp=DataImp.loc[:, DataImp.columns != 'PE']\nYimp=DataImp.loc[:, 'PE']\nreg.fit(Ximp, Yimp)\nX[np.array(DataImpAll.PE.isnull()),4] = reg.predict(DataImpAll.loc[DataImpAll.PE.isnull(),:].drop('PE',axis=1,inplace=False))\n\ntraining_data.ix[:,\"PE\"] = X[:,4]", "Feature Augmentation method from Bestagini", "# Feature windows concatenation function\ndef augment_features_window(X, N_neig):\n \n # Parameters\n N_row = X.shape[0]\n N_feat = X.shape[1]\n\n # Zero padding\n X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))\n\n # Loop over windows\n X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))\n for r in np.arange(N_row)+N_neig:\n this_row = []\n for c in np.arange(-N_neig,N_neig+1):\n this_row = np.hstack((this_row, X[r+c]))\n X_aug[r-N_neig] = this_row\n\n return X_aug\n\n# Feature gradient computation function\ndef augment_features_gradient(X, depth):\n \n # Compute features gradient\n d_diff = np.diff(depth).reshape((-1, 1))\n d_diff[d_diff==0] = 0.001\n X_diff = np.diff(X, axis=0)\n X_grad = X_diff / d_diff\n \n # Compensate for last missing value\n X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))\n \n return X_grad\n\n# Feature augmentation function\ndef augment_features(X, well, depth, N_neig=1):\n \n # Augment features\n X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))\n for w in np.unique(well):\n w_idx = np.where(well == w)[0]\n X_aug_win = augment_features_window(X[w_idx, :], N_neig)\n X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])\n X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)\n \n # Find padded rows\n padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])\n \n return X_aug, padded_rows\n\n# Marine Models\n\nOrg_data = training_data\ntraining_data = training_data[training_data[\"NM_M\"]==1]\n\nX = training_data.drop(['Formation', 'Well Name', 'Depth', 'Facies', 'FaciesLabels'], axis=1).values\ny = training_data['Facies'].values - 1\nwells = training_data[\"Well Name\"].values\nwell = training_data['Well Name'].values\ndepth = training_data['Depth'].values\n\n\nX, padded_rows = augment_features(X, well, depth, N_neig=1)\nX1org = X\ny1org = y", "Neural Network", "def fDNN(in_dim, out_dim):\n \n # Model\n model = Sequential()\n model.add(Dense(152, 
input_dim=in_dim, activation='relu'))\n model.add(Dropout(0.2))\n model.add(Dense(512, activation='relu'))\n model.add(Dropout(0.2))\n model.add(Dense(out_dim, activation='softmax'))\n\n # Compilation\n model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n \n return model", "Validation with Leave One Well Out on Training Dataset", "logo = LeaveOneGroupOut()\n\nnb_classes = 9\nepoch = 10\nbats = 20\n\nt0 = tm.time()\n\nf1s_ls = []\nacc_ls = []\nadj_ls = []\n\nfrom scipy.signal import medfilt\n\nfor train, test in logo.split(X, y, groups=wells):\n well_name = wells[test[0]]\n # Scaling\n scaler = preprocessing.MinMaxScaler().fit(X)\n X_tr = scaler.transform(X[train])\n X_te = scaler.transform(X[test])\n\n Y_tr = np_utils.to_categorical(y[train], nb_classes)\n\n in_dim = len(X_tr[0])\n\n # Method initialization\n mlp = fDNN(in_dim, nb_classes)\n \n # Training\n mlp.fit(X_tr, Y_tr, nb_epoch=epoch, batch_size=bats, verbose=0) \n \n # Predict\n y_hat = mlp.predict_classes(X_te, verbose=0)\n y_hat = medfilt(y_hat, kernel_size=5)\n \n try:\n f1s = f1_score(y[test], y_hat, average=\"weighted\", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])\n except:\n f1s = 0\n\n try:\n conf = confusion_matrix(y[test], y_hat, labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])\n acc = f1_score(y[test], y_hat, average=\"micro\", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])\n except:\n acc = 0\n\n try:\n acc_adj = accuracy_adjacent(conf, adjacent_facies)\n except:\n acc_adj = 0\n\n f1s_ls += [f1s]\n acc_ls += [acc]\n adj_ls += [acc_adj]\n print(\"{:>20s} f1w:{:.3f} | f1m:{:.3f} | acc_adj:{:.3f}\".format(well_name, f1s, acc, acc_adj))\n\nt1 = tm.time()\nprint(\"Avg F1w\", np.average(f1s_ls)*100, \"Avg F1m\", np.average(acc_ls)*100, \"Avg Adj\", np.average(adj_ls)*100)\nprint((t1-t0), \"seconds\")\n\n# Non - Marine\n\ntraining_data = Org_data\ntraining_data = training_data[training_data[\"NM_M\"]==2]\n\nX = training_data.drop(['Formation', 'Well Name', 'Depth', 'Facies', 'FaciesLabels'], axis=1).values\ny = training_data['Facies'].values - 1\nwells = training_data[\"Well Name\"].values\nwell = training_data['Well Name'].values\ndepth = training_data['Depth'].values\nX, padded_rows = augment_features(X, well, depth, N_neig=1)\nX2org =X\ny2org = y\n\nf1s_ls = []\nacc_ls = []\nadj_ls = []\n\nfrom scipy.signal import medfilt\n\nfor train, test in logo.split(X, y, groups=wells):\n well_name = wells[test[0]]\n \n # Scaling\n scaler = preprocessing.MinMaxScaler().fit(X)\n X_tr = scaler.transform(X[train])\n X_te = scaler.transform(X[test])\n\n Y_tr = np_utils.to_categorical(y[train], nb_classes)\n\n in_dim = len(X_tr[0])\n\n # Method initialization\n mlp = fDNN(in_dim, nb_classes)\n \n # Training\n mlp.fit(X_tr, Y_tr, nb_epoch=epoch, batch_size=bats, verbose=0) \n \n # Predict\n y_hat = mlp.predict_classes(X_te, verbose=0)\n y_hat = medfilt(y_hat, kernel_size=5)\n \n try:\n f1s = f1_score(y[test], y_hat, average=\"weighted\", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])\n except:\n f1s = 0\n\n try:\n conf = confusion_matrix(y[test], y_hat, labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])\n acc = f1_score(y[test], y_hat, average=\"micro\", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])\n except:\n acc = 0\n\n try:\n acc_adj = accuracy_adjacent(conf, adjacent_facies)\n except:\n acc_adj = 0\n\n f1s_ls += [f1s]\n acc_ls += [acc]\n adj_ls += [acc_adj]\n print(\"{:>20s} f1w:{:.3f} | f1m:{:.3f} | acc_adj:{:.3f}\".format(well_name, f1s, acc, acc_adj))\n\nt1 = tm.time()\nprint(\"Avg F1w\", np.average(f1s_ls)*100, \"Avg F1m\", np.average(acc_ls)*100, \"Avg Adj\", 
np.average(adj_ls)*100)\nprint((t1-t0), \"seconds\")", "Applying to Test Dataset", "Org_blind_data = pd.read_csv('../data/nofacies_data.csv')\nblind_data = Org_blind_data[Org_blind_data[\"NM_M\"]==1]\n\nX_blind = blind_data.drop(['Formation', 'Well Name', 'Depth'], axis=1).values\nwell_blind = blind_data['Well Name'].values\ndepth_blind = blind_data['Depth'].values\n\nX_blind, padded_rows = augment_features(X_blind, well_blind, depth_blind, N_neig=1)\n\n# Scaling\n\nscl = preprocessing.MinMaxScaler().fit(X1org)\nX_train = scl.transform(X1org)\nX_blind = scl.transform(X_blind)\nY_train = np_utils.to_categorical(y1org, nb_classes)\n\n# Method initialization\nmodel = fDNN(in_dim, nb_classes)\n\n# Training\nmodel.fit(X_train, Y_train, nb_epoch=epoch, batch_size=bats, verbose=0) \n\n# Predict\ny_blind = model.predict_classes(X_blind, verbose=0)\ny_blind = medfilt(y_blind, kernel_size=5)\n\nOrg_blind_data.ix[Org_blind_data[\"NM_M\"]==1,\"Facies\"] = y_blind + 1 # return the original value (1-9)\n\nblind_data = Org_blind_data[Org_blind_data[\"NM_M\"]==2]\nX_blind = blind_data.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1).values\nwell_blind = blind_data['Well Name'].values\ndepth_blind = blind_data['Depth'].values\nX_blind, padded_rows = augment_features(X_blind, well_blind, depth_blind, N_neig=1)\n\n# Scaling\nscl = preprocessing.MinMaxScaler().fit(X2org)\nX_train = scl.transform(X2org)\nX_blind = scl.transform(X_blind)\n\nY_train = np_utils.to_categorical(y2org, nb_classes)\n\n# Method initialization\nmodel = fDNN(in_dim, nb_classes)\n\n# Training\nmodel.fit(X_train, Y_train, nb_epoch=epoch, batch_size=bats, verbose=0) \n\n# Predict\ny_blind = model.predict_classes(X_blind, verbose=0)\ny_blind = medfilt(y_blind, kernel_size=5)\n\nOrg_blind_data.ix[Org_blind_data[\"NM_M\"]==2,\"Facies\"] = y_blind + 1 # return the original value (1-9)\n\nOrg_blind_data.to_csv(\"PA_Team_Submission_4-revised.csv\")\n\nmake_facies_log_plot(\n Org_blind_data[Org_blind_data['Well Name'] == 'STUART'],\n facies_colors)\n\nmake_facies_log_plot(\n Org_blind_data[Org_blind_data['Well Name'] == 'CRAWFORD'],\n facies_colors)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
wasit7/parallel_forest
nb/parallel forest.ipynb
mit
[ "Parallel Forest Tutorial\nThis notebook shows a traing process of Parallel Random Forest. For cluster training please check https://github.com/wasit7/parallel_forest\nimport modules\nImport all necessary modules", "import numpy as np\nfrom matplotlib import pyplot as plt\nimport pickle\nimport os\n%pylab inline", "Generating datasets", "clmax=5\nspc=5e2\ntheta_range=2\n#samples is list of labels\nsamples=np.zeros(spc*clmax,dtype=np.uint32)\n#I is fessture vector\nI=np.zeros((spc*clmax,theta_range),dtype=np.float32)\nmarker=['bo','co','go','ro','mo','yo','ko',\n 'bs','cs','gs','rs','ms','ys','ks']\n\n# number of datasets being generated \n# 8 for training\n# another one for evaluation\nN=9 \npath=\"train/\"\nif not os.path.exists(path):\n os.makedirs(path)\nfor n in xrange(N):\n for cl in xrange(clmax):\n xo=cl*spc\n #define label\n samples[xo:xo+spc]=cl\n phi = np.linspace(0, 2*np.pi, spc) + \\\n np.random.randn(spc)*0.4*np.pi/clmax + \\\n 2*np.pi*cl/clmax\n r = np.linspace(0.1, 1, spc)\n I[xo:xo+spc,:]=np.transpose(np.array([r*np.cos(phi), r*np.sin(phi)]))\n with open(path+'dataset%02d.pic'%(n), 'wb') as pickleFile:\n #write label and feature vector\n theta_dim=1\n pickle.dump((clmax,theta_dim,theta_range,len(samples),samples,I,None), pickleFile, pickle.HIGHEST_PROTOCOL)", "Visualization of the dataset", "z=np.random.randint( 0,spc*clmax,1000)\nfor i in z:\n #ax.plot(dset.I[i,0],dset.I[i,1],marker[dset2.samples[i]])\n plt.plot(I[i,0],I[i,1],marker[samples[i]])\n plt.hold(True)", "Training", "from pforest.master import master\nm=master()\nm.reset()\nm.train()", "Write and read the tree\nYou may need to save/load the tree to/from a pickle file", "with open('out_tree.pic', 'wb') as pickleFile:\n pickle.dump(m.root, pickleFile, pickle.HIGHEST_PROTOCOL)\n \nwith open('out_tree.pic', 'rb') as pickleFile:\n root = pickle.load(pickleFile)", "Check the file size", "ls", "The result decision tree\nTermination code (Q:min bag size, G:no information gain, D:reaching maximum depth)", "from pforest.dataset import dataset\nfrom pforest.tree import tree\n\n#init the test tree\nt=tree()\nt.settree(root)\nt.show()", "Recall rate\nLoading a new dataset, the last on, for computing a recall rate", "#load the last dataset that never use for training\ndset=dataset(8)\ncorrect=0;\nfor x in xrange(dset.size):\n L=t.getL(np.array([x]),dset)\n if dset.getL(x) == L:\n correct=correct+1\n dset.setL(x,L)\nprint(\"recall rate: {}%\".format(correct/float(dset.size)*100))", "Labelling\nThe computer use the decision tree to classify the unknown feature vector u", "#setup the new test-set\n#load dataset \ndset=dataset(8)\nd=0.05\ny, x = np.mgrid[slice(-1, 1+d, d), slice(-1, 1+d, d)]\n\n#start labeling\nL=np.zeros(x.shape,dtype=int)\nfor r in xrange(x.shape[0]):\n for c in xrange(x.shape[1]):\n u=( x[r,c],y[r,c] )\n Prob=t.classify(u)\n L[r,c]=np.argmax(Prob)", "2D space partitioning by the decision tree\nDisplaying the labelled result", "%matplotlib inline\nimport matplotlib.pyplot as plt\nfig, ax = plt.subplots()\nax.axis([-1,1,-1,1])\nax.pcolor(x,y,L)\nax.hold(True)", "Overlay the dataset", "z=np.random.randint(0,dset.size,1000)\nfor i in z:\n ax.plot(dset.I[i,0],dset.I[i,1],marker[dset.samples[i]])\nfig\n\nt.classify([0.75,0.0])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Bio204-class/bio204-notebooks
Mathematical-Notation-Sums-and-Products.ipynb
cc0-1.0
[ "Author: Paul Magwene \nTitle: Mathematical notation: sums and products\nDate: 20 January 2016\n\nWhy mathematical notation?\nWhen studying quantitative subjects like statistics, mathematics, or computer science, it's typical to use short hand notation to represent various operations. This is done because some operations are used so frequently that it would be tedious to write out a full explanation each time they're used. \nSome of this mathematical notation can be a little intimidating at first. Don't let it scare you off. With a little practice it's easy to break the notation down into easy to understand parts.\nSum notation\nSumming (adding things up) is something you do frequently in all sorts of quantitative fields. \nWhen we carry out statistical analyses we're going to be working with sequences of numbers (mathematicians would usually call these vectors). In Python we might represent such a sequence as a list (or an array which we'll meet in a later class). In statistics we usually give the sequence a name like $\\mathbf{x}$; when programming we might use a variable assignment like x = [3,1,4,...].\nIn mathematics we represent the operation of summing the elements of a sequence with a capital Greek letter sigma ($\\Sigma$). Here's an example:\n$$\n\\sum_{i=1}^{10} \\mathbf{x}_i\n$$\nNotice that there are two numbers -- one above and one below the $\\Sigma$. These are the upper and lower bounds of the indices of the elements we want from $\\mathbf{x}$. Note that mathematicians index sequences from one, unlike computer scientists who usually index from zero.\nIn words, the above notation is equivalent to the written statement: \"From the sequence we call $\\mathbf{x}$, take the first 10 elements, and add them up.\"\nAn equivalent statement in Python would be:\npython\nsum(x[0:10]) # sum the first ten elements of x\nRemember that when slicing Python sequences, the second part of the slice is non-inclusive (i.e. we take all the elements up to but not including the element indexed by 10).\nOften it's convenient to further abstract our notation. Let's assume our sequence $\\mathbf{x}$ has a length we'll call $n$. We don't necessarily know the length of $\\mathbf{x}$ ahead of time, so using the label $n$ to refer to its length let's us abstract away this detail. If we want to represent the operation of summing up all the elements of $\\mathbf{x}$ we could write:\n$$\n\\sum_{i=1}^n \\mathbf{x}_i\n$$\nThe equivalent Python statement would be:\npython\nsum(x) # sum all the elements of x\nNotice that when using sum notation, the lower index doesn't have to start at 1. For example, to represent the operation of summing up the last 10 elements of $\\mathbf{x}$ we could write:\n$$\n\\sum_{i = n-10}^n \\mathbf{x}_i\n$$\nThe equivalent Python statement is:\npython\nsum(x[-10:]) # sum the last 10 elements of x\nSumming with for loops\nPython has a convenient sum function that makes it easy to quickly sum the elements of a sequence. But what if we didn't have this function, or what if we wanted to sum not the elements in the list, but rather some operation we applied to those elements? This problem is easy to solve with a for loop (Note: there are more efficient ways to do such operations, but we're aiming for conceptual simplicity). 
Let's illustrate this with an example:", "x = [2,4,6,8,10]\n\ns = 0 # initialize the object that will hold our sum\nfor i in x:\n s = s + i\n \nprint(\"The sum of x is:\", s)", "Instead of writing s = s + i we could have written s += i (read this as \"s is whatever it was before plus the value of i\"). So we could rewrite that for loop as:", "x = [2,4,6,8,10]\n\ns = 0 \nfor i in x:\n s += i\n \nprint(\"The sum of x is:\", s)", "Now what if we wanted to calculate the sum of the reciprocals of each element of x? A simple change to our code gives us:", "x = [2,4,6,8,10]\ns = 0\nfor i in x:\n s += (1/i)\n \nprint(\"The sum of the reciprocals of x is:\", s)", "To bring things full circle, the equivalent mathematical notation to represent the operation of summing the reciprocals of all the elements of $\\mathbf{x}$ would be:\n$$\n\\sum_{i=1}^n \\frac{1}{\\mathbf{x}_i}\n$$\nThe code above is somewhat fragile in that it's not easily re-usable. What if we wanted to sum the reciprocals of a list called y or z instead of x? We'd have to go through our code example and change each instance of x. That's boring and error-prone. Instead let's write a Python function to abstract away the steps:", "def sum_of_reciprocals(x):\n s = 0\n for i in x:\n s += (1.0/i)\n return s\n\n# test our function with different inputs\nx = [2,4,6,8,10]\ny = [1,3,5,7,9]\nz = [-1,1,-1,1]\n\nprint(\"The sum of the reciprocals of x is:\", sum_of_reciprocals(x))\nprint(\"The sum of the reciprocals of y is:\", sum_of_reciprocals(y))\nprint(\"The sum of the reciprocals of z is:\", sum_of_reciprocals(z))", "An even more compact way of writing our sum-of-reciprocals operation, one that still uses the built-in sum function, would be to use a list comprehension as shown below:", "sum_recip_x = sum([(1.0/i) for i in x])\nprint(\"The sum of the reciprocals of x is: \", sum_recip_x)", "Note that our sum_of_reciprocals function (or our solution using list comprehensions) doesn't deal with all possible cases we might use as input. If one of the elements of x was zero what would happen (go ahead and try it)? What if we passed a list of strings to the function instead of numbers?\nProduct notation\nNow that you (hopefully) understand sum notation, it should be easy to understand product notation. We use product notation to represent the products of the elements of a sequence (i.e. the value we get when we multiply the elements of the sequence). As we'll see later in the course, product notation arises frequently in discussions of probability.\nThe mathematical shorthand for taking the product of a sequence of numbers is the capital Greek Pi ($\\Pi$). In parallel to our first example above, the product of the first ten elements of a sequence $\\mathbf{x}$ could be written this way:\n$$\n\\prod_{i=1}^{10} \\mathbf{x}_i\n$$\nOther than the use of $\\Pi$ rather than $\\Sigma$, this is identical to the sum notation above. As before the notation includes information about the upper and lower bounds of the element indices for which we want to apply the operation.\nIn a similar manner to what we saw before, we can represent the operation of getting the product of an arbitrary sequence $\\mathbf{x}$ of length $n$ as follows:\n$$\n\\prod_{i=1}^{n} \\mathbf{x}_i\n$$\nProducts with for loops\nUnlike sum, there is no built-in product function in Python (we will see an efficient implementation of the product operation when we get to the numerical Python libraries). 
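As a quick preview of those numerical libraries, NumPy already ships a vectorized product, np.prod; shown here only as an aside, using the same example list as above:\npython\nimport numpy as np\nnp.prod([2, 4, 6, 8, 10])  # 3840, the product of our example list x\n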
However, as we saw above we can use for loops to write our own product function.", "def product(x):\n p = 1\n for i in x:\n p *= i # same as p = p * i\n return p\n\nx = [2,4,6,8,10]\nproduct(x)\n\nproduct([(1.0/i) for i in x]) # use list comprehension to get reciprocals of x" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Mogeng/IO-HMM
examples/notebooks/SupervisedIOHMM.ipynb
mit
[ "This is the IOHMM model with the parameters learned in a supervised way. This is corresponding to the counting frequency process as in the supervised HMM. See notes in http://www.cs.columbia.edu/4761/notes07/chapter4.3-HMM.pdf.\nSupervisedIOHMM", "from __future__ import division\n\nimport json\nimport warnings\n\n\nimport numpy as np\nimport pandas as pd\n\n\nfrom IOHMM import SupervisedIOHMM\nfrom IOHMM import OLS, CrossEntropyMNL\n\n\nwarnings.simplefilter(\"ignore\")", "Load speed data", "speed = pd.read_csv('../data/speed.csv')\nspeed.head()", "Label some/all states\nIn our structure of the code, the states should be a dictionary, the key is the index in the sequence (e.g. 0, 5) and the value is a one-out-of-n code of array where the kth value is 1 if the hidden state is k. n is the number of states in total.\nIn the following example, we assume that the \"corr\" column gives the correct hidden states.", "states = {}\ncorr = np.array(speed['corr'])\nfor i in range(len(corr)):\n state = np.zeros((2,))\n if corr[i] == 'cor':\n states[i] = np.array([0,1])\n else:\n states[i] = np.array([1,0])", "Set up a simple model manully", "# we choose 2 hidden states in this model\nSHMM = SupervisedIOHMM(num_states=2)\n\n# we set only one output 'rt' modeled by a linear regression model\nSHMM.set_models(model_emissions = [OLS()], \n model_transition=CrossEntropyMNL(solver='lbfgs'),\n model_initial=CrossEntropyMNL(solver='lbfgs'))\n\n# we set no covariates associated with initial/transitiojn/emission models\nSHMM.set_inputs(covariates_initial = [], covariates_transition = [], covariates_emissions = [[]])\n\n# set the response of the emission model\nSHMM.set_outputs([['rt']])\n\n# set the data and ground truth states\nSHMM.set_data([[speed, states]])", "Start training", "SHMM.train()", "See the training results", "# the coefficients of the output model for each states\nprint(SHMM.model_emissions[0][0].coef)\nprint(SHMM.model_emissions[1][0].coef)\n\n# the scale/dispersion of the output model of each states\nprint(np.sqrt(SHMM.model_emissions[0][0].dispersion))\nprint(np.sqrt(SHMM.model_emissions[1][0].dispersion))\n\n# the transition probability from each state\nprint(np.exp(SHMM.model_transition[0].predict_log_proba(np.array([[]]))))\nprint(np.exp(SHMM.model_transition[1].predict_log_proba(np.array([[]]))))", "Save the trained model", "json_dict = SHMM.to_json('../models/SupervisedIOHMM/')\njson_dict\n\nwith open('../models/SupervisedIOHMM/model.json', 'w') as outfile:\n json.dump(json_dict, outfile, indent=4, sort_keys=True)", "Load back the trained model", "SHMM_from_json = SupervisedIOHMM.from_json(json_dict)", "See if the coefficients are any different", "# the coefficients of the output model for each states\nprint(SHMM.model_emissions[0][0].coef)\nprint(SHMM.model_emissions[1][0].coef)", "Set up the model using a config file, instead of doing it manully", "with open('../models/SupervisedIOHMM/config.json') as json_data:\n json_dict = json.load(json_data)\n\nSHMM_from_config = SupervisedIOHMM.from_config(json_dict)", "Set data and start training", "SHMM_from_config.set_data([[speed, states]])\nSHMM_from_config.train()", "See if the training results are any different?", "# the coefficients of the output model for each states\nprint(SHMM_from_config.model_emissions[0][0].coef)\nprint(SHMM_from_config.model_emissions[1][0].coef)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mayankjohri/LetsExplorePython
Section 2 - Advance Python/Chapter S2.01 - Functional Programming/02_03_iter.ipynb
gpl-3.0
[ "Python Iterators\nAs common sense suggest, Iterators are object which can be iterated upon such as list, dictionary, string etc. In Python they are literally everywhere. \nThey are objects which when iterated retuns one element at a time. We have already seen most of the inbuilt iterators, such as list, tuple, dictionary, string, etc. In this chapter we are going to create our own custom iterators.\nThere are few ways in which we can create a custom iterators.\nClass Methods\nIn order to create a python iterator, our custom class must implement two special methods, __iter__() and __next__(), which collectively are called the iterator protocol.", "class MyIter(object):\n def __init__(self, lst):\n self.lst = lst\n self.i = 0\n \n def __iter__(self):\n self.i = 0\n return self\n \n def __next__(self):\n if self.i < len(self.lst):\n nxt = self.lst[self.i]\n self.i +=1\n return nxt\n else:\n raise StopIteration\n\nm = MyIter([1, 2, 3, 4, 5, 6])\n\nfor a in m:\n print(a)", "iter()\nThe iter() method returns an iterator for the given object.\nSyntax:\npython\niter(object[, sentinel])\nWhere object is an object based on which the iterator needs to be constructed. The behavior of iterator is dependent on the value of sentinel, if sentinel is not provided then object should be an interator and the construct will behave as such, where as if sentinel is provided then object should be callable, and value returned will be treated as next call. Iteration ends when the value retuned equals to value in sentinel", "class MyDummy(object):\n def __init__(self):\n self.lst = [1, 2, 3, 4, 5, 6]\n self.i = 0\n \n def __call__(self):\n ret = self.lst[self.i]\n self.i += 1\n return ret\n\nd = MyDummy()\n\n\nfor a in iter(d, 3):\n print(a, end=\" \")\n\nm = MyIter([1, 2, 3, 4, 5, 6])\nfor a in iter(m):\n print(a, end=\" \")", "lets try another example, this time lets take a string", "st = \"Welcome to the city of lakes\"\n\nfor a in iter(st):\n print(a, end=\" \")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
griffinfoster/fundamentals_of_interferometry
2_Mathematical_Groundwork/fft_implementation_assignment.ipynb
gpl-2.0
[ "Implementation of a Radix-2 Fast Fourier Transform\nImport standard modules:", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import HTML \nHTML('../style/course.css') #apply general CSS\n\nimport cmath", "This assignment is to implement a python-based Fast Fourier Transform (FFT). Building on $\\S$ 2.8 &#10142; we will implement a 1-D radix-2 Cooley-Tukey-based FFT using decimation in time (DIT) an $N = 2^n$ input function, and then generalize the function to take any input.\nFrom $\\S$ 2.8.2 &#10142; the discrete Fourier transform (DFT) is defined as:\n$$ \\mathscr{F}{\\rm D}{y}_k = Y_k = \\sum{n\\,=\\,0}^{N-1} y_n\\,e^{-\\imath 2\\pi \\frac{nk}{N}}, $$\nThat is, the $k^{th}$ element of the Fourier transformed spectrum $Y$ is a sum over all $n$ elements of the function $y$, each multipled by a complex twiddle factor $e^{-\\imath 2\\pi \\frac{nk}{N}}$. In $\\S$ 2.8.5 &#10142; two methods for computing the DFT for a size $N = 2^n$ discrete function. A double loop to compute all elements of the Fourier-transformed spectrum, and a matrix multiplication by generating the Fourier kernel $K$. The compute time to perform the DFT is $\\mathcal{O}(N^2)$, this is it takes $cN^2$ operations where $c > 1$ is a constant factor. Though as note in $\\S$ 2.8.5 &#10142; the matrix implementation is much fast that the loop because this algorithm takes advantage of fast vector math libraries.\nThe DFT code is replicated here as it will be used to compare our implementation of the FFT:", "def loop_DFT(x):\n \"\"\"\n Implementing the DFT in a double loop\n Input: x = the vector we want to find the DFT of\n \"\"\"\n #Get the length of the vector (will only work for 1D arrays)\n N = x.size\n #Create vector to store result in\n X = np.zeros(N, dtype=complex)\n for k in range(N):\n for n in range(N):\n X[k] += np.exp(-1j * 2.0* np.pi* k * n / N) * x[n]\n return X\n\ndef matrix_DFT(x):\n \"\"\"\n Implementing the DFT in vectorised form\n Input: x = the vector we want to find the DFT of\n \"\"\"\n #Get the length of the vector (will only work for 1D arrays)\n N = x.size\n #Create vector to store result in\n n = np.arange(N)\n k = n.reshape((N,1))\n K = np.exp(-1j * 2.0 * np.pi * k * n / N)\n return K.dot(x)", "In $\\S$ 2.8.6 &#10142; the fast Fourier transform was introduced as using recursion to implement a Fourier transform in $\\mathcal{O}(N\\log_2N)$ computations, significantly reducing the computational cost of computing the Fourier transform, especially for large $N$. A 'one layer' fast Fourier transform was presented which split the input function into two, and applied the twiddle factor to all values in the layer before calling the matrix-based DFT. 
This code is replicated below.", "def one_layer_FFT(x):\n    \"\"\"An implementation of the 1D Cooley-Tukey FFT using one layer\"\"\"\n    N = x.size\n    if N%2 > 0:\n        print \"Warning: length of x is not a power of two, returning DFT\"\n        return matrix_DFT(x)\n    else:\n        X_even = matrix_DFT(x[::2])\n        X_odd = matrix_DFT(x[1::2])\n        factor = np.exp(-2j * np.pi * np.arange(N) / N)\n        return np.concatenate([X_even + factor[:N / 2] * X_odd, X_even + factor[N / 2:] * X_odd])", "We can easily show that each of these functions produces the same results by introducing a discrete test function $x$ and showing that the same results are reported by each function call:", "xTest = np.random.random(256) # create random vector to take the DFT of\n\nprint np.allclose(loop_DFT(xTest), matrix_DFT(xTest)) # returns True if all values are equal (within numerical error)\nprint np.allclose(matrix_DFT(xTest), one_layer_FFT(xTest)) # returns True if all values are equal (within numerical error)", "We can also time each function to report the amount of time it takes to return a finished spectrum.", "print 'Double Loop DFT:'\n%timeit loop_DFT(xTest)\nprint '\\nMatrix DFT:'\n%timeit matrix_DFT(xTest)\nprint '\\nOne Layer FFT + Matrix DFT:'\n%timeit one_layer_FFT(xTest)", "As we can see, the matrix DFT is significantly faster than the double loop DFT; this is because of the fast vectorization functions in numpy. And the 'one-layer' FFT is about twice as fast as the matrix DFT because of the FFT architecture. We can go one step further and use the built-in numpy FFT:", "print np.allclose(one_layer_FFT(xTest), np.fft.fft(xTest))\n\nprint 'numpy FFT:'\n%timeit np.fft.fft(xTest)", "The numpy FFT is very fast, in part because of the low-level programming implementation, but fundamentally because it uses an FFT architecture. Our goal for this assignment is to implement such an architecture.\nDecimation-in-Time (DIT) FFT (12 Points)\nThe computational efficiency of the FFT comes from the recursive design of the algorithm which takes advantage of a binary tree design and the use of generalized twiddle factors. There are two designs of the binary tree which lead to the decimation-in-time (DIT) and decimation-in-frequency (DIF) architectures. Both architectures produce equivalent results, but they differ in the direction and starting point of the computations on the FFT binary tree. See the wikipedia page on the Cooley-Tukey FFT &#10548; for a diagram and pseudo-code of the DIT implementation.\nFor this section of the assignment implement the Radix-2 DIT FFT algorithm for the case of a $2^n$ size input; this input can be either real or complex.", "def ditrad2(x):\n    \"\"\"radix-2 DIT FFT\n    x: list or array of N values to perform FFT on, can be real or imaginary, x must be of size 2^n\n    \"\"\"\n    ox = np.asarray(x, dtype='complex') # assure the input is an array of complex values\n    # INSERT: assign a value to N, the size of the FFT\n    N = #??? 1 point\n    \n    if N==1: return ox # base case\n\n    # INSERT: compute the 'even' and 'odd' components of the FFT,\n    # you will recursively call ditrad() here on a subset of the input values\n    # Hint: a binary tree design splits the input in half\n    even = #??? 2 points\n    odd = #??? 
2 points\n    \n    twiddles = np.exp(-2.j * cmath.pi * np.arange(N) / N) # compute the twiddle factors\n\n    # INSERT: apply the twiddle factors and return the FFT by combining the even and odd values\n    # Hint: twiddle factors are only applied to the odd values\n    # Hint: combining even and odd is different from the way the inputs were split apart above.\n    return #??? 3 points", "Once ditrad2() is properly implemented, the results of calling the function should be equivalent to the output of the numpy FFT, and should run faster than the DFT and one-layer FFT.", "print 'The output of ditrad2() is correct?', np.allclose(np.fft.fft(xTest), ditrad2(xTest)) # 2 points if true\n\nprint 'your FFT:'\n%timeit ditrad2(xTest) # 2 points if your time < One Layer FFT + Matrix DFT", "A non-$2^n$ FFT (10 points)\nNow that we have implemented a fast radix-2 algorithm for vectors of length $2^n$, we can write a generic algorithm which can take an input of any length. This algorithm will check if the length of the input is divisible by 2; if so it will use the FFT, otherwise it will default to the slower matrix-based DFT.", "def generalFFT(x):\n    \"\"\"radix-2 DIT FFT\n    x: list or array of N values to perform FFT on, can be real or imaginary\n    \"\"\"\n    ox = np.asarray(x, dtype='complex') # assure the input is an array of complex values\n    # INSERT: assign a value to N, the size of the FFT\n    N = #??? 1 point\n    \n    if N==1: return ox # base case\n    elif # INSERT: check if the length is divisible by 2, 1 point\n\n        # INSERT: do a FFT, use your ditrad2() code here, 3 points\n        # Hint: your ditrad2() code can be copied here, and will work with only a minor modification\n        \n    else: # INSERT: if not divisible by 2, do a slow Fourier Transform\n        return # ??? 1 point", "Running this algorithm on inputs of different lengths should produce different run times. For a vector with a prime-number length the algorithm will default to the slow matrix-based DFT. For a vector whose length is repeatedly divisible by 2 the algorithm should be faster.", "xTest2 = np.random.random(251) # create random vector to take the DFT of, note, this is not of length 2^n\nxTest3 = np.random.random(12*32) # create random vector to take the DFT of, note, this is not of length 2^n\n\nprint 'The output of generalFFT() is correct?', np.allclose(np.fft.fft(xTest2), generalFFT(xTest2)) # 1 point\n\nprint 'Your generic FFT:'\n%timeit generalFFT(xTest2) # 1 point if it runs in approximately the same time as matrix_DFT\n\n%timeit generalFFT(xTest3) # 2 points if it runs faster than the xTest2 vector", "FUTURE: Extras\n\nin place FFT, fixed point, radix-2 DIF, radix-4" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
LucaCanali/Miscellaneous
Oracle_Jupyter/Oracle_histograms.ipynb
apache-2.0
[ "How to generate histograms using Oracle SQL\nThis provides and example of how to generate frequency histograms using the Oracle SQL.\nDisambiguation: we refer here to computing histograms of table data, rather than histograms of the columns statistics used by the cost based optimizer.\nDependencies: needs an Oracle client installation and cx_Oracle\nAuthor and contacts: Luca.Canali@cern.ch\nSetup and prerequisites\nThis is how you can setup an Oracle instance for testing using a docker image for oracle-xe\nrun oracle xe on a container from gvenzl dockerhub repo https://github.com/gvenzl/oci-oracle-xe\ndocker run -d --name mydb1 -e ORACLE_PASSWORD=oracle -p 1521:1521 gvenzl/oracle-xe:latest # or use :slim\nwait till the DB is started, check logs at:\ndocker logs -f mydb1\noracledb library: This uses oracledb to connect to oracle, so no need to install the Oracle client.\nNote: oracledb can also work with the oracle client as cx_Oracle did,\nsee documentation for details.\nQuery Oracle from Python with oracledb\noracledb is the next version of cx_Oracle", "# connect to Oracle using oracledb\n# !pip install oracledb \n\nimport oracledb\n\ndb_user = 'system'\ndb_connect_string = 'localhost:1521/XEPDB1'\ndb_pass = 'oracle'\n\n# db_connect_string = 'dbserver:1521/orcl.mydomain.com'\n# import getpass\n# db_pass = getpass.getpass()\n", "Create the test table", "with oracledb.connect(user=db_user, password=db_pass, dsn=db_connect_string) as ora_conn:\n\n cursor = ora_conn.cursor()\n \n # use this drop statement if you need to recreate the table\n # cursor.execute(\"drop table data\")\n\n cursor.execute(\"begin dbms_random.seed(4242); end;\")\n\n cursor.execute(\"\"\"\n create table data as \n select dbms_random.value * 100 random_value \n from dual connect by level <=100\n \"\"\")\n\n", "Define the query to compute the histogram", "table_name = \"data\" # table or temporary view containing the data\nvalue_col = \"random_value\" # column name on which to compute the histogram\nmin = -20 # min: minimum value in the histogram\nmax = 90 # maximum value in the histogram\nbins = 11 # number of histogram buckets to compute\nstep = (max - min) / bins\n \n\nquery = f\"\"\"\nwith bucketized as (\n select width_bucket({value_col}, {min}, {max}, {bins}) as bucket\n from {table_name}\n),\nhist as (\n select bucket, count(*) as cnt\n from bucketized\n group by bucket\n),\nbuckets as (\n select rownum as bucket from dual connect by level <= {bins}\n)\nselect\n bucket, {min} + (bucket - 1/2) * {step} as value,\n nvl(cnt, 0) as count\nfrom hist right outer join buckets using(bucket)\norder by bucket\n\"\"\"", "Fetch the histogram data into a pandas dataframe", "import pandas as pd\n\n# query Oracle using ora_conn and put the result into a pandas Dataframe\nwith oracledb.connect(user=db_user, password=db_pass, dsn=db_connect_string) as ora_conn:\n hist_pandasDF = pd.read_sql(query, con=ora_conn) \n\n# Decription\n#\n# BUCKET: the bucket number, range from 1 to bins (included)\n# VALUE: midpoint value of the given bucket\n# COUNT: number of values in the bucket \n \nhist_pandasDF\n\n# Optionally normalize the event count into a frequency\n# dividing by the total number of events\n \nhist_pandasDF[\"FREQUENCY\"] = hist_pandasDF[\"COUNT\"] / sum(hist_pandasDF[\"COUNT\"]) \n \nhist_pandasDF", "Histogram plotting\nThe first plot is a histogram with the event counts (number of events per bin).\nThe second plot is a histogram of the events frequencies (number of events per bin normalized by the sum of the events).", 
"import matplotlib.pyplot as plt \nplt.style.use('seaborn-darkgrid')\nplt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})\n\nf, ax = plt.subplots()\n\n# histogram data\nx = hist_pandasDF[\"VALUE\"]\ny = hist_pandasDF[\"COUNT\"]\n\n# bar plot\nax.bar(x, y, width = 3.0, color='red')\n\nax.set_xlabel(\"Bucket values\")\nax.set_ylabel(\"Event count\")\nax.set_title(\"Distribution of event counts\")\n\n# Label for the resonances spectrum peaks\ntxt_opts = {'horizontalalignment': 'center',\n 'verticalalignment': 'center',\n 'transform': ax.transAxes}\n\nplt.show()\n\nimport matplotlib.pyplot as plt \nplt.style.use('seaborn-darkgrid')\nplt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})\n\nf, ax = plt.subplots()\n\n# histogram data\nx = hist_pandasDF[\"VALUE\"]\ny = hist_pandasDF[\"FREQUENCY\"]\n\n# bar plot\nax.bar(x, y, width = 3.0, color='blue')\n\nax.set_xlabel(\"Bucket values\")\nax.set_ylabel(\"Event frequency\")\nax.set_title(\"Distribution of event frequencies\")\n\n# Label for the resonances spectrum peaks\ntxt_opts = {'horizontalalignment': 'center',\n 'verticalalignment': 'center',\n 'transform': ax.transAxes}\n\nplt.show()\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
initialkommit/kookmin
midterm/kookmin_midterm_조재환_2.ipynb
mit
[ "Python_중간발표\n데이터사이언스학과 M2015228 조재환\n'key : value' I/O 연습문제", "drinks={\n 'martini' : {'vodka', 'vermouth'},\n 'black russian' : {'vodka', 'kahlua'},\n 'white russian' : {'cream', 'kahlua', 'vodka'},\n 'manhattan' : {'rye', 'vermouth', 'bitters'},\n 'screwdriver': {'orange juice', 'vodka'},\n 'verorange' : {'orange juice', 'vermouth'},\n 'kahlua milk' : {'kahlua', 'milk'},\n 'jin tonic' : {'jin', 'tonic water'},\n 'mojito' : {'rum', 'lime juice'},\n 'cinderella' : {'orange juice', 'lemon juice','pineapple juice'}\n }\n\ninputs = input('what do you want? ')\nprint('Here are some Recipt:')\n\nfor name, contents in drinks.items():\n if inputs in contents:\n print(name)\n\nmorse = {\n '.-':'A','-...':'B','-.-.':'C','-..':'D','.':'E','..-.':'F',\n '--.':'G','....':'H','..':'I','.---':'J','-.-':'K','.-..':'L',\n '--':'M','-.':'N','---':'O','.--.':'P','--.-':'Q','.-.':'R',\n '...':'S','-':'T','..-':'U','...-':'V','.--':'W','-..-':'X',\n '-.--':'Y','--..':'Z', '':' '\n}\n\ncode = '.... . ... .-.. . . .--. ... . .- .-. .-.. -.--'", "이전 제출물", "senten = input(\"What's going on? \")\nsenten = \".\".join(senten)\nsenten = senten.split('.')\nprint(senten)\n\nfor dot, capi in morse.items():\n if capi in senten:\n print(dot,end=\" \")\n #dotted = sorted(morse.get(dot))\n #print(sorted(morse.get(dot),reverse=True), end=\" \") \n print(morse.get(dot),end=\" \")", "이전에 제출한 것은 출력하면 알파벳과 모스부호가 정렬되지 않았습니다. 입력한 문장대로 모스부호를 나타내고 싶었었는데 조금 더 공부하다보니 코드를 만들 수 있어서 다시 한번 제출합니다.\n수정", "senten = input(\"What's going on? \") # 모스부호로 나타낼 문장을 입력\nsenten = \".\".join(senten) # 모스부호의 형태가 '알파벳' : '모스부호'로 되어있어서 입력받은 문장을 \n # 알파벳 단위로 끊어주기 위해 \".\"join으로 각 단어 사이에 .을 넣습니다.\nsenten = senten.split('.') # .을 기준으로 단어들을 모두 끊어 줍니다.\nprint(senten)\n\nfor word in senten: # str형태를 for문으로 출력하면 값하나가 그대로 나옵니다.\n for dot, capi in morse.items(): # 모스부호의 dictionary를 가져옵니다.\n if word in capi: # senten안의 word가 모스부호의 알파벳과 같으면\n print(capi,\"=\",dot, end=\", \") # 알파벳에 해당하는 모스부호를 출력합니다.\n\nsenten = input(\"What's going on? \") # 모스부호로 나타낼 문장을 입력\nprint(senten)\n\nfor word in senten: # str형태를 for문으로 출력하면 값하나가 그대로 나옵니다.\n for dot, capi in morse.items(): # 모스부호의 dictionary를 가져옵니다.\n if word in capi: # senten안의 word가 모스부호의 알파벳과 같으면\n print(capi,\"=\",dot, end=\", \") # 알파벳에 해당하는 모스부호를 출력합니다.\n\nsentens = 'IM LATE'\nsentens[0]\n\nmorse.items()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
mit-eicu/eicu-code
notebooks/pasthistory.ipynb
mit
[ "pastHistory\nProvides information related a patient’s relevant past medical history. Providing detailed past history is not common, but items such as AIDS, Cirrhosis of the Liver, Hepatic Failure, Chronic Renal Failure, Transplant, and Pre-existing Cancers / immunosuppression are more reliable because of their importance in severity outcome scoring.", "# Import libraries\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport psycopg2\nimport getpass\nimport pdvega\n\n# for configuring connection \nfrom configobj import ConfigObj\nimport os\n\n%matplotlib inline\n\n# Create a database connection using settings from config file\nconfig='../db/config.ini'\n\n# connection info\nconn_info = dict()\nif os.path.isfile(config):\n config = ConfigObj(config)\n conn_info[\"sqluser\"] = config['username']\n conn_info[\"sqlpass\"] = config['password']\n conn_info[\"sqlhost\"] = config['host']\n conn_info[\"sqlport\"] = config['port']\n conn_info[\"dbname\"] = config['dbname']\n conn_info[\"schema_name\"] = config['schema_name']\nelse:\n conn_info[\"sqluser\"] = 'postgres'\n conn_info[\"sqlpass\"] = ''\n conn_info[\"sqlhost\"] = 'localhost'\n conn_info[\"sqlport\"] = 5432\n conn_info[\"dbname\"] = 'eicu'\n conn_info[\"schema_name\"] = 'public,eicu_crd'\n \n# Connect to the eICU database\nprint('Database: {}'.format(conn_info['dbname']))\nprint('Username: {}'.format(conn_info[\"sqluser\"]))\nif conn_info[\"sqlpass\"] == '':\n # try connecting without password, i.e. peer or OS authentication\n try:\n if (conn_info[\"sqlhost\"] == 'localhost') & (conn_info[\"sqlport\"]=='5432'):\n con = psycopg2.connect(dbname=conn_info[\"dbname\"],\n user=conn_info[\"sqluser\"]) \n else:\n con = psycopg2.connect(dbname=conn_info[\"dbname\"],\n host=conn_info[\"sqlhost\"],\n port=conn_info[\"sqlport\"],\n user=conn_info[\"sqluser\"])\n except:\n conn_info[\"sqlpass\"] = getpass.getpass('Password: ')\n\n con = psycopg2.connect(dbname=conn_info[\"dbname\"],\n host=conn_info[\"sqlhost\"],\n port=conn_info[\"sqlport\"],\n user=conn_info[\"sqluser\"],\n password=conn_info[\"sqlpass\"])\nquery_schema = 'set search_path to ' + conn_info['schema_name'] + ';'", "Examine a single patient", "patientunitstayid = 141168\n\nquery = query_schema + \"\"\"\nselect *\nfrom pasthistory\nwhere patientunitstayid = {}\norder by pasthistoryoffset\n\"\"\".format(patientunitstayid)\n\ndf = pd.read_sql_query(query, con)\ndf.head()", "We can make a few observations:\n\npasthistorypath is a slash delimited (/) hierarchical categorization of the past history recorded\npasthistoryvalue and pasthistoryvaluetext are often identical\npasthistoryoffset is the time of the condition, while pasthistoryenteredoffset is when it was documented, though from above it appears the pasthistoryoffset is not necessarily the start time of the condition\n\nIdentifying COPD patients\nLet's look for patients who were admitted with a past history of COPD.", "dx = 'COPD'\nquery = query_schema + \"\"\"\nselect \n pasthistoryvalue, count(*) as n\nfrom pasthistory\nwhere pasthistoryvalue ilike '%{}%'\ngroup by pasthistoryvalue\n\"\"\".format(dx)\n\ndf_copd = pd.read_sql_query(query, con)\ndf_copd\n\ndx = 'COPD'\nquery = query_schema + \"\"\"\nselect \n patientunitstayid, count(*) as n\nfrom pasthistory\nwhere pasthistoryvalue ilike '%{}%'\ngroup by patientunitstayid\n\"\"\".format(dx)\n\ndf_copd = pd.read_sql_query(query, con)\nprint('{} unit stays with {}.'.format(df_copd.shape[0], dx))", "Hospitals with data available", "query = query_schema + 
\"\"\"\nwith t as\n(\nselect distinct patientunitstayid\nfrom pasthistory\n)\nselect \n pt.hospitalid\n , count(distinct pt.patientunitstayid) as number_of_patients\n , count(distinct t.patientunitstayid) as number_of_patients_with_tbl\nfrom patient pt\nleft join t\n on pt.patientunitstayid = t.patientunitstayid\ngroup by pt.hospitalid\n\"\"\".format(patientunitstayid)\n\ndf = pd.read_sql_query(query, con)\ndf['data completion'] = df['number_of_patients_with_tbl'] / df['number_of_patients'] * 100.0\ndf.sort_values('number_of_patients_with_tbl', ascending=False, inplace=True)\ndf.head(n=10)\n\ndf[['data completion']].vgplot.hist(bins=10,\n var_name='Number of hospitals',\n value_name='Percent of patients with data')", "The majority of hospitals have data for the pasthistory table, again likely due to its importance for certain severity of illness scoring systems." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.21/_downloads/85b80d223414f32365a9175978a38cb4/plot_limo_data.ipynb
bsd-3-clause
[ "%matplotlib inline", "Single trial linear regression analysis with the LIMO dataset\nHere we explore the structure of the data contained in the\nLIMO dataset.\nThis example replicates and extends some of the main analysis\nand tools integrated in LIMO MEEG, a MATLAB toolbox originally designed\nto interface with EEGLAB_.\nIn summary, the example:\n\n\nFetches epoched data files for a single subject of the LIMO dataset [1]_.\n If the LIMO files are not found on disk, the\n fetcher :func:mne.datasets.limo.load_data() will automatically download\n the files from a remote repository.\n\n\nDuring import, information about the data (i.e., sampling rate, number of\n epochs per condition, number and name of EEG channels per subject, etc.) is\n extracted from the LIMO :file:.mat files stored on disk and added to the\n epochs structure as metadata.\n\n\nFits linear models on the single subject's data and visualizes inferential\n measures to evaluate the significance of the estimated effects.\n\n\nReferences\n.. [1] Guillaume, Rousselet. (2016). LIMO EEG Dataset, [dataset].\n University of Edinburgh, Centre for Clinical Brain Sciences.\n https://doi.org/10.7488/ds/1556.\n.. [2] Rousselet, G. A., Gaspar, C. M., Pernet, C. R., Husk, J. S.,\n Bennett, P. J., & Sekuler, A. B. (2010). Healthy aging delays scalp EEG\n sensitivity to noise in a face discrimination task.\n Frontiers in psychology, 1, 19. https://doi.org/10.3389/fpsyg.2010.00019\n.. [3] Rousselet, G. A., Pernet, C. R., Bennett, P. J., & Sekuler, A. B.\n (2008). Parametric study of EEG sensitivity to phase noise during face\n processing. BMC neuroscience, 9(1), 98.\n https://doi.org/10.1186/1471-2202-9-98", "# Authors: Jose C. Garcia Alanis <alanis.jcg@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom mne.datasets.limo import load_data\nfrom mne.stats import linear_regression\nfrom mne.viz import plot_events, plot_compare_evokeds\nfrom mne import combine_evoked\n\n\nprint(__doc__)\n\n# subject to use\nsubj = 1", "About the data\nIn the original LIMO experiment (see [2]), participants performed a\ntwo-alternative forced choice task, discriminating between two face stimuli.\nThe same two faces were used during the whole experiment,\nwith varying levels of noise added, making the faces more or less\ndiscernible to the observer (see Fig 1 in [3]_ for a similar approach).\nThe presented faces varied across a noise-signal (or phase-coherence)\ncontinuum spanning from 0 to 85% in increasing steps of 5%.\nIn other words, faces with high phase-coherence (e.g., 85%) were easy to\nidentify, while faces with low phase-coherence (e.g., 5%) were hard to\nidentify and by extension very hard to discriminate.\nLoad the data\nWe'll begin by loading the data from subject 1 of the LIMO dataset.", "# This step can take a little while if you're loading the data for the\n# first time.\nlimo_epochs = load_data(subject=subj)", "Note that the result of the loading process is an\n:class:mne.EpochsArray containing the data ready to interface\nwith MNE-Python.", "print(limo_epochs)", "Visualize events\nWe can visualise the distribution of the face events contained in the\nlimo_epochs structure. 
Events should appear clearly grouped, as the\nepochs are ordered by condition.", "fig = plot_events(limo_epochs.events, event_id=limo_epochs.event_id)\nfig.suptitle(\"Distribution of events in LIMO epochs\")", "As it can be seen above, conditions are coded as Face/A and Face/B.\nInformation about the phase-coherence of the presented faces is stored in the\nepochs metadata. These information can be easily accessed by calling\nlimo_epochs.metadata. As shown below, the epochs metadata also contains\ninformation about the presented faces for convenience.", "print(limo_epochs.metadata.head())", "Now let's take a closer look at the information in the epochs\nmetadata.", "# We want include all columns in the summary table\nepochs_summary = limo_epochs.metadata.describe(include='all').round(3)\nprint(epochs_summary)", "The first column of the summary table above provides more or less the same\ninformation as the print(limo_epochs) command we ran before. There are\n1055 faces (i.e., epochs), subdivided in 2 conditions (i.e., Face A and\nFace B) and, for this particular subject, there are more epochs for the\ncondition Face B.\nIn addition, we can see in the second column that the values for the\nphase-coherence variable range from -1.619 to 1.642. This is because the\nphase-coherence values are provided as a z-scored variable in the LIMO\ndataset. Note that they have a mean of zero and a standard deviation of 1.\nVisualize condition ERPs\nLet's plot the ERPs evoked by Face A and Face B, to see how similar they are.", "# only show -250 to 500 ms\nts_args = dict(xlim=(-0.25, 0.5))\n\n# plot evoked response for face A\nlimo_epochs['Face/A'].average().plot_joint(times=[0.15],\n title='Evoked response: Face A',\n ts_args=ts_args)\n# and face B\nlimo_epochs['Face/B'].average().plot_joint(times=[0.15],\n title='Evoked response: Face B',\n ts_args=ts_args)", "We can also compute the difference wave contrasting Face A and Face B.\nAlthough, looking at the evoked responses above, we shouldn't expect great\ndifferences among these face-stimuli.", "# Face A minus Face B\ndifference_wave = combine_evoked([limo_epochs['Face/A'].average(),\n limo_epochs['Face/B'].average()],\n weights=[1, -1])\n\n# plot difference wave\ndifference_wave.plot_joint(times=[0.15], title='Difference Face A - Face B')", "As expected, no clear pattern appears when contrasting\nFace A and Face B. However, we could narrow our search a little bit more.\nSince this is a \"visual paradigm\" it might be best to look at electrodes\nlocated over the occipital lobe, as differences between stimuli (if any)\nmight easier to spot over visual areas.", "# Create a dictionary containing the evoked responses\nconditions = [\"Face/A\", \"Face/B\"]\nevokeds = {condition: limo_epochs[condition].average()\n for condition in conditions}\n\n# concentrate analysis an occipital electrodes (e.g. 
B11)\npick = evokeds[\"Face/A\"].ch_names.index('B11')\n\n# compare evoked responses\nplot_compare_evokeds(evokeds, picks=pick, ylim=dict(eeg=(-15, 7.5)))", "We do see a difference between Face A and B, but it is pretty small.\nVisualize effect of stimulus phase-coherence\nSince phase-coherence\ndetermined whether a face stimulus could be easily identified,\none could expect that faces with high phase-coherence should evoke stronger\nactivation patterns along occipital electrodes.", "phase_coh = limo_epochs.metadata['phase-coherence']\n# get levels of phase coherence\nlevels = sorted(phase_coh.unique())\n# create labels for levels of phase coherence (i.e., 0 - 85%)\nlabels = [\"{0:.2f}\".format(i) for i in np.arange(0., 0.90, 0.05)]\n\n# create dict of evokeds for each level of phase-coherence\nevokeds = {label: limo_epochs[phase_coh == level].average()\n for level, label in zip(levels, labels)}\n\n# pick channel to plot\nelectrodes = ['C22', 'B11']\n# create figures\nfor electrode in electrodes:\n fig, ax = plt.subplots(figsize=(8, 4))\n plot_compare_evokeds(evokeds,\n axes=ax,\n ylim=dict(eeg=(-20, 15)),\n picks=electrode,\n cmap=(\"Phase coherence\", \"magma\"))", "As shown above, there are some considerable differences between the\nactivation patterns evoked by stimuli with low vs. high phase-coherence at\nthe chosen electrodes.\nPrepare data for linear regression analysis\nBefore we test the significance of these differences using linear\nregression, we'll interpolate missing channels that were\ndropped during preprocessing of the data.\nFurthermore, we'll drop the EOG channels (marked by the \"EXG\" prefix)\npresent in the data:", "limo_epochs.interpolate_bads(reset_bads=True)\nlimo_epochs.drop_channels(['EXG1', 'EXG2', 'EXG3', 'EXG4'])", "Define predictor variables and design matrix\nTo run the regression analysis,\nwe need to create a design matrix containing information about the\nvariables (i.e., predictors) we want to use for prediction of brain\nactivity patterns. For this purpose, we'll use the information we have in\nlimo_epochs.metadata: phase-coherence and Face A vs. 
Face B.", "# name of predictors + intercept\npredictor_vars = ['face a - face b', 'phase-coherence', 'intercept']\n\n# create design matrix\ndesign = limo_epochs.metadata[['phase-coherence', 'face']].copy()\ndesign['face a - face b'] = np.where(design['face'] == 'A', 1, -1)\ndesign['intercept'] = 1\ndesign = design[predictor_vars]", "Now we can set up the linear model to be used in the analysis using\nMNE-Python's func:~mne.stats.linear_regression function.", "reg = linear_regression(limo_epochs,\n design_matrix=design,\n names=predictor_vars)", "Extract regression coefficients\nThe results are stored within the object reg,\nwhich is a dictionary of evoked objects containing\nmultiple inferential measures for each predictor in the design matrix.", "print('predictors are:', list(reg))\nprint('fields are:', [field for field in getattr(reg['intercept'], '_fields')])", "Plot model results\nNow we can access and plot the results of the linear regression analysis by\ncalling :samp:reg['{&lt;name of predictor&gt;}'].{&lt;measure of interest&gt;} and\nusing the\n:meth:~mne.Evoked.plot_joint method just as we would do with any other\nevoked object.\nBelow we can see a clear effect of phase-coherence, with higher\nphase-coherence (i.e., better \"face visibility\") having a negative effect on\nthe activity measured at occipital electrodes around 200 to 250 ms following\nstimulus onset.", "reg['phase-coherence'].beta.plot_joint(ts_args=ts_args,\n title='Effect of Phase-coherence',\n times=[0.23])", "We can also plot the corresponding T values.", "# use unit=False and scale=1 to keep values at their original\n# scale (i.e., avoid conversion to micro-volt).\nts_args = dict(xlim=(-0.25, 0.5),\n unit=False)\ntopomap_args = dict(scalings=dict(eeg=1),\n average=0.05)\n\nfig = reg['phase-coherence'].t_val.plot_joint(ts_args=ts_args,\n topomap_args=topomap_args,\n times=[0.23])\nfig.axes[0].set_ylabel('T-value')", "Conversely, there appears to be no (or very small) systematic effects when\ncomparing Face A and Face B stimuli. This is largely consistent with the\ndifference wave approach presented above.", "ts_args = dict(xlim=(-0.25, 0.5))\n\nreg['face a - face b'].beta.plot_joint(ts_args=ts_args,\n title='Effect of Face A vs. Face B',\n times=[0.23])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
newworldnewlife/TensorFlow-Tutorials
13B_Visual_Analysis_MNIST.ipynb
mit
[ "TensorFlow Tutorial #13-B\nVisual Analysis (MNIST)\nby Magnus Erik Hvass Pedersen\n/ GitHub / Videos on YouTube\nIntroduction\nTutorial #13 showed how to find input images that maximized the response of individual neurons inside the Inception model, so as to find the images that the neuron liked to see. But because the Inception model is so large and complex the images were just complex wavy patterns.\nThis tutorial uses a much simpler Convolutional Neural Network with the MNIST data-set for recognizing hand-written digits. The code is spliced together from Tutorial #03-B for constructing the neural network and Tutorial #13 for finding input images that maximize individual neuron responses inside the neural network, so a lot of this code may look familiar to you.\nFlowchart\nThe following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. Note that there are two separate optimization loops here:\nFirst the weights of the neural network are optimized by inputting images and their true classes to the network so as to improve the classification accuracy.\nAfterwards a second optimization is performed which finds the input image that maximizes a given feature or neuron inside the network. This finds an image that the network likes to see.\n\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix\nimport math", "This was developed using Python 3.6 (Anaconda) and TensorFlow version:", "tf.__version__", "Load Data\nThe MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.", "from tensorflow.examples.tutorials.mnist import input_data\ndata = input_data.read_data_sets('data/MNIST/', one_hot=True)", "The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.", "print(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(data.train.labels)))\nprint(\"- Test-set:\\t\\t{}\".format(len(data.test.labels)))\nprint(\"- Validation-set:\\t{}\".format(len(data.validation.labels)))", "The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.", "data.test.cls = np.argmax(data.test.labels, axis=1)", "Data Dimensions\nThe data dimensions are used in several places in the source-code below. 
They are defined once so we can use these variables instead of numbers throughout the source-code below.", "# We know that MNIST images are 28 pixels in each dimension.\nimg_size = 28\n\n# Images are stored in one-dimensional arrays of this length.\nimg_size_flat = img_size * img_size\n\n# Tuple with height and width of images used to reshape arrays.\nimg_shape = (img_size, img_size)\n\n# Number of colour channels for the images: 1 channel for gray-scale.\nnum_channels = 1\n\n# Number of classes, one class for each of 10 digits.\nnum_classes = 10", "Helper-functions for plotting images\nFunction used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.", "def plot_images(images, cls_true, cls_pred=None):\n assert len(images) == len(cls_true) == 9\n \n # Create figure with 3x3 sub-plots.\n fig, axes = plt.subplots(3, 3)\n fig.subplots_adjust(hspace=0.3, wspace=0.3)\n\n for i, ax in enumerate(axes.flat):\n # Plot image.\n ax.imshow(images[i].reshape(img_shape), cmap='binary')\n\n # Show true and predicted classes.\n if cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true[i])\n else:\n xlabel = \"True: {0}, Pred: {1}\".format(cls_true[i], cls_pred[i])\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Function used to plot 10 images in a 2x5 grid.", "def plot_images10(images, smooth=True):\n # Interpolation type.\n if smooth:\n interpolation = 'spline16'\n else:\n interpolation = 'nearest'\n\n # Create figure with sub-plots.\n fig, axes = plt.subplots(2, 5)\n\n # Adjust vertical spacing.\n fig.subplots_adjust(hspace=0.1, wspace=0.1)\n\n # For each entry in the grid.\n for i, ax in enumerate(axes.flat):\n # Get the i'th image and only use the desired pixels.\n img = images[i, :, :]\n \n # Plot the image.\n ax.imshow(img, interpolation=interpolation, cmap='binary')\n\n # Remove ticks.\n ax.set_xticks([])\n ax.set_yticks([])\n\n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show() ", "Function used to plot a single image.", "def plot_image(image):\n plt.imshow(image, interpolation='nearest', cmap='binary')\n plt.xticks([])\n plt.yticks([])", "Plot a few images to see if data is correct", "# Get the first images from the test-set.\nimages = data.test.images[0:9]\n\n# Get the true classes for those images.\ncls_true = data.test.cls[0:9]\n\n# Plot the images and labels using our helper-function above.\nplot_images(images=images, cls_true=cls_true)", "TensorFlow Graph\nThe neural network is constructed as a computational graph in TensorFlow using the tf.layers API, which is described in detail in Tutorial #03-B.\nPlaceholder variables\nPlaceholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph.\nFirst we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. 
The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.", "x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')", "The convolutional layers expect x to be encoded as a 4-rank tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:", "x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])", "Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.", "y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')", "We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.", "y_true_cls = tf.argmax(y_true, axis=1)", "Neural Network\nWe now implement the Convolutional Neural Network using the Layers API. We use the net-variable to refer to the last layer while building the neural network. This makes it easy to add or remove layers in the code if you want to experiment. First we set the net-variable to the reshaped input image.", "net = x_image", "The input image is then input to the first convolutional layer, which has 16 filters each of size 5x5 pixels. The activation-function is the Rectified Linear Unit (ReLU) described in more detail in Tutorial #02.", "net = tf.layers.conv2d(inputs=net, name='layer_conv1', padding='same',\n filters=16, kernel_size=5, activation=tf.nn.relu)", "After the convolution we do a max-pooling which is also described in Tutorial #02.", "net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)", "Then we make a second convolutional layer, also with max-pooling.", "net = tf.layers.conv2d(inputs=net, name='layer_conv2', padding='same',\n filters=36, kernel_size=5, activation=tf.nn.relu)\n\nnet = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)", "The output then needs to be flattened so it can be used in fully-connected (aka. dense) layers.", "net = tf.contrib.layers.flatten(net)\n\n# This should eventually be replaced by:\n# net = tf.layers.flatten(net)", "We can now add fully-connected (or dense) layers to the neural network.", "net = tf.layers.dense(inputs=net, name='layer_fc1',\n units=128, activation=tf.nn.relu)", "We need the neural network to classify the input images into 10 different classes. So the final fully-connected layer has num_classes=10 output neurons.", "net = tf.layers.dense(inputs=net, name='layer_fc_out',\n units=num_classes, activation=None)", "The outputs of the final fully-connected layer are sometimes called logits, so we have a convenience variable with that name which we will also use further below.", "logits = net", "We use the softmax function to 'squash' the outputs so they are between zero and one, and so they sum to one.", "y_pred = tf.nn.softmax(logits=logits)", "This tells us how likely the neural network thinks the input image is of each possible class. 
The one that has the highest value is considered the most likely so its index is taken to be the class-number.", "y_pred_cls = tf.argmax(y_pred, axis=1)", "Loss-Function to be Optimized\nTo make the model better at classifying the input images, we must somehow change the variables of the neural network.\nThe cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the model.\nTensorFlow has a function for calculating the cross-entropy, which uses the values of the logits-layer because it also calculates the softmax internally, so as to to improve numerical stability.", "cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits)", "We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.", "loss = tf.reduce_mean(cross_entropy)", "Optimization Method\nNow that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the Adam optimizer with a learning-rate of 1e-4.\nNote that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.", "optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)", "Classification Accuracy\nWe need to calculate the classification accuracy so we can report progress to the user.\nFirst we create a vector of booleans telling us whether the predicted class equals the true class of each image.", "correct_prediction = tf.equal(y_pred_cls, y_true_cls)", "The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.", "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))", "Optimize the Neural Network\nCreate TensorFlow session\nOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.", "session = tf.Session()", "Initialize variables\nThe variables for the TensorFlow graph must be initialized before we start optimizing them.", "session.run(tf.global_variables_initializer())", "Helper-function to perform optimization iterations\nThere are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.\nIf your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to do more optimization iterations.", "train_batch_size = 64", "This function performs a number of optimization iterations so as to gradually improve the variables of the neural network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. 
The progress is printed every 100 iterations.", "# Counter for total number of iterations performed so far.\ntotal_iterations = 0\n\ndef optimize(num_iterations):\n # Ensure we update the global variable rather than a local copy.\n global total_iterations\n\n for i in range(total_iterations,\n total_iterations + num_iterations):\n\n # Get a batch of training examples.\n # x_batch now holds a batch of images and\n # y_true_batch are the true labels for those images.\n x_batch, y_true_batch = data.train.next_batch(train_batch_size)\n\n # Put the batch into a dict with the proper names\n # for placeholder variables in the TensorFlow graph.\n feed_dict_train = {x: x_batch,\n y_true: y_true_batch}\n\n # Run the optimizer using this batch of training data.\n # TensorFlow assigns the variables in feed_dict_train\n # to the placeholder variables and then runs the optimizer.\n session.run(optimizer, feed_dict=feed_dict_train)\n\n # Print status every 100 iterations.\n if i % 100 == 0:\n # Calculate the accuracy on the training-set.\n acc = session.run(accuracy, feed_dict=feed_dict_train)\n\n # Message for printing.\n msg = \"Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}\"\n\n # Print it.\n print(msg.format(i + 1, acc))\n\n # Update the total number of iterations performed.\n total_iterations += num_iterations", "Helper-function to plot example errors\nFunction for plotting examples of images from the test-set that have been mis-classified.", "def plot_example_errors(cls_pred, correct):\n # This function is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # correct is a boolean array whether the predicted class\n # is equal to the true class for each image in the test-set.\n\n # Negate the boolean array.\n incorrect = (correct == False)\n \n # Get the images from the test-set that have been\n # incorrectly classified.\n images = data.test.images[incorrect]\n \n # Get the predicted classes for those images.\n cls_pred = cls_pred[incorrect]\n\n # Get the true classes for those images.\n cls_true = data.test.cls[incorrect]\n \n # Plot the first 9 images.\n plot_images(images=images[0:9],\n cls_true=cls_true[0:9],\n cls_pred=cls_pred[0:9])", "Helper-function to plot confusion matrix", "def plot_confusion_matrix(cls_pred):\n # This is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # Get the true classifications for the test-set.\n cls_true = data.test.cls\n \n # Get the confusion matrix using sklearn.\n cm = confusion_matrix(y_true=cls_true,\n y_pred=cls_pred)\n\n # Print the confusion matrix as text.\n print(cm)\n\n # Plot the confusion matrix as an image.\n plt.matshow(cm)\n\n # Make various adjustments to the plot.\n plt.colorbar()\n tick_marks = np.arange(num_classes)\n plt.xticks(tick_marks, range(num_classes))\n plt.yticks(tick_marks, range(num_classes))\n plt.xlabel('Predicted')\n plt.ylabel('True')\n\n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Helper-function for showing the performance\nBelow is a function for printing the classification accuracy on the test-set.\nIt takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.\nNote that this function can use a 
lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.", "# Split the test-set into smaller batches of this size.\ntest_batch_size = 256\n\ndef print_test_accuracy(show_example_errors=False,\n show_confusion_matrix=False):\n\n # Number of images in the test-set.\n num_test = len(data.test.images)\n\n # Allocate an array for the predicted classes which\n # will be calculated in batches and filled into this array.\n cls_pred = np.zeros(shape=num_test, dtype=np.int)\n\n # Now calculate the predicted classes for the batches.\n # We will just iterate through all the batches.\n # There might be a more clever and Pythonic way of doing this.\n\n # The starting index for the next batch is denoted i.\n i = 0\n\n while i < num_test:\n # The ending index for the next batch is denoted j.\n j = min(i + test_batch_size, num_test)\n\n # Get the images from the test-set between index i and j.\n images = data.test.images[i:j, :]\n\n # Get the associated labels.\n labels = data.test.labels[i:j, :]\n\n # Create a feed-dict with these images and labels.\n feed_dict = {x: images,\n y_true: labels}\n\n # Calculate the predicted class using TensorFlow.\n cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)\n\n # Set the start-index for the next batch to the\n # end-index of the current batch.\n i = j\n\n # Convenience variable for the true class-numbers of the test-set.\n cls_true = data.test.cls\n\n # Create a boolean array whether each image is correctly classified.\n correct = (cls_true == cls_pred)\n\n # Calculate the number of correctly classified images.\n # When summing a boolean array, False means 0 and True means 1.\n correct_sum = correct.sum()\n\n # Classification accuracy is the number of correctly classified\n # images divided by the total number of images in the test-set.\n acc = float(correct_sum) / num_test\n\n # Print the accuracy.\n msg = \"Accuracy on Test-Set: {0:.1%} ({1} / {2})\"\n print(msg.format(acc, correct_sum, num_test))\n\n # Plot some examples of mis-classifications, if desired.\n if show_example_errors:\n print(\"Example errors:\")\n plot_example_errors(cls_pred=cls_pred, correct=correct)\n\n # Plot the confusion matrix, if desired.\n if show_confusion_matrix:\n print(\"Confusion Matrix:\")\n plot_confusion_matrix(cls_pred=cls_pred)", "Performance before any optimization\nThe accuracy on the test-set is very low because the variables for the neural network have only been initialized and not optimized at all, so it just classifies the images randomly.", "print_test_accuracy()", "Performance after 10,000 optimization iterations\nAfter 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.", "%%time\noptimize(num_iterations=10000)\n\nprint_test_accuracy(show_example_errors=True,\n show_confusion_matrix=True)", "Optimizing the Input Images\nNow that the neural network has been optimized so it can recognize hand-written digits with about 99% accuracy, we will then find the input images that maximize certain features inside the neural network. This will show us what images the neural network likes to see the most.\nWe will do this by creating another form of optimization for the neural network, and we need several helper functions for doing this.\nHelper-function for getting the names of convolutional layers\nFunction for getting the names of all the convolutional layers in the neural network. 
We could have made this list manually, but for larger neural networks it is easier to do this with a function.", "def get_conv_layer_names():\n graph = tf.get_default_graph()\n \n # Create a list of names for the operations in the graph\n # for the Inception model where the operator-type is 'Conv2D'.\n names = [op.name for op in graph.get_operations() if op.type=='Conv2D']\n\n return names\n\nconv_names = get_conv_layer_names()\n\nconv_names\n\nlen(conv_names)", "Helper-function for finding the input image\nThis function finds the input image that maximizes a given feature in the network. It essentially just performs optimization with gradient ascent. The image is initialized with small random values and is then iteratively updated using the gradient for the given feature with regard to the image.", "def optimize_image(conv_id=None, feature=0,\n num_iterations=30, show_progress=True):\n \"\"\"\n Find an image that maximizes the feature\n given by the conv_id and feature number.\n\n Parameters:\n conv_id: Integer identifying the convolutional layer to\n maximize. It is an index into conv_names.\n If None then use the last fully-connected layer\n before the softmax output.\n feature: Index into the layer for the feature to maximize.\n num_iteration: Number of optimization iterations to perform.\n show_progress: Boolean whether to show the progress.\n \"\"\"\n\n # Create the loss-function that must be maximized.\n if conv_id is None:\n # If we want to maximize a feature on the last layer,\n # then we use the fully-connected layer prior to the\n # softmax-classifier. The feature no. is the class-number\n # and must be an integer between 1 and 1000.\n # The loss-function is just the value of that feature.\n loss = tf.reduce_mean(logits[:, feature])\n else:\n # If instead we want to maximize a feature of a\n # convolutional layer inside the neural network.\n\n # Get the name of the convolutional operator.\n conv_name = conv_names[conv_id]\n \n # Get the default TensorFlow graph.\n graph = tf.get_default_graph()\n \n # Get a reference to the tensor that is output by the\n # operator. Note that \":0\" is added to the name for this.\n tensor = graph.get_tensor_by_name(conv_name + \":0\")\n\n # The loss-function is the average of all the\n # tensor-values for the given feature. This\n # ensures that we generate the whole input image.\n # You can try and modify this so it only uses\n # a part of the tensor.\n loss = tf.reduce_mean(tensor[:,:,:,feature])\n\n # Get the gradient for the loss-function with regard to\n # the input image. 
This creates a mathematical\n # function for calculating the gradient.\n gradient = tf.gradients(loss, x_image)\n\n # Generate a random image of the same size as the raw input.\n # Each pixel is a small random value between 0.45 and 0.55,\n # which is the middle of the valid range between 0 and 1.\n image = 0.1 * np.random.uniform(size=img_shape) + 0.45\n\n # Perform a number of optimization iterations to find\n # the image that maximizes the loss-function.\n for i in range(num_iterations):\n # Reshape the array so it is a 4-rank tensor.\n img_reshaped = image[np.newaxis,:,:,np.newaxis]\n\n # Create a feed-dict for inputting the image to the graph.\n feed_dict = {x_image: img_reshaped}\n\n # Calculate the predicted class-scores,\n # as well as the gradient and the loss-value.\n pred, grad, loss_value = session.run([y_pred, gradient, loss],\n feed_dict=feed_dict)\n \n # Squeeze the dimensionality for the gradient-array.\n grad = np.array(grad).squeeze()\n\n # The gradient now tells us how much we need to change the\n # input image in order to maximize the given feature.\n\n # Calculate the step-size for updating the image.\n # This step-size was found to give fast convergence.\n # The addition of 1e-8 is to protect from div-by-zero.\n step_size = 1.0 / (grad.std() + 1e-8)\n\n # Update the image by adding the scaled gradient\n # This is called gradient ascent.\n image += step_size * grad\n\n # Ensure all pixel-values in the image are between 0 and 1.\n image = np.clip(image, 0.0, 1.0)\n\n if show_progress:\n print(\"Iteration:\", i)\n\n # Convert the predicted class-scores to a one-dim array.\n pred = np.squeeze(pred)\n\n # The predicted class for the Inception model.\n pred_cls = np.argmax(pred)\n\n # The score (probability) for the predicted class.\n cls_score = pred[pred_cls]\n\n # Print the predicted score etc.\n msg = \"Predicted class: {0}, score: {1:>7.2%}\"\n print(msg.format(pred_cls, cls_score))\n\n # Print statistics for the gradient.\n msg = \"Gradient min: {0:>9.6f}, max: {1:>9.6f}, stepsize: {2:>9.2f}\"\n print(msg.format(grad.min(), grad.max(), step_size))\n\n # Print the loss-value.\n print(\"Loss:\", loss_value)\n\n # Newline.\n print()\n\n return image.squeeze()", "This next function finds the images that maximize the first 10 features of a layer, by calling the above function 10 times.", "def optimize_images(conv_id=None, num_iterations=30):\n \"\"\"\n Find 10 images that maximize the 10 first features in the layer\n given by the conv_id.\n \n Parameters:\n conv_id: Integer identifying the convolutional layer to\n maximize. 
It is an index into conv_names.\n If None then use the last layer before the softmax output.\n num_iterations: Number of optimization iterations to perform.\n \"\"\"\n\n # Which layer are we using?\n if conv_id is None:\n print(\"Final fully-connected layer before softmax.\")\n else:\n print(\"Layer:\", conv_names[conv_id])\n\n # Initialize the array of images.\n images = []\n\n # For each feature do the following.\n for feature in range(0,10):\n print(\"Optimizing image for feature no.\", feature)\n \n # Find the image that maximizes the given feature\n # for the network layer identified by conv_id (or None).\n image = optimize_image(conv_id=conv_id, feature=feature,\n show_progress=False,\n num_iterations=num_iterations)\n\n # Squeeze the dim of the array.\n image = image.squeeze()\n\n # Append to the list of images.\n images.append(image)\n\n # Convert to numpy-array so we can index all dimensions easily.\n images = np.array(images)\n\n # Plot the images.\n plot_images10(images=images)", "First Convolutional Layer\nThese are the input images that maximize the features in the first convolutional layer, so these are the images that it likes to see.", "optimize_images(conv_id=0)", "Note how these are very simple shapes such as lines and angles. Some of these images may be completely white, which suggests that those features of the neural network are perhaps unused, so the number of features could be reduced in this layer.\nSecond Convolutional Layer\nThis shows the images that maximize the features or neurons in the second convolutional layer, so these are the input images it likes to see. Note how these are more complex lines and patterns compared to the first convolutional layer.", "optimize_images(conv_id=1)", "Final output layer\nNow find the image for the 2nd feature of the final output of the neural network. That is, we want to find an image that makes the neural network classify that image as the digit 2. This is the image that the neural network likes to see the most for the digit 2.", "image = optimize_image(conv_id=None, feature=2,\n num_iterations=10, show_progress=True)", "Note how the predicted class indeed becomes 2 already within the first few iterations so the optimization is working as intended. Also note how the loss-measure is increasing rapidly until it apparently converges. This is because the loss-measure is actually just the value of the feature or neuron that we are trying to maximize. Because this is the logits-layer prior to the softmax, these values can potentially be infinitely high, but they are limited because we limit the image-values between 0 and 1.\nNow plot the image that was found. This is the image that the neural network believes looks most like the digit 2.", "plot_image(image)", "Although some of the curves do hint somewhat at the digit 2, it is hard for a human to see why the neural network believes this is the optimal image for the digit 2. This can only be understood when the optimal images for the remaining digits are also shown.", "optimize_images(conv_id=None)", "These images may vary each time you run the optimization. Some of the images can be seen to somewhat resemble the hand-written digits. 
But the other images are often impossible to recognize and it is hard to understand why the neural network thinks these are the optimal input images for those digits.\nThe reason is perhaps that the neural network tries to recognize all digits simultaneously, and it has found that certain pixels often determine whether the image shows one digit or another. So the neural network has learned to differentiate those pixels that it has found to be important, but not the underlying curves and shapes of the digits, in the same way that a human recognizes the digits.\nAnother possibility is that the data-set contains mis-classified digits which may confuse the neural network during training. We have previously seen how some of the digits in the data-set are very hard to read even for humans, and this may cause the neural network to become distorted and trying to recognize strange artifacts in the images.\nYet another possibility is that the optimization process has stagnated in a local optimum. One way to test this, would be to run the optimization 50 times for the digits that are unclear, and see if some of the resulting images become more clear.\nClose TensorFlow Session\nWe are now done using TensorFlow, so we close the session to release its resources.", "# This has been commented out in case you want to modify and experiment\n# with the Notebook without having to restart it.\n# session.close()", "Conclusion\nThis tutorial showed how to find the input images that maximize certain features inside a neural network. These are the images that the neural network likes to see the most in order to activate a certain feature or neuron inside the network.\nThis was tested on a simple convolutional neural network using the MNIST data-set. The neural network had clearly learned to recognize the general shape of some of the digits, while it was impossible to see how it recognized other digits.\nExercises\nThese are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.\nYou may want to backup this Notebook before making any changes.\n\n\nPlot the images for all features in each convolutional layer instead of just the first 10 features. How many of them appear to be unused or redundant? What happens if you lower the number of features in that layer and train the network again, does it still perform just as well?\n\n\nTry adding more convolutional layers and find the input images that maximize their features. What do the images show? Do you think it is useful to add more convolutional layers than two?\n\n\nTry adding more fully-connected layers and modify the code so it can find input images that maximize the features of the fully-connected / dense layers as well. Currently the code can only maximize the features of the convolutional layers and the final fully-connected layer.\n\n\nFor the input images that are unclear, run the optimization e.g. 50 times for each of those digits, to see if it produces more clear input images. 
It is possible that the optimization has simply become stuck in a local optimum.\n\n\nExplain to a friend how the program works.\n\n\nLicense (MIT)\nCopyright (c) 2016-2017 by Magnus Erik Hvass Pedersen\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
martinjrobins/hobo
examples/toy/model-fitzhugh-nagumo.ipynb
bsd-3-clause
[ "Fitzhugh-Nagumo simplified action-potential model\nThis example shows how the Fitzhugh-Nagumo simplified action potential (AP) model can be used.\nThe model is based on a simplification and state-reduction of the original squid axon model by Hodgkind and Huxley.\nIt has two state variables, a voltage-like variable and a recovery variable.", "import matplotlib.pyplot as plt\nimport numpy as np\nimport pints\nimport pints.toy\n\n# Create a model\nmodel = pints.toy.FitzhughNagumoModel()\n\n# Run a simulation\nparameters = [0.1, 0.5, 3]\ntimes = np.linspace(0, 20, 200)\nvalues = model.simulate(parameters, times)\n\n# Plot the results\nplt.figure()\nplt.xlabel('Time')\nplt.ylabel('Value')\nplt.plot(times, values)\nplt.legend(['Voltage', 'Recovery'])\nplt.show()", "With these parameters, the model creates wide AP waveforms that are more reminiscent of muscle cells than neurons.\nWe now set up a simple optimisation problem with the model.", "# First add some noise\nsigma = 0.5\nnoisy = values + np.random.normal(0, sigma, values.shape)\n\n# Plot the results\nplt.figure()\nplt.xlabel('Time')\nplt.ylabel('Noisy values')\nplt.plot(times, noisy)\nplt.show()", "Next, we set up a problem. Because this model has multiple outputs (2), we use a MultiOutputProblem.", "problem = pints.MultiOutputProblem(model, times, noisy)\nscore = pints.SumOfSquaresError(problem)", "Finally, we choose a wide set of boundaries and run!", "# Select boundaries\nboundaries = pints.RectangularBoundaries([0., 0., 0.], [10., 10., 10.])\n\n# Select a starting point\nx0 = [1, 1, 1]\n\n# Perform an optimization\nfound_parameters, found_value = pints.optimise(score, x0, boundaries=boundaries)\n\nprint('Score at true solution:')\nprint(score(parameters))\n\nprint('Found solution: True parameters:' )\nfor k, x in enumerate(found_parameters):\n print(pints.strfloat(x) + ' ' + pints.strfloat(parameters[k]))\n\n# Plot the results\nplt.figure()\nplt.xlabel('Time')\nplt.ylabel('Values')\nplt.plot(times, noisy, '-', alpha=0.25, label='noisy signal')\nplt.plot(times, values, alpha=0.4, lw=5, label='original signal')\nplt.plot(times, problem.evaluate(found_parameters), 'k--', label='recovered signal')\nplt.legend()\nplt.show()", "This shows the parameters are not retrieved entirely correctly, but the traces still strongly overlap.\nSampling with Monomial-gamma HMC\nThe Fitzhugh-Nagumo model has sensitivities calculated by the forward sensitivities approach, so we can use samplers that use gradients (although they will be slower per iteration; although perhaps not by ESS per second!), like Monomial-gamma HMC.", "problem = pints.MultiOutputProblem(model, times, noisy)\n\n# Create a log-likelihood function (adds an extra parameter!)\nlog_likelihood = pints.GaussianLogLikelihood(problem)\n\n# Create a uniform prior over both the parameters and the new noise variable\nlog_prior = pints.UniformLogPrior(\n [0, 0, 0, 0, 0],\n [10, 10, 10, 20, 20]\n)\n\n# Create a posterior log-likelihood (log(likelihood * prior))\nlog_posterior = pints.LogPosterior(log_likelihood, log_prior)\n\n# Choose starting points for 3 mcmc chains\nreal_parameters1 = np.array(parameters + [sigma, sigma])\nxs = [\n real_parameters1 * 1.1,\n real_parameters1 * 0.9,\n real_parameters1 * 1.15,\n real_parameters1 * 1.5,\n]\n\n# Create mcmc routine\nmcmc = pints.MCMCController(log_posterior, 4, xs, method=pints.MonomialGammaHamiltonianMCMC)\n\n# Add stopping criterion\nmcmc.set_max_iterations(200)\nmcmc.set_log_interval(1)\n\n# Run in parallel\nmcmc.set_parallel(True)\n\nfor 
sampler in mcmc.samplers():\n sampler.set_leapfrog_step_size([0.05, 0.2, 0.2, 0.1, 0.1])\n sampler.set_leapfrog_steps(10)\n\n# Run!\nprint('Running...')\nchains = mcmc.run()\nprint('Done!')\n\nimport pints.plot\npints.plot.trace(chains)\nplt.show()", "Print results.", "results = pints.MCMCSummary(\n chains=chains, \n time=mcmc.time(), \n parameter_names=['a', 'b', 'c', 'sigma_V', 'sigma_R'],\n)\nprint(results)", "Plot a few posterior predictive simulations against the data.", "import pints.plot\npints.plot.series(np.vstack(chains), problem)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
qgoisnard/Exercice-update
Exercice 2.ipynb
mit
[ "Exercice 2", "%matplotlib inline \nfrom sympy.interactive import printing\nprinting.init_printing()\nfrom frame import *\nimport sympy as sp\nimport numpy as np", "Nous allons initialiser les differentes valeurs :", "E=1.3 #en MPa\nh=7.5 #en mm\nb=20. #en mm\nLx=55. #en mm\nLyh=60. #en mm\nLyb=45 #en mm\nI=b*(h**3)/12 #en mm^4\nS=b*h #en mm^2\neps=10**(-3)\ng=9.81", "Nous allons maintenant créer les noeuds et les éléments de la structure :", "nodes= np.array([[0.,0.],[0.,Lyb],[0.,Lyh+Lyb],[Lx/2,Lyh+Lyb],[Lx,Lyh+Lyb],[Lx,Lyb],[Lx,0.]])\nelements=np.array([[0,1],[1,5],[1,2],[2,3],[3,4],[4,5],[5,6]])\n\nframe=LinearFrame(nodes,elements)\nframe.plot_with_label()\n\nne = frame.nelements\nndof = frame.ndof\nEI = np.ones(ne)*E*I\nES = np.ones(ne)*E*S\nf_x = 0*np.ones(7)\nf_y = 0*np.ones(7)\nframe.set_distributed_loads(f_x, f_y)\nframe.set_stiffness(EI, ES)\nblocked_dof = np.array([0, 1, 2, ndof-3, ndof-2, ndof-1])\nbc_values = np.array([0, 0, 0, 0, 0, 0])\n\nK = frame.assemble_K()\n\n\nu=np.array([0.,0.,0.,\n 2.,0.,0.,\n 5.,0.,0.,\n 0.,-3.,0.,\n -3.,0.,0.,\n -2.,0.,0.,\n 0.,0.,0.])\n\nprint (u)\n\n\nF= np.dot(K,np.transpose(u))\nF\n\n\nm=F[10]/g\nm", "La masse accrochée serait d'environ 320g\nExercice 3\nDans cette exercice, on doit créer des fonctions à l'intérieur de notre classe frame afin de récupérer l'effort normal(N), l'effort tangentielle (T) et le moment sur z (M).", "def find_N(self,element):\n \"\"\"\n Returns the normal force of an element.\n \"\"\"\n F = self.assemble_F()\n N = F[3*element]\n return N\n \ndef find_T(self,element):\n \"\"\"\n Returns the tangential force of an element.\n \"\"\"\n F = self.assemble_F()\n T = F[3*element+1]\n return T\n \ndef find_M(self,element):\n \"\"\"\n Returns the moment of an element.\n \"\"\"\n F = self.assemble_F()\n M = F[3*element+2]\n return M" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
uber/pyro
tutorial/source/ekf.ipynb
apache-2.0
[ "Kalman Filter\nKalman filters are linear models for state estimation of dynamic systems [1]. They have been the <i>de facto</i> standard in many robotics and tracking/prediction applications because they are well suited for systems with uncertainty about an observable dynamic process. They use a \"observe, predict, correct\" paradigm to extract information from an otherwise noisy signal. In Pyro, we can build differentiable Kalman filters with learnable parameters using the pyro.contrib.tracking library\nDynamic process\nTo start, consider this simple motion model:\n$$ X_{k+1} = FX_k + \\mathbf{W}_k $$\n$$ \\mathbf{Z}_k = HX_k + \\mathbf{V}_k $$\nwhere $k$ is the state, $X$ is the signal estimate, $Z_k$ is the observed value at timestep $k$, $\\mathbf{W}_k$ and $\\mathbf{V}_k$ are independent noise processes (ie $\\mathbb{E}[w_k v_j^T] = 0$ for all $j, k$) which we'll approximate as Gaussians. Note that the state transitions are linear.\nKalman Update\nAt each time step, we perform a prediction for the mean and covariance:\n$$ \\hat{X}k = F\\hat{X}{k-1}$$\n$$\\hat{P}k = FP{k-1}F^T + Q$$\nand a correction for the measurement:\n$$ K_k = \\hat{P}_k H^T(H\\hat{P}_k H^T + R)^{-1}$$\n$$ X_k = \\hat{X}_k + K_k(z_k - H\\hat{X}_k)$$\n$$ P_k = (I-K_k H)\\hat{P}_k$$\nwhere $X$ is the position estimate, $P$ is the covariance matrix, $K$ is the Kalman Gain, and $Q$ and $R$ are covariance matrices.\nFor an in-depth derivation, see [2]\nNonlinear Estimation: Extended Kalman Filter\nWhat if our system is non-linear, eg in GPS navigation? Consider the following non-linear system:\n$$ X_{k+1} = \\mathbf{f}(X_k) + \\mathbf{W}_k $$\n$$ \\mathbf{Z}_k = \\mathbf{h}(X_k) + \\mathbf{V}_k $$\nNotice that $\\mathbf{f}$ and $\\mathbf{h}$ are now (smooth) non-linear functions.\nThe Extended Kalman Filter (EKF) attacks this problem by using a local linearization of the Kalman filter via a Taylors Series expansion.\n$$ f(X_k, k) \\approx f(x_k^R, k) + \\mathbf{H}_k(X_k - x_k^R) + \\cdots$$\nwhere $\\mathbf{H}_k$ is the Jacobian matrix at time $k$, $x_k^R$ is the previous optimal estimate, and we ignore the higher order terms. At each time step, we compute a Jacobian conditioned the previous predictions (this computation is handled by Pyro under the hood), and use the result to perform a prediction and update.\nOmitting the derivations, the modification to the above predictions are now:\n$$ \\hat{X}k \\approx \\mathbf{f}(X{k-1}^R)$$\n$$ \\hat{P}k = \\mathbf{H}\\mathbf{f}(X_{k-1})P_{k-1}\\mathbf{H}\\mathbf{f}^T(X{k-1}) + Q$$\nand the updates are now:\n$$ X_k \\approx \\hat{X}k + K_k\\big(z_k - \\mathbf{h}(\\hat{X}_k)\\big)$$\n$$ K_k = \\hat{P}_k \\mathbf{H}\\mathbf{h}(\\hat{X}k) \\Big(\\mathbf{H}\\mathbf{h}(\\hat{X}k)\\hat{P}_k \\mathbf{H}\\mathbf{h}(\\hat{X}k) + R_k\\Big)^{-1} $$\n$$ P_k = \\big(I - K_k \\mathbf{H}\\mathbf{h}(\\hat{X}_k)\\big)\\hat{P}_K$$\nIn Pyro, all we need to do is create an EKFState object and use its predict and update methods. 
Pyro will do exact inference to compute the innovations and we will use SVI to learn a MAP estimate of the position and measurement covariances.\nAs an example, let's look at an object moving at near-constant velocity in 2-D in a discrete time space over 100 time steps.", "import os\nimport math\n\nimport torch\nimport pyro\nimport pyro.distributions as dist\nfrom pyro.infer.autoguide import AutoDelta\nfrom pyro.optim import Adam\nfrom pyro.infer import SVI, Trace_ELBO, config_enumerate\nfrom pyro.contrib.tracking.extended_kalman_filter import EKFState\nfrom pyro.contrib.tracking.distributions import EKFDistribution\nfrom pyro.contrib.tracking.dynamic_models import NcvContinuous\nfrom pyro.contrib.tracking.measurements import PositionMeasurement\n\nsmoke_test = ('CI' in os.environ)\nassert pyro.__version__.startswith('1.7.0')\n\ndt = 1e-2\nnum_frames = 10\ndim = 4\n\n# Continuous model\nncv = NcvContinuous(dim, 2.0)\n\n# Truth trajectory\nxs_truth = torch.zeros(num_frames, dim)\n# initial direction\ntheta0_truth = 0.0\n# initial state\nwith torch.no_grad():\n xs_truth[0, :] = torch.tensor([0.0, 0.0, math.cos(theta0_truth), math.sin(theta0_truth)])\n for frame_num in range(1, num_frames):\n # sample independent process noise\n dx = pyro.sample('process_noise_{}'.format(frame_num), ncv.process_noise_dist(dt))\n xs_truth[frame_num, :] = ncv(xs_truth[frame_num-1, :], dt=dt) + dx", "Next, let's specify the measurements. Notice that we only measure the positions of the particle.", "# Measurements\nmeasurements = []\nmean = torch.zeros(2)\n# no correlations\ncov = 1e-5 * torch.eye(2)\nwith torch.no_grad():\n # sample independent measurement noise\n dzs = pyro.sample('dzs', dist.MultivariateNormal(mean, cov).expand((num_frames,)))\n # compute measurement means\n zs = xs_truth[:, :2] + dzs", "We'll use a Delta autoguide to learn MAP estimates of the position and measurement covariances. The EKFDistribution computes the joint log density of all of the EKF states given a tensor of sequential measurements.", "def model(data):\n # a HalfNormal can be used here as well\n R = pyro.sample('pv_cov', dist.HalfCauchy(2e-6)) * torch.eye(4)\n Q = pyro.sample('measurement_cov', dist.HalfCauchy(1e-6)) * torch.eye(2)\n # observe the measurements\n pyro.sample('track_{}'.format(i), EKFDistribution(xs_truth[0], R, ncv,\n Q, time_steps=num_frames),\n obs=data)\n \nguide = AutoDelta(model) # MAP estimation\n\noptim = pyro.optim.Adam({'lr': 2e-2})\nsvi = SVI(model, guide, optim, loss=Trace_ELBO(retain_graph=True))\n\npyro.set_rng_seed(0)\npyro.clear_param_store()\n\nfor i in range(250 if not smoke_test else 2):\n loss = svi.step(zs)\n if not i % 10:\n print('loss: ', loss)\n\n# retrieve states for visualization\nR = guide()['pv_cov'] * torch.eye(4)\nQ = guide()['measurement_cov'] * torch.eye(2)\nekf_dist = EKFDistribution(xs_truth[0], R, ncv, Q, time_steps=num_frames)\nstates= ekf_dist.filter_states(zs)", "References\n[1] Kalman, R. E. A New Approach to Linear Filtering and Prediction Problems. 1960\n[2] Welch, Greg, and Bishop, Gary. An Introduction to the Kalman Filter. 2006." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dereneaton/ipyrad
testdocs/analysis/cookbook-digest_genomes.ipynb
gpl-3.0
[ "<span style=\"color:gray\">ipyrad-analysis toolkit: </span> digest genomes\nThe purpose of this tool is to digest a genome file in silico using the same restriction enzymes that were used for an empirical data set to attempt to extract homologous data from the genome file. This can be a useful procedure for adding additional outgroup samples to a data set. \nRequired software", "# conda install ipyrad -c conda-forge -c bioconda\n\nimport ipyrad.analysis as ipa", "A genome file\nYou will need a genome file in fasta format (optionally it can be gzip compressed).", "genome = \"/home/deren/Downloads/Ahypochondriacus_459_v2.0.fa\"", "Initialize the tool (e.g., ddRAD)\nYou can generate single or paired-end data, and you will likely want to restrict the size of selected fragments to be within an expected size selection window, as is typically done in empirical data sets. Here I select all fragments occuring between two restriction enzymes where the intervening fragment is 300-500bp in length. I then ask that the analysis returns the digested fragments as 150bp fastq reads, and to provide 10 copies of each one. I also restrict it to only the first (largest) 12 scaffolds using the 'nscaffolds' arg.", "digest = ipa.digest_genome(\n fasta=genome,\n name=\"amaranthus-digest\",\n workdir=\"digested_genomes\",\n re1=\"CTGCAG\",\n re2=\"AATTC\",\n ncopies=10,\n readlen=150,\n min_size=300,\n max_size=500, \n nscaffolds=12,\n)\n\ndigest.run()", "Check results", "! ls -l digested_genomes/", "Example 2 (original RAD data)\nThe original RAD method uses sonication rather than a second restriction digestion to cut all of the fragments down to an appropriate size for sequencing. Thus you only need to provide a single cut site and a selection window.", "digest = ipa.digest_genome(\n fasta=genome,\n name=\"amaranthus-digest-RAD\",\n workdir=\"digested_genomes\",\n re1=\"CTGCAG\",\n re2=None,\n paired=False,\n ncopies=10,\n readlen=100,\n min_size=300,\n max_size=500, \n nscaffolds=12,\n)\n\ndigest.run()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
prasants/pyds
12.Introduction_to_Pandas.ipynb
mit
[ "Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Pandas:-Introduction\" data-toc-modified-id=\"Pandas:-Introduction-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Pandas: Introduction</a></div><div class=\"lev2 toc-item\"><a href=\"#Importing-Libraries\" data-toc-modified-id=\"Importing-Libraries-11\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Importing Libraries</a></div><div class=\"lev1 toc-item\"><a href=\"#Data-Structures\" data-toc-modified-id=\"Data-Structures-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Data Structures</a></div><div class=\"lev2 toc-item\"><a href=\"#Series\" data-toc-modified-id=\"Series-21\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Series</a></div><div class=\"lev3 toc-item\"><a href=\"#Mini-Project\" data-toc-modified-id=\"Mini-Project-211\"><span class=\"toc-item-num\">2.1.1&nbsp;&nbsp;</span>Mini-Project</a></div><div class=\"lev2 toc-item\"><a href=\"#DataFrames\" data-toc-modified-id=\"DataFrames-22\"><span class=\"toc-item-num\">2.2&nbsp;&nbsp;</span>DataFrames</a></div><div class=\"lev1 toc-item\"><a href=\"#Indexing-and-Selection\" data-toc-modified-id=\"Indexing-and-Selection-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Indexing and Selection</a></div><div class=\"lev2 toc-item\"><a href=\"#Selecting-Columns\" data-toc-modified-id=\"Selecting-Columns-31\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>Selecting Columns</a></div><div class=\"lev2 toc-item\"><a href=\"#Using-loc-and-iloc\" data-toc-modified-id=\"Using-loc-and-iloc-32\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>Using <code>loc</code> and <code>iloc</code></a></div><div class=\"lev2 toc-item\"><a href=\"#Conditional-Selection\" data-toc-modified-id=\"Conditional-Selection-33\"><span class=\"toc-item-num\">3.3&nbsp;&nbsp;</span>Conditional Selection</a></div><div class=\"lev2 toc-item\"><a href=\"#Creating-New-Columns\" data-toc-modified-id=\"Creating-New-Columns-34\"><span class=\"toc-item-num\">3.4&nbsp;&nbsp;</span>Creating New Columns</a></div><div class=\"lev2 toc-item\"><a href=\"#Removing-Columns\" data-toc-modified-id=\"Removing-Columns-35\"><span class=\"toc-item-num\">3.5&nbsp;&nbsp;</span>Removing Columns</a></div><div class=\"lev2 toc-item\"><a href=\"#Dataframe-from-a-Dictionry\" data-toc-modified-id=\"Dataframe-from-a-Dictionry-36\"><span class=\"toc-item-num\">3.6&nbsp;&nbsp;</span>Dataframe from a Dictionry</a></div><div class=\"lev2 toc-item\"><a href=\"#Exercise\" data-toc-modified-id=\"Exercise-37\"><span class=\"toc-item-num\">3.7&nbsp;&nbsp;</span>Exercise</a></div><div class=\"lev1 toc-item\"><a href=\"#Handling-Missing-Data\" data-toc-modified-id=\"Handling-Missing-Data-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>Handling Missing Data</a></div><div class=\"lev3 toc-item\"><a href=\"#What-is-Missing-Data?\" data-toc-modified-id=\"What-is-Missing-Data?-401\"><span class=\"toc-item-num\">4.0.1&nbsp;&nbsp;</span>What is Missing Data?</a></div><div class=\"lev2 toc-item\"><a href=\"#Imputation\" data-toc-modified-id=\"Imputation-41\"><span class=\"toc-item-num\">4.1&nbsp;&nbsp;</span>Imputation</a></div><div class=\"lev2 toc-item\"><a href=\"#Interpolation\" data-toc-modified-id=\"Interpolation-42\"><span class=\"toc-item-num\">4.2&nbsp;&nbsp;</span>Interpolation</a></div><div class=\"lev2 toc-item\"><a href=\"#A-Quick-Detour-into-some-Data-Viz\" data-toc-modified-id=\"A-Quick-Detour-into-some-Data-Viz-43\"><span class=\"toc-item-num\">4.3&nbsp;&nbsp;</span>A Quick Detour into some Data 
Viz</a></div><div class=\"lev1 toc-item\"><a href=\"#Merge,-Join,-Concatenate\" data-toc-modified-id=\"Merge,-Join,-Concatenate-5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>Merge, Join, Concatenate</a></div><div class=\"lev2 toc-item\"><a href=\"#Merge\" data-toc-modified-id=\"Merge-51\"><span class=\"toc-item-num\">5.1&nbsp;&nbsp;</span>Merge</a></div><div class=\"lev2 toc-item\"><a href=\"#Join\" data-toc-modified-id=\"Join-52\"><span class=\"toc-item-num\">5.2&nbsp;&nbsp;</span>Join</a></div><div class=\"lev2 toc-item\"><a href=\"#Concatenate\" data-toc-modified-id=\"Concatenate-53\"><span class=\"toc-item-num\">5.3&nbsp;&nbsp;</span>Concatenate</a></div><div class=\"lev1 toc-item\"><a href=\"#Grouping,-a.k.a.-split-apply-combine\" data-toc-modified-id=\"Grouping,-a.k.a.-split-apply-combine-6\"><span class=\"toc-item-num\">6&nbsp;&nbsp;</span>Grouping, a.k.a. split-apply-combine</a></div><div class=\"lev2 toc-item\"><a href=\"#Apply\" data-toc-modified-id=\"Apply-61\"><span class=\"toc-item-num\">6.1&nbsp;&nbsp;</span>Apply</a></div><div class=\"lev2 toc-item\"><a href=\"#Map\" data-toc-modified-id=\"Map-62\"><span class=\"toc-item-num\">6.2&nbsp;&nbsp;</span>Map</a></div><div class=\"lev2 toc-item\"><a href=\"#ApplyMap\" data-toc-modified-id=\"ApplyMap-63\"><span class=\"toc-item-num\">6.3&nbsp;&nbsp;</span>ApplyMap</a></div><div class=\"lev1 toc-item\"><a href=\"#Pivot-Tables\" data-toc-modified-id=\"Pivot-Tables-7\"><span class=\"toc-item-num\">7&nbsp;&nbsp;</span>Pivot Tables</a></div><div class=\"lev2 toc-item\"><a href=\"#Sales-Reports\" data-toc-modified-id=\"Sales-Reports-71\"><span class=\"toc-item-num\">7.1&nbsp;&nbsp;</span>Sales Reports</a></div><div class=\"lev2 toc-item\"><a href=\"#Tips\" data-toc-modified-id=\"Tips-72\"><span class=\"toc-item-num\">7.2&nbsp;&nbsp;</span>Tips</a></div><div class=\"lev2 toc-item\"><a href=\"#Bada-Bing!\" data-toc-modified-id=\"Bada-Bing!-73\"><span class=\"toc-item-num\">7.3&nbsp;&nbsp;</span>Bada Bing!</a></div><div class=\"lev1 toc-item\"><a href=\"#Basic-Statistical-Operations/Explorations\" data-toc-modified-id=\"Basic-Statistical-Operations/Explorations-8\"><span class=\"toc-item-num\">8&nbsp;&nbsp;</span>Basic Statistical Operations/Explorations</a></div>\n\n# Pandas: Introduction\n**Pandas** is Python's library for dealing with structured or tabular data. It's main contributor, Wes McKinney was inspired by R's `data.frame`, and implemented it for Python. <br>\nIt combines the speed of NumPy with the ease of SQL, as per Wes, and I completely agree with that.\nIf you have used R, and the dplyr package, you know how easy it is to manipulate data with it. \n\nWe will be learning about various methods to deal with data, and occasionally we will make things a little challenging so as to replicate/mimic real world conditions. And while at it, we will throw in visualisations using Matplotlib too! The best way to learn is to write code yourself, but don't worry if you don't understand all of it in the first go. 
And of course, feel free to take a step back and revisit [Lesson 10](10.Visualise_This-Tutorial01.ipynb).\n\nBy the end of it, we should have dealt with about a few case studies, which should be an excellent start for your portfolio.\n\nWe will cover at the very least the following topic:\n* Indexing and Selection\n* Creating new columns\n* Renaming\n* Grouping\n* Handling missing values\n* Merge, join\n* map(), apply(), applymap()\n* Pivot Tables\n* Basic statistics\n* Plots (throughout the exercise)\n\nI say \"at the very least\" because in my opinion, this is the bare minimum you should know to handle data science problems 'in the wild', as in, problems that aren't toy problems, and the kind that data scientists deal with every day.\n\n## Importing Libraries\nAs usual, we begin by importing our libraries. Just as with NumPy, where we import it as `np`, we will import the pandas library as `pd`. It's just convention, and you're free to import it as `chuck_norris`, `really_long_name_for_reason_in_particular` or just plain and simple, `pd`.", "import pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt", "Data Structures\nThere are three fundamental data structures supported by Pandas:<br>\n* Series: a one-dimensional labeled array capable of holding any data type (integers, strings, floating point numbers, Python objects, etc.). For those coming from an R background, Series is much like a Vector.\n* DataFrame: a 2-dimensional labeled data structure with columns of potentially different types.\n* Panel: also called longitudinal data or cross-sectional time series data, is data where multiple cases (people, firms, countries etc) were observed at two or more time periods. This is rarely used though, and I personally haven't come across this except for some Econometrics courses I had taken in my undergraduate years.\nSeries\nThe basic format to creat a series is:<br>\nseries_a = pd.Series(data, index = index_name)\nThe default value for the index is 1,2,3,4....and so on, and doesn't not need to be specified, except in the case of scalars.", "import pandas as pd\n\n# From Scalar Values\nseries_1 = pd.Series([1,2,3,4,5])\nseries_1", "Notice the 0,1,2,3... on the left side? That's called the Index. 
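If you want to look at it on its own, calling series_1.index should show you just the index object. 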
It starts from 0, but you can rename it.", "series_1 = pd.Series([1,2,3,4,5], index = ['Mon','Tue','Wed','Thu','Fri'])\nseries_1\n\nseries_2 = pd.Series(1.0, index = ['a','b','c','d','e'])\nseries_2\n\nimport pandas as pd\nimport numpy as np\n\n# From an array\n\n# Just copy this for now, we'll cover the 'seed' in DataFrames\nnp.random.seed(42) \n\nseries_3 = pd.Series(np.random.randn(5))\nseries_3\n\nnp.random.seed(42)\nseries_3 = pd.Series(np.random.randn(5), index = ['a','b','c','d','e'])\nseries_3\n\nnp.random.seed(42)\nind_1 = ['a','b','c','d','e']\nseries_3 = pd.Series(np.random.randn(5), index = ind_1)\nseries_3\n\nseries_4 = pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])\nseries_4", "We can subset and get values from the series.", "series_4['a'] == series_4[0]\n\nseries_4[series_4>3]\n\nseries_4[series_4%2==0]\n\nseries_5 = pd.Series([1,2,3,4,5], index = ['HP', 'GS', 'IBM', 'AA', 'FB'])\nseries_5\n\nseries_5['IBM']\n\ntech_pf1 = series_5[['HP', 'IBM', 'FB']]\ntech_pf1\n\n# From a Dictionary\ndict_01 = {'Gavin' : 50, 'Russ' : 100, 'Erlich' : 150}\nseries_6 = pd.Series(dict_01)\nseries_6\n\n# Reordering the previous series\nindex = ['Gavin', 'Russ', 'Erlich', 'Peter']\nseries_7 = pd.Series(dict_01, index=index)\nseries_7", "Notice the NaN, which stands for Not a Number. We will be dealing with it extensively when working with DataFrames. It is an indicator for missing or corrupted data. Here's how we test for it.", "pd.isnull(series_7)", "And here's a nice discussion on the topic from our friends at StackOverflow.", "# Pandas is very smart, and aligns the series for mathematical operations\nseries_6 + series_7\n\n# Renaming an Index\nseries_7.index.name = \"Names\"\nseries_7\n\n# Naming a Series\nseries_7.name = \"SV\"\nseries_7", "Mini-Project", "goals = pd.Series([20,19,21,24,1], index = [\"Messi\", \"Neymar\", \"Zlatan\", \"Ronaldo\", \"N’Gog\"])\ngoals\n\n# Who scored less than 20 goals?\ngoals[goals<20]\n\n# What is the average number of goals scored?\ngoals.mean()\n\n# What is the median number of goals scored?\ngoals.median()\n\n# What is the range of goals scored? (Range = Max - Min)\ngoals_range = goals.max() - goals.min()\nprint(goals_range)\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.rcParams[\"figure.figsize\"] = (15,7)\n\n# Plot the goals in a bar chart\ngoals.plot(kind = \"bar\")\n\n# Let's beautify that a little\ngoals.plot(kind = \"barh\", title = \"Goal Scorers\")", "Read more about these here. \nDataFrames\nDataFrames is in many respects, the real Pandas. Usually, if you're using Pandas, it will be to use DataFrames.<br>\nWe will begin with creating DataFrames, and the usual indexing and selection mechanisms. In reality, you will probably never have to 'create' a DataFrame, but practice these skills here to get comfortable with heirarchies, indices and selections. Then we will move on to reading data from multiple formats, including spreadsheets, JSON files and API endpoints.\nBy the way, during these examples, we will always set seed first when generating random numbers. If you're coming from R, this is the same as set.seed(). In Python, we use the random.seed statement from numpy, which you can read about here. You can set it to any number you like, and I usually set it to 42 just out of habit, but there's not to say you can't set it to an arbitrary number like 27 or 2012. Use the same numbers as this notebook though to replicate the results. Also note that we need to mention it in every cell that we want the results replicated. 
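As a quick sketch of what the seed buys us: running np.random.seed(42) followed by np.random.randn(3) in one cell, and then the exact same two lines again in a fresh cell, should print the same three numbers both times. 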
\nYou will see later about how this is good practice especially when sharing your work with other members of the team - they will be able to reproduce your work on their machines due to the pseudo-random number that is generated algorithmically.", "import pandas as pd\nimport numpy as np\n\n# Let's start with a standard array\narr1 = np.array([[40,40,75,95],[80,85,120,130],\n [155,160,165,170],[200,245,250,260]])\nprint(arr1.shape)\nprint(arr1.size)\nprint(arr1)\n\n# It is quite common to assign a dataframe the name 'df', although you can\n# use a relevant name, such baseball_stats or book_sales\n# It's always good to use context driven names - you should code expecting\n# someone else to read it a few months down the line\n\ndf = pd.DataFrame(arr1, index = \"Peter,Clarke,Bruce,Tony\".split(\",\"), \n columns = \"Jan,Feb,Mar,Apr\".split(\",\"))\ndf", "Indexing and Selection\nSelecting Columns", "df = pd.DataFrame(arr1, index = \"Peter,Clarke,Bruce,Tony\".split(\",\"), \n columns = \"Jan,Feb,Mar,Apr\".split(\",\"))\ndf\n\n# Selecting columns\ndf[['Jan']]\n\ndf[['Jan','Feb']]\n\ndf[['Mar','Jan']]", "It's interesting to note that the offical Pandas documentation refers to DataFrames as:\n\nCan be thought of as a dict-like container for Series objects.\n\nYou can access it as a Series as below:", "df['Jan']\n\nprint('Series:', type(df['Jan']))\nprint('DataFrame:',type(df[['Jan']]))", "Using loc and iloc", "df = pd.DataFrame(arr1, index = \"Peter,Clarke,Bruce,Tony\".split(\",\"), \n columns = \"Jan,Feb,Mar,Apr\".split(\",\"))\ndf\n\n# For selecting by Label\ndf.loc[['Tony']]\n\ndf.loc[['Peter','Bruce']]\n\ndf.loc[['Peter','Bruce'],['Jan','Feb']]\n\n# All of Peter's data\ndf.loc[[\"Peter\"]][:]\n\ndf.loc[\"Peter\"][:]\n\ndf\n\n# Integer-location based indexing for selection by position\n# Note how this returns a Dataframe\ndf.iloc[[0]]\n\n# and this returns a Series\ndf.iloc[0]\n\n# Narrowing down further\ndf.iloc[[0],[1]]\n\n# Replicating the results from our use of the loc statement\ndf.iloc[[0,2]]\n\n# Compare to df.loc[['Peter','Bruce'],['A','D']]\ndf.iloc[[0,2],[0,3]]", "There's another function named ix. I have rarely used it, and both loc and iloc take care of all my selection needs. You can read about it here.\nAlso, check out the similarity of outputs below:", "df.ix[0:3]\n\ndf.iloc[0:3]", "Conditional Selection\nWhile exploring data sets, one often has to use conditional selection. Or this could be true for creating subsets to work.", "df\n\ndf[df%2 == 0]\n\ndf%2 == 0\n\ndf < 100\n\ndf[df<100]\n\ndf\n\ndf[df['Jan']>100][['Apr']]\n\ndf[df['Jan']<100][['Feb','Apr']]\n\n# Using multiple conditions\ndf[(df['Jan'] >= 80) & (df['Mar']>100)]", "Did you notice that we used &amp; instead of and? When using Pandas, we have to use the symbol, not the word. Here's a StackOverflow discussion on this.\nCreating New Columns", "df = pd.DataFrame(arr1, index = \"Peter,Clarke,Bruce,Tony\".split(\",\"), columns = \"Jan,Feb,Mar,Apr\".split(\",\"))\ndf\n\ndf[\"Dec\"] = df[\"Jan\"] + df[\"Mar\"]\ndf", "Removing Columns\nWhile fundamentally adding and removing columns ought to be similar operations, there are a few differences. Let's see if you can figure it out.", "df\n\ndf.drop('Dec', axis = 1)", "First, we had to mention the axis. 0 is for rows, 1 is for columns.", "df", "Why is 'Dec' still there? Here lies the difference - while removing columns, we have to specify that the operation should be inplace. 
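In other words, df.drop('Dec', axis = 1) only returns a modified copy; to actually change df you either pass inplace = True or reassign the result yourself, for example df = df.drop('Dec', axis = 1). 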
Read about it in the official documentation.", "df.drop('Dec', axis = 1, inplace = True)\ndf", "And just for the sake of completion, let's temporarily kick out Tony from the table. Temporary, since it's not inplace.", "df.drop('Tony', axis = 0)\n\n# Renaming Columns\ndf.rename(columns={'Jan': 'January'}, inplace=True)\ndf\n\ndf.rename(columns={'Feb': 'February', 'Mar': 'March', 'Apr': 'April'}, inplace=True)\ndf", "Dataframe from a Dictionry\nLet's create a new dataframe from a dictionary, and then apply some of the selection techniques we just learnt.", "dict1 = {'first_name': ['Erlich', 'Richard', \"Dinesh\", 'Gilfoyle', 'Nelson'],\n 'second_name': ['Bachman', 'Hendricks', np.nan, np.nan, 'Bighetti'],\n 'occupation': ['Investor', 'Entrepreneur', 'Coder', 'Coder', 'Bench Warmer'],\n 'age': [40, 30, 28, 29, 28]}\ndf = pd.DataFrame(dict1, columns = ['first_name', 'second_name','occupation', 'age'])\ndf\n\n# Who is under 30 years of age?\ndf[df[\"age\"]<30]\n\n# Who are the coders?\ndf[df[\"occupation\"] == \"Coder\"]\n\n# Multiple Conditions : Coders, below 30\n# Not that conditions are Booleans, as shown below\ncoders = df[\"occupation\"] == \"Coder\"\nund_30 = df[\"age\"]<30\ndf[coders & und_30]\n\ndf[df[\"second_name\"].notnull()]", "Exercise", "np.random.seed(42)\nnp.random.randn(4,4)\n\nnp.random.seed(42)\ndf = pd.DataFrame(np.random.randn(4,4), index = \"Peter,Clarke,Bruce,Tony\".split(\",\"), columns = \"Jan,Feb,Mar,Apr\".split(\",\"))\ndf\n\n# Who scored greater than 0 in Apr?\ndf[df>0][[\"Apr\"]]\n\n# Who scored below 0 in March?\n\n\n# In which month/months did Clarke score above 0?\n\n\n# Find the highest scores for each month \n# Hint: .max()\n\n\n# Find the lowest scores for each month\n\n\n# Plot the higest score for each month in a bar graph\n", "Handling Missing Data\nPay special attention to this section. If needed, spend some extra time to cover all the relevant techniques. <br>\nNever in my experience have I come across a 100% clean data set \"in the wild\". What that means is that of course you will find that most data sets that you train with to be complete, but real world data is messy and incomplete. \nEven when working with high quality, financial data from exchanges, they might often have missing data points. The less said about unstructured data like text, the better. \nTL/DR: If you're going to fight Mike Tyson, don't train to fight Mr Bean.\n<img src=\"images/bean_box.jpg\">\nWhat is Missing Data?\nData can be missing because:\n* It was never captured\n* The data does not exist\n* It was captured but got corrupted\nIn Pandas, missing data will be represented as None or NaN.", "df = pd.DataFrame({'NYC':[3,np.nan,7,9,6],\n 'SF':[4,3,8,7,15],\n 'CHI':[4,np.nan,np.nan,14,6],\n 'MIA':[3, 9,12,8,9]}, index = ['Mon','Tue','Wed','Thu','Fri'])\ndf", "First thing we can do is drop rows with missing values with the dropna() function. By default, rows are dropped, but you can change this to columns as well.", "df.dropna()\n\ndf.dropna(axis = 0)\n\ndf.dropna(axis = 1)", "While this can be helpful in some ways, if your dataset is small, you are losing a significant portion of your data.\nFor example, if 100 rows out of 1 million rows have missing data, that's negligible, and can potentially be thrown away. 
What if you have 10 out of 85 rows with incorrect, unusable or missing data?", "df2 = df.copy()\n\ndf2\n\ndf2.mean()\n\n# Are these really the means though?\ndf\n\nmean = df2['SF'].mean()\nmean", "Imputation\nUsing the fillna function, we can replace missing values.", "df = pd.DataFrame({'NYC':[3,np.nan,7,9,6],\n 'SF':[4,3,8,7,15],\n 'CHI':[4,np.nan,np.nan,14,6],\n 'MIA':[3, 9,12,8,9]}, index = ['Mon','Tue','Wed','Thu','Fri'])\ndf\n\ndf.mean()\n\ndf.fillna(value = df.mean(), inplace = True)\ndf\n\ndf = pd.DataFrame({'NYC':[3,np.nan,7,9,6],\n 'SF':[4,3,8,7,15],\n 'CHI':[4,np.nan,np.nan,14,6],\n 'MIA':[3, 9,12,8,9]}, index = ['Mon','Tue','Wed','Thu','Fri'])\ndf\n\ndf3 = df.copy()\ndf3\n\nmedian = df3['SF'].median()\nmedian\n\ndf3.fillna(value = median, inplace = True)\ndf3\n\ndf3.mode()", "But sometimes, the data isn't part of the table. Consider the scenario below. We know that the below tables contains names of female babies. But it's missing in our dataset.", "baby_names = {\n 'id': ['101', '102', '103', '104', '105'],\n 'first_name': ['Emma', 'Madison', 'Hannah', 'Grace', 'Emily']\n }\ndf_baby = pd.DataFrame(baby_names, columns = ['id', 'first_name'])\ndf_baby\n\ndf_baby.columns\n\ndf_baby[\"gender\"] = \"F\"\n\ndf_baby\n\ndf_baby['gender'] = 0\n\ndf_baby", "Interpolation\nRead up more on the interpolate function here and here", "df = pd.read_csv(\"data/cafe_sales2015.csv\")\ndf\n\ndf[\"Date\"].head()\n\ndf[\"Date\"] = pd.to_datetime(df[\"Date\"])\n\ndf.set_index([\"Date\"], inplace = True)\n\ndf.head()\n\ndf.tail()\n\ndf.head(3)\n\ndf.describe()\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rcParams[\"figure.figsize\"] = (20,5)\n\ndf.plot(kind=\"line\")\n\ndf[\"Water\"].plot(kind=\"line\")\n\ndf.interpolate(method = \"linear\", inplace = True)\n\ndf.head(5)\n\ndf.interpolate().count()\n\ndf[[\"Latte\", \"Water\"]].plot(kind=\"line\")", "Keep in mind though, that these are at best approximations. \nA Quick Detour into some Data Viz\nInstall Vincent by running the following line in your command line:\nPython 2.x: pip install vincent <br>\nPython 3.x: pip3 install vincent", "import vincent\nvincent.core.initialize_notebook()\n\nline = vincent.Line(df)\nline.axis_titles(x='Date', y='Amount')\n\nline = vincent.Line(df[[\"Latte\", \"Water\"]])\nline.axis_titles(x='Date', y='Amount')\n\nstacked = vincent.StackedArea(df)\nstacked.axis_titles(x='Date', y='Amount')\nstacked.legend(title='Cafe Sales')\nstacked.colors(brew='Spectral')", "Read about using the Vincent package here. \nThe latest update to Matplotlib, V 2.0.0 has really improved the quality of the graphics, but it's still not quite production ready, while on the positive side, it is stable and has a large community of people who use it. Niche packages like Vincent can produce some amazing graphics right out of the box with minimal tweaking, but they may not be very mature. Nevertheless, as Data Scientists, it's good to learn about new packages, especially those that help you communicate your results to a non-technical audience. 
If people don't understand what you do, they won't think what you do is important!\nMerge, Join, Concatenate\n<img src=\"images/sql-joins.png\">\nImage Source: http://www.datapine.com/blog/sql-joins-and-data-analysis-using-sql/\nMerge", "customers = {\n 'customer_id': ['101', '102', '103', '104', '105'],\n 'first_name': ['Tony', 'Silvio', 'Paulie', 'Corrado', 'Christopher'], \n 'last_name': ['Soprano', 'Dante', 'Gualtieri', 'Soprano', 'Moltisanti']}\ndf_1 = pd.DataFrame(customers, columns = ['customer_id', 'first_name', 'last_name'])\ndf_1\n\norders = {\n 'customer_id': ['101', '104', '105', '108', '111'],\n 'order_date': ['2015-01-01', '2015-01-08', '2015-01-19', '2015-02-10', '2015-02-11'], \n 'order_value': ['10000', '25000', '1100', '5000', '4400']}\ndf_2 = pd.DataFrame(orders, columns = ['customer_id', 'order_date', 'order_value'])\ndf_2\n\npd.merge(df_1, df_2, how = 'inner', on = 'customer_id')\n\npd.merge(df_1, df_2, how = 'left', on = 'customer_id')\n\npd.merge(df_1, df_2, how = 'right', on = 'customer_id')\n\npd.merge(df_1, df_2, how = 'outer', on = 'customer_id')", "Join", "customers = {\n 'customer_id': ['101', '102', '103', '104', '105'],\n 'first_name': ['Tony', 'Silvio', 'Paulie', 'Corrado', 'Christopher'], \n 'last_name': ['Soprano', 'Dante', 'Gualtieri', 'Soprano', 'Moltisanti']}\ncustomers\n\norders = {\n 'customer_id': ['101', '104', '105', '108', '111'],\n 'order_date': ['2015-01-01', '2015-01-08', '2015-01-19', '2015-02-10', '2015-02-11'], \n 'order_value': ['10000', '25000', '1100', '5000', '4400']}\norders\n\ndf1_new = pd.DataFrame.from_dict(customers, orient='columns', dtype=None)\n\ndf1_new\n\ndf1_new = df1_new.set_index('customer_id')\ndf1_new\n\ndf2_new = pd.DataFrame.from_dict(orders, orient='columns', dtype=None)\ndf2_new\n\ndf2_new = df2_new.set_index('customer_id')\ndf2_new\n\ndf1_new.join(df2_new,how = \"inner\")\n\ndf1_new.join(df2_new,how = \"outer\")\n\ndf1_new.join(df2_new,how = \"left\")\n\ndf1_new.join(df2_new,how = \"right\")\n\n# Alternate Way : I don't recommend this\ndf_1.join(df_2, on = \"customer_id\", lsuffix='_l', rsuffix='_r')", "Concatenate", "customers = {\n 'customer_id': ['101', '102', '103', '104', '105'],\n 'first_name': ['Tony', 'Silvio', 'Paulie', 'Corrado', 'Christopher'], \n 'last_name': ['Soprano', 'Dante', 'Gualtieri', 'Soprano', 'Moltisanti']}\ndf_1 = pd.DataFrame(customers, columns = ['customer_id', 'first_name', 'last_name'])\ndf_1\n\norders = {\n 'customer_id': ['101', '104', '105', '108', '111'],\n 'order_date': ['2015-01-01', '2015-01-08', '2015-01-19', '2015-02-10', '2015-02-11'], \n 'order_value': ['10000', '25000', '1100', '5000', '4400']}\ndf_2 = pd.DataFrame(orders, columns = ['customer_id', 'order_date', 'order_value'])\ndf_2\n\npd.concat([df_1,df_2])\n\npd.concat([df_1,df_2],axis=0)\n\npd.concat([df_1,df_2],axis=1)", "One final resource on why you would want to perform these operations in Pandas - and evidence on how fast it really is! http://wesmckinney.com/blog/high-performance-database-joins-with-pandas-dataframe-more-benchmarks/\nGrouping, a.k.a. split-apply-combine\nWhile analysing data, a Data Scientist has to very often perform aggregations, perform transformation ops like standardising data, and filter through the dataset to look at only relevant samples.\nThis is what the groupby function is primarily used for. 
\nRead more here.", "import pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nplt.rcParams[\"figure.figsize\"] = (15,7) \n\npaintball = {'Team': ['Super Ducks','Super Ducks', 'Super Ducks', 'Super Ducks', 'Super Ducks', 'Bobcats', 'Bobcats', 'Bobcats', 'Bobcats', 'Tigers', 'Tigers', 'Tigers', 'Tigers','Tigers','Tigers'], \n 'Name': ['Tony', 'Antonio', 'Felipe', 'Ryan', 'Mario', 'Sergio', 'Tanaka', 'Anderson', 'Joe', 'Floyd', 'Manny', 'Chris', 'Junior', 'George','Brock'],\n 'Kills': ['1', '1', '1', '4', '3', '2', '2', '2','5', '1', '1', '7', '4','8','5'], \n 'Shots Fired Before': [17, 19, 22, 8, 13, 85, 64, 49, 74, 14, 20, 24,13,31,37],\n 'Shots Fired After': [41, 73, 57, 30, 74, 37, 28, 40, 43, 18, 19, 21,13,32,39]}\ndf = pd.DataFrame(paintball, columns = ['Team', 'Name', 'Shots Fired Before', 'Shots Fired After','Kills'])\ndf\n\ndf.groupby('Team').mean()\n\nbyteam = df.groupby('Team')\nbyteam.count()\n\nbyteam.describe()\n\nbyteam.describe().transpose()['Bobcats']\n\nTeam_Before = df[['Shots Fired Before']].groupby(df['Team']).mean()\nTeam_After = df[['Shots Fired After']].groupby(df['Team']).mean()\n\nTeam_Before\n\nTeam_After\n\nTeam_Before.join(Team_After)\n\nplt.style.use('ggplot')\nplt.rcParams[\"figure.figsize\"] = (15,7) \nTeam_Before.join(Team_After).plot(kind=\"Bar\")", "Cool graph, but can we improve it, visually speaking? Yes of course we can! Let's look at some of the styles available within Matplotlib.", "plt.style.available", "Personally I am quite partial to ggplot and seaborn, but not so much to fivethirtyeight. Let's try these.", "plt.style.use('ggplot')\nplt.rcParams[\"figure.figsize\"] = (15,7) \nTeam_Before.join(Team_After).plot(kind=\"Bar\")", "What about fivethirtyeight?", "plt.style.use('fivethirtyeight')\nplt.rcParams[\"figure.figsize\"] = (15,7) \nTeam_Before.join(Team_After).plot(kind=\"Bar\")", "And seaborn. Note that seaborn is a visualisation library that works with Matplotlib. 
You can mimic the style without actually using it.", "plt.style.use('seaborn')\nplt.rcParams[\"figure.figsize\"] = (15,7) \nTeam_Before.join(Team_After).plot(kind=\"Bar\")\n\nplt.rcParams.update(plt.rcParamsDefault)\nplt.style.use('seaborn-poster')\nplt.rcParams[\"figure.figsize\"] = (15,7) \nTeam_Before.join(Team_After).plot(kind=\"Bar\")\n\npd.crosstab(df[\"Team\"], df[\"Kills\"], margins = True)\n\nplt.rcParams.update(plt.rcParamsDefault)\n%matplotlib inline\nplt.rcParams[\"figure.figsize\"] = (15,7)\nplt.style.use('seaborn-deep')\ndf.groupby('Kills').mean().plot(kind=\"bar\")", "Apply\nWe can use the apply function to perform an operation over an axis in a dataframe.", "import pandas as pd\nimport numpy as np\n\ndf = pd.read_csv(\"data/cafe_sales2015.csv\")\ndf.head()\n\ndf[\"Date\"] = pd.to_datetime(df[\"Date\"])\ndf.set_index([\"Date\"], inplace = True)\ndf.interpolate(method = \"linear\", inplace = True)\n\ndf.head()\n\n#print(df.apply(np.cumsum))\ndf.apply(np.average)\n\ndf.apply(lambda x: x.max() - x.min())\n\n# What columns have missing values?\ndf.apply(lambda x: sum(x.isnull()),axis=0)\n\n# Using Apply to find missing values\n# Obviously don't do this for datasets with thousands or millions of rows!\nempty = df.apply(lambda col: pd.isnull(col))\nempty", "Map\nThe map function iterates over each element of a series.", "import pandas as pd\nimport numpy as np\n\ndf = pd.read_csv(\"data/cafe_sales2015.csv\")\ndf.head()\n\ndf[\"Latte\"] = df[\"Latte\"].map(lambda x: x+2)\n\ndf.head()\n\ndf.interpolate(method = \"linear\", inplace = True)\ndf[\"Water\"] = df[\"Water\"].map(lambda x: x-1 if (x>0) else 0)\n\ndf.head()", "ApplyMap", "import pandas as pd\nimport numpy as np\n\ndf = pd.read_csv(\"data/cafe_sales2015.csv\")\ndf.head()\n\ndef to_int(x):\n if type(x) is float:\n x = int(x)\n return x \n else:\n return x\n\ndf.interpolate(method = \"linear\", inplace = True)\ndf.applymap(to_int).head()", "Further Reading<br>\nWes McKinney's amazing book covers this issue. Refer to Page 132.\nPivot Tables\nPivot tables are summarisation tables that help the user sort, count, total or average the data available in a dataset. If you have used Excel, you will be very familiar with them. 
If not, let's look at it from a fresh Pandas perspective.\nTypically, there are four parameters, but you don't always have to specify every one of them, as we will see in the examples below.\n\nindex: An array of the dataset that will used as indices to our new reshaped and aggregated DataFrame\ncolumns: An array of the dataset that will provide columns to the new DataFrame\nvalues: These are the values we wish to aggregate in each cell.\naggfunc: The function we will use to perform the aggregation\n\nSales Reports", "import pandas as pd\nimport numpy as np\n\n# The 'xlrd' module gets imported automatically, if not, install it with 'pip install xlrd'\ndf = pd.read_excel(\"Data/bev-sales.xlsx\")\ndf.head()\n\ndf.tail()\n\ndf.describe()\n\nhelp(pd.pivot_table)\n\ndf.head()\n\npd.pivot_table(df,index=[\"Sales Exec\"],values=[\"Revenue\"],aggfunc=\"sum\")\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\npd.pivot_table(df, index=[\"Sales Exec\"],values=[\"Revenue\"],aggfunc=\"sum\").plot(kind=\"bar\")\n\npd.pivot_table(df,index=[\"Sales Exec\"],values=[\"Revenue\"],aggfunc=\"mean\")\n\npd.pivot_table(df, index=[\"Sales Exec\", \"Item\"], values=[\"Revenue\"], aggfunc=\"sum\")\n\npd.pivot_table(df,index=[\"Sales Exec\"],values=[\"Revenue\"],aggfunc=[np.sum])\n\npd.pivot_table(df,index=[\"Sales Exec\"],values=[\"Units sold\", \"Revenue\"],aggfunc=[np.sum])\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn')\nplt.rcParams[\"figure.figsize\"] = (15,7)\n\n\npd.pivot_table(df,index=[\"Sales Exec\", \"Item\"],values=[\"Revenue\"],aggfunc=[np.sum]).plot(kind=\"bar\")\nplt.title('January Sales Report')\n\npd.pivot_table(df,index=[\"Sales Exec\", \"Item\"],values=[\"Units sold\", \"Revenue\"],\n columns=[\"Price per Unit\"], aggfunc=\"sum\", margins = True)", "Tips", "df = pd.read_csv(\"Data/tips.csv\")\ndf.head()\n\ndf[\"tip_pc\"] = df[\"tip\"] / df[\"total_bill\"]\n\ndf.head()\n\npd.pivot_table(df,index=[\"sex\"], values = [\"tip_pc\"], aggfunc=\"mean\")\n\npd.pivot_table(df, index = [\"smoker\", \"sex\"], values = [\"tip_pc\"], aggfunc = \"mean\")\n\npd.pivot_table(df,index=[\"sex\"], values = [\"total_bill\",\"tip\"], aggfunc=\"sum\")", "Bada Bing!", "import pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\ndf = pd.read_excel(\"Data/Sopranos/sopranos-killings.xlsx\")\ndf.head()\n\npd.pivot_table(df,index=[\"Cause of Death\"],values = [\"Season\"], aggfunc=\"first\")\n\npd.pivot_table(df,index=[\"Cause of Death\"],values = [\"Season\"], aggfunc=\"count\", margins=True)\n\nwhacked = pd.pivot_table(df,index=[\"Cause of Death\"],values = [\"Season\"], aggfunc=\"count\")\nwhacked\n\nplt.style.available\n\nplt.rcParams.update(plt.rcParamsDefault)\n%matplotlib inline\nplt.style.use('seaborn-deep')\nimport matplotlib.pyplot as plt\n\nplt.rcParams[\"figure.figsize\"] = (15,7)\nwhacked.plot(kind = \"bar\", legend=None)\nplt.title('How People Died on The Sopranos')\n\nwith plt.style.context('ggplot', after_reset=True):\n %matplotlib inline\n import matplotlib.pyplot as plt\n plt.rcParams[\"figure.figsize\"] = (15,7)\n whacked.plot(kind = \"bar\", legend=None)\n plt.title('How People Died on The Sopranos')\n\nkiller = pd.pivot_table(df,index=[\"Killer\"],values = [\"Season\"], aggfunc=\"count\")\n\nkiller = killer.sort_values(by=[\"Season\"], ascending = False)\nkiller\n\nplt.rcParams.update(plt.rcParamsDefault)\nplt.style.use('ggplot')\nplt.rcParams[\"figure.figsize\"] = (15,7)\n\nkiller[:10].plot(kind = \"bar\", 
legend=None)\nplt.title('Top 10 Killers')\n", "Basic Statistical Operations/Explorations", "import pandas as pd\nimport numpy as np\n\ndf = pd.read_csv(\"data/cafe_sales2015.csv\")\ndf[\"Date\"] = pd.to_datetime(df[\"Date\"])\ndf.set_index([\"Date\"], inplace = True)\ndf.interpolate(method = \"linear\", inplace = True)\n\ndf.head()\n\ndf.tail()\n\ndf.describe()\n\nprint(\"Mean\\n\", df.mean())\nprint(\"\\n\\nMedian\\n\", df.median())\nprint(\"\\n\\nMode\\n\", df.mode())\n\nprint(\"The Maximum value is:\\n\",df.max())\nprint(\"\\n\\nThe Minimum value is:\\n\",df.min())\nprint(\"\\n\\nKurtosis:\\n\",df.kurtosis())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bbartoldson/examples
decision trees/Decision Trees.ipynb
mit
[ "Implementing C4.5 and ID3 Decision Tree Algorithms with NumPy\nWe will apply these trees to the the UCI car evaluation dataset $^1$\n$^1$ https://archive.ics.uci.edu/ml/datasets/car+evaluation\nDisclaimer: I have neither verified nor validated this code, so use it at your own risk! Also, it was not designed to accommodate continuous attributes (data that must be split by creating buckets such as \"x<5.236\").\nStart by importing NumPy, and then explore the data", "import numpy as np\n#you only need matplotlib if you want to create some plots of the data\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndata_path = \"/home/brb/repos/examples/decision trees/UCI_cars\"\ndata = np.genfromtxt(data_path, delimiter=\",\", dtype=str)\nlabels = [\"buying\", \"maint\", \"doors\", \"persons\", \"lug_boot\", \"safety\", \"class\"]\n\nprint(\"records: {}\".format(len(data)))\nprint(\"example record: {}\".format(data[0]))\nprint(\"\\ncolumns:\\n\")\ncolumns = []\nfor col in range(len(data[0])):\n print(\"\\t\" + labels[col] + \": {}\".format(np.unique(data[:,col])))\n columns.append(np.unique(data[:,col]))", "The backbone of the decision tree algorithms is a criterion (e.g. entropy, Gini, error) with which we can choose the best (in a greedy sense) attribute to add to the tree. ID3 and C4.5 use information gain (entropy) and normalized information gain, respectively.", "def weighted_entropy(data, col_num):\n entropies = []\n n_s = []\n entropy_of_attribute = entropy(data[:,col_num])\n for value in columns[col_num]:\n candidate_child = data[data[:,col_num] == value]\n n_s.append(len(candidate_child))\n entropies.append(entropy(candidate_child[:,6]))\n n_s = np.array(n_s)\n n_s = n_s / np.sum(n_s)\n weighted_entropy = n_s.dot(entropies)\n return weighted_entropy, entropy_of_attribute\n \ndef entropy(data):\n classes = np.unique(data)\n n = len(data)\n n_s = []\n for class_ in classes:\n n_s.append(len(data[data==class_]))\n n_s = np.array(n_s)\n n_s = n_s/n\n n_s = n_s * np.log2(n_s)\n return max(0,-np.sum(n_s))", "To store our tree, we wll use dictionaries. Each node of the tree is a Python dict.", "def build_node(data, entropy, label, depth, class_=\"TBD\", parent=None):\n new_node = dict()\n new_node['data'] = data\n new_node['entropy'] = entropy\n new_node['label'] = label\n new_node['depth'] = depth\n new_node['class'] = class_\n new_node['parent'] = parent\n new_node['children'] = []\n return new_node\n\nroot = build_node(data, entropy(data[:,6]), \"all data\", 0)\nclasses = np.unique(root['data'][:,6])\nprint(classes)", "Functions that helps us build our tree and classify its leaves. 
find_best_split acts on a node, and returns the attribute that leads to the best (possibly normalized) information gain.", "def find_best_split(node, c45 = False):\n data = node['data']\n entropy = node['entropy']\n gains = []\n for col_num in range(len(columns) - 1):\n new_entropy, entropy_of_attribute = weighted_entropy(data, col_num)\n if c45:\n if entropy_of_attribute==0:\n gains.append(0)\n else:\n gains.append((entropy - new_entropy) / (entropy_of_attribute))\n else:\n gains.append(entropy - new_entropy)\n if np.max(gains) > 10**-3 :\n best_attribute = np.argmax(gains)\n return best_attribute\n else:\n return -1\n \ndef classify(node_data):\n data = node_data[:, 6]\n n_s = []\n for class_ in classes:\n n_s.append(len(data[data==class_]))\n return columns[-1][np.argmax(n_s)]\n \nlabels[find_best_split(root)], classify(root['data'])", "This function is recursive and will construct a decision tree out of a root node that contains your training data.", "def build_tree(node, c45 = False, max_depth = 999, noisy=False):\n next_split_attribute = find_best_split(node, c45)\n if next_split_attribute == -1 or node['depth'] == max_depth:\n node['class'] = classify(node['data'])\n #this if statement just handles some printing of the tree (rudimentary visualization)\n if noisy:\n label = []\n label.append(node['label'])\n temp_parent = node\n while temp_parent['parent']:\n temp_parent = temp_parent['parent']\n label.append(temp_parent['label'])\n depth = node['depth']\n for i, layer_label in enumerate(reversed(label)):\n for _ in range(i):\n print(\"\\t\", end=\"\")\n if i==depth:\n print(\"{} -> class {}\".format(layer_label, node['class']))\n else:\n print(\"{}\".format(layer_label))\n \n else:\n for value in columns[next_split_attribute]:\n data = node['data'][ node['data'][:, next_split_attribute] == value ]\n entropy_ = entropy(data[:, 6])\n new_node = build_node(data, entropy_, \"{} == {}\".format(\n labels[next_split_attribute],value),\n node['depth'] + 1, parent=node)\n build_tree(new_node, c45, max_depth, noisy)\n node['children'].append(new_node)", "Lastly, before building the tree, we need a function to check the tree's accuracy.", "def correct(decision_tree):\n if not decision_tree['children']:\n return np.sum(classify(decision_tree['data'])==decision_tree['data'][:,6])\n else:\n n_correct = 0\n for child in decision_tree['children']:\n n_correct += correct(child)\n return n_correct\n\ncorrect(root)/1728", "Let's make a tree!\nBut first, a quick look at the class distribution after splitting on safety, an important attribute according to our algorithm", "for safety in columns[5]:\n plt.hist(data[data[:,5]==safety, 6])\n plt.title(safety + \" safety\")\n plt.show()\n\nroot = build_node(data, entropy(data[:,6]), \"all data\", 0)\nbuild_tree(root, max_depth=1, noisy=True)\nprint(\"\\nTree Accuracy: {}\".format(correct(root)/1728))\n\nroot = build_node(data, entropy(data[:,6]), \"all data\", 0)\nbuild_tree(root, max_depth=2, noisy=True)\nprint(\"\\nTree Accuracy: {}\".format(correct(root)/1728))\n\nfor persons in columns[3]:\n indices1 = data[:,5]==\"high\"\n indices2 = data[:,3]==persons\n indices = np.alltrue([indices1,indices2], axis=0)\n plt.hist(data[indices, 6])\n plt.title(\"high safety and {} persons\".format(persons))\n plt.show()", "On this dataset, C4.5 and ID3 get similar accuracies...", "print(\"Training Accuracy Comparison\")\nprint(\"---------\")\nprint(\" ID3 C4.5\")\nfor depth in range(7):\n root = build_node(data, entropy(data[:,6]), \"all data\", 0)\n build_tree(root, 
max_depth=depth, c45=False)\n id3=correct(root)/1728\n root = build_node(data, entropy(data[:,6]), \"all data\", 0)\n build_tree(root, max_depth=depth, c45=True)\n c45=correct(root)/1728\n print('{:.3f} '.format(round(id3,3)), ' {:.3f}'.format(round(c45,3)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ye-kyaw-thu/sylbreak
jupyter-notebook/using-sylbreak-in-jupyter-notebook.ipynb
apache-2.0
[ "Using Sylbreak in Jupyter Notebook\nဒီ Jupyter Notebook က GitHub မှာ ကျွန်တော်တင်ပေးထားတဲ့ Sylbreak Python ပရိုဂရမ် https://github.com/ye-kyaw-thu/sylbreak/blob/master/python/sylbreak.py ကို Jupyter Notebook, Python 3 Kernel မှာ function တစ်ခုအနေနဲ့ဆောက်ပြီး သုံးတဲ့ပုံစံကို နမူနာအနေနဲ့ ပြသထားတာ ဖြစ်ပါတယ်။ \n\"sylbreak.py\" ကို သုံးစဉ်က input လုပ်တဲ့ စာကြောင်းမှာပါတဲ့ \"space\" ကို အရင်ဖြတ်ပြီးတော့ (e.g. cat input.txt | sed 's/ //g' | python sylbreak.py ...) သုံးတဲ့ပုံစံနဲ့ သွားခဲ့ပေမဲ့၊ ဒီ sylbreak function မှာတော့ function အတွင်းက ဖြတ်ပေးတဲ့ပုံစံနဲ့ ရေးပြထားပါတယ်။\nအဲဒီအတွက် \"line = re.sub(ur\"\\s+\",\"\", line)\" ဆိုတဲ့ statement တစ်ကြောင်းကို ဖြည့်ရေးထားပါတယ်။", "# Regular Expression Python Library ကို သုံးလို့ရအောင် import လုပ်တာ\nimport re\n\n# စာလုံးတွေကို အုပ်စုဖွဲ့တာ (သို့) variable declaration လုပ်တာ\n# တကယ်လို့ syllable break လုပ်တဲ့ အခါမှာ မြန်မာစာလုံးချည်းပဲ သပ်သပ် လုပ်ချင်တာဆိုရင် enChar က မလိုပါဘူး\nmyConsonant = \"က-အ\"\nenChar = \"a-zA-Z0-9\"\notherChar = \"ဣဤဥဦဧဩဪဿ၌၍၏၀-၉၊။!-/:-@[-`{-~\\s\"\nssSymbol = '္'\nngaThat = 'င်'\naThat = '်'\n\n# Regular expression pattern for Myanmar syllable breaking\n# *** a consonant not after a subscript symbol AND \n# a consonant is not followed by a-That character or a subscript symbol\n# မြန်မာစာကို syllable segmentation လုပ်ဖို့အတွက်က ဒီ RE pattern တစ်ခုတည်းနဲ့ အဆင်ပြေတယ်။\nBreakPattern = re.compile(r\"((?<!\" + ssSymbol + r\")[\"+ myConsonant + r\"](?![\" + aThat + ssSymbol + r\"])\" + r\"|[\" + enChar + otherChar + r\"])\", re.UNICODE)\n\n# sylbreak function ဆောက်တဲ့ အပိုင်း\ndef sylbreak(line):\n line = re.sub(r\"\\s+\",\"\", line)\n line = BreakPattern.sub(r\" \" + r\"\\1\", line)\n return line\n\n# sylbreak function ကိုခေါ်သုံးကြည့်ရအောင်\n\nsylbreak(\"မြန်မာစာသည် တို့စာ။ တို့စာကို သုတေသန လုပ်ပါ။\")", "စိတ်ထဲမှာ ပေါ်လာတာကို ကောက်ရေးပြီးတော့ syllable segmentation လုပ်ခိုင်းလိုက်တာပါ။ :)\nနောက်ထပ် ဥပမာအနေနဲ့ Wikipedia Myanmar မှာရေးထားတဲ့ အာခီမီးဒီးစ် ရဲ့ အတ္ထုပ္ပတ္တိအကျဉ်း ထဲမှာရေးထားတဲ့\nစာကြောင်းတွေကို sylbreak နဲ့ ဖြတ်ကြည့်ရအောင်။", "sylbreak(\"\"\"အာခီမီးဒီးစ်ကို ဘီစီ ၂၈၇ ခန့်က ရှေးဟောင်း မဂ္ဂနာဂရေစီယာပြည်လက်အောက်ခံ စစ္စလီပြည် ဆိုင်ရာကျူးစ် မြို့ တွင် မွေးဖွားခဲ့သည်။ ဘိုင်ဇန်တိုင်းဂရိခေတ် က သမိုင်းပညာရှင် ဂျွန်ဇီဇီ ၏ မှတ်တမ်းအရ အာခီမီးဒီးစ်သည် အသက် ၇၅ နှစ်အထိ နေထိုင်သွားရကြောင်း သိရသည်။ အာခီမီးဒီးစ်သည် သူ၏ တီထွင်မှု တစ်ခုဖြစ်သော သဲနာရီ နှင့် ပတ်သက်၍ ရေးသားထားသော Sand Reckoners အမည်ရှိ စာတမ်းများတွင် သူ၏ ဖခင်အမည်ကို နက္ခတ္တဗေဒပညာရှင် ဖီးဒီးယပ်စ် ဟု ဖော်ပြထားသည်။ သမိုင်းပညာရှင် ပလူးတပ် ရေးသားသော ခေတ်ပြိုင်ပုဂ္ဂိုလ်ထူးကြီးများ စာအုပ်တွင် အာခီမီးဒီးစ်သည် ဆိုင်ရာကျူးစ်ဘုရင် ဒုတိယမြောက်ဟီရိုးနှင့် ဆွေမျိုး တော်စပ်ကြောင်း ဖော်ပြထားသည်။ သူငယ်ရွယ်စဉ်က အီဂျစ်ပြည် အလက်ဇန္ဒြီးယားမြို့ တွင် ပညာဆည်းပူး ခဲ့သည်ဟု ယူဆရသည်။ ဘီစီ ၂၁၂ တွင် အာခီမီးဒီးစ် သေဆုံးခဲ့သည်။ ရောမစစ်ဗိုလ်ချုပ် မားကပ်စ် ကလောဒီးယပ်စ် မာဆဲလပ်စ် က နှစ်နှစ်ကြာဝိုင်းရံ ပိတ်ဆို့ပြီးနောက် ဆိုင်ရာကျူးစ် မြို့ကို သိမ်းပိုက်လိုက်သည်။ ထိုအချိန်တွင် အာခီမီးဒီးသည် ဂျော်မက်ထရီ ပုစ္ဆာတစ်ပုဒ်ကို စဉ်းစား အဖြေရှာနေခိုက် ဖြစ်သည်။ ရောမစစ်သားက သူ့အား ဖမ်းဆီးလိုက်ပြီး ဗိုလ်ချုပ် မာဆဲလပ်စ် နှင့် တွေ့ဆုံရန် ပြောဆိုရာ သူက သူ၏ပုစ္ဆာစဉ်းစားနေဆဲဖြစ်၍ မတွေ့လိုကြောင်း ငြင်းဆိုသည်တွင် ရောမစစ်သားက ဒေါသထွက်ကာ ဓားဖြင့် ထိုးသတ်လိုက်သည်ဟု ပလူးတပ် က ရေးသားခဲ့သည်။ ဗိုလ်ချုပ် မာဆဲလပ်စ်သည် အာခီမီးဒီးစ် သေဆုံးသွားသည့် အတွက် များစွာ နှမြောတသဖြစ်ရသည်။ အာခီမီးဒီးစ်အား ပညာရှင် တစ်ယောက်အဖြစ် သိရှိထားသောကြောင့် မသတ်ရန် ကြိုတင် အမိန့်ပေးထားခဲ့သည်။ “ငါ့စက်ဝိုင်းတွေပေါ် တက်မနင်းပါနဲ့”ဟူသော စကားကို အာခီမီးဒီးစ် နောက်ဆုံး ပြောဆိုခဲ့သည်ဟု အချို့က ယူဆကြသော်လည်း သမိုင်းပညာရှင် ပလူးတပ် ရေးသော စာအုပ်တွင်မူ မပါရှိပေ။ အာခီမီးဒီးစ်၏ 
ဂူဗိမ္မာန်တွင် ထုလုံးရှည်မှန်တစ်ခုအတွင်း စက်လုံးတစ်ခုကို ထည့်သွင်းထားသည့် ရုပ်တုတစ်ခုကို စိုက်ထူထားသည်။ အာခီမီးဒီးစ် သေဆုံးပြီး နှစ်ပေါင်း ၁၃၇နှစ်အကြာ ဘီစီ ၇၅တွင် ရောမခေတ် နိုင်ငံရေးသုခမိန် ဆီဇာရိုက အာခီမီးဒီးစ် အကြောင်းကြားသိရ၍ သူ၏ အုတ်ဂူအား ရှာဖွေခဲ့သည်။ ခြုံနွယ်ပိတ်ပေါင်းများ ဖုံးအုပ်နေသော အာခီမီးဒီးစ်၏ အုတ်ဂူကို ဆိုင်ရာကျူးစ်မြို့အနီးတွင် ရှာဖွေ တွေ့ရှိခဲ့ပြီး သန့်ရှင်းရေးပြုလုပ်ကာ အုတ်ဂူပေါ်မှ စာသားများကို ဖတ်ရှုသွားသည်။ ဆိုင်ရာကျူးစ်စစ်ပွဲ အပြီး နှစ်ပေါင်း ၇၀ အကြာတွင် ပိုလီးဘီးယပ်စ် ရေးသားသော ဆိုင်ရာကျူးစ်စစ်ပွဲ အကြောင်း စာအုပ်တွင် အာခီမီးဒီးစ်နှင့် ပတ်သက်သော အကြောင်းများ ပါရှိ၍ သမိုင်းပညာရှင် ပလူးတပ် က ထပ်မံ ရေးသားနိုင်ခဲ့ခြင်း ဖြစ်ပါသည်။ ဆိုင်ရာကျူးစ်မြို့ ကာကွယ်ရေးအတွက် စစ်ပွဲဝင် စက်ကိရိယာ လက်နက်ဆန်းများကိုလည်း အာခီမီးဒီးစ်က တီထွင်ပေးခဲ့ကြောင်း အဆိုပါ စာအုပ်တွင် ဖော်ပြပါရှိပါသည်။\n\n\"\"\")", "Typing order\nမြန်မာစာနဲ့ ပတ်သက်တဲ့ NLP (Natural Language Processing) အလုပ် တစ်ခုခု လုပ်ဖို့အတွက် syllable segmentation လုပ်ကြမယ်ဆိုရင် တကယ်တမ်းက မလုပ်ခင်မှာ၊ မြန်မာစာ စာကြောင်းတွေရဲ့ typing order အပါအဝင် တခြား ဖြစ်တတ်တဲ့ အမှားတွေကိုလည်း cleaning လုပ်ရပါတယ်။ အဲဒီလိုမလုပ်ရင် sylbreak က ကျွန်တော် အကြမ်းမျဉ်းသတ်မှတ်ထားတဲ့ မြန်မာစာ syllable unit တွေအဖြစ် မှန်မှန်ကန်ကန် ဖြတ်ပေးနိုင်မှာ မဟုတ်ပါဘူး။ မြန်မာစာ စာကြောင်းတွေထဲမှာ ရှိတတ်တဲ့အမှား တွေက တကယ့်ကို အများကြီးပါ။ တချို့ အမှားတွေက မျက်လုံးနဲ့ကြည့်ယုံနဲ့ မခွဲခြားနိုင်တာမျိုးတွေလည်း ရှိပါတယ်။ ဒီနေရာမှာတော့ အမှားအမျိုးအစားတွေထဲက တစ်မျိုးဖြစ်တဲ့ typing order အမှား တစ်မျိုး၊ နှစ်မျိုးကို ဥပမာအနေနဲ့ရှင်းပြရင်း၊ အဲဒီလိုအခြေအနေမျိုးမှာ ဖြစ်တတ်တဲ့ sylbreak က ထွက်လာမယ့် အမှား output တွေကိုလည်း လေ့လာကြည့်ကြရအောင်။ \nအောက်မှာ သုံးပြထားတဲ့ \"ခန့်\" က \"ခ န ့ ်\" (ခခွေး နငယ် အောက်မြစ် အသတ်) ဆိုတဲ့ အစီအစဉ် အမှားနဲ့ ရိုက်ထားတာဖြစ်ပါတယ်။ အဲဒါကြောင့် sylbreak က ထွက်လာတဲ့အခါမှာ \"ခခွေး\" နဲ့ \"နငယ် အသတ် အောက်မြစ်\" က ကွဲနေတာဖြစ်ပါတယ်။", "sylbreak(\"ဘီစီ ၂၈၇ ခန့်\")", "တကယ်တန်း မှန်ကန်တဲ့ \"ခန့်\" ရဲ့ typing order က \"ခ န ် ့\" (ခခွေး နငယ် အသတ် အောက်မြစ်) ပါ။\nအမြင်အားဖြင့်ကတော့ မခွဲနိုင်ပေမဲ့၊ မှန်ကန်တဲ့ typing order နဲ့ ရိုက်ထားရင်တော့ \"ခန့်\" ဆိုပြီး syllable တစ်ခုအနေနဲ့ ရိုက်ထုတ်ပြပေးပါလိမ့်မယ်။", "sylbreak(\"ဘီစီ ၂၈၇ ခန့်\")", "နောက်ထပ် typing order အမှားတစ်ခုကို ကြည့်ကြရအောင်။", "sylbreak(\"ထည့်သွင်းထားသည့်ရုပ်တု\")", "\"ညကြီး အောက်မြစ် အသတ်\" ဆိုတဲ့ မှားနေတဲ့ အစီအစဉ်ကို \"ညကြီး အသတ် အောက်မြစ်\" ဆိုပြီး\nပြောင်းရိုက်ပြီးတော့ sylbreak လုပ်ကြည့်ရင်တော့ အောက်ပါအတိုင်း \"ထ\" နဲ့ \"ည့်\", \"သ\" နဲ့ \"ည့်\" တွေက ကွဲမနေတော့ပဲ မှန်မှန်ကန်ကန်ဖြတ်ပေးပါလိမ့်မယ်။", "sylbreak(\"ထည့်သွင်းထားသည့်ရုပ်တု\")", "တချို့အမှားတွေကတော့ ဂရုစိုက်ရင် မျက်စိနဲ့ မြင်နိုင်ပါတယ်။\nဥပမာ \"ဥ\" (အက္ခရာ ဥ) နဲ့ \"ဉ\" (ညကလေး) ကိုမှားရိုက်တဲ့ကိစ္စပါ။\nသို့သော် ကျွန်တော်မြန်မာစာကြောင်းတွေအများကြီးကို ကိုင်တွယ်အလုပ်လုပ်တဲ့အခါတိုင်းမှာ ဒီလိုအမှားက အမြဲတမ်းကို ပါတတ်ပါတယ်။\nဖောင့် (font) မှာလည်း မှန်မှန်ကန်ကန်ခွဲထားမယ်ဆိုရင်၊ အမှန်က ညကလေးဆိုရင် အမြီးက ရှည်ပါတယ်။ \nစာရိုက်သူအများစုက သတိမပြုမိတဲ့ အကြောင်းအရင်း တစ်ခုကလည်း တချို့ text editor တွေမှာ \"အက္ခရာ ဥ\" နှင့် ညကလေး \"ဉ\" ကို ကွဲပြားအောင် မပြသပေးနိုင်လို့ပါ။", "sylbreak(\"ကာရီသည်ဒီနှစ်၏ပါရမီရှင်တစ်ဉီးနှင့်ထိုက်တန်သောအမျိုးသမီးအဆိုရှင်ဖြစ်သည်။\")", "ဝီကီပီးဒီးယားက မှားနေတဲ့ \"ညကလေး\" ကို \"အက္ခရာ ဥ\" နဲ့ပြန်ပြင်ရိုက်ထားတဲ့ စာကြောင်းနဲ့ နောက်တစ်ခေါက် syllable ဖြတ်ထားတာက အောက်ပါအတိုင်းဖြစ်ပါတယ်။ \"ညကလေး\" နဲ့ \"အက္ခရာ ဥ\" အမှားကိစ္စမှာတော့ syllable segmentation ဖြတ်တဲ့အပိုင်းမှာတော့ ထူးထူးခြားခြား အပြောင်းအလဲ မရှိပါဘူး။", "sylbreak(\"ကာရီသည်ဒီနှစ်၏ပါရမီရှင်တစ်ဦးနှင့်ထိုက်တန်သောအမျိုးသမီးအဆိုရှင်ဖြစ်သည်။\")", "Note\n\n\nsylbreak မှာ သုံးထားတဲ့ မြန်မာစာ syllable unit (အဖြတ်အတောက်တွေ) က ကျွန်တော်လုပ်ခဲ့တဲ့ NLP (Natural Language Processing) 
သုတေသနအလုပ်တွေဖြစ်တဲ့ Machine Translation, Automatic Speech Recognition, Text to Speech, POS tagging စတဲ့ အလုပ်တွေအတွက် လုပ်ရကိုင်ရ အဆင်အပြေဆုံး ပုံစံအတိုင်း တကယ့်ကို simple unit အနေနဲ့ဖြတ်ထားတာ ဖြစ်ပါတယ်။ အဲဒါကြောင့် \"ဘီစီ ၂၈၇\" ကို \"ဘီ စီ ၂ ၈ ၇\"၊ \"နက္ခတ္တ ဗေဒ\" ကို \"နက္ခတ္တ ဗေ ဒ\"၊ နောက်ပြီးတော့ မြန်မာစာတွေနဲ့ အတူတူရောပါနေတဲ့ \"Sand Reckoners\" ကို \"S a n d R e c k o n e r s\" ဆိုပြီး ဖြတ်ထားပါတယ်။ ဆိုလိုတာက ပါဌ်ဆင့်တွေကို ဖြေတဲ့ အလုပ် (ဥပမာ နက္ခတ္တ ကို နက် ခတ် တ)၊ ဂဏန်းစာလုံးတွေကို တွဲတဲ့အလုပ် (ဥပမာ ၂၈၃)၊ ရောပါနေတဲ့ အင်္ဂလိပ်စာလုံးတွေကို ဖယ်ပစ်တာ၊ နဂိုအတိုင်းပဲ တွဲထားတာ (ဥပမာ Sand Reckoners) မျိုးတွေကို တမင်တကာ လုပ်မထားတာပါ။ အကြောင်းအရင်းကတော့ အမျိုးမျိုးရှိပါတယ်။ ဥပမာ ပါဌ်ဆင့်တွေကို ဖြေပစ်လိုက်ရင် လိုအပ်တဲ့အခါမှာ နဂိုပုံစံအတိုင်းပြန်ရအောင် ပြန်ဆင့်ရပါတယ်။ အဲဒီအလုပ်က လွယ်မလိုလိုနဲ့ လက်တွေ့မှာတော့၊ ပါဌ်ဆင့် ပြန်ဆင့်ပေးရတဲ့အလုပ်အတွက် processing time နဲ့ အဲဒီကနေထွက်လာမဲ့ error တွေကို ရှာဖွေရတဲ့အလုပ်၊ ပြန်ပြင်ပေးရတဲ့အလုပ်တွေကို ရှောင်ချင်လို့ ဖြစ်ပါတယ်။ sylbreak က ကျွန်တော်တို့ မြန်မာစာကို unicode နဲ့သာ မှန်မှန်ကန်ကန်ရေးထားရင်၊ \"Regular Expression တစ်ကြောင်းထဲနဲ့ လွယ်လွယ်ကူကူ ဖြတ်လို့ရကြောင်း\" နောက်ပြီးတော့ အဲဒါက \"တကယ်လည်း မြန်မာစာ NLP သုတေသနအလုပ်တွေအတွက် အသုံးဝင်ကြောင်း\"၊ ဒါ့အပြင် \"အခြေခံကျတဲ့ မြန်မာဝဏ္ဏ (syllable unit) အနေနဲ့လည်း ရပါလိမ့်မယ်\" ဆိုတဲ့ message ကိုပေးထားတာပဲ ဖြစ်ပါတယ်။ ကျွန်တော်ရဲ့ syllable unit တွေက မြန်မာစာအနေနဲ့ကြည့်ရင် ပြည့်စုံမှန်ကန်တယ်လို့ မဆိုလိုပါဘူး။ ကိုယ်လုပ်မဲ့ အလုပ်၊ develop လုပ်နေတဲ့ application ပေါ်ကို မူတည်ပြီးတော့ လက်ရှိ ကျွန်တော်ပြင်ဆင်ပေးထားတဲ့ Regular Expression ကို ကြိုက်သလို ဖြည့်စွက်တာ၊ ပြင်သုံးတာကိုလုပ်နိုင်ပါတယ်။ \n\n\nPython 3.4 ကနေစပြီးတော့ \"ur\" (Unicode + Raw text) ဆိုပြီးတွဲရေးတာကို support မလုပ်ပါဘူး။\nသို့သော် \"u\" တစ်လုံးတည်း \"r\" တစ်လုံးတည်း သုံးတာကိုတော့ ခွင့်ပြုပါတယ်။ \n\n\nhttps://stackoverflow.com/questions/26063899/python-version-3-4-does-not-support-a-ur-prefix\n\nJupyter Notebook နဲ့ ပတ်သက်တဲ့ installation လုပ်ပုံလုပ်နည်း၊ အသုံးပြုပုံနဲ့ ပတ်သက်ပြီး မြန်မာလိုလေ့လာချင်တဲ့ သူများအတွက် ကျွန်တော့်ရဲ့ Tutorial မှာ လေ့လာနိုင်ပါတယ်။" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
liumengjun/cn-deep-learning
tutorials/weight-initialization/weight_initialization.ipynb
mit
[ "Weight Initialization\nIn this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to the best solution quicker. \nTesting Weights\nDataset\nTo see how different weights perform, we'll test on the same dataset and neural network. Let's go over the dataset and neural network.\nWe'll be using the MNIST dataset to demonstrate the different initial weights. As a reminder, the MNIST dataset contains images of handwritten numbers, 0-9, with normalized input (0.0 - 1.0). Run the cell below to download and load the MNIST dataset.", "%matplotlib inline\n\nimport tensorflow as tf\nimport helper\n\nfrom tensorflow.examples.tutorials.mnist import input_data\n\nprint('Getting MNIST Dataset...')\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)\nprint('Data Extracted.')", "Neural Network\n<img style=\"float: left\" src=\"images/neural_network.png\"/>\nFor the neural network, we'll test on a 3 layer neural network with ReLU activations and an Adam optimizer. The lessons you learn apply to other neural networks, including different activations and optimizers.", "# Save the shapes of weights for each layer\nprint(mnist.train.images.shape[1])\nlayer_1_weight_shape = (mnist.train.images.shape[1], 256)\nlayer_2_weight_shape = (256, 128)\nlayer_3_weight_shape = (128, mnist.train.labels.shape[1])", "Initialize Weights\nLet's start looking at some initial weights.\nAll Zeros or Ones\nIf you follow the principle of Occam's razor, you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.\nWith every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust.\nLet's compare the loss with all ones and all zero weights using helper.compare_init_weights. This function will run two different initial weights on the neural network above for 2 epochs. It will plot the loss for the first 100 batches and print out stats after the 2 epochs (~860 batches). We plot the first 100 batches to better judge which weights performed better at the start.\nRun the cell below to see the difference between weights of all zeros against all ones.", "all_zero_weights = [\n tf.Variable(tf.zeros(layer_1_weight_shape)),\n tf.Variable(tf.zeros(layer_2_weight_shape)),\n tf.Variable(tf.zeros(layer_3_weight_shape))\n]\n\nall_one_weights = [\n tf.Variable(tf.ones(layer_1_weight_shape)),\n tf.Variable(tf.ones(layer_2_weight_shape)),\n tf.Variable(tf.ones(layer_3_weight_shape))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'All Zeros vs All Ones',\n [\n (all_zero_weights, 'All Zeros'),\n (all_one_weights, 'All Ones')])", "As you can see the accuracy is close to guessing for both zeros and ones, around 10%.\nThe neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run.\nA good solution for getting these random weights is to sample from a uniform distribution.\nUniform Distribution\nA [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous%29) has the equal probability of picking any number from a set of numbers. We'll be picking from a continous distribution, so the chance of picking the same number is low. 
We'll use TensorFlow's tf.random_uniform function to pick random numbers from a uniform distribution.\n\ntf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)\nOutputs random values from a uniform distribution.\nThe generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.\n\nshape: A 1-D integer Tensor or Python array. The shape of the output tensor.\nminval: A 0-D Tensor or Python value of type dtype. The lower bound on the range of random values to generate. Defaults to 0.\nmaxval: A 0-D Tensor or Python value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point.\ndtype: The type of the output: float32, float64, int32, or int64.\nseed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.\nname: A name for the operation (optional).\n\n\nWe can visualize the uniform distribution by using a histogram. Let's map the values from tf.random_uniform([1000], -3, 3) to a histogram using the helper.hist_dist function. This will be 1000 random float values from -3 to 3, excluding the value 3.", "helper.hist_dist('Random Uniform (minval=-3, maxval=3)', tf.random_uniform([1000], -3, 3))", "The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.\nNow that you understand the tf.random_uniform function, let's apply it to some initial weights.\nBaseline\nLet's see how well the neural network trains using the default values for tf.random_uniform, where minval=0.0 and maxval=1.0.", "# Default for tf.random_uniform is minval=0 and maxval=1\nbasline_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'Baseline',\n [(basline_weights, 'tf.random_uniform [0, 1)')])", "The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction.\nGeneral rule for setting weights\nThe general rule for setting the weights in a neural network is to be close to zero without being too small. A good pracitce is to start your weights in the range of $[-y, y]$ where\n$y=1/\\sqrt{n}$ ($n$ is the number of inputs to a given neuron).\nLet's see if this holds true, let's first center our range over zero. This will give us the range [-1, 1).", "uniform_neg1to1_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -1, 1)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -1, 1)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -1, 1))\n]\n\nhelper.compare_init_weights(\n mnist,\n '[0, 1) vs [-1, 1)',\n [\n (basline_weights, 'tf.random_uniform [0, 1)'),\n (uniform_neg1to1_weights, 'tf.random_uniform [-1, 1)')])", "We're going in the right direction, the accuracy and loss is better with [-1, 1). We still want smaller weights. How far can we go before it's too small?\nToo small\nLet's compare [-0.1, 0.1), [-0.01, 0.01), and [-0.001, 0.001) to see how small is too small. 
We'll also set plot_n_batches=None to show all the batches in the plot.", "uniform_neg01to01_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.1, 0.1)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.1, 0.1)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.1, 0.1))\n]\n\nuniform_neg001to001_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.01, 0.01)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.01, 0.01)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.01, 0.01))\n]\n\nuniform_neg0001to0001_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.001, 0.001)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.001, 0.001)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.001, 0.001))\n]\n\nhelper.compare_init_weights(\n mnist,\n '[-1, 1) vs [-0.1, 0.1) vs [-0.01, 0.01) vs [-0.001, 0.001)',\n [\n (uniform_neg1to1_weights, '[-1, 1)'),\n (uniform_neg01to01_weights, '[-0.1, 0.1)'),\n (uniform_neg001to001_weights, '[-0.01, 0.01)'),\n (uniform_neg0001to0001_weights, '[-0.001, 0.001)')],\n plot_n_batches=None)", "Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\\sqrt{n}$.", "import numpy as np\n\ngeneral_rule_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -1/np.sqrt(layer_1_weight_shape[0]), 1/np.sqrt(layer_1_weight_shape[0]))),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -1/np.sqrt(layer_2_weight_shape[0]), 1/np.sqrt(layer_2_weight_shape[0]))),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -1/np.sqrt(layer_3_weight_shape[0]), 1/np.sqrt(layer_3_weight_shape[0])))\n]\n\nhelper.compare_init_weights(\n mnist,\n '[-0.1, 0.1) vs General Rule',\n [\n (uniform_neg01to01_weights, '[-0.1, 0.1)'),\n (general_rule_weights, 'General Rule')],\n plot_n_batches=None)", "The range we found and $y=1/\\sqrt{n}$ are really close.\nSince the uniform distribution has the same chance to pick anything in the range, what if we used a distribution that had a higher chance of picking numbers closer to 0. Let's look at the normal distribution.\nNormal Distribution\nUnlike the uniform distribution, the normal distribution has a higher likelihood of picking number close to it's mean. To visualize it, let's plot values from TensorFlow's tf.random_normal function to a histogram.\n\ntf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)\nOutputs random values from a normal distribution.\n\nshape: A 1-D integer Tensor or Python array. The shape of the output tensor.\nmean: A 0-D Tensor or Python value of type dtype. The mean of the normal distribution.\nstddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.\ndtype: The type of the output.\nseed: A Python integer. Used to create a random seed for the distribution. 
See tf.set_random_seed for behavior.\nname: A name for the operation (optional).", "helper.hist_dist('Random Normal (mean=0.0, stddev=1.0)', tf.random_normal([1000]))", "Let's compare the normal distribution against the previous uniform distribution.", "normal_01_weights = [\n tf.Variable(tf.random_normal(layer_1_weight_shape, stddev=0.1)),\n tf.Variable(tf.random_normal(layer_2_weight_shape, stddev=0.1)),\n tf.Variable(tf.random_normal(layer_3_weight_shape, stddev=0.1))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'Uniform [-0.1, 0.1) vs Normal stddev 0.1',\n [\n (uniform_neg01to01_weights, 'Uniform [-0.1, 0.1)'),\n (normal_01_weights, 'Normal stddev 0.1')])", "The normal distribution gave a slight increasse in accuracy and loss. Let's move closer to 0 and drop picked numbers that are x number of standard deviations away. This distribution is called Truncated Normal Distribution.\nTruncated Normal Distribution\n\ntf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)\nOutputs random values from a truncated normal distribution.\nThe generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.\n\nshape: A 1-D integer Tensor or Python array. The shape of the output tensor.\nmean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.\nstddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the truncated normal distribution.\ndtype: The type of the output.\nseed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.\nname: A name for the operation (optional).", "helper.hist_dist('Truncated Normal (mean=0.0, stddev=1.0)', tf.truncated_normal([1000]))", "Again, let's compare the previous results with the previous distribution.", "trunc_normal_01_weights = [\n tf.Variable(tf.truncated_normal(layer_1_weight_shape, stddev=0.1)),\n tf.Variable(tf.truncated_normal(layer_2_weight_shape, stddev=0.1)),\n tf.Variable(tf.truncated_normal(layer_3_weight_shape, stddev=0.1))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'Normal vs Truncated Normal',\n [\n (normal_01_weights, 'Normal'),\n (trunc_normal_01_weights, 'Truncated Normal')])", "There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will pick more points on the normal distribution, increasing the likelihood it's choices are larger than 2 standard deviations.\nWe've come a long way from the first set of weights we tested. Let's see the difference between the weights we used then and now.", "helper.compare_init_weights(\n mnist,\n 'Baseline vs Truncated Normal',\n [\n (basline_weights, 'Baseline'),\n (trunc_normal_01_weights, 'Truncated Normal')])", "That's a huge difference. You can barely see the truncated normal line. However, this is not the end your learning path. We've provided more resources for initializing weights in the classroom!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Bihaqo/tf_einsum_opt
example.ipynb
mit
[ "import tf_einsum_opt\nimport tensorflow as tf\nimport numpy as np\n\nsess = tf.Session()", "Small scale example", "def func(a, b, c):\n res = tf.einsum('ijk,ja,kb->iab', a, b, c) + 1\n res = tf.einsum('iab,kb->iak', res, c)\n return res\na = tf.random_normal((10, 11, 12))\nb = tf.random_normal((11, 13))\nc = tf.random_normal((12, 14))\n# res = func(a, b, c)\norders, optimized_func = tf_einsum_opt.optimizer(func, sess, a, b, c)\n\nres1 = func(a, b, c)\n%timeit sess.run(res1)\n\nres2 = optimized_func(a, b, c)\n%timeit sess.run(res2)\n\n# Check that the results of optimized and the original function are the same.\nnp.testing.assert_allclose(*sess.run([res1, res2]), rtol=1e-5, atol=1e-5)", "Example with more savings, but slower to optimize", "def func(a, b, c, d):\n res = tf.einsum('si,sj,sk,ij->s', a, b, d, c)\n res += tf.einsum('s,si->s', res, a)\n return res\na = tf.random_normal((100, 101))\nb = tf.random_normal((100, 102))\nc = tf.random_normal((101, 102))\nd = tf.random_normal((100, 30))\norders, optimized_func = tf_einsum_opt.optimizer(func, sess, a, b, c, d)\n\nres1 = func(a, b, c, d)\n%timeit sess.run(res1)\n\nres2 = optimized_func(a, b, c, d)\n%timeit sess.run(res2)", "Look at the recommendations:", "orders", "It means \"in file <ipython-input-13-1748bfc6b08e> line 2 change the order of arguments of einsum using permutation [0, 3, 1, 2]\", i.e. from\n tf.einsum('si,sj,sk,ij->s', a, b, d, c)\nto \n tf.einsum('si,ij,sj,sk->s', a, c, b, d)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AlbanoCastroSousa/RESSPyLab
examples/Post_Processing_Example_1.ipynb
mit
[ "Post-processing Examples\nThis notebook provides some examples for using the post-processing features in RESSPyLab.\nAutomatic table generation and calculation of the consistency metric $\\xi_2$ are shown for both the original and updated Voce-Chaboche (UVC) models.\nNote that there is an example for plotting output in each of the calibration examples.", "# First load RESSPyLab and necessary packages\nimport numpy as np\nimport RESSPyLab as rpl", "Original Voce-Chaboche model\nFirst we will use RESSPyLab to generate a formatted table of parameters including the relative error metric, $\\bar{\\varphi}$.\nThe inputs to this function are: \n1. Information about the name of the data set and the load protocols used in the optimization.\n2. The file containing the history of parameters (generated from the optimization).\n3. The data used in the optimization.\nTwo tables are returned (as pandas DataFrames) and are printed to screen in LaTeX format.\nIf you want the tables in some other format it is best to operate on the DataFrames directly (e.g., use to_csv()).", "# Identify the material\nmaterial_def = {'material_id': ['Example 1'], 'load_protocols': ['1,5']}\n# Set the path to the x log file\nx_log_file_1 = './output/x_log.txt'\nx_logs_all = [x_log_file_1]\n# Load the data\ndata_files_1 = ['example_1.csv']\ndata_1 = rpl.load_data_set(data_files_1)\ndata_all = [data_1]\n\n# Make the tables\nparam_table, metric_table = rpl.summary_tables_maker_vc(material_def, x_logs_all, data_all)", "Tables can be easily generated following a standard format for several data sets by appending additional entries to the lists of values in material_def and to x_logs_all and data_all.\nNow we will generate the consistency metric, $\\xi_2$.\nThe input arguments are:\n1. The parameters of the base case.\n2. The parameters of the case that you would like to compare with.\n3. The set of data to compute this metric over.\nThe metric is returned (the raw value, NOT as a percent) directly from this function.", "# Load the base parameters, we want the last entry in the file\nx_base = np.loadtxt(x_log_file_1, delimiter=' ')\nx_base = x_base[-1]\n# Load (or set) the sample parameters\nx_sample = np.array([179750., 318.47, 100.72, 8.00, 11608.17, 145.22, 1026.33, 4.68])\n\n# Calculate the metric\nconsistency_metric = rpl.vc_consistency_metric(x_base, x_sample, data_1)\nprint consistency_metric", "The value of $\\xi_2 = 65$ %, indicating that the two sets of parameters are inconsistent for this data set.\nUpdated Voce-Chaboche model\nThe inputs to generate the tables are the same as for the original model, however the input parameters have to come from optimization using the updated model.", "# Identify the material\nmaterial_def = {'material_id': ['Example 1'], 'load_protocols': ['1']}\n# Set the path to the x log file\nx_log_file_2 = './output/x_log_upd.txt'\nx_logs_all = [x_log_file_2]\n# Load the data\ndata_files_2 = ['example_1.csv']\ndata_2 = rpl.load_data_set(data_files_2)\ndata_all = [data_2]\n\n# Make the tables\nparam_table, metric_table = rpl.summary_tables_maker_uvc(material_def, x_logs_all, data_all)", "The consistency metric can be calculated in the same way as for the original model, but just using the uvc_consistency_metric function instead of vco_consistency_metric." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jasontlam/snorkel
tutorials/workshop/Workshop_3_Generative_Model_Training.ipynb
apache-2.0
[ "<img align=\"left\" src=\"imgs/logo.jpg\" width=\"50px\" style=\"margin-right:10px\">\nSnorkel Workshop: Extracting Spouse Relations <br> from the News\nPart 3: Training the Generative Model\nNow, we'll train a model of the LFs to estimate their accuracies. Once the model is trained, we can combine the outputs of the LFs into a single, noise-aware training label set for our extractor. Intuitively, we'll model the LFs by observing how they overlap and conflict with each other.", "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\nimport os\nimport re\nimport numpy as np\n\n# Connect to the database backend and initialize a Snorkel session\nfrom lib.init import *\nfrom snorkel.models import candidate_subclass\nfrom snorkel.annotations import load_gold_labels\n\nfrom snorkel.lf_helpers import (\n    get_left_tokens, get_right_tokens, get_between_tokens,\n    get_text_between, get_tagged_text,\n)\n\n# initialize our candidate type definition\nSpouse = candidate_subclass('Spouse', ['person1', 'person2'])\n\n# gold (human-labeled) development set labels\nL_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)", "I. Loading Labeling Matrices\nFirst we'll load our label matrices from notebook 2", "from snorkel.annotations import LabelAnnotator\n\nlabeler = LabelAnnotator()\nL_train = labeler.load_matrix(session, split=0)\nL_dev = labeler.load_matrix(session, split=1)", "Now we set up and run the hyperparameter search, training our model with different hyperparameters and picking the best model configuration to keep. We'll set the random seed to maintain reproducibility.\nNote that we are fitting our model's parameters to the training set generated by our labeling functions, while we are picking hyperparameters with respect to the score over the development set labels which we created by hand.\nII: Unifying supervision\nA. Majority Vote\nThe simplest way to unify the output of all your LFs is by computing the unweighted majority vote.", "from lib.scoring import *\n\nmajority_vote_score(L_dev, L_gold_dev)", "B. Generative Model\nIn data programming, we use a more sophisticated model to unify our labeling functions. We know that these labeling functions will not be perfect, and some may be quite low-quality, so we will model their accuracies with a generative model, which Snorkel will help us easily apply.\nThis will ultimately produce a single set of noise-aware training labels, which we will then use to train an end extraction model in the next notebook. For more technical details of this overall approach, see our NIPS 2016 paper.\n1. Training the Model\nWhen training the generative model, we'll tune our hyperparameters using a simple grid search. 
\nParameter Definitions\nepochs A single pass through all the data in your training set\nstep_size The factor by which we update model weights after computing the gradient\ndecay The rate our update factor diminishes (decay) over time.", "from snorkel.learning import GenerativeModel\nfrom snorkel.learning import RandomSearch, ListParameter, RangeParameter\n\n# use grid search to optimize the generative model\nstep_size_param = ListParameter('step_size', [0.1 / L_train.shape[0], 1e-5])\ndecay_param = ListParameter('decay', [0.9, 0.95])\nepochs_param = ListParameter('epochs', [10, 50])\nreg_param = ListParameter('reg_param', [1e-3, 1e-6])\nprior_param = ListParameter('LF_acc_prior_weight_default', [1.0, 0.9, 0.8])\n\n# search for the best model\nparam_grid = [step_size_param, decay_param, epochs_param, reg_param, prior_param]\nsearcher = RandomSearch(GenerativeModel, param_grid, L_train, n=10, lf_propensity=False)\n%time gen_model, run_stats = searcher.fit(L_dev, L_gold_dev, deps=set())\n\nrun_stats", "2. Model Accuracies\nThese are the weights learned for each LF", "L_dev.lf_stats(session, L_gold_dev, gen_model.learned_lf_stats()['Accuracy'])\n\ntrain_marginals = gen_model.marginals(L_train)", "3. Plotting Marginal Probabilities\nOne immediate sanity check you can perform using the generative model is to visually examine the distribution of predicted training marginals. Ideally, there should be a bimodal distribution with large separation between the peaks, as shown below by the far right image. This corresponds to good signal for true and positive class labels. For your first Snorkel application, you'll probably see marginals closer to the far left or middle images. With all mass centered around p=0.5, you probably need to write more LFs to get more overall coverage. In the middle image, you have good negative coverage, but not enough positive LFs.\n<img align=\"left\" src=\"imgs/marginals-common.jpg\" width=\"265px\" style=\"margin-right:0px\">\n<img align=\"left\" src=\"imgs/marginals-real.jpg\" width=\"265px\" style=\"margin-right:0px\">\n<img align=\"left\" src=\"imgs/marginals-ideal.jpg\" width=\"265px\" style=\"margin-right:0px\">", "import matplotlib.pyplot as plt\nplt.hist(train_marginals, bins=20, range=(0.0, 1.0))\nplt.show()", "4. Generative Model Metrics", "dev_marginals = gen_model.marginals(L_dev)\n_, _, _, _ = gen_model.error_analysis(session, L_dev, L_gold_dev)", "5. Saving our training labels\nFinally, we'll save the training_marginals, which are our \"noise-aware training labels\", so that we can use them in the next tutorial to train our end extraction model:", "from snorkel.annotations import save_marginals\n%time save_marginals(session, L_train, train_marginals)", "III. Advanced Generative Model Features\nA. Structure Learning\nWe may also want to include the dependencies between our LFs when training the generative model. Snorkel makes it easy to do this! 
DependencySelector runs a fast structure learning algorithm over the matrix of LF outputs to identify a set of likely dependencies.", "from snorkel.learning.structure import DependencySelector\n\nMAX_DEPS = 5\n\nds = DependencySelector()\ndeps = ds.select(L_train, threshold=0.1)\ndeps = set(list(deps)[0:min(len(deps), MAX_DEPS)])\n\nprint \"Using {} dependencies\".format(len(deps))", "To train the generative model with dependencies, we just pass in the above set as the deps argument to our model train function.\nsearcher = RandomSearch(GenerativeModel, param_grid, L_train, n=4, lf_propensity=False)\ngen_model, run_stats = searcher.fit(L_dev, L_gold_dev, deps=deps)\nrun_stats" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rjdkmr/do_x3dna
docs/notebooks/calculate_elasticity_tutorial.ipynb
gpl-3.0
[ "Elastic Properties and Deformation Energy\n\n\nThis tutorial discuss the analyses that can be performed using the dnaMD Python module included in the do_x3dna package. The tutorial is prepared using Jupyter Notebook and this notebook tutorial file could be downloaded from this link.\n\n\nDownload the input files that are used in the tutorial from this link.\n\n\nTwo following input files are required in this tutorial\n\ntutorial_data/elasticity_DNA/free_dna.h5 \ntutorial_data/elasticity_DNA/bound_dna.h5\n\n\n\nThese two files should be present inside tutorial_data/elasticity_DNA of the present working directory.\n\nThe above two files can be created by the steps as shown here\n\nImporting Python Modules\n\n\nnumpy: Required for the calculations involving large arrays\n\n\nmatplotlib: Required to plot the results\n\n\ndnaMD: Python module to analyze DNA/RNA structures from the do_x3dna output files.", "import numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport dnaMD\n\n%matplotlib inline", "Initializing eyDNA object with free_dna.h5 file\n\n\neyDNA object is initialized by using the total number of base-pairs and HDF5 file.\n\n\nThis class contains all the required functions to calculate the elastic properties and deformation free energy.", "eyDNA = dnaMD.dnaEY(27, 'BST', filename='elasticity_DNA/free_dna.h5')", "Determining modulus matrix - bending, stretching and twisting\nModulus matrix for all three major motions (bending, stretching and twisting) can be obtained with getStrecthTwistBend method.\nIn the following example, matrix is calculated for all frames and first 5000 frames, respectively.", "# All frames\navg, mod_matrix = eyDNA.getStretchTwistBendModulus([4,20], paxis='X')\nprint('Average values for all frames: ', avg)\nprint('Modulus matrix for all frames: \\n', mod_matrix )\nprint(' ')\n\n# Elastic matrix\navg, mod_matrix = eyDNA.getStretchTwistBendModulus([4,20], paxis='X', matrix=True)\nprint('Average values for all frames: ', avg)\nprint('Elastic constant matrix for all frames: \\n', mod_matrix )\nprint(' ')", "The elastic matrix is in this form:\n$$\\text{Elastic matrix} = \\begin{bmatrix}\n K_{Bx} & K_{Bx,By} & K_{Bx,S} & K_{Bx,T} \\\n K_{Bx,By} & K_{By} & K_{By,S} & K_{By,T} \\\n K_{Bx,S} & K_{By,S} & K_{S} & K_{S,T} \\\n K_{Bx,T} & K_{Bx,T} & K_{S,T} & K_{T}\n\\end{bmatrix}\n$$\nWhere:\n\n$Bx$ - Bending motion in one plane\n$By$ - Bending motion in another orthogonal plane\n$S$ - Stretching motion\n$T$ - Twisting motion\n\n$$\\text{modulus matrix} =\n\\begin{bmatrix}\nM_{Bx} & M_{Bx,By} & M_{Bx,S} & M_{Bx,T} \\\nM_{Bx,By} & M_{By} & M_{By,S} & M_{By,T} \\\nM_{Bx,S} & M_{By,S} & M_{S} & M_{S,T} \\\nM_{Bx,T} & M_{Bx,T} & M_{S,T} & M_{T}\n\\end{bmatrix}\n$$\n$$\n= 4.1419464 \\times \\begin{bmatrix}\nK_{Bx} & K_{Bx,By} & K_{Bx,S} & K_{Bx,T} \\\nK_{Bx,By} & K_{By} & K_{By,S} & K_{By,T} \\\nK_{Bx,S} & K_{By,S} & K_{S} & K_{S,T} \\\nK_{Bx,T} & K_{Bx,T} & K_{S,T} & K_{T}\n\\end{bmatrix} \\times L_0\n$$\nWhere:\n\n$M_{Bx}$ - Bending-1 stiffness in one plane\n$M_{By}$ - Bending-2 stiffness in another orthogonal plane\n$M_{S}$ - Stretch Modulus\n$M_{T}$ - Twist rigidity\n$M_{Bx,By}$ - Bending-1 and Bending-2 coupling\n$M_{By,S}$ - Bending-2 and stretching coupling\n$M_{S,T}$ - Stretching Twsiting coupling\n$M_{Bx,S}$ - Bending-1 Stretching coupling\n$M_{By,T}$ - Bending-2 Twisting coupling\n$M_{Bx,T}$ - Bending-1 Twisting coupling\n\nConvergence in bending, stretching and twisting with their couplings\nElasticities cannot be calcualted from an 
individual snapshot or frame. However, these properties can be calculated as a function of time by considering all the frames up to that time. For example, 0-50 ns, 0-100 ns, 0-150 ns etc. By this method, we can analyze the convergence and also further we can calculate error using block average method.\nElasticities over the time can be calculated using getElasticityByTime method.\nIf esType='BST', A ordered dictionary of 1D arrays of shape (nframes). The keys in dictionary are name of the elasticity in the same order as listed above..\n\n$M_{Bx}$ - bend-1 - Bending-1 stiffness in one plane\n$M_{By}$ - bend-2 - Bending-2 stiffness in another orthogonal plane\n$M_{S}$ - stretch - Stretch Modulus\n$M_{T}$ - twist - Twist rigidity\n$M_{Bx,By}$ - bend-1-bend-2 - Bending-1 and Bending-2 coupling\n$M_{By,S}$ - bend-2-stretch - Bending-2 and stretching coupling\n$M_{S,T}$ - stretch-twist - Stretching Twsiting coupling\n$M_{Bx,S}$ - bend-1-stretch - Bending-1 Stretching coupling\n$M_{By,T}$ - bend-2-twist - Bending-2 Twisting coupling\n$M_{Bx,T}$ - bend-1-twist - Bending-1 Twisting coupling\n\nIf esType='ST', 2D array with three properties of shape (3, frame) will be returned.\n\n$M_{S}$ - stretch - Stretch Modulus\n$M_{T}$ - twist - Twist rigidity\n$M_{S,T}$ -stretch-twist - Stretching Twsiting coupling\n\nIn the following example, modulus as a function of time was calculated by adding 1000 frames.", "time, modulus = eyDNA.getModulusByTime([4,20], frameGap=500, masked=True)\nprint('Keys in returned dictionary:\\n', '\\n'.join(list(modulus.keys())), '\\n-----------')\n\n# Stretching modulus\nplt.plot(time, modulus['stretch'])\nplt.scatter(time, modulus['stretch'])\nplt.xlabel('Time (ps)')\nplt.ylabel(r'Stretching Modulus (pN)')\nplt.show()\n\n# Twist rigidity\nplt.plot(time, modulus['twist'])\nplt.scatter(time, modulus['twist'])\nplt.xlabel('Time (ps)')\nplt.ylabel(r'Rigidity (pN nm$^2$)')\nplt.show()\n\n# Stretch twist coupling\nplt.plot(time, modulus['stretch-twist'])\nplt.scatter(time, modulus['stretch-twist'])\nplt.xlabel('Time (ps)')\nplt.ylabel(r'Stretch-Twist Coupling (pN nm)',)\nplt.show()", "Deformation free energy of bound DNA\nDeformation energy of a probe DNA (bound DNA) can be calculated with reference to the DNA present in the current object.\nThe deformation free energy is calculated using elastic matrix as follows\n$$G = \\frac{1}{2L_0}\\mathbf{xKx^T}$$\n$$\\mathbf{x} = \\begin{bmatrix}\n (\\theta^{x} - \\theta^{x}_0) & (\\theta^{y} - \\theta^{y}_0) & (L - L_0) & (\\phi - \\phi_0)\n \\end{bmatrix}$$\nWhere, $\\mathbf{K}$, $\\theta^{x}_0$, $\\theta^{y}_0$, $L_0$ and $\\phi_0$ is calculated from reference DNA while $\\theta^{x}$, $\\theta^{y}$, $L$ and $\\phi$ is calculated for probe DNA from each frame.\nWe already loaded the data for reference DNA above. 
Here, we will load data for probe DNA.", "# Load parameters of bound DNA\nboundDNA = dnaMD.DNA(27, filename='elasticity_DNA/bound_dna.h5')", "Deformation free energy can be calculated for the following motions that can be used with which option.\n\n'full' : Use entire elastic matrix -- all motions with their coupling\n'diag' : Use diagonal of elastic matrix -- all motions but no coupling\n'b1' : Only bending-1 motion\n'b2' : Only bending-2 motion\n'stretch' : Only stretching motion\n'twist' : Only Twisting motions\n'st_coupling' : Only stretch-twist coupling motion\n'bs_coupling' : Only Bending and stretching coupling\n'bt_coupling' : Only Bending and Twisting coupling\n'bb_coupling' : Only bending-1 and bending-2 coupling\n'bend' : Both bending motions with their coupling\n'st' : Stretching and twisting motions with their coupling\n'bs' : Bending (b1, b2) and stretching motions with their coupling\n'bt' : Bending (b1, b2) and twisting motions with their coupling\n\nwhich can be either 'all' or a list of energy terms given above.", "# Deformation free energy of bound DNA and calculate all above listed terms\ntime, energy = eyDNA.getGlobalDeformationEnergy([4,20], boundDNA, paxis='X', which='all', masked=True)\nenergyTerms=list(energy.keys())\nprint('Keys in returned dictionary:\\n', '\\n'.join(energyTerms), '\\n-----------')\n\n# Plot two energy terms\nfig = plt.figure(figsize=(8,8))\nfig.subplots_adjust(hspace=0.3)\n\nax1 = fig.add_subplot(211)\nax1.set_title('Bound DNA, entire elastic matrix')\nax1.plot(time, energy['full'])\nax1.set_xlabel('Time (ps)')\nax1.set_ylabel(r'Deformation Free Energy (kJ/mol)',)\n\nax2 = fig.add_subplot(212)\nax2.set_title('Bound DNA, only diagonal of elastic matrix')\nax2.plot(time, energy['diag'])\nax2.set_xlabel('Time (ps)')\nax2.set_ylabel(r'Deformation Free Energy (kJ/mol)',)\n\nplt.show()\n\n\n# Calculate average and error for each energy terms\nerror = dnaMD.get_error(time, list(energy.values()), len(energyTerms), err_type='block', tool='gmx analyze')\n\nprint(\"==============================================\")\nprint('{0:<16}{1:>14}{2:>14}'.format('Energy(kJ/mol)', 'Average', 'Error'))\nprint(\"----------------------------------------------\")\nfor i in range(len(energyTerms)):\n print('{0:<16}{1:>14.3f}{2:>14.3f}'.format(energyTerms[i], np.mean(energy[energyTerms[i]]),error[i]))\nprint(\"==============================================\\n\")", "Local elastic properties or stiffness\nLocal elastic properties can be caluclated using either local base-step parameters or local helical base-step parameters.\nIn case of base-step parameters: Shift ($Dx$), Slide ($Dy$), Rise ($Dz$), Tilt ($\\tau$), Roll ($\\rho$) and Twist ($\\omega$), following elastic matrix is calculated.\n$$\n\\mathbf{K}{base-step} = \\begin{bmatrix}\nK{Dx} & K_{Dx,Dy} & K_{Dx,Dz} & K_{Dx,\\tau} & K_{Dx,\\rho} & K_{Dx,\\omega} \\\nK_{Dx,Dy} & K_{Dy} & K_{Dy,Dz} & K_{Dy,\\tau} & K_{Dy,\\rho} & K_{Dy,\\omega} \\\nK_{Dx,Dz} & K_{Dy,Dz} & K_{Dz} & K_{Dz,\\tau} & K_{Dz,\\rho} & K_{Dz,\\omega} \\\nK_{Dx,\\tau} & K_{Dy,\\tau} & K_{Dz,\\tau} & K_{\\tau} & K_{\\tau, \\rho} & K_{\\tau,\\omega} \\\nK_{Dx,\\rho} & K_{Dy,\\rho} & K_{Dz,\\rho} & K_{\\tau, \\rho} & K_{\\rho} & K_{\\rho,\\omega} \\\nK_{Dx,\\omega} & K_{Dy,\\omega} & K_{Dz,\\omega} & K_{\\tau, \\omega} & K_{\\rho, \\omega} & K_{\\omega} \\\n\\end{bmatrix}\n$$\nIn case of helical-base-step parameters: x-displacement ($dx$), y-displacement ($dy$), h-rise ($h$), inclination ($\\eta$), tip ($\\theta$) and twist ($\\Omega$), following elastic 
matrix is calculated.\n$$\n\\mathbf{K}{helical-base-step} = \\begin{bmatrix}\nK{dx} & K_{dx,dy} & K_{dx,h} & K_{dx,\\eta} & K_{dx,\\theta} & K_{dx,\\Omega} \\\nK_{dx,dy} & K_{dy} & K_{dy,h} & K_{dy,\\eta} & K_{dy,\\theta} & K_{dy,\\Omega} \\\nK_{dx,h} & K_{dy,h} & K_{h} & K_{h,\\eta} & K_{h,\\theta} & K_{h,\\Omega} \\\nK_{dx,\\eta} & K_{dy,\\eta} & K_{h,\\eta} & K_{\\eta} & K_{\\eta, \\theta} & K_{\\eta,\\Omega} \\\nK_{dx,\\theta} & K_{dy,\\theta} & K_{h,\\theta} & K_{\\eta, \\theta} & K_{\\theta} & K_{\\theta,\\Omega} \\\nK_{dx,\\Omega} & K_{dy,\\Omega} & K_{h,\\Omega} & K_{\\eta, \\Omega} & K_{\\theta, \\Omega} & K_{\\Omega} \\\n\\end{bmatrix}\n$$", "# base-step\navg, matrix = eyDNA.calculateLocalElasticity([10,13], helical=False)\n\n# Print matrix in nice format\nout = ''\nmean_out = ''\nfor i in range(matrix.shape[0]):\n for j in range(matrix.shape[0]):\n if j != matrix.shape[0]-1:\n out += '{0:>10.5f} '.format(matrix[i][j])\n else:\n out += '{0:>10.5f}\\n'.format(matrix[i][j])\n mean_out += '{0:>15.3f} '.format(avg[i])\n\nprint('Average values for all frames: ', mean_out)\nprint('=========== ============== Elastic Matrix =============== ===========\\n')\nprint(out)\nprint('=========== ====================== ====================== ===========')\n\n# helical base-step\navg, matrix = eyDNA.calculateLocalElasticity([10,13], helical=True)\n\n# Print matrix in nice format\nout = ''\nmean_out = ''\nfor i in range(matrix.shape[0]):\n for j in range(matrix.shape[0]):\n if j != matrix.shape[0]-1:\n out += '{0:>10.5f} '.format(matrix[i][j])\n else:\n out += '{0:>10.5f}\\n'.format(matrix[i][j])\n mean_out += '{0:>15.3f} '.format(avg[i])\n\nprint('\\n\\nAverage values for all frames: ', mean_out)\nprint('=========== ============== Elastic Matrix =============== ===========\\n')\nprint(out)\nprint('=========== ====================== ====================== ===========')\n", "Local deformation energy of a local small segment\nUsing the above elastic matrix, deformation energy of this base-step in bound DNA can be calucalted.", "# Here calculate energy for one base-step\ntime, energy = eyDNA.getLocalDeformationEnergy([10,13], boundDNA, helical=False, which='all')\nenergyTerms=list(energy.keys())\nprint('Keys in returned dictionary:\\n', '\\n'.join(energyTerms), '\\n-----------')\n\n# Plot two energy terms\nfig = plt.figure(figsize=(8,8))\nfig.subplots_adjust(hspace=0.3)\n\nax1 = fig.add_subplot(211)\nax1.set_title('Bound DNA, entire elastic matrix')\nax1.plot(time, energy['full'])\nax1.set_xlabel('Time (ps)')\nax1.set_ylabel(r'Local Deformation Energy (kJ/mol)',)\n\nax2 = fig.add_subplot(212)\nax2.set_title('Bound DNA, only diagonal of elastic matrix')\nax2.plot(time, energy['diag'])\nax2.set_xlabel('Time (ps)')\nax2.set_ylabel(r'Local Deformation Energy (kJ/mol)',)\n\nplt.show()\n\n# Calculate average and error for each energy terms\nerror = dnaMD.get_error(time, list(energy.values()), len(energyTerms), err_type='block', tool='gmx analyze')\nprint(\"==============================================\")\nprint('{0:<16}{1:>14}{2:>14}'.format('Energy(kJ/mol)', 'Average', 'Error'))\nprint(\"----------------------------------------------\")\nfor i in range(len(energyTerms)):\n print('{0:<16}{1:>14.3f}{2:>14.3f}'.format(energyTerms[i], np.mean(energy[energyTerms[i]]),error[i]))\nprint(\"==============================================\\n\")\n", "Deformation energy of the consecutive overlapped DNA segments\nAbove method gives energy of a small local segment of the DNA. 
However, we mostly interested in large segment of the DNA. This large segment can be further divided into smaller local segments. For these smaller segments local deformation energy can be calculated. Here these segments overlapped with each other.", "# First calculation for local base-step parameters\nsegments, energies, error = eyDNA.getLocalDeformationEnergySegments([4,20], boundDNA, span=4, \n helical=False, which='all',\n err_type='block',\n tool='gmx analyze')\nenergyTerms=list(energies.keys())\nprint('Keys in returned dictionary:\\n', '\\n'.join(energyTerms), '\\n-----------')\n\n# Now plot the data\nfig = plt.figure(figsize=(14,8))\nfig.subplots_adjust(hspace=0.3)\nmpl.rcParams.update({'font.size': 16})\n\nxticks = range(len(segments))\n\nax1 = fig.add_subplot(111)\nax1.set_title('Local base-step parameters')\n\nfor term in energyTerms:\n ax1.errorbar(xticks, energies[term], yerr=error[term], ms=10, elinewidth=3, fmt='-o', label=term)\nax1.set_xticks(xticks)\nax1.set_xticklabels(segments, rotation='vertical')\nax1.set_xlabel('base-step number')\nax1.set_ylabel(r'Deformation Energy (kJ/mol)',)\nplt.legend()\n\nplt.show()", "Same as the above but energy is calculated using helical base-step parameters", "# Secind calculation for local base-step parameters\nsegments, energies, error = eyDNA.getLocalDeformationEnergySegments([4,20], boundDNA, span=4, \n helical=True, which='all',\n err_type='block',\n tool='gmx analyze')\nenergyTerms=list(energies.keys())\nprint('Keys in returned dictionary:\\n', '\\n'.join(energyTerms), '\\n-----------')\n\n# Now plot the data\nfig = plt.figure(figsize=(14,8))\nfig.subplots_adjust(hspace=0.3)\nmpl.rcParams.update({'font.size': 16})\n\nxticks = range(len(segments))\n\nax1 = fig.add_subplot(111)\nax1.set_title('Local base-step parameters')\n\nfor term in energyTerms:\n ax1.errorbar(xticks, energies[term], yerr=error[term], ms=10, elinewidth=3, fmt='-o', label=term)\nax1.set_xticks(xticks)\nax1.set_xticklabels(segments, rotation='vertical')\nax1.set_xlabel('base-step number')\nax1.set_ylabel(r'Deformation Energy (kJ/mol)',)\nplt.legend()\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fullmetalfelix/ML-CSC-tutorial
ACSF-Dimer.ipynb
gpl-3.0
[ "Atom Centered Symmetry Functions\nACSFs are a convenient way of transforming atomic coortinates and types into a computer-friendly string of numbers. Each atom gets its own set of ACSFs, computed using itself as the center, and all other atomic coordinates, which encode its chemical environment.<br>\n<img src=\"./images/acsf-schema.png\" width=\"400px\"><br>\nThe two main type of ACSFs are two- an three-body. Each set of ACSFs becomes the input of a neural network that calculates the corresponding energy contribution. The only important quantity is the total energy of the system, given by the sum of all contributions.\nFor more info see: Jörg Behler, <i>J. Chem. Phys.</i> <b>134</b>, 074106 (2011)\nA pratical example\nWe are going to see ACSFs in action for a simple dimer system.\nSince there is only one atomic species and only two atoms, we will not need the three-body terms. Due to the symmetry of the system we can just compute the ACSFs for one of the two atoms, feed them to a single NN to get the total energy directly.\nHere are some definitions we will need.", "# --- INITIAL DEFINITIONS ---\nfrom sklearn.neural_network import MLPRegressor\nimport numpy, math, random\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom ase import Atoms\nfrom visualise import view\n\n# cutoff for all ACSF\nRcut = 5.0\n\n# cutoff function\ndef fcut(r):\n if r >= Rcut: return 0\n return (math.cos(math.pi * r/Rcut)+1) * 0.5\n\n# G1 function definition\ndef G1f(r, eta, Rs):\n return math.exp(-eta*(r-Rs)*(r-Rs)) * fcut(r)", "Creating the training set\nIn order to learn the relationship between ACSFs and the energy of the system, we need a database of ACSFs for several atomic configurations, and the corresponding energy.\nThe sample configurations consist of the dimer, stretched and compressed. In reality the energy is calculated with quantum methods (DFT, CC, ...) but here we will use a simple Lennard-Jones function.", "# array of meaningful distances\ndists = numpy.arange(1.95, Rcut, Rcut/30)\n# LJ energy at those distances\nenergy = numpy.power(dists/2,-12)-numpy.power(dists/2,-6) - 2\n\nplt.plot(dists, energy,'.' )\nplt.xlabel('Pair distance')\nplt.ylabel('Energy')\nplt.show()", "Then we calculate the ACSFs for each dimer configuration. The results are formatted as a matrix: one row for each configuration, one column for each ACSF.", "# ACSFs G1 parameter pairs: this is a list of eta/Rs values\nparams = [(0.4, 0.2),(0.4, 0.5)]\n\n# initialise a matrix that will store the ACSFs of the first atom in all dimer configurations\nnConfs = dists.shape[0]\nacsf = numpy.zeros((nConfs, 1+len(params)))\n\nprint(\"Number of configurations: \" + str(nConfs))\nprint(\"Number of ACSfs: \" + str(acsf.shape[1]))\n\n\nfor k in range(nConfs): # for each configuration\n \n r = dists[k] # distance between atoms\n # compute G0 - sum of cutoffs\n acsf[k,0] = fcut(r)\n \n # compute all the G1\n for p in range(len(params)):\n # extract parameters\n eta,rs = params[p]\n # compute G1\n acsf[k,1+p] = G1f(r, eta, rs)\n\n# plot the Gs as a function of distance\nfor a in range(acsf.shape[1]):\n plt.plot(dists, acsf[:,a])\nplt.xlabel('Pair distance')\nplt.ylabel('ACSFs')\nplt.show()", "OPTIONAL TRICK\nWe can center the ACSFs around their mean and rescale them so that their standard deviation is 1. 
This is a common trick in ML with neural networks, to make the learning easier.", "acsf_mean = numpy.mean(acsf, axis=0)\nfor a in range(acsf.shape[1]):\n acsf[:,a] -= acsf_mean[a]\nacsf_std = numpy.std(acsf, axis=0)\nfor a in range(acsf.shape[1]):\n acsf[:,a] /= acsf_std[a]\n\n# plot the Gs as a function of distance\nfor a in range(acsf.shape[1]):\n plt.plot(dists, acsf[:,a])\nplt.xlabel('Pair distance')\nplt.ylabel('ACSFs - scaled and shifted')\nplt.show()", "Training\nWe create a neural network and train it on the ACSF database we just constructed.", "# setup the neural network\n# the network uses tanh function on all hidden neurons\n\nnn = MLPRegressor(hidden_layer_sizes=(5,), activation='tanh')", "The fitting may not be trivial since our database is small... the next instruction can be executed multiple times let the NN train more and hopefully improve.", "# change some training parameters\nnn.set_params(solver='lbfgs', alpha=0.001, tol=1.0e-10, learning_rate='constant', learning_rate_init=0.01)\n# do some training steps\nnn.fit(acsf, energy);\n\n# evaluate the training error\nenergyML = nn.predict(acsf)\n\nprint (\"Mean Abs Error (training) : \", (numpy.abs(energyML-energy)).mean())\n\n# energy curve\nplt.plot(dists, energy,'-.' )\nplt.plot(dists, energyML,'o' )\nplt.xlabel('Pair distance')\nplt.ylabel('Energy')\nplt.show()\n\n# regression plot\nplt.plot(energy,energyML,'o')\nplt.plot([-2.3,-1.7],[-2.3,-1.7]) # perfect fit line\nplt.xlabel('correct energy')\nplt.ylabel('NN energy')\nplt.show()", "Remarks\nDo not be fooled! Real systems are much more difficult to model, requiring more ACSFs, larger NNs, and much larger datasets for training.\nExercises\n1. Create a vaidation set and test the NN performance\nFor simplicity we just checked the error on training data, but it is better to check performance on a validation set not included in the training.\nCreate different dimer configurations and test NN performance on those.\n2. Craft you own energy\nMake the dimer energy expression more complex and attempt to machine-learn it.\n3. Add/edit the ACSFs parameters\nTry to change the ACSFs parameters to get better model performance.\n4. A real molecule\nHere is a real organic molecule... try to compute the ACSFs for its atoms using the DScribe package.\nDocumentation can be found here: https://singroup.github.io/dscribe/tutorials/acsf.html", "# atomic positions as matrix\nmolxyz = numpy.load(\"./data/molecule.coords.npy\")\n# atom types\nmoltyp = numpy.load(\"./data/molecule.types.npy\")\n\natoms_sys = Atoms(positions=molxyz, numbers=moltyp)\nview(atoms_sys)\n\n\nfrom dscribe.descriptors import ACSF\n\n# Setting up the ACSF descriptor\nacsf = ACSF(\n species=[\"H\", \"C\", \"N\", \"O\"],\n rcut=6.0,\n # configure parameters for desired ACSFs\n g2_params=[[1, 1], [1, 2], [1, 3]],\n g4_params=[[1, 1, 1], [1, 2, 1], [1, 1, -1], [1, 2, -1]],\n)\n\n# calculate the descriptor" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NGSchool2016/ngschool2016-materials
jupyter/agyorkei/.ipynb_checkpoints/NGSchool_python-checkpoint.ipynb
gpl-3.0
[ "Set the matplotlib magic to notebook enable inline plots", "%pylab inline", "Calculate the Nonredundant Read Fraction (NRF)\nSAM format example:\nSRR585264.8766235 0 1 4 15 35M * 0 0 CTTAAACAATTATTCCCCCTGCAAACATTTTCAAT GGGGGGGGGGGGGGGGGGGGGGFGGGGGGGGGGGG XT:A:U NM:i:1 X0:i:1 X1:i:6 XM:i:1 XO:i:0 XG:i:0 MD:Z:8T26\nImport the required modules", "import subprocess\nimport matplotlib.pyplot as plt\nimport random\nimport numpy as np", "Make figures prettier and biger", "plt.style.use('ggplot')\nfigsize(10,5)", "Parse the SAM file and extract the unique start coordinates.\nFirst store the file name in the variable", "file = \"/ngschool/chip_seq/bwa/input.sorted.bam\"", "Next we read the file using samtools. From each read we need to store the flag, chromosome name and start coordinate.", "p = subprocess.Popen([\"samtools\", \"view\", \"-q10\", \"-F260\", file],\n stdout=subprocess.PIPE)\ncoords = []\nfor line in p.stdout:\n flag, ref, start = line.decode('utf-8').split()[1:4]\n coords.append([flag, ref, start])\n\ncoords[:3]", "What is the total number of our unique reads?", "len(coords)", "Randomly sample the coordinates to get 1M for NRF calculations", "random.seed(1234)\nsample = random.sample(coords, 1000000)\n\nlen(sample)", "How many of those coordinates are unique? (We will use the set python object which only the unique items.)", "uniqueStarts = {'watson': set(), 'crick': set()}\nfor coord in sample:\n flag, ref, start = coord\n if int(flag) & 16:\n uniqueStarts['crick'].add((ref, start))\n else:\n uniqueStarts['watson'].add((ref, start))", "How many on the Watson strand?", "len(uniqueStarts['watson'])", "And on the Crick?", "len(uniqueStarts['crick'])", "Calculate the NRF", "NRF_input = (len(uniqueStarts['watson']) + len(uniqueStarts['crick']))*1.0/len(sample)\nprint(NRF_input)", "Lets create a function from what we did above and apply it to all of our files!\nTo use our function on the real sequencing datasets (not only on a small subset) we need to optimize our method a bit- we will use python module called numpy.", "def calculateNRF(filePath, pickSample=True, sampleSize=10000000, seed=1234):\n p = subprocess.Popen(['samtools', 'view', '-q10', '-F260', filePath],\n stdout=subprocess.PIPE)\n coordType = np.dtype({'names': ['flag', 'ref', 'start'],\n 'formats': ['uint16', 'U10', 'uint32']})\n coordArray = np.empty(10000000, dtype=coordType)\n i = 0\n for line in p.stdout:\n if i >= len(coordArray):\n coordArray = np.append(coordArray, np.empty(1000000, dtype=coordType), axis=0)\n fg, rf, st = line.decode('utf-8').split()[1:4]\n coordArray[i] = np.array((fg, rf, st), dtype=coordType)\n i += 1\n coordArray = coordArray[:i]\n sample = coordArray\n if pickSample and len(coordArray) > sampleSize:\n np.random.seed(seed)\n sample = np.random.choice(coordArray, sampleSize, replace=False)\n uniqueStarts = {'watson': set(), 'crick': set()}\n for read in sample:\n flag, ref, start = read\n if flag & 16:\n uniqueStarts['crick'].add((ref, start))\n else:\n uniqueStarts['watson'].add((ref, start))\n NRF = (len(uniqueStarts['watson']) + len(uniqueStarts['crick']))*1.0/len(sample)\n return NRF", "Calculate the NRF for the chip-seq sample", "NRF_chip = calculateNRF(\"/ngschool/chip_seq/bwa/sox2_chip.sorted.bam\", sampleSize=1000000)\nprint(NRF_chip)", "Plot the NRF!", "plt.bar([0,2],[NRF_input, NRF_chip], width=1)\nplt.xlim([-0.5,3.5]), plt.xticks([0.5, 2.5], ['Input', 'ChIP'])\nplt.xlabel('Sample')\nplt.ylabel('NRF')\nplt.ylim([0, 1.25]), plt.yticks(np.arange(0, 1.2, 0.2))\nplt.plot((-0.5,3.5), 
(0.8,0.8), 'red', linestyle='dashed')\nplt.show()", "Calculate the Signal Extraction Scaling\nLoad the results from the coverage calculations", "countList = []\nwith open('/ngschool/chip_seq/bedtools/input_coverage.bed', 'r') as covFile:\n for line in covFile:\n countList.append(int(line.strip('\\n').split('\\t')[3]))\ncountList[0:6]\n\ncountList[-15:]", "Lets see where do our reads align to the genome. Plot the distribution of tags along the genome.", "plt.plot(range(len(countList)), countList)\nplt.xlabel('Bin number')\nplt.ylabel('Bin coverage')\nplt.xlim([0, len(countList)])\nplt.show()", "Now sort the list- order the windows based on the tag count", "countList.sort()\n\ncountList[0:6]", "Sum all the aligned tags", "countSum = sum(countList)\ncountSum", "Calculate the summaric fraction of tags along the ordered windows.", "countFraction = []\nfor i, count in enumerate(countList):\n if i == 0:\n countFraction.append(count*1.0 / countSum)\n else:\n countFraction.append((count*1.0 / countSum) + countFraction[i-1])", "Look at the last five items of the list:", "countFraction[-5:]", "Calculate the number of windows.", "winNumber = len(countFraction)\nwinNumber", "Calculate what fraction of a whole is the position of each window.", "winFraction = []\nfor i in range(winNumber):\n winFraction.append(i*1.0 / winNumber)", "Look at the last five items of our new list:", "winFraction[-5:]", "Now prepare the function!", "def calculateSES(filePath):\n countList = []\n with open(filePath, 'r') as covFile:\n for line in covFile:\n countList.append(int(line.strip('\\n').split('\\t')[3]))\n plt.plot(range(len(countList)), countList)\n plt.xlabel('Bin number')\n plt.ylabel('Bin coverage')\n plt.xlim([0, len(countList)])\n plt.show()\n countList.sort()\n countSum = sum(countList)\n countFraction = []\n for i, count in enumerate(countList):\n if i == 0:\n countFraction.append(count*1.0 / countSum)\n else:\n countFraction.append((count*1.0 / countSum) + countFraction[i-1])\n winNumber = len(countFraction)\n winFraction = []\n for i in range(winNumber):\n winFraction.append(i*1.0 / winNumber)\n return [winFraction, countFraction]", "Use our function to calculate the signal extraction scaling for the Sox2 ChIP sample:", "chipSes = calculateSES(\"/ngschool/chip_seq/bedtools/sox2_chip_coverage.bed\")", "Now we can plot the calculated fractions for both the input and ChIP sample:", "plt.plot(winFraction, countFraction, label='input')\nplt.plot(chipSes[0], chipSes[1], label='Sox2 ChIP')\nplt.ylim([0,1])\nplt.xlabel('Ordered window franction')\nplt.ylabel('Genome coverage fraction')\nplt.legend(loc='best')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
stanfordhpccenter/datatutorial
session-3/HPC-2016-Session-III-Supervised-Unsupervised-Learning.ipynb
mit
[ "Machine Learning\nMachine Learning is a set of algorithms to enable computers to make and improve predictions or behaviors based on some data. This ability is not explicitly programmed. It involves models with tuneable parameters, that can adapt their values based on available data. Thence, these models can generalize this knowledge and make predictions about new (and unseen) data.\nFitting lines through data. Any middle schooler could eyeball this data and draw a reasonable line through it...however, this task is not simple for a machine. \nAnd when we move to more complicated datasets and multiple dimensions, your middle schooler will give up.", "from IPython.core.display import Image, display\ndisplay(Image(filename='Reg1.png'))\ndisplay(Image(filename='Reg2.png'))\n\nfrom IPython.core.display import Image, display\ndisplay(Image(filename='Cluster0.png'))\ndisplay(Image(filename='Cluster1.png'))", "Scikit-Learn\nScikit-Learn (http://scikit-learn.org) is a python package that uses NumPy & SciPy to enable the application of popular machine learning algorithms up on small to medium datasets.\nReferring back to the machine learning models, every model in scikit is a python class with a uniform interface. Every instance of this class is an object and the general method of application is very similar.\na. Import class from module. (Here \"abc\" is an arbitrary algorithm.)\n* from sklearn.ABC import abc\nb. Instantiate estimator object \n* abc_model=abc(arguments)\nc. Fit model to training data\n* abc_model.fit(data)\nd. Use fitted model to predict \n* abc_model.predict(new_data)\nNow, we'll move from this (seemingly) abstract overview to actual application.\nTo motivate this discussion, lets start with a concrete problem...that of the infinite scroll.\nThe goal of Clustering is to find an arrangement in the data such that items in the same group (or cluster) are more similar to each other than those from different clusters.\nThe Prototype based K-Means algorithm is quiet popular. In prototype based clustering, each group is represented/exemplified by a prototype. In K-Means, the prototype is the mean (or centroid).\nExercise 1\nName another parameter that we could have chosen as a prototype? 
\nWhen would this parameter be more suited than the centroid?", "%matplotlib inline\nfrom sklearn.datasets import make_blobs\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n\nX, y = make_blobs(n_samples=200,n_features=2,centers=6,cluster_std=0.8, shuffle=True,random_state=0)\n\nplt.scatter(X[:,0],X[:,1])", "Steps in the K-means algorithm:\n\nChoose k centroids from the sample points as initial cluster centers.\nAssign each data point to the nearest centroid (based on Euclidean distance).\nUpdate the centroid locations to the mean of the samples that were assigned to it.\nRepeat steps 2 and 3 till the cluster assignments do not change, or, a pre-defined tolerance, or, a maximum number of iterations is reached.", "#import Kmeans class for the cluster module\nfrom sklearn.cluster import KMeans\n\n#instantiate the model\nkm = KMeans(n_clusters=3, init='random', n_init=10, max_iter=300, tol=1e-04, random_state=0)", "The arguments to the algorithm:\n* n_clusters: The number of groups to be divided in.\n* n_init: The number of different initial random centroids to be run.\n* max_iter: The maximum number of iterations for each single run.\n* tol: Cut-off for the changes in the within-cluster sum-squared-error.", "#fitting the model to the data \ny_km = km.fit_predict(X)\n\nplt.scatter(X[y_km==0,0], X[y_km ==0,1], s=50, c='lightgreen', marker='o', label='Group A')\nplt.scatter(X[y_km ==1,0], X[y_km ==1,1], s=50, c='orange', marker='o', label='Group B')\nplt.scatter(X[y_km ==2,0], X[y_km ==2,1], s=50, c='white', marker='o', label='Group C')\nplt.scatter(km.cluster_centers_[:,0],km.cluster_centers_[:,1], s=50, marker='o', c='black', label='Centers')\nplt.legend()", "Exercise 2\nClustering the iris dataset based on sepal and petal lengths and widths.", "display(Image(filename='1.png'))\n\nfrom sklearn.datasets import load_iris\niris = load_iris()\nn_samples, n_features = iris.data.shape\nX, y = iris.data, iris.target\n\nf, axarr = plt.subplots(2, 2)\naxarr[0, 0].scatter(iris.data[:, 0], iris.data[:, 1],c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))\naxarr[0, 0].set_title('Sepal length versus width')\naxarr[0, 1].scatter(iris.data[:, 1], iris.data[:, 2],c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))\naxarr[0, 1].set_title('Sepal width versus Petal Length')\naxarr[1, 0].scatter(iris.data[:, 2], iris.data[:, 3],c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))\naxarr[1, 0].set_title('Petal length versus width')\naxarr[1, 1].scatter(iris.data[:, 0], iris.data[:, 2],c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))\naxarr[1, 1].set_title('Sepal length versus Petal length')\nplt.setp([a.get_xticklabels() for a in axarr[0, :]], visible=False);\nplt.setp([a.get_yticklabels() for a in axarr[:, 1]], visible=False);\n\n#Instantiate and fit the model here", "Regression", "x=np.arange(100)\neps=50*np.random.randn(100)\ny=2*x+eps\nplt.scatter(x,y)\nplt.xlabel(\"X\")\nplt.ylabel(\"Y\")\n\nfrom sklearn.linear_model import LinearRegression\nmodel=LinearRegression(normalize=True)\nX=x[:,np.newaxis]\n\nmodel.fit(X,y)\n\nX_fit=x[:,np.newaxis]\ny_pred=model.predict(X_fit)\n\nplt.scatter(x,y)\nplt.plot(X_fit,y_pred,linewidth=2)\nplt.xlabel(\"X\")\nplt.ylabel(\"Y\")\n\nprint model.coef_\nprint model.intercept_\n#So a unit change is X is associated with a ___ change in Y.", "Exercise 3\nLinear Regression over a multi-dimensional data set. 
The data exhibits the advertising expenditure over TV, radio and the print media, versus the change in sales of the product.", "import pandas as pd\ndata=pd.read_csv('addata.csv', index_col=0)\ndata.head(5)\n\n#from sklearn.linear_model import LinearRegression\nfrom sklearn import linear_model\n\nclf=linear_model.LinearRegression()\n\nfeature_cols=[\"TV\",\"Radio\",\"Newspaper\"]\nX=data[feature_cols]\ny=data[\"Sales\"]\n\nfrom sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)\n\n#Fit the model and print the coefficients here\n\n#Make predictions for the test dataset here\n\nfrom sklearn import metrics\nprint np.sqrt(metrics.mean_squared_error(y_test,y_pred)) #RMSE" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mlosch/nnadapter
examples/summary_statistics.ipynb
mit
[ "Gather summary statistics from network outputs\nThis example script displays the use of emu to\nestimate normal distribution parameters from the output of each convolutional layer in a given pretrained model.\n1. Setup\n\nSetup environment", "import sys\nimport os\nimport numpy as np\nfrom collections import OrderedDict\n\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Define backend (here are implemented: caffe and torch)", "backend = 'caffe'", "Load a caffe model", "if backend == 'caffe':\n # make sure pycaffe is in your system path\n caffe_root = os.getenv(\"HOME\") + '/caffe/'\n sys.path.insert(0, caffe_root + 'python')\n \n # Load CaffeAdapter class\n from emu.caffe import CaffeAdapter\n \n # Define the path to .caffemodel, deploy.prototxt and mean.npy\n # Here we use the pretrained CaffeNet from the Caffe model zoo\n model_fp = caffe_root + 'models/bvlc_reference_caffenet/'\n weights_fp = model_fp + 'bvlc_reference_caffenet.caffemodel'\n prototxt_fp = model_fp + 'deploy.prototxt'\n \n mean_fp = caffe_root + 'data/ilsvrc12/ilsvrc_2012_mean.npy'\n # Alternatively, we could also define the mean as a numpy array:\n # mean = np.array([104.00698793, 116.66876762, 122.67891434])\n \n adapter = CaffeAdapter(prototxt_fp, weights_fp, mean_fp)", "Load a torch model", "if backend == 'torch':\n # Load TorchAdapter class\n from emu.torch import TorchAdapter\n \n # Define the path to the model file where the file can be a torch7 or pytorch model.\n # Torch7 models are supported but not well tested.\n model_fp = 'models/resnet-18.t7'\n \n # Alternatively, we can use pretrained torchvision models (see README).\n # model_fp = 'resnet18'\n \n # Define mean and std\n mean = np.array([0.485, 0.456, 0.406])\n std = np.array([0.229, 0.224, 0.225])\n # Alternatively, we could also pass a .t7 file path to the constructor\n \n # Define the image input size to the model with order:\n # Channels x Height x Width\n input_size = (3, 224, 224)\n \n adapter = TorchAdapter(model_fp, mean, std, input_size)", "Load available layers and their types", "layer_types = adapter.get_layers()\nfor lname, ltype in layer_types.items():\n print('%s:\\t%s' % (lname, ltype))", "Select convolutional layers", "conv_layers = [lname for lname, ltype in layer_types.items() if 'conv' in ltype.lower()]", "2. Forward images through network\n\nDefine path to a directory containing images and run them through the network", "images_dp = 'images/'\n\nfiles = os.listdir(images_dp)\n# Filter for jpeg extension\nimage_files = [os.path.join(images_dp, f) for f in files if f.endswith('.jpg')]\n\n# Run in batched fashion\nbatch_size = 32\n\n# As we run in batch mode, we have to store the intermediate layer outputs\nlayer_outputs = OrderedDict()\nfor layer in conv_layers:\n layer_outputs[layer] = []\n\nfor i in range(0, len(image_files), batch_size):\n image_list = image_files[i:(i+batch_size)]\n \n # Forward batch through network\n # The adapter takes care of loading images and transforming them to the right format.\n # Alternatively, we could load and transform the images manually and pass a list of numpy arrays.\n batch = adapter.preprocess(image_list)\n adapter.forward(batch)\n \n # Save a copy of the outputs of the convolutional layers.\n for layer in conv_layers:\n output = adapter.get_layeroutput(layer).copy()\n layer_outputs[layer].append(output)\n\n# Concatenate batch arrays to single outputs\nfor name, layer_output in layer_outputs.items():\n layer_outputs[name] = np.concatenate(layer_output, axis=0)", "3. 
Calculate summary statistics\n\nEstimate mean and standard deviation per layer", "means = [output.mean() for output in layer_outputs.values()]\nstds = [output.std() for output in layer_outputs.values()]\n\nplt.plot(means)\nplt.xticks(range(len(conv_layers)), conv_layers, rotation=45.0)\nplt.title('Convolution output mean over network depth');\nplt.xlabel('Layer');\n\nplt.plot(stds)\nplt.xticks(range(len(conv_layers)), conv_layers, rotation=45.0)\nplt.title('Convolution output std over network depth');\nplt.xlabel('Layer');" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tpin3694/tpin3694.github.io
python/pandas_split_lat_and_long_into_variables.ipynb
mit
[ "Title: Split Combined Lat/Long Coordinate Variables Into Seperate Variables In Pandas\nSlug: pandas_split_lat_and_long_into_variables\nSummary: Split Combined Lat/Long Coordinate Variables Into Seperate Variables In Pandas\nDate: 2016-05-01 12:00\nCategory: Python\nTags: Data Wrangling\nAuthors: Chris Albon \nPreliminaries", "import pandas as pd\nimport numpy as np", "Create an example dataframe", "raw_data = {'geo': ['40.0024, -105.4102', '40.0068, -105.266', '39.9318, -105.2813', np.nan]}\ndf = pd.DataFrame(raw_data, columns = ['geo'])\ndf", "Split the geo variable into seperate lat and lon variables", "# Create two lists for the loop results to be placed\nlat = []\nlon = []\n\n# For each row in a varible,\nfor row in df['geo']:\n # Try to,\n try:\n # Split the row by comma and append\n # everything before the comma to lat\n lat.append(row.split(',')[0])\n # Split the row by comma and append\n # everything after the comma to lon\n lon.append(row.split(',')[1])\n # But if you get an error\n except:\n # append a missing value to lat\n lat.append(np.NaN)\n # append a missing value to lon\n lon.append(np.NaN)\n\n# Create two new columns from lat and lon\ndf['latitude'] = lat\ndf['longitude'] = lon", "View the dataframe", "df" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
boffi/boffi.github.io
dati_2015/ha03/06_3_DOF_System.ipynb
mit
[ "import numpy as np\nfrom numpy import poly1d as p, polyint\nfrom scipy.linalg import eigh\nnp.set_printoptions(suppress=True)\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use(['fivethirtyeight', './00_mplrc'])\nfrom matplotlib import rcParams\nl_colors = rcParams['axes.color_cycle']\n\nfrom IPython.display import HTML, Latex, display\nHTML(open(\"00_custom.css\",\"r\").read())", "3 DOF System\n<img src=\"bending.svg\" style=\"width:100%\">\nIn the figure above\n<ol type='a'>\n <li> the system under investigation, with the two supported masses and\n the dynamical degrees of freedom that describe the system deformation\n (top left);\n <li> the three diagrams of bending moment (in red positive bending moments,\n in blue negative ones) that derive from application of external unit\n forces, corresponding to each of the three degrees of freedom.\n</ol>\n\nThe same bending moments are represented in the following data structure in terms of polynomials of first degree p((linear_coefficient, constant_coefficient)), each row corresponding to a load condition while the terms in each row are corresponding, the first 4 to the segments on length L on the horizontal part, from left to right (1,2,3) and from rigth to left (4), the fifth is corresponding to the vertical part, from top to bottom.", "bm = [[p(( 1, 0)), p(( 1, 1)), p(( 1, 2)), p(( 3, 0)), p(( 0, 0))],\n [p(( 0, 0)), p(( 0, 0)), p(( 1, 0)), p(( 1, 0)), p(( 0, 0))],\n [p(( 0, 0)), p(( 0,-1)), p(( 0,-1)), p((-1, 0)), p((-1, 0))]]", "To compute the flexibilities we sum the integrals of the products of bending moments on each of the five spans of unit length that we are using and place the results in a 2D data structure that is eventually converted to a matrix by np.mat.", "F = np.mat([[sum(polyint(bm0[i]*bm1[i])(1) for i in range(5))\n for bm1 in bm] for bm0 in bm])\n\nprint('F = 1/6 * L^3/EJ *')\nprint(F*6)", "we invert the flexibility matrix to obtain the stiffness matrix", "K = F.I\nprint('K = 3/136 * EJ/L^3 *')\nprint(K*136/3)", "and eventually we define the mass matrix", "M = np.mat(np.eye(3)) ; M[2,2]=2\nprint('M = m *')\nprint (M)\n\nevals, evecs = eigh(K,M)\nprint(\"Eigenvalues, w_0^2 *\", evals)\nfor i in range(3):\n if evecs[0,i]<0: evecs[:,i]*=-1\nprint(\"Matrix of mass normalized eigenvectors,\")\nprint(evecs)", "The Load\nThe load is $F_0\\,\\boldsymbol{r}\\,f(t)$ with $F_0 = \\delta EJ/L^3$, $\\boldsymbol{r}=\\begin{Bmatrix}1&0&0\\end{Bmatrix}^T$ and\n$f(t) = 2\\sin^2(\\omega_0t/2)=1-\\cos(\\omega_0t)$ for $0\\le \\omega_0 t\\le 2\\pi$ while $f(t)=0$ otherwise.", "pi = np.pi\nt1 = np.linspace(0,2*pi,601)\n\nplt.plot(t1,1-np.cos(t1))\n\nplt.xlabel(r'$\\omega_0t$', size=20)\nplt.ylabel(r'$p(t)\\,\\frac{L^3}{\\delta\\,EJ}$', size=20)\n\nplt.xlim((0,2*pi))\nplt.ylim((-0.05,2.05))\n\nplt.xticks((0,pi/2,pi,pi*1.5,2*pi),\n (r'$0$', r'$\\pi/2$', r'$\\pi$', r'$3\\pi/2$', r'$2\\pi$'), fontsize=20)\n\nplt.title('The normalized load')\nplt.show()", "The Particular Integrals\nFor our load, each modal equation of motion can be written as\n\\begin{align}\n m \\ddot q_i + m \\Lambda_i^2\\omega_0^2 q_i &=\n \\delta\\frac{EJ}{L^3}\\boldsymbol\\psi_i^T\\boldsymbol{r}\\,\n (1-\\cos(\\omega_0t))\\Rightarrow\\\n \\ddot q_i + \\Lambda_i^2\\omega_0^2 q_i &= G_i \\delta\\omega_0^2 \\,\n (1-\\cos(\\omega_0t)) \n\\end{align}\nwith $G_i = \\boldsymbol\\psi_i^T\\boldsymbol{r}.$\nWith $\\xi_i = C_i + D_i \\cos(\\omega_0 t)$, substituting in the equation of motion and considering separately the constant terms and the cosine terms, with 
appropriate simplifications we have\n\\begin{align}\n \\Lambda_i^2\\,C_i &= +G_i \\, \\delta\\\n (\\Lambda_i^2-1) \\, D_i &= -G_i\\,\\delta\n\\end{align}\nand consequently\n$$ C_i = +\\delta\\,\\frac{\\boldsymbol\\psi_i^T\\boldsymbol{r}}{\\Lambda^2_i},\\qquad\n D_i = -\\delta\\,\\frac{\\boldsymbol\\psi_i^T\\boldsymbol{r}}{\\Lambda^2_i-1}.$$", "r = np.array((1,0,0))\nw = np.sqrt(evals)\nC = np.dot(evecs.T,r)/evals\nD = np.dot(evecs.T,r)/(1-evals)\ndisplay(Latex(r'\\begin{align}' +\n r'\\\\'.join(r\"\"\"\n \\frac{\\xi_%d(t)}\\delta &= %+g %+g \\cos(\\omega_0 t),\n && \\text{for } 0 \\le \\omega_0 t \\le 2\\pi.\n \"\"\" % (i+1,C[i],D[i]) for i in range(3)) +\n r'\\end{align}'))\n\nfor i in 0, 1, 2:\n plt.plot(t1, C[i]+D[i]*np.cos(t1), label=r'$\\xi_%d(t)$'%(i+1))\n \nplt.xlabel(r'$\\omega_0t$', size=20)\nplt.ylabel(r'$\\xi/\\delta$', size=20)\nplt.legend(loc=0, ncol=3)\nplt.xlim((0,2*pi))\nplt.xticks((0,pi/2,pi,pi*1.5,2*pi),\n (r'$0$', r'$\\pi/2$', r'$\\pi$', r'$3\\pi/2$', r'$2\\pi$'))\nplt.title('The particular integrals, mode by mode')\nplt.show()", "Modal Responses\nWith respect to the forced phase, the modal responses have the generic expression\n\\begin{align}\n q_i(t) & = A_i\\cos(\\Lambda_i\\omega_0t)\n + B_i\\sin(\\Lambda_i\\omega_0t) + C_i + D_i\\cos(\\omega_0t),\\\n \\dot q_i(t) & = \\Lambda_i\\omega_0 \\left(\n B_i\\cos(\\Lambda_i\\omega_0t) - A_i\\sin(\\Lambda_i\\omega_0t) \\right) -\n \\omega_0 D_i \\sin(\\omega_0t),\n\\end{align}\nand we can write, for the specified initial rest conditions, that\n$$ A_i + C_i + D_i = 0, \\qquad B_i = 0$$\nhence\n\\begin{align}\n q_i(t) & = (1-\\cos(\\Lambda_i\\omega_0t)) C_i\n + (\\cos(\\omega_0t) - \\cos(\\Lambda_i\\omega_0t)) D_i,\\\n {\\dot q}_i(t) & = \\Lambda_i\\omega_0 (C_i+D_i) \\sin(\\Lambda_i\\omega_0t) -\n \\omega_0 D_i \\sin(\\omega_0t).\n\\end{align}", "A = -C - D\nL = np.sqrt(evals)\n\nt1 = np.linspace(0,2*pi,601)\nq1 = [A[i]*np.cos(L[i]*t1) + C[i] + D[i]*np.cos(t1) for i in (0,1,2)]\n\ndisplay(Latex(r'\\begin{align}' +\n r'\\\\'.join(r\"\"\"\n\\frac{q_%d(t)}\\delta &= %+g %+g \\cos(\\omega_0 t) %+g \\cos(%g\\omega_0t), &&\n\\text{for } 0 \\le \\omega_0 t \\le 2\\pi.\n \"\"\" % (i+1,C[i],D[i],A[i],L[i]) for i in range(3)) +\n r'\\end{align}'))", "With respect to the free response phase, $2\\pi \\le \\omega_0t$, writing\n$$\n q^_i(t) = A^_i \\cos(\\Lambda_i\\omega_0t) + B^*_i \\sin(\\Lambda_i\\omega_0t)\n$$\nimposing the continuity of modal displacements and modal velocities we have\n\\begin{align}\n q_i(t_1) &= A^_i \\cos(\\Lambda_i\\omega_0t_1) + B^_i \\sin(\\Lambda_i\\omega_0t_1)\\\n \\dot q_i(t_1) &= \\big(\n B^_i \\cos(\\Lambda_i\\omega_0t_1) - A^_i \\sin(\\Lambda_i\\omega_0t_1)\n \\big) \\Lambda_i\\omega_0\n\\end{align}\nthat gives\n\\begin{align}\n A^_i &= \\frac{q_i(t_1)\\Lambda_i\\omega_0\\cos(\\Lambda_i\\omega_0t_1) - \\dot q_i(t_1)\\sin(\\Lambda_i\\omega_0t_1)}{\\Lambda_i\\omega_0} \\\n B^_i &= \\frac{q_i(t_1)\\Lambda_i\\omega_0\\sin(\\Lambda_i\\omega_0t_1) + \\dot q_i(t_1)\\cos(\\Lambda_i\\omega_0t_1)}{\\Lambda_i\\omega_0} \\ \n\\end{align}", "ct1 = np.cos(L*2*pi)\nst1 = np.sin(L*2*pi)\n\nq0t1 = C + D*np.cos(2*pi) + A*ct1\nq1t1 = - D*np.sin(2*pi) - A*st1*L\n\nprint(q0t1, q1t1)\nAs = (q0t1*L*ct1 - q1t1*st1)/L\nBs = (q0t1*L*st1 + q1t1*ct1)/L\n\nprint(As*ct1+Bs*st1, L*(Bs*ct1-As*st1))\nt2 = np.linspace(2*pi, 4*pi, 601)\nq2 = [As[i]*np.cos(L[i]*t2) + Bs[i]*np.sin(L[i]*t2) for i in (0,1,2)]\n\ndisplay(Latex(r'\\begin{align}' +\n r'\\\\'.join(r\"\"\"\n\\frac{q^*_%d(t)}\\delta &= %+g \\cos(%g\\omega_0 t) %+g \\sin(%g\\omega_0t), 
&&\n\\text{for } 2\\pi \\le \\omega_0 t.\n \"\"\" % (i+1, As[i], L[i], Bs[i], L[i]) for i in range(3)) +\n r'\\end{align}'))", "Plotting the modal responses\nLet's plot the modal responses, first one by one, to appreciate the details of the single modal response", "for i in (0,1,2): \n plt.plot(t1/pi,q1[i], color=l_colors[i],\n label='$q_{%d}(t)$'%(i+1))\n plt.plot(t2/pi,q2[i], color=l_colors[i])\n plt.xlabel(r'$\\omega_0t/\\pi$', fontsize=18)\n plt.ylabel(r'$q/\\delta$', fontsize=18)\n plt.legend(loc=0, fontsize=18)\n plt.show()", "then all of them in a single plot, to appreciate the relative magnutudes of the different modal responses", "for i in (0,1,2): \n plt.plot(t1/pi,q1[i], color=l_colors[i],\n label='$q_{%d}(t)$'%(i+1))\n plt.plot(t2/pi,q2[i], color=l_colors[i])\n \nplt.xlabel(r'$\\omega_0t/\\pi$', fontsize=18)\nplt.ylabel(r'$q/\\delta$', fontsize=18)\nplt.legend(loc=0, fontsize=18)\nplt.show()", "System Response in Natural Coordinates\nWe stack together the times and the modal responses for the forced and the free phases in two single vectors, then we compute the nodal response by premultiplying the modal response by the eigenvectors matrix", "t = np.hstack((t1, t2))\nq = np.hstack((q1, q2))\nx = np.dot(evecs, q)", "Plotting of the natural coordinate responses\nAll of them in a single plot, as they have the same order of magnitude", "for i in (0,1,2): plt.plot(t/pi,x[i],\n label='$x_{%d}(t)$'%(i+1))\n \nplt.xlabel(r'$\\omega_0t/\\pi$', fontsize=18)\nplt.ylabel(r'$x/\\delta$', fontsize=18)\nplt.legend(loc=0, fontsize=18)\nplt.show()", "Final Displacements and Final Velocities\nSay that $t_2=4\\pi/\\omega_0$, we compute the vectors of sines and cosines with different frequencies at $t_2$, then we compute the modal displacements and velocities (note that the dimensional velocities are these adimensional velocities multiplied by $\\omega_0\\,\\delta$) and eventually we compute the nodal quantities by premultiplication by the eigenvectors matrix.", "ct2 = np.cos(L*4*pi)\nst2 = np.sin(L*4*pi)\n\nq0t2 = As*ct2+Bs*st2 ; q1t2 = L*(Bs*ct2-As*st2)\n\ndisplay(Latex(r\"$\\boldsymbol x(t_2) = \\{\"+\n \",\".join(\"%10.6f\"%x for x in np.dot(evecs,q0t2))+\n \"\\}\\,\\delta$\"))\ndisplay(Latex(r\"$\\boldsymbol v(t_2) = \\{\"+\n \",\".join(\"%10.6f\"%x for x in np.dot(evecs,q1t2))+\n \"\\}\\,\\omega_0\\,\\delta$\"))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mikezawitkowski/fireRiskSF
notebooks/exploratory/0.8-mission-fire-exploration-revisited.ipynb
mit
[ "\"\"\"\nauthor: mikezawitkowski\ncreated on 7/17/2016\n\"\"\"\nfrom __future__ import division, print_function\nimport pandas as pd\n%matplotlib inline\nimport seaborn as sns", "Mission Fire Exploration\nAt the time that this was created, there is a lot of press going on right now about Mission district fires, and gossip that maybe it's the cause of landlords or some arsonist trying to get more money for older properties. This notebook captures some\ninitial thoughts about this.\nThis exploration troubles me, because I don't see an upside to producing this, however I see quite a few downsides if I get this wrong.\nThis seems to be a very politically charged topic at the moment, and there are a lot of people who are claiming things and getting carried away with facts that may or may not be true. \nI'm not saying that one side or another is more right or wrong, but I'm confident that in the end, the data will prevail. \nIn the meantime, just as part of this exploration, I was curious to see if I could verify some of the claims that are being put forth, and figure out whether there are some other explanations, and just wrap my head around the problem and the data being used.", "query_url = 'https://data.sfgov.org/resource/wbb6-uh78.json?$order=close_dttm%20DESC&$offset={}&$limit={}'\n# query_url = \"https://data.sfgov.org/resource/wbb6-uh78.json?$where=alarm_dttm>='2013-02-12 04:52:17'&$order=close_dttm%20DESC\"\n# query_url = \"https://data.sfgov.org/resource/wbb6-uh78.json?$where=alarm_dttm>='2013-02-12 04:52:17'\"\n\noffset = 0\nlimit = 1000000\ndf = pd.read_json(query_url.format(offset, limit))\n# df = pd.read_json(query_url)\n\n\ncols_to_drop = [\"automatic_extinguishing_sytem_failure_reason\",\n \"automatic_extinguishing_sytem_type\",\n \"battalion\",\n \"box\",\n \"call_number\",\n \"detector_effectiveness\",\n \"detector_failure_reason\",\n \"ems_personnel\",\n \"ems_units\",\n \"exposure_number\",\n \"first_unit_on_scene\",\n \"ignition_factor_secondary\",\n \"mutual_aid\",\n \"no_flame_spead\",\n \"other_personnel\",\n \"other_units\",\n \"station_area\",\n \"supervisor_district\"]\ndf = df.drop(cols_to_drop, axis=1)\n\nfor col in df.columns:\n if 'dttm' in col:\n df[col] = pd.to_datetime(df[col])\n\ndf.alarm_dttm.min() # The earliest timestamp of this dataset is 2013-02-12 04:52:17\n\ndf.estimated_property_loss.value_counts(dropna=False)\n\ndf.shape\n\n# So we have 100,000 rows of data, going all the way back to February 10, 2013\n# There is thoughts that there's a correlation with year and cost, especially in the mission\ndf[df.estimated_property_loss.isnull()].__len__()\n\n# of the 100,000 rows, 96,335 are null\n96335 / float(df.shape[0])\n\n# wow, so where are these companies getting their data about the costs associated with fires?\n# it's not from the sfgov website. 
we'll need to table that and come back later.\n\ndf['year'] = df.alarm_dttm.apply(lambda x: x.year)\n\ntemp_df = df[df.estimated_property_loss.notnull()]\n\ntemp_df.shape\n\ntemp_df.groupby('year').sum()['estimated_property_loss']", "According to wikipeda, the mission district falls into two zipcodes, 94103, 94110\nSo let's look at just those zipcodes with the same grouping as above", "mask = ((temp_df.zipcode.notnull()) & (temp_df.zipcode.isin([94103, 94110])))\ntemp_df[mask].groupby('year').sum()['estimated_property_loss']\n\n# So based on the above data yes, the 2015 fires for those two zipcodes doubled, \n# and we can look into why, but could it be a symptom of population growth?\n\n# this article http://sf.curbed.com/2016/7/1/12073544/mission-fires-arson-campos\n# said that there were 2,788 blazes... but that's wrong, it's 2,788 units impacted. \n# One blaze could impact multiple units\n# \n# This infographic shows number of units impacted by fire by neighborhood,\n# but isn't this seriously misleading? https://infogr.am/sf_fires_by_zip-3\n# \n# Ok, no seriously, I'm setting aside this mission research, because the upside for getting it right is low\n# but the downside for getting it wrong is very impactful. Not the sort of press we want\n# TODO: check this out and compare it to the data set\n# https://celestelecomptedotcom.files.wordpress.com/2015/04/15-04-05_wfs-greater-alarms-01-01-01-04-05-15.pdf", "Initial Conclusions\nJust reading through the various articles, it seems that there's quite a bit of misinformation, and misuse of the dataset that is available for estimating fires. sf.curbed.com is saying there were 2,788 blazes in the Mission district over the full time period, but actually it's 2,788 units that were impacted. It could simply be a fact of there being a higher population density in that area, or age of buildings. There's a lot of reasons that fires could be higher in the Mission than in other parts of the city.\nHowever, I see a huge glaring problem in trying to make estimates regarding property damage values, and that is because 90% of the data points and calls for service to the fire department have no damage estimates listed. Yes, it is true that in 2014 to 2015 the estimated property loss had doubled, but let's take a little closer look, shall we?", "mask = ((temp_df.zipcode.notnull()) & \n (temp_df.zipcode.isin([94103, 94110])) & \n (temp_df.year == 2014))\ntemp_df[mask].groupby('year').sum()['estimated_property_loss']", "Disclaimers from the Fire Marshal\nhttps://celestelecompte.com/2015/04/25/open-data-fire-incident-report-san-francisco-2004-2015/\nI noticed a quote from that original letter: \n\nIMPORTANT – PLEASE NOTE: Entries contained in the attached report (including all monetary fire loss estimates) are intended for the sole use of the State Fire Marshal. Estimations and evaluations represent “most likely” and “most probable” cause and effect. Any representation as to the validity or accuracy of reported conditions (including all monetary fire loss estimates) outside the State Fire Marshal’s office is neither intended nor implied.\n\nWhen this data was requested, the response letter was explicit about the fact that the estimates were for internal use, and likely erroneous, and here we are using those estimates to claim that the cost of fires has gotten out of control. \nSo what do we do? 
We get all up in arms about a chart that somebody made about how the financial numbers are so much higher for the Mission:\nhttps://infogr.am/YCxOktys5EEYfx8r\nIn case you missed it, that link to the infogr.am is titled \"Financial Losses: Dramatic Increase in the Mission\"\nDefinition of The Mission\nWikipedia gives me two zipcodes, and using that, I'm able to get a rough guess of the same doubling effect of costs.\nThis other document has a different, more specific definition of the Mission:\n\nThe Mission District is defined for purposes of this report as the area bounded roughly by Market Street, Valencia\nStreet, Cesar Chavez Street, U.S. 101, 23rd Street, Hampshire Street, 17th Street, Vermont Street, Division Street,\nand 11th Street. These boundaries correspond to Census tracts 177, 201, 208, 209, 228.01, 228.03, 228.09, 229.02,\nand 229.03.", "mask = ((df.estimated_property_loss.notnull()))\nsns.df[mask].groupby('year').sum()['estimated_property_loss']\n\n# So based on the above data yes, the 2015 fires for those two zipcodes doubled, \n# and we can look into why, but could it be a symptom of population growth?\n# according to the document mentioned above and the report, it says that the population size shrunk. OK... \n# but the data that is being looked at is a HUGE period. There was a census report in 2000, and then another one \n# that's a large bucket of 2009-2013. The change reported was a 9% decrease, not exactly a huge boom.\n# My next theory is that the reason that the cost increased is simply that they got better about capturing records\n# for certain areas\n\n# Let's try a little experiment\n# let's look at which fire areas are better at keeping records, shall we?\ndf['loss_recorded'] = 0\n\nmask = ((df.estimated_property_loss.notnull()))\ndf.loc[mask, 'loss_recorded'] = 1\n\nmask = ((df.zipcode.notnull()))\nzipgroup = df[mask].groupby(['zipcode'])\n\n\nzipgroup.mean()['loss_recorded'].plot(kind='barh')\n\n# the above document shows the likelihood that the estimated_property_loss value \n# is recorded based on zipcode.\n# Mission District is within 94103, 94110 zipcodes\n# \nzipgroup.mean()['loss_recorded'][94103]\n\nzipgroup.mean()['loss_recorded'][94110]\n\nmask = ((df.estimated_property_loss.notnull()) & \n (df.zipcode == 94110))\nsns.distplot(df[mask].estimated_property_loss)\n\nmask = ((df.estimated_property_loss.notnull()) & \n (df.zipcode == 94103))\nsns.distplot(df[mask].estimated_property_loss)\n\ndf['estimated_property_loss'] = pd.to_numeric(df['estimated_property_loss'])\n\ndf['estimated_property_loss'] = df['estimated_property_loss'].fillna(0)\n\ndf.info()\n\nmask = ((df.estimated_property_loss.notnull()) & \n (df.zipcode == 94103))\ndf[mask].estimated_property_loss.value_counts(dropna=False, normalize=True, bins=50)\n\ndf['month'] = df.alarm_dttm.apply(lambda x: x.month)\n\nmask = ((df.month == 6) & (df.year == 2016))\ndf[mask].describe()\n\ndf.describe()\n\ndf.alarm_dttm.min()\n\ndf.alarm_dttm.max()\n\n# what is odd is how the fire civilian fatalities have a max value of 1, which makes it concerning that the dataset\n# is inaccurate and needs to be cleaned more carefully before we proceed.\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
OpenAstronomy/workshop_sunpy_astropy
02-Python1/02-Python-1-Lists_Instructor.ipynb
mit
[ "Storing Multiple Values in Lists\n<br/>\n<section class=\"objectives panel panel-warning\">\n<div class=\"panel-heading\">\n<h2><span class=\"fa fa-certificate\"></span> Learning Objectives </h2>\n</div>\n<ul>\n<li> Explain what a list is </li>\n<li> Create and index lists of simple values </li>\n</ul>\n</section>\n\nJust as a 'for' loop is a way to do operations many times, a list is a way to store many values. Unlike NumPy arrays, lists are built into the language (so we don’t have to load a library to use them). We create a list by putting values inside square brackets:", "odds = [1, 3, 5, 7]\nprint('odds are:', odds)", "We select individual elements from lists by indexing them:", "print('first and last:', odds[0], odds[-1])", "and if we loop over a list, the loop variable is assigned elements one at a time:", "for number in odds:\n print(number)", "There is one important difference between lists and strings: we can change the values in a list, but we cannot change the characters in a string. For example:", "names = ['Newton', 'Darwing', 'Turing'] # typo in Darwin's name\nprint('names is originally:', names)\nnames[1] = 'Darwin' # correct the name\nprint('final value of names:', names)", "works, but:", "name = 'Bell'\nname[0] = 'b'", "Ch-Ch-Ch-Changes\nData which can be modified in place is called mutable, while data which cannot be modified is called immutable. Strings and numbers are immutable. This does not mean that variables with string or number values are constants, but when we want to change the value of a string or number variable, we can only replace the old value with a completely new value.\nLists and arrays, on the other hand, are mutable: we can modify them after they have been created. We can change individual elements, append new elements, or reorder the whole list. For some operations, like sorting, we can choose whether to use a function that modifies the data in place or a function that returns a modified copy and leaves the original unchanged.\nBe careful when modifying data in place. If two variables refer to the same list, and you modify the list value, it will change for both variables! If you want variables with mutable values to be independent, you must make a copy of the value when you assign it.\nBecause of pitfalls like this, code which modifies data in place can be more difficult to understand. However, it is often far more efficient to modify a large data structure in place than to create a modified copy for every small change. You should consider both of these aspects when writing your code.\nThere are many ways to change the contents of lists besides assigning new values to individual elements:", "odds.append(11)\nprint('odds after adding a value:', odds)\n\ndel odds[0]\nprint('odds after removing the first element:', odds)\n\nodds.reverse()\nprint('odds after reversing:', odds)", "While modifying in place, it is useful to remember that python treats lists in a slightly counterintuitive way.\nIf we make a list and (attempt to) copy it then modify in place, we can cause all sorts of trouble:", "odds = [1, 3, 5, 7]\nprimes = odds\nprimes += [2]\nprint('primes:', primes)\nprint('odds:', odds)", "This is because python stores a list in memory, and then can use multiple names to refer to the same list. 
If all we want to do is copy a (simple) list, we can use the list() command, so we do not modify a list we did not mean to:", "odds = [1, 3, 5, 7]\nprimes = list(odds)\nprimes += [2]\nprint('primes:', primes)\nprint('odds:', odds)", "This is different from how variables worked in lesson 1, and more similar to how a spreadsheet works.\n<section class=\"objectives panel panel-success\">\n<div class=\"panel-heading\">\n<h2><span class=\"fa fa-pencil\"></span> Turn a string into a list </h2>\n</div>\n<br/>\nUse a for-loop to convert the string `“hello”` into a list of letters.\n<pre>[\"h\", \"e\", \"l\", \"l\", \"o\"]</pre>\n\nNB: you can create an empty list using `my_list = []`\n\n</section>\n\n<section class=\"objectives panel panel-success\">\n<div class=\"panel-heading\">\n<h2><span class=\"fa fa-pencil\"></span> Tuples and Exchanges </h2>\n</div>\n<br/>\nExplain what the overall effect of this code is:\n\n<pre>\nleft = 'L'\nright = 'R'\n\ntemp = left\nleft = right\nright = temp\n</pre>\n<br/>\nCompare it to:\n<pre> left, right = right, left </pre>\n<br/>\nDo they always do the same thing? Which does your brain scan better?\n</section>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CivicKnowledge/ambry
test/bundle_tests/build.example.com/classification/Using SQL JOINS.ipynb
bsd-2-clause
[ "Healthy Communities Data and Indicators Project (HCI)\nHealthy Communities Data and Indicators Project (HCI)", "from ambry import get_library\nl = get_library()\nb = l.bundle('cdph.ca.gov-hci-0.0.2')", "First, create a set of views to limit the individual indicators to one record per county. The Ambry SQL parser is \nver simplistic, and can't handle anything mroe then very simple joins.", "w = b.warehouse('hci_counties')\nw.clean()\nprint w.dsn\nw.query(\"\"\"\n\n-- Get only counties in California\nCREATE VIEW geo AS SELECT gvid, name AS county_name, geometry FROM census.gov-tiger-2015-counties\nWHERE statefp = 6;\n\n-- Get only records for all race/ethinicities\nCREATE VIEW hf_total AS SELECT gvid, mrfei FROM cdph.ca.gov-hci-healthy_food-county\nWHERE race_eth_name = 'Total';\n\n-- Get only records for all race/ethinicities\nCREATE VIEW aq_total AS SELECT gvid, pm25_concentration FROM cdph.ca.gov-hci-air_quality-county\nWHERE race_eth_name = 'Total';\n\n-- THe overty table has a lot of otrher categories, for report year and type of poverty\nCREATE VIEW pr_total AS SELECT gvid, percent FROM cdph.ca.gov-hci-poverty_rate-county\nWHERE race_eth_name = 'Total' AND reportyear='2008-2010' AND poverty='Overall';\n\n\"\"\").close()", "Now we can run a query to join the indicators.", "sql=\"\"\"\nSELECT county_name, mrfei, pm25_concentration, percent as percent_poverty FROM geo as counties\nJOIN hf_total ON hf_total.gvid = counties.gvid\nJOIN aq_total ON aq_total.gvid = counties.gvid\nJOIN pr_total ON pr_total.gvid = counties.gvid;\n\"\"\"\n\ndf = w.dataframe(sql)\ndf.head()\n\n\ndf.corr()", "Plot the PM2.5 Concentration, a measure of particulate air polution.", "%matplotlib inline\nsql=\"\"\"\nSELECT county_name, mrfei, pm25_concentration, percent as percent_poverty, geometry FROM geo as counties\nLEFT JOIN hf_total ON hf_total.gvid = counties.gvid\nLEFT JOIN aq_total ON aq_total.gvid = counties.gvid\nLEFT JOIN pr_total ON pr_total.gvid = counties.gvid;\n\"\"\"\n\nw.geoframe(sql).plot(column='pm25_concentration')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
MinnowBoard/fishbowl-notebooks
Dotstar-LED.ipynb
mit
[ "Dotstar LED\nDotstar LEDs are individually addressable LED strips for use with Arduinos, Raspberry Pis, and the Minnowboard. It connects to the device through the SPI pins and is driven here by Python. \nStart by importing the class file for the LEDs:", "from pyDrivers import dotstar", "Create Dotstar object\nYou can pass several arguments to the Dotstar class constructor to change the behavior of the LED class. \n\nds = dotstar.Dotstar(led_count=72, bus=0, init_data=0, init_brightness=0)\nParameters:\nled_count = some_number_of_leds\nChange the number of LEDs in your strip. Note that this counts the raw number of individual LEDs, not how many strips/devices you have. Make sure this is set so all the LEDs are used.\nbus = 0\nChange the SPI bus. If you do not specify one, it will be initialized on bus 0, which is the default for the Minnowboard.\ninit_data = some_brightness_value + some_hue\nChange the initial value of the LED strip. By default all the LEDS are initialized to the first color pushed. If you plan on having all the LEDs start off dark, don't set anything here.\ninit_brightness = some_brightness\nChange the initial brightness of the LEDs. Valid brightness settings range from 0 to 10, representing the intensity of the LEDs from 0% to 100%. If you want the LEDs to start off dark, set this to 0 at the start. \n\nHere is a typical initialization, starting all 72 LEDS (or 2 Adafruit Dotstar LED strips connected together) turned off:", "ds = dotstar.Dotstar(led_count=72*3,init_brightness=0)", "Class Methods\nNow we can make use of the functions in the class to set the colors and intesnity of each LED. The class works by populating a deque with the LED values you want, and then pushing all the data at once to the LED strip. The following methods provide the most basic functionality:\n\nDotstar.set(which_LED, brightness_level, red_hue, blue_hue, green_hue)\nThis function will add the LED to activate to the queue. The brightness and hue options are on a scale of 0 to 256, and the LED selection is from 0 to \nDotstar.draw()\nThis funciton draws the created deque to the LED strip. This function will clear the current deque, allowing you to populate another one.\nExample\nRun this section to create a sequence of 5 red LEDS that move throughout the length of the LEDs. It looks like the LED array on KITT from Knight Rider.", "while True:\n for current_led in range (4, ds.led_count-4):\n ds.set(current_led-4, 0, 0, 0, 0)\n ds.set(current_led-2, 10, 100, 0, 0)\n ds.set(current_led-1, 50, 200, 0, 0)\n ds.set(current_led, 50, 250, 0, 0)\n ds.set(current_led+1, 50, 200, 0, 0)\n ds.set(current_led+2, 50, 150, 0, 0)\n ds.set(current_led+4, 0, 0, 0, 0)\n ds.draw()\n for current_led in range(ds.led_count-5, 4, -1):\n ds.set(current_led-3,10,100,0,0)\n ds.set(current_led-2,10,150,0,0)\n ds.set(current_led-1,50,200,0,0)\n ds.set(current_led,50,250,0,0)\n ds.set(current_led+1,50,200,0,0)\n ds.set(current_led+2,50,150,0,0)\n ds.set(current_led+4,0,0,0,0)\n ds.draw()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
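The notebook above describes a buffer-then-draw API without showing what draw() actually sends to the strip. The stand-in class below is a rough sketch of that idea, not the pyDrivers driver: it assumes the APA102-style frame layout Dotstar strips use (4-byte start frame, per-LED frames of a 5-bit brightness header followed by blue/green/red, then an end frame) and simply returns the byte string a real driver would write over SPI.

```python
# Not the real pyDrivers.dotstar class -- a minimal model of set()/draw() for illustration.
class FakeDotstar:
    def __init__(self, led_count=72):
        self.led_count = led_count
        self.pixels = [(0, 0, 0, 0)] * led_count   # (brightness, r, g, b), all off

    def set(self, which_led, brightness, red, green, blue):
        # APA102 brightness is 5 bits, so clamp to 31 (an assumption about the hardware).
        self.pixels[which_led] = (min(brightness, 31), red, green, blue)

    def draw(self):
        frame = [0x00] * 4                                   # start frame
        for brightness, red, green, blue in self.pixels:
            frame += [0xE0 | brightness, blue, green, red]   # per-LED frame: header, B, G, R
        frame += [0xFF] * 4                                  # end frame
        return bytes(frame)                                  # a real driver writes this over SPI

strip = FakeDotstar(8)
strip.set(0, 10, 255, 0, 0)      # first LED, dim red
print(strip.draw().hex())
```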
fablln/Deep-Learning
word_prediction_lstm/TP3-notebook.ipynb
mit
[ "<h1 style=\"text-align:center\">Deep Learning </h1>\n<h1 style=\"text-align:center\"> Lab Session 3 - 3 Hours </h1>\n<h1 style=\"text-align:center\">Long Short Term Memory (LSTM) for Language Modeling</h1>\n\n<b> Student 1:</b> CANALE\n<b> Student 2:</b> ELLENA\nIn this Lab Session, you will build and train a Recurrent Neural Network, based on Long Short-Term Memory (LSTM) units, for a next word prediction task. \nAnswers and experiments should be made by groups of one or two students. Each group should fill and run appropriate notebook cells. \nOnce you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as a pdf document using print as PDF (Ctrl+P). Do not forget to run all your cells before generating your final report and do not forget to include the names of all participants in the group. The lab session should be completed by June 9th 2017.\nSend your pdf file to benoit.huet@eurecom.fr and olfa.ben-ahmed@eurecom.fr using [DeepLearning_lab3] as Subject of your email.\nIntroduction\nYou will train an LSTM to predict the next word using a sample short story. The LSTM will learn to predict the next item of a sentence from the 3 previous items (given as input). Punctuation marks are considered as dictionary items so they can be predicted too. Figure 1 shows the LSTM and the process of next word prediction. \n<img src=\"lstm.png\" height=\"370\" width=\"370\"> \nEach word (and punctuation mark) from the text sentences is encoded by a unique integer. The integer value corresponds to the index of the corresponding word (or punctuation mark) in the dictionary. The network output is a one-hot vector indicating the index of the predicted word in the reversed dictionary (Section 1.2). For example, if the prediction is 86, the predicted word will be \"company\". \nYou will use a sample short story from Aesop’s Fables (http://www.taleswithmorals.com/) to train your model. \n<font size=\"3\" face=\"verdana\" > <i> \"There was once a young Shepherd Boy who tended his sheep at the foot of a mountain near a dark forest.\nIt was rather lonely for him all day, so he thought upon a plan by which he could get a little company and some excitement.\nHe rushed down towards the village calling out \"Wolf, Wolf,\" and the villagers came out to meet him, and some of them stopped with him for a considerable time.\nThis pleased the boy so much that a few days afterwards he tried the same trick, and again the villagers came to his help.\nBut shortly after this a Wolf actually did come out from the forest, and began to worry the sheep, and the boy of course cried out \"Wolf, Wolf,\" still louder than before.\nBut this time the villagers, who had been fooled twice before, thought the boy was again deceiving them, and nobody stirred to come to his help.\nSo the Wolf made a good meal off the boy's flock, and when the boy complained, the wise man of the village said:\n\"A liar will not be believed, even when he speaks the truth.\" \"</i> </font>. \nStart by loading the necessary libraries and resetting the default computational graph. 
For more details about the rnn packages, we suggest you take a look at https://www.tensorflow.org/api_guides/python/contrib.rnn", "import numpy as np\nimport collections # used to build the dictionary\nimport random\nimport time\nfrom time import time\nimport pickle # may be used to save your model \nimport matplotlib.pyplot as plt\n#Import Tensorflow and rnn\nimport tensorflow as tf\nfrom tensorflow.contrib import rnn \n\n# Target log path\nlogs_path = 'lstm_words'\nwriter = tf.summary.FileWriter(logs_path)", "Next-word prediction task\nPart 1: Data preparation\n1.1. Loading data\nLoad and split the text of our story", "def load_data(filename):\n    with open(filename) as f:\n        data = f.readlines()\n    data = [x.strip().lower() for x in data]\n    data = [data[i].split() for i in range(len(data))]\n    data = np.array(data)\n    data = np.reshape(data, [-1, ])\n    print(data)\n    return data\n\n#Run the cell \ntrain_file ='data/story.txt'\ntrain_data = load_data(train_file)\nprint(\"Loaded training data...\")\nprint(len(train_data))", "1.2. Symbols encoding\nThe LSTM inputs can only be numbers. A way to convert words (symbols or any items) to numbers is to assign a unique integer to each word. This process is often based on frequency of occurrence for efficient coding purposes.\nHere, we define a function to build an indexed word dictionary (word->number). The \"build_vocabulary\" function builds both:\n\nDictionary : used for encoding words to numbers for the LSTM inputs \nReversed dictionary : used for decoding the outputs of the LSTM into words (and punctuation).\n\nFor example, in the story above, we have 113 individual words. The \"build_vocabulary\" function builds a dictionary with the following entries ['the': 0], [',': 1], ['company': 85],...", "def build_vocabulary(words):\n    count = collections.Counter(words).most_common()\n    dic= dict()\n    for word, _ in count:\n        dic[word] = len(dic)\n\n    reverse_dic= dict(zip(dic.values(), dic.keys()))\n    return dic, reverse_dic", "Run the cell below to display the vocabulary", "dictionary, reverse_dictionary = build_vocabulary(train_data)\nvocabulary_size= len(dictionary) \nprint \"Dictionary size (Vocabulary size) = \", vocabulary_size\nprint(\"\\n\")\nprint(\"Dictionary : \\n\")\nprint(dictionary)\nprint(\"\\n\")\nprint(\"Reverted Dictionary : \\n\" )\nprint(reverse_dictionary)", "Part 2 : LSTM Model in TensorFlow\nSince you have defined how the data will be modeled, you are now going to develop an LSTM model to predict the word following a sequence of 3 words. \n2.1. Model definition\nDefine a 2-layer LSTM model. \nFor this use the following classes from the tensorflow.contrib library:\n\nrnn.BasicLSTMCell(number of hidden units) \nrnn.static_rnn(rnn_cell, data, dtype=tf.float32)\nrnn.MultiRNNCell(,)\n\nYou may need some tensorflow functions (https://www.tensorflow.org/api_docs/python/tf/) :\n- tf.split\n- tf.reshape \n- ...", "def lstm_model(x, w, b, n_input, n_hidden):\n    # reshape to [1, n_input]\n    x = tf.reshape(x, [-1, n_input])\n\n    # Generate a n_input-element sequence of inputs\n    # (eg. 
[had] [a] [general] -> [20] [6] [33])\n    x = tf.split(x,n_input,1)\n\n    # 1-layer LSTM with n_hidden units.\n    rnn_cell = rnn.BasicLSTMCell(n_hidden)\n    \n    #improvement\n    #rnn_cell = rnn.MultiRNNCell([rnn.BasicLSTMCell(n_hidden),rnn.BasicLSTMCell(n_hidden)])\n    #rnn_cell = rnn.MultiRNNCell([rnn.BasicLSTMCell(n_hidden),rnn.BasicLSTMCell(n_hidden),rnn.BasicLSTMCell(n_hidden)])\n\n    # generate prediction\n    outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32)\n\n    # there are n_input outputs but\n    # we only want the last output\n    return tf.matmul(outputs[-1], w['out']) + b['out']", "Training Parameters and constants", "# Training Parameters\nlearning_rate = 0.001\nepochs = 50000\ndisplay_step = 1000\nn_input = 3\n\n#For each LSTM cell that you initialise, supply a value for the hidden dimension, number of units in LSTM cell\nn_hidden = 64\n\n# tf Graph input\nx = tf.placeholder(\"float\", [None, n_input, 1])\ny = tf.placeholder(\"float\", [None, vocabulary_size])\n\n# LSTM weights and biases\nweights = { 'out': tf.Variable(tf.random_normal([n_hidden, vocabulary_size]))}\nbiases = {'out': tf.Variable(tf.random_normal([vocabulary_size])) }\n\n\n#build the model\npred = lstm_model(x, weights, biases,n_input,n_hidden)", "Define the Loss/Cost and optimizer", "# Loss and optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))\n#cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))\n#cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(tf.clip_by_value(pred,-1.0,1.0)), reduction_indices=1))\noptimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)\n\n# Model evaluation\ncorrect_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Comment:\nWe decided to apply the softmax and calculate the cost at the same time. In this way we can use the method softmax_cross_entropy_with_logits, which is more numerically stable in corner cases than applying the softmax and then calculating the cross entropy.\nWe give you here the Test Function", "#run the cell\ndef test(sentence, session, verbose=False):\n    sentence = sentence.strip()\n    words = sentence.split(' ')\n    if len(words) != n_input:\n        print(\"sentence length should be equal to\", n_input, \"!\")\n    try:\n        symbols_inputs = [dictionary[str(words[i - n_input])] for i in range(n_input)]\n        keys = np.reshape(np.array(symbols_inputs), [-1, n_input, 1])\n        onehot_pred = session.run(pred, feed_dict={x: keys})\n        onehot_pred_index = int(tf.argmax(onehot_pred, 1).eval())\n        words.append(reverse_dictionary[onehot_pred_index])\n        sentence = \" \".join(words)\n        if verbose:\n            print(sentence)\n        return reverse_dictionary[onehot_pred_index]\n    except:\n        print \" \".join([\"Word\", words[i - n_input], \"not in dictionary\"])", "Part 3 : LSTM Training\nIn the Training process, at each epoch, 3 words are taken from the training data and encoded to integers to form the input vector. The training labels are one-hot vectors encoding the word that comes after the 3 input words. Display the loss and the training accuracy every 1000 iterations. 
Save the model at the end of training in the lstm_model folder", "# Initializing the variables\ninit = tf.global_variables_initializer()\nsaver = tf.train.Saver()\nstart_time = time()\n# Launch the graph\nwith tf.Session() as session:\n    session.run(init)\n    step = 0\n    offset = random.randint(0,n_input+1)\n    end_offset = n_input + 1\n    acc_total = 0\n    loss_total = 0\n\n    writer.add_graph(session.graph)\n\n    while step < epochs:\n        # Generate a minibatch. Add some randomness on selection process.\n        if offset > (len(train_data)-end_offset):\n            offset = random.randint(0, n_input+1)\n\n        symbols_in_keys = [ [dictionary[ str(train_data[i])]] for i in range(offset, offset+n_input) ]\n        symbols_in_keys = np.reshape(np.array(symbols_in_keys), [-1, n_input, 1])\n\n        symbols_out_onehot = np.zeros([len(dictionary)], dtype=float)\n        symbols_out_onehot[dictionary[str(train_data[offset+n_input])]] = 1.0\n        symbols_out_onehot = np.reshape(symbols_out_onehot,[1,-1])\n\n        _, acc, loss, onehot_pred = session.run([optimizer, accuracy, cost, pred], \\\n                                                feed_dict={x: symbols_in_keys, y: symbols_out_onehot})\n        loss_total += loss\n        acc_total += acc\n        if (step+1) % display_step == 0:\n            print(\"Iter= \" + str(step+1) + \", Average Loss= \" + \\\n                  \"{:.6f}\".format(loss_total/display_step) + \", Average Accuracy= \" + \\\n                  \"{:.2f}%\".format(100*acc_total/display_step))\n            acc_total = 0\n            loss_total = 0\n            symbols_in = [train_data[i] for i in range(offset, offset + n_input)]\n            symbols_out = train_data[offset + n_input]\n            symbols_out_pred = reverse_dictionary[int(tf.argmax(onehot_pred, 1).eval())]\n            print(\"%s - [%s] vs [%s]\" % (symbols_in,symbols_out,symbols_out_pred))\n        step += 1\n        offset += (n_input+1)\n    print(\"Optimization Finished!\")\n    print(\"Elapsed time: \", time() - start_time)\n    print(\"Run on command line.\")\n    print(\"\\ttensorboard --logdir=%s\" % (logs_path))\n    print(\"Point your web browser to: http://localhost:6006/\")\n    save_path = saver.save(session, \"model.ckpt\")\n    print(\"Model saved in file: %s\" % save_path)\n", "Comment:\nWe created different models with different numbers of layers, and we have seen that the best accuracy is achieved using only 2 layers. Using more or fewer layers we achieve a lower accuracy.\nPart 4 : Test your model\n3.1. Next word prediction\nLoad your model (using the model_saved variable given in the training session) and test the sentences :\n- 'get a little' \n- 'nobody tried to'\n- Try with other sentences using words from the story's vocabulary.", "with tf.Session() as sess:\n    # Initialize variables\n    sess.run(init)\n\n    # Restore model weights from previously saved model\n    saver.restore(sess, \"./model.ckpt\")\n    print(test('get a little', sess))\n    print(test('nobody tried to', sess))", "Comment:\nHere it looks like the RNN is working; in fact, it can correctly predict the next word. \nWe should note that in this case it is difficult to check whether the RNN is actually overfitting the training data.\n3.2. More fun with the Fable Writer !\nYou will use the RNN/LSTM model learned in the previous question to create a\nnew story/fable.\nFor this you will choose 3 words from the dictionary which will start your\nstory and initialize your network. Using those 3 words the RNN will generate\nthe next word of the story. Using the last 3 words (the newly predicted one\nand the last 2 from the input) you will use the network to predict the 5th\nword of the story... and so on until your story is 5 sentences long. \nPut a period at the end of your story. \nTo implement that, you will use the test function. 
\nThis is the original fable; we will look at it to check for possible overfitting.\nIt was rather lonely for him all day, so he thought upon a plan by which he could get a little company and some excitement.\nHe rushed down towards the village calling out \"Wolf, Wolf,\" and the villagers came out to meet him, and some of them stopped with him for a considerable time.\nThis pleased the boy so much that a few days afterwards he tried the same trick, and again the villagers came to his help.\nBut shortly after this a Wolf actually did come out from the forest, and began to worry the sheep, and the boy of course cried out \"Wolf, Wolf,\" still louder than before.\nBut this time the villagers, who had been fooled twice before, thought the boy was again deceiving them, and nobody stirred to come to his help.\nSo the Wolf made a good meal off the boy's flock, and when the boy complained, the wise man of the village said:\n\"A liar will not be believed, even when he speaks the truth.", "#Your implementation goes here \nwith tf.Session() as sess:\n    # Initialize variables\n    sess.run(init)\n\n    # Restore model weights from previously saved model\n    saver.restore(sess, \"./model.ckpt\")\n    \n    #a sentence is concluded when we find a dot.\n    fable = [random.choice(dictionary.keys()) for _ in range(3)]\n    n_sentences = fable.count('.')\n\n    offset = 0\n    while n_sentences < 5:\n        next_word = test(' '.join(fable[offset:offset+3]), sess)\n        fable.append(next_word)\n        if next_word == '.':\n            n_sentences += 1\n        offset+=1\n    print(' '.join(fable))", "Comment:\nThis is interesting: we see that the sentences make some sense, but when we reach a dot, we see the same sentence repeated many times. This is probably due to overfitting; we should look more deeply. We see that the repeated sentence is different from the original one, but it is still always the same. We think this is due to the fact that the dot always starts the same sentence. Maybe we could create more layers and see what happens.", "def load_data(filename):\n    with open(filename) as f:\n        data = f.readlines()\n    data = [x.strip().lower() for x in data]\n    data = [data[i].split() for i in range(len(data))]\n    data = np.array(data)\n    data = np.reshape(data, [-1, ])\n    return data\n\ntrain_file ='data/story.txt'\ntrain_data = load_data(train_file)\n\ndef build_vocabulary(words):\n    count = collections.Counter(words).most_common()\n    dic= dict()\n    for word, _ in count:\n        dic[word] = len(dic)\n\n    reverse_dic= dict(zip(dic.values(), dic.keys()))\n    return dic, reverse_dic\n\ndictionary, reverse_dictionary = build_vocabulary(train_data)\nvocabulary_size= len(dictionary) \n\nimport numpy as np\nimport collections # used to build the dictionary\nimport random\nimport time\nfrom time import time\nimport pickle # may be used to save your model \nimport matplotlib.pyplot as plt\n#Import Tensorflow and rnn\nimport tensorflow as tf\nfrom tensorflow.contrib import rnn \n\ndef create_train_model(n_input = 3, n_layers = 2,verbose = False):\n    tf.reset_default_graph()\n    # Target log path\n    logs_path = 'lstm_words'\n    writer = tf.summary.FileWriter(logs_path)\n    \n    def lstm_model(x, w, b, n_input, n_hidden,n_layers):\n        # reshape to [1, n_input]\n        x = tf.reshape(x, [-1, n_input])\n\n        # Generate a n_input-element sequence of inputs\n        # (eg. 
[had] [a] [general] -> [20] [6] [33])\n x = tf.split(x,n_input,1)\n\n rnn_layers = [rnn.BasicLSTMCell(n_hidden) for _ in range(n_layers)]\n rnn_cell = rnn.MultiRNNCell(rnn_layers)\n # generate prediction\n outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32)\n\n # there are n_input outputs but\n # we only want the last output\n return tf.matmul(outputs[-1], w['out']) + b['out']\n \n # Training Parameters\n learning_rate = 0.001\n epochs = 50000\n display_step = 1000\n\n\n #For each LSTM cell that you initialise, supply a value for the hidden dimension, number of units in LSTM cell\n n_hidden = 64\n\n # tf Graph input\n x = tf.placeholder(\"float\", [None, n_input, 1])\n y = tf.placeholder(\"float\", [None, vocabulary_size])\n\n # LSTM weights and biases\n weights = { 'out': tf.Variable(tf.random_normal([n_hidden, vocabulary_size]))}\n biases = {'out': tf.Variable(tf.random_normal([vocabulary_size])) }\n\n\n #build the model\n pred = lstm_model(x, weights, biases,n_input,n_hidden,n_layers)\n # Loss and optimizer\n cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))\n #cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))\n #cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(tf.clip_by_value(pred,-1.0,1.0)), reduction_indices=1))\n optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)\n\n # Model evaluation\n correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))\n accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))\n \n # Initializing the variables\n init = tf.global_variables_initializer()\n saver = tf.train.Saver()\n start_time = time()\n # Launch the graph\n with tf.Session() as session:\n session.run(init)\n step = 0\n offset = random.randint(0,n_input+1)\n end_offset = n_input + 1\n acc_total = 0\n loss_total = 0\n\n writer.add_graph(session.graph)\n\n while step < epochs:\n # Generate a minibatch. 
Add some randomness on selection process.\n if offset > (len(train_data)-end_offset):\n offset = random.randint(0, n_input+1)\n\n symbols_in_keys = [ [dictionary[ str(train_data[i])]] for i in range(offset, offset+n_input) ]\n symbols_in_keys = np.reshape(np.array(symbols_in_keys), [-1, n_input, 1])\n\n symbols_out_onehot = np.zeros([len(dictionary)], dtype=float)\n symbols_out_onehot[dictionary[str(train_data[offset+n_input])]] = 1.0\n symbols_out_onehot = np.reshape(symbols_out_onehot,[1,-1])\n\n _, acc, loss, onehot_pred = session.run([optimizer, accuracy, cost, pred], \\\n feed_dict={x: symbols_in_keys, y: symbols_out_onehot})\n loss_total += loss\n acc_total += acc\n if (step+1) % display_step == 0:\n if verbose or step+1 == epochs: print(\"Iter= \" + str(step+1) + \", Average Loss= \" + \\\n \"{:.6f}\".format(loss_total/display_step) + \", Average Accuracy= \" + \\\n \"{:.2f}%\".format(100*acc_total/display_step))\n acc_total = 0\n loss_total = 0\n symbols_in = [train_data[i] for i in range(offset, offset + n_input)]\n symbols_out = train_data[offset + n_input]\n symbols_out_pred = reverse_dictionary[int(tf.argmax(onehot_pred, 1).eval())]\n if verbose: print(\"%s - [%s] vs [%s]\" % (symbols_in,symbols_out,symbols_out_pred))\n step += 1\n offset += (n_input+1)\n \n \n print(\"Optimization Finished!\")\n print(\"Elapsed time: \", time() - start_time)\n print(\"Run on command line.\")\n print(\"\\ttensorboard --logdir=%s\" % (logs_path))\n print(\"Point your web browser to: http://localhost:6006/\")\n save_path = saver.save(session, \"model.ckpt\")\n print(\"Model saved in file: %s\" % save_path)\n \n #run the cell\n def test(sentence, session, verbose=False):\n sentence = sentence.strip()\n words = sentence.split(' ')\n if len(words) != n_input:\n print(\"sentence length should be equel to\", n_input, \"!\")\n try:\n symbols_inputs = [dictionary[str(words[i - n_input])] for i in range(n_input)]\n keys = np.reshape(np.array(symbols_inputs), [-1, n_input, 1])\n onehot_pred = session.run(pred, feed_dict={x: keys})\n onehot_pred_index = int(tf.argmax(onehot_pred, 1).eval())\n words.append(reverse_dictionary[onehot_pred_index])\n sentence = \" \".join(words)\n if verbose:\n print(sentence)\n return reverse_dictionary[onehot_pred_index]\n except:\n print \" \".join([\"Word\", words[i - n_input], \"not in dictionary\"])\n\n \n #a sentence is concluded when we find a dot.\n fable = [random.choice(dictionary.keys()) for _ in range(n_input)]\n #print(dictionary)\n #print(fable)\n n_sentences = fable.count('.')\n\n offset = 0\n while n_sentences < 5 and len(fable) < 200:\n next_word = test(' '.join(fable[offset:offset+n_input]), session)\n fable.append(next_word)\n if next_word == '.':\n n_sentences += 1\n offset+=1\n print(' '.join(fable))", "3.3. Play with number of inputs\nThe number of input in our example is 3, see what happens when you use other number (1 and 5)\nn_input = 1", "create_train_model(n_input = 1, n_layers = 1)\n\ncreate_train_model(n_input = 1, n_layers = 2)\n\ncreate_train_model(n_input = 1, n_layers = 3)", "Comment:\nHere we see that when the input size is 1 we obtain a vad model regardless of the number of layers, this is because we are basically predicting a word based on the preceding word. 
This is not enough to create a sentence that makes sense. Looking at the prediction accuracy, it is very low.\nn_input = 3", "create_train_model(n_input = 3, n_layers = 1)\n\ncreate_train_model(n_input = 3, n_layers = 2)\n\ncreate_train_model(n_input = 3, n_layers = 3)", "Comment:\nHere we see some sentences that make sense, but we also see a tendency to repeat sentences from the training fable. This is interesting, because during training the triples were chosen randomly and not sequentially. Somehow, the net learned the training fable.\nn_input = 5", "create_train_model(n_input = 5, n_layers = 1)\n\ncreate_train_model(n_input = 5, n_layers = 2)\n\ncreate_train_model(n_input = 5, n_layers = 3)", "Comment:\nWith 5 words, the model learns to predict the next word very well; in fact, we obtain a high accuracy. In this case we see that whole sentences are copied from the original fable. They are not repeated exactly, but we still see that some sentences are repeated; at this point we think this is due to the limited training set." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
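The lab above trains on (3-word context, next word) pairs but builds them inline inside the training loop. As a framework-free sketch of just that preparation step, the snippet below encodes a toy sentence with the same kind of dictionary and emits the context/target pairs; the sentence and variable names are illustrative, not taken from the lab solution.

```python
# Build (n_input-word context, next word) pairs the way the lab's minibatch code does.
import collections

text = "there was once a young shepherd boy who tended his sheep".split()

counts = collections.Counter(text).most_common()
dictionary = {word: i for i, (word, _) in enumerate(counts)}
reverse_dictionary = {i: word for word, i in dictionary.items()}

n_input = 3
encoded = [dictionary[w] for w in text]
pairs = [(encoded[i:i + n_input], encoded[i + n_input])
         for i in range(len(encoded) - n_input)]

for context, target in pairs[:3]:
    print([reverse_dictionary[i] for i in context], "->", reverse_dictionary[target])
```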
mne-tools/mne-tools.github.io
0.18/_downloads/bd0a5abf40feb6e3910f5dd50085e964/plot_source_label_time_frequency.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute power and phase lock in label of the source space\nCompute time-frequency maps of power and phase lock in the source space.\nThe inverse method is linear based on dSPM inverse operator.\nThe example also shows the difference in the time-frequency maps\nwhen they are computed with and without subtracting the evoked response\nfrom each epoch. The former results in induced activity only while the\nlatter also includes evoked (stimulus-locked) activity.", "# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\nfrom mne.minimum_norm import read_inverse_operator, source_induced_power\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nfname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'\nlabel_name = 'Aud-rh'\nfname_label = data_path + '/MEG/sample/labels/%s.label' % label_name\n\ntmin, tmax, event_id = -0.2, 0.5, 2\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.find_events(raw, stim_channel='STI 014')\ninverse_operator = read_inverse_operator(fname_inv)\n\ninclude = []\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more\n\n# Picks MEG channels\npicks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,\n stim=False, include=include, exclude='bads')\nreject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)\n\n# Load epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=reject,\n preload=True)\n\n# Compute a source estimate per frequency band including and excluding the\n# evoked response\nfreqs = np.arange(7, 30, 2) # define frequencies of interest\nlabel = mne.read_label(fname_label)\nn_cycles = freqs / 3. # different number of cycle per frequency\n\n# subtract the evoked response in order to exclude evoked activity\nepochs_induced = epochs.copy().subtract_evoked()\n\nplt.close('all')\n\nfor ii, (this_epochs, title) in enumerate(zip([epochs, epochs_induced],\n ['evoked + induced',\n 'induced only'])):\n # compute the source space power and the inter-trial coherence\n power, itc = source_induced_power(\n this_epochs, inverse_operator, freqs, label, baseline=(-0.1, 0),\n baseline_mode='percent', n_cycles=n_cycles, n_jobs=1)\n\n power = np.mean(power, axis=0) # average over sources\n itc = np.mean(itc, axis=0) # average over sources\n times = epochs.times\n\n ##########################################################################\n # View time-frequency plots\n plt.subplots_adjust(0.1, 0.08, 0.96, 0.94, 0.2, 0.43)\n plt.subplot(2, 2, 2 * ii + 1)\n plt.imshow(20 * power,\n extent=[times[0], times[-1], freqs[0], freqs[-1]],\n aspect='auto', origin='lower', vmin=0., vmax=30., cmap='RdBu_r')\n plt.xlabel('Time (s)')\n plt.ylabel('Frequency (Hz)')\n plt.title('Power (%s)' % title)\n plt.colorbar()\n\n plt.subplot(2, 2, 2 * ii + 2)\n plt.imshow(itc,\n extent=[times[0], times[-1], freqs[0], freqs[-1]],\n aspect='auto', origin='lower', vmin=0, vmax=0.7,\n cmap='RdBu_r')\n plt.xlabel('Time (s)')\n plt.ylabel('Frequency (Hz)')\n plt.title('ITC (%s)' % title)\n plt.colorbar()\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code" ]
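A toy numpy illustration (not MNE code) of the evoked-versus-induced distinction the example above turns on: averaging across trials keeps the phase-locked (evoked) response, and subtracting that average from every trial — conceptually what subtract_evoked does — leaves only the non-phase-locked (induced) activity. All signals here are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
times = np.linspace(0.0, 0.5, 500)
evoked = np.sin(2 * np.pi * 10 * times)          # phase-locked 10 Hz component

# Each trial = evoked response + a 20 Hz oscillation with random phase (induced activity).
trials = np.stack([
    evoked + np.sin(2 * np.pi * 20 * times + rng.uniform(0, 2 * np.pi))
    for _ in range(50)
])

average = trials.mean(axis=0)        # induced part averages out, evoked part survives
induced_only = trials - average      # per-trial residual keeps the induced oscillation

print(np.corrcoef(average, evoked)[0, 1])   # close to 1: the average is the evoked response
print(np.abs(induced_only).mean())          # clearly non-zero: induced power remains
```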
bearing/dosenet-analysis
Programming Lesson Modules/Module 8- Measures of Central Tendency.ipynb
mit
[ "Module 8- Measures of Location and Spread\nauthor: Radley Rigonan\nThis module is the first in a series of modules that explore data and statistical analysis. In this case, we will be using DoseNet data to improve our understanding of central tendency.\nI will be using DoseNet data from the following link:\nhttps://radwatch.berkeley.edu/sites/default/files/dosenet/etch.csv", "%matplotlib inline\nimport csv\nimport io\nimport urllib.request\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates \nfrom datetime import datetime\n\nurl = 'https://radwatch.berkeley.edu/sites/default/files/dosenet/etch.csv'\nresponse = urllib.request.urlopen(url)\nreader = csv.reader(io.TextIOWrapper(response)) \ntimedata = [] \ncpm = []\nline = 0\nfor row in reader:\n    if line != 0:\n        timedata.append(datetime.fromtimestamp(float(row[2],)))\n        cpm.append(float(row[6]))\n    line += 1", "Measures of central tendency identify values that lie at the center of a sample and help statisticians summarize their data. The most common measures of central tendency are mean, median, and mode. Although you should be familiar with these values, they are defined as:\nMEAN = sum(sample) / len(sample)\nMEDIAN = sorted(sample)[len(sample)/2]\nMODE: element(s) with highest frequency", "mean_cpm1 = sum(cpm)/len(cpm)\nprint('mean CPM from its definition is: %s' %mean_cpm1)\n\n\nmean_cpm2 = np.mean(cpm)\nprint('mean CPM from built-in function is: %s' %mean_cpm2)\n\nif len(cpm)%2 == 0:\n    median_cpm1 = sorted(cpm)[int(len(cpm)/2)]\nelse:\n    median_cpm1 = (sorted(cpm)[int((len(cpm)+1)/2)]+sorted(cpm)[int((len(cpm)-1)/2)]) / 2\nprint('median CPM from its definition is: %s' %median_cpm1)\n\n\nmedian_cpm2 = np.median(cpm)\nprint('median CPM from built-in function is: %s' %median_cpm2)\n\nfrom collections import Counter\ncounter = Counter(cpm)\n_,val = counter.most_common(1)[0]\nmode_cpm1 = [i for i, target in counter.items() if target == val]\nprint('mode(s) CPM from its definition is: %s' %mode_cpm1)\n\n\nimport statistics # note: this function fails if there are two statistical modes\nmode_cpm2 = statistics.mode(cpm)\nprint('mode(s) CPM from built-in function is: %s' %mode_cpm2)\n\nfig, ax = plt.subplots()\nax.plot(timedata,cpm,alpha=0.3) \n    # alpha modifier adds transparency, I add this so the CPM plot doesn't overpower the mean, median, and mode\nax.plot([timedata[0],timedata[-1]], [mean_cpm1,mean_cpm1], label='mean CPM')\nax.plot([timedata[0],timedata[-1]], [median_cpm1,median_cpm1], 'r:', label='median CPM')\nax.plot([timedata[0],timedata[-1]], [mode_cpm1,mode_cpm1], 'c--', label='mode CPM',alpha=0.5)\n\nplt.legend(loc='best')\nplt.ylim(ymax = 5, ymin = .5)\n\nax.xaxis.set_major_locator(mdates.MonthLocator())\nax.xaxis.set_major_formatter(mdates.DateFormatter('%b-%Y'))\nax.xaxis.set_minor_locator(mdates.DayLocator())\nplt.xticks(rotation=15)\n\nplt.title('DoseNet Data: Etcheverry Roof\\nCPM vs. 
Time with mean, mode, and median')\nplt.ylabel('CPM')\nplt.xlabel('Date')\n\nfig, ax = plt.subplots()\ny,x, _ = plt.hist(cpm,bins=30, alpha=0.3, label='CPM distribution')\nax.plot([mean_cpm1,mean_cpm1], [0,y.max()],label='mean CPM')\nax.plot([median_cpm1, median_cpm1], [0,y.max()], 'r:', label='median CPM')\nax.plot([mode_cpm1,mode_cpm1], [0,y.max()], 'c--', label='mode CPM')\n\nplt.legend(loc='best')\nplt.title('DoseNet Data: Etcheverry Roof\\nCPM Histogram with mean, mode, and median')\nplt.ylabel('Frequency')\nplt.xlabel('CPM')", "As you can see from the timeseries plot and the histogram, mean, median, and mode can generally gauge the central values in a set of sample data. This is especially true for radiation data: radiation is a stochastic process (it changes and fluctuates over time), but background radiation measured by DoseNet devices trends towards an average point." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
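As a small follow-up to the module above, the standard library alone can produce the same three summaries; statistics.multimode (Python 3.8+) avoids the failure that the notebook's own comment warns about when a sample has more than one mode. The sample values below are made up, not DoseNet data.

```python
import statistics

cpm_sample = [2.1, 2.4, 2.4, 2.6, 2.6, 2.9, 3.3]

summary = {
    "mean": statistics.mean(cpm_sample),
    "median": statistics.median(cpm_sample),
    "modes": statistics.multimode(cpm_sample),   # every most-frequent value, even if tied
}
print(summary)
```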