repo_name
path
license
cells
types
emsi/ml-toolbox
random/catfish/3_regularization.ipynb
agpl-3.0
[ "Deep Learning\nAssignment 3\nPreviously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.\nThe goal of this assignment is to explore regularization techniques.", "# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport numpy as np\nimport tensorflow as tf\nfrom six.moves import cPickle as pickle", "First reload the data we generated in notmist.ipynb.", "pickle_file = 'notMNIST.pickle'\n\nwith open(pickle_file, 'rb') as f:\n save = pickle.load(f)\n train_dataset = save['train_dataset']\n train_labels = save['train_labels']\n valid_dataset = save['valid_dataset']\n valid_labels = save['valid_labels']\n test_dataset = save['test_dataset']\n test_labels = save['test_labels']\n del save # hint to help gc free up memory\n print('Training set', train_dataset.shape, train_labels.shape)\n print('Validation set', valid_dataset.shape, valid_labels.shape)\n print('Test set', test_dataset.shape, test_labels.shape)", "Reformat into a shape that's more adapted to the models we're going to train:\n- data as a flat matrix,\n- labels as float 1-hot encodings.", "image_size = 28\nnum_labels = 10\n\ndef reformat(dataset, labels):\n dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]\n labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n return dataset, labels\ntrain_dataset, train_labels = reformat(train_dataset, train_labels)\nvalid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\ntest_dataset, test_labels = reformat(test_dataset, test_labels)\nprint('Training set', train_dataset.shape, train_labels.shape)\nprint('Validation set', valid_dataset.shape, valid_labels.shape)\nprint('Test set', test_dataset.shape, test_labels.shape)\n\ndef accuracy(predictions, labels):\n return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n / predictions.shape[0])", "Problem 1\nIntroduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.\n\n\nProblem 2\nLet's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?\n\n\nProblem 3\nIntroduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.\nWhat happens to our extreme overfitting case?\n\n\nProblem 4\nTry to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is 97.1%.\nOne avenue you can explore is to add multiple layers.\nAnother one is to use learning rate decay:\nglobal_step = tf.Variable(0) # count the number of steps taken.\nlearning_rate = tf.train.exponential_decay(0.5, global_step, ...)\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
PLN-FaMAF/DeepLearningEAIA
deep_learning_tutorial_1.ipynb
bsd-3-clause
[ "# Optional, only if you installed Seaborn\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt", "Express Deep Learning in Python - Part 1\nDo you have everything ready? Check the part 0!\nHow fast can you build a MLP?\nIn this first part we will see how to implement the basic components of a MultiLayer Perceptron (MLP) classifier, most commonly known as Neural Network. We will be working with the Keras: a very simple library for deep learning.\nAt this point, you may know how machine learning in general is applied and have some intuitions about how deep learning works, and more importantly, why it works. Now it's time to make some experiments, and for that you need to be as quick and flexible as possible. Keras is an idea tool for prototyping and doing your first approximations to a Machine Learning problem. On the one hand, Keras is integrated with two very powerfull backends that support GPU computations, Tensorflow and Theano. On the other hand, it has a level of abstraction high enough to be simple to understand and easy to use. For example, it uses a very similar interface to the sklearn library that you have seen before, with fit and predict methods.\nNow let's get to work with an example:\n1 - The libraries\nFirts let's check we have installed everything we need for this tutorial:", "import numpy\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout\nfrom keras.datasets import mnist", "2 - The dataset\nFor this quick tutorial we will use the (very popular) MNIST dataset. This is a dataset of 70K images of handwritten digits. Our task is to recognize which digits is displayed in the image: a classification problem. You have seen in previous courses how to train and evaluate a classifier, so we wont talk in further details about supervised learning.\nThe input to the MLP classifier are going to be images of 28x28 pixels represented as matrixes. The output will be one of ten classes (0 to 9), representing the predicted number written in the image.", "batch_size = 128\nnum_classes = 10\nepochs = 10\nTRAIN_EXAMPLES = 60000\nTEST_EXAMPLES = 10000\n\n# the data, shuffled and split between train and test sets\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\n\n# reshape the dataset to convert the examples from 2D matrixes to 1D arrays.\nx_train = x_train.reshape(60000, 28*28)\nx_test = x_test.reshape(10000, 28*28)\n\n# to make quick runs, select a smaller set of images.\ntrain_mask = numpy.random.choice(x_train.shape[0], TRAIN_EXAMPLES, replace=False)\nx_train = x_train[train_mask, :].astype('float32')\ny_train = y_train[train_mask]\ntest_mask = numpy.random.choice(x_test.shape[0], TEST_EXAMPLES, replace=False)\nx_test = x_test[test_mask, :].astype('float32')\ny_test = y_test[test_mask]\n\n# normalize the input\nx_train /= 255\nx_test /= 255\n\n# convert class vectors to binary class matrices\ny_train = keras.utils.to_categorical(y_train, num_classes)\ny_test = keras.utils.to_categorical(y_test, num_classes)", "3 - The model\nThe concept of Deep Learning is very broad, but the core of it is the use of classifiers with multiple hidden layer of neurons, or smaller classifiers. We all know the classical image of the simplest possible possible deep model: a neural network with a single hidden layer. \n\ncredits http://www.extremetech.com/wp-content/uploads/2015/07/NeuralNetwork.png\nIn theory, this model can represent any function TODO add a citation here. 
We will see how to implement this network in Keras, and during the second part of this tutorial how to add more features to create a deep and powerful classifier.\nFirst, Deep Learning models are concatenations of Layers. This is represented in Keras with the Sequential model. We create the Sequential instance as an \"empty carcass\" and then we fill it with different layers. \nThe most basic type of Layer is the Dense layer, where each neuron in the input is connected to each neuron in the following layer, like we can see in the image above. Internally, a Dense layer has two variables: a matrix of weights and a vector of bias, but the beauty of Keras is that you don't need to worry about that. All the variables will be correctly created, initialized, trained and possibly regularized for you.\nEach layer needs to know or be able to calculate al least three things:\n\nThe size of the input: the number of neurons in the incoming layer. For the first layer this corresponds to the size of each example in our dataset. The next layers can calculate their input size using the output of the previous layer, so we generally don't need to tell them this.\nThe type of activation: this is the function that is applied to the output of each neuron. Will talk in detail about this later.\nThe size of the output: the number of neurons in the next layer.", "model = Sequential()\n\n# Input to hidden layer\nmodel.add(Dense(512, activation='relu', input_shape=(784,)))\n# Hidden to output layer\nmodel.add(Dense(10, activation='softmax'))", "We have successfully build a Neural Network! We can print a description of our architecture using the following command:", "model.summary()", "Compiling a model in Keras\nA very appealing aspect of Deep Learning frameworks is that they solve the implementation of complex algorithms such as Backpropagation. For those with some numerical optimization notions, minimization algorithms often involve the calculation of first defivatives. Neural Networks are huge functions full of non-linearities, and differentiating them is a... nightmare. For this reason, models need to be \"compiled\". In this stage, the backend builds complex computational graphs, and we don't have to worry about derivatives or gradients.\nIn Keras, a model can be compiled with the method .compile(). The method takes two parameters: loss and optimizer. The loss is the function that calculates how much error we have in each prediction example, and there are a lot of implemented alternatives ready to use. We will talk more about this, for now we use the standard categorical crossentropy. As you can see, we can simply pass a string with the name of the function and Keras will find the implementation for us.\nThe optimizer is the algorithm to minimize the value of the loss function. Again, Keras has many optimizers available. The basic one is the Stochastic Gradient Descent.\nWe pass a third argument to the compile method: the metric. Metrics are measures or statistics that allows us to keep track of the classifier's performance. It's similar to the loss, but the results of the metrics are not use by the optimization algorithm. Besides, metrics are always comparable, while the loss function can take random values depending on your problem.\nKeras will calculate metrics and loss both on the training and the validation dataset. 
That way, we can monitor how other performance metrics vary when the loss is optimized and detect anomalies like overfitting.", "model.compile(loss='categorical_crossentropy',\n optimizer=keras.optimizers.SGD(),\n metrics=['accuracy'])", "[OPTIONAL] We can now visualize the architecture of our model using the vis_util tools. It's a very schematic view, but you can check it's not that different from the image we saw above (and that we intended to replicate).\nIf you can't execute this step don't worry, you can still finish the tutorial. This step requires graphviz and pydotplus libraries.", "from IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot\n\nSVG(model_to_dot(model).create(prog='dot', format='svg'))", "Training\nOnce the model is compiled, everything is ready to train the classifier. Keras' Sequential model has a similar interface as the sklearn library that you have seen before, with fit and predict methods. As usual, we need to pass our training examples and their corresponding labels. Other parameters needed to train a neural network is the size of the batch and the number of epochs. We have two ways of specifying a validation dataset: we can pass the tuple of values and labels directly with the validation_data parameter, or we can pass a proportion to the validation_split argument and Keras will split the training dataset for us.\nTo correctly train our model we need to pass two important parameters to the fit function:\n * batch_size: is the number of examples to use in each \"minibatch\" iteration of the Stochastic Gradient Descent algorithm. This is necessary for most optimization algorithms. The size of the batch is important because it defines how fast the algorithm will perform each iteration and also how much memory will be used to load each batch (possibly in the GPU).\n * epochs: is the number of passes through the entire dataset. We need enough epochs for the classifier to converge, but we need to stop before the classifier starts overfitting.", "history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,\n verbose=1, validation_data=(x_test, y_test));", "We have trained our model!\nAdditionally, Keras has printed out a lot of information of the training, thanks to the parameter verbose=1 that we passed to the fit function. We can see how many time it took in each iteration, and the value of the loss and metrics in the training and the validation dataset. The same information is stored in the output of the fit method, which sadly it's not well documented. We can see it in a pretty table with pandas.", "import pandas\npandas.DataFrame(history.history)", "Why is this useful? This will give you an insight on how well your network is optimizing the loss, and how much it's actually learning. When training, you need to keep track of two things:\n\nYour network is actually learning. This means your training loss is decreasing in average. If it's going up or it's stuck for more than a couple of epochs is safe to stop you training and try again.\nYou network is not overfitting. It's normal to have a gap between the validation and the training metrics, but they should decrease more or less at the same rate. If you see that your metrics for training are getting better but your validation metrics are getting worse, it is also a good point to stop and fix your overfitting problem.\n\nEvaluation\nKeras gives us a very useful method to evaluate the current performance called evaluate (surprise!). 
Evaluate will return the value of the loss function and all the metrics that we pass to the model when calling compile.", "score = model.evaluate(x_test, y_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])", "As you can see, using only 10 training epochs we get a very surprising accuracy in the training and test dataset. If you want to take a deeper look into your model, you can obtain the predictions as a vector and then use general purpose tools to explore the results. For example, we can plot the confusion matrix to see the most common errors.", "prediction = model.predict_classes(x_test)\n\nimport seaborn as sns\nfrom sklearn.metrics import confusion_matrix\nsns.set_style('white')\nsns.set_palette('colorblind')\n\nmatrix = confusion_matrix(numpy.argmax(y_test, 1), prediction)\nfigure = sns.heatmap(matrix / matrix.astype(numpy.float).sum(axis=1), \n xticklabels=range(10), yticklabels=range(10),\n cmap=sns.cubehelix_palette(8, as_cmap=True))", "We can see that the model is still confusing some numbers. For example, 4s and 9s, or 3s and 8s. This may be happening because our model is trained with very few epochs, but most likely it happens because our model is too simple and can't generalize to unseen data. In the following part of the tutorial, we will see the details of more complex components of neural classifiers and how to use them to build a more powerful classifier." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
murali-munna/pattern_classification
dimensionality_reduction/projection/linear_discriminant_analysis.ipynb
gpl-3.0
[ "Sebastian Raschka\n- Link to the containing GitHub Repository: https://github.com/rasbt/pattern_classification\n- Link to this IPython Notebook on GitHub: linear_discriminant_analysis.ipynb", "%load_ext watermark\n\n%watermark -v -d -u -p pandas,scikit-learn,numpy,matplotlib ", "<font size=\"1.5em\">More information about the watermark magic command extension.</font>\n<hr>\nI would be happy to hear your comments and suggestions. \nPlease feel free to drop me a note via\ntwitter, email, or google+.\n<hr>\n\nLinear Discriminant Analysis bit by bit\n<br>\n<br>\nSections\n\nIntroduction\nPrincipal Component Analysis vs. Linear Discriminant Analysis\nWhat is a \"good\" feature subspace?\nSummarizing the LDA approach in 5 steps\n\n\nPreparing the sample data set\nAbout the Iris dataset\nReading in the dataset\nHistograms and feature selection\nStandardization\nNormality assumptions\n\n\nLDA in 5 steps\nStep 1: Computing the d-dimensional mean vectors\nStep 2: Computing the Scatter Matrices\nStep 3: Solving the generalized eigenvalue problem for the matrix $S_{W}^{-1}S_B$\nStep 4: Selecting linear discriminants for the new feature subspace\nStep 5: Transforming the samples onto the new subspace\n\n\nA comparison of PCA and LDA\nLDA via scikit-learn\nUpdate-scikit-learn-0.15.2\n\n\n\n<br>\n<br>\nIntroduction\n[back to top]\nLinear Discriminant Analysis (LDA) is most commonly used as dimensionality reduction technique in the pre-processing step for pattern-classification and machine learning applications. \nThe goal is to project a dataset onto a lower-dimensional space with good class-separability in order avoid overfitting (\"curse of dimensionality\") and also reduce computational costs.\nRonald A. Fisher formulated the Linear Discriminant in 1936 (The Use of Multiple Measurements in Taxonomic Problems), and it also has some practical uses as classifier. The original Linear discriminant was described for a 2-class problem, and it was then later generalized as \"multi-class Linear Discriminant Analysis\" or \"Multiple Discriminant Analysis\" by C. R. Rao in 1948 (The utilization of multiple measurements in problems of biological classification)\nThe general LDA approach is very similar to a Principal Component Analysis (for more information about the PCA, see the previous article Implementing a Principal Component Analysis (PCA) in Python step by step), but in addition to finding the component axes that maximize the variance of our data (PCA), we are additionally interested in the axes that maximize the separation between multiple classes (LDA).\nSo, in a nutshell, often the goal of an LDA is to project a feature space (a dataset n-dimensional samples) onto a smaller subspace $k$ (where $k \\leq n-1$) while maintaining the class-discriminatory information. \nIn general, dimensionality reduction does not only help reducing computational costs for a given classification task, but it can also be helpful to avoid overfitting by minimizing the error in parameter estimation (\"curse of dimensionality\").\n<br>\n<br>\nPrincipal Component Analysis vs. Linear Discriminant Analysis\n[back to top]\nBoth Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are linear transformation techniques that are commonly used for dimensionality reduction. 
PCA can be described as an \"unsupervised\" algorithm, since it \"ignores\" class labels and its goal is to find the directions (the so-called principal components) that maximize the variance in a dataset.\nIn contrast to PCA, LDA is \"supervised\" and computes the directions (\"linear discriminants\") that will represent the axes that that maximize the separation between multiple classes.\nAlthough it might sound intuitive that LDA is superior to PCA for a multi-class classification task where the class labels are known, this might not always the case.\nFor example, comparisons between classification accuracies for image recognition after using PCA or LDA show that PCA tends to outperform LDA if the number of samples per class is relatively small (PCA vs. LDA, A.M. Martinez et al., 2001).\nIn practice, it is also not uncommon to use both LDA and PCA in combination: E.g., PCA for dimensionality reduction followed by an LDA.\n<br>\n<br>\n\n<br>\n<br>\nWhat is a \"good\" feature subspace?\n[back to top]\nLet's assume that our goal is to reduce the dimensions of a $d$-dimensional dataset by projecting it onto a $(k)$-dimensional subspace (where $k\\;<\\;d$). \nSo, how do we know what size we should choose for $k$ ($k$ = the number of dimensions of the new feature subspace), and how do we know if we have a feature space that represents our data \"well\"? \nLater, we will compute eigenvectors (the components) from our data set and collect them in a so-called scatter-matrices (i.e., the in-between-class scatter matrix and within-class scatter matrix).\nEach of these eigenvectors is associated with an eigenvalue, which tells us about the \"length\" or \"magnitude\" of the eigenvectors. \nIf we would observe that all eigenvalues have a similar magnitude, then this may be a good indicator that our data is already projected on a \"good\" feature space. \nAnd in the other scenario, if some of the eigenvalues are much much larger than others, we might be interested in keeping only those eigenvectors with the highest eigenvalues, since they contain more information about our data distribution. Vice versa, eigenvalues that are close to 0 are less informative and we might consider dropping those for constructing the new feature subspace.\n<br>\n<br>\nSummarizing the LDA approach in 5 steps\n[back to top]\nListed below are the 5 general steps for performing a linear discriminant analysis; we will explore them in more detail in the following sections.\n\nCompute the $d$-dimensional mean vectors for the different classes from the dataset.\nCompute the scatter matrices (in-between-class and within-class scatter matrix).\nCompute the eigenvectors ($\\pmb e_1, \\; \\pmb e_2, \\; ..., \\; \\pmb e_d$) and corresponding eigenvalues ($\\pmb \\lambda_1, \\; \\pmb \\lambda_2, \\; ..., \\; \\pmb \\lambda_d$) for the scatter matrices.\nSort the eigenvectors by decreasing eigenvalues and choose $k$ eigenvectors with the largest eigenvalues to form a $k \\times d$ dimensional matrix $\\pmb W\\;$ (where every column represents an eigenvector).\nUse this $k \\times d$ eigenvector matrix to transform the samples onto the new subspace. 
This can be summarized by the mathematical equation: $\\pmb Y = \\pmb X \\times \\pmb W$ (where $\\pmb X$ is a $n \\times d$-dimensional matrix representing the $n$ samples, and $\\pmb y$ are the transformed $n \\times k$-dimensional samples in the new subspace).\n\n<a name=\"sample_data\"></a>\n<br>\n<br>\nPreparing the sample data set\n[back to top]\n<a name=\"sample_data\"></a>\n<br>\n<br>\nAbout the Iris dataset\n[back to top]\nFor the following tutorial, we will be working with the famous \"Iris\" dataset that has been deposited on the UCI machine learning repository\n(https://archive.ics.uci.edu/ml/datasets/Iris).\n<font size=\"1\">\nReference:\nBache, K. & Lichman, M. (2013). UCI Machine Learning Repository. Irvine, CA: University of California, School of Information and Computer Science.</font>\nThe iris dataset contains measurements for 150 iris flowers from three different species.\nThe three classes in the Iris dataset:\n\nIris-setosa (n=50)\nIris-versicolor (n=50)\nIris-virginica (n=50)\n\nThe four features of the Iris dataset:\n\nsepal length in cm\nsepal width in cm\npetal length in cm\npetal width in cm", "feature_dict = {i:label for i,label in zip(\n range(4),\n ('sepal length in cm', \n 'sepal width in cm', \n 'petal length in cm', \n 'petal width in cm', ))}", "<a name=\"sample_data\"></a>\n<br>\n<br>\nReading in the dataset\n[back to top]", "import pandas as pd\n\ndf = pd.io.parsers.read_csv(\n filepath_or_buffer='https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data', \n header=None, \n sep=',', \n )\ndf.columns = [l for i,l in sorted(feature_dict.items())] + ['class label']\ndf.dropna(how=\"all\", inplace=True) # to drop the empty line at file-end\n\ndf.tail()", "$\\pmb X = \\begin{bmatrix} x_{1_{\\text{sepal length}}} & x_{1_{\\text{sepal width}}} & x_{1_{\\text{petal length}}} & x_{1_{\\text{petal width}}}\\ \nx_{2_{\\text{sepal length}}} & x_{2_{\\text{sepal width}}} & x_{2_{\\text{petal length}}} & x_{2_{\\text{petal width}}}\\ \n... \\\nx_{150_{\\text{sepal length}}} & x_{150_{\\text{sepal width}}} & x_{150_{\\text{petal length}}} & x_{150_{\\text{petal width}}}\\ \n\\end{bmatrix}, \\;\\;\n\\pmb y = \\begin{bmatrix} \\omega_{\\text{setosa}}\\ \n\\omega_{\\text{setosa}}\\ \n... \\\n\\omega_{\\text{virginica}}\\end{bmatrix}$\n<a name=\"sample_data\"></a>\n<br>\n<br>\nSince it is more convenient to work with numerical values, we will use the LabelEncode from the scikit-learn library to convert the class labels into numbers: 1, 2, and 3.", "from sklearn.preprocessing import LabelEncoder\n\nX = df[[0,1,2,3]].values \ny = df['class label'].values \n\nenc = LabelEncoder()\nlabel_encoder = enc.fit(y)\ny = label_encoder.transform(y) + 1\n\nlabel_dict = {1: 'Setosa', 2: 'Versicolor', 3:'Virginica'}", "$\\pmb y = \\begin{bmatrix}{\\text{setosa}}\\ \n{\\text{setosa}}\\ \n... \\\n{\\text{virginica}}\\end{bmatrix} \\quad \\Rightarrow\n\\begin{bmatrix} {\\text{1}}\\ \n{\\text{1}}\\ \n... 
\\\n{\\text{3}}\\end{bmatrix}$\n<a name=\"sample_data\"></a>\n<br>\n<br>\nHistograms and feature selection\n[back to top]\nJust to get a rough idea how the samples of our three classes $\\omega_1$, $\\omega_2$ and $\\omega_3$ are distributed, let us visualize the distributions of the four different features in 1-dimensional histograms.", "%matplotlib inline\n\nfrom matplotlib import pyplot as plt\nimport numpy as np\nimport math\n\nfig, axes = plt.subplots(nrows=2, ncols=2, figsize=(12,6))\n\nfor ax,cnt in zip(axes.ravel(), range(4)): \n \n # set bin sizes\n min_b = math.floor(np.min(X[:,cnt]))\n max_b = math.ceil(np.max(X[:,cnt]))\n bins = np.linspace(min_b, max_b, 25)\n \n # plottling the histograms\n for lab,col in zip(range(1,4), ('blue', 'red', 'green')):\n ax.hist(X[y==lab, cnt],\n color=col, \n label='class %s' %label_dict[lab], \n bins=bins,\n alpha=0.5,)\n ylims = ax.get_ylim()\n \n # plot annotation\n leg = ax.legend(loc='upper right', fancybox=True, fontsize=8)\n leg.get_frame().set_alpha(0.5)\n ax.set_ylim([0, max(ylims)+2])\n ax.set_xlabel(feature_dict[cnt])\n ax.set_title('Iris histogram #%s' %str(cnt+1))\n \n # hide axis ticks\n ax.tick_params(axis=\"both\", which=\"both\", bottom=\"off\", top=\"off\", \n labelbottom=\"on\", left=\"off\", right=\"off\", labelleft=\"on\")\n\n # remove axis spines\n ax.spines[\"top\"].set_visible(False) \n ax.spines[\"right\"].set_visible(False) \n ax.spines[\"bottom\"].set_visible(False) \n ax.spines[\"left\"].set_visible(False) \n \naxes[0][0].set_ylabel('count')\naxes[1][0].set_ylabel('count')\n \nfig.tight_layout() \n \nplt.show()", "From just looking at these simple graphical representations of the features, we can already tell that the petal lengths and widths are likely better suited as potential features two separate between the three flower classes. In practice, instead of reducing the dimensionality via a projection (here: LDA), a good alternative would be a feature selection technique. For low-dimensional datasets like Iris, a glance at those histograms would already be very informative. Another simple, but very useful technique would be to use feature selection algorithms, which I have described in more detail in another article: Feature Selection Algorithms in Python\n<a name=\"sample_data\"></a>\n<br>\n<br>\nStandardization\n[back to top]\nNormalization is one important part of every data pre-processing step and typically a requirement for best performances of many machine learning algorithms.\nThe two most popular approaches for data normalization are the so-called \"standardization\" and \"min-max scaling\".\n\n\nStandardization (or Z-score normalization): Rescaling of the features so that they'll have the properties of a standard normal distribution with &mu;=0 and &sigma;=1 (i.e., unit variance centered around the mean). \n\n\nMin-max scaling: Rescaling of the features to unit range, typically a range between 0 and 1. Quite often, min-max scaling is also just called \"normalization\", which can be quite confusing depending on the context where the term is being used. 
Via Min-max scaling, \n\n\nBoth are very important procedures, so that I have also a separate article about it with more details: About Feature Scaling and Normalization.\nIn our case, although the features are already on the same scale (measured in centimeters), we still want to scale the features to unit variance (&sigma;=1, &mu;=0).", "from sklearn import preprocessing\npreprocessing.scale(X, axis=0, with_mean=True, with_std=True, copy=False)\nprint()", "<a name=\"sample_data\"></a>\n<br>\n<br>\nNormality assumptions\n[back to top]\nIt should be mentioned that LDA assumes normal distributed data, features that are statistically independent, and identical covariance matrices for every class. However, this only applies for LDA as classifier and LDA for dimensionality reduction can also work reasonably well if those assumptions are violated. And even for classification tasks LDA seems can be quite robust to the distribution of the data: \n\n\"linear discriminant analysis frequently achieves good performances in\nthe tasks of face and object recognition, even though the assumptions\nof common covariance matrix among groups and normality are often\nviolated (Duda, et al., 2001)\" (Tao Li, et al., 2006).\n\n<br>\n<font size=\"1\">References: \nTao Li, Shenghuo Zhu, and Mitsunori Ogihara. “Using Discriminant Analysis for Multi-Class Classification: An Experimental Investigation.” Knowledge and Information Systems 10, no. 4 (2006): 453–72.) \nDuda, Richard O, Peter E Hart, and David G Stork. 2001. Pattern Classification. New York: Wiley.</font>\n<a name=\"sample_data\"></a>\n<br>\n<br>\nLDA in 5 steps\n[back to top]\nAfter we went through several preparation steps, our data is finally ready for the actual LDA. In practice, LDA for dimensionality reduction would be just another preprocessing step for a typical machine learning or pattern classification task.\n<a name=\"sample_data\"></a>\n<br>\n<br>\nStep 1: Computing the d-dimensional mean vectors\n[back to top]\nIn this first step, we will start off with a simple computation of the mean vectors $\\pmb m_i$, $(i = 1,2,3)$ of the 3 different flower classes:\n$\\pmb m_i = \\begin{bmatrix} \n\\mu_{\\omega_i (\\text{sepal length)}}\\ \n\\mu_{\\omega_i (\\text{sepal width})}\\ \n\\mu_{\\omega_i (\\text{petal length)}}\\\n\\mu_{\\omega_i (\\text{petal width})}\\\n\\end{bmatrix} \\; , \\quad \\text{with} \\quad i = 1,2,3$", "np.set_printoptions(precision=4)\n\nmean_vectors = []\nfor cl in range(1,4):\n mean_vectors.append(np.mean(X[y==cl], axis=0))\n print('Mean Vector class %s: %s\\n' %(cl, mean_vectors[cl-1]))", "<a name=\"sample_data\"></a>\n<br>\n<br>\n<a name=\"sc_matrix\"></a>\nStep 2: Computing the Scatter Matrices\n[back to top]\nNow, we will compute the two 4x4-dimensional matrices: The within-class and the between-class scatter matrix.\n<br>\n<br>\n2.1 Within-class scatter matrix $S_W$\n[back to top]\nThe within-class scatter matrix $S_W$ is computed by the following equation: \n$S_W = \\sum\\limits_{i=1}^{c} S_i$\nwhere\n$S_i = \\sum\\limits_{\\pmb x \\in D_i}^n (\\pmb x - \\pmb m_i)\\;(\\pmb x - \\pmb m_i)^T$\n(scatter matrix for every class) \nand $\\pmb m_i$ is the mean vector \n$\\pmb m_i = \\frac{1}{n_i} \\sum\\limits_{\\pmb x \\in D_i}^n \\; \\pmb x_k$", "S_W = np.zeros((4,4))\nfor cl,mv in zip(range(1,4), mean_vectors):\n class_sc_mat = np.zeros((4,4)) # scatter matrix for every class\n for row in X[y == cl]:\n row, mv = row.reshape(4,1), mv.reshape(4,1) # make column vectors\n class_sc_mat += (row-mv).dot((row-mv).T)\n S_W += 
class_sc_mat # sum class scatter matrices\nprint('within-class Scatter Matrix:\\n', S_W)", "<br>\n2.1 b\nAlternatively, we could also compute the class-covariance matrices by adding the scaling factor $\\frac{1}{N-1}$ to the within-class scatter matrix, so that our equation becomes\n$\\Sigma_i = \\frac{1}{N_{i}-1} \\sum\\limits_{\\pmb x \\in D_i}^n (\\pmb x - \\pmb m_i)\\;(\\pmb x - \\pmb m_i)^T$.\nand $S_W = \\sum\\limits_{i=1}^{c} (N_{i}-1) \\Sigma_i$\nwhere $N_{i}$ is the sample size of the respective class (here: 50), and in this particular case, we can drop the term ($N_{i}-1)$ \nsince all classes have the same sample size.\nHowever, the resulting eigenspaces will be identical (identical eigenvectors, only the eigenvalues are scaled differently by a constant factor).\n<br>\n<br>\n2.2 Between-class scatter matrix $S_B$\n[back to top]\nThe between-class scatter matrix $S_B$ is computed by the following equation: \n$S_B = \\sum\\limits_{i=1}^{c} N_{i} (\\pmb m_i - \\pmb m) (\\pmb m_i - \\pmb m)^T$\nwhere\n $\\pmb m$ is the overall mean, and $\\pmb m_{i}$ and $N_{i}$ are the sample mean and sizes of the respective classes.", "overall_mean = np.mean(X, axis=0)\n\nS_B = np.zeros((4,4))\nfor i,mean_vec in enumerate(mean_vectors): \n n = X[y==i+1,:].shape[0]\n mean_vec = mean_vec.reshape(4,1) # make column vector\n overall_mean = overall_mean.reshape(4,1) # make column vector\n S_B += n * (mean_vec - overall_mean).dot((mean_vec - overall_mean).T)\n \nprint('between-class Scatter Matrix:\\n', S_B)", "<br>\n<br>\nStep 3: Solving the generalized eigenvalue problem for the matrix $S_{W}^{-1}S_B$\n[back to top]\nNext, we will solve the generalized eigenvalue problem for the matrix $S_{W}^{-1}S_B$ to obtain the linear discriminants.", "eig_vals, eig_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))\n\nfor i in range(len(eig_vals)):\n eigvec_sc = eig_vecs[:,i].reshape(4,1) \n print('\\nEigenvector {}: \\n{}'.format(i+1, eigvec_sc.real))\n print('Eigenvalue {:}: {:.2e}'.format(i+1, eig_vals[i].real))", "<br>\n<br>\nAfter this decomposition of our square matrix into eigenvectors and eigenvalues, let us briefly recapitulate how we can interpret those results. As we remember from our first linear algebra class in high school or college, both eigenvectors and eigenvalues are providing us with information about the distortion of a linear transformation: The eigenvectors are basically the direction of this distortion, and the eigenvalues are the scaling factor for the eigenvectors that describing the magnitude of the distortion. \nIf we are performing the LDA for dimensionality reduction, the eigenvectors are important since they will form the new axes of our new feature subspace; the associated eigenvalues are of particular interest since they will tell us how \"informative\" the new \"axes\" are. 
\nLet us briefly double-check our calculation and talk more about the eigenvalues in the next section.\n<br>\n<br>\nChecking the eigenvector-eigenvalue calculation\n[back to top]\nA quick check that the eigenvector-eigenvalue calculation is correct and satisfy the equation:\n$\\pmb A\\pmb{v} = \\lambda\\pmb{v}$ \n<br>\nwhere\n$\\pmb A = S_{W}^{-1}S_B\\\n\\pmb{v} = \\; \\text{Eigenvector}\\\n\\lambda = \\; \\text{Eigenvalue}$", "for i in range(len(eig_vals)):\n eigv = eig_vecs[:,i].reshape(4,1) \n np.testing.assert_array_almost_equal(np.linalg.inv(S_W).dot(S_B).dot(eigv), \n eig_vals[i] * eigv, \n decimal=6, err_msg='', verbose=True)\nprint('ok')", "<br>\n<br>\nStep 4: Selecting linear discriminants for the new feature subspace\n[back to top]\n<br>\n<br>\n4.1. Sorting the eigenvectors by decreasing eigenvalues\n[back to top]\nRemember from the introduction that we are not only interested in merely projecting the data into a subspace that improves the class separability, but also reduces the dimensionality of our feature space, (where the eigenvectors will form the axes of this new feature subspace). \nHowever, the eigenvectors only define the directions of the new axis, since they have all the same unit length 1.\nSo, in order to decide which eigenvector(s) we want to drop for our lower-dimensional subspace, we have to take a look at the corresponding eigenvalues of the eigenvectors. Roughly speaking, the eigenvectors with the lowest eigenvalues bear the least information about the distribution of the data, and those are the ones we want to drop.\nThe common approach is to rank the eigenvectors from highest to lowest corresponding eigenvalue and choose the top $k$ eigenvectors.", "# Make a list of (eigenvalue, eigenvector) tuples\neig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))]\n\n# Sort the (eigenvalue, eigenvector) tuples from high to low\neig_pairs = sorted(eig_pairs, key=lambda k: k[0], reverse=True)\n\n# Visually confirm that the list is correctly sorted by decreasing eigenvalues\n\nprint('Eigenvalues in decreasing order:\\n')\nfor i in eig_pairs:\n print(i[0])", "<br>\n<br>\nIf we take a look at the eigenvalues, we can already see that 2 eigenvalues are close to 0 and conclude that the eigenpairs are less informative than the other two. Let's express the \"explained variance\" as percentage:", "print('Variance explained:\\n')\neigv_sum = sum(eig_vals)\nfor i,j in enumerate(eig_pairs):\n print('eigenvalue {0:}: {1:.2%}'.format(i+1, (j[0]/eigv_sum).real))", "<br>\n<br>\nThe first eigenpair is by far the most informative one, and we won't loose much information if we would form a 1D-feature spaced based on this eigenpair.\n<br>\n<br>\n4.2. 
Choosing k eigenvectors with the largest eigenvalues\n[back to top]\nAfter sorting the eigenpairs by decreasing eigenvalues, it is now time to construct our $k \\times d$-dimensional eigenvector matrix $\\pmb W$ (here $4 \\times 2$: based on the 2 most informative eigenpairs) and thereby reducing the initial 4-dimensional feature space into a 2-dimensional feature subspace.", "W = np.hstack((eig_pairs[0][1].reshape(4,1), eig_pairs[1][1].reshape(4,1)))\nprint('Matrix W:\\n', W.real)", "<br>\n<br>\nStep 5: Transforming the samples onto the new subspace\n[back to top]\nIn the last step, we use the $4 \\times 2$-dimensional matrix $\\pmb W$ that we just computed to transform our samples onto the new subspace via the equation \n$\\pmb Y = \\pmb X \\times \\pmb W $.\n(where $\\pmb X$ is a $n \\times d$-dimensional matrix representing the $n$ samples, and $\\pmb Y$ are the transformed $n \\times k$-dimensional samples in the new subspace).", "X_lda = X.dot(W)\nassert X_lda.shape == (150,2), \"The matrix is not 2x150 dimensional.\"\n\nfrom matplotlib import pyplot as plt\n\ndef plot_step_lda():\n \n ax = plt.subplot(111)\n for label,marker,color in zip(\n range(1,4),('^', 's', 'o'),('blue', 'red', 'green')):\n\n plt.scatter(x=X_lda[:,0].real[y == label],\n y=X_lda[:,1].real[y == label],\n marker=marker,\n color=color,\n alpha=0.5,\n label=label_dict[label]\n )\n\n plt.xlabel('LD1')\n plt.ylabel('LD2')\n\n leg = plt.legend(loc='upper right', fancybox=True)\n leg.get_frame().set_alpha(0.5)\n plt.title('LDA: Iris projection onto the first 2 linear discriminants')\n \n # hide axis ticks\n plt.tick_params(axis=\"both\", which=\"both\", bottom=\"off\", top=\"off\", \n labelbottom=\"on\", left=\"off\", right=\"off\", labelleft=\"on\")\n\n # remove axis spines\n ax.spines[\"top\"].set_visible(False) \n ax.spines[\"right\"].set_visible(False) \n ax.spines[\"bottom\"].set_visible(False) \n ax.spines[\"left\"].set_visible(False) \n \n plt.grid()\n plt.tight_layout\n plt.show()\n \nplot_step_lda()", "The scatter plot above represents our new feature subspace that we constructed via LDA. We can see that the first linear discriminant \"LD1\" separates the classes quite nicely. However, the second discriminant, \"LD2\", does not add much valuable information, which we've already concluded when we looked at the ranked eigenvalues is step 4.\n<br>\n<br>\nA comparison of PCA and LDA\n[back to top]\nIn order to compare the feature subspace that we obtained via the Linear Discriminant Analysis, we will use the PCA class from the scikit-learn machine-learning library. The documentation can be found here:\nhttp://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html. \nFor our convenience, we can directly specify to how many components we want to retain in our input dataset via the n_components parameter. \nn_components : int, None or string\n\nNumber of components to keep. if n_components is not set all components are kept:\n n_components == min(n_samples, n_features)\n if n_components == ‘mle’, Minka’s MLE is used to guess the dimension if 0 &lt; n_components &lt; 1, \n select the number of components such that the amount of variance that needs to be explained \n is greater than the percentage specified by n_components\n\n<br>\n<br>\nBut before we skip to the results of the respective linear transformations, let us quickly recapitulate the purposes of PCA and LDA: PCA finds the axes with maximum variance for the whole data set where LDA tries to find the axes for best class seperability. 
In practice, often a LDA is done followed by a PCA for dimensionality reduction.\n<br>\n<br>\n\n<br>\n<br>", "from sklearn.decomposition import PCA as sklearnPCA\n\nsklearn_pca = sklearnPCA(n_components=2)\nX_pca = sklearn_pca.fit_transform(X)\n\ndef plot_pca():\n\n ax = plt.subplot(111)\n \n for label,marker,color in zip(\n range(1,4),('^', 's', 'o'),('blue', 'red', 'green')):\n\n plt.scatter(x=X_pca[:,0][y == label],\n y=X_pca[:,1][y == label],\n marker=marker,\n color=color,\n alpha=0.5,\n label=label_dict[label]\n )\n\n plt.xlabel('PC1')\n plt.ylabel('PC2')\n\n leg = plt.legend(loc='upper right', fancybox=True)\n leg.get_frame().set_alpha(0.5)\n plt.title('PCA: Iris projection onto the first 2 principal components')\n\n # hide axis ticks\n plt.tick_params(axis=\"both\", which=\"both\", bottom=\"off\", top=\"off\", \n labelbottom=\"on\", left=\"off\", right=\"off\", labelleft=\"on\")\n\n # remove axis spines\n ax.spines[\"top\"].set_visible(False) \n ax.spines[\"right\"].set_visible(False) \n ax.spines[\"bottom\"].set_visible(False) \n ax.spines[\"left\"].set_visible(False) \n \n plt.tight_layout\n plt.grid()\n \n plt.show()\n\nplot_pca()\nplot_step_lda()", "<br>\n<br>\nThe two plots above nicely confirm what we have discussed before: Where the PCA accounts for the most variance in the whole dataset, the LDA gives us the axes that account for the most variance between the individual classes.\n<br>\n<br>\nLDA via scikit-learn\n[back to top]\nNow, after we have seen how an Linear Discriminant Analysis works using a step-by-step approach, there is also a more convenient way to achive the same via the LDA class implemented in the scikit-learn machine learning library.", "from sklearn.lda import LDA\n\n# LDA\nsklearn_lda = LDA(n_components=2)\nX_lda_sklearn = sklearn_lda.fit_transform(X, y)\n\n\ndef plot_scikit_lda(X, title, mirror=1):\n \n ax = plt.subplot(111)\n for label,marker,color in zip(\n range(1,4),('^', 's', 'o'),('blue', 'red', 'green')):\n \n plt.scatter(x=X[:,0][y == label]*mirror,\n y=X[:,1][y == label],\n marker=marker,\n color=color,\n alpha=0.5,\n label=label_dict[label]\n )\n\n plt.xlabel('LD1')\n plt.ylabel('LD2')\n\n leg = plt.legend(loc='upper right', fancybox=True)\n leg.get_frame().set_alpha(0.5)\n plt.title(title)\n \n # hide axis ticks\n plt.tick_params(axis=\"both\", which=\"both\", bottom=\"off\", top=\"off\", \n labelbottom=\"on\", left=\"off\", right=\"off\", labelleft=\"on\")\n\n # remove axis spines\n ax.spines[\"top\"].set_visible(False) \n ax.spines[\"right\"].set_visible(False) \n ax.spines[\"bottom\"].set_visible(False) \n ax.spines[\"left\"].set_visible(False) \n \n plt.grid()\n plt.tight_layout\n plt.show()\n\nplot_step_lda()\nplot_scikit_lda(X_lda_sklearn, title='Default LDA via scikit-learn', mirror=(-1))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tbrx/compiled-inference
notebooks/Factorial-HMM.ipynb
gpl-3.0
[ "from scipy import stats\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport torch\nimport sys\nsys.path.insert(0, '..')\n%matplotlib inline\n\nUSE_GPU = torch.cuda.is_available()\nprint \"Using GPU?\", USE_GPU\n\nfrom torch.autograd import Variable\nimport seaborn as sns\nsns.set_context(\"notebook\", font_scale=1.5, rc={\"lines.markersize\": 12})\nsns.set_style('white')\n\nfrom learn_smc_proposals import cde\nfrom learn_smc_proposals.examples import factorial_hmm", "Factorial HMM\nExample synthetic data: 20 different \"devices\", each with different power consumptions, turning on and off following separate Markov models", "devices = factorial_hmm.gen_devices()\nT = 50\n\nnp.random.seed(20)\nX, Y = factorial_hmm.gen_dataset(devices, T)\n\nplt.figure(figsize=(15,3.5))\nplt.plot(Y)\nplt.figure(figsize=(15,10))\nplt.imshow((X*devices).T, interpolation='None', aspect=1);\nplt.yticks(np.arange(len(devices)), devices);\n\nprint len(devices), 2**len(devices)\n\ntrace_train = []\ntrace_validation = []\n\ndist_est = cde.ConditionalBinaryMADE(len(devices)+1, len(devices), H=300, num_layers=4)\nif USE_GPU:\n dist_est.cuda()\n\n\ndist_est.load_state_dict(torch.load('../saved/trained_hmm_params.rar'))", "Test out learned distribution inside of SMC\nWe'll compare it against a baseline of \"bootstrap\" SMC, which proposes from the transition dynamics of the individual HMMs.", "X_hat_bootstrap, ancestry_bootstrap, ESS_bootstrap = \\\n factorial_hmm.run_smc(devices, Y, 500, factorial_hmm.baseline_proposal, verbose=False)\nY_hat_bootstrap = np.dot(X_hat_bootstrap, devices)\n\nnn_proposal = factorial_hmm.make_nn_proposal(dist_est)\nX_hat_nn, ancestry_nn, ESS_nn = \\\n factorial_hmm.run_smc(devices, Y, 500, nn_proposal, verbose=False)\nY_hat_nn = np.dot(X_hat_nn, devices)\n\nplt.hist(ESS_bootstrap, histtype='stepfilled', linewidth=2, alpha=0.5, bins=20,edgeColor='k')\nplt.hist(ESS_nn, histtype='stepfilled', linewidth=2, alpha=0.5, bins=20,edgeColor='k')\nplt.xlim([0,plt.xlim()[1]])\nplt.legend(['bootstrap', 'nnsmc'])\nplt.title('Histogram of effective sample size of SMC filtering distribution');\n\nplt.figure(figsize=(16,4))\nplt.title('Ancestral paths for bootstrap proposals (blue) and nn (green)')\nplt.plot(ancestry_bootstrap.T, color=sns.color_palette()[0]);\nplt.plot(ancestry_nn.T, color=sns.color_palette()[1]);\nplt.ylim(0,ancestry_nn.shape[0])\nplt.xlim(0,T-1);\n\nplt.figure(figsize=(14,3.25))\n\nplt.plot(np.dot(X_hat_nn, devices).T, color=sns.color_palette()[1], alpha=0.1)\nplt.plot(np.arange(len(Y)), Y,'k--')\nplt.xlim([0,T-1])\n\nplt.xlabel('Time step')\nplt.ylabel('Total energy usage')", "Look at rate of path coalescence", "ANC_PRIOR = []\nANC_NN = []\n\ndef count_uniques(ancestry):\n K, T = ancestry.shape\n counts = np.empty((T,), dtype=int)\n for t in xrange(T):\n counts[t] = len(np.unique(ancestry[:,t]))\n return counts\n\ndef run_iter():\n X,Y = factorial_hmm.gen_dataset(devices, T=30)\n X_particles_baseline, ancestry_baseline, _ = \\\n factorial_hmm.run_smc(devices, Y, 100, factorial_hmm.baseline_proposal, verbose=False)\n print \"smc complete\"\n X_particles, ancestry_nnsmc, _ = \\\n factorial_hmm.run_smc(devices, Y, 500, nn_proposal, verbose=False)\n print \"nn complete\"\n ANC_PRIOR.append(count_uniques(ancestry_baseline))\n ANC_NN.append(count_uniques(ancestry_nnsmc))\n return X,Y\n\nfor i in xrange(10): \n print \"iteration\", i+1\n X_tmp, Y_tmp = run_iter()\n\nplt.figure(figsize=(8,3.5))\nplt.plot(np.arange(len(X_tmp)), np.mean(ANC_PRIOR, 0));\nplt.plot(np.arange(len(X_tmp)), 
np.mean(ANC_NN, 0));\nplt.legend(['Bootstrap SMC', 'NN-SMC'], loc='upper left')\n\npm = np.mean(ANC_PRIOR, 0)\npsd = np.std(ANC_PRIOR, 0)\nsafe_lb = (pm - psd) * (pm - psd > 1.0) + (pm - psd <= 1.0)\nplt.fill_between(np.arange(len(X_tmp)), safe_lb, pm+psd, alpha=0.25, color=sns.color_palette()[0]);\npm = np.mean(ANC_NN, 0)\npsd = np.std(ANC_NN, 0)\nplt.fill_between(np.arange(len(X_tmp)), pm-psd, pm+psd, alpha=0.25, color=sns.color_palette()[1]);\n\nplt.semilogy();\nplt.xlabel('Time step')\nplt.ylabel('Surviving paths')\nplt.ylim(1, 100)\nplt.xlim(0, len(X_tmp)-1)\n\nplt.tight_layout()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ZwickyTransientFacility/simsurvey-examples
simsurvey_demo.ipynb
bsd-3-clause
[ "Simsurvey demo", "import os\nhome_dir = os.getcwd()\n\n# Please enter the path to where you have placed the Schlegel, Finkbeiner & Davis (1998) dust map files\n# You can also set the environment variable SFD_DIR to this path (in that case the variable below should be None)\nsfd98_dir = os.path.join(home_dir, 'data/sfd98')\n\nimport simsurvey\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport sncosmo\nfrom astropy.cosmology import Planck15\n\nimport simsurvey_tools as sst\nfrom scipy.interpolate import RectBivariateSpline as Spline2d\n\nimport ligo.skymap.plot\n\nsimsurvey.__version__\n\n# Load the ZTF fields, CCD corners and filters\nfields = sst.load_ztf_fields()\nsst.load_ztf_filters()\n\n# Load the ZTF CCD corners \nccds = sst.load_ztf_ccds()\n\n# Load the ZTF quadrants corners \nccds = sst.load_ztf_ccds(filename='data/ZTF_corners_rcid.txt', num_segs=64)", "Create a survey plan", "obs = {'time': [], 'field': [], 'band': [], 'maglim': [], 'skynoise': [], 'comment': [], 'zp': []}\n\nmjd_start = 58239.5\nfor k in range(0, 61, 3):\n obs['time'].extend([mjd_start + k + l/24. for l in range(3)])\n obs['field'].extend([683 for l in range(3)])\n obs['band'].extend(['ztfg', 'ztfr', 'ztfi'])\n obs['maglim'].extend([22 for l in range(3)])\n obs['zp'].extend([30 for l in range(3)])\n obs['comment'].extend(['' for l in range(3)])\n \nobs['skynoise'] = 10**(-0.4 * (np.array(obs['maglim']) - 30)) / 5\n\nplan = simsurvey.SurveyPlan(time=obs['time'],\n band=obs['band'],\n skynoise=obs['skynoise'],\n obs_field=obs['field'],\n obs_ccd=None,\n zp=obs['zp'],\n comment=obs['comment'],\n fields=fields,\n ccds=ccds\n )\n\nmjd_range = (plan.pointings['time'].min() - 30, plan.pointings['time'].max() + 30)\n\nplan.pointings", "Transient model\nIn this example the transient is created using models from https://github.com/mbulla/kilonova_models", "! git clone https://github.com/mbulla/kilonova_models.git\n\ndef Bullamodel(dynwind=False, dataDir='kilonova_models/02_Dhawan2019/', mej=0.04, phi=30, temp=5000):\n l = dataDir+'nph1.0e+06_mej'+'{:.2f}'.format(mej)+'_phi'+'{:.0f}'.format(phi)+'_T'+'{:.1e}'.format(temp)+'.txt'\n f = open(l)\n lines = f.readlines()\n nobs = int(lines[0])\n nwave = float(lines[1])\n line3 = (lines[2]).split(' ')\n ntime = int(line3[0])\n t_i = float(line3[1])\n t_f = float(line3[2])\n cos_theta = np.linspace(0, 1, nobs) # 11 viewing angles\n phase = np.linspace(t_i, t_f, ntime) # epochs\n file_ = np.genfromtxt(l, skip_header=3)\n wave = file_[0:int(nwave),0]\n flux = []\n for i in range(int(nobs)):\n flux.append(file_[i*int(nwave):i*int(nwave)+int(nwave),1:])\n flux = np.array(flux).T\n\n return phase, wave, cos_theta, flux\n\n\n# AngularTimeSeriesSource classdefined to create an angle dependent time serie source.\nclass AngularTimeSeriesSource(sncosmo.Source):\n \"\"\"A single-component spectral time series model.\n The spectral flux density of this model is given by\n .. math::\n F(t, \\lambda) = A \\\\times M(t, \\lambda)\n where _M_ is the flux defined on a grid in phase and wavelength\n and _A_ (amplitude) is the single free parameter of the model. The\n amplitude _A_ is a simple unitless scaling factor applied to\n whatever flux values are used to initialize the\n ``TimeSeriesSource``. Therefore, the _A_ parameter has no\n intrinsic meaning. It can only be interpreted in conjunction with\n the model values. 
Thus, it is meaningless to compare the _A_\n parameter between two different ``TimeSeriesSource`` instances with\n different model data.\n Parameters\n ----------\n phase : `~numpy.ndarray`\n Phases in days.\n wave : `~numpy.ndarray`\n Wavelengths in Angstroms.\n cos_theta: `~numpy.ndarray`\n Cosine of\n flux : `~numpy.ndarray`\n Model spectral flux density in erg / s / cm^2 / Angstrom.\n Must have shape ``(num_phases, num_wave, num_cos_theta)``.\n zero_before : bool, optional\n If True, flux at phases before minimum phase will be zeroed. The\n default is False, in which case the flux at such phases will be equal\n to the flux at the minimum phase (``flux[0, :]`` in the input array).\n name : str, optional\n Name of the model. Default is `None`.\n version : str, optional\n Version of the model. Default is `None`.\n \"\"\"\n\n _param_names = ['amplitude', 'theta']\n param_names_latex = ['A', r'\\theta']\n\n def __init__(self, phase, wave, cos_theta, flux, zero_before=True, zero_after=True, name=None,\n version=None):\n self.name = name\n self.version = version\n self._phase = phase\n self._wave = wave\n self._cos_theta = cos_theta\n self._flux_array = flux\n self._parameters = np.array([1., 0.])\n self._current_theta = 0.\n self._zero_before = zero_before\n self._zero_after = zero_after\n self._set_theta()\n\n def _set_theta(self):\n logflux_ = np.zeros(self._flux_array.shape[:2])\n for k in range(len(self._phase)):\n adding = 1e-10 # Here we are adding 1e-10 to avoid problems with null values\n f_tmp = Spline2d(self._wave, self._cos_theta, np.log(self._flux_array[k]+adding),\n kx=1, ky=1)\n logflux_[k] = f_tmp(self._wave, np.cos(self._parameters[1]*np.pi/180)).T\n\n self._model_flux = Spline2d(self._phase, self._wave, logflux_, kx=1, ky=1)\n\n self._current_theta = self._parameters[1]\n\n def _flux(self, phase, wave):\n if self._current_theta != self._parameters[1]:\n self._set_theta()\n f = self._parameters[0] * (np.exp(self._model_flux(phase, wave)))\n if self._zero_before:\n mask = np.atleast_1d(phase) < self.minphase()\n f[mask, :] = 0.\n if self._zero_after:\n mask = np.atleast_1d(phase) > self.maxphase()\n f[mask, :] = 0.\n return f\n\nphase, wave, cos_theta, flux = Bullamodel()\nsource = AngularTimeSeriesSource(phase, wave, cos_theta, flux)\ndust = sncosmo.CCM89Dust()\nmodel = sncosmo.Model(source=source,effects=[dust, dust], effect_names=['host', 'MW'], effect_frames=['rest', 'obs'])\n\n# Distribution of viewing angles\n\nthetadist = 'uniform in cosine' # 'uniform in cosine', 'uniform in degrees', 'fixed theta'\n\ndef random_parameters(redshifts, model,r_v=2., ebv_rate=0.11,**kwargs):\n # Amplitude\n amp = []\n for z in redshifts:\n amp.append(10**(-0.4*Planck15.distmod(z).value))\n \n if thetadist=='uniform in cosine':\n theta = np.arccos(np.random.random(len(redshifts))) / np.pi * 180\n elif thetadist=='uniform in degrees':\n theta = np.random.uniform(0, 90,size=len(redshifts))\n elif thetadist=='fixed theta':\n theta = np.array([20]*len(redshifts)) # Viewing angle fixed to 20 degrees\n\n return {\n 'amplitude': np.array(amp),\n 'theta': theta, \n 'hostr_v': r_v * np.ones(len(redshifts)),\n 'hostebv': np.random.exponential(ebv_rate, len(redshifts))\n }", "Transient Generator", "transientprop = dict(lcmodel=model, lcsimul_func=random_parameters)", "Number of injections, you can fix the number of generated transients or follow a rate. 
Rate should always be specified even for ntransient != None.", "ntransient = 1000\nrate = 1000 * 1e-6 # Mpc-3 yr-1\n\ndec_range=(plan.pointings['Dec'].min()-10,plan.pointings['Dec'].max()+10)\nra_range=(plan.pointings['RA'].min()-10,plan.pointings['RA'].max()+10)\n\ntr = simsurvey.get_transient_generator([0, 0.05],\n ntransient=ntransient,\n ratefunc=lambda z: rate,\n dec_range=dec_range,\n ra_range=ra_range,\n mjd_range=(mjd_range[0],\n mjd_range[1]),\n transientprop=transientprop,\n sfd98_dir=sfd98_dir\n ) ", "SimulSurvey", "# With sourcenoise==False, the flux error will correspond to the skynoise. Sourcenoise==True add an extra term in the flux errors from the brightness of the source. \n\nsurvey = simsurvey.SimulSurvey(generator=tr, plan=plan, n_det=2, threshold=5., sourcenoise=False)\n \nlcs = survey.get_lightcurves(\n progress_bar=True, notebook=True # If you get an error because of the progress_bar, delete this line.\n )\n\nlen(lcs.lcs)", "Save", "lcs.save('lcs.pkl')", "Output\n\n\nlcs.lcs contains the detected lightcurves\n\n\nlcs.meta contains parameters for the detected lightcurves\n\n\nlcs.meta_full contains parameters for all the injection within the observed area. \n\n\nlcs.meta_rejected contains parameters for all the injection within the observed area that were not detected.\n\n\nlcs.meta_notobserved contains parameters for all the injection outside the observed area.", "_ = sncosmo.plot_lc(lcs[0])\n\n# Redshift distribution\nplt.hist(lcs.meta_full['z'], lw=1, histtype='step', range=(0,0.05), bins=20, label='all')\nplt.hist(lcs.meta['z'], lw=2, histtype='step', range=(0,0.05), bins=20, label='detected')\nplt.xlabel('Redshift', fontsize='x-large')\nplt.ylabel(r'$N_{KNe}$', fontsize='x-large')\nplt.xlim((0, 0.05))\nplt.legend()\n\nplt.hist(lcs.stats['p_det'], lw=2, histtype='step', range=(0,10), bins=20)\nplt.xlabel('Detection phase (observer-frame)', fontsize='x-large')\n_ = plt.ylabel(r'$N_{KNe}$', fontsize='x-large')\n\nplt.figure()\nax = plt.axes()\nax.grid()\n\nax.scatter(lcs.meta_notobserved['ra'], lcs.meta_notobserved['dec'], marker='*', label='meta_notobserved', alpha=0.7)\nax.scatter(lcs.meta_rejected['ra'], lcs.meta_rejected['dec'], marker='*', label='meta_rejected', alpha=0.7)\nax.scatter(lcs.meta['ra'], lcs.meta['dec'], marker='*', label='meta_detected', alpha=0.7)\n\n#ax.legend(loc='center left', bbox_to_anchor=(0.9, .5))\nax.legend(loc=0)\nax.set_ylabel('DEC (deg)')\nax.set_xlabel('RA (deg)')\n\nplt.tight_layout()\nplt.show()\n\nplt.figure()\nax = plt.axes(\n [0.05, 0.05, 0.9, 0.9],\n projection='geo degrees mollweide'\n )\nax.grid()\n\nax.scatter(lcs.meta_notobserved['ra'], lcs.meta_notobserved['dec'], transform=ax.get_transform('world'), marker='*', label='meta_notobserved', alpha=0.7)\nax.scatter(lcs.meta_rejected['ra'], lcs.meta_rejected['dec'], transform=ax.get_transform('world'), marker='*', label='meta_rejected', alpha=0.7)\nax.scatter(lcs.meta['ra'], lcs.meta['dec'], transform=ax.get_transform('world'), marker='*', label='meta_detected', alpha=0.7)\n\n#ax.legend(loc='center left', bbox_to_anchor=(0.9, .5))\nax.legend(loc=0)\nax.set_ylabel('DEC (deg)')\nax.set_xlabel('RA (deg)')\n\nplt.tight_layout()\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kimkipyo/dss_git_kkp
통계, 머신러닝 복습/160516월_3일차_기초 선형 대수 1 - 행렬의 정의와 연산 Basic Linear Algebra(NumPy)/3.NumPy 연산.ipynb
mit
[ "NumPy 연산\n벡터화 연산\nNumPy는 코드를 간단하게 만들고 계산 속도를 빠르게 하기 위한 벡터화 연산(vectorized operation)을 지원한다. 벡터화 연산이란 반복문(loop)을 사용하지 않고 선형 대수의 벡터 혹은 행렬 연산과 유사한 코드를 사용하는 것을 말한다.\n예를 들어 다음과 같은 연산을 해야 한다고 하자.\n$$ \nx = \\begin{bmatrix}1 \\ 2 \\ 3 \\ \\vdots \\ 100 \\end{bmatrix}, \\;\\;\\;\\;\ny = \\begin{bmatrix}101 \\ 102 \\ 103 \\ \\vdots \\ 200 \\end{bmatrix},\n$$\n$$z = x + y = \\begin{bmatrix}1+101 \\ 2+102 \\ 3+103 \\ \\vdots \\ 100+200 \\end{bmatrix}= \\begin{bmatrix}102 \\ 104 \\ 106 \\ \\vdots \\ 300 \\end{bmatrix}\n$$\n만약 NumPy의 벡터화 연산을 사용하지 않는다면 이 연산은 루프를 활용하여 다음과 같이 코딩해야 한다.", "x = np.arange(1, 101)\nx\n\ny = np.arange(101, 201)\ny\n\n%%time\nz = np.zeros_like(x)\nfor i, (xi, yi) in enumerate(zip(x, y)):\n z[i] = xi + yi\n\nz\n\nz", "그러나 NumPy는 벡터화 연산을 지원하므로 다음과 같이 덧셈 연산 하나로 끝난다. 위에서 보인 선형 대수의 벡터 기호를 사용한 연산과 코드가 완전히 동일하다.", "%%time\nz = x + y\n\nz", "연산 속도도 벡터화 연산이 훨씬 빠른 것을 볼 수 있다.\nElement-Wise 연산\nNumPy의 벡터화 연산은 같은 위치의 원소끼리 연산하는 element-wise 연산이다. NumPy의 ndarray를 선형 대수의 벡터나 행렬이라고 했을 때 덧셈, 뺄셈은 NumPy 연산과 일치한다\n스칼라와 벡터의 곱도 마찬가지로 선형 대수에서 사용하는 식과 NumPy 코드가 일치한다.", "x = np.arange(10)\nx\n\na = 100\na * x", "NumPy 곱셉의 경우에는 행렬의 곱, 즉 내적(inner product, dot product)의 정의와 다르다. 따라서 이 경우에는 별도로 dot이라는 명령 혹은 메서드를 사용해야 한다.", "x = np.arange(10)\ny = np.arange(10)\nx * y\n\nx\n\ny\n\nnp.dot(x, y)\n\nx.dot(y)", "비교 연산도 마찬가지로 element-wise 연산이다. 따라서 벡터 혹은 행렬 전체의 원소가 모두 같아야 하는 선형 대수의 비교 연산과는 다르다.", "a = np.array([1, 2, 3, 4])\nb = np.array([4, 2, 2, 4])\n\na == b\n\na >= b", "만약 배열 전체를 비교하고 싶다면 array_equal 명령을 사용한다.", "a = np.array([1, 2, 3, 4])\nb = np.array([4, 2, 2, 4])\nc = np.array([1, 2, 3, 4])\n\nnp.array_equal(a, b)\n\nnp.array_equal(a, c)", "만약 NumPy 에서 제공하는 지수 함수, 로그 함수 등의 수학 함수를 사용하면 element-wise 벡터화 연산을 지원한다.", "a = np.arange(5)\na\n\nnp.exp(a)\n\n10**a\n\nnp.log(a)\n\nnp.log10(a)", "만약 NumPy에서 제공하는 함수를 사용하지 않으면 벡터화 연산은 불가능하다.", "import math\na = [1, 2, 3]\nmath.exp(a)", "브로드캐스팅\n선형 대수의 행렬 덧셈 혹은 뺄셈을 하려면 두 행렬의 크기가 같아야 한다. 그러나 NumPy에서는 서로 다른 크기를 가진 두 ndarray 배열의 사칙 연산도 지원한다. 이 기능을 브로드캐스팅(broadcasting)이라고 하는데 크기가 작은 배열을 자동으로 반복 확장하여 크기가 큰 배열에 맞추는 방벙이다.\n예를 들어 다음과 같이 벡터와 스칼라를 더하는 경우를 생각하자. 선형 대수에서는 이러한 연산이 불가능하다.\n$$ \nx = \\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \\end{bmatrix}, \\;\\;\\;\\; \nx + 1 = \\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \\end{bmatrix} + 1 = ?\n$$\n그러나 NumPy는 브로드캐스팅 기능을 사용하여 스칼라를 벡터와 같은 크기로 확장시켜서 덧셈 계산을 한다.\n$$ \n\\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \\end{bmatrix} \\overset{\\text{numpy}}+ 1 = \n\\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \\end{bmatrix} + \\begin{bmatrix}1 \\ 1 \\ 1 \\ 1 \\ 1 \\end{bmatrix} = \n\\begin{bmatrix}1 \\ 2 \\ 3 \\ 4 \\ 5 \\end{bmatrix}\n$$", "x = np.arange(5)\ny = np.ones_like(x)\nx + y\n\nx + 1", "브로드캐스팅은 더 차원이 높은 경우에도 적용된다. 다음 그림을 참조하라.\n<img src=\"https://datascienceschool.net/upfiles/dbd3775c3b914d4e8c6bbbb342246b6a.png\" style=\"width: 60%; margin: 0 auto 0 auto;\">", "np.tile(np.arange(0, 40, 10), (3, 1))\n\na = np.tile(np.arange(0, 40, 10), (3, 1)).T\na\n\nb = np.array([0, 1, 2])\nb\n\na + b\n\na = np.arange(0, 40, 10)[:, np.newaxis]\na\n\na + b", "차원 축소 연산\nndarray의 하나의 행에 있는 원소를 하나의 데이터 집합으로 보고 평균을 구하면 각 행에 대해 하나의 숫자가 나오게 된다. 예를 들어 10x5 크기의 2차원 배열에 대해 행-평균을 구하면 10개의 숫자를 가진 1차원 벡터가 나오게 된다. 
이러한 연산을 차원 축소(dimension reduction) 연산이라고 한다.\nndarray 는 다음과 같은 차원 축소 연산 명령 혹은 메서드를 지원한다.\n\n최대/최소: min, max, argmin, argmax\n통계: sum, mean, median, std, var\n불리언: all, any", "x = np.array([1, 2, 3, 4])\nx\n\nnp.sum(x)\n\nx.sum()\n\nx = np.array([1, 3, 2, 4])\n\nx.min(), np.min(x)\n\nx.max()\n\nx.argmin() # index of minimum\n\nx.argmax() # index of maximum\n\nx = np.array([1, 2, 3, 1])\n\nx.mean()\n\nnp.median(x)\n\nnp.all([True, True, False])\n\nnp.any([True, True, False])\n\na = np.zeros((100, 100), dtype=np.int)\na\n\nnp.any(a == 0)\n\nnp.any(a != 0)\n\nnp.all(a == 0)\n\na = np.array([1, 2, 3, 2])\nb = np.array([2, 2, 3, 2])\nc = np.array([6, 4, 4, 5])\n\n((a <= b) & (b <= c)).all()", "연산의 대상이 2차원 이상인 경우에는 어느 차원으로 계산을 할 지를 axis 인수를 사용하여 지시한다. axis=0인 경우는 열 연산, axis=1인 경우는 행 연산 등으로 사용한다. 디폴트 값은 0이다.\n<img src=\"https://datascienceschool.net/upfiles/edfaf93a7f124f359343d1dcfe7f29fc.png\", style=\"margin: 0 auto 0 auto;\">", "x = np.array([[1, 1], [2, 2]])\nx\n\nx.sum()\n\nx.sum(axis=0) # columns (first dimension)\n\nx.sum(axis=1) # rows (second dimension)\n\ny = np.array([[1, 2, 3], [5, 6, 1]])\nnp.median(y, axis=-1) # last axis\n\ny\n\nnp.median(y, axis=1)", "정렬\nsort 명령이나 메서드를 사용하여 배열 안의 원소를 크기에 따라 정렬하여 새로운 배열을 만들 수도 있다. 2차원 이상인 경우에는 마찬가지로 axis 인수를 사용하여 방향을 결정한다.", "a = np.array([[4, 3, 5], [1, 2, 1]])\na\n\nnp.sort(a)\n\nnp.sort(a, axis=1)\n\nnp.sort(a, axis=0)", "sort 메서드는 해당 객체의 자료 자체가 변화하는 in-place 메서드이므로 사용할 때 주의를 기울여야 한다.", "a\n\na.sort(axis=1)\na", "만약 자료를 정렬하는 것이 아니라 순서만 알고 싶다면 argsort 명령을 사용한다.", "a = np.array([4, 3, 1, 2])\nj = np.argsort(a)\nj\n\na[j]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
statsmodels/statsmodels.github.io
v0.13.0/examples/notebooks/generated/regression_diagnostics.ipynb
bsd-3-clause
[ "Regression diagnostics\nThis example file shows how to use a few of the statsmodels regression diagnostic tests in a real-life context. You can learn about more tests and find out more information about the tests here on the Regression Diagnostics page.\nNote that most of the tests described here only return a tuple of numbers, without any annotation. A full description of outputs is always included in the docstring and in the online statsmodels documentation. For presentation purposes, we use the zip(name,test) construct to pretty-print short descriptions in the examples below.\nEstimate a regression model", "%matplotlib inline\n\nfrom statsmodels.compat import lzip\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.formula.api as smf\nimport statsmodels.stats.api as sms\nimport matplotlib.pyplot as plt\n\n# Load data\nurl = \"https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/HistData/Guerry.csv\"\ndat = pd.read_csv(url)\n\n# Fit regression model (using the natural log of one of the regressors)\nresults = smf.ols(\"Lottery ~ Literacy + np.log(Pop1831)\", data=dat).fit()\n\n# Inspect the results\nprint(results.summary())", "Normality of the residuals\nJarque-Bera test:", "name = [\"Jarque-Bera\", \"Chi^2 two-tail prob.\", \"Skew\", \"Kurtosis\"]\ntest = sms.jarque_bera(results.resid)\nlzip(name, test)", "Omni test:", "name = [\"Chi^2\", \"Two-tail probability\"]\ntest = sms.omni_normtest(results.resid)\nlzip(name, test)", "Influence tests\nOnce created, an object of class OLSInfluence holds attributes and methods that allow users to assess the influence of each observation. For example, we can compute and extract the first few rows of DFbetas by:", "from statsmodels.stats.outliers_influence import OLSInfluence\n\ntest_class = OLSInfluence(results)\ntest_class.dfbetas[:5, :]", "Explore other options by typing dir(influence_test)\nUseful information on leverage can also be plotted:", "from statsmodels.graphics.regressionplots import plot_leverage_resid2\n\nfig, ax = plt.subplots(figsize=(8, 6))\nfig = plot_leverage_resid2(results, ax=ax)", "Other plotting options can be found on the Graphics page.\nMulticollinearity\nCondition number:", "np.linalg.cond(results.model.exog)", "Heteroskedasticity tests\nBreush-Pagan test:", "name = [\"Lagrange multiplier statistic\", \"p-value\", \"f-value\", \"f p-value\"]\ntest = sms.het_breuschpagan(results.resid, results.model.exog)\nlzip(name, test)", "Goldfeld-Quandt test", "name = [\"F statistic\", \"p-value\"]\ntest = sms.het_goldfeldquandt(results.resid, results.model.exog)\nlzip(name, test)", "Linearity\nHarvey-Collier multiplier test for Null hypothesis that the linear specification is correct:", "name = [\"t value\", \"p value\"]\ntest = sms.linear_harvey_collier(results)\nlzip(name, test)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "使用 Keras 预处理层对结构化数据进行分类\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://tensorflow.google.cn/tutorials/structured_data/preprocessing_layers\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">在 TensorFlow.org 上查看</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 中运行</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 GitHub 上查看源代码</a></td>\n <td> <img><a>下载笔记本</a>\n</td>\n</table>\n\n本教程演示了如何对结构化数据(例如 CSV 中的表格数据)进行分类。您将使用 Keras 定义模型,并使用预处理层作为桥梁,将 CSV 中的列映射到用于训练模型的特征。本教程包含以下操作的完整代码:\n\n使用 Pandas 加载 CSV 文件。\n构建输入流水线以使用 tf.data 对行进行批处理和乱序。\n使用 Keras 预处理层将 CSV 中的列映射到用于训练模型的特征。\n使用 Keras 构建、训练和评估模型。\n\n注:本教程类似于使用特征列对结构化数据进行分类。此版本使用新的实验性 Keras 预处理层而不是 tf.feature_column。Keras 预处理层更直观,可以轻松包含在模型中以简化部署。\n数据集\n您将使用 PetFinder 数据集的简化版本。CSV 中有几千行。每行描述一个宠物,每列描述一个特性。您将使用此信息来预测宠物是否会被领养。\n以下是对该数据集的描述。请注意,其中既有数值列,也有分类列。还有一个您不会在本教程中用到的自由文本列。\n列 | 描述 | 特征类型 | 数据类型\n--- | --- | --- | ---\nType | 动物类型(狗、猫) | 分类 | 字符串\nAge | 宠物年龄 | 数值 | 整数\nBreed1 | 宠物的主要品种 | 分类 | 字符串\nColor1 | 宠物的颜色 1 | 分类 | 字符串\nColor2 | 宠物的颜色 2 | 分类 | 字符串\nMaturitySize | 成年个体大小 | 分类 | 字符串\nFurLength | 毛发长度 | 分类 | 字符串\nVaccinated | 宠物已接种疫苗 | 分类 | 字符串\nSterilized | 宠物已绝育 | 分类 | 字符串\nHealth | 健康状况 | 分类 | 字符串\nFee | 领养费 | 数值 | 整数\nDescription | 关于此宠物的简介 | 文本 | 字符串\nPhotoAmt | 为该宠物上传的照片总数 | 数值 | 整数\nAdoptionSpeed | 领养速度 | 分类 | 整数\n导入TensorFlow和其他库", "!pip install -q sklearn\n\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\n\nfrom sklearn.model_selection import train_test_split\nfrom tensorflow.keras import layers\nfrom tensorflow.keras.layers.experimental import preprocessing\n\ntf.__version__", "使用 Pandas 创建数据帧\nPandas 是一个 Python 库,其中包含许多有用的加载和处理结构化数据的实用工具。您将使用 Pandas 从 URL 下载数据集,并将其加载到数据帧中。", "import pathlib\n\ndataset_url = 'http://storage.googleapis.com/download.tensorflow.org/data/petfinder-mini.zip'\ncsv_file = 'datasets/petfinder-mini/petfinder-mini.csv'\n\ntf.keras.utils.get_file('petfinder_mini.zip', dataset_url,\n extract=True, cache_dir='.')\ndataframe = pd.read_csv(csv_file)\n\ndataframe.head()", "创建目标变量\nKaggle 比赛中的任务是预测宠物被领养的速度(例如,在第一周、第一个月、前三个月等)。我们针对教程进行一下简化。在这里,您将把它转化为一个二元分类问题,并简单地预测宠物是否被领养。\n修改标签列后,0 表示宠物未被领养,1 表示宠物已被领养。", "# In the original dataset \"4\" indicates the pet was not adopted.\ndataframe['target'] = np.where(dataframe['AdoptionSpeed']==4, 0, 1)\n\n# Drop un-used columns.\ndataframe = dataframe.drop(columns=['AdoptionSpeed', 'Description'])", "将数据帧拆分为训练集、验证集和测试集\n您下载的数据集是单个 CSV 文件。您将把它拆分为训练集、验证集和测试集。", "train, test = train_test_split(dataframe, 
test_size=0.2)\ntrain, val = train_test_split(train, test_size=0.2)\nprint(len(train), 'train examples')\nprint(len(val), 'validation examples')\nprint(len(test), 'test examples')", "使用 tf.data 创建输入流水线\n接下来,您将使用 tf.data 封装数据帧,以便对数据进行乱序和批处理。如果您处理的 CSV 文件非常大(大到无法放入内存),则可以使用 tf.data 直接从磁盘读取文件。本教程中没有涉及这方面的内容。", "# A utility method to create a tf.data dataset from a Pandas Dataframe\ndef df_to_dataset(dataframe, shuffle=True, batch_size=32):\n dataframe = dataframe.copy()\n labels = dataframe.pop('target')\n ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))\n if shuffle:\n ds = ds.shuffle(buffer_size=len(dataframe))\n ds = ds.batch(batch_size)\n ds = ds.prefetch(batch_size)\n return ds", "现在您已经创建了输入流水线,我们调用它来查看它返回的数据的格式。您使用了小批次来保持输出的可读性。", "batch_size = 5\ntrain_ds = df_to_dataset(train, batch_size=batch_size)\n\n[(train_features, label_batch)] = train_ds.take(1)\nprint('Every feature:', list(train_features.keys()))\nprint('A batch of ages:', train_features['Age'])\nprint('A batch of targets:', label_batch )", "您可以看到数据集(从数据帧)返回了一个列名称字典,该字典映射到来自数据帧中行的列值。\n演示预处理层的使用。\nKeras 预处理层 API 允许您构建 Keras 原生输入处理流水线。您将使用 3 个预处理层来演示特征预处理代码。\n\nNormalization - 数据的特征归一化。\nNormalization - 类别编码层。\nStringLookup - 将字符串从词汇表映射到整数索引。\nIntegerLookup - 将词汇表中的整数映射到整数索引。\n\n您可以在此处找到可用预处理层的列表。\n数值列\n对于每个数值特征,您将使用 Normalization() 层来确保每个特征的平均值为 0,且其标准差为 1。\nget_normalization_layer 函数返回一个层,该层将特征归一化应用于数值特征。", "def get_normalization_layer(name, dataset):\n # Create a Normalization layer for our feature.\n normalizer = preprocessing.Normalization(axis=None)\n\n # Prepare a Dataset that only yields our feature.\n feature_ds = dataset.map(lambda x, y: x[name])\n\n # Learn the statistics of the data.\n normalizer.adapt(feature_ds)\n\n return normalizer\n\nphoto_count_col = train_features['PhotoAmt']\nlayer = get_normalization_layer('PhotoAmt', train_ds)\nlayer(photo_count_col)", "注:如果您有许多数值特征(数百个或更多),首先将它们连接起来并使用单个 normalization 层会更有效。\n分类列\n在此数据集中,Type 表示为字符串(例如 'Dog' 或 'Cat')。您不能将字符串直接馈送给模型。预处理层负责将字符串表示为独热向量。\nget_category_encoding_layer 函数返回一个层,该层将值从词汇表映射到整数索引,并对特征进行独热编码。", "def get_category_encoding_layer(name, dataset, dtype, max_tokens=None):\n # Create a StringLookup layer which will turn strings into integer indices\n if dtype == 'string':\n index = preprocessing.StringLookup(max_tokens=max_tokens)\n else:\n index = preprocessing.IntegerLookup(max_tokens=max_tokens)\n\n # Prepare a Dataset that only yields our feature\n feature_ds = dataset.map(lambda x, y: x[name])\n\n # Learn the set of possible values and assign them a fixed integer index.\n index.adapt(feature_ds)\n\n # Create a Discretization for our integer indices.\n encoder = preprocessing.CategoryEncoding(num_tokens=index.vocabulary_size())\n\n # Apply one-hot encoding to our indices. 
The lambda function captures the\n # layer so we can use them, or include them in the functional model later.\n return lambda feature: encoder(index(feature))\n\ntype_col = train_features['Type']\nlayer = get_category_encoding_layer('Type', train_ds, 'string')\nlayer(type_col)", "通常,您不应将数字直接输入模型,而是改用这些输入的独热编码。考虑代表宠物年龄的原始数据。", "type_col = train_features['Age']\ncategory_encoding_layer = get_category_encoding_layer('Age', train_ds,\n 'int64', 5)\ncategory_encoding_layer(type_col)", "选择要使用的列\n您已经了解了如何使用多种类型的预处理层。现在您将使用它们来训练模型。您将使用 Keras-functional API 来构建模型。Keras 函数式 API 是一种比 tf.keras.Sequential API 更灵活的创建模型的方式。\n本教程的目标是向您展示使用预处理层所需的完整代码(例如机制)。任意选择了几列来训练我们的模型。\n要点:如果您的目标是构建一个准确的模型,请尝试使用自己的更大的数据集,并仔细考虑哪些特征最有意义,以及它们应该如何表示。\n之前,您使用了小批次来演示输入流水线。现在让我们创建一个具有更大批次大小的新输入流水线。", "batch_size = 256\ntrain_ds = df_to_dataset(train, batch_size=batch_size)\nval_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)\ntest_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)\n\nall_inputs = []\nencoded_features = []\n\n# Numeric features.\nfor header in ['PhotoAmt', 'Fee']:\n numeric_col = tf.keras.Input(shape=(1,), name=header)\n normalization_layer = get_normalization_layer(header, train_ds)\n encoded_numeric_col = normalization_layer(numeric_col)\n all_inputs.append(numeric_col)\n encoded_features.append(encoded_numeric_col)\n\n# Categorical features encoded as integers.\nage_col = tf.keras.Input(shape=(1,), name='Age', dtype='int64')\nencoding_layer = get_category_encoding_layer('Age', train_ds, dtype='int64',\n max_tokens=5)\nencoded_age_col = encoding_layer(age_col)\nall_inputs.append(age_col)\nencoded_features.append(encoded_age_col)\n\n# Categorical features encoded as string.\ncategorical_cols = ['Type', 'Color1', 'Color2', 'Gender', 'MaturitySize',\n 'FurLength', 'Vaccinated', 'Sterilized', 'Health', 'Breed1']\nfor header in categorical_cols:\n categorical_col = tf.keras.Input(shape=(1,), name=header, dtype='string')\n encoding_layer = get_category_encoding_layer(header, train_ds, dtype='string',\n max_tokens=5)\n encoded_categorical_col = encoding_layer(categorical_col)\n all_inputs.append(categorical_col)\n encoded_features.append(encoded_categorical_col)\n", "创建、编译并训练模型\n接下来,您可以创建端到端模型。", "all_features = tf.keras.layers.concatenate(encoded_features)\nx = tf.keras.layers.Dense(32, activation=\"relu\")(all_features)\nx = tf.keras.layers.Dropout(0.5)(x)\noutput = tf.keras.layers.Dense(1)(x)\nmodel = tf.keras.Model(all_inputs, output)\nmodel.compile(optimizer='adam',\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[\"accuracy\"])", "我们来可视化连接图:", "# rankdir='LR' is used to make the graph horizontal.\ntf.keras.utils.plot_model(model, show_shapes=True, rankdir=\"LR\")\n", "训练模型。", "model.fit(train_ds, epochs=10, validation_data=val_ds)\n\nloss, accuracy = model.evaluate(test_ds)\nprint(\"Accuracy\", accuracy)", "根据新数据进行推断\n要点:您开发的模型现在可以直接从 CSV 文件中对行进行分类,因为预处理代码包含在模型本身中。\n现在,您可以保存并重新加载 Keras 模型。请按照此处的教程了解有关 TensorFlow 模型的更多信息。", "model.save('my_pet_classifier')\nreloaded_model = tf.keras.models.load_model('my_pet_classifier')", "要获得对新样本的预测,只需调用 model.predict()。您只需要做两件事:\n\n将标量封装成列表,以便具有批次维度(模型只处理成批次的数据,而不是单个样本)\n对每个特征调用 convert_to_tensor", "sample = {\n 'Type': 'Cat',\n 'Age': 3,\n 'Breed1': 'Tabby',\n 'Gender': 'Male',\n 'Color1': 'Black',\n 'Color2': 'White',\n 'MaturitySize': 'Small',\n 'FurLength': 'Short',\n 'Vaccinated': 'No',\n 'Sterilized': 'No',\n 'Health': 'Healthy',\n 'Fee': 100,\n 'PhotoAmt': 2,\n}\n\ninput_dict = {name: tf.convert_to_tensor([value]) for 
name, value in sample.items()}\npredictions = reloaded_model.predict(input_dict)\nprob = tf.nn.sigmoid(predictions[0])\n\nprint(\n \"This particular pet had a %.1f percent probability \"\n \"of getting adopted.\" % (100 * prob)\n)", "要点:对于更大、更复杂的数据集,您通常会看到深度学习的最佳结果。在处理像这样的小数据集时,我们建议使用决策树或随机森林作为强基线。本教程的目标是演示处理结构化数据的机制,以便您将来处理自己的数据集时有可以作为起点的代码。\n后续步骤\n进一步了解有关结构化数据分类的最佳方法是自己尝试。您可能希望找到另一个可使用的数据集,并使用与上述类似的代码训练模型对其进行分类。为了提高准确率,请仔细考虑要在模型中包含哪些特征,以及它们应该如何表示。" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
eford/rebound
ipython_examples/OrbitPlot.ipynb
gpl-3.0
[ "Orbit Plot\nREBOUND comes with a simple way to plot instantaneous orbits of planetary systems. To show how this works, let's setup a test simulation with 4 planets.", "import rebound\nsim = rebound.Simulation()\nsim.add(m=1)\nsim.add(m=0.1, e=0.041, a=0.4, inc=0.2, f=0.43, Omega=0.82, omega=2.98)\nsim.add(m=1e-3, e=0.24, a=1.0, pomega=2.14)\nsim.add(m=1e-3, e=0.24, a=1.5, omega=1.14, l=2.1)\nsim.add(a=-2.7, e=1.4, f=-1.5,omega=-0.7) # hyperbolic orbit", "To plot these initial orbits in the $xy$-plane, we can simply call the OrbitPlot function and give it the simulation as an argument.", "%matplotlib inline\nfig = rebound.OrbitPlot(sim)", "Note that the OrbitPlot function chooses reasonable limits for the axes for you. There are various ways to customize the plot. Have a look at the arguments used in the following examples, which are pretty much self-explanatory (if in doubt, check the documentation!).", "fig = rebound.OrbitPlot(sim, unitlabel=\"[AU]\", color=True, periastron=True)\n\nfig = rebound.OrbitPlot(sim, unitlabel=\"[AU]\", periastron=True, lw=2)", "Note that all orbits are draw with respect to the center of mass of all interior particles. This coordinate system is known as Jacobi coordinates. It requires that the particles are sorted by ascending semi-major axis within the REBOUND simulation's particle array. \nFrom within iPython/Jupyter one can also call the OrbitPlot routine in a loop, thus making an animation as one steps through a simulation. This is a nice way of keeping track of what is going on in a simulation without having to wait until the end. To do that we need to import the display and clear_output function from iPython first. We'll also need access to the clear function of matplotlib. Then, we run a loop, updating the figure as we go along.", "from IPython.display import display, clear_output\nimport matplotlib.pyplot as plt\nsim.move_to_com()\nfor i in range(3):\n sim.integrate(sim.t+0.31)\n fig = rebound.OrbitPlot(sim,color=True,unitlabel=\"[AU]\",lim=2.)\n display(fig)\n plt.close(fig)\n clear_output(wait=True)", "To get an idea of the three dimensional distribution of orbits, use the slices=True option. This will plot the orbits three times, from different perspectives. You can adjust the dimensions in the z direction using the limz keyword.", "fig = rebound.OrbitPlot(sim,slices=True,color=True,unitlabel=\"[AU]\",lim=2.,limz=0.36)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
BinRoot/TensorFlow-Book
ch11_seq2seq/Concept02_embedding_lookup.ipynb
mit
[ "Ch 11: Concept 02\nEmbedding Lookup\nImport TensorFlow, and begin an interactive session", "import tensorflow as tf\nsess = tf.InteractiveSession()", "Let's say we only have 4 words in our vocabulary: \"the\", \"fight\", \"wind\", and \"like\".\nMaybe each word is associated with numbers.\n| Word | Number | \n| ------ |:------:|\n| 'the' | 17 |\n| 'fight' | 22 |\n| 'wind' | 35 |\n| 'like' | 51 |", "embeddings_0d = tf.constant([17,22,35,51])\n\n", "Or maybe, they're associated with one-hot vectors.\n| Word | Vector | \n| ------ |:------:|\n| 'the ' | [1, 0, 0, 0] |\n| 'fight' | [0, 1, 0, 0] |\n| 'wind' | [0, 0, 1, 0] |\n| 'like' | [0, 0, 0, 1] |", "embeddings_4d = tf.constant([[1, 0, 0, 0],\n [0, 1, 0, 0],\n [0, 0, 1, 0],\n [0, 0, 0, 1]])", "This may sound over the top, but you can have any tensor you want, not just numbers or vectors.\n| Word | Tensor | \n| ------ |:------:|\n| 'the ' | [[1, 0] , [0, 0]] |\n| 'fight' | [[0, 1] , [0, 0]] |\n| 'wind' | [[0, 0] , [1, 0]] |\n| 'like' | [[0, 0] , [0, 1]] |", "embeddings_2x2d = tf.constant([[[1, 0], [0, 0]],\n [[0, 1], [0, 0]],\n [[0, 0], [1, 0]],\n [[0, 0], [0, 1]]])", "Let's say we want to find the embeddings for the sentence, \"fight the wind\".", "ids = tf.constant([1, 0, 2])", "We can use the embedding_lookup function provided by TensorFlow:", "lookup_0d = sess.run(tf.nn.embedding_lookup(embeddings_0d, ids))\nprint(lookup_0d)\n\nlookup_4d = sess.run(tf.nn.embedding_lookup(embeddings_4d, ids))\nprint(lookup_4d)\n\nlookup_2x2d = sess.run(tf.nn.embedding_lookup(embeddings_2x2d, ids))\nprint(lookup_2x2d)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
claudiuskerth/PhDthesis
Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb
mit
[ "Load $\\delta$a$\\delta$i\nI have not installed dadi globally on huluvu. Instead, I left it in my Downloads directory '/home/claudius/Downloads/dadi'. In order for Python to find that module, I need to add that directory to the PYTHONPATH variable.", "import sys\n\nsys.path\n\nsys.path.insert(0, '/home/claudius/Downloads/dadi')\n\nsys.path\n\nimport dadi\n\nimport pylab\n\npylab.rcParams['figure.figsize'] = [12.0, 10.0]\n\n%matplotlib inline", "Load data", "% ll dadiExercises/\n\n% cat dadiExercises/ERY.FOLDED.sfs.dadi_format", "I have turned the 1D folded SFS's from realSFS into $\\delta$d$\\delta$i format by hand according to the description in section 3.1 of the manual.\nNote, that the last line, indicating the mask, has length 37, but the folded spectrum has length 19. Dadi wants to mask counts from invariable sites. For an unfolded spectrum, i. e. polarised with respect to an inferred ancestral allele at each site, the first and the last count classes would correspond to invariable sites. In a folded spectrum, i. e. with counts of the minor allele at each site, the last count class corresponds to SNP's with minor sample allele frequency of $n/2$ (with even sample size).", "fs_ery = dadi.Spectrum.from_file('dadiExercises/ERY.FOLDED.sfs.dadi_format')\n\n%pdoc dadi.Spectrum.from_file\n\nfs_ery\n\nns = fs_ery.sample_sizes\nns\n\nfs_ery.pop_ids = ['ery'] # must be an array, otherwise leads to error later on\n\n# the number of segregating sites in the spectrum\n\nfs_ery.sum()", "According to the number of segregating sites, this spectrum should have good power to distinguish between alternative demographic models (see Adams2004). However, the noise in the data is extreme, as can be seen below, which might compromise this power and maybe even lead to false inferences.\nPlot the data", "%pdoc dadi.Plotting.plot_1d_fs\n\npylab.rcParams['figure.figsize'] = [12.0, 10.0]\n\ndadi.Plotting.plot_1d_fs(fs_ery, show=False)", "Built-in 1D models", "# show modules within dadi\n\ndir(dadi)\n\ndir(dadi.Demographics1D)\n\n# show the source of the 'Demographics1D' method\n\n%psource dadi.Demographics1D", "standard neutral model", "# create link to method\n\nfunc = dadi.Demographics1D.snm\n\n# make the extrapolating version of the demographic model function\n\nfunc_ex = dadi.Numerics.make_extrap_log_func(func)\n\n# setting the smallest grid size slightly larger than the largest population sample size\n\npts_l = [40, 50, 60]", "The snm function does not take parameters to optimize. I can therefore get directly the expected model. The snm function does not take a fold argument. I am therefore going to calculated an unfolded expected spectrum and then fold.", "# calculate unfolded AFS under standard neutral model (up to a scaling factor theta)\n\nmodel = func_ex(0, ns, pts_l)\nmodel\n\ndadi.Plotting.plot_1d_fs(model.fold()[:19], show=False)", "What's happening in the 18th count class?", "# get the source of the fold method, which is part of the Spectrum object\n\n%psource dadi.Spectrum.fold\n\n# get the docstring of the Spectrum object\n\n%pdoc dadi.Spectrum\n\n# retrieve the spectrum array from the Spectrum object\n\nmodel.data", "I am going to fold manually now.", "# reverse spectrum and add to itself\n\nmodel_fold = model.data + model.data[::-1]\nmodel_fold\n\n# discard all count classes >n/2\n\nmodel_fold = model_fold[:19]\nmodel_fold", "When the sample size is even, then highest sample frequency class corresponds to just one unfolded class (18). 
This has been added to itself and those SNP's are counted twice at the moment. I need to divide this class by 2 to get the correct count for this folded class.", "# divide highest sample frequency class by 2\n\nmodel_fold[18] = model_fold[18]/2.0\n\nmodel_fold\n\n# create dadi Spectrum object from array, need to specify custom mask\n\nmodel_folded = dadi.Spectrum(data=model_fold, mask_corners=False, mask= [1] + [0]*18)\nmodel_folded\n\ndadi.Plotting.plot_1d_fs(model_folded)", "The folded expected spectrum is correct. Also, see figure 4.5 in Wakeley2009.\nHow to fold an unfolded spectrum", "# fold the unfolded model\n\nmodel_folded = model.fold()\n#model_folded = model_folded[:(ns[0]+1)]\nmodel_folded.pop_ids = ['ery'] # be sure to give an array, not a scalar string\nmodel_folded\n\nll_model_folded = dadi.Inference.ll_multinom(model_folded, fs_ery)\n\nprint 'The log composite likelihood of the observed ery spectrum given a standard neutral model is {0:.3f}.'.format(ll_model_folded)", "$\\theta$ and implied $N_{ref}$", "theta = dadi.Inference.optimal_sfs_scaling(model_folded, fs_ery)\n\nprint 'The optimal value of theta is {0:.3f}.'.format(theta)", "This theta estimate is a little bit higher than what I estimated with curve fitting in Fist_Steps_with_dadi.ipynb, which was 10198.849.\nWhat effective ancestral population size would that imply?\nAccording to section 4.4 in the dadi manual:\n$$\n\\theta = 4 N_{ref} \\mu_{L} \\qquad \\text{L: sequence length}\n$$\nLet's assume the mutation rate per nucleotide site per generation is $3\\times 10^{-9}$ (see e. g. Liu2017). Then\n$$\n\\mu_{L} = \\mu_{site} \\times L\n$$\nSo\n$$\n\\theta = 4 N_{ref} \\mu_{site} \\times L\n$$\nand\n$$\nN_{ref} = \\frac{\\theta}{4 \\mu_{site} L}\n$$", "mu = 3e-9\nL = fs_ery.data.sum() # this sums over all entries in the spectrum, including masked ones, i. e. also contains invariable sites\nprint \"The total sequence length is \" + str(L)\nN_ref = theta/L/mu/4\nprint \"The effective ancestral population size (in number of diploid individuals) implied by this theta is: {0}.\".format(int(N_ref))", "This effective population size is consistent with those reported in Lynch2016 for other insect species.\n\nBegin Digression:", "x = pylab.arange(0, 100)\ny = 0.5**(x)\npylab.plot(x, y)\n\nx[:10] * y[:10]\n\nsum(x * y)", "End Digression", "model_folded * theta\n\npylab.semilogy(model_folded * theta, \"bo-\", label='SNM')\npylab.plot(fs_ery, \"ro-\", label='ery')\npylab.legend()\n\n%psource dadi.Plotting.plot_1d_comp_Poisson\n\n# compare model prediction and data visually with dadi function\n\ndadi.Plotting.plot_1d_comp_multinom(model_folded[:19], fs_ery[:19], residual='linear')", "The lower plot is for the scaled Poisson residuals. \n$$\nresiduals = (model - data)/\\sqrt{model}\n$$\nThe model is the expected counts in each frequency class. If these counts are Poisson distributed, then their variance is equal to their expectation. The differences between model and data are therefore scaled by the expected standard deviation of the model counts.\nThe observed counts deviate by up to 30 standard deviations from the model!\nWhat could be done about this? \nThe greatest deviations are seen for the first two frequency classes, the ones that should provide the greatest amount of information (Fu1994) for theta and therefore probably also other parameters. Toni has suggested that the doubleton class is inflated due to \"miscalling\" heterozygotes as homozygotes. 
When they contain a singleton they will be \"called\" as homozygote and therefore contribute to the doubleton count. This is aggravated by the fact that the sequenced individuals are all male which only possess one X chromosome. The X chromosome is the fourth largest of the 9 chromosomes of these grasshoppers (8 autosomes + X) (see Gosalvez1988, fig. 2). That is, about 1/9th of the sequenced RAD loci are haploid but ANGSD assumes all loci to be diploid. The genotype likelihoods it calculates are all referring to diploid genotypes.\nI think one potential reason for the extreme deviations is that the genotype likelihoods are generally biased toward homozygote genotypes (i. e. also for autosomal loci) due to PCR duplicates (see eq. 1 in Nielsen2012). So, one potential improvement would be to remove PCR duplicates. \nAnother potential improvement could be found by subsampling 8/9th to 8/10th of the contigs in the SAF files and estimating an SFS from these. Given enough subsamples, one should eventually be found that maximally excludes loci from the X chromosome. This subsample is expected to produce the least squared deviations from an expected SFS under the standard neutral model. However, one could argue that this attempt to exclude problematic loci could also inadvertently remove loci that strongly deviate from neutral expectations due to non-neutral evolution, again reducing power to detect deviations from the standard neutral model. I think one could also just apply the selection criterion of the second MAF class to be lower than the first and just save all contig subsamples and SFS's that fulfill that criterioin, since that should be true for all demographic scenarios.\nExponential growth\nCreating a folded spectrum exactly how dadi expects it\nAs seen above in the folded model spectrum, dadi just masks out entries that are not sensical in a folded spectrum, but keeps the length of the spectrum the same as the unfolded. That way the sample size (i. e. number of chromosomes) is determined correctly. Let's create a correct folded spectrum object for ery.", "fs_ery\n\n# make copy of spectrum array\ndata_abc = fs_ery.data.copy()\n\n# resize the array to the unfolded length\n\ndata_abc.resize((37,))\ndata_abc\n\nfs_ery_ext = dadi.Spectrum(data_abc)\nfs_ery_ext\n\nfs_ery_ext.fold()\n\nfs_ery_ext = fs_ery_ext.fold()\nfs_ery_ext.pop_ids = ['ery']\nfs_ery_ext\n\nfs_ery_ext.sample_sizes", "Now, the reported sample size is correct and we have a Spectrum object that dadi can handle correctly.\nTo fold or not to fold by ANGSD\nDoes estimating an unfolded spectrum with ANGSD and then folding yield a sensible folded SFS when the sites are not polarised with respect to an ancestral allele but with respect to the reference allele? 
Matteo Fumagalli thinks that this is sensible.\nLoad SFS folded by ANGSD", "% cat dadiExercises/ERY.FOLDED.sfs.dadi_format\n\n# load the spectrum that was created from folded SAF's\n\nfs_ery_folded_by_Angsd = dadi.Spectrum.from_file('dadiExercises/ERY.FOLDED.sfs.dadi_format')\nfs_ery_folded_by_Angsd\n\n# extract unmasked entries of the SFS\n\nm = fs_ery_folded_by_Angsd.mask\nfs_ery_folded_by_Angsd[m == False]", "Load unfolded SFS", "% ll ../ANGSD/SFS/ERY/", "I have copied the unfolded SFS into the current directory.", "% ll\n\n% cat ERY.unfolded.sfs\n\n# load unfolded spectrum\n\nfs_ery_unfolded_by_ANGSD = dadi.Spectrum.from_file('ERY.unfolded.sfs')\nfs_ery_unfolded_by_ANGSD\n\n# fold unfolded spectrum\n\nfs_ery_unfolded_by_Angsd_folded = fs_ery_unfolded_by_ANGSD.fold()\nfs_ery_unfolded_by_Angsd_folded\n\n# plot the two spectra\n\npylab.rcParams['figure.figsize'] = [12.0, 10.0]\n\npylab.plot(fs_ery_folded_by_Angsd, 'ro-', label='folded by ANGSD')\npylab.plot(fs_ery_unfolded_by_Angsd_folded, 'bo-', label='folded by DADI')\npylab.legend()\npylab.savefig('ery_fold_comp.png')\n\n%psource dadi.Plotting.plot_1d_comp_Poisson\n\ndadi.Plotting.plot_1d_comp_Poisson(fs_ery_folded_by_Angsd[:19], fs_ery_unfolded_by_Angsd_folded[:19], \\\n residual='linear')", "The sizes of the residuals (scaled by the Poisson standard deviations) indicate that the two versions of the folded SFS of ery are significantly different.\nNow, what does the parallelus data say?", "% ll dadiExercises/\n\n% cat dadiExercises/PAR.FOLDED.sfs.dadi_format\n\n# load the spectrum folded by ANGSD\n\nfs_par_folded_by_Angsd = dadi.Spectrum.from_file('dadiExercises/PAR.FOLDED.sfs.dadi_format')\nfs_par_folded_by_Angsd\n\n% cat PAR.unfolded.sfs\n\n# load spectrum that has been created from unfolded SAF's\n\nfs_par_unfolded_by_Angsd = dadi.Spectrum.from_file('PAR.unfolded.sfs')\nfs_par_unfolded_by_Angsd\n\nfs_par_unfolded_by_Angsd_folded = fs_par_unfolded_by_Angsd.fold()\nfs_par_unfolded_by_Angsd_folded\n\ndadi.Plotting.plot_1d_comp_Poisson(fs_par_folded_by_Angsd[:19], fs_par_unfolded_by_Angsd_folded[:19], \\\n residual='linear')\n\n#pylab.subplot(2,1,1)\npylab.plot(fs_par_folded_by_Angsd[:19], 'ro-', label='folded by ANGSD')\n\n#pylab.subplot(2,1,2)\npylab.plot(fs_par_unfolded_by_Angsd_folded, 'bo-', label='folded by DADI')\npylab.legend()\npylab.savefig('par_fold_comp.png')", "The unfolded spectrum folded by dadi seems to be a bit better behaved than the one folded by ANGSD. I really wonder whether folding in ANGSD is needed.\nThe folded 2D spectrum from ANGSD is a 19 x 19 matrix. 
This is not a format that dadi can understand.", "%psource dadi.Spectrum.from_data_dict", "See this thread on the dadi forum.\n\nExponential growth model", "# show the source of the 'Demographics1D' method\n\n%psource dadi.Demographics1D.growth\n\n# create link to function that specifies a simple growth or decline model\n\nfunc = dadi.Demographics1D.growth\n\n# create extrapolating version of the function\n\nfunc_ex = dadi.Numerics.make_extrap_log_func(func)\n\n# set lower and upper bounds to nu and T\n\nupper_bound = [100, 3]\nlower_bound = [1e-2, 0]\n\n# set starting value\n\np0 = [1, 1] # corresponds to constant population size\n\n%pdoc dadi.Misc.perturb_params\n\n# perturb starting values by up to a factor of 2\n\np0 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)\n\np0\n\n%psource dadi.Inference.optimize_log\n\n# run optimisation of paramters\n\npopt = dadi.Inference.optimize_log(p0=p0, data=fs_ery, model_func=func_ex, pts=pts_l, \\\n lower_bound=lower_bound, upper_bound=upper_bound, \\\n verbose=0, maxiter=100, full_output=False)\n\npopt", "Parallelised $\\delta$a$\\delta$i\nI need to run the simulation with different starting values to check convergence.\nI would like to do these runs in parallel. I have 12 cores available on huluvu.", "from ipyparallel import Client\n\ncl = Client()\n\ncl.ids", "I now have connections to 11 engines. I started the engines with ipcluster start -n 11 &amp; in the terminal.", "# create load balanced view of the engines\n\nlbview = cl.load_balanced_view()\n\nlbview.block\n\n# create direct view of all engines\n\ndview = cl[:]", "import variables to namespace of engines", "# set starting value for all engines\n\ndview['p0'] = [1, 1]\ndview['p0']\n\n# set lower and upper bounds to nu and T for all engines\n\ndview['upper_bound'] = [100, 3]\ndview['lower_bound'] = [1e-2, 0]\n\ndview['fs_ery'] = fs_ery\ncl[0]['fs_ery']\n\ndview['func_ex'] = func_ex\ndview['pts_l'] = pts_l", "import dadi on all engines", "with dview.sync_imports():\n import sys\n\ndview.execute('sys.path.insert(0, \\'/home/claudius/Downloads/dadi\\')')\n\ncl[0]['sys.path']\n\nwith dview.sync_imports():\n import dadi", "create parallel function to run dadi", "@lbview.parallel(block=True)\ndef run_dadi(x): # for the function to be called with map, it needs to have one input variable\n # perturb starting values by up to a factor of 2\n p1 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)\n # run optimisation of paramters\n popt = dadi.Inference.optimize_log(p0=p1, data=fs_ery, model_func=func_ex, pts=pts_l, \\\n lower_bound=lower_bound, upper_bound=upper_bound, \\\n verbose=0, maxiter=100, full_output=False)\n return popt\n\nrun_dadi.map(range(20))\n\npopt\n\n# set starting value\np0 = [1, 1]\n\n# perturb starting values by up to a factor of 2\np0 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)\n\n# run optimisation of paramters\npopt = dadi.Inference.optimize_log(p0=p0, data=fs_ery_ext, model_func=func_ex, pts=pts_l, \\\n lower_bound=lower_bound, upper_bound=upper_bound, \\\n verbose=0, maxiter=100, full_output=False)\npopt", "", "def exp_growth(x):\n p0 = [1, 1]\n\n # perturb starting values by up to a factor of 2\n p0 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)\n\n # run optimisation of paramters\n popt = dadi.Inference.optimize_log(p0=p0, data=fs_ery_ext, model_func=func_ex, pts=pts_l, \\\n lower_bound=lower_bound, 
upper_bound=upper_bound, \\\n verbose=0, maxiter=100, full_output=False)\n return popt\n\npopt = map(exp_growth, range(10))\n\n# this will run a few minutes\n\n# popt\n\nimport ipyparallel as ipp\n\nc = ipp.Client()\nc.ids\n\n%%time\n\ndview = c[:]\n\npopt = dview.map_sync(exp_growth, range(10))", "Unfortunately, parallelisation is not as straightforward as it should be.", "popt", "Except for the last iteration, the two parameter estimates seem to have converged.", "ns = fs_ery_ext.sample_sizes\nns\n\nprint popt[0]\nprint popt[9]", "What is the log likelihood of the model given these two different parameter sets?", "model_one = func_ex(popt[0], ns, pts_l)\nll_model_one = dadi.Inference.ll_multinom(model_one, fs_ery_ext)\nll_model_one\n\nmodel_two = func_ex(popt[9], ns, pts_l)\nll_model_two = dadi.Inference.ll_multinom(model_two, fs_ery_ext)\nll_model_two", "The lower log-likelihood for the last set of parameters inferred indicates that the optimisation got trapped in a local minimum in the last run of the optimisation.\nWhat the majority of the parameter sets seem to indicate is that at about time $0.007 \\times 2 N_{ref}$ generations in the past the ancestral population started to shrink exponentially, reaching a population size of about $0.14 \\times N_{ref}$ at present.", "print 'The model suggests that exponential decline in population size started {0:.0f} generations ago.'.format(popt[0][1] * 2 * N_ref)", "Two epoch model", "dir(dadi.Demographics1D)\n\n%psource dadi.Demographics1D.two_epoch", "This model specifies a stepwise change in population size some time ago. It assumes that the population size has stayed constant since the change.", "func = dadi.Demographics1D.two_epoch\nfunc_ex = dadi.Numerics.make_extrap_log_func(func)\n\nupper_bound = [10, 3]\nlower_bound = [1e-3, 0]\npts_l = [40, 50, 60]\n\ndef stepwise_pop_change(x):\n # set initial values\n p0 = [1, 1]\n\n # perturb initial parameter values randomly by up to 2 * fold\n p0 = dadi.Misc.perturb_params(p0, fold=1.5, \\\n upper_bound=upper_bound, lower_bound=lower_bound)\n \n # run optimisation\n popt = dadi.Inference.optimize_log(p0, fs_ery_ext, func_ex, pts_l, \\\n upper_bound=upper_bound, lower_bound=lower_bound,\n verbose=0, maxiter=10)\n \n return popt\n\nstepwise_pop_change(1)\n\nstepwise_pop_change(1)\n\npopt = map(stepwise_pop_change, range(10))\n\npopt", "This model does not converge on a set of parameter values.", "nu = [i[0] for i in popt]\nnu\n\nT = [i[1] for i in popt]\nT\n\npylab.rcParams['font.size'] = 14.0\n\npylab.loglog(nu, T, 'bo')\npylab.xlabel(r'$\\nu$')\npylab.ylabel('T')", "Both parameters seem to be correlated. With the available data, it may not be possible to distinguish between a moderate reduction in population size a long time ago (topright in the above figure) and a drastic reduction in population size a short time ago (bottomleft in the above figure).\nBottleneck then exponential growth", "%psource dadi.Demographics1D", "This model has three parameters. $\\nu_B$ is the ratio of the population size (with respect to the ancestral population size $N_{ref}$) after the first stepwise change at time T in the past. 
The population is then assumed to undergo exponential growth/decline to a ratio of population size $\\nu_F$ at present.", "func = dadi.Demographics1D.bottlegrowth\nfunc_ex = dadi.Numerics.make_extrap_log_func(func)\n\nupper_bound = [100, 100, 3]\nlower_bound = [1e-3, 1e-3, 0]\npts_l = [40, 50, 60]\n\ndef bottleneck_growth(x):\n    p0 = [1, 1, 1] # corresponds to constant population size\n    \n    # perturb initial parameter values randomly by up to 2 * fold\n    p0 = dadi.Misc.perturb_params(p0, fold=1.5, \\\n                                  upper_bound=upper_bound, lower_bound=lower_bound)\n    \n    # run optimisation\n    popt = dadi.Inference.optimize_log(p0, fs_ery_ext, func_ex, pts_l, \\\n                                       upper_bound=upper_bound, lower_bound=lower_bound,\n                                       verbose=0, maxiter=10)\n    \n    return popt\n\n%%time\n\npopt = map(bottleneck_growth, range(10))\n\npopt", "There is no convergence of parameter estimates. The parameter combinations stand for vastly different demographic scenarios. Most seem to suggest a population increase (up to 100 times the ancestral population size), followed by exponential decrease to about the ancestral population size.\nThree epochs", "func = dadi.Demographics1D.three_epoch\nfunc_ex = dadi.Numerics.make_extrap_log_func(func)\n\n%psource dadi.Demographics1D.three_epoch", "This model tries to estimate four parameters. The population is expected to undergo a stepwise population size change (bottleneck) at time TF + TB. At time TF it is expected to recover immediately to the current population size.", "upper_bound = [100, 100, 3, 3]\nlower_bound = [1e-3, 1e-3, 0, 0]\npts_l = [40, 50, 60]\n\ndef opt_three_epochs(x):\n    p0 = [1, 1, 1, 1] # corresponds to constant population size\n    \n    # perturb initial parameter values randomly by up to 2 * fold\n    p0 = dadi.Misc.perturb_params(p0, fold=1.5, \\\n                                  upper_bound=upper_bound, lower_bound=lower_bound)\n    \n    # run optimisation\n    popt = dadi.Inference.optimize_log(p0, fs_ery_ext, func_ex, pts_l, \\\n                                       upper_bound=upper_bound, lower_bound=lower_bound,\n                                       verbose=0, maxiter=10)\n    \n    return popt\n\n%%time\n\npopt = map(opt_three_epochs, range(10))\n\npopt", "Note that only one of the optimisations inferred a bottleneck (4th). All others either inferred a constant population size or an increase in population size. Contemporary population sizes are mostly inferred to be similar to ancestral population sizes. The two time parameters vary wildly." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ericmjl/data-testing-tutorial
3-data-checks.ipynb
mit
[ "Data Checks\n\n\nSchema checks: Making sure that only the columns that are expected are provided.\n\n\nDatum checks:\n\nLooking for missing values\nEnsuring that expected value ranges are correct\n\n\n\nStatistical checks:\n\nVisual check of data distributions.\nCorrelations between columns.\nStatistical distribution checks.\n\n\n\nRoles in Data Analysis\n\nData Provider: Someone who's collected and/or curated the data.\nData Analyst: The person who is analyzing the data.\n\nSometimes they're the same person; at other times they're not. Tasks related to testing can often be assigned to either role, but there are some tasks more naturally suited to each.\nSchema Checks\nSchema checks are all about making sure that the data columns that you want to have are all present, and that they have the expected data types.\nThe way data are provided to you should be in two files. The first file is the actual data matrix. The second file should be a metadata specification file, minimally containing the name of the CSV file it describes, and the list of columns present. Why the duplication? The list of columns is basically an implicit contract between your data provider and you, and provides a verifiable way of describing the data matrix's columns.\nWe're going to use a few datasets from Boston's open data repository. Let's first take a look at Boston's annual budget data, while pretending we're the person who curated the data, the \"data provider\".", "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'", "A bit of basic pandas\nLet's first start by reading in the CSV file as a pandas.DataFrame().", "import pandas as pd\ndf = pd.read_csv('data/boston_budget.csv')\ndf.head()", "To get the columns of a DataFrame object df, call df.columns. This is a list-like object that can be iterated over.", "df.columns", "YAML Files\nDescribe data in a human-friendly & computer-readable format. The environment.yml file in your downloaded repository is also a YAML file, by the way!\nStructure:\nyaml\nkey1: value\nkey2:\n- value1\n- value2\n- subkey1:\n - value3\nExample YAML-formatted schema:\nyaml\nfilename: boston_budget.csv\ncolumn_names:\n- \"Fiscal Year\"\n- \"Service (cabinet)\"\n- \"Department\"\n- \"Program #\"\n...\n- \"Fund\"\n- \"Amount\"\nYAML-formatted text can be read as dictionaries.", "spec = \"\"\"\nfilename: boston_budget.csv\ncolumns:\n- \"Fiscal Year\"\n- \"Service (Cabinet)\"\n- \"Department\"\n- \"Program #\"\n- \"Program\"\n- \"Expense Type\"\n- \"ACCT #\"\n- \"Expense Category (Account)\"\n- \"Fund\"\n- \"Amount\"\n\"\"\"\n\nimport yaml\nmetadata = yaml.load(spec)\nmetadata", "You can also take dictionaries, and return YAML-formatted text.", "print(yaml.dump(metadata))", "By having things YAML formatted, you preserve human-readability and computer-readability simultaneously. \nProviding metadata should be something already done when doing analytics; YAML-format is a strong suggestion, but YAML schema will depend on use case.\nLet's now switch roles, and pretend that we're on side of the \"analyst\" and are no longer the \"data provider\". \nHow would you check that the columns match the spec? Basically, check that every element in df.columns is present inside the metadata['columns'] list.\nExercise\nInside test_datafuncs.py, write a utility function, check_schema(df, meta_columns) that tests whether every column in a DataFrame is present in some metadata spec file. 
It should accept two arguments:\n\ndf: a pandas.DataFrame\nmeta_columns: A list of columns from the metadata spec.\n\n```python\ndef check_schema(df, meta_columns):\n for col in df.columns:\n assert col in meta_columns, f'\"{col}\" not in metadata column spec'\n```\nIn your test file, outside the function definition, write another test function, test_budget_schemas(), explicitly runs a test for just the budget data.\n```python\ndef test_budget_schemas():\n columns = read_metadata('data/metadata_budget.yml')['columns']\n df = pd.read_csv('data/boston_budget.csv')\ncheck_schema(df, columns)\n\n```\nNow, run the test. Do you get the following error? Can you spot the error?\n```bash\n def check_schema(df, meta_columns):\n for col in df.columns:\n\n assert col in meta_columns, f'\"{col}\" not in metadata column spec'\n\nE AssertionError: \" Amount\" not in metadata column spec\nE assert ' Amount' in ['Fiscal Year', 'Service (Cabinet)', 'Department', 'Program #', 'Program', 'Expense Type', ...]\n\ntest_datafuncs_soln.py:63: AssertionError\n=================================== 1 failed, 7 passed in 0.91 seconds ===================================\n```\nIf there is even a slight mis-spelling, this kind of check will help you pinpoint where that is. Note how the \"Amount\" column is spelled with an extra space. \nAt this point, I would contact the data provider to correct errors like this.\nIt is a logical practice to keep one schema spec file per table provided to you. However, it is also possible to take advantage of YAML \"documents\" to keep multiple schema specs inside a single YAML file. \nThe choice is yours - in cases where there are a lot of data files, it may make sense (for the sake of file-system sanity) to keep all of the specs in multiple files that represent logical groupings of data.\nExercise: Write YAML metadata spec.\nPut yourself in the shoes of a data provider. Take the boston_ei.csv file in the data/ directory, and make a schema spec file for that file.\nExercise: Write test for metadata spec.\nNext, put yourself in the shoes of a data analyst. Take the schema spec file and write a test for it.\nExercise: Auto YAML Spec.\nInside datafuncs.py, write a function with the signature autospec(handle) that takes in a file path, and does the following:\n\nCreate a dictionary, with two keys:\na \"filename\" key, whose value only records the filename (and not the full file path),\na \"columns\" key, whose value records the list of columns in the dataframe.\n\n\nConverts the dictionary to a YAML string\nWrites the YAML string to disk.\n\nOptional Exercise: Write meta-test\nNow, let's go \"meta\". Write a \"meta-test\" that ensures that every CSV file in the data/ directory has a schema file associated with it. (The function need not check each schema.) Until we finish filling out the rest of the exercises, this test can be allowed to fail, and we can mark it as a test to skip by marking it with an @skip decorator:\npython\n@pytest.mark.skip(reason=\"no way of currently testing this\")\ndef test_my_func():\n ...\nNotes\n\nThe point here is to have a trusted copy of schema apart from data file. YAML not necessarily only way!\nIf no schema provided, manually create one; this is exploratory data analysis anyways - no effort wasted!\n\nDatum Checks\nNow that we're done with the schema checks, let's do some sanity checks on the data as well. 
This is my personal favourite too, as some of the activities here overlap with the early stages of exploratory data analysis.\nWe're going to switch datasets here, and move to a 'corrupted' version of the Boston Economic Indicators dataset. Its file path is: ./data/boston_ei-corrupt.csv.", "import pandas as pd\nimport seaborn as sns\nsns.set_style('white')\n%matplotlib inline\n\ndf = pd.read_csv('data/boston_ei-corrupt.csv')\ndf.head()", "Demo: Visual Diagnostics\nWe can use a package called missingno, which gives us a quick visual view of the completeness of the data. This is a good starting point for deciding whether you need to manually comb through the data or not.", "# First, we check for missing data.\nimport missingno as msno\nmsno.matrix(df)", "Immediately it's clear that there's a number of rows with empty values! Nothing beats a quick visual check like this one.\nWe can get a table version of this using another package called pandas_summary.", "# We can do the same using pandas-summary.\nfrom pandas_summary import DataFrameSummary\n\ndfs = DataFrameSummary(df)\ndfs.summary()", "dfs.summary() returns a Pandas DataFrame; this means we can write tests for data completeness!\nExercise: Test for data completeness.\nWrite a test named check_data_completeness(df) that takes in a DataFrame and confirms that there's no missing data from the pandas-summary output. Then, write a corresponding test_boston_ei() that tests the schema for the Boston Economic Indicators dataframe.\n```python\nIn test_datafuncs.py\nfrom pandas_summary import DataFrameSummary\ndef check_data_completeness(df):\ndf_summary = DataFrameSummary(df).summary()\nfor col in df_summary.columns:\n assert df_summary.loc['missing', col] == 0, f'{col} has missing values'\n\ndef test_boston_ei():\n df = pd.read_csv('data/boston_ei.csv')\n check_data_completeness(df)\n```\nExercise: Test for value correctness.\nIn the Economic Indicators dataset, there are four \"rate\" columns: ['labor_force_part_rate', 'hotel_occup_rate', 'hotel_avg_daily_rate', 'unemp_rate'], which must have values between 0 and 1.\nAdd a utility function to test_datafuncs.py, check_data_range(data, lower=0, upper=1), which checks the range of the data such that:\n- data is a list-like object.\n- data &lt;= upper\n- data &gt;= lower\n- upper and lower have default values of 1 and 0 respectively.\nThen, add to the test_boston_ei() function tests for each of these four columns, using the check_data_range() function.\n```python\nIn test_datafuncs.py\ndef check_data_range(data, lower=0, upper=1):\n assert min(data) >= lower, f\"minimum value less than {lower}\"\n assert max(data) <= upper, f\"maximum value greater than {upper}\"\ndef test_boston_ei():\n df = pd.read_csv('data/boston_ei.csv')\n check_data_completeness(df)\nzero_one_cols = ['labor_force_part_rate', 'hotel_occup_rate',\n 'hotel_avg_daily_rate', 'unemp_rate']\nfor col in zero_one_cols:\n check_data_range(df['labor_force_part_rate'])\n\n```\nDistributions\nMost of what is coming is going to be a demonstration of the kinds of tools that are potentially useful for you. 
Feel free to relax from coding, as these aren't necessarily obviously automatable.\nNumerical Data\nWe can take the EDA portion further, by doing an empirical cumulative distribution plot for each data column.", "import numpy as np\ndef compute_dimensions(length):\n \"\"\"\n Given an integer, compute the \"square-est\" pair of dimensions for plotting.\n \n Examples:\n - length: 17 => rows: 4, cols: 5\n - length: 14 => rows: 4, cols: 4\n \n This is a utility function; can be tested separately.\n \"\"\"\n sqrt = np.sqrt(length)\n floor = int(np.floor(sqrt))\n ceil = int(np.ceil(sqrt))\n \n if floor ** 2 >= length:\n return (floor, floor)\n elif floor * ceil >= length:\n return (floor, ceil)\n else:\n return (ceil, ceil)\n \ncompute_dimensions(length=17)\n\nassert compute_dimensions(17) == (4, 5)\nassert compute_dimensions(16) == (4, 4)\nassert compute_dimensions(15) == (4, 4)\nassert compute_dimensions(11) == (3, 4)\n\n# Next, let's visualize the empirical CDF for each column of data.\nimport matplotlib.pyplot as plt\n\ndef empirical_cumdist(data, ax, title=None):\n \"\"\"\n Plots the empirical cumulative distribution of values.\n \"\"\"\n x, y = np.sort(data), np.arange(1, len(data)+1) / len(data)\n ax.scatter(x, y)\n ax.set_title(title)\n \ndata_cols = [i for i in df.columns if i not in ['Year', 'Month']]\nn_rows, n_cols = compute_dimensions(len(data_cols))\n\nfig = plt.figure(figsize=(n_cols*3, n_rows*3))\nfrom matplotlib.gridspec import GridSpec\ngs = GridSpec(n_rows, n_cols)\nfor i, col in enumerate(data_cols):\n ax = plt.subplot(gs[i])\n empirical_cumdist(df[col], ax, title=col)\n \nplt.tight_layout()\nplt.show()", "It's often a good idea to standardize numerical data (that aren't count data). The term standardize often refers to the statistical procedure of subtracting the mean and dividing by the standard deviation, yielding an empirical distribution of data centered on 0 and having standard deviation of 1.\nExercise\nWrite a test for a function that standardizes a column of data. Then, write the function.\nNote: This function is also implemented in the scikit-learn library as part of their preprocessing module. However, in case an engineering decision that you make is that you don't want to import an entire library just to use one function, you can re-implement it on your own.\n```python\ndef standard_scaler(x):\n return (x - x.mean()) / x.std()\ndef test_standard_scaler(x):\n std = standard_scaler(x)\n assert np.allclose(std.mean(), 0)\n assert np.allclose(std.std(), 1)\n```\nExercise\nNow, plot the grid of standardized values.", "data_cols = [i for i in df.columns if i not in ['Year', 'Month']]\nn_rows, n_cols = compute_dimensions(len(data_cols))\n\nfig = plt.figure(figsize=(n_cols*3, n_rows*3))\nfrom matplotlib.gridspec import GridSpec\ngs = GridSpec(n_rows, n_cols)\nfor i, col in enumerate(data_cols):\n ax = plt.subplot(gs[i])\n empirical_cumdist(standard_scaler(df[col]), ax, title=col)\n \nplt.tight_layout()\nplt.show()", "Exercise\nDid we just copy/paste the function?! It's time to stop doing this. Let's refactor the code into a function that can be called.\nCategorical Data\nFor categorical-type data, we can plot the empirical distribution as well. 
(This example uses the smartphone_sanitization.csv dataset.)", "from collections import Counter\n\ndef empirical_catdist(data, ax, title=None):\n d = Counter(data)\n print(d)\n x = range(len(d.keys()))\n labels = list(d.keys())\n y = list(d.values())\n ax.bar(x, y)\n ax.set_xticks(x)\n ax.set_xticklabels(labels)\n\nsmartphone_df = pd.read_csv('data/smartphone_sanitization.csv')\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\nempirical_catdist(smartphone_df['site'], ax=ax)", "Statistical Checks\n\nReport on deviations from normality.\n\nNormality?!\n\nThe Gaussian (Normal) distribution is commonly assumed in downstream statistical procedures, e.g. outlier detection.\nWe can test for normality by using a K-S test.\n\nK-S test\nFrom Wikipedia:\n\nIn statistics, the Kolmogorov–Smirnov test (K–S test or KS test) is a nonparametric test of the equality of continuous, one-dimensional probability distributions that can be used to compare a sample with a reference probability distribution (one-sample K–S test), or to compare two samples (two-sample K–S test). It is named after Andrey Kolmogorov and Nikolai Smirnov.", "from scipy.stats import ks_2samp\nimport numpy.random as npr\n\n# Simulate a normal distribution with 10000 draws.\nnormal_rvs = npr.normal(size=10000)\nresult = ks_2samp(normal_rvs, df['labor_force_part_rate'].dropna())\nresult.pvalue < 0.05\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nempirical_cumdist(normal_rvs, ax=ax)\nempirical_cumdist(df['hotel_occup_rate'], ax=ax)", "Exercise\nRe-create the panel of cumulative distribution plots, this time adding on the Normal distribution, and annotating the p-value of the K-S test in the title.", "data_cols = [i for i in df.columns if i not in ['Year', 'Month']]\nn_rows, n_cols = compute_dimensions(len(data_cols))\n\nfig = plt.figure(figsize=(n_cols*3, n_rows*3))\nfrom matplotlib.gridspec import GridSpec\ngs = GridSpec(n_rows, n_cols)\nfor i, col in enumerate(data_cols):\n ax = plt.subplot(gs[i])\n test = ks_2samp(normal_rvs, standard_scaler(df[col]))\n empirical_cumdist(normal_rvs, ax)\n empirical_cumdist(standard_scaler(df[col]), ax, title=f\"{col}, p={round(test.pvalue, 2)}\")\n \nplt.tight_layout()\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
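The notebook above leaves its final exercise (refactoring the copy/pasted ECDF-grid plotting code into a reusable function) without a solution. The sketch below is one possible refactoring and is not part of the original material: it assumes the notebook's own helpers `compute_dimensions`, `empirical_cumdist` and `standard_scaler` are already defined, and the name `plot_ecdf_grid` is invented here for illustration.

```python
# Sketch only: assumes compute_dimensions, empirical_cumdist and
# standard_scaler from the notebook above are in scope; the name
# plot_ecdf_grid is hypothetical, not from the original tutorial.
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

def plot_ecdf_grid(df, ignore_cols=('Year', 'Month'), standardize=False):
    """Plot one empirical CDF per data column in a near-square grid."""
    data_cols = [c for c in df.columns if c not in ignore_cols]
    n_rows, n_cols = compute_dimensions(len(data_cols))

    plt.figure(figsize=(n_cols * 3, n_rows * 3))
    gs = GridSpec(n_rows, n_cols)
    for i, col in enumerate(data_cols):
        ax = plt.subplot(gs[i])
        # optionally standardize each column before plotting its ECDF
        values = standard_scaler(df[col]) if standardize else df[col]
        empirical_cumdist(values, ax, title=col)

    plt.tight_layout()
    plt.show()

# Both grid plots in the notebook then become one-line calls:
# plot_ecdf_grid(df) and plot_ecdf_grid(df, standardize=True)
```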
PyLCARS/PythonUberHDL
PYNQLearn/FabricOnly/.ipynb_checkpoints/myHDL_PYNQZ12_FabricOnly-checkpoint.ipynb
bsd-3-clause
[ "\\title{myHDL to PYNQ Fabric Only Exsample}\n\\author{Steven K Armour}\n\\maketitle\nRefrances\nLibraries and Helper functions", "from myhdl import *\nfrom myhdlpeek import Peeker\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sympy import *\ninit_printing()\n\nimport random\n\n#https://github.com/jrjohansson/version_information\n%load_ext version_information\n%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random\n\n#helper functions to read in the .v and .vhd generated files into python\ndef VerilogTextReader(loc, printresult=True):\n with open(f'{loc}.v', 'r') as vText:\n VerilogText=vText.read()\n if printresult:\n print(f'***Verilog modual from {loc}.v***\\n\\n', VerilogText)\n return VerilogText\n\ndef VHDLTextReader(loc, printresult=True):\n with open(f'{loc}.vhd', 'r') as vText:\n VerilogText=vText.read()\n if printresult:\n print(f'***VHDL modual from {loc}.vhd***\\n\\n', VerilogText)\n return VerilogText", "Project 1: 1 Switch 1 LED\nhttps://timetoexplore.net/blog/arty-fpga-verilog-01\nConstraints File\nmyHDL Code", "@block\ndef S0L0(sw, clk, led):\n \"\"\"\n FPGA Hello world of one switch controlling one LED based on\n https://timetoexplore.net/blog/arty-fpga-verilog-01\n \n Target:\n ZYNQ 7000 Board (Arty, PYNQ-Z1, PYNQ-Z2) with at least 2 \n switchs and 4 leds\n \n \n Input:\n sw(2bitVec):switch input\n clk(bool): clock input \n Ouput:\n led(4bitVec): led output\n \n \"\"\"\n \n @always(clk.posedge)\n def logic():\n if sw[0]==0:\n led.next[0]=True\n else:\n led.next[0]=False\n \n return instances()", "myHDL Testing", "Peeker.clear()\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nsw=Signal(intbv(0)[2:]); Peeker(sw, 'sw')\nled=Signal(intbv(0)[4:]); Peeker(led, 'led')\n\nnp.random.seed(18)\nswTVals=[int(i) for i in np.random.randint(0,2, 10)]\n\nDUT=S0L0(sw, clk, led)\n\ndef S0L0_TB():\n \n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n for i in range(10):\n sw.next[0]=swTVals[i]\n yield clk.posedge\n raise StopSimulation()\n \n return instances()\n \nsim=Simulation(DUT, S0L0_TB(), *Peeker.instances()).run()\n\nPeeker.to_wavedrom()\n\nPeeker.to_dataframe()", "Verilog Code", "DUT.convert()\nVerilogTextReader('S0L0');", "\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{S0L0_RTL.png}}\n\\caption{\\label{fig:S0L0RTL} S0L0 RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{S0L0_SYN.png}}\n\\caption{\\label{fig:S0L0SYN} S0L0 Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{S0L0_SYN.png}}\n\\caption{\\label{fig:S0L0SYN} S0L0 Implementated Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nPYNQ-Z1 Constraints File\nBelow is what is found in file constrs_S0L0.xdc\nNotice that the orgianl port names found in the PYNQ-Z1 Constraints file have been changed to the port names of the module S0L0\nVerilog Testbench", "swTVal=intbv(int(''.join([str(i) for i in swTVals]), 2))[len(swTVals):]\nprint(f'swTest: {swTVals}, {swTVal}, {[int(i) for i in swTVal]}')\n\n\n@block\ndef S0L0_TBV():\n clk=Signal(bool(0))\n sw=Signal(intbv(0)[2:])\n led=Signal(intbv(0)[4:])\n \n #test stimuli\n swTVals=Signal(swTVal)\n \n @always_comb\n def print_data():\n print(sw, clk, led)\n\n\n DUT=S0L0(sw, clk, led)\n \n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n \n @instance\n def stimules():\n for i in range(10):\n 
sw.next[0]=swTVals[i]\n yield clk.posedge\n raise StopSimulation()\n \n return instances()\n \nTB=S0L0_TBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('S0L0_TBV');", "Board Verification\nProject 2: 2 Switchs 4 LEDS\nhttps://timetoexplore.net/blog/arty-fpga-verilog-01\nmyHDL Code", "@block\ndef S2L4(sw, clk, led):\n \"\"\"\n FPGA Hello world of two switchs controlling four LED based on\n https://timetoexplore.net/blog/arty-fpga-verilog-01\n \n Target:\n ZYNQ 7000 Board (Arty, PYNQ-Z1, PYNQ-Z2) with at least 2 \n switchs and 4 leds\n \n \n Input:\n sw(2bitVec):switch input\n clk(bool): clock input \n Ouput:\n led(4bitVec): led output\n \n \"\"\"\n\n \n @always(clk.posedge)\n def logic():\n if sw[0]==0:\n led.next[2:]=0\n else:\n led.next[2:]=3\n \n if sw[1]==0:\n led.next[4:2]=0\n else:\n led.next[4:2]=3\n \n return instances()", "myHDL Testing", "Peeker.clear()\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nsw=Signal(intbv(0)[2:]); Peeker(sw, 'sw')\nled=Signal(intbv(0)[4:]); Peeker(led, 'led')\n\nnp.random.seed(18)\nswTVals=[int(i) for i in np.random.randint(0,4, 10)]\n\nDUT=S2L4(sw, clk, led)\n\ndef S2L4_TB():\n \n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n for i in range(10):\n sw.next=swTVals[i]\n yield clk.posedge\n raise StopSimulation()\n \n return instances()\n \nsim=Simulation(DUT, S2L4_TB(), *Peeker.instances()).run()\n\nPeeker.to_wavedrom()\n\nPeeker.to_dataframe()", "Verilog Code", "DUT.convert()\nVerilogTextReader('S2L4');", "\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{S2L4_RTL.png}}\n\\caption{\\label{fig:S2L4RTL} S2L4 RTL schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{S2L4_SYN.png}}\n\\caption{\\label{fig:S2L4SYN} S2L4 Synthesized Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\n\\begin{figure}\n\\centerline{\\includegraphics[width=10cm]{S2L4_IMP.png}}\n\\caption{\\label{fig:S2L4SYN} S2L4 Implementated Schematic; Xilinx Vivado 2017.4}\n\\end{figure}\nVerilog Testbench (ToDo)\nwill write later when testbench conversion is improved\nPYNQ-Z1 Constraints File\nusing same one as in 1 Switch 1 LED: constrs_S0L0.xdc\nBoard Verification\nProject 3: Countdown\nmyHDL Code", "@block\ndef countLED(clk, led):\n counter=Signal(modbv(0)[33:])\n \n @always(clk.posedge)\n def logic():\n counter.next=counter+1\n led.next[0]=counter[26]\n led.next[1]=counter[24]\n led.next[3]=counter[22]\n led.next[4]=counter[20]\n \n return instances()", "myHDL Testing", "Peeker.clear()\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nled=Signal(intbv(0)[4:]); Peeker(led, 'led')\n\n\nDUT=countLED(clk, led)\n\n'''\ndef countLED_TB():\n \n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n i=0\n while True:\n if i==2**33:\n raise StopSimulation()\n if 1%100==0:\n print(i)\n i+=1\n yield clk.posedge\n \n return instances()\n \nsim=Simulation(DUT, countLED_TB(), *Peeker.instances()).run()\n'''\n;", "Need to figure out how to write/run these long simulations better in python \nVerilog Code", "DUT.convert()\nVerilogTextReader('countLED');", "Verilog Testbench\nPYNQ-Z1 Constraints File\nBelow is what is found in file constrs_S0L0.xdc\nNotice that the orgianl port names found in the PYNQ-Z1 Constraints file have been changed to the port names of the module S0L0\nBoard Verification\nProject 4: Basic Duty Cycle\nhttps://timetoexplore.net/blog/arty-fpga-verilog-02\nmyHDL Code", "@block\ndef BDCLed(clk, led):\n counter=Signal(modbv(0)[8:])\n 
duty_led=Signal(modbv(8)[8:])\n \n @always(clk.posedge)\n def logic():\n counter.next=counter+1\n if counter<duty_led:\n led.next=15\n else:\n led.next=0\n \n return instances()", "myHDL Testing", "Peeker.clear()\nclk=Signal(bool(0)); Peeker(clk, 'clk')\nled=Signal(intbv(0)[4:]); Peeker(led, 'led')\n\nDUT=BDCLed(clk, led)\n \ndef BDCLed_TB():\n \n @always(delay(1))\n def ClkGen():\n clk.next=not clk\n \n @instance\n def stimules():\n i=0\n while True:\n if i==1000:\n raise StopSimulation()\n i+=1\n yield clk.posedge\n \n return instances()\n \nsim=Simulation(DUT, BDCLed_TB(), *Peeker.instances()).run()\n\nPeeker.to_wavedrom()\n\nBDCLedData=Peeker.to_dataframe()\nBDCLedData=BDCLedData[BDCLedData['clk']==1]\nBDCLedData.plot(y='led');", "Verilog Code", "DUT.convert()\nVerilogTextReader('BDCLed');", "PYNQ-Z1 Constraints File\nBelow is what is found in file constrs_S0L0.xdc\nNotice that the orgianl port names found in the PYNQ-Z1 Constraints file have been changed to the port names of the module S0L0\nVerilog Testbench", "@block\ndef BDCLed_TBV():\n\n clk=Signal(bool(0))\n led=Signal(intbv(0)[4:])\n \n @always_comb\n def print_data():\n print(sw, clk, led)\n\n DUT=BDCLed(clk, led)\n \n \n @instance\n def clk_signal():\n while True:\n clk.next = not clk\n yield delay(1)\n \n @instance\n def stimules():\n i=0\n while True:\n if i==1000:\n raise StopSimulation()\n i+=1\n yield clk.posedge\n \n return instances()\n \nTB=BDCLed_TBV()\nTB.convert(hdl=\"Verilog\", initial_values=True)\nVerilogTextReader('BDCLed_TBV');", "Board Verification\nProject 5: Mid level PWM LED\npwm myHDL Code", "@block\ndef pwm(clk, dutyCount, o_state):\n counter=Signal(modbv(0)[8:])\n \n @always(clk.posedge)\n def logic():\n counter.next=counter+1\n o_state.next=counter<dutyCount\n \n return instances()", "pwm myHDL Testing", "Peeker.clear()\nclk=Signal(bool(0)); Peeker(clk, 'clk')\ndutyCount=Signal(intbv(4)[8:]); Peeker(dutyCount, 'dutyCount')\no_state=Signal(bool(0)); Peeker(o_state, 'o_state')\n\nDUT=pwm(clk, dutyCount, o_state)\n\ndef pwm_TB():\n pass", "pwm Verilog Code" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
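The entry above ends with `pwm_TB` left as a stub and an empty "pwm Verilog Code" section. The following is only an illustrative sketch of how that testbench could be completed, in the same style as the notebook's earlier `BDCLed_TB`; it assumes the `pwm` block, the `clk`/`dutyCount`/`o_state` signals, `DUT`, `Peeker` and the myHDL imports from the preceding cells, and it is not the author's missing code.

```python
# Sketch only: mirrors the BDCLed_TB pattern used earlier in the notebook.
# Assumes clk, dutyCount, o_state, DUT and Peeker from the preceding cell,
# plus the myHDL names (always, delay, instance, instances, Simulation,
# StopSimulation) already imported there.
def pwm_TB():

    @always(delay(1))
    def ClkGen():
        # free-running clock for the simulation
        clk.next = not clk

    @instance
    def stimules():
        # run long enough to cover several PWM periods, then stop
        i = 0
        while True:
            if i == 1000:
                raise StopSimulation()
            i += 1
            yield clk.posedge

    return instances()

sim = Simulation(DUT, pwm_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
```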
26fe/jsonstat.py
examples-notebooks/oecd-canada-jsonstat_v1.ipynb
lgpl-3.0
[ "Notebook: using the jsonstat.py Python library with jsonstat format version 1.\nThis Jupyter notebook shows the Python library jsonstat.py in action. JSON-stat is a simple, lightweight JSON dissemination format. For more information about the format see the official site. This example shows how to explore the example data file oecd-canada from the json-stat.org site. This file is compliant with version 1 of jsonstat.", "# all imports here\nfrom __future__ import print_function\nimport os\nimport pandas as ps  # using pandas to convert the jsonstat dataset to a pandas dataframe\nimport jsonstat  # import jsonstat.py package\n\nimport matplotlib as plt  # for plotting\n\n%matplotlib inline", "Download or use the cached file oecd-canada.json. Caching the file on disk makes it possible to work offline and speeds up the exploration of the data.", "url = 'http://json-stat.org/samples/oecd-canada.json'\nfile_name = \"oecd-canada.json\"\n\nfile_path = os.path.abspath(os.path.join(\"..\", \"tests\", \"fixtures\", \"www.json-stat.org\", file_name))\nif os.path.exists(file_path):\n    print(\"using already downloaded file {}\".format(file_path))\nelse:\n    print(\"download file and storing on disk\")\n    jsonstat.download(url, file_name)\n    file_path = file_name", "Initialize a JsonStatCollection from the file and print the list of datasets contained in the collection.", "collection = jsonstat.from_file(file_path)\ncollection", "Select the dataset named oecd. The oecd dataset has three dimensions (concept, area, year) and contains 432 values.", "oecd = collection.dataset('oecd')\noecd", "Show some detailed info about the dimensions", "oecd.dimension('concept')\n\noecd.dimension('area')\n\noecd.dimension('year')", "Accessing values in the dataset\nPrint the value in the oecd dataset for area = IT and year = 2012", "oecd.data(area='IT', year='2012')\n\noecd.value(area='IT', year='2012')\n\noecd.value(concept='unemployment rate', area='Australia', year='2004')  # 5.39663128\n\noecd.value(concept='UNR', area='AU', year='2004')", "Transforming the dataset into a pandas DataFrame", "df_oecd = oecd.to_data_frame('year', content='id')\ndf_oecd.head()\n\ndf_oecd['area'].describe()  # area contains 36 values", "Extract a subset of data into a pandas dataframe from the jsonstat dataset.\nWe can transform the dataset by freezing the dimension area to a specific country (Canada).", "df_oecd_ca = oecd.to_data_frame('year', content='id', blocked_dims={'area':'CA'})\ndf_oecd_ca.tail()\n\ndf_oecd_ca['area'].describe()  # area contains only one value (CA)\n\ndf_oecd_ca.plot(grid=True)", "Transforming a dataset into a Python list", "oecd.to_table()[:5]", "It is possible to transform jsonstat data into a table with a different dimension order", "order = [i.did for i in oecd.dimensions()]\norder = order[::-1]  # reverse list\ntable = oecd.to_table(order=order)\ntable[:5]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tpin3694/tpin3694.github.io
machine-learning/support_vector_classifier.ipynb
mit
[ "Title: Support Vector Classifier \nSlug: support_vector_classifier \nSummary: How to train a support vector classifier in Scikit-Learn \nDate: 2017-09-22 12:00\nCategory: Machine Learning\nTags: Support Vector Machines\nAuthors: Chris Albon \n<a alt=\"Support Vector Classifier\" href=\"https://machinelearningflashcards.com\">\n    <img src=\"support_vector_classifier/Support_Vector_Classifier_print.png\" class=\"flashcard center-block\">\n</a>\nThere is a balance between SVC maximizing the margin of the hyperplane and minimizing the misclassification. In SVC, the latter is controlled with the hyperparameter $C$, the penalty imposed on errors. C is a parameter of the SVC learner and is the penalty for misclassifying a data point. When C is small, the classifier is okay with misclassified data points (high bias but low variance). When C is large, the classifier is heavily penalized for misclassified data and therefore bends over backwards to avoid any misclassified data points (low bias but high variance).\nIn scikit-learn, $C$ is determined by the parameter C and defaults to C=1.0. We should treat $C$ as a hyperparameter of our learning algorithm which we tune using model selection techniques.\nPreliminaries", "# Load libraries\nfrom sklearn.svm import LinearSVC\nfrom sklearn import datasets\nfrom sklearn.preprocessing import StandardScaler\nimport numpy as np", "Load Iris Flower Data", "# Load feature and target data\niris = datasets.load_iris()\nX = iris.data\ny = iris.target", "Standardize Features", "# Standardize features\nscaler = StandardScaler()\nX_std = scaler.fit_transform(X)", "Train Support Vector Classifier", "# Create support vector classifier\nsvc = LinearSVC(C=1.0)\n\n# Train model\nmodel = svc.fit(X_std, y)", "Create Previously Unseen Observation", "# Create new observation\nnew_observation = [[-0.7, 1.1, -1.1, -1.7]]", "Predict Class Of Observation", "# Predict class of new observation\nsvc.predict(new_observation)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
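The flashcard above notes that $C$ should be treated as a hyperparameter and tuned with model selection techniques, but the code stops at a fixed C=1.0. A minimal tuning sketch follows; the grid of C values and the use of GridSearchCV are illustrative additions, not part of the original post.

```python
# Sketch only: the C grid below is an illustrative choice, not from the post.
from sklearn import datasets
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Same data preparation as above
iris = datasets.load_iris()
X_std = StandardScaler().fit_transform(iris.data)
y = iris.target

# Cross-validated search over a logarithmic range of C values
search = GridSearchCV(LinearSVC(max_iter=10000),
                      param_grid={'C': [0.01, 0.1, 1.0, 10.0, 100.0]},
                      cv=5)
search.fit(X_std, y)

print(search.best_params_, search.best_score_)
```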
Upward-Spiral-Science/spect-team
Code/Assignment-11/AdvancedFeatureSelection.ipynb
apache-2.0
[ "Complex feature selection as a preprocessing step to learning and clasification", "# Standard\nimport pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# Dimensionality reduction and Clustering\nfrom sklearn.decomposition import PCA\nfrom sklearn.cluster import KMeans\nfrom sklearn import manifold, datasets\nfrom itertools import cycle\n\n# Plotting tools and classifiers\nfrom matplotlib.colors import ListedColormap\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn import preprocessing\nfrom sklearn.datasets import make_moons, make_circles, make_classification\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA\nfrom sklearn import cross_validation\nfrom sklearn.cross_validation import LeaveOneOut\n\n# Let's read the data in and clean it\n\ndef get_NaNs(df):\n columns = list(df.columns.get_values()) \n row_metrics = df.isnull().sum(axis=1)\n rows_with_na = []\n for i, x in enumerate(row_metrics):\n if x > 0: rows_with_na.append(i)\n return rows_with_na\n\ndef remove_NaNs(df):\n rows_with_na = get_NaNs(df)\n cleansed_df = df.drop(df.index[rows_with_na], inplace=False) \n return cleansed_df\n\ninitial_data = pd.DataFrame.from_csv('Data_Adults_1_reduced_inv4.csv')\ncleansed_df = remove_NaNs(initial_data)\n\n# Let's also get rid of nominal data\nnumerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']\nX = cleansed_df.select_dtypes(include=numerics)\n\n# Let's now clean columns getting rid of certain columns that might not be important to our analysis\n\ncols2drop = ['GROUP_ID', 'doa', 'Baseline_header_id', 'Concentration_header_id', 'Baseline_Reading_id',\n 'Concentration_Reading_id']\nX = X.drop(cols2drop, axis=1, inplace=False)\n\n# For our studies children skew the data, it would be cleaner to just analyse adults\nX = X.loc[X['Age'] >= 18]\nY = X.loc[X['race_id'] == 1]\nX = X.loc[X['Gender_id'] == 1]\n\n# Let's extract ADHd and Bipolar patients (mutually exclusive)\n\nADHD_men = X.loc[X['ADHD'] == 1]\nADHD_men = ADHD_men.loc[ADHD_men['Bipolar'] == 0]\n\nBP_men = X.loc[X['Bipolar'] == 1]\nBP_men = BP_men.loc[BP_men['ADHD'] == 0]\n\nprint ADHD_men.shape\nprint BP_men.shape\n\n# Keeping a backup of the data frame object because numpy arrays don't play well with certain scikit functions\nADHD_men = pd.DataFrame(ADHD_men.drop(['Patient_ID', 'Gender_id', 'ADHD', 'Bipolar', 'Age', 'race_id']\n , axis = 1, inplace = False))\nBP_men = pd.DataFrame(BP_men.drop(['Patient_ID', 'Gender_id', 'ADHD', 'Bipolar', 'Age', 'race_id']\n , axis = 1, inplace = False))", "Feature Selection\nWe are now going to explore Some feature selection procedures, the output of this will then be sent to a classifier\n\nRecursive elimination with cross validation\nSimple best percentile features\nTree based feature selection\n<br/>\n<br/>\n\nThe output from this is then sent to the following classifiers\n<br/>\n1. Random Forrests - Good ensemble technique\n2. QDA - Other experiments with this classifier have been successful\n3. LDA - A good simple technique\n4. 
Gaussian Naive Bayes - Experiments with this classifier have proven successful in the past", "from sklearn.svm import SVC\nfrom sklearn.cross_validation import StratifiedKFold\nfrom sklearn.feature_selection import RFECV\nfrom sklearn.feature_selection import SelectFromModel\nfrom sklearn.ensemble import ExtraTreesClassifier\n\n# Make the Labels vector\nclabels1 = [1] * 946 + [0] * 223\n\n# Concatenate and Scale\ncombined1 = pd.concat([ADHD_men, BP_men])\ncombined1 = pd.DataFrame(preprocessing.scale(combined1))\n\n# Recursive Feature elimination with cross validation\nsvc = SVC(kernel=\"linear\")\nrfecv = RFECV(estimator=svc, step=1, cv=StratifiedKFold(clabels1, 2),\n scoring='accuracy')\nrfecv.fit(combined1, clabels1)\ncombined1_recf = rfecv.transform(combined1)\n\ncombined1_recf = pd.DataFrame(combined1_recf)\nprint combined1_recf.head()\n\n# Percentile base feature selection \n\nfrom sklearn.feature_selection import SelectPercentile, f_classif\nselector = SelectPercentile(f_classif, percentile=5)\ncombined_kpercentile = selector.fit_transform(combined1, clabels1)\n\ncombined1_kpercentile = pd.DataFrame(combined_kpercentile)\nprint combined1_kpercentile.head()\n\n# Tree based selection\nfrom sklearn.ensemble import ExtraTreesClassifier\n\nclf = ExtraTreesClassifier()\nclf = clf.fit(combined1, clabels1)\ncombined1_trees = SelectFromModel(clf, prefit=True).transform(combined1)\n\ncombined1_trees = pd.DataFrame(combined1_trees)\nprint combined1_trees.head()", "Classifiers", "# Leave one Out cross validation\ndef leave_one_out(classifier, values, labels):\n leave_one_out_validator = LeaveOneOut(len(values))\n classifier_metrics = cross_validation.cross_val_score(classifier, values, labels, cv=leave_one_out_validator)\n accuracy = classifier_metrics.mean()\n deviation = classifier_metrics.std()\n return accuracy, deviation\n\nrf = RandomForestClassifier(n_estimators = 22) \nqda = QDA()\nlda = LDA()\ngnb = GaussianNB()\nclassifier_accuracy_list = []\nclassifiers = [(rf, \"Random Forest\"), (lda, \"LDA\"), (qda, \"QDA\"), (gnb, \"Gaussian NB\")]\nfor classifier, name in classifiers:\n accuracy, deviation = leave_one_out(classifier, combined1_recf, clabels1)\n print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation)\n classifier_accuracy_list.append((name, accuracy))\n\nfor classifier, name in classifiers:\n accuracy, deviation = leave_one_out(classifier, combined1_kpercentile, clabels1)\n print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation)\n classifier_accuracy_list.append((name, accuracy))\n\nfor classifier, name in classifiers:\n accuracy, deviation = leave_one_out(classifier, combined1_trees, clabels1)\n print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation)\n classifier_accuracy_list.append((name, accuracy))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
stable/_downloads/51cca4c9f4bd40623cb6bfa890e2eb4b/20_erp_stats.ipynb
bsd-3-clause
[ "%matplotlib inline", "Visualising statistical significance thresholds on EEG data\nMNE-Python provides a range of tools for statistical hypothesis testing\nand the visualisation of the results. Here, we show a few options for\nexploratory and confirmatory tests - e.g., targeted t-tests, cluster-based\npermutation approaches (here with Threshold-Free Cluster Enhancement);\nand how to visualise the results.\nThe underlying data comes from :footcite:DufauEtAl2015; we contrast long vs.\nshort words. TFCE is described in :footcite:SmithNichols2009.", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.stats import ttest_ind\n\nimport mne\nfrom mne.channels import find_ch_adjacency, make_1020_channel_selections\nfrom mne.stats import spatio_temporal_cluster_test\n\nnp.random.seed(0)\n\n# Load the data\npath = mne.datasets.kiloword.data_path() / 'kword_metadata-epo.fif'\nepochs = mne.read_epochs(path)\n# These data are quite smooth, so to speed up processing we'll (unsafely!) just\n# decimate them\nepochs.decimate(4, verbose='error')\nname = \"NumberOfLetters\"\n\n# Split up the data by the median length in letters via the attached metadata\nmedian_value = str(epochs.metadata[name].median())\nlong_words = epochs[name + \" > \" + median_value]\nshort_words = epochs[name + \" < \" + median_value]", "If we have a specific point in space and time we wish to test, it can be\nconvenient to convert the data into Pandas Dataframe format. In this case,\nthe :class:mne.Epochs object has a convenient\n:meth:mne.Epochs.to_data_frame method, which returns a dataframe.\nThis dataframe can then be queried for specific time windows and sensors.\nThe extracted data can be submitted to standard statistical tests. Here,\nwe conduct t-tests on the difference between long and short words.", "time_windows = ((.2, .25), (.35, .45))\nelecs = [\"Fz\", \"Cz\", \"Pz\"]\nindex = ['condition', 'epoch', 'time']\n\n# display the EEG data in Pandas format (first 5 rows)\nprint(epochs.to_data_frame(index=index)[elecs].head())\n\nreport = \"{elec}, time: {tmin}-{tmax} s; t({df})={t_val:.3f}, p={p:.3f}\"\nprint(\"\\nTargeted statistical test results:\")\nfor (tmin, tmax) in time_windows:\n long_df = long_words.copy().crop(tmin, tmax).to_data_frame(index=index)\n short_df = short_words.copy().crop(tmin, tmax).to_data_frame(index=index)\n for elec in elecs:\n # extract data\n A = long_df[elec].groupby(\"condition\").mean()\n B = short_df[elec].groupby(\"condition\").mean()\n\n # conduct t test\n t, p = ttest_ind(A, B)\n\n # display results\n format_dict = dict(elec=elec, tmin=tmin, tmax=tmax,\n df=len(epochs.events) - 2, t_val=t, p=p)\n print(report.format(**format_dict))", "Absent specific hypotheses, we can also conduct an exploratory\nmass-univariate analysis at all sensors and time points. This requires\ncorrecting for multiple tests.\nMNE offers various methods for this; amongst them, cluster-based permutation\nmethods allow deriving power from the spatio-temoral correlation structure\nof the data. Here, we use TFCE.", "# Calculate adjacency matrix between sensors from their locations\nadjacency, _ = find_ch_adjacency(epochs.info, \"eeg\")\n\n# Extract data: transpose because the cluster test requires channels to be last\n# In this case, inference is done over items. 
In the same manner, we could\n# also conduct the test over, e.g., subjects.\nX = [long_words.get_data().transpose(0, 2, 1),\n short_words.get_data().transpose(0, 2, 1)]\ntfce = dict(start=.4, step=.4) # ideally start and step would be smaller\n\n# Calculate statistical thresholds\nt_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_test(\n X, tfce, adjacency=adjacency,\n n_permutations=100) # a more standard number would be 1000+\nsignificant_points = cluster_pv.reshape(t_obs.shape).T < .05\nprint(str(significant_points.sum()) + \" points selected by TFCE ...\")", "The results of these mass univariate analyses can be visualised by plotting\n:class:mne.Evoked objects as images (via :class:mne.Evoked.plot_image)\nand masking points for significance.\nHere, we group channels by Regions of Interest to facilitate localising\neffects on the head.", "# We need an evoked object to plot the image to be masked\nevoked = mne.combine_evoked([long_words.average(), short_words.average()],\n weights=[1, -1]) # calculate difference wave\ntime_unit = dict(time_unit=\"s\")\nevoked.plot_joint(title=\"Long vs. short words\", ts_args=time_unit,\n topomap_args=time_unit) # show difference wave\n\n# Create ROIs by checking channel labels\nselections = make_1020_channel_selections(evoked.info, midline=\"12z\")\n\n# Visualize the results\nfig, axes = plt.subplots(nrows=3, figsize=(8, 8))\naxes = {sel: ax for sel, ax in zip(selections, axes.ravel())}\nevoked.plot_image(axes=axes, group_by=selections, colorbar=False, show=False,\n mask=significant_points, show_names=\"all\", titles=None,\n **time_unit)\nplt.colorbar(axes[\"Left\"].images[-1], ax=list(axes.values()), shrink=.3,\n label=\"µV\")\n\nplt.show()", "References\n.. footbibliography::" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DataReply/persistable
examples/Persistable.ipynb
gpl-3.0
[ "Introduction:\nThis material has been used in the past to teach colleagues in our group how to use persistable.\nThe persistable package provides a general loggable superclass that gives Python users a simple way to persist and load calculations and to track the corresponding calculation parameters.\nInheriting from Persistable automatically spools a logger and appends the PersistLoad object for easy and reproducible data persistence and loading, with parameter tracking. The PersistLoad object is based on setting a workingdatadir within which all persisted data is saved and logs are stored. Such a directory acts as a home for a specific set of experiments.\nFor more details, read the docs.\nImports:", "# Persistable Class:\nfrom persistable import Persistable\n\n# Set a persistable top path:\nfrom pathlib import Path\nLOCALDATAPATH = Path('.').absolute()", "Instantiate Persistable:\nEach persistable object is instantiated with parameters that should uniquely (or nearly uniquely) define the payload.", "params = {\n    \"hello\": \"world\",\n    \"another_dict\": {\n        \"test\": [1, 2, 3]\n    },\n    \"a\": 1,\n    \"b\": 4\n}\np = Persistable(\n    payload_name=\"first_payload\",\n    params=params,\n    workingdatapath=LOCALDATAPATH / \"knowledgeshare_20170929\"  # object will live in this local disk location\n)", "Define Payload:\nPayloads are defined by overriding the _generate_payload function:\nPayload defined by _generate_payload function:\nSimply override _generate_payload to give the Persistable object generate functionality. Note that generate here means to create the payload; the term is not meant to indicate that a Python generator is being produced.", "# ML Example:\n\"\"\"\ndef _generate_payload(self):\n    X = pd.read_csv(self.params['datafile'])\n    model = XGboost(X)\n    model.fit()\n    self.payload['model'] = model\n\"\"\"\n\n# Silly Example:\ndef _generate_payload(self):\n    self.payload['sum'] = self.params['a'] + self.params['b']\n    self.payload['msg'] = self.params['hello']", "Now we will monkeypatch the payload generator to override its counterpart in the Persistable object (only necessary because we've defined the generator outside of an IDE).", "def bind(instance, method):\n    def binding_scope_fn(*args, **kwargs):\n        return method(instance, *args, **kwargs)\n    return binding_scope_fn\n\np._generate_payload = bind(p, _generate_payload)\n\np.generate()", "Persistable as a Super Class:\nThe non-monkey-patching equivalent of what we did above:", "class SillyPersistableExample(Persistable):\n    def _generate_payload(self):\n        self.payload['sum'] = self.params['a'] + self.params['b']\n        self.payload['msg'] = self.params['hello']\n\np2 = SillyPersistableExample(payload_name=\"silly_example\", params=params, workingdatapath=LOCALDATAPATH / \"knowledgeshare_20170929\")\np2.generate()", "Load:", "p_test = Persistable(\n    \"first_payload\",\n    params=params,\n    workingdatapath=LOCALDATAPATH / \"knowledgeshare_20170929\"\n)\np_test.load()\n\np_test.payload" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
smalladi78/SEF
notebooks/0_DataCleanup-Feb2016.ipynb
unlicense
[ "Data cleanup\nThis notebook is meant for cleaning up the donation data.\nThe following is a summary of some of the cleanup tasks from this notebook:\n\nLoad the csv file that has the donors\nStrip out whitespace from all the columns\nFill na with empty strings\nChange column data types (after examining for correctness)\nCleanup amounts column - removed negative (totals to -641910.46 dollars) and zero values\nCleanup state codes.\nRemoved donations that are outside of US - about \\$30,000 USD\nRemoved donations totaling to 9.5 million dollars that came from anonymous donors (as outliers)\nIf there is no location information or it is inaccurate, move it to a different, move it to a different dataframe\nUpdate the city and state names when not present based on the zipcodes dataset.", "import pandas as pd\nimport numpy as np\nimport locale\nimport matplotlib.pyplot as plt\nfrom bokeh.plotting import figure, show\nfrom bokeh.models import ColumnDataSource, HoverTool\n\n%matplotlib inline\nfrom bokeh.plotting import output_notebook\noutput_notebook()\n\n_ = locale.setlocale(locale.LC_ALL, '')\nthousands_sep = lambda x: locale.format(\"%.2f\", x, grouping=True)\n#example:\nprint thousands_sep(1234567890.76543)\n\ngetdate_ym = lambda x: str(x.year) + \"_\" + str(x.month)\ngetdate_ymd = lambda x: str(x.month) + \"/\" + str(x.day) + \"/\" + str(x.year)\ndates = pd.DatetimeIndex(['2010-10-17', '2011-05-13', \"2012-01-15\"])\nmap(getdate_ym, dates)\nmap(getdate_ymd, dates)", "Load csv", "df = pd.read_csv('in/gifts_Feb2016_2.csv')\nsource_columns = ['donor_id', 'amount_initial', 'donation_date', 'appeal', 'fund', 'city', 'state', 'zipcode_initial', 'charitable', 'sales']\ndf.columns = source_columns\n\ndf.info()\n\nstrip_func = lambda x: x.strip() if isinstance(x, str) else x\ndf = df.applymap(strip_func)", "Address nan column values", "df.replace({'appeal': {'0': ''}}, inplace=True)\ndf.appeal.fillna('', inplace=True)\ndf.fund.fillna('', inplace=True)", "Change column types and drop unused columns", "df.donation_date = pd.to_datetime(df.donation_date)\ndf.charitable = df.charitable.astype('bool')\ndf['zipcode'] = df.zipcode_initial.str[0:5]\n\nfill_zipcode = lambda x: '0'*(5-len(str(x))) + str(x)\nx1 = pd.DataFrame([[1, '8820'], [2, 8820]], columns=['a','b'])\nx1.b = x1.b.apply(fill_zipcode)\nx1\n\ndf.zipcode = df.zipcode.apply(fill_zipcode)", "Cleanup amounts", "## Ensure that all amounts are dollar figures\ndf[~df.amount_initial.str.startswith('-$') & ~df.amount_initial.str.startswith('$')]\n\n## drop row with invalid data\ndf.drop(df[df.donation_date == '1899-12-31'].index, axis=0, inplace=True)\n\ndf['amount_cleanup'] = df.amount_initial.str.replace(',', '')\ndf['amount_cleanup'] = df.amount_cleanup.str.replace('$', '')\ndf['amount'] = df.amount_cleanup.astype(float)\n\n## Make sure we did not throw away valid numbers by checking with the original value\ndf[(df.amount == 0)].amount_initial.unique()", "Outlier data", "# There are some outliers in the data, quite a few of them are recent.\n_ = plt.scatter(df[df.amount > 5000].amount.values, df[df.amount > 5000].donation_date.values)\nplt.show()\n\n# Fun little thing to try out bokeh (we can hover and detect the culprits)\ndef plot_data(df):\n dates = map(getdate_ym, pd.DatetimeIndex(df[df.amount > 5000].donation_date))\n amounts = map(thousands_sep, df[df.amount > 5000].amount)\n x = df[df.amount > 5000].donation_date.values\n y = df[df.amount > 5000].amount.values\n donor_ids = df[df.amount > 5000].donor_id.values\n states = df[df.amount > 
5000].state.values\n\n source = ColumnDataSource(\n data=dict(\n x=x,\n y=y,\n dates=dates,\n amounts=amounts,\n donor_ids=donor_ids,\n states=states,\n )\n )\n\n hover = HoverTool(\n tooltips=[\n (\"date\", \"@dates\"),\n (\"amount\", \"@amounts\"),\n (\"donor\", \"@donor_ids\"),\n (\"states\", \"@states\"),\n ]\n )\n\n p = figure(plot_width=400, plot_height=400, title=None, tools=[hover])\n p.circle('x', 'y', size=5, source=source)\n\n show(p)\n\nplot_data(df.query('amount > 5000'))\n\n# All the Outliers seem to have the following properties: state == YY and specific donorid.\n# Plot the remaining data outside of these to check that we caught all the outliers.\nplot_data(df[~df.index.isin(df.query('state == \"YY\" and amount > 5000').index)])\n\n# Outlier data\ndf[(df.state == 'YY') & (df.amount >= 45000)]\n\ndf[(df.state == 'YY') & (df.amount >= 45000)]\\\n .sort_values(by='amount', ascending=False)\\\n .head(6)[source_columns]\\\n .to_csv('out/0/outlier_data.csv')", "Exchanged emails with Anil and confirmed the decision to drop the outlier for the anonymous donor with the 9.5 million dollars.", "df.drop(df[(df.state == 'YY') & (df.amount >= 45000)].index, inplace=True)\n\nprint 'After dropping the anonymous donor, total amounts from the unknown state as a percentage of all amounts is: '\\\n , thousands_sep(100*df[(df.state == 'YY')].amount.sum()/df.amount.sum()), '%'", "Amounts with zero values", "## Some funds have zero amounts associated with them.\n## They mostly look like costs - expense fees, transaction fees, administrative fees\n## Let us examine if we can safely drop them from our analysis\n\ndf[df.amount_initial == '$0.00'].groupby(['fund', 'appeal'])['donor_id'].count()", "Dropping rows with zero amounts (after confirmation with SEF office)", "df.drop(df[df.amount == 0].index, axis=0, inplace=True)", "Negative amounts", "## What is the total amount of the negative?\nprint 'Total negative amount is: ', df[df.amount < 0].amount.sum()\n\n# Add if condition to make this re-runnable\nif df[df.amount < 0].amount.sum() > 0:\n print 'Amounts grouped by fund and appeal, sorted by most negative amounts'\n df[df.amount < 0]\\\n .groupby(['fund', 'appeal'])['amount',]\\\n .sum()\\\n .sort_values(by='amount')\\\n .to_csv('out/0/negative_amounts_sorted.csv')\n\n df[df.amount < 0]\\\n .groupby(['fund', 'appeal'])['amount',]\\\n .sum()\\\n .to_csv('out/0/negative_amounts_grouped_by_fund.csv')", "Dropping rows with negative amounts (after confirmation with SEF office)", "df.drop(df[df.amount < 0].index, axis=0, inplace=True)", "Investigate invalid state codes", "df.info()\n\ndf.state.unique()\n\n## States imported from http://statetable.com/\nstates = pd.read_csv('in/state_table.csv')\nstates.rename(columns={'abbreviation': 'state'}, inplace=True)\n\nall_states = pd.merge(states, pd.DataFrame(df.state.unique(), columns=['state']), on='state', how='right')\ninvalid_states = all_states[pd.isnull(all_states.id)].state\n\ndf[df.state.isin(invalid_states)].state.value_counts().sort_index()\n\ndf[df.state.isin(['56', 'AB', 'BC', 'CF', 'Ca', 'Co', 'HY', 'IO', 'Ny', 'PR', 'UK', 'VI', 'ja'])]\n\n%%html\n<style>table {float:left}</style>", "Explanation for invalid state codes:\nState|Count|Action|Explanation|\n-----|-----|------|-----------|\nYY|268|None|All these rows are bogus entries (City and Zip are also YYYYs) - about 20% of the donation amount has this\nON|62|Remove|This is the state of Ontario, Canada\nAP|18|Remove|This is data for Hyderabad\nVI|6|Remove|Virgin Islands\nPR|5|Remove|Peurto 
Rico\nNy|5|NY|Same as NY - rename Ny as NY\n56|1|Remove|This is one donation from Bangalore, Karnataka\nHY|1|Remove|Hyderabad\nBC|1|Remove|British Columbia, Canada\nIO|1|IA|Changed to Iowa - based on city and zip code\nAB|1|Remove|AB stands for Alberta, Canada\nCa|1|CA|Same as California - rename Ca to CA\nCo|1|CO|Same as Colarado - rename Co to CO\nCF|1|FL|Changed to Florida based on zip code and city\nja|1|FL|Change to FL based on zip code and city\nUK|1|Remove|London, UK\nKA|1|Remove|Bangalore, Karnataka", "state_renames = {'Ny': 'NY', 'IO': 'IA', 'Ca' : 'CA', 'Co' : 'CO', 'CF' : 'FL', 'ja' : 'FL'}\ndf.replace({'state': state_renames}, inplace=True)", "Dropping data for non-US locations", "non_usa_states = ['ON', 'AP', 'VI', 'PR', '56', 'HY', 'BC', 'AB', 'UK', 'KA']\nprint 'Total amount for locations outside USA: ', sum(df[df.state.isin(non_usa_states)].amount)\n#### Total amount for locations outside USA: 30710.63\n\ndf.drop(df[df.state.isin(non_usa_states)].index, axis=0, inplace=True)", "Investigate donations with state of YY", "print 'Percentage of amount for unknown (YY) state : {:.2f}'.format(100*df[df.state == 'YY'].amount.sum()/df.amount.sum())\n\nprint 'Total amount for the unknown state excluding outliers: ', df[(df.state == 'YY') & (df.amount < 45000)].amount.sum()\nprint 'Total amount for the unknown state: ', df[(df.state == 'YY')].amount.sum()\nprint 'Total amount: ', df.amount.sum()", "We will add these donations to the noloc_df below (which is the donations that have empty strings for the city/state/zipcode.\nInvestigate empty city, state and zip code\nPecentage of total amount from donations with no location: 3.087\nMoving all the data with no location to a different dataframe.\nWe will investigate the data that does have location information for correctness of location and then merge the no location data back at the end.", "print 'Pecentage of total amount from donations with no location: ', 100*sum(df[(df.city == '') & (df.state == '') & (df.zipcode_initial == '')].amount)/sum(df.amount)\n\nnoloc_df = df[(df.city == '') & (df.state == '') & (df.zipcode_initial == '')].copy()\ndf = df[~((df.city == '') & (df.state == '') & (df.zipcode_initial == ''))].copy()\n\nprint df.shape[0] + noloc_df.shape[0]\n\nnoloc_df = noloc_df.append(df[(df.state == 'YY')])\ndf = df[~(df.state == 'YY')]\n\n# Verify that we transferred all the rows over correctly. This total must match the total from above.\nprint df.shape[0] + noloc_df.shape[0]", "Investigate City in ('YYY','yyy')\nThese entries have invalid location information and will be added to the noloc_df dataframe.", "noloc_df = noloc_df.append(df[(df.city.str.lower() == 'yyy') | (df.city.str.lower() == 'yyyy')])\ndf = df[~((df.city.str.lower() == 'yyy') | (df.city.str.lower() == 'yyyy'))]\n\n# Verify that we transferred all the rows over correctly. This total must match the total from above.\nprint df.shape[0] + noloc_df.shape[0]", "Investigate empty state but non-empty city\nPercentage of total amount for data with City but no state: 0.566", "print 'Percentage of total amount for data with City but no state: {:.3f}'.format(100*sum(df[df.state == ''].amount)/sum(df.amount))\ndf[((df.state == '') & (df.city != ''))][['city','zipcode','amount']].sort_values('city', ascending=True).to_csv('out/0/City_No_State.csv')", "By visually examining the cities for rows that don't have a state, we can see that all the cities are coming from Canada and India and some from other countries (except two entries). 
So we will correct these two entries and drop all the other rows as they are not relevant to the USA.", "index = df[(df.donor_id == '-28K0T47RF') & (df.donation_date == '2007-11-30') & (df.city == 'Cupertino')].index\ndf.ix[index,'state'] = 'CA'\nindex = df[(df.donor_id == '9F4812A118') & (df.donation_date == '2012-06-30') & (df.city == 'San Juan')].index\ndf.ix[index,'state'] = 'WA'\ndf.ix[index,'zipcode'] = 98250\n\n# Verified that these remaining entries are for non-US location\nprint 'Total amount for non-USA location: ', df[((df.state == '') & (df.city != ''))].amount.sum()\n\ndf.drop(df[((df.state == '') & (df.city != ''))].index, inplace=True)", "Investigate empty city and zipcode but valid US state\nPercentage of total amount for data with valid US state, but no city, zipcode: 4.509\nMost of this amount (1.7 of 1.8 million) is coming from about 600 donors in California. We already know that about California is a major contributor to donations.\nAlthough, we can do some analytics based on just the US state using this data, it complicates the analysis that does not substantiate the knowledge gain.\nTherefore, we are dropping the state column from these rows and moving over this data to the dataset that has no location (the one that we created earlier) to simplify our analysis.", "print 'Percentage of total amount for data with valid US state, but no city, zipcode: {:.3f}'.format(100*sum(df[(df.city == '') & (df.zipcode_initial == '')].amount)/sum(df.amount))\n\n# Verify that we transferred all the rows over correctly. This total must match the total from above.\nprint df.shape[0] + noloc_df.shape[0]\n\nstateonly_df = df[(df.city == '') & (df.zipcode_initial == '')].copy()\nstateonly_df.state = ''\n\n## Move the rows with just the state over to the noloc_df dataset\nnoloc_df = pd.concat([noloc_df, stateonly_df])\ndf = df[~((df.city == '') & (df.zipcode_initial == ''))].copy()\n\n# Verify that we transferred all the rows over correctly. This total must match the total from above.\nprint df.shape[0] + noloc_df.shape[0]\n\nprint 100*sum(df[df.city == ''].amount)/sum(df.amount)\n\nprint len(df[df.city == '']), len(df[df.zipcode_initial == ''])\nprint sum(df[df.city == ''].amount), sum(df[df.zipcode_initial == ''].amount)\nprint sum(df[(df.city == '') & (df.zipcode_initial != '')].amount),\\\n sum(df[(df.city != '') & (df.zipcode_initial == '')].amount)\n\nprint sum(df.amount)", "Investigating empty city and empty state with non-empty zip code\nSince we have the zip code data from the US census data, we can use that to fill in the city and state", "## Zip codes from ftp://ftp.census.gov/econ2013/CBP_CSV/zbp13totals.zip\nzipcodes = pd.read_csv('in/zbp13totals.txt', dtype={'zip': object})\nzipcodes = zipcodes[['zip', 'city', 'stabbr']]\nzipcodes = zipcodes.rename(columns = {'zip':'zipcode', 'stabbr': 'state', 'city': 'city'})\nzipcodes.city = zipcodes.city.str.title()\nzipcodes.zipcode = zipcodes.zipcode.astype('str')\n\n## If we know the zip code, we can populate the city by using the zipcodes data\ndf.replace({'city': {'': np.nan}, 'state': {'': np.nan}}, inplace=True)\n\n## Set the index correctly for update to work. 
Then reset it back.\ndf.set_index(['zipcode'], inplace=True)\nzipcodes.set_index(['zipcode'], inplace=True)\n\ndf.update(zipcodes, join='left', overwrite=False, raise_conflict=False)\n\ndf.reset_index(drop=False, inplace=True)\nzipcodes.reset_index(drop=False, inplace=True)\n\nzipcodesdetail = pd.read_csv('in/zip_code_database.csv')\n\nzipcodesdetail = zipcodesdetail[zipcodesdetail.country == 'US'][['zip', 'primary_city', 'county', 'state', 'timezone', 'latitude', 'longitude']]\nzipcodesdetail = zipcodesdetail.rename(columns = {'zip':'zipcode', 'primary_city': 'city'})\n\n# The zip codes dataset has quite a few missing values. Filling in what we need for now.\n# If this happens again, search for a different data source!!\nzipcodesdetail.loc[(zipcodesdetail.city == 'Frisco') & (zipcodesdetail.state == 'TX') & (pd.isnull(zipcodesdetail.county)), 'county'] = 'Denton'\n\n# Strip the ' County' portion from the county names\ndef getcounty(county):\n if pd.isnull(county):\n return county\n elif county.endswith(' County'):\n return county[:-7]\n else:\n return county\n\nzipcodesdetail.county = zipcodesdetail['county'].apply(getcounty)\n\nzipcodesdetail.zipcode = zipcodesdetail.zipcode.apply(fill_zipcode)\n\nnewcols = np.array(list(set(df.columns).union(zipcodesdetail.columns)))\n\ndf = pd.merge(df, zipcodesdetail, on=['state', 'city', 'zipcode'], how='inner', suffixes=('_x', ''))[newcols]\n\n# For some reason, the data types are being reset. So setting them back to their expected data types.\ndf.donation_date = df.donation_date.apply(pd.to_datetime)\ndf.charitable = df.charitable.apply(bool)\ndf.amount = df.amount.apply(int)", "Investigate invalid zip codes", "all_zipcodes = pd.merge(df, zipcodes, on='zipcode', how='left')\nall_zipcodes[pd.isnull(all_zipcodes.city_x)].head()\n\n## There seems to be only one row with an invalid zip code. Let's drop it.\ndf.drop(df[df.zipcode_initial.isin(['GU214ND','94000'])].index, axis=0, inplace=True)", "Final check on all location data to confirm that we have no rows with empty state, city or location", "print 'No state: count of rows: ', len(df[df.state == ''].amount),\\\n 'Total amount: ', sum(df[df.state == ''].amount)\nprint 'No zipcode: count of rows: ', len(df[df.zipcode == ''].amount),\\\n 'Total amount: ', sum(df[df.zipcode == ''].amount)\nprint 'No city: count of rows: ', len(df[df.city == ''].amount),\\\n 'Total amount: ', sum(df[df.city == ''].amount)\n\n# Examining data - top 10 states by amount and number of donors\nprint df.groupby('state')['amount',].sum().sort_values(by='amount', ascending=False)[0:10]\nprint df.groupby('state')['donor_id',].count().sort_values(by='donor_id', ascending=False)[0:10]\n\nprint noloc_df.state.unique()\nprint noloc_df.city.unique()\nprint noloc_df.zipcode.unique()\n\nnoloc_df['city'] = ''\nnoloc_df['state'] = ''\nnoloc_df['zipcode'] = ''\n\nprint df.shape[0] + noloc_df.shape[0]\n\ndf.shape, noloc_df.shape\n\n# The input data has the latest zip code for each donor. 
So we cannot observe any movement even if there was any since\n# all donations by a given donor will only have the same exact zipcode.\nx1 = pd.DataFrame(df.groupby(['donor_id','zipcode']).zipcode.nunique())\nx1[x1.zipcode != 1]\n\n# The noloc_df and the df with location values have no donors in common - so we cannot use the donor\n# location information from df to detect the location in noloc_df.\nset(df.donor_id.values).intersection(noloc_df.donor_id.values)\n\ndf.rename(columns={'donation_date': 'activity_date'}, inplace=True)\ndf['activity_year'] = df.activity_date.apply(lambda x: x.year)\ndf['activity_month'] = df.activity_date.apply(lambda x: x.month)\ndf['activity_dow'] = df.activity_date.apply(lambda x: x.dayofweek)\ndf['activity_ym'] = df['activity_date'].map(lambda x: 100*x.year + x.month)\ndf['activity_yq'] = df['activity_date'].map(lambda x: 10*x.year + (x.month-1)//3)\ndf['activity_ymd'] = df['activity_date'].map(lambda x: 10000*x.year + 100*x.month + x.day)\n\n# Drop the zipcode_initial (for privacy reasons)\ndf.drop('zipcode_initial', axis=1, inplace=True)", "All done! Let's save our dataframes for the next stage of processing", "!mkdir -p out/0\ndf.to_pickle('out/0/donations.pkl')\nnoloc_df.to_pickle('out/0/donations_noloc.pkl')\n\ndf[df.donor_id == '_1D50SWTKX'].sort_values(by='activity_date').tail()\n\ndf.columns\n\ndf.shape" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
snucsne/CSNE-Course-Source-Code
CSNE2444-Intro-to-CS-I/jupyter-notebooks/ch09-word-play.ipynb
mit
[ "Chapter 9: Word play\n\nContents\n- Reading word lists\n- Search\n- Looping with indices\n- Debugging\n- Exercises\n\nThis notebook is based on \"Think Python, 2Ed\" by Allen B. Downey <br>\nhttps://greenteapress.com/wp/think-python-2e/\n\nReading word lists\n\nThe built-in function open opens a file (specified as the argument) and returns a file object", "input_file = open( 'data/short-words.txt' )\nprint( input_file )", "The book says fin is an acceptable name, but I opt for a more descriptive name\nThere are a number of methods for reading and writing files, including:\nread( size ) Reads size bytes of data. If size is omitted or negative, the entire file is readn and return. Returns an empty string if the end of the file (EOF) is reached.\nreadline() Reads a single line from the file\nwrite( a_string ) Writes a string to the file\nclose() Closes the file object and frees up any system resources\nYou can also use a for loop to read each line of the file", "for line in input_file:\n word = line.strip()\n print( word )", "The strip method removes whitespace at the beginning and end of a string\n\nSearch\n\nMost of the exercises in this chapter have something in common\nThey all involve searching a string for specific characters", "def has_no_e( word ):\n result = True\n for letter in word:\n if( 'e' == letter ):\n result = False\n return result\n\ninput_file = open( 'data/short-words.txt' )\nfor line in input_file:\n word = line.strip()\n if( has_no_e( word ) ):\n print( 'No `e`: ', word )", "The for loop traverses each letter in the word looking for an e\nIn fact, if you paid very good attention, you will see that the uses_all and uses_only functions in the book are the same\nIn computer science, we frequently encounter problems that are essentially the same as ones we have already solved, but are just worded differently\nWhen you find one (called problem recognition), you can apply a previously developed solution\nHow much work you need to do to apply it is dependent on how general your solution is\nThis is an essential skill for problem-solving in general and not just programming\n\nLooping with indices\n\nThe previous code didn't have a need to use the indices of characters so the simple for ... 
in loop was used\nThere are a number of ways to traverse a string while maintaining a current index\nUse a for loop across the range of the length of the string\nUse recursion\nUse a while loop and maintain the current index\nI recommend the first option as it lets the for loop maintain the index\nRecursion is more complex than necessary for this problem\nA while loop can be used, but isn't as well suited since we know exactly how many times we need to run through the loop\nExamples of all three options are below", "fruit = 'banana'\n\n# For loop\nfor i in range( len( fruit ) ):\n print( 'For: [',i,']=[',fruit[i],']' )\n\n# Recursive function\ndef recurse_through_string( word, i ):\n print( 'Recursive: [',i,']=[',fruit[i],']' )\n if( (i + 1) < len( word ) ):\n recurse_through_string( word, i + 1 )\n\nrecurse_through_string( fruit, 0 )\n \n# While loop\ni = 0\nwhile( i < len( fruit ) ):\n print( 'While: [',i,']=[',fruit[i],']' )\n i = i + 1", "Debugging\n\nTesting is hard\nThe programs discussed in this chapter are relatively easy to test since you can check the results by hand\nThere are ways to make testing easier and more effective\nOne is to ensure you have different variations of a test\nFor example, for the words with an e function, test using words that have an e at the beginning, middle and end. Test long and short words (including the empty string).\nOften you will come across special cases (like the empty string) that can throw your program off if you don't have a robust solution\nAnother option is finding large sets of data (like the words list file) against which you can test your program\nHowever, if your program requires you to manually inspect the tests for correctness, you are always at risk of missing something\nThe best option is automated testing\nFor example, wrapping your tests in conditionals that only print out if the test fails is a good start\nIn later courses, I will discuss libraries that make automated testing easier\nRemember that although it feels like more work to write tests, it saves quite a bit of time in the long run\n\nExercises\n\nWrite a program that reads words.txt and prints only the words with more than 20 characters (not counting whitespace). (Ex. 9.1 on pg. 84)\nGeneralize the has_no_e function to a function called avoids that takes a word and a string of forbidden letters. It should return True if the word does not contain any of the forbidden letters and False if it does. (Ex. 9.3 on pg. 84)\nWrite a function called uses_only that takes a word and a string of letters, and returns True if the word contains only letters in the list. (Ex. 9.4 on pg. 84)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
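The chapter above closes with exercises but no solutions. One possible set of answers, written in the same explicit-loop style as `has_no_e`, is sketched below; the word list path `data/words.txt` is an assumption modelled on the chapter's `data/short-words.txt` convention.

```python
# Sketch of possible solutions, in the same style as has_no_e.
# The path data/words.txt is an assumption based on the chapter's layout.

def avoids(word, forbidden):
    """Return True if word contains none of the forbidden letters."""
    for letter in word:
        if letter in forbidden:
            return False
    return True

def uses_only(word, letters):
    """Return True if word contains only letters from the given string."""
    for letter in word:
        if letter not in letters:
            return False
    return True

# Exercise 1: print words with more than 20 characters
with open('data/words.txt') as input_file:
    for line in input_file:
        word = line.strip()
        if len(word) > 20:
            print(word)
```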
oyamad/game_theory_models
test_logitdyn.ipynb
bsd-3-clause
[ "import numpy as np\nfrom game_tools import NormalFormGame\nfrom logitdyn import LogitDynamics\nfrom __future__ import division\n\ncoo_payoffs = np.array([[4,0], [3,2]])\ng_coo = NormalFormGame(coo_payoffs)", "test_simulate_LLN", "u = coo_payoffs\nbeta = 1.0\nP = np.zeros((2,2))", "I made a probabilistic choice matrix $P$ in a redundant way just in case.", "P[0,0] = np.exp(u[0,0] * beta) / (np.exp(u[0,0] * beta) + np.exp(u[1,0] * beta))\nP[0,0]\n\nP[1,0] = np.exp(u[1,0] * beta) / (np.exp(u[0,0] * beta) + np.exp(u[1,0] * beta))\nP[1,0]\n\nP[0,1] = np.exp(u[0,1] * beta) / (np.exp(u[0,1] * beta) + np.exp(u[1,1] * beta))\nP[0,1]\n\nP[1,1] = np.exp(u[1,1] * beta) / (np.exp(u[0,1] * beta) + np.exp(u[1,1] * beta))\nP[1,1]\n\nprint P", "$P[i,j]$ represents the probability that a player chooses an action $i$ provided that his opponent takes an action $j$.", "Q = np.zeros((4,4))\nQ[0, 0] = P[0, 0]\nQ[0, 1] = 0.5 * P[1, 0]\nQ[0, 2] = 0.5 * P[1, 0]\nQ[0, 3] = 0\nQ[1, 0] = 0.5 * P[0, 0]\nQ[1, 1] = 0.5 * P[0, 1] + 0.5 * P[1, 0]\nQ[1, 2] = 0\nQ[1, 3] = 0.5 * P[1, 1]\nQ[2, 0] = 0.5 * P[0, 0]\nQ[2, 1] = 0\nQ[2, 2] = 0.5 * P[1, 0] + 0.5 * P[0, 1]\nQ[2, 3] = 0.5 * P[1, 1]\nQ[3, 0] = 0\nQ[3, 1] = 0.5 * P[0, 1]\nQ[3, 2] = 0.5 * P[0, 1]\nQ[3, 3] = P[1, 1]\nprint Q", "$Q$ is the transition probability matrix. The first row and column represent the state $(0,0)$, which means that player 1 takes action 0 and player 2 also takes action 0. The second ones represent $(0,1)$, the third ones represent $(1,0)$, and the last ones represent $(1,1)$.", "from quantecon.mc_tools import MarkovChain\n\nmc = MarkovChain(Q)\n\nmc.stationary_distributions[0]", "I take 0.61029569 as the criterion for the test.", "ld = LogitDynamics(g_coo)\n\n# New one (using replicate)\nn = 1000\nseq = ld.replicate(T=100, num_reps=n)\ncount = 0\nfor i in range(n):\n if all(seq[i, :] == [1, 1]):\n count += 1\nratio = count / n\nratio\n\n# Old one\ncounts = np.zeros(1000)\nfor i in range(1000):\n seq = ld.simulate(ts_length=100)\n count = 0\n for j in range(100):\n if all(seq[j, :] == [1, 1]):\n count += 1\n counts[i] = count\nm = counts.mean() / 100\nm" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
zoofIO/flexx-notebooks
EuroScipy 2015 demo.ipynb
bsd-3-clause
[ "This is the demo that I used during the EuroScipy 2015 talk on Flexx.\nflexx.webruntime\nLaunch a web runtime. Can be a browser or something that looks like a desktop app.", "from flexx.webruntime import launch\nrt = launch('http://flexx.rtfd.org', 'xul', title='Test title')", "flexx.pyscript", "from flexx.pyscript import py2js\n\nprint(py2js('square = lambda x: x**2'))\n\ndef foo(n):\n res = []\n for i in range(n):\n res.append(i**2)\n return res\nprint(py2js(foo))\n\ndef foo(n):\n return [i**2 for i in range(n)]\nprint(py2js(foo))", "flexx.react\nReactive programming uses signals to communicate between different components of an app, and provides easy ways to react to changes in the values of these signals.\nThe API for flexx.react consists of a few decorators to turn functions into signals. One signal is the input signal.", "from flexx import react\n\n@react.input\ndef name(n='john doe'):\n if not isinstance(n, str):\n raise ValueError('Name must be a string')\n return n.capitalize()\n\nname\n\n@react.connect('name')\ndef greet(n):\n print('hello %s' % n)\n\nname(\"almar klein\")", "A signal can have multiple upstream signals.", "@react.connect('first_name', 'last_name')\ndef greet(first, last):\n print('hello %s %s!' % (first, last))", "Dynamism provides great flexibility", "class Person(react.HasSignals):\n \n @react.input\n def father(f):\n assert isinstance(f, Person)\n return f\n\n @react.connect('father.last_name')\n def last_name(s):\n return s\n \n @react.connect('children.*.name')\n def child_names(*names):\n return ', '.join(name)", "flexx.app", "from flexx import app, react\napp.init_notebook()\n\nclass Greeter(app.Model):\n \n @react.input\n def name(s):\n return str(s)\n \n class JS:\n \n @react.connect('name')\n def _greet(name):\n alert('Hello %s!' % name)\n\ngreeter = Greeter()\n\ngreeter.name('John')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
satishgoda/learning
python/libs/rxpy/GettingStarted.ipynb
mit
[ "Getting Started with RxPY\nReactiveX, or Rx for short, is an API for programming with observable event streams. RxPY is a port of ReactiveX to Python. Learning Rx with Python is particularly interesting since Python removes much of the clutter that comes with statically typed languages. RxPY works with both Python 2 and Python 3 but all examples in this tutorial uses Python 3.4.\nRx is about processing streams of events. With Rx you:\n\nTell what you want to process (Observable)\nHow you want to process it (A composition of operators)\nWhat you want to do with the result (Observer)\n\nIt's important to understand that with Rx you describe what you want to do with events if and when they arrive. It's all a declarative composition of operators that will do some processing the events when they arrive. If nothing happens, then nothing is processed.\nThus the pattern is that you subscribe to an Observable using an Observer:\npython\nsubscription = Observable.subscribe(observer)\nNOTE: Observables are not active in themselves. They need to be subscribed to make something happen. Simply having an Observable lying around doesn't make anything happen.\nInstall\nUse pip to install RxPY:", "%%bash\npip install rx", "Importing the Rx module", "import rx\nfrom rx import Observable, Observer", "Generating a sequence\nThere are many ways to generate a sequence of events. The easiest way to get started is to use the from_iterable() operator that is also called just from_. Other operators you may use to generate a sequence such as just, generate, create and range.", "class MyObserver(Observer):\n def on_next(self, x):\n print(\"Got: %s\" % x)\n \n def on_error(self, e):\n print(\"Got error: %s\" % e)\n \n def on_completed(self):\n print(\"Sequence completed\")\n\nxs = Observable.from_iterable(range(10))\nd = xs.subscribe(MyObserver())\n\nxs = Observable.from_(range(10))\nd = xs.subscribe(print)", "NOTE: The subscribe method takes an observer, or one to three callbacks for handing on_next(), on_error(), and on_completed(). This is why we can use print directly as the observer in the example above, since it becomes the on_next() handler for an anonymous observer. \nFiltering a sequence", "xs = Observable.from_(range(10))\nd = xs.filter(\n lambda x: x % 2\n ).subscribe(print)", "Transforming a sequence", "xs = Observable.from_(range(10))\nd = xs.map(\n lambda x: x * 2\n ).subscribe(print)", "NOTE: You can also take an index as the second parameter to the mapper function:", "xs = Observable.from_(range(10, 20, 2))\nd = xs.map(\n lambda x, i: \"%s: %s\" % (i, x * 2)\n ).subscribe(print)", "Merge\nMerging two observable sequences into a single observable sequence using the merge operator:", "xs = Observable.range(1, 5)\nys = Observable.from_(\"abcde\")\nzs = xs.merge(ys).subscribe(print)", "The Spacetime of Rx\nIn the examples above all the events happen at the same moment in time. The events are only separated by ordering. This confuses many newcomers to Rx since the result of the merge operation above may have several valid results such as:\na1b2c3d4e5\n1a2b3c4d5e\nab12cd34e5\nabcde12345\n\nThe only guarantee you have is that 1 will be before 2 in xs, but 1 in xs can be before or after a in ys. It's up the the sort stability of the scheduler to decide which event should go first. For real time data streams this will not be a problem since the events will be separated by actual time. 
To make sure you get the results you \"expect\", it's always a good idea to add some time between the events when playing with Rx.\nMarbles and Marble Diagrams\nAs we saw in the previous section it's nice to add some time when playing with Rx and RxPY. A great way to explore RxPY is to use the marbles test module that enables us to play with marble diagrams. The marbles module adds two new extension methods to Observable. The methods are from_marbles() and to_marbles().\nExamples:\n1. res = rx.Observable.from_marbles(\"1-2-3-|\")\n2. res = rx.Observable.from_marbles(\"1-2-3-x\", rx.Scheduler.timeout)\nThe marble string consists of some special characters:\n- = Timespan of 100 ms\n x = on_error()\n | = on_completed()\nAll other characters are treated as an on_next() event at the given moment they are found on the string. If you need to represent multi character values, then you can group then with brackets such as \"1-(42)-3\". \nLets try it out:", "from rx.testing import marbles\n\nxs = Observable.from_marbles(\"a-b-c-|\")\nxs.to_blocking().to_marbles()", "It's now easy to also add errors into the even stream by inserting x into the marble string:", "xs = Observable.from_marbles(\"1-2-3-x-5\")\nys = Observable.from_marbles(\"1-2-3-4-5\")\nxs.merge(ys).to_blocking().to_marbles()", "Subjects and Streams\nA simple way to create an observable stream is to use a subject. It's probably called a subject after the Subject-Observer pattern described in the Design Patterns book by the gang of four (GOF).\nAnyway, a Subject is both an Observable and an Observer, so you can both subscribe to it and on_next it with events. This makes it an obvious candidate if need to publish values into an observable stream for processing:", "from rx.subjects import Subject\n\nstream = Subject()\nstream.on_next(41)\n\nd = stream.subscribe(lambda x: print(\"Got: %s\" % x))\n\nstream.on_next(42)\n\nd.dispose()\nstream.on_next(43)", "That's all for now" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
probml/pyprobml
notebooks/book1/04/laplace_approx_beta_binom_jax.ipynb
mit
[ "Laplace approximation ( Quadratic approximation)\nIn this notebook we will approximate posterior of beta-bernouli model for coin toss problem using laplace approximation method", "try:\n from probml_utils import latexify, savefig\nexcept:\n %pip install git+https://github.com/probml/probml-utils.git\n from probml_utils import latexify, savefig\n\nimport jax\nimport jax.numpy as jnp\nfrom jax import lax\n\ntry:\n from tensorflow_probability.substrates import jax as tfp\nexcept ModuleNotFoundError:\n %pip install -qqq tensorflow_probability\n from tensorflow_probability.substrates import jax as tfp\n\ntry:\n import optax\nexcept ModuleNotFoundError:\n %pip install -qqq optax\n import optax\n\ntry:\n from rich import print\nexcept ModuleNotFoundError:\n %pip install -qqq rich\n from rich import print\n\ntry:\n from tqdm import trange\nexcept:\n %pip install -qqq tqdm\n from tqdm import trange\n\n\nimport seaborn as sns\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport warnings\n\nwarnings.filterwarnings(\"ignore\")\ndist = tfp.distributions\n\nlatexify(width_scale_factor=2, fig_height=2) # to apply latexify, set LATEXIFY=1 in environment variable\n\n# Use same data as https://github.com/probml/probml-notebooks/blob/main/notebooks/beta_binom_approx_post_pymc.ipynb\nkey = jax.random.PRNGKey(128)\ndataset = np.repeat([0, 1], (10, 1))\nn_samples = len(dataset)\nprint(f\"Dataset: {dataset}\")\nn_heads = dataset.sum()\nn_tails = n_samples - n_heads\n\n# prior distribution ~ Beta\ndef prior_dist():\n return dist.Beta(concentration1=1.0, concentration0=1.0)\n\n\n# likelihood distribution ~ Bernoulli\ndef likelihood_dist(theta):\n return dist.Bernoulli(probs=theta)\n\n# closed form of beta posterior\na = prior_dist().concentration1\nb = prior_dist().concentration0\n\nexact_posterior = dist.Beta(concentration1=a + n_heads, concentration0=b + n_tails)\n\ntheta_range = jnp.linspace(0.01, 0.99, 100)\n\nax = plt.gca()\nax2 = ax.twinx()\n(plt2,) = ax2.plot(theta_range, exact_posterior.prob(theta_range), \"g--\", label=\"True Posterior\")\n(plt3,) = ax2.plot(theta_range, prior_dist().prob(theta_range), label=\"Prior\")\n\nlikelihood = jax.vmap(lambda x: jnp.prod(likelihood_dist(x).prob(dataset)))(theta_range)\n(plt1,) = ax.plot(theta_range, likelihood, \"r-.\", label=\"Likelihood\")\n\nax.set_xlabel(\"theta\")\nax.set_ylabel(\"Likelihood\")\nax2.set_ylabel(\"Prior & Posterior\")\nax2.legend(handles=[plt1, plt2, plt3], bbox_to_anchor=(1.6, 1));", "Laplace approximation from scratch in JAX\nAs mentioned in book2 section 7.4.3, Using laplace approximation, any distribution can be approximated as normal distribution having mean $\\hat{\\theta}$ and standard deviation as $H^{-1}$\n\\begin{align}\n H = \\triangledown ^2_{\\theta = \\hat{\\theta}} \\log p(\\theta|\\mathcal{D}) \\\n p(\\theta|\\mathcal{D}) = \\frac{1}{Z}p(\\theta|\\mathcal{D}) = \\mathcal{N}(\\theta |\\hat{\\theta}, H^{-1})\n\\end{align}\nWhere H is Hessian and $\\hat{\\theta}$ is the mode\nFind $\\hat{\\theta}$\nNo we find $\\hat{\\theta}$ ($\\theta$_map) by minimizing negative log prior-likelhihood.", "def neg_log_prior_likelihood_fn(params, dataset):\n theta = params[\"theta\"]\n likelihood_log_prob = likelihood_dist(theta).log_prob(dataset).sum() # log probability of likelihood\n prior_log_prob = prior_dist().log_prob(theta) # log probability of prior\n return -(likelihood_log_prob + prior_log_prob) # negative log_prior_liklihood\n\nloss_and_grad_fn = jax.value_and_grad(neg_log_prior_likelihood_fn)\nparams = {\"theta\": 
0.5}\nneg_joint_log_prob, grads = loss_and_grad_fn(params, dataset)\n\noptimizer = optax.adam(0.01)\nopt_state = optimizer.init(params)\n\n@jax.jit\ndef train_step(carry, data_output):\n\n params = carry[\"params\"]\n neg_joint_log_prob, grads = loss_and_grad_fn(params, dataset)\n\n opt_state = carry[\"opt_state\"]\n updates, opt_state = optimizer.update(grads, opt_state)\n params = optax.apply_updates(params, updates)\n\n carry = {\"params\": params, \"opt_state\": opt_state}\n data_output = {\"params\": params, \"loss\": neg_joint_log_prob}\n\n return carry, data_output\n\ncarry = {\"params\": params, \"opt_state\": opt_state}\ndata_output = {\"params\": params, \"loss\": neg_joint_log_prob}\n\nn = 100\niterator = jnp.ones(n)\nlast_carry, output = jax.lax.scan(train_step, carry, iterator)\n\nloss = output[\"loss\"]\nplt.plot(loss, label=\"loss\")\nplt.legend();\n\noptimized_params = last_carry[\"params\"]\ntheta_map = optimized_params[\"theta\"]\nprint(f\"theta_map = {theta_map}\")", "loc and scale of approximated normal posterior", "loc = theta_map # loc of approximate posterior\nprint(f\"loc = {loc}\")\n\n# scale of approximate posterior\nscale = 1 / jnp.sqrt(jax.hessian(neg_log_prior_likelihood_fn)(optimized_params, dataset)[\"theta\"][\"theta\"])\nprint(f\"scale = {scale}\")", "True posterior and laplace approximated posterior", "plt.figure()\ny = jnp.exp(dist.Normal(loc, scale).log_prob(theta_range))\nplt.title(\"Quadratic approximation\")\nplt.plot(theta_range, y, label=\"laplace approximation\", color=\"tab:red\")\nplt.plot(theta_range, exact_posterior.prob(theta_range), label=\"true posterior\", color=\"tab:green\", linestyle=\"--\")\nplt.xlabel(\"$\\\\theta$\")\nplt.ylabel(\"$p(\\\\theta)$\")\nsns.despine()\nplt.legend()\nsavefig(\"bb_laplace\") # set FIG_DIR = \"path/to/figure\" enviornment variable to save figure", "Pymc", "try:\n import pymc3 as pm\nexcept ModuleNotFoundError:\n %pip install -qq pymc3\n import pymc3 as pm\ntry:\n import scipy.stats as stats\nexcept ModuleNotFoundError:\n %pip install -qq scipy\n import scipy.stats as stats\n\nimport scipy.special as sp\n\ntry:\n import arviz as az\nexcept ModuleNotFoundError:\n %pip install -qq arviz\n import arviz as az\n\nimport math\n\n# Laplace\nwith pm.Model() as normal_aproximation:\n theta = pm.Beta(\"theta\", 1.0, 1.0)\n y = pm.Binomial(\"y\", n=1, p=theta, observed=dataset) # Bernoulli\n mean_q = pm.find_MAP()\n std_q = ((1 / pm.find_hessian(mean_q, vars=[theta])) ** 0.5)[0]\n loc = mean_q[\"theta\"]\n\n# plt.savefig('bb_laplace.pdf');\n\nx = theta_range\n\nplt.figure()\nplt.plot(x, stats.norm.pdf(x, loc, std_q), \"--\", label=\"Laplace\")\npost_exact = stats.beta.pdf(x, n_heads + 1, n_tails + 1)\nplt.plot(x, post_exact, label=\"exact\")\nplt.title(\"Quadratic approximation\")\nplt.xlabel(\"θ\", fontsize=14)\nplt.yticks([])\nplt.legend()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
padipadou/CADL
session-3/session-3.ipynb
apache-2.0
[ "Session 3: Unsupervised and Supervised Learning\n<p class=\"lead\">\nAssignment: Build Unsupervised and Supervised Networks\n</p>\n\n<p class=\"lead\">\nParag K. Mital<br />\n<a href=\"https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info\">Creative Applications of Deep Learning w/ Tensorflow</a><br />\n<a href=\"https://www.kadenze.com/partners/kadenze-academy\">Kadenze Academy</a><br />\n<a href=\"https://twitter.com/hashtag/CADL\">#CADL</a>\n</p>\n\n<a name=\"learning-goals\"></a>\nLearning Goals\n\nLearn how to build an autoencoder\nLearn how to explore latent/hidden representations of an autoencoder.\nLearn how to build a classification network using softmax and onehot encoding\n\nOutline\n<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->\n\n\nAssignment Synopsis\nPart One - Autoencoders\nInstructions\nCode\nVisualize the Embedding\nReorganize to Grid\n2D Latent Manifold\n\n\nPart Two - General Autoencoder Framework\nInstructions\n\n\nPart Three - Deep Audio Classification Network\nInstructions\nPreparing the Data\nCreating the Network\n\n\nAssignment Submission\nComing Up\n\n<!-- /MarkdownTOC -->\n\nThis next section will just make sure you have the right version of python and the libraries that we'll be using. Don't change the code here but make sure you \"run\" it (use \"shift+enter\")!", "# First check the Python version\nimport sys\nif sys.version_info < (3,4):\n print('You are running an older version of Python!\\n\\n' \\\n 'You should consider updating to Python 3.4.0 or ' \\\n 'higher as the libraries built for this course ' \\\n 'have only been tested in Python 3.4 and higher.\\n')\n print('Try installing the Python 3.5 version of anaconda '\n 'and then restart `jupyter notebook`:\\n' \\\n 'https://www.continuum.io/downloads\\n\\n')\n\n# Now get necessary libraries\ntry:\n import os\n import numpy as np\n import matplotlib.pyplot as plt\n from skimage.transform import resize\n from skimage import data\n from scipy.misc import imresize\n import IPython.display as ipyd\nexcept ImportError:\n print('You are missing some packages! ' \\\n 'We will try installing them before continuing!')\n !pip install \"numpy>=1.11.0\" \"matplotlib>=1.5.1\" \"scikit-image>=0.11.3\" \"scikit-learn>=0.17\" \"scipy>=0.17.0\"\n import os\n import numpy as np\n import matplotlib.pyplot as plt\n from skimage.transform import resize\n from skimage import data\n from scipy.misc import imresize\n import IPython.display as ipyd\n print('Done!')\n\n# Import Tensorflow\ntry:\n import tensorflow as tf\nexcept ImportError:\n print(\"You do not have tensorflow installed!\")\n print(\"Follow the instructions on the following link\")\n print(\"to install tensorflow before continuing:\")\n print(\"\")\n print(\"https://github.com/pkmital/CADL#installation-preliminaries\")\n\n# This cell includes the provided libraries from the zip file\n# and a library for displaying images from ipython, which\n# we will use to display the gif\ntry:\n from libs import utils, gif, datasets, dataset_utils, vae, dft\nexcept ImportError:\n print(\"Make sure you have started notebook in the same directory\" +\n \" as the provided zip file which includes the 'libs' folder\" +\n \" and the file 'utils.py' inside of it. 
You will NOT be able\"\n \" to complete this assignment unless you restart jupyter\"\n \" notebook inside the directory created by extracting\"\n \" the zip file or cloning the github repo.\")\n\n# We'll tell matplotlib to inline any drawn figures like so:\n%matplotlib inline\nplt.style.use('ggplot')\n\n# Bit of formatting because I don't like the default inline code style:\nfrom IPython.core.display import HTML\nHTML(\"\"\"<style> .rendered_html code { \n padding: 2px 4px;\n color: #c7254e;\n background-color: #f9f2f4;\n border-radius: 4px;\n} </style>\"\"\")", "<a name=\"assignment-synopsis\"></a>\nAssignment Synopsis\nIn the last session we created our first neural network. We saw that in order to create a neural network, we needed to define a cost function which would allow gradient descent to optimize all the parameters in our network. We also saw how neural networks become much more expressive by introducing series of linearities followed by non-linearities, or activation functions. We then explored a fun application of neural networks using regression to learn to paint color values given x, y positions. This allowed us to build up a sort of painterly like version of an image.\nIn this session, we'll see how to construct a few more types of neural networks. First, we'll explore a generative network called autoencoders. This network can be extended in a variety of ways to include convolution, denoising, or a variational layer. In Part Two, you'll then use a general autoencoder framework to encode your own list of images. In Part three, we'll then explore a discriminative network used for classification, and see how this can be used for audio classification of music or speech.\nOne main difference between these two networks are the data that we'll use to train them. In the first case, we will only work with \"unlabeled\" data and perform unsupervised learning. An example would be a collection of images, just like the one you created for assignment 1. Contrast this with \"labeled\" data which allows us to make use of supervised learning. For instance, we're given both images, and some other data about those images such as some text describing what object is in the image. This allows us to optimize a network where we model a distribution over the images given that it should be labeled as something. This is often a much simpler distribution to train, but with the expense of it being much harder to collect.\nOne of the major directions of future research will be in how to better make use of unlabeled data and unsupervised learning methods.\n<a name=\"part-one---autoencoders\"></a>\nPart One - Autoencoders\n<a name=\"instructions\"></a>\nInstructions\nWork with a dataset of images and train an autoencoder. You can work with the same dataset from assignment 1, or try a larger dataset. But be careful with the image sizes, and make sure to keep it relatively small (e.g. < 100 x 100 px). \nRecall from the lecture that autoencoders are great at \"compressing\" information. The network's construction and cost function are just like what we've done in the last session. The network is composed of a series of matrix multiplications and nonlinearities. The only difference is the output of the network has exactly the same shape as what is input. 
This allows us to train the network by saying that the output of the network needs to be just like the input to it, so that it tries to \"compress\" all the information in that video.\nAutoencoders have some great potential for creative applications, as they allow us to compress a dataset of information and even generate new data from that encoding. We'll see exactly how to do this with a basic autoencoder, and then you'll be asked to explore some of the extensions to produce your own encodings.\n<a name=\"code\"></a>\nCode\nWe'll now go through the process of building an autoencoder just like in the lecture. First, let's load some data. You can use the first 100 images of the Celeb Net, your own dataset, or anything else approximately under 1,000 images. Make sure you resize the images so that they are <= 100x100 pixels, otherwise the training will be very slow, and the montages we create will be too large.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "# See how this works w/ Celeb Images or try your own dataset instead:\nimgs = ...\n\n# Then convert the list of images to a 4d array (e.g. use np.array to convert a list to a 4d array):\nXs = ...\n\nprint(Xs.shape)\nassert(Xs.ndim == 4 and Xs.shape[1] <= 100 and Xs.shape[2] <= 100)", "We'll now make use of something I've written to help us store this data. It provides some interfaces for generating \"batches\" of data, as well as splitting the data into training, validation, and testing sets. To use it, we pass in the data and optionally its labels. If we don't have labels, we just pass in the data. In the second half of this notebook, we'll explore using a dataset's labels as well.", "ds = datasets.Dataset(Xs)\n# ds = datasets.CIFAR10(flatten=False)", "It allows us to easily find the mean:", "mean_img = ds.mean().astype(np.uint8)\nplt.imshow(mean_img)\n# If your image comes out entirely black, try w/o the `astype(np.uint8)`\n# that means your images are read in as 0-255, rather than 0-1 and \n# this simply depends on the version of matplotlib you are using.", "Or the deviation:", "std_img = ds.std()\nplt.imshow(std_img)\nprint(std_img.shape)", "Recall we can calculate the mean of the standard deviation across each color channel:", "std_img = np.mean(std_img, axis=2)\nplt.imshow(std_img)", "All the input data we gave as input to our Datasets object, previously stored in Xs is now stored in a variable as part of our ds Datasets object, X:", "plt.imshow(ds.X[0])\nprint(ds.X.shape)", "It takes a parameter, split at the time of creation, which allows us to create train/valid/test sets. By default, this is set to [1.0, 0.0, 0.0], which means to take all the data in the train set, and nothing in the validation and testing sets. We can access \"batch generators\" of each of these sets by saying: ds.train.next_batch. A generator is a really powerful way of handling iteration in Python. If you are unfamiliar with the idea of generators, I recommend reading up a little bit on it, e.g. here: http://intermediatepythonista.com/python-generators - think of it as a for loop, but as a function. It returns one iteration of the loop each time you call it.\nThis generator will automatically handle the randomization of the dataset. Let's try looping over the dataset using the batch generator:", "for (X, y) in ds.train.next_batch(batch_size=10):\n print(X.shape)", "This returns X and y as a tuple. Since we're not using labels, we'll just ignore this. 
The next_batch method takes a parameter, batch_size, which we'll set appropriately to our batch size. Notice it runs for exactly 10 iterations to iterate over our 100 examples, then the loop exits. The order in which it iterates over the 100 examples is randomized each time you iterate.\nWrite two functions to preprocess (normalize) any given image, and to unprocess it, i.e. unnormalize it by removing the normalization. The preprocess function should perform exactly the task you learned to do in assignment 1: subtract the mean, then divide by the standard deviation. The deprocess function should take the preprocessed image and undo the preprocessing steps. Recall that the ds object contains the mean and std functions for access the mean and standarad deviation. We'll be using the preprocess and deprocess functions on the input and outputs of the network. Note, we could use Tensorflow to do this instead of numpy, but for sake of clarity, I'm keeping this separate from the Tensorflow graph.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "# Write a function to preprocess/normalize an image, given its dataset object\n# (which stores the mean and standard deviation!)\ndef preprocess(img, ds):\n norm_img = (img - ...) / ...\n return norm_img\n\n# Write a function to undo the normalization of an image, given its dataset object\n# (which stores the mean and standard deviation!)\ndef deprocess(norm_img, ds):\n img = norm_img * ... + ...\n return img", "We're going to now work on creating an autoencoder. To start, we'll only use linear connections, like in the last assignment. This means, we need a 2-dimensional input: Batch Size x Number of Features. We currently have a 4-dimensional input: Batch Size x Height x Width x Channels. We'll have to calculate the number of features we have to help construct the Tensorflow Graph for our autoencoder neural network. Then, when we are ready to train the network, we'll reshape our 4-dimensional dataset into a 2-dimensional one when feeding the input of the network. Optionally, we could create a tf.reshape as the first operation of the network, so that we can still pass in our 4-dimensional array, and the Tensorflow graph would reshape it for us. We'll try the former method, by reshaping manually, and then you can explore the latter method, of handling 4-dimensional inputs on your own.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "# Calculate the number of features in your image.\n# This is the total number of pixels, or (height x width x channels).\nn_features = ...\nprint(n_features)", "Let's create a list of how many neurons we want in each layer. This should be for just one half of the network, the encoder only. It should start large, then get smaller and smaller. We're also going to try an encode our dataset to an inner layer of just 2 values. So from our number of features, we'll go all the way down to expressing that image by just 2 values. Try a small network to begin with, then explore deeper networks:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "encoder_dimensions = [128, 2]", "Now create a placeholder just like in the last session in the tensorflow graph that will be able to get any number (None) of n_features inputs.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "X = tf.placeholder(...\n \nassert(X.get_shape().as_list() == [None, n_features])", "Now complete the function encode below. 
This takes as input our input placeholder, X, our list of dimensions, and an activation function, e.g. tf.nn.relu or tf.nn.tanh, to apply to each layer's output, and creates a series of fully connected layers. This works just like in the last session! We multiply our input, add a bias, then apply a non-linearity. Instead of having 20 neurons in each layer, we're going to use our dimensions list to tell us how many neurons we want in each layer.\nOne important difference is that we're going to also store every weight matrix we create! This is so that we can use the same weight matrices when we go to build our decoder. This is a very powerful concept that creeps up in a few different neural network architectures called weight sharing. Weight sharing isn't necessary to do of course, but can speed up training and offer a different set of features depending on your dataset. Explore trying both. We'll also see how another form of weight sharing works in convolutional networks.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "def encode(X, dimensions, activation=tf.nn.tanh):\n # We're going to keep every matrix we create so let's create a list to hold them all\n Ws = []\n\n # We'll create a for loop to create each layer:\n for layer_i, n_output in enumerate(dimensions):\n\n # TODO: just like in the last session,\n # we'll use a variable scope to help encapsulate our variables\n # This will simply prefix all the variables made in this scope\n # with the name we give it. Make sure it is a unique name\n # for each layer, e.g., 'encoder/layer1', 'encoder/layer2', or\n # 'encoder/1', 'encoder/2',... \n with tf.variable_scope(...)\n\n # TODO: Create a weight matrix which will increasingly reduce\n # down the amount of information in the input by performing\n # a matrix multiplication. You can use the utils.linear function.\n h, W = ...\n \n # TODO: Apply an activation function (unless you used the parameter\n # for activation function in the utils.linear call)\n\n # Finally we'll store the weight matrix.\n # We need to keep track of all\n # the weight matrices we've used in our encoder\n # so that we can build the decoder using the\n # same weight matrices.\n Ws.append(W)\n \n # Replace X with the current layer's output, so we can\n # use it in the next layer.\n X = h\n \n z = X\n return Ws, z", "We now have a function for encoding an input X. Take note of which activation function you use as this will be important for the behavior of the latent encoding, z, later on.", "# Then call the function\nWs, z = encode(X, encoder_dimensions)\n\n# And just some checks to make sure you've done it right.\nassert(z.get_shape().as_list() == [None, 2])\nassert(len(Ws) == len(encoder_dimensions))", "Let's take a look at the graph:", "[op.name for op in tf.get_default_graph().get_operations()]", "So we've created a few layers, encoding our input X all the way down to 2 values in the tensor z. We do this by multiplying our input X by a set of matrices shaped as:", "[W_i.get_shape().as_list() for W_i in Ws]", "Resulting in a layer which is shaped as:", "z.get_shape().as_list()", "Building the Decoder\nHere is a helpful animation on what the matrix \"transpose\" operation does:\n\nBasically what is happening is rows becomes columns, and vice-versa. We're going to use our existing weight matrices but transpose them so that we can go in the opposite direction. 
In order to build our decoder, we'll have to do the opposite of what we've just done, multiplying z by the transpose of our weight matrices, to get back to a reconstructed version of X. First, we'll reverse the order of our weight matrics, and then append to the list of dimensions the final output layer's shape to match our input:", "# We'll first reverse the order of our weight matrices\ndecoder_Ws = Ws[::-1]\n\n# then reverse the order of our dimensions\n# appending the last layers number of inputs.\ndecoder_dimensions = encoder_dimensions[::-1][1:] + [n_features]\nprint(decoder_dimensions)\n\nassert(decoder_dimensions[-1] == n_features)", "Now we'll build the decoder. I've shown you how to do this. Read through the code to fully understand what it is doing:", "def decode(z, dimensions, Ws, activation=tf.nn.tanh):\n current_input = z\n for layer_i, n_output in enumerate(dimensions):\n # we'll use a variable scope again to help encapsulate our variables\n # This will simply prefix all the variables made in this scope\n # with the name we give it.\n with tf.variable_scope(\"decoder/layer/{}\".format(layer_i)):\n\n # Now we'll grab the weight matrix we created before and transpose it\n # So a 3072 x 784 matrix would become 784 x 3072\n # or a 256 x 64 matrix, would become 64 x 256\n W = tf.transpose(Ws[layer_i])\n\n # Now we'll multiply our input by our transposed W matrix\n h = tf.matmul(current_input, W)\n\n # And then use a relu activation function on its output\n current_input = activation(h)\n\n # We'll also replace n_input with the current n_output, so that on the\n # next iteration, our new number inputs will be correct.\n n_input = n_output\n Y = current_input\n return Y\n\nY = decode(z, decoder_dimensions, decoder_Ws)", "Let's take a look at the new operations we've just added. They will all be prefixed by \"decoder\" so we can use list comprehension to help us with this:", "[op.name for op in tf.get_default_graph().get_operations()\n if op.name.startswith('decoder')]", "And let's take a look at the output of the autoencoder:", "Y.get_shape().as_list()", "Great! So we should have a synthesized version of our input placeholder, X, inside of Y. This Y is the result of many matrix multiplications, first a series of multiplications in our encoder all the way down to 2 dimensions, and then back to the original dimensions through our decoder. Let's now create a pixel-to-pixel measure of error. This should measure the difference in our synthesized output, Y, and our input, X. You can use the $l_1$ or $l_2$ norm, just like in assignment 2. If you don't remember, go back to homework 2 where we calculated the cost function and try the same idea here.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "# Calculate some measure of loss, e.g. the pixel to pixel absolute difference or squared difference\nloss = ...\n\n# Now sum over every pixel and then calculate the mean over the batch dimension (just like session 2!)\n# hint, use tf.reduce_mean and tf.reduce_sum\ncost = ...", "Now for the standard training code. We'll pass our cost to an optimizer, and then use mini batch gradient descent to optimize our network's parameters. We just have to be careful to make sure we're preprocessing our input and feed it in the right shape, a 2-dimensional matrix of [batch_size, n_features] in dimensions.\n<h3><font color='red'>TODO! 
COMPLETE THIS SECTION!</font></h3>", "learning_rate = ...\noptimizer = tf.train.AdamOptimizer(...).minimize(...)", "Below is the training code for our autoencoder. Please go through each line of code to make sure you understand what is happening, and fill in the missing pieces. This will take awhile. On my machine, it takes about 15 minutes. If you're impatient, you can \"Interrupt\" the kernel by going to the Kernel menu above, and continue with the notebook. Though, the longer you leave this to train, the better the result will be.\nWhat I really want you to notice is what the network learns to encode first, based on what it is able to reconstruct. It won't able to reconstruct everything. At first, it will just be the mean image. Then, other major changes in the dataset. For the first 100 images of celeb net, this seems to be the background: white, blue, black backgrounds. From this basic interpretation, you can reason that the autoencoder has learned a representation of the backgrounds, and is able to encode that knowledge of the background in its inner most layer of just two values. It then goes on to represent the major variations in skin tone and hair. Then perhaps some facial features such as lips. So the features it is able to encode tend to be the major things at first, then the smaller things.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "# (TODO) Create a tensorflow session and initialize all of our weights:\nsess = ...\nsess.run(tf.global_variables_initializer())", "Note that if you run into \"InternalError\" or \"ResourceExhaustedError\", it is likely that you have run out of memory! Try a smaller network! For instance, restart the notebook's kernel, and then go back to defining encoder_dimensions = [256, 2] instead. 
If you run into memory problems below, you can also try changing the batch_size to 50.", "# Some parameters for training\nbatch_size = 100\nn_epochs = 31\nstep = 10\n\n# We'll try to reconstruct the same first 100 images and show how\n# The network does over the course of training.\nexamples = ds.X[:100]\n\n# We have to preprocess the images before feeding them to the network.\n# I'll do this once here, so we don't have to do it every iteration.\ntest_examples = preprocess(examples, ds).reshape(-1, n_features)\n\n# If we want to just visualize them, we can create a montage.\ntest_images = utils.montage(examples).astype(np.uint8)\n\n# Store images so we can make a gif\ngifs = []\n\n# Now for our training:\nfor epoch_i in range(n_epochs):\n \n # Keep track of the cost\n this_cost = 0\n \n # Iterate over the entire dataset in batches\n for batch_X, _ in ds.train.next_batch(batch_size=batch_size):\n \n # (TODO) Preprocess and reshape our current batch, batch_X:\n this_batch = preprocess(..., ds).reshape(-1, n_features)\n \n # Compute the cost, and run the optimizer.\n this_cost += sess.run([cost, optimizer], feed_dict={X: this_batch})[0]\n \n # Average cost of this epoch\n avg_cost = this_cost / ds.X.shape[0] / batch_size\n print(epoch_i, avg_cost)\n \n # Let's also try to see how the network currently reconstructs the input.\n # We'll draw the reconstruction every `step` iterations.\n if epoch_i % step == 0:\n \n # (TODO) Ask for the output of the network, Y, and give it our test examples\n recon = sess.run(...\n \n # Resize the 2d to the 4d representation:\n rsz = recon.reshape(examples.shape)\n\n # We have to unprocess the image now, removing the normalization\n unnorm_img = deprocess(rsz, ds)\n \n # Clip to avoid saturation\n # TODO: Make sure this image is the correct range, e.g.\n # for float32 0-1, you should clip between 0 and 1\n # for uint8 0-255, you should clip between 0 and 255!\n clipped = np.clip(unnorm_img, 0, 255)\n\n # And we can create a montage of the reconstruction\n recon = utils.montage(clipped)\n \n # Store for gif\n gifs.append(recon)\n\n fig, axs = plt.subplots(1, 2, figsize=(10, 10))\n axs[0].imshow(test_images)\n axs[0].set_title('Original')\n axs[1].imshow(recon)\n axs[1].set_title('Synthesis')\n fig.canvas.draw()\n plt.show()", "Let's take a look a the final reconstruction:", "fig, axs = plt.subplots(1, 2, figsize=(10, 10))\naxs[0].imshow(test_images)\naxs[0].set_title('Original')\naxs[1].imshow(recon)\naxs[1].set_title('Synthesis')\nfig.canvas.draw()\nplt.show()\nplt.imsave(arr=test_images, fname='test.png')\nplt.imsave(arr=recon, fname='recon.png')", "<a name=\"visualize-the-embedding\"></a>\nVisualize the Embedding\nLet's now try visualizing our dataset's inner most layer's activations. Since these are already 2-dimensional, we can use the values of this layer to position any input image in a 2-dimensional space. We hope to find similar looking images closer together.\nWe'll first ask for the inner most layer's activations when given our example images. 
This will run our images through the network, half way, stopping at the end of the encoder part of the network.", "zs = sess.run(z, feed_dict={X:test_examples})", "Recall that this layer has 2 neurons:", "zs.shape", "Let's see what the activations look like for our 100 images as a scatter plot.", "plt.scatter(zs[:, 0], zs[:, 1])", "If you view this plot over time, and let the process train longer, you will see something similar to the visualization here on the right: https://vimeo.com/155061675 - the manifold is able to express more and more possible ideas, or put another way, it is able to encode more data. As it grows more expressive, with more data, and longer training, or deeper networks, it will fill in more of the space, and have different modes expressing different clusters of the data. With just 100 examples of our dataset, this is very small to try to model with such a deep network. In any case, the techniques we've learned up to now apply in exactly the same way, even if we had 1k, 100k, or even many millions of images.\nLet's try to see how this minimal example, with just 100 images, and just 100 epochs looks when we use this embedding to sort our dataset, just like we tried to do in the 1st assignment, but now with our autoencoders embedding.\n<a name=\"reorganize-to-grid\"></a>\nReorganize to Grid\nWe'll use these points to try to find an assignment to a grid. This is a well-known problem known as the \"assignment problem\": https://en.wikipedia.org/wiki/Assignment_problem - This is unrelated to the applications we're investigating in this course, but I thought it would be a fun extra to show you how to do. What we're going to do is take our scatter plot above, and find the best way to stretch and scale it so that each point is placed in a grid. We try to do this in a way that keeps nearby points close together when they are reassigned in their grid.", "n_images = 100\nidxs = np.linspace(np.min(zs) * 2.0, np.max(zs) * 2.0,\n int(np.ceil(np.sqrt(n_images))))\nxs, ys = np.meshgrid(idxs, idxs)\ngrid = np.dstack((ys, xs)).reshape(-1, 2)[:n_images,:]\n\nfig, axs = plt.subplots(1,2,figsize=(8,3))\naxs[0].scatter(zs[:, 0], zs[:, 1],\n edgecolors='none', marker='o', s=2)\naxs[0].set_title('Autoencoder Embedding')\naxs[1].scatter(grid[:,0], grid[:,1],\n edgecolors='none', marker='o', s=2)\naxs[1].set_title('Ideal Grid')", "To do this, we can use scipy and an algorithm for solving this assignment problem known as the hungarian algorithm. With a few points, this algorithm runs pretty fast. But be careful if you have many more points, e.g. > 1000, as it is not a very efficient algorithm!", "from scipy.spatial.distance import cdist\ncost = cdist(grid[:, :], zs[:, :], 'sqeuclidean')\nfrom scipy.optimize._hungarian import linear_sum_assignment\nindexes = linear_sum_assignment(cost)", "The result tells us the matching indexes from our autoencoder embedding of 2 dimensions, to our idealized grid:", "indexes\n\nplt.figure(figsize=(5, 5))\nfor i in range(len(zs)):\n plt.plot([zs[indexes[1][i], 0], grid[i, 0]],\n [zs[indexes[1][i], 1], grid[i, 1]], 'r')\nplt.xlim([-3, 3])\nplt.ylim([-3, 3])", "In other words, this algorithm has just found the best arrangement of our previous zs as a grid. 
We can now plot our images using the order of our assignment problem to see what it looks like:", "examples_sorted = []\nfor i in indexes[1]:\n examples_sorted.append(examples[i])\nplt.figure(figsize=(15, 15))\nimg = utils.montage(np.array(examples_sorted)).astype(np.uint8)\nplt.imshow(img,\n interpolation='nearest')\nplt.imsave(arr=img, fname='sorted.png')", "<a name=\"2d-latent-manifold\"></a>\n2D Latent Manifold\nWe'll now explore the inner most layer of the network. Recall we go from the number of image features (the number of pixels), down to 2 values using successive matrix multiplications, back to the number of image features through more matrix multiplications. These inner 2 values are enough to represent our entire dataset (+ some loss, depending on how well we did). Let's explore how the decoder, the second half of the network, operates, from just these two values. We'll bypass the input placeholder, X, and the entire encoder network, and start from Z. Let's first get some data which will sample Z in 2 dimensions from -1 to 1. This range may be different for you depending on what your latent space's range of values are. You can try looking at the activations for your z variable for a set of test images, as we've done before, and look at the range of these values. Or try to guess based on what activation function you may have used on the z variable, if any. \nThen we'll use this range to create a linear interpolation of latent values, and feed these values through the decoder network to have our synthesized images to see what they look like.", "# This is a quick way to do what we could have done as\n# a nested for loop:\nzs = np.meshgrid(np.linspace(-1, 1, 10),\n np.linspace(-1, 1, 10))\n\n# Now we have 100 x 2 values of every possible position\n# in a 2D grid from -1 to 1:\nzs = np.c_[zs[0].ravel(), zs[1].ravel()]", "Now calculate the reconstructed images using our new zs. You'll want to start from the beginning of the decoder! That is the z variable! Then calculate the Y given our synthetic values for z stored in zs. \n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "recon = sess.run(Y, feed_dict={...})\n\n# reshape the result to an image:\nrsz = recon.reshape(examples.shape)\n\n# Deprocess the result, unnormalizing it\nunnorm_img = deprocess(rsz, ds)\n\n# clip to avoid saturation\nclipped = np.clip(unnorm_img, 0, 255)\n\n# Create a montage\nimg_i = utils.montage(clipped).astype(np.uint8)", "And now we can plot the reconstructed montage representing our latent space:", "plt.figure(figsize=(15, 15))\nplt.imshow(img_i)\nplt.imsave(arr=img_i, fname='manifold.png')", "<a name=\"part-two---general-autoencoder-framework\"></a>\nPart Two - General Autoencoder Framework\nThere are a number of extensions we can explore w/ an autoencoder. I've provided a module under the libs folder, vae.py, which you will need to explore for Part Two. It has a function, VAE, to create an autoencoder, optionally with Convolution, Denoising, and/or Variational Layers. Please read through the documentation and try to understand the different parameters.", "help(vae.VAE)", "Included in the vae.py module is the train_vae function. This will take a list of file paths, and train an autoencoder with the provided options. This will spit out a bunch of images of the reconstruction and latent manifold created by the encoder/variational encoder. Feel free to read through the code, as it is documented.", "help(vae.train_vae)", "I've also included three examples of how to use the VAE(...) 
and train_vae(...) functions. First look at the one using MNIST. Then look at the other two: one using the Celeb Dataset; and lastly one which will download Sita Sings the Blues, rip the frames, and train a Variational Autoencoder on it. This last one requires ffmpeg be installed (e.g. for OSX users, brew install ffmpeg, Linux users, sudo apt-get ffmpeg-dev, or else: https://ffmpeg.org/download.html). The Celeb and Sita Sings the Blues training require us to use an image pipeline, which I've mentioned briefly during the lecture. This does many things for us: it loads data from disk in batches, decodes the data as an image, resizes/crops the image, and uses a multithreaded graph to handle it all. It is very efficient and is the way to go when handling large image datasets. \nThe MNIST training does not use this. Instead, the entire dataset is loaded into the CPU memory, and then fed in minibatches to the graph using Python/Numpy. This is far less efficient, but will not be an issue for such a small dataset, e.g. 70k examples of 28x28 pixels = ~1.6 MB of data, easily fits into memory (in fact, it would really be better to use a Tensorflow variable with this entire dataset defined). When you consider the Celeb Net, you have 200k examples of 218x178x3 pixels = ~700 MB of data. That's just for the dataset. When you factor in everything required for the network and its weights, then you are pushing it. Basically this image pipeline will handle loading the data from disk, rather than storing it in memory.\n<a name=\"instructions-1\"></a>\nInstructions\nYou'll now try to train your own autoencoder using this framework. You'll need to get a directory full of 'jpg' files. You'll then use the VAE framework and the vae.train_vae function to train a variational autoencoder on your own dataset. This accepts a list of files, and will output images of the training in the same directory. These are named \"test_xs.png\" as well as many images named prefixed by \"manifold\" and \"reconstruction\" for each iteration of the training. After you are happy with your training, you will need to create a forum post with the \"test_xs.png\" and the very last manifold and reconstruction image created to demonstrate how the variational autoencoder worked for your dataset. You'll likely need a lot more than 100 images for this to be successful.\nNote that this will also create \"checkpoints\" which save the model! If you change the model, and already have a checkpoint by the same name, it will try to load the previous model and will fail. Be sure to remove the old checkpoint or specify a new name for ckpt_name! The default parameters shown below are what I have used for the celeb net dataset which has over 200k images. You will definitely want to use a smaller model if you do not have this many images! Explore!\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "# Get a list of jpg file (Only JPG works!)\nfiles = [os.path.join(some_dir, file_i) for file_i in os.listdir(some_dir) if file_i.endswith('.jpg')]\n\n# Ensure that you have the latest TensorFlow version installed, otherwise you may have encountered\n# 'rsz_shape' error because of the backward incompatible API.\n# Train it! 
Change these parameters!\nvae.train_vae(files,\n input_shape,\n learning_rate=0.0001,\n batch_size=100,\n n_epochs=50,\n n_examples=10,\n crop_shape=[64, 64, 3],\n crop_factor=0.8,\n n_filters=[100, 100, 100, 100],\n n_hidden=256,\n n_code=50,\n convolutional=True,\n variational=True,\n filter_sizes=[3, 3, 3, 3],\n dropout=True,\n keep_prob=0.8,\n activation=tf.nn.relu,\n img_step=100,\n save_step=100,\n ckpt_name=\"vae.ckpt\")", "<a name=\"part-three---deep-audio-classification-network\"></a>\nPart Three - Deep Audio Classification Network\n<a name=\"instructions-2\"></a>\nInstructions\nIn this last section, we'll explore using a regression network, one that predicts continuous outputs, to perform classification, a model capable of predicting discrete outputs. We'll explore the use of one-hot encodings and using a softmax layer to convert our regression outputs to a probability which we can use for classification. In the lecture, we saw how this works for the MNIST dataset, a dataset of 28 x 28 pixel handwritten digits labeled from 0 - 9. We converted our 28 x 28 pixels into a vector of 784 values, and used a fully connected network to output 10 values, the one hot encoding of our 0 - 9 labels. \nIn addition to the lecture material, I find these two links very helpful to try to understand classification w/ neural networks:\nhttps://colah.github.io/posts/2014-03-NN-Manifolds-Topology/\nhttps://cs.stanford.edu/people/karpathy/convnetjs//demo/classify2d.html\nThe GTZAN Music and Speech dataset has 64 music and 64 speech files, each 30 seconds long, and each at a sample rate of 22050 Hz, meaning there are 22050 samplings of the audio signal per second. What we're going to do is use all of this data to build a classification network capable of knowing whether something is music or speech. So we will have audio as input, and a probability of 2 possible values, music and speech, as output. This is very similar to the MNIST network. We just have to decide on how to represent our input data, prepare the data and its labels, build batch generators for our data, create the network, and train it. We'll make use of the libs/datasets.py module to help with some of this.\n<a name=\"preparing-the-data\"></a>\nPreparing the Data\nLet's first download the GTZAN music and speech dataset. I've included a helper function to do this.", "dst = 'gtzan_music_speech'\nif not os.path.exists(dst):\n dataset_utils.gtzan_music_speech_download(dst)", "Inside the dst directory, we now have folders for music and speech. Let's get the list of all the wav files for music and speech:", "# Get the full path to the directory\nmusic_dir = os.path.join(os.path.join(dst, 'music_speech'), 'music_wav')\n\n# Now use list comprehension to combine the path of the directory with any wave files\nmusic = [os.path.join(music_dir, file_i)\n for file_i in os.listdir(music_dir)\n if file_i.endswith('.wav')]\n\n# Similarly, for the speech folder:\nspeech_dir = os.path.join(os.path.join(dst, 'music_speech'), 'speech_wav')\nspeech = [os.path.join(speech_dir, file_i)\n for file_i in os.listdir(speech_dir)\n if file_i.endswith('.wav')]\n\n# Let's see all the file names\nprint(music, speech)", "We now need to load each file. We can use the scipy.io.wavefile module to load the audio as a signal. \nAudio can be represented in a few ways, including as floating point or short byte data (16-bit data). This dataset is the latter and so can range from -32768 to +32767. 
We'll use the function I've provided in the utils module to load and convert an audio signal to a -1.0 to 1.0 floating point datatype by dividing by the maximum absolute value. Let's try this with just one of the files we have:", "file_i = music[0]\ns = utils.load_audio(file_i)\nplt.plot(s)", "Now, instead of using the raw audio signal, we're going to use the Discrete Fourier Transform to represent our audio as matched filters of different sinuoids. Unfortunately, this is a class on Tensorflow and I can't get into Digital Signal Processing basics. If you want to know more about this topic, I highly encourage you to take this course taught by the legendary Perry Cook and Julius Smith: https://www.kadenze.com/courses/physics-based-sound-synthesis-for-games-and-interactive-systems/info - there is no one better to teach this content, and in fact, I myself learned DSP from Perry Cook almost 10 years ago.\nAfter taking the DFT, this will return our signal as real and imaginary components, a polar complex value representation which we will convert to a cartesian representation capable of saying what magnitudes and phases are in our signal.", "# Parameters for our dft transform. Sorry we can't go into the\n# details of this in this course. Please look into DSP texts or the\n# course by Perry Cook linked above if you are unfamiliar with this.\nfft_size = 512\nhop_size = 256\n\nre, im = dft.dft_np(s, hop_size=256, fft_size=512)\nmag, phs = dft.ztoc(re, im)\nprint(mag.shape)\nplt.imshow(mag)", "What we're seeing are the features of the audio (in columns) over time (in rows). We can see this a bit better by taking the logarithm of the magnitudes converting it to a psuedo-decibel scale. This is more similar to the logarithmic perception of loudness we have. Let's visualize this below, and I'll transpose the matrix just for display purposes:", "plt.figure(figsize=(10, 4))\nplt.imshow(np.log(mag.T))\nplt.xlabel('Time')\nplt.ylabel('Frequency Bin')", "We could just take just a single row (or column in the second plot of the magnitudes just above, as we transposed it in that plot) as an input to a neural network. However, that just represents about an 80th of a second of audio data, and is not nearly enough data to say whether something is music or speech. We'll need to use more than a single row to get a decent length of time. One way to do this is to use a sliding 2D window from the top of the image down to the bottom of the image (or left to right). 
Let's start by specifying how large our sliding window is.", "# The sample rate from our audio is 22050 Hz.\nsr = 22050\n\n# We can calculate how many hops there are in a second\n# which will tell us how many frames of magnitudes\n# we have per second\nn_frames_per_second = sr // hop_size\n\n# We want 500 milliseconds of audio in our window\nn_frames = n_frames_per_second // 2\n\n# And we'll move our window by 250 ms at a time\nframe_hops = n_frames_per_second // 4\n\n# We'll therefore have this many sliding windows:\nn_hops = (len(mag) - n_frames) // frame_hops", "Now we can collect all the sliding windows into a list of Xs and label them based on being music as 0 or speech as 1 into a collection of ys.", "Xs = []\nys = []\nfor hop_i in range(n_hops):\n # Creating our sliding window\n frames = mag[(hop_i * frame_hops):(hop_i * frame_hops + n_frames)]\n \n # Store them with a new 3rd axis and as a logarithmic scale\n # We'll ensure that we aren't taking a log of 0 just by adding\n # a small value, also known as epsilon.\n Xs.append(np.log(np.abs(frames[..., np.newaxis]) + 1e-10))\n \n # And then store the label \n ys.append(0)", "The code below will perform this for us, as well as create the inputs and outputs to our classification network by specifying 0s for the music dataset and 1s for the speech dataset. Let's just take a look at the first sliding window, and see it's label:", "plt.imshow(Xs[0][..., 0])\nplt.title('label:{}'.format(ys[1]))", "Since this was the first audio file of the music dataset, we've set it to a label of 0. And now the second one, which should have 50% overlap with the previous one, and still a label of 0:", "plt.imshow(Xs[1][..., 0])\nplt.title('label:{}'.format(ys[1]))", "So hopefully you can see that the window is sliding down 250 milliseconds at a time, and since our window is 500 ms long, or half a second, it has 50% new content at the bottom. Let's do this for every audio file now:\n<h3><font color='red'>TODO! 
COMPLETE THIS SECTION!</font></h3>", "# Store every magnitude frame and its label of being music: 0 or speech: 1\nXs, ys = [], []\n\n# Let's start with the music files\nfor i in music:\n # Load the ith file:\n s = utils.load_audio(i)\n \n # Now take the dft of it (take a DSP course!):\n re, im = dft.dft_np(s, fft_size=fft_size, hop_size=hop_size)\n \n # And convert the complex representation to magnitudes/phases (take a DSP course!):\n mag, phs = dft.ztoc(re, im)\n \n # This is how many sliding windows we have:\n n_hops = (len(mag) - n_frames) // frame_hops\n \n # Let's extract them all:\n for hop_i in range(n_hops):\n \n # Get the current sliding window\n frames = mag[(hop_i * frame_hops):(hop_i * frame_hops + n_frames)]\n \n # We'll take the log magnitudes, as this is a nicer representation:\n this_X = np.log(np.abs(frames[..., np.newaxis]) + 1e-10)\n \n # And store it:\n Xs.append(this_X)\n \n # And be sure that we store the correct label of this observation:\n ys.append(0)\n \n# Now do the same thing with speech (TODO)!\nfor i in speech:\n \n # Load the ith file:\n s = ...\n \n # Now take the dft of it (take a DSP course!):\n re, im = ...\n \n # And convert the complex representation to magnitudes/phases (take a DSP course!):\n mag, phs = ...\n \n # This is how many sliding windows we have:\n n_hops = (len(mag) - n_frames) // frame_hops\n\n # Let's extract them all:\n for hop_i in range(n_hops):\n \n # Get the current sliding window\n frames = mag[(hop_i * frame_hops):(hop_i * frame_hops + n_frames)]\n \n # We'll take the log magnitudes, as this is a nicer representation:\n this_X = np.log(np.abs(frames[..., np.newaxis]) + 1e-10)\n \n # And store it:\n Xs.append(this_X)\n \n # Make sure we use the right label (TODO!)!\n ys.append...\n \n# Convert them to an array:\nXs = np.array(Xs)\nys = np.array(ys)\n\nprint(Xs.shape, ys.shape)\n\n# Just to make sure you've done it right. If you've changed any of the\n# parameters of the dft/hop size, then this will fail. If that's what you\n# wanted to do, then don't worry about this assertion.\nassert(Xs.shape == (15360, 43, 256, 1) and ys.shape == (15360,))", "Just to confirm it's doing the same as above, let's plot the first magnitude matrix:", "plt.imshow(Xs[0][..., 0])\nplt.title('label:{}'.format(ys[0]))", "Let's describe the shape of our input to the network:", "n_observations, n_height, n_width, n_channels = Xs.shape", "We'll now use the Dataset object I've provided for you under libs/datasets.py. This will accept the Xs, ys, a list defining our dataset split into training, validation, and testing proportions, and a parameter one_hot stating whether we want our ys to be converted to a one hot vector or not.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "ds = datasets.Dataset(Xs=..., ys=..., split=[0.8, 0.1, 0.1], one_hot=True)", "Let's take a look at the batch generator this object provides. We can all any of the splits, the train, valid, or test splits as properties of the object. And each split provides a next_batch method which gives us a batch generator. We should have specified that we wanted one_hot=True to have our batch generator return our ys with 2 features, one for each possible class.", "Xs_i, ys_i = next(ds.train.next_batch())\n\n# Notice the shape this returns. 
This will become the shape of our input and output of the network:\nprint(Xs_i.shape, ys_i.shape)\n\nassert(ys_i.shape == (100, 2))", "Let's take a look at the first element of the randomized batch:", "plt.imshow(Xs_i[0, :, :, 0])\nplt.title('label:{}'.format(ys_i[0]))", "And the second one:", "plt.imshow(Xs_i[1, :, :, 0])\nplt.title('label:{}'.format(ys_i[1]))", "So we have a randomized order in minibatches generated for us, and the ys are represented as a one-hot vector with each class, music and speech, encoded as a 0 or 1. Since the next_batch method is a generator, we can use it in a loop until it is exhausted to run through our entire dataset in mini-batches.\n<a name=\"creating-the-network\"></a>\nCreating the Network\nLet's now create the neural network. Recall our input X is 4-dimensional, with the same shape that we've just seen as returned from our batch generator above. We're going to create a deep convolutional neural network with a few layers of convolution and 2 finals layers which are fully connected. The very last layer must have only 2 neurons corresponding to our one-hot vector of ys, so that we can properly measure the cross-entropy (just like we did with MNIST and our 10 element one-hot encoding of the digit label). First let's create our placeholders:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "tf.reset_default_graph()\n\n# Create the input to the network. This is a 4-dimensional tensor!\n# Don't forget that we should use None as a shape for the first dimension\n# Recall that we are using sliding windows of our magnitudes (TODO):\nX = tf.placeholder(name='X', shape=..., dtype=tf.float32)\n\n# Create the output to the network. This is our one hot encoding of 2 possible values (TODO)!\nY = tf.placeholder(name='Y', shape=..., dtype=tf.float32)", "Let's now create our deep convolutional network. Start by first creating the convolutional layers. Try different numbers of layers, different numbers of filters per layer, different activation functions, and varying the parameters to get the best training/validation score when training below. Try first using a kernel size of 3 and a stride of 1. You can use the utils.conv2d function to help you create the convolution.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "# TODO: Explore different numbers of layers, and sizes of the network\nn_filters = [9, 9, 9, 9]\n\n# Now let's loop over our n_filters and create the deep convolutional neural network\nH = X\nfor layer_i, n_filters_i in enumerate(n_filters):\n \n # Let's use the helper function to create our connection to the next layer:\n # TODO: explore changing the parameters here:\n H, W = utils.conv2d(\n H, n_filters_i, k_h=3, k_w=3, d_h=2, d_w=2,\n name=str(layer_i))\n \n # And use a nonlinearity\n # TODO: explore changing the activation here:\n H = tf.nn.relu(H)\n \n # Just to check what's happening:\n print(H.get_shape().as_list())", "We'll now connect our last convolutional layer to a fully connected layer of 100 neurons. This is essentially combining the spatial information, thus losing the spatial information. You can use the utils.linear function to do this, which will internally also reshape the 4-d tensor to a 2-d tensor so that it can be connected to a fully-connected layer (i.e. perform a matrix multiplication).\n<h3><font color='red'>TODO! 
COMPLETE THIS SECTION!</font></h3>", "# Connect the last convolutional layer to a fully connected network (TODO)!\nfc, W = utils.linear(H, ...\n\n# And another fully connected layer, now with just 2 outputs, the number of outputs that our\n# one hot encoding has (TODO)!\nY_pred, W = utils.linear(fc, ...", "We'll now create our cost. Unlike the MNIST network, we're going to use a binary cross entropy as we only have 2 possible classes. You can use the utils.binary_cross_entropy function to help you with this. Remember, the final cost measure the average loss of your batches.", "loss = utils.binary_cross_entropy(Y_pred, Y)\ncost = tf.reduce_mean(tf.reduce_sum(loss, 1))", "Just like in MNIST, we'll now also create a measure of accuracy by finding the prediction of our network. This is just for us to monitor the training and is not used to optimize the weights of the network! Look back to the MNIST network in the lecture if you are unsure of how this works (it is exactly the same):\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "predicted_y = tf.argmax(...\nactual_y = tf.argmax(...\ncorrect_prediction = tf.equal(...\naccuracy = tf.reduce_mean(...", "We'll now create an optimizer and train our network:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "learning_rate = ...\noptimizer = tf.train.AdamOptimizer(...).minimize(...)", "Now we're ready to train. This is a pretty simple dataset for a deep convolutional network. As a result, I've included code which demonstrates how to monitor validation performance. A validation set is data that the network has never seen, and is not used for optimizing the weights of the network. We use validation to better understand how well the performance of a network \"generalizes\" to unseen data.\nYou can easily run the risk of overfitting to the training set of this problem. Overfitting simply means that the number of parameters in our model are so high that we are not generalizing our model, and instead trying to model each individual point, rather than the general cause of the data. This is a very common problem that can be addressed by using less parameters, or enforcing regularization techniques which we didn't have a chance to cover (dropout, batch norm, l2, augmenting the dataset, and others).\nFor this dataset, if you notice that your validation set is performing worse than your training set, then you know you have overfit! You should be able to easily get 97+% on the validation set within < 10 epochs. If you've got great training performance, but poor validation performance, then you likely have \"overfit\" to the training dataset, and are unable to generalize to the validation set. Try varying the network definition, number of filters/layers until you get 97+% on your validation set!\n<h3><font color='red'>TODO! 
COMPLETE THIS SECTION!</font></h3>", "# Explore these parameters: (TODO)\nn_epochs = 10\nbatch_size = 200\n\n# Create a session and init!\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\n\n# Now iterate over our dataset n_epoch times\nfor epoch_i in range(n_epochs):\n print('Epoch: ', epoch_i)\n \n # Train\n this_accuracy = 0\n its = 0\n \n # Do our mini batches:\n for Xs_i, ys_i in ds.train.next_batch(batch_size):\n # Note here: we are running the optimizer so\n # that the network parameters train!\n this_accuracy += sess.run([accuracy, optimizer], feed_dict={\n X:Xs_i, Y:ys_i})[0]\n its += 1\n print(this_accuracy / its)\n print('Training accuracy: ', this_accuracy / its)\n \n # Validation (see how the network does on unseen data).\n this_accuracy = 0\n its = 0\n \n # Do our mini batches:\n for Xs_i, ys_i in ds.valid.next_batch(batch_size):\n # Note here: we are NOT running the optimizer!\n # we only measure the accuracy!\n this_accuracy += sess.run(accuracy, feed_dict={\n X:Xs_i, Y:ys_i})\n its += 1\n print('Validation accuracy: ', this_accuracy / its)", "Let's try to inspect how the network is accomplishing this task, just like we did with the MNIST network. First, let's see what the names of our operations in our network are.", "g = tf.get_default_graph()\n[op.name for op in g.get_operations()]", "Now let's visualize the W tensor's weights for the first layer using the utils function montage_filters, just like we did for the MNIST dataset during the lecture. Recall from the lecture that this is another great way to inspect the performance of your network. If many of the filters look uniform, then you know the network is either under or overperforming. What you want to see are filters that look like they are responding to information such as edges or corners.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "g = tf.get_default_graph()\nW = ...\n\nassert(W.dtype == np.float32)\nm = montage_filters(W)\nplt.figure(figsize=(5, 5))\nplt.imshow(m)\nplt.imsave(arr=m, fname='audio.png')", "We can also look at every layer's filters using a loop:", "g = tf.get_default_graph()\nfor layer_i in range(len(n_filters)):\n W = sess.run(g.get_tensor_by_name('{}/W:0'.format(layer_i)))\n plt.figure(figsize=(5, 5))\n plt.imshow(montage_filters(W))\n plt.title('Layer {}\\'s Learned Convolution Kernels'.format(layer_i))", "In the next session, we'll learn some much more powerful methods of inspecting such networks.\n<a name=\"assignment-submission\"></a>\nAssignment Submission\nAfter you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as:\n<pre>\n session-3/\n session-3.ipynb\n test.png\n recon.png\n sorted.png\n manifold.png\n test_xs.png\n audio.png\n</pre>\n\nYou'll then submit this zip file for your third assignment on Kadenze for \"Assignment 3: Build Unsupervised and Supervised Networks\"! Remember to post Part Two to the Forum to receive full credit! If you have any questions, remember to reach out on the forums and connect with your peers or with me.\nTo get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the #CADL community to see what your peers are doing! 
https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info\nAlso, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the #CADL hashtag so that other students can find your work!", "utils.build_submission('session-3.zip',\n ('test.png',\n 'recon.png',\n 'sorted.png',\n 'manifold.png',\n 'test_xs.png',\n 'audio.png',\n 'session-3.ipynb'))", "<a name=\"coming-up\"></a>\nComing Up\nIn session 4, we'll start to interrogate pre-trained Deep Convolutional Networks trained to recognize 1000 possible object labels. Along the way, we'll see how by inspecting the network, we can perform some very interesting image synthesis techniques which led to the Deep Dream viral craze. We'll also see how to separate the content and style of an image and use this for generative artistic stylization! In Session 5, we'll explore a few other powerful methods of generative synthesis, including Generative Adversarial Networks, Variational Autoencoding Generative Adversarial Networks, and Recurrent Neural Networks." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
vinitsamel/udacitydeeplearning
tv-script-generation/dlnd_tv_script_generation.ipynb
mit
[ "TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)", "import numpy as np\nimport problem_unittests as tests\nfrom collections import Counter\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n counts = Counter(text)\n vocab = sorted(counts, key=counts.get, reverse=True)\n vocab_to_int = { w : i for i, w in enumerate(vocab, 0)}\n int_to_vocab = dict(enumerate(vocab))\n \n return vocab_to_int, int_to_vocab\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . 
)\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".", "def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n token_dict = {'.' : \"||Period||\", ',' : \"||Comma||\", '\"' : \"||Quotation_Mark||\",\\\n ';' : \"||Semicolon||\", '!': \"||Exclamation_Mark||\", '?': \"||Question_Mark||\", \\\n '(' : \"||Left_Parentheses||\", ')' : \"||Right_Parentheses||\", '--' : \"||Dash||\", '\\n' : \"||Return||\"}\n return token_dict\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)", "def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n # TODO: Implement Function\n inputs_ = tf.placeholder(tf.int32, shape=[None, None], name='input')\n targets_ = tf.placeholder(tf.int32, shape=[None, None], name='targets')\n learn_rate_ = tf.placeholder(tf.float32, shape=None, name='learning_rate')\n return (inputs_, targets_, learn_rate_)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)", "Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)", "def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # TODO: Implement Function\n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n cell = tf.contrib.rnn.MultiRNNCell([lstm])\n initial_state = tf.identity(cell.zero_state(batch_size, tf.int32), name=\"initial_state\") \n return cell, initial_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)", "Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.", "def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function\n embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, input_data)\n return embed\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)", "Build RNN\nYou created a RNN Cell in the get_init_cell() function. 
Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)", "def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n outputs, fs = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)\n final_state = tf.identity(fs, name='final_state')\n return outputs, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)", "Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)", "def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n embed = get_embed(input_data, vocab_size, embed_dim) \n rnn, final_state = build_rnn(cell, embed) \n logits = tf.contrib.layers.fully_connected(rnn, vocab_size, activation_fn=None, \\\n weights_initializer = tf.truncated_normal_initializer(stddev=0.1),\\\n biases_initializer=tf.zeros_initializer())\n return logits, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)", "Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2], [ 7 8], [13 14]]\n # Batch of targets\n [[ 2 3], [ 8 9], [14 15]]\n ]\n# Second Batch\n [\n # Batch of Input\n [[ 3 4], [ 9 10], [15 16]]\n # Batch of targets\n [[ 4 5], [10 11], [16 17]]\n ]\n# Third Batch\n [\n # Batch of Input\n [[ 5 6], [11 12], [17 18]]\n # Batch of targets\n [[ 6 7], [12 13], [18 1]]\n ]\n]\n```\nNotice that the last target value in the last batch is the first input value of the first batch. In this case, 1. 
This is a common technique used when creating sequence batches, although it is rather unintuitive.", "def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n\n # TODO: Implement Function\n \n num_batches = int(len(int_text) / (batch_size * seq_length))\n num_words = num_batches * batch_size * seq_length\n input_data = np.array(int_text[:num_words])\n target_data = np.array(int_text[1:num_words+1])\n \n input_batches = np.split(input_data.reshape(batch_size, -1), num_batches, 1)\n target_batches = np.split(target_data.reshape(batch_size, -1), num_batches, 1)\n \n #last target value in the last batch is the first input value of the first batch\n #print (batches)\n target_batches[-1][-1][-1]=input_batches[0][0][0]\n \n return np.array(list(zip(input_batches, target_batches)))\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.", "# Number of Epochs\nnum_epochs = 20\n# Batch Size\nbatch_size = 100\n# RNN Size\nrnn_size = 512\n# Embedding Dimension Size\nembed_dim = 300\n# Sequence Length\nseq_length = 10\n# Learning Rate\nlearning_rate = 0.01\n# Show stats for every n number of batches\nshow_every_n_batches = 10\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. 
If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')", "Save Parameters\nSave seq_length and save_dir for generating a new TV script.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()", "Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)", "def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n InputTensor = loaded_graph.get_tensor_by_name('input:0')\n InitialStateTensor = loaded_graph.get_tensor_by_name('initial_state:0')\n FinalStateTensor = loaded_graph.get_tensor_by_name('final_state:0')\n ProbsTensor = loaded_graph.get_tensor_by_name('probs:0')\n\n return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)", "Choose Word\nImplement the pick_word() function to select the next word using probabilities.", "def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n return int_to_vocab[np.argmax(probabilities)]\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)", "Generate TV Script\nThis will generate the TV script for you. 
Set gen_length to the length of TV script you want to generate.", "gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[0, dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)", "The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
facebook/prophet
notebooks/multiplicative_seasonality.ipynb
mit
[ "%load_ext rpy2.ipython\n%matplotlib inline\nfrom prophet import Prophet\nimport pandas as pd\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport logging\nlogging.getLogger('prophet').setLevel(logging.ERROR)\nlogging.getLogger('numexpr').setLevel(logging.ERROR)\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\n%%R\nlibrary(prophet)", "By default Prophet fits additive seasonalities, meaning the effect of the seasonality is added to the trend to get the forecast. This time series of the number of air passengers is an example of when additive seasonality does not work:", "%%R -w 10 -h 6 -u in\ndf <- read.csv('../examples/example_air_passengers.csv')\nm <- prophet(df)\nfuture <- make_future_dataframe(m, 50, freq = 'm')\nforecast <- predict(m, future)\nplot(m, forecast)\n\ndf = pd.read_csv('../examples/example_air_passengers.csv')\nm = Prophet()\nm.fit(df)\nfuture = m.make_future_dataframe(50, freq='MS')\nforecast = m.predict(future)\nfig = m.plot(forecast)", "This time series has a clear yearly cycle, but the seasonality in the forecast is too large at the start of the time series and too small at the end. In this time series, the seasonality is not a constant additive factor as assumed by Prophet, rather it grows with the trend. This is multiplicative seasonality.\nProphet can model multiplicative seasonality by setting seasonality_mode='multiplicative' in the input arguments:", "%%R -w 10 -h 6 -u in\nm <- prophet(df, seasonality.mode = 'multiplicative')\nforecast <- predict(m, future)\nplot(m, forecast)\n\nm = Prophet(seasonality_mode='multiplicative')\nm.fit(df)\nforecast = m.predict(future)\nfig = m.plot(forecast)", "The components figure will now show the seasonality as a percent of the trend:", "%%R -w 9 -h 6 -u in\nprophet_plot_components(m, forecast)\n\nfig = m.plot_components(forecast)", "With seasonality_mode='multiplicative', holiday effects will also be modeled as multiplicative. Any added seasonalities or extra regressors will by default use whatever seasonality_mode is set to, but can be overriden by specifying mode='additive' or mode='multiplicative' as an argument when adding the seasonality or regressor.\nFor example, this block sets the built-in seasonalities to multiplicative, but includes an additive quarterly seasonality and an additive regressor:", "%%R\nm <- prophet(seasonality.mode = 'multiplicative')\nm <- add_seasonality(m, 'quarterly', period = 91.25, fourier.order = 8, mode = 'additive')\nm <- add_regressor(m, 'regressor', mode = 'additive')\n\nm = Prophet(seasonality_mode='multiplicative')\nm.add_seasonality('quarterly', period=91.25, fourier_order=8, mode='additive')\nm.add_regressor('regressor', mode='additive')", "Additive and multiplicative extra regressors will show up in separate panels on the components plot. Note, however, that it is pretty unlikely to have a mix of additive and multiplicative seasonalities, so this will generally only be used if there is a reason to expect that to be the case." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ShiroJean/Breast-cancer-risk-prediction
NB2_ExploratoryDataAnalysis.ipynb
mit
[ "2.0 Notebook 2: Exploratory Data Analysis\nNow that we have a good intuitive sense of the data, Next step involves taking a closer look at attributes and data values. In this section, I am getting familiar with the data, which will provide useful knowledge for data pre-processing.\n2.1 Objectives of Data Exploration\nExploratory data analysis (EDA) is a very important step which takes place after feature engineering and acquiring data and it should be done before any modeling. This is because it is very important for a data scientist to be able to understand the nature of the data without making assumptions. The results of data exploration can be extremely useful in grasping the structure of the data, the distribution of the values, and the presence of extreme values and interrelationships within the data set.\n\nThe purpose of EDA is:\n* to use summary statistics and visualizations to better understand data, \nfind clues about the tendencies of the data, its quality and to formulate assumptions and the hypothesis of our analysis\n* For data preprocessing to be successful, it is essential to have an overall picture of your data\nBasic statistical descriptions can be used to identify properties of the data and highlight which data values should be treated as noise or outliers.* \n\nNext step is to explore the data. There are two approached used to examine the data using:\n\n\nDescriptive statistics is the process of condensing key characteristics of the data set into simple numeric metrics. Some of the common metrics used are mean, standard deviation, and correlation. \n\n\nVisualization is the process of projecting the data, or parts of it, into Cartesian space or into abstract images. In the data mining process, data exploration is leveraged in many different steps including preprocessing, modeling, and interpretation of results. \n\n\n2.2 Descriptive statistics\nSummary statistics are measurements meant to describe data. In the field of descriptive statistics, there are many summary measurements)", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\n#Load libraries for data processing\nimport pandas as pd #data processing, CSV file I/O (e.g. pd.read_csv)\nimport numpy as np\nfrom scipy.stats import norm\nimport seaborn as sns # visualization\n\n\nplt.rcParams['figure.figsize'] = (15,8) \nplt.rcParams['axes.titlesize'] = 'large'\n\ndata = pd.read_csv('data/clean-data.csv', index_col=False)\ndata.drop('Unnamed: 0',axis=1, inplace=True)\n#data.head(2)\n\n#basic descriptive statistics\ndata.describe()\n\ndata.skew()", "The skew result show a positive (right) or negative (left) skew. Values closer to zero show less skew.\n From the graphs, we can see that radius_mean, perimeter_mean, area_mean, concavity_mean and concave_points_mean are useful in predicting cancer type due to the distinct grouping between malignant and benign cancer types in these features. 
We can also see that area_worst and perimeter_worst are also quite useful.", "data.diagnosis.unique()\n\n# Group by diagnosis and review the output.\ndiag_gr = data.groupby('diagnosis', axis=0)\npd.DataFrame(diag_gr.size(), columns=['# of observations'])", "Check binary encoding from NB1 to confirm the conversion of the diagnosis categorical data into numeric, where\n* Malignant = 1 (indicates presence of cancer cells)\n* Benign = 0 (indicates absence)\nObservation\n\n357 observations indicate the absence of cancer cells and 212 show the presence of cancer cells.\n\nLet's confirm this by plotting the histogram.\n2.3 Unimodal Data Visualizations\nOne of the main goals of visualizing the data here is to observe which features are most helpful in predicting malignant or benign cancer. The other is to see general trends that may aid us in model selection and hyper parameter selection.\nApply 3 techniques that you can use to understand each attribute of your dataset independently.\n* Histograms.\n* Density Plots.\n* Box and Whisker Plots.", "#lets get the frequency of cancer diagnosis\nsns.set_style(\"white\")\nsns.set_context({\"figure.figsize\": (10, 8)})\nsns.countplot(data['diagnosis'],label='Count',palette=\"Set3\")", "2.3.1 Visualise distribution of data via histograms\nHistograms are commonly used to visualize numerical variables. A histogram is similar to a bar graph after the values of the variable are grouped (binned) into a finite number of intervals (bins).\nHistograms group data into bins and provide you a count of the number of observations in each bin. From the shape of the bins you can quickly get a feeling for whether an attribute is Gaussian, skewed or even has an exponential distribution. It can also help you see possible outliers.\nSeparate columns into smaller dataframes to perform visualization", "#Break up columns into groups, according to their suffix designation \n#(_mean, _se,\n# and _worst) to perform visualisation plots off. \n#Join the 'ID' and 'Diagnosis' back on\ndata_id_diag=data.loc[:,[\"id\",\"diagnosis\"]]\ndata_diag=data.loc[:,[\"diagnosis\"]]\n\n#For a merge + slice (positional slicing; .ix is removed in current pandas):\ndata_mean=data.iloc[:,1:11]\ndata_se=data.iloc[:,11:22]\ndata_worst=data.iloc[:,23:]\n\nprint(data_id_diag.columns)\n#print(data_mean.columns)\n#print(data_se.columns)\n#print(data_worst.columns)\n\n", "Histogram of the \"_mean\" suffix designation", "#Plot histograms of the _mean variables\nhist_mean=data_mean.hist(bins=10, figsize=(15, 10),grid=False,)\n\n#For any individual histogram, use e.g.:\n#data_worst['radius_worst'].hist(bins=100)", "Histogram of the \"_se\" suffix designation", "#Plot histograms of _se variables\n#hist_se=data_se.hist(bins=10, figsize=(15, 10),grid=False,)", "Histogram of the \"_worst\" suffix designation", "#Plot histograms of _worst variables\n#hist_worst=data_worst.hist(bins=10, figsize=(15, 10),grid=False,)", "Observation\n\nWe can see that perhaps the concavity and concave points attributes may have an exponential distribution. We can also see that perhaps the texture, smoothness and symmetry attributes may have a Gaussian or nearly Gaussian distribution.
This is interesting because many machine learning techniques assume a Gaussian univariate distribution on the input variables.\n\n2.3.2 Visualize distribution of data via density plots\nDensity plots of the \"_mean\" suffix designation", "#Density Plots\nplt = data_mean.plot(kind= 'density', subplots=True, layout=(4,3), sharex=False, \n sharey=False,fontsize=12, figsize=(15,10))\n", "Density plots of the \"_se\" suffix designation", "#Density Plots\n#plt = data_se.plot(kind= 'density', subplots=True, layout=(4,3), sharex=False, \n# sharey=False,fontsize=12, figsize=(15,10))\n", "Density plot of the \"_worst\" suffix designation", "#Density Plots\n#plt = data_worst.plot(kind= 'kde', subplots=True, layout=(4,3), sharex=False, sharey=False,fontsize=5, \n# figsize=(15,10))\n", "Observation\n\nWe can see that perhaps the perimeter, radius, area, concavity and compactness attributes may have an exponential distribution. We can also see that perhaps the texture, smoothness and symmetry attributes may have a Gaussian or nearly Gaussian distribution. This is interesting because many machine learning techniques assume a Gaussian univariate distribution on the input variables.\n\n2.3.3 Visualise distribution of data via box plots\nBox plot of the \"_mean\" suffix designation", "# box and whisker plots\n#plt=data_mean.plot(kind= 'box' , subplots=True, layout=(4,4), sharex=False, sharey=False,fontsize=12)", "Box plot of the \"_se\" suffix designation", "# box and whisker plots\n#plt=data_se.plot(kind= 'box' , subplots=True, layout=(4,4), sharex=False, sharey=False,fontsize=12)", "Box plot of the \"_worst\" suffix designation", "# box and whisker plots\n#plt=data_worst.plot(kind= 'box' , subplots=True, layout=(4,4), sharex=False, sharey=False,fontsize=12)", "Observation\n\nWe can see that perhaps the perimeter, radius, area, concavity and compactness attributes may have an exponential distribution. We can also see that perhaps the texture, smoothness and symmetry attributes may have a Gaussian or nearly Gaussian distribution.
This is interesting because many machine learning techniques assume a Gaussian univariate distribution on the input variables.\n\n2.4 Multimodal Data Visualizations\n\nScatter plots\nCorrelation matrix\n\nCorrelation matrix", "# plot correlation matrix\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nfrom matplotlib import pyplot as plt\n\nplt.style.use('fivethirtyeight')\nsns.set_style(\"white\")\n\ndata = pd.read_csv('data/clean-data.csv', index_col=False)\ndata.drop('Unnamed: 0',axis=1, inplace=True)\n# Compute the correlation matrix\ncorr = data_mean.corr()\n\n# Generate a mask for the upper triangle\nmask = np.zeros_like(corr, dtype=bool)\nmask[np.triu_indices_from(mask)] = True\n\n# Set up the matplotlib figure\nfig, ax = plt.subplots(figsize=(8, 8))\nplt.title('Breast Cancer Feature Correlation')\n\n# Generate a custom diverging colormap\ncmap = sns.diverging_palette(260, 10, as_cmap=True)\n\n# Draw the heatmap with the mask and correct aspect ratio\nsns.heatmap(corr, vmax=1.2, square=True, cmap=cmap, mask=mask, \n ax=ax, annot=True, fmt='.2g', linewidths=2)", "Observation:\nWe can see that strong positive relationships (r between 0.75 and 1) exist among the mean value parameters:\n* The mean area of the tissue nucleus has a strong positive correlation with the mean values of radius and perimeter;\n* Some parameters are moderately positively correlated (r between 0.5 and 0.75), for example concavity and area, and concavity and perimeter;\n* Likewise, we see some strong negative correlation between fractal_dimension and the radius, texture and perimeter mean values.", "\nplt.style.use('fivethirtyeight')\nsns.set_style(\"white\")\n\ndata = pd.read_csv('data/clean-data.csv', index_col=False)\ng = sns.PairGrid(data[[data.columns[1],data.columns[2],data.columns[3],\n data.columns[4], data.columns[5],data.columns[6]]],hue='diagnosis' )\ng = g.map_diag(plt.hist)\ng = g.map_offdiag(plt.scatter, s = 3)", "Summary\n\nMean values of cell radius, perimeter, area, compactness, concavity\n and concave points can be used in classification of the cancer. Larger\n values of these parameters tend to show a correlation with malignant\n tumors.\n\nMean values of texture, smoothness, symmetry or fractal dimension\n do not show a particular preference of one diagnosis over the other. \n\n\nNone of the histograms show noticeable large outliers that warrant further cleanup." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rsterbentz/phys202-2015-work
assignments/assignment10/ODEsEx02.ipynb
mit
[ "Ordinary Differential Equations Exercise 1\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.integrate import odeint\nfrom IPython.html.widgets import interact, fixed", "Lorenz system\nThe Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:\n$$ \\frac{dx}{dt} = \\sigma(y-x) $$\n$$ \\frac{dy}{dt} = x(\\rho-z) - y $$\n$$ \\frac{dz}{dt} = xy - \\beta z $$\nThe solution vector is $[x(t),y(t),z(t)]$ and $\\sigma$, $\\rho$, and $\\beta$ are parameters that govern the behavior of the solutions.\nWrite a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.", "def lorentz_derivs(yvec, t, sigma, rho, beta):\n \"\"\"Compute the the derivatives for the Lorentz system at yvec(t).\"\"\"\n x = yvec[0]\n y = yvec[1]\n z = yvec[2]\n dx = sigma*(y-x)\n dy = x*(rho-z) - y\n dz = x*y - beta*z\n return np.array([dx,dy,dz])\n\nassert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])", "Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.", "def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):\n \"\"\"Solve the Lorenz system for a single initial condition.\n \n Parameters\n ----------\n ic : array, list, tuple\n Initial conditions [x,y,z].\n max_time: float\n The max time to use. Integrate with 250 points per time unit.\n sigma, rho, beta: float\n Parameters of the differential equation.\n \n Returns\n -------\n soln : np.ndarray\n The array of the solution. Each row will be the solution vector at that time.\n t : np.ndarray\n The array of time points used.\n \n \"\"\"\n t = np.linspace(0,max_time,250)\n soln = odeint(lorentz_derivs, ic, t, args=(sigma, rho, beta))\n return (soln, t)\n\nassert True # leave this to grade solve_lorenz", "Write a function plot_lorentz that:\n\nSolves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. 
Call np.random.seed(1) a single time at the top of your function to use the same seed each time.\nPlot $[x(t),z(t)]$ using a line to show each trajectory.\nColor each line using the hot colormap from Matplotlib.\nLabel your plot and choose an appropriate x and y limit.\n\nThe following cell shows how to generate colors that can be used for the lines:", "N = 5\ncolors = plt.cm.hot(np.linspace(0,1,N))\nfor i in range(N):\n # To use these colors with plt.plot, pass them as the color argument\n print(colors[i])\n\nnp.random.seed(1)\ng=[]\nh=[]\nf=[]\nfor i in range(5):\n rnd = np.random.random(size=3)\n a,b,c = 30*rnd - 15\n g.append(a)\n h.append(b)\n f.append(c)\ng,h,f\n\ndef plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):\n \"\"\"Plot [x(t),z(t)] for the Lorenz system.\n \n Parameters\n ----------\n N : int\n Number of initial conditions and trajectories to plot.\n max_time: float\n Maximum time to use.\n sigma, rho, beta: float\n Parameters of the differential equation.\n \"\"\"\n np.random.seed(1)\n colors = plt.cm.hot(np.linspace(0,1,N))\n f = plt.figure(figsize=(7,7))\n for i in range(N):\n ic = 30*np.random.random(size=3) - 15\n soln, t = solve_lorentz(ic, max_time, sigma, rho, beta)\n plt.plot(soln[:,0], soln[:,2], color=colors[i])\n plt.xlabel('x(t)')\n plt.ylabel('z(t)')\n plt.title('Lorenz System: x(t) vs. z(t)')\n plt.ylim(-20,110)\n plt.xlim(-60,60)\n\nplot_lorentz()\n\nassert True # leave this to grade the plot_lorenz function", "Use interact to explore your plot_lorenz function with:\n\nmax_time an integer slider over the interval $[1,10]$.\nN an integer slider over the interval $[1,50]$.\nsigma a float slider over the interval $[0.0,50.0]$.\nrho a float slider over the interval $[0.0,50.0]$.\nbeta fixed at a value of $8/3$.", "interact(plot_lorentz, N=(1,50), max_time=(1,10), sigma=(0.0,50.0), rho=(0.0,50.0), beta=fixed(8/3));", "Describe the different behaviors you observe as you vary the parameters $\\sigma$, $\\rho$ and $\\beta$ of the system:\n$\\sigma$ = 25 gives a butterfly look to the graph. A low $\\sigma$ value gives more V-shaped graph, with $\\sigma$ = 0 creating vertical lines of varying lengths and positions (dependent on ic). $\\rho$ affects the overall size of the graph, with larger $\\rho$ corresponding to a larger graph." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/cloud
src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb
apache-2.0
[ "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">View on GitHub\n </a>\n </td>\n</table>\n\nOverview\nThis tutorial demonstrates AI Platform's CloudTuner service.\nObjective\nCloudTuner is implemented based upon the KerasTuner and uses AI Platform Vizier as an oracle to get suggested trials, run trials, etc. The usage of CloudTuner is the same as KerasTuner and additionally accept Vizier's study_config as an alternative input.\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nAI Platform Training\nCloud Storage\n\nLearn about AI Platform Training\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nPIP install packages and dependencies\nInstall additional dependencies not installed in the notebook environment.\n\nUse the latest major GA version of the framework.", "! pip install google-cloud\n! pip install google-cloud-storage\n! pip install requests\n! pip install tensorflow_datasets", "Set up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the AI Platform APIs\n\n\nIf running locally on your own machine, you will need to install the Google Cloud SDK.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.\nAuthenticate your Google Cloud account\nIf you are using AI Platform Notebooks, your environment is already\nauthenticated. Skip these steps.", "import sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your Google Cloud account. This provides access\n# to your Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nif 'google.colab' in sys.modules:\n from google.colab import auth as google_auth\n google_auth.authenticate_user()\n\n# If you are running this tutorial in a notebook locally, replace the string\n# below with the path to your service account key and run this cell to\n# authenticate your Google Cloud account.\nelse:\n %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json\n\n# Log in to your account on Google Cloud\n! 
gcloud auth application-default login\n! gcloud auth login", "Install CloudTuner\nDownload and install CloudTuner from tensorflow-cloud.", "! pip install tensorflow-cloud", "Restart the Kernel\nWe will automatically restart your kernel so the notebook has access to the packages you installed.", "# Restart the kernel after pip installs\nimport IPython\napp = IPython.Application.instance()\napp.kernel.do_shutdown(True)", "Import libraries and define constants", "from tensorflow_cloud import CloudTuner\nimport keras_tuner\n\nREGION = 'us-central1'\nPROJECT_ID = '[your-project-id]' #@param {type:\"string\"}\n! gcloud config set project $PROJECT_ID", "Tutorial\nPrepare Data\nFor this tutorial, we will use a subset (10000 examples) from the MNIST dataset.", "from tensorflow.keras.datasets import mnist\n(x, y), (val_x, val_y) = mnist.load_data()\nx = x.astype('float32') / 255.\nval_x = val_x.astype('float32') / 255.\n\nx = x[:10000]\ny = y[:10000]", "Define model building function\nNext, we will define the hyperparameter model building function like one does for KerasTuner, where the following are tunable:\n- number of layers\n- the learning rate\n\nNote that CloudTuner does not support adding hyperparameters in the model building function. Instead, the search space is configured by passing a hyperparameters argument when instantiating (constructing) the tuner.", "from tensorflow.keras import Sequential\nfrom tensorflow.keras.layers import Flatten, Dense\nfrom tensorflow.keras.optimizers import Adam\n\n\ndef build_model(hp):\n model = Sequential()\n model.add(Flatten(input_shape=(28, 28)))\n\n # the number of layers is tunable\n for _ in range(hp.get('num_layers')):\n model.add(Dense(units=64, activation='relu'))\n model.add(Dense(10, activation='softmax'))\n\n # the learning rate is tunable\n model.compile(\n optimizer=Adam(lr=hp.get('learning_rate')),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n return model", "Instantiate CloudTuner\nNext, we instantiate an instance of the CloudTuner. We will define our tuning hyperparameters and pass them into the constructor as the parameter hyperparameters.\nWe also set the objective ('accuracy') to measure the performance of each trial, and we shall keep the number of trials small (5) for the purpose of this demonstration.", "# Configure the search space\nHPS = keras_tuner.HyperParameters()\nHPS.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='log')\nHPS.Int('num_layers', 2, 10)\n\ntuner = CloudTuner(\n build_model,\n project_id=PROJECT_ID,\n region=REGION,\n objective='accuracy',\n hyperparameters=HPS,\n max_trials=5,\n directory='tmp_dir/1')", "Let's use the search_space_summary() method to display what the search space for this optimization study looks like.", "tuner.search_space_summary()", "Search\nLet's now execute our search for this optimization study with the search() method. This method takes the same parameters as the fit() method in TF.keras API model instance.", "tuner.search(x=x, y=y, epochs=10, validation_data=(val_x, val_y))", "Results\nNext, we use the results_summary() method to get a summary of the trials that were tried in this optimization study.", "tuner.results_summary()", "Get the Best Model\nNow, let's get the best model from the study using the get_best_models() method. The parameter num specifies the topmost number of models. In our case, we set it to 1 for the best overall model. 
The method returns a list (of models), so we use index of 0 to get the model out of the list.", "model = tuner.get_best_models(num_models=1)[0]\n\nprint(model)\nprint(model.weights)", "Tutorial: Using an input pipeline with datasets\nIn this example we will build training pipeline that uses tf.data.datasets for training the model.", "import tensorflow as tf\nimport tensorflow_datasets as tfds", "Load MNIST Data", "(ds_train, ds_test), ds_info = tfds.load(\n 'mnist',\n split=['train', 'test'],\n shuffle_files=True,\n as_supervised=True,\n with_info=True,\n)\n\n# tfds.load introduces a new logger which results in duplicate log messages.\n# To mitigate this issue following removes Jupyter notebook root logger handler. More details @\n# https://stackoverflow.com/questions/6729268/log-messages-appearing-twice-with-python-logging\n\nimport logging\nlogger = logging.getLogger()\nlogger.handlers = []", "Build training pipeline\nBuild a training and evaluation pipeline using ds.map, ds.cache, ds.shuffle, ds.batch, and ds.prefetch. For more details on building high performance pipelines refer to data performance", "def normalize_img(image, label):\n \"\"\"Normalizes images: `uint8` -> `float32`.\"\"\"\n return tf.cast(image, tf.float32) / 255., label\n\n\nds_train = ds_train.map(\n normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)\nds_train = ds_train.cache()\nds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)\nds_train = ds_train.batch(128)\nds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)\n\nds_test = ds_test.map(\n normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)\nds_test = ds_test.batch(128)\nds_test = ds_test.cache()\nds_test = ds_test.prefetch(tf.data.experimental.AUTOTUNE)", "Create and train the model", "def build_pipeline_model(hp):\n model = Sequential()\n model.add(Flatten(input_shape=(28, 28, 1)))\n\n # the number of layers is tunable\n for _ in range(hp.get('num_layers')):\n model.add(Dense(units=64, activation='relu'))\n model.add(Dense(10, activation='softmax'))\n\n # the learning rate is tunable\n model.compile(\n optimizer=Adam(lr=hp.get('learning_rate')),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n return model\n\n# Configure the search space\npipeline_HPS = keras_tuner.HyperParameters()\npipeline_HPS.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='log')\npipeline_HPS.Int('num_layers', 2, 10)\n\npipeline_tuner = CloudTuner(\n build_pipeline_model,\n project_id=PROJECT_ID,\n region=REGION,\n objective='accuracy',\n hyperparameters=pipeline_HPS,\n max_trials=5,\n directory='tmp_dir/2')\n\npipeline_tuner.search(x=ds_train, epochs=10, validation_data=ds_test)\n\npipeline_tuner.results_summary()\n\npipeline_model = pipeline_tuner.get_best_models(num_models=1)[0]\nprint(pipeline_model)\nprint(pipeline_model.weights)", "Tutorial: Using a Study Configuration\nNow, let's repeat this study but this time the search space is passed in as a Vizier study_config.\nCreate the Study Configuration\nLet's start by constructing the study config for optimizing the accuracy of the model with the hyperparameters number of layers and learning rate, just as we did before.", "# Configure the search space\nSTUDY_CONFIG = {\n 'algorithm': 'ALGORITHM_UNSPECIFIED',\n 'metrics': [{\n 'goal': 'MAXIMIZE',\n 'metric': 'accuracy'\n }],\n 'parameters': [{\n 'discrete_value_spec': {\n 'values': [0.0001, 0.001, 0.01]\n },\n 'parameter': 'learning_rate',\n 'type': 'DISCRETE'\n }, {\n 'integer_value_spec': {\n 'max_value': 
10,\n 'min_value': 2\n },\n 'parameter': 'num_layers',\n 'type': 'INTEGER'\n }, {\n 'discrete_value_spec': {\n 'values': [32, 64, 96, 128]\n },\n 'parameter': 'units',\n 'type': 'DISCRETE'\n }],\n 'automatedStoppingConfig': {\n 'decayCurveStoppingConfig': {\n 'useElapsedTime': True\n }\n }\n}", "Instantiate CloudTuner\nNext, we instantiate an instance of the CloudTuner. In this instantiation, we replace the hyperparameters and objective parameters with the study_config parameter.", "tuner = CloudTuner(\n build_model,\n project_id=PROJECT_ID,\n region=REGION,\n study_config=STUDY_CONFIG,\n max_trials=10,\n directory='tmp_dir/3')", "Let's use the search_space_summary() method to display what the search space for this optimization study looks like.", "tuner.search_space_summary()", "Search\nLet's now execute our search for this optimization study with the search() method.", "tuner.search(x=x, y=y, epochs=5, steps_per_epoch=2000, validation_steps=1000, validation_data=(val_x, val_y))", "Results\nNow let's use the results_summary() method to get a summary of the trials that were tried in this optimization study.", "tuner.results_summary()", "Tutorial: Distributed Tuning\nLet's run multiple tuning loops concurrently using multiple threads. To run distributed tuning, multiple tuners should share the same study_id, but different tuner_ids.", "from multiprocessing.dummy import Pool\n# If you are running this tutorial in a notebook locally, you may run multiple\n# tuning loops concurrently using multi-processes instead of multi-threads.\n# from multiprocessing import Pool\n\nimport time\nimport datetime\n\nSTUDY_ID = 'Tuner_study_{}'.format(\n datetime.datetime.now().strftime('%Y%m%d_%H%M%S'))\n\n\ndef single_tuner(tuner_id):\n \"\"\"Instantiate a `CloudTuner` and set up its `tuner_id`.\n\n Args:\n tuner_id: Integer.\n Returns:\n A CloudTuner.\n \"\"\"\n tuner = CloudTuner(\n build_model,\n project_id=PROJECT_ID,\n region=REGION,\n objective='accuracy',\n hyperparameters=HPS,\n max_trials=18,\n study_id=STUDY_ID,\n directory=('tmp_dir/cloud/%s' % (STUDY_ID)))\n tuner.tuner_id = str(tuner_id)\n return tuner\n\n\ndef search_fn(tuner):\n # Start searching from different time points for each worker to avoid `model.build` collision.\n time.sleep(int(tuner.tuner_id) * 2)\n tuner.search(x=x, y=y, epochs=5, validation_data=(val_x, val_y), verbose=0)\n return tuner\n", "Search\nLet's now execute multiple search loops in parallel for this study with the search() method.", "# Number of search loops we would like to run in parallel\nnum_parallel_trials = 4\ntuners = [single_tuner(i) for i in range(num_parallel_trials)]\np = Pool(processes=num_parallel_trials)\nresult = p.map(search_fn, tuners)\np.close()\np.join()", "Results\nNow let's use the results_summary() method to get a summary of the trials (from all the search loops) that were tried in this optimization study.", "result[0].results_summary()", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.16/_downloads/decoding_rsa.ipynb
bsd-3-clause
[ "%matplotlib inline", "Representational Similarity Analysis\nRepresentational Similarity Analysis is used to perform summary statistics\non supervised classifications where the number of classes is relatively high.\nIt consists in characterizing the structure of the confusion matrix to infer\nthe similarity between brain responses and serves as a proxy for characterizing\nthe space of mental representations [1] [2] [3]_.\nIn this example, we perform RSA on responses to 24 object images (among\na list of 92 images). Subjects were presented with images of human, animal\nand inanimate objects [4]_. Here we use the 24 unique images of faces\nand body parts.\n<div class=\"alert alert-info\"><h4>Note</h4><p>this example will download a very large (~6GB) file, so we will not\n build the images below.</p></div>\n\nReferences\n.. [1] Shepard, R. \"Multidimensional scaling, tree-fitting, and clustering.\"\n Science 210.4468 (1980): 390-398.\n.. [2] Laakso, A. & Cottrell, G.. \"Content and cluster analysis:\n assessing representational similarity in neural systems.\" Philosophical\n psychology 13.1 (2000): 47-76.\n.. [3] Kriegeskorte, N., Marieke, M., & Bandettini. P. \"Representational\n similarity analysis-connecting the branches of systems neuroscience.\"\n Frontiers in systems neuroscience 2 (2008): 4.\n.. [4] Cichy, R. M., Pantazis, D., & Oliva, A. \"Resolving human object\n recognition in space and time.\" Nature neuroscience (2014): 17(3),\n 455-462.", "# Authors: Jean-Remi King <jeanremi.king@gmail.com>\n# Jaakko Leppakangas <jaeilepp@student.jyu.fi>\n# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\nimport numpy as np\nfrom pandas import read_csv\nimport matplotlib.pyplot as plt\n\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.manifold import MDS\n\nimport mne\nfrom mne.io import read_raw_fif, concatenate_raws\nfrom mne.datasets import visual_92_categories\n\nprint(__doc__)\n\ndata_path = visual_92_categories.data_path()\n\n# Define stimulus - trigger mapping\nfname = op.join(data_path, 'visual_stimuli.csv')\nconds = read_csv(fname)\nprint(conds.head(5))", "Let's restrict the number of conditions to speed up computation", "max_trigger = 24\nconds = conds[:max_trigger] # take only the first 24 rows", "Define stimulus - trigger mapping", "conditions = []\nfor c in conds.values:\n cond_tags = list(c[:2])\n cond_tags += [('not-' if i == 0 else '') + conds.columns[k]\n for k, i in enumerate(c[2:], 2)]\n conditions.append('/'.join(map(str, cond_tags)))\nprint(conditions[:10])", "Let's make the event_id dictionary", "event_id = dict(zip(conditions, conds.trigger + 1))\nevent_id['0/human bodypart/human/not-face/animal/natural']", "Read MEG data", "n_runs = 4 # 4 for full data (use less to speed up computations)\nfname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif')\nraws = [read_raw_fif(fname % block) for block in range(n_runs)]\nraw = concatenate_raws(raws)\n\nevents = mne.find_events(raw, min_duration=.002)\n\nevents = events[events[:, 2] <= max_trigger]", "Epoch data", "picks = mne.pick_types(raw.info, meg=True)\nepochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None,\n picks=picks, tmin=-.1, tmax=.500, preload=True)", "Let's plot some conditions", 
"epochs['face'].average().plot()\nepochs['not-face'].average().plot()", "Representational Similarity Analysis (RSA) is a neuroimaging-specific\nappelation to refer to statistics applied to the confusion matrix\nalso referred to as the representational dissimilarity matrices (RDM).\nCompared to the approach from Cichy et al. we'll use a multiclass\nclassifier (Multinomial Logistic Regression) while the paper uses\nall pairwise binary classification task to make the RDM.\nAlso we use here the ROC-AUC as performance metric while the\npaper uses accuracy. Finally here for the sake of time we use\nRSA on a window of data while Cichy et al. did it for all time\ninstants separately.", "# Classify using the average signal in the window 50ms to 300ms\n# to focus the classifier on the time interval with best SNR.\nclf = make_pipeline(StandardScaler(),\n LogisticRegression(C=1, solver='lbfgs'))\nX = epochs.copy().crop(0.05, 0.3).get_data().mean(axis=2)\ny = epochs.events[:, 2]\n\nclasses = set(y)\ncv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)\n\n# Compute confusion matrix for each cross-validation fold\ny_pred = np.zeros((len(y), len(classes)))\nfor train, test in cv.split(X, y):\n # Fit\n clf.fit(X[train], y[train])\n # Probabilistic prediction (necessary for ROC-AUC scoring metric)\n y_pred[test] = clf.predict_proba(X[test])", "Compute confusion matrix using ROC-AUC", "confusion = np.zeros((len(classes), len(classes)))\nfor ii, train_class in enumerate(classes):\n for jj in range(ii, len(classes)):\n confusion[ii, jj] = roc_auc_score(y == train_class, y_pred[:, jj])\n confusion[jj, ii] = confusion[ii, jj]", "Plot", "labels = [''] * 5 + ['face'] + [''] * 11 + ['bodypart'] + [''] * 6\nfig, ax = plt.subplots(1)\nim = ax.matshow(confusion, cmap='RdBu_r', clim=[0.3, 0.7])\nax.set_yticks(range(len(classes)))\nax.set_yticklabels(labels)\nax.set_xticks(range(len(classes)))\nax.set_xticklabels(labels, rotation=40, ha='left')\nax.axhline(11.5, color='k')\nax.axvline(11.5, color='k')\nplt.colorbar(im)\nplt.tight_layout()\nplt.show()", "Confusion matrix related to mental representations have been historically\nsummarized with dimensionality reduction using multi-dimensional scaling [1].\nSee how the face samples cluster together.", "fig, ax = plt.subplots(1)\nmds = MDS(2, random_state=0, dissimilarity='precomputed')\nchance = 0.5\nsummary = mds.fit_transform(chance - confusion)\ncmap = plt.get_cmap('rainbow')\ncolors = ['r', 'b']\nnames = list(conds['condition'].values)\nfor color, name in zip(colors, set(names)):\n sel = np.where([this_name == name for this_name in names])[0]\n size = 500 if name == 'human face' else 100\n ax.scatter(summary[sel, 0], summary[sel, 1], s=size,\n facecolors=color, label=name, edgecolors='k')\nax.axis('off')\nax.legend(loc='lower right', scatterpoints=1, ncol=2)\nplt.tight_layout()\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
michaelaye/planet4
notebooks/P4 stats.ipynb
isc
[ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Task:-Define-status-of-Planet-4\" data-toc-modified-id=\"Task:-Define-status-of-Planet-4-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Task: Define status of Planet 4</a></span><ul class=\"toc-item\"><li><span><a href=\"#Database-format\" data-toc-modified-id=\"Database-format-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Database format</a></span></li><li><span><a href=\"#Image-IDs\" data-toc-modified-id=\"Image-IDs-1.2\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Image IDs</a></span><ul class=\"toc-item\"><li><span><a href=\"#Cleaning-NaNs\" data-toc-modified-id=\"Cleaning-NaNs-1.2.1\"><span class=\"toc-item-num\">1.2.1&nbsp;&nbsp;</span>Cleaning NaNs</a></span></li><li><span><a href=\"#After-NaNs-are-removed\" data-toc-modified-id=\"After-NaNs-are-removed-1.2.2\"><span class=\"toc-item-num\">1.2.2&nbsp;&nbsp;</span>After NaNs are removed</a></span></li></ul></li><li><span><a href=\"#Classification-IDs\" data-toc-modified-id=\"Classification-IDs-1.3\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Classification IDs</a></span><ul class=\"toc-item\"><li><span><a href=\"#Uniqueness-within-Image_ID!\" data-toc-modified-id=\"Uniqueness-within-Image_ID!-1.3.1\"><span class=\"toc-item-num\">1.3.1&nbsp;&nbsp;</span>Uniqueness within Image_ID!</a></span></li></ul></li><li><span><a href=\"#Percentages-done.\" data-toc-modified-id=\"Percentages-done.-1.4\"><span class=\"toc-item-num\">1.4&nbsp;&nbsp;</span>Percentages done.</a></span></li><li><span><a href=\"#Separate-for-seasons\" data-toc-modified-id=\"Separate-for-seasons-1.5\"><span class=\"toc-item-num\">1.5&nbsp;&nbsp;</span>Separate for seasons</a></span><ul class=\"toc-item\"><li><span><a href=\"#Percentages-done\" data-toc-modified-id=\"Percentages-done-1.5.1\"><span class=\"toc-item-num\">1.5.1&nbsp;&nbsp;</span>Percentages done</a></span></li><li><span><a href=\"#MDAP-2014\" data-toc-modified-id=\"MDAP-2014-1.5.2\"><span class=\"toc-item-num\">1.5.2&nbsp;&nbsp;</span>MDAP 2014</a></span></li></ul></li></ul></li><li><span><a href=\"#Problem-??\" data-toc-modified-id=\"Problem-??-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Problem ??</a></span><ul class=\"toc-item\"><li><span><a href=\"#Group-by-user_name-instead-of-classification_id\" data-toc-modified-id=\"Group-by-user_name-instead-of-classification_id-2.1\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Group by user_name instead of classification_id</a></span><ul class=\"toc-item\"><li><span><a href=\"#The-subframe-known-as-jp7\" data-toc-modified-id=\"The-subframe-known-as-jp7-2.1.1\"><span class=\"toc-item-num\">2.1.1&nbsp;&nbsp;</span>The subframe known as jp7</a></span></li></ul></li><li><span><a href=\"#Some-instructive-plots\" data-toc-modified-id=\"Some-instructive-plots-2.2\"><span class=\"toc-item-num\">2.2&nbsp;&nbsp;</span>Some instructive plots</a></span><ul class=\"toc-item\"><li><span><a href=\"#Plot-over-required-constraint\" data-toc-modified-id=\"Plot-over-required-constraint-2.2.1\"><span class=\"toc-item-num\">2.2.1&nbsp;&nbsp;</span>Plot over required constraint</a></span></li><li><span><a href=\"#How-do-the-different-existing-user-counts-distribute\" data-toc-modified-id=\"How-do-the-different-existing-user-counts-distribute-2.2.2\"><span class=\"toc-item-num\">2.2.2&nbsp;&nbsp;</span>How do the different existing user counts distribute</a></span></li></ul></li></ul></li></ul></div>\n\nTask: Define 
status of Planet 4\nFirst import the pandas data table analyis library and check which version I'm using (as I'm constantly changing that to keep up-to-date.)", "from planet4 import io\nimport pandas as pd", "Database format\nIn a different notebook (the document you are looking at is called an IPython Notebook) I have converted the mongodb database text dump from Planet 4 into \nHDF format. \nI saved it in a subformat for very fast read-speed into memory; the 2 GB file currently loads within 20 seconds on my Macbook Pro.\nBy the way, this HDF5 format is supported in IDL and Matlab as well, so I could provide this file as a download for Candy and others, if wanted.\nI save the object I get back here in the variable df, a shortcut for dataframe, which is the essential table object of the pandas library.", "df = pd.read_hdf(get_data.get_current_database_fname(), 'df')", "So, what did we receive in df (note that type 'object' often means string in our case, but could mean also a different complex datatype):", "df = pd.read_hdf(\"/Users/klay6683/local_data/2018-10-14_planet_four_classifications_queryable_cleaned.h5\")\n\ndf.info()\n\nfrom planet4 import stats\n\nobsids = df.image_name.unique()\n\nfrom tqdm import tqdm_notebook as tqdm\n\nresults = []\nfor obsid in tqdm(obsids):\n sub_df = df[df.image_name==obsid]\n results.append(stats.get_status_per_classifications(sub_df))\n\ns = pd.Series(results, index=obsids)\n\ns.describe()\n\n%matplotlib inline\n\ns.to_csv(\"current_status.csv\")\n\ns.hist(bins=30)\n\ns[s<50].max()\n\ns[s<50].shape\n\n!cat current_status.csv", "Here are the first 5 rows of the dataframe:", "pd.Series(df.image_name.unique()).to_csv(\"image_names.csv\", index=False)", "Image IDs\nFor a simple first task, let's get a list of unique image ids, to know how many objects have been published.", "img_ids = df.image_id.unique()\nprint img_ids", "We might have some NaN values in there, depending on how the database dump was created. Let's check if that's true.", "df.image_id.notnull().value_counts()", "If there's only True as an answer above, you can skip the nan-cleaning section\nCleaning NaNs", "df[df.image_id.isnull()].T # .T just to have it printed like a column, not a row", "In one version of the database dump, I had the last row being completely NaN, so I dropped it with the next command:", "#df = df.drop(10718113)", "Let's confirm that there's nothing with a NaN image_id now:", "df[df.image_id.isnull()]", "After NaNs are removed\nOk, now we should only get non-NaNs:", "img_ids = df.image_id.unique()\nimg_ids", "So, how many objects were online:", "no_all = len(img_ids)\nno_all", "Classification IDs\nNow we need to find out how often each image_id has been looked at. \nFor that we have the groupby functionality. \nSpecifically, because we want to know how many citizens have submitted a classification for each image_id, we need to group by the image_id and count the unique classification_ids within each image_id group. 
\nUniqueness within Image_ID!\nWe need to constrain for uniqueness because each classified object is included with the same classification_id and we don't want to count them more than once, because we are interested in the overall submission only for now.\nIn other words: Because the different fans, blobs and interesting things for one image_id have all been submitted with the same classification_id, I need to constrain to unique classification_ids, otherwise images with a lot of submitted items would appear 'more completed' just for having a lot of fan-content, and not for being analyzed by a lot of citizens, which is what we want.\nFirst, I confirm that classification_ids indeed have more than 1 entry, i.e. when there was more than one object classified by a user:", "df.groupby(df.classification_id, sort=False).size()", "Ok, that is the case.\nNow, group those classification_ids by the image_ids and save the grouping. Switch off sorting for speed, we want to sort by the counts later anyway.", "grouping = df.classification_id.groupby(df.image_id, sort=False)", "Aggregate each group by finding the size of the unique list of classification_ids.", "counts = grouping.agg(lambda x: x.unique().size)\ncounts", "Order the counts by value", "counts = counts.order(ascending=False)\ncounts", "Note also that the length of this counts data series is 98220, exactly the number of unique image_ids.\nPercentages done.\nBy constraining the previous data series for the value it has (the counts) and look at the length of the remaining data, we can determine the status of the finished rate.", "counts[counts >= 30].size", "That's pretty disappointing, but alas, the cold hard truth. \nThis means, taking all submitted years into account in the data, we have currently only the following percentage done:", "counts[counts>= 30].size / float(no_all) * 100", "Wishing to see higher values, I was for some moments contemplating if one maybe has to sum up the different counts to be correct, but I don't think that's it.\nThe way I see it, one has to decide in what 'phase-space' one works to determine the status of Planet4.\nEither in the phase space of total subframes or in the total number of classifications. And I believe to determine the finished state of Planet4 it is sufficient and actually easier to focus on the available number of subframes and determine how often each of them has been looked at.\nSeparate for seasons\nThe different seasons of our south polar observations are separated by several counts of the thousands digit in the image_id column of the original HiRISE image id, in P4 called image_name.", "from planet4 import helper_functions as hf\n\nhf.define_season_column(df)\n\nhf.unique_image_ids_per_season(df)\n\nno_all = df.season.value_counts()\nno_all", "Percentages done\nNow I code a short function with the code I used above to create the counts of classification_ids per image_id. Note again the restriction to uniqueness of classification_ids.", "def get_counts_per_classification_id(df, unique=True):\n grouping = df.classification_id.groupby(df.image_id, sort=False)\n # because I only grouped the classification_id column above, this function is only\n # applied to it. 
First, reduce to a unique list, and then save the size of that list.\n if unique:\n return grouping.agg(lambda x: x.unique().size)\n else:\n return grouping.size()\n\ndf.image_name.groupby(df.season).agg(lambda x:x.unique().size)\n\nno_all = df.image_id.groupby(df.season).agg(lambda x: x.unique().size)\nno_all\n\ndef done_per_season(season, limit, unique=True, in_percent=True):\n subdf = df[df.season == season]\n counts_per_classid = get_counts_per_classification_id(subdf, unique)\n no_done = counts_per_classid[counts_per_classid >= limit].size\n if in_percent:\n return 100.0 * no_done / no_all[season]\n else:\n return no_done\n\nfor season in [1,2,3]:\n print season\n print done_per_season(season, 30, in_percent=True)", "MDAP 2014", "reload(hf)\n\nseason1 = df.loc[df.season==1, :]\n\ninca = season1.loc[season1.image_name.str.endswith('_0985')]\n\nmanhattan = season1.loc[season1.image_name.str.endswith('_0935')]\n\nhf.get_status(inca)\n\nhf.get_status(manhattan)\n\nhf.get_status(season1)\n\ninca_images = \"\"\"PSP_002380_0985,PSP_002868_0985,PSP_003092_0985,PSP_003158_0985,PSP_003237_0985,PSP_003448_0985,PSP_003593_0985,PSP_003770_0815,PSP_003804_0985,PSP_003928_0815\"\"\" \n\ninca_images = inca_images.split(',')\n\ninca = df.loc[df.image_name.isin(inca_images),:]\n\nhf.get_status(inca, 25)\n\nfor img in inca_images:\n print img\n print hf.get_status(season1.loc[season1.image_name == img,:])\n\noneimage = season1.loc[season1.image_name == 'PSP_003928_0815',:]\n\nimg_ids = oneimage.image_id.unique()\n\ncounts = hf.classification_counts_per_image(season1)\n\ncounts[img_ids[0]]\n\ncontainer = []\nfor img_id in img_ids:\n container.append(counts[img_id])\n\nhist(container)\nsavefig('done_for_PSP_003928_0815.png')\n\ncounts = hf.classification_counts_per_image(df)\n\ncounts[counts >=30].size\n\ndf.info()", "In the following code I not only check for the different years, but also the influence on the demanded limit of counts to define a subframe as 'finished'.\nTo collect the data I create an empty dataframe with an index ranging through the different limits I want to check (i.e. range(30,101,10))", "import sys\nfrom collections import OrderedDict\nresults = pd.DataFrame(index=range(30,101,10))\nfor season in [1,2,3]:\n print season\n sys.stdout.flush() # to force a print out of the std buffer\n subdf = df[df.season == season]\n counts = get_counts_per_classification_id(subdf)\n values = OrderedDict()\n for limit in results.index:\n values[limit] = done_per_season(season, limit)\n results[season] = values.values()\n\nnp.round(results)", "Problem ??\nGroup by user_name instead of classification_id\nI realised that user_ids should provide just the same access to the performed counts, because each classification_id should have exactly one user_id, as they are created when that user clicks on Submit, right? \nAt least that's how I understood it.\nSo imagine my surprise when I found out it isn't the same answer. 
And unfortunately it looks like we have to reduce our dataset even further by apparent multiple submissions of the same classification, but let's see.\nFirst, create the respective function to determine counts via the user_name instead of classification_id after grouping for image_id.\nThis first grouping by image_id is the essential step for the determination how often a particular image_id has been worked on, so that doesn't change.", "def get_counts_per_user_name(df):\n grouping = df.user_name.groupby(df.image_id, sort=False)\n counts = grouping.agg(lambda x: x.unique().size)\n# counts = counts.order(ascending=False)\n return counts\n\ncounts_by_user = get_counts_per_user_name(df)\ncounts_by_user", "Compare that again to the output for classifying per classification_id:", "counts_by_class = get_counts_per_classification_id(df)\ncounts_by_class", "So, not the same result! Let's dig deeper.\nThe subframe known as jp7\nFocus on one image_id and study what is happening there. I first get a sub-table for the subframe 'jp7' and determine the user_names that worked on that subframe.\nThen I loop over the names, filtering another sub-part of the table where the current user worked on jp7. \nAccording to the hypothesis that a classification_id is created for a user at submisssion time and the idea that a user should not see an image twice, there should only be one classification_id in that sub-part.\nI am testing that by checking if the unique list of classification_ids has a length $>1$. If it does, I print out the user_name.", "jp7 = df[df.image_id == 'APF0000jp7']\nunique_users = jp7.user_name.unique()\n# having the list of users that worked on jp7\nfor user in unique_users:\n subdf = jp7[jp7.user_name == user]\n if len(subdf.classification_id.unique()) > 1:\n print user, len(subdf)", "Ok, so let's have a look at the data for the first user_name for the subframe jp7", "jp7[jp7.user_name == 'not-logged-in-8d495c463aeffd67c08b2dfc1141f33b']", "First note that the creation time of these 2 different classifications is different, so it looks like this user has seen the jp7 subframe more than once.\nBut then when you scroll this html table to the right, you will notice that the submitted object has the exact same coordinates in both classifications? \nHow likely is it, that the user finds the exact same coordinates in less than 60 seconds?\nSo the question is, is this really a new classification and the user has done it twice? Or was the same thing submitted twice? Hopefully Meg knows the answer to that.\nSome instructive plots\nPlot over required constraint\nI found it instructive to look at how the status of finished data depends on the limit we put on the reached counts per image_id (i.e. 
subframe).\nAlso, how does it change when looking for unique user_names per image_id instead of unique classification_ids.", "results[[2,3]].plot()\nxlabel('Required number of analyses submitted to be considered \"done\".')\nylabel('Current percentage of dataset finished [%]')\ntitle(\"Season 2 and 3 status, depending on definition of 'done'.\")\nsavefig('Season2_3_status.png', dpi=200)\n\nx = range(1,101)\nper_class = []\nper_user = []\nfor val in x:\n per_class.append(100 * counts_by_class[counts_by_class >= val].size/float(no_all))\n per_user.append(100 * counts_by_user[counts_by_user >= val].size/float(no_all))\n\nplot(x,per_class)\nplot(x, per_user)\nxlabel('Counts constraint for _finished_ criterium')\nylabel('Current percent finished [%]')", "Ok, so not that big a deal until we require more than 80 classifications to be done.\nHow do the different existing user counts distribute\nThe method 'value_counts()' basically delivers a histogram on the counts_by_user data series.\nIn other words, it shows how the frequency of classifications distribute over the dataset. It shows an to be expected peak close to 100, because that's what we are aiming now and the system does today not anymore show a subframe that has been seen 100 times.\nBut it also shows quite some waste in citizen power from all the counts that went for counts > 100.", "counts_by_user.value_counts()\n\ncounts_by_user.value_counts().plot(style='*')\n\nusers_work = df.classification_id.groupby(df.user_name).agg(lambda x: x.unique().size)\n\nusers_work.order(ascending=False)[:10]\n\ndf[df.user_name=='gwyneth walker'].classification_id.value_counts()\n\nimport helper_functions as hf\nreload(hf)\n\nhf.classification_counts_for_user('Kitharode', df).hist?\n\nhf.classification_counts_for_user('Paul Johnson', df)\n\nnp.isnan(df.marking)\n\ndf.marking" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
aspuru-guzik-group/selfies
docs/source/tutorial.ipynb
apache-2.0
[ "Tutorial\nThe Basics\nWe begin by importing selfies.", "import selfies as sf", "First, let's try translating between SMILES and SELFIES - as an example, we will use benzaldehyde. To translate from SMILES to SELFIES, use the selfies.encoder function, and to translate from SMILES back to SELFIES, use the selfies.decoder function.", "original_smiles = \"O=Cc1ccccc1\" # benzaldehyde\n\ntry:\n \n encoded_selfies = sf.encoder(original_smiles) # SMILES -> SELFIES\n decoded_smiles = sf.decoder(encoded_selfies) # SELFIES -> SMILES\n \nexcept sf.EncoderError as err: \n pass # sf.encoder error...\nexcept sf.DecoderError as err: \n pass # sf.decoder error...\n\nencoded_selfies\n\ndecoded_smiles", "Note that original_smiles and decoded_smiles are different strings, but they both represent benzaldehyde. Thus, when comparing the two SMILES strings, string equality should not be used. Insead, use RDKit to check whether the SMILES strings represent the same molecule.", "from rdkit import Chem\n\nChem.CanonSmiles(original_smiles) == Chem.CanonSmiles(decoded_smiles)", "Customizing SELFIES\nThe SELFIES grammar is derived dynamically from a set of semantic constraints, which assign bonding capacities to various atoms. Let's customize the semantic constraints that selfies operates on. By default, the following constraints are used:", "sf.get_preset_constraints(\"default\")", "These constraints map atoms (they keys) to their bonding capacities (the values). The special ? key maps to the bonding capacity for all atoms that are not explicitly listed in the constraints. For example, S and Li are constrained to a maximum of 6 and 8 bonds, respectively. Every SELFIES string can be decoded into a molecule that obeys the current constraints.", "sf.decoder(\"[Li][=C][C][S][=C][C][#S]\")", "But suppose that we instead wanted to constrain S and Li to a maximum of 2 and 1 bond(s), respectively. To do so, we create a new set of constraints, and tell selfies to operate on them using selfies.set_semantic_constraints.", "new_constraints = sf.get_preset_constraints(\"default\")\nnew_constraints['Li'] = 1\nnew_constraints['S'] = 2\n\nsf.set_semantic_constraints(new_constraints)", "To check that the update was succesful, we can use selfies.get_semantic_constraints, which returns the semantic constraints that selfies is currently operating on.", "sf.get_semantic_constraints()", "Our previous SELFIES string is now decoded like so. Notice that the specified bonding capacities are met, with every S and Li making only 2 and 1 bonds, respectively.", "sf.decoder(\"[Li][=C][C][S][=C][C][#S]\")", "Finally, to revert back to the default constraints, simply call:", "sf.set_semantic_constraints()", "Please refer to the API reference for more details and more preset constraints.\nSELFIES in Practice\nLet's use a simple example to show how selfies can be used in practice, as well as highlight some convenient utility functions from the library. We start with a toy dataset of SMILES strings. As before, we can use selfies.encoder to convert the dataset into SELFIES form.", "smiles_dataset = [\"COC\", \"FCF\", \"O=O\", \"O=Cc1ccccc1\"]\nselfies_dataset = list(map(sf.encoder, smiles_dataset))\n\nselfies_dataset", "The function selfies.len_selfies computes the symbol length of a SELFIES string. We can use it to find the maximum symbol length of the SELFIES strings in the dataset.", "max_len = max(sf.len_selfies(s) for s in selfies_dataset)\nmax_len", "To extract the SELFIES symbols that form the dataset, use selfies.get_alphabet_from_selfies. 
Here, we add [nop] to the alphabet, which is a special padding character that selfies recognizes.", "alphabet = sf.get_alphabet_from_selfies(selfies_dataset)\nalphabet.add(\"[nop]\")\n\nalphabet = list(sorted(alphabet))\nalphabet", "Then, create a mapping between the alphabet SELFIES symbols and indices.", "vocab_stoi = {symbol: idx for idx, symbol in enumerate(alphabet)}\nvocab_itos = {idx: symbol for symbol, idx in vocab_stoi.items()}\n\nvocab_stoi", "SELFIES provides some convenience methods to convert between SELFIES strings and label (integer) and one-hot encodings. Using the first entry of the dataset (dimethyl ether) as an example:", "dimethyl_ether = selfies_dataset[0]\nlabel, one_hot = sf.selfies_to_encoding(dimethyl_ether, vocab_stoi, pad_to_len=max_len)\n\nlabel\n\none_hot\n\ndimethyl_ether = sf.encoding_to_selfies(one_hot, vocab_itos, enc_type=\"one_hot\")\ndimethyl_ether\n\nsf.decoder(dimethyl_ether) # sf.decoder ignores [nop]", "If different encoding strategies are desired, selfies.split_selfies can be used to tokenize a SELFIES string into its individual symbols.", "list(sf.split_selfies(\"[C][O][C]\"))", "Please refer to the API reference for more details and utility functions." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
andrewzwicky/puzzles
FiveThirtyEightRiddler/2017-09-15/classic_sticks.ipynb
mit
[ "Riddler Classic\nThis week’s Classic, from Spreck Rosekrans, continues our camping theme. Here are four questions of increasing difficulty about finding sticks in the woods, breaking them and making shapes:\n\nIf you break a stick in two places at random, forming three pieces, what is the probability of being able to form a triangle with the pieces?\nIf you select three sticks, each of random length (between 0 and 1), what is the probability of being able to form a triangle with them?\nIf you break a stick in two places at random, what is the probability of being able to form an acute triangle — where each angle is less than 90 degrees — with the pieces?\nIf you select three sticks, each of random length (between 0 and 1), what is the probability of being able to form an acute triangle with the sticks?", "import random\n\nN = 1000000", "1", "triangle_count = 0\nfor _ in range(N):\n a,b = sorted((random.random(), random.random()))\n x,y,z = (a,b-a,1-b)\n if x<0.5 and y<0.5 and z<0.5:\n triangle_count += 1\n \ntriangle_count / N", "2", "triangle_count = 0\nfor _ in range(N):\n sticks = sorted((random.random(), random.random(), random.random()))\n if sticks[2] < sticks[0] + sticks[1]:\n triangle_count += 1\n \ntriangle_count / N", "3", "triangle_count = 0\nfor _ in range(N):\n a,b = sorted((random.random(), random.random()))\n x,y,z = (a,b-a,1-b)\n if (x**2 + y**2 > z**2) and (x**2 + z**2 > y**2) and (z**2 + y**2 > x**2):\n triangle_count += 1\n \ntriangle_count / N", "4", "triangle_count = 0\nfor _ in range(N):\n x,y,z = (random.random(), random.random(), random.random())\n if (x**2 + y**2 > z**2) and (x**2 + z**2 > y**2) and (z**2 + y**2 > x**2):\n triangle_count += 1\n \ntriangle_count / N" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tylere/docker-tmpnb-ee
notebooks/1 - IPython Notebook Examples/IPython Project Examples/Interactive Widgets/Widget List.ipynb
apache-2.0
[ "Index - Back - Next\nWidget List\nComplete list\nFor a complete list of the widgets available to you, you can list the classes in the widget namespace (as seen below). Widget and DOMWidget, not listed below, are base classes.", "from IPython.html import widgets\n[n for n in dir(widgets) if not n.endswith('Widget') and n[0] == n[0].upper() and not n[0] == '_']", "Numeric widgets\nThere are 8 widgets distributed with IPython that are designed to display numeric values. Widgets exist for displaying integers and floats, both bounded and unbounded. The integer widgets share a similar naming scheme to their floating point counterparts. By replacing Float with Int in the widget name, you can find the Integer equivalent.\nFloatSlider", "widgets.FloatSlider(\n value=7.5,\n min=5.0,\n max=10.0,\n step=0.1,\n description='Test:',\n)", "Sliders can also be displayed vertically.", "widgets.FloatSlider(\n value=7.5,\n min=5.0,\n max=10.0,\n step=0.1,\n description='Test',\n orientation='vertical',\n)", "FloatProgress", "widgets.FloatProgress(\n value=7.5,\n min=5.0,\n max=10.0,\n step=0.1,\n description='Loading:',\n)", "BoundedFloatText", "widgets.BoundedFloatText(\n value=7.5,\n min=5.0,\n max=10.0,\n description='Text:',\n)", "FloatText", "widgets.FloatText(\n value=7.5,\n description='Any:',\n)", "Boolean widgets\nThere are two widgets that are designed to display a boolean value.\nToggleButton", "widgets.ToggleButton(\n description='Click me',\n value=False,\n)", "Checkbox", "widgets.Checkbox(\n description='Check me',\n value=True,\n)", "Selection widgets\nThere are four widgets that can be used to display single selection lists, and one that can be used to display multiple selection lists. All inherit from the same base class. You can specify the enumeration of selectable options by passing a list. You can also specify the enumeration as a dictionary, in which case the keys will be used as the item displayed in the list and the corresponding value will be returned when an item is selected.\nDropdown", "from IPython.display import display\nw = widgets.Dropdown(\n options=['1', '2', '3'],\n value='2',\n description='Number:',\n)\ndisplay(w)\n\nw.value", "The following is also valid:", "w = widgets.Dropdown(\n options={'One': 1, 'Two': 2, 'Three': 3},\n value=2,\n description='Number:',\n)\ndisplay(w)\n\nw.value", "RadioButtons", "widgets.RadioButtons(\n description='Pizza topping:',\n options=['pepperoni', 'pineapple', 'anchovies'],\n)", "Select", "widgets.Select(\n description='OS:',\n options=['Linux', 'Windows', 'OSX'],\n)", "ToggleButtons", "widgets.ToggleButtons(\n description='Speed:',\n options=['Slow', 'Regular', 'Fast'],\n)", "SelectMultiple\nMultiple values can be selected with <kbd>shift</kbd> and <kbd>ctrl</kbd> pressed and mouse clicks or arrow keys.", "w = widgets.SelectMultiple(\n description=\"Fruits\",\n options=['Apples', 'Oranges', 'Pears']\n)\ndisplay(w)\n\nw.value", "String widgets\nThere are 4 widgets that can be used to display a string value. Of those, the Text and Textarea widgets accept input. 
The Latex and HTML widgets display the string as either Latex or HTML respectively, but do not accept input.\nText", "widgets.Text(\n description='String:',\n value='Hello World',\n)", "Textarea", "widgets.Textarea(\n description='String:',\n value='Hello World',\n)", "Latex", "widgets.Latex(\n value=\"$$\\\\frac{n!}{k!(n-k)!} = \\\\binom{n}{k}$$\",\n)", "HTML", "widgets.HTML(\n value=\"Hello <b>World</b>\"\n)", "Button", "widgets.Button(description='Click me')", "Index - Back - Next" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ijpulidos/solar-physics-ex
rotation/Acquiring_Data.ipynb
mit
[ "# Modulos básicos\nimport numpy as np\nimport time\n#from pylab import imshow\nimport matplotlib.pyplot as plt\nfrom tqdm import tqdm, tnrange, tqdm_notebook\n# Modulo para manejo de fecha\nfrom datetime import datetime, timedelta\n# Modulos para astrofisica/solar\nimport astropy\nfrom sunpy.net import vso\nimport astropy.units as u\nfrom sunpy.map import Map\nfrom astropy.io import fits # to fix headers\n# Custom-made methods and classes for fixing headers\n#from lib.CompatMaps import sinehpc_wcs_frame_mapping", "Obtener los datos\nSe usa el cliente VSO de SunPy para obtener los datos automáticamente, solo se tienen que cambiar las fechas correspondientes. Fechas de interés para el proyecto son:\n* 2012/01/29 a 2012/01/30\n* 2013/03/04 a 2013/03/09\n* 2014/09/23 a 2014/09/28\n* 2015/09/03 a 2015/09/08\n* 2016/03/11 a 2016/03/16\n* 2016/08/28 a 2016/08/31\n* 2016/06/13 a 2016/06/16\n* 2016/03/29 a 2016/04/01\n* 2016/01/29 a 2016/02/01\n* 2016/08/13 a 2016/08/16\n* 2012/11/01 - 2012/11/06\n* 2015-11-25 - 2015-11-27\n* 2012/02/28 - 2012/03/02\n* 2012/02/18 - 2012/02/21\n* 2011/09/29 - 2011/10/02\n* 2011/10/08 - 2011/10/11\n* 2012/05/01 - 2012/05/04\n* 2012/07/01 - 2012/07/04", "# defining datetime range and number of samples \ndates = [] # where the dates pairs are going to be stored\ndate_start = datetime(2012,7,1,0,0,0)\ndate_end = datetime(2012,7,3,23,59,59)\ndate_samples = 35 # Number of samples to take between dates\ndate_delta = (date_end - date_start)/date_samples # How frequent to take a sample\ndate_window = timedelta(minutes=1.0)\ntemp_date = date_start\nwhile temp_date < date_end:\n dates.append((str(temp_date),str(temp_date+date_window)))\n temp_date += date_delta \n\nfor i in range(3):\n # definir instrumento\n instrument = 'hmi'\n # definir rango de longitud de onda (min,max)\n #wavelength = 400*u.nm , 700*u.nm\n\n # Query data - Buscar datos en esas fechas\n t = 0\n for i in dates:\n tstart, tend = i[0], i[1]\n #data_client = vso.VSOClient()\n data_client = vso.VSOClient(url='https://vso.nascom.nasa.gov/API/VSOi_rpc_literal.wsdl') # workaround when VSO server fail \n # more info at: https://riot.im/app/#/room/!MeRdFpEonLoCwhoHeT:matrix.org/$14939136771403280rCVLc:matrix.org \n data_query = data_client.query(vso.attrs.Time(tstart, tend), \\\n vso.attrs.Instrument(instrument), vso.attrs.Physobs(\"intensity\"))\n print(\"Found \",len(data_query),\" records from \", tstart, \" to \", tend)\n print(\"Time range: \", data_query.time_range())\n print(\"Size in KB: \", data_query.total_size())\n data_dir = '/home/ivan/projects/Physics/solar/solar-physics-ex/rotation/data/set18/{file}.fits'\n results = data_client.get(data_query, path=data_dir)\n if t%2 == 0:\n time.sleep(30)\n t+=1", "Acquiring data from helioviewer\nTo this date (04/05/2017) the VSO server for fits files docs.virtualsolar.org is down and has been for some hours. 
So I had to choose to use Helioviewer to download data, which come in jpg/png files.", "from sunpy.net.helioviewer import HelioviewerClient\n\nhv = HelioviewerClient()\ndatasources = hv.get_data_sources()\n\n# print a list of datasources and their associated ids\nfor observatory, instruments in datasources.items():\n for inst, detectors in instruments.items():\n for det, measurements in detectors.items():\n for meas, params in measurements.items():\n print(\"%s %s: %d\" % (observatory, params['nickname'], params['sourceId']))\n\nfilepath = hv.download_jp2('2012/07/05 00:30:00', directory=\"data/HMI/set1\", observatory='SDO', instrument='HMI', detector='HMI', measurement='continuum')\nhmi = Map(filepath)\n#xrange = Quantity([200, 550], 'arcsec')\n#yrange = Quantity([-400, 200], 'arcsec')\nhmi.submap(xrange, yrange).peek()\n# Si falla\n# < Cadair> mefistofeles: install glymur and openjpeg >1.5" ]
[ "code", "markdown", "code", "markdown", "code" ]
bkimo/discrete-math-with-python
lab2-bubble-sort.ipynb
mit
[ "Algorithm Complexity: Array and Bubble Sort\nAn algorithm is a list of instructions for doing something, and algorithm design is essential to computer science. Here we will study simple algorithms of sorting an array of numbers. \nAn array is a sequence of variables $x_1, x_2, x_3, ..., x_n$; e.g., \n\nNotice that the order of the elements in an array matters, and an array can have duplicate entries.\nA sort is an algorithm that guarantees that\n $$ x_1\\leq x_2\\leq x_3\\leq \\cdots \\leq x_n $$\n after the algorithm finishes.\nBubble sort\nLet $x_1, x_2, ..., x_n$ be an array whose elements can be compared by $\\leq $. The following algorithm is called a bubble sort.\n\nThe bubble sort makes multiple passes through an array. It compares adjacent items and exchanges those that are out of order. Each pass through the array places the next largest value in its proper place. In essence, each item “bubbles” up to the location where it belongs.\nFollowing figure shows the first pass of a bubble sort. The shaded items are being compared to see if they are out of order. If there are $n$ items in the array, then there are $n−1$ pairs of items that need to be compared on the first pass. It is important to note that once the largest value in the array is part of a pair, it will continually be moved along until the pass is complete.\n\nAt the start of the second pass, the largest value is now in place. There are $n−1$ items left to sort, meaning that there will be $n−2$ pairs. Since each pass places the next largest value in place, the total number of passes necessary will be $n−1$. After completing the $n−1$, the smallest item must be in the correct position with no further processing required. \nThe exchange operation, sometimes called a “swap” as in the algorithm, is slightly different in Python than in most other programming languages. Typically, swapping two elements in an array requires a temporary storage location (an additional memory location). A code fragment such as\n\nwill exchange the $i$th and $j$th items in the array. Without the temporary storage, one of the values would be overwritten.\nIn Python, it is possible to perform simultaneous assignment. The statement a,b=b,a will result in two assignment statements being done at the same time. Using simultaneous assignment, the exchange operation can be done in one statement.\n\nThe following example shows the complete bubbleSort function working on the array shown above.", "def bubbleSort(alist):\n for i in range(0, len(alist)-1):\n for j in range(0, len(alist)-1-i):\n if alist[j] > alist[j+1]:\n alist[j], alist[j+1] = alist[j+1], alist[j] \n\nalist = [54,26,93,17,77,31,44,55,20]\nbubbleSort(alist)\nprint(alist)", "To analyze the bubble sort, we should note that regardless of how the items are arranged in the initial array, $n−1$ passes will be made to sort an array of size n. Table below shows the number of comparisons for each pass. The total number of comparisons is the sum of the first $n−1$ integers. Recall that the sum of the first $n-1$ integers is $\\frac{n(n-1)}{2}$ This is still $\\mathcal{O}(n^2)$ comparisons. In the best case, if the list is already ordered, no exchanges will be made. However, in the worst case, every comparison will cause an exchange. On average, we exchange half of the time.\n\nRemark A bubble sort is often considered the most inefficient sorting method since it must exchange items before the final location is known. These “wasted” exchange operations are very costly. 
However, because the bubble sort makes passes through the entire unsorted portion of the list, it has the capability to do something most sorting algorithms cannot. In particular, if during a pass there are no exchanges, then we know that the list must have been sorted already. A bubble sort can be modified to stop early if it finds that the list has become sorted. This means that for lists that require just a few passes, a bubble sort may have an advantage in that it will recognize the sorted list and stop. The following shows this modification, which is often referred to as the short bubble.", "def shortBubbleSort(alist):\n exchanges = True\n passnum = len(alist)-1\n while passnum > 0 and exchanges:\n exchanges = False\n for i in range(passnum):\n# print(i)\n if alist[i]>alist[i+1]:\n exchanges = True\n alist[i], alist[i+1] = alist[i+1], alist[i]\n passnum = passnum-1\n# print('passnum = ', passnum)\n \nalist = [54,26,93,17,77,31,44,55,20]\n#alist = [17, 20, 26, 31, 44, 54, 55, 77, 93]\nshortBubbleSort(alist)\nprint(alist)", "Plotting Algorithmic Time Complexity of a Function using Python\nWe may take an idea of using the Python Timer and timeit methods to create a simple plotting scheme using matplotlib.\nHere is the code. The code is quite simple. Perhaps the only interesting thing here is the use of partial to pass in the function and the $N$ parameter into Timer. You can add in your own function here and plot the time complexity.", "from matplotlib import pyplot\nimport numpy as np\nimport timeit\nfrom functools import partial\nimport random\n\ndef fconst(N):\n \"\"\"\n O(1) function\n \"\"\"\n x = 1\n\ndef flinear(N):\n \"\"\"\n O(n) function\n \"\"\"\n x = [i for i in range(N)]\n\ndef fsquare(N):\n \"\"\"\n O(n^2) function\n \"\"\"\n for i in range(N):\n for j in range(N):\n x = i*j\n\ndef fshuffle(N):\n # O(N)\n random.shuffle(list(range(N)))\n\ndef fsort(N):\n x = list(range(N))\n random.shuffle(x)\n x.sort()\n \n\ndef plotTC(fn, nMin, nMax, nInc, nTests):\n \"\"\"\n Run timer and plot time complexity\n \"\"\"\n x = []\n y = []\n for i in range(nMin, nMax, nInc):\n N = i\n testNTimer = timeit.Timer(partial(fn, N))\n t = testNTimer.timeit(number=nTests)\n x.append(i)\n y.append(t)\n p1 = pyplot.plot(x, y, 'o')\n #pyplot.legend([p1,], [fn.__name__, ])\n\n# main() function\ndef main():\n print('Analyzing Algorithms...')\n\n #plotTC(fconst, 10, 1000, 10, 10)\n #plotTC(flinear, 10, 1000, 10, 10)\n plotTC(fsquare, 10, 1000, 10, 10)\n #plotTC(fshuffle, 10, 1000, 1000, 10)\n #plotTC(fsort, 10, 1000, 10, 10)\n # enable this in case you want to set y axis limits\n #pyplot.ylim((-0.1, 0.5))\n \n # show plot\n pyplot.show()\n\n# call main\nif __name__ == '__main__':\n main()", "Exercises\n1. [10 ps] Let $x_1, x_2, ..., x_n$ be an array whose elements can be compared by the total ordering $\\leq$.\n(a) Write an algorithm for computing the maximum element in the array. \n(b) How many \"&lt;\" comparisons does your algorithm require?\n(c) Write a python code based on your algorithm and test your assertion in (b) with \nexamples of several arrays.\n\n2. [5 pts] Write a python code plotting algorithmic time complexity of the bubbleSort function.\n3. [15 pts] The following is a pseudo code of Insertion sort, which is a simple sorting algorithm\nthat builds the final sorted array one item at a time. Write a insertionSort in python and \nplot algorithmic time complexity of the insertionSort function.\n\n4. 
[10 pts] There are datasets for 2001 and 2002 in the United Arab Emirates that show the types of accidents and types of traffic accidents by Emirate (http://www.bayanat.ae). Use the bubble sort method (in Python) to rearrange the dataset as follows:\nST1. Sort alphabetically according to Emirates.\nST2. For the same Emirates, classify by accident type.\nST3. For incidents of the same type, sort by year in ascending order.\nST4. For the same year, sort by the number of accidents.\n\nWrite the Python code. What can you tell about traffic accidents in Ras Al Khaimah?\n5-1. [10 pts] There are datasets for 2003-2017 that show the mean temperature\nin the Emirates ([http://data.bayanat.ae/en_GB/dataset/mean-temperature-by-year-and-month ]).\nUse the bubble sort (or short bubble sort) method to rearrange the dataset as follows:\nST1. Sort it by Year in ascending order (from 2003 to 2017). \nST2. For the same Year, sort it by Month in ascending order (from January to December).\nST3. Use the sorted result data to plot a \"Month vs Mean Temp\" graph for each year on the same window.\n\nWrite the Python code. What can you tell about the trend of mean temperature in the UAE? How do mean temperatures change over the years?\n5-2. [10 pts] There are datasets for 2003-2017 that show the Mean of Relative Humidity by Month & Year (%) in the Emirates ([http://data.bayanat.ae/dataset/mean-of-relative-humidity-by-year-and-month ]).\nUse the bubble sort (or short bubble sort) method to rearrange the dataset as follows:\nST1. Sort it by Year in ascending order (from 2003 to 2017). \nST2. For the same Year, sort it by Month in ascending order (from January to December).\nST3. Use the sorted result data to plot a \"Month vs Mean Relative Humidity\" graph for each year on the same window.\n\nWrite the Python code. What can you tell about the trend of relative humidity in the UAE? How does relative humidity change over the years?\n5-3. [10 pts] Compare the results of Problems 5-1 and 5-2 and discuss your observation. (If appropriate, you may write Python code (for analyzing or visualizing) to support your argument.)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
watsonyanghx/CS231n
assignment2/.ipynb_checkpoints/BatchNormalization-checkpoint.ipynb
mit
[ "Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\nThe authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n[3] Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.", "# As usual, a bit of setup\n\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.iteritems():\n print '%s: ' % k, v.shape", "Batch normalization: Forward\nIn the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. 
Once you have done so, run the following to test your implementation.", "# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization\n\n# Simulate the forward pass for a two-layer network\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint 'Before batch normalization:'\nprint ' means: ', a.mean(axis=0)\nprint ' stds: ', a.std(axis=0)\n\n# Means should be close to zero and stds close to one\nprint 'After batch normalization (gamma=1, beta=0)'\na_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})\nprint ' mean: ', a_norm.mean(axis=0)\nprint ' std: ', a_norm.std(axis=0)\n\n# Now means should be close to beta and stds close to gamma\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint 'After batch normalization (nontrivial gamma, beta)'\nprint ' means: ', a_norm.mean(axis=0)\nprint ' stds: ', a_norm.std(axis=0)\n\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\nfor t in xrange(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint 'After batch normalization (test-time):'\nprint ' means: ', a_norm.mean(axis=0)\nprint ' stds: ', a_norm.std(axis=0)", "Batch Normalization: backward\nNow implement the backward pass for batch normalization in the function batchnorm_backward.\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\nOnce you have finished, run the following to numerically check your backward pass.", "# Gradient check batchnorm backward pass\n\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma, dout)\ndb_num = eval_numerical_gradient_array(fb, beta, dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dgamma error: ', rel_error(da_num, dgamma)\nprint 'dbeta error: ', rel_error(db_num, dbeta)", "Batch Normalization: alternative backward\nIn class we talked about two different implementations for the sigmoid backward pass. 
One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.\nSurprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\nNOTE: You can still complete the rest of the assignment if you don't figure this part out, so don't worry too much if you can't get it.", "N, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint 'dx difference: ', rel_error(dx1, dx2)\nprint 'dgamma difference: ', rel_error(dgamma1, dgamma2)\nprint 'dbeta difference: ', rel_error(dbeta1, dbeta2)\nprint 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))", "Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs2312n/classifiers/fc_net.py. Modify your implementation to add batch normalization.\nConcretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\nHINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. 
If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.", "N, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print 'Running check with reg = ', reg\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n use_batchnorm=True)\n\n loss, grads = model.loss(X, y)\n print 'Initial loss: ', loss\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))\n if reg == 0: print", "Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.", "# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nbn_solver.train()\n\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nsolver.train()", "Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.", "plt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 1)\nplt.plot(solver.loss_history, 'o', label='baseline')\nplt.plot(bn_solver.loss_history, 'o', label='batchnorm')\n\nplt.subplot(3, 1, 2)\nplt.plot(solver.train_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')\n\nplt.subplot(3, 1, 3)\nplt.plot(solver.val_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()", "Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. 
The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.", "# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers = {}\nsolvers = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers[weight_scale] = solver\n\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))\n \n best_val_accs.append(max(solvers[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\n\nplt.gcf().set_size_inches(10, 15)\nplt.show()", "Question:\nDescribe the results of this experiment, and try to give a reason why the experiment gave the results that it did.\nAnswer:" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sauloal/ipython
opticalmapping/xmap_reader.ipynb
mit
[ "XMAP plotter\nHelping hands\nhttp://nbviewer.ipython.org/github/herrfz/dataanalysis/blob/master/week2/getting_data.ipynb\nhttp://nbviewer.ipython.org/github/jvns/pandas-cookbook/blob/master/cookbook/Chapter%201%20-%20Reading%20from%20a%20CSV.ipynb\nImports", "import pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt\n#import matplotlib as plt\n#plt.use('TkAgg') \n\nimport operator\nimport re\nfrom collections import defaultdict\n\nimport pylab\npylab.show()\n\n%pylab inline", "Definitions", "fileUrl = \"../S_lycopersicum_chromosomes.2.50.BspQI_to_EXP_REFINEFINAL1_xmap.txt\"\nMIN_CONF = 10.0\nFULL_FIG_W , FULL_FIG_H = 16, 8\nCHROM_FIG_W, CHROM_FIG_H = FULL_FIG_W, 20", "Setup\nFigure sizes controller", "class size_controller(object):\n def __init__(self, w, h):\n self.w = w\n self.h = h\n \n def __enter__(self):\n self.o = rcParams['figure.figsize']\n rcParams['figure.figsize'] = self.w, self.h\n return None\n \n def __exit__(self, type, value, traceback):\n rcParams['figure.figsize'] = self.o", "Column type definition", "col_type_int = np.int64\ncol_type_flo = np.float64\ncol_type_str = np.object\ncol_info =[\n [ \"XmapEntryID\" , col_type_int ],\n [ \"QryContigID\" , col_type_int ],\n [ \"RefContigID\" , col_type_int ],\n [ \"QryStartPos\" , col_type_flo ],\n [ \"QryEndPos\" , col_type_flo ],\n [ \"RefStartPos\" , col_type_flo ],\n [ \"RefEndPos\" , col_type_flo ],\n [ \"Orientation\" , col_type_str ],\n [ \"Confidence\" , col_type_flo ],\n [ \"HitEnum\" , col_type_str ],\n [ \"QryLen\" , col_type_flo ],\n [ \"RefLen\" , col_type_flo ],\n [ \"LabelChannel\", col_type_str ],\n [ \"Alignment\" , col_type_str ],\n]\n\ncol_names=[cf[0] for cf in col_info]\ncol_types=dict(zip([c[0] for c in col_info], [c[1] for c in col_info]))\ncol_types\n", "Read XMAP\nhttp://nbviewer.ipython.org/github/herrfz/dataanalysis/blob/master/week2/getting_data.ipynb", "CONVERTERS = {\n 'info': filter_conv\n}\nSKIP_ROWS = 9\nNROWS = None\ngffData = pd.read_csv(fileUrl, names=col_names, index_col='XmapEntryID', dtype=col_types, header=None, skiprows=SKIP_ROWS, delimiter=\"\\t\", comment=\"#\", verbose=True, nrows=NROWS)\ngffData.head()", "Add length column", "gffData['qry_match_len'] = abs(gffData['QryEndPos'] - gffData['QryStartPos'])\ngffData['ref_match_len'] = abs(gffData['RefEndPos'] - gffData['RefStartPos'])\ngffData['match_prop' ] = gffData['qry_match_len'] / gffData['ref_match_len']\ngffData = gffData[gffData['Confidence'] >= MIN_CONF]\ndel gffData['LabelChannel']\ngffData.head()\n\nre_matches = re.compile(\"(\\d+)M\")\nre_insertions = re.compile(\"(\\d+)I\")\nre_deletions = re.compile(\"(\\d+)D\")\ndef process_cigar(cigar, **kwargs):\n \"\"\"\n 2M3D1M1D1M1D4M1I2M1D2M1D1M2I2D9M3I3M1D6M1D2M2D1M1D6M1D1M1D1M2D2M2D1M1I1D1M1D5M2D4M2D1M2D2M1D2M1D3M1D1M1D2M3I3D1M1D1M3D2M3D1M2I1D1M2D1M1D1M1I2D3M2I1M1D2M1D1M1D1M2I3D3M3D1M2D1M1D1M1D5M2D12M\n \"\"\"\n assert(set([x for x in cigar]) <= set(['M', 'D', 'I', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9']))\n\n cigar_matches = 0\n cigar_insertions = 0\n cigar_deletions = 0\n\n i_matches = re_matches .finditer(cigar)\n i_inserts = re_insertions.finditer(cigar)\n i_deletes = re_deletions .finditer(cigar)\n\n for i in i_matches:\n n = i.group(1)\n cigar_matches += int(n)\n\n for i in i_inserts:\n n = i.group(1)\n cigar_insertions += int(n)\n\n for i in i_deletes:\n n = i.group(1)\n cigar_deletions += int(n)\n\n return cigar_matches, cigar_insertions, cigar_deletions\n\ngffData[['cigar_matches', 'cigar_insertions', 'cigar_deletions']] = 
gffData['HitEnum'].apply(process_cigar, axis=1).apply(pd.Series, 1)\ndel gffData['HitEnum']\n\ngffData.head()\n\n\nre_alignment = re.compile(\"\\((\\d+),(\\d+)\\)\")\n\ndef process_alignment(alignment, **kwargs):\n \"\"\"\n Alignment (4862,48)(4863,48)(4864,47)(4865,46)(4866,45)(4867,44)(4870,43)(4873,42)(4874,41)(4875,40)(4877,40)(4878,39)(4879,38)(4880,37)(4883,36)(4884,36)(4885,35)(4886,34)(4887,33)(4888,33)(4889,32)(4890,30)(4891,30)(4892,29)(4893,28)(4894,28)(4899,27)(4900,26)(4901,25)(4902,24)(4903,23)(4904,22)(4906,21)(4907,21)(4908,20)(4910,19)(4911,18)(4912,17)(4913,16)(4915,15)(4917,14)(4918,13)(4919,12)(4920,11)(4922,10)(4923,9)(4925,8)(4927,7)(4930,6)(4931,5)(4932,3)(4933,2)(4934,1)\n \"\"\"\n assert(set([x for x in alignment]) <= set(['(', ')', ',', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9']))\n\n count_refs = defaultdict(int)\n count_queries = defaultdict(int)\n count_refs_colapses = 0\n count_queries_colapses = 0\n\n i_alignment = re_alignment.finditer(alignment)\n for i in i_alignment:\n c_r = int(i.group(1))\n c_q = int(i.group(2))\n\n count_refs [c_r] += 1\n count_queries[c_q] += 1\n\n count_refs_colapses = sum([count_refs[ x] for x in count_refs if count_refs[ x] > 1])\n count_queries_colapses = sum([count_queries[x] for x in count_queries if count_queries[x] > 1])\n\n return len(count_refs), len(count_queries), count_refs_colapses, count_queries_colapses\n\ngffData[['len_count_refs', 'len_count_queries', 'count_refs_colapses', 'count_queries_colapses']] = gffData['Alignment'].apply(process_alignment, axis=1).apply(pd.Series, 1)\ndel gffData['Alignment']\ngffData.head()\n\n", "More stats", "ref_qry = gffData[['RefContigID','QryContigID']]\nref_qry = ref_qry.sort('RefContigID')\nprint ref_qry.head()\n\nref_qry_grpby_ref = ref_qry.groupby('RefContigID', sort=True)\nref_qry_grpby_ref.head()\n\nqry_ref = gffData[['QryContigID','RefContigID']]\nqry_ref = qry_ref.sort('QryContigID')\nprint qry_ref.head()\n\nqry_ref_grpby_qry = qry_ref.groupby('QryContigID', sort=True)\nqry_ref_grpby_qry.head()\n\ndef stats_from_data_vals(RefContigID, QryContigID, groups, indexer, data, data_vals, valid_data_poses):\n ref_lens = [ ( x[\"RefStartPos\"], x[\"RefEndPos\"] ) for x in data_vals ]\n qry_lens = [ ( x[\"QryStartPos\"], x[\"QryEndPos\"] ) for x in data_vals ]\n\n num_qry_matches = []\n for RefContigID_l in groups[\"QryContigID_RefContigID\"][QryContigID]:\n for match_pos in groups[\"QryContigID_RefContigID\"][QryContigID][RefContigID_l]:\n if match_pos in valid_data_poses:\n num_qry_matches.append(RefContigID_l)\n\n #num_qry_matches = len( groups[\"QryContigID_RefContigID\"][QryContigID] )\n num_qry_matches = len( set(num_qry_matches) )\n num_orientations = len( set([x[\"Orientation\"] for x in data_vals]) )\n\n ref_no_gap_len = sum( [ max(x)-min(x) for x in ref_lens ] )\n ref_min_coord = min( [ min(x) for x in ref_lens ] )\n ref_max_coord = max( [ max(x) for x in ref_lens ] )\n ref_gap_len = ref_max_coord - ref_min_coord\n\n qry_no_gap_len = sum( [ max(x)-min(x) for x in qry_lens ] )\n qry_min_coord = min( [ min(x) for x in qry_lens ] )\n qry_max_coord = max( [ max(x) for x in qry_lens ] )\n qry_gap_len = qry_max_coord - qry_min_coord\n\n XmapEntryIDs = groups[\"QryContigID_XmapEntryID\"][QryContigID].keys()\n\n Confidences = []\n for XmapEntryID in XmapEntryIDs:\n data_pos = list(indexer[\"XmapEntryID\"][XmapEntryID])[0]\n if data_pos not in valid_data_poses:\n continue\n Confidences.append( [ data[data_pos][\"Confidence\"], data[data_pos][\"RefContigID\"] ] )\n\n 
max_confidence = max([ x[0] for x in Confidences ])\n max_confidence_chrom = [ x[1] for x in Confidences if x[0] == max_confidence][0]\n\n stats = {}\n stats[\"_meta_is_max_confidence_for_qry_chrom\" ] = max_confidence_chrom == RefContigID\n\n stats[\"_meta_len_ref_match_gapped\" ] = ref_gap_len\n stats[\"_meta_len_ref_match_no_gap\" ] = ref_no_gap_len\n stats[\"_meta_len_qry_match_gapped\" ] = qry_gap_len\n stats[\"_meta_len_qry_match_no_gap\" ] = qry_no_gap_len\n\n stats[\"_meta_max_confidence_for_qry\" ] = max_confidence\n stats[\"_meta_max_confidence_for_qry_chrom\" ] = max_confidence_chrom\n\n stats[\"_meta_num_orientations\" ] = num_orientations\n stats[\"_meta_num_qry_matches\" ] = num_qry_matches\n stats[\"_meta_qry_matches\" ] = ','.join( [ str(x) for x in sorted(list(set([ x[1] for x in Confidences ]))) ] )\n\n stats[\"_meta_proportion_sizes_gapped\" ] = (ref_gap_len * 1.0)/ qry_gap_len\n stats[\"_meta_proportion_sizes_no_gap\" ] = (ref_no_gap_len * 1.0)/ qry_no_gap_len\n\n return stats\n\n\nfor QryContigID in sorted(QryContigIDs):\n data_poses = list(groups[\"RefContigID_QryContigID\"][RefContigID][QryContigID])\n all_data_poses = list(indexer[\"QryContigID\"][QryContigID])\n data_vals = [ data[x] for x in data_poses ]\n\n stats = stats_from_data_vals(RefContigID, QryContigID, groups, indexer, data, data_vals, all_data_poses)\n\n #print \"RefContigID %4d QryContigID %6d\" % ( RefContigID, QryContigID )\n for data_val in data_vals:\n cigar = data_val[\"HitEnum\"]\n cigar_matches, cigar_insertions, cigar_deletions = process_cigar(cigar)\n\n Alignment = data_val[\"Alignment\"]\n alignment_count_queries, alignment_count_refs, alignment_count_refs_colapses, alignment_count_queries_colapses = process_alignment(Alignment)\n\n for stat in stats:\n data_val[stat] = stats[stat]\n\n\n data_val[\"_meta_proportion_query_len_gapped\" ] = (data_val['_meta_len_qry_match_gapped'] * 1.0)/ data_val[\"QryLen\"]\n data_val[\"_meta_proportion_query_len_no_gap\" ] = (data_val['_meta_len_qry_match_no_gap'] * 1.0)/ data_val[\"QryLen\"]\n\n #print \" \", \" \".join( [\"%s %s\" % (x, str(data_val[x])) for x in sorted(data_val)] )\n reporter.write( \"\\t\".join( [ str(data_val[x]) for x in valid_fields['names' ] ] ) + \"\\n\" )\n", "Good part\nhttp://nbviewer.ipython.org/github/jvns/pandas-cookbook/blob/master/cookbook/Chapter%201%20-%20Reading%20from%20a%20CSV.ipynb\nhttp://pandas.pydata.org/pandas-docs/dev/visualization.html\nhttps://bespokeblog.wordpress.com/2011/07/11/basic-data-plotting-with-matplotlib-part-3-histograms/\nhttp://nbviewer.ipython.org/github/mwaskom/seaborn/blob/master/examples/plotting_distributions.ipynb\nhttp://nbviewer.ipython.org/github/herrfz/dataanalysis/blob/master/week3/exploratory_graphs.ipynb\nhttp://pandas.pydata.org/pandas-docs/version/0.15.0/visualization.html\nhttp://www.gregreda.com/2013/10/26/working-with-pandas-dataframes/\nColumn types", "gffData.dtypes", "Global statistics", "gffData[['Confidence', 'QryLen', 'qry_match_len', 'ref_match_len', 'match_prop']].describe()", "List of chromosomes", "chromosomes = np.unique(gffData['RefContigID'].values)\nchromosomes", "Quality distribution", "with size_controller(FULL_FIG_W, FULL_FIG_H):\n bq = gffData.boxplot(column='Confidence')", "Quality distribution per chromosome", "with size_controller(FULL_FIG_W, FULL_FIG_H):\n bqc = gffData.boxplot(column='Confidence', by='RefContigID')", "Position distribution", "with size_controller(FULL_FIG_W, FULL_FIG_H):\n hs = gffData['RefStartPos'].hist()", "Position distribution per 
chromosome", "hsc = gffData['qry_match_len'].hist(by=gffData['RefContigID'], figsize=(CHROM_FIG_W, CHROM_FIG_H), layout=(len(chromosomes),1)) \n\nhsc = gffData['RefStartPos'].hist(by=gffData['RefContigID'], figsize=(CHROM_FIG_W, CHROM_FIG_H), layout=(len(chromosomes),1)) ", "Length distribution", "with size_controller(FULL_FIG_W, FULL_FIG_H):\n hl = gffData['qry_match_len'].hist()", "Length distribution per chromosome", "hlc = gffData['qry_match_len'].hist(by=gffData['RefContigID'], figsize=(CHROM_FIG_W, CHROM_FIG_H), layout=(len(chromosomes),1))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
InsightLab/data-science-cookbook
2019/09-clustering/Notebook_KMeans_Answer.ipynb
mit
[ "<p style=\"text-align: center;\">Clusterização e algoritmo K-means</p>\nOrganizar dados em agrupamentos é um dos modos mais fundamentais de compreensão e aprendizado. Como por exemplo, os organismos em um sistema biologico são classificados em domínio, reino, filo, classe, etc. A análise de agrupamento é o estudo formal de métodos e algoritmos para agrupar objetos de acordo com medidas ou características semelhantes. A análise de cluster, em sua essência, não utiliza rótulos de categoria que marcam objetos com identificadores anteriores, ou seja, rótulos de classe. A ausência de informação de categoria distingue o agrupamento de dados (aprendizagem não supervisionada) da classificação ou análise discriminante (aprendizagem supervisionada). O objetivo da clusterização é encontrar estruturas em dados e, portanto, é de natureza exploratória. \nA técnica de Clustering tem uma longa e rica história em uma variedade de campos científicos. Um dos algoritmos de clusterização mais populares e simples, o K-means, foi publicado pela primeira vez em 1955. Apesar do K-means ter sido proposto há mais de 50 anos e milhares de algoritmos de clustering terem sido publicados desde então, o K-means é ainda amplamente utilizado.\nFonte: Anil K. Jain, Data clustering: 50 years beyond K-means, Pattern Recognition Letters, Volume 31, Issue 8, 2010\nObjetivo\n\nImplementar as funções do algoritmo KMeans passo-a-passo\nComparar a implementação com o algoritmo do Scikit-Learn\nEntender e codificar o Método do Cotovelo\nUtilizar o K-means em um dataset real \n\nCarregando os dados de teste\nCarregue os dados disponibilizados, e identifique visualmente em quantos grupos os dados parecem estar distribuídos.", "# import libraries\n\n# linear algebra\nimport numpy as np \n# data processing\nimport pandas as pd \n# data visualization\nfrom matplotlib import pyplot as plt \n\n# load the data with pandas\ndataset = pd.read_csv('dataset.csv', header=None)\ndataset = np.array(dataset)\n\nplt.scatter(dataset[:,0], dataset[:,1], s=10)\nplt.show()", "Criar um novo dataset para práticar", "# Selecionar três centróides\ncluster_center_1 = np.array([2,3])\ncluster_center_2 = np.array([6,6])\ncluster_center_3 = np.array([10,1])\n\n# Gerar amostras aleátorias a partir dos centróides escolhidos\ncluster_data_1 = np.random.randn(100, 2) + cluster_center_1\ncluster_data_2 = np.random.randn(100,2) + cluster_center_2\ncluster_data_3 = np.random.randn(100,2) + cluster_center_3\n\nnew_dataset = np.concatenate((cluster_data_1, cluster_data_2, \n cluster_data_3), axis = 0)\n\nplt.scatter(new_dataset[:,0], new_dataset[:,1], s=10)\nplt.show()", "1. Implementar o algoritmo K-means\nNesta etapa você irá implementar as funções que compõe o algoritmo do KMeans uma a uma. É importante entender e ler a documentação de cada função, principalmente as dimensões dos dados esperados na saída.\n1.1 Inicializar os centróides\nA primeira etapa do algoritmo consiste em inicializar os centróides de maneira aleatória. Essa etapa é uma das mais importantes do algoritmo e uma boa inicialização pode diminuir bastante o tempo de convergência.\nPara inicializar os centróides você pode considerar o conhecimento prévio sobre os dados, mesmo sem saber a quantidade de grupos ou sua distribuição. 
\n\nDica: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html", "def calculate_initial_centers(dataset, k):\n \"\"\"\n Inicializa os centróides iniciais de maneira arbitrária \n \n Argumentos:\n dataset -- Conjunto de dados - [m,n]\n k -- Número de centróides desejados\n \n Retornos:\n centroids -- Lista com os centróides calculados - [k,n]\n \"\"\"\n \n #### CODE HERE ####\n \n minimum = np.min(dataset, axis=0)\n maximum = np.max(dataset, axis=0)\n shape = [k, dataset.shape[1]]\n centroids = np.random.uniform(minimum, maximum, size=shape)\n \n ### END OF CODE ###\n \n return centroids", "Teste a função criada e visualize os centróides que foram calculados.", "k = 3\ncentroids = calculate_initial_centers(dataset, k)\n\nplt.scatter(dataset[:,0], dataset[:,1], s=10)\nplt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red',s=100)\nplt.show()", "1.2 Definir os clusters\nNa segunda etapa do algoritmo serão definidos o grupo de cada dado, de acordo com os centróides calculados.\n1.2.1 Função de distância\nCodifique a função de distância euclidiana entre dois pontos (a, b).\nDefinido pela equação:\n$$ dist(a, b) = \\sqrt{(a_1-b_1)^{2}+(a_2-b_2)^{2}+ ... + (a_n-b_n)^{2}} $$\n$$ dist(a, b) = \\sqrt{\\sum_{i=1}^{n}(a_i-b_i)^{2}} $$", "def euclidean_distance(a, b):\n \"\"\"\n Calcula a distância euclidiana entre os pontos a e b\n \n Argumentos:\n a -- Um ponto no espaço - [1,n]\n b -- Um ponto no espaço - [1,n]\n \n Retornos:\n distance -- Distância euclidiana entre os pontos\n \"\"\"\n \n #### CODE HERE ####\n \n distance = np.sqrt(np.sum(np.square(a-b)))\n \n ### END OF CODE ###\n \n return distance", "Teste a função criada.", "a = np.array([1, 5, 9])\nb = np.array([3, 7, 8])\n\nif (euclidean_distance(a,b) == 3):\n print(\"Distância calculada corretamente!\")\nelse:\n print(\"Função de distância incorreta\")", "1.2.2 Calcular o centroide mais próximo\nUtilizando a função de distância codificada anteriormente, complete a função abaixo para calcular o centroid mais próximo de um ponto qualquer. 
\n\nDica: https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html", "def nearest_centroid(a, centroids):\n \"\"\"\n Calcula o índice do centroid mais próximo ao ponto a\n \n Argumentos:\n a -- Um ponto no espaço - [1,n]\n centroids -- Lista com os centróides - [k,n]\n \n Retornos:\n nearest_index -- Índice do centróide mais próximo\n \"\"\"\n \n #### CODE HERE ####\n \n distance_zeros = np.zeros(centroids.shape[0])\n for index, centroid in enumerate(centroids):\n distance = euclidean_distance(a, centroid)\n distance_zeros[index] = distance\n \n nearest_index = np.argmin(distance_zeros)\n \n ### END OF CODE ###\n \n return nearest_index", "Teste a função criada", "# Seleciona um ponto aleatório no dataset\nindex = np.random.randint(dataset.shape[0])\na = dataset[index,:]\n\n# Usa a função para descobrir o centroid mais próximo\nidx_nearest_centroid = nearest_centroid(a, centroids)\n\n\n# Plota os dados ------------------------------------------------\nplt.scatter(dataset[:,0], dataset[:,1], s=10)\n# Plota o ponto aleatório escolhido em uma cor diferente\nplt.scatter(a[0], a[1], c='magenta', s=30)\n\n# Plota os centroids\nplt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)\n# Plota o centroid mais próximo com uma cor diferente\nplt.scatter(centroids[idx_nearest_centroid,0], \n centroids[idx_nearest_centroid,1],\n marker='^', c='springgreen', s=100)\n\n# Cria uma linha do ponto escolhido para o centroid selecionado\nplt.plot([a[0], centroids[idx_nearest_centroid,0]], \n [a[1], centroids[idx_nearest_centroid,1]],c='orange')\nplt.annotate('CENTROID', (centroids[idx_nearest_centroid,0], \n centroids[idx_nearest_centroid,1],))\nplt.show()", "1.2.3 Calcular centroid mais próximo de cada dado do dataset\nUtilizando a função anterior que retorna o índice do centroid mais próximo, calcule o centroid mais próximo de cada dado do dataset.", "def all_nearest_centroids(dataset, centroids):\n \"\"\"\n Calcula o índice do centroid mais próximo para cada \n ponto do dataset\n \n Argumentos:\n dataset -- Conjunto de dados - [m,n]\n centroids -- Lista com os centróides - [k,n]\n \n Retornos:\n nearest_indexes -- Índices do centróides mais próximos - [m,1]\n \"\"\"\n \n #### CODE HERE ####\n \n nearest_indexes = np.zeros(dataset.shape[0])\n \n for index, a in enumerate(dataset):\n nearest_indexes[index] = nearest_centroid(a, centroids)\n \n ### END OF CODE ###\n \n return nearest_indexes", "Teste a função criada visualizando os cluster formados.", "nearest_indexes = all_nearest_centroids(dataset, centroids)\n\nplt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)\nplt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)\nplt.show()", "1.3 Métrica de avaliação\nApós formar os clusters, como sabemos se o resultado gerado é bom? Para isso, precisamos definir uma métrica de avaliação.\nO algoritmo K-means tem como objetivo escolher centróides que minimizem a soma quadrática das distância entre os dados de um cluster e seu centróide. Essa métrica é conhecida como inertia.\n$$\\sum_{i=0}^{n}\\min_{c_j \\in C}(||x_i - c_j||^2)$$\nA inertia, ou o critério de soma dos quadrados dentro do cluster, pode ser reconhecido como uma medida de o quão internamente coerentes são os clusters, porém ela sofre de alguns inconvenientes:\n\nA inertia pressupõe que os clusters são convexos e isotrópicos, o que nem sempre é o caso. 
Desta forma, pode não representar bem em aglomerados alongados ou variedades com formas irregulares.\nA inertia não é uma métrica normalizada: sabemos apenas que valores mais baixos são melhores e zero é o valor ótimo. Mas em espaços de dimensões muito altas, as distâncias euclidianas tendem a se tornar infladas (este é um exemplo da chamada “maldição da dimensionalidade”). A execução de um algoritmo de redução de dimensionalidade, como o PCA, pode aliviar esse problema e acelerar os cálculos.\n\nFonte: https://scikit-learn.org/stable/modules/clustering.html\nPara podermos avaliar os nosso clusters, codifique a métrica da inertia abaixo, para isso você pode utilizar a função de distância euclidiana construída anteriormente.\n$$inertia = \\sum_{i=0}^{n}\\min_{c_j \\in C} (dist(x_i, c_j))^2$$", "def inertia(dataset, centroids, nearest_indexes):\n \"\"\"\n Soma das distâncias quadradas das amostras para o \n centro do cluster mais próximo.\n \n Argumentos:\n dataset -- Conjunto de dados - [m,n]\n centroids -- Lista com os centróides - [k,n]\n nearest_indexes -- Índices do centróides mais próximos - [m,1]\n \n Retornos:\n inertia -- Soma total do quadrado da distância entre \n os dados de um cluster e seu centróide\n \"\"\"\n \n #### CODE HERE ####\n \n inertia = 0\n for index, centroid in enumerate(centroids):\n dataframe = dataset[nearest_indexes == index,:]\n for a in dataframe:\n inertia += np.square(euclidean_distance(a,centroid))\n \n ### END OF CODE ###\n \n return inertia", "Teste a função codificada executando o código abaixo.", "tmp_data = np.array([[1,2,3],[3,6,5],[4,5,6]])\ntmp_centroide = np.array([[2,3,4]])\n\ntmp_nearest_indexes = all_nearest_centroids(tmp_data, tmp_centroide)\nif inertia(tmp_data, tmp_centroide, tmp_nearest_indexes) == 26:\n print(\"Inertia calculada corretamente!\")\nelse:\n print(\"Função de inertia incorreta!\")\n\n# Use a função para verificar a inertia dos seus clusters\ninertia(dataset, centroids, nearest_indexes)", "1.4 Atualizar os clusters\nNessa etapa, os centróides são recomputados. O novo valor de cada centróide será a media de todos os dados atribuídos ao cluster.", "def update_centroids(dataset, centroids, nearest_indexes):\n \"\"\"\n Atualiza os centroids\n \n Argumentos:\n dataset -- Conjunto de dados - [m,n]\n centroids -- Lista com os centróides - [k,n]\n nearest_indexes -- Índices do centróides mais próximos - [m,1]\n \n Retornos:\n centroids -- Lista com centróides atualizados - [k,n]\n \"\"\"\n \n #### CODE HERE ####\n \n for index, centroid in enumerate(centroids):\n dataframe = dataset[nearest_indexes == index,:]\n if(dataframe.size != 0):\n centroids[index] = np.mean(dataframe, axis=0)\n \n ### END OF CODE ###\n \n return centroids", "Visualize os clusters formados", "nearest_indexes = all_nearest_centroids(dataset, centroids)\n\n# Plota os os cluster ------------------------------------------------\nplt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)\n\n# Plota os centroids\nplt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)\nfor index, centroid in enumerate(centroids):\n dataframe = dataset[nearest_indexes == index,:]\n for data in dataframe:\n plt.plot([centroid[0], data[0]], [centroid[1], data[1]], \n c='lightgray', alpha=0.3)\nplt.show()", "Execute a função de atualização e visualize novamente os cluster formados", "centroids = update_centroids(dataset, centroids, nearest_indexes)", "2. 
K-means\n2.1 Algoritmo completo\nUtilizando as funções codificadas anteriormente, complete a classe do algoritmo K-means!", "class KMeans():\n \n def __init__(self, n_clusters=8, max_iter=300):\n self.n_clusters = n_clusters\n self.max_iter = max_iter\n \n def fit(self,X):\n \n # Inicializa os centróides\n self.cluster_centers_ = calculate_initial_centers(X, self.n_clusters)\n \n # Computa o cluster de cada amostra\n self.labels_ = all_nearest_centroids(X, self.cluster_centers_)\n \n # Calcula a inércia inicial\n old_inertia = inertia(X, self.cluster_centers_, self.labels_)\n \n for index in range(self.max_iter):\n \n #### CODE HERE ####\n \n self.cluster_centers_ = update_centroids(X, self.cluster_centers_, self.labels_)\n self.labels_ = all_nearest_centroids(X, self.cluster_centers_)\n self.inertia_ = inertia(X, self.cluster_centers_, self.labels_)\n \n if(old_inertia == self.inertia_):\n break\n else:\n old_inertia = self.inertia_\n \n ### END OF CODE ###\n \n return self\n \n def predict(self, X):\n \n return all_nearest_centroids(X, self.cluster_centers_)", "Verifique o resultado do algoritmo abaixo!", "kmeans = KMeans(n_clusters=3)\nkmeans.fit(dataset)\n\nprint(\"Inércia = \", kmeans.inertia_)\n\nplt.scatter(dataset[:,0], dataset[:,1], c=kmeans.labels_)\nplt.scatter(kmeans.cluster_centers_[:,0], \n kmeans.cluster_centers_[:,1], marker='^', c='red', s=100)\nplt.show()", "2.2 Comparar com algoritmo do Scikit-Learn\nUse a implementação do algoritmo do scikit-learn do K-means para o mesmo conjunto de dados. Mostre o valor da inércia e os conjuntos gerados pelo modelo. Você pode usar a mesma estrutura da célula de código anterior.\n\nDica: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans", "from sklearn.cluster import KMeans as scikit_KMeans\n\nscikit_kmeans = scikit_KMeans(n_clusters=3)\nscikit_kmeans.fit(dataset)\n\nprint(\"Inércia = \", scikit_kmeans.inertia_)\n\nplt.scatter(dataset[:,0], dataset[:,1], c=scikit_kmeans.labels_)\nplt.scatter(scikit_kmeans.cluster_centers_[:,0], \n scikit_kmeans.cluster_centers_[:,1], c='red')\n\nplt.show()", "3. Método do cotovelo\nImplemete o método do cotovelo e mostre o melhor K para o conjunto de dados.", "n_clusters_test = 8\n\nn_sequence = np.arange(1, n_clusters_test+1)\ninertia_vec = np.zeros(n_clusters_test)\n\nfor index, n_cluster in enumerate(n_sequence):\n inertia_vec[index] = KMeans(n_clusters=n_cluster).fit(dataset).inertia_\n \nplt.plot(n_sequence, inertia_vec, 'ro-')\nplt.show()", "4. Dataset Real\nExercícios\n1 - Aplique o algoritmo do K-means desenvolvido por você no datatse iris [1]. Mostre os resultados obtidos utilizando pelo menos duas métricas de avaliação de clusteres [2].\n\n[1] http://archive.ics.uci.edu/ml/datasets/iris\n[2] http://scikit-learn.org/stable/modules/clustering.html#clustering-evaluation\n\n\nDica: você pode utilizar as métricas completeness e homogeneity.\n\n2 - Tente melhorar o resultado obtido na questão anterior utilizando uma técnica de mineração de dados. Explique a diferença obtida. \n\nDica: você pode tentar normalizar os dados [3].\n- [3] https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html\n\n3 - Qual o número de clusteres (K) você escolheu na questão anterior? Desenvolva o Método do Cotovelo sem usar biblioteca e descubra o valor de K mais adequado. Após descobrir, utilize o valor obtido no algoritmo do K-means.\n4 - Utilizando os resultados da questão anterior, refaça o cálculo das métricas e comente os resultados obtidos. Houve uma melhoria? 
Explique.", "#### CODE HERE ####" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sonyahanson/assaytools
examples/ipynbs/data-analysis/hsa/analyzing_FLU_hsa_lig1_20150922.ipynb
lgpl-2.1
[ "FLUORESCENCE BINDING ASSAY ANALYSIS\nExperiment date: 2015/09/22\nProtein: HSA\nFluorescent ligand : dansylamide (lig1)\nXml parsing parts adopted from Sonya's assaytools/examples/fluorescence-binding-assay/Src-gefitinib fluorescence simple.ipynb", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom lxml import etree\nimport pandas as pd\nimport os\nimport matplotlib.cm as cm \nimport seaborn as sns\n%pylab inline\n\n# Get read and position data of each fluorescence reading section\ndef get_wells_from_section(path):\n reads = path.xpath(\"*/Well\")\n wellIDs = [read.attrib['Pos'] for read in reads]\n\n data = [(float(s.text), r.attrib['Pos'])\n for r in reads\n for s in r]\n\n datalist = {\n well : value\n for (value, well) in data\n }\n \n welllist = [\n [\n datalist[chr(64 + row) + str(col)] \n if chr(64 + row) + str(col) in datalist else None\n for row in range(1,9)\n ]\n for col in range(1,13)\n ]\n \n return welllist\n\nfile_lig1=\"MI_FLU_hsa_lig1_20150922_150518.xml\"\nfile_name = os.path.splitext(file_lig1)[0]\nlabel = file_name[0:25]\nprint label\n\nroot = etree.parse(file_lig1)\n\n#find data sections\nSections = root.xpath(\"/*/Section\")\nmuch = len(Sections)\nprint \"****The xml file \" + file_lig1 + \" has %s data sections:****\" % much\nfor sect in Sections:\n print sect.attrib['Name']\n\n#Work with topread\nTopRead = root.xpath(\"/*/Section\")[0]\nwelllist = get_wells_from_section(TopRead)\n\ndf_topread = pd.DataFrame(welllist, columns = ['A - HSA','B - Buffer','C - HSA','D - Buffer', 'E - HSA','F - Buffer','G - HSA','H - Buffer'])\ndf_topread.transpose()\n\n# To generate cvs file\n# df_topread.transpose().to_csv(label + Sections[0].attrib['Name']+ \".csv\")", "Calculating Molar Fluorescence (MF) of Free Ligand\n1. Maximum likelihood curve-fitting\nFind the maximum likelihood estimate, $\\theta^$, i.e. the curve that minimizes the squared error $\\theta^ = \\text{argmin} \\sum_i |y_i - f_\\theta(x_i)|^2$ (assuming i.i.d. 
Gaussian noise)\nY = MF*L + BKG\nY: Fluorescence read (Flu unit)\nL: Total ligand concentration (uM)\nBKG: background fluorescence without ligand (Flu unit)\nMF: molar fluorescence of free ligand (Flu unit/ uM)", "import numpy as np\nfrom scipy import optimize\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef model(x,slope,intercept):\n ''' 1D linear model in the format scipy.optimize.curve_fit expects: '''\n return x*slope + intercept\n\n# generate some data\n#X = np.random.rand(1000)\n#true_slope=1.0\n#true_intercept=0.0\n#noise = np.random.randn(len(X))*0.1\n#Y = model(X,slope=true_slope,intercept=true_intercept) + noise\n\n#ligand titration\nlig1=np.array([200.0000,86.6000,37.5000,16.2000,7.0200, 3.0400, 1.3200, 0.5700, 0.2470, 0.1070, 0.0462, 0.0200])\nlig1\n\n# Since I have 4 replicates\nL=np.concatenate((lig1, lig1, lig1, lig1))\nlen(L)\n\n# Fluorescence read\ndf_topread.loc[:,(\"B - Buffer\", \"D - Buffer\", \"F - Buffer\", \"H - Buffer\")]\n\nB=df_topread.loc[:,(\"B - Buffer\")]\nD=df_topread.loc[:,(\"D - Buffer\")]\nF=df_topread.loc[:,(\"F - Buffer\")]\nH=df_topread.loc[:,(\"H - Buffer\")]\n\nY = np.concatenate((B.as_matrix(),D.as_matrix(),F.as_matrix(),H.as_matrix()))\n\n(MF,BKG),_ = optimize.curve_fit(model,L,Y)\nprint('MF: {0:.3f}, BKG: {1:.3f}'.format(MF,BKG))\nprint('y = {0:.3f} * L + {1:.3f}'.format(MF, BKG))\n", "Curve-fitting to binding saturation curve\nFluorescence intensity vs added ligand\nLR= ((X+Rtot+KD)-SQRT((X+Rtot+KD)^2-4XRtot))/2\nL= X - LR\nY= BKG + MFL + FRMF*LR\nConstants\nRtot: receptor concentration (uM)\nBKG: background fluorescence without ligand (Flu unit)\nMF: molar fluorescence of free ligand (Flu unit/ uM)\nParameters to fit\nKd: dissociation constant (uM)\nFR: Molar fluorescence ratio of complex to free ligand (unitless)\n complex flurescence = FRMFLR\nExperimental data\nY: fluorescence measurement\nX: total ligand concentration\nL: free ligand concentration", "def model2(x,kd,fr):\n ''' 1D linear model in the format scipy.optimize.curve_fit expects: '''\n # lr =((x+rtot+kd)-((x+rtot+kd)**2-4*x*rtot)**(1/2))/2\n # y = bkg + mf*(x - lr) + fr*mf*lr\n bkg = 86.2\n mf = 2.517\n rtot = 0.5\n return bkg + mf*(x - ((x+rtot+kd)-((x+rtot+kd)**2-4*x*rtot)**(1/2))/2) + fr*mf*(((x+rtot+kd)-((x+rtot+kd)**2-4*x*rtot)**(1/2))/2)\n\n# Total HSA concentration (uM)\nRtot = 0.5\n#Total ligand titration\nX = L\nlen(X)\n\n\n# Fluorescence read\ndf_topread.loc[:,(\"A - HSA\", \"C - HSA\", \"E - HSA\", \"G - HSA\")]\n\nA=df_topread.loc[:,(\"A - HSA\")]\nC=df_topread.loc[:,(\"C - HSA\")]\nE=df_topread.loc[:,(\"E - HSA\")]\nG=df_topread.loc[:,(\"G - HSA\")]\n\nY = np.concatenate((A.as_matrix(),C.as_matrix(),E.as_matrix(),G.as_matrix()))\nlen(Y)\n\n(Kd,FR),_ = optimize.curve_fit(model2, X, Y, p0=(5,1))\n\nprint('Kd: {0:.3f}, Fr: {1:.3f}'.format(Kd,FR))\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
bowen0701/data_science
notebook/sentiment_nlp.ipynb
bsd-2-clause
[ "Sentiment Analysis with NLP for Accommodation Reviews\n\nBowen Li\n2017/11/09\n\nIntroduction\nRecently I got hotel accommodation reviews data to practice Sentiment Analysis with Natural Language Processing (NLP), which I previously just knew the basics and would like to gain hands-on experience for this Natural Language Understanding task. This notebook is to summarize my results.\nSentiment Analysis with NLP\nWe will perform Sentiment Analysis with NLP by applying the Occam's Razor Principle.\n\nCollect datasets\nExploratory data analysis (EDA) with datasets\nCheck missing / abnormal data\nGroup-by aggregate score distributions\nPre-process datasets\nRemove missing / abnormal data\nJoin score & review datasets\nConcat review_title and review_comment to review_title_comments\nLower review_title_comments\nTokenize and remove stopwords and punctuations\nGet bag of words\nSentiment analysis\nRandomly permutate data\nLabel review\nSplite training and test sets\nMachine learning for classification by Naive Bayes Classifier\nModel evaluation by precision and recall\nMeasure real-world performance\nPredict label based on bag of words\nCompare two labels's score distributions\n\nPython scripts\nFirst import Python libraries.", "from __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport scipy as sp\nimport pandas as pd\n\nimport nltk\n# When performing experiment, remove comment out for nltk.download().\n# nltk.download()\n\nimport time\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")", "The following are the scripts for Sentiment Analysis with NLP.", "score_file = 'reviews_score.csv'\nreview_file = 'reviews.csv'\n\ndef read_score_review(score_file, review_file):\n \"\"\"Read score and review data.\"\"\"\n score_df = pd.read_csv(score_file)\n review_df = pd.read_csv(review_file)\n return score_df, review_df\n\ndef groupby_agg_data(df, gkey='gkey', rid='rid'):\n \"\"\"Group-by aggregate data.\"\"\"\n agg_df = (df.groupby(gkey)[rid]\n .count()\n .reset_index())\n nan_count = df[gkey].isnull().sum()\n nan_df = pd.DataFrame({gkey: [np.nan], rid: [nan_count]})\n agg_df = agg_df.append(nan_df)[[gkey, rid]]\n agg_df['percent'] = agg_df[rid] / agg_df[rid].sum()\n return agg_df\n\ndef count_missing_data(df, cols='cols'):\n \"\"\"Count missing records w.r.t. 
columns.\"\"\"\n print('Missing rows:')\n for col in cols:\n nan_rows = df[col].isnull().sum()\n print('For {0}: {1}'.format(col, nan_rows))\n\ndef slice_abnormal_id(df, rid='hotel_review_id'):\n \"\"\"View abnormal records with column\"\"\"\n abnorm_bool_arr = (df[rid] == 0)\n abnorm_count = abnorm_bool_arr.sum()\n print('abnorm_count: {}'.format(abnorm_count))\n abnorm_df = df[abnorm_bool_arr]\n return abnorm_df\n\ndef remove_missing_abnormal_data(score_raw_df, review_raw_df, \n rid='hotel_review_id', \n score_col='rating_overall'):\n \"\"\"Remove missing / abnormal data.\"\"\"\n filter_score_bool_arr = (score_raw_df[rid].notnull() & \n score_raw_df[score_col].notnull())\n score_df = score_raw_df[filter_score_bool_arr]\n \n filter_review_bool_arr = review_raw_df[rid].notnull()\n review_df = review_raw_df[filter_review_bool_arr]\n \n return score_df, review_df\n\ndef join_score_review(score_df, review_df, on='hotel_review_id', how='left'):\n \"\"\"Join score and review datasets.\"\"\"\n score_review_df = pd.merge(score_df, review_df, on=on, how=how)\n score_review_count = score_review_df.shape[0]\n print('score_review_count: {}'.format(score_review_count))\n return score_review_df\n\ndef concat_review_title_comments(score_review_df, \n concat_cols=['review_title', 'review_comments'],\n concat_2col='review_title_comments'):\n \"\"\"Concat review title and review comments.\"\"\"\n concat_text_col = ''\n for concat_col in concat_cols:\n concat_text_col += score_review_df[concat_col]\n if concat_col != concat_cols[len(concat_cols) - 1]:\n concat_text_col += '. '\n score_review_df[concat_2col] = concat_text_col\n return score_review_df\n\ndef lower_review_title_comments(score_review_df, \n lower_col='review_title_comments'):\n \"\"\"Lower sentences.\"\"\"\n score_review_df[lower_col] = score_review_df[lower_col].str.lower()\n return score_review_df\n\ndef _tokenize_sen(sen):\n \"\"\"Tokenize one sentence.\"\"\"\n from nltk.tokenize import word_tokenize\n sen_token = word_tokenize(str(sen))\n return sen_token\n\ndef _remove_nonstop_words_puncs(sen):\n \"\"\"Remove nonstop words and meaningless punctuations in one sentence.\"\"\"\n from nltk.corpus import stopwords\n sen_clean = [\n word for word in sen \n if word not in stopwords.words('english') and \n word not in [',', '.', '(', ')', '&']]\n return sen_clean\n\ndef tokenize_clean_sentence(sen):\n \"\"\"Tokenize and clean one sentence.\"\"\"\n sen_token = _tokenize_sen(sen)\n sen_token_clean = _remove_nonstop_words_puncs(sen_token)\n return sen_token_clean\n\n# def preprocess_sentence(df, sen_cols=['review_title', 'review_comments']): \n# \"\"\"Preprocess sentences (deprecated due to slow performance).\"\"\"\n# for sen_col in sen_cols:\n# print('Start tokenizing \"{}\"'.format(sen_col))\n# sen_token_col = '{}_token'.format(sen_col)\n# df[sen_token_col] = df[sen_col].apply(tokenize_clean_sentence)\n# print('Finish tokenizing \"{}\"'.format(sen_col))\n# return df\n\ndef preprocess_sentence_par(df, sen_col='review_title_comments',\n sen_token_col='review_title_comments_token', num_proc=32):\n \"\"\"Preporecess sentences in parallel.\n \n Note: We apply multiprocessing with 32 cores; adjust `num_proc` by your computing environment.\n \"\"\"\n import multiprocessing as mp\n pool = mp.Pool(num_proc)\n df[sen_token_col] = pool.map_async(tokenize_clean_sentence , df[sen_col]).get()\n return df\n\ndef get_bag_of_words(w_ls):\n \"\"\"Get bag of words in word list.\"\"\"\n w_bow = dict([(w, True) for w in w_ls])\n return w_bow\n\ndef 
get_bag_of_words_par(df, sen_token_col='review_title_comments_token',\n bow_col='review_title_comments_bow', num_proc=32):\n \"\"\"Get bag of words in parallel for sentences.\"\"\"\n import multiprocessing as mp\n pool = mp.Pool(num_proc)\n df[bow_col] = pool.map_async(get_bag_of_words , df[sen_token_col]).get()\n return df\n\ndef label_review(df, scores_ls=None, label='negative',\n score_col='rating_overall',\n review_col='review_title_comments_bow'):\n \"\"\"Label review by positive or negative.\"\"\"\n df_label = df[df[score_col].isin(scores_ls)]\n label_review_ls = (df_label[review_col]\n .apply(lambda bow: (bow, label))\n .tolist())\n return label_review_ls\n\ndef permutate(data_ls):\n \"\"\"Randomly permutate data.\"\"\"\n np.random.shuffle(data_ls)\n\ndef create_train_test_sets(pos_review_ls, neg_review_ls, train_percent=0.75):\n \"\"\"Create the training and test sets.\"\"\"\n neg_num = np.int(np.ceil(len(neg_review_ls) * train_percent))\n pos_num = np.int(np.ceil(len(pos_review_ls) * train_percent))\n \n train_set = neg_review_ls[:neg_num] + pos_review_ls[:pos_num]\n permutate(train_set)\n \n test_set = neg_review_ls[neg_num:] + pos_review_ls[pos_num:]\n permutate(test_set)\n \n return train_set, test_set\n\ndef train_naive_bayes(train_set):\n from nltk.classify import NaiveBayesClassifier\n nb_clf = NaiveBayesClassifier.train(train_set)\n return nb_clf\n\ndef eval_naive_bayes(test_set, nb_clf):\n import collections\n from nltk.metrics.scores import precision\n from nltk.metrics.scores import recall\n\n ref_sets = {'positive': set(), \n 'negative': set()}\n pred_sets = {'positive': set(), \n 'negative': set()}\n \n for i, (bow, label) in enumerate(test_set):\n ref_sets[label].add(i)\n pred_label = nb_clf.classify(bow)\n pred_sets[pred_label].add(i)\n \n print('Positive precision:', precision(ref_sets['positive'], pred_sets['positive']))\n print('Positive recall:', recall(ref_sets['positive'], pred_sets['positive']))\n print('Negative precision:', precision(ref_sets['negative'], pred_sets['negative']))\n print('Negative recall:', recall(ref_sets['negative'], pred_sets['negative']))\n\ndef pred_labels(df, clf, \n bow_col='review_title_comments_bow',\n pred_col='pred_label',\n sel_cols=['rating_overall', \n 'review_title_comments_bow', \n 'pred_label']):\n \"\"\"Predict labels for bag of words.\"\"\"\n df[pred_col] = df[bow_col].apply(clf.classify)\n df_pred = df[sel_cols]\n return df_pred\n\ndef get_boxplot_data(pred_label_df, \n pred_col='pred_label', score_col='rating_overall'):\n pos_data = pred_label_df[pred_label_df[pred_col] == 'positive'][score_col].values\n neg_data = pred_label_df[pred_label_df[pred_col] == 'negative'][score_col].values\n box_data = [pos_data, neg_data]\n return box_data\n\ndef plot_box(d_ls, title='Box Plot', xlab='xlab', ylab='ylab', \n xticks=None, xlim=None, ylim=None, figsize=(15, 10)):\n import matplotlib.pyplot as plt\n import seaborn as sns\n import matplotlib\n matplotlib.style.use('ggplot')\n %matplotlib inline\n plt.figure()\n fig, ax = plt.subplots(figsize=figsize)\n plt.boxplot(d_ls)\n plt.title(title)\n plt.xlabel(xlab)\n plt.ylabel(ylab)\n if xticks:\n ax.set_xticklabels(xticks)\n if xlim:\n plt.xlim(xlim)\n if ylim:\n plt.ylim(ylim)\n # plt.axis('auto') \n plt.show()", "Collect Data\nWe first read score and review raw datasets.\n\nScore dataset: two columns\nhotel_review_id: hotel review sequence ID\nrating_overall: overal accommodation rating\nReview dataset: three columns\nhotel_review_id: hotel review sequence ID\nreview_title: review 
title\nreview_comments: detailed review comments", "score_raw_df, review_raw_df = read_score_review(score_file, review_file)\n\nprint(len(score_raw_df))\nprint(len(review_raw_df))\n\nscore_raw_df.head(5)\n\nreview_raw_df.head(5)", "EDA with Datasets\nCheck missing / abnormal data", "count_missing_data(score_raw_df, \n cols=['hotel_review_id', 'rating_overall'])\n\nscore_raw_df[score_raw_df.rating_overall.isnull()]\n\ncount_missing_data(review_raw_df, \n cols=['hotel_review_id', 'review_title', 'review_comments'])\n\nabnorm_df = slice_abnormal_id(score_raw_df, rid='hotel_review_id')\n\nabnorm_df\n\nabnorm_df = slice_abnormal_id(review_raw_df, rid='hotel_review_id')\n\nabnorm_df", "Group-by aggregate score distributions\nFrom the following results we can observe that \n\nthe rating_overall scores are imbalanced. Specifically, only about $1\\%$ of the records have low scores $\\le 5$, and thus about $99\\%$ of the records have scores $\\ge 6$.\nsome records have missing scores.", "score_raw_df.rating_overall.unique()\n\nscore_agg_df = groupby_agg_data(\n score_raw_df, gkey='rating_overall', rid='hotel_review_id')\n\nscore_agg_df", "Pre-process Datasets\nRemove missing / abnormal data\nSince there are only a few records (27) with missing hotel_review_id and rating_overall values, we simply ignore them.", "score_df, review_df = remove_missing_abnormal_data(\n score_raw_df, review_raw_df, \n rid='hotel_review_id', \n score_col='rating_overall')\n\nscore_df.head(5)\n\nreview_df.head(5)", "Join score & review datasets\nTo leverage fast vectorized operations with Pandas DataFrames, we join the score and review datasets.", "score_review_df_ = join_score_review(score_df, review_df)\n\nscore_review_df_.head(5)", "The following is the procedure for processing the natural language text.\nConcat review_title and review_comments\nUsing the Occam's Razor Principle, since review_title and review_comments are both natural language text, we can simply concat them into one sentence for further natural language processing.", "score_review_df = concat_review_title_comments(\n score_review_df_, \n concat_cols=['review_title', 'review_comments'],\n concat_2col='review_title_comments')\n\nscore_review_df.head(5)", "Lower review_title_comments", "score_review_df = lower_review_title_comments(\n score_review_df, \n lower_col='review_title_comments')\n\nscore_review_df.head(5)", "Tokenize and remove stopwords\nTokenizing is an important technique by which we split the sentence into a vector of individual words. Nevertheless, there are many stopwords that are useless in natural language text, for example: he, is, at, which, and on. Thus we would like to remove them from the vector of tokenized words.\nNote that since the tokenizing and stopword-removal tasks are time-consuming, we apply the Python built-in package multiprocessing for parallel computing to improve the performance.", "start_token_time = time.time()\n\nscore_review_token_df = preprocess_sentence_par(\n score_review_df, \n sen_col='review_title_comments',\n sen_token_col='review_title_comments_token', num_proc=32)\n\nend_token_time = time.time()\nprint('Time for tokenizing: {}'.format(end_token_time - start_token_time))\n\nscore_review_token_df.head(5)\n\nscore_review_token_df.review_title_comments_token[1]", "Get bag of words\nThe tokenized words may contain duplicated words, and for simplicity, we would like to apply the Bag of Words, which just represents the sentence as a bag (multiset) of its words, ignoring grammar and even word order. 
Here, following the Occam's Razor Principle again, we do not keep word frequencies; thus we use binary (presence/absence or True/False) weights.", "start_bow_time = time.time()\n\nscore_review_bow_df = get_bag_of_words_par(\n score_review_token_df, \n sen_token_col='review_title_comments_token',\n bow_col='review_title_comments_bow', num_proc=32)\n\nend_bow_time = time.time()\nprint('Time for bag of words: {}'.format(end_bow_time - start_bow_time))\n\nscore_review_bow_df.review_title_comments_bow[:5]", "Sentiment Analysis\nLabel data\nSince we would like to polarize the data, taking into consideration the imbalanced data problem mentioned before, we decide to label\n\nratings 2, 3 and 4 by \"negative\",\nratings 9 and 10 by \"positive\".", "neg_review_ls = label_review(\n score_review_bow_df,\n scores_ls=[2, 3, 4], label='negative',\n score_col='rating_overall',\n review_col='review_title_comments_bow')\n\npos_review_ls = label_review(\n score_review_bow_df,\n scores_ls=[9, 10], label='positive',\n score_col='rating_overall',\n review_col='review_title_comments_bow')\n\nneg_review_ls[1]\n\npos_review_ls[1]", "Split training and test sets\nWe split the training and test sets by the rule of $75\\%$ and $25\\%$.", "train_set, test_set = create_train_test_sets(\n pos_review_ls, neg_review_ls, train_percent=0.75)\n\ntrain_set[10]", "Naive Bayes Classification\nWe first apply a Naive Bayes Classifier to learn positive or negative sentiment.", "nb_clf = train_naive_bayes(train_set)", "Model evaluation\nWe evaluate our model by positive / negative precision and recall. From the results we can observe that our model performs fairly well.", "eval_naive_bayes(test_set, nb_clf)", "Measure Real-World Performance\nPredict label based on bag of words", "start_pred_time = time.time()\n\npred_label_df = pred_labels(\n score_review_bow_df, nb_clf, \n bow_col='review_title_comments_bow',\n pred_col='pred_label')\n\nend_pred_time = time.time()\nprint('Time for prediction: {}'.format(end_pred_time - start_pred_time))\n\npred_label_df.head(5)", "Compare the two labels' score distributions\nFrom the following boxplot, we can observe that our model performs reasonably well in the real world, even with our surprisingly simple machine learning model. \nWe can further apply divergence measures, such as Kullback-Leibler divergence, to quantify the rating_overall distribution distance between the two label groups, if needed.", "box_data = get_boxplot_data(\n pred_label_df, \n pred_col='pred_label', score_col='rating_overall')\n\nplot_box(box_data, title='Box Plot for rating_overall by Sentiment Classes', \n xlab='class', ylab='rating_overall', \n xticks=['positive', 'negative'], figsize=(12, 7))", "Discussions\n\nFollowing the Occam's Razor Principle, we first apply the \"standard\" approach for Sentiment Analysis with Natural Language Processing.\nOur simple Naive Bayes Classifier performs fairly well in both model evaluation and real-world performance, as shown by the precision and recall for positive and negative sentiment and by the boxplot, respectively.\nNote that our model predicts really well on positive reviews, which generally produce high rating_overall. Nevertheless, the model performs comparatively poorly on negative reviews, since some of them still produce an above-average rating_overall. 
This is because the rating_overall distribution is imbalanced, which leaves far fewer negative reviews to learn from.\nThus, to improve the model performance, we can address the imbalanced data problem by applying Sampling Techniques, for example positive sampling, by which we keep all negative records and sample the positive ones for better classification. (We will discuss sampling techniques for the imbalanced data problem later.)\nWe can further enhance the performance by applying more advanced machine learning models with L1/L2-regularization, or by using better Feature Engineering techniques, such as Bigrams, or by learning word embeddings with Word2Vec.\nFurthermore, we can apply Divergence Measures, such as Kullback-Leibler divergence, to quantify the rating_overall distribution distance between the two label groups. By calculating divergence measures we can quantify our enhancements." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
probml/pyprobml
notebooks/book2/04/gibbs_demo_potts_jax.ipynb
mit
[ "Gibbs sampling for a Potts model on a 2d lattice\nMing Liang Ang.\nThe math behind the model\nThe potts model\n$$p(x) = \\frac{1}{Z}\\exp{-\\mathcal{E}(x)}\\\n\\mathcal{E}(x) = - J\\sum_{i\\sim j}\\mathbb{I}(x_i = x_j)\\\np(x_i = k | x_{-i}) = \\frac{\\exp(J\\sum_{n\\in \\text{nbr}}\\mathbb{I}(x_n = k))}{\\sum_{k'}\\exp(J\\sum_{n\\in \\text{nbr}}\\mathbb{I}(x_n = k))}$$ \nIn order to efficiently compute \n$$\n\\sum_{n\\in \\text{nbr}}$$ \nfor all the different states in our potts model we use a convolution. The idea is to first reperesent each potts model state as a one-hot state and then apply a convolution to compute the logits. \n$$\\begin{pmatrix}\nS_{11} & S_{12} & \\ldots & S_{1n} \\\nS_{21} & S_{22} & \\ldots & S_{2n} \\\n\\vdots & &\\ddots & \\vdots\\\nS_{n1} & S_{n2} & \\ldots & S_{nn} \\\n \\end{pmatrix} \\underset{\\longrightarrow}{\\text{padding}} \\begin{pmatrix}\n 0 & \\ldots & 0 & \\ldots & 0 & 0\\\n0 & S_{11} & S_{12} & \\ldots & S_{1n} & 0 \\\n0 & S_{21} & S_{22} & \\ldots & S_{2n}&0 \\\n\\vdots & &\\ddots & \\vdots\\\n0 & S_{n1} & S_{n2} & \\ldots & S_{nn} & 0 \\\n0 & \\ldots & 0 & \\ldots & 0 & 0\\\n \\end{pmatrix} \\underset{\\longrightarrow}{\\text{convolution}} \\begin{pmatrix}\nE_{11} & E_{12} & \\ldots & E_{1n} \\\nE_{21} & E_{22} & \\ldots & E_{2n} \\\n\\vdots & &\\ddots & \\vdots\\\nE_{n1} & E_{n2} & \\ldots & E_{nn} \\\n \\end{pmatrix} $$ \nAn example\n$$\\begin{pmatrix}\n1 & 1 & 1 \\\n1 & 1 & 1 \\\n1 & 1 & 1 \n \\end{pmatrix} \\underset{\\longrightarrow}{\\text{padding}} \\begin{pmatrix}\n 0 & 0 & 0 & 0 & 0\\\n0 & 1 & 1 & 1 & 0 \\\n0 & 1 & 1 & 1 & 0\\\n0 & 1 & 1 & 1 & 0 \\\n0 & 0 & 0 & 0 & 0\n \\end{pmatrix} \\underset{\\longrightarrow}{\\text{convolution}} \\begin{pmatrix}\n2 & 3 & 2 \\\n3 & 4 & 3 \\\n2 & 3 & 2\n \\end{pmatrix} $$ \nWhere the matrix $$\\begin{pmatrix}\n2 & 3 & 2 \\\n3 & 4 & 3 \\\n2 & 3 & 2\n \\end{pmatrix} $$ correspond to the number of neighbours with the same value around in the matrix \\begin{pmatrix}\n1 & 1 & 1 \\\n1 & 1 & 1 \\\n1 & 1 & 1 \n \\end{pmatrix} \nFor more than 2 states, we represent the above matrix as a 3d tensor which you can imagine as the state matrix but with each element as a one hot vector. 
\nImport libaries", "import jax\nimport jax.numpy as jnp\nfrom jax import lax\nfrom jax import vmap\nfrom jax import random\nfrom jax import jit\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ntry:\n from tqdm import trange\nexcept ModuleNotFoundError:\n %pip install -qq tqdm\n from tqdm import trange", "RNG key", "key = random.PRNGKey(12234)", "The number of states and size of the 2d grid", "K = 10\nix = 128\niy = 128", "The convolutional kernel for computing energy of markov blanket of each node", "kernel = jnp.zeros((3, 3, 1, 1), dtype=jnp.float32)\nkernel += jnp.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])[:, :, jnp.newaxis, jnp.newaxis]\n\ndn = lax.conv_dimension_numbers(\n (K, ix, iy, 1), # only ndim matters, not shape\n kernel.shape, # only ndim matters, not shape\n (\"NHWC\", \"HWIO\", \"NHWC\"),\n) # the important bit", "Creating the checkerboard", "mask = jnp.indices((K, iy, ix, 1)).sum(axis=0) % 2\n\ndef checkerboard_pattern1(x):\n return mask[0, :, :, 0]\n\n\ndef checkerboard_pattern2(x):\n return mask[1, :, :, 0]\n\n\ndef make_checkerboard_pattern1():\n arr = vmap(checkerboard_pattern1, in_axes=0)(jnp.array(K * [1]))\n return jnp.expand_dims(arr, -1)\n\n\ndef make_checkerboard_pattern2():\n arr = vmap(checkerboard_pattern2, in_axes=0)(jnp.array(K * [1]))\n return jnp.expand_dims(arr, -1)\n\ndef test_state_mat_update(state_mat_update):\n \"\"\"\n Checking the checkerboard pattern is the same for each channel\n \"\"\"\n mask = make_checkerboard_pattern1()\n inverse_mask = make_checkerboard_pattern2()\n state_mat = jnp.zeros((K, 128, 128, 1))\n sample = jnp.ones((K, 128, 128, 1))\n new_state = state_mat_update(mask, inverse_mask, sample, state_mat)\n assert jnp.array_equal(new_state[0, :, :, 0], new_state[1, :, :, 0])\n\n\ndef test_state_mat_update2(state_mat_update):\n \"\"\"\n Checking the checkerboard pattern is the same for each channel\n \"\"\"\n mask = make_checkerboard_pattern1()\n inverse_mask = make_checkerboard_pattern2()\n state_mat = jnp.ones((K, 128, 128, 1))\n sample = jnp.zeros((K, 128, 128, 1))\n new_state = state_mat_update(mask, inverse_mask, sample, state_mat)\n assert jnp.array_equal(new_state[0, :, :, 0], new_state[1, :, :, 0])\n\n\ndef test_energy(energy):\n \"\"\"\n If you give the convolution all ones, it will produce the number of edges\n it is connected to on a grid i.e the number of neighbours around it.\n \"\"\"\n X = jnp.ones((3, 3))\n state_mat = jax.nn.one_hot(X, K, axis=0)[:, :, :, jnp.newaxis]\n energy = energy(state_mat, 1)\n assert np.array_equal(energy[1, :, :, 0], jnp.array([[2, 3, 2], [3, 4, 3], [2, 3, 2]]))\n\ndef sampler(K, key, logits):\n # Sample from the energy using gumbel trick\n u = random.uniform(key, shape=(K, ix, iy, 1))\n sample = jnp.argmax(logits - jnp.log(-jnp.log(u)), axis=0)\n sample = jax.nn.one_hot(sample, K, axis=0)\n return sample\n\n\ndef state_mat_update(mask, inverse_mask, sample, state_mat):\n # Update the state_mat using masking\n masked_sample = mask * sample\n masked_state_mat = inverse_mask * state_mat\n state_mat = masked_state_mat + masked_sample\n return state_mat\n\n\ndef energy(state_mat, jvalue):\n # Calculate energy\n logits = lax.conv_general_dilated(state_mat, jvalue * kernel, (1, 1), \"SAME\", (1, 1), (1, 1), dn)\n return logits\n\n\ndef gibbs_sampler(key, jvalue, niter=1):\n key, key2 = random.split(key)\n\n X = random.randint(key, shape=(ix, iy), minval=0, maxval=K)\n state_mat = jax.nn.one_hot(X, K, axis=0)[:, :, :, jnp.newaxis]\n\n mask = make_checkerboard_pattern1()\n inverse_mask = 
make_checkerboard_pattern2()\n\n @jit\n def state_update(key, state_mat, mask, inverse_mask):\n logits = energy(state_mat, jvalue)\n sample = sampler(K, key, logits)\n state_mat = state_mat_update(mask, inverse_mask, sample, state_mat)\n return state_mat\n\n for iter in tqdm(range(niter)):\n key, key2 = random.split(key2)\n state_mat = state_update(key, state_mat, mask, inverse_mask)\n mask, inverse_mask = inverse_mask, mask\n\n return jnp.squeeze(jnp.argmax(state_mat, axis=0), axis=-1)", "Running the test", "test_state_mat_update(state_mat_update)\ntest_state_mat_update2(state_mat_update)\ntest_energy(energy)", "Running the model", "Jvals = [1.42, 1.43, 1.44]\n\ngibbs_sampler(key, 1, niter=2)\n\ndfig, axs = plt.subplots(1, len(Jvals), figsize=(8, 8))\nfor t in tqdm(range(len(Jvals))):\n arr = gibbs_sampler(key, Jvals[t], niter=8000)\n axs[t].imshow(arr, cmap=\"Accent\", interpolation=\"nearest\")\n axs[t].set_title(f\"J = {Jvals[t]}\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
noammor/coursera-machinelearning-python
ex3/ml-ex3-onevsall.ipynb
mit
[ "One vs All", "import pandas\nimport numpy as np\nimport scipy.io\nimport scipy.optimize\nimport functools\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Loading and visualizing training data\nThe training data is 5000 digit images of digits of size 20x20. We will display a random selection of 25 of them.", "ex3data1 = scipy.io.loadmat(\"./ex3data1.mat\")\nX = ex3data1['X']\ny = ex3data1['y'][:,0]\ny[y==10] = 0\n\nm, n = X.shape\nm, n\n\nfig = plt.figure(figsize=(5,5))\nfig.subplots_adjust(wspace=0.05, hspace=0.15)\n\nimport random\n\ndisplay_rows, display_cols = (5, 5)\n\nfor i in range(display_rows * display_cols):\n ax = fig.add_subplot(display_rows, display_cols, i+1)\n ax.set_axis_off()\n image = X[random.randint(0, m-1)].reshape(20, 20).T\n image /= np.max(image)\n ax.imshow(image, cmap=plt.cm.Greys_r)\n\nX = np.insert(X, 0, np.ones(m), 1)", "Part 2: Vectorize Logistic Regression\nIn this part of the exercise, you will reuse your logistic regression\ncode from the last exercise. You task here is to make sure that your\nregularized logistic regression implementation is vectorized. After\nthat, you will implement one-vs-all classification for the handwritten\ndigit dataset.", "def sigmoid(z):\n return 1 / (1 + np.exp(-z))\n\ndef h(theta, x):\n return sigmoid(x.dot(theta))\n\n#LRCOSTFUNCTION Compute cost and gradient for logistic regression with \n#regularization\n# J = LRCOSTFUNCTION(theta, X, y, lambda) computes the cost of using\n# theta as the parameter for regularized logistic regression and the\n# gradient of the cost w.r.t. to the parameters. \n\ndef cost(X, y, theta, lambda_=None):\n # You need to return the following variables correctly \n J = 0\n \n # ====================== YOUR CODE HERE ======================\n # Instructions: Compute the cost of a particular choice of theta.\n # You should set J to the cost.\n # Compute the partial derivatives and set grad to the partial\n # derivatives of the cost w.r.t. each parameter in theta\n #\n # Hint: The computation of the cost function and gradients can be\n # efficiently vectorized. For example, consider the computation\n #\n # sigmoid(X * theta)\n #\n # Each row of the resulting matrix will contain the value of the\n # prediction for that example. You can make use of this to vectorize\n # the cost function and gradient computations. 
\n #\n\n \n \n # =============================================================\n \n return J\n\ndef gradient(X, y, theta, lambda_=None):\n # You need to return the following variables correctly \n grad = np.zeros(theta.shape)\n \n # ====================== YOUR CODE HERE ======================\n # Hint: When computing the gradient of the regularized cost function, \n # there're many possible vectorized solutions, but one solution\n # looks like:\n # grad = (unregularized gradient for logistic regression)\n # temp = theta; \n # temp[0] = 0; # because we don't add anything for j = 0 \n # grad = grad + YOUR_CODE_HERE (using the temp variable)\n \n \n \n # =============================================================\n \n return grad\n\ninitial_theta = np.zeros(n + 1)\nlambda_ = 0.1\ncost(X, y, initial_theta, lambda_)\n\ngradient(X, y, initial_theta, lambda_).shape\n\ndef one_vs_all(X, y, num_labels, lambda_):\n #ONEVSALL trains multiple logistic regression classifiers and returns all\n #the classifiers in a matrix all_theta, where the i-th row of all_theta \n #corresponds to the classifier for label i\n # [all_theta] = ONEVSALL(X, y, num_labels, lambda) trains num_labels\n # logisitc regression classifiers and returns each of these classifiers\n # in a list all_theta, where the i-th item of all_theta corresponds \n # to the classifier for label i\n \n # You need to return the following variables correctly \n all_theta = [None] * num_labels\n \n # ====================== YOUR CODE HERE ======================\n # Instructions: You should complete the following code to train num_labels\n # logistic regression classifiers with regularization\n # parameter lambda. \n #\n # Hint: You can use y == c to obtain a vector of True's and False's\n #\n # Note: For this assignment, we recommend using scipy.optimize.minimize with method='L-BFGS-B'\n # to optimize the cost function.\n # It is okay to use a for-loop (for i in range(num_labels)) to\n # loop over the different classes.\n #\n # Example Code for scipy.optimize.minimize:\n #\n # result = scipy.optimize.minimize(lambda t: cost(X, y==digit, t, lambda_),\n # initial_theta,\n # jac=lambda t: gradient(X, y==digit, t, lambda_),\n # method='L-BFGS-B')\n # theta = result.x\n \n \n \n \n \n # =========================================================================\n\nnum_labels = 10\nthetas = one_vs_all(X, y, num_labels, lambda_)\n\nfig = plt.figure(figsize=(10,10))\nfor d in range(10):\n ax = fig.add_subplot(5, 2, d+1)\n ax.scatter(range(m), h(thetas[d], X), s=1)\n\ndef predict_one_vs_all(X, thetas):\n #PREDICT Predict the label for a trained one-vs-all classifier. The labels \n #are in the range 1..K, where K = len(thetas)\n # p = PREDICTONEVSALL(all_theta, X) will return a vector of predictions\n # for each example in the matrix X. Note that X contains the examples in\n # rows. all_theta is a list where the i-th entry is a trained logistic\n # regression theta vector for the i-th class. 
You should set p to a vector\n # of values from 1..K (e.g., p = [1; 3; 1; 2] predicts classes 1, 3, 1, 2\n # for 4 examples) \n \n \n # You need to return the following variables correctly \n p = np.zeros(X.shape[0]);\n \n # ====================== YOUR CODE HERE ======================\n # Instructions: Complete the following code to make predictions using\n # your learned logistic regression parameters (one-vs-all).\n # You should set p to a vector of predictions (from 1 to\n # num_labels).\n #\n # Hint: This code can be done all vectorized using the max function.\n # In particular, the max function can also return the index of the \n # max element, for more information see 'help max'. If your examples \n # are in rows, then, you can use max(A, [], 2) to obtain the max \n # for each row.\n # \n\n\n \n \n \n \n \n \n # =========================================================================\n \n return p\n\npredictions = predict_one_vs_all(X, thetas)\n\nplt.scatter(range(m), predictions, s=1)", "Training set accuracy:", "(predictions == y).mean()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/messy-consortium/cmip6/models/sandbox-1/landice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: MESSY-CONSORTIUM\nSource ID: SANDBOX-1\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:10\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'landice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --&gt; Mass Balance\n7. Ice --&gt; Mass Balance --&gt; Basal\n8. Ice --&gt; Mass Balance --&gt; Frontal\n9. Ice --&gt; Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Ice Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify how ice albedo is modelled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Atmospheric Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Oceanic Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the ocean and ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs an adative grid being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. Base Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe base resolution (in metres), before any adaption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Resolution Limit\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Projection\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of glaciers in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of glaciers, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Dynamic Areal Extent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes the model include a dynamic glacial extent?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Grounding Line Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.3. Ice Sheet\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice sheets simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.4. Ice Shelf\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice shelves simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Ice --&gt; Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Ice --&gt; Mass Balance --&gt; Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Ice --&gt; Mass Balance --&gt; Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Melting\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Ice --&gt; Dynamics\n**\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Approximation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nApproximation type used in modelling ice dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Adaptive Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.4. Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.16/_downloads/plot_artifacts_correction_filtering.ipynb
bsd-3-clause
[ "%matplotlib inline", "Filtering and resampling data\nSome artifacts are restricted to certain frequencies and can therefore\nbe fixed by filtering. An artifact that typically affects only some\nfrequencies is due to the power line.\nPower-line noise is a noise created by the electrical network.\nIt is composed of sharp peaks at 50Hz (or 60Hz depending on your\ngeographical location). Some peaks may also be present at the harmonic\nfrequencies, i.e. the integer multiples of\nthe power-line frequency, e.g. 100Hz, 150Hz, ... (or 120Hz, 180Hz, ...).\nThis tutorial covers some basics of how to filter data in MNE-Python.\nFor more in-depth information about filter design in general and in\nMNE-Python in particular, check out\nsphx_glr_auto_tutorials_plot_background_filtering.py.", "import numpy as np\nimport mne\nfrom mne.datasets import sample\n\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nproj_fname = data_path + '/MEG/sample/sample_audvis_eog_proj.fif'\n\ntmin, tmax = 0, 20 # use the first 20s of data\n\n# Setup for reading the raw data (save memory by cropping the raw data\n# before loading it)\nraw = mne.io.read_raw_fif(raw_fname)\nraw.crop(tmin, tmax).load_data()\nraw.info['bads'] = ['MEG 2443', 'EEG 053'] # bads + 2 more\n\nfmin, fmax = 2, 300 # look at frequencies between 2 and 300Hz\nn_fft = 2048 # the FFT size (n_fft). Ideally a power of 2\n\n# Pick a subset of channels (here for speed reason)\nselection = mne.read_selection('Left-temporal')\npicks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,\n stim=False, exclude='bads', selection=selection)\n\n# Let's first check out all channel types\nraw.plot_psd(area_mode='range', tmax=10.0, picks=picks, average=False)", "Removing power-line noise with notch filtering\nRemoving power-line noise can be done with a Notch filter, directly on the\nRaw object, specifying an array of frequency to be cut off:", "raw.notch_filter(np.arange(60, 241, 60), picks=picks, filter_length='auto',\n phase='zero')\nraw.plot_psd(area_mode='range', tmax=10.0, picks=picks, average=False)", "Removing power-line noise with low-pass filtering\nIf you're only interested in low frequencies, below the peaks of power-line\nnoise you can simply low pass filter the data.", "# low pass filtering below 50 Hz\nraw.filter(None, 50., fir_design='firwin')\nraw.plot_psd(area_mode='range', tmax=10.0, picks=picks, average=False)", "High-pass filtering to remove slow drifts\nTo remove slow drifts, you can high pass.\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>In several applications such as event-related potential (ERP)\n and event-related field (ERF) analysis, high-pass filters with\n cutoff frequencies greater than 0.1 Hz are usually considered\n problematic since they significantly change the shape of the\n resulting averaged waveform (see examples in\n `tut_filtering_hp_problems`). In such applications, apply\n high-pass filters with caution.</p></div>", "raw.filter(1., None, fir_design='firwin')\nraw.plot_psd(area_mode='range', tmax=10.0, picks=picks, average=False)", "To do the low-pass and high-pass filtering in one step you can do\na so-called band-pass filter by running the following:", "# band-pass filtering in the range 1 Hz - 50 Hz\nraw.filter(1, 50., fir_design='firwin')", "Downsampling and decimation\nWhen performing experiments where timing is critical, a signal with a high\nsampling rate is desired. 
However, having a signal with a much higher\nsampling rate than necessary needlessly consumes memory and slows down\ncomputations operating on the data. To avoid that, you can downsample\nyour time series. Since downsampling raw data reduces the timing precision\nof events, it is recommended only for use in procedures that do not require\noptimal precision, e.g. computing EOG or ECG projectors on long recordings.\n<div class=\"alert alert-info\"><h4>Note</h4><p>A *downsampling* operation performs a low-pass (to prevent\n aliasing) followed by *decimation*, which selects every\n $N^{th}$ sample from the signal. See\n :func:`scipy.signal.resample` and\n :func:`scipy.signal.resample_poly` for examples.</p></div>\n\nData resampling can be done with resample methods.", "raw.resample(100, npad=\"auto\") # set sampling frequency to 100Hz\nraw.plot_psd(area_mode='range', tmax=10.0, picks=picks)", "To avoid this reduction in precision, the suggested pipeline for\nprocessing final data to be analyzed is:\n\nlow-pass the data with :meth:mne.io.Raw.filter.\nExtract epochs with :class:mne.Epochs.\nDecimate the Epochs object using :meth:mne.Epochs.decimate or the\n decim argument to the :class:mne.Epochs object.\n\nWe also provide the convenience methods :meth:mne.Epochs.resample and\n:meth:mne.Evoked.resample to downsample or upsample data, but these are\nless optimal because they will introduce edge artifacts into every epoch,\nwhereas filtering the raw data will only introduce edge artifacts only at\nthe start and end of the recording." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/ncc/cmip6/models/sandbox-3/atmoschem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: NCC\nSource ID: SANDBOX-3\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:25\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ncc', 'sandbox-3', 'atmoschem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n5. Key Properties --&gt; Tuning Applied\n6. Grid\n7. Grid --&gt; Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --&gt; Surface Emissions\n11. Emissions Concentrations --&gt; Atmospheric Emissions\n12. Emissions Concentrations --&gt; Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --&gt; Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmospheric chemistry model code.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Chemistry Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. 
Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "1.8. Coupling With Chemical Reactivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is couple with chemical reactivity?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. 
Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemical species advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Split Operator Chemistry Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemistry (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Split Operator Alternate Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\n?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.6. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.7. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.2. Convection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.3. Precipitation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.4. Emissions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.5. Deposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.6. Gas Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.9. 
Photo Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.10. Aerosols\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the atmopsheric chemistry grid", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. 
Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview of transport implementation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Use Atmospheric Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric cehmistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. 
Transport Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric chemistry emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Emissions Concentrations --&gt; Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Emissions Concentrations --&gt; Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.6. 
Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Emissions Concentrations --&gt; Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview gas phase atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Number Of Bimolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.4. Number Of Termolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.7. Number Of Advected Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.8. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.9. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.10. Wet Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.11. Wet Oxidation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry startospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview stratospheric heterogenous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n", "14.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n", "14.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.5. Sedimentation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation is included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview tropospheric heterogenous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n", "15.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.5. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation is included in the tropospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric photo chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Number Of Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17. Photo Chemistry --&gt; Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nPhotolysis scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n", "17.2. Environmental Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
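For reference, a completed property cell is simply the `set_id` call followed by a `set_value` call. The example below is a hypothetical sketch that picks one of the valid choices listed above for the photolysis method; the chosen value is a placeholder, not a statement about any real model:

```python
# Hypothetical, illustrative completion of a property cell.
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
DOC.set_value("Offline (with clouds)")
```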
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/launching_into_ml/solutions/explore_data.ipynb
apache-2.0
[ "Explore and create ML datasets\nIn this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected.\nLearning Objectives\n\nAccess and explore a public BigQuery dataset on NYC Taxi Cab rides\nVisualize your dataset using the Seaborn library\nInspect and clean-up the dataset for future ML model training\nCreate a benchmark to judge future ML model performance off of\n\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. \nLet's start with the Python imports that we need.", "!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst\n\nfrom google.cloud import bigquery\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np", "<h3> Extract sample data from BigQuery </h3>\n\nThe dataset that we will use is <a href=\"https://console.cloud.google.com/bigquery?project=nyc-tlc&p=nyc-tlc&d=yellow&t=trips&page=table\">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.\nLet's write a SQL query to pick up interesting fields from the dataset. It's a good idea to get the timestamp in a predictable format.", "%%bigquery\nSELECT\n FORMAT_TIMESTAMP(\n \"%Y-%m-%d %H:%M:%S %Z\", pickup_datetime) AS pickup_datetime,\n pickup_longitude, pickup_latitude, dropoff_longitude,\n dropoff_latitude, passenger_count, trip_distance, tolls_amount, \n fare_amount, total_amount \nFROM\n `nyc-tlc.yellow.trips` # TODO 1\nLIMIT 10", "Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this.\nWe will also store the BigQuery result in a Pandas dataframe named \"trips\"", "%%bigquery trips\nSELECT\n FORMAT_TIMESTAMP(\n \"%Y-%m-%d %H:%M:%S %Z\", pickup_datetime) AS pickup_datetime,\n pickup_longitude, pickup_latitude, \n dropoff_longitude, dropoff_latitude,\n passenger_count,\n trip_distance,\n tolls_amount,\n fare_amount,\n total_amount\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1\n\nprint(len(trips))\n\n# We can slice Pandas dataframes as if they were arrays\ntrips[:10]", "<h3> Exploring data </h3>\n\nLet's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.", "# TODO 2\nax = sns.regplot(\n x=\"trip_distance\", y=\"fare_amount\",\n fit_reg=False, ci=None, truncate=True, data=trips)\nax.figure.set_size_inches(10, 8)", "Hmm ... do you see something wrong with the data that needs addressing?\nIt appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. 
We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).\nNote the extra WHERE clauses.", "%%bigquery trips\nSELECT\n FORMAT_TIMESTAMP(\n \"%Y-%m-%d %H:%M:%S %Z\", pickup_datetime) AS pickup_datetime,\n pickup_longitude, pickup_latitude, \n dropoff_longitude, dropoff_latitude,\n passenger_count,\n trip_distance,\n tolls_amount,\n fare_amount,\n total_amount\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1\n# TODO 3\n AND trip_distance > 0\n AND fare_amount >= 2.5\n\nprint(len(trips))\n\nax = sns.regplot(\n x=\"trip_distance\", y=\"fare_amount\",\n fit_reg=False, ci=None, truncate=True, data=trips)\nax.figure.set_size_inches(10, 8)", "What's up with the streaks around 45 dollars and 50 dollars? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.\nLet's also examine whether the toll amount is captured in the total amount.", "tollrides = trips[trips[\"tolls_amount\"] > 0]\ntollrides[tollrides[\"pickup_datetime\"] == \"2012-02-27 09:19:10 UTC\"]\n\nnotollrides = trips[trips[\"tolls_amount\"] == 0]\nnotollrides[notollrides[\"pickup_datetime\"] == \"2012-02-27 09:19:10 UTC\"]", "Looking at a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.\nLet's also look at the distribution of values within the columns.", "trips.describe()", "Hmm ... The min, max of longitude look strange.\nFinally, let's actually look at the start and end of a few of the trips.", "def showrides(df, numlines):\n lats = []\n lons = []\n for iter, row in df[:numlines].iterrows():\n lons.append(row[\"pickup_longitude\"])\n lons.append(row[\"dropoff_longitude\"])\n lons.append(None)\n lats.append(row[\"pickup_latitude\"])\n lats.append(row[\"dropoff_latitude\"])\n lats.append(None)\n\n sns.set_style(\"darkgrid\")\n plt.figure(figsize=(10, 8))\n plt.plot(lons, lats)\n\nshowrides(notollrides, 10)\n\nshowrides(tollrides, 10)", "As you'd expect, rides that involve a toll are longer than the typical ride.\n<h3> Quality control and other preprocessing </h3>\n\nWe need to do some clean-up of the data:\n<ol>\n<li>New York city longitudes are around -74 and latitudes are around 41.</li>\n<li>We shouldn't have zero passengers.</li>\n<li>Clean up the total_amount column to reflect only fare_amount and tolls_amount, and then remove those two columns.</li>\n<li>Before the ride starts, we'll know the pickup and dropoff locations, but not the trip distance (that depends on the route taken), so remove it from the ML dataset</li>\n<li>Discard the timestamp</li>\n</ol>\n\nWe could do preprocessing in BigQuery, similar to how we removed the zero-distance rides, but just to show you another option, let's do this in Python. In production, we'll have to carry out the same preprocessing on the real-time input data. 
\nThis sort of preprocessing of input data is quite common in ML, especially if the quality-control is dynamic.", "def preprocess(trips_in):\n trips = trips_in.copy(deep=True)\n trips.fare_amount = trips.fare_amount + trips.tolls_amount\n del trips[\"tolls_amount\"]\n del trips[\"total_amount\"]\n del trips[\"trip_distance\"] # we won't know this in advance!\n\n qc = np.all([\n trips[\"pickup_longitude\"] > -78,\n trips[\"pickup_longitude\"] < -70,\n trips[\"dropoff_longitude\"] > -78,\n trips[\"dropoff_longitude\"] < -70,\n trips[\"pickup_latitude\"] > 37,\n trips[\"pickup_latitude\"] < 45,\n trips[\"dropoff_latitude\"] > 37,\n trips[\"dropoff_latitude\"] < 45,\n trips[\"passenger_count\"] > 0\n ], axis=0)\n\n return trips[qc]\n\ntripsqc = preprocess(trips)\ntripsqc.describe()", "The quality control has removed about 300 rows (11400 - 11101) or about 3% of the data. This seems reasonable.\nLet's move on to creating the ML datasets.\n<h3> Create ML datasets </h3>\n\nLet's split the QCed data randomly into training, validation and test sets.\nNote that this is not the entire data. We have 1 billion taxicab rides. This is just splitting the 10,000 rides to show you how it's done on smaller datasets. In reality, we'll have to do it on all 1 billion rides and this won't scale.", "shuffled = tripsqc.sample(frac=1)\ntrainsize = int(len(shuffled[\"fare_amount\"]) * 0.70)\nvalidsize = int(len(shuffled[\"fare_amount\"]) * 0.15)\n\ndf_train = shuffled.iloc[:trainsize, :]\ndf_valid = shuffled.iloc[trainsize:(trainsize + validsize), :]\ndf_test = shuffled.iloc[(trainsize + validsize):, :]\n\ndf_train.head(n=1)\n\ndf_train.describe()\n\ndf_valid.describe()\n\ndf_test.describe()", "Let's write out the three dataframes to appropriately named csv files. We can use these csv files for local training (recall that these files represent only 1/100,000 of the full dataset) just to verify our code works, before we run it on all the data.", "def to_csv(df, filename):\n outdf = df.copy(deep=False)\n outdf.loc[:, \"key\"] = np.arange(0, len(outdf)) # rownumber as key\n # Reorder columns so that target is first column\n cols = outdf.columns.tolist()\n cols.remove(\"fare_amount\")\n cols.insert(0, \"fare_amount\")\n print (cols) # new order of columns\n outdf = outdf[cols]\n outdf.to_csv(filename, header=False, index_label=False, index=False)\n\nto_csv(df_train, \"taxi-train.csv\")\nto_csv(df_valid, \"taxi-valid.csv\")\nto_csv(df_test, \"taxi-test.csv\")\n\n!head -10 taxi-valid.csv", "<h3> Verify that datasets exist </h3>", "!ls -l *.csv", "We have 3 .csv files corresponding to train, valid, test. The ratio of file-sizes correspond to our split of the data.", "%%bash\nhead taxi-train.csv", "Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them.\n<h3> Benchmark </h3>\n\nBefore we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark.\nMy model is going to be to simply divide the mean fare_amount by the mean trip_distance to come up with a rate and use that to predict. 
Let's compute the RMSE of such a model.", "def distance_between(lat1, lon1, lat2, lon2):\n # Haversine formula to compute distance \"as the crow flies\".\n lat1_r = np.radians(lat1)\n lat2_r = np.radians(lat2)\n lon_diff_r = np.radians(lon2 - lon1)\n sin_prod = np.sin(lat1_r) * np.sin(lat2_r)\n cos_prod = np.cos(lat1_r) * np.cos(lat2_r) * np.cos(lon_diff_r)\n minimum = np.minimum(1, sin_prod + cos_prod)\n dist = np.degrees(np.arccos(minimum)) * 60 * 1.515 * 1.609344\n\n return dist\n\ndef estimate_distance(df):\n return distance_between(\n df[\"pickuplat\"], df[\"pickuplon\"], df[\"dropofflat\"], df[\"dropofflon\"])\n\ndef compute_rmse(actual, predicted):\n return np.sqrt(np.mean((actual - predicted) ** 2))\n\ndef print_rmse(df, rate, name):\n print (\"{1} RMSE = {0}\".format(\n compute_rmse(df[\"fare_amount\"], rate * estimate_distance(df)), name))\n\n# TODO 4\nFEATURES = [\"pickuplon\", \"pickuplat\", \"dropofflon\", \"dropofflat\", \"passengers\"]\nTARGET = \"fare_amount\"\ncolumns = list([TARGET])\ncolumns.append(\"pickup_datetime\")\ncolumns.extend(FEATURES) # in CSV, target is first column, after the features\ncolumns.append(\"key\")\ndf_train = pd.read_csv(\"taxi-train.csv\", header=None, names=columns)\ndf_valid = pd.read_csv(\"taxi-valid.csv\", header=None, names=columns)\ndf_test = pd.read_csv(\"taxi-test.csv\", header=None, names=columns)\nrate = df_train[\"fare_amount\"].mean() / estimate_distance(df_train).mean()\nprint (\"Rate = ${0}/km\".format(rate))\nprint_rmse(df_train, rate, \"Train\")\nprint_rmse(df_valid, rate, \"Valid\") \nprint_rmse(df_test, rate, \"Test\") ", "<h2>Benchmark on same dataset</h2>\n\nThe RMSE depends on the dataset, and for comparison, we have to evaluate on the same dataset each time. We'll use this query in later labs:", "validation_query = \"\"\"\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers,\n \"unused\" AS key\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2\n AND trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n\"\"\"\n\nclient = bigquery.Client()\ndf_valid = client.query(validation_query).to_dataframe()\nprint_rmse(df_valid, 2.59988, \"Final Validation Set\")", "The simple distance-based rule gives us a RMSE of <b>$8.14</b>. We have to beat this, of course, but you will find that simple rules of thumb like this can be surprisingly difficult to beat.\nLet's be ambitious, though, and make our goal to build ML models that have a RMSE of less than $6 on the test set.\nCopyright 2020 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
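The benchmark above boils down to two ingredients: a single dollars-per-kilometre rate learned from training data, and an RMSE computed on held-out data. As an illustrative aside, here is a minimal, self-contained sketch of that same rate-times-distance benchmark on synthetic numbers (not the taxi data):

```python
import numpy as np

rng = np.random.default_rng(37)

# Toy data: fares roughly proportional to distance, plus noise.
distance_km = rng.uniform(1, 20, size=1000)
fare = 2.5 + 2.6 * distance_km + rng.normal(0, 2.0, size=1000)

# "Train" on the first half: a single global $/km rate.
rate = fare[:500].mean() / distance_km[:500].mean()

# Evaluate on the second half with RMSE, mirroring print_rmse above.
pred = rate * distance_km[500:]
rmse = np.sqrt(np.mean((fare[500:] - pred) ** 2))
print(f"rate = {rate:.2f} $/km, held-out RMSE = {rmse:.2f}")
```

Any later ML model should be judged against this kind of one-number baseline before adding complexity.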
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
vangj/py-bbn
jupyter/generate-bbn.ipynb
apache-2.0
[ "Purpose: demonstrate generating random Bayesian belief networks\nLet's generate some random Bayesian belief networks (BBNs) and perform inference.", "import numpy as np\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport warnings\n\nfrom pybbn.generator.bbngenerator import generate_singly_bbn, generate_multi_bbn, convert_for_exact_inference\nfrom pybbn.generator.bbngenerator import convert_for_drawing\nfrom pybbn.pptc.inferencecontroller import InferenceController\n\nnp.random.seed(37)\n\ng, p = generate_multi_bbn(5, max_iter=5)\nm_bbn = convert_for_exact_inference(g, p)\nnx_multi_bbn = convert_for_drawing(m_bbn)\n\ng, p = generate_singly_bbn(5, max_iter=10)\ns_bbn = convert_for_exact_inference(g, p)\nnx_singly_bbn = convert_for_drawing(s_bbn)", "Here, we visualize the generated multi- and singly-connected BBNs.", "with warnings.catch_warnings():\n warnings.simplefilter('ignore')\n \n plt.figure(figsize=(10, 5))\n plt.subplot(121) \n nx.draw(nx_multi_bbn, with_labels=True, font_weight='bold')\n plt.title('Multi-connected BBN')\n plt.subplot(122) \n nx.draw(nx_singly_bbn, with_labels=True, font_weight='bold')\n plt.title('Singly-connected BBN')", "Now, let's print out the probabilities of each node for the multi- and singly-connected BBNs.", "join_tree = InferenceController.apply(m_bbn)\nfor node in join_tree.get_bbn_nodes():\n potential = join_tree.get_bbn_potential(node)\n print(node)\n print(potential)\n print('>')\n\njoin_tree = InferenceController.apply(s_bbn)\nfor node in join_tree.get_bbn_nodes():\n potential = join_tree.get_bbn_potential(node)\n print(node)\n print(potential)\n print('>')", "Generate a lot of graphs and visualize them", "def generate_graphs(n=10, prog='neato', multi=True):\n d = {}\n for i in range(n):\n max_nodes = np.random.randint(3, 8)\n max_iter = np.random.randint(10, 100)\n \n if multi is True:\n g, p = generate_multi_bbn(max_nodes, max_iter=max_iter) \n else: \n g, p = generate_singly_bbn(max_nodes, max_iter=max_iter)\n \n bbn = convert_for_exact_inference(g, p)\n pos = nx.nx_agraph.graphviz_layout(g, prog=prog)\n \n d[i] = {\n 'g': g,\n 'p': p,\n 'bbn': bbn,\n 'pos': pos\n }\n return d\n\ndef draw_graphs(graphs, prefix):\n fig, axes = plt.subplots(5, 2, figsize=(15, 20))\n for i, ax in enumerate(np.ravel(axes)):\n graph = graphs[i]\n nx.draw(graph['g'], pos=graph['pos'], with_labels=True, ax=ax)\n ax.set_title('{} Graph {}'.format(prefix, i + 1))\n\nmulti_graphs = generate_graphs(multi=True)\nsingly_graphs = generate_graphs(multi=False)\n\nwith warnings.catch_warnings():\n warnings.simplefilter('ignore')\n \n draw_graphs(multi_graphs, 'Multi-connected')\n draw_graphs(singly_graphs, 'Singly-connected')" ]
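The potentials printed above are prior marginals. A natural next step is to set evidence on one node and see how the remaining posteriors shift. The sketch below leans on pybbn's documented `EvidenceBuilder` / `set_observation` API; clamping the first node of the generated network to its first state is purely an illustrative assumption:

```python
from pybbn.graph.jointree import EvidenceBuilder

# Rebuild the join tree for the singly-connected network from above.
join_tree = InferenceController.apply(s_bbn)

# Clamp an arbitrary node to its first state (assumes states are reachable
# via node.variable.values, as in pybbn's examples).
node = join_tree.get_bbn_nodes()[0]
ev = EvidenceBuilder() \
    .with_node(node) \
    .with_evidence(node.variable.values[0], 1.0) \
    .build()
join_tree.set_observation(ev)

# The posteriors of every node now reflect the observation.
for n in join_tree.get_bbn_nodes():
    print(n)
    print(join_tree.get_bbn_potential(n))
    print('>')
```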
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
karlstroetmann/Algorithms
Python/Chapter-02/Power.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open('../style.css') as file:\n    css = file.read()\nHTML(css)", "Efficient Computation of Powers\nThe function power takes two natural numbers $m$ and $n$ and computes $m^n$. Our first implementation is inefficient and takes $n-1$ multiplications to compute $m^n$.", "def power(m, n):\n    r = 1\n    for i in range(n):\n        r *= m\n    return r\n\npower(2, 3), power(3, 2)\n\n%%time\np = power(3, 500000)\n\np", "Next, we try a recursive implementation that is based on the following two equations:\n1. $m^0 = 1$\n2. $m^n = \\left\\{\\begin{array}{ll}\n m^{n//2} \\cdot m^{n//2} & \\mbox{if $n$ is even}; \\\\\n m^{n//2} \\cdot m^{n//2} \\cdot m & \\mbox{if $n$ is odd}.\n \\end{array}\n \\right.\n $", "def power(m, n):\n    if n == 0:\n        return 1\n    p = power(m, n // 2)\n    if n % 2 == 0:\n        return p * p\n    else:\n        return p * p * m\n\n%%time\np = power(3, 500000)" ]
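Because the exponent is halved at every step, the recursive version needs only O(log n) multiplications instead of n - 1. For illustration, here is a sketch of the same square-and-multiply idea written iteratively; it walks through the bits of n from least to most significant:

```python
def power_iter(m, n):
    # Iterative exponentiation by squaring: O(log n) multiplications.
    r = 1
    while n > 0:
        if n % 2 == 1:   # lowest bit of n is set
            r *= m
        m *= m           # square the base
        n //= 2          # drop the lowest bit
    return r

assert power_iter(2, 3) == 8
assert power_iter(3, 0) == 1
assert power_iter(3, 10) == 3 ** 10
```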
[ "code", "markdown", "code", "markdown", "code" ]
JakeColtman/BayesianSurvivalAnalysis
Full done.ipynb
mit
[ "import lifelines\nimport pymc as pm\nfrom pyBMA.CoxPHFitter import CoxPHFitter\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom numpy import log\nfrom datetime import datetime\nimport pandas as pd\n%matplotlib inline ", "The first step in any data analysis is acquiring and munging the data\nOur starting data set can be found here:\n http://jakecoltman.com in the pyData post\nIt is designed to be roughly similar to the output from DCM's path to conversion\nDownload the file and transform it into something with the columns:\nid,lifetime,age,male,event,search,brand\nwhere lifetime is the total time that we observed someone not convert for and event should be 1 if we see a conversion and 0 if we don't. Note that all values should be converted into ints\nIt is useful to note that end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)", "running_id = 0\noutput = [[0]]\nwith open(\"E:/output.txt\") as file_open:\n for row in file_open.read().split(\"\\n\"):\n cols = row.split(\",\")\n if cols[0] == output[-1][0]:\n output[-1].append(cols[1])\n output[-1].append(True)\n else:\n output.append(cols)\n output = output[1:]\n \nfor row in output:\n if len(row) == 6:\n row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False]\noutput = output[1:-1]\n\ndef convert_to_days(dt):\n day_diff = dt / np.timedelta64(1, 'D')\n if day_diff == 0:\n return 23.0\n else: \n return day_diff\n\ndf = pd.DataFrame(output, columns=[\"id\", \"advert_time\", \"male\",\"age\",\"search\",\"brand\",\"conversion_time\",\"event\"])\ndf[\"lifetime\"] = pd.to_datetime(df[\"conversion_time\"]) - pd.to_datetime(df[\"advert_time\"])\ndf[\"lifetime\"] = df[\"lifetime\"].apply(convert_to_days)\ndf[\"male\"] = df[\"male\"].astype(int)\ndf[\"search\"] = df[\"search\"].astype(int)\ndf[\"brand\"] = df[\"brand\"].astype(int)\ndf[\"age\"] = df[\"age\"].astype(int)\ndf[\"event\"] = df[\"event\"].astype(int)\ndf = df.drop('advert_time', 1)\ndf = df.drop('conversion_time', 1)\ndf = df.set_index(\"id\")\ndf = df.dropna(thresh=2)\ndf.median()\n\n###Parametric Bayes\n#Shout out to Cam Davidson-Pilon\n\n## Example fully worked model using toy data\n## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html\n## Note that we've made some corrections \n\nN = 2500\n\n##Generate some random data \nlifetime = pm.rweibull( 2, 5, size = N )\nbirth = pm.runiform(0, 10, N)\ncensor = ((birth + lifetime) >= 10)\nlifetime_ = lifetime.copy()\nlifetime_[censor] = 10 - birth[censor]\n\n\nalpha = pm.Uniform('alpha', 0, 20)\nbeta = pm.Uniform('beta', 0, 20)\n\n@pm.observed\ndef survival(value=lifetime_, alpha = alpha, beta = beta ):\n return sum( (1-censor)*(log( alpha/beta) + (alpha-1)*log(value/beta)) - (value/beta)**(alpha))\n\nmcmc = pm.MCMC([alpha, beta, survival ] )\nmcmc.sample(50000, 30000)\n\npm.Matplot.plot(mcmc)\nmcmc.trace(\"alpha\")[:]", "Problems: \n1 - Try to fit your data from section 1 \n2 - Use the results to plot the distribution of the median\n\nNote that the media of a Weibull distribution is:\n$$β(log 2)^{1/α}$$", "censor = np.array(df[\"event\"].apply(lambda x: 0 if x else 1).tolist())\nalpha = pm.Uniform(\"alpha\", 0,50) \nbeta = pm.Uniform(\"beta\", 0,50) \n\n@pm.observed\ndef survival(value=df[\"lifetime\"], alpha = alpha, beta = beta ):\n return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))\n\n\nmcmc = pm.MCMC([alpha, beta, survival ] )\nmcmc.sample(10000)\n\ndef weibull_median(alpha, beta):\n return beta * ((log(2)) ** ( 1 / 
alpha))\nplt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace(\"alpha\"), mcmc.trace(\"beta\"))])", "Problems:\n4 - Try adjusting the number of samples for burn-in and thinning\n5 - Try adjusting the prior and see how it affects the estimate", "censor = np.array(df[\"event\"].apply(lambda x: 0 if x else 1).tolist())\nalpha = pm.Uniform(\"alpha\", 0,50) \nbeta = pm.Uniform(\"beta\", 0,50) \n\n@pm.observed\ndef survival(value=df[\"lifetime\"], alpha = alpha, beta = beta ):\n    return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))\n\nmcmc = pm.MCMC([alpha, beta, survival ] )\nmcmc.sample(10000, burn = 3000, thin = 20)\n\npm.Matplot.plot(mcmc)\n\n#Solution to Q5\n## Adjusting the priors impacts the overall result\n## If we give a looser, less informative prior then we end up with a broader, shorter distribution\n## If we give much more informative priors, then we get a tighter, taller distribution\n\ncensor = np.array(df[\"event\"].apply(lambda x: 0 if x else 1).tolist())\n\n## Note the narrowing of the prior\nalpha = pm.Normal(\"alpha\", 1.7, 10000) \nbeta = pm.Normal(\"beta\", 18.5, 10000) \n\n####Uncomment this to see the result of looser priors\n## Note this ends up pretty much the same as we're already very loose\n#alpha = pm.Uniform(\"alpha\", 0, 30) \n#beta = pm.Uniform(\"beta\", 0, 30) \n\n@pm.observed\ndef survival(value=df[\"lifetime\"], alpha = alpha, beta = beta ):\n    return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))\n\nmcmc = pm.MCMC([alpha, beta, survival ] )\nmcmc.sample(10000, burn = 5000, thin = 20)\npm.Matplot.plot(mcmc)\n#plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace(\"alpha\"), mcmc.trace(\"beta\"))])", "Problems:\n7 - Try testing whether the median is greater than a different value", "medians = [weibull_median(x[0], x[1]) for x in zip(mcmc.trace(\"alpha\"), mcmc.trace(\"beta\"))]\ntesting_value = 14.9\nnumber_of_greater_samples = sum([x >= testing_value for x in medians])\n100 * (number_of_greater_samples / len(medians))", "If we want to look at covariates, we need a new approach. \nWe'll use Cox proportional hazards, a very popular regression model.\nTo fit in Python we use the module lifelines:\nhttp://lifelines.readthedocs.io/en/latest/", "#Fitting solution\ncf = lifelines.CoxPHFitter()\ncf.fit(df, 'lifetime', event_col = 'event')\ncf.summary", "Once we've fit the data, we need to do something useful with it. 
Try to do the following things:\n1 - Plot the baseline survival function\n\n2 - Predict the functions for a particular set of features\n\n3 - Plot the survival function for two different sets of features\n\n4 - For your results in part 3 calculate how much more likely a death event is for one than the other for a given period of time", "#Solution to 1\nfig, axis = plt.subplots(nrows=1, ncols=1)\ncf.baseline_survival_.plot(ax = axis, title = \"Baseline Survival\")\n\nregressors = np.array([[1,45,0,0]])\nsurvival = cf.predict_survival_function(regressors)\nsurvival.head()\n\n#Solution to plotting multiple regressors\nfig, axis = plt.subplots(nrows=1, ncols=1, sharex=True)\nregressor1 = np.array([[1,45,0,1]])\nregressor2 = np.array([[1,23,1,1]])\nsurvival_1 = cf.predict_survival_function(regressor1)\nsurvival_2 = cf.predict_survival_function(regressor2)\nplt.plot(survival_1,label = \"45 year old male - display\")\nplt.plot(survival_2,label = \"23 year old male - search\")\nplt.legend(loc = \"upper right\")\n\nodds = survival_1 / survival_2\nplt.plot(odds, c = \"red\")", "Model selection\nDifficult to do with classic tools (here)\nProblem:\n1 - Calculate the BMA coefficient values\n\n2 - Try running with different priors", "from pyBMA import CoxPHFitter\nbmaCox = CoxPHFitter.CoxPHFitter()\nbmaCox.fit(df, \"lifetime\", event_col= \"event\", priors= [0.5]*4)\nbmaCox.summary\n\n#Low probability for everything favours parsimonious models\nbmaCox = CoxPHFitter.CoxPHFitter()\nbmaCox.fit(df, \"lifetime\", event_col= \"event\", priors= [0.1]*4)\nbmaCox.summary\n\n#Boost probability of brand\nbmaCox = CoxPHFitter.CoxPHFitter()\nbmaCox.fit(df, \"lifetime\", event_col= \"event\", priors= [0.3, 0.9, 0.001, 0.3])\nprint(bmaCox.summary)" ]
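Beyond point estimates, the MCMC draws make interval statements cheap. A small sketch (reusing the `medians` list computed for the median-testing problem above, plain NumPy only) summarises the posterior of the Weibull median lifetime:

```python
import numpy as np

med_samples = np.array(medians)  # posterior draws of the Weibull median
lo, hi = np.percentile(med_samples, [2.5, 97.5])
print("posterior mean of the median lifetime: {:.2f} days".format(med_samples.mean()))
print("95% credible interval: [{:.2f}, {:.2f}] days".format(lo, hi))
```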
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bMzi/ML_in_Finance
0208_LDA-QDA.ipynb
mit
[ "Linear and Quadratic Discriminant Analysis\nLinear Discriminant Analysis\nClassifying with Bayes' Theorem\nIn a previous chapter we discussed logistic regression for the case of two response classes (e.g. 0 and 1). It models the conditional probability $\\Pr(Y=k|X=x)$ directly through the use of the Sigmoid function. In this chapter we discuss an alternative approach that models the distribution of the predictors $X$ separately for each response class (i.e. given $Y$), and then uses Bayes' theorem to transform these into conditional probabilities for $\\Pr(Y=k|X=x)$. Its main advantage compared to logistic regressions is that if classes are well-separated, parameter estimates for logistic regression tend to be unstable whereas linear discriminant analysis (LDA) does not suffer from this problem. Beyond that, LDA is a popular algorithm for multiple-class classification (i.e. the response has more than two classes, for example buy/hold/sell etc.) where logistic regression is not used that often (James et al. (2013)).\nLDA assigns an object to class $k$ for which the computed probability is highest. These probabilities are calculated using Bayes' rule which states that\n\\begin{equation}\n\\begin{aligned}\n\\underbrace{\\Pr(Y=k|X)}{\\text{posterior probability}} &= \\frac{\\overbrace{\\Pr(X|Y=k)}^{\\text{conditional probability}} \\quad \\overbrace{\\Pr(k)}^{\\text{prior probability}}}{\\underbrace{\\Pr(X)}{\\text{evidence}}} \\[3ex] \n &= \\frac{\\Pr(X|Y=k) \\Pr(k)}{\\sum_{\\ell=1}^K \\Pr(X|Y=\\ell) \\Pr(\\ell)}\n\\end{aligned}\n\\end{equation}\nAbove, $\\Pr(k)$ is simply the prior probability of class $k$ (with $\\sum_{k=1}^K \\Pr(k) = 1$) that a randomly chosen observation is drawn from the $k$th class. $\\Pr(X|Y=k)$ on the other hand is the class conditional density of $X$ in class $Y=k$. Following the notation in (Friedman et al. (2001)), we denote $\\Pr(X|Y=k) \\equiv f_k(x)$ to indicate that it is a density function. LDA's decision rule thus classifies an observation into class $k$ if \n\\begin{align}\n\\Pr(Y=k|X) &> \\Pr(Y=j|X) \\qquad \\forall j \\neq k \\nonumber \\\n\\end{align}\nor, if we substitute both sides of the inequality with Bayes' rule:\n\\begin{align}\n\\frac{f_k(x) \\Pr(k)}{\\sum_{\\ell=1}^K f_\\ell(x) \\Pr(\\ell)} &> \\frac{f_j(x) \\Pr(j)}{\\sum_{\\ell=1}^K f_\\ell(x) \\Pr(\\ell)} \\qquad \\forall j \\neq k \\nonumber\n\\end{align}\nThe evidence term (denominator in above equation) can be omitted from the decision rule because it is merely a scaling factor (Raschka (2014)). This then yields the following simple decision boundary:\n\\begin{align}\nf_k(x) \\Pr(k) &> f_j(x) \\Pr(j) \\qquad \\forall j \\neq k\n\\end{align}\nThis expression can also be written as\n\\begin{equation}\n\\delta_k(x) = \\arg \\max_k \\; f_k(x) \\Pr(k) \n\\end{equation}\nBayes Decision Rule in LDA with One Feature\nSuppose that $f_k(x)$ follows a Gaussian distribution. For the one-dimensional setting, that is if we have just one feature $p=1$, the normal density takes the well known form\n\\begin{equation}\nf_k(x) = \\frac{1}{\\sqrt{2\\pi \\sigma_k^2}} \\, \\exp\\left( - \\frac{(x - \\mu_k)^2}{2 \\sigma_k^2} \\right)\n\\end{equation}\nwhere $\\mu_k$ and $\\sigma_k^2$ are the mean and variance for the $k$th class, respectively. For the moment let us also assume that there is a shared variance term across all $K$ classes, i.e. $\\sigma_1^2 = \\sigma_2^2 = \\ldots = \\sigma_k^2$. 
Then plugging the normal distribution into our maximization problem, taking the log and doing some algebra - see the appendix of the script for detailed steps - we find that an observation is assigned to class $k$ for which $\\delta_k(x)$ is greatest:\n\\begin{equation}\n\\delta_k(x) = \\arg \\max_k \\left[\\frac{x \\mu_k}{\\sigma^2} - \\frac{\\mu_k}{2\\sigma^2} + \\ln(\\Pr(k)) \\right]\n\\end{equation}\nBelow figure shows how LDA classifies data based on the above result. In the left subplot we see two separate normal densities representing a situation with two classes ($K \\in {\\text{blue, green}}$). $\\Pr(k=\\text{blue}) = \\Pr(k=\\text{green}) = 0.5$; equal for both classes. Both densities have the same variance $\\sigma_1^2 = \\sigma_2^2 = 1$ but different location parameter, $\\mu_1 = -1.25, \\mu_2 = 1.25$. \n<img src=\"Graphics/0208_BayesDescBoundary1d.png\" alt=\"BayesDescBoundary1d\" style=\"width: 1000px;\"/>\nIf we were to know these parameter, then LDA's decision boundary would be drawn exactly at zero (dashed line). If $\\Pr(k=\\text{blue}) > \\Pr(k=\\text{green})$, Bayes' decision boundary would move to the right, if $\\Pr(k=\\text{blue}) < \\Pr(k=\\text{green})$, to the left. There is some overlapping area leading to some uncertainty, but overall the error rate is minimized to a minimum. In reality however, we do not know the true location and scale parameter and hence we have to estimate them - what we will discuss momentarily. The right plot displays histograms of 50 randomly drawn observations from the aforementioned normal distribution. Given this data, LDA calculates $\\mu_k, \\sigma^2$ and uses $\\delta_k(x)$ to draw the decision boundary (solid vertical line). Data points to the left belong to the blue class, all others to the green class. The dashed vertical line again displays the optimal decision boundary. Because we don't know the true location and scale parameter LDA relies on estimates. This introduces inaccuracy that is reduced the larger the data sample is (assuming our normal assumption is correct).\nAssumptions and Parameter Estimation\nSo far we have discussed how LDA draws its decision boundary with the help of Bayes rule and given the assumptions that the features follow a normal distribution. But in order to follow through with our classification task, estimates for $\\Pr(k), \\mu_k$, and $\\sigma^2$ are required. Estimating the prior probability $\\Pr(k)$ is no difficult job: we simply compute the fraction of training observations that belong to the $k$th class: $\\hat{\\Pr}(k) = n_k / n$, where $n_k$ represents the count of samples from class $k$ and $n$ the count of all samples. Location parameter $\\mu_k$ is estimated using the average of training observation of the $k$th class and $\\sigma^2$, the scale parameter, is the weighted average of the sample variance for each of the $K$ classes (Note that Friedman et al. (2001) and James et al (2013) both use a biased corrected version of $\\hat{\\sigma}^2$ (and $\\hat{\\Sigma}$ for the case of $p>1$) by dividing the summed terms by $n-K$ instead of $n$. 
The formula given here uses an uncorrected estimate of $\\sigma$ and in that follows Sklearn's implementation.)\n\\begin{align}\n\\hat{\\mu}_k &= \\frac{1}{n_k} \\sum_{i:y_i=k} x_i \\\\\n\\hat{\\sigma}^2 &= \\frac{1}{n} \\sum_{k=1}^K \\sum_{i:y_i=k} (x_i - \\hat{\\mu}_k)^2\n\\end{align}\nGiven the assumption of normality and given these estimates for location and scale we are able to establish a decision rule that assigns each new data point to the class for which $\\delta_k(x)$ is highest.\nBayes Decision Rule in LDA with More Than One Feature\nAbove we have used the one-dimensional case with one predictor to introduce how LDA classifies an observation. Now we extend the classifier to work with multiple features ($p>1$). Again we assume that $X = (X_1, X_2, \\ldots, X_p)$ is drawn from a (multivariate) normal distribution with a class-specific mean vector $\\mu_k$ of length $p$ and common covariance matrix $\\Sigma$ of dimension $p \\times p$. This is expressed as $X \\sim N(\\mu_k, \\Sigma)$. The multivariate Gaussian density is defined as\n\\begin{equation}\nf_k(x) = \\frac{1}{(2\\pi)^{p/2}|\\Sigma|^{1/2}} \\, \\exp \\left( -\\frac{1}{2} (x-\\mu_k)^T \\Sigma^{-1} (x-\\mu_k) \\right)\n\\end{equation}\nAs before we plug this expression into our maximization problem, take the logarithm and perform a little bit of algebra (for the interested reader, the individual steps are shown in the appendix of the script). This yields the following Bayes classifier rule for LDA, based on which an observation $X=x$ is assigned to the class for which $\\delta_k(x)$ is largest:\n\\begin{equation}\n\\delta_k(x) = \\arg \\max_k \\left[x^T \\Sigma^{-1} \\mu_k - \\frac{1}{2} \\mu_k^T\\Sigma^{-1}\\mu_k + \\ln(\\Pr(k))\\right]\n\\end{equation}\nThe estimates for $\\Pr(k), \\mu_k$, and $\\Sigma$ follow again the same approach as in the case of only one predictor.\nThe next figure plots LDA's Bayes decision boundary for a random training set with two features $X_1, X_2$. The colors indicate the binary response with blue circles indicating customers who accepted a product offer and green circles representing those who declined it. The bivariate normal contours (ellipses) represent iso-lines with the same probabilities. LDA uses Bayes' decision rule discussed above to classify any new data point into class $k$. \n<img src=\"Graphics/0208_BayesDescBoundary2d.png\" alt=\"BayesDescBoundary2d\" style=\"width: 1000px;\"/>\nLDA in Python\nSetup\nWe will apply LDA in Python with the functions that are provided through the sklearn package and the 'Default' data set we used to introduce logistic regression in a previous chapter. Sklearn, short for Scikit-learn, is a key resource for clustering, classification or regression algorithms in machine learning. It offers an abundant variety of functions and functionalities and is actively developed by a large community. \nSklearn is one of the most extensive packages in Python with hundreds, if not thousands, of functions. It is good practice to not load the full library as we did for example with numpy but to only load those functions that are needed to run your task at hand. This saves computer memory and with that improves the efficiency of your algorithm, especially if you are using your household PC to run it on larger data sets.", "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import metrics\nplt.rcParams['font.size'] = 14\nplt.style.use('seaborn-whitegrid')\n\n# Default data set is not available online. 
Data was extracted from R package \"ISLR\"\ndf = pd.read_csv('Data/Default.csv', sep=',')\n\n# Factorize 'No' and 'Yes' in columns 'default' and 'student'\ndf['defaultFac'] = df.default.factorize()[0]\ndf['studentFac'] = df.student.factorize()[0]\ndf.head(5)\n\n# Assign data to feature matrix X_train and response vector y_train\nX_train = df[['balance', 'income', 'studentFac']]\ny_train = df.defaultFac", "LDA Classifier Object & Fit\nNow we are in a position to run the LDA classifier. This, as you can see from the three lines below, is as easy as it gets.", "from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA\n\n# Create LDA object and run classifier\nlda = LDA(solver='lsqr')\nlda = lda.fit(X_train, y_train)\nlda", "The parameter solver='lsqr' specifies the method by which the covariance matrix is estimated. lsqr follows the approach introduced in the preceding subsection. Others such as svd or eigen are available. See Scikit-learn's guide or the function description.\nEvery function in sklearn has different attributes and methods. Sklearns convention is to store anything that is derived from the data in attributes that end with a trailing underscore. That is to separate them from parameters that are set by the user (Mueller and Guido (2017)). For example the estimated covariance matrix can be printed with this command.", "lda.covariance_", "In a Jupyter notebook, to see all options you can simply type lda. and hit tab.\nLDA Performance\nHere are some basic metrics on how the LDA classifier performed on the training data.", "print('default-rate: {0: .4f}'.format(np.sum(y_train)/len(y_train)))\nprint('score: {0: .4f}'.format(lda.score(X_train, y_train)))\nprint('error-rate: {0: .4f}'.format(1-lda.score(X_train, y_train)))", "Overall, 3.33% of all observations defaulted. If we would simply label each entry as 'non-default' we would have an error rate of this magnitude. So, in comparison to this naive classifier, LDA seems to have some skill in predicting the default.\n\nIMPORTANT NOTE: In order to be in line with James et al. (2015), the textbook for this course, we have not performed any train/test split of the data. Therefore we will use the same full matrix X_train and response vector y_train as test data. Performance metrics might be applied to both test and training data but in the end the results on the test set are those that we are ultimately interested in. To drive this point home, I have relabeled the X_train and y_train to X_test, y_test. Nevertheless, be aware that in this unique case it is the same data!\n\nLet us print the confusion matrix introduced in the previous chapter to see the class-wise performance. For reference the confusion matrix is also printed as DataFrame but moving forward be sure to know that row represent the true values and columns predicted labels.", "# Relabel variables as discussed\nX_test = X_train\ny_test = y_train\n\n# Predict labels\ny_pred = lda.predict(X_test)\n\n# Sklearn's confusion matrix\nprint(metrics.confusion_matrix(y_test, y_pred))\n\n# Manual confusion matrix as pandas DataFrame\nconfm = pd.DataFrame({'Predicted default status': y_pred,\n 'True default status': y_test})\nconfm.replace(to_replace={0:'No', 1:'Yes'}, inplace=True)\nprint(confm.groupby(['True default status','Predicted default status']).size().unstack('Predicted default status'))", "The confusion matrix tells us that for the non-defaulters, LDA only misclassified 22 of them. This is an excellent rate. 
However, out of the 333 (=253 + 80) people who actually defaulted, LDA classified only 80 correctly. This means our classifier missed out on 76.0% of those who actually defaulted! For a credit card applicant with a bad credit score this is good news. For a credit card company, not so much. \nVarying the Threshold Levels\nWhy does LDA miss all these 'defaulters'? Implicitly, Bayes classifier minimizes the overall error rate, meaning that it yields the smallest possible total number of misclassified observations - irrespective of the class-specific error rate. Bayes classifier works by assigning an observation to class 'default' for which the posterior probability $Pr(\\text{default = Yes}|X=x) > 0.5$. For a credit card company who seeks to have as few defaults as possible, this threshold might be too large. Instead, such a company might decide to label any customer with a posterior probability of default above 20% to the 'default' class ($Pr(\\text{default = Yes}|X=x) > 0.2$). Let us investigate how the results in such a case would look like.", "# Calculated posterior probabilities\nposteriors = lda.predict_proba(X_test)\nposteriors[:5, :]", "The function lda.predict_proba() provides the posterior probabilities of $\\Pr(\\text{default = 0}|X=x)$ in the first column and $\\Pr(\\text{default = 1}|X=x)$ in the second. The latter column is what we are interested in. Out of convenience we use sklearn's binarize function to classify all probabilities above the threshold of 0.2 as 1 (=default) and generate the confusion matrix.", "from sklearn.preprocessing import binarize\n\n# Set threshold and get classes\nthresh = 0.2\ny_pred020 = binarize([posteriors[:, 1]], thresh)[0]\n\n# new confusion matrix (threshold of 0.2)\nprint(metrics.confusion_matrix(y_test, y_pred020))", "Now LDA misclassifies only 140 out of 333 defaults, or 42.0%. Thats a sharp improvement over the 76.0% from before. But this comes at a price: Before, of those who did not default LDA mislabeled only 22 (or 0.2%) incorrectly. This number increased now to 232 (or 2.4%). Combined, the total error rate increased from 2.75% to 3.72%. For a credit card company, this might be a price they are willing to pay to have a more accurate identification of individuals who default. \nBelow code snippet calculates and plots the overall error rate, the proportion of missed defaulting customers and the fraction of error among the non-defaulting customers as a function of the threshold value for the posterior probability that is used to assign classes.", "# Array of thresholds\nthresh = np.linspace(0, 0.5, num=100)\n\ner = [] # Total error rate\nder = [] # Defaults error rate\nnder = [] # Non-Defaults error rate\n\nfor t in thresh:\n # Sort/arrange data\n y_pred_class = binarize([posteriors[:, 1]], t)[0]\n confm = metrics.confusion_matrix(y_test, y_pred_class)\n \n # Calculate error rates\n er = np.append(er, (confm[0, 1] + confm[1, 0]) / len(posteriors))\n der = np.append(der, confm[1, 0] / np.sum(confm[1, :]))\n nder = np.append(nder, confm[0, 1] / np.sum(confm[0, :]))\n\n# Plot\nplt.figure(figsize=(12, 6))\nplt.plot(thresh, er, label='Total error rate')\nplt.plot(thresh, der, label='Missed defaults')\nplt.plot(thresh, nder, label='Missed non-defaults')\nplt.xlim(0, 0.5)\nplt.xlabel('Threshold')\nplt.ylabel('Error Rate')\nplt.legend();", "How do we know what threshold value is best? Unfortunately there's no formula for it. 
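What we can do is make the trade-off explicit. If we attach costs to the two error types - the numbers below are purely hypothetical and would in practice come from the credit card company itself - we can pick the threshold that minimizes the expected cost per customer, reusing the error-rate arrays computed above:\n\ncost_fn, cost_fp = 10, 1     # hypothetical cost of a missed default vs. a flagged non-default\np_def = np.mean(y_test)      # observed fraction of defaulters\nexp_cost = p_def * der * cost_fn + (1 - p_def) * nder * cost_fp\nprint('Cost-minimizing threshold: {0:.2f}'.format(thresh[np.argmin(exp_cost)]))\n\nWith equal costs the expected cost is just the overall error rate, so this favors a threshold near 0.5; the more expensive a missed default is relative to a false alarm, the lower the chosen threshold becomes. 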
\"Such a decision must be based on domain knowledge, such as detailed information about costs associated with defaults\" (James et al. (2013, p.147)) and it will always be a trade-off: if we increase the threshold we reduce the missed non-defaults but at the same time increase the missed defaults.\nPerformance Metrics\nThis is now the perfect opportunity to refresh our memory on a few classification performance measures introduced in the previous chapters and add a few more to have a full bag of performance metrics. The following table will help in doing this.\n<img src=\"Graphics/0208_ConfusionMatrixDefault.png\" alt=\"ConfusionMatrixDefault\" style=\"width: 800px;\"/>\nWe will use the following abbreviations (Markham (2016)): \n\nTrue Positives (TP): correctly predicted defaults\nTrue Negatives (TN): correctly predicted non-defaults\nFalse Positives (FP): incorrectly predicted defaults (\"Type I error\")\nFalse Negatives (FN): incorrectly predicted non-defaults (\"Type II error\")", "# Assign confusion matrix values to variables\nconfm = metrics.confusion_matrix(y_test, y_pred)\nprint(confm)\nTP = confm[1, 1] # True positives\nTN = confm[0, 0] # True negatives\nFP = confm[0, 1] # False positives\nFN = confm[1, 0] # False negatives", "So far we've encountered the following performance metrics: \n\nScore\nError rate\nSensitivity and \nSpecificity. \n\nWe briefly recapture their meaning, how they are calculated and how to call them in Scikit-learn. We will make use of the functions in the metrics sublibrary of sklearn\nScore\n\nScore = (TN + TP) / (TN + TP + FP + FN)\nFraction of (overall) correctly predicted classes", "print((TN + TP) / (TN + TP + FP + FN))\nprint(metrics.accuracy_score(y_test, y_pred))\nprint(lda.score(X_test, y_test))", "Error rate\n\nError rate = 1 - Score or Error rate = (FP + FN) / (TN + TP + FP + FN)\nFraction of (overall) incorrectly predicted classes\nAlso known as \"Misclassification Rate\"", "print((FP + FN) / (TN + TP + FP + FN))\nprint(1 - metrics.accuracy_score(y_test, y_pred))\nprint(1 - lda.score(X_test, y_test))", "Specificity\n\nSpecificity = TN / (TN + FP)\nFraction of correctly predicted negatives (e.g. 'non-defaults')", "print(TN / (TN + FP))", "Sensitivity or Recall\n\nSensitivity = TP / (TP + FN)\nFraction of correctly predicted 'positives' (e.g. 'defaults'). Basically asks the question: \"When the actual value is positive, how often is the prediction correct?\"\nAlso known as True positive rate\nCounterpart to Precision", "print(TP / (TP + FN))\nprint(metrics.recall_score(y_test, y_pred))", "The above four classification performance metrics we already encountered. There are two more metrics we want to cover: Precision and the F-Score.\nPrecision\n\nPrecision = TP / (TP + FP)\nRefers to the accuracy of a positive ('default') prediction. Basically asks the question: \"When a positive value is predicted, how often is the prediction correct?\" \nCounterpart to Recall", "print(TP / (TP + FP))\nprint(metrics.precision_score(y_test, y_pred))", "<img src=\"Graphics/0208_ConfusionMatrixDefault.png\" alt=\"ConfusionMatrixDefault\" style=\"width: 800px;\"/>\nF-Score\nVan Rijsbergen (1979) introduced a measure that is still widely used to evaluate the accuracy of predictions in two-class (binary) classification problems: the F-Score. It combines Precision and Recall (aka Sensitivity) in one metric and tells us something about the relations between data's positive labels and those given by a classifier. 
It is a single measure of a classification procedure's usefulness and in general the rule is that the higher the F-Score, the better the predictive power of the classification procedure. It is defined as:\n\\begin{align}\nF_{\\beta} &= \\frac{(1 + \\beta^2) \\cdot \\text{precision} \\cdot \\text{recall}}{\\beta^2 \\cdot \\text{precision} + \\text{recall}} \\\\[2ex]\n &= \\frac{(1+\\beta^2) \\cdot TP}{(1+\\beta^2) \\cdot TP + \\beta^2 \\cdot FN + FP}\n\\end{align}\nThis measure employs a parameter $\\beta$ that captures a user's preference (Guggenbuehler (2015)). The most common value for $\\beta$ is 1. This $F_1$-score weights both precision and recall evenly (simple harmonic mean). In rare cases the $F_2$-score is used, which puts twice as much weight on recall as on precision (Hripcsak and Rothschild (2005)).", "print(metrics.confusion_matrix(y_test, y_pred))\nprint(metrics.f1_score(y_test, y_pred))\nprint(((1+1**2) * TP)/((1+1**2) * TP + FN + FP))\nprint(metrics.classification_report(y_test, y_pred))", "Let us compare this to the situation where we set the posterior probability threshold for 'default' at 20%.", "# Confusion matrix & clf-report for cut-off \n# value Pr(default=yes | X = x) > 0.20\nprint(metrics.confusion_matrix(y_test, y_pred020))\nprint(metrics.classification_report(y_test, y_pred020))", "We see that by reducing the cut-off level from $\\Pr(\\text{default} = 1| X=x) > 0.5$ to $\\Pr(\\text{default} = 1| X=x) > 0.2$ precision decreases but recall improves. This changes the $F_1$-score. \nDoes this mean that a threshold of 20% is more appropriate? In general, one could argue for a 'yes'. Yet, as mentioned before, this boils down to domain knowledge. Where the $F_1$-score is of help, together with the other metrics introduced, is when we compare models against each other and want to determine which one performed best. For example if we compare results from logistic regression and LDA (and both used a cut-off level of 50%) the F-score would suggest that the one with the higher value performed better.\nFor a summary on performance metrics the following two resources are recommended:\n\n\nFor the interested reader an excellent and easily accessible summary on performance metrics is the article by Sokolova and Lapalme (2009). \n\n\nFor further details and examples please also consider the scikit-learn description.\n\n\nPrecision-Recall Curve\nIf one is interested in understanding how precision and recall vary given different levels of the threshold, then there is a function to do this.", "# Extract the precision/recall pairs and thresholds for the plot below\nprecision, recall, threshold = metrics.precision_recall_curve(y_test, posteriors[:, 1])\n\nprint('Precision: ', precision)\nprint('Recall: ', recall)\nprint('Threshold: ', threshold)", "This can easily be visualized - as done in the next code snippet. We also add some more information to the plot by displaying the Average Precision (AP) and the Area under the Curve (AUC). The former summarizes the plot in that it calculates the weighted mean of precisions achieved at each threshold, with the increase in recall from the previous threshold used as the weight (see further description here). The latter calculates the area under the curve using the trapezoidal rule. 
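As implemented in sklearn's average_precision_score, this weighted mean has the simple form\n\\begin{equation}\n\\text{AP} = \\sum_n (R_n - R_{n-1}) \\, P_n\n\\end{equation}\nwhere $P_n$ and $R_n$ denote precision and recall at the $n$th threshold - a step-wise rather than trapezoidal approximation of the same area, which is why AP and AUC can differ slightly. 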
Notice that ideally the function hugs the top-right corner.", "# Calculate the average precision score\ny_dec_bry = lda.decision_function(X_test)\naverage_precision = metrics.average_precision_score(y_test, y_dec_bry)\n\n# Calculate AUC\nprec_recall_auc = metrics.auc(recall, precision)\n\n# Plot Precision/Recall variations given different\n# levels of thresholds\nplt.plot(recall, precision)\nplt.xlabel('Recall')\nplt.ylabel('Precision')\nplt.ylim([0.0, 1.05])\nplt.xlim([0.0, 1.0])\nplt.title('2-class Precision-Recall curve: \\n AP={0:0.2f} / AUC={1:0.2f}'.format(\n average_precision, prec_recall_auc));", "ROC Curve\nHaving introduced the major performance measures, let us now discuss the so-called ROC curve (short for \"receiver operating characteristics\"). This is a very popular way of visualizing the performance of binary classifiers. Its origins are in signal detection theory during WWII (Flach (2017)) but it has since found application in medical decision making and machine learning (Fawcett (2006)). ROC investigates the relationship between sensitivity and specificity of a binary classifier. Sensitivity (or true positive rate) measures the proportion of positives (defaults) correctly classified. Specificity (or true negative rate) measures the proportion of negatives (non-defaults) correctly classified. \nAbove we calculated that if we use $\\Pr(\\text{default = Yes}|X=x) > 0.5$ to classify posterior probabilities as defaults, LDA has its best overall error rate but misses out on 76.0% of the customers who actually defaulted. By decreasing the threshold to 0.2 we improved the accuracy of detecting defaults but this came at the cost of a higher overall error rate. This was the trade-off we faced. The ROC curve serves to visualize a variation of this trade-off. It varies the cut-off threshold from 0 to 1 and calculates for each threshold the true positive rate (aka sensitivity) and false positive rate (equals 1 - specificity). These values are then plotted with the former on the vertical and the latter on the horizontal axis. \nThough this might feel a bit abstract if one is not familiar with all these technical terms, the interpretation is fortunately fairly simple. The ideal ROC curve will hug the top left corner. In that case, the area under the curve (AUC) is biggest. The bigger the AUC, the better the classifier. A perfect classifier has an AUC of 1.\nHere's how we calculate the ROC numbers, the corresponding area under the curve and how we plot it.", "# Compute ROC curve and ROC area (AUC) for each class\nfpr, tpr, thresholds = metrics.roc_curve(y_test, posteriors[:, 1])\nroc_auc = metrics.auc(fpr, tpr)\n\nplt.figure(figsize=(6, 6))\nplt.plot(fpr, tpr, lw=2, label='ROC curve (area = {0: 0.2f})'.format(roc_auc))\nplt.plot([0, 1], [0, 1], lw=2, c = 'k', linestyle='--')\nplt.xlim([-0.01, 1.0])\nplt.ylim([-0.01, 1.01])\nplt.xlabel('False Positive Rate (1 - Specificity)')\nplt.ylabel('True Positive Rate (Sensitivity)')\nplt.title('Receiver operating characteristic (ROC)', fontweight='bold', fontsize=18)\nplt.legend(loc=\"lower right\");", "An AUC value of 0.95 is close to the maximum of 1 and should be deemed very good. The dashed black line puts this in perspective: it represents the \"no information\" classifier; this is what we would expect if the probability of default is not associated with 'student' status and 'balance'. Such a classifier, that performs no better than chance, is expected to have an AUC of 0.5. 
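The ROC output can also be used to pick an operating point. One common heuristic - shown here only as a sketch, and no substitute for the cost considerations discussed earlier - is Youden's J statistic, which selects the threshold that maximizes the difference between true positive rate and false positive rate:\n\nix = np.argmax(tpr - fpr)\nprint('Threshold maximizing TPR - FPR: {0:.2f}'.format(thresholds[ix]))\nprint('TPR: {0:.2f} / FPR: {1:.2f}'.format(tpr[ix], fpr[ix]))\n\nBecause this criterion weights both error rates equally, for our problem it will typically point to a cut-off far below 0.5, in line with the threshold discussion above. 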
\nQuadratic Discriminant Analysis\nUnderlying Assumptions\nFor LDA we assume that observations within each class are drawn from a multivariate normal distribution with a class-specific mean vector and a common covariance metrix: $X \\sim N(\\mu_k, \\Sigma)$. Quadratic discriminant analysis (QDA) relaxes these assumptions somewhat. The basic assumption is still that the observations follow a multivariate normal distribution, however, QDA allows for class specific means and covariance matrices: $X \\sim N(\\mu_k, \\Sigma_k)$, where $\\Sigma_k$ is a covariance matrix for the $k$th class. With that, the Bayes classifier assigns an observation to the class for which \n\\begin{equation}\n\\delta_k(x) = \\arg \\max_k \\; - \\frac{1}{2} \\ln(|\\Sigma_k|) - \\frac{1}{2} (x - \\mu_k)^T \\Sigma_k^{-1} (x - \\mu_k) + \\ln(\\Pr(k))\n\\end{equation}\nis highest. For a derivation of this result see again the appendix of the script. As was the case for LDA, parameter $\\mu_k, \\Sigma_k$ and $\\Pr(k)$ are again estimated from the training data with the same formulas introduced in this notebook. \nBelow figure depict both LDA and QDA. Both classifiers were trained on the same data. Due to the different variability of the two classes the QDA algorithm seems to perform slightly better in this case.\n<img src=\"Graphics/0208_QDABayesDescBoundary2d.png\" alt=\"QDABayesDescBoundary2d\" style=\"width: 1000px;\"/>\nUnder what circumstances should we prefer QDA over LDA? As always, there's no straight answer to this question. Obviously, performance should be king. However, it is said that LDA tends to be a better bet than QDA if the training set is small. In contrast, if the hold-out set is large or the assumption of a common covariance matrix is clearly incorrect, then QDA is recommended. Beyond that, we have to keep in mind that QDA estimates $K p(p+1)/2$ parameters. So if the number of parameters $p$ is large, QDA might take some time to process (James et al. (2013)). \n\nNaive Bayes\nNaive Bayes is the name for a family of popular ML algorithms that are often used in text mining. Text mining is a field of ML that deals with extracting quantitative information from text. A simple example of it is the analysis of Twitter feeds in order to predict stock market reactions. There exist different variations of Naive Bayes applications. One is called 'Gaussian Naive Bayes' and works similar to QDA - with the exception that contrary to QDA the covariance matrices $\\Sigma$ are assumed to be diagonal. This means $\\Sigma_k$ only contains the variances of the different features for class $k$. Its covariance terms (the off-diagonal elements) are assumed to be zero. Because of its popularity, Naive Bayes is well documented in text books and on the web. A good starting point is Scikit-learn's tutorial on Naive Bayes, Collins (2013) or Russell and Norvig (2009, p.499). To apply the algorithm in Python you want to use sklearn.naive_bayes.GaussianNB() or (for text mining preferably) sklearn.naive_bayes.MultinomialNB(). \n\nQDA in Python\nThe application of QDA follows the one detailed for LDA. 
Therefore we let the code speak for itself.", "from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA\n\n# Run qda on training data\nqda = QDA().fit(X_train, y_train)\nqda\n\n# Predict classes for qda\ny_pred_qda = qda.predict(X_test)\nposteriors_qda = qda.predict_proba(X_test)[:, 1]\n\n# Print performance metrics\nprint(metrics.confusion_matrix(y_test, y_pred_qda))\nprint(qda.score(X_test, y_test))\nprint(metrics.classification_report(y_test, y_pred_qda))", "The performance seems to be slightly better than with LDA. Let's plot the ROC curve for both LDA and QDA.", "# Compute ROC curve and ROC area (AUC) for each class\nfpr_qda, tpr_qda, _ = metrics.roc_curve(y_test, posteriors_qda)\nroc_auc_qda = metrics.auc(fpr_qda, tpr_qda)\n\nplt.figure(figsize=(6, 6))\nplt.plot(fpr, tpr, lw=2, label='LDA ROC (AUC = {0: 0.2f})'.format(roc_auc))\nplt.plot(fpr_qda, tpr_qda, lw=2, label='QDA ROC (AUC = {0: 0.2f})'.format(roc_auc_qda))\nplt.plot([0, 1], [0, 1], lw=2, c = 'k', linestyle='--')\nplt.xlim([-0.01, 1.0])\nplt.ylim([-0.01, 1.01])\nplt.xlabel('False Positive Rate (1 - Specificity)')\nplt.ylabel('True Positive Rate (Sensitivity)')\nplt.title('ROC Curve', fontweight='bold', fontsize=18)\nplt.legend(loc=\"lower right\");", "With respect to Sensitivity (Recall) and Specificity LDA and QDA perform virtually identical. Therefore, one might give the edge here to QDA because of its slighly better Recall and $F_1$-Score. \nReality and the Gaussian Assumption for LDA & QDA\nDespite the rather strict assumptions regarding normal distribution, LDA and QDA perform well on an amazingly large and diverse set of classification tasks. Friedman et al. (2001, p. 111) put it this way:\n\n\"Both techniques are widely used, and entire books are devoted to LDA. It seems that whatever exotic tools are the rage of the day, we should always have available these two simple tools. The question arises why LDA and QDA have such a good track record. The reason is not likely to be that the data are approximately Gaussian, and in addition for LDA that the covariances are approximately equal. More likely a reason is that the data can only support simple decision boundaries such as linear or quadratic, and the estimates provided via the Gaussian models are stable. This is a bias variance tradeoff - we can put up with the bias of a linear decision boundary because it can be estimated with much lower variance than more exotic alternatives. This argument is less believable for QDA, since it can have many parameters itself - although perhaps fewer than the non-parametric alternatives.\"\n\nWhether LDA or QDA should be applied to categorical/binary features warrants a separate note. It is true that discriminant analysis was designed for continuous features (Ronald A. Fisher (1936)) where the underlying assumption is that the values are normally distributed. However, as above quote shows, studies have proven the robustness of the model even in light of violations of the rather rigid normality assumption. This is not only true for continuous features but also for categorical/binary features. For more details see Huberty et al. (1986). It follows that applying LDA and QDA is possible, though the user should cautiously control the output. We will discuss appropriate cross validation methods to do so in the next chapter. \nFurther Ressources\nIn writing this notebook, many ressources were consulted. For internet ressources the links are provided within the textflow above and will therefore not be listed again. 
Beyond these links, the following ressources were consulted and are recommended as further reading on the discussed topics:\n\nCollins, Michael, 2013, The Naive Bayes Model, Maximum-Likelihood Estimation, and the EM Algorithm, Technical report, Columbia University, New York.\nFawcett, Tom, 2006, An introduction to ROC analysis, Pattern Recognition Letters 27, 861–874.\nFisher, Roland A., 1936, The Use of Multiple Measurements in Taxonomic Problems, Annals of Human Genetics 7, 179-188.\nFlach, Peter A., 2017, Roc analysis, in Claude Sammut, and Geoffrey I. Webb, eds., Encyclopedia of Machine Learning and Data Mining, 1109–1116 (Springer Science & Business Media, New York, NY).\nFriedman, Jerome, Trevor Hastie, and Robert Tibshirani, 2001, The Elements of Statistical Learning (Springer, New York, NY).\nGuggenbuehler, Jan P., 2015, Predicting Net New Money Using Machine Learning Algorithms and Newspaper Articles, Technical report, University of Zurich, Zurich.\nJames, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani, 2013, An Introduction to Statistical Learning: With Applications in R (Springer Science & Business Media, New York, NY).\nJobson, J. David, and Bob Korkie, 1980, Estimation for Markowitz Efficient Portfolios, Journal of the American Statistical Association 75, 544–554.\nHripcsak, George, and Adam S Rothschild, 2005, Agreement, the F-measure, and Reliability in Information Retrieval, Journal of the American Medical Informatics Association 12, 296–298.\nHuberty, Carl J., Joseph M. Wisenbaker, Jerry D. Smith, and Janet C. Smith, 1986, Using Categorical Variables in Discriminant Analysis, Multivariate Behavioral Research 21, 479-496.\nLedoit, Olivier, and Michael Wolf, 2004, Honey, i shrunk the sample covariance matrix, The Journal of Portfolio Management 30, 110–119.\nMüller, Andreas C., and Sarah Guido, 2017, Introduction to Machine Learning with Python (O’Reilly Media, Sebastopol, CA).\nRaschka, Sebastian, 2014, Naive Bayes and Text Classification I - Introduction and Theory from website, http://sebastianraschka.com/Articles/2014_naive_bayes_1.html, 08/31/2017\nRussell, Stuart, and Peter Norvig, 2009, Artificial Intelligence: A Modern Approach (Prentice Hall Press, Upper Saddle River, NJ).\nSokolova, Marina, and Guy Lapalme, 2009, A systematic analysis of performance measures for classification tasks, Information Processing & Management 45, 427–437.\nVan Rijsbergen, Cornelis Joost, 1979, Information Retrieval (Butterworths, London).\n\nAddendum\npredict, predict_proba, and decision_function\nLet us quickly discuss the difference between the \n* classifier.predict(), \n* classifier.predict_proba(), and \n* classifier.decision_function(). \nclassifier.predict() we already know: it simply predicts the label given the traineded classifier and a feature matrix X (preferably a test set).", "lda.predict(X_test)[:10]", "classifier.predict_proba() we have also introduced above: it provides probabilities of $\\Pr(y = 0|X=x)$ in the first column and $\\Pr(y = 1|X=x)$ in the second.", "lda.predict_proba(X_test)[:10]", "Finally, classifier.decision_function() predicts confidence scores given the feature matrix. The confidence scores for a feature matrix is the signed distance of that sample to the hyperplane. 
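One thing we can already verify - as a quick sanity check rather than a full explanation - is Sklearn's sign convention for binary problems: scores above zero correspond to the second entry of lda.classes_ (here 1, i.e. 'default'), scores below zero to the first, so thresholding the scores at zero should reproduce lda.predict():\n\nprint(np.all(lda.predict(X_test) == (lda.decision_function(X_test) > 0)))\n\n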
What this signed distance exactly means should become clearer once we have discussed the support vector classifier (SVC).", "lda.decision_function(X_test)[:10]", "ROC & Precision-Recall Curve in Sklearn Version 0.22.1\nStarting with Scikit-learn version 0.22.1 the plotting of the ROC and Precision-Recall Curve was integrated into Scikit-learn and there's now a function available to cut the plotting work a bit short. Below are two code snippets that show how to do it.", "# Plot Precision-Recall Curve\ndisp = metrics.plot_precision_recall_curve(lda, X_test, y_test);\n\ndisp = metrics.plot_roc_curve(lda, X_test, y_test);", "Notice that you can also overlay ROCs from multiple models. See this example" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ThomasProctor/Slide-Rule-Data-Intensive
statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb
mit
[ "What is the true normal human body temperature?\nBackground\nThe mean normal body temperature was held to be 37$^{\\circ}$C or 98.6$^{\\circ}$F for more than 120 years since it was first conceptualized and reported by Carl Wunderlich in a famous 1868 book. In 1992, this value was revised to 36.8$^{\\circ}$C or 98.2$^{\\circ}$F. \nExercise\nIn this exercise, you will analyze a dataset of human body temperatures and employ the concepts of hypothesis testing, confidence intervals, and statistical significance.\nAnswer the following questions in this notebook below and submit to your Github account. \n\nIs the distribution of body temperatures normal? \nRemember that this is a condition for the CLT, and hence the statistical tests we are using, to apply. \n\n\nIs the true population mean really 98.6 degrees F?\nBring out the one sample hypothesis test! In this situation, is it approriate to apply a z-test or a t-test? How will the result be different?\n\n\nAt what temperature should we consider someone's temperature to be \"abnormal\"?\nStart by computing the margin of error and confidence interval.\n\n\nIs there a significant difference between males and females in normal temperature?\nSet up and solve for a two sample hypothesis testing.\n\n\n\nYou can include written notes in notebook cells using Markdown: \n - In the control panel at the top, choose Cell > Cell Type > Markdown\n - Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet\nResources\n\nInformation and data sources: http://www.amstat.org/publications/jse/datasets/normtemp.txt, http://www.amstat.org/publications/jse/jse_data_archive.htm\nMarkdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet", "import pandas as pd\n\ndf = pd.read_csv('data/human_body_temperature.csv')\n\ndf.info()\n\ndf.head()", "Question 1", "df['temperature'].hist()", "No, this sample isn't normal, it is definitely skewed. However \"this is a condition for the CLT... to apply\" is just wrong. The whole power of the CLT is that it says that the distribution of sample means (not the sample distribution) tends to a normal distribution regardless of the distribution of the population or sample. What we do care about for the CLT is that our data is independent, which, assuming the data was gathered in a traditional manner, should be the case.\nQuestion 2", "m=df['temperature'].mean()\nm", "With 130 data points, it really doesn't matter if we use the normal or t distribution. A t distribution with 129 degrees of freedom is essentially a normal distribution, so the results should not be very different. However, in this day in age I don't see the purpose of even bothering with the normal distribution. Looking up t distribution tables is awfully annoying, so it once had purpose, however nowdays I'm just going to let a computer calculate either for me, and both are equally simple.", "from scipy.stats import t, norm\nfrom math import sqrt\n\npatients=df.shape[0]\nn=patients-1\n\npatients\n\n\nSE=df['temperature'].std()/sqrt(n)\nSE", "Our null hypothosis is that the true average body temperature is $98.6^\\circ F$. We'll be calculating the probability of finding a value less than or equal to the mean we obtained in this data given that this null hypothosis is true, i.e. 
our alternative hypothesis is that the true average body temperature is less than $98.6^\\circ F$", "t.cdf((m-98.6)/SE,n)\n\nnorm.cdf((m-98.6)/SE)", "Regardless of what distribution we assume we are drawing our sample means from, the probability of seeing this data or averages less than it if the true average body temperature was 98.6 is basically zero.\nQuestion 3", "print(m+t.ppf(0.95,n)*SE)\nprint(m-t.ppf(0.95,n)*SE)\n\nt.ppf(0.95,n)*SE", "Our estimate of the true average human body temperature is thus $98.2^\\circ F \\pm 0.1$.\nThis confidence interval, however, does not answer the question 'At what temperature should we consider someone's temperature to be \"abnormal\"?'. We can look at the population distribution, and see right away that the majority of our test subjects would be considered abnormal if we did this, which makes no sense.\nThe confidence intervals only say something about what we can expect of sample means, not about individual values. Unfortunately, we would not expect the percentiles of this data to be drawn from a normal distribution, so I, at least, am not currently equipped to do confidence/hypothesis testing. However, I can give them, which should give a good estimate of what should be considered normal, but I can't give estimates of how confident we can be in these values.", "df['temperature'].quantile([.1,.9])", "This range, 97.29-99.10 degrees F, includes 80% of the patients in our sample.\nThis shows the dramatic difference between the population distribution and the sample distribution of the mean; we looked at the sample distribution (from the confidence interval), and found that 90% of sample means fell within a $\\pm 0.1^\\circ$ range, while looking at the population distribution, we see a $\\pm 0.9^\\circ$ range for a smaller percentage of the distribution.\nQuestion 4", "males=df[df['gender']=='M']\nmales.describe()\n\nfemales=df[df['gender']=='F']\nfemales.describe()\n\nSEgender=sqrt(females['temperature'].std()/females.shape[0]+males['temperature'].std()/males.shape[0])\nSEgender\n\nmgender=females['temperature'].mean()-males['temperature'].mean()\nmgender\n\n\n2*(1-t.cdf(mgender/SEgender,21))", "The probability of seeing this difference in our data if our null hypothesis (that there is no gender difference) is true is actually relatively high, 6.5%. Using the 5% threshold, we can't reject the null hypothesis." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
paulovn/ml-vm-notebook
vmfiles/IPNB/Examples/a Basic/03 Matplotlib essentials.ipynb
bsd-3-clause
[ "Matplotlib\nThis notebook is (will be) a small crash course on the functionality of the Matplotlib Python module for creating graphs (and embedding it in notebooks). It is of course no substitute for the proper Matplotlib thorough documentation.\nInitialization\nWe need to add a bit of IPython magic to tell the notebook backend that we want to display all graphs within the notebook. Otherwise they would generate objects instead of displaying into the interface; objects that we later can output to file or display explicitly with plt.show().\nThis is done by the following declaration:", "%matplotlib inline", "Now we need to import the library in our notebook. There are a number of different ways to do it, depending on what part of matplotlib we want to import, and how should it be imported into the namespace. This is one of the most common ones; it means that we will use the plt. prefix to refer to the Matplotlib API", "import matplotlib.pyplot as plt", "Matplotlib allows extensive customization of the graph aspect. Some of these customizations come together in \"styles\". Let's see which styles are available:", "from __future__ import print_function\nprint(plt.style.available)\n\n# Let's choose one style. And while we are at it, define thicker lines and big graphic sizes\nplt.style.use('bmh')\nplt.rcParams['lines.linewidth'] = 1.5\nplt.rcParams['figure.figsize'] = (15, 5)", "Simple plots\nWithout much more ado, let's display a simple graphic. For that we define a vector variable, and a function of that vector to be plotted", "import numpy as np\nx = np.arange( -10, 11 )\ny = x*x", "And we plot it", "plt.plot(x,y)\nplt.xlabel('x');\nplt.ylabel('x square');", "We can extensively alter the aspect of the plot. For instance, we can add markers and change color:", "plt.plot(x,y,'ro-');", "Matplotlib syntax\nMatplotlib commands have two variants:\n * A declarative syntax, with direct plotting commands. It is inspired by Matlab graphics syntax, so if you know Matlab it will be easy. It is the one used above.\n * An object-oriented syntax, more complicated but somehow more powerful\nThe next cell shows an example of the object-oriented syntax", "# Create a figure object\nfig = plt.figure()\n\n# Add a graph to the figure. We get an axes object\nax = fig.add_subplot(1, 1, 1) # specify (nrows, ncols, axnum)\n\n# Create two vectors: x, y \nx = np.linspace(0, 10, 1000)\ny = np.sin(x)\n\n# Plot those vectors on the axes we have\nax.plot(x, y)\n\n# Add another plot to the same axes\ny2 = np.cos(x)\nax.plot(x, y2)\n\n# Modify the axes\nax.set_ylim(-1.5, 1.5)\n\n# Add labels\nax.set_xlabel(\"$x$\")\nax.set_ylabel(\"$f(x)$\")\nax.set_title(\"Sinusoids\")\n\n# Add a legend\nax.legend(['sine', 'cosine']);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
slundberg/shap
notebooks/api_examples/explainers/Permutation.ipynb
mit
[ "Permutation explainer\nThis notebooks demonstrates how to use the Permutation explainer on some simple datasets. The Permutation explainer is model-agnostic, so it can compute Shapley values and Owen values for any model. It works by iterating over complete permutations of the features forward and the reversed. By doing this, changing one feature at a time we can minimize the number of model evaluations that are required, and always ensure we satisfy efficiency no matter how many executions of the original model we choose to use for appoximation the feature attribution values. So the SHAP values computed, while approximate, do exactly sum up to the difference between the base value of the model and the output of the model for each explained instance.\nBecause the Permutation explainer has important performance optimizations, and does not require regularization parameter tuning like Kernel explainer, the Permutation explainer is the default model agnostic explainer used for tabular datasets that have more features than would be appropriate for the Exact explainer.\nBelow we domonstrate how to use the Permutation explainer on a simple adult income classification dataset and model.", "import shap\nimport xgboost\n\n# get a dataset on income prediction\nX,y = shap.datasets.adult()\n\n# train an XGBoost model (but any other model type would also work)\nmodel = xgboost.XGBClassifier()\nmodel.fit(X, y);", "Tabular data with independent (Shapley value) masking", "# build a Permutation explainer and explain the model predictions on the given dataset\nexplainer = shap.explainers.Permutation(model.predict_proba, X)\nshap_values = explainer(X[:100])\n\n# get just the explanations for the positive class\nshap_values = shap_values[...,1]", "Plot a global summary", "shap.plots.bar(shap_values)", "Plot a single instance", "shap.plots.waterfall(shap_values[0])", "Tabular data with partition (Owen value) masking\nWhile Shapley values result from treating each feature independently of the other features, it is often useful to enforce a structure on the model inputs. Enforcing such a structure produces a structure game (i.e. a game with rules about valid input feature coalitions), and when that structure is a nest set of feature grouping we get the Owen values as a recursive application of Shapley values to the group. In SHAP, we take the partitioning to the limit and build a binary herarchial clustering tree to represent the structure of the data. This structure could be chosen in many ways, but for tabular data it is often helpful to build the structure from the redundancy of information between the input features about the output label. 
This is what we do below:", "# build a clustering of the features based on shared information about y\nclustering = shap.utils.hclust(X, y)\n\n# above we implicitly used shap.maskers.Independent by passing a raw dataframe as the masker\n# now we explicitly use a Partition masker that uses the clustering we just computed\nmasker = shap.maskers.Partition(X, clustering=clustering)\n\n# build a Permutation explainer and explain the model predictions on the given dataset\nexplainer = shap.explainers.Permutation(model.predict_proba, masker)\nshap_values2 = explainer(X[:100])\n\n# get just the explanations for the positive class\nshap_values2 = shap_values2[...,1]", "Plot a global summary\nNote that only the Relationship and Marital status features share more that 50% of their explanation power (as measured by R2) with each other, so all the other parts of the clustering tree are removed by the the default clustering_cutoff=0.5 setting:", "shap.plots.bar(shap_values2)", "Plot a single instance\nNote that there is a strong similarity between the explanation from the Independent masker above and the Partition masker here. In general the distinctions between these methods for tabular data are not large, though the Partition masker allows for much faster runtime and potentially more realistic manipulations of the model inputs (since groups of clustered features are masked/unmasked together).", "shap.plots.waterfall(shap_values2[0])", "<hr>\nHave an idea for more helpful examples? Pull requests that add to this documentation notebook are encouraged!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Sasanita/nmt-keras
examples/2_training_tutorial.ipynb
mit
[ "NMT-Keras tutorial\n2. Creating and training a Neural Translation Model\nNow, we'll create and train a Neural Machine Translation (NMT) model. Since there is a significant number of hyperparameters, we'll use the default ones, specified in the config.py file. Note that almost every hardcoded parameter is automatically set from config if we run main.py.\nWe'll create the so-called 'GroundHogModel'. It is defined in the model_zoo.py file. See the neural_machine_translation.pdf for an overview of such system.\nIf you followed the notebook 1_dataset_tutorial.ipynb, you should have a dataset instance. Otherwise, you should follow that notebook first.\nFirst, we'll make some imports, load the default parameters and load the dataset.", "from config import load_parameters\nfrom model_zoo import TranslationModel\nimport utils\nfrom keras_wrapper.cnn_model import loadModel\nfrom keras_wrapper.dataset import loadDataset\nfrom keras_wrapper.extra.callbacks import PrintPerformanceMetricOnEpochEndOrEachNUpdates\nparams = load_parameters()\ndataset = loadDataset('datasets/Dataset_tutorial_dataset.pkl')", "Since the number of words in the dataset may be unknown beforehand, we must update the params information according to the dataset instance:", "params['INPUT_VOCABULARY_SIZE'] = dataset.vocabulary_len['source_text']\nparams['OUTPUT_VOCABULARY_SIZE'] = dataset.vocabulary_len['target_text']", "Now, we create a TranslationModel instance:", "nmt_model = TranslationModel(params,\n model_type='GroundHogModel', \n model_name='tutorial_model',\n vocabularies=dataset.vocabulary,\n store_path='trained_models/tutorial_model/',\n verbose=True)", "Now, we must define the inputs and outputs mapping from our Dataset instance to our model", "inputMapping = dict()\nfor i, id_in in enumerate(params['INPUTS_IDS_DATASET']):\n pos_source = dataset.ids_inputs.index(id_in)\n id_dest = nmt_model.ids_inputs[i]\n inputMapping[id_dest] = pos_source\nnmt_model.setInputsMapping(inputMapping)\n\noutputMapping = dict()\nfor i, id_out in enumerate(params['OUTPUTS_IDS_DATASET']):\n pos_target = dataset.ids_outputs.index(id_out)\n id_dest = nmt_model.ids_outputs[i]\n outputMapping[id_dest] = pos_target\nnmt_model.setOutputsMapping(outputMapping)", "We can add some callbacks for controlling the training (e.g. Sampling each N updates, early stop, learning rate annealing...). For instance, let's build an Early-Stop callback. After each 2 epochs, it will compute the 'coco' scores on the development set. If the metric 'Bleu_4' doesn't improve during more than 5 checkings, it will stop. We need to pass some variables to the callback (in the extra_vars dictionary):", "extra_vars = {'language': 'en',\n 'n_parallel_loaders': 8,\n 'tokenize_f': eval('dataset.' 
+ 'tokenize_none'),\n 'beam_size': 12,\n 'maxlen': 50,\n 'model_inputs': ['source_text', 'state_below'],\n 'model_outputs': ['target_text'],\n 'dataset_inputs': ['source_text', 'state_below'],\n 'dataset_outputs': ['target_text'],\n 'normalize': True,\n 'alpha_factor': 0.6,\n 'val': {'references': dataset.extra_variables['val']['target_text']}\n }\n\nvocab = dataset.vocabulary['target_text']['idx2words']\ncallbacks = []\ncallbacks.append(PrintPerformanceMetricOnEpochEndOrEachNUpdates(nmt_model,\n dataset,\n gt_id='target_text',\n metric_name=['coco'],\n set_name=['val'],\n batch_size=50,\n each_n_epochs=2,\n extra_vars=extra_vars,\n reload_epoch=0,\n is_text=True,\n index2word_y=vocab,\n sampling_type='max_likelihood',\n beam_search=True,\n save_path=nmt_model.model_path,\n start_eval_on_epoch=0,\n write_samples=True,\n write_type='list',\n verbose=True))", "Now we are almost ready to train. We set up some training parameters...", "training_params = {'n_epochs': 100,\n 'batch_size': 40,\n 'maxlen': 30,\n 'epochs_for_save': 1,\n 'verbose': 0,\n 'eval_on_sets': [], \n 'n_parallel_loaders': 8,\n 'extra_callbacks': callbacks,\n 'reload_epoch': 0,\n 'epoch_offset': 0}", "And train!", "nmt_model.trainNet(dataset, training_params)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session04/Day1/LSSTC-DSFP4-Juric-FrequentistAndBayes-03-Credibility.ipynb
mit
[ "Frequentism and Bayesianism III: Confidence, Credibility and why Frequentism and Science Don't Mix\nMario Juric & Jake VanderPlas, University of Washington\ne-mail: &#109;&#106;&#117;&#114;&#105;&#99;&#64;&#97;&#115;&#116;&#114;&#111;&#46;&#119;&#97;&#115;&#104;&#105;&#110;&#103;&#116;&#111;&#110;&#46;&#101;&#100;&#117;, twitter: @mjuric\n\nThis lecture is based on a post on the blog Pythonic Perambulations, by Jake VanderPlas. The content is BSD licensed. See also VanderPlas (2014) \"Frequentism and Bayesianism: A Python-driven Primer\".\nSlides built using the excellent RISE Jupyter extension by Damian Avila.\nIn Douglas Adams' classic Hitchhiker's Guide to the Galaxy, hyper-intelligent pan-dimensional beings build a computer named Deep Thought in order to calculate \"the Answer to the Ultimate Question of Life, the Universe, and Everything\".\nAfter seven and a half million years spinning its hyper-dimensional gears, before an excited crowd, Deep Thought finally outputs the answer:\n<big><center>42</center></big>\nThe disappointed technicians, who trained a lifetime for this moment, are stupefied. They probe Deep Though for more information, and after some back-and-forth, the computer responds: \"once you do know what the question actually is, you'll know what the answer means.\"\nAn answer does you no good if you don't know the question.\nThis story is an apt metaphor for statistics as sometimes used in the scientific literature.\nWhen trying to estimate the value of an unknown parameter, the frequentist approach generally relies on a confidence interval (CI), while the Bayesian approach relies on a credible region (CR).\nWhile these concepts sound and look very similar, their subtle difference can be extremely important, as they answer essentially different questions.\nLike the poor souls hoping for enlightenment in Douglas Adams' universe, scientists often turn the crank of frequentism hoping for useful answers, but in the process overlook the fact that in science, frequentism is generally answering the wrong question.\nThis is far from simple philosophical navel-gazing: as I'll show, it can have real consequences for the conclusions we draw from observed data.\nConfidence vs. Credibility\nIn the first part of this lecture, we discussed the basic philosophical difference between frequentism and Bayesianism: frequentists consider probability a measure of the frequency of (perhaps hypothetical) repeated events; Bayesians consider probability as a measure of the degree of certainty about values. As a result of this, speaking broadly, frequentists consider model parameters to be fixed and data to be random, while Bayesians consider model parameters to be random and data to be fixed.\nThese philosophies fundamenally affect the way that each approach seeks bounds on the value of a model parameter. Because the differences here are subtle, let's go right into a simple example to illustrate the difference between a frequentist confidence interval and a Bayesian credible region.\nExample 1: The Mean of a Gaussian\nLet's start by again examining an extremely simple problem; this is the same problem we saw in part I of this series: finding the mean of a Gaussian distribution. Previously we simply looked at the (frequentist) maximum likelihood and (Bayesian) maximum a posteriori estimates; here we'll extend this and look at confidence intervals and credibile regions.\nHere is the problem: imagine you're observing a star that you assume has a constant brightness. 
Simplistically, we can think of this brightness as the number of photons reaching our telescope in one second. Any given measurement of this number will be subject to measurement errors: the source of those errors is not important right now, but let's assume the observations $x_i$ are drawn from a normal distribution about the true brightness value with a known standard deviation $\\sigma_x$.\nGiven a series of measurements, what are the 95% (i.e. $2\\sigma$) limits that we would place on the brightness of the star?\n1. The Frequentist Approach\nThe frequentist approach to this problem is well-known, and is as follows:\nFor any set of $N$ values $D = {x_i}_{i=1}^N$, an unbiased estimate of the mean $\\mu$ of the distribution is given by\n$$\n\\bar{x} = \\frac{1}{N}\\sum_{i=1}^N x_i\n$$\nThe sampling distribution describes the observed frequency of the estimate of the mean; by the central limit theorem we can show that the sampling distribution is normal; i.e.\n$$\nf(\\bar{x}~||~\\mu) \\propto \\exp\\left[\\frac{-(\\bar{x} - \\mu)^2}{2\\sigma_\\mu^2}\\right]\n$$\nwhere we've used the standard error of the mean,\n$$\n\\sigma_\\mu = \\sigma_x / \\sqrt{N}\n$$\nThe central limit theorem tells us that this is a reasonable approximation for any generating distribution if $N$ is large; if our generating distribution happens to be Gaussian, it also holds for $N$ as small as 2.\nLet's quickly check this empirically, by looking at $10^6$ samples of the mean of 5 numbers:", "import numpy as np\n\nN = 5\nNsamp = 10 ** 6\nsigma_x = 2\n\nnp.random.seed(0)\nx = np.random.normal(0, sigma_x, size=(Nsamp, N))\nmu_samp = x.mean(1)\nsig_samp = sigma_x * N ** -0.5\n\nprint(\"{0:.3f} should equal {1:.3f}\".format(np.std(mu_samp), sig_samp))", "It checks out: the standard deviation of the observed means is equal to $\\sigma_x N^{-1/2}$, as expected.\nFrom this normal sampling distribution, we can quickly write the 95% confidence interval by recalling that two standard deviations is roughly equivalent to 95% of the area under the curve. So our confidence interval is\n$$\nCI_{\\mu} = \\left(\\bar{x} - 2\\sigma_\\mu,~\\bar{x} + 2\\sigma_\\mu\\right)\n$$\nLet's try this with a quick example: say we have three observations with an error (i.e. $\\sigma_x$) of 10. What is our 95% confidence interval on the mean?\nWe'll generate our observations assuming a true value of 100:", "true_B = 100\nsigma_x = 10\n\nnp.random.seed(1)\nD = np.random.normal(true_B, sigma_x, size=3)\nprint(D)", "Next let's create a function which will compute the confidence interval:", "from scipy.special import erfinv\n\ndef freq_CI_mu(D, sigma, frac=0.95):\n \"\"\"Compute the confidence interval on the mean\"\"\"\n # we'll compute Nsigma from the desired percentage\n Nsigma = np.sqrt(2) * erfinv(frac)\n mu = D.mean()\n sigma_mu = sigma * D.size ** -0.5\n return mu - Nsigma * sigma_mu, mu + Nsigma * sigma_mu\n\nprint(\"95% Confidence Interval: [{0:.0f}, {1:.0f}]\".format(*freq_CI_mu(D, 10)))", "Note here that we've assumed $\\sigma_x$ is a known quantity; this could also be estimated from the data along with $\\mu$, but here we kept things simple for sake of example.\n2. The Bayesian Approach\nFor the Bayesian approach, we start with Bayes' theorem:\n$$\nP(\\mu~|~D) = \\frac{P(D~|~\\mu)P(\\mu)}{P(D)}\n$$\nWe'll use a flat prior on $\\mu$ (i.e. 
$P(\\mu) \\propto 1$ over the region of interest) and use the likelihood\n$$\nP(D~|~\\mu) = \\prod_{i=1}^N \\frac{1}{\\sqrt{2\\pi\\sigma_x^2}}\\exp\\left[\\frac{(\\mu - x_i)^2}{2\\sigma_x^2}\\right]\n$$\nComputing this product and manipulating the terms, it's straightforward to show that this gives\n$$\nP(\\mu~|~D) \\propto \\exp\\left[\\frac{-(\\mu - \\bar{x})^2}{2\\sigma_\\mu^2}\\right]\n$$\nwhich is recognizable as a normal distribution with mean $\\bar{x}$ and standard deviation $\\sigma_\\mu$.\nThat is, the Bayesian posterior on $\\mu$ in this case is exactly equal to the frequentist sampling distribution for $\\mu$.\nFrom this posterior, we can compute the Bayesian credible region, which is the shortest interval that contains 95% of the probability. Here, it looks exactly like the frequentist confidence interval:\n$$\nCR_{\\mu} = \\left(\\bar{x} - 2\\sigma_\\mu,~\\bar{x} + 2\\sigma_\\mu\\right)\n$$\nFor completeness, we'll also create a function to compute the Bayesian credible region:", "def bayes_CR_mu(D, sigma, frac=0.95):\n \"\"\"Compute the credible region on the mean\"\"\"\n Nsigma = np.sqrt(2) * erfinv(frac)\n mu = D.mean()\n sigma_mu = sigma * D.size ** -0.5\n return mu - Nsigma * sigma_mu, mu + Nsigma * sigma_mu\n\nprint(\"95% Credible Region: [{0:.0f}, {1:.0f}]\".format(*bayes_CR_mu(D, 10)))", "So What's the Difference?\nThe above derivation is one reason why the frequentist confidence interval and the Bayesian credible region are so often confused. In many simple problems, they correspond exactly. But we must be clear that even though the two are numerically equivalent, their interpretation is very different.\nRecall that in Bayesianism, the probability distributions reflect our degree of belief. So when we computed the credible region above, it's equivalent to saying\n\n\"Given our observed data, there is a 95% probability that the true value of $\\mu$ falls within $CR_\\mu$\" - Bayesians\n\nIn frequentism, on the other hand, $\\mu$ is considered a fixed value and the data (and all quantities derived from the data, including the bounds of the confidence interval) are random variables. So the frequentist confidence interval is equivalent to saying\n\n\"There is a 95% probability that when I compute $CI_\\mu$ from data of this sort, the true mean will fall within $CI_\\mu$.\" - Frequentists\n\nNote the difference: the Bayesian solution is a statement of probability about the parameter value given fixed bounds. The frequentist solution is a probability about the bounds given a fixed parameter value. This follows directly from the philosophical definitions of probability that the two approaches are based on.\nThe difference is subtle, but, as I'll discuss below, it has drastic consequences. 
First, let's further clarify these notions by running some simulations to confirm the interpretation.\nConfirming the Bayesian Credible Region\nTo confirm what the Bayesian credible region is claiming, we must do the following:\n\nsample random $\\mu$ values from the prior\nsample random sets of points given each $\\mu$\nselect the sets of points which match our observed data\nask what fraction of these $\\mu$ values are within the credible region we've constructed.\n\nIn code, that looks like this:", "# first define some quantities that we need \nNsamples = int(2E7)\nN = len(D)\nsigma_x = 10\n\n# if someone changes N, this could easily cause a memory error\nif N * Nsamples > 1E8:\n raise ValueError(\"Are you sure you want this many samples?\")\n \n# eps tells us how close to D we need to be to consider\n# it a matching sample. The value encodes the tradeoff\n# between bias and variance of our simulation\neps = 0.5\n\n# Generate some mean values from the (flat) prior in a reasonable range\nnp.random.seed(0)\nmu = 80 + 40 * np.random.random(Nsamples)\n\n# Generate data for each of these mean values\nx = np.random.normal(mu, sigma_x, (N, Nsamples)).T\n\n# find data which matches our \"observed\" data\nx.sort(1)\nD.sort()\ni = np.all(abs(x - D) < eps, 1)\nprint(\"number of suitable samples: {0}\".format(i.sum()))\n\n# Now we ask how many of these mu values fall in our credible region\nmu_good = mu[i]\nCR = bayes_CR_mu(D, 10)\nwithin_CR = (CR[0] < mu_good) & (mu_good < CR[1])\nprint(\"Fraction of means in Credible Region: {0:.3f}\".format(within_CR.sum() * 1. / within_CR.size))", "We see that, as predicted, roughly 95% of $\\mu$ values with data matching ours lie in the Credible Region.\nThe important thing to note here is which of the variables is random, and which are fixed. In the Bayesian approach, we compute a single credible region from our observed data, and we consider it in terms of multiple random draws of $\\mu$.\nConfirming the frequentist Confidence Interval\nConfirmation of the interpretation of the frequentist confidence interval is a bit less involved. We do the following:\n\ndraw sets of values from the distribution defined by the single true value of $\\mu$.\nfor each set of values, compute a new confidence interval.\ndetermine what fraction of these confidence intervals contain $\\mu$.\n\nIn code, it looks like this:", "# define some quantities we need\nN = len(D)\nNsamples = int(1E4)\nmu = 100\nsigma_x = 10\n\n# Draw datasets from the true distribution\nnp.random.seed(0)\nx = np.random.normal(mu, sigma_x, (Nsamples, N))\n\n# Compute a confidence interval from each dataset\nCIs = np.array([freq_CI_mu(Di, sigma_x) for Di in x])\n\n# find which confidence intervals contain the mean\ncontains_mu = (CIs[:, 0] < mu) & (mu < CIs[:, 1])\nprint(\"Fraction of Confidence Intervals containing the mean: {0:.3f}\".format(contains_mu.sum() * 1. / contains_mu.size))", "We see that, as predicted, 95% of the confidence intervals contain the true value of $\\mu$.\nAgain, the important thing to note here is which of the variables is random. 
We use a single value of $\\mu$, and consider it in relation to multiple confidence intervals constructed from multiple random data samples.\nDiscussion\nWe should remind ourselves again of the difference between the two types of constraints:\n\nThe Bayesian approach fixes the credible region, and guarantees 95% of possible values of $\\mu$ will fall within it.\nThe frequentist approach fixes the parameter, and guarantees that 95% of possible confidence intervals will contain it.\n\nComparing the frequentist confirmation and the Bayesian confirmation above, we see that the distinctions which stem from the very definition of probability mentioned above:\n\nBayesianism treats parameters (e.g. $\\mu$) as random variables, while frequentism treats parameters as fixed.\nBayesianism treats observed data (e.g. $D$) as fixed, while frequentism treats data as random variables.\nBayesianism treats its parameter constraints (e.g. $CR_\\mu$) as fixed, while frequentism treats its constraints (e.g. $CI_\\mu$) as random variables.\n\nIn the above example, as in many simple problems, the confidence interval and the credibility region overlap exactly, so the distinction is not especially important. But scientific analysis is rarely this simple; next we'll consider an example in which the choice of approach makes a big difference.\nExample 2: Jaynes' Truncated Exponential\nFor an example of a situation in which the frequentist confidence interval and the Bayesian credibility region do not overlap, I'm going to turn to an example given by E.T. Jaynes, a 20th century physicist who wrote extensively on statistical inference in Physics. In the fifth example of his Confidence Intervals vs. Bayesian Intervals (pdf), he considers a truncated exponential model. Here is the problem, in his words:\n\nA device will operate without failure for a time $\\theta$ because of a protective chemical inhibitor injected into it; but at time $\\theta$ the supply of the chemical is exhausted, and failures then commence, following the exponential failure law. It is not feasible to observe the depletion of this inhibitor directly; one can observe only the resulting failures. From data on actual failure times, estimate the time $\\theta$ of guaranteed safe operation...\n\nEssentially, we have data $D$ drawn from the following model:\n$$\np(x~|~\\theta) = \\left{\n\\begin{array}{lll}\n\\exp(\\theta - x) &,& x > \\theta\\\n0 &,& x < \\theta\n\\end{array}\n\\right}\n$$\nwhere $p(x~|~\\theta)$ gives the probability of failure at time $x$, given an inhibitor which lasts for a time $\\theta$.\nGiven some observed data $D = {x_i}$, we want to estimate $\\theta$.\nLet's start by plotting this model for a particular value of $\\theta$, so we can see what we're working with:", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef p(x, theta):\n return (x > theta) * np.exp(theta - x)\n\nx = np.linspace(5, 18, 1000)\nplt.fill(x, p(x, 10), alpha=0.3)\nplt.ylim(0, 1.2)\nplt.xlabel('x')\nplt.ylabel('p(x)');", "Imagine now that we've observed some data, $D = {10, 12, 15}$, and we want to infer the value of $\\theta$ from this data. We'll explore four approaches to this below.\n1. Common Sense Approach\nOne general tip that I'd always recommend: in any problem, before computing anything, think about what you're computing and guess what a reasonable solution might be. We'll start with that here. 
Thinking about the problem, the hard cutoff in the probability distribution leads to one simple observation: $\\theta$ must be smaller than the smallest observed value.\nThis is immediately obvious on examination: the probability of seeing a value less than $\\theta$ is zero. Thus, a model with $\\theta$ greater than any observed value is impossible, assuming our model specification is correct. Our fundamental assumption in both Bayesianism and frequentism is that the model is correct, so in this case, we can immediately write our common sense condition:\n$$\n\\theta < \\min(D)\n$$\nor, in the particular case of $D = {10, 12, 15}$,\n$$\n\\theta < 10\n$$\nAny reasonable constraint on $\\theta$ given this data should meet this criterion. With this in mind, let's go on to some quantitative approaches based on Frequentism and Bayesianism.\n2. Frequentist approach #1: Sampling Distribution via the Normal Approximation\nIn the frequentist paradigm, we'd like to compute a confidence interval on the value of $\\theta$. We can start by observing that the population mean is given by\n$$\n\\begin{array}{ll}\nE(x) &= \\int_0^\\infty xp(x)dx\\\n &= \\theta + 1\n \\end{array}\n$$\nSo, using the sample mean as the point estimate of $E(x)$, we have an unbiased estimator for $\\theta$ given by\n$$\n\\hat{\\theta} = \\frac{1}{N} \\sum_{i=1}^N x_i - 1\n$$\nThe exponential distribution has a standard deviation of 1, so in the limit of large $N$, we can use the standard error of the mean (as above) to show that the sampling distribution of $\\hat{\\theta}$ will approach normal with variance $\\sigma^2 = 1 / N$. Given this, we can write our 95% (i.e. 2$\\sigma$) confidence interval as\n$$\nCI_{\\rm large~N} = \\left(\\hat{\\theta} - 2 N^{-1/2},~\\hat{\\theta} + 2 N^{-1/2}\\right)\n$$\nLet's write a function which will compute this, and evaluate it for our data:", "from scipy.special import erfinv\n\ndef approx_CI(D, sig=0.95):\n \"\"\"Approximate truncated exponential confidence interval\"\"\"\n # use erfinv to convert percentage to number of sigma\n Nsigma = np.sqrt(2) * erfinv(sig)\n D = np.asarray(D)\n N = D.size\n theta_hat = np.mean(D) - 1\n return [theta_hat - Nsigma / np.sqrt(N),\n theta_hat + Nsigma / np.sqrt(N)]\n\nD = [10, 12, 15]\nprint(\"approximate CI: ({0:.1f}, {1:.1f})\".format(*approx_CI(D)))", "We immediately see an issue. By our simple common sense argument, we've determined that it is impossible for $\\theta$ to be greater than 10, yet the entirety of the 95% confidence interval is above this range! Perhaps this issue is due to the small sample size: the above computation is based on a large-$N$ approximation, and we have a relatively paltry $N = 3$.\nMaybe this will be improved if we do the more computationally intensive exact approach?\nThe answer is no. If we compute the confidence interval without relying on the large-$N$ Gaussian approximation, the result is $(10.2, 12.2)$.\nNote: you can verify this yourself by evaluating the code in the sub-slides.\n3. Frequentist approach #2: Exact Sampling Distribution\nComputing the confidence interval from the exact sampling distribution takes a bit more work.\nFor small $N$, the normal approximation will not apply, and we must instead compute the confidence interval from the actual sampling distribution, which is the distribution of the mean of $N$ variables each distributed according to $p(\\theta)$. 
The sum of random variables is distributed according to the convolution of the distributions for individual variables, so we can exploit the convolution theorem and use the method of characteristic functions to find the following sampling distribution for the sum of $N$ variables distributed according to our particular $p(x~|~\\theta)$:\n$$\nf(\\theta~|~D) \\propto\n\\left{\n\\begin{array}{lll}\nz^{N - 1}\\exp(-z) &,& z > 0\\\n0 &,& z < 0\n\\end{array}\n\\right}\n;~ z = N(\\hat{\\theta} + 1 - \\theta)\n$$\nTo compute the 95% confidence interval, we can start by computing the cumulative distribution: we integrate $f(\\theta~|~D)$ from $0$ to $\\theta$ (note that we are not actually integrating over the parameter $\\theta$, but over the estimate of $\\theta$. Frequentists cannot integrate over parameters).\nThis integral is relatively painless if we make use of the expression for the incomplete gamma function:\n$$\n\\Gamma(a, x) = \\int_x^\\infty t^{a - 1}e^{-t} dt\n$$\nwhich looks strikingly similar to our $f(\\theta)$.\nUsing this to perform the integral, we find that the cumulative distribution is given by\n$$\nF(\\theta~|~D) = \\frac{1}{\\Gamma(N)}\\left[ \\Gamma\\left(N, \\max[0, N(\\hat{\\theta} + 1 - \\theta)]\\right) - \\Gamma\\left(N,~N(\\hat{\\theta} + 1)\\right)\\right]\n$$\nA contiguous 95% confidence interval $(\\theta_1, \\theta_2)$ satisfies the following equation:\n$$\nF(\\theta_2~|~D) - F(\\theta_1~|~D) = 0.95\n$$\nThere are in fact an infinite set of solutions to this; what we want is the shortest of these. We'll add the constraint that the probability density is equal at either side of the interval:\n$$\nf(\\theta_2~|~D) = f(\\theta_1~|~D)\n$$\n(Jaynes claims that this criterion ensures the shortest possible interval, but I'm not sure how to prove that).\nSolving this system of two nonlinear equations will give us the desired confidence interval. 
Let's compute this numerically:", "from scipy.special import gammaincc\nfrom scipy import optimize\n\n\ndef exact_CI(D, frac=0.95):\n \"\"\"Exact truncated exponential confidence interval\"\"\"\n D = np.asarray(D)\n N = D.size\n theta_hat = np.mean(D) - 1\n\n def f(theta, D):\n z = theta_hat + 1 - theta\n return (z > 0) * z ** (N - 1) * np.exp(-N * z)\n\n def F(theta, D):\n return gammaincc(N, np.maximum(0, N * (theta_hat + 1 - theta))) - gammaincc(N, N * (theta_hat + 1))\n \n def eqns(CI, D):\n \"\"\"Equations which should be equal to zero\"\"\"\n theta1, theta2 = CI\n return (F(theta2, D) - F(theta1, D) - frac,\n f(theta2, D) - f(theta1, D))\n \n guess = approx_CI(D, 0.68) # use 1-sigma interval as a guess\n result = optimize.root(eqns, guess, args=(D,))\n if not result.success:\n print \"warning: CI result did not converge!\"\n return result.x", "As a sanity check, let's make sure that the exact and approximate confidence intervals match for a large number of points:", "np.random.seed(0)\nDlarge = 10 + np.random.random(500)\nprint \"approx: ({0:.3f}, {1:.3f})\".format(*approx_CI(Dlarge))\nprint \"exact: ({0:.3f}, {1:.3f})\".format(*exact_CI(Dlarge))", "As expected, the approximate solution is very close to the exact solution for large $N$, which gives us confidence that we're computing the right thing.\nLet's return to our 3-point dataset and see the results:", "print(\"approximate CI: ({0:.1f}, {1:.1f})\".format(*approx_CI(D)))\nprint(\"exact CI: ({0:.1f}, {1:.1f})\".format(*exact_CI(D)))", "The exact confidence interval is slightly different than the approximate one, but still reflects the same problem: we know from common-sense reasoning that $\\theta$ can't be greater than 10, yet the 95% confidence interval is entirely in this forbidden region! The confidence interval seems to be giving us unreliable results.\nWe'll discuss this in more depth further below, but first let's see if Bayes can do better.\n4. Bayesian Credibility Interval\nFor the Bayesian solution, we start by writing Bayes' rule:\n$$\np(\\theta~|~D) = \\frac{p(D~|~\\theta)p(\\theta)}{P(D)}\n$$\nUsing a constant prior $p(\\theta)$, and with the likelihood\n$$\np(D~|~\\theta) = \\prod_{i=1}^N p(x~|~\\theta)\n$$\nwe find\n$$\np(\\theta~|~D) \\propto \\left{\n\\begin{array}{lll}\nN\\exp\\left[N(\\theta - \\min(D))\\right] &,& \\theta < \\min(D)\\\n0 &,& \\theta > \\min(D)\n\\end{array}\n\\right}\n$$\nwhere $\\min(D)$ is the smallest value in the data $D$, which enters because of the truncation of $p(x~|~\\theta)$.\nBecause $p(\\theta~|~D)$ increases exponentially up to the cutoff, the shortest 95% credibility interval $(\\theta_1, \\theta_2)$ will be given by\n$$\n\\theta_2 = \\min(D)\n$$\nand $\\theta_1$ given by the solution to the equation\n$$\n\\int_{\\theta_1}^{\\theta_2} N\\exp[N(\\theta - \\theta_2)]d\\theta = f\n$$\nthis can be solved analytically by evaluating the integral, which gives\n$$\n\\theta_1 = \\theta_2 + \\frac{\\log(1 - f)}{N}\n$$\nLet's write a function which computes this:", "def bayes_CR(D, frac=0.95):\n \"\"\"Bayesian Credibility Region\"\"\"\n D = np.asarray(D)\n N = float(D.size)\n theta2 = D.min()\n theta1 = theta2 + np.log(1. 
- frac) / N\n return theta1, theta2", "Now that we have this Bayesian method, we can compare the results of the four methods:", "print(\"common sense: theta < {0:.1f}\".format(np.min(D)))\nprint(\"frequentism (approx): 95% CI = ({0:.1f}, {1:.1f})\".format(*approx_CI(D)))\nprint(\"frequentism (exact): 95% CI = ({0:.1f}, {1:.1f})\".format(*exact_CI(D)))\nprint(\"Bayesian: 95% CR = ({0:.1f}, {1:.1f})\".format(*bayes_CR(D)))", "What we find is that the Bayesian result agrees with our common sense, while the frequentist approach does not. The problem is that frequentism is answering the wrong question.\nNumerical Confirmation\nTo try to quell any doubts about the math here, I want to repeat the exercise we did above and show that the confidence interval derived above is, in fact, correct. We'll use the same approach as before, assuming a \"true\" value for $\\theta$ and sampling data from the associated distribution:", "from scipy.stats import expon\n\nNsamples = 1000\nN = 3\ntheta = 10\n\nnp.random.seed(42)\ndata = expon(theta).rvs((Nsamples, N))\nCIs = np.array([exact_CI(Di) for Di in data])\n\n# find which confidence intervals contain the mean\ncontains_theta = (CIs[:, 0] < theta) & (theta < CIs[:, 1])\nprint \"Fraction of Confidence Intervals containing theta: {0:.3f}\".format(contains_theta.sum() * 1. / contains_theta.size)", "As is promised by frequentism, 95% of the computed confidence intervals contain the true value. The procedure we used to compute the confidence intervals is, in fact, correct: our data just happened to be among the 5% where the method breaks down. But here's the thing: we know from the data themselves that we are in the 5% where the CI fails. The fact that the standard frequentist confidence interval ignores this common-sense information should give you pause about blind reliance on the confidence interval for any nontrivial problem.\nFor good measure, let's check that the Bayesian credible region also passes its test:", "np.random.seed(42)\nN = int(1E7)\neps = 0.1\n\ntheta = 9 + 2 * np.random.random(N)\ndata = (theta + expon().rvs((3, N))).T\ndata.sort(1)\nD.sort()\ni_good = np.all(abs(data - D) < eps, 1)\n\nprint(\"Number of good samples: {0}\".format(i_good.sum()))\n\ntheta_good = theta[i_good]\ntheta1, theta2 = bayes_CR(D)\n\nwithin_CR = (theta1 < theta_good) & (theta_good < theta2)\nprint(\"Fraction of thetas in Credible Region: {0:.3f}\".format(within_CR.sum() * 1. / within_CR.size))", "Again, we have confirmed that, as promised, ~95% of the suitable values of $\\theta$ fall in the credible region we computed from our single observed sample.\nFrequentism Answers the Wrong Question\nWe've shown that the frequentist approach in the second example is technically correct, but it disagrees with our common sense. What are we to take from this?\nHere's the crux of the problem: The frequentist confidence interval, while giving the correct answer, is usually answering the wrong question. And this wrong-question approach is the result of a probability definition which is fundamental to the frequentist paradigm!\n<br>\n<img style=\"display: block; margin-left: auto; margin-right: auto\" alt=\"Frankie &amp; Benjy\" src=\"https://vignette.wikia.nocookie.net/villains/images/1/13/Mice-s1xicp-1-.jpg/revision/latest?cb=20141020183029\">\nRecall the statements about confidence intervals and credible regions that I made above. 
From the Bayesians:\n\n\"Given our observed data, there is a 95% probability that the true value of $\\theta$ falls within the credible region\" - Bayesians\n\nAnd from the frequentists:\n\n\"There is a 95% probability that when I compute a confidence interval from data of this sort, the true value of $\\theta$ will fall within it.\" - Frequentists\n\nNow think about what this means. Suppose you've measured three failure times of your device, and you want to estimate $\\theta$. I would assert that \"data of this sort\" is not your primary concern: you should be concerned with what you can learn from those particular three observations, not the entire hypothetical space of observations like them. \nAs we saw above, if you follow the frequentists in considering \"data of this sort\", you are in danger of arriving at an answer that tells you nothing meaningful about the particular data you have measured.\nSuppose you attempt to change the question and ask what the frequentist confidence interval can tell you given the particular data that you've observed. Here's what it has to say:\n\n\"Given this observed data, the true value of $\\theta$ is either in our confidence interval or it isn't\" - Frequentists\n\nThat's all the confidence interval means – and all it can mean! – for this particular data that you have observed. Really. I'm not making this up.\nYou might notice that this is simply a tautology, and can be put more succinctly:\n\n\"Given this observed data, I can put no constraint on the value of $\\theta$\" - Frequentists\n\nIf you're interested in what your particular, observed data are telling you, frequentism is useless.\nHold on... isn't that a bit harsh?\nThis might be a harsh conclusion for some to swallow, but I want to emphasize that it is not simply a matter of opinion or ideology; it's an undeniable fact based on the very philosophical stance underlying frequentism and the very definition of the confidence interval. If what you're interested in are conclusions drawn from the <u>particular data</u> you observed, frequentism's standard answers (i.e. the confidence interval and the closely-related $p$-values) are entirely useless.\nUnfortunately, most people using frequentist principles in practice don't seem to realize this. Many scientists operate as if the confidence interval is a Bayesian credible region, but it demonstrably is not. This oversight can perhaps be forgiven for the statistical layperson, as even trained statisticians will often mistake the interpretation of the confidence interval.\nI think the reason this mistake is so common is that in many simple cases (as I showed in the first example above) the confidence interval and the credible region happen to coincide. Frequentism, in this case, correctly answers the question you ask, but only because of the happy accident that Bayesianism gives the same result for that problem.\nThis can lead to (sometimes amusing) mistakes in physics and astronomy. 
But confidence intervals and $p$-values are firmly entrenched in sciences such as medicine, where lives are, quite literally, at stake.\nUnfortunately, our colleagues there are still attempting to \"fix\" $p$-values:\n<img style=\"display: block; margin-left: auto; margin-right: auto\" alt=\"Frankie &amp; Benjy\" src=\"figures/p-value-005.png\">\n<div style=\"text-align: right; margin-right:1em;\"> &mdash; *Science, July 2017*</div>\n\nFrequentism Considered Harmful\n<br>\n\n\"Because it is too easy to misunderstand and misuse, frequentism should be considered harmful\".\n\n<div style=\"text-align: right; margin-right:1em;\"> &mdash; Juric 2017, paraphrasing [Dijkstra (1968)](https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.pdf)</div>\n\n<br>\nOr, as Jake puts it...\nFrequentism and Science Do Not Mix.\n<br>\n<div style=\"text-align: right; margin-right:5em;\"> &mdash; VanderPlas 2014</div>\n\nThe moral of the story is that frequentism and Science do not mix. Let me say it directly: you should be suspicious of the use of frequentist confidence intervals and p-values in science.\nIn a scientific setting, confidence intervals, and closely-related p-values, provide the correct answer to the wrong question. In particular, if you ever find someone stating or implying that a 95% confidence interval is 95% certain to contain a parameter of interest, do not trust their interpretation or their results. If you happen to be peer-reviewing the paper, reject it. Their data do not back-up their conclusion.\n(addendum, from Jake VanderPlas' blog):\n\"Now, I should point out that I am certainly not the first person to state things this way, or even this strongly. The Physicist E.T. Jaynes was known as an ardent defender of Bayesianism in science; one of my primary inspirations for this post was his 1976 paper, Confidence Intervals vs. Bayesian Intervals (pdf). More recently, statistician and blogger W.M. Briggs posted a diatribe on arXiv called It's Time To Stop Teaching Frequentism to Non-Statisticians which brings up this same point. It's in the same vein of argument that Savage, Cornfield, and other outspoken 20th-century Bayesian practitioners made throughout their writings, talks, and correspondance.\nSo should you ever use confidence intervals at all? Perhaps in situations (such as analyzing gambling odds) where multiple data realizations are the reality, frequentism makes sense. But in most scientific applications where you're concerned with what one particular observed set of data is telling you, frequentism simply answers the wrong question.\nEdit, November 2014: to appease several commentors, I'll add a caveat here. The unbiased estimator $\\bar{x}$ that we used above is just one of many possible estimators, and it can be argued that such estimators are not always the best choice. Had we used, say, the Maximum Likelihood estimator or a sufficient estimator like $\\min(x)$, our initial misinterpretation of the confidence interval would not have been as obviously wrong, and may even have fooled us into thinking we were right. But this does not change our central argument, which involves the question frequentism asks. Regardless of the estimator, if we try to use frequentism to ask about parameter values given observed data, we are making a mistake. For some choices of estimator this mistaken interpretation may not be as manifestly apparent, but it is mistaken nonetheless.\"" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
initialkommit/kookmin
midterm/kookmin_midterm_정인환.ipynb
mit
[ "Python Programming for Data Analysis\n1. 데이터 분석을 위한 환경 구성 (패키지 설치 포함)", "# 운영체제\n!ver\n\n# 현재 위치 및 하위 디렉토리 구조\n!dir\n\n# 파이선 버전\n\n!python --version\n\n# 가상환경 버전\n\n!virtualenv --version\n\n# 존재하는 가상환경 목록\n\n!workon\n\n# 가상환경 kookmin1에 진입\n# workon kookmin1\n\n# 가상환경 kookmin1에 설치된 패키지\n# 데이터 분석 : numpy, pandas\n# 시각화 : matplotlib\n\n!pip freeze", "TicTaeToe 게임", "from IPython.display import Image\nImage(filename='images/TicTaeToe.png')", "TicTaeToe게임을 간단 버젼으로 구현한 것으로 사용자가 먼저 착수하여 승부를 겨루게 됩니다. \n향후에는 기계학습으로 발전시켜 실력을 키워 보려 합니다.", "# %load TicTaeToe.py\nimport sys\nimport random\n\n# 게임 방범 설명\nprint(\"출처: http://www.practicepython.org\")\nprint(\"==================================\")\nprint(\"가로, 세로, 대각선 방향으로 \")\nprint(\"세점을 먼저 이어 놓으면 이기는\")\nprint(\"게임으로 사용자(U)와 Computer(C)가\")\nprint(\"번갈아 놓습니다.\")\nprint(\"==================================\\n\")\n\n# 3 x 3 정보를 담기 위한 저장소 선언\n# 0 은 초기 상태\n# 1 은 사용자가 선택한 곳\n# 2 는 컴퓨터가 선택한 곳\ndim=3\nlist4 = [0,0,0,0,0,0,0,0,0]\n\n# 사용자 안내를 위한 박스를 그리고 그 안에 번호 넣기\ndef graph():\n k = 1\n for i in range(dim+1):\n print(\" ---\"*dim)\n for j in range(dim):\n if (i < dim):\n print(\"| \"+str(k), end=\" \")\n k = k + 1\n if (i != 3):\n print(\"|\")\n\n# 사용자 또는 컴퓨터가 수를 둘때 마다,\n# 누가 이겼는지 체크\ndef game_wins(list4):\n #print(list4)\n for i in range(dim): \n #checks to see if you win in a column\n if list4[i] == list4[i+3] == list4[i+6] == 1:\n print(\"You Won\")\n elif list4[i] == list4[i+3] == list4[i+6] == 2:\n print(\"You Lost\")\n #checks to see if you win in a row\n if list4[dim*i] == list4[dim*i+1] == list4[dim*i+2] == 1:\n print (\"You Won\")\n elif list4[dim*i] == list4[dim*i+1] == list4[dim*i+2] == 2:\n print(\"You Lost\")\n #checks to see if you win in a diagonal\n if list4[0] == list4[4] == list4[8] == 1:\n print (\"You Won\")\n elif list4[0] == list4[4] == list4[8] == 2:\n print(\"You Lost\")\n if list4[2] == list4[4] == list4[6] == 1:\n print (\"You Won\")\n elif list4[2] == list4[4] == list4[6] == 2:\n print(\"You Lost\")\n\n# 사용자 안내를 위한 박스를 그리고 그 안에 번호 또는 둔 수 표기\ndef graph_pos(list4):\n for idx in range(len(list4)):\n if (idx % 3 == 0):\n print(\" ---\"*dim)\n if (list4[idx] == 0):\n print(\"| \"+str(idx+1), end=\" \")\n elif (list4[idx] == 1):\n print(\"| \"+\"U\", end=\" \")\n else:\n print(\"| \"+\"C\", end=\" \") \n if (idx % 3 == 2):\n print(\"|\")\n print(\"\\n\")\n\n# 게임 시작\ngo = input(\"Play TicTaeToe? Enter, or eXit?\")\nif (go == 'x' or go == 'X'):\n sys.exit(0)\ngraph()\nprint(\"\\n\")\n\nwhile(1): # 보드게임이 승부가 날때까지 무한 반복\n # 빈곳 선택\n pos = int(input(\"You : \")) - 1\n while (pos < 0 or pos > 8 or list4[pos] != 0):\n pos = int(input(\"Again : \")) - 1\n list4[pos] = 1\n \n # 보드를 갱신하여 그리고, 승부 체크\n graph_pos(list4)\n game_wins(list4)\n\n # 컴퓨터 차례로, 빈곳을 랜덤하게 선택하여 List에 저장\n pos = random.randrange(9)\n while (list4[pos] != 0):\n pos = random.randrange(9)\n print(\"Computer : \" + str(pos+1))\n list4[pos] = 2\n \n # 보드를 갱신하여 그리고, 승부 체크\n graph_pos(list4)\n game_wins(list4)", "<Note>\nwrite/save\n%%writefile myfile.py\nwrite/save cell contents into myfile.py (use -a to append). Another alias: %%file myfile.py\nrun\n%run myfile.py\nrun myfile.py and output results in the current cell\nload/import\n%load myfile.py\nload \"import\" myfile.py into the current cell\nfor more magic and help\n%lsmagic\nlist all the other cool cell magic commands.\n%COMMAND-NAME?\nfor help on how to use a certain command. i.e. %run?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb
mit
[ "Coaddition and Subtraction\nVersion 0.1\nBy Yusra AlSayyad (Princeton University)\nThis notebook provides some problems for ground-based coaddition and subtraction.\nWe add and subtract images for different purposes: addition to get the benefits of longer exposures and subtraction to reveal what has changed. However, in order to perform either operation, the following image characteristics need to be normalized:\n* Astrometric calibration (i.e. WCS)\n* Photometric calibration (i.e. Zeropoint)\n* Background level\n* Point Spread Function (PSF) (optional for coaddition)\nFor these problems, we'll assume that we have images that have already been normalized for WCS, zeropoint, and background variations.\nProblem 1) Expected depth of a coadd\nIn order of magnitude calculations, you may hear people throw around the statement that the depth of a coadd increases by a factor of $\\sqrt{N}$ for N single-epoch images. Where does this come from? Under what conditions is this a good approximation?\nWe are going to use our same star + noise 1-D simulation from IntroductionToBasicStellarPhotometry.ipynb and FindingSources.ipynb:", "import matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib.ticker import MultipleLocator\nfrom scipy.stats import norm\n\ndef pixel_plot(pix, counts, fig=None, ax=None): \n '''Make a pixelated 1D plot'''\n if fig is None and ax is None:\n fig, ax = plt.subplots()\n \n ax.step(pix, counts, \n where='post')\n \n ax.set_xlabel('pixel number')\n ax.set_ylabel('relative counts')\n ax.xaxis.set_minor_locator(MultipleLocator(1))\n ax.xaxis.set_major_locator(MultipleLocator(5))\n fig.tight_layout()\n return fig, ax\n\n# It is sufficient to copy and paste from\n# your introductionToBasicStellarPhotometry noteboook\n\ndef phi(x, mu, fwhm):\n \"\"\"Evalute the 1d PSF N(mu, sigma^2) along x\n \n Parameters\n ----------\n x : array-like of shape (n_pixels,)\n detector pixel number\n mu : float\n mean position of the 1D star\n fwhm : float\n Full-width half-maximum of the stellar profile on the detector\n \n Returns\n -------\n flux : array-like of shape (n_pixels,)\n Flux in each pixel of the input array\n \"\"\"\n # complete\n\n return flux\n\n\n# Define your image simulation function to\n# It is sufficient to copy and paste from\n# your introductionToBasicStellarPhotometry noteboook\n# Note that the background S should now be supplied as \n# an array of length (x) or a constant. \n\ndef simulate(x, mu, fwhm, S, F):\n \"\"\"simulate a noisy stellar signal\n \n Parameters\n ----------\n x : array-like\n detector pixel number\n mu : float\n mean position of the 1D star\n fwhm : float\n Full-width half-maximum of the stellar profile on the detector\n S : float or array-like of len(x)\n Sky background for each pixel\n F : float\n Total stellar flux\n \n Returns\n -------\n noisy_counts : array-like (same shape as x)\n the (noisy) number of counts in each pixel\n \"\"\"\n # complete\n\n return noisy_counts", "Problem 1.1) Make a simple mean coadd\nSimulate N observations of a star, and coadd them by taking the mean of the N observations. 
(We can only do this because they are already astrometrically and photometrically aligned and have the same background value.)", "MU = 35\nS = 100\nF = 100\nFWHM = 5\n\nx = np.arange(100)\n\n# simulate a single observation of the star and plot:\ny = # complete\npixel_plot(x, y)\n\n# Write a simulateN function that returns an array of size (N, x)\n# representing N realizations of your simulated star\n# This will stand in as a stack of multiple observations of one star\n\ndef simulateN(x, mu, fwhm, S, F, N):\n \"\"\"simulate a noisy stellar signal\n \n Parameters\n ----------\n x : array-like\n detector pixel number\n mu : float\n mean position of the 1D star\n fwhm : float\n Full-width half-maximum of the stellar profile on the detector\n S : float or array-like of len(x)\n Sky background for each pixel\n F : float\n Total stellar flux\n N: int\n Number of images to simulate\n \n Returns\n -------\n noisy_counts : array-like of shape (N, x)\n the (noisy) number of counts in each pixel\n \"\"\"\n # complete\n return noisy_counts\n\n# simulate N=50 images with the same star\nx = np.arange(100)\nstack = # complete\n# where stack is an array of size (50, 100) representing a pile of 50 images with 100 pixels\n\n# coadd by taking the mean and plot the result\ncoadd = # complete\npixel_plot(x, coadd)\n\n# Try a few different N to see how it affects the S/N of your result\n\n\n\n# Plot the coadds of N=[1, 10, and 100] on the same plot:\n# complete \n", "Problem 1.2) SNR vs N\nNow compute the observed SNR of the simulated star on each coadd and compare to the expected SNR in the idealized case. The often repeated mnemonic for SNR inscrease as a function of number of images, $N$, is that noise decreses like $\\sqrt{N}$. This is of course idealized case where the noise in each observation is identical.\nUsing your simulateN function, simulate a series of mean coadds with increasing N.\n\nFirst, plot the empirical noise/uncertainty/stdev as a function of N. and overplot the expected uncertainty given the input sky level. You can use an area you know isn't touched by the star. \nNext, plot the empirical SNR of the star (measured flux/fluxErr) as a function of N. Overplot the expected SNR. You can assume you know the sky level. \n\nYour expected scaling with N should roughly track your empirical estimate.", "# complete\n\n# hint. One way to start this\n# std = []\n# flux = []\n# Ns = np.arange(1, 1000, 5)\n# for N in Ns:\n # y = simulateN(...)\n # complete\n \n# plt.plot(Ns, ..., label=\"coadd\")\n# plt.plot(Ns, ..., label=\"expected\")\n# plt.xlabel('N')\n# plt.ylabel('pixel noise')\n# plt.legend()\n\n# complete\n\n# plt.plot(Ns, ..., label=\"coadd\")\n# plt.plot(Ns, ..., label=\"expected\")\n# plt.xlabel('N')\n# plt.ylabel('PSF Flux SNR')\n# plt.legend()", "Problem 2) PSFs and Image weights in coadds\nProblem (1) pretends that the input images are identical in quality, however this is never the case in practice. In practice, adding another image does not necessarily increase the SNR. For example, imagine you have two exposures, but in one the dome light was accidentally left on. A coadd with these two images weighted equally will have a worse SNR than the first image alone. Therefore the images should be aggregated with a weighted mean, so that images of poor quality don't degrade the quality of the coadd. What weights to we pick?\nWeights can be chosen to either minimize the variance on the coadd or maximize the SNR of point sources on the coadd. 
\nSome background:\nAssuming that all noise sources are independent, the SNR of the measurement of flux from a star is:\n\\begin{equation}\nSNR \\propto {{N_{\\rm photons}}\\over{ \\sigma_{\\rm sky}} \\sqrt{A} },\n\\end{equation}\nwhere $N_{\\rm photons}$ is the number of photons detected from the star,\n$A$ is the area in pixels covered by the star.\nThe per-pixel sky noise $\\sigma_{sky}$ includes all sources of noise: dark current, read noise and sky-background, and it coded in the variance plane of the image. For the epoch $i$, $\\sigma^2_{i, {\\rm sky}}$ is the average of the variance plane.\nThe $N_{\\rm photons}$ is proportional to transparency $T$, and the area that the stellar photons cover is determined by the seeing: $A \\propto {\\rm FWHM}^2$.\nTherefore, a coadd optimized for point-source detection would, to weight each image by the SNR$^2$, use the following as weights which prefers good-seeing epochs taken when the sky is transparent and dark.\n\\begin{align}\nw_i & = {\\rm SNR}^2 \\propto {{T_i^2}\\over{{\\rm FWHM_i}^2 \\sigma_i^2}}.\n\\end{align} \nThe usual inverse-variance weighting which produces the minimum-variance co-add, is given by, \n\\begin{align}\nw_i & =T_i^2/\\sigma_i^2\n\\end{align} \nIn practice, the factor of $T_i^2$ is incorporated into the variance when flux-scaling the single-epoch images to a common zeropoint. This step multiplies the image by a scale-factor, which increases the variance of the image by the square of the scale factor. The scale factor is inversely proportional to the transparency, so that\n$\\sigma_{scaled}= \\sigma/T$. \nFor this problem assume the images are all on the same zeropoint (like problem 1) i.e. T=1. \nProblem 2.1 Weighting images in Variable Sky\nNow simulate 50 observations of stars with Sky S ranging from 100 to 1000. Remember to subtract this background off before stacking this time! Plot the plain (unweighted) mean coadd vs. the minimum variance coadd. Weights should add up to 1. What's the empirical noise estimate of the coadd?", "# complete", "Problem 2.2 Weighting images in Variable Seeing\nSimulate 50 observations with FWHM's ranging from 2-10 pixels. Keep the flux amplitude F, and sky noise S both fixed. \nGenerate two coadds, (1) with the weights that minimize variance and (2) with the weights that maximize point source SNR. Weights should add up to 1. Plot both coadds.", "# complete", "Problem 2.3 Image variance vs per pixel variance (Challenge Problem)\nWhy do we use per image variances instead of per pixel variances? Let's see! Start tracking the per-pixel variance when you simulate the star. Make a coadd of 200 observations with FWHM's ranging from 2 to 20 pixels. Make a coadd weighted by the per-pixel inverse variance. How does the profile of the star look in this coadd compared to an unweighted coadd and compared to the coadd with the $w_i = \\frac{1}{{\\rm FWHM_i}^2 \\sigma_i^2}$? (You may have to plot the difference to see the change).", "# complete", "Problem 3) Dipoles in Image Subtraction\nIn the lesson, we said that just because you see a dipole in a difference image, does not mean that the astrometric registration is terrible. For this problem, we'll forgo the pixelated simulated star from the previous problems and operate with Gaussian profiles.", "# Create two Gaussian 1-D profiles with\n\nASTROM_OFFSET = 0.1 # units of e.g. pixels\nFLUX_SCALE = 1. 
# units of e.g. nanojansky\nPSF = 1 # rms pixels\n\nx = np.linspace(-5, 5)\ny1 = FLUX_SCALE * norm.pdf(x, ASTROM_OFFSET, PSF)\ny2 = FLUX_SCALE * norm.pdf(x, -ASTROM_OFFSET, PSF)\nplt.plot(x, y1, label='profile 1')\nplt.plot(x, y2, label='profile 2')\nplt.xlabel('x (pixel)')\nplt.ylabel('y (flux)')\nplt.legend()\n", "Problem 3.1) Plot the difference of these two profiles:", "# complete", "Problem 3.2) What if we have amazing astrometric registration\nand shrink the astrometric offset by a factor of a thousand? Is there a star sufficiently bright to produce the same dipole? What is its FLUX SCALE?", "ASTROM_OFFSET = 0.0001 \nPSF = 1. \n\n# complete \n\n# Plot both dipoles (for the offset=0.1 and the offset=0.0001) in the same figure.\n# Same or different subplots up to you. ", "Problem 3.3) Distance between peaks.\nDoes the distance between the dipole's positive and negative peaks depend on the astrometric offset? If not, what does it depend on? You can answer this by visualizing the dipoles vs offsets. But for a challenge, measure the distance between peaks and plot them as a function of astrometric offset or another factor.", "# complete ", "Problem 3.4)\nIn the problem setup we assumed that the astrometric error was < the PSF width. Is this assumption likely to hold in an image you'd get from a wide-field imager like Rubin's?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jhonatancasale/graduation-pool
disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb
apache-2.0
[ "Fundamentos de Matrizes | Matrix Fundamentals:\n\nUma forma organizada de representar os dados numéricos.\nO tamanho ou a dimensão da matriz (nro linhas) X (nro colunas), por exemplo $2x3$\nO elemento que ocupa a i-ésima linha e a j-ésima coluna é denotado por $a_{ij}$\n\nExamplo de uma Matriz $2x3$ | Example of a $2x3$ Matrix\n$$A_{2x3} = \\begin{pmatrix}\n a_{11} & a_{12} & a_{13} \\\n a_{21} & a_{22} & a_{23}\n\\end{pmatrix}$$\nExemplo numérico de uma Matriz $2x3$ | Numeric example of a $2x3$ Matrix\n$$A_{2x3} = \\begin{pmatrix}\n -1 & 42 & 10 \\\n 12 & 0 & 9\n\\end{pmatrix}$$\nAlguns exemplos em Python3 | Some examples in Python3", "import numpy as np # for array, dot and so on", "Matrix creation", "B = np.arange(9).reshape(3, 3)\nprint(B)\n\nA = np.array([\n [-1, 42, 10],\n [12, 0, 9]\n])\nprint(A)\n\n# inspecting the matrices\nprint(A.shape) # 2 x 3\nprint(B.shape) # 3 x 3\n\n# We have 2 dimensions `X1` and `X2`\nprint(A.ndim)\nprint(B.ndim)\n\nZeros = np.zeros((2, 3))\nprint(Zeros)\n\nOnes = np.ones((3, 3))\nprint(Ones)\n\nEmpty = np.empty((4, 4))\nprint(Empty)", "Vector creation", "print(np.arange(5, 30, 7))\n\nprint(np.arange(10, 13, .3))\n\nprint(np.linspace(0, 2, 13))", "np.arange bahevior to large numbers", "print(np.arange(10000))\n\nprint(np.arange(10000).reshape(100,100))", "Basic Operations\n$$A_{mxn} \\pm B_{mxn} \\mapsto C_{mxn}$$\n$$u_{1xn} \\pm v_{1xn} \\mapsto w_{1xn} \\quad (u_n \\pm v_n \\mapsto w_n)$$", "A = np.array([10, 20, 30, 40, 50, -1])\nB = np.linspace(0, 1, A.size)\n\nprint(\"{} + {} -> {}\".format(A, B, A + B))\nprint(\"{} - {} -> {}\".format(A, B, A - B))", "$$f:M_{mxn} \\to M_{mxn}$$\n$$a_{ij} \\mapsto a_{ij}^2$$", "print(\"{} ** 2 -> {}\".format(A, A ** 2))", "$$f:M_{mxn} \\to M_{mxn}$$\n$$a_{ij} \\mapsto 2\\sin(a_{ij})$$", "print(\"2 * sin({}) -> {}\".format(A, 2 * np.sin(A)))", "$$f:M_{mxn} \\to M_{mxn}$$\n$$\n\\forall \\quad i, j: \\quad i < m, j < n \\qquad a_{ij} = \n\\left{ \n \\begin{array}{ll}\n \\text{True} & \\quad se \\quad a_{ij} > 30 \\\n \\text{False} & \\quad \\text{c.c}\n \\end{array}\n \\right.\n$$", "print(A > 30)", "Usando um vetor de Bools como Indexador", "print(A[A > 30])", "$$A_{mxn} * B_{mxn} \\mapsto C_{mxn}$$\n$$c_{ij} = a_{ij} * b_{ij}$$\n$$\\forall \\quad i, j: \\quad i < m, j < n$$", "print(\"{} * {} -> {}\".format(A, B, A * B))", "", "print(\"{}.{} -> {}\".format(A, B, A.dot(B)))\n\nprint(\"{}.{} -> {}\".format(A, B, np.dot(A, B)))\n\nprint(np.ones(10) * 12)\n\nM = np.linspace(-1, 1, 16).reshape(4, 4)\nprint(M)\n\nprint(\"sum(A) -> {}\".format(M.sum()))\n\nprint(\"max(A) -> {} | min(A) -> {}\" .format(M.max(), M.min()))\n\nN = np.arange(16).reshape(4, 4)\n\nprint(N)\n\nprint(N.sum(axis=0)) # sum by column\n\nprint(N.sum(axis=1)) #sum by row\n\nprint(N.min(axis=1))\n\nprint(N.cumsum(axis=0))\n\nprint(N)\n\nfor column in range(N.shape[1]):\n print(N[:,column])\n\nprint(N.T)\n\nprint(N)\n\nprint(N.transpose())\n\nprint(N)\n\nI = np.eye(2)\nprint(I)\n\nI2 = I * 2\nI2_inv = np.linalg.inv(I2)\nprint(I2_inv)\n\nprint(np.dot(I2, I2_inv))\n\ndir(np.linalg)\n\nprint(np.trace(I2))\n\nProd = np.dot(I2, I2)\nprint(Prod)\n\nprint(np.linalg.eig(Prod))", "$$Ax = y$$", "A = np.linspace(1, 4, 4).reshape(2, 2)\nprint(A)\n\ny = np.array([5., 7.])\n\nx = np.linalg.solve(A, y)\nprint(x)\n\nprint(np.dot(A, x.T))\n\nx = np.arange(0, 10, 2)\ny = np.arange(5)\n\nprint(np.vstack([x, y]))\n\nprint(np.hstack([x, y]))\n\nprint(np.hsplit(x, [2]))\n\nprint(np.hsplit(x, [2, 4]))\n\nprint(np.vsplit(np.eye(3), range(1, 3)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NicWayand/xray
examples/xray_seasonal_means.ipynb
apache-2.0
[ "Calculating Seasonal Averages from Timeseries of Monthly Means\nAuthor: Joe Hamman\nThe data used for this example can be found in the xray-data repository. You may need to change the path to RASM_example_data.nc below.\nSuppose we have a netCDF or xray Dataset of monthly mean data and we want to calculate the seasonal average. To do this properly, we need to calculate the weighted average considering that each month has a different number of days.", "%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport xray\nfrom netCDF4 import num2date\nimport matplotlib.pyplot as plt \n\nprint(\"numpy version : \", np.__version__)\nprint(\"pandas version : \", pd.version.version)\nprint(\"xray version : \", xray.version.version)", "Some calendar information so we can support any netCDF calendar.", "dpm = {'noleap': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],\n '365_day': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],\n 'standard': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],\n 'gregorian': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],\n 'proleptic_gregorian': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],\n 'all_leap': [0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],\n '366_day': [0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],\n '360_day': [0, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30]} ", "A few calendar functions to determine the number of days in each month\nIf you were just using the standard calendar, it would be easy to use the calendar.month_range function.", "def leap_year(year, calendar='standard'):\n \"\"\"Determine if year is a leap year\"\"\"\n leap = False\n if ((calendar in ['standard', 'gregorian',\n 'proleptic_gregorian', 'julian']) and\n (year % 4 == 0)):\n leap = True\n if ((calendar == 'proleptic_gregorian') and\n (year % 100 == 0) and\n (year % 400 != 0)):\n leap = False\n elif ((calendar in ['standard', 'gregorian']) and\n (year % 100 == 0) and (year % 400 != 0) and\n (year < 1583)):\n leap = False\n return leap\n\ndef get_dpm(time, calendar='standard'):\n \"\"\"\n return a array of days per month corresponding to the months provided in `months`\n \"\"\"\n month_length = np.zeros(len(time), dtype=np.int)\n \n cal_days = dpm[calendar]\n \n for i, (month, year) in enumerate(zip(time.month, time.year)):\n month_length[i] = cal_days[month]\n if leap_year(year, calendar=calendar):\n month_length[i] += 1\n return month_length", "Open the Dataset", "monthly_mean_file = 'RASM_example_data.nc'\nds = xray.open_dataset(monthly_mean_file, decode_coords=False)\nprint(ds)", "Now for the heavy lifting:\nWe first have to come up with the weights,\n- calculate the month lengths for each monthly data record\n- calculate weights using groupby('time.season')\nFinally, we just need to multiply our weights by the Dataset and sum allong the time dimension.", "# Make a DataArray with the number of days in each month, size = len(time)\nmonth_length = xray.DataArray(get_dpm(ds.time.to_index(), calendar='noleap'),\n coords=[ds.time], name='month_length')\n\n# Calculate the weights by grouping by 'time.season'.\n# Conversion to float type ('astype(float)') only necessary for Python 2.x\nweights = month_length.groupby('time.season') / month_length.astype(float).groupby('time.season').sum()\n\n# Test that the sum of the weights for each season is 1.0\nnp.testing.assert_allclose(weights.groupby('time.season').sum().values, np.ones(4))\n\n# Calculate the weighted average\nds_weighted = (ds * 
weights).groupby('time.season').sum(dim='time')\n\nprint(ds_weighted)\n\n# only used for comparisons\nds_unweighted = ds.groupby('time.season').mean('time')\nds_diff = ds_weighted - ds_unweighted\n\n# Quick plot to show the results\nis_null = np.isnan(ds_unweighted['Tair'][0].values)\n\nfig, axes = plt.subplots(nrows=4, ncols=3, figsize=(14,12))\nfor i, season in enumerate(('DJF', 'MAM', 'JJA', 'SON')):\n plt.sca(axes[i, 0])\n plt.pcolormesh(np.ma.masked_where(is_null, ds_weighted['Tair'].sel(season=season).values),\n vmin=-30, vmax=30, cmap='Spectral_r')\n plt.colorbar(extend='both')\n \n plt.sca(axes[i, 1])\n plt.pcolormesh(np.ma.masked_where(is_null, ds_unweighted['Tair'].sel(season=season).values),\n vmin=-30, vmax=30, cmap='Spectral_r')\n plt.colorbar(extend='both')\n\n plt.sca(axes[i, 2])\n plt.pcolormesh(np.ma.masked_where(is_null, ds_diff['Tair'].sel(season=season).values),\n vmin=-0.1, vmax=.1, cmap='RdBu_r')\n plt.colorbar(extend='both')\n for j in range(3):\n axes[i, j].axes.get_xaxis().set_ticklabels([])\n axes[i, j].axes.get_yaxis().set_ticklabels([])\n axes[i, j].axes.axis('tight')\n \n axes[i, 0].set_ylabel(season)\n \naxes[0, 0].set_title('Weighted by DPM')\naxes[0, 1].set_title('Equal Weighting')\naxes[0, 2].set_title('Difference')\n \nplt.tight_layout()\n\nfig.suptitle('Seasonal Surface Air Temperature', fontsize=16, y=1.02)\n\n# Wrap it into a simple function\ndef season_mean(ds, calendar='standard'):\n # Make a DataArray of season/year groups\n year_season = xray.DataArray(ds.time.to_index().to_period(freq='Q-NOV').to_timestamp(how='E'),\n coords=[ds.time], name='year_season')\n\n # Make a DataArray with the number of days in each month, size = len(time)\n month_length = xray.DataArray(get_dpm(ds.time.to_index(), calendar=calendar),\n coords=[ds.time], name='month_length')\n # Calculate the weights by grouping by 'time.season'\n weights = month_length.groupby('time.season') / month_length.groupby('time.season').sum()\n\n # Test that the sum of the weights for each season is 1.0\n np.testing.assert_allclose(weights.groupby('time.season').sum().values, np.ones(4))\n\n # Calculate the weighted average\n return (ds * weights).groupby('time.season').sum(dim='time')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pligor/predicting-future-product-prices
02_preprocessing/exploration04-price_history_dfa.ipynb
agpl-3.0
[ "#%load_ext autoreload\n%reload_ext autoreload\n%autoreload 2", "https://cschoel.github.io/nolds/nolds.html#detrended-fluctuation-analysis", "# -*- coding: UTF-8 -*-\nfrom __future__ import division\nimport numpy as np\nimport pandas as pd\nimport sys\nimport math\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder\nimport re\nimport os\nimport csv\nfrom helpers.outliers import MyOutliers\nfrom skroutz_mobile import SkroutzMobile\nfrom sklearn.ensemble import IsolationForest\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.tree import DecisionTreeClassifier, export_graphviz\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import accuracy_score, confusion_matrix, r2_score\nfrom skroutz_mobile import SkroutzMobile\nfrom sklearn.model_selection import StratifiedShuffleSplit\nfrom helpers.my_train_test_split import MySplitTrainTest\nfrom sklearn.preprocessing import StandardScaler\nfrom preprocess_price_history import PreprocessPriceHistory\nfrom price_history import PriceHistory\nfrom dfa import dfa\nimport scipy.signal as ss\nimport nolds\n%matplotlib inline\n\nrandom_state = np.random.RandomState(seed=16011984)\n\ncsv_in = \"../price_history_02_with_seq_start.csv\"\n\norig_df = pd.read_csv(csv_in, index_col=0, encoding='utf-8', quoting=csv.QUOTE_ALL)\norig_df.shape\n\ndf = orig_df.drop(labels=PriceHistory.SPECIAL_COLS, axis=1)\ndf.shape\n\nCSV_FILEPATH = \"../price_history_02_with_seq_start.csv\"\n\n#xx = df.iloc[0, ]\nph = PriceHistory(CSV_FILEPATH)\n\ntt = ph.extractSequenceByLocation(iloc=0)\ntt.shape\n\ntt[-1]\n\nalpha = nolds.dfa(tt)\nalpha\n\nseqs = [ph.extractSequenceByLocation(iloc=ii) for ii in xrange(len(ph.df))]\nlen(seqs)\n\nlen(seqs[0])\n\nalphas = []\nfor seq in seqs:\n try:\n alpha = nolds.dfa(seq.values)\n if not np.isnan(alpha):\n alphas.append(alpha)\n except AssertionError, ee:\n pass\n \n#alphas = [seq for seq in seqs if len(seq) > 1 and not np.all(seq[0] == seq)]\nlen(alphas)\n\nplt.figure(figsize=(17,8))\nsns.distplot(alphas, rug=True,\n axlabel='Alpha of Detrended Flunctuation Analysis')\nplt.show()", "Conclusion\nthe estimate alpha for the Hurst parameter (alpha < 1: stationary process similar to fractional Gaussian noise with H = alpha, alpha > 1: non-stationary process similar to fractional Brownian motion with H = alpha - 1)\nSo most price histories are identified as we would expect, as non-stationary processes", "# References", "https://cschoel.github.io/nolds/nolds.html#detrended-fluctuation-analysis\nhttps://scholar.google.co.uk/scholar?q=Detrended+fluctuation+analysis%3A+A+scale-free+view+on+neuronal+oscillations&btnG=&hl=en&as_sdt=0%2C5\nMLA format:\nHardstone, Richard, et al. 
\"Detrended fluctuation analysis: a scale-free view on neuronal oscillations.\" Frontiers in physiology 3 (2012).\nPrice histories Detrended", "seq = seqs[0].values\n\nplt.plot(seq)\n\ndetrendeds = [ss.detrend(seq) for seq in seqs]\nlen(detrendeds)\n\nplt.plot(detrendeds[0])\n\ndetrendeds[0]\n\nalldetr = []\nfor detrended in detrendeds:\n alldetr += list(detrended)\nlen(alldetr)\n\nfig = plt.figure( figsize=(14, 6) )\nsns.distplot(alldetr, axlabel=\"Price Deviation from zero after detrend\")\nplt.show()\n\nstdsca = StandardScaler(with_std=False)\n\nseqs_zero_mean = [stdsca.fit_transform(seq.values.reshape(1, -1).T) for seq in seqs]\nlen(seqs_zero_mean), seqs_zero_mean[0].shape, seqs_zero_mean[3].shape\n\nallzeromean = np.empty(shape=(0, 1))\nfor seq in seqs_zero_mean:\n allzeromean = np.vstack( (allzeromean, seq) )\nallzeromean.shape\n\nfig = plt.figure( figsize=(14, 6) )\nsns.distplot(allzeromean.flatten(),\n axlabel=\"Price Deviation from zero before detrend\")\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
wmfschneider/CHE30324
Resources/Python_Tutorial.ipynb
gpl-3.0
[ "Import Modules", "import numpy as np #handles arrays of values similar to MATLAB\nfrom scipy import linalg #contains certain operators you may need for class\nimport matplotlib.pyplot as plt #contains everything you need to create plots\nimport sympy as sy\nfrom sympy.functions import ln\nimport math", "Performing basic math functions", "x = 2\nprint('2*x =',2*x) #multiplication\nprint('x^3 =',x**3) #exponents\nprint('e^x =',np.exp(x)) #e^x\nprint('e^x =',np.e**x) #e^x alternate form\nprint('Pi =',np.pi) #Pi", "Integration\nThis will integrate a function that you provide. There are a number of other methods for numerical integration that can be found online.\nFor our examples we will use:\n$I = \\int_{0}^{1} ax^2 + b dx$", "from scipy.integrate import quad\n\n# First define a function that you want to integrate\ndef integrand(x,a,b):\n return a*x**2 + b\n\n# Set your constants \na = 2\nb = 1\nI = quad(integrand, 0, 1, args=(a,b))\nprint(I)\n# I has two values, the first value is the estimate of the integral, the second value is the upper bound on the error.\n# Notice that the upper bound on the error is extremely small, so this is a good estimate. ", "Arrays", "y = np.array([1,2,3,4,5]) #create an array of values\nprint('y =\\t',y) #'\\t' creates an indent to nicely align answers \nprint('y[0] =\\t',y[0]) #Python starts counting at 0\nprint('y[2] =\\t',y[2]) #y[2] gives the third element in y (I don't agree with this syntax but what can ya do?)\nprint('y*x =\\t',y*x)\n\nx = np.array([6,7,8,9])\nz = np.concatenate((y,x),axis=None) #concatenate two arrays\nprint('[y,x] =\\t',z)\nprint('sum of y elements =\\t',np.sum(y)) #sum the elements of an array\nprint('sum of z elements =\\t',np.sum(z))", "for loops", "#This first loop iterates over the elements in an array\narray = np.array([0,1,2,3,4])\nprint('First Loop')\nfor x in array:\n print(x*2)\n\n#This second loop iterates for x in the range of [0,4], again we have to say '5' because of the way Python counts\nprint('Second Loop')\nfor x in range(5):\n print(x*2)", "Summation with for loops\n$\\sum_{n=1}^{4} 2^{-n}$", "answer = 0 #Each iteration will be added to this, so we start it at zero\nstorage = [] #This will be used to store values after each iteration\nfor n in range(1,5):\n storage.append(2**(-n)) #The append command adds elements to an array\n answer+=2**(-n) #+= is the same as saying answer = answer + ...\nprint('answer =\\t',answer)\nprint('stored values=\\t',storage)", "while loops", "#This while loop accomplishes the same thing as the two for loops above\nx=0\nwhile x<5:\n print(x*2)\n x+=1", "if statements", "#Order of your if statements matters. \narray = np.array([2,4,6,7,11])\nfor x in array:\n if x<5:\n print('Not a winner')\n elif x<10:\n print(2*x)\n else:\n break", "Linear Algebra", "#Create a matrix\na = np.array([[1,2,3],[4,5,6],[7,8,9]])\nprint('a =\\n',a)\n\n#get eigenvalues and eigenvectors of a\nw,v = linalg.eig(a) \nprint('eigenvalues =\\t',w)\nprint('eigenvectors =\\n',v)\n\n#Matrix multiplication\nb = np.array([1,0,0])\nprint('a*b =\\t',a@b.T) #'@' does matrix multiplication, '.T' transposes a matrix or vector", "Creating a function", "#'def' starts the function. Variables inside the parentheses are inputs to your function. \n#Return is what your function will output.\n#In this example I have created a function that provides the first input raised to the power of the second input. \n\ndef x2y(x,y):\n return x**y\n\nx2y(4,2)", "Symbolic Math", "#This lets us create functions with variables\n#First define a variable\nx = sy.Symbol('x')\n\n#Next create a function\nfunction = x**4\n\nder = function.diff(x,1)\nprint('first derivative =\\t',der)\n\nder2 = function.diff(x,2)\nprint('second derivative =\\t',der2)\n\n#You can substitute back in for symbols now\nprint('1st derivative at x=2 =\\t',der.subs(x,2))\nprint('2nd derivative at x=2 =\\t',der2.subs(x,2))\n\nfunction2 = ln(x)\n\nlnder = function2.diff(x,1)\nprint('derivative of ln(x) =\\t',lnder)", "Plotting", "#Standard plot\n\nx = np.linspace(0,10)\ny = np.sin(x)\nz = np.cos(x)\n\nplt.plot(x,y,x,z)\nplt.xlabel('Radians');\nplt.ylabel('Value');\nplt.title('Standard Plot')\nplt.legend(['Sin','Cos'])\nplt.show()\n\n#Scatter Plot\n\nx = np.linspace(0,10,11)\ny = np.sin(x)\nz = np.cos(x)\n\nplt.scatter(x,y)\nplt.scatter(x,z)\nplt.xlabel('Radians');\nplt.ylabel('Value');\nplt.title('Scatter Plot')\nplt.legend(['Sin','Cos'])\nplt.show()\n\n", "Math package has useful tools as well", "print('5! =\\t',math.factorial(5))\nprint('|-3| =\\t',math.fabs(-3))", "Something to help with your homework\nAssume you have a chemical reaction defined by:\nA + 2B -> C\nFor every mole of A consumed, 2 moles of B are consumed, and 1 mole of C is produced. \nIf we have the following molar flow rates:\nFA0 = 1.5 moles/s = Initial flow of A\nFB0 = 2.5 moles/s = Initial flow of B\nFA = Flow rate of A as a function of reaction advancement\nFB = Flow rate of B as a function of reaction advancement\nFC = Flow rate of C as a function of reaction advancement\nPlot the molar flow rate of each species as a function of reaction advancement.", "# Set up a vector to store values of advancement\nadv = np.arange(0,20,.01)\n\n# Initial Flow Rates\nfa0 = 1.5 #moles/s\nfb0 = 2.5\nfc0 = 0\n\n# Calculate flow rate as a function of advancement\nfa = fa0-1*adv\nfb = fb0-2*adv\nfc = fc0+adv\n\n# Find the maximum value of advancement, value at which one of the reactants hits 0 moles/s\naind = np.where(fa<0)[0][0]\nbind = np.where(fb<0)[0][0]\nmax_adv = min([aind,bind])\n\n# Cut all of the vectors to the maximum value of advancement \nadv=adv[:max_adv]\nfa=fa[:max_adv]\nfb=fb[:max_adv]\nfc=fc[:max_adv]\n\n# Plot everything real nice\nplt.plot(adv,fa,label='A')\nplt.plot(adv,fb,label='B')\nplt.plot(adv,fc,label='C')\nplt.grid()\nplt.xlabel('Reaction Advancement',weight='bold')\nplt.ylabel('Molar Flow Rate',weight='bold')\nplt.legend()\nplt.show()", "Putting it to use: Homework For Fun!\nUse 'for' loops to create Taylor expansions of $ln(1+x)$ centered at 0, with an order of 1, 2, 3, and 4. Plot these Taylor expansions along with the original equation on one plot. Label your plots. \nAs a reminder, the formula for a Taylor expansion is: \n$f(a) + \\sum_{n=1}^{\\infty}\\frac{f^n(a)}{n!}(x-a)^n$\nSince our expansion is centered at zero, $a=0$.", "#Insert code here.", "Additional Resources\n\nCode Academy\nOfficial Python Reference\nLearn Python the Hard Way\nContact me at : jcrum@nd.edu" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.20/_downloads/5514ea6c90dde531f8026904a417527e/plot_10_evoked_overview.ipynb
bsd-3-clause
[ "%matplotlib inline", "The Evoked data structure: evoked/averaged data\nThis tutorial covers the basics of creating and working with :term:evoked\ndata. It introduces the :class:~mne.Evoked data structure in detail,\nincluding how to load, query, subselect, export, and plot data from an\n:class:~mne.Evoked object. For info on creating an :class:~mne.Evoked\nobject from (possibly simulated) data in a :class:NumPy array\n&lt;numpy.ndarray&gt;, see tut_creating_data_structures.\n :depth: 2\nAs usual we'll start by importing the modules we need:", "import os\nimport mne", "Creating Evoked objects from Epochs\n:class:~mne.Evoked objects typically store an EEG or MEG signal that has\nbeen averaged over multiple :term:epochs, which is a common technique for\nestimating stimulus-evoked activity. The data in an :class:~mne.Evoked\nobject are stored in an :class:array &lt;numpy.ndarray&gt; of shape\n(n_channels, n_times) (in contrast to an :class:~mne.Epochs object,\nwhich stores data of shape (n_epochs, n_channels, n_times)). Thus to\ncreate an :class:~mne.Evoked object, we'll start by epoching some raw data,\nand then averaging together all the epochs from one condition:", "sample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)\nevents = mne.find_events(raw, stim_channel='STI 014')\n# we'll skip the \"face\" and \"buttonpress\" conditions, to save memory:\nevent_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,\n 'visual/right': 4}\nepochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,\n preload=True)\nevoked = epochs['auditory/left'].average()\n\ndel raw # reduce memory usage", "Basic visualization of Evoked objects\nWe can visualize the average evoked response for left-auditory stimuli using\nthe :meth:~mne.Evoked.plot method, which yields a butterfly plot of each\nchannel type:", "evoked.plot()", "Like the plot() methods for :meth:Raw &lt;mne.io.Raw.plot&gt; and\n:meth:Epochs &lt;mne.Epochs.plot&gt; objects,\n:meth:evoked.plot() &lt;mne.Evoked.plot&gt; has many parameters for customizing\nthe plot output, such as color-coding channel traces by scalp location, or\nplotting the :term:global field power &lt;GFP&gt; alongside the channel traces.\nSee tut-visualize-evoked for more information about visualizing\n:class:~mne.Evoked objects.\nSubselecting Evoked data\n.. sidebar:: Evokeds are not memory-mapped\n:class:~mne.Evoked objects use a :attr:~mne.Evoked.data attribute\n rather than a :meth:~mne.Epochs.get_data method; this reflects the fact\n that the data in :class:~mne.Evoked objects are always loaded into\n memory, never memory-mapped_ from their location on disk (because they\n are typically much smaller than :class:~mne.io.Raw or\n :class:~mne.Epochs objects).\nUnlike :class:~mne.io.Raw and :class:~mne.Epochs objects,\n:class:~mne.Evoked objects do not support selection by square-bracket\nindexing. 
Instead, data can be subselected by indexing the\n:attr:~mne.Evoked.data attribute:", "print(evoked.data[:2, :3]) # first 2 channels, first 3 timepoints", "To select based on time in seconds, the :meth:~mne.Evoked.time_as_index\nmethod can be useful, although beware that depending on the sampling\nfrequency, the number of samples in a span of given duration may not always\nbe the same (see the time-as-index section of the\ntutorial about Raw data &lt;tut-raw-class&gt; for details).\nSelecting, dropping, and reordering channels\nBy default, when creating :class:~mne.Evoked data from an\n:class:~mne.Epochs object, only the \"data\" channels will be retained:\neog, ecg, stim, and misc channel types will be dropped. You\ncan control which channel types are retained via the picks parameter of\n:meth:epochs.average() &lt;mne.Epochs.average&gt;, by passing 'all' to\nretain all channels, or by passing a list of integers, channel names, or\nchannel types. See the documentation of :meth:~mne.Epochs.average for\ndetails.\nIf you've already created the :class:~mne.Evoked object, you can use the\n:meth:~mne.Evoked.pick, :meth:~mne.Evoked.pick_channels,\n:meth:~mne.Evoked.pick_types, and :meth:~mne.Evoked.drop_channels methods\nto modify which channels are included in an :class:~mne.Evoked object.\nYou can also use :meth:~mne.Evoked.reorder_channels for this purpose; any\nchannel names not provided to :meth:~mne.Evoked.reorder_channels will be\ndropped. Note that channel selection methods modify the object in-place, so\nin interactive/exploratory sessions you may want to create a\n:meth:~mne.Evoked.copy first.", "evoked_eeg = evoked.copy().pick_types(meg=False, eeg=True)\nprint(evoked_eeg.ch_names)\n\nnew_order = ['EEG 002', 'MEG 2521', 'EEG 003']\nevoked_subset = evoked.copy().reorder_channels(new_order)\nprint(evoked_subset.ch_names)", "Similarities among the core data structures\n:class:~mne.Evoked objects have many similarities with :class:~mne.io.Raw\nand :class:~mne.Epochs objects, including:\n\n\nThey can be loaded from and saved to disk in .fif format, and their\n data can be exported to a :class:NumPy array &lt;numpy.ndarray&gt; (but through\n the :attr:~mne.Evoked.data attribute, not through a get_data()\n method). :class:Pandas DataFrame &lt;pandas.DataFrame&gt; export is also\n available through the :meth:~mne.Evoked.to_data_frame method.\n\n\nYou can change the name or type of a channel using\n :meth:evoked.rename_channels() &lt;mne.Evoked.rename_channels&gt; or\n :meth:evoked.set_channel_types() &lt;mne.Evoked.set_channel_types&gt;.\n Both methods take :class:dictionaries &lt;dict&gt; where the keys are existing\n channel names, and the values are the new name (or type) for that channel.\n Existing channels that are not in the dictionary will be unchanged.\n\n\n:term:SSP projector &lt;projector&gt; manipulation is possible through\n :meth:~mne.Evoked.add_proj, :meth:~mne.Evoked.del_proj, and\n :meth:~mne.Evoked.plot_projs_topomap methods, and the\n :attr:~mne.Evoked.proj attribute. 
See tut-artifact-ssp for more\n information on SSP.\n\n\nLike :class:~mne.io.Raw and :class:~mne.Epochs objects,\n :class:~mne.Evoked objects have :meth:~mne.Evoked.copy,\n :meth:~mne.Evoked.crop, :meth:~mne.Evoked.time_as_index,\n :meth:~mne.Evoked.filter, and :meth:~mne.Evoked.resample methods.\n\n\nLike :class:~mne.io.Raw and :class:~mne.Epochs objects,\n :class:~mne.Evoked objects have evoked.times,\n :attr:evoked.ch_names &lt;mne.Evoked.ch_names&gt;, and :class:info &lt;mne.Info&gt;\n attributes.\n\n\nLoading and saving Evoked data\nSingle :class:~mne.Evoked objects can be saved to disk with the\n:meth:evoked.save() &lt;mne.Evoked.save&gt; method. One difference between\n:class:~mne.Evoked objects and the other data structures is that multiple\n:class:~mne.Evoked objects can be saved into a single .fif file, using\n:func:mne.write_evokeds. The example data &lt;sample-dataset&gt;\nincludes just such a .fif file: the data have already been epoched and\naveraged, and the file contains separate :class:~mne.Evoked objects for\neach experimental condition:", "sample_data_evk_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis-ave.fif')\nevokeds_list = mne.read_evokeds(sample_data_evk_file, verbose=False)\nprint(evokeds_list)\nprint(type(evokeds_list))", "Notice that :func:mne.read_evokeds returned a :class:list of\n:class:~mne.Evoked objects, and each one has an evoked.comment\nattribute describing the experimental condition that was averaged to\ngenerate the estimate:", "for evok in evokeds_list:\n print(evok.comment)", "If you want to load only some of the conditions present in a .fif file,\n:func:~mne.read_evokeds has a condition parameter, which takes either a\nstring (matched against the comment attribute of the evoked objects on disk),\nor an integer selecting the :class:~mne.Evoked object based on the order\nit's stored in the file. Passing lists of integers or strings is also\npossible. If only one object is selected, the :class:~mne.Evoked object\nwill be returned directly (rather than a length-one list containing it):", "right_vis = mne.read_evokeds(sample_data_evk_file, condition='Right visual')\nprint(right_vis)\nprint(type(right_vis))", "Above, when we created an :class:~mne.Evoked object by averaging epochs,\nbaseline correction was applied by default when we extracted epochs from the\nclass:~mne.io.Raw object (the default baseline period is (None, 0),\nwhich assured zero mean for times before the stimulus event). In contrast, if\nwe plot the first :class:~mne.Evoked object in the list that was loaded\nfrom disk, we'll see that the data have not been baseline-corrected:", "evokeds_list[0].plot(picks='eeg')", "This can be remedied by either passing a baseline parameter to\n:func:mne.read_evokeds, or by applying baseline correction after loading,\nas shown here:", "evokeds_list[0].apply_baseline((None, 0))\nevokeds_list[0].plot(picks='eeg')", "Notice that :meth:~mne.Evoked.apply_baseline operated in-place. 
Similarly,\n:class:~mne.Evoked objects may have been saved to disk with or without\n:term:projectors &lt;projector&gt; applied; you can pass proj=True to the\n:func:~mne.read_evokeds function, or use the :meth:~mne.Evoked.apply_proj\nmethod after loading.\nCombining Evoked objects\nOne way to pool data across multiple conditions when estimating evoked\nresponses is to do so prior to averaging (recall that MNE-Python can select\nbased on partial matching of /-separated epoch labels; see\ntut-section-subselect-epochs for more info):", "left_right_aud = epochs['auditory'].average()\nprint(left_right_aud)", "This approach will weight each epoch equally and create a single\n:class:~mne.Evoked object. Notice that the printed representation includes\n(average, N=145), indicating that the :class:~mne.Evoked object was\ncreated by averaging across 145 epochs. In this case, the event types were\nfairly close in number:", "left_aud = epochs['auditory/left'].average()\nright_aud = epochs['auditory/right'].average()\nprint([evok.nave for evok in (left_aud, right_aud)])", "However, this may not always be the case; if for statistical reasons it is\nimportant to average the same number of epochs from different conditions,\nyou can use :meth:~mne.Epochs.equalize_event_counts prior to averaging.\nAnother approach to pooling across conditions is to create separate\n:class:~mne.Evoked objects for each condition, and combine them afterward.\nThis can be accomplished by the function :func:mne.combine_evoked, which\ncomputes a weighted sum of the :class:~mne.Evoked objects given to it. The\nweights can be manually specified as a list or array of float values, or can\nbe specified using the keyword 'equal' (weight each :class:~mne.Evoked\nobject by $\\frac{1}{N}$, where $N$ is the number of\n:class:~mne.Evoked objects given) or the keyword 'nave' (weight each\n:class:~mne.Evoked object by the number of epochs that were averaged\ntogether to create it):", "left_right_aud = mne.combine_evoked([left_aud, right_aud], weights='nave')\nassert left_right_aud.nave == left_aud.nave + right_aud.nave", "Keeping track of nave is important for inverse imaging, because it is\nused to scale the noise covariance estimate (which in turn affects the\nmagnitude of estimated source activity). See minimum_norm_estimates\nfor more information (especially the whitening_and_scaling section).\nFor this reason, combining :class:~mne.Evoked objects with either\nweights='equal' or by providing custom numeric weights should usually\nnot be done if you intend to perform inverse imaging on the resulting\n:class:~mne.Evoked object.\nOther uses of Evoked objects\nAlthough the most common use of :class:~mne.Evoked objects is to store\naverages of epoched data, there are a couple other uses worth noting here.\nFirst, the method :meth:epochs.standard_error() &lt;mne.Epochs.standard_error&gt;\nwill create an :class:~mne.Evoked object (just like\n:meth:epochs.average() &lt;mne.Epochs.average&gt; does), but the data in the\n:class:~mne.Evoked object will be the standard error across epochs instead\nof the average. To indicate this difference, :class:~mne.Evoked objects\nhave a :attr:~mne.Evoked.kind attribute that takes values 'average' or\n'standard error' as appropriate.\nAnother use of :class:~mne.Evoked objects is to represent a single trial\nor epoch of data, usually when looping through epochs. 
This can be easily\naccomplished with the :meth:epochs.iter_evoked() &lt;mne.Epochs.iter_evoked&gt;\nmethod, and can be useful for applications where you want to do something\nthat is only possible for :class:~mne.Evoked objects. For example, here\nwe use the :meth:~mne.Evoked.get_peak method (which isn't available for\n:class:~mne.Epochs objects) to get the peak response in each trial:", "for ix, trial in enumerate(epochs[:3].iter_evoked()):\n channel, latency, value = trial.get_peak(ch_type='eeg',\n return_amplitude=True)\n latency = int(round(latency * 1e3)) # convert to milliseconds\n value = int(round(value * 1e6)) # convert to µV\n print('Trial {}: peak of {} µV at {} ms in channel {}'\n .format(ix, value, latency, channel))", ".. REFERENCES" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CUBoulder-ASTR2600/lectures
lecture_14_ndarraysI.ipynb
isc
[ "Multi-Dimensional Arrays\nStandard matrix notation is $A_{i,j}$, where i and j are\nthe row and column numbers, respectively.\nRead $A_{i,j}$ as \"A-sub-i-sub-j\" or \"A-sub-i-j\".\nCommas are often not used in the subscripts or\nhave different meanings.\nIn standard mathematics, the indexing starts with 1.\nIn Python, the indexing starts with 0.\nQ. What is the rank of $A_{i,j,k,l}$?\nThe shape of an array is a $d$-vector (or 1-D array) that holds the number of elements in each dimension. $d$ represents the dimensionality of the array.\nE.g., the shape of a $A_{i,j,k,l}$ is ($n_i$, $n_j$, $n_k$, $n_l$), where n denotes the number of elements in dimensions $i$, $j$, $k$, and $l$.\nTwo-Dimensional Numerical Python Arrays\nA 2-D array is a matrix, and is analogous to an array of arrays, though each element of an array must have the same data type.\n\nExample: $$wave = \\frac{c}{freq}$$\n\nwith wavelength in meters, \nc = 3.00e8 m/s, and\nfrequency in Hz.\nWe will convert wavelengths of 1 mm to 3 mm\nto frequencies, bracketing the peak in the cosmic \nmicrowave background radiation.", "import numpy as np\n\nfrom scipy.constants import c, G, h\n\n# Create a wavelength array (in mm):\nwaves = np.linspace(1.0, 3.0, 21)", "Q. What will the maximum (last element) of wave be? How to check?", "print(waves.max())\nwaves\n\n# Now, convert to frequency \n# (note conversion from mm to m):\n\nfreqs = c / (waves / 1e3)\nfreqs\n\n# Make a table & print (zip pairs up wave and freq \n# into a list of tuples):\n\ntable = [[wave, freq] for wave, freq in zip(waves, freqs)]\n\nfor row in table:\n print(row)\n\nprint(np.array(table))\n\n# Just for review:\n\nprint(list(zip(waves, freqs)))\n\ntable = np.array([waves, freqs])\ntable", "Q. How could we regroup elements to match the previous incarnation? (row major)", "table.transpose()\n\n# let's just work with the transpose\n\ntable = table.T ", "Q. What should this yield?", "table.shape", "Q. What should this be?", "table[20][0]\n\ntable[20,0]", "Not possible for lists! :", "l = list(table)\nprint(l[20][0])\nl[20,0]\n\ntable.shape\n\nfor index1 in range(table.shape[0]):\n\n # Q. What is table.shape[0]?\n \n for index2 in range(table.shape[1]):\n print('table[{}, {}] = {:g}'.format(index1, index2, \n table[index1, index2]))\n \n # Q. What will this loop print?\n ", "When you just loop over the elements of an array, you get rows:", "table.shape[0]\n\nfor row in table: # don't be fooled, it's not my naming of the looper that does that!\n print(row)\n\nfor idontknowwhat in table: \n print(idontknowwhat)", "This could also be done with one loop using numpy's ndenumerate.\nndenumerate will enumerate the rows and columns of the array:", "for index_tuple, value in np.ndenumerate(table):\n print('index {} has value {:.2e}'.format(index_tuple, value))", "Q. Reminder: what is the shape of table?", "print(table.shape)\nprint(type(table.shape))", "Q. So what is table.shape[0]?", "table.shape[0]", "Q. And table.shape[1]?", "table.shape[1]", "Arrays can be sliced analogously to lists.\nBut we already saw, there's more indexing posssibilities on top with numpy.", "table[0]", "Q: How to get the first column instead?", "table[:, 0]\n\n# Note that this is different.\n\n# Q. 
What is this?\n\ntable[:][0]\n\n# This will print the second column:\n\ntable[:, 1]\n\n# To get the first five rows of the table:\n \nprint(table[:5, :])\n\nprint()\n\n# Same as:\nprint(table[:5])", "Numpy also has a multi-dimensional lazy indexing trick under its sleeve:", "ndarray = np.zeros(2,3,4) # will fail. Why? Hint: Look at error message\n\nndarray = np.zeros((2,3,4))\n\nndarray = np.arange(2*3*4).reshape((2,3,4)) # will fail. Why?\n\nndarray\n\nndarray[:, :, 0]\n\nndarray[..., 0]", "Array Computing\nFor an array $A$ of any rank, $f(A)$ means applying the function\n$f$ to each element of $A$.\nMatrix Objects", "xArray1 = np.array([1, 2, 3], float)\nxArray1\n\nxArray1.T\n\nxMatrix = np.matrix(xArray1)\nprint(type(xMatrix))\nxMatrix\n\nxMatrix.shape\n\nxMatrix2 = xMatrix.transpose()\nxMatrix2\n\n# Or\nxMatrix.T", "Q. What is the identity matrix?", "iMatrix = np.eye(3) # or np.identity\niMatrix\n\n# And\niMatrix2 = np.mat(iMatrix) # 'mat' short for 'matrix'\niMatrix2\n\n# Array multiplication.\n# Reminder of xMatrix?\nxMatrix\n\n# Multiplication of any matrix by the identity matrix\n# yields that matrix:\nxMatrix * iMatrix\n\n# Reminder of xMatrix2:\nxMatrix2\n\nxMatrix2 = iMatrix * xMatrix2\nxMatrix2\n\nxMatrix * xMatrix2\n\nnp.dot(xMatrix, xMatrix2)\n\nxMatrix\n\nxMatrix2\n\nxArray = np.array(xMatrix)\nxArray2 = np.array(xMatrix2)\nxArray * xArray2\n\nxMatrix.shape, xMatrix2.shape\n\nxArray.shape\n\nnp.array(xMatrix) * np.array(xMatrix2).T" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
joshnsolomon/phys202-2015-work
assignments/assignment05/InteractEx02.ipynb
mit
[ "Interact Exercise 2\nImports", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.display import display", "Plotting with parameters\nWrite a plot_sine1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\\pi]$.\n\nCustomize your visualization to make it effective and beautiful.\nCustomize the box, grid, spines and ticks to match the requirements of this data.\nUse enough points along the x-axis to get a smooth plot.\nFor the x-axis tick locations use integer multiples of $\\pi$.\nFor the x-axis tick labels use multiples of pi using LaTeX: $3\\pi$.", "# YOUR CODE HERE\ndef plot_sine1(a, b):\n t = np.linspace(0,4*np.pi,400)\n plt.plot(t,np.sin(a*t + b))\n plt.xlim(0,4*np.pi)\n plt.ylim(-1.0,1.0)\n plt.xticks([0,np.pi,2*np.pi,3*np.pi,4*np.pi], ['0','π','2π','3π','4π'])\n\nplot_sine1(5, 3.4)", "Then use interact to create a user interface for exploring your function:\n\na should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$.\nb should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.", "interact(plot_sine1,a=(0.0,5.0,.1),b=(-5.0,5.0,.1));\n\nassert True # leave this for grading the plot_sine1 exercise", "In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument:\n\ndashed red: r--\nblue circles: bo\ndotted black: k.\n\nWrite a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue line.", "def plot_sine2(a,b,style):\n t = np.linspace(0,4*np.pi,400)\n plt.plot(t,np.sin(a*t + b),style)\n plt.xlim(0,4*np.pi)\n plt.ylim(-1.0,1.0)\n plt.xticks([0,np.pi,2*np.pi,3*np.pi,4*np.pi], ['0','π','2π','3π','4π'])\n\nplot_sine2(4.0, -1.0, 'r--')", "Use interact to create a UI for plot_sine2.\n\nUse a slider for a and b as above.\nUse a drop down menu for selecting the line style between a dotted blue line, black circles and red triangles.", "interact(plot_sine2, a=(0.0,5.0,.1), b=(-5.0,5.0,.1), style={'Dotted Blue': 'b:', 'Black Circles': 'ko', 'Red Triangles':'r^'});\n\nassert True # leave this for grading the plot_sine2 exercise" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ddtm/dl-course
Seminar4/bonus/Bonus-advanced-cnn.ipynb
mit
[ "Deep learning for computer vision\ngot no lasagne?\nInstall the bleeding edge version from here: http://lasagne.readthedocs.org/en/latest/user/installation.html\nMain task\nThis week, we shall focus on the image recognition problem on cifar10 dataset\n* 60k images of shape 3x32x32\n* 10 different classes: planes, dogs, cats, trucks, etc.", "import numpy as np\nfrom cifar import load_cifar10\nX_train,y_train,X_val,y_val,X_test,y_test = load_cifar10(\"cifar_data\")\n\nclass_names = np.array(['airplane','automobile ','bird ','cat ','deer ','dog ','frog ','horse ','ship ','truck'])\n\nprint X_train.shape,y_train.shape\n\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.figure(figsize=[12,10])\nfor i in range(12):\n plt.subplot(3,4,i+1)\n plt.xlabel(class_names[y_train[i]])\n plt.imshow(np.transpose(X_train[i],[1,2,0]))", "lasagne\n\nlasagne is a library for neural network building and training\nit's a low-level library with almost seamless integration with theano", "import lasagne\nimport theano\nimport theano.tensor as T\n\ninput_X = T.tensor4(\"X\")\n\n#input dimention (None means \"Arbitrary\")\ninput_shape = [None,3,32,32]\n\ntarget_y = T.vector(\"target Y integer\",dtype='int32')", "Defining network architecture", "#Input layer (auxilary)\ninput_layer = lasagne.layers.InputLayer(shape = input_shape,input_var=input_X)\n\n#fully connected layer, that takes input layer and applies 50 neurons to it.\n# nonlinearity here is sigmoid as in logistic regression\n# you can give a name to each layer (optional)\ndense_1 = lasagne.layers.DenseLayer(input_layer,num_units=100,\n nonlinearity = lasagne.nonlinearities.sigmoid,\n name = \"hidden_dense_layer\")\n\n#fully connected output layer that takes dense_1 as input and has 10 neurons (1 for each digit)\n#We use softmax nonlinearity to make probabilities add up to 1\ndense_output = lasagne.layers.DenseLayer(dense_1,num_units = 10,\n nonlinearity = lasagne.nonlinearities.softmax,\n name='output')\n\n\n#network prediction (theano-transformation)\ny_predicted = lasagne.layers.get_output(dense_output)\n\n#all network weights (shared variables)\nall_weights = lasagne.layers.get_all_params(dense_output,trainable=True)\nprint all_weights", "Than you could simply\n\ndefine loss function manually\ncompute error gradient over all weights\ndefine updates\nBut that's a whole lot of work and life's short\nnot to mention life's too short to wait for SGD to converge\n\nInstead, we shall use Lasagne builtins", "#Mean categorical crossentropy as a loss function - similar to logistic loss but for multiclass targets\nloss = lasagne.objectives.categorical_crossentropy(y_predicted,target_y).mean()\n\n#prediction accuracy (WITH dropout)\naccuracy = lasagne.objectives.categorical_accuracy(y_predicted,target_y).mean()\n\n#This function computes gradient AND composes weight updates just like you did earlier\nupdates_sgd = lasagne.updates.sgd(loss, all_weights,learning_rate=0.01)\n\n#function that computes loss and updates weights\ntrain_fun = theano.function([input_X,target_y],[loss,accuracy],updates= updates_sgd)\n\n\n#deterministic prediciton (without dropout)\ny_predicted_det = lasagne.layers.get_output(dense_output,deterministic=True)\n\n#prediction accuracy (without dropout)\naccuracy_det = lasagne.objectives.categorical_accuracy(y_predicted_det,target_y).mean()\n\n#function that just computes accuracy without dropout/noize -- for evaluation purposes\naccuracy_fun = theano.function([input_X,target_y],accuracy_det)", "That's all, now let's train it!\n\nWe 
got a lot of data, so it's recommended that you use SGD\nSo let's implement a function that splits the training sample into minibatches", "# An auxilary function that returns mini-batches for neural network training\n\n#Parameters\n# X - a tensor of images with shape (many, 3, 32, 32), e.g. X_train\n# y - a vector of answers for corresponding images e.g. Y_train\n#batch_size - a single number - the intended size of each batches\n\n#What do need to implement\n# 1) Shuffle data\n# - Gotta shuffle X and y the same way not to break the correspondence between X_i and y_i\n# 3) Split data into minibatches of batch_size\n# - If data size is not a multiple of batch_size, make one last batch smaller.\n# 4) return a list (or an iterator) of pairs\n# - (подгруппа картинок, ответы из y на эту подгруппу)\ndef iterate_minibatches(X, y, batchsize):\n \n <return an iterable of (X_batch, y_batch) batches of images and answers for them>\n \n \n \n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n# You feel lost and wish you stayed home tonight?\n# Go search for a similar function at\n# https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py", "Training loop", "import time\n\nnum_epochs = 100 #amount of passes through the data\n \nbatch_size = 50 #number of samples processed at each function call\n\nfor epoch in range(num_epochs):\n # In each epoch, we do a full pass over the training data:\n train_err = 0\n train_acc = 0\n train_batches = 0\n start_time = time.time()\n for batch in iterate_minibatches(X_train, y_train,batch_size):\n inputs, targets = batch\n train_err_batch, train_acc_batch= train_fun(inputs, targets)\n train_err += train_err_batch\n train_acc += train_acc_batch\n train_batches += 1\n\n # And a full pass over the validation data:\n val_acc = 0\n val_batches = 0\n for batch in iterate_minibatches(X_val, y_val, batch_size):\n inputs, targets = batch\n val_acc += accuracy_fun(inputs, targets)\n val_batches += 1\n\n \n # Then we print the results for this epoch:\n print(\"Epoch {} of {} took {:.3f}s\".format(\n epoch + 1, num_epochs, time.time() - start_time))\n\n print(\" training loss (in-iteration):\\t\\t{:.6f}\".format(train_err / train_batches))\n print(\" train accuracy:\\t\\t{:.2f} %\".format(\n train_acc / train_batches * 100))\n print(\" validation accuracy:\\t\\t{:.2f} %\".format(\n val_acc / val_batches * 100))\n\ntest_acc = 0\ntest_batches = 0\nfor batch in iterate_minibatches(X_test, y_test, 500):\n inputs, targets = batch\n acc = accuracy_fun(inputs, targets)\n test_acc += acc\n test_batches += 1\nprint(\"Final results:\")\nprint(\" test accuracy:\\t\\t{:.2f} %\".format(\n test_acc / test_batches * 100))\n\nif test_acc / test_batches * 100 > 95:\n print \"Double-check, than consider applying for NIPS'17. 
SRSly.\"\nelif test_acc / test_batches * 100 > 90:\n print \"U'r freakin' amazin'!\"\nelif test_acc / test_batches * 100 > 80:\n print \"Achievement unlocked: 110lvl Warlock!\"\nelif test_acc / test_batches * 100 > 70:\n print \"Achievement unlocked: 80lvl Warlock!\"\nelif test_acc / test_batches * 100 > 50:\n print \"Achievement unlocked: 60lvl Warlock!\"\nelse:\n print \"We need more magic!\"", "First step\nLet's create a mini-convolutional network with roughly such architecture:\n* Input layer\n* 3x3 convolution with 10 filters and ReLU activation\n* 3x3 pooling (or set previous convolution stride to 3)\n* Dense layer with 100-neurons and ReLU activation\n* 10% dropout\n* Output dense layer.\nTrain it with Adam optimizer with default params.\nSecond step\n\nAdd batch_norm (with default params) between convolution and pooling\n\nRe-train the network with the same optimizer\nQuest For A Better Network\n(please read it at least diagonally)\n\nThe ultimate quest is to create a network that has as high accuracy as you can push it.\nThere is a mini-report at the end that you will have to fill in. We recommend reading it first and filling it while you iterate.\n\nGrading\n\nstarting at zero points\n+2 for describing your iteration path in a report below.\n+2 for building a network that gets above 20% accuracy\n+1 for beating each of these milestones on TEST dataset:\n50% (5 total)\n60% (6 total)\n65% (7 total)\n70% (8 total)\n75% (9 total)\n80% (10 total)\n\n\n\nBonus points\nCommon ways to get bonus points are:\n* Get higher score, obviously.\n* Anything special about your NN. For example \"A super-small/fast NN that gets 80%\" gets a bonus.\n* Any detailed analysis of the results. (saliency maps, whatever)\nRestrictions\n\nPlease do NOT use pre-trained networks for this assignment until you reach 80%.\nIn other words, base milestones must be beaten without pre-trained nets (and such net must be present in the e-mail). After that, you can use whatever you want.\nyou can use validation data for training, but you can't' do anything with test data apart from running the evaluation procedure.\n\nTips on what can be done:\n\nNetwork size\nMOAR neurons, \n\nMOAR layers, (lasagne docs)\n\n\nNonlinearities in the hidden layers\n\ntanh, relu, leaky relu, etc\n\n\n\nLarger networks may take more epochs to train, so don't discard your net just because it could didn't beat the baseline in 5 epochs.\n\n\nPh'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn!\n\n\nConvolution layers\n\nthey are a must unless you have any super-ideas\nnetwork = lasagne.layers.Conv2DLayer(prev_layer,\n num_filters = n_neurons,\n filter_size = (filter width, filter height),\n nonlinearity = some_nonlinearity)\n\nWarning! Training convolutional networks can take long without GPU. That's okay.\n\nIf you are CPU-only, we still recomment to try a simple convolutional architecture\na perfect option is if you can set it up to run at nighttime and check it up at the morning.\nMake reasonable layer size estimates. A 128-neuron first convolution is likely an overkill.\nTo reduce computation time by a factor in exchange for some accuracy drop, try using stride parameter. 
A stride=2 convolution should take roughly 1/4 of the default (stride=1) one.\n\n\n\nPlenty other layers and architectures\n\nhttp://lasagne.readthedocs.org/en/latest/modules/layers.html\nbatch normalization, pooling, etc\n\n\n\nEarly Stopping\n\nTraining for 100 epochs regardless of anything is probably a bad idea.\nSome networks converge over 5 epochs, others - over 500.\n\nWay to go: stop when validation score is 10 iterations past maximum\n\n\nFaster optimization - \n\nrmsprop, nesterov_momentum, adam, adagrad and so on.\nConverge faster and sometimes reach better optima\nIt might make sense to tweak learning rate/momentum, other learning parameters, batch size and number of epochs\n\n\n\nBatchNormalization (lasagne.layers.batch_norm) FTW!\n\n\nRegularize to prevent overfitting\n\nAdd some L2 weight norm to the loss function, theano will do the rest\nCan be done manually or via - http://lasagne.readthedocs.org/en/latest/modules/regularization.html\n\n\n\nDropout - to prevent overfitting\n\nlasagne.layers.DropoutLayer(prev_layer, p=probability_to_zero_out) \nDon't overdo it. Check if it actually makes your network better\n\n\n\nData augmemntation - getting 5x as large dataset for free is a great deal\n\nZoom-in+slice = move\nRotate+zoom(to remove black stripes)\nany other perturbations\nAdd Noize (easiest: GaussianNoizeLayer)\nSimple way to do that (if you have PIL/Image): \nfrom scipy.misc import imrotate,imresize\nand a few slicing\n\n\nStay realistic. There's usually no point in flipping dogs upside down as that is not the way you usually see them.\n\nThere is a template for your solution below that you can opt to use or throw away and write it your way", "import numpy as np\nfrom cifar import load_cifar10\nX_train,y_train,X_val,y_val,X_test,y_test = load_cifar10(\"cifar_data\")\n\nclass_names = np.array(['airplane','automobile ','bird ','cat ','deer ','dog ','frog ','horse ','ship ','truck'])\n\nprint X_train.shape,y_train.shape\n\nimport lasagne\n\ninput_X = T.tensor4(\"X\")\n\n#input dimention (None means \"Arbitrary\" and only works at the first axes [samples])\ninput_shape = [None,3,32,32]\n\ntarget_y = T.vector(\"target Y integer\",dtype='int32')\n\n#Input layer (auxilary)\ninput_layer = lasagne.layers.InputLayer(shape = input_shape,input_var=input_X)\n\n<student.code_neural_network_architecture()>\n\ndense_output = <your network output>\n\n# Network predictions (theano-transformation)\ny_predicted = lasagne.layers.get_output(dense_output)\n\n#All weights (shared-varaibles)\n# \"trainable\" flag means not to return auxilary params like batch mean (for batch normalization)\nall_weights = lasagne.layers.get_all_params(dense_output,trainable=True)\nprint all_weights\n\n#loss function\nloss = <loss function>\n\n#<optionally add regularization>\n\n#accuracy with dropout/noize\naccuracy = lasagne.objectives.categorical_accuracy(y_predicted,target_y).mean()\n\n#weight updates\nupdates = <try different update methods>\n\n#A function that accepts X and y, returns loss functions and performs weight updates\ntrain_fun = theano.function([input_X,target_y],[loss,accuracy],updates= updates_sgd)\n\n\n#deterministic prediciton (without dropout)\ny_predicted_det = lasagne.layers.get_output(dense_output)\n\n#prediction accuracy (without dropout)\naccuracy_det = lasagne.objectives.categorical_accuracy(y_predicted_det,target_y).mean()\n\n#function that just computes accuracy without dropout/noize -- for evaluation purposes\naccuracy_fun = 
theano.function([input_X,target_y],accuracy_det)\n\n#итерации обучения\n\nnum_epochs = <how many times to iterate over the entire training set>\n\nbatch_size = <how many samples are processed at a single function call>\n\nfor epoch in range(num_epochs):\n # In each epoch, we do a full pass over the training data:\n train_err = 0\n train_acc = 0\n train_batches = 0\n start_time = time.time()\n for batch in iterate_minibatches(X_train, y_train,batch_size):\n inputs, targets = batch\n train_err_batch, train_acc_batch= train_fun(inputs, targets)\n train_err += train_err_batch\n train_acc += train_acc_batch\n train_batches += 1\n\n # And a full pass over the validation data:\n val_acc = 0\n val_batches = 0\n for batch in iterate_minibatches(X_val, y_val, batch_size):\n inputs, targets = batch\n val_acc += accuracy_fun(inputs, targets)\n val_batches += 1\n\n \n # Then we print the results for this epoch:\n print(\"Epoch {} of {} took {:.3f}s\".format(\n epoch + 1, num_epochs, time.time() - start_time))\n\n print(\" training loss (in-iteration):\\t\\t{:.6f}\".format(train_err / train_batches))\n print(\" train accuracy:\\t\\t{:.2f} %\".format(\n train_acc / train_batches * 100))\n print(\" validation accuracy:\\t\\t{:.2f} %\".format(\n val_acc / val_batches * 100))\n\ntest_acc = 0\ntest_batches = 0\nfor batch in iterate_minibatches(X_test, y_test, 500):\n inputs, targets = batch\n acc = accuracy_fun(inputs, targets)\n test_acc += acc\n test_batches += 1\nprint(\"Final results:\")\nprint(\" test accuracy:\\t\\t{:.2f} %\".format(\n test_acc / test_batches * 100))\n\nif test_acc / test_batches * 100 > 80:\n print \"Achievement unlocked: 80lvl Warlock!\"\nelse:\n print \"We need more magic!\"", "Report\nAll creative approaches are highly welcome, but at the very least it would be great to mention\n* the idea;\n* brief history of tweaks and improvements;\n* what is the final architecture and why?\n* what is the training method and, again, why?\n* Any regularizations and other techniques applied and their effects;\nThere is no need to write strict mathematical proofs (unless you want to).\n * \"I tried this, this and this, and the second one turned out to be better. And i just didn't like the name of that one\" - OK, but can be better\n * \"I have analized these and these articles|sources|blog posts, tried that and that to adapt them to my problem and the conclusions are such and such\" - the ideal one\n * \"I took that code that demo without understanding it, but i'll never confess that and instead i'll make up some pseudoscientific explaination\" - not_ok\nHi, my name is ___ ___, and here's my story\nA long ago in a galaxy far far away, when it was still more than an hour before deadline, i got an idea:\nI gonna build a neural network, that\n\nbrief text on what was\nthe original idea\nand why it was so\n\nHow could i be so naive?!\nOne day, with no signs of warning,\nThis thing has finally converged and\n* Some explaination about what were the results,\n* what worked and what didn't\n* most importantly - what next steps were taken, if any\n* and what were their respective outcomes\nFinally, after iterations, mugs of [tea/coffee]\n\nwhat was the final architecture\nas well as training method and tricks\n\nThat, having wasted ____ [minutes, hours or days] of my life training, got\n\naccuracy on training: __\naccuracy on validation: __\naccuracy on test: __\n\n[an optional afterword and mortal curses on assignment authors]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.23/_downloads/84d68dbced84793d122fec3a2cf0cde5/source_power_spectrum.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute source power spectral density (PSD) in a label\nReturns an STC file containing the PSD (in dB) of each of the sources\nwithin a label.", "# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n#\n# License: BSD (3-clause)\n\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\nfrom mne.minimum_norm import read_inverse_operator, compute_source_psd\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nfname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'\nfname_label = data_path + '/MEG/sample/labels/Aud-lh.label'\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname, verbose=False)\nevents = mne.find_events(raw, stim_channel='STI 014')\ninverse_operator = read_inverse_operator(fname_inv)\nraw.info['bads'] = ['MEG 2443', 'EEG 053']\n\n# picks MEG gradiometers\npicks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,\n stim=False, exclude='bads')\n\ntmin, tmax = 0, 120 # use the first 120s of data\nfmin, fmax = 4, 100 # look at frequencies between 4 and 100Hz\nn_fft = 2048 # the FFT size (n_fft). Ideally a power of 2\nlabel = mne.read_label(fname_label)\n\nstc = compute_source_psd(raw, inverse_operator, lambda2=1. / 9., method=\"dSPM\",\n tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax,\n pick_ori=\"normal\", n_fft=n_fft, label=label,\n dB=True)\n\nstc.save('psd_dSPM')", "View PSD of sources in label", "plt.plot(stc.times, stc.data.T)\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('PSD (dB)')\nplt.title('Source Power Spectrum (PSD)')\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jbarnoud/PBxplore
doc/source/notebooks/Deformability.ipynb
mit
[ "Visualize protein deformability\nProtein Blocks are great tools to study protein deformability. Indeed, if the block assigned to a residue changes between two frames of a trajectory, it represents a local deformation of the protein rather than the displacement of the residue.\nThe API allows you to visualize Protein Block variability throughout a molecular dynamics simulation trajectory.", "from __future__ import print_function, division\nfrom pprint import pprint\nfrom IPython.display import Image, display\nimport matplotlib.pyplot as plt\nimport os\n\n# The following line, in a jupyter notebook, allows you to display\n# the figure directly in the notebook. See <https://jupyter.org/>\n%matplotlib inline\n\nimport pbxplore as pbx", "Here we will look at a molecular dynamics simulation of the barstar. As we will analyse Protein Block sequences, we first need to assign these sequences for each frame of the trajectory.", "# Assign PB sequences for all frames of a trajectory\ntrajectory = os.path.join(pbx.DEMO_DATA_PATH, 'barstar_md_traj.xtc')\ntopology = os.path.join(pbx.DEMO_DATA_PATH, 'barstar_md_traj.gro')\nsequences = []\nfor chain_name, chain in pbx.chains_from_trajectory(trajectory, topology):\n dihedrals = chain.get_phi_psi_angles()\n pb_seq = pbx.assign(dihedrals)\n sequences.append(pb_seq)", "Block occurrences per position\nThe basic information we need to analyse protein deformability is the count of occurrences of each PB for each position throughout the trajectory. This occurrence matrix can be calculated with the :func:pbxplore.analysis.count_matrix function.", "count_matrix = pbx.analysis.count_matrix(sequences)", "count_matrix is a numpy array with one row per PB and one column per position. In each cell is the number of times a position was assigned to a PB.\nWe can visualize count_matrix using Matplotlib as any 2D numpy array.", "im = plt.imshow(count_matrix, interpolation='none', aspect='auto')\nplt.colorbar(im)\nplt.xlabel('Position')\nplt.ylabel('Block')", "PBxplore provides the :func:pbxplore.analysis.plot_map function to ease the visualization of the occurrence matrix.", "pbx.analysis.plot_map('map.png', count_matrix)\n!rm map.png", "The :func:pbxplore.analysis.plot_map helper has residue_min and residue_max optional arguments to display only part of the matrix. These two arguments can be passed to all PBxplore functions that produce a figure.", "pbx.analysis.plot_map('map.png', count_matrix,\n residue_min=60, residue_max=70)\n!rm map.png", "Note that the matrix in the figure produced by :func:pbxplore.analysis.plot_map is normalized so that the sum of each column is 1. The matrix can be normalized with the :func:pbxplore.analysis.compute_freq_matrix.", "freq_matrix = pbx.analysis.compute_freq_matrix(count_matrix)\n\nim = plt.imshow(freq_matrix, interpolation='none', aspect='auto')\nplt.colorbar(im)\nplt.xlabel('Position')\nplt.ylabel('Block')", "Protein Block entropy\nThe $N_{eq}$ is a measure of variability based on the count matrix calculated above. It can be computed with the :func:pbxplore.analysis.compute_neq function.", "neq_by_position = pbx.analysis.compute_neq(count_matrix)", "neq_by_position is a 1D numpy array with the $N_{eq}$ for each residue.", "plt.plot(neq_by_position)\nplt.xlabel('Position')\nplt.ylabel('$N_{eq}$')", "The :func:pbxplore.analysis.plot_neq helper eases the plotting of the $N_{eq}$.", "pbx.analysis.plot_neq('neq.png', neq_by_position)\n!rm neq.png", "The residue_min and residue_max arguments are available.", "pbx.analysis.plot_neq('neq.png', neq_by_position,\n residue_min=60, residue_max=70)\n!rm neq.png", "Display PB variability as a logo", "pbx.analysis.generate_weblogo('logo.png', count_matrix)\ndisplay(Image('logo.png'))\n!rm logo.png\n\npbx.analysis.generate_weblogo('logo.png', count_matrix,\n residue_min=60, residue_max=70)\ndisplay(Image('logo.png'))\n!rm logo.png" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
karst87/ml
dev/openlibs/tensorflow/basic_usage.ipynb
mit
[ "基本使用\nhttp://wiki.jikexueyuan.com/project/tensorflow-zh/get_started/basic_usage.html\n使用 TensorFlow, 你必须明白 TensorFlow:\n 使用图 (graph) 来表示计算任务.\n 在被称之为 会话 (Session) 的上下文 (context) 中执行图.\n 使用 tensor 表示数据.\n 通过 变量 (Variable) 维护状态.\n 使用 feed 和 fetch 可以为任意的操作(arbitrary operation) 赋值或者从其中获取数据.\n\n综述\nTensorFlow 是一个编程系统, 使用图来表示计算任务. 图中的节点被称之为 op (operation 的缩写). 一个 op 获得 0 个或多个 Tensor, 执行计算, 产生 0 个或多个 Tensor. 每个 Tensor 是一个类型化的多维数组. 例如, 你可以将一小组图像集表示为一个四维浮点数数组, 这四个维度分别是 [batch, height, width, channels].\n\n一个 TensorFlow 图描述了计算的过程. 为了进行计算, 图必须在 会话 里被启动. 会话 将图的 op 分发到诸如 CPU 或 GPU 之类的 设备 上, 同时提供执行 op 的方法. 这些方法执行后, 将产生的 tensor 返回. 在 Python 语言中, 返回的 tensor 是 numpy ndarray 对象; 在 C 和 C++ 语言中, 返回的 tensor 是 tensorflow::Tensor 实例.\n\n计算图\nTensorFlow 程序通常被组织成一个构建阶段和一个执行阶段. 在构建阶段, op 的执行步骤 被描述成一个图. 在执行阶段, 使用会话执行执行图中的 op.\n\n例如, 通常在构建阶段创建一个图来表示和训练神经网络, 然后在执行阶段反复执行图中的训练 op.\n\nTensorFlow 支持 C, C++, Python 编程语言. 目前, TensorFlow 的 Python 库更加易用, 它提供了大量的辅助函数来简化构建图的工作, 这些函数尚未被 C 和 C++ 库支持.\n\n三种语言的会话库 (session libraries) 是一致的.\n\n构建图\n构建图的第一步, 是创建源 op (source op). 源 op 不需要任何输入, 例如 常量 (Constant). 源 op 的输出被传递给其它 op 做运算.\n\nPython 库中, op 构造器的返回值代表被构造出的 op 的输出, 这些返回值可以传递给其它 op 构造器作为输入.\n\nTensorFlow Python 库有一个默认图 (default graph), op 构造器可以为其增加节点. 这个默认图对 许多程序来说已经足够用了. 阅读 Graph 类 文档 来了解如何管理多个图.", "import tensorflow as tf\n\n# 创建一个常量op,产生一个1*2的矩阵,这个op被作为一个节点\n# 加到默认图中\n# 构造器的返回值代表该常量op的返回值\nmatrix1 = tf.constant([[3, 3]])\n\n# 创建另外一个常量op,产生一个2*1的矩阵\nmatrix2 = tf.constant([[2],[2]])\n\n# 创建一个矩阵乘法 matmul op , 把 'matrix1' 和 'matrix2' 作为输入.\n# 返回值 'product' 代表矩阵乘法的结果.\nproduct = tf.matmul(matrix1, matrix2)", "默认图现在有三个节点, 两个 constant() op, 和一个matmul() op. 为了真正进行矩阵相乘运算, 并得到矩阵乘法的 结果, 你必须在会话里启动这个图.\n\n在一个会话中启动图\n构造阶段完成后, 才能启动图. 启动图的第一步是创建一个 Session 对象, 如果无任何创建参数, 会话构造器将启动默认图.\n\n欲了解完整的会话 API, 请阅读Session 类.", "# 启动默认图\nsess = tf.Session()\n\n# 调用 sess 的 'run()' 方法来执行矩阵乘法 op, 传入 'product' 作为该方法的参数. \n# 上面提到, 'product' 代表了矩阵乘法 op 的输出, 传入它是向方法表明, 我们希望取回\n# 矩阵乘法 op 的输出.\n#\n# 整个执行过程是自动化的, 会话负责传递 op 所需的全部输入. op 通常是并发执行的.\n# \n# 函数调用 'run(product)' 触发了图中\n# 三个 op (两个常量 op 和一个矩阵乘法 op) 的执行.\n#\n# 返回值 'result' 是一个 numpy `ndarray` 对象.\nresult = sess.run(product)\nprint(result)\n# ==> [[ 12.]]\n\n# 任务完成, 关闭会话.\nsess.close()", "Session 对象在使用完后需要关闭以释放资源. 除了显式调用 close 外, 也可以使用 \"with\" 代码块 来自动完成关闭动作.", "with tf.Session() as sess:\n result = sess.run(product)\n print(result)", "在实现上, TensorFlow 将图形定义转换成分布式执行的操作, 以充分利用可用的计算资源(如 CPU 或 GPU). 一般你不需要显式指定使用 CPU 还是 GPU, TensorFlow 能自动检测. 如果检测到 GPU, TensorFlow 会尽可能地利用找到的第一个 GPU 来执行操作.\n\n如果机器上有超过一个可用的 GPU, 除第一个外的其它 GPU 默认是不参与计算的. 为了让 TensorFlow 使用这些 GPU, 你必须将 op 明确指派给它们执行. with...Device 语句用来指派特定的 CPU 或 GPU 执行操作:", "with tf.Session() as sess:\n# with tf.device('/gpu:0'):\n with tf.device('/cpu:0'):\n matrix1 = tf.constant([[3, 3]])\n matrix2 = tf.constant([[2], [2]])\n product = tf.matmul(matrix1, matrix2)\n reuslt = sess.run(product)\n print(result)", "设备用字符串进行标识. 目前支持的设备包括:\n \"/cpu:0\": 机器的 CPU.\n \"/gpu:0\": 机器的第一个 GPU, 如果有的话.\n \"/gpu:1\": 机器的第二个 GPU, 以此类推.\n阅读使用GPU章节, 了解 TensorFlow GPU 使用的更多信息.\n\n交互式使用\n文档中的 Python 示例使用一个会话 Session 来 启动图, 并调用 Session.run() 方法执行操作.\n\n为了便于使用诸如 IPython 之类的 Python 交互环境, 可以使用 InteractiveSession 代替 Session 类, 使用 Tensor.eval() 和 Operation.run() 方法代替 Session.run(). 这样可以避免使用一个变量来持有会话.", "# 进入一个交互式TensorFlow会话\nimport tensorflow as tf\n\nsess = tf.InteractiveSession()\n\nx = tf.Variable([1, 2])\na = tf.constant([3, 3])\n\n# 使用初始化器 initializer op 的 run() 方法初始化 'x'\nx.initializer.run()\n\n# 增加一个减法 sub op, 从 'x' 减去 'a'. 
运行减法 op, 输出结果 \nsub = tf.sub(x, a)\nprint(sub.eval())\n# ==> [-2. -1.]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NathanYee/ThinkBayes2
code/.ipynb_checkpoints/chap07soln-checkpoint.ipynb
gpl-2.0
[ "Think Bayes: Chapter 7\nThis notebook presents code and exercises from Think Bayes, second edition.\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT", "from __future__ import print_function, division\n\n% matplotlib inline\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport math\nimport numpy as np\n\nfrom thinkbayes2 import Pmf, Cdf, Suite, Joint\nimport thinkplot", "Warm-up exercises\nExercise: Suppose that goal scoring in hockey is well modeled by a \nPoisson process, and that the long-run goal-scoring rate of the\nBoston Bruins against the Vancouver Canucks is 2.9 goals per game.\nIn their next game, what is the probability\nthat the Bruins score exactly 3 goals? Plot the PMF of k, the number\nof goals they score in a game.", "# Solution\n\nfrom scipy.stats import poisson\n\npoisson.pmf(3, 2.9)\n\n# Solution\n\nfrom thinkbayes2 import EvalPoissonPmf\n\nEvalPoissonPmf(3, 2.9)\n\n# Solution\n\nfrom thinkbayes2 import MakePoissonPmf\n\npmf = MakePoissonPmf(2.9, high=10)\nthinkplot.Hist(pmf)\nthinkplot.Config(xlabel='Number of goals',\n ylabel='PMF',\n xlim=[-0.5, 10.5])", "Exercise: Assuming again that the goal scoring rate is 2.9, what is the probability of scoring a total of 9 goals in three games? Answer this question two ways:\n\n\nCompute the distribution of goals scored in one game and then add it to itself twice to find the distribution of goals scored in 3 games.\n\n\nUse the Poisson PMF with parameter $\\lambda t$, where $\\lambda$ is the rate in goals per game and $t$ is the duration in games.", "# Solution\n\npmf = MakePoissonPmf(2.9, high=30)\ntotal = pmf + pmf + pmf\nthinkplot.Hist(total)\nthinkplot.Config(xlabel='Number of goals',\n ylabel='PMF',\n xlim=[-0.5, 22.5])\ntotal[9]\n\n# Solution\n\nEvalPoissonPmf(9, 3 * 2.9)", "Exercise: Suppose that the long-run goal-scoring rate of the\nCanucks against the Bruins is 2.6 goals per game. Plot the distribution\nof t, the time until the Canucks score their first goal.\nIn their next game, what is the probability that the Canucks score\nduring the first period (that is, the first third of the game)?\nHint: thinkbayes2 provides MakeExponentialPmf and EvalExponentialCdf.", "# Solution\n\nfrom thinkbayes2 import MakeExponentialPmf\n\npmf = MakeExponentialPmf(lam=2.6, high=2.5)\nthinkplot.Pdf(pmf)\nthinkplot.Config(xlabel='Time between goals',\n ylabel='PMF')\n\n# Solution\n\nfrom scipy.stats import expon\n\nexpon.cdf(1/3, scale=1/2.6)\n\n# Solution\n\nfrom thinkbayes2 import EvalExponentialCdf\n\nEvalExponentialCdf(1/3, 2.6)", "Exercise: Assuming again that the goal scoring rate is 2.8, what is the probability that the Canucks get shut out (that is, don't score for an entire game)? Answer this question two ways, using the CDF of the exponential distribution and the PMF of the Poisson distribution.", "# Solution\n\n1 - EvalExponentialCdf(1, 2.6)\n\n# Solution\n\nEvalPoissonPmf(0, 2.6)", "The Boston Bruins problem\nThe Hockey suite contains hypotheses about the goal scoring rate for one team against the other. 
The prior is Gaussian, with mean and variance based on previous games in the league.\nThe Likelihood function takes as data the number of goals scored in a game.", "from thinkbayes2 import MakeNormalPmf\nfrom thinkbayes2 import EvalPoissonPmf\n\nclass Hockey(Suite):\n \"\"\"Represents hypotheses about the scoring rate for a team.\"\"\"\n\n def __init__(self, label=None):\n \"\"\"Initializes the Hockey object.\n\n label: string\n \"\"\"\n mu = 2.8\n sigma = 0.3\n\n pmf = MakeNormalPmf(mu, sigma, num_sigmas=4, n=101)\n Suite.__init__(self, pmf, label=label)\n \n def Likelihood(self, data, hypo):\n \"\"\"Computes the likelihood of the data under the hypothesis.\n\n Evaluates the Poisson PMF for lambda and k.\n\n hypo: goal scoring rate in goals per game\n data: goals scored in one game\n \"\"\"\n lam = hypo\n k = data\n like = EvalPoissonPmf(k, lam)\n return like", "Now we can initialize a suite for each team:", "suite1 = Hockey('bruins')\nsuite2 = Hockey('canucks')", "Here's what the priors look like:", "thinkplot.PrePlot(num=2)\nthinkplot.Pdf(suite1)\nthinkplot.Pdf(suite2)\nthinkplot.Config(xlabel='Goals per game',\n ylabel='Probability')", "And we can update each suite with the scores from the first 4 games.", "suite1.UpdateSet([0, 2, 8, 4])\nsuite2.UpdateSet([1, 3, 1, 0])\n\nthinkplot.PrePlot(num=2)\nthinkplot.Pdf(suite1)\nthinkplot.Pdf(suite2)\nthinkplot.Config(xlabel='Goals per game',\n ylabel='Probability')\n\nsuite1.Mean(), suite2.Mean()", "To predict the number of goals scored in the next game we can compute, for each hypothetical value of $\\lambda$, a Poisson distribution of goals scored, then make a weighted mixture of Poissons:", "from thinkbayes2 import MakeMixture\nfrom thinkbayes2 import MakePoissonPmf\n\ndef MakeGoalPmf(suite, high=10):\n \"\"\"Makes the distribution of goals scored, given distribution of lam.\n\n suite: distribution of goal-scoring rate\n high: upper bound\n\n returns: Pmf of goals per game\n \"\"\"\n metapmf = Pmf()\n\n for lam, prob in suite.Items():\n pmf = MakePoissonPmf(lam, high)\n metapmf.Set(pmf, prob)\n\n mix = MakeMixture(metapmf, label=suite.label)\n return mix", "Here's what the results look like.", "goal_dist1 = MakeGoalPmf(suite1)\ngoal_dist2 = MakeGoalPmf(suite2)\n\nthinkplot.PrePlot(num=2)\nthinkplot.Pmf(goal_dist1)\nthinkplot.Pmf(goal_dist2)\nthinkplot.Config(xlabel='Goals',\n ylabel='Probability',\n xlim=[-0.7, 11.5])\n\ngoal_dist1.Mean(), goal_dist2.Mean()", "Now we can compute the probability that the Bruins win, lose, or tie in regulation time.", "diff = goal_dist1 - goal_dist2\np_win = diff.ProbGreater(0)\np_loss = diff.ProbLess(0)\np_tie = diff.Prob(0)\n\nprint('Prob win, loss, tie:', p_win, p_loss, p_tie)", "If the game goes into overtime, we have to compute the distribution of t, the time until the first goal, for each team. 
For each hypothetical value of $\\lambda$, the distribution of t is exponential, so the predictive distribution is a mixture of exponentials.", "from thinkbayes2 import MakeExponentialPmf\n\ndef MakeGoalTimePmf(suite):\n \"\"\"Makes the distribution of time til first goal.\n\n suite: distribution of goal-scoring rate\n\n returns: Pmf of goals per game\n \"\"\"\n metapmf = Pmf()\n\n for lam, prob in suite.Items():\n pmf = MakeExponentialPmf(lam, high=2.5, n=1001)\n metapmf.Set(pmf, prob)\n\n mix = MakeMixture(metapmf, label=suite.label)\n return mix", "Here's what the predictive distributions for t look like.", "time_dist1 = MakeGoalTimePmf(suite1) \ntime_dist2 = MakeGoalTimePmf(suite2)\n \nthinkplot.PrePlot(num=2)\nthinkplot.Pmf(time_dist1)\nthinkplot.Pmf(time_dist2) \nthinkplot.Config(xlabel='Games until goal',\n ylabel='Probability')\n\ntime_dist1.Mean(), time_dist2.Mean()", "In overtime the first team to score wins, so the probability of winning is the probability of generating a smaller value of t:", "p_win_in_overtime = time_dist1.ProbLess(time_dist2)\np_adjust = time_dist1.ProbEqual(time_dist2)\np_win_in_overtime += p_adjust / 2\nprint('p_win_in_overtime', p_win_in_overtime)", "Finally, we can compute the overall chance that the Bruins win, either in regulation or overtime.", "p_win_overall = p_win + p_tie * p_win_in_overtime\nprint('p_win_overall', p_win_overall)", "Exercises\nExercise: To make the model of overtime more correct, we could update both suites with 0 goals in one game, before computing the predictive distribution of t. Make this change and see what effect it has on the results.", "# Solution\n\nsuite1.Update(0)\nsuite2.Update(0)\ntime_dist1 = MakeGoalTimePmf(suite1) \ntime_dist2 = MakeGoalTimePmf(suite2)\np_win_in_overtime = time_dist1.ProbLess(time_dist2)\np_adjust = time_dist1.ProbEqual(time_dist2)\np_win_in_overtime += p_adjust / 2\nprint('p_win_in_overtime', p_win_in_overtime)\np_win_overall = p_win + p_tie * p_win_in_overtime\nprint('p_win_overall', p_win_overall)", "Exercise: In the final match of the 2014 FIFA World Cup, Germany defeated Argentina 1-0. What is the probability that Germany had the better team? What is the probability that Germany would win a rematch?\nFor a prior distribution on the goal-scoring rate for each team, use a gamma distribution with parameter 1.3.", "from thinkbayes2 import MakeGammaPmf\n\nxs = np.linspace(0, 8, 101)\npmf = MakeGammaPmf(xs, 1.3)\nthinkplot.Pdf(pmf)\nthinkplot.Config(xlabel='Goals per game')\npmf.Mean()", "Exercise: In the 2014 FIFA World Cup, Germany played Brazil in a semifinal match. Germany scored after 11 minutes and again at the 23 minute mark. At that point in the match, how many goals would you expect Germany to score after 90 minutes? What was the probability that they would score 5 more goals (as, in fact, they did)?\nNote: for this one you will need a new suite that provides a Likelihood function that takes as data the time between goals, rather than the number of goals in a game. \nExercise: Which is a better way to break a tie: overtime or penalty shots?\nExercise: Suppose that you are an ecologist sampling the insect population in a new environment. You deploy 100 traps in a test area and come back the next day to check on them. You find that 37 traps have been triggered, trapping an insect inside. Once a trap triggers, it cannot trap another insect until it has been reset.\nIf you reset the traps and come back in two days, how many traps do you expect to find triggered? 
Compute a posterior predictive distribution for the number of traps." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
timgasser/keras-mnist
notebooks/data_exploration.ipynb
mit
[ "Data Exploration\nThis is a notebook to explore the pickle files saved out by the convert_data.py script. We'll sanity check all the pickle files, by loading in the image files and displaying them with their labels.", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport pickle\n\nplt.style.use('fivethirtyeight')\n# plt.rcParams['font.family'] = 'serif'\nplt.rcParams['font.serif'] = 'Helvetica'\nplt.rcParams['font.monospace'] = 'Consolas'\nplt.rcParams['font.size'] = 16\nplt.rcParams['axes.labelsize'] = 16\nplt.rcParams['axes.labelweight'] = 'bold'\nplt.rcParams['xtick.labelsize'] = 14\nplt.rcParams['ytick.labelsize'] = 14\nplt.rcParams['legend.fontsize'] = 16\nplt.rcParams['figure.titlesize'] = 20\nplt.rcParams['lines.linewidth'] = 2\n\n%matplotlib inline\n\n# for auto-reloading external modules\n%load_ext autoreload\n%autoreload 2", "Loading pickle files\nThe convert_data.py converts the ubyte format input files into numpy arrays. These arrays are then saved out as pickle files to be quickly loaded later on. The shape of the numpy arrays for images and labels are:\n\nImages: (N, rows, cols)\nLabels: (N, 1)", "# Set up the file directory and names\nDIR = '../input/'\nX_TRAIN = DIR + 'train-images-idx3-ubyte.pkl'\nY_TRAIN = DIR + 'train-labels-idx1-ubyte.pkl'\nX_TEST = DIR + 't10k-images-idx3-ubyte.pkl'\nY_TEST = DIR + 't10k-labels-idx1-ubyte.pkl'\n\nprint('Loading pickle files')\nX_train = pickle.load( open( X_TRAIN, \"rb\" ) )\ny_train = pickle.load( open( Y_TRAIN, \"rb\" ) )\nX_test = pickle.load( open( X_TEST, \"rb\" ) )\ny_test = pickle.load( open( Y_TEST, \"rb\" ) )\n\nn_train = X_train.shape[0]\nn_test = X_test.shape[0]\n\nprint('Train images shape {}, labels shape {}'.format(X_train.shape, y_train.shape))\nprint('Test images shape {}, labels shape {}'.format(X_test.shape, y_test.shape))", "Sample training images with labels\nLet's show a few of the training images with the corresponding labels, so we can sanity check that the labels match the numbers, and the images themselves look like actual digits.", "# Check a few training values at random as a sanity check\ndef show_label_images(X, y):\n '''Shows random images in a grid'''\n \n num = 9\n \n images = np.random.randint(0, X.shape[0], num)\n print('Showing training image indexes {}'.format(images))\n\n fig, axes = plt.subplots(3,3, figsize=(6,6))\n for idx, val in enumerate(images):\n r, c = divmod(idx, 3)\n axes[r][c].imshow(X[images[idx]])\n axes[r][c].annotate('Label: {}'.format(y[val]), xy=(1, 1))\n axes[r][c].xaxis.set_visible(False)\n axes[r][c].yaxis.set_visible(False)\n \nshow_label_images(X_train, y_train)", "Sample test images with labels\nNow we can check the test images and labels by picking a few random ones, and making sure the images look reasonable and they match their labels.", "# Now do the same for the training dataset\nshow_label_images(X_test, y_test)\n\n# # Training label distribution\ny_train_df = pd.DataFrame(y_train, columns=['class'])\ny_train_df.plot.hist(legend=False)\nhist_df = pd.DataFrame(y_train_df['class'].value_counts(normalize=True))\nhist_df.index.name = 'class'\nhist_df.columns = ['train']", "The class distribution is pretty evenly split between the classes. 
1 is the most popular class with 11.24% of instances, and at the other end 5 is the least frequent class, with 9.04% of instances.", "# Test label distribution\ny_test_df = pd.DataFrame(y_test, columns=['class'])\ny_test_df.plot.hist(legend=False, bins=10)\ntest_counts = y_test_df['class'].value_counts(normalize=True)\nhist_df['test'] = test_counts", "The distribution looks very similar between training and test datasets.", "hist_df['diff'] = np.abs(hist_df['train'] - hist_df['test'])\nhist_df.sort_values('diff', ascending=False)['diff'].plot.bar()", "The largest difference in class proportion is 0.0040 (about 0.4 percentage points), and it occurs in the number 2 class.", "# Final quick check of datatypes\nassert X_train.dtype == np.uint8\nassert y_train.dtype == np.uint8\nassert X_test.dtype == np.uint8\nassert y_test.dtype == np.uint8" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
erikdrysdale/erikdrysdale.github.io
_rmd/extra_hrt/hrt_python_copy.ipynb
mit
[ "The HRT for mixed data types (Python implementation)\nIn my last post I showed how the holdout random test (HRT) could be used to obtain valid p-values for any machine learning model by sampling from the conditional distribution of the design matrix. Like the permutation-type approaches used to assess variable importance for decision trees, this method sees whether a measure of performance accuracy declines when a column of the data has its values shuffled. However these ad-hoc permutation approaches lack statistical rigor and will not obtain a valid inferential assessment, even asymptotically, as the non-permuted columns of the data are not conditioned on. For example, if two features are correlated with the data, but only one has a statistical relationship with the response, then naive permutation approaches will often find the correlated noise column to be significant simply by it riding on the statistical coattails of the true variable. The HRT avoids this issue by fully conditioning on the data.\nOne simple way of learning the conditional distribution of the design matrix is to assume a multivariate Gaussian distribution but simply estimating the precision matrix. However when the columns of the data are not Gaussian or not continuous then this learned distribution will prove a poor estimate of the conditional relationship of the data. The goal is this post is two-fold. First, show how to fit a marginal regression model to each column of the data (regularized Gaussian and Binomial regressions are used). Second, a python implementation will be used to complement the R code used previously. While this post will use an un-tuned random forest classifier, any machine learning model can be used for the training set of the data.", "# import the necessary modules\nimport pandas as pd\nimport numpy as np\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.datasets import make_classification\nimport seaborn as sns", "Split a dataset into a tranining and a test folder\nIn the code blocks below we load a real and synthetic dataset to highlight the HRT at the bottom of the script.\nOption 1: South African Heart Dataset", "link_data = \"https://web.stanford.edu/~hastie/ElemStatLearn/datasets/SAheart.data\"\ndat_sah = pd.read_csv(link_data)\n# Extract the binary response and then drop\ny_sah = dat_sah['chd']\ndat_sah.drop(columns=['row.names','chd'],inplace=True)\n# one-hot encode famhist\ndat_sah['famhist'] = pd.get_dummies(dat_sah['famhist'])['Present']\n# Convert the X matrix to a numpy array\nX_sah = np.array(dat_sah)", "Note that the column types of each data need to be defined in the cn_type variable.", "cn_type_sah = np.where(dat_sah.columns=='famhist','binomial','gaussian')\n# Do a train/test split\nnp.random.seed(1234)\nidx = np.arange(len(y_sah))\nnp.random.shuffle(idx)\nidx_test = np.where((idx % 5) == 0)[0]\nidx_train = np.where((idx % 5) != 0)[0]\n\nX_train_sah = X_sah[idx_train]\nX_test_sah = X_sah[idx_test]\ny_train_sah = y_sah[idx_train]\ny_test_sah = y_sah[idx_test]", "Option 2: Non-linear decision boundary dataset", "# ---- Random circle data ---- #\nnp.random.seed(1234)\nn_circ = 1000\nX_circ = np.random.randn(n_circ,5)\nX_circ = X_circ + np.random.randn(n_circ,1)\ny_circ = np.where(np.apply_along_axis(arr=X_circ[:,0:2],axis=1,func1d= lambda x: np.sqrt(np.sum(x**2)) ) > 1.2,1,0)\n\ncn_type_circ = np.repeat('gaussian',X_circ.shape[1])\n\nidx = np.arange(n_circ)\nnp.random.shuffle(idx)\nidx_test = np.where((idx % 5) == 0)[0]\nidx_train = np.where((idx % 5) != 
0)[0]\n\nX_train_circ = X_circ[idx_train]\nX_test_circ = X_circ[idx_test]\ny_train_circ = y_circ[idx_train]\ny_test_circ = y_circ[idx_test]\n\nsns.scatterplot(x='var1',y='var2',hue='y',\n data=pd.DataFrame({'y':y_circ,'var1':X_circ[:,0],'var2':X_circ[:,1]}))", "Function support\nThe code block below provides a wrapper to implement the HRT algorithm for a binary outcome using a single training and test split. See my previous post for generalizations of this method for cross-validation. The function also requires a cn_type argument to specify whether the column is continuous or Bernoulli. The glm_l2 function implements an L2-regularized generalized regression model for Gaussian and Binomial data using an iteratively re-weighted least squares method. This can generalized for elastic-net regularization as well as different generalized linear model classes. The dgp_fun function takes a model with with glm_l2 and will generate a new vector of the data conditional on the rest of the design matrix.", "# ---- FUNCTION SUPPORT FOR SCRIPT ---- #\n\ndef hrt_bin_fun(X_train,y_train,X_test,y_test,cn_type):\n \n # ---- INTERNAL FUNCTION SUPPORT ---- #\n \n # Sigmoid function\n def sigmoid(x):\n return( 1/(1+np.exp(-x)) )\n # Sigmoid weightin\n def sigmoid_w(x):\n return( sigmoid(x)*(1-sigmoid(x)) )\n\n def glm_l2(resp,x,standardize,family='binomial',lam=0,add_int=True,tol=1e-4,max_iter=100):\n y = np.array(resp.copy())\n X = x.copy()\n n = X.shape[0]\n\n # Make sure all the response values are zeros or ones\n check1 = (~np.all(np.isin(np.array(resp),[0,1]))) & (family=='binomial')\n if check1:\n print('Error! Response variable is not all binary'); #return()\n # Make sure the family type is correct\n check2 = ~pd.Series(family).isin(['gaussian','binomial'])[0]\n if check2:\n print('Error! 
Family must be either gaussian or binoimal')\n\n # Normalize if requested\n if standardize:\n mu_X = X.mean(axis=0).reshape(1,X.shape[1])\n std_X = X.std(axis=0).reshape(1,X.shape[1])\n else:\n mu_X = np.repeat(0,p).reshape(1,X.shape[1])\n std_X = np.repeat(1,p).reshape(1,X.shape[1])\n\n X = (X - mu_X)/std_X\n\n # Add intercept\n if add_int:\n X = np.append(X,np.repeat(1,n).reshape(n,1),axis=1)\n\n # Calculate dimensions\n y = y.reshape(n,1)\n\n p = X.shape[1]\n # l2-regularization\n Lambda = n * np.diag(np.repeat(lam,p))\n\n bhat = np.repeat(0,X.shape[1])\n\n if family=='binomial':\n bb = np.log(np.mean(y)/(1-np.mean(y)))\n else:\n bb = np.mean(y)\n\n if add_int:\n bhat = np.append(bhat[1:p],bb).reshape(p,1)\n\n if family=='binomial':\n ii = 0\n diff = 1\n while( (ii < max_iter) & (diff > tol) ):\n ii += 1\n # Predicted probabilities\n eta = X.dot(bhat)\n phat = sigmoid(eta)\n res = y - phat\n what = phat*(1-phat)\n # Adjusted response\n z = eta + res/what\n # Weighted-least squares\n bhat_new = np.dot( np.linalg.inv( np.dot((X * what).T,X) + Lambda), np.dot((X * what).T, z) )\n diff = np.mean((bhat_new - bhat)**2)\n bhat = bhat_new.copy()\n sig2 = 0\n\n else:\n bhat = np.dot( np.linalg.inv( np.dot(X.T,X) + Lambda ), np.dot(X.T, y) )\n # Calculate the standard error of the residuals\n res = y - np.dot(X,bhat)\n sig2 = np.sum(res**2) / (n - (p - add_int))\n\n # Separate the intercept\n if add_int:\n b0 = bhat[p-1][0]\n bhat2 = bhat[0:(p-1)].copy() / std_X.T # Extract intercept\n b0 = b0 - np.sum(bhat2 * mu_X.T)\n else:\n bhat2 = bhat.copy() / std_X.T\n b0 = 0\n\n # Create a dictionary to store the results\n ret_dict = {'b0':b0, 'bvec':bhat2, 'family':family, 'sig2':sig2, 'n':n}\n return ret_dict\n\n # mdl=mdl_lst[4].copy(); x = tmp_X.copy() \n # Function to generate data from a fitted model\n def dgp_fun(mdl,x):\n tmp_n = mdl['n']\n tmp_family = mdl['family']\n tmp_sig2 = mdl['sig2']\n tmp_b0 = mdl['b0']\n tmp_bvec = mdl['bvec']\n # Fitted value\n fitted = np.squeeze(np.dot(x, tmp_bvec) + tmp_b0)\n\n if tmp_family=='gaussian':\n # Generate some noise\n noise = np.random.randn(tmp_n)*np.sqrt(tmp_sig2) + tmp_b0\n y_ret = fitted + noise\n else:\n y_ret = np.random.binomial(n=1,p=sigmoid(fitted),size=tmp_n)\n # Return\n return(y_ret)\n\n # Logistic loss function\n def loss_binomial(y,yhat):\n ll = -1*np.mean(y*np.log(yhat) + (1-y)*np.log(1-yhat))\n return(ll)\n \n # Loop through and fit a model to each column\n mdl_lst = []\n for cc in np.arange(len(cn_type)):\n tmp_y = X_test[:,cc]\n tmp_X = np.delete(X_test, cc, 1)\n tmp_family = cn_type[cc]\n mdl_lst.append(glm_l2(resp=tmp_y,x=tmp_X,family=tmp_family,lam=0,standardize=True))\n \n # ---- FIT SOME MACHINE LEARNING MODEL HERE ---- #\n # Fit random forest\n clf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0)\n clf.fit(X_train, y_train)\n # Baseline predicted probabilities and logistic loss\n phat_baseline = clf.predict_proba(X_test)[:,1] \n loss_baseline = loss_binomial(y=y_test,yhat=phat_baseline)\n\n # ---- CALCULATE P-VALUES FOR EACH MODEL ---- #\n pval_lst = []\n nsim = 250\n for cc in np.arange(len(cn_type)):\n print('Variable %i of %i' % (cc+1, len(cn_type)))\n mdl_cc = mdl_lst[cc]\n X_test_not_cc = np.delete(X_test, cc, 1)\n X_test_cc = X_test.copy()\n loss_lst = []\n for ii in range(nsim):\n np.random.seed(ii)\n xx_draw_test = dgp_fun(mdl=mdl_cc,x=X_test_not_cc)\n X_test_cc[:,cc] = xx_draw_test\n phat_ii = clf.predict_proba(X_test_cc)[:,1]\n loss_ii = loss_binomial(y=y_test,yhat=phat_ii)\n 
loss_lst.append(loss_ii)\n pval_cc = np.mean(np.array(loss_lst) <= loss_baseline)\n pval_lst.append(pval_cc)\n\n # Return p-values\n return(pval_lst)", "Get the p-values for the different datasets\nNow that the hrt_bin_fun has been defined, we can perform inference on the columns of the two datasets created above.", "pval_circ = hrt_bin_fun(X_train=X_train_circ,y_train=y_train_circ,X_test=X_test_circ,y_test=y_test_circ,cn_type=cn_type_circ)\npval_sah = hrt_bin_fun(X_train=X_train_sah,y_train=y_train_sah,X_test=X_test_sah,y_test=y_test_sah,cn_type=cn_type_sah)", "The results below show that the sbp, tobacco, ldl, adiposity, and age are statistically significant features for the South African Heart Dataset. As expected, the first two variables, var1, and var2 from the non-linear decision boundary dataset are important as these are the two variables which define the decision boundary with the rest of the variables being noise variables.", "pd.concat([pd.DataFrame({'vars':dat_sah.columns, 'pval':pval_sah, 'dataset':'SAH'}),\n pd.DataFrame({'vars':['var'+str(x) for x in np.arange(5)+1],'pval':pval_circ,'dataset':'NLP'})])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
natashabatalha/PandExo
notebooks/JWST_Running_Pandexo.ipynb
gpl-3.0
[ "Getting Started\nBefore starting here, all the instructions on the installation page should be completed! \nHere you will learn how to: \n\nset planet properties \nset stellar properties\nrun default instrument modes \nadjust instrument modes \nrun pandexo", "import warnings\nwarnings.filterwarnings('ignore')\nimport pandexo.engine.justdoit as jdi # THIS IS THE HOLY GRAIL OF PANDEXO\nimport numpy as np\nimport os\n#pip install pandexo.engine --upgrade", "Make sure that your environment path is set to match the correct version of pandeia", "print(os.environ['pandeia_refdata'] )\nimport pandeia.engine\nprint(pandeia.engine.__version__)", "Load blank exo dictionary\nTo start, load in a blank exoplanet dictionary with empty keys. You will fill these out for yourself in the next step.", "exo_dict = jdi.load_exo_dict()\nprint(exo_dict.keys())\n#print(exo_dict['star']['w_unit'])", "Edit exoplanet observation inputs\nEditting each keys are annoying. But, do this carefully or it could result in nonsense runs", "exo_dict['observation']['sat_level'] = 80 #saturation level in percent of full well \nexo_dict['observation']['sat_unit'] = '%'\nexo_dict['observation']['noccultations'] = 1 #number of transits \nexo_dict['observation']['R'] = None #fixed binning. I usually suggest ZERO binning.. you can always bin later \n #without having to redo the calcualtion\nexo_dict['observation']['baseline_unit'] = 'total' #Defines how you specify out of transit observing time\n #'frac' : fraction of time in transit versus out = in/out \n #'total' : total observing time (seconds)\nexo_dict['observation']['baseline'] = 4.0*60.0*60.0 #in accordance with what was specified above (total observing time)\n\nexo_dict['observation']['noise_floor'] = 0 #this can be a fixed level or it can be a filepath \n #to a wavelength dependent noise floor solution (units are ppm)", "Edit exoplanet host star inputs\nNote... If you select phoenix you do not have to provide a starpath, w_unit or f_unit, but you do have to provide a temp, metal and logg. If you select user you do not need to provide a temp, metal and logg, but you do need to provide units and starpath. \nOption 1) Grab stellar model from database", "#OPTION 1 get start from database\nexo_dict['star']['type'] = 'phoenix' #phoenix or user (if you have your own)\nexo_dict['star']['mag'] = 8.0 #magnitude of the system\nexo_dict['star']['ref_wave'] = 1.25 #For J mag = 1.25, H = 1.6, K =2.22.. 
etc (all in micron)\nexo_dict['star']['temp'] = 5500 #in K \nexo_dict['star']['metal'] = 0.0 # as log Fe/H\nexo_dict['star']['logg'] = 4.0 #log surface gravity cgs", "Option 1) Input as dictionary or filename", "#Let's create a little fake stellar input\n\nimport scipy.constants as sc\nwl = np.linspace(0.8, 5, 3000)\nnu = sc.c/(wl*1e-6) # frequency in sec^-1\nteff = 5500.0\nplanck_5500K = nu**3 / (np.exp(sc.h*nu/sc.k/teff) - 1)\n\n#can either be dictionary input\nstarflux = {'f':planck_5500K, 'w':wl}\n#or can be as a stellar file\n#starflux = 'planck_5500K.dat'\n#with open(starflux, 'w') as sf:\n# for w,f in zip(wl, planck_5500K):\n# sf.write(f'{w:.15f} {f:.15e}\\n')\n\nexo_dict['star']['type'] = 'user' \nexo_dict['star']['mag'] = 8.0 #magnitude of the system\nexo_dict['star']['ref_wave'] = 1.25 \nexo_dict['star']['starpath'] = starflux \nexo_dict['star']['w_unit'] = 'um'\nexo_dict['star']['f_unit'] = 'erg/cm2/s/Hz'", "Edit exoplanet inputs using one of three options\n1) user specified\n2) constant value\n3) select from grid\n1) Edit exoplanet planet inputs if using your own model", "exo_dict['planet']['type'] ='user' #tells pandexo you are uploading your own spectrum\nexo_dict['planet']['exopath'] = 'wasp12b.txt'\n\n#or as a dictionary\n#exo_dict['planet']['exopath'] = {'f':spectrum, 'w':wavelength}\n\nexo_dict['planet']['w_unit'] = 'cm' #other options include \"um\",\"nm\" ,\"Angs\", \"sec\" (for phase curves)\nexo_dict['planet']['f_unit'] = 'rp^2/r*^2' #other options are 'fp/f*' \nexo_dict['planet']['transit_duration'] = 2.0*60.0*60.0 #transit duration \nexo_dict['planet']['td_unit'] = 's' #Any unit of time in accordance with astropy.units can be added", "2) Users can also add in a constant temperature or a constant transit depth", "exo_dict['planet']['type'] = 'constant' #tells pandexo you want a fixed transit depth\nexo_dict['planet']['transit_duration'] = 2.0*60.0*60.0 #transit duration \nexo_dict['planet']['td_unit'] = 's' \nexo_dict['planet']['radius'] = 1\nexo_dict['planet']['r_unit'] = 'R_jup' #Any unit of distance in accordance with astropy.units can be added here\nexo_dict['star']['radius'] = 1\nexo_dict['star']['r_unit'] = 'R_sun' #Same deal with astropy.units here\nexo_dict['planet']['f_unit'] = 'rp^2/r*^2' #this is what you would do for primary transit \n\n#ORRRRR....\n#if you wanted to instead to secondary transit at constant temperature \n#exo_dict['planet']['f_unit'] = 'fp/f*' \n#exo_dict['planet']['temp'] = 1000", "3) Select from grid\nNOTE: Currently only the fortney grid for hot Jupiters from Fortney+2010 is supported. Holler though, if you want another grid supported", "exo_dict['planet']['type'] = 'grid' #tells pandexo you want to pull from the grid\nexo_dict['planet']['temp'] = 1000 #grid: 500, 750, 1000, 1250, 1500, 1750, 2000, 2250, 2500\nexo_dict['planet']['chem'] = 'noTiO' #options: 'noTiO' and 'eqchem', noTiO is chemical eq. 
without TiO\nexo_dict['planet']['cloud'] = 'ray10' #options: nothing: '0', \n# Weak, medium, strong scattering: ray10,ray100, ray1000\n# Weak, medium, strong cloud: flat1,flat10, flat100\nexo_dict['planet']['mass'] = 1\nexo_dict['planet']['m_unit'] = 'M_jup' #Any unit of mass in accordance with astropy.units can be added here\nexo_dict['planet']['radius'] = 1\nexo_dict['planet']['r_unit'] = 'R_jup' #Any unit of distance in accordance with astropy.units can be added here\nexo_dict['star']['radius'] = 1\nexo_dict['star']['r_unit'] = 'R_sun' #Same deal with astropy.units here\n", "Load in instrument dictionary (OPTIONAL)\nStep 2 is optional because PandExo has the functionality to automatically load in instrument dictionaries. Skip this if you plan on observing with one of the following and want to use the subarray with the smallest frame time and the readout mode with 1 frame/1 group (standard): \n- NIRCam F444W\n- NIRSpec Prism\n- NIRSpec G395M\n- NIRSpec G395H\n- NIRSpec G235H\n- NIRSpec G235M\n- NIRCam F322W\n- NIRSpec G140M\n- NIRSpec G140H\n- MIRI LRS\n- NIRISS SOSS", "jdi.print_instruments()\n\ninst_dict = jdi.load_mode_dict('NIRSpec G140H')\n\n#loading in instrument dictionaries allow you to personalize some of \n#the fields that are predefined in the templates. The templates have \n#the subbarays with the lowest frame times and the readmodes with 1 frame per group. \n#if that is not what you want. change these fields\n\n#Try printing this out to get a feel for how it is structured: \n\nprint(inst_dict['configuration'])\n\n#Another way to display this is to print out the keys \ninst_dict.keys()", "Don't know what instrument options there are?", "print(\"SUBARRAYS\")\nprint(jdi.subarrays('nirspec'))\n\nprint(\"FILTERS\")\nprint(jdi.filters('nircam'))\n\nprint(\"DISPERSERS\")\nprint(jdi.dispersers('nirspec'))\n\n#you can try personalizing some of these fields\n\ninst_dict[\"configuration\"][\"detector\"][\"ngroup\"] = 'optimize' #running \"optimize\" will select the maximum \n #possible groups before saturation. \n #You can also write in any integer between 2-65536\n\ninst_dict[\"configuration\"][\"detector\"][\"subarray\"] = 'substrip256' #change the subbaray\n\n", "Adjusting the Background Level\nYou may want to think about adjusting the background level of your observation, based on the position of your target. PandExo two options and three levels for the position: \n\necliptic or minzodi \nlow, medium, high", "inst_dict['background'] = 'ecliptic'\ninst_dict['background_level'] = 'high'", "Running NIRISS SOSS Order 2\nPandExo only will extract a single order at a time. By default, it is set to extract Order 1. Below you can see how to extract the second order. \nNOTE! Users should be careful with this calculation. Saturation will be limited by the first order. Therefore, I suggest running one calculation with ngroup='optmize' for Order 1. This will give you an idea of a good number of groups to use. Then, you can use that in this order 2 calculation.", "inst_dict = jdi.load_mode_dict('NIRISS SOSS')\ninst_dict['strategy']['order'] = 2\ninst_dict['configuration']['detector']['subarray'] = 'substrip256'\nngroup_from_order1_run = 2\ninst_dict[\"configuration\"][\"detector\"][\"ngroup\"] = ngroup_from_order1_run", "Running PandExo\nYou have four options for running PandExo. All of them are accessed through attribute jdi.run_pandexo. See examples below. 
\njdi.run_pandexo(exo, inst, param_space = 0, param_range = 0,save_file = True,\n output_path=os.getcwd(), output_file = '', verbose=True)\nOption 1- Run single instrument mode, single planet\nIf you forget which instruments are available run jdi.print_isntruments() and pick one", "jdi.print_instruments()\n\nresult = jdi.run_pandexo(exo_dict,['NIRCam F322W2'], verbose=True)", "Note, you can turn off print statements with verbose=False\nOption 2- Run single instrument mode (with user dict), single planet\nThis is the same thing as option 1 but instead of feeding it a list of keys, you can feed it a instrument dictionary (this is for users who wanted to simulate something NOT pre defined within pandexo)", "inst_dict = jdi.load_mode_dict('NIRSpec G140H')\n#personalize subarray\ninst_dict[\"configuration\"][\"detector\"][\"subarray\"] = 'sub2048'\nresult = jdi.run_pandexo(exo_dict, inst_dict)", "Option 3- Run several modes, single planet\nUse several modes from print_isntruments() options.", "#choose select \nresult = jdi.run_pandexo(exo_dict,['NIRSpec G140M','NIRSpec G235M','NIRSpec G395M'],\n output_file='three_nirspec_modes.p',verbose=True)\n#run all \n#result = jdi.run_pandexo(exo_dict, ['RUN ALL'], save_file = False)", "Option 4- Run single mode, several planet cases\nUse a single modes from print_isntruments() options. But explore parameter space with respect to any parameter in the exo dict. The example below shows how to loop over several planet models\nYou can loop through anything in the exoplanet dictionary. It will be planet, star or observation followed by whatever you want to loop through in that set. \ni.e. planet+exopath, star+temp, star+metal, star+logg, observation+sat_level.. etc", "#looping over different exoplanet models \njdi.run_pandexo(exo_dict, ['NIRCam F444W'], param_space = 'planet+exopath',\n param_range = os.listdir('/path/to/location/of/models'),\n output_path = '/path/to/output/simulations')\n\n#looping over different stellar temperatures \njdi.run_pandexo(exo_dict, ['NIRCam F444W'], param_space = 'star+temp',\n param_range = np.linspace(5000,8000,2),\n output_path = '/path/to/output/simulations')\n\n#looping over different saturation levels\njdi.run_pandexo(exo_dict, ['NIRCam F444W'], param_space = 'observation+sat_level',\n param_range = np.linspace(.5,1,5),\n output_path = '/path/to/output/simulations')\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/applied-machine-learning-intensive
content/05_deep_learning/03_autoencoders/colab.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/05_deep_learning/03_autoencoders/colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 Google LLC.", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Autoencoders\nAn autoencoder is a type of neural network used to learn an efficient representation, or encoding, for a set of data. The advantages of using these learned encodings are similar to those of word embeddings; they reduce the dimension of the feature space and can capture similarities between different inputs. Autoencoders are a useful unsupervised learning method, as they do not require any ground truth labels to train.\nThis notebook is based on this tutorial and this keras example.\nData\nWe will use the MNIST dataset, which contains images of handwritten digits (0, 1, 2, etc.). This dataset has 60,000 training examples and 10,000 testing examples.", "# Set random seeds for reproducible results.\nimport numpy as np\nimport tensorflow as tf\n\nnp.random.seed(42)\ntf.random.set_seed(42)\n\n# Load dataset using keras data loader.\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()", "Each image in the dataset is 28 x 28 pixels. Let's flatten each to a 1-dimensional vector of length 784.", "image_size = x_train.shape[1]\noriginal_dim = image_size * image_size\n# Flatten each image into a 1-d vector.\nx_train = np.reshape(x_train, [-1, original_dim])\nx_test = np.reshape(x_test, [-1, original_dim])\n\n# Rescale pixel values to a 0-1 range.\nx_train = x_train.astype('float32') / 255\nx_test = x_test.astype('float32') / 255\n\nprint('x_train:', x_train.shape)\nprint('x_test:', x_test.shape)", "Autoencoder Structure\n<a title=\"Chervinskii [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons\" href=\"https://commons.wikimedia.org/wiki/File:Autoencoder_structure.png\"><img width=\"512\" alt=\"Autoencoder structure\" src=\"https://upload.wikimedia.org/wikipedia/commons/2/28/Autoencoder_structure.png\"></a>\nSource: Wikipedia\nAn autoencoder works by learning to output a copy of its input, after passing the input through one or more smaller hidden layer(s). This hidden layer describes an encoding or \"code\" used to represent the input (x in the above graph). An autoencoder has two main parts: an encoder that maps the input into the code, and a decoder that maps the code back to a reconstruction of the original input (x' in the above graph). This structure forces the hidden layer to learn a more efficient, useful representation of the input data (z in the above graph, also called a \"latent representation\").\nBasic Model\nBelow is an example of a simple autoencoder that maps the 784-dimensional input image to a 36-dimensional latent representation, then attempts to reconstruct the 784-dimensional input image from that encoded representation. 
\nInstead of keras.models.Sequential, we'll use keras.models.Model to more clearly show the encoder and decoder parts of the autoencoder as individual models. This will also make it easier to extract the latent representations from the encoder. The Sequential API is usually easier to use while the Model API is more flexible. You can read more about their differences here.", "from tensorflow.keras import Input\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras.models import Model", "Encoder", "latent_dim = 36\n\n# input layer (needed for the Model API).\ninput_layer = Input(shape=(original_dim,), name='encoder_input')\n\n# Notice that with all layers except for the first,\n# we need to specify which layer is used as input.\nlatent_layer = Dense(latent_dim, activation='relu',\n name='latent_layer')(input_layer)\n\nencoder = Model(input_layer, latent_layer, name='encoder')\nencoder.summary()", "Decoder", "latent_inputs = Input(shape=(latent_dim,), name='decoder_input')\noutput_layer = Dense(original_dim, name='decoder_output')(latent_inputs)\n\ndecoder = Model(latent_inputs, output_layer, name='decoder')\ndecoder.summary()", "Training\nThe full autoencoder passes the inputs to the encoder, then the latent representations from the encoder to the decoder. We'll use the Adam optimizer and Mean Squared Error loss.", "autoencoder = Model(\n input_layer,\n decoder(encoder(input_layer)),\n name=\"autoencoder\"\n)\n\nautoencoder.compile(optimizer='adam', loss='mse')\nautoencoder.summary()", "We will train for 50 epochs, using EarlyStopping to stop training early if validation loss improves by less than 0.0001 for 10 consecutive epochs. Using a batch size of 2048, this should take 1-2 minutes to train.", "early_stopping = tf.keras.callbacks.EarlyStopping(\n monitor='val_loss',\n # minimum change in loss that qualifies as \"improvement\"\n # higher values of min_delta lead to earlier stopping\n min_delta=0.0001,\n # threshold for number of epochs with no improvement\n patience=10,\n verbose=1\n)\n\nautoencoder.fit(\n # input\n x_train,\n # output\n x_train,\n epochs=50,\n batch_size=2048,\n validation_data=(x_test, x_test),\n callbacks=[early_stopping]\n)", "Visualize Predictions", "decoded_imgs = autoencoder.predict(x_test)\n\nimport matplotlib.pyplot as plt\n\ndef visualize_imgs(nrows, axis_names, images, sizes, n=10):\n '''\n Plots images in a grid layout.\n\n nrows: number of rows of images to display\n axis_names: list of names for each row\n images: list of arrays of images\n sizes: list of image size to display for each row\n n: number of images to display per row (default 10)\n\n nrows = len(axis_names) = len(images)\n '''\n fig, axes = plt.subplots(figsize=(20,4), nrows=nrows, ncols=1, sharey=False)\n for i in range(nrows):\n axes[i].set_title(axis_names[i], fontsize=16)\n axes[i].axis('off')\n\n for col in range(n):\n for i in range(nrows):\n ax = fig.add_subplot(nrows, n, col + 1 + i * n)\n plt.imshow(images[i][col].reshape(sizes[i], sizes[i]))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n fig.tight_layout()\n plt.show()\n\nvisualize_imgs(\n 2,\n ['Original Images', 'Reconstructions'],\n [x_test, decoded_imgs],\n [image_size, image_size]\n)", "This shows 10 original images with their corresponding reconstructed images directly below. 
Clearly, our autoencoder captured the basic digit structure of each image, though the reconstructed images are less sharp.\nApplication: Image Compression\nAutoencoders have been used extensively in image compression and processing. An autoencoder can create higher resolution images from low-resolution images, and even colorize black and white images.\nTo see how autoencoders can be used to compress images, we can use our already trained encoder as an image compressor. You can think of the decoder as a decompressor, reconstructing the original image from the compressed one.", "# Compress original images.\nencoded_imgs = encoder.predict(x_test)\n# Reconstruct original images.\ndecoded_imgs = decoder.predict(encoded_imgs)\n\nvisualize_imgs(\n 3,\n ['Original Images', '36-dimensional Latent Representation', 'Reconstructions'],\n [x_test, encoded_imgs, decoded_imgs],\n [image_size, 6, image_size]\n)", "Now we can visualize the latent representation of each image that the autoencoder learned. Since this reduces the 784-dimensional original image to a 36-dimensional image, it essentially performs an image compression.\nApplication: Image Denoising\nAutoencoders can also \"denoise\" images, such as poorly scanned pictures, and even partially damaged and destroyed paper documents (Kaggle dataset). To train a denoising autoencoder, we must first add noise to the images. \nNote: \"Noise\" refers to something that interferes with the quality of original input, such as static in an image or a partially jumbled message.\nAdd Noise\nimgaug is a useful package to perform various image augmentations. Many of the arithmetic functions in the package simulate adding noise to an image. We'll use the SaltAndPepper technique.\nNote: This will take slightly under a minute to run on the full training and testing sets.", "from imgaug import augmenters\n\n# Reshape images to 3-dimensional for augmenter. Since the images were\n# originally 2-dimensional, the third dimension is just 1.\nx_train = x_train.reshape(-1, image_size, image_size, 1)\nx_test = x_test.reshape(-1, image_size, image_size, 1)\n \n# p is the probability of changing a pixel to noise.\n# higher values of p mean noisier images.\nnoise = augmenters.SaltAndPepper(p=0.6)\n# We could chain multiple augmenters using Sequential.\nseq = augmenters.Sequential([noise])\n\n# Rescale pixel values to 0-255 (instead of 0-1) for augmenter,\n# add noise to images, then rescale pixel values back to 0-1.\nx_train_noise = seq.augment_images(x_train * 255) / 255\nx_test_noise = seq.augment_images(x_test * 255) / 255", "For comparison, here are what 5 images look like before we add noise:", "f, ax = plt.subplots(figsize=(20,2), nrows=1, ncols=5)\nfor i in range(5, 10):\n ax[i-5].imshow(x_train[i].reshape(image_size, image_size))\nplt.show()", "After we add noise, the images look like this:", "f, ax = plt.subplots(figsize=(20,2), nrows=1, ncols=5)\nfor i in range(5, 10):\n ax[i-5].imshow(x_train_noise[i].reshape(image_size, image_size))\nplt.show()", "As you can see, the images are quite noisy and difficult to denoise even with the human eye. Luckily, autoencoders are much better at this task. We'll follow a similar architecture as before, but this time we'll train the model using the noisy images as input and the original, un-noisy images as output.\nEncoder\nWe will need a more sophisticated encoder / decoder architecture to handle the more complex problem. The encoder will use 3 Conv2D layers, with decreasing output filter sizes and a MaxPool layer after each. 
This will perform the desired effect of compressing, or \"downsampling\", the image.\nSince we are using convolutional layers, we can work directly with the 3-dimensional images.", "from tensorflow.keras.layers import Conv2D, MaxPool2D, UpSampling2D\n\nfilter_1 = 64\nfilter_2 = 32\nfilter_3 = 16\nkernel_size = (3, 3)\npool_size = (2, 2)\nlatent_dim = 4\n\ninput_layer = Input(shape=(image_size, image_size, 1))\n# First convolutional layer\nencoder_conv1 = Conv2D(filter_1, kernel_size,\n activation='relu', padding='same')(input_layer)\nencoder_pool1 = MaxPool2D(pool_size, padding='same')(encoder_conv1)\n# Second convolutional layer\nencoder_conv2 = Conv2D(filter_2, kernel_size, activation='relu',\n padding='same')(encoder_pool1)\nencoder_pool2 = MaxPool2D(pool_size, padding='same')(encoder_conv2)\n# Third convolutional layer\nencoder_conv3 = Conv2D(filter_3, kernel_size,\n activation='relu', padding='same')(encoder_pool2)\nlatent_layer = MaxPool2D(pool_size, padding='same')(encoder_conv3)\n\nencoder_denoise = Model(input_layer, latent_layer, name='encoder')\nencoder_denoise.summary()", "Decoder\nThe decoder will work in reverse, using 3 Conv2D layers, with increasing output filter sizes and an UpSampling2D layer after each. This will perform the desired effect of reconstructing or denoising the image.", "latent_inputs = Input(shape=(latent_dim, latent_dim, filter_3))\n\n# First convolutional layer\ndecoder_conv1 = Conv2D(filter_3, kernel_size,\n activation='relu', padding='same')(latent_inputs)\ndecoder_up1 = UpSampling2D(pool_size)(decoder_conv1)\n# Second convolutional layer\ndecoder_conv2 = Conv2D(filter_2, kernel_size,\n activation='relu', padding='same')(decoder_up1)\ndecoder_up2 = UpSampling2D(pool_size)(decoder_conv2)\n# Third convolutional layer\ndecoder_conv3 = Conv2D(filter_1, kernel_size,\n activation='relu')(decoder_up2)\ndecoder_up3 = UpSampling2D(pool_size)(decoder_conv3)\n\n# Output layer, which outputs images of size (28 x 28 x 1)\noutput_layer = Conv2D(1, kernel_size, padding='same')(decoder_up3)\n\ndecoder_denoise = Model(latent_inputs, output_layer, name='decoder')\ndecoder_denoise.summary()", "Training\nWe will again use early stopping and the same model parameters.", "denoise_autoencoder = Model(\n input_layer,\n decoder_denoise(encoder_denoise(input_layer))\n)\n\ndenoise_autoencoder.compile(optimizer='adam', loss='mse')\ndenoise_autoencoder.summary()", "We will only train for 10 epochs this time since the model is more complex and takes longer to train. This should take around a minute.", "denoise_autoencoder.fit(\n # Input\n x_train_noise,\n # Output\n x_train,\n epochs=10,\n batch_size=2048,\n validation_data=(x_test_noise, x_test),\n callbacks=[early_stopping]\n)", "Visualize Denoised Images\nLet's visualize the first 10 denoised images.", "denoised_imgs = denoise_autoencoder.predict(x_test_noise[:10])\n\nvisualize_imgs(\n 3,\n ['Noisy Images', 'Denoised Images', 'Original Images'],\n [x_test_noise, denoised_imgs, x_test],\n [image_size, image_size, image_size]\n)", "As we can see, the autoencoder is mostly successful in recovering the original image, though a few denoised images are still blurry or unclear. More training or a different model architecture may help.\nResources\n\nIntroduction to Autoencoders\nBuilding Autoencoders in Keras\nPCA vs. 
Autoencoders\nVariational Autoencoders\nAuto-Encoding Variational Bayes paper\nGenerating Images with VAEs\nCredit Card Fraud Detection using Autoencoders\nAutoencoder Explained Video\n\nExercises\nExercise 1: Watermarks\nIn this exercise we'll perform a task similar to the denoising in the example above. The Mighty Mouse: Wolf! Wolf! dataset contains a Mighty Mouse video that has been watermarked. In this exercise you'll create an autoencoder to remove the watermark.\nFirst download and unzip the dataset.\nStudent Solution", "! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && cp kaggle.json ~/.kaggle/ && echo 'Done'\n! kaggle datasets download joshmcadams/mighty-mouse-wolf-wolf\n! unzip mighty-mouse-wolf-wolf.zip\n! ls", "We'll use the smaller videos (80x60) in this exercise in order to fit within Colab's memory limits and in order to get our model to run faster.\nmighty_mouse_80x60_watermarked.mp4 contains the feature data. This is the watermarked video file.\nmighty_mouse_80x60.mp4 contains the target data. This is the video file before watermarking.\nYour task is to build an autoencoder that can be used to restore the watermarked file back to a non-watermarked state.\nUse as many code and text cells as you need to. Explain your reasoning and work.", "# Your answer goes here", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dchandan/rebound
ipython_examples/OrbitPlot.ipynb
gpl-3.0
[ "Orbit Plot\nREBOUND comes with a simple way to plot instantaneous orbits of planetary systems. To show how this works, let's setup a test simulation with 4 planets.", "import rebound\nsim = rebound.Simulation()\nsim.add(m=1)\nsim.add(m=0.1, e=0.041, a=0.4, inc=0.2, f=0.43, Omega=0.82, omega=2.98)\nsim.add(m=1e-3, e=0.24, a=1.0, pomega=2.14)\nsim.add(m=1e-3, e=0.24, a=1.5, omega=1.14, l=2.1)\nsim.add(a=-2.7, e=1.4, f=-1.5,omega=-0.7) # hyperbolic orbit", "To plot these initial orbits in the $xy$-plane, we can simply call the OrbitPlot function and give it the simulation as an argument.", "%matplotlib inline\nfig = rebound.OrbitPlot(sim)", "Note that the OrbitPlot function chooses reasonable limits for the axes for you. There are various ways to customize the plot. Have a look at the arguments used in the following examples, which are pretty much self-explanatory (if in doubt, check the documentation!).", "fig = rebound.OrbitPlot(sim, unitlabel=\"[AU]\", color=True, trails=True, periastron=True)\n\nfig = rebound.OrbitPlot(sim, unitlabel=\"[AU]\", periastron=True, lw=2)", "Note that all orbits are draw with respect to the center of mass of all interior particles. This coordinate system is known as Jacobi coordinates. It requires that the particles are sorted by ascending semi-major axis within the REBOUND simulation's particle array." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ucsd-ccbb/jupyter-genomics
notebooks/networkAnalysis/specificity_visualization_high_dimensional_data/Visualizing and scoring labeled high dimensional data.ipynb
mit
[ "Visualizing and scoring labeled high dimensional data\n Brin Rosenthal, sbrosenthal@ucsd.edu\nApril 29, 2016\n\n<a id='toc'></a>\nTable of Contents\n\nIntroduction \nImport modules \nImport data \nPlot raw data heatmap \nParse row labels\nReduce to two dimensions\nPlot data in transformed dimensions\nIntroduce scoring method (specificity)\nPlot transformed data in specificity coordinate\n\n\n<a id='Introduction'></a>\nIntroduction\n\nIn this notebook we will walk through a workflow where we figure out how to visualize and score high dimensional data with two sets of labels.\nWe suspect our data has a lot of internal structure, and we want to pull out the datapoints most unique to a subset of labels, as well as to identify datapoints which are common across all labels.\nWe will first use dimensionality reduction techniques, including as t-distributed Stochastic Neighbor Embedding (t-SNE), to reduce the data to two dimensions.\nThen we will develop a scoring function, which rewards nearby points for having the same label as a focal point, and penalizes nearby poitns for having different labels.\nWe will calculate this score for each unique element in each label type, and plot on new 'specificity' axes. Points which have high specificity are more unique, which points with low specificity are more common. \n\n<a id='import'></a>\nImport some useful modules", "# import some useful packages\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport seaborn as sns\nimport networkx as nx\nimport pandas as pd\nimport random\nimport community\nimport json\nimport os\nfrom scipy.spatial.distance import pdist,squareform\nimport nltk\nfrom nltk import word_tokenize\nimport string\n\nfrom nltk.collocations import *\nfrom nltk.corpus import stopwords\n\n# latex rendering of text in graphs\nimport matplotlib as mpl\nmpl.rc('text', usetex = False)\nmpl.rc('font', family = 'serif')\n\n\n% matplotlib inline", "<a id='import_data'></a>\nImport the data\n\n\nData consists of a large matrix, with r rows and c columns.\nRows are labeled with 2 pieces of information:\n 1) Which disease does row belong to?\n 2) Which GO term does row belong to?\nThe values in each row represent the similarity of the focal (row) datapoint to other datapoints. Each row has at least one entry equal to 1.0. 
We can think of each row as coordinates (in c-dimensional space).", "# load the dataframe using pandas\ncluster_focal_df = pd.read_csv('cluster_diff_test_nodes_5d.csv',sep='\\t',\n index_col='index')\n\n\n# drop this column because we don't need it\ncluster_focal_df = cluster_focal_df.drop('focal_mean',1)\n\n# add a column that is the mean of values in each row, and sort by it\ncluster_focal_mean = cluster_focal_df.mean(1)\ncluster_focal_df['total_mean']=cluster_focal_mean\ncluster_focal_df = cluster_focal_df.sort('total_mean',ascending=False)\n\n\n", "TOC\n<a id='plot_heatmap'></a>\nPlot the raw data as a heatmap", "# plot the heatmap\nplt.figure(figsize=(15,15))\nplt.matshow(cluster_focal_df,fignum=False,cmap='jet',vmin=0,vmax=1,aspect='auto')\n#plt.yticks(range(len(cluster_focal_df)),list(cluster_focal_df.index),fontsize=8)\nplt.xticks(range(len(cluster_focal_df.columns)),list(cluster_focal_df.columns),rotation=90,fontsize=10)\nplt.grid('off')", "TOC\n<a id='parse_rlabels'></a>\nParse the row labels\n\n\nHere we include two functions that will be useful for parsing row labels from DF indices, and mapping these labels to colors\nNOTE These functions are specific to the example dataset used here", "\ndef build_row_colors(nodes_df,cmap = matplotlib.cm.nipy_spectral,find_col_colors = True):\n '''\n Simple helper function for plotting to return row_colors and col_colors for sns.clustermap.\n - disease names will be extracted from df indices and columns and used for plotting\n - cmap defines the desired colormap (can be any matplotlib colormap)\n \n '''\n\n # make the list of disease naes\n nodes_index = list(nodes_df.index)\n\n dname_list = []\n for idx_temp in nodes_index:\n\n idx_ = idx_temp.find('_')\n\n dname_temp = idx_temp[:idx_]\n dname_list.append(dname_temp)\n\n dname_list = pd.Series(dname_list)\n\n # make the row colors (one color per disease)\n num_diseases = len(np.unique(dname_list))\n dnames = list(np.unique(dname_list)) #list(dname_list.unique())\n\n cmap_idx_dict = dict(zip(dnames,[int(round(i/float(num_diseases)*220.)+25) for i in range(num_diseases)]))\n\n rcolors=[]\n for dfocal in dname_list:\n #color_list = [sns.color_palette('Set2',num_diseases)[cmap_idx]]*(num_dfocal)\n color_temp = cmap(cmap_idx_dict[dfocal])\n rcolors.append(color_temp)\n \n \n \n # now find the column colors\n if find_col_colors:\n dnames_split = [split_dname(d) for d in dnames]\n \n # loop over columns to find which disease it is\n colnames = list(nodes_df.columns)\n dname_col_list = [0]*len(colnames)\n for i in range(len(colnames)):\n col = colnames[i]\n for d in dnames_split:\n # is disease d in column col?\n idx_match = col.find(d[0:5])\n if idx_match>-1:\n dname_col_list[i]=d\n \n if type(dname_col_list[i]) != str:\n dname_col_list[i]='unknown'\n \n \n cmap_col_idx_dict = dict(zip(dnames_split,[int(round(i/float(num_diseases)*256.)) for i in range(num_diseases)]))\n cmap_col_idx_dict['unknown'] = 255\n print(cmap_col_idx_dict)\n \n ccolors=[]\n for dfocal in dname_col_list:\n #color_list = [sns.color_palette('Set2',num_diseases)[cmap_idx]]*(num_dfocal)\n color_temp = cmap(cmap_col_idx_dict[dfocal])\n ccolors.append(color_temp)\n \n return rcolors,ccolors,dname_col_list,dname_list\n \n else:\n return rcolors,dname_col_list,dname_list\n\n\n \ndef split_dname(dtemp):\n '''\n Helper function to split disease name into words separated by underscores\n '''\n dkeep=dtemp\n \n icount = 0 # don't look at the first letter\n for i in range(1,len(dtemp)):\n icount+=1\n c = dtemp[i]\n if c.isupper():\n 
dkeep = dkeep[0:icount]+'_'+dkeep[icount:]\n icount+=1 # add another to icount to account for new underscore\n\n return dkeep\n\ndef get_reduced_labels(nodes_df,num_common_bigrams=25):\n '''\n Reduce the cluster labels to common bigrams\n \n '''\n\n cluster_labels = list(nodes_df.index)\n # shuffle cluster_labels to get rid of local structure\n np.random.shuffle(cluster_labels)\n # build up a list of the most common words\n word_list = []\n for c in cluster_labels:\n\n # split cluster_label into parts separated by underscore\n cluster_label = c.split('_')\n GO_temp = cluster_label[2] # the third element is the GO term\n tokens = word_tokenize(GO_temp)\n word_list.extend(tokens)\n\n word_list = pd.Series(word_list)\n word_list.value_counts()\n\n\n filtered_words = [word for word in word_list if word not in stopwords.words('english')]\n\n # find common bigrams\n bigram_measures = nltk.collocations.BigramAssocMeasures()\n trigram_measures = nltk.collocations.TrigramAssocMeasures()\n\n finder = nltk.collocations.BigramCollocationFinder.from_words(filtered_words)\n\n top_N = finder.nbest(bigram_measures.raw_freq,num_common_bigrams)\n\n # loop over cluster_labels, and replace with common phrase if it occurs\n cluster_labels = list(nodes_df.index)\n reduced_labels = []\n for c in cluster_labels:\n # split cluster_label into parts separated by underscore\n cluster_label = c.split('_')\n if cluster_label[2]=='':\n GO_temp = cluster_label[3] # the fourth element is the GO term if third is blank\n else:\n GO_temp = cluster_label[2] # the third element is the GO term\n\n tokens = word_tokenize(GO_temp)\n\n is_match = False\n i = -1\n while (not is_match) and (i<len(top_N)-1):\n i+=1\n num_overlap = len(set.intersection(set(top_N[i]),set(tokens)))\n if num_overlap>=2: # for bigrams only\n is_match=True\n reduced_labels.append(top_N[i][0]+' ' + top_N[i][1])\n\n if not is_match:\n # if there isn't any match, just take the normal label\n reduced_labels.append(GO_temp)\n \n return reduced_labels\n \n\n# parse first label set (called GO terms from now on)\n\nreduced_labels = get_reduced_labels(cluster_focal_df,num_common_bigrams=0)\nreduced_label_VC = pd.Series(reduced_labels).value_counts()\n\nn_bigrams = len(np.unique(reduced_labels))-1 # include all labels\n\n# make dictionaries going from label to index and back\nlabel_to_idx = dict(zip(list(reduced_label_VC.index),range(len(reduced_label_VC))))\nidx_to_label = dict(zip(range(len(reduced_label_VC)),list(reduced_label_VC.index)))\nreduced_idx = [float(label_to_idx[label]) if label_to_idx[label]<n_bigrams else n_bigrams+1. for label in reduced_labels ]\n\nlabels = idx_to_label.values()\nkeys = idx_to_label.keys()\n\nidx_to_label_reduced = dict(zip(keys[0:n_bigrams+1],labels[0:n_bigrams+1]))\nidx_to_label_reduced[n_bigrams+1]='other' # set all unlabeled points to 'other'\n\nlabel_to_idx_reduced = dict(zip(labels[0:n_bigrams+1],keys[0:n_bigrams+1]))\nlabel_to_idx_reduced['other']=n_bigrams+1 # set all unlabeled points to 'other'\n\n\n# parse second label set (called Disease names from now on)\n\n# map diseases to colors\nrcolors,tmp1,tmp2,dname_list = build_row_colors(cluster_focal_df,cmap = matplotlib.cm.nipy_spectral,find_col_colors = True)\ndname_to_rcolors = dict(zip(dname_list.values,rcolors))\n", "TOC\n<a id='dim_reduce'></a>\nReduce to two dimensions\n\nMethods (scikit-learn implementations used here):\n- t-SNE: Van der Maaten, Laurens, and Geoffrey Hinton. 
\"Visualizing data using t-SNE.\" Journal of Machine Learning Research 9.2579-2605 (2008): 85.\n<img src=\"screenshots/sklearn_tsne.png\" width=\"600\" height=\"600\">\n\n\nPrincipal Component Analysis (PCA): M. Tipping and C. Bishop, Probabilistic Principal Component Analysis, Journal of the Royal Statistical Society, Series B, 61, Part 3, pp. 611-622\n<img src=\"screenshots/sklearn_pca.png\" width=\"600\" height=\"600\">\n\n\nIsomap: Tenenbaum, J.B.; De Silva, V.; & Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 290 (5500)\n<img src=\"screenshots/sklearn_isomap.png\" width=\"600\" height=\"600\">", "from sklearn.manifold import TSNE\nfrom sklearn.decomposition import PCA\nfrom sklearn.decomposition import NMF\nfrom sklearn.manifold import Isomap\n\n# select which dimensionality reduction technique you want here\ndim_reduct_method = 'TSNE'\n\ntsne = TSNE(n_components=2)\npca = PCA(n_components=2)\nisomap = Isomap(n_neighbors=10,n_components=2,path_method='auto')\n\n# drop total_mean column\nfocal_df = cluster_focal_df.drop('total_mean',1)\nfocal_df = focal_df.replace(to_replace=1.0,value=0.0)\n\n# make an array out of the df for input into dim reduction methods\ncluster_mat =np.array(focal_df)\n\nif dim_reduct_method=='TSNE':\n cluster_transf = tsne.fit_transform(cluster_mat)\nelif dim_reduct_method=='PCA':\n cluster_transf = pca.fit_transform(cluster_mat)\nelif dim_reduct_method=='Isomap':\n cluster_transf = isomap.fit_transform(cluster_mat)", "TOC\n<a id='plot_transformed'></a>\nPlot the data in transformed coordinates\n\n\nLeft panel: transformed coordinates color-coded by GO term. Looks like there is some grouping happening, where some points labeled by the same GO term appear to be clustered together.\nRight panel: transformed coordinates color-coded by disease name. Again there is some clear grouping happening, easily identified by eye.\n\nCan we quantify our observations by developing a scoring method to evaluate how localized points are by GO term and by disease name?", "plt.figure(figsize=(20,10))\n\nplt.subplot(1,2,1)\nplt.plot(cluster_transf[:,0],cluster_transf[:,1],'o',color='gray',markersize=4)\nfor i in range(len(idx_to_label_reduced)):\n \n reduced_labels = pd.Series(reduced_labels)\n label_temp = idx_to_label_reduced[i]\n idx_focal = list(reduced_labels[reduced_labels==label_temp].index)\n if len(idx_focal)>0:\n col_temp =matplotlib.cm.Set1(int(round(float(i)/len(idx_to_label_reduced)*255)))\n\n plt.plot(cluster_transf[idx_focal,0],cluster_transf[idx_focal,1],'o',color=col_temp,label=idx_to_label_reduced[i],\n markersize=5)\n#plt.legend(loc='upper left',fontsize=10,ncol=1)\n#plt.xlim([-30,30])\nplt.title(dim_reduct_method+' transformed data \\ncolor-coded by GO term',fontsize=18)\n\nplt.subplot(1,2,2)\nfor d in dname_to_rcolors.keys():\n \n idx_focal = list(dname_list[dname_list==d].index)\n if len(idx_focal)>0:\n col_temp =dname_to_rcolors[d]\n\n plt.plot(cluster_transf[idx_focal,0],cluster_transf[idx_focal,1],'o',color=col_temp,label=d,\n markersize=5)\nplt.legend(fontsize=14,loc='lower left')\nplt.title(dim_reduct_method+' transformed data \\ncolor-coded by disease name',fontsize=18)\n#plt.xlim([-30,30])\n\n", "TOC\n<a id='scoring_method'></a>\nScoring method (Specificity)\n\n\n\nOur scoring method measures a weighted distance ($S$) between all pairs of points in the dataset, wehre the weights are determined by the labels. 
If two nearby points have the same label, they will be rewarded, if they have different labels, they will be penalized.\n$ s_i = \\sum_{j=1}^N \\frac{1}{N}F(d_{ij}) \\delta(c_{ij}) $\n\n\nDistances ($d_{ij}$ are Euclidean distances meausured in 2-d reduced space.\n\n$\\delta(c_{ij})$ is 0 if points $i$ and $j$ have different labels, and 1 if they have the same labels.\nThe distance transformation function $F(d_{ij})$ is selected by the user based on desired encoding of distance. This transformation is necessary because we want to reward nearby points in our weighted average. Choices are:\n'log_inv': $F(x) = \\log(1/x)$\n'inv': $F(x) = 1/x$\n'sub': $F(x) = 1-x/\\max(x)$\n'rank': $F(x) = (1-rank(x))/N$\n'rank_inv': $F(x) = 1/rank(x)$", "def weighted_score(x,y,labels1,labels2,dtype='log_inv'):\n '''\n This function calculates the weighted scores of points in x,y, defined by labels1 and labels2.\n - Points are scored more highly if they are close to other points with the same label, and are penalized if \n they are close to points with different labels. \n \n '''\n\n d = squareform(pdist(np.transpose([x,y])))\n #d = squareform(pdist(cluster_mat))\n \n if dtype=='log_inv':\n d_log_inv = np.log(1/d)\n np.fill_diagonal(d_log_inv,0)\n d_transf = d_log_inv\n elif dtype=='inv':\n d_inv = 1/d\n np.fill_diagonal(d_inv,0)\n d_transf = d_inv\n elif dtype=='sub':\n d_sub = 1 - d/np.max(d)\n np.fill_diagonal(d_sub,1)\n d_transf = d_sub\n \n elif dtype=='rank':\n d_rank = []\n for i in range(len(d)):\n d_rank.append(len(d)-np.argsort(d[i,:]))\n \n d_transf = d_rank\n elif dtype=='rank_inv':\n d_inv_rank = []\n for i in range(len(d)):\n d_inv_rank.append(1./(np.argsort(d[i,:])+1))\n \n d_transf = d_inv_rank\n\n\n \n labels1 = pd.Series(labels1)\n label_delta_mat = np.zeros((len(labels1),len(labels1)))\n for i in range(len(labels1)):\n label_temp = labels1==labels1[i]\n label_plus_minus = [(int(label)-.5)*2 for label in label_temp]\n\n label_delta_mat[i,:] = label_plus_minus\n\n\n score1 = np.mean(d_transf*label_delta_mat,axis=0)\n\n\n labels2 = pd.Series(labels2)\n label_delta_mat = np.zeros((len(labels2),len(labels2)))\n for i in range(len(labels2)):\n label_temp = labels2==labels2[i]\n label_plus_minus = [(int(label)-.5)*2 for label in label_temp]\n\n label_delta_mat[i,:] = label_plus_minus\n\n\n score2 = np.mean(d_transf*label_delta_mat,axis=0)\n \n return score1,score2\n \n \n\n# calculate the score here\nx = cluster_transf[:,0]\ny = cluster_transf[:,1]\nlabels1 = [l if l in label_to_idx_reduced.keys() else 'other' for l in reduced_labels]\nlabels2 = dname_list\n\nscore1,score2 = weighted_score(x,y,labels1,labels2,dtype='log_inv')\n\n# make a dataframe to store the score results\nScore_df = pd.DataFrame({'score1':list(score1),'score2':list(score2),\n 'GOlabels':list(labels1),'Dnames':list(dname_list)},index=range(len(score1)))\n\n# calculate the average score for each GOterm and disease name\nsGO_GB_mean = []\nsD_GB_mean = []\nsGO_GB_mean = Score_df.groupby('GOlabels').mean()\nsD_GB_mean = Score_df.groupby('Dnames').mean()\n\n\n# measure how many disease names are associated with each GOterm\nGO_GB_D = Score_df['Dnames'].groupby(Score_df['GOlabels']).value_counts()\n\n\n\n# need to normalize by total number of clusters in each disease\nclusters_per_disease = Score_df['Dnames'].value_counts()\nclusters_per_GOterm = Score_df['GOlabels'].value_counts()\n\n\n# plot the reduced data in specificity coordinates 
here\nplt.figure(figsize=(14,7))\nplt.subplot(1,2,1)\nplt.scatter(score1,score2,c=[label_to_idx_reduced[l] for l in labels1],cmap='jet')\nplt.xlabel('GO specificity',fontsize=16)\nplt.ylabel('Disease specificity',fontsize=16)\nplt.title('color-coded by GO term',fontsize=16)\n\n\nplt.subplot(1,2,2)\nplt.scatter(score1,score2,c=[dname_to_rcolors[d] for d in dname_list],cmap='jet')\nplt.xlabel('GO specificity',fontsize=16)\nplt.ylabel('Disease specificity',fontsize=16)\nplt.title('color-coded by disease name',fontsize=16)\n", "TOC\n<a id='plot_specificity'></a>\nPlot the average specificities per GO term and per disease name\n\n\n\nPlot points as label names\n\n\nLeft panel: GO term plotted in specificity coordinates. Points are color-coded by the disease which contains the most counts of that term. Points are larger if the GO term has more occurrences in the data. \n\nGO terms with high GO specificity and Disease specificity (upper right quadrant) are likely to be found nearby to other points with the same GO label and disease label. \nGO terms with high GO specificity but low disease specificity are likely to be found near points with the same GO labels, but different disease labels\nGO terms with low GO specificity, but high disease specificity are likely to be found near points with different GO labels, but the same disease labels.\nGo terms with low specificity in both GO and Disease (lower left quadrant) are not likely to be found near other points with the same labels. \n\n\n\nRight panel: Disease names plotted in specificity coordinates. \n\nDiseases with high specificity in both GO and Disease are likely to be found near points with the same GO labels and Disease labels.\nDiseases with high GO specificity but low disease specificity are found near points with the same GO labels, but different disease labels.\nDiseases with low GO specificity but high disease specificity are found near points with different GO labels, but the same disease labels.\nDiseases with low specificity in both GO and disease are not likely to be found near other points with the same labels.", "fig = plt.figure(figsize=(15,15))\naxes = fig.add_subplot(1,1,1)\nsubpos = [0.7,0.7,0.25,0.25]\nfor GOname in list(sGO_GB_mean.index):\n msize = np.log(clusters_per_GOterm[GOname])*3*15 # set the marker size\n \n # get the text color\n D_freq_norm = GO_GB_D[GOname]# /clusters_per_disease # normalize by number of clusters per disease\n D_freq_norm.sort(ascending=False)\n\n if (D_freq_norm[0]/float(np.sum(D_freq_norm))) > .5:\n most_frequent_D = D_freq_norm.index[0] # get the most frequent disease for focal GO term\n color_temp = dname_to_rcolors[most_frequent_D]\n else:\n # if focal GOname doesn't really belong to any disease, make it white\n color_temp='black'\n axes.plot(sGO_GB_mean['score1'][GOname],sGO_GB_mean['score2'][GOname],\n '.',marker=r'$'+GOname[0:20]+'$',markersize=msize,color=color_temp)\n \nplt.xlabel('GO specificity',fontsize=16)\nplt.ylabel('Disease specificity',fontsize=16)\nplt.xlim([2.5,3.5])\nplt.ylim([0.5,3.2])\n \nsubax1 = add_subplot_axes(axes,subpos)\n\nfor Dname in list(sD_GB_mean.index):\n msize = len(Dname)*5\n subax1.plot(sD_GB_mean['score1'][Dname],sD_GB_mean['score2'][Dname],\n '.',marker=r'$'+Dname+'$',markersize=msize,color=dname_to_rcolors[Dname])\n \nplt.xlabel('GO specificity',fontsize=12)\nplt.ylabel('Disease specificity',fontsize=12)\nplt.xlim([2.5,3.5])\n\n\n\ndef add_subplot_axes(ax,rect,axisbg='w'):\n '''\n This function allows for plotting of inset subplots (from 
http://stackoverflow.com/questions/17458580/embedding-small-plots-inside-subplots-in-matplotlib)\n '''\n \n fig = plt.gcf()\n box = ax.get_position()\n width = box.width\n height = box.height\n inax_position = ax.transAxes.transform(rect[0:2])\n transFigure = fig.transFigure.inverted()\n infig_position = transFigure.transform(inax_position) \n x = infig_position[0]\n y = infig_position[1]\n width *= rect[2]\n height *= rect[3] # <= Typo was here\n subax = fig.add_axes([x,y,width,height],axisbg=axisbg)\n x_labelsize = subax.get_xticklabels()[0].get_size()\n y_labelsize = subax.get_yticklabels()[0].get_size()\n x_labelsize *= rect[2]**0.5\n y_labelsize *= rect[3]**0.5\n subax.xaxis.set_tick_params(labelsize=x_labelsize)\n subax.yaxis.set_tick_params(labelsize=y_labelsize)\n return subax", "TOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AnasFullStack/Awesome-Full-Stack-Web-Developer
algorithms/python_revision.ipynb
mit
[ "Python Quick Revision\nBook URL\n1.8. Getting Started with Data", "print(2 ** 10)\nprint(2 ** 100)\nprint(7 // 3)\nprint(7 / 3)\nprint(7 % 3)", "1.8.2. Built-in Collection Data Types\n1. lists\n Lists are heterogeneous, meaning that the data objects need not all be from the same class and the collection can be assigned to a variable as below. \n| Operation Name | Operator | Explanation |\n| --- | --- | --- |\n| indexing | [ ] | Access an element of a sequence |\n| concatenation | + | Combine sequences together |\n| repetition | * | Concatenate a repeated number of times |\n| membership | in | Ask whether an item is in a sequence |\n| length | len | Ask the number of items in the sequence |\n| slicing | [ : ] | Extract a part of a sequence |", "fakeList = ['str', 12, True, 1.232] # heterogeneous\nprint(fakeList)\nmyList = [1,2,3,4]\nA = [myList] * 3\nprint(A)\nmyList[2]=45454545\nprint(A)", "| Method Name | Use | Explanation |\n| --- | --- | --- |\n| append | alist.append(item) | Adds a new item to the end of a list |\n| insert | alist.insert(i,item) | Inserts an item at the ith position in a list |\n| pop | alist.pop() | Removes and returns the last item in a list |\n| pop | alist.pop(i) | Removes and returns the ith item in a list |\n| sort | alist.sort() | Modifies a list to be sorted |\n| reverse | alist.reverse() | Modifies a list to be in reverse order |\n| del | del alist[i] | Deletes the item in the ith position |\n| index | alist.index(item) | Returns the index of the first occurrence of item |\n| count | alist.count(item) | Returns the number of occurrences of item |\n| remove | alist.remove(item) | Removes the first occurrence of item |", "myList = [1024, 3, True, 6.5]\nmyList.append(False)\nprint(myList)\nmyList.insert(2,4.5)\nprint(myList)\nprint(myList.pop())\nprint(myList)\nprint(myList.pop(1))\nprint(myList)\nmyList.pop(2)\nprint(myList)\nmyList.sort()\nprint(myList)\nmyList.reverse()\nprint(myList)\nprint(myList.count(6.5))\nprint(myList.index(4.5))\nmyList.remove(6.5)\nprint(myList)\ndel myList[0]\nprint(myList)\n\nprint(list(range(10)))\nprint(list(range(5,10)))\nprint(list(range(5,10,2)))\nprint(list(range(10,1,-1)))", "2. Strings\n| Method Name | Use | Explanation |\n| --- | --- | --- |\n| center | astring.center(w) | Returns a string centered in a field of size w |\n| count | astring.count(item) | Returns the number of occurrences of item in the string |\n| ljust | astring.ljust(w) | Returns a string left-justified in a field of size w |\n| lower | astring.lower() | Returns a string in all lowercase |\n| rjust | astring.rjust(w) | Returns a string right-justified in a field of size w |\n| find | astring.find(item) | Returns the index of the first occurrence of item |\n| split | astring.split(schar) | Splits a string into substrings at schar |", "myName= \"David\"\nprint(myName[3])\nprint(myName * 2)\nprint(len(myName))\nprint(myName.upper())\nprint('.' + myName.center(10) + '.')\nprint('.' + myName.ljust(10) + '.')\nprint('.' + myName.rjust(10) + '.')\nprint(myName.find('v'))\nprint(myName.split('v'))", "A major difference between lists and strings is that lists can be modified while strings cannot. This is referred to as mutability. Lists are mutable; strings are immutable. For example, you can change an item in a list by using indexing and assignment. With a string that change is not allowed. \n3. Tuples\n Tuples are very similar to lists in that they are heterogeneous sequences of data. The difference is that a tuple is immutable, like a string. 
A tuple cannot be changed.", "myTuple = (2,True,4.96)\nprint(myTuple)\nprint(len(myTuple))", "However, if you try to change an item in a tuple, you will get an error. Note that the error message provides location and reason for the problem.", "myTuple[1]=False", "4. Set\nA set is an unordered collection of zero or more immutable Python data objects. Sets do not allow duplicates and are written as comma-delimited values enclosed in curly braces. The empty set is represented by set(). Sets are heterogeneous, and the collection can be assigned to a variable as below.", "print({3,6,\"cat\",4.5,False})\nmySet = {3,6,\"cat\",4.5,False}\nprint(mySet)", "| Operation Name | Operator | Explanation |\n| --- | --- | --- |\n| membership | in | Set membership |\n| length | len | Returns the cardinality of the set |\n| &#124; | aset &#124; otherset | Returns a new set with all elements from both sets |\n| &amp; | aset &amp; otherset | Returns a new set with only those elements common to both sets |\n| - | aset - otherset | Returns a new set with all items from the first set not in second |\n| &lt;= | aset &lt;= otherset | Asks whether all elements of the first set are in the second |\n| Method Name | Use | Explanation |\n| --- | --- | --- |\n| union | aset.union(otherset) | Returns a new set with all elements from both sets |\n| intersection | aset.intersection(otherset) | Returns a new set with only those elements common to both sets |\n| difference | aset.difference(otherset) | Returns a new set with all items from first set not in second |\n| issubset | aset.issubset(otherset) | Asks whether all elements of one set are in the other |\n| add | aset.add(item) | Adds item to the set |\n| remove | aset.remove(item) | Removes item from the set |\n| pop | aset.pop() | Removes an arbitrary element from the set |\n| clear | aset.clear() | Removes all elements from the set |", "mySet = {3,6,\"cat\",4.5,False}\nprint(mySet)\nyourSet = {99,3,100}\nprint(yourSet)\n\nprint( mySet.union(yourSet))\nprint( mySet | yourSet)\n\nprint( mySet.intersection(yourSet))\nprint( mySet & yourSet)\n\nprint( mySet.difference(yourSet))\nprint( mySet - yourSet)\n\nprint( {3,100}.issubset(yourSet))\nprint( {3,100}<=yourSet)\n\nmySet.add(\"house\")\nprint( mySet)\n\nmySet.remove(4.5)\nprint( mySet)\n\nmySet.pop()\nprint( mySet)\n\nmySet.clear()\nprint( mySet)", "5. Dictionary\nDictionaries are collections of associated pairs of items where each pair consists of a key and a value. This key-value pair is typically written as key:value. Dictionaries are written as comma-delimited key:value pairs enclosed in curly braces. 
For example,", "capitals = {'Iowa':'DesMoines','Wisconsin':'Madison'}\nprint(capitals)\nprint(capitals['Iowa'])\ncapitals['Utah']='SaltLakeCity'\nprint(capitals)\ncapitals['California']='Sacramento'\nprint(len(capitals))\nfor k in capitals:\n print(capitals[k],\" is the capital of \", k)", "| Operator | Use | Explanation |\n| --- | --- | --- |\n| [] | myDict[k] | Returns the value associated with k, otherwise its an error |\n| in | key in adict | Returns True if key is in the dictionary, False otherwise |\n| del | del adict[key] | Removes the entry from the dictionary |\n| Method Name | Use | Explanation |\n| --- | --- | --- |\n| keys | adict.keys() | Returns the keys of the dictionary in a dict_keys object |\n| values | adict.values() | Returns the values of the dictionary in a dict_values object |\n| items | adict.items() | Returns the key-value pairs in a dict_items object |\n| get | adict.get(k) | Returns the value associated with k, None otherwise |\n| get | adict.get(k,alt) | Returns the value associated with k, alt otherwise |", "phoneext={'david':1410,'brad':1137}\nprint(phoneext)\nprint(phoneext.keys())\nprint(list(phoneext.keys()))\nprint(phoneext.values())\nprint(list(phoneext.values()))\nprint(phoneext.items())\nprint(list(phoneext.items()))\nprint(phoneext.get(\"kent\"))\nprint(phoneext.get(\"kent\",\"NO ENTRY\"))\n", "1.9. Input and Output", "aName = input(\"Please enter your name \")\nprint(\"Your name in all capitals is\",aName.upper(),\n \"and has length\", len(aName))\n\nsradius = input(\"Please enter the radius of the circle \")\nradius = float(sradius)\ndiameter = 2 * radius\nprint(diameter)", "1.9.1. String Formatting", "print(\"Hello\",\"World\")\nprint(\"Hello\",\"World\", sep=\"***\")\nprint(\"Hello\",\"World\", end=\"***\")\n\naName = \"Anas\"\nage = 10\nprint(aName, \"is\", age, \"years old.\")\nprint(\"%s is %d years old.\" % (aName, age)) \n# The % operator is a string operator called the format operator.", "| Character | Output Format |\n| --- | --- |\n| d, i | Integer |\n| u | Unsigned integer |\n| f | Floating point as m.ddddd |\n| e | Floating point as m.ddddde+/-xx |\n| E | Floating point as m.dddddE+/-xx |\n| g | Use %e for exponents less than <span class=\"math\"><span class=\"MathJax_Preview\" style=\"color: inherit; display: none;\"> <span class=\"MathJax\" id=\"MathJax-Element-1-Frame\" tabindex=\"0\" data-mathml=\"\\<math xmlns=&quot;http://www.w3.org/1998/Math/MathML&quot;>\\<nobr aria-hidden=\"true\">\\<span class=\"math\" id=\"MathJax-Span-1\" style=\"width: 1.432em; display: inline-block;\">\\<span style=\"display: inline-block; position: relative; width: 1.193em; height: 0px; font-size: 120%;\">\\<span style=\"position: absolute; clip: rect(1.729em 1001.19em 2.86em -999.997em); top: -2.557em; left: 0em;\">\\<span class=\"mrow\" id=\"MathJax-Span-2\">\\<span class=\"mo\" id=\"MathJax-Span-3\" style=\"font-family: STIXGeneral-Regular;\">− \\<span class=\"mn\" id=\"MathJax-Span-4\" style=\"font-family: STIXGeneral-Regular;\">4 \\<span style=\"display: inline-block; width: 0px; height: 2.562em;\"> \\<span style=\"display: inline-block; overflow: hidden; vertical-align: -0.211em; border-left: 0px solid; width: 0px; height: 1.146em;\"> \\</nobr>\\<span class=\"MJX_Assistive_MathML\" role=\"presentation\">\\<math xmlns=\"http://www.w3.org/1998/Math/MathML\">\\<mo>−\\</mo>\\<mn>4\\</mn>\\</math> \\<mo>&amp;#x2212;\\</mo>\\<mn>4\\</mn>\\</math>\" role=\"presentation\" style=\"position: relative;\"> <script type=\"math/tex\" 
id=\"MathJax-Element-1\">-4</script> or greater than <span class=\"math\"><span class=\"MathJax_Preview\" style=\"color: inherit; display: none;\"> <span class=\"MathJax\" id=\"MathJax-Element-2-Frame\" tabindex=\"0\" data-mathml=\"\\<math xmlns=&quot;http://www.w3.org/1998/Math/MathML&quot;>\\<nobr aria-hidden=\"true\">\\<span class=\"math\" id=\"MathJax-Span-5\" style=\"width: 1.432em; display: inline-block;\">\\<span style=\"display: inline-block; position: relative; width: 1.193em; height: 0px; font-size: 120%;\">\\<span style=\"position: absolute; clip: rect(1.67em 1001.13em 2.801em -999.997em); top: -2.557em; left: 0em;\">\\<span class=\"mrow\" id=\"MathJax-Span-6\">\\<span class=\"mo\" id=\"MathJax-Span-7\" style=\"font-family: STIXGeneral-Regular;\">+ \\<span class=\"mn\" id=\"MathJax-Span-8\" style=\"font-family: STIXGeneral-Regular;\">5 \\<span style=\"display: inline-block; width: 0px; height: 2.562em;\"> \\<span style=\"display: inline-block; overflow: hidden; vertical-align: -0.139em; border-left: 0px solid; width: 0px; height: 1.004em;\"> \\</nobr>\\<span class=\"MJX_Assistive_MathML\" role=\"presentation\">\\<math xmlns=\"http://www.w3.org/1998/Math/MathML\">\\<mo>+\\</mo>\\<mn>5\\</mn>\\</math> \\<mo>+\\</mo>\\<mn>5\\</mn>\\</math>\" role=\"presentation\" style=\"position: relative;\"> <script type=\"math/tex\" id=\"MathJax-Element-2\">+5</script> , otherwise use %f |\n| c | Single character |\n| s | String, or any Python data object that can be converted to a string by using the str function. |\n| % | Insert a literal % character |\n| Modifier | Example | Description |\n| --- | --- | --- |\n| number | %20d | Put the value in a field width of 20 |\n| - | %-20d | Put the value in a field 20 characters wide, left-justified |\n| + | %+20d | Put the value in a field 20 characters wide, right-justified |\n| 0 | %020d | Put the value in a field 20 characters wide, fill in with leading zeros. |\n| . | %20.2f | Put the value in a field 20 characters wide with 2 characters to the right of the decimal point. |\n| (name) | %(name)d | Get the value from the supplied dictionary using name as the key.", "price = 24\nitem = \"banana\"\nprint(\"The %s costs %d cents\" % (item, price))\nprint(\"The %+10s costs %5.2f cents\" % (item, price))\nprint(\"The %+10s costs %10.2f cents\" % (item, price))\nprint(\"The %+10s costs %010.2f cents\" % (item, price))\nitemdict = {\"item\":\"banana\",\"cost\":24}\nprint(\"The %(item)s costs %(cost)7.1f cents\" % itemdict)\n", "1.10. Control Structures\n algorithms require two important control structures: iteration and selection. \n- Iteration\n1. While", "counter = 1\nwhile counter <= 5:\n print(\"Hello, world\")\n counter = counter + 1", "2. for", "for item in [1,3,6,2,5]:\n print(item)\n\nfor item in range(5):\n... print(item**2)\n\nwordlist = ['cat','dog','rabbit']\nletterlist = [ ]\nfor aword in wordlist:\n for aletter in aword:\n if(aletter not in letterlist):\n letterlist.append(aletter)\nprint(letterlist)", "list comprehension", "sqlist=[]\nfor x in range(1,11):\n sqlist.append(x*x)\nprint(sqlist) \n\nsqlist2=[x*x for x in range(1,11)] # list comprehension\nprint(sqlist2)\n\nsqlist=[x*x for x in range(1,11) if x%2 != 0]\nprint(sqlist)\n\n[ch.upper() for ch in 'comprehension' if ch not in 'aeiou']\n\nwordlist = ['cat','dog','rabbit']\n\nuniqueLetters = [letter for word in wordlist for letter in word]\nprint(uniqueLetters)", "1.12. 
Defining Functions\nproblem\nHere’s a self check that really covers everything so far.\nYou may have heard of the infinite monkey theorem?\nThe theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare.\nWell, suppose we replace a monkey with a Python function. How long do you think it would take for a Python function to generate just one sentence of Shakespeare? The sentence we’ll shoot for is: “methinks it is like a weasel”\nYou’re not going to want to run this one in the browser, so fire up your favorite Python IDE. The way we’ll simulate this is to write a function that generates a string that is 27 characters long by choosing random letters from the 26 letters in the alphabet plus the space. We’ll write another function that will score each generated string by comparing the randomly generated string to the goal.\nA third function will repeatedly call generate and score, then if 100% of the letters are correct we are done. If the letters are not correct then we will generate a whole new string.To make it easier to follow your program’s progress this third function should print out the best string generated so far and its score every 1000 tries.", "import string\nimport random\nimport time\n\nstart_time = time.time()\n\ndef generate_new_sentense():\n sentense = [random.choice(string.ascii_lowercase + \" \") for x in range(28) ]\n return \"\".join(sentense)\n\ndef compare_sentences(guess):\n target_sentence = \"methinks it is like a weasel\"\n return guess == target_sentence\n\ndef main():\n i= 0\n print (i)\n guess = generate_new_sentense()\n print (guess)\n while not compare_sentences(guess):\n guess = generate_new_sentense()\n print (guess)\n i+= 1\n print (i)\n# main()\n\nprint(\"--- %s seconds ---\" % (time.time() - start_time))", "1.13. Object-Oriented Programming in Python: Defining Classes\n1.13.1. A Fraction Class", "class Fraction:\n \n def __init__(self, top, bottom):\n self.num = top\n self.den = bottom\n\n def show(self):\n print(self.num,\"/\",self.den)\n \n # Overriding the default __str__ function\n def __str__(self):\n return str(self.num)+\"/\"+str(self.den)\n \n def __add__(self,otherfraction):\n newnum = self.num * otherfraction.den + self.den * otherfraction.num\n newden = self.den * otherfraction.den\n return Fraction(newnum,newden)\n \nmyfraction = Fraction(3,5)\nprint(myfraction)\nprint(myfraction.show())\nf1 = Fraction(1,4)\nf2 = Fraction(1,2)\nf3 = f1 + f2\nprint(f3)\n " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
wbinventor/openmc
examples/jupyter/candu.ipynb
mit
[ "In this example, we will create a typical CANDU bundle with rings of fuel pins. At present, OpenMC does not have a specialized lattice for this type of fuel arrangement, so we must resort to manual creation of the array of fuel pins.", "%matplotlib inline\nfrom math import pi, sin, cos\nimport numpy as np\nimport openmc", "Let's begin by creating the materials that will be used in our model.", "fuel = openmc.Material(name='fuel')\nfuel.add_element('U', 1.0)\nfuel.add_element('O', 2.0)\nfuel.set_density('g/cm3', 10.0)\n\nclad = openmc.Material(name='zircaloy')\nclad.add_element('Zr', 1.0)\nclad.set_density('g/cm3', 6.0)\n\nheavy_water = openmc.Material(name='heavy water')\nheavy_water.add_nuclide('H2', 2.0)\nheavy_water.add_nuclide('O16', 1.0)\nheavy_water.add_s_alpha_beta('c_D_in_D2O')\nheavy_water.set_density('g/cm3', 1.1)", "With out materials created, we'll now define key dimensions in our model. These dimensions are taken from the example in section 11.1.3 of the Serpent manual.", "# Outer radius of fuel and clad\nr_fuel = 0.6122\nr_clad = 0.6540\n\n# Pressure tube and calendria radii\npressure_tube_ir = 5.16890\npressure_tube_or = 5.60320\ncalendria_ir = 6.44780\ncalendria_or = 6.58750\n\n# Radius to center of each ring of fuel pins\nring_radii = np.array([0.0, 1.4885, 2.8755, 4.3305])", "To begin creating the bundle, we'll first create annular regions completely filled with heavy water and add in the fuel pins later. The radii that we've specified above correspond to the center of each ring. We actually need to create cylindrical surfaces at radii that are half-way between the centers.", "# These are the surfaces that will divide each of the rings\nradial_surf = [openmc.ZCylinder(R=r) for r in\n (ring_radii[:-1] + ring_radii[1:])/2]\n\nwater_cells = []\nfor i in range(ring_radii.size):\n # Create annular region\n if i == 0:\n water_region = -radial_surf[i]\n elif i == ring_radii.size - 1:\n water_region = +radial_surf[i-1]\n else:\n water_region = +radial_surf[i-1] & -radial_surf[i]\n \n water_cells.append(openmc.Cell(fill=heavy_water, region=water_region))", "Let's see what our geometry looks like so far. In order to plot the geometry, we create a universe that contains the annular water cells and then use the Universe.plot() method. While we're at it, we'll set some keyword arguments that can be reused for later plots.", "plot_args = {'width': (2*calendria_or, 2*calendria_or)}\nbundle_universe = openmc.Universe(cells=water_cells)\nbundle_universe.plot(**plot_args)", "Now we need to create a universe that contains a fuel pin. Note that we don't actually need to put water outside of the cladding in this universe because it will be truncated by a higher universe.", "surf_fuel = openmc.ZCylinder(R=r_fuel)\n\nfuel_cell = openmc.Cell(fill=fuel, region=-surf_fuel)\nclad_cell = openmc.Cell(fill=clad, region=+surf_fuel)\n\npin_universe = openmc.Universe(cells=(fuel_cell, clad_cell))\n\npin_universe.plot(**plot_args)", "The code below works through each ring to create a cell containing the fuel pin universe. As each fuel pin is created, we modify the region of the water cell to include everything outside the fuel pin.", "num_pins = [1, 6, 12, 18]\nangles = [0, 0, 15, 0]\n\nfor i, (r, n, a) in enumerate(zip(ring_radii, num_pins, angles)):\n for j in range(n):\n # Determine location of center of pin\n theta = (a + j/n*360.) 
* pi/180.\n x = r*cos(theta)\n y = r*sin(theta)\n \n pin_boundary = openmc.ZCylinder(x0=x, y0=y, R=r_clad)\n water_cells[i].region &= +pin_boundary\n \n # Create each fuel pin -- note that we explicitly assign an ID so \n # that we can identify the pin later when looking at tallies\n pin = openmc.Cell(fill=pin_universe, region=-pin_boundary)\n pin.translation = (x, y, 0)\n pin.id = (i + 1)*100 + j\n bundle_universe.add_cell(pin)\n\nbundle_universe.plot(**plot_args)", "Looking pretty good! Finally, we create cells for the pressure tube and calendria and then put our bundle in the middle of the pressure tube.", "pt_inner = openmc.ZCylinder(R=pressure_tube_ir)\npt_outer = openmc.ZCylinder(R=pressure_tube_or)\ncalendria_inner = openmc.ZCylinder(R=calendria_ir)\ncalendria_outer = openmc.ZCylinder(R=calendria_or, boundary_type='vacuum')\n\nbundle = openmc.Cell(fill=bundle_universe, region=-pt_inner)\npressure_tube = openmc.Cell(fill=clad, region=+pt_inner & -pt_outer)\nv1 = openmc.Cell(region=+pt_outer & -calendria_inner)\ncalendria = openmc.Cell(fill=clad, region=+calendria_inner & -calendria_outer)\n\nroot_universe = openmc.Universe(cells=[bundle, pressure_tube, v1, calendria])", "Let's look at the final product. We'll export our geometry and materials and then use plot_inline() to get a nice-looking plot.", "geom = openmc.Geometry(root_universe)\ngeom.export_to_xml()\n\nmats = openmc.Materials(geom.get_all_materials().values())\nmats.export_to_xml()\n\np = openmc.Plot.from_geometry(geom)\np.color_by = 'material'\np.colors = {\n fuel: 'black',\n clad: 'silver',\n heavy_water: 'blue'\n}\nopenmc.plot_inline(p)", "Interpreting Results\nOne of the difficulties of a geometry like this is identifying tally results when there was no lattice involved. To address this, we specifically gave an ID to each fuel pin of the form 100*ring + azimuthal position. Consequently, we can use a distribcell tally and then look at our DataFrame which will show these cell IDs.", "settings = openmc.Settings()\nsettings.particles = 1000\nsettings.batches = 20\nsettings.inactive = 10\nsettings.source = openmc.Source(space=openmc.stats.Point())\nsettings.export_to_xml()\n\nfuel_tally = openmc.Tally()\nfuel_tally.filters = [openmc.DistribcellFilter(fuel_cell)]\nfuel_tally.scores = ['flux']\n\ntallies = openmc.Tallies([fuel_tally])\ntallies.export_to_xml()\n\nopenmc.run(output=False)", "The return code of 0 indicates that OpenMC ran successfully. Now let's load the statepoint into a openmc.StatePoint object and use the Tally.get_pandas_dataframe(...) method to see our results.", "sp = openmc.StatePoint('statepoint.{}.h5'.format(settings.batches))\n\nt = sp.get_tally()\nt.get_pandas_dataframe()", "We can see that in the 'level 2' column, the 'cell id' tells us how each row corresponds to a ring and azimuthal position." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
MIT-LCP/mimic-code
mimic-iii/notebooks/aline-aws/aline-awsathena.ipynb
mit
[ "Arterial line study\nThis notebook reproduces the arterial line study in MIMIC-III. The following is an outline of the notebook:\n\nGenerate necessary materialized views in SQL\nCombine materialized views and acquire a single dataframe\nWrite this data to file for use in R\n\nThe R code then evaluates whether an arterial line is associated with mortality after propensity matching.\nNote that the original arterial line study used a genetic algorithm to select the covariates in the propensity score. We omit the genetic algorithm step, and instead use the final set of covariates described by the authors. For more detail, see:\n\nHsu DJ, Feng M, Kothari R, Zhou H, Chen KP, Celi LA. The association between indwelling arterial catheters and mortality in hemodynamically stable patients with respiratory failure: a propensity score analysis. CHEST Journal. 2015 Dec 1;148(6):1470-6.", "# Install OS dependencies. This only needs to be run once for each new notebook instance.\n!pip install PyAthena\n\nfrom pyathena import connect\nfrom pyathena.util import as_pandas\nfrom __future__ import print_function\n\n# Import libraries\nimport datetime\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport os\nimport boto3\nfrom botocore.client import ClientError\n# below is used to print out pretty pandas dataframes\nfrom IPython.display import display, HTML\n%matplotlib inline\n\n\ns3 = boto3.resource('s3')\nclient = boto3.client(\"sts\")\naccount_id = client.get_caller_identity()[\"Account\"]\nmy_session = boto3.session.Session()\nregion = my_session.region_name\nathena_query_results_bucket = 'aws-athena-query-results-'+account_id+'-'+region\n\ntry:\n s3.meta.client.head_bucket(Bucket=athena_query_results_bucket)\nexcept ClientError:\n bucket = s3.create_bucket(Bucket=athena_query_results_bucket)\n print('Creating bucket '+athena_query_results_bucket)\ncursor = connect(s3_staging_dir='s3://'+athena_query_results_bucket+'/athena/temp').cursor()\n\n\n# The Glue database name of your MIMIC-III parquet data\ngluedatabase=\"mimiciii\"\n\n# location of the queries to generate aline specific materialized views\naline_path = './'\n\n# location of the queries to generate materialized views from the MIMIC code repository\nconcepts_path = './concepts/'", "1 - Generate materialized views\nBefore generating the aline cohort, we require the following materialized views to be already generated:\n\nangus - from angus.sql\nheightweight - from HeightWeightQuery.sql\naline_vaso_flag - from aline_vaso_flag.sql\n\nYou can generate the above by executing the below codeblock. 
If you haven't changed the directory structure, the below should work, otherwise you may need to modify the concepts_path variable above.", "# Load in the query from file\nquery='DROP TABLE IF EXISTS DATABASE.angus_sepsis;'\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nf = os.path.join(concepts_path,'sepsis/angus-awsathena.sql')\nwith open(f) as fp:\n query = ''.join(fp.readlines())\n \n# Execute the query\nprint('Generating table \\'angus_sepsis\\' using {} ...'.format(f),end=' ')\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nprint('done.')\n\n# Load in the query from file\nquery='DROP TABLE IF EXISTS DATABASE.heightweight;'\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nf = os.path.join(concepts_path,'demographics/HeightWeightQuery-awsathena.sql')\nwith open(f) as fp:\n query = ''.join(fp.readlines())\n \n# Execute the query\nprint('Generating table \\'heightweight\\' using {} ...'.format(f),end=' ')\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nprint('done.')\n\n\n# Load in the query from file\nquery='DROP TABLE IF EXISTS DATABASE.aline_vaso_flag;'\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nf = os.path.join(aline_path,'aline_vaso_flag-awsathena.sql')\nwith open(f) as fp:\n query = ''.join(fp.readlines())\n \n# Execute the query\nprint('Generating table \\'aline_vaso_flag\\' using {} ...'.format(f),end=' ')\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nprint('done.')\n\n\n# Load in the query from file\nquery='DROP TABLE IF EXISTS DATABASE.ventsettings;'\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nf = os.path.join(concepts_path,'durations/ventilation-settings-awsathena.sql')\nwith open(f) as fp:\n query = ''.join(fp.readlines())\n \n# Execute the query\nprint('Generating table \\'vent_settings\\' using {} ...'.format(f),end=' ')\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nprint('done.')\n\n\n# Load in the query from file\nquery='DROP TABLE IF EXISTS DATABASE.ventdurations;'\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nf = os.path.join(concepts_path,'durations/ventilation-durations-awsathena.sql')\nwith open(f) as fp:\n query = ''.join(fp.readlines())\n \n# Execute the query\nprint('Generating table \\'vent_durations\\' using {} ...'.format(f),end=' ')\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nprint('done.')", "Now we generate the aline_cohort table using the aline_cohort.sql file.\nAfterwards, we can generate the remaining 6 materialized views in any order, as they all depend on only aline_cohort and raw MIMIC-III data.", "# Load in the query from file\nquery='DROP TABLE IF EXISTS DATABASE.aline_cohort_all;'\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nf = os.path.join(aline_path,'aline_cohort-awsathena.sql')\nwith open(f) as fp:\n query = ''.join(fp.readlines())\n \n# Execute the query\nprint('Generating table \\'aline_cohort_all\\' using {} ...'.format(f),end=' ')\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nprint('done.')\n\n\n# Load in the query from file\nquery='DROP TABLE IF EXISTS DATABASE.aline_cohort;'\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nf = os.path.join(aline_path,'aline_final_cohort-awsathena.sql')\nwith open(f) as fp:\n query = ''.join(fp.readlines())\n \n# Execute the query\nprint('Generating table \\'aline_cohort\\' using {} ...'.format(f),end=' ')\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\nprint('done.')\n\nquery = \"\"\"\nselect\nicustay_id\n, 
exclusion_readmission\n, exclusion_shortstay\n, exclusion_vasopressors\n, exclusion_septic\n, exclusion_aline_before_admission\n, exclusion_not_ventilated_first24hr\n, exclusion_service_surgical\nfrom DATABASE.aline_cohort_all\n\"\"\"\n\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\n# Load the result of the query into a dataframe\ndf = as_pandas(cursor)\n\n# print out exclusions\nidxRem = df['icustay_id'].isnull()\nfor c in df.columns:\n if 'exclusion_' in c:\n print('{:5d} - {}'.format(df[c].sum(), c))\n idxRem[df[c]==1] = True \n \n# final exclusion (excl sepsis/something else)\nprint('Will remove {} of {} patients.'.format(np.sum(idxRem), df.shape[0]))\n\n\nprint('')\nprint('')\nprint('Reproducing the flow of the flowchart from Chest paper.')\n\n# first stay\nidxRem = (df['exclusion_readmission']==1) | (df['exclusion_shortstay']==1)\nprint('{:5d} - removing {:5d} ({:2.2f}%) patients - short stay // readmission.'.format(\n df.shape[0], np.sum(idxRem), 100.0*np.mean(idxRem)))\ndf = df.loc[~idxRem,:]\n\nidxRem = df['exclusion_not_ventilated_first24hr']==1\nprint('{:5d} - removing {:5d} ({:2.2f}%) patients - not ventilated in first 24 hours.'.format(\n df.shape[0], np.sum(idxRem), 100.0*np.mean(idxRem)))\n\ndf = df.loc[df['exclusion_not_ventilated_first24hr']==0,:]\n\nprint('{:5d}'.format(df.shape[0]))\nidxRem = df['icustay_id'].isnull()\nfor c in ['exclusion_septic', 'exclusion_vasopressors',\n 'exclusion_aline_before_admission', 'exclusion_service_surgical']:\n print('{:5s} - removing {:5d} ({:2.2f}%) patients - additional {:5d} {:2.2f}% - {}'.format(\n '', df[c].sum(), 100.0*df[c].mean(),\n np.sum((idxRem==0)&(df[c]==1)), 100.0*np.mean((idxRem==0)&(df[c]==1)),\n c))\n idxRem = idxRem | (df[c]==1)\n\ndf = df.loc[~idxRem,:]\nprint('{} - final cohort.'.format(df.shape[0]))", "The following codeblock loads in the SQL from each file in the aline subfolder and executes the query to generate the materialized view. We specifically exclude the aline_cohort.sql file as we have already executed it above. Again, the order of query execution does not matter for these queries. 
Note also that the filenames are the same as the created materialized view names for convenience.", "# get a list of all files in the subfolder\naline_queries = [f for f in os.listdir(aline_path) \n # only keep the filename if it is actually a file (and not a directory)\n if os.path.isfile(os.path.join(aline_path,f))\n # and only keep the filename if it is an SQL file\n & f.endswith('.sql')\n # and we do *not* want aline_cohort - it's generated above\n & (f != 'aline_cohort-awsathena.sql') & (f != 'aline_final_cohort-awsathena.sql') & (f != 'aline_vaso_flag-awsathena.sql')]\n\n\n\nfor f in aline_queries:\n # Load in the query from file\n table=f.split('-')\n query='DROP TABLE IF EXISTS DATABASE.{};'.format(table[0])\n cursor.execute(query.replace(\"DATABASE\", gluedatabase))\n print('Executing {} ...'.format(f), end=' ')\n \n with open(os.path.join(aline_path,f)) as fp:\n query = ''.join(fp.readlines())\n cursor.execute(query.replace(\"DATABASE\", gluedatabase))\n print('done.')\n", "Summarize the cohort exclusions before we pull all the data together.\n2 - Extract all covariates and outcome measures\nWe now aggregate all the data from the various views into a single dataframe.", "# Load in the query from file\nquery = \"\"\"\n--FINAL QUERY\nselect\n co.subject_id, co.hadm_id, co.icustay_id\n\n -- static variables from patient tracking tables\n , co.age\n , co.gender\n -- , co.gender_num -- gender, 0=F, 1=M\n , co.intime as icustay_intime\n , co.day_icu_intime -- day of week, text\n --, co.day_icu_intime_num -- day of week, numeric (0=Sun, 6=Sat)\n , co.hour_icu_intime -- hour of ICU admission (24 hour clock)\n , case \n when co.hour_icu_intime >= 7\n and co.hour_icu_intime < 19\n then 1\n else 0\n end as icu_hour_flag\n , co.outtime as icustay_outtime\n\n -- outcome variables\n , co.icu_los_day\n , co.hospital_los_day\n , co.hosp_exp_flag -- 1/0 patient died within current hospital stay\n , co.icu_exp_flag -- 1/0 patient died within current ICU stay\n , co.mort_day -- days from ICU admission to mortality, if they died\n , co.day_28_flag -- 1/0 whether the patient died 28 days after *ICU* admission\n , co.mort_day_censored -- days until patient died *or* 150 days (150 days is our censor time)\n , co.censor_flag -- 1/0 did this patient have 150 imputed in mort_day_censored\n\n -- aline flags\n -- , co.initial_aline_flag -- always 0, we remove patients admitted w/ aline\n , co.aline_flag -- 1/0 did the patient receive an aline\n , co.aline_time_day -- if the patient received aline, fractional days until aline put in\n\n -- demographics extracted using regex + echos\n , bmi.weight as weight_first\n , bmi.height as height_first\n , bmi.bmi\n\n -- service patient was admitted to the ICU under\n , co.service_unit\n\n -- severity of illness just before ventilation\n , so.sofa as sofa_first\n\n -- vital sign value just preceeding ventilation\n , vi.map as map_first\n , vi.heartrate as hr_first\n , vi.temperature as temp_first\n , vi.spo2 as spo2_first\n\n -- labs!\n , labs.bun_first\n , labs.creatinine_first\n , labs.chloride_first\n , labs.hgb_first\n , labs.platelet_first\n , labs.potassium_first\n , labs.sodium_first\n , labs.tco2_first\n , labs.wbc_first\n\n -- comorbidities extracted using ICD-9 codes\n , icd.chf as chf_flag\n , icd.afib as afib_flag\n , icd.renal as renal_flag\n , icd.liver as liver_flag\n , icd.copd as copd_flag\n , icd.cad as cad_flag\n , icd.stroke as stroke_flag\n , icd.malignancy as malignancy_flag\n , icd.respfail as respfail_flag\n , icd.endocarditis as 
endocarditis_flag\n , icd.ards as ards_flag\n , icd.pneumonia as pneumonia_flag\n\n -- sedative use\n , sed.sedative_flag\n , sed.midazolam_flag\n , sed.fentanyl_flag\n , sed.propofol_flag\n \nfrom DATABASE.aline_cohort co\n-- The following tables are generated by code within this repository\nleft join DATABASE.aline_sofa so\non co.icustay_id = so.icustay_id\nleft join DATABASE.aline_bmi bmi\n on co.icustay_id = bmi.icustay_id\nleft join DATABASE.aline_icd icd\n on co.hadm_id = icd.hadm_id\nleft join DATABASE.aline_vitals vi\n on co.icustay_id = vi.icustay_id\nleft join DATABASE.aline_labs labs\n on co.icustay_id = labs.icustay_id\nleft join DATABASE.aline_sedatives sed\n on co.icustay_id = sed.icustay_id\norder by co.icustay_id\n\"\"\"\n\ncursor.execute(query.replace(\"DATABASE\", gluedatabase))\n# Load the result of the query into a dataframe\ndf = as_pandas(cursor)\ndf.describe().T", "Now we need to remove obvious outliers, including correcting ages > 200 to 91.4 (i.e. replace anonymized ages with 91.4, the median age of patients older than 89).", "# plot the rest of the distributions\nfor col in df.columns:\n if df.dtypes[col] in ('int64','float64'):\n plt.figure(figsize=[12,6])\n plt.hist(df[col].dropna(), bins=50, normed=True)\n plt.xlabel(col,fontsize=24)\n plt.show()\n\n# apply corrections\ndf.loc[df['age']>89, 'age'] = 91.4", "3 - Write to file", "df.to_csv('aline_data.csv',index=False)", "4 - Create a propensity score using this data\nWe will create the propensity score using R in the Jupyter Notebook file aline_propensity_score.ipynb." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
shaunharker/DSGRN
Tutorials/PatternMatchExperiments.ipynb
mit
[ "Pattern Matching Experiments", "from DSGRN import *", "Networks\nWe give two sets of networks. One of them allows for all parameters. The other is identical except it only uses essential parameters.", "network_strings = [ \n[\"SWI4 : (NDD1)(~YOX1)\", \"HCM1 : SWI4\", \"NDD1 : HCM1\", \"YOX1 : SWI4\"],\n[\"SWI4 : (NDD1)(~YOX1)\", \"HCM1 : SWI4\", \"NDD1 : HCM1\", \"YOX1 : (SWI4)(HCM1)\"],\n[\"SWI4 : (NDD1)(~YOX1)\", \"HCM1 : SWI4\", \"NDD1 : HCM1\", \"YOX1 : (SWI4)(~HCM1)\"],\n[\"SWI4 : (NDD1)(~YOX1)\", \"HCM1 : SWI4\", \"NDD1 : HCM1\", \"YOX1 : (SWI4)(NDD1)\"],\n[\"SWI4 : (NDD1)(~YOX1)\", \"HCM1 : SWI4\", \"NDD1 : HCM1\", \"YOX1 : (SWI4)(~NDD1)\"],\n[\"SWI4 : (NDD1)(~YOX1)\", \"HCM1 : (SWI4)(YOX1)\", \"NDD1 : HCM1\", \"YOX1 : SWI4\"],\n[\"SWI4 : (NDD1)(~YOX1)\", \"HCM1 : (SWI4)(~YOX1)\", \"NDD1 : HCM1\", \"YOX1 : SWI4\"],\n[\"SWI4 : (NDD1)(~YOX1)\", \"HCM1 : SWI4\", \"NDD1 : (HCM1)(YOX1)\", \"YOX1 : SWI4\"],\n[\"SWI4 : (NDD1)(~YOX1)\", \"HCM1 : SWI4\", \"NDD1 : (HCM1)(~YOX1)\", \"YOX1 : SWI4\"] ]", "Full Networks", "networks = [Network() for i in range(0,9)]\nfor i,network in enumerate(networks):\n network.assign('\\n'.join(network_strings[i]))", "Essential Networks", "essential_network_strings = [ [ line + \" : E\" for line in network_string ] for network_string in network_strings]\nessential_networks = [Network() for i in range(0,9)]\nfor i,network in enumerate(essential_networks):\n network.assign('\\n'.join(essential_network_strings[i]))", "Path match analysis\nWe give two functions for path match analysis. One looks at the entire domain graph. The other only checks for path matches in stable Morse sets.\nAnalysis on entire domain graph", "def Analyze(network, events, event_ordering):\n poe = PosetOfExtrema(network, events, event_ordering )\n pattern_graph = PatternGraph(poe)\n parameter_graph = ParameterGraph(network)\n result = []\n for parameter_index in range(0, parameter_graph.size()):\n parameter = parameter_graph.parameter(parameter_index)\n search_graph = SearchGraph(DomainGraph(parameter))\n matching_graph = MatchingGraph(search_graph, pattern_graph);\n if PathMatch(matching_graph):\n result.append(parameter_index)\n return [result, parameter_graph.size()]", "Analysis on stable Morse set only", "def AnalyzeOnStable(network, events, event_ordering):\n poe = PosetOfExtrema(network, events, event_ordering )\n pattern_graph = PatternGraph(poe)\n parameter_graph = ParameterGraph(network)\n results = []\n for parameter_index in range(0, parameter_graph.size()):\n parameter = parameter_graph.parameter(parameter_index)\n domain_graph = DomainGraph(parameter)\n morse_decomposition = MorseDecomposition(domain_graph.digraph())\n morse_graph = MorseGraph()\n morse_graph.assign(domain_graph, morse_decomposition)\n MorseNodes = range(0, morse_graph.poset().size())\n isStable = lambda node : len(morse_graph.poset().children(node)) == 0\n isStableFC = lambda node : morse_graph.annotation(node)[0] == 'FC' and isStable(node)\n hasStableFC = any( isStableFC(node) for node in MorseNodes)\n StableNodes = [ node for node in MorseNodes if isStable(node) ]\n subresult = []\n for node in StableNodes:\n search_graph = SearchGraph(domain_graph, node)\n matching_graph = MatchingGraph(search_graph, pattern_graph)\n path_match = PathMatch(matching_graph)\n if path_match:\n subresult.append([parameter_index, node])\n results.append([subresult, 1 if hasStableFC else 0])\n return [results, parameter_graph.size()]", "Poset of Extrema\nWe study two poset of extrema. 
The first poset comes from looking at times [10,60] and assuming SWI4 happens before the other minima at the beginning and thus can be excluded. The other comes from including all extrema.\nOriginal Poset of Extrema", "original_events = [(\"HCM1\", \"min\"), (\"NDD1\", \"min\"), (\"YOX1\", \"min\"), \n (\"SWI4\", \"max\"), (\"HCM1\", \"max\"), (\"YOX1\", \"max\"), \n (\"NDD1\", \"max\"),\n (\"SWI4\",\"min\")]\noriginal_event_ordering = [ (i,j) for i in [0,1,2] for j in [3,4,5] ] + \\\n [ (i,j) for i in [3,4,5] for j in [6] ] + \\\n [ (i,j) for i in [6] for j in [7] ]\n\nDrawGraph(PosetOfExtrema(networks[0], original_events, original_event_ordering ))", "Alternative Poset of Extrema", "all_events = [(\"SWI4\", \"min\"), (\"HCM1\", \"min\"), (\"NDD1\", \"min\"), (\"YOX1\", \"min\"), \n (\"SWI4\", \"max\"), (\"HCM1\", \"max\"), (\"YOX1\", \"max\"), \n (\"NDD1\", \"max\"),\n (\"SWI4\",\"min\"),\n (\"YOX1\", \"min\"), (\"HCM1\",\"min\"),\n (\"NDD1\", \"min\"),\n (\"SWI4\", \"max\"), (\"HCM1\", \"max\"), (\"YOX1\", \"max\"),\n (\"NDD1\", \"max\")]\nall_event_ordering = [ (i,j) for i in [0,1,2,3] for j in [4,5,6] ] + \\\n [ (i,j) for i in [4,5,6] for j in [7] ] + \\\n [ (i,j) for i in [7] for j in [8] ] + \\\n [ (i,j) for i in [8] for j in [9,10] ] + \\\n [ (i,j) for i in [9,10] for j in [11,12,13,14] ] + \\\n [ (11,15) ]\n\nDrawGraph(PosetOfExtrema(networks[0], all_events, all_event_ordering ))", "Experiments\nThere are 8 experiements corresponding to 3 binary choices:\n\nFull networks vs Essential networks \nPath matching in entire domain graph vs path matching in stable Morse sets\nOriginal poset of extrema vs Alternative poset of extrema", "def DisplayExperiment(results, title):\n markdown_string = \"# \" + title + \"\\n\\n\"\n markdown_string += \"| network | # parameters | # parameters with path match |\\n\"\n markdown_string += \"| ------- |------------ | ---------------------------- |\\n\"\n for i, item in enumerate(results):\n [parameters_with_path_match, pgsize] = item\n markdown_string += (\"|\" + str(i) + \"|\" + str(pgsize) + \"|\" + str(len(parameters_with_path_match)) + \"|\\n\")\n from IPython.display import display, Markdown, Latex\n display(Markdown(markdown_string))\ndef DisplayStableExperiment(results, title):\n markdown_string = \"# \" + title + \"\\n\\n\"\n markdown_string += \"| network | # parameters | # parameters with stable FC | # parameters with path match |\\n\"\n markdown_string += \"| ------- |------------ | ---------------------------- | ---------------------------- |\\n\"\n for i, item in enumerate(results):\n [results, pgsize] = item\n parameters_with_path_match = sum([ 1 if pair[0] else 0 for pair in results])\n parameters_with_stable_fc = sum([ 1 if pair[1] else 0 for pair in results])\n markdown_string += (\"|\" + str(i) + \"|\" + str(pgsize) + \"|\" +str(parameters_with_stable_fc) +\"|\"+str(parameters_with_path_match) + \"|\\n\")\n from IPython.display import display, Markdown, Latex\n display(Markdown(markdown_string))\n\n%%time\nexperiment = lambda network : Analyze(network, original_events, original_event_ordering)\nexperimental_results_1 = [ experiment(network) for network in networks ]\nDisplayExperiment(experimental_results_1, \"Experiment 1: All parameters, original poset of extrema\")\n\n%%time\nexperiment = lambda network : Analyze(network, original_events, original_event_ordering)\nexperimental_results_2 = [ experiment(network) for network in essential_networks ]\nDisplayExperiment(experimental_results_2, \"Experiment 2: Essential parameters, 
original poset of extrema\")\n\n%%time\nexperiment = lambda network : AnalyzeOnStable(network, original_events, original_event_ordering)\nexperimental_results_3 = [ experiment(network) for network in networks ]\nDisplayStableExperiment(experimental_results_3, \"Experiment 3: All parameters, original poset, stable only\")\n\n%%time\nexperiment = lambda network : AnalyzeOnStable(network, original_events, original_event_ordering)\nexperimental_results_4 = [ experiment(network) for network in essential_networks ]\nDisplayStableExperiment(experimental_results_4, \"Experiment 4: Essential parameters, original poset, stable only\")\n\n%%time\nexperiment = lambda network : Analyze(network, all_events, all_event_ordering)\nexperimental_results_5 = [ experiment(network) for network in networks ]\nDisplayExperiment(experimental_results_5, \"Experiment 5: All parameters, alternative poset of extrema\")\n\n%%time\nexperiment = lambda network : Analyze(network, all_events, all_event_ordering)\nexperimental_results_6 = [ experiment(network) for network in essential_networks ]\nDisplayExperiment(experimental_results_6, \"Experiment 6: Essential parameters, alternative poset of extrema\")\n\n%%time\nexperiment = lambda network : AnalyzeOnStable(network, all_events, all_event_ordering)\nexperimental_results_7 = [ experiment(network) for network in networks ]\nDisplayStableExperiment(experimental_results_7, \"Experiment 7: All parameters, alternative poset of extrema, stable only\")\n\n%%time\nexperiment = lambda network : AnalyzeOnStable(network, all_events, all_event_ordering)\nexperimental_results_8 = [ experiment(network) for network in essential_networks ]\nDisplayStableExperiment(experimental_results_8, \"Experiment 8: Essential parameters, alternative poset of extrema, stable only\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jinzekid/codehub
python/day6/ch02.ipynb
gpl-3.0
[ "Python Language Basics, IPython, and Jupyter Notebooks", "import numpy as np\nnp.random.seed(12345)\nnp.set_printoptions(precision=4, suppress=True)", "The Python Interpreter\n```python\n$ python\nPython 3.6.0 | packaged by conda-forge | (default, Jan 13 2017, 23:17:12)\n[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n\n\n\na = 5\nprint(a)\n5\n```\n\n\n\npython\nprint('Hello world')\npython\n$ python hello_world.py\nHello world\n```shell\n$ ipython\nPython 3.6.0 | packaged by conda-forge | (default, Jan 13 2017, 23:17:12)\nType \"copyright\", \"credits\" or \"license\" for more information.\nIPython 5.1.0 -- An enhanced Interactive Python.\n? -> Introduction and overview of IPython's features.\n%quickref -> Quick reference.\nhelp -> Python's own help system.\nobject? -> Details about 'object', use 'object??' for extra details.\nIn [1]: %run hello_world.py\nHello world\nIn [2]:\n```\nIPython Basics\nRunning the IPython Shell\n$", "import numpy as np\ndata = {i : np.random.randn() for i in range(7)}\ndata", "from numpy.random import randn\ndata = {i : randn() for i in range(7)}\nprint(data)\n{0: -1.5948255432744511, 1: 0.10569006472787983, 2: 1.972367135977295,\n3: 0.15455217573074576, 4: -0.24058577449429575, 5: -1.2904897053651216,\n6: 0.3308507317325902}\n\n\n\nRunning the Jupyter Notebook\nshell\n$ jupyter notebook\n[I 15:20:52.739 NotebookApp] Serving notebooks from local directory:\n/home/wesm/code/pydata-book\n[I 15:20:52.739 NotebookApp] 0 active kernels\n[I 15:20:52.739 NotebookApp] The Jupyter Notebook is running at:\nhttp://localhost:8888/\n[I 15:20:52.740 NotebookApp] Use Control-C to stop this server and shut down\nall kernels (twice to skip confirmation).\nCreated new window in existing browser session.\nTab Completion\n```\nIn [1]: an_apple = 27\nIn [2]: an_example = 42\nIn [3]: an\n```\n```\nIn [3]: b = [1, 2, 3]\nIn [4]: b.\n```\n```\nIn [1]: import datetime\nIn [2]: datetime.\n```\nIn [7]: datasets/movielens/\nIntrospection\n```\nIn [8]: b = [1, 2, 3]\nIn [9]: b?\nType: list\nString Form:[1, 2, 3]\nLength: 3\nDocstring:\nlist() -> new empty list\nlist(iterable) -> new list initialized from iterable's items\nIn [10]: print?\nDocstring:\nprint(value, ..., sep=' ', end='\\n', file=sys.stdout, flush=False)\nPrints the values to a stream, or to sys.stdout by default.\nOptional keyword arguments:\nfile: a file-like object (stream); defaults to the current sys.stdout.\nsep: string inserted between values, default a space.\nend: string appended after the last value, default a newline.\nflush: whether to forcibly flush the stream.\nType: builtin_function_or_method\n```\n```python\ndef add_numbers(a, b):\n \"\"\"\n Add two numbers together\nReturns\n-------\nthe_sum : type of arguments\n\"\"\"\nreturn a + b\n\n```\n```python\nIn [11]: add_numbers?\nSignature: add_numbers(a, b)\nDocstring:\nAdd two numbers together\nReturns\nthe_sum : type of arguments\nFile: <ipython-input-9-6a548a216e27>\nType: function\n```\n```python\nIn [12]: add_numbers??\nSignature: add_numbers(a, b)\nSource:\ndef add_numbers(a, b):\n \"\"\"\n Add two numbers together\nReturns\n-------\nthe_sum : type of arguments\n\"\"\"\nreturn a + b\n\nFile: <ipython-input-9-6a548a216e27>\nType: function\n```\npython\nIn [13]: np.*load*?\nnp.__loader__\nnp.load\nnp.loads\nnp.loadtxt\nnp.pkgload\nThe %run Command\n```python\ndef f(x, y, z):\n return (x + y) / z\na = 5\nb = 6\nc = 7.5\nresult = f(a, b, c)\n```\npython\nIn [14]: %run 
ipython_script_test.py\n```python\nIn [15]: c\nOut [15]: 7.5\nIn [16]: result\nOut[16]: 1.4666666666666666\n```\n```python\n\n\n\n%load ipython_script_test.py\n\n\n\ndef f(x, y, z):\n return (x + y) / z\n\na = 5\nb = 6\nc = 7.5\n\nresult = f(a, b, c)\n\n```\nInterrupting running code\nExecuting Code from the Clipboard\n```python\nx = 5\ny = 7\nif x > 5:\n x += 1\ny = 8\n\n```\n```python\nIn [17]: %paste\nx = 5\ny = 7\nif x > 5:\n x += 1\ny = 8\n\n-- End pasted text --\n```\npython\nIn [18]: %cpaste\nPasting code; enter '--' alone on the line to stop or use Ctrl-D.\n:x = 5\n:y = 7\n:if x &gt; 5:\n: x += 1\n:\n: y = 8\n:--\nTerminal Keyboard Shortcuts\nAbout Magic Commands\n```python\nIn [20]: a = np.random.randn(100, 100)\nIn [20]: %timeit np.dot(a, a)\n10000 loops, best of 3: 20.9 µs per loop\n```\n```python\nIn [21]: %debug?\nDocstring:\n::\n%debug [--breakpoint FILE:LINE] [statement [statement ...]]\nActivate the interactive debugger.\nThis magic command support two ways of activating debugger.\nOne is to activate debugger before executing code. This way, you\ncan set a break point, to step through the code from the point.\nYou can use this mode by giving statements to execute and optionally\na breakpoint.\nThe other one is to activate debugger in post-mortem mode. You can\nactivate this mode simply running %debug without any argument.\nIf an exception has just occurred, this lets you inspect its stack\nframes interactively. Note that this will always work only on the last\ntraceback that occurred, so you must call this quickly after an\nexception that you wish to inspect has fired, because if another one\noccurs, it clobbers the previous one.\nIf you want IPython to automatically do this on every exception, see\nthe %pdb magic for more details.\npositional arguments:\n statement Code to run in debugger. 
You can omit this in cell\n magic mode.\noptional arguments:\n --breakpoint <FILE:LINE>, -b <FILE:LINE>\n Set break point at LINE in FILE.\n``` \n```python\nIn [22]: %pwd\nOut[22]: '/home/wesm/code/pydata-book\nIn [23]: foo = %pwd\nIn [24]: foo\nOut[24]: '/home/wesm/code/pydata-book'\n```\nMatplotlib Integration\npython\nIn [26]: %matplotlib\nUsing matplotlib backend: Qt4Agg\npython\nIn [26]: %matplotlib inline\nPython Language Basics\nLanguage Semantics\nIndentation, not braces\npython\nfor x in array:\n if x &lt; pivot:\n less.append(x)\n else:\n greater.append(x)\npython\na = 5; b = 6; c = 7\nEverything is an object\nComments\npython\nresults = []\nfor line in file_handle:\n # keep the empty lines for now\n # if len(line) == 0:\n # continue\n results.append(line.replace('foo', 'bar'))\npython\nprint(\"Reached this line\") # Simple status report\nFunction and object method calls\nresult = f(x, y, z)\ng()\nobj.some_method(x, y, z)\npython\nresult = f(a, b, c, d=5, e='foo')\nVariables and argument passing", "a = [1, 2, 3]\n\nb = a\n\na.append(4)\nb", "python\ndef append_element(some_list, element):\n some_list.append(element)\n```python\nIn [27]: data = [1, 2, 3]\nIn [28]: append_element(data, 4)\nIn [29]: data\nOut[29]: [1, 2, 3, 4]\n```\nDynamic references, strong types", "a = 5\ntype(a)\na = 'foo'\ntype(a)\n\n'5' + 5\n\na = 4.5\nb = 2\n# String formatting, to be visited later\nprint('a is {0}, b is {1}'.format(type(a), type(b)))\na / b\n\na = 5\nisinstance(a, int)\n\na = 5; b = 4.5\nisinstance(a, (int, float))\nisinstance(b, (int, float))", "Attributes and methods\n```python\nIn [1]: a = 'foo'\nIn [2]: a.<Press Tab>\na.capitalize a.format a.isupper a.rindex a.strip\na.center a.index a.join a.rjust a.swapcase\na.count a.isalnum a.ljust a.rpartition a.title\na.decode a.isalpha a.lower a.rsplit a.translate\na.encode a.isdigit a.lstrip a.rstrip a.upper\na.endswith a.islower a.partition a.split a.zfill\na.expandtabs a.isspace a.replace a.splitlines\na.find a.istitle a.rfind a.startswith\n```", "a = 'foo'\n\ngetattr(a, 'split')", "Duck typing", "def isiterable(obj):\n try:\n iter(obj)\n return True\n except TypeError: # not iterable\n return False\n\nisiterable('a string')\nisiterable([1, 2, 3])\nisiterable(5)", "if not isinstance(x, list) and isiterable(x):\n x = list(x)\nImports\n```python\nsome_module.py\nPI = 3.14159\ndef f(x):\n return x + 2\ndef g(a, b):\n return a + b\n```\nimport some_module\nresult = some_module.f(5)\npi = some_module.PI\nfrom some_module import f, g, PI\nresult = g(5, PI)\nimport some_module as sm\nfrom some_module import PI as pi, g as gf\nr1 = sm.f(pi)\nr2 = gf(6, pi)\nBinary operators and comparisons", "5 - 7\n12 + 21.5\n5 <= 2\n\na = [1, 2, 3]\nb = a\nc = list(a)\na is b\na is not c\n\na == c\n\na = None\na is None", "Mutable and immutable objects", "a_list = ['foo', 2, [4, 5]]\na_list[2] = (3, 4)\na_list\n\na_tuple = (3, 5, (4, 5))\na_tuple[1] = 'four'", "Scalar Types\nNumeric types", "ival = 17239871\nival ** 6\n\nfval = 7.243\nfval2 = 6.78e-5\n\n3 / 2\n\n3 // 2", "Strings\na = 'one way of writing a string'\nb = \"another way\"", "c = \"\"\"\nThis is a longer string that\nspans multiple lines\n\"\"\"\n\nc.count('\\n')\n\na = 'this is a string'\na[10] = 'f'\nb = a.replace('string', 'longer string')\nb\n\na\n\na = 5.6\ns = str(a)\nprint(s)\n\ns = 'python'\nlist(s)\ns[:3]\n\ns = '12\\\\34'\nprint(s)\n\ns = r'this\\has\\no\\special\\characters'\ns\n\na = 'this is the first half '\nb = 'and this is the second half'\na + b\n\ntemplate = '{0:.2f} {1:s} are worth 
US${2:d}'\n\ntemplate.format(4.5560, 'Argentine Pesos', 1)", "Bytes and Unicode", "val = \"español\"\nval\n\nval_utf8 = val.encode('utf-8')\nval_utf8\ntype(val_utf8)\n\nval_utf8.decode('utf-8')\n\nval.encode('latin1')\nval.encode('utf-16')\nval.encode('utf-16le')\n\nbytes_val = b'this is bytes'\nbytes_val\ndecoded = bytes_val.decode('utf8')\ndecoded # this is str (Unicode) now", "Booleans", "True and True\nFalse or True", "Type casting", "s = '3.14159'\nfval = float(s)\ntype(fval)\nint(fval)\nbool(fval)\nbool(0)", "None", "a = None\na is None\nb = 5\nb is not None", "def add_and_maybe_multiply(a, b, c=None):\n result = a + b\nif c is not None:\n result = result * c\n\nreturn result", "type(None)", "Dates and times", "from datetime import datetime, date, time\ndt = datetime(2011, 10, 29, 20, 30, 21)\ndt.day\ndt.minute\n\ndt.date()\ndt.time()\n\ndt.strftime('%m/%d/%Y %H:%M')\n\ndatetime.strptime('20091031', '%Y%m%d')\n\ndt.replace(minute=0, second=0)\n\ndt2 = datetime(2011, 11, 15, 22, 30)\ndelta = dt2 - dt\ndelta\ntype(delta)\n\ndt\ndt + delta", "Control Flow\nif, elif, and else\nif x < 0:\n print('It's negative')\nif x < 0:\n print('It's negative')\nelif x == 0:\n print('Equal to zero')\nelif 0 < x < 5:\n print('Positive but smaller than 5')\nelse:\n print('Positive and larger than or equal to 5')", "a = 5; b = 7\nc = 8; d = 4\nif a < b or c > d:\n print('Made it')\n\n4 > 3 > 2 > 1", "for loops\nfor value in collection:\n # do something with value\nsequence = [1, 2, None, 4, None, 5]\ntotal = 0\nfor value in sequence:\n if value is None:\n continue\n total += value\nsequence = [1, 2, 0, 4, 6, 5, 2, 1]\ntotal_until_5 = 0\nfor value in sequence:\n if value == 5:\n break\n total_until_5 += value", "for i in range(4):\n for j in range(4):\n if j > i:\n break\n print((i, j))", "for a, b, c in iterator:\n # do something\nwhile loops\nx = 256\ntotal = 0\nwhile x > 0:\n if total > 500:\n break\n total += x\n x = x // 2\npass\nif x < 0:\n print('negative!')\nelif x == 0:\n # TODO: put something smart here\n pass\nelse:\n print('positive!')\nrange", "range(10)\nlist(range(10))\n\nlist(range(0, 20, 2))\nlist(range(5, 0, -1))", "seq = [1, 2, 3, 4]\nfor i in range(len(seq)):\n val = seq[i]\nsum = 0\nfor i in range(100000):\n # % is the modulo operator\n if i % 3 == 0 or i % 5 == 0:\n sum += i\nTernary expressions\nvalue = \nif", "x = 5\n'Non-negative' if x >= 0 else 'Negative'" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
evelynegroen/evelynegroen.github.io
Code/.ipynb_checkpoints/AUP_LCA_evelynegroen-checkpoint.ipynb
mit
[ "Procedure: Uncertainty propagation for matrix-based LCA\nMethod: Analytic uncertainty propagation (Taylor approximation)\nAuthor: Evelyne Groen {evelyne [dot] groen [at] gmail [dot] com}\nLast update: 25/10/2016", "import numpy as np \n\nA_det = np.matrix('10 0; -2 100') #A-matrix\nB_det = np.matrix('1 10') #B-matrix\nf = np.matrix('1000; 0') #Functional unit vector f\n\ng_LCA = B_det * A_det.I * f \n\nprint(\"The deterministic result is:\", g_LCA[0,0]) \n", "Step 1: Calculate partial derivatives\nNB: this is a vectorized implementation of the MatLab code that was originally written by Reinout Heijungs & Sangwong Suh", "s = A_det.I * f #scaling vector s: inv(A_det)*f\nLambda = B_det * A_det.I; #B_det*inv(A)\n\ndgdA = -(s * Lambda).T #Partial derivatives A-matrix\nGamma_A = np.multiply((A_det/g_LCA), dgdA) #For free: the multipliers of the A-matrix\nprint(\"The multipliers of the A-matrix are:\")\nprint(Gamma_A)\n\ndgdB = s.T #Partial derivatives B-matrix\nGamma_B = np.multiply((B_det/g_LCA), dgdB) #For free too: the multipliers of the B-matrix\nprint(\"The multipliers of the B-matrix are:\")\nprint(Gamma_B)", "Step 2: Determine output variance", "CV = 0.05 #Coefficient of variation set to 5% (CV = sigma/mu)\nvar_A = np.power(abs(CV*A_det),2) #Variance of the A-matrix (var =sigma^2)\nvar_B = np.power(abs(CV*B_det),2) #Variance of the B-matrix\n \nP = np.concatenate((np.reshape(dgdA, 4), dgdB), axis=1) #P contains partial derivatives of both A and B \nvar_P = np.concatenate((np.reshape(var_A, 4), var_B), axis=1) #var_P contains all variances of each parameter in A and B\n\nvar_g = sum(np.multiply(np.power(P, 2), var_P)) #Total output variance (first order Taylor)\nvar_g = var_g[0,0] + var_g[0,1] +var_g[0,2] + var_g[0,3] + var_g[0,4] + var_g[0,5]\n\nprint(\"The total output variance equals:\", var_g)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
cxhernandez/msmbuilder
examples/advanced/hmm-and-msm.ipynb
lgpl-2.1
[ "This example builds HMM and MSMs on the alanine_dipeptide dataset using varing lag times\nand numbers of states, and compares the relaxation timescales", "from __future__ import print_function\nimport os\n%matplotlib inline\nfrom matplotlib.pyplot import *\nfrom msmbuilder.featurizer import SuperposeFeaturizer\nfrom msmbuilder.example_datasets import AlanineDipeptide\nfrom msmbuilder.hmm import GaussianHMM\nfrom msmbuilder.cluster import KCenters\nfrom msmbuilder.msm import MarkovStateModel", "First: load and \"featurize\"\nFeaturization refers to the process of converting the conformational\nsnapshots from your MD trajectories into vectors in some space $\\mathbb{R}^N$ that can be manipulated and modeled by subsequent analyses. The Gaussian HMM, for instance, uses Gaussian emission distributions, so it models the trajectory as a time-dependent\nmixture of multivariate Gaussians.\nIn general, the featurization is somewhat of an art. For this example, we're using MSMBuilder's SuperposeFeaturizer, which superposes each snapshot onto a reference frame (trajectories[0][0] in this example), and then measure the distance from each\natom to its position in the reference conformation as the 'feature'", "print(AlanineDipeptide.description())\n\ndataset = AlanineDipeptide().get()\ntrajectories = dataset.trajectories\ntopology = trajectories[0].topology\n\nindices = [atom.index for atom in topology.atoms if atom.element.symbol in ['C', 'O', 'N']]\nfeaturizer = SuperposeFeaturizer(indices, trajectories[0][0])\nsequences = featurizer.transform(trajectories)", "Now sequences is our featurized data.", "lag_times = [1, 10, 20, 30, 40]\nhmm_ts0 = {}\nhmm_ts1 = {}\nn_states = [3, 5]\n\nfor n in n_states:\n hmm_ts0[n] = []\n hmm_ts1[n] = []\n for lag_time in lag_times:\n strided_data = [s[i::lag_time] for s in sequences for i in range(lag_time)]\n hmm = GaussianHMM(n_states=n, n_init=1).fit(strided_data)\n timescales = hmm.timescales_ * lag_time\n hmm_ts0[n].append(timescales[0])\n hmm_ts1[n].append(timescales[1])\n print('n_states=%d\\tlag_time=%d\\ttimescales=%s' % (n, lag_time, timescales))\n print()\n\nfigure(figsize=(14,3))\n\nfor i, n in enumerate(n_states):\n subplot(1,len(n_states),1+i)\n plot(lag_times, hmm_ts0[n])\n plot(lag_times, hmm_ts1[n])\n if i == 0:\n ylabel('Relaxation Timescale')\n xlabel('Lag Time')\n title('%d states' % n)\n\nshow()\n\nmsmts0, msmts1 = {}, {}\nlag_times = [1, 10, 20, 30, 40]\nn_states = [4, 8, 16, 32, 64]\n\nfor n in n_states:\n msmts0[n] = []\n msmts1[n] = []\n for lag_time in lag_times:\n assignments = KCenters(n_clusters=n).fit_predict(sequences)\n msm = MarkovStateModel(lag_time=lag_time, verbose=False).fit(assignments)\n timescales = msm.timescales_\n msmts0[n].append(timescales[0])\n msmts1[n].append(timescales[1])\n print('n_states=%d\\tlag_time=%d\\ttimescales=%s' % (n, lag_time, timescales[0:2]))\n print()\n\nfigure(figsize=(14,3))\n\nfor i, n in enumerate(n_states):\n subplot(1,len(n_states),1+i)\n plot(lag_times, msmts0[n])\n plot(lag_times, msmts1[n])\n if i == 0:\n ylabel('Relaxation Timescale')\n xlabel('Lag Time')\n title('%d states' % n)\n\nshow()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/pcmdi/cmip6/models/sandbox-1/atmos.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: PCMDI\nSource ID: SANDBOX-1\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:36\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-1', 'atmos')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Overview\n2. Key Properties --&gt; Resolution\n3. Key Properties --&gt; Timestepping\n4. Key Properties --&gt; Orography\n5. Grid --&gt; Discretisation\n6. Grid --&gt; Discretisation --&gt; Horizontal\n7. Grid --&gt; Discretisation --&gt; Vertical\n8. Dynamical Core\n9. Dynamical Core --&gt; Top Boundary\n10. Dynamical Core --&gt; Lateral Boundary\n11. Dynamical Core --&gt; Diffusion Horizontal\n12. Dynamical Core --&gt; Advection Tracers\n13. Dynamical Core --&gt; Advection Momentum\n14. Radiation\n15. Radiation --&gt; Shortwave Radiation\n16. Radiation --&gt; Shortwave GHG\n17. Radiation --&gt; Shortwave Cloud Ice\n18. Radiation --&gt; Shortwave Cloud Liquid\n19. Radiation --&gt; Shortwave Cloud Inhomogeneity\n20. Radiation --&gt; Shortwave Aerosols\n21. Radiation --&gt; Shortwave Gases\n22. Radiation --&gt; Longwave Radiation\n23. Radiation --&gt; Longwave GHG\n24. Radiation --&gt; Longwave Cloud Ice\n25. Radiation --&gt; Longwave Cloud Liquid\n26. Radiation --&gt; Longwave Cloud Inhomogeneity\n27. Radiation --&gt; Longwave Aerosols\n28. Radiation --&gt; Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --&gt; Boundary Layer Turbulence\n31. Turbulence Convection --&gt; Deep Convection\n32. Turbulence Convection --&gt; Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --&gt; Large Scale Precipitation\n35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --&gt; Optical Cloud Properties\n38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\n39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --&gt; Isscp Attributes\n42. Observation Simulation --&gt; Cosp Attributes\n43. Observation Simulation --&gt; Radar Inputs\n44. Observation Simulation --&gt; Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --&gt; Orographic Gravity Waves\n47. Gravity Waves --&gt; Non Orographic Gravity Waves\n48. Solar\n49. Solar --&gt; Solar Pathways\n50. Solar --&gt; Solar Constant\n51. Solar --&gt; Orbital Parameters\n52. Solar --&gt; Insolation Ozone\n53. Volcanos\n54. Volcanos --&gt; Volcanoes Treatment \n1. Key Properties --&gt; Overview\nTop level key properties\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of atmospheric model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.4. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.5. 
High Top\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the orography.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n", "4.2. Changes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n", "5. Grid --&gt; Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Discretisation --&gt; Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n", "6.3. Scheme Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.4. Horizontal Pole\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal discretisation pole singularity treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7. Grid --&gt; Discretisation --&gt; Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType of vertical coordinate system", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere dynamical core", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the dynamical core of the model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Timestepping Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestepping framework type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of the model prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Dynamical Core --&gt; Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Top Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary heat treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Top Wind\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary wind treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Dynamical Core --&gt; Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nType of lateral boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Dynamical Core --&gt; Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal diffusion scheme name", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal diffusion scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Dynamical Core --&gt; Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTracer advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.3. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.4. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracer advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Dynamical Core --&gt; Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMomentum advection schemes name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Scheme Staggering Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Radiation --&gt; Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShortwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Radiation --&gt; Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.3. 
Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Radiation --&gt; Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18. Radiation --&gt; Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Radiation --&gt; Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Radiation --&gt; Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21. Radiation --&gt; Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Radiation --&gt; Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLongwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23. Radiation --&gt; Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. 
Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Radiation --&gt; Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Physical Reprenstation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25. Radiation --&gt; Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Radiation --&gt; Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27. Radiation --&gt; Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. 
General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28. Radiation --&gt; Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere convection and turbulence", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. Turbulence Convection --&gt; Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBoundary layer turbulence scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBoundary layer turbulence scheme type", "# PROPERTY ID - DO NOT EDIT ! 
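\n# NOTE (illustrative addition, not part of the generated ES-DOC template): once the property ID\n# is set by the DOC.set_id(...) call below, the value is recorded with DOC.set_value(...),\n# for example DOC.set_value(\"TKE prognostic\"), using any entry from the \"Valid Choices\" list below.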
\nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.3. Closure Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoundary layer turbulence scheme closure order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Counter Gradient\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "31. Turbulence Convection --&gt; Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDeep convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.5. 
Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Turbulence Convection --&gt; Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nShallow convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nshallow convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nshallow convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n", "32.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Microphysics Precipitation --&gt; Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.2. Hydrometeors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLarge scale cloud microphysics processes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the atmosphere cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.3. Atmos Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n", "36.4. Uses Separate Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.6. Prognostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.7. Diagnostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.8. Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37. Cloud Scheme --&gt; Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37.2. Cloud Inhomogeneity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "38.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "38.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale water distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "39.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "39.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "39.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of observation simulator characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Observation Simulation --&gt; Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator ISSCP top height estimation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. Top Height Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator ISSCP top height direction", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42. Observation Simulation --&gt; Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP run configuration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42.2. Number Of Grid Points\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of grid points", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.3. Number Of Sub Columns\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.4. Number Of Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of levels", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43. Observation Simulation --&gt; Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar frequency (Hz)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "43.3. Gas Absorption\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses gas absorption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "43.4. Effective Radius\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses effective radius", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "44. Observation Simulation --&gt; Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator lidar ice type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "44.2. Overlap\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator lidar overlap", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "45.2. 
Sponge Layer\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.3. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground wave distribution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.4. Subgrid Scale Orography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSubgrid scale orography effects taken into account.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46. Gravity Waves --&gt; Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "46.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave propogation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47. Gravity Waves --&gt; Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "47.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n", "47.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave propogation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of solar insolation of the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "49. Solar --&gt; Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "50. Solar --&gt; Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the solar constant.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "50.2. Fixed Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "50.3. Transient Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nsolar constant transient characteristics (W m-2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51. Solar --&gt; Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "51.2. Fixed Reference Date\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "51.3. Transient Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of transient orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51.4. 
Computation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used for computing orbital parameters.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "52. Solar --&gt; Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "54. Volcanos --&gt; Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jinzishuai/learn2deeplearn
deeplearning.ai/C5.SequenceModel/Week1_RNN/assignment/Dinosaur Island -- Character-level language model/Dinosaurus Island -- Character level language model final - v1.ipynb
gpl-3.0
[ "Character level language model - Dinosaurus land\nWelcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go beserk, so choose wisely! \n<table>\n<td>\n<img src=\"images/dino.jpg\" style=\"width:250;height:300px;\">\n\n</td>\n\n</table>\n\nLuckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this dataset. (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath! \nBy completing this assignment you will learn:\n\nHow to store text data for processing using an RNN \nHow to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit\nHow to build a character-level text generation recurrent neural network\nWhy clipping the gradients is important\n\nWe will begin by loading in some functions that we have provided for you in rnn_utils. Specifically, you have access to functions such as rnn_forward and rnn_backward which are equivalent to those you've implemented in the previous assignment.", "import numpy as np\nfrom utils import *\nimport random\nfrom random import shuffle", "1 - Problem Statement\n1.1 - Dataset and Preprocessing\nRun the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.", "data = open('dinos.txt', 'r').read()\ndata= data.lower()\nchars = list(set(data))\ndata_size, vocab_size = len(data), len(chars)\nprint('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))", "The characters are a-z (26 characters) plus the \"\\n\" (or newline character), which in this assignment plays a role similar to the &lt;EOS&gt; (or \"End of sentence\") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. 
Below, char_to_ix and ix_to_char are the python dictionaries.", "char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }\nix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }\nprint(ix_to_char)", "1.2 - Overview of the model\nYour model will have the following structure: \n\nInitialize parameters \nRun the optimization loop\nForward propagation to compute the loss function\nBackward propagation to compute the gradients with respect to the loss function\nClip the gradients to avoid exploding gradients\nUsing the gradients, update your parameter with the gradient descent update rule.\n\n\nReturn the learned parameters \n\n<img src=\"images/rnn.png\" style=\"width:450;height:300px;\">\n<caption><center> Figure 1: Recurrent Neural Network, similar to what you had built in the previous notebook \"Building a RNN - Step by Step\". </center></caption>\nAt each time-step, the RNN tries to predict what is the next character given the previous characters. The dataset $X = (x^{\\langle 1 \\rangle}, x^{\\langle 2 \\rangle}, ..., x^{\\langle T_x \\rangle})$ is a list of characters in the training set, while $Y = (y^{\\langle 1 \\rangle}, y^{\\langle 2 \\rangle}, ..., y^{\\langle T_x \\rangle})$ is such that at every time-step $t$, we have $y^{\\langle t \\rangle} = x^{\\langle t+1 \\rangle}$. \n2 - Building blocks of the model\nIn this part, you will build two important blocks of the overall model:\n- Gradient clipping: to avoid exploding gradients\n- Sampling: a technique used to generate characters\nYou will then apply these two functions to build the model.\n2.1 - Clipping the gradients in the optimization loop\nIn this section you will implement the clip function that you will call inside of your optimization loop. Recall that your overall loop structure usually consists of a forward pass, a cost computation, a backward pass, and a parameter update. Before updating the parameters, you will perform gradient clipping when needed to make sure that your gradients are not \"exploding,\" meaning taking on overly large values. \nIn the exercise below, you will implement a function clip that takes in a dictionary of gradients and returns a clipped version of gradients if needed. There are different ways to clip gradients; we will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. More generally, you will provide a maxValue (say 10). In this example, if any component of the gradient vector is greater than 10, it would be set to 10; and if any component of the gradient vector is less than -10, it would be set to -10. If it is between -10 and 10, it is left alone. \n<img src=\"images/clip.png\" style=\"width:400;height:150px;\">\n<caption><center> Figure 2: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into slight \"exploding gradient\" problems. </center></caption>\nExercise: Implement the function below to return the clipped gradients of your dictionary gradients. Your function takes in a maximum threshold and returns the clipped versions of your gradients. You can check out this hint for examples of how to clip in numpy. 
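As a hedged, standalone illustration of element-wise clipping (a plain numpy sketch, not the graded solution), clipping a single gradient array in place could look like:\n```python\nimport numpy as np\ndWax = np.array([[11.0, -3.0], [-12.0, 4.0]])\nnp.clip(dWax, -10, 10, out=dWax) # in-place clip; dWax becomes [[10., -3.], [-10., 4.]]\n```\n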
You will need to use the argument out = ....", "### GRADED FUNCTION: clip\n\ndef clip(gradients, maxValue):\n '''\n Clips the gradients' values between minimum and maximum.\n \n Arguments:\n gradients -- a dictionary containing the gradients \"dWaa\", \"dWax\", \"dWya\", \"db\", \"dby\"\n maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue\n \n Returns: \n gradients -- a dictionary with the clipped gradients.\n '''\n \n dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']\n \n ### START CODE HERE ###\n # clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)\n for gradient in [dWax, dWaa, dWya, db, dby]:\n None\n ### END CODE HERE ###\n \n gradients = {\"dWaa\": dWaa, \"dWax\": dWax, \"dWya\": dWya, \"db\": db, \"dby\": dby}\n \n return gradients\n\nnp.random.seed(3)\ndWax = np.random.randn(5,3)*10\ndWaa = np.random.randn(5,5)*10\ndWya = np.random.randn(2,5)*10\ndb = np.random.randn(5,1)*10\ndby = np.random.randn(2,1)*10\ngradients = {\"dWax\": dWax, \"dWaa\": dWaa, \"dWya\": dWya, \"db\": db, \"dby\": dby}\ngradients = clip(gradients, 10)\nprint(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\nprint(\"gradients[\\\"dWax\\\"][3][1] =\", gradients[\"dWax\"][3][1])\nprint(\"gradients[\\\"dWya\\\"][1][2] =\", gradients[\"dWya\"][1][2])\nprint(\"gradients[\\\"db\\\"][4] =\", gradients[\"db\"][4])\nprint(\"gradients[\\\"dby\\\"][1] =\", gradients[\"dby\"][1])", "Expected output:\n<table>\n<tr>\n <td> \n **gradients[\"dWaa\"][1][2] **\n </td>\n <td> \n 10.0\n </td>\n</tr>\n\n<tr>\n <td> \n **gradients[\"dWax\"][3][1]**\n </td>\n <td> \n -10.0\n </td>\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dWya\"][1][2]**\n </td>\n <td> \n0.29713815361\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"db\"][4]**\n </td>\n <td> \n[ 10.]\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dby\"][1]**\n </td>\n <td> \n[ 8.45833407]\n </td>\n</tr>\n\n</table>\n\n2.2 - Sampling\nNow assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below:\n<img src=\"images/dinos3.png\" style=\"width:500;height:300px;\">\n<caption><center> Figure 3: In this picture, we assume the model is already trained. We pass in $x^{\\langle 1\\rangle} = \\vec{0}$ at the first time step, and have the network then sample one character at a time. </center></caption>\nExercise: Implement the sample function below to sample characters. You need to carry out 4 steps:\n\n\nStep 1: Pass the network the first \"dummy\" input $x^{\\langle 1 \\rangle} = \\vec{0}$ (the vector of zeros). This is the default input before we've generated any characters. We also set $a^{\\langle 0 \\rangle} = \\vec{0}$\n\n\nStep 2: Run one step of forward propagation to get $a^{\\langle 1 \\rangle}$ and $\\hat{y}^{\\langle 1 \\rangle}$. Here are the equations:\n\n\n$$ a^{\\langle t+1 \\rangle} = \\tanh(W_{ax} x^{\\langle t \\rangle } + W_{aa} a^{\\langle t \\rangle } + b)\\tag{1}$$\n$$ z^{\\langle t + 1 \\rangle } = W_{ya} a^{\\langle t + 1 \\rangle } + b_y \\tag{2}$$\n$$ \\hat{y}^{\\langle t+1 \\rangle } = softmax(z^{\\langle t + 1 \\rangle })\\tag{3}$$\nNote that $\\hat{y}^{\\langle t+1 \\rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). $\\hat{y}^{\\langle t+1 \\rangle}_i$ represents the probability that the character indexed by \"i\" is the next character. 
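As a rough sketch of this forward step (assuming numpy as np and the softmax helper imported from utils at the top of the notebook), equations (1)-(3) translate almost directly into code:\n```python\ndef rnn_step_forward_sketch(parameters, a_prev, x):\n # One forward step of the RNN cell, following equations (1)-(3) above\n Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']\n a = np.tanh(np.dot(Wax, x) + np.dot(Waa, a_prev) + b) # equation (1)\n z = np.dot(Wya, a) + by # equation (2)\n y = softmax(z) # equation (3)\n return a, y\n```\n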
We have provided a softmax() function that you can use.\n\nStep 3: Carry out sampling: Pick the next character's index according to the probability distribution specified by $\\hat{y}^{\\langle t+1 \\rangle }$. This means that if $\\hat{y}^{\\langle t+1 \\rangle }_i = 0.16$, you will pick the index \"i\" with 16% probability. To implement it, you can use np.random.choice.\n\nHere is an example of how to use np.random.choice():\npython\nnp.random.seed(0)\np = np.array([0.1, 0.0, 0.7, 0.2])\nindex = np.random.choice([0, 1, 2, 3], p = p.ravel())\nThis means that you will pick the index according to the distribution: \n$P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.\n\nStep 4: The last step to implement in sample() is to overwrite the variable x, which currently stores $x^{\\langle t \\rangle }$, with the value of $x^{\\langle t + 1 \\rangle }$. You will represent $x^{\\langle t + 1 \\rangle }$ by creating a one-hot vector corresponding to the character you've chosen as your prediction. You will then forward propagate $x^{\\langle t + 1 \\rangle }$ in Step 1 and keep repeating the process until you get a \"\\n\" character, indicating you've reached the end of the dinosaur name.", "# GRADED FUNCTION: sample\n\ndef sample(parameters, char_to_ix, seed):\n \"\"\"\n Sample a sequence of characters according to a sequence of probability distributions output of the RNN\n\n Arguments:\n parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b. \n char_to_ix -- python dictionary mapping each character to an index.\n seed -- used for grading purposes. Do not worry about it.\n\n Returns:\n indices -- a list of length n containing the indices of the sampled characters.\n \"\"\"\n \n # Retrieve parameters and relevant shapes from \"parameters\" dictionary\n Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']\n vocab_size = by.shape[0]\n n_a = Waa.shape[1]\n \n ### START CODE HERE ###\n # Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line)\n x = None\n # Step 1': Initialize a_prev as zeros (≈1 line)\n a_prev = None\n \n # Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)\n indices = []\n \n # Idx is a flag to detect a newline character, we initialize it to -1\n idx = -1 \n \n # Loop over time-steps t. At each time-step, sample a character from a probability distribution and append \n # its index to \"indices\". We'll stop if we reach 50 characters (which should be very unlikely with a well \n # trained model), which helps debugging and prevents entering an infinite loop. 
\n counter = 0\n newline_character = char_to_ix['\\n']\n \n while (idx != newline_character and counter != 50):\n \n # Step 2: Forward propagate x using the equations (1), (2) and (3)\n a = None\n z = None\n y = None\n \n # for grading purposes\n np.random.seed(counter+seed) \n \n # Step 3: Sample the index of a character within the vocabulary from the probability distribution y\n idx = None\n\n # Append the index to \"indices\"\n None\n \n # Step 4: Overwrite the input character as the one corresponding to the sampled index.\n x = None\n x[None] = None\n \n # Update \"a_prev\" to be \"a\"\n a_prev = None\n \n # for grading purposes\n seed += 1\n counter +=1\n \n ### END CODE HERE ###\n\n if (counter == 50):\n indices.append(char_to_ix['\\n'])\n \n return indices\n\nnp.random.seed(2)\nn, n_a = 20, 100\na0 = np.random.randn(n_a, 1)\ni0 = 1 # first character is ix_to_char[i0]\nWax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)\nb, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)\nparameters = {\"Wax\": Wax, \"Waa\": Waa, \"Wya\": Wya, \"b\": b, \"by\": by}\n\n\nindices = sample(parameters, char_to_ix, 0)\nprint(\"Sampling:\")\nprint(\"list of sampled indices:\", indices)\nprint(\"list of sampled characters:\", [ix_to_char[i] for i in indices])", "Expected output:\n<table>\n<tr>\n <td> \n **list of sampled indices:**\n </td>\n <td> \n [18, 2, 26, 0]\n </td>\n </tr><tr>\n <td> \n **list of sampled characters:**\n </td>\n <td> \n ['r', 'b', 'z', '\\n']\n </td>\n\n\n\n</tr>\n</table>\n\n3 - Building the language model\nIt is time to build the character-level language model for text generation. \n3.1 - Gradient descent\nIn this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN:\n\nForward propagate through the RNN to compute the loss\nBackward propagate through time to compute the gradients of the loss with respect to the parameters\nClip the gradients if necessary \nUpdate your parameters using gradient descent \n\nExercise: Implement this optimization process (one step of stochastic gradient descent). \nWe provide you with the following functions: \n```python\ndef rnn_forward(X, Y, a_prev, parameters):\n \"\"\" Performs the forward propagation through the RNN and computes the cross-entropy loss.\n It returns the loss' value as well as a \"cache\" storing values to be used in the backpropagation.\"\"\"\n ....\n return loss, cache\ndef rnn_backward(X, Y, parameters, cache):\n \"\"\" Performs the backward propagation through time to compute the gradients of the loss with respect\n to the parameters. 
It returns also all the hidden states.\"\"\"\n ...\n return gradients, a\ndef update_parameters(parameters, gradients, learning_rate):\n \"\"\" Updates parameters using the Gradient Descent Update Rule.\"\"\"\n ...\n return parameters\n```", "# GRADED FUNCTION: optimize\n\ndef optimize(X, Y, a_prev, parameters, learning_rate = 0.01):\n \"\"\"\n Execute one step of the optimization to train the model.\n \n Arguments:\n X -- list of integers, where each integer is a number that maps to a character in the vocabulary.\n Y -- list of integers, exactly the same as X but shifted one index to the left.\n a_prev -- previous hidden state.\n parameters -- python dictionary containing:\n Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)\n Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)\n Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)\n b -- Bias, numpy array of shape (n_a, 1)\n by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)\n learning_rate -- learning rate for the model.\n \n Returns:\n loss -- value of the loss function (cross-entropy)\n gradients -- python dictionary containing:\n dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)\n dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)\n dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)\n db -- Gradients of bias vector, of shape (n_a, 1)\n dby -- Gradients of output bias vector, of shape (n_y, 1)\n a[len(X)-1] -- the last hidden state, of shape (n_a, 1)\n \"\"\"\n \n ### START CODE HERE ###\n \n # Forward propagate through time (≈1 line)\n loss, cache = None\n \n # Backpropagate through time (≈1 line)\n gradients, a = None\n \n # Clip your gradients between -5 (min) and 5 (max) (≈1 line)\n gradients = None\n \n # Update parameters (≈1 line)\n parameters = None\n \n ### END CODE HERE ###\n \n return loss, gradients, a[len(X)-1]\n\nnp.random.seed(1)\nvocab_size, n_a = 27, 100\na_prev = np.random.randn(n_a, 1)\nWax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)\nb, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)\nparameters = {\"Wax\": Wax, \"Waa\": Waa, \"Wya\": Wya, \"b\": b, \"by\": by}\nX = [12,3,5,11,22,3]\nY = [4,14,11,22,25, 26]\n\nloss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)\nprint(\"Loss =\", loss)\nprint(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\nprint(\"np.argmax(gradients[\\\"dWax\\\"]) =\", np.argmax(gradients[\"dWax\"]))\nprint(\"gradients[\\\"dWya\\\"][1][2] =\", gradients[\"dWya\"][1][2])\nprint(\"gradients[\\\"db\\\"][4] =\", gradients[\"db\"][4])\nprint(\"gradients[\\\"dby\\\"][1] =\", gradients[\"dby\"][1])\nprint(\"a_last[4] =\", a_last[4])", "Expected output:\n<table>\n\n\n<tr>\n <td> \n **Loss **\n </td>\n <td> \n 126.503975722\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dWaa\"][1][2]**\n </td>\n <td> \n 0.194709315347\n </td>\n<tr>\n <td> \n **np.argmax(gradients[\"dWax\"])**\n </td>\n <td> 93\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dWya\"][1][2]**\n </td>\n <td> -0.007773876032\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"db\"][4]**\n </td>\n <td> [-0.06809825]\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dby\"][1]**\n </td>\n <td>[ 0.01538192]\n </td>\n</tr>\n<tr>\n <td> \n **a_last[4]**\n </td>\n <td> [-1.]\n </td>\n</tr>\n\n</table>\n\n3.2 - Training the model\nGiven the dataset of dinosaur names, we use 
each line of the dataset (one name) as one training example. Every 100 steps of stochastic gradient descent, you will sample 10 randomly chosen names to see how the algorithm is doing. Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order. \nExercise: Follow the instructions and implement model(). When examples[index] contains one dinosaur name (string), to create an example (X, Y), you can use this:\npython\n index = j % len(examples)\n X = [None] + [char_to_ix[ch] for ch in examples[index]] \n Y = X[1:] + [char_to_ix[\"\\n\"]]\nNote that we use: index= j % len(examples), where j = 1....num_iterations, to make sure that examples[index] is always a valid statement (index is smaller than len(examples)).\nThe first entry of X being None will be interpreted by rnn_forward() as setting $x^{\\langle 0 \\rangle} = \\vec{0}$. Further, this ensures that Y is equal to X but shifted one step to the left, and with an additional \"\\n\" appended to signify the end of the dinosaur name.", "# GRADED FUNCTION: model\n\ndef model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):\n \"\"\"\n Trains the model and generates dinosaur names. \n \n Arguments:\n data -- text corpus\n ix_to_char -- dictionary that maps the index to a character\n char_to_ix -- dictionary that maps a character to an index\n num_iterations -- number of iterations to train the model for\n n_a -- number of units of the RNN cell\n dino_names -- number of dinosaur names you want to sample at each iteration. \n vocab_size -- number of unique characters found in the text, size of the vocabulary\n \n Returns:\n parameters -- learned parameters\n \"\"\"\n \n # Retrieve n_x and n_y from vocab_size\n n_x, n_y = vocab_size, vocab_size\n \n # Initialize parameters\n parameters = initialize_parameters(n_a, n_x, n_y)\n \n # Initialize loss (this is required because we want to smooth our loss, don't worry about it)\n loss = get_initial_loss(vocab_size, dino_names)\n \n # Build list of all dinosaur names (training examples).\n with open(\"dinos.txt\") as f:\n examples = f.readlines()\n examples = [x.lower().strip() for x in examples]\n \n # Shuffle list of all dinosaur names\n shuffle(examples)\n \n # Initialize the hidden state of your LSTM\n a_prev = np.zeros((n_a, 1))\n \n # Optimization loop\n for j in range(num_iterations):\n \n ### START CODE HERE ###\n \n # Use the hint above to define one training example (X,Y) (≈ 2 lines)\n index = None\n X = None\n Y = None\n \n # Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters\n # Choose a learning rate of 0.01\n curr_loss, gradients, a_prev = None\n \n ### END CODE HERE ###\n \n # Use a latency trick to keep the loss smooth. It happens here to accelerate the training.\n loss = smooth(loss, curr_loss)\n\n # Every 2000 Iteration, generate \"n\" characters thanks to sample() to check if the model is learning properly\n if j % 2000 == 0:\n \n print('Iteration: %d, Loss: %f' % (j, loss) + '\\n')\n \n # The number of dinosaur names to print\n seed = 0\n for name in range(dino_names):\n \n # Sample indices and print them\n sampled_indices = sample(parameters, char_to_ix, seed)\n print_sample(sampled_indices, ix_to_char)\n \n seed += 1 # To get the same result for grading purposed, increment the seed by one. \n \n print('\\n')\n \n return parameters", "Run the following cell, you should observe your model outputting random-looking characters at the first iteration. 
After a few thousand iterations, your model should learn to generate reasonable-looking names.", "parameters = model(data, ix_to_char, char_to_ix)", "Conclusion\nYou can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implemetation generated some really cool names like maconucon, marloralus and macingsersaurus. Your model hopefully also learned that dinosaur names tend to end in saurus, don, aura, tor, etc.\nIf your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, dromaeosauroides is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest! \nThis assignment had used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the english language requires a much bigger dataset, and usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name for quite some time, and so far our favoriate name is the great, undefeatable, and fierce: Mangosaurus!\n<img src=\"images/mangosaurus.jpeg\" style=\"width:250;height:300px;\">\n4 - Writing like Shakespeare\nThe rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative. \nA similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of Dinosaur names you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer term dependencies that span many characters in the text--e.g., where a character appearing somewhere a sequence can influence what should be a different character much much later in ths sequence. These long term dependencies were less important with dinosaur names, since the names were quite short. \n<img src=\"images/shakespeare.jpg\" style=\"width:500;height:400px;\">\n<caption><center> Let's become poets! </center></caption>\nWe have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.", "from __future__ import print_function\nfrom keras.callbacks import LambdaCallback\nfrom keras.models import Model, load_model, Sequential\nfrom keras.layers import Dense, Activation, Dropout, Input, Masking\nfrom keras.layers import LSTM\nfrom keras.utils.data_utils import get_file\nfrom keras.preprocessing.sequence import pad_sequences\nfrom shakespeare_utils import *\nimport sys\nimport io", "To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called \"The Sonnets\". \nLet's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run generate_output, which will prompt asking you for an input (&lt;40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try \"Forsooth this maketh no sense \" (don't enter the quotation marks). 
Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.", "print_callback = LambdaCallback(on_epoch_end=on_epoch_end)\n\nmodel.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])\n\n# Run this cell to try with different inputs without having to re-train the model \ngenerate_output()", "The RNN-Shakespeare model is very similar to the one you have built for dinosaur names. The only major differences are:\n- LSTMs instead of the basic RNN to capture longer-range dependencies\n- The model is a deeper, stacked LSTM model (2 layer)\n- Using Keras instead of python to simplify the code \nIf you want to learn more, you can also check out the Keras Team's text generation implementation on GitHub: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py.\nCongratulations on finishing this notebook! \nReferences:\n- This exercise took inspiration from Andrej Karpathy's implementation: https://gist.github.com/karpathy/d4dee566867f8291f086. To learn more about text generation, also check out Karpathy's blog post.\n- For the Shakespearian poem generator, our implementation was based on the implementation of an LSTM text generator by the Keras team: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sripaladugu/sripaladugu.github.io
ipynb/OOP Concepts.ipynb
mit
[ "Youtube Videos:\n * https://www.youtube.com/watch?v=rq8cL2XMM5M&index=3&list=PL-osiE80TeTsqhIuOqKhwlXsIBIdSeYtc\n * https://www.youtube.com/watch?v=RSl87lqOXDE&index=4&list=PL-osiE80TeTsqhIuOqKhwlXsIBIdSeYtc\nOnline References:\n * https://jeffknupp.com/blog/2017/03/27/improve-your-python-python-classes-and-object-oriented-programming/\n * https://dbader.org/blog/abstract-base-classes-in-python", "class Employee:\n emp_count = 0 # Class Variable\n company = 'Google' # Class Variable \n def __init__(self, fname, lname):\n self.fname = fname\n self.lname = lname\n self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'\n Employee.emp_count += 1\n \n def get_fullname(self):\n return '{} {}'.format(self.fname, self.lname)\n \n def get_company(self):\n return 'Company Name is: {}'.format(Employee.company)\n\nemp1 = Employee('Sri', 'Paladugu')\nemp2 = Employee('Dhruv', 'Paladugu')\n\nprint( emp1.get_fullname() )\nprint( Employee.emp_count )\n\n# Trobule ensues when you treat class variables as instance attribute. \n# What the interpreter does in this case is, it creates an instance attribute with the same name and assigns to it.\n# The class variable still remains intact with old value.\nemp1.company = 'Verily'\nprint(emp1.company)\nprint(emp1.get_company())\n\nprint(emp2.company)\nprint(emp2.email)", "Class Methods", "class Employee:\n emp_count = 0 # Class Variable\n company = 'Google' # Class Variable\n raise_amount = 1.04\n def __init__(self, fname, lname):\n self.fname = fname\n self.lname = lname\n self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'\n Employee.emp_count += 1\n \n def get_fullname(self):\n return '{} {}'.format(self.fname, self.lname)\n \n def get_company(self):\n return 'Company Name is: {}'.format(Employee.company)\n \n @classmethod\n def set_raise_amt(cls, amount):\n cls.raise_amount = amount\n\nemp1 = Employee('Sri', 'Paladugu')\nemp2 = Employee('Dhruv', 'Paladugu')\n\nEmployee.set_raise_amt(1.05)\nprint(Employee.raise_amount)\nprint(emp1.raise_amount)\nprint(emp2.raise_amount)", "Class Methods can be used to create alternate constructors", "class Employee:\n emp_count = 0 # Class Variable\n company = 'Google' # Class Variable\n raise_amount = 1.04\n def __init__(self, fname, lname, salary):\n self.fname = fname\n self.lname = lname\n self.salary = salary\n self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'\n Employee.emp_count += 1\n \n def get_fullname(self):\n return '{} {}'.format(self.fname, self.lname)\n \n def get_company(self):\n return 'Company Name is: {}'.format(Employee.company)\n \n @classmethod\n def set_raise_amt(cls, amount):\n cls.raise_amount = amount\n \n @classmethod\n def from_string(cls, emp_str):\n fname, lname, salary = emp_str.split(\"-\")\n return cls(fname, lname, salary)\n\nnew_emp = Employee.from_string(\"Pradeep-Koganti-10000\")\nprint(new_emp.email)", "Static Methods\n\nInstance methods take self as the first argument\nClass methods take cls as the first argument\nStatic methods don't take instance or class as their argument, we just pass the arguments we want to work with.\n\nStatic methods don't operate on instance or class.", "class Employee:\n emp_count = 0 # Class Variable\n company = 'Google' # Class Variable\n raise_amount = 1.04\n def __init__(self, fname, lname, salary):\n self.fname = fname\n self.lname = lname\n self.salary = salary\n self.email = self.fname + '.' 
+ self.lname + '@' + self.company + '.com'\n Employee.emp_count += 1\n \n def get_fullname(self):\n return '{} {}'.format(self.fname, self.lname)\n \n def get_company(self):\n return 'Company Name is: {}'.format(Employee.company)\n \n @classmethod\n def set_raise_amt(cls, amount):\n cls.raise_amount = amount\n \n @classmethod\n def from_string(cls, emp_str):\n fname, lname, salary = emp_str.split(\"-\")\n return cls(fname, lname, salary)\n \n @staticmethod\n def is_workday(day):\n if day.weekday() == 5 or day.weekday() == 6:\n return False\n else:\n return True\n\nimport datetime\nmy_date = datetime.date(2016, 7, 10)\n\nprint(Employee.is_workday(my_date))", "Inheritance - Creating subclasses", "class Employee:\n emp_count = 0 # Class Variable\n company = 'Google' # Class Variable\n raise_amount = 1.04\n def __init__(self, fname, lname, salary):\n self.fname = fname\n self.lname = lname\n self.salary = salary\n self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'\n Employee.emp_count += 1\n \n def get_fullname(self):\n return '{} {}'.format(self.fname, self.lname)\n \n def get_company(self):\n return 'Company Name is: {}'.format(Employee.company)\n \n def apply_raise(self):\n self.salary = self.salary * self.raise_amount\n\nclass Developer(Employee):\n pass\n\ndev1 = Developer('Sri', 'Paladugu', 1000)\nprint(dev1.get_fullname())\nprint(help(Developer)) # This command prints the Method resolution order. \n# Indicating the order in which the interpreter is going to look for methods.", "Now what if you want Developer's raise_amount to be 10%?", "class Employee:\n emp_count = 0 # Class Variable\n company = 'Google' # Class Variable\n raise_amount = 1.04\n def __init__(self, fname, lname, salary):\n self.fname = fname\n self.lname = lname\n self.salary = salary\n self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'\n Employee.emp_count += 1\n \n def get_fullname(self):\n return '{} {}'.format(self.fname, self.lname)\n \n def get_company(self):\n return 'Company Name is: {}'.format(Employee.company)\n\n def apply_raise(self):\n self.salary = self.salary * self.raise_amount \n \nclass Developer(Employee):\n raise_amount = 1.10\n\ndev1 = Developer('Sri', 'Paladugu', 1000)\ndev1.apply_raise()\nprint(dev1.salary)", "Now what if we want the Developer class to have an extra attribute like prog_lang?", "class Employee:\n emp_count = 0 # Class Variable\n company = 'Google' # Class Variable\n raise_amount = 1.04\n def __init__(self, fname, lname, salary):\n self.fname = fname\n self.lname = lname\n self.salary = salary\n self.email = self.fname + '.' 
+ self.lname + '@' + self.company + '.com'\n Employee.emp_count += 1\n \n def get_fullname(self):\n return '{} {}'.format(self.fname, self.lname)\n \n def get_company(self):\n return 'Company Name is: {}'.format(Employee.company)\n\n def apply_raise(self):\n self.salary = self.salary * self.raise_amount \n \nclass Developer(Employee):\n raise_amount = 1.10\n \n def __init__(self, fname, lname, salary, prog_lang):\n super().__init__(fname, lname, salary)\n # or you can also use the following syntax\n # Employee.__init__(self, fname, lname, salary)\n self.prog_lang = prog_lang\n\ndev1 = Developer('Sri', 'Paladugu', 1000, 'Python')\nprint(dev1.get_fullname())\nprint(dev1.prog_lang)", "Gotcha - Mutable default arguments\n* https://pythonconquerstheuniverse.wordpress.com/2012/02/15/mutable-default-arguments/", "class Employee:\n emp_count = 0 # Class Variable\n company = 'Google' # Class Variable\n raise_amount = 1.04\n def __init__(self, fname, lname, salary):\n self.fname = fname\n self.lname = lname\n self.salary = salary\n self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'\n Employee.emp_count += 1\n \n def get_fullname(self):\n return '{} {}'.format(self.fname, self.lname)\n \n def get_company(self):\n return 'Company Name is: {}'.format(Employee.company)\n\n def apply_raise(self):\n self.salary = self.salary * self.raise_amount \n \nclass Developer(Employee):\n raise_amount = 1.10\n \n def __init__(self, fname, lname, salary, prog_lang):\n super().__init__(fname, lname, salary)\n # or you can also use the following syntax\n # Employee.__init__(self, fname, lname, salary)\n self.prog_lang = prog_lang\n\nclass Manager(Employee):\n def __init__(self, fname, lname, salary, employees = None): # Use None as default not empty list []\n super().__init__(fname, lname, salary)\n if employees is None:\n self.employees = []\n else:\n self.employees = employees\n def add_employee(self, emp):\n if emp not in self.employees:\n self.employees.append(emp)\n def remove_employee(self, emp):\n if emp in self.employees:\n self.employees.remove(emp)\n def print_emps(self):\n for emp in self.employees:\n print('--->', emp.get_fullname())\n\ndev_1 = Developer('Sri', 'Paladugu', 1000, 'Python')\ndev_2 = Developer('Dhruv', 'Paladugu', 2000, 'Java')\nmgr_1 = Manager('Sue', 'Smith', 9000, [dev_1])\nprint(mgr_1.email)\nprint(mgr_1.print_emps())\nmgr_1.add_employee(dev_2)\nprint(mgr_1.print_emps())\n\nprint('Is dev_1 an instance of Developer: ', isinstance(dev_1, Developer))\nprint('Is dev_1 an instance of Employee: ', isinstance(dev_1, Employee))\nprint('Is Developer an Subclass of Developer: ', issubclass(Developer, Developer))\nprint('Is Developer an Subclass of Employee: ', issubclass(Developer, Employee))", "Magic or Dunder Methods\n\nhttps://www.youtube.com/watch?v=3ohzBxoFHAY&index=5&list=PL-osiE80TeTsqhIuOqKhwlXsIBIdSeYtc\n\nDunder methods:\n1. __repr__\n2. __str__", "class Employee:\n company = 'Google'\n def __init__(self, fname, lname, salary):\n self.fname = fname\n self.lname = lname\n self.salary = salary\n self.email = self.fname + '.' 
+ self.lname + '@' + self.company + '.com'\n def __repr__(self): # For other developers\n return \"Employee('{}','{}','{}')\".format(self.fname, self.lname, self.salary)\n def __str__(self): # For end user\n return '{} - {}'.format(self.get_fullname(), self.email)\n def get_fullname(self):\n return '{} {}'.format(self.fname, self.lname)\n\nemp1 = Employee('Sri', 'Paladugu', 5000)\nprint(emp1)\nprint(repr(emp1))", "__add__\n__len__", "# if you do: 1 + 2 internally the interpreter calls the dunder method __add__\nprint(int.__add__(1,2))\n# Similarly # if you do: [2,3] + [4,5] internally the interpreter calls the dunder method __add__\nprint(list.__add__([2,3],[4,5]))\n\nprint('Paladugu'.__len__()) # This is same as len('Paladugu')\n\nclass Employee:\n company = 'Google'\n def __init__(self, fname, lname, salary):\n self.fname = fname\n self.lname = lname\n self.salary = salary\n self.email = self.fname + '.' + self.lname + '@' + self.company + '.com'\n def __repr__(self): # For other developers\n return \"Employee('{}','{}','{}')\".format(self.fname, self.lname, self.salary)\n def __str__(self): # For end user\n return '{} - {}'.format(self.get_fullname(), self.email)\n def get_fullname(self):\n return '{} {}'.format(self.fname, self.lname)\n def __add__(self, other):\n return self.salary + other.salary\n def __len__(self):\n return len(self.get_fullname())\n\nemp1 = Employee('Sri', 'Paladugu', 5000)\nemp2 = Employee('Dhruv', 'Paladugu', 5000)\n\nprint(emp1 + emp2)\nprint(len(emp1))", "Property Decorators", "class Employee:\n company = 'Google'\n def __init__(self, fname, lname, salary):\n self.fname = fname\n self.lname = lname\n self.salary = salary\n\n @property\n def email(self):\n return '{}.{}@{}.com'.format(self.fname, self.lname, self.company)\n\n @property\n def fullname(self):\n return '{} {}'.format(self.fname, self.lname)\n \n @fullname.setter\n def fullname(self, name):\n first, last = name.split(' ')\n self.fname = first\n self.lname = last\n \n @fullname.deleter\n def fullname(self):\n print('Delete Name!')\n self.fname = None\n self.lname = None\n\nemp1 = Employee('Sri', 'Paladugu', 5000)\nprint(emp1.email)\nprint(emp1.fullname)\nemp1.fullname = 'Ramki Paladugu'\nprint(emp1.email)\ndel emp1.fullname\nprint(emp1.email)", "Abstract Base Classes in Python\nWhat are Abstract Base Classes good for? A while ago I had a discussion about which pattern to use for implementing a maintainable class hierarchy in Python. More specifically, the goal was to define a simple class hierarchy for a service backend in the most programmer-friendly and maintainable way.\nThere was a BaseService that defines a common interface and several concrete implementations that do different things but all provide the same interface (MockService, RealService, and so on). To make this relationship explicit the concrete implementations all subclass BaseService.\nTo be as maintainable and programmer-friendly as possible the idea was to make sure that:\n\ninstantiating the base class is impossible; and\nforgetting to implement interface methods in one of the subclasses raises an error as early as possible.", "from abc import ABCMeta, abstractmethod\n\nclass Base(metaclass=ABCMeta):\n @abstractmethod\n def foo(self):\n pass\n\n @abstractmethod\n def bar(self):\n pass\n\nclass Concrete(Base):\n def foo(self):\n pass\n\n # We forget to declare bar()\n\nc = Concrete()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nimagh/CNN_Implementations
Notebooks/AAE.ipynb
gpl-3.0
[ "Adversarial Autoencoders\nAdversarial Autoencoders. Makhzani, 2015\nPerforming variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution.\nUse this code with no warranty and please respect the accompanying license.", "# Imports\n%reload_ext autoreload\n%autoreload 1\n\nimport os, sys\nsys.path.append('../')\nsys.path.append('../common')\nsys.path.append('../GenerativeModels')\n\nfrom tools_general import tf, np\nfrom IPython.display import Image\nfrom tools_train import get_train_params, OneHot, vis_square\nfrom tools_config import data_dir\nfrom tools_train import get_train_params, plot_latent_variable\nimport matplotlib.pyplot as plt\nimport imageio\nfrom tensorflow.examples.tutorials.mnist import input_data\nfrom tools_train import get_demo_data\n\n# define parameters\nnetworktype = 'AAE_MNIST'\n\nwork_dir = '../trained_models/%s/' %networktype\nif not os.path.exists(work_dir): os.makedirs(work_dir)", "Network definitions", "from AAE import create_encoder, create_decoder, create_aae_trainer", "Training AAE\nYou can either get the fully trained models from google drive or train your own models using the AAE.py script.\nExperiments\nCreate demo networks and restore weights", "iter_num = 18018\nbest_model = work_dir + \"Model_Iter_%.3d.ckpt\"%iter_num\nbest_img = work_dir + 'Gen_Iter_%d.jpg'%iter_num\nImage(filename=best_img)\n\nlatentD = 2 # of the best model trained\nbatch_size = 128\n\ntf.reset_default_graph() \ndemo_sess = tf.InteractiveSession()\n\nis_training = tf.placeholder(tf.bool, [], 'is_training')\n\nZph = tf.placeholder(tf.float32, [None, latentD])\nXph = tf.placeholder(tf.float32, [None, 28, 28, 1])\n\nZ_op = create_encoder(Xph, is_training, latentD, reuse=False, networktype=networktype + '_Enc') \nXrec_op = create_decoder(Z_op, is_training, latentD, reuse=False, networktype=networktype + '_Dec')\nXgen_op = create_decoder(Zph, is_training, latentD, reuse=True, networktype=networktype + '_Dec')\n \ntf.global_variables_initializer().run()\n\nenc_varlist = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=networktype + '_Enc') \ndec_varlist = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=networktype + '_Dec')\nsaver = tf.train.Saver(var_list=enc_varlist+dec_varlist)\nsaver.restore(demo_sess, best_model)\n\n#Get uniform samples over the labels\nspl = 800 # sample_per_label\ndata = input_data.read_data_sets(data_dir, one_hot=False, reshape=False)\nXdemo, Xdemo_labels = get_demo_data(data, spl)\n\ndecoded_data = demo_sess.run(Z_op, feed_dict={Xph:Xdemo, is_training:False})\nplot_latent_variable(decoded_data, Xdemo_labels)", "Generate new data\nApproximate samples from the posterior distribution over the latent variables p(z|x)", "Zdemo = np.random.normal(size=[128, latentD], loc=0.0, scale=1.).astype(np.float32)\n\ngen_sample = demo_sess.run(Xgen_op, feed_dict={Zph: Zdemo , is_training:False})\nvis_square(gen_sample[:121], [11, 11], save_path=work_dir + 'sample.jpg')\nImage(filename=work_dir + 'sample.jpg')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kabrapratik28/Stanford_courses
cs231n/2016/assignment3/RNN_Captioning.ipynb
apache-2.0
[ "Image Captioning with RNNs\nIn this exercise you will implement a vanilla recurrent neural networks and use them it to train a model that can generate novel captions for images.", "# As usual, a bit of setup\n\nimport time, os, json\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.rnn_layers import *\nfrom cs231n.captioning_solver import CaptioningSolver\nfrom cs231n.classifiers.rnn import CaptioningRNN\nfrom cs231n.coco_utils import load_coco_data, sample_coco_minibatch, decode_captions\nfrom cs231n.image_utils import image_from_url\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))", "Microsoft COCO\nFor this exercise we will use the 2014 release of the Microsoft COCO dataset which has become the standard testbed for image captioning. The dataset consists of 80,000 training images and 40,000 validation images, each annotated with 5 captions written by workers on Amazon Mechanical Turk.\nTo download the data, change to the cs231n/datasets directory and run the script get_coco_captioning.sh.\nWe have preprocessed the data and extracted features for you already. For all images we have extracted features from the fc7 layer of the VGG-16 network pretrained on ImageNet; these features are stored in the files train2014_vgg16_fc7.h5 and val2014_vgg16_fc7.h5 respectively. To cut down on processing time and memory requirements, we have reduced the dimensionality of the features from 4096 to 512; these features can be found in the files train2014_vgg16_fc7_pca.h5 and val2014_vgg16_fc7_pca.h5.\nThe raw images take up a lot of space (nearly 20GB) so we have not included them in the download. However all images are taken from Flickr, and URLs of the training and validation images are stored in the files train2014_urls.txt and val2014_urls.txt respectively. This allows you to download images on the fly for visualization. Since images are downloaded on-the-fly, you must be connected to the internet to view images.\nDealing with strings is inefficient, so we will work with an encoded version of the captions. Each word is assigned an integer ID, allowing us to represent a caption by a sequence of integers. The mapping between integer IDs and words is in the file coco2014_vocab.json, and you can use the function decode_captions from the file cs231n/coco_utils.py to convert numpy arrays of integer IDs back into strings.\nThere are a couple special tokens that we add to the vocabulary. We prepend a special &lt;START&gt; token and append an &lt;END&gt; token to the beginning and end of each caption respectively. Rare words are replaced with a special &lt;UNK&gt; token (for \"unknown\"). In addition, since we want to train with minibatches containing captions of different lengths, we pad short captions with a special &lt;NULL&gt; token after the &lt;END&gt; token and don't compute loss or gradient for &lt;NULL&gt; tokens. 
Since they are a bit of a pain, we have taken care of all implementation details around special tokens for you.\nYou can load all of the MS-COCO data (captions, features, URLs, and vocabulary) using the load_coco_data function from the file cs231n/coco_utils.py. Run the following cell to do so:", "# Load COCO data from disk; this returns a dictionary\n# We'll work with dimensionality-reduced features for this notebook, but feel\n# free to experiment with the original features by changing the flag below.\ndata = load_coco_data(pca_features=True)\n\n# Print out all the keys and values from the data dictionary\nfor k, v in data.iteritems():\n if type(v) == np.ndarray:\n print k, type(v), v.shape, v.dtype\n else:\n print k, type(v), len(v)", "Look at the data\nIt is always a good idea to look at examples from the dataset before working with it.\nYou can use the sample_coco_minibatch function from the file cs231n/coco_utils.py to sample minibatches of data from the data structure returned from load_coco_data. Run the following to sample a small minibatch of training data and show the images and their captions. Running it multiple times and looking at the results helps you to get a sense of the dataset.\nNote that we decode the captions using the decode_captions function and that we download the images on-the-fly using their Flickr URL, so you must be connected to the internet to viw images.", "# Sample a minibatch and show the images and captions\nbatch_size = 3\n\ncaptions, features, urls = sample_coco_minibatch(data, batch_size=batch_size)\nfor i, (caption, url) in enumerate(zip(captions, urls)):\n plt.imshow(image_from_url(url))\n plt.axis('off')\n caption_str = decode_captions(caption, data['idx_to_word'])\n plt.title(caption_str)\n plt.show()", "Recurrent Neural Networks\nAs discussed in lecture, we will use recurrent neural network (RNN) language models for image captioning. The file cs231n/rnn_layers.py contains implementations of different layer types that are needed for recurrent neural networks, and the file cs231n/classifiers/rnn.py uses these layers to implement an image captioning model.\nWe will first implement different types of RNN layers in cs231n/rnn_layers.py.\nVanilla RNN: step forward\nOpen the file cs231n/rnn_layers.py. This file implements the forward and backward passes for different types of layers that are commonly used in recurrent neural networks.\nFirst implement the function rnn_step_forward which implements the forward pass for a single timestep of a vanilla recurrent neural network. After doing so run the following to check your implementation.", "N, D, H = 3, 10, 4\n\nx = np.linspace(-0.4, 0.7, num=N*D).reshape(N, D)\nprev_h = np.linspace(-0.2, 0.5, num=N*H).reshape(N, H)\nWx = np.linspace(-0.1, 0.9, num=D*H).reshape(D, H)\nWh = np.linspace(-0.3, 0.7, num=H*H).reshape(H, H)\nb = np.linspace(-0.2, 0.4, num=H)\n\nnext_h, _ = rnn_step_forward(x, prev_h, Wx, Wh, b)\nexpected_next_h = np.asarray([\n [-0.58172089, -0.50182032, -0.41232771, -0.31410098],\n [ 0.66854692, 0.79562378, 0.87755553, 0.92795967],\n [ 0.97934501, 0.99144213, 0.99646691, 0.99854353]])\n\nprint 'next_h error: ', rel_error(expected_next_h, next_h)", "Vanilla RNN: step backward\nIn the file cs231n/rnn_layers.py implement the rnn_step_backward function. After doing so run the following to numerically gradient check your implementation. 
You should see errors less than 1e-8.", "from cs231n.rnn_layers import rnn_step_forward, rnn_step_backward\n\nN, D, H = 4, 5, 6\nx = np.random.randn(N, D)\nh = np.random.randn(N, H)\nWx = np.random.randn(D, H)\nWh = np.random.randn(H, H)\nb = np.random.randn(H)\n\nout, cache = rnn_step_forward(x, h, Wx, Wh, b)\n\ndnext_h = np.random.randn(*out.shape)\n\nfx = lambda x: rnn_step_forward(x, h, Wx, Wh, b)[0]\nfh = lambda prev_h: rnn_step_forward(x, h, Wx, Wh, b)[0]\nfWx = lambda Wx: rnn_step_forward(x, h, Wx, Wh, b)[0]\nfWh = lambda Wh: rnn_step_forward(x, h, Wx, Wh, b)[0]\nfb = lambda b: rnn_step_forward(x, h, Wx, Wh, b)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dnext_h)\ndprev_h_num = eval_numerical_gradient_array(fh, h, dnext_h)\ndWx_num = eval_numerical_gradient_array(fWx, Wx, dnext_h)\ndWh_num = eval_numerical_gradient_array(fWh, Wh, dnext_h)\ndb_num = eval_numerical_gradient_array(fb, b, dnext_h)\n\ndx, dprev_h, dWx, dWh, db = rnn_step_backward(dnext_h, cache)\n\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dprev_h error: ', rel_error(dprev_h_num, dprev_h)\nprint 'dWx error: ', rel_error(dWx_num, dWx)\nprint 'dWh error: ', rel_error(dWh_num, dWh)\nprint 'db error: ', rel_error(db_num, db)", "Vanilla RNN: forward\nNow that you have implemented the forward and backward passes for a single timestep of a vanilla RNN, you will combine these pieces to implement a RNN that process an entire sequence of data.\nIn the file cs231n/rnn_layers.py, implement the function rnn_forward. This should be implemented using the rnn_step_forward function that you defined above. After doing so run the following to check your implementation. You should see errors less than 1e-7.", "N, T, D, H = 2, 3, 4, 5\n\nx = np.linspace(-0.1, 0.3, num=N*T*D).reshape(N, T, D)\nh0 = np.linspace(-0.3, 0.1, num=N*H).reshape(N, H)\nWx = np.linspace(-0.2, 0.4, num=D*H).reshape(D, H)\nWh = np.linspace(-0.4, 0.1, num=H*H).reshape(H, H)\nb = np.linspace(-0.7, 0.1, num=H)\n\nh, _ = rnn_forward(x, h0, Wx, Wh, b)\nexpected_h = np.asarray([\n [\n [-0.42070749, -0.27279261, -0.11074945, 0.05740409, 0.22236251],\n [-0.39525808, -0.22554661, -0.0409454, 0.14649412, 0.32397316],\n [-0.42305111, -0.24223728, -0.04287027, 0.15997045, 0.35014525],\n ],\n [\n [-0.55857474, -0.39065825, -0.19198182, 0.02378408, 0.23735671],\n [-0.27150199, -0.07088804, 0.13562939, 0.33099728, 0.50158768],\n [-0.51014825, -0.30524429, -0.06755202, 0.17806392, 0.40333043]]])\nprint 'h error: ', rel_error(expected_h, h)", "Vanilla RNN: backward\nIn the file cs231n/rnn_layers.py, implement the backward pass for a vanilla RNN in the function rnn_backward. 
This should run back-propagation over the entire sequence, calling into the rnn_step_backward function that you defined above.", "N, D, T, H = 2, 3, 10, 5\n\nx = np.random.randn(N, T, D)\nh0 = np.random.randn(N, H)\nWx = np.random.randn(D, H)\nWh = np.random.randn(H, H)\nb = np.random.randn(H)\n\nout, cache = rnn_forward(x, h0, Wx, Wh, b)\n\ndout = np.random.randn(*out.shape)\n\ndx, dh0, dWx, dWh, db = rnn_backward(dout, cache)\n\nfx = lambda x: rnn_forward(x, h0, Wx, Wh, b)[0]\nfh0 = lambda h0: rnn_forward(x, h0, Wx, Wh, b)[0]\nfWx = lambda Wx: rnn_forward(x, h0, Wx, Wh, b)[0]\nfWh = lambda Wh: rnn_forward(x, h0, Wx, Wh, b)[0]\nfb = lambda b: rnn_forward(x, h0, Wx, Wh, b)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\ndh0_num = eval_numerical_gradient_array(fh0, h0, dout)\ndWx_num = eval_numerical_gradient_array(fWx, Wx, dout)\ndWh_num = eval_numerical_gradient_array(fWh, Wh, dout)\ndb_num = eval_numerical_gradient_array(fb, b, dout)\n\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dh0 error: ', rel_error(dh0_num, dh0)\nprint 'dWx error: ', rel_error(dWx_num, dWx)\nprint 'dWh error: ', rel_error(dWh_num, dWh)\nprint 'db error: ', rel_error(db_num, db)", "Word embedding: forward\nIn deep learning systems, we commonly represent words using vectors. Each word of the vocabulary will be associated with a vector, and these vectors will be learned jointly with the rest of the system.\nIn the file cs231n/rnn_layers.py, implement the function word_embedding_forward to convert words (represented by integers) into vectors. Run the following to check your implementation. You should see error around 1e-8.", "N, T, V, D = 2, 4, 5, 3\n\nx = np.asarray([[0, 3, 1, 2], [2, 1, 0, 3]])\nW = np.linspace(0, 1, num=V*D).reshape(V, D)\n\nout, _ = word_embedding_forward(x, W)\nexpected_out = np.asarray([\n [[ 0., 0.07142857, 0.14285714],\n [ 0.64285714, 0.71428571, 0.78571429],\n [ 0.21428571, 0.28571429, 0.35714286],\n [ 0.42857143, 0.5, 0.57142857]],\n [[ 0.42857143, 0.5, 0.57142857],\n [ 0.21428571, 0.28571429, 0.35714286],\n [ 0., 0.07142857, 0.14285714],\n [ 0.64285714, 0.71428571, 0.78571429]]])\n\nprint 'out error: ', rel_error(expected_out, out)", "Word embedding: backward\nImplement the backward pass for the word embedding function in the function word_embedding_backward. After doing so run the following to numerically gradient check your implementation. You should see errors less than 1e-11.", "N, T, V, D = 50, 3, 5, 6\n\nx = np.random.randint(V, size=(N, T))\nW = np.random.randn(V, D)\n\nout, cache = word_embedding_forward(x, W)\ndout = np.random.randn(*out.shape)\ndW = word_embedding_backward(dout, cache)\n\nf = lambda W: word_embedding_forward(x, W)[0]\ndW_num = eval_numerical_gradient_array(f, W, dout)\n\nprint 'dW error: ', rel_error(dW, dW_num)", "Temporal Affine layer\nAt every timestep we use an affine function to transform the RNN hidden vector at that timestep into scores for each word in the vocabulary. Because this is very similar to the affine layer that you implemented in assignment 2, we have provided this function for you in the temporal_affine_forward and temporal_affine_backward functions in the file cs231n/rnn_layers.py. 
Run the following to perform numeric gradient checking on the implementation.", "# Gradient check for temporal affine layer\nN, T, D, M = 2, 3, 4, 5\n\nx = np.random.randn(N, T, D)\nw = np.random.randn(D, M)\nb = np.random.randn(M)\n\nout, cache = temporal_affine_forward(x, w, b)\n\ndout = np.random.randn(*out.shape)\n\nfx = lambda x: temporal_affine_forward(x, w, b)[0]\nfw = lambda w: temporal_affine_forward(x, w, b)[0]\nfb = lambda b: temporal_affine_forward(x, w, b)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\ndw_num = eval_numerical_gradient_array(fw, w, dout)\ndb_num = eval_numerical_gradient_array(fb, b, dout)\n\ndx, dw, db = temporal_affine_backward(dout, cache)\n\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dw error: ', rel_error(dw_num, dw)\nprint 'db error: ', rel_error(db_num, db)", "Temporal Softmax loss\nIn an RNN language model, at every timestep we produce a score for each word in the vocabulary. We know the ground-truth word at each timestep, so we use a softmax loss function to compute loss and gradient at each timestep. We sum the losses over time and average them over the minibatch.\nHowever there is one wrinke: since we operate over minibatches and different captions may have different lengths, we append &lt;NULL&gt; tokens to the end of each caption so they all have the same length. We don't want these &lt;NULL&gt; tokens to count toward the loss or gradient, so in addition to scores and ground-truth labels our loss function also accepts a mask array that tells it which elements of the scores count towards the loss.\nSince this is very similar to the softmax loss function you implemented in assignment 1, we have implemented this loss function for you; look at the temporal_softmax_loss function in the file cs231n/rnn_layers.py.\nRun the following cell to sanity check the loss and perform numeric gradient checking on the function.", "# Sanity check for temporal softmax loss\nfrom cs231n.rnn_layers import temporal_softmax_loss\n\nN, T, V = 100, 1, 10\n\ndef check_loss(N, T, V, p):\n x = 0.001 * np.random.randn(N, T, V)\n y = np.random.randint(V, size=(N, T))\n mask = np.random.rand(N, T) <= p\n print temporal_softmax_loss(x, y, mask)[0]\n \ncheck_loss(100, 1, 10, 1.0) # Should be about 2.3\ncheck_loss(100, 10, 10, 1.0) # Should be about 23\ncheck_loss(5000, 10, 10, 0.1) # Should be about 2.3\n\n# Gradient check for temporal softmax loss\nN, T, V = 7, 8, 9\n\nx = np.random.randn(N, T, V)\ny = np.random.randint(V, size=(N, T))\nmask = (np.random.rand(N, T) > 0.5)\n\nloss, dx = temporal_softmax_loss(x, y, mask, verbose=False)\n\ndx_num = eval_numerical_gradient(lambda x: temporal_softmax_loss(x, y, mask)[0], x, verbose=False)\n\nprint 'dx error: ', rel_error(dx, dx_num)", "RNN for image captioning\nNow that you have implemented the necessary layers, you can combine them to build an image captioning model. Open the file cs231n/classifiers/rnn.py and look at the CaptioningRNN class.\nImplement the forward and backward pass of the model in the loss function. For now you only need to implement the case where cell_type='rnn' for vanialla RNNs; you will implement the LSTM case later. 
After doing so, run the following to check your forward pass using a small test case; you should see error less than 1e-10.", "N, D, W, H = 10, 20, 30, 40\nword_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}\nV = len(word_to_idx)\nT = 13\n\nmodel = CaptioningRNN(word_to_idx,\n input_dim=D,\n wordvec_dim=W,\n hidden_dim=H,\n cell_type='rnn',\n dtype=np.float64)\n\n# Set all model parameters to fixed values\nfor k, v in model.params.iteritems():\n model.params[k] = np.linspace(-1.4, 1.3, num=v.size).reshape(*v.shape)\n\nfeatures = np.linspace(-1.5, 0.3, num=(N * D)).reshape(N, D)\ncaptions = (np.arange(N * T) % V).reshape(N, T)\n\nloss, grads = model.loss(features, captions)\nexpected_loss = 9.83235591003\n\nprint 'loss: ', loss\nprint 'expected loss: ', expected_loss\nprint 'difference: ', abs(loss - expected_loss)", "Run the following cell to perform numeric gradient checking on the CaptioningRNN class; you should errors around 1e-7 or less.", "batch_size = 2\ntimesteps = 3\ninput_dim = 4\nwordvec_dim = 5\nhidden_dim = 6\nword_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}\nvocab_size = len(word_to_idx)\n\ncaptions = np.random.randint(vocab_size, size=(batch_size, timesteps))\nfeatures = np.random.randn(batch_size, input_dim)\n\nmodel = CaptioningRNN(word_to_idx,\n input_dim=input_dim,\n wordvec_dim=wordvec_dim,\n hidden_dim=hidden_dim,\n cell_type='rnn',\n dtype=np.float64,\n )\n\nloss, grads = model.loss(features, captions)\n\nfor param_name in sorted(grads):\n f = lambda _: model.loss(features, captions)[0]\n param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)\n e = rel_error(param_grad_num, grads[param_name])\n print '%s relative error: %e' % (param_name, e)", "Overfit small data\nSimilar to the Solver class that we used to train image classification models on the previous assignment, on this assignment we use a CaptioningSolver class to train image captioning models. Open the file cs231n/captioning_solver.py and read through the CaptioningSolver class; it should look very familiar.\nOnce you have familiarized yourself with the API, run the following to make sure your model overfit a small sample of 100 training examples. You should see losses around 1.", "small_data = load_coco_data(max_train=50)\n\nsmall_rnn_model = CaptioningRNN(\n cell_type='rnn',\n word_to_idx=data['word_to_idx'],\n input_dim=data['train_features'].shape[1],\n hidden_dim=512,\n wordvec_dim=256,\n )\n\nsmall_rnn_solver = CaptioningSolver(small_rnn_model, small_data,\n update_rule='adam',\n num_epochs=50,\n batch_size=25,\n optim_config={\n 'learning_rate': 5e-3,\n },\n lr_decay=0.95,\n verbose=True, print_every=10,\n )\n\nsmall_rnn_solver.train()\n\n# Plot the training losses\nplt.plot(small_rnn_solver.loss_history)\nplt.xlabel('Iteration')\nplt.ylabel('Loss')\nplt.title('Training loss history')\nplt.show()", "Test-time sampling\nUnlike classification models, image captioning models behave very differently at training time and at test time. At training time, we have access to the ground-truth caption so we feed ground-truth words as input to the RNN at each timestep. At test time, we sample from the distribution over the vocabulary at each timestep, and feed the sample as input to the RNN at the next timestep.\nIn the file cs231n/classifiers/rnn.py, implement the sample method for test-time sampling. After doing so, run the following to sample from your overfit model on both training and validation data. 
The samples on training data should be very good; the samples on validation data probably won't make sense.", "for split in ['train', 'val']:\n minibatch = sample_coco_minibatch(small_data, split=split, batch_size=2)\n gt_captions, features, urls = minibatch\n gt_captions = decode_captions(gt_captions, data['idx_to_word'])\n\n sample_captions = small_rnn_model.sample(features)\n sample_captions = decode_captions(sample_captions, data['idx_to_word'])\n\n for gt_caption, sample_caption, url in zip(gt_captions, sample_captions, urls):\n plt.imshow(image_from_url(url))\n plt.title('%s\\n%s\\nGT:%s' % (split, sample_caption, gt_caption))\n plt.axis('off')\n plt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive/08_image_keras/flowers_fromscratch_tpu.ipynb
apache-2.0
[ "Flowers Image Classification with TensorFlow on Cloud ML Engine TPU\nThis notebook demonstrates how to do image classification from scratch on a flowers dataset using the Estimator API. Unlike flowers_fromscratch.ipynb, here we do it on a TPU.\nTherefore, this will work only if you have quota for TPUs (not in Qwiklabs). It will cost about $3 if you want to try it out.", "%%bash\npip install apache-beam[gcp]", "After doing a pip install, click on Reset Session so that the Python environment picks up the new package", "import os\nPROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID\nBUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME\nREGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\nMODEL_TYPE = 'tpu'\n\n# do not change these\nos.environ['PROJECT'] = PROJECT\nos.environ['BUCKET'] = BUCKET\nos.environ['REGION'] = REGION\nos.environ['MODEL_TYPE'] = MODEL_TYPE\nos.environ['TFVERSION'] = '1.8' # Tensorflow version\n\n%%bash\ngcloud config set project $PROJECT\ngcloud config set compute/region $REGION", "Preprocess JPEG images to TF Records\nWhile using a GPU, it is okay to read the JPEGS directly from our input_fn. However, TPUs are too fast and it will be very wasteful to have the TPUs wait on I/O. Therefore, we'll preprocess the JPEGs into TF Records.\nThis runs on Cloud Dataflow and will take <b> 15-20 minutes </b>", "%%bash\ngsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt\n\n%%bash\ngsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | wc -l\ngsutil cat gs://cloud-ml-data/img/flower_photos/eval_set.csv | wc -l\n\n%%bash\nexport PYTHONPATH=${PYTHONPATH}:${PWD}/flowersmodeltpu\ngsutil -m rm -rf gs://${BUCKET}/tpu/flowers/data\npython -m trainer.preprocess \\\n --train_csv gs://cloud-ml-data/img/flower_photos/train_set.csv \\\n --validation_csv gs://cloud-ml-data/img/flower_photos/eval_set.csv \\\n --labels_file /tmp/labels.txt \\\n --project_id $PROJECT \\\n --output_dir gs://${BUCKET}/tpu/flowers/data\n\n%%bash\ngsutil ls gs://${BUCKET}/tpu/flowers/data/", "Run as a Python module\nFirst run locally without --use_tpu -- don't be concerned if the process gets killed for using too much memory.", "%%bash\nWITHOUT_TPU=\"--train_batch_size=2 --train_steps=5\"\nOUTDIR=./flowers_trained\nrm -rf $OUTDIR\nexport PYTHONPATH=${PYTHONPATH}:${PWD}/flowersmodeltpu\npython -m flowersmodeltpu.task \\\n --output_dir=$OUTDIR \\\n --num_train_images=3300 \\\n --num_eval_images=370 \\\n $WITHOUT_TPU \\\n --learning_rate=0.01 \\\n --project=${PROJECT} \\\n --train_data_path=gs://${BUCKET}/tpu/flowers/data/train* \\\n --eval_data_path=gs://${BUCKET}/tpu/flowers/data/validation*", "Then, run it on Cloud ML Engine with --use_tpu", "%%bash\nWITH_TPU=\"--train_batch_size=256 --train_steps=3000 --batch_norm --use_tpu\"\nWITHOUT_TPU=\"--train_batch_size=2 --train_steps=5\"\nOUTDIR=gs://${BUCKET}/flowers/trained_${MODEL_TYPE}_delete\nJOBNAME=flowers_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)\necho $OUTDIR $REGION $JOBNAME\ngsutil -m rm -rf $OUTDIR\ngcloud ml-engine jobs submit training $JOBNAME \\\n --region=$REGION \\\n --module-name=flowersmodeltpu.task \\\n --package-path=${PWD}/flowersmodeltpu \\\n --job-dir=$OUTDIR \\\n --staging-bucket=gs://$BUCKET \\\n --scale-tier=BASIC_TPU \\\n --runtime-version=$TFVERSION \\\n -- \\\n --output_dir=$OUTDIR \\\n --num_train_images=3300 \\\n --num_eval_images=370 \\\n $WITH_TPU \\\n --learning_rate=0.01 \\\n --project=${PROJECT} \\\n 
--train_data_path=gs://${BUCKET}/tpu/flowers/data/train-* \\\n --eval_data_path=gs://${BUCKET}/tpu/flowers/data/validation-*\n\n%%bash\nMODEL_LOCATION=$(gsutil ls gs://${BUCKET}/flowers/trained_${MODEL_TYPE}/export/exporter | tail -1)\nsaved_model_cli show --dir $MODEL_LOCATION --all", "Monitoring training with TensorBoard\nUse this cell to launch tensorboard", "from google.datalab.ml import TensorBoard\nTensorBoard().start('gs://{}/flowers/trained_{}'.format(BUCKET, MODEL_TYPE))\n\nfor pid in TensorBoard.list()['pid']:\n TensorBoard().stop(pid)\n print 'Stopped TensorBoard with pid {}'.format(pid)", "Deploying and predicting with model\nDeploy the model:", "%%bash\nMODEL_NAME=\"flowers\"\nMODEL_VERSION=${MODEL_TYPE}\nMODEL_LOCATION=$(gsutil ls gs://${BUCKET}/flowers/trained_${MODEL_TYPE}/export/exporter | tail -1)\necho \"Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes\"\n#gcloud ml-engine versions delete --quiet ${MODEL_VERSION} --model ${MODEL_NAME}\n#gcloud ml-engine models delete ${MODEL_NAME}\n#gcloud ml-engine models create ${MODEL_NAME} --regions $REGION\ngcloud alpha ml-engine versions create ${MODEL_VERSION} --machine-type mls1-c4-m4 --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION", "To predict with the model, let's take one of the example images that is available on Google Cloud Storage <img src=\"http://storage.googleapis.com/cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg\" />", "%%bash\ngcloud alpha ml-engine models list", "The online prediction service expects images to be base64 encoded as described here.", "%%bash\nIMAGE_URL=gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg\n\n# Copy the image to local disk.\ngsutil cp $IMAGE_URL flower.jpg\n\n# Base64 encode and create request message in json format.\npython -c 'import base64, sys, json; img = base64.b64encode(open(\"flower.jpg\", \"rb\").read()).decode(); print(json.dumps({\"image_bytes\":{\"b64\": img}}))' &> request.json", "Send it to the prediction service", "%%bash\ngcloud ml-engine predict \\\n --model=flowers2 \\\n --version=${MODEL_TYPE} \\\n --json-instances=./request.json", "<pre>\n# Copyright 2017 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n</pre>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kowey/attelo
doc/tut_parser2.ipynb
gpl-3.0
[ "Parsers (part 2)\nIn the previous tutorial, we saw a couple of basic parsers, and also introduced the notion of a pipeline parser. It turns out that some of the parsers we introduced and had taken for granted are themselves pipelines. In this tutorial we will break these pipelines down and explore some of finer grained tasks that a parser can do.\nPreliminaries\nWe begin with the same multipacks and the same breakdown into a training and test set", "from __future__ import print_function\n\nfrom os import path as fp\nfrom attelo.io import (load_multipack)\n\nCORPUS_DIR = 'example-corpus'\nPREFIX = fp.join(CORPUS_DIR, 'tiny')\n\n# load the data into a multipack\nmpack = load_multipack(PREFIX + '.edus',\n PREFIX + '.pairings',\n PREFIX + '.features.sparse',\n PREFIX + '.features.sparse.vocab',\n verbose=True)\n\ntest_dpack = mpack.values()[0]\ntrain_mpack = {k: mpack[k] for k in mpack.keys()[1:]}\ntrain_dpacks = train_mpack.values()\ntrain_targets = [x.target for x in train_dpacks]\n\ndef print_results(dpack):\n 'summarise parser results'\n for i, (edu1, edu2) in enumerate(dpack.pairings):\n wanted = dpack.get_label(dpack.target[i])\n got = dpack.get_label(dpack.graph.prediction[i])\n print(i, edu1.id, edu2.id, '\\t|', got, '\\twanted:', wanted)", "Breaking a parser down (attach)\nIf we examine the source code for the attach pipeline, we can see that it is in fact a two step pipeline combining the attach classifier wrapper and a decoder. So let's see what happens when we run the attach classifier by itself.", "import numpy as np\nfrom attelo.learning import (SklearnAttachClassifier)\nfrom attelo.parser.attach import (AttachClassifierWrapper)\nfrom sklearn.linear_model import (LogisticRegression)\n\ndef print_results_verbose(dpack):\n \"\"\"Print detailed parse results\"\"\"\n for i, (edu1, edu2) in enumerate(dpack.pairings):\n attach = \"{:.2f}\".format(dpack.graph.attach[i])\n label = np.around(dpack.graph.label[i,:], decimals=2)\n got = dpack.get_label(dpack.graph.prediction[i])\n print(i, edu1.id, edu2.id, '\\t|', attach, label, got)\n \nlearner = SklearnAttachClassifier(LogisticRegression())\nparser1a = AttachClassifierWrapper(learner)\nparser1a.fit(train_dpacks, train_targets)\n\ndpack = parser1a.transform(test_dpack)\nprint_results_verbose(dpack)", "Parsers and weighted datapacks\nIn the output above, we have dug a little bit deeper into our datapacks. Recall above that a parser translates datapacks to datapacks. The output of a parser is always a weighted datapack., ie. a datapack whose 'graph'\nattribute is set to a record containing\n\nattachment weights\nlabel weights\npredictions (like target values)\n\nSo called \"standalone\" parsers will take an unweighted datapack (graph == None) and produce a weighted datapack with predictions set. But some parsers tend to be more useful as part of a pipeline:\n\nthe attach classfier wrapper fills the attachment weights\nlikewise the label classifier wrapper assigns label weights\na decoder assigns predictions from weights\n\nWe see the first case in the above output. Notice that the attachments have been set to values from a model, but the label weights and predictions are assigned default values. \nNB: all parsers should do \"something sensible\" in the face of all inputs. This typically consists of assuming the default weight of 1.0 for unweighted datapacks.\nDecoders\nHaving now transformed a datapack with the attach classifier wrapper, let's now pass its results to a decoder. 
In fact, let's try a couple of different decoders and compare the output.", "from attelo.decoding.baseline import (LocalBaseline)\n\ndecoder = LocalBaseline(threshold=0.4)\ndpack2 = decoder.transform(dpack)\nprint_results_verbose(dpack2)", "The result above is what we get if we run a decoder on the output of the attach classifier wrapper. This is, in fact, the same thing as running the attachment pipeline. We can define a similar pipeline below.", "from attelo.parser.pipeline import (Pipeline)\n\n# this is basically attelo.parser.attach.AttachPipeline\nparser1 = Pipeline(steps=[('attach weights', parser1a),\n ('decoder', decoder)])\nparser1.fit(train_dpacks, train_targets)\nprint_results_verbose(parser1.transform(test_dpack))", "Mixing and matching\nBeing able to break parsing down to this level of granularity lets us experiment with parsing techniques by composing different parsing substeps in different ways. For example, below, we write two slightly different pipelines, one which sets labels separately from decoding, and one which combines attach and label scores before handing them off to a decoder.", "from attelo.learning.local import (SklearnLabelClassifier)\nfrom attelo.parser.label import (LabelClassifierWrapper, \n SimpleLabeller)\nfrom attelo.parser.full import (AttachTimesBestLabel)\n\nlearner_l = SklearnLabelClassifier(LogisticRegression())\n\nprint(\"Post-labelling\")\nprint(\"--------------\")\nparser = Pipeline(steps=[('attach weights', parser1a),\n ('decoder', decoder),\n ('labels', SimpleLabeller(learner_l))])\nparser.fit(train_dpacks, train_targets)\nprint_results_verbose(parser.transform(test_dpack))\n\nprint()\nprint(\"Joint\")\nprint(\"-----\")\nparser = Pipeline(steps=[('attach weights', parser1a),\n ('label weights', LabelClassifierWrapper(learner_l)),\n ('attach times label', AttachTimesBestLabel()),\n ('decoder', decoder)])\nparser.fit(train_dpacks, train_targets)\nprint_results_verbose(parser.transform(test_dpack))", "Conclusion\nThinking of parsers as transformers from weighted datapacks to weighted datapacks should allow for some interesting parsing experiments, parsers that\n\ndivide the work using different strategies on different subtypes of input (e.g. intra vs intersentential links), or\nwork in multiple stages, maybe modifying past decisions along the way, or\ninfluence future parsing stages by tweaking the weights they might see, or\nprune out undesirable edges (by setting their weights to zero), or\napply some global constraint satisfaction algorithm across the possible weights\n\nWith a notion of a parsing pipeline, you should also be able to build parsers that combine different experiments that you want to try simultaneously" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google/starthinker
colabs/bucket.ipynb
apache-2.0
[ "Storage Bucket\nCreate and permission a bucket in Storage.\nLicense\nCopyright 2020 Google LLC,\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nDisclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\nThis code generated (see starthinker/scripts for possible source):\n - Command: \"python starthinker_ui/manage.py colab\"\n - Command: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.", "!pip install git+https://github.com/google/starthinker\n", "2. Set Configuration\nThis code is required to initialize the project. Fill in required fields and press play.\n\nIf the recipe uses a Google Cloud Project:\n\nSet the configuration project value to the project identifier from these instructions.\n\n\nIf the recipe has auth set to user:\n\nIf you have user credentials:\nSet the configuration user value to your user credentials JSON.\n\n\n\nIf you DO NOT have user credentials:\n\nSet the configuration client value to downloaded client credentials.\n\n\n\nIf the recipe has auth set to service:\n\nSet the configuration service value to downloaded service credentials.", "from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n", "3. Enter Storage Bucket Recipe Parameters\n\nSpecify the name of the bucket and who will have owner permissions.\nExisting buckets are preserved.\nAdding a permission to the list will update the permissions but removing them will not.\nYou have to manualy remove grants.\nModify the values below for your use case, can be done multiple times, then click play.", "FIELDS = {\n 'auth_write':'service', # Credentials used for writing data.\n 'bucket_bucket':'', # Name of Google Cloud Bucket to create.\n 'bucket_emails':'', # Comma separated emails.\n 'bucket_groups':'', # Comma separated groups.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n", "4. 
Execute Storage Bucket\nThis does NOT need to be modified unless you are changing the recipe, click play.", "from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'bucket':{\n 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},\n 'bucket':{'field':{'name':'bucket_bucket','kind':'string','order':2,'default':'','description':'Name of Google Cloud Bucket to create.'}},\n 'emails':{'field':{'name':'bucket_emails','kind':'string_list','order':3,'default':'','description':'Comma separated emails.'}},\n 'groups':{'field':{'name':'bucket_groups','kind':'string_list','order':4,'default':'','description':'Comma separated groups.'}}\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jakevdp/sklearn_tutorial
notebooks/03.2-Regression-Forests.ipynb
bsd-3-clause
[ "<small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>\nSupervised Learning In-Depth: Random Forests\nPreviously we saw a powerful discriminative classifier, Support Vector Machines.\nHere we'll take a look at motivating another powerful algorithm. This one is a non-parametric algorithm called Random Forests.", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\n\nplt.style.use('seaborn')", "Motivating Random Forests: Decision Trees\nRandom forests are an example of an ensemble learner built on decision trees.\nFor this reason we'll start by discussing decision trees themselves.\nDecision trees are extremely intuitive ways to classify or label objects: you simply ask a series of questions designed to zero-in on the classification:", "import fig_code\nfig_code.plot_example_decision_tree()", "The binary splitting makes this extremely efficient.\nAs always, though, the trick is to ask the right questions.\nThis is where the algorithmic process comes in: in training a decision tree classifier, the algorithm looks at the features and decides which questions (or \"splits\") contain the most information.\nCreating a Decision Tree\nHere's an example of a decision tree classifier in scikit-learn. We'll start by defining some two-dimensional labeled data:", "from sklearn.datasets import make_blobs\n\nX, y = make_blobs(n_samples=300, centers=4,\n random_state=0, cluster_std=1.0)\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');", "We have some convenience functions in the repository that help", "from fig_code import visualize_tree, plot_tree_interactive", "Now using IPython's interact (available in IPython 2.0+, and requires a live kernel) we can view the decision tree splits:", "plot_tree_interactive(X, y);", "Notice that at each increase in depth, every node is split in two except those nodes which contain only a single class.\nThe result is a very fast non-parametric classification, and can be extremely useful in practice.\nQuestion: Do you see any problems with this?\nDecision Trees and over-fitting\nOne issue with decision trees is that it is very easy to create trees which over-fit the data. That is, they are flexible enough that they can learn the structure of the noise in the data rather than the signal! For example, take a look at two trees built on two subsets of this dataset:", "from sklearn.tree import DecisionTreeClassifier\nclf = DecisionTreeClassifier()\n\nplt.figure()\nvisualize_tree(clf, X[:200], y[:200], boundaries=False)\nplt.figure()\nvisualize_tree(clf, X[-200:], y[-200:], boundaries=False)", "The details of the classifications are completely different! That is an indication of over-fitting: when you predict the value for a new point, the result is more reflective of the noise in the model rather than the signal.\nEnsembles of Estimators: Random Forests\nOne possible way to address over-fitting is to use an Ensemble Method: this is a meta-estimator which essentially averages the results of many individual estimators which over-fit the data. 
Somewhat surprisingly, the resulting estimates are much more robust and accurate than the individual estimates which make them up!\nOne of the most common ensemble methods is the Random Forest, in which the ensemble is made up of many decision trees which are in some way perturbed.\nThere are volumes of theory and precedent about how to randomize these trees, but as an example, let's imagine an ensemble of estimators fit on subsets of the data. We can get an idea of what these might look like as follows:", "def fit_randomized_tree(random_state=0):\n X, y = make_blobs(n_samples=300, centers=4,\n random_state=0, cluster_std=2.0)\n clf = DecisionTreeClassifier(max_depth=15)\n \n rng = np.random.RandomState(random_state)\n i = np.arange(len(y))\n rng.shuffle(i)\n visualize_tree(clf, X[i[:250]], y[i[:250]], boundaries=False,\n xlim=(X[:, 0].min(), X[:, 0].max()),\n ylim=(X[:, 1].min(), X[:, 1].max()))\n \nfrom ipywidgets import interact\ninteract(fit_randomized_tree, random_state=(0, 100));", "See how the details of the model change as a function of the sample, while the larger characteristics remain the same!\nThe random forest classifier will do something similar to this, but use a combined version of all these trees to arrive at a final answer:", "from sklearn.ensemble import RandomForestClassifier\nclf = RandomForestClassifier(n_estimators=100, random_state=0)\nvisualize_tree(clf, X, y, boundaries=False);", "By averaging over 100 randomly perturbed models, we end up with an overall model which is a much better fit to our data!\n(Note: above we randomized the model through sub-sampling... Random Forests use more sophisticated means of randomization, which you can read about in, e.g. the scikit-learn documentation)\nQuick Example: Moving to Regression\nAbove we were considering random forests within the context of classification.\nRandom forests can also be made to work in the case of regression (that is, continuous rather than categorical variables). The estimator to use for this is sklearn.ensemble.RandomForestRegressor.\nLet's quickly demonstrate how this can be used:", "from sklearn.ensemble import RandomForestRegressor\n\nx = 10 * np.random.rand(100)\n\ndef model(x, sigma=0.3):\n fast_oscillation = np.sin(5 * x)\n slow_oscillation = np.sin(0.5 * x)\n noise = sigma * np.random.randn(len(x))\n\n return slow_oscillation + fast_oscillation + noise\n\ny = model(x)\nplt.errorbar(x, y, 0.3, fmt='o');\n\nxfit = np.linspace(0, 10, 1000)\nyfit = RandomForestRegressor(100).fit(x[:, None], y).predict(xfit[:, None])\nytrue = model(xfit, 0)\n\nplt.errorbar(x, y, 0.3, fmt='o')\nplt.plot(xfit, yfit, '-r');\nplt.plot(xfit, ytrue, '-k', alpha=0.5);", "As you can see, the non-parametric random forest model is flexible enough to fit the multi-period data, without us even specifying a multi-period model!\nExample: Random Forest for Classifying Digits\nWe previously saw the hand-written digits data. 
Let's use that here to test the efficacy of the SVM and Random Forest classifiers.", "from sklearn.datasets import load_digits\ndigits = load_digits()\ndigits.keys()\n\nX = digits.data\ny = digits.target\nprint(X.shape)\nprint(y.shape)", "To remind us what we're looking at, we'll visualize the first few data points:", "# set up the figure\nfig = plt.figure(figsize=(6, 6)) # figure size in inches\nfig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)\n\n# plot the digits: each image is 8x8 pixels\nfor i in range(64):\n ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])\n ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')\n \n # label the image with the target value\n ax.text(0, 7, str(digits.target[i]))", "We can quickly classify the digits using a decision tree as follows:", "from sklearn.model_selection import train_test_split\nfrom sklearn import metrics\n\nXtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)\nclf = DecisionTreeClassifier(max_depth=11)\nclf.fit(Xtrain, ytrain)\nypred = clf.predict(Xtest)", "We can check the accuracy of this classifier:", "metrics.accuracy_score(ypred, ytest)", "and for good measure, plot the confusion matrix:", "plt.imshow(metrics.confusion_matrix(ypred, ytest),\n interpolation='nearest', cmap=plt.cm.binary)\nplt.grid(False)\nplt.colorbar()\nplt.xlabel(\"predicted label\")\nplt.ylabel(\"true label\");", "Exercise\n\nRepeat this classification task with sklearn.ensemble.RandomForestClassifier. How does the max_depth, max_features, and n_estimators affect the results?\nTry this classification with sklearn.svm.SVC, adjusting kernel, C, and gamma. Which classifier performs optimally?\nTry a few sets of parameters for each model and check the F1 score (sklearn.metrics.f1_score) on your results. What's the best F1 score you can reach?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
compsocialscience/summer-institute
2018/materials/boulder/day4-text-analysis/Day 4, Lecture 3 - Text networks and word embeddings.ipynb
mit
[ "import numpy as np\nimport pandas as pd\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sb\nsb.set_style('whitegrid')\n\nimport requests\nimport json\nimport re\nfrom bs4 import BeautifulSoup\n\nimport string\nimport nltk\n\nimport networkx as nx", "Loading data\nLoad the data from disk into memory.", "with open('potus_wiki_bios_cleaned.json','r') as f:\n bios = json.load(f)", "Confirm there are 44 presidents (shaking fist at Grover Cleveland) in the dictionary.", "print(\"There are {0} biographies of presidents.\".format(len(bios)))", "What's an example of a single biography? We access the dictionary by passing the key (President's name), which returns the value (the text of the biography).", "example = bios['Grover Cleveland']\nprint(example)", "Get some metadata about the U.S. Presidents.", "presidents_df = pd.DataFrame(requests.get('https://raw.githubusercontent.com/hitch17/sample-data/master/presidents.json').json())\npresidents_df = presidents_df.set_index('president')\npresidents_df['wikibio words'] = pd.Series({bio_name:len(bio_text) for bio_name,bio_text in bios.items()})\npresidents_df.head()\n", "A really basic exploratory scatterplot for the number of words in each President's biography compared to their POTUS index.", "presidents_df.plot.scatter(x='number',y='wikibio words')", "TF-IDF\nWe can create a document-term matrix where the rows are our 44 presidential biographies, the columns are the terms (words), and the values in the cells are the word counts: the number of times that document contains that word. This is the \"term frequency\" (TF) part of TF-IDF.\nThe IDF part of TF-IDF is the \"inverse document frequency\". The intuition is that words that occur frequency within a single document but are infrequent across the corpus of documents should receiving a higher weighting: these words have greater relative meaning. Conversely, words that are frequently used across documents are down-weighted.\nThe image below has documents as columns and terms as rows.", "# Import the libraries from scikit-learn\nfrom sklearn.feature_extraction.text import CountVectorizer,TfidfTransformer\n\ncount_vect = CountVectorizer()\n\n# Compute the word counts -- it expects a big string, so join our cleaned words back together\nbio_counts = count_vect.fit_transform([' '.join(bio) for bio in bios.values()])\n\n# Compute the TF-IDF for the word counts from each biography\nbio_tfidf = TfidfTransformer().fit_transform(bio_counts)\n\n# Convert from sparse to dense array representation\nbio_tfidf_dense = bio_tfidf.todense()", "Make a text similarity network\nOnce we have the TFIDF scores for every word in each president's biography, we can make a text similarity network. Multiplying the document by term matrix by its transpose should return the cosine similarities between documents. We can also import cosine_similarity from scikit-learn if you don't believe me (I didn't believe me either). Cosine similarity values closer to 1 indicate these documents' words have more similar TFIDF scores and values closer to 0 indicate these documents' words are more dissimilar.\nThe goal here is to create a network where nodes are presidents and edges are weighted similarity scores. 
All text documents will have some minimal similarity, so we can threshold the similarity scores to only those similarities in the top 10% for each president.", "# Compute cosine similarity\npres_pres_df = pd.DataFrame(bio_tfidf_dense*bio_tfidf_dense.T)\n\n# If you don't believe me that cosine similiarty is the document-term matrix times its transpose\nfrom sklearn.metrics.pairwise import cosine_similarity\npres_pres_df = pd.DataFrame(cosine_similarity(bio_tfidf_dense))\n\n# Filter for edges in the 90th percentile or greater\npres_pres_filtered_df = pres_pres_df[pres_pres_df >= pres_pres_df.quantile(.9)]\n\n# Reshape and filter data\nedgelist_df = pres_pres_filtered_df.stack().reset_index()\nedgelist_df = edgelist_df[(edgelist_df[0] != 0) & (edgelist_df['level_0'] != edgelist_df['level_1'])]\n\n# Rename and replace data\nedgelist_df.rename(columns={'level_0':'from','level_1':'to',0:'weight'},inplace=True)\nedgelist_df.replace(dict(enumerate(bios.keys())),inplace=True)\n\n# Inspect\nedgelist_df.head()\n", "We read this pandas edgelist into networkx using from_pandas_edgelist, report out some basic descriptives about the network, and write the graph object to file in case we want to visualize it in a dedicated network visualization package like Gephi.", "# Convert from edgelist to a graph object\ng = nx.from_pandas_edgelist(edgelist_df,source='from',target='to',edge_attr=['weight'])\n\n# Report out basic descriptives\nprint(\"There are {0:,} nodes and {1:,} edges in the network.\".format(g.number_of_nodes(),g.number_of_edges()))\n\n# Write graph object to disk for visualization\nnx.write_gexf(g,'bio_similarity.gexf')\n", "Since this is a small and sparse network, we can try to use Matplotlib to visualize it instead. I would only use the nx.draw functionality for small networks like this one.", "# Plot the nodes as a spring layout\n\n#g_pos = nx.layout.fruchterman_reingold_layout(g, k = 5, iterations=10000)\ng_pos = nx.layout.kamada_kawai_layout(g)\n\n# Draw the graph\nf,ax = plt.subplots(1,1,figsize=(10,10))\nnx.draw(G = g,\n ax = ax,\n pos = g_pos,\n with_labels = True,\n node_size = [dc*(len(g) - 1)*100 for dc in nx.degree_centrality(g).values()],\n font_size = 10,\n font_weight = 'bold',\n width = [d['weight']*10 for i,j,d in g.edges(data=True)],\n node_color = 'tomato',\n edge_color = 'grey'\n )", "Case study: Text similarity network of the S&P 500 companies\nStep 1: Load and preprocess the content of the articles.", "# Load the data\nwith open('sp500_wiki_articles.json','r') as f:\n sp500_articles = json.load(f)\n\n# Bring in the text_preprocessor we wrote from Day 4, Lecture 1\ndef text_preprocessor(text):\n \"\"\"Takes a large string (document) and returns a list of cleaned tokens\"\"\"\n tokens = nltk.wordpunct_tokenize(text)\n clean_tokens = []\n for t in tokens:\n if t.lower() not in all_stopwords and len(t) > 2:\n clean_tokens.append(lemmatizer(t.lower()))\n return clean_tokens\n\n# Clean each article\ncleaned_sp500 = {}\n\nfor name,text in sp500_articles.items():\n cleaned_sp500[name] = text_preprocessor(text)\n\n# Save to disk\nwith open('sp500_wiki_articles_cleaned.json','w') as f:\n json.dump(cleaned_sp500,f)", "Step 2: Compute the TFIDF matrix for the S&P 500 companies.", "# Compute the word counts\nsp500_counts = \n\n# Compute the TF-IDF for the word counts from each biography\nsp500_tfidf = \n\n# Convert from sparse to dense array representation\nsp500_tfidf_dense = ", "Step 3: Compute the cosine similarities.", "# Compute cosine similarity\ncompany_company_df = \n\n# Filter 
for edges in the 90th percentile or greater\ncompany_company_filtered_df = \n\n# Reshape and filter data\nsp500_edgelist_df = \nsp500_edgelist_df = \n\n# Rename and replace data\nsp500_edgelist_df.rename(columns={'level_0':'from','level_1':'to',0:'weight'},inplace=True)\nsp500_edgelist_df.replace(dict(enumerate(sp500_articles.keys())),inplace=True)\n\n# Inspect\nsp500_edgelist_df.head()\n", "Step 4: Visualize the resulting network.\nWord2Vec\nWe used TF-IDF vectors of documents and cosine similarities between these document vectors as a way of representing the similarity in the networks above. However, TF-IDF score are simply (normalized) word frequencies: they do not capture semantic information. A vector space model like the popular Word2Vec represents each token (word) in a high-dimensional (here we'll use 100-dimensions) space that is trained from some (ideally) large corpus of documents. Ideally, tokens that are used in similar contexts are placed into similar locations in this high-dimensional space. Once we have vectorized words into this space, we're able to efficiently compute do a variety of other operations such as compute similarities between words or do transformations that can find analogies.\nI lack the expertise and we lack the time to get into the math behind these methods, but here are some helpful tutorials I've found:\n* Word embeddings: exploration, explanation, and exploitation \n* Learning Word Embedding\n* On word embeddings\n* TensorFlow - Vector Representations of Words\nWe'll use the 44 Presidential biographies as a small and specific corpus. We start by training a bios_model from the list of biographies using hyperparamaters for the number of dimensions (size), the number of surrounding words to use as training (window), and the minimum number of times a word has to occur to be included in the model (min_count).", "from gensim.models import Word2Vec\n\nbios_model = Word2Vec(bios.values(),size=100,window=10,min_count=8)", "Each word in the vocabulary exists as a N-dimensional vector, where N is the \"size\" hyper-parameter set in the model. The \"congress\" token in located at this position in the 100-dimensional space we trained in bios_model.", "bios_model.wv['congress']\n\nbios_model.wv.most_similar('congress')\n\nbios_model.wv.most_similar('court')\n\nbios_model.wv.most_similar('war')\n\nbios_model.wv.most_similar('election')", "There's a doesnt_match method that predicts which word in a list doesn't match the other word senses in the list. Sometime the results are predictable/trivial.", "bios_model.wv.doesnt_match(['democrat','republican','whig','panama'])", "Other times the results are unexpected/interesting.", "bios_model.wv.doesnt_match(['canada','mexico','cuba','japan','france'])", "One of the most powerful implications of having these vectorized embeddings of word meanings is the ability to do operations similar arithmetic that recover or reveal interesting semantic meanings. The classic example is Man:Woman::King:Queen:\n\nWhat are some examples of these vector similarities from our trained model?\nrepublican - slavery = democrat - X\n-(republican - slavery) + democrat = X\nslavery + democrat - republican = X", "bios_model.wv.most_similar(positive=['democrat','slavery'],negative=['republican'])\n\nbios_model.wv.most_similar(positive=['republican','labor'],negative=['democrat'])", "Finally, you can use the similarity method to return the similarity between two terms. 
In our trained model, \"britain\" and \"france\" are more similar to each other than \"mexico\" and \"canada\".", "bios_model.wv.similarity('republican','democrat')\n\nbios_model.wv.similarity('mexico','canada')\n\nbios_model.wv.similarity('britain','france')", "Case study: S&P500 company Word2Vec model\nStep 1: Open the \"sp500_wiki_articles_cleaned.json\" you previous saved of the cleaned S&P500 company article content or use a text preprocessor on \"sp500_wiki_articles.json\" to generate a dictionary of cleaned article content. Train a sp500_model using the Word2Vec model on the values of the cleaned company article content. You can use default hyperparameters for size, window, and min_count, or experiment with alternative values.\nStep 2: Using the most_similar method, explore some similarities this model has learned for salient tokens about companies (e.g., \"board\", \"controversy\", \"executive\", \"investigation\"). Use the positive and negative options to explore different analogies. Using the doesnt_match method, experiment with word combinations to discover predictable and unexpected exceptions. Using the similarity method, identify interesting similarity scores.\nDimensionality reduction\nMaterial from this segment is adapted from Jake Vanderplas's \"Python Data Science Handbook\" notebooks and Kevyn Collins-Thompson's \"Applied Machine Learning in Python\" module on Coursera.\nIn the TF-IDF, we have over 17,000 dimensions (corresponding to the unique tokens) for each of the 44 presidential biographies. This data is sparse and large, which makes it hard to visualize. Ideally we'd only have two dimensions of data for a task like visualization.\nDimensionality reduction encompasses a set of methods like principal component analysis, multidimensional scaling, and more advanced \"manifold learning\" that reduces high-dimensional data down to fewer dimensions. For the purposes of visualization, we typically want 2 dimensions. These methods use a variety of different assumptions and modeling approaches. If you want to understand the differences between them, you'll likely need to find a graduate-level machine learning course. 
\nLet's compare what each of these do on our presidential TF-IDF: the goal here is to understand there are different methods for dimensionality reduction and each generates different new components and/or clusters that you'll need to interpret.", "print(bio_tfidf_dense.shape)\nbio_tfidf_dense", "Principal component analysis (PCA) is probably one of the most widely-used and efficient methods for dimensionality reduction.", "# Step 1: Choose a class of models\nfrom sklearn.decomposition import PCA\n\n# Step 2: Instantiate the model\npca = PCA(n_components=2)\n\n# Step 3: Arrange the data into features matrices\n# Already done\n\n# Step 4: Fit the model to the data\npca.fit(bio_tfidf_dense)\n\n# Step 5: Evaluate the model\nX_pca = pca.transform(bio_tfidf_dense)\n\n# Visualize\nf,ax = plt.subplots(1,1,figsize=(10,10))\nax.scatter(X_pca[:,0],X_pca[:,1])\n\nax.set_title('PCA')\n\nfor i,txt in enumerate(bios.keys()):\n if txt == 'Barack Obama':\n ax.annotate(txt,(X_pca[i,0],X_pca[i,1]),color='blue',fontweight='bold')\n elif txt == 'Donald Trump':\n ax.annotate(txt,(X_pca[i,0],X_pca[i,1]),color='red',fontweight='bold')\n else:\n ax.annotate(txt,(X_pca[i,0],X_pca[i,1]))", "Multi-dimensional scaling is another common technique in the social sciences.", "# Step 1: Choose your model class(es)\nfrom sklearn.manifold import MDS\n\n# Step 2: Instantiate your model class(es)\nmds = MDS(n_components=2,metric=False,n_jobs=-1)\n\n# Step 3: Arrange data into features matrices\n# Done!\n\n# Step 4: Fit the data and transform\nX_mds = mds.fit_transform(bio_tfidf_dense)\n\n# Plot the data\nf,ax = plt.subplots(1,1,figsize=(10,10))\nax.scatter(X_mds[:,0],X_mds[:,1])\n\nax.set_title('Multi-Dimensional Scaling')\n\nfor i,txt in enumerate(bios.keys()):\n if txt == 'Barack Obama':\n ax.annotate(txt,(X_mds[i,0],X_mds[i,1]),color='blue',fontweight='bold')\n elif txt == 'Donald Trump':\n ax.annotate(txt,(X_mds[i,0],X_mds[i,1]),color='red',fontweight='bold')\n else:\n ax.annotate(txt,(X_mds[i,0],X_mds[i,1]))", "Isomap is an extension of MDS.", "# Step 1: Choose your model class(es)\nfrom sklearn.manifold import Isomap\n\n# Step 2: Instantiate your model class(es)\niso = Isomap(n_neighbors = 5, n_components = 2)\n\n# Step 3: Arrange data into features matrices\n# Done!\n\n# Step 4: Fit the data and transform\nX_iso = iso.fit_transform(bio_tfidf_dense)\n\n\n# Plot the data\nf,ax = plt.subplots(1,1,figsize=(10,10))\nax.scatter(X_iso[:,0],X_iso[:,1])\n\nax.set_title('IsoMap')\n\nfor i,txt in enumerate(bios.keys()):\n if txt == 'Barack Obama':\n ax.annotate(txt,(X_iso[i,0],X_iso[i,1]),color='blue',fontweight='bold')\n elif txt == 'Donald Trump':\n ax.annotate(txt,(X_iso[i,0],X_iso[i,1]),color='red',fontweight='bold')\n else:\n ax.annotate(txt,(X_iso[i,0],X_iso[i,1]))", "Spectral embedding does interesting things to the eigenvectors of a similarity matrix.", "# Step 1: Choose your model class(es)\nfrom sklearn.manifold import SpectralEmbedding\n\n# Step 2: Instantiate your model class(es)\nse = SpectralEmbedding(n_components = 2)\n\n# Step 3: Arrange data into features matrices\n# Done!\n\n# Step 4: Fit the data and transform\nX_se = se.fit_transform(bio_tfidf_dense)\n\n\n# Plot the data\nf,ax = plt.subplots(1,1,figsize=(9,6))\nax.scatter(X_se[:,0],X_se[:,1])\n\nax.set_title('Spectral Embedding')\n\nfor i,txt in enumerate(bios.keys()):\n if txt == 'Barack Obama':\n ax.annotate(txt,(X_se[i,0],X_se[i,1]),color='blue',fontweight='bold')\n elif txt == 'Donald Trump':\n 
ax.annotate(txt,(X_se[i,0],X_se[i,1]),color='red',fontweight='bold')\n else:\n ax.annotate(txt,(X_se[i,0],X_se[i,1]))", "Locally Linear Embedding is yet another dimensionality reduction method, but not my favorite to date given performance (meaningful clusters as output) and cost (expensive to compute).", "# Step 1: Choose your model class(es)\nfrom sklearn.manifold import LocallyLinearEmbedding\n\n# Step 2: Instantiate your model class(es)\nlle = LocallyLinearEmbedding(n_components = 2,n_jobs=-1)\n\n# Step 3: Arrange data into features matrices\n# Done!\n\n# Step 4: Fit the data and transform\nX_lle = lle.fit_transform(bio_tfidf_dense)\n\n\n# Plot the data\nf,ax = plt.subplots(1,1,figsize=(9,6))\nax.scatter(X_lle[:,0],X_lle[:,1])\n\nax.set_title('Locally Linear Embedding')\n\nfor i,txt in enumerate(bios.keys()):\n if txt == 'Barack Obama':\n ax.annotate(txt,(X_lle[i,0],X_lle[i,1]),color='blue',fontweight='bold')\n elif txt == 'Donald Trump':\n ax.annotate(txt,(X_lle[i,0],X_lle[i,1]),color='red',fontweight='bold')\n else:\n ax.annotate(txt,(X_lle[i,0],X_lle[i,1]))", "t-Distributed Stochastic Neighbor Embedding (t-SNE) is ubiquitous for visualizing word or document embeddings. It can be expensive to run, but it does a great job recovering clusters. There are some hyper-parameters, particularly \"perplexity\" that you'll need to tune to get things to look interesting.\nWattenberg, Viégas, and Johnson have an outstanding interactive tool visualizing how t-SNE's different parameters influence the layout as well as good advice on how to make the best of it.", "# Step 1: Choose your model class(es)\nfrom sklearn.manifold import TSNE\n\n# Step 2: Instantiate your model class(es)\ntsne = TSNE(n_components = 2, init='pca', random_state=42, perplexity=11)\n\n# Step 3: Arrange data into features matrices\n# Done!\n\n# Step 4: Fit the data and transform\nX_tsne = tsne.fit_transform(bio_tfidf_dense)\n\n\n# Plot the data\nf,ax = plt.subplots(1,1,figsize=(10,10))\nax.scatter(X_tsne[:,0],X_tsne[:,1])\n\nax.set_title('t-SNE')\n\nfor i,txt in enumerate(bios.keys()):\n if txt == 'Barack Obama':\n ax.annotate(txt,(X_tsne[i,0],X_tsne[i,1]),color='blue',fontweight='bold')\n elif txt == 'Donald Trump':\n ax.annotate(txt,(X_tsne[i,0],X_tsne[i,1]),color='red',fontweight='bold')\n else:\n ax.annotate(txt,(X_tsne[i,0],X_tsne[i,1]))", "Uniform Maniford Approximation and Projection (UMAP) is a new and particularly fast dimensionality reduction method with some comparatively great documentation. 
Unfortunately, UMAP is so new that it hasn't been translated into scikit-learn yet, so you'll need to install it separately from the terminal:\nconda install -c conda-forge umap-learn", "# Step 1: Choose your model class(es)\nfrom umap import UMAP\n\n# Step 2: Instantiate your model class(es)\numap_ = UMAP(n_components=2, n_neighbors=10, random_state=42)\n\n# Step 3: Arrange data into features matrices\n# Done!\n\n# Step 4: Fit the data and transform\nX_umap = umap_.fit_transform(bio_tfidf_dense)\n\n# Plot the data\nf,ax = plt.subplots(1,1,figsize=(10,10))\nax.scatter(X_umap[:,0],X_umap[:,1])\n\nax.set_title('UMAP')\n\nfor i,txt in enumerate(bios.keys()):\n if txt == 'Barack Obama':\n ax.annotate(txt,(X_umap[i,0],X_umap[i,1]),color='blue',fontweight='bold')\n elif txt == 'Donald Trump':\n ax.annotate(txt,(X_umap[i,0],X_umap[i,1]),color='red',fontweight='bold')\n else:\n ax.annotate(txt,(X_umap[i,0],X_umap[i,1]))", "Case study: S&P500 company clusters\nStep 1: Using the sp500_tfidf_dense array/DataFrame, experiment with different dimensionality reduction tools we covered above. Visualize and inspect the distribution of S&P500 companies for interesting dimensions (do X and Y dimensions in this reduced data capture anything meaningful?) or clusters (do companies clusters together as we'd expect?).\nVisualizing word embeddings\nUsing the bio_counts, we can find the top-N most frequent words and save them as top_words.", "top_words = pd.DataFrame(bio_counts.todense().sum(0).T,\n index=count_vect.get_feature_names())[0]\n\ntop_words = top_words.sort_values(0,ascending=False).head(1000).index.tolist()", "For each word in top_words, we get its vector from bios_model and add it to the top_word_vectors list and cast this list back to a numpy array.", "top_word_vectors = []\n\nfor word in top_words:\n try:\n vector = bios_model.wv[word]\n top_word_vectors.append(vector)\n except KeyError:\n pass\n \ntop_word_vectors = np.array(top_word_vectors)", "We can then use the dimensionality tools we just covered in the previous section to visualize the word similarities. PCA is fast but rarely does a great job with this extremely high-dimensional and sparse data: it's a cloud of points with no discernable structure.", "# Step 1: Choose your model class(es)\n# from sklearn.decomposition import PCA\n\n# Step 2: Instantiate the model\npca = PCA(n_components=2)\n\n# Step 3: Arrange data into features matrices\nX_w2v = top_word_vectors\n\n# Step 4: Fit the data and transform\nX_w2v_pca = pca.fit_transform(X_w2v)\n\n\n# Plot the data\nf,ax = plt.subplots(1,1,figsize=(10,10))\nax.scatter(X_w2v_pca[:,0],X_w2v_pca[:,1],s=3)\n\nax.set_title('PCA')\n\nfor i,txt in enumerate(top_words):\n if i%10 == 0:\n ax.annotate(txt,(X_w2v_pca[i,0],X_w2v_pca[i,1]))\n \nf.savefig('term_pca.pdf')", "t-SNE was more-or-less engineered for precisely the task of visualizing word embeddings. It likely takes on the order of a minute or more for t-SNE to reduce the top_words embeddings to only two dimensions. Assuming our perplexity and other t-SNE hyperparameters are well-behaved, there should be relatively easy-to-discern clusters of words with similar meanings. 
You can also open the \"term_sne.pdf\" file and zoome to inspect.", "# Step 1: Choose your model class(es)\nfrom sklearn.manifold import TSNE\n\n# Step 2: Instantiate your model class(es)\ntsne = TSNE(n_components = 2, init='pca', random_state=42, perplexity=25)\n\n# Step 3: Arrange data into features matrices\nX_w2v = top_word_vectors\n\n# Step 4: Fit the data and transform\nX_w2v_tsne = tsne.fit_transform(X_w2v)\n\n\n# Plot the data\nf,ax = plt.subplots(1,1,figsize=(10,10))\nax.scatter(X_w2v_tsne[:,0],X_w2v_tsne[:,1],s=3)\n\nax.set_title('t-SNE')\n\nfor i,txt in enumerate(top_words):\n if i%10 == 0:\n ax.annotate(txt,(X_w2v_tsne[i,0],X_w2v_tsne[i,1]))\n \nf.savefig('term_tsne.pdf')", "UMAP is faster and I think better, but you'll need to make sure this is installed on your system since it doesn't come with scikit-learn or Anaconda by default. Words like \"nominee\" and \"campaign\" or the names of the months cluster clearly together apart from the rest.", "# Step 1: Choose your model class(es)\nfrom umap import UMAP\n\n# Step 2: Instantiate your model class(es)\numap_ = UMAP(n_components=2, n_neighbors=5, random_state=42)\n\n# Step 3: Arrange data into features matrices\n# Done!\n\n# Step 4: Fit the data and transform\nX_w2v_umap = umap_.fit_transform(X_w2v)\n\n# Plot the data\nf,ax = plt.subplots(1,1,figsize=(10,10))\nax.scatter(X_w2v_umap[:,0],X_w2v_umap[:,1],s=3)\n\nax.set_title('UMAP')\n\nfor i,txt in enumerate(top_words):\n if i%10 == 0:\n ax.annotate(txt,(X_w2v_umap[i,0],X_w2v_umap[i,1]))\n \nf.savefig('term_umap.pdf')", "Case study: Visualizing word embeddings for S&P500 company articles\nStep 1: Compute the word vectors for the top 1000(ish) terms in the S&P500 word counts from your sp500_model. \nStep 2: Reduce the dimensionality of these top word vectors using PCA, t-SNE, or (if you've installed it) UMAP and visualize the results. What meaningful or surprising clusters do you discover?" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
olgaliak/cntk-cyclegan
simpleGan/CNTK_206B_DCGAN.ipynb
mit
[ "CNTK 206 Part B: Deep Convolutional GAN with MNIST data\nPrerequisites: We assume that you have successfully downloaded the MNIST data by completing the tutorial titled CNTK_103A_MNIST_DataLoader.ipynb.\nIntroduction\nGenerative models have gained a lot of attention in deep learning community which has traditionally leveraged discriminative models for (semi-supervised) and unsupervised learning. \nOverview\nIn the previous tutorial we introduce the original GAN implementation by Goodfellow et al at NIPS 2014. This pioneering work has since then been extended and many techniques have been published amongst which the Deep Convolutional Generative Adversarial Network a.k.a. DCGAN has become the recommended launch pad in the community.\nIn this tutorial, we introduce an implementation of the DCGAN with some well tested architectural constraints that improve stability in the GAN training: \n\nWe use strided convolutions in the (discriminator) and fractional-strided convolutions in the generator.\nWe have used batch normalization in both the generator and the discriminator\nWe have removed fully connected hidden layers for deeper architectures.\nWe use ReLU activation in generator for all layers except for the output, which uses Tanh.\nWe use LeakyReLU activation in the discriminator for all layers.", "import matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\n\nimport cntk as C\nfrom cntk import Trainer\nfrom cntk.layers import default_options\nfrom cntk.device import set_default_device, gpu, cpu\nfrom cntk.initializer import normal\nfrom cntk.io import (MinibatchSource, CTFDeserializer, StreamDef, StreamDefs,\n INFINITELY_REPEAT)\nfrom cntk.layers import Dense, Convolution2D, ConvolutionTranspose2D, BatchNormalization\nfrom cntk.learners import (adam, UnitType, learning_rate_schedule,\n momentum_as_time_constant_schedule, momentum_schedule)\nfrom cntk.logging import ProgressPrinter\n\n%matplotlib inline", "Select the notebook runtime environment devices / settings\nSet the device to cpu / gpu for the test environment. If you have both CPU and GPU on your machine, you can optionally switch the devices. By default, we choose the best available device.", "# Select the right target device when this notebook is being tested:\nif 'TEST_DEVICE' in os.environ:\n import cntk\n if os.environ['TEST_DEVICE'] == 'cpu':\n C.device.set_default_device(C.device.cpu())\n else:\n C.device.set_default_device(C.device.gpu(0))\nC.device.set_default_device(C.device.gpu(0))", "There are two run modes:\n- Fast mode: isFast is set to True. This is the default mode for the notebooks, which means we train for fewer iterations or train / test on limited data. This ensures functional correctness of the notebook though the models produced are far from what a completed training would produce.\n\nSlow mode: We recommend the user to set this flag to False once the user has gained familiarity with the notebook content and wants to gain insight from running the notebooks for a longer period with different parameters for training. \n\nNote\nIf the isFlag is set to False the notebook will take a few hours on a GPU enabled machine. You can try fewer iterations by setting the num_minibatches to a smaller number say 20,000 which comes at the expense of quality of the generated images.", "isFast = True", "Data Reading\nThe input to the GAN will be a vector of random numbers. At the end of the traning, the GAN \"learns\" to generate images of hand written digits drawn from the MNIST database. 
We will be using the same MNIST data generated in tutorial 103A. A more in-depth discussion of the data format and reading methods can be seen in previous tutorials. For our purposes, just know that the following function returns an object that will be used to generate images from the MNIST dataset. Since we are building an unsupervised model, we only need to read in features and ignore the labels.", "# Ensure the training data is generated and available for this tutorial\n# We search in two locations in the toolkit for the cached MNIST data set.\n\ndata_found = False\nfor data_dir in [os.path.join(\"..\", \"Examples\", \"Image\", \"DataSets\", \"MNIST\"),\n os.path.join(\"data\", \"MNIST\")]:\n train_file = os.path.join(data_dir, \"Train-28x28_cntk_text.txt\")\n if os.path.isfile(train_file):\n data_found = True\n break\n \nif not data_found:\n raise ValueError(\"Please generate the data by completing CNTK 103 Part A\")\n \nprint(\"Data directory is {0}\".format(data_dir))\n\ndef create_reader(path, is_training, input_dim, label_dim):\n deserializer = CTFDeserializer(\n filename = path,\n streams = StreamDefs(\n labels_unused = StreamDef(field = 'labels', shape = label_dim, is_sparse = False),\n features = StreamDef(field = 'features', shape = input_dim, is_sparse = False\n )\n )\n )\n return MinibatchSource(\n deserializers = deserializer,\n randomize = is_training,\n max_sweeps = INFINITELY_REPEAT if is_training else 1\n )", "The random noise we will use to train the GAN is provided by the noise_sample function to generate random noise samples from a uniform distribution within the interval [-1, 1].", "np.random.seed(123)\ndef noise_sample(num_samples):\n return np.random.uniform(\n low = -1.0,\n high = 1.0,\n size = [num_samples, g_input_dim]\n ).astype(np.float32)", "Model Creation\nFirst we provide a brief recap of the basics of GAN. You may skip this block if you are familiar with CNTK 206A. \nA GAN network is composed of two sub-networks, one called the Generator ($G$) and the other Discriminator ($D$). \n- The Generator takes random noise vector ($z$) as input and strives to output synthetic (fake) image ($x^$) that is indistinguishable from the real image ($x$) from the MNIST dataset. \n- The Discriminator strives to differentiate between the real image ($x$) and the fake ($x^$) image.\n\nIn each training iteration, the Generator produces more realistic fake images (in other words minimizes the difference between the real and generated counterpart) and the Discriminator maximizes the probability of assigning the correct label (real vs. fake) to both real examples (from training set) and the generated fake ones. The two conflicting objectives between the sub-networks ($G$ and $D$) leads to the GAN network (when trained) converge to an equilibrium, where the Generator produces realistic looking fake MNIST images and the Discriminator can at best randomly guess whether images are real or fake. The resulting Generator model once trained produces realistic MNIST image with the input being a random number. \nModel config\nFirst, we establish some of the architectural and training hyper-parameters for our model. \n\nThe generator network is fractional strided convolutional network. The input is a 10-dimensional random vector and the output of the generator is a flattened version of a 28 x 28 fake image. The discriminator is strided-convolution network. 
It takes as input the 784 dimensional output of the generator or a real MNIST image, reshapes into a 28 x 28 image format and outputs a single scalar - the estimated probability that the input image is a real MNIST image.\n\nModel components\nWe build a computational graph for our model, one each for the generator and the discriminator. First, we establish some of the architectural parameters of our model.", "# architectural parameters\nimg_h, img_w = 28, 28\nkernel_h, kernel_w = 5, 5 \nstride_h, stride_w = 2, 2\n\n# Input / Output parameter of Generator and Discriminator\ng_input_dim = 100\ng_output_dim = d_input_dim = img_h * img_w\n\n# We expect the kernel shapes to be square in this tutorial and\n# the strides to be of the same length along each data dimension\nif kernel_h == kernel_w:\n gkernel = dkernel = kernel_h\nelse:\n raise ValueError('This tutorial needs square shaped kernel') \n \nif stride_h == stride_w:\n gstride = dstride = stride_h\nelse:\n raise ValueError('This tutorial needs same stride in all dims')\n\n# Helper functions\ndef bn_with_relu(x, activation=C.relu):\n h = BatchNormalization(map_rank=1)(x)\n return C.relu(h)\n\n# We use param-relu function to use a leak=0.2 since CNTK implementation \n# of Leaky ReLU is fixed to 0.01\ndef bn_with_leaky_relu(x, leak=0.2):\n h = BatchNormalization(map_rank=1)(x)\n r = C.param_relu(C.constant((np.ones(h.shape)*leak).astype(np.float32)), h)\n return r", "Generator\nThe generator takes a 100-dimensional random vector (for starters) as input ($z$) and the outputs a 784 dimensional vector, corresponding to a flattened version of a 28 x 28 fake (synthetic) image ($x^*$). In this tutorial, we use fractionally strided convolutions (a.k.a ConvolutionTranspose) with ReLU activations except for the last layer. We use a tanh activation on the last layer to make sure that the output of the generator function is confined to the interval [-1, 1]. The use of ReLU and tanh activation functions are key in addition to using the fractionally strided convolutions.", "def convolutional_generator(z):\n with default_options(init=C.normal(scale=0.02)):\n print('Generator input shape: ', z.shape)\n\n s_h2, s_w2 = img_h//2, img_w//2 #Input shape (14,14)\n s_h4, s_w4 = img_h//4, img_w//4 # Input shape (7,7)\n gfc_dim = 1024\n gf_dim = 64\n\n h0 = Dense(gfc_dim, activation=None)(z)\n h0 = bn_with_relu(h0)\n print('h0 shape', h0.shape)\n\n h1 = Dense([gf_dim * 2, s_h4, s_w4], activation=None)(h0)\n h1 = bn_with_relu(h1)\n print('h1 shape', h1.shape)\n\n h2 = ConvolutionTranspose2D(gkernel,\n num_filters=gf_dim*2,\n strides=gstride,\n pad=True,\n output_shape=(s_h2, s_w2),\n activation=None)(h1)\n h2 = bn_with_relu(h2)\n print('h2 shape', h2.shape)\n\n h3 = ConvolutionTranspose2D(gkernel,\n num_filters=1,\n strides=gstride,\n pad=True,\n output_shape=(img_h, img_w),\n activation=C.sigmoid)(h2)\n print('h3 shape :', h3.shape)\n\n return C.reshape(h3, img_h * img_w)", "Discriminator\nThe discriminator takes as input ($x^*$) the 784 dimensional output of the generator or a real MNIST image, re-shapes the input to a 28 x 28 image and outputs the estimated probability that the input image is a real MNIST image. The network is modeled using strided convolution with Leaky ReLU activation except for the last layer. 
We use a sigmoid activation on the last layer to ensure the discriminator output lies in the inteval of [0,1].", "def convolutional_discriminator(x):\n with default_options(init=C.normal(scale=0.02)):\n\n dfc_dim = 1024\n df_dim = 64\n\n print('Discriminator convolution input shape', x.shape)\n x = C.reshape(x, (1, img_h, img_w))\n\n h0 = Convolution2D(dkernel, 1, strides=dstride)(x)\n h0 = bn_with_leaky_relu(h0, leak=0.2)\n print('h0 shape :', h0.shape)\n\n h1 = Convolution2D(dkernel, df_dim, strides=dstride)(h0)\n h1 = bn_with_leaky_relu(h1, leak=0.2)\n print('h1 shape :', h1.shape)\n\n h2 = Dense(dfc_dim, activation=None)(h1)\n h2 = bn_with_leaky_relu(h2, leak=0.2)\n print('h2 shape :', h2.shape)\n\n h3 = Dense(1, activation=C.sigmoid)(h2)\n print('h3 shape :', h3.shape)\n\n return h3", "We use a minibatch size of 128 and a fixed learning rate of 0.0002 for training. In the fast mode (isFast = True) we verify only functional correctness with 5000 iterations. \nNote: In the slow mode, the results look a lot better but it requires in the order of 10 minutes depending on your hardware. In general, the more number of minibatches one trains, the better is the fidelity of the generated images.", "# training config\nminibatch_size = 128\nnum_minibatches = 5000 if isFast else 10000\nlr = 0.0002\nmomentum = 0.5 #equivalent to beta1", "Build the graph\nThe rest of the computational graph is mostly responsible for coordinating the training algorithms and parameter updates, which is particularly tricky with GANs for couple reasons. The GANs are sensitive to the choice of learner and the parameters. Many of the parameters chosen here are based on many hard learnt lessons from the community. You may directly go to the code if you have read the basic GAN tutorial. \n\n\nFirst, the discriminator must be used on both the real MNIST images and fake images generated by the generator function. One way to represent this in the computational graph is to create a clone of the output of the discriminator function, but with substituted inputs. Setting method=share in the clone function ensures that both paths through the discriminator model use the same set of parameters.\n\n\nSecond, we need to update the parameters for the generator and discriminator model separately using the gradients from different loss functions. We can get the parameters for a Function in the graph with the parameters attribute. However, when updating the model parameters, update only the parameters of the respective models while keeping the other parameters unchanged. In other words, when updating the generator we will update only the parameters of the $G$ function while keeping the parameters of the $D$ function fixed and vice versa.\n\n\nTraining the Model\nThe code for training the GAN very closely follows the algorithm as presented in the original NIPS 2014 paper. In this implementation, we train $D$ to maximize the probability of assigning the correct label (fake vs. real) to both training examples and the samples from $G$. In other words, $D$ and $G$ play the following two-player minimax game with the value function $V(G,D)$:\n$$\n \\min_G \\max_D V(D,G)= \\mathbb{E}{x}[ log D(x) ] + \\mathbb{E}{z}[ log(1 - D(G(z))) ]\n$$\nAt the optimal point of this game the generator will produce realistic looking data while the discriminator will predict that the generated image is indeed fake with a probability of 0.5. 
The algorithm referred to below is implemented in this tutorial.", "def build_graph(noise_shape, image_shape, generator, discriminator):\n input_dynamic_axes = [C.Axis.default_batch_axis()]\n Z = C.input(noise_shape, dynamic_axes=input_dynamic_axes)\n X_real = C.input(image_shape, dynamic_axes=input_dynamic_axes)\n X_real_scaled = X_real / 255.0\n\n # Create the model function for the generator and discriminator models\n X_fake = generator(Z)\n D_real = discriminator(X_real_scaled)\n D_fake = D_real.clone(\n method = 'share',\n substitutions = {X_real_scaled.output: X_fake.output}\n )\n\n # Create loss functions and configure optimization algorithms\n G_loss = 1.0 - C.log(D_fake)\n D_loss = -(C.log(D_real) + C.log(1.0 - D_fake))\n\n G_learner = adam(\n parameters = X_fake.parameters,\n lr = learning_rate_schedule(lr, UnitType.sample),\n momentum = momentum_schedule(0.5)\n )\n D_learner = adam(\n parameters = D_real.parameters,\n lr = learning_rate_schedule(lr, UnitType.sample),\n momentum = momentum_schedule(0.5)\n )\n\n # Instantiate the trainers\n G_trainer = Trainer(\n X_fake,\n (G_loss, None),\n G_learner\n )\n D_trainer = Trainer(\n D_real,\n (D_loss, None),\n D_learner\n )\n\n return X_real, X_fake, Z, G_trainer, D_trainer", "With the value functions defined we proceed to iteratively train the GAN model. The training of the model can take a significant amount of time depending on the hardware, especially if the isFast flag is turned off.", "def train(reader_train, generator, discriminator):\n X_real, X_fake, Z, G_trainer, D_trainer = \\\n build_graph(g_input_dim, d_input_dim, generator, discriminator)\n\n # print out loss for each model for up to 25 times\n print_frequency_mbsize = num_minibatches // 25\n \n print(\"First row is Generator loss, second row is Discriminator loss\")\n pp_G = ProgressPrinter(print_frequency_mbsize)\n pp_D = ProgressPrinter(print_frequency_mbsize)\n\n k = 2\n\n input_map = {X_real: reader_train.streams.features}\n for train_step in range(num_minibatches):\n\n # train the discriminator model for k steps\n for gen_train_step in range(k):\n Z_data = noise_sample(minibatch_size)\n X_data = reader_train.next_minibatch(minibatch_size, input_map)\n if X_data[X_real].num_samples == Z_data.shape[0]:\n batch_inputs = {X_real: X_data[X_real].data, Z: Z_data}\n D_trainer.train_minibatch(batch_inputs)\n\n # train the generator model for a single step\n Z_data = noise_sample(minibatch_size)\n batch_inputs = {Z: Z_data}\n\n G_trainer.train_minibatch(batch_inputs)\n G_trainer.train_minibatch(batch_inputs)\n\n pp_G.update_with_trainer(G_trainer)\n pp_D.update_with_trainer(D_trainer)\n\n G_trainer_loss = G_trainer.previous_minibatch_loss_average\n\n return Z, X_fake, G_trainer_loss\n\nreader_train = create_reader(train_file, True, d_input_dim, label_dim=10)\n\n# G_input, G_output, G_trainer_loss = train(reader_train, dense_generator, dense_discriminator)\nG_input, G_output, G_trainer_loss = train(reader_train,\n convolutional_generator,\n convolutional_discriminator)\n\n# Print the generator loss \nprint(\"Training loss of the generator is: {0:.2f}\".format(G_trainer_loss))", "Generating Fake (Synthetic) Images\nNow that we have trained the model, we can create fake images simply by feeding random noise into the generator and displaying the outputs. Below are a few images generated from random samples. 
To get a new set of samples, you can re-run the last cell.", "def plot_images(images, subplot_shape):\n plt.style.use('ggplot')\n fig, axes = plt.subplots(*subplot_shape)\n for image, ax in zip(images, axes.flatten()):\n ax.imshow(image.reshape(28, 28), vmin=0, vmax=1.0, cmap='gray')\n ax.axis('off')\n plt.show()\n\n\nnoise = noise_sample(36)\nimages = G_output.eval({G_input: noise})\nplot_images(images, subplot_shape=[6, 6])", "A larger number of iterations should generate more realistic-looking MNIST images. A sampling of such generated images is shown below.\n\nNote: It takes a large number of iterations to capture a representation of the real world signal. Even simple dense networks can be quite effective in modelling data, though MNIST is a relatively simple dataset.\nSuggested Task\n\n\nPlease refer to several hacks presented in this article by Soumith Chintala, Facebook Research. While some of the hacks have been incorporated in this notebook, there are several others I would suggest that you try out.\n\n\nPerformance is a key aspect of deep neural network training. Study how changing the minibatch size impacts the performance, both with regard to the quality of the generated images and the time it takes to train a model.\n\n\nTry generating fake images using the CIFAR-10 data set as the training data. How does the network above perform? There are other variations of GANs, such as the conditional GAN, where the network is additionally conditioned on the input label. Try implementing the labels." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
vascotenner/holoviews
doc/Examples/HipsterDynamics.ipynb
bsd-3-clause
[ "The Hipster Effect: An IPython Interactive Exploration\nThis notebook originally appeared as a post on the blog Pythonic Perambulations. The content is BSD licensed. It has been adapted to use HoloViews by Philipp Rudiger.\nThis week I started seeing references all over the internet to this paper: The Hipster Effect: When Anticonformists All Look The Same. It essentially describes a simple mathematical model which models conformity and non-conformity among a mutually interacting population, and finds some interesting results: namely, conformity among a population of self-conscious non-conformists is similar to a phase transition in a time-delayed thermodynamic system. In other words, with enough hipsters around responding to delayed fashion trends, a plethora of facial hair and fixed gear bikes is a natural result.\nAlso naturally, upon reading the paper I wanted to try to reproduce the work. The paper solves the problem analytically for a continuous system and shows the precise values of certain phase transitions within the long-term limit of the postulated system. Though such theoretical derivations are useful, I often find it more intuitive to simulate systems like this in a more approximate manner to gain hands-on understanding.\nMathematically Modeling Hipsters\nWe'll start by defining the problem, and going through the notation suggested in the paper. We'll consider a group of $N$ people, and define the following quantities:\n\n$\\epsilon_i$ : this value is either $+1$ or $-1$. $+1$ means person $i$ is a hipster, while $-1$ means they're a conformist.\n$s_i(t)$ : this is also either $+1$ or $-1$. This indicates person $i$'s choice of style at time $t$. For example, $+1$ might indicated a bushy beard, while $-1$ indicates clean-shaven.\n$J_{ij}$ : The influence matrix. This is a value greater than zero which indicates how much person $j$ influences person $i$.\n$\\tau_{ij}$ : The delay matrix. This is an integer telling us the length of delay for the style of person $j$ to affect the style of person $i$.\n\nThe idea of the model is this: on any given day, person $i$ looks at the world around him or her, and sees some previous day's version of everyone else. This information is $s_j(t - \\tau_{ij})$.\nThe amount that person $j$ influences person $i$ is given by the influence matrix, $J_{ij}$, and after putting all the information together, we see that person $i$'s mean impression of the world's style is\n$$\nm_i(t) = \\frac{1}{N} \\sum_j J_{ij} \\cdot s_j(t - \\tau_{ij})\n$$\nGiven the problem setup, we can quickly check whether this impression matches their own current style:\n\nif $m_i(t) \\cdot s_i(t) > 0$, then person $i$ matches those around them\nif $m_i(t) \\cdot s_i(t) < 0$, then person $i$ looks different than those around them\n\nA hipster who notices that their style matches that of the world around them will risk giving up all their hipster cred if they don't change quickly; a conformist will have the opposite reaction. Because $\\epsilon_i$ = $+1$ for a hipster and $-1$ for a conformist, we can encode this observation in a single value which tells us what which way the person will lean that day:\n$$\nx_i(t) = -\\epsilon_i m_i(t) s_i(t)\n$$\nSimple! If $x_i(t) > 0$, then person $i$ will more likely switch their style that day, and if $x_i(t) < 0$, person $i$ will more likely maintain the same style as the previous day. 
So we have a formula for how to update each person's style based on their preferences, their influences, and the world around them.\nBut the world is a noisy place. Each person might have other things going on that day, so instead of using this value directly, we can turn it in to a probabilistic statement. Consider the function\n$$\n\\phi(x;\\beta) = \\frac{1 + \\tanh(\\beta \\cdot x)}{2}\n$$\nWe can plot this function quickly:", "import numpy as np\nimport holoviews as hv\nhv.notebook_extension(bokeh=True, width=90)\n\n%%output backend='matplotlib'\n%%opts NdOverlay [aspect=1.5 figure_size=200 legend_position='top_left']\nx = np.linspace(-1, 1, 1000)\ncurves = hv.NdOverlay(key_dimensions=['$\\\\beta$'])\nfor beta in [0.1, 0.5, 1, 5]:\n curves[beta] = hv.Curve(zip(x, 0.5 * (1 + np.tanh(beta * x))), kdims=['$x$'],\n vdims=['$\\\\phi(x;\\\\beta)$'])\ncurves", "This gives us a nice way to move from our preference $x_i$ to a probability of switching styles. Here $\\beta$ is inversely related to noise. For large $\\beta$, the noise is small and we basically map $x > 0$ to a 100% probability of switching, and $x<0$ to a 0% probability of switching. As $\\beta$ gets smaller, the probabilities get less and less distinct.\nThe Code\nLet's see this model in action. We'll start by defining a class which implements everything we've gone through above:", "class HipsterStep(object):\n \"\"\"Class to implement hipster evolution\n \n Parameters\n ----------\n initial_style : length-N array\n values > 0 indicate one style, while values <= 0 indicate the other.\n is_hipster : length-N array\n True or False, indicating whether each person is a hipster\n influence_matrix : N x N array\n Array of non-negative values. influence_matrix[i, j] indicates\n how much influence person j has on person i\n delay_matrix : N x N array\n Array of positive integers. 
delay_matrix[i, j] indicates the\n number of days delay between person j's influence on person i.\n \"\"\"\n def __init__(self, initial_style, is_hipster,\n influence_matrix, delay_matrix,\n beta=1, rseed=None):\n self.initial_style = initial_style\n self.is_hipster = is_hipster\n self.influence_matrix = influence_matrix\n self.delay_matrix = delay_matrix\n \n self.rng = np.random.RandomState(rseed)\n self.beta = beta\n \n # make s array consisting of -1 and 1\n self.s = -1 + 2 * (np.atleast_2d(initial_style) > 0)\n N = self.s.shape[1]\n \n # make eps array consisting of -1 and 1\n self.eps = -1 + 2 * (np.asarray(is_hipster) > 0)\n \n # create influence_matrix and delay_matrix\n self.J = np.asarray(influence_matrix, dtype=float)\n self.tau = np.asarray(delay_matrix, dtype=int)\n \n # validate all the inputs\n assert self.s.ndim == 2\n assert self.s.shape[1] == N\n assert self.eps.shape == (N,)\n assert self.J.shape == (N, N)\n assert np.all(self.J >= 0)\n assert np.all(self.tau > 0)\n\n @staticmethod\n def phi(x, beta):\n return 0.5 * (1 + np.tanh(beta * x))\n \n def step_once(self):\n N = self.s.shape[1]\n \n # iref[i, j] gives the index for the j^th individual's\n # time-delayed influence on the i^th individual\n iref = np.maximum(0, self.s.shape[0] - self.tau)\n \n # sref[i, j] gives the previous state of the j^th individual\n # which affects the current state of the i^th individual\n sref = self.s[iref, np.arange(N)]\n\n # m[i] is the mean of weighted influences of other individuals\n m = (self.J * sref).sum(1) / self.J.sum(1)\n \n # From m, we use the sigmoid function to compute a transition probability\n transition_prob = self.phi(-self.eps * m * self.s[-1], beta=self.beta)\n \n # Now choose steps stochastically based on this probability\n new_s = np.where(transition_prob > self.rng.rand(N), -1, 1) * self.s[-1]\n \n # Add this to the results, and return\n self.s = np.vstack([self.s, new_s])\n return self.s\n \n def step(self, N):\n for i in range(N):\n self.step_once()\n return self.s\n", "Now we'll create a function which will return an instance of the HipsterStep class with the appropriate settings:", "def get_sim(Npeople=500, hipster_frac=0.8, initial_state_frac=0.5, delay=20, log10_beta=0.5, rseed=42):\n\n rng = np.random.RandomState(rseed)\n\n initial_state = (rng.rand(1, Npeople) > initial_state_frac)\n is_hipster = (rng.rand(Npeople) > hipster_frac)\n\n influence_matrix = abs(rng.randn(Npeople, Npeople))\n influence_matrix.flat[::Npeople + 1] = 0\n\n delay_matrix = 1 + rng.poisson(delay, size=(Npeople, Npeople))\n\n return HipsterStep(initial_state, is_hipster, influence_matrix, delay_matrix=delay_matrix,\n beta=10 ** log10_beta, rseed=rseed)", "Exploring this data\nNow that we've defined the simulation, we can start exploring this data. 
I'll quickly demonstrate how to advance simulation time and get the results.\nFirst we initialize the model with a certain fraction of hipsters:", "sim = get_sim(hipster_frac=0.8)", "To run the simulation a number of steps we execute sim.step(Nsteps) giving us a matrix of identities for each invidual at each timestep.", "result = sim.step(200)\nresult", "Now we can simply go right ahead and visualize this data using an Image Element type, defining the dimensions and bounds of the space.", "%%opts Image [width=600]\nhv.Image(result.T, bounds=(0, 0, 100, 500),\n kdims=['Time', 'individual'], vdims=['State'])", "Now that you know how to run the simulation and access the data have a go at exploring the effects of different parameters on the population dynamics or apply some custom analyses to this data. Here are two quick examples of what you can do:", "%%opts Curve [width=350] Image [width=350]\nhipster_frac = hv.HoloMap(kdims=['Hipster Fraction'])\nfor i in np.linspace(0.1, 1, 10):\n sim = get_sim(hipster_frac=i)\n hipster_frac[i] = hv.Image(sim.step(200).T, (0, 0, 500, 500), group='Population Dynamics',\n kdims=['Time', 'individual'], vdims=['Bearded'])\n(hipster_frac + hipster_frac.reduce(individual=np.mean).to.curve('Time', 'Bearded'))\n\n%%opts Overlay [width=600] Curve (color='black')\naggregated = hipster_frac.table().aggregate(['Time', 'Hipster Fraction'], np.mean, np.std)\naggregated.to.curve('Time') * aggregated.to.errorbars('Time')", "Your turn\nWhat intuitions can you develop about this system? How do the different parameters affect it?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rishuatgithub/MLPy
torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/05-CNN-Exercises.ipynb
apache-2.0
[ "<img src=\"../Pierian-Data-Logo.PNG\">\n<br>\n<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>\nCNN Exercises\nFor these exercises we'll work with the <a href='https://www.kaggle.com/zalando-research/fashionmnist'>Fashion-MNIST</a> dataset, also available through <a href='https://pytorch.org/docs/stable/torchvision/index.html'><tt><strong>torchvision</strong></tt></a>. Like MNIST, this dataset consists of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes:\n0. T-shirt/top\n1. Trouser\n2. Pullover\n3. Dress\n4. Coat\n5. Sandal\n6. Shirt\n7. Sneaker\n8. Bag\n9. Ankle boot\n<div class=\"alert alert-danger\" style=\"margin: 10px\"><strong>IMPORTANT NOTE!</strong> Make sure you don't run the cells directly above the example output shown, <br>otherwise you will end up writing over the example output!</div>\n\nPerform standard imports, load the Fashion-MNIST dataset\nRun the cell below to load the libraries needed for this exercise and the Fashion-MNIST dataset.<br>\nPyTorch makes the Fashion-MNIST dataset available through <a href='https://pytorch.org/docs/stable/torchvision/datasets.html#fashion-mnist'><tt><strong>torchvision</strong></tt></a>. The first time it's called, the dataset will be downloaded onto your computer to the path specified. From that point, torchvision will always look for a local copy before attempting another download.", "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.utils.data import DataLoader\nfrom torchvision import datasets, transforms\nfrom torchvision.utils import make_grid\n\nimport numpy as np\nimport pandas as pd\nfrom sklearn.metrics import confusion_matrix\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ntransform = transforms.ToTensor()\n\ntrain_data = datasets.FashionMNIST(root='../Data', train=True, download=True, transform=transform)\ntest_data = datasets.FashionMNIST(root='../Data', train=False, download=True, transform=transform)\n\nclass_names = ['T-shirt','Trouser','Sweater','Dress','Coat','Sandal','Shirt','Sneaker','Bag','Boot']", "1. Create data loaders\nUse DataLoader to create a <tt>train_loader</tt> and a <tt>test_loader</tt>. Batch sizes should be 10 for both.", "# CODE HERE\n\n\n\n\n# DON'T WRITE HERE", "2. Examine a batch of images\nUse DataLoader, <tt>make_grid</tt> and matplotlib to display the first batch of 10 images.<br>\nOPTIONAL: display the labels as well", "# CODE HERE\n\n\n\n\n\n\n\n# DON'T WRITE HERE\n# IMAGES ONLY\n\n# DON'T WRITE HERE\n# IMAGES AND LABELS", "Downsampling\n<h3>3. If a 28x28 image is passed through a Convolutional layer using a 5x5 filter, a step size of 1, and no padding, what is the resulting matrix size?</h3>\n\n<div style='border:1px black solid; padding:5px'>\n<br><br>\n</div>", "##################################################\n###### ONLY RUN THIS TO CHECK YOUR ANSWER! ######\n################################################\n\n# Run the code below to check your answer:\nconv = nn.Conv2d(1, 1, 5, 1)\nfor x,labels in train_loader:\n print('Orig size:',x.shape)\n break\nx = conv(x)\nprint('Down size:',x.shape)", "4. If the sample from question 3 is then passed through a 2x2 MaxPooling layer, what is the resulting matrix size?\n<div style='border:1px black solid; padding:5px'>\n<br><br>\n</div>", "##################################################\n###### ONLY RUN THIS TO CHECK YOUR ANSWER! 
######\n################################################\n\n# Run the code below to check your answer:\nx = F.max_pool2d(x, 2, 2)\nprint('Down size:',x.shape)", "CNN definition\n5. Define a convolutional neural network\nDefine a CNN model that can be trained on the Fashion-MNIST dataset. The model should contain two convolutional layers, two pooling layers, and two fully connected layers. You can use any number of neurons per layer so long as the model takes in a 28x28 image and returns an output of 10. Portions of the definition have been filled in for convenience.", "# CODE HERE\nclass ConvolutionalNetwork(nn.Module):\n def __init__(self):\n super().__init__()\n pass\n\n def forward(self, X):\n pass \n return \n \ntorch.manual_seed(101)\nmodel = ConvolutionalNetwork()", "Trainable parameters\n6. What is the total number of trainable parameters (weights & biases) in the model above?\nAnswers will vary depending on your model definition.\n<div style='border:1px black solid; padding:5px'>\n<br><br>\n</div>", "# CODE HERE", "7. Define loss function & optimizer\nDefine a loss function called \"criterion\" and an optimizer called \"optimizer\".<br>\nYou can use any functions you want, although we used Cross Entropy Loss and Adam (learning rate of 0.001) respectively.", "# CODE HERE\n\n\n\n\n# DON'T WRITE HERE", "8. Train the model\nDon't worry about tracking loss values, displaying results, or validating the test set. Just train the model through 5 epochs. We'll evaluate the trained model in the next step.<br>\nOPTIONAL: print something after each epoch to indicate training progress.", "# CODE HERE\n\n\n\n\n", "9. Evaluate the model\nSet <tt>model.eval()</tt> and determine the percentage correct out of 10,000 total test images.", "# CODE HERE\n\n\n\n\n", "Great job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Mashimo/datascience
01-Regression/overfit.ipynb
apache-2.0
[ "The overfitting problem\nWe have seen what is linear regression, how to make models and algorithms for estimating the parameters of such models, how to measure the loss.\nNow we see how to assess how well the considered method should perform in predicting new data, how to select amongst possible models to choose the best performing.\nThis leads directly to the bias-variance tradeoff, which is fundamental to machine learning. \nLoad in the data\nThe dataset is from house sales in King County, the region where the city of Seattle, WA is located.", "import pandas as pd\n\ndtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float, \n 'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str, \n 'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':str, \n 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':int, \n 'id':str, 'sqft_lot':int, 'view':int}\n\nsales = pd.read_csv('../datasets/kc_house_data.csv', dtype=dtype_dict)\n\nsales.head()\n\nsales.shape\n\nm = sales.shape[0] # number of training examples\nm", "The dataset contains information (21 features, including the price) related to 21613 houses.\nOur target variable (i.e., what we want to predict when a new house gets on sale) is the price.\nBaseline: the simplest model\nNow let's compute the loss in the case of the simplest model: a fixed price equal to the average of historic prices, independently on house size, rooms, location, ...", "# Let's compute the mean of the House Prices in King County \ny = sales['price'] # extract the price column\n\navg_price = y.mean() # this is our baseline\nprint (\"average price: ${:.0f} \".format(avg_price))\n\nExamplePrice = y[0]\nExamplePrice", "The predictions are very easy to calculate, just the baseline value:", "def get_baseline_predictions():\n # Simplest version: return the baseline as predicted values\n predicted_values = avg_price \n return predicted_values", "Example:", "my_house_size = 2500\nestimated_price = get_baseline_predictions()\nprint (\"The estimated price for a house with {} squared feet is {:.0f}\".format(my_house_size, estimated_price))", "The estimated price for the example house will still be around 540K, wile the real value is around 222K. Quite an error!\nMeasures of loss\nThere are several way of implementing the loss, I use the squared error here. 
\n$L = [y - f(X)]^2$", "import numpy as np\ndef get_loss(yhat, target):\n \"\"\"\n Arguments:\n yhat -- vector of size m (predicted labels)\n target -- vector of size m (true labels)\n \n Returns:\n loss -- the value of the L2 loss function\n \"\"\"\n # compute the residuals (since we are squaring it doesn't matter \n # which order you subtract)\n # np.dot will square the residuals and add them up\n loss = np.dot((target - yhat), (target - yhat))\n\n return(loss)", "To better see the value of the cost function we use also the RMSE, the Root Mean Square Deviation.\nBasically the average of the losses, rooted.", "baselineCost = get_loss(get_baseline_predictions(), y)\n\nprint (\"Training Error for baseline RSS: {:.0f}\".format(baselineCost))\nprint (\"Average Training Error for baseline RMSE: {:.0f}\".format(np.sqrt(baselineCost/m)))", "As you can see, it is quite high error, especially related to the average selling price.\nNow, we can look at how training error behaves as model complexity increases.\nLearning a better but still simple model\nUsing a constant value, the average, is easy but does not make too much sense.\nLet's create a linear model with the house size as the feature. We expect that the price is dependent on the size: bigger house, more expensive.", "from sklearn import linear_model\n\nsimple_model = linear_model.LinearRegression()\n\nsimple_features = sales[['sqft_living']] # input X: the house size\n\nsimple_model.fit(simple_features, y)", "Now that we have fit the model we can extract the regression weights (coefficients) as follows:", "simple_model_intercept = simple_model.intercept_\nprint (simple_model_intercept)\n\nsimple_model_weights = simple_model.coef_\nprint (simple_model_weights)", "This means that our simple model to predict a house price y is (approximated): \n$y = -43581 + 281x $\nwhere x is the size in squared feet.\nIt is not anymore a horizontal line but a diagonal one, with a slope.\nMaking Predictions\nRecall that once a model is built we can use the .predict() function to find the predicted values for data we pass. 
For example using the example model above:", "training_predictions = simple_model.predict(simple_features)\nprint (training_predictions[0])", "We are getting closer to the real value for the example house (recall, it's around 222K).\nCompute the Training Error\nNow that we can make predictions given the model, let's again compute the RSS and the RMSE.", " # First get the predictions using the features subset\npredictions = simple_model.predict(sales[['sqft_living']])\nsimpleCost = get_loss(predictions, y) \nprint (\"Training Error for baseline RSS: {:.0f}\".format(simpleCost))\nprint (\"Average Training Error for baseline RMSE: {:.0f}\".format(np.sqrt(simpleCost/m)))", "The simple model reduced greatly the training error.\nLearning a multiple regression model\nWe can add more features to the model, for example the number of bedrooms and bathrooms.", "more_features = sales[['sqft_living', 'bedrooms', 'bathrooms']] # input X", "We can learn a multiple regression model predicting 'price' based on the above features on the data with the following code:", "better_model = linear_model.LinearRegression()\nbetter_model.fit(more_features, y)", "Now that we have fitted the model we can extract the regression weights (coefficients) as follows:", "betterModel_intercept = better_model.intercept_\nprint (betterModel_intercept)\n\nbetterModel_weights = better_model.coef_\nprint (betterModel_weights)", "The better model is therefore:\n$y = 74847 + 309x1 - 57861x2 + 7933x3$\nNote that the equation has now three variables: the size, the bedrooms and the bathrooms.\nMaking Predictions\nAgain we can use the .predict() function to find the predicted values for data we pass. For the model above:", "better_predictions = better_model.predict(more_features)\nprint (better_predictions[0]) ", "Again, a little bit closer to the real value (222K)\nCompute the Training Error\nNow that we can make predictions given the model, let's write a function to compute the RSS of the model.", "predictions = better_model.predict(more_features)\nbetterCost = get_loss(predictions, y) \nprint (\"Training Error for baseline RSS: {:.0f}\".format(betterCost))\nprint (\"Average Training Error for baseline RMSE: {:.0f}\".format(np.sqrt(betterCost/m)))", "Only a slight improvement this time\nCreate some new features\nAlthough we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms) but we can also consider transformations of existing features e.g. the log of the squarefeet or even \"interaction\" features such as the product of bedrooms and bathrooms.", "from math import log", "Next we create the following new features as column :\n* bedrooms_squared = bedrooms*bedrooms\n* bed_bath_rooms = bedrooms*bathrooms\n* log_sqft_living = log(sqft_living)\n* lat_plus_long = lat + long \n* more polynomial features: bedrooms ^ 4, bathrooms ^ 7, size ^ 3", "sales['bedrooms_squared'] = sales['bedrooms'].apply(lambda x: x**2)\n\nsales['bed_bath_rooms'] = sales['bedrooms'] * sales.bathrooms\n\nsales['log_sqft_living'] = sales['sqft_living'].apply(lambda x: log(x))\n\nsales['lat_plus_long'] = sales['lat'] + sales.long\n\nsales['bedrooms_4'] = sales['bedrooms'].apply(lambda x: x**4)\n\nsales['bathrooms_7'] = sales['bathrooms'].apply(lambda x: x**7)\n\nsales['size_3'] = sales['sqft_living'].apply(lambda x: x**3)\n\nsales.head()", "Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. 
Consequently this feature will mostly affect houses with many bedrooms.\nbedrooms times bathrooms gives what's called an \"interaction\" feature. It is large when both of them are large.\nTaking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.\nAdding latitude to longitude is totally non-sensical but we will do it anyway (you'll see why).\n\nLearning Multiple Models\nNow we will learn the weights for five (nested) models for predicting house prices. The first model will have the fewest features, the second model will add more features and so on:", "model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long', 'sqft_lot', 'floors']\nmodel_2_features = model_1_features + ['log_sqft_living', 'bedrooms_squared', 'bed_bath_rooms']\nmodel_3_features = model_2_features + ['lat_plus_long']\nmodel_4_features = model_3_features + ['bedrooms_4', 'bathrooms_7']\nmodel_5_features = model_4_features + ['size_3']", "Now that we have the features, we learn the weights for the five different models for predicting target = 'price' and look at the value of the weights/coefficients:", "model_1 = linear_model.LinearRegression()\nmodel_1.fit(sales[model_1_features], y)\n\nmodel_2 = linear_model.LinearRegression()\nmodel_2.fit(sales[model_2_features], y)\n\nmodel_3 = linear_model.LinearRegression()\nmodel_3.fit(sales[model_3_features], y)\n\nmodel_4 = linear_model.LinearRegression()\nmodel_4.fit(sales[model_4_features], y)\n\nmodel_5 = linear_model.LinearRegression()\nmodel_5.fit(sales[model_5_features], y)\n\n# You can examine/extract each model's coefficients, for example:\nprint (model_1.coef_)\nprint (model_2.coef_)", "Interesting: in the previous model the weight coefficient for the lot size was positive, but now in model_2 it is negative.\nThis is an effect of adding the log of the size as a feature.\nComparing multiple models\nNow that you've learned five models and extracted the model weights, we want to evaluate which model is best.\nWe can use the loss function from earlier to compute the RSS on training data for each of the models.", "# Compute the RSS for each of the models:\nprint (get_loss(model_1.predict(sales[model_1_features]), y))\nprint (get_loss(model_2.predict(sales[model_2_features]), y))\nprint (get_loss(model_3.predict(sales[model_3_features]), y))\nprint (get_loss(model_4.predict(sales[model_4_features]), y))\nprint (get_loss(model_5.predict(sales[model_5_features]), y))", "model_5 has the lowest RSS on the training data.\nThe most complex model.\nThe test error\nTraining error decreases quite significantly with model complexity. This is quite intuitive, because the model was fit on the training points and then as we increase the model complexity, we are better able to fit the training data points.\nA natural question is whether the training error is a good measure of predictive performance? 
\nThe issue is that the training error is overly optimistic and that's because the beta parameters were fit on the training data to minimise the residual sum of squares, which can often be related to the training error.\nSo, in general, having small training error does not imply having good predictive performance.\nThis takes us to something called test error (or out-of-sample error): we hold out some houses from the data set and we're putting these into what's called a test set.\nAnd when we fit our models, we just fit our models on the training data set.\nBut then when we go to assess our performance of that model we look at these test houses in the test dataset and these are hopefully serving as a proxy of everything out there in the world.\nBottom line, the test error is a (noisy) approximation of the true error.\nSplit data into training and testing.\nLet's see how can be applied to our example.\nFirst we split the data into a training set and a testing set using a function from sklearn, the train_test_split().\nWe use a seed for reproducibility.", "from sklearn.model_selection import train_test_split\ntrain_data,test_data = train_test_split(sales, test_size=0.3, random_state=999)\n\ntrain_data.head()\n\ntrain_data.shape\n\n# test_data = pd.read_csv('kc_house_test_data.csv', dtype=dtype_dict)\ntest_data.head()\n\ntest_data.shape", "In this case the testing set will be the 30% (therefore the training set is 70% of the original data)", "train_y = train_data.price # extract the price column\ntest_y = test_data.price", "Retrain the models on training data only:", "model_1.fit(train_data[model_1_features], train_y)\n\nmodel_2.fit(train_data[model_2_features], train_y)\n\nmodel_3.fit(train_data[model_3_features], train_y)\n\nmodel_4.fit(train_data[model_4_features], train_y)\n\nmodel_5.fit(train_data[model_5_features], train_y)\n\n# Compute the RSS on TRAINING data for each of the models\nprint (get_loss(model_1.predict(train_data[model_1_features]), train_y))\nprint (get_loss(model_2.predict(train_data[model_2_features]), train_y))\nprint (get_loss(model_3.predict(train_data[model_3_features]), train_y))\nprint (get_loss(model_4.predict(train_data[model_4_features]), train_y))\nprint (get_loss(model_5.predict(train_data[model_5_features]), train_y))", "Now compute the RSS on TEST data for each of the models.", "# Compute the RSS on TESTING data for each of the three models and record the values:\nprint (get_loss(model_1.predict(test_data[model_1_features]), test_y))\nprint (get_loss(model_2.predict(test_data[model_2_features]), test_y))\nprint (get_loss(model_3.predict(test_data[model_3_features]), test_y))\nprint (get_loss(model_4.predict(test_data[model_4_features]), test_y))\nprint (get_loss(model_5.predict(test_data[model_5_features]), test_y))", "The most complex model has the lowest error on the training data, but since that model has a non-sensical feature, it performs less well on the test data.\n\nOverfitting\nWhen you have too many features in a model and the learned hypothesis fit the training set very well but fail to generalise to new data (predict prices on new houses) then this is called the overfitting problem.\nFormally, a model, let's say Model1 with some parameters beta_1, overfits if exists another model - let's call it Model2, with estimated parameters beta_2 such that the training error of Model2 is less than the training error of Model1 but on the other hand, the true general error of Model2 is greater than the true error of Model1.\n\nFrom the picture above you can 
see that the models prone to overfit are the ones\nthat have small training error and high complexity.\nTherefore one simple way to avoid overfitting is to prefer simpler models and avoid complex models with many features." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
knowledgeanyhow/notebooks
scikit-learn/sklearn_cookbook.ipynb
mit
[ "scikit-learn Cookbook\nThis cookbook contains recipes for some common applications of machine learning. You'll need a working knowledge of pandas, matplotlib, numpy, and, of course, scikit-learn to benefit from it.", "# <help:cookbook_setup>\n%matplotlib inline", "Training with k-Fold Cross-Validation\nThis recipe repeatedly trains a logistic regression classifier over different subsets (folds) of sample data. It attempts to match the percentage of each class in every fold to its percentage in the overall dataset (stratification). It evaluates each model against a test set and collects the confusion matrices for each test fold into a pandas.Panel.\nThis recipe defaults to using the Iris data set. To use your own data, set X to your instance feature vectors, y to the instance classes as a factor, and labels to the instance classes as human readable names.", "# <help:scikit_cross_validation>\nimport warnings\nwarnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them\nimport pandas\nimport sklearn\nimport sklearn.datasets\nimport sklearn.metrics as metrics \nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.cross_validation import StratifiedKFold\n\n# load the iris dataset\ndataset = sklearn.datasets.load_iris()\n\n# define feature vectors (X) and target (y)\nX = dataset.data \ny = dataset.target \nlabels = dataset.target_names \nlabels \n\n# <help:scikit_cross_validation>\n# use log reg classifier\nclf = LogisticRegression()\n\ncms = {}\nscores = []\ncv = StratifiedKFold(y, n_folds=10)\nfor i, (train, test) in enumerate(cv):\n # train then immediately predict the test set\n y_pred = clf.fit(X[train], y[train]).predict(X[test])\n # compute the confusion matrix on each fold, convert it to a DataFrame and stash it for later compute\n cms[i] = pandas.DataFrame(metrics.confusion_matrix(y[test], y_pred), columns=labels, index=labels)\n # stash the overall accuracy on the test set for the fold too\n scores.append(metrics.accuracy_score(y[test], y_pred))\n\n# Panel of all test set confusion matrices\npl = pandas.Panel(cms)\ncm = pl.sum(axis=0) #Sum the confusion matrices to get one view of how well the classifiers perform\ncm\n\n# <help:scikit_cross_validation>\n# accuracy predicting the test set for each fold\nscores", "Principal Component Analysis Plots\nThis recipe performs a PCA and plots the data against the first two principal components in a scatter plot. It then prints the eigenvalues and eigenvectors of the covariance matrix and finally prints the precentage of total variance explained by each component. \nThis recipe defaults to using the Iris data set. 
To use your own data, set X to your instance feature vectors, y to the instance classes as a factor, and labels to human-readable names of the classes.", "# <help:scikit_pca>\nimport warnings\nwarnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them\nfrom __future__ import division\nimport math\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sklearn.datasets\nimport sklearn.metrics as metrics\nfrom sklearn.decomposition import PCA\nfrom sklearn.preprocessing import StandardScaler\n\n# load the iris dataset\ndataset = sklearn.datasets.load_iris()\n# define feature vectors (X) and target (y)\nX = dataset.data \ny = dataset.target \nlabels = dataset.target_names \n\n# <help:scikit_pca>\n# define the number of components to compute, recommend n_components < y_features\npca = PCA(n_components=2) \nX_pca = pca.fit_transform(X)\n\n# plot the first two principal components\nfig, ax = plt.subplots()\nplt.scatter(X_pca[:,0], X_pca[:,1])\nplt.grid()\nplt.title('PCA of the dataset')\nax.set_xlabel('Component #1') \nax.set_ylabel('Component #2')\nplt.show()\n\n# <help:scikit_pca>\n# eigendecomposition on the covariance matrix\ncov_mat = np.cov(X_pca.T)\neig_vals, eig_vecs = np.linalg.eig(cov_mat)\nprint('Eigenvectors \\n%s' %eig_vecs)\nprint('\\nEigenvalues \\n%s' %eig_vals)\n\n# <help:scikit_pca>\n# prints the percentage of overall variance explained by each component\nprint(pca.explained_variance_ratio_)", "K-Means Clustering Plots\nThis recipe performs a K-means clustering k=1..n times. It prints and plots the the within-clusters sum of squares error for each k (i.e., inertia) as an indicator of what value of k might be appropriate for the given dataset.\nThis recipe defaults to using the Iris data set. To use your own data, set X to your instance feature vectors, y to the instance classes as a factor, and labels to human-readable names of the classes. To change the number of clusters, modify k.", "# <help:scikit_k_means_cluster>\nimport warnings\nwarnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them\nfrom time import time\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sklearn.datasets\nfrom sklearn.cluster import KMeans\n\n# load datasets and assign data and features\ndataset = sklearn.datasets.load_iris()\n# define feature vectors (X) and target (y)\nX = dataset.data\ny = dataset.target\n\n# set the number of clusters, must be >=1\nn = 6\ninertia = [np.NaN]\n\n# perform k-means clustering over i=0...k\nfor k in range(1,n):\n k_means_ = KMeans(n_clusters=k)\n k_means_.fit(X)\n print('k = %d, inertia= %f' % (k, k_means_.inertia_ ))\n inertia.append(k_means_.inertia_) \n \n# plot the SSE of the clusters for each value of i\nax = plt.subplot(111)\nax.plot(inertia, '-o')\nplt.xticks(range(n))\nplt.title(\"Inertia\")\nax.set_ylabel('Inertia')\nax.set_xlabel('# Clusters')\nplt.show() ", "SVM Classifier Hyperparameter Tuning with Grid Search\nThis recipe performs a grid search for the best settings for a support vector machine, predicting the class of each flower in the dataset. It splits the dataset into training and test instances once.\nThis recipe defaults to using the Iris data set. To use your own data, set X to your instance feature vectors, y to the instance classes as a factor, and labels to human-readable names of the classes. 
Modify parameters to change the grid search space or the scoring='accuracy' value to optimize a different metric for the classifier (e.g., precision, recall).", "#<help_scikit_grid_search>\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sklearn.datasets\nimport sklearn.metrics as metrics\nfrom sklearn.svm import SVC\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.metrics import classification_report\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.preprocessing import label_binarize\n\n# load datasets and features\ndataset = sklearn.datasets.load_iris()\n# define feature vectors (X) and target (y)\nX = dataset.data\ny = dataset.target\nlabels = dataset.target_names\n\n# separate datasets into training and test datasets once, no folding\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)\n\n#<help_scikit_grid_search>\n#define the parameter dictionary with the kernels of SVCs\nparameters = [\n {'kernel': ['rbf'], 'gamma': [1e-3, 1e-4, 1e-2], 'C': [1, 10, 100, 1000]},\n {'kernel': ['linear'], 'C': [1, 10, 100, 1000]},\n {'kernel': ['poly'], 'degree': [1, 3, 5], 'C': [1, 10, 100, 1000]}\n]\n\n# find the best parameters to optimize accuracy\nsvc_clf = SVC(C=1, probability= True)\nclf = GridSearchCV(svc_clf, parameters, cv=5, scoring='accuracy') #5 folds\nclf.fit(X_train, y_train) #train the model \nprint(\"Best parameters found from SVM's:\")\nprint clf.best_params_ \nprint(\"Best score found from SVM's:\") \nprint clf.best_score_", "Plot ROC Curves\nThis recipe plots the reciever operating characteristic (ROC) curve for a SVM classifier trained over the given dataset.\nThis recipe defaults to using the Iris data set which has three classes. The recipe uses a one-vs-the-rest strategy to create the binary classifications appropriate for ROC plotting. To use your own data, set X to your instance feature vectors, y to the instance classes as a factor, and labels to human-readable names of the classes.\nNote that the recipe adds noise to the iris features to make the ROC plots more realistic. Otherwise, the classification is nearly perfect and the plot hard to study. 
Remove the noise generator if you use your own data!", "# <help:scikit_roc>\nimport warnings\nwarnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sklearn.datasets\nimport sklearn.metrics as metrics\nfrom sklearn.svm import SVC\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.preprocessing import label_binarize\n\n# load iris, set and data\ndataset = sklearn.datasets.load_iris()\nX = dataset.data\n# binarize the output for binary classification\ny = label_binarize(dataset.target, classes=[0, 1, 2])\nlabels = dataset.target_names\n\n# <help:scikit_roc>\n# add noise to the features so the plot is less ideal\n# REMOVE ME if you use your own dataset!\nrandom_state = np.random.RandomState(0)\nn_samples, n_features = X.shape\nX = np.c_[X, random_state.randn(n_samples, 200 * n_features)]\n\n# <help:scikit_roc>\n# split data for cross-validation\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)\n\n# classify instances into more than two classes, one vs rest\n# add param to create probabilities to determine Y or N as the classification\nclf = OneVsRestClassifier(SVC(kernel='linear', probability=True))\n\n# fit estiamators and return the distance of each sample from the decision boundary\ny_score = clf.fit(X_train, y_train).decision_function(X_test)\n\n# <help:scikit_roc>\n# plot the ROC curve, best for it to be in top left corner\nplt.figure(figsize=(10,5))\nplt.plot([0, 1], [0, 1], 'k--') # add a straight line representing a random model \nfor i, label in enumerate(labels):\n # false positive and true positive rate for each class\n fpr, tpr, _ = metrics.roc_curve(y_test[:, i], y_score[:, i])\n # area under the curve (auc) for each class\n roc_auc = metrics.auc(fpr, tpr)\n plt.plot(fpr, tpr, label='ROC curve of {0} (area = {1:0.2f})'.format(label, roc_auc))\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.05])\nplt.title('Receiver Operating Characteristic for Iris data set')\nplt.xlabel('False Positive Rate') # 1- specificity\nplt.ylabel('True Positive Rate') # sensitivity\nplt.legend(loc=\"lower right\")\nplt.show()", "Build a Transformation and Classification Pipeline\nThis recipe builds a transformation and training pipeline for a model that can classify a snippet of text as belonging to one of 20 USENET newgroups. It then prints the precision, recall, and F1-score for predictions over a held-out test set as well as the confusion matrix.\nThis recipe defaults to using the 20 USENET newsgroup dataset. To use your own data, set X to your instance feature vectors, y to the instance classes as a factor, and labels to human-readable names of the classes. 
Then modify the pipeline components to perform appropriate transformations for your data.\n<div class=\"alert alert-block alert-warning\" style=\"margin-top: 20px\">**Warning:** Running this recipe with the sample data may consume a significant amount of memory.</div>", "# <help:scikit_pipeline>\nimport pandas\nimport sklearn.metrics as metrics\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer\nfrom sklearn.feature_extraction.text import HashingVectorizer\nfrom sklearn.linear_model import Perceptron\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.datasets import fetch_20newsgroups\n\n# download the newsgroup dataset\ndataset = fetch_20newsgroups('all')\n\n# define feature vectors (X) and target (y) \nX = dataset.data\ny = dataset.target\nlabels = dataset.target_names\nlabels\n\n# <help:scikit_pipeline>\n# split data holding out 30% for testing the classifier\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)\n\n# pipelines concatenate functions serially, output of 1 becomes input of 2\nclf = Pipeline([\n ('vect', HashingVectorizer(analyzer='word', ngram_range=(1,3))), # count frequency of words, using hashing trick\n ('tfidf', TfidfTransformer()), # transform counts to tf-idf values,\n ('clf', SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3, n_iter=5))\n])\n\n# <help:scikit_pipeline>\n# train the model and predict the test set\ny_pred = clf.fit(X_train, y_train).predict(X_test)\n\n# standard information retrieval metrics\nprint metrics.classification_report(y_test, y_pred, target_names=labels)\n\n# <help:scikit_pipeline>\n# show the confusion matrix in a labeled dataframe for ease of viewing\nindex_labels = ['{} {}'.format(i, l) for i, l in enumerate(labels)]\npandas.DataFrame(metrics.confusion_matrix(y_test,y_pred), index=index_labels)", "<div class=\"alert\" style=\"border: 1px solid #aaa; background: radial-gradient(ellipse at center, #ffffff 50%, #eee 100%);\">\n<div class=\"row\">\n <div class=\"col-sm-1\"><img src=\"https://knowledgeanyhow.org/static/images/favicon_32x32.png\" style=\"margin-top: -6px\"/></div>\n <div class=\"col-sm-11\">This notebook was created using <a href=\"https://knowledgeanyhow.org\">IBM Knowledge Anyhow Workbench</a>. To learn more, visit us at <a href=\"https://knowledgeanyhow.org\">https://knowledgeanyhow.org</a>.</div>\n </div>\n</div>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gcgruen/homework
data-databases-homework/.ipynb_checkpoints/Homework_3_Gruen-checkpoint.ipynb
mit
[ "Homework assignment #3\nThese problem sets focus on using the Beautiful Soup library to scrape web pages.\nProblem Set #1: Basic scraping\nI've made a web page for you to scrape. It's available here. The page concerns the catalog of a famous widget company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called html_str that contains the HTML source code of the page, and a variable document that stores a Beautiful Soup object.", "from bs4 import BeautifulSoup\nfrom urllib.request import urlopen\nhtml_str = urlopen(\"http://static.decontextualize.com/widgets2016.html\").read()\ndocument = BeautifulSoup(html_str, \"html.parser\")", "Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of &lt;h3&gt; tags contained in widgets2016.html.", "h3_tags = document.find_all('h3')\n\nh3_tags_count = 0\nfor tag in h3_tags:\n h3_tags_count = h3_tags_count + 1\nprint(h3_tags_count)", "Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the \"Widget Catalog\" header.", "#inspecting webpace with help of developer tools -- shows infomation is stored in an a tag that has the class 'tel'\na_tags = document.find_all('a', {'class':'tel'})\n\nfor tag in a_tags:\n print(tag.string)\n\n#Does not return the same: [tag.string for tag in a_tags]", "In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order):\nSkinner Widget\nWidget For Furtiveness\nWidget For Strawman\nJittery Widget\nSilver Widget\nDivided Widget\nManicurist Widget\nInfinite Widget\nYellow-Tipped Widget\nUnshakable Widget\nSelf-Knowledge Widget\nWidget For Cinema", "search_table = document.find_all('table',{'class': 'widgetlist'})\n#print(search_table)\n\ntables_content = [table('td', {'class':'wname'}) for table in search_table]\n#print(tables_content)\n\nfor table in tables_content:\n for single_table in table:\n print(single_table.string)", "Problem set #2: Widget dictionaries\nFor this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called widgets. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be partno, wname, price, and quantity, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this:\n[{'partno': 'C1-9476',\n 'price': '$2.70',\n 'quantity': u'512',\n 'wname': 'Skinner Widget'},\n {'partno': 'JDJ-32/V',\n 'price': '$9.36',\n 'quantity': '967',\n 'wname': u'Widget For Furtiveness'},\n ...several items omitted...\n {'partno': '5B-941/F',\n 'price': '$13.26',\n 'quantity': '919',\n 'wname': 'Widget For Cinema'}]\nAnd this expression:\nwidgets[5]['partno']\n\n... 
should evaluate to:\nLH-74/O", "widgets = []\n\n#STEP 1: Find all tr tags, because that's what tds are grouped by\nfor tr_tags in document.find_all('tr', {'class': 'winfo'}):\n#STEP 2: For each tr_tag in tr_tags, make a dict of its td\n tr_dict ={}\n for td_tags in tr_tags.find_all('td'):\n td_tags_class = td_tags['class']\n for tag in td_tags_class:\n tr_dict[tag] = td_tags.string\n#STEP3: add dicts to list\n widgets.append(tr_dict)\nwidgets\n#widgets[5]['partno']", "In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this:\n[{'partno': 'C1-9476',\n 'price': 2.7,\n 'quantity': 512,\n 'widgetname': 'Skinner Widget'},\n {'partno': 'JDJ-32/V',\n 'price': 9.36,\n 'quantity': 967,\n 'widgetname': 'Widget For Furtiveness'},\n ... some items omitted ...\n {'partno': '5B-941/F',\n 'price': 13.26,\n 'quantity': 919,\n 'widgetname': 'Widget For Cinema'}]\n\n(Hint: Use the float() and int() functions. You may need to use string slices to convert the price field to a floating-point number.)", "#had to rename variables as it kept printing the ones from the cell above...\nwidgetsN = []\nfor trN_tags in document.find_all('tr', {'class': 'winfo'}):\n trN_dict ={}\n for tdN_tags in trN_tags.find_all('td'):\n tdN_tags_class = tdN_tags['class']\n for tagN in tdN_tags_class:\n if tagN == 'price':\n sliced_tag_string = tdN_tags.string[1:]\n trN_dict[tagN] = float(sliced_tag_string)\n elif tagN == 'quantity':\n trN_dict[tagN] = int(tdN_tags.string)\n else:\n trN_dict[tagN] = tdN_tags.string\n widgetsN.append(trN_dict)\nwidgetsN", "Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.\nExpected output: 7928", "widget_quantity_list = [element['quantity'] for element in widgetsN]\nsum(widget_quantity_list)", "In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.\nExpected output:\nWidget For Furtiveness\nJittery Widget\nSilver Widget\nInfinite Widget\nWidget For Cinema", "for widget in widgetsN:\n if widget['price'] > 9.30:\n print(widget['wname'])", "Problem set #3: Sibling rivalries\nIn the following problem set, you will yet again be working with the data in widgets2016.html. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's .find_next_sibling() method. Here's some information about that method, cribbed from the notes:\nOften, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using .find() and .find_all(), and the tags also aren't in a parent-child relationship. This can be tricky! For example, take the following HTML snippet, (which I've assigned to a string called example_html):", "example_html = \"\"\"\n<h2>Camembert</h2>\n<p>A soft cheese made in the Camembert region of France.</p>\n\n<h2>Cheddar</h2>\n<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>\n\"\"\"", "If our task was to create a dictionary that maps the name of the cheese to the description that follows in the &lt;p&gt; tag directly afterward, we'd be out of luck. 
Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above:", "example_doc = BeautifulSoup(example_html, \"html.parser\")\ncheese_dict = {}\nfor h2_tag in example_doc.find_all('h2'):\n cheese_name = h2_tag.string\n cheese_desc_tag = h2_tag.find_next_sibling('p')\n cheese_dict[cheese_name] = cheese_desc_tag.string\n\ncheese_dict", "With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header \"Hallowed Widgets.\"\nExpected output:\nMZ-556/B\nQV-730\nT1-9731\n5B-941/F", "for h3_tags in document.find_all('h3'):\n if h3_tags.string == 'Hallowed widgets':\n hallowed_table = h3_tags.find_next_sibling('table')\n for element in hallowed_table.find_all('td', {'class':'partno'}):\n print(element.string)", "Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!\nIn the cell below, I've created a variable category_counts and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are \"categories\" of widgets (e.g., the contents of the &lt;h3&gt; tags on the page: \"Forensic Widgets\", \"Mood widgets\", \"Hallowed Widgets\") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary category_counts should look like this:\n{'Forensic Widgets': 3,\n 'Hallowed widgets': 4,\n 'Mood widgets': 2,\n 'Wondrous widgets': 3}", "category_counts = {}\n\nfor x_tags in document.find_all('h3'):\n x_table = x_tags.find_next_sibling('table')\n tr_info_tags = x_table.find_all('tr', {'class':'winfo'})\n category_counts[x_tags.string] = len(tr_info_tags)\n \ncategory_counts", "Congratulations! You're done." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]