Columns: repo_name (string), path (string), license (string class, 15 values), cells (sequence), types (sequence)
Evfro/polara
examples/Reproducing_EIGENREC_results.ipynb
mit
[ "Introduction\nThe main goal of this tutorial is to demonstrate how one can use Polara to conduct custom experiments with very specific requirements. Needless to say, it can be useful for reproducing someone's research as well. Below you'll find a particular example based on one of my favorite papers called \"EIGENREC: generalizing PureSVD for effective and efficient top-N recommendations\", which helped me to see standard SVD-based models in a differrent light and even led me to my own discoveries. Even though it's not necessary for understanding the material below, I strongly recommend to read the original paper as it builds on top of clear ideas and contains a very thorough analysis.\nThe key take home message from this paper for me personally is that SVD can be viewed as a particular case of a more general eigendecomposition problem of the scaled similarity matrix. Based on that insight, the authors of the EigenRec model propose several modifications, which involve tuning the scaling factor as well as the similarity measure in order to improve the model's performance.\n<div class=\"alert alert-block alert-info\">In this tutorial we are not going to reproduce the full work and will focus only on some of its easy-to-implement parts. Basically, we will alter only the scaling factor to see how it affects the quality of recommendations.</div>\n\nNevertheless, the tutorial allows to verify validity of the proposed ideas and creates a convenient playground for further exploration.\nData preparation\nGetting Movielens-1M data\nAs in the previous tutorials, let's download the Movielens dataset. The task should be already familiar to you. This is one of the datasets used in the paper as well. One could use some other datasets, it wouldn't change anything in later parts of the tutorial. The main requirement is to have it in the form of a Pandas dataframe, similarly to what is returned by the get_movielens_data function.", "from polara import RecommenderData\nfrom polara import get_movielens_data\n\ndata = get_movielens_data() # will automatically download it\n# alternatively you can specify a path to the local copy as an argument to the function\ndata.head()\n\ndata_model = RecommenderData(data, 'userid', 'movieid', 'rating', seed=0)", "Custom experimental setup with item sampling\nThe EigenRec paper follows a specific experimentation setup, mainly based on the settings, proposed in my another favorite paper Performance of recommender algorithms on top-n recommendation tasks, devoted to the PureSVD model itself. For evaluation purposes, the authors sample 1.4% of all available ratings and additionally shrink the resulting sample by leaving 5-star ratings only. Quote from the paper (Section 4.2.1): \n<div class=\"alert alert-block alert-info\">\"...we form a probeset $\\mathcal{P}$ by randomly sampling 1.4% of the ratings of the dataset, and we use each item $v_j$, rated with 5-star by user $u_i$ in $\\mathcal{P}$ to create the test set $\\mathcal{T}$...\"</div>\n\nThis setup can be easily implemented in Polara with the help of test_ratio and holdout_size parameters of the RecommendeData instance. It requires a two-step preparation procedure.\nThe first step is to sample data without filtering top-rated items. 
The following configuration does the thing:", "data_model.test_ratio = 0 # do not split dataset into folds, use entire dataset for sampling\ndata_model.holdout_size = 0.014 # sample this fraction of ratings from data\ndata_model.random_holdout = True # sample ratings randomly (not just 5-star)\ndata_model.warm_start = False # allow test users to be part of the training (excluding holdout items)\ndata_model.prepare() # perform sampling", "Mind the test_ratio parameter setting. Together with the test_fold parameter it controls, which fraction of the dataset to sample from; 0 means the whole dataset and turns off data splitting mechanism used by Polara for cross-validation. The value of test_fold has no effect in that case. Also note that by default Polara performs some additional manipulations with data like cleaning and reindexing to transform it into a uniform internal representation for further use. Key actions and their results are reported in an output text, which can be turned off by setting data_model.verbose = False. Here's how to see the final result of sampling:", "data_model.test.holdout.head()", "The second step is to leave only items with rating 5, as it was done in the original paper. The easiest way in our case would be to simply run:\npython\ndata_model.test.holdout.query('rating==5', inplace=True)\nHowever, in general, you shouldn't manually change the data after it was processed by Polara, as it may break some internal logic. A more appropriate and a safier way to achieve the same is to use the set_test_data method, specifically designed to cover custom configurations:", "data_model.set_test_data(\n holdout=data_model.test.holdout.query('rating==5'), # select only 5-star ratings\n warm_start=data_model.warm_start, \n reindex=False, # avoid reindexing users and items second time\n ensure_consistency=False # do not try to filter out unseen entities (already excluded)\n # leaving it as True wouldn't change the result but would lead to extra checks\n)", "Note that we reuse the previously sampled holdout dataset (the $\\mathcal{P}$ dataset in the authors' notation), which is already reindexed by Polara's built-in data pre-processing procedure. In order not to loose the index mapping between internal and external representation of movies and users (stored in the data_model.index attribute) it's very important to set reindex argument of the set_test_data method to False. Now the data_model.test.holdout dataframe stores the final result, namely the $\\mathcal{T}$ dataset:", "data_model.test.holdout.head()", "Scaled SVD-based model\nIn the simplest case of the EigenRec model, when only the scaling factor is changed, we can go with a very straightforward approach. Instead of computing similarity matrices and solving an eigendecomposition problem, it is sufficient to apply standard SVD to a scaled rating matrix $\\tilde R$: \n$$\n\\tilde R = R \\, S^{d-1} \\approx U\\Sigma V^T,\n$$ \nwhere $R$ is an $M \\times N$ rating matrix, $S = \\text{diag}{\\|r_1\\|_2, \\dots, \\|r_N\\|_2}^d$ is a diagonal scaling matrix with its non-zero values depending on a scaling parameter $d$ and $r_i$ denotes an $i$-th column of $R$. Note that due to the orthogonality of columns in the SVD factors the approximation of $\\tilde R$ can be written in an equivalent and more convenient form $\\tilde RVV^T$, which can be used to generate recommendations.\nScaling input data\nIn order to calculate the scaled version of the PureSVD approach we can reuse the SVDModel class implemented in Polara. 
One of the ways to do that is to redefine the build method in an SVDModel's subclass. A simpler solution, however, is to directly modify an output of the get_training_matrix method, which is generally available for all models in Polara and is used internally in the SVDModel in particular. This method returns the rating matrix in a sparse format, which is then fed into the scipy's truncated SVD implementation within the build method (you can run the SVDModel.build?? command with double question mark to see it). Assuming we already have sparse rating matrix, the following function will help to scale it:", "from scipy.sparse import diags\nfrom scipy.sparse.linalg import norm as spnorm\n\ndef sparse_normalize(matrix, scaling, axis):\n '''Function to scale either rows or columns of the sparse rating matrix'''\n if scaling == 1: # no scaling (standard SVD case)\n return matrix\n \n norm = spnorm(matrix, axis=axis, ord=2) # compute Euclidean norm of rows or columns\n scaling_matrix = diags(np.power(norm, scaling-1, where=norm!=0))\n \n if axis == 0: # scale columns\n return matrix.dot(scaling_matrix)\n if axis == 1: # scale rows\n return scaling_matrix.dot(matrix)", "Sampling random items for evaluation\nSomewhat more involved modifications are required to generate model predictions, as it's based on an additional sampling of items not previously seen by the test users. Quote from the paper (Section 4.2.1): \n<div class=\"alert alert-block alert-info\">\"For each item in $\\mathcal{T}$, we randomly select another 1000 unrated items of the same user...\"</div>\n\nThis means that we need to generate prediction scores for 1000 randomly selected unseen items in addition to every item from the holdout. Moreover, every set of 1001 items is treated independently of the user it belongs to. Normally, Polara performs evaluation on a per user basis; however, in this case the logic is different and we have to take care of users with mulltiple items in the holdout. From the line below it can be clearly seen that some test users can have up to 8 items:", "data_model.test.holdout.userid.value_counts().max()", "In order to \"flatten\" the holdout dataset and to independently generate prediction scores for every holdout item (and 1000 of additionally sampled items) we will customize the get_recommendations method of the SVDModel class. Below is the support function, that helps to achieve the necessary result. 
It iterates over all holdout items, randomly samples a predefined amount of previously unrated items and generates prediction scores for them:", "def sample_scores_flat(useridx, itemidx, seen_data, all_items, user_factors, item_factors, sample_size=1000, random_state=None):\n '''Function to randomly sample unrated items and generate prediction scores for them.'''\n scores = []\n for user, items in itemidx.groupby(useridx): # iterate over every test user and get all user items\n seen_items = seen_data[1][seen_data[0]==user].tolist() # list of the previously rated items of the user\n seen_items.extend(items.tolist()) # take holdout items into account as well\n item_pool = all_items[~all_items.isin(seen_items)] # exclude seen items from all available items\n for item in items:\n sampled_items = item_pool.sample(n=sample_size, random_state=random_state) \n scores.append(item_factors[sampled_items.values, :].dot(user_factors[user, :]))\n return scores", "<div class=\"alert alert-block alert-warning\">Prediction scores are generated similarly to the standard *PureSVD* model by an orthogonal projection of a vector $r$ of user ratings onto the latent feature space, defined by the formula $VV^Tr$. Note that unlike the model computation phase, no scaling is used in the prediction.</div>\n\nThe code above complies with this definition by expecting user_factors to be the product $V^Tr$ for a set of test users and item_factors to be $V$ itself. Below you can find a full implementation of our new model.\nDefining the model", "import numpy as np\nimport pandas as pd\nfrom polara import SVDModel\n\nclass ScaledSVD(SVDModel):\n '''Class that adds scaling functionality to the PureSVD model'''\n \n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.col_scaling = 1 # scaling parameted d, initially corresponds to PureSVD\n self.n_rnd_items = 1000 # number of randomly sampled items\n self.seed = 0 # to control randomization\n self.method = 'ScaledSVD'\n \n def get_training_matrix(self, *args, **kwargs):\n svd_matrix = super().get_training_matrix(*args, **kwargs) # get sparse rating matrix\n return sparse_normalize(svd_matrix, self.col_scaling, 0)\n \n def get_recommendations(self):\n holdout = self.data.test.holdout\n itemid = self.data.fields.itemid # \"movieid\" in the case of Movielense dataset\n userid = self.data.fields.userid # \"userid\" in the case of Movielense dataset\n \n itemidx = holdout[itemid] # holdout items of the test users\n useridx = pd.factorize(holdout[userid])[0] # have to \"rebase\" user index;\n # necessary for indexing rows of the matrix with test user ratings\n \n # prediction scores for holdout items\n test_matrix, seen_data = self.get_test_matrix() \n item_factors = self.factors[itemid] # right singular vectors, matrix V\n user_factors = test_matrix.dot(item_factors) # similarly to PCA\n holdout_scores = (\n user_factors[useridx, :] * item_factors[itemidx.values, :]\n ).sum(axis=1).squeeze()\n \n # scores for randomly sampled unseen items\n all_items = self.data.index.itemid.new # all unique (reindexed) items\n rs = np.random.RandomState(self.seed) # fixing random state to control random output\n sampled_scores = sample_scores_flat(\n useridx, itemidx,\n seen_data, all_items,\n user_factors, item_factors,\n self.n_rnd_items, random_state=rs\n )\n \n # combine all scores and rank selected items \n scores = np.concatenate( # stack into array with 1001 columns\n (holdout_scores[:, None], sampled_scores), axis=1\n )\n rankings = 
np.apply_along_axis(np.argsort, 1, -scores)\n return rankings", "The model is ready and can be used in a standard way:", "svd = ScaledSVD(data_model) # create model\nsvd.rank = 50\nsvd.col_scaling = 0.5\nsvd.build() # fit model", "Now, when we have our model computed, its time to evaluate it. However, we cannot use the built-in evaluation routine. Normally, the number of test users is equal to the number of rows in recommendations array and that's the logic Polara relies on. In our case the number of test users is lower than the number of rows in recommendations array and actually corresponds to the total number of ratings in the holdout:", "# if you run the cell for the first time you'll notice a short delay before print output due to calculation of recommendations\nprint('# of test users:', data_model.test.holdout.userid.nunique())\nprint('# of rows and columns in recommendations array:', svd.recommendations.shape)\nprint('# of ratinhgs in the holdout:', data_model.test.holdout.shape[0])", "We will fix this inconsistency in the next section.\nWorth noting here that Polara implements a unified system of callbacks, which reset the svd.recommendations property whenever either the data_model or the model itself are changed in a way that affects the models' output (try, for example, call svd.recommendations, then set the rank of the model to some higher value and call svd.recommendations again). This mechanism helps to ensure predictable and consistent state and to prevent accidental reuse of the cached results during experiments. \nIt can also be extended with user-defined triggers, which is probably the topic for another tutorial.\nModel evaluation\nSimple approach\nWhen you try to evaluate your model, it calls for the model.recommendations property which is automatically filled with the result of the get_recommendations method. The simplest way to evaluate the result in accordance with the new structure of the recommendations array is to define a small function as shown below:", "def evaluate_mrr(model):\n '''Function to calculate MRR score.'''\n is_holdout = model.recommendations==0 # holdout items are always in the first column before sorting\n pos = np.where(is_holdout)[1] + 1.0 # position of holdout items (indexing starts from 0, so adding 1) \n mrr = np.reciprocal(pos).mean() # mean reciprocal rank\n return mrr", "Finally, to compute the MRR score, as it is done in the original paper, simply run:", "evaluate_mrr(svd)", "More functional approach\nWhile the previously described approach is fully working and easy, in some cases you may want to use the built-in model.evaluate method, as it provides additional functionality. It is also useful to see how Polara can be customized to serve specific needs. The key ingredient here is the control of the type of entities that are recommended. By default, Polara expects items to be recommended to users and looks for the corresponding fields in the test data. These fields are defined via data_model.fields.userid and data_model.fields.itemid attributes respectively. The default behavior, however, can be redefined at the model level be setting model._prediction_key (users by default) and model._prediction_target (items by default) attributes to custom values. This scheme, for example, can be utilized in cold start experiments, where the task is to find users potentially interested in a \"cold\" item instead of recommending items to users (see polara.recommender.coldstart for implementation details). 
The following lines show how to change the default settings for our needs:", "svd._prediction_key = 'xuser'\nsvd._prediction_target = 'xitem'", "Now, we need to specify the corresponding fields in the holdout data. Recall that our goal is to treat every item in the holdout independently of the user or, in other words, to assign every item to a unique \"virtual\" user ('xuser'). Furthermore, by construction, prediction scores for holdout items are located in the first column of the recommendations array. This means that every holdout item ('xitem') should have index 0. Here's the necessary modification:", "data_model.test.holdout['xuser'] = np.arange(data_model.test.holdout.shape[0]) # number of rated items defines the range\ndata_model.test.holdout['xitem'] = 0", "Let's check that the result is the same (up to a small rounding error due to different calculation schemes):", "svd.evaluate('ranking', simple_rates=True) # `simple_rates` is used to enforce calculation of MRR", "<div class=\"alert alert-block alert-warning\">If you'll do the math you'll see that the whole experiment took under 100 lines of code to program, and the most part of it was pretty standard (i.e., declaring variables and methods).</div>\n\nLess lines of code typically means less risks for having bugs or inconsistencies. By following a certain protocol, Polara provides a high-level interface that abstracts many technical aspects allowing to focus on the most important parts of research.\nReproducing the results\nThe next task is to repeat experiments from the EigenRec paper, where the authors compute\n<div class=\"alert alert-block alert-info\">\"...MRR scores as a function of the parameter $d$ for every case, using the number of latent factors that produces the best possible performance for each matrix.\"</div>\n\nGrid search\nThe beauty of SVD-based models is that it is much easier to perform grid-search for finding optimal values of hyper-parameters. Once you have computed a model for a certain set of hyper parameters with some rank value $k$, you can quickly find all other models of rank \"k' < k\" without recomputing SVD.\n<div class=\"alert alert-block alert-info\">Going from larger values of rank to smaller ones is performed by a simple truncation of the latent factor matrix.</div>\n\nThis not only allows to perform experiments faster, but also simplifies the code for it. Moreover, SVDModel already has the necessary rank-check procedures, which allow to avoid rebuilding the model when user sets a smaller value of rank. No special actions are required here. 
Below is the code that implements the grid search experiment, taking that feature into account (note that on a moderate hardware the code will run for approximately half an hour):", "try:\n from ipypb import track\nexcept ImportError:\n from tqdm import tqdm_notebook as track\n%matplotlib inline\n\nsvd_mrr_flat = {} # will stor results here\nsvd.verbose = False\n\nmax_rank = 150\nscaling_params = np.arange(-20, 21, 2) / 10 # values of d from -2 to 2 with step 0.2\nsvd_ranks = range(10, max_rank+1, 10) # ranks from 10 to max_ranks with step 10\n\nfor scaling in track(scaling_params):\n svd.col_scaling = scaling\n svd.rank = max_rank\n svd.build()\n \n for rank in list(reversed(svd_ranks)): # iterating over rank values in a descending order\n svd.rank = rank # allows to truncate factor matrices without recomputing SVD\n svd_mrr_flat[(scaling, rank)] = svd.evaluate('ranking', simple_rates=True).mrr", "Results\nNow we have the results of the grid search stored in the svd_mrr_flat dictionary. There's one catch that wasn't clear for me at first:\n<div class=\"alert alert-block alert-warning\">in order to show the effect of parameter $d$ the authors have fixed the value of rank corresponding to the best result achieved with EigenRec.</div>\n\nThis means that the curve on Figure 1 in the original paper is obtained with a fixed value of rank, corresponding to the optimal point at the top of the curve, and all other points are obtained by only changing the scaling factor. Here's one way to draw it:", "result_flat = pd.Series(svd_mrr_flat)\nbest_d, best_rank = result_flat.idxmax()\nbest_d, best_rank\n\nresult_flat.xs(best_rank, axis=0, level=1).plot(label='fixed rank', legend=True, title='MRR',\n figsize=(4.3, 2), ylim=(0, None), xlim=(-2, 2), grid=True);", "Comparing this picture to the bottom left graph of Figure 1 in the original paper leads to a satisfactory conclusion that the curves on the graphs are very close. Of course, there are slight differences; however, there are many factors that may affect it, like data sampling and unrated items randomization. It would be a good idea to repeat the experiment with different seed values and draw a confidence region around the curve. However, there are no drammatic differences in the general behavior of the curves, which is a very nice result that didn't take too much efforts. Here are some top-score configurations from the experiment:", "result_flat.sort_values(ascending=False).head()", "A bit of exploration\nThe difference between the best result achieved with the EigenRec approach and the standard PureSVD result (that corresponds to the point with scaling parameter equal to 1) is quite large. However, such a comparison is a bit unfair as the restriction on having a fixed value of rank is artificial. We can draw another curve that corresponds to optimal values of both scaling parameter and rank of the decomposition:", "result_flat.groupby(level=0).max().plot(label='optimal rank', legend=True)\nresult_flat.xs(best_rank, axis=0, level=1).plot(label='fixed rank', legend=True, title='MRR',\n figsize=(4.3, 2), ylim=(0, None), xlim=(-2, 2), grid=True);", "Now the difference is less pronounced. Anyway, the EigenRec approach still performs better. Moreover, the difference vary significantly from dataset to dataset and in some cases that difference can be much more noticeable. Another degree of freedom here, which may increase the top score, is the maximum value of rank used in the grid search. We have manually set it to be 150. 
Let's look which values of rank were used at each point of the curve:", "ax = result_flat.groupby(level=0).idxmax().str[1].plot(label='optimal rank value', ls=\":\", legend=True,\n secondary_y=True, c='g')\nresult_flat.groupby(level=0).max().plot(label='optimal rank experiment', legend=True)\nresult_flat.xs(best_rank, axis=0, level=1).plot(label='fixed rank experiment', legend=True, title='MRR',\n figsize=(4.3, 2), ylim=(0, None), xlim=(-2, 2), grid=True);", "Clearly, the values are capped on the left half of the graph, which leaves the room for further improvement. I have performed experiments with a higher threshold and was able to achieve a higher top score with a bit lower value of the scaling parameter. If you want to see this, simply rerun the grid search experiment with a higher value of the max_rank variable. Be prepared that it will take a longer time (hint: reduce the search space for the scaling parameter). Anyway, the key conclusion doesn't change - even a simple scaling factor can be advantageous and allows to outperform the standard model. This conclusion is supported by many other experiments in the original paper, which we haven't run here.\nBonus: Double-scaled SVD\nThe main change in the model is basically enclosed in the following line\npython\nscaled_matrix = sparse_normalize(svd_matrix, self.col_scaling, 0)\nwhich scales columns of the rating matrix. However, there's nothing really special about this particular type of scaling and we could also scale rows instead of or in addition to that. Scaling rows would help to control the contribution of users with either too high or too low number of rated items. The entire code for defining the double-scaled model is listed below:\n```python\nclass DScaledSVD(ScaledSVD):\n def init(self, args, kwargs):\n super().init(args, **kwargs)\n self.row_scaling = 1 # PureSVD config\n self.method = 'DScaledSVD'\ndef get_training_matrix(self, *args, **kwargs):\n svd_matrix = super().get_training_matrix(*args, **kwargs)\n return sparse_normalize(svd_matrix, self.row_scaling, 1)\n\n```\nNote that running the grid search experiment with two scaling parameters, row_scaling and col_scaling, will take more time. In my experiments with Movielense data and the value of rank limited by 150 from the above there was a very weak improvement, which wasn't worth the efforts. This however, may change with higher values of ranks or at least with another data. I'll leave verifying this for the reader.\nInstead of conclusion\nBeing a researcher myself, I'm often involved in some sort of \"redoing\" the work that was already done by someone else. Leaving the reproducibility aspect aside, there are many other reasons why it can be useful, e.g., it may help to understand presented ideas better or to see if there're any subtle moments in an original work that are not evident at first glance. It helps to create a playground ready for further exploration and may even lead to new ideas. I hope that Polara will help you with this as it helps me in my own research." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
laserson/phip-stat
notebooks/aligners/Aligners.ipynb
apache-2.0
[ "Comparison of bowtie, bowtie2, and kallisto", "import pandas as pd\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n%matplotlib inline", "Data preparation\nWe will use some of Ben's Sjorgrens data for this. We will generate a random sample of 1 million reads from the full data set.\nPrepare data with Snakemake\nbash\nsnakemake -s aligners.snakefile\nIt appears that kallisto needs at least 51 bases of the reference to successfully align most of the reads. Must be some kind of off-by-one issue with the data structures.\nLoad alignments", "names = ['QNAME', 'FLAG', 'RNAME', 'POS', 'MAPQ', 'CIGAR', 'RNEXT', 'PNEXT', 'TLEN', 'SEQ', 'QUAL']\n\nbowtie_alns = pd.read_csv('alns/bowtie-51mer.aln', sep='\\t', header=None, usecols=list(range(11)), names=names)\nbowtie2_alns = pd.read_csv('alns/bowtie2-51mer.aln', sep='\\t', header=None, usecols=list(range(11)), names=names)\nkallisto_alns = pd.read_csv('alns/kallisto-51mer.sam', sep='\\t', header=None, usecols=list(range(11)), names=names, comment='@')\n\n(bowtie_alns.RNAME != '*').sum() / len(bowtie_alns)\n\n(bowtie2_alns.RNAME != '*').sum() / len(bowtie2_alns)\n\n(kallisto_alns.RNAME != '*').sum() / len(kallisto_alns)", "Bowtie2 vs kallisto", "bt2_k_joined = pd.merge(bowtie2_alns, kallisto_alns, how='inner', on='QNAME', suffixes=['_bt2', '_k'])", "How many reads do bowtie2 and kallisto agree on?", "(bt2_k_joined.RNAME_bt2 == bt2_k_joined.RNAME_k).sum()\n\nFor the minority of reads they disagree on, what do they look like?", "For the minority of reads they disagree on, what do they look like", "bt2_k_joined[bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k].RNAME_k", "Mostly lower sensitivity of kallisto due to indels in the read. Specifically, out of", "(bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k).sum()", "discordant reads, the number where kallisto failed to map is", "(bt2_k_joined[bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k].RNAME_k == '*').sum()", "or as a fraction", "(bt2_k_joined[bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k].RNAME_k == '*').sum() / (bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k).sum()", "Are there any cases where bowtie2 fails to align", "(bt2_k_joined[bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k].RNAME_bt2 == '*').sum()", "Which means there are no cases where bowtie and kallisto align to different peptides.", "((bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k) & (bt2_k_joined.RNAME_bt2 != '*') & (bt2_k_joined.RNAME_k != '*')).sum()", "What do examples look like of kallisto aligning and bowtie2 not?", "bt2_k_joined[(bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k) & (bt2_k_joined.RNAME_bt2 == '*')]", "Looks like there is a perfect match to a prefix and the latter part of the read doesn't match\n```\nread AAATCCACCATTGTGAAGCAGATGAAGATCATTCATGGTTACTCAGAGCA\nref AAATCCACCATTGTGAAGCAGATGAAGATCATTCATAAAAATGGTTACTCA\nread GGTCCTCACGCCGCCCGCGTTCGCGGGTTGGCATTACAATCCGCTTTCCA\nref GGTCCTCACGCCGCCCGCGTTCGCGGGTTGGCATTCCTCCCACACCAGACT\n```\nBowtie vs kallisto", "bt_k_joined = pd.merge(bowtie_alns, kallisto_alns, how='inner', on='QNAME', suffixes=['_bt', '_k'])", "How many reads do bowtie and kallisto agree on?", "(bt_k_joined.RNAME_bt == bt_k_joined.RNAME_k).sum()", "For the minority of reads they disagree on, what do they look like", "bt_k_joined[bt_k_joined.RNAME_bt != bt_k_joined.RNAME_k][['RNAME_bt', 'RNAME_k']]", "Looks like many disagreeents, but probably still few disagreements on a positive mapping.", "(bt_k_joined.RNAME_bt != bt_k_joined.RNAME_k).sum()", "discordant reads, 
the number where kallisto failed to map is", "(bt_k_joined[bt_k_joined.RNAME_bt != bt_k_joined.RNAME_k].RNAME_k == '*').sum()", "and the number where bowtie failed is", "(bt_k_joined[bt_k_joined.RNAME_bt != bt_k_joined.RNAME_k].RNAME_bt == '*').sum()", "which means there are no disagreements on mapping. kallisto appears to be somewhat higher sensitivity.\nQuantitation", "bowtie_counts = pd.read_csv('counts/bowtie-51mer.tsv', sep='\\t', header=0, names=['id', 'input', 'output'])\nbowtie2_counts = pd.read_csv('counts/bowtie2-51mer.tsv', sep='\\t', header=0, names=['id', 'input', 'output'])\nkallisto_counts = pd.read_csv('counts/kallisto-51mer.tsv', sep='\\t', header=0)\n\nfig, ax = plt.subplots()\n_ = ax.hist(bowtie_counts.output, bins=100, log=True)\n_ = ax.set(title='bowtie')\n\nfig, ax = plt.subplots()\n_ = ax.hist(bowtie2_counts.output, bins=100, log=True)\n_ = ax.set(title='bowtie2')\n\nfig, ax = plt.subplots()\n_ = ax.hist(kallisto_counts.est_counts, bins=100, log=True)\n_ = ax.set(title='kallisto')\n\nbt2_k_counts = pd.merge(bowtie2_counts, kallisto_counts, how='inner', left_on='id', right_on='target_id')\n\nfig, ax = plt.subplots()\nax.scatter(bt2_k_counts.output, bt2_k_counts.est_counts)\n\nsp.stats.pearsonr(bt2_k_counts.output, bt2_k_counts.est_counts)\n\nsp.stats.spearmanr(bt2_k_counts.output, bt2_k_counts.est_counts)", "Otherwise, the kallisto index is about 3x bigger than the bowtie indices, but kallisto (5.7 s single-threaded) is about 3.5x faster than bowtie2 (20 s) and 7.3x faster than bowtie (42 s; though still appears to be using 2 threads).\nNote: it appears that kallisto needs a few extra bases on the reference to achieve its sensitivity. Performed an analysis like so:\nLooked at discordant cells according to Ben.\n```python\ncpm = pd.read_csv('cpm.tsv', sep='\\t', index_col=0, header=0)\nmlxp = pd.read_csv('mlxp.tsv', sep='\\t', index_col=0, header=0)\nbeadsonlycols = list(filter(lambda c: 'BEADS_ONLY' in c, mlxp.columns))\nsamples = ['Sjogrens.serum.Sjogrens.FS08-01647.20A20G.1']\noligo1 = list(filter(lambda c: 'hblOligo32108' in c, mlxp.index))[0] # hit for Ben\noligo2 = list(filter(lambda c: 'hblOligo223219' in c, mlxp.index))[0] # null for Ben\noligos = [oligo1, oligo2]\nprint(cpm[beadsonlycols + samples].loc[oligos].to_csv(sep='\\t'))\nprint(mlxp[beadsonlycols + samples].loc[oligos].to_csv(sep='\\t'))\n```\nBuilt some indices of different sizes\npython\nfrom Bio import SeqIO\nk = 60\noutput = f'reference{k}.fasta'\nwith open(output, 'w') as op:\n for sr in SeqIO.parse('/Users/laserson/repos/phage_libraries_private/human90/human90-ref.fasta', 'fasta'):\n print(sr[:k].format('fasta'), end='', file=op)\n```bash\nkallisto index -i human90-50.idx reference50.fasta\nkallisto index -i human90-51.idx reference51.fasta\nkallisto index -i human90-52.idx reference52.fasta\nkallisto index -i human90-55.idx reference55.fasta\nkallisto index -i human90-60.idx reference60.fasta\nkallisto quant -i human90-50.idx -o quant-50 --single -l 50 -s 0.1 --pseudobam Sjogrens.serum.Sjogrens.FS08-01647.20A20G.1.fastq.gz > aln-50.sam\nkallisto quant -i human90-51.idx -o quant-51 --single -l 50 -s 0.1 --pseudobam Sjogrens.serum.Sjogrens.FS08-01647.20A20G.1.fastq.gz > aln-51.sam\nkallisto quant -i human90-52.idx -o quant-52 --single -l 50 -s 0.1 --pseudobam Sjogrens.serum.Sjogrens.FS08-01647.20A20G.1.fastq.gz > aln-52.sam\nkallisto quant -i human90-55.idx -o quant-55 --single -l 50 -s 0.1 --pseudobam Sjogrens.serum.Sjogrens.FS08-01647.20A20G.1.fastq.gz > aln-55.sam\nkallisto quant -i 
human90-60.idx -o quant-60 --single -l 50 -s 0.1 --pseudobam Sjogrens.serum.Sjogrens.FS08-01647.20A20G.1.fastq.gz > aln-60.sam\n```\nGenerated the following numbers of alignments\n6,369 reads pseudoaligned\n1,419,515 reads pseudoaligned\n1,477,736 reads pseudoaligned\n1,490,788 reads pseudoaligned\n1,498,420 reads pseudoaligned\nBut looking at the results\nbash\ngrep hblOligo32108 quant-50/abundance.tsv\ngrep hblOligo32108 quant-51/abundance.tsv\ngrep hblOligo32108 quant-52/abundance.tsv\ngrep hblOligo32108 quant-55/abundance.tsv\ngrep hblOligo32108 quant-60/abundance.tsv\nIt was clear that at least 52 bases was necessary for the 50 base read to get max alignments for the peptides chosen." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
JamesSample/icpw
check_trend_signif.ipynb
mit
[ "%matplotlib inline\nimport matplotlib.pyplot as plt, seaborn as sn, mpld3\nimport pandas as pd\nfrom scipy.stats import theilslopes\nsn.set_context('talk')", "Check significance\nIn an e-mail received 19/07/2016 at 17:20, Don pointed out a couple of TOC plots on my trends map where he was surprised that the estimated trend was deemed insignificant:\n\nLittle Woodford (site code X15:1C1-093)\nPartridge (station code X15:ME-9999)\n\nChecking this will provide a useful test of my trend analysis code.\nTo make the test as independent as possible, I've started off by extracting TOC data for these two sites using the manual interface for RESA2. This method of accessing the database is completely separate to that used by my trends code, so it'll be interesting to see whether I get the same results!", "# Read RESA2 export, calculate annual medians and plot\n\n# Input file\nin_xlsx = (r'C:\\Data\\James_Work\\Staff\\Heleen_d_W\\ICP_Waters\\TOC_Trends_Analysis_2015'\n r'\\Data\\TOC_Little_Woodford_Partridge.xlsx')\ndf = pd.read_excel(in_xlsx, sheetname='DATA')\n\n# Pivot\ndf = df.pivot(index='Date', columns='Station name', values='TOC')\ndf.reset_index(inplace=True)\n\n# Calculate year\ndf['year'] = df['Date'].apply(lambda x: x.year)\n\n# Take median in each year\ngrpd = df.groupby(['year',])\ndf = grpd.aggregate('median')\n\n# Plot\ndf.plot(figsize=(12, 8))\nplt.show()\n\n# Print summary stats\ndf.describe()", "These plots and summary statistics are identical to the ones given on my web map (with the exception that, for plotting, the web map linearly interpolates over data gaps, so the break in the line for Little Woodford is not presented). This is a good start. \nThe next step is to estimate the Theil-Sen slope. It would also be useful to plot the 95% confidence interval around the line, as this should make it easier to see whether a trend ought to be identified as significant or not. However, a little surprisingly, it seems there is no standard way of estimating confidence intervals for Theil-Sen regressions. This is because the Theil-Sen method is strictly a way of estimating the slope of the regression line, but not the intercept (see e.g. here). \nA number of intercept estimators have been proposed previously (e.g. here). For the median regression, which is what I've plotted on my web map, SciPy uses the Conover Estimator to calculate the intercept\n$$\\beta_{median} = y_{median} - M_{median} * x_{median}$$\nwhere $\\beta$ is the intercept and $M$ is the slope calculated using the Theil-Sen method. 
Although I can't find many references for constructing confidence intervals for this type of regression, presumably I can just generalise the above formula to estimate slopes and intercepts for any percentile, $p$\n$$\\beta_{p} = y_{p} - M_{p} * x_{p}$$\nIt's worth a try, anyway.", "# Theil-Sen regression\n\n# Set up plots\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(16, 8))\n\n# Loop over sites\nfor idx, site in enumerate(['LITTLE - WOODFORD', 'PARTRIDGE POND']):\n # Get data\n df2 = df[site].reset_index()\n \n # Drop NaNs\n df2.dropna(how='any', inplace='True')\n\n # Get quantiles\n qdf = df2.quantile([0.025, 0.975])\n y_2_5 = qdf.ix[0.025, site]\n x_2_5 = qdf.ix[0.025, 'year']\n y_97_5 = qdf.ix[0.975, site]\n x_97_5 = qdf.ix[0.975, 'year']\n \n # Theil-Sen regression\n slp_50, icpt_50, slp_lb, slp_ub = theilslopes(df2[site].values, \n df2['year'].values, 0.95)\n \n # Calculate CI for intercepts\n icpt_lb = y_2_5 - (slp_lb * x_2_5)\n icpt_ub = y_97_5 - (slp_ub * x_97_5)\n \n # Plot\n # Data\n axes[idx].plot(df2['year'], df2[site], 'bo-', label='Data')\n \n # Lower and upper CIs\n axes[idx].plot(df2['year'], \n slp_lb * df2['year'] + icpt_lb,\n 'r-', label='')\n \n axes[idx].plot(df2['year'], \n slp_ub * df2['year'] + icpt_ub,\n 'r-', label='95% CI on trend')\n \n axes[idx].fill_between(df2['year'],\n slp_lb * df2['year'] + icpt_lb,\n slp_ub * df2['year'] + icpt_ub, \n facecolor='red',\n alpha=0.1)\n \n # Median \n axes[idx].plot(df2['year'], \n slp_50 * df2['year'] + icpt_50,\n 'k-', label='Median trend')\n \n axes[idx].legend(loc='best', fontsize=16)\n axes[idx].set_title(site, fontsize=20)\n \nplt.tight_layout()\nplt.show()", "These plots illustrate why the trend is not considered to be significant: although in both cases the median trend implies quite a strong relationship (i.e. the effect size is large), the associated uncertainty is sufficiently big that we can't rule out the trend being zero (or even slightly negative) at the 95% confidence level.\nIt would be relatively easy to modify the code for my map to include these confidence intervals on the plots in the pop-up windows for each station. My main reason for not doing this originally is that the Mann-Kendall and Theil-Sen tests are slightly different, so (I think) it would be possible to have contradictory \"edge cases\" where, for the same dataset, the M-K test returns \"significant\" whereas the Theil-Sen estimator returns \"insignificant\" (and vice versa). Of the two approaches, M-K is well accepted and widely used as a test for trend significance, whereas I can't find much information at all regarding constructing confidence intervals for the Theil-Sen estimator. The method I've used above seems reasonable to me, but I've basically made it up and it would be nice to have a reference of some kind to confirm my approach before including it on the map." ]
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
krosaen/ml-study
basic-stats/nba-games-net-rating-boxplots/NbaTeamGameNetRatingsPlots.ipynb
mit
[ "NBA team net ratings\nLet's do some basic exploritory analysis on NBA games.\nEach basketball game is won by a margin, say 8 points. To normalize for the pace of the game, there's \"net rating\" which is the delta in points scored per 100 possessions. This makes different games that were played at faster or slower paces comparable. A team's average net rating is the best single number for quantifying performance (save the win/loss record).\nTo start with we need a record of every NBA game this season and its net rating.\nI boiled this down into a csv with the date, team name and net rating (each game will have two entries, but this is ok as we only look at per team stats). This csv was produced with a separate ruby script, but if you are curious, these two JSON endpoints from stats.nba.com were used:\n\nThe season game log\nThe advanced box score for a single game\n\nNote that these endpoints are unofficial at best, and were sniffed out by looking at XHR requests in Chrome's developer tools while browsing stats.nba.com.\nLet's load the data.", "import csv\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport os\nimport itertools\n\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nnba_games = []\nfname = 'nba-games.csv'\nwith open(fname,\"r\") as f:\n # game_id,game_date,team_id,team_name,net_rating\n reader = csv.reader(f)\n next(reader)\n nba_games = list(reader)\n \n", "Next, a function that extracts the net rating for a team.", "def team_net_ratings(the_team_name):\n \"\"\"\n team name is one of \n [\"76ers\", \"Bucks\", \"Bulls\", \"Cavaliers\", \"Celtics\", \n \"Clippers\", \"Grizzlies\", \"Hawks\", \"Heat\", \n \"Hornets\", \"Jazz\", \"Kings\", \"Knicks\", \"Lakers\", \n \"Magic\", \"Mavericks\", \"Nets\", \"Nuggets\", \"Pacers\", \n \"Pelicans\", \"Pistons\", \"Raptors\", \"Rockets\", \"Spurs\", \n \"Suns\", \"Thunder\", \"Timberwolves\", \"Trail Blazers\", \"Warriors\", \"Wizards\"] \n \"\"\"\n return [\n float(net_rating)\n for game_id,game_date,team_id,team_name,net_rating in nba_games \n if team_name == the_team_name]\n \n", "The Pistons\nWith this, we can make a histogram of the net ratings for a team. I like the Pistons, so let's check them out:", "import matplotlib.mlab as mlab\n\nplt.hist(team_net_ratings('Pistons'), bins=15)\nplt.title(\"Pistons Net Rating for 2015/2016 Games\")\nplt.xlabel(\"Net Rating\")\nplt.ylabel(\"Num Games\")\n\nplt.show()", "As experienced as a fan this year, we are a bit bi-modal, sometimes playing great, even beating some of the leagu's best teams, other times getting blown out (that -40 net rating was most recently against The Wizards).\nBest and worst teams\nNow let's compare this to the best and worst teams in the league", "plt.hist(team_net_ratings(\"Warriors\"), bins=15, color='b', label='Warriors')\nplt.hist(team_net_ratings(\"76ers\"), bins=15, color='r', alpha=0.5, label='76ers')\nplt.title(\"Net Rating for 2015/2016 Games\")\nplt.xlabel(\"Net Rating\")\nplt.ylabel(\"Num Games\")\nplt.legend()\nplt.show() \n", "Yep, the warriors usually win, and the 76ers usually lose. Still striking to see how many games the warriors win by a safe margin.\nBox Plots\nBox plots are a nice way to visually compare multiple team's distributions giving a quick snapshot of the median, range and interquartile range. 
Let's compare the top 9 seeds in the Eastern Conference (I'm hoping the Pistons fight their way into the 8th seed, they are currently 9th).", "def box_plot_teams(team_names):\n reversed_names = list(reversed(team_names))\n data = [team_net_ratings(team_name) for team_name in reversed_names]\n plt.figure()\n plt.xlabel('Game Net Ratings')\n plt.boxplot(data, labels=reversed_names, vert=False)\n plt.show()\n \nbox_plot_teams(['Cavaliers', 'Raptors', 'Celtics', 'Heat', 'Hawks', 'Hornets', 'Pacers', 'Bulls', 'Pistons'])\n", "We can see that The Pistons have the largest spread, but have a median slightly better than The Bulls (they are in fact neck and neck) with potentially more upside. The 3rd quartile net rating of close to 10 is what makes us Pistons fans feel like we could have a shot against most teams in The Eastern Conference.\nAnother note: the standard boxplot plots the dashed line up to 1.5 the IQR range, beyond that data points are considered outliers and plotted individually. The Pistons do not have any outliers by this standard; so on a given night we can get blown out or win big and it shouldn't really surprise use :)\nFinally, let's look at the mean and standard deviations of each.", "def mean_std(team_name):\n nrs = team_net_ratings(team_name)\n return (team_name, np.mean(nrs), np.std(nrs))\n\n[mean_std(team_name) for team_name in \n ['Cavaliers', 'Raptors', 'Celtics', 'Heat', 'Hawks', 'Hornets', 'Pacers', 'Bulls', 'Pistons']]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mbeyeler/opencv-machine-learning
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mit
[ "<!--BOOK_INFORMATION-->\n<a href=\"https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv\" target=\"_blank\"><img align=\"left\" src=\"data/cover.jpg\" style=\"width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;\"></a>\nThis notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.\nThe code is released under the MIT license,\nand is available on GitHub.\nNote that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.\nIf you find this content useful, please consider supporting the work by\nbuying the book!\n<!--NAVIGATION-->\n< Implementing a Spam Filter with Bayesian Learning | Contents | Classifying Emails Using the Naive Bayes Classifier >\nImplementing Our First Bayesian Classifier\nIn the previous chapter, we learned how to generate a number of Gaussian blobs using\nscikit-learn. Do you remember how that is done?\nCreating a toy dataset\nThe function I'm referring to resides within scikit-learn's datasets module. Let's create 100\ndata points, each belonging to one of two possible classes, and group them into two\nGaussian blobs. To make the experiment reproducible, we specify an integer to pick a seed\nfor the random_state. You can again pick whatever number you prefer. Here I went with\nThomas Bayes' year of birth (just for kicks):", "from sklearn import datasets\nX, y = datasets.make_blobs(100, 2, centers=2, random_state=1701, cluster_std=2)", "Let's have a look at the dataset we just created using our trusty friend, Matplotlib:", "import matplotlib.pyplot as plt\nplt.style.use('ggplot')\n%matplotlib inline", "I'm sure this is getting easier every time. We use scatter to create a scatter plot of all $x$\nvalues (X[:, 0]) and $y$ values (X[:, 1]), which will result in the following output:", "plt.figure(figsize=(10, 6))\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50);", "In agreement with our specifications, we see two different point clusters. They hardly\noverlap, so it should be relatively easy to classify them. What do you think—could a linear\nclassifier do the job?\nYes, it could. Recall that a linear classifier would try to draw a straight line through the\nfigure, trying to put all blue dots on one side and all red dots on the other. A diagonal line\ngoing from the top-left corner to the bottom-right corner could clearly do the job. So we\nwould expect the classification task to be relatively easy, even for a naive Bayes classifier.\nBut first, don't forget to split the dataset into training and test sets! Here, I reserve 10% of\nthe data points for testing:", "import numpy as np\nfrom sklearn import model_selection as ms\nX_train, X_test, y_train, y_test = ms.train_test_split(\n X.astype(np.float32), y, test_size=0.1\n)", "Classifying the data with a normal Bayes classifier\nWe will then use the same procedure as in earlier chapters to train a normal Bayes\nclassifier. Wait, why not a naive Bayes classifier? Well, it turns out OpenCV doesn't really\nprovide a true naive Bayes classifier... Instead, it comes with a Bayesian classifier that doesn't\nnecessarily expect features to be independent, but rather expects the data to be clustered\ninto Gaussian blobs. 
This is exactly the kind of dataset we created earlier!\nWe can create a new classifier using the following function:", "import cv2\nmodel_norm = cv2.ml.NormalBayesClassifier_create()", "Then, training is done via the train method:", "model_norm.train(X_train, cv2.ml.ROW_SAMPLE, y_train)", "Once the classifier has been trained successfully, it will return True. We go through the\nmotions of predicting and scoring the classifier, just like we have done a million times\nbefore:", "_, y_pred = model_norm.predict(X_test)\n\nfrom sklearn import metrics\nmetrics.accuracy_score(y_test, y_pred)", "Even better—we can reuse the plotting function from the last chapter to inspect the decision\nboundary! If you recall, the idea was to create a mesh grid that would encompass all data\npoints and then classify every point on the grid. The mesh grid is created via the NumPy\nfunction of the same name:", "def plot_decision_boundary(model, X_test, y_test):\n # create a mesh to plot in\n h = 0.02 # step size in mesh\n x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1\n y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1\n xx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\n \n X_hypo = np.column_stack((xx.ravel().astype(np.float32),\n yy.ravel().astype(np.float32)))\n ret = model.predict(X_hypo)\n if isinstance(ret, tuple):\n zz = ret[1]\n else:\n zz = ret\n zz = zz.reshape(xx.shape)\n \n plt.contourf(xx, yy, zz, cmap=plt.cm.coolwarm, alpha=0.8)\n plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=200)\n\nplt.figure(figsize=(10, 6))\nplot_decision_boundary(model_norm, X, y)", "So far, so good. The interesting part is that a Bayesian classifier also returns the probability\nwith which each data point has been classified:", "ret, y_pred, y_proba = model_norm.predictProb(X_test)", "The function returns a Boolean flag (True for success, False for failure), the predicted\ntarget labels (y_pred), and the conditional probabilities (y_proba). Here, y_proba is an $N\n\\times 2$ matrix that indicates, for every one of the $N$ data points, the probability with which it\nwas classified as either class 0 or class 1:", "y_proba.round(2)", "Classifying the data with a naive Bayes classifier\nWe can compare the result to a true naïve Bayes classifier by asking scikit-learn for help:", "from sklearn import naive_bayes\nmodel_naive = naive_bayes.GaussianNB()", "As usual, training the classifier is done via the fit method:", "model_naive.fit(X_train, y_train)", "Scoring the classifier is built in:", "model_naive.score(X_test, y_test)", "Again a perfect score! However, in contrast to OpenCV, this classifier's predict_proba\nmethod returns true probability values, because all values are between 0 and 1, and because\nall rows add up to 1:", "yprob = model_naive.predict_proba(X_test)\nyprob.round(2)", "You might have noticed something else: This classifier has absolutely no doubt about the\ntarget label of each and every data point. It's all or nothing.\nThe decision boundary returned by the naive Bayes classifier looks slightly different, but\ncan be considered identical to the previous command for the purpose of this exercise:", "plt.figure(figsize=(10, 6))\nplot_decision_boundary(model_naive, X, y)", "Visualizing conditional probabilities\nSimilarly, we can also visualize probabilities. For this, we slightly modify the plot function\nfrom the previous example. 
We start out by creating a mesh grid between (x_min, x_max)\nand (y_min, y_max):", "def plot_proba(model, X_test, y_test):\n # create a mesh to plot in\n h = 0.02 # step size in mesh\n x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1\n y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1\n xx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\n \n X_hypo = np.column_stack((xx.ravel().astype(np.float32),\n yy.ravel().astype(np.float32)))\n if hasattr(model, 'predictProb'):\n _, _, y_proba = model.predictProb(X_hypo)\n else:\n y_proba = model.predict_proba(X_hypo)\n \n zz = y_proba[:, 1] - y_proba[:, 0]\n zz = zz.reshape(xx.shape)\n \n plt.contourf(xx, yy, zz, cmap=plt.cm.coolwarm, alpha=0.8)\n plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=200)\n\nplt.figure(figsize=(10, 6))\nplot_proba(model_naive, X, y)", "<!--NAVIGATION-->\n< Implementing a Spam Filter with Bayesian Learning | Contents | Classifying Emails Using the Naive Bayes Classifier >" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ml4a/ml4a-guides
examples/dreaming/neural-synth.ipynb
gpl-2.0
[ "Neural synthesis, feature visualization, and DeepDream notes\nThis notebook introduces what we'll call here \"neural synthesis,\" the technique of synthesizing images using an iterative process which optimizes the pixels of the image to achieve some desired state of activations in a convolutional neural network. \nThe technique in its modern form dates back to around 2009 and has its origins in early attempts to visualize what features were being learned by the different layers in the network (see Erhan et al, Simonyan et al, and Mahendran & Vedaldi) as well as in trying to identify flaws or vulnerabilities in networks by synthesizing and feeding them adversarial examples (see Nguyen et al, and Dosovitskiy & Brox). The following is an example from Simonyan et al on visualizing image classification models.\n\nIn 2012, the technique became widely known after Le et al published results of an experiment in which a deep neural network was fed millions of images, predominantly from YouTube, and unexpectedly learned a cat face detector. At that time, the network was trained for three days on 16,000 CPU cores spread over 1,000 machines!\n\nIn 2015, following the rapid proliferation of cheap GPUs, Google software engineers Mordvintsev, Olah, and Tyka first used it for ostensibly artistic purposes and introduced several innovations, including optimizing pixels over multiple scales (octaves), improved regularization, and most famously, using real images (photographs, paintings, etc) as input and optimizing their pixels so as to enhance whatever activations the network already detected (hence \"hallucinating\" or \"dreaming\"). They nicknamed their work \"Deepdream\" and released the first publicly available code for running it in Caffe, which led to the technique being widely disseminated on social media, puppyslugs and all. Some highlights of their original work follow, with more found in this gallery.\n\n\nA number of creative innovations were further introduced by Mike Tyka including optimizing several channels along pre-arranged masks, and using feedback loops to generate video. Some examples of his work follow.\n\nThis notebook builds upon the code found in tensorflow's deepdream example. The first part of this notebook will summarize that one, including naive optimization, multiscale generation, and Laplacian normalization. The code from that notebook is lightly modified and is mostly found in the the lapnorm.py script, which is imported into this notebook. The second part of this notebook builds upon that example by showing how to combine channels and mask their gradients, warp the canvas, and generate video using a feedback loop. Here is a gallery of examples and a video work.\nBefore we get started, we need to make sure we have downloaded and placed the Inceptionism network in the data folder. Run the next cell if you haven't already downloaded it.", "#Grab inception model from online and unzip it (you can skip this step if you've already downloaded the model.\n!wget -P . https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip\n!unzip inception5h.zip -d inception5h/\n!rm inception5h.zip", "To get started, make sure all of the folloing import statements work without error. 
You should get a message telling you there are 59 layers in the network and 7548 channels.", "from __future__ import print_function\nfrom io import BytesIO\nimport math, time, copy, json, os\nimport glob\nfrom os import listdir\nfrom os.path import isfile, join\nfrom random import random\nfrom io import BytesIO\nfrom enum import Enum\nfrom functools import partial\nimport PIL.Image\nfrom IPython.display import clear_output, Image, display, HTML\nimport numpy as np\nimport scipy.misc\nimport tensorflow as tf\n\n# import everything from lapnorm.py\nfrom lapnorm import *", "Let's inspect the network now. The following will give us the name of all the layers in the network, as well as the number of channels they contain. We can use this as a lookup table when selecting channels.", "for l, layer in enumerate(layers):\n layer = layer.split(\"/\")[1]\n num_channels = T(layer).shape[3]\n print(layer, num_channels)", "The basic idea is to take any image as input, then iteratively optimize its pixels so as to maximally activate a particular channel (feature extractor) in a trained convolutional network. We reproduce tensorflow's recipe here to read the code in detail. In render_naive, we take img0 as input, then for iter_n steps, we calculate the gradient of the pixels with respect to our optimization objective, or in other words, the diff for all of the pixels we must add in order to make the image activate the objective. The objective we pass is a channel in one of the layers of the network, or an entire layer. Declare the function below.", "def render_naive(t_obj, img0, iter_n=20, step=1.0):\n t_score = tf.reduce_mean(t_obj) # defining the optimization objective\n t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!\n img = img0.copy()\n for i in range(iter_n):\n g, score = sess.run([t_grad, t_score], {t_input:img})\n # normalizing the gradient, so the same step size should work \n g /= g.std()+1e-8 # for different layers and networks\n img += g*step\n return img\n", "Now let's try running it. First, we initialize a 200x200 block of colored noise. We then select the layer mixed4d_5x5_bottleneck_pre_relu and the 20th channel in that layer as the objective, and run it through render_naive for 40 iterations. You can try to optimize at different layers or different channels to get a feel for how it looks.", "img0 = np.random.uniform(size=(200, 200, 3)) + 100.0\nlayer = 'mixed4d_3x3_bottleneck_pre_relu'\nchannel = 140\nimg1 = render_naive(T(layer)[:,:,:,channel], img0, 40, 1.0)\ndisplay_image(img1)", "The above isn't so interesting yet. One improvement is to use repeated upsampling to effectively detect features at multiple scales (what we call \"octaves\") of the image. What we do is we start with a smaller image and calculate the gradients for that, going through the procedure like before. Then we upsample it by a particular ratio and calculate the gradients and modify the pixels of the result. We do this several times. 
\nYou can see that render_multiscale is similar to render_naive except now the addition of the outer \"octave\" loop which repeatedly upsamples the image using the resize function.", "def render_multiscale(t_obj, img0, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):\n t_score = tf.reduce_mean(t_obj) # defining the optimization objective\n t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!\n img = img0.copy()\n for octave in range(octave_n):\n if octave>0:\n hw = np.float32(img.shape[:2])*octave_scale\n img = resize(img, np.int32(hw))\n for i in range(iter_n):\n g = calc_grad_tiled(img, t_grad)\n # normalizing the gradient, so the same step size should work \n g /= g.std()+1e-8 # for different layers and networks\n img += g*step\n print(\"octave %d/%d\"%(octave+1, octave_n))\n clear_output()\n return img\n", "Let's try this on noise first. Note the new variables octave_n and octave_scale which control the parameters of the scaling. Thanks to tensorflow's patch to do the process on overlapping subrectangles, we don't have to worry about running out of memory. However, making the overall size large will mean the process takes longer to complete.", "h, w = 200, 200\noctave_n = 3\noctave_scale = 1.4\niter_n = 50\n\nimg0 = np.random.uniform(size=(h, w, 3)) + 100.0\n\nlayer = 'mixed4c_5x5_bottleneck_pre_relu'\nchannel = 20\n\nimg1 = render_multiscale(T(layer)[:,:,:,channel], img0, iter_n, 1.0, octave_n, octave_scale)\ndisplay_image(img1)", "Now load a real image and use that as the starting point. We'll use the kitty image in the assets folder. Here is the original.\n<img src=\"../assets/kitty.jpg\" alt=\"kitty\" style=\"width: 280px;\"/>", "h, w = 320, 480\noctave_n = 3\noctave_scale = 1.4\niter_n = 60\n\nimg0 = load_image('../assets/kitty.jpg', h, w)\n\nlayer = 'mixed4d_5x5_bottleneck_pre_relu'\nchannel = 21\n\nimg1 = render_multiscale(T(layer)[:,:,:,channel], img0, iter_n, 1.0, octave_n, octave_scale)\ndisplay_image(img1)", "Now we introduce Laplacian normalization. The problem is that although we are finding features at multiple scales, it seems to have a lot of unnatural high-frequency noise. We apply a Laplacian pyramid decomposition to the image as a regularization technique and calculate the pixel gradient at each scale, as before.", "def render_lapnorm(t_obj, img0, iter_n=10, step=1.0, oct_n=3, oct_s=1.4, lap_n=4):\n t_score = tf.reduce_mean(t_obj) # defining the optimization objective\n t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!\n # build the laplacian normalization graph\n lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))\n img = img0.copy()\n for octave in range(oct_n):\n if octave>0:\n hw = np.float32(img.shape[:2])*oct_s\n img = resize(img, np.int32(hw))\n for i in range(iter_n):\n g = calc_grad_tiled(img, t_grad)\n g = lap_norm_func(g)\n img += g*step\n print('.', end='')\n print(\"octave %d/%d\"%(octave+1, oct_n))\n clear_output()\n return img\n ", "With Laplacian normalization and multiple octaves, we have the core technique finished and are level with the Tensorflow example. Try running the example below and modifying some of the numbers to see how it affects the result. Remember you can use the layer lookup table at the top of this notebook to recall the different layers that are available to you. 
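Before running it, here is a rough, framework-free sketch of what the Laplacian normalization step does to the gradient. This is not the lapnorm.py implementation (which builds the pyramid as a TensorFlow graph), and it skips the downsampling a real Laplacian pyramid performs; the made-up functions below only illustrate the idea: split the gradient into frequency bands, rescale each band to a comparable magnitude, and re-assemble, so that high frequencies no longer dominate the update.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_band(img, sigma=1.0):
    # low-frequency part: a blurred copy; high-frequency part: the residual
    lo = gaussian_filter(img, sigma=(sigma, sigma, 0))
    return lo, img - lo

def unit_std(band, eps=1e-8):
    # rescale a band so it has (roughly) unit standard deviation
    return band / (band.std() + eps)

def lap_normalize_sketch(grad_img, n_levels=4):
    bands = []
    lo = grad_img
    for _ in range(n_levels):
        lo, hi = split_band(lo)
        bands.append(unit_std(hi))
    bands.append(unit_std(lo))
    return np.sum(bands, axis=0)

g = np.random.randn(64, 64, 3).astype(np.float32)
print(lap_normalize_sketch(g).shape)  # (64, 64, 3)
```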
Note the differences between early (low-level) layers and later (high-level) layers.", "h, w = 300, 400\noctave_n = 3\noctave_scale = 1.4\niter_n = 20\n\nimg0 = np.random.uniform(size=(h, w, 3)) + 100.0\n\nlayer = 'mixed5b_pool_reduce_pre_relu'\nchannel = 99\n\nimg1 = render_lapnorm(T(layer)[:,:,:,channel], img0, iter_n, 1.0, octave_n, octave_scale)\ndisplay_image(img1)", "Now we are going to modify the render_lapnorm function in three ways. \n1) Instead of passing just a single channel or layer to be optimized (the objective, t_obj), we can pass several in an array, letting us optimize several channels simultaneously (it must be an array even if it contains just one element).\n2) We now also pass in mask, which is a numpy array of dimensions (h,w,n) where h and w are the height and width of the source image img0 and n is equal to the number of objectives in t_obj. The mask is like a gate or multiplier of the gradient for each channel. mask[:,:,0] gets multiplied by the gradient of the first objective, mask[:,:,1] by the second and so on. It should contain a float between 0 and 1 (0 to kill the gradient, 1 to let all of it pass). Another way to think of mask is it's like step for every individual pixel for each objective.\n3) Internally, we use a convenience function get_mask_sizes which figures out for us the size of the image and mask at every octave, so we don't have to worry about calculating this ourselves, and can just pass in an img and mask of the same size.", "def lapnorm_multi(t_obj, img0, mask, iter_n=10, step=1.0, oct_n=3, oct_s=1.4, lap_n=4, clear=True):\n mask_sizes = get_mask_sizes(mask.shape[0:2], oct_n, oct_s)\n img0 = resize(img0, np.int32(mask_sizes[0])) \n t_score = [tf.reduce_mean(t) for t in t_obj] # defining the optimization objective\n t_grad = [tf.gradients(t, t_input)[0] for t in t_score] # behold the power of automatic differentiation!\n # build the laplacian normalization graph\n lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))\n img = img0.copy()\n for octave in range(oct_n):\n if octave>0:\n hw = mask_sizes[octave] #np.float32(img.shape[:2])*oct_s\n img = resize(img, np.int32(hw))\n oct_mask = resize(mask, np.int32(mask_sizes[octave]))\n for i in range(iter_n):\n g_tiled = [lap_norm_func(calc_grad_tiled(img, t)) for t in t_grad]\n for g, gt in enumerate(g_tiled):\n img += gt * step * oct_mask[:,:,g].reshape((oct_mask.shape[0],oct_mask.shape[1],1))\n print('.', end='')\n print(\"octave %d/%d\"%(octave+1, oct_n))\n if clear:\n clear_output()\n return img", "Try first on noise, as before. 
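Before that, a toy NumPy illustration of point 2 above may help: the mask is simply a per-pixel, per-objective multiplier applied to each gradient, which is exactly what the `oct_mask[:,:,g].reshape(...)` line inside `lapnorm_multi` does. The two "gradients" below are fake constant arrays standing in for what the network would return.

```python
import numpy as np

h, w = 4, 6
step = 1.0
img = np.zeros((h, w, 3))

# pretend gradients from two different objectives (placeholders, not real network output)
g1 = np.ones((h, w, 3))
g2 = -np.ones((h, w, 3))

# mask[:, :, k] gates the gradient of objective k, per pixel
mask = np.zeros((h, w, 2))
mask[:, : w // 2, 0] = 1.0   # left half follows objective 1
mask[:, w // 2 :, 1] = 1.0   # right half follows objective 2

for k, g in enumerate([g1, g2]):
    img += g * step * mask[:, :, k].reshape(h, w, 1)

print(img[0, 0, 0], img[0, -1, 0])  # 1.0 on the left, -1.0 on the right
```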
This time, we pass in two objectives from different layers and we create a mask where the top half only lets in the first channel, and the bottom half only lets in the second.", "h, w = 300, 400\noctave_n = 3\noctave_scale = 1.4\niter_n = 15\n\nimg0 = np.random.uniform(size=(h, w, 3)) + 100.0\n\nobjectives = [T('mixed3a_3x3_pre_relu')[:,:,:,79], \n T('mixed5a_1x1_pre_relu')[:,:,:,200],\n T('mixed4b_5x5_bottleneck_pre_relu')[:,:,:,22]]\n\n# mask\nmask = np.zeros((h, w, 3))\nmask[0:100,:,0] = 1.0\nmask[100:200,:,1] = 1.0\nmask[200:,:,2] = 1.0\n\nimg1 = lapnorm_multi(objectives, img0, mask, iter_n, 1.0, octave_n, octave_scale)\ndisplay_image(img1)", "Now the same thing, but we optimize over the kitty instead and pick new channels.", "h, w = 400, 400\noctave_n = 3\noctave_scale = 1.4\niter_n = 30\n\nimg0 = load_image('../assets/kitty.jpg', h, w)\n\nobjectives = [T('mixed4d_3x3_bottleneck_pre_relu')[:,:,:,99], \n T('mixed5a_5x5_bottleneck_pre_relu')[:,:,:,40]]\n\n# mask\nmask = np.zeros((h, w, 2))\nmask[:,:200,0] = 1.0\nmask[:,200:,1] = 1.0\n\nimg1 = lapnorm_multi(objectives, img0, mask, iter_n, 1.0, octave_n, octave_scale)\ndisplay_image(img1)", "Let's make a more complicated mask. Here we use numpy's linspace function to linearly interpolate the mask between 0 and 1, going from left to right, in the first channel's mask, and the opposite for the second channel. Thus on the far left of the image, we let in only the second channel, on the far right only the first channel, and in the middle exactly 50% of each. We'll make a long one to show the smooth transition. We'll also visualize the first channel's mask right afterwards.", "h, w = 256, 1024\n\nimg0 = np.random.uniform(size=(h, w, 3)) + 100.0\n\noctave_n = 3\noctave_scale = 1.4\nobjectives = [T('mixed4c_3x3_pre_relu')[:,:,:,50], \n T('mixed4d_5x5_bottleneck_pre_relu')[:,:,:,29]]\n\nmask = np.zeros((h, w, 2))\nmask[:,:,0] = np.linspace(0,1,w)\nmask[:,:,1] = np.linspace(1,0,w)\n\n\n\nimg1 = lapnorm_multi(objectives, img0, mask, iter_n=40, step=1.0, oct_n=3, oct_s=1.4, lap_n=4)\n\nprint(\"image\")\ndisplay_image(img1)\nprint(\"masks\")\ndisplay_image(255*mask[:,:,0])\ndisplay_image(255*mask[:,:,1])\n", "One can think up many clever ways to make masks. Maybe they are arranged as overlapping concentric circles, or along diagonal lines, or even using Perlin noise to get smooth organic-looking variation. 
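Here are two more quick mask recipes along the lines just mentioned (these are additions for illustration): a diagonal ramp, and a smooth organic-looking mask made from blurred uniform noise as a cheap stand-in for Perlin noise (a proper Perlin implementation would need an extra package). Both produce an (h, w, 2) array whose channels sum to 1, so they can be passed straight to `lapnorm_multi`.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

h, w = 256, 256

# diagonal ramp: 0 at the top-left corner, 1 at the bottom-right corner
yy, xx = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing='ij')
diag = (yy + xx) / 2.0
mask_diag = np.dstack([diag, 1.0 - diag])

# smooth "organic" mask: heavily blurred random noise rescaled to [0, 1]
noise = gaussian_filter(np.random.rand(h, w), sigma=20)
noise = (noise - noise.min()) / (noise.max() - noise.min())
mask_noise = np.dstack([noise, 1.0 - noise])

print(mask_diag.shape, mask_noise.shape)  # (256, 256, 2) (256, 256, 2)
```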
\nHere is one example making a circular mask.", "h, w = 500, 500\n\ncy, cx = 0.5, 0.5\n\n# circle masks\npts = np.array([[[i/(h-1.0),j/(w-1.0)] for j in range(w)] for i in range(h)])\nctr = np.array([[[cy, cx] for j in range(w)] for i in range(h)])\n\npts -= ctr\ndist = (pts[:,:,0]**2 + pts[:,:,1]**2)**0.5\ndist = dist / np.max(dist)\n\nmask = np.ones((h, w, 2))\nmask[:, :, 0] = dist\nmask[:, :, 1] = 1.0-dist\n\n\nimg0 = np.random.uniform(size=(h, w, 3)) + 100.0\n\noctave_n = 3\noctave_scale = 1.4\nobjectives = [T('mixed3b_5x5_bottleneck_pre_relu')[:,:,:,9], \n T('mixed4d_5x5_bottleneck_pre_relu')[:,:,:,17]]\n\nimg1 = lapnorm_multi(objectives, img0, mask, iter_n=20, step=1.0, oct_n=3, oct_s=1.4, lap_n=4)\ndisplay_image(img1)\n", "Now we show how to use an existing image as a set of masks, using k-means clustering to segment it into several sections which become masks.", "import sklearn.cluster\n\nk = 3\nh, w = 320, 480\nimg0 = load_image('../assets/kitty.jpg', h, w)\n\nimgp = np.array(list(img0)).reshape((h*w, 3))\nclusters, assign, _ = sklearn.cluster.k_means(imgp, k)\nassign = assign.reshape((h, w))\n\nmask = np.zeros((h, w, k))\nfor i in range(k):\n mask[:,:,i] = np.multiply(np.ones((h, w)), (assign==i))\n\nfor i in range(k):\n display_image(mask[:,:,i]*255.)\n\nimg0 = np.random.uniform(size=(h, w, 3)) + 100.0\n\noctave_n = 3\noctave_scale = 1.4\nobjectives = [T('mixed4b_3x3_bottleneck_pre_relu')[:,:,:,111], \n T('mixed5b_pool_reduce_pre_relu')[:,:,:,12],\n T('mixed4b_5x5_bottleneck_pre_relu')[:,:,:,11]]\n\n\nimg1 = lapnorm_multi(objectives, img0, mask, iter_n=20, step=1.0, oct_n=3, oct_s=1.4, lap_n=4)\ndisplay_image(img1)\n", "Now, we move on to generating video. The most straightforward way to do this is using feedback; generate one image in the conventional way, and then use it as the input to the next generation, rather than starting with noise again. By itself, this would simply repeat or intensify the features found in the first image, but we can get interesting results by perturbing the input to the second generation slightly before passing it in. For example, we can crop it slightly to remove the outer rim, then resize it to the original size and run it through again. If we do this repeatedly, we will get what looks like a constant zooming-in motion.\nThe next block of code demonstrates this. We'll make a small square with a single feature, then crop the outer rim by around 5% before making the next one. We'll repeat this 20 times and look at the resulting frames. For simplicity, we'll just set the mask to 1 everywhere. Note, we've also set the clear variable in lapnorm_multi to false so we can see all the images in sequence.", "h, w = 200, 200\n\n# start with random noise\nimg = np.random.uniform(size=(h, w, 3)) + 100.0\n\noctave_n = 3\noctave_scale = 1.4\nobjectives = [T('mixed4d_5x5_bottleneck_pre_relu')[:,:,:,11]]\nmask = np.ones((h, w, 1))\n\n# repeat the generation loop 20 times. notice the feedback -- we make img and then use it the initial input \nfor f in range(20):\n img = lapnorm_multi(objectives, img, mask, iter_n=20, step=1.0, oct_n=3, oct_s=1.4, lap_n=4, clear=False)\n display_image(img) # let's see it\n scipy.misc.imsave('frame%05d.png'%f, img) # ffmpeg to save the frames\n img = resize(img[10:-10,10:-10,:], (h, w)) # before looping back, crop the border by 10 pixels, resize, repeat\n", "If you look at all the frames, you can see the zoom-in effect. Zooming is just one of the things we can do to get interesting dynamics. 
Another cropping technique might be to shift the canvas in one direction, or maybe we can slightly rotate the canvas around a pivot point, or perhaps distort it with Perlin noise. There are many things that can be done to get interesting and compelling results. Try also combining these with different ways of making and modifying masks, and the combinatorial space of possibilities grows immensely. Most ambitiously, you can try training your own convolutional network from scratch and using it instead of Inception to get more custom effects. As we have seen, the technique of feature visualization provides a wealth of possibilities for generating video art." ]
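As a concrete illustration of the shift-and-rotate idea just mentioned (an addition, not part of the original notebook), the sketch below warps a frame with scipy.ndimage before it is fed back into the next generation, in place of — or on top of — the border crop used above. The function name and parameter choices are only for illustration.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def warp_for_feedback(img, dx=0, dy=0, angle=0.0):
    """Shift and/or slightly rotate an (h, w, 3) float canvas between frames.

    dx/dy are pixel offsets, angle is in degrees; mode='reflect' fills the
    uncovered border with mirrored content.
    """
    out = shift(img, shift=(dy, dx, 0), mode='reflect')
    if angle != 0.0:
        out = rotate(out, angle, axes=(0, 1), reshape=False, mode='reflect')
    return out

frame = np.random.uniform(size=(200, 200, 3)) + 100.0
next_input = warp_for_feedback(frame, dx=2, dy=0, angle=0.5)
print(next_input.shape)  # (200, 200, 3)
```

Small per-frame offsets (a pixel or two, fractions of a degree) tend to give smoother motion than large jumps.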
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
SylvainCorlay/bqplot
examples/Marks/Object Model/Lines.ipynb
apache-2.0
[ "The Lines Mark\nLines is a Mark object that is primarily used to visualize quantitative data. It works particularly well for continuous data, or when the shape of the data needs to be extracted.\nIntroduction\nThe Lines object provides the following features:\n\nAbility to plot a single set or multiple sets of y-values as a function of a set or multiple sets of x-values\nAbility to style the line object in different ways, by setting different attributes such as the colors, line_style, stroke_width etc.\nAbility to specify a marker at each point passed to the line. The marker can be a shape which is at the data points between which the line is interpolated and can be set through the markers attribute\n\nThe Lines object has the following attributes\n| Attribute | Description | Default Value |\n|:-:|---|:-:|\n| colors | Sets the color of each line, takes as input a list of any RGB, HEX, or HTML color name | CATEGORY10 |\n| opacities | Controls the opacity of each line, takes as input a real number between 0 and 1 | 1.0 |\n| stroke_width | Real number which sets the width of all paths | 2.0 |\n| line_style | Specifies whether a line is solid, dashed, dotted or both dashed and dotted | 'solid' |\n| interpolation | Sets the type of interpolation between two points | 'linear' |\n| marker | Specifies the shape of the marker inserted at each data point | None |\n| marker_size | Controls the size of the marker, takes as input a non-negative integer | 64 |\n|close_path| Controls whether to close the paths or not | False |\n|fill| Specifies in which way the paths are filled. Can be set to one of {'none', 'bottom', 'top', 'inside'}| None |\n|fill_colors| List that specifies the fill colors of each path | [] | \n| Data Attribute | Description | Default Value |\n|x |abscissas of the data points | array([]) |\n|y |ordinates of the data points | array([]) |\n|color | Data according to which the Lines will be colored. Setting it to None defaults the choice of colors to the colors attribute | None |\nTo explore more features, run the following lines of code:\npython\nfrom bqplot import Lines\n?Lines\nor visit the Lines documentation page\nLet's explore these features one by one\nWe begin by importing the modules that we will need in this example", "import numpy as np #For numerical programming and multi-dimensional arrays\nfrom pandas import date_range #For date-rate generation\nfrom bqplot import LinearScale, Lines, Axis, Figure, DateScale, ColorScale", "Random Data Generation", "security_1 = np.cumsum(np.random.randn(150)) + 100.\nsecurity_2 = np.cumsum(np.random.randn(150)) + 100.", "Basic Line Chart\nUsing the bqplot, object oriented API, we can generate a Line Chart with the following code snippet:", "sc_x = LinearScale()\nsc_y = LinearScale()\n\nline = Lines(x=np.arange(len(security_1)), y=security_1,\n scales={'x': sc_x, 'y': sc_y})\nax_x = Axis(scale=sc_x, label='Index')\nax_y = Axis(scale=sc_y, orientation='vertical', label='y-values of Security 1')\n\nFigure(marks=[line], axes=[ax_x, ax_y], title='Security 1')", "The x attribute refers to the data represented horizontally, while the y attribute refers the data represented vertically. \nWe can explore the different attributes by changing each of them for the plot above:", "line.colors = ['DarkOrange']", "In a similar way, we can also change any attribute after the plot has been displayed to change the plot. 
Run each of the cells below, and try changing the attributes to explore the different features and how they affect the plot.", "# The opacity allows us to display the Line while featuring other Marks that may be on the Figure\nline.opacities = [.5]\n\nline.stroke_width = 2.5", "To switch to an area chart, set the fill attribute, and control the look with fill_opacities and fill_colors.", "line.fill = 'bottom'\nline.fill_opacities = [0.2]\n\nline.line_style = 'dashed'\n\nline.interpolation = 'basis'", "While a Lines plot allows the user to extract the general shape of the data being plotted, there may be a need to visualize discrete data points along with this shape. This is where the markers attribute comes in.", "line.marker = 'triangle-down'", "The marker attributes accepts the values square, circle, cross, diamond, square, triangle-down, triangle-up, arrow, rectangle, ellipse. Try changing the string above and re-running the cell to see how each marker type looks.\nPlotting a Time-Series\nThe DateScale allows us to plot time series as a Lines plot conveniently with most date formats.", "# Here we define the dates we would like to use\ndates = date_range(start='01-01-2007', periods=150)\n\ndt_x = DateScale()\nsc_y = LinearScale()\n\ntime_series = Lines(x=dates, y=security_1, scales={'x': dt_x, 'y': sc_y})\nax_x = Axis(scale=dt_x, label='Date')\nax_y = Axis(scale=sc_y, orientation='vertical', label='Security 1')\n\nFigure(marks=[time_series], axes=[ax_x, ax_y], title='A Time Series Plot')", "Plotting Multiples Sets of Data with Lines\nThe Lines mark allows the user to plot multiple y-values for a single x-value. This can be done by passing an ndarray or a list of the different y-values as the y-attribute of the Lines as shown below.", "x_dt = DateScale()\ny_sc = LinearScale()", "We pass each data set as an element of a list. The colors attribute allows us to pass a specific color for each line.", "dates_new = date_range(start='06-01-2007', periods=150)\n\nsecurities = np.cumsum(np.random.randn(150, 10), axis=0)\npositions = np.random.randint(0, 2, size=10)\n\n# We pass the color scale and the color data to the lines\nline = Lines(x=dates, y=[security_1, security_2], \n scales={'x': x_dt, 'y': y_sc},\n labels=['Security 1', 'Security 2'])\n\nax_x = Axis(scale=x_dt, label='Date')\nax_y = Axis(scale=y_sc, orientation='vertical', label='Security 1')\n\nFigure(marks=[line], axes=[ax_x, ax_y], legend_location='top-left')", "Similarly, we can also pass multiple x-values for multiple sets of y-values", "line.x, line.y = [dates, dates_new], [security_1, security_2]", "Coloring Lines according to data\nThe color attribute of a Lines mark can also be used to encode one more dimension of data. Suppose we have a portfolio of securities and we would like to color them based on whether we have bought or sold them. We can use the color attribute to encode this information.", "x_dt = DateScale()\ny_sc = LinearScale()\ncol_sc = ColorScale(colors=['Red', 'Green'])\n\ndates_color = date_range(start='06-01-2007', periods=150)\n\nsecurities = 100. 
+ np.cumsum(np.random.randn(150, 10), axis=0)\npositions = np.random.randint(0, 2, size=10)\n# Here we generate 10 random price series and 10 random positions\n\n# We pass the color scale and the color data to the lines\nline = Lines(x=dates_color, y=securities.T, \n scales={'x': x_dt, 'y': y_sc, 'color': col_sc}, color=positions,\n labels=['Security 1', 'Security 2'])\n\nax_x = Axis(scale=x_dt, label='Date')\nax_y = Axis(scale=y_sc, orientation='vertical', label='Security 1')\n\nFigure(marks=[line], axes=[ax_x, ax_y], legend_location='top-left')", "We can also reset the colors of the Line to their defaults by setting the color attribute to None.", "line.color = None", "Patches\nThe fill attribute of the Lines mark allows us to fill a path in different ways, while the fill_colors attribute lets us control the color of the fill", "sc_x = LinearScale()\nsc_y = LinearScale()\n\npatch = Lines(x=[[0, 2, 1.2, np.nan, np.nan, np.nan, np.nan], [0.5, 2.5, 1.7, np.nan, np.nan, np.nan, np.nan], [4,5,6, 6, 5, 4, 3]], \n y=[[0, 0, 1 , np.nan, np.nan, np.nan, np.nan], [0.5, 0.5, -0.5, np.nan, np.nan, np.nan, np.nan], [1, 1.1, 1.2, 2.3, 2.2, 2.7, 1.0]],\n fill_colors=['orange', 'blue', 'red'],\n fill='inside',\n stroke_width=10,\n close_path=True,\n scales={'x': sc_x, 'y': sc_y},\n display_legend=True)\n\nFigure(marks=[patch], animation_duration=1000)\n\npatch.opacities = [0.1, 0.2]\n\npatch.x = [[2, 3, 3.2, np.nan, np.nan, np.nan, np.nan], [0.5, 2.5, 1.7, np.nan, np.nan, np.nan, np.nan], [4,5,6, 6, 5, 4, 3]]\n\npatch.close_path = False" ]
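To tie the attribute table together, here is a short sketch (not from the original notebook) that combines several of the attributes demonstrated above — colors, line_style, marker, marker_size, labels and display_legend — on a single Lines mark. Only attribute names already used in this notebook are assumed; the exact rendering may depend on your bqplot version.

```python
import numpy as np
from bqplot import LinearScale, Lines, Axis, Figure

x = np.arange(50)
y = np.cumsum(np.random.randn(50))

sc_x, sc_y = LinearScale(), LinearScale()

styled_line = Lines(x=x, y=y,
                    scales={'x': sc_x, 'y': sc_y},
                    colors=['Teal'],
                    line_style='dashed',
                    marker='circle',
                    marker_size=32,
                    labels=['Random walk'],
                    display_legend=True)

ax_x = Axis(scale=sc_x, label='Index')
ax_y = Axis(scale=sc_y, orientation='vertical', label='Value')

Figure(marks=[styled_line], axes=[ax_x, ax_y],
       legend_location='top-left', title='Several Lines attributes combined')
```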
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
psychemedia/parlihacks
notebooks/Committee Reports.ipynb
mit
[ "Future of the Report - Sketches 1\nThis notebook contains notes and sketches created whilst exploring a particular committee report, the Women and Equalities Committee Gender pay gap inquiry report.\n(From a cursory inspection of several other HTML published reports, there appears to be a significant amount of inconsistency in the way reports from different committees are presented online. A closer look at other reports, and the major differences that appear to arise across them, will be considered at a later date.)\nScraping the Report Home Page", "url='https://publications.parliament.uk/pa/cm201516/cmselect/cmwomeq/584/58402.htm'", "Observation - from the report contents page, I can navigate via the Back button to https://publications.parliament.uk/pa/cm201516/cmselect/cmwomeq/584/58401.htm but then it's not clear where I am at all?\nIt would probably make sense to be able to get back to the inquiry page for the inquiry that resulted in the report.", "import pandas as pd", "Report Contents Page Link Scraper", "import requests\nimport requests_cache\nrequests_cache.install_cache('parli_comm_cache')\n\nfrom bs4 import BeautifulSoup\n\n#https://www.dataquest.io/blog/web-scraping-tutorial-python/\npage = requests.get(url)\nsoup = BeautifulSoup(page.content, 'html.parser')\n\n#What does a ToC item look like?\nsoup.select('p[class*=\"ToC\"]')[5].find('a')\n\nurl_written=None\nurl_witnesses=None\n\nfor p in soup.select('p[class*=\"ToC\"]'):\n #witnesses\n if 'Witnesses' in p.find('a'):\n url_witnesses=p.find('a')['href']\n #written evidence\n if 'Published written evidence' in p.find('a'):\n url_written=p.find('a')['href']\n \nurl_written, url_witnesses\n\n#https://stackoverflow.com/a/34661518/454773\npages=[]\nfor EachPart in soup.select('p[class*=\"ToC\"]'):\n href=EachPart.find('a')['href']\n #Fudge to collect URLs of pages asssociated with report content\n if '#_' in href:\n pages.append(EachPart.find('a')['href'].split('#')[0])\npages=list(set(pages))\npages\n\n#We need to get the relative path for the page...\nimport os.path\n\nstub=os.path.split(url)\nstub\n\n#Grab all the pages in the report\nfor p in pages:\n r=requests.get('{}/{}'.format(stub[0],p))", "Report - Page Scraper\nFor each HTML Page in the report, extract references to oral evidence session questions and written evidence.", "pagesoup=BeautifulSoup(r.content, 'html.parser')\nprint(str(pagesoup.select('div[id=\"shellcontent\"]')[0])[:2000])\n\nimport re\n\ndef evidenceRef(pagesoup):\n qs=[]\n ws=[]\n #Grab list of questions\n for p in pagesoup.select('div[class=\"_idFootnote\"]'):\n #Find oral question numbers\n q=re.search(r'^.*\\s+(Q[0-9]*)\\s*$', p.find('p').text)\n if q:\n qs.append(q.group(1))\n\n #Find links to written evidence\n links=p.find('p').findAll('a')\n if len(links)>1:\n if links[1]['href'].startswith('http://data.parliament.uk/WrittenEvidence/CommitteeEvidence.svc/EvidenceDocument/'):\n ws.append(links[1].text.strip('()'))\n return qs, ws\n\nevidenceRef(pagesoup)\n\nqs=[]\nws=[]\nfor p in pages:\n r=requests.get('{}/{}'.format(stub[0],p))\n pagesoup=BeautifulSoup(r.content, 'html.parser')\n pagesoup.select('div[id=\"shellcontent\"]')[0]\n qstmp,wstmp= evidenceRef(pagesoup)\n qs += qstmp\n ws +=wstmp\n\npd.DataFrame(qs)[0].value_counts().head()\n\npd.DataFrame(ws)[0].value_counts().head()", "Report - Oral Session Page Scraper\nIs this reliably cribbed by link text Witnesses?", "#url='https://publications.parliament.uk/pa/cm201516/cmselect/cmwomeq/584/58414.htm'\n\nif url_witnesses is not None:\n 
r=requests.get('{}/{}'.format(stub[0],url_witnesses))\n pagesoup=BeautifulSoup(r.content, 'html.parser')\n \n l1=[t.text.split('\\t')[0] for t in pagesoup.select('h2[class=\"WitnessHeading\"]')]\n l2=pagesoup.select('table')\n \npd.DataFrame({'a':l1,'b':l2})\n\n#Just as easy to do this by hand\n\nitems=[]\n\nitems.append(['Tuesday 15 December 2015','Chris Giles', 'Economics Editor', 'The Financial Times','Q1', 'Q35'])\nitems.append(['Tuesday 15 December 2015','Dr Alison Parken', 'Women Adding Value to the Economy (WAVE)', 'Cardiff University','Q1', 'Q35'])\nitems.append(['Tuesday 15 December 2015','Professor Jill Rubery','', 'Manchester University','Q1', 'Q35'])\nitems.append(['Tuesday 15 December 2015','Sheila Wild', 'Founder', 'Equal Pay Portal','Q1', 'Q35'])\nitems.append(['Tuesday 15 December 2015','Professor the Baroness Wolf of Dulwich', \"King's College\", 'London','Q1', 'Q35'])\n\nitems.append(['Tuesday 15 December 2015','Neil Carberry', 'Director for Employment and Skills', 'CBI','Q36','Q58'])\nitems.append(['Tuesday 15 December 2015','Ann Francke', 'Chief Executive', 'Chartered Management Institute','Q36','Q58'])\nitems.append(['Tuesday 15 December 2015','Monika Queisser',' Senior Counsellor and Head of Social Policy', 'Organisation for Economic Cooperation and Development','Q36','Q58'])\n\nitems.append(['Tuesday 12 January 2016','Amanda Brown', 'Assistant General Secretary', 'NUT','Q59','Q99'])\nitems.append(['Tuesday 12 January 2016','Dr Sally Davies', 'President', \"Medical Women's Federation\",'Q59','Q99'])\nitems.append(['Tuesday 12 January 2016','Amanda Fone','Chief Executive Officer', 'F1 Recruitment and Search','Q59','Q99'])\nitems.append(['Tuesday 12 January 2016','Audrey Williams', 'Employment Lawyer and Partner',' Fox Williams','Q59','Q99'])\n\nitems.append(['Tuesday 12 January 2016','Anna Ritchie Allan', 'Project Manager', 'Close the Gap','Q100','Q136'])\nitems.append(['Tuesday 12 January 2016','Christopher Brooks', 'Policy Adviser', 'Age UK','Q100','Q136'])\nitems.append(['Tuesday 12 January 2016','Scarlet Harris', 'Head of Gender Equality', 'TUC','Q100','Q136'])\nitems.append(['Tuesday 12 January 2016','Mr Robert Stephenson-Padron', 'Managing Director', 'Penrose Care','Q100','Q136'])\n\nitems.append(['Tuesday 19 January 2016','Sarah Jackson', 'Chief Executive', 'Working Families','Q137','Q164'])\nitems.append(['Tuesday 19 January 2016','Adrienne Burgess', 'Joint Chief Executive and Head of Research', 'Fatherhood Institute','Q137','Q164'])\nitems.append(['Tuesday 19 January 2016','Maggie Stilwell', 'Partner', 'Ernst & Young LLP','Q137','Q164'])\n\nitems.append(['Tuesday 26 January 2016','Michael Newman', 'Vice-Chair', 'Discrimination Law Association','Q165','Q191'])\nitems.append(['Tuesday 26 January 2016','Duncan Brown', '','Institute for Employment Studies','Q165','Q191'])\nitems.append(['Tuesday 26 January 2016','Tim Thomas', 'Head of Employment and Skills', \"EEF, the manufacturers' association\",'Q165','Q191'])\n\nitems.append(['Tuesday 26 January 2016','Helen Fairfoul', 'Chief Executive', 'Universities and Colleges Employers Association','Q192','Q223'])\nitems.append(['Tuesday 26 January 2016','Emma Stewart', 'Joint Chief Executive Officer', 'Timewise Foundation','Q192','Q223'])\nitems.append(['Tuesday 26 January 2016','Claire Turner','', 'Joseph Rowntree Foundation','Q192','Q223'])\n\nitems.append(['Wednesday 10 February 2016','Rt Hon Nicky Morgan MP', 'Secretary of State for Education and Minister for Women and Equalities','Department for 
Education','Q224','Q296'])\nitems.append(['Wednesday 10 February 2016','Nick Boles MP', 'Minister for Skills', 'Department for Business, Innovation and Skills','Q224','Q296'])\n\n\ndf=pd.DataFrame(items,columns=['Date','Name','Role','Org','Qmin','Qmax'])\n#Cleaning check\ndf['Org']=df['Org'].str.strip()\ndf['n_qmin']=df['Qmin'].str.strip('Q').astype(int)\ndf['n_qmax']=df['Qmax'].str.strip('Q').astype(int)\ndf['session']=df['Qmin']+'-'+df['n_qmax'].astype(str)\ndf.head()", "Report - Written Evidence Scraper\nIs this reliably cribbed by link text Published written evidence?", "#url='https://publications.parliament.uk/pa/cm201516/cmselect/cmwomeq/584/58415.htm'\n\nall_written=[]\n\nif url_written is not None:\n r=requests.get('{}/{}'.format(stub[0],url_written))\n pagesoup=BeautifulSoup(r.content, 'html.parser')\n for p in pagesoup.select('p[class=\"EvidenceList1\"]'):\n #print(p)\n #Get rid of span tags\n for match in p.findAll('span[class=\"EvidenceList1Span\"]'):\n match.extract()\n all_written.append((p.contents[1].strip('()').strip(), p.find('a')['href'],p.find('a').text))\n\nwritten_df=pd.DataFrame(all_written)\nwritten_df.columns=['Org','URL','RefNumber']\nwritten_df.head()\n\ndef getSession(q):\n return df[(df['n_qmin']<=q) & (df['n_qmax']>=q)].iloc[0]['session']\n\ngetSession(33)\n\n#Report on sessions that included a question by count\n\ndf_qs=pd.DataFrame(qs, columns=['qn'])\ndf_qs['session']=df_qs['qn'].apply(lambda x: getSession(int(x.strip('Q'))) )\ns_qs_cnt=df_qs['session'].value_counts()\ns_qs_cnt\n\npd.concat([s_qs_cnt,df.groupby('session')['Org'].apply(lambda x: '; '.join(list(x)))],\n axis=1).sort_values('session',ascending=False)\n\n#Written evidence\ndf_ws=pd.DataFrame(ws,columns=['RefNumber'])\ndf_ws=df_ws.merge(written_df, on='RefNumber')\ndf_ws['Org'].value_counts().head()\n\n#Organisations that gave written and witness evidence\nset(df_ws['Org']).intersection(set(df['Org']))\n\n#Note there are more matches that are hidden by dirty data\n#- e.g. NUT and National Union of Teachers are presumably the same\n#- e.g. F1 Recruitment and Search and F1 Recruitment Ltd are presumably the same", "Scraping the Government Response", "url='https://publications.parliament.uk/pa/cm201617/cmselect/cmwomeq/963/96302.htm'\n\n#Inconsistency across different reports in terms of presentation, linking to evidence" ]
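Returning to the dirty-data note above (e.g. "NUT" vs "National Union of Teachers"), one possible cleaning step — not part of the original analysis — is a fuzzy match on organisation names with the standard-library difflib before intersecting the written and witness evidence sets. Simple string similarity catches near-duplicates such as the two "F1 Recruitment" variants, but not acronyms, which would still need a manual mapping.

```python
import difflib

def best_match(name, candidates, cutoff=0.6):
    # return the closest candidate organisation name, or None if nothing is similar enough
    matches = difflib.get_close_matches(name, candidates, n=1, cutoff=cutoff)
    return matches[0] if matches else None

written_orgs = ['F1 Recruitment Ltd', 'National Union of Teachers', 'Age UK']
witness_orgs = ['F1 Recruitment and Search', 'NUT', 'Age UK']

for org in witness_orgs:
    print(org, '->', best_match(org, written_orgs))
# 'F1 Recruitment and Search' matches 'F1 Recruitment Ltd'; 'NUT' finds no match
```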
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
PythonFreeCourse/Notebooks
week01/5_Input_and_Casting.ipynb
mit
[ "<img src=\"images/logo.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" alt=\"לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית.\">\n<p style=\"align: right; direction: rtl; float: right;\">קלט</p>\n<p style=\"align: right; direction: rtl; float: right;\">מה זה קלט?</p>\n<p style=\"text-align: right; direction: rtl; float: right;\">\nאחד הדברים הכיפיים ביותר בתכנות, הוא להשתמש ולהגיב לנתונים שנאספו ממקור חיצוני. נתונים אלו נקראים במינוח מקצועי <dfn>קלט</dfn>.<br>\nקלט יכול להגיע מכמה מקורות: מטקסט שהמשתמש מזין באמצעות המקלדת, או מקובץ חיצוני.<br>\nבשיעורים הקרובים נתמקד בקלט שהמשתמש מזין באמצעות המקלדת, ובעתיד נרחיב את היכולות שלנו בשיטות נוספות.\n</p>\n\n<p style=\"align: right; direction: rtl; float: right;\">דוגמאות לקלט</p>\n<ol style=\"text-align: right; direction: rtl; float: right; white-space: nowrap;\">\n<li>משתמש מזין בתוכנה באמצעות המקלדת את <mark>שם המשתמש והסיסמה שלו</mark>.</li>\n<li>משתמש ב־Google מזין <mark>שאילתה</mark>, ואז <mark>לוחץ על \"חיפוש\"</mark>.</li>\n<li>משתמש מזין את <mark>הקוד הסודי</mark> שלו בכספומט.</li>\n<li>משתמש פותח את הנעילה של הטלפון שלו באמצעות <mark>טביעת האצבע שלו</mark>.</li>\n<li>משתמש מכניס <mark>תמונה</mark> לתוכנת Photoshop, שתכליתה עריכת תמונות, כדי לבצע עליה פעולות גרפיות.</li>\n<li>משתמש פותח <mark>קובץ Word</mark> באמצעות התוכנה Microsoft Office.</li>\n</ol>\n\n<p style=\"align: right; direction: rtl; float: right;\">כיצד מקבלים קלט?</p>\n<p style=\"text-align: right; direction: rtl; float: right;\">\nכדי לקבל קלט מהמשתמש באמצעות המקלדת, נשתמש ב־<code dir=\"ltr\">input()</code>, כשבתוך הסוגריים נכתוב מחרוזת כלשהי שתוצג למשתמש (ראו דוגמה).<br>\nמטרת המחרוזת – להסביר למשתמש שאנחנו מצפים ממנו לקלט, ומאיזה סוג.<br>\nבואו נראה איך זה עובד:\n</p>", "name = input(\"Please enter your name: \")\nmessage = \"Hello, \" + name + \"!\"\nprint(message)", "<p style=\"text-align: right; direction: rtl; float: right;\">\nהשורה הראשונה היא החידוש פה: בשורה זו אנחנו מבקשים קלט מהמשתמש (את השם שלו), ושומרים את הקלט שהזין במשתנה בשם <code>name</code>.<br>\nברגע שפייתון מגיעה ל־<code dir=\"ltr\">input()</code>, היא עוצרת כל פעולה, עד שתקבל קלט מהמשתמש.<br>\nלאחר מכן היא \"מחליפה\" את <code dir=\"ltr\">input()</code> בקלט שקיבלה מהמשתמש.<br>\nלדוגמה, אם הזנתי כקלט <em>Moishalah</em>, מה שיקרה בפועל אלו השורות הבאות (השוו עם הקוד מלמעלה):\n</p>", "name = \"Moishalah\"\nmessage = \"Hello, \" + name + \"!\"\nprint(message)", "<p style=\"align: right; direction: rtl; float: right;\">תרגול</p>\n<p style=\"text-align: right; direction: rtl; float: right;\">\nכתבו קוד המבקש כקלט שלושה נתונים: שם פרטי, שם משפחה ותאריך לידה.<br>\nהקוד יציג למשתמש ברכה חביבה.<br>\nלדוגמה, עבור הנתונים <code>Israel</code>, <code>Cohen</code>, <code>22/07/1992</code>, הוא יציג:\n</p>\n\n\nHi Israel Cohen! 
Your birthday is on 22/07/1992.\n\n<p style=\"align: right; direction: rtl; float: right;\">המרת ערכים</p>\n<p style=\"text-align: right; direction: rtl; float: right;\">\nמי מכם שזוכר היטב את השיעורים הקודמים, ודאי יודע שלכל ערך שנכתוב יש סוג (או \"<dfn>טיפוס</dfn>\").\n</p>", "type(5)\n\ntype(1.5)\n\ntype('Hello')", "<p style=\"text-align: right; direction: rtl; float: right;\">\nאם אתם מרגישים שממש הספקתם לשכוח, שווה לכם להציץ <a href=\"3_Types.ipynb\">בפרק 3</a>, שמלמד על טיפוסים.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right;\">\nאם נשתעשע מעט עם <code dir=\"ltr\">input()</code>, נגלה מהר מאוד שלפעמים הקלט מהמשתמש לא מגיע אלינו בדיוק כמו שרצינו.<br>\nבואו נראה דוגמה:\n</p>", "moshe_apples = input(\"How many apples does Moshe have? \")\norly_apples = input(\"How many apples does Orly have? \")\napples_together = moshe_apples + orly_apples\nprint(\"Together, they have \" + apples_together + \" apples!\")", "<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\">\n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הזינו את הנתונים בסעיפים 1 עד 5 לתוכנית התפוחים של משה ואורלי המופיעה מעלה, ונסו להבין מה קרה.\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<ol style=\"text-align: right; direction: rtl; float: right; white-space: nowrap; list-style-position: outside; display: inline-block;\">\n<li style=\"white-space: nowrap;\">למשה יש <code>0</code> תפוחים, ולאורלי יש <code>5</code> תפוחים.</li>\n<li style=\"white-space: nowrap;\">למשה יש <code>2</code> תפוחים, ולאורלי יש <code>3</code> תפוחים.</li>\n<li style=\"white-space: nowrap;\">למשה יש <code dir=\"ltr\">-15</code> תפוחים, ולאורלי יש <code>2</code> תפוחים.</li>\n<li style=\"white-space: nowrap;\">למשה יש <code>2</code> תפוחים, ולאורלי יש <code dir=\"ltr\">-15</code> תפוחים.</li>\n<li style=\"white-space: nowrap;\">למשה יש <code>nananana</code> תפוחים, ולאורלי יש <code dir=\"ltr\">batman!</code> תפוחים.</li>\n</ol>\n\n<p style=\"align: right; direction: rtl; float: right;\">אז מה קרה בתרגול?</p>\n<p style=\"text-align: right; direction: rtl; float: right;\">\nאף על פי שרצינו להתייחס לקלט כאל נתון מספרי (<code>int</code>), פייתון החליטה להתייחס אליו כמחרוזת (<code>str</code>), ולכן חיברה בין מחרוזות ולא בין מספרים.<br>\nמכאן אנחנו לומדים חוק חשוב מאוד, שאם ניטיב לזכור אותו יחסוך לנו הרבה תקלות בעתיד:\n</p>\n\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl;\">\n <div style=\"display: flex; width: 10%; float: right; \">\n <img src=\"images/warning.png\" style=\"height: 50px !important;\" alt=\"אזהרה!\"> \n </div>\n <div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl;\">\n כשאנחנו מקבלים קלט באמצעות <code dir=\"ltr\">input()</code>, הערך שנקבל יהיה תמיד מטיפוס מחרוזת.\n </p>\n </div>\n</div>\n\n<p style=\"text-align: right; direction: rtl; float: right;\">\nשימו לב שניסיון לעשות פעולות בין טיפוסים שונים (כמו מחרוזת ומספר שלם) עלול לגרום לכם לשגיאות בתרגולים הקרובים.<br>\nנסו, לדוגמה, להריץ את הקוד הבא:\n</p>", "moshe_apples = input(\"How many 
apples does Moshe have? \")\nmoshe_apples = moshe_apples + 1 # Give Moshe a single apple\nprint(moshe_apples)", "<p style=\"align: right; direction: rtl; float: right;\">המרת טיפוסים (Casting)</p>\n<p style=\"text-align: right; direction: rtl; float: right;\">\n שפכנו ליטר מים לקערה עם 5 קוביות קרח. כמה יש בה עכשיו?<br>\n קשה לנו מאוד לענות על השאלה כיוון שהיא מנוסחת באופן גרוע ומערבת דברים מסוגים שונים. מאותה סיבה בדיוק לפייתון קשה עם הקוד מלמעלה.<br>\n נוכל להקפיא את המים ולמדוד כמה קרח יש בקערה, או להמיס את הקרח ולמדוד כמה מים יש בה.<br>\n בפייתון נצטרך להחליט מה אנחנו רוצים לעשות, ולהמיר את הערכים שאנחנו עובדים איתם לטיפוסים המתאימים לפני שנבצע את הפעולה.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right;\">\nנזכיר שהטיפוס של כל קלט שנקבל בעזרת <code dir=\"ltr\">input()</code> תמיד יהיה מחרוזת (<code dir=\"ltr\">str</code>):\n</p>", "color = input(\"What is your favorite color? \")\nprint(\"The type of the input \" + color + \" is...\")\ntype(color)\n\nage = input(\"What is your age? \")\nprint(\"The type of the input \" + age + \" is...\")\ntype(age)", "<p style=\"text-align: right; direction: rtl; float: right;\">\nכזכור, כל עוד הקלט שלנו הוא מסוג מחרוזת, פעולות כמו חיבור שלו עם מספר ייכשלו.<br>\nלכן נצטרך לדאוג ששניהם יהיו מאותו סוג על ידי המרה של אחד הערכים מסוג אחד לסוג אחר.<br>\nתהליך הפיכת ערך לסוג טיפוס אחר נקרא <dfn>המרת טיפוסים</dfn>, או <dfn>Casting</dfn> / <dfn>Type Conversion</dfn>.<br>\nאם נבחן את בעיית התפוחים של משה מהכותרת הקודמת:\n</p>", "moshe_apples = input(\"How many apples does Moshe have? \")\nmoshe_apples = moshe_apples + 1 # Give Moshe a single apple\nprint(moshe_apples)", "<p style=\"text-align: right; direction: rtl; float: right;\">\nנראה שהקוד לא יעבוד, כיוון שאין אפשרות לחבר בין מחרוזת (מספר התפוחים של משה מהקלט של המשתמש) לבין מספר (ה־1 שאנחנו רוצים להוסיף).<br>\nכיוון שהמטרה היא להוסיף תפוח 1 למספר מסוים של תפוחים, נבחר להמיר את <code>moshe_apples</code> למספר שלם (<code>int</code>) במקום מחרוזת (<code>str</code>).<br>\nנעשה זאת כך:\n</p>", "moshe_apples = input(\"How many apples does Moshe have? \")\nmoshe_apples = int(moshe_apples) # <--- Casting\nmoshe_apples = moshe_apples + 1\nprint(moshe_apples)", "<p style=\"text-align: right; direction: rtl; float: right;\">\nאיזה כיף, המרנו את מספר התפוחים של משה לערך מטיפוס שלם (שורה 2), ועכשיו הקוד עובד!<br>\nשימו לב שעכשיו אם נרצה להדפיס את מספר התפוחים לצד משפט שאומר \"למשה יש X תפוחים\", אנחנו עלולים להיתקל בבעיה.<br>\nהמשפט שאנחנו רוצים להדפיס הוא <code>str</code>, ומספר התפוחים שחישבנו ושננסה לשרשר אליו יהיה <code>int</code>.<br>\nראו איך זה ישפיע על התוכנית:\n</p>", "moshe_apples = input(\"How many apples does Moshe have? \")\nmoshe_apples = int(moshe_apples) # <--- Casting\nmoshe_apples = moshe_apples + 1\nprint(\"Moshe has \" + moshe_apples + \" apples\")", "<p style=\"text-align: right; direction: rtl; float: right;\">\nפייתון התריעה לפנינו שיש פה בעיה: בשורה האחרונה, היא לא מצליחה לחבר את מספר התפוחים עם המחרוזות הנמצאות בצדדיו.<br>\nמה הפתרון?<br>\nאם אמרתם להמיר את מספר התפוחים של משה למחרוזת, זה אכן יעבוד. נעשה את זה ככה:\n</p>", "moshe_apples = input(\"How many apples does Moshe have? 
\")\nmoshe_apples = int(moshe_apples) # <--- Casting to int\nmoshe_apples = moshe_apples + 1\nmoshe_apples = str(moshe_apples) # <--- Casting to str\nprint(\"Moshe has \" + moshe_apples + \" apples\")", "<p style=\"align: right; direction: rtl; float: right;\">טבלת המרה</p>\n<p style=\"text-align: right; direction: rtl; float: right;\">\nכדי להמיר לסוג מסוים, כל מה שאתם צריכים לדעת זה את הסוג שאליו אתם רוצים להמיר.<br>\nמשם פשוט בחרו את השם הרלוונטי מהטבלה שאתם כבר מכירים:\n</p>\n\n| צורת המרה | שם בפייתון | שם באנגלית | שם בעברית |\n|:----------------|:----------|:------------|----------:|\n| str(something) | str | string | מחרוזת |\n| int(something) | int | integer | מספר שלם |\n| float(something) | float | float | מספר עשרוני |\n<p style=\"align: right; direction: rtl; float: right;\">תרגול</p>\n<p style=\"align: right; direction: rtl; float: right;\">קרמבו</p>\n<p style=\"text-align: right; direction: rtl; float: right;\">\nלגברת עמיטלי מחוות ביסקוויט יש מפעל משוגע לייצור קרמבו.<br>\nאחד התפקידים במפעל הוא הרכבת קופסאות לאריזת הקרמבו ומילויה. אדון מרקוב מאייש תפקיד זה.<br>\nמרקוב מחליט על מימדיה של כל קופסת קרמבו טעים חדשה שהוא ממלא: כמה יחידות קרמבו יכנסו לגובה, כמה יכנסו לרוחב וכמה יכנסו לאורך האריזה.<br>\nבנו תוכנה שתעזור למרקוב לחשב את כמות הקרמבו שהוא הכניס לקופסה, לפי הנוסחה: <span style=\"display: inline-flex; direction: ltr\">$w \\times h \\times l$</span>, רוחב כפול גובה כפול אורך.<br>\n<em>לדוגמה</em>: התוכנה תקבל ממרקוב כקלט 3 עבור האורך, 4 עבור הרוחב ו־2.5 עבור הגובה, ותחזיר את הפלט <samp>30</samp>, שהוא <span style=\"display: inline-flex; direction: ltr\">$2.5 \\times 3 \\times 4$</span>.\n</p>\n\n<p style=\"align: right; direction: rtl; float: right;\">תה אמריקה II</p>\n<p style=\"text-align: right; direction: rtl; float: right;\">\nחזרו ל<a href=\"2_Arithmetics.ipynb\">מחברת 2</a>. זוכרים את התרגיל האחרון שהיה שם, על התה והמרת מעלות פרנהייט לצלזיוס?<br>\nבואו נבנה מחשבון פרנהייט לצלזיוס! בקשו מהמשתמש להכניס מספר בפרנהייט, והדפיסו את המספר בצלזיוס.<br>\nלהזכירכם, הנוסחה היא: <code>(5 חלקי 9) כפול (מעלות בפרנהייט פחות 32)</code>, או בכתיב מתמטי, <span style=\"display: inline-flex\">$C = \\frac{5}{9}\\times(F - 32)$</span>.<br>\nלדוגמה: עבור הקלט <em>212</em> התוכנה תדפיס 100, כיוון ש־212 מעלות פרנהייט הן 100 מעלות צלזיוס.\n</p>\n\n<p style=\"align: right; direction: rtl; float: right;\">מסטיק בזוקה</p>\n<p style=\"text-align: right; direction: rtl; float: right;\">\nלפי מסטיק בזוקה, עד גיל 21 תגיעו לירח. אנחנו פחות אופטימיים (לנו פשוט זה פחות עבד), ומנבאים לך הצלחה עד גיל 90.<br>\nכתוב תוכנה שמקבלת כקלט את השם שלך ואת הגיל שלך, ומחשבת בעוד כמה שנים תגיע לירח לפי הנבואה שלנו.<br>\nהתוכנה תדפיס את המשפט: <q dir=\"ltr\">X, wait another Y years.</q>, כאשר X יוחלף בשמך ו־Y יוחלף במספר השנים שתצטרך לחכות עד גיל 90.<br>\n<em>לדוגמה</em>: אם הכנסת לתוכנה שגילך הוא 25 ושמך הוא ים, התוכנה תדפיס:\n</p>\n\n\nYam, wait another 65 years." ]
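As an illustration, here is one possible solution sketch for the Fahrenheit-to-Celsius exercise above, using input() and casting exactly as introduced in this notebook; the prompt wording and variable names are our own choices, not a prescribed answer.

```python
# ask the user for a temperature in Fahrenheit (input() always returns a str)
fahrenheit = input("Enter a temperature in Fahrenheit: ")

# cast to float before doing arithmetic, then apply C = (5 / 9) * (F - 32)
fahrenheit = float(fahrenheit)
celsius = (5 / 9) * (fahrenheit - 32)

# cast back to str to build the output message
print("That is " + str(celsius) + " degrees Celsius.")
```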
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tsivula/becs-114.1311
demos_pystan/diabetes.ipynb
gpl-3.0
[ "Bayesian Logistic Regression with PyStan\nTODO: Work in progress \nAuthors: Jonah Gabry, Ben Goodrich, Aki Vehtari, Tuomas Sivula\nThe introduction to Bayesian logistic regression is from a CRAN vignette by Jonah Gabry and Ben Goodrich. CRAN vignette was modified to a R notebook by Aki Vehtari. Instead of wells data in CRAN vignette, Pima Indians data is used. The end of the notebook differs significantly from the CRAN vignette. The R notebook was ported to this Python notebook by Aki Vehtari and Tuomas Sivula.\nIntroduction\nThis vignette explains how to estimate generalized linear models (GLMs) for binary (Bernoulli) response variables using PyStan.\nThe four steps of a Bayesian analysis are\n\nSpecify a joint distribution for the outcome(s) and all the unknowns, which typically takes the form of a marginal prior distribution for the unknowns multiplied by a likelihood for the outcome(s) conditional on the unknowns. This joint distribution is proportional to a posterior distribution of the unknowns conditional on the observed data\nDraw from posterior distribution using Markov Chain Monte Carlo (MCMC).\nEvaluate how well the model fits the data and possibly revise the model.\nDraw from the posterior predictive distribution of the outcome(s) given interesting values of the predictors in order to visualize how a manipulation of a predictor affects (a function of) the outcome(s).\nThis notebook demonstrates Steps 1-3 when the likelihood is the product of conditionally independent binomial distributions (possibly with only one trial per observation).\n\nLikelihood\nFor a binomial GLM the likelihood for one observation $y$ can be written as a conditionally binomial PMF $$\\binom{n}{y} \\pi^{y} (1 - \\pi)^{n - y},$$ where $n$ is the known number of trials, $\\pi = g^{-1}(\\eta)$ is the probability of success and $\\eta = \\alpha + \\mathbf{x}^\\top \\boldsymbol{\\beta}$ is a linear predictor. For a sample of size $N$, the likelihood of the entire sample is the product of $N$ individual likelihood contributions.\nBecause $\\pi$ is a probability, for a binomial model the link function $g$ maps between the unit interval (the support of $\\pi$) and the set of all real numbers $\\mathbb{R}$. When applied to a linear predictor $\\eta$ with values in $\\mathbb{R}$, the inverse link function $g^{-1}(\\eta)$ therefore returns a valid probability between 0 and 1.\nThe two most common link functions used for binomial GLMs are the logit and probit functions. With the logit (or log-odds) link function $g(x) = \\ln{\\left(\\frac{x}{1-x}\\right)}$, the likelihood for a single observation becomes\n$$\\binom{n}{y}\\left(\\text{logit}^{-1}(\\eta)\\right)^y \\left(1 - \\text{logit}^{-1}(\\eta)\\right)^{n-y} = \\binom{n}{y} \\left(\\frac{e^{\\eta}}{1 + e^{\\eta}}\\right)^{y} \\left(\\frac{1}{1 + e^{\\eta}}\\right)^{n - y}$$\nand the probit link function $g(x) = \\Phi^{-1}(x)$ yields the likelihood\n$$\\binom{n}{y} \\left(\\Phi(\\eta)\\right)^{y} \\left(1 - \\Phi(\\eta)\\right)^{n - y},$$\nwhere $\\Phi$ is the CDF of the standard normal distribution. The differences between the logit and probit functions are minor and -- if, as rstanarm does by default, the probit is scaled so its slope at the origin matches the logit's -- the two link functions should yield similar results. 
Unless the user has a specific reason to prefer the probit link, we recommend the logit simply because it will be slightly faster and more numerically stable.\nIn theory, there are infinitely many possible link functions, although in practice only a few are typically used. \nPriors\nA full Bayesian analysis requires specifying prior distributions $f(\\alpha)$ and $f(\\boldsymbol{\\beta})$ for the intercept and vector of regression coefficients. \nAs an example, suppose we have $K$ predictors and believe --- prior to seeing the data --- that $\\alpha, \\beta_1, \\dots, \\beta_K$ are as likely to be positive as they are to be negative, but are highly unlikely to be far from zero. These beliefs can be represented by normal distributions with mean zero and a small scale (standard deviation).\nIf, on the other hand, we have less a priori confidence that the parameters will be close to zero then we could use a larger scale for the normal distribution and/or a distribution with heavier tails than the normal like the Student's $t$ distribution.\nPosterior\nWith independent prior distributions, the joint posterior distribution for $\\alpha$ and $\\boldsymbol{\\beta}$ is proportional to the product of the priors and the $N$ likelihood contributions:\n$$f\\left(\\alpha,\\boldsymbol{\\beta} | \\mathbf{y},\\mathbf{X}\\right) \\propto f\\left(\\alpha\\right) \\times \\prod_{k=1}^K f\\left(\\beta_k\\right) \\times \\prod_{i=1}^N { g^{-1}\\left(\\eta_i\\right)^{y_i} \\left(1 - g^{-1}\\left(\\eta_i\\right)\\right)^{n_i-y_i}}.$$\nThis is posterior distribution that PyStan will draw from when using MCMC.\nLogistic Regression Example\nWhen the logit link function is used the model is often referred to as a logistic regression model (the inverse logit function is the CDF of the standard logistic distribution). As an example, here we will show how to carry out a analysis for Pima Indians data set similar to analysis from Chapter 5.4 of Gelman and Hill (2007) using PyStan.", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# import stan interface\nimport pystan\n\n# add utilities directory to path\nimport os, sys\nutil_path = os.path.abspath(os.path.join(os.path.pardir, 'utilities_and_data'))\nif util_path not in sys.path and os.path.exists(util_path):\n sys.path.insert(0, util_path)\n\n# import from utilities\nimport stan_utility\nimport psis # pareto smoothed importance sampling\nimport plot_tools", "Data\nFirst we load and pre-process data.", "# load data\ndata_path = os.path.abspath(\n os.path.join(\n os.path.pardir,\n 'utilities_and_data',\n 'diabetes.csv'\n )\n)\ndata = pd.read_csv(data_path)\n# print some basic info()\ndata.info()\n\n# preview some first rows\ndata.head()\n\n# some summary\ndata.describe()", "Preprocess data", "# modify the data column names slightly for easier typing\n# rename DiabetesPedigreeFunction to dpf\ndata.rename(columns={'DiabetesPedigreeFunction': 'dpf'}, inplace=True)\n# make lower\ndata.rename(columns=lambda old_name: old_name.lower(), inplace=True)\n\n# removing those observation rows with 0 in selected variables\nnormed_predictors = [\n 'glucose',\n 'bloodpressure',\n 'skinthickness',\n 'insulin',\n 'bmi'\n]\ndata = data[(data[normed_predictors] != 0).all(axis=1)]\n\n# scale the covariates for easier comparison of coefficient posteriors\n# N.B. 
int columns turn into floats\ndata.iloc[:,:-1] -= data.iloc[:,:-1].mean()\ndata.iloc[:,:-1] /= data.iloc[:,:-1].std()\n\n# preview some first rows againg\ndata.head()\n\n# preparing the inputs\nX = data.iloc[:,:-1]\ny = data.iloc[:,-1]\n\n# get shape into variables\nn, p = X.shape\nprint('number of observations = {}'.format(n))\nprint('number of predictors = {}'.format(p))", "Stan model code for logistic regression\nLogistic regression with Student's $t$ prior as discussed above.", "with open('logistic_t.stan') as file:\n print(file.read())\n\nmodel = stan_utility.compile_model('logistic_t.stan')", "Set priors and sample from the posterior\nHere we'll use a Student t prior with 7 degrees of freedom and a scale of 2.5, which, as discussed above, is a reasonable default prior when coefficients should be close to zero but have some chance of being large. PyStan returns the posterior distribution for the parameters describing the uncertainty related to unknown parameter values.", "data1 = dict(\n n=n,\n d=p,\n X=X,\n y=y,\n p_alpha_df=7,\n p_alpha_loc=0,\n p_alpha_scale=2.5,\n p_beta_df=7,\n p_beta_loc=0,\n p_beta_scale=2.5\n)\nfit1 = model.sampling(data=data1, seed=74749)\nsamples1 = fit1.extract(permuted=True)", "Inspect the resulting posterior\nCheck n_effs and Rhats", "# print summary of selected variables\n# use pandas data frame for layout\nsummary = fit1.summary(pars=['alpha', 'beta'])\npd.DataFrame(\n summary['summary'],\n index=summary['summary_rownames'],\n columns=summary['summary_colnames']\n)", "n_effs are high and Rhats<1.1, which is good.\nNext we check divergences, E-BMFI and treedepth exceedences as explained in Robust Statistical Workflow with PyStan Case Study by Michael Betancourt.", "stan_utility.check_treedepth(fit1)\nstan_utility.check_energy(fit1)\nstan_utility.check_div(fit1)", "Everything is fine based on these diagnostics and we can proceed with our analysis.\nVisualise the marginal posterior distributions of each parameter", "# plot histograms\nfig, axes = plot_tools.hist_multi_sharex(\n [samples1['alpha']] + [sample for sample in samples1['beta'].T],\n rowlabels=['intercept'] + list(X.columns),\n n_bins=60,\n x_lines=0,\n figsize=(7, 10)\n)", "We can use Pareto smoothed importance sampling leave-one-out cross-validation to estimate the predictive performance.", "loo1, loos1, ks1 = psis.psisloo(samples1['log_lik'])\nloo1_se = np.sqrt(np.var(loos1, ddof=1)*n)\nprint('elpd_loo: {:.4} (SE {:.3})'.format(loo1, loo1_se))\n\n# check the number of large (> 0.5) Pareto k estimates\nnp.sum(ks1 > 0.5)", "Alternative horseshoe prior on weights\nIn this example, with $n \\gg p$ the difference is small, and thus we don’t expect much difference with a different prior and horseshoe prior is usually more useful for $n<p$.\nThe global scale parameter for horseshoe prior is chosen as recommended by Juho Piironen and Aki Vehtari (2017). On the Hyperprior Choice for the Global Shrinkage Parameter in the Horseshoe Prior. Journal of Machine Learning Research: Workshop and Conference Proceedings (AISTATS 2017 Proceedings), accepted for publication. 
arXiv preprint arXiv:1610.05559.", "with open('logistic_hs.stan') as file:\n print(file.read())\n\nmodel = stan_utility.compile_model('logistic_hs.stan')\n\np0 = 2 # prior guess for the number of relevant variables\ntau0 = p0 / (p - p0) * 1 / np.sqrt(n)\ndata2 = dict(\n n=n,\n d=p,\n X=X,\n y=y,\n p_alpha_df=7,\n p_alpha_loc=0,\n p_alpha_scale=2.5,\n p_beta_df=1,\n p_beta_global_df=1,\n p_beta_global_scale=tau0\n)\nfit2 = model.sampling(data=data2, seed=74749)\nsamples2 = fit2.extract(permuted=True)", "We see that the horseshoe prior has shrunk the posterior distribution of irrelevant features closer to zero, without affecting the posterior distribution of the relevant features.", "# print summary of selected variables\n# use pandas data frame for layout\nsummary = fit2.summary(pars=['alpha', 'beta'])\npd.DataFrame(\n summary['summary'],\n index=summary['summary_rownames'],\n columns=summary['summary_colnames']\n)\n\n# plot histograms\nfig, axes = plot_tools.hist_multi_sharex(\n [samples2['alpha']] + [sample for sample in samples2['beta'].T],\n rowlabels=['intercept'] + list(X.columns),\n n_bins=60,\n x_lines=0,\n figsize=(7, 10)\n)", "We compute LOO also for the model with Horseshoe prior. Expected log predictive density is higher, but not significantly. This is not surprising as this is a easy data with $n \\gg p$.", "loo2, loos2, ks2 = psis.psisloo(samples2['log_lik'])\nloo2_se = np.sqrt(np.var(loos2, ddof=1)*n)\nprint('elpd_loo: {:.4} (SE {:.3})'.format(loo2, loo2_se))\n\n# check the number of large (> 0.5) Pareto k estimates\nnp.sum(ks2 > 0.5)\n\nelpd_diff = loos2 - loos1\nelpd_diff_se = np.sqrt(np.var(elpd_diff, ddof=1)*n)\nelpd_diff = np.sum(elpd_diff)\nprint('elpd_diff: {:.4} (SE {:.3})'.format(elpd_diff, elpd_diff_se))" ]
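As a small post-processing sketch (an addition, not part of the original analysis), the posterior draws can also be turned into posterior predictive class probabilities using the inverse-logit link described at the top of this notebook. The shapes assumed below — samples1['alpha'] as (n_draws,) and samples1['beta'] as (n_draws, number of predictors) — are what fit.extract() above should return, but check them on your own run.

```python
import numpy as np
from scipy.special import expit  # inverse logit, i.e. the logistic function

# linear predictor for every posterior draw and every observation
eta = samples1['alpha'][:, None] + samples1['beta'] @ np.asarray(X).T  # (n_draws, n)

# posterior mean of the predicted success probability for each observation
p_hat = expit(eta).mean(axis=0)

print('mean predicted probability:', p_hat.mean())
print('in-sample accuracy at 0.5 :', np.mean((p_hat > 0.5) == y))
```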
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CompPhysics/MachineLearning
doc/src/Optimization/autodiff/examples_allowed_functions-Copy1.ipynb
cc0-1.0
[ "Examples of the supported features in Autograd\nBefore using Autograd for more complicated calculations, it might be useful to experiment with what kind of functions Autograd is capable of finding the gradient of. The following Python functions are just meant to illustrate what Autograd can do, but please feel free to experiment with other, possibly more complicated, functions as well!", "import autograd.numpy as np\nfrom autograd import grad", "Supported functions\nHere are some examples of supported function implementations that Autograd can differentiate. Keep in mind that this list over examples is not comprehensive, but rather explores which basic constructions one might often use. \nFunctions using simple arithmetics", "def f1(x):\n return x**3 + 1\n\nf1_grad = grad(f1)\n\n# Remember to send in float as argument to the computed gradient from Autograd!\na = 1.0\n\n# See the evaluated gradient at a using autograd:\nprint(\"The gradient of f1 evaluated at a = %g using autograd is: %g\"%(a,f1_grad(a)))\n\n# Compare with the analytical derivative, that is f1'(x) = 3*x**2 \ngrad_analytical = 3*a**2\nprint(\"The gradient of f1 evaluated at a = %g by finding the analytic expression is: %g\"%(a,grad_analytical))", "Functions with two (or more) arguments\nTo differentiate with respect to two (or more) arguments of a Python function, Autograd need to know at which variable the function if being differentiated with respect to.", "def f2(x1,x2):\n return 3*x1**3 + x2*(x1 - 5) + 1\n\n# By sending the argument 0, Autograd will compute the derivative w.r.t the first variable, in this case x1\nf2_grad_x1 = grad(f2,0)\n\n# ... and differentiate w.r.t x2 by sending 1 as an additional arugment to grad\nf2_grad_x2 = grad(f2,1)\n\nx1 = 1.0\nx2 = 3.0 \n\nprint(\"Evaluating at x1 = %g, x2 = %g\"%(x1,x2))\nprint(\"-\"*30)\n\n# Compare with the analytical derivatives:\n\n# Derivative of f2 w.r.t x1 is: 9*x1**2 + x2:\nf2_grad_x1_analytical = 9*x1**2 + x2\n\n# Derivative of f2 w.r.t x2 is: x1 - 5:\nf2_grad_x2_analytical = x1 - 5\n\n# See the evaluated derivations:\nprint(\"The derivative of f2 w.r.t x1: %g\"%( f2_grad_x1(x1,x2) ))\nprint(\"The analytical derivative of f2 w.r.t x1: %g\"%( f2_grad_x1(x1,x2) ))\n\nprint()\n\nprint(\"The derivative of f2 w.r.t x2: %g\"%( f2_grad_x2(x1,x2) ))\nprint(\"The analytical derivative of f2 w.r.t x2: %g\"%( f2_grad_x2(x1,x2) ))", "Note that the grad function will not produce the true gradient of the function. The true gradient of a function with two or more variables will produce a vector, where each element is the function differentiated w.r.t a variable. \nFunctions using the elements of its argument directly", "def f3(x): # Assumes x is an array of length 5 or higher\n return 2*x[0] + 3*x[1] + 5*x[2] + 7*x[3] + 11*x[4]**2\n\nf3_grad = grad(f3)\n\nx = np.linspace(0,4,5)\n\n# Print the computed gradient:\nprint(\"The computed gradient of f3 is: \", f3_grad(x))\n\n# The analytical gradient is: (2, 3, 5, 7, 22*x[4])\nf3_grad_analytical = np.array([2, 3, 5, 7, 22*x[4]])\n\n# Print the analytical gradient:\nprint(\"The analytical gradient of f3 is: \", f3_grad_analytical)", "Note that in this case, when sending an array as input argument, the output from Autograd is another array. This is the true gradient of the function, as opposed to the function in the previous example. By using arrays to represent the variables, the output from Autograd might be easier to work with, as the output is closer to what one could expect form a gradient-evaluting function. 
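To make the remark about the true gradient concrete, here is one simple way (an addition to the notebook) to assemble the full gradient vector of the two-argument function f2 by stacking the partial derivatives that grad already gives us.

```python
import autograd.numpy as np
from autograd import grad

def f2(x1, x2):
    return 3 * x1**3 + x2 * (x1 - 5) + 1

# evaluate each partial derivative, then stack them into the gradient vector
f2_grad_x1 = grad(f2, 0)
f2_grad_x2 = grad(f2, 1)

def f2_full_gradient(x1, x2):
    return np.array([f2_grad_x1(x1, x2), f2_grad_x2(x1, x2)])

x1, x2 = 1.0, 3.0
print("Full gradient of f2 at (1, 3):", f2_full_gradient(x1, x2))
# analytical gradient: (9*x1**2 + x2, x1 - 5) = (12, -4)
```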
\nFunctions using mathematical functions from Numpy", "def f4(x):\n return np.sqrt(1+x**2) + np.exp(x) + np.sin(2*np.pi*x)\n\nf4_grad = grad(f4)\n\nx = 2.7\n\n# Print the computed derivative:\nprint(\"The computed derivative of f4 at x = %g is: %g\"%(x,f4_grad(x)))\n\n# The analytical derivative is: x/sqrt(1 + x**2) + exp(x) + cos(2*pi*x)*2*pi\nf4_grad_analytical = x/np.sqrt(1 + x**2) + np.exp(x) + np.cos(2*np.pi*x)*2*np.pi\n\n# Print the analytical gradient:\nprint(\"The analytical gradient of f4 at x = %g is: %g\"%(x,f4_grad_analytical))", "Functions using if-else tests", "def f5(x):\n if x >= 0:\n return x**2\n else:\n return -3*x + 1\n\nf5_grad = grad(f5)\n\nx = 2.7\n\n# Print the computed derivative:\nprint(\"The computed derivative of f5 at x = %g is: %g\"%(x,f5_grad(x)))\n\n# The analytical derivative is: \n# if x >= 0, then 2*x\n# else -3\n\nif x >= 0:\n f5_grad_analytical = 2*x\nelse:\n f5_grad_analytical = -3\n\n\n# Print the analytical derivative:\nprint(\"The analytical derivative of f5 at x = %g is: %g\"%(x,f5_grad_analytical))", "Functions using for- and while loops", "def f6_for(x):\n val = 0\n for i in range(10):\n val = val + x**i\n return val\n\ndef f6_while(x):\n val = 0\n i = 0\n while i < 10:\n val = val + x**i\n i = i + 1\n return val\n\nf6_for_grad = grad(f6_for)\nf6_while_grad = grad(f6_while)\n\nx = 0.5\n\n# Print the computed derivaties of f6_for and f6_while\nprint(\"The computed derivative of f6_for at x = %g is: %g\"%(x,f6_for_grad(x)))\nprint(\"The computed derivative of f6_while at x = %g is: %g\"%(x,f6_while_grad(x)))\n\n# Both of the functions are implementation of the sum: sum(x**i) for i = 0, ..., 9\n# The analytical derivative is: sum(i*x**(i-1)) \nf6_grad_analytical = 0\nfor i in range(10):\n f6_grad_analytical += i*x**(i-1)\n\nprint(\"The analytical derivative of f6 at x = %g is: %g\"%(x,f6_grad_analytical))", "Functions using recursion", "def f7(n): # Assume that n is an integer\n if n == 1 or n == 0:\n return 1\n else:\n return n*f7(n-1)\n\nf7_grad = grad(f7)\n\nn = 2.0\n\nprint(\"The computed derivative of f7 at n = %d is: %g\"%(n,f7_grad(n)))\n\n# The function f7 is an implementation of the factorial of n.\n# By using the product rule, one can find that the derivative is:\n\nf7_grad_analytical = 0\nfor i in range(int(n)-1):\n tmp = 1\n for k in range(int(n)-1):\n if k != i:\n tmp *= (n - k)\n f7_grad_analytical += tmp\n\nprint(\"The analytical derivative of f7 at n = %d is: %g\"%(n,f7_grad_analytical))", "Note that if n is equal to zero or one, Autograd will give an error message. This message appears when the output is independent on input. \nUnsupported functions\nAutograd supports many features. However, there are some functions that is not supported (yet) by Autograd.\nAssigning a value to the variable being differentiated with respect to", "def f8(x): # Assume x is an array\n x[2] = 3\n return x*2\n\nf8_grad = grad(f8)\n\nx = 8.4\n\nprint(\"The derivative of f8 is:\",f8_grad(x))", "Here, Autograd tells us that an 'ArrayBox' does not support item assignment. The item assignment is done when the program tries to assign x[2] to the value 3. However, Autograd has implemented the computation of the derivative such that this assignment is not possible. 
\nThe syntax a.dot(b) when finding the dot product", "def f9(a): # Assume a is an array with 2 elements\n b = np.array([1.0,2.0])\n return a.dot(b)\n\nf9_grad = grad(f9)\n\nx = np.array([1.0,0.0])\n\nprint(\"The derivative of f9 is:\",f9_grad(x))", "Here we are told that the 'dot' function does not belong to Autograd's version of a Numpy array.\nTo overcome this, an alternative syntax which also computes the dot product can be used:", "def f9_alternative(x): # Assume x is an array with 2 elements\n b = np.array([1.0,2.0])\n return np.dot(x,b) # The same as x_1*b_1 + x_2*b_2\n\nf9_alternative_grad = grad(f9_alternative)\n\nx = np.array([3.0,0.0])\n\nprint(\"The gradient of f9_alternative is:\",f9_alternative_grad(x))\n\n# The analytical gradient of the dot product of vectors x and b with two elements (x_1,x_2) and (b_1, b_2) respectively\n# w.r.t x is (b_1, b_2).", "Recommended to avoid\nThe documentation recommends avoiding in-place operations such as" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kylepjohnson/notebooks
lda/leipzig/1_lda_tests/1a Setup and preprocess docs.ipynb
mit
[ "Setup CLTK and import TLG\nFollow the setup instructions from the CLTK tutorial here.", "from cltk.corpus.utils.importer import CorpusImporter\n\nmy_greek_downloader = CorpusImporter('greek')\n\nmy_greek_downloader.import_corpus('tlg', '~/corpora/TLG_E/')", "Pre-process TLG E corpus\nCovert Beta Code to Unicode\nhttp://docs.cltk.org/en/latest/greek.html#converting-tlg-texts-with-tlgu", "from cltk.corpus.greek.tlgu import TLGU\n\ntlgu = TLGU()\ntlgu.convert_corpus(corpus='tlg') # writes to: ~/cltk_data/greek/text/tlg/plaintext/", "Cleanup texts\nOverwrite the plaintext files with more aggresive cleanup, but keep periods.\nhttp://docs.cltk.org/en/latest/greek.html#text-cleanup", "!head ~/cltk_data/greek/text/tlg/plaintext/TLG0437.TXT\n\nfrom cltk.corpus.utils.formatter import tlg_plaintext_cleanup\nimport os\n\nplaintext_dir = os.path.expanduser('~/cltk_data/greek/text/tlg/plaintext/')\nfiles = os.listdir(plaintext_dir)\n\nfor file in files:\n file = os.path.join(plaintext_dir, file)\n with open(file) as file_open:\n file_read = file_open.read()\n\n clean_text = tlg_plaintext_cleanup(file_read, rm_punctuation=True, rm_periods=False)\n clean_text = clean_text.lower()\n with open(file, 'w') as file_open:\n file_open.write(clean_text)\n\n!head ~/cltk_data/greek/text/tlg/plaintext/TLG0437.TXT " ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
JesseLivezey/science-programming
integration/Numerical Integration.ipynb
mit
[ "import numpy as np\nfrom matplotlib import pyplot as plt\n%pylab inline", "Numerically Integrating Newton's Second Law\nThere are many times in physics when you want to know the solution to a differential equation that you can't solve analytically. This comes up in fields from ranging from astrophysics to biophysics to particle physics. Once you change from finding exact solutions to numerical solutions, all sorts of nuaced difficulties can arise. We'll explore a few examples in this workbook.\nThe Simple Harmonic Oscillator\nLet's start with a system where we know the answer so that we'll have something concrete to compare against. For a simple harmonic oscillator we know that the acceleration comes from a spring force:\n$$\\ddot x=-\\tfrac{k}{m}x.$$\nWe know that the solution to this differential equation is:\n$$x(t) = A\\sin{\\sqrt{\\tfrac{k}{m}}t}.$$\nLet's work on integrating it numerically. The simplest way of integrating this equation is to use a \"delta\" approximation for the derivates.\n$$\\tfrac{\\Delta v}{\\Delta t}=-\\tfrac{k}{m}x\\implies\\Delta v=-\\tfrac{k}{m}x\\Delta t = a\\Delta t$$\n$$\\tfrac{\\Delta x}{\\Delta t}=v\\implies\\Delta x=v\\Delta t$$\nCombining these, we can work out that one way of integrating this equation to find $x(t)$ is:\nProblem 2.1 on the worksheet.\n$$v_{t+\\Delta t}=v_{t}+a\\Delta=v_{t}-\\tfrac{k}{m}x\\Delta t$$\n$$x_{t+\\Delta t}=x_{t}+v_{t}\\Delta t.$$\nLet's set this up!", "# Input Values\ntime = 40.\ndelta_t = .1\ntime_steps = int(time/delta_t)\n# Create arrays for storing variables\nx = np.zeros(time_steps)\nv = np.zeros(time_steps)\nt = np.zeros(time_steps)\n# Set initial values to \"stretched\"\nx[0] = 1.\nv[0] = 0.\nt[0] = 0.\nomega = .75\n# Create function to calculate acceleration\ndef accel(x,omegaIn):\n return -omegaIn**2*x\n\n# Solve the equation\nfor tt in xrange(time_steps-1):\n v[tt+1] = v[tt]+accel(x[tt],omega)*delta_t#\n x[tt+1] = x[tt]+v[tt]*delta_t\n t[tt+1] = t[tt]+delta_t\n# Find exact answers\nxExact = x[0]*np.cos(omega*t)\nvExact = -x[0]*omega*np.sin(omega*t)", "Plots", "plt.plot(t,x,'ro',label='x(t)')\nplot(t,v,'bo',label='v(t)')\nlegend()\nplt.ylim((-1.5,1.5))\nplt.title('Numerical x(t) and v(t)')\nplt.figure()\nplt.plot(t,xExact,'ro',label='x(t)')\nplot(t,vExact,'bo',label='v(t)')\nlegend()\nplt.ylim((-1.5,1.5))\nplt.title('Exact x(t) and v(t)')", "Try a few different values of delta_t. What happens as you make delta_t larger?\nOne subtle problem with the method we are using above is that it may not be conserving energy. You can see this happening as the amplitude grows over time. Let's try creating a quick \"hack\" to fix this.\nCopy the position and velocity code from above. After each update, rescale the velocity so that energy is conserved.", "# Code goes here:", "What about friction?\nHow could you incorporate a drag force into this program? You can assume the drag force is proportional to velocity:\n$$F_\\text{drag} = -b v$$\nCopy your code from above and add in a drag term. Do the resulting plots make sense?", "# Code goes here:\n\n# Plots go here:" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google-research/language
language/multiberts/multi_vs_original.ipynb
apache-2.0
[ "MultiBERTs vs. Original BERT\nHere, we'll compare the MultiBERTs models run for 2M steps with the single previously-released bert-base-uncased model, as described in Appendix E.2 of the paper. Our analysis will be unpaired with respect to seeds, but we'll still sample jointly over examples in the evaluation set and report confidence intervals as described in Section 3 of the paper.\nWe'll use SQuAD 2.0 here, but the code below can easily be modified to handle other tasks.", "import json\nimport os\nimport re\n\nimport numpy as np\nimport pandas as pd\n\nfrom tqdm.notebook import tqdm # for progress indicator\n\nscratch_dir = \"/tmp/multiberts_squad\"\nif not os.path.isdir(scratch_dir): \n os.mkdir(scratch_dir)\n \npreds_root = \"https://storage.googleapis.com/multiberts/public/example-predictions/SQuAD\"\n# Fetch SQuAD eval script. Rename to allow module import, as this is invalid otherwise.\n!curl $preds_root/evaluate-v2.0.py -o $scratch_dir/evaluate_squad2.py\n# Fetch development set labels\n!curl -O $preds_root/dev-v2.0.json --output-dir $scratch_dir\n# Fetch predictions index file\n!curl -O $preds_root/index.tsv --output-dir $scratch_dir\n\n!ls $scratch_dir", "Load the run metadata. You can also just look through the directory, but this index file is convenient if (as we do here) you only want to download some of the files.", "run_info = pd.read_csv(os.path.join(scratch_dir, 'index.tsv'), sep='\\t')\n# Filter to SQuAD 2.0 runs from either 2M MultiBERTs or the original BERT checkpoint (\"public\").\nmask = run_info.task == \"v2.0\"\nmask &= (run_info.n_steps == \"2M\") | (run_info.release == 'public')\nrun_info = run_info[mask]\nrun_info\n\n# Download all prediction files\nfor fname in tqdm(run_info.file):\n !curl $preds_root/$fname -o $scratch_dir/$fname --create-dirs --silent\n\n!ls $scratch_dir/v2.0", "Now we should have everything in our scratch directory, and can load individual predictions.\nSQuAD has a monolithic eval script that isn't easily compatible with a bootstrap procedure (among other things, it parses a lot of JSON, and you don't want to do that in the inner loop!). Ultimately, though, it relies on computing some point-wise scores (exact-match $\\in {0,1}$ and F1 $\\in [0,1]$) and averaging these across examples. For efficiency, we'll pre-compute these before running our bootstrap.", "# Import the SQuAD 2.0 eval script; we'll use some functions from this below.\nimport sys\nsys.path.append(scratch_dir)\nimport evaluate_squad2 as squad_eval\n\n# Load dataset\nwith open(os.path.join(scratch_dir, 'dev-v2.0.json')) as fd:\n dataset = json.load(fd)['data']", "The official script supports thresholding for no-answer, but the default settings ignore this and treat only predictions of emptystring (\"\") as no-answer. So, we can score on exact_raw and f1_raw directly.", "exact_scores = {} # filename -> qid -> score\nf1_scores = {} # filename -> qid -> score\nfor fname in tqdm(run_info.file):\n with open(os.path.join(scratch_dir, fname)) as fd:\n preds = json.load(fd)\n \n exact_raw, f1_raw = squad_eval.get_raw_scores(dataset, preds)\n exact_scores[fname] = exact_raw\n f1_scores[fname] = f1_raw\n \ndef dict_of_dicts_to_matrix(dd):\n \"\"\"Convert a scores to a dense matrix.\n \n Outer keys assumed to be rows, inner keys are columns (e.g. 
example IDs).\n Uses pandas to ensure that different rows are correctly aligned.\n \n Args:\n dd: map of row -> column -> value\n \n Returns:\n np.ndarray of shape [num_rows, num_columns]\n \"\"\"\n # Use pandas to ensure keys are correctly aligned.\n df = pd.DataFrame(dd).transpose()\n return df.values\n\nexact_scores = dict_of_dicts_to_matrix(exact_scores)\nf1_scores = dict_of_dicts_to_matrix(f1_scores)\n\nexact_scores.shape", "Run multibootstrap\nbase (L) is the original BERT checkpoint, expt (L') is MultiBERTs with 2M steps. Since we pre-computed the pointwise exact match and F1 scores for each run and each example, we can just pass dummy labels and use a simple average over predictions as our scoring function.", "import multibootstrap\n\nnum_bootstrap_samples = 1000\n\nselected_runs = run_info.copy()\nselected_runs['seed'] = selected_runs['pretrain_id']\nselected_runs['intervention'] = (selected_runs['release'] == 'multiberts')\n\n# Dummy labels\ndummy_labels = np.zeros_like(exact_scores[0]) # [num_examples]\nscore_fn = lambda y_true, y_pred: np.mean(y_pred)\n\n# Targets; run once for each.\ntargets = {'exact': exact_scores, 'f1': f1_scores}\n\nstats = {}\nfor name, preds in targets.items():\n print(f\"Metric: {name:s}\")\n samples = multibootstrap.multibootstrap(selected_runs, preds, dummy_labels, score_fn,\n nboot=num_bootstrap_samples,\n paired_seeds=False,\n progress_indicator=tqdm)\n stats[name] = multibootstrap.report_ci(samples, c=0.95)\n print(\"\")\n\npd.concat({k: pd.DataFrame(v) for k,v in stats.items()}).transpose()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tacticsiege/TacticToolkit
examples/2017-09-11_TacticToolkit_Intro.ipynb
mit
[ "%%html\n<style>\ntable {float:left}\n</style>", "TacticToolkit Introduction\nTacticToolkit is a codebase to assist with machine learning and natural language processing. We build on top of sklearn, tensorflow, keras, nltk, spaCy and other popular libraries. The TacticToolkit will help throughout; from data acquisition to preprocessing to training to inference. \n| Modules | Description |\n|---------------|------------------------------------------------------|\n| corpus | Load and work with text corpora |\n| data | Data generation and common data functions |\n| plotting | Predefined and customizable plots |\n| preprocessing | Transform and clean data in preparation for training |\n| sandbox | Newer experimental features and references |\n| text | Text manipulation and processing |", "# until we can install, add parent dir to path so ttk is found\nimport sys\nsys.path.insert(0, '..')\n\n# basic imports\nimport pandas as pd\nimport numpy as np\nimport re\n\nimport matplotlib\n%matplotlib inline\nmatplotlib.rcParams['figure.figsize'] = (10.0, 8.0)\n\nimport matplotlib.pyplot as plt", "Let's start with some text\nThe ttk.text module includes classes and functions to make working with text easier. These are meant to supplement existing nltk and spaCy text processing, and often work in conjunction with these libraries. Below is an overview of some of the major components. We'll explore these objects with some simple text now.\n| Class | Purpose |\n|-----------------|------------------------------------------------------------------------|\n| Normalizer | Normalizes text by formatting, stemming and substitution |\n| Tokenizer | High level tokenizer, provides word, sentence and paragraph tokenizers |", "# simple text normalization\n# apply individually\n# apply to sentences\n\n# simple text tokenization\n# harder text tokenization\n# sentence tokenization\n# paragraph tokenization", "Corpii? Corpuses? Corpora!\nThe ttk.corpus module builds on the nltk.corpus model, adding new corpus readers and corpus processing objects. It also includes loading functions for the corpora included with ttk, which will download the content from github as needed. \nWe'll use the Dated Headline corpus included with ttk. This corpus was created using ttk, and is maintained in a complimentary github project, TacticCorpora (https://github.com/tacticsiege/TacticCorpora).\nFirst, a quick look at the corpus module's major classes and functions.\n| Class | Purpose |\n|------------------------------|----------------------------------------------------------------------------------|\n|CategorizedDatedCorpusReader |Extends nltk's CategorizedPlainTextCorpusReader to include a second category, Date| \n|CategorizedDatedCorpusReporter|Summarizes corpora. Filterable, and output can be str, list or DataFrame |\n| Function | Purpose |\n|--------------------------------------|-------------------------------------------------------------------------|\n| load_headline_corpus(with_date=True) | Loads Categorized or CategorizedDated CorpusReader from headline data |", "from ttk.corpus import load_headline_corpus\n\n# load the dated corpus. 
\n# This will attempt to download the corpus from github if it is not present locally.\ncorpus = load_headline_corpus(verbose=True)\n\n# inspect categories\nprint (len(corpus.categories()), 'categories')\nfor cat in corpus.categories():\n print (cat)\n\n# all main corpus methods allow lists of categories and dates filters\nd = '2017-08-22'\nprint (len(corpus.categories(dates=[d])), 'categories')\nfor cat in corpus.categories(dates=[d]):\n print (cat)\n\n# use the Corpus Reporters to get summary reports\nfrom ttk.corpus import CategorizedDatedCorpusReporter\nreporter = CategorizedDatedCorpusReporter()\n\n# summarize categories\nprint (reporter.category_summary(corpus))\n\n# reporters can return str, list or dataframe\nfor s in reporter.date_summary(corpus,\n dates=['2017-08-17', '2017-08-18', '2017-08-19',], \n output='list'):\n print (s)\n\ncat_frame = reporter.category_summary(corpus,\n categories=['BBC', 'CNBC', 'CNN', 'NPR',],\n output='dataframe')\ncat_frame.head()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
parambharat/ML-Programs
P0:_Titanic_Survival/Titanic_Survival_Exploration.ipynb
mit
[ "Machine Learning Engineer Nanodegree\nIntroduction and Foundations\nProject 0: Titanic Survival Exploration\nIn 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.\n\nTip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. \n\nGetting Started\nTo begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.\nRun the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.\n\nTip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.", "import numpy as np\nimport pandas as pd\n\n# RMS Titanic data visualization code \nfrom titanic_visualizations import survival_stats\nfrom IPython.display import display\n%matplotlib inline\n\n# Load the dataset\nin_file = 'titanic_data.csv'\nfull_data = pd.read_csv(in_file)\n\n# Print the first few entries of the RMS Titanic data\ndisplay(full_data.head())", "From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:\n- Survived: Outcome of survival (0 = No; 1 = Yes)\n- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)\n- Name: Name of passenger\n- Sex: Sex of the passenger\n- Age: Age of the passenger (Some entries contain NaN)\n- SibSp: Number of siblings and spouses of the passenger aboard\n- Parch: Number of parents and children of the passenger aboard\n- Ticket: Ticket number of the passenger\n- Fare: Fare paid by the passenger\n- Cabin Cabin number of the passenger (Some entries contain NaN)\n- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)\nSince we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.\nRun the code block cell to remove Survived as a feature of the dataset and store it in outcomes.", "# Store the 'Survived' feature in a new variable and remove it from the dataset\noutcomes = full_data['Survived']\ndata = full_data.drop('Survived', axis = 1)\n\n# Show the new dataset with 'Survived' removed\ndisplay(data.head())", "The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. 
That means for any passenger data.loc[i], they have the survival outcome outcome[i].\nTo measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers. \nThink: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?", "def accuracy_score(truth, pred):\n \"\"\" Returns accuracy score for input truth and predictions. \"\"\"\n \n # Ensure that the number of predictions matches number of outcomes\n if len(truth) == len(pred): \n \n # Calculate and return the accuracy as a percent\n return \"Predictions have an accuracy of {:.2f}%.\".format((truth == pred).mean()*100)\n \n else:\n return \"Number of predictions does not match number of outcomes!\"\n \n# Test the 'accuracy_score' function\npredictions = pd.Series(np.ones(5, dtype = int))\nprint accuracy_score(predictions, outcomes[:5])", "Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.\n\nMaking Predictions\nIf we were told to make a prediction about any passenger aboard the RMS Titanic who we did not know anything about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers as a whole did not survive the ship sinking.\nThe function below will always predict that a passenger did not survive.", "def predictions_0(data):\n \"\"\" Model with no features. Always predicts a passenger did not survive. \"\"\"\n\n predictions = []\n for _, passenger in data.iterrows():\n \n # Predict the survival of 'passenger'\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_0(data)", "Question 1\nUsing the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: Replace this text with the prediction accuracy you found above.\nLet's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.\nRun the code cell below to plot the survival outcomes of passengers based on their sex.", "survival_stats(data, outcomes, 'Sex')", "Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. 
Otherwise, we will predict the passenger did not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.", "def predictions_1(data):\n \"\"\" Model with one feature: \n - Predict a passenger survived if they are female. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n predictions.append(1) if passenger['Sex'] == 'female' else predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_1(data)", "Question 2\nHow accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: Replace this text with the prediction accuracy you found above.\nUsing just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. Consider, for example, all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.\nRun the code cell below to plot the survival outcomes of male passengers based on their age.", "survival_stats(data, outcomes, 'Age', [\"Sex == 'male'\"])", "Examining the survival statistics, the majority of males younger then 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.", "def predictions_2(data):\n \"\"\" Model with two features: \n - Predict a passenger survived if they are female.\n - Predict a passenger survived if they are male and younger than 10. 
\"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n if passenger[\"Sex\"] == 'female' or (passenger[\"Sex\"] == 'male' and passenger[\"Age\"] < 10):\n predictions.append(1)\n else:\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictpassenger['Sex'] == 'female'ions\npredictions = predictions_2(data)", "Question 3\nHow accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: Replace this text with the prediction accuracy you found above.\nAdding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. \nPclass, Sex, Age, SibSp, and Parch are some suggested features to try.\nUse the survival_stats function below to to examine various survival statistics.\nHint: To use mulitple filter conditions, put each condition in the list passed as the last argument. Example: [\"Sex == 'male'\", \"Age &lt; 18\"]", "survival_stats(data, outcomes, 'Age', [\"Sex == 'male'\", \"Age < 18\"])", "After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.\nMake sure to keep track of the various features and conditions you tried before arriving at your final prediction model.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.", "def predictions_3(data):\n \"\"\" Model with multiple features. Makes a prediction with an accuracy of at least 80%. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n if passenger[\"Pclass\"] < 3 and (passenger['Sex'] == 'female' or passenger['Age'] < 15):\n predictions.append(1)\n else:\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_3(data)", "Question 4\nDescribe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?\nHint: Run the code cell below to see the accuracy of your predictions.", "print accuracy_score(outcomes, predictions)", "Answer: Replace this text with your answer to the question above.\nConclusion\nCongratulations on what you've accomplished here! You should now have an algorithm for predicting whether or not a person survived the Titanic disaster, based on their features. In fact, what you have done here is a manual implementation of a simple machine learning model, the decision tree. In a decision tree, we split the data into smaller groups, one feature at a time. 
Each of these splits will result in groups that are more homogeneous than the original group, so that our predictions become more accurate. The advantage of having a computer do things for us is that it will be more exhaustive and more precise than our manual exploration above. This link provides another introduction into machine learning using a decision tree.\nA decision tree is just one of many algorithms that fall into the category of supervised learning. In this Nanodegree, you'll learn about supervised learning techniques first. In supervised learning, we concern ourselves with using features of data to predict or model things with objective outcome labels. That is, each of our datapoints has a true outcome value, whether that be a category label like survival in the Titanic dataset, or a continuous value like predicting the price of a house.\nQuestion 5\nCan you think of an example of where supervised learning can be applied?\nHint: Be sure to note the outcome variable to be predicted and at least two features that might be useful for making the predictions.\nAnswer: Replace this text with your answer to the question above.\n\nTip: If we want to share the results of our analysis with others, we aren't limited to giving them a copy of the iPython Notebook (.ipynb) file. We can also export the Notebook output in a form that can be opened even for those without Python installed. From the File menu in the upper left, go to the Download as submenu. You can then choose a different format that can be viewed more generally, such as HTML (.html) or\nPDF (.pdf). You may need additional packages or software to perform these exports." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
citxx/sis-python
crash-course/builtin-sort.ipynb
mit
[ "<h1>Содержание<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Встроенная-сортировка\" data-toc-modified-id=\"Встроенная-сортировка-1\">Встроенная сортировка</a></span></li><li><span><a href=\"#Сортировка-в-обратном-порядке\" data-toc-modified-id=\"Сортировка-в-обратном-порядке-2\">Сортировка в обратном порядке</a></span></li><li><span><a href=\"#Сортировка-по-ключу\" data-toc-modified-id=\"Сортировка-по-ключу-3\">Сортировка по ключу</a></span></li></ul></div>\n\nВстроенная сортировка\nВ Python есть метод sort для списков.", "a = [5, 3, -2, 9, 1]\n\n# Метод sort меняет существующий список\na.sort()\nprint(a)", "Сортировка в обратном порядке\nДля сортировки в обратном порядке можно указать параметр reverse.", "a = [5, 3, -2, 9, 1]\na.sort(reverse=True)\nprint(a)", "Сортировка по ключу\nСортировка по ключу позволяет отсортировать список не по значению самого элемента, а по чему-то другому.", "# Обычно строки сортируются в алфавитном порядке\na = [\"bee\", \"all\", \"accessibility\", \"zen\", \"treasure\"]\na.sort()\nprint(a)\n\n# А используя сортировку по ключу можно сортировать, например, по длине\na = [\"bee\", \"all\", \"accessibility\", \"zen\", \"treasure\"]\na.sort(key=len)\nprint(a)", "В качестве параметра key можно указывать не только встроенные функции, но и самостоятельно определённые. Такая функция должна принимать один аргумент, элемент списка, и возращать значение, по которому надо сортировать.", "# Сортируем по остатку от деления на 10\ndef mod(x):\n return x % 10\n\na = [1, 15, 143, 8, 0, 5, 17, 48]\na.sort(key=mod)\nprint(a)\n\n# Обычно списки сортируются сначала по первому элементу, потом по второму и так далее\na = [[4, 3], [1, 5], [2, 15], [1, 6], [2, 9], [4, 1]]\na.sort()\nprint(a)\n\n# А так можно отсортировать сначала по первому по возрастанию, а при равенсте — по втором\ndef my_key(x):\n return x[0], -x[1]\n\na = [[4, 3], [1, 5], [2, 15], [1, 6], [2, 9], [4, 1]]\na.sort(key=my_key)\nprint(a)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
psas/liquid-engine-test-stand
Archive/Analysis/pressure-reservoir-notebook.ipynb
gpl-2.0
[ "Pressure Reservoir Calculations\nProblem Statement\nDescription\nThe Liquid Propellant Engine Test Stand (LiqPETS) will initially use a blowdown high pressure feed system to supply Liquid Engine 0 (LE-0 or \"LEO\") with Liquid Oxygen (LOX) and Isopropyl Alcohol (IPA). The goal of this calculation is to determine how many nitrogen tanks are required to supply the test stand with enough pressurant to perform a static-fire (hot fire) of the engine.\nGiven\nLEO is expected to consume:\n * $1.13\\frac{lbm}{s}$ of LOX\n * $0.94\\frac{lbm}{s}$ of IPA.\nThe total volume of the propellant tanks is $4.0gal$ each. \nThe pressure supplied to the LOX side must not be more than $500PSI$\nThe pressure supplied to the IPA tank must not be more than $800PSI$\nThe stand will use standard $300{ft}^3$ $2000PSI$ K-type pressure vessels\nFind\n[_] The pressure losses from the pressure reservoir to the tanks\n[X] The volume of nitrogen required to maintain driving pressures in both propellant tanks\n[X] The number of nitrogen bottles required to achieve this\nSolution\nVolume and Tank Quantity\nDetermine the volume of the K-bottle\n$$\\frac{P_1}{P_2} = \\frac{\\nu_2}{\\nu_1}$$", "import numpy\nfrom matplotlib import pyplot as plt\n\n# Volume of gas in the bottle\nvolume_gas = 43.61 #L\n\n# Pressure of the gas in the bottle standard\npressure_gas = 2000 #PSI\n\n# convert to metric\nvolume_gas = volume_gas * 0.001 # L/m3\npressure_gas = pressure_gas * 6.89476 #kPa/PSI\n\nprint(\"initial gas pressure: \" + str(pressure_gas))\nprint(\"initial gas volume: \" + str(volume_gas))\n\n# Volume of propellant tanks\nvolume_tanks = 8.0 #gal\nvolume_tanks = volume_tanks / 264.172 # m3 / gal\n\n# initial and final volumes\nquantity_cylinders = numpy.arange(0,10,1)\nquantity_runs = 3\nvolume_initial = volume_gas * quantity_cylinders\nvolume_final = volume_initial + volume_tanks * quantity_runs\n\n# initial and final pressures\npressure_initial = pressure_gas # kPa\npressure_final = pressure_initial * (volume_initial / volume_final)\n\n# return to imperial unitz\npressure_final = pressure_final / 6.89476\n\nplt.plot(quantity_cylinders, pressure_final)\n\nprint('# Cyl\\tPressure(PSI)')\nfor i in numpy.arange(0,len(quantity_cylinders),1):\n print (' ' + str(quantity_cylinders[i]) + '\\t' + str(pressure_final[i]))", "Pressure Drop calculations\nCollecting K-values of fittings, connections, etc...", "\"\"\"\n\n\"\"\"", "Works Cited\nCengel - Thermodynamics" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
stephank16/enes_graph_use_case
prov_templates/Data_ingest_use_case_templates.ipynb
gpl-3.0
[ "ENES use case 1: data ingest workflow at data center\nApproach:\n\nStep1: Generate prov template based on workflow description \n (copied from existing data ingest workflow handling software)\n\nResult: can be done in a few lines of python code\n\n\nStep2: experiment with prov template expansion\n\ncore problem is the best option to \n represent the workflow in a provenance graph - many different options .. \n thus this notebook is used to present different options for representation of\n the provenance template as a basis for discussion:\n\n\n\nStep3: Discussion of different template representations of a specific workflow", "# import variable setting dictionaries from dkrz data ingest tool chain\n# and remove __doc__ strings from dictionary (would clutter PROV graph visualizations)\nfrom provtemplates import workflow_steps\n\nfrom collections import MutableMapping\nfrom contextlib import suppress\n\ndef delete_keys_from_dict(dictionary, keys):\n for key in keys:\n with suppress(KeyError):\n del dictionary[key]\n for value in dictionary.values():\n if isinstance(value, MutableMapping):\n delete_keys_from_dict(value, keys)\n\nworkflow_dict = workflow_steps.WORKFLOW_DICT\n\nfrom provtemplates import provconv\nimport prov.model as prov\nimport six\nimport itertools\n\nfrom provtemplates import workflow_steps\n\n\nns_dict = {\n 'prov':'http://www.w3.org/ns/prov#',\n 'var':'http://openprovenance.org/var#',\n 'vargen':'http://openprovenance.org/vargen#',\n 'tmpl':'http://openprovenance.org/tmpl#',\n 'foaf':'http://xmlns.com/foaf/0.1/',\n 'ex': 'http://example.org/',\n 'orcid':'http://orcid.org/',\n\n #document.set_default_namespace('http://example.org/0/')\n 'rdf':'http://www.w3.org/1999/02/22-rdf-syntax-ns#',\n 'rdfs':'http://www.w3.org/2000/01/rdf-schema#',\n 'xsd':'http://www.w3.org/2001/XMLSchema#',\n 'ex1': 'http://example.org/1/',\n 'ex2': 'http://example.org/2/'\n}\n\nprov_doc01 = provconv.set_namespaces(ns_dict,prov.ProvDocument())\nprov_doc02 = provconv.set_namespaces(ns_dict,prov.ProvDocument())\nprov_doc03 = provconv.set_namespaces(ns_dict,prov.ProvDocument())\nprov_doc1 = prov_doc01.bundle(\"var:data-ingest-wflow\")\nprov_doc2 = prov_doc02.bundle(\"var:data-ingest-wflow\")\nprov_doc3 = prov_doc03.bundle(\"var:data-ingest-wflow\")\nprov_doc01.set_default_namespace('http://enes.org/ns/ingest#')\nprov_doc02.set_default_namespace('http://enes.org/ns/ingest#')\nprov_doc03.set_default_namespace('http://enes.org/ns/ingest#')\n\n\ndef gen_bundles(workflow_dict,prov_doc):\n global_in_out = prov_doc.entity('var:wf_doc')\n for wflow_step, wflow_stepdict in workflow_dict.items():\n nbundle = prov_doc.bundle('var:'+wflow_step)\n out_node = nbundle.entity('var:'+wflow_step+'_out')\n agent = nbundle.agent('var:'+wflow_step+'_agent')\n activity = nbundle.activity('var:'+wflow_step+'_activity')\n in_node = nbundle.entity('var:'+wflow_step+'_in')\n \n nbundle.wasGeneratedBy(out_node,activity)\n nbundle.used(activity,in_node)\n nbundle.wasAssociatedWith(activity,agent)\n nbundle.wasDerivedFrom(in_node,out_node) \n nbundle.used(activity,global_in_out)\n nbundle.wasGeneratedBy(global_in_out,activity)\n \ndef in_bundles(workflow_dict,prov_doc): \n first = True\n out_nodes = []\n nbundle = prov_doc\n for wflow_step, wflow_stepdict in workflow_dict.items():\n #nbundle = prov_doc.bundle('var:'+wflow_step)\n out_node = nbundle.entity('var:'+wflow_step+'_out')\n agent = nbundle.agent('var:'+wflow_step+'_agent')\n activity = nbundle.activity('var:'+wflow_step+'_activity')\n if first: \n in_node = 
nbundle.entity('var:'+wflow_step+'_in')\n nbundle.used(activity,in_node)\n first = False \n out_nodes.append((nbundle,out_node,agent,activity))\n return out_nodes \n \n \ndef chain_bundles(nodes): \n '''\n chaining based on \"used\" activity relationship\n '''\n i = 1\n for (nbundle,out_node,agent,activity) in nodes[1:]:\n (prev_bundle,prev_out,prev_agent,prev_activity) = nodes[i-1]\n nbundle.used(activity,prev_out)\n i += 1\n for (nbundle,out_node,agent,activity) in nodes: \n nbundle.wasGeneratedBy(out_node,activity)\n nbundle.wasAssociatedWith(activity,agent) \n \ndef chain_hist_bundles(nodes,prov_doc):\n '''\n chaining based on \"used\" activity relationship\n add an explicit end_result composing all the generated\n intermediate results\n '''\n i = 1\n for (nbundle,out_node,agent,activity) in nodes[1:]:\n (prev_bundle,prev_out,prev_agent,prev_activity) = nodes[i-1]\n nbundle.used(activity,prev_out)\n i += 1\n for (nbundle,out_node,agent,activity) in nodes: \n nbundle.wasGeneratedBy(out_node,activity)\n nbundle.wasAssociatedWith(activity,agent)\n wf_out = prov_doc.entity(\"ex:wf_result\")\n wf_agent = prov_doc.agent(\"ex:workflow_handler\")\n wf_activity = prov_doc.activity(\"ex:wf_trace_composition\")\n prov_doc.wasGeneratedBy(wf_out,wf_activity)\n prov_doc.wasAssociatedWith(wf_activity,wf_agent)\n for (nbundle,out_node,agent,activity) in nodes:\n prov_doc.used(wf_activity,out_node)\n \n \n", "Template representation variant 1\n\nbundles for each workflow step \n (characterized by output, activity, and agent with relationships)\nevery activity uses information from a global provenance log file (used relationship)\n and every activity updates parts of a global provenance log file (was generated by relationship)\n\nNB: ! this produces not valid ProvTemplates, as multiple bundles are used", "# generate prov_template options and print provn representation\ngen_bundles(workflow_dict,prov_doc01)\nprint(prov_doc01.get_provn())\n\n%matplotlib inline\nprov_doc01.plot()\nprov_doc01.serialize('data-ingest1.rdf',format='rdf')", "Template representation variant 2:\n\nworkflow steps without bundles \nworkflow steps are chained (output is input to next step)", "nodes = in_bundles(workflow_dict,prov_doc2)\nchain_bundles(nodes)\nprint(prov_doc02.get_provn())\n\n%matplotlib inline\nprov_doc02.plot()\nfrom prov.dot import prov_to_dot\ndot = prov_to_dot(prov_doc02)\n\nprov_doc02.serialize('ingest-prov-version2.rdf',format='rdf')\n\ndot.write_png('ingest-prov-version2.png')", "Template representation variant 3:\n\nworkflow steps without bundles \nworkflow steps are chained (output is input to next step) \nglobal workflow representation generation added", "gnodes = in_bundles(workflow_dict,prov_doc3)\nchain_hist_bundles(gnodes,prov_doc3)\nprint(prov_doc03.get_provn())\ndot = prov_to_dot(prov_doc03)\ndot.write_png('ingest-prov-version3.png')\n\n%matplotlib inline\nprov_doc03.plot()\nprov_doc03.serialize('data-ingest3.rdf',format='rdf')\n\n# ------------------ to be removed --------------------------------------\n\n\n\n# generate prov_template options and print provn representation\ngen_bundles(workflow_dict,prov_doc1)\nprint(prov_doc1.get_provn())\nnodes = in_bundles(workflow_dict,prov_doc2)\nchain_bundles(nodes)\nprint(prov_doc2.get_provn())\ngnodes = in_bundles(workflow_dict,prov_doc3)\nchain_hist_bundles(gnodes,prov_doc3)\nprint(prov_doc3.get_provn())\n\n\n\n%matplotlib inline\nprov_doc1.plot()\nprov_doc2.plot()\nprov_doc3.plot()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/text_classification/labs/fine_tune_bert.ipynb
apache-2.0
[ "Fine-Tuning a BERT Model\nLearning objectives\n\nGet the dataset from TensorFlow Datasets.\nPreprocess the data.\nBuild the model.\nTrain the model.\nRe-encoding a large dataset.\n\nIntroduction\nIn this notebook, you will work through fine-tuning a BERT model using the tensorflow-models PIP package.\nThe pretrained BERT model this tutorial is based on is also available on TensorFlow Hub, to see how to use it refer to the Hub Appendix\nEach learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference.\nSetup\nInstall the TensorFlow Model Garden pip package\n\ntf-models-official is the stable Model Garden package. Note that it may not include the latest changes in the tensorflow_models github repo. To include latest changes, you may install tf-models-nightly,\nwhich is the nightly Model Garden package created daily automatically.\npip will install all models and dependencies automatically.", "!pip install -q -U \"tensorflow-text==2.8.*\"\n\n!pip install -q tf-models-official==2.4.0", "Imports", "import os\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport tensorflow as tf\n\nimport tensorflow_hub as hub\nimport tensorflow_datasets as tfds\ntfds.disable_progress_bar()\n\nfrom official.modeling import tf_utils\nfrom official import nlp\nfrom official.nlp import bert\n\n# Load the required submodules\nimport official.nlp.optimization\nimport official.nlp.bert.bert_models\nimport official.nlp.bert.configs\nimport official.nlp.bert.run_classifier\nimport official.nlp.bert.tokenization\nimport official.nlp.data.classifier_data_lib\nimport official.nlp.modeling.losses\nimport official.nlp.modeling.models\nimport official.nlp.modeling.networks\n", "Resources\nThis directory contains the configuration, vocabulary, and a pre-trained checkpoint used in this tutorial:", "gs_folder_bert = \"gs://cloud-tpu-checkpoints/bert/v3/uncased_L-12_H-768_A-12\"\ntf.io.gfile.listdir(gs_folder_bert)", "You can get a pre-trained BERT encoder from TensorFlow Hub:", "hub_url_bert = \"https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3\"", "The data\nFor this example we used the GLUE MRPC dataset from TFDS.\nThis dataset is not set up so that it can be directly fed into the BERT model, so this section also handles the necessary preprocessing.\nGet the dataset from TensorFlow Datasets\nThe Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.\n\nNumber of labels: 2.\nSize of training dataset: 3668.\nSize of evaluation dataset: 408.\nMaximum sequence length of training and evaluation dataset: 128.", "glue, info = tfds.load('glue/mrpc', with_info=True,\n # It's small, load the whole dataset\n batch_size=-1)\n\nlist(glue.keys())", "The info object describes the dataset and it's features:", "info.features", "The two classes are:", "info.features['label'].names", "Here is one example from the training set:", "glue_train = glue['train']\n\nfor key, value in glue_train.items():\n print(f\"{key:9s}: {value[0].numpy()}\")", "The BERT tokenizer\nTo fine tune a pre-trained model you need to be sure that you're using exactly the same tokenization, vocabulary, and index mapping as you used during training.\nThe BERT tokenizer used in this tutorial is written in pure Python (It's not built out of TensorFlow ops). 
So you can't just plug it into your model as a keras.layer like you can with preprocessing.TextVectorization.\nThe following code rebuilds the tokenizer that was used by the base model:", "# Set up tokenizer to generate Tensorflow dataset\ntokenizer = # TODO 1: Your code goes here\n\nprint(\"Vocab size:\", len(tokenizer.vocab))", "Tokenize a sentence:", "tokens = tokenizer.tokenize(\"Hello TensorFlow!\")\nprint(tokens)\nids = tokenizer.convert_tokens_to_ids(tokens)\nprint(ids)", "Preprocess the data\nThe section manually preprocessed the dataset into the format expected by the model.\nThis dataset is small, so preprocessing can be done quickly and easily in memory. For larger datasets the tf_models library includes some tools for preprocessing and re-serializing a dataset. See Appendix: Re-encoding a large dataset for details.\nEncode the sentences\nThe model expects its two inputs sentences to be concatenated together. This input is expected to start with a [CLS] \"This is a classification problem\" token, and each sentence should end with a [SEP] \"Separator\" token:", "tokenizer.convert_tokens_to_ids(['[CLS]', '[SEP]'])", "Start by encoding all the sentences while appending a [SEP] token, and packing them into ragged-tensors:", "def encode_sentence(s):\n tokens = list(tokenizer.tokenize(s.numpy()))\n tokens.append('[SEP]')\n return tokenizer.convert_tokens_to_ids(tokens)\n\nsentence1 = tf.ragged.constant([\n encode_sentence(s) for s in glue_train[\"sentence1\"]])\nsentence2 = tf.ragged.constant([\n encode_sentence(s) for s in glue_train[\"sentence2\"]])\n\nprint(\"Sentence1 shape:\", sentence1.shape.as_list())\nprint(\"Sentence2 shape:\", sentence2.shape.as_list())", "Now prepend a [CLS] token, and concatenate the ragged tensors to form a single input_word_ids tensor for each example. RaggedTensor.to_tensor() zero pads to the longest sequence.", "cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0]\ninput_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1)\n_ = plt.pcolormesh(input_word_ids.to_tensor())", "Mask and input type\nThe model expects two additional inputs:\n\nThe input mask\nThe input type\n\nThe mask allows the model to cleanly differentiate between the content and the padding. 
The mask has the same shape as the input_word_ids, and contains a 1 anywhere the input_word_ids is not padding.", "input_mask = tf.ones_like(input_word_ids).to_tensor()\n\nplt.pcolormesh(input_mask)", "The \"input type\" also has the same shape, but inside the non-padded region, contains a 0 or a 1 indicating which sentence the token is a part of.", "type_cls = tf.zeros_like(cls)\ntype_s1 = tf.zeros_like(sentence1)\ntype_s2 = tf.ones_like(sentence2)\ninput_type_ids = tf.concat([type_cls, type_s1, type_s2], axis=-1).to_tensor()\n\nplt.pcolormesh(input_type_ids)", "Put it all together\nCollect the above text parsing code into a single function, and apply it to each split of the glue/mrpc dataset.", "def encode_sentence(s, tokenizer):\n tokens = list(tokenizer.tokenize(s))\n tokens.append('[SEP]')\n return tokenizer.convert_tokens_to_ids(tokens)\n\ndef bert_encode(glue_dict, tokenizer):\n num_examples = len(glue_dict[\"sentence1\"])\n \n sentence1 = tf.ragged.constant([\n encode_sentence(s, tokenizer)\n for s in np.array(glue_dict[\"sentence1\"])])\n sentence2 = tf.ragged.constant([\n encode_sentence(s, tokenizer)\n for s in np.array(glue_dict[\"sentence2\"])])\n\n cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0]\n input_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1)\n\n input_mask = tf.ones_like(input_word_ids).to_tensor()\n\n type_cls = tf.zeros_like(cls)\n type_s1 = tf.zeros_like(sentence1)\n type_s2 = tf.ones_like(sentence2)\n input_type_ids = tf.concat(\n [type_cls, type_s1, type_s2], axis=-1).to_tensor()\n\n inputs = {\n 'input_word_ids': input_word_ids.to_tensor(),\n 'input_mask': input_mask,\n 'input_type_ids': input_type_ids}\n\n return inputs\n\nglue_train = bert_encode(glue['train'], tokenizer)\nglue_train_labels = glue['train']['label']\n\nglue_validation = bert_encode(glue['validation'], tokenizer)\nglue_validation_labels = glue['validation']['label']\n\nglue_test = bert_encode(glue['test'], tokenizer)\nglue_test_labels = glue['test']['label']", "Each subset of the data has been converted to a dictionary of features, and a set of labels. Each feature in the input dictionary has the same shape, and the number of labels should match:", "# Print the key value and shapes\nfor key, value in glue_train.items():\n# TODO 2: Your code goes here\n\nprint(f'glue_train_labels shape: {glue_train_labels.shape}')", "The model\nBuild the model\nThe first step is to download the configuration for the pre-trained model.", "import json\n\nbert_config_file = os.path.join(gs_folder_bert, \"bert_config.json\")\nconfig_dict = json.loads(tf.io.gfile.GFile(bert_config_file).read())\n\nbert_config = bert.configs.BertConfig.from_dict(config_dict)\n\nconfig_dict", "The config defines the core BERT Model, which is a Keras model to predict the outputs of num_classes from the inputs with maximum sequence length max_seq_length.\nThis function returns both the encoder and the classifier.", "bert_classifier, bert_encoder = bert.bert_models.classifier_model(\n bert_config, num_labels=2)", "The classifier has three inputs and one output:", "tf.keras.utils.plot_model(bert_classifier, show_shapes=True, dpi=48)", "Run it on a test batch of data 10 examples from the training set. 
The output is the logits for the two classes:", "glue_batch = {key: val[:10] for key, val in glue_train.items()}\n\nbert_classifier(\n glue_batch, training=True\n).numpy()", "The TransformerEncoder in the center of the classifier above is the bert_encoder.\nInspecting the encoder, we see its stack of Transformer layers connected to those same three inputs:", "tf.keras.utils.plot_model(bert_encoder, show_shapes=True, dpi=48)", "Restore the encoder weights\nWhen built the encoder is randomly initialized. Restore the encoder's weights from the checkpoint:", "checkpoint = tf.train.Checkpoint(encoder=bert_encoder)\ncheckpoint.read(\n os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed()", "Note: The pretrained TransformerEncoder is also available on TensorFlow Hub. See the Hub appendix for details. \nSet up the optimizer\nBERT adopts the Adam optimizer with weight decay (aka \"AdamW\").\nIt also employs a learning rate schedule that firstly warms up from 0 and then decays to 0.", "# Set up epochs and steps\nepochs = 3\nbatch_size = 32\neval_batch_size = 32\n\ntrain_data_size = len(glue_train_labels)\nsteps_per_epoch = int(train_data_size / batch_size)\nnum_train_steps = steps_per_epoch * epochs\nwarmup_steps = int(epochs * train_data_size * 0.1 / batch_size)\n\n# creates an optimizer with learning rate schedule\noptimizer = # TODO 3: Your code goes here", "This returns an AdamWeightDecay optimizer with the learning rate schedule set:", "type(optimizer)", "To see an example of how to customize the optimizer and it's schedule, see the Optimizer schedule appendix.\nTrain the model\nThe metric is accuracy and we use sparse categorical cross-entropy as loss.", "metrics = [tf.keras.metrics.SparseCategoricalAccuracy('accuracy', dtype=tf.float32)]\nloss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n\nbert_classifier.compile(\n optimizer=optimizer,\n loss=loss,\n metrics=metrics)\n\n# Train the model\nbert_classifier.fit(# TODO 4: Your code goes here)", "Now run the fine-tuned model on a custom example to see that it works.\nStart by encoding some sentence pairs:", "my_examples = bert_encode(\n glue_dict = {\n 'sentence1':[\n 'The rain in Spain falls mainly on the plain.',\n 'Look I fine tuned BERT.'],\n 'sentence2':[\n 'It mostly rains on the flat lands of Spain.',\n 'Is it working? This does not match.']\n },\n tokenizer=tokenizer)", "The model should report class 1 \"match\" for the first example and class 0 \"no-match\" for the second:", "result = bert_classifier(my_examples, training=False)\n\nresult = tf.argmax(result).numpy()\nresult\n\nnp.array(info.features['label'].names)[result]", "Save the model\nOften the goal of training a model is to use it for something, so export the model and then restore it to be sure that it works.", "export_dir='./saved_model'\ntf.saved_model.save(bert_classifier, export_dir=export_dir)\n\nreloaded = tf.saved_model.load(export_dir)\nreloaded_result = reloaded([my_examples['input_word_ids'],\n my_examples['input_mask'],\n my_examples['input_type_ids']], training=False)\n\noriginal_result = bert_classifier(my_examples, training=False)\n\n# The results are (nearly) identical:\nprint(original_result.numpy())\nprint()\nprint(reloaded_result.numpy())", "Appendix\n<a id=re_encoding_tools></a>\nRe-encoding a large dataset\nThis tutorial you re-encoded the dataset in memory, for clarity.\nThis was only possible because glue/mrpc is a very small dataset. 
To deal with larger datasets tf_models library includes some tools for processing and re-encoding a dataset for efficient training.\nThe first step is to describe which features of the dataset should be transformed:", "processor = nlp.data.classifier_data_lib.TfdsProcessor(\n tfds_params=\"dataset=glue/mrpc,text_key=sentence1,text_b_key=sentence2\",\n process_text_fn=bert.tokenization.convert_to_unicode)", "Then apply the transformation to generate new TFRecord files.", "# Set up output of training and evaluation Tensorflow dataset\ntrain_data_output_path=\"./mrpc_train.tf_record\"\neval_data_output_path=\"./mrpc_eval.tf_record\"\n\nmax_seq_length = 128\nbatch_size = 32\neval_batch_size = 32\n\n# Generate and save training data into a tf record file\ninput_meta_data = (# TODO 5: Your code goes here)", "Finally create tf.data input pipelines from those TFRecord files:", "training_dataset = bert.run_classifier.get_dataset_fn(\n train_data_output_path,\n max_seq_length,\n batch_size,\n is_training=True)()\n\nevaluation_dataset = bert.run_classifier.get_dataset_fn(\n eval_data_output_path,\n max_seq_length,\n eval_batch_size,\n is_training=False)()\n", "The resulting tf.data.Datasets return (features, labels) pairs, as expected by keras.Model.fit:", "training_dataset.element_spec", "Create tf.data.Dataset for training and evaluation\nIf you need to modify the data loading here is some code to get you started:", "def create_classifier_dataset(file_path, seq_length, batch_size, is_training):\n \"\"\"Creates input dataset from (tf)records files for train/eval.\"\"\"\n dataset = tf.data.TFRecordDataset(file_path)\n if is_training:\n dataset = dataset.shuffle(100)\n dataset = dataset.repeat()\n\n def decode_record(record):\n name_to_features = {\n 'input_ids': tf.io.FixedLenFeature([seq_length], tf.int64),\n 'input_mask': tf.io.FixedLenFeature([seq_length], tf.int64),\n 'segment_ids': tf.io.FixedLenFeature([seq_length], tf.int64),\n 'label_ids': tf.io.FixedLenFeature([], tf.int64),\n }\n return tf.io.parse_single_example(record, name_to_features)\n\n def _select_data_from_record(record):\n x = {\n 'input_word_ids': record['input_ids'],\n 'input_mask': record['input_mask'],\n 'input_type_ids': record['segment_ids']\n }\n y = record['label_ids']\n return (x, y)\n\n dataset = dataset.map(decode_record,\n num_parallel_calls=tf.data.AUTOTUNE)\n dataset = dataset.map(\n _select_data_from_record,\n num_parallel_calls=tf.data.AUTOTUNE)\n dataset = dataset.batch(batch_size, drop_remainder=is_training)\n dataset = dataset.prefetch(tf.data.AUTOTUNE)\n return dataset\n\n# Set up batch sizes\nbatch_size = 32\neval_batch_size = 32\n\n# Return Tensorflow dataset\ntraining_dataset = create_classifier_dataset(\n train_data_output_path,\n input_meta_data['max_seq_length'],\n batch_size,\n is_training=True)\n\nevaluation_dataset = create_classifier_dataset(\n eval_data_output_path,\n input_meta_data['max_seq_length'],\n eval_batch_size,\n is_training=False)\n\ntraining_dataset.element_spec", "<a id=\"hub_bert\"></a>\nTFModels BERT on TFHub\nYou can get the BERT model off the shelf from TFHub. 
It would not be hard to add a classification head on top of this hub.KerasLayer", "# Note: 350MB download.\nimport tensorflow_hub as hub\n\nhub_model_name = \"bert_en_uncased_L-12_H-768_A-12\" \n\nhub_encoder = hub.KerasLayer(f\"https://tfhub.dev/tensorflow/{hub_model_name}/3\",\n trainable=True)\n\nprint(f\"The Hub encoder has {len(hub_encoder.trainable_variables)} trainable variables\")", "Test run it on a batch of data:", "result = hub_encoder(\n inputs=dict(\n input_word_ids=glue_train['input_word_ids'][:10],\n input_mask=glue_train['input_mask'][:10],\n input_type_ids=glue_train['input_type_ids'][:10],),\n training=False,\n)\n\nprint(\"Pooled output shape:\", result['pooled_output'].shape)\nprint(\"Sequence output shape:\", result['sequence_output'].shape)", "At this point it would be simple to add a classification head yourself.\nThe bert_models.classifier_model function can also build a classifier onto the encoder from TensorFlow Hub:", "hub_classifier = nlp.modeling.models.BertClassifier(\n bert_encoder,\n num_classes=2,\n dropout_rate=0.1,\n initializer=tf.keras.initializers.TruncatedNormal(\n stddev=0.02))", "The one downside to loading this model from TFHub is that the structure of internal keras layers is not restored. So it's more difficult to inspect or modify the model. The BertEncoder model is now a single layer:", "tf.keras.utils.plot_model(hub_classifier, show_shapes=True, dpi=64)", "<a id=\"model_builder_functions\"></a>\nLow level model building\nIf you need a more control over the construction of the model it's worth noting that the classifier_model function used earlier is really just a thin wrapper over the nlp.modeling.networks.BertEncoder and nlp.modeling.models.BertClassifier classes. Just remember that if you start modifying the architecture it may not be correct or possible to reload the pre-trained checkpoint so you'll need to retrain from scratch.\nBuild the encoder:", "bert_encoder_config = config_dict.copy()\n\n# You need to rename a few fields to make this work:\nbert_encoder_config['attention_dropout_rate'] = bert_encoder_config.pop('attention_probs_dropout_prob')\nbert_encoder_config['activation'] = tf_utils.get_activation(bert_encoder_config.pop('hidden_act'))\nbert_encoder_config['dropout_rate'] = bert_encoder_config.pop('hidden_dropout_prob')\nbert_encoder_config['initializer'] = tf.keras.initializers.TruncatedNormal(\n stddev=bert_encoder_config.pop('initializer_range'))\nbert_encoder_config['max_sequence_length'] = bert_encoder_config.pop('max_position_embeddings')\nbert_encoder_config['num_layers'] = bert_encoder_config.pop('num_hidden_layers')\n\nbert_encoder_config\n\nmanual_encoder = nlp.modeling.networks.BertEncoder(**bert_encoder_config)", "Restore the weights:", "checkpoint = tf.train.Checkpoint(encoder=manual_encoder)\ncheckpoint.read(\n os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed()", "Test run it:", "result = manual_encoder(my_examples, training=True)\n\nprint(\"Sequence output shape:\", result[0].shape)\nprint(\"Pooled output shape:\", result[1].shape)", "Wrap it in a classifier:", "manual_classifier = nlp.modeling.models.BertClassifier(\n bert_encoder,\n num_classes=2,\n dropout_rate=bert_encoder_config['dropout_rate'],\n initializer=bert_encoder_config['initializer'])\n\nmanual_classifier(my_examples, training=True).numpy()", "<a id=\"optimizer_schedule\"></a>\nOptimizers and schedules\nThe optimizer used to train the model was created using the nlp.optimization.create_optimizer function:", "optimizer = 
nlp.optimization.create_optimizer(\n 2e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps)", "That high level wrapper sets up the learning rate schedules and the optimizer.\nThe base learning rate schedule used here is a linear decay to zero over the training run:", "epochs = 3\nbatch_size = 32\neval_batch_size = 32\n\ntrain_data_size = len(glue_train_labels)\nsteps_per_epoch = int(train_data_size / batch_size)\nnum_train_steps = steps_per_epoch * epochs\n\ndecay_schedule = tf.keras.optimizers.schedules.PolynomialDecay(\n initial_learning_rate=2e-5,\n decay_steps=num_train_steps,\n end_learning_rate=0)\n\nplt.plot([decay_schedule(n) for n in range(num_train_steps)])", "This, in turn is wrapped in a WarmUp schedule that linearly increases the learning rate to the target value over the first 10% of training:", "warmup_steps = num_train_steps * 0.1\n\nwarmup_schedule = nlp.optimization.WarmUp(\n initial_learning_rate=2e-5,\n decay_schedule_fn=decay_schedule,\n warmup_steps=warmup_steps)\n\n# The warmup overshoots, because it warms up to the `initial_learning_rate`\n# following the original implementation. You can set\n# `initial_learning_rate=decay_schedule(warmup_steps)` if you don't like the\n# overshoot.\nplt.plot([warmup_schedule(n) for n in range(num_train_steps)])", "Then create the nlp.optimization.AdamWeightDecay using that schedule, configured for the BERT model:", "optimizer = nlp.optimization.AdamWeightDecay(\n learning_rate=warmup_schedule,\n weight_decay_rate=0.01,\n epsilon=1e-6,\n exclude_from_weight_decay=['LayerNorm', 'layer_norm', 'bias'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
david-abel/simple_rl
examples/examples_overview.ipynb
apache-2.0
[ "Simple RL\nWelcome! Here we'll showcase some basic examples of typical RL programming tasks.\nExample 1: Grid World\nFirst, we'll grab our relevant imports: some agents, an MDP, an a function to facilitate running experiments and plotting:", "# Add simple_rl to system path.\nimport os\nimport sys\nparent_dir = os.path.abspath(os.path.join(os.getcwd(), os.pardir))\nsys.path.insert(0, parent_dir)\n\nfrom simple_rl.agents import QLearningAgent, RandomAgent\nfrom simple_rl.tasks import GridWorldMDP\nfrom simple_rl.run_experiments import run_agents_on_mdp", "Next, we make an MDP and a few agents:", "# Setup MDP.\nmdp = GridWorldMDP(width=6, height=6, init_loc=(1,1), goal_locs=[(6,6)])\n\n# Setup Agents.\nql_agent = QLearningAgent(actions=mdp.get_actions()) \nrand_agent = RandomAgent(actions=mdp.get_actions())", "The real meat of <i>simple_rl</i> are the functions that run experiments. The first of which takes a list of agents and an mdp and simulates their interaction:", "# Run experiment and make plot.\nrun_agents_on_mdp([ql_agent, rand_agent], mdp, instances=5, episodes=100, steps=40, reset_at_terminal=True, verbose=False)", "We can throw R-Max, introduced by [Brafman and Tennenholtz, 2002] in the mix, too:", "from simple_rl.agents import RMaxAgent\n\nrmax_agent = RMaxAgent(actions=mdp.get_actions(), horizon=3, s_a_threshold=1)\n\n# Run experiment and make plot.\nrun_agents_on_mdp([rmax_agent, ql_agent, rand_agent], mdp, instances=5, episodes=100, steps=20, reset_at_terminal=True, verbose=False)", "Each experiment we run generates an Experiment object. This facilitates recording results, making relevant files, and plotting. When the <code>run_agents...</code> function is called, a <i>results</i> dir is created containing relevant experiment data. There should be a subdirectory in <i>results</i> named after the mdp you ran experiments on -- this is where the plot, agent results, and <i>parameters.txt</i> file are stored.\nAll of the above code is contained in the <i>simple_example.py</i> file. \nExample 2: Visuals (require pygame)\nFirst let's make a FourRoomMDP from [Sutton, Precup, Singh 1999], which is more visually interesting than a grid world.", "from simple_rl.tasks import FourRoomMDP\nfour_room_mdp = FourRoomMDP(9, 9, goal_locs=[(9, 9)], gamma=0.95)\n\n# Run experiment and make plot.\nfour_room_mdp.visualize_value()", "<img src=\"val.png\" alt=\"Val\" style=\"width: 400px;\"/>\nOr we can visualize a policy:\n<img src=\"pol.png\" alt=\"Val Visual\" style=\"width: 400px;\"/>\nBoth of these are in examples/viz_example.py. If you need pygame in anaconda, give this a shot:\n&gt; conda install -c cogsci pygame\n\nIf you get an sdl font related error on Mac/Linux, try:\n&gt; brew update sdl &amp;&amp; sdl_tf\n\nWe can also make grid worlds with a text file. 
For instance, we can construct the grid problem from [Barto and Pickett 2002] by making a text file:\n--w-----w---w----g\n--------w---------\n--w-----w---w-----\n--w-----w---w-----\nwwwww-wwwwwwwww-ww\n---w----w----w----\n---w---------w----\n--------w---------\nwwwwwwwww---------\nw-------wwwwwww-ww\n--w-----w---w-----\n--------w---------\n--w---------w-----\n--w-----w---w-----\nwwwww-wwwwwwwww-ww\n---w-----w---w----\n---w-----w---w----\na--------w--------\n\nThen, we make a grid world out of it:", "from simple_rl.tasks.grid_world import GridWorldMDPClass\n\npblocks_mdp = GridWorldMDPClass.make_grid_world_from_file(\"pblocks_grid.txt\", randomize=False)\npblocks_mdp.visualize_value()", "Which Produces:\n<img src=\"pblocks.png\" alt=\"Policy Blocks Grid World\" style=\"width: 400px;\"/>\nExample 3: OOMDPs, Taxi\nThere's also a Taxi MDP, which is actually built on top of an Object Oriented MDP Abstract class from [Diuk, Cohen, Littman 2008].", "from simple_rl.tasks import TaxiOOMDP\nfrom simple_rl.run_experiments import run_agents_on_mdp\nfrom simple_rl.agents import QLearningAgent, RandomAgent\n\n# Taxi initial state attributes..\nagent = {\"x\":1, \"y\":1, \"has_passenger\":0}\npassengers = [{\"x\":3, \"y\":2, \"dest_x\":2, \"dest_y\":3, \"in_taxi\":0}]\ntaxi_mdp = TaxiOOMDP(width=4, height=4, agent=agent, walls=[], passengers=passengers)\n\n# Make agents.\nql_agent = QLearningAgent(actions=taxi_mdp.get_actions()) \nrand_agent = RandomAgent(actions=taxi_mdp.get_actions())", "Above, we specify the objects of the OOMDP and their attributes. Now, just as before, we can let some agents interact with the MDP:", "# Run experiment and make plot.\nrun_agents_on_mdp([ql_agent, rand_agent], taxi_mdp, instances=5, episodes=100, steps=150, reset_at_terminal=True)", "More on OOMDPs in <i>examples/oomdp_example.py</i>\nExample 4: Markov Games\n --------\nI've added a few markov games, including rock paper scissors, grid games, and prisoners dilemma. Just as before, we get a run agents method that simulates learning and makes a plot:", "from simple_rl.run_experiments import play_markov_game\nfrom simple_rl.agents import QLearningAgent, FixedPolicyAgent\nfrom simple_rl.tasks import RockPaperScissorsMDP\n\nimport random\n\n# Setup MDP, Agents.\nmarkov_game = RockPaperScissorsMDP()\nql_agent = QLearningAgent(actions=markov_game.get_actions(), epsilon=0.2) \nfixed_action = random.choice(markov_game.get_actions())\nfixed_agent = FixedPolicyAgent(policy=lambda s:fixed_action)\n\n# Run experiment and make plot.\nplay_markov_game([ql_agent, fixed_agent], markov_game, instances=10, episodes=1, steps=10)", "Example 5: Gym MDP\n --------\nRecently I added support for making OpenAI gym MDPs. It's again only a few lines of code:", "from simple_rl.tasks import GymMDP\nfrom simple_rl.agents import LinearQLearningAgent, RandomAgent\nfrom simple_rl.run_experiments import run_agents_on_mdp\n\n# Gym MDP.\ngym_mdp = GymMDP(env_name='CartPole-v0', render=False) # If render is true, visualizes interactions.\nnum_feats = gym_mdp.get_num_state_feats()\n\n# Setup agents and run.\nlin_agent = LinearQLearningAgent(gym_mdp.get_actions(), num_features=num_feats, alpha=0.2, epsilon=0.4, rbf=True)\n\nrun_agents_on_mdp([lin_agent], gym_mdp, instances=3, episodes=1, steps=50)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
M-R-Houghton/euroscipy_2015
scikit_image/lectures/solutions/adv0_chromosomes.ipynb
mit
[ "from __future__ import division, print_function\n%matplotlib inline", "Measuring chromatin fluorescence\nGoal: we want to quantify the amount of a particular protein (red fluorescence) localized on the centromeres (green) versus the rest of the chromosome (blue).\n\nThe main challenge here is the uneven illumination, which makes isolating the chromosomes a struggle.", "import numpy as np\nfrom matplotlib import cm, pyplot as plt\nimport skdemo\nplt.rcParams['image.cmap'] = 'cubehelix'\nplt.rcParams['image.interpolation'] = 'none'\n\nfrom skimage import io\nimage = io.imread('images/chromosomes.tif')\nskdemo.imshow_with_histogram(image)", "Let's separate the channels so we can work on each individually.", "protein, centromeres, chromosomes = image.transpose((2, 0, 1))", "Getting the centromeres is easy because the signal is so clean:", "from skimage.filter import threshold_otsu\ncentromeres_binary = centromeres > threshold_otsu(centromeres)\nskdemo.imshow_all(centromeres, centromeres_binary)", "But getting the chromosomes is not so easy:", "chromosomes_binary = chromosomes > threshold_otsu(chromosomes)\nskdemo.imshow_all(chromosomes, chromosomes_binary)", "Let's try using an adaptive threshold:", "from skimage.filter import threshold_adaptive\nchromosomes_adapt = threshold_adaptive(chromosomes, block_size=51)\n# Question: how did I choose this block size?\nskdemo.imshow_all(chromosomes, chromosomes_adapt)", "Not only is the uneven illumination a problem, but there seem to be some artifacts due to the illumination pattern!\nExercise: Can you think of a way to fix this?\n(Hint: in addition to everything you've learned so far, check out skimage.morphology.remove_small_objects)", "from skimage.morphology import (opening, selem,\n remove_small_objects)\nd = selem.diamond(radius=4)\nchr0 = opening(chromosomes_adapt, d)\nchr1 = remove_small_objects(chr0.astype(bool), 256)\nimages = [chromosomes, chromosomes_adapt, chr0, chr1]\ntitles = ['original', 'adaptive threshold',\n 'opening', 'small objects removed']\nskdemo.imshow_all(*images, titles=titles, shape=(2, 2))", "Now that we have the centromeres and the chromosomes, it's time to do the science: get the distribution of intensities in the red channel using both centromere and chromosome locations.", "# Replace \"None\" below with the right expressions!\ncentromere_intensities = protein[centromeres_binary]\nchromosome_intensities = protein[chr1]\nall_intensities = np.concatenate((centromere_intensities,\n chromosome_intensities))\nminint = np.min(all_intensities)\nmaxint = np.max(all_intensities)\nbins = np.linspace(minint, maxint, 100)\nplt.hist(centromere_intensities, bins=bins, color='blue',\n alpha=0.5, label='centromeres')\nplt.hist(chromosome_intensities, bins=bins, color='orange',\n alpha=0.5, label='chromosomes')\nplt.legend(loc='upper right')\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mattgiguere/doglodge
code/test_rating_classifiers.ipynb
mit
[ "test_rating_classifiers\nA notebook describing the rating classification.", "import numpy as np\nfrom time import time\nimport matplotlib.pyplot as plt\n\nfrom sklearn.datasets import fetch_20newsgroups\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.feature_extraction.text import HashingVectorizer\nfrom sklearn.feature_selection import SelectKBest, chi2\nfrom sklearn.linear_model import RidgeClassifier\nfrom sklearn.svm import LinearSVC\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.linear_model import Perceptron\nfrom sklearn.linear_model import PassiveAggressiveClassifier\nfrom sklearn.naive_bayes import BernoulliNB, MultinomialNB\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.neighbors import NearestCentroid\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.utils.extmath import density\nfrom sklearn import metrics\nfrom scipy.stats import pearsonr\nimport pandas as pd\nimport connect_aws_db as cadb\n\n\n%matplotlib inline\n\nengine = cadb.connect_aws_db(write_unicode=True)\n\ncategories = ['dogs', 'general']", "Restore BF Reviews and Ratings", "cmd = \"SELECT review_rating, review_text FROM bf_reviews\"\n\nbfdf = pd.read_sql_query(cmd, engine)\n\nprint(len(bfdf))\nbfdf.head(5)", "Now limit the reviews used in training to only reviews with more than 350 characters.", "bfdfl = bfdf[bfdf['review_text'].str.len() > 350].copy()\n\nlen(bfdfl)", "Create Training and Testing Data", "train_data = bfdfl['review_text'].values[:750]\n\ny_train = bfdfl['review_rating'].values[:750]\n\nt0 = time()\nvectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,\n stop_words='english')\nX_train = vectorizer.fit_transform(train_data)\nduration = time() - t0\nprint('vectorized in {:.2f} seconds.'.format(duration))\nprint(X_train.shape)\n\ntest_data = bfdfl['review_text'].values[750:]\n\nt0 = time()\nX_test = vectorizer.transform(test_data)\nduration = time() - t0\nprint('transformed test data in {:.2f} seconds.'.format(duration))\n\nfeature_names = np.asarray(vectorizer.get_feature_names())\n\nlen(feature_names)\n\nfeature_names[:5]\n\ny_test = bfdfl['review_rating'].values[750:]", "Now Test Several Classifiers", "def benchmark(clf, pos_label=None):\n print('_' * 80)\n print(\"Training: \")\n print(clf)\n t0 = time()\n clf.fit(X_train, y_train)\n train_time = time() - t0\n print(\"train time: %0.3fs\" % train_time)\n\n t0 = time()\n pred = clf.predict(X_test)\n test_time = time() - t0\n print(\"test time: %0.3fs\" % test_time)\n\n score = metrics.f1_score(y_test, pred, pos_label=pos_label)\n print(\"f1-score: %0.3f\" % score)\n\n if hasattr(clf, 'coef_'):\n print(\"dimensionality: %d\" % clf.coef_.shape[1])\n print(\"density: %f\" % density(clf.coef_))\n\n# if opts.print_top10 and feature_names is not None:\n# print(\"top 10 keywords per class:\")\n# for i, category in enumerate(categories):\n# top10 = np.argsort(clf.coef_[i])[-10:]\n# print(trim(\"%s: %s\"\n# % (category, \" \".join(feature_names[top10]))))\n print()\n\n# if opts.print_report:\n# print(\"classification report:\")\n# print(metrics.classification_report(y_test, pred,\n# target_names=categories))\n\n# if opts.print_cm:\n# print(\"confusion matrix:\")\n# print(metrics.confusion_matrix(y_test, pred))\n\n print()\n clf_descr = str(clf).split('(')[0]\n return clf_descr, score, train_time, test_time, pred\n\nresults = []\nfor clf, name in (\n (RidgeClassifier(tol=1e-2, solver=\"lsqr\"), \"Ridge Classifier\"),\n (Perceptron(n_iter=50), \"Perceptron\"),\n 
(PassiveAggressiveClassifier(n_iter=50), \"Passive-Aggressive\"),\n (KNeighborsClassifier(n_neighbors=10), \"kNN\"),\n (RandomForestClassifier(n_estimators=20), 'RandomForest')):\n print('=' * 80)\n print(name)\n results.append(benchmark(clf))\n\nfor penalty in [\"l2\", \"l1\"]:\n print('=' * 80)\n print(\"%s penalty\" % penalty.upper())\n # Train Liblinear model\n results.append(benchmark(LinearSVC(loss='l2', penalty=penalty,\n dual=False, tol=1e-3)))\n\n # Train SGD model\n results.append(benchmark(SGDClassifier(alpha=.0001, n_iter=50,\n penalty=penalty)))\n\n\n\n# Train SGD with Elastic Net penalty\nprint('=' * 80)\nprint(\"Elastic-Net penalty\")\nresults.append(benchmark(SGDClassifier(alpha=.0001, n_iter=50,\n penalty=\"elasticnet\")))\n\n# Train NearestCentroid without threshold\nprint('=' * 80)\nprint(\"NearestCentroid (aka Rocchio classifier)\")\nresults.append(benchmark(NearestCentroid()))\n\n# Train sparse Naive Bayes classifiers\nprint('=' * 80)\nprint(\"Naive Bayes\")\nresults.append(benchmark(MultinomialNB(alpha=.01)))\nresults.append(benchmark(BernoulliNB(alpha=.01)))\n\n\nclass L1LinearSVC(LinearSVC):\n\n def fit(self, X, y):\n # The smaller C, the stronger the regularization.\n # The more regularization, the more sparsity.\n self.transformer_ = LinearSVC(penalty=\"l1\",\n dual=False, tol=1e-3)\n X = self.transformer_.fit_transform(X, y)\n return LinearSVC.fit(self, X, y)\n\n def predict(self, X):\n X = self.transformer_.transform(X)\n return LinearSVC.predict(self, X)\n\nprint('=' * 80)\nprint(\"LinearSVC with L1-based feature selection\")\nresults.append(benchmark(L1LinearSVC()))\n", "Plot Results", "indices = np.arange(len(results))\n\nresults = [[x[i] for x in results] for i in range(5)]\n\nfont = {'family' : 'normal',\n 'weight' : 'bold',\n 'size' : 16}\n\nplt.rc('font', **font)\nplt.rcParams['figure.figsize'] = 12.94, 8\nclf_names, score, training_time, test_time, pred = results\ntraining_time = np.array(training_time) / np.max(training_time)\ntest_time = np.array(test_time) / np.max(test_time)\n\n#plt.figure(figsize=(12, 8))\nplt.title(\"Score\")\nplt.barh(indices, score, .2, label=\"score\", color='#982023')\nplt.barh(indices + .3, training_time, .2, label=\"training time\", color='#46959E')\nplt.barh(indices + .6, test_time, .2, label=\"test time\", color='#C7B077')\nplt.yticks(())\nplt.legend(loc='best')\nplt.subplots_adjust(left=.25)\nplt.subplots_adjust(top=.95)\nplt.subplots_adjust(bottom=.05)\nplt.ylim(0, 14)\nprint(indices)\nfor i, c in zip(indices, clf_names):\n plt.text(-0.025, i, c, horizontalalignment='right')\n\nclf_names[0] = 'Ridge'\nclf_names[2] = 'PassAggress'\nclf_names[3] = 'KNN'\nclf_names[4] = 'RandomForest'\nclf_names[5] = 'LinearSVC L2'\nclf_names[6] = 'SGDC SVM L2'\nclf_names[7] = 'LinearSVC L1'\nclf_names[8] = 'SGDC L1'\nclf_names[9] = 'SGDC ElNet'\nclf_names[13] = 'LinearSVC L1FS'\n\n\nfig, ax = plt.subplots(1, 1)\n\nclf_names, score, training_time, test_time, pred = results\n\nax.plot(indices, score, '-o', label=\"score\", color='#982023')\nax.plot(indices, training_time, '-o', label=\"training time (s)\", color='#46959E')\nax.plot(indices, test_time, '-o', label=\"test time (s)\", color='#C7B077')\n#labels = [item.get_text() for item in ax.get_xticklabels()]\nlabels = clf_names\nax.xaxis.set_ticks(np.arange(np.min(indices), np.max(indices)+1, 1))\nax.set_xticklabels(clf_names, rotation='70', horizontalalignment='right')\nax.set_xlim([-1, 14])\nax.set_ylim([0, 1])\nax.legend(loc='best')\nplt.subplots_adjust(left=0.05, bottom=0.3, 
top=.98)\nplt.savefig('ratingClassifierScores.png', dpi=144)\n\n\nfor name, scr in zip(clf_names, score):\n print('{}: {:.3f}'.format(name, scr))", "Now Plot The Predicted Rating as a Function of the Given Rating for the BF Test Data", "fig, ax = plt.subplots(1, 1)\nax.plot(y_test + 0.1*np.random.random(len(y_test)) - 0.05, pred[0] + 0.1*np.random.random(len(y_test)) - 0.05, '.')\nax.set_xlim([0, 6])\nax.set_ylim([0, 6])\nax.set_xlabel('Given Rating')\nax.set_ylabel('Predicted Rating')\n\nms = np.zeros((5, 5))\nfor row in range(5):\n for col in range(5):\n #print('row {}, col {}'.format(row, col))\n ms[row, col] = len(np.where((y_test == col+1) & (pred[0] == row+1))[0])\nms\n\nlogms = 5*np.log(ms+1)\nlogms\n\nfig, ax = plt.subplots(1, 1)\nfor row in range(5):\n for col in range(5):\n ax.plot(col+1, row+1, 'o', ms=logms[row, col], color='#83A7C8', alpha=0.5)\nax.set_xlim([0, 6])\nax.set_ylim([0, 6])\nax.set_xlabel('Given Rating')\nax.set_ylabel('Predicted Rating')\n#plt.savefig('Predicted_Vs_Given_Bubbles.png', dpi=144)\n\nfor idx, prediction in enumerate(pred):\n print(idx, pearsonr(y_test, prediction))\n\nfig, ax = plt.subplots(1, 1)\nax.hist(y_test, bins=range(1, 7), align='left', color='#83A7C8', alpha=0.25, label='Given')\nax.hist(pred[10], bins=range(1, 7), align='left', color='#BA4C37', alpha=0.25, label='Predicted')\n#ax.set_xlim([0, 6])\nax.xaxis.set_ticks([1, 2, 3, 4, 5])\nax.set_xlabel('Rating')\nax.set_ylabel('Number of Reviews')\nax.legend(loc='best')\n#plt.savefig('PredictedGivenDist.png', dpi=144)", "confusion matrix", "from sklearn import metrics\n\ndef plot_confusion_matrix(y_pred, y, normalize=False, cmap=plt.cm.binary):\n cm = metrics.confusion_matrix(y, y_pred)\n cm = np.flipud(cm)\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n plt.imshow(cm, cmap=cmap, interpolation='nearest')\n plt.colorbar()\n plt.xticks(np.arange(0, 5), np.arange(1, 6))\n plt.yticks(np.arange(0, 5), np.arange(1, 6)[::-1])\n plt.xlabel('bringfido.com rating (true rating)')\n plt.ylabel('predicted rating')\n \nprint \"classification accuracy:\", metrics.accuracy_score(y_test, pred[10])\nplot_confusion_matrix(y_test, pred[10], normalize=True, cmap=plt.cm.Blues)\n#plt.savefig('rating_confusion_matrix.png', dpi=144)\n\nclf = NearestCentroid()\nclf.fit(X_train, y_train)\ny_pred = clf.predict(X_test)\n\ncens = clf.centroids_\n\nclf.get_params()\n\nwords = vectorizer.get_feature_names()\nlen(words)\n\ncens.shape", "Which features/words have the highest weight towards rating 1?\nUnnormalized centroid", "wgtarr = cens[4,:]\n\nratwords = np.argsort(wgtarr).tolist()[::-1]\n\nfor i in range(20):\n print(wgtarr[ratwords[i]], words[ratwords[i]], ratwords[i])\n\ncens[:, 1148]", "Normalized centroid\nFirst compute the total for each feature across all ratings (1 to 5)", "cen_tot = np.sum(cens, axis=0)\n\ncen_tot.shape\n\nwgtarr = cens[4,:]/cen_tot\n\nwords[np.argsort(wgtarr)[0]]\n\nratwords = np.argsort(wgtarr).tolist()[::-1]\n\n\nfor i in range(20):\n print(wgtarr[ratwords[i]], words[ratwords[i]])", "Expanding the Model to 3-grams", "t0 = time()\nvectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,\n stop_words='english', ngram_range=(1, 3))\nX_train = vectorizer.fit_transform(train_data)\nduration = time() - t0\nprint('vectorized in {:.2f} seconds.'.format(duration))\nprint(X_train.shape)\n\nt0 = time()\nX_test = vectorizer.transform(test_data)\nduration = time() - t0\nprint('transformed test data in {:.2f} seconds.'.format(duration))\n\nresults = []\nfor clf, name in (\n 
(RidgeClassifier(tol=1e-2, solver=\"lsqr\"), \"Ridge Classifier\"),\n (Perceptron(n_iter=50), \"Perceptron\"),\n (PassiveAggressiveClassifier(n_iter=50), \"Passive-Aggressive\"),\n (KNeighborsClassifier(n_neighbors=10), \"kNN\"),\n (RandomForestClassifier(n_estimators=20), 'RandomForest')):\n print('=' * 80)\n print(name)\n results.append(benchmark(clf))\n\nfor penalty in [\"l2\", \"l1\"]:\n print('=' * 80)\n print(\"%s penalty\" % penalty.upper())\n # Train Liblinear model\n results.append(benchmark(LinearSVC(loss='l2', penalty=penalty,\n dual=False, tol=1e-3)))\n\n # Train SGD model\n results.append(benchmark(SGDClassifier(alpha=.0001, n_iter=50,\n penalty=penalty)))\n\n# Train SGD with Elastic Net penalty\nprint('=' * 80)\nprint(\"Elastic-Net penalty\")\nresults.append(benchmark(SGDClassifier(alpha=.0001, n_iter=50,\n penalty=\"elasticnet\")))\n\n# Train NearestCentroid without threshold\nprint('=' * 80)\nprint(\"NearestCentroid (aka Rocchio classifier)\")\nresults.append(benchmark(NearestCentroid()))\n\n# Train sparse Naive Bayes classifiers\nprint('=' * 80)\nprint(\"Naive Bayes\")\nresults.append(benchmark(MultinomialNB(alpha=.01)))\nresults.append(benchmark(BernoulliNB(alpha=.01)))\n\n\nclass L1LinearSVC(LinearSVC):\n\n def fit(self, X, y):\n # The smaller C, the stronger the regularization.\n # The more regularization, the more sparsity.\n self.transformer_ = LinearSVC(penalty=\"l1\",\n dual=False, tol=1e-3)\n X = self.transformer_.fit_transform(X, y)\n return LinearSVC.fit(self, X, y)\n\n def predict(self, X):\n X = self.transformer_.transform(X)\n return LinearSVC.predict(self, X)\n\nprint('=' * 80)\nprint(\"LinearSVC with L1-based feature selection\")\nresults.append(benchmark(L1LinearSVC()))\n\n\nindices = np.arange(len(results))\n\nresults = [[x[i] for x in results] for i in range(5)]\n\nfig, ax = plt.subplots(1, 1)\n\nclf_names, score, training_time, test_time, pred = results\n\nax.plot(indices, score, '-o', label=\"score\", color='#982023')\nax.plot(indices, training_time, '-o', label=\"training time (s)\", color='#46959E')\nax.plot(indices, test_time, '-o', label=\"test time (s)\", color='#C7B077')\n#labels = [item.get_text() for item in ax.get_xticklabels()]\nlabels = clf_names\nax.xaxis.set_ticks(np.arange(np.min(indices), np.max(indices)+1, 1))\nax.set_xticklabels(clf_names, rotation='70', horizontalalignment='right')\nax.set_xlim([-1, 14])\nax.set_ylim([0, 1])\nax.legend(loc='best')\nplt.subplots_adjust(left=0.05, bottom=0.3, top=.98)\n#plt.savefig('ratingClassifierScores.png', dpi=144)", "Expanding the Model to 2-grams", "t0 = time()\nvectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,\n stop_words='english', ngram_range=(1, 2))\nX_train = vectorizer.fit_transform(train_data)\nduration = time() - t0\nprint('vectorized in {:.2f} seconds.'.format(duration))\nprint(X_train.shape)\n\nt0 = time()\nX_test = vectorizer.transform(test_data)\nduration = time() - t0\nprint('transformed test data in {:.2f} seconds.'.format(duration))\n\nresults = []\nfor clf, name in (\n (RidgeClassifier(tol=1e-2, solver=\"lsqr\"), \"Ridge Classifier\"),\n (Perceptron(n_iter=50), \"Perceptron\"),\n (PassiveAggressiveClassifier(n_iter=50), \"Passive-Aggressive\"),\n (KNeighborsClassifier(n_neighbors=10), \"kNN\"),\n (RandomForestClassifier(n_estimators=20), 'RandomForest'),\n (LinearSVC(loss='l2', penalty=\"L2\",dual=False, tol=1e-3), \"LinearSVC L2\"),\n (SGDClassifier(alpha=.0001, n_iter=50, penalty=\"L2\"), \"SGDC SVM L2\"),\n (LinearSVC(loss='l2', penalty=\"L1\",dual=False, tol=1e-3), 
\"LinearSVC L1\"),\n (SGDClassifier(alpha=.0001, n_iter=50, penalty=\"L1\"), \"SGDC SVM L1\"),\n (SGDClassifier(alpha=.0001, n_iter=50, penalty=\"elasticnet\"), \"Elastic Net\"),\n (NearestCentroid(), \"Nearest Centroid\"),\n (MultinomialNB(alpha=.01), \"MultinomialNB\"),\n (BernoulliNB(alpha=.01), \"BernouliNB\")):\n print('=' * 80)\n print(name)\n results.append(benchmark(clf))\n\nclass L1LinearSVC(LinearSVC):\n\n def fit(self, X, y):\n # The smaller C, the stronger the regularization.\n # The more regularization, the more sparsity.\n self.transformer_ = LinearSVC(penalty=\"l1\",\n dual=False, tol=1e-3)\n X = self.transformer_.fit_transform(X, y)\n return LinearSVC.fit(self, X, y)\n\n def predict(self, X):\n X = self.transformer_.transform(X)\n return LinearSVC.predict(self, X)\n\nprint('=' * 80)\nprint(\"LinearSVC with L1-based feature selection\")\nresults.append(benchmark(L1LinearSVC()))\n\n\nindices = np.arange(len(results))\n\nresults = [[x[i] for x in results] for i in range(5)]\n\nfig, ax = plt.subplots(1, 1)\n\nclf_names, score, training_time, test_time, pred = results\n\nax.plot(indices, score, '-o', label=\"score\", color='#982023')\nax.plot(indices, training_time, '-o', label=\"training time (s)\", color='#46959E')\nax.plot(indices, test_time, '-o', label=\"test time (s)\", color='#C7B077')\n#labels = [item.get_text() for item in ax.get_xticklabels()]\nlabels = clf_names\nax.xaxis.set_ticks(np.arange(np.min(indices), np.max(indices)+1, 1))\nax.set_xticklabels(clf_names, rotation='70', horizontalalignment='right')\nax.set_xlim([-1, 14])\nax.set_ylim([0, 1])\nax.legend(loc='best')\nplt.subplots_adjust(left=0.05, bottom=0.3, top=.98)\n#plt.savefig('ratingClassifierScores.png', dpi=144)\n\nfor name, scr in zip(clf_names, score):\n print('{}: {:.3f}'.format(name, scr))", "Conclusions\nThe 1-gram model worked just as well as the 3-gram model. To reduce complexity, I will therefore use the 1-gram model. Out of the models tested, the NearestCentroid performed the best, so I will use that for classification.", "engine = cadb.connect_aws_db(write_unicode=True)\n\ncity = 'palo_alto'\n\ncmd = 'select h.hotel_id, h.business_id, count(*) as count from '\ncmd += 'ta_reviews r inner join ta_hotels h on r.business_id = '\ncmd += 'h.business_id where h.hotel_city = \"'\ncmd += (' ').join(city.split('_'))+'\" '\ncmd += 'GROUP BY r.business_id'\ncmd\n\npd.read_sql_query(cmd, engine)\n\ncmd = 'select distinct r.business_id from '\ncmd += 'ta_reviews r inner join ta_hotels h on r.business_id = '\ncmd += 'h.business_id where h.hotel_city = \"'\ncmd += (' ').join(city.split('_'))+'\" '\ncmd\n\n[int(bid[0]) for bid in pd.read_sql_query(cmd, engine).values]\n\nbids = [1, 2, 5, 10, 20, 54325]\n\nif 3 not in bids:\n print('it is clear!')\nelse:\n print('already exists')\n\nnp.where((y_test == 5) & (pred[10] == 1))\n\nlen(test_data)\n\ntest_data[47]\n\ntest_data[354]\n\nnp.where((y_test == 1) & (pred[10] == 5))\n\nnp.where((y_test == 5) & (pred[10] == 5))\n\ntest_data[4]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
antoniomezzacapo/qiskit-tutorial
qiskit/aqua/chemistry/advanced_howto.ipynb
apache-2.0
[ "<img src=\"../../../images/qiskit-heading.gif\" alt=\"Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook\" width=\"500 px\" align=\"left\">\nQiskit Aqua: Chemistry advanced how to\nThe latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.\n\nContributors\nRichard Chen<sup>[1]</sup>, Antonio Mezzacapo<sup>[1]</sup>, Marco Pistoia<sup>[1]</sup>, Stephen Wood<sup>[1]</sup>\nAffiliation\n\n<sup>[1]</sup>IBMQ\n\nIntroduction\nIn the aqua_chemistry_howto example, we show how to configure different parameters in an input dictionary for different experiments in Aqua Chemistry. Nonetheless, for the users who intersted in experimenting with their new algorithm or new components in the algorithm, which is outside of the bounds of the high-level APIs, the users may want to break down the computation into smaller steps.\nIn this notebook, we decompose the steps to compute the ground state energy of a molecule into 4 steps:\n 1. define a molecule and get integrals from a driver (PySCF)\n 2. define how to map Fermionic Hamiltonian into qubit Hamiltonian\n 3. initiate and config dynamically-loaded instances, such as algorithm, optimizer, variational_form, and initial_state\n 4. run the algorithm and retrieve the results", "# import common packages\nimport numpy as np\nfrom collections import OrderedDict\n\n# lib from Qiskit Aqua Chemistry\nfrom qiskit_aqua_chemistry import FermionicOperator\n\n# lib from Qiskit Aqua\nfrom qiskit_aqua import Operator\nfrom qiskit_aqua import (get_algorithm_instance, get_optimizer_instance, \n get_variational_form_instance, get_initial_state_instance)\n\n# lib for driver\nfrom qiskit_aqua_chemistry.drivers import ConfigurationManager", "Step 1: define a molecule\nHere, we use LiH in sto3g basis with PySCF driver as an example.\nThe molecule object records the information from the PySCF driver.", "# using driver to get fermionic Hamiltonian\n# PySCF example\ncfg_mgr = ConfigurationManager()\npyscf_cfg = OrderedDict([('atom', 'Li .0 .0 .0; H .0 .0 1.6'), \n ('unit', 'Angstrom'), \n ('charge', 0), \n ('spin', 0), \n ('basis', 'sto3g')])\nsection = {}\nsection['properties'] = pyscf_cfg\ndriver = cfg_mgr.get_driver_instance('PYSCF')\nmolecule = driver.run(section)", "Step 2: Prepare qubit Hamiltonian\nHere, we setup the to-be-frozen and to-be-removed orbitals to reduce the problem size when we mapping to qubit Hamiltonian. 
Furthermore, we define the mapping type for qubit Hamiltonian.\nFor the particular parity mapping, we can further reduce the problem size.", "# please be aware that the idx here with respective to original idx\nfreeze_list = [0]\nremove_list = [-3, -2] # negative number denotes the reverse order\nmap_type = 'parity'\n\nh1 = molecule._one_body_integrals\nh2 = molecule._two_body_integrals\nnuclear_repulsion_energy = molecule._nuclear_repulsion_energy\n\nnum_particles = molecule._num_alpha + molecule._num_beta\nnum_spin_orbitals = molecule._num_orbitals * 2\nprint(\"HF energy: {}\".format(molecule._hf_energy - molecule._nuclear_repulsion_energy))\nprint(\"# of electrons: {}\".format(num_particles))\nprint(\"# of spin orbitals: {}\".format(num_spin_orbitals))\n\n# prepare full idx of freeze_list and remove_list\n# convert all negative idx to positive\nremove_list = [x % molecule._num_orbitals for x in remove_list]\nfreeze_list = [x % molecule._num_orbitals for x in freeze_list]\n# update the idx in remove_list of the idx after frozen, since the idx of orbitals are changed after freezing\nremove_list = [x - len(freeze_list) for x in remove_list]\nremove_list += [x + molecule._num_orbitals - len(freeze_list) for x in remove_list]\nfreeze_list += [x + molecule._num_orbitals for x in freeze_list]\n\n# prepare fermionic hamiltonian with orbital freezing and eliminating, and then map to qubit hamiltonian\n# and if PARITY mapping is selected, reduction qubits\nenergy_shift = 0.0\nqubit_reduction = True if map_type == 'parity' else False\n\nferOp = FermionicOperator(h1=h1, h2=h2)\nif len(freeze_list) > 0:\n ferOp, energy_shift = ferOp.fermion_mode_freezing(freeze_list)\n num_spin_orbitals -= len(freeze_list)\n num_particles -= len(freeze_list)\nif len(remove_list) > 0:\n ferOp = ferOp.fermion_mode_elimination(remove_list)\n num_spin_orbitals -= len(remove_list)\n\nqubitOp = ferOp.mapping(map_type=map_type, threshold=0.00000001)\nqubitOp = qubitOp.two_qubit_reduced_operator(num_particles) if qubit_reduction else qubitOp\nqubitOp.chop(10**-10)", "We use the classical eigen decomposition to get the smallest eigenvalue as a reference.", "# Using exact eigensolver to get the smallest eigenvalue\nexact_eigensolver = get_algorithm_instance('ExactEigensolver')\nexact_eigensolver.init_args(qubitOp, k=1)\nret = exact_eigensolver.run()\nprint('The computed energy is: {:.12f}'.format(ret['eigvals'][0].real))\nprint('The total ground state energy is: {:.12f}'.format(ret['eigvals'][0].real + energy_shift + nuclear_repulsion_energy))", "Step 3: Initiate and config dynamically-loaded instances\nTo run VQE with UCCSD variational form, we require\n- VQE algorithm\n- Classical Optimizer\n- UCCSD variational form\n- Prepare the initial state into HartreeFock state\n[Optional] Setup token to run the experiment on a real device\nIf you would like to run the experiement on a real device, you need to setup your account first.\nNote: If you do not store your token yet, use IBMQ.save_accounts() to store it first.", "from qiskit import IBMQ\nIBMQ.load_accounts()\n\n# setup COBYLA optimizer\nmax_eval = 200\ncobyla = get_optimizer_instance('COBYLA')\ncobyla.set_options(maxiter=max_eval)\n\n# setup HartreeFock state\nHF_state = get_initial_state_instance('HartreeFock')\nHF_state.init_args(qubitOp.num_qubits, num_spin_orbitals, map_type, \n qubit_reduction, num_particles)\n\n# setup UCCSD variational form\nvar_form = get_variational_form_instance('UCCSD')\nvar_form.init_args(qubitOp.num_qubits, depth=1, \n 
num_orbitals=num_spin_orbitals, num_particles=num_particles, \n active_occupied=[0], active_unoccupied=[0, 1],\n initial_state=HF_state, qubit_mapping=map_type, \n two_qubit_reduction=qubit_reduction, num_time_slices=1)\n\n# setup VQE\nvqe_algorithm = get_algorithm_instance('VQE')\nvqe_algorithm.setup_quantum_backend(backend='statevector_simulator')\nvqe_algorithm.init_args(qubitOp, 'matrix', var_form, cobyla)", "Step 4: Run algorithm and retrieve the results\nThe smallest eigenvalue is stored in the first entry of the eigvals key.", "results = vqe_algorithm.run()\nprint('The computed ground state energy is: {:.12f}'.format(results['eigvals'][0]))\nprint('The total ground state energy is: {:.12f}'.format(results['eigvals'][0] + energy_shift + nuclear_repulsion_energy))\nprint(\"Parameters: {}\".format(results['opt_params']))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
saketkc/hatex
2015_Fall/MATH-578B/Homework4/Homework4.ipynb
mit
[ "Given\nDensity of bacteria in plate = $\\rho= 10/cm^2$\nArea $A=10cm^2$\nProbability of a bacteria carrying mutation = $\\mu$\nFacts\nBacteria on plate are like 'white and black' balls in a box with thw white ones representing the mutated copies.\nThus, assuming total number $N \\sim Poisson(\\lambda)$ and the coordinates $(X_i,Y_i)$ to be in a uniform square(assume the 'dish' to be square ) it is safe to assume, that this is a poisson point process with intensity $\\mu$ and the mutated 'whites' make it a poisson coloring problem.\nHence, if $MW$ represents the mutated whites and $UB$ represent the unmutated black balls, we have \n$MW \\sim PPP(\\mu\\rho)$ and $UB \\sim PPP((1-\\mu)\\rho)$\nThus for part (a): P(probability that n 'mutated whites' exist) = $\\frac{e^{-\\rho\\mu A}(\\mu\\rho A)^n}{n!}$\nPart (b)", "from math import log, exp, e\nt_min_grow = log(10)/log(1.05)\nrho = 10\nA = 10\nN = rho*A\n\n\nprint t_min_grow", "Radius of cell growing for last $t$ minutes: $1.05^t \\times 10^{-3}cm$. Thus for a cell to form a detectable cluster it should grow for atleast: $\\frac{log(10)}{log(1.05}=47.19$ minutes\nAnd the total time for grow is $60$ minutes, so that MAX wait time before growth starts is $60-47.19=12.81$ minutes", "delta = (1-exp(-12.81/20))\nprint(delta)", "$\\delta = p(\\text{wait time $\\leq$ 12.81 minutes}) = 1-\\exp(-12.81/20) = 0.473$\nSo now we need these criteria for calling a ball white: \n- It gets mutated (PPP)\n- It has a waiting time of <12.81 minutes\nSo we have a new $PPP(\\rho \\mu \\delta)$ and \n$\\frac{17}{\\rho \\times A} = \\exp(-\\rho \\mu \\delta A) \\times \\frac{(\\rho \\mu \\delta A)^{17}}{17!}$\n$\\ln(\\frac{17}{\\rho A}) = -\\rho \\mu \\delta A + 17 \\ln(\\rho \\mu \\delta A) - \\ln(17!)$", "from sympy import solve, Eq, symbols\nfrom mpmath import log as mpl\nfrom math import factorial\nmu= symbols('mu')\nlhs = log(17/100.0,e)\nrhs = Eq(-rho*mu*delta*A + 17*(rho*mu*delta*A-1) -mpl(factorial(17)))\ns = solve(rhs, mu)\n\n\nprint (s)", "Thus, approximated $\\mu = 0.0667$ for the given dataset!\nProblem 2", "%pylab inline\nimport matplotlib.pyplot as plt\nN = 10**6\nN_t = 10**6\nmu = 10**-6\ns = 0.001\ngenerations = 8000\nmu_t = N_t*mu\n\nfrom scipy.stats import poisson, binom \nimport numpy as np\n\ndef run_generations(distribution):\n mutations = []\n all_mutations = []\n for t in range(generations):\n # of mutations in generation t\n offspring_mutations = []\n for mutation in mutations:\n # an individual carrying n mutations leaves behind on average (1 − s)^n copies of each of her genes\n if distribution == 'poisson':\n mutated_copies = np.sum(poisson.rvs(1-s, size=mutation))\n else:\n p = (1-s)/2\n mutated_copies = np.sum(binom.rvs(2, p, size=mutation))\n offspring_mutations.append(mutated_copies)\n\n M_t = poisson.rvs(mu_t, size=1)[0]\n offspring_mutations.append(M_t)\n ## Done with this generation\n mutations = offspring_mutations\n all_mutations.append(mutations)\n return all_mutations", "Poisson", "pylab.rcParams['figure.figsize'] = (16.0, 12.0)\nall_mutations = run_generations('poisson')\nplt.plot(range(1,generations+1),[np.mean(x) for x in all_mutations])\nplt.title('Average distinct mutation per generations')\n\nplt.plot(range(1,generations+1),[np.sum(x) for x in all_mutations])\nplt.title('Total mutation per generations')\n\nplt.hist([np.max(x) for x in all_mutations], 50)\nplt.title('Most common mutation')\n\nplt.hist([np.mean(x) for x in all_mutations], 50)\nplt.title('Distinct mutation')", "Poisson Results", "mu = 10**-6\ns = 0.001\nN= 
10**6\ntheoretical_tot_mut = mu*N/s\nprint(theoretical_tot_mut)\n\nprint ('Average total mutations per generation: {}'.format(np.mean([np.sum(x) for x in all_mutations])))\nprint ('Average distinct mutations per generation: {}'.format(np.mean([len(x) for x in all_mutations])))\nprint ('Theoretical total mutations per generation: {}'.format(theoretical_tot_mut))\n", "Binomial", "pylab.rcParams['figure.figsize'] = (16.0, 12.0)\nall_mutations = run_generations('binomial')\nplt.plot(range(1,generations+1),[np.mean(x) for x in all_mutations])\nplt.title('Average distinct mutation per generations')\n\nplt.plot(range(1,generations+1),[np.sum(x) for x in all_mutations])\nplt.title('Total mutation per generations')\n\nplt.hist([np.max(x) for x in all_mutations], 50)\nplt.title('Most common mutation')\n\nplt.hist([np.mean(x) for x in all_mutations], 50)\nplt.title('Distinct mutation')", "Binomial results", "mu = 10**-6\ns=0.001\nN= 10**6\ntheoretical_tot_mut = mu*N/s\nprint(theoretical_tot_mut)\n\nprint ('Average total mutations per generation: {}'.format(np.mean([np.sum(x) for x in all_mutations])))\nprint ('Average distinct mutations per generation: {}'.format(np.mean([len(x) for x in all_mutations])))\nprint ('Theoretical total mutations per generation: {}'.format(theoretical_tot_mut))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
astroumd/GradMap
notebooks/Lectures2021/Lecture1/L1_challenge_problem_stars_instructor.ipynb
gpl-3.0
[ "# Students, you'll learn what this means in Lecture #3 so please ignore this for now\nfrom numpy.random import randint as random_number", "Stellar Classification\nBackground\nThe\nHarvard Spectral Classification system for stars\nclassifies stars based on their spectral type - where the type of a star is designated as a letter that corresponds to a given temperature range (since different temperatures correspond to different spectra!). \nBelow there is a list of temperatures in Kelvin of a stellar cluster. \nUsing the table linked from the above Wikipedia page, let's classify each of the stars in our temperature list.", "# These are your stellar temperatures, you're welcome!\ntemp = [5809, 16589, 4698, 1869, 37809, 8634]", "Using if-elif for discrete classification\nThis problem will use elif statements, the useful sibling of if and else, it basically extends your if statements to test multiple situations.\nIn our current situation, where each star will have exactly one spectral type, elif will really come through to make our if statements more efficient.\nUnlike using many if statements, elif only executes if no previous statements have been deemed True.\nThis is nice, especially if we can anticipate which scenarios are most probable. \nLet's start with a simple classification problem to get into the mindset of if-elif-else logic.\nWe want to classify if a random number n is between 0 and 100, 101 and 149, or 150 to infinity. \nThis could be useful if, for example, we wanted to classify a person's IQ score.\nFill in the if-elif-else statements below so that our number, n, will be classified.\nUse a print() statement to print out n and its classification (make sure you are descriptive!)\nYou can use the following template: print(n, 'your description here')", "# Fill in the parentheses. Don't forget indentation!\nn = random_number(50,250) # this should be given!\nif ( n <= 100 ):\n print(n,'is less than or equal to 100.')\nelif (100 < n <= 150):\n print(n,'is between 100 and 150.')\nelse:\n print(n, 'is greater than or equal to 150.')", "Test your statement a few times so that you see if it works for various numbers.\nEvery time you run the cell, a new random number will be chosen, but you can also set it to make sure that the code works correctly.\nJust comment (#) before the random_number() function.\nMake sure to also test the boundary numbers, as they may act odd if there is a precarious &lt;=.\nThe loop\nWe have a list of stellar classifications above. 
\nOur new classifier will be a lot like the number classifier, but you will need to use\nthe stellar classification boundaries in Wikipedia's table\ninstead of our previous boundaries.\nAnother thing you will need to do is make a loop so that each star in temp is classified within one cell!\nYou can do this with a while-loop, using a dummy index that goes up to len(temp).\nConstruct a loop such that, for each temperature in temp, you will print out the star's temperature and classification.", "i = 0\nend = len(temp)\n\n# Define your loop here\nwhile i < end:\n if temp[i] < 3700:\n print('Star',temp[i],'K is type', 'M')\n elif 3700 <= temp[i] < 5200:\n print('Star',temp[i],'K is type', 'K')\n elif 5200 <= temp[i] < 6000:\n print('Star',temp[i],'K is type', 'G')\n elif 6000 <= temp[i] < 7500:\n print('Star',temp[i],'K is type', 'F')\n elif 7500 <= temp[i] < 10000:\n print('Star',temp[i],'K is type', 'A')\n elif 10000 <= temp[i] < 30000:\n print('Star',temp[i],'K is type', 'B')\n else: # Greater than 30000:\n print('Star', temp[i], 'K is type', 'O')\n \n i = i + 1", "Your result should be a printout of each star with the correct stellar type, check the table to make sure they're being classified reasonably!" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
SnoopJeDi/WoWNPCs
WoW_NPC_analysis.ipynb
mit
[ "Analysis of non-player characters (NPCs) in World of Warcraft (WoW)\nWorld of Warcraft (WoW) is a popular online game originally published in 2004 by Blizzard Entertainment. The game has maintained its popularity over 13 years, with many updates to add new content. The game world is massive and contains many non-player characters (NPCs) with a variety of character statistics. This notebook shows how to explore the statistics of the game-world by scraping NPC data from a Wikia maintained by fans of the game.", "import re\nfrom lxml import etree\nfrom matplotlib import pyplot as plt\nimport numpy as np\n%matplotlib inline\nimport seaborn as sns", "Scrape data\nThe file wowwiki_pages_current.xml is a database dump of the WoW Wikia, containing the current version of every page on the Wikia. This amounts to ~500 MB of uncompressed text! To process this, we will use the lxml library. A typical page we are interested in looks something like this:\n```xml\n<page>\n <title>Rat</title>\n <ns>0</ns>\n <id>15369</id>\n <sha1>1q38rt4376m9s74uwwslwuea2yy16or</sha1>\n <revision>\n <id>2594586</id>\n <timestamp>2012-08-25T17:06:39Z</timestamp>\n <contributor>\n <username>MarkvA</username>\n <id>1171508</id>\n </contributor>\n <minor />\n <comment>adding ID to infobox, replaced: ...snip...</comment>\n <text xml:space=\"preserve\" bytes=\"1491\">{{npcbox\n |name=Rat|id=4075\n |image=Rat.png\n |level=1\n |race=Rat\n |creature=Critter\n |faction=Combat\n |aggro={{aggro|0|0}}\n |health=8\n |location=All around [[Azeroth (world)|Azeroth]]\n}}\n'''Rats''' are small [[critter]]s, found in many places, including [[Booty Bay]], [[The Barrens]], and the [[Deeprun Tram]]\n....snip...\n </text>\n </revision>\n</page>\n```\nWe can see that the information we care about is inside of a &lt;text&gt; element, contained in an npcbox, following the pattern of attribute=value. In this case, name=Rat, level=1, health=8.\nBecause the file is pretty large, we will use the lxml.etree.iterparse() method to examine each individual page as it is parsed, extracting relevant character information if it is present, and discarding each element when we are done with it (to save memory). 
Our strategy will be to process any &lt;text&gt; element we come across using the regular expressions library (regular expressions are a powerful tool for processing textual data).", "MAXPRINT = 100 \nMAXPROCESS = 1e7\nnumprocessed = 0\nnames = []\nlevels = []\nhealths = []\nitertree = etree.iterparse('wowwiki_pages_current.xml')\nfor event, element in itertree:\n if numprocessed > MAXPROCESS:\n raise Exception('Maximum number of records processed')\n break\n numprocessed += 1\n \n # if we are currently looking at the text of an article **and** there's a health value, let's process it\n if element.tag.endswith('text') and element.text and 'health' in element.text:\n # this set of regular expressions will try to extract the name, health, and level of the NPC\n name_re = re.search('name ?= ?(.+)(\\||\\n)', element.text)\n health_re = re.search('health ?= ?([\\d,]+)', element.text)\n level_re = re.search('level ?= ?(.+)(\\n|\\|)', element.text)\n \n health = int(health_re.group(1).replace(',', '')) if health_re else None\n try:\n level = int(level_re.group(1)) \n except:\n level = None\n name = name_re.group(1) if name_re else None\n if name and health:\n names.append(name)\n levels.append(level)\n healths.append(health)\n element.clear() # get rid of the current element, we're done with it", "Data cleaning and exploratory analysis\nNow we have a set of lists (names, levels, healths) that contain information about every NPC we were able to find. We can look at this data using the numpy library to find out what the data look like, using the techniques of exploratory data analysis.", "# convert the lists we've built into numpy arrays. `None` entries will be mapped to NaNs\nnames = np.array(names)\nlvls = np.array(levels, dtype=np.float64)\nhps = np.array(healths, dtype=np.float64)\n\nnum_NPCs = len(names)\nmin_lvl, max_lvl = lvls.min(), lvls.max()\nmin_hp, max_hp = hps.min(), hps.max()\n\nprint(\n \"Number of entries: {}\\n\"\n \"Min/max level: {}, {}\\n\"\n \"Min/max health: {}, {}\".format(num_NPCs, min_lvl, max_lvl, min_hp, max_hp)\n)", "We can see we've successfully extracted over 12,000 NPCs, but there are some NaNs in the levels! Let's look at these...", "names[np.isnan(lvls)][:5]", "Looking at the page for Vol'jin, we see that his level is encoded on the page as ?? Boss, which can't be converted to an integer. World of Warcraft has a lot of these, and for a more detailed analysis, we could isolate these entries by assigning them a special value during parsing, but in this first pass at the data, we will simply discard them by getting rid of all NaNs. The numpy.isfinite() function will helps us select only entries in the lvls array that are finite (i.e. not NaN)", "idx = np.isfinite(lvls)\n\nnum_NPCs = len(names[idx])\nmin_lvl, max_lvl = lvls[idx].min(), lvls[idx].max()\nmin_hp, max_hp = hps[idx].min(), hps[idx].max()\n\nprint(\n \"Number of entries: {}\\n\"\n \"Min/max level: {}, {}\\n\"\n \"Min/max health: {}, {}\".format(num_NPCs, min_lvl, max_lvl, min_hp, max_hp)\n)", "Ah, there we go! Knowing that the maximum player level (as of this writing) in WoW is 100, we can surmise that there are still some NPCs in this dataset that represent very-high-level bosses/etc., which may skew the statistics of hitpoints. We can set our cutoff at this point to only consider NPCs that are directly comparable to player characters. Since it appears there is a large range of HP values, we will look at the logarithm (np.log10) of the HPs.\n(N.B. 
a numpy trick we're using here: idx is a boolean array, but calling np.sum() forces a typecast (False -> 0, True -> 1))", "LEVEL_CUTOFF = 100\nhps = np.log10(healths)\nlvls = np.array(levels)\n\nidx = np.isfinite(lvls)\nlvls = lvls[idx]\nhps = hps[idx]\n\nprint(\"Number of NPCs with level > 100: %d\" % (lvls > 100).sum())\n\nidx = (lvls <= LEVEL_CUTOFF)\n\nprint(\"Number of NPCs with finite level < 100: %d\\n\" % (idx.sum()))\n\nlvls = lvls[idx]\nhps = hps[idx]\n\nnum_NPCs = lvls.size\nmin_lvl, max_lvl = lvls.min(), lvls.max()\nmin_hp, max_hp = hps.min(), hps.max()\n\nprint(\n \"Number of entries: {}\\n\"\n \"Min/max level: {}, {}\\n\"\n \"Min/max log10(health): {}, {}\".format(num_NPCs, min_lvl, max_lvl, min_hp, max_hp)\n)", "Visualizing the data\nWe could continue to explore these data using text printouts of statistical information (mean, median, etc.), but with a dataset this large, visualization becomes a very powerful tool. We will use the matplotlib library (and the seaborn library that wraps it) to generate a hexplot (and marginal distributions) of the data.\n(N.B. the use of the inferno colormap here! There are a lot of good reasons to be particular about your choice of colormaps)", "ax = sns.jointplot(lvls, hps, stat_func=None, kind='hex', \n xlim=(0, LEVEL_CUTOFF), bins='log', color='r', cmap='inferno_r'\n )\nax.fig.suptitle('HP vs Lvl of WoW NPCS (N={N})'.format(N=lvls.size), y=1.02)\nax.ax_joint.set_xlabel('NPC level')\nax.ax_joint.set_ylabel(r'$log_{10}(\\mathrm{HP})$')\ncax = ax.fig.add_axes([0.98, 0.1, 0.05, 0.65])\nplt.colorbar(cax=cax)\ncax.set_title('$log_{10}$(count)', x=1.5)", "Lessons learned from visualizing the data\n\nOverall, HP pools increase roughly exponentially with NPC level, with a sharp uptick near the current maximum level.\nThere are well-defined peaks in the number of NPCs at levels 60, 70, and 80, the points of historical maximum levels for player characters. This suggests that the developers focus their efforts on offering a variety of \"end-game\" content, which is anecdotally true.\n\nOpen questions\n\nWhat's going on with the pockets of low-HP high-level NPCs? Are these \"punching bag\" NPCs for assessing DPS?\nHow does the introduction of \"difficulty\" in raids affect this? It's very likely that the scraping process is hampered by this, because we just grab the very first thing that looks like an HP. But there are plenty of pages with multiple HP values!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
agaveapi/SC17-container-tutorial
content/notebooks/Python SDK.ipynb
bsd-3-clause
[ "agavepy, The Agave Python SDK", "!mkdir -p ~/agave\n\n%cd ~/agave\n\n!pip2 install --upgrade setvar\n\nimport re\nimport os\nimport sys\nfrom setvar import *\nfrom time import sleep\n\n# This cell enables inline plotting in the notebook\n%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\nloadvar()", "In this notebook, we introduce some basic uses of the agavepy Python library for interacting with the Agave Platform science-as-a-service APIs. The examples primarily draw from the apps service, but the concepts introduced are broadly applicable to all Agave services. In subsequent notebooks, we'll take deeper dives into specific topics such as using agavepy to launch and monitor an Agave job. For more information about Agave, please see the developer site: http://agaveapi.co/\nThe agavepy library provides a high-level Python binding to the Agave API. The first step is to import the Agave class:", "from agavepy.agave import Agave\n\nimport json", "Before we can interact with Agave, we need to instantiate a client. Typically, we would use the constructor and pass in our credentials (OAuth client key and secret as well as our username and password) together with configuration data for our \"tenant\", the organization within Agave we wish to interact with.", "agave_cache_dir = os.environ.get('AGAVE_CACHE_DIR')\nag_token_cache = json.loads(open(agave_cache_dir + '/current').read())\nprint (ag_token_cache)\n\nAGAVE_APP_NAME=\"funwave-tvd-nectar\" + os.environ['AGAVE_USERNAME']\n\nag = Agave(token=ag_token_cache['access_token'], refresh_token=ag_token_cache['refresh_token'], api_key=ag_token_cache['apikey'], api_secret=ag_token_cache['apisecret'],api_server=ag_token_cache['baseurl'], client_name=AGAVE_APP_NAME, verify=False)\n#(api_server=ag_token_cache['baseurl'], api_key=ag_token_cache['apikey'], api_secret=ag_token_cache['apisecret'], verify=False, username=ag_token_cache['username'], password=os.environ.get('AGAVE_PASSWORD'))", "The agavepy library's Agave class also provides a restore() method for reconstituting previous OAuth sessions. Previous sessions are read from and written to a cache file, /etc/.agpy, so that OAuth sessions persist across iPython sessions. When you authenticated to JupyterHub, the OAuth login was written to the .agpy file. We can therefore use the restore method to create an OAuth client without needing to pass any credentials:\nNote that the restore method can take arguments (such as client_name) so that you can restore/manage multiple OAuth sessions. When first getting started on the hub, there is only one session in the cache file, so no arguments are required. \nIf we ever want to inspect the OAuth session being used by our client, we have a few methods available to us. First, we can print the token_info dictionary on the token object:", "ag.token.token_info\n\nag.token.refresh()\n\nag.token.token_info", "This shows us both the access and refresh tokens being used. We can also see the end user profile associated with these tokens:", "ag.profiles.get()", "Finally, we can inspect the ag object directly for attributes like api_key, api_secret, api_server, etc.", "print ag.api_key, ag.api_secret, ag.api_server", "We are now ready to interact with Agave services using our agavepy client. We can take a quick look at the available top-level methods of our client:", "dir(ag)", "We see there is a top-level method for each of the core science APIs in agave. 
We will focus on the apps service since it is of broad interest, but much of what we illustrate is generally applicable to all Agave core science APIs. \nWe can browse a specific collection using the list() method. For example, let's see what apps are available to us:", "ag.apps.list()", "What we see in the output above is a python list representing the JSON object returned from Agave's apps service. It is a list of objects, each of which representing a single app. Let's capture the first app object and inspect it. To do that we can use normal Python list notation:", "app = ag.apps.list()[0]\n\nprint type(app); app", "We see that the app object is of type agavepy.agave.AttrDict. That's a Python dictionary with some customizations to provide convenience features such as using dot notation for keys/attributes. For example, we see that the app object has an 'id' key. We can access it directly using dot notation:", "app.id", "Equivalently, we can use normal Python dictionary syntax:", "app['id']", "In Agave, the app id is the unique identifier for the application. We'll come back to that in a minute. For now, just know that this example is very typical of responses from agavepy: in general the JSON response object is represented by lists of AttrDicts.\nStepping back for a second, let's explore the apps collection a bit. We can always get a list of operations available for a collection by using the dir(-) method:", "dir(ag.apps)", "Also notice that we have tab-completion on these operations. So, if we start typing \"ag.apps.l\" and then hit tab, Jupyter provides a select box of operations beginning with \"l\". Try putting the following cell in focus and then hitting the tab key (but don't actually hit enter or try to execute the cell; otherwise you'll get an exception because there's no method called \"l\"):", "ag.apps.l", "If we would like to get details about a specific object for which we know the unique id, in general we use the get method, passing in the id for the object. Here, we will use an app id we found from the ag.apps.list command.", "ag.apps.get(appId=app.id)", "Whoa, that's a lot of information. We aren't going to give a comprehensive introduction to Agave applications in this notebook. Instead we refer you to the official Agave app tutorial on the website: http://agaveapi.co/documentation/tutorials/app-management-tutorial/\nHowever, we will point out a couple of important points. Let's capture that response in an object called full_app:", "full_app = ag.apps.get(appId=app.id)", "Complex sub-objects of the application such as application inputs and parameters come back as top level attributes. and are represented as lists. The individual elements of the list are represented as AttrDicts. We can see this by exploring our full_app's inputs:", "print type(full_app.inputs); print type(full_app.inputs[0]); full_app.inputs[0]", "Then, if we want the input id, we can use dot notation or dictionary notation just as before:", "full_app.inputs[0].id", "You now have the ability to fully explore individual Agave objects returned from agavepy, but what about searching for objects? The Agave platform provides a powerful search feature across most services, and agavepy supports that as well. \nEvery retrieval operation in agavepy (for example, apps.list) supports a \"search\" argument. The syntax for the search argument is identical to that described in the Agave documentation: it uses a dot notation combining search terms, values and (optional) operators. 
The search object itself should be a python dictionary with strings for the keys and values. Formally, each key:value pair in the dictionary adheres to the following form: \n $$term.operator:value$$\nThe operator is optional and defaults to equality ('eq'). For example, the following search filters the list of all apps down to just those with the id attribute equal to our app.id:", "ag.apps.list(search={'id': app.id})", "Equivalently, we could explicitly set the equality operator:", "ag.apps.list(search={'id.eq': app.id})", "Typically, the list of available search terms is identical to the attributes included in the JSON returned when requesting the full resource description. Operators include 'like', 'lt', 'gt', 'lte', 'gte', etc. See the official Agave documentation for the complete list. \nHere we retrieve all apps with a name is \"like\" opensees:", "ag.apps.list(search={'name.like': 'opensees'})", "Two results were returned, both with name \"opensees\".\nYou can include multiple search expressions in the form of additional key:value pairs to build a more restrictive query. Here we restrict the result to opensees apps with revision at least 25:", "ag.apps.list(search={'name.like': 'opensees', 'revision.gte': 25})", "We hope this gives you enough general information to begin exploring the Agave services using agavepy on your own. In subsequent notebooks, we'll take deeper dives into specific topics such as using agavepy to launch and monitor an Agave job executing OpenSees on Stampede." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ad960009/dist-keras
examples/mnist_preprocessing.ipynb
gpl-3.0
[ "MNIST Preprocessing\nJoeri Hermans (Technical Student, IT-DB-SAS, CERN) \nDepartement of Knowledge Engineering \nMaastricht University, The Netherlands", "!(date +%d\\ %B\\ %G)", "Preparation\nTo get started, we first load all the required imports. Please make sure you installed dist-keras, and seaborn. Furthermore, we assume that you have access to an installation which provides Apache Spark.\nBefore you start this notebook, place the MNIST dataset (which is provided in a zip in examples/data within this repository) on HDFS. Or in the case HDFS is not available, place it on the local filesystem. But make sure the path to the file is identical for all computing nodes.", "%matplotlib inline\n\nimport numpy as np\n\nimport seaborn as sns\n\nimport time\n\nfrom pyspark import SparkContext\nfrom pyspark import SparkConf\n\nfrom matplotlib import pyplot as plt\n\nfrom pyspark.ml.feature import StandardScaler\nfrom pyspark.ml.feature import VectorAssembler\nfrom pyspark.ml.feature import OneHotEncoder\nfrom pyspark.ml.feature import MinMaxScaler\nfrom pyspark.ml.feature import StringIndexer\n\nfrom distkeras.transformers import *\nfrom distkeras.utils import *", "In the following cell, adapt the parameters to fit your personal requirements.", "# Modify these variables according to your needs.\napplication_name = \"MNIST Preprocessing\"\nusing_spark_2 = False\nlocal = False\npath_train = \"data/mnist_train.csv\"\npath_test = \"data/mnist_test.csv\"\nif local:\n # Tell master to use local resources.\n master = \"local[*]\"\n num_processes = 3\n num_executors = 1\nelse:\n # Tell master to use YARN.\n master = \"yarn-client\"\n num_executors = 20\n num_processes = 1\n\n# This variable is derived from the number of cores and executors, and will be used to assign the number of model trainers.\nnum_workers = num_executors * num_processes\n\nprint(\"Number of desired executors: \" + `num_executors`)\nprint(\"Number of desired processes / executor: \" + `num_processes`)\nprint(\"Total number of workers: \" + `num_workers`)\n\nimport os\n\n# Use the DataBricks CSV reader, this has some nice functionality regarding invalid values.\nos.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.databricks:spark-csv_2.10:1.4.0 pyspark-shell'\n\nconf = SparkConf()\nconf.set(\"spark.app.name\", application_name)\nconf.set(\"spark.master\", master)\nconf.set(\"spark.executor.cores\", `num_processes`)\nconf.set(\"spark.executor.instances\", `num_executors`)\nconf.set(\"spark.executor.memory\", \"20g\")\nconf.set(\"spark.yarn.executor.memoryOverhead\", \"2\")\nconf.set(\"spark.locality.wait\", \"0\")\nconf.set(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\");\n\n# Check if the user is running Spark 2.0 +\nif using_spark_2:\n sc = SparkSession.builder.config(conf=conf) \\\n .appName(application_name) \\\n .getOrCreate()\nelse:\n # Create the Spark context.\n sc = SparkContext(conf=conf)\n # Add the missing imports\n from pyspark import SQLContext\n sqlContext = SQLContext(sc)\n\n# Record time of starting point.\ntime_start = time.time()\n\n# Check if we are using Spark 2.0\nif using_spark_2:\n reader = sc\nelse:\n reader = sqlContext\n# Read the training set.\nraw_dataset_train = reader.read.format('com.databricks.spark.csv') \\\n .options(header='true', inferSchema='true') \\\n .load(path_train)\n# Read the test set.\nraw_dataset_test = reader.read.format('com.databricks.spark.csv') \\\n .options(header='true', inferSchema='true') \\\n .load(path_test)\n# Repartition the datasets.\nraw_dataset_train 
= raw_dataset_train.repartition(num_workers)\nraw_dataset_test = raw_dataset_test.repartition(num_workers)", "As shown in the output of the cell above, we see that every pixel is associated with a seperate column. In order to ensure compatibility with Apache Spark, we vectorize the columns, and add the resulting vectors as a seperate column. However, in order to achieve this, we first need a list of the required columns. This is shown in the cell below.", "# First, we would like to extract the desired features from the raw dataset.\n# We do this by constructing a list with all desired columns.\nfeatures = raw_dataset_train.columns\nfeatures.remove('label')", "Once we have a list of columns names, we can pass this to Spark's VectorAssembler. This VectorAssembler will take a list of features, vectorize them, and place them in a column defined in outputCol.", "# Next, we use Spark's VectorAssembler to \"assemble\" (create) a vector of all desired features.\n# http://spark.apache.org/docs/latest/ml-features.html#vectorassembler\nvector_assembler = VectorAssembler(inputCols=features, outputCol=\"features\")\n# This transformer will take all columns specified in features, and create an additional column \"features\" which will contain all the desired features aggregated into a single vector.\ntraining_set = vector_assembler.transform(raw_dataset_train)\ntest_set = vector_assembler.transform(raw_dataset_test)", "Once we have the inputs for our Neural Network (features column) after applying the VectorAssembler, we should also define the outputs. Since we are dealing with a classification task, the output of our Neural Network should be a one-hot encoded vector with 10 elements. For this, we provide a OneHotTransformer which accomplish this exact task.", "# Define the number of output classes.\nnb_classes = 10\nencoder = OneHotTransformer(nb_classes, input_col=\"label\", output_col=\"label_encoded\")\ntraining_set = encoder.transform(training_set)\ntest_set = encoder.transform(test_set)", "MNIST\nMNIST is a dataset of handwritten digits. Every image is a 28 by 28 pixel grayscale image. This means that every pixel has a value between 0 and 255. Some examples of instances within this dataset are shown in the cells below.\nNormalization\nIn this Section, we will normalize the feature vectors between the 0 and 1 range.", "# Clear the datasets in the case you ran this cell before.\ntraining_set = training_set.select(\"features\", \"label\", \"label_encoded\")\ntest_set = test_set.select(\"features\", \"label\", \"label_encoded\")\n# Allocate a MinMaxTransformer using Distributed Keras.\n# o_min -> original_minimum\n# n_min -> new_minimum\ntransformer = MinMaxTransformer(n_min=0.0, n_max=1.0, \\\n o_min=0.0, o_max=250.0, \\\n input_col=\"features\", \\\n output_col=\"features_normalized\")\n# Transform the datasets.\ntraining_set = transformer.transform(training_set)\ntest_set = transformer.transform(test_set)", "Convolutions\nIn order to make the dense vectors compatible with convolution operations in Keras, we add another column which contains the matrix form of these images. We provide a utility class (MatrixTransformer), which helps you with this.", "reshape_transformer = ReshapeTransformer(\"features_normalized\", \"matrix\", (28, 28, 1))\ntraining_set = reshape_transformer.transform(training_set)\ntest_set = reshape_transformer.transform(test_set)", "Dense Transformation\nAt the moment, dist-keras does not support SparseVectors due to the numpy dependency. 
As a result, we have to convert the SparseVector to a DenseVector. We added a simple utility transformer which does this for you.", "dense_transformer = DenseTransformer(input_col=\"features_normalized\", output_col=\"features_normalized_dense\")\ntraining_set = dense_transformer.transform(training_set)\ntest_set = dense_transformer.transform(test_set)", "Artificial Enlargement\nWe want to make the dataset 100 times larger to simulate larger datasets, and to evaluate optimizer performance.", "df = training_set\nexpansion = 10\nfor i in range(0, expansion):\n df = df.unionAll(training_set)\ntraining_set = df\ntraining_set.cache()", "Writing to HDFS\nIn order to prevent constant preprocessing, and ensure optimizer performance, we write the data to HDFS in a Parquet format.", "training_set.write.parquet(\"data/mnist_train.parquet\")\ntest_set.write.parquet(\"data/mnist_test.parquet\")\n\n# Record end of transformation.\ntime_end = time.time()\n\ndt = time_end - time_start\nprint(\"Took \" + str(dt) + \" seconds.\")\n\n!hdfs dfs -rm -r data/mnist_test.parquet\n!hdfs dfs -rm -r data/mnist_train.parquet" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nitheeshkl/Udacity_CarND_LaneLines_P1
P1.ipynb
mit
[ "Self-Driving Car Engineer Nanodegree\nProject: Finding Lane Lines on the Road\n\nIn this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip \"raw-lines-example.mp4\" (also contained in this repository) to see what the output should look like after using the helper functions below. \nOnce you have a result that looks roughly like \"raw-lines-example.mp4\", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.\nIn addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.\n\nLet's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the \"play\" button above) to display the image.\nNote: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the \"Kernel\" menu above and selecting \"Restart & Clear Output\".\n\nThe tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.\n\n<figure>\n <img src=\"examples/line-segments-example.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> \n </figcaption>\n</figure>\n<p></p>\n<figure>\n <img src=\"examples/laneLines_thirdPass.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your goal is to connect/average/extrapolate line segments to get output like this</p> \n </figcaption>\n</figure>\n\nRun the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips. 
\nImport Packages", "#importing some useful packages\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\n%matplotlib inline", "Read in an Image", "#reading in an image\nimage = mpimg.imread('test_images/solidWhiteRight.jpg')\n\n#printing out some stats and plotting\nprint('This image is:', type(image), 'with dimensions:', image.shape)\nplt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')", "Ideas for Lane Detection Pipeline\nSome OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:\ncv2.inRange() for color selection\ncv2.fillPoly() for regions selection\ncv2.line() to draw lines on an image given endpoints\ncv2.addWeighted() to coadd / overlay two images\ncv2.cvtColor() to grayscale or change color\ncv2.imwrite() to output images to file\ncv2.bitwise_and() to apply a mask to an image\nCheck out the OpenCV documentation to learn about these and discover even more awesome functionality!\nHelper Functions\nBelow are some helper functions to help get you started. They should look familiar from the lesson!", "import math\nfrom scipy import stats\n\ndef grayscale(img):\n \"\"\"Applies the Grayscale transform\n This will return an image with only one color channel\n but NOTE: to see the returned image as grayscale\n (assuming your grayscaled image is called 'gray')\n you should call plt.imshow(gray, cmap='gray')\"\"\"\n #return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Or use BGR2GRAY if you read an image with cv2.imread()\n return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n \ndef canny(img, low_threshold, high_threshold):\n \"\"\"Applies the Canny transform\"\"\"\n return cv2.Canny(img, low_threshold, high_threshold)\n\ndef gaussian_blur(img, kernel_size):\n \"\"\"Applies a Gaussian Noise kernel\"\"\"\n return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)\n\ndef region_of_interest(img, vertices):\n \"\"\"\n Applies an image mask.\n \n Only keeps the region of the image defined by the polygon\n formed from `vertices`. The rest of the image is set to black.\n \"\"\"\n #defining a blank mask to start with\n mask = np.zeros_like(img) \n \n #defining a 3 channel or 1 channel color to fill the mask with depending on the input image\n if len(img.shape) > 2:\n channel_count = img.shape[2] # i.e. 3 or 4 depending on your image\n ignore_mask_color = (255,) * channel_count\n else:\n ignore_mask_color = 255\n \n #filling pixels inside the polygon defined by \"vertices\" with the fill color \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n \n #returning the image only where mask pixels are nonzero\n masked_image = cv2.bitwise_and(img, mask)\n return masked_image\n\n# to store the slopes & intercepts from previous frame\nprevious_lslope = 1\nprevious_lintercept = 0\nprevious_rslope = 1\nprevious_rintercept = 0\n\ndef draw_lines(img, lines, color=[255, 0, 0], thickness=8):\n \"\"\"\n NOTE: this is the function you might want to use as a starting point once you want to \n average/extrapolate the line segments you detect to map out the full\n extent of the lane (going from the result shown in raw-lines-example.mp4\n to that shown in P1_example.mp4). \n \n Think about things like separating line segments by their \n slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left\n line vs. the right line. 
Then, you can average the position of each of \n the lines and extrapolate to the top and bottom of the lane.\n \n This function draws `lines` with `color` and `thickness`. \n Lines are drawn on the image inplace (mutates the image).\n If you want to make the lines semi-transparent, think about combining\n this function with the weighted_img() function below\n \"\"\"\n # to store our x,y co-ordinate of left & right lane lines\n left_x = []\n left_y = []\n right_x = []\n right_y = []\n\n for line in lines:\n for x1,y1,x2,y2 in line:\n # calculate the slope\n slope = (y2-y1)/(x2-x1)\n # if positive slope, then right line because (0,0) is top left corner\n if (slope > 0.5) and (slope < 0.65):\n # store the points\n right_x.append(x1)\n right_x.append(x2)\n right_y.append(y1)\n right_y.append(y2)\n # draw the actual detected hough lines as well to visually comapre the error\n # cv2.line(img, (x1, y1), (x2, y2), [0,255,0], 2)\n # else its a left line\n elif (slope < -0.5) and (slope > -0.7):\n # store the points\n left_x.append(x1)\n left_x.append(x2)\n left_y.append(y1)\n left_y.append(y2)\n # draw the actual detected hough lines as well to visually comapre the error\n # cv2.line(img, (x1, y1), (x2, y2), [0,255,0], 2)\n\n global previous_lslope\n global previous_lintercept\n global previous_rslope\n global previous_rintercept\n\n # use linear regression to find the slope & intercepts of our left & right lines\n if left_x and left_y:\n previous_lslope, previous_lintercept, lr_value, lp_value, lstd_err = stats.linregress(left_x,left_y)\n \n if right_x and right_y:\n previous_rslope, previous_rintercept, rr_value, rp_value, rstd_err = stats.linregress(right_x,right_y)\n \n # else in all other conditions use the slope & intercept from the previous frame, presuming the next \n # frames will result in correct slope & incercepts for lane lines\n # FIXME: this logic will fail in conditions when lane lines are not detected in consecutive next frames\n # better to not show/detect false lane lines? 
\n \n # extrapolate the lines in the lower half of the image using our detected slope & intercepts\n x = img.shape[1]\n y = img.shape[0]\n # left line\n l_y1 = int(round(y))\n l_y2 = int(round(y*0.6))\n l_x1_lr = int(round((l_y1-previous_lintercept)/previous_lslope))\n l_x2_lr = int(round((l_y2-previous_lintercept)/previous_lslope))\n # right line\n r_y1 = int(round(y))\n r_y2 = int(round(y*0.6))\n r_x1_lr = int(round((r_y1-previous_rintercept)/previous_rslope))\n r_x2_lr = int(round((r_y2-previous_rintercept)/previous_rslope))\n\n # draw the extrapolated lines onto the image\n cv2.line(img, (l_x1_lr, l_y1), (l_x2_lr, l_y2), color, thickness)\n cv2.line(img, (r_x1_lr, r_y1), (r_x2_lr, r_y2), color, thickness)\n\ndef hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):\n \"\"\"\n `img` should be the output of a Canny transform.\n \n Returns an image with hough lines drawn.\n \"\"\"\n lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)\n line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)\n draw_lines(line_img, lines)\n return line_img\n\n# Python 3 has support for cool math symbols.\n\ndef weighted_img(img, initial_img, α=0.8, β=1., λ=0.):\n \"\"\"\n `img` is the output of the hough_lines(), An image with lines drawn on it.\n Should be a blank image (all black) with lines drawn on it.\n \n `initial_img` should be the image before any processing.\n \n The result image is computed as follows:\n \n initial_img * α + img * β + λ\n NOTE: initial_img and img must be the same shape!\n \"\"\"\n return cv2.addWeighted(initial_img, α, img, β, λ)", "Test Images\nBuild your pipeline to work on the images in the directory \"test_images\"\nYou should make sure your pipeline works well on these images before you try the videos.", "import os\nos.listdir(\"test_images/\")", "Build a Lane Finding Pipeline\nBuild the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.\nTry tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.", "# TODO: Build your pipeline that will draw lane lines on the test_images\n# then save them to the test_images directory.\ndef showImage(img,cmap=None):\n # create a new figure to show each image\n plt.figure()\n # show image\n plt.imshow(img,cmap=cmap)\n\ndef detectLaneLines(image):\n # get image sizes. 
X=columsns & Y=rows\n x = image.shape[1]\n y = image.shape[0]\n \n # convert the image to gray scale\n grayImage = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)\n # blur the image \n blurImage = gaussian_blur(grayImage,3)\n # perform edge detection\n cannyImage = canny(blurImage,50,150)\n # define the co-ordinates for the interested section of the image.\n # we're only interested in the bottom half where there is road\n vertices = np.array([[(x*0.15,y),(x*0.45,y*0.6),(x*0.55,y*0.6),(x*0.85,y)]],dtype=np.int32)\n # create a masked image of only the interested region\n maskedImage = region_of_interest(cannyImage, vertices)\n \n # detect & draw lines using hough algo on our masked image\n rho = 1 # distance resolution in pixels of the Hough grid\n theta = (np.pi/180)*1 # angular resolution in radians of the Hough grid\n threshold = 10 # minimum number of votes (intersections in Hough grid cell)\n min_line_length = 3 #minimum number of pixels making up a line\n max_line_gap = 3 # maximum gap in pixels between connectable line segments\n houghImage = hough_lines(maskedImage,rho, theta, threshold, min_line_length, max_line_gap)\n \n # merge the mask layer onto the original image and return it\n return weighted_img(houghImage,image)\n\n# read a sample image\nimage = mpimg.imread('test_images/solidWhiteRight.jpg')\n# show detected lanes in the sample\nshowImage(detectLaneLines(image),cmap='gray')\n\n", "Test on Videos\nYou know what's cooler than drawing lanes over images? Drawing lanes over video!\nWe can test our solution on two provided videos:\nsolidWhiteRight.mp4\nsolidYellowLeft.mp4\nNote: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.\nIf you get an error that looks like this:\nNeedDownloadError: Need ffmpeg exe. 
\nYou can download it by calling: \nimageio.plugins.ffmpeg.download()\nFollow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.", "# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML\n\ndef process_image(image):\n # NOTE: The output you return should be a color image (3 channel) for processing video below\n # TODO: put your pipeline here,\n # you should return the final output (image where lines are drawn on lanes)\n\n return detectLaneLines(image)", "Let's try the one with the solid white lane on the right first ...", "white_output = 'test_videos_output/solidWhiteRight.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\").subclip(0,5)\nclip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\")\nwhite_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!\n%time white_clip.write_videofile(white_output, audio=False)", "Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.", "HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(white_output))", "Improve the draw_lines() function\nAt this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\".\nGo back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.\nNow for the one with the solid yellow lane on the left. 
This one's more tricky!", "yellow_output = 'test_videos_output/solidYellowLeft.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)\nclip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')\nyellow_clip = clip2.fl_image(process_image)\n%time yellow_clip.write_videofile(yellow_output, audio=False)\n\nHTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(yellow_output))", "Writeup and Submission\nIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.\nOptional Challenge\nTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!", "challenge_output = 'test_videos_output/challenge.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)\nclip3 = VideoFileClip('test_videos/challenge.mp4')\nchallenge_clip = clip3.fl_image(process_image)\n%time challenge_clip.write_videofile(challenge_output, audio=False)\n\nHTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(challenge_output))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cdalzell/ds-for-wall-street
ds-for-ws-student.ipynb
apache-2.0
[ "Loading and Cleaning the Data\nTurn on inline matplotlib plotting and import plotting dependencies.", "%matplotlib inline\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nimport seaborn as sns", "Import analytic depedencies. Doc code for spark-timeseries and source code for tsanalysis.", "import numpy as np\nimport pandas as pd\nimport tsanalysis.loaddata as ld\nimport tsanalysis.tsutil as tsutil\nimport sparkts.timeseriesrdd as tsrdd\nimport sparkts.datetimeindex as dtindex\nfrom sklearn import linear_model", "Load wiki page view and stock price data into Spark DataFrames.\nwiki_obs is a Spark dataframe of (timestamp, page, views) of types (Timestamp, String, Double). ticker_obs is a Spark dataframe of (timestamp, symbol, price) of types (Timestamp, String, Double).", "wiki_obs = ld.load_wiki_df(sqlCtx, '/user/srowen/wiki.tsv')\nticker_obs = ld.load_ticker_df(sqlCtx, '/user/srowen/ticker.tsv')", "Display the first 5 elements of the wiki_obs RDD. \nwiki_obs contains Row objects with the fields (timestamp, page, views).\nDisplay the first 5 elements of the tickers_obs RDD.\nticker_obs contains Row objects with the fields (timestamp, symbol, views).\nCreate datetime index.\nCreate time series RDD from observations and index. Remove time instants with NaNs.\nCache the tsrdd.\nExamine the first element in the RDD.\nTime series have values and a datetime index. We can create a tsrdd for hourly stock prices from an index and a Spark DataFrame of observations. ticker_tsrdd is an RDD of tuples where each tuple has the form (ticker symbol, stock prices) where ticker symbol is a string and stock prices is a 1D np.ndarray. We create a nicely formatted string representation of this pair in print_ticker_info(). Notice how we access the two elements of the tuple.", "def print_ticker_info(ticker):\n print ('The first ticker symbol is: {} \\nThe first 20 elements of the associated ' +\\\n 'series are:\\n {}').format(ticker[0], ticker[1][:20])", "Create a wiki page view tsrdd and set the index to match the index of ticker_tsrdd.\n Linearly interpolate to impute missing values.\nwiki_tsrdd is an RDD of tuples where each tuple has the form (page title, wiki views) where page title is a string and wiki views is a 1D np.ndarray. We have cached both RDDs because we will be doing many subsequent operations on them.\n Filter out symbols with more than the minimum number of NaNs.\n Then filter out instants with NaNs.", "def count_nans(vec):\n return np.count_nonzero(np.isnan(vec))", "Linking symbols and pages\nWe need to join together the wiki page and ticker data, but the time series RDDs are not directly joinable on their keys. To overcome this, we have create a dict from wikipage title to stock ticker symbol.\nCreate a dict from ticker symbols to page names.\nCreate another from page names to ticker symbols.", "# a dict from wiki page name to ticker symbol\npage_symbols = {}\nfor line in open('../symbolnames.tsv').readlines():\n tokens = line[:-1].split('\\t')\n page_symbols[tokens[1]] = tokens[0]\n\ndef get_page_symbol(page_series):\n if page_series[0] in page_symbols:\n return [(page_symbols[page_series[0]], page_series[1])]\n else:\n return []\n# reverse keys and values. 
a dict from ticker symbol to wiki page name.\nsymbol_pages = dict(zip(page_symbols.values(), page_symbols.keys()))\nprint page_symbols.items()[0]\nprint symbol_pages.items()[0]", "Join together wiki_tsrdd and ticker_tsrdd\nFirst, we use this dict to look up the corresponding stock ticker symbol and rekey the wiki page view time series. We then join the data sets together. The result is an RDD of tuples where each element is of the form (ticker_symbol, (wiki_series, ticker_series)). We count the number of elements in the resulting rdd to see how many matches we have.\nCorrelation and Relationships\nDefine a function for computing the pearson r correlation of the stock price and wiki page traffic associated with a company.\nHere we look up a specific stock and corrsponding wiki page, and provide an example of \ncomputing the pearson correlation locally. We use scipy.stats.stats.pearsonr to compute the pearson correlation and corresponding two sided p value. wiki_vol_corr and corr_with_offset both return this as a tuple of (corr, p_value).", "from scipy.stats.stats import pearsonr\n\ndef wiki_vol_corr(page_key):\n # lookup individual time series by key.\n ticker = ticker_tsrdd.find_series(page_symbols[page_key]) # numpy array\n wiki = wiki_tsrdd.find_series(page_key) # numpy array\n return pearsonr(ticker, wiki)\n\ndef corr_with_offset(page_key, offset):\n \"\"\"offset is an integer that describes how many time intervals we have slid\n the wiki series ahead of the ticker series.\"\"\"\n ticker = ticker_tsrdd.find_series(page_symbols[page_key]) # numpy array\n wiki = wiki_tsrdd.find_series(page_key) # numpy array\n return pearsonr(ticker[offset:], wiki[:-offset])", "Create a plot of the joint distribution of wiki trafiic and stock prices for a specific company using seaborn's jointplot function.", "def joint_plot(page_key, ticker, wiki, offset=0):\n with sns.axes_style(\"white\"):\n sns.jointplot(x=ticker, y=wiki, kind=\"kde\", color=\"b\");\n plt.xlabel('Stock Price')\n plt.ylabel('Wikipedia Page Views')\n plt.title('Joint distribution of {} stock price\\n and Wikipedia page views.'\\\n +'\\nWith a {} day offset'.format(page_key, offset), y=1.20)", "Find the companies with the highest correlation between stock prices time series and wikipedia page traffic.\nNote that comparing a tuple means you compare the composite object lexicographically.\n Add in filtering out less than useful correlation results. \nThere are a lot of invalid correlations that get computed, so lets filter those out.\n Find the top 10 correlations as defined by the ordering on tuples. \n Create a joint plot of some of the stronger relationships.\nVolatility\n Compute per-day volatility for each symbol. \nMake sure we don't have any NaNs.\nVisualize volatility\nPlot daily volatility in stocks over time.\nWhat does the distribution of volatility for the whole market look like? Add volatility for individual stocks in a datetime bin.\n Find stocks with the highest average daily volatility. \n Plot stocks with the highest average daily volatility over time. \nWe first map over ticker_daily_vol to find the index of the value with the highest volatility. We then relate that back to the index set on the RDD to find the corresponding datetime.\nA large number of stock symbols had their most volatile days on August 24th and August 25th of\nthis year.\nRegress volatility against page views\nResample the wiki page view data set so we have total pageviews by day. 
\n Cache the wiki page view RDD.\nResample the wiki page view data set so we have total pageviews by day. This means reindexing the time series and aggregating data together with daily buckets. We use np.nansum to add up numbers while treating nans like zero.\nValidate data by checking for nans.\n Fit a linear regression model to every pair in the joined wiki-ticker RDD and extract R^2 scores.", "def regress(X, y):\n model = linear_model.LinearRegression()\n model.fit(X, y)\n score = model.score(X, y)\n return (score, model)\n\nlag = 2\nlead = 2\n\njoined = regressions = wiki_daily_views.flatMap(get_page_symbol) \\\n .join(ticker_daily_vol)\n \nmodels = joined.mapValues(lambda x: regress(tsutil.lead_and_lag(lead, lag, x[0]), x[1][lag:-lead]))\nmodels.cache()\nmodels.count()", "Print out the symbols with the highest R^2 scores\nPlot the results of a linear model.\nPlotting a linear model always helps me understand it better. Again, seaborn is super useful with smart defaults built in.\nBox plot / Tukey outlier identification\nTukey originally proposed a method for identifying outliers in box and whisker plots. Essentially, we find the cutoff value for the 75th percentile $P_{75} = percentile(sample, 0.75)$, and add a reasonable buffer (expressed in terms of the interquartile range) $1.5IQR = 1.5(P_{75}-P_{25})$ above that cutoff.\nWrite a function that returns the high value cutoff for Tukey's boxplot outlier criterion.\nFilter out any values below Tukey's boxplot criterion for outliers.\nBlack Monday\nSelect the date range comprising Black Monday 2015.\nWhich stocks saw the worst return for that day?\nPlot wiki page views for one of those stocks" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
VandyAstroML/Vanderbilt_Computational_Bootcamp
notebooks/Week_02/02_Python_Git_Github_Tutorial.ipynb
mit
[ "Today's Agenda\nToday we'll discuss how to:\n- Install Anaconda\n- Create your Github Student Account\n- Git Tutorial\nSlack - Important info!\nThe Slack group is now working and ready to go!<br>\nThe Slack website is: https://https://vandyastroseminar.slack.com\nFill out the following Google Form with your info: https://goo.gl/y2ammQ\nPython Anaconda\nAnaconda is a great way to handle your modules and their dependencies.\nYou have the option to use install Python 2.7 or Python 3.5. In astronomy, the version Python 2.7 is mostly used, although new modules are adoption version 3.5\n\nDownload Anaconda\n\nAfter the installation is complete, you should see that your \"PATH\" was modified in your \"~/.bashrc\" (Linux) or \"~/bash_profile\" file.\n```bash\nadded by Anaconda 1.6.1 installer\nexport PATH=\"~/anaconda2/bin:$PATH\"\n```\nOnce the Anaconda path is appended to your \"PATH\", you should be able to see something like this:\npython\nIPython 5.0.0 -- An enhanced Interactive Python.\n? -&gt; Introduction and overview of IPython's features.\n%quickref -&gt; Quick reference.\nhelp -&gt; Python's own help system.\nobject? -&gt; Details about 'object', use 'object??' for extra details.\nIn [1]:\nUpdated Anaconda modules\nWhenever you want to update your Python modules to the latest versions, you can just run these commands from the terminal:\nbash\nconda update condaconda update conda\nconda update condaconda update anaconda\nInstalling new Modules\nIf you want to install a new module that is hosted by Anaconda, you can type:\nbash\nconda install &lt;name of package&gt;\nAnd the package should be installed with all the necessary dependencies.\nGit and Github\nNowadays it is really important to keep track of every change that make to a code, especially if you're \npart of a large organizations.\nGit is a tool for version control of your code, i.e. you can keep track of\n- any changes made to your code\n- any issues with your code\n- you can consult other team members about implementing a new feature\n- you can restore a code to a previous version\n- etc.\nIn a nutshell, you should be using Git.\nGit comes already pre-installed with Anaconda. You can test whether it is installed on your machine by typing:\nbash\nwhich git\nAnd you shoud see the path to the git executable.\nIf it's not installed, you can install it from here\nCreating your Github Account\n\"Github\" is a website for you to manage your repositories and collaborate with other people.\nGithub offers a Student Pack for free (https://education.github.com/).\nYou have to use your @vanderbilt email address to get it. Or you can sign up for you regular Github account.\nThis lets you create public and private repositories.\nGithub Concepts\nGit Commands\nBefore you begin, make sure git is working correctly (here):", "$ git config --global user.name \"John Doe\"\n$ git config --global user.email \"johndoe@example.com\"", "To clone the local repository", "$ git clone https://github.com/VandyAstroML/Vanderbilt_Computational_Bootcamp.git", "Important Git commands\n\ngit clone: Clone an existing repository\ngit init: Initialize a git repository. 
This will create a new repository on your computer\ngit pull: This will pull any changes that have been made and merge them into your current version of the repository\ngit push: It will push all your changes to the repository\ngit commit -m \"\": Commit any new changes made to the files\n\nSetting up SSH Keys\nSSH keys are a great way to avoid typing your password every time you want to make changes to your repositories.\nThe instructions to do so are listed here" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
spacedrabbit/PythonBootcamp
Iterators and Generators Homework.ipynb
mit
[ "Iterators and Generators Homework\nProblem 1\nCreate a generator that generates the squares of numbers up to some number N.", "def gensquares(N):\n for i in range(N): \n yield i**2\n\nfor x in gensquares(10):\n print x", "Problem 2\nCreate a generator that yields \"n\" random numbers between a low and high number (that are inputs). Note: Use the random library. For example:", "import random\n\nrandom.randint(1,10)\n\ndef rand_num(low,high,n):\n for i in range(n+1):\n yield random.randint(low, high)\n\nfor num in rand_num(1,10,12):\n print num", "Problem 3\nUse the iter() function to convert the string below", "s = 'hello'\n\n#code here\nfor letter in iter(s):\n print letter", "Problem 4\nExplain a use case for a generator using a yield statement where you would not want to use a normal function with a return statement.\nA generator, utilizing a yield statement, returns an iterator object. The iterator object will yield/return a value each time it is called upon to iterate through its code. So in cases where a return statement would be used to return the entirely of a list, the generator would only return the current iteration of the list, remembering its state where it was last yielded.\nExtra Credit!\nCan you explain what gencomp is in the code below? (Note: We never covered this in lecture! You will have to do some googling/Stack Overflowing!)", "my_list = [1,2,3,4,5]\n\ngencomp = (item for item in my_list if item > 3)\n\nfor item in gencomp:\n print item", "Hint google: generator comprehension is!\nGreat Job!\na generator expression is similar to a list comprehension. However it returns a generator rather than a list. Syntactically they are very similar with the exception being that list comps use [] and gen comps use ()." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
abulbasar/machine-learning
Scikit - 10 Image Classification (MNIST dataset).ipynb
apache-2.0
[ "MNIST", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import *\nimport scipy\n\n%matplotlib inline", "Dataset description\nDatasource: http://yann.lecun.com/exdb/mnist/\nThe training dataset consists of 60,000 training digits and the test set contains 10,000 samples, respectively. The images in the MNIST dataset consist of pixels, and each pixel is represented by a gray scale intensity value. Here, we unroll the pixels into 1D row vectors, which represent the rows in our image array (784 per row or image). The second array (labels) returned by the load_mnist function contains the corresponding target variable, the class labels (integers 0-9) of the handwritten digits.\nCsv version of the files are available in the following links.\nCSV training set http://www.pjreddie.com/media/files/mnist_train.csv\nCSV test set http://www.pjreddie.com/media/files/mnist_test.csv", "training = pd.read_csv(\"/data/MNIST/mnist_train.csv\", header = None)\ntesting = pd.read_csv(\"/data/MNIST/mnist_test.csv\", header = None)\n\nX_train, y_train = training.iloc[:, 1:].values, training.iloc[:, 0].values \nX_test, y_test = testing.iloc[:, 1:].values, testing.iloc[:, 0].values \n\nprint(\"Shape of X_train: \", X_train.shape, \"shape of y_train: \", y_train.shape)\nprint(\"Shape of X_test: \", X_test.shape, \"shape of y_test: \", y_test.shape)", "Distribution of class frequencies", "label_counts = pd.DataFrame({\n \"train\": pd.Series(y_train).value_counts().sort_index(), \n \"test\": pd.Series(y_test).value_counts().sort_index()\n})\n(label_counts / label_counts.sum()).plot.bar()\n\nplt.xlabel(\"Class\")\nplt.ylabel(\"Frequency (normed)\")", "Chisquare test on class frequencies", "scipy.stats.chisquare(label_counts.train, label_counts.test)", "Display a few sample images", "fig, axes = plt.subplots(5, 5, figsize = (15, 9))\nfor i, ax in enumerate(fig.axes):\n img = X_train[i, :].reshape(28, 28)\n ax.imshow(img, cmap = \"Greys\", interpolation=\"nearest\")\n ax.set_title(\"True: %i\" % y_train[i])\nplt.tight_layout()", "View different variations of a digit", "fig, axes = plt.subplots(10, 5, figsize = (15, 20))\n\nfor i, ax in enumerate(fig.axes):\n img = X_train[y_train == 7][i, :].reshape(28, 28)\n ax.imshow(img, cmap = \"Greys\", interpolation=\"nearest\")\n\nplt.tight_layout()", "Feature scaling", "scaler = preprocessing.StandardScaler()\nX_train_std = scaler.fit_transform(X_train.astype(np.float64))\nX_test_std = scaler.transform(X_test.astype(np.float64))", "Applying logistic regression classifier", "%%time \nlr = linear_model.LogisticRegression()\nlr.fit(X_train_std, y_train)\nprint(\"accuracy:\", lr.score(X_test_std, y_test))", "Display wrong predictions", "y_test_pred = lr.predict(X_test_std)\nmiss_indices = (y_test != y_test_pred)\nmisses = X_test[miss_indices]\nprint(\"No of miss: \", misses.shape[0])\n\nfig, axes = plt.subplots(10, 5, figsize = (15, 20))\nmisses_actual = y_test[miss_indices]\nmisses_pred = y_test_pred[miss_indices]\n\nfor i, ax in enumerate(fig.axes):\n img = misses[i].reshape(28, 28)\n ax.imshow(img, cmap = \"Greys\", interpolation=\"nearest\")\n ax.set_title(\"A: %s, P: %d\" % (misses_actual[i], misses_pred[i]))\nplt.tight_layout()", "Applying SGD classifier", "inits = np.random.randn(10, 784) \ninits = inits / np.std(inits, axis=1).reshape(10, -1)\n\n%%time\nest = linear_model.SGDClassifier(n_jobs=4, tol=1e-5, eta0 = 0.15, \n learning_rate = \"invscaling\", \n alpha = 0.01, max_iter= 100)\nest.fit(X_train_std, y_train, 
inits)\nprint(\"accuracy\", est.score(X_test_std, y_test), \"iterations:\", est.n_iter_)\n\nfig, _ = plt.subplots(3, 4, figsize = (15, 10))\nfor i, ax in enumerate(fig.axes):\n if i < est.coef_.shape[0]:\n ax.imshow(est.coef_[i, :].reshape(28, 28), cmap = \"bwr\", interpolation=\"nearest\")\n else:\n ax.remove()\n\npd.DataFrame(est.coef_[0, :].reshape(28, 28))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ljishen/kividry
experiments/benchmarking/visualize_2.ipynb
apache-2.0
[ "Performance Benchmarking for KV Drive\nThe goal of these set of experiments is to characterize the variability across platforms in a systematic and consistent way in terms of KV drive. The steps of experiments are as follows,\n\nRun Stress-ng benchmarks on one KV drive;\nRun Stress-ng benchmarks on machine issdm-6, and get the \"without limit\" result;\nFind all the common benchmarks from both results;\nCalculate the speedup (normalized value) of each benchmark based on the one from KV drive (issdm-6 (without limit) / KV drive);\nUse torpor to calculate the best cpu quota by minimizing the average speedups. We will later use this parameter to limit the cpu usage in the docker container;\nRun Stress-ng benchmarks in the constrained docker container on machine issdm-6, and get the \"with limit\" result;\nCalculate the speedup based on KV drive again (issdm-6 (with limit) / KV drive), then we get a new \"speedup range\", which should be must smaller than the previous one.\nRun a bunch of other benchmarks on both KV drive and constrained docker container to verify if they are all within in the later \"speedup range\".\nMake conclusion.", "%matplotlib inline\nimport pandas as pd\nimport random\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\n\npd.set_option(\"display.max_rows\", 8)", "First, we load all test data.", "df = pd.read_csv('stress-ng/third/torpor-results/alltests.csv')", "Let's have a look at the pattern of data.", "df.head()", "Show all the test machines.", "df['machine'].unique()", "Define some predicates for machines and limits", "machine_is_issdm_6 = df['machine'] == 'issdm-6'\nmachine_is_t2_micro = df['machine'] == 't2.micro'\nmachine_is_kv3 = df['machine'] == 'kv3'\n\nlimits_is_with = df['limits'] == 'with'\nlimits_is_without = df['limits'] == 'without'", "Show the number of stress tests on different machines", "df_issdm_6_with_limit = df[machine_is_issdm_6 & limits_is_with]\ndf_t2_micro_with_limit = df[machine_is_t2_micro & limits_is_with]\ndf_kv3_without_limit = df[machine_is_kv3 & limits_is_without]\n\nprint(\n len(df_issdm_6_with_limit), # machine issdm-6 with limit\n len(df[machine_is_issdm_6 & limits_is_without]), # machine issdm-6 without limit\n\n len(df_t2_micro_with_limit), # machine t2.micro with limit\n len(df[machine_is_t2_micro & limits_is_without]), # machine t2.micro without limit\n\n len(df_kv3_without_limit) # machine kv3 without limit\n)", "Because those failed benchmarks are not shown in the result report, we want to know how many common successful stress tests on the target machine and kv3.", "issdm_6_with_limit_merge_kv3 = pd.merge(df_issdm_6_with_limit, df_kv3_without_limit, how='inner', on='benchmark')\nt2_micro_with_limit_merge_kv3 = pd.merge(df_t2_micro_with_limit, df_kv3_without_limit, how='inner', on='benchmark')\n\nprint(\n # common successful tests from issdm-6 and kv3\n len(issdm_6_with_limit_merge_kv3),\n \n # common successful tests from t2.micro and kv3\n len(t2_micro_with_limit_merge_kv3)\n)", "Read the normalized results.", "df_normalized = pd.read_csv('stress-ng/third/torpor-results/alltests_with_normalized_results_1.1.csv')", "Show some of the data lines. The normalized value is the speedup based on kv3. 
It becomes a negative value when the benchmark runs on the target machine is slower than on kv3 (slowdown).", "df_normalized.head()", "Show those benchmarks are not both successful completed on the issdm-6 and kv3.", "df_issdm_6_with_limit[~df_issdm_6_with_limit['benchmark'].isin(issdm_6_with_limit_merge_kv3['benchmark'])]", "Show those benchmarks are not both successful completed on the t2.micro and kv3.", "df_t2_micro_with_limit[~df_t2_micro_with_limit['benchmark'].isin(t2_micro_with_limit_merge_kv3['benchmark'])]", "We can find the number of benchmarks are speed-up and slowdown, respectively.", "normalized_limits_is_with = df_normalized['limits'] == 'with'\nnormalized_limits_is_without = df_normalized['limits'] == 'without'\n\nnormalized_machine_is_issdm_6 = df_normalized['machine'] == 'issdm-6'\nnormalized_machine_is_t2_micro = df_normalized['machine'] == 't2.micro'\n\nnormalized_is_speed_up = df_normalized['normalized'] > 0\nnormalized_is_slow_down = df_normalized['normalized'] < 0\n\nprint(\n # issdm-6 without CPU restriction\n len(df_normalized[normalized_limits_is_without & normalized_machine_is_issdm_6 & normalized_is_speed_up]), # 1. speed-up\n len(df_normalized[normalized_limits_is_without & normalized_machine_is_issdm_6 & normalized_is_slow_down]), # 2. slowdown\n \n # issdm-6 with CPU restriction\n len(df_normalized[normalized_limits_is_with & normalized_machine_is_issdm_6 & normalized_is_speed_up]), # 3. speed-up\n len(df_normalized[normalized_limits_is_with & normalized_machine_is_issdm_6 & normalized_is_slow_down]), # 4. slowdown\n \n # t2.micro without CPU restriction\n len(df_normalized[normalized_limits_is_without & normalized_machine_is_t2_micro & normalized_is_speed_up]), # 5. speed-up\n len(df_normalized[normalized_limits_is_without & normalized_machine_is_t2_micro & normalized_is_slow_down]), # 6. slowdown\n \n # t2.micro with CPU restriction\n len(df_normalized[normalized_limits_is_with & normalized_machine_is_t2_micro & normalized_is_speed_up]), # 7. speed-up\n len(df_normalized[normalized_limits_is_with & normalized_machine_is_t2_micro & normalized_is_slow_down]) # 8. 
slowdown\n)", "The average of normalized value for results under CPU restriction", "print(\n # For issdm-6\n df_normalized[normalized_machine_is_issdm_6 & normalized_limits_is_with]['normalized'].mean(),\n \n # For t2_micro\n df_normalized[normalized_machine_is_t2_micro & normalized_limits_is_with]['normalized'].mean()\n)", "Experiment Results from issdm-6\nLet's have a look at the histogram of frequency of normalized value based on stress tests without CPU restriction running on issdm-6.", "df_normalized_issdm_6_without_limit = df_normalized[normalized_machine_is_issdm_6 & normalized_limits_is_without]\ndf_normalized_issdm_6_without_limit.normalized.hist(bins=150, figsize=(25,12), xlabelsize=20, ylabelsize=20)\n\nplt.title('stress tests run on issdm-6 without CPU restriction', fontsize=30)\n\nplt.xlabel('Normalized Value (re-execution / original)', fontsize=25)\nplt.ylabel('Frequency (# of benchmarks)', fontsize=25)", "Here is the rank of normalized value from stress tests without CPU restriction", "df_normalized_issdm_6_without_limit_sorted = df_normalized_issdm_6_without_limit.sort_values(by='normalized', ascending=0)\ndf_normalized_issdm_6_without_limit_sorted_head = df_normalized_issdm_6_without_limit_sorted.head()\ndf_normalized_issdm_6_without_limit_sorted_tail = df_normalized_issdm_6_without_limit_sorted.tail()\ndf_normalized_issdm_6_without_limit_sorted_head.append(df_normalized_issdm_6_without_limit_sorted_tail)", "Now let's have a look at the histogram of frequency of normalized value based on stress tests with CPU restriction running on issdm-6.", "df_normalized_issdm_6_with_limit = df_normalized[normalized_machine_is_issdm_6 & normalized_limits_is_with]\ndf_normalized_issdm_6_with_limit.normalized.hist(color='Orange', bins=150, figsize=(25,12), xlabelsize=20, ylabelsize=20)\n\nplt.title('stress tests run on issdm-6 with CPU restriction', fontsize=30)\n\nplt.xlabel('Normalized Value (re-execution / original)', fontsize=25)\nplt.ylabel('Frequency (# of benchmarks)', fontsize=25)", "Here is the rank of normalized value from stress tests with CPU restriction", "df_normalized_issdm_6_with_limit_sorted = df_normalized_issdm_6_with_limit.sort_values(by='normalized', ascending=0)\ndf_normalized_issdm_6_with_limit_sorted_head = df_normalized_issdm_6_with_limit_sorted.head()\ndf_normalized_issdm_6_with_limit_sorted_tail = df_normalized_issdm_6_with_limit_sorted.tail()\ndf_normalized_issdm_6_with_limit_sorted_head.append(df_normalized_issdm_6_with_limit_sorted_tail)", "We notice that the stressng-cpu-jenkin looks like an outlier. 
Let's redraw the histogram without this one.", "df_normalized_issdm_6_no_outlier = df_normalized_issdm_6_with_limit['benchmark'] != 'stressng-cpu-jenkin'\ndf_normalized_issdm_6_with_limit[df_normalized_issdm_6_no_outlier].normalized.hist(color='Green', bins=150, figsize=(25,12), xlabelsize=20, ylabelsize=20)\n\nplt.title('stress tests run on issdm-6 with CPU restriction (no outlier)', fontsize=30)\n\nplt.xlabel('Normalized Value (re-execution / original)', fontsize=25)\nplt.ylabel('Frequency (# of benchmarks)', fontsize=25)", "Summary\nWe got the boundary of normalized value on issdm-6 from -29.394675 to 54.266945 by using parameters --cpuset-cpus=1 --cpu-quota=7234 --cpu-period=100000, which means the docker container only uses 7.234ms CPU worth of run-time every 100ms on cpu 1 (See cpu for more details).\nExperiment Results from t2.micro\nLet's have a look at the histogram of frequency of normalized value based on stress tests without CPU restriction running on t2.micro.", "df_normalized_t2_micro_without_limit = df_normalized[normalized_machine_is_t2_micro & normalized_limits_is_without]\ndf_normalized_t2_micro_without_limit.normalized.hist(bins=150,figsize=(30,12), xlabelsize=20, ylabelsize=20)\n\nplt.title('stress tests run on t2.micro without CPU restriction', fontsize=30)\n\nplt.xlabel('Normalized Value (re-execution / original)', fontsize=25)\nplt.ylabel('Frequency (# of benchmarks)', fontsize=25)", "Here is the rank of normalized value from stress tests without CPU restriction", "df_normalized_t2_micro_without_limit_sorted = df_normalized_t2_micro_without_limit.sort_values(by='normalized', ascending=0)\ndf_normalized_t2_micro_without_limit_sorted_head = df_normalized_t2_micro_without_limit_sorted.head()\ndf_normalized_t2_micro_without_limit_sorted_tail = df_normalized_t2_micro_without_limit_sorted.tail()\ndf_normalized_t2_micro_without_limit_sorted_head.append(df_normalized_t2_micro_without_limit_sorted_tail)", "Let's have a look at the histogram of frequency of normalized value based on stress tests with CPU restriction running on t2.micro.", "df_normalized_t2_micro_with_limit = df_normalized[normalized_machine_is_t2_micro & normalized_limits_is_with]\ndf_normalized_t2_micro_with_limit.normalized.hist(color='Orange', bins=150, figsize=(30,12), xlabelsize=20, ylabelsize=20)\n\nplt.title('stress tests run on t2.micro with CPU restriction', fontsize=30)\n\nplt.xlabel('Normalized Value (re-execution / original)', fontsize=25)\nplt.ylabel('Frequency (# of benchmarks)', fontsize=25)", "Here is the rank of normalized value from stress tests with CPU restriction", "df_normalized_t2_micro_with_limit_sorted = df_normalized_t2_micro_with_limit.sort_values(by='normalized', ascending=0)\ndf_normalized_t2_micro_with_limit_sorted_head = df_normalized_t2_micro_with_limit_sorted.head()\ndf_normalized_t2_micro_with_limit_sorted_tail = df_normalized_t2_micro_with_limit_sorted.tail()\ndf_normalized_t2_micro_with_limit_sorted_head.append(df_normalized_t2_micro_with_limit_sorted_tail)", "We notice that the stressng-memory-stack looks like an outlier. 
Let's redraw the histogram without this one.", "df_normalized_t2_micro_no_outlier = df_normalized_t2_micro_with_limit['benchmark'] != 'stressng-memory-stack'\ndf_normalized_t2_micro_with_limit[df_normalized_t2_micro_no_outlier].normalized.hist(color='Green', bins=150, figsize=(30,12), xlabelsize=20, ylabelsize=20)\n\nplt.title('stress tests run on t2.micro with CPU restriction (no outlier)', fontsize=30)\n\nplt.xlabel('Normalized Value (re-execution / original)', fontsize=25)\nplt.ylabel('Frequency (# of benchmarks)', fontsize=25)", "The stressng-cpu-jenkin benchmark is a collection of (non-cryptographic) hash functions for multi-byte keys. See Jenkins hash function from Wikipedia for more details.\nSummary\nWe got the boundary of normalized value on t2.micro from -198.440535 to 119.904761 by using parameters --cpuset-cpus=0 --cpu-quota=25750 --cpu-period=100000, which means the docker container only uses 7.234ms CPU worth of run-time every 100ms on cpu 0 (See cpu for more details).\nVerification\nNow we use 9 other benchmark programs to verify this result. These programs are,\n- blogbench: filesystem benchmark.\n- compilebench: It tries to age a filesystem by simulating some of the disk IO common in creating, compiling, patching, stating and reading kernel trees.\n- fhourstones: This integer benchmark solves positions in the game of connect-4.\n- himeno: Himeno benchmark score is affected by the performance of a computer, especially memory band width. This benchmark program takes measurements to proceed major loops in solving the Poisson’s equation solution using the Jacobi iteration method.\n- interbench: It is designed to measure the effect of changes in Linux kernel design or system configuration changes such as cpu, I/O scheduler and filesystem changes and options.\n- nbench: NBench(Wikipedia) is a synthetic computing benchmark program developed in the mid-1990s by the now defunct BYTE magazine intended to measure a computer's CPU, FPU, and Memory System speed.\n- pybench: It is a collection of tests that provides a standardized way to measure the performance of Python implementations.\n- ramsmp: RAMspeed is a free open source command line utility to measure cache and memory performance of computer systems.\n- stockfish-7: It is a simple benchmark by letting Stockfish analyze a set of positions for a given limit each.\nRead verification tests data.", "df_verification = pd.read_csv('verification/results/2/alltests_with_normalized_results_1.1.csv')", "Show number of test benchmarks.", "len(df_verification) / 2", "Order the test results by the absolute of normalized value", "df_verification_rank = df_verification.reindex(df_verification.normalized.abs().sort_values(ascending=0).index)\ndf_verification_rank.head(8)", "Verification Tests on issdm-6\nHistogram of frequency of normalized value.", "df_verification_issdm_6 = df_verification[df_verification['machine'] == 'issdm-6']\ndf_verification_issdm_6.normalized.hist(color='y', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)\n\nplt.title('verification tests run on issdm-6', fontsize=30)\n\nplt.xlabel('Normalized Value (re-execution / original)', fontsize=25)\nplt.ylabel('Frequency (# of benchmarks)', fontsize=25)", "Print the max the min normalized value,", "print(\n df_verification_issdm_6['normalized'].max(),\n df_verification_issdm_6['normalized'].min()\n)", "The average of noramlized value is,", "df_verification_issdm_6['normalized'].mean()", "If we remove all nbench tests, the frequency histogram changes to", 
"df_verification_issdm_6_no_nbench = df_verification_issdm_6[~df_verification_issdm_6['benchmark'].str.startswith('nbench')]\ndf_verification_issdm_6_no_nbench.normalized.hist(color='greenyellow', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)\n\nplt.title('verification tests run on issdm-6 (no nbench)', fontsize=30)\n\nplt.xlabel('Normalized Value (re-execution / original)', fontsize=25)\nplt.ylabel('Frequency (# of benchmarks)', fontsize=25)", "The max the min normalized value changes to,", "print(\n df_verification_issdm_6_no_nbench['normalized'].max(),\n df_verification_issdm_6_no_nbench['normalized'].min()\n)", "The average of noramlized value changes to,", "df_verification_issdm_6_no_nbench['normalized'].mean()", "Verification Tests on t2.micro\nHistogram of frequency of normalized value.", "df_verification_t2_micro = df_verification[df_verification['machine'] == 't2.micro']\ndf_verification_t2_micro.normalized.hist(color='y', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)\n\nplt.title('verification tests run on t2.micro', fontsize=30)\n\nplt.xlabel('Normalized Value (re-execution / original)', fontsize=25)\nplt.ylabel('Frequency (# of benchmarks)', fontsize=25)", "The average of noramlized value of the verification benchmarks is,", "df_verification_t2_micro['normalized'].mean()", "Let's see the frequency histogram after removing right-most four outliers.", "df_verification_top_benchmakrs = df_verification_rank[df_verification_rank['machine'] == 't2.micro'].head(4)['benchmark']\ndf_verification_t2_micro_no_outliers = df_verification_t2_micro[~df_verification_t2_micro['benchmark'].isin(df_verification_top_benchmakrs)]\n\ndf_verification_t2_micro_no_outliers.normalized.hist(color='greenyellow', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)\n\nplt.title('verification tests on t2.micro (no outliers)', fontsize=30)\n\nplt.xlabel('Normalized Value (re-execution / original)', fontsize=25)\nplt.ylabel('Frequency (# of benchmarks)', fontsize=25)", "Print the max the min normalized value,", "print(\n df_verification_t2_micro_no_outliers['normalized'].max(),\n df_verification_t2_micro_no_outliers['normalized'].min()\n)", "The average of noramlized value without the four outliners is,", "df_verification_t2_micro_no_outliers['normalized'].mean()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tkurfurst/deep-learning
language-translation/Project 4 Submission/dlnd_language_translation.ipynb
mit
[ "Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (5025, 5036)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n# len({word: None for word in source_text.split()}))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.\nYou can get the &lt;EOS&gt; word id by doing:\npython\ntarget_vocab_to_int['&lt;EOS&gt;']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.", "def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n # TODO: Implement Function\n \n\n source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_text.split('\\n')]\n target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [target_vocab_to_int['<EOS>']] for sentence in target_text.split('\\n')]\n \n return source_id_text, target_id_text\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)", "Check Point\nThis is your first checkpoint. 
If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()", "Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoding_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\n\nReturn the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability)", "def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate, keep probability)\n \"\"\"\n # TODO: Implement Function\n \n inputs = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None])\n learn_rate = tf.placeholder(tf.float32)\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n return inputs, targets, learn_rate, keep_prob\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)", "Process Decoding Input\nImplement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.", "def process_decoding_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for dencoding\n :param target_data: Target Placehoder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n # TODO: Implement Function\n \n td_end_removed = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n td_start_added = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), td_end_removed], 1) \n\n return td_start_added\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_decoding_input(process_decoding_input)", "Encoding\nImplement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().", "def encoding_layer(rnn_inputs, rnn_size, 
num_layers, keep_prob):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :return: RNN state\n \"\"\"\n # TODO: Implement Function\n \n\n # Encoder embedding\n \n # source_vocab_size = len(source_letter_to_int)\n \n # enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, 1000, rnn_size)\n\n # Encoder\n \n # enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)\n \n enc_LSTM = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n enc_LSTM = tf.contrib.rnn.DropoutWrapper(enc_LSTM, output_keep_prob=keep_prob)\n enc_LSTM = tf.contrib.rnn.MultiRNNCell([enc_LSTM] * num_layers)\n \n enc_RNN_out, enc_RNN_state = tf.nn.dynamic_rnn(enc_LSTM, rnn_inputs, dtype=tf.float32)\n \n return enc_RNN_state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)", "Decoding - Training\nCreate training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.", "def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,\n output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State *\n :param dec_cell: Decoder RNN Cell *\n :param dec_embed_input: Decoder embedded input *\n :param sequence_length: Sequence Length *\n :param decoding_scope: TenorFlow Variable Scope for decoding *\n :param output_fn: Function to apply the output layer *\n :param keep_prob: Dropout keep probability\n :return: Train Logits\n \"\"\"\n # TODO: Implement Function\n \n train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state, name=None)\n \n train_pred, fin_state, fin_cntxt_state = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,\\\n train_decoder_fn,inputs=dec_embed_input,sequence_length=sequence_length,\\\n parallel_iterations=None, swap_memory=False,time_major=False, scope=decoding_scope, name=None)\n \n train_logits = output_fn(train_pred)\n \n return train_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)", "Decoding - Inference\nCreate inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().", "def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,\n maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state *\n \n :param dec_cell: Decoder RNN Cell *\n :param dec_embeddings: Decoder embeddings *\n :param start_of_sequence_id: GO ID *\n :param end_of_sequence_id: EOS Id *\n :param maximum_length: The maximum allowed time steps to decode *\n :param vocab_size: Size of vocabulary *\n :param decoding_scope: TensorFlow Variable Scope for decoding *\n :param output_fn: Function to apply the output layer *\n :param keep_prob: Dropout keep probability\n :return: Inference Logits\n \"\"\"\n # TODO: Implement Function\n \n infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings,\\\n target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], maximum_length, vocab_size, dtype=tf.int32, name=None)\n \n infer_logits, fin_state, 
fin_cntxt_state = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,\\\n infer_decoder_fn, inputs=None, sequence_length=maximum_length,\\\n parallel_iterations=None, swap_memory=False,time_major=False, scope=decoding_scope, name=None)\n \n # infer_logits = output_fn(infer_pred)\n \n return infer_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)", "Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nCreate RNN cell for decoding using rnn_size and num_layers.\nCreate the output fuction using lambda to transform it's input, logits, to class logits.\nUse the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.", "def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,\n num_layers, target_vocab_to_int, keep_prob):\n \"\"\"\n Create decoding layer\n :param dec_embed_input: Decoder embedded input\n :param dec_embeddings: Decoder embeddings\n :param encoder_state: The encoded state\n :param vocab_size: Size of vocabulary\n :param sequence_length: Sequence Length *\n :param rnn_size: RNN Size *\n :param num_layers: Number of layers *\n :param target_vocab_to_int: Dictionary to go from the target words to an id *\n :param keep_prob: Dropout keep probability *\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n # TODO: Implement Function\n \n # Decoder RNNs\n \n dec_LSTM = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n dec_LSTM = tf.contrib.rnn.DropoutWrapper(dec_LSTM, output_keep_prob=keep_prob)\n dec_LSTM = tf.contrib.rnn.MultiRNNCell([dec_LSTM] * num_layers)\n \n # Create Output Function\n\n with tf.variable_scope(\"decoding\") as decoding_scope:\n \n # Output Layer\n \n output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)\n \n # Train Logits \n\n train_logits = decoding_layer_train(encoder_state, dec_LSTM,\\\n dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)\n \n with tf.variable_scope(\"decoding\", reuse=True) as decoding_scope:\n \n # Infer Logits \n \n infer_logits = decoding_layer_infer(encoder_state, dec_LSTM,\\\n dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)\n \n return train_logits, infer_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)", "Build the Neural Network\nApply the functions you implemented above to:\n\nApply embedding to the input data for the encoder.\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).\nProcess target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.\nApply embedding to the target data for the decoder.\nDecode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).", "def seq2seq_model(input_data, target_data, keep_prob, 
batch_size, sequence_length, source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder **\n :param target_data: Target placeholder **\n :param keep_prob: Dropout keep probability placeholder **\n :param batch_size: Batch Size **\n :param sequence_length: Sequence Length **\n :param source_vocab_size: Source vocabulary size **\n :param target_vocab_size: Target vocabulary size **\n :param enc_embedding_size: Decoder embedding size **\n :param dec_embedding_size: Encoder embedding size **\n :param rnn_size: RNN Size **\n :param num_layers: Number of layers **\n :param target_vocab_to_int: Dictionary to go from the target words to an id **\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n # TODO: Implement Function\n \n # Apply embedding to the input data for the encoder\n enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)\n \n \n # Encode the input\n encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)\n \n \n # Process target data\n p_target_data = process_decoding_input(target_data, target_vocab_to_int, batch_size)\n \n \n # Apply embedding to the target data for the decoder\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, p_target_data)\n \n \n # Decode the encoded input\n train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state,\\\n target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)\n \n return train_logits, infer_logits\n \n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability", "# Number of Epochs\nepochs = 10\n# Batch Size\nbatch_size = 512\n# RNN Size\nrnn_size = 128\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 128\ndecoding_embedding_size = 128\n# Learning Rate\nlearning_rate = 0.005\n# Dropout Keep Probability\nkeep_probability = 0.8", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_source_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob = model_inputs()\n sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n \n train_logits, inference_logits = seq2seq_model(\n tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),\n encoding_embedding_size, decoding_embedding_size, rnn_size, 
num_layers, target_vocab_to_int)\n\n tf.identity(inference_logits, 'logits')\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n train_logits,\n targets,\n tf.ones([input_shape[0], sequence_length]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport time\n\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1]), (0,0)],\n 'constant')\n\n return np.mean(np.equal(target, np.argmax(logits, 2)))\n\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\n\nvalid_source = helper.pad_sentence_batch(source_int_text[:batch_size])\nvalid_target = helper.pad_sentence_batch(target_int_text[:batch_size])\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch) in enumerate(\n helper.batch_data(train_source, train_target, batch_size)):\n start_time = time.time()\n \n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n sequence_length: target_batch.shape[1],\n keep_prob: keep_probability})\n \n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch, keep_prob: 1.0})\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_source, keep_prob: 1.0})\n \n train_acc = get_accuracy(target_batch, batch_train_logits)\n valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)\n end_time = time.time()\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')", "Save Parameters\nSave the batch_size and save_path parameters for inference.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()", "Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the &lt;UNK&gt; word id.", "def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n # TODO: Implement Function\n \n sentence = sentence.lower()\n sequence = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.split()]\n \n return sequence\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)", "Translate\nThis will translate translate_sentence from English to French.", "translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('logits:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))\nprint(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))", "Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has more vocabulary and richer in topics discussed. However, this will take you days to train, so make sure you've a GPU and the neural network is performing well on dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
feststelltaste/software-analytics
prototypes/Complexity over Time (easier way maybe).ipynb
gpl-3.0
[ "The idea\nIn my previous blog post, we got to know the idea of \"indentation-based complexity\". We took a static view on the Linux kernel to spot the most complex areas.\nThis time, we wanna track the evolution of the indentation-based complexity of a software system over time. We are especially interested in it's correlation between the lines of code. Because if we have a more or less stable development of the lines of codes of our system, but an increasing number of indentation per source code file, we surely got a complexity problem.\nAgain, this analysis is higly inspired by Adam Tornhill's book \"Software Design X-Ray\"\n, which I currently always recommend if you want to get a deep dive into software data analysis.\nThe data\nFor the calculation of the evolution of our software system, we can use data from the version control system. In our case, we can get all changes to Java source code files with Git. We just need so say the right magic words, which is\ngit log -p -- *.java\nThis gives us data like the following:\n```\ncommit e5254156eca3a8461fa758f17dc5fae27e738ab5\nAuthor: Antoine Rey &#97;&#110;&#116;&#111;&#105;&#110;&#101;&#46;&#114;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;\nDate: Fri Aug 19 18:54:56 2016 +0200\nConvert Controler's integration test to unit test\n\ndiff --git a/src/test/java/org/springframework/samples/petclinic \n/web/CrashControllerTests.java b/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java\nindex ee83b8a..a83255b 100644\n--- a/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java\n+++ b/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java\n@@ -1,8 +1,5 @@\n package org.springframework.samples.petclinic.web;\n-import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;\n-import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*;\n-\n import org.junit.Before;\n import org.junit.Test;\n import org.junit.runner.RunWith;\n```\nWe have the\n* commit sha\ncommit e5254156eca3a8461fa758f17dc5fae27e738ab5\n* author's name\nAuthor: Antoine Rey &lt;antoine.rey@gmail.com&gt;\n* date of the commit\nDate: Fri Aug 19 18:54:56 2016 +0200\n* commit message\nConvert Controler's integration test to unit test\n* names of the files that changes (after and before)\ndiff --git a/src/test/java/org/springframework/samples/petclinic \n/web/CrashControllerTests.java b/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java\n* the extended index header\nindex ee83b8a..a83255b 100644\n--- a/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java\n+++ b/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java \n* and the full file diff where we can see additions or modifications (+) and deletions (-) \n```\n package org.springframework.samples.petclinic.web;\n-import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;\n-import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*; \n-\n import org.junit.Before;\n\n```\nWe \"just\" have to get this data into our favorite data analysis framework, which is, of course, Pandas :-). We can actually do that! Let's see how!\nAdvanced data wangling\nReading in such a semi-structured data is a little challenge. But we can do it with some tricks. 
First, we read in the whole Git diff history by standard means, using read_csv and the separator \\n to get one row per line. We make sure to give the columns a nice name as well.", "import pandas as pd\n\ndiff_raw = pd.read_csv(\n \"../../buschmais-spring-petclinic_fork/git_diff.log\",\n sep=\"\\n\",\n names=[\"raw\"])\ndiff_raw.head(5)\n\ndiff_raw[diff_raw.raw.str.startswith(\"commit\")].head()", "The output is the commit data that I've describe above where each in line the text file represents one row in the DataFrame (without blank lines).\nCleansing\nWe skip all the data we don't need for sure. Especially the \"extended index header\" with the two lines that being with +++ and --- are candidates to mix with the real diff data that begins also with a + or a -. Furtunately, we can identify these rows easily: These are the rows that begin with the row that starts with index. Using the shift operation starting at the row with index, we can get rid of all those lines.", "index_row = diff_raw.raw.str.startswith(\"index \")\nignored_diff_rows = (index_row.shift(1) | index_row.shift(2))\ndiff_raw = diff_raw[~(index_row | ignored_diff_rows)]\ndiff_raw.head(10)", "Extracting metadata\nNext, we extract some metadata of a commit. We can identify the different entries by using a regular expression that looks up a specific key word for each line. We extract each individual information into a new Series/column because we need it for each change line during the software's history.", "diff_raw['commit'] = diff_raw.raw.str.split(\"^commit \").str[1]\ndiff_raw['timestamp'] = pd.to_datetime(diff_raw.raw.str.split(\"^Date: \").str[1])\ndiff_raw['path'] = diff_raw.raw.str.extract(\"^diff --git.* b/(.*)\", expand=True)[0]\ndiff_raw.head()", "To assign each commit's metadata to the remaining rows, we forward fill those rows with the metadata by using the fillna method.", "diff_raw = diff_raw.fillna(method='ffill')\ndiff_raw.head(8)", "Identifying source code lines\nWe can now focus on the changed source code lines. We can identify", "diff_raw[\"i\"] = diff_raw.raw.str[1:].str.len() - diff_raw.raw.str[1:].str.lstrip().str.len()\ndiff_raw.head()\n\n%%timeit\ndiff_raw['added'] = diff_raw.raw.str.extract(\"^\\+( *).*$\", expand=True)[0].str.len()\ndiff_raw['deleted'] = diff_raw.raw.str.extract(\"^-( *).*$\", expand=True)[0].str.len()\ndiff_raw.head()", "For our later indentation-based complexity calculation, we have to make sure that each line", "diff_raw['line'] = diff_raw.raw.str.replace(\"\\t\", \" \")\ndiff_raw.head()\n\ndiff = \\\n diff_raw[\n (~diff_raw['added'].isnull()) | \n (~diff_raw['deleted'].isnull())].copy()\ndiff.head()\n\ndiff['is_comment'] = diff.line.str[1:].str.match(r' *(//|/*\\*).*')\ndiff['is_empty'] = diff.line.str[1:].str.replace(\" \",\"\").str.len() == 0\ndiff['is_source'] = ~(diff['is_empty'] | diff['is_comment'])\ndiff.head()\n\ndiff.raw.str[0].value_counts()\n\ndiff['lines_added'] = (~diff.added.isnull()).astype('int')\ndiff['lines_deleted'] = (~diff.deleted.isnull()).astype('int')\ndiff.head()\n\ndiff = diff.fillna(0)\n#diff.to_excel(\"temp.xlsx\")\ndiff.head()\n\ncommits_per_day = diff.set_index('timestamp').resample(\"D\").sum()\ncommits_per_day.head()\n\n%matplotlib inline\ncommits_per_day.cumsum().plot()\n\n(commits_per_day.added - commits_per_day.deleted).cumsum().plot()\n\n(commits_per_day.lines_added - commits_per_day.lines_deleted).cumsum().plot()\n\ndiff_sum = diff.sum()\ndiff_sum.lines_added - diff_sum.lines_deleted \n\n3913" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
turbomanage/training-data-analyst
courses/machine_learning/deepdive2/feature_engineering/labs/3_keras_basic_feat_eng-lab.ipynb
apache-2.0
[ "LAB 03: Basic Feature Engineering in Keras\nLearning Objectives\n\nCreate an input pipeline using tf.data\nEngineer features to create categorical, crossed, and numerical feature columns\n\nIntroduction\nIn this lab, we utilize feature engineering to improve the prediction of housing prices using a Keras Sequential Model. \nEach learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference. \nStart by importing the necessary libraries for this lab.", "# Install Sklearn\n!python3 -m pip install --user sklearn\n\n# Ensure the right version of Tensorflow is installed.\n!pip3 freeze | grep 'tensorflow==2\\|tensorflow-gpu==2' || \\\n!python3 -m pip install --user tensorflow==2\n\nimport os\nimport tensorflow.keras\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport tensorflow as tf\n\nfrom tensorflow import feature_column as fc\nfrom tensorflow.keras import layers\nfrom sklearn.model_selection import train_test_split\nfrom keras.utils import plot_model\n\nprint(\"TensorFlow version: \",tf.version.VERSION)", "Many of the Google Machine Learning Courses Programming Exercises use the California Housing Dataset, which contains data drawn from the 1990 U.S. Census. Our lab dataset has been pre-processed so that there are no missing values.\nFirst, let's download the raw .csv data by copying the data from a cloud storage bucket.", "if not os.path.isdir(\"../data\"):\n os.makedirs(\"../data\")\n\n!gsutil cp gs://cloud-training-demos/feat_eng/housing/housing_pre-proc.csv ../data \n\n!ls -l ../data/", "Now, let's read in the dataset just copied from the cloud storage bucket and create a Pandas dataframe.", "housing_df = pd.read_csv('../data/housing_pre-proc.csv', error_bad_lines=False)\nhousing_df.head()", "We can use .describe() to see some summary statistics for the numeric fields in our dataframe. Note, for example, the count row and corresponding columns. The count shows 20433.000000 for all feature columns. Thus, there are no missing values.", "housing_df.describe()", "Split the dataset for ML\nThe dataset we loaded was a single CSV file. We will split this into train, validation, and test sets.", "train, test = train_test_split(housing_df, test_size=0.2)\ntrain, val = train_test_split(train, test_size=0.2)\n\nprint(len(train), 'train examples')\nprint(len(val), 'validation examples')\nprint(len(test), 'test examples')", "Now, we need to output the split files. We will specifically need the test.csv later for testing. You should see the files appear in the home directory.", "train.to_csv('../data/housing-train.csv', encoding='utf-8', index=False)\n\nval.to_csv('../data/housing-val.csv', encoding='utf-8', index=False)\n\ntest.to_csv('../data/housing-test.csv', encoding='utf-8', index=False)\n\n!head ../data/housing*.csv", "Create an input pipeline using tf.data\nNext, we will wrap the dataframes with tf.data. This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model. \nHere, we create an input pipeline using tf.data. This function is missing two lines. 
Correct and run the cell.", "# A utility method to create a tf.data dataset from a Pandas Dataframe\n\ndef df_to_dataset(dataframe, shuffle=True, batch_size=32):\n dataframe = dataframe.copy()\n \n # TODO 1 -- Your code here\n\n if shuffle:\n ds = ds.shuffle(buffer_size=len(dataframe))\n ds = ds.batch(batch_size)\n return ds", "Next we initialize the training and validation datasets.", "batch_size = 32\ntrain_ds = df_to_dataset(train)\nval_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)", "Now that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.", "# TODO 1\n", "We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.\nNumeric columns\nThe output of a feature column becomes the input to the model. A numeric is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.\nIn the California housing prices dataset, most columns from the dataframe are numeric. Let' create a variable called numeric_cols to hold only the numerical feature columns.", "# TODO 1\n", "Scaler function\nIt is very important for numerical variables to get scaled before they are \"fed\" into the neural network. Here we use min-max scaling. Here we are creating a function named 'get_scal' which takes a list of numerical features and returns a 'minmax' function, which will be used in tf.feature_column.numeric_column() as normalizer_fn in parameters. 'Minmax' function itself takes a 'numerical' number from a particular feature and return scaled value of that number. \nNext, we scale the numerical feature columns that we assigned to the variable \"numeric cols\".", "# Scalar def get_scal(feature):\n# TODO 1\n\n\n# TODO 1\n", "Next, we should validate the total number of feature columns. Compare this number to the number of numeric features you input earlier.", "print('Total number of feature coLumns: ', len(feature_columns))", "Using the Keras Sequential Model\nNext, we will run this cell to compile and fit the Keras Sequential model.", "# Model create\nfeature_layer = tf.keras.layers.DenseFeatures(feature_columns, dtype='float64')\n\nmodel = tf.keras.Sequential([\n feature_layer,\n layers.Dense(12, input_dim=8, activation='relu'),\n layers.Dense(8, activation='relu'),\n layers.Dense(1, activation='linear', name='median_house_value')\n])\n\n# Model compile\nmodel.compile(optimizer='adam',\n loss='mse',\n metrics=['mse'])\n\n# Model Fit\nhistory = model.fit(train_ds,\n validation_data=val_ds,\n epochs=32)", "Next we show loss as Mean Square Error (MSE). Remember that MSE is the most commonly used regression loss function. MSE is the sum of squared distances between our target variable (e.g. housing median age) and predicted values.", "loss, mse = model.evaluate(train_ds)\nprint(\"Mean Squared Error\", mse)", "Visualize the model loss curve\nNext, we will use matplotlib to draw the model's loss curves for training and validation. 
A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets.", "def plot_curves(history, metrics):\n nrows = 1\n ncols = 2\n fig = plt.figure(figsize=(10, 5))\n\n for idx, key in enumerate(metrics): \n ax = fig.add_subplot(nrows, ncols, idx+1)\n plt.plot(history.history[key])\n plt.plot(history.history['val_{}'.format(key)])\n plt.title('model {}'.format(key))\n plt.ylabel(key)\n plt.xlabel('epoch')\n plt.legend(['train', 'validation'], loc='upper left'); \n\nplot_curves(history, ['loss', 'mse'])", "Load test data\nNext, we read in the test.csv file and validate that there are no null values. \nAgain, we can use .describe() to see some summary statistics for the numeric fields in our dataframe. The count shows 4087.000000 for all feature columns. Thus, there are no missing values.", "test_data = pd.read_csv('../data/housing-test.csv')\ntest_data.describe()", "Now that we have created an input pipeline using tf.data and compiled a Keras Sequential Model, we now create the input function for the test data and to initialize the test_predict variable.", "# TODO 1\n\n\ntest_predict = test_input_fn(dict(test_data))", "Prediction: Linear Regression\nBefore we begin to feature engineer our feature columns, we should predict the median house value. By predicting the median house value now, we can then compare it with the median house value after feature engineeing.\nTo predict with Keras, you simply call model.predict() and pass in the housing features you want to predict the median_house_value for. Note: We are predicting the model locally.", "predicted_median_house_value = model.predict(test_predict)", "Next, we run two predictions in separate cells - one where ocean_proximity=INLAND and one where ocean_proximity= NEAR OCEAN.", "# Ocean_proximity is INLAND\nmodel.predict({\n 'longitude': tf.convert_to_tensor([-121.86]),\n 'latitude': tf.convert_to_tensor([39.78]),\n 'housing_median_age': tf.convert_to_tensor([12.0]),\n 'total_rooms': tf.convert_to_tensor([7653.0]),\n 'total_bedrooms': tf.convert_to_tensor([1578.0]),\n 'population': tf.convert_to_tensor([3628.0]),\n 'households': tf.convert_to_tensor([1494.0]),\n 'median_income': tf.convert_to_tensor([3.0905]),\n 'ocean_proximity': tf.convert_to_tensor(['INLAND'])\n}, steps=1)\n\n# Ocean_proximity is NEAR OCEAN\nmodel.predict({\n 'longitude': tf.convert_to_tensor([-122.43]),\n 'latitude': tf.convert_to_tensor([37.63]),\n 'housing_median_age': tf.convert_to_tensor([34.0]),\n 'total_rooms': tf.convert_to_tensor([4135.0]),\n 'total_bedrooms': tf.convert_to_tensor([687.0]),\n 'population': tf.convert_to_tensor([2154.0]),\n 'households': tf.convert_to_tensor([742.0]),\n 'median_income': tf.convert_to_tensor([4.9732]),\n 'ocean_proximity': tf.convert_to_tensor(['NEAR OCEAN'])\n}, steps=1)", "The arrays returns a predicted value. What do these numbers mean? Let's compare this value to the test set. \nGo to the test.csv you read in a few cells up. Locate the first line and find the median_house_value - which should be 249,000 dollars near the ocean. What value did your model predict for the median_house_value? Was it a solid model performance? Let's see if we can improve this a bit with feature engineering! \nEngineer features to create categorical and numerical features\nNow we create a cell that indicates which features will be used in the model.\nNote: Be sure to bucketize 'housing_median_age' and ensure that 'ocean_proximity' is one-hot encoded. 
And, don't forget your numeric values!", "# TODO 2\n", "Next, we scale the numerical, bucktized, and categorical feature columns that we assigned to the variables in the precding cell.", "# Scalar def get_scal(feature):\ndef get_scal(feature):\n def minmax(x):\n mini = train[feature].min()\n maxi = train[feature].max()\n return (x - mini)/(maxi-mini)\n return(minmax)\n\n# All numerical features - scaling\nfeature_columns = []\nfor header in numeric_cols:\n scal_input_fn = get_scal(header)\n feature_columns.append(fc.numeric_column(header,\n normalizer_fn=scal_input_fn))", "Categorical Feature\nIn this dataset, 'ocean_proximity' is represented as a string. We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector.\nNext, we create a categorical feature using 'ocean_proximity'.", "# TODO 2\n", "Bucketized Feature\nOften, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider our raw data that represents a homes' age. Instead of representing the house age as a numeric column, we could split the home age into several buckets using a bucketized column. Notice the one-hot values below describe which age range each row matches.\nNext we create a bucketized column using 'housing_median_age'", "# TODO 2\n", "Feature Cross\nCombining features into a single feature, better known as feature crosses, enables a model to learn separate weights for each combination of features.\nNext, we create a feature cross of 'housing_median_age' and 'ocean_proximity'.", "# TODO 2\n", "Next, we should validate the total number of feature columns. Compare this number to the number of numeric features you input earlier.", "print('Total number of feature coumns: ', len(feature_columns))", "Next, we will run this cell to compile and fit the Keras Sequential model. This is the same model we ran earlier.", "# Model create\nfeature_layer = tf.keras.layers.DenseFeatures(feature_columns,\n dtype='float64')\n\nmodel = tf.keras.Sequential([\n feature_layer,\n layers.Dense(12, input_dim=8, activation='relu'),\n layers.Dense(8, activation='relu'),\n layers.Dense(1, activation='linear', name='median_house_value')\n])\n\n# Model compile\nmodel.compile(optimizer='adam',\n loss='mse',\n metrics=['mse'])\n\n# Model Fit\nhistory = model.fit(train_ds,\n validation_data=val_ds,\n epochs=32)", "Next, we show loss and mean squared error then plot the model.", "loss, mse = model.evaluate(train_ds)\nprint(\"Mean Squared Error\", mse)\n\nplot_curves(history, ['loss', 'mse'])", "Next we create a prediction model. Note: You may use the same values from the previous prediciton.", "# TODO 2\n", "Analysis\nThe array returns a predicted value. Compare this value to the test set you ran earlier. Your predicted value may be a bit better.\nNow that you have your \"feature engineering template\" setup, you can experiment by creating additional features. For example, you can create derived features, such as households per population, and see how they impact the model. You can also experiment with replacing the features you used to create the feature cross.\nCopyright 2020 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
christophebertrand/ada-epfl
HW05-TamingText/Part2.ipynb
mit
[ "import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import show\n%matplotlib inline\n\nimport glob # to find all files in folder\n\nimport pycountry\nimport re # regex\nfrom nltk.sentiment.util import *\nimport nltk as nl\nfrom nltk.corpus import stopwords as nlstopw\nimport string\nfrom matplotlib import cm\n", "Import data", "folder = 'hillary-clinton-emails/'\n\nemails = pd.read_csv(folder + 'Emails.csv', index_col='Id')\nemails.head(5)", "Analyse Emails", "emails.head()", "The columns ExtractedBodyText is supposed to be the content of the mail but some of the mail have a ExtractedBodyText = NaN but the Rawtext seems to contains something", "emails.columns\n\nprint('Number of emails: ', len(emails))\n\nbodyNaN = emails.ExtractedBodyText.isnull().sum()\nprint('Number of emails with ExtractedBodyText=NaN: {}, ({:.2f}%)'.format(emails.ExtractedBodyText.isnull().sum(), bodyNaN/ len(emails)))", "We could also use the subject since it is usually a summary of the mail", "bodyNaN = emails.ExtractedSubject.isnull().sum()\nprint('Number of emails with ExtractedSubject=NaN: {}, ({:.2f}%)'.format(emails.ExtractedBodyText.isnull().sum(), bodyNaN/ len(emails)))", "Now let's try to combine the subject and the body and drop the mail that have both subject= NaN and body = Nan", "subBodyNan = emails[np.logical_and(emails.ExtractedBodyText.isnull(),emails.ExtractedSubject.isnull())]\nprint('Number of email where both subject and body is NaN: {}({:.2f})'.format(len(subBodyNan), len(subBodyNan)/ len(emails)))", "Well, that number is small enough to drop all email where both Extracted subject and Extracted body is NaN.\nLet's drop them and create a new columns subjectBody that is the concatenation of the 2 columns ExtractedSubject and ExtractedBody. From now we will work with that columns", "emails = emails[~ np.logical_and(emails.ExtractedBodyText.isnull(), emails.ExtractedSubject.isnull())]\nlen(emails)\n\nemails.ExtractedBodyText.fillna('',inplace=True)\nemails.ExtractedSubject.fillna('',inplace=True)\nemails['SubjectBody'] = emails.ExtractedBodyText + emails.ExtractedSubject\nemails.SubjectBody.head()", "Last check to be sur that our columns of interest don't have anymore NaN", "print('Number of NaN in columns SubjectBody: ' ,emails.SubjectBody.isnull().sum())", "Keep only mail that mentions a country\nStructure of a country in pycountry.countres", "list(pycountry.countries)[0]", "First we create a dataframe with one line by countries and we count for each countries its occurences in the mail.\nSince a country can be reference in many way (Switzerland, switzerland, CH), we need to consider all the possible form. \nWe may have a problem with word that have many meaning like US(country) and us (pronoun) so we can't just take all the country name in loer case and all the mail in lower case and just compare.\nWe first try to use that technique:\n 1. the country name can appear either in lower case, with the first letter in uppercase or all in uppercase\n 2. alpha_2 and alpha_3 are always used in uppercase\nBut we still have a lot of problem. Indeed a lot of mail contain sentences all in upper cas (see below):\n - SUBJECT TO AGREEMENT ON SENSITIVE INFORMATION & REDACTIONS. NO FOIA WAIVER. STATE-SCB0045012\nFor example this sentence will match for Togo because of the TO and also for norway because of the NO. 
An other example is Andorra that appears in 55 mails thanks to AND\nAt first we also wanted to keep the upper case since it can be helpfull to do the sentiment analysis. Look at those 2 sentence and their corresponding score:\n - VADER is very smart, handsome, and funny.: compound: 0.8545, neg: 0.0, neu: 0.299, pos: 0.701,\n - VADER is VERY SMART, handsome, and FUNNY.: compound: 0.9227, neg: 0.0, neu: 0.246, pos: 0.754,\nThe score is not the same. But since we have a lot of information in Upper case and it nothing to do with sentiment, we will put all mails in lower case. And we will also remove the stopwords. \nWe know that we remove the occurance of USA under 'us' but it will also remove the 'and', 'can', 'it',...", "emails.SubjectBody.head(100).apply(print)", "Tokenize and remove stopwords", "\nfrom gensim import corpora, models, utils\nfrom nltk.corpus import stopwords\nsw = stopwords.words('english') + ['re', 'fw', 'fvv', 'fwd']\nsw = sw + ['pm', \"a\", \"about\", \"above\", \"above\", \"across\", \"after\", \"afterwards\", \"again\", \"against\", \"all\", \"almost\", \"alone\", \"along\", \"already\", \"also\",\"although\",\"always\",\"am\",\"among\", \"amongst\", \"amoungst\", \"amount\", \"an\", \"and\", \"another\", \"any\",\"anyhow\",\"anyone\",\"anything\",\"anyway\", \"anywhere\", \"are\", \"around\", \"as\", \"at\", \"back\",\"be\",\"became\", \"because\",\"become\",\"becomes\", \"becoming\", \"been\", \"before\", \"beforehand\", \"behind\", \"being\", \"below\", \"beside\", \"besides\", \"between\", \"beyond\", \"bill\", \"both\", \"bottom\",\"but\", \"by\", \"call\", \"can\", \"cannot\", \"cant\", \"co\", \"con\", \"could\", \"couldnt\", \"cry\", \"de\", \"describe\", \"detail\", \"do\", \"done\", \"down\", \"due\", \"during\", \"each\", \"eg\", \"eight\", \"either\", \"eleven\",\"else\", \"elsewhere\", \"empty\", \"enough\", \"etc\", \"even\", \"ever\", \"every\", \"everyone\", \"everything\", \"everywhere\", \"except\", \"few\", \"fifteen\", \"fify\", \"fill\", \"find\", \"fire\", \"first\", \"five\", \"for\", \"former\", \"formerly\", \"forty\", \"found\", \"four\", \"from\", \"front\", \"full\", \"further\", \"get\", \"give\", \"go\", \"had\", \"has\", \"hasnt\", \"have\", \"he\", \"hence\", \"her\", \"here\", \"hereafter\", \"hereby\", \"herein\", \"hereupon\", \"hers\", \"herself\", \"him\", \"himself\", \"his\", \"how\", \"however\", \"hundred\", \"ie\", \"if\", \"in\", \"inc\", \"indeed\", \"interest\", \"into\", \"is\", \"it\", \"its\", \"itself\", \"keep\", \"last\", \"latter\", \"latterly\", \"least\", \"less\", \"ltd\", \"made\", \"many\", \"may\", \"me\", \"meanwhile\", \"might\", \"mill\", \"mine\", \"more\", \"moreover\", \"most\", \"mostly\", \"move\", \"much\", \"must\", \"my\", \"myself\", \"name\", \"namely\", \"neither\", \"never\", \"nevertheless\", \"next\", \"nine\", \"no\", \"nobody\", \"none\", \"noone\", \"nor\", \"not\", \"nothing\", \"now\", \"nowhere\", \"of\", \"off\", \"often\", \"on\", \"once\", \"one\", \"only\", \"onto\", \"or\", \"other\", \"others\", \"otherwise\", \"our\", \"ours\", \"ourselves\", \"out\", \"over\", \"own\",\"part\", \"per\", \"perhaps\", \"please\", \"put\", \"rather\", \"re\", \"same\", \"see\", \"seem\", \"seemed\", \"seeming\", \"seems\", \"serious\", \"several\", \"she\", \"should\", \"show\", \"side\", \"since\", \"sincere\", \"six\", \"sixty\", \"so\", \"some\", \"somehow\", \"someone\", \"something\", \"sometime\", \"sometimes\", \"somewhere\", \"still\", \"such\", \"system\", \"take\", \"ten\", \"than\", 
\"that\", \"the\", \"their\", \"them\", \"themselves\", \"then\", \"thence\", \"there\", \"thereafter\", \"thereby\", \"therefore\", \"therein\", \"thereupon\", \"these\", \"they\", \"thickv\", \"thin\", \"third\", \"this\", \"those\", \"though\", \"three\", \"through\", \"throughout\", \"thru\", \"thus\", \"to\", \"together\", \"too\", \"top\", \"toward\", \"towards\", \"twelve\", \"twenty\", \"two\", \"un\", \"under\", \"until\", \"up\", \"upon\", \"us\", \"very\", \"via\", \"was\", \"we\", \"well\", \"were\", \"what\", \"whatever\", \"when\", \"whence\", \"whenever\", \"where\", \"whereafter\", \"whereas\", \"whereby\", \"wherein\", \"whereupon\", \"wherever\", \"whether\", \"which\", \"while\", \"whither\", \"who\", \"whoever\", \"whole\", \"whom\", \"whose\", \"why\", \"will\", \"with\", \"within\", \"without\", \"would\", \"yet\", \"you\", \"your\", \"yours\", \"yourself\", \"yourselves\", \"the\"]\n\ndef fil(row):\n t = utils.simple_preprocess(row.SubjectBody)\n filt = list(filter(lambda x: x not in sw, t))\n return ' '.join(filt)\nemails['SubjectBody'] = emails.apply(fil, axis=1)\nemails.head(10)\n\ncountries = np.array([[country.name.lower(), country.alpha_2.lower(), country.alpha_3.lower()] for country in list(pycountry.countries)])\ncountries[:5]\n\ncountries.shape\n\ncountries = pd.DataFrame(countries, columns=['Name', 'Alpha_2', 'Alpha_3'])\ncountries.head()\n\n\n\ncountries.shape\n\ncountries.isin(['aruba']).any().any()\n\ndef check_country(row):\n return countries.isin(row.SubjectBody.split()).any().any()\n \n\nemails_country = emails[emails.apply(check_country, axis=1)]\nlen(emails_country)\n", "Sentiment analysis\nWe explain before our precessing. Now we will do the sentiment analysis only on the subject and the body\nSo we will only consider the subject and the body", "sentiments = pd.DataFrame(emails_country.SubjectBody)\nsentiments.head()\n\nsentiments.shape", "Analysis\nWe will do a sentiment analysis on each sentense and then compute a socre for each country\nWe will compare different module:\n - nltk.sentiment.vader that attribute a score to each sentence\n - liuhu that has a set of positive word and one of neg word. 
We count the positif word and neg word in each sentence and compute the mean\nVader (That part takes time)", "sentiments.head()\n\n\nfrom nltk.sentiment.vader import SentimentIntensityAnalyzer\nsid = SentimentIntensityAnalyzer()\ndef sentiment_analysis(row):\n score = sid.polarity_scores(row)\n return pd.Series({'Pos': score['pos'], 'Neg': score['neg'], 'Compound_':score['compound'] })\n\nsentiments = pd.concat([sentiments, sentiments.SubjectBody.apply(sentiment_analysis)], axis=1)\n\nsentiments.to_csv('mailScore.csv')\nsentiments.head()", "Liuhu", "from nltk.corpus import opinion_lexicon\nsentimentsLihuh = pd.read_csv('mailScore.csv', index_col='Id')\n#transform the array of positiv and negatif word in dict\ndicPosNeg = dict()\nfor word in opinion_lexicon.positive():\n dicPosNeg[word] = 1\n \nfor word in opinion_lexicon.negative():\n dicPosNeg[word] = -1 \n \n\ndef sentiment_liuhu(sentence):\n counter = []\n for word in sentence.split():\n value = dicPosNeg.get(word, -999)\n if value != -999:\n counter.append(value)\n \n if len(counter) == 0 :\n return pd.Series({'Sum_': int(0), 'Mean_': int(0) })\n return pd.Series({'Sum_': np.sum(counter), 'Mean_': np.mean(counter) })\n\nsentimentsLihuh = pd.concat([sentimentsLihuh, sentimentsLihuh.SubjectBody.apply(sentiment_liuhu)], axis=1)\n\nsentimentsLihuh.to_csv('mailScore2.csv')\nsentimentsLihuh", "Aggregate by countries\nWe groupe by country and compute the mean of each score", "sentiments = pd.read_csv('mailScore2.csv', index_col='Id')\nsentiments.head()\n\ndef aggScoreByCountry(country):\n condition = sentiments.apply(lambda x: np.any(country.isin(x.SubjectBody.split())), axis=1)\n sent = sentiments[condition]\n if len(sent) == 0:\n print(country.Name, -999)\n return pd.Series({'Compound_':-999, 'Mean_':-999, 'Appearance': int(len(sent))})\n compound_ = np.mean(sent.Compound_)\n mean_ = np.mean(sent.Mean_)\n print(country.Name, compound_)\n return pd.Series({'Compound_': compound_, 'Mean_': mean_, 'Appearance': int(len(sent))})\n\ncountries = pd.concat([countries, countries.apply(lambda x: aggScoreByCountry(x), axis=1)],axis=1)\ncountries.to_csv('countriesScore.csv')", "Drop all country that have a score of -999 (they never appear in the mails)", "countries = countries[countries.Compound_ != -999]\nlen(countries)", "It's a lot of country. We will also use a thresold for the appearance. We only keep mails that are mentioned in a minimum number of emails", "minimum_appearance = 15\ncountries_min = countries[countries.Appearance >= minimum_appearance]\nlen(countries_min)", "Plot\nWe plot the 2 analysis. The first plot show an historgram with the vador score and in color the appearance in the mail.\nIn the second plot the histogram shows the liuhu score and in color the appearance in the mail\nwe only consider countries that are at least mention 15 times. 
Otherwise we end up with too many countries", "# Set up colors : red to green\ncountries_sorted = countries_min.sort_values(by=['Compound_'])\nplt.figure(figsize=(16, 6), dpi=80)\n\n\nappearance = np.array(countries_sorted.Appearance)\ncolors = cm.RdYlGn((appearance / float(max(appearance))))\nplot = plt.scatter(appearance, appearance, c=appearance, cmap = 'RdYlGn')\nplt.clf()\ncolorBar = plt.colorbar(plot)\ncolorBar.ax.set_title(\"Appearance\")\n\n\nindex = np.arange(len(countries_sorted))\nbar_width = 0.95\nplt.bar(range(countries_sorted.shape[0]), countries_sorted.Compound_, align='center', tick_label=countries_sorted.Name, color=colors)\nplt.xticks(rotation=90, ha='center')\nplt.title('Using Vader')\nplt.xlabel('Countries')\nplt.ylabel('Vader Score')\n\n\ncountries_sorted = countries_min.sort_values(by=['Mean_'])\n\nplt.figure(figsize=(16, 6), dpi=80)\n\nappearance = np.array(countries_sorted.Appearance)\ncolors = cm.RdYlGn((appearance / float(max(appearance))))\nplot = plt.scatter(appearance, appearance, c=appearance, cmap = 'RdYlGn')\nplt.clf()\ncolorBar = plt.colorbar(plot)\ncolorBar.ax.set_title(\"Appearance\")\n\n\nindex = np.arange(len(countries_sorted))\nbar_width = 0.95\nplt.bar(range(countries_sorted.shape[0]), countries_sorted.Mean_, align='center', tick_label=countries_sorted.Name, color=colors)\nplt.xticks(rotation=90, ha='center')\nplt.title('Liuhu Score')\nplt.xlabel('Countries')\nplt.ylabel('Liuhu Score')\n\n\n", "We can see that our results are not very conclusive. We could expect USA to have a high score, yet it doesn't appear in our graph. As explained before, this is because of the pronoun 'us'. \nOne improvement is to use a Part-of-speech tagger; that way we might be able to distinguish the country 'US' from the pronoun 'us'" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
eecs445-f16/umich-eecs445-f16
handsOn_lecture10_bias-variance_tradeoff/draft/bias_variance_solutions.ipynb
mit
[ "EECS 445: Machine Learning\nHands On 10: Bias Variance Tradeoff\nConsider a sequence of IID random variable: \n$$\nX_i =\n\\begin{cases}\n100 & \\text{ with prob. } 0.02 \\\n0 & \\text{ with prob. } 0.97 \\\n-100 & \\text{ with prob. } 0.01 \\\n\\end{cases}\n$$\nThe true mean of $X_i$ is \n$$\n0.02 \\times 100 + 0.97 \\times 0 + 0.01 \\times -100 = 1\n$$\nWe want to estimate the true mean of this distribution. We will consider two different estimators of the true mean.\nLet's say you take three samples $X_1, X_2, X_3$, and you compute the empirical mean $Z=\\frac{X_1 + X_2 + X_3}{3}$ and empirical median $Y$ of these three samples (recall that the median is obtained by sorting $X_1, X_2, X_3$ and then choosing the middle (2nd) entry).\nWhat is the bias-variance tradeoff of the $Y$ and $Z$ for estimating the true mean of the above distribution?\n\nThey are both unbiased estimators of the true mean, and have the same variance.\nThe median has higher bias and higher variance.\nThe mean has higher bias and higher variance.\nThey both have no bias, but the mean has lower variance.\nThe mean has no bias but some variance, and the median has non-zero bias but less variance\n\nSolution\n\nThe last answer is correct.\nThe empirical mean of a sample of random $n$ IID random variables is always an unbiased estimate of the true mean. However, the empirical mean estimator can have high variance. Here it is $ \\text{Var}(Z) = \\frac{\\text{Var}(X_i)}{3} = \\frac{(100-1)^2 \\times 0.02 + (-100 - 1)^2 \\times 0.01 + (0-1)^2 \\times 0.97}{3} = 99 \\frac 2 3.$\nThe median, on the other hand, is a biased estimator. It is a little bit hard to calculate exactly, but here goes:\n$$\nmedian = \\begin{cases} 100 & w.p. 0.02^3 + \\binom{3}{1} 0.02^2 \\times 0.98 \\\n-100 & w.p. 0.01^3 + \\binom{3}{1} 0.01^2 \\times 0.99\n\\end{cases}\n$$\nIf you work this out, you see that the median on average is $0.089$. This means that the $\\text{bias}^2 \\approx (1-0.089)^2$ which is no more than 1. Using a similar argument, you can check that the variance of the median is no more than 20. This can be checked experimentally! \n\nDerivation of Bias-Variance Tradeoff eqaution\nAssume that we have noisy data, modeled by $f = y + \\epsilon$, where $\\epsilon \\in \\mathcal{N}(0,\\sigma)$. Given an estimator $\\hat{f}$, the squared error can be derived as follows:\n$$\n\\begin{align}\n\\mathbb{E}\\left[\\left(\\hat{f} - f\\right)^2\\right] &= \\mathbb{E}\\left[\\hat{f}^2 - 2f\\hat{f} + f^2\\right]\\\n&= \\mathbb{E}\\left[\\hat{f}^2\\right] + \\mathbb{E}\\left[f^2\\right] - 2\\mathbb{E}\\left[f\\hat{f}^2\\right] \\text{ By linearity of expectation} \\\n\\end{align}\n$$\nNow, by definition, $Var(x) = \\mathbb{E}\\left[x^2\\right] - \\left(\\mathbb{E}\\left[x\\right]\\right)^2$. Subsituting this definition into the eqaution above, we get:\n$$\n\\begin{align}\n\\mathbb{E}\\left[\\hat{f}^2\\right] + \\mathbb{E}\\left[f^2\\right] - 2\\mathbb{E}\\left[f\\hat{f}^2\\right] &= Var(\\hat{f}) + \\left(\\mathbb{E}[\\hat{f}]\\right)^2 + Var(f) + \\left(\\mathbb{E}[f]\\right)^2 - 2f\\mathbb{E}[\\hat{F}^2] \\ \n&= Var(\\hat{f}) + Var(f) + \\left(\\mathbb{E}[\\hat{f}] - f\\right)^2\\\n&= \\boxed{\\sigma + Var(\\hat{f}) + \\left(\\mathbb{E}[\\hat{f}] - f\\right)^2}\n\\end{align}\n$$\nThe first term $\\sigma$ is the irreducible error due to the noise in the data (from the distribution of $\\epsilon$). The second term is the variance of the estimator $\\hat{f}$ and the final term is the bias of the estimator. 
There is an inherent tradeoff between the bias and variance of an estimator. Generally, more complex estimators (think of high-degree polynomials as an example) will have a low bias since they will fit the sampled data really well. However, this accuracy will not be maintained if we continue to resample the data, which implies that the variance of this estimator is high. \nActivity 1: Bias Variance Tradeoff\nWe will now try to see the inherent tradeoff between bias and variance of estimators through linear regression. Consider the following dataset.", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom numpy.matlib import repmat\n\nfrom sklearn.preprocessing import PolynomialFeatures  # provides the poly object used below\ndegrees = [1,2,3,4,5]\n\n\n#define data\nn = 20\nsub = 1000\nmean = 0\nstd = 0.25\n\n#define test set\nXtest = np.random.random((n,1))*2*np.pi\nytest = np.sin(Xtest) + np.random.normal(mean,std,(n,1))\n\n#pre-allocate variables\npreds = np.zeros((n,sub))\nbias = np.zeros(len(degrees))\nvariance = np.zeros(len(degrees))\nmse = np.zeros(len(degrees))\nvalues = np.expand_dims(np.linspace(0,2*np.pi,100),1)\n", "Let's try several polynomial fits to the data:", "for j,degree in enumerate(degrees):\n    \n    for i in range(sub):\n        \n        #create data - sample from sine wave \n        x = np.random.random((n,1))*2*np.pi\n        y = np.sin(x) + np.random.normal(mean,std,(n,1))\n        \n        #TODO\n        #create features corresponding to degree - ex: 1, x, x^2, x^3...\n        A = \n        \n        #TODO: \n        #fit model using least squares solution (linear regression)\n        #later include ridge regression/normalization\n        coeffs = \n        \n        #store predictions for each sampling\n        preds[:,i] = poly.fit_transform(Xtest).dot(coeffs)[:,0]\n        \n        #plot 9 images\n        if i < 9:\n            plt.subplot(3,3,i+1)\n            plt.plot(values,poly.fit_transform(values).dot(coeffs),x,y,'.b')\n\n    plt.axis([0,2*np.pi,-2,2])\n    plt.suptitle('PolyFit = %i' % (degree))\n    plt.show()\n\n    #TODO\n    #Calculate mean bias, variance, and MSE (UNCOMMENT CODE BELOW!)\n    #bias[j] = \n    #variance[j] = \n    #mse[j] = \n", "Let's plot the data with the estimators!", "plt.subplot(3,1,1)\nplt.plot(degrees,bias)\nplt.title('bias')\nplt.subplot(3,1,2)\nplt.plot(degrees,variance)\nplt.title('variance')\nplt.subplot(3,1,3)\nplt.plot(degrees,mse)\nplt.title('MSE')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
Kaggle/learntools
notebooks/machine_learning/raw/ex7.ipynb
apache-2.0
[ "Introduction\nIn this exercise, you will create and submit predictions for a Kaggle competition. You can then improve your model (e.g. by adding features) to apply what you've learned and move up the leaderboard.\nBegin by running the code cell below to set up code checking and the filepaths for the dataset.", "# Set up code checking\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.machine_learning.ex7 import *\n\n# Set up filepaths\nimport os\nif not os.path.exists(\"../input/train.csv\"):\n os.symlink(\"../input/home-data-for-ml-course/train.csv\", \"../input/train.csv\") \n os.symlink(\"../input/home-data-for-ml-course/test.csv\", \"../input/test.csv\") ", "Here's some of the code you've written so far. Start by running it again.", "# Import helpful libraries\nimport pandas as pd\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.metrics import mean_absolute_error\nfrom sklearn.model_selection import train_test_split\n\n# Load the data, and separate the target\niowa_file_path = '../input/train.csv'\nhome_data = pd.read_csv(iowa_file_path)\ny = home_data.SalePrice\n\n# Create X (After completing the exercise, you can return to modify this line!)\nfeatures = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']\n\n# Select columns corresponding to features, and preview the data\nX = home_data[features]\nX.head()\n\n# Split into validation and training data\ntrain_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)\n\n# Define a random forest model\nrf_model = RandomForestRegressor(random_state=1)\nrf_model.fit(train_X, train_y)\nrf_val_predictions = rf_model.predict(val_X)\nrf_val_mae = mean_absolute_error(rf_val_predictions, val_y)\n\nprint(\"Validation MAE for Random Forest Model: {:,.0f}\".format(rf_val_mae))", "Train a model for the competition\nThe code cell above trains a Random Forest model on train_X and train_y. \nUse the code cell below to build a Random Forest model and train it on all of X and y.", "# To improve accuracy, create a new Random Forest model which you will train on all training data\nrf_model_on_full_data = ____\n\n# fit rf_model_on_full_data on all data from the training data\n____", "Now, read the file of \"test\" data, and apply your model to make predictions.", "# path to file you will use for predictions\ntest_data_path = '../input/test.csv'\n\n# read test data file using pandas\ntest_data = ____\n\n# create test_X which comes from test_data but includes only the columns you used for prediction.\n# The list of columns is stored in a variable called features\ntest_X = ____\n\n# make predictions which we will submit. 
\ntest_preds = ____", "Before submitting, run a check to make sure your test_preds have the right format.", "# Check your answer (To get credit for completing the exercise, you must get a \"Correct\" result!)\nstep_1.check()\n# step_1.solution()\n\n#%%RM_IF(PROD)%%\nrf_model_on_full_data = RandomForestRegressor()\nrf_model_on_full_data.fit(X, y)\ntest_data_path = '../input/test.csv'\ntest_data = pd.read_csv(test_data_path)\ntest_X = test_data[features]\ntest_preds = rf_model_on_full_data.predict(test_X)\nstep_1.assert_check_passed()", "Generate a submission\nRun the code cell below to generate a CSV file with your predictions that you can use to submit to the competition.", "# Run the code to save predictions in the format used for competition scoring\n\noutput = pd.DataFrame({'Id': test_data.Id,\n 'SalePrice': test_preds})\noutput.to_csv('submission.csv', index=False)", "Submit to the competition\nTo test your results, you'll need to join the competition (if you haven't already). So open a new window by clicking on this link. Then click on the Join Competition button.\n\nNext, follow the instructions below:\n$SUBMIT_TO_COMP$\nContinue Your Progress\nThere are many ways to improve your model, and experimenting is a great way to learn at this point.\nThe best way to improve your model is to add features. To add more features to the data, revisit the first code cell, and change this line of code to include more column names:\npython\nfeatures = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']\nSome features will cause errors because of issues like missing values or non-numeric data types. Here is a complete list of potential columns that you might like to use, and that won't throw errors:\n- 'MSSubClass'\n- 'LotArea'\n- 'OverallQual' \n- 'OverallCond' \n- 'YearBuilt'\n- 'YearRemodAdd' \n- '1stFlrSF'\n- '2ndFlrSF' \n- 'LowQualFinSF' \n- 'GrLivArea'\n- 'FullBath'\n- 'HalfBath'\n- 'BedroomAbvGr' \n- 'KitchenAbvGr' \n- 'TotRmsAbvGrd' \n- 'Fireplaces' \n- 'WoodDeckSF' \n- 'OpenPorchSF'\n- 'EnclosedPorch' \n- '3SsnPorch' \n- 'ScreenPorch' \n- 'PoolArea' \n- 'MiscVal' \n- 'MoSold' \n- 'YrSold'\nLook at the list of columns and think about what might affect home prices. To learn more about each of these features, take a look at the data description on the competition page.\nAfter updating the code cell above that defines the features, re-run all of the code cells to evaluate the model and generate a new submission file. \nWhat's next?\nAs mentioned above, some of the features will throw an error if you try to use them to train your model. The Intermediate Machine Learning course will teach you how to handle these types of features. You will also learn to use xgboost, a technique giving even better accuracy than Random Forest.\nThe Pandas course will give you the data manipulation skills to quickly go from conceptual idea to implementation in your data science projects. \nYou are also ready for the Deep Learning course, where you will build models with better-than-human level performance at computer vision tasks." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
shankari/folium
examples/Plugins.ipynb
mit
[ "import os\nimport folium\n\nprint(folium.__version__)", "Examples of plugins usage in folium\nIn this notebook we show a few illustrations of folium's plugin extensions.\nThis is a development notebook\nAdds a button to enable/disable zoom scrolling.\nScrollZoomToggler", "from folium import plugins\n\n\nm = folium.Map([45, 3], zoom_start=4)\n\nplugins.ScrollZoomToggler().add_to(m)\n\nm.save(os.path.join('results', 'Plugins_0.html'))\n\nm", "MarkerCluster\nAdds a MarkerCluster layer on the map.", "import numpy as np\n\n\nN = 100\n\ndata = np.array(\n [\n np.random.uniform(low=35, high=60, size=N), # Random latitudes in Europe.\n np.random.uniform(low=-12, high=30, size=N), # Random longitudes in Europe.\n range(N), # Popups texts are simple numbers.\n ]\n).T\n\nm = folium.Map([45, 3], zoom_start=4)\n\nplugins.MarkerCluster(data).add_to(m)\n\nm.save(os.path.join('results', 'Plugins_1.html'))\n\nm", "Terminator", "m = folium.Map([45, 3], zoom_start=1)\n\nplugins.Terminator().add_to(m)\n\nm.save(os.path.join('results', 'Plugins_2.html'))\n\nm", "Leaflet.boatmarker", "m = folium.Map([30, 0], zoom_start=3)\n\nplugins.BoatMarker(\n location=(34, -43),\n heading=45,\n wind_heading=150,\n wind_speed=45,\n color='#8f8'\n).add_to(m)\n\nplugins.BoatMarker(\n location=(46, -30),\n heading=-20,\n wind_heading=46,\n wind_speed=25,\n color='#88f'\n).add_to(m)\n\n\nm.save(os.path.join('results', 'Plugins_3.html'))\n\nm", "Fullscreen", "m = folium.Map(location=[41.9, -97.3], zoom_start=4)\n\nplugins.Fullscreen(\n position='topright',\n title='Expand me',\n titleCancel='Exit me',\n forceSeparateButton=True).add_to(m)\n\n\nm.save(os.path.join('results', 'Plugins_4.html'))\n\nm # Click on the top right button." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
MBARIMike/stoqs
stoqs/contrib/notebooks/geospatial_sample_selection_may2018.ipynb
gpl-3.0
[ "Select and Plot Data at Time Series Location During CANON Spring 2018\nSeveral platforms visited the same geographic location - retreive all the data from this location and compare\nExecuting this Notebook requires a personal STOQS server. The Docker instructions below will give you a personal server and your own copy of the database.\nDocker Instructions\nWith your stoqs server running in Docker as \ndetailed in the README load a copy of the stoqs_canon_may2018 database (from your $STOQS_HOME/docker directory) :\ndocker-compose exec stoqs createdb -U postgres stoqs_canon_may2018\ncurl -k https://stoqs.mbari.org/media/pg_dumps/stoqs_canon_may2018.pg_dump | \\\n docker exec -i stoqs pg_restore -Fc -U postgres -d stoqs_canon_may2018\n\nThis may take 10 minutes or more to complete - wait for the command prompt. Then launch the Jupyter Notebook server:\ndocker-compose exec stoqs stoqs/manage.py shell_plus --notebook\n\nA message is displayed giving a token for you to use in a browser on your host, e.g.:\nhttp://localhost:8888/?token=&lt;use_the_given_one-time_token&gt;\n\nIn the browser window navigate to this file (stoqs/contrib/notebooks/geospatial_sample_selection_may2018.ipynb) and open it. You will then be able to execute the cells and experiment with this notebook.\n\nFind all the data within 1 km of the center of the Makai ESP Samples (time series location): -122.520, 36.980", "db = 'stoqs_canon_may2018'\n\nfrom django.contrib.gis.geos import fromstr\nfrom django.contrib.gis.measure import D\n\nts_loc = fromstr('POINT(-122.520 36.980)')\nnear_ts_loc = Measurement.objects.using(db).filter(geom__distance_lt=(ts_loc, D(km=1.0)))\n\nacts = Activity.objects.using(db).filter(instantpoint__measurement__in=near_ts_loc)\n\nacts.values_list('platform__name', flat=True).distinct()\n\npctds = acts.filter(platform__name='WesternFlyer_PCTD').order_by('startdate').distinct()\nesps = acts.filter(platform__name='makai_ESP_Archive').order_by('startdate').distinct()\n\npctds\n\n%matplotlib inline\nimport pylab as plt\nplt.scatter([pctd.mappoint.x for pctd in pctds],\n [pctd.mappoint.y for pctd in pctds], c='b')\nplt.scatter([esp.mappoint.x for esp in esps],\n [esp.mappoint.y for esp in esps], c='r')\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom matplotlib import pylab\nfrom numpy import arange\nimport operator\n\ndef plot_platforms(ax):\n plat_labels = []\n\n # Plot in order by platformtype name and platform name\n for ypos, plat in enumerate(\n sorted(plat_start_ends.keys(),\n key=operator.attrgetter('platformtype.name', 'name'))):\n plat_labels.append(f'{plat.name} ({plat.platformtype.name})') \n for bdate, edate in plat_start_ends[plat]:\n dd = edate - bdate\n if dd < 1:\n dd = 1\n ax.barh(ypos+0.5, dd, left=bdate, height=0.8, \n align='center', color='#' + plat.color, alpha=1.0) \n\n ax.set_title(Campaign.objects.using(db).get(id=1).description)\n ax.set_ylim(-0.5, len(plat_labels) + 0.5)\n ax.set_yticks(arange(len(plat_labels)) + 0.5)\n ax.set_yticklabels(plat_labels)\n\n ax.grid(True)\n plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%B %Y'))\n plt.gca().xaxis.set_major_locator(mdates.MonthLocator())\n plt.gca().xaxis.set_minor_locator(mdates.DayLocator())\n plt.gcf().autofmt_xdate()\n\npylab.rcParams['figure.figsize'] = (15, 9)\nfig, ax = plt.subplots()\nplot_platforms(ax)\nplt.show()", "There appear to be 10 events measured by the Benthic Event Detectors. 
Let's find the start times for these events and use k-means clustering to group the BEDs event data start times into 10 clusters.", "import numpy as np\nfrom sklearn.cluster import KMeans\nbed_starts = np.array(Activity.objects.using(db)\n .filter(platform__name__contains='BED')\n .values_list('startdate', flat=True)\n .order_by('startdate'), dtype=np.datetime64)\nkm = KMeans(n_clusters=10).fit(bed_starts.reshape(-1, 1))", "Pick the earliest event start time and construct start and end times that we'll use to instruct the STOQS loader that these are the times when we want to load ADCP data from all the moorings into the database.", "events = {}\nfor bed_start in bed_starts:\n label = km.predict(bed_start.reshape(-1, 1))[0]\n if label not in events.keys():\n events[label] = bed_start\n # Print the clusters of start times and tune n_clusters above to get the optimal set\n ##print(bed_start, label)", "Print Event() instances of begining and end times for use in loadCCE_2015.py", "from datetime import datetime, timedelta\nevent_start_ends = defaultdict(list)\ndef print_Events(events, before, after, type):\n for start in events.values():\n beg_dt = repr(start.astype(datetime) - before).replace('datetime.', '')\n end_dt = repr(start.astype(datetime) + after).replace('datetime.', '')\n event_start_ends[type].append((mdates.date2num(start.astype(datetime) - before),\n mdates.date2num(start.astype(datetime) + after)))\n print(f\" Event({beg_dt}, {end_dt}),\")\n\n# Low-resolution region: 1 day before to 2 days after the start of each event\nbefore = timedelta(days=1)\nafter = timedelta(days=2)\nprint(\"lores_event_times = [\")\nprint_Events(events, before, after, 'lores')\nprint(\" ]\")\n\n# High-resolution region: 4 hours before to 14 hours after the start of each event\nbefore = timedelta(hours=4)\nafter = timedelta(hours=14)\nprint(\"hires_event_times = [\")\nprint_Events(events, before, after, 'hires')\nprint(\" ]\")", "Plot timeline again, but this time with events as shaded regions across all the Platforms.", "def plot_events(ax):\n for type in ('lores', 'hires'):\n for bdate, edate in event_start_ends[type]:\n dd = edate - bdate\n if dd < 1:\n dd = 1\n # Plot discovered events as gray lines across all platforms\n ax.barh(0, dd, left=bdate, height=32, \n align='center', color='#000000', alpha=0.1) \n\npylab.rcParams['figure.figsize'] = (15, 9)\nfig, ax2 = plt.subplots()\nplot_platforms(ax2)\nplot_events(ax2)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive/03_tensorflow/debug_demo.ipynb
apache-2.0
[ "Demonstrates some common TensorFlow errors\nThis notebook demonstrates some common TensorFlow errors, how to find them, and how to fix them.", "import tensorflow as tf\nprint(tf.__version__)", "Shape error", "def some_method(data):\n a = data[:,0:2]\n c = data[:,1]\n s = (a + c)\n return tf.sqrt(tf.matmul(s, tf.transpose(s)))\n\nwith tf.Session() as sess:\n fake_data = tf.constant([\n [5.0, 3.0, 7.1],\n [2.3, 4.1, 4.8],\n [2.8, 4.2, 5.6],\n [2.9, 8.3, 7.3]\n ])\n print(sess.run(some_method(fake_data)))\n\ndef some_method(data):\n a = data[:,0:2]\n print(a.get_shape())\n c = data[:,1]\n print(c.get_shape())\n s = (a + c)\n return tf.sqrt(tf.matmul(s, tf.transpose(s)))\n\nwith tf.Session() as sess:\n fake_data = tf.constant([\n [5.0, 3.0, 7.1],\n [2.3, 4.1, 4.8],\n [2.8, 4.2, 5.6],\n [2.9, 8.3, 7.3]\n ])\n print(sess.run(some_method(fake_data)))\n\ndef some_method(data):\n a = data[:,0:2]\n print(a.get_shape())\n c = data[:,1:3]\n print(c.get_shape())\n s = (a + c)\n return tf.sqrt(tf.matmul(s, tf.transpose(s)))\n\nwith tf.Session() as sess:\n fake_data = tf.constant([\n [5.0, 3.0, 7.1],\n [2.3, 4.1, 4.8],\n [2.8, 4.2, 5.6],\n [2.9, 8.3, 7.3]\n ])\n print(sess.run(some_method(fake_data)))\n\nimport tensorflow as tf\n\nx = tf.constant([[3, 2],\n [4, 5],\n [6, 7]])\nprint(\"x.shape\", x.shape)\nexpanded = tf.expand_dims(x, 1)\nprint(\"expanded.shape\", expanded.shape)\nsliced = tf.slice(x, [0, 1], [2, 1])\nprint(\"sliced.shape\", sliced.shape)\n\nwith tf.Session() as sess:\n print(\"expanded: \", expanded.eval())\n print(\"sliced: \", sliced.eval())", "Vector vs scalar", "def some_method(data):\n print(data.get_shape())\n a = data[:,0:2]\n print(a.get_shape())\n c = data[:,1:3]\n print(c.get_shape())\n s = (a + c)\n return tf.sqrt(tf.matmul(s, tf.transpose(s)))\n\nwith tf.Session() as sess:\n fake_data = tf.constant([5.0, 3.0, 7.1])\n print(sess.run(some_method(fake_data)))\n\ndef some_method(data):\n print(data.get_shape())\n a = data[:,0:2]\n print(a.get_shape())\n c = data[:,1:3]\n print(c.get_shape())\n s = (a + c)\n return tf.sqrt(tf.matmul(s, tf.transpose(s)))\n\nwith tf.Session() as sess:\n fake_data = tf.constant([5.0, 3.0, 7.1])\n fake_data = tf.expand_dims(fake_data, 0)\n print(sess.run(some_method(fake_data)))", "Type error", "def some_method(a, b):\n s = (a + b)\n return tf.sqrt(tf.matmul(s, tf.transpose(s)))\n\nwith tf.Session() as sess:\n fake_a = tf.constant([\n [5.0, 3.0, 7.1],\n [2.3, 4.1, 4.8],\n ])\n fake_b = tf.constant([\n [2, 4, 5],\n [2, 8, 7]\n ])\n print(sess.run(some_method(fake_a, fake_b)))\n\ndef some_method(a, b):\n b = tf.cast(b, tf.float32)\n s = (a + b)\n return tf.sqrt(tf.matmul(s, tf.transpose(s)))\n\nwith tf.Session() as sess:\n fake_a = tf.constant([\n [5.0, 3.0, 7.1],\n [2.3, 4.1, 4.8],\n ])\n fake_b = tf.constant([\n [2, 4, 5],\n [2, 8, 7]\n ])\n print(sess.run(some_method(fake_a, fake_b)))", "TensorFlow debugger\nWrap your normal Session object with tf_debug.LocalCLIDebugWrapperSession", "import tensorflow as tf\nfrom tensorflow.python import debug as tf_debug\n\ndef some_method(a, b):\n b = tf.cast(b, tf.float32)\n s = (a / b)\n s2 = tf.matmul(s, tf.transpose(s))\n return tf.sqrt(s2)\n\nwith tf.Session() as sess:\n fake_a = [\n [5.0, 3.0, 7.1],\n [2.3, 4.1, 4.8],\n ]\n fake_b = [\n [2, 0, 5],\n [2, 8, 7]\n ]\n a = tf.placeholder(tf.float32, shape=[2, 3])\n b = tf.placeholder(tf.int32, shape=[2, 3])\n k = some_method(a, b)\n \n # Note: won't work without the ui_type=\"readline\" argument because\n # Datalab is not an interactive terminal and doesn't 
support the default \"curses\" ui_type.\n # If you are running this a standalone program, omit the ui_type parameter and add --debug\n # when invoking the TensorFlow program\n # --debug (e.g: python debugger.py --debug )\n sess = tf_debug.LocalCLIDebugWrapperSession(sess, ui_type=\"readline\")\n sess.add_tensor_filter(\"has_inf_or_nan\", tf_debug.has_inf_or_nan)\n print(sess.run(k, feed_dict = {a: fake_a, b: fake_b}))", "In the tfdbg> window that comes up, try the following:\n* run -f has_inf_or_nan\n* Notice that several tensors are dumped once the filter criterion is met\n* List the inputs to a specific tensor:\n* li transpose:0 \n* Print the value of a tensor\n* pt transpose:0\n* Where is the inf?\nVisit https://www.tensorflow.org/programmers_guide/debugger for usage details of tfdbg\ntf.Print()\nCreate a python script named debugger.py with the contents shown below.", "%%writefile debugger.py\nimport tensorflow as tf\n\ndef some_method(a, b):\n b = tf.cast(b, tf.float32)\n s = (a / b)\n print_ab = tf.Print(s, [a, b])\n s = tf.where(tf.is_nan(s), print_ab, s)\n return tf.sqrt(tf.matmul(s, tf.transpose(s)))\n\nwith tf.Session() as sess:\n fake_a = tf.constant([\n [5.0, 3.0, 7.1],\n [2.3, 4.1, 4.8],\n ])\n fake_b = tf.constant([\n [2, 0, 5],\n [2, 8, 7]\n ])\n \n print(sess.run(some_method(fake_a, fake_b)))", "Execute the python script", "%%bash\npython debugger.py" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
karlstroetmann/Artificial-Intelligence
Python/3 Games/Game.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open('../style.css') as f:\n css = f.read()\nHTML(css)", "Utilities\nThe global variable gCache is used as a cache for the function evaluate defined later. Instead of just storing the values for a given State, the cache stores pairs of the form \n* ('=', v), \n* ('≤', v), or\n* ('≥', v).\nThe first component of these pairs is a flag that specifies whether the stored value v is exact or whether it only is a lower or upper bound. Concretely, provided gCache[State] is defined and value(State) computes the value of a given State from the perspective of the maximizing \nplayer, the following invariants are satisfied:\n* $\\texttt{gCache[State]} = (\\texttt{'='}, v) \\rightarrow \\texttt{value(State)} = v$.\n* $\\texttt{gCache[State]} = (\\texttt{'≤'}, v) \\rightarrow \\texttt{value(State)} \\leq v$.\n* $\\texttt{gCache[State]} = (\\texttt{'≥'}, v) \\rightarrow \\texttt{value(State)} \\geq v$.", "gCache = {}", "In order to have some variation in our game, we use random numbers to choose between optimal moves.", "import random\nrandom.seed(0)", "Alpha-Beta Pruning with Progressive Deepening, Move Ordering, and Memoization\nThe function pd_evaluate takes three arguments:\n- State is the current state of the game,\n- limit determines how deep the game tree is searched,\n- f is either the function maxValue or the function minValue.\nThe function pd_evaluate uses progressive deepening to compute the value of State. The given State is evaluated for a depth of $0$, $1$, $\\cdots$, limit. The values calculated for a depth of $l$ are stored and used to sort the states when State is next evaluated for a depth of $l+1$. This is beneficial for alpha-beta pruning because alpha-beta pruning can cut off more branches from the search tree if we start by evaluating the best moves first.", "import time\n\ndef pd_evaluate(State, time_limit, f):\n start = time.time()\n limit = 0\n while True:\n value = evaluate(State, limit, f)\n stop = time.time()\n if value in [-1, 1] or stop - start > time_limit:\n print(f'searched to depth {limit}, using {round(stop - start, 3)} seconds')\n return value, limit\n limit += 1", "The function evaluate takes five arguments:\n- State is the current state of the game,\n- limit determines the lookahead. To be more precise, it is the number of half-moves that are investigated to compute the value. If limit is 0 and the game has not ended, the game is evaluated via the function heuristic. This function is supposed to be defined in the notebook defining the game.\n- f is either the function maxValue or the function minValue. \nf = maxValue if it's the maximizing player's turn in State. Otherwise,\n f = minValue.\n- alpha and beta are the parameters from alpha-beta pruning.\nThe function evaluate returns the value that the given State has if both players play their optimal game. \n- If the maximizing player can force a win, the return value is 1.\n- If the maximizing player can at best force a draw, the return value is 0.\n- If the maximizing player might lose even when playing optimal, the return value is -1.\nOtherwise, the value is calculated according to a heuristic.\nFor reasons of efficiency, the function evaluate is memoized using the global variable gCache. 
This work in the same way as described in the notebook Alpha-Beta-Pruning-Memoization.ipynb.", "def evaluate(State, limit, f, alpha=-1, beta=1):\n global gCache\n if (State, limit) in gCache:\n flag, v = gCache[(State, limit)] \n if flag == '=':\n return v\n if flag == '≤':\n if v <= alpha:\n return v\n elif alpha < v < beta:\n w = f(State, limit, alpha, v)\n store_cache(State, limit, alpha, v, w)\n return w\n else: # beta <= v:\n w = f(State, limit, alpha, beta)\n store_cache(State, limit, alpha, beta, w)\n return w\n if flag == '≥':\n if beta <= v:\n return v\n elif alpha < v < beta:\n w = f(State, limit, v, beta)\n store_cache(State, limit, v, beta, w)\n return w\n else: # v <= alpha\n w = f(State, limit, alpha, beta)\n store_cache(State, limit, alpha, beta, w)\n return w\n else:\n v = f(State, limit, alpha, beta)\n store_cache(State, limit, alpha, beta, v)\n return v", "The function store_cache is called with five arguments:\n* State is a state of the game,\n* limit is the search depth,\n* alpha is a number,\n* beta is a number, and\n* value is a number such that:\n $$\\texttt{evaluate(State, limit, f, alpha, beta)} = \\texttt{value}$$\nThe function stores the value in the dictionary Cache under the key State.\nIt also stores an indicator that is either '≤', '=', or '≥'. The value that is stored \nsatisfies the following conditions:\n* If Cache[State, limit] = ('≤', value), then evaluate(State, limit) ≤ value. \n* If Cache[State, limit] = ('=', value), then evaluate(State, limit) = value. \n* If Cache[State, limit] = ('≥', value), then evaluate(State, limit) ≥ value.", "def store_cache(State, limit, alpha, beta, value):\n global gCache\n if value <= alpha:\n gCache[(State, limit)] = ('≤', value)\n elif value < beta:\n gCache[(State, limit)] = ('=', value)\n else: # value >= beta\n gCache[(State, limit)] = ('≥', value)", "The function value_cache receives a State and a limit as parameters. If a value for State has been computed to the given evaluation depth, this value is returned. Otherwise, 0 is returned.", "def value_cache(State, limit):\n flag, value = gCache.get((State, limit), ('=', 0))\n return value", "The module heapq implements heaps. The implementation of maxValue and minValue use heaps as priority queues in order to sort the moves. This improves the performance of alpha-beta pruning.", "import heapq", "The function maxValue satisfies the following specification:\n- $\\alpha \\leq \\texttt{value}(s) \\leq \\beta \\;\\rightarrow\\;\\texttt{maxValue}(s, l, \\alpha, \\beta) = \\texttt{value}(s)$\n- $\\texttt{value}(s) \\leq \\alpha \\;\\rightarrow\\; \\texttt{maxValue}(s, l, \\alpha, \\beta) \\leq \\alpha$\n- $\\beta \\leq \\texttt{value}(s) \\;\\rightarrow\\; \\beta \\leq \\texttt{maxValue}(s, \\alpha, \\beta)$\nIt assumes that gPlayers[0] is the maximizing player. This function implements alpha-beta pruning. 
After searching up to a depth of limit, the value is approximated using the function heuristic.", "def maxValue(State, limit, alpha=-1, beta=1):\n if finished(State):\n return utility(State)\n if limit == 0:\n return heuristic(State)\n value = alpha\n NextStates = next_states(State, gPlayers[0])\n if len(NextStates) == 1: # singular value extension\n return evaluate(NextStates[0], limit, minValue, value, beta)\n Moves = [] # empty priority queue\n for ns in NextStates:\n # heaps are sorted ascendingly, hence the minus\n heapq.heappush(Moves, (-value_cache(ns, limit-2), ns))\n while Moves:\n _, ns = heapq.heappop(Moves)\n value = max(value, evaluate(ns, limit-1, minValue, value, beta))\n if value >= beta:\n return value\n return value", "The function minValue satisfies the following specification:\n- $\\alpha \\leq \\texttt{value}(s) \\leq \\beta \\;\\rightarrow\\;\\texttt{minValue}(s, l, \\alpha, \\beta) = \\texttt{value}(s)$\n- $\\texttt{value}(s) \\leq \\alpha \\;\\rightarrow\\; \\texttt{minValue}(s, l, \\alpha, \\beta) \\leq \\alpha$\n- $\\beta \\leq \\texttt{value}(s) \\;\\rightarrow\\; \\beta \\leq \\texttt{minValue}(s, \\alpha, \\beta)$\nIt assumes that gPlayers[1] is the minimizing player. This function implements alpha-beta pruning. After searching up to a depth of limit, the value is approximated using the function heuristic.", "def minValue(State, limit, alpha=-1, beta=1):\n if finished(State):\n return utility(State)\n if limit == 0:\n return heuristic(State)\n value = beta\n NextStates = next_states(State, gPlayers[1])\n if len(NextStates) == 1:\n return evaluate(NextStates[0], limit, maxValue, alpha, value)\n Moves = [] # empty priority queue\n for ns in NextStates:\n heapq.heappush(Moves, (value_cache(ns, limit-2), ns))\n while Moves:\n _, ns = heapq.heappop(Moves)\n value = min(value, evaluate(ns, limit-1, maxValue, alpha, value))\n if value <= alpha:\n return value\n return value\n\n%%capture\n%run Connect-Four.ipynb", "In the state shown below, Red can force a win by pushing his stones in the 6th row. Due to this fact, *alpha-beta pruning is able to prune large parts of the search path and hence the evaluation is fast.", "canvas = create_canvas()\ndraw(gTestState, canvas, '?')\n\ngCache = {}\n\n%%time\nvalue, limit = pd_evaluate(gTestState, 10, maxValue)\nvalue\n\nlen(gCache)\n\ngCache = {}\n\n%%time\nvalue, limit = pd_evaluate(gStart, 5, maxValue)\nvalue\n\nlen(gCache)", "In order to evaluate the effect of progressive deepening, we reset the cache and can then evaluate the test state without progressive deepening.", "gCache = {}\n\n%%time\nvalue = evaluate(gTestState, 8, maxValue)\nvalue\n\nlen(gCache)", "Playing the Game\nThe function best_move takes two arguments:\n- State is the current state of the game,\n- limit is the depth limit of the recursion.\nThe function best_move returns a pair of the form $(v, s)$ where $s$ is a state and $v$ is the value of this state. The state $s$ is a state that is reached from State if the player makes one of her optimal moves. 
In order to have some variation in the game, the function randomly chooses any of the optimal moves.", "def best_move(State, time_limit):\n NextStates = next_states(State, gPlayers[0])\n if len(NextStates) == 1:\n return pd_evaluate(State, time_limit, maxValue), NextStates[0]\n bestValue, limit = pd_evaluate(State, time_limit, maxValue)\n BestMoves = [s for s in NextStates \n if evaluate(s, limit-1, minValue) == bestValue\n ]\n BestState = random.choice(BestMoves)\n return bestValue, BestState", "The next line is needed because we need the function IPython.display.clear_output to clear the output in a cell.", "import IPython.display ", "The function play_game plays on the given canvas. The game played is specified indirectly by specifying the following:\n- Start is a global variable defining the start state of the game.\n- next_states is a function such that $\\texttt{next_states}(s, p)$ computes the set of all possible states that can be reached from state $s$ if player $p$ is next to move.\n- finished is a function such that $\\texttt{finished}(s)$ is true for a state $s$ if the game is over in state $s$.\n- utility is a function such that $\\texttt{utility}(s, p)$ returns either -1, 0, or 1 in the terminal state $s$. We have that\n - $\\texttt{utility}(s, p)= -1$ iff the game is lost for player $p$ in state $s$, \n - $\\texttt{utility}(s, p)= 0$ iff the game is drawn, and \n - $\\texttt{utility}(s, p)= 1$ iff the game is won for player $p$ in state $s$.", "def play_game(canvas, time_limit):\n global gCache, gMoveCounter\n State = gStart\n while (True):\n gCache = {}\n firstPlayer = gPlayers[0]\n val, State = best_move(State, time_limit)\n draw(State, canvas, f'value = {round(val, 2)}.')\n if finished(State):\n final_msg(State)\n break\n IPython.display.clear_output(wait=True)\n State = get_move(State)\n draw(State, canvas, '')\n if finished(State):\n IPython.display.clear_output(wait=True)\n final_msg(State)\n break\n\ncanvas = create_canvas()\ndraw(gStart, canvas, f'Current value of game for \"X\": {round(0, 2)}')\n\nplay_game(canvas, 2)\n\nlen(gCache)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mtasende/Machine-Learning-Nanodegree-Capstone
notebooks/prod/n09_dyna_10000_states_full_training.ipynb
mit
[ "In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.", "# Basic imports\nimport os\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport datetime as dt\nimport scipy.optimize as spo\nimport sys\nfrom time import time\nfrom sklearn.metrics import r2_score, median_absolute_error\nfrom multiprocessing import Pool\n\n%matplotlib inline\n\n%pylab inline\npylab.rcParams['figure.figsize'] = (20.0, 10.0)\n\n%load_ext autoreload\n%autoreload 2\n\nsys.path.append('../../')\n\nimport recommender.simulator as sim\nfrom utils.analysis import value_eval\nfrom recommender.agent import Agent\nfrom functools import partial\n\nNUM_THREADS = 1\nLOOKBACK = -1 # 252*4 + 28\nSTARTING_DAYS_AHEAD = 252\nPOSSIBLE_FRACTIONS = [0.0, 1.0]\n\n# Get the data\nSYMBOL = 'SPY'\ntotal_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')\ndata_train_df = total_data_train_df[SYMBOL].unstack()\ntotal_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')\ndata_test_df = total_data_test_df[SYMBOL].unstack()\nif LOOKBACK == -1:\n total_data_in_df = total_data_train_df\n data_in_df = data_train_df\nelse:\n data_in_df = data_train_df.iloc[-LOOKBACK:]\n total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]\n\n# Create many agents\nindex = np.arange(NUM_THREADS).tolist()\nenv, num_states, num_actions = sim.initialize_env(total_data_train_df, \n SYMBOL, \n starting_days_ahead=STARTING_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n n_levels=10)\nagents = [Agent(num_states=num_states, \n num_actions=num_actions, \n random_actions_rate=0.98, \n random_actions_decrease=0.9999,\n dyna_iterations=20,\n name='Agent_{}'.format(i)) for i in index]\n\ndef show_results(results_list, data_in_df, graph=False):\n for values in results_list:\n total_value = values.sum(axis=1)\n print('Sharpe ratio: {}\\nCum. Ret.: {}\\nAVG_DRET: {}\\nSTD_DRET: {}\\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))\n print('-'*100)\n initial_date = total_value.index[0]\n compare_results = data_in_df.loc[initial_date:, 'Close'].copy()\n compare_results.name = SYMBOL\n compare_results_df = pd.DataFrame(compare_results)\n compare_results_df['portfolio'] = total_value\n std_comp_df = compare_results_df / compare_results_df.iloc[0]\n if graph:\n plt.figure()\n std_comp_df.plot()", "Let's show the symbols data, to see how good the recommender has to be.", "print('Sharpe ratio: {}\\nCum. 
Ret.: {}\\nAVG_DRET: {}\\nSTD_DRET: {}\\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))\n\n# Simulate (with new envs, each time)\nn_epochs = 7\n\nfor i in range(n_epochs):\n tic = time()\n env.reset(STARTING_DAYS_AHEAD)\n results_list = sim.simulate_period(total_data_in_df, \n SYMBOL,\n agents[0],\n starting_days_ahead=STARTING_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\n toc = time()\n print('Epoch: {}'.format(i))\n print('Elapsed time: {} seconds.'.format((toc-tic)))\n print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))\n show_results([results_list], data_in_df)\n\nenv.reset(STARTING_DAYS_AHEAD)\nresults_list = sim.simulate_period(total_data_in_df, \n SYMBOL, agents[0], \n learn=False, \n starting_days_ahead=STARTING_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n other_env=env)\nshow_results([results_list], data_in_df, graph=True)", "Let's run the trained agent, with the test set\nFirst a non-learning test: this scenario would be worse than what is possible (in fact, the q-learner can learn from past samples in the test set without compromising the causality).", "TEST_DAYS_AHEAD = 20\n\nenv.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)\ntic = time()\nresults_list = sim.simulate_period(total_data_test_df, \n SYMBOL,\n agents[0],\n learn=False,\n starting_days_ahead=TEST_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\ntoc = time()\nprint('Epoch: {}'.format(i))\nprint('Elapsed time: {} seconds.'.format((toc-tic)))\nprint('Random Actions Rate: {}'.format(agents[0].random_actions_rate))\nshow_results([results_list], data_test_df, graph=True)", "And now a \"realistic\" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).", "env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)\ntic = time()\nresults_list = sim.simulate_period(total_data_test_df, \n SYMBOL,\n agents[0],\n learn=True,\n starting_days_ahead=TEST_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\ntoc = time()\nprint('Epoch: {}'.format(i))\nprint('Elapsed time: {} seconds.'.format((toc-tic)))\nprint('Random Actions Rate: {}'.format(agents[0].random_actions_rate))\nshow_results([results_list], data_test_df, graph=True)", "What are the metrics for \"holding the position\"?", "print('Sharpe ratio: {}\\nCum. Ret.: {}\\nAVG_DRET: {}\\nSTD_DRET: {}\\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))\n\nimport pickle\nwith open('../../data/dyna_10000_states_full_training.pkl', 'wb') as best_agent:\n pickle.dump(agents[0], best_agent)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/ja/lite/examples/style_transfer/overview.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "TensorFlow Lite による芸術的スタイル転送\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/examples/style_transfer/overview\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/lite/examples/style_transfer/overview.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/lite/examples/style_transfer/overview.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/examples/style_transfer/overview.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n <td>\n <a href=\"https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub model</a>\n </td>\n</table>\n\n最近開発されたディープラーニングの中で最も面白い開発の 1 つとして、芸術的スタイル転送または パスティーシュ(模倣)として知られる能力があります。これは芸術的スタイルを表現する画像とコンテンツを表現する画像から成る 2 つの入力画像に基づいて新しい画像を創造するものです。\n\nこの手法を使用すると、様々なスタイルの美しく新しい作品を生成することができます。\n\nTensorFlow Lite を初めて使用する場合、Android を使用する場合は、以下のサンプルアプリをご覧ください。\n<a class=\"button button-primary\" href=\"https://github.com/tensorflow/examples/tree/master/lite/examples/style_transfer/android\">Android の例</a>\nAndroid や iOS 以外のプラットフォームを使用する場合、または、すでに <a href=\"https://www.tensorflow.org/api_docs/python/tf/lite\">TensorFlow Lite API</a> に精通している場合は、このチュートリアルに従い、事前トレーニング済みの TensorFlow Lite モデル を使用して、任意のコンテンツ画像とスタイル画像のペアにスタイル転送を適用する方法を学ぶことができます。モデルを使用して、独自のモバイルアプリにスタイル転送を追加することができます。\nモデルは GitHub でオープンソース化されています。異なるパラメータを使用してモデルの再トレーニング(例えば、コンテンツレイヤーの重みを増やしてよりコンテンツ画像に近い出力画像にするなど)が可能です。\nモデルアーキテクチャの理解\n\nこの芸術的スタイル転送モデルは、2 つのサブモデルで構成されています。\n\nスタイル予測モデル: 入力スタイル画像を 100 次元スタイルのボトルネックベクトルに変換する MobilenetV2 ベースのニューラルネットワーク。\nスタイル変換モデル: コンテンツ画像にスタイルのボトルネックベクトルを適用し、スタイル化された画像を生成するニューラルネットワーク。\n\nアプリが特定のスタイル画像セットのみをサポートする必要がある場合は、それらのスタイルのボトルネックベクトルを事前に計算して、そのスタイル予測モデルをアプリのバイナリから除外します。\nセットアップ\n依存関係をインポートします。", "import tensorflow as tf\nprint(tf.__version__)\n\nimport IPython.display as display\n\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nmpl.rcParams['figure.figsize'] = (12,12)\nmpl.rcParams['axes.grid'] = False\n\nimport numpy as np\nimport time\nimport functools", "コンテンツ画像とスタイル画像、および事前トレーニング済みの TensorFlow Lite モデルをダウンロードします。", "content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')\nstyle_path = 
tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')\n\nstyle_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/prediction/1?lite-format=tflite')\nstyle_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/transfer/1?lite-format=tflite')", "入力を前処理する\n\nコンテンツ画像とスタイル画像は RGB 画像である必要があります。ピクセル値は [0..1] 間の float32 の数値です。\nスタイル画像のサイズは (1, 256, 256, 3) である必要があります。画像を中央でクロップしてサイズを変更します。\nコンテンツ画像は (1, 384, 384, 3) である必要があります。画像を中央でクロップしてサイズを変更します。", "# Function to load an image from a file, and add a batch dimension.\ndef load_img(path_to_img):\n img = tf.io.read_file(path_to_img)\n img = tf.io.decode_image(img, channels=3)\n img = tf.image.convert_image_dtype(img, tf.float32)\n img = img[tf.newaxis, :]\n\n return img\n\n# Function to pre-process by resizing an central cropping it.\ndef preprocess_image(image, target_dim):\n # Resize the image so that the shorter dimension becomes 256px.\n shape = tf.cast(tf.shape(image)[1:-1], tf.float32)\n short_dim = min(shape)\n scale = target_dim / short_dim\n new_shape = tf.cast(shape * scale, tf.int32)\n image = tf.image.resize(image, new_shape)\n\n # Central crop the image.\n image = tf.image.resize_with_crop_or_pad(image, target_dim, target_dim)\n\n return image\n\n# Load the input images.\ncontent_image = load_img(content_path)\nstyle_image = load_img(style_path)\n\n# Preprocess the input images.\npreprocessed_content_image = preprocess_image(content_image, 384)\npreprocessed_style_image = preprocess_image(style_image, 256)\n\nprint('Style Image Shape:', preprocessed_style_image.shape)\nprint('Content Image Shape:', preprocessed_content_image.shape)", "入力を可視化する", "def imshow(image, title=None):\n if len(image.shape) > 3:\n image = tf.squeeze(image, axis=0)\n\n plt.imshow(image)\n if title:\n plt.title(title)\n\nplt.subplot(1, 2, 1)\nimshow(preprocessed_content_image, 'Content Image')\n\nplt.subplot(1, 2, 2)\nimshow(preprocessed_style_image, 'Style Image')", "TensorFlow Lite でスタイル転送を実行する\nスタイルを予測する", "# Function to run style prediction on preprocessed style image.\ndef run_style_predict(preprocessed_style_image):\n # Load the model.\n interpreter = tf.lite.Interpreter(model_path=style_predict_path)\n\n # Set model input.\n interpreter.allocate_tensors()\n input_details = interpreter.get_input_details()\n interpreter.set_tensor(input_details[0][\"index\"], preprocessed_style_image)\n\n # Calculate style bottleneck.\n interpreter.invoke()\n style_bottleneck = interpreter.tensor(\n interpreter.get_output_details()[0][\"index\"]\n )()\n\n return style_bottleneck\n\n# Calculate style bottleneck for the preprocessed style image.\nstyle_bottleneck = run_style_predict(preprocessed_style_image)\nprint('Style Bottleneck Shape:', style_bottleneck.shape)", "スタイルを変換する", "# Run style transform on preprocessed style image\ndef run_style_transform(style_bottleneck, preprocessed_content_image):\n # Load the model.\n interpreter = tf.lite.Interpreter(model_path=style_transform_path)\n\n # Set model input.\n input_details = interpreter.get_input_details()\n interpreter.allocate_tensors()\n\n # Set model inputs.\n interpreter.set_tensor(input_details[0][\"index\"], preprocessed_content_image)\n interpreter.set_tensor(input_details[1][\"index\"], style_bottleneck)\n interpreter.invoke()\n\n # 
Transform content image.\n stylized_image = interpreter.tensor(\n interpreter.get_output_details()[0][\"index\"]\n )()\n\n return stylized_image\n\n# Stylize the content image using the style bottleneck.\nstylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)\n\n# Visualize the output.\nimshow(stylized_image, 'Stylized Image')", "スタイルをブレンドする\nコンテンツ画像のスタイルをスタイル化された出力にブレンドさせることができます。こうすると、出力がよりコンテンツ画像のように見えるようになります。", "# Calculate style bottleneck of the content image.\nstyle_bottleneck_content = run_style_predict(\n preprocess_image(content_image, 256)\n )\n\n# Define content blending ratio between [0..1].\n# 0.0: 0% style extracts from content image.\n# 1.0: 100% style extracted from content image.\ncontent_blending_ratio = 0.5 #@param {type:\"slider\", min:0, max:1, step:0.01}\n\n# Blend the style bottleneck of style image and content image\nstyle_bottleneck_blended = content_blending_ratio * style_bottleneck_content \\\n + (1 - content_blending_ratio) * style_bottleneck\n\n# Stylize the content image using the style bottleneck.\nstylized_image_blended = run_style_transform(style_bottleneck_blended,\n preprocessed_content_image)\n\n# Visualize the output.\nimshow(stylized_image_blended, 'Blended Stylized Image')", "パフォーマンスベンチマーク\nパフォーマンスベンチマークの数値は、ここで説明するツールで生成されます。\n<table>\n<thead>\n<tr>\n<th>モデル名</th> <th>モデルサイズ</th> <th>デバイス</th> <th>NNAPI</th> <th>CPU</th> <th>GPU</th>\n</tr> </thead>\n<tr> <td rowspan=\"3\"><a href=\"https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/prediction/1?lite-format=tflite\">Style prediction model (int8)</a></td>\n<td rowspan=\"3\">2.8 Mb</td>\n<td>Pixel 3 (Android 10)</td> <td>142ms</td>\n<td>14ms*</td>\n<td></td>\n</tr>\n<tr>\n<td>Pixel 4 (Android 10)</td> <td>5.2ms</td>\n<td>6.7ms*</td>\n<td></td>\n</tr>\n<tr>\n<td>iPhone XS (iOS 12.4.1)</td> <td></td>\n<td>10.7ms**</td>\n<td></td>\n</tr>\n<tr> <td rowspan=\"3\"><a href=\"https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/transfer/1?lite-format=tflite\">スタイル変換モデル (int8)</a></td>\n<td rowspan=\"3\">0.2 Mb</td>\n<td>Pixel 3 (Android 10)</td> <td></td>\n<td>540ms*</td>\n<td></td>\n</tr>\n<tr>\n<td>Pixel 4 (Android 10)</td> <td></td>\n<td>405ms*</td>\n<td></td>\n</tr>\n<tr>\n<td>iPhone XS (iOS 12.4.1)</td> <td></td>\n<td>251ms**</td>\n<td></td>\n</tr>\n<tr> <td rowspan=\"2\"><a href=\"https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/fp16/prediction/1?lite-format=tflite\">スタイル予測モデル (float16)</a></td>\n<td rowspan=\"2\">4.7 Mb</td>\n<td>Pixel 3 (Android 10)</td> <td>86ms</td>\n<td>28ms*</td>\n<td>9.1ms</td>\n</tr>\n<tr>\n<td>Pixel 4 (Android 10)</td>\n<td>32ms</td>\n<td>12ms*</td>\n<td>10ms</td>\n</tr>\n<tr> <td rowspan=\"2\"><a href=\"https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/fp16/transfer/1?lite-format=tflite\">Style transfer model (float16)</a></td>\n<td rowspan=\"2\">0.4 Mb</td>\n<td>Pixel 3 (Android 10)</td> <td>1095ms</td>\n<td>545ms*</td>\n<td>42ms</td>\n</tr>\n<tr>\n<td>Pixel 4 (Android 10)</td>\n<td>603ms</td>\n<td>377ms*</td>\n<td>42ms</td>\n</tr>\n</table>\n\n 4 つのスレッドを使用。\n 最高のパフォーマンス結果を得るために、iPhone では 2 つのスレッドを使用。*" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dietmarw/EK5312_ElectricalMachines
Chapman/Ch4-Problem_4-07.ipynb
unlicense
[ "Excercises Electric Machinery Fundamentals\nChapter 4\nProblem 4-7", "%pylab notebook\n%precision 1", "Description\nA 100-MVA, 14.4-kV, 0.8-PF-lagging, 50-Hz, two-pole, Y-connected synchronous generator has a per-unit synchronous reactance of 1.1 and a per-unit armature resistance of 0.011.", "Vl = 14.4e3 # [V]\nS = 100e6 # [VA]\nra = 0.011 # [pu]\nxs = 1.1 # [pu]\nPF = 0.8\np = 2\nfse = 50 # [Hz]", "(a)\n\nWhat are its synchronous reactance and armature resistance in ohms?\n\n(b)\n\nWhat is the magnitude of the internal generated voltage $E_A$ at the rated conditions?\nWhat is its torque angle $\\delta$ at these conditions?\n\n(c)\nIgnoring losses in this generator\n\nWhat torque must be applied to its shaft by the prime mover at full load?\n\nSOLUTION\nThe base phase voltage of this generator is:", "Vphase_base = Vl / sqrt(3)\nprint('Vphase_base = {:.0f} V'.format(Vphase_base))", "Therefore, the base impedance of the generator is:\n$$Z_\\text{base} = \\frac{3V^2_{\\phi_\\text{base}}}{S_\\text{base}}$$", "Zbase = 3*Vphase_base**2 / S\nprint('Zbase = {:.3f} Ω'.format(Zbase))", "(b)\nThe generator impedance in ohms are:", "Ra = ra * Zbase\nXs = xs * Zbase\nprint('''\nRa = {:.4f} Ω Xs = {:.3f} Ω\n==============================='''.format(Ra, Xs))", "(b)\nThe rated armature current is:\n$$I_A = I_L = \\frac{S}{\\sqrt{3}V_T}$$", "Ia_amp = S / (sqrt(3) * Vl)\nprint('Ia_amp = {:.0f} A'.format(Ia_amp))", "The power factor is 0.8 lagging, so:", "Ia_angle = -arccos(PF)\nIa = Ia_amp * (cos(Ia_angle) + sin(Ia_angle)*1j)\nprint('Ia = {:.0f} ∠{:.2f}° A'.format(abs(Ia), Ia_angle / pi *180))", "It is very often the case that especially in larger machines the armature resistance $R_A$ is simply neclected and one calulates the armature voltage simply as:\n$$\\vec{E}A = \\vec{V}\\phi + jX_S\\vec{I}_A$$\nBut since in this case we were given the armature resistance explicitly we should also use it.\nTherefore, the internal generated voltage is\n$$\\vec{E}A = \\vec{V}\\phi + (R_A + jX_S)\\vec{I}_A$$", "EA = Vphase_base + (Ra + Xs*1j) * Ia\nEA_angle = arctan(EA.imag/EA.real)\nprint('EA = {:.1f} V ∠{:.1f}°'.format(abs(EA), EA_angle/pi*180))", "Therefore, the magnitude of the internal generated voltage $E_A$ is:", "abs(EA)", "V, and the torque angle $\\delta$ is:", "EA_angle/pi*180", "degrees.\n(c)\nIgnoring losses, the input power would equal the output power. Since", "Pout = PF * S\nprint('Pout = {:.1F} MW'.format(Pout/1e6))", "and,\n$$n_\\text{sync} = \\frac{120f_{se}}{P}$$", "n_sync = 120*fse / p\nprint('n_sync = {:.0F} r/min'.format(n_sync))", "the applied torque would be:\n$$\\tau_\\text{app} = \\tau_\\text{ind} = \\frac{P_\\text{out}}{\\omega_\\text{sync}}$$", "w_sync = n_sync * (2*pi/60.0)\ntau_app = Pout / w_sync\nprint('''\nτ_app = {:.0f} Nm\n================='''.format(tau_app))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
WomensCodingCircle/CodingCirclePython
Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb
mit
[ "Lists and Tuples\nLists Recap\nA list is a sequence of values. These values can be anything: strings, numbers, booleans, even other lists.\nTo make a list you put the items separated by commas between brackets []", "sushi_order = ['unagi', 'hamachi', 'otoro']\nprices = [6.50, 5.50, 15.75]\nprint(sushi_order)\nprint(prices)", "You can access a single element in a list by indexing in using brackets. List indexing starts at 0 so to get the first element, you use 0, the second element is 1 and so on.\nlist[index]", "print(sushi_order[0])\nprint(sushi_order[2])", "You can find the length of a list using len\nlen(list)", "print(len(sushi_order))", "You can use negative indexing to get the last element of a list", "print(sushi_order[-3])", "Nested lists\nLists can contain other lists as elements.\nThis is a convenient alternative to a matrix. You can arrange lists of varying lengths (and contents) in a specific order and you can iterate over the elements (see below).", "everyones_order = [['california roll'], ['unagi', 'dragon roll'], sushi_order]\nprint(everyones_order)", "To access an element in a nested list, first index to the inner list, then index to the item.\nExample:\nlist_of_lists = [[1,2], [3,4], []]\nAcess the first index to the inner list and index to the item\npython\n inner_list = list_of_lists[1] # [3,4]\n print inner_list[0] # 3 \nOr even quicker:\npython\n list_of_lists[1][0] # 3\nTRY IT\n1) To get dragon roll from the sushi order, first we get the second element (index 1) then we get the the second item (index 1)\n2) Print california roll from the list everyones_order:\n3) Print all items from the second person's order\nMutable Lists\nLists are mutable, that means that you can change elements. \nTo assign a new value to an element\nmy_list = [1, 2, 3]\nmy_list[0] = 100", "sushi_order[0] = 'caterpillar roll'\nprint(sushi_order)", "TRY IT\nUpdate the last element in prices to be 21.00 and print out the new result", "prices[-1] = 21.00\nprint(prices)", "Operators and Lists\nThe in operator allows you to see if an element is contained in a list", "sushi_order\n\nprint(('hamachi' in sushi_order))\n\nif 'otoro' in sushi_order:\n print(\"Big spender!\")", "You can use some arithmatic operators on lists\nThe + operator concatenates two lists\nThe * operator duplicates a list that many times", "print((sushi_order * 3))\n\nexprep = ['rep'+str(i) for i in range(5)]\nexprep\n\nprint((prices + sushi_order))", "Note: You can only concatenate lists with lists! If you want to add a \"non-list\" element you can use the append() function.", "newprices = prices.copy()\nnewprices.append(22)\nprint(newprices)\n\nprices", "Remember slices from strings? 
We can also use the slice operator on lists", "inexpensive = sushi_order[:2] #takes only the first two elements from list\nprint(inexpensive)", "Don't forget, you can use the for and in keywords to loop through a list", "for item in sushi_order:\n print((\"I'd like to order the {}.\".format(item)))\n \nprint(\"And hold the wasabi!\")\n\nfor ind, item in enumerate(sushi_order):\n print((\"I'd like to order the {0} for {1}.\".format(item, prices[ind])))", "TRY IT\nCreate a variable called lots_of_sushi that repeats the inexpensive list two times", "lots_of_sushi = inexpensive*2\nprint(lots_of_sushi)", "Adding and deleting elements\nTo add an element to a list, you have a few options\n\n\nthe append method adds an element or elements to the end of a list, if you pass it a list, the next element with be a list (making a list of lists)\n\n\nthe extend method takes a list of elements and adds them all to the end, not creating a list of lists\n\n\nuse the + operator like you saw before", "my_sushis = ['maguro', 'rock n roll']\nmy_sushis.append('avocado roll')\nprint(my_sushis)\nmy_sushis.append(['hamachi', 'california roll'])\nprint(my_sushis)\n\nmy_sushis = ['maguro', 'rock n roll']\nmy_sushis.extend(['hamachi', 'california roll'])\nprint(my_sushis)", "You also have several options for removing elements\n\n\nthe pop method takes the index of the element to remove and returns the value of the element\n\n\nthe remove method takes the value of the element to remove\n\n\nthe del operator deletes the element or slice of the list that you give it\ndel l[1:]", "print(my_sushis)\nlast_sushi = my_sushis.pop(-1)\nprint(last_sushi)\n\nmy_sushis.remove('maguro')\nprint(my_sushis)\n\ndel my_sushis[1:]\nprint(my_sushis)", "TRY IT\nAdd 'rock n roll' to sushi_order then delete the first element of sushi_order\nList Functions\nmax will return maximum value of list\nmin returns minimum value of list\nsum returns the sum of the values in a list\nlen returns the number of elements in a list # Just a reminder", "numbers = [1, 1, 2, 3, 5, 8]\n\nprint((max(numbers)))\nprint((min(numbers)))\nprint((sum(numbers)))\nprint((len(numbers)))\n\n", "TRY IT\nFind the average of numbers using list functions (and not a loop!)", "sum(numbers)/len(numbers)", "Aliasing\nIf you assign a list to another variable, it will still refer to the same list. This can cause trouble if you change one list because the other will change too.", "cooked_rolls = ['unagi roll', 'shrimp tempura roll']\nmy_order = cooked_rolls\nmy_order.append('hamachi')\nprint(my_order)\nprint(cooked_rolls)", "To check this, you can use the is operator to see if both variable refer to the same object", "print((my_order is cooked_rolls))", "To fix this, you can make a copy of the list using the list function\nlist takes a sequence and turns it into a list.\nAlternatively you can use the copy() method: my_order = cooked_rolls.copy()", "cooked_rolls = ['unagi roll', 'shrimp tempura roll']\nmy_order = list(cooked_rolls)\nmy_order.append('hamachi')\nprint(my_order)\nprint(cooked_rolls)", "Tuples\nTuples are very similar to lists. 
The major difference is that tuples are immutable meaning that you can not add, remove, or assign new values to a tuple.\nThe creator of a tuple is the comma , but by convention people usually surround tuples with parenthesis.", "noodles = ('soba', 'udon', 'ramen', 'lo mein', 'somen', 'rice noodle')\nprint((type(noodles)))", "You can create a tuple from any sequence using the tuple function", "sushi_tuple = tuple(my_order)\nprint(sushi_tuple)\n# Remember strings are sequences\nmaguro = tuple('maguro')\nprint(maguro)", "To create a single element tuple, you need to add a comma to the end of that element (it looks kinda weird)", "single_element_tuple = (1,)\nprint(single_element_tuple)\nprint((type(single_element_tuple)))", "You can use the indexing and slicing you learned for lists the same with tuples.\nBut, because tuples are immutable, you cannot use the append, pop, del, extend, or remove methods or even assign new values to indexes", "print((noodles[0]))\nprint((noodles[4:]))\n\n# This should throw an error\nnoodles[0] = 'spaghetti'", "To change the values in a tuple, you need to create a new tuple (there is nothing stopping you from assigning it to the same variable, though", "print(sushi_tuple)\nsushi_tuple = sushi_tuple[1:] + ('california roll',)\nprint(sushi_tuple)", "You can loop through tuples the same way you loop through lists, using for in", "for noodle in noodles:\n print((\"Yummy, yummy {0} and {1}\".format(noodle, 'sushi')))", "TRY IT\nCreate a tuple containing 'soy sauce' 'ginger' and 'wasabi' and save it in a variable called accompaniments\nZip\nthe zip function takes any number of lists of the same length and returns a list of tuples where the tuples will contain the i-th element from each of the lists.\nThis is really useful when combining lists that are related (especially for looping)", "print((list(zip([1,2,3], [4,5,6]))))\n\nsushi = ['salmon', 'tuna', 'sea urchin']\nprices = [5.5, 6.75, 8]\n\nsushi_and_prices = list(zip(sushi, prices))\n\nsushi_and_prices\n\nfor sushi, price in sushi_and_prices:\n print((\"The {0} costs ${1}\".format(sushi, price)))", "Enumerate\nWhile the zip function iterates over two lists, the built-in function enumerate loops through indices and elements of a list. It returns a list of tuples containing the index and value of that element.\nfor index, value in enumerate(list):\n ...", "exotic_sushi = ['tako', 'toro', 'uni', 'hirame']\nfor index, item in enumerate(exotic_sushi):\n print((index, item))", "Project: Party Budget\nYou are tasked with writing budgeting software, but at this point, things are a mess. You have two files. budget_prices.txt has a list of costs for each item separated by new lines (\\n). budget_items.txt has a list of the items that were bought. Luckily they are both in order. You need to write a program that will take the files and a value for the overall budget and print out the total spent and how close they are to reaching the budget. 
In step 2 you will create a new file where the items and prices are in the same document and there is a sum printed out at the end.\nStep 1\n\nCreate a function called file_to_float_list that takes in a file and returns a list containing a float for each line Hint Make sure to remove the newlines when casting as floats.\nStore the budget of 2000.00 in a variable called budget.\nRun file_to_float_list on budget_prices.txt and save the result in a variable called prices.\nCalculate the sum of the prices array and store in a variable called spent.\nCalculate the percentage of budget spent and store in a variable called percent_spent\n\nPrint out the results:\nBudget: 2000.00 Spent: (amt spent) Percentage Spent: (percent spent)\n\n\n Bonus Print out a progress bar for the budget. Print out '=' for every 10% spent and '-' for every 10% unspent. =====>-----\n\n\nStep 2\n\nCreate a function called file_to_string_list that takes in a file and returns a list containing a string for each line with newlines removed.\nRun file_to_string_list on budget_items.txt and save the result in a variable called stuff_bought.\nZip stuff_bought and prices together and store in a variable called items_and_prices\nLoop through items_and_prices and print out the item, then a tab character '\\t' and then the price (use string formatting)\nPrint another line 'Sum\\t(amount spent)'\n\nPrint a final line 'Budget\\t(budget)'\n\n\nBonus Print everything you printed for step 2 into a new file. (Then open the file in excel.)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
probml/pyprobml
deprecated/schools8_pymc3.ipynb
mit
[ "<a href=\"https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/schools8_pymc3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nIn this notebook, we fit a hierarchical Bayesian model to the \"8 schools\" dataset.\nSee also https://github.com/probml/pyprobml/blob/master/scripts/schools8_pymc3.py", "%matplotlib inline\nimport sklearn\nimport scipy.stats as stats\nimport scipy.optimize\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport time\nimport numpy as np\nimport os\nimport pandas as pd\n\n!pip install -U pymc3>=3.8\nimport pymc3 as pm\n\nprint(pm.__version__)\nimport theano.tensor as tt\nimport theano\n\n#!pip install arviz\nimport arviz as az\n\n!mkdir ../figures", "Data", "# https://github.com/probml/pyprobml/blob/master/scripts/schools8_pymc3.py\n\n# Data of the Eight Schools Model\nJ = 8\ny = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])\nsigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])\nprint(np.mean(y))\nprint(np.median(y))\n\nnames = []\nfor t in range(8):\n names.append(\"{}\".format(t))\n\n# Plot raw data\nfig, ax = plt.subplots()\ny_pos = np.arange(8)\nax.errorbar(y, y_pos, xerr=sigma, fmt=\"o\")\nax.set_yticks(y_pos)\nax.set_yticklabels(names)\nax.invert_yaxis() # labels read top-to-bottom\nplt.title(\"8 schools\")\nplt.savefig(\"../figures/schools8_data.png\")\nplt.show()", "Centered model", "# Centered model\nwith pm.Model() as Centered_eight:\n mu_alpha = pm.Normal(\"mu_alpha\", mu=0, sigma=5)\n sigma_alpha = pm.HalfCauchy(\"sigma_alpha\", beta=5)\n alpha = pm.Normal(\"alpha\", mu=mu_alpha, sigma=sigma_alpha, shape=J)\n obs = pm.Normal(\"obs\", mu=alpha, sigma=sigma, observed=y)\n log_sigma_alpha = pm.Deterministic(\"log_sigma_alpha\", tt.log(sigma_alpha))\n\nnp.random.seed(0)\nwith Centered_eight:\n trace_centered = pm.sample(1000, chains=4, return_inferencedata=False)\n\npm.summary(trace_centered).round(2)\n# PyMC3 gives multiple warnings about divergences\n# Also, see r_hat ~ 1.01, ESS << nchains*1000, especially for sigma_alpha\n# We can solve these problems below by using a non-centered parameterization.\n# In practice, for this model, the results are very similar.\n\n# Display the total number and percentage of divergent chains\ndiverging = trace_centered[\"diverging\"]\nprint(\"Number of Divergent Chains: {}\".format(diverging.nonzero()[0].size))\ndiverging_pct = diverging.nonzero()[0].size / len(trace_centered) * 100\nprint(\"Percentage of Divergent Chains: {:.1f}\".format(diverging_pct))\n\ndir(trace_centered)\n\ntrace_centered.varnames\n\nwith Centered_eight:\n # fig, ax = plt.subplots()\n az.plot_autocorr(trace_centered, var_names=[\"mu_alpha\", \"sigma_alpha\"], combined=True)\n plt.savefig(\"schools8_centered_acf_combined.png\", dpi=300)\n\nwith Centered_eight:\n # fig, ax = plt.subplots()\n az.plot_autocorr(trace_centered, var_names=[\"mu_alpha\", \"sigma_alpha\"])\n plt.savefig(\"schools8_centered_acf.png\", dpi=300)\n\nwith Centered_eight:\n az.plot_forest(trace_centered, var_names=\"alpha\", hdi_prob=0.95, combined=True)\n plt.savefig(\"schools8_centered_forest_combined.png\", dpi=300)\n\nwith Centered_eight:\n az.plot_forest(trace_centered, var_names=\"alpha\", hdi_prob=0.95, combined=False)\n plt.savefig(\"schools8_centered_forest.png\", dpi=300)", "Non-centered", "# Non-centered parameterization\n\nwith pm.Model() as NonCentered_eight:\n mu_alpha = pm.Normal(\"mu_alpha\", mu=0, sigma=5)\n sigma_alpha = 
pm.HalfCauchy(\"sigma_alpha\", beta=5)\n alpha_offset = pm.Normal(\"alpha_offset\", mu=0, sigma=1, shape=J)\n alpha = pm.Deterministic(\"alpha\", mu_alpha + sigma_alpha * alpha_offset)\n # alpha = pm.Normal('alpha', mu=mu_alpha, sigma=sigma_alpha, shape=J)\n obs = pm.Normal(\"obs\", mu=alpha, sigma=sigma, observed=y)\n log_sigma_alpha = pm.Deterministic(\"log_sigma_alpha\", tt.log(sigma_alpha))\n\nnp.random.seed(0)\nwith NonCentered_eight:\n trace_noncentered = pm.sample(1000, chains=4)\n\npm.summary(trace_noncentered).round(2)\n# Samples look good: r_hat = 1, ESS ~= nchains*1000\n\nwith NonCentered_eight:\n az.plot_autocorr(trace_noncentered, var_names=[\"mu_alpha\", \"sigma_alpha\"], combined=True)\n plt.savefig(\"schools8_noncentered_acf_combined.png\", dpi=300)\n\nwith NonCentered_eight:\n az.plot_forest(trace_noncentered, var_names=\"alpha\", combined=True, hdi_prob=0.95)\n plt.savefig(\"schools8_noncentered_forest_combined.png\", dpi=300)\n\naz.plot_forest(\n [trace_centered, trace_noncentered],\n model_names=[\"centered\", \"noncentered\"],\n var_names=\"alpha\",\n combined=True,\n hdi_prob=0.95,\n)\nplt.axvline(np.mean(y), color=\"k\", linestyle=\"--\")\n\naz.plot_forest(\n [trace_centered, trace_noncentered],\n model_names=[\"centered\", \"noncentered\"],\n var_names=\"alpha\",\n kind=\"ridgeplot\",\n combined=True,\n hdi_prob=0.95,\n);", "Funnel of hell", "# Plot the \"funnel of hell\"\n# Based on\n# https://github.com/twiecki/WhileMyMCMCGentlySamples/blob/master/content/downloads/notebooks/GLM_hierarchical_non_centered.ipynb\n\nfig, axs = plt.subplots(ncols=2, sharex=True, sharey=True)\nx = pd.Series(trace_centered[\"mu_alpha\"], name=\"mu_alpha\")\ny = pd.Series(trace_centered[\"log_sigma_alpha\"], name=\"log_sigma_alpha\")\naxs[0].plot(x, y, \".\")\naxs[0].set(title=\"Centered\", xlabel=\"µ\", ylabel=\"log(sigma)\")\n# axs[0].axhline(0.01)\n\nx = pd.Series(trace_noncentered[\"mu_alpha\"], name=\"mu\")\ny = pd.Series(trace_noncentered[\"log_sigma_alpha\"], name=\"log_sigma_alpha\")\naxs[1].plot(x, y, \".\")\naxs[1].set(title=\"NonCentered\", xlabel=\"µ\", ylabel=\"log(sigma)\")\n# axs[1].axhline(0.01)\n\nplt.savefig(\"schools8_funnel.png\", dpi=300)\n\nxlim = axs[0].get_xlim()\nylim = axs[0].get_ylim()\n\nx = pd.Series(trace_centered[\"mu_alpha\"], name=\"mu\")\ny = pd.Series(trace_centered[\"log_sigma_alpha\"], name=\"log sigma_alpha\")\nsns.jointplot(x, y, xlim=xlim, ylim=ylim)\nplt.suptitle(\"centered\")\nplt.savefig(\"schools8_centered_joint.png\", dpi=300)\n\nx = pd.Series(trace_noncentered[\"mu_alpha\"], name=\"mu\")\ny = pd.Series(trace_noncentered[\"log_sigma_alpha\"], name=\"log sigma_alpha\")\nsns.jointplot(x, y, xlim=xlim, ylim=ylim)\nplt.suptitle(\"noncentered\")\nplt.savefig(\"schools8_noncentered_joint.png\", dpi=300)\n\ngroup = 0\nfig, axs = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(10, 5))\nx = pd.Series(trace_centered[\"alpha\"][:, group], name=f\"alpha {group}\")\ny = pd.Series(trace_centered[\"log_sigma_alpha\"], name=\"log_sigma_alpha\")\naxs[0].plot(x, y, \".\")\naxs[0].set(title=\"Centered\", xlabel=r\"$\\alpha_0$\", ylabel=r\"$\\log(\\sigma_\\alpha)$\")\n\nx = pd.Series(trace_noncentered[\"alpha\"][:, group], name=f\"alpha {group}\")\ny = pd.Series(trace_noncentered[\"log_sigma_alpha\"], name=\"log_sigma_alpha\")\naxs[1].plot(x, y, \".\")\naxs[1].set(title=\"NonCentered\", xlabel=r\"$\\alpha_0$\", ylabel=r\"$\\log(\\sigma_\\alpha)$\")\n\nxlim = axs[0].get_xlim()\nylim = axs[0].get_ylim()\n\nplt.savefig(\"schools8_funnel_group0.png\", 
dpi=300)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
muatik/dm
titanic-data-exploration.ipynb
mit
[ "import pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom sklearn import cross_validation, svm, grid_search\n%matplotlib inline\n\ndf = pd.read_csv(\"data/tinanic/train.csv\")\n\"\"\"\nVARIABLE DESCRIPTIONS:\nsurvival Survival\n (0 = No; 1 = Yes)\npclass Passenger Class\n (1 = 1st; 2 = 2nd; 3 = 3rd)\nname Name\nsex Sex\nage Age\nsibsp Number of Siblings/Spouses Aboard\nparch Number of Parents/Children Aboard\nticket Ticket Number\nfare Passenger Fare\ncabin Cabin\nembarked Port of Embarkation\n (C = Cherbourg; Q = Queenstown; S = Southampton)\n\nSPECIAL NOTES:\nPclass is a proxy for socio-economic status (SES)\n 1st ~ Upper; 2nd ~ Middle; 3rd ~ Lower\n\nAge is in Years; Fractional if Age less than One (1)\n If the Age is Estimated, it is in the form xx.5\n\nWith respect to the family relation variables (i.e. sibsp and parch)\nsome relations were ignored. The following are the definitions used\nfor sibsp and parch.\n\nSibling: Brother, Sister, Stepbrother, or Stepsister of Passenger Aboard Titanic\nSpouse: Husband or Wife of Passenger Aboard Titanic (Mistresses and Fiances Ignored)\nParent: Mother or Father of Passenger Aboard Titanic\nChild: Son, Daughter, Stepson, or Stepdaughter of Passenger Aboard Titanic\n\nOther family relatives excluded from this study include cousins,\nnephews/nieces, aunts/uncles, and in-laws. Some children travelled\nonly with a nanny, therefore parch=0 for them. As well, some\ntravelled with very close friends or neighbors in a village, however,\nthe definitions do not support such relations.\n\n\"\"\"\ndf.sample(6)\n\ndf.info()", "Embarked feature", "sns.countplot(data=df, hue=\"Survived\", x=\"Embarked\")\n\nsns.barplot(data=df, x=\"Embarked\", y=\"Survived\")\n\nsns.countplot(data=df, x=\"Age\")\n\nsns.boxplot(data=df, x=\"Survived\", y=\"Age\")\nsns.stripplot(\n x=\"Survived\", y=\"Age\", data=df, jitter=True, edgecolor=\"gray\", alpha=0.25)\n\nsns.FacetGrid(df, hue=\"Survived\", size=6).map(sns.kdeplot, \"Age\").add_legend()", "Sex feature\nFirst, let's have a look at which gender is dominant in the population by a countplot.", "sns.countplot(data=df, x=\"Sex\")\n\nsns.countplot(data=df, hue=\"Survived\", x=\"Sex\")", "According to sex vs. survived chart, most of men did not survived while the majority of women did. The following chart also supports this claim by showing us that 70% of women survived.", "sns.barplot(data=df, x=\"Sex\", y=\"Survived\")", "The inference is that this sex feature can be used in a classification task to determine whether a given person survived or not.\nPclass feature\nThis stands for Passenger Class. There are three classes as 1 = 1st; 2 = 2nd; 3 = 3rd. We can make a guess saying most probably the first class passengers survived thanks to their nobility. This guess is based on the domain knowledge; in that time classes among the people is more obvious and severe than now. 
Let's have a look at the data to see the truth.", "sns.countplot(data=df, hue=\"Survived\", x=\"Pclass\")\n\nsns.countplot(data=df[df['Pclass'] == 1], hue=\"Survived\", x=\"Sex\")", "The chart above corrects the guess: unfortunatelly, passenger class plays a crucial role.", "sns.countplot(data=df[df['Pclass'] == 3], hue=\"Survived\", x=\"Sex\")\n\nsns.barplot(x=\"Sex\", y=\"Survived\", hue=\"Pclass\", data=df);\n\ndef titanicFit(df):\n\n X = df[[\"Sex\", \"Age\", \"Pclass\", \"Embarked\"]]\n y = df[\"Survived\"]\n\n X.Age.fillna(X.Age.mean(), inplace=True)\n\n X.Sex.replace(to_replace=\"male\", value=1, inplace=True)\n X.Sex.replace(to_replace=\"female\", value=0, inplace=True)\n\n X.Embarked.replace(to_replace=\"S\", value=1, inplace=True)\n X.Embarked.replace(to_replace=\"C\", value=2, inplace=True)\n X.Embarked.replace(to_replace=\"Q\", value=3, inplace=True)\n\n X_train, X_test, y_train, y_test = cross_validation.train_test_split(\n X, y, test_size=0.3, random_state=0)\n\n clf = svm.SVC(kernel=\"rbf\")\n parameters = [\n {\n \"kernel\" :[\"linear\"]\n }, {\n \"kernel\" :[\"rbf\"], \"C\":[1, 10, 100], \"gamma\":[0.001, 0.002, 0.01]}\n ]\n\n clf = grid_search.GridSearchCV(\n svm.SVC(), param_grid=parameters, cv=5).fit(X, y)\n return clf\n #print clf.score(X_test, y_test)\n\nclf = titanicFit(df[df.Embarked.isnull() == False])\n\nclf.grid_scores_" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
adolfoguimaraes/machinelearning
Introduction/Tutorial01_HelloWorld.ipynb
mit
[ "Tutorial 01 - Hello World em Aprendizagem de Máquina\nPara começar o nosso estudo de aprendizagem de máquina vamos começar com um exemplo simples de aprendizagem. O objetivo aqui é entender o que é Aprendizagem de Máquina e como podemos usá-la. Não serão apresentados detalhes dos métodos aplicados, eles serão explicados ao longo do curso. \nO material do curso será baseado no curso Intro to Machine Learning da Udacity e também no conteúdo de alguns livros:\n[1]: Inteligência Artificial. Uma Abordagem de Aprendizado de Máquina (FACELI et. al, 2011)\n[2]: Machine Learning: An Algorithmic Perspective, Second Edition (MARSLAND et. al, 2014)\n[3]: Redes Neurais Artificiais Para Engenharia e Ciências Aplicadas. Fundamentos Teóricos e Aspectos Práticos (da SILVA I., 2016)\n[4]: An Introduction to Statistical Learning with Applications in R (JAMES, G. et al, 2015) \nEm termos de linguagem de programação, usaremos o Python e as bibliotecas do ScikitLearn e do Tensorflow. Bibliotecas auxiliares como Pandas, NumPy, Scipy, MatPlotLib dentre outras também serão necessárias.\nO material dessa primeira aula é baseado em dois vídeos: \n\nHello World - Machine Learning Recipes #1 (by Josh Gordon - Google)\nVisualizing a Decision Tree - Machine Learning Recipes #2 (by Josh Gordon - Google)\n\nVamos Começar :)\nO primeiro passo é entender o que é Aprendizagem de Máquina (em inglês, Machine Learning). Uma definição que consta em [2] é a seguinte:\n\nMachine Learning, then, is about making computers modify or adapt their actions (whether theses actions are making predictions, or controlling a robot) so that these actions get more accurate, where accuracy is measured by how well the chosen actions reflect the correct ones.\n\nPodemos enxergar a aprendizagem de máquina como sendo um campo da Inteligência Artificial que visa prover os computadores a capacidade de modificar e adaptar as sua ações de acordo com o problema e, ao longo do processo, melhorar o seu desempenho. \nÉ nessa área que se encontra a base de sistemas que usamos no dia a dia como: \n\nSistemas de tradução automática\nSistemas de recomendação de filmes\nAssistentes pessoais como a Siri\n\nDentre tantas outras aplicações que serão detalhadas ao longo do curso.\nTodos esses sistemas são possíveis graças a um amplo estudo de uma série de algoritmos que compoõe a aprendizagem de máquina. Existem disversas formas de classficiar esse conjunto de algoritmos. Uma forma simples é dividi-los em 4 grupos. Citando [2], temos:\n\n\n\nAprendizado Supervisionado (Supervised Learning): A training set of examples with the correct responses (targets) is provided and, based on this training set, the algorithm generalises to respond correctly to all possible inputs. This also colled learning from exemplars.\n\n\nAprendizado Não-Supervisionado (Unsupervised Learning): Correct responses are not provided, but instead the algorithm tries to identify similarities between the inputs so that inputs that have something in common are categorised together. The statistical approach to unsupervised learning is known as density estimation.\n\n\nAprendizado por Reforço (Reinforcement Learning): This is somewhere between supervised and unsupervised learning. The algortithm gets told when the answer is wrong, but dows not get told how to correct it. It has to explore and try out different possibilities until it works out how to get the answer right. 
Reinforcement learning is sometime called learning with a critic because of this monitor that scores the answer, but does not suggest improvements.\n\n\nAprendizado Evolucionário (Evolutionary Learning): Biological evolution can be seen as a learning process: biological organisms adapt to improve their survival rates and chance of having offspring in their environment. We'll look at how we can model this in a computer, using an idea of fitness, which corresponds to a score for how good the current solution is.\n\n\n\nNeste curso iremos explorar alguns dos principais algoritmos de cada um dos grupos.\nHello World\nPara começar vamos tentar entender um pouco de como funciona o processo que será tratado nos algoritmos com uma tarefa simples de classificação. A classificação é uma das técnicas de aprendizado supervisionado e consiste em dado um conjunto de dados, você deve classificar cada instância deste conjunto em uma classe. Isso será tema do próximo tutorial e será melhor detalhado. \nPara simplificar, imagine a seguinte tarefa: desejo construir um programa que classifica laranjas e maças. Para entender o problema, assista: https://www.youtube.com/watch?v=cKxRvEZd3Mw\nÉ fácil perceber que não podemos simplesmente programar todas as variações de características que temos em relação à maças e laranjas. No entanto, podemos aprender padrões que caracterizam uma maça e uma laranja. Se uma nova fruta for passada ao programa, a presença ou não desses padrões permitirá classifica-la em maça, laranja ou outra fruta.\nVamos trabalhar com uma base de dados de exemplo que possui dados coletados com características de laranjas e maças. Para simplificar, vamos trabalhar com duas características: peso e textura. Em aprendizagem de máquina, as caracterísicas que compõe nosso conjunto de dados são chamadas de features. \nPeso | Textura | Classe (label)\n------------ | ------------- | -------------\n150g | Irregular | Laranja\n170g | Irregular | Laranja\n140g | Suave | Maçã\n130g | Suave | Maçã\nCada linha da nossa base de dados é chamada de instância (examples). Cada exemplo é classificado de acordo com um label ou classe. Nesse caso, iremos trabalhar com duas classes que são os tipos de frutas.\nToda a nossa tabela são os dados de treinamento. Entenda esses dados como aqueles que o nosso programa irá usar para aprender. De forma geral e bem simplificada, quanto mais dados nós tivermos, melhor o nosso programa irá aprender. \nVamos simular esse problema no código.", "# Vamos transformar as informações textuais em números: (0) irregular, (1) Suave.\n# Os labels também serão transformados em números: (0) Maçã e (1) Laranja\n\nfeatures = [[140, 1], [130, 1], [150, 0], [170, 0]]\nlabels = [0, 0, 1, 1]", "Vamos agora criar um modelo baseado nesse conjunto de dados. Vamos utilizar o algoritmo de árvore de decisão para fazer isso.", "from sklearn import tree \nclf = tree.DecisionTreeClassifier()", "clf consiste no classificador baseado na árvore de decisão. Precisamos treina-lo com o conjunto da base de dados de treinamento.", "clf = clf.fit(features, labels)", "Observer que o classificador recebe com parâmetro as features e os labels. Esse classificador é um tipo de classificador supervisionado, logo precisa conhecer o \"gabarito\" das instâncias que estão sendo passadas.\nUma vez que temos o modelo construído, podemos utiliza-lo para classificar uma instância desconhecida.", "# Peso 160 e Textura Irregular. 
Observe que esse tipo de fruta não está presente na base de dados.\nprint(clf.predict([[160, 0]]))", "Ele classificou essa fruta como sendo uma Laranja.\nHelloWorld++\nVamos estender um pouco mais esse HelloWorld. Claro que o exemplo anterior foi só para passar a idéia de funcionamento de um sistema desse tipo. No entanto, o nosso programa não está aprendendo muita coisa já que a quantidade de exemplos passada para ele é muito pequena. Vamos trabalhar com um exemplo um pouco maior.\nPara esse exemplo, vamos utilizar o Iris Dataset. Esse é um clássico dataset utilizado na aprendizagem de máquina. Ele tem o propósito mais didático e a tarefa é classificar 3 espécies de um tipo de flor (Iris). A classificação é feita a partir de 4 características da planta: sepal length, sepal width, petal length e petal width. \n<img src=\"http://5047-presscdn.pagely.netdna-cdn.com/wp-content/uploads/2015/04/iris_petal_sepal.png\" />\nAs flores são classificadas em 3 tipos: Iris Setosa, Iris Versicolor e Iris Virginica.\nVamos para o código ;)\nO primeiro passo é carregar a base de dados. Os arquivos desta base estão disponíveis no UCI Machine Learning Repository. No entanto, como é uma base bastante utilizada, o ScikitLearn permite importá-la diretamente da biblioteca.", "from sklearn.datasets import load_iris\n\ndataset_iris = load_iris()", "Imprimindo as características:", "print(dataset_iris.feature_names)", "Imprimindo os labels:", "print(dataset_iris.target_names)", "Imprimindo os dados:", "print(dataset_iris.data)\n\n# Nessa lista, 0 = setosa, 1 = versicolor e 2 = verginica\nprint(dataset_iris.target)", "Antes de continuarmos, vale a pena mostrar que o Scikit-Learn exige alguns requisitos para se trabalhar com os dados. Esse tutorial não tem como objetivo fazer um estudo detalhado da biblioteca, mas é importante tomar conhecimento de tais requisitos para entender alguns exemplos que serão mostrados mais à frente. São eles:\n\nAs features e os labels devem ser armazenados em objetos distintos\nAmbos devem ser numéricos\nAmbos devem ser representados por uma Array Numpy\nAmbos devem ter tamanhos específicos\n\nVamos ver estas informações no Iris-dataset.", "# Verifique os tipos das features e das classes\nprint(type(dataset_iris.data))\nprint(type(dataset_iris.target))\n\n# Verifique o tamanho das features (primeira dimensão = numero de instâncias, segunda dimensão = número de atributos)\nprint(dataset_iris.data.shape)\n\n# Verifique o tamanho dos labels\nprint(dataset_iris.target.shape)", "Quando importamos a base diretamente do ScikitLearn, as features e labels já vieram em objetos distintos. Só por questão de simplificação dos nomes, vou renomeá-los.", "X = dataset_iris.data\nY = dataset_iris.target", "Construindo e testando um modelo de treinamento\nUma vez que já temos nossa base de dados, o próximo passo é construir nosso modelo de aprendizagem de máquina capaz de utilizar o dataset. No entanto, antes de construirmos nosso modelo é preciso saber qual modelo desenvolver e para isso precisamos definir qual o nosso propósito na tarefa de treinamento.\nExistem vários tipos de tarefas dentro da aprendizagem de máquina. Como dito anteriormente, vamos trabalhar com a tarefa de classificação. A classificação consiste em criar um modelo a partir de dados que estejam de alguma forma classificados. 
O modelo gerado é capaz de determinar qual classe uma instância pertence a partir dos dados que foram dados como entrada.\nNa apresentação do dataset da Iris vimos que cada instância é classificada com um tipo (no caso, o tipo da espécie a qual a planta pertence). Sendo assim, vamos tratar esse problema como um problema de classificação. Existem outras tarefas dentro da aprendizagem de máquina, como: clusterização, agrupamento, dentre outras. Mais detalhes de cada uma deles serão apresentados na aula de aprendizagem de máquina.\nO passo seguinte é construir o modelo. Para tal, vamos seguir 4 passos:\n\nPasso 1: Importar o classificador que deseja utilizar\nPasso 2: Instanciar o modelo\nPasso 3: Treinar o modelo\nPasso 4: Fazer predições para novos valores\n\nNessa apresentação, vamos continuar utilizando o modelo de Árvore de Decisão. O fato de usá-la nesta etapa é que é fácil visualizar o que o modelo está fazendo com os dados. \nPara nosso exemplo, vamos treinar o modelo com um conjunto de dados e, em seguida, vamos testá-lo com um conjunto de dados que não foram utilizados para treinar. Para isso, vamos retirar algumas instâncias da base de treinamento e usá-las posteriormente para testá-la. Vamos chamar isso de dividir a base em base de treino e base de teste. É fácil perceber que não faz sentido testarmos nosso modelo com um padrão que ele já conhece. Por isso, faz-se necessária essa separação.", "import numpy as np\n\n# Determinando os índices que serão retirados da base de treino para formar a base de teste\n\ntest_idx = [0, 50, 100] # as instâncias 0, 50 e 100 da base de dados\n\n# Criando a base de treino \n\ntrain_target = np.delete(dataset_iris.target, test_idx)\ntrain_data = np.delete(dataset_iris.data, test_idx, axis=0)\n\n# Criando a base de teste\ntest_target = dataset_iris.target[test_idx]\ntest_data = dataset_iris.data[test_idx]\n\nprint(\"Tamanho dos dados originais: \", dataset_iris.data.shape) #np.delete não modifica os dados originais\nprint(\"Tamanho do treinamento: \", train_data.shape) \nprint(\"Tamanho do teste: \", test_data.shape)", "Agora que já temos nosso dataset separado, vamos criar o classificador e treina-lo com os dados de treinamento.", "clf = tree.DecisionTreeClassifier()\nclf.fit(train_data, train_target)", "O classificador foi treinado, agora vamos utiliza-lo para classificar as instâncias da base de teste.", "print(clf.predict(test_data))", "Como estamos trabalhando com o aprendizado supervisionado, podemos comparar com o target que já conhecemos da base de teste.", "print(test_target)", "Observe que, neste caso, nosso classificador teve uma acurácia de 100% acertando todas as instâncias informadas. Claro que esse é só um exemplo e normalmente trabalhamos com valores de acurácias menores que 100%. No entanto, vale ressaltar que para algumas tarefas, como reconhecimento de imagens, as taxas de acurácias estão bem próximas de 100%.\nVisualizando nosso modelo\nA vantagem em se trablhar com a árvore de decisão é que podemos visualizar exatamente o que modelo faz. De forma geral, uma árvore de decisão é uma árvore que permite serparar o conjunto de dados. Cada nó da árvore é \"uma pergunta\" que direciona aquela instância ao longo da árvore. Nos nós folha da árvore se encontram as classes. Esse tipo de modelo será mais detalhado mais a frente no nosso curso. 
\nPara isso, vamos utilizar um código que visualiza a árvore gerada.", "from IPython.display import Image \nimport pydotplus\n\ndot_data = tree.export_graphviz(clf, out_file=None, \n feature_names=dataset_iris.feature_names, \n class_names=dataset_iris.target_names, \n filled=True, rounded=True, \n special_characters=True) \ngraph = pydotplus.graph_from_dot_data(dot_data) \nImage(graph.create_png(), width=800) ", "Observe que nos nós internos pergunta sim ou não para alguma característica. Por exemplo, no nó raiz a pergunta é \"pedal width é menor ou igual a 0.8\". Isso significa que se a instância que estou querendo classificar possui pedal width menor que 0.8 ela será classificada como setosa. Se isso não for true ela será redirecionada para outro nó que irá analisar outra característica. Esse processo continua até que consiga atingir um nó folha. Como execício faça a classificação, acompahando na tabela, para as instâncias de testes.", "print(test_data)\nprint(test_target)", "Vale ressaltar que essa árvore, que é nosso modelo, foi construído a partir do conjunto de dados que foi passado na etapa de treinamento. Se mudamos nosso conjunto de dados, uma nova representação do módelo, ou seja, uma nova árvore será criada. \nCom isso chegamos ao final do nosso HelloWorld em aprendizagem de máquina. A partir dos próximos tutoriais vamos começar a detalhar os diversos algoritmos para as mais distintas tarefas de aprendizagem. Para começar, iremos trabalhar com os modelos supervisionados.\nAté o próximo tutorial ;)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb
apache-2.0
[ "# Copyright 2022 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex Training: Distributed Hyperparameter Tuning\n<table align=\"left\">\n\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb\"\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n <td> <td>\n <a href=\"https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb\">\n <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\">\nOpen in Vertex AI Workbench\n </a>\n </td>\n</table>\n\nOverview\nThis notebook demonstrates how to run a hyperparameter tuning job with Vertex Training to discover optimal hyperparameter values for an ML model. To speed up the training process, MirroredStrategy from the tf.distribute module is used to distribute training across multiple GPUs on a single machine.\nDataset\nThe dataset used for this tutorial is the horses or humans dataset from TensorFlow Datasets. The trained model predicts if an image is of a horse or a human.\nObjective\nIn this notebook, you create a custom-trained model from a Python script in a Docker container. You learn how to modify training application code for hyperparameter tuning and submit a Vertex Training hyperparameter tuning job with the Python SDK.\nThe steps performed include:\n\nCreate a Vertex AI custom job for training a model.\nLaunch hyperparameter tuning job with the Python SDK.\nCleanup resources.\n\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebooks, your environment already meets\nall the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements.\nYou need the following:\n\nThe Google Cloud SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Google Cloud guide to Setting up a Python development\nenvironment and the Jupyter\ninstallation guide provide detailed instructions\nfor meeting these requirements. 
The following steps provide a condensed set of\ninstructions:\n\n\nInstall and initialize the Cloud SDK.\n\n\nInstall Python 3.\n\n\nInstall\n virtualenv\n and create a virtual environment that uses Python 3. Activate the virtual environment.\n\n\nTo install Jupyter, run pip3 install jupyter on the\ncommand-line in a terminal shell.\n\n\nTo launch Jupyter, run jupyter notebook on the command-line in a terminal shell.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstall additional packages\nInstall the latest version of Vertex SDK for Python.", "import os\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# Google Cloud Notebook requires dependencies to be installed with '--user'\nUSER_FLAG = \"\"\nif IS_GOOGLE_CLOUD_NOTEBOOK:\n USER_FLAG = \"--user\"\n\n! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform", "Restart the kernel\nAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.", "# Automatically restart kernel after installs\nimport os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nSelect a GPU runtime\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select \"Runtime --> Change runtime type > GPU\"\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex AI API and Compute Engine API.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.\nSet your project ID\nIf you don't know your project ID, you may be able to get your project ID using gcloud.", "import os\n\nPROJECT_ID = \"\"\n\n# Get your Google Cloud project ID from gcloud\nif not os.getenv(\"IS_TESTING\"):\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID: \", PROJECT_ID)", "Otherwise, set your project ID here.", "if PROJECT_ID == \"\" or PROJECT_ID is None:\n PROJECT_ID = \"\" # @param {type:\"string\"}", "Set project ID", "! gcloud config set project $PROJECT_ID", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebooks, your environment is already\nauthenticated. 
Skip this step.\nIf you are using Colab, run the cell below and follow the instructions\nwhen prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\n\n\nIn the Cloud Console, go to the Create service account key\n page.\n\n\nClick Create service account.\n\n\nIn the Service account name field, enter a name, and\n click Create.\n\n\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex AI\"\ninto the filter box, and select\n Vertex AI Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\n\n\nClick Create. A JSON file that contains your key downloads to your\nlocal environment.\n\n\nEnter the path to your service account key as the\nGOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# If on Google Cloud Notebooks, then don't execute this code\nif not IS_GOOGLE_CLOUD_NOTEBOOK:\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you submit a custom training job using the Cloud SDK, you will need to provide a staging bucket.\nSet the name of your Cloud Storage bucket below. It must be unique across all\nCloud Storage buckets.\nYou may also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Make sure to choose a region where Vertex AI services are\navailable. You may\nnot use a Multi-Regional Storage bucket for training with Vertex AI.", "BUCKET_URI = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\nREGION = \"us-central1\" # @param {type:\"string\"}\n\nif BUCKET_URI == \"\" or BUCKET_URI is None or BUCKET_URI == \"gs://[your-bucket-name]\":\n BUCKET_URI = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP\n\nprint(BUCKET_URI)", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_URI", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al $BUCKET_URI", "Import libraries and define constants", "import os\nimport sys\n\nfrom google.cloud import aiplatform\nfrom google.cloud.aiplatform import hyperparameter_tuning as hpt", "Write Dockerfile\nThe first step in containerizing your code is to create a Dockerfile. In the Dockerfile, you'll include all the commands needed to run the image such as installing the necessary libraries and setting up the entry point for the training code.\nThis Dockerfile uses the Deep Learning Container TensorFlow Enterprise 2.5 GPU Docker image. The Deep Learning Containers on Google Cloud come with many common ML and data science frameworks pre-installed. 
After downloading that image, this Dockerfile installs the CloudML Hypertune library and sets up the entrypoint for the training code.", "%%writefile Dockerfile\n\nFROM gcr.io/deeplearning-platform-release/tf2-gpu.2-5\nWORKDIR /\n\n# Installs hypertune library\nRUN pip install cloudml-hypertune\n\n# Copies the trainer code to the docker image.\nCOPY trainer /trainer\n\n# Sets up the entry point to invoke the trainer.\nENTRYPOINT [\"python\", \"-m\", \"trainer.task\"]", "Create training application code\nNext, you create a trainer directory with a task.py script that contains the code for your training application.", "# Create trainer directory\n\n! mkdir trainer", "In the next cell, you write the contents of the training script, task.py. This file downloads the horses or humans dataset from TensorFlow datasets and trains a tf.keras functional model using MirroredStrategy from the tf.distribute module.\nThere are a few components that are specific to using the hyperparameter tuning service:\n\nThe script imports the hypertune library. Note that the Dockerfile included instructions to pip install the hypertune library.\nThe function get_args() defines a command-line argument for each hyperparameter you want to tune. In this example, the hyperparameters that will be tuned are the learning rate, the momentum value in the optimizer, and the number of units in the last hidden layer of the model. The value passed in those arguments is then used to set the corresponding hyperparameter in the code.\nAt the end of the main() function, the hypertune library is used to define the metric to optimize. In this example, the metric that will be optimized is the the validation accuracy. This metric is passed to an instance of HyperTune.", "%%writefile trainer/task.py\n\nimport argparse\nimport hypertune\nimport tensorflow as tf\nimport tensorflow_datasets as tfds\n\ndef get_args():\n \"\"\"Parses args. 
Must include all hyperparameters you want to tune.\"\"\"\n\n parser = argparse.ArgumentParser()\n parser.add_argument(\n '--learning_rate', required=True, type=float, help='learning rate')\n parser.add_argument(\n '--momentum', required=True, type=float, help='SGD momentum value')\n parser.add_argument(\n '--units',\n required=True,\n type=int,\n help='number of units in last hidden layer')\n parser.add_argument(\n '--epochs',\n required=False,\n type=int,\n default=10,\n help='number of training epochs')\n args = parser.parse_args()\n return args\n\n\ndef preprocess_data(image, label):\n \"\"\"Resizes and scales images.\"\"\"\n\n image = tf.image.resize(image, (150, 150))\n return tf.cast(image, tf.float32) / 255., label\n\n\ndef create_dataset(batch_size):\n \"\"\"Loads Horses Or Humans dataset and preprocesses data.\"\"\"\n\n data, info = tfds.load(\n name='horses_or_humans', as_supervised=True, with_info=True)\n\n # Create train dataset\n train_data = data['train'].map(preprocess_data)\n train_data = train_data.shuffle(1000)\n train_data = train_data.batch(batch_size)\n\n # Create validation dataset\n validation_data = data['test'].map(preprocess_data)\n validation_data = validation_data.batch(64)\n\n return train_data, validation_data\n\n\ndef create_model(units, learning_rate, momentum):\n \"\"\"Defines and compiles model.\"\"\"\n\n inputs = tf.keras.Input(shape=(150, 150, 3))\n x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(inputs)\n x = tf.keras.layers.MaxPooling2D((2, 2))(x)\n x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(x)\n x = tf.keras.layers.MaxPooling2D((2, 2))(x)\n x = tf.keras.layers.Conv2D(64, (3, 3), activation='relu')(x)\n x = tf.keras.layers.MaxPooling2D((2, 2))(x)\n x = tf.keras.layers.Flatten()(x)\n x = tf.keras.layers.Dense(units, activation='relu')(x)\n outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)\n model = tf.keras.Model(inputs, outputs)\n model.compile(\n loss='binary_crossentropy',\n optimizer=tf.keras.optimizers.SGD(\n learning_rate=learning_rate, momentum=momentum),\n metrics=['accuracy'])\n return model\n\n\ndef main():\n args = get_args()\n\n # Create Strategy\n strategy = tf.distribute.MirroredStrategy()\n\n # Scale batch size\n GLOBAL_BATCH_SIZE = 64 * strategy.num_replicas_in_sync \n train_data, validation_data = create_dataset(GLOBAL_BATCH_SIZE)\n\n # Wrap model variables within scope\n with strategy.scope():\n model = create_model(args.units, args.learning_rate, args.momentum)\n\n # Train model\n history = model.fit(\n train_data, epochs=args.epochs, validation_data=validation_data)\n\n # Define Metric\n hp_metric = history.history['val_accuracy'][-1]\n\n hpt = hypertune.HyperTune()\n hpt.report_hyperparameter_tuning_metric(\n hyperparameter_metric_tag='accuracy',\n metric_value=hp_metric,\n global_step=args.epochs)\n\n\nif __name__ == '__main__':\n main()", "Build the Container\nIn the next cells, you build the container and push it to Google Container Registry.", "# Set the IMAGE_URI\nIMAGE_URI = f\"gcr.io/{PROJECT_ID}/horse-human:hypertune\"\n\n# Build the docker image\n! docker build -f Dockerfile -t $IMAGE_URI ./\n\n# Push it to Google Container Registry:\n! docker push $IMAGE_URI", "Create and run hyperparameter tuning job on Vertex AI\nOnce your container is pushed to Google Container Registry, you use the Vertex SDK to create and run the hyperparameter tuning job.\nYou define the following specifications:\n* worker_pool_specs: Dictionary specifying the machine type and Docker image. 
This example defines a single node cluster with one n1-standard-4 machine with two NVIDIA_TESLA_T4 GPUs.\n* parameter_spec: Dictionary specifying the parameters to optimize. The dictionary key is the string assigned to the command line argument for each hyperparameter in your training application code, and the dictionary value is the parameter specification. The parameter specification includes the type, min/max values, and scale for the hyperparameter.\n* metric_spec: Dictionary specifying the metric to optimize. The dictionary key is the hyperparameter_metric_tag that you set in your training application code, and the value is the optimization goal.", "worker_pool_specs = [\n {\n \"machine_spec\": {\n \"machine_type\": \"n1-standard-4\",\n \"accelerator_type\": \"NVIDIA_TESLA_T4\",\n \"accelerator_count\": 2,\n },\n \"replica_count\": 1,\n \"container_spec\": {\"image_uri\": IMAGE_URI},\n }\n]\n\nmetric_spec = {\"accuracy\": \"maximize\"}\n\nparameter_spec = {\n \"learning_rate\": hpt.DoubleParameterSpec(min=0.001, max=1, scale=\"log\"),\n \"momentum\": hpt.DoubleParameterSpec(min=0, max=1, scale=\"linear\"),\n \"units\": hpt.DiscreteParameterSpec(values=[64, 128, 512], scale=None),\n}", "Create a CustomJob.", "print(BUCKET_URI)\n\n# Create a CustomJob\n\nJOB_NAME = \"horses-humans-hyperparam-job\" + TIMESTAMP\n\nmy_custom_job = aiplatform.CustomJob(\n display_name=JOB_NAME,\n project=PROJECT_ID,\n worker_pool_specs=worker_pool_specs,\n staging_bucket=BUCKET_URI,\n)", "Then, create and run a HyperparameterTuningJob.\nThere are a few arguments to note:\n\n\nmax_trial_count: Sets an upper bound on the number of trials the service will run. The recommended practice is to start with a smaller number of trials and get a sense of how impactful your chosen hyperparameters are before scaling up.\n\n\nparallel_trial_count: If you use parallel trials, the service provisions multiple training processing clusters. The worker pool spec that you specify when creating the job is used for each individual training cluster. Increasing the number of parallel trials reduces the amount of time the hyperparameter tuning job takes to run; however, it can reduce the effectiveness of the job overall. This is because the default tuning strategy uses results of previous trials to inform the assignment of values in subsequent trials.\n\n\nsearch_algorithm: The available search algorithms are grid, random, or default (None). The default option applies Bayesian optimization to search the space of possible hyperparameter values and is the recommended algorithm.", "# Create and run HyperparameterTuningJob\n\nhp_job = aiplatform.HyperparameterTuningJob(\n display_name=JOB_NAME,\n custom_job=my_custom_job,\n metric_spec=metric_spec,\n parameter_spec=parameter_spec,\n max_trial_count=15,\n parallel_trial_count=3,\n project=PROJECT_ID,\n search_algorithm=None,\n)\n\nhp_job.run()", "Click on the generated link in the output to see your run in the Cloud Console. When the job completes, you will see the results of the tuning trials.\n\nCleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:", "# Set this to true only if you'd like to delete your bucket\ndelete_bucket = False\n\nif delete_bucket or os.getenv(\"IS_TESTING\"):\n ! gsutil rm -r $BUCKET_URI" ]
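The notebook above ends at `hp_job.run()` and points to the Cloud Console to read off the results. As a complement, here is a minimal, hedged sketch of inspecting the finished trials programmatically. It assumes (this is not shown in the original notebook) that the Vertex SDK exposes completed trials through `hp_job.trials`, each carrying `parameters` and a `final_measurement` for the reported metric; treat the attribute names as assumptions to verify against your SDK version.

```python
# Hedged sketch, not part of the original notebook: pick the best trial after
# hp_job.run() completes. Assumes hp_job.trials returns the completed trials,
# each with `parameters` and a `final_measurement` holding the 'accuracy'
# metric reported by the hypertune library in trainer/task.py.
best_trial = max(
    hp_job.trials,
    key=lambda trial: trial.final_measurement.metrics[0].value,
)

print("Best trial id:", best_trial.id)
print("Final accuracy:", best_trial.final_measurement.metrics[0].value)
for parameter in best_trial.parameters:
    print(f"  {parameter.parameter_id} = {parameter.value}")
```

Because `metric_spec` asks the service to maximize accuracy, taking the maximum over the trials' final measurements selects the winning hyperparameter configuration.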
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
florianwittkamp/FD_ACOUSTIC
JupyterNotebook/1D/FD_1D_DX8_DT2.ipynb
gpl-3.0
[ "FD_1D_DX8_DT2 1-D acoustic Finite-Difference modelling\nGNU General Public License v3.0\nAuthor: Florian Wittkamp\nFinite-Difference acoustic seismic wave simulation\nDiscretization of the first-order acoustic wave equation\nTemporal second-order accuracy $O(\\Delta T^2)$\nSpatial fourth-order accuracy $O(\\Delta X^8)$\nInitialisation", "%matplotlib inline\nimport numpy as np\nimport time as tm\nimport matplotlib.pyplot as plt", "Input Parameter", "# Discretization\nc1=20 # Number of grid points per dominant wavelength\nc2=0.5 # CFL-Number\nnx=2000 # Number of grid points\nT=10 # Total propagation time\n\n# Source Signal\nf0= 10 # Center frequency Ricker-wavelet\nq0= 1 # Maximum amplitude Ricker-Wavelet\nxscr = 100 # Source position (in grid points)\n\n# Receiver\nxrec1=400 # Position Reciever 1 (in grid points)\nxrec2=800 # Position Reciever 2 (in grid points)\nxrec3=1800 # Position Reciever 3 (in grid points)\n\n# Velocity and density\nmodell_v = np.hstack((1000*np.ones((int(nx/2))),1500*np.ones((int(nx/2)))))\nrho=np.hstack((1*np.ones((int(nx/2))),1.5*np.ones((int(nx/2)))))", "Preparation", "# Init wavefields\nvx=np.zeros(nx)\np=np.zeros(nx)\n\n# Calculate first Lame-Paramter\nl=rho * modell_v * modell_v\n\ncmin=min(modell_v.flatten()) # Lowest P-wave velocity\ncmax=max(modell_v.flatten()) # Highest P-wave velocity\nfmax=2*f0 # Maximum frequency\ndx=cmin/(fmax*c1) # Spatial discretization (in m)\ndt=dx/(cmax)*c2 # Temporal discretization (in s)\nlampda_min=cmin/fmax # Smallest wavelength\n\n# Output model parameter:\nprint(\"Model size: x:\",dx*nx,\"in m\")\nprint(\"Temporal discretization: \",dt,\" s\")\nprint(\"Spatial discretization: \",dx,\" m\")\nprint(\"Number of gridpoints per minimum wavelength: \",lampda_min/dx)", "Create space and time vector", "x=np.arange(0,dx*nx,dx) # Space vector\nt=np.arange(0,T,dt) # Time vector\nnt=np.size(t) # Number of time steps\n\n# Plotting model\nfig, (ax1, ax2) = plt.subplots(1, 2)\nfig.subplots_adjust(wspace=0.4,right=1.6)\nax1.plot(x,modell_v)\nax1.set_ylabel('VP in m/s')\nax1.set_xlabel('Depth in m')\nax1.set_title('P-wave velocity')\n\nax2.plot(x,rho)\nax2.set_ylabel('Density in g/cm^3')\nax2.set_xlabel('Depth in m')\nax2.set_title('Density');", "Source signal - Ricker-wavelet", "tau=np.pi*f0*(t-1.5/f0)\nq=q0*(1.0-2.0*tau**2.0)*np.exp(-tau**2)\n\n# Plotting source signal\nplt.figure(3)\nplt.plot(t,q)\nplt.title('Source signal Ricker-Wavelet')\nplt.ylabel('Amplitude')\nplt.xlabel('Time in s')\nplt.draw()", "Time stepping", "# Init Seismograms\nSeismogramm=np.zeros((3,nt)); # Three seismograms\n\n# Calculation of some coefficients\ni_dx=1.0/(dx)\nkx=np.arange(5,nx-4)\n\nprint(\"Starting time stepping...\")\n## Time stepping\nfor n in range(2,nt):\n\n # Inject source wavelet\n p[xscr]=p[xscr]+q[n]\n\n # Update velocity\n for kx in range(6,nx-5):\n\n # Calculating spatial derivative\n p_x=i_dx*(1225.0/1024.0)*(p[kx+1]-p[kx])+i_dx*(-245.0/3072.0)*(p[kx+2]-p[kx-1])+i_dx*(49.0/5120.0)*(p[kx+3]-p[kx-2])+i_dx*(-5.0/7168.0)*(p[kx+4]-p[kx-3])\n\n # Update velocity\n vx[kx]=vx[kx]-dt/rho[kx]*p_x\n\n # Update pressure\n for kx in range(6,nx-5):\n\n # Calculating spatial derivative\n vx_x=i_dx*(1225.0/1024.0)*(vx[kx]-vx[kx-1])+i_dx*(-245.0/3072.0)*(vx[kx+1]-vx[kx-2])+i_dx*(49.0/5120.0)*(vx[kx+2]-vx[kx-3])+i_dx*(-5.0/7168.0)*(vx[kx+3]-vx[kx-4])\n\n # Update pressure\n p[kx]=p[kx]-l[kx]*dt*(vx_x);\n\n # Save seismograms\n Seismogramm[0,n]=p[xrec1]\n Seismogramm[1,n]=p[xrec2]\n Seismogramm[2,n]=p[xrec3]\n \nprint(\"Finished time stepping!\")", "Save 
seismograms", "## Save seismograms\nnp.save(\"Seismograms/FD_1D_DX8_DT2\",Seismogramm)\n\n## Plot seismograms\nfig, (ax1, ax2, ax3) = plt.subplots(3, 1)\nfig.subplots_adjust(hspace=0.4,right=1.6, top = 2 )\n\nax1.plot(t,Seismogramm[0,:])\nax1.set_title('Seismogram 1')\nax1.set_ylabel('Amplitude')\nax1.set_xlabel('Time in s')\nax1.set_xlim(0, T)\n\nax2.plot(t,Seismogramm[1,:])\nax2.set_title('Seismogram 2')\nax2.set_ylabel('Amplitude')\nax2.set_xlabel('Time in s')\nax2.set_xlim(0, T)\n\nax3.plot(t,Seismogramm[2,:])\nax3.set_title('Seismogram 3')\nax3.set_ylabel('Amplitude')\nax3.set_xlabel('Time in s')\nax3.set_xlim(0, T);\n" ]
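The time-stepping cell above evaluates the spatial stencil point by point in pure Python, which becomes slow for large `nx`. Below is a vectorized sketch of the same pressure-derivative computation using NumPy slicing. This is an addition, not part of the original notebook; the stencil coefficients are copied verbatim from the loop above, and the helper name `pressure_derivative` is introduced here for illustration only.

```python
# Hedged sketch (not in the original notebook): the spatial derivative of p used
# in the velocity update, computed with NumPy slices instead of a Python loop.
# Coefficients and the interior range kx = 6 .. nx-6 match the loop above.
import numpy as np

def pressure_derivative(p, i_dx):
    nx = p.size
    d = np.zeros(nx)
    d[6:nx-5] = i_dx * (
        (1225.0 / 1024.0) * (p[7:nx-4] - p[6:nx-5])
        + (-245.0 / 3072.0) * (p[8:nx-3] - p[5:nx-6])
        + (49.0 / 5120.0) * (p[9:nx-2] - p[4:nx-7])
        + (-5.0 / 7168.0) * (p[10:nx-1] - p[3:nx-8])
    )
    return d

# Inside the time loop, this would replace the inner kx loop of the velocity update:
# d = pressure_derivative(p, i_dx)
# vx[6:nx-5] = vx[6:nx-5] - dt / rho[6:nx-5] * d[6:nx-5]
```

The same slicing pattern applies to the `vx` derivative in the pressure update; the result should match the explicit loop, only computed much faster.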
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
lwcook/horsetail-matching
notebooks/MixedUncertainties.ipynb
mit
[ "In this notebook we show the capability for horsetail matching to deal with mixed uncertainties. Mixed uncertainties refers to the case where some of the uncertain parameters cannot be assigned probability distributions (for example, they are due to a lack of knowledge, and so we have no data on which to base a distribution) and are instead represented with an interval.\nAs before, we start by importing the modules we need and creating uncertain parameters. \nHorsetail matching allows uncertainties to be defined in two ways...\nThe simplest is using the built-in UncertainParameter class and child classes.\nIf more flexibility is required, then a parameter can also be described by a function that returns a set of samples of this uncertain parameter.\nThese alternatives are illustrated below:", "import matplotlib.pyplot as plt\nimport numpy as np\nimport os\n\nfrom horsetailmatching import UniformParameter, IntervalParameter, HorsetailMatching\n\nu_prob = UniformParameter(lower_bound=-1, upper_bound=1)\n\nn_samples = 500\ndef u_prob_alternative():\n return np.random.uniform(-1, 1, n_samples)\n\nu_int = IntervalParameter(lower_bound=-1, upper_bound=1)", "Then we can set up the horsetail matching object, using TP2 from the demo problems as our quantity of interest. Recall this is a function that takes two inputs: values of the design variables, x, and values of the uncertain parameters, u, and returns the quantity of interest, q. \nInterval uncertainties are given as the third argument to a horsetail matching object, or through the int_uncertainties keyword. So the following two objects are equivalent:", "from horsetailmatching.demoproblems import TP2\n\ndef my_target(h): \n return 1\n\ntheHM = HorsetailMatching(TP2, u_prob, u_int, \n ftarget=(my_target, my_target), samples_prob=n_samples, samples_int=50)\n\ntheHM = HorsetailMatching(TP2, prob_uncertainties=[u_prob_alternative], int_uncertainties=[u_int], \n ftarget=(my_target, my_target), samples_prob=n_samples, samples_int=50)", "Note that under mixed uncertainties we can set separate targets for the upper and lower bounds on the CDF (the two horsetail curves) by passing a tuple of (target_for_upper_bound, target_for_lower_bound) to the ftarget argument.\nNote also that here we specified the number of samples to take from the probabilistic uncertainties and how many to take from the interval uncertainties using the arguments samples_prob and samples_int.
A nested structure is used to evaluate the metric under mixed uncertainties, and so the total number of samples taken will be (samples_prob) x (samples_int).\nIf specifying uncertainties using a sampling function, the number of samples returned by this function needs to be the same as the number specified in the samples_prob attribute.\nWe can use the getHorsetail() method to obtain the horsetail plot curves, which can then be plotted using matplotlib.\nThis time, because we are dealing with mixed uncertainties, we get a CDF at each value of the sampled interval uncertainties (the third returned argument from getHorsetail() gives a list of these CDFs), and the envelope of these CDFs gives the upper and lower bounds - the horsetail plot - which is highlighted in blue here.", "print(theHM.evalMetric([2, 3]))\n\nupper, lower, CDFs = theHM.getHorsetail()\n(q1, h1, t1) = upper\n(q2, h2, t2) = lower\n\nfor CDF in CDFs:\n plt.plot(CDF[0], CDF[1], c='grey', lw=0.5)\nplt.plot(q1, h1, 'b')\nplt.plot(q2, h2, 'b')\nplt.plot(t1, h1, 'k--')\nplt.plot(t2, h2, 'k--')\nplt.xlim([0, 15])\nplt.ylim([0, 1])\nplt.xlabel('Quantity of Interest')\nplt.show()", "Since this problem is highly non-linear, we obtain an interestingly shaped horsetail plot with CDFs that cross. Note that the target is plotted in dashed lines. Now to optimize the horsetail matching metric, we simply use the evalMetric method in an optimizer as before:", "from scipy.optimize import minimize\n\nsolution = minimize(theHM.evalMetric, x0=[1,1], method='Nelder-Mead')\nprint(solution)", "Now we can inspect the horsetail plot of the optimum design by using the getHorsetail method again:", "upper, lower, CDFs = theHM.getHorsetail()\n\nfor CDF in CDFs:\n plt.plot(CDF[0], CDF[1], c='grey', lw=0.5)\nplt.plot(upper[0], upper[1], 'r')\nplt.plot(lower[0], lower[1], 'r')\nplt.plot([theHM.ftarget[0](y) for y in upper[1]], upper[1], 'k--')\nplt.plot([theHM.ftarget[1](y) for y in lower[1]], lower[1], 'k--')\nplt.xlim([0, 15])\nplt.ylim([0, 1])\nplt.xlabel('Quantity of Interest')\nplt.show()", "You may have noticed that the optimization required a large number of evaluations to converge, and so takes some time to run. In the next notebook we will show how to utilize gradients to speed up the optimization: http://nbviewer.jupyter.org/github/lwcook/horsetail-matching/blob/master/notebooks/Gradients.ipynb\nFor other tutorials, please visit http://www-edc.eng.cam.ac.uk/aerotools/horsetailmatching/" ]
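As a quick check on the result above, the horsetail-matching metric at the Nelder-Mead optimum can be compared against the value at the initial design. This small sketch is an addition to the original tutorial; it only uses `theHM.evalMetric` as introduced above and `solution.x`, the optimum returned by `scipy.optimize.minimize`.

```python
# Hedged sketch, not part of the original tutorial: compare the metric at the
# starting design with the metric at the optimum found by the optimizer above.
for label, design in [("initial design [1, 1]", [1, 1]), ("optimum", solution.x)]:
    print(f"{label}: metric = {theHM.evalMetric(design)}")
```

A lower value means the horsetail curves sit closer to their targets, which is exactly the quantity the optimizer is driving down.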
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rohitbahl1986/TrafficSignClassifier
Traffic_Sign_Classifier.ipynb
mit
[ "Self-Driving Car Engineer Nanodegree\nDeep Learning\nProject: Build a Traffic Sign Recognition Classifier\nIn this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. \n\nNote: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \\n\",\n \"File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission. \n\nIn addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.\nThe rubric contains \"Stand Out Suggestions\" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the \"stand out suggestions\", you can include the code in this Ipython notebook and also discuss the results in the writeup file.\n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\n\nStep 0: Import Module And Load The Data", "# Import all the relevant modules.\nimport cv2\nimport csv\nimport matplotlib.image as mpimg\nimport matplotlib.mlab as mlab\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport pickle\nfrom random import randint\nimport seaborn as sns\nfrom sklearn.utils import shuffle\nimport tensorflow as tf\nfrom tensorflow.contrib.layers import flatten\n\n#Load the data\n\ntraining_file = \"train.p\"\nvalidation_file= \"valid.p\"\ntesting_file = \"test.p\"\n\nwith open(training_file, mode='rb') as file:\n train = pickle.load(file)\n\nwith open(validation_file, mode='rb') as file:\n valid = pickle.load(file)\n\nwith open(testing_file, mode='rb') as file:\n test = pickle.load(file)\n\nX_train_ori, y_train_ori = train['features'], train['labels']\n#Create a array large enough to hold the new agumented images \n#which will be created in the pre processing section\nX_train = np.empty((3*X_train_ori.shape[0],X_train_ori.shape[1],X_train_ori.shape[2],X_train_ori.shape[3]))\ny_train = np.empty((3*y_train_ori.shape[0]))\n\nX_valid, y_valid = valid['features'], valid['labels']\n\nX_test, y_test = test['features'], test['labels']\n", "Step 1: Dataset Summary & Exploration\nThe pickled data is a dictionary with 4 key/value pairs:\n\n'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).\n'labels' is a 1D array containing the label/class id of the traffic sign. 
The file signnames.csv contains id -> name mappings for each id.\n'sizes' is a list containing tuples, (width, height) representing the original width and height the image.\n'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES\n\nComplete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results. \nBasic Summary of the Data Set", "#Number of original training examples\nn_train_ori = X_train_ori.shape[0]\nprint(\"Number of original training examples =\", n_train_ori)\n\n# Number of training examples after image agumentation\nn_train = X_train.shape[0]\nprint(\"Number of training examples =\", n_train)\n\n# Number of validation examples\nn_validation = X_valid.shape[0]\nprint(\"Number of validation examples =\", n_validation)\n\n# Number of testing examples.\nn_test = X_test.shape[0]\nprint(\"Number of testing examples =\", n_test)\n\n# Shape of an traffic sign image\nimage_shape = X_train.shape[1:]\nprint(\"Image data shape =\", image_shape)\n\n# Unique classes/labels there are in the dataset.\nn_classes = len(set(y_train_ori))\nprint(\"Number of classes =\", n_classes)", "Include an exploratory visualization of the dataset", "### Data exploration visualization\n# Visualizations will be shown in the notebook.\n%matplotlib inline\n\ndef plotTrafficSign(n_rows, n_cols):\n \"\"\"\n This function displays random images from the trainign data set.\n \"\"\"\n fig, axes = plt.subplots(nrows = n_rows, ncols = n_cols, figsize=(60,30))\n for row in axes:\n for col in row:\n index = randint(0,n_train_ori)\n col.imshow(X_train_ori[index,:,:,:])\n col.set_title(y_train_ori[index])\n\n#Plot traffic signs for visualization\nplotTrafficSign(10, 5)\n\n#Plot distribution of data\nsns.distplot(y_train_ori, kde=False, bins=n_classes)\nsns.distplot(y_valid, kde=False, bins=n_classes)\nsns.distplot(y_test, kde=False, bins=n_classes)", "Histogram of the data shows that the trainign data is unevenly distributed. This might affect the training of CNN model.\nComparing the distribution across the 3 sets (training/validation/test), it seems that the distribution is similar in all the sets.\n\nStep 2: Design and Test a Model Architecture\nDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.\nThe LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! \nWith the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. \nThere are various aspects to consider when thinking about this problem:\n\nNeural network architecture (is the network over or underfitting?)\nPlay around preprocessing techniques (normalization, rgb to grayscale, etc)\nNumber of examples per label (some have more than others).\nGenerate fake data.\n\nHere is an example of a published baseline model on this problem. 
It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.\nPre-process the Data Set (normalization, grayscale, etc.)\nMinimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project. \nOther pre-processing steps are optional. You can try different techniques to see if it improves performance. \nUse the code cell (or multiple code cells, if necessary) to implement the first step of your project.", "### Preprocess the data.\n\ndef dataGeneration():\n \"\"\"\n This function auguments the training data by creating new data (via image rotation)\n \"\"\"\n global X_train\n global y_train\n global y_train_ori\n global X_train_ori\n global n_train_ori\n \n #Create new data by fliping the images in the vertical and horizontal directions\n X_train[0:n_train_ori,:,:,:] = X_train_ori[:,:,:,:]\n y_train[0:n_train_ori] = y_train_ori[:]\n width = X_train.shape[1]\n height = X_train.shape[2]\n center = (width/ 2, height/ 2)\n for index in range(n_train_ori):\n #Rotate by 10 degrees\n rotation = cv2.getRotationMatrix2D(center, 10, 1.0)\n X_train[n_train_ori+index,:,:,:] = cv2.warpAffine(X_train_ori[index,:,:,:], rotation, (width, height))\n y_train[n_train_ori+index] = y_train_ori[index]\n #Flip the image horizontally\n rotation = cv2.getRotationMatrix2D(center, -10, 1.0)\n X_train[2*n_train_ori+index,:,:,:] = cv2.warpAffine(X_train_ori[index,:,:,:], rotation, (width, height))\n y_train[2*n_train_ori+index] = y_train_ori[index]\n\ndef normalize(X_input):\n \"\"\"\n This function normalizes the data\n \"\"\"\n #Min-Max normalization of data\n range_min = 0.1\n range_max = 0.9\n data_min = 0\n data_max = 255\n X_input = range_min + (((X_input - data_min)*(range_max - range_min) )/(data_max - data_min))\n return X_input\n\ndef randomize(X_input, y_input):\n \"\"\"\n This function randomizes the data.\n \"\"\"\n #Randomize the data\n X_input, y_input = shuffle(X_input, y_input)\n return X_input, y_input\n\ndataGeneration()\nX_train = normalize(X_train)\nX_valid = normalize(X_valid)\nX_test = normalize(X_test)\nX_train, y_train = randomize(X_train, y_train)", "Model Architecture", "def LeNet(x, keep_prob=1.0): \n # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer\n mu = 0\n sigma = 0.1\n global n_classes\n \n # Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.\n conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean = mu, stddev = sigma))\n conv1_b = tf.Variable(tf.zeros(6))\n conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b\n\n # Activation.\n conv1 = tf.nn.relu(conv1)\n\n # Pooling. Input = 28x28x6. Output = 14x14x6.\n conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n #Dropout\n conv1 = tf.nn.dropout(conv1, keep_prob)\n \n # Layer 2: Convolutional. Output = 10x10x16.\n conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))\n conv2_b = tf.Variable(tf.zeros(16))\n conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b\n \n # Activation.\n conv2 = tf.nn.relu(conv2)\n \n # Pooling. Input = 10x10x16. Output = 5x5x16.\n conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n #Dropout\n conv2 = tf.nn.dropout(conv2, keep_prob)\n \n # Flatten. 
Input = 5x5x16. Output = 400.\n fc0 = flatten(conv2)\n \n # Layer 3: Fully Connected. Input = 400. Output = 300.\n fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 300), mean = mu, stddev = sigma))\n fc1_b = tf.Variable(tf.zeros(300))\n fc1 = tf.matmul(fc0, fc1_W) + fc1_b\n \n # Activation.\n fc1 = tf.nn.relu(fc1)\n\n #Dropout\n fc1 = tf.nn.dropout(fc1, keep_prob)\n \n # Layer 4: Fully Connected. Input = 300. Output = 200.\n fc2_W = tf.Variable(tf.truncated_normal(shape=(300, 200), mean = mu, stddev = sigma))\n fc2_b = tf.Variable(tf.zeros(200))\n fc2 = tf.matmul(fc1, fc2_W) + fc2_b\n \n # Activation.\n fc2 = tf.nn.relu(fc2)\n\n #Dropout\n fc2 = tf.nn.dropout(fc2, keep_prob)\n \n # Layer 5: Fully Connected. Input = 200. Output = n_classes.\n fc3_W = tf.Variable(tf.truncated_normal(shape=(200, n_classes), mean = mu, stddev = sigma))\n fc3_b = tf.Variable(tf.zeros(n_classes))\n logits = tf.matmul(fc2, fc3_W) + fc3_b\n \n return logits\n\nclass CModel:\n def __init__(self, input_conv, target, learning_rate = 0.001, \n epochs = 10, batch_size = 128, keep_prob=1.0, debug_logging = False):\n \"\"\"\n This is the ctor for the class CModel.\n It initializes various hyper parameters required for training.\n \"\"\"\n self.learning_rate = learning_rate\n self.epoch = epochs\n self.batch_size = batch_size\n self.debug_logging = debug_logging\n self.input_conv = input_conv \n self.target = target\n self.logits = None\n self.one_hot_out_class = None\n self.keep_prob = keep_prob\n \n def __loss(self):\n \"\"\"\n This function calculates the loss.\n \"\"\"\n cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=self.one_hot_out_class, logits=self.logits)\n loss_operation = tf.reduce_mean(cross_entropy)\n return loss_operation\n \n def __optimize(self, loss_operation):\n \"\"\"\n This function runs the optimizer to train the weights.\n \"\"\"\n optimizer = tf.train.AdamOptimizer(learning_rate = self.learning_rate)\n minimize_loss = optimizer.minimize(loss_operation)\n return minimize_loss\n \n def trainLeNet(self):\n \"\"\"\n This function trains the LeNet network.\n \"\"\" \n print(\"n_classes \",n_classes)\n self.logits = LeNet(self.input_conv,self.keep_prob)\n self.one_hot_out_class = tf.one_hot(self.target, n_classes)\n loss_operation = self.__loss()\n minimize_loss = self.__optimize(loss_operation)\n \n return minimize_loss\n \n def accuracy(self):\n \"\"\"\n This function calculates the accuracy of the model.\n \"\"\"\n prediction, _ = self.prediction()\n correct_prediction = tf.equal(prediction, tf.argmax(self.one_hot_out_class, 1))\n accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n return accuracy_operation\n \n def prediction(self):\n return tf.argmax(self.logits, 1), tf.nn.top_k(tf.nn.softmax(self.logits), k=5)", "Train, Validate and Test the Model\nA validation set can be used to assess how well the model is performing. A low accuracy on the training and validation\nsets imply underfitting. 
A high accuracy on the training set but low accuracy on the validation set implies overfitting.", "#Model training\n\nclass CEvaluate:\n def __init__(self, learning_rate=0.001, epoch=10, batch_size=128):\n \n self.input_conv = tf.placeholder(tf.float32, (None, 32, 32, 3))\n self.target = tf.placeholder(tf.int32, (None))\n self.keep_prob = tf.placeholder(tf.float32)\n self.model = CModel(self.input_conv, self.target, learning_rate, epoch, batch_size, self.keep_prob)\n self.train = self.model.trainLeNet()\n self.accuracy_operation = self.model.accuracy()\n self.epoch = epoch\n self.batch_size = batch_size\n self.saver = tf.train.Saver()\n self.prediction = self.model.prediction()\n\n \n def __evaluate(self, X_data, y_data, keep_prob=1):\n num_examples = len(X_data)\n total_accuracy = 0\n sess = tf.get_default_session()\n for offset in range(0, num_examples, self.batch_size):\n batch_x, batch_y = X_data[offset:offset+self.batch_size], y_data[offset:offset+self.batch_size]\n accuracy = sess.run(self.accuracy_operation, feed_dict={self.input_conv: batch_x, \\\n self.target: batch_y, self.keep_prob: 1.0})\n total_accuracy += (accuracy * len(batch_x))\n return total_accuracy / num_examples\n\n def test(self):\n global X_test\n global y_test\n with tf.Session() as sess:\n self.saver.restore(sess, tf.train.latest_checkpoint('.'))\n test_accuracy = self.__evaluate(X_test, y_test)\n print(\"Test Accuracy = \", test_accuracy)\n \n def predictions(self, test_images):\n with tf.Session() as sess:\n self.saver.restore(sess, './lenet')\n predict, top_k_softmax = sess.run(self.prediction, feed_dict={self.input_conv: test_images, self.keep_prob: 1.0})\n\n return predict, top_k_softmax\n\n \n def run(self):\n global X_train\n global y_train\n global X_valid\n global y_valid\n \n validation_accuracy = []\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n num_examples = len(X_train)\n print(\"Training...\")\n for i in range(self.epoch):\n print(\"Epoch == \", i)\n for offset in range(0, num_examples, self.batch_size):\n end = offset + self.batch_size\n batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n sess.run(self.train, feed_dict={self.input_conv: batch_x, self.target: batch_y, self.keep_prob: 0.9})\n\n validation_accuracy.append(self.__evaluate(X_valid, y_valid))\n print(\"Validation Accuracy == \", validation_accuracy[i])\n\n self.saver.save(sess, './lenet')\n \n plt.plot(validation_accuracy)\n plt.xlabel(\"Epoch\")\n plt.ylabel(\"Validation Accuracy\")\n plt.title(\"Tracking of validation accuracy\")\n plt.show()\n\nlearning_rate = 0.001\nepoch = 30\nbatch_size = 128\neval_model = CEvaluate(learning_rate, epoch, batch_size)\neval_model.run() \neval_model.test()", "Step 3: Test a Model on New Images\nTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.\nYou may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.\nLoad and Output the Images", "### Load the images and plot them.\nimport os\ntest_images = os.listdir('test_images')\nnum_test_images = 5\nX_new_test = np.empty((num_test_images, 32, 32, 3))\ny_new_test = np.empty(num_test_images)\ndic = {\"60.jpg\":3, \"70.jpg\":4, \"roadwork.jpg\":25, \"stop.jpg\":14, \"yield.jpg\":13}\nfor index, image_name in enumerate(test_images):\n image_path = os.path.join('test_images', image_name)\n original_image = mpimg.imread(image_path)\n 
X_new_test[index,:,:,:] = cv2.resize(original_image,(32,32),interpolation=cv2.INTER_AREA)\n y_new_test[index] = dic[image_name]\n plt.imshow(X_new_test[index,:,:,:])\n plt.show()", "Predict the Sign Type for Each Image/Analyze Performance/ Output Soft Max", "with open('signnames.csv', mode='r') as file:\n reader = csv.reader(file)\n sign_mapping = {rows[0]:rows[1] for rows in reader}\n\nX_new_test = normalize(X_new_test)\npredict, top_k_softmax = eval_model.predictions(X_new_test)\n\nfor output,expected in zip(predict,y_new_test):\n print(\"Expected {} ...... Output {}\".format(sign_mapping[str(int(expected))], sign_mapping[str(output)]))\n\n\n\n### Calculate the accuracy for these 5 new images.\ncount = 0\nfor result, expectation in zip(predict, y_new_test):\n if result == expectation:\n count = count+1\n\naccuracy = count/num_test_images\nprint(\"accuracy of the prediction of new test images\", accuracy)", "Output Top 5 Softmax Probabilities For Each Image Found on the Web", "print(\"top_k_softmax == \", top_k_softmax)", "Project Writeup\nOnce you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file. \n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \\n\",\n \"File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.\n\n\nStep 4 (Optional): Visualize the Neural Network's State with Test Images\nThis Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.\nProvided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the LeNet lab's feature maps looked like for it's second convolutional layer you could enter conv2 as the tf_activation variable.\nFor an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. 
Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.\n<figure>\n <img src=\"visualize_cnn.png\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above)</p> \n </figcaption>\n</figure>\n<p></p>", "### Visualize your network's feature maps here.\n### Feel free to use as many code cells as needed.\n\n# image_input: the test image being fed into the network to produce the feature maps\n# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer\n# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output\n# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry\n\ndef outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):\n # Here make sure to preprocess your image_input in a way your network expects\n # with size, normalization, ect if needed\n # image_input =\n # Note: x should be the same name as your network's tensorflow data placeholder variable\n # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function\n activation = tf_activation.eval(session=sess,feed_dict={x : image_input})\n featuremaps = activation.shape[3]\n plt.figure(plt_num, figsize=(15,15))\n for featuremap in range(featuremaps):\n plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column\n plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number\n if activation_min != -1 & activation_max != -1:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", vmin =activation_min, vmax=activation_max, cmap=\"gray\")\n elif activation_max != -1:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", vmax=activation_max, cmap=\"gray\")\n elif activation_min !=-1:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", vmin=activation_min, cmap=\"gray\")\n else:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", cmap=\"gray\")" ]
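The raw `top_k_softmax` printout earlier in the notebook is hard to read. Below is a small, hedged sketch (an addition, not part of the original notebook) that maps the top-5 predictions back through `signnames.csv`. It assumes `top_k_softmax` unpacks as the `(values, indices)` pair produced by `tf.nn.top_k` inside the `prediction` method and returned by `eval_model.predictions()` above.

```python
# Hedged sketch, not in the original notebook: print the top-5 softmax classes
# per downloaded web image in readable form.
values, indices = top_k_softmax  # assumed (probabilities, class ids) from tf.nn.top_k
for i in range(num_test_images):
    expected = sign_mapping[str(int(y_new_test[i]))]
    print(f"Image {i} (expected: {expected})")
    for prob, class_id in zip(values[i], indices[i]):
        print(f"  {prob:7.2%}  {sign_mapping[str(class_id)]}")
```

Reading the probabilities alongside the class names makes it much easier to judge how confident the network is on each of the five new images.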
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gcgruen/homework
foundations-homework/05/.ipynb_checkpoints/homework-05-gruen-nyt-checkpoint.ipynb
mit
[ "All API's: http://developer.nytimes.com/\nArticle search API: http://developer.nytimes.com/article_search_v2.json\nBest-seller API: http://developer.nytimes.com/books_api.json#/Documentation\nTest/build queries: http://developer.nytimes.com/\nTip: Remember to include your API key in all requests! And their interactive web thing is pretty bad. You'll need to register for the API key.\n1) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?", "#API Key: 0c3ba2a8848c44eea6a3443a17e57448\n\nimport requests\nbestseller_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/2009-05-10/hardcover-fiction?api-key=0c3ba2a8848c44eea6a3443a17e57448')\nbestseller_data = bestseller_response.json()\nprint(\"The type of bestseller_data is:\", type(bestseller_data))\nprint(\"The keys of bestseller_data are:\", bestseller_data.keys())\n\n# Exploring the data structure further\nbestseller_books = bestseller_data['results']\nprint(type(bestseller_books))\nprint(bestseller_books[0])\n\nfor book in bestseller_books:\n #print(\"NEW BOOK!!!\")\n #print(book['book_details'])\n #print(book['rank'])\n if book['rank'] == 1:\n for element in book['book_details']:\n print(\"The book that topped the hardcover fiction NYT Beststeller list on Mothers Day in 2009 was\", element['title'], \"written by\", element['author'])", "After writing a code that returns a result, now automating that for the various dates using a function:", "def bestseller(x, y):\n bestsellerA_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/'+ x +'/hardcover-fiction?api-key=0c3ba2a8848c44eea6a3443a17e57448')\n bestsellerA_data = bestsellerA_response.json()\n bestsellerA_books = bestsellerA_data['results']\n \n for book in bestsellerA_books:\n if book['rank'] == 1:\n for element in book['book_details']:\n print(\"The book that topped the hardcover fiction NYT Beststeller list on\", y, \"was\", \n element['title'], \"written by\", element['author'])\n\nbestseller('2009-05-10', \"Mothers Day 2009\")\nbestseller('2010-05-09', \"Mothers Day 2010\")\nbestseller('2009-06-21', \"Fathers Day 2009\")\nbestseller('2010-06-20', \"Fathers Day 2010\")\n\n#Alternative solution would be, instead of putting this code into a function to loop it: \n#1) to create a dictionary called dates containing y as keys and x as values to these keys\n#2) to take the above code and nest it into a for loop that loops through the dates, each time using the next key:value pair\n # for date in dates:\n # replace value in URL and run the above code used inside the function\n # replace key in print statement", "2) What are all the different book categories the NYT ranked in June 6, 2009? 
How about June 6, 2015?", "# STEP 1: Exploring the data structure using just one of the dates from the question\nbookcat_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2009-06-06&api-key=0c3ba2a8848c44eea6a3443a17e57448')\nbookcat_data = bookcat_response.json()\nprint(type(bookcat_data))\nprint(bookcat_data.keys())\n\nbookcat = bookcat_data['results']\nprint(type(bookcat))\nprint(bookcat[0])\n\n# STEP 2: Writing a loop that runs the same code for both dates (no function, as only one variable)\ndates = ['2009-06-06', '2015-06-15']\nfor date in dates:\n bookcatN_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=' + date + '&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n bookcatN_data = bookcatN_response.json()\n bookcatN = bookcatN_data['results']\n \n category_listN = []\n for category in bookcatN:\n category_listN.append(category['display_name'])\n print(\" \")\n print(\"THESE WERE THE DIFFERENT BOOK CATEGORIES THE NYT RANKED ON\", date)\n for cat in category_listN:\n print(cat)", "3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names?\nTip: Add \"Libya\" to your search to make sure (-ish) you're talking about the right guy.", "# STEP 1a: EXPLORING THE DATA\n\ntest_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gaddafi+Libya&api-key=0c3ba2a8848c44eea6a3443a17e57448')\ntest_data = test_response.json()\nprint(type(test_data))\nprint(test_data.keys())\n\ntest_hits = test_data['response']\nprint(type(test_hits))\nprint(test_hits.keys())\n\n# STEP 1b: EXPLORING THE META DATA\n\ntest_hits_meta = test_data['response']['meta']\nprint(\"The meta data of the search request is a\", type(test_hits_meta))\nprint(\"The dictionary despot_hits_meta has the following keys:\", test_hits_meta.keys())\nprint(\"The search requests with the TEST URL yields total:\")\ntest_hit_count = test_hits_meta['hits']\nprint(test_hit_count)\n\n# STEP 2: BUILDING THE CODE TO LOOP THROUGH DIFFERENT SPELLINGS\ndespot_names = ['Gadafi', 'Gaddafi', 'Kadafi', 'Qaddafi']\n\nfor name in despot_names:\n despot_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=' + name +'+Libya&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n despot_data = despot_response.json()\n \n despot_hits_meta = despot_data['response']['meta']\n despot_hit_count = despot_hits_meta['hits']\n print(\"The NYT has referred to the Libyan despot\", despot_hit_count, \"times using the spelling\", name)", "4) What's the title of the first story to mention the word 'hipster' in 1995? 
What's the first paragraph?", "hip_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=hipster&fq=pub_year:1995&api-key=0c3ba2a8848c44eea6a3443a17e57448')\nhip_data = hip_response.json()\nprint(type(hip_data))\nprint(hip_data.keys())\n\n# STEP 1: EXPLORING THE DATA STRUCTURE:\n\nhipsters = hip_data['response']\n#print(hipsters)\n#hipsters_meta = hipsters['meta']\n#print(type(hipsters_meta))\nhipsters_results = hipsters['docs']\nprint(hipsters_results[0].keys())\n#print(type(hipsters_results))\n\n#STEP 2: LOOPING FOR THE ANSWER:\n\nearliest_date = '1996-01-01'\nfor mention in hipsters_results:\n if mention['pub_date'] < earliest_date:\n earliest_date = mention['pub_date']\n print(\"This is the headline of the first text to mention 'hipster' in 1995:\", mention['headline']['main'])\n print(\"It was published on:\", mention['pub_date']) \n print(\"This is its lead paragraph:\")\n print(mention['lead_paragraph'])", "5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-2099, 2000-2009, and 2010-present?\nTip: You'll want to put quotes around the search term so it isn't just looking for \"gay\" and \"marriage\" in the same article.\nTip: Write code to find the number of mentions between Jan 1, 1950 and Dec 31, 1959.", "# data structure requested same as in task 3, just this time loop though different date ranges\n\ndef countmention(a, b, c):\n if b == ' ':\n marry_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=\"gay marriage\"&begin_date='+ a +'&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n else:\n marry_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=\"gay marriage\"&begin_date='+ a +'&end_date='+ b +'&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n \n marry_data = marry_response.json()\n\n marry_hits_meta = marry_data['response']['meta']\n marry_hit_count = marry_hits_meta['hits']\n print(\"The count for NYT articles mentioning 'gay marriage' between\", c, \"is\", marry_hit_count)\n\n#supposedly, there's a way to solve the following part in a more efficient way, but those I tried did not work, \n#so it ended up being more time-efficient just to type it:\ncountmention('19500101', '19591231', '1950 and 1959')\ncountmention('19600101', '19691231', '1960 and 1969')\ncountmention('19700101', '19791231', '1970 and 1979')\ncountmention('19800101', '19891231', '1980 and 1989')\ncountmention('19900101', '19991231', '1990 and 1999')\ncountmention('20000101', '20091231', '2000 and 2009')\ncountmention('20100101', ' ', '2010 and present')", "6) What section talks about motorcycles the most?\nTip: You'll be using facets", "moto_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcycle&facet_field=section_name&facet_filter=true&api-key=0c3ba2a8848c44eea6a3443a17e57448')\nmoto_data = moto_response.json()\n\n#STEP 1: EXPLORING DATA STRUCTURE\n#print(type(moto_data))\n#print(moto_data.keys())\n#print(moto_data['response'])\n#print(moto_data['response'].keys())\n#print(moto_data['response']['facets'])\n\n#STEP 2: Code to get to the answer\nmoto_facets = moto_data['response']['facets']\n#print(moto_facets)\n#print(moto_facets.keys())\nmoto_sections = moto_facets['section_name']['terms']\n#print(moto_sections)\n\n#this for loop is not necessary, but it's nice to know the counts \n#(also to check whether the next loop identifies the right section)\nfor section in moto_sections:\n print(\"The section\", section['term'], 
\"mentions motorcycles\", section['count'], \"times.\")\n\nmost_motorcycles = 0\nfor section in moto_sections:\n if section['count'] > most_motorcycles:\n most_motorcycles = section['count']\n print(\" \")\n print(\"That means the section\", section['term'], \"mentions motorcycles the most, namely\", section['count'], \"times.\")", "7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?\nTip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.", "picks_offset_values = [0, 20, 40]\npicks_review_list = []\n\nfor value in picks_offset_values:\n picks_response = requests.get ('http://api.nytimes.com/svc/movies/v2/reviews/search.json?&offset=' + str(value) + '&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n picks_data = picks_response.json()\n\n#STEP 1: EXPLORING THE DATA STRUCTURE (without the loop)\n\n#print(picks_data.keys())\n#print(picks_data['num_results'])\n#print(picks_data['results'])\n#print(type(picks_data['results']))\n#print(picks_data['results'][0].keys())\n\n#STEP 2: After writing a test code (not shown) without the loop, now CODING THE LOOP\n\n last_reviews = picks_data['num_results']\n picks_results = picks_data['results']\n \n critics_pick_count = 0\n for review in picks_results:\n if review['critics_pick'] == 1:\n critics_pick_count = critics_pick_count + 1\n picks_new_count = critics_pick_count \n picks_review_list.append(picks_new_count)\n print(\"Out of the last\", last_reviews + value, \"movie reviews,\", sum(picks_review_list), \"were Critics' picks.\")", "8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews?", "#STEP 1: EXPLORING THE DATA STRUCTURE (without the loop)\n#critics_response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?&offset=0&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n#critics_data = critics_response.json()\n#print(critics_data.keys())\n#print(critics_data['num_results'])\n#print(critics_data['results'])\n#print(type(critics_data['results']))\n#print(critics_data['results'][0].keys())\n\n#STEP 2: CREATE A LOOP, THAT GOES THROUGH THE SEARCH RESULTS FOR EACH OFFSET VALUE AND STORES THE RESULTS IN THE SAME LIST\n#(That list is then passed on to step 3)\n\ncritics_offset_value = [0, 20]\ncritics_list = [ ]\nfor value in critics_offset_value:\n critics_response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?&offset=' + str(value) + '&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n critics_data = critics_response.json()\n \n critics = critics_data['results']\n\n for review in critics:\n critics_list.append(review['byline'])\n #print(critics_list)\nunique_critics = set(critics_list)\n#print(unique_critics)\n \n#STEP 3: FOR EVERY NAME IN THE UNIQUE CRITICS LIST, LOOP THROUGH NON-UNIQUE LIST TO COUNT HOW OFTEN THEY OCCUR\n#STEP 4: SELECT THE ONE THAT HAS WRITTEN THE MOST (from the #print statement below, I know it's two people with same score)\n\nmax_count = 0\nfor name in unique_critics:\n name_count = 0\n for critic in critics_list:\n if critic == name:\n name_count = name_count + 1\n if name_count > max_count:\n max_count = name_count\n max_name = name\n if name_count == max_count:\n same_count = name_count\n same_name = name\n #print(name, \"has written\", name_count, \"reviews out of the last 
40 reviews.\")\nprint(max_name, \"has written the most of the last 40 reviews:\", max_count)\nprint(same_name, \"has written the most of the last 40 reviews:\", same_count)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DestrinStorm/deep-learning
weight-initialization/weight_initialization.ipynb
mit
[ "Weight Initialization\nIn this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to the best solution quicker. \nTesting Weights\nDataset\nTo see how different weights perform, we'll test on the same dataset and neural network. Let's go over the dataset and neural network.\nWe'll be using the MNIST dataset to demonstrate the different initial weights. As a reminder, the MNIST dataset contains images of handwritten numbers, 0-9, with normalized input (0.0 - 1.0). Run the cell below to download and load the MNIST dataset.", "%matplotlib inline\n\nimport tensorflow as tf\nimport helper\n\nfrom tensorflow.examples.tutorials.mnist import input_data\n\nprint('Getting MNIST Dataset...')\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)\nprint('Data Extracted.')", "Neural Network\n<img style=\"float: left\" src=\"images/neural_network.png\"/>\nFor the neural network, we'll test on a 3 layer neural network with ReLU activations and an Adam optimizer. The lessons you learn apply to other neural networks, including different activations and optimizers.", "# Save the shapes of weights for each layer\nlayer_1_weight_shape = (mnist.train.images.shape[1], 256)\nlayer_2_weight_shape = (256, 128)\nlayer_3_weight_shape = (128, mnist.train.labels.shape[1])", "Initialize Weights\nLet's start looking at some initial weights.\nAll Zeros or Ones\nIf you follow the principle of Occam's razor, you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.\nWith every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust.\nLet's compare the loss with all ones and all zero weights using helper.compare_init_weights. This function will run two different initial weights on the neural network above for 2 epochs. It will plot the loss for the first 100 batches and print out stats after the 2 epochs (~860 batches). We plot the first 100 batches to better judge which weights performed better at the start.\nRun the cell below to see the difference between weights of all zeros against all ones.", "all_zero_weights = [\n tf.Variable(tf.zeros(layer_1_weight_shape)),\n tf.Variable(tf.zeros(layer_2_weight_shape)),\n tf.Variable(tf.zeros(layer_3_weight_shape))\n]\n\nall_one_weights = [\n tf.Variable(tf.ones(layer_1_weight_shape)),\n tf.Variable(tf.ones(layer_2_weight_shape)),\n tf.Variable(tf.ones(layer_3_weight_shape))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'All Zeros vs All Ones',\n [\n (all_zero_weights, 'All Zeros'),\n (all_one_weights, 'All Ones')])", "As you can see the accuracy is close to guessing for both zeros and ones, around 10%.\nThe neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run.\nA good solution for getting these random weights is to sample from a uniform distribution.\nUniform Distribution\nA [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous%29) has the equal probability of picking any number from a set of numbers. We'll be picking from a continous distribution, so the chance of picking the same number is low. 
We'll use TensorFlow's tf.random_uniform function to pick random numbers from a uniform distribution.\n\ntf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)\nOutputs random values from a uniform distribution.\nThe generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.\n\nshape: A 1-D integer Tensor or Python array. The shape of the output tensor.\nminval: A 0-D Tensor or Python value of type dtype. The lower bound on the range of random values to generate. Defaults to 0.\nmaxval: A 0-D Tensor or Python value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point.\ndtype: The type of the output: float32, float64, int32, or int64.\nseed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.\nname: A name for the operation (optional).\n\n\nWe can visualize the uniform distribution by using a histogram. Let's map the values from tf.random_uniform([1000], -3, 3) to a histogram using the helper.hist_dist function. This will be 1000 random float values from -3 to 3, excluding the value 3.", "helper.hist_dist('Random Uniform (minval=-3, maxval=3)', tf.random_uniform([1000], -3, 3))", "The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.\nNow that you understand the tf.random_uniform function, let's apply it to some initial weights.\nBaseline\nLet's see how well the neural network trains using the default values for tf.random_uniform, where minval=0.0 and maxval=1.0.", "# Default for tf.random_uniform is minval=0 and maxval=1\nbasline_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'Baseline',\n [(basline_weights, 'tf.random_uniform [0, 1)')])", "The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction.\nGeneral rule for setting weights\nThe general rule for setting the weights in a neural network is to be close to zero without being too small. A good pracitce is to start your weights in the range of $[-y, y]$ where\n$y=1/\\sqrt{n}$ ($n$ is the number of inputs to a given neuron).\nLet's see if this holds true, let's first center our range over zero. This will give us the range [-1, 1).", "uniform_neg1to1_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -1, 1)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -1, 1)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -1, 1))\n]\n\nhelper.compare_init_weights(\n mnist,\n '[0, 1) vs [-1, 1)',\n [\n (basline_weights, 'tf.random_uniform [0, 1)'),\n (uniform_neg1to1_weights, 'tf.random_uniform [-1, 1)')])", "We're going in the right direction, the accuracy and loss is better with [-1, 1). We still want smaller weights. How far can we go before it's too small?\nToo small\nLet's compare [-0.1, 0.1), [-0.01, 0.01), and [-0.001, 0.001) to see how small is too small. 
We'll also set plot_n_batches=None to show all the batches in the plot.", "uniform_neg01to01_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.1, 0.1)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.1, 0.1)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.1, 0.1))\n]\n\nuniform_neg001to001_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.01, 0.01)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.01, 0.01)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.01, 0.01))\n]\n\nuniform_neg0001to0001_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.001, 0.001)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.001, 0.001)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.001, 0.001))\n]\n\nhelper.compare_init_weights(\n mnist,\n '[-1, 1) vs [-0.1, 0.1) vs [-0.01, 0.01) vs [-0.001, 0.001)',\n [\n (uniform_neg1to1_weights, '[-1, 1)'),\n (uniform_neg01to01_weights, '[-0.1, 0.1)'),\n (uniform_neg001to001_weights, '[-0.01, 0.01)'),\n (uniform_neg0001to0001_weights, '[-0.001, 0.001)')],\n plot_n_batches=None)", "Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\\sqrt{n}$.", "import numpy as np\n\ngeneral_rule_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -1/np.sqrt(layer_1_weight_shape[0]), 1/np.sqrt(layer_1_weight_shape[0]))),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -1/np.sqrt(layer_2_weight_shape[0]), 1/np.sqrt(layer_2_weight_shape[0]))),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -1/np.sqrt(layer_3_weight_shape[0]), 1/np.sqrt(layer_3_weight_shape[0])))\n]\n\nhelper.compare_init_weights(\n mnist,\n '[-0.1, 0.1) vs General Rule',\n [\n (uniform_neg01to01_weights, '[-0.1, 0.1)'),\n (general_rule_weights, 'General Rule')],\n plot_n_batches=None)", "The range we found and $y=1/\\sqrt{n}$ are really close.\nSince the uniform distribution has the same chance to pick anything in the range, what if we used a distribution that had a higher chance of picking numbers closer to 0. Let's look at the normal distribution.\nNormal Distribution\nUnlike the uniform distribution, the normal distribution has a higher likelihood of picking number close to it's mean. To visualize it, let's plot values from TensorFlow's tf.random_normal function to a histogram.\n\ntf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)\nOutputs random values from a normal distribution.\n\nshape: A 1-D integer Tensor or Python array. The shape of the output tensor.\nmean: A 0-D Tensor or Python value of type dtype. The mean of the normal distribution.\nstddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.\ndtype: The type of the output.\nseed: A Python integer. Used to create a random seed for the distribution. 
See tf.set_random_seed for behavior.\nname: A name for the operation (optional).", "helper.hist_dist('Random Normal (mean=0.0, stddev=1.0)', tf.random_normal([1000]))", "Let's compare the normal distribution against the previous uniform distribution.", "normal_01_weights = [\n tf.Variable(tf.random_normal(layer_1_weight_shape, stddev=0.1)),\n tf.Variable(tf.random_normal(layer_2_weight_shape, stddev=0.1)),\n tf.Variable(tf.random_normal(layer_3_weight_shape, stddev=0.1))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'Uniform [-0.1, 0.1) vs Normal stddev 0.1',\n [\n (uniform_neg01to01_weights, 'Uniform [-0.1, 0.1)'),\n (normal_01_weights, 'Normal stddev 0.1')])", "The normal distribution gave a slight increasse in accuracy and loss. Let's move closer to 0 and drop picked numbers that are x number of standard deviations away. This distribution is called Truncated Normal Distribution.\nTruncated Normal Distribution\n\ntf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)\nOutputs random values from a truncated normal distribution.\nThe generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.\n\nshape: A 1-D integer Tensor or Python array. The shape of the output tensor.\nmean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.\nstddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the truncated normal distribution.\ndtype: The type of the output.\nseed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.\nname: A name for the operation (optional).", "helper.hist_dist('Truncated Normal (mean=0.0, stddev=1.0)', tf.truncated_normal([1000]))", "Again, let's compare the previous results with the previous distribution.", "trunc_normal_01_weights = [\n tf.Variable(tf.truncated_normal(layer_1_weight_shape, stddev=0.1)),\n tf.Variable(tf.truncated_normal(layer_2_weight_shape, stddev=0.1)),\n tf.Variable(tf.truncated_normal(layer_3_weight_shape, stddev=0.1))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'Normal vs Truncated Normal',\n [\n (normal_01_weights, 'Normal'),\n (trunc_normal_01_weights, 'Truncated Normal')])", "There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will pick more points on the normal distribution, increasing the likelihood it's choices are larger than 2 standard deviations.\nWe've come a long way from the first set of weights we tested. Let's see the difference between the weights we used then and now.", "helper.compare_init_weights(\n mnist,\n 'Baseline vs Truncated Normal',\n [\n (basline_weights, 'Baseline'),\n (trunc_normal_01_weights, 'Truncated Normal')])", "That's a huge difference. You can barely see the truncated normal line. However, this is not the end your learning path. We've provided more resources for initializing weights in the classroom!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
srodriguex/coursera_data_analysis_tools
Week_4.ipynb
mit
[ "Table of Contents\n\n1. Week 4 Assignment: Generating a Correlation Coefficient\n1.1 Subset the dataset into the moderate variable levels\n1.2 Pearson correlation $r$\n1.3 Conclusion\n\n\n\n1. Week 4 Assignment: Generating a Correlation Coefficient\nIn this assignment I've chosen the Gapminder dataset. Looking through its codebook we've decided to study the relationship of the numeric variables incomeperperson and lifeexpectancy taking into account the numeric variable urbanrate as a potential moderator:\n\nincomeperperson\n\n\n2010 Gross Domestic Product per capita in constant 2000 US$. The World Bank Work Development inflation but not the differences in the cost of living between countries Indicators\nhas been taken into account.\n\n\nlifeexpectancy\n\n\n2011 life expectancy at birth (years). The average number of years a newborn child would live if current mortality patterns were to stay the same.\n\n\nurbanrate (potential moderator)\n\n\n2008 urban population (% of total). Urban population refers to people living in urban areas as defined by\nnational statistical offices (calculated using World Bank population estimates and urban ratios from the United Nations World Urbanization Prospects).", "# Import all ploting and scientific library,\n# and embed figures in this file.\n%pylab inline\n\n# Package to manipulate dataframes.\nimport pandas as pd\n\n# Nice looking plot functions.\nimport seaborn as sn\n\n# The Pearson correlation function.\nfrom scipy.stats import pearsonr\n\n# Read the dataset.\ndf = pd.read_csv('data/gapminder.csv')\n\n# Set the country name as the index of the dataframe.\ndf.index = df.country\n\n# This column is no longer needed.\ndel df['country']\n\n# Select only the variables we're interested.\ndf = df[['lifeexpectancy','incomeperperson', 'urbanrate']]\n\n# Convert the types.\ndf.lifeexpectancy = pd.to_numeric(df.lifeexpectancy, errors='coerce')\ndf.incomeperperson = pd.to_numeric(df.incomeperperson, errors='coerce')\ndf.urbanrate = pd.to_numeric(df.urbanrate, errors='coerce')\n\n# Remove missing values.\ndf = df.dropna()\n", "1.1 Subset the dataset into the moderate variable levels\nIn order to verifify whether the moderator variabel, urbanrate, plays a role into the interaction between incomeperperon and lifeexpectancy, we'll subset our dataset into two groups: onde group for countries below 50% of urbanrate population, and the other group with countries equal or above 50% of urbanrate population.", "# Dataset with low urban rate.\ndf_low = df[df.urbanrate < 50]\n\n# Dataset with high urban rate.\ndf_high = df[df.urbanrate >= 50]", "1.2 Pearson correlation $r$\nFor each subset, we'll conduct the Pearson correlation analysis and verify the results.", "r_low = pearsonr(df_low.incomeperperson, df_low.lifeexpectancy)\nr_high = pearsonr(df_high.incomeperperson, df_high.lifeexpectancy)\n\nprint('Correlation in LOW urban rate: {}'.format(r_low))\n\nprint('Correlation in HIGH urban rate: {}'.format(r_high))\n\nprint('Percentage of variability LOW urban rate: {:2}%'.\n format(round(r_low[0]**2*100,2)))\n\nprint('Percentage of variability HIGH urban rate: {:2}%'.\n format(round(r_high[0]**2*100,2)))\n\n# Silent matplotlib warning. 
\nimport warnings\nwarnings.filterwarnings('ignore',category=FutureWarning)\n\n# Setting an appropriate size for the graph.\nf,a = subplots(1, 2)\nf.set_size_inches(12,6)\n\n# Plot the graph.\nsn.regplot(df_low.incomeperperson, df_low.lifeexpectancy, ax=a[0]);\na[0].set_title('Countries with LOW urbanrate', fontweight='bold');\nsn.regplot(df_high.incomeperperson, df_high.lifeexpectancy, ax=a[1]);\na[1].set_title('Countries with HIGH urbanrate', fontweight='bold');\n", "1.3 Conclusion\nAs we can see above, the correlation in countries with high urban rate, urbanrate $\\ge 50\\%$, is higher than in countries with low urban rate, and in both cases the $pvalue$ is significant. So, we can say the variable urbanrate moderates the relationship between lifeexpectancy and incomeperperson. \nEnd of assignment." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bosscha/alma-calibrator
notebooks/lines/scanningLineRedshiftwithSplat.ipynb
gpl-2.0
[ "Method to try to match lines, mainly in absorption, with known lines at different redshift. The lineid musr be identified first to discard wrong detections.", "import sys , os\nsys.path.append(\"../../src/lines\")\n\nimport lineTools as lt\nimport pickle\nimport matplotlib.pyplot as pl\n\n### working dir. and files\nwd = \"/home/stephane/Science/RadioGalaxy/ALMA/absorptions/analysis/a/\"\nos.chdir(wd)\n\ndatadir = \"dataSpecAll/\"\ndbline = \"lineAll.db\"\ntransfile = \"splatalogue.csv\"\ndirplot = \"plots/\"", "We define the source to be scanned from the lineAll.db", "def outputMatch(matches, minmatch = 5, mainLines = None):\n\n for m in matches:\n imax = len(m)\n ifound = 0\n\n redshift = m[0]\n \n for i in range(1,len(m)):\n if len(m[i]) > 0:\n ifound += 1\n \n if mainLines != None:\n ifound =0\n for i in range(1,len(m)):\n for mainline in mainLines:\n if len(m[i]) > 0:\n for line in m[i]:\n if line[0].find(mainline) != -1:\n ifound += 1\n \n if ifound >= minmatch:\n print(\"########################\")\n print(\"## Redshift: %f\"%(redshift))\n print(\"## Freq. matched: %d\"%(ifound))\n print(\"##\")\n print(\"## Formula Name E_K Frequency\")\n print(\"## (K) (MHz)\")\n for i in range(1,len(m)):\n if len(m[i]) > 0:\n print(\"## Line:\")\n for line in m[i]:\n print line\n \n print(\"## \\n###END###\\n\")\n\nsource = \"J2148+0657\"\nredshift = 0.895\n\nal = lt.analysisLines(dbline)\ncmdsql = \"select lineid FROM lines WHERE source = '%s'\"%(source)\nresdb = al.query(cmdsql)\nlineid = []\nfor l in resdb:\n lineid.append(l[0])\nprint(lineid)", "Scan through the lines (lineid) matching with a local splatalogue.db. emax is the maximum energy of the upper level to restrain to low energy transitions...", "m = al.scanningSplatRedshiftSourceLineid(lineid, zmin = redshift , zmax = 0.90, dz = 1e-4,nrao = True, emax= 40., absorption = True, emission = True )\n\nredshift = []\nlineDetected =[]\nminmatch = 15\n\nfor l in m:\n redshift.append(l[0])\n \n ifound = 0\n \n for i in range(1,len(l)):\n if len(l[i]) > 0:\n ifound += 1\n \n if ifound >= minmatch:\n print(\"###Redshift: %f\"%(l[0]))\n print(\"##\")\n for line in l[1:-1]:\n if len(line) > 0:\n print line\n print(\"\\n\\n\")\n \n \n lineDetected.append(ifound)", "Plot the detected lines vs. the redshift.", "pl.figure(figsize=(15,10))\npl.xlabel(\"z\")\npl.ylabel(\"Lines\")\npl.plot(redshift, lineDetected, \"k-\")\npl.show()\n\n\n## uncomment to save data in a pickle file\nf = open(\"3c273-redshift-hires-scan.pickle\",\"w\")\npickle.dump(m,f )\nf.close()\n", "Display the matching transitions", "mL = ['CO v=0','HCN','HCO+']\noutputMatch(m, minmatch=3, mainLines = None)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tuanavu/coursera-university-of-washington
machine_learning/2_regression/lecture/week1/PhillyCrime.ipynb
mit
[ "Fire up graphlab create", "import sys\nsys.path.append('C:\\Anaconda2\\envs\\dato-env\\Lib\\site-packages')\nimport graphlab", "Load some house value vs. crime rate data\nDataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').", "sales = graphlab.SFrame.read_csv('Philadelphia_Crime_Rate_noNA.csv/')\n\nsales", "Exploring the data\nThe house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.", "graphlab.canvas.set_target('ipynb')\nsales.show(view=\"Scatter Plot\", x=\"CrimeRate\", y=\"HousePrice\")", "Fit the regression model using crime as the feature", "crime_model = graphlab.linear_regression.create(sales, target='HousePrice', features=['CrimeRate'],validation_set=None,verbose=False)", "Let's see what our fit looks like\nMatplotlib is a Python plotting library that is also useful for plotting. You can install it with:\n'pip install matplotlib'", "import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(sales['CrimeRate'],sales['HousePrice'],'.',\n sales['CrimeRate'],crime_model.predict(sales),'-')", "Above: blue dots are original data, green line is the fit from the simple regression.\nRemove Center City and redo the analysis\nCenter City is the one observation with an extremely high crime rate, yet house prices are not very low. This point does not follow the trend of the rest of the data very well. A question is how much including Center City is influencing our fit on the other datapoints. Let's remove this datapoint and see what happens.", "sales_noCC = sales[sales['MilesPhila'] != 0.0] \n\nsales_noCC.show(view=\"Scatter Plot\", x=\"CrimeRate\", y=\"HousePrice\")", "Refit our simple regression model on this modified dataset:", "crime_model_noCC = graphlab.linear_regression.create(sales_noCC, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)", "Look at the fit:", "plt.plot(sales_noCC['CrimeRate'],sales_noCC['HousePrice'],'.',\n sales_noCC['CrimeRate'],crime_model.predict(sales_noCC),'-')", "Compare coefficients for full-data fit versus no-Center-City fit\nVisually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.", "crime_model.get('coefficients')\n\ncrime_model_noCC.get('coefficients')", "Above: We see that for the \"no Center City\" version, per unit increase in crime, the predicted decrease in house prices is 2,287. In contrast, for the original dataset, the drop is only 576 per unit increase in crime. This is significantly different!\nHigh leverage points:\nCenter City is said to be a \"high leverage\" point because it is at an extreme x value where there are not other observations. As a result, recalling the closed-form solution for simple regression, this point has the potential to dramatically change the least squares line since the center of x mass is heavily influenced by this one point and the least squares line will try to fit close to that outlying (in x) point. If a high leverage point follows the trend of the other data, this might not have much effect. 
On the other hand, if this point somehow differs, it can be strongly influential in the resulting fit.\nInfluential observations:\nAn influential observation is one where the removal of the point significantly changes the fit. As discussed above, high leverage points are good candidates for being influential observations, but need not be. Other observations that are not leverage points can also be influential observations (e.g., strongly outlying in y even if x is a typical value).\nRemove high-value outlier neighborhoods and redo analysis\nBased on the discussion above, a question is whether the outlying high-value towns are strongly influencing the fit. Let's remove them and see what happens.", "sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000] \ncrime_model_nohighend = graphlab.linear_regression.create(sales_nohighend, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)", "Do the coefficients change much?", "crime_model_noCC.get('coefficients')\n\ncrime_model_nohighend.get('coefficients')", "Above: We see that removing the outlying high-value neighborhoods has some effect on the fit, but not nearly as much as our high-leverage Center City datapoint." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
heatseeknyc/data-science
src/bryan analyses/Hack for Heat #3.ipynb
mit
[ "%matplotlib inline\n\nfrom matplotlib import pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport psycopg2\n\npd.options.display.max_columns = 40", "Hack for Heat #3: Number of complaints over time\nThis time, we're going to look at raw 311 complaint data. The data that I was working with previously was summarized data.\nThis dataset is much bigger, which is nice because it'll give me a chance to maintain my SQL-querying-from-memory-skills.\nFirst, we're going to have to load all of this data into a postgres database. I wrote this tablebase.\nSQL-ing this\nThe python library psycopg2 lets us work with postgres databases in python. We first create a connection object, that encapsulates the connection to the database, then create a cursor class that lets us make queries from that database.", "connection = psycopg2.connect('dbname = threeoneone user=threeoneoneadmin password=threeoneoneadmin')\ncursor = connection.cursor()", "For example, we might want to extract the column names from our table:", "cursor.execute('''SELECT * FROM threeoneone.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'service'; ''')\ncolumns = cursor.fetchall()\n\ncolumns = [x[3] for x in columns]\n\ncolumns[0:5]", "Complaints over time\nLet's start with something simple. First, let's extract a list of all complaints, and the plot the number of complaints by month.", "cursor.execute('''SELECT createddate FROM service;''')\ncomplaintdates = cursor.fetchall()\n\ncomplaintdates = pd.DataFrame(complaintdates)\n\ncomplaintdates.head()", "Renaming our column:", "complaintdates.columns = ['Date']", "Next we have to convert these tuples into strings:", "complaintdates['Date'] = [x [0] for x in complaintdates['Date']]", "Normally, if these were strings, we'd use the extract_dates function we wrote in a previous post. However, because I typed these as datetime objects, we can just extract the .year(), .month(), and .day() attributes:", "type(complaintdates['Date'][0])\n\ncomplaintdates['Day'] = [x.day for x in complaintdates['Date']]\ncomplaintdates['Month'] = [x.month for x in complaintdates['Date']]\ncomplaintdates['Year'] = [x.year for x in complaintdates['Date']]", "This is how many total complaints we have:", "len(complaintdates)", "We can group them by month:", "bymonth = complaintdates.groupby(by='Month').count()\nbymonth", "By year:", "byyear = complaintdates.groupby(by='Year').count()\nbyyear\n\nbyday = complaintdates.groupby(by='Day').count()\nbydate = complaintdates.groupby(by='Date').count()", "Some matplotlib", "plt.figure(figsize = (12,10))\nx = range(0,12)\ny = bymonth['Date']\nplt.plot(x,y)\n\nplt.figure(figsize = (12,10))\nx = range(0,7)\ny = byyear['Date']\n\nplt.plot(x,y)\n\nplt.figure(figsize = (12,10))\nx = range(0,len(byday))\ny = byday['Date']\n\nplt.plot(x,y)", "The sharp decline we see at the end is obviously because not all months have the same number of days.", "plt.figure(figsize=(12,10))\nx = range(0,len(bydate))\ny = bydate['Year'] #This is arbitrary - year, month, and day are all series that store the counts\n\nplt.plot(x,y)", "That's all for now. In the next post, I'm going to break this down by borough, as well as polish this graph." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
angelmtenor/data-science-keras
text_generator.ipynb
mit
[ "English sequence generator\nCreating an English language sequence generator capable of building semi-coherent English sentences from scratch by building them up character-by-character\nNatural Language Processing\nDataset: Complete version of Sir Arthur Conan Doyle's classic book The Adventures of Sherlock Holmes\nBased on RNN project: text generation of the Udacity's Artificial Intelligence Nanodegree", "%matplotlib inline\n\nimport os\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport keras\nimport helper\n\nhelper.reproducible(seed=9)\nsns.set()", "Load and Process the data", "text = open('data/holmes.txt').read().lower()\nprint('Total characters: {}'.format(len(text)))\ntext[:300]", "Preprocess the data", "text = text[1302:] # remove title, author page, and table of contents\ntext = text.replace('\\n', ' ')\ntext = text.replace('\\r', ' ')\n\nunique_characters = set(list(text))\nprint(unique_characters)\n\n# remove non-english characters\nimport re\ntext = re.sub(\"[$%&'()*@/àâèé0123456789-]\", \" \", text)\ntext = text.replace('\"', ' ')\ntext = text.replace(' ', ' ') # shorten any extra dead space created above\ntext[:300]\n\nchars = sorted(list(set(text)))\nnum_chars = len(chars)\nprint('Total characters: {}'.format(len(text)))\nprint('Unique characters: {}'.format(num_chars))\nprint(chars)", "Split data into input/output pairs", "# Transforms the input text and window-size into a set of input/output pairs\n# for use with the RNN \"\"\"\n\nwindow_size = 100\nstep_size = 5\n\ninput_pairs = []\noutput_pairs = []\n\nfor i in range(0, len(text) - window_size, step_size):\n input_pairs.append(text[i:i + window_size])\n output_pairs.append(text[i + window_size])", "One-hot encoding characters", "chars_to_indices = dict((c, i) for i, c in enumerate(chars))\nindices_to_chars = dict((i, c) for i, c in enumerate(chars))\n\n# create variables for one-hot encoded input/output\nX = np.zeros((len(input_pairs), window_size, num_chars), dtype=np.bool)\ny = np.zeros((len(input_pairs), num_chars), dtype=np.bool)\n\n# transform character-based input_pairs/output_pairs into equivalent numerical versions\nfor i, sentence in enumerate(input_pairs):\n for t, char in enumerate(sentence):\n X[i, t, chars_to_indices[char]] = 1\n y[i, chars_to_indices[output_pairs[i]]] = 1", "Recurrent Neural Network Model", "from keras.models import Sequential\nfrom keras.layers import Dense, Activation, LSTM\n\nmodel = Sequential()\nmodel.add(LSTM(200, input_shape=(window_size, num_chars)))\nmodel.add(Dense(num_chars, activation=None))\nmodel.add(Dense(num_chars, activation=\"softmax\"))\nmodel.summary()\n\noptimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)\n\nmodel.compile(loss='categorical_crossentropy', optimizer=optimizer)\n\n# train the model\nprint(\"Training ...\")\n%time history = model.fit(X, y, batch_size=512, epochs=100,verbose=0)\nhelper.show_training(history)\n\nmodel_path = os.path.join(\"models\", \"text_generator.h5\")\nmodel.save(model_path)\nprint(\"\\nModel saved at\", model_path)", "Make predictions", "model = keras.models.load_model(model_path)\nprint(\"Model loaded:\", model_path)\n\n\ndef predict_next_chars(model, input_chars, num_to_predict):\n \"\"\" predict a number of future characters \"\"\"\n\n predicted_chars = ''\n for i in range(num_to_predict):\n x_test = np.zeros((1, window_size, len(chars)))\n for t, char in enumerate(input_chars):\n x_test[0, t, chars_to_indices[char]] = 1.\n\n test_predict = model.predict(x_test, 
verbose=0)[0]\n\n # translate numerical prediction back to characters\n r = np.argmax(test_predict)\n d = indices_to_chars[r]\n\n # update predicted_chars and input\n predicted_chars += d\n input_chars += d\n input_chars = input_chars[1:]\n return predicted_chars\n\n\nfor s in range(0, 500, 100):\n start_index = s\n input_chars = text[start_index:start_index + window_size]\n predict_input = predict_next_chars(model, input_chars, num_to_predict=100)\n\n print('------------------')\n input_line = 'input chars = ' + '\\n' + input_chars + '\"' + '\\n'\n print(input_line)\n\n line = 'predicted chars = ' + '\\n' + predict_input + '\"' + '\\n'\n print(line)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.24/_downloads/bcaf3ed1f43ea7377c6c0b00137d728f/custom_inverse_solver.ipynb
bsd-3-clause
[ "%matplotlib inline", "Source localization with a custom inverse solver\nThe objective of this example is to show how to plug a custom inverse solver\nin MNE in order to facilate empirical comparison with the methods MNE already\nimplements (wMNE, dSPM, sLORETA, eLORETA, LCMV, DICS, (TF-)MxNE etc.).\nThis script is educational and shall be used for methods\nevaluations and new developments. It is not meant to be an example\nof good practice to analyse your data.\nThe example makes use of 2 functions apply_solver and solver\nso changes can be limited to the solver function (which only takes three\nparameters: the whitened data, the gain matrix and the number of orientations)\nin order to try out another inverse algorithm.", "import numpy as np\nfrom scipy import linalg\nimport mne\nfrom mne.datasets import sample\nfrom mne.viz import plot_sparse_source_estimates\n\n\ndata_path = sample.data_path()\nfwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\nave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'\ncov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'\nsubjects_dir = data_path + '/subjects'\ncondition = 'Left Auditory'\n\n# Read noise covariance matrix\nnoise_cov = mne.read_cov(cov_fname)\n# Handling average file\nevoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))\nevoked.crop(tmin=0.04, tmax=0.18)\n\nevoked = evoked.pick_types(eeg=False, meg=True)\n# Handling forward solution\nforward = mne.read_forward_solution(fwd_fname)", "Auxiliary function to run the solver", "def apply_solver(solver, evoked, forward, noise_cov, loose=0.2, depth=0.8):\n \"\"\"Call a custom solver on evoked data.\n\n This function does all the necessary computation:\n\n - to select the channels in the forward given the available ones in\n the data\n - to take into account the noise covariance and do the spatial whitening\n - to apply loose orientation constraint as MNE solvers\n - to apply a weigthing of the columns of the forward operator as in the\n weighted Minimum Norm formulation in order to limit the problem\n of depth bias.\n\n Parameters\n ----------\n solver : callable\n The solver takes 3 parameters: data M, gain matrix G, number of\n dipoles orientations per location (1 or 3). A solver shall return\n 2 variables: X which contains the time series of the active dipoles\n and an active set which is a boolean mask to specify what dipoles are\n present in X.\n evoked : instance of mne.Evoked\n The evoked data\n forward : instance of Forward\n The forward solution.\n noise_cov : instance of Covariance\n The noise covariance.\n loose : float in [0, 1] | 'auto'\n Value that weights the source variances of the dipole components\n that are parallel (tangential) to the cortical surface. If loose\n is 0 then the solution is computed with fixed orientation.\n If loose is 1, it corresponds to free orientations.\n The default value ('auto') is set to 0.2 for surface-oriented source\n space and set to 1.0 for volumic or discrete source space.\n depth : None | float in [0, 1]\n Depth weighting coefficients. 
If None, no depth weighting is performed.\n\n Returns\n -------\n stc : instance of SourceEstimate\n The source estimates.\n \"\"\"\n # Import the necessary private functions\n from mne.inverse_sparse.mxne_inverse import \\\n (_prepare_gain, is_fixed_orient,\n _reapply_source_weighting, _make_sparse_stc)\n\n all_ch_names = evoked.ch_names\n\n # Handle depth weighting and whitening (here is no weights)\n forward, gain, gain_info, whitener, source_weighting, mask = _prepare_gain(\n forward, evoked.info, noise_cov, pca=False, depth=depth,\n loose=loose, weights=None, weights_min=None, rank=None)\n\n # Select channels of interest\n sel = [all_ch_names.index(name) for name in gain_info['ch_names']]\n M = evoked.data[sel]\n\n # Whiten data\n M = np.dot(whitener, M)\n\n n_orient = 1 if is_fixed_orient(forward) else 3\n X, active_set = solver(M, gain, n_orient)\n X = _reapply_source_weighting(X, source_weighting, active_set)\n\n stc = _make_sparse_stc(X, active_set, forward, tmin=evoked.times[0],\n tstep=1. / evoked.info['sfreq'])\n\n return stc", "Define your solver", "def solver(M, G, n_orient):\n \"\"\"Run L2 penalized regression and keep 10 strongest locations.\n\n Parameters\n ----------\n M : array, shape (n_channels, n_times)\n The whitened data.\n G : array, shape (n_channels, n_dipoles)\n The gain matrix a.k.a. the forward operator. The number of locations\n is n_dipoles / n_orient. n_orient will be 1 for a fixed orientation\n constraint or 3 when using a free orientation model.\n n_orient : int\n Can be 1 or 3 depending if one works with fixed or free orientations.\n If n_orient is 3, then ``G[:, 2::3]`` corresponds to the dipoles that\n are normal to the cortex.\n\n Returns\n -------\n X : array, (n_active_dipoles, n_times)\n The time series of the dipoles in the active set.\n active_set : array (n_dipoles)\n Array of bool. Entry j is True if dipole j is in the active set.\n We have ``X_full[active_set] == X`` where X_full is the full X matrix\n such that ``M = G X_full``.\n \"\"\"\n inner = np.dot(G, G.T)\n trace = np.trace(inner)\n K = linalg.solve(inner + 4e-6 * trace * np.eye(G.shape[0]), G).T\n K /= np.linalg.norm(K, axis=1)[:, None]\n X = np.dot(K, M)\n\n indices = np.argsort(np.sum(X ** 2, axis=1))[-10:]\n active_set = np.zeros(G.shape[1], dtype=bool)\n for idx in indices:\n idx -= idx % n_orient\n active_set[idx:idx + n_orient] = True\n X = X[active_set]\n return X, active_set", "Apply your custom solver", "# loose, depth = 0.2, 0.8 # corresponds to loose orientation\nloose, depth = 1., 0. # corresponds to free orientation\nstc = apply_solver(solver, evoked, forward, noise_cov, loose, depth)", "View in 2D and 3D (\"glass\" brain like 3D plot)", "plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),\n opacity=0.1)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dalonlobo/GL-Mini-Projects
TweetAnalysis/Final/Q1/Dalon_4_RTD_MiniPro_Tweepy_Q1.ipynb
mit
[ "Tweepy streamer\n## Q1 Find Influential people in twitter:\n- For simplicity assume the algorithm to find influential person is directly proportional to followers.\n- Find top 20 Influential personalities from the twitter across the globe.\n\nSince this is streaming application, we will use python logging module to log. Further read.", "import logging # python logging module\n\n# basic format for logging\nlogFormat = \"%(asctime)s - [%(levelname)s] (%(funcName)s:%(lineno)d) %(message)s\"\n\n# logs will be stored in tweepy.log\nlogging.basicConfig(filename='tweepy.log', level=logging.INFO, \n format=logFormat, datefmt=\"%Y-%m-%d %H:%M:%S\")\n", "Authentication and Authorisation\nCreate an app in twitter here. Copy the necessary keys and access tokens, which will be used here in our code. \nThe authorization is done using Oauth, An open protocol to allow secure authorization in a simple and standard method from web, mobile and desktop applications. Further read. \nWe will use Tweepy a python module. Tweepy is open-sourced, hosted on GitHub and enables Python to communicate with Twitter platform and use its API. Tweepy supports oauth authentication. Authentication is handled by the tweepy.AuthHandler class.", "import tweepy # importing all the modules required\nimport socket # will be used to create sockets\nimport json # manipulate json\n\nfrom httplib import IncompleteRead\n\n# Keep these tokens secret, as anyone can have full access to your\n# twitter account, using these tokens\n\nconsumerKey = \"#\"\nconsumerSecret = \"#\"\n\naccessToken = \"#\"\naccessTokenSecret = \"#\"\n", "Post this step, we will have full access to twitter api's", "# Performing the authentication and authorization, post this step \n# we will have full access to twitter api's\ndef connectToTwitter():\n \"\"\"Connect to twitter.\"\"\"\n try:\n auth = tweepy.OAuthHandler(consumerKey, consumerSecret)\n auth.set_access_token(accessToken, accessTokenSecret)\n\n api = tweepy.API(auth)\n logging.info(\"Successfully logged in to twitter.\")\n return api, auth\n except Exception as e:\n logging.info(\"Something went wrong in oauth, please check your tokens.\")\n logging.error(e)\n", "Streaming with tweepy\nThe Twitter streaming API is used to download twitter messages in real time. We use streaming api instead of rest api because, the REST api is used to pull data from twitter but the streaming api pushes messages to a persistent session. This allows the streaming api to download more data in real time than could be done using the REST API.\nIn Tweepy, an instance of tweepy.Stream establishes a streaming session and routes messages to StreamListener instance. The on_data method of a stream listener receives all messages and calls functions according to the message type. \nBut the on_data method is only a stub, so we need to implement the functionality by subclassing StreamListener. 
\nUsing the streaming api has three steps.\n\nCreate a class inheriting from StreamListener\nUsing that class create a Stream object\nConnect to the Twitter API using the Stream.", "# Tweet listner class which subclasses from tweepy.StreamListener\nclass TweetListner(tweepy.StreamListener):\n \"\"\"Twitter stream listner\"\"\"\n \n def __init__(self, csocket):\n self.clientSocket = csocket\n \n def dataProcessing(self, data):\n \"\"\"Process the data, before sending to spark streaming\n \"\"\"\n sendData = {} # data that is sent to spark streamer\n user = data.get(\"user\", {})\n name = user.get(\"name\", \"undefined\").encode('utf-8')\n followersCount = user.get(\"followers_count\", 0)\n sendData[\"name\"] = name\n sendData[\"followersCount\"] = followersCount\n #data_string = \"{}:{}\".format(name, followersCount) \n self.clientSocket.send(json.dumps(sendData) + u\"\\n\") # append new line character, so that spark recognizes it\n logging.debug(json.dumps(sendData))\n \n def on_data(self, raw_data):\n \"\"\" Called when raw data is received from connection.\n return False to stop stream and close connection.\n \"\"\"\n try:\n data = json.loads(raw_data)\n self.dataProcessing(data)\n #self.clientSocket.send(json.dumps(sendData) + u\"\\n\") # Because the connection was breaking\n return True\n except Exception as e:\n logging.error(\"An unhandled exception has occured, check your data processing\")\n logging.error(e)\n raise e\n \n def on_error(self, status_code):\n \"\"\"Called when a non-200 status code is returned\"\"\"\n logging.error(\"A non-200 status code is returned\")\n return True\n \n\n# Creating a proxy socket\ndef createProxySocket(host, port):\n \"\"\" Returns a socket which can be used to connect\n to spark.\n \"\"\"\n try:\n s = socket.socket() # initialize socket instance\n s.bind((host, port)) # bind to the given host and port \n s.listen(5) # Enable a server to accept connections.\n logging.info(\"Listening on the port {}\".format(port))\n cSocket, address = s.accept() # waiting for a connection\n logging.info(\"Received Request from: {}\".format(address))\n return cSocket\n except socket.error as e: \n if e.errno == socket.errno.EADDRINUSE: # Address in use\n logging.error(\"The given host:port {}:{} is already in use\"\\\n .format(host, port))\n logging.info(\"Trying on port: {}\".format(port + 1))\n return createProxySocket(host, port + 1)\n", "Drawbacks of twitter streaming API\nThe major drawback of the Streaming API is that Twitter’s Steaming API provides only a sample of tweets that are occurring. The actual percentage of total tweets users receive with Twitter’s Streaming API varies heavily based on the criteria users request and the current traffic. Studies have estimated that using Twitter’s Streaming API users can expect to receive anywhere from 1% of the tweets to over 40% of tweets in near real-time. The reason that you do not receive all of the tweets from the Twitter Streaming API is simply because Twitter doesn’t have the current infrastructure to support it, and they don’t want to; hence, the Twitter Firehose. Ref\nSo we will use a hack i.e. get the top trending topics and use that to filter data.", "def getWOEIDForTrendsAvailable(api, place):\n \"\"\"Returns the WOEID of the country if the trend is available there. 
\"\"\"\n \n # Iterate through trends\n data = api.trends_available()\n for item in data:\n if item[\"name\"] == place: # Use place = \"Worldwide\" to get woeid of world\n woeid = item[\"woeid\"]\n break\n return woeid #name = India, woeid\n\n\n# Get the list of trending topics from twitter\ndef getTrendingTopics(api, woeid):\n \"\"\"Get the top trending topics from twitter\"\"\"\n data = api.trends_place(woeid)\n listOfTrendingTopic = [trend[\"name\"] for trend in data[0][\"trends\"]]\n return listOfTrendingTopic\n\nif __name__ == \"__main__\":\n try:\n api, auth = connectToTwitter() # connecting to twitter\n # Global information is available by using 1 as the WOEID\n # woeid = getWOEIDForTrendsAvailable(api, \"Worldwide\") # get the woeid of the worldwide\n woeid = 1\n trendingTopics = getTrendingTopics(api, woeid)[:10] # Pick only top 10 trending topics\n \n host = \"localhost\"\n port = 8888\n cSocket = createProxySocket(host, port) # Creating a socket\n \n while True:\n try:\n # Connect/reconnect the stream\n tweetStream = tweepy.Stream(auth, TweetListner(cSocket)) # Stream the twitter data\n # DON'T run this approach async or you'll just create a ton of streams!\n tweetStream.filter(track=trendingTopics) # Filter on trending topics\n except IncompleteRead:\n # Oh well, reconnect and keep trucking\n continue\n except KeyboardInterrupt:\n # Or however you want to exit this loop\n tweetStream.disconnect()\n break\n except Exception as e:\n logging.error(\"Unhandled exception has occured\")\n logging.error(e)\n continue\n \n except KeyboardInterrupt: # Keyboard interrupt called\n logging.error(\"KeyboardInterrupt was hit\")\n except Exception as e:\n logging.error(\"Unhandled exception has occured\")\n logging.error(e)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
csaladenes/csaladenes.github.io
present/bi/2020/jupyter/2_pandas_filetipusok.ipynb
mit
[ "import pandas as pd", "JSON file beolvasás", "pd.read_json('data.json')", "Excel file beolvasás: sorok kihagyhatók a file tetejéről, munkalap neve választható.", "df=pd.read_excel('2.17deaths causes.xls',sheet_name='2.17',skiprows=5)", "numpy egy matematikai bővítőcsomag", "import numpy as np", "A nan értékek numpy-ban vannak definiálva.", "df=df.set_index('Unnamed: 0').dropna(how='any').replace('-',np.nan)\n\ndf2=pd.read_excel('2.17deaths causes.xls',sheet_name='2.17',skiprows=4)", "ffill azt jelenti forward fill, és a nan-okat kitölti a balra vagy fölötte álló értékkel. Az axis=0 a sorokat jelenti, az axis=1 az oszlopokat.", "df2.loc[[0]].ffill(axis=1)", "Sorok/oszlopok törlése.", "df=df.drop('Unnamed: 13',axis=1)\n\ndf.columns\n\n[year for year in range(2011,2017)]\n\ndf.columns=[year for year in range(2011,2017) for k in range(2)]", "Nested pythonic lista - két felsorolás egymás után", "[str(year)+'-'+str(k) for year in range(2011,2017) for k in range(2)]\n\nnemek=['Masculin','Feminin']\n[str(year)+'-'+nem for year in range(2011,2017) for nem in nemek]\n\ndf.columns=[str(year)+'-'+nem for year in range(2011,2017) for nem in nemek]\ndf\n\nevek=[str(year) for year in range(2011,2017) for nem in nemek]\nnemlista=[nem for year in range(2011,2017) for nem in nemek]\n\ndf=df.T", "Új oszlopok a dimenzióknak.", "df['Ev']=evek\ndf['Nem']=nemlista\n\ndf.head(6)\n\ndf.set_index(['Ev','Nem'])", "unstack paranccsal egy MultiIndex (azaz többszintes index) pivot-álható.", "df.set_index(['Ev','Nem'])[['Total']].unstack()", "Hiányzó értékek (nan-ok) helyettesítése.", "pd.DataFrame([0,3,4,5,'gfgf',np.nan]).replace(np.nan,'Mas')\n\npd.DataFrame([0,3,4,5,'gfgf',np.nan]).fillna('Mas')", "join - több DataFrame összefűzése. Az index ugyanaz kell legyen. Az oszlopok nevei különbözőek. Az index neve nem számít.", "df1=pd.read_excel('pensiunea comfort 1.xlsx',sheet_name='Sheet1')\ndf2=pd.read_excel('pensiunea comfort 1.xlsx',sheet_name='Sheet2')\ndf3=pd.read_excel('pensiunea comfort 1.xlsx',sheet_name='Sheet3')\n\ndf1=df1.dropna(how='all',axis=0).dropna(how='all',axis=1).set_index(2019)\ndf2=df2.dropna(how='all',axis=0).dropna(how='all',axis=1).set_index(2019)\ndf3=df3.dropna(how='all',axis=0).dropna(how='all',axis=1).set_index('2019/ NR. DE NOPTI')\n\ndf1.join(df2).join(df3)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
muatik/dm
SingularValueDecomposition.ipynb
mit
[ "import numpy as np\nimport pandas as pd\nfrom scipy.ndimage import imread\nfrom matplotlib import pylab as plt", "SVD is one of the matrix factorization tecniques. It factors a matrix into three parts with which we can reconstruct the initial matrix. However, reconstructing original matrix is not mostly the primary aim. Rather, we factorize matrices in order to achive following goals:\n\nto find principal components\nto reduce matrix size removing redundant dimentions\nto find latent dimentions\nvisualization\n\nIn a simple terms, factorization can be defined as breaking something into its building blocks, in other terms, its factors. Using SVD, we can decompose a matrix into three separate matrices as follows:\n$$ A_{m x n} = U_{m x r} * \\Sigma_{r x r} * (V_{n x r})^{T} $$\nwhere \n- U is the left singular vectors\n- $\\Sigma$ is the singular values sorted descending order along its diagonal, and full of zeroes elsewhere\n- V is the right singular vectors\n- m is the number of rows, \n- n is the number of columns(dimentions), \n- and r is the rank.\nExample", "A = np.mat([\n [4, 5, 4, 1, 1],\n [5, 3, 5, 0, 0],\n [0, 1, 0, 1, 1],\n [0, 0, 0, 0, 1],\n [1, 0, 0, 4, 5],\n [0, 1, 0, 5, 4],\n])\n\nU, S, V = np.linalg.svd(A)\nU.shape, S.shape, V.shape", "Left singular vectors", "U", "Singular values", "S\n\nnp.diag(S)", "As you can see, the singular values are sorted descendingly.\nRight singular values", "V", "Reconstructing the original matrix", "def reconstruct(U, S, V, rank):\n return U[:,0:rank] * np.diag(S[:rank]) * V[:rank]\n\nr = len(S) \nreconstruct(U, S, V, r)", "We use all the dimentions to get back to the original matrix. As a result, we obtain the matrix which is almost identical. Let's calculate the difference between the two matrices.", "def calcError(A, B):\n return np.sum(np.power(A - B, 2))\n\ncalcError(A, reconstruct(U, S, V, r))", "Expectedly, the error is infinitesimal.\nHowever, most of the time this is not our intention. Instead of using all the dimentions(rank), we only use some of them, which have more variance, in other words, which provides more information. Let's see what we will get when using only the first three most significant dimentions.", "reconstruct(U, S, V, 3)\n\ncalcError(A, reconstruct(U, S, V, 3))", "Again, the reconstructed matrix is very similar to the original one. And the total error is still small. \nNow we can ask the question that which rank should we pick? There is trade-off that when you use more rank, you get closer to the original matrix and have less error, however you need to keep more data. On the other hand, if you use less rank, you will have much error but save space and remove the redundant dimentions and noise.", "reconstruct(U, S, V, 2)\n\ncalcError(A, reconstruct(U, S, V, 2))\n\nA = np.mat([\n [4, 5, 4, 0, 4, 0, 0, 1, 0, 1, 2, 1],\n [5, 3, 5, 5, 0, 1, 0, 0, 2, 0, 0, 2],\n [0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0],\n [0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1],\n [1, 0, 1, 0, 1, 5, 0, 0, 4, 5, 4, 0],\n [0, 1, 1, 0, 0, 4, 3, 5, 5, 3, 4, 0],\n])\n\ndef reconstruct(U, S, V, rank):\n return U[:,0:rank] * np.diag(S[:rank]) * V[:rank]\n\nfor rank in range(1, len(S)):\n rA = reconstruct(U, S, V, rank)\n error = calcError(A, rA)\n coverage = S[:rank].sum() / S.sum()\n print(\"with rank {}, coverage: {:.4f}, error: {:.4f}\".format(rank, coverage, error))", "As it can be seen above, more rank is used, less error occur. 
From another perspective, we get closer to the original data by increasing the rank.\nOn the other hand, after a certain rank, adding more rank will not contribute as much to reducing the error.\nLet's compare a reconstructed column to the original one just by eye. Even though it is reconstructed using only 4 dimensions, we almost recover the original data, with some error.", "print(\"Original:\\n\", A[:,10])\nprint(\"Reconstructed:\\n\", reconstruct(U, S, V, 4)[:,10])\n\nimread(\"data/pacman.png\", flatten=True).shape\n\nA = np.mat(imread(\"data/pacman.png\", flatten=True))\n\nU, S, V = np.linalg.svd(A)\n\nA.shape, U.shape, S.shape, V.shape\n\nfor rank in range(1, len(S)):\n    rA = reconstruct(U, S, V, rank)\n    error = calcError(A, rA)\n    coverage = S[:rank].sum() / S.sum()\n    print(\"with rank {}, coverage: {:.4f}, error: {:.4f}\".format(rank, coverage, error))\n\nfor i in range(1, 50, 5):\n    rA = reconstruct(U, S, V, i)\n    print(rA.shape)\n    plt.imshow(rA, cmap='gray')\n    plt.show()\n\nplt.imshow(data, interpolation='nearest')\n\n128 * 128 \n\n- (10*128*2)\n\nfrom PIL import Image\nA = np.mat(imread(\"data/noise.png\", flatten=True))\nimg = Image.open('data/noise.png')\nimggray = img.convert('LA')\n\nimgmat = np.array(list(imggray.getdata(band=0)), float)\n\n\nimgmat = np.array(list(imggray.getdata(band=0)), float)\nimgmat.shape = (imggray.size[1], imggray.size[0])\nimgmat = np.matrix(imgmat)\nplt.figure(figsize=(9,6))\nplt.imshow(imgmat, cmap='gray');\nplt.show()\n\nU, S, V = np.linalg.svd(imgmat)\n\nfor i in range(1, 10, 1):\n    rA = reconstruct(U, S, V, i)\n    print(rA.shape)\n    plt.imshow(rA, cmap='gray');\n    plt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gully/starfish-demo
demo7/raw/mixture_model_01_exploratory.ipynb
mit
[ "Issue #35: Mixture model mode\ngully\nTue, March 1, 2016\nBasically we want to add two spectra together (in the right proportions) to mimic a two-component photosphere.\nThe mixture model\nWe have to get the scale factor right.\n$$ f_{mix} = (1-c) \\cdot \\frac{f_1}{\\bar f_1} + c \\cdot q_m \\cdot \\frac{f_2}{\\bar f_2} $$\nwhere:\n$\\frac{f_1}{\\bar f_1}$ is the normalized spectrum of star 1\n$\\frac{f_2}{\\bar f_2}$ is the normalized spectrum of star 2\n$c$ is the solid angle ratio of $f_2$ to $f_1$\n$q_m$ is the wavelength-dependent specific flux ratio of of $f_2$ to $f_1$ for spectral order $m$\nStrategies for estimating the scale factor $q_m$\nThere are a few different Strategies for dealing with $q_m$: \n\nDirectly synthetize un-normalized (raw) flux with the PCA process, rather than dividing out the mean, so $q_m$ disappears.\nApproximate $q_m$ as the ratio of Black Bodies in the wavelength range of interest, using $T_{eff, 1}$ and $T_{eff, 2}$, ignore other parameter dependences.\nCalculate $q_m$ as the ratio of un-normalized (raw) flux ratios $\\frac{\\bar f_2}{\\bar f_1}$ for each grid point, for each spectral order chunk, interpolate between grid points.\n\nStrategy 1 is super hard. The assumption of normalized, mean subtracted fluxes are so deeply imbedded in spectral emulation framework that Strategy 1 would require a massive rewrite of the PCA spectral emulator, and will introduce major problems in determining the PCA components. So that's out.\nStrategy 2 will work OKAYish, but the black body is not a great estimate for the flux ratio in the presence of large molecular absorption bands for cool stellar photospheres, so some multiple simultaneous fiting of different spectral orders could get wonky. So this is a good backup plan, or temporary demo, but should probably not be used for production.\n<img src= ./BB_flux_rat_vs_PHOENIX.png width=400></img>\nStrategy 3 is probably about as close to the Right Thing to Do as we can get, but requires a few data engineering steps.\nData engineering for Strategy 3\n\nGenerate a temporary un-normalized (i.e. raw) grid in parallel with the normalized grid, vis-a-vis grid.py --create\nFrom the raw grid, compute the ratio of $q_m(\\theta_{grid, 1}, \\theta_{grid, 2}) = \\bar f_2 / \\bar f_1 $ for each grid point with every other grid point, for each spectral order\nUse 6-D regression to fit a function $\\hat q_m(\\theta_{\\star, 1}, \\theta_{\\star, 2})$.\n\nWhich regression to use? My first reaction is Gaussian Process, since it would mimic the spectral emulator. But after messing around with it and thinking about dimensionality and other tradeoffs, I decided to just use a linear interpolator. This choice might cause the $c$ parameter above to cling to the grid points, if the grid points have cuspy discontinuities from piecewise linear joints at the grid points. So if we see that we can come back and make a fancier estimator. 
For now we will leave it and move on.", "import numpy as np\n\nimport Starfish\nfrom Starfish.grid_tools import HDF5Creator\n\nh5i = Starfish.grid_tools.HDF5Interface(\"libraries/PHOENIX_TRES_test.hdf5\")", "Double check that we're using raw fluxes (norm = False):", "if Starfish.config[\"grid\"][\"norm\"] == False : print(\"All good.\") \n\nh5i.grid_points.shape", "Let's load the flux of each model grid point and compute the mean flux ratio with every other model grid point.\nThere will be $N_{grid} \\times N_{grid}$ pairs of flux ratios, only half of which are unique.", "N_grid, D_dim = h5i.grid_points.shape\n\nN_tot = N_grid*N_grid\n\nd_grd = np.empty((N_tot, D_dim*2))\nf_rat = np.empty(N_tot)\n\nc = 0\nfor i in np.arange(N_grid):\n print(i, end=' ')\n for j in np.arange(N_grid):\n d_grd[c] = np.hstack((h5i.grid_points[i], h5i.grid_points[j]))\n f_rat[c] = np.mean(h5i.load_flux(h5i.grid_points[i]))/np.mean(h5i.load_flux(h5i.grid_points[j]))\n c += 1", "We now have a six dimensional design matrix and a scalar that we can fit to.", "d_grd.shape, f_rat.shape\n\nfrom scipy.interpolate import LinearNDInterpolator\n\ninterp_f_rat = LinearNDInterpolator(d_grd, f_rat)\n\ninterp_f_rat(6000,4.0, 0, 6200, 5.0, -1.0)\n\nnp.mean(h5i.load_flux([6000, 4.0, 0]))/np.mean(h5i.load_flux([6200, 5.0, -1.0]))", "Checks out.\nSo now we can produce $q_m$ on demand!\nJust a reminder... we have to do this for each order. The next step is to figure out how to implement this efficiently in parallel operations.", "spec = h5i.load_flux(h5i.grid_points[i])\n\nh5i.wl.shape, spec.shape\n\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nplt.plot(h5i.wl, spec)\n\nStarfish.config[\"data\"][\"orders\"]", "The end." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tpin3694/tpin3694.github.io
machine-learning/dbscan_clustering-Copy1.ipynb
mit
[ "Title: Agglomerative Clustering\nSlug: agglomerative_clustering\nSummary: How to conduct agglomerative clustering in scikit-learn.\nDate: 2017-09-22 12:00\nCategory: Machine Learning\nTags: Clustering\nAuthors: Chris Albon \n<a alt=\"Agglomerative Clustering\" href=\"https://machinelearningflashcards.com\">\n <img src=\"agglomerative_clustering/Aggomerative_Clustering_print.png\" class=\"flashcard center-block\">\n</a>\nPreliminaries", "# Load libraries\nfrom sklearn import datasets\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.cluster import AgglomerativeClustering", "Load Iris Flower Data", "# Load data\niris = datasets.load_iris()\nX = iris.data", "Standardize Features", "# Standarize features\nscaler = StandardScaler()\nX_std = scaler.fit_transform(X)", "Conduct Agglomerative Clustering\nIn scikit-learn, AgglomerativeClustering uses the linkage parameter to determine the merging strategy to minimize the 1) variance of merged clusters (ward), 2) average of distance between observations from pairs of clusters (average), or 3) maximum distance between observations from pairs of clusters (complete). \nTwo other parameters are useful to know. First, the affinity parameter determines the distance metric used for linkage (minkowski, euclidean, etc.). Second, n_clusters sets the number of clusters the clustering algorithm will attempt to find. That is, clusters are successively merged until there are only n_clusters remaining.", "# Create meanshift object\nclt = AgglomerativeClustering(linkage='complete', \n affinity='euclidean', \n n_clusters=3)\n\n# Train model\nmodel = clt.fit(X_std)", "Show Cluster Membership", "# Show cluster membership\nmodel.labels_" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
SKA-ScienceDataProcessor/algorithm-reference-library
workflows/notebooks/imaging-wterm_arlexecute.ipynb
apache-2.0
[ "Wide-field imaging demonstration\nThis script makes a fake data set, fills it with a number of point components, and then images it using a variety of algorithms. See imaging-fits for a similar notebook that checks for errors in the recovered properties of the images.\nThe measurement equation for a wide field of view interferometer is:\n$$V(u,v,w) =\\int \\frac{I(l,m)}{\\sqrt{1-l^2-m^2}} e^{-2 \\pi j (ul+um + w(\\sqrt{1-l^2-m^2}-1))} dl dm$$\nWe will show various algorithms for computing approximations to this integral. Calculation of the visibility V from the sky brightness I is called predict, and the inverese is called invert.", "%matplotlib inline\n\nimport os\nimport sys\n\nsys.path.append(os.path.join('..', '..'))\n\nfrom data_models.parameters import arl_path\n\nresults_dir = arl_path('test_results')\n\nfrom matplotlib import pylab\n\nimport numpy\n\nfrom astropy.coordinates import SkyCoord\nfrom astropy import units as u\nfrom astropy.wcs.utils import pixel_to_skycoord\n\nfrom matplotlib import pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom data_models.polarisation import PolarisationFrame\n\nfrom wrappers.serial.image.iterators import image_raster_iter\nfrom processing_library.image.operations import create_w_term_like\n\n# Use serial wrappers by default\nfrom wrappers.serial.visibility.base import create_visibility, create_visibility, create_visibility_from_rows\nfrom wrappers.serial.skycomponent.operations import create_skycomponent\nfrom wrappers.serial.image.operations import show_image, export_image_to_fits\nfrom wrappers.serial.visibility.iterators import vis_timeslice_iter\nfrom wrappers.serial.simulation.configurations import create_named_configuration\nfrom wrappers.serial.imaging.base import invert_2d, create_image_from_visibility, \\\n predict_skycomponent_visibility, advise_wide_field\nfrom wrappers.serial.visibility.iterators import vis_timeslice_iter\nfrom wrappers.serial.imaging.weighting import weight_visibility\nfrom wrappers.serial.visibility.iterators import vis_timeslices\n\nfrom wrappers.arlexecute.griddata.kernels import create_awterm_convolutionfunction\nfrom wrappers.arlexecute.griddata.convolution_functions import apply_bounding_box_convolutionfunction\n\n# Use arlexecute for imaging\nfrom wrappers.arlexecute.execution_support.arlexecute import arlexecute\nfrom workflows.arlexecute.imaging.imaging_arlexecute import invert_list_arlexecute_workflow\n\nimport logging\n\nlog = logging.getLogger()\nlog.setLevel(logging.DEBUG)\nlog.addHandler(logging.StreamHandler(sys.stdout))\n\ndoplot = True\n\n\npylab.rcParams['figure.figsize'] = (12.0, 12.0)\npylab.rcParams['image.cmap'] = 'rainbow'", "Construct the SKA1-LOW core configuration", "lowcore = create_named_configuration('LOWBD2-CORE')", "We create the visibility. \nThis just makes the uvw, time, antenna1, antenna2, weight columns in a table", "times = numpy.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]) * (numpy.pi / 12.0)\nfrequency = numpy.array([1e8])\nchannel_bandwidth = numpy.array([1e7])\n\n\nreffrequency = numpy.max(frequency)\nphasecentre = SkyCoord(ra=+15.0 * u.deg, dec=-45.0 * u.deg, frame='icrs', equinox='J2000')\nvt = create_visibility(lowcore, times, frequency, channel_bandwidth=channel_bandwidth,\n weight=1.0, phasecentre=phasecentre, polarisation_frame=PolarisationFrame(\"stokesI\"))", "Advise on wide field parameters. 
This returns a dictionary with all the input and calculated variables.", "advice = advise_wide_field(vt, wprojection_planes=1)", "Plot the synthesized UV coverage.", "if doplot:\n plt.clf()\n plt.plot(vt.data['uvw'][:, 0], vt.data['uvw'][:, 1], '.', color='b')\n plt.plot(-vt.data['uvw'][:, 0], -vt.data['uvw'][:, 1], '.', color='r')\n plt.xlabel('U (wavelengths)')\n plt.ylabel('V (wavelengths)')\n plt.show()\n \n plt.clf()\n plt.plot(vt.data['uvw'][:, 0], vt.data['uvw'][:, 2], '.', color='b')\n plt.xlabel('U (wavelengths)')\n plt.ylabel('W (wavelengths)')\n plt.show()\n\n plt.clf()\n plt.plot(vt.data['time'][vt.u>0.0], vt.data['uvw'][:, 2][vt.u>0.0], '.', color='b')\n plt.plot(vt.data['time'][vt.u<=0.0], vt.data['uvw'][:, 2][vt.u<=0.0], '.', color='r')\n plt.xlabel('U (wavelengths)')\n plt.ylabel('W (wavelengths)')\n plt.show()\n\n plt.clf()\n n, bins, patches = plt.hist(vt.w, 50, normed=1, facecolor='green', alpha=0.75)\n plt.xlabel('W (wavelengths)')\n plt.ylabel('Count')\n plt.show()", "Show the planar nature of the uvw sampling, rotating with hour angle\nCreate a grid of components and predict each in turn, using the full phase term including w.", "npixel = 512\ncellsize=0.001\nfacets = 4\nflux = numpy.array([[100.0]])\nvt.data['vis'] *= 0.0\n\nmodel = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1)\nspacing_pixels = npixel // facets\nlog.info('Spacing in pixels = %s' % spacing_pixels)\nspacing = 180.0 * cellsize * spacing_pixels / numpy.pi\ncenters = -1.5, -0.5, +0.5, +1.5\ncomps=list()\nfor iy in centers:\n for ix in centers:\n pra = int(round(npixel // 2 + ix * spacing_pixels - 1))\n pdec = int(round(npixel // 2 + iy * spacing_pixels - 1))\n sc = pixel_to_skycoord(pra, pdec, model.wcs)\n log.info(\"Component at (%f, %f) %s\" % (pra, pdec, str(sc)))\n comp = create_skycomponent(flux=flux, frequency=frequency, direction=sc, \n polarisation_frame=PolarisationFrame(\"stokesI\"))\n comps.append(comp)\npredict_skycomponent_visibility(vt, comps)", "Make the dirty image and point spread function using the two-dimensional approximation:\n$$V(u,v,w) =\\int I(l,m) e^{2 \\pi j (ul+um)} dl dm$$\nNote that the shape of the sources vary with position in the image. This space-variant property of the PSF arises from the w-term neglected in the two-dimensional invert.", "arlexecute.set_client(use_dask=True)\n\ndirty = create_image_from_visibility(vt, npixel=512, cellsize=0.001, \n polarisation_frame=PolarisationFrame(\"stokesI\"))\nvt = weight_visibility(vt, dirty)\n\nfuture = invert_list_arlexecute_workflow([vt], [dirty], context='2d')\ndirty, sumwt = arlexecute.compute(future, sync=True)[0]\n\nif doplot:\n show_image(dirty)\n\nprint(\"Max, min in dirty image = %.6f, %.6f, sumwt = %f\" % (dirty.data.max(), dirty.data.min(), sumwt))\n\nexport_image_to_fits(dirty, '%s/imaging-wterm_dirty.fits' % (results_dir))", "This occurs because the Fourier transform relationship between sky brightness and visibility is only accurate over small fields of view. \nHence we can make an accurate image by partitioning the image plane into small regions, treating each separately and then glueing the resulting partitions into one image. 
We call this image plane partitioning image plane faceting.\n$$V(u,v,w) = \\sum_{i,j} \\frac{1}{\\sqrt{1- l_{i,j}^2- m_{i,j}^2}} e^{-2 \\pi j (ul_{i,j}+um_{i,j} + w(\\sqrt{1-l_{i,j}^2-m_{i,j}^2}-1))}\n\\int I(\\Delta l, \\Delta m) e^{-2 \\pi j (u\\Delta l_{i,j}+u \\Delta m_{i,j})} dl dm$$", "dirtyFacet = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1)\nfuture = invert_list_arlexecute_workflow([vt], [dirtyFacet], facets=4, context='facets')\ndirtyFacet, sumwt = arlexecute.compute(future, sync=True)[0]\n\nif doplot:\n show_image(dirtyFacet)\n\nprint(\"Max, min in dirty image = %.6f, %.6f, sumwt = %f\" % (dirtyFacet.data.max(), dirtyFacet.data.min(), sumwt))\nexport_image_to_fits(dirtyFacet, '%s/imaging-wterm_dirtyFacet.fits' % (results_dir))", "That was the best case. This time, we will not arrange for the partitions to be centred on the sources.", "dirtyFacet2 = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1)\nfuture = invert_list_arlexecute_workflow([vt], [dirtyFacet2], facets=2, context='facets')\ndirtyFacet2, sumwt = arlexecute.compute(future, sync=True)[0]\n\n\nif doplot:\n show_image(dirtyFacet2)\n\nprint(\"Max, min in dirty image = %.6f, %.6f, sumwt = %f\" % (dirtyFacet2.data.max(), dirtyFacet2.data.min(), sumwt))\nexport_image_to_fits(dirtyFacet2, '%s/imaging-wterm_dirtyFacet2.fits' % (results_dir))", "Another approach is to partition the visibility data by slices in w. The measurement equation is approximated as:\n$$V(u,v,w) =\\sum_i \\int \\frac{ I(l,m) e^{-2 \\pi j (w_i(\\sqrt{1-l^2-m^2}-1))})}{\\sqrt{1-l^2-m^2}} e^{-2 \\pi j (ul+um)} dl dm$$\nIf images constructed from slices in w are added after applying a w-dependent image plane correction, the w term will be corrected. \nThe w-dependent w-beam is:", "if doplot:\n wterm = create_w_term_like(model, phasecentre=vt.phasecentre, w=numpy.max(vt.w))\n show_image(wterm)\n plt.show()\n\ndirtywstack = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1)\nfuture = invert_list_arlexecute_workflow([vt], [dirtywstack], vis_slices=101, context='wstack')\ndirtywstack, sumwt = arlexecute.compute(future, sync=True)[0]\n\nshow_image(dirtywstack)\nplt.show()\n\nprint(\"Max, min in dirty image = %.6f, %.6f, sumwt = %f\" % \n (dirtywstack.data.max(), dirtywstack.data.min(), sumwt))\n\nexport_image_to_fits(dirtywstack, '%s/imaging-wterm_dirty_wstack.fits' % (results_dir))", "The w-term can also be viewed as a time-variable distortion. Approximating the array as instantaneously co-planar, we have that w can be expressed in terms of $u,v$\n$$w = a u + b v$$\nTransforming to a new coordinate system:\n$$ l' = l + a (\\sqrt{1-l^2-m^2}-1))$$\n$$ m' = m + b (\\sqrt{1-l^2-m^2}-1))$$\nIgnoring changes in the normalisation term, we have:\n$$V(u,v,w) =\\int \\frac{I(l',m')}{\\sqrt{1-l'^2-m'^2}} e^{-2 \\pi j (ul'+um')} dl' dm'$$\nTo illustrate this, we will construct images as a function of time. For comparison, we show difference of each time slice from the best facet image. 
Instantaneously the sources are un-distorted but do lie in the wrong location.", "for rows in vis_timeslice_iter(vt):\n visslice = create_visibility_from_rows(vt, rows)\n dirtySnapshot = create_image_from_visibility(visslice, npixel=512, cellsize=0.001, npol=1, compress_factor=0.0)\n future = invert_list_arlexecute_workflow([visslice], [dirtySnapshot], context='2d')\n dirtySnapshot, sumwt = arlexecute.compute(future, sync=True)[0]\n \n print(\"Max, min in dirty image = %.6f, %.6f, sumwt = %f\" % \n (dirtySnapshot.data.max(), dirtySnapshot.data.min(), sumwt))\n if doplot:\n dirtySnapshot.data -= dirtyFacet.data\n show_image(dirtySnapshot)\n plt.title(\"Hour angle %.2f hours\" % (numpy.average(visslice.time) * 12.0 / 43200.0))\n plt.show()", "This timeslice imaging leads to a straightforward algorithm in which we correct each time slice and then sum the resulting timeslices.", "dirtyTimeslice = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1)\nfuture = invert_list_arlexecute_workflow([vt], [dirtyTimeslice], vis_slices=vis_timeslices(vt, 'auto'),\n padding=2, context='timeslice')\ndirtyTimeslice, sumwt = arlexecute.compute(future, sync=True)[0]\n\n\nshow_image(dirtyTimeslice)\nplt.show()\n\nprint(\"Max, min in dirty image = %.6f, %.6f, sumwt = %f\" % \n (dirtyTimeslice.data.max(), dirtyTimeslice.data.min(), sumwt))\n\nexport_image_to_fits(dirtyTimeslice, '%s/imaging-wterm_dirty_Timeslice.fits' % (results_dir))", "Finally we try w-projection. For a fixed w, the measurement equation can be stated as as a convolution in Fourier space. \n$$V(u,v,w) =G_w(u,v) \\ast \\int \\frac{I(l,m)}{\\sqrt{1-l^2-m^2}} e^{-2 \\pi j (ul+um)} dl dm$$\nwhere the convolution function is:\n$$G_w(u,v) = \\int \\frac{1}{\\sqrt{1-l^2-m^2}} e^{-2 \\pi j (ul+um + w(\\sqrt{1-l^2-m^2}-1))} dl dm$$\nHence when gridding, we can use the transform of the w beam to correct this effect while gridding.", "dirtyWProjection = create_image_from_visibility(vt, npixel=512, cellsize=0.001, npol=1)\n\ngcfcf = create_awterm_convolutionfunction(model, nw=101, wstep=800.0/101, oversampling=8, \n support=60,\n use_aaf=True)\n \nfuture = invert_list_arlexecute_workflow([vt], [dirtyWProjection], context='2d', gcfcf=[gcfcf])\n\ndirtyWProjection, sumwt = arlexecute.compute(future, sync=True)[0]\n\nif doplot:\n show_image(dirtyWProjection)\n\nprint(\"Max, min in dirty image = %.6f, %.6f, sumwt = %f\" % (dirtyWProjection.data.max(), \n dirtyWProjection.data.min(), sumwt))\nexport_image_to_fits(dirtyWProjection, '%s/imaging-wterm_dirty_WProjection.fits' % (results_dir))\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Vooban/Decision-Trees-For-Knowledge-Discovery
Decision-Trees-For-Knowledge-Discovery-With-XGBoost.ipynb
mit
[ "Discovering structure behind data\nLet's understand and model the hidden structure behind data with Decision Trees. In this tutorial, we'll explore and inspect how a model can do its decisions on a car evaluation data set. Decision trees work with simple \"if\" clauses dichotomically chained together, splitting the data flow recursively on those \"if\"s until they reach a leaf where we can categorize the data. Such data inspection could be used to reverse engineer the behavior of any function. \nSince decision trees are good algorithms for discovering the structure hidden behind data, we'll use and model the car evaluation data set, for which the prediction problem is a (deterministic) surjective function. This means that the inputs of the examples in the data set cover all the possibilities, and that for each possible input value, there is only one answer to predict (thus, two examples with the same input values would never have a different expected prediction). On the point of view of Data Science, because of the properties of our dataset, we won't need to use a test set nor to use cross validation. Thus, the error we will obtain below at modelizing our dataset would be equal to the true test error if we had a test set.\nThe attribute to predict in the data set could have been, for example, created from a programmatic function and we will basically reverse engineer the logic mapping the inputs to the outputs to recreate the function and to be able to explain it visually.\nAbout the Car Evaluation Data Set\nFor more information: http://archive.ics.uci.edu/ml/datasets/Car+Evaluation\nOverview\nThe Car Evaluation Database was derived from a simple hierarchical decision model originally developed for the demonstration of DEX, M. Bohanec, V. Rajkovic: Expert system for decision making. Sistemica 1(1), pp. 145-157, 1990.). The model evaluates cars according to the following concept structure: \n\nCAR car acceptability:\nPRICE overall price:\nbuying buying price\nmaint price of the maintenance\n\n\nTECH technical characteristics:\nCOMFORT comfort:\ndoors number of doors\npersons capacity in terms of persons to carry\nlug_boot the size of luggage boot\nsafety estimated safety of the car\n\n\n\nInput attributes are printed in lowercase. Besides the target concept (CAR), the model includes three intermediate concepts: PRICE, TECH, COMFORT. Every concept is in the original model related to its lower level descendants by a set of examples. \nThe Car Evaluation Database contains examples with the structural information removed, i.e., directly relates CAR to the six input attributes: buying, maint, doors, persons, lug_boot, safety. \nBecause of known underlying concept structure, this database may be particularly useful for testing constructive induction and structure discovery methods. 
\nAttributes, instances, and Class Distribution\nNumber of Attributes: 6\nMissing Attribute Values: none\n| Attribute | Values |\n|------------|--------|\n| buying | v-high, high, med, low |\n| maint | v-high, high, med, low |\n| doors | 2, 3, 4, 5-more |\n| persons | 2, 4, more |\n| lug_boot | small, med, big |\n| safety | low, med, high |\nNumber of Instances: 1728 (Instances completely cover the attribute space.)\n| class | N | N[%] |\n|---|---|---|\n| unacc | 1210 | 70.023 % |\n| acc | 384 | 22.222 % |\n| good | 69 | 3.993 % |\n| v-good | 65 | 3.762 % |\nWe'll now load the car evaluation data set in Python and then train decision trees with XGBoost", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport xgboost as xgb\nfrom xgboost import plot_tree\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder\n\nimport os", "Define the features and preprocess the car evaluation data set\nWe'll preprocess the attributes into redundant features, such as using an integer index (linear) to represent a value for an attribute, as well as also using a one-hot encoding for each attribute's possible values as new features. Despite the fact that this is redundant, this will help to make the tree smaller since it has more choice on how to split data on each branch.", "input_labels = [\n [\"buying\", [\"vhigh\", \"high\", \"med\", \"low\"]],\n [\"maint\", [\"vhigh\", \"high\", \"med\", \"low\"]],\n [\"doors\", [\"2\", \"3\", \"4\", \"5more\"]],\n [\"persons\", [\"2\", \"4\", \"more\"]],\n [\"lug_boot\", [\"small\", \"med\", \"big\"]],\n [\"safety\", [\"low\", \"med\", \"high\"]],\n]\n\noutput_labels = [\"unacc\", \"acc\", \"good\", \"vgood\"]\n\n# Load data set\ndata = np.genfromtxt(os.path.join('data', 'data/car.data'), delimiter=',', dtype=\"U\")\ndata_inputs = data[:, :-1]\ndata_outputs = data[:, -1]\n\ndef str_data_to_one_hot(data, input_labels):\n \"\"\"Convert each feature's string to a flattened one-hot array. 
\"\"\"\n X_int = LabelEncoder().fit_transform(data.ravel()).reshape(*data.shape)\n X_bin = OneHotEncoder().fit_transform(X_int).toarray()\n \n attrs_names = []\n for a in input_labels:\n key = a[0]\n for b in a[1]:\n value = b\n attrs_names.append(\"{}_is_{}\".format(key, value))\n\n return X_bin, attrs_names\n\ndef str_data_to_linear(data, input_labels):\n \"\"\"Convert each feature's string to an integer index\"\"\"\n X_lin = np.array([[\n input_labels[a][1].index(j) for a, j in enumerate(i)\n ] for i in data])\n \n # Indexes will range from 0 to n-1\n attrs_names = [i[0] + \"_index\" for i in input_labels]\n \n return X_lin, attrs_names\n\n# Take both one-hot and linear versions of input features: \nX_one_hot, attrs_names_one_hot = str_data_to_one_hot(data_inputs, input_labels)\nX_linear_int, attrs_names_linear_int = str_data_to_linear(data_inputs, input_labels)\n\n# Put that together:\nX = np.concatenate([X_one_hot, X_linear_int], axis=-1)\nattrs_names = attrs_names_one_hot + attrs_names_linear_int\n\n# Outputs use indexes, this is not one-hot:\ninteger_y = np.array([output_labels.index(i) for i in data_outputs])\n\nprint(\"Data set's shape,\")\nprint(\"X.shape, integer_y.shape, len(attrs_names), len(output_labels):\")\nprint(X.shape, integer_y.shape, len(attrs_names), len(output_labels))\n\n# Shaping the data into a single pandas dataframe for naming columns:\npdtrain = pd.DataFrame(X)\npdtrain.columns = attrs_names\ndtrain = xgb.DMatrix(pdtrain, integer_y)", "Train simple decision trees (here using XGBoost) to fit the data set:\nFirst, let's define some hyperparameters, such as the depth of the tree.", "\nnum_rounds = 1 # Do not use boosting for now, we want only 1 decision tree per class.\nnum_classes = len(output_labels)\nnum_trees = num_rounds * num_classes\n\n# Let's use a max_depth of 4 for the sole goal of simplifying the visual representation produced\n# (ideally, a tree would be deeper to classify perfectly on that dataset)\nparam = {\n 'max_depth': 4,\n 'objective': 'multi:softprob',\n 'num_class': num_classes\n}\n\nbst = xgb.train(param, dtrain, num_boost_round=num_rounds)\nprint(\"Decision trees trained!\")\nprint(\"Mean Error Rate:\", bst.eval(dtrain))\nprint(\"Accuracy:\", (bst.predict(dtrain).argmax(axis=-1) == integer_y).mean()*100, \"%\")", "Plot and save the trees (one for each class):\nThe 4 trees of the classifer (one tree per class) will each output a number that represents how much it is probable that the thing to classify belongs to that class, and it is by comparing the output at the end of all the trees for a given example that we could get the maximal output as associating the example to that class. 
Indeed, the binary situation where we have only one tree that outputs a positive and else negative number would be simpler to interpret rather than classifying for 4 binary classes at once.", "def plot_first_trees(bst, output_labels, trees_name):\n \"\"\"\n Plot and save the first trees for multiclass classification\n before any boosting was performed.\n \"\"\"\n for tree_idx in range(len(output_labels)):\n class_name = output_labels[tree_idx]\n graph_save_path = os.path.join(\n \"exported_xgboost_trees\", \n \"{}_{}_for_{}\".format(trees_name, tree_idx, class_name)\n )\n\n graph = xgb.to_graphviz(bst, num_trees=tree_idx)\n graph.render(graph_save_path)\n\n # from IPython.display import display\n # display(graph)\n ### Inline display in the notebook would be too huge and would require much side scrolling.\n ### So we rather plot it anew with matplotlib and a fixed size for inline quick view purposes:\n fig, ax = plt.subplots(figsize=(16, 16)) \n plot_tree(bst, num_trees=tree_idx, rankdir='LR', ax=ax)\n plt.title(\"Saved a high-resolution graph for the class '{}' to: {}.pdf\".format(class_name, graph_save_path))\n plt.show()\n\n# Plot our simple trees:\nplot_first_trees(bst, output_labels, trees_name=\"simple_tree\")", "Note that the above trees can be viewed here online:\nhttps://github.com/Vooban/Decision-Trees-For-Knowledge-Discovery/tree/master/exported_xgboost_trees\nPlot the importance of each input features for those simple decision trees:\nNote here that it is the feature importance according to our simple, shallow trees. More complex trees would include more of the features/attributes with different proportions.", "fig, ax = plt.subplots(figsize=(12, 7)) \nxgb.plot_importance(bst, ax=ax)\nplt.show()", "Let's now generate slightly more complex trees to aid inspection\n<p align=\"center\">\n <a href=\"http://theinceptionbutton.com/\" ><img src=\"deeper.jpg\" /></a>\n</p>\n\nLet's go deeper and build deeper trees. However, those trees are not maximally complex since XGBoost is rather built to boost over forests of small trees than a big one.", "num_rounds = 1 # Do not use boosting for now, we want only 1 decision tree per class.\nnum_classes = len(output_labels)\nnum_trees = num_rounds * num_classes\n\n# Let's use a max_depth of 4 for the sole goal of simplifying the visual representation produced\n# (ideally, a tree would be deeper to classify perfectly on that dataset)\nparam = {\n 'max_depth': 9,\n 'objective': 'multi:softprob',\n 'num_class': num_classes\n}\n\nbst = xgb.train(param, dtrain, num_boost_round=num_rounds)\nprint(\"Decision trees trained!\")\nprint(\"Mean Error Rate:\", bst.eval(dtrain))\nprint(\"Accuracy:\", (bst.predict(dtrain).argmax(axis=-1) == integer_y).mean()*100, \"%\")\n\n# Plot our complex trees:\nplot_first_trees(bst, output_labels, trees_name=\"complex_tree\")\n\n# And their feature importance:\nprint(\"Now, our feature importance chart considers more features, but it is still not complete.\")\nfig, ax = plt.subplots(figsize=(12, 7)) \nxgb.plot_importance(bst, ax=ax)\nplt.show()", "Note that the above trees can be viewed here online:\nhttps://github.com/Vooban/Decision-Trees-For-Knowledge-Discovery/tree/master/exported_xgboost_trees\nFinding a perfect classifier rather than an easily explainable one\nWe'll now use boosting. The resulting trees can't be explained as easily as the previous ones, since one classifier will now have incrementally many trees for each class to reduce error, each new trees based on the errors of the previous ones. 
And those trees will each be weighted.", "num_rounds = 10 # 10 rounds of boosting, thus 10 trees per class. \nnum_classes = len(output_labels)\nnum_trees = num_rounds * num_classes\n\nparam = {\n 'max_depth': 20,\n 'eta': 1.43,\n 'objective': 'multi:softprob',\n 'num_class': num_classes,\n}\n\nbst = xgb.train(param, dtrain, early_stopping_rounds=1, num_boost_round=num_rounds, evals=[(dtrain, \"dtrain\")])\nprint(\"Boosted decision trees trained!\")\nprint(\"Mean Error Rate:\", bst.eval(dtrain))\nprint(\"Accuracy:\", (bst.predict(dtrain).argmax(axis=-1) == integer_y).mean()*100, \"%\")", "In our case, note that it is possible to have an error of 0 (thus an accuracy of 100%) since we have a dataset that represents a function, which is mathematically deterministic and could be interpreted as programmatically pure in the case it would be implemented. But wait... we just implemented and recreated the function that was used to model the dataset with our trees! We don't need cross validation nor a test set, because our training data already covers the full feature space (attribute space). \nFinally, the full attributes/features importance:", "# Some plot options from the doc:\n# importance_type : str, default \"weight\"\n# How the importance is calculated: either \"weight\", \"gain\", or \"cover\"\n# \"weight\" is the number of times a feature appears in a tree\n# \"gain\" is the average gain of splits which use the feature\n# \"cover\" is the average coverage of splits which use the feature\n# where coverage is defined as the number of samples affected by the split\n\nimportance_types = [\"weight\", \"gain\", \"cover\"]\nfor i in importance_types:\n print(\"Importance type:\", i)\n fig, ax = plt.subplots(figsize=(12, 7))\n xgb.plot_importance(bst, importance_type=i, ax=ax)\n plt.show()", "Conclusion\nTo sum up, we managed to get a good classification result and to be able to explain those results visually and automatically, but not with full depth here due to XGBoost not being built especially for having complete trees. Also note that it would have been possible to solve a regression problem with the same algorithm, such as predicting a price rather than a category, using a single tree. Note that if you want to have full depth with your trees, we also explored the decision trees as implemented in scikit-learn. \nSuch techniques using decision trees can be useful in reverse engineering an existing system, such as an old one that has been coded in a peculiar programming language and for which the employees who coded it have left. This technique can also be used for data mining, gaining business intelligence, and insights from data. \nDecision trees are good to model data sets and XGBoost has revealed to be a quite good algorithm for winning Kaggle competitions. Using XGBoost can lead to great results in plus of being interesting for roughly explaining how classifications are made on data. However, XGBoost is normally used for boosting trees (also called gradient boosting) and the resulting forest is hard to interpret. Each tree is trained on the errors of the previous ones until the error gets at its lowest." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
krismolendyke/den
notebooks/Authorization.ipynb
mit
[ "Authorization\nFollowing the Nest authorization documentation.\nSetup\nGet the values of Client ID and Client secret from the clients page and set them in the environment before running this IPython Notebook. The environment variable names should be DEN_CLIENT_ID and DEN_CLIENT_SECRET, respectively.", "import os\n\nDEN_CLIENT_ID = os.environ[\"DEN_CLIENT_ID\"]\nDEN_CLIENT_SECRET = os.environ[\"DEN_CLIENT_SECRET\"]", "Get Authorization URL\nAvailable per client. For Den it is:\n\nhttps://home.nest.com/login/oauth2?client_id=54033edb-04e0-4fc7-8306-5ed6cb7d7b1d&state=STATE\n\nWhere STATE should be a value that is:\n\nUsed to protect against cross-site request forgery attacks\nFormat: any unguessable string\nWe strongly recommend that you use a new, unique value for each call\n\nCreate STATE helper", "import uuid\n\ndef _get_state():\n \"\"\"Get a unique id string.\"\"\"\n return str(uuid.uuid1())\n\n_get_state()", "Create Authorization URL Helper", "API_PROTOCOL = \"https\"\nAPI_LOCATION = \"home.nest.com\"\n\nfrom urlparse import SplitResult, urlunsplit\nfrom urllib import urlencode\n\ndef _get_url(path, query, netloc=API_LOCATION):\n \"\"\"Get a URL for the given path and query.\"\"\"\n split = SplitResult(scheme=API_PROTOCOL, netloc=netloc, path=path, query=query, fragment=\"\")\n return urlunsplit(split)\n\ndef get_auth_url(client_id=DEN_CLIENT_ID):\n \"\"\"Get an authorization URL for the given client id.\"\"\"\n path = \"login/oauth2\"\n query = urlencode({\"client_id\": client_id, \"state\": _get_state()})\n return _get_url(path, query)\n\nget_auth_url()", "Get Authorization Code\nget_auth_url() returns a URL that should be visited in the browser to get an authorization code.\nFor Den, this authorization code will be a PIN.", "!open \"{get_auth_url()}\"", "Cut and paste that PIN here:", "pin = \"\"", "Get Access Token\nUse the pin code to request an access token. https://developer.nest.com/documentation/cloud/authorization-reference/", "def get_access_token_url(client_id=DEN_CLIENT_ID, client_secret=DEN_CLIENT_SECRET, code=pin):\n \"\"\"Get an access token URL for the given client id.\"\"\"\n path = \"oauth2/access_token\"\n query = urlencode({\"client_id\": client_id, \n \"client_secret\": client_secret, \n \"code\": code,\n \"grant_type\": \"authorization_code\"})\n return _get_url(path, query, \"api.\" + API_LOCATION)\n\nget_access_token_url()", "POST to that URL to get a response containing an access token:", "import requests\n\nr = requests.post(get_access_token_url())\nprint r.status_code\nassert r.status_code == requests.codes.OK\n\nr.json()", "It seems like the access token can only be created once and has a 10 year expiration time.", "access_token = r.json()[\"access_token\"]\naccess_token", "Use the API\nThe access_token will be used when making API calls." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
samstav/scipy_2015_sklearn_tutorial
notebooks/03.3 Case Study - Face Recognition with Eigenfaces.ipynb
cc0-1.0
[ "Example from Image Processing", "%matplotlib inline\nimport matplotlib.pyplot as plt", "Here we'll take a look at a simple facial recognition example.\nThis uses a dataset available within scikit-learn consisting of a\nsubset of the Labeled Faces in the Wild\ndata. Note that this is a relatively large download (~200MB) so it may\ntake a while to execute.", "from sklearn import datasets\nlfw_people = datasets.fetch_lfw_people(min_faces_per_person=70, resize=0.4,\n data_home='datasets')\nlfw_people.data.shape", "If you're on a unix-based system such as linux or Mac OSX, these shell commands\ncan be used to see the downloaded dataset:", "!ls datasets\n\n!du -sh datasets/lfw_home", "Once again, let's visualize these faces to see what we're working with:", "fig = plt.figure(figsize=(8, 6))\n# plot several images\nfor i in range(15):\n ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[])\n ax.imshow(lfw_people.images[i], cmap=plt.cm.bone)\n\nimport numpy as np\nplt.figure(figsize=(10, 2))\n\nunique_targets = np.unique(lfw_people.target)\ncounts = [(lfw_people.target == i).sum() for i in unique_targets]\n\nplt.xticks(unique_targets, lfw_people.target_names[unique_targets])\nlocs, labels = plt.xticks()\nplt.setp(labels, rotation=45, size=14)\n_ = plt.bar(unique_targets, counts)", "One thing to note is that these faces have already been localized and scaled\nto a common size. This is an important preprocessing piece for facial\nrecognition, and is a process that can require a large collection of training\ndata. This can be done in scikit-learn, but the challenge is gathering a\nsufficient amount of training data for the algorithm to work\nFortunately, this piece is common enough that it has been done. One good\nresource is OpenCV, the\nOpen Computer Vision Library.\nWe'll perform a Support Vector classification of the images. We'll\ndo a typical train-test split on the images to make this happen:", "from sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(lfw_people.data, lfw_people.target, random_state=0)\n\nprint(X_train.shape, X_test.shape)", "Preprocessing: Principal Component Analysis\n1850 dimensions is a lot for SVM. We can use PCA to reduce these 1850 features to a manageable\nsize, while maintaining most of the information in the dataset. Here it is useful to use a variant\nof PCA called RandomizedPCA, which is an approximation of PCA that can be much faster for large\ndatasets. We saw this method in the previous notebook, and will use it again here:", "from sklearn import decomposition\npca = decomposition.RandomizedPCA(n_components=150, whiten=True)\npca.fit(X_train)\nX_train_pca = pca.transform(X_train)\nX_test_pca = pca.transform(X_test)\nprint(X_train_pca.shape)\nprint(X_test_pca.shape)", "These projected components correspond to factors in a linear combination of\ncomponent images such that the combination approaches the original face. In general, PCA can be a powerful technique for preprocessing that can greatly improve classification performance.\nDoing the Learning: Support Vector Machines\nNow we'll perform support-vector-machine classification on this reduced dataset:", "from sklearn import svm\nclf = svm.SVC(C=5., gamma=0.001)\nclf.fit(X_train_pca, y_train)", "Finally, we can evaluate how well this classification did. 
First, we might plot a\nfew of the test-cases with the labels learned from the training set:", "fig = plt.figure(figsize=(8, 6))\nfor i in range(15):\n ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[])\n ax.imshow(X_test[i].reshape((50, 37)), cmap=plt.cm.bone)\n y_pred = clf.predict(X_test_pca[i])[0]\n color = 'black' if y_pred == y_test[i] else 'red'\n ax.set_title(lfw_people.target_names[y_pred], fontsize='small', color=color)", "The classifier is correct on an impressive number of images given the simplicity\nof its learning model! Using a linear classifier on 150 features derived from\nthe pixel-level data, the algorithm correctly identifies a large number of the\npeople in the images.\nAgain, we can\nquantify this effectiveness using one of several measures from the sklearn.metrics\nmodule. First we can do the classification report, which shows the precision,\nrecall and other measures of the \"goodness\" of the classification:", "from sklearn import metrics\ny_pred = clf.predict(X_test_pca)\nprint(metrics.classification_report(y_test, y_pred, target_names=lfw_people.target_names))", "Another interesting metric is the confusion matrix, which indicates how often\nany two items are mixed-up. The confusion matrix of a perfect classifier\nwould only have nonzero entries on the diagonal, with zeros on the off-diagonal.", "print(metrics.confusion_matrix(y_test, y_pred))\n\nprint(metrics.f1_score(y_test, y_pred))", "Pipelining\nAbove we used PCA as a pre-processing step before applying our support vector machine classifier.\nPlugging the output of one estimator directly into the input of a second estimator is a commonly\nused pattern; for this reason scikit-learn provides a Pipeline object which automates this\nprocess. The above problem can be re-expressed as a pipeline as follows:", "from sklearn.pipeline import Pipeline\n\nclf = Pipeline([('pca', decomposition.RandomizedPCA(n_components=150, whiten=True)),\n ('svm', svm.LinearSVC(C=1.0))])\n\nclf.fit(X_train, y_train)\n\ny_pred = clf.predict(X_test)\n\nprint(metrics.confusion_matrix(y_pred, y_test))", "The results are not identical because we used the randomized version of the PCA -- because the\nprojection varies slightly each time, the results vary slightly as well.\nFinal Note\nHere we have used PCA \"eigenfaces\" as a pre-processing step for facial recognition.\nThe reason we chose this is because PCA is a broadly-applicable technique, which can\nbe useful for a wide array of data types. For more details on the eigenfaces approach, see the original paper by Turk and Penland, Eigenfaces for Recognition. Research in the field of facial recognition has moved much farther beyond this paper, and has shown specific feature extraction methods can be more effective. However, eigenfaces is a canonical example of machine learning \"in the wild\", and is a simple method with good results." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
me-surrey/dl-gym
10_introduction_to_artificial_neural_networks.ipynb
apache-2.0
[ "Chapter 10 – Introduction to Artificial Neural Networks\nThis notebook contains all the sample code and solutions to the exercises in chapter 10.\nSetup\nFirst, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:", "# To support both python 2 and python 3\nfrom __future__ import division, print_function, unicode_literals\n\n# Common imports\nimport numpy as np\nimport os\n\n# to make this notebook's output stable across runs\ndef reset_graph(seed=42):\n tf.reset_default_graph()\n tf.set_random_seed(seed)\n np.random.seed(seed)\n\n# To plot pretty figures\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nplt.rcParams['axes.labelsize'] = 14\nplt.rcParams['xtick.labelsize'] = 12\nplt.rcParams['ytick.labelsize'] = 12\n\n# Where to save the figures\nPROJECT_ROOT_DIR = \".\"\nCHAPTER_ID = \"ann\"\n\ndef save_fig(fig_id, tight_layout=True):\n path = os.path.join(PROJECT_ROOT_DIR, \"images\", CHAPTER_ID, fig_id + \".png\")\n print(\"Saving figure\", fig_id)\n if tight_layout:\n plt.tight_layout()\n plt.savefig(path, format='png', dpi=300)", "Perceptrons", "import numpy as np\nfrom sklearn.datasets import load_iris\nfrom sklearn.linear_model import Perceptron\n\niris = load_iris()\nX = iris.data[:, (2, 3)] # petal length, petal width\ny = (iris.target == 0).astype(np.int)\n\nper_clf = Perceptron(random_state=42)\nper_clf.fit(X, y)\n\ny_pred = per_clf.predict([[2, 0.5]])\n\ny_pred\n\na = -per_clf.coef_[0][0] / per_clf.coef_[0][1]\nb = -per_clf.intercept_ / per_clf.coef_[0][1]\n\naxes = [0, 5, 0, 2]\n\nx0, x1 = np.meshgrid(\n np.linspace(axes[0], axes[1], 500).reshape(-1, 1),\n np.linspace(axes[2], axes[3], 200).reshape(-1, 1),\n )\nX_new = np.c_[x0.ravel(), x1.ravel()]\ny_predict = per_clf.predict(X_new)\nzz = y_predict.reshape(x0.shape)\n\nplt.figure(figsize=(10, 4))\nplt.plot(X[y==0, 0], X[y==0, 1], \"bs\", label=\"Not Iris-Setosa\")\nplt.plot(X[y==1, 0], X[y==1, 1], \"yo\", label=\"Iris-Setosa\")\n\nplt.plot([axes[0], axes[1]], [a * axes[0] + b, a * axes[1] + b], \"k-\", linewidth=3)\nfrom matplotlib.colors import ListedColormap\ncustom_cmap = ListedColormap(['#9898ff', '#fafab0'])\n\nplt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5)\nplt.xlabel(\"Petal length\", fontsize=14)\nplt.ylabel(\"Petal width\", fontsize=14)\nplt.legend(loc=\"lower right\", fontsize=14)\nplt.axis(axes)\n\nsave_fig(\"perceptron_iris_plot\")\nplt.show()", "Activation functions", "def logit(z):\n return 1 / (1 + np.exp(-z))\n\ndef relu(z):\n return np.maximum(0, z)\n\ndef derivative(f, z, eps=0.000001):\n return (f(z + eps) - f(z - eps))/(2 * eps)\n\nz = np.linspace(-5, 5, 200)\n\nplt.figure(figsize=(11,4))\n\nplt.subplot(121)\nplt.plot(z, np.sign(z), \"r-\", linewidth=2, label=\"Step\")\nplt.plot(z, logit(z), \"g--\", linewidth=2, label=\"Logit\")\nplt.plot(z, np.tanh(z), \"b-\", linewidth=2, label=\"Tanh\")\nplt.plot(z, relu(z), \"m-.\", linewidth=2, label=\"ReLU\")\nplt.grid(True)\nplt.legend(loc=\"center right\", fontsize=14)\nplt.title(\"Activation functions\", fontsize=14)\nplt.axis([-5, 5, -1.2, 1.2])\n\nplt.subplot(122)\nplt.plot(z, derivative(np.sign, z), \"r-\", linewidth=2, label=\"Step\")\nplt.plot(0, 0, \"ro\", markersize=5)\nplt.plot(0, 0, \"rx\", markersize=10)\nplt.plot(z, derivative(logit, z), \"g--\", linewidth=2, label=\"Logit\")\nplt.plot(z, derivative(np.tanh, z), \"b-\", linewidth=2, label=\"Tanh\")\nplt.plot(z, derivative(relu, z), \"m-.\", 
linewidth=2, label=\"ReLU\")\nplt.grid(True)\n#plt.legend(loc=\"center right\", fontsize=14)\nplt.title(\"Derivatives\", fontsize=14)\nplt.axis([-5, 5, -0.2, 1.2])\n\nsave_fig(\"activation_functions_plot\")\nplt.show()\n\ndef heaviside(z):\n return (z >= 0).astype(z.dtype)\n\ndef sigmoid(z):\n return 1/(1+np.exp(-z))\n\ndef mlp_xor(x1, x2, activation=heaviside):\n return activation(-activation(x1 + x2 - 1.5) + activation(x1 + x2 - 0.5) - 0.5)\n\nx1s = np.linspace(-0.2, 1.2, 100)\nx2s = np.linspace(-0.2, 1.2, 100)\nx1, x2 = np.meshgrid(x1s, x2s)\n\nz1 = mlp_xor(x1, x2, activation=heaviside)\nz2 = mlp_xor(x1, x2, activation=sigmoid)\n\nplt.figure(figsize=(10,4))\n\nplt.subplot(121)\nplt.contourf(x1, x2, z1)\nplt.plot([0, 1], [0, 1], \"gs\", markersize=20)\nplt.plot([0, 1], [1, 0], \"y^\", markersize=20)\nplt.title(\"Activation function: heaviside\", fontsize=14)\nplt.grid(True)\n\nplt.subplot(122)\nplt.contourf(x1, x2, z2)\nplt.plot([0, 1], [0, 1], \"gs\", markersize=20)\nplt.plot([0, 1], [1, 0], \"y^\", markersize=20)\nplt.title(\"Activation function: sigmoid\", fontsize=14)\nplt.grid(True)", "FNN for MNIST\nusing tf.learn", "from tensorflow.examples.tutorials.mnist import input_data\n\nmnist = input_data.read_data_sets(\"/tmp/data/\")\n\nX_train = mnist.train.images\nX_test = mnist.test.images\ny_train = mnist.train.labels.astype(\"int\")\ny_test = mnist.test.labels.astype(\"int\")\n\nimport tensorflow as tf\n\nconfig = tf.contrib.learn.RunConfig(tf_random_seed=42) # not shown in the config\n\nfeature_cols = tf.contrib.learn.infer_real_valued_columns_from_input(X_train)\ndnn_clf = tf.contrib.learn.DNNClassifier(hidden_units=[300,100], n_classes=10,\n feature_columns=feature_cols, config=config)\ndnn_clf = tf.contrib.learn.SKCompat(dnn_clf) # if TensorFlow >= 1.1\ndnn_clf.fit(X_train, y_train, batch_size=50, steps=40000)\n\nfrom sklearn.metrics import accuracy_score\n\ny_pred = dnn_clf.predict(X_test)\naccuracy_score(y_test, y_pred['classes'])\n\nfrom sklearn.metrics import log_loss\n\ny_pred_proba = y_pred['probabilities']\nlog_loss(y_test, y_pred_proba)", "Using plain TensorFlow", "import tensorflow as tf\n\nn_inputs = 28*28 # MNIST\nn_hidden1 = 300\nn_hidden2 = 100\nn_outputs = 10\n\nreset_graph()\n\nX = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\ny = tf.placeholder(tf.int64, shape=(None), name=\"y\")\n\ndef neuron_layer(X, n_neurons, name, activation=None):\n with tf.name_scope(name):\n n_inputs = int(X.get_shape()[1])\n stddev = 2 / np.sqrt(n_inputs)\n init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)\n W = tf.Variable(init, name=\"kernel\")\n b = tf.Variable(tf.zeros([n_neurons]), name=\"bias\")\n Z = tf.matmul(X, W) + b\n if activation is not None:\n return activation(Z)\n else:\n return Z\n\nwith tf.name_scope(\"dnn\"):\n hidden1 = neuron_layer(X, n_hidden1, name=\"hidden1\",\n activation=tf.nn.relu)\n hidden2 = neuron_layer(hidden1, n_hidden2, name=\"hidden2\",\n activation=tf.nn.relu)\n logits = neuron_layer(hidden2, n_outputs, name=\"outputs\")\n\nwith tf.name_scope(\"loss\"):\n xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,\n logits=logits)\n loss = tf.reduce_mean(xentropy, name=\"loss\")\n\nlearning_rate = 0.01\n\nwith tf.name_scope(\"train\"):\n optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n training_op = optimizer.minimize(loss)\n\nwith tf.name_scope(\"eval\"):\n correct = tf.nn.in_top_k(logits, y, 1)\n accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n\ninit = 
tf.global_variables_initializer()\nsaver = tf.train.Saver()\n\nn_epochs = 40\nbatch_size = 50\n\nwith tf.Session() as sess:\n init.run()\n for epoch in range(n_epochs):\n for iteration in range(mnist.train.num_examples // batch_size):\n X_batch, y_batch = mnist.train.next_batch(batch_size)\n sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})\n acc_test = accuracy.eval(feed_dict={X: mnist.test.images,\n y: mnist.test.labels})\n print(epoch, \"Train accuracy:\", acc_train, \"Test accuracy:\", acc_test)\n\n save_path = saver.save(sess, \"./my_model_final.ckpt\")\n\nwith tf.Session() as sess:\n saver.restore(sess, \"./my_model_final.ckpt\") # or better, use save_path\n X_new_scaled = mnist.test.images[:20]\n Z = logits.eval(feed_dict={X: X_new_scaled})\n y_pred = np.argmax(Z, axis=1)\n\nprint(\"Predicted classes:\", y_pred)\nprint(\"Actual classes: \", mnist.test.labels[:20])\n\nfrom IPython.display import clear_output, Image, display, HTML\n\ndef strip_consts(graph_def, max_const_size=32):\n \"\"\"Strip large constant values from graph_def.\"\"\"\n strip_def = tf.GraphDef()\n for n0 in graph_def.node:\n n = strip_def.node.add() \n n.MergeFrom(n0)\n if n.op == 'Const':\n tensor = n.attr['value'].tensor\n size = len(tensor.tensor_content)\n if size > max_const_size:\n tensor.tensor_content = b\"<stripped %d bytes>\"%size\n return strip_def\n\ndef show_graph(graph_def, max_const_size=32):\n \"\"\"Visualize TensorFlow graph.\"\"\"\n if hasattr(graph_def, 'as_graph_def'):\n graph_def = graph_def.as_graph_def()\n strip_def = strip_consts(graph_def, max_const_size=max_const_size)\n code = \"\"\"\n <script>\n function load() {{\n document.getElementById(\"{id}\").pbtxt = {data};\n }}\n </script>\n <link rel=\"import\" href=\"https://tensorboard.appspot.com/tf-graph-basic.build.html\" onload=load()>\n <div style=\"height:600px\">\n <tf-graph-basic id=\"{id}\"></tf-graph-basic>\n </div>\n \"\"\".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))\n\n iframe = \"\"\"\n <iframe seamless style=\"width:1200px;height:620px;border:0\" srcdoc=\"{}\"></iframe>\n \"\"\".format(code.replace('\"', '&quot;'))\n display(HTML(iframe))\n\nshow_graph(tf.get_default_graph())", "Using dense() instead of neuron_layer()\nNote: the book uses tensorflow.contrib.layers.fully_connected() rather than tf.layers.dense() (which did not exist when this chapter was written). It is now preferable to use tf.layers.dense(), because anything in the contrib module may change or be deleted without notice. 
The dense() function is almost identical to the fully_connected() function, except for a few minor differences:\n* several parameters are renamed: scope becomes name, activation_fn becomes activation (and similarly the _fn suffix is removed from other parameters such as normalizer_fn), weights_initializer becomes kernel_initializer, etc.\n* the default activation is now None rather than tf.nn.relu.\n* a few more differences are presented in chapter 11.", "n_inputs = 28*28 # MNIST\nn_hidden1 = 300\nn_hidden2 = 100\nn_outputs = 10\n\nreset_graph()\n\nX = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\ny = tf.placeholder(tf.int64, shape=(None), name=\"y\") \n\nwith tf.name_scope(\"dnn\"):\n hidden1 = tf.layers.dense(X, n_hidden1, name=\"hidden1\",\n activation=tf.nn.relu)\n hidden2 = tf.layers.dense(hidden1, n_hidden2, name=\"hidden2\",\n activation=tf.nn.relu)\n logits = tf.layers.dense(hidden2, n_outputs, name=\"outputs\")\n\nwith tf.name_scope(\"loss\"):\n xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n loss = tf.reduce_mean(xentropy, name=\"loss\")\n\nlearning_rate = 0.01\n\nwith tf.name_scope(\"train\"):\n optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n training_op = optimizer.minimize(loss)\n\nwith tf.name_scope(\"eval\"):\n correct = tf.nn.in_top_k(logits, y, 1)\n accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n\ninit = tf.global_variables_initializer()\nsaver = tf.train.Saver()\n\nn_epochs = 20\nn_batches = 50\n\nwith tf.Session() as sess:\n init.run()\n for epoch in range(n_epochs):\n for iteration in range(mnist.train.num_examples // batch_size):\n X_batch, y_batch = mnist.train.next_batch(batch_size)\n sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})\n acc_test = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels})\n print(epoch, \"Train accuracy:\", acc_train, \"Test accuracy:\", acc_test)\n\n save_path = saver.save(sess, \"./my_model_final.ckpt\")\n\nshow_graph(tf.get_default_graph())", "Exercise solutions\n1. to 8.\nSee appendix A.\n9.\nTrain a deep MLP on the MNIST dataset and see if you can get over 98% precision. Just like in the last exercise of chapter 9, try adding all the bells and whistles (i.e., save checkpoints, restore the last checkpoint in case of an interruption, add summaries, plot learning curves using TensorBoard, and so on).\nFirst let's create the deep net. 
It's exactly the same as earlier, with just one addition: we add a tf.summary.scalar() to track the loss and the accuracy during training, so we can view nice learning curves using TensorBoard.", "n_inputs = 28*28 # MNIST\nn_hidden1 = 300\nn_hidden2 = 100\nn_outputs = 10\n\nreset_graph()\n\nX = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\ny = tf.placeholder(tf.int64, shape=(None), name=\"y\") \n\nwith tf.name_scope(\"dnn\"):\n hidden1 = tf.layers.dense(X, n_hidden1, name=\"hidden1\",\n activation=tf.nn.relu)\n hidden2 = tf.layers.dense(hidden1, n_hidden2, name=\"hidden2\",\n activation=tf.nn.relu)\n logits = tf.layers.dense(hidden2, n_outputs, name=\"outputs\")\n\nwith tf.name_scope(\"loss\"):\n xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n loss = tf.reduce_mean(xentropy, name=\"loss\")\n loss_summary = tf.summary.scalar('log_loss', loss)\n\nlearning_rate = 0.01\n\nwith tf.name_scope(\"train\"):\n optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n training_op = optimizer.minimize(loss)\n\nwith tf.name_scope(\"eval\"):\n correct = tf.nn.in_top_k(logits, y, 1)\n accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n accuracy_summary = tf.summary.scalar('accuracy', accuracy)\n\ninit = tf.global_variables_initializer()\nsaver = tf.train.Saver()", "Now we need to define the directory to write the TensorBoard logs to:", "from datetime import datetime\n\ndef log_dir(prefix=\"\"):\n now = datetime.utcnow().strftime(\"%Y%m%d%H%M%S\")\n root_logdir = \"tf_logs\"\n if prefix:\n prefix += \"-\"\n name = prefix + \"run-\" + now\n return \"{}/{}/\".format(root_logdir, name)\n\nlogdir = log_dir(\"mnist_dnn\")", "Now we can create the FileWriter that we will use to write the TensorBoard logs:", "file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())", "Hey! Why don't we implement early stopping? For this, we are going to need a validation set. Luckily, the dataset returned by TensorFlow's input_data() function (see above) is already split into a training set (60,000 instances, already shuffled for us), a validation set (5,000 instances) and a test set (5,000 instances). So we can easily define X_valid and y_valid:", "X_valid = mnist.validation.images\ny_valid = mnist.validation.labels\n\nm, n = X_train.shape\n\nn_epochs = 10001\nbatch_size = 50\nn_batches = int(np.ceil(m / batch_size))\n\ncheckpoint_path = \"/tmp/my_deep_mnist_model.ckpt\"\ncheckpoint_epoch_path = checkpoint_path + \".epoch\"\nfinal_model_path = \"./my_deep_mnist_model\"\n\nbest_loss = np.infty\nepochs_without_progress = 0\nmax_epochs_without_progress = 50\n\nwith tf.Session() as sess:\n if os.path.isfile(checkpoint_epoch_path):\n # if the checkpoint file exists, restore the model and load the epoch number\n with open(checkpoint_epoch_path, \"rb\") as f:\n start_epoch = int(f.read())\n print(\"Training was interrupted. 
Continuing at epoch\", start_epoch)\n saver.restore(sess, checkpoint_path)\n else:\n start_epoch = 0\n sess.run(init)\n\n for epoch in range(start_epoch, n_epochs):\n for iteration in range(mnist.train.num_examples // batch_size):\n X_batch, y_batch = mnist.train.next_batch(batch_size)\n sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n accuracy_val, loss_val, accuracy_summary_str, loss_summary_str = sess.run([accuracy, loss, accuracy_summary, loss_summary], feed_dict={X: X_valid, y: y_valid})\n file_writer.add_summary(accuracy_summary_str, epoch)\n file_writer.add_summary(loss_summary_str, epoch)\n if epoch % 5 == 0:\n print(\"Epoch:\", epoch,\n \"\\tValidation accuracy: {:.3f}%\".format(accuracy_val * 100),\n \"\\tLoss: {:.5f}\".format(loss_val))\n saver.save(sess, checkpoint_path)\n with open(checkpoint_epoch_path, \"wb\") as f:\n f.write(b\"%d\" % (epoch + 1))\n if loss_val < best_loss:\n saver.save(sess, final_model_path)\n best_loss = loss_val\n else:\n epochs_without_progress += 5\n if epochs_without_progress > max_epochs_without_progress:\n print(\"Early stopping\")\n break\n\nos.remove(checkpoint_epoch_path)\n\nwith tf.Session() as sess:\n saver.restore(sess, final_model_path)\n accuracy_val = accuracy.eval(feed_dict={X: X_test, y: y_test})\n\naccuracy_val" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
myuuuuun/RepeatedMatrixGame
PrisonersDilemma/.ipynb_checkpoints/ts_length-checkpoint.ipynb
mit
[ "ゲームの期数に関して\n実験本体の詳細: README", "#-*- encoding: utf-8 -*-\n%matplotlib inline\n# 日本語対応\nimport matplotlib\nmatplotlib.rcParams['font.family'] = 'Osaka'\nimport math\nimport numpy as np\nimport pandas as pd\nimport functools\nimport matplotlib.pyplot as plt\nimport scipy.stats as stats\nnp.set_printoptions(precision=3)\nnp.set_printoptions(linewidth=400)\nnp.set_printoptions(threshold=np.nan)\npd.set_option('display.max_columns', 30)\npd.set_option('display.width', 400)\npd.set_option('display.precision', 3)", "各セッションの期数の決め方は、以下の通り。\n引用: README#共通ルール\n<blockquote>\n<p>ゲームを何期続けるかは、以下のルールによって決めます。\n\n<ol>\n<li>第1期は確率1で到来する\n<li>以降は毎期末に抽選を行い、**97%の確率でゲームを継続する**(3%の確率で終了する) \n(これは、無限回繰り返しゲームにおいて現在割引価値を0.97と取ることを意味します)<br>\n</ol>\n\n<p>平均は33.33期になります。\n</blockquote>\n\nしたがって、各セッションの期数を確率変数 $X_1, X_2, \\ldots, X_{1000}$ とすると、\n$$\nX_1, X_2, \\ldots, X_{1000} \\overset{\\text{i.i.d.}}{\\sim} Geo(0.03)\n$$\nとなる。ただしGeo(p)は「成功確率がpの時、初めて成功するまでの総試行回数」を表す幾何分布で、(下側)累積分布関数は\n$$\nP(X \\leq x) = F(x) = \\sum_{k=1}^{x} p(1-p)^{k-1} \\hspace{20pt} x=1, 2, \\ldots\n$$\nとなる。\n平均は\n\\begin{eqnarray}\nE[X] &=& \\sum_{x=1}^{\\infty} x pq^{x-1} \\hspace{20pt} q = 1-p\\\n&=& p\\left(\\frac{\\partial}{\\partial q} \\sum_{x=1}^{\\infty} q^x \\right) \\\n&=& p \\left(\\frac{\\partial}{\\partial q} \\frac{q}{1-q} \\right) \\\n&=& p \\frac{1-q+q}{(1-q)^2} \\\n&=& \\frac{1}{p}\n\\end{eqnarray}\n分散は\n\\begin{eqnarray}\nE[X(X+1)] &=& \\sum_{x=1}^{\\infty} x(x+1) pq^{x-1}\\\n&=& p\\left(\\frac{\\partial^2}{\\partial q^2} \\sum_{x=1}^{\\infty} q^{x+1} \\right) \\\n&=& p \\left(\\frac{\\partial^2}{\\partial q^2} \\frac{q^2}{1-q} \\right) \\\n&=& p \\left(\\frac{\\partial}{\\partial q} \\frac{2q(1-q)+q^2}{(1-q)^2} \\right) \\\n&=& p \\left(\\frac{\\partial}{\\partial q} \\frac{2q-q^2}{(1-q)^2} \\right) \\\n&=& p \\left(\\frac{(2-2q)(1-q)^2 + 2(2q-q^2)(1-q)}{(1-q)^4} \\right) \\\n&=& p \\left(\\frac{2(1-q)^2 + 4q-2q^2)}{(1-q)^3} \\right) \\\n&=& \\frac{2}{p^2}\n\\end{eqnarray}\n\\begin{eqnarray}\nVar(X) &=& E[X^2] - (E[X])^2 \\\n&=& E[X(X+1)] - E[X] - (E[X])^2 \\\n&=& \\frac{2}{p^2} - \\frac{1}{p} - \\frac{1}{p^2} \\\n&=& \\frac{1-p}{p^2}\n\\end{eqnarray}\nとなる。\n今回はp=0.03なので、$E[X] = 33.33, Var(X) = 1077.78$ となる。\n確率関数と累積分布関数\n確率(質量)関数と下側累積分布関数をplot(理論値)。線にしてあるが、実際は離散値(x=1, 2, 3, ...)。", "# settings\nseed = 282\nrs = np.random.RandomState(seed)\nparameter = 0.03\n\n# plot\nfig = plt.figure(figsize=(20, 12))\nmu = 33.333\nsigma = 1077.778\n\n# theoretical pmf\nax = plt.subplot(2, 1, 1)\nplt.title(\"Geo(0.03)の確率関数\")\nx = np.arange(1, 301)\nplt.plot(x, stats.geom.pmf(x, parameter), linewidth=1, color='green', label=\"theoretical pmf\")\nplt.xlabel(\"ts_length(X)\")\nplt.ylabel(\"probability(f)\")\nax.text(35, 0.02, r'''$\\mu$={0:.3f}, $\\sigma^2$={1:.3f}'''.format(mu, sigma), ha = 'left', va = 'bottom', size=14)\nax.axvline(x=mu, linewidth=1, color='red', label=\"average\")\nax.set_xlim([1, 300])\nplt.legend()\n\n# theoretical cmf\nax = plt.subplot(2, 1, 2)\nplt.title(\"Geo(0.03)の下側分布関数\")\nx = np.arange(1, 301)\nplt.plot(x, stats.geom.cdf(x, parameter), linewidth=1, color='blue', label=\"theoretical cdf\")\nplt.xlabel(\"ts_length(X)\")\nplt.ylabel(\"cumulative dens(F)\")\nax.text(35, 0.55, r'''$\\mu$={0:.3f}のときF={1:.3f}'''.format(mu, stats.geom.cdf(mu, parameter)), ha = 'left', va = 'bottom', size=14)\nax.axvline(x=mu, linewidth=1, color='red', label=\"average\")\nax.set_xlim([1, 300])\nplt.legend()\n\nplt.show()", "確率関数は単調減少。$P(X \\leq E[X]) = F(33.33) = 0.634$ 
より、全体の63.4%程度が平均よりも小さい値であることが期待される。\nこのことは、幾何分布を適当な指数分布で近似(連続化)できるとすれば、その時間あたりの生起回数はポアソン分布$F(x) = \\sum_{k=0}^x\\frac{\\lambda^k}{k!}e^{- \\lambda}$に従うことからもわかる。今1単位時間を33.33期とすれば、1単位時間に1回以上「ゲームが終了する」事象が起こる確率は、$\\lambda = 1$ のポアソン分布で$x \\geq 1$となる確率なので、\n\\begin{eqnarray}\nP(X\\geq1) = 1 - F(0) = \\frac{1^0}{0!}e^{- 1} = \\frac{1}{e} = 0.6321\n\\end{eqnarray}\nとなって、ほぼ一致する。\n和の分布\n正確な和の分布\n幾何分布に従う確率変数 $X_1, X_2, \\ldots, X_{1000}$ の和を $S(X)$ とすると、\n$$\nS(X) \\ {\\sim} \\ Negbin(n=1000, p=0.03)\n$$\nとなる。ただしNegbin(n, p)は「成功確率がpの時、n回成功するまでの総試行回数」を表す負の二項分布で、累積分布関数は、\n$$\nP(Y \\leq y) = F(y) = \\sum_{k=n}^{y} {}{k-1}C {n-1} p^n (1-p)^{k-n} \\hspace{20pt} y=n, n+1, n+2, \\ldots\n$$\nとなる。平均は $E[Y] = \\frac{n}{p}$、分散は $Var(Y) = \\frac{n(1-p)}{p^2}$ となる。\nいま $p=0.03, n=1000$ なので、$E[S(X)] = 33333.33, Var(S(X)) = 1077777.78$ となる。\n和の近似的な分布\n$Var(X_i) < \\infty$ より、 $X_1, X_2, \\ldots, X_n$ の平均 $Y = E[X]$ の分布は、\n\\begin{eqnarray}\nP(Y\\leq y) = F(y) \\overset{\\text{d}}{\\longrightarrow} \\int_{-\\infty}^{\\infty} \\frac{\\sqrt{n}}{\\sqrt{2 \\pi \\sigma^2}} exp\\left( {-\\cfrac{n(y-\\mu)^2}{2\\sigma^2}} \\right ) \\ dy\n\\end{eqnarray}\nとなって、正規分布 $N(\\mu, \\frac{\\sigma^2}{n})$ に分布収束する。\nしたがって和の近似的な分布は、($Z = nY$ と変数変換すれば良いので)\n$$\nS(X) = n E[X] \\sim N(n\\mu, n\\sigma^2)\n$$\nとなる。\n$X_i$ の母平均, 母分散は $\\mu = 0.03, \\sigma = 1077.77\\ldots$ だったので、$E[S] = 33.33, Var(S) = 1077777.78$ となる。\n和の分布の①正確分布、②近似分布、③シミュレーションを比較", "seed = 282\nnp.random.seed(seed=seed)\nrs = np.random.RandomState(seed)\nparameter = 0.03\nsize = 1000\nsum_max = 100000\nave = 1/parameter\nvar = (1-parameter)/parameter**2\nsum_x = np.arange(100001)\n\n# 「幾何分布から1000個のサンプルを取り出し、和を取る」\n# ことを10万回繰り返し、シミュレーションで分布を求める\nsim_size = 100000\nsums = np.zeros(sim_size, dtype=float)\nfor i in range(sim_size):\n sums[i] = rs.geometric(p=parameter, size=1000).sum()\n\n# 負の二項分布 n=1000, p=0.03\n# scipy.statの負の二項分布は「失敗回数」を数える分布なので、適宜修正\nnbin = stats.nbinom.pmf(sum_x-1000, 1000, parameter)\n\n# 正規分布 mu=3333.33, sigma^2=107777.78\nnormal_ave = size * ave\nnormal_var = size * var\nnormal = stats.norm.pdf(x=sum_x, loc=normal_ave, scale=pow(normal_var, 0.5))\n\n# プロット\nfig, ax = plt.subplots(figsize=(20, 6))\nplt.title(\"Geo(0.03)から抽出した1000個のサンプルの和S(X)の分布\")\nplt.plot(sum_x, nbin, linewidth=2, color='blue', label=\"theoretical pmf(負の二項分布)\")\nplt.plot(sum_x, normal, linewidth=2, color='orange', label=\"approx pdf(正規分布)\")\nplt.hist(sums, bins=100, color='#cccccc', normed=True, label=\"simulation(size=100000)\")\nax.set_xlim([25000, 45000])\nplt.xlabel(\"ts_length(X)\")\nplt.ylabel(\"density(f)\")\nplt.legend()\nplt.show()", "近似分布の方が正確な分布よりもやや右側に寄っているが、ほぼ一致している。細かく統計量を見ると、", "# 統計量\ndata = np.zeros((11, 3), dtype=float)\n\nordered_sum = np.sort(sums)\nsim_lower_percent = lambda p: (ordered_sum[sim_size*p-1]+ordered_sum[sim_size*p])/2\ndata[0, 0] = sums.mean()\ndata[1, 0] = sums.var()\ndata[2, 0] = sim_lower_percent(0.01)\ndata[3, 0] = sim_lower_percent(0.025)\ndata[4, 0] = sim_lower_percent(0.05)\ndata[5, 0] = sim_lower_percent(0.1)\ndata[6, 0] = np.median(sums)\ndata[7, 0] = sim_lower_percent(0.9)\ndata[8, 0] = sim_lower_percent(0.95)\ndata[9, 0] = sim_lower_percent(0.975)\ndata[10, 0] = sim_lower_percent(0.99)\n\ndata[0, 1] = stats.nbinom.mean(1000, parameter) + 1000\ndata[1, 1] = stats.nbinom.var(1000, parameter)\ndata[2, 1] = stats.nbinom.interval(0.98, 1000, parameter)[0]+1000\ndata[3, 1] = stats.nbinom.interval(0.95, 1000, parameter)[0]+1000\ndata[4, 1] = stats.nbinom.interval(0.9, 1000, parameter)[0]+1000\ndata[5, 1] = stats.nbinom.interval(0.8, 1000, 
parameter)[0]+1000\ndata[6, 1] = stats.nbinom.median(1000, parameter) + 1000\ndata[7, 1] = stats.nbinom.interval(0.8, 1000, parameter)[1]+1000\ndata[8, 1] = stats.nbinom.interval(0.9, 1000, parameter)[1]+1000\ndata[9, 1] = stats.nbinom.interval(0.95, 1000, parameter)[1]+1000\ndata[10, 1] = stats.nbinom.interval(0.98, 1000, parameter)[1]+1000\n\ndata[0, 2] = stats.norm.mean(loc=normal_ave, scale=pow(normal_var, 0.5))\ndata[1, 2] = stats.norm.var(loc=normal_ave, scale=pow(normal_var, 0.5))\ndata[2, 2] = stats.norm.interval(0.98, loc=normal_ave, scale=pow(normal_var, 0.5))[0]\ndata[3, 2] = stats.norm.interval(0.95, loc=normal_ave, scale=pow(normal_var, 0.5))[0]\ndata[4, 2] = stats.norm.interval(0.9, loc=normal_ave, scale=pow(normal_var, 0.5))[0]\ndata[5, 2] = stats.norm.interval(0.8, loc=normal_ave, scale=pow(normal_var, 0.5))[0]\ndata[6, 2] = stats.norm.median(loc=normal_ave, scale=pow(normal_var, 0.5))\ndata[7, 2] = stats.norm.interval(0.8, loc=normal_ave, scale=pow(normal_var, 0.5))[1]\ndata[8, 2] = stats.norm.interval(0.9, loc=normal_ave, scale=pow(normal_var, 0.5))[1]\ndata[9, 2] = stats.norm.interval(0.95, loc=normal_ave, scale=pow(normal_var, 0.5))[1]\ndata[10, 2] = stats.norm.interval(0.98, loc=normal_ave, scale=pow(normal_var, 0.5))[1]\n\ndf = pd.DataFrame(data,\n columns=[\"シミュレーション\", \"正確分布\", \"近似分布\"],\n index=[\"平均\", \"分散\", \"下側 1%\", \"下側 2.5%\", \"下側 5%\", \"下側 10%\", \"中央値\", \"上側 10%\", \"上側 5%\", \"上側 2.5%\", \"上側 1%\"])\nprint(df)", "<table>\n <thead>\n <tr>\n <th>\n\n </th>\n <th>\n シミュレーション\n </th>\n <th>\n 正確分布\n </th>\n <th>\n 近似分布\n </th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>\n 平均\n </th>\n <td>\n 33332.48\n </td>\n <td>\n 33333.33\n </td>\n <td>\n 33333.33\n </td>\n </tr>\n <tr>\n <th>\n 分散\n </th>\n <td>\n 1081320.67\n </td>\n <td>\n 1077777.78\n </td>\n <td>\n 1077777.78\n </td>\n </tr>\n <tr>\n <th>\n 下側1%\n </th>\n <td>\n 30956.00\n </td>\n <td>\n 30967.00\n </td>\n <td>\n 30918.21\n </td>\n </tr>\n <tr>\n <th>\n 下側2.5%\n </th>\n <td>\n 31325.00\n </td>\n <td>\n 31330.00\n </td>\n <td>\n 31298.58\n </td>\n </tr>\n <tr>\n <th>\n 下側5%\n </th>\n <td>\n 31641.50\n </td>\n <td>\n 31645.00\n </td>\n <td>\n 31625.71\n </td>\n </tr>\n <tr>\n <th>\n 下側10%\n </th>\n <td>\n 32010.00\n </td>\n <td>\n 32010.00\n </td>\n <td>\n 32002.88\n </td>\n </tr>\n <tr>\n <th>\n 中央値\n </th>\n <td>\n 33324.00\n </td>\n <td>\n 33322.00\n </td>\n <td>\n 33333.33\n </td>\n </tr>\n <tr>\n <th>\n 上側10%\n </th>\n <td>\n 34671.00\n </td>\n <td>\n 34671.00\n </td>\n <td>\n 34663.79\n </td>\n </tr>\n <tr>\n <th>\n 上側5%\n </th>\n <td>\n 35063.00\n </td>\n <td>\n 35059.00\n </td>\n <td>\n 35040.96\n </td>\n </tr>\n <tr>\n <th>\n 上側2.5%\n </th>\n <td>\n 35406.00\n </td>\n <td>\n 35399.00\n </td>\n <td>\n 35368.09\n </td>\n </tr>\n <tr>\n <th>\n 上側1%\n </th>\n <td>\n 35802.50\n </td>\n <td>\n 35797.00\n </td>\n <td>\n 35748.46\n </td>\n </tr>\n </tbody>\n</table>\n\nとなる。正規分布の方が負の二項分布よりもやや裾が厚いが、ほぼ同じ。\n実験で使用されたセッションの平均期数を検定\n尾山ゼミの実験で使用されたゲームの期数の平均は32.856期、神取ゼミの実験では35.51期であった。総期数はそれぞれ32856期, 35510期となる。 \n総期数が負の二項分布に従っているとして、\n ①両側検定(S=33322 vs S $\\neq$ 33322)\n ②片側検定(尾山ゼミ: S=33322 vs S < 33322, 神取ゼミ: S=33322 vs S > 33322)\nの両方の下でのp値を求める(Negbin(1000, 0.03) の下で、観測値よりも極端な値の出る確率を求める)。", "# 総期数の仮説検定\ndef two_sided_p(x):\n parameter = 0.03\n med = stats.nbinom.median(1000, parameter) + 1000\n if x <= med:\n for p in np.arange(0, 1.005, 0.005):\n point = stats.nbinom.interval(p, 1000, parameter)[0]+1000\n if point <= x:\n return 1-p \n else:\n for p in np.arange(0, 1.005, 0.005):\n point = 
stats.nbinom.interval(p, 1000, parameter)[1]+1000\n if x <= point:\n return 1-p\n \n raise ValueError\n\n\ndef one_sided_p(x):\n parameter = 0.03\n med = stats.nbinom.median(1000, parameter)+1000\n if x <= med:\n p = stats.nbinom.cdf(x-1000, 1000, parameter)\n return p\n else:\n p = stats.nbinom.cdf(x-1000, 1000, parameter)\n return 1-p \n\n raise ValueError\n\n\nprint(\"尾山ゼミの総期数 S=32856\")\nprint(\"両側検定でのp値:\", two_sided_p(32856))\nprint(\"片側検定でのp値:\", one_sided_p(32856))\nprint(\"\")\nprint(\"神取ゼミの総期数 S=35510\")\nprint(\"両側検定でのp値:\", two_sided_p(35510))\nprint(\"片側検定でのp値:\", one_sided_p(35510))", "となった。" ]
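As a quick check of the relationship used above, the sum of n i.i.d. Geometric(p) round counts has the (shifted) negative binomial moments n/p and n(1-p)/p^2. A small illustrative sketch (the simulation size and seed are arbitrary choices, not values from the original analysis):

import numpy as np
import scipy.stats as stats

p, n = 0.03, 1000
print(stats.geom.mean(p), 1 / p)            # both 33.33...
print(stats.geom.var(p), (1 - p) / p**2)    # both 1077.78...

rng = np.random.RandomState(0)
sums = rng.geometric(p, size=(5000, n)).sum(axis=1)
print(sums.mean(), n / p)                   # ~33333.3
print(sums.var(), n * (1 - p) / p**2)       # ~1.078e6

# scipy's nbinom counts failures only, hence the "+ n" shift used above
print(stats.nbinom.mean(n, p) + n, stats.nbinom.var(n, p))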
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
anshbansal/anshbansal.github.io
udacity_data_science_notes/Data_Wrangling_with_MongoDB/lesson_05/lesson_05.ipynb
mit
[ "Introduction\nWill explore aggregation framework for some analysis and then explore how we could use it for data cleaning\nExample of Aggregation Framework\nLet's find out who tweeted the most\n- group tweets by user\n- count each user's tweets\n- sort into descending order\n- select user at top", "import pprint\n\ndef get_client():\n from pymongo import MongoClient\n return MongoClient('mongodb://localhost:27017/')\n\ndef get_collection():\n return get_client().examples.twitter\n\ncollection = get_collection()\n\ndef aggregate_and_show(collection, query, limit = True):\n _query = query[:]\n if limit:\n _query.append({\"$limit\": 5})\n result = collection.aggregate(_query)\n pprint.pprint(list(r for r in result))\n\nquery = [\n {\"$group\": {\"_id\": \"$user.screen_name\",\n \"count\": {\"$sum\": 1}}},\n {\"$sort\": {\"count\": -1}}\n]\naggregate_and_show(collection, query)", "Aggregation Operators\n\n$project - shape documents e.g. select\n$match - filtering\n$skip - skip at start\n$limit - limit after some\n$unwind - for every field of the array field on which it is used it will create an instance of document containing the values of the field. This can be used for grouping\n\nMatch operator\nWho has the highest followers to friend ratio?", "query = [\n {\"$match\": {\"user.friends_count\": {\"$gt\": 0},\n \"user.followers_count\": {\"$gt\": 0}}},\n {\"$project\": {\"ratio\": {\"$divide\": [\"$user.followers_count\", \n \"$user.friends_count\"]},\n \"screen_name\": \"$user.screen_name\"}},\n {\"$sort\": {\"ratio\": -1}}\n]\n\naggregate_and_show(collection, query)", "For $match we use the same syntax that we use for read operations\nProject operator\n\ninclude fields from the original document\ninsert computed fields\nrename fields\ncreate fields that hold sub documents\n\nUnwind operator\n\nneed to use array values somehow\n\nLet's try and find who included the most user mentions", "query = [\n {\"$unwind\": \"$entities.user_mentions\"},\n {\"$group\": {\"_id\": \"$user.screen_name\",\n \"count\": {\"$sum\": 1}}},\n {\"$sort\": {\"count\": -1}}\n]\n\naggregate_and_show(collection, query)", "group operators\n\n$sum\n$first\n$last\n$max\n$min\n$avg\n\narray operators\n- $push\n- $addToSet", "#get unique hashtags by user\nquery = [\n {\"$unwind\": \"$entities.hashtags\"},\n {\"$group\": {\"_id\": \"$user.screen_name\",\n \"unique_hashtags\": {\n \"$addToSet\": \"$entities.hashtags.text\"\n }}},\n {\"$sort\": {\"_id\": -1}}\n]\n\naggregate_and_show(collection, query)\n\n# find number of unique user mentions\nquery = [\n {\"$unwind\": \"$entities.user_mentions\"},\n {\"$group\": {\n \"_id\": \"$user.screen_name\",\n \"mset\": {\n \"$addToSet\": \"$entities.user_mentions.screen_name\"\n }\n }},\n {\"$unwind\": \"$mset\"},\n {\"$group\": {\"_id\": \"$_id\", \"count\": {\"$sum\": 1}}},\n {\"$sort\": {\"count\": -1}}\n]\n\naggregate_and_show(collection, query)", "Indexes\nSequence of index is important\n\nWe can create indexes using \ndb.collections.ensureIndex({\"tg\" : 1})\nGeospatial Indexes\n\nallows to query a location near a location\n\n\n\nThe value needs to be stored as an array [X, Y]. \nthen we need to create an index\nthen we use $near for using this" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
majkelx/astwro
examples/gapick_analyse.ipynb
mit
[ "%pylab inline\nstyle.use('ggplot')", "Analysis of gapick tool genetic algorithm's (GA) run\nRead more about PSF stars finding tool gapick in astwro documentation.\nResults directory\ngapick wrotes several results into directory specified by --out_dir parameter. Set path to results dir below:", "resultpath = '~/tmp/gapick_fine'", "Checkout the content of this directory", "import os\nos.chdir(os.path.expanduser(resultpath))\n\n!ls ", "For each generation of GA there are three files:\n* genXXX.gen - dump of all individuals as boolean matrix\n* genXXX.lst - daophot LST file with PSF stars of best individual\n* genXXX.reg - DS9 region file corresponding to LST file\nAlso, there are three links to most recent versions of that files:", "!ls -l gen_last.*", "Such links are maintained during execution of gapick script which allows partial results analysis while script is running.\n*.gen files structure is shown below:", "!head gen_last.gen", "Each line is one individual, each column is one candidate star. 1 means that candidate is a member of individual.\nMoreover there is about.txt with information about parameters:", "!cat about.txt", "Also several optput files of daophot commands are included, as well as opt files with used configuration.\nEvolution analysis\nlogbook.pkl file contains statistics collected by deap module during evolution.", "import deap, pickle\n\nf = open('logbook.pkl')\nlogbook = pickle.load(f)", "Plot values of chi, and number of selected stars, against generation number", "gen = logbook.select('gen')\nchi_mins = logbook.chapters[\"fitness\"].select(\"min\")\nstars_avgs = logbook.chapters[\"size\"].select(\"avg\")\n\nfig, ax1 = subplots()\nfig.set_size_inches(10,6)\nline1 = ax1.plot(gen, chi_mins, \"b-\", label=\"Minimum chi\")\nax1.set_xlabel(\"Generation\")\nax1.set_ylabel(\"chi\", color=\"b\")\nfor tl in ax1.get_yticklabels():\n tl.set_color(\"b\")\n\nax2 = ax1.twinx()\nline2 = ax2.plot(gen, stars_avgs, \"r-\", label=\"Average stars number\")\nax2.set_ylabel(\"stars\", color=\"r\")\nfor tl in ax2.get_yticklabels():\n tl.set_color(\"r\")\n\nlns = line1 + line2\nlabs = [l.get_label() for l in lns]\nax1.legend(lns, labs, loc=\"center right\")\n\nshow()", "Spośród wszystkich gwiazd, algorytm wybrał 100 (domyślna wartość parametru --stars_to_pick) kandydatów przy pomocy polecenia daophot PICK. Następnie poleceniem PSF obliczył point spread function na postawie tego pełnego zbioru i uzyskał błędy dopasowania profilu dla tych gwiazd (Profile errors zwracane przez daophot PSF). \nZ początkowej listy kandydatów odrzycił gwiazdy których błąd dopasowania profilu przekracza 0.1 (domyślna wartość parametru --max_psf_err). W tym przypadku pozostało 99 gwiazd.\nW dalszej ewolucji jedynie spośród tych 99 gwiazd wybierane były gwiazdy do PSF.\nUżytkownik, zamiast zdawać się na PICK, może również wskazać swoją listę początkowych kandydatów (paramter --lst_file).\nDla początkowych zbirów gwiazd individuals pierwszej generacji, wybierani byli kandydaci z prawdopodobieństwem 0.3 (domyślna wartość parametru --ga_init_prob) co dało średnio 30 gwiazd w zbiorze. Average stars number ma właśnie wartość około 30 dla zerowej generacji na wykresie. Później wartość ta ustabilizowała się w zakresie 45-50 gwiazd.\nWykres pokazuje też minimalizację parametru chi z generacji na generację, przy czym po 40 generacji spadek ten jest już bardzo nieznaczny.\nKolejny wykres pokazuje które gwiazdy były najczęściej wybierane w kolejnych generacjach. 
The colour at the intersection of a generation and a candidate star indicates in how many sets of that generation the star was present.", "spectrum = logbook.select('spectrum')\nx_starno = range(len(spectrum[0]))\n\nfig, ax1 = plt.subplots()\nfig.set_size_inches(30,10)\n#cont = ax1.contourf(x_starno, gen, spectrum)\ncont = ax1.imshow(spectrum, interpolation='nearest', cmap=plt.cm.YlGnBu)\nax1.set_xlabel(\"Star\")\nax1.set_ylabel(\"Generation\")\nplt.colorbar(cont)\n\nplt.show()\n#plt.contourf(range(99), gen, spectrum)", "The plot shows how clear leaders emerge within roughly the first 20 populations. Nevertheless, changes also occur in later generations. Between generations 70 and 80 the algorithm \"swapped\" one of the stars for another one, which had not been preferred in the early generations.\nComparison with other evolutions\nSize of the first-generation sets\nFollowing the suggestion of my supervisor, Dr. Gabriela Michalska, I also looked at how the evolution proceeds for different numbers of stars in the sets of the initial generation.\nThe --init_prob $x$ parameter defines the probability with which a candidate is drawn into a set of the first generation. If, for example, the script selects 100 (the default value) candidates with the daophot PICK command, among which we look for PSF stars, then the initial sets had an average size of $100 x$\nProvide the set of results directories and their labels below:", "resultpaths = [\n    '~/tmp/gapick_fine',\n    '~/tmp/gapick_simple',\n]\nlabels = [\n    'fine',\n    'simple',\n]", "Loading the log data from the results", "logbook = []\nresultpaths = [os.path.expanduser(p) for p in resultpaths]\nfor p in resultpaths:\n    f = open(os.path.join(p,'logbook.pkl'))\n    logbook.append(pickle.load(f))\n\ngens = []\nchi_min = []\nstars_av = []\nfor l in logbook:\n    gens.append(l.select('gen'))\n    chi_min.append(l.chapters[\"fitness\"].select(\"min\"))\n    stars_av.append(l.chapters[\"size\"].select(\"avg\"))\n\nfig, ax = subplots(2, 1, sharex=True)\nfig.set_size_inches(10,10)\nfor c, d, gen in zip(chi_min, labels, gens):\n    ax[0].plot(gen, c, label=\"Min chi ({})\".format(d))\n\nax[0].set_ylabel(\"chi\")\nax[0].legend(loc=\"upper right\")\nax[0].tick_params(axis='y', which='both', labelleft='on', labelright='on')\n\nfor s, d, gen in zip (stars_av, labels, gens):\n    ax[1].plot(gen, s, label=\"Av stars no ({})\".format(d))\n\nax[1].set_ylabel(\"stars\")\nax[1].legend(loc=\"lower right\")\nax[1].tick_params(axis='y', which='both', labelleft='on', labelright='on')\nax[1].set_xlabel(\"Generation\")\n\nshow()", "Let's check how different or similar the two extreme solutions are by juxtaposing the \"histograms\" of their runs.", "spectrum = [ l.select('spectrum') for l in logbook]\nx_starno = range(len(spectrum[0][0]))\n\nfig, ax = subplots(2,1)\nfig.subplots_adjust(hspace=0)\nfig.set_size_inches(10,10)\nax[1].set_ylim(0, len(gen)) # flip\nax[0].imshow(spectrum[0], interpolation='nearest', cmap=plt.cm.YlGnBu)\nax[1].imshow(spectrum[-1], interpolation='nearest', cmap=plt.cm.YlGnBu)\nax[0].set_xlabel(\"Star\")\nax[0].set_ylabel(\"Generation, ({})\".format(labels[0]))\nax[1].set_ylabel(\"Generation, ({})\".format(labels[-1]))\n#plt.colorbar(cont)\n\nplt.show()", "One can see that, although some stars appear in the results of both evolutions, the final picture is clearly different overall." ]
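A possible follow-up sketch, assuming the logbook stores one scalar chi minimum per generation as the plots above do (the tolerance and the example path are illustrative choices): estimating the generation after which the best chi effectively stops improving.

import os
import pickle
import numpy as np

def convergence_generation(logbook_path, tol=1e-3):
    """Last generation in which the best chi still improved by more than `tol`."""
    with open(os.path.expanduser(logbook_path), 'rb') as f:
        logbook = pickle.load(f)
    chi_min = np.ravel(logbook.chapters["fitness"].select("min")).astype(float)
    gains = -np.diff(chi_min)                 # positive where chi decreased
    improving = np.where(gains > tol)[0]
    return int(improving[-1]) + 1 if improving.size else 0

# e.g. convergence_generation('~/tmp/gapick_fine/logbook.pkl')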
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jgarciab/wwd2017
class8/class8_impute.ipynb
gpl-3.0
[ "Working with data 2017. Class 8\nContact\nJavier Garcia-Bernardo\ngarcia@uva.nl\n1. Clustering\n2. Data imputation\n3. Dimensionality reduction", "##Some code to run at the beginning of the file, to be able to show images in the notebook\n##Don't worry about this cell\n\n#Print the plots in this screen\n%matplotlib inline \n\n#Be able to plot images saved in the hard drive\nfrom IPython.display import Image \n\n#Make the notebook wider\nfrom IPython.core.display import display, HTML \ndisplay(HTML(\"<style>.container { width:90% !important; }</style>\"))\n\nimport seaborn as sns\nimport pylab as plt\nimport pandas as pd\nimport numpy as np\nimport scipy.stats\n\nimport statsmodels.formula.api as smf", "1. Clustering", "#Som elibraries\nfrom sklearn import preprocessing\nfrom sklearn.cluster import DBSCAN, KMeans\n\n#Read teh data, dropna, get sample\ndf = pd.read_csv(\"data/big3_position.csv\",sep=\"\\t\").dropna()\ndf[\"Revenue\"] = np.log10(df[\"Revenue\"])\ndf[\"Assets\"] = np.log10(df[\"Assets\"])\ndf[\"Employees\"] = np.log10(df[\"Employees\"])\ndf[\"MarketCap\"] = np.log10(df[\"MarketCap\"])\ndf = df.replace([np.inf,-np.inf],np.nan).dropna().sample(300)\ndf.head(2)\n\n#Scale variables to give all of them the same weight\nX = df.loc[:,[\"Revenue\",\"Assets\",\"Employees\",\"MarketCap\"]]\nX = preprocessing.scale(X)\nprint(X.sum(0))\nprint(X.std(0))\nX", "1a. Clustering with K-means\n\nk-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells.\nOther methods: http://scikit-learn.org/stable/modules/clustering.html", "#Get labels of each row and add a new column with the labels\nkmeans = KMeans(n_clusters=2, random_state=0).fit(X)\nlabels = kmeans.labels_\ndf[\"kmeans_labels\"] = labels\nsns.lmplot(x=\"MarketCap\",y=\"Assets\",hue=\"kmeans_labels\",fit_reg=False,data=df)", "1b. Clustering with DBSCAN\n\nThe DBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters found by DBSCAN can be any shape, as oppos", "#Get labels of each row and add a new column with the labels\ndb = DBSCAN(eps=1, min_samples=10).fit(X)\nlabels = db.labels_\ndf[\"dbscan_labels\"] = labels\nsns.lmplot(x=\"MarketCap\",y=\"Assets\",hue=\"dbscan_labels\",fit_reg=False,data=df)\n\nImage(url=\"http://scikit-learn.org/stable/_images/sphx_glr_plot_cluster_comparison_0011.png\")", "1c. Hierarchical clustering\n\nKeeps aggreagating from a point", "import scipy\nimport pylab\nimport scipy.cluster.hierarchy as sch\n\n# Generate distance matrix based on the difference between rows\nD = np.zeros([4,4])\nfor i in range(4):\n for j in range(4):\n D[i,j] = np.sum(np.abs(X[:,i]-X[:,j])) #Euclidean distance or mutual information are also common\n \nprint(D)\n\n#Create the linkage and plot\nY = sch.linkage(D, method='centroid') #many methods, single, complete...\nZ1 = sch.dendrogram(Y, orientation='right',labels=[\"Revenue\",\"Assets\",\"Employees\",\"MarketCap\"])\n", "2. 
Imputation of missing data (fancy)", "#Required libraries\n!conda install tensorflow -y\n!pip install fancyimpute\n!pip install pydot_ng\n\nimport sklearn.preprocessing\nimport sklearn\n\n\n#Read the data again but do not \ndf = pd.read_csv(\"data/big3_position.csv\",sep=\"\\t\")\ndf[\"Revenue\"] = np.log10(df[\"Revenue\"])\ndf[\"Assets\"] = np.log10(df[\"Assets\"])\ndf[\"Employees\"] = np.log10(df[\"Employees\"])\ndf[\"MarketCap\"] = np.log10(df[\"MarketCap\"])\n\n\nle = sklearn.preprocessing.LabelEncoder()\nlabels = le.fit_transform(df[\"TypeEnt\"])\ndf[\"TypeEnt_int\"] = labels\n\nprint(le.classes_)\n\ndf = df.replace([np.inf,-np.inf],np.nan).sample(300)\ndf.head(2)\n\nX = df.loc[:,[\"Revenue\",\"Assets\",\"Employees\",\"MarketCap\",\"TypeEnt_int\"]].values\nX\n\ndf.describe()\n\nfrom fancyimpute import KNN\n\n# X is the complete data matrix\n# X_incomplete has the same values as X except a subset have been replace with NaN\n\n# Use 10 nearest rows which have a feature to fill in each row's missing features\nX_filled_knn = KNN(k=10).complete(X)\ndf.loc[:,cols] = X_filled_knn\n\ndf.describe()" ]
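If installing fancyimpute is not an option, newer scikit-learn versions (0.22 and later) ship a KNN-based imputer built on the same idea; a small sketch on synthetic data (the array shape, missing-value fraction and seed are arbitrary, not taken from the class data):

import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.RandomState(0)
X = rng.normal(size=(300, 4))
X[rng.rand(*X.shape) < 0.1] = np.nan       # knock out ~10% of the entries

imputer = KNNImputer(n_neighbors=10)        # 10 nearest rows, as in the example above
X_filled = imputer.fit_transform(X)
print(np.isnan(X_filled).sum())             # 0 -> every gap was filled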
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
keras-team/keras-io
examples/keras_recipes/ipynb/sklearn_metric_callbacks.ipynb
apache-2.0
[ "Evaluating and exporting scikit-learn metrics in a Keras callback\nAuthor: lukewood<br>\nDate created: 10/07/2021<br>\nLast modified: 10/07/2021<br>\nDescription: This example shows how to use Keras callbacks to evaluate and export non-TensorFlow based metrics.\nIntroduction\nKeras callbacks allow for the execution of arbitrary\ncode at various stages of the Keras training process. While Keras offers first-class\nsupport for metric evaluation, Keras metrics may only\nrely on TensorFlow code internally.\nWhile there are TensorFlow implementations of many metrics online, some metrics are\nimplemented using NumPy or another Python-based numerical computation library.\nBy performing metric evaluation inside of a Keras callback, we can leverage any existing\nmetric, and ultimately export the result to TensorBoard.\nJaccard score metric\nThis example makes use of a sklearn metric, sklearn.metrics.jaccard_score(), and\nwrites the result to TensorBoard using the tf.summary API.\nThis template can be modified slightly to make it work with any existing sklearn metric.", "import tensorflow as tf\nimport tensorflow.keras as keras\nimport tensorflow.keras.layers as layers\nfrom sklearn.metrics import jaccard_score\nimport numpy as np\nimport os\n\n\nclass JaccardScoreCallback(keras.callbacks.Callback):\n \"\"\"Computes the Jaccard score and logs the results to TensorBoard.\"\"\"\n\n def __init__(self, model, x_test, y_test, log_dir):\n self.model = model\n self.x_test = x_test\n self.y_test = y_test\n self.keras_metric = tf.keras.metrics.Mean(\"jaccard_score\")\n self.epoch = 0\n self.summary_writer = tf.summary.create_file_writer(\n os.path.join(log_dir, model.name)\n )\n\n def on_epoch_end(self, batch, logs=None):\n self.epoch += 1\n self.keras_metric.reset_state()\n predictions = self.model.predict(self.x_test)\n jaccard_value = jaccard_score(\n np.argmax(predictions, axis=-1), self.y_test, average=None\n )\n self.keras_metric.update_state(jaccard_value)\n self._write_metric(\n self.keras_metric.name, self.keras_metric.result().numpy().astype(float)\n )\n\n def _write_metric(self, name, value):\n with self.summary_writer.as_default():\n tf.summary.scalar(\n name, value, step=self.epoch,\n )\n self.summary_writer.flush()\n", "Sample usage\nLet's test our JaccardScoreCallback class with a Keras model.", "# Model / data parameters\nnum_classes = 10\ninput_shape = (28, 28, 1)\n\n# The data, split between train and test sets\n(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\n\n# Scale images to the [0, 1] range\nx_train = x_train.astype(\"float32\") / 255\nx_test = x_test.astype(\"float32\") / 255\n# Make sure images have shape (28, 28, 1)\nx_train = np.expand_dims(x_train, -1)\nx_test = np.expand_dims(x_test, -1)\nprint(\"x_train shape:\", x_train.shape)\nprint(x_train.shape[0], \"train samples\")\nprint(x_test.shape[0], \"test samples\")\n\n\n# Convert class vectors to binary class matrices.\ny_train = keras.utils.to_categorical(y_train, num_classes)\ny_test = keras.utils.to_categorical(y_test, num_classes)\n\nmodel = keras.Sequential(\n [\n keras.Input(shape=input_shape),\n layers.Conv2D(32, kernel_size=(3, 3), activation=\"relu\"),\n layers.MaxPooling2D(pool_size=(2, 2)),\n layers.Conv2D(64, kernel_size=(3, 3), activation=\"relu\"),\n layers.MaxPooling2D(pool_size=(2, 2)),\n layers.Flatten(),\n layers.Dropout(0.5),\n layers.Dense(num_classes, activation=\"softmax\"),\n ]\n)\n\nmodel.summary()\n\nbatch_size = 128\nepochs = 
15\n\nmodel.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\ncallbacks = [JaccardScoreCallback(model, x_test, np.argmax(y_test, axis=-1), \"logs\")]\nmodel.fit(\n x_train,\n y_train,\n batch_size=batch_size,\n epochs=epochs,\n validation_split=0.1,\n callbacks=callbacks,\n)", "If you now launch a TensorBoard instance using tensorboard --logdir=logs, you will\nsee the jaccard_score metric alongside any other exported metrics!\n\nConclusion\nMany ML practitioners and researchers rely on metrics that may not yet have a TensorFlow\nimplementation. Keras users can still leverage the wide variety of existing metric\nimplementations in other frameworks by using a Keras callback. These metrics can be\nexported, viewed and analyzed in the TensorBoard like any other metric." ]
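To see what the callback is logging, sklearn.metrics.jaccard_score can also be called directly on small made-up label arrays; average=None returns the per-class scores that the callback then averages into a single value:

import numpy as np
from sklearn.metrics import jaccard_score

y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 2, 2, 2, 1, 0])
print(jaccard_score(y_true, y_pred, average=None))      # per-class: [1.0, 0.5, 0.667]
print(jaccard_score(y_true, y_pred, average="macro"))   # single summary: ~0.722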
[ "markdown", "code", "markdown", "code", "markdown" ]
UWashington-Astro300/Astro300-A16
08_Python_LaTeX.ipynb
mit
[ "Python and $\\LaTeX$", "%matplotlib inline\n\nimport sympy as sp\nimport numpy as np\nimport matplotlib.pyplot as plt", "Python uses the $\\LaTeX$ language to typeset equations.\nUse a single set of $ to make your $\\LaTeX$ inline and a double set $$ to center\nThis code will produce the output:\n$$ \\int \\cos(x)\\ dx = \\sin(x) $$\nUse can use $\\LaTeX$ in plots:", "plt.style.use('ggplot')\n\nx = np.linspace(0,2*np.pi,100)\ny = np.sin(5*x) * np.exp(-x)\n\nplt.plot(x,y)\nplt.title(\"The function $y\\ =\\ \\sin(5x)\\ e^{-x}$\")\nplt.xlabel(\"This is in units of 2$\\pi$\")\n\nplt.text(2.0, 0.4, '$\\Delta t = \\gamma\\, \\Delta t$', color='green', fontsize=36)", "Use can use sympy to make $\\LaTeX$ equations for you!", "z = sp.symbols('z')\n\na = 1/( (z+2)*(z+1) )\n\nprint(sp.latex(a))", "$$ \\frac{1}{\\left(z + 1\\right) \\left(z + 2\\right)} $$", "print(sp.latex(sp.Integral(z**2,z)))", "$$ \\int z^{2}\\, dz $$\nAstropy can output $\\LaTeX$ tables", "from astropy.io import ascii\nfrom astropy.table import QTable\n\nT = QTable.read('Zodiac.csv', format='ascii.csv')\n\nT[0:3]\n\nascii.write(T, format='latex')", "Some websites to open up for class:\n\n\nSpecial Relativity\n\n\n\n\n\nShareLatex\n\n\nShareLatex Docs and Help\n\n\nLatex Symbols\n\n\nLatex draw symbols\n\n\nThe SAO/NASA Astrophysics Data System\n\n\n\n\n\nLatex wikibook\n\n\n\nAssignment for Week 8", "t = np.linspace(0,12*np.pi,2000)\n\nfig,ax = plt.subplots(1,1) # One window\nfig.set_size_inches(11,8.5) # (width,height) - letter paper landscape\n\nfig.tight_layout() # Make better use of space on plot\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AtmaMani/pyChakras
udemy_ml_bootcamp/Python-for-Data-Visualization/Geographical Plotting/Choropleth Maps Exercise .ipynb
mit
[ "<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nChoropleth Maps Exercise\nWelcome to the Choropleth Maps Exercise! In this exercise we will give you some simple datasets and ask you to create Choropleth Maps from them. Due to the Nature of Plotly we can't show you examples\nFull Documentation Reference\nPlotly Imports", "import plotly.graph_objs as go \nfrom plotly.offline import init_notebook_mode,iplot\ninit_notebook_mode(connected=True)\nimport pandas as pd", "Import pandas and read the csv file: 2014_World_Power_Consumption", "df = pd.read_csv('./2014_World_Power_Consumption')\ndf.head()", "Referencing the lecture notes, create a Choropleth Plot of the Power Consumption for Countries using the data and layout dictionary.", "data = {'type':'choropleth', 'locations':df['Country'],'locationmode':'country names',\n 'z':df['Power Consumption KWH'], 'text':df['Text']}\n\nlayout={'title':'World power consumption', \n 'geo':{'showframe':True, 'projection':{'type':'Mercator'}}}\n\nchoromap = go.Figure(data = [data],layout = layout)\niplot(choromap,validate=False)", "USA Choropleth\n Import the 2012_Election_Data csv file using pandas.", "df2 = pd.read_csv('./2012_Election_Data')", "Check the head of the DataFrame.", "df2.head()", "Now create a plot that displays the Voting-Age Population (VAP) per state. If you later want to play around with other columns, make sure you consider their data type. VAP has already been transformed to a float for you.", "data = {'type':'choropleth', 'locations':df2['State Abv'],'locationmode':'USA-states',\n 'z':df2['% Non-citizen'], 'text':df2['% Non-citizen']}\nlayout={'geo':{'scope':'usa'}}\n\nchoromap = go.Figure(data = [data],layout = layout)\niplot(choromap,validate=False)", "Great Job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Aniruddha-Tapas/Applied-Machine-Learning
Classification/Logistic Regression for Banknote Authentication.ipynb
mit
[ "Logistic Regression for Banknote Authentication\n<hr>\nOverview\n\nChoosing a classification algorithm\nFirst steps with scikit-learn\n\nLoading the Dataset\n\n\nLogistic regression\n\nTraining a logistic regression model with scikit-learn\nMeasuring our classifier using Binary classification performance metrics\nConfusion Matrix\nPrecision and Recall \nCalculating the F1 measure\nROC-AUC\n\n\nFinding the most Important Features\nPlotting our model decison regions\nTackling overfitting via regularization\n\n\nSummary\n\nChoosing a classification algorithm\nIn the subsequent chapters, we will take a tour through a selection of popular and powerful machine learning algorithms that are commonly used in academia as well as in the industry. While learning about the differences between several supervised learning algorithms for classification, we will also develop an intuitive appreciation of their individual strengths and weaknesses by tackling real-word classification problems. We will take our first steps with the scikit-learn library, which offers a user-friendly interface for using those algorithms efficiently and productively. \nChoosing an appropriate classification algorithm for a particular problem task requires practice: each algorithm has its own quirks and is based on certain assumptions. The \"No Free Lunch\" theorem: no single classifier works best across all possible scenarios. In practice, it is always recommended that you compare the performance of at least a handful of different learning algorithms to select the best model for the particular problem; these may differ in the number of features or samples, the amount of noise in a dataset, and whether the classes are linearly separable or not.\nEventually, the performance of a classifier, computational power as well as predictive power, depends heavily on the underlying data that are available for learning. The five main steps that are involved in training a machine learning algorithm can be summarized as follows:\n\nSelection of features.\nChoosing a performance metric.\nChoosing a classifier and optimization algorithm.\nEvaluating the performance of the model.\nTuning the algorithm.\n\nSince the approach of this section is to build machine learning knowledge step by step, we will mainly focus on the principal concepts of the different algorithms in this chapter and revisit topics such as feature selection and preprocessing, performance metrics, and hyperparameter tuning for more detailed discussions later in the section.\nFirst steps with scikit-learn\nIn this example we are going to use Logistic Regression algorithm to classify banknotes as authentic or not. Since we have two outputs (Authentic or Not Authentic) this type of classification is called Binary Classification. Classification task consisting cases where we have to classify into one or more classes is called Multi-Class Classifation. \nDataset\nYou can get the Banknote Authentication dataset from here: http://archive.ics.uci.edu/ml/datasets/banknote+authentication. UCI Machine Learning Repository is one of the most widely used resource for datasets. As you'll see, we use multiple datasets from this repository to tackle different Machine Learning tasks. \nThe Banknote Authentication data was extracted from images that were taken from genuine and forged banknote-like specimens. For digitization, an industrial camera usually used for print inspection was used. The final images have 400x 400 pixels. 
Due to the object lens and distance to the investigated object gray-scale pictures with a resolution of about 660 dpi were gained. Wavelet Transform tool were used to extract features from images. The dataset contains the following attributes:\n\nvariance of Wavelet Transformed image (continuous) \nskewness of Wavelet Transformed image (continuous) \ncurtosis of Wavelet Transformed image (continuous) \nentropy of image (continuous) \nclass (integer 0 for fake and 1 for authentic bank notes) \n\nSave the downloaded data_banknote_authentication.txt in the same directory as of your code.", "import numpy as np\nimport pandas as pd\n\n# read .csv from provided dataset\ncsv_filename=\"data_banknote_authentication.txt\"\n\n# We assign the collumn names ourselves and load the data in a Pandas Dataframe\ndf=pd.read_csv(csv_filename,names=[\"Variance\",\"Skewness\",\"Curtosis\",\"Entropy\",\"Class\"])", "We'll take a look at our data:", "df.head()", "Apparently, the first 5 instances of our datasets are all fake (Class is 0).", "print(\"No of Fake bank notes = \" + str(len(df[df['Class'] == 0])))\n\nprint(\"No of Authentic bank notes = \" + str(len(df[df['Class'] == 1])))", "This shows we have 762 total instances of Fake banknotes and 610 total instances of Authentic banknotes in our dataset.", "features=list(df.columns[:-1])\nprint(\"Our features :\" )\nfeatures\n\nX = df[features]\ny = df['Class']\n\nprint('Class labels:', np.unique(y))", "To evaluate how well a trained model performs on unseen data, we will further split the dataset into separate training and test datasets. Splitting data into 70% training and 30% test data:", "from sklearn.cross_validation import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.3, random_state=0)", "Many machine learning and optimization algorithms also require feature scaling\nfor optimal performance. Here, we will standardize the features using the StandardScaler class from scikit-learn's preprocessing module:\nStandardizing the features:", "from sklearn.preprocessing import StandardScaler\n\nsc = StandardScaler()\nsc.fit(X_train)\nX_train_std = sc.transform(X_train)\nX_test_std = sc.transform(X_test)", "Using the preceding code, we loaded the StandardScaler class from the preprocessing module and initialized a new StandardScaler object that we assigned to the variable sc. Using the fit method, StandardScaler estimated the parameters μ (sample mean) and \u001f (standard deviation) for each feature dimension from the training data. By calling the transform method, we then standardized the training data using those estimated parameters μ and \u001f. Note that we used the same scaling parameters to standardize the test set so that both the values in the training and test dataset are comparable to each other.\nLogistic regression:\nLogistic regression is a classification model that is very easy to implement but performs very well on linearly separable classes. It is one of the most widely used algorithms for classification in industry. 
The logistic regression model is a linear model for binary classification that can be extended to multiclass classification via the OvR technique.\nThe sigmoid function used in the Logistic Regression:", "import matplotlib.pyplot as plt\nimport numpy as np\n\n\ndef sigmoid(z):\n return 1.0 / (1.0 + np.exp(-z))\n\nz = np.arange(-7, 7, 0.1)\nphi_z = sigmoid(z)\n\nplt.plot(z, phi_z)\nplt.axvline(0.0, color='k')\nplt.ylim(-0.1, 1.1)\nplt.xlabel('z')\nplt.ylabel('$\\phi (z)$')\n\n# y axis ticks and gridline\nplt.yticks([0.0, 0.5, 1.0])\nax = plt.gca()\nax.yaxis.grid(True)\n\nplt.tight_layout()\n# plt.savefig('./figures/sigmoid.png', dpi=300)\nplt.show()", "Learning the weights of the logistic cost function", "def cost_1(z):\n return - np.log(sigmoid(z))\n\n\ndef cost_0(z):\n return - np.log(1 - sigmoid(z))\n\nz = np.arange(-10, 10, 0.1)\nphi_z = sigmoid(z)\n\nc1 = [cost_1(x) for x in z]\nplt.plot(phi_z, c1, label='J(w) if y=1')\n\nc0 = [cost_0(x) for x in z]\nplt.plot(phi_z, c0, linestyle='--', label='J(w) if y=0')\n\nplt.ylim(0.0, 5.1)\nplt.xlim([0, 1])\nplt.xlabel('$\\phi$(z)')\nplt.ylabel('J(w)')\nplt.legend(loc='best')\nplt.tight_layout()\n# plt.savefig('./figures/log_cost.png', dpi=300)\nplt.show()", "Training a logistic regression model with scikit-learn\nScikit-learn implements a highly optimized version of logistic regression that also supports multiclass settings off-the-shelf, we will skip the implementation and use the sklearn.linear_model.LogisticRegression class as well as the familiar fit method to train the model on the standardized flower training dataset:", "from sklearn.linear_model import LogisticRegression\n\nlr = LogisticRegression(C=1000.0, random_state=0)\n\nlr.fit(X_train_std, y_train)\n\ny_test.shape", "Having trained a model in scikit-learn, we can make predictions via the predict method", "y_pred = lr.predict(X_test_std)\nprint('Misclassified samples: %d' % (y_test != y_pred).sum())", "On executing the preceding code, we see that the perceptron misclassifies 5 out of the 412 note samples. Thus, the misclassification error on the test dataset is 0.012 or 1.2 percent (5/412 = 0.012\u001d).\nMeasuring our classifier using Binary classification performance metrics\nA variety of metrics exist to evaluate the performance of binary classifiers against\ntrusted labels. The most common metrics are accuracy, precision, recall, F1 measure,\nand ROC AUC score. All of these measures depend on the concepts of true positives,\ntrue negatives, false positives, and false negatives. Positive and negative refer to the\nclasses. True and false denote whether the predicted class is the same as the true class.\nFor our Banknote classifier, a true positive prediction is when the classifier correctly\npredicts that a note is authentic. A true negative prediction is when the classifier\ncorrectly predicts that a note is fake. A prediction that a fake note is authentic\nis a false positive prediction, and an authentic note is incorrectly classified as fake is a\nfalse negative prediction. \nConfusion Matrix\nA confusion matrix, or contingency table, can be used to\nvisualize true and false positives and negatives. 
The rows of the matrix are the true\nclasses of the instances, and the columns are the predicted classes of the instances:", "from sklearn.metrics import confusion_matrix\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nconfusion_matrix = confusion_matrix(y_test, y_pred)\nprint(confusion_matrix)\nplt.matshow(confusion_matrix)\nplt.title('Confusion matrix')\nplt.colorbar()\nplt.ylabel('True label')\nplt.xlabel('Predicted label')\nplt.show()", "The confusion matrix indicates that there were 227 true negative predictions, 180\ntrue positive predictions, 0 false negative predictions, and 5 false positive\nprediction.\nScikit-learn also implements a large variety of different performance metrics that are available via the metrics module. For example, we can calculate the classification accuracy of the perceptron on the test set as follows:", "from sklearn.metrics import accuracy_score\nprint('Accuracy: %.2f' % accuracy_score(y_test, y_pred))", "Here, y_test are the true class labels and y_pred are the class labels that we predicted previously.\nFurthermore, we can predict the class-membership probability of the samples via\nthe predict_proba method. For example, we can predict the probabilities of the\nfirst banknote sample:", "lr.predict_proba(X_test_std[0,:])", "The preceding array tells us that the model predicts a chance of 99.96 percent that the sample is an autentic banknote (y = 1) class, and 0.003 percent chance that the sample is a fake note (y = 0).\nWhile accuracy measures the overall correctness of the classifier, it does not distinguish between false positive errors and false negative errors. Some applications may be more sensitive to false negatives than false positives, or vice\nversa. Furthermore, accuracy is not an informative metric if the proportions of the classes are skewed in the population. For example, a classifier that predicts whether or not credit card transactions are fraudulent may be more sensitive to\nfalse negatives than to false positives.\nA classifier that always predicts that transactions are legitimate could have a high accuracy score, but would not be useful. For these reasons, classifiers are often evaluated using two additional measures called precision and recall.\nPrecision and Recall:\nPrecision is the fraction of positive predictions that are correct. For instance, in our Banknote Authentication\nclassifier, precision is the fraction of notes classified as authentic that are actually\nauthentic. \nPrecision is given by the following ratio:\nP = TP / (TP + FP)\nSometimes called sensitivity in medical domains, recall is the fraction of the truly\npositive instances that the classifier recognizes. A recall score of one indicates\nthat the classifier did not make any false negative predictions. For our Banknote Authentication\nclassifier, recall is the fraction of authentic notes that were truly classified as authentic.\nRecall is calculated with the following ratio:\nR = TP / (TP + FN)\nIndividually, precision and recall are seldom informative; they are both incomplete\nviews of a classifier's performance. Both precision and recall can fail to distinguish\nclassifiers that perform well from certain types of classifiers that perform poorly. A\ntrivial classifier could easily achieve a perfect recall score by predicting positive for\nevery instance. For example, assume that a test set contains ten positive examples\nand ten negative examples. 
\nA classifier that predicts positive for every example will\nachieve a recall of one, as follows:\nR = 10 / (10 + 0) = 1\nA classifier that predicts negative for every example, or that makes only false positive\nand true negative predictions, will achieve a recall score of zero. Similarly, a classifier\nthat predicts that only a single instance is positive and happens to be correct will\nachieve perfect precision.\nScikit-learn provides a function to calculate the precision and recall for a classifier\nfrom a set of predictions and the corresponding set of trusted labels. \nCalculating our Banknote Authentication classifier's precision and recall:", "from sklearn.cross_validation import cross_val_score\n\nprecisions = cross_val_score(lr, X_train_std, y_train, cv=5,scoring='precision')\nprint('Precision', np.mean(precisions), precisions)\n\nrecalls = cross_val_score(lr, X_train_std, y_train, cv=5,scoring='recall')\nprint('Recalls', np.mean(recalls), recalls)", "Our classifier's precision is 0.988; almost all of the notes that it predicted as\nauthentic were actually authentic. Its recall is also high, indicating that it correctly classified\napproximately 98 percent of the authentic messages as authentic.\nCalculating the F1 measure\nThe F1 measure is the harmonic mean, or weighted average, of the precision and\nrecall scores. Also called the f-measure or the f-score, the F1 score is calculated using\nthe following formula:\nF1 = 2PR / (P + R)\nThe F1 measure penalizes classifiers with imbalanced precision and recall scores,\nlike the trivial classifier that always predicts the positive class. A model with perfect\nprecision and recall scores will achieve an F1 score of one. A model with a perfect\nprecision score and a recall score of zero will achieve an F1 score of zero. As for\nprecision and recall, scikit-learn provides a function to calculate the F1 score for\na set of predictions. Let's compute our classifier's F1 score.", "f1s = cross_val_score(lr, X_train_std, y_train, cv=5, scoring='f1')\nprint('F1', np.mean(f1s), f1s)", "The arithmetic mean of our classifier's precision and recall scores is 0.98. As the\ndifference between the classifier's precision and recall is small, the F1 measure's\npenalty is small. Models are sometimes evaluated using the F0.5 and F2 scores,\nwhich favor precision over recall and recall over precision, respectively.\nROC AUC\nA Receiver Operating Characteristic, or ROC curve, visualizes a classifier's performance. Unlike accuracy, the ROC curve is insensitive to data sets with unbalanced class proportions; unlike precision and recall, the ROC curve illustrates the classifier's performance for all values of the discrimination threshold. ROC curves plot the classifier's recall against its fall-out. Fall-out, or the false positive rate, is the number of false positives divided by the total number of negatives. 
It is\ncalculated using the following formula:\nF = FP / (TN + FP)", "from sklearn.metrics import roc_auc_score, roc_curve, auc\nroc_auc_score(y_test,lr.predict(X_test_std))", "Plotting the ROC curve for our banknote authentication classifier:", "import matplotlib.pyplot as plt\n%matplotlib inline\ny_pred = lr.predict_proba(X_test_std)\n\nfalse_positive_rate, recall, thresholds = roc_curve(y_test, y_pred[:, 1])\nroc_auc = auc(false_positive_rate, recall)\nplt.title('Receiver Operating Characteristic')\nplt.plot(false_positive_rate, recall, 'b', label='AUC = %0.2f' % roc_auc)\nplt.legend(loc='lower right')\n\nplt.plot([0, 1], [0, 1], 'r--')\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.0])\nplt.ylabel('Recall')\nplt.xlabel('Fall-out')\nplt.show()", "From the ROC AUC plot, it is apparent that our classifier outperforms random\nguessing and does a very good job in classifying; almost all of the plot area lies under its curve.\nFinding the most important features with forests of trees\nThis example shows the use of forests of trees to evaluate the importance of the features on our classification task. The red bars are the feature importances of the forest, along with their inter-trees variability.", "import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.ensemble import ExtraTreesClassifier\n\n# Build a forest and compute the feature importances\nforest = ExtraTreesClassifier(n_estimators=250,\n                              random_state=0)\n\nforest.fit(X, y)\nimportances = forest.feature_importances_\nstd = np.std([tree.feature_importances_ for tree in forest.estimators_],\n             axis=0)\nindices = np.argsort(importances)[::-1]\n\n# Print the feature ranking\nprint(\"Feature ranking:\")\n\nfor f in range(X.shape[1]):\n    print(\"%d. feature %d - %s (%f) \" % (f + 1, indices[f], features[indices[f]], importances[indices[f]]))\n\n# Plot the feature importances of the forest\nplt.figure(num=None, figsize=(10, 6), dpi=80, facecolor='w', edgecolor='k')\nplt.title(\"Feature importances\")\nplt.bar(range(X.shape[1]), importances[indices],\n        color=\"r\", yerr=std[indices], align=\"center\")\nplt.xticks(range(X.shape[1]), indices)\nplt.xlim([-1, X.shape[1]])\nplt.show()", "We'll cover the details of the code later. For now, it is evident that the most important features helping us to classify correctly are Variance and Skewness.\nWe'll use these two features to plot our graph. 
\nPlotting our model decision regions\nFinally, we can plot the decision regions of our newly trained logistic regression model and visualize how well it separates the different samples.", "X_train, X_test, y_train, y_test = train_test_split(\n    X[['Variance','Skewness']], y, test_size=0.3, random_state=0)\nsc = StandardScaler()\nsc.fit(X_train)\nX_train_std = sc.transform(X_train)\nX_test_std = sc.transform(X_test)\n\nfrom matplotlib.colors import ListedColormap\nimport matplotlib.pyplot as plt\nimport warnings\n\n\ndef versiontuple(v):\n    return tuple(map(int, (v.split(\".\"))))\n\n\ndef plot_decision_regions(X, y, classifier, test_idx=None, resolution=0.02):\n\n    # setup marker generator and color map\n    markers = ('s', 'x', 'o', '^', 'v')\n    colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')\n    cmap = ListedColormap(colors[:len(np.unique(y))])\n\n    # plot the decision surface\n    x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1\n    x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1\n    xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),\n                           np.arange(x2_min, x2_max, resolution))\n    Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)\n    Z = Z.reshape(xx1.shape)\n    plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)\n    plt.xlim(xx1.min(), xx1.max())\n    plt.ylim(xx2.min(), xx2.max())\n\n    for idx, cl in enumerate(np.unique(y)):\n        plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],\n                    alpha=0.8, c=cmap(idx),\n                    marker=markers[idx], label=cl)\n\n    # highlight test samples\n    if test_idx:\n        # plot all samples\n        if not versiontuple(np.__version__) >= versiontuple('1.9.0'):\n            X_test, y_test = X[list(test_idx), :], y[list(test_idx)]\n            warnings.warn('Please update to NumPy 1.9.0 or newer')\n        else:\n            X_test, y_test = X[test_idx, :], y[test_idx]\n\n        plt.scatter(X_test[:, 0],\n                    X_test[:, 1],\n                    c='',\n                    alpha=1.0,\n                    linewidths=1,\n                    marker='o',\n                    s=55, label='test set')\n\nfrom sklearn.linear_model import LogisticRegression\n\nlr = LogisticRegression(C=1000.0, random_state=0)\nlr.fit(X_train_std, y_train)\n\n# Stack the standardized training and test samples so both can be shown in one plot\nX_combined_std = np.vstack((X_train_std, X_test_std))\ny_combined = np.hstack((y_train, y_test))\n\nplot_decision_regions(X_combined_std, y_combined,\n                      classifier=lr, test_idx=range(105, 150))\nplt.xlabel('Variance')\nplt.ylabel('Skewness')\nplt.legend(loc='upper left')\nplt.tight_layout()\nplt.show()", "As we can see in the resulting plot, the banknote classes have been separated by a linear decision boundary generated by the Logistic Regression classifier. We were able to draw this 2D plot because we restricted ourselves to the two features Skewness and Variance.\nTackling overfitting via regularization\nOverfitting is a common problem in machine learning, where a model performs well on training data but does not generalize well to unseen data (test data). If a model suffers from overfitting, we also say that the model has a high variance, which can be caused by having too many parameters that lead to a model that is too complex given the underlying data. Similarly, our model can also suffer from underfitting (high bias), which means that our model is not complex enough to capture the pattern in the training data well and therefore also suffers from low performance\non unseen data.\nThe problem of overfitting and underfitting can be best illustrated by using a more complex, nonlinear decision boundary as shown in the following figure:\n<img src=\"images/overfitting.png\">\nOne way of finding a good bias-variance tradeoff is to tune the complexity of the model via regularization. Regularization is a very useful method to handle collinearity (high correlation among features), filter out noise from data, and eventually prevent overfitting.
The concept behind regularization is to introduce additional information (bias) to penalize extreme parameter weights. The most common form of regularization is the so-called L2 regularization (sometimes also called L2 shrinkage or weight decay). \nThe parameter C that is implemented for the LogisticRegression class in\nscikit-learn comes from a convention in support vector machines, which will be\nthe topic of the next section. C is the inverse of the regularization parameter λ.\nConsequently, decreasing the value of the inverse regularization parameter C means that we are increasing the regularization strength.\n\nSummary\nIn this chapter, we learned that logistic regression is not only a useful model for online learning via stochastic gradient descent, but also allows us to predict the probability of a particular event. Feel free to try out different datasets to better understand the cases where logistic regression works best.\n<hr>" ]
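To complement the description of C above, the following sketch (an illustration added here, not code from the original chapter) shows how the learned logistic regression weights shrink as C is decreased, i.e. as the L2 regularization strength grows. It assumes the standardized training data `X_train_std` and `y_train` from the earlier cells are still in scope.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fit the model for a range of inverse regularization strengths C = 10**c.
# Smaller C means a stronger L2 penalty, which pulls the weights toward zero.
weights, params = [], []
for c in np.arange(-5, 5):
    lr_c = LogisticRegression(C=10.0 ** c, random_state=0)
    lr_c.fit(X_train_std, y_train)
    weights.append(lr_c.coef_[0])
    params.append(10.0 ** c)

print(params)
print(np.array(weights))  # weight magnitudes decrease as C gets smaller
```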
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ClementPhil/deep-learning
tv-script-generation/dlnd_tv_script_generation.ipynb
mit
[ "TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)", "import numpy as np\nimport problem_unittests as tests\nfrom collections import Counter\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n \n vocab = set(text)\n vocab_to_int = {word: index for index, word in enumerate(vocab)}\n int_to_vocab = dict(enumerate(vocab))\n # TODO: Implement Function\n return vocab_to_int, int_to_vocab\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! 
)\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".", "\ndef token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n token_dick = {'.':'||Period||', ',':'||Comma||', '\"':'||Quotation_Mark||',';':'||Semicolon||','!':'||Exclamation_mark||','?':'||Question_mark||','(':'||Left_Parentheses||',')':'||Right_Parentheses||','--':'||Dash||','\\n':'||Return||' }\n return token_dick\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)", "def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n # TODO: Implement Function\n Input = tf.placeholder(tf.int32,[None,None],name='input')\n Targets = tf.placeholder(tf.int32,[None,None],name='target')\n LearningRate = tf.placeholder(tf.float32);\n return Input, Targets, LearningRate\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)", "Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)", "def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # TODO: Implement Function\n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.75)\n Cell = tf.contrib.rnn.MultiRNNCell([lstm] * 2)\n initial_state = Cell.zero_state(batch_size,tf.float32)\n InitialState= tf.identity(initial_state, name='initial_state')\n \n return Cell, InitialState\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)", "Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.", "def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function\n embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n embed = tf.nn.embedding_lookup(embedding,input_data)\n return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)", "Build RNN\nYou created a RNN Cell in the get_init_cell() function. 
Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)", "def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n \n # TODO: Implement Function\n outputs,final_state = tf.nn.dynamic_rnn(cell,inputs,dtype=tf.float32)\n final = tf.identity(final_state,name=\"final_state\")\n return outputs, final\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)", "Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)", "def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n embed = get_embed(input_data,vocab_size,rnn_size)\n outputs ,final = build_rnn(cell,embed)\n logits = tf.contrib.layers.fully_connected(outputs,vocab_size,activation_fn=None,weights_initializer=tf.truncated_normal_initializer(stddev=0.1),biases_initializer=tf.zeros_initializer())\n return logits, final\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)", "Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2], [ 7 8], [13 14]]\n # Batch of targets\n [[ 2 3], [ 8 9], [14 15]]\n ]\n# Second Batch\n [\n # Batch of Input\n [[ 3 4], [ 9 10], [15 16]]\n # Batch of targets\n [[ 4 5], [10 11], [16 17]]\n ]\n# Third Batch\n [\n # Batch of Input\n [[ 5 6], [11 12], [17 18]]\n # Batch of targets\n [[ 6 7], [12 13], [18 1]]\n ]\n]\n```\nNotice that the last target value in the last batch is the first input value of the first batch. In this case, 1. 
This is a common technique used when creating sequence batches, although it is rather unintuitive.", "def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n # I turned for help in forum using the utils.py, I sruggled for days\n \n # TODO: Implement Function\n n_batches = int(len(int_text) / (batch_size * seq_length))\n\n # Drop the last few characters to make only full batches\n xdata = np.array(int_text[: n_batches * batch_size * seq_length])\n ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])\n\n x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)\n y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)\n\n return np.array(list(zip(x_batches, y_batches)))\n \n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.", "# Number of Epochs\nnum_epochs = 200\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 128\n# Embedding Dimension Size\nembed_dim = None\n# Sequence Length\nseq_length = 7\n# Learning Rate\nlearning_rate = 0.005\n# Show stats for every n number of batches\nshow_every_n_batches = 20\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. 
If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')", "Save Parameters\nSave seq_length and save_dir for generating a new TV script.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()", "Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)", "def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n inputTensor = loaded_graph.get_tensor_by_name(\"input:0\")\n initialState = loaded_graph.get_tensor_by_name(\"initial_state:0\")\n finalStateTensor = loaded_graph.get_tensor_by_name(\"final_state:0\")\n probsTensor = loaded_graph.get_tensor_by_name(\"probs:0\")\n return inputTensor, initialState,finalStateTensor, probsTensor\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)", "Choose Word\nImplement the pick_word() function to select the next word using probabilities.", "def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n return np.random.choice(list(int_to_vocab.values()),p=probabilities)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)", "Generate TV Script\nThis will generate the TV script for you. 
Set gen_length to the length of TV script you want to generate.", "gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)", "The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
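One optional variation on the pick_word step above (my illustration, not part of the original project) is to add a sampling "temperature": values below 1 concentrate probability on the most likely words and make the script more conservative, while values above 1 make it more random. The sketch only assumes the `probabilities` vector and the `int_to_vocab` dictionary that pick_word already uses.

```python
import numpy as np

def pick_word_with_temperature(probabilities, int_to_vocab, temperature=1.0):
    """Sample the next word after rescaling the distribution by a temperature."""
    # Rescale in log space, then renormalize so the values sum to 1 again.
    logits = np.log(np.asarray(probabilities) + 1e-10) / temperature
    rescaled = np.exp(logits)
    rescaled = rescaled / np.sum(rescaled)
    word_id = np.random.choice(len(rescaled), p=rescaled)
    return int_to_vocab[int(word_id)]
```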
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
BONSAMURAIS/bonsai
legacy-examples/Overdetermined system resolution - sugar in soft drinks and spirits.ipynb
bsd-3-clause
[ "Overdetermined system resolution - sugar consumption in beverages\nIn the process of disagregating an IO-activity (such as the beverage industry into winery, brewery, distilled beverages, softs drinks and bottled water) the following issue is frequently raised: How do we distribute the specific inputs (e.g. sugar) over the different industries?\nFinding the sugar consumption in the different beverages is equivalent to solving an overdetermined system. An overdetermined system has more equations than unknowns. In this notebook, we will explore different ways to solve this overdetermined system, the solution they provide and the relative error associated with these solutions.\nCountries in Exiobase (NAM version)\nThe \"total sugar consumption of the beverage industry\" is provided in EXIOBASE for 14 countries:\n- sugar = quantity of sugar consumed for all beverages (kg)\n\nCzech Republic (CZ)\nGermany (DE)\nDenmark (DK)\nSpain (ES)\nFinland (FI)\nFrance (FR)\nIreland (IE)\nItaly (IT)\nPoland (PL)\nSweden (SE)\nGreat Britain (GB)\nUnited States (US)\nCanada (CA)\nNorway (NO)\n\nAlso, the volumes of each beverage type produced in the different countries were collected (extracted from industry associations):\n- wine = volume of wine (L)\n- beer = volume of beer (L)\n- cider = volume of cider (L)\n- distilled beverages = volume of distilled beverages (L)\n- soft drinks = volume of soft drinks (L)\n- bottled water = volume of bottled water (L)\nTo cross-check numbers from the different data sources, the country specific \"total volume of beverages produced\" registered in EXIOBASE can be compared to the sum of all volumes collected per beverge type in the same country.\nUnknowns\nThe unknowns are the amounts of sugar required for each beverage type:\n- Sw = Sugar in wine (kg/L) => the wineries register no sugar input, so Sw = 0\n- Sb = Sugar in beer (kg/L) => the breweries register no sugar input, so Sb = 0\n- Sc = Sugar in cider (kg/L) => the varying sugar content of hard cider is a result of the fermentation process, sweeter ciders are slowly fermented and repeatedly racked (moved to new containers) to strain the yeast that feeds on the cider’s natural sugars.\n- Sd = Sugar in distilled beverages (kg/L)\n- Ss = Sugar in soft drinks (kg/L)\n- Sw = Sugar in water (kg/L) => there should be no sugar in bottled water, so Sw = 0\nEquations\n14 countries => so 14 equations, however we decided to exclude the equation for Canada because the volume of soft drinks in Canada (collected by Ivan) is suspiciously low compared to the sugar consumption in the beverage industry. \n=> so 13 equations\nThe original equation of the overtermined system was:\n- Sw x wine + Sb x beer + Sc x cider + Sd x distilled beverages + Ss x soft drinks + Sw x bottled water = Sugar\nWe have now simplified it into:\n- Sc x cider + Sd x distilled beverages + Ss x soft drinks = Sugar\n1. 
Linear regressions on sugar rates and identifying outliers\nFirst for soft drinks, then for distilled beverages, finally for cider", "from sklearn import linear_model\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n## without removing outliers\n## wo extension = with outliers\nsugarwo = np.array([2.62E7,1.46E8,1.02E7,1.03E8,6.82E6,9.00E7,8.78E6,7.22E7,5.90E7,7.54E6,8.03E7,5.49E8,3.40E8,6.18E6])\n\nwine = np.array([4.70E7,9.13E8,0,3.34E9,0,5.11E9,0,4.67E9,8.32E6,0,1.45E6,2.78E9,5.65E7,0])\nbeer = np.array([1.68E9,8.71E9,6.59E8,2.96E9,4.37E8,1.58E9,8.52E8,1.3E9,3.79E9,4.59E8,5.46E9,2.25E10,1.95E9,2.29E8])\nciderwo = np.array([8.09E6,8.55E7,4.49E6,6.24E7,8.21E7,1.18E8,6.84E7,0,1.57E8,1.26E7,8.97E8,3.48E7,1.65E7,1.02E7])\nspiritswo = np.array([1.82E7,1.11E8,6.08E6,1.74E8,2.21E7,1.91E8,2.55E7,1.61E8,1.26E8,3.45E7,7.47E8,7.06E8,2.3E8,1.25E6])\nsoftdrinkswo = np.array([1.75E9,9.44E9,6.94E8,6.30E9,4.09E8,5.51E9,2.96E8,3.87E9,4.25E9,5.84E8,6.46E9,4.96E10,3.53E9,5.13E8])\nbottledwater = np.array([6.86E8,1.04E10,1.13E8,6.52E9,7.30E7,1.08E10,6.93E7,1.56E10,3.44E9,1.38E8,1.19E9,3.4E10,2.29E9,8.6E7])", "Soft drinks", "%matplotlib inline\nplt.scatter(sugarwo,softdrinkswo)\nplt.show()", "Identify and remove outliers: Canada (The sugar consumption is quite high compared to the amount of soft drinks produced in Canada)", "softdrinks = np.array([1.75E9,9.44E9,6.94E8,6.30E9,4.09E8,5.51E9,2.96E8,3.87E9,4.25E9,5.84E8,6.46E9,5.13E8])\nsugar = np.array([2.62E7,1.46E8,1.02E7,1.03E8,6.82E6,9.00E7,8.78E6,7.22E7,5.90E7,7.54E6,8.03E7,6.18E6])\n\n#Create linear regression object\nregr = linear_model.LinearRegression()\n#Train the model using the sets\nregr.fit(sugar[:,np.newaxis],softdrinks)\n#The coefficients\nprint('Coefficients:\\n', regr.coef_)", "We know that part of the sugar goes into other drinks.\nThe 64 kg sugar per L soft drink coefficient is too high, meaning that not all the sugar goes into soft drinks. \nThe sugar rate per L beverage is supposed to be around 0.0028-0.042 kg /L beverage.", "#The mean square error\nprint(\"Residual sum of squares: %.2f\" % np.mean((regr.predict(sugar.reshape(-1,1))-softdrinks.reshape(-1,1))**2))\n#Explained variance score: 1 is perfect prediction\nprint('Variance score: %.2f' % regr.score(sugar.reshape(-1,1), softdrinks.reshape(-1,1)))", "The variance obtained for the correlation between softdrinks and sugar is very close to 1; which confirms the sugar consumption in soft drinks. 
(linear relationship between sugar consumption and softdrinks)", "%matplotlib inline\nplt.scatter(sugar,softdrinks, color='black')\nplt.plot(sugar.reshape(-1,1),regr.predict(sugar.reshape(-1,1)), color='blue',linewidth=2)\nplt.xticks(())\nplt.yticks(())\nplt.show()", "Distilled beverages", "%matplotlib inline\nplt.scatter(sugarwo,spiritswo)", "Identify and remove outlier : United States (The volume of spirits produced is quite high and the sugar consumption quite low)", "spirits = np.array([1.82E7,1.11E8,6.08E6,1.74E8,2.21E7,1.91E8,2.55E7,1.61E8,1.26E8,3.45E7,7.47E8,1.25E6])\nsugar = np.array([2.62E7,1.46E8,1.02E7,1.03E8,6.82E6,9.00E7,8.78E6,7.22E7,5.90E7,7.54E6,8.03E7,6.18E6])\n\n%matplotlib inline\n#Create linear regression object\nregr = linear_model.LinearRegression()\n#Train the model using the sets\nregr.fit(sugar[:,np.newaxis],spirits)\n#The coefficients\nprint('Coefficients:\\n', regr.coef_)\n#The mean square error\nprint(\"Residual sum of squares: %.2f\" % np.mean((regr.predict(sugar.reshape(-1,1))-spirits.reshape(-1,1))**2))\n#Explained variance score: 1 is perfect prediction\nprint('Variance score: %.2f' % regr.score(sugar.reshape(-1,1),spirits.reshape(-1,1)))\nplt.scatter(sugar,spirits, color='black')\nplt.plot(sugar.reshape(-1,1),regr.predict(sugar.reshape(-1,1)), color='blue',linewidth=2)\nplt.xticks(())\nplt.yticks(())\nplt.show()", "Cider", "%matplotlib inline\nplt.scatter(sugarwo,ciderwo)", "Identify and remove outliers: United Kingdom (World's largest consumer of cider: http://greatist.com/health/beer-or-cider-healthier)", "sugar = np.array([2.62E7,1.46E8,1.02E7,1.03E8,6.82E6,9.00E7,8.78E6,7.22E7,5.90E7,7.54E6,6.18E6])\ncider = np.array([8.09E6,8.55E7,4.49E6,6.24E7,8.21E7,1.18E8,6.84E7,0,1.57E8,1.26E7,1.02E7])\n\n%matplotlib inline\n#Create linear regression object\nregr = linear_model.LinearRegression()\n#Train the model using the sets\nregr.fit(sugar[:,np.newaxis],cider)\n#The coefficients\nprint('Coefficients:\\n', regr.coef_)\n#The mean square error\nprint(\"Residual sum of squares: %.2f\" % np.mean((regr.predict(sugar.reshape(-1,1))-cider.reshape(-1,1))**2))\n#Explained variance score: 1 is perfect prediction\nprint('Variance score: %.2f' % regr.score(sugar.reshape(-1,1),cider.reshape(-1,1)))\nplt.scatter(sugar,cider, color='black')\nplt.plot(sugar.reshape(-1,1),regr.predict(sugar.reshape(-1,1)), color='blue',linewidth=2)\nplt.xticks(())\nplt.yticks(())\nplt.show()", "2. Resolution with least-square method\nCalculation of a pseudo-solution of the system using the Moore-Penrose pseudo-inverse.\nhttps://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)", "from numpy import matrix\nfrom scipy.linalg import pinv\n\n# Matrix of the overdetermined system\nvolumes = matrix([[1.82E7,1.75E9], [1.11E8,9.44E9], [6.08E6, 6.94E8], [1.74E8,6.30E9],[2.21E7,4.09E8],[1.91E8,5.51E9],[2.55E7,2.96E8],[1.61E8,3.87E9],[1.26E8,4.25E9],[3.45E7,5.84E8], [7.47E8,6.46E9],[7.06E8,4.96E10],[1.25E6,5.13E8]])\nprint(volumes)\n\n# The second member\nsugar=matrix([[2.62E7],[1.46E8],[1.02E7],[1.03E8],[6.82E6],[9.00E7],[8.78E6],[7.22E7],[5.90E7],[7.54E6],[8.03E7],[5.49E8],[6.18E6]])\nprint(sugar)\n\n# Calculation of Moore-Penrose inverse\nPIA=pinv(volumes)\nprint(PIA)\n\n# Application to second member for pseudo solution\nprint (PIA*sugar)", "The solutions we obtain with the least square method are: \n- sugar rate in distilled beverages: 0.028 kg/L\n- sugar rate in soft drinks: 0.011 kg/L\nThe sugar rate is higher for distilled beverages? 
(Is it because we removed soft drink Canada production which contains a lot of sugar?)\nCCL: The least square method gives equal probability to the linear relationship between the sugar intake in distilled beverages or soft drinks.\nResolution with the QR decomposition vs least square method\nIn linear algebra, a QR decomposition (also called a QR factorization) of a matrix is a decomposition of a matrix A into a product A = QR of an orthogonal matrix Q and an upper triangular matrix R. QR decomposition is often used to solve the linear least squares problem.\nThe method of least squares is a standard approach in regression analysis to the approximate solution of overdetermined systems, i.e., sets of equations in which there are more equations than unknowns. \"Least squares\" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation.\nDifferent outliers were removed from the equations.", "# When not removing any outliers from the equation\nfrom numpy import *\n# generating the overdetermined system\nA = matrix([[1.82E7,1.75E9], [1.11E8,9.44E9], [6.08E6, 6.94E8], [1.74E8,6.30E9],[2.21E7,4.09E8],[1.91E8,5.51E9],[2.55E7,2.96E8],[1.61E8,3.87E9],[1.26E8,4.25E9],[3.45E7,5.84E8], [7.47E8,6.46E9],[7.06E8,4.96E10],[2.30E8,3.53E9],[1.25E6,5.13E8]])\nb = matrix([[2.62E7],[1.46E8],[1.02E7],[1.03E8],[6.82E6],[9.00E7],[8.78E6],[7.22E7],[5.90E7],[7.54E6],[8.03E7],[5.49E8],[3.40E10],[6.18E6]])\nx_lstsq = linalg.lstsq(A,b)[0] # computing the numpy solution\nQ,R = linalg.qr(A) #qr decomposition of A\nQb = dot(Q.T,b) # computing Q^T*b (project b onto the range of A)\nx_qr = linalg.solve(R,Qb) # solving R*x = Q^T*b\n# comparing the solutions\nprint('qr solution')\nprint(x_qr)\nprint ('lstsq solution')\nprint(x_lstsq)\n\n# When removing only Canada from the equation\nfrom numpy import *\n# generating the overdetermined system\nA = matrix([[1.82E7,1.75E9], [1.11E8,9.44E9], [6.08E6, 6.94E8], [1.74E8,6.30E9],[2.21E7,4.09E8],[1.91E8,5.51E9],[2.55E7,2.96E8],[1.61E8,3.87E9],[1.26E8,4.25E9],[3.45E7,5.84E8], [7.47E8,6.46E9],[7.06E8,4.96E10],[1.25E6,5.13E8]])\nb = matrix([[2.62E7],[1.46E8],[1.02E7],[1.03E8],[6.82E6],[9.00E7],[8.78E6],[7.22E7],[5.90E7],[7.54E6],[8.03E7],[5.49E8],[6.18E6]])\nx_lstsq = linalg.lstsq(A,b)[0] # computing the numpy solution\nQ,R = linalg.qr(A) #qr decomposition of A\nQb = dot(Q.T,b) # computing Q^T*b (project b onto the range of A)\nx_qr = linalg.solve(R,Qb) # solving R*x = Q^T*b\n# comparing the solutions\nprint('qr solution')\nprint(x_qr)\nprint ('lstsq solution')\nprint(x_lstsq)\n\n#When removing only the United States from the equation\nfrom numpy import *\n# generating the overdetermined system\nA = matrix([[1.82E7,1.75E9], [1.11E8,9.44E9], [6.08E6, 6.94E8], [1.74E8,6.30E9],[2.21E7,4.09E8],[1.91E8,5.51E9],[2.55E7,2.96E8],[1.61E8,3.87E9],[1.26E8,4.25E9],[3.45E7,5.84E8], [7.47E8,6.46E9],[2.30E8,3.53E9],[1.25E6,5.13E8]])\nb = matrix([[2.62E7],[1.46E8],[1.02E7],[1.03E8],[6.82E6],[9.00E7],[8.78E6],[7.22E7],[5.90E7],[7.54E6],[8.03E7],[3.40E8],[6.18E6]])\nx_lstsq = linalg.lstsq(A,b)[0] # computing the numpy solution\nQ,R = linalg.qr(A) #qr decomposition of A\nQb = dot(Q.T,b) # computing Q^T*b (project b onto the range of A)\nx_qr = linalg.solve(R,Qb) # solving R*x = Q^T*b\n# comparing the solutions\nprint('qr solution')\nprint(x_qr)\nprint ('lstsq solution')\nprint(x_lstsq)\n\n#When removing the United States and Canada from the equation\nfrom numpy import *\n# generating the overdetermined system\nA = matrix([[1.82E7,1.75E9], 
[1.11E8,9.44E9], [6.08E6, 6.94E8], [1.74E8,6.30E9],[2.21E7,4.09E8],[1.91E8,5.51E9],[2.55E7,2.96E8],[1.61E8,3.87E9],[1.26E8,4.25E9],[3.45E7,5.84E8], [7.47E8,6.46E9],[1.25E6,5.13E8]])\nb = matrix([[2.62E7],[1.46E8],[1.02E7],[1.03E8],[6.82E6],[9.00E7],[8.78E6],[7.22E7],[5.90E7],[7.54E6],[8.03E7],[6.18E6]])\nx_lstsq = linalg.lstsq(A,b)[0] # computing the numpy solution\nQ,R = linalg.qr(A) #qr decomposition of A\nQb = dot(Q.T,b) # computing Q^T*b (project b onto the range of A)\nx_qr = linalg.solve(R,Qb) # solving R*x = Q^T*b\n# comparing the solutions\nprint('qr solution')\nprint(x_qr)\nprint ('lstsq solution')\nprint(x_lstsq)", "We can see that depending on the outliers we decide to remove from the over-determined system, the solutions obtained are extremely different.\n3. Substantial uncertainties in the independent variable: fitting errors-in-variables models\nThe most important application is in data fitting. The best fit in the least-squares sense minimizes the sum of squared residuals, a residual being the difference between an observed value and the fitted value provided by a model. When the problem has substantial uncertainties in the independent variable (the x variable), then simple regression and least squares methods have problems; in such cases, the methodology required for fitting errors-in-variables models may be considered instead of that for least squares.\nIn statistics, errors-in-variables models or measurement error models are regression models that account for measurement errors in the independent variables. In contrast, standard regression models assume that those regressors have been measured exactly, or observed without error; as such, those models account only for errors in the dependent variables, or responses.\nIn the case when some regressors have been measured with errors, estimation based on the standard assumption leads to inconsistent estimates, meaning that the parameter estimates do not tend to the true values even in very large samples. For simple linear regression the effect is an underestimate of the coefficient, known as the attenuation bias.\nConsider a simple linear regression model of the form\n$$y_t = \\alpha + \\beta x_t^* + \\varepsilon_t, \\quad t = 1, \\ldots, T,$$\nwhere $x_t^*$ denotes the true but unobserved regressor. Instead we observe this value with an error:\n$$x_t = x_t^* + \\eta_t,$$\nwhere the measurement error $\\eta_t$ is assumed to be independent from the true value $x_t^*$.\nIf the $y_t$'s are simply regressed on the $x_t$'s (see simple linear regression), then the estimator for the slope coefficient is\n$$\\hat{\\beta} = \\frac{\\tfrac{1}{T}\\sum_{t=1}^{T}(x_t - \\bar{x})(y_t - \\bar{y})}{\\tfrac{1}{T}\\sum_{t=1}^{T}(x_t - \\bar{x})^2},$$\nwhich converges as the sample size $T$ increases without bound:
$$\\hat{\\beta} \\xrightarrow{p} \\frac{\\operatorname{Cov}[x_t, y_t]}{\\operatorname{Var}[x_t]} = \\frac{\\beta \\sigma_{x^*}^2}{\\sigma_{x^*}^2 + \\sigma_\\eta^2} = \\frac{\\beta}{1 + \\sigma_\\eta^2 / \\sigma_{x^*}^2}.$$\nVariances are non-negative, so that in the limit the estimate is smaller in magnitude than the true value of $\\beta$, an effect which statisticians call attenuation or regression dilution.[5] Thus the ‘naïve’ least squares estimator is inconsistent in this setting. However, the estimator is a consistent estimator of the parameter required for a best linear predictor of $y$ given $x$: in some applications this may be what is required, rather than an estimate of the ‘true’ regression coefficient, although that would assume that the variance of the errors in observing $x^*$ remains fixed. This follows directly from the result quoted immediately above, and the fact that the regression coefficient relating the $y_t$'s to the actually observed $x_t$'s, in a simple linear regression, is given by\n$$\\beta_x = \\frac{\\operatorname{Cov}[x_t, y_t]}{\\operatorname{Var}[x_t]}.$$\nIt is this coefficient, rather than $\\beta$, that would be required for constructing a predictor of $y$ based on an observed $x$ which is subject to noise.\nIt can be argued that almost all existing data sets contain errors of different nature and magnitude, so that attenuation bias is extremely frequent (although in multivariate regression the direction of bias is ambiguous). Jerry Hausman sees this as an iron law of econometrics: \"The magnitude of the estimate is usually smaller than expected.\"", "import numpy\nfrom warnings import warn\nfrom scipy.odr import __odrpack" ]
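The notebook breaks off right after importing from scipy.odr. As a hedged sketch of where that import could lead (my illustration, not the author's code), orthogonal distance regression fits a straight line while allowing for errors in both the sugar consumption and the soft-drink volumes; the sx and sy uncertainties below are placeholder assumptions of a 10% relative error, and the data are the 12-country arrays (Canada excluded) used earlier in the notebook.

```python
import numpy as np
from scipy import odr

# Same 12-country arrays as in the regression section above.
sugar = np.array([2.62E7, 1.46E8, 1.02E7, 1.03E8, 6.82E6, 9.00E7,
                  8.78E6, 7.22E7, 5.90E7, 7.54E6, 8.03E7, 6.18E6])
softdrinks = np.array([1.75E9, 9.44E9, 6.94E8, 6.30E9, 4.09E8, 5.51E9,
                       2.96E8, 3.87E9, 4.25E9, 5.84E8, 6.46E9, 5.13E8])

def linear(beta, x):
    # Straight line: beta[0] is the slope, beta[1] the intercept.
    return beta[0] * x + beta[1]

model = odr.Model(linear)
# Placeholder assumption: 10% relative uncertainty on both measurements.
data = odr.RealData(sugar, softdrinks, sx=0.1 * sugar, sy=0.1 * softdrinks)
result = odr.ODR(data, model, beta0=[60.0, 0.0]).run()
print(result.beta)  # fitted [slope, intercept]
```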
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
vanheck/blog-notes
Analyzes/Volatile movements/01 Volatile movements in python and pandas 1.ipynb
mit
[ "Analysis of volatile moves in Python and Pandas 1\nIn the following chart, volatile moves are highlighted as an example:\n\nA volatile move is one way to watch and analyze market moves that are driven by emotions. When the price of a given commodity changes quickly, it stirs strong emotions in traders. If, for example, real-estate prices rise by 100% within a year and tend to keep rising, everyone notices, and those who would otherwise have postponed buying a property rush to buy it at the current price, because if they waited, the price could end up really high and they could no longer afford the property. And the faster the price rises, the more people try to hurry their purchase. The question is: \n\n\"Is it better to buy along with them, or is it better to sell them the thing in question\"?\n\nHow can a volatile move be identified with Python and Pandas?\nI will start from the freely available EOD (End Of Day) data for the SPY market (an ETF tracking the main US stock index S&P 500), which I can download from yahoo finance using pandas-datareader. An online chart is available at https://finance.yahoo.com/chart/SPY.", "import pandas as pd\nimport pandas_datareader.data as web\nimport datetime\n\nstart = datetime.datetime(2015, 1, 1)\nend = datetime.datetime(2018, 8, 31)\n\nspy_data = web.DataReader('SPY', 'yahoo', start, end)\nspy_data = spy_data.drop(['Volume', 'Adj Close'], axis=1) # I won't need the 'Volume' and 'Adj Close' columns\nspy_data.tail()", "Each row represents the prices for one day: the highest (High), the lowest (Low), the opening (Open - start of the day) and the closing (Close - end of the day) price. A volatile move on a given day is then visible in the chart at first glance as a distinctly large candle (see chart 1). To be able to mark them automatically in my analytical software (python - pandas), I define volatile candles with a rule such as:\n\nThe size of the price change must be larger than in the 4 previous candles\n\nTo determine this, I have to compute the size of the distance $Close-Open$ for each candle. Pandas makes this work very easy:", "spy_data['C-O'] = spy_data['Close'] - spy_data['Open']\nspy_data.tail()", "Now I know the exact price change for each day. To compare the sizes without caring whether the price fell or rose on a given day, I apply the absolute value.", "spy_data['Abs(C-O)'] = spy_data['C-O'].abs()\nspy_data.tail()", "Identifying a volatile bar\nI identify volatile candles using the rolling functionality and the apply function. The rolling functionality makes it possible to split a pandas DataFrame into smaller \"windows\", which are passed one after another to the apply function as a parameter. This means that in the following code the is_bigger function is evaluated for every row of the data stored in spy_data. The rows parameter successively receives a slice of the data containing 4 rows (the currently evaluated row + the 3 previous rows). The result of the is_bigger function is a value indicating whether the currently evaluated row is more volatile than the 4 previous ones.", "def is_bigger(rows):\n    result = rows[-1] > rows[:-1].max() # rows[-1] - the last value is greater than the maximum of the previous ones\n    return result\n\nspy_data['VolBar'] = spy_data['Abs(C-O)'].rolling(4).apply(is_bigger,raw=True)\nspy_data.tail(10)", "Which candles are more volatile than the 4 previous ones, I display with a simple selection where the column VolBar == 1.", "spy_data[spy_data['VolBar'] == 1].tail()", "Next time I will focus on how to easily analyze these volatile bars." ]
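As a small follow-up sketch (my addition, not part of the blog post), the flagged bars can also be highlighted on a price chart, which makes it easy to check visually that the rows with VolBar == 1 really correspond to the large candles. It only reuses the `spy_data` DataFrame and the `VolBar` column created above.

```python
import matplotlib.pyplot as plt

# Mark the days flagged as volatile bars (VolBar == 1) on the close-price curve.
volatile_days = spy_data[spy_data['VolBar'] == 1]

plt.figure(figsize=(12, 5))
plt.plot(spy_data.index, spy_data['Close'], linewidth=1, label='SPY Close')
plt.scatter(volatile_days.index, volatile_days['Close'],
            color='red', zorder=3, label='Volatile bar')
plt.legend()
plt.show()
```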
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/text_classification/solutions/rnn.ipynb
apache-2.0
[ "Recurrent Neural Networks (RNN) with Keras\nLearning Objectives\n\nAdd built-in RNN layers.\nBuild bidirectional RNNs.\nUsing CuDNN kernels when available.\nBuild a RNN model with nested input/output.\n\nIntroduction\nRecurrent neural networks (RNN) are a class of neural networks that is powerful for\nmodeling sequence data such as time series or natural language.\nSchematically, a RNN layer uses a for loop to iterate over the timesteps of a\nsequence, while maintaining an internal state that encodes information about the\ntimesteps it has seen so far.\nThe Keras RNN API is designed with a focus on:\n\n\nEase of use: the built-in keras.layers.RNN, keras.layers.LSTM,\nkeras.layers.GRU layers enable you to quickly build recurrent models without\nhaving to make difficult configuration choices.\n\n\nEase of customization: You can also define your own RNN cell layer (the inner\npart of the for loop) with custom behavior, and use it with the generic\nkeras.layers.RNN layer (the for loop itself). This allows you to quickly\nprototype different research ideas in a flexible way with minimal code.\n\n\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.\nSetup", "import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers", "Built-in RNN layers: a simple example\nThere are three built-in RNN layers in Keras:\n\n\nkeras.layers.SimpleRNN, a fully-connected RNN where the output from previous\ntimestep is to be fed to next timestep.\n\n\nkeras.layers.GRU, first proposed in\nCho et al., 2014.\n\n\nkeras.layers.LSTM, first proposed in\nHochreiter & Schmidhuber, 1997.\n\n\nIn early 2015, Keras had the first reusable open-source Python implementations of LSTM\nand GRU.\nHere is a simple example of a Sequential model that processes sequences of integers,\nembeds each integer into a 64-dimensional vector, then processes the sequence of\nvectors using a LSTM layer.", "model = keras.Sequential()\n# Add an Embedding layer expecting input vocab of size 1000, and\n# output embedding dimension of size 64.\nmodel.add(layers.Embedding(input_dim=1000, output_dim=64))\n\n# Add a LSTM layer with 128 internal units.\n# TODO\nmodel.add(layers.LSTM(128))\n\n# Add a Dense layer with 10 units.\n# TODO\nmodel.add(layers.Dense(10))\n\nmodel.summary()", "Built-in RNNs support a number of useful features:\n\nRecurrent dropout, via the dropout and recurrent_dropout arguments\nAbility to process an input sequence in reverse, via the go_backwards argument\nLoop unrolling (which can lead to a large speedup when processing short sequences on\nCPU), via the unroll argument\n...and more.\n\nFor more information, see the\nRNN API documentation.\nOutputs and states\nBy default, the output of a RNN layer contains a single vector per sample. This vector\nis the RNN cell output corresponding to the last timestep, containing information\nabout the entire input sequence. The shape of this output is (batch_size, units)\nwhere units corresponds to the units argument passed to the layer's constructor.\nA RNN layer can also return the entire sequence of outputs for each sample (one vector\nper timestep per sample), if you set return_sequences=True. 
The shape of this output\nis (batch_size, timesteps, units).", "model = keras.Sequential()\nmodel.add(layers.Embedding(input_dim=1000, output_dim=64))\n\n# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)\nmodel.add(layers.GRU(256, return_sequences=True))\n\n# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)\nmodel.add(layers.SimpleRNN(128))\n\nmodel.add(layers.Dense(10))\n\nmodel.summary()", "In addition, a RNN layer can return its final internal state(s). The returned states\ncan be used to resume the RNN execution later, or\nto initialize another RNN.\nThis setting is commonly used in the\nencoder-decoder sequence-to-sequence model, where the encoder final state is used as\nthe initial state of the decoder.\nTo configure a RNN layer to return its internal state, set the return_state parameter\nto True when creating the layer. Note that LSTM has 2 state tensors, but GRU\nonly has one.\nTo configure the initial state of the layer, just call the layer with additional\nkeyword argument initial_state.\nNote that the shape of the state needs to match the unit size of the layer, like in the\nexample below.", "encoder_vocab = 1000\ndecoder_vocab = 2000\n\nencoder_input = layers.Input(shape=(None,))\nencoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(\n encoder_input\n)\n\n# Return states in addition to output\noutput, state_h, state_c = layers.LSTM(64, return_state=True, name=\"encoder\")(\n encoder_embedded\n)\nencoder_state = [state_h, state_c]\n\ndecoder_input = layers.Input(shape=(None,))\ndecoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(\n decoder_input\n)\n\n# Pass the 2 states to a new LSTM layer, as initial state\ndecoder_output = layers.LSTM(64, name=\"decoder\")(\n decoder_embedded, initial_state=encoder_state\n)\noutput = layers.Dense(10)(decoder_output)\n\nmodel = keras.Model([encoder_input, decoder_input], output)\nmodel.summary()", "RNN layers and RNN cells\nIn addition to the built-in RNN layers, the RNN API also provides cell-level APIs.\nUnlike RNN layers, which processes whole batches of input sequences, the RNN cell only\nprocesses a single timestep.\nThe cell is the inside of the for loop of a RNN layer. Wrapping a cell inside a\nkeras.layers.RNN layer gives you a layer capable of processing batches of\nsequences, e.g. RNN(LSTMCell(10)).\nMathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact,\nthe implementation of this layer in TF v1.x was just creating the corresponding RNN\ncell and wrapping it in a RNN layer. However using the built-in GRU and LSTM\nlayers enable the use of CuDNN and you may see better performance.\nThere are three built-in RNN cells, each of them corresponding to the matching RNN\nlayer.\n\n\nkeras.layers.SimpleRNNCell corresponds to the SimpleRNN layer.\n\n\nkeras.layers.GRUCell corresponds to the GRU layer.\n\n\nkeras.layers.LSTMCell corresponds to the LSTM layer.\n\n\nThe cell abstraction, together with the generic keras.layers.RNN class, make it\nvery easy to implement custom RNN architectures for your research.\nCross-batch statefulness\nWhen processing very long sequences (possibly infinite), you may want to use the\npattern of cross-batch statefulness.\nNormally, the internal state of a RNN layer is reset every time it sees a new batch\n(i.e. every sample seen by the layer is assumed to be independent of the past). 
The\nlayer will only maintain a state while processing a given sample.\nIf you have very long sequences though, it is useful to break them into shorter\nsequences, and to feed these shorter sequences sequentially into a RNN layer without\nresetting the layer's state. That way, the layer can retain information about the\nentirety of the sequence, even though it's only seeing one sub-sequence at a time.\nYou can do this by setting stateful=True in the constructor.\nIf you have a sequence s = [t0, t1, ... t1546, t1547], you would split it into e.g.\ns1 = [t0, t1, ... t100]\ns2 = [t101, ... t201]\n...\ns16 = [t1501, ... t1547]\nThen you would process it via:\npython\nlstm_layer = layers.LSTM(64, stateful=True)\nfor s in sub_sequences:\n output = lstm_layer(s)\nWhen you want to clear the state, you can use layer.reset_states().\n\nNote: In this setup, sample i in a given batch is assumed to be the continuation of\nsample i in the previous batch. This means that all batches should contain the same\nnumber of samples (batch size). E.g. if a batch contains [sequence_A_from_t0_to_t100,\n sequence_B_from_t0_to_t100], the next batch should contain\n[sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200].\n\nHere is a complete example:", "paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)\nparagraph2 = np.random.random((20, 10, 50)).astype(np.float32)\nparagraph3 = np.random.random((20, 10, 50)).astype(np.float32)\n\nlstm_layer = layers.LSTM(64, stateful=True)\noutput = lstm_layer(paragraph1)\noutput = lstm_layer(paragraph2)\noutput = lstm_layer(paragraph3)\n\n# reset_states() will reset the cached state to the original initial_state.\n# If no initial_state was provided, zero-states will be used by default.\n# TODO\nlstm_layer.reset_states()\n", "RNN State Reuse\n<a id=\"rnn_state_reuse\"></a>\nThe recorded states of the RNN layer are not included in the layer.weights(). If you\nwould like to reuse the state from a RNN layer, you can retrieve the states value by\nlayer.states and use it as the\ninitial state for a new layer via the Keras functional API like new_layer(inputs,\ninitial_state=layer.states), or model subclassing.\nPlease also note that sequential model might not be used in this case since it only\nsupports layers with single input and output, the extra input of initial state makes\nit impossible to use here.", "paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)\nparagraph2 = np.random.random((20, 10, 50)).astype(np.float32)\nparagraph3 = np.random.random((20, 10, 50)).astype(np.float32)\n\nlstm_layer = layers.LSTM(64, stateful=True)\noutput = lstm_layer(paragraph1)\noutput = lstm_layer(paragraph2)\n\nexisting_state = lstm_layer.states\n\nnew_lstm_layer = layers.LSTM(64)\nnew_output = new_lstm_layer(paragraph3, initial_state=existing_state)\n", "Bidirectional RNNs\nFor sequences other than time series (e.g. text), it is often the case that a RNN model\ncan perform better if it not only processes sequence from start to end, but also\nbackwards. 
For example, to predict the next word in a sentence, it is often useful to\nhave the context around the word, not just the words that come before it.\nKeras provides an easy API for you to build such bidirectional RNNs: the\nkeras.layers.Bidirectional wrapper.", "model = keras.Sequential()\n\n# Add Bidirectional layers\n# TODO\nmodel.add(\n layers.Bidirectional(layers.LSTM(64, return_sequences=True), input_shape=(5, 10))\n)\nmodel.add(layers.Bidirectional(layers.LSTM(32)))\nmodel.add(layers.Dense(10))\n\nmodel.summary()", "Under the hood, Bidirectional will copy the RNN layer passed in, and flip the\ngo_backwards field of the newly copied layer, so that it will process the inputs in\nreverse order.\nThe output of the Bidirectional RNN will be, by default, the concatenation of the forward layer\noutput and the backward layer output. If you need a different merging behavior, e.g.\nsummation, change the merge_mode parameter in the Bidirectional wrapper\nconstructor. For more details about Bidirectional, please check\nthe API docs.\nPerformance optimization and CuDNN kernels\nIn TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN\nkernels by default when a GPU is available. With this change, the prior\nkeras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your\nmodel without worrying about the hardware it will run on.\nSince the CuDNN kernel is built with certain assumptions, this means the layer will\nnot be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or\nGRU layers. E.g.:\n\nChanging the activation function from tanh to something else.\nChanging the recurrent_activation function from sigmoid to something else.\nUsing recurrent_dropout > 0.\nSetting unroll to True, which forces LSTM/GRU to decompose the inner\ntf.while_loop into an unrolled for loop.\nSetting use_bias to False.\nUsing masking when the input data is not strictly right padded (if the mask\ncorresponds to strictly right padded data, CuDNN can still be used.
This is the most\ncommon case).\n\nFor the detailed list of constraints, please see the documentation for the\nLSTM and\nGRU layers.\nUsing CuDNN kernels when available\nLet's build a simple LSTM model to demonstrate the performance difference.\nWe'll use as input sequences the sequence of rows of MNIST digits (treating each row of\npixels as a timestep), and we'll predict the digit's label.", "batch_size = 64\n# Each MNIST image batch is a tensor of shape (batch_size, 28, 28).\n# Each input sequence will be of size (28, 28) (height is treated like time).\ninput_dim = 28\n\nunits = 64\noutput_size = 10 # labels are from 0 to 9\n\n# Build the RNN model\ndef build_model(allow_cudnn_kernel=True):\n # CuDNN is only available at the layer level, and not at the cell level.\n # This means `LSTM(units)` will use the CuDNN kernel,\n # while RNN(LSTMCell(units)) will run on non-CuDNN kernel.\n if allow_cudnn_kernel:\n # The LSTM layer with default options uses CuDNN.\n lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim))\n else:\n # Wrapping a LSTMCell in a RNN layer will not use CuDNN.\n lstm_layer = keras.layers.RNN(\n keras.layers.LSTMCell(units), input_shape=(None, input_dim)\n )\n model = keras.models.Sequential(\n [\n lstm_layer,\n keras.layers.BatchNormalization(),\n keras.layers.Dense(output_size),\n ]\n )\n return model\n", "Let's load the MNIST dataset:", "mnist = keras.datasets.mnist\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0\nsample, sample_label = x_train[0], y_train[0]", "Let's create a model instance and train it.\nWe choose sparse_categorical_crossentropy as the loss function for the model. The\noutput of the model has shape of [batch_size, 10]. The target for the model is an\ninteger vector, each of the integer is in the range of 0 to 9.", "model = build_model(allow_cudnn_kernel=True)\n\n# Compile the model\n# TODO\nmodel.compile(\n loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n optimizer=\"sgd\",\n metrics=[\"accuracy\"],\n)\n\n\nmodel.fit(\n x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1\n)", "Now, let's compare to a model that does not use the CuDNN kernel:", "noncudnn_model = build_model(allow_cudnn_kernel=False)\nnoncudnn_model.set_weights(model.get_weights())\nnoncudnn_model.compile(\n loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n optimizer=\"sgd\",\n metrics=[\"accuracy\"],\n)\nnoncudnn_model.fit(\n x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1\n)", "When running on a machine with a NVIDIA GPU and CuDNN installed,\nthe model built with CuDNN is much faster to train compared to the\nmodel that uses the regular TensorFlow kernel.\nThe same CuDNN-enabled model can also be used to run inference in a CPU-only\nenvironment. The tf.device annotation below is just forcing the device placement.\nThe model will run on CPU by default if no GPU is available.\nYou simply don't have to worry about the hardware you're running on anymore. 
Isn't that\npretty cool?", "import matplotlib.pyplot as plt\n\nwith tf.device(\"CPU:0\"):\n cpu_model = build_model(allow_cudnn_kernel=True)\n cpu_model.set_weights(model.get_weights())\n result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)\n print(\n \"Predicted result is: %s, target result is: %s\" % (result.numpy(), sample_label)\n )\n plt.imshow(sample, cmap=plt.get_cmap(\"gray\"))", "RNNs with list/dict inputs, or nested inputs\nNested structures allow implementers to include more information within a single\ntimestep. For example, a video frame could have audio and video input at the same\ntime. The data shape in this case could be:\n[batch, timestep, {\"video\": [height, width, channel], \"audio\": [frequency]}]\nIn another example, handwriting data could have both coordinates x and y for the\ncurrent position of the pen, as well as pressure information. So the data\nrepresentation could be:\n[batch, timestep, {\"location\": [x, y], \"pressure\": [force]}]\nThe following code provides an example of how to build a custom RNN cell that accepts\nsuch structured inputs.\nDefine a custom cell that supports nested input/output\nSee Making new Layers & Models via subclassing\nfor details on writing your own layers.", "class NestedCell(keras.layers.Layer):\n def __init__(self, unit_1, unit_2, unit_3, **kwargs):\n self.unit_1 = unit_1\n self.unit_2 = unit_2\n self.unit_3 = unit_3\n self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]\n self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]\n super(NestedCell, self).__init__(**kwargs)\n\n def build(self, input_shapes):\n # expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]\n i1 = input_shapes[0][1]\n i2 = input_shapes[1][1]\n i3 = input_shapes[1][2]\n\n self.kernel_1 = self.add_weight(\n shape=(i1, self.unit_1), initializer=\"uniform\", name=\"kernel_1\"\n )\n self.kernel_2_3 = self.add_weight(\n shape=(i2, i3, self.unit_2, self.unit_3),\n initializer=\"uniform\",\n name=\"kernel_2_3\",\n )\n\n def call(self, inputs, states):\n # inputs should be in [(batch, input_1), (batch, input_2, input_3)]\n # state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]\n input_1, input_2 = tf.nest.flatten(inputs)\n s1, s2 = states\n\n output_1 = tf.matmul(input_1, self.kernel_1)\n output_2_3 = tf.einsum(\"bij,ijkl->bkl\", input_2, self.kernel_2_3)\n state_1 = s1 + output_1\n state_2_3 = s2 + output_2_3\n\n output = (output_1, output_2_3)\n new_states = (state_1, state_2_3)\n\n return output, new_states\n\n def get_config(self):\n return {\"unit_1\": self.unit_1, \"unit_2\": unit_2, \"unit_3\": self.unit_3}\n", "Build a RNN model with nested input/output\nLet's build a Keras model that uses a keras.layers.RNN layer and the custom cell\nwe just defined.", "unit_1 = 10\nunit_2 = 20\nunit_3 = 30\n\ni1 = 32\ni2 = 64\ni3 = 32\nbatch_size = 64\nnum_batches = 10\ntimestep = 50\n\ncell = NestedCell(unit_1, unit_2, unit_3)\nrnn = keras.layers.RNN(cell)\n\ninput_1 = keras.Input((None, i1))\ninput_2 = keras.Input((None, i2, i3))\n\noutputs = rnn((input_1, input_2))\n\nmodel = keras.models.Model([input_1, input_2], outputs)\n\nmodel.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"accuracy\"])", "Train the model with randomly generated data\nSince there isn't a good candidate dataset for this model, we use random Numpy data for\ndemonstration.", "input_1_data = np.random.random((batch_size * num_batches, timestep, i1))\ninput_2_data = 
np.random.random((batch_size * num_batches, timestep, i2, i3))\ntarget_1_data = np.random.random((batch_size * num_batches, unit_1))\ntarget_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3))\ninput_data = [input_1_data, input_2_data]\ntarget_data = [target_1_data, target_2_data]\n\nmodel.fit(input_data, target_data, batch_size=batch_size)", "With the Keras keras.layers.RNN layer, you are only expected to define the math\nlogic for an individual step within the sequence, and the keras.layers.RNN layer\nwill handle the sequence iteration for you. It's an incredibly powerful way to quickly\nprototype new kinds of RNNs (e.g. an LSTM variant).\nFor more details, please visit the API docs." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
datascience-practice/data-quest
python_introduction/beginner/files and loops.ipynb
mit
[ "2: Opening files\nInstructions\nUse the open() function to create a File object. The name of the file is \"crime_rates.csv\" and we want the file to be accessed in read mode (\"r\"). Assign this File object to the variable f.\nAnswer", "! touch test.txt\na = open(\"test.txt\", \"r\")\nprint(a)\nf = open(\"crime_rates.csv\", \"r\")", "3: Reading in files\nInstructions\nRun the read() method on the File object f to return the string representation of crime_rates.csv. Assign the resulting string to the variable data.", "f = open(\"crime_rates.csv\", \"r\")\ndata = f.read()\n\nprint(data)", "4: Splitting\nInstructions\nSplit the string object data on the new-line character \"\\n\" and store the result in a variable named rows. Then use the print() function to display the first 5 elements in rows.\nAnswer", "# We can split a string into a list.\nsample = \"john,plastic,joe\"\nsplit_list = sample.split(\",\")\nprint(split_list)\n\n# Here's another example.\nstring_two = \"How much wood\\ncan a woodchuck chuck\\nif a woodchuck\\ncan chuck wood?\"\nsplit_string_two = string_two.split('\\n')\nprint(split_string_two)\n\n# Code from previous cells\nf = open('crime_rates.csv', 'r')\ndata = f.read()\nrows = data.split('\\n')\nprint(rows[0:5])", "5: Loops\nInstructions\n...\nAnswer\n6: Practice, loops\nInstructions\nThe variable ten_rows contains the first 10 elements in rows. Write a for loop that iterates over each element in ten_rows and uses the print() function to display each element.\nAnswer", "ten_rows = rows[0:10]\nfor row in ten_rows:\n print(row)", "7: List of lists\nInstructions\nFor now, explore and run the code we dissected in this step in the code cell below\nAnswer", "three_rows = [\"Albuquerque,749\", \"Anaheim,371\", \"Anchorage,828\"]\nfinal_list = []\nfor row in three_rows:\n split_list = row.split(',')\n final_list.append(split_list)\nprint(final_list)\nfor elem in final_list:\n print(elem)\nprint(final_list[0])\nprint(final_list[1])\nprint(final_list[2])", "8: Practice, splitting elements in a list\nLet's now convert the full dataset, rows, into a list of lists using the same logic from the step before.\nInstructions\nWrite a for loop that splits each element in rows on the comma delimiter and appends the resulting list to a new list named final_data. Then, display the first 5 elements in final_data using list slicing and the print() function.\nAnswer", "f = open('crime_rates.csv', 'r')\ndata = f.read()\nrows = data.split('\\n')\nfinal_data = [row.split(\",\")\n for row in rows]\nprint(final_data[0:5])\n", "9: Accessing elements in a list of lists, the manual way\nInstructions\nfive_elements contains the first 5 elements from final_data. Create a list of strings named cities_list that contains the city names from each list in five_elements.\nAnswer", "five_elements = final_data[:5]\nprint(five_elements)\ncities_list = [city for city,_ in five_elements]", "10: Looping through a list of lists\nInstructions\nCreate a list of strings named cities_list that contains just the city names from final_data. 
Recall that the city name is located at index 0 for each list in final_data.\nAnswer", "crime_rates = []\n\nfor row in five_elements:\n # row is a list variable, not a string.\n crime_rate = row[1]\n # crime_rate is a string, the crime rate value (not the city name).\n crime_rates.append(crime_rate)\n \ncities_list = [row[0] for row in final_data]", "11: Practice\nInstructions\nCreate a list of integers named int_crime_rates that contains just the crime rates as integers from the list rows.\nFirst create an empty list and assign it to int_crime_rates. Then, write a for loop that iterates over rows and executes the following:\n\nuses the split() method to convert each string in rows into a list on the comma delimiter\nconverts the value at index 1 from that list to an integer using the int() function\nthen uses the append() method to add each integer to int_crime_rates", "f = open('crime_rates.csv', 'r')\ndata = f.read()\nrows = data.split('\\n')\nprint(rows[0:5])\n\nint_crime_rates = []\n\nfor row in rows:\n fields = row.split(\",\")\n # Skip blank or malformed lines that have no crime rate value.\n if len(fields) < 2:\n continue\n int_crime_rates.append(int(fields[1]))\n\nprint(int_crime_rates)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hunterowens/data-pipelines
chicago/chicago_permits.ipynb
mit
[ "Introduction\nThis is an example of using Luigi (https://luigi.readthedocs.io/en/stable/index.html) to create a data pipeline. \nThis is intended to be an example of using Luigi to create a data pipeline that grabs data off the web (in this case building permits and ward boundaries for the city of Chicago) and does some data cleaning and visualization.\nCheers!\nDave\n@imagingnerd\nGithub: dmwelch\nSources:\nExcellent blog post by Sensitive Cities: http://sensitivecities.com/so-youd-like-to-make-a-map-using-python-EN.html\nHunter Owens presentation at PyData Chicago 2016: https://github.com/hunterowens/data-pipelines", "%matplotlib inline\n\nimport datetime\nfrom datetime import date\nimport pickle\nimport StringIO\nimport zipfile\n\nimport luigi\nimport requests\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nfrom matplotlib.colors import Normalize, rgb2hex\nfrom matplotlib.collections import PatchCollection\nfrom mpl_toolkits.basemap import Basemap # pip install https://github.com/matplotlib/basemap/archive/v1.0.7rel.tar.gz\nfrom shapely.geometry import Point, Polygon, MultiPoint, MultiPolygon\nfrom shapely.prepared import prep\n# from pysal.esda.mapclassify import Natural_Breaks as nb\nfrom descartes import PolygonPatch\nimport fiona\nfrom itertools import chain", "Download permit data from the city of Chicago\nWe save the output to data/permits.csv", "class DownloadData(luigi.ExternalTask):\n \"\"\"\n Downloads permit data from city of Chicago\n \"\"\"\n def run(self):\n url = 'https://data.cityofchicago.org/api/views/ydr8-5enu/rows.csv?accessType=DOWNLOAD'\n response = requests.get(url)\n with self.output().open('w') as out_file:\n out_file.write(response.text)\n\n def output(self):\n return luigi.LocalTarget(\"data/permits.csv\")", "Clean the data\nThe ESTIMATED_COST column will create Inf values for \"$\" so we must clean the strings.\nSave the cleaned data to data/permits_clean.csv", "def to_float(s):\n default = np.nan\n try:\n r = float(s.replace('$', ''))\n except:\n return default\n return r\n \ndef to_int(s):\n default = None\n if s == '': return default\n return int(s)\n\ndef to_date(s):\n default = '01/01/1900'\n if s == '': s = default\n return datetime.datetime.strptime(s, \"%m/%d/%Y\")\n\n# *** Additional available headers at the end ***\nconverter = {'ID': to_int,\n 'PERMIT#': str,\n 'PERMIT_TYPE': str,\n 'ISSUE_DATE': to_date,\n 'ESTIMATED_COST': to_float,\n 'AMOUNT_WAIVED': to_float,\n 'AMOUNT_PAID': to_float,\n 'TOTAL_FEE': to_float,\n 'STREET_NUMBER': to_int,\n 'STREET DIRECTION': str,\n 'STREET_NAME': str,\n 'SUFFIX': str,\n 'WORK_DESCRIPTION': str,\n 'LATITUDE': to_float,\n 'LONGITUDE': to_float,\n 'LOCATION': str,\n }\n\nclass cleanCSV(luigi.Task):\n \"\"\"This is our cleaning step\"\"\"\n\n def requires(self):\n return DownloadData()\n\n def run(self):\n df = pd.read_csv(self.input().open('r'), \n usecols=converter.keys(), \n converters=converter, \n skipinitialspace=True)\n df.to_csv(self.output().fn)\n\n def output(self):\n return luigi.LocalTarget(\"data/permits_clean.csv\")", "Download the ward shapefiles\nThe response is in ZIP format, so we need to extract and return the *.shp file as the output", "import shutil\n\nclass DownloadWards(luigi.ExternalTask):\n \"\"\"\n Downloads ward shapefiles from city of Chicago\n \"\"\"\n def run(self):\n url = \"https://data.cityofchicago.org/api/geospatial/sp34-6z76?method=export&format=Shapefile\"\n response = requests.get(url)\n z = 
zipfile.ZipFile(StringIO.StringIO(response.content))\n files = z.namelist()\n z.extractall('data/')\n for fname in files:\n shutil.move('data/' + fname, 'data/geo_export' + fname[-4:])\n\n def output(self):\n return luigi.LocalTarget(\"data/geo_export.shp\")", "Convienence functions", "def plot(m, ldn_points, df_map, bds, sizes, title, label, output):\n plt.clf()\n fig = plt.figure()\n ax = fig.add_subplot(111, axisbg='w', frame_on=False)\n\n # we don't need to pass points to m() because we calculated using map_points and shapefile polygons\n dev = m.scatter(\n [geom.x for geom in ldn_points],\n [geom.y for geom in ldn_points],\n s=sizes, marker='.', lw=.25,\n facecolor='#33ccff', edgecolor='none',\n alpha=0.9, antialiased=True,\n label=label, zorder=3)\n # plot boroughs by adding the PatchCollection to the axes instance\n ax.add_collection(PatchCollection(df_map['patches'].values, match_original=True))\n # copyright and source data info\n smallprint = ax.text(\n 1.03, 0,\n 'Total points: %s' % len(ldn_points),\n ha='right', va='bottom',\n size=4,\n color='#555555',\n transform=ax.transAxes)\n\n # Draw a map scale\n m.drawmapscale(\n bds[0] + 0.08, bds[1] + 0.015,\n bds[0], bds[1],\n 10.,\n barstyle='fancy', labelstyle='simple',\n fillcolor1='w', fillcolor2='#555555',\n fontcolor='#555555',\n zorder=5)\n plt.title(title)\n plt.tight_layout()\n # this will set the image width to 722px at 100dpi\n fig.set_size_inches(7.22, 5.25)\n plt.savefig(output, dpi=500, alpha=True)\n # plt.show()\n\ndef make_basemap(infile):\n with fiona.open(infile) as shp:\n bds = shp.bounds\n extra = 0.05\n ll = (bds[0], bds[1])\n ur = (bds[2], bds[3])\n w, h = bds[2] - bds[0], bds[3] - bds[1]\n # Check w & h calculations\n assert bds[0] + w == bds[2] and bds[1] + h == bds[3], \"Width or height of image not correct!\"\n center = (bds[0] + (w / 2.0), bds[1] + (h / 2.0))\n m = Basemap(projection='tmerc',\n lon_0=center[0],\n lat_0=center[1],\n ellps = 'WGS84',\n width=w * 100000 + 10000,\n height=h * 100000 + 10000,\n lat_ts=0,\n resolution='i',\n suppress_ticks=True\n )\n m.readshapefile(infile[:-4], \n 'chicago', \n color='blue', \n zorder=3)\n # m.fillcontinents()\n return m, bds\n\ndef data_map(m):\n df_map = pd.DataFrame({'poly': [Polygon(xy) for xy in m.chicago],\n 'ward_name': [ward['ward'] for ward in m.chicago_info]})\n df_map['area_m'] = df_map['poly'].map(lambda x: x.area)\n df_map['area_km'] = df_map['area_m'] / 100000\n # draw ward patches from polygons\n df_map['patches'] = df_map['poly'].map(lambda x: PolygonPatch(x, fc='#555555',\n ec='#787878', lw=.25, alpha=.9,\n zorder=4))\n return df_map\n\ndef point_objs(m, df, df_map):\n # Create Point objects in map coordinates from dataframe lon and lat values\n map_points = pd.Series(\n [Point(m(mapped_x, mapped_y)) for mapped_x, mapped_y in zip(df['LONGITUDE'], df['LATITUDE'])])\n permit_points = MultiPoint(list(map_points.values))\n wards_polygon = prep(MultiPolygon(list(df_map['poly'].values)))\n return filter(wards_polygon.contains, permit_points)", "Map Permit Distribution", "class MakePermitMap(luigi.Task):\n def requires(self):\n return dict(wards=DownloadWards(), \n data=cleanCSV())\n \n def run(self):\n m, bds = make_basemap(self.input()['wards'].fn)\n df = pd.read_csv(self.input()['data'].open('r'))\n df_map = data_map(m)\n ldn_points = point_objs(m, df, df_map)\n plot(m, ldn_points, df_map, bds, sizes=5, \n title=\"Permit Locations, Chicago\", \n label=\"Permit Locations\", \n output='data/chicago_permits.png')\n\n \n def output(self):\n return 
luigi.LocalTarget('data/chicago_permits.png')", "Map Estimated Costs", "class MakeEstimatedCostMap(luigi.Task):\n \"\"\" Plot the permits and scale the size by the estimated cost (relative to range)\"\"\"\n def requires(self):\n return dict(wards=DownloadWards(), \n data=cleanCSV())\n\n def run(self):\n m, bds = make_basemap(self.input()['wards'].fn)\n df = pd.read_csv(self.input()['data'].open('r'))\n # Get the estimated costs, normalize to [0, 1], and scale up (the scale factor is optional)\n costs = df['ESTIMATED_COST']\n costs.fillna(costs.min() * 2, inplace=True)\n assert not np.any(np.isinf(costs)), \"Inf in column!\"\n # plt.hist(costs, 3000, log=True);\n sizes = ((costs - costs.min()) / (costs.max() - costs.min())) * 100 #scale factor\n \n df_map = data_map(m)\n ldn_points = point_objs(m, df, df_map)\n plot(m, ldn_points, df_map, bds, sizes=sizes, \n title=\"Relative Estimated Permit Cost, Chicago\", \n label=\"Relative Estimated Permit Cost\", \n output='data/chicago_rel_est_cost.png')\n \n def output(self):\n # Must point at the same file that run() writes, otherwise Luigi never sees the task as complete.\n return luigi.LocalTarget('data/chicago_rel_est_cost.png')", "Run All Tasks", "class MakeMaps(luigi.WrapperTask):\n \"\"\" RUN ALL THE PLOTS!!! \"\"\"\n def requires(self):\n yield MakePermitMap()\n yield MakeEstimatedCostMap()\n \n def run(self):\n pass", "Note: to run from the command line:\n\nExport to '.py' file\n\nRun: \npython -m luigi chicago_permits MakeMaps --local-scheduler", "# if __name__ == '__main__':\nluigi.run(['MakeMaps', '--local-scheduler'])", "Miscellaneous notes\nThe estimated cost spread is predictably non-linear, so further direction could be to filter out the \"$0\" as unestimated (which they likely are)! \nSuggested work\n\nMap estimated costs overlaid with actual cost\nMap permits by # of contractors involved and cost\nPlot estimated cost accuracy based on contractor count\nMap permits by contractors\nChoropleth maps by permit count, cost, etc.\nInclude population density (Census data) or distance from major routes\nEtc...", "# For reference, cost spread is exponential\n# plt.hist(costs, 3000, log=True);\n\n# Additional headers available...\n\"\"\"\n# PIN1,\n# PIN2,\n# PIN3,\n# PIN4,\n# PIN5,\n# PIN6,\n# PIN7,\n# PIN8,\n# PIN9,\n# 
PIN10,\n'CONTRACTOR_1_TYPE,\n'CONTRACTOR_1_NAME,\n'CONTRACTOR_1_ADDRESS,\n'CONTRACTOR_1_CITY,\n'CONTRACTOR_1_STATE,\n'CONTRACTOR_1_ZIPCODE,\n'CONTRACTOR_1_PHONE,\n'CONTRACTOR_2_TYPE,\n'CONTRACTOR_2_NAME,\n'CONTRACTOR_2_ADDRESS,\n'CONTRACTOR_2_CITY,\n'CONTRACTOR_2_STATE,\n'CONTRACTOR_2_ZIPCODE,\n'CONTRACTOR_2_PHONE,\n'CONTRACTOR_3_TYPE,\n'CONTRACTOR_3_NAME,\n'CONTRACTOR_3_ADDRESS,\n'CONTRACTOR_3_CITY,\n'CONTRACTOR_3_STATE,\n'CONTRACTOR_3_ZIPCODE,\n'CONTRACTOR_3_PHONE,\n'CONTRACTOR_4_TYPE,\n'CONTRACTOR_4_NAME,\n'CONTRACTOR_4_ADDRESS,\n'CONTRACTOR_4_CITY,\n'CONTRACTOR_4_STATE,\n'CONTRACTOR_4_ZIPCODE,\n'CONTRACTOR_4_PHONE,\n'CONTRACTOR_5_TYPE,\n'CONTRACTOR_5_NAME,\n'CONTRACTOR_5_ADDRESS,\n'CONTRACTOR_5_CITY,\n'CONTRACTOR_5_STATE,\n'CONTRACTOR_5_ZIPCODE,\n'CONTRACTOR_5_PHONE,\n'CONTRACTOR_6_TYPE,\n'CONTRACTOR_6_NAME,\n'CONTRACTOR_6_ADDRESS,\n'CONTRACTOR_6_CITY,\n'CONTRACTOR_6_STATE,\n'CONTRACTOR_6_ZIPCODE,\n'CONTRACTOR_6_PHONE,\n'CONTRACTOR_7_TYPE,\n'CONTRACTOR_7_NAME,\n'CONTRACTOR_7_ADDRESS,\n'CONTRACTOR_7_CITY,\n'CONTRACTOR_7_STATE,\n'CONTRACTOR_7_ZIPCODE,\n'CONTRACTOR_7_PHONE,\n'CONTRACTOR_8_TYPE,\n'CONTRACTOR_8_NAME,\n'CONTRACTOR_8_ADDRESS,\n'CONTRACTOR_8_CITY,\n'CONTRACTOR_8_STATE,\n'CONTRACTOR_8_ZIPCODE,\n'CONTRACTOR_8_PHONE,\n'CONTRACTOR_9_TYPE,\n'CONTRACTOR_9_NAME,\n'CONTRACTOR_9_ADDRESS,\n'CONTRACTOR_9_CITY,\n'CONTRACTOR_9_STATE,\n'CONTRACTOR_9_ZIPCODE,\n'CONTRACTOR_9_PHONE,\n'CONTRACTOR_10_TYPE,\n'CONTRACTOR_10_NAME,\n'CONTRACTOR_10_ADDRESS,\n'CONTRACTOR_10_CITY,\n'CONTRACTOR_10_STATE,\n'CONTRACTOR_10_ZIPCODE,\n'CONTRACTOR_10_PHONE,\n'CONTRACTOR_11_TYPE,\n'CONTRACTOR_11_NAME,\n'CONTRACTOR_11_ADDRESS,\n'CONTRACTOR_11_CITY,\n'CONTRACTOR_11_STATE,\n'CONTRACTOR_11_ZIPCODE,\n'CONTRACTOR_11_PHONE,\n'CONTRACTOR_12_TYPE,\n'CONTRACTOR_12_NAME,\n'CONTRACTOR_12_ADDRESS,\n'CONTRACTOR_12_CITY,\n'CONTRACTOR_12_STATE,\n'CONTRACTOR_12_ZIPCODE,\n'CONTRACTOR_12_PHONE,\n'CONTRACTOR_13_TYPE,\n'CONTRACTOR_13_NAME,\n'CONTRACTOR_13_ADDRESS,\n'CONTRACTOR_13_CITY,\n'CONTRACTOR_13_STATE,\n'CONTRACTOR_13_ZIPCODE,\n'CONTRACTOR_13_PHONE,\n'CONTRACTOR_14_TYPE,\n'CONTRACTOR_14_NAME,\n'CONTRACTOR_14_ADDRESS,\n'CONTRACTOR_14_CITY,\n'CONTRACTOR_14_STATE,\n'CONTRACTOR_14_ZIPCODE,\n'CONTRACTOR_14_PHONE,\n'CONTRACTOR_15_TYPE,\n'CONTRACTOR_15_NAME,\n'CONTRACTOR_15_ADDRESS,\n'CONTRACTOR_15_CITY,\n'CONTRACTOR_15_STATE,\n'CONTRACTOR_15_ZIPCODE,\n'CONTRACTOR_15_PHONE,\n\"\"\"" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/nerc/cmip6/models/hadgem3-gc31-hm/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: NERC\nSource ID: HADGEM3-GC31-HM\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:26\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nerc', 'hadgem3-gc31-hm', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Model\n2. Key Properties --&gt; Variables\n3. Key Properties --&gt; Seawater Properties\n4. Key Properties --&gt; Resolution\n5. Key Properties --&gt; Tuning Applied\n6. Key Properties --&gt; Key Parameter Values\n7. Key Properties --&gt; Assumptions\n8. Key Properties --&gt; Conservation\n9. Grid --&gt; Discretisation --&gt; Horizontal\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Seaice Categories\n12. Grid --&gt; Snow On Seaice\n13. Dynamics\n14. Thermodynamics --&gt; Energy\n15. Thermodynamics --&gt; Mass\n16. Thermodynamics --&gt; Salt\n17. Thermodynamics --&gt; Salt --&gt; Mass Transport\n18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\n19. Thermodynamics --&gt; Ice Thickness Distribution\n20. Thermodynamics --&gt; Ice Floe Size Distribution\n21. Thermodynamics --&gt; Melt Ponds\n22. Thermodynamics --&gt; Snow Processes\n23. Radiative Processes \n1. Key Properties --&gt; Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Ocean Freezing Point Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. 
In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Metrics Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nWhat values were specificed for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Missing Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Properties\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes conservation involved flux correction?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --&gt; Discretisation --&gt; Horizontal\nSea ice discretisation in the horizontal\n9.1. 
Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5. Dynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Number Of Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --&gt; Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Other\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --&gt; Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Thermodynamics --&gt; Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. 
Enthalpy Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.4. Basal Heat Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.5. Fixed Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. Heat Content Of Precipitation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. Precipitation Effects On Salinity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. 
Thermodynamics --&gt; Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Ice Lateral Melting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --&gt; Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --&gt; Salt --&gt; Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --&gt; Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Thermodynamics --&gt; Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --&gt; Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.3. Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --&gt; Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. 
Snow Ice Formation Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.6. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Ice Radiation Transmission\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
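A side note on the cells above: they only show the empty DOC.set_id / DOC.set_value templates. The sketch below fills a couple of them in, purely to illustrate the pattern. The real DOC object is created by the ES-DOC set-up cells (not shown here), so a minimal stand-in class is used instead, and the chosen values are simply two of the listed valid choices rather than a description of any actual sea-ice model.

# Minimal stand-in for the ES-DOC `DOC` object, so the fill-in pattern can be
# tried outside the ES-DOC notebook environment. It only mimics the two calls
# used in the cells above: set_id() and set_value().
class DocStandIn:
    def __init__(self):
        self.properties = {}
        self._current_id = None

    def set_id(self, property_id):
        # Remember which property the following set_value() calls refer to.
        self._current_id = property_id
        self.properties.setdefault(property_id, [])

    def set_value(self, value):
        # Cardinality 1.N properties may receive several values, so append.
        self.properties[self._current_id].append(value)


DOC = DocStandIn()

# Single-valued ENUM property (cardinality 1.1); value taken from the
# documented valid choices above, chosen only for illustration.
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
DOC.set_value("Saline ice")

# Multi-valued ENUM property (cardinality 1.N).
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
DOC.set_value("Albedo")
DOC.set_value("Freshwater")

for property_id, values in DOC.properties.items():
    print(property_id, '->', values)

Running it simply prints each property id with the values collected for it, including the multi-valued case.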
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Naereen/notebooks
simus/Simulations_du_jeu_de_151.ipynb
mit
[ "Simulons le jeu de 151 avec Python !\nBut :\n\nSimuler numériquement le jeu de 151, qui est une variante à 3 dés du jeu appelé 5000,\nAfin de trouver une stratégie optimale.\n\nComment ?\n\nOn va simuler des millions de parties (cf. méthodes de Monte-Carlo),\nAfin de comparer différentes stratégies (aléatoires, s'arrêter au premier coup, s'arrêter après 3 coups, etc),\nOn les compare en fonction de leur gain moyen,\nLa meilleur stratégie sera celle qui apporte un gain moyen le plus élevé.\n\nRésultats ?\n→ Lien vers les résultats obtenus.\n\nPlans\n\nfonctions pour simuler un tirage (3 dés),\npuis simuler une une partie (affrontant deux stratégies, face à un total), plein de parties (total aléatoire),\nimplémenter différentes stratégies,\nles comparer, faire des graphiques, et tirer des conclusions statistiquement valides (avec moyennes et écart-types).\n\n\n1. Simuler un tirage et une partie\n1.1. Simuler un tirage\nDépendances :", "import numpy as np\nimport numpy.random as rn\nrn.seed(0) # Pour obtenir les mêmes résultats\nimport matplotlib.pyplot as plt\n\nimport seaborn as sns\nsns.set(context=\"notebook\", style=\"darkgrid\", palette=\"hls\", font=\"sans-serif\", font_scale=1.4)", "Première fonction : pour tirer trois dés, à 6 faces, indépendants.", "def tirage(nb=1):\n \"\"\" Renvoie un numpy array de taille (3,) si nb == 1, sinon (nb, 3).\"\"\"\n if nb == 1:\n return rn.randint(1, 7, 3)\n else:\n return rn.randint(1, 7, (nb, 3))", "Testons là :", "tirage()\n\ntirage(10)", "1.2. Points d'un tirage\nLe jeu de 151 associe les points suivants, multiples de 50, aux tirages de 3 dés :\n\n200 pour un brelan de 2, 300 pour un brelan de 3, .., 600 pour un brelan de 6, 700 pour un brelan de 1,\n100 pour chaque 1, si ce n'est pas un brelan,\n50 pour chaque 5, si ce n'est pas un brelan.", "COMPTE_SUITE = False # Savoir si on implémente aussi la variante avec les suites\n\ndef _points(valeurs, compte_suite=COMPTE_SUITE):\n if valeurs[0] == valeurs[1] == valeurs[2]: # Un brelan !\n if valeurs[0] == 1:\n return 700\n else:\n return 100 * valeurs[0]\n else: # Pas de brelan\n # Code pour compter les suites :\n bonus_suite = compte_suite and set(np.diff(np.sort(valeurs))) == {1}\n return 100 * (np.sum(valeurs == 1) + bonus_suite) + 50 * np.sum(valeurs == 5)\n\n\ndef points(valeurs, compte_suite=COMPTE_SUITE):\n \"\"\" Calcule les points du tirage correspondant à valeurs.\n \n - si valeurs est de taille (3,), renvoie un seul entier,\n - si valeurs est de taille (nb, 3), renvoie un tableau de points.\n \"\"\"\n if len(np.shape(valeurs)) > 1:\n return np.array([_points(valeurs[i,:], compte_suite) for i in range(np.shape(valeurs)[0])])\n else:\n return _points(valeurs, compte_suite)", "1.2.1. 
Un seul tirage\nTestons ces fonctions :", "valeurs = tirage()\nprint(\"La valeur {} donne {:>5} points.\".format(valeurs, points(valeurs)))\n\nfor _ in range(20):\n valeurs = tirage()\n print(\"- La valeur {} donne {:>5} points.\".format(valeurs, points(valeurs)))", "Testons quelques valeurs particulières :\n\nLes brelans :", "for valeur in range(1, 7):\n valeurs = valeur * np.ones(3, dtype=int)\n print(\"- La valeur {} donne {:>5} points.\".format(valeurs, points(valeurs)))", "Les 1 :", "for valeurs in [np.array([2, 3, 6]), np.array([1, 3, 6]), np.array([1, 1, 6])]:\n print(\"- La valeur {} donne {:>5} points.\".format(valeurs, points(valeurs)))", "Les 5 :", "for valeurs in [np.array([2, 3, 6]), np.array([5, 3, 6]), np.array([5, 5, 6])]:\n print(\"- La valeur {} donne {:>5} points.\".format(valeurs, points(valeurs)))", "→ C'est bon, tout marche !\nNote : certaines variants du 151 accordent une valeur supplémentaire aux suites (non ordonnées) : [1, 2, 3] vaut 200, [2, 3, 4] vaut 100, et [3, 4, 5] et [4, 5, 6] vaut 150.\nCe n'est pas difficile à intégrer dans notre fonction points.\n\nTestons quand même les suites :", "for valeurs in [np.array([1, 2, 3]), np.array([2, 3, 4]), np.array([3, 4, 5]), np.array([4, 5, 6])]:\n print(\"- La valeur {} donne {:>5} points.\".format(valeurs, points(valeurs)))", "1.2.2. Plusieurs tirages\nTestons ces fonctions :", "valeurs = tirage(10)\nprint(valeurs)\nprint(points(valeurs))", "1.2.3. Moyenne d'un tirage, et quelques figures\nOn peut faire quelques tests statistiques dès maintenant :\n\nPoints moyens d'un tirage :", "def moyenneTirage(nb=1000):\n return np.mean(points(tirage(nb), False))\n\ndef moyenneTirage_avecSuite(nb=1000):\n return np.mean(points(tirage(nb), True))\n\nfor p in range(2, 7):\n nb = 10 ** p\n print(\"- Pour {:>7} tirages, les tirages valent en moyenne {:>4} points.\".format(nb, moyenneTirage(nb)))\n print(\"- Pour {:>7} tirages, les tirages valent en moyenne {:>4} points si on compte aussi les suites.\".format(nb, moyenneTirage_avecSuite(nb)))", "Ça semble converger vers 85 : en moyenne, un tirage vaut entre 50 et 100 points, plutôt du côté des 100.\nEt si on compte les suites, la valeur moyenne d'un tirage vaut plutôt 96 points (ça augmente comme prévu, mais ça augmente peu).\n\nMoyenne et écart type :", "def moyenneStdTirage(nb=1000):\n pts = points(tirage(nb))\n return np.mean(pts), np.std(pts)\n\nfor p in range(2, 7):\n nb = 10 ** p\n m, s = moyenneStdTirage(nb)\n print(\"- Pour {:>7} tirages, les tirages valent en moyenne {:6.2f} +- {:>6.2f} points.\".format(nb, m, s))", "Quelques courbes :", "def plotPoints(nb=2000):\n pts = np.sort(points(tirage(nb)))\n m = np.mean(pts)\n plt.figure()\n plt.plot(pts, 'ro')\n plt.title(\"Valeurs de {} tirages. 
Moyenne = {:.2f}\".format(nb, m))\n plt.show()\n\nplotPoints()\n\nplotPoints(10**5)\n\nplotPoints(10**6)", "On peut calculer la probabilité d'avoir un tirage valant 0 points :", "def probaPoints(nb=1000, pt=0, compte_suite=COMPTE_SUITE):\n pts = points(tirage(nb), compte_suite)\n return np.sum(pts == pt) / float(nb)\n\nfor p in range(2, 7):\n nb = 10 ** p\n prob = probaPoints(nb, compte_suite=False)\n print(\"- Pour {:>7} tirages, il y a une probabilité {:7.2%} d'avoir 0 point.\".format(nb, prob))\n prob = probaPoints(nb, compte_suite=True)\n print(\"- Pour {:>7} tirages, il y a une probabilité {:7.2%} d'avoir 0 point si on compte les suites.\".format(nb, prob))", "Donc un tirage apporte 85 points en moyenne, mais il y a environ 28% de chance qu'un tirage rate.\nSi on compte les suites, un tirage apporte 97 points en moyenne, mais il y a environ 25% de chance qu'un tirage rate.\n\nOn peut faire le même genre de calcul pour les différentes valeurs de points possibles :", "# valeursPossibles = list(set(points(tirage(10000))))\nvaleursPossibles = [0, 50, 100, 150, 200, 250, 300, 400, 500, 600, 700]\n\nfor p in range(4, 7):\n nb = 10 ** p\n tirages = tirage(nb)\n pts = points(tirages, False)\n pts_s = points(tirages, True)\n print(\"\\n- Pour {:>7} tirages :\".format(nb))\n for pt in valeursPossibles:\n prob = np.sum(pts == pt) / float(nb)\n print(\" - Il y a une probabilité {:7.2%} d'avoir {:3} point{}.\".format(prob, pt, 's' if pt > 0 else ''))\n prob = np.sum(pts_s == pt) / float(nb)\n print(\" - Il y a une probabilité {:7.2%} d'avoir {:3} point{} si on compte les suites.\".format(prob, pt, 's' if pt > 0 else ''))", "On devrait faire des histogrammes, mais j'ai la flemme...\nCes quelques expériences montrent qu'on a :\n- une chance d'environ 2.5% d'avoir plus de 300 points (par un brelan),\n- une chance d'environ 9% d'avoir entre 200 et 300 points,\n- une chance d'environ 11% d'avoir 150 points,\n- une chance d'environ 27% d'avoir 100 points,\n- une chance d'environ 22% d'avoir 50 points,\n- une chance d'environ 28% d'avoir 0 point.\nAutant de chance d'avoir 100 points que 0 ? Et oui !\nLa variante comptant les suites augmente la chance d'avoir 200 points (de 7.5% à 10%), d'avoir 150 points (de 11% à 16%), et diminue la chance d'avoir 0 point, mais ne change pas vraiment le reste du jeu.\n\n1.3. Simuler des parties\n1.3.1. Simuler une partie\nOn va d'abord écrire une fonction qui prend deux joeurs, un total, et simule la partie, puis donne l'indice (0 ou 1) du joueur qui gagne.", "DEBUG = False # Par défaut, on n'affiche rien\n\ndef unJeu(joueur, compte, total, debug=DEBUG):\n accu = 0\n if debug: print(\" - Le joueur {.__name__} commence à jouer, son compte est {} et le total est {} ...\".format(joueur, compte, total))\n t = tirage()\n nbLance = 1\n if points(t) == 0:\n if debug: print(\" - Hoho, ce tirage {} vallait 0 points, le joueur doit arrêter.\".format(t))\n return 0, nbLance\n if debug: print(\" - Le joueur a obtenu {} ...\".format(t))\n while compte + accu <= total and joueur(compte, accu, t, total):\n accu += points(t)\n t = tirage()\n nbLance += 1\n if debug: print(\" - Le joueur a décidé de rejouer, accumulant {} points, et a ré-obtenu {} ...\".format(accu, t))\n if points(t) == 0:\n if debug: print(\" - Hoho, ce tirage {} vallait 0 points, le joueur doit arrêter.\".format(t))\n break\n accu += points(t)\n if compte + accu > total:\n if debug: print(\" - Le joueur a dépassé le total : impossible de marquer ! 
compte = {} + accu = {} > total = {} !\".format(compte, accu, total))\n return 0, nbLance\n else:\n if accu > 0:\n if debug: print(\" - Le joueur peut marquer les {} points accumulés en {} lancés !\".format(accu, nbLance))\n return accu, nbLance\n\n\ndef unePartie(joueurs, total=1000, debug=DEBUG, i0=0):\n assert len(joueurs) == 2, \"Erreur, seulement 2 joueurs sont acceptés !\"\n comptes = [0, 0]\n nbCoups = [0, 0]\n nbLances = [0, 0]\n scores = [[0], [0]]\n if debug: print(\"- Le joueur #{} va commencer ...\".format(i0))\n i = i0\n while max(comptes) != total: # Tant qu'aucun joueur n'a gagné\n nbCoups[i] += 1\n if debug: print(\"- C'est au joueur #{} ({.__name__}) de jouer, son compte est {} et le total est {} ...\".format(i, joueurs[i], comptes[i], total))\n accu, nbLance = unJeu(joueurs[i], comptes[i], total, debug)\n nbLances[i] += nbLance\n if accu > 0:\n comptes[i] += accu\n scores[i].append(comptes[i]) # Historique\n if comptes[i] == total:\n if debug: print(\"- Le joueur #{} ({.__name__}) a gagné en {} coups et {} lancés de dés !\".format(i, joueurs[i], nbCoups[i], nbLances[i]))\n if debug: print(\"- Le joueur #{} ({.__name__}) a perdu, avec un score de {}, après {} coups et {} lancés de dés !\".format(i^1, joueurs[i^1], comptes[i^1], nbCoups[i^1], nbLances[i^1]))\n return i, scores\n i ^= 1 # 0 → 1, 1 → 0 (ou exclusif)\n\n# Note : on pourrait implémenter une partie à plus de 2 joueurs", "1.3.2. Des stratégies\nOn doit définir des stratégies, sous la forme de fonctions joueur(compte, accu, t, total), qui renvoie True si elle doit continuer à jouer, ou False si elle doit marquer.\nD'abord, deux stratégies un peu stupides :", "def unCoup(compte, accu, t, total):\n \"\"\" Stratégie qui marque toujours au premier coup, peu importe le 1er tirage obtenu.\"\"\"\n return False # Marque toujours !\n\ndef jusquauBout(compte, accu, t, total):\n \"\"\" Stratégie qui ne marque que si elle peut gagner exactement .\"\"\"\n if compte + accu + points(t) >= total:\n return False # Marque si elle peut gagner\n else:\n return True # Continue à jouer", "Une autre stratégie, qui marche seulement si elle peut marquer plus de X points (100, 150 etc).\nC'est la version plus \"gourmande\" de unCoup, qui marque si elle a plus de 50 points.", "def auMoinsX(X):\n def joueur(compte, accu, t, total):\n \"\"\" Stratégie qui marque si elle a eu plus de {} points.\"\"\".format(X)\n if accu + points(t) >= X:\n return False # Marque si elle a obtenu plus de X points\n elif compte + accu + points(t) == total:\n return False # Marque si elle peut gagner\n elif total - compte < X:\n # S'il reste peu de points, marque toujours\n # (sinon la stratégie d'accumuler plus de X points ne marche plus)\n return False\n else:\n return True # Continue de jouer, essaie d'obtenir X points\n joueur.__name__ = \"auMoins{}\".format(X) # Triche sur le nom\n return joueur\n\nauMoins50 = auMoinsX(50) # == unCoup, en fait\nauMoins100 = auMoinsX(100)\nauMoins150 = auMoinsX(150)\nauMoins200 = auMoinsX(200) # Commence à devenir très audacieux\nauMoins250 = auMoinsX(250)\nauMoins300 = auMoinsX(300) # Compètement fou, très peu de chance de marquer ça ou plus!\nauMoins350 = auMoinsX(350)\nauMoins400 = auMoinsX(400)\nauMoins450 = auMoinsX(450)\nauMoins500 = auMoinsX(500)\nauMoins550 = auMoinsX(550)\nauMoins600 = auMoinsX(600)\nauMoins650 = auMoinsX(650)\nauMoins700 = auMoinsX(700)\n# On pourrait continuer ...\nauMoins800 = auMoinsX(800)\nauMoins850 = auMoinsX(850)\nauMoins900 = auMoinsX(900)\nauMoins950 = auMoinsX(950)\nauMoins1000 = 
auMoinsX(1000)", "Une autre stratégie \"stupide\" : décider aléatoirement, selon une loi de Bernoulli, si elle continue ou si elle s'arrête.", "def bernoulli(p=0.5):\n def joueur(compte, accu, t, total):\n \"\"\" Marque les points accumulés avec probabilité p = {} (Bernoulli).\"\"\".format(p)\n return rn.random() > p\n joueur.__name__ = \"bernoulli_{:.3g}\".format(p)\n return joueur", "1.3.3. Quelques exemples\nEssayons de faire jouer deux stratégies face à l'autre.", "joueurs = [unCoup, unCoup]\ntotal = 200\n\nunePartie(joueurs, total, True)\nunePartie(joueurs, total)\n\njoueurs = [unCoup, jusquauBout]\ntotal = 200\nunePartie(joueurs, total)\n\njoueurs = [unCoup, auMoins100]\ntotal = 500\nunePartie(joueurs, total)\n\njoueurs = [unCoup, auMoins200]\ntotal = 1000\nunePartie(joueurs, total)", "1.3.4. Générer plusieurs parties\nOn peut maintenant lancer plusieurs centaines de simulations de parties, sans afficher le déroulement de chaque parties.\nLa fonction unePartie renvoie un tuple, (i, comptes), où :\n- i est l'indice (0 ou 1) du joueur ayant gagné la partie,\n- et comptes est une liste contenant les deux historiques des points des deux joueurs.\nPar exemple, pour un total = 500, la sortie (1, [[0, 100, 150, 250, 450], [0, 50, 450, 500]]) signifie :\n- le joueur 1 a gagné, après avoir marqué 50 points, puis 400, et enfin 50,\n- le joueur 2 a perdu, après avoir marqué 100 points, puis 50, puis 100, puis 200, mais a perdu avec 450 points.", "def desParties(nb, joueurs, total=1000, i0=0):\n indices, historiques = [], []\n for _ in range(nb):\n i, h = unePartie(joueurs, total=total, i0=i0, debug=False)\n indices.append(i)\n historiques.append(h)\n return indices, historiques", "Par exemple, on peut opposer le joueur pas courageux (unCoup) au joueur très gourmand (jusquauBout) sur 100 parties avec un total de 250 points :", "def freqGain(indiceMoyen, i):\n # (1^i) + ((-1)**(i==0)) * indiceMoyen\n if i == 0:\n return 1 - indiceMoyen\n else:\n return indiceMoyen\n\ndef afficheResultatsDesParties(nb, joueurs, total, indices, historiques):\n indiceMoyen = np.mean(indices)\n pointsFinaux = [np.mean(list(historiques[k][i][-1] for k in range(nb))) for i in [0, 1]]\n\n print(\"Dans {} parties simulées, contre le total {} :\".format(nb, total))\n for i in [0, 1]:\n print(\" - le joueur {} ({.__name__:<11}) a gagné {:>5.2%} du temps, et a eu un score final moyen de {:>5g} points ...\".format(i, joueurs[i], freqGain(indiceMoyen, i), pointsFinaux[i]))\n\nnb = 10000\njoueurs = [unCoup, jusquauBout]\ntotal = 1000\nindices, historiques = desParties(nb, joueurs, total)\nafficheResultatsDesParties(nb, joueurs, total, indices, historiques)\n\nnb = 10000\njoueurs = [unCoup, jusquauBout]\ntotal = 500\nindices, historiques = desParties(nb, joueurs, total)\nafficheResultatsDesParties(nb, joueurs, total, indices, historiques)\n\nnb = 10000\njoueurs = [unCoup, jusquauBout]\ntotal = 5000\nindices, historiques = desParties(nb, joueurs, total)\nafficheResultatsDesParties(nb, joueurs, total, indices, historiques)", "Affichons une première courbe qui montrera la supériorité d'une stratégie face à la plus peureuse, en fonction du total.", "def plotResultatsDesParties(nb, joueurs, totaux):\n N = len(totaux)\n indicesMoyens = []\n for total in totaux:\n indices, _ = desParties(nb, joueurs, total)\n indicesMoyens.append(np.mean(indices))\n plt.figure()\n plt.plot(totaux, indicesMoyens, 'ro')\n plt.xlabel(\"Objectif (points totaux à atteindre)\")\n plt.ylabel(\"Taux de victoire de 1 face à 0\")\n plt.title(\"Taux de victoire 
du joueur 1 ({.__name__}) face au joueur 0 ({.__name__}),\\n pour {} parties simulees pour chaque total.\".format(joueurs[1], joueurs[0], nb))\n plt.show()\n\nnb = 1000\njoueurs = [unCoup, jusquauBout]\ntotaux = [50, 100, 150, 200, 250, 300, 350, 400, 450, 500]\nplotResultatsDesParties(nb, joueurs, totaux)\n\nnb = 1000\njoueurs = [unCoup, jusquauBout]\ntotalMax = 2000\ntotaux = list(range(50, totalMax + 50, 50))\nplotResultatsDesParties(nb, joueurs, totaux)", "D'autres comparaisons, entre stratégies gourmandes.", "nb = 5000\njoueurs = [auMoins100, auMoins200]\ntotalMax = 1000\ntotaux = list(range(50, totalMax + 50, 50))\nplotResultatsDesParties(nb, joueurs, totaux)\n\nnb = 1000\njoueurs = [auMoins100, jusquauBout]\ntotalMax = 2000\ntotaux = list(range(50, totalMax + 50, 100))\nplotResultatsDesParties(nb, joueurs, totaux)\n\nnb = 1000\ntotalMax = 2000\ntotaux = list(range(50, totalMax + 50, 50))\n\njoueurs = [unCoup, bernoulli(0.5)]\nplotResultatsDesParties(nb, joueurs, totaux)\n\njoueurs = [unCoup, bernoulli(0.1)]\nplotResultatsDesParties(nb, joueurs, totaux)\n\njoueurs = [unCoup, bernoulli(0.25)]\nplotResultatsDesParties(nb, joueurs, totaux)\n\njoueurs = [unCoup, bernoulli(0.75)]\nplotResultatsDesParties(nb, joueurs, totaux)\n\njoueurs = [unCoup, bernoulli(0.9)]\nplotResultatsDesParties(nb, joueurs, totaux)", "Évaluation en self-play\nPlutôt que de faire jouer une stratégie face à une autre, et d'utiliser le taux de victoire comme une mesure de performance (ce que j'ai fait plus haut), on peut chercher à mesure un autre taux de victoire.\nOn peut laisser une stratégie jouer tout seule, et mesurer plutôt le nombre de coup requis pour gagner.", "def unePartieSeul(joueur, total=1000, debug=DEBUG):\n compte = 0\n nbCoups = 0\n nbLances = 0\n score = [0]\n if debug: print(\"Simulation pour le joueur ({.__name__}), le total à atteindre est {} :\".format(joueur, total))\n while compte < total: # Tant que joueur n'a pas gagné\n nbCoups += 1\n if debug: print(\" - Coup #{}, son compte est {} / {} ...\".format(nbCoups, compte, total))\n accu, nbLance = unJeu(joueur, compte, total, debug)\n nbLances += nbLance\n if accu > 0:\n compte += accu\n score.append(compte) # Historique\n if compte == total:\n if debug: print(\"- Le joueur ({.__name__}) a gagné en {} coups et {} lancés de dés !\".format(joueur, nbCoups, nbLances))\n return score", "Testons ça avec la stratégie naïve unCoup :", "h = unePartieSeul(unCoup, 1000)\nprint(\"Partie gagnée en {} coups par le joueur ({.__name__}), avec le score {} ...\".format(len(h), unCoup, h))", "Comme précédemment, on peut générer plusieurs simulations pour la même tâche, et obtenir ainsi une liste d'historiques de jeu.", "def desPartiesSeul(nb, joueur, total=1000, debug=False):\n historique = []\n for _ in range(nb):\n h = unePartieSeul(joueur, total=total, debug=debug)\n historique.append(h)\n return historique\n\ndesPartiesSeul(4, unCoup)", "Ce qui nous intéresse est uniquement le nombre de coups qu'une certaine stratégie va devoir jouer avant de gagner :", "[len(l)-1 for l in desPartiesSeul(4, unCoup)]", "Avec un joli affichage et un calcul du nombre moyen de coups :", "def afficheResultatsDesPartiesSeul(nb, joueur, total, historique):\n nbCoupMoyens = np.mean([len(h) - 1 for h in historique])\n print(\"Dans {} parties simulées, contre le total {}, le joueur ({.__name__}) a gagné en moyenne en {} coups ...\".format(nb, total, joueur, nbCoupMoyens))\n\nhistorique = desPartiesSeul(100, unCoup, 1000)\nafficheResultatsDesPartiesSeul(100, unCoup, 1000, 
historique)", "Comme précédemment, on peut afficher un graphique montrant l'évolution de ce nombre moyen de coups, disons pour $1000$ parties simulées, en fonction du total à atteindre.\nLa courbe obtenue devrait être croissante, mais difficile de prévoir davantage son comportement.", "def plotResultatsDesPartiesSeul(nb, joueur, totaux):\n N = len(totaux)\n nbCoupMoyens = []\n for total in totaux:\n historique = desPartiesSeul(nb, joueur, total)\n nbCoupMoyens.append(np.mean([len(h) - 1 for h in historique]))\n plt.figure()\n plt.plot(totaux, nbCoupMoyens, 'ro')\n plt.xlabel(\"Objectif (points totaux à atteindre)\")\n plt.ylabel(\"Nombre moyen de coups joués avant de gagner\")\n plt.title(\"Nombre moyen de coups requis par {.__name__}\\n pour {} parties simulées pour chaque total.\".format(joueur, nb))\n plt.show()", "On va utiliser les mêmes paramètres de simulation que précédemment : $1000$ simulations pour chaque total, et des totaux allant de $50$ à $2000$ par pas de $50$.", "nb = 1000\ntotalMax = 2000\ntotaux = list(range(50, totalMax + 50, 50))", "La courbe pour unCoup permet d'établir le comportement de la stratégie naïve, on pourra ensuite comparer les autres stratégies.", "plotResultatsDesPartiesSeul(nb, unCoup, totaux)", "Tient, pour unCoup, la courbe est linéaire dans le total. C'est assez logique, vue la stratégie utilisée !\nOn marque à chaque coup, donc le nombre de coups moyens est juste le total divisé par le score moyen.\nOn se rappelle que le score moyen en un tirage est d'environ $96$ points (avec suite), et en effet $2000 / 91 \\simeq 21$, ce qu'on lit sur la courbe.", "scoreMoyen = 96\ntotal = 2000\ntotal / scoreMoyen", "Pour jusquauBout :", "plotResultatsDesPartiesSeul(nb, jusquauBout, totaux)", "On constate que cette stratégie jusquauBout gagne bien plus rapidement que la stratégie unCoup !\n\n\nPour auMoins200, par exemple :", "plotResultatsDesPartiesSeul(nb, auMoins200, totaux)", "Pour bernoulli(0.5), par exemple :", "plotResultatsDesPartiesSeul(nb, bernoulli(0.5), totaux)", "Pour bernoulli(0.2), par exemple :", "plotResultatsDesPartiesSeul(nb, bernoulli(0.2), totaux)", "Pour bernoulli(0.8), par exemple :", "plotResultatsDesPartiesSeul(nb, bernoulli(0.8), totaux)", "Ces comparaisons de différentes stratégies de Bernoulli permettent de conclure, comme on le présentait, que la meilleure stratégie (parmi les quelques testées) est la stratégie jusquauBout !\nToutes les courbes ci dessus montrent un comportement (presque) linéaire du nombre moyen de coups requis pour gagner en fonction du total.\nAinsi, pour comparer différentes stratégies, on peut juste comparer leur nombre de coups moyen pour un certain total, disons $T = 2000$.", "def comparerStrategies(joueurs, nb=1000, total=2000):\n resultats = []\n for joueur in joueurs:\n historique = desPartiesSeul(nb, joueur, total)\n nbCoupMoyen = np.mean([len(h) - 1 for h in historique])\n resultats.append((nbCoupMoyen, joueur.__name__))\n # Trier les résultats permet de voir les meilleures stratégies en premier !\n return sorted(resultats)\n\njoueurs = [unCoup, jusquauBout]\ncomparerStrategies(joueurs, nb=nb, total=totalMax)", "On va comparer toutes les stratégies définies plus haut :", "joueurs = [unCoup, jusquauBout]\njoueurs += [auMoins50, auMoins100, auMoins150, auMoins200, auMoins250, auMoins300, auMoins350, auMoins400, auMoins450, auMoins500, auMoins550, auMoins600, auMoins650, auMoins700, auMoins800, auMoins850, auMoins900, auMoins950, auMoins1000]\nfor p in range(0, 20 + 1):\n joueurs.append(bernoulli(p/20.))\n\n# 
print([j.__name__ for j in joueurs])\n\nnb = 1000\ntotalMax = 2000\nresultats = comparerStrategies(joueurs, nb=nb, total=totalMax)\nprint(\"Pour le total {} et {} simulations ...\".format(totalMax, nb))\nfor (i, (n, j)) in enumerate(resultats):\n print(\"- La stratégie classée #{:2} / {} est {:<14}, avec un nombre moyen de coups = {:.3g} ...\".format(i, len(joueurs), j, n))\n\nnb = 2000\ntotalMax = 3000\nresultats = comparerStrategies(joueurs, nb=nb, total=totalMax)\nprint(\"Pour le total {} et {} simulations ...\".format(totalMax, nb))\nfor (i, (n, j)) in enumerate(resultats):\n print(\"- La stratégie classée #{:2} / {} est {:<14}, avec un nombre moyen de coups = {:.3g} ...\".format(i, len(joueurs), j, n))\n\nnb = 1000\ntotalMax = 5000\nresultats = comparerStrategies(joueurs, nb=nb, total=totalMax)\nprint(\"Pour le total {} et {} simulations ...\".format(totalMax, nb))\nfor (i, (n, j)) in enumerate(resultats):\n print(\"- La stratégie classée #{:2} / {} est {:<14}, avec un nombre moyen de coups = {:.3g} ...\".format(i, len(joueurs), j, n))", "$\\implies$ la stratégie la plus efficace est en effet jusquauBout !\n\nNotons néanmoins que je n'ai testé que des stratégies très simples...\nEn particulier, celles considérées n'utilisent pas, dans leur prise de décision, le nombre de coups déja joué, ni le nombre de tirage courant." ]
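A quick cross-check on the simulations in the notebook above: a roll of three six-sided dice has only 6^3 = 216 equally likely outcomes, so the mean score and the probability of scoring zero can be computed exactly by enumeration instead of Monte-Carlo. The scoring rules below are transcribed from the _points function above (a brelan of n scores 100*n, a brelan of ones 700, otherwise 100 per 1 and 50 per 5, plus 100 for a straight in that variant); everything else is standard-library Python. The exact values land on the roughly 85 / 96 point averages and roughly 28% / 25% zero-score probabilities estimated above.

from itertools import product
from fractions import Fraction

def points(roll, count_straights=False):
    """Exact score of one roll of three dice, mirroring _points() above."""
    a, b, c = roll
    if a == b == c:                                   # brelan (three of a kind)
        return 700 if a == 1 else 100 * a
    straight = count_straights and sorted(roll) in ([1, 2, 3], [2, 3, 4],
                                                    [3, 4, 5], [4, 5, 6])
    return 100 * roll.count(1) + 50 * roll.count(5) + (100 if straight else 0)

rolls = list(product(range(1, 7), repeat=3))          # all 216 outcomes
for with_straights in (False, True):
    scores = [points(r, with_straights) for r in rolls]
    mean = Fraction(sum(scores), len(scores))
    p_zero = Fraction(sum(s == 0 for s in scores), len(scores))
    print(f"straights counted: {with_straights!s:5}  "
          f"exact mean = {float(mean):6.2f} points, "
          f"P(0 points) = {float(p_zero):.2%}")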
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ueapy/ueapy.github.io
content/notebooks/2017-01-20-function-quirks.ipynb
mit
[ "name = '2017-01-20-function-quirks'\ntitle = 'Some peculiarities of using functions in Python'\ntags = 'basics'\nauthor = 'Denis Sergeev'\n\nfrom nb_tools import connect_notebook_to_post\nfrom IPython.core.display import HTML, Image\n\nhtml = connect_notebook_to_post(name, title, tags, author)", "Let's review the basics\nFunctions without arguments", "def my_super_function():\n pass\n\ndef even_better():\n print('This is executed within a function')\n\neven_better()\n\ntype(even_better)", "Positional arguments\naka mandatory parameters", "import numpy as np\n\ndef uv2wdir(u, v):\n \"\"\"Calculate horizontal wind direction (meteorological notation)\"\"\"\n return 180 + 180 / np.pi * np.arctan2(u, v)\n\na = uv2wdir(10, -10)\na\n\ntype(a)", "Keyword (named) arguments\naka optional parameters", "def myfun(list_of_strings, separator=' ', another=123):\n result = separator.join(list_of_strings)\n return result\n\nwords = ['This', 'is', 'my', 'Function']\n\nmyfun(words, another=456, separator='-------')", "Dangerous default arguments", "default_number = 10\n\ndef double_it(x=default_number):\n return x * 2\n\ndouble_it()\n\ndouble_it(2)\n\ndefault_number = 100000000\n\ndouble_it()", "But what if we used a mutable type as a default argument?", "def add_items_bad(element, times=1, lst=[]):\n for _ in range(times):\n lst.append(element)\n return lst\n\nmylist = add_items_bad('a', 3)\nprint(mylist)\n\nanother_list = add_items_bad('b', 5)\nprint(another_list)\n\ndef add_items_good(element, times=1, lst=None):\n if lst is None:\n lst = []\n\n for _ in range(times):\n lst.append(element)\n return lst\n\nmylist = add_items_good('a', 3)\nprint(mylist)\n\nanother_list = add_items_good('b', 5)\nprint(another_list)", "Global variables\nVariables declared outside the function can be referenced within the function:", "x = 5\n\ndef add_x(y):\n return x + y\n\nadd_x(20)", "But these global variables cannot be modified within the function, unless declared global in the function.", "def setx(y):\n global x\n x = y\n print('x is {}'.format(x))\n\nx\n\nsetx(10)\n\nprint(x)\n\ndef foo():\n a = 1\n print(locals())\n\nfoo()", "Arbitrary number of arguments\nSpecial forms of parameters:\n* *args: any number of positional arguments packed into a tuple\n* **kwargs: any number of keyword arguments packed into a dictionary", "def variable_args(*args, **kwargs):\n print('args are', args)\n print('kwargs are', kwargs)\n if 'z' in kwargs:\n print(kwargs['z'])\n\nvariable_args('foo', 'bar', x=1, y=2)", "Example 1", "def smallest(x, y):\n if x < y:\n return x\n else:\n return y\n\nsmallest(1, 2)\n\n# smallest(1, 2, 3) <- results in TypeError\n\ndef smallest(x, *args):\n small = x\n for y in args:\n if y < small:\n small= y\n return small\n\nsmallest(11)", "Example 2\nUnpacking a dictionary of keyword arguments is particularly handy in matplotlib.", "import matplotlib.pyplot as plt\n\n%matplotlib inline\n\narr1 = np.random.rand(100)\narr2 = np.random.rand(100)\n\nstyle1 = dict(linewidth=3, color='#FF0123')\nstyle2 = dict(linestyle='--', color='skyblue')\n\nplt.plot(arr1, **style1)\nplt.plot(arr2, **style2)", "Passing functions into functions\n<img src=\"https://lifebeyondfife.com/wp-content/uploads/2015/05/functions.jpg\" width=400>\nFunctions are first-class objects. 
This means that functions can be passed around, and used as arguments, just like any other value (e.g, string, int, float).", "def find_special_numbers(special_selector, limit=10):\n found = []\n n = 0\n while len(found) < limit:\n if special_selector(n):\n found.append(n)\n n += 1\n return found\n\ndef check_odd(a):\n return a % 2 == 1\n\nmylist = find_special_numbers(check_odd, 25)\n\nfor n in mylist:\n print(n, end=',')", "But lots of small functions can clutter your code...\nlambda expressions\nHighly pythonic!\ncheck = i -&gt; return True if i % 6 == 0", "check = lambda i: i % 6 == 0\n\n#check = lambda", "Lambdas usually are not defined on their own, but inserted in-place.", "find_special_numbers(lambda i: i % 6 == 0, 5)", "Another common example", "lyric = \"Never gonna give you up\"\n\nwords = lyric.split()\nwords\n\nsorted(words, key=lambda x: x.lower())", "How to sort a list of strings, each of which is a number?\nJust using sorted() gives us not what we want:", "lst = ['20', '1', '2', '100']\n\nsorted(lst)", "But we can use a lambda-expression to overcome this problem:\nOption 1:", "sorted(lst, key=lambda x: int(x))", "Option 2:", "sorted(lst, key=lambda x: x.zfill(16))", "By the way, what does zfill() method do? It pads a string with zeros:", "'aaaa'.zfill(10)", "Resources\n\nWrite Pythonic Code Like a Seasoned Developer Course Demo Code\nScipy Lecture notes", "HTML(html)" ]
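One mechanism worth spelling out for the "dangerous default arguments" example above: default values are evaluated a single time, when the def statement runs, and are stored on the function object itself, which is why the same list keeps growing across calls. Inspecting __defaults__ makes that visible; nothing beyond standard CPython behaviour is assumed here.

def add_items_bad(element, times=1, lst=[]):
    for _ in range(times):
        lst.append(element)
    return lst

# The default list is created once, when `def` runs, and stored on the function:
print(add_items_bad.__defaults__)   # (1, [])

add_items_bad('a', 3)   # uses the stored default list
add_items_bad('b', 2)   # ...and so does this call

# Both calls mutated that single stored list:
print(add_items_bad.__defaults__)   # (1, ['a', 'a', 'a', 'b', 'b'])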
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/hub
examples/colab/tf_hub_delf_module.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Hub Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================", "How to match images using DELF and TensorFlow Hub\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/hub/tutorials/tf_hub_delf_module\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf_hub_delf_module.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/hub/blob/master/examples/colab/tf_hub_delf_module.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/tf_hub_delf_module.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n <td>\n <a href=\"https://tfhub.dev/google/delf/1\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub model</a>\n </td>\n</table>\n\nTensorFlow Hub (TF-Hub) is a platform to share machine learning expertise packaged in reusable resources, notably pre-trained modules.\nIn this colab, we will use a module that packages the DELF neural network and logic for processing images to identify keypoints and their descriptors. The weights of the neural network were trained on images of landmarks as described in this paper.\nSetup", "!pip install scikit-image\n\nfrom absl import logging\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom PIL import Image, ImageOps\nfrom scipy.spatial import cKDTree\nfrom skimage.feature import plot_matches\nfrom skimage.measure import ransac\nfrom skimage.transform import AffineTransform\nfrom six import BytesIO\n\nimport tensorflow as tf\n\nimport tensorflow_hub as hub\nfrom six.moves.urllib.request import urlopen", "The data\nIn the next cell, we specify the URLs of two images we would like to process with DELF in order to match and compare them.", "#@title Choose images\nimages = \"Bridge of Sighs\" #@param [\"Bridge of Sighs\", \"Golden Gate\", \"Acropolis\", \"Eiffel tower\"]\nif images == \"Bridge of Sighs\":\n # from: https://commons.wikimedia.org/wiki/File:Bridge_of_Sighs,_Oxford.jpg\n # by: N.H. 
Fischer\n IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/2/28/Bridge_of_Sighs%2C_Oxford.jpg'\n # from https://commons.wikimedia.org/wiki/File:The_Bridge_of_Sighs_and_Sheldonian_Theatre,_Oxford.jpg\n # by: Matthew Hoser\n IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/c/c3/The_Bridge_of_Sighs_and_Sheldonian_Theatre%2C_Oxford.jpg'\nelif images == \"Golden Gate\":\n IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/1/1e/Golden_gate2.jpg'\n IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/3/3e/GoldenGateBridge.jpg'\nelif images == \"Acropolis\":\n IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/c/ce/2006_01_21_Ath%C3%A8nes_Parth%C3%A9non.JPG'\n IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/5/5c/ACROPOLIS_1969_-_panoramio_-_jean_melis.jpg'\nelse:\n IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/d/d8/Eiffel_Tower%2C_November_15%2C_2011.jpg'\n IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/a/a8/Eiffel_Tower_from_immediately_beside_it%2C_Paris_May_2008.jpg'", "Download, resize, save and display the images.", "def download_and_resize(name, url, new_width=256, new_height=256):\n path = tf.keras.utils.get_file(url.split('/')[-1], url)\n image = Image.open(path)\n image = ImageOps.fit(image, (new_width, new_height), Image.ANTIALIAS)\n return image\n\nimage1 = download_and_resize('image_1.jpg', IMAGE_1_URL)\nimage2 = download_and_resize('image_2.jpg', IMAGE_2_URL)\n\nplt.subplot(1,2,1)\nplt.imshow(image1)\nplt.subplot(1,2,2)\nplt.imshow(image2)", "Apply the DELF module to the data\nThe DELF module takes an image as input and will describe noteworthy points with vectors. The following cell contains the core of this colab's logic.", "delf = hub.load('https://tfhub.dev/google/delf/1').signatures['default']\n\ndef run_delf(image):\n np_image = np.array(image)\n float_image = tf.image.convert_image_dtype(np_image, tf.float32)\n\n return delf(\n image=float_image,\n score_threshold=tf.constant(100.0),\n image_scales=tf.constant([0.25, 0.3536, 0.5, 0.7071, 1.0, 1.4142, 2.0]),\n max_feature_num=tf.constant(1000))\n\nresult1 = run_delf(image1)\nresult2 = run_delf(image2)", "Use the locations and description vectors to match the images", "#@title TensorFlow is not needed for this post-processing and visualization\ndef match_images(image1, image2, result1, result2):\n distance_threshold = 0.8\n\n # Read features.\n num_features_1 = result1['locations'].shape[0]\n print(\"Loaded image 1's %d features\" % num_features_1)\n \n num_features_2 = result2['locations'].shape[0]\n print(\"Loaded image 2's %d features\" % num_features_2)\n\n # Find nearest-neighbor matches using a KD tree.\n d1_tree = cKDTree(result1['descriptors'])\n _, indices = d1_tree.query(\n result2['descriptors'],\n distance_upper_bound=distance_threshold)\n\n # Select feature locations for putative matches.\n locations_2_to_use = np.array([\n result2['locations'][i,]\n for i in range(num_features_2)\n if indices[i] != num_features_1\n ])\n locations_1_to_use = np.array([\n result1['locations'][indices[i],]\n for i in range(num_features_2)\n if indices[i] != num_features_1\n ])\n\n # Perform geometric verification using RANSAC.\n _, inliers = ransac(\n (locations_1_to_use, locations_2_to_use),\n AffineTransform,\n min_samples=3,\n residual_threshold=20,\n max_trials=1000)\n\n print('Found %d inliers' % sum(inliers))\n\n # Visualize correspondences.\n _, ax = plt.subplots()\n inlier_idxs = np.nonzero(inliers)[0]\n plot_matches(\n ax,\n 
image1,\n image2,\n locations_1_to_use,\n locations_2_to_use,\n np.column_stack((inlier_idxs, inlier_idxs)),\n matches_color='b')\n ax.axis('off')\n ax.set_title('DELF correspondences')\n\n\n\n\nmatch_images(image1, image2, result1, result2)" ]
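A small aside on the matching code above: it depends on a detail of scipy.spatial.cKDTree.query that is easy to miss. When distance_upper_bound is passed and no neighbour lies within that radius, the query returns a distance of inf and an index equal to n, the number of points in the tree, which is exactly what the indices[i] != num_features_1 test filters out. The toy example below (plain 2-D points, nothing taken from the DELF results) shows that behaviour in isolation.

import numpy as np
from scipy.spatial import cKDTree

reference = np.array([[0.0, 0.0],
                      [1.0, 0.0],
                      [0.0, 1.0],
                      [1.0, 1.0],
                      [2.0, 2.0]])      # 5 reference "descriptors", so n = 5
queries = np.array([[1.01, 0.0],        # close to reference point 1
                    [10.0, 10.0]])      # far from everything

tree = cKDTree(reference)
distances, indices = tree.query(queries, distance_upper_bound=0.1)

print(distances)   # [0.01  inf]
print(indices)     # [1 5]  -> 5 == tree.n means "no neighbour within the radius"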
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jaakla/getdelficomments
Welcome_To_Colaboratory.ipynb
unlicense
[ "<a href=\"https://colab.research.google.com/github/jaakla/getdelficomments/blob/master/Welcome_To_Colaboratory.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n<p><img alt=\"Colaboratory logo\" height=\"45px\" src=\"/img/colab_favicon.ico\" align=\"left\" hspace=\"10px\" vspace=\"0px\"></p>\n\n<h1>What is Colaboratory?</h1>\n\nColaboratory, or \"Colab\" for short, allows you to write and execute Python in your browser, with \n- Zero configuration required\n- Free access to GPUs\n- Easy sharing\nWhether you're a student, a data scientist or an AI researcher, Colab can make your work easier. Watch Introduction to Colab to learn more, or just get started below!\nNew Section\nGetting started\nThe document you are reading is not a static web page, but an interactive environment called a Colab notebook that lets you write and execute code.\nFor example, here is a code cell with a short Python script that computes a value, stores it in a variable, and prints the result:", "seconds_in_a_day = 24 * 60 * 60\nseconds_in_a_day", "To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut \"Command/Ctrl+Enter\". To edit the code, just click the cell and start editing.\nVariables that you define in one cell can later be used in other cells:", "seconds_in_a_week = 7 * seconds_in_a_day\nseconds_in_a_week", "Colab notebooks allow you to combine executable code and rich text in a single document, along with images, HTML, LaTeX and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. To learn more, see Overview of Colab. To create a new Colab notebook you can use the File menu above, or use the following link: create a new Colab notebook.\nColab notebooks are Jupyter notebooks that are hosted by Colab. To learn more about the Jupyter project, see jupyter.org.\nData science\nWith Colab you can harness the full power of popular Python libraries to analyze and visualize data. The code cell below uses numpy to generate some random data, and uses matplotlib to visualize it. To edit the code, just click the cell and start editing.", "import numpy as np\nfrom matplotlib import pyplot as plt\n\nys = 200 + np.random.randn(100)\nx = [x for x in range(len(ys))]\n\nplt.plot(x, ys, '-')\nplt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6)\n\nplt.title(\"Sample Visualization\")\nplt.show()", "You can import your own data into Colab notebooks from your Google Drive account, including from spreadsheets, as well as from Github and many other sources. To learn more about importing data, and how Colab can be used for data science, see the links below under Working with Data.\nMachine learning\nWith Colab you can import an image dataset, train an image classifier on it, and evaluate the model, all in just a few lines of code. Colab notebooks execute code on Google's cloud servers, meaning you can leverage the power of Google hardware, including GPUs and TPUs, regardless of the power of your machine. 
All you need is a browser.\nColab is used extensively in the machine learning community with applications including:\n- Getting started with TensorFlow\n- Developing and training neural networks\n- Experimenting with TPUs\n- Disseminating AI research\n- Creating tutorials\nTo see sample Colab notebooks that demonstrate machine learning applications, see the machine learning examples below.\nMore Resources\nWorking with Notebooks in Colab\n\nOverview of Colaboratory\nGuide to Markdown\nImporting libraries and installing dependencies\nSaving and loading notebooks in GitHub\nInteractive forms\nInteractive widgets\n<img src=\"/img/new.png\" height=\"20px\" align=\"left\" hspace=\"4px\" alt=\"New\"></img>\n TensorFlow 2 in Colab\n\n<a name=\"working-with-data\"></a>\nWorking with Data\n\nLoading data: Drive, Sheets, and Google Cloud Storage \nCharts: visualizing data\nGetting started with BigQuery\n\nMachine Learning Crash Course\nThese are a few of the notebooks from Google's online Machine Learning course. See the full course website for more.\n- Intro to Pandas\n- Tensorflow concepts\n- First steps with TensorFlow\n- Intro to neural nets\n- Intro to sparse data and embeddings\n<a name=\"using-accelerated-hardware\"></a>\nUsing Accelerated Hardware\n\nTensorFlow with GPUs\nTensorFlow with TPUs\n\n<a name=\"machine-learning-examples\"></a>\nMachine Learning Examples\nTo see end-to-end examples of the interactive machine learning analyses that Colaboratory makes possible, check out these tutorials using models from TensorFlow Hub.\nA few featured examples:\n\nRetraining an Image Classifier: Build a Keras model on top of a pre-trained image classifier to distinguish flowers.\nText Classification: Classify IMDB movie reviews as either positive or negative.\nStyle Transfer: Use deep learning to transfer style between images.\nMultilingual Universal Sentence Encoder Q&A: Use a machine learning model to answer questions from the SQuAD dataset.\nVideo Interpolation: Predict what happened in a video between the first and the last frame." ]
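The section above mentions importing your own data from Google Drive without showing the call involved, so here is the usual pattern for reference. It relies on the google.colab.drive helper, which only works inside a Colab runtime and will prompt for authorization; the CSV path is a placeholder to replace with a file that actually exists in your Drive.

# Only works inside a Colab runtime: mounts your Google Drive at /content/drive.
from google.colab import drive
import pandas as pd

drive.mount('/content/drive')

# Placeholder path: replace with a real file in your own Drive.
df = pd.read_csv('/content/drive/MyDrive/my_data.csv')
df.head()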
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mrcslws/nupic.research
projects/archive/dynamic_sparse/notebooks/ExperimentAnalysis-MNISTSparser.ipynb
agpl-3.0
[ "Experiment:\nEvaluate pruning by magnitude weighted by coactivations (more thorough evaluation), compare it to baseline (SET).\nMotivation.\nCheck if results are consistently above baseline.\nConclusion\n\nNo significant difference between both models\nNo support for early stopping", "%load_ext autoreload\n%autoreload 2\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport glob\nimport tabulate\nimport pprint\nimport click\nimport numpy as np\nimport pandas as pd\nfrom ray.tune.commands import *\nfrom nupic.research.frameworks.dynamic_sparse.common.browser import *\n\nimport matplotlib.pyplot as plt\nfrom matplotlib import rcParams\n\n%config InlineBackend.figure_format = 'retina'\n\nimport seaborn as sns\nsns.set(style=\"whitegrid\")\nsns.set_palette(\"colorblind\")", "Load and check data", "exps = ['improved_magpruning_eval3', 'improved_magpruning_eval7', 'improved_magpruning_eval8']\npaths = [os.path.expanduser(\"~/nta/results/{}\".format(e)) for e in exps]\ndf = load_many(paths)\n\ndf.head(5)\n\n# replace hebbian prine\ndf['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)\ndf['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)\n\ndf.columns\n\ndf.shape\n\ndf.iloc[1]\n\ndf.groupby('model')['model'].count()", "## Analysis\nExperiment Details", "# Did any trials failed?\ndf[df[\"epochs\"]<30][\"epochs\"].count()\n\n# Removing failed or incomplete trials\ndf_origin = df.copy()\ndf = df_origin[df_origin[\"epochs\"]>=30]\ndf.shape\n\n# which ones failed?\n# failed, or still ongoing?\ndf_origin['failed'] = df_origin[\"epochs\"]<30\ndf_origin[df_origin['failed']]['epochs']\n\n# helper functions\ndef mean_and_std(s):\n return \"{:.3f} ± {:.3f}\".format(s.mean(), s.std())\n\ndef round_mean(s):\n return \"{:.0f}\".format(round(s.mean()))\n\nstats = ['min', 'max', 'mean', 'std']\n\ndef agg(columns, filter=None, round=3):\n if filter is None:\n return (df.groupby(columns)\n .agg({'val_acc_max_epoch': round_mean,\n 'val_acc_max': stats, \n 'model': ['count']})).round(round)\n else:\n return (df[filter].groupby(columns)\n .agg({'val_acc_max_epoch': round_mean,\n 'val_acc_max': stats, \n 'model': ['count']})).round(round)\n", "Does improved weight pruning outperforms regular SET", "agg(['model'])\n\nagg(['on_perc'])\n\nagg(['on_perc', 'model'])\n\n# translate model names\nrcParams['figure.figsize'] = 16, 8\nd = {\n 'DSNNWeightedMag': 'DSNN',\n 'DSNNMixedHeb': 'SET',\n 'SparseModel': 'Static', \n}\ndf_plot = df.copy()\ndf_plot['model'] = df_plot['model'].apply(lambda x: d[x])\n\n# sns.scatterplot(data=df_plot, x='on_perc', y='val_acc_max', hue='model')\nsns.lineplot(data=df_plot, x='on_perc', y='val_acc_max', hue='model')\n\nrcParams['figure.figsize'] = 16, 8\nfilter = df_plot['model'] != 'Static'\nsns.lineplot(data=df_plot[filter], x='on_perc', y='val_acc_max_epoch', hue='model')\n\nsns.lineplot(data=df_plot, x='on_perc', y='val_acc_last', hue='model')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NYUDataBootcamp/Projects
UG_S17/Madhok-Rinkacs-Yao-USFlightDelays.ipynb
mit
[ "US Airlines--Which is the Most Punctual?\nAuthors: Chakshu Madhok, Wynonna Rinkacs, Ziqi Yao\nDate: May 12, 2017\nThis Jupyter Notebook was created for the NYU Stern course Data Bootcamp. \nAbstract\nIn this project, we aim to determine the “most punctual” commercial airline for domestic flights, based on 2015 flight data from the U.S. DOT. \nMotivation: As the old saying goes, time is money. Most people prefer to save as much time as possible and to avoid delays when traveling. However, the everyday consumer has no control over avoiding a flight delay--unless he or she strategically chooses flights that are unlikely to be delayed in the first place. \nAlthough some people may be loyal to a certain airline for the quality of amenities offered, we believe that punctuality is the leading factor in determining a “good” airline. Through our analysis, we will reveal which airline one should pick to minimize the chance of a delayed flight.\nPreliminaries\nImport packages for later use.", "import sys # system module \nimport pandas as pd # data package\nimport matplotlib.pyplot as plt # graphics module \nimport datetime as dt # date and time module\nimport numpy as np \nimport seaborn as sns\n\n%matplotlib inline\nsns.set_style('whitegrid')\nsns.color_palette(\"pastel\")\n \n# check versions, make sure Python is running\nprint('Python version:', sys.version)\nprint('Pandas version: ', pd.__version__)\nprint('Today: ', dt.date.today())", "Loading the Data\nData Source: To access and use the data, it is easiest to download the files directly to your computer and import it into Jupyter Notebook from the location on your computer. \nFirst, we access the 3 data files from the local file paths and save them to DataFrames:\n- airports.csv - airport codes, names, and locations\n- airlines.csv - airline codes and names\n- flights.csv - commercial domestic flights in 2015, flight info\n[df].head() helps us see the data and variables we are dealing with in each file.", "path = 'C:/Users/Ziqi/Desktop/Data Bootcamp/Project/airports.csv'\nairports = pd.read_csv(path)\nairports.head() \n\nairlines = pd.read_csv('C:/Users/Ziqi/Desktop/Data Bootcamp/Project/airlines.csv')\nairlines.head()\n\nflights = pd.read_csv('C:/Users/Ziqi/Desktop/Data Bootcamp/Project/flights.csv', low_memory=False) # (this is a big data file)\nflights.head()\n\n# number of rows and columns of each DataFrame\n\nprint('airports:',airports.shape)\nprint('airlines:',airlines.shape)\nprint('flights:',flights.shape)", "We see that the data contain 322 airports, 14 airlines, and 5,819,079 flights.\nFirst Glance: Average Arrival Delay\nThe main DataFrame of interest is flights, which contains information about airlines, airports, and delays. 
Here, we examine the columns in flights and create a new DataFrame with our columns of interest.", "# list of column names and datatypes in flights\n\nflights.info()\n\nflights.index", "Cleaning and Shaping", "# create new DataFrame with relevant variables\n\ncolumns=['YEAR',\n 'MONTH',\n 'DAY',\n 'DAY_OF_WEEK',\n 'AIRLINE',\n 'FLIGHT_NUMBER', \n 'ORIGIN_AIRPORT',\n 'DESTINATION_AIRPORT',\n 'DEPARTURE_DELAY',\n 'ARRIVAL_DELAY',\n 'DIVERTED',\n 'CANCELLED',\n 'AIR_SYSTEM_DELAY',\n 'SECURITY_DELAY',\n 'AIRLINE_DELAY',\n 'LATE_AIRCRAFT_DELAY',\n 'WEATHER_DELAY']\n\nflights2 = pd.DataFrame(flights, columns=columns)\nflights2.head()\n\n# for later convenience, we will replace the airline codes with each airline's full name, using a dictionary \n\nairlines_dictionary = dict(zip(airlines['IATA_CODE'], airlines['AIRLINE']))\nflights2['AIRLINE'] = flights2['AIRLINE'].apply(lambda x: airlines_dictionary[x])\nflights2.head()", "The DataFrame flights2 will serve as the foundation for our analysis on US domestic flight delays in 2015. We can further examine the data to determine which airline is the \"most punctual\".\nFirst, we will rank the airlines by average arrival delay. We are mainly concerned about arrival delay because regardless of whether a flight departs on time, what matters most to the passenger is whether he or she arrives at the final destination on time. Of course, a significant departure delay may result in an arrival delay. However, airlines may include a buffer in the scheduled arrival time to ensure that passengers reach their destination at the promised time.", "# create DataFrame with airlines and arrival delays\n\ndelays = flights2[['AIRLINE','DEPARTURE_DELAY','ARRIVAL_DELAY']]\n\n# if we hadn't used a dictionary to change the airline names, this is the code we would have used to produce the same result:\n#flights4 = pd.merge(airlines, flights3, left_on='IATA_CODE', right_on='AIRLINE', how='left')\n#flights4.drop('IATA_CODE', axis=1, inplace=True)\n#flights4.drop('AIRLINE_y', axis=1, inplace=True)\n#flights4.rename(columns={'AIRLINE_x': 'AIRLINE'}, inplace=True)\n\ndelays.head()\n\n# group data by airline name, calculate average arrival delay for each airline in 2015\n\nairline_av_delay = delays.groupby(['AIRLINE']).mean()\nairline_av_delay\n\n# create bar graph of average delay time for each airline\n\nairline_av_delay.sort_values(by='ARRIVAL_DELAY', ascending=True, inplace=True)\n\n\nsns.set()\nfig, ax = plt.subplots()\n\nairline_av_delay.plot(ax=ax,\n kind='bar',\n title='Average Delay (mins)')\n\nax.set_ylabel('Average Minutes Delayed')\nax.set_xlabel('Airline')\n\nplt.show()\n\n", "The bar graph shows that Alaska Airlines has the shortest delay on average--in fact, the average Alaska Airlines flight arrives before the scheduled arrival time, making it the airline with the best time on record. On the other end, Spirit Airlines has the longest average arrival delay. Interestingly, none of the average arrival delays exceed 15 minutes--for the most part, it seems that US domestic flights have been pretty punctual in 2015!\nAdditionally, almost all of the airlines have a departure delay greater than the arrival delay (with the exception of Hawaiian Airlines), which makes sense, considering that departure delay could be due to a variety of factors related to the departure airport, such as security, late passengers, or late arrivals of other flights to that airport.
Despite a greater average departure delay, most airlines seem to make up for the delay during the travel time, resulting in a shorter average arrival delay. \nSecond Glance: Consistency\nNow that we know how the airlines rank in terms of arrival delay, we can look at how many of each airline's flights were cancelled or diverted. Second, we can calculate delay percentages for each airline, i.e. what percent of each airline's total flights were delayed in 2015, to determine which airlines are more likely to be delayed.", "# new DataFrame with relevant variables\n\ndiverted_cancelled = flights2[['AIRLINE','DIVERTED', 'CANCELLED']]\ndiverted_cancelled.head()\n\ndiverted_cancelled = diverted_cancelled.groupby(['AIRLINE']).sum()\n\n# total number of flights scheduled by each airline in 2015\n\ntotal_flights = flights2[['AIRLINE', 'FLIGHT_NUMBER']].groupby(['AIRLINE']).count()\ntotal_flights.rename(columns={'FLIGHT_NUMBER': 'TOTAL_FLIGHTS'}, inplace=True)\ntotal_flights\n\n# Tangent: for fun, we can see which airlines were dominant in the number of domestic flights\n\ntotal_flights['TOTAL_FLIGHTS'].plot.pie(figsize=(12,12), rot=45, autopct='%1.0f%%', title='Market Share of Domestic Flights in 2015 by Airline')", "It appears that the airlines with the three largest market shares of domestic flights in 2015 were Southwest (22%), Delta (15%), and American Airlines (12%).", "# resetting the index to merge the two DataFrames\n\ntotal_flights2 = total_flights.reset_index()\ndiverted_cancelled2 = diverted_cancelled.reset_index()\n\n# check\n\ntotal_flights2\ndiverted_cancelled2\n\n# calculate diversion and cancellation rates (percentages) for each airline\n\ndc_rates = pd.merge(diverted_cancelled2, total_flights2, on='AIRLINE')\ndc_rates['DIVERTION_RATE'] = dc_rates['DIVERTED']/dc_rates['TOTAL_FLIGHTS']\ndc_rates['CANCELLATION_RATE'] = dc_rates['CANCELLED']/dc_rates['TOTAL_FLIGHTS']\ndc_rates = dc_rates.set_index(['AIRLINE'])\ndc_rates\n\ndc_rates[['DIVERTION_RATE','CANCELLATION_RATE']].plot.bar(legend=True, figsize=(13,11),rot=45)\n", "Overall, the chance of cancellation or diversion is very low, with the diversion rate almost nonexistent. Flights are rarely diverted and only in extreme situations due to plane safety failures, attacks, or natural disasters. We could use the flight diversion rate as a proxy for the safety of flying in 2015, and are happy to see this rate way below 0.01%. American Airlines and its partner American Eagle Airlines were the most likely to cancel a flight in 2015, while Hawaiian Airlines and Alaska Airlines were the least likely. (It is interesting to note that the two airlines operating out of the two states not in the continental U.S.
are the least likely to be cancelled, despite having to travel the greatest distance.)", "# create a DataFrame with all flights that had a positive arrival delay time\n\ndelayed = flights2['ARRIVAL_DELAY'] >= 0\npos_delay = flights2[delayed]\npos_delay.head()\n\n# groupby function to determine how many flights had delayed arrival for each airline\n\npos_delay = pos_delay[['AIRLINE','ARRIVAL_DELAY']].groupby(['AIRLINE']).count()\n\npos_delay2 = pos_delay.reset_index()\n\n# merge with total_flights to calculate percentage of flights that were delayed for each airline\n\ndelay_rates = pd.merge(pos_delay2, total_flights2, on='AIRLINE')\ndelay_rates['DELAY_RATE'] = delay_rates['ARRIVAL_DELAY']/delay_rates['TOTAL_FLIGHTS']\ndelay_rates = delay_rates.set_index(['AIRLINE'])\ndelay_rates.sort_values(by='DELAY_RATE', ascending=True, inplace=True)\ndelay_rates.reset_index()\n\ndelay_rates[['DELAY_RATE']].plot.bar(legend=True, figsize=(13,11),rot=45)\n", "Spirit Airlines has the largest chance of being delayed upon arrival, with Delta Airlines the least likely.\nHowever, when we combine the diversion rate, cancellation rate, and delay rate, we see that delays account for the majority of flights that didn't operate as scheduled for all airlines across the board.", "# combining the two into one DataFrame\n\nall_rates = pd.merge(dc_rates.reset_index(), delay_rates.reset_index()).set_index(['AIRLINE'])\nall_rates\n\nall_rates[['DIVERTION_RATE','CANCELLATION_RATE','DELAY_RATE']].plot.bar(legend=True, figsize=(13,10),rot=45)", "Conclusion\nObviously, delays are a lot more prevalent than diverted or cancelled flights. In conclusion, it appears that Delta Airlines was the most punctual domestic airline in 2015. Delta Airlines had the lowest average delay rate upon arrival and the third lowest cancellation rate. We can therefore state that if punctuality is a passenger's top priority when flying, we recommend Delta Airlines. The airline with the highest average delay rate was Spirit Airlines, followed closely by Frontier. (While these two airlines' flights are more likely to be delayed and arrive late, they are known as two of the cheapest airlines to fly in the U.S. While we are not observing ticket prices and affordability of airlines, it is still important to note.)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Kaggle/learntools
notebooks/intro_to_programming/raw/tut5.ipynb
apache-2.0
[ "Introduction\nWhen doing data science, you need a way to organize your data so you can work with it efficiently. Python has many data structures available for holding your data, such as lists, sets, dictionaries, and tuples. In this tutorial, you will learn how to work with Python lists.\nMotivation\nIn the Petal to the Metal competition, your goal is to classify the species of a flower based only on its image. (This is a common task in computer vision, and it is called image classification.) Towards this goal, say you organize the names of the flower species in the data. \nOne way to do this is by organizing the names in a Python string.", "flowers = \"pink primrose,hard-leaved pocket orchid,canterbury bells,sweet pea,english marigold,tiger lily,moon orchid,bird of paradise,monkshood,globe thistle\"\n\nprint(type(flowers))\nprint(flowers)", "Even better is to represent the same data in a Python list. To create a list, you need to use square brackets ([, ]) and separate each item with a comma. Every item in the list is a Python string, so each is enclosed in quotation marks.", "flowers_list = [\"pink primrose\", \"hard-leaved pocket orchid\", \"canterbury bells\", \"sweet pea\", \"english marigold\", \"tiger lily\", \"moon orchid\", \"bird of paradise\", \"monkshood\", \"globe thistle\"]\n\nprint(type(flowers_list))\nprint(flowers_list)", "At first glance, it doesn't look too different, whether you represent the information in a Python string or list. But as you will see, there are a lot of tasks that you can more easily do with a list. For instance, a list will make it easier to:\n- get an item at a specified position (first, second, third, etc), \n- check the number of items, and\n- add and remove items.\nLists\nLength\nWe can count the number of entries in any list with len(), which is short for \"length\". You need only supply the name of the list in the parentheses.", "# The list has ten entries\nprint(len(flowers_list))", "Indexing\nWe can refer to any item in the list according to its position in the list (first, second, third, etc). This is called indexing.\nNote that Python uses zero-based indexing, which means that:\n- to pull the first entry in the list, you use 0,\n- to pull the second entry in the list, you use 1, and\n- to pull the final entry in the list, you use one less than the length of the list.", "print(\"First entry:\", flowers_list[0])\nprint(\"Second entry:\", flowers_list[1])\n\n# The list has length ten, so we refer to final entry with 9\nprint(\"Last entry:\", flowers_list[9])", "Side Note: You may have noticed that in the code cell above, we use a single print() to print multiple items (both a Python string (like \"First entry:\") and a value from the list (like flowers_list[0]). To print multiple things in Python with a single command, we need only separate them with a comma.\nSlicing\nYou can also pull a segment of a list (for instance, the first three entries or the last two entries). This is called slicing. 
For instance:\n- to pull the first x entries, you use [:x], and\n- to pull the last y entries, you use [-y:].", "print(\"First three entries:\", flowers_list[:3])\nprint(\"Final two entries:\", flowers_list[-2:])", "As you can see above, when we slice a list, it returns a new, shortened list.\nRemoving items\nRemove an item from a list with .remove(), and put the item you would like to remove in parentheses.", "flowers_list.remove(\"globe thistle\")\nprint(flowers_list)", "Adding items\nAdd an item to a list with .append(), and put the item you would like to add in parentheses.", "flowers_list.append(\"snapdragon\")\nprint(flowers_list)", "Lists are not just for strings\nSo far, we have only worked with lists where each item in the list is a string. But lists can have items with any data type, including booleans, integers, and floats.\nAs an example, consider hardcover book sales in the first week of April 2000 in a retail store.", "hardcover_sales = [139, 128, 172, 139, 191, 168, 170]", "Here, hardcover_sales is a list of integers. Similar to when working with strings, you can still do things like get the length, pull individual entries, and extend the list.", "print(\"Length of the list:\", len(hardcover_sales))\nprint(\"Entry at index 2:\", hardcover_sales[2])", "You can also get the minimum with min() and the maximum with max().", "print(\"Minimum:\", min(hardcover_sales))\nprint(\"Maximum:\", max(hardcover_sales))", "To add every item in the list, use sum().", "print(\"Total books sold in one week:\", sum(hardcover_sales))", "We can also do similar calculations with slices of the list. In the next code cell, we take the sum from the first five days (sum(hardcover_sales[:5])), and then divide by five to get the average number of books sold in the first five days.", "print(\"Average books sold in first five days:\", sum(hardcover_sales[:5])/5)", "Your turn\nNow it's your turn to practice creating and modifying lists." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
eramirem/numerical-methods-pdes
01_python.ipynb
cc0-1.0
[ "<table>\n <tr align=left><td><img align=left src=\"https://i.creativecommons.org/l/by/4.0/88x31.png\">\n <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td>\n</table>\n\nReview: Python\nAPMA 4300 Resources:\n - Intro to Python\n - Intro to NumPy\n - Intro to Matplotlib\nOther resources:\n - Basic Python\n - Software Carpentry - Programming in Python\n - NumPy Intro\nInstallation\nThere are two options this semester for getting the necessary software:\n1. Install on your own machine\n1. Use a cloud computing platform\nYour Own Machine\nThe easiest way to install all the components you will need for the class is to use Continuum Analytics' Anaconda distribution. We will be using python 2.7.x for all in class demos and homework so I strongly suggest you do not get the Python 3.4 version.\nAlternatives to using Anaconda also exist in the form of Enthought's Canopy distribution which provides all the tools you will need as well along with an IDE (development environment).\nCloud Computing\nInstead of running things locally on your machine there are a number of cloud services that you are welcome to use in order to get everything running easily.\n 1. Sage-Math-Cloud - Create an account on Sage-Math-Cloud and interact with python via the provided terminal or Ipython notebook inteface.\n 1. Wakari - Continuum also has a free cloud service called Wakari that you can sign up for which provides an Anaconda installation along with similar tools to Sage-Math-Cloud.\nBasic Python\nPython is a dynamic, interpreted language used throughout computational engineering and science. Due to its ease of use, cost, and available tools we will be learning the material in this course using Python tools exclusively. For those who are coming in without prior Python knowledge but a strong programming background this lecture should acquaint you with the basics. For people who do not know Python and do not have a strong programming background it is strongly recommended that you take the time to look through the other tutorials mentioned above.\nMath\nBasic math in Python is fairly straight forward with all the usual types of operations (+, -, *, /, **) which are addition, subtraction, multiplication, division and power. Note that in Python though we need to be careful about the types of numbers we use.", "1 / 2", "Python returns the floor of the 1 / 2 because we gave it integers to divide. It then interprets the result as also needing to be an integer. If one of the numbers was a decimal number we would have a decimal number as a result (really these are floating point numbers float).", "1. / 2", "In compound statements it can become more difficult to figure out where possible rounding might occur so be careful when you evaluate statements.", "4. + 4.0**(3.0/2)", "Python also understands imaginary numbers:", "4 + 3j", "Some of the more advanced mathematical functions are stored in modules. In order to use these functions we must first import them into our notebook and then use them.", "import math\n\nmath?\n\nmath.sqrt(2.0)\n\nmath.sin(math.pi / 2.0)\n\nfrom math import *\nsin(pi)", "Variables\nVariables are defined and assigned to like many other languages.", "num_students = 80\nroom_capacity = 85\n(room_capacity - num_students) / room_capacity * 100.0", "Note that we do not get what we expect from this expression as we expected from above. 
What would we have to change to get this to work?\nWe could go back to change our initializations but we could also use the function float to force these values to be of float type:", "float(room_capacity - num_students) / float(room_capacity) * 100.0", "Control Flow\nif statements are the most basic unit of logic and allows us to conditionally operate on things.", "x = 4\nif x > 5:\n print \"x is greater than 5\"\nelif x < 5:\n print \"x is less than 5\"\nelse:\n print \"x is equal to 5\"", "for allows us to repeat tasks over a range of values or objects.", "for i in range(5):\n print i\n\nfor i in range(3,7):\n print i\n\nfor animal in ['cat', 'dog', 'chinchilla']:\n print animal\n\nfor n in range(2, 10):\n is_prime = True\n for x in range(2, n):\n if n % x == 0:\n print n, 'equals', x, '*', n / x\n is_prime = False\n break\n if is_prime:\n print \"%s is a prime number\" % (n)", "Functions\nFunctions are a fundamental way in any language to break up the code into pieces that can be isolated and repeatedly used based on their input.", "def my_print_function(x):\n print x\n\nmy_print_function(3)\n\ndef my_add_function(a, b):\n return a + b, b\n\nmy_add_function(3.0, 5.0)\n\ndef my_crazy_function(a, b, c=1.0):\n d = a + b**c\n return d\n\nmy_crazy_function(2.0, 3.0), my_crazy_function(2.0, 3.0, 2.0), my_crazy_function(2.0, 3.0, c=2.0)\n\ndef my_other_function(a, b, c=1.0):\n return a + b, a + b**c, a + b**(3.0 / 7.0)\n\nmy_other_function(2.0, 3.0, c=2.0)\n\ndef fibonacci(n):\n \"\"\"Return a list of the Fibonacci sequence up to n\"\"\"\n values = [0, 1]\n while values[-1] <= n:\n values.append(values[-1] + values[-2])\n print values\n return values\n\nfibonacci(100)\nfibonacci?", "NumPy\nThe most important part of NumPy is the specification of an array object called an ndarray. This object as its name suggests stores array like information in multiple dimensions. These objects allow a programmer to access the data in a multitude of different ways as well as create common types of arrays and operate on these arrays easily.", "import numpy", "Constructors\nWays to make arrays in NumPy.", "my_array = numpy.array([[1, 2], [3, 4]])\nprint my_array\n\nnumpy.linspace(-1, 1, 10)\n\nnumpy.zeros([3, 3])\n\nnumpy.ones([2, 3, 2])\n\nnumpy.empty([2,3])", "Access\nHow do we access data in an array?", "my_array[0, 1]\n\nmy_array[:,0]\n\nmy_vec = numpy.array([[1], [2]])\nprint my_vec\n\nnumpy.dot(my_array, my_vec)\nnumpy.cross?\n\nmy_array * my_vec", "Manipulations\nHow do we manipulate arrays beyond indexing into them?", "A = numpy.array([[1, 2, 3], [4, 5, 6]])\nprint \"A Shape = \", A.shape\nprint A\n\nB = A.reshape((6,1))\nprint \"A Shape = \", A.shape\nprint \"B Shape = \", B.shape\nprint B\n\nnumpy.tile(A, (2,2))\n\nA.transpose()\n\nA = numpy.array([[1,2,3],[4,5,6],[7,8,9]])\nprint A\nprint A.shape\n\nB = numpy.arange(1,10)\nprint B\nprint B.reshape((3,3))\nB.reshape?\nC = B.reshape((3,3))\n\nprint A * C\n\nnumpy.dot(A, C)", "Mathematical Functions", "x = numpy.linspace(-2.0 * numpy.pi, 2.0 * numpy.pi, 62)\ny = numpy.sin(x)\nprint y\n\nx = numpy.linspace(-1, 1, 20)\nnumpy.sqrt(x)\n\nx = numpy.linspace(-1, 1, 20, dtype=complex)\nnumpy.sqrt(x)", "Linear Algebra\nSome functions for linear algebra available in NumPy. 
Full implementation in scipy.linalg.", "numpy.linalg.norm(x)\nnumpy.linalg.norm?\n\nM = numpy.array([[0,2],[8,0]])\nb = numpy.array([1,2])\nprint M\nprint b\n\nx = numpy.linalg.solve(M,b)\nprint x\n\nlamda,V = numpy.linalg.eig(M)\nprint lamda\nprint V", "SciPy\nNumPy contains the basic building blocks for numerical computing, whereas the SciPy packages contain all the higher-level functionality. Refer to the SciPy webpage for more information on what it contains. There are also a number of SciKits which provide additional functionality.\nMatplotlib\nThe most common facility for plotting with the Python numerical suite is to use the matplotlib package. We will cover a few of the basic approaches to plotting figures. If you are interested in learning more about matplotlib or are looking to see how you might create a particular plot, check out the matplotlib gallery for inspiration.\nRefer to the APAM 4300 Notes on matplotlib for detailed examples.\nJupyter Notebooks\nAll class notes and assignments will be done within Jupyter notebooks (formerly IPython notebooks). It is important that you become acquainted with these as there are a couple of pitfalls that you should be aware of.\n - Before turning in an assignment make sure to go to the \"Kernel\" menu and restart the notebook. After this select \"Run All\" from the \"Cell\" menu to rerun everything. This will ensure that you have properly defined everything in your notebook and have not accidentally erased a variable definition.\n - Use version 4.0 of the notebook so as not to run into problems with deleting notebook cells.\nCode Styling\nIt is very important in practice to write readable and understandable code. Here are a few things to keep in mind while programming in and out of this class; we will work on this actively as the semester progresses. The standard to which Python programs are written is called PEP 8 and contains the following basic guidelines:\n - Use 4-space indentation, no tabs\n - Wrap lines that exceed 80 characters\n - Make judicious use of blank lines to separate out functions, classes, and larger blocks of contained code\n - Comment! Also, put comments on their own line when possible\n - Use docstrings (function descriptions)\n - Use spaces around operators and after commas, a = f(1, 2) + g(3, 4)\n - Name your classes and functions consistently.\n - Use CamelCase for classes\n - Use lower_case_with_underscores for functions and variables\n - When in doubt be verbose with your comments and names of variables, functions, and classes\nGood coding style will mean that we can more easily grade your assignments and understand what you are doing. Please make sure to keep this in mind, especially when naming variables, making comments, or documenting your functions." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/messy-consortium/cmip6/models/sandbox-2/landice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: MESSY-CONSORTIUM\nSource ID: SANDBOX-2\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:10\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-2', 'landice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --&gt; Mass Balance\n7. Ice --&gt; Mass Balance --&gt; Basal\n8. Ice --&gt; Mass Balance --&gt; Frontal\n9. Ice --&gt; Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Ice Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify how ice albedo is modelled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Atmospheric Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Oceanic Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the ocean and ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs an adative grid being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. Base Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe base resolution (in metres), before any adaption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Resolution Limit\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Projection\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of glaciers in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of glaciers, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Dynamic Areal Extent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes the model include a dynamic glacial extent?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Grounding Line Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.3. Ice Sheet\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice sheets simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.4. Ice Shelf\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice shelves simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Ice --&gt; Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Ice --&gt; Mass Balance --&gt; Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Ice --&gt; Mass Balance --&gt; Frontal\nDescription of claving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Melting\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Ice --&gt; Dynamics\n**\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description if ice sheet and ice shelf dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Approximation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nApproximation type used in modelling ice dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Adaptive Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.4. Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kit-cel/lecture-examples
mloc/ch1_Preliminaries/gradient_descent.ipynb
gpl-2.0
[ "Gradient Descent Visualization\nThis code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>\nThis code illustrates:\n* Gradient descent with fixed step size\n* Interactive visualization of influence of step size", "import importlib\nautograd_available = True\n# if automatic differentiation is available, use it\ntry:\n import autograd\nexcept ImportError:\n autograd_available = False\n pass\n\nif autograd_available:\n import autograd.numpy as np \n from autograd import elementwise_grad as egrad\nelse:\n import numpy as np\n \nimport matplotlib.pyplot as plt\nfrom ipywidgets import interactive\nimport ipywidgets as widgets\n%matplotlib inline \n\nif autograd_available:\n print('Using autograd to compute gradients')\nelse:\n print('Using hand-calculated gradient')", "Specify the function to minimize as a simple python function.<br>\nWe have implemented some test functions that can be selected using the function selector, however, you are free to implement your own functions.<br>\nRight now, we have implemented the following functions:\n1. $\\frac{1}{2}x^2$, which is convex and has a global minimum at $x=0$\n2. $\\frac{1}{2}x^3$, which has no global minimum, but an inflection point at $x=0$\n3. $x^2+x^3$, which has a minimum at $x=0$ and a maximum at $x=-\\frac{2}{3}$\nThe derivative is automatically computed using the autograd library, which returns a function that evaluates the gradient of myfun", "function_select = 3\n\ndef myfun(x):\n functions = {\n 1: 0.5*x**2,\n 2: 0.5*x**3,\n 3: x**2+x**3\n }\n return functions.get(function_select)\n\nif autograd_available:\n gradient = egrad(myfun)\nelse:\n def gradient(x):\n functions = {\n 1: x,\n 2: 1.5*x**2,\n 3: 2*x+3*x**2\n }\n return functions.get(function_select)\n", "Plot the function and its derivative", "x = np.linspace(-3,3,100)\nfy = myfun(x)\ngy = gradient(x) \n\nplt.figure(1,figsize=(10,6))\nplt.rcParams.update({'font.size': 14})\nplt.plot(x,fy,x,gy)\nplt.grid(True)\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.legend([\"$f(x)$\",\"$f^\\prime(x)$\"])\nplt.show()", "Simple gradient descent strategy using only sign of the derivative\nCarry out the simple gradient descent strategy by using only the sign of the gradient\n\\begin{equation}\nx_i = x_{i-1} - \\epsilon\\cdot \\mathrm{sign}(f^\\prime(x_{i-1}))\n\\end{equation}", "epsilon = 0.5\nstart = 3.75\n\npoints = []\nwhile abs(gradient(start)) > 1e-8 and len(points) < 50:\n points.append( (start,myfun(start)) )\n start = start - epsilon*np.sign(gradient(start))\n\nplt.figure(1,figsize=(15,6))\nplt.rcParams.update({'font.size': 14})\nplt.subplot(1,2,1)\nplt.scatter(list(zip(*points))[0],list(zip(*points))[1],c=range(len(points),0,-1),cmap='gray',s=40,edgecolors='k')\nplt.plot(x,fy)\nplt.grid(True)\nplt.xlabel(\"x\")\nplt.ylabel(\"y=f(x)\")\n\nplt.subplot(1,2,2)\nplt.plot(range(0,len(points)),list(zip(*points))[0])\nplt.grid(True)\nplt.xlabel(\"Step i\")\nplt.ylabel(\"x_i\")\nplt.show()", "Gradient descent\nCarry out the final gradient descent strategy, which is given by\n\\begin{equation}\nx_i = x_{i-1} - \\epsilon\\cdot f^\\prime(x_{i-1})\n\\end{equation}", "epsilon = 0.01\nstart = 3.75\n\npoints = []\nwhile abs(gradient(start)) > 1e-8 and len(points) < 500:\n points.append( (start,myfun(start)) )\n start = start - epsilon*gradient(start) \n\nplt.figure(1,figsize=(15,6))\nplt.rcParams.update({'font.size': 
14})\nplt.subplot(1,2,1)\nplt.scatter(list(zip(*points))[0],list(zip(*points))[1],c=range(len(points),0,-1),cmap='gray',s=40,edgecolors='k')\nplt.plot(x,fy)\nplt.grid(True)\nplt.xlabel(\"x\")\nplt.ylabel(\"y=f(x)\")\n\nplt.subplot(1,2,2)\nplt.plot(range(0,len(points)),list(zip(*points))[0])\nplt.grid(True)\nplt.xlabel(\"Step i\")\nplt.ylabel(\"x_i\")\nplt.show()", "Here, we provide an interactive tool to play around yourself with parameters of the gradient descent.", "def interactive_gradient_descent(start,epsilon, maximum_steps, xmin, xmax):\n points = []\n # assume 1e-10 is about zero\n while abs(gradient(start)) > 1e-10 and len(points) < maximum_steps:\n points.append( (start,myfun(start)) )\n start = start - epsilon*gradient(start) \n \n plt.figure(1,figsize=(15,6))\n plt.rcParams.update({'font.size': 14})\n plt.subplot(1,2,1)\n plt.scatter(list(zip(*points))[0],list(zip(*points))[1],c=range(len(points),0,-1),cmap='gray',s=40,edgecolors='k')\n px = np.linspace(xmin,xmax,1000)\n pfy = myfun(px) \n plt.plot(px,pfy)\n plt.autoscale(enable=True,tight=True)\n plt.xlim(xmin,xmax)\n plt.grid(True)\n plt.xlabel(\"x\")\n plt.ylabel(\"y=f(x)\")\n\n plt.subplot(1,2,2)\n plt.plot(range(0,len(points)),list(zip(*points))[0])\n plt.grid(True)\n plt.xlabel(\"Step i\")\n plt.ylabel(\"x_i\")\n plt.show() \n\nepsilon_values = np.arange(0.0,0.1,0.0001)\nstyle = {'description_width': 'initial'}\ninteractive_update = interactive(interactive_gradient_descent, \\\n epsilon = widgets.SelectionSlider(options=[(\"%g\"%i,i) for i in epsilon_values], value=0.01, continuous_update=False,description='epsilon',layout=widgets.Layout(width='50%'),style=style), \\\n start = widgets.FloatSlider(min=-5.0,max=5.0,step=0.0001,value=3.7, continuous_update=False, description='Start x', layout=widgets.Layout(width='75%'), style=style), \\\n maximum_steps = widgets.IntSlider(min=20, max=500, value= 200, continuous_update=False, description='Number steps',layout=widgets.Layout(width='50%'),style=style), \\\n xmin = widgets.FloatSlider(min=-10, max=0, step=0.1, value=-5, continuous_update=False, description='Plot negative x limit',layout=widgets.Layout(width='50%'), style=style), \\\n xmax = widgets.FloatSlider(min=0, max=10, step=0.1, value=5, continuous_update=False, description='Plot positive x limit',layout=widgets.Layout(width='50%'),style=style))\n\n\noutput = interactive_update.children[-1]\noutput.layout.height = '400px'\ninteractive_update" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
arcyfelix/Courses
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/.ipynb_checkpoints/05-Groupby-checkpoint.ipynb
apache-2.0
[ "<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nGroupby\nThe groupby method allows you to group rows of data together and call aggregate functions", "import pandas as pd\n# Create dataframe\ndata = {'Company': ['GOOG', 'GOOG', 'MSFT', 'MSFT', 'FB', 'FB'],\n 'Person': ['Sam', 'Charlie', 'Amy', 'Vanessa', 'Carl', 'Sarah'],\n 'Sales': [200, 120, 340, 124, 243, 350]}\n\ndf = pd.DataFrame(data)\n\ndf", "Now you can use the .groupby() method to group rows together based off of a column name. For instance let's group based off of Company. This will create a DataFrameGroupBy object:", "df.groupby('Company')", "You can save this object as a new variable:", "by_comp = df.groupby(\"Company\")", "And then call aggregate methods off the object:", "by_comp.mean()\n\ndf.groupby('Company').mean()", "More examples of aggregate methods:", "by_comp.std()\n\nby_comp.min()\n\nby_comp.max()\n\nby_comp.count()\n\nby_comp.describe()\n\nby_comp.describe().transpose()\n\nby_comp.describe().transpose()['GOOG']", "Great Job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]