repo_name: string (lengths 6-77)
path: string (lengths 8-215)
license: string (15 classes)
cells: sequence
types: sequence
jwyang/JULE-Caffe
examples/net_surgery.ipynb
mit
[ "Net Surgery\nCaffe networks can be transformed to your particular needs by editing the model parameters. The data, diffs, and parameters of a net are all exposed in pycaffe.\nRoll up your sleeves for net surgery with pycaffe!", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport Image\n\n# Make sure that caffe is on the python path:\ncaffe_root = '../' # this file is expected to be in {caffe_root}/examples\nimport sys\nsys.path.insert(0, caffe_root + 'python')\n\nimport caffe\n\n# configure plotting\nplt.rcParams['figure.figsize'] = (10, 10)\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'", "Designer Filters\nTo show how to load, manipulate, and save parameters we'll design our own filters into a simple network that's only a single convolution layer. This net has two blobs, data for the input and conv for the convolution output and one parameter conv for the convolution filter weights and biases.", "# Load the net, list its data and params, and filter an example image.\ncaffe.set_mode_cpu()\nnet = caffe.Net('net_surgery/conv.prototxt', caffe.TEST)\nprint(\"blobs {}\\nparams {}\".format(net.blobs.keys(), net.params.keys()))\n\n# load image and prepare as a single input batch for Caffe\nim = np.array(Image.open('images/cat_gray.jpg'))\nplt.title(\"original image\")\nplt.imshow(im)\nplt.axis('off')\n\nim_input = im[np.newaxis, np.newaxis, :, :]\nnet.blobs['data'].reshape(*im_input.shape)\nnet.blobs['data'].data[...] = im_input", "The convolution weights are initialized from Gaussian noise while the biases are initialized to zero. These random filters give output somewhat like edge detections.", "# helper show filter outputs\ndef show_filters(net):\n net.forward()\n plt.figure()\n filt_min, filt_max = net.blobs['conv'].data.min(), net.blobs['conv'].data.max()\n for i in range(3):\n plt.subplot(1,4,i+2)\n plt.title(\"filter #{} output\".format(i))\n plt.imshow(net.blobs['conv'].data[0, i], vmin=filt_min, vmax=filt_max)\n plt.tight_layout()\n plt.axis('off')\n\n# filter the image with initial \nshow_filters(net)", "Raising the bias of a filter will correspondingly raise its output:", "# pick first filter output\nconv0 = net.blobs['conv'].data[0, 0]\nprint(\"pre-surgery output mean {:.2f}\".format(conv0.mean()))\n# set first filter bias to 10\nnet.params['conv'][1].data[0] = 1.\nnet.forward()\nprint(\"post-surgery output mean {:.2f}\".format(conv0.mean()))", "Altering the filter weights is more exciting since we can assign any kernel like Gaussian blur, the Sobel operator for edges, and so on. 
The following surgery turns the 0th filter into a Gaussian blur and the 1st and 2nd filters into the horizontal and vertical gradient parts of the Sobel operator.\nSee how the 0th output is blurred, the 1st picks up horizontal edges, and the 2nd picks up vertical edges.", "ksize = net.params['conv'][0].data.shape[2:]\n# make Gaussian blur\nsigma = 1.\ny, x = np.mgrid[-ksize[0]//2 + 1:ksize[0]//2 + 1, -ksize[1]//2 + 1:ksize[1]//2 + 1]\ng = np.exp(-((x**2 + y**2)/(2.0*sigma**2)))\ngaussian = (g / g.sum()).astype(np.float32)\nnet.params['conv'][0].data[0] = gaussian\n# make Sobel operator for edge detection\nnet.params['conv'][0].data[1:] = 0.\nsobel = np.array((-1, -2, -1, 0, 0, 0, 1, 2, 1), dtype=np.float32).reshape((3,3))\nnet.params['conv'][0].data[1, 0, 1:-1, 1:-1] = sobel # horizontal\nnet.params['conv'][0].data[2, 0, 1:-1, 1:-1] = sobel.T # vertical\nshow_filters(net)", "With net surgery, parameters can be transplanted across nets, regularized by custom per-parameter operations, and transformed according to your schemes.\nCasting a Classifier into a Fully Convolutional Network\nLet's take the standard Caffe Reference ImageNet model \"CaffeNet\" and transform it into a fully convolutional net for efficient, dense inference on large inputs. This model generates a classification map that covers a given input size instead of a single classification. In particular a 8 $\\times$ 8 classification map on a 451 $\\times$ 451 input gives 64x the output in only 3x the time. The computation exploits a natural efficiency of convolutional network (convnet) structure by amortizing the computation of overlapping receptive fields.\nTo do so we translate the InnerProduct matrix multiplication layers of CaffeNet into Convolutional layers. This is the only change: the other layer types are agnostic to spatial size. Convolution is translation-invariant, activations are elementwise operations, and so on. The fc6 inner product when carried out as convolution by fc6-conv turns into a 6 \\times 6 filter with stride 1 on pool5. Back in image space this gives a classification for each 227 $\\times$ 227 box with stride 32 in pixels. Remember the equation for output map / receptive field size, output = (input - kernel_size) / stride + 1, and work out the indexing details for a clear understanding.", "!diff net_surgery/bvlc_caffenet_full_conv.prototxt ../models/bvlc_reference_caffenet/deploy.prototxt", "The only differences needed in the architecture are to change the fully connected classifier inner product layers into convolutional layers with the right filter size -- 6 x 6, since the reference model classifiers take the 36 elements of pool5 as input -- and stride 1 for dense classification. 
Note that the layers are renamed so that Caffe does not try to blindly load the old parameters when it maps layer names to the pretrained model.", "# Make sure that caffe is on the python path:\ncaffe_root = '../' # this file is expected to be in {caffe_root}/examples\nimport sys\nsys.path.insert(0, caffe_root + 'python')\n\nimport caffe\n\n# Load the original network and extract the fully connected layers' parameters.\nnet = caffe.Net('../models/bvlc_reference_caffenet/deploy.prototxt', \n '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel', \n caffe.TEST)\nparams = ['fc6', 'fc7', 'fc8']\n# fc_params = {name: (weights, biases)}\nfc_params = {pr: (net.params[pr][0].data, net.params[pr][1].data) for pr in params}\n\nfor fc in params:\n print '{} weights are {} dimensional and biases are {} dimensional'.format(fc, fc_params[fc][0].shape, fc_params[fc][1].shape)", "Consider the shapes of the inner product parameters. The weight dimensions are the output and input sizes while the bias dimension is the output size.", "# Load the fully convolutional network to transplant the parameters.\nnet_full_conv = caffe.Net('net_surgery/bvlc_caffenet_full_conv.prototxt', \n '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',\n caffe.TEST)\nparams_full_conv = ['fc6-conv', 'fc7-conv', 'fc8-conv']\n# conv_params = {name: (weights, biases)}\nconv_params = {pr: (net_full_conv.params[pr][0].data, net_full_conv.params[pr][1].data) for pr in params_full_conv}\n\nfor conv in params_full_conv:\n print '{} weights are {} dimensional and biases are {} dimensional'.format(conv, conv_params[conv][0].shape, conv_params[conv][1].shape)", "The convolution weights are arranged in output $\\times$ input $\\times$ height $\\times$ width dimensions. To map the inner product weights to convolution filters, we could roll the flat inner product vectors into channel $\\times$ height $\\times$ width filter matrices, but actually these are identical in memory (as row major arrays) so we can assign them directly.\nThe biases are identical to those of the inner product.\nLet's transplant!", "for pr, pr_conv in zip(params, params_full_conv):\n conv_params[pr_conv][0].flat = fc_params[pr][0].flat # flat unrolls the arrays\n conv_params[pr_conv][1][...] = fc_params[pr][1]", "Next, save the new model weights.", "net_full_conv.save('net_surgery/bvlc_caffenet_full_conv.caffemodel')", "To conclude, let's make a classification map from the example cat image and visualize the confidence of \"tiger cat\" as a probability heatmap. 
This gives an 8-by-8 prediction on overlapping regions of the 451 $\\times$ 451 input.", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# load input and configure preprocessing\nim = caffe.io.load_image('images/cat.jpg')\ntransformer = caffe.io.Transformer({'data': net_full_conv.blobs['data'].data.shape})\ntransformer.set_mean('data', np.load('../python/caffe/imagenet/ilsvrc_2012_mean.npy').mean(1).mean(1))\ntransformer.set_transpose('data', (2,0,1))\ntransformer.set_channel_swap('data', (2,1,0))\ntransformer.set_raw_scale('data', 255.0)\n# make classification map by forward and print prediction indices at each location\nout = net_full_conv.forward_all(data=np.asarray([transformer.preprocess('data', im)]))\nprint out['prob'][0].argmax(axis=0)\n# show net input and confidence map (probability of the top prediction at each location)\nplt.subplot(1, 2, 1)\nplt.imshow(transformer.deprocess('data', net_full_conv.blobs['data'].data[0]))\nplt.subplot(1, 2, 2)\nplt.imshow(out['prob'][0,281])", "The classifications include various cats -- 282 = tiger cat, 281 = tabby, 283 = persian -- and foxes and other mammals.\nIn this way the fully connected layers can be extracted as dense features across an image (see net_full_conv.blobs['fc6'].data for instance), which is perhaps more useful than the classification map itself.\nNote that this model isn't totally appropriate for sliding-window detection since it was trained for whole-image classification. Nevertheless it can work just fine. Sliding-window training and finetuning can be done by defining a sliding-window ground truth and loss such that a loss map is made for every location and solving as usual. (This is an exercise for the reader.)\nA thank you to Rowland Depp for first suggesting this trick." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
swirlingsand/deep-learning-foundations
seq2seq/.ipynb_checkpoints/sequence_to_sequence_implementation-checkpoint.ipynb
mit
[ "Character Sequence to Sequence\nIn this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models. This notebook was updated to work with TensorFlow 1.1 and builds on the work of Dave Currie. Check out Dave's post Text Summarization with Amazon Reviews.\n<img src=\"images/sequence-to-sequence.jpg\"/>\nDataset\nThe dataset lives in the /data/ folder. At the moment, it is made up of the following files:\n * letters_source.txt: The list of input letter sequences. Each sequence is its own line. \n * letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number.", "import numpy as np\nimport time\n\nimport helper\n\nsource_path = 'data/letters_source.txt'\ntarget_path = 'data/letters_target.txt'\n\nsource_sentences = helper.load_data(source_path)\ntarget_sentences = helper.load_data(target_path)", "Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.", "source_sentences[:50].split('\\n')", "source_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line from source_sentences. source_sentences contains a sorted characters of the line.", "target_sentences[:50].split('\\n')", "Preprocess\nTo do anything useful with it, we'll need to turn the each string into a list of characters: \n<img src=\"images/source_and_target_arrays.png\"/>\nThen convert the characters to their int values as declared in our vocabulary:", "def extract_character_vocab(data):\n special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>']\n\n set_words = set([character for line in data.split('\\n') for character in line])\n int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}\n vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}\n\n return int_to_vocab, vocab_to_int\n\n# Build int2letter and letter2int dicts\nsource_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)\ntarget_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)\n\n# Convert characters to ids\nsource_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<UNK>']) for letter in line] for line in source_sentences.split('\\n')]\ntarget_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<UNK>']) for letter in line] + [target_letter_to_int['<EOS>']] for line in target_sentences.split('\\n')] \n\nprint(\"Example source sequence\")\nprint(source_letter_ids[:3])\nprint(\"\\n\")\nprint(\"Example target sequence\")\nprint(target_letter_ids[:3])", "This is the final shape we need them to be in. 
We can now proceed to building the model.\nModel\nCheck the Version of TensorFlow\nThis will check to make sure you have the correct version of TensorFlow", "from distutils.version import LooseVersion\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))", "Hyperparameters", "# Number of Epochs\nepochs = 60\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 50\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 15\ndecoding_embedding_size = 15\n# Learning Rate\nlearning_rate = 0.001", "Input", "def get_model_inputs():\n input_data = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None], name='targets')\n lr = tf.placeholder(tf.float32, name='learning_rate')\n\n target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')\n max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')\n source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')\n \n return input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length\n", "Sequence to Sequence Model\nWe can now start defining the functions that will build the seq2seq model. We are building it from the bottom up with the following components:\n2.1 Encoder\n - Embedding\n - Encoder cell\n2.2 Decoder\n 1- Process decoder inputs\n 2- Set up the decoder\n - Embedding\n - Decoder cell\n - Dense output layer\n - Training decoder\n - Inference decoder\n2.3 Seq2seq model connecting the encoder and decoder\n2.4 Build the training graph hooking up the model with the \n optimizer\n\n2.1 Encoder\nThe first bit of the model we'll build is the encoder. Here, we'll embed the input data, construct our encoder, then pass the embedded data to the encoder.\n\n\nEmbed the input data using tf.contrib.layers.embed_sequence\n<img src=\"images/embed_sequence.png\" />\n\n\nPass the embedded input into a stack of RNNs. Save the RNN state and ignore the output.\n<img src=\"images/encoder.png\" />", "def encoding_layer(input_data, rnn_size, num_layers,\n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n\n\n # Encoder embedding\n enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)\n\n # RNN cell\n def make_cell(rnn_size):\n enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n return enc_cell\n\n enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)\n \n return enc_output, enc_state", "2.2 Decoder\nThe decoder is probably the most involved part of this model. The following steps are needed to create it:\n1- Process decoder inputs\n2- Set up the decoder components\n - Embedding\n - Decoder cell\n - Dense output layer\n - Training decoder\n - Inference decoder\n\nProcess Decoder Input\nIn the training process, the target sequences will be used in two different places:\n\nUsing them to calculate the loss\nFeeding them to the decoder during training to make the model more robust.\n\nNow we need to address the second point. 
Let's assume our targets look like this in their letter/word form (we're doing this for readibility. At this point in the code, these sequences would be in int form):\n<img src=\"images/targets_1.png\"/>\nWe need to do a simple transformation on the tensor before feeding it to the decoder:\n1- We will feed an item of the sequence to the decoder at each time step. Think about the last timestep -- where the decoder outputs the final word in its output. The input to that step is the item before last from the target sequence. The decoder has no use for the last item in the target sequence in this scenario. So we'll need to remove the last item. \nWe do that using tensorflow's tf.strided_slice() method. We hand it the tensor, and the index of where to start and where to end the cutting.\n<img src=\"images/strided_slice_1.png\"/>\n2- The first item in each sequence we feed to the decoder has to be GO symbol. So We'll add that to the beginning.\n<img src=\"images/targets_add_go.png\"/>\nNow the tensor is ready to be fed to the decoder. It looks like this (if we convert from ints to letters/symbols):\n<img src=\"images/targets_after_processing_1.png\"/>", "# Process the input we'll feed to the decoder\ndef process_decoder_input(target_data, vocab_to_int, batch_size):\n '''Remove the last word id from each batch and concat the <GO> to the begining of each batch'''\n ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)\n\n return dec_input", "Set up the decoder components\n - Embedding\n - Decoder cell\n - Dense output layer\n - Training decoder\n - Inference decoder\n\n1- Embedding\nNow that we have prepared the inputs to the training decoder, we need to embed them so they can be ready to be passed to the decoder. \nWe'll create an embedding matrix like the following then have tf.nn.embedding_lookup convert our input to its embedded equivalent:\n<img src=\"images/embeddings.png\" />\n2- Decoder Cell\nThen we declare our decoder cell. Just like the encoder, we'll use an tf.contrib.rnn.LSTMCell here as well.\nWe need to declare a decoder for the training process, and a decoder for the inference/prediction process. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).\nFirst, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.\n3- Dense output layer\nBefore we move to declaring our decoders, we'll need to create the output layer, which will be a tensorflow.python.layers.core.Dense layer that translates the outputs of the decoder to logits that tell us which element of the decoder vocabulary the decoder is choosing to output at each time step.\n4- Training decoder\nEssentially, we'll be creating two decoders which share their parameters. One for training and one for inference. The two are similar in that both created using tf.contrib.seq2seq.BasicDecoder and tf.contrib.seq2seq.dynamic_decode. They differ, however, in that we feed the the target sequences as inputs to the training decoder at each time step to make it more robust.\nWe can think of the training decoder as looking like this (except that it works with sequences in batches):\n<img src=\"images/sequence-to-sequence-training-decoder.png\"/>\nThe training decoder does not feed the output of each time step to the next. 
Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).\n5- Inference decoder\nThe inference decoder is the one we'll use when we deploy our model to the wild.\n<img src=\"images/sequence-to-sequence-inference-decoder.png\"/>\nWe'll hand our encoder hidden state to both the training and inference decoders and have it process its output. TensorFlow handles most of the logic for us. We just have to use the appropriate methods from tf.contrib.seq2seq and supply them with the appropriate inputs.", "def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size,\n target_sequence_length, max_target_sequence_length, enc_state, dec_input):\n # 1. Decoder Embedding\n target_vocab_size = len(target_letter_to_int)\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n\n # 2. Construct the decoder cell\n def make_cell(rnn_size):\n dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n return dec_cell\n\n dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n # 3. Dense layer to translate the decoder's output at each time \n # step into a choice from the target vocabulary\n output_layer = Dense(target_vocab_size,\n kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))\n\n\n # 4. Set up a training decoder and an inference decoder\n # Training Decoder\n with tf.variable_scope(\"decode\"):\n\n # Helper for the training process. Used by BasicDecoder to read inputs.\n training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,\n sequence_length=target_sequence_length,\n time_major=False)\n \n \n # Basic decoder\n training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n training_helper,\n enc_state,\n output_layer) \n \n # Perform dynamic decoding using the decoder\n training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)\n # 5. Inference Decoder\n # Reuses the same parameters trained by the training process\n with tf.variable_scope(\"decode\", reuse=True):\n start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')\n\n # Helper for the inference process.\n inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,\n start_tokens,\n target_letter_to_int['<EOS>'])\n\n # Basic decoder\n inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n inference_helper,\n enc_state,\n output_layer)\n \n # Perform dynamic decoding using the decoder\n inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)\n \n\n \n return training_decoder_output, inference_decoder_output", "2.3 Seq2seq model\nLet's now go a step above, and hook up the encoder and decoder using the methods we just declared", "\ndef seq2seq_model(input_data, targets, lr, target_sequence_length, \n max_target_sequence_length, source_sequence_length,\n source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size, \n rnn_size, num_layers):\n \n # Pass the input data through the encoder. 
We'll ignore the encoder output, but use the state\n _, enc_state = encoding_layer(input_data, \n rnn_size, \n num_layers, \n source_sequence_length,\n source_vocab_size, \n encoding_embedding_size)\n \n \n # Prepare the target sequences we'll feed to the decoder in training mode\n dec_input = process_decoder_input(targets, target_letter_to_int, batch_size)\n \n # Pass encoder state and decoder inputs to the decoders\n training_decoder_output, inference_decoder_output = decoding_layer(target_letter_to_int, \n decoding_embedding_size, \n num_layers, \n rnn_size,\n target_sequence_length,\n max_target_sequence_length,\n enc_state, \n dec_input) \n \n return training_decoder_output, inference_decoder_output\n \n\n", "Model outputs training_decoder_output and inference_decoder_output both contain a 'rnn_output' logits tensor that looks like this:\n<img src=\"images/logits.png\"/>\nThe logits we get from the training tensor we'll pass to tf.contrib.seq2seq.sequence_loss() to calculate the loss and ultimately the gradient.", "# Build the graph\ntrain_graph = tf.Graph()\n# Set the graph to default to ensure that it is ready for training\nwith train_graph.as_default():\n \n # Load the model inputs \n input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length = get_model_inputs()\n \n # Create the training and inference logits\n training_decoder_output, inference_decoder_output = seq2seq_model(input_data, \n targets, \n lr, \n target_sequence_length, \n max_target_sequence_length, \n source_sequence_length,\n len(source_letter_to_int),\n len(target_letter_to_int),\n encoding_embedding_size, \n decoding_embedding_size, \n rnn_size, \n num_layers) \n \n # Create tensors for the training logits and inference logits\n training_logits = tf.identity(training_decoder_output.rnn_output, 'logits')\n inference_logits = tf.identity(inference_decoder_output.sample_id, name='predictions')\n \n # Create the weights for sequence_loss\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n \n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n", "Get Batches\nThere's little processing involved when we retreive the batches. 
This is a simple example assuming batch_size = 2\nSource sequences (it's actually in int form, we're showing the characters for clarity):\n<img src=\"images/source_batch.png\" />\nTarget sequences (also in int, but showing letters for clarity):\n<img src=\"images/target_batch.png\" />", "def pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\ndef get_batches(targets, sources, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n sources_batch = sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n \n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n \n pad_source_lengths = []\n for source in pad_sources_batch:\n pad_source_lengths.append(len(source))\n \n yield pad_targets_batch, pad_sources_batch, pad_targets_lengths, pad_source_lengths", "Train\nWe're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.", "# Split data to training and validation sets\ntrain_source = source_letter_ids[batch_size:]\ntrain_target = target_letter_ids[batch_size:]\nvalid_source = source_letter_ids[:batch_size]\nvalid_target = target_letter_ids[:batch_size]\n(valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(get_batches(valid_target, valid_source, batch_size,\n source_letter_to_int['<PAD>'],\n target_letter_to_int['<PAD>']))\n\ndisplay_step = 20 # Check training loss after every 20 batches\n\ncheckpoint = \"best_model.ckpt\" \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n \n for epoch_i in range(1, epochs+1):\n for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(\n get_batches(train_target, train_source, batch_size,\n source_letter_to_int['<PAD>'],\n target_letter_to_int['<PAD>'])):\n \n # Training step\n _, loss = sess.run(\n [train_op, cost],\n {input_data: sources_batch,\n targets: targets_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths})\n\n # Debug message updating us on the status of the training\n if batch_i % display_step == 0 and batch_i > 0:\n \n # Calculate validation cost\n validation_loss = sess.run(\n [cost],\n {input_data: valid_sources_batch,\n targets: valid_targets_batch,\n lr: learning_rate,\n target_sequence_length: valid_targets_lengths,\n source_sequence_length: valid_sources_lengths})\n \n print('Epoch {:>3}/{} Batch {:>4}/{} - Loss: {:>6.3f} - Validation loss: {:>6.3f}'\n .format(epoch_i,\n epochs, \n batch_i, \n len(train_source) // batch_size, \n loss, \n validation_loss[0]))\n\n \n \n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, checkpoint)\n print('Model Trained and Saved')", "Prediction", "def source_to_seq(text):\n '''Prepare the text for the model'''\n sequence_length = 7\n return 
[source_letter_to_int.get(word, source_letter_to_int['<UNK>']) for word in text]+ [source_letter_to_int['<PAD>']]*(sequence_length-len(text))\n\n\n\n\ninput_sentence = 'hello'\ntext = source_to_seq(input_sentence)\n\ncheckpoint = \"./best_model.ckpt\"\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(checkpoint + '.meta')\n loader.restore(sess, checkpoint)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n \n #Multiply by batch_size to match the model's input parameters\n answer_logits = sess.run(logits, {input_data: [text]*batch_size, \n target_sequence_length: [len(text)]*batch_size, \n source_sequence_length: [len(text)]*batch_size})[0] \n\n\npad = source_letter_to_int[\"<PAD>\"] \n\nprint('Original Text:', input_sentence)\n\nprint('\\nSource')\nprint(' Word Ids: {}'.format([i for i in text]))\nprint(' Input Words: {}'.format(\" \".join([source_int_to_letter[i] for i in text])))\n\nprint('\\nTarget')\nprint(' Word Ids: {}'.format([i for i in answer_logits if i != pad]))\nprint(' Response Words: {}'.format(\" \".join([target_int_to_letter[i] for i in answer_logits if i != pad])))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tomislacker/python-mke-trash-pickup
notebooks/SimpleQuery.ipynb
unlicense
[ "import json\nimport requests\nfrom lxml import html\n\n\nclass XPathObject(object):\n input_properties = {}\n \"\"\"Dict of keys (property names) and XPaths (to read vals from)\"\"\"\n\n @classmethod\n def FromHTML(cls, html_contents):\n inst = cls()\n print(\"Reading through {b} bytes for {c} properties...\".format(\n b=len(html_contents),\n c=len(cls.input_properties)))\n\n tree = html.fromstring(html_contents)\n\n for attr_name, xpath in cls.input_properties.items():\n print(\"Searching for '{n}': {x}\".format(\n n=attr_name,\n x=xpath))\n elements = tree.xpath(xpath)\n\n if not len(elements):\n print(\"Failed to find '{n}': {x}\".format(\n n=attr_name,\n x=xpath))\n continue\n\n setattr(\n inst,\n attr_name,\n elements[0].text)\n\n return inst\n\n def __repr__(self):\n return json.dumps(\n self.__dict__,\n indent=4,\n separators=(',', ': '))\n\n\nclass RefusePickup(XPathObject):\n input_properties = {\n 'success_msg': '//*[@id=\"nConf\"]/h1',\n 'route_garbage': '//*[@id=\"nConf\"]/strong[1]',\n 'next_pickup_garbage': '//*[@id=\"nConf\"]/strong[2]',\n 'route_recyle': '//*[@id=\"nConf\"]/strong[3]',\n 'next_pickup_recycle_after': '//*[@id=\"nConf\"]/strong[4]',\n 'next_pickup_recycle_before': '//*[@id=\"nConf\"]/strong[5]',\n }\n\n\nclass RefuseQueryAddress(object):\n STREET_TYPES = [\n 'AV', # Avenue\n 'BL', #\n 'CR', # Circle\n 'CT', # Court\n 'DR', # Drive\n 'LA', # Lane\n 'PK', # Parkway\n 'PL', # Place\n 'RD', # Road\n 'SQ', # Square\n 'ST', # Street\n 'TR', # Terrace\n 'WY', # Way\n ]\n def __init__(self, house_number, direction, street_name, street_type):\n self.house_number = house_number\n self.direction = direction\n self._street_name = street_name\n self._street_type = street_type\n\n assert self.street_type in self.STREET_TYPES, \\\n \"Invalid street type: {st}\".format(\n st=self.street_type)\n\n @property\n def street_name(self):\n return self._street_name.upper()\n \n @property\n def street_type(self):\n return self._street_type.upper()\n\n\nclass RefuseQuery(object):\n form_url = 'http://mpw.milwaukee.gov/services/garbage_day'\n parse_xpath = RefusePickup\n \n @classmethod\n def Execute(cls, refuse_address):\n response = requests.post(\n cls.form_url,\n data={\n 'laddr': refuse_address.house_number,\n 'sdir': refuse_address.direction,\n 'sname': refuse_address.street_name,\n 'stype': refuse_address.street_type,\n 'Submit': 'Submit',\n })\n response_method = getattr(cls.parse_xpath, 'FromHTML')\n return response_method(response.text)\n ", "Define An Address\nThe following address is of a Walgreens for an example.", "address = RefuseQueryAddress(\n house_number=2727,\n direction='S',\n street_name='27th',\n street_type='st')", "Execute The Query\nCall the RefuseQuery class to fetch, parse, and return the status of\nfuture refuse pickups.", "pickup = RefuseQuery.Execute(address)", "Assess Results\nLook at the response object to determine what route the address is\non, when the next garbage pickup is, and when the next recycle pickup\nwill likely be.", "print(repr(pickup))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
studentofdata/qcew
vmfiles/IPNB/Examples/b Graphics/30 Seaborn.ipynb
bsd-3-clause
[ "Seaborn graphics\nSeaborn is a Python library with \"a high-level interface for drawing attractive statistical graphics\". This notebook includes some examples taken from the Seaborn example gallery.", "# The imports\n%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nsns.set(style=\"darkgrid\")", "Example 1: interactplot", "# Generate a random dataset with strong simple effects and an interaction\nn = 80\nrs = np.random.RandomState(11)\nx1 = rs.randn(n)\nx2 = x1 / 5 + rs.randn(n)\nb0, b1, b2, b3 = .5, .25, -1, 2\ny = b0 + b1 * x1 + b2 * x2 + b3 * x1 * x2 + rs.randn(n)\ndf = pd.DataFrame(np.c_[x1, x2, y], columns=[\"x1\", \"x2\", \"y\"])\n\n# Show a scatterplot of the predictors with the estimated model surface\nsns.interactplot(\"x1\", \"x2\", \"y\", df);", "Example 2: Correlation matrix heatmap", "sns.set(context=\"paper\", font=\"monospace\")\n\n# Load the datset of correlations between cortical brain networks\ndf = sns.load_dataset(\"brain_networks\", header=[0, 1, 2], index_col=0)\ncorrmat = df.corr()\n\n# Set up the matplotlib figure\nf, ax = plt.subplots( figsize=(12, 9) )\n\n# Draw the heatmap using seaborn\nsns.heatmap(corrmat, vmax=.8, square=True)\n\n# Use matplotlib directly to emphasize known networks\nnetworks = corrmat.columns.get_level_values(\"network\")\nfor i, network in enumerate(networks):\n if i and network != networks[i - 1]:\n ax.axhline(len(networks) - i, c=\"w\")\n ax.axvline(i, c=\"w\")\nf.tight_layout()\n", "Example 3: Linear regression with marginal distributions", "sns.set(style=\"darkgrid\", color_codes=True)\n\ntips = sns.load_dataset(\"tips\")\ng = sns.jointplot(\"total_bill\", \"tip\", data=tips, kind=\"reg\",\n xlim=(0, 60), ylim=(0, 12), color=\"r\", size=7)\n", "Interactivity\nWe repeat the above example, but now using mpld3 to provide pan & zoom interactivity.\nNote that this may not work if graphics have already been initialized", "# Seaborn + interactivity through mpld3\nimport mpld3\nsns.set( style=\"darkgrid\", color_codes=True )\n\ntips = sns.load_dataset(\"tips\")\nsns.jointplot( \"total_bill\", \"tip\", data=tips, kind=\"reg\",\n xlim=(0, 60), ylim=(0, 12), color=\"r\", size=7 )\n\n\nmpld3.display()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
postBG/DL_project
image-classification/dlnd_image_classification.ipynb
mit
[ "Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\n# Use Floyd's cifar-10 dataset if present\nfloyd_cifar10_location = '/input/cifar-10/python.tar.gz'\nif isfile(floyd_cifar10_location):\n tar_gz_path = floyd_cifar10_location\nelse:\n tar_gz_path = 'cifar-10-python.tar.gz'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(tar_gz_path):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n tar_gz_path,\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open(tar_gz_path) as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)", "Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 5\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)", "Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.", "def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n used min-max normalization\n \n : x: List of image data. 
The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n max_value = 255\n min_value = 0\n return (x - min_value) / (max_value - min_value)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)", "One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.", "from sklearn import preprocessing\nlb=preprocessing.LabelBinarizer()\nlb.fit(range(10))\n\ndef one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n return lb.transform(x)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)", "Randomize Data\nAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))", "Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. 
\n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.", "import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a bach of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n shape = [x for x in image_shape]\n shape.insert(0, None)\n return tf.placeholder(tf.float32, shape=shape, name=\"x\")\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n return tf.placeholder(tf.float32, shape=[None, n_classes], name=\"y\")\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n return tf.placeholder(tf.float32, name='keep_prob')\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)", "Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. 
You may still use the shortcut option for all the other layers.", "def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n x_tensor_shape = x_tensor.get_shape().as_list()\n weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor_shape[-1], conv_num_outputs], stddev=0.05))\n bias = tf.Variable(tf.truncated_normal([conv_num_outputs], stddev=0.05))\n \n conv_layer = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')\n conv_layer = tf.nn.bias_add(conv_layer, bias=bias)\n conv_layer = tf.nn.relu(conv_layer)\n \n conv_layer = tf.nn.max_pool(conv_layer, \n ksize=[1, pool_ksize[0], pool_ksize[1], 1], \n strides=[1, pool_strides[0], pool_strides[1], 1], \n padding='SAME')\n \n return conv_layer \n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)", "Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.", "def flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n return tf.contrib.layers.flatten(x_tensor)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)", "Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.", "def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n x_shape = x_tensor.get_shape().as_list()\n weights = tf.Variable(tf.truncated_normal([x_shape[1], num_outputs], stddev=0.05))\n bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.05))\n \n return tf.nn.relu(tf.add(tf.matmul(x_tensor, weights), bias))\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)", "Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. 
For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.", "def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n x_shape = x_tensor.get_shape().as_list()\n weights = tf.Variable(tf.truncated_normal([x_shape[1], num_outputs], stddev=0.05))\n bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.05))\n \n return tf.add(tf.matmul(x_tensor, weights), bias)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)", "Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.", "def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n conv_output_depth = {\n 'layer1': 32,\n 'layer2': 64,\n 'layer3': 128\n }\n conv_ksize = (3, 3)\n conv_strides = (1, 1)\n pool_ksize = (2, 2)\n pool_strides = (2, 2)\n # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers\n # Play around with different number of outputs, kernel size and stride\n # Function Definition from Above:\n # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n conv_layer1 = conv2d_maxpool(x, conv_output_depth['layer1'], conv_ksize, conv_strides, pool_ksize, pool_strides)\n conv_layer2 = conv2d_maxpool(conv_layer1, conv_output_depth['layer2'], conv_ksize, conv_strides, pool_ksize, pool_strides)\n conv_layer3 = conv2d_maxpool(conv_layer2, conv_output_depth['layer3'], conv_ksize, conv_strides, pool_ksize, pool_strides)\n \n # TODO: Apply a Flatten Layer\n # Function Definition from Above:\n # flatten(x_tensor)\n flattened_layer = flatten(conv_layer3)\n\n # TODO: Apply 1, 2, or 3 Fully Connected Layers\n # Play around with different number of outputs\n # Function Definition from Above:\n # fully_conn(x_tensor, num_outputs)\n fc_layer1 = fully_conn(flattened_layer, num_outputs=512)\n fc_layer1 = tf.nn.dropout(fc_layer1, keep_prob=keep_prob)\n \n fc_layer2 = fully_conn(fc_layer1, num_outputs=256)\n fc_layer2 = tf.nn.dropout(fc_layer2, keep_prob=keep_prob)\n \n fc_layer3 = fully_conn(fc_layer2, num_outputs=128)\n fc_layer3 = tf.nn.dropout(fc_layer3, keep_prob=keep_prob)\n \n # TODO: Apply an Output Layer\n # Set this to the number of classes\n # Function Definition from Above:\n # output(x_tensor, num_outputs)\n logits = output(fc_layer3, 10)\n \n # TODO: return output\n return logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = 
neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)", "Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.", "def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n session.run(optimizer, feed_dict={x: feature_batch, y:label_batch, keep_prob: keep_probability})\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)", "Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.", "def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})\n valid_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})\n print('Traning Loss: {:>10.4f} Accuracy: {:.6f}'.format(loss, valid_accuracy))", "Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or start overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout", "# TODO: Tune Parameters\nepochs = 10\nbatch_size = 128\nkeep_probability = 0.5", "Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. 
Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)", "Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)", "Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. 
If you don't, keep tweaking the model architecture and parameters.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)\n\n\ntest_model()", "Why 50-70% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 70%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
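The conv_net cell in the CIFAR-10 notebook above stacks three conv2d_maxpool stages (depths 32, 64, 128), a flatten layer, three dropout-wrapped fully connected layers, and a 10-class output. As a rough, stand-alone sanity check of the tensor sizes involved, here is a sketch of the shape bookkeeping; it assumes conv2d_maxpool uses 'SAME' padding together with the 2x2, stride-2 pool listed in the cell, which is not shown in this excerpt, so the exact numbers may differ from the actual project code.

```python
import math

# Hypothetical shape walk-through for the conv_net architecture above.
# Assumption (not shown in this excerpt): conv2d_maxpool uses 'SAME' padding,
# so only the 2x2, stride-2 max pool halves the spatial dimensions per stage.
def same_pool(size, stride=2):
    return math.ceil(size / stride)

h = w = 32                    # CIFAR-10 images are 32x32x3
depths = [32, 64, 128]        # conv output depths chosen in conv_net
for depth in depths:
    h, w = same_pool(h), same_pool(w)
    print("after conv + pool -> {}x{}x{}".format(h, w, depth))

flat = h * w * depths[-1]
print("flattened size:", flat)          # 4 * 4 * 128 = 2048 under this assumption
print("fully connected sizes:", [512, 256, 128], "-> logits:", 10)
```

Under that assumption, the last fully connected layer hands a 128-wide activation to the output layer, which maps it to the 10 CIFAR-10 classes, matching output(fc_layer3, 10) in the cell.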
phanrahan/magmathon
notebooks/advanced/verilator.ipynb
mit
[ "Start by defining a Python function that we want to compute.", "def f(a, b, c):\n return (a & b) ^ c", "Generate a circuit that computes this function. To implement the logical operations we use standard Verilog gates, which are available in mantle.verilog.gates.", "import magma as m\nimport mantle\n\nclass VerilatorExample(m.Circuit):\n io = m.IO(a=m.In(m.Bit), b=m.In(m.Bit), c=m.In(m.Bit), d=m.Out(m.Bit))\n io.d <= f(io.a, io.b, io.c)\n\nm.compile(\"build/VerilatorExample\", VerilatorExample, \"coreir-verilog\", inline=True)\n%cat build/VerilatorExample.v", "Next, generate a Verilator test harness in C++ for the circuit. The test vectors are generated using the Python function f. The Verilator test bench compares the output of the simulator to those test vectors.", "from itertools import product\nfrom fault import Tester\n\ntester = Tester(VerilatorExample)\nfor a, b, c in product([0, 1], [0, 1], [0, 1]):\n tester.poke(VerilatorExample.a, a)\n tester.poke(VerilatorExample.b, b)\n tester.poke(VerilatorExample.c, c)\n tester.eval()\n tester.expect(VerilatorExample.d, f(a, b, c))\ntester.print(\"done!!\")\ntester.compile_and_run(\"verilator\", directory=\"build\")\n%cat build/VerilatorExample_driver.cpp", "Using fault, we can reuse the same tester (with the same testbench inputs/expectations) with a different backend, such as the Python simulator.", "tester.compile_and_run(\"python\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
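The fault test bench in the magma row above pokes every combination of the three 1-bit inputs and expects d == f(a, b, c). What that harness verifies can be sketched in plain Python, with no magma, mantle, fault, or Verilator installation required; this is only an illustrative truth table, not part of the original notebook.

```python
from itertools import product

# Exhaustive truth table for the combinational function used in the circuit.
def f(a, b, c):
    return (a & b) ^ c

for a, b, c in product([0, 1], repeat=3):
    print("a={} b={} c={} -> d={}".format(a, b, c, f(a, b, c)))
```

Because the circuit is purely combinational with three 1-bit inputs, eight vectors give full coverage, which is why the notebook iterates over product([0, 1], [0, 1], [0, 1]).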
NuGrid/NuPyCEE
DOC/Capabilities/Including_radioactive_isotopes.ipynb
bsd-3-clause
[ "Including Radioactive Isotopes in NuPyCEE\nPrepared by: Benoit Côté\nThis notebook describes the radioactive isotope implementation in NuPyCEE and shows how to run SYGMA and OMEGA with radioactive yields.", "# Import python modules\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Import the NuPyCEE codes\nfrom NuPyCEE import sygma\nfrom NuPyCEE import omega", "1. Input Parameters\nThe inputs that need to be provided to activate the radioactive option are:\n\nthe list of selected radioactive isotopes,\nthe radioactive yield tables.\n\nThe list of isotopes is declared in the yield_tables/decay_info.txt file and can be modified prior to any simulation. The radioactive yields are found (or need to be added) in the yield_tables/ folder. Each stable yield table can have its associated radioactive yield table:\n\nMassive and AGB stars\nStable isotopes: table\nRadioactive isotopes: table_radio\nType Ia supernovae\nStable isotopes: sn1a_table\nRadioactive isotopes: sn1a_table_radio\nNeutron star mergers\nStable isotopes: nsmerger_table\nRadioactive isotopes: nsmerger_table_radio\nEtc.\n\nEach enrichment source can be activated independently by providing its input radioactive yield table. The radioactive yield table file format needs to be identical to its stable counterpart.\nWarning: Radioactive isotopes will decay into stable isotopes. When using radioactive yields, please make sure that the stable yields do not already include the decayed isotopes.\n2. Single Decay Channel (Default Option)\nIf the radioactive isotopes you selected have only one decay channel, you can use the default decay option, which uses the following exponential law,\n$N_r(t)=N_r(t_0)\\,\\mathrm{exp}\\left[\\frac{-(t-t_0)}{\\tau}\\right],$\n$\\tau=\\frac{T_{1/2}}{\\mathrm{ln}(2)},$\nwhere $t_0$ is the reference time at which the number of radioactive isotopes was equal to $N_r(t_0)$. $T_{1/2}$ is the half-life of the isotope, which needs to be specified in yield_tables/decay_info.txt. The decayed product will be added to the corresponding stable isotope, as defined in yield_tables/decay_info.txt.\nExample with Al-26\nBelow, a SYGMA simulation is run with no star formation to better isolate the decay process.
Here we choose Al-26 as an example, which decays into Mg-26.", "# Number of timesteps in the simulaton.\n# See https://github.com/NuGrid/NuPyCEE/blob/master/DOC/Capabilities/Timesteps_size_management.ipynb\nspecial_timesteps = -1\nnb_dt = 100\ntend = 2.0e6\ndt = tend / float(nb_dt)\n\n# No star formation.\nno_sf = True\n\n# Dummy neutron star merger yields to activate the radioactive option.\nnsmerger_table_radio = 'yield_tables/extra_table_radio_dummy.txt'\n\n# Add 1 Msun of radioactive Al-26 in the gas.\n# The indexes of this array reflect the order seen in the yield_tables/decay_file.txt file\n# Index 0, 1, 2 --> Al-26, K-40, U-238\nism_ini_radio = [1.0, 0.0, 0.0]", "Run SYGMA", "# Run SYGMA (or in this case, the decay process)\ns = sygma.sygma(iniZ=0.02, no_sf=no_sf, ism_ini_radio=ism_ini_radio,\\\n special_timesteps=special_timesteps, tend=tend, dt=dt,\\\n decay_file='yield_tables/decay_file.txt', nsmerger_table_radio=nsmerger_table_radio)\n\n# Get the Al-26 (radioactive) and Mg-26 (stable) indexes in the gas arrays\ni_Al_26 = s.radio_iso.index('Al-26')\ni_Mg_26 = s.history.isotopes.index('Mg-26')\n\n# Extract the evolution of these isotopes as a function of time\nAl_26 = np.zeros(s.nb_timesteps+1)\nMg_26 = np.zeros(s.nb_timesteps+1)\nfor i_t in range(s.nb_timesteps+1):\n Al_26[i_t] = s.ymgal_radio[i_t][i_Al_26]\n Mg_26[i_t] = s.ymgal[i_t][i_Mg_26]", "Plot results", "# Plot the evolution of Al-26 and Mg-26\n%matplotlib nbagg\nplt.figure(figsize=(8,4.5))\nplt.plot( np.array(s.history.age)/1e6, Al_26, '--b', label='Al-26' )\nplt.plot( np.array(s.history.age)/1e6, Mg_26, '-r', label='Mg-26' )\nplt.plot([0,2.0], [0.5,0.5], ':k')\nplt.plot([0.717,0.717], [0,1], ':k')\n\n# Labels and fontsizes\nplt.xlabel('Time [Myr]', fontsize=16)\nplt.ylabel('Mass of isotope [M$_\\odot$]', fontsize=16)\nplt.legend(fontsize=14, loc='center left', bbox_to_anchor=(1, 0.5))\nplt.subplots_adjust(top=0.96)\nplt.subplots_adjust(bottom=0.15)\nplt.subplots_adjust(right=0.75)\nmatplotlib.rcParams.update({'font.size': 14.0})", "3. Multiple Decay Channels\nIf the radioactive isotopes you selected have more than one decay channel, you need to use the provided decay module. This option can be activated by adding use_decay_module=True in the list of parameters when creating an instance of SYGMA and OMEGA. When using the decay module, the yield_tables/decay_file.txt file still needs to be provided as an input to define which radioactive isotopes are selected for the calculation.\nExample with K-40\nBelow we still run a SYGMA simulation with no star formation to better isolate the decay process. 
A fraction of K-40 decays into Ca-40, and another fraction decays into Ar-40.\nRun SYGMA", "# Add 1 Msun of radioactive K-40 in the gas.\n# The indexes of this array reflect the order seen in the yield_tables/decay_file.txt file\n# Index 0, 1, 2 --> Al-26, K-40, U-238\nism_ini_radio = [0.0, 1.0, 0.0]\n\n# Number of timesteps in the simulaton.\n# See https://github.com/NuGrid/NuPyCEE/blob/master/DOC/Capabilities/Timesteps_size_management.ipynb\nspecial_timesteps = -1\nnb_dt = 100\ntend = 5.0e9\ndt = tend / float(nb_dt)\n\n# Run SYGMA (or in this case, the decay process)\n# with the decay module\ns = sygma.sygma(iniZ=0.0, sfr=sfr, starbursts=starbursts, ism_ini_radio=ism_ini_radio,\\\n special_timesteps=special_timesteps, tend=tend, dt=dt,\\\n decay_file='yield_tables/decay_file.txt', nsmerger_table_radio=nsmerger_table_radio,\\\n use_decay_module=True, radio_refinement=1)\n\n# Get the K-40 (radioactive) and Ca-40 and Ar-40 (stable) indexes in the gas arrays\ni_K_40 = s.radio_iso.index('K-40')\ni_Ca_40 = s.history.isotopes.index('Ca-40')\ni_Ar_40 = s.history.isotopes.index('Ar-40')\n\n# Extract the evolution of these isotopes as a function of time\nK_40 = np.zeros(s.nb_timesteps+1)\nCa_40 = np.zeros(s.nb_timesteps+1)\nAr_40 = np.zeros(s.nb_timesteps+1)\nfor i_t in range(s.nb_timesteps+1):\n K_40[i_t] = s.ymgal_radio[i_t][i_K_40]\n Ca_40[i_t] = s.ymgal[i_t][i_Ca_40]\n Ar_40[i_t] = s.ymgal[i_t][i_Ar_40]\n\n# Plot the evolution of Al-26 and Mg-26\n%matplotlib nbagg\nplt.figure(figsize=(8,4.5))\nplt.plot( np.array(s.history.age)/1e6, K_40, '--b', label='K-40' )\nplt.plot( np.array(s.history.age)/1e6, Ca_40, '-r', label='Ca-40' )\nplt.plot( np.array(s.history.age)/1e6, Ar_40, '-g', label='Ar-40' )\n\n# Labels and fontsizes\nplt.xlabel('Time [Myr]', fontsize=16)\nplt.ylabel('Mass of isotope [M$_\\odot$]', fontsize=16)\nplt.legend(fontsize=14, loc='center left', bbox_to_anchor=(1, 0.5))\nplt.subplots_adjust(top=0.96)\nplt.subplots_adjust(bottom=0.15)\nplt.subplots_adjust(right=0.75)\nmatplotlib.rcParams.update({'font.size': 14.0})", "Example with U-238", "# Add 1 Msun of radioactive U-238 in the gas.\n# The indexes of this array reflect the order seen in the yield_tables/decay_file.txt file\n# Index 0, 1, 2 --> Al-26, K-40, U-238\nism_ini_radio = [0.0, 0.0, 1.0]\n\n# Number of timesteps in the simulaton.\n# See https://github.com/NuGrid/NuPyCEE/blob/master/DOC/Capabilities/Timesteps_size_management.ipynb\nspecial_timesteps = -1\nnb_dt = 100\ntend = 5.0e9\ndt = tend / float(nb_dt)\n\n# Run SYGMA (or in this case, the decay process)\n# with the decay module\ns = sygma.sygma(iniZ=0.0, sfr=sfr, starbursts=starbursts, ism_ini_radio=ism_ini_radio,\\\n special_timesteps=special_timesteps, tend=tend, dt=dt,\\\n decay_file='yield_tables/decay_file.txt', nsmerger_table_radio=nsmerger_table_radio,\\\n use_decay_module=True, radio_refinement=1)", "In the case of U-238, there are many isotopes that are resulting from the multiple decay channels. Those new radioactive isotopes are added automatically in the list of isotopes in NuPyCEE.", "print(s.radio_iso)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
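The single-channel decay law quoted in the NuPyCEE row above can be checked in isolation with a few lines of NumPy, without installing NuPyCEE. The 0.717 Myr half-life below is taken from the dotted guide lines drawn in the Al-26 plot of that row; treat both the value and the script as an illustrative sketch rather than part of the package.

```python
import numpy as np

# Stand-alone check of N_r(t) = N_r(t_0) * exp(-(t - t_0)/tau), tau = T_half/ln(2).
t_half = 0.717e6                     # yr, Al-26 half-life used for the plot annotation
tau = t_half / np.log(2)

t = np.linspace(0.0, 2.0e6, 101)     # same 2 Myr window as the SYGMA example
al26 = 1.0 * np.exp(-t / tau)        # starting from 1 Msun of Al-26
mg26 = 1.0 - al26                    # single channel: all of it decays to Mg-26

i = np.argmin(np.abs(t - t_half))
print("Al-26 left after one half-life: %.3f Msun" % al26[i])   # close to 0.5
print("Mg-26 produced at that time:    %.3f Msun" % mg26[i])
```

At t = T_1/2 the exponential evaluates to exp(-ln 2) = 1/2, which is the 0.5 Msun crossing marked by the dotted lines in the notebook's figure.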
tanle8/Data-Science
1-uIDS-courseNotes/l5-MapReduce.ipynb
mit
[ "Lesson 5: MapReduce\nBig Data and MapReduce", "from IPython.display import HTML\n\nHTML('<iframe width=\"846\" height=\"476\" src=\"https://www.youtube.com/embed/KdSqUjFWzdY\" frameborder=\"0\" allowfullscreen></iframe>')\n\nfrom IPython.display import HTML\nHTML('<iframe width=\"960\" height=\"540\" src=\"https://www.youtube.com/embed/gYiwszKaCoQ\" frameborder=\"0\" allowfullscreen></iframe>')", "Basics of MapReduce", "from IPython.display import HTML\n\nHTML('<iframe width=\"798\" height=\"449\" src=\"https://www.youtube.com/embed/gI4HN0JhPmo\" frameborder=\"0\" allowfullscreen></iframe>')", "Quiz: Couting Words Serially\n```Python\nimport logging\nimport sys\nimport string\nfrom util import logfile\nlogging.basicConfig(filename=logfile, format='%(message)s',\n level=logging.INFO, filemode='w')\ndef word_count():\n # For this exercise, write a program that serially counts the number of occurrences\n # of each word in the book Alice in Wonderland.\n #\n # The text of Alice in Wonderland will be fed into your program line-by-line.\n # Your program needs to take each line and do the following:\n # 1) Tokenize the line into string tokens by whitespace\n # Example: \"Hello, World!\" should be converted into \"Hello,\" and \"World!\"\n # (This part has been done for you.)\n #\n # 2) Remove all punctuation\n # Example: \"Hello,\" and \"World!\" should be converted into \"Hello\" and \"World\"\n # \n # 3) Make all letters lowercase\n # Example: \"Hello\" and \"World\" should be converted to \"hello\" and \"world\"\n #\n # Store the the number of times that a word appears in Alice in Wonderland\n # in the word_counts dictionary, and then print (don't return) that dictionary\n #\n # In this exercise, print statements will be considered your final output. Because\n # of this, printing a debug statement will cause the grader to break. Instead, \n # you can use the logging module which we've configured for you.\n #\n # For example:\n # logging.info(\"My debugging message\")\n #\n # The logging module can be used to give you more control over your\n # debugging or other messages than you can get by printing them. Messages \n # logged via the logger we configured will be saved to a\n # file. 
If you click \"Test Run\", then you will see the contents of that file\n # once your program has finished running.\n # \n # The logging module also has other capabilities; see \n # https://docs.python.org/2/library/logging.html\n # for more information.\n# Create an empty dictionary to store word/frequency pair as key/value\nword_counts = {}\n\nfor line in sys.stdin:\n # 2) Remove all punctuation\n # Example: \"Hello,\" and \"World!\" should be converted into \"Hello\" and \"World\"\n # 3) Make all letters lowercase\n # Example: \"Hello\" and \"World\" should be converted to \"hello\" and \"world\"\n data = line.strip().split(\" \")\n\n # Your code here\n # With each word in the list, remove any punctuation and turn it into lowercase.\n # Check if the word appears or not, if yes, +1 to key value otherwise assigns its\n # value to 1.\n for i in data:\n key = i.translate(string.maketrans(\"\",\"\"), string.punctuation).lower()\n if key in word_counts.keys():\n word_counts[key] += 1\n else:\n word_counts[key] = 1\n\nprint word_counts\n\nword_count()\n```\nCounting Words in MapReduce", "from IPython.display import HTML\n\nHTML('<iframe width=\"798\" height=\"449\" src=\"https://www.youtube.com/embed/onseMon9zqA\" frameborder=\"0\" allowfullscreen></iframe>')\n\n\nfrom IPython.display import HTML\n\nHTML('<iframe width=\"798\" height=\"449\" src=\"https://www.youtube.com/embed/_q6098sNqpo\" frameborder=\"0\" allowfullscreen></iframe>')", "Mapper", "from IPython.display import HTML\nHTML('<iframe width=\"798\" height=\"449\" src=\"https://www.youtube.com/embed/mPYxFC7DI28\" frameborder=\"0\" allowfullscreen></iframe>')", "Reducer", "from IPython.display import HTML\n\nHTML('<iframe width=\"798\" height=\"449\" src=\"https://www.youtube.com/embed/bkhuEG0D2HM\" frameborder=\"0\" allowfullscreen></iframe>')", "Quiz: Mapper And Reducer With Aadhaar Data\naadhaar_genereated_mapper.py\n```Python\nimport sys\nimport string\nimport logging\nfrom util import mapper_logfile\nlogging.basicConfig(filename=mapper_logfile, format='%(message)s',\n level=logging.INFO, filemode='w')\ndef mapper():\n#Also make sure to fill out the reducer code before clicking \"Test Run\" or \"Submit\".\n\n#Each line will be a comma-separated list of values. The\n#header row WILL be included. Tokenize each row using the \n#commas, and emit (i.e. print) a key-value pair containing the \n#district (not state) and Aadhaar generated, separated by a tab. \n#Skip rows without the correct number of tokens and also skip \n#the header row.\n\n#You can see a copy of the the input Aadhaar data\n#in the link below:\n#https://www.dropbox.com/s/vn8t4uulbsfmalo/aadhaar_data.csv\n\n#Since you are printing the output of your program, printing a debug \n#statement will interfere with the operation of the grader. Instead, \n#use the logging module, which we've configured to log to a file printed \n#when you click \"Test Run\". 
For example:\n#logging.info(\"My debugging message\")\n#\n#Note that, unlike print, logging.info will take only a single argument.\n#So logging.info(\"my message\") will work, but logging.info(\"my\",\"message\") will not.\n\nfor line in sys.stdin:\n #your code here\n # tokenize the line of data\n data = line.strip().split(\",\")\n\n if len(data) != 12 or data[0] == 'Registrar':\n continue\n print \"{0}\\t{1}\".format(data[3],data[8])\n\nmapper()\n```\naadhaar_genereated_reducer.py\n```Python\nimport sys\nimport logging\nfrom util import reducer_logfile\nlogging.basicConfig(filename=reducer_logfile, format='%(message)s',\n level=logging.INFO, filemode='w')\ndef reducer():\n#Also make sure to fill out the mapper code before clicking \"Test Run\" or \"Submit\".\n\n#Each line will be a key-value pair separated by a tab character.\n#Print out each key once, along with the total number of Aadhaar \n#generated, separated by a tab. Make sure each key-value pair is \n#formatted correctly! Here's a sample final key-value pair: 'Gujarat\\t5.0'\n\n#Since you are printing the output of your program, printing a debug \n#statement will interfere with the operation of the grader. Instead, \n#use the logging module, which we've configured to log to a file printed \n#when you click \"Test Run\". For example:\n#logging.info(\"My debugging message\")\n#Note that, unlike print, logging.info will take only a single argument.\n#So logging.info(\"my message\") will work, but logging.info(\"my\",\"message\") will not.\n\n# Initialize values\naadhaar_generated = 0\nold_key = None\n\nfor line in sys.stdin:\n # your code here\n data = line.strip().split(\"\\t\")\n if len(data) != 2:\n continue\n this_key, count = data\n\n if old_key and old_key != this_key:\n print \"{0}\\t{1}\".format(old_key, aadhaar_generated)\n aadhaar_generated = 0\n\n old_key = this_key\n aadhaar_generated += float(count)\n\nif old_key != None:\n print \"{0}\\t{1}\".format(old_key, aadhaar_generated)\n\nreducer()\n```\nMapReduce Ecosystem\nMapReduce programming model\n\nHadoop is a very common open source implementation of MapReduce.\nHadoop couples the map reduce programming model with a distributed file system.\nIn order to more easily allow programmers to complete complicated tasks using the processing power of Hadoop, there are many infrastructures out there that either built on top of Hadoop or allow data access via Hadoop.\nTwo of the most common are Hive and Pig. But there are bunch of them out there, for example: Mahout for machine learning\nHive was initially developed by Facebook, and one of its biggest selling points is that it allows running map-preoduced jobs through a SQL-like querying language, called the Hive Query Language; Giraph for graph analysis and Cassandra, a hybrid of a key value and a column oriented database.\nPig was originally developed at Yahoo! and excels in some areas Hive does not. Pig jobs are written in a procedural language called Pig Latin. This wins developers a bunch of things. Among them are the ability to be more explicit about the execution of our data processing. Which is not possible in a declarative language like SQL syntax. And also the ability to split your data pipeline.", "# Recap\nfrom IPython.display import HTML\nHTML('<iframe width=\"798\" height=\"449\" src=\"https://www.youtube.com/embed/Pl68U2iGtyI\" frameborder=\"0\" allowfullscreen></iframe>')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
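The Aadhaar reducer in the MapReduce row above depends on Hadoop Streaming delivering mapper output sorted by key, which is what makes the old_key/this_key accumulation pattern work. Here is a tiny, self-contained Python 3 simulation of that sort-then-reduce step; the district names and counts are made up purely for illustration and are not taken from the Aadhaar data.

```python
# Simulated mapper output: "district\tcount" lines (made-up values).
mapper_output = [
    "Bangalore\t3", "Mumbai\t5", "Bangalore\t2", "Pune\t1", "Mumbai\t4",
]

# Hadoop Streaming sorts by key between the map and reduce phases.
sorted_pairs = sorted(line.split("\t") for line in mapper_output)

old_key, total = None, 0.0
for this_key, count in sorted_pairs:
    if old_key and old_key != this_key:
        print("{0}\t{1}".format(old_key, total))   # emit the finished key
        total = 0.0
    old_key = this_key
    total += float(count)

if old_key is not None:
    print("{0}\t{1}".format(old_key, total))       # flush the last key
```

This mirrors the structure of the reducer in the quiz, with the sorting step made explicit instead of being provided by the framework.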
diegocavalca/Studies
deep-learnining-specialization/2. improving deep neural networks/week2/Optimization methods.ipynb
cc0-1.0
[ "Optimization Methods\nUntil now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result. \nGradient descent goes \"downhill\" on a cost function $J$. Think of it as trying to do this: \n<img src=\"images/cost.jpg\" style=\"width:650px;height:300px;\">\n<caption><center> <u> Figure 1 </u>: Minimizing the cost is like finding the lowest point in a hilly landscape<br> At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. </center></caption>\nNotations: As usual, $\\frac{\\partial J}{\\partial a } = $ da for any variable a.\nTo get started, run the following code to import the libraries you will need.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.io\nimport math\nimport sklearn\nimport sklearn.datasets\n\nfrom opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation\nfrom opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset\nfrom testCases import *\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'", "1 - Gradient Descent\nA simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent. \nWarm-up exercise: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$: \n$$ W^{[l]} = W^{[l]} - \\alpha \\text{ } dW^{[l]} \\tag{1}$$\n$$ b^{[l]} = b^{[l]} - \\alpha \\text{ } db^{[l]} \\tag{2}$$\nwhere L is the number of layers and $\\alpha$ is the learning rate. All parameters should be stored in the parameters dictionary. Note that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift l to l+1 when coding.", "# GRADED FUNCTION: update_parameters_with_gd\n\ndef update_parameters_with_gd(parameters, grads, learning_rate):\n \"\"\"\n Update parameters using one step of gradient descent\n \n Arguments:\n parameters -- python dictionary containing your parameters to be updated:\n parameters['W' + str(l)] = Wl\n parameters['b' + str(l)] = bl\n grads -- python dictionary containing your gradients to update each parameters:\n grads['dW' + str(l)] = dWl\n grads['db' + str(l)] = dbl\n learning_rate -- the learning rate, scalar.\n \n Returns:\n parameters -- python dictionary containing your updated parameters \n \"\"\"\n\n L = len(parameters) // 2 # number of layers in the neural networks\n\n # Update rule for each parameter\n for l in range(L):\n ### START CODE HERE ### (approx. 
2 lines)\n parameters[\"W\" + str(l+1)] = parameters[\"W\" + str(l+1)] - learning_rate*grads['dW' + str(l+1)]\n parameters[\"b\" + str(l+1)] = parameters[\"b\" + str(l+1)] - learning_rate*grads['db' + str(l+1)]\n ### END CODE HERE ###\n \n return parameters\n\nparameters, grads, learning_rate = update_parameters_with_gd_test_case()\n\nparameters = update_parameters_with_gd(parameters, grads, learning_rate)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))", "Expected Output:\n<table> \n <tr>\n <td > **W1** </td> \n <td > [[ 1.63535156 -0.62320365 -0.53718766]\n [-1.07799357 0.85639907 -2.29470142]] </td> \n </tr> \n\n <tr>\n <td > **b1** </td> \n <td > [[ 1.74604067]\n [-0.75184921]] </td> \n </tr> \n\n <tr>\n <td > **W2** </td> \n <td > [[ 0.32171798 -0.25467393 1.46902454]\n [-2.05617317 -0.31554548 -0.3756023 ]\n [ 1.1404819 -1.09976462 -0.1612551 ]] </td> \n </tr> \n\n <tr>\n <td > **b2** </td> \n <td > [[-0.88020257]\n [ 0.02561572]\n [ 0.57539477]] </td> \n </tr> \n</table>\n\nA variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent. \n\n(Batch) Gradient Descent:\n\n``` python\nX = data_input\nY = labels\nparameters = initialize_parameters(layers_dims)\nfor i in range(0, num_iterations):\n # Forward propagation\n a, caches = forward_propagation(X, parameters)\n # Compute cost.\n cost = compute_cost(a, Y)\n # Backward propagation.\n grads = backward_propagation(a, caches, parameters)\n # Update parameters.\n parameters = update_parameters(parameters, grads)\n```\n\nStochastic Gradient Descent:\n\npython\nX = data_input\nY = labels\nparameters = initialize_parameters(layers_dims)\nfor i in range(0, num_iterations):\n for j in range(0, m):\n # Forward propagation\n a, caches = forward_propagation(X[:,j], parameters)\n # Compute cost\n cost = compute_cost(a, Y[:,j])\n # Backward propagation\n grads = backward_propagation(a, caches, parameters)\n # Update parameters.\n parameters = update_parameters(parameters, grads)\nIn Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will \"oscillate\" toward the minimum rather than converge smoothly. Here is an illustration of this: \n<img src=\"images/kiank_sgd.png\" style=\"width:750px;height:250px;\">\n<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : SGD vs GD<br> \"+\" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). </center></caption>\nNote also that implementing SGD requires 3 for-loops in total:\n1. Over the number of iterations\n2. Over the $m$ training examples\n3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)\nIn practice, you'll often get faster results if you do not use neither the whole training set, nor only one training example, to perform each update. 
Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples.\n<img src=\"images/kiank_minibatch.png\" style=\"width:750px;height:250px;\">\n<caption><center> <u> <font color='purple'> Figure 2 </u>: <font color='purple'> SGD vs Mini-Batch GD<br> \"+\" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. </center></caption>\n<font color='blue'>\nWhat you should remember:\n- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.\n- You have to tune a learning rate hyperparameter $\\alpha$.\n- With a well-turned mini-batch size, usually it outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large).\n2 - Mini-Batch Gradient descent\nLet's learn how to build mini-batches from the training set (X, Y).\nThere are two steps:\n- Shuffle: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y. Such that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches. \n<img src=\"images/kiank_shuffle.png\" style=\"width:550px;height:300px;\">\n\nPartition: Partition the shuffled (X, Y) into mini-batches of size mini_batch_size (here 64). Note that the number of training examples is not always divisible by mini_batch_size. The last mini batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full mini_batch_size, it will look like this: \n\n<img src=\"images/kiank_partition.png\" style=\"width:550px;height:300px;\">\nExercise: Implement random_mini_batches. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:\npython\nfirst_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]\nsecond_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]\n...\nNote that the last mini-batch might end up smaller than mini_batch_size=64. Let $\\lfloor s \\rfloor$ represents $s$ rounded down to the nearest integer (this is math.floor(s) in Python). 
If the total number of examples is not a multiple of mini_batch_size=64 then there will be $\\lfloor \\frac{m}{mini_batch_size}\\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be ($m-mini__batch__size \\times \\lfloor \\frac{m}{mini_batch_size}\\rfloor$).", "# GRADED FUNCTION: random_mini_batches\n\ndef random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):\n \"\"\"\n Creates a list of random minibatches from (X, Y)\n \n Arguments:\n X -- input data, of shape (input size, number of examples)\n Y -- true \"label\" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)\n mini_batch_size -- size of the mini-batches, integer\n \n Returns:\n mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)\n \"\"\"\n \n np.random.seed(seed) # To make your \"random\" minibatches the same as ours\n m = X.shape[1] # number of training examples\n mini_batches = []\n \n # Step 1: Shuffle (X, Y)\n permutation = list(np.random.permutation(m))\n shuffled_X = X[:, permutation]\n shuffled_Y = Y[:, permutation].reshape((1,m))\n\n # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.\n num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning\n for k in range(0, num_complete_minibatches):\n ### START CODE HERE ### (approx. 2 lines)\n mini_batch_X = shuffled_X[:,k * mini_batch_size:(k + 1) * mini_batch_size]\n mini_batch_Y = shuffled_Y[:,k * mini_batch_size:(k + 1) * mini_batch_size]\n ### END CODE HERE ###\n mini_batch = (mini_batch_X, mini_batch_Y)\n mini_batches.append(mini_batch)\n \n # Handling the end case (last mini-batch < mini_batch_size)\n if m % mini_batch_size != 0:\n #end = m - mini_batch_size * math.floor(m / mini_batch_size)\n ### START CODE HERE ### (approx. 
2 lines)\n mini_batch_X = shuffled_X[:,num_complete_minibatches * mini_batch_size:]\n mini_batch_Y = shuffled_Y[:,num_complete_minibatches * mini_batch_size:]\n ### END CODE HERE ###\n mini_batch = (mini_batch_X, mini_batch_Y)\n mini_batches.append(mini_batch)\n \n return mini_batches\n\nX_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()\nmini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)\n\nprint (\"shape of the 1st mini_batch_X: \" + str(mini_batches[0][0].shape))\nprint (\"shape of the 2nd mini_batch_X: \" + str(mini_batches[1][0].shape))\nprint (\"shape of the 3rd mini_batch_X: \" + str(mini_batches[2][0].shape))\nprint (\"shape of the 1st mini_batch_Y: \" + str(mini_batches[0][1].shape))\nprint (\"shape of the 2nd mini_batch_Y: \" + str(mini_batches[1][1].shape)) \nprint (\"shape of the 3rd mini_batch_Y: \" + str(mini_batches[2][1].shape))\nprint (\"mini batch sanity check: \" + str(mini_batches[0][0][0][0:3]))", "Expected Output:\n<table style=\"width:50%\"> \n <tr>\n <td > **shape of the 1st mini_batch_X** </td> \n <td > (12288, 64) </td> \n </tr> \n\n <tr>\n <td > **shape of the 2nd mini_batch_X** </td> \n <td > (12288, 64) </td> \n </tr> \n\n <tr>\n <td > **shape of the 3rd mini_batch_X** </td> \n <td > (12288, 20) </td> \n </tr>\n <tr>\n <td > **shape of the 1st mini_batch_Y** </td> \n <td > (1, 64) </td> \n </tr> \n <tr>\n <td > **shape of the 2nd mini_batch_Y** </td> \n <td > (1, 64) </td> \n </tr> \n <tr>\n <td > **shape of the 3rd mini_batch_Y** </td> \n <td > (1, 20) </td> \n </tr> \n <tr>\n <td > **mini batch sanity check** </td> \n <td > [ 0.90085595 -0.7612069 0.2344157 ] </td> \n </tr>\n\n</table>\n\n<font color='blue'>\nWhat you should remember:\n- Shuffling and Partitioning are the two steps required to build mini-batches\n- Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128.\n3 - Momentum\nBecause mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will \"oscillate\" toward convergence. Using momentum can reduce these oscillations. \nMomentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the \"velocity\" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill. \n<img src=\"images/opt_momentum.png\" style=\"width:400px;height:250px;\">\n<caption><center> <u><font color='purple'>Figure 3</u><font color='purple'>: The red arrows shows the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$.<br> <font color='black'> </center>\nExercise: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the grads dictionary, that is:\nfor $l =1,...,L$:\npython\nv[\"dW\" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters[\"W\" + str(l+1)])\nv[\"db\" + str(l+1)] = ... 
#(numpy array of zeros with the same shape as parameters[\"b\" + str(l+1)])\nNote that the iterator l starts at 0 in the for loop while the first parameters are v[\"dW1\"] and v[\"db1\"] (that's a \"one\" on the superscript). This is why we are shifting l to l+1 in the for loop.", "# GRADED FUNCTION: initialize_velocity\n\ndef initialize_velocity(parameters):\n \"\"\"\n Initializes the velocity as a python dictionary with:\n - keys: \"dW1\", \"db1\", ..., \"dWL\", \"dbL\" \n - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.\n Arguments:\n parameters -- python dictionary containing your parameters.\n parameters['W' + str(l)] = Wl\n parameters['b' + str(l)] = bl\n \n Returns:\n v -- python dictionary containing the current velocity.\n v['dW' + str(l)] = velocity of dWl\n v['db' + str(l)] = velocity of dbl\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural networks\n v = {}\n \n # Initialize velocity\n for l in range(L):\n ### START CODE HERE ### (approx. 2 lines)\n v[\"dW\" + str(l+1)] = np.zeros_like(parameters[\"W\" + str(l+1)])\n v[\"db\" + str(l+1)] = np.zeros_like(parameters[\"b\" + str(l+1)])\n ### END CODE HERE ###\n \n return v\n\nparameters = initialize_velocity_test_case()\n\nv = initialize_velocity(parameters)\nprint(\"v[\\\"dW1\\\"] = \" + str(v[\"dW1\"]))\nprint(\"v[\\\"db1\\\"] = \" + str(v[\"db1\"]))\nprint(\"v[\\\"dW2\\\"] = \" + str(v[\"dW2\"]))\nprint(\"v[\\\"db2\\\"] = \" + str(v[\"db2\"]))", "Expected Output:\n<table style=\"width:40%\"> \n <tr>\n <td > **v[\"dW1\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db1\"]** </td> \n <td > [[ 0.]\n [ 0.]] </td> \n </tr> \n\n <tr>\n <td > **v[\"dW2\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db2\"]** </td> \n <td > [[ 0.]\n [ 0.]\n [ 0.]] </td> \n </tr> \n</table>\n\nExercise: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$: \n$$ \\begin{cases}\nv_{dW^{[l]}} = \\beta v_{dW^{[l]}} + (1 - \\beta) dW^{[l]} \\\nW^{[l]} = W^{[l]} - \\alpha v_{dW^{[l]}}\n\\end{cases}\\tag{3}$$\n$$\\begin{cases}\nv_{db^{[l]}} = \\beta v_{db^{[l]}} + (1 - \\beta) db^{[l]} \\\nb^{[l]} = b^{[l]} - \\alpha v_{db^{[l]}} \n\\end{cases}\\tag{4}$$\nwhere L is the number of layers, $\\beta$ is the momentum and $\\alpha$ is the learning rate. All parameters should be stored in the parameters dictionary. Note that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a \"one\" on the superscript). 
So you will need to shift l to l+1 when coding.", "# GRADED FUNCTION: update_parameters_with_momentum\n\ndef update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):\n \"\"\"\n Update parameters using Momentum\n \n Arguments:\n parameters -- python dictionary containing your parameters:\n parameters['W' + str(l)] = Wl\n parameters['b' + str(l)] = bl\n grads -- python dictionary containing your gradients for each parameters:\n grads['dW' + str(l)] = dWl\n grads['db' + str(l)] = dbl\n v -- python dictionary containing the current velocity:\n v['dW' + str(l)] = ...\n v['db' + str(l)] = ...\n beta -- the momentum hyperparameter, scalar\n learning_rate -- the learning rate, scalar\n \n Returns:\n parameters -- python dictionary containing your updated parameters \n v -- python dictionary containing your updated velocities\n \"\"\"\n\n L = len(parameters) // 2 # number of layers in the neural networks\n \n # Momentum update for each parameter\n for l in range(L):\n \n ### START CODE HERE ### (approx. 4 lines)\n # compute velocities\n v[\"dW\" + str(l+1)] = beta * v[\"dW\" + str(l+1)] + (1 - beta) * grads['dW' + str(l+1)]\n v[\"db\" + str(l+1)] = beta * v[\"db\" + str(l+1)] + (1 - beta) * grads['db' + str(l+1)]\n # update parameters\n parameters[\"W\" + str(l+1)] = parameters[\"W\" + str(l+1)] - learning_rate*v[\"dW\" + str(l+1)]\n parameters[\"b\" + str(l+1)] = parameters[\"b\" + str(l+1)] - learning_rate*v[\"db\" + str(l+1)]\n ### END CODE HERE ###\n \n return parameters, v\n\nparameters, grads, v = update_parameters_with_momentum_test_case()\n\nparameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))\nprint(\"v[\\\"dW1\\\"] = \" + str(v[\"dW1\"]))\nprint(\"v[\\\"db1\\\"] = \" + str(v[\"db1\"]))\nprint(\"v[\\\"dW2\\\"] = \" + str(v[\"dW2\"]))\nprint(\"v[\\\"db2\\\"] = \" + str(v[\"db2\"]))", "Expected Output:\n<table style=\"width:90%\"> \n <tr>\n <td > **W1** </td> \n <td > [[ 1.62544598 -0.61290114 -0.52907334]\n [-1.07347112 0.86450677 -2.30085497]] </td> \n </tr> \n\n <tr>\n <td > **b1** </td> \n <td > [[ 1.74493465]\n [-0.76027113]] </td> \n </tr> \n\n <tr>\n <td > **W2** </td> \n <td > [[ 0.31930698 -0.24990073 1.4627996 ]\n [-2.05974396 -0.32173003 -0.38320915]\n [ 1.13444069 -1.0998786 -0.1713109 ]] </td> \n </tr> \n\n <tr>\n <td > **b2** </td> \n <td > [[-0.87809283]\n [ 0.04055394]\n [ 0.58207317]] </td> \n </tr> \n\n <tr>\n <td > **v[\"dW1\"]** </td> \n <td > [[-0.11006192 0.11447237 0.09015907]\n [ 0.05024943 0.09008559 -0.06837279]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db1\"]** </td> \n <td > [[-0.01228902]\n [-0.09357694]] </td> \n </tr> \n\n <tr>\n <td > **v[\"dW2\"]** </td> \n <td > [[-0.02678881 0.05303555 -0.06916608]\n [-0.03967535 -0.06871727 -0.08452056]\n [-0.06712461 -0.00126646 -0.11173103]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db2\"]** </td> \n <td > [[ 0.02344157]\n [ 0.16598022]\n [ 0.07420442]]</td> \n </tr> \n</table>\n\nNote that:\n- The velocity is initialized with zeros. So the algorithm will take a few iterations to \"build up\" velocity and start to take bigger steps.\n- If $\\beta = 0$, then this just becomes standard gradient descent without momentum. \nHow do you choose $\\beta$?\n\nThe larger the momentum $\\beta$ is, the smoother the update because the more we take the past gradients into account. 
But if $\\beta$ is too big, it could also smooth out the updates too much. \nCommon values for $\\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\\beta = 0.9$ is often a reasonable default. \nTuning the optimal $\\beta$ for your model might need trying several values to see what works best in term of reducing the value of the cost function $J$. \n\n<font color='blue'>\nWhat you should remember:\n- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.\n- You have to tune a momentum hyperparameter $\\beta$ and a learning rate $\\alpha$.\n4 - Adam\nAdam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum. \nHow does Adam work?\n1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction). \n2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction). \n3. It updates parameters in a direction based on combining information from \"1\" and \"2\".\nThe update rule is, for $l = 1, ..., L$: \n$$\\begin{cases}\nv_{dW^{[l]}} = \\beta_1 v_{dW^{[l]}} + (1 - \\beta_1) \\frac{\\partial \\mathcal{J} }{ \\partial W^{[l]} } \\\nv^{corrected}{dW^{[l]}} = \\frac{v{dW^{[l]}}}{1 - (\\beta_1)^t} \\\ns_{dW^{[l]}} = \\beta_2 s_{dW^{[l]}} + (1 - \\beta_2) (\\frac{\\partial \\mathcal{J} }{\\partial W^{[l]} })^2 \\\ns^{corrected}{dW^{[l]}} = \\frac{s{dW^{[l]}}}{1 - (\\beta_1)^t} \\\nW^{[l]} = W^{[l]} - \\alpha \\frac{v^{corrected}{dW^{[l]}}}{\\sqrt{s^{corrected}{dW^{[l]}}} + \\varepsilon}\n\\end{cases}$$\nwhere:\n- t counts the number of steps taken of Adam \n- L is the number of layers\n- $\\beta_1$ and $\\beta_2$ are hyperparameters that control the two exponentially weighted averages. \n- $\\alpha$ is the learning rate\n- $\\varepsilon$ is a very small number to avoid dividing by zero\nAs usual, we will store all parameters in the parameters dictionary \nExercise: Initialize the Adam variables $v, s$ which keep track of the past information.\nInstruction: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for grads, that is:\nfor $l = 1, ..., L$:\n```python\nv[\"dW\" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters[\"W\" + str(l+1)])\nv[\"db\" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters[\"b\" + str(l+1)])\ns[\"dW\" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters[\"W\" + str(l+1)])\ns[\"db\" + str(l+1)] = ... 
#(numpy array of zeros with the same shape as parameters[\"b\" + str(l+1)])\n```", "# GRADED FUNCTION: initialize_adam\n\ndef initialize_adam(parameters) :\n \"\"\"\n Initializes v and s as two python dictionaries with:\n - keys: \"dW1\", \"db1\", ..., \"dWL\", \"dbL\" \n - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.\n \n Arguments:\n parameters -- python dictionary containing your parameters.\n parameters[\"W\" + str(l)] = Wl\n parameters[\"b\" + str(l)] = bl\n \n Returns: \n v -- python dictionary that will contain the exponentially weighted average of the gradient.\n v[\"dW\" + str(l)] = ...\n v[\"db\" + str(l)] = ...\n s -- python dictionary that will contain the exponentially weighted average of the squared gradient.\n s[\"dW\" + str(l)] = ...\n s[\"db\" + str(l)] = ...\n\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural networks\n v = {}\n s = {}\n \n # Initialize v, s. Input: \"parameters\". Outputs: \"v, s\".\n for l in range(L):\n ### START CODE HERE ### (approx. 4 lines)\n v[\"dW\" + str(l+1)] = np.zeros_like(parameters[\"W\" + str(l+1)])\n v[\"db\" + str(l+1)] = np.zeros_like(parameters[\"b\" + str(l+1)])\n s[\"dW\" + str(l+1)] = np.zeros_like(parameters[\"W\" + str(l+1)])\n s[\"db\" + str(l+1)] = np.zeros_like(parameters[\"b\" + str(l+1)])\n ### END CODE HERE ###\n \n return v, s\n\nparameters = initialize_adam_test_case()\n\nv, s = initialize_adam(parameters)\nprint(\"v[\\\"dW1\\\"] = \" + str(v[\"dW1\"]))\nprint(\"v[\\\"db1\\\"] = \" + str(v[\"db1\"]))\nprint(\"v[\\\"dW2\\\"] = \" + str(v[\"dW2\"]))\nprint(\"v[\\\"db2\\\"] = \" + str(v[\"db2\"]))\nprint(\"s[\\\"dW1\\\"] = \" + str(s[\"dW1\"]))\nprint(\"s[\\\"db1\\\"] = \" + str(s[\"db1\"]))\nprint(\"s[\\\"dW2\\\"] = \" + str(s[\"dW2\"]))\nprint(\"s[\\\"db2\\\"] = \" + str(s[\"db2\"]))\n", "Expected Output:\n<table style=\"width:40%\"> \n <tr>\n <td > **v[\"dW1\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db1\"]** </td> \n <td > [[ 0.]\n [ 0.]] </td> \n </tr> \n\n <tr>\n <td > **v[\"dW2\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db2\"]** </td> \n <td > [[ 0.]\n [ 0.]\n [ 0.]] </td> \n </tr> \n <tr>\n <td > **s[\"dW1\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n\n <tr>\n <td > **s[\"db1\"]** </td> \n <td > [[ 0.]\n [ 0.]] </td> \n </tr> \n\n <tr>\n <td > **s[\"dW2\"]** </td> \n <td > [[ 0. 0. 0.]\n [ 0. 0. 0.]\n [ 0. 0. 0.]] </td> \n </tr> \n\n <tr>\n <td > **s[\"db2\"]** </td> \n <td > [[ 0.]\n [ 0.]\n [ 0.]] </td> \n </tr>\n\n</table>\n\nExercise: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$: \n$$\\begin{cases}\nv_{W^{[l]}} = \\beta_1 v_{W^{[l]}} + (1 - \\beta_1) \\frac{\\partial J }{ \\partial W^{[l]} } \\\nv^{corrected}{W^{[l]}} = \\frac{v{W^{[l]}}}{1 - (\\beta_1)^t} \\\ns_{W^{[l]}} = \\beta_2 s_{W^{[l]}} + (1 - \\beta_2) (\\frac{\\partial J }{\\partial W^{[l]} })^2 \\\ns^{corrected}{W^{[l]}} = \\frac{s{W^{[l]}}}{1 - (\\beta_2)^t} \\\nW^{[l]} = W^{[l]} - \\alpha \\frac{v^{corrected}{W^{[l]}}}{\\sqrt{s^{corrected}{W^{[l]}}}+\\varepsilon}\n\\end{cases}$$\nNote that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. 
You need to shift l to l+1 when coding.", "# GRADED FUNCTION: update_parameters_with_adam\n\ndef update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,\n beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):\n \"\"\"\n Update parameters using Adam\n \n Arguments:\n parameters -- python dictionary containing your parameters:\n parameters['W' + str(l)] = Wl\n parameters['b' + str(l)] = bl\n grads -- python dictionary containing your gradients for each parameters:\n grads['dW' + str(l)] = dWl\n grads['db' + str(l)] = dbl\n v -- Adam variable, moving average of the first gradient, python dictionary\n s -- Adam variable, moving average of the squared gradient, python dictionary\n learning_rate -- the learning rate, scalar.\n beta1 -- Exponential decay hyperparameter for the first moment estimates \n beta2 -- Exponential decay hyperparameter for the second moment estimates \n epsilon -- hyperparameter preventing division by zero in Adam updates\n\n Returns:\n parameters -- python dictionary containing your updated parameters \n v -- Adam variable, moving average of the first gradient, python dictionary\n s -- Adam variable, moving average of the squared gradient, python dictionary\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural networks\n v_corrected = {} # Initializing first moment estimate, python dictionary\n s_corrected = {} # Initializing second moment estimate, python dictionary\n \n # Perform Adam update on all parameters\n for l in range(L):\n # Moving average of the gradients. Inputs: \"v, grads, beta1\". Output: \"v\".\n ### START CODE HERE ### (approx. 2 lines)\n v[\"dW\" + str(l+1)] = beta1 * v[\"dW\" + str(l+1)] + (1 - beta1) * grads[\"dW\" + str(l+1)]\n v[\"db\" + str(l+1)] = beta1 * v[\"db\" + str(l+1)] + (1 - beta1) * grads[\"db\" + str(l+1)]\n ### END CODE HERE ###\n\n # Compute bias-corrected first moment estimate. Inputs: \"v, beta1, t\". Output: \"v_corrected\".\n ### START CODE HERE ### (approx. 2 lines)\n v_corrected[\"dW\" + str(l+1)] = v[\"dW\" + str(l+1)]/(1 - np.power(beta1, t))\n v_corrected[\"db\" + str(l+1)] = v[\"db\" + str(l+1)]/(1 - np.power(beta1, t))\n ### END CODE HERE ###\n\n # Moving average of the squared gradients. Inputs: \"s, grads, beta2\". Output: \"s\".\n ### START CODE HERE ### (approx. 2 lines)\n s[\"dW\" + str(l+1)] = beta2 * s[\"dW\" + str(l+1)] + (1 - beta2) * np.square(grads[\"dW\" + str(l+1)])\n s[\"db\" + str(l+1)] = beta2 * s[\"db\" + str(l+1)] + (1 - beta2) * np.square(grads[\"db\" + str(l+1)])\n ### END CODE HERE ###\n\n # Compute bias-corrected second raw moment estimate. Inputs: \"s, beta2, t\". Output: \"s_corrected\".\n ### START CODE HERE ### (approx. 2 lines)\n s_corrected[\"dW\" + str(l+1)] = s[\"dW\" + str(l+1)]/(1 - np.power(beta2, t))\n s_corrected[\"db\" + str(l+1)] = s[\"db\" + str(l+1)]/(1 - np.power(beta2, t))\n ### END CODE HERE ###\n\n # Update parameters. Inputs: \"parameters, learning_rate, v_corrected, s_corrected, epsilon\". Output: \"parameters\".\n ### START CODE HERE ### (approx. 
2 lines)\n parameters[\"W\" + str(l+1)] = parameters[\"W\" + str(l+1)] - learning_rate * v_corrected[\"dW\" + str(l+1)] / np.sqrt(s_corrected[\"dW\" + str(l+1)] + epsilon)\n parameters[\"b\" + str(l+1)] = parameters[\"b\" + str(l+1)] - learning_rate * v_corrected[\"db\" + str(l+1)] / np.sqrt(s_corrected[\"db\" + str(l+1)] + epsilon)\n ### END CODE HERE ###\n\n return parameters, v, s\n\nparameters, grads, v, s = update_parameters_with_adam_test_case()\nparameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)\n\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))\nprint(\"v[\\\"dW1\\\"] = \" + str(v[\"dW1\"]))\nprint(\"v[\\\"db1\\\"] = \" + str(v[\"db1\"]))\nprint(\"v[\\\"dW2\\\"] = \" + str(v[\"dW2\"]))\nprint(\"v[\\\"db2\\\"] = \" + str(v[\"db2\"]))\nprint(\"s[\\\"dW1\\\"] = \" + str(s[\"dW1\"]))\nprint(\"s[\\\"db1\\\"] = \" + str(s[\"db1\"]))\nprint(\"s[\\\"dW2\\\"] = \" + str(s[\"dW2\"]))\nprint(\"s[\\\"db2\\\"] = \" + str(s[\"db2\"]))", "Expected Output:\n<table> \n <tr>\n <td > **W1** </td> \n <td > [[ 1.63178673 -0.61919778 -0.53561312]\n [-1.08040999 0.85796626 -2.29409733]] </td> \n </tr> \n\n <tr>\n <td > **b1** </td> \n <td > [[ 1.75225313]\n [-0.75376553]] </td> \n </tr> \n\n <tr>\n <td > **W2** </td> \n <td > [[ 0.32648046 -0.25681174 1.46954931]\n [-2.05269934 -0.31497584 -0.37661299]\n [ 1.14121081 -1.09245036 -0.16498684]] </td> \n </tr> \n\n <tr>\n <td > **b2** </td> \n <td > [[-0.88529978]\n [ 0.03477238]\n [ 0.57537385]] </td> \n </tr> \n <tr>\n <td > **v[\"dW1\"]** </td> \n <td > [[-0.11006192 0.11447237 0.09015907]\n [ 0.05024943 0.09008559 -0.06837279]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db1\"]** </td> \n <td > [[-0.01228902]\n [-0.09357694]] </td> \n </tr> \n\n <tr>\n <td > **v[\"dW2\"]** </td> \n <td > [[-0.02678881 0.05303555 -0.06916608]\n [-0.03967535 -0.06871727 -0.08452056]\n [-0.06712461 -0.00126646 -0.11173103]] </td> \n </tr> \n\n <tr>\n <td > **v[\"db2\"]** </td> \n <td > [[ 0.02344157]\n [ 0.16598022]\n [ 0.07420442]] </td> \n </tr> \n <tr>\n <td > **s[\"dW1\"]** </td> \n <td > [[ 0.00121136 0.00131039 0.00081287]\n [ 0.0002525 0.00081154 0.00046748]] </td> \n </tr> \n\n <tr>\n <td > **s[\"db1\"]** </td> \n <td > [[ 1.51020075e-05]\n [ 8.75664434e-04]] </td> \n </tr> \n\n <tr>\n <td > **s[\"dW2\"]** </td> \n <td > [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]\n [ 1.57413361e-04 4.72206320e-04 7.14372576e-04]\n [ 4.50571368e-04 1.60392066e-07 1.24838242e-03]] </td> \n </tr> \n\n <tr>\n <td > **s[\"db2\"]** </td> \n <td > [[ 5.49507194e-05]\n [ 2.75494327e-03]\n [ 5.50629536e-04]] </td> \n </tr>\n</table>\n\nYou now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference.\n5 - Model with different optimization algorithms\nLets use the following \"moons\" dataset to test the different optimization methods. (The dataset is named \"moons\" because the data from each of the two classes looks a bit like a crescent-shaped moon.)", "train_X, train_Y = load_dataset()", "We have already implemented a 3-layer neural network. 
You will train it with: \n- Mini-batch Gradient Descent: it will call your function:\n - update_parameters_with_gd()\n- Mini-batch Momentum: it will call your functions:\n - initialize_velocity() and update_parameters_with_momentum()\n- Mini-batch Adam: it will call your functions:\n - initialize_adam() and update_parameters_with_adam()", "def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,\n beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):\n \"\"\"\n 3-layer neural network model which can be run in different optimizer modes.\n \n Arguments:\n X -- input data, of shape (2, number of examples)\n Y -- true \"label\" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)\n layers_dims -- python list, containing the size of each layer\n learning_rate -- the learning rate, scalar.\n mini_batch_size -- the size of a mini batch\n beta -- Momentum hyperparameter\n beta1 -- Exponential decay hyperparameter for the past gradients estimates \n beta2 -- Exponential decay hyperparameter for the past squared gradients estimates \n epsilon -- hyperparameter preventing division by zero in Adam updates\n num_epochs -- number of epochs\n print_cost -- True to print the cost every 1000 epochs\n\n Returns:\n parameters -- python dictionary containing your updated parameters \n \"\"\"\n\n L = len(layers_dims) # number of layers in the neural networks\n costs = [] # to keep track of the cost\n t = 0 # initializing the counter required for Adam update\n seed = 10 # For grading purposes, so that your \"random\" minibatches are the same as ours\n \n # Initialize parameters\n parameters = initialize_parameters(layers_dims)\n\n # Initialize the optimizer\n if optimizer == \"gd\":\n pass # no initialization required for gradient descent\n elif optimizer == \"momentum\":\n v = initialize_velocity(parameters)\n elif optimizer == \"adam\":\n v, s = initialize_adam(parameters)\n \n # Optimization loop\n for i in range(num_epochs):\n \n # Define the random minibatches. 
We increment the seed to reshuffle differently the dataset after each epoch\n seed = seed + 1\n minibatches = random_mini_batches(X, Y, mini_batch_size, seed)\n\n for minibatch in minibatches:\n\n # Select a minibatch\n (minibatch_X, minibatch_Y) = minibatch\n\n # Forward propagation\n a3, caches = forward_propagation(minibatch_X, parameters)\n\n # Compute cost\n cost = compute_cost(a3, minibatch_Y)\n\n # Backward propagation\n grads = backward_propagation(minibatch_X, minibatch_Y, caches)\n\n # Update parameters\n if optimizer == \"gd\":\n parameters = update_parameters_with_gd(parameters, grads, learning_rate)\n elif optimizer == \"momentum\":\n parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)\n elif optimizer == \"adam\":\n t = t + 1 # Adam counter\n parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,\n t, learning_rate, beta1, beta2, epsilon)\n \n # Print the cost every 1000 epoch\n if print_cost and i % 1000 == 0:\n print (\"Cost after epoch %i: %f\" %(i, cost))\n if print_cost and i % 100 == 0:\n costs.append(cost)\n \n # plot the cost\n plt.plot(costs)\n plt.ylabel('cost')\n plt.xlabel('epochs (per 100)')\n plt.title(\"Learning rate = \" + str(learning_rate))\n plt.show()\n\n return parameters", "You will now run this 3 layer neural network with each of the 3 optimization methods.\n5.1 - Mini-batch Gradient descent\nRun the following code to see how the model does with mini-batch gradient descent.", "# train 3-layer model\nlayers_dims = [train_X.shape[0], 5, 2, 1]\nparameters = model(train_X, train_Y, layers_dims, optimizer = \"gd\")\n\n# Predict\npredictions = predict(train_X, train_Y, parameters)\n\n# Plot decision boundary\nplt.title(\"Model with Gradient Descent optimization\")\naxes = plt.gca()\naxes.set_xlim([-1.5,2.5])\naxes.set_ylim([-1,1.5])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)", "5.2 - Mini-batch gradient descent with momentum\nRun the following code to see how the model does with momentum. 
Because this example is relatively simple, the gains from using momemtum are small; but for more complex problems you might see bigger gains.", "# train 3-layer model\nlayers_dims = [train_X.shape[0], 5, 2, 1]\nparameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = \"momentum\")\n\n# Predict\npredictions = predict(train_X, train_Y, parameters)\n\n# Plot decision boundary\nplt.title(\"Model with Momentum optimization\")\naxes = plt.gca()\naxes.set_xlim([-1.5,2.5])\naxes.set_ylim([-1,1.5])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)", "5.3 - Mini-batch with Adam mode\nRun the following code to see how the model does with Adam.", "# train 3-layer model\nlayers_dims = [train_X.shape[0], 5, 2, 1]\nparameters = model(train_X, train_Y, layers_dims, optimizer = \"adam\")\n\n# Predict\npredictions = predict(train_X, train_Y, parameters)\n\n# Plot decision boundary\nplt.title(\"Model with Adam optimization\")\naxes = plt.gca()\naxes.set_xlim([-1.5,2.5])\naxes.set_ylim([-1,1.5])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)", "5.4 - Summary\n<table> \n <tr>\n <td>\n **optimization method**\n </td>\n <td>\n **accuracy**\n </td>\n <td>\n **cost shape**\n </td>\n\n </tr>\n <td>\n Gradient descent\n </td>\n <td>\n 79.7%\n </td>\n <td>\n oscillations\n </td>\n <tr>\n <td>\n Momentum\n </td>\n <td>\n 79.7%\n </td>\n <td>\n oscillations\n </td>\n </tr>\n <tr>\n <td>\n Adam\n </td>\n <td>\n 94%\n </td>\n <td>\n smoother\n </td>\n </tr>\n</table>\n\nMomentum usually helps, but given the small learning rate and the simplistic dataset, its impact is almost negligeable. Also, the huge oscillations you see in the cost come from the fact that some minibatches are more difficult thans others for the optimization algorithm.\nAdam on the other hand, clearly outperforms mini-batch gradient descent and Momentum. If you run the model for more epochs on this simple dataset, all three methods will lead to very good results. However, you've seen that Adam converges a lot faster.\nSome advantages of Adam include:\n- Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum) \n- Usually works well even with little tuning of hyperparameters (except $\\alpha$)\nReferences:\n\nAdam paper: https://arxiv.org/pdf/1412.6980.pdf" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
3juholee/materialproject_ml
notebooks/old_ICSD_Notebooks/Understanding ICSD data.ipynb
mit
[ "from __future__ import division, print_function\n\n# import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom pymatgen.core import Element, Composition\n\n%matplotlib inline\n\nimport csv\n\nwith open(\"../../ICSD/icsd-ternaries.csv\", \"r\") as f:\n csv_reader = csv.reader(f, dialect = csv.excel_tab)\n data = [line for line in csv_reader]\n\nformulas = [line[2] for line in data]\ncompositions = [Composition(f) for f in formulas]", "Structure Types\nStructure types are assigned by hand by ICSD curators.", "# How many ternaries have been assigned a structure type?\nstructure_types = [line[3] for line in data if line[3] is not '']\nunique_structure_types = set(structure_types)\nprint(\"There are {} ICSD ternaries entries.\".format(len(data)))\nprint(\"Structure types are assigned for {} entries.\".format(len(structure_types)))\nprint(\"There are {} unique structure types.\".format(len(unique_structure_types)))", "Filter for stoichiometric compounds only:", "def is_stoichiometric(composition):\n return np.all(np.mod(composition.values(), 1) == 0)\n\nstoichiometric_compositions = [c for c in compositions if is_stoichiometric(c)]\nprint(\"Number of stoichiometric compositions: {}\".format(len(stoichiometric_compositions)))\nternaries = set(c.formula for c in stoichiometric_compositions)\nprint(\"Number of unique stoichiometric compositions: {}\".format(len(ternaries)))\n\ndata_stoichiometric = [x for x in data if is_stoichiometric(Composition(x[2]))]\n\nfrom collections import Counter\n\nstruct_type_freq = Counter(x[3] for x in data_stoichiometric if x[3] is not '')\n\nplt.loglog(range(1, len(struct_type_freq)+1),\n sorted(struct_type_freq.values(), reverse = True), 'o')\nplt.xlabel(\"Structure Type\")\nplt.ylabel(\"Structure Type Frequency\")\nplt.title(\"Distribution of Frequencies of Structure Types\")\n\nsorted(struct_type_freq.items(), key = lambda x: x[1], reverse = True)\n\nuniq_phases = set()\nfor row in data_stoichiometric:\n spacegroup, formula, struct_type = row[1:4]\n phase = (spacegroup, Composition(formula).formula, struct_type)\n uniq_phases.add(phase)\n\nuniq_struct_type_freq = Counter(x[2] for x in uniq_phases if x[2] is not '')\nuniq_struct_type_freq_sorted = sorted(uniq_struct_type_freq.items(), key = lambda x: x[1], reverse = True)\n\nplt.loglog(range(1, len(uniq_struct_type_freq_sorted)+1),\n [x[1] for x in uniq_struct_type_freq_sorted], 'o')\nplt.xlabel(\"Structure Type\")\nplt.ylabel(\"Structure Type Frequency\")\nplt.title(\"Distribution of Frequencies of Structure Types\")\n\nuniq_struct_type_freq_sorted\n\nfor struct_type,freq in uniq_struct_type_freq_sorted[:10]:\n print(\"{} : {}\".format(struct_type, freq))\n fffs = [p[1] for p in uniq_phases if p[2] == struct_type]\n fmt = \" \".join([\"{:14}\"]*5)\n print(fmt.format(*fffs[0:5]))\n print(fmt.format(*fffs[5:10]))\n print(fmt.format(*fffs[10:15]))\n print(fmt.format(*fffs[15:20]))", "Long Formulas", "# What are the longest formulas?\nfor formula in sorted(formulas, key = lambda x: len(x), reverse = True)[:20]:\n print(formula)", "Two key insights:\n1. Just because there are three elements in the formula\n doesn't mean the compound is fundamentally a ternary.\n There are doped binaries which masquerade as ternaries.\n And there are doped ternaries which masquerade as quaternaries,\n or even quintenaries. Because I only asked for compositions\n with 3 elements, this data is missing.\n2. ICSD has strategically placed parentheses in the formulas\n which give hints as to logical groupings. 
For example:\n (Ho1.3 Ti0.7) ((Ti0.64 Ho1.36) O6.67)\n is in fact in the pyrochlore family, A2B2O7.\nIntermetallics\nHow many intermetallics does the ICSD database contain?", "def filter_in_set(compound, universe):\n return all((e in universe) for e in Composition(compound))\n\ntransition_metals = [e for e in Element if e.is_transition_metal]\ntm_ternaries = [c for c in formulas if filter_in_set(c, transition_metals)]\nprint(\"Number of intermetallics:\", len(tm_ternaries))\n\nunique_tm_ternaries = set([Composition(c).formula for c in tm_ternaries])\nprint(\"Number of unique intermetallics:\", len(unique_tm_ternaries))\n\nunique_tm_ternaries" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ledeprogram/algorithms
class7/donow/benzaquen_mercy_donow_7.ipynb
gpl-3.0
[ "Apply logistic regression to categorize whether a county had high mortality rate due to contamination\n1. Import the necessary packages to read in the data, plot, and create a logistic regression model", "import pandas as pd\n%matplotlib inline\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression", "2. Read in the hanford.csv file in the data/ folder", "df = pd.read_csv(\"hanford.csv\")\n\ndf.head()", "3. Calculate the basic descriptive statistics on the data", "df.mean()\n\ndf.median()\n\n#range\ndf[\"Exposure\"].max() - df[\"Exposure\"].min()\n\n#range\ndf[\"Mortality\"].max() - df[\"Mortality\"].min()\n\ndf.std()\n\ndf.corr()", "4. Find a reasonable threshold to say exposure is high and recode the data", "#IQR\nIQR= df['Exposure'].quantile(q=0.75)- df['Exposure'].quantile(q=0.25)", "UAL= (IQR * 1.5) +Q3\nLAL= Q1- (IQR * 1.5)\nAnything outside of UAL and LAL is an outlier", "Q1= df['Exposure'].quantile(q=0.25) #1st Quartile\nQ1\n\nQ2= df['Exposure'].quantile(q=0.5) #2nd Quartile (Median)\n\nQ3= df['Exposure'].quantile(q=0.75) #3rd Quartile\n\nUAL= (IQR * 1.5) +Q3\nUAL\n\nLAL= Q1- (IQR * 1.5)\nLAL", "5. Create a logistic regression model\n6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ML4DS/ML4all
P2.Numpy/old/numpy_professor.ipynb
mit
[ "Exercises about Numpy\nAuthor: Jerónimo Arenas García (jeronimo.arenas@uc3m.es)\n\nNotebook version: 1.1 (Sep 20, 2017)\n\nChanges: v.1.0 (Mar 15, 2016) - First version\n v.1.1 (Sep 20, 2017) - Compatibility with python 2 and python 3\n Display messages in English\n\nPending changes:\n * Add a section 7.4. representing f_poly as a function of x", "# Import some libraries that will be necessary for working with data and displaying plots\n\nimport numpy as np\nimport hashlib\n\n# Test functions\n\ndef hashstr(str1):\n \"\"\"Implements the secure hash of a string\"\"\"\n return hashlib.sha1(str1).hexdigest()\n\ndef test_arrayequal(x1, x2, err_msg, ok_msg='Test passed'):\n \"\"\"Test if all elements in arrays x1 and x2 are the same item by item\n :param x1: First array for the comparison\n :param x2: Second array for the comparison\n :param err_msg: Display message if both arrays are not the same\n :param ok_msg: Display message if arrays are the same (optional)\n \"\"\"\n try:\n np.testing.assert_array_equal(x1, x2)\n print(ok_msg)\n except:\n print(err_msg)\n\ndef test_strequal(str1, str2, err_msg, ok_msg='Test passed'):\n \"\"\"Test if str1 and str2 are the same string\n :param str1: First string for the comparison\n :param str2: Second string for the comparison\n :param err_msg: Display message if both strings are not the same\n :param ok_msg: Display message if strings are the same (optional)\n \"\"\"\n try:\n np.testing.assert_string_equal(str1, str2)\n print(ok_msg)\n except:\n print(err_msg)\n \ndef test_hashedequal(str1, str2, err_msg, ok_msg='Test passed'):\n \"\"\"Test if hashed(str1) and str2 are the same string\n :param str1: First string for the comparison\n str1 will be hashed for the comparison\n :param str2: Second string for the comparison\n :param err_msg: Display message if both strings are not the same\n :param ok_msg: Display message if strings are the same (optional)\n \"\"\"\n try:\n np.testing.assert_string_equal(hashstr(str1), str2)\n print(ok_msg)\n except:\n print(err_msg)\n", "This notebook reviews some of the Python modules that make it possible to work with data structures in an easy an efficient manner. We will review Numpy arrays and matrices, and some of the common operations which are needed when working with these data structures in Machine Learning.\n1. Create numpy arrays of different types\nThe following code fragment defines variable x as a list of 4 integers, you can check that by printing the type of any element of x. Use python command map() to create a new list with the same elements as x, but where each element of the list is a float. Note that, since in Python 3 map() returns an iterable object, you need to call function list() to populate the list.", "x = [5, 4, 3, 4]\nprint(type(x[0]))\n\n# Create a list of floats containing the same elements as in x\n# x_f = list(map(<FILL IN>))\nx_f = list(map(float, x))\n \n\ntest_arrayequal(x, x_f, 'Elements of both lists are not the same')\nif ((type(x[-2])==int) & (type(x_f[-2])==float)):\n print('Test passed')\nelse:\n print('Type conversion incorrect')", "Numpy arrays can be defined directly using methods such as np.arange(), np.ones(), np.zeros(), as well as random number generators. Alternatively, you can easily generate them from python lists (or lists of lists) containing elements of numeric type.\nYou can easily check the shape of any numpy vector with the property .shape, and reshape it with the method reshape(). Note the difference between 1-D and N-D numpy arrays (ndarrays). 
You should also be aware of the existence of another numpy data type: Numpy matrices (http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.matrix.html) are inherently 2-D structures where operators * and ** have the meaning of matrix multiplication and matrix power.\nIn the code below, you can check the types and shapes of different numpy arrays. Complete also the exercise where you are asked to convert a unidimensional array into a vector of size $4\\times2$.", "# Numpy arrays can be created from numeric lists or using different numpy methods\ny = np.arange(8)+1\nx = np.array(x_f)\n\n# Check the different data types involved\nprint('Variable x_f is of type', type(x_f))\nprint('Variable x is of type ', type(x))\nprint('Variable y is of type', type(y))\n\n# Print the shapes of the numpy arrays\nprint('Variable y has dimension', y.shape)\nprint('Variable x has dimension', x.shape)\n\n#Complete the following exercises\n# Convert x into a variable x_matrix, of type `numpy.matrixlib.defmatrix.matrix` using command\n# np.matrix(). The resulting matrix should be of dimensions 4x1\n# x_matrix = <FILL IN>\nx_matrix = np.matrix(x).T\n\n# Convert x into a variable x_array, of type `ndarray`, and shape (4,1)\n# x_array = <FILL IN>\nx_array = x[:,np.newaxis]\n\n# Reshape array y into a numpy array of shape (4,2) using command np.reshape()\n# y = <FILL IN>\ny = y.reshape((4,2))\n \n\ntest_strequal(str(type(x_matrix)), \"<class 'numpy.matrixlib.defmatrix.matrix'>\", 'x_matrix is not defined as a matrix')\ntest_hashedequal(x_matrix.tostring(), '1215ced5d82501bf03e04b30f16c45a4bdcb8838', 'Incorrect variable x_matrix')\ntest_strequal(str(type(x_array)), \"<class 'numpy.ndarray'>\", 'x_array is not defined as numpy ndarray')\ntest_hashedequal(x_array.tostring(), '1215ced5d82501bf03e04b30f16c45a4bdcb8838', 'Incorrect variable x_array')\ntest_strequal(str(type(y)), \"<class 'numpy.ndarray'>\", 'y is not defined as a numpy ndarray')\ntest_hashedequal(y.tostring(), '0b61a85386775357e0710800497771a34fdc8ae5', 'Incorrect variable y')", "Some other useful Numpy methods are:\n\nnp.flatten(): converts a numpy array or matrix into a vector by concatenating the elements in the different dimension. Note that the result of the method keeps the type of the original variable, so the result is a 1-D ndarray when invoked on a numpy array, and a numpy matrix (and necessarily 2-D) when invoked on a matrix.\nnp.tolist(): converts a numpy array or matrix into a python list.\n\nThese uses are illustrated in the code fragment below.", "print('Applying flatten() to matrix x_matrix (of type matrix)')\nprint('x_matrix.flatten():', x_matrix.flatten())\nprint('Its type:', type(x_matrix.flatten()))\nprint('Its dimensions:', x_matrix.flatten().shape)\n\nprint('\\nApplying flatten() to matrix y (of type ndarray)')\nprint('y.flatten():', y.flatten())\nprint('Its type:', type(y.flatten()))\nprint('Its dimensions:', y.flatten().shape)\n\nprint('\\nApplying tolist() to x_matrix (of type matrix) and to the 2D vector y (of type ndarray)')\nprint('x_matrix.tolist():', x_matrix.tolist())\nprint('y.tolist():', y.tolist())\n", "2. 
Products and powers of numpy arrays and matrices\n\n* and ** when used with Numpy arrays implement elementwise product and exponentiation\n* and ** when used with Numpy matrices implement matrix product and exponentiation\nMethod np.dot() implements matrix multiplication, and can be used both with numpy arrays and matrices.\n\nSo you have to be careful about the types you are using for each variable", "# Try to run the following command on variable x_matrix, and check what happens\nprint(x_array**2)\n\nprint('Remember that the shape of x_array is', x_array.shape)\nprint('Remember that the shape of y is', y.shape)\n\n# Complete the following exercises. You can print the partial results to visualize them\n\n# Multiply the 2-D array `y` by 2\n# y_by2 = <FILL IN>\ny_by2 = y * 2\n\n# Multiply each of the columns in `y` by the column vector x_array\n# z_4_2 = <FILL IN>\nz_4_2 = x_array * y\n\n# Obtain the matrix product of the transpose of x_array and y\n# x_by_y = <FILL IN>\nx_by_y = x_array.T.dot(y)\n\n# Repeat the previous calculation, this time using x_matrix (of type numpy matrix) instead of x_array\n# Note that in this case you do not need to use method dot()\n# x_by_y2 = <FILL IN>\nx_by_y2 = x_matrix.T * y\n\n# Multiply vector x_array by its transpose to obtain a 4 x 4 matrix\n#x_4_4 = <FILL IN>\nx_4_4 = x_array.dot(x_array.T)\n\n# Multiply the transpose of vector x_array by vector x_array. The result is the squared-norm of the vector\n#x_norm2 = <FILL IN>\nx_norm2 = x_array.T.dot(x_array)\n \n\ntest_hashedequal(y_by2.tostring(),'1b54af8620657d5b8da424ca6be8d58b6627bf9a','Incorrect result for variable y_by2')\ntest_hashedequal(z_4_2.tostring(),'0727ed01af0aa4175316d3916fd1c8fe2eb98f27','Incorrect result for variable z_4_2')\ntest_hashedequal(x_by_y.tostring(),'b33f700fec2b6bd66e76260d31948ce07b8c15d3','Incorrect result for variable x_by_y')\ntest_hashedequal(x_by_y2.tostring(),'b33f700fec2b6bd66e76260d31948ce07b8c15d3','Incorrect result for variable x_by_y2')\ntest_hashedequal(x_4_4.tostring(),'832c97cc2d69298287838350b0bae66deec58b03','Incorrect result for variable x_4_4')\ntest_hashedequal(x_norm2.tostring(),'33b80b953557002511474aa340441d5b0728bbaf','Incorrect result for variable x_norm2')", "3. Numpy methods that can be carried out along different dimensions\nCompare the result of the following commands:", "print(z_4_2.shape)\nprint(np.mean(z_4_2))\nprint(np.mean(z_4_2,axis=0))\nprint(np.mean(z_4_2,axis=1))\n", "Other numpy methods where you can specify the axis along with a certain operation should be carried out are:\n\nnp.median()\nnp.std()\nnp.var()\nnp.percentile()\nnp.sort()\nnp.argsort()\n\nIf the axis argument is not provided, the array is flattened before carriying out the corresponding operation.\n4. 
Concatenating matrices and vectors\nProvided that the necessary dimensions fit, horizontal and vertical stacking of matrices can be carried out with methods np.hstack() and np.vstack().\nComplete the following exercises to practice with matrix concatenation:", "# Previous check that you are working with the right matrices\ntest_hashedequal(z_4_2.tostring(),'0727ed01af0aa4175316d3916fd1c8fe2eb98f27','Incorrect result for variable z_4_2')\ntest_hashedequal(x_array.tostring(), '1215ced5d82501bf03e04b30f16c45a4bdcb8838', 'Incorrect variable x_array')\n\n# Vertically stack matrix z_4_2 with itself\n# ex1_res = <FILL IN>\nex1_res = np.vstack((z_4_2,z_4_2))\n\n# Horizontally stack matrix z_4_2 and vector x_array\n# ex2_res = <FILL IN>\nex2_res = np.hstack((z_4_2,x_array))\n\n# Horizontally stack a column vector of ones with the result of the first exercise (variable ex1_res)\n# X = <FILL IN>\nX = np.hstack((np.ones((8,1)),ex1_res))\n \n\ntest_hashedequal(ex1_res.tostring(),'e740ea91c885cdae95499eaf53ec6f1429943d9c','Wrong value for variable ex1_res')\ntest_hashedequal(ex2_res.tostring(),'d5f18a630b2380fcae912f449b2a87766528e0f2','Wrong value for variable ex2_res')\ntest_hashedequal(X.tostring(),'bdf94b49c2b7c6ae71a916beb647236918ead39f','Wrong value for variable X')", "5. Slicing\nParticular elements of numpy arrays (both unidimensional and multidimensional) can be accessed using standard python slicing. When working with multidimensional arrays, slicing can be carried out along the different dimensions at once", "# Keep last row of matrix X\n# X_sub1 = <FILL IN>\nX_sub1 = X[-1,]\n\n# Keep first column of the three first rows of X\n# X_sub2 = <FILL IN>\nX_sub2 = X[:3,0]\n\n# Keep first two columns of the three first rows of X\n# X_sub3 = <FILL IN>\nX_sub3 = X[:3,:2]\n\n# Invert the order of the rows of X\n# X_sub4 = <FILL IN>\nX_sub4 = X[::-1,:]\n \n\ntest_hashedequal(X_sub1.tostring(),'51fb613567c9ef5fc33e7190c60ff37e0cd56706','Wrong value for variable X_sub1')\ntest_hashedequal(X_sub2.tostring(),'12a72e95677fc01de6b7bfb7f62d772d0bdb5b87','Wrong value for variable X_sub2')\ntest_hashedequal(X_sub3.tostring(),'f45247c6c31f9bcccfcb2a8dec9d288ea41e6acc','Wrong value for variable X_sub3')\ntest_hashedequal(X_sub4.tostring(),'1fd985c087ba518c6d040799e49a967e4b1d433a','Wrong value for variable X_sub4')", "Extracting columns and rows from multidimensional arrays\nSomething to be aware of when extracting rows or columns from numpy arrays is that if you specify just the index of the row or column you want to extract, the result will be a 1-D numpy array in any case. For instance, the following code prints the second column and third row of the numpy array X, and shows its dimensions. Notice that in both cases you get arrays with 1 dimension only.", "X_col2 = X[:,1]\nX_row3 = X[2,]\n\nprint('Matrix X is\\n', X)\nprint('Second column of matrix X:', X_col2, '; Dimensions:', X_col2.shape)\nprint('Third row of matrix X:', X_row3, '; Dimensions:', X_row3.shape)", "If you wish that the extracted row or column is still a 2-D row or column vector, it is important to specify an interval instead of a single value, even if such interval consists of just one value.\nMany numpy functions will also return 1-D vectors. 
It is important to be aware of such behavior to avoid and detect bugs in your code that may give place to undesired behaviors.", "X_col2 = X[:,1:2]\nX_row3 = X[2:3,]\n\nprint('Second column of matrix X:', X_col2, '; Dimensions:', X_col2.shape)\nprint('Third row of matrix X:', X_row3, '; Dimensions:', X_row3.shape)", "6. Matrix inversion\nNon singular matrices can be inverted with method np.linalg.inv(). Invert square matrices $X\\cdot X^\\top$ and $X^\\top \\cdot X$, and see what happens when trying to invert a singular matrix. The rank of a matrix can be studied with method numpy.linalg.matrix_rank().", "print(X.shape)\nprint(X.dot(X.T))\nprint(X.T.dot(X))\n\nprint(np.linalg.inv(X.T.dot(X)))\n#print np.linalg.inv(X.dot(X.T))", "7. Exercises\nIn this section, you will complete three exercises where you will carry out some common operations when working with data structures. For this exercise you will work with the 2-D numpy array X, assuming that it contains the values of two different variables for 8 data patterns. A first column of ones has already been introduced in a previous exercise:\n$${\\bf X} = \\left[ \\begin{array}{ccc} 1 & x_1^{(1)} & x_2^{(1)} \\ 1 & x_1^{(2)} & x_2^{(2)} \\ \\vdots & \\vdots & \\vdots \\ 1 & x_1^{(8)} & x_2^{(8)}\\end{array}\\right]$$\nFirst of all, let us check that you are working with the right matrix", "test_hashedequal(X.tostring(),'bdf94b49c2b7c6ae71a916beb647236918ead39f','Wrong value for variable X')", "7.1. Non-linear transformations\nCreate a new matrix Z, where additional features are created by carrying out the following non-linear transformations:\n$${\\bf Z} = \\left[ \\begin{array}{ccc} 1 & x_1^{(1)} & x_2^{(1)} & \\log\\left(x_1^{(1)}\\right) & \\log\\left(x_2^{(1)}\\right)\\ 1 & x_1^{(2)} & x_2^{(2)} & \\log\\left(x_1^{(2)}\\right) & \\log\\left(x_2^{(2)}\\right) \\ \\vdots & \\vdots & \\vdots \\ 1 & x_1^{(8)} & x_2^{(8)} & \\log\\left(x_1^{(8)}\\right) & \\log\\left(x_2^{(8)}\\right)\\end{array}\\right] = \\left[ \\begin{array}{ccc} 1 & z_1^{(1)} & z_2^{(1)} & z_3^{(1)} & z_4^{(1)}\\ 1 & z_1^{(2)} & z_2^{(2)} & z_3^{(1)} & z_4^{(1)} \\ \\vdots & \\vdots & \\vdots \\ 1 & z_1^{(8)} & z_2^{(8)} & z_3^{(1)} & z_4^{(1)} \\end{array}\\right]$$\nIn other words, we are calculating the logarightmic values of the two original variables. From now on, any function involving linear transformations of the variables in Z, will be in fact a non-linear function of the original variables.", "# Obtain matrix Z using concatenation functions\n# Z = np.hstack(<FILL IN>)\nZ = np.hstack((X,np.log(X[:,1:])))\n \n\ntest_hashedequal(Z.tostring(),'737dee4c168c5ce8fc53a5ec5cad43b5a53c7656','Incorrect matrix Z')", "Repeat the previous exercise, this time using the map() method together with function log_transform(). This function needs to be defined in such a way that guarantees that variable Z_map is the same as the previously computed variable Z.", "def log_transform(x):\n # return <FILL IN>\n return np.hstack((x,np.log(x[1]),np.log(x[2])))\n \nZ_map = np.array(list(map(log_transform,X)))\n \n\ntest_hashedequal(Z_map.tostring(),'737dee4c168c5ce8fc53a5ec5cad43b5a53c7656','Incorrect matrix Z')", "Repeat the previous exercise once more. This time, define a lambda function for the task.", "# Z_lambda = np.array(list(map(lambda x: <FILL IN>,X)))\nZ_lambda = np.array(list(map(lambda x: np.hstack((x,np.log(x[1]),np.log(x[2]))),X)))\n \n\ntest_hashedequal(Z_lambda.tostring(),'737dee4c168c5ce8fc53a5ec5cad43b5a53c7656','Incorrect matrix Z')", "7.2. 
Polynomial transformations\nSimilarly to the previous exercise, now we are interested in obtaining another matrix that will be used to evaluate a polynomial model. In order to do so, compute matrix Z_poly as follows:\n$$Z_\\text{poly} = \\left[ \\begin{array}{cccc} 1 & x_1^{(1)} & (x_1^{(1)})^2 & (x_1^{(1)})^3 \\ 1 & x_1^{(2)} & (x_1^{(2)})^2 & (x_1^{(2)})^3 \\ \\vdots & \\vdots & \\vdots \\ 1 & x_1^{(8)} & (x_1^{(8)})^2 & (x_1^{(8)})^3 \\end{array}\\right]$$\nNote that, in this case, only the first variable of each pattern is used.", "# Calculate variable Z_poly, using any method that you want\n# Z_poly = <FILL IN>\nZ_poly = np.array(list(map(lambda x: np.array([x[1]**k for k in range(4)]),X)))\n \n\ntest_hashedequal(Z_poly.tostring(),'7e025512fcee1c1db317a1a30f01a0d4b5e46e67','Wrong variable Z_poly')", "7.3. Model evaluation\nFinally, we can use previous data matrices Z and Z_poly to efficiently compute the output of the corresponding non-linear models over all the patterns in the data set. In this exercise, we consider the two following linear-in-the-parameters models to be evaluated:\n$$f_\\text{log}({\\bf x}) = w_0 + w_1 \\cdot x_1 + w_2 \\cdot x_2 + w_3 \\cdot \\log(x_1) + w_4 \\cdot \\log(x_2)$$\n$$f_\\text{poly}({\\bf x}) = w_0 + w_1 \\cdot x_1 + w_2 \\cdot x_1^2 + w_3 \\cdot x_1^3$$\nCompute the output of the two models for the particular weights that are defined in the code below. Your output variables f_log and f_poly should contain the outputs of the model for all eight patterns in the data set. \nNote that for this task, you just need to implement appropriate matricial products among the extended data matrices, Z and Z_poly, and the provided weight vectors.", "w_log = np.array([3.3, 0.5, -2.4, 3.7, -2.9])\nw_poly = np.array([3.2, 4.5, -3.2, 0.7])\n\n# f_log = <FILL IN>\nf_log = Z.dot(w_log)\n# f_poly = <FILL IN>\nf_poly = Z_poly.dot(w_poly)\n \n\ntest_hashedequal(f_log.tostring(),'d5801dfbd603f6db7010b9ef80fa48e351c0b38b','Incorrect evaluation of the logarithmic model')\ntest_hashedequal(f_poly.tostring(),'32abdcc0e32e76500947d0691cfa9917113d7019','Incorrect evaluation of the polynomial model')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fernandojvdasilva/nlp-python-lectures
nlp_classification_pt-br.ipynb
gpl-3.0
[ "<h1 align=\"center\"> Introdução ao Processamento de Linguagem Natural (PLN) Usando Python </h1>\n<h3 align=\"center\"> Professor Fernando Vieira da Silva MSc.</h3>\n\n<h2>Problema de Classificação</h2>\n\n<p>Neste tutorial vamos trabalhar com um exemplo prático de problema de classificação de texto. O objetivo é identificar uma sentença como escrita \"formal\" ou \"informal\".</p>\n\n<b>1. Obtendo o corpus</b>\n<p>Para simplificar o problema, vamos continuar utilizando o corpus Gutenberg como textos formais e vamos usar mensagens de chat do corpus <b>nps_chat</b> como textos informais.</p>\n<p>Antes de tudo, vamos baixar o corpus nps_chat:</p>", "import nltk\n\nnltk.download('nps_chat')\n\nfrom nltk.corpus import nps_chat\n\nprint(nps_chat.fileids())", "<p>Agora vamos ler os dois corpus e armazenar as sentenças em uma mesma ndarray. Perceba que também teremos uma ndarray para indicar se o texto é formal ou não. Começamos armazenando o corpus em lists. Vamos usar apenas 500 elementos de cada, para fins didáticos.</p>", "import nltk\n\nx_data_nps = []\n\nfor fileid in nltk.corpus.nps_chat.fileids():\n x_data_nps.extend([post.text for post in nps_chat.xml_posts(fileid)])\n\ny_data_nps = [0] * len(x_data_nps)\n\nx_data_gut = []\nfor fileid in nltk.corpus.gutenberg.fileids():\n x_data_gut.extend([' '.join(sent) for sent in nltk.corpus.gutenberg.sents(fileid)])\n \ny_data_gut = [1] * len(x_data_gut)\n\nx_data_full = x_data_nps[:500] + x_data_gut[:500]\nprint(len(x_data_full))\ny_data_full = y_data_nps[:500] + y_data_gut[:500]\nprint(len(y_data_full))", "<p>Em seguida, transformamos essas listas em ndarrays, para usarmos nas etapas de pré-processamento que já conhecemos.</p>", "import numpy as np\n\nx_data = np.array(x_data_full, dtype=object)\n#x_data = np.array(x_data_full)\nprint(x_data.shape)\ny_data = np.array(y_data_full)\nprint(y_data.shape)", "<b>2. Dividindo em datasets de treino e teste</b>\n<p>Para que a pesquisa seja confiável, precisamos avaliar os resultados em um dataset de teste. Por isso, vamos dividir os dados aleatoriamente, deixando 80% para treino e o demais para testar os resultados em breve.</p>", "train_indexes = np.random.rand(len(x_data)) < 0.80\n\nprint(len(train_indexes))\nprint(train_indexes[:10])\n\nx_data_train = x_data[train_indexes]\ny_data_train = y_data[train_indexes]\n\nprint(len(x_data_train))\nprint(len(y_data_train))\n\nx_data_test = x_data[~train_indexes]\ny_data_test = y_data[~train_indexes]\n\nprint(len(x_data_test))\nprint(len(y_data_test))", "<b>3. 
Treinando o classificador</b>\n<p>Para tokenização, vamos usar a mesma função do tutorial anterior:</p>", "from nltk import pos_tag\nfrom nltk.corpus import stopwords\nfrom nltk.stem import WordNetLemmatizer\nfrom nltk.tokenize import word_tokenize\nimport string\nfrom nltk.corpus import wordnet\n\nstopwords_list = stopwords.words('english')\n\nlemmatizer = WordNetLemmatizer()\n\ndef my_tokenizer(doc):\n words = word_tokenize(doc)\n \n pos_tags = pos_tag(words)\n \n non_stopwords = [w for w in pos_tags if not w[0].lower() in stopwords_list]\n \n non_punctuation = [w for w in non_stopwords if not w[0] in string.punctuation]\n \n lemmas = []\n for w in non_punctuation:\n if w[1].startswith('J'):\n pos = wordnet.ADJ\n elif w[1].startswith('V'):\n pos = wordnet.VERB\n elif w[1].startswith('N'):\n pos = wordnet.NOUN\n elif w[1].startswith('R'):\n pos = wordnet.ADV\n else:\n pos = wordnet.NOUN\n \n lemmas.append(lemmatizer.lemmatize(w[0], pos))\n\n return lemmas\n \n ", "<p>Mas agora vamos criar um <b>pipeline</b> contendo o vetorizador TF-IDF, o SVD para redução de atributos e um algoritmo de classificação. Mas antes, vamos encapsular nosso algoritmo para escolher o número de dimensões para o SVD em uma classe que pode ser utilizada com o pipeline:</p>", "from sklearn.decomposition import TruncatedSVD\n\nclass SVDDimSelect(object):\n def fit(self, X, y=None): \n self.svd_transformer = TruncatedSVD(n_components=X.shape[1]/2)\n self.svd_transformer.fit(X)\n \n cummulative_variance = 0.0\n k = 0\n for var in sorted(self.svd_transformer.explained_variance_ratio_)[::-1]:\n cummulative_variance += var\n if cummulative_variance >= 0.5:\n break\n else:\n k += 1\n \n self.svd_transformer = TruncatedSVD(n_components=k)\n return self.svd_transformer.fit(X)\n \n def transform(self, X, Y=None):\n return self.svd_transformer.transform(X)\n \n def get_params(self, deep=True):\n return {}", "<p>Finalmente podemos criar nosso pipeline:</p>", "from sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.pipeline import Pipeline\nfrom sklearn import neighbors\n\nclf = neighbors.KNeighborsClassifier(n_neighbors=10, weights='uniform')\n\nmy_pipeline = Pipeline([('tfidf', TfidfVectorizer(tokenizer=my_tokenizer)),\\\n ('svd', SVDDimSelect()), \\\n ('clf', clf)])", "<p>Estamos quase lá... Agora vamos criar um objeto <b>RandomizedSearchCV</b> que fará a seleção de hiper-parâmetros do nosso classificador (aka. parâmetros que não são aprendidos durante o treinamento). Essa etapa é importante para obtermos a melhor configuração do algoritmo de classificação. Para economizar tempo de treinamento, vamos usar um algoritmo simples o <i>K nearest neighbors (KNN)</i>.", "from sklearn.grid_search import RandomizedSearchCV\nimport scipy\n\npar = {'clf__n_neighbors': range(1, 60), 'clf__weights': ['uniform', 'distance']}\n\n\nhyperpar_selector = RandomizedSearchCV(my_pipeline, par, cv=3, scoring='accuracy', n_jobs=2, n_iter=20)\n", "<p>E agora vamos treinar nosso algoritmo, usando o pipeline com seleção de atributos:</p>", "#print(hyperpar_selector)\n\nhyperpar_selector.fit(X=x_data_train, y=y_data_train)\n\nprint(\"Best score: %0.3f\" % hyperpar_selector.best_score_)\nprint(\"Best parameters set:\")\nbest_parameters = hyperpar_selector.best_estimator_.get_params()\nfor param_name in sorted(par.keys()):\n print(\"\\t%s: %r\" % (param_name, best_parameters[param_name]))", "<b>4. 
Testando o classificador</b>\n<p>Agora vamos usar o classificador com o nosso dataset de testes, e observar os resultados:</p>", "from sklearn.metrics import *\n\ny_pred = hyperpar_selector.predict(x_data_test)\n\nprint(accuracy_score(y_data_test, y_pred))", "<b>5. Serializando o modelo</b><br>", "import pickle\n\nstring_obj = pickle.dumps(hyperpar_selector)\n\nmodel_file = open('model.pkl', 'wb')\n\nmodel_file.write(string_obj)\n\nmodel_file.close()", "<b>6. Abrindo e usando um modelo salvo </b><br>", "\nmodel_file = open('model.pkl', 'rb')\nmodel_content = model_file.read()\n\nobj_classifier = pickle.loads(model_content)\n\nmodel_file.close()\n\nres = obj_classifier.predict([\"what's up bro?\"])\n\nprint(res)\n\nres = obj_classifier.predict(x_data_test)\nprint(accuracy_score(y_data_test, res))\n\nres = obj_classifier.predict(x_data_test)\n\nprint(res)\n\nformal = [x_data_test[i] for i in range(len(res)) if res[i] == 1]\n\nfor txt in formal:\n print(\"%s\\n\" % txt)\n\n\ninformal = [x_data_test[i] for i in range(len(res)) if res[i] == 0]\n\nfor txt in informal:\n print(\"%s\\n\" % txt)\n\nres2 = obj_classifier.predict([\"Emma spared no exertions to maintain this happier flow of ideas , and hoped , by the help of backgammon , to get her father tolerably through the evening , and be attacked by no regrets but her own\"])\n\nprint(res2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NYUDataBootcamp/Projects
UG_F16/Kustas-Madej-CrimeRatesFinalProject.ipynb
mit
[ "Crime reduction from 1994 to 2013\nAuthors: Ryszard Madej & Katherine Kustas\nSummary: \nThis project investigates a number of questions about the nature of crime in America in the last 20 years (1994 – 2013 available data):\n1) What crimes have been most prevalent in the past twenty years?\n2) Which years saw the largest drop in crime in the US?\n3) What factors contributed to the decline in crime rates?\nData Sources:\nThe data from this project is sourced from the Federal Bureau of Investigation (FBI), the domestic intelligence and security service of the United States, which simultaneously serves as the nation's prime federal law enforcement agency. The data consists of tables providing the estimated number of offenses and the rate (per 100,000 inhabitants) of crime in the United States for 1994 through 2013, as well as the 2-, 5-, and 10-year trends for 2013 based on these estimates.\nThe data used in creating these tables were from all law enforcement agencies participating in the UCR Program (including those submitting less than 12 months of data).\nThe crime statistics for the nation include estimated offense totals (except arson) for agencies submitting less than 12 months of offense reports for each year.\nImportant to note is that only data provided under the legacy definition of rape are shown in this table. (Calculating rape trends with the data provided under the revised definition of rape would not be possible, as there is currently only one year of data available.)\nIn addition, data from the Center for Disease Control and Prevention (CDC) is also used to compare against our data from the FBI. The survey we used, the Youth Risk Behavior Surveillance System (YRBSS), monitors six types of health-risk behaviors that contribute to the leading causes of death and disability among youth and adults, including— \n\nBehaviors that contribute to unintentional injuries and violence\nSexual behaviors related to unintended pregnancy and sexually transmitted diseases, including HIV infection\nAlcohol and other drug use\nTobacco use\nUnhealthy dietary behaviors\nInadequate physical activity\n\nThe survey was conducted by the CDC in conjunction with state, territorial, and local education and health agencies and tribal governments.", "import sys # system module\nimport pandas as pd # data package\nimport matplotlib as mpl # graphics package\nimport matplotlib.pyplot as plt # pyplot module\nimport datetime as dt # date and time module\nimport numpy as np\n\n\n# make plots show up in notebook\n%matplotlib inline \n", "Data frames", "#import data and then display each data frame\n\npath1 = 'data/fbi_table_20years.xlsx'\ndf_20yr = pd.read_excel(path1,\n index_col=0)\n\npath2 = 'data/fbi_table_20years_edited.xlsx'\ndf_20yr_real = pd.read_excel(path2,\n index_col=0)\n\npath3 = 'data/fbi_table_20years_rates.xlsx'\ndf_20yr_rates = pd.read_excel(path3,\n index_col=0)\n\npath4 = 'data/CDS_Data.xlsx'\ndf_CDC = pd.read_excel(path4,\n index_col=0)\n\ndf_20yr\n\ndf_20yr_real\n\ndf_20yr_rates\n\ndf_CDC", "Line Chart: Crime rate (1994-2013)", "#create a line plot from crime rates data frame\n\nfig, ax = plt.subplots()\ndf_20yr_rates.plot(ax=ax,\n kind='line', # line plot\n title='Different Crimes vs. 
Time\\n\\n',\n grid = True,\n ylim = (-50,3100),\n marker = 'o',\n use_index = True) \n\nplt.legend(loc = 'upper right')\nax.set_title('Crime rates over time\\n',fontsize = 16) #format title and axis labels\nax.set_xlabel('Year', fontsize = 14) \nax.set_ylabel('Crime Rate', fontsize = 14)\nax.set_xlim(1994, 2013) #set limits for x and y axis\nax.set_ylim(-50,3100)\nfig.set_size_inches(15, 13)", "Analysis:\nIn the above graph, we can observe a steady decline (despite a few isolated increases) in crime rates across different categories of crime from 1994 to 2013. A number of explanations have been proposed to explain the trend. Historian Neil Howe has suggested that decline might come from the entrance of millennials into the potential criminal demographic. Both will be explored in further detail later in this project.\nPie Chart: Breakdown of crime type", "#find totals of each column in order to find which crime was most prevalent over the course of the past 20 years\n\nmurder_total = 0\nrape_total = 0\nrobbery_total = 0\nagg_ass_total = 0\nburglary_total = 0\nlarceny_total = 0\nveh_total = 0\n\ntotals_list = []\nlist_total = 0\n\n#find total number of murders\nfor i in (df_20yr_real.index):\n murder_total += df_20yr_real['Murder and\\nnonnegligent \\nmanslaughter'][i]\n list_total += murder_total\ntotals_list.append(murder_total)\n\n#find total number of rapes\nfor i in (df_20yr_real.index):\n rape_total += df_20yr_real['Rape\\n(legacy\\ndefinition)2'][i]\n list_total += rape_total\ntotals_list.append(rape_total)\n\n#find total number of robberies\nfor i in (df_20yr_real.index):\n robbery_total += df_20yr_real['Robbery'][i]\n list_total += robbery_total\ntotals_list.append(robbery_total)\n\n#find total number of assaults\nfor i in (df_20yr_real.index):\n agg_ass_total += df_20yr_real['Aggravated \\nassault'][i]\n list_total += agg_ass_total\ntotals_list.append(agg_ass_total)\n\n#find total number of burglaries\nfor i in (df_20yr_real.index):\n burglary_total += df_20yr_real['Burglary'][i]\n list_total += burglary_total\ntotals_list.append(burglary_total)\n\n#find total number of larcenies\nfor i in (df_20yr_real.index):\n larceny_total += df_20yr_real['Larceny-\\ntheft'][i]\n list_total += larceny_total\ntotals_list.append(larceny_total)\n\n#find total number of vehicle thefts\nfor i in (df_20yr_real.index):\n veh_total += df_20yr_real['Motor \\nvehicle \\ntheft'][i]\n list_total += veh_total\ntotals_list.append(veh_total)\n\n#plot pie chart using above data\n\nk = ['Murder and nonnegligent manslaughter', 'Rape', 'Robbery', 'Aggravated assault', 'Burglary', \\\n 'Larceny theft', 'Motor vehicle theft']\npercent_list = []\nfor i in totals_list:\n percent = i/list_total\n percent_list.append(percent) #convert values to percentages\n\narr = np.array(percent_list)\npercent = 100.*arr/arr.sum()\nlabels = ['{0} : {1:1.2f}%'.format(x,y) for x,y in zip(k, percent)]\ncolours = ['red','black', 'green', 'lightskyblue', 'yellow', 'purple', 'darkblue'] #style the pie chart\npatches, texts = plt.pie(totals_list, colors=colours, startangle=90)\nfig = plt.gcf()\nfig.set_size_inches(7.5, 7.5)\nplt.legend(patches, labels, loc=\"best\", bbox_to_anchor=(1.02, 0.94), borderaxespad=0)\nplt.axis('equal')\nplt.title('Prevalence of Various Crimes: 1994-2013 (as percentage of total crime)\\n', fontsize = 16)\nplt.tight_layout()\nplt.show()", "Analysis:\nHere we can see the relative prevalence of various types of crime in the United States. 
Larceny theft accounts for over 50% of the crime committed in the US over the relevant 20-year period followed by burglary and motor vehicle theft contributing about 19% and about 10%, respectively. Rape, murder, aggravated assault, and robbery each contributed about 1%, 0.14%, about 8% and around 4% as well.\nBar Graph: Yearly percent change in total crime (1994-2013)", "#calculate total number of crimes per year\n\nrow_total = 0\nrow_total_list = []\ncount = 0\n\nfor i in (df_20yr_real.index):\n for x in (df_20yr_real.columns):\n row_total += df_20yr_real[x][i]\n row_total_list.append(row_total)\n row_total = 0\n\n#calculate percent change in crimes between each year and then add to new column in data frame \npercent_change_list = []\n\nfor k in range(0,len(row_total_list)):\n if k > 0:\n percent_change = (((row_total_list[k]/row_total_list[k-1]) - 1) * -1) * 100\n if percent_change < 0:\n percent_change = 0.0\n percent_change_list.append(percent_change)\n count+=1\n else:\n percent_change_list.append(0.0)\n count+=1\n\n# add the percent change column to our data frame\n\n#df_20yr_real['Percent Change'] = percent_change_list\n#del df_20yr_real['Percent Change']\n\n#plot bar graph using above percent change data\n\nfig, ax = plt.subplots()\nfig.set_size_inches(16, 6.5)\ndf_20yr_real['Percent Change'].plot(kind='bar', \n ax=ax,\n legend = False,\n color = ['blue','purple'],\n alpha = 0.65,\n rot = 0,\n width = 0.9,\n align = 'center') \nplt.style.use('bmh')\nax.set_xlabel('Year', fontsize = 14)\nax.set_ylabel('Percent Change', fontsize = 14) #style bar graph\nax.set_title('Yearly change in total crime\\n', fontsize = 16)\nax.set_ylim(0,7)\n", "Analysis:\nWe can see from the above bar chart that there was a substantial decrease in crime during the year 1997 and 1998, this could be attributed to a number of increasingly rigorous policing tactics around the country, Bratton’s Zero Tolerance policing in New York City for example. \nIn addition to stricter policing which, according to some sources was controversial and led to an increase in dissent and crime, there was a large influx of millennials into the criminal age demographic (approximately 12-24 years of age) at which they are most likely to commit or be victims of violent crime. \nLine Chart: High schoolers partaking in risky behaviors", "#create a line plot from CDC data frame\n\nfig, ax = plt.subplots()\ndf_CDC.plot(ax=ax,\n kind='line', # line plot\n grid = True,\n marker = 'o',\n use_index = True)\n\nplt.legend(loc = 'upper right') #format legend\nax.set_title('High schoolers partaking in risky behaviors',fontsize = 16) #format title and axis labels\nax.set_xlabel('Year', fontsize = 14)\nax.set_ylabel('Percent of Students', fontsize = 14)\nfig.set_size_inches(15, 8)", "Analysis:\nThe above line graphs show the total crime in the United States graphed against some key indicators in the CDC Youth Risk Behavior Survey. In the graph above, it can be seen that High School age youths are partaking in “risky” behaviors at increasingly lower rates over the last twenty years. We have plotted the percentage of high school youths that have ever drank a beer, rarely wore a helmet when biking, ever tried smoking a cigarette, and ever had sexual intercourse – indicators of risky behavior among teenagers.\nAs the percentage of US high schoolers partaking in these risky activities decreases we see a correlating decline in crime in the United States. 
The entrance of millennials, a cohort that was more closely supervised and more consistently advised to avoid risk, helps to explain at least part of the sharp decline in crime rates during the late 1990s that has persisted to the present day.\nConclusion\nThe arrival of a more risk-averse generation has reshaped crime in the United States over the last two decades: encouraging more responsible behavior among young people appears to have contributed to the decline in crime. Building a safer country therefore depends, at least in part, on investing in today's youth and ensuring that they have the opportunities and support to pursue constructive, lower-risk behaviors. Doing so would help produce a generation of citizens that moves the United States toward a safer future." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.22/_downloads/6684371ec2bc8e72513b3bdbec0d3a9f/plot_20_events_from_raw.ipynb
bsd-3-clause
[ "%matplotlib inline", "Parsing events from raw data\nThis tutorial describes how to read experimental events from raw recordings,\nand how to convert between the two different representations of events within\nMNE-Python (Events arrays and Annotations objects).\n :depth: 1\nIn the introductory tutorial &lt;overview-tut-events-section&gt; we saw an\nexample of reading experimental events from a :term:\"STIM\" channel &lt;stim\nchannel&gt;; here we'll discuss :term:events and :term:annotations more\nbroadly, give more detailed information about reading from STIM channels, and\ngive an example of reading events that are in a marker file or included in\nthe data file as an embedded array. The tutorials tut-event-arrays and\ntut-annotate-raw discuss how to plot, combine, load, save, and\nexport :term:events and :class:~mne.Annotations (respectively), and the\nlatter tutorial also covers interactive annotation of :class:~mne.io.Raw\nobjects.\nWe'll begin by loading the Python modules we need, and loading the same\nexample data &lt;sample-dataset&gt; we used in the introductory tutorial\n&lt;tut-overview&gt;, but to save memory we'll crop the :class:~mne.io.Raw object\nto just 60 seconds before loading it into RAM:", "import os\nimport numpy as np\nimport mne\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file)\nraw.crop(tmax=60).load_data()", "The Events and Annotations data structures\nGenerally speaking, both the Events and :class:~mne.Annotations data\nstructures serve the same purpose: they provide a mapping between times\nduring an EEG/MEG recording and a description of what happened at those\ntimes. In other words, they associate a when with a what. The main\ndifferences are:\n\nUnits: the Events data structure represents the when in terms of\n samples, whereas the :class:~mne.Annotations data structure represents\n the when in seconds.\nLimits on the description: the Events data structure represents the\n what as an integer \"Event ID\" code, whereas the\n :class:~mne.Annotations data structure represents the what as a\n string.\nHow duration is encoded: Events in an Event array do not have a\n duration (though it is possible to represent duration with pairs of\n onset/offset events within an Events array), whereas each element of an\n :class:~mne.Annotations object necessarily includes a duration (though\n the duration can be zero if an instantaneous event is desired).\nInternal representation: Events are stored as an ordinary\n :class:NumPy array &lt;numpy.ndarray&gt;, whereas :class:~mne.Annotations is\n a :class:list-like class defined in MNE-Python.\n\nWhat is a STIM channel?\nA :term:stim channel (short for \"stimulus channel\") is a channel that does\nnot receive signals from an EEG, MEG, or other sensor. Instead, STIM channels\nrecord voltages (usually short, rectangular DC pulses of fixed magnitudes\nsent from the experiment-controlling computer) that are time-locked to\nexperimental events, such as the onset of a stimulus or a button-press\nresponse by the subject (those pulses are sometimes called TTL_ pulses,\nevent pulses, trigger signals, or just \"triggers\"). 
In other cases, these\npulses may not be strictly time-locked to an experimental event, but instead\nmay occur in between trials to indicate the type of stimulus (or experimental\ncondition) that is about to occur on the upcoming trial.\nThe DC pulses may be all on one STIM channel (in which case different\nexperimental events or trial types are encoded as different voltage\nmagnitudes), or they may be spread across several channels, in which case the\nchannel(s) on which the pulse(s) occur can be used to encode different events\nor conditions. Even on systems with multiple STIM channels, there is often\none channel that records a weighted sum of the other STIM channels, in such a\nway that voltage levels on that channel can be unambiguously decoded as\nparticular event types. On older Neuromag systems (such as that used to\nrecord the sample data) this \"summation channel\" was typically STI 014;\non newer systems it is more commonly STI101. You can see the STIM\nchannels in the raw data file here:", "raw.copy().pick_types(meg=False, stim=True).plot(start=3, duration=6)", "You can see that STI 014 (the summation channel) contains pulses of\ndifferent magnitudes whereas pulses on other channels have consistent\nmagnitudes. You can also see that every time there is a pulse on one of the\nother STIM channels, there is a corresponding pulse on STI 014.\n.. TODO: somewhere in prev. section, link out to a table of which systems\n have STIM channels vs. which have marker files or embedded event arrays\n (once such a table has been created).\nConverting a STIM channel signal to an Events array\nIf your data has events recorded on a STIM channel, you can convert them into\nan events array using :func:mne.find_events. The sample number of the onset\n(or offset) of each pulse is recorded as the event time, the pulse magnitudes\nare converted into integers, and these pairs of sample numbers plus integer\ncodes are stored in :class:NumPy arrays &lt;numpy.ndarray&gt; (usually called\n\"the events array\" or just \"the events\"). In its simplest form, the function\nrequires only the :class:~mne.io.Raw object, and the name of the channel(s)\nfrom which to read events:", "events = mne.find_events(raw, stim_channel='STI 014')\nprint(events[:5]) # show the first 5", ".. sidebar:: The middle column of the Events array\nMNE-Python events are actually *three* values: in between the sample\nnumber and the integer event code is a value indicating what the event\ncode was on the immediately preceding sample. In practice, that value is\nalmost always ``0``, but it can be used to detect the *endpoint* of an\nevent whose duration is longer than one sample. See the documentation of\n:func:`mne.find_events` for more details.\n\nIf you don't provide the name of a STIM channel, :func:~mne.find_events\nwill first look for MNE-Python config variables &lt;tut-configure-mne&gt;\nfor variables MNE_STIM_CHANNEL, MNE_STIM_CHANNEL_1, etc. If those are\nnot found, channels STI 014 and STI101 are tried, followed by the\nfirst channel with type \"STIM\" present in raw.ch_names. 
If you regularly\nwork with data from several different MEG systems with different STIM channel\nnames, setting the MNE_STIM_CHANNEL config variable may not be very\nuseful, but for researchers whose data is all from a single system it can be\na time-saver to configure that variable once and then forget about it.\n:func:~mne.find_events has several options, including options for aligning\nevents to the onset or offset of the STIM channel pulses, setting the minimum\npulse duration, and handling of consecutive pulses (with no return to zero\nbetween them). For example, you can effectively encode event duration by\npassing output='step' to :func:mne.find_events; see the documentation\nof :func:~mne.find_events for details. More information on working with\nevents arrays (including how to plot, combine, load, and save event arrays)\ncan be found in the tutorial tut-event-arrays.\nReading embedded events as Annotations\nSome EEG/MEG systems generate files where events are stored in a separate\ndata array rather than as pulses on one or more STIM channels. For example,\nthe EEGLAB format stores events as a collection of arrays in the :file:.set\nfile. When reading those files, MNE-Python will automatically convert the\nstored events into an :class:~mne.Annotations object and store it as the\n:attr:~mne.io.Raw.annotations attribute of the :class:~mne.io.Raw object:", "testing_data_folder = mne.datasets.testing.data_path()\neeglab_raw_file = os.path.join(testing_data_folder, 'EEGLAB', 'test_raw.set')\neeglab_raw = mne.io.read_raw_eeglab(eeglab_raw_file)\nprint(eeglab_raw.annotations)", "The core data within an :class:~mne.Annotations object is accessible\nthrough three of its attributes: onset, duration, and\ndescription. Here we can see that there were 154 events stored in the\nEEGLAB file, they all had a duration of zero seconds, there were two\ndifferent types of events, and the first event occurred about 1 second after\nthe recording began:", "print(len(eeglab_raw.annotations))\nprint(set(eeglab_raw.annotations.duration))\nprint(set(eeglab_raw.annotations.description))\nprint(eeglab_raw.annotations.onset[0])", "More information on working with :class:~mne.Annotations objects, including\nhow to add annotations to :class:~mne.io.Raw objects interactively, and how\nto plot, concatenate, load, save, and export :class:~mne.Annotations\nobjects can be found in the tutorial tut-annotate-raw.\nConverting between Events arrays and Annotations objects\nOnce your experimental events are read into MNE-Python (as either an Events\narray or an :class:~mne.Annotations object), you can easily convert between\nthe two formats as needed. You might do this because, e.g., an Events array\nis needed for epoching continuous data, or because you want to take advantage\nof the \"annotation-aware\" capability of some functions, which automatically\nomit spans of data if they overlap with certain annotations.\nTo convert an :class:~mne.Annotations object to an Events array, use the\nfunction :func:mne.events_from_annotations on the :class:~mne.io.Raw file\ncontaining the annotations. This function will assign an integer Event ID to\neach unique element of raw.annotations.description, and will return the\nmapping of descriptions to integer Event IDs along with the derived Event\narray. 
By default, one event will be created at the onset of each annotation;\nthis can be modified via the chunk_duration parameter of\n:func:~mne.events_from_annotations to create equally spaced events within\neach annotation span (see chunk-duration, below, or see\nfixed-length-events for direct creation of an Events array of\nequally-spaced events).", "events_from_annot, event_dict = mne.events_from_annotations(eeglab_raw)\nprint(event_dict)\nprint(events_from_annot[:5])", "If you want to control which integers are mapped to each unique description\nvalue, you can pass a :class:dict specifying the mapping as the\nevent_id parameter of :func:~mne.events_from_annotations; this\n:class:dict will be returned unmodified as the event_dict.\n.. TODO add this when the other tutorial is nailed down:\n Note that this event_dict can be used when creating\n :class:~mne.Epochs from :class:~mne.io.Raw objects, as demonstrated\n in :doc:epoching_tutorial_whatever_its_name_is.", "custom_mapping = {'rt': 77, 'square': 42}\n(events_from_annot,\n event_dict) = mne.events_from_annotations(eeglab_raw, event_id=custom_mapping)\nprint(event_dict)\nprint(events_from_annot[:5])", "To make the opposite conversion (from an Events array to an\n:class:~mne.Annotations object), you can create a mapping from integer\nEvent ID to string descriptions, use ~mne.annotations_from_events\nto construct the :class:~mne.Annotations object, and call the\n:meth:~mne.io.Raw.set_annotations method to add the annotations to the\n:class:~mne.io.Raw object.\nBecause the sample data &lt;sample-dataset&gt; was recorded on a Neuromag\nsystem (where sample numbering starts when the acquisition system is\ninitiated, not when the recording is initiated), we also need to pass in\nthe orig_time parameter so that the onsets are properly aligned relative\nto the start of recording:", "mapping = {1: 'auditory/left', 2: 'auditory/right', 3: 'visual/left',\n 4: 'visual/right', 5: 'smiley', 32: 'buttonpress'}\nannot_from_events = mne.annotations_from_events(\n events=events, event_desc=mapping, sfreq=raw.info['sfreq'],\n orig_time=raw.info['meas_date'])\nraw.set_annotations(annot_from_events)", "Now, the annotations will appear automatically when plotting the raw data,\nand will be color-coded by their label value:", "raw.plot(start=5, duration=5)", "Making multiple events per annotation\nAs mentioned above, you can generate equally-spaced events from an\n:class:~mne.Annotations object using the chunk_duration parameter of\n:func:~mne.events_from_annotations. For example, suppose we have an\nannotation in our :class:~mne.io.Raw object indicating when the subject was\nin REM sleep, and we want to perform a resting-state analysis on those spans\nof data. We can create an Events array with a series of equally-spaced events\nwithin each \"REM\" span, and then use those events to generate (potentially\noverlapping) epochs that we can analyze further.", "# create the REM annotations\nrem_annot = mne.Annotations(onset=[5, 41],\n duration=[16, 11],\n description=['REM'] * 2)\nraw.set_annotations(rem_annot)\n(rem_events,\n rem_event_dict) = mne.events_from_annotations(raw, chunk_duration=1.5)", "Now we can check that our events indeed fall in the ranges 5-21 seconds and\n41-52 seconds, and are ~1.5 seconds apart (modulo some jitter due to the\nsampling frequency). 
Here are the event times rounded to the nearest\nmillisecond:", "print(np.round((rem_events[:, 0] - raw.first_samp) / raw.info['sfreq'], 3))", "Other examples of resting-state analysis can be found in the online\ndocumentation for :func:mne.make_fixed_length_events, such as\n:doc:../../auto_examples/connectivity/plot_mne_inverse_envelope_correlation.\n.. LINKS" ]
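(Added sketch, not part of the original tutorial text.) For resting-state use cases like the REM example above, :func:mne.make_fixed_length_events can also create equally spaced events directly from a :class:~mne.io.Raw object, without building an :class:~mne.Annotations object first; the event ID and spacing below are arbitrary illustration values:

```python
# Hedged example: one event every 1.5 s over the whole recording, tagged with
# the arbitrary integer event ID 1 (any unused ID would do).
fixed_events = mne.make_fixed_length_events(raw, id=1, duration=1.5)
print(fixed_events[:5])
```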
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
perwin/s4g_barfractions
s4gbars_main.ipynb
bsd-3-clause
[ "General Figures\nRequirements\nThis notebook is meant to be run within the full s4g_barfractions repository, including the associated Python modules and data files.\nIn addition, this notebook requires, directly or indirectly, the following Python packages:\n * numpy\n * scipy\n * matplotlib\nBy default, output PDF figure files are not saved to disk; to enable this, set the savePlots variable in the Setup cell to True and change the plotDir variable (same cell) to point to where you want the figures saved.\nSetup", "import numpy as np\nimport scipy\n\nimport datautils as du\nimport plotutils as pu\nimport s4gutils\n\n# paths for locating data, saving plots, etc.\ndataDir = \"./data/\"\nsimDir = dataDir\nfbarLitDir = dataDir + \"f_bar_trends-from-literature/\"\n# change the following if you want to save the figures somewhere convenient\nbaseDir = \"/Users/erwin/Documents/Working/Paper-s4gbars/\"\nplotDir = baseDir + \"plots/\"\nsavePlots = False\n\ns4gdata = du.ReadCompositeTable(dataDir+\"s4gbars_table.dat\", columnRow=25, dataFrame=True)\nnDisksTotal = len(s4gdata.name)\n\n# axis labels, etc., for plots\nxtmstar = r\"$\\log \\: (M_{\\star} / M_{\\odot})$\"\nxtfgas = r\"$\\log \\: (M_{\\rm HI} / M_{\\star})$\"\nxtgmr = r\"$g - r$\"\nxtmB = r\"$B_{\\rm tc}$\"\nxtBmV = r\"$B - V$\"\nxtBmV_tc = r\"$(B - V)_{\\rm tc}$\"\nytfbar = r\"Bar Fraction $f_{\\rm bar}$\"\nytbarsize_kpc = r\"Bar size $a_{\\rm vis}$ [kpc]\"\nytbarsize_kpc_obs = r\"Observed bar size $a_{\\rm vis}$ [kpc]\"\nytR25_kpc = r\"$R_{25}$ [kpc]\"\n\n\nss1 = r\"S$^{4}$G: $D \\leq 25$ Mpc\"\nss1m = r\"S$^{4}$G: $D \\leq 25$ Mpc, $\\log M_{\\star} \\geq 8.5$\"\nss1_bold = r\"$\\mathbf{S^{4}G:}$ $D \\leq 25$ Mpc\"\nss1m_bold = r\"$\\mathbf{S^{4}G:}$ $D \\leq 25$ Mpc, $\\log M_{\\star} \\geq 8.5$\"\nss2 = r\"S$^{4}$G: $D \\leq 30$ Mpc\"\nss2b = r\"S$^{4}$G: $D \\leq 30$ Mpc, $\\log M_{\\star} \\geq 9$\"\nss2 = r\"S$^{4}$G: $D \\leq 30$ Mpc\"\nss2m = r\"S$^{4}$G: $D \\leq 30$ Mpc, $\\log M_{\\star} \\geq 9$\"\nss3 = r\"S$^{4}$G: $D \\leq 40$ Mpc\"\nss3m = r\"S$^{4}$G: $D \\leq 40$ Mpc, $\\log M_{\\star} \\geq 9.5$\"\n\ns4g_txt = r\"S$^{4}$G\"\ns4g_txt_bold = r\"$\\mathbf{S^{4}G:}$\"\ns4g_fwhm_txt = r\"S$^{4}$G $\\langle$FWHM$\\rangle$\"\n\n%pylab inline\n\nmatplotlib.rcParams['figure.figsize'] = (8,6)\nmatplotlib.rcParams['xtick.labelsize'] = 16\nmatplotlib.rcParams['ytick.labelsize'] = 16\nmatplotlib.rcParams['axes.labelsize'] = 20", "Useful functions", "def logistic_lin( x, a, b ):\n \"\"\"Calculates the standard linear logistic function (probability distribution)\n for x (which can be a scalar or a numpy array).\n \"\"\"\n \n return 1.0 / (1.0 + np.exp(-(a + b*x)))\n\n\ndef logistic_polyn( x, params ):\n \"\"\"Calculates the general polynomial form of the logistic function \n (probability distribution) for x (which can be a scalar or a numpy array).\n \"\"\"\n \n order = len(params) - 1\n logit = params[0]\n for n in range(order):\n b = params[n + 1]\n logit += b * x**(n + 1)\n \n return 1.0 / (1.0 + np.exp(-logit))\n\n\ndef GetBarazzaData( fname ):\n \"\"\"Retrieve bar fractions and total galaxy counts per bin for Barazza+2008 data\n (their Fig. 
19); calculates proper binomial confidence intervals.\n \"\"\"\n dlines = [line for line in open(fname) if line[0] != '#' and len(line) > 1]\n x = np.array([float(line.split()[0]) for line in dlines])\n f = np.array([float(line.split()[1]) for line in dlines])\n n = np.array([int(line.split()[2]) for line in dlines])\n n_bars = np.round(f*n)\n e_low_vect = []\n e_high_vect = []\n for i in range(len(x)):\n dummy,e_low,e_high = s4gutils.Binomial(n_bars[i], n[i])\n e_low_vect.append(e_low)\n e_high_vect.append(e_high)\n return (x, f, np.array(e_low_vect), np.array(e_high_vect))\n", "Defining different subsamples via index vectors\nLists of integers defining indices of galaxies in Parent Disc Sample which meet various criteria\nthat define specific subsamples.", "ii_barred = [i for i in range(nDisksTotal) if s4gdata.sma[i] > 0]\nii_unbarred = [i for i in range(nDisksTotal) if s4gdata.sma[i] <= 0]\n\nii_spirals = [i for i in range(nDisksTotal) if s4gdata.t_s4g[i] > -0.5]\nii_barred_spirals = [i for i in ii_spirals if i in ii_barred]\nii_unbarred_spirals = [i for i in ii_spirals if i in ii_unbarred]\n\n# limited sample 1: D < 25 Mpc -- 663 spirals: 373 barred, 290 unbarred\nii_all_limited1 = [i for i in ii_spirals if s4gdata.dist[i] <= 25]\nii_barred_limited1 = [i for i in ii_all_limited1 if i in ii_barred]\nii_unbarred_limited1 = [i for i in ii_all_limited1 if i not in ii_barred]\n\nii_SB_limited1 = [i for i in ii_all_limited1 if i in ii_barred_limited1 and s4gdata.bar_strength[i] == 1]\nii_nonSB_limited1 = [i for i in ii_all_limited1 if i not in ii_SB_limited1]\nii_SAB_limited1 = [i for i in ii_all_limited1 if i in ii_barred_limited1 and s4gdata.bar_strength[i] == 2]\nii_nonSAB_limited1 = [i for i in ii_all_limited1 if i not in ii_SB_limited1]\n\n# S0 only (74 S0s: 27 barred, 47 unbarred)\nii_all_limited1_S0 = [i for i in range(nDisksTotal) if s4gdata.dist[i] <= 25 and s4gdata.t_s4g[i] <= -0.5]\nii_barred_limited1_S0 = [i for i in ii_all_limited1_S0 if i in ii_barred]\nii_unbarred_limited1_S0 = [i for i in ii_all_limited1_S0 if i not in ii_barred]\nii_SB_limited1_S0 = [i for i in ii_SB_limited1 if s4gdata.t_s4g[i] <= -0.5]\nii_nonSB_limited1_S0 = [i for i in ii_nonSB_limited1 if s4gdata.t_s4g[i] <= -0.5]\nii_SAB_limited1_S0 = [i for i in ii_SAB_limited1 if s4gdata.t_s4g[i] <= -0.5]\nii_nonSAB_limited1_S0 = [i for i in ii_nonSAB_limited1 if s4gdata.t_s4g[i] <= -0.5]\n\n\n\n# limited subsample 1m: D < 25 Mpc and log Mstar >= 8.5 -- 576 spirals: 356 barred, 220 unbarred\nii_all_limited1_m8_5 = [i for i in ii_all_limited1 if s4gdata.logmstar[i] >= 8.5]\nii_barred_limited1_m8_5 = [i for i in ii_all_limited1_m8_5 if i in ii_barred]\nii_unbarred_limited1_m8_5 = [i for i in ii_all_limited1_m8_5 if i not in ii_barred]\nii_SB_limited1_m8_5 = [i for i in ii_all_limited1_m8_5 if i in ii_barred and s4gdata.bar_strength[i] == 1]\nii_nonSB_limited1_m8_5 = [i for i in ii_all_limited1_m8_5 if i not in ii_SB_limited1_m8_5]\nii_SAB_limited1_m8_5 = [i for i in ii_all_limited1_m8_5 if i in ii_barred and s4gdata.bar_strength[i] == 2]\nii_nonSAB_limited1_m8_5 = [i for i in ii_all_limited1_m8_5 if i not in ii_SB_limited1_m8_5]\n# S0 only (74 S0s: 27 barred, 47 unbarred)\nii_all_limited1_m8_5_S0 = [i for i in ii_all_limited1_S0 if s4gdata.logmstar[i] >= 8.5]\nii_barred_limited1_m8_5_S0 = [i for i in ii_all_limited1_m8_5_S0 if i in ii_barred]\nii_unbarred_limited1_m8_5_S0 = [i for i in ii_all_limited1_m8_5_S0 if i not in ii_barred]\nii_SB_limited1_m8_5_S0 = [i for i in ii_all_limited1_m8_5_S0 if i in ii_barred and 
s4gdata.bar_strength[i] == 1]\nii_nonSB_limited1_m8_5_S0 = [i for i in ii_all_limited1_m8_5_S0 if i not in ii_SB_limited1_m8_5_S0]\nii_SAB_limited1_m8_5_S0 = [i for i in ii_all_limited1_m8_5_S0 if i in ii_barred and s4gdata.bar_strength[i] == 2]\nii_nonSAB_limited1_m8_5_s0 = [i for i in ii_all_limited1_m8_5_S0 if i not in ii_SAB_limited1_m8_5_S0 and s4gdata.t_s4g[i]]\n\n\n\n# limited subsample 2: D < 30 Mpc -- 856 galaxies: 483 barred, 373 unbarred\nii_all_limited2 = [i for i in ii_spirals if s4gdata.dist[i] <= 30]\nii_barred_limited2 = [i for i in ii_all_limited2 if i in ii_barred]\nii_unbarred_limited2 = [i for i in ii_all_limited2 if i not in ii_barred]\n\nii_SB_limited2 = [i for i in ii_barred_limited2 if s4gdata.bar_strength[i] == 1]\nii_nonSB_limited2 = [i for i in ii_all_limited2 if i not in ii_SB_limited2]\nii_SAB_limited2 = [i for i in ii_barred_limited2 if s4gdata.bar_strength[i] == 2]\nii_nonSAB_limited2 = [i for i in ii_all_limited2 if i not in ii_SB_limited2]\n\n# S0 only (74 S0s: 27 barred, 47 unbarred)\nii_all_limited2_S0 = [i for i in range(nDisksTotal) if s4gdata.dist[i] <= 30 and s4gdata.t_s4g[i] <= -0.5]\nii_barred_limited2_S0 = [i for i in ii_all_limited2_S0 if i in ii_barred]\nii_unbarred_limited2_S0 = [i for i in ii_all_limited2_S0 if i not in ii_barred]\nii_SB_limited2_S0 = [i for i in ii_SB_limited2 if s4gdata.t_s4g[i] <= -0.5]\nii_nonSB_limited2_S0 = [i for i in ii_nonSB_limited2 if s4gdata.t_s4g[i] <= -0.5]\nii_SAB_limited2_S0 = [i for i in ii_SAB_limited2 if s4gdata.t_s4g[i] <= -0.5]\nii_nonSAB_limited2_S0 = [i for i in ii_nonSAB_limited2 if s4gdata.t_s4g[i] <= -0.5]\n\n\n# limited subsample 2m: D < 30 Mpc and log Mstar >= 9 -- 639 galaxies: 398 barred, 241 unbarred\nii_all_limited2_m9 = [i for i in ii_all_limited2 if s4gdata.logmstar[i] >= 9]\nii_barred_limited2_m9 = [i for i in ii_all_limited2_m9 if i in ii_barred]\nii_unbarred_limited2_m9 = [i for i in ii_all_limited2_m9 if i not in ii_barred]\n\nii_SB_limited2_m9 = [i for i in ii_all_limited2_m9 if i in ii_barred and s4gdata.bar_strength[i] == 1]\nii_nonSB_limited2_m9 = [i for i in ii_all_limited2_m9 if i not in ii_SB_limited2_m9]\nii_SAB_limited2_m9 = [i for i in ii_all_limited2_m9 if i in ii_barred and s4gdata.bar_strength[i] == 2]\nii_nonSAB_limited2_m9 = [i for i in ii_all_limited2_m9 if i not in ii_SAB_limited2_m9]\n\n\n# galaxies with/without HyperLeda B-V colors\nii_dist25 = [i for i in range(nDisksTotal) if s4gdata.dist[i] <= 25.0]\nii_dist30 = [i for i in range(nDisksTotal) if s4gdata.dist[i] <= 30.0]\n\nii_bmv_good = [i for i in range(nDisksTotal) if s4gdata.BmV_tc[i] > -2]\nii_bmv_missing = [i for i in range(nDisksTotal) if s4gdata.BmV_tc[i] < -2]\nii_d30_bmv_good = [i for i in ii_bmv_good if i in ii_dist30]\nii_d30_bmv_missing = [i for i in ii_bmv_missing if i in ii_dist30]\nii_d25_bmv_good = [i for i in ii_bmv_good if i in ii_dist25]\nii_d25_bmv_missing = [i for i in ii_bmv_missing if i in ii_dist25]\n", "Generate files for logistic regression with R\nThis code will regenerate the input files for the logistic regression analysis in R (see R notebook s4gbars_R_logistic-regression.ipynb)\nBy default, this will save the file in the data/ subdirectory, overwriting the pre-existing files. 
To change the destination, redefine dataDir.", "# optionally redefine dataDir to save files in a different location\n# dataDir = XXX\n\noutf = open(dataDir+\"barpresence_vs_logmstar_for_R.txt\", 'w')\noutf.write(\"# Bar presence as function of log(M_star/M_sun) for D < 25 Mpc\\n\")\noutf.write(\"logmstar bar\\n\")\nfor i in ii_all_limited1:\n logmstar = s4gdata.logmstar[i]\n if i in ii_barred_limited1:\n barFlag = 1\n else:\n barFlag = 0\n outf.write(\"%.3f %d\\n\" % (logmstar, barFlag))\noutf.close()\n\n# restrict things to logMstar = 8.5--11 to avoid low-mass galaxies with crazy-high Vmax weights\n# and tiny number of galaxies with logMstar > 11\nff = \"barpresence_vs_logmstar_for_R_w25_m8.5-11.txt\"\noutf = open(dataDir+ff, 'w')\noutf.write(\"# Bar presence as function of log(M_star/M_sun) for D < 25 Mpc, with V_max weights\\n\")\noutf.write(\"logmstar weight bar\\n\")\nn_tot = 0\nfor i in ii_all_limited1:\n if s4gdata.logmstar[i] >= 8.5 and s4gdata.logmstar[i] <= 11:\n logmstar = s4gdata.logmstar[i]\n weight = s4gdata.w25[i]\n if i in ii_barred_limited1:\n barFlag = 1\n else:\n barFlag = 0\n outf.write(\"%.3f %.3f %d\\n\" % (logmstar, weight, barFlag))\n n_tot += 1\noutf.close()\nprint(\"%s: %d galaxies\" % (ff, n_tot))\n\n# SB and SAB separately\n# restrict things to logMstar = 8.5--11 to avoid low-mass galaxies with crazy-high Vmax weights\n# and tiny number of galaxies with logMstar > 11\nff = \"SBpresence_vs_logmstar_for_R_w25_m8.5-11.txt\"\noutf = open(dataDir+ff, 'w')\noutf.write(\"# Bar presence as function of log(M_star/M_sun) for D < 25 Mpc, with V_max weights\\n\")\noutf.write(\"logmstar weight SB\\n\")\nn_tot = 0\nfor i in ii_all_limited1:\n if s4gdata.logmstar[i] >= 8.5 and s4gdata.logmstar[i] <= 11:\n logmstar = s4gdata.logmstar[i]\n weight = s4gdata.w25[i]\n if i in ii_SB_limited1:\n barFlag = 1\n else:\n barFlag = 0\n outf.write(\"%.3f %.3f %d\\n\" % (logmstar, weight, barFlag))\n n_tot += 1\noutf.close()\nprint(\"%s: %d galaxies\" % (ff, n_tot))\n\nff = \"SABpresence_vs_logmstar_for_R_w25_m8.5-11.txt\"\noutf = open(dataDir+ff, 'w')\noutf.write(\"# Bar presence as function of log(M_star/M_sun) for D < 25 Mpc, with V_max weights\\n\")\noutf.write(\"logmstar weight SAB\\n\")\nn_tot = 0\nfor i in ii_all_limited1:\n if s4gdata.logmstar[i] >= 8.5 and s4gdata.logmstar[i] <= 11:\n logmstar = s4gdata.logmstar[i]\n weight = s4gdata.w25[i]\n if i in ii_SAB_limited1:\n barFlag = 1\n else:\n barFlag = 0\n outf.write(\"%.3f %.3f %d\\n\" % (logmstar, weight, barFlag))\n n_tot += 1\noutf.close()\nprint(\"%s: %d galaxies\" % (ff, n_tot))\n\n\n\n# restrict things to logMstar = 8.5--11 to avoid low-mass galaxies with crazy-high Vmax weights\n# and tiny number of galaxies with logMstar > 11\nff = \"barpresence_vs_logmstar-Re_for_R_w25.txt\"\noutf = open(dataDir+ff, 'w')\noutf.write(\"# Bar presence as function of log(M_star/M_sun) and log(R_e) for D < 25 Mpc, with V_max weights\\n\")\noutf.write(\"logmstar logRe weight bar\\n\")\nn_tot = 0\nfor i in ii_all_limited1:\n if s4gdata.logmstar[i] >= 8.5 and s4gdata.logmstar[i] <= 11 and s4gdata.Re_kpc[i] > 0:\n logmstar = s4gdata.logmstar[i]\n logRe = math.log10(s4gdata.Re_kpc[i])\n weight = s4gdata.w25[i]\n if i in ii_barred_limited1:\n barFlag = 1\n else:\n barFlag = 0\n outf.write(\"%.3f %.3f %.3f %d\\n\" % (logmstar, logRe, weight, barFlag))\n n_tot += 1\noutf.close()\nprint(\"%s: %d galaxies\" % (ff, n_tot))\n\n\nff = \"barpresence_vs_logmstar-logfgas_for_R_w25.txt\"\noutf = open(dataDir+ff, 'w')\noutf.write(\"# Bar presence as function of 
log(M_star/M_sun) and log(f_gas) for D < 25 Mpc, with V_max weights\\n\")\noutf.write(\"logmstar logfgas weight bar\\n\")\nn_tot = 0\nfor i in ii_all_limited1:\n if s4gdata.logmstar[i] >= 8.5 and s4gdata.logmstar[i] <= 11 and s4gdata.logfgas[i] < 3:\n logmstar = s4gdata.logmstar[i]\n logfgas = s4gdata.logfgas[i]\n weight = s4gdata.w25[i]\n if i in ii_barred_limited1:\n barFlag = 1\n else:\n barFlag = 0\n outf.write(\"%.3f %.3f %.3f %d\\n\" % (logmstar, logfgas, weight, barFlag))\n n_tot += 1\noutf.close()\nprint(\"%s: %d galaxies\" % (ff, n_tot))\n\n\nww25 = s4gdata.weight_BmVtc * s4gdata.w25\nff = \"barpresence_vs_logmstar-gmr_for_R_w25.txt\"\noutf = open(dataDir+ff, 'w')\noutf.write(\"# Bar presence as function of g-r for D < 25 Mpc and logMstar > 8.5, with B-V and V_max weights\\n\")\noutf.write(\"logmstar gmr weight bar\\n\")\nn_tot = 0\nfor i in ii_all_limited1_m8_5:\n if s4gdata.gmr_tc[i] >= -1:\n logmstar = s4gdata.logmstar[i]\n gmr = s4gdata.gmr_tc[i]\n weight = ww25[i]\n if i in ii_barred_limited1:\n barFlag = 1\n else:\n barFlag = 0\n outf.write(\"%.3f %.3f %.3f %d\\n\" % (logmstar, gmr, weight, barFlag))\n n_tot += 1\noutf.close()\nprint(\"%s: %d galaxies\" % (ff, n_tot))\n", "Figures\nFigure 1\nLeft panel: Distances of galaxies in S4G Parent Disk Sample vs stellar mass", "plt.plot(s4gdata.dist, s4gdata.logmstar, 'ko', mfc='None', mec='k',ms=4)\nplt.plot(s4gdata.dist[ii_barred], s4gdata.logmstar[ii_barred], 'ko',ms=3.5)\nplt.axvline(25)\nplt.axvline(30, ls='--')\nplt.axhline(8.5)\nplt.axhline(9, ls='--')\nxlim(0,60)\nplt.xlabel(\"Distance [Mpc]\"); plt.ylabel(xtmstar)\nif savePlots: plt.savefig(plotDir+\"logMstar-vs-distance.pdf\")", "Right panel: $R_{25}$ vs distance for S4G spirals", "# define extra subsample for plot: all spirals with log(M_star) >= 9\nii_logmstar9 = [i for i in ii_spirals if s4gdata.logmstar[i] >= 9]\n\nplot(s4gdata.dist[ii_spirals], s4gdata.R25_kpc[ii_spirals], 'o', mfc='None', mec='0.25',ms=4)\nplot(s4gdata.dist[ii_logmstar9], s4gdata.R25_kpc[ii_logmstar9], 'cD', mec='k', ms=4)\nxlim(0,60)\nxlabel(\"Distance [Mpc]\"); ylabel(ytR25_kpc)\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"R25-vs-distance.pdf\")", "Figure 2\nLeft panel: $g - r$ vs stellar mass", "# define extra subsamples for plot: galaxies with valid B-V_tc values; subsets at different distances\nii_bmv_good = [i for i in range(nDisksTotal) if s4gdata.BmV_tc[i] > -2]\niii25 = [i for i in ii_bmv_good if s4gdata.dist[i] <= 25]\niii25to30 = [i for i in ii_bmv_good if s4gdata.dist[i] > 25 and s4gdata.dist[i] <= 30]\niii_larger = [i for i in ii_bmv_good if s4gdata.dist[i] > 30]\n\nplot(s4gdata.logmstar[iii_larger], s4gdata.gmr_tc[iii_larger], 's', mec='0.25', mfc='None', ms=4)\nplot(s4gdata.logmstar[iii25to30], s4gdata.gmr_tc[iii25to30], 'mD', ms=4)\nplot(s4gdata.logmstar[iii25], s4gdata.gmr_tc[iii25], 'ko', ms=5)\nxlabel(xtmstar); ylabel(xtgmr)\nxlim(7,11.5)\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"gmr-vs-logmstar.pdf\")", "Right panel: Gas mass ratio $f_{\\rm gas}$ vs stellar mass", "# define extra subsamples for plot: galaxies with valid H_I meassurements; subsets at different distances\niii25 = [i for i in ii_spirals if s4gdata.M_HI[i] < 1.0e40 and s4gdata.dist[i] <= 25]\niii25to30 = [i for i in ii_spirals if s4gdata.M_HI[i] < 1.0e40 and s4gdata.dist[i] > 25 and s4gdata.dist[i] <= 30]\niii_larger = [i for i in ii_spirals if s4gdata.M_HI[i] < 1.0e40 and 
s4gdata.dist[i] > 30]\n\nplot(s4gdata.logmstar[iii_larger], s4gdata.logfgas[iii_larger], 's', mec='0.25', mfc='None', ms=4)\nplot(s4gdata.logmstar[iii25to30], s4gdata.logfgas[iii25to30], 'mD', ms=4)\nplot(s4gdata.logmstar[iii25], s4gdata.logfgas[iii25], 'ko', ms=5)\nxlabel(xtmstar); ylabel(xtfgas)\nxlim(7,11.5)\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"logfgas-vs-logmstar.pdf\")", "Figure 4: Histogram of stellar masses in different subsamples", "hist(s4gdata.logmstar, bins=np.arange(7,12,0.5), color='1.0', label=\"All\", edgecolor='k')\nhist(s4gdata.logmstar[ii_all_limited2], bins=np.arange(7,12,0.5), color='0.9', edgecolor='k', label=r\"$D < 30$ Mpc\")\nhist(s4gdata.logmstar[ii_all_limited1], bins=np.arange(7,12,0.5), color='g', edgecolor='k', label=r\"$D < 25$ Mpc\")\nxlabel(xtmstar);ylabel(\"N\")\nlegend(fontsize=9, loc='upper left', framealpha=0.5)\nif savePlots: savefig(plotDir+\"logmstar_hist.pdf\")", "Figure 5: Bar fraction as function of stellar mass, color, gas mass fraction\nThe code here is for the six individual panels of the figure\nUpper left panel: Bar frequency vs stellar mass", "# load Barazza+2008 bar frequencies\nlogmstar_b08,fbar_b08,fbar_e_low_b08,fbar_e_high_b08 = GetBarazzaData(fbarLitDir+\"fbar-vs-logmstar_barazza+2008.txt\")\n\n# load other SDSS-based bar frequencies\nlogmstar_na10,fbar_na10 = s4gutils.Read2ColumnProfile(fbarLitDir+\"fbar-vs-logMstar_nair-abraham2010.txt\")\nlogmstar_m12,fbar_m12 = s4gutils.Read2ColumnProfile(fbarLitDir+\"fbar-vs-logmstar_masters+2012.txt\")\nlogmstar_m14,fbar_m14 = s4gutils.Read2ColumnProfile(fbarLitDir+\"fbar-vs-logmstar_melvin+2014.txt\")\nlogmstar_g15,fbar_g15 = s4gutils.Read2ColumnProfile(fbarLitDir+\"fbar-vs-logmstar_gavazzi+2015.txt\")\n\n\n# quadratic logistic fit (using weights) -- see R notebook s4gbars_R_logistic-regression.ipynb\n# for determination of parameters\nlogistic_params = [-82.2446, 17.1052, -0.8801]\nmm = np.arange(8.0,11.51,0.01)\nlogistic_fit2w = logistic_polyn(mm, logistic_params)\n\n# plot SDSS-based bar frequencies\nplt.plot(logmstar_na10, fbar_na10, '*', mfc=\"None\",mec='c', ms=7,label='N&A 2010')\nplt.plot(logmstar_m12, fbar_m12, 'D', mfc=\"None\",mec='k', ms=7,label='Masters+2012')\nplt.plot(logmstar_m14, fbar_m14, 's', mfc=\"0.75\",mec='k', ms=5,label='Melvin+2014')\nplt.plot(logmstar_g15, fbar_g15, '*', color='m', alpha=0.5, ms=7,label='Gavazzi+2015')\n\n# plot S4G bar frequencies and quadratic logistic fit\npu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_barred_limited1, ii_unbarred_limited1, 8.0, 11.3,0.25, fmt='ro', mec='k', ms=9, noErase=True, label=ss1_bold)\nplt.plot(mm, logistic_fit2w, 'r--', lw=1.5, label=s4g_txt_bold + \" logistic fit\")\nplt.errorbar(logmstar_b08, fbar_b08, yerr=[fbar_e_low_b08,fbar_e_high_b08], fmt='bD',alpha=0.5, label='Barazza+2008')\nplt.ylim(0,1)\nplt.xlabel(xtmstar); plt.ylabel('Bar fraction')\n\n# add weighted counts for S4G data\nbinranges = np.arange(8.0, 11.3,0.25)\ni_all = ii_barred_limited1 + ii_unbarred_limited1\n(n_all, bin_edges) = np.histogram(s4gdata.logmstar[i_all], binranges)\nn_all_int = [round(n) for n in n_all]\nfor i in range(len(n_all_int)):\n x = binranges[i]\n n = n_all_int[i]\n text(x + 0.07, 0.025, \"%3d\" % n, fontsize=11.5, color='r')\n\n# re-order labels in legend\nax = plt.gca()\nhandles,labels = ax.get_legend_handles_labels()\nprint(labels)\nhandles = [handles[5], handles[4], handles[6], handles[1], handles[2], handles[3], handles[0]]\nlabels = [labels[5], labels[4], labels[6], labels[1], 
labels[2], labels[3], labels[0]]\nlegend(handles, labels, loc=\"upper left\", fontsize=10, ncol=4, framealpha=0.5)\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"fbar-vs-logmstar.pdf\")\nprint(labels)", "Upper right panel: SB and SAB frequencies vs stellar mass", "pu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_SB_limited1, ii_nonSB_limited1, 8.0, 11.3, 0.25, fmt='ko',ms=8, label=r'SB (S$^{4}$G: $D \\leq 25$ Mpc)')\npu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_SAB_limited1, ii_nonSAB_limited1, 8.0, 11.3, 0.25, offset=0.03, fmt='co', mec='k', ms=8, noErase=True, label=r'SAB (S$^{4}$G: $D \\leq 25$ Mpc)')\nplt.ylim(0,1)\nplt.xlabel(xtmstar); plt.ylabel('Bar fraction')\nlegend(fontsize=10, loc='upper left', framealpha=0.5)\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"fSB-fSAB-vs-logmstar.pdf\")", "Middle left panel: Bar frequency vs color", "gmr_b08,fbar_b08,fbar_e_low_b08,fbar_e_high_b08 = GetBarazzaData(fbarLitDir+\"fbar-vs-gmr_barazza+2008.txt\")\ngmr_na10,fbar_na10 = s4gutils.Read2ColumnProfile(fbarLitDir+\"fbar-vs-gmr_nair-abraham2010.txt\")\ngmr_m11,fbar_m11 = s4gutils.Read2ColumnProfile(fbarLitDir+\"fbar-vs-gmr_masters+2011.txt\")\ngmr_m12,fbar_m12 = s4gutils.Read2ColumnProfile(fbarLitDir+\"fbar-vs-gmr_masters+2012.txt\")\ngmr_lee12,fbar_lee12 = s4gutils.Read2ColumnProfile(fbarLitDir+\"fbar-vs-gmr_lee+2012.txt\")\n\n# calculate weights: product of color and V/V_max weights\nww25 = s4gdata.weight_BmVtc * s4gdata.w25\nww30 = s4gdata.weight_BmVtc * s4gdata.w30\n\nplt.plot(gmr_na10, fbar_na10, '*', color='c', mec='k', alpha=0.5, ms=7, label='N&A 2010')\nplt.plot(gmr_m11, fbar_m11, 's', color='0.7', mec='k', label='Masters+2011')\nplt.plot(gmr_m12, fbar_m12, 'D', mfc=\"None\", mec='k', ms=7, label='Masters+2012')\nplt.plot(gmr_lee12, fbar_lee12, 'v', mfc=\"0.9\", mec='k', ms=7, label='Lee+2012')\npu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww25, ii_barred_limited1_m8_5, ii_unbarred_limited1_m8_5, -0.2,1.0,0.1, noErase=True, fmt='ro', mec='k', ms=9, label=ss1m_bold)\nplt.errorbar(gmr_b08, fbar_b08, yerr=[fbar_e_low_b08,fbar_e_high_b08], fmt='bD',alpha=0.5, label='Barazza+2008')\n# linear logistic regression for S4G galaxies\ngmrvect = np.arange(0,1.1, 0.1)\nplot(gmrvect, logistic_lin(gmrvect, 0.4544, -0.4394), 'r--', lw=1.5, label=s4g_txt_bold + \" logistic fit\")\nplt.xlabel(xtgmr); plt.ylabel('Bar fraction')\nxlim(0,1);ylim(0,1)\n\n# add weighted counts for S4G data\nbinranges = np.arange(-0.2,1.0,0.1)\ni_all = ii_barred_limited1_m8_5 + ii_unbarred_limited1_m8_5\n(n_all, bin_edges) = np.histogram(s4gdata.gmr_tc[i_all], binranges)\nn_all_int = [round(n) for n in n_all]\nfor i in range(2, len(n_all_int)):\n x = binranges[i]\n n = n_all_int[i]\n text(x + 0.035, 0.025, \"%3d\" % n, fontsize=11.5, color='r')\n\n# re-order labels in legend\nax = plt.gca()\nhandles,labels = ax.get_legend_handles_labels()\nprint(labels)\nhandles = [handles[5], handles[4], handles[6], handles[0], handles[1], handles[2], handles[3]]\nlabels = [labels[5], labels[4], labels[6], labels[0], labels[1], labels[2], labels[3]]\nlegend(handles, labels, loc=\"upper left\", fontsize=10, ncol=3, framealpha=0.5)\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: 
plt.savefig(plotDir+\"fbar-vs-gmr_corrected_all.pdf\")", "Middle right panel: SB and SAB frequencies vs color", "pu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww25, ii_SB_limited1_m8_5, ii_nonSB_limited1_m8_5, -0.2,1,0.1, fmt='ko', ms=8, label=\"SB (\"+ss1m+\")\")\npu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww25, ii_SAB_limited1_m8_5, ii_nonSAB_limited1_m8_5, -0.2,1,0.1, fmt='co', mec='k', ms=8, noErase=True, label=\"SAB (\"+ss1m+\")\")\nplt.ylim(0,1)\nplt.xlabel(xtgmr); plt.ylabel('Bar fraction')\nlegend(loc=\"upper left\", fontsize=10, framealpha=0.5)\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"fSB-fSAB-vs-gmr_corrected.pdf\")", "Lower left panel: Bar frequency vs gas mass ratio", "logfgas_m12,fbar_m12 = s4gutils.Read2ColumnProfile(fbarLitDir+\"fbar-vs-logfgas_masters+2012.txt\")\nlogfgas_cs17_raw,fbar_cs17 = s4gutils.Read2ColumnProfile(fbarLitDir+\"fbar-vs-logfgas_cervantes_sodi2017.txt\")\n# correct CS17 values from log M_{HI + He}/M_{star} to log M_{HI}/M_{star}\nlogfgas_cs17 = logfgas_cs17_raw - 0.146\n\nplt.clf();pu.PlotFrequencyWithWeights(s4gdata.logfgas, s4gdata.w25, ii_barred_limited1_m8_5, ii_unbarred_limited1_m8_5, -3,2,0.5, fmt='ro', mec='k', ms=9, label=ss1m_bold)\nplt.plot(logfgas_m12, fbar_m12, 'D', mfc=\"None\",mec='k', ms=7,label='Masters+2012')\nplt.plot(logfgas_cs17, fbar_cs17, '*', color='0.75', mec='k', ms=8,label='Cervantes Sodi 2017')\n# linear logistic regression for S4G galaxies\nfgasvect = np.arange(-3, 1.01, 0.01)\nplot(fgasvect, logistic_lin(fgasvect, 0.42456, 0.03684), 'r--', lw=1.5, label=s4g_txt_bold + \" logistic fit\")\nplt.xlabel(xtfgas);plt.ylabel('Bar fraction')\nplt.ylim(0,1);plt.xlim(-3,1)\n\n# add weighted counts for S4G data\nbinranges = np.arange(-3,2,0.5)\ni_all = ii_barred_limited1_m8_5 + ii_unbarred_limited1_m8_5\n(n_all, bin_edges) = np.histogram(s4gdata.logfgas[i_all], binranges)\nn_all_int = [round(n) for n in n_all]\nfor i in range(0, len(n_all_int) - 1):\n x = binranges[i]\n n = n_all_int[i]\n text(x + 0.2, 0.025, \"%3d\" % n, fontsize=11.5, color='r')\n\n# re-order labels in legend\nax = plt.gca()\nhandles,labels = ax.get_legend_handles_labels()\nprint(labels)\nhandles = [handles[3], handles[2], handles[0], handles[1]]\nlabels = [labels[3], labels[2], labels[0], labels[1]]\nlegend(handles, labels, loc=\"upper right\", fontsize=10, ncol=2, framealpha=0.5)\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: savefig(plotDir+\"fbar-vs-fgas.pdf\")", "Lower right panel: SB and SAB frequencies vs gas mass ratio", "pu.PlotFrequencyWithWeights(s4gdata.logfgas, s4gdata.w25, ii_SB_limited1_m8_5, ii_nonSB_limited1_m8_5, -3,1.5,0.5, fmt='ko', ms=8, label=\"SB (\"+ss1m+\")\")\npu.PlotFrequencyWithWeights(s4gdata.logfgas, s4gdata.w25, ii_SAB_limited1_m8_5, ii_nonSAB_limited1_m8_5, -3,1.5,0.5, fmt='co', mec='k', ms=8, noErase=True, label=\"SAB (\"+ss1m+\")\")\nplt.legend(loc='upper left',fontsize=10, framealpha=0.5)\nplt.ylim(0,1);xlim(-3,1)\nplt.xlabel(xtfgas); plt.ylabel('Bar fraction')\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"fSB-fSAB-vs-fgas.pdf\")", "Figure A1\nWe generate an interpolating spline using an edited version of the actual binned f(B_tc) values -- basically, we ensure that the spline interpolation goes smoothly to 0 for faint magnitudes and smoothly 
to 1 for bright magnitudes.", "# generate Akima spline interpolation for f(B-V) as function of B_tc\nx_Btc = [7.0, 8.25, 8.75, 9.25, 9.75, 10.25, 10.75, 11.25, 11.75, 12.25, 12.75, 13.25, 13.75, 14.25, 14.75, 15.25, 15.75, 16.25]\ny_fBmV = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.9722222222222222, 0.8840579710144928, 0.8125, 0.6222222222222222, 0.5632183908045977, 0.4074074074074074, 0.2727272727272727, 0.3442622950819672, 0.2978723404255319, 0.10714285714285714, 0.01, 0.0]\nfBmV_akimaspline = scipy.interpolate.Akima1DInterpolator(x_Btc, y_fBmV)\n\nxx = np.arange(7,17,0.1)\npu.PlotFrequency(s4gdata.B_tc, ii_d30_bmv_good, ii_d30_bmv_missing, 7,16.5,0.5, fmt='ko', label=ss1)\npu.PlotFrequency(s4gdata.B_tc, ii_d25_bmv_good, ii_d25_bmv_missing, 7,16.5,0.5, fmt='ro', label=ss2, noErase=True)\nplot(xx, fBmV_akimaspline(xx), color='k', ls='--')\nxlim(16.5,7); ylim(0,1)\nxlabel(xtmB); ylabel(r\"Fraction of galaxies with $(B - V)_{\\rm tc}$\")\nlegend(fontsize=10,loc='upper left', framealpha=0.5)\nif savePlots: savefig(plotDir+\"f_bmv-vs-btc-with-spline.pdf\")", "Figure A2\nLeft panel", "pu.PlotFrequencyWithWeights(s4gdata.BmV_tc, s4gdata.weight_BmVtc, ii_barred_limited2_m9, ii_unbarred_limited2_m9, 0,1,0.1, fmt='ko', ms=9, label=ss2m);\npu.PlotFrequencyWithWeights(s4gdata.BmV_tc, s4gdata.weight_BmVtc, ii_barred_limited1_m8_5, ii_unbarred_limited1_m8_5, 0,1,0.1, offset=0.01, fmt='ro', ms=9, noErase=True, label=ss1m)\nplt.xlabel(xtBmV_tc);plt.ylabel('Bar fraction')\nplt.ylim(0,1)\nplt.legend(fontsize=10, framealpha=0.5)\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"fbar-vs-BmV_corrected.pdf\")", "Right panel", "ww25 = s4gdata.weight_BmVtc * s4gdata.w25\nww30 = s4gdata.weight_BmVtc * s4gdata.w30\n\npu.PlotFrequencyWithWeights(s4gdata.BmV_tc, ww25, ii_SB_limited1_m8_5, ii_nonSB_limited1_m8_5, -0.2,1,0.1, fmt='ko', ms=8, label=\"SB (\"+ss1+\")\")\npu.PlotFrequencyWithWeights(s4gdata.BmV_tc, ww30, ii_SAB_limited1_m8_5, ii_nonSAB_limited1_m8_5, -0.2,1,0.1, fmt='co', ms=8, noErase=True, label=\"SAB (\"+ss1+\")\")\nplt.ylim(0,1)\nplt.xlabel(xtBmV_tc)\nplt.ylabel('Bar fraction')\nlegend(loc=\"upper right\", fontsize=10, framealpha=0.5)\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"fSB-fSAB-vs-BmV_corrected.pdf\")", "Figure B1\nUpper left panel", "# load Diaz-Garcia+2016a fractions\nlogmstar_dg16,fbar_dg16 = s4gutils.Read2ColumnProfile(fbarLitDir+\"fbar-vs-logMstar_diaz-garcia+2016a.txt\")\n\npu.PlotFrequency(s4gdata.logmstar, ii_barred_limited1, ii_unbarred_limited1, 8.0, 11.3, 0.25, fmt='ro', ms=9, label=ss1)\npu.PlotFrequency(s4gdata.logmstar, ii_barred_limited2, ii_unbarred_limited2, 8.0, 11.3, 0.25, offset=0.02, fmt='ro', mfc='None', mew=1, mec='r', ms=8,noErase=True, label=ss2)\nplt.plot(logmstar_dg16,fbar_dg16, 's', mfc=\"0.75\",mec='k', ms=7,label='Díaz-García+2016a')\nplt.ylim(0,1)\nplt.xlabel(xtmstar)\nplt.ylabel('Bar fraction')\nlegend(loc=\"upper left\", fontsize=10, framealpha=0.5)\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"fbar-vs-logmstar_2sample.pdf\")", "Upper right panel", "pu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_SB_limited1, ii_nonSB_limited1, 8.0, 11.3, 0.25, fmt='ko', ms=8, label=r'SB (S$^{4}$G: $D \\leq 25$ Mpc)')\npu.PlotFrequencyWithWeights(s4gdata.logmstar, 
s4gdata.w30, ii_SB_limited2, ii_nonSB_limited2, 8.0, 11.3, 0.25, noErase=True, ms=8, fmt='ko', mfc='None', offset=0.02, label=r'SB (S$^{4}$G: $D \\leq 30$ Mpc)')\npu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_SAB_limited1, ii_nonSAB_limited1, 8.0, 11.3, 0.25, noErase=True, ms=8, fmt='co', label=r'SAB (S$^{4}$G: $D \\leq 25$ Mpc)')\npu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w30, ii_SAB_limited2, ii_nonSAB_limited2, 8.0, 11.3, 0.25, noErase=True, ms=8, fmt='co', mfc='None', mec='c', offset=0.02, label=r'SAB (S$^{4}$G: $D \\leq 30$ Mpc)')\n\n#pu.PlotFrequency(s4gdata.logmstar, ii_barred_limited1, ii_unbarred_limited1, 8.0, 11.3, 0.25, fmt='ro', ms=9, label=ss2)\n#pu.PlotFrequency(s4gdata.logmstar, ii_barred_limited2, ii_unbarred_limited2, 8.0, 11.3, 0.25, offset=0.02, fmt='ro', mfc='None', mew=1, mec='r', ms=8,noErase=True, label=ss1)\nplt.ylim(0,1)\nplt.xlabel(xtmstar)\nplt.ylabel('Bar fraction')\nlegend(loc=\"upper left\", ncol=2, fontsize=10, framealpha=0.5)\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"fSB-fSAB-vs-logmstar_2sample.pdf\")", "Left middle panel", "pu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww25, ii_barred_limited1_m8_5, ii_unbarred_limited1_m8_5, -0.2,1.0,0.1, fmt='ro', ms=9, label=ss1m)\npu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww30, ii_barred_limited2_m9, ii_unbarred_limited2_m9, -0.2,1.0,0.1, offset=0.01, fmt='ro', mfc='None', mew=1, mec='r', ms=8, noErase=True, label=ss2m)\nplt.xlabel(xtgmr)\nplt.ylabel('Bar fraction')\nxlim(0,1);ylim(0,1)\nlegend(loc=\"upper left\", fontsize=9, framealpha=0.5)\nplt.subplots_adjust(bottom=0.14)\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"fbar-vs-gmr_corrected_2sample.pdf\")", "Right middle panel", "pu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww25, ii_SB_limited1_m8_5, ii_nonSB_limited1_m8_5, 0,1,0.1, fmt='ko', ms=8, label=\"SB (\"+ss1m+\")\")\npu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww30, ii_SB_limited2_m9, ii_nonSB_limited2_m9, 0,1,0.1, noErase=True, ms=8, fmt='ko', mfc='None', offset=0.01, label=\"SB (\"+ss2m+\")\")\npu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww25, ii_SAB_limited1_m8_5, ii_nonSAB_limited1_m8_5, 0,1,0.1, noErase=True, fmt='co', ms=8, label=\"SAB (\"+ss1m+\")\")\npu.PlotFrequencyWithWeights(s4gdata.gmr_tc, ww30, ii_SAB_limited2_m9, ii_nonSAB_limited2_m9, 0,1,0.1, noErase=True, ms=8, fmt='co', mfc='None', mew=1, mec='c', offset=0.01, label=\"SAB (\"+ss2m+\")\")\nplt.ylim(0,1)\nplt.xlabel(xtgmr)\nplt.ylabel('Bar fraction')\nlegend(loc=\"upper left\", fontsize=10, framealpha=0.5)\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"fSB-fSAB-vs-gmr_corrected_2sample.pdf\")", "Lower left panel", "pu.PlotFrequency(s4gdata.logfgas, ii_barred_limited1_m8_5, ii_unbarred_limited1_m8_5, -3,2,0.5, noErase=False, fmt='ro', ms=9, label=ss1m)\npu.PlotFrequency(s4gdata.logfgas, ii_barred_limited2_m9, ii_unbarred_limited2_m9, -3,2,0.5, offset=0.03, noErase=True, fmt='ro', mfc='None', mec='r', ms=9, label=ss2m)\nplt.xlabel(xtfgas);plt.ylabel('Bar fraction')\nplt.ylim(0,1);plt.xlim(-3,1)\nlegend(fontsize=9, loc='lower left', framealpha=0.5)\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: 
plt.savefig(plotDir+\"fbar-vs-fgas_2sample.pdf\")", "Lower right panel", "pu.PlotFrequency(s4gdata.logfgas, ii_SB_limited1_m8_5, ii_nonSB_limited1_m8_5, -3,2,0.5, fmt='ko', ms=8, label=\"SB (\"+ss1m+\")\")\npu.PlotFrequency(s4gdata.logfgas, ii_SB_limited2_m9, ii_nonSB_limited2_m9, -3,2,0.5, noErase=True, ms=8, fmt='ko', mfc='None', mec='k', offset=0.03, label=\"SB (\"+ss2m+\")\")\npu.PlotFrequency(s4gdata.logfgas, ii_SAB_limited1_m8_5, ii_nonSAB_limited1_m8_5, -3,2,0.5, noErase=True, fmt='co', ms=8, label=\"SAB (\"+ss1m+\")\")\npu.PlotFrequency(s4gdata.logfgas, ii_SAB_limited2_m9, ii_nonSAB_limited2_m9, -3,2,0.5, noErase=True, ms=8, fmt='co', mfc='None', mec='c', offset=0.03, label=\"SAB (\"+ss2m+\")\")\nplt.legend(loc='upper left', ncol=2, fontsize=10)\nplt.ylim(0,1);xlim(-3,1)\nplt.xlabel(xtfgas)\nplt.ylabel('Bar fraction')\n# push bottom of plot upwards so that x-axis label isn't clipped in PDF output\nplt.subplots_adjust(bottom=0.14)\nif savePlots: plt.savefig(plotDir+\"fSB-fSAB-vs-fgas_2sample.pdf\")", "Figure B2", "ii_all_limited1_S0 = [i for i in range(nDisksTotal) if s4gdata.dist[i] <= 25 and s4gdata.t_s4g[i] <= -0.5]\nii_barred_limited1_with_S0 = [i for i in range(nDisksTotal) if i in ii_barred and s4gdata.dist[i] <= 25]\nii_unbarred_limited1_with_S0 = [i for i in range(nDisksTotal) if i in ii_unbarred and s4gdata.dist[i] <= 25]\nii_barred_limited1_S0 = [i for i in ii_all_limited1_S0 if i in ii_barred]\nii_unbarred_limited1_S0 = [i for i in ii_all_limited1_S0 if i in ii_unbarred]\n\nfig,axs = plt.subplots(1,2, figsize=(15,5))\n\naxs[0].plot([8.0,11.5], [0,1], color='None')\npu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_barred_limited1, ii_unbarred_limited1, 8.0, 11.3, 0.25, noErase=True, axisObj=axs[0], fmt='ro', ms=9, label=ss1 + \", spirals\")\ntxt2 = ss1 + \", S0s + spirals\"\npu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_barred_limited1_with_S0, ii_unbarred_limited1_with_S0, 8.0, 11.3, 0.25, axisObj=axs[0], offset=-0.03, fmt='o', color='orange', mew=1.3, mfc='None', mec='orange', ms=7,noErase=True, label=txt2)\ntxt3 = ss1 + \", S0s only\"\npu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_barred_limited1_S0, ii_unbarred_limited1_S0, 8.0, 11.3, 0.25, axisObj=axs[0], offset=0.04, fmt='D', mfc='None', mec='0.5', ecolor='0.5', ms=7, noErase=True, label=txt3)\naxs[0].set_ylim(0,1)\naxs[0].set_xlabel(xtmstar)\naxs[0].set_ylabel('Bar fraction')\naxs[0].legend(loc='upper left', fontsize=10)\nplt.subplots_adjust(bottom=0.14)\n\nbins = np.arange(8,11.5, 0.25)\naxs[1].hist(s4gdata.logmstar[ii_all_limited1], bins=bins, label='Spirals')\naxs[1].hist(s4gdata.logmstar[ii_all_limited1_S0], bins=bins, color='r', label='S0')\naxs[1].set_ylim(0,100)\naxs[1].set_xlabel(xtmstar);axs[1].set_ylabel(r\"$N$\")\naxs[1].legend(loc='upper right', fontsize=10)\nplt.subplots_adjust(bottom=0.14)\nif savePlots: savefig(plotDir+\"fbar-spirals+S0-vs-mstar-with-mstar-hist.pdf\")" ]
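(Added cross-check sketch, not part of the original analysis.) The quadratic logistic coefficients used in Figure 5 come from the companion R notebook; assuming a weighted logistic fit is acceptable here, roughly the same model can be refit in Python with statsmodels' GLM on the barpresence_vs_logmstar_for_R_w25_m8.5-11.txt file written earlier in this notebook. Treating the V_max weights as frequency weights is itself an assumption of this sketch:

```python
# Sketch: refit bar presence vs. log(M_star) as a weighted quadratic logistic model.
# Assumes the file layout written above: one '#' comment line, one header line,
# then "logmstar weight bar" rows.
import numpy as np
import statsmodels.api as sm

tbl = np.loadtxt(dataDir + "barpresence_vs_logmstar_for_R_w25_m8.5-11.txt", skiprows=2)
logmstar, weight, bar = tbl[:, 0], tbl[:, 1], tbl[:, 2]

# design matrix: constant, linear, quadratic terms in log(M_star),
# matching the order of the logistic_polyn parameters used above
X = sm.add_constant(np.column_stack([logmstar, logmstar**2]))
fit = sm.GLM(bar, X, family=sm.families.Binomial(), freq_weights=weight).fit()
print(fit.params)  # compare with the coefficients quoted from the R fit
```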
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
oiertwo/ep2015
welcome/.ipynb_checkpoints/Welcome-checkpoint.ipynb
mit
[ "<div align=\"center\"> \n<h1>Welcome to </h1>\n<br>\n</div>\n<div align=\"center\">\n<img src='media/mapamundi-bilbao.jpg' width=\"100%\" />\n</div>\n\n<div align=\"center\"> \n<h1>Welcome to </h1>\n</div>\n<div >\n<img src='media/05-Secondary Logo B.png' width=512 />\n</div>\n\nHo we are\nFabio Pliger\n@fpliger\n - EPS Board member\nOier Echaniz\n@oiertwo\n - ACPySS Chair (On-site team)\n<div class=\"col-md-8\"> \n<h1>Attendees evolution</h1>\n</div>\n<div class=\"col-md-4\">\n<img src='media/05-Secondary Logo B.png' width=512 />\n</div>", "import matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\ndata = [('year', 'location', 'attendees'),\n (2002, 'Charleroi', 240),\n (2003, 'Charleroi', 300),\n (2004, 'Göteborg', 'nan'),\n (2005, 'Göteborg', 'nan'),\n (2006, 'Geneva', 'nan'),\n (2007, 'Vilnius', 'nan'),\n (2008, 'Vilnius', 206),\n (2009, 'Birmingham', 410),\n (2010, 'Birmingham', 446),\n (2011, 'Florence', 670),\n (2012, 'Florence', 760),\n (2013, 'Florence', 870),\n (2014, 'Berlin', 1250),\n (2015, 'Bilbao', 1100),]\n\nnames = data[0]\neps = {name: [] for name in names}\nfor line in data[1:]:\n for pos, name in enumerate(names):\n eps[name].append(line[pos])\n\n\n\nplt.plot(eps['year'], eps['attendees'])", "Attendees evolution\nIN BILBAO", "data = [('year', 'location', 'attendees'),\n (2014, 'Bilbao', 0),\n (2015, 'Bilbao', 1100)]\n\nnames = data[0]\neps = {name: [] for name in names}\nfor line in data[1:]:\n for pos, name in enumerate(names):\n eps[name].append(line[pos])\n\n\nplt.plot(eps['year'], eps['attendees'])", "Wifi information\n- Password and SSID: europython2015\n- If problem:\n * Move through the venue to find an antenna with a empty spot\n * there is nobody in the venue that can solve\n\nCable (for emergency purpose)\n-For speakers \n -Only if the wifi is not working properly\n-In the helpdesk and info desk in case of emergency\n\nCode of conduct\n- Available online, \n- Not tolerate\n- Behave properly\n- Enjoy the conference\n\nBadges\n- Check the bags, is everything inside?\n- Booklet, flyer, t-shirt, present… \n- wrong t-shirt size you can switch it?\n\nSchedule\n- Check the schedule\n- Talks\n - 5 Parallel tracks\n- Trainings\n - No signing, first come first serve\n - Big rooms\n- Keynotes\n - Present keynoters?\n- Lightning talks\n - Google room\n- Poster session\n\n<div align=\"center\"> \n<h1>Lunch and coffee breaks areas:</h1>\n<h3>HALL at -2 Floor</h3>\n</div>\n<div >\n<img src='media/ecc_hall.png' width=\"100%\" />\n</div>\n\nWe are recording\n- Livestreaming?...\n- Releasing the videos ASAP\n\n<div class=\"col-md-8\"> \n<h1>Social event</h1>\n\n<h3> Get your ticket </h3>\n<ul>\n <li>Registration desk on tuesday afternoon or wednesday morning</li>\n</ul>\n\n<h3> Buy your ticket </h3>\n<ul>\n <li>Ticket still available for 40€</li>\n\n</ul>\n <img src='media/ticket_cena.jpg' width=512 /> \n\n</div>\n<div class=\"col-md-4\">\n\n<img src='media/pinchos night.jpg' width=512 />\n</div>\n\nGuidebook App\n- Live updates\n- Live talks feedbacks\n- Feedback form\n- Connect with people\n- …\n\nEvents\n- Pydata Track (Hidden)\n- Django Girls\n- Recruitment sessions\n- Educational Track\n- Local Track\n- Sprint\n- Open Spaces\n- Dojos\n- Sponsor event - Visit their booths\n\nOpen Spaces\n- Explain how to create your openspace talk\n\nVolunteers:\n- Become a Session chair\n- Organization: Green T-Shirt\n- Volunteers: Yellow shirt\n\n<div class=\"col-md-8\"> \n<h1>Speakers</h1>\n\n<h2>Before your talk during any coffee \nbreak or lunch 
please go to the room \nwhere your talk will take place, test \nyour laptop, and help us minimize the switchover time between talks </h2>\n\n</div>\n<div class=\"col-md-4\">\n<img src='media/desk.png' width=512 />\n</div>\n\nLounge Areas\n- Retro games: each play costs 50 cents, and the proceeds go to charity\n\nREMINDERS:\n<div align=\"center\">\n\n<h1>No food or drinks in the rooms or outside the venue</h1>\n\n</div>\n\n<div align=\"center\">\n\n<img src='media/no_food_or_drink.png' width=\"100%\" />\n</div>\n\n<div align=\"center\"> \n<h1>Enjoy the conference and Bilbao</h1>\n<br>\n</div>\n<div>\n<img src='media/mapamundi-bilbao.jpg' width=\"100%\" />\n</div>" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
tpin3694/tpin3694.github.io
machine-learning/calculate_dot_product_of_two_vectors.ipynb
mit
[ "Title: Calculate Dot Product Of Two Vectors \nSlug: calculate_dot_product_of_two_vectors \nSummary: How to calculate the dot product of two vectors in Python. \nDate: 2017-09-02 12:00\nCategory: Machine Learning\nTags: Vectors Matrices Arrays\nAuthors: Chris Albon \nPreliminaries", "# Load library\nimport numpy as np", "Create Two Vectors", "# Create two vectors\nvector_a = np.array([1,2,3])\nvector_b = np.array([4,5,6])", "Calculate Dot Product (Method 1)", "# Calculate dot product\nnp.dot(vector_a, vector_b)", "Calculate Dot Product (Method 2)", "# Calculate dot product\nvector_a @ vector_b" ]
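Calculate Dot Product (Method 3) — an added illustration, not part of the original recipe: the dot product is just an element-wise product followed by a sum, so the result below should match Methods 1 and 2.

```python
# Calculate dot product explicitly: multiply element-wise, then sum
np.sum(vector_a * vector_b)
```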
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
lia-statsletters/notebooks
mv_kecdf_frechet.ipynb
gpl-3.0
[ "jupyter nbconvert mv_kecdf_frechet.ipynb --to slides --post serve", "from __future__ import division\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.stats as spst\nimport statsmodels.api as sm\nfrom scipy import optimize\n\nfrom statsmodels.nonparametric import kernels\n\n\nkernel_func = dict(wangryzin=kernels.wang_ryzin,\n aitchisonaitken=kernels.aitchison_aitken,\n gaussian=kernels.gaussian,\n aitchison_aitken_reg = kernels.aitchison_aitken_reg,\n wangryzin_reg = kernels.wang_ryzin_reg,\n gauss_convolution=kernels.gaussian_convolution,\n wangryzin_convolution=kernels.wang_ryzin_convolution,\n aitchisonaitken_convolution=kernels.aitchison_aitken_convolution,\n gaussian_cdf=kernels.gaussian_cdf,\n aitchisonaitken_cdf=kernels.aitchison_aitken_cdf,\n wangryzin_cdf=kernels.wang_ryzin_cdf,\n d_gaussian=kernels.d_gaussian)\n\ndef gpke(bwp, dataxx, data_predict, var_type, ckertype='gaussian',\n okertype='wangryzin', ukertype='aitchisonaitken', tosum=True):\n r\"\"\"Returns the non-normalized Generalized Product Kernel Estimator\"\"\"\n kertypes = dict(c=ckertype, o=okertype, u=ukertype)\n Kval = np.empty(dataxx.shape)\n for ii, vtype in enumerate(var_type):\n func = kernel_func[kertypes[vtype]]\n Kval[:, ii] = func(bwp[ii], dataxx[:, ii], data_predict[ii])\n\n iscontinuous = np.array([c == 'c' for c in var_type])\n dens = Kval.prod(axis=1) / np.prod(bwp[iscontinuous])\n #dens = np.nanprod(Kval,axis=1) / np.prod(bwp[iscontinuous])\n if tosum:\n return dens.sum(axis=0)\n else:\n return dens\n\n\nclass LeaveOneOut(object):\n def __init__(self, X):\n self.X = np.asarray(X)\n\n def __iter__(self):\n X = self.X\n nobs, k_vars = np.shape(X)\n\n for i in range(nobs):\n index = np.ones(nobs, dtype=np.bool)\n index[i] = False\n yield X[index, :]\n\n\n\ndef loo_likelihood(bww, datax, var_type, func=lambda x: x, ):\n #print(bww)\n LOO = LeaveOneOut(datax)\n L = 0\n for i, X_not_i in enumerate(LOO):\n f_i = gpke(bww, dataxx=-X_not_i, data_predict=-datax[i, :],\n var_type=var_type)\n L += func(f_i)\n return -L\n\n\ndef get_bw(datapfft ,var_type ,reference):\n # Using leave-one-out likelihood\n # the initial value for the optimization is the normal_reference\n # h0 = normal_reference()\n\n data = adjust_shape(datapfft, len(var_type))\n\n h0 =reference\n fmin =lambda bb, funcx: loo_likelihood(bb, data, var_type, func=funcx)\n bw = optimize.fmin(fmin, x0=h0, args=(np.log, ),\n maxiter=1e3, maxfun=1e3, disp=0, xtol=1e-3)\n # bw = self._set_bw_bounds(bw) # bound bw if necessary\n return bw\n\ndef adjust_shape(dat, k_vars):\n \"\"\" Returns an array of shape (nobs, k_vars) for use with `gpke`.\"\"\"\n dat = np.asarray(dat)\n if dat.ndim > 2:\n dat = np.squeeze(dat)\n if dat.ndim == 1 and k_vars > 1: # one obs many vars\n nobs = 1\n elif dat.ndim == 1 and k_vars == 1: # one obs one var\n nobs = len(dat)\n else:\n if np.shape(dat)[0] == k_vars and np.shape(dat)[1] != k_vars:\n dat = dat.T\n\n nobs = np.shape(dat)[0] # ndim >1 so many obs many vars\n\n dat = np.reshape(dat, (nobs, k_vars))\n return dat\n\n%alias_magic t timeit", "Data science friday tales:\nUsing Fréchet Bounds for Bandwidth selection in MV Kernel Methods.\nLia Silva-Lopez\n\nTuesday, 19/03/2019\nThis story starts with a reading accident\n\nOne moment you are reading a book...\n<img src=\"img/reading_accident.png\">\n...And the next there are bounds for everything.\nBounds for distributions* in terms of its marginals?\n\n\n\nThat makes perfect sense!\n\n\nWhy aren't these bounds more mainstream? 
\n\n\nIs it because it's hard to pronounce 'Fréchet'?\n\n\n*with distributions we mean CDFs, not densities\nAlright, let's point the finger at ourselves:\n\nWhat would I do with Fréchet bounds?\n'Cause the whole point of bounds is to try not to break them. Right?\n<img src=\"https://media3.giphy.com/media/145lrrvcdNq43m/source.gif\" style=\"width: 280px;\">\nAn idea: Let's use them for Bandwidth selection in MV KDEs.\n\n\n\nMV kernel estimation is expensive, slow and often hard to check with d>2.\n\n\nWhich is why Kernel methods are recommended for smoothing (mostly).\n\n\nHowever, working with multivariate data is not easy.\n\nSo a lot of people turn to KDEs for lack of better information.\nOr, throw samples at a blackbox and hope for the best.\n\n\n\nHow can we use Fréchet bounds here?\n\nThere are different methods for BW selection, many based on some kind of optimization.\n\n\nTo prune the search space on any iterative method:\n\n(Naive) Removing bws that lead to estimates violating the bounds.\n(Less naive, but less parsimonious) Pruning using thresholds.\n\n\n\nTo construct functions to optimize over:\n\n(Cheap, uninformative) Counting the number of violations of the bound?\n(Cheap, informative) Summing all diffs between each violation and the bound point?\n\n\n\nOther questions to answer:\n\n\n\nAre we breaking Fréchet bounds when estimating CDFs with Kernels?\n\nAnd if we break them, \nHow badly are they usually broken?\nDo they get broken often?\n\n\n\n\n\nWhat are the consequences of selecting BWs that lead to breaking Fréchet?\n\n\n<img src=\"https://memegenerator.net/img/instances/63332969/do-all-the-things.jpg\" style=\"width: 70%;\">\nWhat's in Python for this?\n\n\n\nScikit-Learn, Scipy, StatsModels are the usual suspects.\n\n\nOnly StatsModels has a convenience method to estimate CDFs with Kernels.\n\n\nBased solely on making my life easy, I chose StatsModels to hack MV KDE methods and insert Fréchet bounds in BW estimation.\n\n\nStatsModels KDEMultivariate Package\n\n\n\nWrapper code for MV KDE here\n\n\nBase code for bandwidth selection methods here.\n\n\nLet's have a quick overview of how this normally works.\nFirst we generate some data to call the methods.\n\nWe will use some betas.", "n=1000\ndistr=spst.beta #<-- From SciPy\nsmpl=np.linspace(0,1,num=n)\nparams={'horns':(0.5,0.5),'horns1':(0.5,0.55),\n 'shower':(5.,2.),'grower':(2.,5.)}\n\nv_type=f'{\"c\"*len(params)}' #<-- Statsmodels wants to know if data is \n # continuous (c) \n # discrete ordered (o) \n # discrete unordered (u)\n \nfig, ax = plt.subplots(1,2,figsize=(10,5))\nlist(map(lambda x: ax[0].plot(distr.cdf(smpl,*x),smpl) , params.values()))\nlist(map(lambda x: ax[1].plot(smpl,distr.pdf(smpl,*x)) , params.values()))\nax[0].legend(list(params.keys()));ax[1].legend(list(params.keys()))\nax[0].grid(); ax[1].grid() \nfig.suptitle(f'CDFs & PDFs for different marginals (Beta distributed)')\nplt.show()", "Kernels and BW Selection Methods\n\nKernel selection depends on \"v_type\". 
For \"c\" -> Gaussian Kernel.\nThis is a list of the kernel functions available in the package\nkernel_func = dict(\n wangryzin=kernels.wang_ryzin,\n aitchisonaitken=kernels.aitchison_aitken,\n gaussian=kernels.gaussian,\n\n aitchison_aitken_reg = kernels.aitchison_aitken_reg,\n wangryzin_reg = kernels.wang_ryzin_reg,\n\n gauss_convolution=kernels.gaussian_convolution,\n wangryzin_convolution=kernels.wang_ryzin_convolution,\n aitchisonaitken_convolution=kernels.aitchison_aitken_convolution,\n\n gaussian_cdf=kernels.gaussian_cdf,\n aitchisonaitken_cdf=kernels.aitchison_aitken_cdf,\n wangryzin_cdf=kernels.wang_ryzin_cdf,\n\n d_gaussian=kernels.d_gaussian)\n\nDifferent kernels are selected for different reasons: variable features and if they want to fit pdfs or cdfs.\n(probably)\nBandwidth selection methods\nWe have a choice of 3 BW selection methods:\n1. normal_reference: normal reference rule of thumb (default)\n\nBW from this method is the starting point of the other two algorithms\nSilverman's rule for MV case\nQuick, but too smooth.\n\n2. cv_ml: cross validation maximum likelihood\n\nNot quick, but reasonable estimates in reasonable time (within seconds to few minutes).\n\nUses the bandwidth estimate that maximizes the leave-out-out likelihood. \n\n\nImplemented in method \"_cv_ml(self)\" of \"class GenericKDE(object)\" in \"statsmodels.nonparametric._kernel_base\"\n\n\nThe leave-one-out log likelihood function is:\n\n\n$$\\ln L=\\sum_{i=1}^{n}\\ln f_{-i}(X_{i})$$\n\nThe leave-one-out kernel estimator of $f_{-i}$ is:\n\n$$f_{-i}(X_{i})=\\frac{1}{(n-1)h} \\sum_{j=1,j\\neq i}K_{h}(X_{i},X_{j})$$\nwhere $K_{h}$ represents the Generalized product kernel estimator:\n$$ K_{h}(X_{i},X_{j})=\\prod_{s=1}^{q}h_{s}^{-1}k\\left(\\frac{X_{is}-X_{js}}{h_{s}}\\right) $$\n\nThe Generalized product Kernel Estimator is also a method of GenericKDE(object).\n\n3. 
cv_ls: cross validation least squares\n\nVery, very slow (>8x slower than ml)\n\nReturns the value of the bandwidth that minimizes the integrated mean square error between the estimated and actual distribution.\n\nImplemented in method \"_cv_ls(self)\" of \"class GenericKDE(object)\" in \"statsmodels.nonparametric._kernel_base\"\n\nThe integrated mean square error (IMSE) is given by:\n$$ \\int\\left[\\hat{f}(x)-f(x)\\right]^{2}dx $$\nComparing times and bandwidth choices\nLet's compare times and values for bandwidth selection for each method available in StatsModels, considering 4 dimensions and 1000 samples.\n\n\nRule-of-thumb:\n3 loops, best of 3: 109 µs per loop\nbw with reference [0.15803504 0.15817752 0.07058083 0.07048409]\n\n\nCV-LOO ML: \n3 loops, best of 3: 1min 39s per loop\nbw with maximum likelihood [0.04915534 0.03477012 0.09889865 0.09816758]\n\n\nCV-LS:\n3 loops, best of 3: 12min 30s per loop\nbw with least squares [1.12156416e-01 1.00000000e-10 1.03594669e-01 9.11747124e-02]\n\n\nBut check out the bandwidth sizes!", "import statsmodels.api as sm\n\n#Generate some independent data for each parameter set\nmvdata={k:distr.rvs(*params[k],size=n) for k in params}\nrd=np.array(list(mvdata.values()))\n\n%timeit -n3 sm.nonparametric.KDEMultivariate(data=rd,var_type=v_type, bw='normal_reference')\ndens_u_rot = sm.nonparametric.KDEMultivariate(data=rd,var_type=v_type, bw='normal_reference')\nprint('bw with reference',dens_u_rot.bw, '(only available for gaussian kernels)')\n\n%timeit -n3 sm.nonparametric.KDEMultivariate(data=rd,var_type=v_type, bw='cv_ml')\ndens_u_ml = sm.nonparametric.KDEMultivariate(data=rd,var_type=v_type, bw='cv_ml')\nprint('bw with maximum likelihood',dens_u_ml.bw)\n\n# BW with least squares takes >8x more than with ml\n%timeit -n3 sm.nonparametric.KDEMultivariate(data=rd,var_type=v_type, bw='cv_ls')\ndens_u_ls = sm.nonparametric.KDEMultivariate(data=rd,var_type=v_type, bw='cv_ls')\nprint('bw with least squares',dens_u_ls.bw)", "Now the fun part: modifying the package to do our bidding\n\n\n\nAll we need is in two classes: class KDEMultivariate(GenericKDE) and its parent, class GenericKDE(object).\n\n\nWhen we call the constructor for the KDEMultivariate object, this happens:\n\nData checks & reshaping, and internal setup.\nThe bandwidth selection method is chosen and the bandwidth is calculated via a call to a hidden parent method (self._compute_bw(bw) or self._compute_efficient(bw))\n\n\n\nIn the parent class, one of these methods is then called:\n\n_normal_reference() <- Silverman's rule\n_cv_ml() <- Cross validation maximum likelihood\n_cv_ls() <- Cross validation least squares\n\n\n\nHow do the BW calculation methods work?\n\n_cv_ml() and _cv_ls() are almost the same method except for:\n| _cv_ml() | _cv_ls() |\n|---|---|\n| h0 = self._normal_reference() | h0 = self._normal_reference() |\n|bw = optimize.fmin(self.loo_likelihood, | bw = optimize.fmin(self.imse,|\n|x0=h0, args=(np.log, ),| x0=h0,| \n|maxiter=1e3, | maxiter=1e3 |\n|maxfun=1e3, | maxfun=1e3, |\n|disp=0, | disp=0,|\n|xtol=1e-3) | xtol=1e-3) |\n\n\nA bummer: no direct way to feed ranges of hyperparameters in order to constrain the search space!\n\nThey simply call scipy.optimize.fmin underneath.\n\n\n\noptimize.fmin comes from scipy.optimize\nSo everything is passed to an optimization function!\n\n
\nDoesn't mean we can't do something about it :).\nLet's look inside loo_likelihood, and see where can we intervene:", "def loo_likelihood(self, bw, func=lambda x: x):\n\n LOO = LeaveOneOut(self.data) #<- iterator for a leave-one-out over the data\n L = 0\n for i, X_not_i in enumerate(LOO): #<- per leave-one-out of the data (ouch!)\n \n f_i = gpke(bw, #<- provided by the optimization algorithm \n \n data=-X_not_i, #<- dataset minus one sample as given by LOO\n \n data_predict=-self.data[i, :], #<- REAL dataset, ith point\n \n var_type=self.var_type) #<- 'cccc' or something similar\n \n L += func(f_i) #<- _cv_ml() passed np.log, so its log-likelihood \n # of gkpe at ith point.\n \n return -L", "What happens inside gkpe?\n\n\n\nBoth the CDF and PDF are estimated with a gpke. They just use a different kernel.\n\n\nAll the kernel implementations are here.", "def gpke(bw, data, data_predict, var_type, ckertype='gaussian',\n okertype='wangryzin', ukertype='aitchisonaitken', tosum=True):\n \n kertypes = dict(c=ckertype, o=okertype, u=ukertype) #<- kernel selection\n\n Kval = np.empty(data.shape)\n for ii, vtype in enumerate(var_type): #per ii dimension\n func = kernel_func[kertypes[vtype]]\n Kval[:, ii] = func(bw[ii], data[:, ii], data_predict[ii]) \n\n iscontinuous = np.array([c == 'c' for c in var_type])\n dens = Kval.prod(axis=1) / np.prod(bw[iscontinuous]) #<- Ta-da, kernel products.\n if tosum:\n return dens.sum(axis=0)\n else:\n return dens", "What did I do?\nGroundwork:\n\nMethods for: \n\n\nEstimating the Fréchet bounds for a dataset.\n\n\nVisualizing the bounds (2d datasets) see here\n\n\nCounting how many violations of the bound were made by a CDF.\n\n\nMeasuring the size of the violation at each point (diff between the point of the CDF in which the violation happened, and the bound that was broken)\n\n\nGenerating experiments\n\n\nMassaging outputs of the profiler\n\n\nThen...\n\n\n\nEstimated a percentage of bound breaking for winning bandwidths in different methods.\n\nIt was not zero!!!\n\n\n\nTried using the violations as a way to prune \"unpromising\" bandwidths before applying the gpke thru the whole LOO iteration.\n\n\nIt made the optimization algorithm go coo-coo\n\n\nBecause scipy.optimize.fmin was expecting a number out of that function.\n\n\nTo return something \"proportionally punishing\", I probably should keeping track of the previous estimates.\n\n\nThat would require more work.\n\nBasically also hacking the code for the optimization.\nFuture work!\n\n\n\n\n\nThen more...\n\nHijacked loo_likelihood() to make my own method in which violations are used to guide the optimization algorithm.\n\n\nTried feeding number of violations to the algorithm. \n\n\nThe algorithm got lost.\n\n\nMaybe too little information?\n\n\n\n\nTried feeding the sum of the size of all violations. 
\n\n\nIt kinda worked but the final steps of the algorithm were unstable.\n\n\nCan we make it a bit more informative?\n\n\n\n\nAnd then, some more.\n\n\n\nTried feeding a weighted sum of the size of all violations.\n\n\nThe weights were the size of the bound at each violation point.\n\n\nThe rationale is that a violation at a narrow point should be punished more than a violation at an already wide point.\n\n\nIt still takes between 20% to 200% more time than cv_ml when it should be at least an order of magnitude faster (cdf estimation is faster than leave-one-out)\n\n\nGee, I wonder if I have a bug somewhere?\n\n\n\n\nYup, I actually had a bug.\n\nWhat was the bug?\n\n\n\nWhile making this presentation I realized I had a bug.\n\n\nMy method for estimating bounds should be called with THE CDFs of each dimension.\n\nI was calling it with the data directly (!!!).\n\n\n\nNo wonder I was getting horrible results.\n\n\nSo the actual results of this hack will have to wait :P.\n\nAll my tests of the weekend are now useless.\nWill keep them somewhere in my hard-drive as mementos...\n\n\n\n;D I will repeat the tests with the right call and show the results for the next presentation.\n<img src=\"https://i.kym-cdn.com/photos/images/newsfeed/000/187/324/allthethings.png\" style=\"width: 70%;\">\nOk, is not like everything is wrong\n\nLet's do some quick counts here for Fréchet violations.", "def get_frechets(dvars):\n d=len(dvars)\n n=len(dvars[0])\n dimx=np.array(range(d))\n un=np.ones(d,dtype=int)\n bottom_frechet = np.array([max( np.sum( dvars[dimx,un*i] ) +1-d, 0 ) \n for i in range(n) ])\n \n top_frechet = np.array([min([y[i] for y in dvars]) for i in range(n)])\n return {'top': top_frechet, 'bottom': bottom_frechet}\n\ncdfs={fname :distr.cdf(smpl,*params[fname]) for fname in params}\nfrechets=get_frechets(np.array(list(cdfs.values())))", "Calculating number of violations", "def check_frechet_fails(guinea_cdf,frechets):\n fails={'top':[], 'bottom':[]}\n for n in range(len(guinea_cdf)):\n #n_hyper_point=np.array([x[n] for x in rd])\n if guinea_cdf[n]>frechets['top'][n]:\n fails['top'].append(True)\n else:\n fails['top'].append(False)\n\n if guinea_cdf[n]<frechets['bottom'][n]:\n fails['bottom'].append(True)\n else:\n fails['bottom'].append(False)\n return {'top':np.array(fails['top']),\n 'bottom':np.array(fails['bottom'])}", "Given 4 dimensions and 1000 samples, we got:\n\n\nFor Silverman: 58.8% violations\n\n\nFor cv_ml: 58.0% violations\n\n\nFor cv_ls: 57.0% violations", "# For Silverman\nviolations_silverman=check_frechet_fails(dens_u_rot.cdf(),frechets)\nviolations_silverman=np.sum(violations_silverman['top'])+ np.sum(violations_silverman['bottom'])\nprint(f'violations:{violations_silverman} ({100.*violations_silverman/len(smpl)}%)')\n\n# For cv_ml\nviolations_cv_ml=check_frechet_fails(dens_u_ml.cdf(),frechets)\nviolations_cv_ml=np.sum(violations_cv_ml['top'])+ np.sum(violations_cv_ml['bottom'])\nprint(f'violations:{violations_cv_ml} ({100.*violations_cv_ml/len(smpl)}%)')\n\n# For cv_ls\nviolations_cv_ls=check_frechet_fails(dens_u_ls.cdf(),frechets)\nviolations_cv_ls=np.sum(violations_cv_ls['top'])+ np.sum(violations_cv_ls['bottom'])\nprint(f'violations:{violations_cv_ls} ({100.*violations_cv_ls/len(smpl)}%)')", "What more?\n\nQuite a lot of sweat went into generating code for comparing my approaches with cv_ml\nI may as well show it to you, and point where the bug was :(.", "def generate_experiments(reps,n,params, distr, dims):\n bws_frechet={f'bw_{x}':[] for x in params}\n 
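# each dict maps 'bw_<param>' to the winning bandwidth collected from every repetition; both are saved to CSV for comparing the two selection methods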
bws_cv_ml={f'bw_{x}':[] for x in params}\n\n\n for iteration in range(reps):\n mvdata = {k: distr.rvs(*params[k], size=n) for k in params}\n rd = np.array(list(mvdata.values())) #<---- THIS IS NOT A CDF!!!!!\n\n # get frechets and thresholds\n frechets = get_frechets(rd) #<------- THEREFORE THIS IS A BUG !!!!!\n\n bw_frechets, bw_cv_ml=profile_run(rd, frechets,iteration)\n\n for ix,x in enumerate(params):\n bws_frechet[f'bw_{x}'].append(bw_frechets[ix])\n bws_cv_ml[f'bw_{x}'].append(bw_cv_ml[ix])\n\n pd.DataFrame(bws_frechet).to_csv(f'/home/lia/liaProjects/outs/bws_frechet_d{dims}-n{n}-iter{reps}.csv')\n pd.DataFrame(bws_cv_ml).to_csv(f'/home/lia/liaProjects/outs/bws_cv_ml_d{dims}-n{n}-iter{reps}.csv')", "And this is how the functions that make the calculations look underneath.", "def get_bw(datapfft, var_type, reference, frech_bounds=None):\n # Using leave-one-out likelihood\n # the initial value for the optimization is the normal_reference\n # h0 = normal_reference()\n\n data = adjust_shape(datapfft, len(var_type))\n\n if not frech_bounds:\n fmin =lambda bw, funcx: loo_likelihood(bw, data, var_type, func=funcx)\n argsx=(np.log,)\n else:\n fmin = lambda bw, funcx: frechet_likelihood(bw, data, var_type,\n frech_bounds, func=funcx)\n argsx=(None,) #second element of tuple is if debug mode\n\n h0 = reference\n bw = optimize.fmin(fmin, x0=h0, args=argsx, #feeding logarithm for loo\n maxiter=1e3, maxfun=1e3, disp=0, xtol=1e-3)\n # bw = self._set_bw_bounds(bw) # bound bw if necessary\n return bw", "And this was my frechet_likelihood method", "def frechet_likelihood(bww, datax, var_type, frech_bounds, func=None, debug_mode=False,):\n \n cdf_est = cdf(datax, bww, var_type) # <- calls gpke underneath, but is a short call\n \n d_violations = calc_frechet_fails(cdf_est, frech_bounds)\n \n width_bound = frech_bounds['top'] - frech_bounds['bottom']\n \n viols=(d_violations['top']+d_violations['bottom'])/width_bound\n \n L= np.sum(viols)\n return L", "And this is how profiling info was collected\n\nThe python profiler is a bit unfriendly, so maybe this code could be useful as a snippet?\nOr, getting a professional license of pycharm ;) (Thanks boss!)", "def profile_run(rd,frechets,iterx):\n dims=len(rd)\n n=len(rd[0])\n v_type = f'{\"c\"*dims}'\n # threshold: number of violations by the cheapest method.\n dens_u_rot = sm.nonparametric.KDEMultivariate(data=rd, var_type=v_type, bw='normal_reference')\n cdf_dens_u_rot = dens_u_rot.cdf()\n violations_rot = count_frechet_fails(cdf_dens_u_rot, frechets)\n\n\n #profile frechets\n pr = cProfile.Profile()\n pr.enable()\n bw_frechets = get_bw(rd, v_type, dens_u_rot.bw, frech_bounds=frechets)\n pr.disable()\n\n s = io.StringIO()\n ps = pstats.Stats(pr, stream=s).sort_stats('cumtime')\n ps.print_stats()\n s = s.getvalue()\n\n with open(f'/home/lia/liaProjects/outs/frechet-profile-d{dims}-n{n}-iter{iterx}.txt', 'w+') as f:\n f.write(s)\n\n\n #profile cv_ml\n pr = cProfile.Profile()\n pr.enable()\n bw_cv_ml = get_bw(rd, v_type, dens_u_rot.bw)\n pr.disable()\n\n s = io.StringIO()\n ps = pstats.Stats(pr, stream=s).sort_stats('cumtime')\n ps.print_stats()\n s = s.getvalue()\n\n with open(f'/home/lia/liaProjects/outs/loo-ml-profile-d{dims}-n{n}-iter{iterx}.txt', 'w+') as f:\n f.write(s)\n\n\n return bw_frechets,bw_cv_ml", "Next Month!\n\n\n\nRepeat runs but without the bug ;) and using the data from the samples.\n\n\nSee if this makes reasonable kernels in the 2d case. 
\n\n\nInteractive graphics for plotting the bounds and the cdfs in the 2d case.\n\n\nThank you :)!\n\nAny questions? \n<img src=\"https://media.giphy.com/media/4uraUi810KSMo/giphy.gif\">" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CalPolyPat/Python-Workshop
Python Workshop/Logic.ipynb
mit
[ "Logic\nAt this point we have all of our supplies gathered and we know how they work. So far we know how to make variables and do some basic operations on them. We know a bit about containers and how we can use them to hold variables in various ways. Now how do we use these things to make the computer do our bidding? This notebook is all about logic; making decisions based on conditions(booleans). We will also cover soething called loops which let us do something over and over again with a few lines of code, or, and much more importantly, do something to each element in a container. Let's start with logic.\nIf and else\nWe want our computers to make decisions. Computers, however, only know how to ask one type of question: if (something is true) then (do something). To ask this question using code, we use the following format: \nif (some condition):\n #put code here\n\nWhere (some condition) must give a boolean, or true and false, answer. If the condition is not true, perhaps we want to do something else. To do that we follow this format:\nif (some condition):\n #code here happens if the condition is true\nelse:\n #code here happens if condition is false\n\nIt is key to note that the code in the if-block is indented. This tells the computer that the code should only run if the condition is true. Let's take a look at some examples.", "if (True):\n print(\"This will always print\")\nelse:\n print(\"This will never print\")\nif (False):\n print(\"This will never print\")\nelse:\n print(\"This will always print\")\n\na = 4\nb = 5\nif (a<b):\n print(\"Wow, a was less than b, what a surprise!\")\n \nif (a>b):\n print(\"I knew that a was greater than b!\")\nelse:\n print(\"Wait, a was not greater than b???\")\n\n#What does a<b actually return?\nprint(a<b)\n#Aha, it was just a boolean all along\n\nla = [1,2,3,4,5,'a','b']\nif(1 in la):\n print('1 is in la')\nelse:\n print('1 is not in la')", "Elif\nLet's say you want to check a different condition before just saying, \"The first condition was false, let's do the else statement.\" We could just use a second if statement, but instead we have the else-if statement, elif. It allows us to check a second condition after the first one fails. Let us concrete this idea with an example.", "#I love food, let's take a look in my fridge\nfridge = ['bananas', 'apples', 'water', 'tortillas', 'cheese']\n#I want some pizza, but if I don't have any I will settle for a quesadilla which requires tortillas and cheese\nif('pizza' in fridge):\n print('Patrick ate pizza and was happy')\nelif('tortillas' in fridge and 'cheese' in fridge):\n print('Patrick didn\\'t get his pizza, but he did get a quesadilla and is still happy!')\nelse:\n print('Patrick is still hungry')\n ", "Let's revamp that example, but this time, I went out and bought a pizza.", "#I love food, let's take a look in my fridge\nfridge = ['bananas', 'apples', 'water', 'tortillas', 'cheese', 'pizza']\n#I want some pizza, but if I don't have any I will settle for a quesadilla which requires tortillas and cheese\nif('pizza' in fridge):\n print('Patrick ate pizza and was happy')\nelif('tortillas' in fridge and 'cheese' in fridge):\n print('Patrick didn\\'t get his pizza, but he did get a quesadilla and is still happy!')\nelse:\n print('Patrick is still hungry')\n ", "Notice that, although I had the fixings for a quesadilla in my fridge, I had pizza so I never needed to check for a tortilla and cheese. This illustrates the fact that elif wont run unless the if statements before it fails. 
Further, you can stack elif statements forever. Let's see that.", "#I love food, let's take a look in my fridge\nfridge = ['bananas', 'apples', 'water', 'tortillas', 'beer']\n#I want some pizza, but if I don't have any I will settle for a quesadilla which requires tortillas and cheese\nif('pizza' in fridge):\n print('Patrick ate pizza and was happy')\nelif('tortillas' in fridge and 'cheese' in fridge):\n print('Patrick didn\\'t get his pizza, but he did get a quesadilla and is still happy!')\nelif('beer' in fridge):\n print('Patrick is still hungry, but he has beer so he is happy')\nelse:\n print('Patrick is still hungry')\n ", "Exercises\n\nWrite some \"dummy\" if, if else, and if elif else statements that will print out exactly what you expect until you feel comfortable with them.\n\nWhat will be the output of the following code sample:\nif(2<4):\n if(len([1,2,3])<=len(set([1,1,1,2,2,3,3,3]))):\n print(\"This will certainly print\")\n elif(2>1):\n print(\"Or will this print?\")\n else:\n print(\"It's gotta be this one...\")\nelse:\n print(\"This won't print...or will it.\")\n\n\nLoops\n\"I feel like I'm doing this over and over again\" -Your computer on loops. Wanna do something 10, 100, n times? Loops are your best friend! Want to loop through a list containing all of your data? Loops are your bestest friend! We will look at two different types of loops, while loops and for loops. \nWhile Loops\nWhile loops will continue to loop until some condition is false. While loops follow the format:\nwhile (some condition):\n #some code here\n\nWhile loops can go on forever if the condition is never false. This is not the end of the world and you won't crash your computer. To stop a cell that is running, you can click on the stop button in the Jupyter toolbar. Let's see what we can do with this.", "t=15\nwhile(t>0):\n print(\"t-minus \" + str(t))\n t-=1", "While loops are really good if you want to do something over and over again. Let's generate some fake data with this. I introduce here the range() function. This generates a list of numbers. Let's see briefly how it works.", "# Let's make a list of numbers starting at zero and going to 99. range() by default uses a step size of 1\n#so this will yield integers from 0 to 99\nx = range(0,100)\nprint(x)\n# Unfortunately range does some strange things and doesn't return a list, if you want a list, you already know how to convert it.\nprint(list(x))\n\ny = []\nx = 1\nwhile(x<100):\n y.append(x**5-27*x**2-2300*x**-1+x%(x+1))\n x+=1\nprint(y)", "So that was a cute example of how we can generate some data based on some equation. Later on, however, we will want to graph our data and this requires a second list for our x values. The while loop is cumbersome in the respect and so we now introduce the for loop.\nFor Loops\nA for loop will loop through any container element by element and conveniently place each element in a special new variable. The format of a for loop is as follows:\nfor (special variable name) in (container we are looping through):\n #do some stuff with, or without, that special variable\n\nThe advantage of for loops is that you get each element of some list handed to you on a platter...er, in a variable. Our previous example of generating data now allows us to make a list for our x data and loop through that. 
Let's see that in action.", "x = range(1,100) #Remember that this gives the integers from 1 to 99\ny = []\nfor val in x: #val is our special variable here, it will take on the value of every element in x\n print(val)\n y.append(val**2+3*val)\nprint(y)", "Again, a neat little example. The true power of for loops comes when we have lists that are not numerical. Let's make every string in a list uppercase.", "words = ['i', 'am', 'sorry', 'dave', 'i', 'can\\'t', 'do', 'that']\nupperwords = []\nfor word in words: #remember that word will take on the value of every element of words\n print(word)\n upperwords.append(word.upper()) # to make a string uppercase, you can use the .upper() function.\nprint(upperwords)", "We have one more special type of loop to cover. List comprehensions: a quick way to make a list in one line.\nList Comprehensions\nA list comprehension is essentially a for loop sandwiched into a list. The syntax for a list comprehension is as follows:\nX = [(expression involving special variable) for (special variable) in (some list)]\n\nFor example, if we want a list containing x^2 for x in [0,1,2,3,4,5,6,7,8,9,10], we can create it by using:\nY = [x**2 for x in range(0,11)]\n\nDoes this actually work?", "y = [x**2 for x in range(0,11)]\nprint(y)", "What about something weirder?", "print(words)\nwordslength = [len(word) for word in words]\nprint(wordslength)", "My god, it worked! Think of the possibilities! With these new tools we can do 90% of all programming we will ever do. Pretty neat, huh? I would like to show you one more example of list comprehensions.", "# I only want words with length less than 3\nnewwords = [word for word in words if len(word)<3]\nprint(newwords)", "This is an incredibly powerful tool that you will use on occasion, so keep it in your noggin somewhere.\nExercises\n\n1. Write a while loop that counts to 25 and on every number that is divisible by 3 prints out \"Flarbos\".\n2. Do the same thing as in 1. but with a for loop.\n3. Do Project Euler problem #2 https://projecteuler.net/problem=2\n4. Do Project Euler problem #25 https://projecteuler.net/problem=25\n5. Find the smallest integer k such that k, k/13, and k^2/43 are divisible by 244 using a for loop.\n6. Repeat 5. using a list comprehension." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/inpe/cmip6/models/sandbox-1/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: INPE\nSource ID: SANDBOX-1\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:06\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'inpe', 'sandbox-1', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmopshere.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. 
Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. 
Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. 
Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. 
Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintainence respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. 
Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. 
Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
glouppe/scikit-optimize
examples/hyperparameter-optimization.ipynb
bsd-3-clause
[ "Tuning a scikit-learn estimator with skopt\nGilles Louppe, July 2016.", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.rcParams[\"figure.figsize\"] = (10, 6)", "Problem statement\nTuning the hyper-parameters of a machine learning model is often carried out using an exhaustive exploration of (a subset of) the space all hyper-parameter configurations (e.g., using sklearn.model_selection.GridSearchCV), which often results in a very time consuming operation. \nIn this notebook, we illustrate how skopt can be used to tune hyper-parameters using sequential model-based optimisation, hopefully resulting in equivalent or better solutions, but within less evaluations.\nObjective\nThe first step is to define the objective function we want to minimize, in this case the cross-validation mean absolute error of a gradient boosting regressor over the Boston dataset, as a function of its hyper-parameters:", "from sklearn.datasets import load_boston\nfrom sklearn.ensemble import GradientBoostingRegressor\nfrom sklearn.model_selection import cross_val_score\n\nboston = load_boston()\nX, y = boston.data, boston.target\nreg = GradientBoostingRegressor(n_estimators=50, random_state=0)\n\ndef objective(params):\n max_depth, learning_rate, max_features, min_samples_split, min_samples_leaf = params\n\n reg.set_params(max_depth=max_depth,\n learning_rate=learning_rate,\n max_features=max_features,\n min_samples_split=min_samples_split, \n min_samples_leaf=min_samples_leaf)\n\n return -np.mean(cross_val_score(reg, X, y, cv=5, n_jobs=-1, scoring=\"mean_absolute_error\"))", "Next, we need to define the bounds of the dimensions of the search space we want to explore, and (optionally) the starting point:", "space = [(1, 5), # max_depth\n (10**-5, 10**-1, \"log-uniform\"), # learning_rate\n (1, X.shape[1]), # max_features\n (2, 30), # min_samples_split\n (1, 30)] # min_samples_leaf\n\nx0 = [3, 0.01, 6, 2, 1]", "Optimize all the things!\nWith these two pieces, we are now ready for sequential model-based optimisation. 
Here we compare gaussian process-based optimisation versus forest-based optimisation.", "from skopt import gp_minimize\nres_gp = gp_minimize(objective, space, x0=x0, n_calls=50, random_state=0)\n\n\"Best score=%.4f\" % res_gp.fun\n\nprint(\"\"\"Best parameters:\n- max_depth=%d\n- learning_rate=%.6f\n- max_features=%d\n- min_samples_split=%d\n- min_samples_leaf=%d\"\"\" % (res_gp.x[0], res_gp.x[1], \n res_gp.x[2], res_gp.x[3], \n res_gp.x[4]))\n\nfrom skopt import forest_minimize\nres_forest = forest_minimize(objective, space, x0=x0, n_calls=50, random_state=0)\n\n\"Best score=%.4f\" % res_forest.fun\n\nprint(\"\"\"Best parameters:\n- max_depth=%d\n- learning_rate=%.6f\n- max_features=%d\n- min_samples_split=%d\n- min_samples_leaf=%d\"\"\" % (res_forest.x[0], res_forest.x[1], \n res_forest.x[2], res_forest.x[3], \n res_forest.x[4]))", "As a baseline, let us also compare with random search in the space of hyper-parameters, which is equivalent to sklearn.model_selection.RandomizedSearchCV.", "from skopt import dummy_minimize\nres_dummy = dummy_minimize(objective, space, x0=x0, n_calls=50, random_state=0)\n\n\"Best score=%.4f\" % res_dummy.fun\n\nprint(\"\"\"Best parameters:\n- max_depth=%d\n- learning_rate=%.4f\n- max_features=%d\n- min_samples_split=%d\n- min_samples_leaf=%d\"\"\" % (res_dummy.x[0], res_dummy.x[1], \n res_dummy.x[2], res_dummy.x[3], \n res_dummy.x[4]))", "Convergence plot", "from skopt.plots import plot_convergence\nplot_convergence((\"gp_optimize\", res_gp),\n (\"forest_optimize\", res_forest),\n (\"dummy_optimize\", res_dummy))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Jwuthri/Mozinor
mozinor/example/Mozinor example Reg.ipynb
mit
[ "Use mozinor for regression\nImport the main module", "from mozinor.baboulinet import Baboulinet", "Prepare the pipeline\n(str) filepath: Give the csv file\n(str) y_col: The column to predict\n(bool) regression: Regression or Classification ?\n(bool) process: (WARNING) apply some preprocessing on your data (tune this preprocess with params below)\n(char) sep: delimiter\n(list) col_to_drop: which columns you don't want to use in your prediction\n(bool) derivate: for all features combination apply, n1 * n2, n1 / n2 ...\n(bool) transform: for all features apply, log(n), sqrt(n), square(n)\n(bool) scaled: scale the data ?\n(bool) infer_datetime: for all columns check the type and build new columns from them (day, month, year, time) if they are date type\n(str) encoding: data encoding\n(bool) dummify: apply dummies on your categoric variables\n\nThe data files have been generated by sklearn.dataset.make_regression", "cls = Baboulinet(filepath=\"toto2.csv\", y_col=\"predict\", regression=True)", "Now run the pipeline\nMay take some times", "res = cls.babouline()", "The class instance, now contains 2 objects, the model for this data, and the best stacking for this data\nTo make auto generate the code of the model\nGenerate the code for the best model", "cls.bestModelScript()", "Generate the code for the best stacking", "cls.bestStackModelScript()", "To check which model is the best\nBest model", "res.best_model\n\nshow = \"\"\"\n Model: {},\n Score: {}\n\"\"\"\nprint(show.format(res.best_model[\"Estimator\"], res.best_model[\"Score\"]))", "Best stacking", "res.best_stack_models\n\nshow = \"\"\"\n FirstModel: {},\n SecondModel: {},\n Score: {}\n\"\"\"\nprint(show.format(res.best_stack_models[\"Fit1stLevelEstimator\"], res.best_stack_models[\"Fit2ndLevelEstimator\"], res.best_stack_models[\"Score\"]))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
andressotov/News-Categorization-MNB
News_Categorization_MNB.ipynb
mit
[ "News Categorization using Multinomial Naive Bayes\nby Andrés Soto\nOnce upon a time, while searching by internet, I discovered this site, where I found this challenge: \n* Using the News Aggregator Data Set, can we predict the category (business, entertainment, etc.) of a news article given only its headline? \nSo I decided to try to do it using the Multinomial Naive Bayes method.\nThe News Aggregator Data Set comes from the UCI Machine Learning Repository. \n\nLichman, M. (2013). UCI Machine Learning Repository. Irvine, CA: University of California, School of Information and Computer Science. \n\nThis specific dataset can be found in the UCI ML Repository at this URL.\nThis dataset contains headlines, URLs, and categories for 422,937 news stories collected by a web aggregator between March 10th, 2014 and August 10th, 2014. News categories in this dataset are labelled:\nLabel | Category | News \n-------|------------|----------\nb | business | <div style=\"text-align: right\"> 115967 </div>\nt | science and technology | <div style=\"text-align: right\"> 108344 </div> \ne | entertainment | <div style=\"text-align: right\"> 152469 </div>\nm | health | <div style=\"text-align: right\"> 45639 </div> \nMultinomial Naive Bayes method will be used to predict the category (business, entertainment, etc.) of a news article given only its headline. The paper is divided in four sections. The first section is dedicated to importing the data set and getting some preliminary information about it. Second section explains how to divide data in two sets: the training set and the test set. Section number 3 is about training and testing the classification algorithm and obtaining results. Results analysis constitute the last section. \nImport data\nTo import the data from the CSV file, we will use Pandas library which also offers data structures and operations for manipulating data tables. Therefore, we need to import Pandas library. \nTo embed plots inside the Notebook, we use the \"%matplotlib inline\" magic command.", "%matplotlib inline\nimport pandas as pd ", "Now, we have to initialize some variables that will be used. They will be used to collect the news titles, its categories, as well as a list of the different possible categories (without repetitions).", "titles = [] # list of news titles\ncategories = [] # list of news categories\nlabels = [] # list of different categories (without repetitions)\nnlabels = 4 # number of different categories\nlnews = [] # list of dictionaries with two fields: one for the news and \n # the other for its category", "The code for this section will be organized in two functions: one which imports the data and the other which counts the news in each category, its percentage and plots it.", "def import_data():\n global titles, labels, categories\n # importing news aggregator data via Pandas (Python Data Analysis Library)\n news = pd.read_csv(\"uci-news-aggregator.csv\")\n # function 'head' shows the first 5 items in a column (or\n # the first 5 rows in the DataFrame)\n print(news.head())\n categories = news['CATEGORY']\n titles = news['TITLE']\n labels = sorted(list(set(categories))) ", "Let's see how long it takes to import the data by %time magic command.", "%time import_data()", "The time to import the dat was 3.54 seconds. Let's analyze how many news we have from the different categories and its percentage. We will use the class Counter from the collections library, which keeps track of how many values contains a collection. 
Then we will tabulate the different categories and its percentage via a DataFrame.", "from collections import Counter\n\ndef count_data(labels,categories): \n c = Counter(categories)\n cont = dict(c)\n # total number of news\n tot = sum(list(cont.values())) \n d = {\n \"category\" : labels,\n \"news\" : [cont[l] for l in labels],\n \"percent\" : [cont[l]/tot for l in labels]\n }\n \n print(pd.DataFrame(d)) \n print(\"total \\t\",tot) \n \n return cont\n\ncont = count_data(labels,categories)", "Let's show a pie plot with the proportion of news by category.", "import pylab as pl # useful for drawing graphics\n\ndef categories_pie_plot(cont,tit):\n global labels\n sizes = [cont[l] for l in labels]\n pl.pie(sizes, explode=(0, 0, 0, 0), labels=labels,\n autopct='%1.1f%%', shadow=True, startangle=90)\n pl.title(tit)\n pl.show()\n \ncategories_pie_plot(cont,\"Plotting categories\")", "As we can see, the entertainment (e) category is the biggest one, which is more than three times bigger than health (m) category. In second place we have business (b) and technology (t), which are more than two times bigger than health category.\nSplitting the data\nNow we should split our data into two sets:\n1. a training set (70%) used to discover potentially predictive relationships, and\n2. a test set (30%) used to evaluate whether the discovered relationships hold and to assess the strength and utility of a predictive relationship.\nBefore splitting it, the data should be first permuted. Shuffle is a method included in scikit-learn library which allows to do random permutations of collections. Then data could be splitted into a pair of train and test sets.", "from sklearn.utils import shuffle # Shuffle arrays in a consistent way\n\nX_train = []\ny_train = []\nX_test = []\ny_test = []\n\ndef split_data():\n global titles, categories\n global X_train, y_train, X_test, y_test,labels\n N = len(titles)\n Ntrain = int(N * 0.7) \n # Let's shuffle the data\n titles, categories = shuffle(titles, categories, random_state=0)\n X_train = titles[:Ntrain]\n y_train = categories[:Ntrain]\n X_test = titles[Ntrain:]\n y_test = categories[Ntrain:]\n\n%time split_data()", "Time required to split data is 1.28 seconds. Now let's analyze the proportion of news categories in the training set.", "cont2 = count_data(labels,y_train)", "Percentage are very much close to the ones obtained for the whole data set.", "categories_pie_plot(cont2,\"Categories % in training set\")", "Train and test the classifier\nIn order to train and test the classifier, the first step should be to tokenize and count the number of occurrence of each word that appears into the news'titles using for that CountVectorizer class. Each term found is assigned a unique integer index.\nThen the counters will be transformed to a TF-IDF representation using TfidfTransformer class. The last step creates the Multinomial Naive Bayes classifier. \nIn order to make the training process easier, scikit-learn provides a Pipeline class that behaves like a compound classifier. \nThe metrics module allows to calculate score functions, performance metrics and pairwise metrics and distance computations. 
F1-score can be interpreted as a weighted average of the precision and recall.", "from sklearn.feature_extraction.text import CountVectorizer \nfrom sklearn.feature_extraction.text import TfidfTransformer \nfrom sklearn.naive_bayes import MultinomialNB \nfrom sklearn.pipeline import Pipeline \nfrom sklearn import metrics \nimport numpy as np\nimport pprint\n\n# lmats = [] # list of confussion matrix \nnrows = nlabels\nncols = nlabels\n# conf_mat_sum = np.zeros((nrows, ncols))\n# f1_acum = [] # list of f1-score\n\ndef train_test():\n global X_train, y_train, X_test, y_test, labels \n #lmats, \\\n # conf_mat_sum, f1_acum, ncategories\n text_clf = Pipeline([('vect', CountVectorizer()),\n ('tfidf', TfidfTransformer()),\n ('clf', MultinomialNB()),\n ])\n text_clf = text_clf.fit(X_train, y_train)\n predicted = text_clf.predict(X_test)\n return predicted\n\n%time predicted = train_test()", "To compare the predicted labels to the corresponding set of true labels we will use the method accuracy_score from scikit-learn, which gives an accuracy over 0.92", "metrics.accuracy_score(y_test, predicted)", "To show the main classification metrics we will use the classification_report method from scikit-learn.", "print(metrics.classification_report(y_test, predicted, target_names=labels))", "We can see that, although the metrics (precision, recall and f1-score) in average give us 0.92, the results for category e (entertainment) are even better.\nConfusion matrix allows to detect if a classification algorithm is confusing two or more classes if you have an unequal number of observations in each class as in this case. An ideal classifier with 100% accuracy would produce a pure diagonal matrix which would have all the points predicted in their correct class. In case of class imbalance, confusion matrix normalization by class support size (number of elements in each class) can be interesting in order to have a visual interpretation of which class is being misclassified.", "mat = metrics.confusion_matrix(y_test, predicted,labels=labels)\ncm = mat.astype('float') / mat.sum(axis=1)[:, np.newaxis]\ncm", "Let's print a plot for the confussion matrix.", "import itertools\nimport matplotlib.pyplot as plt\n\ndef plot_confusion_matrix(cm, classes,\n title='Confusion matrix',\n cmap=plt.cm.Blues):\n \"\"\"\n This function prints and plots the confusion matrix.\n \"\"\"\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, '{:5.2f}'.format(cm[i, j]),\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')\n plt.colorbar()\n plt.show()\n\nplot_confusion_matrix(cm, labels, title='Confusion matrix')", "Confussion matrix columns represent the instances in a predicted class while rows represent the instances in an actual class. The diagonal elements represent the number of points for which the predicted label is equal to the true label, while off-diagonal elements are those that are mislabeled by the classifier. 
The higher the diagonal values of the confusion matrix the better, indicating many correct predictions.\nNow, let's see the relation between f1-score and the percentage by category", "def resume_data(labels,y_train,f1s):\n c = Counter(y_train)\n cont = dict(c)\n tot = sum(list(cont.values()))\n nlabels = len(labels)\n d = {\n \"category\" : [labels[i] for i in range(nlabels)],\n \"percent\" : [cont[labels[i]]/tot for i in range(nlabels)],\n \"f1-score\" : [f1s[i] for i in range(nlabels)]\n }\n \n print(pd.DataFrame(d)) \n print(\"total \\t\",tot) \n return cont\n\nf1s = metrics.f1_score(y_test, predicted, labels=labels, average=None)\ncont3 = resume_data(labels,y_train,f1s)", "Results analysis\nIn summary, results show a good accuracy (0.9238) with a good average level for precision, recall and f1-score (0.92). Analyzing these results by category, results are even better for the entertainment category ('e') with 0.96 for f1-score, 0.97 for recall and 0.95 for precision. I would like to highlight that the best precision corresponds to the health category ('m') with 0.97, but with a recall of only 0.85. Other categories show a more even behavior.\nAnalyzing the confusion matrix results, the highest index of points predicted in their correct class corresponds to category 'e', with 0.9719. This category presents a misclassification index of 0.014 for the technology category ('t') and lower indexes for the other categories.\nOn the contrary, category 'm' presents the worst hit rate, 0.846, with misclassification indexes of 0.062 with the business category ('b'), 0.0619 with category 'e' and 0.03 with category 't'.\nAnalyzing the number of news by category, category 'e' presents the highest percentage, 36%, with 45625 news. On the other hand, category 'm' presents the lowest percentage, 10.79%, with just 13709 news. Thus, category 'e' is more than three times bigger than category 'm'. Categories 'b' and 't' present similar numbers of news and percentages: 'b' has 34729 news with 27%, and 't' has 32663 news with 25%. Both categories, 'b' and 't', are more than two times bigger than category 'm'. According to this, better results seem to correspond to categories with higher percentages. In future experiments, I would try to confirm this hypothesis.\nIn this experiment, we only trained the classification algorithm with one set of data, so we just have one set of results. Although the training set and the test set were selected at random, they are just one sample of the possible results. In future experiments, I would try to estimate the confidence of the experimental results." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Hugovdberg/timml
notebooks/timml_notebook4_sol.ipynb
mit
[ "TimML Notebook 4\nHorizontal well\nA horizontal well is located in a 20 m thick aquifer; the hydraulic conductivity is $k = 10$ m/d and the vertical\nanisotropy factor is 0.1. The horizontal well is placed 5 m above the bottom of the aquifer. The well has\na discharge of 10,000 m$^3$/d and radius of $r=0.2$ m. The well is 200 m long and runs from $(x, y) = (−100, 0)$\nto $(x, y) = (100, 0)$. A long straight river with a head of 40 m runs to the right of the horizontal well along the line\n$x = 200$. The head is fixed to 42 m at $(x, y) = (−1000, 0)$.\nThree-dimensional flow to the horizontal well is modeled by dividing the aquifer up in 11 layers; the\nelevations are: [20, 15, 10, 8, 6, 5.5, 5.2, 4.8, 4.4, 4, 2, 0]. At the depth of the well, the layer thickness is equal to\nthe diameter of the well, and it increases in the layers above and below the well. A TimML model is created with the Model3D\ncommand. The horizontal well is located in layer 6 and is modeled with the LineSinkDitch element. Initially, the entry resistance of the well is set to zero.", "%matplotlib inline\nimport numpy as np\nfrom timml import *\nfigsize = (6, 6)\n\nz = [20, 15, 10, 8, 6, 5.5, 5.2, 4.8, 4.4, 4, 2, 0]\nml = Model3D(kaq=10, z=z, kzoverkh=0.1)\nls1 = LineSinkDitch(ml, x1=-100, y1=0, x2=100, y2=0, Qls=10000, order=5, layers=6)\nls2 = HeadLineSinkString(ml, [(200, -1000), (200, -200), (200, 0), (200, 200), (200, 1000)], hls=40, order=5, layers=0)\nrf = Constant(ml, xr=-1000, yr=0, hr=42, layer=0)\n\nprint(ls1.hls)\nprint(ls2.hls)", "Questions:\nExercise 4a\nSolve the model.", "ml.solve()", "Exercise 4b\nCreate contour plots of layers 0 and 6 and note the difference between the layers. Also,\ncompute the head at $(x, y) = (0, 0.2)$ (on the edge of the well) and notice that there is a very large head\ndifference between the top of the aquifer and the well.", "ml.contour(win=[-150, 150, -150, 150], ngr=[50, 100], layers = [0, 6],\n figsize=figsize)\nprint('The head at the top and in layer 6 are:')\nprint(ml.head(0, 0.2, [0, 6]))", "Exercise 4c\nDraw a number of pathlines from different elevations using the tracelines command. First make a plot with a cross section below it.", "ml.plot(win=[-1000, 1000, -1000, 1000], orientation='both', figsize=figsize)\nml.tracelines(xstart=[-500, -500, -500], ystart=[-500, -500, -500], zstart=[5, 9, 15], \n hstepmax=20, tmax=10 * 365.25, orientation='both', color='C0')\nml.tracelines(xstart=[250, 250, 250], ystart=[50, 50, 50], zstart=[5, 9, 15], \n hstepmax=20, tmax=10 * 365.25, orientation='both', color='C1')", "Exercise 4d\nMake a contour plot of the heads in a vertical cross-section using the vcontour command. Use a cross-section along the well.", "ml.vcontour(win=[-200, 300, 0, 0], n=50, levels=20, figsize=(6,6))", "Exercise 4e\nChange the entry resistance of the horizontal well to 0.01 days and set the width to 0.4 m and resolve the model. Notice the difference in the head inside the horizontal well with the headinside function of the horizontal well. Use a", "print('head inside w/o resistance:')\nprint(ls1.headinside())\n\nml = Model3D(kaq=10, z=z, kzoverkh=0.1)\nls = LineSinkDitch(ml, x1=-100, y1=0, x2=100, y2=0, Qls=10000, order=5, layers=6, wh=0.4, res=0.01)\nHeadLineSinkString(ml, [(200, -1000), (200, -200), (200, 0), (200, 200), (200, 1000)], \n hls=40, order=5, layers=0)\nrf = Constant(ml, xr=-1000, yr=0, hr=42, layer=0)\nml.solve()\n\nprint('head inside horizontal well:', ls.headinside())\n\nml.vcontour(win=[-200, 300, 0, 0], n=50, levels=20, vinterp=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
harishkrao/Python-for-Data-Analysis
chapter 02/List-dict-defaultdict-Counter.ipynb
mit
[ "Import the json package\nAssign the path of the json content file to the path variable.", "import json\npath = r'C:\\Users\\hrao\\Documents\\Personal\\HK\\Books\\pydata-book-master\\pydata-book-master\\ch02\\usagov_bitly_data2012-03-16-1331923249.txt'", "Open the file located in the path directory, one line at a time, and store it in a list called records.", "records = [json.loads(line) for line in open(path,'r')]\n\ntype(records)\n\nrecords[0]", "Calling a specific key within the list", "records[0]['tz']", "Printing all time zone values in the records list. \nHere we search for the string 'tz' in each element of the records list. \nIf the search returns a string, then we print the corresponding value of the key 'tz' for that element.", "time_zones = [rec['tz'] for rec in records if 'tz' in rec]\n\ntime_zones[:10]", "Counting the frequency of each time zone's occurrence in the list using a dict type in Python", "counts = {}\nfor x in time_zones:\n if x in counts:\n counts[x] = counts.get(x,0) + 1\n else:\n counts[x] = 1\nprint(counts)\n\nfrom collections import defaultdict\n\ncounts = defaultdict(int)\nfor x in time_zones:\n counts[x] += 1\n\nprint(counts)\n\ncounts['America/New_York']\n\n\nlen(time_zones)", "To list the top n time zone occurrences", "def top_counts(count_dict, n):\n value_key_pairs = [(count, tz) for tz, count in count_dict.items()]\n value_key_pairs.sort()\n return value_key_pairs[-n:]\n\ntop_counts(counts,10)\n\nfrom collections import Counter\ncounts = Counter(time_zones)\ncounts.most_common(10)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
statsmodels/statsmodels.github.io
v0.12.1/examples/notebooks/generated/statespace_varmax.ipynb
bsd-3-clause
[ "VARMAX models\nThis is a brief introduction notebook to VARMAX models in statsmodels. The VARMAX model is generically specified as:\n$$\ny_t = \\nu + A_1 y_{t-1} + \\dots + A_p y_{t-p} + B x_t + \\epsilon_t +\nM_1 \\epsilon_{t-1} + \\dots M_q \\epsilon_{t-q}\n$$\nwhere $y_t$ is a $\\text{k_endog} \\times 1$ vector.", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\ndta = sm.datasets.webuse('lutkepohl2', 'https://www.stata-press.com/data/r12/')\ndta.index = dta.qtr\ndta.index.freq = dta.index.inferred_freq\nendog = dta.loc['1960-04-01':'1978-10-01', ['dln_inv', 'dln_inc', 'dln_consump']]", "Model specification\nThe VARMAX class in statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in statsmodels, by the exog argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the measurement_error argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the error_cov_type argument).\nExample 1: VAR\nBelow is a simple VARX(2) model in two endogenous variables and an exogenous series, but no constant term. Notice that we needed to allow for more iterations than the default (which is maxiter=50) in order for the likelihood estimation to converge. This is not unusual in VAR models which have to estimate a large number of parameters, often on a relatively small number of time series: this model, for example, estimates 27 parameters off of 75 observations of 3 variables.", "exog = endog['dln_consump']\nmod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(2,0), trend='n', exog=exog)\nres = mod.fit(maxiter=1000, disp=False)\nprint(res.summary())", "From the estimated VAR model, we can plot the impulse response functions of the endogenous variables.", "ax = res.impulse_responses(10, orthogonalized=True).plot(figsize=(13,3))\nax.set(xlabel='t', title='Responses to a shock to `dln_inv`');", "Example 2: VMA\nA vector moving average model can also be formulated. Below we show a VMA(2) on the same data, but where the innovations to the process are uncorrelated. In this example we leave out the exogenous regressor but now include the constant term.", "mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(0,2), error_cov_type='diagonal')\nres = mod.fit(maxiter=1000, disp=False)\nprint(res.summary())", "Caution: VARMA(p,q) specifications\nAlthough the model allows estimating VARMA(p,q) specifications, these models are not identified without additional restrictions on the representation matrices, which are not built-in. For this reason, it is recommended that the user proceed with error (and indeed a warning is issued when these models are specified). Nonetheless, they may in some circumstances provide useful information.", "mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(1,1))\nres = mod.fit(maxiter=1000, disp=False)\nprint(res.summary())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
oditorium/blog
iPython/Reportlab2-FromMarkdown.ipynb
agpl-3.0
[ "Markdown 2 Reportlab\nMarkdown\nHere we create some lorem ipsum markdown text for testing", "from IPython.display import HTML\nimport markdown as md\n\nl = \"\"\"LOREM ipsum dolor sit amet, _consectetur_ adipiscing elit. Praesent dignissim orci a leo dapibus semper eget sed \nsem. Pellentesque tellus nisl, condimentum nec libero id, __cursus consequat__ lectus. Ut quis nulla laoreet, efficitur \nmetus sit amet, <strike>viverra dui. Nam tempor ornare urna a consequat</strike>. Nulla dolor velit, sollicitudin sit \namet consectetur sed, interdum nec orci. Nunc suscipit tempus est ut porta. <u>Ut non felis a ligula suscipit \nposuere quis sit amet elit</u>.\"\"\"\n\nmarkdown_text = \"\"\"\n# Heading1\n## Heading 2\n\n%s %s %s\n\n\n## Heading 2\n\n%s\n\n- %s\n- %s\n- %s\n\n## Heading 2\n\n%s\n\n4. %s\n4. %s\n4. %s\n\n%s\n\"\"\" % (l,l,l,l,l,l,l,l,l,l,l,l)\n\n#HTML(md.markdown(markdown_text))", "ReportLab\nimport the necessary functions one by one", "from markdown import markdown as md_markdown\n\nfrom xml.etree.ElementTree import fromstring as et_fromstring\nfrom xml.etree.ElementTree import tostring as et_tostring\n\nfrom reportlab.platypus import BaseDocTemplate as plat_BaseDocTemplate\nfrom reportlab.platypus import Frame as plat_Frame\nfrom reportlab.platypus import Paragraph as plat_Paragraph\nfrom reportlab.platypus import PageTemplate as plat_PageTemplate\n\nfrom reportlab.lib.styles import getSampleStyleSheet as sty_getSampleStyleSheet\nfrom reportlab.lib.pagesizes import A4 as ps_A4\nfrom reportlab.lib.pagesizes import A5 as ps_A5\nfrom reportlab.lib.pagesizes import landscape as ps_landscape\nfrom reportlab.lib.pagesizes import portrait as ps_portrait\nfrom reportlab.lib.units import inch as un_inch", "The ReportFactory class creates a ReportLab document / report object; the idea is that all style information as well as page layouts are collected in this object, so that when a different factory is passed to the writer object the report looks different.", "class ReportFactory():\n \"\"\"create a Reportlab report object using BaseDocTemplate\n \n the report creation is a two-step process\n \n 1. instantiate a ReportFactory object\n 2. 
retrieve the report using the report() method\n \n note: as it currently stands the report object is remembered in the\n factory object, so another call to report() return the _same_ object;\n this means that changing the paramters after report() has been called\n for the first time will not have an impact\n \"\"\"\n \n def __init__(self, filename=None): \n if filename == None: filename = 'report_x1.pdf'\n # f = open (filename,'wb') -> reports can take a file handle!\n self.filename = filename\n self.pagesize = ps_portrait(ps_A4)\n self.showboundary = 0\n #PAGE_HEIGHT=defaultPageSize[1]; PAGE_WIDTH=defaultPageSize[0]\n self.styles=sty_getSampleStyleSheet()\n self.bullet = \"\\u2022\"\n self._report = None\n \n @staticmethod\n def static_page(canvas,doc):\n \"\"\"template for report page\n \n this template defines how the standard page looks (header, footer, background\n objects; it does _not_ define the flow objects though, as those are separately\n passed to the PageTemplate() function)\n \"\"\"\n canvas.saveState()\n canvas.setFont('Times-Roman',9)\n canvas.drawString(un_inch, 0.75 * un_inch, \"Report - Page %d\" % doc.page)\n canvas.restoreState()\n \n def refresh_styles(self):\n \"\"\"refresh all styles\n \n derived ReportLab styles need to be refreshed in case the parent style\n has been modified; this does not really work though - it seems that the\n styles are simply flattened....\n \"\"\"\n style_names = self.styles.__dict__['byName'].keys()\n for name in style_names:\n self.styles[name].refresh()\n \n def report(self):\n \"\"\"initialise a report object\n \n this function initialised and returns a report object, based on the properties\n set on the factory object at this point (note: the report object is only generated\n _once_ and subsequent calls return the same object;this implies that most property\n changes after this function has been called are not taken into account)\n \"\"\"\n if self._report == None:\n rp = plat_BaseDocTemplate(self.filename,showBoundary=self.showboundary, pagesize=self.pagesize)\n frame_page = plat_Frame(rp.leftMargin, rp.bottomMargin, rp.width, rp.height, id='main')\n pagetemplates = [\n plat_PageTemplate(id='Page',frames=frame_page,onPage=self.static_page),\n ]\n rp.addPageTemplates(pagetemplates)\n self._report = rp\n return self._report\n\n ", "The ReportWriter object executes the conversion from markdown to pdf. 
It is currently very simplistic - for example there is no entry hook for starting the conversion at the html level rather than at markdown, and only a few basic tags are implemented.", "class ReportWriter():\n \n def __init__(self, report_factory):\n self._simple_tags = {\n 'h1' : 'Heading1',\n 'h2' : 'Heading2',\n 'h3' : 'Heading3',\n 'h4' : 'Heading4',\n 'h5' : 'Heading5',\n 'p' : 'BodyText',\n }\n self.rf = report_factory\n self.report = report_factory.report();\n \n def _render_simple_tag(self, el, story):\n style_name = self._simple_tags[el.tag]\n el.tag = 'para'\n text = et_tostring(el)\n story.append(plat_Paragraph(text,self.rf.styles[style_name]))\n \n def _render_ol(self, el, story):\n return self._render_error(el, story)\n \n def _render_ul(self, ul_el, story):\n for li_el in ul_el:\n li_el.tag = 'para'\n text = et_tostring(li_el)\n story.append(plat_Paragraph(text,self.rf.styles['Bullet'], bulletText=self.rf.bullet))\n \n def _render_error(self, el, story):\n story.append(plat_Paragraph(\n \"<para fg='#ff0000' bg='#ffff00'>cannot render '%s' tag</para>\" % el.tag,self.rf.styles['Normal']))\n \n @staticmethod\n def html_from_markdown(mdown, remove_newline=True, wrap=True):\n \"\"\"convert markdown to html\n \n mdown - the markdown to be converted\n remove_newline - if True, all \\n characters are removed after conversion\n wrap - if True, the whole html is wrapped in an <html> tag\n \"\"\"\n html = md_markdown(mdown)\n if remove_newline: html = html.replace(\"\\n\", \"\")\n if wrap: html = \"<html>\"+html+\"</html>\"\n return html\n \n @staticmethod\n def dom_from_html(html, wrap=False):\n \"\"\"convert html into a dom tree\n \n html - the html to be converted\n wrap - if True, the whole html is wrapped in an <html> tag \n \"\"\"\n if wrap: html = \"<html>\"+html+\"</html>\"\n dom = et_fromstring(html)\n return (dom)\n \n @staticmethod\n def dom_from_markdown(mdown):\n \"\"\"convert markdown into a dom tree\n \n mdown - the markdown to be converted\n wrap - if True, the whole html is wrapped in an <html> tag \n \"\"\"\n html = ReportWriter.html_from_markdown(mdown, remove_newline=True, wrap=True)\n dom = ReportWriter.dom_from_html(html, wrap=False)\n return (dom)\n \n def create_report(self, mdown):\n \"\"\"create report and write it do disk\n \n mdown - markdown source of the report\n \"\"\"\n dom = self.dom_from_markdown(mdown)\n story = []\n for el in dom:\n if el.tag in self._simple_tags:\n self._render_simple_tag(el, story)\n elif el.tag == 'ul':\n self._render_ul(el, story)\n elif el.tag == 'ol':\n self._render_ol(el, story)\n else:\n self._render_error(el, story)\n self.report.build(story)", "create a standard report (A4, black text etc)", "rfa4 = ReportFactory('report_a4.pdf')\npdfw = ReportWriter(rfa4)\npdfw.create_report(markdown_text*10)", "create a second report with different parameters (A5, changed colors etc; the __dict__ method shows all the options that can be modified for changing styles)", "#rfa5.styles['Normal'].__dict__\n\nrfa5 = ReportFactory('report_a5.pdf')\nrfa5.pagesize = ps_portrait(ps_A5)\n#rfa5.styles['Normal'].textColor = '#664422'\n#rfa5.refresh_styles()\nrfa5.styles['BodyText'].textColor = '#666666'\nrfa5.styles['Bullet'].textColor = '#666666'\nrfa5.styles['Heading1'].textColor = '#000066'\nrfa5.styles['Heading2'].textColor = '#000066'\nrfa5.styles['Heading3'].textColor = '#000066'\n\n\npdfw = ReportWriter(rfa5)\npdfw.create_report(markdown_text*10)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
KrisCheng/ML-Learning
archive/MOOC/Deeplearning_AI/ImprovingDeepNeuralNetworks/HyperparameterTuning/Tensorflow+Tutorial.ipynb
mit
[ "TensorFlow Tutorial\nWelcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow: \n\nInitialize variables\nStart your own session\nTrain algorithms \nImplement a Neural Network\n\nPrograming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. \n1 - Exploring the Tensorflow Library\nTo start, you will import the library:", "import math\nimport numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nfrom tensorflow.python.framework import ops\nfrom tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict\n\n%matplotlib inline\nnp.random.seed(1)", "Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. \n$$loss = \\mathcal{L}(\\hat{y}, y) = (\\hat y^{(i)} - y^{(i)})^2 \\tag{1}$$", "y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.\ny = tf.constant(39, name='y') # Define y. Set to 39\n\nloss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss\n\ninit = tf.global_variables_initializer() # When init is run later (session.run(init)),\n # the loss variable will be initialized and ready to be computed\nwith tf.Session() as session: # Create a session and print the output\n session.run(init) # Initializes the variables\n print(session.run(loss)) # Prints the loss", "Writing and running programs in TensorFlow has the following steps:\n\nCreate Tensors (variables) that are not yet executed/evaluated. \nWrite operations between those Tensors.\nInitialize your Tensors. \nCreate a Session. \nRun the Session. This will run the operations you'd written above. \n\nTherefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run init=tf.global_variables_initializer(). That initialized the loss variable, and in the last line we were finally able to evaluate the value of loss and print its value.\nNow let us look at an easy example. Run the cell below:", "a = tf.constant(2)\nb = tf.constant(10)\nc = tf.multiply(a,b)\nprint(c)", "As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type \"int32\". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.", "sess = tf.Session()\nprint(sess.run(c))", "Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session. \nNext, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. \nTo specify values for a placeholder, you can pass in values by using a \"feed dictionary\" (feed_dict variable). Below, we created a placeholder for x. 
This allows us to pass in a number later when we run the session.", "# Change the value of x in the feed_dict\n\nx = tf.placeholder(tf.int64, name = 'x')\nprint(sess.run(2 * x, feed_dict = {x: 3}))\nsess.close()", "When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session. \nHere's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.\n1.1 - Linear function\nLets start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. \nExercise: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):\n```python\nX = tf.constant(np.random.randn(3,1), name = \"X\")\n```\nYou might find the following functions helpful: \n- tf.matmul(..., ...) to do a matrix multiplication\n- tf.add(..., ...) to do an addition\n- np.random.randn(...) to initialize randomly", "# GRADED FUNCTION: linear_function\n\ndef linear_function():\n \"\"\"\n Implements a linear function: \n Initializes W to be a random tensor of shape (4,3)\n Initializes X to be a random tensor of shape (3,1)\n Initializes b to be a random tensor of shape (4,1)\n Returns: \n result -- runs the session for Y = WX + b \n \"\"\"\n \n np.random.seed(1)\n \n ### START CODE HERE ### (4 lines of code)\n X = tf.constant(np.random.randn(3,1), name = \"X\")\n W = tf.constant(np.random.randn(4,3), name = \"W\")\n b = tf.constant(np.random.randn(4,1), name = \"b\")\n Y = tf.add(tf.matmul(W, X), b)\n ### END CODE HERE ### \n \n # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate\n \n ### START CODE HERE ###\n sess = tf.Session()\n result = sess.run(Y)\n ### END CODE HERE ### \n \n # close the session \n sess.close()\n\n return result\n\nprint( \"result = \" + str(linear_function()))", "Expected Output : \n<table> \n<tr> \n<td>\n**result**\n</td>\n<td>\n[[-2.15657382]\n [ 2.95891446]\n [-1.08926781]\n [-0.84538042]]\n</td>\n</tr> \n\n</table>\n\n1.2 - Computing the sigmoid\nGreat! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softmax. For this exercise lets compute the sigmoid function of an input. \nYou will do this exercise using a placeholder variable x. When running the session, you should use the feed dictionary to pass in the input z. In this exercise, you will have to (i) create a placeholder x, (ii) define the operations needed to compute the sigmoid using tf.sigmoid, and then (iii) run the session. \n Exercise : Implement the sigmoid function below. 
You should use the following: \n\ntf.placeholder(tf.float32, name = \"...\")\ntf.sigmoid(...)\nsess.run(..., feed_dict = {x: z})\n\nNote that there are two typical ways to create and use sessions in tensorflow: \nMethod 1:\n```python\nsess = tf.Session()\nRun the variables initialization (if needed), run the operations\nresult = sess.run(..., feed_dict = {...})\nsess.close() # Close the session\n**Method 2:**python\nwith tf.Session() as sess: \n # run the variables initialization (if needed), run the operations\n result = sess.run(..., feed_dict = {...})\n # This takes care of closing the session for you :)\n```", "# GRADED FUNCTION: sigmoid\n\ndef sigmoid(z):\n \"\"\"\n Computes the sigmoid of z\n \n Arguments:\n z -- input value, scalar or vector\n \n Returns: \n results -- the sigmoid of z\n \"\"\"\n \n ### START CODE HERE ### ( approx. 4 lines of code)\n # Create a placeholder for x. Name it 'x'.\n x = tf.placeholder(tf.float32, name = \"x\")\n\n # compute sigmoid(x)\n sigmoid = tf.sigmoid(x)\n\n # Create a session, and run it. Please use the method 2 explained above. \n # You should use a feed_dict to pass z's value to x. \n with tf.Session() as sess: \n # Run session and call the output \"result\"\n result = sess.run(sigmoid, feed_dict = {x: z})\n \n ### END CODE HERE ###\n \n return result\n\nprint (\"sigmoid(0) = \" + str(sigmoid(0)))\nprint (\"sigmoid(12) = \" + str(sigmoid(12)))", "Expected Output : \n<table> \n<tr> \n<td>\n**sigmoid(0)**\n</td>\n<td>\n0.5\n</td>\n</tr>\n<tr> \n<td>\n**sigmoid(12)**\n</td>\n<td>\n0.999994\n</td>\n</tr> \n\n</table>\n\n<font color='blue'>\nTo summarize, you how know how to:\n1. Create placeholders\n2. Specify the computation graph corresponding to operations you want to compute\n3. Create the session\n4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. \n1.3 - Computing the Cost\nYou can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{2}$ and $y^{(i)}$ for i=1...m: \n$$ J = - \\frac{1}{m} \\sum_{i = 1}^m \\large ( \\small y^{(i)} \\log a^{ [2] (i)} + (1-y^{(i)})\\log (1-a^{ [2] (i)} )\\large )\\small\\tag{2}$$\nyou can do it in one line of code in tensorflow!\nExercise: Implement the cross entropy loss. The function you will use is: \n\ntf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)\n\nYour code should input z, compute the sigmoid (to get a) and then compute the cross entropy cost $J$. All this can be done using one call to tf.nn.sigmoid_cross_entropy_with_logits, which computes\n$$- \\frac{1}{m} \\sum_{i = 1}^m \\large ( \\small y^{(i)} \\log \\sigma(z^{2}) + (1-y^{(i)})\\log (1-\\sigma(z^{2})\\large )\\small\\tag{2}$$", "# GRADED FUNCTION: cost\n\ndef cost(logits, labels):\n \"\"\"\n    Computes the cost using the sigmoid cross entropy\n    \n    Arguments:\n    logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)\n    labels -- vector of labels y (1 or 0) \n \n Note: What we've been calling \"z\" and \"y\" in this class are respectively called \"logits\" and \"labels\" \n in the TensorFlow documentation. So logits will feed into z, and labels into y. \n    \n    Returns:\n    cost -- runs the session of the cost (formula (2))\n \"\"\"\n \n ### START CODE HERE ### \n \n # Create the placeholders for \"logits\" (z) and \"labels\" (y) (approx. 
2 lines)\n z = tf.placeholder(tf.float32, name = \"z\")\n y = tf.placeholder(tf.float32, name = \"y\")\n \n # Use the loss function (approx. 1 line)\n cost = tf.nn.sigmoid_cross_entropy_with_logits(logits = z, labels = y)\n \n # Create a session (approx. 1 line). See method 1 above.\n sess = tf.Session()\n \n # Run the session (approx. 1 line).\n cost = sess.run(cost, feed_dict = {z: logits,y: labels})\n \n # Close the session (approx. 1 line). See method 1 above.\n sess.close()\n \n ### END CODE HERE ###\n \n return cost\n\nlogits = sigmoid(np.array([0.2,0.4,0.7,0.9]))\ncost = cost(logits, np.array([0,0,1,1]))\nprint (\"cost = \" + str(cost))", "Expected Output : \n<table> \n <tr> \n <td>\n **cost**\n </td>\n <td>\n [ 1.00538719 1.03664088 0.41385433 0.39956614]\n </td>\n </tr>\n\n</table>\n\n1.4 - Using One Hot encodings\nMany times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:\n<img src=\"images/onehot.png\" style=\"width:600px;height:150px;\">\nThis is called a \"one hot\" encoding, because in the converted representation exactly one element of each column is \"hot\" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: \n\ntf.one_hot(labels, depth, axis) \n\nExercise: Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use tf.one_hot() to do this.", "# GRADED FUNCTION: one_hot_matrix\n\ndef one_hot_matrix(labels, C):\n \"\"\"\n Creates a matrix where the i-th row corresponds to the ith class number and the jth column\n corresponds to the jth training example. So if example j had a label i. Then entry (i,j) \n will be 1. \n \n Arguments:\n labels -- vector containing the labels \n C -- number of classes, the depth of the one hot dimension\n \n Returns: \n one_hot -- one hot matrix\n \"\"\"\n \n ### START CODE HERE ###\n \n # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)\n C = tf.constant(C)\n \n # Use tf.one_hot, be careful with the axis (approx. 1 line)\n one_hot_matrix = tf.one_hot(labels, C, axis=0)\n \n # Create the session (approx. 1 line)\n sess = tf.Session()\n \n # Run the session (approx. 1 line)\n one_hot = sess.run(one_hot_matrix)\n \n # Close the session (approx. 1 line). See method 1 above.\n sess.close()\n \n ### END CODE HERE ###\n \n return one_hot\n\nlabels = np.array([1,2,3,0,2,1])\none_hot = one_hot_matrix(labels, C = 4)\nprint (\"one_hot = \" + str(one_hot))", "Expected Output: \n<table> \n <tr> \n <td>\n **one_hot**\n </td>\n <td>\n [[ 0. 0. 0. 1. 0. 0.]\n [ 1. 0. 0. 0. 0. 1.]\n [ 0. 1. 0. 0. 1. 0.]\n [ 0. 0. 1. 0. 0. 0.]]\n </td>\n </tr>\n\n</table>\n\n1.5 - Initialize with zeros and ones\nNow you will learn how to initialize a vector of zeros and ones. The function you will be calling is tf.ones(). To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively. \nExercise: Implement the function below to take in a shape and to return an array (of the shape's dimension of ones). 
\n\ntf.ones(shape)", "# GRADED FUNCTION: ones\n\ndef ones(shape):\n \"\"\"\n Creates an array of ones of dimension shape\n \n Arguments:\n shape -- shape of the array you want to create\n \n Returns: \n ones -- array containing only ones\n \"\"\"\n \n ### START CODE HERE ###\n \n # Create \"ones\" tensor using tf.ones(...). (approx. 1 line)\n ones = tf.ones(shape)\n \n # Create the session (approx. 1 line)\n sess = tf.Session()\n \n # Run the session to compute 'ones' (approx. 1 line)\n ones = sess.run(ones)\n \n # Close the session (approx. 1 line). See method 1 above.\n sess.close()\n \n ### END CODE HERE ###\n return ones\n\nprint (\"ones = \" + str(ones([3])))", "Expected Output:\n<table> \n <tr> \n <td>\n **ones**\n </td>\n <td>\n [ 1. 1. 1.]\n </td>\n </tr>\n\n</table>\n\n2 - Building your first neural network in tensorflow\nIn this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:\n\nCreate the computation graph\nRun the graph\n\nLet's delve into the problem you'd like to solve!\n2.0 - Problem statement: SIGNS Dataset\nOne afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.\n\nTraining set: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).\nTest set: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).\n\nNote that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.\nHere are examples for each number, and how an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolutoion to 64 by 64 pixels.\n<img src=\"images/hands.png\" style=\"width:800px;height:350px;\"><caption><center> <u><font color='purple'> Figure 1</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center>\nRun the following code to load the dataset.", "# Loading the dataset\nX_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()", "Change the index below and run the cell to visualize some examples in the dataset.", "# Example of a picture\nindex = 0\nplt.imshow(X_train_orig[index])\nprint (\"y = \" + str(np.squeeze(Y_train_orig[:, index])))", "As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.", "# Flatten the training and test images\nX_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T\nX_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T\n# Normalize image vectors\nX_train = X_train_flatten/255.\nX_test = X_test_flatten/255.\n# Convert training and test labels to one hot matrices\nY_train = convert_to_one_hot(Y_train_orig, 6)\nY_test = convert_to_one_hot(Y_test_orig, 6)\n\nprint (\"number of training examples = \" + str(X_train.shape[1]))\nprint (\"number of test examples = \" + str(X_test.shape[1]))\nprint (\"X_train shape: \" + str(X_train.shape))\nprint (\"Y_train shape: \" + str(Y_train.shape))\nprint (\"X_test shape: \" + str(X_test.shape))\nprint (\"Y_test shape: \" + str(Y_test.shape))", "Note that 12288 comes from $64 \\times 64 \\times 3$. 
Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.\nYour goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. \nThe model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. \n2.1 - Create placeholders\nYour first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session. \nExercise: Implement the function below to create the placeholders in tensorflow.", "# GRADED FUNCTION: create_placeholders\n\ndef create_placeholders(n_x, n_y):\n \"\"\"\n Creates the placeholders for the tensorflow session.\n \n Arguments:\n n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)\n n_y -- scalar, number of classes (from 0 to 5, so -> 6)\n \n Returns:\n X -- placeholder for the data input, of shape [n_x, None] and dtype \"float\"\n Y -- placeholder for the input labels, of shape [n_y, None] and dtype \"float\"\n \n Tips:\n - You will use None because it let's us be flexible on the number of examples you will for the placeholders.\n In fact, the number of examples during test/train is different.\n \"\"\"\n\n ### START CODE HERE ### (approx. 2 lines)\n X = tf.placeholder(tf.float32, shape = [n_x, None], name = \"X\")\n Y = tf.placeholder(tf.float32, shape = [n_y, None], name = \"Y\")\n ### END CODE HERE ###\n \n return X, Y\n\nX, Y = create_placeholders(12288, 6)\nprint (\"X = \" + str(X))\nprint (\"Y = \" + str(Y))", "Expected Output: \n<table> \n <tr> \n <td>\n **X**\n </td>\n <td>\n Tensor(\"Placeholder_1:0\", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1)\n </td>\n </tr>\n <tr> \n <td>\n **Y**\n </td>\n <td>\n Tensor(\"Placeholder_2:0\", shape=(10, ?), dtype=float32) (not necessarily Placeholder_2)\n </td>\n </tr>\n\n</table>\n\n2.2 - Initializing the parameters\nYour second task is to initialize the parameters in tensorflow.\nExercise: Implement the function below to initialize the parameters in tensorflow. You are going use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: \npython\nW1 = tf.get_variable(\"W1\", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))\nb1 = tf.get_variable(\"b1\", [25,1], initializer = tf.zeros_initializer())\nPlease use seed = 1 to make sure your results match ours.", "# GRADED FUNCTION: initialize_parameters\n\ndef initialize_parameters():\n \"\"\"\n Initializes parameters to build a neural network with tensorflow. The shapes are:\n W1 : [25, 12288]\n b1 : [25, 1]\n W2 : [12, 25]\n b2 : [12, 1]\n W3 : [6, 12]\n b3 : [6, 1]\n \n Returns:\n parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3\n \"\"\"\n \n tf.set_random_seed(1) # so that your \"random\" numbers match ours\n \n ### START CODE HERE ### (approx. 
6 lines of code)\n W1 = tf.get_variable(\"W1\", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))\n b1 = tf.get_variable(\"b1\", [25,1], initializer = tf.zeros_initializer())\n W2 = tf.get_variable(\"W2\", [12,25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))\n b2 = tf.get_variable(\"b2\", [12,1], initializer = tf.zeros_initializer())\n W3 = tf.get_variable(\"W3\", [6,12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))\n b3 = tf.get_variable(\"b3\", [6,1], initializer = tf.zeros_initializer())\n ### END CODE HERE ###\n\n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2,\n \"W3\": W3,\n \"b3\": b3}\n \n return parameters\n\ntf.reset_default_graph()\nwith tf.Session() as sess:\n parameters = initialize_parameters()\n print(\"W1 = \" + str(parameters[\"W1\"]))\n print(\"b1 = \" + str(parameters[\"b1\"]))\n print(\"W2 = \" + str(parameters[\"W2\"]))\n print(\"b2 = \" + str(parameters[\"b2\"]))", "Expected Output: \n<table> \n <tr> \n <td>\n **W1**\n </td>\n <td>\n < tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref >\n </td>\n </tr>\n <tr> \n <td>\n **b1**\n </td>\n <td>\n < tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref >\n </td>\n </tr>\n <tr> \n <td>\n **W2**\n </td>\n <td>\n < tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref >\n </td>\n </tr>\n <tr> \n <td>\n **b2**\n </td>\n <td>\n < tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref >\n </td>\n </tr>\n\n</table>\n\nAs expected, the parameters haven't been evaluated yet.\n2.3 - Forward propagation in tensorflow\nYou will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: \n\ntf.add(...,...) to do an addition\ntf.matmul(...,...) to do a matrix multiplication\ntf.nn.relu(...) to apply the ReLU activation\n\nQuestion: Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at z3. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need a3!", "# GRADED FUNCTION: forward_propagation\n\ndef forward_propagation(X, parameters):\n \"\"\"\n Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX\n \n Arguments:\n X -- input dataset placeholder, of shape (input size, number of examples)\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\"\n the shapes are given in initialize_parameters\n\n Returns:\n Z3 -- the output of the last LINEAR unit\n \"\"\"\n \n # Retrieve the parameters from the dictionary \"parameters\" \n W1 = parameters['W1']\n b1 = parameters['b1']\n W2 = parameters['W2']\n b2 = parameters['b2']\n W3 = parameters['W3']\n b3 = parameters['b3']\n \n ### START CODE HERE ### (approx. 
5 lines) # Numpy Equivalents:\n Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1\n A1 = tf.nn.relu(Z1) # A1 = relu(Z1)\n Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, a1) + b2\n A2 = tf.nn.relu(Z2) # A2 = relu(Z2)\n Z3 = tf.add(tf.matmul(W3, A2), b3) # Z3 = np.dot(W3,Z2) + b3\n ### END CODE HERE ###\n \n return Z3\n\ntf.reset_default_graph()\n\nwith tf.Session() as sess:\n X, Y = create_placeholders(12288, 6)\n parameters = initialize_parameters()\n Z3 = forward_propagation(X, parameters)\n print(\"Z3 = \" + str(Z3))", "Expected Output: \n<table> \n <tr> \n <td>\n **Z3**\n </td>\n <td>\n Tensor(\"Add_2:0\", shape=(6, ?), dtype=float32)\n </td>\n </tr>\n\n</table>\n\nYou may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to brackpropagation.\n2.4 Compute cost\nAs seen before, it is very easy to compute the cost using:\npython\ntf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))\nQuestion: Implement the cost function below. \n- It is important to know that the \"logits\" and \"labels\" inputs of tf.nn.softmax_cross_entropy_with_logits are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.\n- Besides, tf.reduce_mean basically does the summation over the examples.", "# GRADED FUNCTION: compute_cost \n\ndef compute_cost(Z3, Y):\n \"\"\"\n Computes the cost\n \n Arguments:\n Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)\n Y -- \"true\" labels vector placeholder, same shape as Z3\n \n Returns:\n cost - Tensor of the cost function\n \"\"\"\n \n # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)\n logits = tf.transpose(Z3)\n labels = tf.transpose(Y)\n \n ### START CODE HERE ### (1 line of code)\n cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y))\n ### END CODE HERE ###\n \n return cost\n\ntf.reset_default_graph()\n\nwith tf.Session() as sess:\n X, Y = create_placeholders(12288, 6)\n parameters = initialize_parameters()\n Z3 = forward_propagation(X, parameters)\n cost = compute_cost(Z3, Y)\n print(\"cost = \" + str(cost))", "Expected Output: \n<table> \n <tr> \n <td>\n **cost**\n </td>\n <td>\n Tensor(\"Mean:0\", shape=(), dtype=float32)\n </td>\n </tr>\n\n</table>\n\n2.5 - Backward propagation & parameter updates\nThis is where you become grateful to programming frameworks. All the backpropagation and the parameters update is taken care of in 1 line of code. It is very easy to incorporate this line in the model.\nAfter you compute the cost function. You will create an \"optimizer\" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.\nFor instance, for gradient descent the optimizer would be:\npython\noptimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)\nTo make the optimization you would do:\npython\n_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})\nThis computes the backpropagation by passing through the tensorflow graph in the reverse order. From cost to inputs.\nNote When coding, we often use _ as a \"throwaway\" variable to store values that we won't need to use later. Here, _ takes on the evaluated value of optimizer, which we don't need (and c takes the value of the cost variable). 
\n2.6 - Building the model\nNow, you will bring it all together! \nExercise: Implement the model. You will be calling the functions you had previously implemented.", "def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,\n num_epochs = 1500, minibatch_size = 32, print_cost = True):\n \"\"\"\n Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.\n \n Arguments:\n X_train -- training set, of shape (input size = 12288, number of training examples = 1080)\n Y_train -- test set, of shape (output size = 6, number of training examples = 1080)\n X_test -- training set, of shape (input size = 12288, number of training examples = 120)\n Y_test -- test set, of shape (output size = 6, number of test examples = 120)\n learning_rate -- learning rate of the optimization\n num_epochs -- number of epochs of the optimization loop\n minibatch_size -- size of a minibatch\n print_cost -- True to print the cost every 100 epochs\n \n Returns:\n parameters -- parameters learnt by the model. They can then be used to predict.\n \"\"\"\n \n ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables\n tf.set_random_seed(1) # to keep consistent results\n seed = 3 # to keep consistent results\n (n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)\n n_y = Y_train.shape[0] # n_y : output size\n costs = [] # To keep track of the cost\n \n # Create Placeholders of shape (n_x, n_y)\n ### START CODE HERE ### (1 line)\n X, Y = create_placeholders(n_x, n_y)\n ### END CODE HERE ###\n\n # Initialize parameters\n ### START CODE HERE ### (1 line)\n parameters = initialize_parameters()\n ### END CODE HERE ###\n \n # Forward propagation: Build the forward propagation in the tensorflow graph\n ### START CODE HERE ### (1 line)\n Z3 = forward_propagation(X, parameters)\n ### END CODE HERE ###\n \n # Cost function: Add cost function to tensorflow graph\n ### START CODE HERE ### (1 line)\n cost = compute_cost(Z3, Y)\n ### END CODE HERE ###\n \n # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.\n ### START CODE HERE ### (1 line)\n optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)\n ### END CODE HERE ###\n \n # Initialize all the variables\n init = tf.global_variables_initializer()\n\n # Start the session to compute the tensorflow graph\n with tf.Session() as sess:\n \n # Run the initialization\n sess.run(init)\n \n # Do the training loop\n for epoch in range(num_epochs):\n\n epoch_cost = 0. 
# Defines a cost related to an epoch\n num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set\n seed = seed + 1\n minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)\n\n for minibatch in minibatches:\n\n # Select a minibatch\n (minibatch_X, minibatch_Y) = minibatch\n \n # IMPORTANT: The line that runs the graph on a minibatch.\n # Run the session to execute the \"optimizer\" and the \"cost\", the feedict should contain a minibatch for (X,Y).\n ### START CODE HERE ### (1 line)\n _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})\n ### END CODE HERE ###\n \n epoch_cost += minibatch_cost / num_minibatches\n\n # Print the cost every epoch\n if print_cost == True and epoch % 100 == 0:\n print (\"Cost after epoch %i: %f\" % (epoch, epoch_cost))\n if print_cost == True and epoch % 5 == 0:\n costs.append(epoch_cost)\n \n # plot the cost\n plt.plot(np.squeeze(costs))\n plt.ylabel('cost')\n plt.xlabel('iterations (per tens)')\n plt.title(\"Learning rate =\" + str(learning_rate))\n plt.show()\n\n # lets save the parameters in a variable\n parameters = sess.run(parameters)\n print (\"Parameters have been trained!\")\n\n # Calculate the correct predictions\n correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))\n\n # Calculate accuracy on the test set\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\n\n print (\"Train Accuracy:\", accuracy.eval({X: X_train, Y: Y_train}))\n print (\"Test Accuracy:\", accuracy.eval({X: X_test, Y: Y_test}))\n \n return parameters", "Run the following cell to train your model! On our machine it takes about 5 minutes. Your \"Cost after epoch 100\" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!", "parameters = model(X_train, Y_train, X_test, Y_test)", "Expected Output:\n<table> \n <tr> \n <td>\n **Train Accuracy**\n </td>\n <td>\n 0.999074\n </td>\n </tr>\n <tr> \n <td>\n **Test Accuracy**\n </td>\n <td>\n 0.716667\n </td>\n </tr>\n\n</table>\n\nAmazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.\nInsights:\n- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting. \n- Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters.\n2.7 - Test with your own image (optional / ungraded exercise)\nCongratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that:\n 1. Click on \"File\" in the upper bar of this notebook, then click \"Open\" to go on your Coursera Hub.\n 2. Add your image to this Jupyter Notebook's directory, in the \"images\" folder\n 3. Write your image's name in the following code\n 4. 
Run the code and check if the algorithm is right!", "import scipy\nfrom PIL import Image\nfrom scipy import ndimage\n\n## START CODE HERE ## (PUT YOUR IMAGE NAME) \nmy_image = \"thumbs_up.jpg\"\n## END CODE HERE ##\n\n# We preprocess your image to fit your algorithm.\nfname = \"images/\" + my_image\nimage = np.array(ndimage.imread(fname, flatten=False))\nmy_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T\nmy_image_prediction = predict(my_image, parameters)\n\nplt.imshow(image)\nprint(\"Your algorithm predicts: y = \" + str(np.squeeze(my_image_prediction)))", "You indeed deserved a \"thumbs-up\" although as you can see the algorithm seems to classify it incorrectly. The reason is that the training set doesn't contain any \"thumbs-up\", so the model doesn't know how to deal with it! We call that a \"mismatched data distribution\" and it is one of the various of the next course on \"Structuring Machine Learning Projects\".\n<font color='blue'>\nWhat you should remember:\n- Tensorflow is a programming framework used in deep learning\n- The two main object classes in tensorflow are Tensors and Operators. \n- When you code in tensorflow you have to take the following steps:\n - Create a graph containing Tensors (Variables, Placeholders ...) and Operations (tf.matmul, tf.add, ...)\n - Create a session\n - Initialize the session\n - Run the session to execute the graph\n- You can execute the graph multiple times as you've seen in model()\n- The backpropagation and optimization is automatically done when running the session on the \"optimizer\" object." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
charmasaur/digbeta
tour/ijcai15.ipynb
gpl-3.0
[ "Reproduce the IJCAI15 Paper\nNOTE: Before running this notebook, please run script src/ijcai15_setup.py to setup data properly.\n<a id='toc'></a>\n\nDataset\nCompute POI Coordinates\nTrajectory Recommendation Problem\nDefinitions\nProblem Formulation\nPrepare Data\nLoad Trajectory Data\nCompute POI Info\nConstruct Travelling Sequences\nTransition Matrix\nTrajectory Recommendation -- Approach I\nChoose Cross Validation Sequences\nRecommendation by Solving ILPs\nEvaluation\nTrajectory Recommendation -- Approach II\nChoose Travelling Sequences for Training and Testing\nCompute POI popularity and user interest using training set\nGenerate ILP\nEvaluation\nIssues\n\n<a id='sec1'></a>\n1. Dataset &#8648;\nThe dataset used in this paper can be downloaded here, the summary of this dataset is also available.\nUnfortunately, one critical portion of information is missing, i.e. the geo-coordinates of each POI,\nwhich is necessary for calculating the travel time from one POI to another.\nHowever, it could be approximated by averaging (longitude, latitude) of \nall photos mapped to a specific POI by retriving coordinates of all photos in this dataset from the original YFCC100M dataset using photoID.\nSimple statistics of this dataset\n<table>\n<tr>\n<td><b>City</b></td>\n<td><b>#POIs</b></td>\n<td><b>#Users</b></td>\n<td><b>#POI_Visits</b></td>\n<td><b>#Travel_Sequences</b></td></tr>\n<tr><td>Toronto</td><td>29</td><td>1,395</td><td>39,419</td><td>6,057</td></tr>\n<tr><td>Osaka</td><td>27</td><td>450</td><td>7,747</td><td>1,115</td></tr>\n<tr><td>Glasgow</td><td>27</td><td>601</td><td>11,434</td><td>2,227</td></tr>\n<tr><td>Edinburgh</td><td>28</td><td>1,454</td><td>33,944</td><td>5,028</td></tr>\n</table>\n\nNOTE: the number of photos for each city described in the paper is NOT available in this dataset\n<a id='sec1.1'></a>\n1.1 Compute POI Coordinates\nTo compute the mean value of coordinates for all photos mapped to a POI, \nwe need to search the coordinates for each photo from the 100 million records.\nTo accelerate the searching process, first extract the photo id, longitude and latitude columns from the whole dataset\ncut -d $'\\t' -f1,11,12 yfcc100m_dataset &gt;&gt; dataset.yfcc\nand then import them to a database which was created by the following SQL scripts\nCREATE DATABASE yfcc100m;\nCREATE TABLE yfcc100m.tdata(\n pv_id BIGINT UNSIGNED NOT NULL UNIQUE PRIMARY KEY, /* Photo/video identifier */\n longitude FLOAT, /* Longitude */\n latitude FLOAT /* Latitude */\n);\nCOMMIT;\nPython scripts to import these data to DB looks like\nimport mysql.connector as db\ndef import_data(fname):\n dbconnection = db.connect(user='USERNAME', password='PASSWORD')\n cursor = dbconnection.cursor()\n with open(fname, 'r') as f:\n for line in f:\n items = line.split('\\t')\n assert(len(items) == 3)\n pv_id = items[0].strip()\n lon = items[1].strip()\n lat = items[2].strip()\n if len(lon) == 0 or len(lat) == 0:\n continue\n sqlstr = 'INSERT INTO yfcc100m.tdata VALUES (' + pv_id + ', ' + lon + ', ' + lat + ')' \n try:\n cursor.execute(sqlstr)\n except db.Error as error:\n print('ERROR: {}'.format(error))\n dbconnection.commit()\n dbconnection.close()\nPython scripts to search coordinates for photos looks like\nimport mysql.connector as db\ndef search_coords(fin, fout):\n dbconnection = db.connect(user='USERNAME', password='PASSWORD', database='yfcc100m')\n cursor = dbconnection.cursor()\n with open(fout, 'w') as fo:\n with open(fin, 'r') as fi:\n for line in fi:\n items = line.split(';')\n assert(len(items) == 7)\n 
photoID = items[0].strip()\n sqlstr = 'SELECT longitude, latitude FROM tdata WHERE pv_id = ' + photoID\n cursor.execute(sqlstr)\n for longitude, latitude in cursor:\n fo.write(photoID + ';' + str(longitude) + ';' + str(latitude) + '\\n')\n dbconnection.commit()\n dbconnection.close()\nThe above retrived results are available and will be downloaded automatically by executing scripts src/ijcai15_setup.py.\n<a id='sec2'></a>\n2. Trajectory Recommendation Problem\n<a id='sec2.1'></a>\n2.1 Definitions\nFor user $u$ and POI $p$, define\n\n\nTravel History: \n\\begin{equation}\nS_u = {(p_1, t_{p_1}^a, t_{p_1}^d), \\dots, (p_n, t_{p_n}^a, t_{p_n}^d)}\n\\end{equation}\nwhere $t_{p_i}^a$ is the arrival time and $t_{p_i}^d$ the departure time of user $u$ at POI $p_i$\n\n\nTravel Sequences: split $S_u$ if\n\\begin{equation}\n|t_{p_i}^d - t_{p_{i+1}}^a| > \\tau ~(\\text{e.g.}~ \\tau = 8 ~\\text{hours})\n\\end{equation}\n\n\nPOI Popularity:\n\\begin{equation}\nPop(p) = \\sum_{u \\in U} \\sum_{p_i \\in S_u} \\delta(p_i == p)\n\\end{equation}\n\n\nAverage POI Visit Duration: \n\\begin{equation}\n\\bar{V}(p) = \\frac{1}{N} \\sum_{u \\in U} \\sum_{p_i \\in S_u} (t_{p_i}^d - t_{p_i}^a) \\delta(p_i == p)\n\\end{equation}\nwhere $N$ is #visits of POI $p$ by all users\n\n\nDefine the interest of user $u$ in POI category $c$ as\n\n\nTime based User Interest:\n\\begin{equation}\nInt^{Time}(u, c) = \\sum_{p_i \\in S_u} \\frac{(t_{p_i}^d - t_{p_i}^a)}{\\bar{V}(p_i)} \\delta(Cat_{p_i} == c)\n\\end{equation}\nwhere $Cat_{p_i}$ is the category of POI $p_i$\nwe also tried this one\n\\begin{equation}\nInt^{Time}(u, c) = \\frac{1}{n} \\sum_{p_i \\in S_u} \\frac{(t_{p_i}^d - t_{p_i}^a)}{\\bar{V}(p_i)} \\delta(Cat_{p_i} == c)\n\\end{equation}\nwhere $n$ is the number of visit of category $c$ by user $u$ (i.e. 
the frequency based user interest defined below),\nswitch between the two definitions here.\n\n\nFrequency based User Interest:\n\\begin{equation}\nInt^{Freq}(u, c) = \\sum_{p_i \\in S_u} \\delta(Cat_{p_i} == c)\n\\end{equation}\n\n\nEvaluation metrics: Let $P_r$ be the set of POIs of the recommended trajectory and $P_v$ be the set of POIs visited in real-life travel sequence.\n\n\nTour Recall:\n\\begin{equation}\n\\text{Recall} = \\frac{|P_r \\cap P_v|}{|P_v|}\n\\end{equation}\n\n\nTour Precision:\n\\begin{equation}\n\\text{Precision} = \\frac{|P_r \\cap P_v|}{|P_r|}\n\\end{equation}\n\n\nTour F1-score:\n\\begin{equation}\n\\text{F1-score} = \\frac{2 \\times \\text{Precision} \\times \\text{Recall}}{\\text{Precision} + \\text{Recall}}\n\\end{equation}\n\n\n<a id='sec2.2'></a>\n2.2 Problem Formulation\nThe paper formulates the itinerary recommendation problem as an Integer Linear Programming (ILP) as follows.\nGiven a set of POIs, time budget $B$, the starting/destination POI $p_1$/$p_N$,\nrecommend a trajectory $(p_1,\\dots,p_N)$ to user $u$ that\n\\begin{equation}\n \\text{Maximize} \\sum_{i=2}^{N-1} \\sum_{j=2}^{N} x_{i,j} \\left(\\eta Int(u, Cat_{p_i}) + (1-\\eta) Pop(p_i)\\right)\n\\end{equation}\nSubject to\n\\begin{align}\n %x_{i,j} \\in {0, 1}, & \\forall i,j = 1,\\dots,N \\\n \\sum_{j=2}^N x_{1,j} &= \\sum_{i=1}^{N-1} x_{i,N} = 1 \\ %\\text{(starts/ends at $p_1$/$p_N$)} \\\n \\sum_{i=1}^{N-1} x_{i,k} &= \\sum_{j=2}^{N} x_{k,j} \\le 1, \\forall k = 2,\\dots,N-1 \\ %\\text{(connected, enters/leaves $p_k$ at most once)}\n %q_i \\in {2,\\dots,N}, & \\forall i = 2,\\dots,N \\\n q_i - q_j + 1 &\\le (N-1)(1-x_{i,j}), \\forall i,j = 2,\\dots,N \\ %\\text{sub-tour elimination} \\\n %\\sum_{i=1}^{N-1} \\sum_{j=2}^N x_{i,j} Cost(i,j) \\le B %\\text{(budget constraint)}\n %\\sum_{i=1}^{N-1} \\sum_{j=2}^N x_{i,j} \\left(T^{Travel}(p_i, p_j) + Int(u, Cat_{p_j}) * \\bar{V}(p_j) \\right) & \\le B\n \\sum_{i=1}^{N-1} \\sum_{j=2}^N x_{i,j} & \\left(Time(p_i, p_j) + Int(u, Cat_{p_j}) * \\bar{V}(p_j) \\right) \\le B\n\\end{align}\nWe use a Python library called PuLP from the COIN-OR project to model the integer programs.\nPuLP enables many LP solvers such as GLPK, CBC, CPLEX and Gurobi to be called to solve the model.\nIts comprehensive documentation is available here.\n<a id='sec3'></a>\n3. 
Prepare Data\n<a id='sec3.1'></a>\n3.1 Load Trajectory Data", "%matplotlib inline\n\nimport os\nimport re\nimport sys\nimport math\nimport pulp\nimport random\nimport pickle\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom datetime import datetime\n\nspeed = 4 # 4km/h\nrandom.seed(123456789)\n\ndata_dir = 'data/data-ijcai15'\n#fvisit = os.path.join(data_dir, 'userVisits-Osak.csv')\n#fcoord = os.path.join(data_dir, 'photoCoords-Osak.csv')\n#fvisit = os.path.join(data_dir, 'userVisits-Glas.csv')\n#fcoord = os.path.join(data_dir, 'photoCoords-Glas.csv')\n#fvisit = os.path.join(data_dir, 'userVisits-Edin.csv')\n#fcoord = os.path.join(data_dir, 'photoCoords-Edin.csv')\nfvisit = os.path.join(data_dir, 'userVisits-Toro.csv')\nfcoord = os.path.join(data_dir, 'photoCoords-Toro.csv')\n\nsuffix = fvisit.split('-')[-1].split('.')[0]\nfrecseq = os.path.join(data_dir, 'reccommendSeq-' + suffix + '.pkl')\n\nvisits = pd.read_csv(fvisit, sep=';')\nvisits.head()\n\ncoords = pd.read_csv(fcoord, sep=';')\ncoords.head()\n\n# merge data frames according to column 'photoID'\nassert(visits.shape[0] == coords.shape[0])\ntraj = pd.merge(visits, coords, on='photoID')\ntraj.head()\n\npd.DataFrame([traj[['photoLon', 'photoLat']].min(), traj[['photoLon', 'photoLat']].max(), \\\n traj[['photoLon', 'photoLat']].max() - traj[['photoLon', 'photoLat']].min()], \\\n index = ['min', 'max', 'range'])\n\nplt.figure(figsize=[15, 5])\nplt.xlabel('Longitude')\nplt.ylabel('Latitude')\nplt.scatter(traj['photoLon'], traj['photoLat'], marker='+')\n\nnum_photo = traj['photoID'].unique().shape[0]\nnum_user = traj['userID'].unique().shape[0]\nnum_seq = traj['seqID'].unique().shape[0]\nnum_poi = traj['poiID'].unique().shape[0]\npd.DataFrame([num_photo, num_user, num_seq, num_poi, num_photo/num_user, num_seq/num_user], \\\n index = ['#photo', '#user', '#seq', '#poi', '#photo/user', '#seq/user'], columns=[str(suffix)])", "<a id='sec3.2'></a>\n3.2 Compute POI Info\nCompute POI (Longitude, Latitude) as the average coordinates of the assigned photos.", "poi_coords = traj[['poiID', 'photoLon', 'photoLat']].groupby('poiID').agg(np.mean)\npoi_coords.reset_index(inplace=True)\npoi_coords.rename(columns={'photoLon':'poiLon', 'photoLat':'poiLat'}, inplace=True)\npoi_coords.head()", "Extract POI category and visiting frequency.", "poi_catfreq = traj[['poiID', 'poiTheme', 'poiFreq']].groupby('poiID').first()\npoi_catfreq.reset_index(inplace=True)\npoi_catfreq.head()\n\npoi_all = pd.merge(poi_catfreq, poi_coords, on='poiID')\npoi_all.set_index('poiID', inplace=True)\npoi_all.head()", "<a id='sec3.3'></a>\n3.3 Construct Travelling Sequences", "seq_all = traj[['userID', 'seqID', 'poiID', 'dateTaken']].copy()\\\n .groupby(['userID', 'seqID', 'poiID']).agg([np.min, np.max])\nseq_all.head()\n\nseq_all.columns = seq_all.columns.droplevel()\nseq_all.head()\n\nseq_all.reset_index(inplace=True)\nseq_all.head()\n\nseq_all.rename(columns={'amin':'arrivalTime', 'amax':'departureTime'}, inplace=True)\nseq_all['poiDuration(sec)'] = seq_all['departureTime'] - seq_all['arrivalTime']\nseq_all.head()\n\n#tseq = seq_all[['poiID', 'poiDuration(sec)']].copy().groupby('poiID').agg(np.mean)\n#tseq\n\nseq_user = seq_all[['seqID', 'userID']].copy()\nseq_user = seq_user.groupby('seqID').first()\nseq_user.head()\n\nseq_len = seq_all[['userID', 'seqID', 'poiID']].copy()\nseq_len = seq_len.groupby(['userID', 'seqID']).agg(np.size)\nseq_len.reset_index(inplace=True)\nseq_len.rename(columns={'poiID':'seqLen'}, inplace=True)\n#seq_len.head()\nax = 
seq_len['seqLen'].hist(bins=20)\nax.set_yscale('log')", "<a id='sec3.4'></a>\n3.4 Transition Matrix\n3.4.1 Transition Matrix for Time at POI", "users = seq_all['userID'].unique()\ntransmat_time = pd.DataFrame(np.zeros((len(users), poi_all.index.shape[0]), dtype=np.float64), \\\n index=users, columns=poi_all.index)\n\npoi_time = seq_all[['userID', 'poiID', 'poiDuration(sec)']].copy().groupby(['userID', 'poiID']).agg(np.sum)\npoi_time.head()\n\nfor idx in poi_time.index:\n transmat_time.loc[idx[0], idx[1]] += poi_time.loc[idx].iloc[0]\nprint(transmat_time.shape)\ntransmat_time.head()\n\n# add 1 (sec) to each cell as a smooth factor\nlog10_transmat_time = np.log10(transmat_time.copy() + 1)\nprint(log10_transmat_time.shape)\nlog10_transmat_time.head()", "3.4.2 Transition Matrix for POI Category", "poi_cats = traj['poiTheme'].unique().tolist()\npoi_cats.sort()\npoi_cats\n\nncats = len(poi_cats)\ntransmat_cat = pd.DataFrame(data=np.zeros((ncats, ncats), dtype=np.float64), index=poi_cats, columns=poi_cats)\n\nfor seqid in seq_all['seqID'].unique().tolist():\n seqi = seq_all[seq_all['seqID'] == seqid].copy()\n seqi.sort(columns=['arrivalTime'], ascending=True, inplace=True)\n for j in range(len(seqi.index)-1):\n idx1 = seqi.index[j]\n idx2 = seqi.index[j+1]\n poi1 = seqi.loc[idx1, 'poiID']\n poi2 = seqi.loc[idx2, 'poiID']\n cat1 = poi_all.loc[poi1, 'poiTheme']\n cat2 = poi_all.loc[poi2, 'poiTheme']\n transmat_cat.loc[cat1, cat2] += 1\ntransmat_cat", "Normalise each row to get an estimate of transition probabilities (MLE).", "for r in transmat_cat.index:\n rowsum = transmat_cat.ix[r].sum()\n if rowsum == 0: continue # deal with lack of data\n transmat_cat.loc[r] /= rowsum\ntransmat_cat", "Compute the log of transition probabilities with smooth factor $\\epsilon=10^{-12}$.", "log10_transmat_cat = np.log10(transmat_cat.copy() + 1e-12)\nlog10_transmat_cat", "<a id='sec4'></a>\n4. Trajectory Recommendation -- Approach I\nA different leave-one-out cross-validation approach:\n - For each user, choose one trajectory (with length >= 3) uniformly at random from all of his/her trajectories \n as the validation trajectory\n - Use all other trajectories (of all users) to 'train' (i.e. 
compute metrics for the ILP formulation)\n<a id='sec4.1'></a>\n4.1 Choose Cross Validation Sequences", "cv_seqs = seq_all[['userID', 'seqID', 'poiID']].copy().groupby(['userID', 'seqID']).agg(np.size)\ncv_seqs.rename(columns={'poiID':'seqLen'}, inplace=True)\ncv_seqs = cv_seqs[cv_seqs['seqLen'] > 2]\ncv_seqs.reset_index(inplace=True)\nprint(cv_seqs.shape)\ncv_seqs.head()\n\ncv_seq_set = []\n\n# choose one sequence for each user in cv_seqs uniformly at random\nfor user in cv_seqs['userID'].unique():\n seqlist = cv_seqs[cv_seqs['userID'] == user]['seqID'].tolist()\n seqid = random.choice(seqlist)\n cv_seq_set.append(seqid)\n\nlen(cv_seq_set)", "<a id='sec4.2'></a>\n4.2 Recommendation by Solving ILPs", "def calc_poi_info(seqid_set, seq_all, poi_all):\n poi_info = seq_all[seq_all['seqID'].isin(seqid_set)][['poiID', 'poiDuration(sec)']].copy()\n poi_info = poi_info.groupby('poiID').agg([np.mean, np.size])\n poi_info.columns = poi_info.columns.droplevel()\n poi_info.reset_index(inplace=True)\n poi_info.rename(columns={'mean':'avgDuration(sec)', 'size':'popularity'}, inplace=True)\n poi_info.set_index('poiID', inplace=True)\n poi_info['poiTheme'] = poi_all.loc[poi_info.index, 'poiTheme']\n poi_info['poiLon'] = poi_all.loc[poi_info.index, 'poiLon']\n poi_info['poiLat'] = poi_all.loc[poi_info.index, 'poiLat']\n return poi_info.copy()\n\ndef calc_user_interest(seqid_set, seq_all, poi_all, poi_info):\n user_interest = seq_all[seq_all['seqID'].isin(seqid_set)][['userID', 'poiID', 'poiDuration(sec)']].copy()\n user_interest['timeRatio'] = [poi_info.loc[x, 'avgDuration(sec)'] for x in user_interest['poiID']]\n user_interest['timeRatio'] = user_interest['poiDuration(sec)'] / user_interest['timeRatio']\n user_interest['poiTheme'] = [poi_all.loc[x, 'poiTheme'] for x in user_interest['poiID']]\n user_interest.drop(['poiID', 'poiDuration(sec)'], axis=1, inplace=True)\n user_interest = user_interest.groupby(['userID', 'poiTheme']).agg([np.sum, np.size]) # the sum\n user_interest.columns = user_interest.columns.droplevel()\n user_interest.rename(columns={'sum':'timeBased', 'size':'freqBased'}, inplace=True)\n user_interest.reset_index(inplace=True)\n user_interest.set_index(['userID', 'poiTheme'], inplace=True)\n return user_interest.copy()\n\ndef calc_dist(longitude1, latitude1, longitude2, latitude2):\n \"\"\"Calculate the distance (unit: km) between two places on earth\"\"\"\n # convert degrees to radians\n lon1 = math.radians(longitude1)\n lat1 = math.radians(latitude1)\n lon2 = math.radians(longitude2)\n lat2 = math.radians(latitude2)\n radius = 6371.009 # mean earth radius is 6371.009km, en.wikipedia.org/wiki/Earth_radius#Mean_radius\n # The haversine formula, en.wikipedia.org/wiki/Great-circle_distance\n dlon = math.fabs(lon1 - lon2)\n dlat = math.fabs(lat1 - lat2)\n return 2 * radius * math.asin(math.sqrt(\\\n (math.sin(0.5*dlat))**2 + math.cos(lat1) * math.cos(lat2) * (math.sin(0.5*dlon))**2 ))\n\ndef calc_dist_mat(poi_info):\n poi_dist_mat = pd.DataFrame(data=np.zeros((poi_info.shape[0], poi_info.shape[0]), dtype=np.float64), \\\n index=poi_info.index, columns=poi_info.index)\n for i in range(poi_info.index.shape[0]):\n for j in range(i+1, poi_info.index.shape[0]):\n r = poi_info.index[i]\n c = poi_info.index[j]\n dist = calc_dist(poi_info.loc[r, 'poiLon'], poi_info.loc[r, 'poiLat'], \\\n poi_info.loc[c, 'poiLon'], poi_info.loc[c, 'poiLat'])\n assert(dist > 0.)\n poi_dist_mat.loc[r, c] = dist\n poi_dist_mat.loc[c, r] = dist\n return poi_dist_mat\n\ndef calc_seq_budget(user, seq, poi_info, 
poi_dist_mat, user_interest):\n \"\"\"Calculate the travel budget for the given travelling sequence\"\"\"\n assert(len(seq) > 1)\n budget = 0. # travel budget\n for i in range(len(seq)-1):\n px = seq[i]\n py = seq[i+1]\n assert(px in poi_info.index)\n assert(py in poi_info.index)\n budget += 60 * 60 * poi_dist_mat.loc[px, py] / speed # travel time (seconds)\n caty = poi_info.loc[py, 'poiTheme']\n avgtime = poi_info.loc[py, 'avgDuration(sec)']\n userint = 0\n if (user, caty) in user_interest.index: userint = user_interest.loc[user, caty] # for testing set\n budget += userint * avgtime # expected visit duration\n return budget\n\ndef recommend_ILP(user, budget, startPoi, endPoi, poi_info, poi_dist_mat, eta, speed, user_interest):\n assert(0 <= eta <= 1); assert(budget > 0)\n p0 = str(startPoi); pN = str(endPoi); N = poi_info.index.shape[0]\n \n # REF: pythonhosted.org/PuLP/index.html\n pois = [str(p) for p in poi_info.index] # create a string list for each POI\n prob = pulp.LpProblem('TourRecommendation', pulp.LpMaximize) # create problem\n # visit_i_j = 1 means POI i and j are visited in sequence\n visit_vars = pulp.LpVariable.dicts('visit', (pois, pois), 0, 1, pulp.LpInteger) \n # a dictionary contains all dummy variables\n dummy_vars = pulp.LpVariable.dicts('u', [x for x in pois if x != p0], 2, N, pulp.LpInteger)\n\n # add objective\n objlist = []\n for pi in [x for x in pois if x not in {p0, pN}]:\n for pj in [y for y in pois if y != p0]:\n cati = poi_info.loc[int(pi), 'poiTheme']\n userint = 0; poipop = 0\n if (user, cati) in user_interest.index: userint = user_interest.loc[user, cati]\n if int(pi) in poi_info.index: poipop = poi_info.loc[int(pi), 'popularity']\n objlist.append(visit_vars[pi][pj] * (eta * userint + (1.-eta) * poipop))\n prob += pulp.lpSum(objlist), 'Objective'\n \n # add constraints, each constraint should be in ONE line\n prob += pulp.lpSum([visit_vars[p0][pj] for pj in pois if pj != p0]) == 1, 'StartAtp0'\n prob += pulp.lpSum([visit_vars[pi][pN] for pi in pois if pi != pN]) == 1, 'EndAtpN'\n for pk in [x for x in pois if x not in {p0, pN}]:\n prob += pulp.lpSum([visit_vars[pi][pk] for pi in pois if pi != pN]) == \\\n pulp.lpSum([visit_vars[pk][pj] for pj in pois if pj != p0]), 'Connected_' + pk\n prob += pulp.lpSum([visit_vars[pi][pk] for pi in pois if pi != pN]) <= 1, 'LeaveAtMostOnce_' + pk\n prob += pulp.lpSum([visit_vars[pk][pj] for pj in pois if pj != p0]) <= 1, 'EnterAtMostOnce_' + pk\n \n costlist = []\n for pi in [x for x in pois if x != pN]:\n for pj in [y for y in pois if y != p0]:\n catj = poi_info.loc[int(pj), 'poiTheme']\n traveltime = 60 * 60 * poi_dist_mat.loc[int(pi), int(pj)] / speed # seconds\n userint = 0; avgtime = 0\n if (user, catj) in user_interest.index: userint = user_interest.loc[user, catj]\n if int(pj) in poi_info.index: avgtime = poi_info.loc[int(pj), 'avgDuration(sec)']\n costlist.append(visit_vars[pi][pj] * (traveltime + userint * avgtime))\n prob += pulp.lpSum(costlist) <= budget, 'WithinBudget'\n \n for pi in [x for x in pois if x != p0]:\n for pj in [y for y in pois if y != p0]:\n prob += dummy_vars[pi] - dummy_vars[pj] + 1 <= (N - 1) * (1 - visit_vars[pi][pj]), \\\n 'SubTourElimination_' + str(pi) + '_' + str(pj)\n\n # solve problem\n #prob.solve() # using PuLP's default solver\n #prob.solve(pulp.PULP_CBC_CMD(options=['-threads', '8', '-strategy', '1', '-maxIt', '2000000'])) # CBC\n #prob.solve(pulp.GLPK_CMD()) # GLPK\n gurobi_options = [('TimeLimit', '7200'), ('Threads', '18'), ('NodefileStart', '0.9'), ('Cuts', '2')]\n 
prob.solve(pulp.GUROBI_CMD(options=gurobi_options)) # GUROBI\n \n print('status:', pulp.LpStatus[prob.status]) # print the status of the solution\n #print('obj:', pulp.value(prob.objective)) # print the optimised objective function value\n #for v in prob.variables(): # print each variable with it's resolved optimum value\n # print(v.name, '=', v.varValue)\n # if v.varValue != 0: print(v.name, '=', v.varValue)\n\n visit_mat = pd.DataFrame(data=np.zeros((len(pois), len(pois)), dtype=np.float), index=pois, columns=pois)\n for pi in pois:\n for pj in pois: visit_mat.loc[pi, pj] = visit_vars[pi][pj].varValue\n\n # build the recommended trajectory\n recseq = [p0]\n while True:\n pi = recseq[-1]\n pj = visit_mat.loc[pi].idxmax()\n assert(round(visit_mat.loc[pi, pj]) == 1)\n recseq.append(pj); \n #print(recseq); sys.stdout.flush()\n if pj == pN: return [int(x) for x in recseq]\n\ncv_seq_dict = dict()\nrec_seq_dict = dict()\n\nfor seqid in cv_seq_set:\n seqi = seq_all[seq_all['seqID'] == seqid].copy()\n seqi.sort(columns=['arrivalTime'], ascending=True, inplace=True)\n cv_seq_dict[seqid] = seqi['poiID'].tolist()\n\neta = 0.5\ntime_based = True\n\ndoCompute = True\n\nif os.path.exists(frecseq):\n seq_dict = pickle.load(open(frecseq, 'rb'))\n if (np.array(sorted(cv_seq_dict.keys())) == np.array(sorted(seq_dict.keys()))).all():\n rec_seq_dict = seq_dict\n doCompute = False\n\nif doCompute:\n n = 1\n print('#sequences', len(cv_seq_set))\n for seqid, seq in cv_seq_dict.items():\n train_set = [x for x in seq_all['seqID'].unique() if x != seqid]\n poi_info = calc_poi_info(train_set, seq_all, poi_all)\n user_interest = calc_user_interest(train_set, seq_all, poi_all, poi_info)\n poi_dist_mat = calc_dist_mat(poi_info)\n user = seq_user.loc[seqid].iloc[0]\n the_user_interest = None\n if time_based == True: the_user_interest = user_interest['timeBased'].copy()\n else: the_user_interest = user_interest['freqBased'].copy()\n budget = calc_seq_budget(user, seq, poi_info, poi_dist_mat, the_user_interest)\n print(n, 'sequence', seq, ', user', user, ', budget', budget); sys.stdout.flush()\n\n recseq = recommend_ILP(user, budget, seq[0], seq[-1], poi_info, poi_dist_mat, eta, speed, the_user_interest)\n rec_seq_dict[seqid] = recseq\n print('->', recseq, '\\n'); sys.stdout.flush()\n n += 1\n \n pickle.dump(rec_seq_dict, open(frecseq, 'wb'))", "<a id='sec4.3'></a>\n4.3 Evaluation\nResults from paper (Toronto data, time-based uesr interest, eta=0.5):\n - Recall: 0.779&plusmn;0.10\n - Precision: 0.706&plusmn;0.013\n - F1-score: 0.732&plusmn;0.012", "def calc_recall_precision_F1score(seq_act, seq_rec):\n assert(len(seq_act) > 0)\n assert(len(seq_rec) > 0)\n actset = set(seq_act)\n recset = set(seq_rec)\n intersect = actset & recset\n recall = len(intersect) / len(seq_act)\n precision = len(intersect) / len(seq_rec)\n F1score = 2. * precision * recall / (precision + recall)\n return recall, precision, F1score\n\nrecall = []\nprecision = []\nF1score = []\n\nfor seqid in rec_seq_dict.keys():\n assert(seqid in cv_seq_dict)\n seq = cv_seq_dict[seqid]\n recseq = rec_seq_dict[seqid]\n r, p, F1 = calc_recall_precision_F1score(seq, recseq)\n recall.append(r)\n precision.append(p)\n F1score.append(F1)\n\nprint('Recall:', np.mean(recall), np.std(recall))\nprint('Precision:', np.mean(precision), np.std(precision))\nprint('F1-score:', np.mean(F1score), np.std(F1score))", "<a id='sec5'></a>\n5. 
Trajectory Recommendation -- Approach II\nThe paper stated \"We evaluate PERSTOUR and the baselines using leave-one-out cross-validation [Kohavi,1995] (i.e., when evaluating a specific travel sequence of a user, we use this user's other travel sequences for training our algorithms\"\nWhile it's not clear if this means when evaluate a travel sequence for a user,\n - all other sequences of this user (except the one for validation) as well as all sequences of other users are used for training, (i.e. the approach in the section above) or\n - use leave-one-out for each user to construct a testing set (the approach in this section)\n<a id='sec5.1'></a>\n5.1 Choose Travelling Sequences for Training and Testing\nTrajectories with length greater than 3 are used in the paper.", "seq_ge3 = seq_len[seq_len['seqLen'] >= 3]\nseq_ge3['seqLen'].hist(bins=20)", "Split travelling sequences into training set and testing set using leave-one-out for each user.\nFor testing purpose, users with less than two travelling sequences are not considered in this experiment.", "train_set = []\ntest_set = []\n\nuser_seqs = seq_ge3[['userID', 'seqID']].groupby('userID')\n\nfor user, indices in user_seqs.groups.items():\n if len(indices) < 2: continue\n idx = random.choice(indices)\n test_set.append(seq_ge3.loc[idx, 'seqID'])\n train_set.extend([seq_ge3.loc[x, 'seqID'] for x in indices if x != idx])\n\nprint('#seq in trainset:', len(train_set))\nprint('#seq in testset:', len(test_set))\nseq_ge3[seq_ge3['seqID'].isin(train_set)]['seqLen'].hist(bins=20)\n#data = np.array(seqs1['seqLen'])\n#hist, bins = np.histogram(data, bins=3)\n#print(hist)", "Sanity check: the total number of travelling sequences used in training and testing", "seq_exp = seq_ge3[['userID', 'seqID']].copy()\nseq_exp = seq_exp.groupby('userID').agg(np.size)\nseq_exp.reset_index(inplace=True)\nseq_exp.rename(columns={'seqID':'#seq'}, inplace=True)\nseq_exp = seq_exp[seq_exp['#seq'] > 1] # user with more than 1 sequences\nprint('total #seq for experiment:', seq_exp['#seq'].sum())\n#seq_exp.head()", "<a id='sec5.2'></a>\n5.2 Compute POI popularity and user interest using training set\nCompute average POI visit duration, POI popularity as defined at the top of the notebook.", "poi_info = seq_all[seq_all['seqID'].isin(train_set)]\npoi_info = poi_info[['poiID', 'poiDuration(sec)']].copy()\n\npoi_info = poi_info.groupby('poiID').agg([np.mean, np.size])\npoi_info.columns = poi_info.columns.droplevel()\npoi_info.reset_index(inplace=True)\npoi_info.rename(columns={'mean':'avgDuration(sec)', 'size':'popularity'}, inplace=True)\npoi_info.set_index('poiID', inplace=True)\nprint('#poi:', poi_info.shape[0])\nif poi_info.shape[0] < poi_all.shape[0]:\n extra_index = list(set(poi_all.index) - set(poi_info.index))\n extra_poi = pd.DataFrame(data=np.zeros((len(extra_index), 2), dtype=np.float64), \\\n index=extra_index, columns=['avgDuration(sec)', 'popularity'])\n poi_info = poi_info.append(extra_poi)\n print('#poi after extension:', poi_info.shape[0])\npoi_info['poiTheme'] = poi_all.loc[poi_info.index, 'poiTheme']\npoi_info['poiLon'] = poi_all.loc[poi_info.index, 'poiLon']\npoi_info['poiLat'] = poi_all.loc[poi_info.index, 'poiLat']\npoi_info.head()", "Compute time/frequency based user interest as defined at the \ntop of the notebook.", "user_interest = seq_all[seq_all['seqID'].isin(train_set)]\nuser_interest = user_interest[['userID', 'poiID', 'poiDuration(sec)']].copy()\n\nuser_interest['timeRatio'] = [poi_info.loc[x, 'avgDuration(sec)'] for x in 
user_interest['poiID']]\n#user_interest[user_interest['poiID'].isin({9, 10, 12, 18, 20, 26})]\n#user_interest[user_interest['timeRatio'] < 1]\nuser_interest.head()\n\nuser_interest['timeRatio'] = user_interest['poiDuration(sec)'] / user_interest['timeRatio']\nuser_interest.head()\n\nuser_interest['poiTheme'] = [poi_all.loc[x, 'poiTheme'] for x in user_interest['poiID']]\nuser_interest.drop(['poiID', 'poiDuration(sec)'], axis=1, inplace=True)", "<a id='switch'></a>\nSum defined in paper, but sum of (time ratio) * (avg duration) will become extremely large in some cases, which is unrealistic, switch between the two to have a look at the effects.", "#user_interest = user_interest.groupby(['userID', 'poiTheme']).agg([np.sum, np.size]) # the sum\nuser_interest = user_interest.groupby(['userID', 'poiTheme']).agg([np.mean, np.size]) # try the mean value\n\nuser_interest.columns = user_interest.columns.droplevel()\n#user_interest.rename(columns={'sum':'timeBased', 'size':'freqBased'}, inplace=True)\nuser_interest.rename(columns={'mean':'timeBased', 'size':'freqBased'}, inplace=True)\nuser_interest.reset_index(inplace=True)\nuser_interest.set_index(['userID', 'poiTheme'], inplace=True)\nuser_interest.head()\n\n#user_interest.columns.shape[0]", "<a id='sec5.3'></a>\n5.3 Generate ILP", "poi_dist_mat = pd.DataFrame(data=np.zeros((poi_info.shape[0], poi_info.shape[0]), dtype=np.float64), \\\n index=poi_info.index, columns=poi_info.index)\nfor i in range(poi_info.index.shape[0]):\n for j in range(i+1, poi_info.index.shape[0]):\n r = poi_info.index[i]\n c = poi_info.index[j]\n dist = calc_dist(poi_info.loc[r, 'poiLon'], poi_info.loc[r, 'poiLat'], \\\n poi_info.loc[c, 'poiLon'], poi_info.loc[c, 'poiLat'])\n assert(dist > 0.)\n poi_dist_mat.loc[r, c] = dist\n poi_dist_mat.loc[c, r] = dist\n\ndef generate_ILP(lpFilename, user, budget, startPoi, endPoi, poi_info, poi_dist_mat, eta, speed, user_interest):\n \"\"\"Recommend a trajectory given an existing travel sequence S_N, \n the first/last POI and travel budget calculated based on S_N\n \"\"\"\n assert(0 <= eta <= 1)\n assert(budget > 0)\n p0 = str(startPoi)\n pN = str(endPoi)\n N = poi_info.index.shape[0]\n \n # The MIP problem\n # REF: pythonhosted.org/PuLP/index.html\n # create a string list for each POI\n pois = [str(p) for p in poi_info.index]\n\n # create problem\n prob = pulp.LpProblem('TourRecommendation', pulp.LpMaximize)\n\n # visit_i_j = 1 means POI i and j are visited in sequence\n visit_vars = pulp.LpVariable.dicts('visit', (pois, pois), 0, 1, pulp.LpInteger)\n\n # a dictionary contains all dummy variables\n dummy_vars = pulp.LpVariable.dicts('u', [x for x in pois if x != p0], 2, N, pulp.LpInteger)\n\n # add objective\n objlist = []\n for pi in [x for x in pois if x not in {p0, pN}]:\n for pj in [y for y in pois if y != p0]:\n cati = poi_info.loc[int(pi), 'poiTheme']\n userint = 0\n if (user, cati) in user_interest.index: userint = user_interest.loc[user, cati]\n objlist.append(visit_vars[pi][pj] * (eta * userint + (1.-eta) * poi_info.loc[int(pi), 'popularity']))\n prob += pulp.lpSum(objlist), 'Objective'\n # add constraints\n # each constraint should be in ONE line\n prob += pulp.lpSum([visit_vars[p0][pj] for pj in pois if pj != p0]) == 1, 'StartAtp0' # starts at the first POI\n prob += pulp.lpSum([visit_vars[pi][pN] for pi in pois if pi != pN]) == 1, 'EndAtpN' # ends at the last POI\n for pk in [x for x in pois if x not in {p0, pN}]:\n prob += pulp.lpSum([visit_vars[pi][pk] for pi in pois if pi != pN]) == \\\n pulp.lpSum([visit_vars[pk][pj] 
for pj in pois if pj != p0]), \\\n 'Connected_' + pk # the itinerary is connected\n prob += pulp.lpSum([visit_vars[pi][pk] for pi in pois if pi != pN]) <= 1, \\\n 'LeaveAtMostOnce_' + pk # LEAVE POIk at most once\n prob += pulp.lpSum([visit_vars[pk][pj] for pj in pois if pj != p0]) <= 1, \\\n 'EnterAtMostOnce_' + pk # ENTER POIk at most once\n \n # travel cost within budget\n costlist = []\n for pi in [x for x in pois if x != pN]:\n for pj in [y for y in pois if y != p0]:\n catj = poi_info.loc[int(pj), 'poiTheme']\n traveltime = 60 * 60 * poi_dist_mat.loc[int(pi), int(pj)] / speed # seconds\n userint = 0\n if (user, catj) in user_interest.index: userint = user_interest.loc[user, catj]\n costlist.append(visit_vars[pi][pj] * (traveltime + userint * poi_info.loc[int(pj), 'avgDuration(sec)']))\n prob += pulp.lpSum(costlist) <= budget, 'WithinBudget'\n \n for pi in [x for x in pois if x != p0]:\n for pj in [y for y in pois if y != p0]:\n prob += dummy_vars[pi] - dummy_vars[pj] + 1 <= \\\n (N - 1) * (1 - visit_vars[pi][pj]), \\\n 'SubTourElimination_' + str(pi) + '_' + str(pj) # TSP sub-tour elimination\n\n # write problem data to an .lp file\n prob.writeLP(lpFilename)", "5.3.1 Generate ILPs for training set", "def extract_seq(seqid_set, seq_all):\n \"\"\"Extract the actual sequences (i.e. a list of POI) from a set of sequence ID\"\"\"\n seq_dict = dict()\n for seqid in seqid_set:\n seqi = seq_all[seq_all['seqID'] == seqid].copy()\n seqi.sort(columns=['arrivalTime'], ascending=True, inplace=True)\n seq_dict[seqid] = seqi['poiID'].tolist()\n return seq_dict\n\ntrain_seqs = extract_seq(train_set, seq_all)\n\nlpDir = os.path.join(data_dir, 'lp_' + suffix)\nif not os.path.exists(lpDir):\n print('Please create directory \"' + lpDir + '\"')\n\neta = 0.5\n#eta = 1\ntime_based = True\n\nfor seqid in sorted(train_seqs.keys()):\n if not os.path.exists(lpDir): \n print('Please create directory \"' + lpDir + '\"')\n break\n seq = train_seqs[seqid]\n lpFile = os.path.join(lpDir, str(seqid) + '.lp')\n user = seq_user.loc[seqid].iloc[0]\n the_user_interest = None\n if time_based == True:\n the_user_interest = user_interest['timeBased'].copy()\n else: \n the_user_interest = user_interest['freqBased'].copy()\n budget = calc_seq_budget(user, seq, poi_info, poi_dist_mat, the_user_interest)\n print('generating ILP', lpFile, 'for user', user, 'sequence', seq, 'budget', round(budget, 2))\n generate_ILP(lpFile, user, budget, seq[0], seq[-1], poi_info, poi_dist_mat, eta, speed, the_user_interest)", "5.3.2 Generate ILPs for testing set", "test_seqs = extract_seq(test_set, seq_all)\n\nfor seqid in sorted(test_seqs.keys()):\n if not os.path.exists(lpDir): \n print('Please create directory \"' + lpDir + '\"')\n break\n seq = test_seqs[seqid]\n lpFile = os.path.join(lpDir, str(seqid) + '.lp')\n user = seq_user.loc[seqid].iloc[0]\n the_user_interest = None\n if time_based == True:\n the_user_interest = user_interest['timeBased'].copy()\n else: \n the_user_interest = user_interest['freqBased'].copy()\n budget = calc_seq_budget(user, seq, poi_info, poi_dist_mat, the_user_interest)\n print('generating ILP', lpFile, 'for user', user, 'sequence', seq, 'budget', round(budget, 2))\n generate_ILP(lpFile, user, budget, seq[0], seq[-1], poi_info, poi_dist_mat, eta, speed, the_user_interest)", "<a id='sec5.4'></a>\n5.4 Evaluation", "def load_solution_gurobi(fsol, startPoi, endPoi):\n \"\"\"Load recommended itinerary from MIP solution file by GUROBI\"\"\"\n seqterm = [] \n with open(fsol, 'r') as f:\n for line in f:\n if 
re.search('^visit_', line): # e.g. visit_0_7 1\\n\n item = line.strip().split(' ') # visit_21_16 1.56406801399038e-09\\n\n if round(float(item[1])) == 1:\n fromto = item[0].split('_')\n seqterm.append((int(fromto[1]), int(fromto[2])))\n p0 = startPoi\n pN = endPoi\n recseq = [p0]\n while True:\n px = recseq[-1]\n for term in seqterm:\n if term[0] == px:\n recseq.append(term[1])\n if term[1] == pN: \n return recseq\n else:\n seqterm.remove(term)\n break", "5.4.1 Evaluation on training set", "train_seqs_rec = dict()\n\nsolDir = os.path.join(data_dir, os.path.join('lp_' + suffix, 'eta05_time'))\n#solDir = os.path.join(data_dir, os.path.join('lp_' + suffix, 'eta10_time'))\nif not os.path.exists(solDir):\n print('Directory for solution files', solDir, 'does not exist.')\n\nfor seqid in sorted(train_seqs.keys()):\n if not os.path.exists(solDir):\n print('Directory for solution files', solDir, 'does not exist.')\n break\n seq = train_seqs[seqid]\n solFile = os.path.join(solDir, str(seqid) + '.lp.sol')\n recseq = load_solution_gurobi(solFile, seq[0], seq[-1])\n train_seqs_rec[seqid] = recseq\n print('Sequence', seqid, 'Actual:', seq, ', Recommended:', recseq)\n\nrecall = []\nprecision = []\nF1score = []\nfor seqid in train_seqs.keys():\n r, p, F1 = calc_recall_precision_F1score(train_seqs[seqid], train_seqs_rec[seqid])\n recall.append(r)\n precision.append(p)\n F1score.append(F1)\n\nprint('Recall:', round(np.mean(recall), 2), ',', round(np.std(recall), 2))\nprint('Precision:', round(np.mean(precision), 2), ',', round(np.std(recall), 2))\nprint('F1-score:', round(np.mean(F1score), 2), ',', round(np.std(recall), 2))", "5.4.2 Evaluation on testing set\nResults from paper (Toronto data, time-based uesr interest, eta=0.5):\n - Recall: 0.779&plusmn;0.10\n - Precision: 0.706&plusmn;0.013\n - F1-score: 0.732&plusmn;0.012", "test_seqs_rec = dict()\n\nsolDirTest = os.path.join(data_dir, os.path.join('lp_' + suffix, 'eta05_time.test'))\nif not os.path.exists(solDirTest):\n print('Directory for solution files', solDirTest, 'does not exist.')\n\nfor seqid in sorted(test_seqs.keys()):\n if not os.path.exists(solDirTest):\n print('Directory for solution files', solDirTest, 'does not exist.')\n break\n seq = test_seqs[seqid]\n solFile = os.path.join(solDirTest, str(seqid) + '.lp.sol')\n recseq = load_solution_gurobi(solFile, seq[0], seq[-1])\n test_seqs_rec[seqid] = recseq\n print('Sequence', seqid, 'Actual:', seq, ', Recommended:', recseq)\n\nrecallT = []\nprecisionT = []\nF1scoreT = []\nfor seqid in test_seqs.keys():\n r, p, F1 = calc_recall_precision_F1score(test_seqs[seqid], test_seqs_rec[seqid])\n recallT.append(r)\n precisionT.append(p)\n F1scoreT.append(F1)\n\nprint('Recall:', round(np.mean(recallT), 2), ',', round(np.std(recallT), 2))\nprint('Precision:', round(np.mean(precisionT), 2), ',', round(np.std(recallT), 2))\nprint('F1-score:', round(np.mean(F1scoreT), 2), ',', round(np.std(recallT), 2))", "<a id='sec6'></a>\n6. Issues &#8648;\n\nLarge budget leads to unrealistic recommended trajectory.\nlarge budget mainly comes from user interest times the average POI visit duration, since user interest is cumulative (i.e. sum, as defined at the top of the notebook)\n\nwe try to use averaging instead of cumulative (which seems more realistic, max budget 15 hours vs. max 170 hours).\n\n\nIs it necessary to consider visiting a certain POI more than one times? 
This paper ignores this setting.\n\n\nDealing with the edge case $\bar{V}(p) = 0$\n\n\nIt appears at POIs where each visiting user took just one photo (including users who took/uploaded two or more photos with the same timestamp); this case does appear in this dataset.\nFor all users $U$, POI $p$, arrival time $t_p^a$ and departure time $t_p^d$, the Average POI Visit Duration is defined as: \n$\bar{V}(p) = \frac{1}{n}\sum_{u \in U}\sum_{p_x \in S_u}(t_{p_x}^d - t_{p_x}^a)\delta(p_x = p), \forall p \in P$\nand the Time-based User Interest is defined as:\n$Int_u^{Time}(c) = \sum_{p_x \in S_u} \frac{t_{p_x}^d - t_{p_x}^a}{\bar{V}(p_x)} \delta(Cat_{p_x} = c), \forall c \in C$\nUp to now, two strategies have been tried:\n * let the term $\frac{t_{p_x}^d - t_{p_x}^a}{\bar{V}(p_x)} = K$, where $K$ is a constant (e.g. 2). This approach seems to work, but the effects of different constants should be tested\n * discard all photo records in the dataset related to the edge case. This approach suffers from throwing away too much information and makes the useful dataset too small (at about 1% of the original dataset sometimes)\n\nCBC is too slow for large sequences (length >= 4)\nuse Gurobi on CECS servers" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dwaithe/ONBI_image_analysis
day2_colocalisation/2015 Correlation and Colocalisation practical.ipynb
gpl-2.0
[ "Introduction to Correlation and Colocalisation with Python.\nReading images\nDominic Waithe 2015 (c)\nExercise: See the similarities between the dot-product and correlation. Apply correlation to images to obtain a metric of colocalisation/similarity. Use colocalisation to assess the quality of registration.\nWe start with two lists of numbers (or two vectors or arrays as they are known). Please find the dot product of the two vectors. The dot product formula is a follows:<img src=\"dotProduct.png\">\nIn python there is more than one way to find the dot product of two vectors. It can be performed using 'for loops' or through vectorised notation", "#This line is very important: (It turns on the inline visuals!)\n%pylab inline\na = [2,9,32,12,14,6,9,23,4,5,13,6,7,92,21,45];\nb = [7,21,4,2,92,9,9,6,13,12,45,5,6,23,14,32];\n\n#Please calculate the dot product of the vectors 'a' and 'b'.\n#You may use any method you like. If get stuck. Check:\n#http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html\n#If you rearrange the numbers in 'b', what sequence will give\n#the highest dot-product magnitude?\n", "The Pearson's test\nExercise: See the similarities\nThe above example shows you how two number sequences can be compared with nothing more complicated than by using the dot product. This works as long as the sequences comprise of the same numbers but in a shuffled order. To compare different sequences with the original we normalise by the magnitude of the vectors. To include this step. We use a more complicated equation:\n<img src=\"eqn_full.gif\">\nhttps://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient\nhttps://en.wikipedia.org/wiki/Cross-correlation\nHopefully you can see the top of this equation is very similar to the dot-product, except that it is centered on zero (subtraction of the mu, the mean) and the variance is normalised (division by standard deviation).\nBecause the equation is normalised, a perfectly correlated sequence yeilds a rho value of 1.0. A perfectly random comparison yields 0 and two anti-correlated sequences will yield a value of -1.0.", "#The cross-correlation algorithm is another name for the Pearson's test.\n#Here it is written in code form and utilising the builtin functions:\nc = [0,1,2]\nd = [3,4,5]\nrho = np.average((c-np.average(c))*(d-np.average(d)))/(np.std(c)*np.std(d))\nprint('rho',np.round(rho,3))\n#equally you can write\nrho = np.dot(c-np.average(c),d-np.average(d))/sqrt(((np.dot(c-np.average(c),c-np.average(c)))*np.dot(d-np.average(d),d-np.average(d))))\nprint('rho',round(rho,3))\n\n#Why is the rho for c and d, 1.0?\n#Edit the variables c and d and find the pearson's value for 'a' and 'b'.\n#What happens when you correlate 'a' with 'a'?\n\n#Here is an image from the Fiji practical\nfrom tifffile import imread as imreadtiff\nim = imreadtiff('neuron.tif')\nprint('image dimensions',im.shape, ' im dtype:',im.dtype)\nsubplot(2,2,1)\nimshow(im[0,:,:],cmap='Blues_r')\nsubplot(2,2,2)\nimshow(im[1,:,:],cmap='Greens_r')\nsubplot(2,2,3)\nimshow(im[2,:,:],cmap='Greys_r')\nsubplot(2,2,4)\nimshow(im[3,:,:],cmap='Reds_r')", "Pearson's comparison of microscopy derived images", "a = im[0,:,:].reshape(-1)\nb = im[3,:,:].reshape(-1)\n#Calculate the pearson's coefficent (rho) for the image channel 0, 3.\n#You should hopefully obtain a value 0.829\n\n#from tifffile import imread as imreadtiff\nim = imreadtiff('composite.tif')\n\n#The organisation of this file is not simple. 
It is also a 16-bit image.\nprint(\"shape of im: \",im.shape,\"bit-depth: \",im.dtype)\n\n#We can access the image data like so.\nCH0 = im[0,0,:,:]\nCH1 = im[1,0,:,:]\n\n#Single channel visualisation can handle 16-bit\nsubplot(2,2,1)\nimshow(CH0,cmap='Reds_r')\nsubplot(2,2,2)\nimshow(CH1,cmap='Greens_r')\nsubplot(2,2,3)\n\n#RGB data have to range between 0 and 255 in each channel and be int (8-bit).\nimRGB = np.zeros((CH0.shape[0],CH0.shape[1],3))\nimRGB[:,:,0] = CH0/255.0\nimRGB[:,:,1] = CH1/255.0\nimshow((imRGB.astype(np.uint8)))\n\n\n#What is the current Pearson's value for this image?\n", "Maybe remove so not to clash with Mark's.\nLast challenge\nExercise: The above image is not registered. Can you devise a way of registering this image, using the Pearson's test as a measure of the similarity of the images in different positions? Hint: you will need to move one of the images relative to the other and measure the colocalisation at each position. The best alignment will have the highest rho value. Produce a visualisation of your fully registered image.", "np.max(imRGB/256.0)\n\nrho_max = 0\n#This moves one of your images with respect to the other.\nfor c in range(1,40):\n    for r in range(1,40):\n        #We need to dynamically sample our image.\n        temp = CH0[c:-40+c,r:-40+r].reshape(-1);\n        #The -40 makes sure they are the same size.\n        ref = CH1[:-40,:-40].reshape(-1);\n        \n        rho = np.dot(temp-np.average(temp),ref-np.average(ref))/sqrt(((np.dot(temp-np.average(temp),temp-np.average(temp)))*np.dot(ref-np.average(ref),ref-np.average(ref))))\n        \n        #You will need to work out where the highest rho value is recorded.\n        #You will then need to find the coordinates of this high rho.\n        #You will then need to provide a visualisation with the image translated.\n        \n\nnp.max(imRGB)\n\nimshow?\n\nwhos" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AllenDowney/ModSimPy
notebooks/chap17.ipynb
mit
[ "Modeling and Simulation in Python\nChapter 17\nCopyright 2017 Allen Downey\nLicense: Creative Commons Attribution 4.0 International", "# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim.py module\nfrom modsim import *", "Data\nWe have data from Pacini and Bergman (1986), \"MINMOD: a computer program to calculate insulin sensitivity and pancreatic responsivity from the frequently sampled intravenous glucose tolerance test\", Computer Methods and Programs in Biomedicine, 23: 113-122..", "data = pd.read_csv('data/glucose_insulin.csv', index_col='time')", "Here's what the glucose time series looks like.", "plot(data.glucose, 'bo', label='glucose')\ndecorate(xlabel='Time (min)',\n ylabel='Concentration (mg/dL)')", "And the insulin time series.", "plot(data.insulin, 'go', label='insulin')\ndecorate(xlabel='Time (min)',\n ylabel='Concentration ($\\mu$U/mL)')", "For the book, I put them in a single figure, using subplot", "subplot(2, 1, 1)\nplot(data.glucose, 'bo', label='glucose')\ndecorate(ylabel='Concentration (mg/dL)')\n\nsubplot(2, 1, 2)\nplot(data.insulin, 'go', label='insulin')\ndecorate(xlabel='Time (min)',\n ylabel='Concentration ($\\mu$U/mL)')\n\nsavefig('figs/chap17-fig01.pdf')", "Interpolation\nWe have measurements of insulin concentration at discrete points in time, but we need to estimate it at intervening points. We'll use interpolate, which takes a Series and returns a function:\nThe return value from interpolate is a function.", "I = interpolate(data.insulin)", "We can use the result, I, to estimate the insulin level at any point in time.", "I(7)", "I can also take an array of time and return an array of estimates:", "t_0 = get_first_label(data)\nt_end = get_last_label(data)\nts = linrange(t_0, t_end, endpoint=True)\nI(ts)\ntype(ts)", "Here's what the interpolated values look like.", "plot(data.insulin, 'go', label='insulin data')\nplot(ts, I(ts), color='green', label='interpolated')\n\ndecorate(xlabel='Time (min)',\n ylabel='Concentration ($\\mu$U/mL)')\n\nsavefig('figs/chap17-fig02.pdf')", "Exercise: Read the documentation of scipy.interpolate.interp1d. Pass a keyword argument to interpolate to specify one of the other kinds of interpolation, and run the code again to see what it looks like.", "# Solution goes here", "Exercise: Interpolate the glucose data and generate a plot, similar to the previous one, that shows the data points and the interpolated curve evaluated at the time values in ts.", "# Solution goes here", "Under the hood", "source_code(interpolate)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/zh-cn/tensorboard/migrate.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "将 tf.summary 用法迁移到 TF 2.0\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://tensorflow.google.cn/tensorboard/migrate\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">在 TensorFlow.org 上查看 </a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/migrate.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 中运行 </a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/migrate.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 GitHub 中查看源代码</a></td>\n</table>\n\n\n注:本文档面向已经熟悉 TensorFlow 1.x TensorBoard 并希望将大型 TensorFlow 代码库从 TensorFlow 1.x 迁移至 2.0 的用户。如果您是 TensorBoard 的新用户,另请参阅入门文档。如果您使用 tf.keras,那么可能无需执行任何操作即可升级到 TensorFlow 2.0。", "import tensorflow as tf", "TensorFlow 2.0 包含对 tf.summary API(用于写入摘要数据以在 TensorBoard 中进行可视化)的重大变更。\n变更\n将 tf.summary API 视为两个子 API 非常实用:\n\n一组用于记录各个摘要(summary.scalar()、summary.histogram()、summary.image()、summary.audio() 和 summary.text())的运算,从您的模型代码内嵌调用。\n写入逻辑,用于收集各个摘要并将其写入到特殊格式化的日志文件中(TensorBoard 随后会读取该文件以生成可视化效果)。\n\n在 TF 1.x 中\n上述二者必须手动关联在一起,方法是通过 Session.run() 获取摘要运算输出,并调用 FileWriter.add_summary(output, step)。v1.summary.merge_all() 运算通过使用计算图集合汇总所有摘要运算输出使这个操作更轻松,但是这种方式对 Eager Execution 和控制流的效果仍不尽人意,因此特别不适用于 TF 2.0。\n在 TF 2.X 中\n上述二者紧密集成。现在,单独的 tf.summary 运算在执行时可立即写入其数据。在您的模型代码中使用 API 的方式与以往类似,但是现在对 Eager Execution 更加友好,同时也保留了与计算图模式的兼容性。两个子 API 的集成意味着 summary.FileWriter 现已成为 TensorFlow 执行上下文的一部分,可直接通过 tf.summary 运算访问,因此配置写入器将是主要的差异。\nEager Execution 的示例用法(TF 2.0 中默认):", "writer = tf.summary.create_file_writer(\"/tmp/mylogs/eager\")\n\nwith writer.as_default():\n for step in range(100):\n # other model code would go here\n tf.summary.scalar(\"my_metric\", 0.5, step=step)\n writer.flush()\n\nls /tmp/mylogs/eager", "tf.function 计算图执行的示例用法:", "writer = tf.summary.create_file_writer(\"/tmp/mylogs/tf_function\")\n\n@tf.function\ndef my_func(step):\n with writer.as_default():\n # other model code would go here\n tf.summary.scalar(\"my_metric\", 0.5, step=step)\n\nfor step in tf.range(100, dtype=tf.int64):\n my_func(step)\n writer.flush()\n\nls /tmp/mylogs/tf_function", "旧 TF 1.x 计算图执行的示例用法:", "g = tf.compat.v1.Graph()\nwith g.as_default():\n step = tf.Variable(0, dtype=tf.int64)\n step_update = step.assign_add(1)\n writer = tf.summary.create_file_writer(\"/tmp/mylogs/session\")\n with writer.as_default():\n tf.summary.scalar(\"my_metric\", 0.5, step=step)\n all_summary_ops = tf.compat.v1.summary.all_v2_summary_ops()\n writer_flush = writer.flush()\n\n\nwith tf.compat.v1.Session(graph=g) as sess:\n sess.run([writer.init(), step.initializer])\n\n for i in range(100):\n sess.run(all_summary_ops)\n sess.run(step_update)\n sess.run(writer_flush) \n\nls /tmp/mylogs/session", "转换您的代码\n将现有的 tf.summary 用法转换至 TF 
2.0 API 无法实现可靠的自动化,因此需要通过 tf_upgrade_v2 脚本将其全部重写为 tf.compat.v1.summary。要迁移到 TF 2.0,您需要以如下方式修改代码:\n\n\n必须存在通过 .as_default() 设置的默认写入器才能使用摘要运算\n\n这意味着在 Eager Execution 模式下执行运算或在计算图构造中使用运算\n如果没有默认写入器,摘要运算将变为静默空运算\n默认写入器(尚)不跨 @tf.function 执行边界传播(仅在跟踪函数时对其进行检测),所以最佳做法是在函数体中调用 writer.as_default(),并确保在使用 @tf.function 时,写入器对象始终存在\n\n\n\n必须通过 step 参数将“步骤”值传入每个运算\n\nTensorBoard 需要步骤值以将数据呈现为时间序列\n由于 TF 1.x 中的全局步骤已被移除,因此需要执行显式传递,以确保每个运算都知道要读取的所需步骤变量\n为了减少样板,对注册默认步骤值的实验性支持通过 tf.summary.experimental.set_step() 提供,但这是临时功能,如有更改,恕不另行通知\n\n\n\n各个摘要运算的函数签名已更改\n\n现在,返回值为布尔值(指示是否实际写入了摘要)\n第二个参数名称(如果使用)已从 tensor 更改为 data\ncollections 参数已被移除;集合仅适用于 TF 1.x\nfamily 参数已被移除;仅使用 tf.name_scope()\n\n\n\n[仅针对旧计算图模式/会话执行用户]\n\n\n首先使用 v1.Session.run(writer.init()) 初始化写入器\n\n\n使用 v1.summary.all_v2_summary_ops() 获取当前计算图的所有 TF 2.0 摘要运算,例如通过 Session.run() 执行它们\n\n\n使用 v1.Session.run(writer.flush()) 刷新写入器,并以同样方式使用 close()\n\n\n\n\n如果您的 TF 1.x 代码已改用 tf.contrib.summary API,因其与 TF 2.0 API 更加相似,tf_upgrade_v2 脚本将能够自动执行大多数迁移步骤(并针对无法完全迁移的任何用法发出警告或错误)。在大多数情况下,它只是将 API 调用重写为 tf.compat.v2.summary;如果只需要与 TF 2.0+ 兼容,那么您可以删除 compat.v2 并将其作为 tf.summary 引用。\n其他提示\n除上述重要内容以外,一些辅助方面也进行了更改:\n\n\n条件记录(例如“每 100 个步骤记录一次”)有所更新\n\n要控制运算和相关代码,请将其包装在常规 if 语句(可在 Eager 模式下运行,以及通过 AutoGraph 在 @tf.function 中使用)或 tf.cond 中\n要仅控制摘要,请使用新的 tf.summary.record_if() 上下文管理器,并将其传递给您选择的布尔条件\n以下内容替换了 TF 1.x 模式:\n if condition:\n writer.add_summary()\n\n\n\n不直接编写 tf.compat.v1.Graph - 改为使用跟踪函数\n\nTF 2.0 中的计算图执行使用 @tf.function,而非显式计算图\n在 TF 2.0 中,使用新的跟踪样式 API tf.summary.trace_on() 和 tf.summary.trace_export() 记录执行的函数计算图\n\n\n\n不再使用 tf.summary.FileWriterCache 按 logdir 缓存全局写入器\n\n用户应实现自己的写入器对象缓存/共享方案,或者使用独立的写入器(TensorBoard 正在实现对后者的支持)\n\n\n\n事件文件的二进制表示已更改\n\nTensorBoard 1.x 已支持新格式;此项变更仅对从事件文件手动解析摘要数据的用户存在影响\n摘要数据现在以张量字节形式存储;您可以使用 tf.make_ndarray(event.summary.value[0].tensor) 将其转换为 Numpy" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
swirlingsand/deep-learning-foundations
How-to-Use-Tensorflow-for-Time-Series-Live--master/demo_full_notes.ipynb
mit
[ "In this tutorial I’ll explain how to build a simple working \nRecurrent Neural Network in TensorFlow! \nWe will build a simple Echo-RNN that remembers the input sequence and then echoes it after a few time-steps. This will help us understand how\nmemory works \nWe are mapping two sequences!\nWhat is an RNN?\nIt is short for “Recurrent Neural Network”, and is basically a neural \nnetwork that can be used when your data is treated as a sequence, where \nthe particular order of the data-points matter. More importantly, this \nsequence can be of arbitrary length.\nThe most straight-forward example is perhaps a time-seriedems of numbers, \nwhere the task is to predict the next value given previous values. The \ninput to the RNN at every time-step is the current value as well as a \nstate vector which represent what the network has “seen” at time-steps \nbefore. This state-vector is the encoded memory of the RNN, initially \nset to zero.\nGreat paper on this \nhttps://arxiv.org/pdf/1506.00019.pdf", "from IPython.display import Image\nfrom IPython.core.display import HTML \nfrom __future__ import print_function, division\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nImage(url= \"https://cdn-images-1.medium.com/max/1600/1*UkI9za9zTR-HL8uM15Wmzw.png\")\n\n#hyperparams\n\nnum_epochs = 100\ntotal_series_length = 50000\ntruncated_backprop_length = 15\nstate_size = 4\nnum_classes = 2\necho_step = 3\nbatch_size = 5\nnum_batches = total_series_length//batch_size//truncated_backprop_length\n\n#Step 1 - Collect data\n#Now generate the training data, \n#the input is basically a random binary vector. The output will be the \n#“echo” of the input, shifted echo_step steps to the right.\n\n#Notice the reshaping of the data into a matrix with batch_size rows. \n#Neural networks are trained by approximating the gradient of loss function \n#with respect to the neuron-weights, by looking at only a small subset of the data, \n#also known as a mini-batch.The reshaping takes the whole dataset and puts it into \n#a matrix, that later will be sliced up into these mini-batches.\n\ndef generateData():\n #0,1, 50K samples, 50% chance each chosen\n x = np.array(np.random.choice(2, total_series_length, p=[0.5, 0.5]))\n #shift 3 steps to the left\n y = np.roll(x, echo_step)\n #padd beginning 3 values with 0\n y[0:echo_step] = 0\n #Gives a new shape to an array without changing its data.\n #The reshaping takes the whole dataset and puts it into a matrix, \n #that later will be sliced up into these mini-batches.\n x = x.reshape((batch_size, -1)) # The first index changing slowest, subseries as rows\n y = y.reshape((batch_size, -1))\n\n return (x, y)\n\ndata = generateData()\n\nprint(data)\n\n#Schematic of the reshaped data-matrix, arrow curves shows adjacent time-steps that ended up on different rows. \n#Light-gray rectangle represent a “zero” and dark-gray a “one”.\nImage(url= \"https://cdn-images-1.medium.com/max/1600/1*aFtwuFsboLV8z5PkEzNLXA.png\")\n\n#TensorFlow works by first building up a computational graph, that \n#specifies what operations will be done. The input and output of this graph\n#is typically multidimensional arrays, also known as tensors. \n#The graph, or parts of it can then be executed iteratively in a \n#session, this can either be done on the CPU, GPU or even a resource \n#on a remote server.\n\n#operations and tensors\n\n#The two basic TensorFlow data-structures that will be used in this \n#example are placeholders and variables. 
On each run the batch data \n#is fed to the placeholders, which are “starting nodes” of the \n#computational graph. Also the RNN-state is supplied in a placeholder, \n#which is saved from the output of the previous run.\n\n#Step 2 - Build the Model\n\n#datatype, shape (5, 15) 2D array or matrix, batch size shape for later\nbatchX_placeholder = tf.placeholder(tf.float32, [batch_size, truncated_backprop_length])\nbatchY_placeholder = tf.placeholder(tf.int32, [batch_size, truncated_backprop_length])\n\n#and one for the RNN state, 5,4 \ninit_state = tf.placeholder(tf.float32, [batch_size, state_size])\n\n#The weights and biases of the network are declared as TensorFlow variables,\n#which makes them persistent across runs and enables them to be updated\n#incrementally for each batch.\n\n#3 layer recurrent net, one hidden state\n\n#randomly initialize weights\nW = tf.Variable(np.random.rand(state_size+1, state_size), dtype=tf.float32)\n#anchor, improves convergance, matrix of 0s \nb = tf.Variable(np.zeros((1,state_size)), dtype=tf.float32)\n\nW2 = tf.Variable(np.random.rand(state_size, num_classes),dtype=tf.float32)\nb2 = tf.Variable(np.zeros((1,num_classes)), dtype=tf.float32)", "The figure below shows the input data-matrix, and the current batch batchX_placeholder \nis in the dashed rectangle. As we will see later, this “batch window” is slided truncated_backprop_length \nsteps to the right at each run, hence the arrow. In our example below batch_size = 3, truncated_backprop_length = 3, \nand total_series_length = 36. Note that these numbers are just for visualization purposes, the values are different in the code. \nThe series order index is shown as numbers in a few of the data-points.", "Image(url= \"https://cdn-images-1.medium.com/max/1600/1*n45uYnAfTDrBvG87J-poCA.jpeg\")\n\n#Now it’s time to build the part of the graph that resembles the actual RNN computation, \n#first we want to split the batch data into adjacent time-steps.\n\n# Unpack columns\n#Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors.\n#so a bunch of arrays, 1 batch per time step\n\n# Change to unstack for new version of TF\ninputs_series = tf.unstack(batchX_placeholder, axis=1)\nlabels_series = tf.unstack(batchY_placeholder, axis=1)", "As you can see in the picture below that is done by unpacking the columns (axis = 1) of the batch into a Python list. The RNN will simultaneously be training on different parts in the time-series; steps 4 to 6, 16 to 18 and 28 to 30 in the current batch-example. The reason for using the variable names “plural”_”series” is to emphasize that the variable is a list that represent a time-series with multiple entries at each step.", "Image(url= \"https://cdn-images-1.medium.com/max/1600/1*f2iL4zOkBUBGOpVE7kyajg.png\")\n#Schematic of the current batch split into columns, the order index is shown on each data-point \n#and arrows show adjacent time-steps.", "The fact that the training is done on three places simultaneously in our time-series, requires us to save three instances of states when propagating forward. 
That has already been accounted for, as you see that the init_state placeholder has batch_size rows.", "#Forward pass\n#state placeholder\ncurrent_state = init_state\n#series of states through time\nstates_series = []\n\n\n#for each set of inputs\n#forward pass through the network to get new state value\n#store all states in memory\nfor current_input in inputs_series:\n #format input\n current_input = tf.reshape(current_input, [batch_size, 1])\n #mix both state and input data \n input_and_state_concatenated = tf.concat(1, [current_input, current_state]) # Increasing number of columns\n #perform matrix multiplication between weights and input, add bias\n #squash with a nonlinearity, for probabiolity value\n next_state = tf.tanh(tf.matmul(input_and_state_concatenated, W) + b) # Broadcasted addition\n #store the state in memory\n states_series.append(next_state)\n #set current state to next one\n current_state = next_state\n", "Notice the concatenation on line 6, what we actually want to do is calculate the sum of two affine transforms current_input * Wa + current_state * Wb in the figure below. By concatenating those two tensors you will only use one matrix multiplication. The addition of the bias b is broadcasted on all samples in the batch.", "Image(url= \"https://cdn-images-1.medium.com/max/1600/1*fdwNNJ5UOE3Sx0R_Cyfmyg.png\")", "You may wonder the variable name truncated_backprop_length is supposed to mean. When a RNN is trained, it is actually treated as a deep neural network with reoccurring weights in every layer. These layers will not be unrolled to the beginning of time, that would be too computationally expensive, and are therefore truncated at a limited number of time-steps. In our sample schematics above, the error is backpropagated three steps in our batch", "#calculate loss\n#second part of forward pass\n#logits short for logistic transform\nlogits_series = [tf.matmul(state, W2) + b2 for state in states_series] #Broadcasted addition\n#apply softmax nonlinearity for output probability\npredictions_series = [tf.nn.softmax(logits) for logits in logits_series]\n\n#measure loss, calculate softmax again on logits, then compute cross entropy\n#measures the difference between two probability distributions\n#this will return A Tensor of the same shape as labels and of the same type as logits \n#with the softmax cross entropy loss.\nlosses = [tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels) for logits, labels in zip(logits_series,labels_series)]\n#computes average, one value\ntotal_loss = tf.reduce_mean(losses)\n#use adagrad to minimize with .3 learning rate\n#minimize it with adagrad, not SGD\n#One downside of SGD is that it is sensitive to\n#the learning rate hyper-parameter. When the data are sparse and features have\n#different frequencies, a single learning rate for every weight update can have\n#exponential regret.\n#Some features can be extremely useful and informative to an optimization problem but \n#they may not show up in most of the training instances or data. If, when they do show up, \n#they are weighted equally in terms of learning rate as a feature that has shown up hundreds \n#of times we are practically saying that the influence of such features means nothing in the \n#overall optimization. it's impact per step in the stochastic gradient descent will be so small \n#that it can practically be discounted). 
To counter this, AdaGrad makes it such that features \n#that are more sparse in the data have a higher learning rate which translates into a larger \n#update for that feature\n#sparse features can be very useful.\n#Each feature has a different learning rate which is adaptable. \n#gives voice to the little guy who matters a lot\n#weights that receive high gradients will have their effective learning rate reduced, \n#while weights that receive small or infrequent updates will have their effective learning rate increased. \n#great paper http://seed.ucsd.edu/mediawiki/images/6/6a/Adagrad.pdf\ntrain_step = tf.train.AdagradOptimizer(0.3).minimize(total_loss)", "The last line is adding the training functionality, TensorFlow will perform back-propagation for us automatically — the computation graph is executed once for each mini-batch and the network-weights are updated incrementally.\nNotice the API call to sparse_softmax_cross_entropy_with_logits, it automatically calculates the softmax internally and then computes the cross-entropy. In our example the classes are mutually exclusive (they are either zero or one), which is the reason for using the “Sparse-softmax”, you can read more about it in the API. The usage is to havelogits is of shape [batch_size, num_classes] and labels of shape [batch_size].", "#visualizer\ndef plot(loss_list, predictions_series, batchX, batchY):\n plt.subplot(2, 3, 1)\n plt.cla()\n plt.plot(loss_list)\n\n for batch_series_idx in range(5):\n one_hot_output_series = np.array(predictions_series)[:, batch_series_idx, :]\n single_output_series = np.array([(1 if out[0] < 0.5 else 0) for out in one_hot_output_series])\n\n plt.subplot(2, 3, batch_series_idx + 2)\n plt.cla()\n plt.axis([0, truncated_backprop_length, 0, 2])\n left_offset = range(truncated_backprop_length)\n plt.bar(left_offset, batchX[batch_series_idx, :], width=1, color=\"blue\")\n plt.bar(left_offset, batchY[batch_series_idx, :] * 0.5, width=1, color=\"red\")\n plt.bar(left_offset, single_output_series * 0.3, width=1, color=\"green\")\n\n plt.draw()\n plt.pause(0.0001)", "There is a visualization function so we can se what’s going on in the network as we train. It will plot the loss over the time, show training input, training output and the current predictions by the network on different sample series in a training batch.", "#Step 3 Training the network\nwith tf.Session() as sess:\n #we stupidly have to do this everytime, it should just know\n #that we initialized these vars. 
v2 guys, v2..\n sess.run(tf.initialize_all_variables())\n #interactive mode\n plt.ion()\n #initialize the figure\n plt.figure()\n #show the graph\n plt.show()\n #to show the loss decrease\n loss_list = []\n\n for epoch_idx in range(num_epochs):\n #generate data at eveery epoch, batches run in epochs\n x,y = generateData()\n #initialize an empty hidden state\n _current_state = np.zeros((batch_size, state_size))\n\n print(\"New data, epoch\", epoch_idx)\n #each batch\n for batch_idx in range(num_batches):\n #starting and ending point per batch\n #since weights reoccuer at every layer through time\n #These layers will not be unrolled to the beginning of time, \n #that would be too computationally expensive, and are therefore truncated \n #at a limited number of time-steps\n start_idx = batch_idx * truncated_backprop_length\n end_idx = start_idx + truncated_backprop_length\n\n batchX = x[:,start_idx:end_idx]\n batchY = y[:,start_idx:end_idx]\n \n #run the computation graph, give it the values\n #we calculated earlier\n _total_loss, _train_step, _current_state, _predictions_series = sess.run(\n [total_loss, train_step, current_state, predictions_series],\n feed_dict={\n batchX_placeholder:batchX,\n batchY_placeholder:batchY,\n init_state:_current_state\n })\n\n loss_list.append(_total_loss)\n\n if batch_idx%100 == 0:\n print(\"Step\",batch_idx, \"Loss\", _total_loss)\n plot(loss_list, _predictions_series, batchX, batchY)\n\nplt.ioff()\nplt.show()", "You can see that we are moving truncated_backprop_length steps forward on each iteration (line 15–19), but it is possible have different strides. This subject is further elaborated in this article. The downside with doing this is that truncated_backprop_length need to be significantly larger than the time dependencies (three steps in our case) in order to encapsulate the relevant training data. Otherwise there might a lot of “misses”, as you can see on the figure below.", "Image(url= \"https://cdn-images-1.medium.com/max/1600/1*uKuUKp_m55zAPCzaIemucA.png\")", "Time series of squares, the elevated black square symbolizes an echo-output, which is activated three steps from the echo input (black square). The sliding batch window is also striding three steps at each run, which in our sample case means that no batch will encapsulate the dependency, so it can not train.\nThe network will be able to exactly learn the echo behavior so there is no need for testing data.\nThe program will update the plot as training progresses, Blue bars denote a training input signal (binary one), red bars show echos in the training output and green bars are the echos the net is generating. The different bar plots show different sample series in the current batch. Fully trained at 100 epochs look like this", "Image(url= \"https://cdn-images-1.medium.com/max/1600/1*ytquMdmGMJo0-3kxMCi1Gg.png\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.18/_downloads/66fec418bceb5ce89704fb8b44930330/plot_3d_to_2d.ipynb
bsd-3-clause
[ "%matplotlib inline", "====================================================\nHow to convert 3D electrode positions to a 2D image.\n====================================================\nSometimes we want to convert a 3D representation of electrodes into a 2D\nimage. For example, if we are using electrocorticography it is common to\ncreate scatterplots on top of a brain, with each point representing an\nelectrode.\nIn this example, we'll show two ways of doing this in MNE-Python. First,\nif we have the 3D locations of each electrode then we can use Mayavi to\ntake a snapshot of a view of the brain. If we do not have these 3D locations,\nand only have a 2D image of the electrodes on the brain, we can use the\n:class:mne.viz.ClickableImage class to choose our own electrode positions\non the image.", "# Authors: Christopher Holdgraf <choldgraf@berkeley.edu>\n#\n# License: BSD (3-clause)\nfrom scipy.io import loadmat\nimport numpy as np\nfrom mayavi import mlab\nfrom matplotlib import pyplot as plt\nfrom os import path as op\n\nimport mne\nfrom mne.viz import ClickableImage # noqa\nfrom mne.viz import plot_alignment, snapshot_brain_montage\n\n\nprint(__doc__)\n\nsubjects_dir = mne.datasets.sample.data_path() + '/subjects'\npath_data = mne.datasets.misc.data_path() + '/ecog/sample_ecog.mat'\n\n# We've already clicked and exported\nlayout_path = op.join(op.dirname(mne.__file__), 'data', 'image')\nlayout_name = 'custom_layout.lout'", "Load data\nFirst we'll load a sample ECoG dataset which we'll use for generating\na 2D snapshot.", "mat = loadmat(path_data)\nch_names = mat['ch_names'].tolist()\nelec = mat['elec'] # electrode coordinates in meters\ndig_ch_pos = dict(zip(ch_names, elec))\nmon = mne.channels.DigMontage(dig_ch_pos=dig_ch_pos)\ninfo = mne.create_info(ch_names, 1000., 'ecog', montage=mon)\nprint('Created %s channel positions' % len(ch_names))", "Project 3D electrodes to a 2D snapshot\nBecause we have the 3D location of each electrode, we can use the\n:func:mne.viz.snapshot_brain_montage function to return a 2D image along\nwith the electrode positions on that image. We use this in conjunction with\n:func:mne.viz.plot_alignment, which visualizes electrode positions.", "fig = plot_alignment(info, subject='sample', subjects_dir=subjects_dir,\n surfaces=['pial'], meg=False)\nmlab.view(200, 70)\nxy, im = snapshot_brain_montage(fig, mon)\n\n# Convert from a dictionary to array to plot\nxy_pts = np.vstack([xy[ch] for ch in info['ch_names']])\n\n# Define an arbitrary \"activity\" pattern for viz\nactivity = np.linspace(100, 200, xy_pts.shape[0])\n\n# This allows us to use matplotlib to create arbitrary 2d scatterplots\nfig2, ax = plt.subplots(figsize=(10, 10))\nax.imshow(im)\nax.scatter(*xy_pts.T, c=activity, s=200, cmap='coolwarm')\nax.set_axis_off()\n# fig2.savefig('./brain.png', bbox_inches='tight') # For ClickableImage", "Manually creating 2D electrode positions\nIf we don't have the 3D electrode positions then we can still create a\n2D representation of the electrodes. Assuming that you can see the electrodes\non the 2D image, we can use :class:mne.viz.ClickableImage to open the image\ninteractively. You can click points on the image and the x/y coordinate will\nbe stored.\nWe'll open an image file, then use ClickableImage to\nreturn 2D locations of mouse clicks (or load a file already created).\nThen, we'll return these xy positions as a layout for use with plotting topo\nmaps.", "# This code opens the image so you can click on it. 
Commented out\n# because we've stored the clicks as a layout file already.\n\n# # The click coordinates are stored as a list of tuples\n# im = plt.imread('./brain.png')\n# click = ClickableImage(im)\n# click.plot_clicks()\n\n# # Generate a layout from our clicks and normalize by the image\n# print('Generating and saving layout...')\n# lt = click.to_layout()\n# lt.save(op.join(layout_path, layout_name)) # To save if we want\n\n# # We've already got the layout, load it\nlt = mne.channels.read_layout(layout_name, path=layout_path, scale=False)\nx = lt.pos[:, 0] * float(im.shape[1])\ny = (1 - lt.pos[:, 1]) * float(im.shape[0]) # Flip the y-position\nfig, ax = plt.subplots()\nax.imshow(im)\nax.scatter(x, y, s=120, color='r')\nplt.autoscale(tight=True)\nax.set_axis_off()\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
oasis-open/cti-python-stix2
docs/guide/custom.ipynb
bsd-3-clause
[ "# Delete this cell to re-enable tracebacks\nimport sys\nipython = get_ipython()\n\ndef hide_traceback(exc_tuple=None, filename=None, tb_offset=None,\n exception_only=False, running_compiled_code=False):\n etype, value, tb = sys.exc_info()\n value.__cause__ = None # suppress chained exceptions\n return ipython._showtraceback(etype, value, ipython.InteractiveTB.get_exception_only(etype, value))\n\nipython.showtraceback = hide_traceback\n\n# JSON output syntax highlighting\nfrom __future__ import print_function\nfrom pygments import highlight\nfrom pygments.lexers import JsonLexer, TextLexer\nfrom pygments.formatters import HtmlFormatter\nfrom IPython.display import display, HTML\nfrom IPython.core.interactiveshell import InteractiveShell\n\nInteractiveShell.ast_node_interactivity = \"all\"\n\ndef json_print(inpt):\n string = str(inpt)\n formatter = HtmlFormatter()\n if string[0] == '{':\n lexer = JsonLexer()\n else:\n lexer = TextLexer()\n return HTML('<style type=\"text/css\">{}</style>{}'.format(\n formatter.get_style_defs('.highlight'),\n highlight(string, lexer, formatter)))\n\nglobals()['print'] = json_print", "Custom STIX Content\nCustom Properties\nAttempting to create a STIX object with properties not defined by the specification will result in an error. Try creating an Identity object with a custom x_foo property:", "from stix2 import Identity\n\nIdentity(name=\"John Smith\",\n identity_class=\"individual\",\n x_foo=\"bar\")", "To create a STIX object with one or more custom properties, pass them in as a dictionary parameter called custom_properties:", "identity = Identity(name=\"John Smith\",\n identity_class=\"individual\",\n custom_properties={\n \"x_foo\": \"bar\"\n })\nprint(identity.serialize(pretty=True))", "Alternatively, setting allow_custom to True will allow custom properties without requiring a custom_properties dictionary.", "identity2 = Identity(name=\"John Smith\",\n identity_class=\"individual\",\n x_foo=\"bar\",\n allow_custom=True)\nprint(identity2.serialize(pretty=True))", "Likewise, when parsing STIX content with custom properties, pass allow_custom=True to parse():", "from stix2 import parse\n\ninput_string = \"\"\"{\n \"type\": \"identity\",\n \"spec_version\": \"2.1\",\n \"id\": \"identity--311b2d2d-f010-4473-83ec-1edf84858f4c\",\n \"created\": \"2015-12-21T19:59:11Z\",\n \"modified\": \"2015-12-21T19:59:11Z\",\n \"name\": \"John Smith\",\n \"identity_class\": \"individual\",\n \"x_foo\": \"bar\"\n}\"\"\"\nidentity3 = parse(input_string, allow_custom=True)\nprint(identity3.x_foo)", "To remove a custom properties, use new_version() and set that property to None.", "identity4 = identity3.new_version(x_foo=None)\nprint(identity4.serialize(pretty=True))", "Custom STIX Object Types\nTo create a custom STIX object type, define a class with the @CustomObject decorator. It takes the type name and a list of property tuples, each tuple consisting of the property name and a property instance. Any special validation of the properties can be added by supplying an __init__ function.\nLet's say zoo animals have become a serious cyber threat and we want to model them in STIX using a custom object type. Let's use a species property to store the kind of animal, and make that property required. We also want a property to store the class of animal, such as \"mammal\" or \"bird\" but only want to allow specific values in it. 
We can add some logic to validate this property in __init__.", "from stix2 import CustomObject, properties\n\n@CustomObject('x-animal', [\n ('species', properties.StringProperty(required=True)),\n ('animal_class', properties.StringProperty()),\n])\nclass Animal(object):\n def __init__(self, animal_class=None, **kwargs):\n if animal_class and animal_class not in ['mammal', 'bird', 'fish', 'reptile']:\n raise ValueError(\"'%s' is not a recognized class of animal.\" % animal_class)", "Now we can create an instance of our custom Animal type.", "animal = Animal(species=\"lion\",\n animal_class=\"mammal\")\nprint(animal.serialize(pretty=True))", "Trying to create an Animal instance with an animal_class that's not in the list will result in an error:", "Animal(species=\"xenomorph\",\n animal_class=\"alien\")", "Parsing custom object types that you have already defined is simple and no different from parsing any other STIX object.", "input_string2 = \"\"\"{\n \"type\": \"x-animal\",\n \"id\": \"x-animal--941f1471-6815-456b-89b8-7051ddf13e4b\",\n \"created\": \"2015-12-21T19:59:11Z\",\n \"modified\": \"2015-12-21T19:59:11Z\",\n \"spec_version\": \"2.1\",\n \"species\": \"shark\",\n \"animal_class\": \"fish\"\n}\"\"\"\nanimal2 = parse(input_string2)\nprint(animal2.species)", "However, parsing custom object types which you have not defined will result in an error:", "input_string3 = \"\"\"{\n \"type\": \"x-foobar\",\n \"id\": \"x-foobar--d362beb5-a04e-4e6b-a030-b6935122c3f9\",\n \"created\": \"2015-12-21T19:59:11Z\",\n \"modified\": \"2015-12-21T19:59:11Z\",\n \"bar\": 1,\n \"baz\": \"frob\"\n}\"\"\"\nparse(input_string3)", "Custom Cyber Observable Types\nSimilar to custom STIX object types, use a decorator to create custom Cyber Observable types. Just as before, __init__() can hold additional validation, but it is not necessary.", "from stix2 import CustomObservable\n\n@CustomObservable('x-new-observable', [\n ('a_property', properties.StringProperty(required=True)),\n ('property_2', properties.IntegerProperty()),\n])\nclass NewObservable():\n pass\n\nnew_observable = NewObservable(a_property=\"something\",\n property_2=10)\nprint(new_observable.serialize(pretty=True))", "Likewise, after the custom Cyber Observable type has been defined, it can be parsed.", "from stix2 import ObservedData\n\ninput_string4 = \"\"\"{\n \"type\": \"observed-data\",\n \"id\": \"observed-data--b67d30ff-02ac-498a-92f9-32f845f448cf\",\n \"spec_version\": \"2.1\",\n \"created_by_ref\": \"identity--f431f809-377b-45e0-aa1c-6a4751cae5ff\",\n \"created\": \"2016-04-06T19:58:16.000Z\",\n \"modified\": \"2016-04-06T19:58:16.000Z\",\n \"first_observed\": \"2015-12-21T19:00:00Z\",\n \"last_observed\": \"2015-12-21T19:00:00Z\",\n \"number_observed\": 50,\n \"objects\": {\n \"0\": {\n \"type\": \"x-new-observable\",\n \"a_property\": \"foobaz\",\n \"property_2\": 5\n }\n }\n}\"\"\"\nobs_data = parse(input_string4)\nprint(obs_data.objects[\"0\"].a_property)\nprint(obs_data.objects[\"0\"].property_2)", "ID-Contributing Properties for Custom Cyber Observables\nSTIX 2.1 Cyber Observables (SCOs) have deterministic IDs, meaning that the ID of a SCO is based on the values of some of its properties. Thus, if multiple cyber observables of the same type have the same values for their ID-contributing properties, then these SCOs will have the same ID. UUIDv5 is used for the deterministic IDs, using the namespace \"00abedb4-aa42-466c-9c01-fed23315a9b7\". 
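As a rough, added illustration of that mechanism (a hand-rolled sketch using only the standard library, not the routine stix2 itself uses, and the exact property canonicalization may differ), such a deterministic ID for the custom SCO type defined further below could be derived like this:

import json
import uuid

SCO_DET_ID_NAMESPACE = uuid.UUID("00abedb4-aa42-466c-9c01-fed23315a9b7")

# Hypothetical ID-contributing values for the custom SCO defined below
id_contributing = {"a_property": "A property"}

# Serialize the contributing properties and hash them into a UUIDv5
name = json.dumps(id_contributing, sort_keys=True, separators=(",", ":"))
deterministic_id = "x-new-observable-2--" + str(uuid.uuid5(SCO_DET_ID_NAMESPACE, name))
print(deterministic_id)
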
A SCO's ID-contributing properties may consist of a combination of required properties and optional properties.\nIf a SCO type does not have any ID contributing properties defined, or all of the ID-contributing properties are not present on the object, then the SCO uses a randomly-generated UUIDv4. Thus, you can optionally define which of your custom SCO's properties should be ID-contributing properties. Similar to standard SCOs, your custom SCO's ID-contributing properties can be any combination of the SCO's required and optional properties.\nYou define the ID-contributing properties when defining your custom SCO with the CustomObservable decorator. After the list of properties, you can optionally define the list of id-contributing properties. If you do not want to specify any id-contributing properties for your custom SCO, then you do not need to do anything additional.\nSee the example below:", "from stix2 import CustomObservable\n\n@CustomObservable('x-new-observable-2', [\n ('a_property', properties.StringProperty(required=True)),\n ('property_2', properties.IntegerProperty()),\n], [\n 'a_property'\n])\nclass NewObservable2():\n pass\n\nnew_observable_a = NewObservable2(a_property=\"A property\", property_2=2000)\nprint(new_observable_a.serialize(pretty=True))\n\nnew_observable_b = NewObservable2(a_property=\"A property\", property_2=3000)\nprint(new_observable_b.serialize(pretty=True))\n\nnew_observable_c = NewObservable2(a_property=\"A different property\", property_2=3000)\nprint(new_observable_c.serialize(pretty=True))", "In this example, a_property is the only id-contributing property. Notice that the ID for new_observable_a and new_observable_b is the same since they have the same value for the id-contributing a_property property.\nCustom Cyber Observable Extensions\nFinally, custom extensions to existing Cyber Observable types can also be created. Just use the @CustomExtension decorator. Note that you must provide the Cyber Observable class to which the extension applies. Again, any extra validation of the properties can be implemented by providing an __init__() but it is not required. Let's say we want to make an extension to the File Cyber Observable Object:", "from stix2 import CustomExtension\n\n@CustomExtension('x-new-ext', [\n ('property1', properties.StringProperty(required=True)),\n ('property2', properties.IntegerProperty()),\n])\nclass NewExtension():\n pass\n\nnew_ext = NewExtension(property1=\"something\",\n property2=10)\nprint(new_ext.serialize(pretty=True))", "Once the custom Cyber Observable extension has been defined, it can be parsed.", "input_string5 = \"\"\"{\n \"type\": \"observed-data\",\n \"id\": \"observed-data--b67d30ff-02ac-498a-92f9-32f845f448cf\",\n \"spec_version\": \"2.1\",\n \"created_by_ref\": \"identity--f431f809-377b-45e0-aa1c-6a4751cae5ff\",\n \"created\": \"2016-04-06T19:58:16.000Z\",\n \"modified\": \"2016-04-06T19:58:16.000Z\",\n \"first_observed\": \"2015-12-21T19:00:00Z\",\n \"last_observed\": \"2015-12-21T19:00:00Z\",\n \"number_observed\": 50,\n \"objects\": {\n \"0\": {\n \"type\": \"file\",\n \"name\": \"foo.bar\",\n \"hashes\": {\n \"SHA-256\": \"35a01331e9ad96f751278b891b6ea09699806faedfa237d40513d92ad1b7100f\"\n },\n \"extensions\": {\n \"x-new-ext\": {\n \"property1\": \"bla\",\n \"property2\": 50\n }\n }\n }\n }\n}\"\"\"\nobs_data2 = parse(input_string5)\nprint(obs_data2.objects[\"0\"].extensions[\"x-new-ext\"].property1)\nprint(obs_data2.objects[\"0\"].extensions[\"x-new-ext\"].property2)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
phoebe-project/phoebe2-docs
development/examples/rossiter_mclaughlin.ipynb
gpl-3.0
[ "Rossiter-McLaughlin Effect\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install -I \"phoebe>=2.4,<2.5\"", "As always, let's do imports and initialize a logger and a new bundle.", "import phoebe\nimport numpy as np\n\nb = phoebe.default_binary()", "Now we'll try to exaggerate the effect by spinning up the secondary component.", "b.set_value('q', value=0.7)\nb.set_value('incl', component='binary', value=87)\nb.set_value('requiv', component='primary', value=0.8)\nb.set_value('teff', component='secondary', value=6500)\nb.set_value('syncpar', component='secondary', value=1.5)", "Adding Datasets\nWe'll add radial velocity, line profile, and mesh datasets. We'll compute the rvs through the whole orbit, but the mesh and line profiles right around the eclipse - just at the times that we want to plot for an animation.", "anim_times = phoebe.arange(0.44, 0.56, 0.002)", "We'll add two identical datasets, one where we compute only dynamical RVs (won't include Rossiter-McLaughlin) and another where we compute flux-weighted RVs (will include Rossiter-McLaughlin).", "b.add_dataset('rv', \n times=phoebe.linspace(0,1,201), \n dataset='dynamicalrvs')\n\nb.set_value_all('rv_method', dataset='dynamicalrvs', value='dynamical')\n\nb.add_dataset('rv', \n times=phoebe.linspace(0,1,201), \n dataset='numericalrvs')\n\nb.set_value_all('rv_method', dataset='numericalrvs', value='flux-weighted')", "For the mesh, we'll save some time by only exposing plane-of-sky coordinates and the 'rvs' column.", "b.add_dataset('mesh', \n compute_times=anim_times, \n coordinates='uvw', \n columns=['rvs@numericalrvs'],\n dataset='mesh01')", "And for the line-profile, we'll expose the line-profile for both of our stars separately, instead of for the entire system.", "b.add_dataset('lp', \n compute_times=anim_times, \n component=['primary', 'secondary'], \n wavelengths=phoebe.linspace(549.5,550.5,101), \n profile_rest=550)", "Running Compute", "b.run_compute(irrad_method='none')", "Plotting\nThroughout all of these plots, we'll color the components green and magenta (to differentiate them from the red and blue of the RV mapping).", "colors = {'primary': 'green', 'secondary': 'magenta'}", "First let's compare between the dynamical and numerical RVs. \nThe dynamical RVs show the velocity of the center of each star along the line of sight. But the numerical method integrates over the visible surface elements, giving us what we'd observe if deriving RVs from observed spectra of the binary. Here we do see the Rossiter McLaughlin effect. You'll also notice that RVs are not available for the secondary star when its completely occulted (they're nans in the array).", "afig, mplfig = b.plot(kind='rv',\n c=colors, \n ls={'numericalrvs': 'solid', 'dynamicalrvs': 'dotted'},\n show=True)", "Now let's make a plot of the line profiles and mesh during ingress to visualize what's happening. \nLet's go through these options (see the plot API docs for more details):\n* time: make the plot at this single time\n* fc: (will be ignored by everything but the mesh): set the facecolor to the rvs column. This will automatically apply a red-blue color mapping.\n* ec: disable drawing the edges of the triangles in a separate color. We could also set this to 'none', but then we'd be able to \"see-through\" the triangle edges.\n* c: set the colors as defined in our dictionary above. 
This will apply to the rv, lp, and horizon datasets, but will be ignored by the mesh.\n* ls: set the linestyle to differentiate between numerical and dynamical rvs.\n* highlight: highlight the current time on the numerical rvs only.\n* axpos: define the layout of the axes so the mesh plot takes up the horizontal space it needs.\n* xlim: \"zoom-in\" on the RM effect in the RVs, allow the others to fallback on automatic limits.\n* tight_layout: use matplotlib's tight layout to ensure we have enough padding between axes to see the labels.", "afig, mplfig= b.plot(time=0.46,\n fc='rvs@numericalrvs', ec='face',\n c=colors,\n ls={'numericalrvs': 'solid', 'dynamicalrvs': 'dotted'},\n highlight={'numericalrvs': True, 'dynamicalrvs': False},\n axpos={'mesh': 211, 'rv': 223, 'lp': 224},\n xlim={'rv': (0.4, 0.6)}, ylim={'rv': (-80, 80)},\n tight_layout=True, \n show=True)", "Here we can see that star in front (green) is eclipsing more of the blue-shifted part of the back star (magenta), distorting the line profile, causing the apparent center of the line profile to be shifted to the right/red, and therefore the radial velocities to be articially increased as compared to the dynamical RVs.\nNow let's animate the same figure in time. We'll use the same arguments as the static plot above, with the following exceptions:\n\ntimes: pass our array of times that we want the animation to loop over.\npad_aspect: pad_aspect doesn't work with animations, so we'll disable to avoid the warning messages.\nanimate: self-explanatory.\nsave: we could use show=True, but that doesn't always play nice with jupyter notebooks\nsave_kwargs: may need to change these for your setup, to create a gif, passing {'writer': 'imagemagick'} is often useful.", "afig, mplanim = b.plot(times=anim_times,\n fc='rvs@numericalrvs', ec='face',\n c=colors,\n ls={'numericalrvs': 'solid', 'dynamicalrvs': 'dotted'},\n highlight={'numericalrvs': True, 'dynamicalrvs': False},\n pad_aspect=False,\n axpos={'mesh': 211, 'rv': 223, 'lp': 224},\n xlim={'rv': (0.4, 0.6)}, ylim={'rv': (-80, 80)},\n animate=True, \n save='rossiter_mclaughlin.gif',\n save_kwargs={'writer': 'imagemagick'})", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/zh-cn/io/tutorials/genome.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://tensorflow.google.cn/io/tutorials/genome\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">在 TensorFlow.org 上查看 </a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/genome.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 中运行 </a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/genome.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 GitHub 中查看源代码</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/genome.ipynb\">{img1下载笔记本</a></td>\n</table>\n\n概述\n本教程将演示 tfio.genome 软件包,其中提供了常用的基因组学 IO 功能,即读取多种基因组学文件格式,以及提供一些用于准备数据(例如,独热编码或将 Phred 质量解析为概率)的常用运算。\n此软件包使用 Google Nucleus 库来提供一些核心功能。 \n设置", "try:\n %tensorflow_version 2.x\nexcept Exception:\n pass\n!pip install tensorflow-io\n\nimport tensorflow_io as tfio\nimport tensorflow as tf", "FASTQ 数据\nFASTQ 是一种常见的基因组学文件格式,除了基本的质量信息外,还存储序列信息。\n首先,让我们下载一个样本 fastq 文件。", "# Download some sample data:\n!curl -OL https://raw.githubusercontent.com/tensorflow/io/master/tests/test_genome/test.fastq", "读取 FASTQ 数据\n现在,让我们使用 tfio.genome.read_fastq 读取此文件(请注意,tf.data API 即将发布)。", "fastq_data = tfio.genome.read_fastq(filename=\"test.fastq\")\nprint(fastq_data.sequences)\nprint(fastq_data.raw_quality)", "如您所见,返回的 fastq_data 具有 fastq_data.sequences,后者是 fastq 文件中所有序列的字符串张量(大小可以不同);并具有 fastq_data.raw_quality,其中包含与在序列中读取的每个碱基的质量有关的 Phred 编码质量信息。\n质量\n如有兴趣,您可以使用辅助运算将此质量信息转换为概率。", "quality = tfio.genome.phred_sequences_to_probability(fastq_data.raw_quality)\nprint(quality.shape)\nprint(quality.row_lengths().numpy())\nprint(quality)", "独热编码\n您可能还需要使用独热编码器对基因组序列数据(由 A T C G 碱基组成)进行编码。有一项内置运算可以帮助编码。", "print(tfio.genome.sequences_to_onehot.__doc__)\n\nprint(tfio.genome.sequences_to_onehot.__doc__)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
w4zir/ml17s
python-tutorial.ipynb
mit
[ "CS228 Python Tutorial\nAdapted from the CS231n Python tutorial by Justin Johnson (http://cs231n.github.io/python-numpy-tutorial/).\nIntroduction\nPython is a great general-purpose programming language on its own, but with the help of a few popular libraries (numpy, scipy, matplotlib) it becomes a powerful environment for scientific computing.\nWe expect that many of you will have some experience with Python and numpy; for the rest of you, this section will serve as a quick crash course both on the Python programming language and on the use of Python for scientific computing.\nSome of you may have previous knowledge in Matlab, in which case we also recommend the numpy for Matlab users page (https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html).\nIn this tutorial, we will cover:\n\nBasic Python: Basic data types (Containers, Lists, Dictionaries, Sets, Tuples), Functions, Classes\nNumpy: Arrays, Array indexing, Datatypes, Array math, Broadcasting\nMatplotlib: Plotting, Subplots, Images\n\nBasics of Python\nPython is a high-level, dynamically typed multiparadigm programming language. Python code is often said to be almost like pseudocode, since it allows you to express very powerful ideas in very few lines of code while being very readable. As an example, here is an implementation of the classic quicksort algorithm in Python:", "def quicksort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[int(len(arr) / 2)]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quicksort(left) + middle + quicksort(right)\n\nprint (quicksort([3,6,8,10,1,2]))", "Python versions\nThere are currently two different supported versions of Python, 2.7 and 3.6. Somewhat confusingly, Python 3.X introduced many backwards-incompatible changes to the language, so code written for 2.7 may not work under 3.6 and vice versa. 
For this class all code will use Python 3.\nYou can check your Python version at the command line by running python --version.\nBasic data types\nNumbers\nIntegers and floats work as you would expect from other languages:", "x,y = 3,4\nprint (x,y)\n\n# type of variable\nprint(type(x))\n\nprint (x + 1) # Addition;\nprint (x - 1) # Subtraction;\nprint (x * 2) # Multiplication;\nprint (x ** 2) # Exponentiation;\n\nx += 1\nprint (x) # Prints \"4\"\nx *= 2\nprint (x) # Prints \"8\"\n\ny = 2.5\nprint (type(y)) # Prints \"<class 'float'>\"\nprint (y, y + 1, y * 2, y ** 2) # Prints \"2.5 3.5 5.0 6.25\"", "Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.\nPython also has built-in support for arbitrarily large integers and complex numbers; you can find all of the details in the documentation.\nBooleans\nPython implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&amp;&amp;, ||, etc.):", "t, f = True, False\nprint (type(t)) # Prints \"<class 'bool'>\"", "Now let's look at the operations:", "print (t and f) # Logical AND;\nprint (t or f) # Logical OR;\nprint (not t) # Logical NOT;\nprint (t != f) # Logical XOR;", "Strings", "hello = 'hello' # String literals can use single quotes\nworld = \"world\" # or double quotes; it does not matter.\nprint (hello, len(hello))\n\nhw = hello + ' ' + world # String concatenation\nprint (hw) # prints \"hello world\"\n\nhw12 = '%s %s %d' % (hello, world, 12) # sprintf style string formatting\nprint (hw12) # prints \"hello world 12\"", "String objects have a bunch of useful methods; for example:", "s = \"hello\"\nprint (s.capitalize()) # Capitalize a string; prints \"Hello\"\nprint (s.upper()) # Convert a string to uppercase; prints \"HELLO\"\nprint (s.rjust(7)) # Right-justify a string, padding with spaces; prints \" hello\"\nprint (s.center(7)) # Center a string, padding with spaces; prints \" hello \"\nprint (s.replace('l', '(ell)')) # Replace all instances of one substring with another;\n # prints \"he(ell)(ell)o\"\nprint (' world '.strip()) # Strip leading and trailing whitespace; prints \"world\"", "You can find a list of all string methods in the documentation.\nContainers\nPython includes several built-in container types: lists, dictionaries, sets, and tuples.\nLists\nA list is the Python equivalent of an array, but is resizeable and can contain elements of different types:", "xs = [3, 1, 2] # Create a list\nprint (xs, xs[2])\nprint (xs[-1]) # Negative indices count from the end of the list; prints \"2\"\n\nys = [[1,2,3],[2,3,4]]\nprint(ys)\nprint(ys[1][2])\n\nxs[2] = 'foo' # Lists can contain elements of different types\nprint (xs)\n\nxs.append('bar') # Add a new element to the end of the list\nprint (xs) \n\nx = xs.pop() # Remove and return the last element of the list\nprint (x, xs) ", "As usual, you can find all the gory details about lists in the documentation.\nSlicing\nIn addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing:", "# nums = range(5) # range is a built-in function that creates a sequence of integers\nnums = [2,3,5,1,2,8]\nprint (nums) # Prints \"[2, 3, 5, 1, 2, 8]\"\nprint (nums[2:4]) # Get a slice from index 2 to 4 (exclusive); prints \"[5, 1]\"\nprint (nums[2:]) # Get a slice from index 2 to the end; prints \"[5, 1, 2, 8]\"\nprint (nums[:2]) # Get a slice from the start to index 2 (exclusive); prints \"[2, 3]\"\nprint (nums[:]) # Get a slice of the whole list; prints \"[2, 3, 5, 1, 2, 8]\"\nprint (nums[:-2]) # Slice indices can be negative; prints \"[2, 3, 5, 1]\"", "Loops\nYou can loop over the elements of a list like this:", "animals = ['cat', 'dog', 'monkey']\nfor animal in animals:\n print (animal)\n print(1)\n \nx = 1\nprint(x)", "If you want access to the index of each element within the body of a loop, use the built-in enumerate function:", "animals = ['cat', 'dog', 'monkey']\nfor idx, animal in enumerate(animals):\n print ('#%d: %s' % (idx + 1, animal))", "List comprehensions:\nWhen programming, frequently we want to transform one type of data into another. As a simple example, consider the following code that computes square numbers:", "nums = [0, 1, 2, 3, 4]\nsquares = []\nfor x in nums:\n squares.append(x ** 2)\nprint (squares)", "You can make this code simpler using a list comprehension:", "nums = [0, 1, 2, 3, 4]\nsquares = [x ** 2 for x in nums]\nprint (squares)", "List comprehensions can also contain conditions:", "nums = [0, 1, 2, 3, 4]\neven_squares = [x ** 2 for x in nums if x % 2 == 0]\nprint (even_squares)", "Dictionaries\nA dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this:", "d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data\nprint (d['cat']) # Get an entry from a dictionary; prints \"cute\"\nprint ('cat' in d) # Check if a dictionary has a given key; prints \"True\"\n\nd['fish'] = 'wet' # Set an entry in a dictionary\nprint (d['fish']) # Prints \"wet\"\n\nprint (d['monkey']) # KeyError: 'monkey' not a key of d\n\nprint (d.get('monkey', 'N/A')) # Get an element with a default; prints \"N/A\"\nprint (d.get('fish', 'N/A')) # Get an element with a default; prints \"wet\"\n\ndel d['fish'] # Remove an element from a dictionary\nprint (d.get('fish', 'N/A')) # \"fish\" is no longer a key; prints \"N/A\"", "You can find all you need to know about dictionaries in the documentation.\nIt is easy to iterate over the keys in a dictionary:", "d = {'person': 2, 'cat': 4, 'spider': 8}\nfor animal in d:\n legs = d[animal]\n print ('A %s has %d legs' % (animal, legs))", "If you want access to keys and their corresponding values, use the items method:", "d = {'person': 2, 'cat': 4, 'spider': 8}\nfor animal, legs in d.items():\n print ('A %s has %d legs' % (animal, legs))", "Dictionary comprehensions: These are similar to list comprehensions, but allow you to easily construct dictionaries. For example:", "nums = [0, 1, 2, 3, 4]\neven_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}\nprint (even_num_to_square)", "Sets\nA set is an unordered collection of distinct elements.
As a simple example, consider the following:", "animals = {'cat', 'dog'}\nprint ('cat' in animals) # Check if an element is in a set; prints \"True\"\nprint ('fish' in animals) # prints \"False\"\n\n\nanimals.add('fish') # Add an element to a set\nprint ('fish' in animals)\nprint (animals) # Number of elements in a set;\n\nanimals.add('cat') # Adding an element that is already in the set does nothing\nprint (animals) \nanimals.remove('cat') # Remove an element from a set\nprint (animals) ", "Loops: Iterating over a set has the same syntax as iterating over a list; however since sets are unordered, you cannot make assumptions about the order in which you visit the elements of the set:", "animals = {'cat', 'dog', 'fish'}\nfor idx, animal in enumerate(animals):\n print ('#%d: %s' % (idx + 1, animal))\n# Prints \"#1: fish\", \"#2: dog\", \"#3: cat\"", "Functions\nPython functions are defined using the def keyword. For example:", "def sign(x):\n if x > 0:\n return 'positive'\n elif x < 0:\n return 'negative'\n else:\n return 'zero'\n\nfor x in [-1, 0, 1]:\n print (sign(x))", "We will often define functions to take optional keyword arguments, like this:", "def hello(name, loud=False):\n if loud:\n print ('HELLO, %s' % name.upper())\n else:\n print ('Hello, %s!' % name)\n\nhello('Bob')\nloud = True\nhello('Fred', True)", "Classes\nThe syntax for defining classes in Python is straightforward:", "class Greeter:\n\n # Constructor\n def __init__(self, name):\n self.name = name # Create an instance variable\n\n # Instance method\n def greet(self, loud=False):\n if loud:\n print ('HELLO, %s!' % self.name.upper())\n else:\n print ('Hello, %s' % self.name)\n\ng = Greeter('Fred') # Construct an instance of the Greeter class\ng.greet() # Call an instance method; prints \"Hello, Fred\"\ng.greet(loud=True) # Call an instance method; prints \"HELLO, FRED!\"", "Numpy\nNumpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. If you are already familiar with MATLAB, you might find this tutorial useful to get started with Numpy.\nTo use Numpy, we first need to import the numpy package:", "import numpy as np", "Arrays\nA numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.\nWe can initialize numpy arrays from nested Python lists, and access elements using square brackets:", "a = np.array([1, 2, 3]) # Create a rank 1 array\nprint (type(a), a.shape, a[0], a[1], a[2])\na[0] = 5 # Change an element of the array\nprint (a) \n\nb = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array\nprint (b)\n\nprint (b.shape) \nprint (b[0, 0], b[0, 1], b[1, 0])", "Numpy also provides many functions to create arrays:", "a = np.zeros((2,2)) # Create an array of all zeros\nprint (a)\n\nb = np.ones((1,2)) # Create an array of all ones\nprint (b)\n\nc = np.full((2,2), 7) # Create a constant array\nprint (c) \n\nd = np.eye(2) # Create a 2x2 identity matrix\nprint (d)\n\ne = np.random.random((2,2)) # Create an array filled with random values\nprint (e)", "Array indexing\nNumpy offers several ways to index into arrays.\nSlicing: Similar to Python lists, numpy arrays can be sliced. 
Since arrays may be multidimensional, you must specify a slice for each dimension of the array:", "import numpy as np\n\n# Create the following rank 2 array with shape (3, 4)\n# [[ 1 2 3 4]\n# [ 5 6 7 8]\n# [ 9 10 11 12]]\na = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])\n\n# Use slicing to pull out the subarray consisting of the first 2 rows\n# and columns 1 and 2; b is the following array of shape (2, 2):\n# [[2 3]\n# [6 7]]\nb = a[:2, 1:3]\nprint (b)", "A slice of an array is a view into the same data, so modifying it will modify the original array.", "print (a[0, 1]) \nb[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]\nprint (a[0, 1])", "You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing:", "# Create the following rank 2 array with shape (3, 4)\na = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])\nprint (a)\nprint(a.shape)", "Two ways of accessing the data in the middle row of the array.\nMixing integer indexing with slices yields an array of lower rank,\nwhile using only slices yields an array of the same rank as the\noriginal array:", "row_r1 = a[1, :] # Rank 1 view of the second row of a \nrow_r2 = a[1:2, :] # Rank 2 view of the second row of a\nrow_r3 = a[[1], :] # Rank 2 view of the second row of a\nprint (row_r1, row_r1.shape)\nprint (row_r2, row_r2.shape)\nprint (row_r3, row_r3.shape)\n\n# We can make the same distinction when accessing columns of an array:\ncol_r1 = a[:, 1]\ncol_r2 = a[:, 1:2]\nprint (col_r1, col_r1.shape)\nprint (col_r2, col_r2.shape)", "Integer array indexing: When you index into numpy arrays using slicing, the resulting array view will always be a subarray of the original array. In contrast, integer array indexing allows you to construct arbitrary arrays using the data from another array. Here is an example:", "a = np.array([[1,2], [3, 4], [5, 6]])\n\n# An example of integer array indexing.\n# The returned array will have shape (3,) and \nprint (a[[0, 1, 2], [0, 1, 0]])\n\n# The above example of integer array indexing is equivalent to this:\nprint (np.array([a[0, 0], a[1, 1], a[2, 0]]))\n\n# When using integer array indexing, you can reuse the same\n# element from the source array:\nprint (a[[0, 0], [1, 1]])\n\n# Equivalent to the previous integer array indexing example\nprint (np.array([a[0, 1], a[0, 1]]))", "One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix:", "# Create a new array from which we will select elements\na = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\nprint (a)\n\n# Create an array of indices\nb = np.array([0, 2, 0, 1])\n\n# Select one element from each row of a using the indices in b\nprint (a[np.arange(4), b]) # Prints \"[ 1 6 7 11]\"\n\n# Mutate one element from each row of a using the indices in b\na[np.arange(4), b] += 10\nprint (a)", "Boolean array indexing: Boolean array indexing lets you pick out arbitrary elements of an array. Frequently this type of indexing is used to select the elements of an array that satisfy some condition. 
Here is an example:", "import numpy as np\n\na = np.array([[1,2], [3, 4], [5, 6]])\n\nbool_idx = (a > 2) # Find the elements of a that are bigger than 2;\n # this returns a numpy array of Booleans of the same\n # shape as a, where each slot of bool_idx tells\n # whether that element of a is > 2.\n\nprint (bool_idx)\n\n# We use boolean array indexing to construct a rank 1 array\n# consisting of the elements of a corresponding to the True values\n# of bool_idx\nprint (a[bool_idx])\n\n# We can do all of the above in a single concise statement:\nprint (a[a > 2])", "For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.\nDatatypes\nEvery numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example:", "x = np.array([1, 2]) # Let numpy choose the datatype\ny = np.array([1.0, 2.0]) # Let numpy choose the datatype\nz = np.array([1, 2], dtype=np.int64) # Force a particular datatype\n\nprint (x.dtype, y.dtype, z.dtype)", "You can read all about numpy datatypes in the documentation.\nArray math\nBasic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:", "x = np.array([[1,2],[3,4]], dtype=np.float64)\ny = np.array([[5,6],[7,8]], dtype=np.float64)\n\n# Elementwise sum; both produce the array\nprint (x + y)\nprint (np.add(x, y))\n\n# Elementwise difference; both produce the array\nprint x - y\nprint np.subtract(x, y)\n\n# Elementwise product; both produce the array\nprint x * y\nprint np.multiply(x, y)\n\n# Elementwise division; both produce the array\n# [[ 0.2 0.33333333]\n# [ 0.42857143 0.5 ]]\nprint x / y\nprint np.divide(x, y)\n\n# Elementwise square root; produces the array\n# [[ 1. 1.41421356]\n# [ 1.73205081 2. ]]\nprint np.sqrt(x)", "Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects:", "x = np.array([[1,2],[3,4]])\ny = np.array([[5,6],[7,8]])\n\nv = np.array([9,10])\nw = np.array([11, 12])\n\n# Inner product of vectors; both produce 219\nprint v.dot(w)\nprint np.dot(v, w)\n\n# Matrix / vector product; both produce the rank 1 array [29 67]\nprint x.dot(v)\nprint np.dot(x, v)\n\n# Matrix / matrix product; both produce the rank 2 array\n# [[19 22]\n# [43 50]]\nprint x.dot(y)\nprint np.dot(x, y)", "Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum:", "x = np.array([[1,2],[3,4]])\n\nprint np.sum(x) # Compute sum of all elements; prints \"10\"\nprint np.sum(x, axis=0) # Compute sum of each column; prints \"[4 6]\"\nprint np.sum(x, axis=1) # Compute sum of each row; prints \"[3 7]\"", "You can find the full list of mathematical functions provided by numpy in the documentation.\nApart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. 
The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object:", "print (x)\nprint (x.T)\n\nv = np.array([[1,2,3]])\nprint (v)\nprint (v.T)", "Broadcasting\nBroadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.\nFor example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this:", "# We will add the vector v to each row of the matrix x,\n# storing the result in the matrix y\nx = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\nv = np.array([1, 0, 1])\ny = np.empty_like(x) # Create an empty matrix with the same shape as x\n\n# Add the vector v to each row of the matrix x with an explicit loop\nfor i in range(4):\n y[i, :] = x[i, :] + v\n\nprint (y)", "This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this:", "vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other\nprint (vv) # Prints \"[[1 0 1]\n # [1 0 1]\n # [1 0 1]\n # [1 0 1]]\"\n\ny = x + vv # Add x and vv elementwise\nprint (y)", "Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting:", "import numpy as np\n\n# We will add the vector v to each row of the matrix x,\n# storing the result in the matrix y\nx = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\nv = np.array([1, 0, 1])\ny = x + v # Add v to each row of x using broadcasting\nprint (y)", "The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.\nBroadcasting two arrays together follows these rules:\n\nIf the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.\nThe two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.\nThe arrays can be broadcast together if they are compatible in all dimensions.\nAfter broadcasting, each array behaves as if it had shape equal to the elementwise maximum of shapes of the two input arrays.\nIn any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension.\n\nIf this explanation does not make sense, try reading the explanation from the documentation or this explanation.\nFunctions that support broadcasting are known as universal functions.
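To make these rules concrete, here is a small added check (not part of the original tutorial) that traces the shapes from the earlier example by hand and confirms the result with numpy:

import numpy as np

x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])  # shape (4, 3)
v = np.array([1, 0, 1])                                   # shape (3,)

# Rule 1: v's shape is padded on the left to (1, 3)
# Rule 2: the dimension sizes (4 vs 1) and (3 vs 3) are compatible
# Result: the elementwise maximum of the shapes, i.e. (4, 3)
print (np.broadcast(x, v).shape)  # prints "(4, 3)"
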
You can find the list of all universal functions in the documentation.\nHere are some applications of broadcasting:", "# Compute outer product of vectors\nv = np.array([1,2,3]) # v has shape (3,)\nw = np.array([4,5]) # w has shape (2,)\n# To compute an outer product, we first reshape v to be a column\n# vector of shape (3, 1); we can then broadcast it against w to yield\n# an output of shape (3, 2), which is the outer product of v and w:\n\nprint (np.reshape(v, (3, 1)) * w)\n\n# Add a vector to each row of a matrix\nx = np.array([[1,2,3], [4,5,6]])\n# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),\n# giving the following matrix:\n\nprint (x + v)\n\n# Add a vector to each column of a matrix\n# x has shape (2, 3) and w has shape (2,).\n# If we transpose x then it has shape (3, 2) and can be broadcast\n# against w to yield a result of shape (3, 2); transposing this result\n# yields the final result of shape (2, 3) which is the matrix x with\n# the vector w added to each column. Gives the following matrix:\n\nprint ((x.T + w).T)\n\n# Another solution is to reshape w to be a column vector of shape (2, 1);\n# we can then broadcast it directly against x to produce the same\n# output.\nprint (x + np.reshape(w, (2, 1)))\n\n# Multiply a matrix by a constant:\n# x has shape (2, 3). Numpy treats scalars as arrays of shape ();\n# these can be broadcast together to shape (2, 3), producing the\n# following array:\nprint (x * 2)", "Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.\nThis brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.\nMatplotlib\nMatplotlib is a plotting library. In this section we give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.", "import matplotlib.pyplot as plt", "By running this special iPython command, we will be displaying plots inline:", "%matplotlib inline", "Plotting\nThe most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example:", "# Compute the x and y coordinates for points on a sine curve\nx = np.arange(0, 3 * np.pi, 0.1)\ny = np.sin(x)\n\n# Plot the points using matplotlib\nplt.scatter(x, y)", "With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels:", "y_cos = np.cos(x)\n\n# Plot the points using matplotlib\nplt.plot(x, y)\nplt.plot(x, y_cos)\nplt.xlabel('x axis label')\nplt.ylabel('y axis label')\nplt.title('Sine and Cosine')\nplt.legend(['Sine', 'Cosine'])", "Subplots\nYou can plot different things in the same figure using the subplot function.
Here is an example:", "# Compute the x and y coordinates for points on sine and cosine curves\nx = np.arange(0, 3 * np.pi, 0.1)\ny_sin = np.sin(x)\ny_cos = np.cos(x)\n\n# Set up a subplot grid that has height 2 and width 1,\n# and set the first such subplot as active.\nplt.subplot(2, 1, 1)\n\n# Make the first plot\nplt.plot(x, y_sin)\nplt.title('Sine')\n\n# Set the second subplot as active, and make the second plot.\nplt.subplot(2, 1, 2)\nplt.plot(x, y_cos)\nplt.title('Cosine')\n\n# Show the figure.\nplt.show()", "You can read much more about the subplot function in the documentation.\nPandas", "import pandas as pd\n\n# read_csv() is the function (or feature) from pandas we want to use to load the file into memory\ndframe = pd.read_csv(\"lectures/datasets/titanic_dataset.csv\")\n\n# .head(num_of_rows) is a method that displays the first few (num_of_rows) rows, not counting column headers\ndframe.head(5)\n\n# rows and columns in dataset\ndframe.shape\n\n# check columns in dataset\ndframe.columns\n\n# select a row\nhundredth_row = dframe.loc[99]\nprint(hundredth_row)\n\n# select multiple rows\nprint(\"Rows 3, 4, 5 and 6\")\nprint(dframe.loc[3:6])\n\n# select specific columns\ncols = ['survived','sex','age']\nspecific_cols = dframe[cols]\nspecific_cols.head()\n\n# check statistics of the data\ndframe.describe()\n\n# check histogram of age\ndframe.hist(column='age', bins=10)\n\n# Replace all the occurences of male with the number 0 and female with 1\ndframe.loc[dframe[\"sex\"] == \"male\", \"sex\"] = 0\ndframe.loc[dframe[\"sex\"] == \"female\", \"sex\"] = 1", "Images", "from IPython.display import Image\nImage(filename='lectures/images/01_02.png', width=500)\n\nImage(filename='lectures/images/01_01.png', width=500)", "KNN Classifier", "# read X and y\n# cols = ['pclass','sex','age','fare']\ncols = ['pclass','sex','age']\nX = dframe[cols]\ny = dframe[[\"survived\"]]\n\ndframe.head()\n\n# Use scikit-learn KNN classifier to predit survival probability\nfrom sklearn.neighbors import KNeighborsClassifier\nneigh = KNeighborsClassifier(n_neighbors=3)\nneigh.fit(X, y) \n\n# check accuracy\nneigh.score(X,y)\n\n# define a passenger\npassenger = [1,1,29]\n\n# predict survial label\nprint(neigh.predict([passenger]))\n\n# predict survial probability\nprint(neigh.predict_proba([passenger]))\n\n# find k-nearest neighbors\nneigh.kneighbors(passenger,3)\n\n# Let's create some data for DiCaprio and Winslet and you\nimport numpy as np\ncolsidx = [0,2,3];\ndicaprio = np.array([3, 'Jack Dawson', 0, 19, 0, 0, 'N/A', 5.0000])\nwinslet = np.array([1, 'Rose DeWitt Bukater', 1, 17, 1, 2, 'N/A', 100.0000])\nyou = np.array([1, 'user', 1, 21, 0, 2, 'N/A', 50.0000])\n# Preprocess data\ndicaprio = dicaprio[colsidx]\nwinslet = winslet[colsidx]\nyou = you[colsidx]\n# # Predict surviving chances (class 1 results)\npred = neigh.predict([dicaprio, winslet, you])\nprob = neigh.predict_proba([dicaprio, winslet, you])\nprint(\"DiCaprio Surviving:\", pred[0], \" with probability\", prob[0])\nprint(\"Winslet Surviving Rate:\", pred[1], \" with probability\", prob[2])\nprint(\"user Surviving Rate:\", pred[2], \" with probability\", prob[2])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Evfro/RecSys_ISP2017
polara_intro.ipynb
mit
[ "import numpy as np\nfrom polara.recommender.data import RecommenderData\nfrom polara.recommender.models import RecommenderModel\nfrom polara.tools.movielens import get_movielens_data", "Get Movielens-1M data\nthis will download movielens-1m dataset from http://grouplens.org/datasets/movielens/:", "data, genres = get_movielens_data(get_genres=True)\n\ndata.head()\n\ndata.info()\n\ngenres.head()\n\n%matplotlib inline", "Rating distribution in the dataset:", "data.rating.value_counts().sort_index().plot.bar()", "Building our first recommender model\nPreparing data\nRecommenderData class provides a set of tools for manipulating the data and preparing it for experimentation.\nInput parameters are: the data itself (pandas dataframe) and mapping of the data fields (column names) to internal representation: userid, itemid and feedback:", "data_model = RecommenderData(data, userid='userid', itemid='movieid', feedback='rating')", "Verify correct mapping:", "data.columns\n\ndata_model.fields", "RecommenderData class has a number of parameters to control how the data is processed. Defaults are fine to start with:", "data_model.get_configuration()", "Use prepare method to split the dataset into 2 parts: training data and test data.", "data_model.prepare()", "As the original data possibly contains gaps in users' and items' indices, the data preparation process will clean this up: items from the training data will be indexed starting from zero with no gaps and the result will be stored in:", "data_model.index.itemid.head()", "Similarly, all userid's from both training and test set are reindexed and stored in:", "data_model.index.userid.training.head()\n\ndata_model.index.userid.test.head()", "Internally only new inices are used. This ensures consistency of various methods used by the model.\nThe dataset is split according to test_fold and test_ratio attributes. By default it uses first 80% of users for training and last 20% of the users as test data.", "data_model.training.head()\n\ndata_model.training.shape", "The test data is further split into testset and evaluation set (evalset). Testset is used to generate recommendations, which are than evaluated against the evaluation set.", "data_model.test.testset.head()\n\ndata_model.test.testset.shape\n\ndata_model.test.evalset.head()\n\ndata_model.test.evalset.shape", "The users in the test and evaluation sets are the same (but this users are not in the training set!).\nFor every test user the evaluation set contains a fixed number of items which are held out from the original test data. The number of holdout items is controlled by holdout_size parameter. 
By default it's set to 3:", "data_model.holdout_size\n\ndata_model.test.evalset.groupby('userid').movieid.count().head()", "Creating recommender model\nYou can create your own model by subclassing RecommenderModel class and defining two required methods: self.build() and self.get_recommendations():", "class TopMovies(RecommenderModel):\n def build(self):\n self._recommendations = None # this line is required in order to ensure consistency in experiments\n itemid = self.data.fields.itemid # get the name of the column that corresponds to movieid\n \n # calculate popularity of the movies based on the number of ratings\n item_scores = self.data.training[itemid].value_counts().sort_index().values\n \n # store it for later use in some attribute\n self.item_scores = item_scores\n \n \n def get_recommendations(self):\n userid = self.data.fields.userid # get the name of the column that corresponds to userid\n \n # get the number of test users\n # we expect that userid doesn't have gaps in numbering (as it might be in original dataset,\n # RecommenderData class takes care of that)\n num_users = self.data.test.testset[userid].max() + 1\n \n # repeat computed popularity scores in accordance with the number of test users\n scores = np.repeat(self.item_scores[None, :], num_users, axis=0)\n \n # we got the scores, but what we actually need is items (their id)\n # we also need only top-k items, not all of them (for top-k recommendation task)\n # here's how to get it:\n top_recs = self.get_topk_items(scores)\n # here leftmost items are those with the highest scores\n \n return top_recs", "Note that recommendations generated by this model do not take into account the fact that some of the recommended items may be present in the test set and thus should not be recommended (they are considered seen by a test user). In order to fix that you can use the filter_seen parameter along with the downvote_seen_items method as follows:\nif self.filter_seen:\n #prevent seen items from appearing in recommendations\n itemid = self.data.fields.itemid\n test_idx = (test_data[userid].values.astype(np.int64),\n test_data[itemid].values.astype(np.int64))\n self.downvote_seen_items(scores, test_idx)\nWith this procedure \"seen\" items will get the lowest scores and they will be sorted out. Place this code snippet inside the get_recommendations routine before handing the scores over to get_topk_items. This will improve the baseline.\nAlternative way\nAnother way is to define slice_recommendations instead of the get_recommendations method. With slice_recommendations defined, the model will scale better when huge datasets are used.\nThe slice_recommendations method takes a piece of the test data slice by slice instead of processing it as a whole. A slice is defined by the start and stop parameters (which are simply a userid to start with and a userid to stop at). Slicing the data avoids memory overhead and leads to a faster evaluation of models. Slicing is done automatically behind the scenes and you don't have to specify anything else. Another advantage: seen items will be automatically sorted out from recommendations as long as the filter_seen attribute is set to True (it is by default). So it requires fewer lines of code.", " class TopMoviesALT(RecommenderModel):\n def build(self):\n # same popularity scores as in TopMovies\n self._recommendations = None # required to ensure consistency in experiments\n itemid = self.data.fields.itemid\n self.item_scores = self.data.training[itemid].value_counts().sort_index().values\n \n def slice_recommendations(self, test_data, shape, start, stop):\n # the current implementation requires handing the slice data over in a specific format,\n # and the easiest way to get it is via the get_test_matrix method. It also returns\n # test data in sparse matrix format, but as our recommender model is non-personalized\n # we don't actually need it. See SVDModel implementation to see when it's useful.\n test_matrix, slice_data = self.get_test_matrix(test_data, shape, (start, stop))\n nusers = stop - start\n scores = np.repeat(self.item_scores[None, :], nusers, axis=0)\n return scores, slice_data", "Now everything is set to create an instance of the recommender model and produce recommendations.\nGenerating recommendations:", "top = TopMovies(data_model) # the model takes the recommender data model as an input parameter\n\ntop.build()\n\nrecs = top.get_recommendations()\n\nrecs\n\nrecs.shape\n\ntop.topk", "You can evaluate your model before submitting the results (to ensure that you have improved above the baseline):", "top.evaluate()", "Try to change your model to maximize the true_positive score.\nsubmitting your model:\nAfter you have created your perfect recsys model, first save your recommendations into a file. Please use your name as the file name (this will be displayed on the leaderboard):", "np.savez('your_full_name', recs=recs)", "Now you can upload your results:", "import requests\n\nfiles = {'upload': open('your_full_name.npz','rb')}\nurl = \"http://isp2017.azurewebsites.net/upload\"\n\nr = requests.post(url, files=files)", "Verify that the upload was successful:", "print(r.status_code, r.reason)", "You can also do it manually at http://isp2017.azurewebsites.net/upload\nCheck out how your results compare to others at: http://isp2017.azurewebsites.net" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rsterbentz/phys202-2015-work
assignments/project/NeuralNetworks.ipynb
mit
[ "Neural Networks\nThis project was created by Brian Granger. All content is licensed under the MIT License.\n\nIntroduction\nNeural networks are a class of algorithms that can learn how to compute the value of a function given previous examples of the functions output. Because neural networks are capable of learning how to compute the output of a function based on existing data, they generally fall under the field of Machine Learning.\nLet's say that we don't know how to compute some function $f$:\n$$ f(x) \\rightarrow y $$\nBut we do have some data about the output that $f$ produces for particular input $x$:\n$$ f(x_1) \\rightarrow y_1 $$\n$$ f(x_2) \\rightarrow y_2 $$\n$$ \\ldots $$\n$$ f(x_n) \\rightarrow y_n $$\nA neural network learns how to use that existing data to compute the value of the function $f$ on yet unseen data. Neural networks get their name from the similarity of their design to how neurons in the brain work.\nWork on neural networks began in the 1940s, but significant advancements were made in the 1970s (backpropagation) and more recently, since the late 2000s, with the advent of deep neural networks. These days neural networks are starting to be used extensively in products that you use. A great example of the application of neural networks is the recently released Flickr automated image tagging. With these algorithms, Flickr is able to determine what tags (\"kitten\", \"puppy\") should be applied to each photo, without human involvement.\nIn this case the function takes an image as input and outputs a set of tags for that image:\n$$ f(image) \\rightarrow {tag_1, \\ldots} $$\nFor the purpose of this project, good introductions to neural networks can be found at:\n\nThe Nature of Code, Daniel Shiffman.\nNeural Networks and Deep Learning, Michael Nielsen.\nData Science from Scratch, Joel Grus\n\nThe Project\nYour general goal is to write Python code to predict the number associated with handwritten digits. The dataset for these digits can be found in sklearn:", "%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom IPython.html.widgets import interact\n\nfrom sklearn.datasets import load_digits\ndigits = load_digits()\nprint(digits.data.shape)\n\ndef show_digit(i):\n plt.matshow(digits.images[i])\n plt.set_cmap('gray');\n\ninteract(show_digit, i=(0,100));", "The actual, known values (0,1,2,3,4,5,6,7,8,9) associated with each image can be found in the target array:", "digits.target", "Here are some of the things you will need to do as part of this project:\n\nSplit the original data set into two parts: 1) a training set that you will use to train your neural network and 2) a test set you will use to see if your trained neural network can accurately predict previously unseen data.\nWrite Python code to implement the basic building blocks of neural networks. This code should be modular and fully tested. While you can look at the code examples in the above resources, your code should be your own creation and be substantially different. 
One way of ensuring your code is different is to make it more general.\nCreate appropriate data structures for the neural network.\nFigure out how to initialize the weights of the neural network.\nWrite code to implement forward and back propagation.\nWrite code to train the network with the training set.\n\nYour base question should be to get a basic version of your code working that can predict handwritten digits with an accuracy that is significantly better than that of random guessing.\nHere are some ideas for questions you could explore as your two additional questions:\n\nHow to specify, train and use networks with more hidden layers.\nThe best way to determine the initial weights.\nMaking it all fast to handle more layers and neurons per layer (%timeit and %%timeit).\nExplore different ways of optimizing the weights/output of the neural network.\nTackle the full MNIST benchmark of $10,000$ digits.\nHow different sigmoid functions affect the results.\n\nImplementation hints\nThere are optimization routines in scipy.optimize that may be helpful.\nYou should use NumPy arrays and fast NumPy operations (dot) wherever possible." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/es-419/tutorials/quickstart/advanced.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Guia inicial de TensorFlow 2.0 para expertos\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/quickstart/advanced\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />Ver en TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/es-419/tutorials/quickstart/advanced.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Ejecutar en Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/es-419/tutorials/quickstart/advanced.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />Ver codigo en GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/es-419/tutorials/quickstart/advanced.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Descargar notebook</a>\n </td>\n</table>\n\nNote: Nuestra comunidad de Tensorflow ha traducido estos documentos. Como las traducciones de la comunidad\nson basados en el \"mejor esfuerzo\", no hay ninguna garantia que esta sea un reflejo preciso y actual \nde la Documentacion Oficial en Ingles.\nSi tienen sugerencias sobre como mejorar esta traduccion, por favor envian un \"Pull request\"\nal siguiente repositorio tensorflow/docs.\nPara ofrecerse como voluntario o hacer revision de las traducciones de la Comunidad\npor favor contacten al siguiente grupo docs@tensorflow.org list.\nEste es un notebook de Google Colaboratory. Los programas de Python se executan directamente en tu navegador —una gran manera de aprender y utilizar TensorFlow. Para poder seguir este tutorial, ejecuta este notebook en Google Colab presionando el boton en la parte superior de esta pagina.\nEn Colab, selecciona \"connect to a Python runtime\": En la parte superior derecha de la barra de menus selecciona: CONNECT.\nPara ejecutar todas las celdas de este notebook: Selecciona Runtime > Run all.\nDescarga e installa el paquete TensorFlow 2.0 version. 
\nImporta TensorFlow en tu programa:\nImport TensorFlow into your program:", "import tensorflow as tf\n\nfrom tensorflow.keras.layers import Dense, Flatten, Conv2D\nfrom tensorflow.keras import Model", "Carga y prepara el conjunto de datos MNIST", "mnist = tf.keras.datasets.mnist\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0\n\n# Agrega una dimension de canales\nx_train = x_train[..., tf.newaxis]\nx_test = x_test[..., tf.newaxis]", "Utiliza tf.data to separar por lotes y mezclar el conjunto de datos:", "train_ds = tf.data.Dataset.from_tensor_slices(\n (x_train, y_train)).shuffle(10000).batch(32)\n\ntest_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)", "Construye el modelo tf.keras utilizando la API de Keras model subclassing API:", "class MyModel(Model):\n def __init__(self):\n super(MyModel, self).__init__()\n self.conv1 = Conv2D(32, 3, activation='relu')\n self.flatten = Flatten()\n self.d1 = Dense(128, activation='relu')\n self.d2 = Dense(10, activation='softmax')\n\n def call(self, x):\n x = self.conv1(x)\n x = self.flatten(x)\n x = self.d1(x)\n return self.d2(x)\n\n# Crea una instancia del modelo\nmodel = MyModel()", "Escoge un optimizador y una funcion de perdida para el entrenamiento de tu modelo:", "loss_object = tf.keras.losses.SparseCategoricalCrossentropy()\n\noptimizer = tf.keras.optimizers.Adam()", "Escoge metricas para medir la perdida y exactitud del modelo.\nEstas metricas acumulan los valores cada epoch y despues imprimen el resultado total.", "train_loss = tf.keras.metrics.Mean(name='train_loss')\ntrain_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')\n\ntest_loss = tf.keras.metrics.Mean(name='test_loss')\ntest_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')", "Utiliza tf.GradientTape para entrenar el modelo.", "@tf.function\ndef train_step(images, labels):\n with tf.GradientTape() as tape:\n predictions = model(images)\n loss = loss_object(labels, predictions)\n gradients = tape.gradient(loss, model.trainable_variables)\n optimizer.apply_gradients(zip(gradients, model.trainable_variables))\n\n train_loss(loss)\n train_accuracy(labels, predictions)", "Prueba el modelo:", "@tf.function\ndef test_step(images, labels):\n predictions = model(images)\n t_loss = loss_object(labels, predictions)\n\n test_loss(t_loss)\n test_accuracy(labels, predictions)\n\nEPOCHS = 5\n\nfor epoch in range(EPOCHS):\n for images, labels in train_ds:\n train_step(images, labels)\n\n for test_images, test_labels in test_ds:\n test_step(test_images, test_labels)\n\n template = 'Epoch {}, Perdida: {}, Exactitud: {}, Perdida de prueba: {}, Exactitud de prueba: {}'\n print(template.format(epoch+1,\n train_loss.result(),\n train_accuracy.result()*100,\n test_loss.result(),\n test_accuracy.result()*100))\n\n # Reinicia las metricas para el siguiente epoch.\n train_loss.reset_states()\n train_accuracy.reset_states()\n test_loss.reset_states()\n test_accuracy.reset_states()", "El model de clasificacion de images fue entrenado y alcanzo una exactitud de ~98% en este conjunto de datos. Para aprender mas, lee los tutoriales de TensorFlow." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
QuinnLee/cs109a-Project
notebooks/5-aa-second_model.ipynb
mit
[ "Building a better model\nFollowing the baseline model and some feature engineering, we will now build a better predictive model. This will follow a few new patterns:\n1. We will import data cleaning and feature engineering stuff from external Python modules we've built (for standardization across our machines).\n2. We will cross-validate across time: that is, the model will be trained on earlier years and tested on later years.\n3. Rather than looping through models (and perhaps working mroe with Pipeline and GridSearch), we will focus on tuning the parameters of the best-performing model from the baseline set.", "%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport os\nimport sys\nimport sklearn\nimport sqlite3\nimport matplotlib\n\nimport numpy as np\nimport pandas as pd\nimport enchant as en\nimport seaborn as sns\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.cross_validation import train_test_split, cross_val_score\n\nsrc_dir = os.path.join(os.getcwd(), os.pardir, 'src')\nsys.path.append(src_dir)\n%aimport data\nfrom data import make_dataset as md\n\nplt.style.use('ggplot')\nplt.rcParams['figure.figsize'] = (16.0, 6.0)\nplt.rcParams['legend.markerscale'] = 3\nmatplotlib.rcParams['font.size'] = 16.0", "Data: Preparing for the model\nImporting the raw data", "DIR = os.getcwd() + \"/../data/\"\nt = pd.read_csv(DIR + 'raw/lending-club-loan-data/loan.csv', low_memory=False)\nt.head()", "Cleaning, imputing missing values, feature engineering (some NLP)", "t2 = md.clean_data(t)\nt3 = md.impute_missing(t2)\ndf = md.simple_dataset(t3)\n# df = md.spelling_mistakes(t3) - skipping for now, so computationally expensive!", "Train, test split: Splitting on 2015", "df['issue_d'].hist(bins = 50)\nplt.title('Seasonality in lending')\nplt.ylabel('Frequency')\nplt.xlabel('Year')\nplt.show()", "We can use past years as predictors of future years. One challenge with this approach is that we confound time-sensitive trends (for example, global economic shocks to interest rates - such as the financial crisis of 2008, or the growth of Lending Club to broader and broader markets of debtors) with differences related to time-insensitive factors (such as a debtor's riskiness).\nTo account for this, we can bundle our training and test sets into the following blocks:\n- Before 2015: Training set\n- 2015 to current: Test set", "old = df[df['issue_d'] < '2015']\nnew = df[df['issue_d'] >= '2015']\nold.shape, new.shape", "We'll use the pre-2015 data on interest rates (old) to fit a model and cross-validate it. 
We'll then use the post-2015 data as a 'wild' dataset to test against.\nFitting the model", "X = old.drop(['int_rate', 'issue_d', 'earliest_cr_line', 'grade'], 1)\ny = old['int_rate']\nX.shape, y.shape\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)\nX_train.shape, X_test.shape, y_train.shape, y_test.shape\n\nrfr = RandomForestRegressor(n_estimators = 10, max_features='sqrt')\nscores = cross_val_score(rfr, X, y, cv = 3)\nprint(\"Accuracy: {:.2f} (+/- {:.2f})\".format(scores.mean(), scores.std() * 2))\n\nX_new = new.drop(['int_rate', 'issue_d', 'earliest_cr_line', 'grade'], 1)\ny_new = new['int_rate']\n\nnew_scores = cross_val_score(rfr, X_new, y_new, cv = 3)\nprint(\"Accuracy: {:.2f} (+/- {:.2f})\".format(new_scores.mean(), new_scores.std() * 2))\n\n# QUINN: Let's just use this - all data\nX_total = df.drop(['int_rate', 'issue_d', 'earliest_cr_line', 'grade'], 1)\ny_total = df['int_rate']\n\ntotal_scores = cross_val_score(rfr, X_total, y_total, cv = 3)\nprint(\"Accuracy: {:.2f} (+/- {:.2f})\".format(total_scores.mean(), total_scores.std() * 2))", "Fitting the model\nWe fit the model on all the data, and evaluate feature importances.", "rfr.fit(X_total, y_total)\n\nfi = [{'importance': x, 'feature': y} for (x, y) in \\\n sorted(zip(rfr.feature_importances_, X_total.columns))]\nfi = pd.DataFrame(fi)\nfi.sort_values(by = 'importance', ascending = False, inplace = True) \nfi.head()\n\ntop5 = fi.head()\ntop5.plot(kind = 'bar')\nplt.xticks(range(5), top5['feature'])\nplt.title('Feature importances (top 5 features)')\nplt.ylabel('Relative importance')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "변분 추론으로 일반화된 선형 혼합 효과 모델 맞춤 조정하기\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://www.tensorflow.org/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org에서 보기</a> </td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab에서 실행</a></td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub에서 소스 보기</a>\n</td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">노트북 다운로드</a></td>\n</table>", "#@title Install { display-mode: \"form\" }\nTF_Installation = 'System' #@param ['TF Nightly', 'TF Stable', 'System']\n\nif TF_Installation == 'TF Nightly':\n !pip install -q --upgrade tf-nightly\n print('Installation of `tf-nightly` complete.')\nelif TF_Installation == 'TF Stable':\n !pip install -q --upgrade tensorflow\n print('Installation of `tensorflow` complete.')\nelif TF_Installation == 'System':\n pass\nelse:\n raise ValueError('Selection Error: Please select a valid '\n 'installation option.')\n\n#@title Install { display-mode: \"form\" }\nTFP_Installation = \"System\" #@param [\"Nightly\", \"Stable\", \"System\"]\n\nif TFP_Installation == \"Nightly\":\n !pip install -q tfp-nightly\n print(\"Installation of `tfp-nightly` complete.\")\nelif TFP_Installation == \"Stable\":\n !pip install -q --upgrade tensorflow-probability\n print(\"Installation of `tensorflow-probability` complete.\")\nelif TFP_Installation == \"System\":\n pass\nelse:\n raise ValueError(\"Selection Error: Please select a valid \"\n \"installation option.\")", "요약\n이 colab에서는 TensorFlow Probability의 변분 추론을 사용하여 일반화된 선형 혼합 효과 모델을 맞춤 조정하는 방법을 보여줍니다.\n모델 패밀리\n일반화된 선형 혼합 효과 모델(GLMM)은 샘플별 노이즈를 예측된 선형 응답에 통합한다는 점을 제외하면 일반화된 선형 모델(GLM)과 유사합니다. 
이것은 거의 보이지 않는 특성이 더 일반적으로 보이는 특성과 정보를 공유할 수 있기 때문에 부분적으로 유용합니다.\n생성 프로세스로서 일반화된 선형 혼합 효과 모델(GLMM)은 다음과 같은 특징이 있습니다.\n$$ \\begin{align} \\text{for } &amp; r = 1\\ldots R: \\hspace{2.45cm}\\text{# for each random-effect group}\\ &amp;\\begin{aligned} \\text{for } &amp;c = 1\\ldots |C_r|: \\hspace{1.3cm}\\text{# for each category (\"level\") of group $r$}\\ &amp;\\begin{aligned} \\beta_{rc} &amp;\\sim \\text{MultivariateNormal}(\\text{loc}=0_{D_r}, \\text{scale}=\\Sigma_r^{1/2}) \\end{aligned} \\end{aligned}\\ \\text{for } &amp; i = 1 \\ldots N: \\hspace{2.45cm}\\text{# for each sample}\\ &amp;\\begin{aligned} &amp;\\eta_i = \\underbrace{\\vphantom{\\sum_{r=1}^R}x_i^\\top\\omega}\\text{fixed-effects} + \\underbrace{\\sum{r=1}^R z_{r,i}^\\top \\beta_{r,C_r(i) }}\\text{random-effects} \\ &amp;Y_i|x_i,\\omega,{z{r,i} , \\beta_r}_{r=1}^R \\sim \\text{Distribution}(\\text{mean}= g^{-1}(\\eta_i)) \\end{aligned} \\end{align} $$\n여기서\n$$ \\begin{align} R &amp;= \\text{number of random-effect groups}\\ |C_r| &amp;= \\text{number of categories for group $r$}\\ N &amp;= \\text{number of training samples}\\ x_i,\\omega &amp;\\in \\mathbb{R}^{D_0}\\ D_0 &amp;= \\text{number of fixed-effects}\\ C_r(i) &amp;= \\text{category (under group $r$) of the $i$th sample}\\ z_{r,i} &amp;\\in \\mathbb{R}^{D_r}\\ D_r &amp;= \\text{number of random-effects associated with group $r$}\\ \\Sigma_{r} &amp;\\in {S\\in\\mathbb{R}^{D_r \\times D_r} : S \\succ 0 }\\ \\eta_i\\mapsto g^{-1}(\\eta_i) &amp;= \\mu_i, \\text{inverse link function}\\ \\text{Distribution} &amp;=\\text{some distribution parameterizable solely by its mean} \\end{align} $$\n즉, 각 그룹의 모든 카테고리가 다변량 정규 분포의 샘플 $\\beta_{rc}$와 연결되어 있음을 의미합니다. $\\beta_{rc}$ 추출은 항상 독립적이지만 $r$ 그룹에 대해서만 동일하게 분포됩니다. $r\\in{1,\\ldots,R}$당 정확히 하나의 $\\Sigma_r$가 있습니다.\n샘플 그룹의 특성($z_{r,i}$)과 유사하게 결합하면 결과는 $i$번째 예측 선형 응답(그렇지 않으면 $x_i^\\top\\omega$)에 대한 샘플별 노이즈입니다.\n${\\Sigma_r:r\\in{1,\\ldots,R}}$를 추정할 때 본질적으로 임의 효과 그룹이 전달하는 노이즈의 양을 추정합니다. 그렇지 않으면 $x_i^\\top\\omega$에 있는 신호를 추출합니다.\n$\\text{Distribution}$, 역링크 함수 및 $g^{-1}$에 대한 다양한 옵션이 있습니다. 일반적인 옵션은 다음과 같습니다.\n\n$Y_i\\sim\\text{Normal}(\\text{mean}=\\eta_i, \\text{scale}=\\sigma)$,\n$Y_i\\sim\\text{Binomial}(\\text{mean}=n_i \\cdot \\text{sigmoid}(\\eta_i), \\text{total_count}=n_i)$, 및\n$Y_i\\sim\\text{Poisson}(\\text{mean}=\\exp(\\eta_i))$.\n\n더 많은 가능성은 tfp.glm 모듈을 참조하세요.\n변분 추론\n불행히도, 매개변수 $\\beta,{\\Sigma_r} _r^R$의 최대 가능성 추정치를 찾는 것은 비 분석 적분을 수반합니다. 이러한 문제를 피하고자 대신 다음을 수행합니다.\n\n부록에 $q_{\\lambda}$로 표시된 매개변수화된 분포 패밀리('대리 밀도')를 정의합니다.\n$q_{\\lambda}$가 실제 목표 밀도에 가깝도록 매개변수 $\\lambda$를 찾습니다.\n\n분포 패밀리는 적절한 차원의 독립적인 가우시안이 될 것이며, '목표 밀도에 가까움'이란 '쿨백-라우블러(Kullbakc-Leibler) 발산 최소화'를 의미합니다. 예를 들어, 잘 작성된 유도 및 동기는 '변분 추론: 통계학자를 위한 검토'의 섹션 2.2를 참조하세요. 특히, KL 발산을 최소화하는 것은 ELBO(evidence lower bound)를 최소화하는 것과 동일함을 보여줍니다.\n장난감 문제\nGelman 등(2007)의 '라돈 데이터세트'는 회귀에 대한 접근 방식을 입증하는 데 사용되는 데이터세트입니다(예: 밀접하게 관련된 PyMC3 블로그 게시물). 라돈 데이터세트에는 미국 전역에서 측정된 라돈의 실내 측정값이 포함되어 있습니다. 라돈은 자연적으로 발생하는 방사성 가스로 고농도에서 독성이 있습니다.\n데모를 위해, 지하실이 있는 가정에서 라돈 수치가 더 높다는 가설을 검증하는 데 관심이 있다고 가정해 보겠습니다. 또한 라돈 농도가 토양 유형, 즉 지리 문제와 관련이 있다고 의심합니다.\n이를 ML 문제로 만들기 위해, 판독 값이 측정된 층의 선형 함수를 기반으로 로그 라돈 수준을 예측하려고 합니다. 또한 카운티(county)를 임의 효과로 사용하여 지리로 인한 분산을 설명할 것입니다. 
즉, 일반화된 선형 혼합 효과 모델을 사용합니다.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport os\nfrom six.moves import urllib\n\nimport matplotlib.pyplot as plt; plt.style.use('ggplot')\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns; sns.set_context('notebook')\nimport tensorflow_datasets as tfds\n\nimport tensorflow.compat.v2 as tf\ntf.enable_v2_behavior()\n\nimport tensorflow_probability as tfp\ntfd = tfp.distributions\ntfb = tfp.bijectors", "또한 GPU의 가용성을 빠르게 확인합니다.", "if tf.test.gpu_device_name() != '/device:GPU:0':\n print(\"We'll just use the CPU for this run.\")\nelse:\n print('Huzzah! Found GPU: {}'.format(tf.test.gpu_device_name()))", "데이터세트 얻기\nTensorFlow 데이터세트에서 데이터세트를 로드하고 약간의 가벼운 전처리를 수행합니다.", "def load_and_preprocess_radon_dataset(state='MN'):\n \"\"\"Load the Radon dataset from TensorFlow Datasets and preprocess it.\n \n Following the examples in \"Bayesian Data Analysis\" (Gelman, 2007), we filter\n to Minnesota data and preprocess to obtain the following features:\n - `county`: Name of county in which the measurement was taken.\n - `floor`: Floor of house (0 for basement, 1 for first floor) on which the\n measurement was taken.\n\n The target variable is `log_radon`, the log of the Radon measurement in the\n house.\n \"\"\"\n ds = tfds.load('radon', split='train')\n radon_data = tfds.as_dataframe(ds)\n radon_data.rename(lambda s: s[9:] if s.startswith('feat') else s, axis=1, inplace=True)\n df = radon_data[radon_data.state==state.encode()].copy()\n\n df['radon'] = df.activity.apply(lambda x: x if x > 0. else 0.1)\n # Make county names look nice. \n df['county'] = df.county.apply(lambda s: s.decode()).str.strip().str.title()\n # Remap categories to start from 0 and end at max(category).\n df['county'] = df.county.astype(pd.api.types.CategoricalDtype())\n df['county_code'] = df.county.cat.codes\n # Radon levels are all positive, but log levels are unconstrained\n df['log_radon'] = df['radon'].apply(np.log)\n\n # Drop columns we won't use and tidy the index \n columns_to_keep = ['log_radon', 'floor', 'county', 'county_code']\n df = df[columns_to_keep].reset_index(drop=True)\n \n return df\n\ndf = load_and_preprocess_radon_dataset()\ndf.head()", "GLMM 패밀리 전문화하기\n이 섹션에서는 GLMM 패밀리를 라돈 수준 예측 작업에 전문화합니다. 이를 위해 먼저 GLMM의 고정 효과 특수 케이스를 고려합니다. $$ \\mathbb{E}[\\log(\\text{radon}_j)] = c + \\text{floor_effect}_j $$\n이 모델은 관측치 $j$의 로그 라돈이 $j$번째 판독 값이 측정된 층과 일정한 절편에 의해 예상대로 결정된다고 가정합니다. 의사 코드에서는 다음과 같이 작성할 수 있습니다.\ndef estimate_log_radon(floor):\n return intercept + floor_effect[floor]\n모든 층에 대해 학습된 가중치와 보편적인 intercept 항이 있습니다. 0층과 1층의 라돈 측정값을 보면 다음과 같이 시작하는 것이 좋습니다.", "fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 4))\ndf.groupby('floor')['log_radon'].plot(kind='density', ax=ax1);\nax1.set_xlabel('Measured log(radon)')\nax1.legend(title='Floor')\n\ndf['floor'].value_counts().plot(kind='bar', ax=ax2)\nax2.set_xlabel('Floor where radon was measured')\nax2.set_ylabel('Count')\nfig.suptitle(\"Distribution of log radon and floors in the dataset\");", "지리에 관한 내용을 포함하여 모델을 좀 더 정교하게 만드는 것이 아마도 더 좋을 것입니다. 라돈은 땅에 존재할 수 있는 우라늄의 붕괴 사슬의 일부이므로 지리를 설명하는 것이 중요합니다.\n$$ \\mathbb{E}[\\log(\\text{radon}_j)] = c + \\text{floor_effect}_j + \\text{county_effect}_j $$\n다시 하면, 의사 코드에서 다음과 같습니다.\ndef estimate_log_radon(floor, county):\n return intercept + floor_effect[floor] + county_effect[county]\n카운티별 가중치를 제외하고는 이전과 동일합니다.\n충분히 큰 훈련 세트가 주어지면 이는 합리적인 모델입니다. 하지만 미네소타의 데이터를 고려할 때 관측치 수가 작은 카운티가 많음을 알 수 있습니다. 
예를 들어, 85개 카운티 중 39개 카운티의 관측치가 5개 미만입니다.\n이는 카운티당 관측치 수가 증가함에 따라 위의 모델로 수렴하는 방식으로 모든 관측치 간에 통계적 강도를 공유하도록 동기를 부여합니다.", "fig, ax = plt.subplots(figsize=(22, 5));\ncounty_freq = df['county'].value_counts()\ncounty_freq.plot(kind='bar', ax=ax)\nax.set_xlabel('County')\nax.set_ylabel('Number of readings');", "이 모델을 맞춤 조정하면 county_effect 벡터는 훈련 샘플이 거의 없는 카운티에 대한 결과를 기억하게 될 것입니다. 아마도 과대적합이 발생하여 일반화가 불량할 수 있습니다.\nGLMM은 위의 두 GLM에 대해 적절한 타협점을 제공합니다. 다음과 같이 맞춤 조정하는 것을 고려할 수 있습니다.\n$$ \\log(\\text{radon}_j) \\sim c + \\text{floor_effect}_j + \\mathcal{N}(\\text{county_effect}_j, \\text{county_scale}) $$\n이 모델은 첫 번째 모델과 같지만, 가능성이 정규 분포가 되도록 고정했으며 단일 변수 county_scale을 통해 모든 카운티에서 분산을 공유합니다. 의사 코드는 다음과 같습니다.\ndef estimate_log_radon(floor, county):\n county_mean = county_effect[county]\n random_effect = np.random.normal() * county_scale + county_mean\n return intercept + floor_effect[floor] + random_effect\n관측된 데이터로 county_scale, county_mean 및 random_effect에 대한 결합 분포를 추론합니다. 글로벌 county_scale을 사용하면 카운티 간에 통계적 강도를 공유할 수 있습니다. 관측치가 많은 경우 관측치가 거의 없는 카운티 분산에 도움이 됩니다. 또한 더 많은 데이터를 수집하면 이 모델은 scale 변수가 풀링하지 않는 모델로 수렴됩니다. 이 데이터세트를 사용하더라도 두 모델 중 하나를 사용하여 관측치가 가장 많은 카운티에 대한 유사한 결론에 도달하게 됩니다.\n실험\n이제 TensorFlow에서 변분 추론으로 위의 GLMM을 맞춤 조정하려고 합니다. 먼저 데이터를 특성과 레이블로 분할합니다.", "features = df[['county_code', 'floor']].astype(int)\nlabels = df[['log_radon']].astype(np.float32).values.flatten()", "모델을 지정합니다.", "def make_joint_distribution_coroutine(floor, county, n_counties, n_floors):\n\n def model():\n county_scale = yield tfd.HalfNormal(scale=1., name='scale_prior')\n intercept = yield tfd.Normal(loc=0., scale=1., name='intercept')\n floor_weight = yield tfd.Normal(loc=0., scale=1., name='floor_weight')\n county_prior = yield tfd.Normal(loc=tf.zeros(n_counties),\n scale=county_scale,\n name='county_prior')\n random_effect = tf.gather(county_prior, county, axis=-1)\n\n fixed_effect = intercept + floor_weight * floor\n linear_response = fixed_effect + random_effect\n yield tfd.Normal(loc=linear_response, scale=1., name='likelihood')\n return tfd.JointDistributionCoroutineAutoBatched(model)\n\njoint = make_joint_distribution_coroutine(\n features.floor.values, features.county_code.values, df.county.nunique(),\n df.floor.nunique())\n\n# Define a closure over the joint distribution \n# to condition on the observed labels.\ndef target_log_prob_fn(*args):\n return joint.log_prob(*args, likelihood=labels)", "사후 확률 대리를 지정합니다.\n매개변수 $\\lambda$가 훈련 가능한 대리 패밀리 $q_{\\lambda}$를 구성합니다. 이 경우에 패밀리는 각 매개변수에 대해 하나의 분포를 갖는 독립적인 다변량 정규 분포이고 $\\lambda = {(\\mu_j, \\sigma_j)}$입니다. 여기서 $j$는 4개의 매개변수를 인덱싱합니다.\n대리 패밀리를 맞춤 조정하기 위한 메서드는 tf.Variables를 사용하는 것입니다. 또한 tfp.util.TransformedVariable을 Softplus와 같이 사용하여 scale 매개변수(훈련 가능함)를 양수로 제한합니다. 
또한 양수 매개변수인 전체 scale_prior에 Softplus를 적용합니다.\n최적화를 돕기 위해 약간의 지터를 사용하여 이러한 훈련 가능한 변수를 초기화합니다.", "# Initialize locations and scales randomly with `tf.Variable`s and \n# `tfp.util.TransformedVariable`s.\n_init_loc = lambda shape=(): tf.Variable(\n tf.random.uniform(shape, minval=-2., maxval=2.))\n_init_scale = lambda shape=(): tfp.util.TransformedVariable(\n initial_value=tf.random.uniform(shape, minval=0.01, maxval=1.),\n bijector=tfb.Softplus())\nn_counties = df.county.nunique()\n\nsurrogate_posterior = tfd.JointDistributionSequentialAutoBatched([\n tfb.Softplus()(tfd.Normal(_init_loc(), _init_scale())), # scale_prior\n tfd.Normal(_init_loc(), _init_scale()), # intercept\n tfd.Normal(_init_loc(), _init_scale()), # floor_weight\n tfd.Normal(_init_loc([n_counties]), _init_scale([n_counties]))]) # county_prior", "이 셀은 다음과 같이 tfp.experimental.vi.build_factored_surrogate_posterior로 대체할 수 있습니다.\npython\nsurrogate_posterior = tfp.experimental.vi.build_factored_surrogate_posterior(\n event_shape=joint.event_shape_tensor()[:-1],\n constraining_bijectors=[tfb.Softplus(), None, None, None])\n결과\n다루기 쉬운 매개변수화된 분포 패밀리를 정의한 다음, 목표 분포에 가까운 다루기 쉬운 분포를 갖도록 매개변수를 선택하는 것이 목표임을 기억하세요.\n위의 대리 분포를 빌드했으며 옵티마이저와 주어진 스텝 수를 허용하는 tfp.vi.fit_surrogate_posterior를 사용하여 음성 ELBO를 최소화하는 대리 모델에 대한 매개변수를 찾습니다(대리자와 대상 분포 사이의 쿨백-라이블러 발산을 최소화하는 것과 일치함).\n반환 값은 각 스텝에서 음의 ELBO이며 surrogate_posterior의 분포는 옵티마이저에서 찾은 매개변수로 업데이트됩니다.", "optimizer = tf.optimizers.Adam(learning_rate=1e-2)\n\nlosses = tfp.vi.fit_surrogate_posterior(\n target_log_prob_fn, \n surrogate_posterior,\n optimizer=optimizer,\n num_steps=3000, \n seed=42,\n sample_size=2)\n\n(scale_prior_, \n intercept_, \n floor_weight_, \n county_weights_), _ = surrogate_posterior.sample_distributions()\n\nprint(' intercept (mean): ', intercept_.mean())\nprint(' floor_weight (mean): ', floor_weight_.mean())\nprint(' scale_prior (approx. mean): ', tf.reduce_mean(scale_prior_.sample(10000)))\n\nfig, ax = plt.subplots(figsize=(10, 3))\nax.plot(losses, 'k-')\nax.set(xlabel=\"Iteration\",\n ylabel=\"Loss (ELBO)\",\n title=\"Loss during training\",\n ylim=0);", "추정된 평균 카운티(county) 효과와 해당 평균의 불확실성을 플롯할 수 있습니다. 이를 관찰 횟수로 정렬했으며 가장 큰 수는 왼쪽에 있습니다. 관측치가 많은 카운티에서는 불확실성이 작지만, 관측치가 한두 개만 있는 카운티에서는 불확실성이 더 큽니다.", "county_counts = (df.groupby(by=['county', 'county_code'], observed=True)\n .agg('size')\n .sort_values(ascending=False)\n .reset_index(name='count'))\n\nmeans = county_weights_.mean()\nstds = county_weights_.stddev()\n\nfig, ax = plt.subplots(figsize=(20, 5))\n\nfor idx, row in county_counts.iterrows():\n mid = means[row.county_code]\n std = stds[row.county_code]\n ax.vlines(idx, mid - std, mid + std, linewidth=3)\n ax.plot(idx, means[row.county_code], 'ko', mfc='w', mew=2, ms=7)\n\nax.set(\n xticks=np.arange(len(county_counts)),\n xlim=(-1, len(county_counts)),\n ylabel=\"County effect\",\n title=r\"Estimates of county effects on log radon levels. (mean $\\pm$ 1 std. dev.)\",\n)\nax.set_xticklabels(county_counts.county, rotation=90);", "실제로 추정된 표준 편차에 대한 로그 수의 관측치를 플롯하여 이를 더 직접적으로 볼 수 있으며 관계가 거의 선형임을 알 수 있습니다.", "fig, ax = plt.subplots(figsize=(10, 7))\nax.plot(np.log1p(county_counts['count']), stds.numpy()[county_counts.county_code], 'o')\nax.set(\n ylabel='Posterior std. 
deviation',\n xlabel='County log-count',\n title='Having more observations generally\\nlowers estimation uncertainty'\n);", "R에서 lme4와 비교하기", "%%shell\nexit # Trick to make this block not execute.\n\nradon = read.csv('srrs2.dat', header = TRUE)\nradon = radon[radon$state=='MN',]\nradon$radon = ifelse(radon$activity==0., 0.1, radon$activity)\nradon$log_radon = log(radon$radon)\n\n# install.packages('lme4')\nlibrary(lme4)\nfit <- lmer(log_radon ~ 1 + floor + (1 | county), data=radon)\nfit\n\n# Linear mixed model fit by REML ['lmerMod']\n# Formula: log_radon ~ 1 + floor + (1 | county)\n# Data: radon\n# REML criterion at convergence: 2171.305\n# Random effects:\n# Groups Name Std.Dev.\n# county (Intercept) 0.3282\n# Residual 0.7556\n# Number of obs: 919, groups: county, 85\n# Fixed Effects:\n# (Intercept) floor\n# 1.462 -0.693", "다음 표에 결과가 요약되어 있습니다.", "print(pd.DataFrame(data=dict(intercept=[1.462, tf.reduce_mean(intercept_.mean()).numpy()],\n floor=[-0.693, tf.reduce_mean(floor_weight_.mean()).numpy()],\n scale=[0.3282, tf.reduce_mean(scale_prior_.sample(10000)).numpy()]),\n index=['lme4', 'vi']))", "이 표는 VI 결과가 lme4의 ~10% 내에 있음을 나타냅니다. 이러한 결과는 다음과 같은 이유로 다소 놀랍습니다.\n\nlme4는 VI가 아닌 Laplace의 메서드을 기반으로 합니다.\n이 colab에서 실제로 수렴하려는 노력은 없었습니다.\n하이퍼 매개변수를 조정하기 위해 최소한의 노력만 기울였습니다.\n데이터를 정규화하거나 전처리하는 데 아무런 노력도 기울이지 않았습니다(예: 센터 특성 등).\n\n결론\n이 colab에서는 일반화된 선형 혼합 효과 모델을 설명하고 TensorFlow Probability를 사용하여 변분 추론으로 맞춤 조정하는 방법을 보여주었습니다. 장난감 문제에는 수백 개의 훈련 샘플만 있었지만, 여기에 사용된 기술은 대규모에서 필요한 기술과 정확히 동일합니다." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
w4zir/ml17s
assignments/assignment01-house-price-using-regression.ipynb
mit
[ "CSAL4243: Introduction to Machine Learning\nMuhammad Mudassir Khan (mudasssir.khan@ucp.edu.pk)\nAssignment 1: Linear Regression\nIn this assignment you are going to learn how Linear Regression works by using the code for linear regression and gradient descent we have been looking at in the class. You are also going to use linear regression from scikit-learn library for machine learning. You are going to learn how to download data from kaggle (a website for datasets and machine learning) and upload submissions to kaggle competitions. And you will be able to compete with the world.\nOverview\n\nPseudocode\nTasks\nLoad and analyze data\n\n\nTask 1: Effect of Learning Rate $\\alpha$\nLoad X and y\nLinear Regression with Gradient Descent code\nRun Gradient Descent on training data\nPlot trained line on data\n\n\nTask 2: Predict test data output and submit it to Kaggle\nUpload .csv file to Kaggle.com\n\n\nTask 3: Use scikit-learn for Linear Regression\nTask 4: Multivariate Linear Regression\nResources\nCredits\n\n<br>\n<br>\nPseudocode\nLinear Regressio with Gradient Descent\n\nLoad training data into X_train and y_train\n[Optionally] normalize features X_train using $x^i = \\frac{x^i - \\mu^i}{\\rho^i}$ where $\\mu^i$ is mean and $\\rho^i$ is standard deviation of feature $i$\nInitialize hyperparameters\niterations \nlearning rate $\\alpha$\n\n\nInitialize $\\theta_s$ \nAt each iteration\nCompute cost using $J(\\theta) = \\frac{1}{2m}\\sum_{i=1}^{m} (h(x^i) - y^i)^2$ where $h(x) = \\theta_0 + \\theta_1 x_1 + \\theta_2 x_2 .... + \\theta_n x_n$\nUpdate $\\theta_s$ using $\\begin{align} \\; \\; & \\theta_j := \\theta_j - \\alpha \\frac{1}{m} \\sum\\limits_{i=1}^{m} (h_\\theta(x_{i}) - y_{i}) \\cdot x^j_{i} \\; & & \\text{for j := 0...n} \\end{align}$\n[Optionally] Break if cost $J(\\theta)$ does not change.\n\n\n\n<br>\n<br>\nDownload House Prices dataset\nThe dataset you are going to use in this assignment is called House Prices, available at kaggle. To download the dataset go to dataset data tab. Download 'train.csv', 'test.csv', 'data_description.txt' and 'sample_submission.csv.gz' files. 'train.csv' is going to be used for training the model. 'test.csv' is used to test the model i.e. generalization. 'data_description.txt' contain feature description of the dataset. 
'sample_submission.csv.gz' contain sample submission file that you need to generate to be submitted to kaggle.\n<br>\nTasks\n\nEffect of Learning Rate $\\alpha$ \nPredict test data output and submit it to Kaggle\nUse scikit-learn for Linear Regression\nMultivariate Linear Regression\n\nLoad and analyze data", "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nfrom sklearn import linear_model\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\n\n# read house_train.csv data in pandas dataframe df_train using pandas read_csv function\ndf_train = pd.read_csv('datasets/house_price/train.csv', encoding='utf-8')\n\n# check data by printing first few rows\ndf_train.head()\n\n# check columns in dataset\ndf_train.columns\n\n# check correlation matrix, darker means more correlation\ncorrmat = df_train.corr()\nf, aX_train= plt.subplots(figsize=(12, 9))\nsns.heatmap(corrmat, vmax=.8, square=True);\n\n# SalePrice correlation matrix with top k variables\nk = 10 #number of variables for heatmap\ncols = corrmat.nlargest(k, 'SalePrice')['SalePrice'].index\ncm = np.corrcoef(df_train[cols].values.T)\nsns.set(font_scale=1.25)\nhm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10}, yticklabels=cols.values, xticklabels=cols.values)\nplt.show()\n\n#scatterplot with some important variables\ncols = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt']\nsns.set()\nsns.pairplot(df_train[cols], size = 2.5)\nplt.show();", "<br>\nTask 1: Effect of Learning Rate $\\alpha$\nUse Linear Regression code below using X=\"GrLivArea\" as input variable and y=\"SalePrice\" as target variable. Use different values of $\\alpha$ given in table below and comment on why they are useful or not and which one is a good choice.\n\n$\\alpha=0.000001$:\n$\\alpha=0.00000001$:\n$\\alpha=0.000000001$:\n\n<br>\nLoad X and y", "# Load X and y variables from pandas dataframe df_train\ncols = ['GrLivArea']\nX_train = np.array(df_train[cols])\ny_train = np.array(df_train[[\"SalePrice\"]])\n\n# Get m = number of samples and n = number of features\nm = X_train.shape[0]\nn = X_train.shape[1]\n\n# append a column of 1's to X for theta_0\nX_train = np.insert(X_train,0,1,axis=1)", "Linear Regression with Gradient Descent code", "iterations = 1500\nalpha = 0.000000001 # change it and find what happens\n\ndef h(X, theta): #Linear hypothesis function\n hx = np.dot(X,theta)\n return hx\n\n\ndef computeCost(theta,X,y): #Cost function\n \"\"\"\n theta is an n- dimensional vector, X is matrix with n- columns and m- rows\n y is a matrix with m- rows and 1 column\n \"\"\"\n #note to self: *.shape is (rows, columns)\n return float((1./(2*m)) * np.dot((h(X,theta)-y).T,(h(X,theta)-y)))\n\n#Actual gradient descent minimizing routine\ndef gradientDescent(X,y, theta_start = np.zeros((n+1,1))):\n \"\"\"\n theta_start is an n- dimensional vector of initial theta guess\n X is input variable matrix with n- columns and m- rows. 
y is a matrix with m- rows and 1 column.\n    \"\"\"\n    theta = theta_start\n    j_history = [] #Used to plot cost as function of iteration\n    theta_history = [] #Used to visualize the minimization path later on\n    for meaninglessvariable in range(iterations):\n        tmptheta = theta\n        # append for plotting\n        j_history.append(computeCost(theta,X,y))\n        theta_history.append(list(theta[:,0]))\n        #Simultaneously updating theta values\n        for j in range(len(tmptheta)):\n            tmptheta[j] = theta[j] - (alpha/m)*np.sum((h(X,theta) - y)*np.array(X[:,j]).reshape(m,1))\n        theta = tmptheta\n    return theta, theta_history, j_history", "Run Gradient Descent on training data", "#Actually run gradient descent to get the best-fit theta values\ninitial_theta = np.zeros((n+1,1));\ntheta, theta_history, j_history = gradientDescent(X_train,y_train,initial_theta)\n\nplt.plot(j_history)\nplt.title(\"Convergence of Cost Function\")\nplt.xlabel(\"Iteration number\")\nplt.ylabel(\"Cost function\")\nplt.show()", "Plot trained line on data", "# predict output for training data\nhx_train= h(X_train, theta)\n\n# plot it\nplt.scatter(X_train[:,1],y_train)\nplt.plot(X_train[:,1],hx_train[:,0], color='red')\nplt.show()", "<br>\nTask 2: Predict test data output and submit it to Kaggle\nIn this task we will use the model trained above to predict \"SalePrice\" on test data. Test data has all the input variables/features but no target variable. Our aim is to use the trained model to predict the target variable for test data. This is called generalization i.e. how well your model works on unseen data. The output in the form \"Id\",\"SalePrice\" in a .csv file should be submitted to kaggle. Please provide your score on kaggle after this step as an image. It will be compared to the 5 feature Linear Regression later.", "# read data in pandas frame df_test and check first few rows\n    # write code here\n\ndf_test.head()\n\n# check statistics of test data, make sure no data is missing.\nprint(df_test.shape)\ndf_test[cols].describe()\n\n# Get X_test, no target variable (SalePrice) provided in test data. It is what we need to predict.\nX_test = np.array(df_test[cols])\n\n#Insert the usual column of 1's into the \"X\" matrix\nX_test = np.insert(X_test,0,1,axis=1)\n\n# predict test data labels i.e. y_test\npredict = h(X_test, theta)\n\n# save prediction as .csv file\npd.DataFrame({'Id': df_test.Id, 'SalePrice': predict[:,0]}).to_csv(\"predict1.csv\", index=False)", "Upload .csv file to Kaggle.com\n\nCreate an account at https://www.kaggle.com\nGo to https://www.kaggle.com/c/house-prices-advanced-regression-techniques/submit\nUpload \"predict1.csv\" file created above.\nUpload your score as an image below.", "from IPython.display import Image\nImage(filename='images/asgn_01.png', width=500)", "<br>\nTask 3: Use scikit-learn for Linear Regression\nIn this task we are going to use the Linear Regression class from the scikit-learn library to train the same model. The aim is to move from understanding the algorithm to using an existing, well-established library. There is a Linear Regression example available on the scikit-learn website as well.\n\nUse the scikit-learn linear regression class to train the model on df_train\nCompare the parameters from scikit-learn linear_model.LinearRegression.coef_ to the $\\theta_s$ from earlier. \nUse the linear_model.LinearRegression.predict on test data and upload it to kaggle. See if your score improves. Provide screenshot.\nNote: no need to append 1's to X_train. 
Scikit-learn's linear regression has a parameter called fit_intercept that is enabled by default.", "# import scikit-learn linear model\nfrom sklearn import linear_model\n\n# get X and y\n    # write code here\n\n\n# Create linear regression object\n    # write code here check link above for example\n\n# Train the model using the training sets. Use fit(X,y) command\n    # write code here\n\n# The coefficients\nprint('Intercept: \\n', regr.intercept_)\nprint('Coefficients: \\n', regr.coef_)\n# The mean squared error\nprint(\"Mean squared error: %.2f\"\n      % np.mean((regr.predict(X_train) - y_train) ** 2))\n# Explained variance score: 1 is perfect prediction\nprint('Variance score: %.2f' % regr.score(X_train, y_train))\n\n\n# read test X without 1's\n    # write code here\n\n# predict output for test data. Use predict(X) command.\npredict2 = # write code here\n\n# remove negative sales by replacing them with zeros\npredict2[predict2<0] = 0\n\n# save prediction as predict2.csv file\n    # write code here", "<br>\nTask 4: Multivariate Linear Regression\nLastly use columns ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt'] and scikit-learn or the code given above to predict output on test data. Upload it to kaggle like earlier and see how much it improves your score.\n\nEverything remains the same except the dimensions of X change.\nThere might be some data missing from the test or train data that you can check using pandas.DataFrame.describe() function. Below we provide some helper code for dealing with that missing data.", "# define columns ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt']\n    # write code here\n\n# check features range and statistics. Training dataset looks fine as all features have the same count.\ndf_train[cols].describe()\n\n# Load X and y variables from pandas dataframe df_train\n    # write code here\n\n# Get m = number of samples and n = number of features\n    # write code here\n\n#Feature normalizing the columns (subtract mean, divide by standard deviation)\n#Store the mean and std for later use\n#Note don't modify the original X matrix, use a copy\nstored_feature_means, stored_feature_stds = [], []\nXnorm = np.array(X_train).copy()\nfor icol in range(Xnorm.shape[1]):\n    stored_feature_means.append(np.mean(Xnorm[:,icol]))\n    stored_feature_stds.append(np.std(Xnorm[:,icol]))\n    #Skip the first column if 1's\n#    if not icol: continue\n    #Faster to not recompute the mean and std again, just used stored values\n    Xnorm[:,icol] = (Xnorm[:,icol] - stored_feature_means[-1])/stored_feature_stds[-1]\n    \n# check data after normalization\npd.DataFrame(data=Xnorm,columns=cols).describe()\n\n# Run Linear Regression from scikit-learn or code given above. \n\n    # write code here. Repeat from above.\n\n\n# To predict output using ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt'] as input features.\n# Check features range and statistics to see if there is any missing data. 
\n# As you can see from count \"GarageCars\" and \"TotalBsmtSF\" has 1 missing value each.\ndf_test[cols].describe()\n\n# Replace missing value with the mean of the feature\ndf_test['GarageCars'] = df_test['GarageCars'].fillna((df_test['GarageCars'].mean()))\ndf_test['TotalBsmtSF'] = df_test['TotalBsmtSF'].fillna((df_test['TotalBsmtSF'].mean()))\n\ndf_test[cols].describe()\n\n# read test X without 1's\n # write code here\n\n# predict using trained model\npredict3 = # write code here\n\n# replace any negative predicted saleprice by zero\npredict3[predict3<0] = 0\n\n\n# predict target/output variable for test data using the trained model and upload to kaggle.\n # write code to save output as predict3.csv here", "Resources\nCourse website: https://w4zir.github.io/ml17s/\nCourse resources\nCredits\nRaschka, Sebastian. Python machine learning. Birmingham, UK: Packt Publishing, 2015. Print.\nAndrew Ng, Machine Learning, Coursera\nScikit Learn Linear Regression" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jni/numpy-skimage-tutorial
notebooks/3_morphological_operations.ipynb
bsd-3-clause
[ "Morphological operations\nMorphology is the study of shapes. In image processing, some simple operations can get you a long way. The first things to learn are erosion and dilation. In erosion, we look at a pixel’s local neighborhood and replace the value of that pixel with the minimum value of that neighborhood. In dilation, we instead choose the maximum.", "import numpy as np\nfrom matplotlib import pyplot as plt, cm\n%matplotlib inline\nimport skdemo\nplt.rcParams['image.cmap'] = 'cubehelix'\nplt.rcParams['image.interpolation'] = 'none'\n\nimage = np.array([[0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0],\n [0, 0, 1, 1, 1, 0, 0],\n [0, 0, 1, 1, 1, 0, 0],\n [0, 0, 1, 1, 1, 0, 0],\n [0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0]], dtype=np.uint8)\nplt.imshow(image);", "The documentation for scikit-image's morphology module is\nhere.\nImportantly, we must use a structuring element, which defines the local\nneighborhood of each pixel. To get every neighbor (up, down, left, right, and\ndiagonals), use morphology.square; to avoid diagonals, use\nmorphology.diamond:", "from skimage import morphology\nsq = morphology.square(width=3)\ndia = morphology.diamond(radius=1)\nprint(sq)\nprint(dia)", "The central value of the structuring element represents the pixel being considered, and the surrounding values are the neighbors: a 1 value means that pixel counts as a neighbor, while a 0 value does not. So:", "skdemo.imshow_all(image, morphology.erosion(image, sq), shape=(1, 2))", "and", "skdemo.imshow_all(image, morphology.dilation(image, sq))", "and", "skdemo.imshow_all(image, morphology.dilation(image, dia))", "Erosion and dilation can be combined into two slightly more sophisticated operations, opening and closing. Here's an example:", "image = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],\n [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],\n [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],\n [0, 0, 1, 1, 1, 0, 0, 1, 0, 0],\n [0, 0, 1, 1, 1, 0, 0, 1, 0, 0],\n [0, 0, 1, 1, 1, 0, 0, 1, 0, 0],\n [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],\n [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],\n [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], np.uint8)\nplt.imshow(image);", "What happens when run an erosion followed by a dilation of this image?\nWhat about the reverse?\nTry to imagine the operations in your head before trying them out below.", "skdemo.imshow_all(image, morphology.opening(image, sq)) # erosion -> dilation\n\nskdemo.imshow_all(image, morphology.closing(image, sq)) # dilation -> erosion", "Exercise: use morphological operations to remove noise from a binary image.", "from skimage import data, color\nhub = color.rgb2gray(data.hubble_deep_field()[350:450, 90:190])\nplt.imshow(hub);", "Remove the smaller objects to retrieve the large galaxy using a boolean array, and then use skimage.exposure.histogram and plt.plot to show the light distribution from the galaxy.\n\n<div style=\"height: 400px;\"></div>", "%reload_ext load_style\n%load_style ../themes/tutorial.css" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
szitenberg/ReproPhyloVagrant
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
mit
[ "This section starts with a Project that already contains alignments:", "from reprophylo import *\npj = unpickle_pj('./outputs/my_project.pkpj',\n git=False)", "If we call the keys of the pj.alignments dictionary, we can see the names of the alignments it contains:", "pj.alignments.keys()", "3.7.1 Configuring an alignment trimming process\nLike the sequence alignment phase, alignment trimming has its own configuration class, the TrimalConf class. An object of this class will generate a command-line and the required input files for the program TrimAl, but will not execute the process (this is shown below). Once the process has been successfully executed, this TrimalConf object is also stored in pj.used_methods and it can be invoked as a report.\n3.7.1.1 Example1, the default gappyput algorithm\nWith TrimalConf, instead of specifying loci names, we provide alignment names, as they appear in the keys of pj.alignments", "gappyout = TrimalConf(pj, # The Project\n \n method_name='gappyout', # Any unique string ('gappyout' is default)\n \n program_name='trimal', # No alternatives in this ReproPhylo version\n \n cmd='default', # the default is trimal. Change it here\n # or in pj.defaults['trimal']\n \n alns=['MT-CO1@mafftLinsi'], # 'all' by default\n \n trimal_commands={'gappyout': True} # By default, the gappyout algorithm is used.\n ) ", "3.7.1.2 List comprehension to subset alignments\nIn this example, it is easy enough to copy and paste alignment names into a list and pass it to TrimalConf. But this is more difficult if we want to fish out a subset of alignments from a very large list of alignments. In such cases, Python's list comprehension is very useful. Below I show two uses of list comprehension, but the more you feel comfortable with this approach, the better.\nGetting locus names of rRNA loci\nIf you read the code line that follows very carefully, you will see it quite literally says \"take the name of each Locus found in pj.loci if its feature type is rRNA, and put it in a list\":", "rRNA_locus_names = [locus.name for locus in pj.loci if locus.feature_type == 'rRNA']\nprint rRNA_locus_names", "what we get is a list of names of our rRNA loci. \nGetting alignment names that have locus names of rRNA loci\nThe following line says: \"take the key of each alignment from the pj.alignments dictionary if the first word before the '@' symbol is in the list of rRNA locus names, and put this key in a list\":", "rRNA_alignment_names = [key for key in pj.alignments.keys() if key.split('@')[0] in rRNA_locus_names]\nprint rRNA_alignment_names", "We get a list of keys, of the rRNA loci alignments we produced on the previous section, and which are stored in the pj.alignments dictionary. 
We can now pass this list to a new TrimalConf instance that will only process rRNA locus alignments:", "gt50 = TrimalConf(pj,\n method_name='gt50',\n alns = rRNA_alignment_names,\n trimal_commands={'gt': 0.5} # This will keep positions with up to\n # 50% gaps.\n )", "3.7.2 Executing the alignment trimming process\nAs for the alignment phase, this is done with a Project method, which accepts a list of TrimalConf objects.", "pj.trim([gappyout, gt50])", "Once used, these objects are also placed in the pj.used_methods dictionary, and they can be printed out for observation:", "print pj.used_methods['gappyout']", "3.7.3 Accessing trimmed sequence alignments\n3.7.3.1 The pj.trimmed_alignments dictionary\nThe trimmed alignments themselves are stored in the pj.trimmed_alignments dictionary, using keys that follow this pattern: locus_name@alignment_method_name@trimming_method_name where alignment_method_name is the name you have provided to your AlnConf object and trimming_method_name is the one you provided to your TrimalConf object.", "pj.trimmed_alignments", "3.7.3.2 Accessing a MultipleSeqAlignment object\nA trimmed alignment can be easily accessed and manipulated with any of Biopython's AlignIO tricks using the fta Project method:", "print pj.fta('18s@muscleDefault@gt50')[:4,410:420].format('phylip-relaxed')", "3.7.3.3 Writing trimmed sequence alignment files\nTrimmed alignment text files can be dumped in any AlignIO format for usage in an external command line or GUI program. When writing to files, you can control the header of the sequence by, for example, adding the organism name of the gene name, or by replacing the feature ID with the record ID:", "# record_id and source_organism are feature qualifiers in the SeqRecord object\n# See section 3.4\nfiles = pj.write_trimmed_alns(id=['record_id','source_organism'],\n format='fasta')\nfiles", "The files will always be written to the current working directory (where this notebook file is), and can immediately be moved programmatically to avoid clutter:", "# make a new directory for your trimmed alignment files:\nif not os.path.exists('trimmed_alignment_files'):\n os.mkdir('trimmed_alignment_files')\n \n# move the files there\nfor f in files:\n os.rename(f, \"./trimmed_alignment_files/%s\"%f)", "3.7.3.4 Viewing trimmed alignments\nTrimmed alignments can be viewed in the same way as alignments, but using this command:", "pj.show_aln('MT-CO1@mafftLinsi@gappyout',id=['source_organism'])\n\npickle_pj(pj, 'outputs/my_project.pkpj')", "3.7.4 Quick reference", "# Make a TrimalConf object\ntrimconf = TrimalConf(pj, **kwargs)\n\n# Execute alignment process\npj.trim([trimconf])\n\n# Show AlnConf description\nprint pj.used_methods['method_name']\n\n# Fetch a MultipleSeqAlignment object\ntrim_aln_obj = pj.fta('locus_name@aln_method_name@trim_method_name')\n\n# Write alignment text files\npj.write_trimmed_alns(id=['some_feature_qualifier'], format='fasta')\n# the default feature qualifier is 'feature_id'\n# 'fasta' is the default format\n\n# View alignment in browser\npj.show_aln('locus_name@aln_method_name@trim_method_name',id=['some_feature_qualifier'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
SchwaZhao/networkproject1
03_Introduction_To_Supervised_Machine_Learning.ipynb
mit
[ "In this section we will see the basics of supervised machine learning with a logistic regression classifier. We will see a simple example and see how to evaluate the performance of a binary classifier and avoid over-fitting.\nSupervised machine learning\nThis section is partially inspired by the following Reference: http://cs229.stanford.edu/notes/cs229-notes1.pdf\nSupervised learning consists of inferring a function from a labeled training set. On the other hand, unsupervised learning is a machine learning technique used when the input data is not labeled. Clustering is a example of unsupervised learning. \nFor supervised learning, we define:\n\nThe features (input variables) $x^{(i)}\\in \\mathbb{X}$ \nThe target (output we are trying to predict) $y^{(i)} \\in \\mathbb{Y}$\n\nA pair $(x^{(i)},y^{(i)})$ is a training example.\nThe set ${(x^{(i)},y^{(i)}); i = 1,...,m}$ is the training set:\nThe goal of supervised learning is to learn a function $h: \\mathbb{X}\\mapsto\\mathbb{Y}$, called the hypothesis, so that $h(x)$ is a good \npredictor of the corresponding $y$.\n\nRegression correspond to the case where $y$ is a continuous variable\nClassification correspond to the case where $y$ can only take a small number of discrete values\n\nExamples: \n- Univariate Linear Regression: $h_w(x) = w_0+w_1x$, with $\\mathbb{X} = \\mathbb{Y} = \\mathbb{R}$\n- Multivariate Linear Regression: $$h_w(x) = w_0+w_1x_1 + ... + w_nx_n = \\sum_{i=0}^{n}w_ix_i = w^Tx,$$\nwith $\\mathbb{Y} = \\mathbb{R}$ and $\\mathbb{X} = \\mathbb{R^n}$.\nHere $w_0$ is the intercept with the convention that $x_0=1$ to simplify notation.\nBinary Classification with Logistic Regression\n\n\n$y$ can take only two values, 0 or 1. For example, if $y$ is the sentiment associated with the tweet,\n$y=1$ if the tweet is \"positive\" and $y=0$ is the tweet is \"negative\".\n\n\n$x^{(i)}$ represents the features of a tweet. For example the presence or absence of certain words.\n\n\n$y^{(i)}$ is the label of the training example represented by $x^{(i)}$.\n\n\nSince $y\\in{0,1}$ we want to limit $h_w(x)$ between $[0,1]$.\nThe Logistic regression consists of choosing $h_w(x)$ as\n$$\nh_w(x) = \\frac{1}{1+e^{-w^Tx}}\n$$\nwhere $w^Tx = \\sum_{i=0}^{n}w_ix_i$ and $h_w(x) = g(w^Tx)$ with\n$$\ng(x)=\\frac{1}{1+e^{-w^Tx}}.\n$$\n$g(x)$ is the logistic function or sigmoid function", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nx = np.linspace(-10,10)\ny = 1/(1+np.exp(-x))\n\np = plt.plot(x,y)\nplt.grid(True)", "$g(x)\\rightarrow 1$ for $x\\rightarrow\\infty$\n$g(x)\\rightarrow 0$ for $x\\rightarrow -\\infty$\n$g(0) = 1/2$\n\nFinally, to go from the regression to the classification, we can simply apply the following condition:\n$$\ny=\\left{\n \\begin{array}{@{}ll@{}}\n 1, & \\text{if}\\ h_w(x)>=1/2 \\\n 0, & \\text{otherwise}\n \\end{array}\\right.\n$$\nLet's clarify the notation. 
We have $m$ training samples and $n$ features, our training examples can be represented by a $m$-by-$n$ matrix $\\underline{\\underline{X}}=(x_{ij})$ ($m$-by-$n+1$, if we include the intercept term) that contains the training examples, $x^{(i)}$, in its rows.\nThe target values of the training set can be represented as a $m$-dimensional vector $\\underline{y}$ and the parameters \nof our model as\na $n$-dimensional vector $\\underline{w}$ ($n+1$ if we take into account the intercept).\nNow, for a given training example $x^{(i)}$, the function that we want to learn (or fit) can be written:\n$$\nh_\\underline{w}(x^{(i)}) = \\frac{1}{1+e^{-\\sum_{j=0}^n w_j x_{ij}}}\n$$", "# Simple example:\n# we have 20 students that took an exam and we want to know if we can use \n# the number of hours they studied to predict if they pass or fail the\n# exam\n\n# m = 20 training samples \n# n = 1 feature (number of hours)\n\nX = np.array([0.50, 0.75, 1.00, 1.25, 1.50, 1.75, 1.75, 2.00, 2.25, 2.50,\n 2.75, 3.00, 3.25, 3.50, 4.00, 4.25, 4.50, 4.75, 5.00, 5.50])\n# 1 = pass, 0 = fail\ny = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1])\n\nprint(X.shape)\n\nprint(y.shape)\n\np = plt.plot(X,y,'o')\ntx = plt.xlabel('x [h]')\nty = plt.ylabel('y ')\n\n", "Likelihood of the model\nHow to find the parameters, also called weights, $\\underline{w}$ that best fit our training data?\nWe want to find the weights $\\underline{w}$ that maximize the likelihood of observing the target $\\underline{y}$ given the observed features $\\underline{\\underline{X}}$.\nWe need a probabilistic model that gives us the probability of observing the value $y^{(i)}$ given the features $x^{(i)}$.\nThe function $h_\\underline{w}(x^{(i)})$ can be used precisely for that:\n$$\nP(y^{(i)}=1|x^{(i)};\\underline{w}) = h_\\underline{w}(x^{(i)})\n$$\n$$\nP(y^{(i)}=0|x^{(i)};\\underline{w}) = 1 - h_\\underline{w}(x^{(i)})\n$$\nwe can write it more compactly as:\n$$\nP(y^{(i)}|x^{(i)};\\underline{w}) = (h_\\underline{w}(x^{(i)}))^{y^{(i)}} ( 1 - h_\\underline{w}(x^{(i)}))^{1-y^{(i)}}\n$$\nwhere $y^{(i)}\\in{0,1}$\nWe see that $y^{(i)}$ is a random variable following a Bernouilli distribution with expectation $h_\\underline{w}(x^{(i)})$.\nThe Likelihood function of a statistical model is defined as:\n$$\n\\mathcal{L}(\\underline{w}) = \\mathcal{L}(\\underline{w};\\underline{\\underline{X}},\\underline{y}) = P(\\underline{y}|\\underline{\\underline{X}};\\underline{w}).\n$$\nThe likelihood takes into account all the $m$ training samples of our training dataset and estimates the likelihood \nof observing $\\underline{y}$ given $\\underline{\\underline{X}}$ and $\\underline{w}$. Assuming that the $m$ training examples were generated independently, we can write:\n$$\n\\mathcal{L}(\\underline{w}) = P(\\underline{y}|\\underline{\\underline{X}};\\underline{w}) = \\prod_{i=1}^m P(y^{(i)}|x^{(i)};\\underline{w}) = \\prod_{i=1}^m (h_\\underline{w}(x^{(i)}))^{y^{(i)}} ( 1 - h_\\underline{w}(x^{(i)}))^{1-y^{(i)}}.\n$$\nThis is the function that we want to maximize. 
It is usually much simpler to maximize the logarithm of this function, which is equivalent.\n$$\nl(\\underline{w}) = \\log\\mathcal{L}(\\underline{w}) = \\sum_{i=1}^{m} \\left(y^{(i)} \\log h_\\underline{w}(x^{(i)}) + (1- y^{(i)})\\log\\left(1- h_\\underline{w}(x^{(i)})\\right) \\right)\n$$\nLoss function and linear models\nAn other way of formulating this problem is by defining a Loss function $L\\left(y^{(i)}, f(x^{(i)})\\right)$ such that:\n$$\n\\sum_{i=1}^{m} L\\left(y^{(i)}, f(x^{(i)})\\right) = - l(\\underline{w}).\n$$\nAnd now the problem consists of minimizing $\\sum_{i=1}^{m} L\\left(y^{(i)}, f(x^{(i)})\\right)$ over all the possible values of $\\underline{w}$.\nUsing the definition of $h_\\underline{w}(x^{(i)})$ you can show that $L$ can be written as:\n$$\nL\\left(y^{(i)}=1, f(x^{(i)})\\right) = \\log_2\\left(1+e^{-f(x^{(i)})}\\right)\n$$\nand\n$$\nL\\left(y^{(i)}=0, f(x^{(i)})\\right) = \\log_2\\left(1+e^{-f(x^{(i)})}\\right) - \\log_2\\left(e^{-f(x^{(i)})}\\right)\n$$\nwhere $f(x^{(i)}) = \\sum_{j=0}^n w_j x_{ij}$ is called the decision function.", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfx = np.linspace(-5,5)\nLy1 = np.log2(1+np.exp(-fx))\nLy0 = np.log2(1+np.exp(-fx)) - np.log2(np.exp(-fx))\n\np = plt.plot(fx,Ly1,label='L(1,f(x))')\np = plt.plot(fx,Ly0,label='L(0,f(x))')\nplt.xlabel('f(x)')\nplt.ylabel('L')\nplt.legend()\n\n# coming back to our simple example\n\ndef Loss(x_i,y_i, w0, w1):\n fx = w0 + x_i*w1\n \n if y_i == 1:\n return np.log2(1+np.exp(-fx))\n if y_i == 0:\n return np.log2(1+np.exp(-fx)) - np.log2(np.exp(-fx))\n else:\n raise Exception('y_i must be 0 or 1')\n \ndef sumLoss(x,y, w0, w1):\n sumloss = 0\n for x_i, y_i in zip(x,y):\n sumloss += Loss(x_i,y_i, w0, w1)\n return sumloss\n \n\n# lets compute the loss function for several values\nw0s = np.linspace(-10,20,100)\nw1s = np.linspace(-10,20,100)\n\nsumLoss_vals = np.zeros((w0s.size, w1s.size))\nfor k, w0 in enumerate(w0s):\n for l, w1 in enumerate(w1s):\n sumLoss_vals[k,l] = sumLoss(X,y,w0,w1)\n \n\n\n# let's find the values of w0 and w1 that minimize the loss\nind0, ind1 = np.where(sumLoss_vals == sumLoss_vals.min())\n\nprint((ind0,ind1))\nprint((w0s[ind0], w1s[ind1]))\n\n# plot the loss function\np = plt.pcolor(w0s, w1s, sumLoss_vals)\nc = plt.colorbar()\n\np2 = plt.plot(w1s[ind1], w0s[ind0], 'ro')\n\ntx = plt.xlabel('w1')\nty = plt.ylabel('w0')\n\n\n", "Here we found the minimum of the loss function simply by computing it over a large range of values. In practice, this approach is not possible when the dimensionality of the loss function (number of weights) is very large. 
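Before moving on, here is a minimal, hedged sketch (not part of the original notebook) of how the same two-parameter loss can be minimized without an exhaustive grid evaluation, by following the gradient of the negative log-likelihood — the approach discussed next. It assumes the `X` (hours studied) and `y` (pass/fail) arrays from the exam example above.

```python
# Minimal gradient-descent sketch on the two-parameter logistic loss (illustration only).
# Assumes the X (hours studied) and y (pass/fail) numpy arrays defined above.
w0, w1 = 0.0, 0.0        # start from an arbitrary point
learning_rate = 0.01

for step in range(50000):
    z = w0 + w1 * X                  # decision function f(x) for every sample
    h = 1.0 / (1.0 + np.exp(-z))     # sigmoid h_w(x)
    grad_w0 = np.sum(h - y)          # derivative of the negative log-likelihood w.r.t. w0
    grad_w1 = np.sum((h - y) * X)    # derivative of the negative log-likelihood w.r.t. w1
    w0 -= learning_rate * grad_w0
    w1 -= learning_rate * grad_w1

# Should land close to the minimum found by the grid evaluation above.
print(w0, w1)
```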
To find the minimum of the loss function, the gradient descent algorithm (or stochastic gradient descent) is often used.", "# plot the solution\n\nx = np.linspace(0,6,100)\n\ndef h_w(x, w0=w0s[ind0], w1=w1s[ind1]):\n return 1/(1+np.exp(-(w0+x*w1)))\n\np1 = plt.plot(x, h_w(x))\np2 = plt.plot(X,y,'ro')\ntx = plt.xlabel('x [h]')\nty = plt.ylabel('y ')\n\n\n# probability of passing the exam if you worked 5 hours:\nprint(h_w(5))", "We will use the package sci-kit learn (http://scikit-learn.org/) that provide access to many tools for machine learning, data mining and data analysis.", "# The same thing using the sklearn module\nfrom sklearn.linear_model import LogisticRegression\n\nmodel = LogisticRegression(C=1e10)\n\n# to train our model we use the \"fit\" method\n# we have to reshape X because we have only one feature here\nmodel.fit(X.reshape(-1,1),y)\n\n# to see the weights\nprint(model.coef_)\nprint(model.intercept_)\n\n# use the trained model to predict new values\nprint(model.predict_proba(5))\nprint(model.predict(5))", "Note that although the loss function is not linear, the decision function is a linear function of the weights and features. This is why the Logistic regression is called a linear model.\nOther linear models are defined by different loss functions. For example:\n- Perceptron: $L \\left(y^{(i)}, f(x^{(i)})\\right) = \\max(0, -y^{(i)}\\cdot f(x^{(i)}))$\n- Hinge-loss (soft-margin Support vector machine (SVM) classification): $L \\left(y^{(i)}, f(x^{(i)})\\right) = \\max(0, 1-y^{(i)}\\cdot f(x^{(i)}))$\nSee http://scikit-learn.org/stable/modules/sgd.html for more examples.", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfx = np.linspace(-5,5, 200)\nLogit = np.log2(1+np.exp(-fx))\nPercep = np.maximum(0,- fx) \nHinge = np.maximum(0, 1- fx)\nZeroOne = np.ones(fx.size)\nZeroOne[fx>=0] = 0\n\np = plt.plot(fx,Logit,label='Logistic Regression')\np = plt.plot(fx,Percep,label='Perceptron')\np = plt.plot(fx,Hinge,label='Hinge-loss')\np = plt.plot(fx,ZeroOne,label='Zero-One loss')\nplt.xlabel('f(x)')\nplt.ylabel('L')\nplt.legend()\nylims = plt.ylim((0,7))", "Evaluating the performance of a binary classifier\nThe confusion matrix allows to visualize the performance of a classifier:\n| | predicted positive | predicted negative |\n| --- |:---:|:---:|\n| real positive | TP | FN |\n| real negative | FP | TN | \nFor each prediction $y_p$, we put it in one of the four categories based on the true value of $y$:\n- TP = True Positive\n- FP = False Positive\n- TN = True Negative\n- FN = False Negative\nWe can then evalute several measures, for example:\nAccuracy:\n$\\text{Accuracy}=\\frac{TP+TN}{TP+TN+FP+FN}$\nAccuracy is the proportion of true results (both true positives and true negatives) among the total number of cases examined. However, accuracy is not necessarily a good measure of the predictive power of a model. See the example below:\nAccuracy paradox:\nA classifier with these results:\n| |Predicted Negative | Predicted Positive|\n| --- |---|---|\n|Negative Cases |9,700 | 150|\n|Positive Cases |50 |100|\nhas an accuracy = 98%.\nNow consider the results of a classifier that systematically predict a negative result independently of the input:\n| |Predicted Negative| Predicted Positive|\n|---|---|---|\n|Negative Cases| 9,850 | 0|\n|Positive Cases| 150 |0 |\nThe accuracy of this classifier is 98.5% while it is clearly useless. Here the less accurate model is more useful than the more accurate one. 
This is why accuracy should not be used (alone) to evaluate the performance of a classifier. \nPrecision and Recall are usually prefered:\nPrecision:\n$\\text{Precision}=\\frac{TP}{TP+FP}$\nPrecision measures the fraction of correct positive or the lack of false positive.\nIt answers the question: \"Given a positive prediction from the classifier, how likely is it to be correct ?\"\nRecall:\n$\\text{Recall}=\\frac{TP}{TP+FN}$\nRecall measures the proportion of positives that are correctly identified as such or the lack of false negative.\nIt answers the question: \"Given a positive example, will the classifier detect it ?\"\n$F_1$ score:\nIn order to account for the precision and recall of a classifier, $F_1$ score is the harmonic mean of both measures:\n$F_1 = 2 \\cdot \\frac{\\mathrm{precision} \\cdot \\mathrm{recall}}{ \\mathrm{precision} + \\mathrm{recall}} = 2 \\frac{TP}{2TP +FP+FN}$\nWhen evaluating the performance of a classifier it is important to test is on a different set of values than then set we used to train it. Indeed, we want to know how the classifier performs on new data not on the training data. For this purpose we separate the training set in two: a part that we use to train the model and a part that we use to test it. This method is called cross-validation. Usually, we split the training set in N parts (typically 3 or 10), train the model on N-1 parts and test it on the remaining part. We then repeat this procedure with all the combination of training and testing parts and average the performance metrics from each tests. Sci-kit learn allows to easily perform cross-validation: http://scikit-learn.org/stable/modules/cross_validation.html\nRegularization and over-fitting\nOverfitting happens when your model is too complicated to generalise for new data. When your model fits your data perfectly, it is unlikely to fit new data well.\n<img src=\"https://upload.wikimedia.org/wikipedia/commons/1/19/Overfitting.svg\" style=\"width: 250px;\"/>\nThe model in green is over-fitted. It performs very well on the training set, but it does not generalize well to new data compared to the model in black.\nTo avoid over-fitting, it is important to have a large training set and to use cross-validation to evaluate the performance of a model. 
Additionally, regularization is used to make the model less \"complex\" and more general.\nRegularization consists in adding a term $R(\\underline{w})$, that penalizes too \"complex\" models, to the loss function, so that the training error that we want to minimize is:\n$E(\\underline{w}) = \\sum_{i=1}^{m} L\\left(y^{(i)}, f(x^{(i)})\\right) + \\lambda R(\\underline{w})$,\nwhere $\\lambda$ is a parameter that controls the strength of the regularization.\nUsual choices for $R(\\underline{w})$ are:\n- L2 norm of the weights: $R(\\underline{w}) := \\frac{1}{2} \\sum_{j=1}^{n} w_j^2$, which forces small weights in the solution,\n- L1 norm of the weights: $R(\\underline{w}) := \\sum_{j=1}^{n} |w_j|$, (also referred to as the Lasso) which leads to sparse solutions (with several zero weights).\nThe choice of the regularization and of its strength is usually made by selecting the best option during cross-validation.", "# for example\nfrom sklearn.model_selection import cross_val_predict\nfrom sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix\n\n# logistic regression with L2 regularization, C controls the strength of the regularization\n# C = 1/lambda\nmodel = LogisticRegression(C=1, penalty='l2')\n\n# cross validation using 10 folds\ny_pred = cross_val_predict(model, X.reshape(-1,1), y=y, cv=10)\n\nprint(confusion_matrix(y,y_pred))\n\n\nprint('Accuracy = ' + str(accuracy_score(y, y_pred)))\nprint('Precision = ' + str(precision_score(y, y_pred)))\nprint('Recall = ' + str(recall_score(y, y_pred)))\nprint('F_1 = ' + str(f1_score(y, y_pred)))\n\n# try to run it with different number of folds for the cross-validation \n# and different values of the regularization strength\n\n" ]
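Following the suggestion in the last code comment, one possible way to pick the regularization strength is a small cross-validated grid search over C. This is a hedged sketch, not part of the original notebook; it assumes the `X` and `y` arrays from the exam example and scikit-learn's `GridSearchCV`.

```python
# Hedged sketch: select the regularization strength C by cross-validated grid search.
# Assumes X and y from the exam example above.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

param_grid = {'C': [0.01, 0.1, 1, 10, 100]}  # C = 1/lambda, so small C = strong regularization
grid = GridSearchCV(LogisticRegression(penalty='l2'), param_grid, cv=5, scoring='f1')
grid.fit(X.reshape(-1, 1), y)

print('best C:', grid.best_params_['C'])
print('best cross-validated F1:', grid.best_score_)
```

The best C found this way is data-dependent; the point is simply that the regularization strength is treated as a hyper-parameter and chosen on held-out folds rather than on the training fit.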
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
donK23/pyData-Projects
HolmesTopicModels/holmes_topic_models/notebook/2_Modeling.ipynb
apache-2.0
[ "Modeling\nML Tasks", "import numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Input", "from sklearn.datasets import load_files\n\ncorpus = load_files(\"../data/\")\n\ndoc_count = len(corpus.data)\nprint(\"Doc count:\", doc_count)\nassert doc_count is 56, \"Wrong number of documents loaded, should be 56 (56 stories)\"", "Vectorizer", "from helpers.tokenizer import TextWrangler\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\n\nbow_stem = CountVectorizer(strip_accents=\"ascii\", tokenizer=TextWrangler(kind=\"stem\"))\nX_bow_stem = bow_stem.fit_transform(corpus.data)\n\ntfidf_stem = TfidfVectorizer(strip_accents=\"ascii\", tokenizer=TextWrangler(kind=\"stem\"))\nX_tfidf_stem = tfidf_stem.fit_transform(corpus.data)", "Models", "from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD, NMF\n\nn_topics = 5\n\nlda = LatentDirichletAllocation(n_components=n_topics, \n learning_decay=0.5, learning_offset=1.,\n random_state=23)\nlsa = TruncatedSVD(n_components=n_topics, random_state=23)\nnmf = NMF(n_components=n_topics, solver=\"mu\", beta_loss=\"kullback-leibler\", alpha=0.1, random_state=23)\n\nlda_params = {\"lda__learning_decay\": [0.5, 0.7, 0.9],\n \"lda__learning_offset\": [1., 5., 10.]}", "Pipelines", "from sklearn.pipeline import Pipeline\n\nlda_pipe = Pipeline([\n (\"bow\", bow_stem),\n (\"lda\", lda)\n])\n\nlsa_pipe = Pipeline([\n (\"tfidf\", tfidf_stem),\n (\"lsa\", lsa)\n])\n\nnmf_pipe = Pipeline([\n (\"tfidf\", tfidf_stem),\n (\"nmf\", nmf)\n])", "Gridsearch", "from sklearn.model_selection import GridSearchCV\n\nlda_model = GridSearchCV(lda_pipe, param_grid=lda_params, cv=5, n_jobs=-1)\n#lda_model.fit(corpus.data)\n#lda_model.best_params_", "Training", "lda_pipe.fit(corpus.data)\nnmf_pipe.fit(corpus.data)\nlsa_pipe.fit(corpus.data)", "Evaluation", "print(\"LDA\")\nprint(\"Log Likelihood:\", lda_pipe.score(corpus.data))", "Visual Inspection", "def df_topic_model(vectorizer, model, n_words=20):\n keywords = np.array(vectorizer.get_feature_names())\n topic_keywords = []\n for topic_weights in model.components_:\n top_keyword_locs = (-topic_weights).argsort()[:n_words]\n topic_keywords.append(keywords.take(top_keyword_locs))\n \n df_topic_keywords = pd.DataFrame(topic_keywords)\n df_topic_keywords.columns = ['Word '+str(i) for i in range(df_topic_keywords.shape[1])]\n df_topic_keywords.index = ['Topic '+str(i) for i in range(df_topic_keywords.shape[0])]\n \n return df_topic_keywords\n\nprint(\"LDA\")\ndf_topic_model(vectorizer=bow_stem, model=lda_pipe.named_steps.lda, n_words=15)\n\nprint(\"LSA\")\ndf_topic_model(vectorizer=tfidf_stem, model=lsa_pipe.named_steps.lsa, n_words=15)\n\nprint(\"NMF\")\ndf_topic_model(vectorizer=tfidf_stem, model=nmf_pipe.named_steps.nmf, n_words=15)\n\nimport pyLDAvis\nfrom pyLDAvis.sklearn import prepare\npyLDAvis.enable_notebook()\n\nprepare(lda_pipe.named_steps.lda, X_bow_stem, bow_stem, mds=\"tsne\")\n\nprepare(nmf_pipe.named_steps.nmf, X_tfidf_stem, tfidf_stem, mds=\"tsne\")", "Conclusion:\nTopic models derived from different approaches look dissimilar. Top word distribution of NMF appears most \nmeaningful, mostly because its topics doesn't share same words (due to NMF algorithm). LSA topic model is \nbetter interpretable than its LDA counterpart. Nonetheless, topics from both are hard to distinguish and \ndoesn't make much sense. 
Therefore I'll go with the NMF topic model for the assginment to novel collections\nstep.\nJaccard Index", "df_topic_word_lda = df_topic_model(vectorizer=bow_stem, model=lda_pipe.named_steps.lda, n_words=10)\ndf_topic_word_lsa = df_topic_model(vectorizer=tfidf_stem, model=lsa_pipe.named_steps.lsa, n_words=10)\ndf_topic_word_nmf = df_topic_model(vectorizer=tfidf_stem, model=nmf_pipe.named_steps.nmf, n_words=10)\n\ndef jaccard_index(list1, list2):\n s1 = set(list1)\n s2 = set(list2)\n jaccard_index = len(s1.intersection(s2)) / len(s1.union(s2))\n return jaccard_index\n\nsims_lda_lsa, sims_lda_nmf, sims_lsa_nmf = {}, {}, {}\nassert df_topic_word_lda.shape[0] == df_topic_word_lsa.shape[0] == df_topic_word_nmf.shape[0], \"n_topics mismatch\"\n\nfor ix, row in df_topic_word_lda.iterrows(): \n l1 = df_topic_word_lda.loc[ix, :].values.tolist()\n l2 = df_topic_word_lsa.loc[ix, :].values.tolist()\n l3 = df_topic_word_nmf.loc[ix, :].values.tolist()\n sims_lda_lsa[ix] = jaccard_index(l1, l2)\n sims_lda_nmf[ix] = jaccard_index(l1, l3)\n sims_lsa_nmf[ix] = jaccard_index(l2, l3)\n\ndf_jaccard_sims = pd.DataFrame([sims_lda_lsa, sims_lda_nmf, sims_lsa_nmf])\ndf_jaccard_sims.index = [\"LDA vs LSA\", \"LDA vs NMF\", \"LSA vs NMF\"]\ndf_jaccard_sims[\"mean_sim\"] = df_jaccard_sims.mean(axis=1)\ndf_jaccard_sims", "Conclusion:\nTopics derived from different topic modeling approaches are fundamentally dissimilar.\nDocument-topic Assignment", "nmf_topic_distr = nmf_pipe.transform(corpus.data)\n\ncollections_map = {0: \"His Last Bow\", 1: \"The Adventures of Sherlock Holmes\",\n 2: \"The Case-Book of Sherlock_Holmes\", 3: \"The Memoirs of Sherlock Holmes\",\n 4: \"The Return of Sherlock Holmes\"}\n\n# Titles created from dominant words in topics\nnovel_collections_map = {0: \"The Whispering Ways Sherlock Holmes Waits to Act on Waste\", \n 1: \"Vengeful Wednesdays: Unexpected Incidences on the Tapering Train by Sherlock Holmes\",\n 2: \"A Private Journey of Sherlock Holmes: Thirteen Unfolded Veins on the Move\",\n 3: \"Sherlock Holmes Tumbling into the hanging arms of Scylla\",\n 4: \"The Shooking Jaw of Sherlock Holmes in the Villa of the Baronet\"}\n\nprint(\"Novel Sherlock Holmes Short Stories Collections:\")\nfor _,title in novel_collections_map.items():\n print(\"*\", title)\n\ntopics = [\"Topic\" + str(i) for i in range(n_topics)]\ndocs = [\" \".join(f_name.split(\"/\")[-1].split(\".\")[0].split(\"_\")) \n for f_name in corpus.filenames]\n\ndf_document_topic = pd.DataFrame(np.round(nmf_topic_distr, 3), columns=topics, index=docs)\ndf_document_topic[\"assigned_topic\"] = np.argmax(df_document_topic.values, axis=1)\ndf_document_topic[\"orig_collection\"] = [collections_map[item] for item in corpus.target]\ndf_document_topic[\"novel_collection\"] = [novel_collections_map.get(item, item) \n for item in df_document_topic.assigned_topic.values]\n\ndf_novel_assignment = df_document_topic.sort_values(\"assigned_topic\").loc[:, [\"orig_collection\", \n \"novel_collection\"]]\ndf_novel_assignment\n\nfrom yellowbrick.text import TSNEVisualizer\n\ntsne = TSNEVisualizer()\ntsne.fit(X_tfidf_stem, df_document_topic.novel_collection)\ntsne.poof()", "Conclusion:\nA new ordering of short stories from the Sherlock Holmes series into collections based on NMF topic models is possible. Naming of collections according to dominant words in topics is also possible, but they sound strange and doesn't make much sense. 
The projection of word vectors from the documents looks slightly more structured than the original ordering by the author. Nevertheless, the cost of this ordering is that it loses some of the tension in the canon (e.g. \"The Final Problem\" and \"The Empty House\" are assigned to the same collection). So after all, I'd go with the original ordering by Sir Arthur Conan Doyle." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
NEONScience/NEON-Data-Skills
tutorials/Python/Lidar/intro-lidar/classify_raster_with_threshold-2018-py/classify_raster_with_threshold-2018-py.ipynb
agpl-3.0
[ "syncID: b0860577d1994b6e8abd23a6edf9e005\ntitle: \"Classify a Raster Using Threshold Values in Python - 2018\"\ndescription: \"Learn how to read NEON lidar raster GeoTIFFs (e.g., CHM, slope, aspect) into Python numpy arrays with gdal and create a classified raster object.\" \ndateCreated: 2018-07-04 \nauthors: Bridget Hass\ncontributors: Donal O'Leary, Max Burner\nestimatedTime: 1 hour\npackagesLibraries: numpy, gdal, matplotlib, matplotlib.pyplot, os\ntopics: lidar, raster, remote-sensing\nlanguagesTool: python\ndataProduct: DP1.30003, DP3.30015, DP3.30024, DP3.30025\ncode1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Lidar/intro-lidar/classify_raster_with_threshold-2018-py/classify_raster_with_threshold-2018-py.ipynb\ntutorialSeries: intro-lidar-py-series\nurlTitle: classify-raster-thresholds-2018-py\n\nIn this tutorial, we will learn how to read NEON lidar raster GeoTIFFS \n(e.g., CHM, slope aspect) into Python numpy arrays with gdal and create a \nclassified raster object.\n<div id=\"ds-objectives\" markdown=\"1\">\n\n### Objectives\n\nAfter completing this tutorial, you will be able to:\n\n* Read NEON lidar raster GeoTIFFS (e.g., CHM, slope aspect) into Python numpy arrays with gdal.\n* Create a classified raster object using thresholds.\n\n### Install Python Packages\n\n* **numpy**\n* **gdal** \n* **matplotlib** \n\n### Download Data\n\nFor this lesson, we will be using a 1km tile of a Canopy Height Model derived from lidar data collected at the Smithsonian Environmental Research Center (SERC) NEON site. <a href=\"https://ndownloader.figshare.com/files/25787420\">Download Data Here</a>.\n\n<a href=\"https://ndownloader.figshare.com/files/25787420\" class=\"link--button link--arrow\">\nDownload Dataset</a>\n\n</div>\n\nIn this tutorial, we will work with the NEON AOP L3 LiDAR ecoysystem structure (Canopy Height Model) data product. For more information about NEON data products and the CHM product DP3.30015.001, see the <a href=\"http://data.neonscience.org/data-products/DP3.30015.001\" target=\"_blank\">NEON Data Product Catalog</a>. \nFirst, let's import the required packages and set our plot display to be in-line:", "import numpy as np\nimport gdal, copy\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport warnings\nwarnings.filterwarnings('ignore')", "Open a GeoTIFF with GDAL\nLet's look at the SERC Canopy Height Model (CHM) to start. We can open and read this in Python using the gdal.Open function:", "# Note that you will need to update the filepath below according to your local machine\nchm_filename = '/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_CHM.tif'\nchm_dataset = gdal.Open(chm_filename)", "Read GeoTIFF Tags\nThe GeoTIFF file format comes with associated metadata containing information about the location and coordinate system/projection. 
Once we have read in the dataset, we can access this information with the following commands:", "#Display the dataset dimensions, number of bands, driver, and geotransform \ncols = chm_dataset.RasterXSize; print('# of columns:',cols)\nrows = chm_dataset.RasterYSize; print('# of rows:',rows)\nprint('# of bands:',chm_dataset.RasterCount)\nprint('driver:',chm_dataset.GetDriver().LongName)", "Use GetProjection\nWe can use the gdal GetProjection method to display information about the coordinate system and EPSG code.", "print('projection:',chm_dataset.GetProjection())", "Use GetGeoTransform\nThe geotransform contains information about the origin (upper-left corner) of the raster, the pixel size, and the rotation angle of the data. All NEON data in the latest format have zero rotation. In this example, the values correspond to:", "print('geotransform:',chm_dataset.GetGeoTransform())", "In this case, the geotransform values correspond to:\n\nLeft-Most X Coordinate = 367000.0\nW-E Pixel Resolution = 1.0\nRotation (0 if Image is North-Up) = 0.0\nUpper Y Coordinate = 4307000.0\nRotation (0 if Image is North-Up) = 0.0\nN-S Pixel Resolution = -1.0 \n\nThe negative value for the N-S Pixel resolution reflects that the origin of the image is the upper left corner. We can convert this geotransform information into a spatial extent (xMin, xMax, yMin, yMax) by combining information about the origin, number of columns & rows, and pixel size, as follows:", "chm_mapinfo = chm_dataset.GetGeoTransform()\nxMin = chm_mapinfo[0]\nyMax = chm_mapinfo[3]\n\nxMax = xMin + chm_dataset.RasterXSize/chm_mapinfo[1] #divide by pixel width \nyMin = yMax + chm_dataset.RasterYSize/chm_mapinfo[5] #divide by pixel height (note sign +/-)\nchm_ext = (xMin,xMax,yMin,yMax)\nprint('chm raster extent:',chm_ext)", "Use GetRasterBand\nWe can read in a single raster band with GetRasterBand and access information about this raster band such as the No Data Value, Scale Factor, and Statitiscs as follows:", "chm_raster = chm_dataset.GetRasterBand(1)\nnoDataVal = chm_raster.GetNoDataValue(); print('no data value:',noDataVal)\nscaleFactor = chm_raster.GetScale(); print('scale factor:',scaleFactor)\nchm_stats = chm_raster.GetStatistics(True,True)\nprint('SERC CHM Statistics: Minimum=%.2f, Maximum=%.2f, Mean=%.3f, StDev=%.3f' % \n (chm_stats[0], chm_stats[1], chm_stats[2], chm_stats[3]))", "Use ReadAsArray\nFinally we can convert the raster to an array using the ReadAsArray method. Cast the array to a floating point value using astype(np.float). 
Once we generate the array, we want to set No Data Values to NaN, and apply the scale factor:", "chm_array = chm_dataset.GetRasterBand(1).ReadAsArray(0,0,cols,rows).astype(np.float)\nchm_array[chm_array==int(noDataVal)]=np.nan #Assign CHM No Data Values to NaN\nchm_array=chm_array/scaleFactor\nprint('SERC CHM Array:\\n',chm_array) #display array values\n\nchm_array.shape\n\n# Calculate the % of pixels that are NaN and non-zero:\npct_nan = np.count_nonzero(np.isnan(chm_array))/(rows*cols)\nprint('% NaN:',round(pct_nan*100,2))\nprint('% non-zero:',round(100*np.count_nonzero(chm_array)/(rows*cols),2))", "Plot Canopy Height Data\nTo get a better idea of the dataset, we can use a similar function to plot_aop_refl that we used in the NEON AOP reflectance tutorials:", "def plot_band_array(band_array,refl_extent,colorlimit,ax=plt.gca(),title='',cmap_title='',colormap=''):\n plot = plt.imshow(band_array,extent=refl_extent,clim=colorlimit); \n cbar = plt.colorbar(plot,aspect=40); plt.set_cmap(colormap); \n cbar.set_label(cmap_title,rotation=90,labelpad=20);\n plt.title(title); ax = plt.gca(); \n ax.ticklabel_format(useOffset=False, style='plain'); #do not use scientific notation #\n rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90); #rotate x tick labels 90 degrees", "Histogram of Data\nAs we did with the reflectance tile, it is often useful to plot a histogram of the geotiff data in order to get a sense of the range and distribution of values. First we'll make a copy of the array and remove the nan values.", "import copy\nchm_nonan_array = copy.copy(chm_array)\nchm_nonan_array = chm_nonan_array[~np.isnan(chm_array)]\nplt.hist(chm_nonan_array,weights=np.zeros_like(chm_nonan_array)+1./\n (chm_array.shape[0]*chm_array.shape[1]),bins=50);\nplt.title('Distribution of SERC Canopy Height')\nplt.xlabel('Tree Height (m)'); plt.ylabel('Relative Frequency')", "On your own, adjust the number of bins, and range of the y-axis to get a good idea of the distribution of the canopy height values. We can see that most of the values are zero. In SERC, many of the zero CHM values correspond to bodies of water as well as regions of land without trees. Let's look at a histogram and plot the data without zero values:", "chm_nonzero_array = copy.copy(chm_array)\nchm_nonzero_array[chm_array==0]=np.nan\nchm_nonzero_nonan_array = chm_nonzero_array[~np.isnan(chm_nonzero_array)]\n# Use weighting to plot relative frequency\nplt.hist(chm_nonzero_nonan_array,bins=50);\n\n# plt.hist(chm_nonzero_nonan_array.flatten(),50) \nplt.title('Distribution of SERC Non-Zero Canopy Height')\nplt.xlabel('Tree Height (m)'); plt.ylabel('Relative Frequency')", "Note that it appears that the trees don't have a smooth or normal distribution, but instead appear blocked off in chunks. This is an artifact of the Canopy Height Model algorithm, which bins the trees into 5m increments (this is done to avoid another artifact of \"pits\" (Khosravipour et al., 2014). \nFrom the histogram we can see that the majority of the trees are < 30m. We can re-plot the CHM array, this time adjusting the color bar limits to better visualize the variation in canopy height. We will plot the non-zero array so that CHM=0 appears white.", "plot_band_array(chm_array,\n chm_ext,\n (0,35),\n title='SERC Canopy Height',\n cmap_title='Canopy Height, m',\n colormap='BuGn')", "Threshold Based Raster Classification\nNext, we will create a classified raster object. To do this, we will use the numpy.where function to create a new raster based off boolean classifications. 
Let's classify the canopy height into five groups:\n- Class 1: CHM = 0 m \n- Class 2: 0m < CHM <= 10m\n- Class 3: 10m < CHM <= 20m\n- Class 4: 20m < CHM <= 30m\n- Class 5: CHM > 30m\nWe can use np.where to find the indices where a boolean criteria is met.", "chm_reclass = copy.copy(chm_array)\nchm_reclass[np.where(chm_array==0)] = 1 # CHM = 0 : Class 1\nchm_reclass[np.where((chm_array>0) & (chm_array<=10))] = 2 # 0m < CHM <= 10m - Class 2\nchm_reclass[np.where((chm_array>10) & (chm_array<=20))] = 3 # 10m < CHM <= 20m - Class 3\nchm_reclass[np.where((chm_array>20) & (chm_array<=30))] = 4 # 20m < CHM <= 30m - Class 4\nchm_reclass[np.where(chm_array>30)] = 5 # CHM > 30m - Class 5", "We can define our own colormap to plot these discrete classifications, and create a custom legend to label the classes:", "import matplotlib.colors as colors\nplt.figure(); \ncmapCHM = colors.ListedColormap(['lightblue','yellow','orange','green','red'])\nplt.imshow(chm_reclass,extent=chm_ext,cmap=cmapCHM)\nplt.title('SERC CHM Classification')\nax=plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation \nrotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees\n\n# Create custom legend to label the four canopy height classes:\nimport matplotlib.patches as mpatches\nclass1_box = mpatches.Patch(color='lightblue', label='CHM = 0m')\nclass2_box = mpatches.Patch(color='yellow', label='0m < CHM <= 10m')\nclass3_box = mpatches.Patch(color='orange', label='10m < CHM <= 20m')\nclass4_box = mpatches.Patch(color='green', label='20m < CHM <= 30m')\nclass5_box = mpatches.Patch(color='red', label='CHM > 30m')\n\nax.legend(handles=[class1_box,class2_box,class3_box,class4_box,class5_box],\n handlelength=0.7,bbox_to_anchor=(1.05, 0.4),loc='lower left',borderaxespad=0.)", "<div id=\"ds-challenge\" markdown=\"1\">\n**Challenge: Document Your Workflow**\n\n1. Look at the code that you created for this lesson. Now imagine yourself months in the future. Document your script so that your methods and process is clear and reproducible for yourself or others to follow when you come back to this work at a later date. \n2. In documenting your script, synthesize the outputs. Do they tell you anything about the vegetation structure at the field site? \n\n</div>\n\n<div id=\"ds-challenge\" markdown=\"1\">\n**Challenge: Try Another Classification**\n\nCreate the following threshold classified outputs:\n\n1. A raster where NDVI values are classified into the following categories:\n * Low greenness: NDVI < 0.3\n * Medium greenness: 0.3 < NDVI < 0.6\n * High greenness: NDVI > 0.6\n2. A raster where aspect is classified into North and South facing slopes: \n * North: 0-45 & 315-360 degrees \n * South: 135-225 degrees\n\n <figure>\n <a href=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/geospatial-skills/NSEWclassification_BozEtAl2015.jpg\">\n <img src=\"https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/geospatial-skills/NSEWclassification_BozEtAl2015.jpg\"></a>\n <figcaption> Reclassification of aspect (azimuth) values: North, 315-45 \n degrees; East, 45-135 degrees; South, 135-225 degrees; West, 225-315 degrees.\n Source: <a href=\"http://www.aimspress.com/article/10.3934/energy.2015.3.401/fulltext.html\"> Boz et al. 2015 </a>\n </figcaption>\n</figure>\n\nBe sure to document your workflow as you go using Jupyter Markdown cells. 
\n\n**Data Institute Participants:** When you are finished, export your outputs to HTML by selecting File > Download As > HTML (.html). Save the file as LastName_Tues_classifyThreshold.html. Add this to the Tuesday directory in your DI-NEON-participants Git directory and push it to your fork in GitHub. Merge with the central repository using a pull request. \n\n</div>\n\nReferences\nKhosravipour, Anahita & Skidmore, Andrew & Isenburg, Martin & Wang, Tiejun & Hussin, Yousif. (2014). <a href=\"https://www.researchgate.net/publication/273663100_Generating_Pit-free_Canopy_Height_Models_from_Airborne_Lidar\" target=\"_blank\"> Generating Pit-free Canopy Height Models from Airborne Lidar. Photogrammetric Engineering & Remote Sensing</a>, 80, 863-872. doi:10.14358/PERS.80.9.863." ]
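As a hedged illustration of the NDVI and aspect parts of the challenge above: the array names `ndvi_array` and `aspect_array` are hypothetical placeholders for rasters read in with gdal as shown earlier in this tutorial, and the thresholds are taken directly from the challenge text.

```python
import numpy as np

# Hedged sketch for the challenge above; `ndvi_array` and `aspect_array` are
# hypothetical numpy arrays read from your own GeoTIFFs with gdal, as shown earlier.

# NDVI greenness classes
ndvi_class = np.full(ndvi_array.shape, np.nan)
ndvi_class[np.where(ndvi_array < 0.3)] = 1                           # low greenness
ndvi_class[np.where((ndvi_array >= 0.3) & (ndvi_array <= 0.6))] = 2  # medium greenness
ndvi_class[np.where(ndvi_array > 0.6)] = 3                           # high greenness

# Aspect classes: North (0-45 & 315-360 degrees) and South (135-225 degrees)
aspect_class = np.full(aspect_array.shape, np.nan)
aspect_class[np.where((aspect_array <= 45) | (aspect_array >= 315))] = 1   # North-facing
aspect_class[np.where((aspect_array >= 135) & (aspect_array <= 225))] = 2  # South-facing
```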
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
balarsen/pymc_learning
updating_info/Arb_dist.ipynb
bsd-3-clause
[ "# https://github.com/pymc-devs/pymc3/blob/master/docs/source/notebooks/updating_priors.ipynb\n", "Use an arbitary distribution\nNOTE this requires Pymc3 3.1\npymc3.distributions.DensityDist", "# pymc3.distributions.DensityDist?\n\n\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nfrom pymc3 import Model, Normal, Slice\nfrom pymc3 import sample\nfrom pymc3 import traceplot\nfrom pymc3.distributions import Interpolated\nfrom theano import as_op\nimport theano.tensor as tt\nimport numpy as np\nfrom scipy import stats\n\n%matplotlib inline\n\n%load_ext version_information\n\n%version_information pymc3\n\nfrom sklearn.neighbors.kde import KernelDensity\nimport numpy as np\nX = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])\nkde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(X)\nkde.score_samples(X)\nplt.scatter(X[:,0], X[:,1])", "Generating data", "# Initialize random number generator\nnp.random.seed(123)\n\n# True parameter values\nalpha_true = 5\nbeta0_true = 7\nbeta1_true = 13\n\n# Size of dataset\nsize = 100\n\n# Predictor variable\nX1 = np.random.randn(size)\nX2 = np.random.randn(size) * 0.2\n\n# Simulate outcome variable\nY = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size)", "Model specification\nOur initial beliefs about the parameters are quite informative (sd=1) and a bit off the true values.", "basic_model = Model()\n\nwith basic_model:\n \n # Priors for unknown model parameters\n alpha = Normal('alpha', mu=0, sd=1)\n beta0 = Normal('beta0', mu=12, sd=1)\n beta1 = Normal('beta1', mu=18, sd=1)\n \n # Expected value of outcome\n mu = alpha + beta0 * X1 + beta1 * X2\n \n # Likelihood (sampling distribution) of observations\n Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)\n \n # draw 1000 posterior samples\n trace = sample(1000)\n\ntraceplot(trace);\n", "In order to update our beliefs about the parameters, we use the posterior distributions, which will be used as the prior distributions for the next inference. The data used for each inference iteration has to be independent from the previous iterations, otherwise the same (possibly wrong) belief is injected over and over in the system, amplifying the errors and misleading the inference. By ensuring the data is independent, the system should converge to the true parameter values.\nBecause we draw samples from the posterior distribution (shown on the right in the figure above), we need to estimate their probability density (shown on the left in the figure above). Kernel density estimation (KDE) is a way to achieve this, and we will use this technique here. In any case, it is an empirical distribution that cannot be expressed analytically. Fortunately PyMC3 provides a way to use custom distributions, via Interpolated class.", "def from_posterior(param, samples):\n smin, smax = np.min(samples), np.max(samples)\n width = smax - smin\n x = np.linspace(smin, smax, 100)\n y = stats.gaussian_kde(samples)(x)\n \n # what was never sampled should have a small probability but not 0,\n # so we'll extend the domain and use linear approximation of density on it\n x = np.concatenate([[x[0] - 3 * width], x, [x[-1] + 3 * width]])\n y = np.concatenate([[0], y, [0]])\n return Interpolated(param, x, y)", "Now we just need to generate more data and build our Bayesian model so that the prior distributions for the current iteration are the posterior distributions from the previous iteration. 
It is still possible to continue using NUTS sampling method because Interpolated class implements calculation of gradients that are necessary for Hamiltonian Monte Carlo samplers.", "traces = [trace]\n\n\nfor _ in range(10):\n\n # generate more data\n X1 = np.random.randn(size)\n X2 = np.random.randn(size) * 0.2\n Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size)\n\n model = Model()\n with model:\n # Priors are posteriors from previous iteration\n alpha = from_posterior('alpha', trace['alpha'])\n beta0 = from_posterior('beta0', trace['beta0'])\n beta1 = from_posterior('beta1', trace['beta1'])\n \n # Expected value of outcome\n mu = alpha + beta0 * X1 + beta1 * X2\n\n # Likelihood (sampling distribution) of observations\n Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)\n \n # draw 10000 posterior samples\n trace = sample(1000)\n traces.append(trace)\n\nprint('Posterior distributions after ' + str(len(traces)) + ' iterations.')\ncmap = mpl.cm.autumn\nfor param in ['alpha', 'beta0', 'beta1']:\n plt.figure(figsize=(8, 2))\n for update_i, trace in enumerate(traces):\n samples = trace[param]\n smin, smax = np.min(samples), np.max(samples)\n x = np.linspace(smin, smax, 100)\n y = stats.gaussian_kde(samples)(x)\n plt.plot(x, y, color=cmap(1 - update_i / len(traces)))\n plt.axvline({'alpha': alpha_true, 'beta0': beta0_true, 'beta1': beta1_true}[param], c='k')\n plt.ylabel('Frequency')\n plt.title(param)\n plt.show()", "You can re-execute the last two cells to generate more updates.\nWhat is interesting to note is that the posterior distributions for our parameters tend to get centered on their true value (vertical lines), and the distribution gets thiner and thiner. This means that we get more confident each time, and the (false) belief we had at the beginning gets flushed away by the new data we incorporate.", "for _ in range(10):\n\n # generate more data\n X1 = np.random.randn(size)\n X2 = np.random.randn(size) * 0.2\n Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size)\n\n model = Model()\n with model:\n # Priors are posteriors from previous iteration\n alpha = from_posterior('alpha', trace['alpha'])\n beta0 = from_posterior('beta0', trace['beta0'])\n beta1 = from_posterior('beta1', trace['beta1'])\n\n # Expected value of outcome\n mu = alpha + beta0 * X1 + beta1 * X2\n\n # Likelihood (sampling distribution) of observations\n Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)\n \n # draw 10000 posterior samples\n trace = sample(1000)\n traces.append(trace)\n\nprint('Posterior distributions after ' + str(len(traces)) + ' iterations.')\ncmap = mpl.cm.autumn\nfor param in ['alpha', 'beta0', 'beta1']:\n plt.figure(figsize=(8, 2))\n for update_i, trace in enumerate(traces):\n samples = trace[param]\n smin, smax = np.min(samples), np.max(samples)\n x = np.linspace(smin, smax, 100)\n y = stats.gaussian_kde(samples)(x)\n plt.plot(x, y, color=cmap(1 - update_i / len(traces)))\n plt.axvline({'alpha': alpha_true, 'beta0': beta0_true, 'beta1': beta1_true}[param], c='k')\n plt.ylabel('Frequency')\n plt.title(param)\n plt.show()\n\nfor _ in range(10):\n\n # generate more data\n X1 = np.random.randn(size)\n X2 = np.random.randn(size) * 0.2\n Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size)\n\n model = Model()\n with model:\n # Priors are posteriors from previous iteration\n alpha = from_posterior('alpha', trace['alpha'])\n beta0 = from_posterior('beta0', trace['beta0'])\n beta1 = from_posterior('beta1', trace['beta1'])\n\n # Expected 
value of outcome\n mu = alpha + beta0 * X1 + beta1 * X2\n\n # Likelihood (sampling distribution) of observations\n Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)\n \n # draw 10000 posterior samples\n trace = sample(1000)\n traces.append(trace)\n\nprint('Posterior distributions after ' + str(len(traces)) + ' iterations.')\ncmap = mpl.cm.autumn\nfor param in ['alpha', 'beta0', 'beta1']:\n plt.figure(figsize=(8, 2))\n for update_i, trace in enumerate(traces):\n samples = trace[param]\n smin, smax = np.min(samples), np.max(samples)\n x = np.linspace(smin, smax, 100)\n y = stats.gaussian_kde(samples)(x)\n plt.plot(x, y, color=cmap(1 - update_i / len(traces)))\n plt.axvline({'alpha': alpha_true, 'beta0': beta0_true, 'beta1': beta1_true}[param], c='k')\n plt.ylabel('Frequency')\n plt.title(param)\n plt.show()" ]
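As a small, hedged check (not part of the original notebook) that the updated posteriors really concentrate around the generating values, one can compare the posterior mean after the final update round with the true parameters used to simulate the data:

```python
# Hedged check: compare posterior means after the last update with the true values.
# Assumes the `traces` list and the *_true constants defined above.
true_values = {'alpha': alpha_true, 'beta0': beta0_true, 'beta1': beta1_true}
for param, true_val in true_values.items():
    post_mean = traces[-1][param].mean()
    print('{}: posterior mean = {:.3f}, true value = {}'.format(param, post_mean, true_val))
```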
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
wgong/open_source_learning
learn_stem/machine_learning/tensorflow/02_Convolutional_Neural_Network.ipynb
apache-2.0
[ "TensorFlow Tutorial #02\nConvolutional Neural Network\nby Magnus Erik Hvass Pedersen\n/ GitHub / Videos on YouTube\nIntroduction\nThe previous tutorial showed that a simple linear model had about 91% classification accuracy for recognizing hand-written digits in the MNIST data-set.\nIn this tutorial we will implement a simple Convolutional Neural Network in TensorFlow which has a classification accuracy of about 99%, or more if you make some of the suggested exercises.\nConvolutional Networks work by moving small filters across the input image. This means the filters are re-used for recognizing patterns throughout the entire input image. This makes the Convolutional Networks much more powerful than Fully-Connected networks with the same number of variables. This in turn makes the Convolutional Networks faster to train.\nYou should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. Beginners to TensorFlow may also want to study the first tutorial before proceeding to this one.\nFlowchart\nThe following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below.", "from IPython.display import Image\nImage('images/02_network_flowchart.png')", "The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled so the image resolution is decreased from 28x28 to 14x14.\nThese 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and we need filter-weights for each output channel of this layer. There are 36 output channels so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are down-sampled again to 7x7 pixels.\nThe output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened to a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each of the classes, which is used to determine the class of the image, that is, which number is depicted in the image.\nThe convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to improve the classification error. This is done iteratively thousands of times until the classification error is sufficiently low.\nThese particular filter-weights and intermediate images are the results of one optimization run and may look different if you re-run this Notebook.\nNote that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow.\nConvolutional Layer\nThe following chart shows the basic idea of processing an image in the first convolutional layer. The input image depicts the number 7 and four copies of the image are shown here, so we can see more clearly how the filter is being moved to different positions of the image. 
For each position of the filter, the dot-product is being calculated between the filter and the image pixels under the filter, which results in a single pixel in the output image. So moving the filter across the entire input image results in a new image being generated.\nThe red filter-weights means that the filter has a positive reaction to black pixels in the input image, while blue pixels means the filter has a negative reaction to black pixels.\nIn this case it appears that the filter recognizes the horizontal line of the 7-digit, as can be seen from its stronger reaction to that line in the output image.", "Image('images/02_convolution.png')", "The step-size for moving the filter across the input is called the stride. There is a stride for moving the filter horizontally (x-axis) and another stride for moving vertically (y-axis).\nIn the source-code below, the stride is set to 1 in both directions, which means the filter starts in the upper left corner of the input image and is being moved 1 pixel to the right in each step. When the filter reaches the end of the image to the right, then the filter is moved back to the left side and 1 pixel down the image. This continues until the filter has reached the lower right corner of the input image and the entire output image has been generated.\nWhen the filter reaches the end of the right-side as well as the bottom of the input image, then it can be padded with zeroes (white pixels). This causes the output image to be of the exact same dimension as the input image.\nFurthermore, the output of the convolution may be passed through a so-called Rectified Linear Unit (ReLU), which merely ensures that the output is positive because negative values are set to zero. The output may also be down-sampled by so-called max-pooling, which considers small windows of 2x2 pixels and only keeps the largest of those pixels. This halves the resolution of the input image e.g. from 28x28 to 14x14 pixels.\nNote that the second convolutional layer is more complicated because it takes 16 input channels. We want a separate filter for each input channel, so we need 16 filters instead of just one. Furthermore, we want 36 output channels from the second convolutional layer, so in total we need 16 x 36 = 576 filters for the second convolutional layer. 
It can be a bit challenging to understand how this works.\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix\nimport time\nfrom datetime import timedelta\nimport math", "This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:", "tf.__version__", "Configuration of Neural Network\nThe configuration of the Convolutional Neural Network is defined here for convenience, so you can easily find and change these numbers and re-run the Notebook.", "# Convolutional Layer 1.\nfilter_size1 = 5 # Convolution filters are 5 x 5 pixels.\nnum_filters1 = 16 # There are 16 of these filters.\n\n# Convolutional Layer 2.\nfilter_size2 = 5 # Convolution filters are 5 x 5 pixels.\nnum_filters2 = 36 # There are 36 of these filters.\n\n# Fully-connected layer.\nfc_size = 128 # Number of neurons in fully-connected layer.", "Load Data\nThe MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.", "from tensorflow.examples.tutorials.mnist import input_data\ndata = input_data.read_data_sets('data/MNIST/', one_hot=True)\n", "The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.", "print(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(data.train.labels)))\nprint(\"- Test-set:\\t\\t{}\".format(len(data.test.labels)))\nprint(\"- Validation-set:\\t{}\".format(len(data.validation.labels)))", "The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.", "data.test.cls = np.argmax(data.test.labels, axis=1)\n\nfeed_dict_test = {x: data.test.images,\n y_true: data.test.labels,\n y_true_cls: data.test.cls}", "Data Dimensions\nThe data dimensions are used in several places in the source-code below. 
They are defined once so we can use these variables instead of numbers throughout the source-code below.", "# We know that MNIST images are 28 pixels in each dimension.\nimg_size = 28\n\n# Images are stored in one-dimensional arrays of this length.\nimg_size_flat = img_size * img_size\n\n# Tuple with height and width of images used to reshape arrays.\nimg_shape = (img_size, img_size)\n\n# Number of colour channels for the images: 1 channel for gray-scale.\nnum_channels = 1\n\n# Number of classes, one class for each of 10 digits.\nnum_classes = 10", "Helper-function for plotting images\nFunction used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.", "def plot_images(images, cls_true, cls_pred=None):\n assert len(images) == len(cls_true) == 9\n \n # Create figure with 3x3 sub-plots.\n fig, axes = plt.subplots(3, 3)\n fig.subplots_adjust(hspace=0.3, wspace=0.3)\n\n for i, ax in enumerate(axes.flat):\n # Plot image.\n ax.imshow(images[i].reshape(img_shape), cmap='binary')\n\n # Show true and predicted classes.\n if cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true[i])\n else:\n xlabel = \"True: {0}, Pred: {1}\".format(cls_true[i], cls_pred[i])\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Plot a few images to see if data is correct", "# Get the first images from the test-set.\nimages = data.test.images[0:9]\n\n# Get the true classes for those images.\ncls_true = data.test.cls[0:9]\n\n# Plot the images and labels using our helper-function above.\nplot_images(images=images, cls_true=cls_true)", "TensorFlow Graph\nThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.\nTensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.\nTensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.\nA TensorFlow graph consists of the following parts which will be detailed below:\n\nPlaceholder variables used for inputting data to the graph.\nVariables that are going to be optimized so as to make the convolutional network perform better.\nThe mathematical formulas for the convolutional network.\nA cost measure that can be used to guide the optimization of the variables.\nAn optimization method which updates the variables.\n\nIn addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.\nHelper-functions for creating new variables\nFunctions for creating new TensorFlow variables in the given shape and initializing them with random values. 
Note that the initialization is not actually done at this point, it is merely being defined in the TensorFlow graph.", "def new_weights(shape):\n return tf.Variable(tf.truncated_normal(shape, stddev=0.05))\n\ndef new_biases(length):\n return tf.Variable(tf.constant(0.05, shape=[length]))", "Helper-function for creating a new Convolutional Layer\nThis function creates a new convolutional layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.\nIt is assumed that the input is a 4-dim tensor with the following dimensions:\n\nImage number.\nY-axis of each image.\nX-axis of each image.\nChannels of each image.\n\nNote that the input channels may either be colour-channels, or it may be filter-channels if the input is produced from a previous convolutional layer.\nThe output is another 4-dim tensor with the following dimensions:\n\nImage number, same as input.\nY-axis of each image. If 2x2 pooling is used, then the height and width of the input images is divided by 2.\nX-axis of each image. Ditto.\nChannels produced by the convolutional filters.", "def new_conv_layer(input, # The previous layer.\n num_input_channels, # Num. channels in prev. layer.\n filter_size, # Width and height of each filter.\n num_filters, # Number of filters.\n use_pooling=True): # Use 2x2 max-pooling.\n\n # Shape of the filter-weights for the convolution.\n # This format is determined by the TensorFlow API.\n shape = [filter_size, filter_size, num_input_channels, num_filters]\n\n # Create new weights aka. filters with the given shape.\n weights = new_weights(shape=shape)\n\n # Create new biases, one for each filter.\n biases = new_biases(length=num_filters)\n\n # Create the TensorFlow operation for convolution.\n # Note the strides are set to 1 in all dimensions.\n # The first and last stride must always be 1,\n # because the first is for the image-number and\n # the last is for the input-channel.\n # But e.g. strides=[1, 2, 2, 1] would mean that the filter\n # is moved 2 pixels across the x- and y-axis of the image.\n # The padding is set to 'SAME' which means the input image\n # is padded with zeroes so the size of the output is the same.\n layer = tf.nn.conv2d(input=input,\n filter=weights,\n strides=[1, 1, 1, 1],\n padding='SAME')\n\n # Add the biases to the results of the convolution.\n # A bias-value is added to each filter-channel.\n layer += biases\n\n # Use pooling to down-sample the image resolution?\n if use_pooling:\n # This is 2x2 max-pooling, which means that we\n # consider 2x2 windows and select the largest value\n # in each window. Then we move 2 pixels to the next window.\n layer = tf.nn.max_pool(value=layer,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME')\n\n # Rectified Linear Unit (ReLU).\n # It calculates max(x, 0) for each input pixel x.\n # This adds some non-linearity to the formula and allows us\n # to learn more complicated functions.\n layer = tf.nn.relu(layer)\n\n # Note that ReLU is normally executed before the pooling,\n # but since relu(max_pool(x)) == max_pool(relu(x)) we can\n # save 75% of the relu-operations by max-pooling first.\n\n # We return both the resulting layer and the filter-weights\n # because we will plot the weights later.\n return layer, weights", "Helper-function for flattening a layer\nA convolutional layer produces an output tensor with 4 dimensions. 
We will add fully-connected layers after the convolution layers, so we need to reduce the 4-dim tensor to 2-dim which can be used as input to the fully-connected layer.", "def flatten_layer(layer):\n # Get the shape of the input layer.\n layer_shape = layer.get_shape()\n\n # The shape of the input layer is assumed to be:\n # layer_shape == [num_images, img_height, img_width, num_channels]\n\n # The number of features is: img_height * img_width * num_channels\n # We can use a function from TensorFlow to calculate this.\n num_features = layer_shape[1:4].num_elements()\n \n # Reshape the layer to [num_images, num_features].\n # Note that we just set the size of the second dimension\n # to num_features and the size of the first dimension to -1\n # which means the size in that dimension is calculated\n # so the total size of the tensor is unchanged from the reshaping.\n layer_flat = tf.reshape(layer, [-1, num_features])\n\n # The shape of the flattened layer is now:\n # [num_images, img_height * img_width * num_channels]\n\n # Return both the flattened layer and the number of features.\n return layer_flat, num_features", "Helper-function for creating a new Fully-Connected Layer\nThis function creates a new fully-connected layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.\nIt is assumed that the input is a 2-dim tensor of shape [num_images, num_inputs]. The output is a 2-dim tensor of shape [num_images, num_outputs].", "def new_fc_layer(input, # The previous layer.\n num_inputs, # Num. inputs from prev. layer.\n num_outputs, # Num. outputs.\n use_relu=True): # Use Rectified Linear Unit (ReLU)?\n\n # Create new weights and biases.\n weights = new_weights(shape=[num_inputs, num_outputs])\n biases = new_biases(length=num_outputs)\n\n # Calculate the layer as the matrix multiplication of\n # the input and weights, and then add the bias-values.\n layer = tf.matmul(input, weights) + biases\n\n # Use ReLU?\n if use_relu:\n layer = tf.nn.relu(layer)\n\n return layer", "Placeholder variables\nPlaceholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.\nFirst we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.", "x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')", "The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:", "x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])", "Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. 
The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.", "y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')", "We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.", "y_true_cls = tf.argmax(y_true, dimension=1)", "Convolutional Layer 1\nCreate the first convolutional layer. It takes x_image as input and creates num_filters1 different filters, each having width and height equal to filter_size1. Finally we wish to down-sample the image so it is half the size by using 2x2 max-pooling.", "layer_conv1, weights_conv1 = \\\n new_conv_layer(input=x_image,\n num_input_channels=num_channels,\n filter_size=filter_size1,\n num_filters=num_filters1,\n use_pooling=True)", "Check the shape of the tensor that will be output by the convolutional layer. It is (?, 14, 14, 16) which means that there is an arbitrary number of images (this is the ?), each image is 14 pixels wide and 14 pixels high, and there are 16 different channels, one channel for each of the filters.", "layer_conv1", "Convolutional Layer 2\nCreate the second convolutional layer, which takes as input the output from the first convolutional layer. The number of input channels corresponds to the number of filters in the first convolutional layer.", "layer_conv2, weights_conv2 = \\\n new_conv_layer(input=layer_conv1,\n num_input_channels=num_filters1,\n filter_size=filter_size2,\n num_filters=num_filters2,\n use_pooling=True)", "Check the shape of the tensor that will be output from this convolutional layer. The shape is (?, 7, 7, 36) where the ? again means that there is an arbitrary number of images, with each image having width and height of 7 pixels, and there are 36 channels, one for each filter.", "layer_conv2", "Flatten Layer\nThe convolutional layers output 4-dim tensors. We now wish to use these as input in a fully-connected network, which requires for the tensors to be reshaped or flattened to 2-dim tensors.", "layer_flat, num_features = flatten_layer(layer_conv2)", "Check that the tensors now have shape (?, 1764) which means there's an arbitrary number of images which have been flattened to vectors of length 1764 each. Note that 1764 = 7 x 7 x 36.", "layer_flat\n\nnum_features", "Fully-Connected Layer 1\nAdd a fully-connected layer to the network. The input is the flattened layer from the previous convolution. The number of neurons or nodes in the fully-connected layer is fc_size. ReLU is used so we can learn non-linear relations.", "layer_fc1 = new_fc_layer(input=layer_flat,\n num_inputs=num_features,\n num_outputs=fc_size,\n use_relu=True)", "Check that the output of the fully-connected layer is a tensor with shape (?, 128) where the ? means there is an arbitrary number of images and fc_size == 128.", "layer_fc1", "Fully-Connected Layer 2\nAdd another fully-connected layer that outputs vectors of length 10 for determining which of the 10 classes the input image belongs to. Note that ReLU is not used in this layer.", "layer_fc2 = new_fc_layer(input=layer_fc1,\n num_inputs=fc_size,\n num_outputs=num_classes,\n use_relu=False)\n\nlayer_fc2", "Predicted Class\nThe second fully-connected layer estimates how likely it is that the input image belongs to each of the 10 classes. 
However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each element is limited between zero and one and the 10 elements sum to one. This is calculated using the so-called softmax function and the result is stored in y_pred.", "y_pred = tf.nn.softmax(layer_fc2)", "The class-number is the index of the largest element.", "y_pred_cls = tf.argmax(y_pred, dimension=1)", "Cost-function to be optimized\nTo make the model better at classifying the input images, we must somehow change the variables for all the network layers. To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true.\nThe cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the network layers.\nTensorFlow has a built-in function for calculating the cross-entropy. Note that the function calculates the softmax internally so we must use the output of layer_fc2 directly rather than y_pred which has already had the softmax applied.", "cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,\n labels=y_true)", "We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.", "cost = tf.reduce_mean(cross_entropy)", "Optimization Method\nNow that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the AdamOptimizer which is an advanced form of Gradient Descent.\nNote that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.", "optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)", "Performance Measures\nWe need a few more performance measures to display the progress to the user.\nThis is a vector of booleans whether the predicted class equals the true class of each image.", "correct_prediction = tf.equal(y_pred_cls, y_true_cls)", "This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.", "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))", "TensorFlow Run\nCreate TensorFlow session\nOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.", "session = tf.Session()", "Initialize variables\nThe variables for weights and biases must be initialized before we start optimizing them.", "session.run(tf.global_variables_initializer())", "Helper-function to perform optimization iterations\nThere are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. 
We therefore only use a small batch of images in each iteration of the optimizer.\nIf your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.", "train_batch_size = 64", "Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.", "# Counter for total number of iterations performed so far.\ntotal_iterations = 0\n\ndef optimize(num_iterations, ndisplay_interval=100):\n # Ensure we update the global variable rather than a local copy.\n global total_iterations\n\n # Start-time used for printing time-usage below.\n start_time = time.time()\n\n for i in range(total_iterations,\n total_iterations + num_iterations):\n\n # Get a batch of training examples.\n # x_batch now holds a batch of images and\n # y_true_batch are the true labels for those images.\n x_batch, y_true_batch = data.train.next_batch(train_batch_size)\n\n # Put the batch into a dict with the proper names\n # for placeholder variables in the TensorFlow graph.\n feed_dict_train = {x: x_batch,\n y_true: y_true_batch}\n\n # Run the optimizer using this batch of training data.\n # TensorFlow assigns the variables in feed_dict_train\n # to the placeholder variables and then runs the optimizer.\n session.run(optimizer, feed_dict=feed_dict_train)\n\n # Print status every 100 iterations.\n if i % ndisplay_interval == 0:\n # Calculate the accuracy on the training-set.\n acc = session.run(accuracy, feed_dict=feed_dict_train)\n\n # Message for printing.\n msg = \"* Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}\"\n\n # Print it.\n print(msg.format(i + 1, acc))\n\n # Update the total number of iterations performed.\n total_iterations += num_iterations\n\n # Ending time.\n end_time = time.time()\n\n # Difference between start and end-times.\n time_dif = end_time - start_time\n\n # Print the time-usage.\n print(\"* Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))", "helper-function to plot sample digits", "def plot_sample9():\n # Use TensorFlow to get a list of boolean values\n # whether each test-image has been correctly classified,\n # and a list for the predicted class of each image.\n prediction, cls_pred = session.run([correct_prediction, y_pred_cls],\n feed_dict=feed_dict_test)\n\n num_imgs = data.test.images.shape[0]\n i_start = np.random.choice(num_imgs-10, 1)[0]\n\n # Plot the first 9 images.\n plot_images(images=data.test.images[i_start:i_start+9],\n cls_true=data.test.cls[i_start:i_start+9],\n cls_pred=cls_pred[i_start:i_start+9])", "Helper-function to plot example errors\nFunction for plotting examples of images from the test-set that have been mis-classified.", "def plot_example_errors(cls_pred, correct):\n # This function is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # correct is a boolean array whether the predicted class\n # is equal to the true class for each image in the test-set.\n\n # Negate the boolean array.\n incorrect = (correct == False)\n \n # Get the images from the test-set that have been\n # incorrectly classified.\n images = data.test.images[incorrect]\n \n # Get the predicted classes for those images.\n cls_pred = 
cls_pred[incorrect]\n\n # Get the true classes for those images.\n cls_true = data.test.cls[incorrect]\n \n # Plot the first 9 images.\n plot_images(images=images[0:9],\n cls_true=cls_true[0:9],\n cls_pred=cls_pred[0:9])", "Helper-function to plot confusion matrix", "def plot_confusion_matrix(cls_pred):\n # This is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # Get the true classifications for the test-set.\n cls_true = data.test.cls\n \n # Get the confusion matrix using sklearn.\n cm = confusion_matrix(y_true=cls_true,\n y_pred=cls_pred)\n\n # Print the confusion matrix as text.\n print(cm)\n\n # Plot the confusion matrix as an image.\n plt.matshow(cm)\n\n # Make various adjustments to the plot.\n plt.colorbar()\n tick_marks = np.arange(num_classes)\n plt.xticks(tick_marks, range(num_classes))\n plt.yticks(tick_marks, range(num_classes))\n plt.xlabel('Predicted')\n plt.ylabel('True')\n\n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Helper-function for showing the performance\nFunction for printing the classification accuracy on the test-set.\nIt takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.\nNote that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.", "# Split the test-set into smaller batches of this size.\ntest_batch_size = 256\n\ndef print_test_accuracy(show_example_errors=False,\n show_confusion_matrix=False):\n\n # Number of images in the test-set.\n num_test = len(data.test.images)\n\n # Allocate an array for the predicted classes which\n # will be calculated in batches and filled into this array.\n cls_pred = np.zeros(shape=num_test, dtype=np.int)\n\n # Now calculate the predicted classes for the batches.\n # We will just iterate through all the batches.\n # There might be a more clever and Pythonic way of doing this.\n\n # The starting index for the next batch is denoted i.\n i = 0\n\n while i < num_test:\n # The ending index for the next batch is denoted j.\n j = min(i + test_batch_size, num_test)\n\n # Get the images from the test-set between index i and j.\n images = data.test.images[i:j, :]\n\n # Get the associated labels.\n labels = data.test.labels[i:j, :]\n\n # Create a feed-dict with these images and labels.\n feed_dict = {x: images,\n y_true: labels}\n\n # Calculate the predicted class using TensorFlow.\n cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)\n\n # Set the start-index for the next batch to the\n # end-index of the current batch.\n i = j\n\n # Convenience variable for the true class-numbers of the test-set.\n cls_true = data.test.cls\n\n # Create a boolean array whether each image is correctly classified.\n correct = (cls_true == cls_pred)\n\n # Calculate the number of correctly classified images.\n # When summing a boolean array, False means 0 and True means 1.\n correct_sum = correct.sum()\n\n # Classification accuracy is the number of correctly classified\n # images divided by the total number of images in the test-set.\n acc = float(correct_sum) / num_test\n\n # Print the accuracy.\n msg = \"Accuracy on Test-Set: {0:.1%} ({1} / {2})\"\n 
print(msg.format(acc, correct_sum, num_test))\n\n # Plot some examples of mis-classifications, if desired.\n if show_example_errors:\n print(\"Example errors:\")\n plot_example_errors(cls_pred=cls_pred, correct=correct)\n\n # Plot the confusion matrix, if desired.\n if show_confusion_matrix:\n print(\"Confusion Matrix:\")\n plot_confusion_matrix(cls_pred=cls_pred)", "Performance before any optimization\nThe accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.", "print_test_accuracy()", "Performance after 1 optimization iteration\nThe classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.", "optimize(num_iterations=1)\n\nprint_test_accuracy()", "Performance after 100 optimization iterations\nAfter 100 optimization iterations, the model has significantly improved its classification accuracy.", "optimize(num_iterations=99) # We already performed 1 iteration above.\n\nprint_test_accuracy(show_example_errors=True)", "Performance after 1000 optimization iterations\nAfter 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.", "optimize(num_iterations=900) # We performed 100 iterations above.", "test-run on 6/12/2017\n\nOptimization Iteration: 101, Training Accuracy: 70.3%\nOptimization Iteration: 201, Training Accuracy: 81.2%\nOptimization Iteration: 301, Training Accuracy: 84.4%\nOptimization Iteration: 401, Training Accuracy: 89.1%\nOptimization Iteration: 501, Training Accuracy: 93.8%\nOptimization Iteration: 601, Training Accuracy: 87.5%\nOptimization Iteration: 701, Training Accuracy: 98.4%\nOptimization Iteration: 801, Training Accuracy: 93.8%\nOptimization Iteration: 901, Training Accuracy: 92.2%\nTime usage: 0:01:28", "plot_sample9()\n\nprint_test_accuracy(show_example_errors=True)", "Performance after 10,000 optimization iterations\nAfter 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.", "optimize(num_iterations=9000, ndisplay_interval=500) # We performed 1000 iterations above.", "Optimization Iteration: 1, Training Accuracy: 92.2%\nOptimization Iteration: 501, Training Accuracy: 98.4%\nOptimization Iteration: 1001, Training Accuracy: 95.3%\nOptimization Iteration: 1501, Training Accuracy: 100.0%\nOptimization Iteration: 2001, Training Accuracy: 96.9%\nOptimization Iteration: 2501, Training Accuracy: 100.0%\nOptimization Iteration: 3001, Training Accuracy: 96.9%\nOptimization Iteration: 3501, Training Accuracy: 98.4%\nOptimization Iteration: 4001, Training Accuracy: 96.9%\nOptimization Iteration: 4501, Training Accuracy: 100.0%\nOptimization Iteration: 5001, Training Accuracy: 96.9%\nOptimization Iteration: 5501, Training Accuracy: 100.0%\nOptimization Iteration: 6001, Training Accuracy: 98.4%\nOptimization Iteration: 6501, Training Accuracy: 96.9%\nOptimization Iteration: 7001, Training Accuracy: 100.0%\nOptimization Iteration: 7501, Training Accuracy: 98.4%\nOptimization Iteration: 8001, Training Accuracy: 100.0%\nOptimization Iteration: 8501, Training Accuracy: 100.0%\nTime usage: 0:14:56", "plot_sample9()\n\nprint_test_accuracy(show_example_errors=True,\n show_confusion_matrix=True)", "Visualization of Weights and Layers\nIn trying to understand why the convolutional neural network can recognize handwritten digits, we will now visualize the weights of the convolutional filters and 
the resulting output images.\nHelper-function for plotting convolutional weights", "def plot_conv_weights(weights, input_channel=0):\n # Assume weights are TensorFlow ops for 4-dim variables\n # e.g. weights_conv1 or weights_conv2.\n \n # Retrieve the values of the weight-variables from TensorFlow.\n # A feed-dict is not necessary because nothing is calculated.\n w = session.run(weights)\n\n # Get the lowest and highest values for the weights.\n # This is used to correct the colour intensity across\n # the images so they can be compared with each other.\n w_min = np.min(w)\n w_max = np.max(w)\n\n # Number of filters used in the conv. layer.\n num_filters = w.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot all the filter-weights.\n for i, ax in enumerate(axes.flat):\n # Only plot the valid filter-weights.\n if i<num_filters:\n # Get the weights for the i'th filter of the input channel.\n # See new_conv_layer() for details on the format\n # of this 4-dim tensor.\n img = w[:, :, input_channel, i]\n\n # Plot image.\n ax.imshow(img, vmin=w_min, vmax=w_max,\n interpolation='nearest', cmap='seismic')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Helper-function for plotting the output of a convolutional layer", "def plot_conv_layer(layer, image):\n # Assume layer is a TensorFlow op that outputs a 4-dim tensor\n # which is the output of a convolutional layer,\n # e.g. layer_conv1 or layer_conv2.\n\n # Create a feed-dict containing just one image.\n # Note that we don't need to feed y_true because it is\n # not used in this calculation.\n feed_dict = {x: [image]}\n\n # Calculate and retrieve the output values of the layer\n # when inputting that image.\n values = session.run(layer, feed_dict=feed_dict)\n\n # Number of filters used in the conv. 
layer.\n num_filters = values.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot the output images of all the filters.\n for i, ax in enumerate(axes.flat):\n # Only plot the images for valid filters.\n if i<num_filters:\n # Get the output image of using the i'th filter.\n # See new_conv_layer() for details on the format\n # of this 4-dim tensor.\n img = values[0, :, :, i]\n\n # Plot image.\n ax.imshow(img, interpolation='nearest', cmap='binary')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Input Images\nHelper-function for plotting an image.", "def plot_image(image):\n plt.imshow(image.reshape(img_shape),\n interpolation='nearest',\n cmap='binary')\n\n plt.show()", "Plot an image from the test-set which will be used as an example below.", "image1 = data.test.images[0]\nplot_image(image1)", "Plot another example image from the test-set.", "image2 = data.test.images[13]\nplot_image(image2)", "Convolution Layer 1\nNow plot the filter-weights for the first convolutional layer.\nNote that positive weights are red and negative weights are blue.", "plot_conv_weights(weights=weights_conv1)", "Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer. Note that these images are down-sampled to 14 x 14 pixels which is half the resolution of the original input image.", "plot_conv_layer(layer=layer_conv1, image=image1)", "The following images are the results of applying the convolutional filters to the second image.", "plot_conv_layer(layer=layer_conv1, image=image2)", "It is difficult to see from these images what the purpose of the convolutional filters might be. It appears that they have merely created several variations of the input image, as if light was shining from different angles and casting shadows in the image.\nConvolution Layer 2\nNow plot the filter-weights for the second convolutional layer.\nThere are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weigths for the first channel.\nNote again that positive weights are red and negative weights are blue.", "plot_conv_weights(weights=weights_conv2, input_channel=0)", "There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. 
We just make one more with the filter-weights for the second channel.", "plot_conv_weights(weights=weights_conv2, input_channel=1)", "It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality.\nApplying these convolutional filters to the images that were ouput from the first conv-layer gives the following images.\nNote that these are down-sampled yet again to 7 x 7 pixels which is half the resolution of the images from the first conv-layer.", "plot_conv_layer(layer=layer_conv2, image=image1)", "And these are the results of applying the filter-weights to the second image.", "plot_conv_layer(layer=layer_conv2, image=image2)", "From these images, it looks like the second convolutional layer might detect lines and patterns in the input images, which are less sensitive to local variations in the original input images.\nThese images are then flattened and input to the fully-connected layer, but that is not shown here.\nClose TensorFlow Session\nWe are now done using TensorFlow, so we close the session to release its resources.", "# This has been commented out in case you want to modify and experiment\n# with the Notebook without having to restart it.\nsession.close()", "Conclusion\nWe have seen that a Convolutional Neural Network works much better at recognizing hand-written digits than the simple linear model in Tutorial #01. The Convolutional Network gets a classification accuracy of about 99%, or even more if you make some adjustments, compared to only 91% for the simple linear model.\nHowever, the Convolutional Network is also much more complicated to implement, and it is not obvious from looking at the filter-weights why it works and why it sometimes fails.\nSo we would like an easier way to program Convolutional Neural Networks and we would also like a better way of visualizing their inner workings.\nExercises\nThese are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.\nYou may want to backup this Notebook before making any changes.\n\nDo you get the exact same results if you run the Notebook multiple times without changing any parameters? What are the sources of randomness?\nRun another 10,000 optimization iterations. Are the results better?\nChange the learning-rate for the optimizer.\nChange the configuration of the layers, such as the number of convolutional filters, the size of those filters, the number of neurons in the fully-connected layer, etc.\nAdd a so-called drop-out layer after the fully-connected layer. Note that the drop-out probability should be zero when calculating the classification accuracy, so you will need a placeholder variable for this probability.\nChange the order of ReLU and max-pooling in the convolutional layer. Does it calculate the same thing? What is the fastest way of computing it? How many calculations are saved? Does it also work for Sigmoid-functions and average-pooling?\nAdd one or more convolutional and fully-connected layers. Does it help performance?\nWhat is the smallest possible configuration that still gives good results?\nTry using ReLU in the last fully-connected layer. Does the performance change? Why?\nTry not using pooling in the convolutional layers. Does it change the classification accuracy and training time?\nTry using a 2x2 stride in the convolution instead of max-pooling? 
What is the difference?\nRemake the program yourself without looking too much at this source-code.\nExplain to a friend how the program works.\n\nLicense (MIT)\nCopyright (c) 2016 by Magnus Erik Hvass Pedersen\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." ]
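One claim in the notebook above that is easy to verify independently is the comment in new_conv_layer() that ReLU and 2x2 max-pooling commute, which is why pooling before applying ReLU saves roughly 75% of the ReLU evaluations without changing the result. The snippet below is a small standalone NumPy check of that identity (separate from the notebook's TensorFlow graph; the helper names are local to this sketch).

```python
import numpy as np

def relu(x):
    # Elementwise max(x, 0), the same nonlinearity as tf.nn.relu.
    return np.maximum(x, 0.0)

def max_pool_2x2(x):
    # 2x2 max-pooling with stride 2 on a (height, width) array
    # whose dimensions are even.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

rng = np.random.RandomState(0)
x = rng.randn(14, 14)

pool_then_relu = relu(max_pool_2x2(x))   # 49 ReLU evaluations
relu_then_pool = max_pool_2x2(relu(x))   # 196 ReLU evaluations
print(np.allclose(pool_then_relu, relu_then_pool))  # True: ReLU is monotone,
                                                    # so it commutes with max
```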
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
arokem/seaborn
doc/docstrings/ecdfplot.ipynb
bsd-3-clause
[ "Plot a univariate distribution along the x axis:", "import seaborn as sns; sns.set_theme()\n\npenguins = sns.load_dataset(\"penguins\")\nsns.ecdfplot(data=penguins, x=\"flipper_length_mm\")", "Flip the plot by assigning the data variable to the y axis:", "sns.ecdfplot(data=penguins, y=\"flipper_length_mm\")", "If neither x nor y is assigned, the dataset is treated as wide-form, and a histogram is drawn for each numeric column:", "sns.ecdfplot(data=penguins.filter(like=\"bill_\", axis=\"columns\"))", "You can also draw multiple histograms from a long-form dataset with hue mapping:", "sns.ecdfplot(data=penguins, x=\"bill_length_mm\", hue=\"species\")", "The default distribution statistic is normalized to show a proportion, but you can show absolute counts instead:", "sns.ecdfplot(data=penguins, x=\"bill_length_mm\", hue=\"species\", stat=\"count\")", "It's also possible to plot the empirical complementary CDF (1 - CDF):", "sns.ecdfplot(data=penguins, x=\"bill_length_mm\", hue=\"species\", complementary=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dbkinghorn/blog-jupyter-notebooks
ML-Logistic-Regression-theory.ipynb
gpl-3.0
[ "Logistic Regression the Theory\nDespite it's name Logistic Regression is not actually referring to regression in the sense that we covered with Linear Regression. It is a widely used classification algorithm. \"Regression\" is an historic part of the name.\nLogistic regression makes use of what is know as a binary classifier. It utilizes the Logistic function or Sigmoid function to predict a probability that the answer to some question is 1 or 0, yes or no, true or false, good or bad etc.. It's this function that will drive the algorithm and is also interesting in that it can be used as an \"activation function\" for Neural Networks. As with the posts on Linear Regression { (1), (2), (3), (4), (5), (6) } Logistic Regression will be a good algorithm to dig into for understanding Machine Learning.\nClassification with Logistic Regression\nClassification algorithms do what the name suggests i.e. they train models to predict what class some object belongs to. A very common application is image classification. Given some photo, what is it? It is the success of solving that kind of problem with sophisticated deep neural networks running on GPU's that caused the big resurgence of interest in machine learning a few years ago.\nLogistic Regression is an algorithm that is relatively simple and powerful for deciding between two classes, i.e. it's a binary classifier. It basically gives a function that is a boundary between two different classes. It can be extended to handle more than two classes by a method referred to as \"one-vs-all\" (multinomial logistic regression or softmax regression) which is really a collection of binary classifiers that just picks out the most likely class by looking at each class individually verses everything else and then picks the class that has the highest probability.\nExamples of problems that could be addressed with Logistic Regression are,\n - Spam filtering -- spam or not spam\n - Cell image -- cancer or normal\n - Production line part scan -- good or defective\n - Epidemiological study for illness, \"symptoms\" -- has it or doesn't\n - is-a-(fill in the blank) or not\nYou probably get the idea. It's a simple yes-or-no type of classifier. Logistic regression can make use of large numbers of features including continuous and discrete variables and non-linear features. It can be used for many kinds of problems.\nLogistic Regression Model\nThe first thing to understand is that this is \"supervised learning\". The training data will be labeled data and effectively have just 2 values, that is,\n$$ y \\in {0,1}$$\n$$ y = \\left{\n \\begin{array}{ll} 0 & \\text{The Negative Case} \\ 1 & \\text{The Positive Case} \\end{array} \\right. $$\n0 and 1 are the labels that are assigned to the \"objects or questions\" we are looking at. For example if we are looking at spam email then a message that is spam is labeled as 1. We want a model that will produce values between 0 and 1 and will interpret the value of the model as a probability of the test case being positive or negative (true or false).\n$$ 0 \\le h_a(x) \\le 1 $$\n$$ h_a(x) = P(y=1| x:a)$$\nThe expression above is read as The probability that $y=1$ given the values in the feature vector $x$ parameterized by $a$. 
Also, since $h$ is being interpreted as a probability the probability that $y=0$ is given by $P(y=0| x:a) = 1 - P(y=1| x:a)$ since the probabilities have to add to 1 (there are only 2 choices!).\nThe model (hypothesis) function $h$ looks like,\n$$ \\bbox[25px,border:2px solid green]{\n\\begin{align}\nh_a(x) & = g(a'x) \\ \\\n\\text{Letting } z& = a'x \\ \\\nh_a(x) =g(z) & = \\frac{1}{1 + e^{-z}} \\ \\\n\\end{align} }$$\nWhen we vectorize the model to generate algorithms we will use $X$, the augmented matrix of feature variables with a column of ones, the same as it was in the posts on linear regression. Note that when we are looking at a single input vector $x$, the first element of $x$ is set to $1$ i.e. $x_0 = 1$. This multiplies the constant term $a_0$. $h_a(x)$ and $h_a(X)$ in the case where we have $n$ features looks like,\n$$ \\begin{align}\nh_a(x) & = \\frac{1}{1+e^{-a'x}} \\ \\\nh_a(x)\n& = g(a_0 + a_1 x_1 + a_2 x_2 + \\cdots + a_n x_n) \\ \\\nh_a(X )& = g \\left(\n \\begin{bmatrix} 1 & x^{(1)}1 & x^{(1)}_2 & \\ldots & x^{(1)}_n\n\\ 1 & x^{(2)}_1 & x^{(2)}_2 & \\ldots & x^{(2)}_n\n\\ \\vdots & \\vdots & \\vdots & \\ddots & \\vdots\n\\ 1 & x^{(m)}_1 & x^{(m)}_2 & \\ldots & x^{(m)}_n \\end{bmatrix}\n\\begin{bmatrix} a{0} \\ a_{1} \\a_{2} \\ \\vdots \\ a_{n} \\end{bmatrix} \\right)\n\\end{align} $$\n$m$ is the number of elements in the test-set.\nAs was the case in Linear Regression, the feature variables can be non-linear terms such as, $x_1^2, x_1x_2, \\sqrt x_1 \\dots $. The model itself is in the class of \"Generalized Linear Models\" because the parameter vector $a$ is linear with respect to the features. The logistic regression model looks like the linear regression model \"wrapped\" as the argument to the logistic function $g$.\n$g(z)$ is the logistic function or sigmoid function. Lets load up some Python modules and see what $g$ looks like.\nSigmoid function $g(z)$", "import numpy as np # numeriacal computing\nimport matplotlib.pyplot as plt # plotting core\nimport seaborn as sns # higher level plotting tools\n%matplotlib inline\nsns.set()\n\ndef g(z) : # sigmoid function\n return 1/(1 + np.exp(-z))\n\nz = np.linspace(-10,10,100)\nplt.plot(z, g(z))\nplt.title(\"Sigmoid Function g(z) = 1/(1 + exp(-z))\", fontsize=24)", "There are several features of $g$ to note,\n- For larger values of $z$ $g(z)$ approaches 1\n- For more negative values of $z$ $g(z)$ approaches 0\n- The value of $g(0) = 0.5$\n- For $z \\ge 0$, $g(z)\\ge 0.5$\n- For $z \\lt 0$, $g(z)\\lt 0.5$\n0.5 will be the cutoff for decisions. That is, if $g(z) \\ge 0.5$ then the \"answer\" is \"the positive case\", 1, if $g(z) \\lt 0.5$ then the answer is \"the negative case\", 0.\nDecision Boundary\nThe value 0.5 mentioned above creates a boundary for classification by our model (hypothesis) $h_a(x)$\n$$\n\\begin{align} \\text{if } h_a(x) \\ge 0.5 & \\text{ then we say } &y=1 \\ \\\n \\text{if } h_a(x) \\lt 0.5 & \\text{ then } &y=0\n \\end{align} $$\nLooking at $g(z)$ more closely gives,\n $$\n \\begin{align} h_a(x) = g(a'x) \\ge 0.5 & \\text{ when} & a'x \\ge 0 \\ \\\n h_a(x) = g(a'x) \\lt 0.5 & \\text{ when} & a'x \\le 0\n \\end{align} $$\nTherefore,\n$$ \\bbox[25px,border:2px solid green]{\n\\begin{align} a'x \\ge 0.5 & \\text{ implies } & y = 1 \\ \\\n a'x \\lt 0.5 & \\text{ implies} & y = 0\n \\end{align} }$$\n\nThe Decision Boundary is the \"line\" defined by $a'x$ that separates the area where $y=0$ and $y=1$. The \"line\" defined by $a'x$ can be non-linear since the feature variables $x_i$ can be non-linear. 
The decision boundary can be any shape (curve) that fits the data. We use a Cost Function derived from the logistic regression sigmoid function to helps us find the parameters $a$ that define the optimal decision boundary $a'x$. After we have found the optimal values of $a$ the model function $h_a(x$), which uses the sigmoid function, will tell us which side of the decision boundary our \"question\" lies on based on the values of the features $x$ that we give it.\n\nIf you understand the paragraph above then you have a good idea of what logistic regression about!\nHere's some examples of what that Decision Boundary might look like;", "# Generate 2 clusters of data\nS = np.eye(2)\nx1, y1 = np.random.multivariate_normal([1,1], S, 40).T\nx2, y2 = np.random.multivariate_normal([-1,-1], S, 40).T\n\nfig, ax = plt.subplots()\nax.plot(x1,y1, \"o\", label='neg data' )\nax.plot(x2,y2, \"P\", label='pos data')\nxb = np.linspace(-3,3,100)\na = [0.55,-1.3]\nax.plot(xb, a[0] + a[1]*xb , label='b(x) = %.2f + %.2f x' %(a[0], a[1]))\nplt.title(\"Decision Boundary\", fontsize=24)\nplt.legend();", "The plot above shows 2 sets of training-data. The positive case is represented by green '+' and the negative case by blue 'o'. The red line is the decision boundary $b(x) = 0.55 -1.3x$. Any test cases that are above the line are negative and any below are positive. The parameters for that red line would be what we could have determined from doing a Logistic Regression run on those 2 sets of training data.\nThe next plot shows a case where the decision boundry is more complicated. It's represented by $b(x_1,x_2) = x_1^2 +x_2^2 - 2.5$", "fig, ax = plt.subplots()\nx3, y3 = np.random.multivariate_normal([0,0], [[.5,0],[0,.5]] , 400).T\nt = np.linspace(0,2*np.pi,400)\nax.plot((3+x3)*np.sin(t), (3+y3)*np.cos(t), \"o\")\nax.plot(x3, y3, \"P\")\n\nxb1 = np.linspace(-5.0, 5.0, 100)\nxb2 = np.linspace(-5.0, 5.0, 100)\nXb1, Xb2 = np.meshgrid(xb1,xb2)\nb = Xb1**2 + Xb2**2 - 2.5\nax.contour(Xb1,Xb2,b,[0], colors='r')\nplt.title(\"Decision Boundary\", fontsize=24)\nax.axis('equal')", "In this plot the positive outcomes are in a circular region in the center of the plot. The decision boundary the red circle.\n## Cost Function for Logistic Regression\nA cost function's main purpose is to penalize bad choices for the parameters to be optimized and reward good ones. It should be easy to minimize by having a single global minimum and not be overly sensitive to changes in its arguments. It is also nice if it is differentiable, (without difficulty) so you can find the gradient for the minimization problem. That is, it's best if it is \"convex\", \"well behaved\" and \"smooth\".\nThe cost function for logistic regression is written with logarithmic functions. An argument for using the log form of the cost function comes from the statistical derivation of the likelihood estimation for the probabilities. With the exponential form that's is a product of probabilities and the log-likelihood is a sum. [The statistical derivations are always interesting but usually complex. We don't really need to look at that to justify the cost function we will use.] The log function is also a monotonically increasing function so the negative of the log is decreasing. The minimization of a function and minimizing the negative log of that function will give the same values for the parameters. 
The log form will also be convex which means it will have a single global minimum whereas a simple \"least-squares\" cost function using the sigmoid function can have multiple minimum and abrupt changes. The log form is just better behaved!\nTo see some of this lets looks at a plot of the sigmoid function and the negative log of the sigmoid function.", "z = np.linspace(-10,10,100)\nfig, ax = plt.subplots()\nax.plot(z, g(z)) \nax.set_title('Sigmoid Function 1/(1 + exp(-z))', fontsize=24)\nax.annotate('Convex', (-7.5,0.2), fontsize=18 )\nax.annotate('Concave', (3,0.8), fontsize=18 )\n\nz = np.linspace(-10,10,100)\nplt.plot(z, -np.log(g(z)))\nplt.title(\"Log Sigmoid Function -log(1/(1 + exp(-z)))\", fontsize=24)\nplt.annotate('Convex', (-2.5,3), fontsize=18 )", "Recall that in the training-set $y$ are labels with a values or 0 or 1. The cost function will be broken down into two cases for each data point $(i)$, one for $y=1$ and one for $y=0$. These two cases can then be combined into a single cost function $J$\n$$ \\bbox[25px,border:2px solid green]{\n\\begin{align} J^{(i)}{y=1}(a) & = -log(h_a(x^{(i)})) \\ \\\n J^{(i)}{y=0}(a) & = -log(1 - h_a(x^{(i)})) \\ \\\n J(a) & = -\\frac{1}{m}\\sum^{m}_{i=1} y^{(i)} log(h_a(x^{(i)})) + (1-y^{(i)})log(1 - h_a(x^{(i)}))\n \\end{align} }$$\nYou can see that the factors $y$ and $(1-y)$ effectively pick out the terms for the cases $y=1$ and $y=0$.\nVectorized form of $J(a)$\n$J(a)$ can be written in vector form eliminating the summation sign as,\n $$ \\bbox[25px,border:2px solid green]{\n \\begin{align} h_a(X) &= g(Xa) \\\n J(a) &= -\\frac{1}{m} \\left( y' log(h_a(X) + (1-y)'log(1 - h_a(X) \\right)\n \\end{align} }$$\nTo visualize how the cost functions works look at the following plots,", "x = np.linspace(-10,10,50)\nplt.plot(g(x), -np.log(g(x)))\nplt.title(\"h(x) vs J(a)=-log(h(x)) for y = 1\", fontsize=24)\nplt.xlabel('h(x)')\nplt.ylabel('J(a)')", "You can see from this plot that when $y=1$ the cost $J(a)$ is large if $h(x)$ goes toward 0. That is, it favors $h(x)$ going to 1 which is what we want.", "x = np.linspace(-10,10,50)\nplt.plot(g(x), -np.log(1-g(x)))\nplt.title(\"h(x) vs J(a)=-log(1-h(x)) for y = 0\", fontsize=24)\nplt.xlabel('h(x)')\nplt.ylabel('J(a)')", "In this plot when $y=0$ the cost $J(a)$ is large if $h(x)$ goes toward 1. It favors $h(x)$ going to 0 which is what we want for this case.\nAlternative form of $J(a)$\nI'm going to \"simplify\" $J$ to give an alternative form that I will use to derive the gradient. The sigmoid function has some interesting properties. ($h_a(X)$ is just the sigmoid function with $z=Xa$ as a vector argument.) Here are a few useful identities,\n$$ 1 - h = 1 - \\frac{1}{1+e^{-z}} = \\frac{e^{-z}}{1+e^{-z}} = e^{-z}h $$\nThat gives,\n$$ \\log(1-h) = \\log(e^{-z}h) = \\log(e^{-z})+\\log(h)= -z +\\log(h)$$\nUsing that result $J$ can be written,\n$$ \\begin{align}\nJ(a) &= -\\frac{1}{m} \\left( y'\\log(h) + (1-y)'(\\log(h) -z) \\right) \\ \\\n&= -\\frac{1}{m} \\left(\\sum^m_i \\log(h^{(i)}) + (y-1)'z \\right)\\ \\\nJ(a) &= -\\frac{1}{m} \\left( \\sum^m_i \\log(h^{(i)}) + (y-1)'Xa \\right)\\ \\\n&= -\\frac{1}{m} \\left(1'\\log(h) + (y-1)'Xa \\right)\n\\end{align} $$\nNote that 1' is the transpose of a vector of 1's. Multiplying a column vector by a row vector of 1's is the same as summing the terms of the column vector.\nGradient of the Logistic Regression Cost Function\nYes, I am going to derive the gradient! (You don't get to see that very often so enjoy! 
I'm doing it because I can.)\nI'll use the alternative expression for $J(a)$ to find the gradient $\\nabla J(a)$. I'll use matrix differentials to find the gradient. The second term in $J$ is simple,\n$$ d[(y-1)'Xa] = (y-1)'Xda$$\nThe first term is more complicated. Keeping in vector form we have,\n$$ \\begin{align}\nd[1'\\log(h)] &= (h^{-1})'dh \\ \\\n&= (h^{-1})'d[(1+e^{Xa})^{-1}] \\ \\\n&= (h^{-1}\\odot(1+e^{Xa})^{-2}))'d[e^{-Xa}] \\ \\\n&= (h^{-1}\\odot h^2)'d[e^{-Xa}] \\ \\\n&= h'd[e^{-Xa}] \\ \\\n&= (h\\odot e^{-Xa})'d[-Xa] \\ \\\n&= -(1-h)'Xda\n\\end{align}$$\nThe funny looking symbol $\\odot$ is the Hadamard product. It just means term-by-term vector product. I used it to keep the derivation in strictly matrix/vector form.\nWith those two terms derived the differential of $J$ is,\n$$ \\begin{align}\nd[J(a)] &= -\\frac{1}{m} \\left((-(1-h)' - (y-1)')X\\right)da \\ \\\n&= \\frac{1}{m}(h-y)Xda\n\\end{align}$$ \nTherefore,\nThe vector form of the Logistic Regression Cost Funtion is\n$$ \\bbox[25px,border:2px solid green]{\n \\begin{align} h_a(X) &= g(Xa) \\ \\\n \\nabla J(a) &= \\frac{1}{m}X'(h-y) \\ \\\n &=\\frac{1}{m}X'(g(Xa) - y)\n \\end{align} }$$\nNotice how similar this expression is to the gradient of the linear regression cost function,$\\nabla J(a) = \\frac{1}{m}X'(Xa-y)$\nThat's enough! In the next post I'll do an implimentation of Logistic Regression in Python using these formulas and do some examples.\nHappy computing! --dbk" ]
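As a teaser for that follow-up, here is a minimal NumPy sketch of the vectorized formulas derived above: the cost J(a) and the gradient (1/m) X'(g(Xa) - y), fitted by plain fixed-step gradient descent on two synthetic clusters like those in the decision-boundary figure. This is only an illustrative sketch, not the author's follow-up implementation; the cluster parameters, step size, and iteration count are arbitrary choices.

```python
import numpy as np

def g(z):                       # sigmoid
    return 1.0 / (1.0 + np.exp(-z))

def cost(a, X, y):              # J(a) in the vectorized form above
    h = g(X.dot(a))
    return -(y.dot(np.log(h)) + (1 - y).dot(np.log(1 - h))) / len(y)

def grad(a, X, y):              # (1/m) X'(g(Xa) - y)
    return X.T.dot(g(X.dot(a)) - y) / len(y)

# Two overlapping Gaussian clusters, similar to the decision-boundary figure.
rng = np.random.RandomState(0)
neg = rng.multivariate_normal([1, 1], np.eye(2), 40)     # label 0
pos = rng.multivariate_normal([-1, -1], np.eye(2), 40)   # label 1

X = np.column_stack([np.ones(80), np.vstack([neg, pos])])  # augmented features
y = np.concatenate([np.zeros(40), np.ones(40)])

a = np.zeros(3)
for _ in range(5000):           # plain gradient descent with a fixed step
    a = a - 0.1 * grad(a, X, y)

print("fitted parameters:", a)
print("final cost:", cost(a, X, y))
```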
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
XinyiGong/pymks
notebooks/stress_homogenization_2D.ipynb
mit
[ "Effective Stiffness\nIntroduction\nThis example uses the MKSHomogenizationModel to create a homogenization linkage for the effective stiffness. This example starts with a brief background of the homogenization theory on the components of the effective elastic stiffness tensor for a composite material. Then the example generates random microstructures and their average stress values that will be used to show how to calibrate and use our model. We will also show how to use tools from sklearn to optimize fit parameters for the MKSHomogenizationModel. Lastly, the data is used to evaluate the MKSHomogenizationModel for effective stiffness values for a new set of microstructures.\nLinear Elasticity and Effective Elastic Modulus\nFor this example we are looking to create a homogenization linkage that predicts the effective isotropic stiffness components for two-phase microstructures. The specific stiffness component we are looking to predict in this example is $C_{xxxx}$ which is easily accessed by applying an uniaxial macroscal strain tensor (the only non-zero component is $\\varepsilon_{xx}$. \n$$ u(L, y) = u(0, y) + L\\bar{\\varepsilon}_{xx}$$\n$$ u(0, L) = u(0, 0) = 0 $$\n$$ u(x, 0) = u(x, L) $$\nMore details about these boundary conditions can be found in [1]. Using these boundary conditions, $C_{xxxx}$ can be estimated calculating the ratio of the averaged stress over the applied averaged strain.\n$$ C_{xxxx}^* \\cong \\bar{\\sigma}{xx} / \\bar{\\varepsilon}{xx}$$ \nIn this example, $C_{xxxx}$ for 6 different types of microstructures will be estimated using the MKSHomogenizationModel from pymks, and provides a method to compute $\\bar{\\sigma}{xx}$ for a new microstructure with an applied strain of $\\bar{\\varepsilon}{xx}$.", "%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n", "Data Generation\nA set of periodic microstructures and their volume averaged elastic stress values $\\bar{\\sigma}_{xx}$ can be generated by importing the make_elastic_stress_random function from pymks.datasets. This function has several arguments. n_samples is the number of samples that will be generated, size specifies the dimensions of the microstructures, grain_size controls the effective microstructure feature size, elastic_modulus and poissons_ratio are used to indicate the material property for each of the\nphases, macro_strain is the value of the applied uniaxixial strain, and the seed can be used to change the the random number generator seed.\nLet's go ahead and create 6 different types of microstructures each with 200 samples with dimensions 21 x 21. Each of the 6 samples will have a different microstructure feature size. The function will return and the microstructures and their associated volume averaged stress values.", "from pymks.datasets import make_elastic_stress_random\nsample_size = 200\ngrain_size = [(15, 2), (2, 15), (7, 7), (8, 3), (3, 9), (2, 2)]\nn_samples = [sample_size] * 6\nelastic_modulus = (410, 200)\npoissons_ratio = (0.28, 0.3)\nmacro_strain = 0.001\nsize = (21, 21)\n\nX, y = make_elastic_stress_random(n_samples=n_samples, size=size, grain_size=grain_size, \n elastic_modulus=elastic_modulus, poissons_ratio=poissons_ratio, \n macro_strain=macro_strain, seed=0)\n", "The array X contains the microstructure information and has the dimensions \nof (n_samples, Nx, Ny). 
The array y contains the average stress value for \neach of the microstructures and has dimensions of (n_samples,).", "print(X.shape)\nprint(y.shape)\n", "Lets take a look at the 6 types the microstructures to get an idea of what they \nlook like. We can do this by importing draw_microstructures.", "from pymks.tools import draw_microstructures\nX_examples = X[::sample_size]\ndraw_microstructures((X_examples[:3]))\n\n\ndraw_microstructures((X_examples[3:]))\n", "In this dataset 4 of the 6 microstructure types have grains that are elongated in either\nthe x or y directions. The remaining 2 types of samples have equiaxed grains with\ndifferent average sizes.\nLet's look at the stress values for each of the microstructures shown above.", "print('Stress Values'), (y[::200])\n", "Now that we have a dataset to work with, we can look at how to use the MKSHomogenizationModelto predict stress values for new microstructures.\nMKSHomogenizationModel Work Flow\nThe default instance of the MKSHomogenizationModel takes in a dataset and \n - calculates the 2-point statistics\n - performs dimensionality reduction using Singular Valued Decomposition (SVD)\n - and fits a polynomial regression model model to the low-dimensional representation.\nThis work flow has been shown to accurately predict effective properties in several examples [2][3], and requires that we specify the number of components used in dimensionality reduction and the order of the polynomial we will be using for the polynomial regression. In this example we will show how we can use tools from sklearn to try and optimize our selection for these two parameters.\nModeling with MKSHomogenizationModel\nIn order to make an instance of the MKSHomogenizationModel, we need to pass an instance of a basis (used to compute the 2-point statistics). For this particular example, there are only 2 discrete phases, so we will use the PrimitiveBasis from pymks. We only have two phases denoted by 0 and 1, therefore we have two local states and our domain is 0 to 1.\nLet's make an instance of the MKSHomgenizationModel.", "from pymks import MKSHomogenizationModel\nfrom pymks import PrimitiveBasis\n\nprim_basis = PrimitiveBasis(n_states=2, domain=[0, 1])\nmodel = MKSHomogenizationModel(basis=prim_basis, \n correlations=[(0, 0), (1, 1), (0, 1)])\n", "Let's take a look at the default values for the number of components and the order of the polynomial.", "print('Default Number of Components'), (model.n_components)\nprint('Default Polynomail Order'), (model.degree)\n", "These default parameters may not be the best model for a given problem, we will now show one method that can be used to optimize them.\nOptimizing the Number of Components and Polynomial Order\nTo start with, we can look at how the variance changes as a function of the number of components.\nIn general for SVD as well as PCA, the amount of variance captured in each component decreases\nas the component number increases.\nThis means that as the number of components used in the dimensionality reduction increases, the percentage of the variance will asymptotically approach 100%. Let's see if this is true for our dataset.\nIn order to do this we will change the number of components to 40 and then\nfit the data we have using the fit function. This function performs the dimensionality reduction and \nalso fits the regression model. 
Because our microstructures are periodic, we need to \nuse the periodic_axes argument when we fit the data.", "model.n_components = 40\nmodel.fit(X, y, periodic_axes=[0, 1])\n", "Now look at how the cumlative variance changes as a function of the number of components using draw_component_variance \nfrom pymks.tools.", "from pymks.tools import draw_component_variance\n\ndraw_component_variance(model.dimension_reducer.explained_variance_ratio_)\n", "Roughly 90 percent of the variance is captured with the first 5 components. This means our model may only need a few components to predict the average stress.\nNext we need to optimize the number of components and the polynomial order. To do this we are going to split the data into testing and training sets. This can be done using the train_test_spilt function from sklearn.", "from sklearn.cross_validation import train_test_split\n\nflat_shape = (X.shape[0],) + (np.prod(X.shape[1:]),)\n\nX_train, X_test, y_train, y_test = train_test_split(X.reshape(flat_shape), y,\n test_size=0.2, random_state=3)\nprint(X_train.shape)\nprint(X_test.shape)\n", "We will use cross validation with the testing data to fit a number \nof models, each with a different number \nof components and a different polynomial order.\nThen we will use the testing data to verify the best model. \nThis can be done using GridSeachCV \nfrom sklearn.\nWe will pass a dictionary params_to_tune with the range of\npolynomial order degree and components n_components we want to try.\nA dictionary fit_params can be used to pass the periodic_axes variable to \ncalculate periodic 2-point statistics. The argument cv can be used to specify \nthe number of folds used in cross validation and n_jobs can be used to specify \nthe number of jobs that are ran in parallel.\nLet's vary n_components from 1 to 7 and degree from 1 to 3.", "from sklearn.grid_search import GridSearchCV\n\nparams_to_tune = {'degree': np.arange(1, 4), 'n_components': np.arange(1, 8)}\nfit_params = {'size': X[0].shape, 'periodic_axes': [0, 1]}\ngs = GridSearchCV(model, params_to_tune, cv=12, n_jobs=6, fit_params=fit_params).fit(X_train, y_train)\n", "The default score method for the MKSHomogenizationModel is the R-squared value. Let's look at the how the mean R-squared values and their \nstandard deviations change as we varied the number of n_components and degree using\ndraw_gridscores_matrix from pymks.tools.", "from pymks.tools import draw_gridscores_matrix\n\ndraw_gridscores_matrix(gs, ['n_components', 'degree'], score_label='R-Squared',\n param_labels=['Number of Components', 'Order of Polynomial'])\n", "It looks like we get a poor fit when only the first and second component are used, and when we increase\nthe polynomial order and the components together. The models have a high standard deviation and \npoor R-squared values for both of these cases.\nThere seems to be several potential models that use 3 to 6 components. It's difficult to see which model \nis the best. Let's use our testing data X_test to see which model performs the best.", "print('Order of Polynomial'), (gs.best_estimator_.degree)\nprint('Number of Components'), (gs.best_estimator_.n_components)\nprint('R-squared Value'), (gs.score(X_test, y_test))\n", "For the parameter range that we searched, we have found that a model with 3rd order polynomial \nand 3 components had the best R-squared value. It's difficult to see the differences in the score\nvalues and the standard deviation when we have 3 or more components. 
Let's take a closer look at those values using draw_grid_scores.", "from pymks.tools import draw_gridscores\n\ngs_deg_1 = [x for x in gs.grid_scores_ \\\n if x.parameters['degree'] == 1][2:-1]\ngs_deg_2 = [x for x in gs.grid_scores_ \\\n if x.parameters['degree'] == 2][2:-1]\ngs_deg_3 = [x for x in gs.grid_scores_ \\\n if x.parameters['degree'] == 3][2:-1]\n\ndraw_gridscores([gs_deg_1, gs_deg_2, gs_deg_3], 'n_components', \n data_labels=['1st Order', '2nd Order', '3rd Order'], \n colors=['#f46d43', '#1a9641', '#762a83'],\n param_label='Number of Components', score_label='R-Squared')\n", "As we said, a model with a 3rd order polynomial and 3 components will give us the best result,\nbut there are several other models that will likely provide comparable results. Let's make the\nbest model from our grid scores.", "model = gs.best_estimator_\n", "Prediction using MKSHomogenizationModel\nNow that we have selected values for n_components and degree, lets fit the model with the data. Again because\nour microstructures are periodic, we need to use the periodic_axes argument.", "model.fit(X, y, periodic_axes=[0, 1])\n", "Let's generate some more data that can be used to try and validate our model's prediction accuracy. We are going to\ngenerate 20 samples of all six different types of microstructures using the same \nmake_elastic_stress_random function.", "test_sample_size = 20\nn_samples = [test_sample_size] * 6\nX_new, y_new = make_elastic_stress_random(n_samples=n_samples, size=size, grain_size=grain_size, \n elastic_modulus=elastic_modulus, poissons_ratio=poissons_ratio, \n macro_strain=macro_strain, seed=1)\n", "Now let's predict the stress values for the new microstructures.", "y_predict = model.predict(X_new, periodic_axes=[0, 1])\n", "We can look to see if the low-dimensional representation of the \nnew data is similar to the low-dimensional representation of the data \nwe used to fit the model using draw_components from pymks.tools.", "from pymks.tools import draw_components\n\ndraw_components([model.reduced_fit_data[:, :2], \n model.reduced_predict_data[:, :2]],\n ['Training Data', 'Testing Data'])\n", "The predicted data seems to be reasonably similar to the data we used to fit the model\nwith. Now let's look at the score value for the predicted data.", "from sklearn.metrics import r2_score\nprint('R-squared'), (model.score(X_new, y_new, periodic_axes=[0, 1]))\n", "Looks pretty good. Let's print out one actual and predicted stress value for each of the 6 microstructure types to see how they compare.", "print('Actual Stress '), (y_new[::20])\nprint('Predicted Stress'), (y_predict[::20])\n", "Lastly, we can also evaluate our prediction by looking at a goodness-of-fit plot. We\ncan do this by importing draw_goodness_of_fit from pymks.tools.", "from pymks.tools import draw_goodness_of_fit\n\nfit_data = np.array([y, model.predict(X, periodic_axes=[0, 1])])\npred_data = np.array([y_new, y_predict])\ndraw_goodness_of_fit(fit_data, pred_data, ['Training Data', 'Testing Data'])\n", "We can see that the MKSHomogenizationModel has created a homogenization linkage for the effective stiffness for the 6 different microstructures and has predicted the average stress values for our new microstructures reasonably well.\nReferences\n[1] Landi, G., S.R. Niezgoda, S.R. Kalidindi, Multi-scale modeling of elastic response of three-dimensional voxel-based microstructure datasets using novel DFT-based knowledge systems. Acta Materialia, 2009. 58 (7): p. 
2716-2725 doi:10.1016/j.actamat.2010.01.007.\n[2] Çeçen, A., et al. \"A data-driven approach to establishing microstructure–property relationships in porous transport layers of polymer electrolyte fuel cells.\" Journal of Power Sources 245 (2014): 144-153. doi:10.1016/j.jpowsour.2013.06.100\n[3] Deshpande, P. D., et al. \"Application of Statistical and Machine Learning Techniques for Correlating Properties to Composition and Manufacturing Processes of Steels.\" 2nd World Congress on Integrated Computational Materials Engineering. John Wiley & Sons, Inc. doi:10.1002/9781118767061.ch25" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Kaggle/learntools
notebooks/ml_intermediate/raw/ex7.ipynb
apache-2.0
[ "Most people find target leakage very tricky until they've thought about it for a long time.\nSo, before trying to think about leakage in the housing price example, we'll go through a few examples in other applications. Things will feel more familiar once you come back to a question about house prices.\nSetup\nThe questions below will give you feedback on your answers. Run the following cell to set up the feedback system.", "# Set up code checking\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.ml_intermediate.ex7 import *\nprint(\"Setup Complete\")", "Step 1: The Data Science of Shoelaces\nNike has hired you as a data science consultant to help them save money on shoe materials. Your first assignment is to review a model one of their employees built to predict how many shoelaces they'll need each month. The features going into the machine learning model include:\n- The current month (January, February, etc)\n- Advertising expenditures in the previous month\n- Various macroeconomic features (like the unemployment rate) as of the beginning of the current month\n- The amount of leather they ended up using in the current month\nThe results show the model is almost perfectly accurate if you include the feature about how much leather they used. But it is only moderately accurate if you leave that feature out. You realize this is because the amount of leather they use is a perfect indicator of how many shoes they produce, which in turn tells you how many shoelaces they need.\nDo you think the leather used feature constitutes a source of data leakage? If your answer is \"it depends,\" what does it depend on?\nAfter you have thought about your answer, check it against the solution below.", "# Check your answer (Run this code cell to receive credit!)\nq_1.check()", "Step 2: Return of the Shoelaces\nYou have a new idea. You could use the amount of leather Nike ordered (rather than the amount they actually used) leading up to a given month as a predictor in your shoelace model.\nDoes this change your answer about whether there is a leakage problem? If you answer \"it depends,\" what does it depend on?", "# Check your answer (Run this code cell to receive credit!)\nq_2.check()", "Step 3: Getting Rich With Cryptocurrencies?\nYou saved Nike so much money that they gave you a bonus. Congratulations.\nYour friend, who is also a data scientist, says he has built a model that will let you turn your bonus into millions of dollars. Specifically, his model predicts the price of a new cryptocurrency (like Bitcoin, but a newer one) one day ahead of the moment of prediction. His plan is to purchase the cryptocurrency whenever the model says the price of the currency (in dollars) is about to go up.\nThe most important features in his model are:\n- Current price of the currency\n- Amount of the currency sold in the last 24 hours\n- Change in the currency price in the last 24 hours\n- Change in the currency price in the last 1 hour\n- Number of new tweets in the last 24 hours that mention the currency\nThe value of the cryptocurrency in dollars has fluctuated up and down by over $\\$$100 in the last year, and yet his model's average error is less than $\\$$1. He says this is proof his model is accurate, and you should invest with him, buying the currency whenever the model says it is about to go up.\nIs he right? 
If there is a problem with his model, what is it?", "# Check your answer (Run this code cell to receive credit!)\nq_3.check()", "Step 4: Preventing Infections\nAn agency that provides healthcare wants to predict which patients from a rare surgery are at risk of infection, so it can alert the nurses to be especially careful when following up with those patients.\nYou want to build a model. Each row in the modeling dataset will be a single patient who received the surgery, and the prediction target will be whether they got an infection.\nSome surgeons may do the procedure in a manner that raises or lowers the risk of infection. But how can you best incorporate the surgeon information into the model?\nYou have a clever idea. \n1. Take all surgeries by each surgeon and calculate the infection rate among those surgeons.\n2. For each patient in the data, find out who the surgeon was and plug in that surgeon's average infection rate as a feature.\nDoes this pose any target leakage issues?\nDoes it pose any train-test contamination issues?", "# Check your answer (Run this code cell to receive credit!)\nq_4.check()", "Step 5: Housing Prices\nYou will build a model to predict housing prices. The model will be deployed on an ongoing basis, to predict the price of a new house when a description is added to a website. Here are four features that could be used as predictors.\n1. Size of the house (in square meters)\n2. Average sales price of homes in the same neighborhood\n3. Latitude and longitude of the house\n4. Whether the house has a basement\nYou have historic data to train and validate the model.\nWhich of the features is most likely to be a source of leakage?", "# Fill in the line below with one of 1, 2, 3 or 4.\npotential_leakage_feature = ____\n\n# Check your answer\nq_5.check()\n\n#%%RM_IF(PROD)%%\npotential_leakage_feature = 1\nq_5.assert_check_failed()\n\n#%%RM_IF(PROD)%%\npotential_leakage_feature = 2\nq_5.assert_check_passed()\n\n#_COMMENT_IF(PROD)_\nq_5.hint()\n#_COMMENT_IF(PROD)_\nq_5.solution()", "Conclusion\nLeakage is a hard and subtle issue. You should be proud if you picked up on the issues in these examples.\nNow you have the tools to make highly accurate models, and pick up on the most difficult practical problems that arise with applying these models to solve real problems.\nThere is still a lot of room to build knowledge and experience. Try out a Competition or look through our Datasets to practice your new skills.\nAgain, Congratulations!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
esa-as/2016-ml-contest
HouMath/Face_classification_HouMath_XGB_03.ipynb
apache-2.0
[ "In this notebook, we mainly utilize extreme gradient boost to improve the prediction model originally proposed in the TLE 2016 November machine learning tutorial. Extreme gradient boost can be viewed as an enhanced version of gradient boost by using a more regularized model formalization to control over-fitting, and XGB usually performs better. Applications of XGB can be found in many Kaggle competitions. Some recommended tutorials can be found online.\nOur work will be organized in the following order:\n•Background\n•Exploratory Data Analysis\n•Data Preparation and Model Selection\n•Final Results\nBackground\nThe dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).\nThe dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types.\nThis data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector, plus validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.\nThe seven predictor variables are:\n•Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10), photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note that some wells do not have PE.\n•Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)\nThe nine discrete facies (classes of rocks) are:\n1.Nonmarine sandstone\n2.Nonmarine coarse siltstone \n3.Nonmarine fine siltstone \n4.Marine siltstone and shale \n5.Mudstone (limestone)\n6.Wackestone (limestone)\n7.Dolomite\n8.Packstone-grainstone (limestone)\n9.Phylloid-algal bafflestone (limestone)\nThese facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.\nFacies/ Label/ Adjacent Facies\n1 SS 2 \n2 CSiS 1,3 \n3 FSiS 2 \n4 SiSh 5 \n5 MS 4,6 \n6 WS 5,7 \n7 D 6,8 \n8 PS 6,7,9 \n9 BS 7,8 \nExploratory Data Analysis\nAfter the background introduction, we start to import the pandas library for some basic data analysis and manipulation. 
The matplotblib and seaborn are imported for data vislization.", "%matplotlib inline\nimport pandas as pd\nfrom pandas.tools.plotting import scatter_matrix\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport seaborn as sns\nimport matplotlib.colors as colors\n\nimport xgboost as xgb\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix, f1_score, accuracy_score\nfrom classification_utilities import display_cm, display_adj_cm\nfrom sklearn.model_selection import GridSearchCV\n\n\nfrom sklearn.model_selection import validation_curve\nfrom sklearn.datasets import load_svmlight_files\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.datasets import make_classification\nfrom xgboost.sklearn import XGBClassifier\nfrom scipy.sparse import vstack\n\nseed = 123\nnp.random.seed(seed)\n\nimport pandas as pd\nfilename = './facies_vectors.csv'\ntraining_data = pd.read_csv(filename)\ntraining_data.head(10)\n\ntraining_data['Well Name'] = training_data['Well Name'].astype('category')\ntraining_data['Formation'] = training_data['Formation'].astype('category')\ntraining_data.info()\n\ntraining_data.describe()\n\nfacies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00','#1B4F72',\n '#2E86C1', '#AED6F1', '#A569BD', '#196F3D']\n\nfacies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']\n\nfacies_counts = training_data['Facies'].value_counts().sort_index()\nfacies_counts.index = facies_labels\nfacies_counts.plot(kind='bar',color=facies_colors,title='Distribution of Training Data by Facies')\n\nsns.heatmap(training_data.corr(), vmax=1.0, square=True)", "Data Preparation and Model Selection\nNow we are ready to test the XGB approach, and will use confusion matrix and f1_score, which were imported, as metric for classification, as well as GridSearchCV, which is an excellent tool for parameter optimization.", "import xgboost as xgb\nX_train = training_data.drop(['Facies', 'Well Name','Formation','Depth'], axis = 1 ) \nY_train = training_data['Facies' ] - 1\ndtrain = xgb.DMatrix(X_train, Y_train)\n\ntrain = X_train.copy()\n\ntrain['Facies']=Y_train\n\ntrain.head()", "The accuracy function and accuracy_adjacent function are defined in the following to quatify the prediction correctness.", "def accuracy(conf):\n total_correct = 0.\n nb_classes = conf.shape[0]\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n acc = total_correct/sum(sum(conf))\n return acc\n\nadjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])\n\ndef accuracy_adjacent(conf, adjacent_facies):\n nb_classes = conf.shape[0]\n total_correct = 0.\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n for j in adjacent_facies[i]:\n total_correct += conf[i][j]\n return total_correct / sum(sum(conf))\n\ntarget='Facies'", "Before processing further, we define a functin which will help us create XGBoost models and perform cross-validation.", "def modelfit(alg, dtrain, features, useTrainCV=True,\n cv_fold=10,early_stopping_rounds = 50):\n if useTrainCV:\n xgb_param = alg.get_xgb_params()\n xgb_param['num_class']=9\n xgtrain = xgb.DMatrix(train[features].values,label = train[target].values)\n cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=\n alg.get_params()['n_estimators'],nfold=cv_fold,\n metrics='merror',early_stopping_rounds = early_stopping_rounds)\n alg.set_params(n_estimators=cvresult.shape[0])\n \n #Fit the algorithm on the data\n alg.fit(dtrain[features], dtrain[target],eval_metric='merror')\n \n #Predict training 
set:\n dtrain_prediction = alg.predict(dtrain[features])\n dtrain_predprob = alg.predict_proba(dtrain[features])[:,1]\n \n #Print model report\n print (\"\\nModel Report\")\n print (\"Accuracy : %.4g\" % accuracy_score(dtrain[target], \n dtrain_prediction))\n print (\"F1 score (Train) : %f\" % f1_score(dtrain[target], \n dtrain_prediction,average='weighted'))\n feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)\n feat_imp.plot(kind='bar',title='Feature Importances')\n plt.ylabel('Feature Importance Score')\n\nfeatures =[x for x in X_train.columns]\nfeatures", "General Approach for Parameter Tuning\nWe are going to perform the steps as follows:\n1.Choose a relatively high learning rate, e.g., 0.1. Usually somewhere between 0.05 and 0.3 should work for different problems. \n2.Determine the optimum number of trees for this learning rate. XGBoost has a very useful function called \"cv\" which performs cross-validation at each boosting iteration and thus returns the optimum number of trees required.\n3.Tune tree-based parameters (max_depth, min_child_weight, gamma, subsample, colsample_bytree) for the chosen learning rate and number of trees. \n4.Tune regularization parameters (lambda, alpha) for xgboost, which can help reduce model complexity and enhance performance.\n5.Lower the learning rate and decide the optimal parameters.\nStep 1: Fix learning rate and number of estimators for tuning tree-based parameters\nIn order to decide on boosting parameters, we need to set some initial values of other parameters. Let's take the following values:\n1.max_depth = 5\n2.min_child_weight = 1 \n3.gamma = 0 \n4.subsample, colsample_bytree = 0.8 : This is a commonly used start value. \n5.scale_pos_weight = 1\nPlease note that all the above are just initial estimates and will be tuned later. Let's take the default learning rate of 0.1 here and check the optimum number of trees using the cv function of xgboost. 
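For reference, the bare call pattern is roughly the following (a sketch with placeholder parameter values, not the exact call used in this notebook):\n```python\n# Cross-validated boosting with early stopping: the number of rounds kept before\n# stopping approximates the optimal n_estimators for the chosen learning rate.\nparams_cv = {'eta': 0.1, 'max_depth': 5, 'objective': 'multi:softmax', 'num_class': 9}\ncv_result = xgb.cv(params_cv, dtrain, num_boost_round=1000, nfold=10,\n                   metrics='merror', early_stopping_rounds=50)\nprint(cv_result.shape[0])\n```\n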
The function defined above will do it for us.", "from xgboost import XGBClassifier\nxgb1 = XGBClassifier(\n learning_rate = 0.1,\n n_estimators=1000,\n max_depth=5,\n min_child_weight=1,\n gamma = 0,\n subsample=0.8,\n colsample_bytree=0.8,\n objective='multi:softmax',\n nthread =4,\n seed = 123,\n)\n\nmodelfit(xgb1, train, features)\n\nxgb1", "Step 2: Tune max_depth and min_child_weight", "from sklearn.model_selection import GridSearchCV\nparam_test1={\n 'max_depth':range(3,10,2),\n 'min_child_weight':range(1,6,2)\n}\n\ngs1 = GridSearchCV(xgb1,param_grid=param_test1, \n scoring='accuracy', n_jobs=4,iid=False, cv=5)\ngs1.fit(train[features],train[target])\ngs1.grid_scores_, gs1.best_params_,gs1.best_score_\n\nparam_test2={\n 'max_depth':[8,9,10],\n 'min_child_weight':[1,2]\n}\n\ngs2 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,\n gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=5,\n min_child_weight=1, n_estimators=290, nthread=4,\n objective='multi:softprob', reg_alpha=0, reg_lambda=1,\n scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test2, \n scoring='accuracy', n_jobs=4,iid=False, cv=5)\ngs2.fit(train[features],train[target])\ngs2.grid_scores_, gs2.best_params_,gs2.best_score_\n\ngs2.best_estimator_", "Step 3: Tune gamma", "param_test3={\n 'gamma':[i/10.0 for i in range(0,5)]\n}\n\ngs3 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,\n gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=9,\n min_child_weight=1, n_estimators=370, nthread=4,\n objective='multi:softprob', reg_alpha=0, reg_lambda=1,\n scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test3, \n scoring='accuracy', n_jobs=4,iid=False, cv=5)\ngs3.fit(train[features],train[target])\ngs3.grid_scores_, gs3.best_params_,gs3.best_score_\n\nxgb2 = XGBClassifier(\n learning_rate = 0.1,\n n_estimators=1000,\n max_depth=9,\n min_child_weight=1,\n gamma = 0.2,\n subsample=0.8,\n colsample_bytree=0.8,\n objective='multi:softmax',\n nthread =4,\n scale_pos_weight=1,\n seed = seed,\n)\nmodelfit(xgb2,train,features)\n\nxgb2", "Step 4:Tune subsample and colsample_bytree", "param_test4={\n 'subsample':[i/10.0 for i in range(6,10)],\n 'colsample_bytree':[i/10.0 for i in range(6,10)]\n}\n\ngs4 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,\n gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,\n min_child_weight=1, n_estimators=236, nthread=4,\n objective='multi:softprob', reg_alpha=0, reg_lambda=1,\n scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test4, \n scoring='accuracy', n_jobs=4,iid=False, cv=5)\ngs4.fit(train[features],train[target])\ngs4.grid_scores_, gs4.best_params_,gs4.best_score_\n\nparam_test4b={\n 'subsample':[i/10.0 for i in range(5,7)],\n}\n\ngs4b = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,\n gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,\n min_child_weight=1, n_estimators=236, nthread=4,\n objective='multi:softprob', reg_alpha=0, reg_lambda=1,\n scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test4b, \n scoring='accuracy', n_jobs=4,iid=False, cv=5)\ngs4b.fit(train[features],train[target])\ngs4b.grid_scores_, gs4b.best_params_,gs4b.best_score_", "Step 5: Tuning Regularization Parameters", "param_test5={\n 'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100]\n}\n\ngs5 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,\n gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,\n min_child_weight=1, n_estimators=236, nthread=4,\n 
objective='multi:softprob', reg_alpha=0, reg_lambda=1,\n scale_pos_weight=1, seed=123,subsample=0.6),param_grid=param_test5, \n scoring='accuracy', n_jobs=4,iid=False, cv=5)\ngs5.fit(train[features],train[target])\ngs5.grid_scores_, gs5.best_params_,gs5.best_score_\n\nparam_test6={\n 'reg_alpha':[0, 0.001, 0.005, 0.01, 0.05]\n}\n\ngs6 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,\n gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,\n min_child_weight=1, n_estimators=236, nthread=4,\n objective='multi:softprob', reg_alpha=0, reg_lambda=1,\n scale_pos_weight=1, seed=123,subsample=0.6),param_grid=param_test6, \n scoring='accuracy', n_jobs=4,iid=False, cv=5)\ngs6.fit(train[features],train[target])\ngs6.grid_scores_, gs6.best_params_,gs6.best_score_\n\nxgb3 = XGBClassifier(\n learning_rate = 0.1,\n n_estimators=1000,\n max_depth=9,\n min_child_weight=1,\n gamma = 0.2,\n subsample=0.6,\n colsample_bytree=0.8,\n reg_alpha=0.05,\n objective='multi:softmax',\n nthread =4,\n scale_pos_weight=1,\n seed = seed,\n)\nmodelfit(xgb3,train,features)\n\nxgb3\n\nmodel = XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=0.8,\n gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,\n min_child_weight=1, missing=None, n_estimators=122, nthread=4,\n objective='multi:softprob', reg_alpha=0.05, reg_lambda=1,\n scale_pos_weight=1, seed=123, silent=True, subsample=0.6)\nmodel.fit(X_train, Y_train)\nxgb.plot_importance(model)", "Step 6: Reducing Learning Rate", "xgb4 = XGBClassifier(\n learning_rate = 0.01,\n n_estimators=5000,\n max_depth=9,\n min_child_weight=1,\n gamma = 0.2,\n subsample=0.6,\n colsample_bytree=0.8,\n reg_alpha=0.05,\n objective='multi:softmax',\n nthread =4,\n scale_pos_weight=1,\n seed = seed,\n)\nmodelfit(xgb4,train,features)\n\nxgb4", "Cross Validation\nNext we use our tuned final model to do cross validation on the training data set. One of the wells will be used as test data and the rest will be the training data. 
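This leave-one-well-out scheme is a form of leave-one-group-out cross-validation; purely as an illustration (a sketch, not the loop actually used below), the same splits could be generated with scikit-learn, assuming the well name of each training example is used as the group label.\n```python\n# Sketch: generate leave-one-well-out train/test splits with scikit-learn.\nfrom sklearn.model_selection import LeaveOneGroupOut\n\ngroups = training_data['Well Name'].values # one well label per example\nlogo = LeaveOneGroupOut()\nfor train_idx, test_idx in logo.split(X_train, Y_train, groups=groups):\n    pass # fit on the train_idx rows, evaluate on the single held-out well\n```\n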
Each iteration, a different well is chosen.", "# Load data \nfilename = './facies_vectors.csv'\ndata = pd.read_csv(filename)\n\n# Change to category data type\ndata['Well Name'] = data['Well Name'].astype('category')\ndata['Formation'] = data['Formation'].astype('category')\n\n# Leave one well out for cross validation \nwell_names = data['Well Name'].unique()\nf1=[]\nfor i in range(len(well_names)):\n \n # Split data for training and testing\n X_train = data.drop(['Facies', 'Formation','Depth'], axis = 1 ) \n Y_train = data['Facies' ] - 1\n \n train_X = X_train[X_train['Well Name'] != well_names[i] ]\n train_Y = Y_train[X_train['Well Name'] != well_names[i] ]\n test_X = X_train[X_train['Well Name'] == well_names[i] ]\n test_Y = Y_train[X_train['Well Name'] == well_names[i] ]\n\n train_X = train_X.drop(['Well Name'], axis = 1 ) \n test_X = test_X.drop(['Well Name'], axis = 1 )\n\n # Final recommended model based on the extensive parameters search\n model_final = XGBClassifier(base_score=0.5, colsample_bylevel=1,\n colsample_bytree=0.8, gamma=0.2,\n learning_rate=0.01, max_delta_step=0, max_depth=9,\n min_child_weight=1, missing=None, n_estimators=432, nthread=4,\n objective='multi:softmax', reg_alpha=0.05, reg_lambda=1,\n scale_pos_weight=1, seed=123, silent=1,\n subsample=0.6)\n # Train the model based on training data\n model_final.fit( train_X , train_Y , eval_metric = 'merror' )\n\n\n # Predict on the test set\n predictions = model_final.predict(test_X)\n\n # Print report\n print (\"\\n------------------------------------------------------\")\n print (\"Validation on the leaving out well \" + well_names[i])\n conf = confusion_matrix( test_Y, predictions, labels = np.arange(9) )\n print (\"\\nModel Report\")\n print (\"-Accuracy: %.6f\" % ( accuracy(conf) ))\n print (\"-Adjacent Accuracy: %.6f\" % ( accuracy_adjacent(conf, adjacent_facies) ))\n print (\"-F1 Score: %.6f\" % ( f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ) ))\n f1.append(f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ))\n facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',\n 'WS', 'D','PS', 'BS']\n print (\"\\nConfusion Matrix Results\")\n from classification_utilities import display_cm, display_adj_cm\n display_cm(conf, facies_labels,display_metrics=True, hide_zeros=True)\n \nprint (\"\\n------------------------------------------------------\")\nprint (\"Final Results\")\nprint (\"-Average F1 Score: %6f\" % (sum(f1)/(1.0*len(f1))))", "Model from all data set", "# Load data \nfilename = './facies_vectors.csv'\ndata = pd.read_csv(filename)\n\n# Change to category data type\ndata['Well Name'] = data['Well Name'].astype('category')\ndata['Formation'] = data['Formation'].astype('category')\n\n# Split data for training and testing\nX_train_all = data.drop(['Facies', 'Formation','Depth'], axis = 1 ) \nY_train_all = data['Facies' ] - 1\n\nX_train_all = X_train_all.drop(['Well Name'], axis = 1)\n\n# Final recommended model based on the extensive parameters search\nmodel_final = XGBClassifier(base_score=0.5, colsample_bylevel=1,\n colsample_bytree=0.8, gamma=0.2,\n learning_rate=0.01, max_delta_step=0, max_depth=9,\n min_child_weight=1, missing=None, n_estimators=432, nthread=4,\n objective='multi:softmax', reg_alpha=0.05, reg_lambda=1,\n scale_pos_weight=1, seed=123, silent=1,\n subsample=0.6)\n\n# Train the model based on training data\nmodel_final.fit(X_train_all , Y_train_all , eval_metric = 'merror' )\n\n\n# Leave one well out for cross validation \nwell_names = 
data['Well Name'].unique()\nf1=[]\n\nfor i in range(len(well_names)):\n \n X_train = data.drop(['Facies', 'Formation','Depth'], axis = 1 ) \n Y_train = data['Facies' ] - 1\n\n train_X = X_train[X_train['Well Name'] != well_names[i] ]\n train_Y = Y_train[X_train['Well Name'] != well_names[i] ]\n test_X = X_train[X_train['Well Name'] == well_names[i] ]\n test_Y = Y_train[X_train['Well Name'] == well_names[i] ]\n\n train_X = train_X.drop(['Well Name'], axis = 1 ) \n test_X = test_X.drop(['Well Name'], axis = 1 )\n #print(test_Y)\n predictions = model_final.predict(test_X)\n \n # Print report\n print (\"\\n------------------------------------------------------\")\n print (\"Validation on the leaving out well \" + well_names[i])\n conf = confusion_matrix( test_Y, predictions, labels = np.arange(9) )\n print (\"\\nModel Report\")\n print (\"-Accuracy: %.6f\" % ( accuracy(conf) ))\n print (\"-Adjacent Accuracy: %.6f\" % ( accuracy_adjacent(conf, adjacent_facies) ))\n print (\"-F1 Score: %.6f\" % ( f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ) ))\n f1.append(f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ))\n facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',\n 'WS', 'D','PS', 'BS']\n print (\"\\nConfusion Matrix Results\")\n from classification_utilities import display_cm, display_adj_cm\n display_cm(conf, facies_labels,display_metrics=True, hide_zeros=True)\n \nprint (\"\\n------------------------------------------------------\")\nprint (\"Final Results\")\nprint (\"-Average F1 Score: %6f\" % (sum(f1)/(1.0*len(f1))))", "Use final model to predict the given test data set", "# Load test data\ntest_data = pd.read_csv('validation_data_nofacies.csv')\ntest_data['Well Name'] = test_data['Well Name'].astype('category')\nX_test = test_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)\n# Predict facies of unclassified data\nY_predicted = model_final.predict(X_test)\ntest_data['Facies'] = Y_predicted + 1\n# Store the prediction\ntest_data.to_csv('Prediction3.csv')\n\ntest_data", "Future work, make more customerized objective function. Also, we could use RandomizedSearchCV instead of GridSearchCV to avoild potential local minimal trap and further improve the test results." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
morphean/deep-learning
linear-regression/Linear-Regression.ipynb
apache-2.0
[ "%matplotlib inline\nimport matplotlib", "Linear Regression\nThis is one of the simplest models to use, since it uses linearly correlated data to 'predict' a value given a correlated input.", "import pandas as pd\nfrom sklearn import linear_model\nimport matplotlib.pyplot as plt", "We will use pandas to handle reading of the data; pandas is pretty much the de facto standard for data manipulation in Python.", "df = pd.read_fwf('brain_body.txt')\nx_values = df[['Brain']]\ny_values = df[['Body']]\ndf.head()", "Now let's train the model using the data.", "import warnings\nwarnings.filterwarnings(action=\"ignore\", module=\"scipy\", message=\"^internal gelsd\")\n\nbody_regression = linear_model.LinearRegression()\nbody_regression.fit(x_values, y_values)\n\nfig = plt.figure()\n\nplt.scatter(x_values, y_values)\nplt.plot(x_values, body_regression.predict(x_values))\n\n#add some axes and labelling (x axis is brain weight, y axis is body weight)\nfig.suptitle('Linear Regression', fontsize=14, fontweight='bold')\nax = fig.add_subplot(111)\n\nax.set_title('Body vs Brain')\nfig.subplots_adjust(top=0.85)\nax.set_xlabel('Brain weight (kg)')\nax.set_ylabel('Body weight (kg)')\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dsavransky/MAE4060
Notebooks/Torque Free 3-1-3 Body Dynamics.ipynb
mit
[ "from miscpy.utils.sympyhelpers import *\ninit_printing()\nfrom sympy.utilities.codegen import codegen", "Set up rotation matrices representing a 3-1-3 $(\\psi,\\theta,\\phi)$ Euler angle set.", "aCi = rotMat(3,psi)\ncCa = rotMat(1,th)\nbCc = rotMat(3,ph)\naCi,cCa,bCc\n\nbCi = bCc*cCa*aCi; bCi #3-1-3 rotation\n\nbCi_dot = difftotalmat(bCi,t,{th:thd,psi:psid,ph:phd});\nbCi_dot", "$\\tilde{\\omega} = {}^\\mathcal{B}C^{\\mathcal{I}} \\left({}^\\mathcal{B}{\\dot{C}}^{\\mathcal{I}}\\right)^T$", "omega_tilde = bCi*bCi_dot.T; omega_tilde", "$\\left[{}^\\mathcal{I}\\boldsymbol{\\omega}^{\\mathcal{B}}\\right]_\\mathcal{B} = \\left[ \\tilde{\\omega}_{32} \\quad \\tilde{\\omega}_{13} \\quad \\tilde{\\omega}_{21} \\right]^T$", "omega = simplify(Matrix([omega_tilde[2,1],omega_tilde[0,2],omega_tilde[1,0]]))\nomega\n\nw1,w2,w3 = symbols('omega_1,omega_2,omega_3')\n\ns0 = solve(omega - Matrix([w1,w2,w3]),[psid,thd,phd]); s0", "Find EOM (second derivatives of Euler Angles)", "I1,I2,I3 = symbols(\"I_1,I_2,I_3\",real=True,positive=True)\niWb_B = omega\nI_G_B = diag(I1,I2,I3)\nI_G_B\n\ndiffmap = {th:thd,psi:psid,ph:phd,thd:thdd,psid:psidd,phd:phdd}\ndiffmap\n\nt1 = I_G_B*difftotalmat(iWb_B,t,diffmap) \nt2 = skew(iWb_B)*I_G_B*iWb_B\nt1,t2\n\ndh_G_B = t1+t2\ndh_G_B\n\nt3 = expand(dh_G_B[0]*cos(ph)*I2 - dh_G_B[1]*sin(ph)*I1)\n\nsol_thdd = simplify(solve(t3,thdd)) \nsol_thdd\n\nt4= expand(dh_G_B[0]*sin(ph)*I2 + dh_G_B[1]*cos(ph)*I1)\nt4\n\nsol_psidd = simplify(solve(t4,psidd)) \nsol_psidd\n\nsol_phdd = solve(dh_G_B[2],phdd)\nsol_phdd", "Find initial orientation such that $\\mathbf h$ is down-pointing", "h = sqrt(((I_G_B*Matrix([w1,w2,w3])).transpose()*(I_G_B*Matrix([w1,w2,w3])))[0]);h\n\neqs1 = simplify(bCi.transpose()*I_G_B*Matrix([w1,w2,w3]) - Matrix([0,0,-h])); eqs1 #equal 0\n\nsimplify(solve(simplify(eqs1[0]*cos(psi) + eqs1[1]*sin(psi)),ph)) #phi solution\n\nsolve(simplify(expand(simplify(-eqs1[0]*sin(psi) + eqs1[1]*cos(psi)).subs(ph,atan(I1*w1/I2/w2)))),th) #th solution\n\nsimplify(eqs1[2].subs(ph,atan(I1*w1/I2/w2)))", "Generate MATLAB Code", "out = codegen((\"eom1\",sol_psidd[0]), 'Octave', argument_sequence=[th,thd,psi,psid,ph,phd,I1,I2,I3]);out\n\ncodegen((\"eom1\",sol_thdd[0]), 'Octave', argument_sequence=[th,thd,psi,psid,ph,phd,I1,I2,I3])\n\ncodegen((\"eom1\",sol_phdd[0]), 'Octave', argument_sequence=[th,thd,psi,psid,ph,phd,I1,I2,I3,psidd])\n\ncodegen((\"eom1\",[s0[psid],s0[thd],s0[phd]]), 'Octave', argument_sequence=[w1,w2,w3,th,thd,psi,psid,ph,phd,I1,I2,I3,psidd])\n\ncodegen((\"eom1\",bCi), 'Octave', argument_sequence=[th,thd,psi,psid,ph,phd,I1,I2,I3,psidd])\n\ncodegen((\"eom1\",omega), 'Octave', argument_sequence=[w1,w2,w3,th,thd,psi,psid,ph,phd,I1,I2,I3,psidd])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yashdeeph709/Algorithms
PythonBootCamp/Complete-Python-Bootcamp-master/Errors and Exceptions Homework - Solution.ipynb
apache-2.0
[ "Errors and Exceptions Homework - Solution\nProblem 1\nHandle the exception thrown by the code below by using try and except blocks.", "try:\n for i in ['a','b','c']:\n print i**2\nexcept:\n print \"An error occurred!\"", "Problem 2\nHandle the exception thrown by the code below by using try and except blocks. Then use a finally block to print 'All Done.'", "x = 5\ny = 0\ntry:\n z = x/y\nexcept ZeroDivisionError:\n print \"Can't divide by Zero!\"\nfinally:\n print 'All Done!'", "Problem 3\nWrite a function that asks for an integer and prints the square of it. Use a while loop with a try, except, else block to account for incorrect inputs.", "def ask():\n \n while True:\n try:\n n = input('Input an integer: ')\n except:\n print 'An error occurred! Please try again!'\n continue\n else:\n break\n \n \n print 'Thank you, your number squared is: ',n**2\n\nask()", "Great Job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
drakero/Electron_Spectrometer
Photon_Diffusion/Photon_Diffusion.ipynb
mit
[ "<h1>Photon Diffusion</h1>\n<h3>Calculates the diffusion of photons in a non-absorbing medium (lanex phosphor) using Fick's laws of diffusion.</h3>", "#Imports\nfrom numpy import *\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import quad\nfrom scipy.special import erf\nimport sys\nimport os\n\n#Import custom modules\nsys.path.append('/home/drake/Documents/Physics/Research/Python/Modules')\nfrom physics import *\n\n%matplotlib notebook", "<h3>Calculation of the phosphor layer thickness of lanex regular given its areal density:</h3>", "Dcell = 55*10**-6\nDPET = 175*10**-6\nDcell2 = 13*10**-6\nrhocell = 1.44*10**3\nrhoPET = 1.38*10**3\nrhophos = 4.48*10**3\nsigma = 70*10**-2\n\nDphos = (sigma - Dcell*rhocell - DPET*rhoPET - Dcell2*rhocell)/rhophos\nprint(Dphos*10**6)", "<h3>Define functions for photon density and photon current density</h3>\n<font size=\"4\"><p>These functions were derived from Fick's laws of diffusion:\n$$ \\frac{\\partial n(z,t)}{\\partial t} = D \\frac{\\partial^2 n(z,t)}{\\partial z^2}, \\: \\: \\phi (z,t) = -D \\frac{\\partial n(z,t)}{\\partial z}$$\nwhere n(z,t) is the photon density, D is the diffusion constant $D=\\lambda_s c /6$ with $\\lambda_s$ corresponding to the mean photon scattering length, and $\\phi (z,t)$ is the photon current density. The fluorescence of the lanex phosphor was modeled as the instantaneous generation of light within a infinitely thin segment within a rectangular slab. The slab thickness $L$ is taken to be small compared to the other dimensions so that the problem can be treated one-dimensionally. The initial condition is taken to be\n$$ n(z,0) = \\frac{N_0}{A} \\delta (z) \\: \\: \\mathrm{with} \\: \\: n(z,t)=0 \\: \\: \\mathrm{for} \\: t<0 $$\nwhere $N_0$ is the number of photons generated and $A$ is the cross-sectional area of the slab. In other words, at $t=0$, $N_0$ photons are generated and modeled as a dirac delta function at $z=0$. Fick's laws are then solved with absorbing boundary conditions at the edges of the lanex:\n$$ n(d,t)=0, \\: n(-l,t)=0$$\nwhere $z=d$ is the location of the CCD and $z=-l$ is the location of the top edge of the phosphor. This yields\n$$ n(z,t) = \\frac{N_0}{2 A \\sqrt{\\pi D t}} \\sum\\limits_{m=-\\infty}^\\infty \\left[ e^{-(z-2mL)^2/4Dt} - e^{-(z+2mL-2d)^2/4Dt} \\right] $$\nThe photon current density then follows by differentiation with respect to $z$.\n</font></p>", "N0 = 10**6 #Number of photons emitted at t=0\nlambdas = 2.85*10**-6 #Diffusion length in m\nD = lambdas*c/6 #Diffusion constant\nA = 100*10**-6*100*10**-6 #Area of segment in m^2\nL = 81*10**-6 #Depth of lanex in m\nl = 10.0*10**-6 #Distance from top lanex edge to segment in m\nd = L-l #Distance from bottom lanex edge to segment\n\ndef n(z,t):\n '''Returns the photon density at position z and time t'''\n n0 = N0/(2*A*sqrt(pi*D*t))\n Sum = 0\n maxm = 10\n for m in range(-maxm,maxm+1):\n Sum += exp(-(z-2*m*(l+d))**2/(4*D*t))-exp(-(z+2*m*(l+d)-2*d)**2/(4*D*t))\n return n0*Sum\n\ndef particlecurrent(t):\n '''Returns the particle current (photons per second per meter^2) at the boundary z=d at time t'''\n Sum = 0\n maxm = 10\n for m in range(-maxm,maxm+1):\n am = d-2*m*L\n Sum += am*exp(-am**2/(4*D*t))\n return N0/(A*sqrt(4*pi*D*t**3))*Sum", "<h3>Plot photon density</h3>\n<font size=\"4\"><p>The function $n(z,t)$ is calculated from 1 fs to 10 ps and plotted below. The boundary conditions are visibly satisfied and the function approaches a dirac delta function for short times. 
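As a quick sanity check (a small sketch using the functions and constants defined above, not part of the original analysis), the density can be integrated across the slab at an early time; before any appreciable absorption has occurred this should return approximately $N_0/A$:\n```python\n# Sketch: total photons per unit area at an early time, normalized by N0/A.\nfrom scipy.integrate import quad\nt_early = 1e-14 # s, early enough that little has diffused to either boundary\ntotal_per_area, _ = quad(lambda z: n(z, t_early), -l, d, points=[0])\nprint(total_per_area*A/N0) # expected to be close to 1\n```\n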
It then spreads out with the total number of photons decreasing as they're absorbed at the boundaries.</font></p>", "narray = []\nzarray = np.linspace(-l,d,1000)\ntime = [1,10,10**2,10**3,10**4]\ntime = np.multiply(time,10**-15) #convert to s\n\nfor i in range(len(time)):\n narray.append([])\n for z in zarray:\n narray[i].append(n(z,time[i])*10**-6)\n\nzarray = np.multiply(zarray,10**6)\n\n#Update the matplotlib configuration parameters\nmpl.rcParams.update({'font.size': 18, 'font.family': 'serif'})\n\n#Adjust figure size\nplt.subplots(figsize=(12,6))\n\ncolor = ['r','g','b','c','m','y','k']\nlegend = []\nfor i in range(5):\n legend.append(str(int(time[i]*10**15))+' fs')\n plt.plot(zarray,narray[i],color=color[i],linewidth=2,label=legend[i])\nplt.xlim(np.min(zarray),np.max(zarray))\nplt.ylim(1.0*10**6,np.max(narray[0]))\nplt.xlabel('Position (um)')\nplt.ylabel('Photon Density (m^-3)')\n#plt.semilogy()\nplt.legend(loc=1)", "<h3>Plot photon current density</h3>\n<font size=\"4\"><p>Photon current density is then calculated at $z=d$ and plotted below as a function of time.</font></p>", "particlecurrentarray = []\ntarray = []\nfor t in linspace(10**-15,50*10**-12,1000):\n tarray.append(t*10**12)\n particlecurrentarray.append(particlecurrent(t))\n\n#Update the matplotlib configuration parameters\nmpl.rcParams.update({'font.size': 18, 'font.family': 'serif'})\n\n#Adjust figure size\nplt.subplots(figsize=(12,6))\n\nplt.plot(tarray,particlecurrentarray,linewidth=2)\nplt.xlim(np.min(tarray),np.max(tarray))\nplt.ylim(0)\nplt.xlabel('time (ps)')\nplt.ylabel('Photon Current at $z=d$ $(s^{-1} \\cdot m^{-2})$')\n#plt.semilogy()\nplt.legend(loc=4)", "<h3>Integrate photon current density</h3>\n<font size=\"4\"><p>The photon current density at $z=d$ is then integrated over large times and multiplied by the area $A$ to determine the total number of photons absorbed by the CCD. This is done numerically and analytically with the function defined below. A plot of the fraction of photons absorbed is plotted as a function of time to ensure that the integral converges.</font></p>", "Nabs = A*quad(particlecurrent,0,400*10**-12)[0] #Total number of photons absorbed at the boundary z=d\nprint(Nabs/N0)\n\ndef F(t,maxm,distance):\n Sum1 = 0\n Sum2 = 0\n for m in range(-maxm,1):\n am = distance-2*m*L\n Sum1 += 1 - erf(am/sqrt(4*D*t))\n for m in range(1,maxm+1):\n am = distance-2*m*L\n Sum2 += 1 + erf(am/sqrt(4*D*t))\n return (Sum1 - Sum2)\n\nFractionAbsArray = []\nFractionAbsArrayAnalytic = []\ntarray = []\nfor t in linspace(10**-12,50*10**-12,10000):\n tarray.append(t*10**12)\n #FractionAbsArray.append(A*quad(particlecurrent,0,t)[0]/N0)\n FractionAbsArrayAnalytic.append(F(t,100,d))\n\n#Adjust figure size\nplt.subplots(figsize=(12,6))\n\nplt.plot(tarray,FractionAbsArrayAnalytic,linewidth=2)\nplt.xlim(np.min(tarray),np.max(tarray))\nplt.ylim(0,1.0)\nplt.xlim(0,50)\nplt.xlabel('time (ps)')\nplt.ylabel('Fraction Absorbed at $z=d$')\n#plt.semilogy()\nplt.legend(loc=4)", "<h3>Calculate number of photons absorbed as a function of d</h3>\n<font size=\"4\"><p>A function is defined to calculate the total number of photons absorbed at $z=d$ after all time as a function of $d$. The results are then plotted and found to be linear. 
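The linear trend has a simple interpretation, noted here for context (it is the classic splitting-probability result for one-dimensional diffusion between two absorbing boundaries, not something taken from the original text). For a photon released at $z=0$, the probability $u(z)$ of being absorbed at $z=d$ before $z=-l$ is harmonic in the starting position, so it satisfies\n$$ \\frac{d^2 u}{dz^2} = 0, \\quad u(-l)=0, \\quad u(d)=1 \\quad \\Rightarrow \\quad u(z) = \\frac{z+l}{l+d}, $$\nand the fraction collected at the CCD is $u(0) = l/L = 1 - d/L$, consistent with the expression quoted next. 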
The rather unpleasant expression defined above evidently can be approximated as (or is exactly equal to)\n$$N_{abs}(d) = N_0 (1 - d/L) $$\n</font></p>", "FractionAbsArrayAnalytic = []\ndistancearray = []\n\n#Find the fraction of photons absorbed at z=d for various values of d ranging from 0 to L - 1 um (to avoid division by zero errors)\nfor distance in linspace(0,L-10**-6,100):\n Integrationtime = 10**-12\n TargetError = 10**-3\n Error = 1.0\n FractionAbsAnalytic=0\n while Error>TargetError:\n Error = abs(FractionAbsAnalytic-F(Integrationtime,100,distance))/F(Integrationtime,100,distance)\n FractionAbsAnalytic = F(Integrationtime,100,distance)\n Integrationtime *= 2\n FractionAbsArrayAnalytic.append(FractionAbsAnalytic)\n distancearray.append(distance*10**6)\n\n#Update the matplotlib configuration parameters\nmpl.rcParams.update({'font.size': 18, 'font.family': 'serif'})\n\n#Adjust figure size\nplt.subplots(figsize=(12,6))\n\nplt.plot(distancearray,FractionAbsArrayAnalytic,linewidth=2)\n#plt.xlim(np.min(tarray),np.max(tarray))\n#plt.ylim(0,1.0)\n#plt.xlim(0,50)\nplt.xlabel('Segment Distance (um)')\nplt.ylabel('Fraction Absorbed by CCD')\n#plt.semilogy()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
scikit-optimize/scikit-optimize.github.io
dev/notebooks/auto_examples/hyperparameter-optimization.ipynb
bsd-3-clause
[ "%matplotlib inline", "Tuning a scikit-learn estimator with skopt\nGilles Louppe, July 2016\nKatie Malone, August 2016\nReformatted by Holger Nahrstaedt 2020\n.. currentmodule:: skopt\nIf you are looking for a :obj:sklearn.model_selection.GridSearchCV replacement checkout\nsphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py instead.\nProblem statement\nTuning the hyper-parameters of a machine learning model is often carried out\nusing an exhaustive exploration of (a subset of) the space all hyper-parameter\nconfigurations (e.g., using :obj:sklearn.model_selection.GridSearchCV), which\noften results in a very time consuming operation.\nIn this notebook, we illustrate how to couple :class:gp_minimize with sklearn's\nestimators to tune hyper-parameters using sequential model-based optimisation,\nhopefully resulting in equivalent or better solutions, but within fewer\nevaluations.\nNote: scikit-optimize provides a dedicated interface for estimator tuning via\n:class:BayesSearchCV class which has a similar interface to those of\n:obj:sklearn.model_selection.GridSearchCV. This class uses functions of skopt to perform hyperparameter\nsearch efficiently. For example usage of this class, see\nsphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py\nexample notebook.", "print(__doc__)\nimport numpy as np", "Objective\nTo tune the hyper-parameters of our model we need to define a model,\ndecide which parameters to optimize, and define the objective function\nwe want to minimize.", "from sklearn.datasets import load_boston\nfrom sklearn.ensemble import GradientBoostingRegressor\nfrom sklearn.model_selection import cross_val_score\n\nboston = load_boston()\nX, y = boston.data, boston.target\nn_features = X.shape[1]\n\n# gradient boosted trees tend to do well on problems like this\nreg = GradientBoostingRegressor(n_estimators=50, random_state=0)", "Next, we need to define the bounds of the dimensions of the search space\nwe want to explore and pick the objective. In this case the cross-validation\nmean absolute error of a gradient boosting regressor over the Boston\ndataset, as a function of its hyper-parameters.", "from skopt.space import Real, Integer\nfrom skopt.utils import use_named_args\n\n\n# The list of hyper-parameters we want to optimize. For each one we define the\n# bounds, the corresponding scikit-learn parameter name, as well as how to\n# sample values from that dimension (`'log-uniform'` for the learning rate)\nspace = [Integer(1, 5, name='max_depth'),\n Real(10**-5, 10**0, \"log-uniform\", name='learning_rate'),\n Integer(1, n_features, name='max_features'),\n Integer(2, 100, name='min_samples_split'),\n Integer(1, 100, name='min_samples_leaf')]\n\n# this decorator allows your objective function to receive a the parameters as\n# keyword arguments. This is particularly convenient when you want to set\n# scikit-learn estimator parameters\n@use_named_args(space)\ndef objective(**params):\n reg.set_params(**params)\n\n return -np.mean(cross_val_score(reg, X, y, cv=5, n_jobs=-1,\n scoring=\"neg_mean_absolute_error\"))", "Optimize all the things!\nWith these two pieces, we are now ready for sequential model-based\noptimisation. 
Here we use Gaussian process-based optimisation.", "from skopt import gp_minimize\nres_gp = gp_minimize(objective, space, n_calls=50, random_state=0)\n\n\"Best score=%.4f\" % res_gp.fun\n\nprint(\"\"\"Best parameters:\n- max_depth=%d\n- learning_rate=%.6f\n- max_features=%d\n- min_samples_split=%d\n- min_samples_leaf=%d\"\"\" % (res_gp.x[0], res_gp.x[1],\n res_gp.x[2], res_gp.x[3],\n res_gp.x[4]))", "Convergence plot", "from skopt.plots import plot_convergence\n\nplot_convergence(res_gp)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
RPGOne/Skynet
xgboost-master/demo/distributed-training/plot_model.ipynb
bsd-3-clause
[ "XGBoost Model Analysis\nThis notebook can be used to load and analyze models learnt from all xgboost bindings, including distributed training.", "import sys\nimport os\n%matplotlib inline ", "Please change pkg_path and model_file to the correct paths.", "pkg_path = '../../python-package/'\nmodel_file = 's3://my-bucket/xgb-demo/model/0002.model'\nsys.path.insert(0, pkg_path)\nimport xgboost as xgb", "Plot the Feature Importance", "# load the trained model and plot its feature importance\nbst = xgb.Booster(model_file=model_file)\nxgb.plot_importance(bst)", "Plot the First Tree", "tree_id = 0\nxgb.to_graphviz(bst, tree_id)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mzszym/oedes
examples/photovoltaic/Voc-Ff.ipynb
agpl-3.0
[ "Modeling of photovoltaic devices\nThis example demonstrates modeling of photovoltaic devices. Figures of merit, such as the fill factor, are extracted from the results of the simulation. For illustration, a reference is partially reproduced.\nSimulations like those shown below can reveal possible optimization opportunities for devices. They can also yield the ultimate performance of idealized devices.", "%matplotlib inline\nimport matplotlib.pylab as plt\nfrom oedes import *\ninit_notebook()\nimport scipy.interpolate\nplt.rcParams['axes.formatter.useoffset']=False", "Model\nWe start with the popular assumption that light is absorbed uniformly inside the device. The assumption can be justified for thin devices, and for white illumination.", "def absorption(x):\n return 1.5e28", "The base model consists of Poisson's equation coupled to the drift-diffusion equations for electrons and holes. Constant mobilities are assumed. Additionally, contacts can be defined as selective (blocking electrons and holes at the \"invalid\" electrode), or non-selective, in which case local thermal equilibrium is assumed at every electrode for all charge carriers.", "def base_model(L=50e-9,selective=False,**kwargs):\n model = models.BaseModel()\n mesh = fvm.mesh1d(L)\n models.std.bulk_heterojunction(model, mesh, absorption=absorption,selective_contacts=selective,**kwargs)\n model.setUp()\n return model", "The basic model created by the function above contains no recombination term, and must be supplemented with it. Complete models are created by the functions below, with the following options for the recombination model:\n- direct : $R=\\beta \\left(n p-n_i p_i \\right)$\n- Langevin: $R=\\frac{\\mu_n+\\mu_p}{\\varepsilon} \\left(n p-n_i p_i \\right)$\n- Shockley-Read-Hall recombination, in parallel with direct recombination\nAbsorption is assumed to create free electrons and holes directly.", "def model_Langevin(**kwargs):\n return base_model(langevin_recombination=True,**kwargs)\ndef model_const(**kwargs):\n return base_model(const_recombination=True,**kwargs)\ndef model_SRH(**kwargs):\n return base_model(const_recombination=True,srh_recombination=True,**kwargs)", "Below is a procedure generating default simulation parameters. They are parametrized by the bandgap, by the (symmetric) barrier at the electrodes, and by the SRH lifetime. Note that not all parameters are used at the same time; for example, SRH parameters are not used by non-SRH models.", "def make_params(barrier=0.3,bandgap=1.2,srh_tau=1e-8):\n params=models.std.bulk_heterojunction_params(barrier=barrier,bandgap=bandgap,Nc=1e27,Nv=1e27)\n srh_trate=1e-8\n params.update({\n 'beta':7.23e-17,\n 'electron.srh.trate':srh_trate,\n 'hole.srh.trate':srh_trate,\n 'srh.N0':1./(srh_tau*srh_trate),\n 'srh.energy':-bandgap*0.5\n })\n return params", "Calculations\nThe function below takes an I-V curve, which should include the points V=0 and J=0, and calculates the open circuit voltage, the power at the maximum power point, and the fill factor.", "def performance(v,J):\n iv=scipy.interpolate.InterpolatedUnivariateSpline(v,J)\n Isc=iv(0.)\n Voc,=iv.roots()\n v=np.linspace(0,Voc)\n Pmax=np.amax(-v*iv(v))\n Ff=-Pmax/(Voc*Isc)\n return dict(Ff=Ff,Voc=Voc,Isc=Isc,Pmax=Pmax)", "In the reference, the mobilities of electrons and holes are varied but kept equal. 
The following shows how such sweep can be implemented.", "mu_values = np.logspace(-10,-2,19)\ndef mu_sweep(params):\n for mu in mu_values:\n p=dict(params)\n p['electron.mu']=mu\n p['hole.mu']=mu\n yield mu,p\nv_sweep = sweep('electrode0.voltage',np.linspace(0.,0.8,40))", "Because different models are considered below, a common function is defined here to run the simulation and to plot the result. The function takes model as an argument.", "def Voc_Ff(model,params):\n c=context(model)\n result=[]\n def onemu(mu, cmu):\n for _ in cmu.sweep(cmu.params, v_sweep):\n pass\n v,J=cmu.teval(v_sweep.parameter_name,'J')\n p = performance(v,J)\n return (mu, p['Voc'], p['Ff'])\n result = np.asarray([onemu(*_) for _ in c.sweep(params, mu_sweep)])\n testing.store(result)\n fig,(ax_voc,ax_ff)=plt.subplots(nrows=2,sharex=True)\n ax_voc.plot(result[:,0],result[:,1])\n ax_ff.plot(result[:,0],result[:,2])\n ax_ff.set_xlabel(r'$\\mu \\mathrm{[m^2 V^{-1} s^{-1}]}$')\n ax_ff.set_xscale('log')\n ax_ff.set_ylabel('FF')\n ax_voc.set_ylabel('$V_{oc}$'); \n return result\n\nparams=make_params()", "Results\nDirect recombination, non-selective contacts\nAs seen below, in the case of direct recombination, selective contacts are useful for improving fill factor and open-circuit voltage regardless of mobilities.", "Voc_Ff(model_const(selective=False),params);", "Direct recombination, selective contacts", "Voc_Ff(model_const(selective=True),params);", "Langevin recombination, non-selective contants\nIf Langevin recombination is assumed, open circuit voltage drops regardless of contact selectivity.", "Voc_Ff(model_Langevin(selective=False),params);", "Langevin recombination, selective contants", "Voc_Ff(model_Langevin(selective=True),params);", "SRH recombination, non-selective contacts\nThe case of SRH recombination resembles the case of direct recombination in its dependence on mobility. This is not surprising, as in both cases the mobility does not enter the recombination term $R$.", "Voc_Ff(model_SRH(selective=False),params);", "SRH recombination, selective contacts", "Voc_Ff(model_SRH(selective=True),params);", "Reference\nWolfgang Tress, Karl Leo, Moritz Riede, Optimum mobility, contact properties, and open-circuit voltage of organic solar cells: A drift-diffusion simulation study, Phys. Rev. B. 85155201 (2012))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
steinam/teacher
jup_notebooks/data-science-ipython-notebooks-master/scipy/sampling.ipynb
mit
[ "Random Sampling\nCredits: Forked from CompStats by Allen Downey. License: Creative Commons Attribution 4.0 International.", "from __future__ import print_function, division\n\nimport numpy\nimport scipy.stats\n\nimport matplotlib.pyplot as pyplot\n\nfrom IPython.html.widgets import interact, fixed\nfrom IPython.html import widgets\n\n# seed the random number generator so we all get the same results\nnumpy.random.seed(18)\n\n# some nicer colors from http://colorbrewer2.org/\nCOLOR1 = '#7fc97f'\nCOLOR2 = '#beaed4'\nCOLOR3 = '#fdc086'\nCOLOR4 = '#ffff99'\nCOLOR5 = '#386cb0'\n\n%matplotlib inline", "Part One\nSuppose we want to estimate the average weight of men and women in the U.S.\nAnd we want to quantify the uncertainty of the estimate.\nOne approach is to simulate many experiments and see how much the results vary from one experiment to the next.\nI'll start with the unrealistic assumption that we know the actual distribution of weights in the population. Then I'll show how to solve the problem without that assumption.\nBased on data from the BRFSS, I found that the distribution of weight in kg for women in the U.S. is well modeled by a lognormal distribution with the following parameters:", "weight = scipy.stats.lognorm(0.23, 0, 70.8)\nweight.mean(), weight.std()", "Here's what that distribution looks like:", "xs = numpy.linspace(20, 160, 100)\nys = weight.pdf(xs)\npyplot.plot(xs, ys, linewidth=4, color=COLOR1)\npyplot.xlabel('weight (kg)')\npyplot.ylabel('PDF')\nNone", "make_sample draws a random sample from this distribution. The result is a NumPy array.", "def make_sample(n=100):\n sample = weight.rvs(n)\n return sample", "Here's an example with n=100. The mean and std of the sample are close to the mean and std of the population, but not exact.", "sample = make_sample(n=100)\nsample.mean(), sample.std()", "We want to estimate the average weight in the population, so the \"sample statistic\" we'll use is the mean:", "def sample_stat(sample):\n return sample.mean()", "One iteration of \"the experiment\" is to collect a sample of 100 women and compute their average weight.\nWe can simulate running this experiment many times, and collect a list of sample statistics. The result is a NumPy array.", "def compute_sample_statistics(n=100, iters=1000):\n stats = [sample_stat(make_sample(n)) for i in range(iters)]\n return numpy.array(stats)", "The next line runs the simulation 1000 times and puts the results in\nsample_means:", "sample_means = compute_sample_statistics(n=100, iters=1000)", "Let's look at the distribution of the sample means. This distribution shows how much the results vary from one experiment to the next.\nRemember that this distribution is not the same as the distribution of weight in the population. 
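Because we happen to know the population distribution in this idealized setup, classical theory also predicts how wide this distribution should be: the standard error of the mean is roughly the population standard deviation divided by the square root of the sample size. A quick cross-check (a sketch, not part of the original notebook):\n```python\n# Analytic standard error of the mean for n=100, to compare with the simulated spread below.\nsample_size = 100\nprint(weight.std() / numpy.sqrt(sample_size))\n```\n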
This is the distribution of results across repeated imaginary experiments.", "pyplot.hist(sample_means, color=COLOR5)\npyplot.xlabel('sample mean (n=100)')\npyplot.ylabel('count')\nNone", "The mean of the sample means is close to the actual population mean, which is nice, but not actually the important part.", "sample_means.mean()", "The standard deviation of the sample means quantifies the variability from one experiment to the next, and reflects the precision of the estimate.\nThis quantity is called the \"standard error\".", "std_err = sample_means.std()\nstd_err", "We can also use the distribution of sample means to compute a \"90% confidence interval\", which contains 90% of the experimental results:", "conf_int = numpy.percentile(sample_means, [5, 95])\nconf_int", "The following function takes an array of sample statistics and prints the SE and CI:", "def summarize_sampling_distribution(sample_stats):\n print('SE', sample_stats.std())\n print('90% CI', numpy.percentile(sample_stats, [5, 95]))", "And here's what that looks like:", "summarize_sampling_distribution(sample_means)", "Now we'd like to see what happens as we vary the sample size, n. The following function takes n, runs 1000 simulated experiments, and summarizes the results.", "def plot_sample_stats(n, xlim=None):\n sample_stats = compute_sample_statistics(n, iters=1000)\n summarize_sampling_distribution(sample_stats)\n pyplot.hist(sample_stats, color=COLOR2)\n pyplot.xlabel('sample statistic')\n pyplot.xlim(xlim)", "Here's a test run with n=100:", "plot_sample_stats(100)", "Now we can use interact to run plot_sample_stats with different values of n. Note: xlim sets the limits of the x-axis so the figure doesn't get rescaled as we vary n.", "def sample_stat(sample):\n return sample.mean()\n\nslider = widgets.IntSliderWidget(min=10, max=1000, value=100)\ninteract(plot_sample_stats, n=slider, xlim=fixed([55, 95]))\nNone", "This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic.\nAs an exercise, fill in sample_stat below with any of these statistics:\n\nStandard deviation of the sample.\nCoefficient of variation, which is the sample standard deviation divided by the sample standard mean.\nMin or Max\nMedian (which is the 50th percentile)\n10th or 90th percentile.\nInterquartile range (IQR), which is the difference between the 75th and 25th percentiles.\n\nNumPy array methods you might find useful include std, min, max, and percentile.\nDepending on the results, you might want to adjust xlim.", "def sample_stat(sample):\n # TODO: replace the following line with another sample statistic\n return sample.mean()\n\nslider = widgets.IntSliderWidget(min=10, max=1000, value=100)\ninteract(plot_sample_stats, n=slider, xlim=fixed([0, 100]))\nNone", "Part Two\nSo far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI.\nBut in real life we don't know the actual distribution of the population. If we did, we wouldn't need to estimate it!\nIn real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is \"resampling,\" which means we use the sample itself as a model of the population distribution and draw samples from it.\nBefore we go on, I want to collect some of the code from Part One and organize it as a class. 
This class represents a framework for computing sampling distributions.", "class Resampler(object):\n \"\"\"Represents a framework for computing sampling distributions.\"\"\"\n \n def __init__(self, sample, xlim=None):\n \"\"\"Stores the actual sample.\"\"\"\n self.sample = sample\n self.n = len(sample)\n self.xlim = xlim\n \n def resample(self):\n \"\"\"Generates a new sample by choosing from the original\n sample with replacement.\n \"\"\"\n new_sample = numpy.random.choice(self.sample, self.n, replace=True)\n return new_sample\n \n def sample_stat(self, sample):\n \"\"\"Computes a sample statistic using the original sample or a\n simulated sample.\n \"\"\"\n return sample.mean()\n \n def compute_sample_statistics(self, iters=1000):\n \"\"\"Simulates many experiments and collects the resulting sample\n statistics.\n \"\"\"\n stats = [self.sample_stat(self.resample()) for i in range(iters)]\n return numpy.array(stats)\n \n def plot_sample_stats(self):\n \"\"\"Runs simulated experiments and summarizes the results.\n \"\"\"\n sample_stats = self.compute_sample_statistics()\n summarize_sampling_distribution(sample_stats)\n pyplot.hist(sample_stats, color=COLOR2)\n pyplot.xlabel('sample statistic')\n pyplot.xlim(self.xlim)", "The following function instantiates a Resampler and runs it.", "def plot_resampled_stats(n=100):\n sample = weight.rvs(n)\n resampler = Resampler(sample, xlim=[55, 95])\n resampler.plot_sample_stats()", "Here's a test run with n=100", "plot_resampled_stats(100)", "Now we can use plot_resampled_stats in an interaction:", "slider = widgets.IntSliderWidget(min=10, max=1000, value=100)\ninteract(plot_resampled_stats, n=slider, xlim=fixed([1, 15]))\nNone", "Exercise: write a new class called StdResampler that inherits from Resampler and overrides sample_stat so it computes the standard deviation of the resampled data.", "class StdResampler(Resampler): \n \"\"\"Computes the sampling distribution of the standard deviation.\"\"\"\n \n def sample_stat(self, sample):\n \"\"\"Computes a sample statistic using the original sample or a\n simulated sample.\n \"\"\"\n return sample.std()", "Test your code using the cell below:", "def plot_resampled_stats(n=100):\n sample = weight.rvs(n)\n resampler = StdResampler(sample, xlim=[0, 100])\n resampler.plot_sample_stats()\n \nplot_resampled_stats()", "When your StdResampler is working, you should be able to interact with it:", "slider = widgets.IntSliderWidget(min=10, max=1000, value=100)\ninteract(plot_resampled_stats, n=slider)\nNone", "Part Three\nWe can extend this framework to compute SE and CI for a difference in means.\nFor example, men are heavier than women on average. 
Here's the women's distribution again (from BRFSS data):", "female_weight = scipy.stats.lognorm(0.23, 0, 70.8)\nfemale_weight.mean(), female_weight.std()", "And here's the men's distribution:", "male_weight = scipy.stats.lognorm(0.20, 0, 87.3)\nmale_weight.mean(), male_weight.std()", "I'll simulate a sample of 100 men and 100 women:", "female_sample = female_weight.rvs(100)\nmale_sample = male_weight.rvs(100)", "The difference in means should be about 17 kg, but will vary from one random sample to the next:", "male_sample.mean() - female_sample.mean()", "Here's the function that computes Cohen's $d$ again:", "def CohenEffectSize(group1, group2):\n \"\"\"Compute Cohen's d.\n\n group1: Series or NumPy array\n group2: Series or NumPy array\n\n returns: float\n \"\"\"\n diff = group1.mean() - group2.mean()\n\n n1, n2 = len(group1), len(group2)\n var1 = group1.var()\n var2 = group2.var()\n\n pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)\n d = diff / numpy.sqrt(pooled_var)\n return d", "The difference in weight between men and women is about 1 standard deviation:", "CohenEffectSize(male_sample, female_sample)", "Now we can write a version of the Resampler that computes the sampling distribution of $d$.", "class CohenResampler(Resampler):\n def __init__(self, group1, group2, xlim=None):\n self.group1 = group1\n self.group2 = group2\n self.xlim = xlim\n \n def resample(self):\n group1 = numpy.random.choice(self.group1, len(self.group1), replace=True)\n group2 = numpy.random.choice(self.group2, len(self.group2), replace=True)\n return group1, group2\n \n def sample_stat(self, groups):\n group1, group2 = groups\n return CohenEffectSize(group1, group2)\n \n # NOTE: The following functions are the same as the ones in Resampler,\n # so I could just inherit them, but I'm including them for readability\n def compute_sample_statistics(self, iters=1000):\n stats = [self.sample_stat(self.resample()) for i in range(iters)]\n return numpy.array(stats)\n \n def plot_sample_stats(self):\n sample_stats = self.compute_sample_statistics()\n summarize_sampling_distribution(sample_stats)\n pyplot.hist(sample_stats, color=COLOR2)\n pyplot.xlabel('sample statistic')\n pyplot.xlim(self.xlim)", "Now we can instantiate a CohenResampler and plot the sampling distribution.", "resampler = CohenResampler(male_sample, female_sample)\nresampler.plot_sample_stats()", "This example demonstrates an advantage of the computational framework over mathematical analysis. Statistics like Cohen's $d$, which is the ratio of other statistics, are relatively difficult to analyze. But with a computational approach, all sample statistics are equally \"easy\".\nOne note on vocabulary: what I am calling \"resampling\" here is a specific kind of resampling called \"bootstrapping\". Other techniques that are also considering resampling include permutation tests, which we'll see in the next section, and \"jackknife\" resampling. You can read more at http://en.wikipedia.org/wiki/Resampling_(statistics)." ]
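A note on the resampling notebook above: when only the standard error and confidence interval of a statistic are needed, the class-based machinery can be collapsed into a single standalone helper. This is a minimal sketch, not code from the notebook; the function name `bootstrap_ci`, the use of `numpy.random.default_rng`, and the synthetic lognormal sample are assumptions made purely for illustration.

```python
import numpy as np

def bootstrap_ci(sample, stat=np.mean, iters=1000, ci=90, seed=None):
    """Bootstrap the sampling distribution of `stat`; return (SE, CI)."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    # resample with replacement and apply the statistic to each resample
    stats = np.array([stat(rng.choice(sample, n, replace=True))
                      for _ in range(iters)])
    low = (100 - ci) / 2
    return stats.std(), np.percentile(stats, [low, 100 - low])

# illustrative synthetic stand-in for the lognormal weight sample
sample = np.random.default_rng(18).lognormal(np.log(70.8), 0.23, 100)
se, conf_int = bootstrap_ci(sample, stat=np.mean, iters=1000, ci=90, seed=18)
print('SE', se)
print('90% CI', conf_int)
```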
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
steinam/teacher
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
mit
[ "<!--BOOK_INFORMATION-->\n<img align=\"left\" style=\"padding-right:10px;\" src=\"figures/PDSH-cover-small.png\">\nThis notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.\nThe text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!\nNo changes were made to the contents of this notebook from the original.\n<!--NAVIGATION-->\n< Pivot Tables | Contents | Working with Time Series >\nVectorized String Operations\nOne strength of Python is its relative ease in handling and manipulating string data.\nPandas builds on this and provides a comprehensive set of vectorized string operations that become an essential piece of the type of munging required when working with (read: cleaning up) real-world data.\nIn this section, we'll walk through some of the Pandas string operations, and then take a look at using them to partially clean up a very messy dataset of recipes collected from the Internet.\nIntroducing Pandas String Operations\nWe saw in previous sections how tools like NumPy and Pandas generalize arithmetic operations so that we can easily and quickly perform the same operation on many array elements. For example:", "import numpy as np\nx = np.array([2, 3, 5, 7, 11, 13])\nx * 2", "This vectorization of operations simplifies the syntax of operating on arrays of data: we no longer have to worry about the size or shape of the array, but just about what operation we want done.\nFor arrays of strings, NumPy does not provide such simple access, and thus you're stuck using a more verbose loop syntax:", "data = ['peter', 'Paul', 'MARY', 'gUIDO']\n[s.capitalize() for s in data]", "This is perhaps sufficient to work with some data, but it will break if there are any missing values.\nFor example:", "data = ['peter', 'Paul', None, 'MARY', 'gUIDO']\n[s.capitalize() for s in data]", "Pandas includes features to address both this need for vectorized string operations and for correctly handling missing data via the str attribute of Pandas Series and Index objects containing strings.\nSo, for example, suppose we create a Pandas Series with this data:", "import pandas as pd\nnames = pd.Series(data)\nnames", "We can now call a single method that will capitalize all the entries, while skipping over any missing values:", "names.str.capitalize()", "Using tab completion on this str attribute will list all the vectorized string methods available to Pandas.\nTables of Pandas String Methods\nIf you have a good understanding of string manipulation in Python, most of Pandas string syntax is intuitive enough that it's probably sufficient to just list a table of available methods; we will start with that here, before diving deeper into a few of the subtleties.\nThe examples in this section use the following series of names:", "monte = pd.Series(['Graham Chapman', 'John Cleese', 'Terry Gilliam',\n 'Eric Idle', 'Terry Jones', 'Michael Palin'])", "Methods similar to Python string methods\nNearly all Python's built-in string methods are mirrored by a Pandas vectorized string method. 
Here is a list of Pandas str methods that mirror Python string methods:\n| | | | |\n|-------------|------------------|------------------|------------------|\n|len() | lower() | translate() | islower() | \n|ljust() | upper() | startswith() | isupper() | \n|rjust() | find() | endswith() | isnumeric() | \n|center() | rfind() | isalnum() | isdecimal() | \n|zfill() | index() | isalpha() | split() | \n|strip() | rindex() | isdigit() | rsplit() | \n|rstrip() | capitalize() | isspace() | partition() | \n|lstrip() | swapcase() | istitle() | rpartition() |\nNotice that these have various return values. Some, like lower(), return a series of strings:", "monte.str.lower()", "But some others return numbers:", "monte.str.len()", "Or Boolean values:", "monte.str.startswith('T')", "Still others return lists or other compound values for each element:", "monte.str.split()", "We'll see further manipulations of this kind of series-of-lists object as we continue our discussion.\nMethods using regular expressions\nIn addition, there are several methods that accept regular expressions to examine the content of each string element, and follow some of the API conventions of Python's built-in re module:\n| Method | Description |\n|--------|-------------|\n| match() | Call re.match() on each element, returning a boolean. |\n| extract() | Call re.match() on each element, returning matched groups as strings.|\n| findall() | Call re.findall() on each element |\n| replace() | Replace occurrences of pattern with some other string|\n| contains() | Call re.search() on each element, returning a boolean |\n| count() | Count occurrences of pattern|\n| split() | Equivalent to str.split(), but accepts regexps |\n| rsplit() | Equivalent to str.rsplit(), but accepts regexps |\nWith these, you can do a wide range of interesting operations.\nFor example, we can extract the first name from each by asking for a contiguous group of characters at the beginning of each element:", "monte.str.extract('([A-Za-z]+)', expand=False)", "Or we can do something more complicated, like finding all names that start and end with a consonant, making use of the start-of-string (^) and end-of-string ($) regular expression characters:", "monte.str.findall(r'^[^AEIOU].*[^aeiou]$')", "The ability to concisely apply regular expressions across Series or Dataframe entries opens up many possibilities for analysis and cleaning of data.\nMiscellaneous methods\nFinally, there are some miscellaneous methods that enable other convenient operations:\n| Method | Description |\n|--------|-------------|\n| get() | Index each element |\n| slice() | Slice each element|\n| slice_replace() | Replace slice in each element with passed value|\n| cat() | Concatenate strings|\n| repeat() | Repeat values |\n| normalize() | Return Unicode form of string |\n| pad() | Add whitespace to left, right, or both sides of strings|\n| wrap() | Split long strings into lines with length less than a given width|\n| join() | Join strings in each element of the Series with passed separator|\n| get_dummies() | extract dummy variables as a dataframe |\nVectorized item access and slicing\nThe get() and slice() operations, in particular, enable vectorized element access from each array.\nFor example, we can get a slice of the first three characters of each array using str.slice(0, 3).\nNote that this behavior is also available through Python's normal indexing syntax–for example, df.str.slice(0, 3) is equivalent to df.str[0:3]:", "monte.str[0:3]", "Indexing via df.str.get(i) and df.str[i] is 
likewise similar.\nThese get() and slice() methods also let you access elements of arrays returned by split().\nFor example, to extract the last name of each entry, we can combine split() and get():", "monte.str.split().str.get(-1)", "Indicator variables\nAnother method that requires a bit of extra explanation is the get_dummies() method.\nThis is useful when your data has a column containing some sort of coded indicator.\nFor example, we might have a dataset that contains information in the form of codes, such as A=\"born in America,\" B=\"born in the United Kingdom,\" C=\"likes cheese,\" D=\"likes spam\":", "full_monte = pd.DataFrame({'name': monte,\n 'info': ['B|C|D', 'B|D', 'A|C',\n 'B|D', 'B|C', 'B|C|D']})\nfull_monte", "The get_dummies() routine lets you quickly split-out these indicator variables into a DataFrame:", "full_monte['info'].str.get_dummies('|')", "With these operations as building blocks, you can construct an endless range of string processing procedures when cleaning your data.\nWe won't dive further into these methods here, but I encourage you to read through \"Working with Text Data\" in the Pandas online documentation, or to refer to the resources listed in Further Resources.\nExample: Recipe Database\nThese vectorized string operations become most useful in the process of cleaning up messy, real-world data.\nHere I'll walk through an example of that, using an open recipe database compiled from various sources on the Web.\nOur goal will be to parse the recipe data into ingredient lists, so we can quickly find a recipe based on some ingredients we have on hand.\nThe scripts used to compile this can be found at https://github.com/fictivekin/openrecipes, and the link to the current version of the database is found there as well.\nAs of Spring 2016, this database is about 30 MB, and can be downloaded and unzipped with these commands:", "# !curl -O http://openrecipes.s3.amazonaws.com/recipeitems-latest.json.gz\n# !gunzip recipeitems-latest.json.gz", "The database is in JSON format, so we will try pd.read_json to read it:", "try:\n recipes = pd.read_json('recipeitems-latest.json')\nexcept ValueError as e:\n print(\"ValueError:\", e)", "Oops! 
We get a ValueError mentioning that there is \"trailing data.\"\nSearching for the text of this error on the Internet, it seems that it's due to using a file in which each line is itself a valid JSON, but the full file is not.\nLet's check if this interpretation is true:", "with open('recipeitems-latest.json') as f:\n line = f.readline()\npd.read_json(line).shape", "Yes, apparently each line is a valid JSON, so we'll need to string them together.\nOne way we can do this is to actually construct a string representation containing all these JSON entries, and then load the whole thing with pd.read_json:", "# read the entire file into a Python array\nwith open('recipeitems-latest.json', 'r') as f:\n # Extract each line\n data = (line.strip() for line in f)\n # Reformat so each line is the element of a list\n data_json = \"[{0}]\".format(','.join(data))\n# read the result as a JSON\nrecipes = pd.read_json(data_json)\n\nrecipes.shape", "We see there are nearly 200,000 recipes, and 17 columns.\nLet's take a look at one row to see what we have:", "recipes.iloc[0]", "There is a lot of information there, but much of it is in a very messy form, as is typical of data scraped from the Web.\nIn particular, the ingredient list is in string format; we're going to have to carefully extract the information we're interested in.\nLet's start by taking a closer look at the ingredients:", "recipes.ingredients.str.len().describe()", "The ingredient lists average 250 characters long, with a minimum of 0 and a maximum of nearly 10,000 characters!\nJust out of curiousity, let's see which recipe has the longest ingredient list:", "recipes.name[np.argmax(recipes.ingredients.str.len())]", "That certainly looks like an involved recipe.\nWe can do other aggregate explorations; for example, let's see how many of the recipes are for breakfast food:", "recipes.description.str.contains('[Bb]reakfast').sum()", "Or how many of the recipes list cinnamon as an ingredient:", "recipes.ingredients.str.contains('[Cc]innamon').sum()", "We could even look to see whether any recipes misspell the ingredient as \"cinamon\":", "recipes.ingredients.str.contains('[Cc]inamon').sum()", "This is the type of essential data exploration that is possible with Pandas string tools.\nIt is data munging like this that Python really excels at.\nA simple recipe recommender\nLet's go a bit further, and start working on a simple recipe recommendation system: given a list of ingredients, find a recipe that uses all those ingredients.\nWhile conceptually straightforward, the task is complicated by the heterogeneity of the data: there is no easy operation, for example, to extract a clean list of ingredients from each row.\nSo we will cheat a bit: we'll start with a list of common ingredients, and simply search to see whether they are in each recipe's ingredient list.\nFor simplicity, let's just stick with herbs and spices for the time being:", "spice_list = ['salt', 'pepper', 'oregano', 'sage', 'parsley',\n 'rosemary', 'tarragon', 'thyme', 'paprika', 'cumin']", "We can then build a Boolean DataFrame consisting of True and False values, indicating whether this ingredient appears in the list:", "import re\nspice_df = pd.DataFrame(dict((spice, recipes.ingredients.str.contains(spice, re.IGNORECASE))\n for spice in spice_list))\nspice_df.head()", "Now, as an example, let's say we'd like to find a recipe that uses parsley, paprika, and tarragon.\nWe can compute this very quickly using the query() method of DataFrames, discussed in High-Performance Pandas: eval() 
and query():", "selection = spice_df.query('parsley & paprika & tarragon')\nlen(selection)", "We find only 10 recipes with this combination; let's use the index returned by this selection to discover the names of the recipes that have this combination:", "recipes.name[selection.index]", "Now that we have narrowed down our recipe selection by a factor of almost 20,000, we are in a position to make a more informed decision about what we'd like to cook for dinner.\nGoing further with recipes\nHopefully this example has given you a bit of a flavor (ba-dum!) for the types of data cleaning operations that are efficiently enabled by Pandas string methods.\nOf course, building a very robust recipe recommendation system would require a lot more work!\nExtracting full ingredient lists from each recipe would be an important piece of the task; unfortunately, the wide variety of formats used makes this a relatively time-consuming process.\nThis points to the truism that in data science, cleaning and munging of real-world data often comprises the majority of the work, and Pandas provides the tools that can help you do this efficiently.\n<!--NAVIGATION-->\n< Pivot Tables | Contents | Working with Time Series >" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pvchaumier/ml_by_example
src/Regression.ipynb
isc
[ "Ridge Regression\nGoal\nGiven a dataset with continuous inputs and corresponding outputs, the objective is to find a function that matches the two as accurately as possible. This function is usually called the target function.\nIn the case of a ridge regression, the idea is to modellize the target function as a linear sum of functions (that can be non linear and are generally not). Thus, with f the target function, $\\phi_i$ a base function and $w_i$ its weight in the linear sum, we suppose that:\n$$f(x) = \\sum w_i \\phi_i(x)$$\nThe parameters that must be found are the weights $w_i$ for each base function $\\phi_i$. This is done by minimizing the root mean square error.\nThere is a closed solution to this problem given by the following equation $W = (\\Phi^T \\Phi)^{-1} \\Phi^T Y$ with:\n- $d$ the number of base functions\n- $W = (w_0, ..., w_d)$ the weight vector\n- $Y$ the output vector\n- $\\Phi(X) = (\\phi_0(X)^T, \\phi_1(X)^T, ..., \\phi_d(X)^T)$, $\\phi_0(X) = \\mathbf{1}$ and $\\phi_i(X) = (\\phi_i(X_1), ... \\phi_i(X_n))$.\nIf you want more details, I find that the best explanation is the one given in the book Pattern Recognition and Machine Learning by C. Bishop.\nImplementation\nThe following implementation does exactly what is explained above and uses three different types of kernel: \n- linear $f(x) = w_0 + w_1 x$\n- polynomial $f(x) = \\sum_{i=0}^d w_i x^i$ with d the degree of the polynome. Notice that d = 1 is the linear case.\n- gaussian $f(x) = \\sum w_i \\exp(-\\frac{x - b_i}{2 \\sigma^2})$ with $b_i$ define the location of the base function number $i$ (they are usually taken at random within the dataset) and $\\sigma$ a parameter tuning the width of the functions. Here the \"width\" is the same for all base function but you could make them different for each of them.\nThe steps are:\n- normalization\n- building the $\\Phi$ matrix\n- computing the weights $W$\n- plotting the found function and the dataset", "# to display plots within the notebook\n%matplotlib inline\n# to define the size of the plotted images\nfrom pylab import rcParams\nrcParams['figure.figsize'] = (15, 10)\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nfrom numpy.linalg import inv\n\nfrom fct import normalize_pd", "The X matrix correspond to the inputs and the Y matrix to the outputs to predict.", "data = pd.read_csv('datasets/data_regression.csv')\nX = data['X']\nY = data['Y']\n\n# Normalization\nX = np.asmatrix(normalize_pd(X)).T\nY = np.asmatrix(normalize_pd(Y)).T", "Linear regression\nHere we have $\\Phi(X) = X$. The function we look for has the form $f(x) = ax + b$.", "def linear_regression(X, Y):\n # Building the Phi matrix\n Ones = np.ones((X.shape[0], 1))\n phi_X = np.hstack((Ones, X))\n\n # Calculating the weights\n w = np.dot(np.dot(inv(np.dot(phi_X.T, phi_X)), phi_X.T), Y)\n \n # Predicting the output values\n Y_linear_reg = np.dot(phi_X, w)\n\n return Y_linear_reg\n\nY_linear_reg = linear_regression(X, Y)\n\nplt.plot(X, Y, '.')\nplt.plot(X, Y_linear_reg, 'r')\nplt.title('Linear Regression')\nplt.legend(['Data', 'Linear Regression'])", "The obtained solution does not represent the data very well. It is because the power of representation is too low compared to the target function. This is usually referred to as underfitting.\nPolynomial Regression\nNow, we approximate the target function by a polynom $f(x) = w_0 + w_1 x + w_2 x^2 + ... 
+ w_d x^d$ with $d$ the degree of the polynom.\nWe plotted the results obtained with different degrees.", "def polynomial_regression(X, Y, degree):\n # Building the Phi matrix\n Ones = np.ones((X.shape[0], 1))\n # Add a column of ones\n phi_X = np.hstack((Ones, X))\n\n # add a column of X elevated to all the powers from 2 to degree\n for i in range(2, degree + 1):\n # calculate the vector X to the power i and add it to the Phi matrix\n X_power = np.array(X) ** i\n phi_X = np.hstack((phi_X, np.asmatrix(X_power)))\n\n # Calculating the weights\n w = np.dot(np.dot(inv(np.dot(phi_X.T, phi_X)), phi_X.T), Y)\n \n # Predicting the output values\n Y_poly_reg = np.dot(phi_X, w)\n\n return Y_poly_reg\n\n# Degrees to plot you can change these values to\n# see how the degree of the polynom affects the \n# predicted function\ndegrees = [1, 2, 20]\nlegend = ['Data']\n\nplt.plot(X, Y, '.')\nfor degree in degrees:\n Y_poly_reg = polynomial_regression(X, Y, degree)\n plt.plot(X, Y_poly_reg)\n legend.append('degree ' + str(degree))\n \nplt.legend(legend)\nplt.title('Polynomial regression results depending on the degree of the polynome used')", "The linear case is still underfitting but now, we see that the polynom of degree 20 is too sensitive to the data, especially around $[-2.5, -1.5]$. This phenomena is called overfitting: the model starts fitting the noise in the data as well and looses its capacity to generalize.\nRegression with kernel gaussian\nLastly, we look at function of the type $f(x) = \\sum \\phi_i(x)$ with $\\phi_i(x) = \\exp({-\\frac{x - b_i}{\\sigma^2}}$). $b_i$ is called the base and $\\sigma$ is its width.\nUsually, the $b_i$ are taken randomly within the dataset. That is what I did in the implementation with b the number of bases.\nIn the plot, there is the base function used to compute the regressed function and the latter.", "def gaussian_regression(X, Y, b, sigma, return_base=True):\n \"\"\"b is the number of bases to use, sigma is the variance of the\n base functions.\"\"\"\n \n # Building the Phi matrix\n Ones = np.ones((X.shape[0], 1))\n # Add a column of ones\n phi_X = np.hstack((Ones, X))\n \n # Choose randomly without replacement b values from X\n # to be the center of the base functions\n X_array = np.array(X).reshape(1, -1)[0]\n bases = np.random.choice(X_array, b, replace=False)\n \n bases_function = []\n for i in range(1, b):\n base_function = np.exp(-0.5 * (((X_array - bases[i - 1] * \n np.ones(len(X_array))) / sigma) ** 2))\n bases_function.append(base_function)\n phi_X = np.hstack((phi_X, np.asmatrix(base_function).T))\n\n w = np.dot(np.dot(inv(np.dot(phi_X.T, phi_X)), phi_X.T), Y)\n\n if return_base:\n return np.dot(phi_X, w), bases_function\n else:\n return np.dot(phi_X, w)\n\n# By changing this value, you will change the width of the base functions\nsigma = 0.2\n# b is the number of base functions used\nb = 5\nY_gauss_reg, bases_function = gaussian_regression(X, Y, b, sigma)\n\n# Plotting the base functions and the dataset\nplt.plot(X, Y, '.')\nplt.plot(X, Y_gauss_reg)\n\nlegend = ['Data', 'Regression result']\nfor i, base_function in enumerate(bases_function):\n plt.plot(X, base_function)\n legend.append('Base function n°' + str(i))\n\nplt.legend(legend)\nplt.title('Regression with gaussian base functions')", "We can observe that here the sigma is too small. Some part of the dataset are too far away from the bases to be taken into accoutn.\nIf you change the <code>sigma</code> in the code to 0.5 and then 1. 
You will then notice how the output function gets closer to the data." ]
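A note on the regression notebook above: the closed form it uses, $W = (\Phi^T \Phi)^{-1} \Phi^T Y$, is the ordinary least-squares solution; the "ridge" variant named in the title adds an L2 penalty $\lambda \|W\|^2$, which changes the closed form to $W = (\Phi^T \Phi + \lambda I)^{-1} \Phi^T Y$. The sketch below illustrates that regularized form under assumed choices (plain NumPy arrays rather than `np.matrix`, `numpy.linalg.solve` rather than `inv`, a degree-5 polynomial basis, synthetic data, and an arbitrary $\lambda = 0.1$); for simplicity it also penalizes the bias column, which a more careful implementation usually would not.

```python
import numpy as np

def ridge_weights(phi, y, lam=0.1):
    """Closed-form ridge solution (Phi^T Phi + lam*I)^-1 Phi^T y."""
    d = phi.shape[1]
    A = phi.T.dot(phi) + lam * np.eye(d)
    # solve() is numerically preferable to forming the inverse explicitly
    return np.linalg.solve(A, phi.T.dot(y))

# illustrative usage: degree-5 polynomial basis on synthetic data
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(50)
phi = np.vander(x, 6, increasing=True)   # columns 1, x, ..., x^5
w = ridge_weights(phi, y, lam=0.1)
y_pred = phi.dot(w)
print(w)
```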
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sebastiandres/mat281
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
cc0-1.0
[ "\"\"\"\nIPython Notebook v4.0 para python 2.7\nLibrerías adicionales: numpy, matplotlib, sklearn\nContenido bajo licencia CC-BY 4.0. Código bajo licencia MIT. (c) Sebastian Flores.\n\"\"\"\n\n# Configuracion para recargar módulos y librerías \n%reload_ext autoreload\n%autoreload 2\n\n%matplotlib inline\n\nfrom IPython.core.display import HTML\n\nHTML(open(\"style/mat281.css\", \"r\").read())", "<header class=\"w3-container w3-teal\">\n<img src=\"images/utfsm.png\" alt=\"\" height=\"100px\" align=\"left\"/>\n<img src=\"images/mat.png\" alt=\"\" height=\"100px\" align=\"right\"/>\n</header>\n<br/><br/><br/><br/><br/>\nMAT281\nAplicaciones de la Matemática en la Ingeniería\nSebastián Flores\nhttps://www.github.com/usantamaria/mat281\nClase anterior\nRegresión Lineal\n* ¿Cómo se llamaba el algoritmo que vimos?\n* ¿Cuál era la aproximación ingenieril? ¿Machine Learning? ¿Estadística? \n* ¿Cuándo funcionaba y cuándo fallaba?\n¿Qué veremos hoy?\nClasificación y Regresión logística.\n¿Porqué veremos ese contenido?\nPorque clasificación es un problema muy común puesto que permite la toma de decisiones. \nRegresión logística es un algoritmo que surge naturalmente como extensión de regresión lineal pero en el contexto de clasificación.\nProblemas de Clasificación\n¿Conocen algún problema de clasificación?\nWine Dataset\n<img src=\"images/wine.jpg\" alt=\"\" width=\"600px\" align=\"middle\"/>\nWine Dataset\nLos datos corresponden a 3 cultivos diferentes de vinos de la misma región de Italia, y que han sido identificados con las etiquetas 1, 2 y 3. Para cada tipo de vino se realizado 13 análisis químicos:\n\nAlcohol \nMalic acid \nAsh \nAlcalinity of ash \nMagnesium \nTotal phenols \nFlavanoids \nNonflavanoid phenols \nProanthocyanins \nColor intensity \nHue \nOD280/OD315 of diluted wines \nProline \n\nLa base de datos contiene 178 muestras distintas en total.\nWine dataset\n\nSi no conocemos de antemano las etiquetas, es decir, los cultivos 1,2 o 3 a los que pertenece cada muestra, el problema es de clustering:\n\n$$\\textrm{Tenemos } X \\in R^{n,m} \\textrm{ buscamos las etiquetas } Y \\in N^m$$\n\n\n¿Cuántos grupos existen? ¿A que grupo pertenece cada dato?\n\n\nSi conocemos los valores y las etiquetas, y se desea obtener la etiqueta de una muestra sin etiquetar, el problema es de clasificación:\n\n\n$$\\textrm{Tenemos } X \\in R^{n \\times m} \\textrm{ y } Y \\in N^m \\textrm{ y buscamos las etiquetas de } x \\in R^n$$\n\n\n¿A qué grupo pertenece este nuevo dato?\n\n\nRegresión Lineal\nSe buscaba entrenar una función lineal\n$$h_{\\theta}(x) = \\theta_0 + \\theta_1 x_1 + ... + \\theta_n x_n$$ de\nforma que se minimice\n$$J(\\theta) = \\frac{1}{2} \\sum_{i=1}^{m} \\left( h_{\\theta}(x^{(i)}) - y^{(i)}\\right)^2$$\nRegresión Logística\nBuscaremos entrenar una función\nlogística\n$$h_{\\theta}(x) = \\frac{1}{1 + e^{-(\\theta_0 + \\theta_1 x_1 + ... + \\theta_n x_n)}}$$\nde forma que se minimice\n$$J(\\theta) = \\frac{1}{2} \\sum_{i=1}^{m} \\left( h_{\\theta}(x^{(i)}) - y^{(i)}\\right)^2$$\nEjemplo 2D\n¿Conocen el accidente del Space Shuttle Challeger?\n28 Junio 1986. 
A pesar de existir evidencia de funcionamiento defectuoso, se da luz verde al lanzamiento.\n<img src=\"images/Challenger1.gif\" alt=\"\" width=\"600px\" align=\"middle\"/>\nEjemplo 2D\nA los 73 segundos de vuelo, el transbordador espacial explota, matando a los 7 pasajeros.\n<img src=\"images/Challenger2.jpg\" alt=\"\" width=\"600px\" align=\"middle\"/>\nEjemplo 2D\nComo parte del debriefing del accidente, se obtuvieron los siguientes datos", "%%bash\ncat data/Challenger.txt", "Ejemplo 2D\nGrafiquemos los datos", "from matplotlib import pyplot as plt\nimport numpy as np\n# Plot of data\ndata = np.loadtxt(\"data/Challenger.txt\", skiprows=1)\nx = data[:,0]\ny = data[:,1]\nplt.figure(figsize=(16,8))\nplt.plot(x, y, 'bo', ms=8)\nplt.title(\"Exito o Falla en lanzamiento de Challenger\")\nplt.xlabel(r\"T [${}^o F$]\")\nplt.ylabel(r\"Bad Rings\")\nplt.ylim([-0.1,3.1])\nplt.show()", "Ejemplo 2D\nNos gustaría saber en qué condiciones se produce accidente. No nos importa el número de fallas, sólo si existe falla o no.", "# Plot of data\ndata = np.loadtxt(\"data/Challenger.txt\", skiprows=1)\nx = (data[:,0]-32.)/1.8\ny = np.array(data[:,1]==0,int)\nplt.figure(figsize=(16,8))\nplt.plot(x[y==0], y[y==0], 'bo', label=\"Falla\", ms=8)\nplt.plot(x[y>0], y[y>0], 'rs', label=\"Exito\", ms=8)\nplt.ylim([-0.1, 1.1])\nplt.legend(loc=0, numpoints=1)\nplt.title(\"Exito o Falla en lanzamiento de Challenger\")\nplt.xlabel(r\"T [${}^o C$]\")\nplt.ylabel(r\"$y$\")\nplt.show()", "Modelo\nDefinimos como\nantes\n$$\\begin{aligned}\nY &= \\begin{bmatrix}y^{(1)} \\ y^{(2)} \\ \\vdots \\ y^{(m)}\\end{bmatrix}\\end{aligned}$$\ny\n$$\\begin{aligned}\nX = \n\\begin{bmatrix} \n1 & x^{(1)}_1 & \\dots & x^{(1)}_n \\ \n1 & x^{(2)}_1 & \\dots & x^{(2)}_n \\\n\\vdots & \\vdots & & \\vdots \\\n1 & x^{(m)}_1 & \\dots & x^{(m)}_n \\\n\\end{bmatrix}\\end{aligned}$$\nModelo\nLuego la\nevaluación de todos los datos puede escribirse matricialmente como\n$$\\begin{aligned}\nX \\theta &= \n\\begin{bmatrix}\n1 & x_1^{(1)} & ... & x_n^{(1)} \\\n\\vdots & \\vdots & & \\vdots \\\n1 & x_1^{(m)} & ... & x_n^{(m)} \\\n\\end{bmatrix}\n\\begin{bmatrix}\\theta_0 \\ \\theta_1 \\ \\vdots \\ \\theta_n\\end{bmatrix} \\\n& = \n\\begin{bmatrix}\n1 \\theta_0 + x^{(1)}_1 \\theta_1 + ... + x^{(1)}_n \\theta_n \\\n\\vdots \\\n1 \\theta_0 + x^{(m)}_1 \\theta_1 + ... + x^{(m)}_n \\theta_n \\\n\\end{bmatrix}\\end{aligned}$$\nModelo\nNuestro problema\nes encontrar un “buen” conjunto de valores $\\theta$ \nde modo que\n$$\\begin{aligned}\ng(X\\theta)\n\\approx\nY\\end{aligned}$$\ndonde $g(z)$ es la función sigmoide (en. 
sigmoid function).\n$$g(z) = \\frac{1}{1+e^{-z}}$$\nInterpretación gráfica", "from matplotlib import pyplot as plt\nimport numpy as np\n\ndef sigmoid(z):\n return (1+np.exp(-z))**(-1.)\n\nz = np.linspace(-5,5,100)\ng = sigmoid(z)\nfig = plt.figure(figsize=(16,8))\nplt.plot(z,sigmoid(z), lw=2.0)\nplt.plot(z,sigmoid(z*2), lw=2.0)\nplt.plot(z,sigmoid(z-2), lw=2.0)\nplt.grid(\"on\")\nplt.show()", "Modelo\nFunción Sigmoide\nLa función\nsigmoide $g(z) = (1+e^{-z})^{-1}$ tiene la siguiente propiedad:\n$$g'(z) = g(z)(1-g(z))$$\nModelo\nFunción Sigmoide\n$g(z) = (1+e^{-z})^{-1}$ y $g'(z) = g(z)(1-g(z))$.\nDemostración:\n$$\\begin{aligned}\ng'(z) &= \\frac{-1}{(1+e^{-z})^2} (-e^{-z}) \\\n &= \\frac{e^{-z}}{(1+e^{-z})^2} \\\n &= \\frac{1}{1+e^{-z}} \\frac{e^{-z}}{1+e^{-z}} \\\n &= \\frac{1}{1+e^{-z}} \\left(1 - \\frac{1}{1+e^{-z}} \\right) \\\n &= g(z)(1-g(z))\\end{aligned}$$\nInterpretación gráfica", "from matplotlib import pyplot as plt\nimport numpy as np\n\ndef sigmoid(z):\n return (1+np.exp(-z))**(-1.)\n\nz = np.linspace(-5,5,100)\ng = sigmoid(z)\ndgdz = g*(1-g)\nfig = plt.figure(figsize=(16,8))\nplt.plot(z, g, \"k\", label=\"g(z)\", lw=2)\nplt.plot(z, dgdz, \"r\", label=\"dg(z)/dz\", lw=2)\nplt.legend()\nplt.grid(\"on\")\nplt.show()", "Aproximación Ingenieril\n¿Cómo podemos reutilizar lo que conocemos de regresión lineal?\nSi buscamos minimizar\n$$J(\\theta) = \\frac{1}{2} \\sum_{i=1}^{m} \\left( h_{\\theta}(x^{(i)}) - y^{(i)}\\right)^2$$\nPodemos calcular el gradiente y luego utilizar el método del máximo\ndescenso para obtener $\\theta$.\nAproximación Ingenieril\nEl\ncálculo del gradiente es directo:\n$$\\begin{aligned}\n\\frac{\\partial J(\\theta)}{\\partial \\theta_k}\n&= \\sum_{i=1}^{m} \\left( h_{\\theta}(x^{(i)}) - y^{(i)}\\right) \\frac{\\partial}{\\partial \\theta_k} h_{\\theta}(x^{(i)}) \\\n&= \\sum_{i=1}^{m} \\left( h_{\\theta}(x^{(i)}) - y^{(i)}\\right) \\frac{\\partial}{\\partial \\theta_k} g(\\theta^T x^{(i)}) \\\n&= \\sum_{i=1}^{m} \\left( h_{\\theta}(x^{(i)}) - y^{(i)}\\right) h_{\\theta}(x^{(i)}) \\left(1-h_{\\theta}(x^{(i)})\\right) \\frac{\\partial}{\\partial \\theta_k} (\\theta^T x^{(i)}) \\\n&= \\sum_{i=1}^{m} \\left( h_{\\theta}(x^{(i)}) - y^{(i)}\\right) h_{\\theta}(x^{(i)}) \\left(1-h_{\\theta}(x^{(i)})\\right) x^{(i)}_k\\end{aligned}$$\nAproximación Ingenieril\n¿Hay alguna forma de escribir todo esto de manera matricial? 
Recordemos\nque si las componentes eran\n$$\\begin{aligned}\n\\sum_{i=1}^{m} \\left( h_{\\theta}(x^{(i)}) - y^{(i)}\\right) x^{(i)}k = \\sum{i=1}^{m} x^{(i)}k \\left( h{\\theta}(x^{(i)}) - y^{(i)}\\right)\\end{aligned}$$\npodíamos escribirlo vectorialmente como $$X^T (X\\theta - Y)$$\nAproximación Ingenieril\nLuego, para\n$$\\begin{aligned}\n\\frac{\\partial J(\\theta)}{\\partial \\theta_k}\n&= \\sum_{i=1}^{m} \\left( h_{\\theta}(x^{(i)}) - y^{(i)}\\right) h_{\\theta}(x^{(i)}) \\left(1-h_{\\theta}(x^{(i)})\\right) x^{(i)}k \\\n&= \\sum{i=1}^{m} x^{(i)}k \\left( h{\\theta}(x^{(i)}) - y^{(i)}\\right) h_{\\theta}(x^{(i)}) \\left(1-h_{\\theta}(x^{(i)})\\right)\\end{aligned}$$\npodemos escribirlo vectorialmente como\n$$\\nabla_{\\theta} J(\\theta) = X^T \\Big[ (g(X\\theta) - Y) \\odot g(X\\theta) \\odot (1-g(X\\theta)) \\Big]$$\ndonde $\\odot$ es la multiplicación elemento a elemento (element-wise).\nAproximación Ingenieril\nObservación crucial:\n$$\\nabla_{\\theta} J(\\theta) = X^T \\Big[ (g(X\\theta) - Y) \\odot g(X\\theta) \\odot (1-g(X\\theta)) \\Big]$$\nno permite construir un sistema lineal para $\\theta$, por lo cual sólo\npodemos resolver iterativamente.\nAproximación Ingenieril\nPor\nlo tanto tenemos el algoritmo\n$$\\begin{aligned}\n\\theta^{(n+1)} & = \\theta^{(n)} - \\alpha \\nabla_{\\theta} J(\\theta^{(n)}) \\\n\\nabla_{\\theta} J(\\theta) &= X^T \\Big[ (g(X\\theta) - Y) \\odot g(X\\theta) \\odot (1-g(X\\theta)) \\Big]\\end{aligned}$$\nAproximación Ingenieril\nEl código sería el siguiente:", "import numpy as np\n\ndef sigmoid(z):\n return 1./(1+np.exp(-z))\n\ndef norm2_error_logistic_regression(X, Y, theta0, tol=1E-6):\n converged = False\n alpha = 0.01/len(Y)\n theta = theta0\n while not converged:\n H = sigmoid(np.dot(X, theta))\n gradient = np.dot(X.T, (H-Y)*H*(1-H))\n new_theta = theta - alpha * gradient\n converged = np.linalg.norm(theta-new_theta) < tol * np.linalg.norm(theta) \n theta = new_theta\n return theta", "Interpretación Probabilística\n¿Es la derivación anterior\nprobabilísticamente correcta?\nAsumamos que la pertenencia a los grupos está dado por\n$$\\begin{aligned}\n\\mathbb{P}[y = 1| \\ x ; \\theta ] & = h_\\theta(x) \\\n\\mathbb{P}[y = 0| \\ x ; \\theta ] & = 1 - h_\\theta(x)\\end{aligned}$$\nEsto es, una distribución de Bernoulli con $p=h_\\theta(x)$.\\\nLas expresiones anteriores pueden escribirse de manera más compacta como\n$$\\begin{aligned}\n\\mathbb{P}[y | \\ x ; \\theta ] & = (h_\\theta(x))^y (1 - h_\\theta(x))^{(1-y)} \\\\end{aligned}$$\nInterpretación Probabilística\nLa función de verosimilitud $L(\\theta)$ nos\npermite entender que tan probable es encontrar los datos observados,\npara una elección del parámetro $\\theta$.\n$$\\begin{aligned}\nL(\\theta) \n&= \\prod_{i=1}^{m} \\mathbb{P}[y^{(i)}| x^{(i)}; \\theta ] \\\n&= \\prod_{i=1}^{m} \\Big(h_{\\theta}(x^{(i)})\\Big)^{y^{(i)}} \\Big(1 - h_\\theta(x^{(i)})\\Big)^{(1-y^{(i)})}\\end{aligned}$$\nNos gustaría encontrar el parámetro $\\theta$ que más probablemente haya\ngenerado los datos observados, es decir, el parámetro $\\theta$ que\nmaximiza la función de verosimilitud.\nInterpretación Probabilística\nCalculamos la log-verosimilitud:\n$$\\begin{aligned}\nl(\\theta) \n&= \\log L(\\theta) \\\n&= \\log \\prod_{i=1}^{m} (h_\\theta(x^{(i)}))^{y^{(i)}} (1 - h_\\theta(x^{(i)}))^{(1-y^{(i)})} \\\n&= \\sum_{i=1}^{m} y^{(i)}\\log (h_\\theta(x^{(i)})) + (1-y^{(i)}) \\log (1 - h_\\theta(x^{(i)}))\\end{aligned}$$\nNo existe una fórmula cerrada que nos permita obtener el máximo de la\nlog-verosimitud. 
Pero podemos utilizar nuevamente el método del\ngradiente máximo.\nInterpretación Probabilística\nRecordemos que si\n$$\\begin{aligned}\ng(z) = \\frac{1}{1+e^{-z}}\\end{aligned}$$\nEntonces\n$$\\begin{aligned}\ng'(z) &= g(z)(1-g(z))\\end{aligned}$$\ny luego tenemos que\n$$\\begin{aligned}\n\\frac{\\partial}{\\partial \\theta_k} h_\\theta(x) &= h_\\theta(x) (1-h_\\theta(x)) x_k\\end{aligned}$$\nInterpretación Probabilística\n$$\\begin{aligned}\n\\frac{\\partial}{\\partial \\theta_k} l(\\theta) &=\n\\frac{\\partial}{\\partial \\theta_k} \\sum_{i=1}^{m} y^{(i)}\\log (h_\\theta(x^{(i)})) + (1-y^{(i)}) \\log (1 - h_\\theta(x^{(i)})) \\\n&= \\sum_{i=1}^{m} y^{(i)}\\frac{\\partial}{\\partial \\theta_k} \\log (h_\\theta(x^{(i)})) + (1-y^{(i)}) \\frac{\\partial}{\\partial \\theta_k} \\log (1 - h_\\theta(x^{(i)})) \\\n&= \\sum_{i=1}^{m} y^{(i)}\\frac{1}{h_\\theta(x^{(i)})}\\frac{\\partial h_\\theta(x^{(i)})}{\\partial \\theta_k} \n+ (1-y^{(i)}) \\frac{1}{1 - h_\\theta(x^{(i)})} \\frac{\\partial (1-h_\\theta(x^{(i)}))}{\\partial \\theta_k} \\\n&= \\sum_{i=1}^{m} y^{(i)}(1-h_\\theta(x^{(i)})) x^{(i)}- (1-y^{(i)}) h_\\theta(x^{(i)}) x^{(i)}\\\n&= \\sum_{i=1}^{m} y^{(i)}x^{(i)}- y^{(i)}h_\\theta(x^{(i)}) x^{(i)}- h_\\theta(x^{(i)}) x^{(i)}+ y^{(i)}h_\\theta(x^{(i)}) x^{(i)}\\\n&= \\sum_{i=1}^{m} (y^{(i)}-h_\\theta(x^{(i)})) x^{(i)}\\end{aligned}$$\nInterpretación Probabilística\nEs decir, para maximizar la log-verosimilitud\nobtenemos igual que para la regresión lineal:\n$$\\begin{aligned}\n\\theta^{(n+1)} & = \\theta^{(n)} - \\alpha \\nabla_{\\theta} l(\\theta^{(n)}) \\\n\\frac{\\partial l(\\theta)}{\\partial \\theta_k}\n&= \\sum_{i=1}^{m} \\left( h_{\\theta}(x^{(i)}) - y^{(i)}\\right) x^{(i)}_k\\end{aligned}$$\nAunque, en el caso de regresión logística, se tiene\n$h_\\theta(x)=1/(1+e^{-x^T\\theta})$\nOBS: La elección de $\\alpha$ es crucial para la convergencia. En\nparticular, $0.01/m$ funciona bien.\nRecuerdo de la aproximación Ingenieril\nPor\nlo tanto tenemos el algoritmo\n$$\\begin{aligned}\n\\theta^{(n+1)} & = \\theta^{(n)} - \\alpha \\nabla_{\\theta} J(\\theta^{(n)}) \\\n\\nabla_{\\theta} J(\\theta) &= X^T \\Big[ (g(X\\theta) - Y) \\odot g(X\\theta) \\odot (1-g(X\\theta)) \\Big]\\end{aligned}$$\nInterpretación Probabilística\nEs decir, para maximizar la log-verosimilitud\nobtenemos igual que para la regresión lineal:\n$$\\begin{aligned}\n\\theta^{(n+1)} & = \\theta^{(n)} - \\alpha \\nabla_{\\theta} l(\\theta^{(n)}) \\\n\\frac{\\partial l(\\theta)}{\\partial \\theta_k}\n&= \\sum_{i=1}^{m} \\left( h_{\\theta}(x^{(i)}) - y^{(i)}\\right) x^{(i)}_k\\end{aligned}$$\nAunque, en el caso de regresión logística, se tiene\n$h_\\theta(x)=1/(1+e^{-x^T\\theta})$\nOBS: La elección de $\\alpha$ es crucial para la convergencia. En\nparticular, $0.01/m$ funciona bien.", "import numpy as np\n\ndef likelihood_logistic_regression(X, Y, theta0, tol=1E-6):\n converged = False\n alpha = 0.01/len(Y)\n theta = theta0\n while not converged:\n H = sigmoid(np.dot(X, theta))\n gradient = np.dot(X.T, H-Y)\n new_theta = theta - alpha * gradient\n converged = np.linalg.norm(theta-new_theta) < tol * np.linalg.norm(theta) \n theta = new_theta\n return theta\n\ndef sigmoid(z):\n return 1./(1+np.exp(-z))", "Interpretación del resultado\n\n¿Qué significa el parámetro obtenido $\\theta$?\n¿Cómo relacionamos la pertenencia a una clase (discreto) con la hipótesis $h_{\\theta}(x)$ (continuo).\n\n1. 
Aplicación a Datos del Challenger\nApliquemos lo anterior a los datos que tenemos del Challenger.", "# Plot of data\ndata = np.loadtxt(\"data/Challenger.txt\", skiprows=1)\nx = (data[:,0]-32.)/1.8\nX = np.array([np.ones(x.shape[0]), x]).T\ny = np.array(data[:,1]==0,int)\ntheta_0 = y.mean() / X.mean(axis=0)\nprint \"theta_0\", theta_0\ntheta_J = norm2_error_logistic_regression(X, y, theta_0)\nprint \"theta_J\", theta_J\ntheta_l = likelihood_logistic_regression(X, y, theta_0)\nprint \"theta_l\",theta_l", "1. Aplicación a Datos del Challenger\nVisualización de resultados", "# Predictions\ny_pred_J = sigmoid(np.dot(X, theta_J))\ny_pred_l = sigmoid(np.dot(X, theta_l))\n# Plot of data\nplt.figure(figsize=(16,8))\nplt.plot(x[y==0], y[y==0], 'bo', label=\"Falla\", ms=8)\nplt.plot(x[y>0], y[y>0], 'rs', label=\"Exito\", ms=8)\nplt.plot(x, y_pred_J, label=\"Norm 2 error prediction\")\nplt.plot(x, y_pred_l, label=\"Likelihood prediction\")\nplt.ylim([-0.1, 1.1])\nplt.legend(loc=0, numpoints=1)\nplt.title(\"Exito o Falla en lanzamiento de Challenger\")\nplt.xlabel(r\"T [${}^o C$]\")\nplt.ylabel(r\"$y$\")\nplt.show()", "2. Aplicación al Iris Dataset\nHay\ndefinidas $3$ clases, pero nosotros sólo podemos clasificar en $2$ clases. ¿Qué hacer?", "import numpy as np\nfrom sklearn import datasets\n\n# Loading the data\niris = datasets.load_iris()\nX = iris.data\nY = iris.target\nprint iris.target_names\n\n# Print data and labels\nfor x, y in zip(X,Y):\n print x, y", "2. Aplicación al Iris Dataset\nPodemos definir 2 clases: Iris Setosa y no Iris Setosa.\n¿Que label le pondremos a cada clase?", "import numpy as np\nfrom sklearn import datasets\n\n# Loading the data\niris = datasets.load_iris()\nnames = iris.target_names\nprint names\nX = iris.data\nY = np.array(iris.target==0, int)\n\n# Print data and labels\nfor x, y in zip(X,Y):\n print x, y", "2. Aplicación al Iris Dataset\nPara aplicar el algoritmo, utilizando el algoritmo Logistic Regression de la librería sklearn, requerimos un código como el siguiente:", "import numpy as np\nfrom sklearn import datasets\nfrom sklearn.linear_model import LogisticRegression\n\n# Loading the data\niris = datasets.load_iris()\nnames = iris.target_names\nX = iris.data\nY = np.array(iris.target==0, int)\n\n# Fitting the model\nLogit = LogisticRegression()\nLogit.fit(X,Y)\n\n# Obtain the coefficients\nprint Logit.intercept_, Logit.coef_ \n\n# Predicting values\nY_pred = Logit.predict(X)\n#x = X.mean(axis=0)\n#Y_pred_mean = Logit.predict(x)\n#print x, Y_pred_mean", "2. Aplicación al Iris Dataset\nPodemos visualizar el resultado con una matriz de confusión.", "from sklearn.metrics import confusion_matrix\n\ncm = confusion_matrix(Y, Y_pred)\n\nprint cm", "¡Nuestra clasificación es perfecta!\nReferencias\n\nJake VanderPlas, ESAC Data Analysis and Statistics Workshop 2014, https://github.com/jakevdp/ESAC-stats-2014\nAndrew Ng, Machine Learning CS144, Stanford University." ]
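One loose end in the notebook above is the question it raises about how the continuous hypothesis $h_\theta(x)$ relates to a discrete class label. The usual convention is to threshold at 0.5, which for the one-dimensional Challenger model corresponds to the temperature where $\theta_0 + \theta_1 T = 0$. The sketch below illustrates that idea; the coefficient values are placeholders chosen for the example, not the ones produced by the notebook's fit.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_class(theta, X, threshold=0.5):
    """Map the continuous hypothesis h_theta(x) to a discrete 0/1 label."""
    probs = sigmoid(np.dot(X, theta))
    return (probs >= threshold).astype(int)

# placeholder coefficients for h(T) = sigmoid(theta0 + theta1 * T)
theta = np.array([-5.0, 0.3])

# decision boundary: theta0 + theta1 * T = 0  =>  T = -theta0 / theta1
T_boundary = -theta[0] / theta[1]
print('h = 0.5 at roughly T = %.1f degrees C' % T_boundary)

# classify a few temperatures against the 0.5 threshold
temps = np.array([5.0, 15.0, 25.0])
X = np.column_stack([np.ones_like(temps), temps])
print(predict_class(theta, X))
```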
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/test-institute-3/cmip6/models/sandbox-2/atmos.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: TEST-INSTITUTE-3\nSource ID: SANDBOX-2\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:46\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-2', 'atmos')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Overview\n2. Key Properties --&gt; Resolution\n3. Key Properties --&gt; Timestepping\n4. Key Properties --&gt; Orography\n5. Grid --&gt; Discretisation\n6. Grid --&gt; Discretisation --&gt; Horizontal\n7. Grid --&gt; Discretisation --&gt; Vertical\n8. Dynamical Core\n9. Dynamical Core --&gt; Top Boundary\n10. Dynamical Core --&gt; Lateral Boundary\n11. Dynamical Core --&gt; Diffusion Horizontal\n12. Dynamical Core --&gt; Advection Tracers\n13. Dynamical Core --&gt; Advection Momentum\n14. Radiation\n15. Radiation --&gt; Shortwave Radiation\n16. Radiation --&gt; Shortwave GHG\n17. Radiation --&gt; Shortwave Cloud Ice\n18. Radiation --&gt; Shortwave Cloud Liquid\n19. Radiation --&gt; Shortwave Cloud Inhomogeneity\n20. Radiation --&gt; Shortwave Aerosols\n21. Radiation --&gt; Shortwave Gases\n22. Radiation --&gt; Longwave Radiation\n23. Radiation --&gt; Longwave GHG\n24. Radiation --&gt; Longwave Cloud Ice\n25. Radiation --&gt; Longwave Cloud Liquid\n26. Radiation --&gt; Longwave Cloud Inhomogeneity\n27. Radiation --&gt; Longwave Aerosols\n28. Radiation --&gt; Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --&gt; Boundary Layer Turbulence\n31. Turbulence Convection --&gt; Deep Convection\n32. Turbulence Convection --&gt; Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --&gt; Large Scale Precipitation\n35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --&gt; Optical Cloud Properties\n38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\n39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --&gt; Isscp Attributes\n42. Observation Simulation --&gt; Cosp Attributes\n43. Observation Simulation --&gt; Radar Inputs\n44. Observation Simulation --&gt; Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --&gt; Orographic Gravity Waves\n47. Gravity Waves --&gt; Non Orographic Gravity Waves\n48. Solar\n49. Solar --&gt; Solar Pathways\n50. Solar --&gt; Solar Constant\n51. Solar --&gt; Orbital Parameters\n52. Solar --&gt; Insolation Ozone\n53. Volcanos\n54. Volcanos --&gt; Volcanoes Treatment \n1. Key Properties --&gt; Overview\nTop level key properties\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of atmospheric model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.4. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.5. 
High Top\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the orography.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n", "4.2. Changes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n", "5. Grid --&gt; Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Discretisation --&gt; Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n", "6.3. Scheme Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.4. Horizontal Pole\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal discretisation pole singularity treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7. Grid --&gt; Discretisation --&gt; Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType of vertical coordinate system", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere dynamical core", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the dynamical core of the model.", "# PROPERTY ID - DO NOT EDIT ! 
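\n# EXAMPLE (illustrative only, not part of the auto-generated template): this optional STRING property \n# expects a short free-text name; the value below is purely hypothetical. \n#     DOC.set_value(\"SpectralCore v2.1\") 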
\nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Timestepping Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestepping framework type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of the model prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Dynamical Core --&gt; Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Top Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary heat treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Top Wind\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary wind treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Dynamical Core --&gt; Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nType of lateral boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Dynamical Core --&gt; Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal diffusion scheme name", "# PROPERTY ID - DO NOT EDIT ! 
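\n# NOTE (illustrative only, not part of the auto-generated template): cardinality 0.1 means this property \n# is optional and may be left as a TODO; if documented, a short free-text name is enough, e.g. (hypothetical) \n#     DOC.set_value(\"fourth-order hyperdiffusion\") 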
\nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal diffusion scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Dynamical Core --&gt; Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTracer advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.3. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.4. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracer advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Dynamical Core --&gt; Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMomentum advection schemes name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Scheme Staggering Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Radiation --&gt; Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShortwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Radiation --&gt; Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.3. 
Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Radiation --&gt; Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18. Radiation --&gt; Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Radiation --&gt; Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Radiation --&gt; Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
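\n# EXAMPLE (illustrative only, not part of the auto-generated template): cardinality 1.N allows several of \n# the listed choices; the \"PROPERTY VALUE(S)\" wording suggests one DOC.set_value call per selected choice, e.g. \n#     DOC.set_value(\"Mie theory\") \n#     DOC.set_value(\"geometric optics\") 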
\nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21. Radiation --&gt; Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Radiation --&gt; Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLongwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23. Radiation --&gt; Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. 
Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Radiation --&gt; Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Physical Reprenstation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25. Radiation --&gt; Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Radiation --&gt; Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27. Radiation --&gt; Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. 
General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28. Radiation --&gt; Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere convection and turbulence", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. Turbulence Convection --&gt; Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBoundary layer turbulence scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBoundary layer turbulence scheme type", "# PROPERTY ID - DO NOT EDIT ! 
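\n# NOTE (illustrative only, not part of the auto-generated template): if none of the listed types fits, the \n# \"Other: [Please specify]\" choice suggests replacing the bracketed text with a free-text description, e.g. (hypothetical) \n#     DOC.set_value(\"Other: hybrid non-local K-profile scheme\") 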
\nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.3. Closure Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoundary layer turbulence scheme closure order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Counter Gradient\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "31. Turbulence Convection --&gt; Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDeep convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.5. 
Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Turbulence Convection --&gt; Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nShallow convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nshallow convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nshallow convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n", "32.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation", "# PROPERTY ID - DO NOT EDIT ! 
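\n# EXAMPLE (illustrative only, not part of the auto-generated template): required STRING overviews are free \n# text, and a sentence or two is usually sufficient; the description below is purely hypothetical. \n#     DOC.set_value(\"Single-moment bulk microphysics with prognostic cloud liquid, cloud ice, rain and snow.\") 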
\nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Microphysics Precipitation --&gt; Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.2. Hydrometeors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLarge scale cloud microphysics processes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the atmosphere cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.3. Atmos Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! 
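\n# NOTE (illustrative only, not part of the auto-generated template): cardinality 0.N means zero or more of \n# the listed couplings may be recorded; if none apply the property can be left unset, otherwise e.g. \n#     DOC.set_value(\"atmosphere_radiation\") \n#     DOC.set_value(\"atmosphere_turbulence_convection\") 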
\nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n", "36.4. Uses Separate Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.6. Prognostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.7. Diagnostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.8. Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37. Cloud Scheme --&gt; Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37.2. Cloud Inhomogeneity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "38.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "38.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale water distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "39.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "39.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "39.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of observation simulator characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Observation Simulation --&gt; Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator ISSCP top height estimation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. Top Height Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator ISSCP top height direction", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42. Observation Simulation --&gt; Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP run configuration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42.2. Number Of Grid Points\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of grid points", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.3. Number Of Sub Columns\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.4. Number Of Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of levels", "# PROPERTY ID - DO NOT EDIT ! 
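\n# EXAMPLE (illustrative only, not part of the auto-generated template): INTEGER properties take an unquoted \n# number, matching the DOC.set_value(value) pattern below, e.g. \n#     DOC.set_value(40) 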
\nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43. Observation Simulation --&gt; Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar frequency (Hz)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "43.3. Gas Absorption\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses gas absorption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "43.4. Effective Radius\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses effective radius", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "44. Observation Simulation --&gt; Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator lidar ice type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "44.2. Overlap\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator lidar overlap", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "45.2. 
Sponge Layer\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.3. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground wave distribution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.4. Subgrid Scale Orography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSubgrid scale orography effects taken into account.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46. Gravity Waves --&gt; Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "46.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave propogation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47. Gravity Waves --&gt; Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "47.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n", "47.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave propogation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of solar insolation of the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "49. Solar --&gt; Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "50. Solar --&gt; Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the solar constant.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "50.2. Fixed Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "50.3. Transient Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nsolar constant transient characteristics (W m-2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51. Solar --&gt; Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "51.2. Fixed Reference Date\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "51.3. Transient Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of transient orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51.4. 
Computation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used for computing orbital parameters.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "52. Solar --&gt; Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "54. Volcanos --&gt; Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ocean-color-ac-challenge/evaluate-pearson
evaluation-participant-f.ipynb
apache-2.0
[ "E-CEO Challenge #3 Evaluation\nWeights\nDefine the weight of each wavelength", "w_412 = 0.56\nw_443 = 0.73\nw_490 = 0.71\nw_510 = 0.36\nw_560 = 0.01", "Run\nProvide the run information:\n* run id\n* run metalink containing the 3 by 3 kernel extractions\n* participant", "run_id = '0000005-150701000025418-oozie-oozi-W'\nrun_meta = 'http://sb-10-16-10-55.dev.terradue.int:50075/streamFile/ciop/run/participant-f/0000005-150701000025418-oozie-oozi-W/results.metalink?'\nparticipant = 'participant-f'", "Define all imports in a single cell", "import glob\nimport pandas as pd\nfrom scipy.stats.stats import pearsonr\nimport numpy\nimport math", "Manage run results\nDownload the results and aggregate them in a single Pandas dataframe", "!curl $run_meta | aria2c -d $participant -M -\n\npath = participant # use your path\n\nallFiles = glob.glob(path + \"/*.txt\")\nframe = pd.DataFrame()\nlist_ = []\nfor file_ in allFiles:\n df = pd.read_csv(file_,index_col=None, header=0)\n list_.append(df)\n frame = pd.concat(list_)", "Number of points extracted from MERIS level 2 products", "len(frame.index)", "Calculate Pearson\nFor all three sites, AAOT, BOUSSOLE and MOBY, calculate the Pearson factor for each band.\n\nNote AAOT does not have measurements for band @510\n\nAAOT site", "insitu_path = './insitu/AAOT.csv'\ninsitu = pd.read_csv(insitu_path)\nframe_full = pd.DataFrame.merge(frame.query('Name == \"AAOT\"'), insitu, how='inner', on = ['Date', 'ORBIT'])\n\nframe_xxx= frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()\nr_aaot_412 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @412\")\n\nframe_xxx= frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()\nr_aaot_443 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @443\")\n\nframe_xxx= frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()\nr_aaot_490 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @490\")\n\nr_aaot_510 = 0\nprint(\"0 observations for band @510\")\n\nframe_xxx= frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()\nr_aaot_560 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @560\")\n\ninsitu_path = './insitu/BOUSS.csv'\ninsitu = pd.read_csv(insitu_path)\nframe_full = pd.DataFrame.merge(frame.query('Name == \"BOUS\"'), insitu, how='inner', on = ['Date', 'ORBIT'])\n\nframe_xxx= frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()\nr_bous_412 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @412\")\n\nframe_xxx= frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()\nr_bous_443 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @443\")\n\nframe_xxx= frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()\nr_bous_490 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @490\")\n\nframe_xxx= frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna()\nr_bous_510 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0]\n\nprint(str(len(frame_xxx.index)) + \" observations for band @510\")\n\nframe_xxx= frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()\nr_bous_560 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0]\n\nprint(str(len(frame_xxx.index)) + \" observations for band 
@560\")\n\ninsitu_path = './insitu/MOBY.csv'\ninsitu = pd.read_csv(insitu_path)\nframe_full = pd.DataFrame.merge(frame.query('Name == \"MOBY\"'), insitu, how='inner', on = ['Date', 'ORBIT'])\n\nframe_xxx= frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()\nr_moby_412 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @412\")\n\nframe_xxx= frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()\nr_moby_443 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @443\")\n\nframe_xxx= frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()\nr_moby_490 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0]\n\nprint(str(len(frame_xxx.index)) + \" observations for band @490\")\n\nframe_xxx= frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna()\nr_moby_510 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @510\")\n\nframe_xxx= frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()\nr_moby_560 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @560\")\n\n[r_aaot_412, r_aaot_443, r_aaot_490, r_aaot_510, r_aaot_560]\n\n[r_bous_412, r_bous_443, r_bous_490, r_bous_510, r_bous_560]\n\n[r_moby_412, r_moby_443, r_moby_490, r_moby_510, r_moby_560]\n\nr_final = (numpy.mean([r_bous_412, r_moby_412, r_aaot_412]) * w_412 \\\n + numpy.mean([r_bous_443, r_moby_443, r_aaot_443]) * w_443 \\\n + numpy.mean([r_bous_490, r_moby_490, r_aaot_490]) * w_490 \\\n + numpy.mean([r_bous_510, r_moby_510, r_aaot_510]) * w_510 \\\n + numpy.mean([r_bous_560, r_moby_560, r_aaot_560]) * w_560) \\\n / (w_412 + w_443 + w_490 + w_510 + w_560)\n\nr_final" ]
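The per-site blocks above repeat the same pattern for every band, so a compact helper makes the scoring logic easier to reuse. This is a sketch rather than part of the original evaluation: it assumes the same merged-DataFrame column layout (`reflec_<i>_mean` against `rho_wn_IS_<band>`) and reproduces the convention used above of scoring a band as 0 when a site has no usable observations.

```python
import numpy as np
from scipy.stats import pearsonr

BANDS = ['412', '443', '490', '510', '560']
WEIGHTS = {'412': 0.56, '443': 0.73, '490': 0.71, '510': 0.36, '560': 0.01}

def band_correlations(merged, bands=BANDS):
    """Pearson r per band for one site; bands without valid pairs score 0."""
    out = {}
    for i, band in enumerate(bands, start=1):
        cols = ['reflec_%d_mean' % i, 'rho_wn_IS_%s' % band]
        if not all(c in merged.columns for c in cols):
            out[band] = 0.0          # e.g. AAOT has no @510 measurements
            continue
        pair = merged[cols].dropna()
        out[band] = pearsonr(pair.iloc[:, 0], pair.iloc[:, 1])[0] if len(pair) > 1 else 0.0
    return out

def weighted_score(per_site_correlations, weights=WEIGHTS):
    """Average r over sites per band, then weight-average over bands."""
    total = sum(weights[b] * np.mean([site[b] for site in per_site_correlations])
                for b in weights)
    return total / sum(weights.values())
```

Applied to the three merged frames built above (AAOT, BOUSSOLE, MOBY), `weighted_score([band_correlations(aaot), band_correlations(bous), band_correlations(moby)])` should give the same weighted figure as the explicit `r_final` cell, provided the missing AAOT @510 data appears as an absent or empty column.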
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hackolite/PRJ-medtec_sigproc
SigProc_101/SigProc-101-pimped.ipynb
mit
[ "Challenge : envelope detection\nThe aim of this challenge is to determine what is the best envelope detection algorithm to be used for the echOpen probe's raw signal.\nThis challenge is meant to start playing with usual envelope detection algorithms so as to gather knowledge about state-of-art techniques that could be used in the future to improve the echOpen preprocessing. For more details about the principle of ultrasound imaging and signal processing, you can have a glance at our gitbook.\nIn this challenge we'll use raw data that are simulated from images by modulating the signal with a sinusoïdal function, the frequence of which corresponds to the echOpen piezo frequence. Obviously, this is an \"ideal\" case, but that allows to test different implementations while monitoring the error between the envelope detection algorithm used and the ground truth image from which raw data were simulated.\nEventually, the implementations that will be retained at the end of this challenge will be tested on real echOpen raw data. This will allow to measure the different algorithms' performances in terms of \"image quality\" (impact on spatial resolution, for example).\nPipeline\nThe scheme below sums up the different steps that are gone through in this notebook : \n\nStarting from an ultrasound image of good quality (\"ground truth\"), we apply an amplitude modulation by multiplicating each line by a sinusoidal signal. This allows to simulate what would be the raw signal that would lead to each line in the image.\nWe implemented a very basic reconstruction algorithm that will serve as a baseline (i.e. you should do better!). The reconstruction function takes as input the simulated raw signal and performs envelope detection to get an image.\nWe compute the error map between the reconstructed image and the ground truth. We attribute to the reconstruction method a score that corresponds to the sum of squared errors between each pixels.\nAt the very end of the notebook is a reconstructImage() function in which you can implement your own envelope detection method. Then you'll be able to directly evaluate the score associated with your method. Just play with the reconstructImage() function and try to lower the error!\nOnce you're satisfied with your method, you can submit your reconstructImage() to the leaderboard by following the instructions at the end of this notebook.\n\n<img src=\"files/pipeline.png\">\nLoading useful libraries", "from __future__ import print_function\nimport numpy as np\nfrom PIL import Image # for bmp import\nfrom glob import glob\nfrom scipy.misc import imresize\nimport matplotlib.pyplot as plt\nimport math\nimport time\n\n%matplotlib inline\n\ndef showImage(imageToPlot):\n plt.figure(figsize=(2, 4))\n plt.gray()\n plt.imshow(imageToPlot.reshape(imageToPlot.shape), aspect='auto')\n plt.show()\n\ndef normImag(A):\n# Let's normalize the image\n A = A - A.min()\n A = 1.0*A/A.max()\n return(A)", "Loading and studying the 342x719 image of fantom\nHere we load the original image that will serve as \"ground truth\" that we would like to achieve. This image will later be altered in a way that allows to simulate a raw signal (i.e. the image before envelope detection is performed).", "im = Image.open(\"fantom.bmp\").convert('L') # convert 'L' is to get a flat image, not RGB\ngroundTruth = normImag(np.array(im)) # we use the full [0;1] range\nshowImage(groundTruth)", "Let's assume vertical line points are spaced by 1cm each. 
This corresponds to a depth of about 13cm.", "depth = 0.13 # in meters\nresolution = groundTruth.shape[0]/depth # in pts/m\nt = depth*2.0/1450.0\n\nprint('Image resolution in pixels/mm : ', resolution/1000.0)\nprint('Listening time in micro-secs : ', t*1.0E6)", "The corresponding resolution is 5.53 pts / mm. At a speed of 1450m/s for sound, we'd have a listening time of around 180µs of recording.\nSimulating a raw signal that would lead to this \"ground truth\" image\nLet's assume an ADC sampling rate of 60Msps (close to our prototype) and a piezo frequency f = 3.5 MHz, and compute the length of the raw signal :", "sps = 60.0E6\nf = 3.5E6\nL = int(t*sps)\n\nprint(\"Number of points in raw signal : \", L)", "The corresponding length of raw signal is close to 11k points.\nWe can then recreate the raw signal image :", "# First create a table of L points for each line, from the original image, by using bicubic interpolation\n# This is to get a smoother and more realistic raw signal\nBigImg = imresize(groundTruth, ( L,groundTruth.shape[1]), interp='bicubic')\n\n# Then simulate raw signal by modulating the data of BigImg with a sinusoidal function, \n# the frequence of which corresponds to the piezo frequency\nrawSignal = np.zeros(shape=(L,groundTruth.shape[1]))\nfor i in range(len(rawSignal)):\n for j in range(len(rawSignal[0])):\n pixelValue = 1.0*BigImg[i][j]\n w = 2.0*math.radians(180)*f\n rawSignal[i][j] = pixelValue*math.cos(1.0*i*w/sps)", "Let's check that we have the image (in green) and the corresponding signal (in blue) :", "line = np.zeros(shape=(L))\nimageLine = np.zeros(shape=(L))\nfor i in range(len(rawSignal)):\n line[i] = rawSignal[i][10]\n imageLine[i] = BigImg[i][10]\nplt.plot(line)\nplt.plot(imageLine)\nplt.show()", "Let's analyse this signal in the frequency domain, through a FFT. We should see the image, modulated by the 3.5MHz. That is, a \"potato\" around a 3.5MHz peak :", "maxFreq = 6.0E6\nxLimit = int(L*maxFreq/sps) # upper cap to \nlineFFT = np.abs(np.fft.fft(line))\nxScale = range(xLimit)\nfor i in range(xLimit):\n xScale[i] = (60.0E6)*float(xScale[i])/(L*(1.0E6))\nplt.plot(xScale,lineFFT[0:xLimit])\nplt.xlabel('Frequency (MHz)')\nplt.show()", "Conclusion: our rawSignal matches the raw signal's characteristics for the fantom image !\nSaving the raw signal into a file for use in a different code\nLet's save the raw signal data into a compressed .csv file so that you'll be able to load it from a different code (e.g. if you're not at ease with python, you can make your own script in whatever language you want to implement the envelope detection algorithm). Note that np.savetxt() and np.load() transparently accepts gz files.", "# Let's save the raw signal data\nnp.savetxt(\"RawSignal.csv.gz\",rawSignal, delimiter=';')", "Envelope detection challenge\nBelow are the pieces of code related to the proper envelope detection.\n\nFirstly, some useful functions are defined to display, compare images, and allow performance assessment.\nA basic decimation algorithm is then implemented. 
This method will serve as a baseline : you're supposed to do better !\nYou'll have to define your own envelope detection method\nAn automated score estimation and comparison between the baseline and your algorithm are provided.\n\nScore estimation and comparison functions\nThe estimateScore() function computes the error map between reconstructed image and the ground truth, and returns a score associated to this error map, as well as the max error achieved on a given pixel.\nYou should retain the algorithm that achieves the lowest possible value for these scores.", "def ssd(A,B):\n A = A - 0.95*A.min()\n A = 1.0*A/A.max()\n B = B - 0.95*B.min()\n B = 1.0*B/B.max()\n squares = (A[:,:] - B[:,:]) ** 2\n return np.sum(squares)\n\ndef estimateScore(groundTruth, reconstructedImage) :\n errorMap = (groundTruth - reconstructedImage)\n print('Error map between ground truth and reconstructed image : ')\n showImage(errorMap)\n score = ssd(reconstructedImage,groundTruth)\n maxErr = errorMap.max()\n return [score,maxErr]\n\ndef compareImages(im1,im2) :\n plt.figure()\n ax = plt.subplot(1, 2, 1)\n plt.imshow(im1)\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n ax = plt.subplot(1, 2, 2)\n plt.imshow(im2)\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n plt.show()", "Baseline method (don't change this one ! )\nThis function implements a basic decimation.", "def reconstructBaseline(rawSignal,image_shape) :\n reconstructedImage = np.zeros(shape=(image_shape[0],image_shape[1]))\n decimationFactor = 1.0*rawSignal.shape[0]/image_shape[0]\n\n for i in range(rawSignal.shape[0]):\n for j in range(image_shape[1]): \n reconstructedImage[int(i/decimationFactor)][j] += np.abs(rawSignal[i][j])\n \n reconstructedImage = normImag(np.abs(reconstructedImage))\n return reconstructedImage", "Let's compare the image reconstructed with the baseline method, with the ground truth to achieve :", "reconBaseline = reconstructBaseline(rawSignal,groundTruth.shape)\ncompareImages(groundTruth, reconBaseline)\n[scoreBaseline,maxErrBaseline] = estimateScore(groundTruth, reconBaseline)\n\nprint('Score for Baseline method : ', scoreBaseline)\nprint('max Err between pixels for Baseline method : ', maxErrBaseline)", "Your turn : implement your own method in the function below", "def reconstructImage(rawSignal,image_shape) :\n # Here is a copy of the baseline method. Replace that by another method.\n reconstructedImage = np.zeros(shape=(image_shape[0],image_shape[1]))\n decimationFactor = 1.0*rawSignal.shape[0]/image_shape[0]\n\n for i in range(rawSignal.shape[0]):\n for j in range(image_shape[1]): \n reconstructedImage[int(i/decimationFactor)][j] += np.abs(rawSignal[i][j])\n \n reconstructedImage = normImag(np.abs(reconstructedImage))\n # The function should return the reconstructed image \n return reconstructedImage", "Performance assessment of your method", "recon = reconstructImage(rawSignal,groundTruth.shape)\ncompareImages(groundTruth, recon)\n[score,maxErr] = estimateScore(groundTruth, recon)\n\nprint('Score for your method : ', score)\nprint('max Err between pixels for your method : ', maxErr)", "Submitting your own method to the leaderboard\nTo submit your own implementation to our leaderboard and compare your performances to other teams, go to http://37.187.117.106:8888/.\n\nSubscribe to the leaderboard\nGo to the IDE and paste your code, in the same form as the example provided in the cell below. 
The code should at least include the definition of a function \"run(rawSignal,image_shape)\" where :\nrawData is a numpy.array containing the raw signal values (in the same format as in this notebook)\nimageShape is an array [imageLength, imageWidth] with dimensions of the reconstructed image\nthe function should return a numpy.array of shape [imageLength, imageWidth] containing the reconstructed image values\nIt's possible to install python packages via pip, by defining a \"install_packages()\" function. The imports should then be done in the run function.\nClick on the \"submit\" button. \nAfter some time, a notification will inform you about your score and your ranking will appear in the leaderboard.\n\nYou can submit the example code in the cell below. This implementation should lead to a score of 12481.6872689", "def install_packages():\n import pip\n pip.main(['install', 'scipy'])\n\ndef run(rawSignal,image_shape) :\n \n import numpy as np\n from scipy.signal import hilbert\n\n reconstructedImage = np.zeros(shape=(image_shape[0],image_shape[1]))\n analytic_signal = hilbert(rawSignal)\n amplitude_envelope = np.abs(analytic_signal)\n decimationFactor = 1.0*amplitude_envelope.shape[0]/image_shape[0]\n \n old_pixel = 0\n nb_points=0\n for i in range(amplitude_envelope.shape[0]):\n for j in range(image_shape[1]): \n reconstructedImage[int(i/decimationFactor)][j] += np.abs(amplitude_envelope[i][j])\n \n if (int(i/decimationFactor) == old_pixel):\n nb_points += 1\n else:\n nb_points += 1\n reconstructedImage[int(i/decimationFactor)-1] = reconstructedImage[int(i/decimationFactor)-1]/nb_points\n nb_points = 1\n old_pixel = old_pixel+1\n \n reconstructedImage = normImag(np.abs(reconstructedImage))\n \n # The function should return the reconstructed image \n return reconstructedImage" ]
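The leaderboard example above relies on the Hilbert transform; another classical detector worth comparing is full-wave rectification followed by a low-pass filter. The sketch below is one possible alternative `run`-style implementation, not part of the original challenge: the 2 MHz cut-off and the filter order are arbitrary starting points to be tuned against the score, and the sampling rate is assumed to be the 60 Msps used when the raw signal was simulated.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def run_lowpass(rawSignal, image_shape, fs=60.0e6, cutoff=2.0e6, order=4):
    """Envelope via rectification + zero-phase Butterworth low-pass,
    then block-averaging down to the requested image height."""
    b, a = butter(order, cutoff / (0.5 * fs), btype='low')
    envelope = filtfilt(b, a, np.abs(rawSignal), axis=0)   # one filter pass per scan line
    decimation = envelope.shape[0] / float(image_shape[0])
    image = np.zeros(image_shape)
    counts = np.zeros(image_shape[0])
    for i in range(envelope.shape[0]):
        row = int(i / decimation)
        image[row] += envelope[i]
        counts[row] += 1
    image /= counts[:, np.newaxis]   # average instead of summing the block
    image -= image.min()             # normalize to [0, 1], like normImag()
    return image / image.max()
```

Swapping this in for the decimation-only baseline should remove most of the carrier ripple; whether it beats the Hilbert version is exactly the kind of comparison the score from `estimateScore` is meant to settle.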
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ewanbarr/anansi
docs/Molonglo_coords.ipynb
apache-2.0
[ "Molonglo coordinate transforms\nUseful coordinate transforms for the molonglo radio telescope", "import numpy as np\nimport ephem as e\nfrom scipy.optimize import minimize\nimport matplotlib.pyplot as plt\nnp.set_printoptions(precision=5,suppress =True)", "Below we define the rotation and reflection matrices", "def rotation_matrix(angle, d):\n directions = {\n \"x\":[1.,0.,0.],\n \"y\":[0.,1.,0.],\n \"z\":[0.,0.,1.]\n }\n direction = np.array(directions[d])\n sina = np.sin(angle)\n cosa = np.cos(angle)\n # rotation matrix around unit vector \n R = np.diag([cosa, cosa, cosa])\n R += np.outer(direction, direction) * (1.0 - cosa)\n direction *= sina\n R += np.array([[ 0.0, -direction[2], direction[1]],\n [ direction[2], 0.0, -direction[0]],\n [-direction[1], direction[0], 0.0]])\n return R\n\ndef reflection_matrix(d):\n m = {\n \"x\":[[-1.,0.,0.],[0., 1.,0.],[0.,0., 1.]],\n \"y\":[[1., 0.,0.],[0.,-1.,0.],[0.,0., 1.]],\n \"z\":[[1., 0.,0.],[0., 1.,0.],[1.,0.,-1.]]\n }\n return np.array(m[d])", "Define a position vectors", "def pos_vector(a,b):\n return np.array([[np.cos(b)*np.cos(a)],\n [np.cos(b)*np.sin(a)],\n [np.sin(b)]])\n\ndef pos_from_vector(vec):\n a,b,c = vec\n a_ = np.arctan2(b,a)\n c_ = np.arcsin(c) \n return a_,c_", "Generic transform", "def transform(a,b,R,inverse=True):\n P = pos_vector(a,b)\n if inverse:\n R = R.T\n V = np.dot(R,P).ravel()\n a,b = pos_from_vector(V)\n a = 0 if np.isnan(a) else a\n b = 0 if np.isnan(a) else b\n return a,b", "Reference conversion formula from Duncan's old TCC", "def hadec_to_nsew(ha,dec):\n ew = np.arcsin((0.9999940546 * np.cos(dec) * np.sin(ha))\n - (0.0029798011806 * np.cos(dec) * np.cos(ha))\n + (0.002015514993 * np.sin(dec)))\n ns = np.arcsin(((-0.0000237558704 * np.cos(dec) * np.sin(ha))\n + (0.578881847 * np.cos(dec) * np.cos(ha))\n + (0.8154114339 * np.sin(dec)))\n / np.cos(ew))\n return ns,ew", "New conversion formula using rotation matrices\nWhat do we think we should have:\n\\begin{equation}\n\\begin{bmatrix} \n\\cos(\\rm EW)\\cos(\\rm NS) \\\n\\cos(\\rm EW)\\sin(\\rm NS) \\\n\\sin(\\rm EW)\n\\end{bmatrix}\n=\n\\mathbf{R}\n\\begin{bmatrix} \n\\cos(\\delta)\\cos(\\rm HA) \\\n\\cos(\\delta)\\sin(\\rm HA) \\\n\\sin(\\delta)\n\\end{bmatrix}\n\\end{equation}\nWhere $\\mathbf{R}$ is a composite rotation matrix.\nWe need a rotations in axis of array plus orthogonal rotation w.r.t. to array centre. Note that the NS convention is flipped so HA and NS go clockwise and anti-clockwise respectively when viewed from the north pole in both coordinate systems.\n\\begin{equation}\n\\mathbf{R}_x\n=\n\\begin{bmatrix} \n1 & 0 & 0 \\\n0 & \\cos(\\theta) & -\\sin(\\theta) \\\n0 & \\sin(\\theta) & \\cos(\\theta)\n\\end{bmatrix}\n\\end{equation}\n\\begin{equation}\n\\mathbf{R}_y\n=\n\\begin{bmatrix} \n\\cos(\\phi) & 0 & \\sin(\\phi) \\\n0 & 1 & 0 \\\n-\\sin(\\phi) & 0 & \\cos(\\phi)\n\\end{bmatrix}\n\\end{equation}\n\\begin{equation}\n\\mathbf{R}_z\n=\n\\begin{bmatrix} \n\\cos(\\eta) & -\\sin(\\eta) & 0\\\n\\sin(\\eta) & \\cos(\\eta) & 0\\\n0 & 0 & 1\n\\end{bmatrix}\n\\end{equation}\n\\begin{equation}\n\\mathbf{R} = \\mathbf{R}_x \\mathbf{R}_y \\mathbf{R}_z\n\\end{equation}\nHere I think $\\theta$ is a $3\\pi/2$ rotation to put the telescope pole (west) at the telescope zenith and $\\phi$ is also $\\pi/2$ to rotate the telescope meridian (which is lengthwise on the array, what we traditionally think of as the meridian is actually the equator of the telescope) into the position of $Az=0$.\nHowever rotation of NS and HA are opposite, so a reflection is needed. 
For example reflection around a plane in along which the $z$ axis lies:\n\\begin{equation}\n\\mathbf{\\bar{R}}_z\n=\n\\begin{bmatrix} \n1 & 0 & 0\\\n0 & 1 & 0\\\n0 & 0 & -1\n\\end{bmatrix}\n\\end{equation}\nConversion to azimuth and elevations should therefore require $\\theta=-\\pi/2$ and $\\phi=\\pi/2$ with a reflection about $x$.\nTaking into account the EW skew and slope of the telescope:\n\\begin{equation}\n\\begin{bmatrix} \n\\cos(\\rm EW)\\cos(\\rm NS) \\\n\\cos(\\rm EW)\\sin(\\rm NS) \\\n\\sin(\\rm EW)\n\\end{bmatrix}\n=\n\\begin{bmatrix} \n\\cos(\\alpha) & -\\sin(\\alpha) & 0\\\n\\sin(\\alpha) & \\cos(\\alpha) & 0\\\n0 & 0 & 1\n\\end{bmatrix}\n\\begin{bmatrix} \n\\cos(\\beta) & 0 & \\sin(\\beta) \\\n0 & 1 & 0 \\\n-\\sin(\\beta) & 0 & \\cos(\\beta)\n\\end{bmatrix}\n\\begin{bmatrix} \n1 & 0 & 0 \\\n0 & 0 & 1 \\\n0 & -1 & 0\n\\end{bmatrix}\n\\begin{bmatrix} \n0 & 0 & -1 \\\n0 & 1 & 0 \\\n1 & 0 & 0\n\\end{bmatrix}\n\\begin{bmatrix} \n-1 & 0 & 0\\\n0 & 1 & 0\\\n0 & 0 & 1\n\\end{bmatrix}\n\\begin{bmatrix} \n\\cos(\\delta)\\cos(\\rm HA) \\\n\\cos(\\delta)\\sin(\\rm HA) \\\n\\sin(\\delta)\n\\end{bmatrix}\n\\end{equation}\nSo the correction matrix to take telescope coordinates to ns,ew\n\\begin{bmatrix} \n\\cos(\\alpha)\\sin(\\beta) & -\\sin(\\beta) & \\cos(\\alpha)\\sin(\\beta) \\\n\\sin(\\alpha)\\cos(\\beta) & \\cos(\\alpha) & \\sin(\\alpha)\\sin(\\beta) \\\n-\\sin(\\beta) & 0 & \\cos(\\beta)\n\\end{bmatrix}\nand to Az Elv\n\\begin{bmatrix} \n\\sin(\\alpha) & -\\cos(\\alpha)\\sin(\\beta) & -\\cos(\\alpha)\\cos(\\beta) \\\n\\cos(\\alpha) & -\\sin(\\alpha)\\sin(\\beta) & -\\sin(\\alpha)\\cos(\\beta) \\\n-\\cos(\\beta) & 0 & \\sin(\\beta)\n\\end{bmatrix}", "# There should be a slope and tilt conversion to get accurate change\n#skew = 4.363323129985824e-05\n#slope = 0.0034602076124567475\n\n#skew = 0.00004\n#slope = 0.00346\n\nskew = 0.01297 # <- this is the skew I get if I optimize for the same results as duncan's system\nslope= 0.00343\n\ndef telescope_to_nsew_matrix(skew,slope):\n R = rotation_matrix(skew,\"z\")\n R = np.dot(R,rotation_matrix(slope,\"y\"))\n return R\n\ndef nsew_to_azel_matrix(skew,slope):\n pre_R = telescope_to_nsew_matrix(skew,slope)\n x_rot = rotation_matrix(-np.pi/2,\"x\")\n y_rot = rotation_matrix(np.pi/2,\"y\")\n R = np.dot(x_rot,y_rot)\n R = np.dot(pre_R,R)\n R_bar = reflection_matrix(\"x\")\n R = np.dot(R,R_bar)\n return R\n\ndef nsew_to_azel(ns, ew): \n az,el = transform(ns,ew,nsew_to_azel_matrix(skew,slope))\n return az,el\n\nprint nsew_to_azel(0,np.pi/2) # should be -pi/2 and 0\nprint nsew_to_azel(-np.pi/2,0)# should be -pi and 0\nprint nsew_to_azel(0.0,.5) # should be pi/2 and something near pi/2\nprint nsew_to_azel(-.5,.5) # less than pi/2 and less than pi/2\nprint nsew_to_azel(.5,-.5) \nprint nsew_to_azel(.5,.5) ", "The inverse of this is:", "def azel_to_nsew(az, el): \n ns,ew = transform(az,el,nsew_to_azel_matrix(skew,slope).T)\n return ns,ew", "Extending this to HA Dec", "mol_lat = -0.6043881274183919 # in radians\n\ndef azel_to_hadec_matrix(lat):\n rot_y = rotation_matrix(np.pi/2-lat,\"y\")\n rot_z = rotation_matrix(np.pi,\"z\")\n R = np.dot(rot_y,rot_z)\n return R\n\ndef azel_to_hadec(az,el,lat):\n ha,dec = transform(az,el,azel_to_hadec_matrix(lat))\n return ha,dec\n\ndef nsew_to_hadec(ns,ew,lat,skew=skew,slope=slope):\n R = np.dot(nsew_to_azel_matrix(skew,slope),azel_to_hadec_matrix(lat))\n ha,dec = transform(ns,ew,R)\n return ha,dec\n\nns,ew = 0.8,0.8\naz,el = nsew_to_azel(ns,ew)\nprint \"AzEl:\",az,el\nha,dec = azel_to_hadec(az,el,mol_lat)\nprint 
\"HADec:\",ha,dec\nha,dec = nsew_to_hadec(ns,ew,mol_lat)\nprint \"HADec2:\",ha,dec\n\n# This is Duncan's version\nns_,ew_ = hadec_to_nsew(ha,dec)\nprint \"NSEW Duncan:\",ns_,ew_\nprint \"NS offset:\",ns_-ns,\" EW offset:\",ew_-ew\n\ndef test(ns,ew,skew,slope):\n ha,dec = nsew_to_hadec(ns,ew,mol_lat,skew,slope)\n ns_,ew_ = hadec_to_nsew(ha,dec)\n no,eo = ns-ns_,ew-ew_\n no = 0 if np.isnan(no) else no\n eo = 0 if np.isnan(eo) else eo\n return no,eo\n\nns = np.linspace(-np.pi/2+0.1,np.pi/2-0.1,10)\new = np.linspace(-np.pi/2+0.1,np.pi/2-0.1,10)\n\ndef test2(a):\n skew,slope = a\n out_ns = np.empty([10,10])\n out_ew = np.empty([10,10])\n for ii,n in enumerate(ns):\n for jj,k in enumerate(ew):\n a,b = test(n,k,skew,slope)\n out_ns[ii,jj] = a\n out_ew[ii,jj] = b\n a = abs(out_ns).sum()#abs(np.median(out_ns))\n b = abs(out_ew).sum()#abs(np.median(out_ew))\n print a,b\n print max(a,b)\n return max(a,b) \n\n#minimize(test2,[skew,slope])\n\n# Plotting out the conversion error as a function of HA and Dec. \n# Colour scale is log of the absolute difference between original system and new system\n\n\nns = np.linspace(-np.pi/2,np.pi/2,10)\new = np.linspace(-np.pi/2,np.pi/2,10)\nout_ns = np.empty([10,10])\nout_ew = np.empty([10,10])\nfor ii,n in enumerate(ns):\n for jj,k in enumerate(ew):\n print jj\n a,b = test(n,k,skew,slope)\n out_ns[ii,jj] = a\n out_ew[ii,jj] = b\nplt.figure()\nplt.subplot(121)\nplt.imshow(abs(out_ns),aspect=\"auto\")\nplt.colorbar()\n\nplt.subplot(122)\nplt.imshow(abs(out_ew),aspect=\"auto\")\nplt.colorbar()\nplt.show()\n\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom itertools import product, combinations\nfrom matplotlib.patches import FancyArrowPatch\nfrom mpl_toolkits.mplot3d import proj3d\nfig = plt.figure()\nax = fig.gca(projection='3d')\nax.set_aspect(\"equal\")\n\n#draw sphere\nu, v = np.mgrid[0:2*np.pi:20j, 0:np.pi:10j]\nx=np.cos(u)*np.sin(v)\ny=np.sin(u)*np.sin(v)\nz=np.cos(v)\nax.plot_wireframe(x, y, z, color=\"r\",lw=1)\n\nR = rotation_matrix(np.pi/2,\"x\")\npos_v = np.array([[x],[y],[z]])\np = pos_v.T\nfor i in p:\n for j in i:\n j[0] = np.dot(R,j[0])\n\n \nclass Arrow3D(FancyArrowPatch):\n def __init__(self, xs, ys, zs, *args, **kwargs):\n FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)\n self._verts3d = xs, ys, zs\n\n def draw(self, renderer):\n xs3d, ys3d, zs3d = self._verts3d\n xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)\n self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))\n FancyArrowPatch.draw(self, renderer)\n\na = Arrow3D([0,1],[0,0.1],[0,.10], mutation_scale=20, lw=1, arrowstyle=\"-|>\", color=\"k\")\nax.add_artist(a) \n \nax.set_xlabel(\"X\")\nax.set_ylabel(\"Y\")\nax.set_zlabel(\"Z\")\n \nx=p.T[0,0]\ny=p.T[1,0]\nz=p.T[2,0]\nax.plot_wireframe(x, y, z, color=\"b\",lw=1)\nplt.show()" ]
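Since the composite matrix is a product of rotations and a single reflection, it is orthogonal, so the forward and inverse transforms should invert each other exactly. A quick self-check of that property is sketched below; it assumes the functions `nsew_to_azel` and `azel_to_nsew` defined above (and the module-level `skew` and `slope`) are already in scope.

```python
import numpy as np

def roundtrip_residual(ns, ew):
    """NS/EW -> Az/El -> NS/EW; residuals should sit at machine precision."""
    az, el = nsew_to_azel(ns, ew)
    ns2, ew2 = azel_to_nsew(az, el)
    return abs(ns - ns2), abs(ew - ew2)

# stay away from |EW| = pi/2, where the azimuth becomes degenerate
for ns in np.linspace(-1.2, 1.2, 5):
    for ew in np.linspace(-1.2, 1.2, 5):
        d_ns, d_ew = roundtrip_residual(ns, ew)
        assert d_ns < 1e-9 and d_ew < 1e-9, (ns, ew, d_ns, d_ew)
print('round trip consistent to better than 1e-9 rad')
```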
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Yatekii/glal3
versuch3/W8.ipynb
gpl-3.0
[ "Arbeitsgrundlagen\nIn diesem Versuch soll die Schallgeschwindigkeit in Gasen mittels Laufzeitmessung sowie Bestimmung der Resonanzfrequenz des Schalls bestimmt werden.\nEs werden ausschliesslich Longitudinalwellen am einen Ende des Mediums in welchem gemessen wird ausgesandt und am anderen Ende aufgefangen.\nPhasengeschwindigkeit von Schallwellen in Gasen\nDie Schallgeschwindigkeit $c$ in idealen Gasen ist eine Funktion, die nur von der Temperatur abhängig ist. Unter Annahme der isentropen Kompression und Dekompression mit dem Koeffizienten $\\kappa$, der molaren Gaskonstante $R_m$ und der Molmasse $M_m$ erhält man die Formel \\ref{eq:Schallgeschwindigkeit}\n\\begin{equation}\nc = \\sqrt{\\kappa\\frac{R_m}{M_m}T}\n\\label{eq:Schallgeschwindigkeit}\n\\end{equation}\n$R_m$ ist dabei immer 8.314 $\\frac{kJ}{kmol\\cdot K}$. Mit $R_m$ lässt sich für viele Gase die Schallgeschwindigkeit berechnen, welche mit den Messwerten gut übereinstimmt.\nDer Isentropenkoeffizient $\\kappa$ kann der Theorie des Dozenten oder Tabellen entnommen werden.\n$\\kappa$ kann jedoch auch von der Temperatur abhängen, weswegen experimentelle Daten vorzuziehen sind.\nSchallgeschwindigkeit in einem Gasgemisch\nUm die spezifische Schallgeschwindigkeit in einem Gasgemisch zu bestimmen, müssen $\\kappa$ und $R_i$ umgerechnet werden.\nHier geht man von den Definitionen \\ref{eq:kappa} und \\ref{eq:Schallgeschwindigkeit} aus.\n\\begin{equation}\n\\kappa = \\frac{c_p}{c_v}\n\\label{eq:kappa}\n\\end{equation}\nDies führt zu \\ref{eq:v_spez}.\n\\begin{equation}\nc = \\sqrt{\\kappa R_i T} = \\sqrt{\n\\frac{\\sum m_j\\cdot c_{p, j}}{\\sum m_j\\cdot c_{v, j}}\\frac{\\sum m_j\\cdot R_{i, j}}{\\sum m_j}T}\n\\label{eq:v_spez}\n\\end{equation}\nIm Experiment wird das Gasgemisch jedoch nicht über die Einfüllmassen $m_j$ sondern den jeweiligen Partialdruck $p_j$ beziehungsweise über den relativen Partialdruck $p_{rel, j} = \\frac{p_j}{p_{tot}}$ bestimmt. Somit gilt für ideale Gase die Relation in \\ref{eq:m_p}.\n\\begin{equation}\nm_j \\propto p_j \\cdot M_{m, j} \\propto p_{rel, j} \\cdot M_{m, j}\n\\label{eq:m_p}\n\\end{equation}\nUnd natürlich gilt für jedes Gasgemisch die Relation $p_{rel, 2} = 1 - p_{rel, 1}$.\nStehende Wellen in einem Rohr\nDurch das Begrenzen einer Welle in einem Medium, in diesem Versuch sind das Schallwellen in Gasen in einem Zylinder, werden Reflexionen hervorgerufen. Wenn die Randbedingungen, sprich die Länge des Rohres bzw. die passenden Frequenzen des Schalls dazu, gut gewählt werden, dann entsteht konstruktive Interferenz.\nNatürlich kann dabei auch destruktive Interferenz ausgelöst werden, so dass die Wellen ganz verschwinden. Wenn konstruktive Interferenz herrscht, so spricht man von einer stehenden Welle. 
Die Amplitude dieser Stehwelle ist abhängig von der Dämpfung, beziehungsweise der Güte Q des Resonators, welche das Verhältnis zwischen Anregeamplitude und Resonanzamplitude darstellt.\n\nFür ein offenes Rohr gelten die in \\ref{eq:rohr_offen_1} und \\ref{eq:rohr_offen_1} ersichtlichen Formeln.\n\\begin{equation}\nL = n \\cdot \\frac{\\lambda}{2}\n\\label{eq:rohr_offen_1}\n\\end{equation}\n\\begin{equation}\nf_n = n \\cdot \\frac{c}{2L}\n\\label{eq:rohr_offen_2}\n\\end{equation}\nFür ein geschlossenes Rohr gelten die in \\ref{eq:rohr_geschlossen_1} und \\ref{eq:rohr_geschlossen_2} ersichtlichen Formeln.\n\\begin{equation}\nL = \\frac{\\lambda}{4} + n \\cdot \\frac{\\lambda}{2}\n\\label{eq:rohr_geschlossen_1}\n\\end{equation}\n\\begin{equation}\nf_n = \\frac{c}{4L} (1 + 2n)\n\\label{eq:rohr_geschlossen_2}\n\\end{equation}\nWobei $L$ die Länge des Rohres, $f_n$ die Eigenfrequenz. $c$ die Schallgeschwindigkeit $\\lambda$ die Wellenlänge und $n$ ein ganzzahliger Faktor sind.", "# Preparations\nimport math\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatches\nimport numpy as np\nfrom scipy import stats\nfrom scipy.optimize import curve_fit\nimport seaborn as sns\nfrom IPython.display import Latex\nimport warnings\nfrom PrettyTable import PrettyTable\nfrom functools import partial\nfrom PrettyFigure import PrettyFigure\nwarnings.filterwarnings(\"ignore\", module=\"matplotlib\")\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\nplt.rcParams['savefig.dpi'] = 75\n\n# plt.rcParams['figure.autolayout'] = False\n# plt.rcParams['figure.figsize'] = 10, 6\nplt.rcParams['axes.labelsize'] = 18\nplt.rcParams['axes.titlesize'] = 20\nplt.rcParams['font.size'] = 16\nplt.rcParams['lines.linewidth'] = 2.0\nplt.rcParams['lines.markersize'] = 8\nplt.rcParams['legend.fontsize'] = 14\n\nplt.rcParams['text.usetex'] = True\nplt.rcParams['text.latex.unicode'] = True\nplt.rcParams['font.family'] = \"STIX\"\nplt.rcParams['text.latex.preamble'] = \"\\\\usepackage{subdepth}, \\\\usepackage{type1cm}\"\n\nresults = {}\n\nsns.set(color_codes=True)\n\ndef average(data):\n return 1 / len(data) * sum(data)\n\ndef error(data, average_of_data):\n s = sum([(x - average_of_data)**2 for x in data])\n return math.sqrt(s / (len(data) * (len(data) - 1)))\n\ndef std_deviation(error_of_average, length_of_dataset):\n return error_of_average * math.sqrt(length_of_dataset)\n\ndef average_with_weights(data, weights):\n d = data\n w = weights\n return (d * w**-2).sum() / (w**-2).sum()\n\ndef error_with_weights(weights):\n w = weights\n return 1 / math.sqrt((w**-2).sum())\n\ndef wavg(group, avg_name, weight_name):\n d = group[avg_name]\n w = group[weight_name]\n return (d * w**-2).sum() / (w**-2).sum()\n\ndef werr(group, weight_name):\n return 1 / math.sqrt((group[weight_name]**-2).sum())", "Durchführung\nDie Versuchumgebung besteht aus einem doppelwandigen, luftdicht verschlossenen Messingrohr R. Auf einer Seite des Rohres ist im Inneren ein Lautsprecher L und gegenüberliegend ein Kondensatormikrofon KM montiert. Um die Schallgeschwindigkeiten bei verschiedenen Distanzen bestimmen zu können, kann die Wand, an welcher das KM angebracht ist, per Handkurbel verstellt werden. Die Temperatur im Inneren kann durch ein Chromel-Alumel-Thermoelement bestimmt werden. 
Zudem gibt es natürlich ein Einlassventil sowie ein Ablassventil.\nDie genaue Versuchsanordng kann der nachstehenden Illustration entnommen werden.\n\nLaufzeitmessung\nDie Versuchsanornung zur bestimmung der Schallgeschwindigkeit mithilfe der Laufzeitmethode kann der folgenden Abbildung entnommen werden.\n\nUm kurze, steile Schallimpulse zu erzeugen, wird ein Kondensator C per Drucktaster über dem Lautsprecher L entladen. Zeitgleich wird dem Zeitmesser signalisiert dass er die Zeitmessung starten soll. Das Kondensatormikrofon wird dann nach einiger Zeit und genügend Verstärkung im Audioverstärker den Impuls aufnehmen und dem Zeitmesser das Signal die Zeit zu stoppen geben.\nZur Kontrolle der Funktionalität steht ein Oszilloskop bereit auf welchem die Impulse beobachtet werden können. Diese sollten in etwa wie in folgender Abbildung aussehen.\n\nResonanzmethode\nNachfolgend ist die Versuchsanornung zur Resonanzbestimmung zu sehen.\n\nZur bestimmung der Resonanz wird der Impulsgeber aus der Laufzeitmessung durch einen Sinusgenerator ersetzt. Nun sendet der Lautsprecher kontinuierlich Wellen in das Rohr. Auf dem Oszilloskop wird das ausgesendete Signal mit dem empfangenen Signal im XY-Modus in Relation gestellt. Logischerweise müsste bei Resonanz die Verstärkung des Resonators linear sein und auf dem Oszilloskop eine Linie zu sehen sein. Ist noch eine Ellipse sichtbar, so herrscht noch hysterese und es ist noch keine vollkommen konstruktive Interferenz.\nNun kann mithilfe der Handkurbel die Distanz des Mikrofons zum Lautsprecher verstellt werden. Dadurch kann die Distanz wischen zwei Wellenbergen gemessen werden.\nGasgemische\nBeide Methodiken wurden mit reiner Luft und je Helium und SF6 angewandt.\nZudem wurden dann die Gase Helium und SD6 in 20% schritten vermischt und gemessen.\nDer Anteil konnte einfach über den Druck im Behälter eingestellt werden, da dieser wie in \\ref{eq:m_p} dargestellt direkt proportional zu den Molekülen des Gases ist.\nKonstanten\nDie nachfolgenden Konstanten sind alle in Horst Kuchlings Taschenbuch der Physik zu finden.", "# Constants\n\nname = ['Luft', 'Helium', 'SF6']\nmm = [28.95, 4.00, 146.06]\nri = [287, 2078, 56.92]\ncp = [1.01, 5.23, 0.665]\ncv = [0.72, 3.21, 0.657]\nk = [1.63, 1.40, 1.012]\nc0 = [971, 344, 129]\n\nconstants_tbl = PrettyTable(\n list(zip(name, mm, ri, cp, cv, k, c0)),\n label='tab:gase',\n caption='Kennwerte und Konstanten der verwendeten Gase.',\n extra_header=[\n 'Gas',\n r'$M_m[\\frac{g}{mol}]$',\n r'$R_i[\\frac{J}{kg K}]$',\n r'$c_p[\\frac{kJ}{kg K}]$',\n r'$c_v[\\frac{kJ}{kg K}]$',\n r'$K$',\n r'$c_0[\\frac{m}{s}]$'\n ], entries_per_column=3)\nconstants_tbl.show()", "Verwendete Messgeräte", "# Utilities\n\nname = ['Oszilloskop', 'Zeitmesser', 'Funktionsgenerator', 'Verstärker', 'Vakuumpumpe', 'Netzgerät', 'Temperaturmessgerät']\nmanufacturer = ['LeCroy', 'Keithley', 'HP', 'WicTronic', 'Pfeiffer', ' ', ' ']\ndevice = ['9631 Dual 300MHz Oscilloscope 2.5 GS/s', '775 Programmable Counter/Timer', '33120A 15MHz Waveform Generator', 'Zweikanalverstärker', 'Vacuum', ' ', ' ']\n\nutilities_tbl = PrettyTable(\n list(zip(name, manufacturer, device)),\n label='tab:utilities',\n caption='Verwendete Gerätschaften',\n extra_header=[\n 'Funktion',\n 'Hersteller',\n 'Gerätename',\n ], entries_per_column=7)\nutilities_tbl.show()", "Auswertung\nBei allen Versuchen wurde im Behältnis ein Unterdruck von -0.8 Bar erzeugt. Anschliesend wurde das Rohr bis 0.3 Bar mit Gas gefüllt. 
Dies wurde jeweils zweimal gemacht um Rückstände des vorherigen Gases zu entfernen.\nLaufzeitmethode\nBei der Laufzeitmethode wurde die Laufzeit vom Lautsprecher bis zum Mikrofon bei verschidenen Distanzen gemessen. Mit einer Linearen Regression konnte dann die Schallgeschwindigkeit bestimmt werden. systematische Fehler wie die Wahl der Triggerschwelle, die Position des Mikrofons oder der Position des Lautsprechers sind im y-Achsenabschnitt $t_0$ enthalten und müssen somit nicht mehr berücksichtigt werden.", "# Laufzeitenmethode Luft, Helium, SF6\nimport collections\n\n# Read Data\ndfb = pd.read_csv('data/laufzeitmethode.csv')\nax = None\ni = 0\nfor gas1 in ['luft', 'helium', 'sf6']:\n df = dfb.loc[dfb['gas1'] == gas1].loc[dfb['gas2'] == gas1].loc[dfb['p'] == 1]\n slope, intercept, sem, r, p = stats.linregress(df['t'], df['s'])\n n = np.linspace(0.0, df['t'][9 + i * 10] * 1.2, 100)\n\n results[gas1] = {\n gas1: {\n\n }\n }\n results[gas1][gas1]['1_l_df'] = df\n results[gas1][gas1]['1_l_slope'] = slope\n results[gas1][gas1]['1_l_intercept'] = intercept\n results[gas1][gas1]['1_l_sem'] = sem\n \n ax = df.plot(kind='scatter', x='t', y='s', label='gemessene Laufzeit')\n plt.plot(n, [i * slope + intercept for i in n], label='linearer Fit der Laufzeit', axes=ax)\n plt.xlabel('Laufzeit [s]')\n plt.ylabel('Strecke [m]')\n plt.xlim([0, df['t'][9 + i * 10] * 1.1])\n plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)\n i += 1\n plt.close()\n figure = PrettyFigure(\n ax.figure,\n label='fig:laufzeiten_{}'.format(gas1),\n caption='Laufzeiten in {}. Dazu einen linearen Fit um die Mittlere Geschwindigkeit zu bestimmen.'.format(gas1.title()))\n figure.show()", "Resonanzmethode\nUm eine anständige Messung zu kriegen, wurde zuerst eine Anfangsfrequenz bestimmt, bei welcher mindestens 3 konstruktive Interferenzen über die Messdistanz von einem Meter gemessen wurden. Da wurde dann eine Messung durchgeführt, sowie bei 5 weiteren, höheren Frequenzen.\nMit einem linearen Fit konnte dann vorzüglich die Schallgeschwindigkeit berechnet werden. Hierbei wurde die Formel in \\ref{eq:rohr_offen_2} verwendet.", "# Resonanzmethode Luft, Helium, SF6\nimport collections\n\n# Read Data\ndfb2 = pd.read_csv('data/resonanzfrequenz.csv')\nax = None\ni = 0\nfor gas1 in ['luft', 'helium', 'sf6']:\n df = dfb2.loc[dfb2['gas1'] == gas1].loc[dfb2['gas2'] == gas1].loc[dfb2['p'] == 1]\n df['lbd'] = 1 / (df['s'] * 2)\n df['v'] = 2 * df['f'] * df['s']\n slope, intercept, sem, r, p = stats.linregress(df['lbd'], df['f'])\n n = np.linspace(0.0, df['lbd'][(5 + i * 6) if i < 2 else 15] * 1.2, 100)\n\n results[gas1][gas1]['1_r_df'] = df\n results[gas1][gas1]['1_r_slope'] = slope\n results[gas1][gas1]['1_r_intercept'] = intercept\n results[gas1][gas1]['1_r_sem'] = sem\n \n ax = df.plot(kind='scatter', x='lbd', y='f', label='gemessenes $\\lambda^{-1}$')\n plt.plot(n, [i * slope + intercept for i in n], label='linearer Fit von $\\lambda^{-1}$', axes=ax)\n plt.xlabel(r'$1 / \\lambda [m^{-1}]$')\n plt.ylabel(r'$Frequenz [s^{-1}]$')\n plt.xlim([0, df['lbd'][(5 + i * 6) if i < 2 else 15] * 1.1])\n plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)\n i += 1\n plt.close()\n figure = PrettyFigure(\n ax.figure,\n label='fig:laufzeiten_{}'.format(gas1),\n caption='Abstände der Maxima bei resonanten Frequenzen in {}. 
Dazu einen linearen Fit um die Mittlere Geschwindigkeit zu bestimmen.'.format(gas1.title()))\n figure.show()", "Gasgemische\nBei diesem Versuch wurden Helium und SF6 mit $\\frac{1}{5}$ Anteilen kombiniert.\nDafür wurde jeweils erst ein Gas bis einem Druck proportional zum jeweiligen Anteil eingelassen und darauf hin das zweite Gas.\nWie in \\ref{eq:m_p} erklärt ist dies möglich.", "# Laufzeitenmethode Helium-SF6-Gemisch\nimport collections\n\n# Read Data\ndfb = pd.read_csv('data/laufzeitmethode.csv')\nax = None\ncolors = ['blue', 'green', 'red', 'purple']\nresults['helium']['sf6'] = {}\nv_exp = []\nfor i in range(1, 5):\n i /= 5\n df = dfb.loc[dfb['gas1'] == 'helium'].loc[dfb['gas2'] == 'sf6'].loc[dfb['p'] == i]\n slope, intercept, sem, r, p = stats.linregress(df['t'], df['s'])\n v_exp.append(slope)\n n = np.linspace(0.0, df['t'][29 + i * 15] * 2, 100)\n\n results['helium']['sf6']['0{}_l_df'.format(int(i * 10))] = df\n results['helium']['sf6']['0{}_l_slope'.format(int(i * 10))] = slope\n results['helium']['sf6']['0{}_l_intercept'.format(int(i * 10))] = intercept\n results['helium']['sf6']['0{}_l_sem'.format(int(i * 10))] = sem\n \n if i == 0.2:\n ax = df.plot(kind='scatter', x='t', y='s', label='gemessene Laufzeit', color=colors[int(i * 5) - 1])\n else:\n plt.scatter(df['t'], df['s'], axes=ax, label=None, color=colors[int(i * 5) - 1])\n plt.plot(n, [i * slope + intercept for i in n], label='Laufzeit ({:.1f}\\% Helium, {:.1f}\\% SF6)'.format(i, 1 - i), axes=ax, color=colors[int(i * 5) - 1])\n plt.xlabel('Laufzeit [s]')\n plt.ylabel('Strecke [m]')\n plt.legend(bbox_to_anchor=(0.02, 0.98), loc=2, borderaxespad=0.2)\n i += 0.2\nplt.xlim([0, 0.006])\nplt.close()\nfigure = PrettyFigure(\n ax.figure,\n label='fig:laufzeiten_HESF6',\n caption='Laufzeiten in verschiedenen Helium/SF6-Gemischen. 
Dazu lineare Regression um die Mittlere Geschwindigkeit zu bestimmen.')\nfigure.show()\n\n# Literature & Calcs\n\nT = 21.3 + 273.15\nRi = 287\nK = 1.402\n\nresults['luft']['luft']['berechnet'] = math.sqrt(K * Ri * T)\nresults['luft']['luft']['literatur'] = 343\n\nRi = 2078\nK = 1.63\n\nresults['helium']['helium']['berechnet'] = math.sqrt(K * Ri * T)\nresults['helium']['helium']['literatur'] = 971\n\nRi = 56.92\nK = 1.012\n\nresults['sf6']['sf6']['berechnet'] = math.sqrt(K * Ri * T)\nresults['sf6']['sf6']['literatur'] = 129\n\ncp1 = cp[1]\ncp2 = cp[2]\ncv1 = cv[1]\ncv2 = cv[2]\nRL1 = ri[1]\nRL2 = ri[2]\nm1 = 0.2\nm2 = 0.8\ns1 = (m1 * cp1) + (m2 * cp2)\ns2 = (m1 + cv1) + (m2 * cv2)\ns3 = (m1 + RL1) + (m2 * RL2)\nresults['helium']['sf6']['02_l_berechnet'] = math.sqrt(s1 / s2 * s3 * T)\n\nm1 = 0.4\nm2 = 0.6\ns1 = (m1 * cp1) + (m2 * cp2)\ns2 = (m1 + cv1) + (m2 * cv2)\ns3 = (m1 + RL1) + (m2 * RL2)\nresults['helium']['sf6']['04_l_berechnet'] = math.sqrt(s1 / s2 * s3 * T)\n\nm1 = 0.6\nm2 = 0.4\ns1 = (m1 * cp1) + (m2 * cp2)\ns2 = (m1 + cv1) + (m2 * cv2)\ns3 = (m1 + RL1) + (m2 * RL2)\nresults['helium']['sf6']['06_l_berechnet'] = math.sqrt(s1 / s2 * s3 * T)\n\nm1 = 0.8\nm2 = 0.2\ns1 = (m1 * cp1) + (m2 * cp2)\ns2 = (m1 + cv1) + (m2 * cv2)\ns3 = (m1 + RL1) + (m2 * RL2)\nresults['helium']['sf6']['08_l_berechnet'] = math.sqrt(s1 / s2 * s3 * T)\n\np = [p for p in np.linspace(0, 1, 1000)]\nv = [math.sqrt(((n * cp1) + ((1 - n) * cp2)) / ((n + cv1) + ((1 - n) * cv2)) * ((n + RL1) + ((1 - n) * RL2)) * T) for n in p]\nfig = plt.figure()\nplt.plot(p, v, label='errechnete Laufzeit')\nplt.scatter([0.2, 0.4, 0.6, 0.8], v_exp, label='experimentelle Laufzeit')\nplt.xlabel('Heliumanteil')\nplt.ylabel('Schallgeschwindigkeit [v]')\nplt.xlim([0, 1])\nplt.close()\nfigure = PrettyFigure(\n fig,\n label='fig:laufzeiten_vgl',\n caption='Laufzeiten in Helium/SF6-Gemischen. Experimentelle Werte verglichen mit den berechneten.')\nfigure.show()", "Fehlerrechnung\nWie in der nachfolgenden Sektion ersichtlich ist, hält sich der statistische Fehler sehr in Grenzen. Der systematische Fehler sollte durch die Lineare Regression ebenfalls durch den Offset kompensiert werden.\nResonanzmethode\nHier wurde nur eine Distanz zwischen den Maximas gemessen. Besser wäre drei oder gar vier zu messen und dann zwischen den Werten ebenfalls noch eine lineare Regression anzustellen. Würde man den versuch noch einmal durchführen, so müsste man das sicher tun. Das von Auge ablesen am Oszilloskop erache ich als eher wenig kritisch, da das Bild relativ gut gezoomt werden kann und man schon kleinste Änderungen bemerkt.\nGasgemische\nBei den Gasgemischen gibt es natürlich die sehr hohe Fehlerquelle des Abmischens. Das Behältnis muss jedes Mal komplett geleert werden und dann wiede befüllt. Das Ablesen am Manometer ist nicht unbeding das genaueste Pozedere. Jedoch schätze ich es so ein dass man die Gemische auf ein Prozent genau abmischen kann. Jedoch wird dieses eine Prozent unter der Wurzel verrechnet was dann den Fehler noch vergrössert.\nHier müsste man eine Fehlerkorrektur machen. 
Aus diese wurde hier aber verzichtet, da wie im nächsten Abschnitt erläutert wahrscheinlich sowieso ein Messfehler vorliegt und man deshlab die Daten noch einmal erheben müsste.\nResultate und Diskussion\nReine Gase\nWie Tabelle \\ref{tab:resultat_rein} entnommen werden kann fallen die Resultate äusserst zufriedenstellend aus.\nBei der Laufzeitmethode sowie der Resonanzmethode in der Luft gibt es praktisch keine Abweichung (< 1%) von Literaturwerten.\nBei SF6 sieht es ähnlich aus. Beim Helium gibt es auf den Ersten Blick krassere Unterschiede. Wenn man aber genauer hinschaut, merkt man, dass die Werte insgesamt um Faktor 3 grösser sind als bei der Luftmessung und somit auch der Relative Fehler. Er er wird zwar ein wenig grösser, bleibt aber immernoch < 5%.\nSpannend finde ich die Tatsache, dass die Resonanzmethode näher am Literaturwert liegt, da diese der Annahme nach ungenauer sein müsste. Bei Luft und SF6 war dies auch tatsächlich der Fall.", "# Show results\n\nvalues = [\n 'Luft',\n 'Helium',\n 'SF6'\n]\nmeans_l = [\n '{0:.2f}'.format(results['luft']['luft']['1_l_slope']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['helium']['1_l_slope']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['sf6']['sf6']['1_l_slope']) + r'$\\frac{m}{s}$'\n]\n\nmeans_r = [\n '{0:.2f}'.format(results['luft']['luft']['1_r_slope']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['helium']['1_r_slope']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['sf6']['sf6']['1_r_slope']) + r'$\\frac{m}{s}$'\n]\n\nsem_l = [\n '{0:.2f}'.format(results['luft']['luft']['1_l_sem']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['helium']['1_l_sem']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['sf6']['sf6']['1_l_sem']) + r'$\\frac{m}{s}$'\n]\n\nsem_r = [\n '{0:.2f}'.format(results['luft']['luft']['1_r_sem']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['helium']['1_r_sem']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['sf6']['sf6']['1_r_sem']) + r'$\\frac{m}{s}$'\n]\n\nberechnet = [\n '{0:.2f}'.format(results['luft']['luft']['berechnet']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['helium']['berechnet']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['sf6']['sf6']['berechnet']) + r'$\\frac{m}{s}$'\n]\n\nliteratur = [\n '{0:.2f}'.format(results['luft']['luft']['literatur']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['helium']['literatur']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['sf6']['sf6']['literatur']) + r'$\\frac{m}{s}$'\n]\n\nv2_results_tbl = PrettyTable(\n list(zip(values, means_l, sem_l, means_r, sem_r, berechnet, literatur)),\n label='tab:resultat_rein',\n caption='Resultate aus den Versuchen mit reinen Gasen.',\n extra_header=[\n 'Wert',\n 'Laufzeitmethode $v_{L}$',\n 'stat. Fehler',\n 'Resonanzmethode $v_{R}$',\n 'stat. Fehler',\n 'berechnet',\n 'Literatur'\n ], entries_per_column=3)\nv2_results_tbl.show()", "Gasgemische\nIn der Tabelle \\ref{tab:resultat_gasgemisch} kann einfach erkannt werden, dass die experimentell bestimmten Werte absolut nicht übereinstimmen mit den berechneten Werten. Beide Resultatreihen würden einzeln aber plausibel aussehen, wobei die berechnete Reihe in Anbetracht der Konstanten von SF6 und Helium stimmen müsste. Helium hat viel Grössere Werte für $c_v$, $c_p$, $R_i$ und zwar um jeweils etwa eine Grössenordnung. 
Somit fallen sie in der verwendeten Formel \\ref{eq:v_spez} viel stärker isn Gewicht, weshalb die Schallgeschwindigkeiten näher bei Helium liegen müssten als bei SF6.\nLeider lag es nicht im Zeitlichen Rahmen und dem des Praktikums, die Messung noch einmal zu machen. Jedoch müsste diese noch einmal durchgeführt werden und verifiziert werden, dass diese tatsächlich stimmt. Erst dann kann man den Fehler in der Mathematik suchen. Da die Werte mit exakt derselben Formel für die Laufzeit berechnet wurden wie bei den reinen Gasen und da die Werte stimmten, ist anzunehmen dass tatsächlich ein Messfehler vorliegt. Die Form der er errechneten Kurve stimmt auch definitiv mit der von $y =\\sqrt{x}$ überein.", "# Show results\n\nvalues = [\n '20% / 80%',\n '40% / 60%',\n '60% / 40%',\n '80% / 20%'\n]\nmeans_x = [\n '{0:.2f}'.format(results['helium']['sf6']['02_l_slope']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['sf6']['04_l_slope']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['sf6']['06_l_slope']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['sf6']['08_l_slope']) + r'$\\frac{m}{s}$'\n]\n\nsem_x = [\n '{0:.2f}'.format(results['helium']['sf6']['02_l_sem']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['sf6']['04_l_sem']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['sf6']['06_l_sem']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['sf6']['08_l_sem']) + r'$\\frac{m}{s}$'\n]\n\nberechnet_x = [\n '{0:.2f}'.format(results['helium']['sf6']['02_l_berechnet']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['sf6']['04_l_berechnet']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['sf6']['06_l_berechnet']) + r'$\\frac{m}{s}$',\n '{0:.2f}'.format(results['helium']['sf6']['08_l_berechnet']) + r'$\\frac{m}{s}$'\n]\n\nv2_results_tbl = PrettyTable(\n list(zip(values, means_x, sem_x, berechnet_x)),\n label='tab:resultat_gasgemisch',\n caption='Resultate aus dem Versuch mit den Gasgemischen.',\n extra_header=[\n 'Helium / SF6',\n 'mit Laufzeitmethode $v_{L}$',\n 'statistischer Fehler',\n 'berechnet',\n ], entries_per_column=4)\nv2_results_tbl.show()", "Anhang\nLaufzeitmethode", "data = PrettyTable(\n list(zip(dfb['gas1'], dfb['gas2'], dfb['p'], dfb['s'], dfb['t'])),\n caption='Messwerte der Laufzeitmethode.',\n entries_per_column=len(dfb['gas1']),\n extra_header=['Gas 1', 'Gas 2', 'Anteil Gas 1', 'Strecke [m]', 'Laufzeit [s]']\n)\ndata.show()", "Resonanzmethode", "data = PrettyTable(\n list(zip(dfb2['gas1'], dfb2['gas2'], dfb2['p'], dfb2['f'], dfb2['s'])),\n caption='Messwerte der Resonanzmethode.',\n entries_per_column=len(dfb2['gas1']),\n extra_header=['Gas 1', 'Gas 2', 'Anteil Gas 1', 'Frequenz [Hz]', 'Strecke [m]']\n)\ndata.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.20/_downloads/063df3a44a4ac9d23978d7b307e69a4e/plot_read_evoked.ipynb
bsd-3-clause
[ "%matplotlib inline", "Reading and writing an evoked file\nThis script shows how to read and write evoked datasets.", "# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n#\n# License: BSD (3-clause)\n\nfrom mne import read_evokeds\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\n\nfname = data_path + '/MEG/sample/sample_audvis-ave.fif'\n\n# Reading\ncondition = 'Left Auditory'\nevoked = read_evokeds(fname, condition=condition, baseline=(None, 0),\n proj=True)", "Show result as a butterfly plot:\nBy using exclude=[] bad channels are not excluded and are shown in red", "evoked.plot(exclude=[], time_unit='s')\n\n# Show result as a 2D image (x: time, y: channels, color: amplitude)\nevoked.plot_image(exclude=[], time_unit='s')", "Use :func:mne.Evoked.save or :func:mne.write_evokeds to write the evoked\nresponses to a file." ]
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
ml4a/ml4a-guides
examples/dreaming/neural-net-painter.ipynb
gpl-2.0
[ "Neural net painter\nThis notebook demonstrates a fun experiment in training a neural network to do regression from the color (r,g,b) of a pixel in an image, given its (x,y) position in the image. It's mostly useless, but gives a nice visual intuition for regression. This notebook is inspired by the same example in convnet.js and the first part of this notebook is mostly reimplementing it in Keras instead. Later, we'll have some fun interpolating different image models.\nFirst make sure the following import statements work.", "%matplotlib inline\nimport time\nfrom PIL import Image\nimport numpy as np\nimport keras\nfrom matplotlib.pyplot import imshow, figure\nfrom keras.models import Sequential\nfrom keras.layers import Dense", "First we'll open an image, and create a helper function that converts that image into a training set of (x,y) positions (the data) and their corresponding (r,g,b) colors (the labels). We'll then load a picture with it.", "def get_data(img):\n width, height = img.size\n pixels = img.getdata()\n x_data, y_data = [],[]\n for y in range(height):\n for x in range(width):\n idx = x + y * width\n r, g, b = pixels[idx]\n x_data.append([x / float(width), y / float(height)])\n y_data.append([r, g, b])\n x_data = np.array(x_data)\n y_data = np.array(y_data)\n return x_data, y_data\n\nim1 = Image.open(\"../assets/dog.jpg\")\nx1, y1 = get_data(im1)\n\nprint(\"data\", x1)\nprint(\"labels\", y1)\nimshow(im1)", "We've postfixed all the variable names with a 1 because later we'll open a second image.\nWe're now going to define a neural network which takes a 2-neuron input (the normalized x, y position) and outputs a 3-neuron output corresponding to color. We'll use Keras's Sequential class to create a deep neural network with a bunch of 20-neuron fully-connected layers with ReLU activations. Our loss function will be a mean_squared_error between the predicted colors and the actual ones from the image.\nOnce we've defined that model, we'll create a neural network m1 with that architecture.", "def make_model():\n model = Sequential()\n model.add(Dense(2, activation='relu', input_shape=(2,)))\n model.add(Dense(20, activation='relu'))\n model.add(Dense(20, activation='relu'))\n model.add(Dense(20, activation='relu'))\n model.add(Dense(20, activation='relu'))\n model.add(Dense(20, activation='relu'))\n model.add(Dense(20, activation='relu'))\n model.add(Dense(20, activation='relu'))\n model.add(Dense(3))\n model.compile(loss='mean_squared_error', optimizer='adam')\n return model\n\nm1 = make_model()", "Let's now go ahead and train our neural network. In this case, we are going to use the training set as the validation set as well. Normally, you'd never do this because it would cause your neural network to overfit. But in this experiment, we're not worried about overfitting... in fact, overfitting is the whole point! \nWe train for 25 epochs and have a batch size of 5.", "m1.fit(x1, y1, batch_size=5, epochs=25, verbose=1, validation_data=(x1, y1))", "Now that the neural net is finished training, let's take the training data, our pixel positions, and simply send them back straight through the network, and plot the predicted colors on a new image. 
We'll make a new function for this called generate_image.", "def generate_image(model, x, width, height):\n img = Image.new(\"RGB\", [width, height])\n pixels = img.load()\n y_pred = model.predict(x)\n for y in range(height):\n for x in range(width):\n idx = x + y * width\n r, g, b = y_pred[idx]\n pixels[x, y] = (int(r), int(g), int(b))\n return img\n\nimg = generate_image(m1, x1, im1.width, im1.height)\nimshow(img)", "Sort of looks like the original image a bit! Of course the network can't learn the mapping perfectly without pretty much memorizing the data, but this way gives us a pretty good impression and doubles as an extremely inefficient form of compression!\nLet's load another image. We'll load the second image and also resize it so that it's the same size as the first image.", "im2 = Image.open(\"../assets/kitty.jpg\")\nim2 = im2.resize(im1.size)\nx2, y2 = get_data(im2)\n\nprint(\"data\", x2)\nprint(\"labels\", y2)\nimshow(im2)", "Now we'll repeat the experiment from before. We'll make a new neural network m2 which will learn to map im2's (x,y) positions to its (r,g,b) colors.", "m2 = make_model() # make a new model, keep m1 separate\nm2.fit(x2, y2, batch_size=5, epochs=25, verbose=1, validation_data=(x2, y2))", "Let's generate a new image from m2 and see how it looks.", "img = generate_image(m2, x2, im2.width, im2.height)\nimshow(img)", "Not too bad!\nNow let's do something funky. We're going to make a new neural network, m3, with the same architecture as m1 and m2 but instead of training it, we'll just set its weights to be interpolations between the weights of m1 and m2 and at each step, we'll generate a new image. In other words, we'll gradually change the model learned from the first image into the model learned from the second image, and see what kind of an image it outputs at each step.\nTo help us do this, we'll create a function get_interpolated_weights and we'll make one change to our image generation function: instead of just coloring the pixels to be the exact outputs, we'll auto-normalize every frame by rescaling the minimum and maximum output color to 0 to 255. This is because sometimes the intermediate models output in different ranges than what m1 and m2 were trained to. Yeah, this is a bit of a hack, but it works!", "def get_interpolated_weights(model1, model2, amt):\n w1 = np.array(model1.get_weights())\n w2 = np.array(model2.get_weights())\n w3 = np.add((1.0 - amt) * w1, amt * w2)\n return w3\n\ndef generate_image_rescaled(model, x, width, height):\n img = Image.new(\"RGB\", [width, height])\n pixels = img.load()\n y_pred = model.predict(x)\n y_pred = 255.0 * (y_pred - np.min(y_pred)) / (np.max(y_pred) - np.min(y_pred)) # rescale y_pred\n for y in range(height):\n for x in range(width):\n idx = x + y * width\n r, g, b = y_pred[idx]\n pixels[x, y] = (int(r), int(g), int(b))\n return img\n\n\n# make new model to hold interpolated weights\nm3 = make_model()\n\n# we'll do 8 frames and stitch the images together at the end\nn = 8\ninterpolated_images = []\nfor i in range(n):\n amt = float(i)/(n-1.0)\n w3 = get_interpolated_weights(m1, m2, amt)\n m3.set_weights(w3)\n img = generate_image_rescaled(m3, x1, im1.width, im1.height)\n interpolated_images.append(img)\n\nfull_image = np.concatenate(interpolated_images, axis=1)\nfigure(figsize=(16,4))\nimshow(full_image)", "Neat... Let's do one last thing, and make an animation with more frames. We'll generate 120 frames inside the assets folder, then use ffmpeg to stitch them into an mp4 file. 
If you don't have ffmpeg, you can install it from here.", "n = 120\nframes_dir = '../assets/neural-painter-frames'\nvideo_path = '../assets/neural-painter-interpolation.mp4'\n\nimport os\nif not os.path.isdir(frames_dir):\n os.makedirs(frames_dir)\n\nfor i in range(n):\n amt = float(i)/(n-1.0)\n w3 = get_interpolated_weights(m1, m2, amt)\n m3.set_weights(w3)\n img = generate_image_rescaled(m3, x1, im1.width, im1.height)\n img.save('../assets/neural-painter-frames/frame%04d.png'%i)\n\ncmd = 'ffmpeg -i %s/frame%%04d.png -c:v libx264 -pix_fmt yuv420p %s' % (frames_dir, video_path)\nos.system(cmd)", "You can find the video now in the assets directory. Looks neat! We can also display it in this notebook. From here, there's a lot of fun things we can do... Triangulating between multiple images, or streaming together several interpolations, or predicting color from not just position, but time in a movie. Lots of possibilities.", "from IPython.display import HTML\nimport io\nimport base64\n\nvideo = io.open(video_path, 'r+b').read()\nencoded = base64.b64encode(video)\n\nHTML(data='''<video alt=\"test\" controls>\n <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\" />\n </video>'''.format(encoded.decode('ascii')))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pacoqueen/ginn
extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Terminal Usage.ipynb
gpl-2.0
[ "A few things that work best/only at the IPython terminal or Qt console clients\nRunning code with %run", "%%writefile script.py\nx = 10\ny = 20\nz = x+y\nprint('z is: %s' % z)\n\n%run script\n\nx", "Event loop and GUI integration\nThe %gui magic enables the integration of GUI event loops with the interactive execution loop, allowing you to run GUI code without blocking IPython.\nConsider for example the execution of Qt-based code. Once we enable the Qt gui support:", "%gui qt", "We can define a simple Qt application class (simplified version from this Qt tutorial):", "import sys\nfrom PyQt4 import QtGui, QtCore\n\nclass SimpleWindow(QtGui.QWidget):\n def __init__(self, parent=None):\n QtGui.QWidget.__init__(self, parent)\n\n self.setGeometry(300, 300, 200, 80)\n self.setWindowTitle('Hello World')\n\n quit = QtGui.QPushButton('Close', self)\n quit.setGeometry(10, 10, 60, 35)\n\n self.connect(quit, QtCore.SIGNAL('clicked()'),\n self, QtCore.SLOT('close()'))", "And now we can instantiate it:", "app = QtCore.QCoreApplication.instance()\nif app is None:\n app = QtGui.QApplication([])\n\nsw = SimpleWindow()\nsw.show()\n\nfrom IPython.lib.guisupport import start_event_loop_qt4\nstart_event_loop_qt4(app)", "But IPython still remains responsive:", "10+2", "The %gui magic can be similarly used to control Wx, Tk, glut and pyglet applications, as can be seen in our examples.\nEmbedding IPython in a terminal application", "%%writefile simple-embed.py\n# This shows how to use the new top-level embed function. It is a simpler\n# API that manages the creation of the embedded shell.\n\nfrom IPython import embed\n\na = 10\nb = 20\n\nembed(header='First time', banner1='')\n\nc = 30\nd = 40\n\nembed(header='The second time')", "The example in kernel-embedding shows how to embed a full kernel into an application and how to connect to this kernel from an external process.\nLogging terminal sessions and transitioning to a notebook\nThe %logstart magic lets you log a terminal session with various degrees of control, and the %notebook one will convert an interactive console session into a notebook with all input cells already created for you (but no output)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ernestyalumni/CUDACFD_out
LatticeBoltzmann/LatticeBoltzmannMethod.ipynb
mit
[ "Lattice Boltzmann Method", "import sympy\n\nfrom sympy import exp, integrate, pi, sqrt, Symbol, symbols, oo\n\nfrom sympy import Abs, Q, periodic_argument, polar_lift, refine\n\nfrom sympy import cos\n\nfrom sympy import Rational as Rat", "$d2q9$\ncf.Jonas Tölke. Implementation of a Lattice Boltzmann kernel using the Compute\nUnified Device Architecture developed by nVIDIA. Comput. Visual Sci. DOI 10.1007/s00791-008-0120-2\nAffine Spaces\nLattices, that are \"sufficiently\" Galilean invariant, through non-perturbative algebraic theory\ncf. http://staff.polito.it/pietro.asinari/publications/preprint_Asinari_PA_2010a.pdf, I. Karlin and P. Asinari, Factorization symmetry in the lattice Boltzmann method. Physica A 389, 1530 (2010). The prepaper that this seemd to be based upon and had some more calculation details is \nMaxwell Lattices in 1-dim.\nMaxwell's (M) moment relations", "#u = Symbol(\"u\",assume=\"real\")\nu = Symbol(\"u\",real=True)\n#T_0 =Symbol(\"T_0\",assume=\"positive\")\nT_0 =Symbol(\"T_0\",real=True,positive=True)\n\n#v = Symbol(\"v\",assume=\"real\")\nv = Symbol(\"v\",real=True)\n\n#phi_v = sqrt( pi/(Rat(2)*T_0))*exp( - (v-u)**2/(Rat(2)*T_0))\nphi_v = sqrt( pi/(2*T_0))*exp( - (v-u)**2/(2*T_0))\n\nintegrate(phi_v,v)\n\nintegrate(phi_v,(v,-oo,oo))\n\n(integrate(phi_v,v).subs(u,oo)- integrate(phi_v,v).subs(u,-oo)).expand()\n\nintegrate(phi_v,(v,-oo,oo),conds='none')\n\nintegrate(phi_v,(v,-oo,oo),conds='separate')", "cf. http://stackoverflow.com/questions/16599325/simplify-conditional-integrals-in-sympy", "refine(integrate(phi_v,(v,-oo,oo)), Q.is_true(Abs(periodic_argument(1/polar_lift(sqrt(T_0))**2, oo)) <= pi/2))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
jakobrunge/tigramite
tutorials/tigramite_tutorial_assumptions.ipynb
gpl-3.0
[ "Causal discovery with TIGRAMITE\nTIGRAMITE is a time series analysis python module. It allows to reconstruct graphical models (conditional independence graphs) from discrete or continuously-valued time series based on the PCMCI framework and create high-quality plots of the results.\nPCMCI is described here:\nJ. Runge, P. Nowack, M. Kretschmer, S. Flaxman, D. Sejdinovic, \nDetecting and quantifying causal associations in large nonlinear time series datasets. Sci. Adv. 5, eaau4996 (2019) \nhttps://advances.sciencemag.org/content/5/11/eaau4996\nFor further versions of PCMCI (e.g., PCMCI+, LPCMCI, etc.), see the corresponding tutorials.\nThis tutorial explains the causal assumptions and gives walk-through examples. See the following paper for theoretical background:\nRunge, Jakob. 2018. “Causal Network Reconstruction from Time Series: From Theoretical Assumptions to Practical Estimation.” Chaos: An Interdisciplinary Journal of Nonlinear Science 28 (7): 075310.\nLast, the following Nature Communications Perspective paper provides an overview of causal inference methods in general, identifies promising applications, and discusses methodological challenges (exemplified in Earth system sciences): \nhttps://www.nature.com/articles/s41467-019-10105-3", "# Imports\nimport numpy as np\nimport matplotlib\nfrom matplotlib import pyplot as plt\n%matplotlib inline \n## use `%matplotlib notebook` for interactive figures\n# plt.style.use('ggplot')\nimport sklearn\n\nimport tigramite\nfrom tigramite import data_processing as pp\nfrom tigramite.toymodels import structural_causal_processes as toys\nfrom tigramite import plotting as tp\nfrom tigramite.pcmci import PCMCI\nfrom tigramite.independence_tests import ParCorr, GPDC, CMIknn, CMIsymb\nfrom tigramite.models import LinearMediation, Prediction\n", "Causal assumptions\nHaving introduced the basic functionality, we now turn to a discussion of the assumptions underlying a causal interpretation:\n\n\nFaithfulness / Stableness: Independencies in data arise not from coincidence, but rather from causal structure or, expressed differently, If two variables are independent given some other subset of variables, then they are not connected by a causal link in the graph.\n\n\nCausal Sufficiency: Measured variables include all of the common causes.\n\n\nCausal Markov Condition: All the relevant probabilistic information that can be obtained from the system is contained in its direct causes or, expressed differently, If two variables are not connected in the causal graph given some set of conditions (see Runge Chaos 2018 for further definitions), then they are conditionally independent.\n\n\nNo contemporaneous effects: There are no causal effects at lag zero.\n\n\nStationarity\n\n\nParametric assumptions of independence tests (these were already discussed in basic tutorial)\n\n\nFaithfulness\nFaithfulness, as stated above, is an expression of the assumption that the independencies we measure come from the causal structure, i.e., the time series graph, and cannot occur due to some fine tuning of the parameters. Another unfaithful case are processes containing purely deterministic dependencies, i.e., $Y=f(X)$, without any noise. 
We illustrate these cases in the following.\nFine tuning\nSuppose in our model we have two ways in which $X^0$ causes $X^2$, a direct one, and an indirect effect $X^0\\to X^1 \\to X^2$ as realized in the following model:\n\\begin{align}\n X^0_t &= \\eta^0_t\\\n X^1_t &= 0.6 X^0_{t-1} + \\eta^1_t\\\n X^2_t &= 0.6 X^1_{t-1} - 0.36 X^0_{t-2} + \\eta^2_t\\\n\\end{align}", "np.random.seed(1)\ndata = np.random.randn(500, 3)\nfor t in range(1, 500):\n# data[t, 0] += 0.6*data[t-1, 1]\n data[t, 1] += 0.6*data[t-1, 0]\n data[t, 2] += 0.6*data[t-1, 1] - 0.36*data[t-2, 0]\n \nvar_names = [r'$X^0$', r'$X^1$', r'$X^2$']\ndataframe = pp.DataFrame(data, var_names=var_names)\n# tp.plot_timeseries(dataframe)", "Since here $X^2_t = 0.6 X^1_{t-1} - 0.36 X^0_{t-2} + \\eta^2_t = 0.6 (0.6 X^0_{t-2} + \\eta^1_{t-1}) - 0.36 X^0_{t-2} + \\eta^2_t = 0.36 X^0_{t-2} - 0.36 X^0_{t-2} + ...$, there is no unconditional dependency $X^0_{t-2} \\to X^2_t$ and the link is not detected in the condition-selection step:", "parcorr = ParCorr()\npcmci_parcorr = PCMCI(\n dataframe=dataframe, \n cond_ind_test=parcorr,\n verbosity=1)\nall_parents = pcmci_parcorr.run_pc_stable(tau_max=2, pc_alpha=0.2)", "However, since the other parent of $X^2$, namely $X^1_{t-1}$ is detected, the MCI step conditions on $X^1_{t-1}$ and can reveal the true underlying graph (in this particular case):", "results = pcmci_parcorr.run_pcmci(tau_max=2, pc_alpha=0.2, alpha_level = 0.01)", "Note, however, that this is not always the case and such cancellation, even though a pathological case, can present a problem especially for smaller sample sizes.\nDeterministic dependencies\nAnother violation of faithfulness can happen due to purely deterministic dependencies as shown here:", "np.random.seed(1)\ndata = np.random.randn(500, 3)\nfor t in range(1, 500):\n data[t, 0] = 0.4*data[t-1, 1]\n data[t, 2] += 0.3*data[t-2, 1] + 0.7*data[t-1, 0]\ndataframe = pp.DataFrame(data, var_names=var_names)\ntp.plot_timeseries(dataframe); plt.show()\n\nparcorr = ParCorr()\npcmci_parcorr = PCMCI(\n dataframe=dataframe, \n cond_ind_test=parcorr,\n verbosity=2)\nresults = pcmci_parcorr.run_pcmci(tau_max=2, pc_alpha=0.2, alpha_level = 0.01)\n\n# Plot time series graph\ntp.plot_time_series_graph(\n val_matrix=results['val_matrix'],\n graph=results['graph'],\n var_names=var_names,\n link_colorbar_label='MCI',\n ); plt.show()", "Here the partial correlation $X^1_{t-1} \\to X^0_t$ is exactly 1. Since these now represent the same variable, the true link $X^0_{t-1} \\to X^2_t$ cannot be detected anymore since we condition on $X^1_{t-2}$. Deterministic copies of other variables should be excluded from the analysis.\nCausal sufficiency\nCausal sufficiency demands that the set of variables contains all common causes of any two variables. This assumption is mostly violated when analyzing open complex systems outside a confined experimental setting. Any link estimated from a causal discovery algorithm could become non-significant if more variables are included in the analysis. \nObservational causal inference assuming causal sufficiency should generally be seen more as one step towards a physical process understanding. There exist, however, algorithms that take into account and can expclicitely represent confounded links (e.g., the FCI algorithm and LPCMCI). Causal discovery can greatly help in an explorative model building analysis to get an idea of potential drivers. 
In particular, the absence of a link allows for a more robust conclusion: If there is no evidence for a statistical dependency, then a physical mechanism is less likely (assuming that the other assumptions hold). \nSee Runge, Jakob. 2018. “Causal Network Reconstruction from Time Series: From Theoretical Assumptions to Practical Estimation.” Chaos: An Interdisciplinary Journal of Nonlinear Science 28 (7): 075310.\nfor alternative approaches that do not necessitate Causal Sufficiency.\nUnobserved driver / latent variable\nFor the common driver process, consider that the common driver was not measured:", "np.random.seed(1)\ndata = np.random.randn(10000, 5)\na = 0.8\nfor t in range(5, 10000):\n data[t, 0] += a*data[t-1, 0]\n data[t, 1] += a*data[t-1, 1] + 0.5*data[t-1, 0]\n data[t, 2] += a*data[t-1, 2] + 0.5*data[t-1, 1] + 0.5*data[t-1, 4]\n data[t, 3] += a*data[t-1, 3] + 0.5*data[t-2, 4]\n data[t, 4] += a*data[t-1, 4]\n\n# tp.plot_timeseries(dataframe)\nobsdata = data[:,[0, 1, 2, 3]]\nvar_names_lat = ['W', 'Y', 'X', 'Z', 'U']\n\n\nfor data_here in [data, obsdata]:\n dataframe = pp.DataFrame(data_here)\n parcorr = ParCorr()\n pcmci_parcorr = PCMCI(\n dataframe=dataframe, \n cond_ind_test=parcorr,\n verbosity=0)\n results = pcmci_parcorr.run_pcmci(tau_max=5, pc_alpha=0.1, alpha_level = 0.001)\n\n tp.plot_graph(\n val_matrix=results['val_matrix'],\n graph=results['graph'],\n var_names=var_names_lat,\n link_colorbar_label='cross-MCI',\n node_colorbar_label='auto-MCI',\n ); plt.show()", "The upper plot shows the true causal graph if all variables are observed. The lower graph shows the case where variable $U$ is hidden. Then several spurious links appear: (1) $X\\to Z$ and (2) links from $Y$ and $W$ to $Z$, which is counterintuitive because there is no possible indirect pathway (see upper graph). What's the reason? The culprit is the collider $X$: MCI (or FullCI and any other causal measure conditioning on the entire past) between $Y$ and $Z$ is conditioned on the parents of $Z$, which includes $X$ here in the lower latent graph. But then conditioning on a collider opens up the paths from $Y$ and $W$ to $Z$ and makes them dependent.\nSolar forcing\nIn a geoscientific context, the solar forcing typically is a strong common driver of many processes. To remove this trivial effect, time series are typically anomalized, that is, the average seasonal cycle is subtracted. But one could also include the solar forcing explicitely as shown here via a sine wave for an artificial example. 
We've also made the time series more realistic by adding an auto-dependency on their past values.", "np.random.seed(42)\nT = 2000\ndata = np.random.randn(T, 4)\n# Simple sun\ndata[:,3] = np.sin(np.arange(T)*20/np.pi) + 0.1*np.random.randn(T)\nc = 0.8\nfor t in range(1, T):\n data[t, 0] += 0.4*data[t-1, 0] + 0.4*data[t-1, 1] + c*data[t-1,3]\n data[t, 1] += 0.5*data[t-1, 1] + c*data[t-1,3]\n data[t, 2] += 0.6*data[t-1, 2] + 0.3*data[t-2, 1] + c*data[t-1,3]\ndataframe = pp.DataFrame(data, var_names=[r'$X^0$', r'$X^1$', r'$X^2$', 'Sun'])\ntp.plot_timeseries(dataframe); plt.show()", "If we do not account for the common solar forcing, there will be many spurious links:", "parcorr = ParCorr()\ndataframe_nosun = pp.DataFrame(data[:,[0,1,2]], var_names=[r'$X^0$', r'$X^1$', r'$X^2$'])\npcmci_parcorr = PCMCI(\n dataframe=dataframe_nosun, \n cond_ind_test=parcorr,\n verbosity=0)\ntau_max = 2\ntau_min = 1\nresults = pcmci_parcorr.run_pcmci(tau_max=tau_max, pc_alpha=0.2, alpha_level = 0.01)\n\n# Plot time series graph\ntp.plot_time_series_graph(\n val_matrix=results['val_matrix'],\n graph=results['graph'],\n var_names=var_names,\n link_colorbar_label='MCI',\n ); plt.show()", "However, if we explicitely include the solar forcing variable (which we assume is known in this case), we can identify the correct causal graph. Since we are not interested in the drivers of the solar forcing variable, we don't attempt to reconstruct its parents. This can be achieved by restricting selected_links.", "parcorr = ParCorr()\n# Only estimate parents of variables 0, 1, 2\nselected_links = {}\nfor j in range(4):\n if j in [0, 1, 2]:\n selected_links[j] = [(var, -lag) for var in range(4)\n for lag in range(tau_min, tau_max + 1)]\n else:\n selected_links[j] = []\npcmci_parcorr = PCMCI(\n dataframe=dataframe, \n cond_ind_test=parcorr,\n verbosity=0)\nresults = pcmci_parcorr.run_pcmci(tau_min=tau_min, tau_max=tau_max, pc_alpha=0.2, \n selected_links=selected_links, alpha_level = 0.01)\n\n# Plot time series graph\ntp.plot_time_series_graph(\n val_matrix=results['val_matrix'],\n graph=results['graph'],\n var_names=[r'$X^0$', r'$X^1$', r'$X^2$', 'Sun'],\n link_colorbar_label='MCI',\n ); plt.show()", "Time sub-sampling\nSometimes a time series might be sub-sampled, that is the measurements are less frequent than the true underlying time-dependency. Consider the following process:", "np.random.seed(1)\ndata = np.random.randn(1000, 3)\nfor t in range(1, 1000):\n data[t, 0] += 0.*data[t-1, 0] + 0.6*data[t-1,2]\n data[t, 1] += 0.*data[t-1, 1] + 0.6*data[t-1,0]\n data[t, 2] += 0.*data[t-1, 2] + 0.6*data[t-1,1]\ndataframe = pp.DataFrame(data, var_names=[r'$X^0$', r'$X^1$', r'$X^2$'])\ntp.plot_timeseries(dataframe); plt.show()", "With the original time sampling we obtain the correct causal graph:", "pcmci_parcorr = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())\nresults = pcmci_parcorr.run_pcmci(tau_min=0,tau_max=2, pc_alpha=0.2, alpha_level = 0.01)\n\n# Plot time series graph\ntp.plot_time_series_graph(\n val_matrix=results['val_matrix'],\n graph=results['graph'],\n var_names=var_names,\n link_colorbar_label='MCI',\n ); plt.show()", "If we sub-sample the data, very counter-intuitive links can appear. 
The true causal loop gets detected in the wrong direction:", "sampled_data = data[::2]\npcmci_parcorr = PCMCI(dataframe=pp.DataFrame(sampled_data, var_names=var_names), \n cond_ind_test=ParCorr(), verbosity=0)\nresults = pcmci_parcorr.run_pcmci(tau_min=0, tau_max=2, pc_alpha=0.2, alpha_level=0.01)\n# Plot time series graph\ntp.plot_time_series_graph(\n val_matrix=results['val_matrix'],\n graph=results['graph'],\n var_names=var_names,\n link_colorbar_label='MCI',\n ); plt.show()", "If causal lags are smaller than the time sampling, such problems may occur. Causal inference for sub-sampled data is still an active area of research.\nCausal Markov condition\nThe Markov condition can be rephrased as assuming that the noises driving each variable are independent of each other and independent in time (iid). This is violated in the following example where each variable is driven by 1/f noise which refers to the scaling of the power spectrum. 1/f noise can be generated by averaging AR(1) processes (http://www.scholarpedia.org/article/1/f_noise) which means that the noise is not independent in time anymore (even though the noise terms of each individual variable are still independent). Note that this constitutes a violation of the Markov Condition of the observed process only. So one might call this rather a violation of Causal Sufficiency.", "np.random.seed(1)\nT = 10000\n# Generate 1/f noise by averaging AR1-process with wide range of coeffs \n# (http://www.scholarpedia.org/article/1/f_noise)\ndef one_over_f_noise(T, n_ar=20):\n whitenoise = np.random.randn(T, n_ar)\n ar_coeffs = np.linspace(0.1, 0.9, n_ar)\n for t in range(T):\n whitenoise[t] += ar_coeffs*whitenoise[t-1] \n return whitenoise.sum(axis=1)\n\ndata = np.random.randn(T, 3)\ndata[:,0] += one_over_f_noise(T)\ndata[:,1] += one_over_f_noise(T)\ndata[:,2] += one_over_f_noise(T)\n\nfor t in range(1, T):\n data[t, 0] += 0.4*data[t-1, 1] \n data[t, 2] += 0.3*data[t-2, 1] \ndataframe = pp.DataFrame(data, var_names=var_names)\ntp.plot_timeseries(dataframe); plt.show()\n# plt.psd(data[:,0],return_line=True)[2]\n# plt.psd(data[:,1],return_line=True)[2]\n# plt.psd(data[:,2],return_line=True)[2]\n# plt.gca().set_xscale(\"log\", nonposx='clip')\n# plt.gca().set_yscale(\"log\", nonposy='clip')", "Here PCMCI will detect many spurious links, especially auto-dependencies, since the process has long memory and the present state is not independent of the further past given some set of parents.", "parcorr = ParCorr()\npcmci_parcorr = PCMCI(\n dataframe=dataframe, \n cond_ind_test=parcorr,\n verbosity=1)\nresults = pcmci_parcorr.run_pcmci(tau_max=5, pc_alpha=0.2, alpha_level = 0.01)", "Time aggregation\nAn important choice is how to aggregate measured time series. For example, climate time series might have been measured daily, but one might be interested in a less noisy time-scale and analyze monthly aggregates. 
Consider the following process:", "np.random.seed(1)\ndata = np.random.randn(1000, 3)\nfor t in range(1, 1000):\n data[t, 0] += 0.7*data[t-1, 0] \n data[t, 1] += 0.6*data[t-1, 1] + 0.6*data[t-1,0]\n data[t, 2] += 0.5*data[t-1, 2] + 0.6*data[t-1,1]\ndataframe = pp.DataFrame(data, var_names=var_names)\ntp.plot_timeseries(dataframe); plt.show()", "With the original time aggregation we obtain the correct causal graph:", "pcmci_parcorr = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())\nresults = pcmci_parcorr.run_pcmci(tau_min=0,tau_max=2, pc_alpha=0.2, alpha_level = 0.01)\n\n# Plot time series graph\ntp.plot_time_series_graph(\n val_matrix=results['val_matrix'],\n graph=results['graph'],\n var_names=var_names,\n link_colorbar_label='MCI',\n ); plt.show()", "If we aggregate the data, we also detect a contemporaneous dependency for which no causal direction can be assessed in this framework and we obtain also several lagged spurious links. Essentially, we now have direct causal effects that appear contemporaneous on the aggregated time scale. Also causal inference for time-aggregated data is still an active area of research. Note again that this constitutes a violation of the Markov Condition of the observed process only. So one might call this rather a violation of Causal Sufficiency.", "aggregated_data = pp.time_bin_with_mask(data, time_bin_length=4)\npcmci_parcorr = PCMCI(dataframe=pp.DataFrame(aggregated_data[0], var_names=var_names), cond_ind_test=ParCorr(), \n verbosity=0)\nresults = pcmci_parcorr.run_pcmci(tau_min=0, tau_max=2, pc_alpha=0.2, alpha_level = 0.01)\n\n# Plot time series graph\ntp.plot_time_series_graph(\n val_matrix=results['val_matrix'],\n graph=results['graph'],\n var_names=var_names,\n link_colorbar_label='MCI',\n ); plt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
statsmodels/statsmodels.github.io
v0.13.1/examples/notebooks/generated/statespace_structural_harvey_jaeger.ipynb
bsd-3-clause
[ "Detrending, Stylized Facts and the Business Cycle\nIn an influential article, Harvey and Jaeger (1993) described the use of unobserved components models (also known as \"structural time series models\") to derive stylized facts of the business cycle.\nTheir paper begins:\n\"Establishing the 'stylized facts' associated with a set of time series is widely considered a crucial step\nin macroeconomic research ... For such facts to be useful they should (1) be consistent with the stochastic\nproperties of the data and (2) present meaningful information.\"\n\nIn particular, they make the argument that these goals are often better met using the unobserved components approach rather than the popular Hodrick-Prescott filter or Box-Jenkins ARIMA modeling techniques.\nstatsmodels has the ability to perform all three types of analysis, and below we follow the steps of their paper, using a slightly updated dataset.", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\nfrom IPython.display import display, Latex", "Unobserved Components\nThe unobserved components model available in statsmodels can be written as:\n$$\ny_t = \\underbrace{\\mu_{t}}{\\text{trend}} + \\underbrace{\\gamma{t}}{\\text{seasonal}} + \\underbrace{c{t}}{\\text{cycle}} + \\sum{j=1}^k \\underbrace{\\beta_j x_{jt}}{\\text{explanatory}} + \\underbrace{\\varepsilon_t}{\\text{irregular}}\n$$\nsee Durbin and Koopman 2012, Chapter 3 for notation and additional details. Notice that different specifications for the different individual components can support a wide range of models. The specific models considered in the paper and below are specializations of this general equation.\nTrend\nThe trend component is a dynamic extension of a regression model that includes an intercept and linear time-trend.\n$$\n\\begin{align}\n\\underbrace{\\mu_{t+1}}{\\text{level}} & = \\mu_t + \\nu_t + \\eta{t+1} \\qquad & \\eta_{t+1} \\sim N(0, \\sigma_\\eta^2) \\\\\n\\underbrace{\\nu_{t+1}}{\\text{trend}} & = \\nu_t + \\zeta{t+1} & \\zeta_{t+1} \\sim N(0, \\sigma_\\zeta^2) \\\n\\end{align}\n$$\nwhere the level is a generalization of the intercept term that can dynamically vary across time, and the trend is a generalization of the time-trend such that the slope can dynamically vary across time.\nFor both elements (level and trend), we can consider models in which:\n\nThe element is included vs excluded (if the trend is included, there must also be a level included).\nThe element is deterministic vs stochastic (i.e. 
whether or not the variance on the error term is confined to be zero or not)\n\nThe only additional parameters to be estimated via MLE are the variances of any included stochastic components.\nThis leads to the following specifications:\n| | Level | Trend | Stochastic Level | Stochastic Trend |\n|----------------------------------------------------------------------|-------|-------|------------------|------------------|\n| Constant | ✓ | | | |\n| Local Level <br /> (random walk) | ✓ | | ✓ | |\n| Deterministic trend | ✓ | ✓ | | |\n| Local level with deterministic trend <br /> (random walk with drift) | ✓ | ✓ | ✓ | |\n| Local linear trend | ✓ | ✓ | ✓ | ✓ |\n| Smooth trend <br /> (integrated random walk) | ✓ | ✓ | | ✓ |\nSeasonal\nThe seasonal component is written as:\n<span>$$\n\\gamma_t = - \\sum_{j=1}^{s-1} \\gamma_{t+1-j} + \\omega_t \\qquad \\omega_t \\sim N(0, \\sigma_\\omega^2)\n$$</span>\nThe periodicity (number of seasons) is s, and the defining character is that (without the error term), the seasonal components sum to zero across one complete cycle. The inclusion of an error term allows the seasonal effects to vary over time.\nThe variants of this model are:\n\nThe periodicity s\nWhether or not to make the seasonal effects stochastic.\n\nIf the seasonal effect is stochastic, then there is one additional parameter to estimate via MLE (the variance of the error term).\nCycle\nThe cyclical component is intended to capture cyclical effects at time frames much longer than captured by the seasonal component. For example, in economics the cyclical term is often intended to capture the business cycle, and is then expected to have a period between \"1.5 and 12 years\" (see Durbin and Koopman).\nThe cycle is written as:\n<span>$$\n\\begin{align}\nc_{t+1} & = c_t \\cos \\lambda_c + c_t^ \\sin \\lambda_c + \\tilde \\omega_t \\qquad & \\tilde \\omega_t \\sim N(0, \\sigma_{\\tilde \\omega}^2) \\\\\nc_{t+1}^ & = -c_t \\sin \\lambda_c + c_t^ \\cos \\lambda_c + \\tilde \\omega_t^ & \\tilde \\omega_t^* \\sim N(0, \\sigma_{\\tilde \\omega}^2)\n\\end{align}\n$$</span>\nThe parameter $\\lambda_c$ (the frequency of the cycle) is an additional parameter to be estimated by MLE. If the seasonal effect is stochastic, then there is one another parameter to estimate (the variance of the error term - note that both of the error terms here share the same variance, but are assumed to have independent draws).\nIrregular\nThe irregular component is assumed to be a white noise error term. 
Its variance is a parameter to be estimated by MLE; i.e.\n$$\n\\varepsilon_t \\sim N(0, \\sigma_\\varepsilon^2)\n$$\nIn some cases, we may want to generalize the irregular component to allow for autoregressive effects:\n$$\n\\varepsilon_t = \\rho(L) \\varepsilon_{t-1} + \\epsilon_t, \\qquad \\epsilon_t \\sim N(0, \\sigma_\\epsilon^2)\n$$\nIn this case, the autoregressive parameters would also be estimated via MLE.\nRegression effects\nWe may want to allow for explanatory variables by including additional terms\n<span>$$\n\\sum_{j=1}^k \\beta_j x_{jt}\n$$</span>\nor for intervention effects by including\n<span>$$\n\\begin{align}\n\\delta w_t \\qquad \\text{where} \\qquad w_t & = 0, \\qquad t < \\tau, \\\\\n& = 1, \\qquad t \\ge \\tau\n\\end{align}\n$$</span>\nThese additional parameters could be estimated via MLE or by including them as components of the state space formulation.\nData\nFollowing Harvey and Jaeger, we will consider the following time series:\n\nUS real GNP, \"output\", (GNPC96)\nUS GNP implicit price deflator, \"prices\", (GNPDEF)\nUS monetary base, \"money\", (AMBSL)\n\nThe time frame in the original paper varied across series, but was broadly 1954-1989. Below we use data from the period 1948-2008 for all series. Although the unobserved components approach allows isolating a seasonal component within the model, the series considered in the paper, and here, are already seasonally adjusted.\nAll data series considered here are taken from Federal Reserve Economic Data (FRED). Conveniently, the Python library Pandas has the ability to download data from FRED directly.", "# Datasets\nfrom pandas_datareader.data import DataReader\n\n# Get the raw data\nstart = '1948-01'\nend = '2008-01'\nus_gnp = DataReader('GNPC96', 'fred', start=start, end=end)\nus_gnp_deflator = DataReader('GNPDEF', 'fred', start=start, end=end)\nus_monetary_base = DataReader('AMBSL', 'fred', start=start, end=end).resample('QS').mean()\nrecessions = DataReader('USRECQ', 'fred', start=start, end=end).resample('QS').last().values[:,0]\n\n# Construct the dataframe\ndta = pd.concat(map(np.log, (us_gnp, us_gnp_deflator, us_monetary_base)), axis=1)\ndta.columns = ['US GNP','US Prices','US monetary base']\ndta.index.freq = dta.index.inferred_freq\ndates = dta.index._mpl_repr()", "To get a sense of these three variables over the timeframe, we can plot them:", "# Plot the data\nax = dta.plot(figsize=(13,3))\nylim = ax.get_ylim()\nax.xaxis.grid()\nax.fill_between(dates, ylim[0]+1e-5, ylim[1]-1e-5, recessions, facecolor='k', alpha=0.1);", "Model\nSince the data is already seasonally adjusted and there are no obvious explanatory variables, the generic model considered is:\n$$\ny_t = \\underbrace{\\mu_{t}}{\\text{trend}} + \\underbrace{c{t}}{\\text{cycle}} + \\underbrace{\\varepsilon_t}{\\text{irregular}}\n$$\nThe irregular will be assumed to be white noise, and the cycle will be stochastic and damped. The final modeling choice is the specification to use for the trend component. Harvey and Jaeger consider two models:\n\nLocal linear trend (the \"unrestricted\" model)\nSmooth trend (the \"restricted\" model, since we are forcing $\\sigma_\\eta = 0$)\n\nBelow, we construct kwargs dictionaries for each of these model types. Notice that rather that there are two ways to specify the models. One way is to specify components directly, as in the table above. 
The other way is to use string names which map to various specifications.", "# Model specifications\n\n# Unrestricted model, using string specification\nunrestricted_model = {\n 'level': 'local linear trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n}\n\n# Unrestricted model, setting components directly\n# This is an equivalent, but less convenient, way to specify a\n# local linear trend model with a stochastic damped cycle:\n# unrestricted_model = {\n# 'irregular': True, 'level': True, 'stochastic_level': True, 'trend': True, 'stochastic_trend': True,\n# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n# }\n\n# The restricted model forces a smooth trend\nrestricted_model = {\n 'level': 'smooth trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n}\n\n# Restricted model, setting components directly\n# This is an equivalent, but less convenient, way to specify a\n# smooth trend model with a stochastic damped cycle. Notice\n# that the difference from the local linear trend model is that\n# `stochastic_level=False` here.\n# unrestricted_model = {\n# 'irregular': True, 'level': True, 'stochastic_level': False, 'trend': True, 'stochastic_trend': True,\n# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n# }", "We now fit the following models:\n\nOutput, unrestricted model\nPrices, unrestricted model\nPrices, restricted model\nMoney, unrestricted model\nMoney, restricted model", "# Output\noutput_mod = sm.tsa.UnobservedComponents(dta['US GNP'], **unrestricted_model)\noutput_res = output_mod.fit(method='powell', disp=False)\n\n# Prices\nprices_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **unrestricted_model)\nprices_res = prices_mod.fit(method='powell', disp=False)\n\nprices_restricted_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **restricted_model)\nprices_restricted_res = prices_restricted_mod.fit(method='powell', disp=False)\n\n# Money\nmoney_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **unrestricted_model)\nmoney_res = money_mod.fit(method='powell', disp=False)\n\nmoney_restricted_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **restricted_model)\nmoney_restricted_res = money_restricted_mod.fit(method='powell', disp=False)", "Once we have fit these models, there are a variety of ways to display the information. Looking at the model of US GNP, we can summarize the fit of the model using the summary method on the fit object.", "print(output_res.summary())", "For unobserved components models, and in particular when exploring stylized facts in line with point (2) from the introduction, it is often more instructive to plot the estimated unobserved components (e.g. the level, trend, and cycle) themselves to see if they provide a meaningful description of the data.\nThe plot_components method of the fit object can be used to show plots and confidence intervals of each of the estimated states, as well as a plot of the observed data versus the one-step-ahead predictions of the model to assess fit.", "fig = output_res.plot_components(legend_loc='lower right', figsize=(15, 9));", "Finally, Harvey and Jaeger summarize the models in another way to highlight the relative importances of the trend and cyclical components; below we replicate their Table I. 
The values we find are broadly consistent with, but different in the particulars from, the values from their table.", "# Create Table I\ntable_i = np.zeros((5,6))\n\nstart = dta.index[0]\nend = dta.index[-1]\ntime_range = '%d:%d-%d:%d' % (start.year, start.quarter, end.year, end.quarter)\nmodels = [\n ('US GNP', time_range, 'None'),\n ('US Prices', time_range, 'None'),\n ('US Prices', time_range, r'$\\sigma_\\eta^2 = 0$'),\n ('US monetary base', time_range, 'None'),\n ('US monetary base', time_range, r'$\\sigma_\\eta^2 = 0$'),\n]\nindex = pd.MultiIndex.from_tuples(models, names=['Series', 'Time range', 'Restrictions'])\nparameter_symbols = [\n r'$\\sigma_\\zeta^2$', r'$\\sigma_\\eta^2$', r'$\\sigma_\\kappa^2$', r'$\\rho$',\n r'$2 \\pi / \\lambda_c$', r'$\\sigma_\\varepsilon^2$',\n]\n\ni = 0\nfor res in (output_res, prices_res, prices_restricted_res, money_res, money_restricted_res):\n if res.model.stochastic_level:\n (sigma_irregular, sigma_level, sigma_trend,\n sigma_cycle, frequency_cycle, damping_cycle) = res.params\n else:\n (sigma_irregular, sigma_level,\n sigma_cycle, frequency_cycle, damping_cycle) = res.params\n sigma_trend = np.nan\n period_cycle = 2 * np.pi / frequency_cycle\n \n table_i[i, :] = [\n sigma_level*1e7, sigma_trend*1e7,\n sigma_cycle*1e7, damping_cycle, period_cycle,\n sigma_irregular*1e7\n ]\n i += 1\n \npd.set_option('float_format', lambda x: '%.4g' % np.round(x, 2) if not np.isnan(x) else '-')\ntable_i = pd.DataFrame(table_i, index=index, columns=parameter_symbols)\ntable_i" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
googledatalab/notebooks
tutorials/Stackdriver Monitoring/Getting started.ipynb
apache-2.0
[ "Getting started with the Stackdriver Monitoring API\nCloud Datalab provides an environment for working with your data. This includes data that is being managed within the Stackdriver Monitoring API. This notebook introduces some of the APIs that Cloud Datalab provides for working with the monitoring data, and allows you to try them out on your own project.\nThe main focus of this API is to allow you to query time series data for your monitored resources. The time series, and it's metadata are returned as pandas DataFrame objects. pandas is a widely used library for data manipulation, and is well suited to working with time series data.\nNote: This notebook will show you how to use this API with your own project. The charts included here are from a sample project that you will not have access to. For all cells to run without errors, the following must hold:\n* The default project must be set\n* This project must have at least one GCE Instance. You can create an instance at the following link: https://console.cloud.google.com/compute/instances\nImporting the API and setting up the default project\nThe Monitoring functionality is contained within the datalab.stackdriver.monitoring module.\nIf the default project is not already set via the environment variable $PROJECT_ID, you must do so using 'set_datalab_project_id', or using the %datalab config magic.", "# set_datalab_project_id('my-project-id')", "First, list supported options on the Stackdriver magic %sd:", "%sd -h", "Let's see what we can do with the monitoring command:", "%sd monitoring -h", "List names of Compute Engine CPU metrics\nHere we use IPython cell magics to list the CPU metrics. The Labels column shows that instance_name is a metric label.", "%sd monitoring metrics list --type compute*/cpu/*", "List monitored resource types related to GCE", "%sd monitoring resource_types list --type gce*", "Querying time series data\nThe Query class allows users to query and access the monitoring time series data.\nMany useful methods of the Query class are actually defined by the base class, which is provided by the google-cloud-python library. These methods include:\n* select_metrics: filters the query based on metric labels.\n* select_resources: filters the query based on resource type and labels.\n* align: aligns the query along the specified time intervals.\n* reduce: applies aggregation to the query.\n* as_dataframe: returns the time series data as a pandas DataFrame object.\nReference documentation for the Query base class is available here. You can also get help from inside the notebook by calling the help function on any class, object or method.", "from google.datalab.stackdriver import monitoring as gcm\nhelp(gcm.Query.select_interval)", "Initializing the query\nDuring intialization, the metric type and the time interval need to be specified. For interactive use, the metric type has a default value. The simplest way to specify the time interval that ends now is to use the arguments days, hours, and minutes.\nIn the cell below, we initialize the query to load the time series for CPU Utilization for the last two hours.", "query_cpu = gcm.Query('compute.googleapis.com/instance/cpu/utilization', hours=2)", "Getting the metadata\nThe method metadata() returns a QueryMetadata object. 
It contains the following information about the time series matching the query:\n* resource types\n* resource labels and their values\n* metric labels and their values\nThis helps you understand the structure of the time series data, and makes it easier to modify the query.", "metadata_cpu = query_cpu.metadata().as_dataframe()\nmetadata_cpu.head(5)", "Reading the instance names from the metadata\nNext, we read in the instance names from the metadata, and use it in filtering the time series data below. If there are no GCE instances in this project, the cells below will raise errors.", "import sys\n\nif metadata_cpu.empty:\n sys.stderr.write('This project has no GCE instances. The remaining notebook '\n 'will raise errors!')\nelse:\n instance_names = sorted(list(metadata_cpu['metric.labels']['instance_name']))\n print('First 5 instance names: %s' % ([str(name) for name in instance_names[:5]],))", "Filtering by metric label\nWe first filter query_cpu defined earlier to include only the first instance. Next, calling as_dataframe gets the results from the monitoring API, and converts them into a pandas DataFrame.", "query_cpu_single_instance = query_cpu.select_metrics(instance_name=instance_names[0])\n\n# Get the query results as a pandas DataFrame and look at the last 5 rows.\ndata_single_instance = query_cpu_single_instance.as_dataframe(label='instance_name')\ndata_single_instance.tail(5)", "Displaying the time series as a linechart\nWe can plot the time series data by calling the plot method of the dataframe. The pandas library uses matplotlib for plotting, so you can learn more about it here.", "# N.B. A useful trick is to assign the return value of plot to _ \n# so that you don't get text printed before the plot itself.\n\n_ = data_single_instance.plot()", "Aggregating the query\nYou can aggregate or summarize time series data along various dimensions.\n* In the first stage, data in a time series is aligned to a specified period.\n* In the second stage, data from multiple time series is combined, or reduced, into one time series. \nNot all alignment and reduction options are applicable to all time series, depending on their metric type and value type. Alignment and reduction may change the metric type or value type of a time series.\nAligning the query\nFor multiple time series, aligning the data is recommended. Aligned data is more compact to read from the Monitoring API, and lends itself better to visualizations.\nThe alignment period can be specified using the arguments hours, minutes, and seconds. In the cell below, we do the following:\n* select a subset of the instances by using a prefix of the first instance name\n* align the time series to 5 minute intervals using an 'ALIGN_MEAN' method.\n* plot the time series, and adjust the legend to be outside the plot. You can learn more about legend placement here.", "# Filter the query by a common instance name prefix.\ncommon_prefix = instance_names[0].split('-')[0]\nquery_cpu_aligned = query_cpu.select_metrics(instance_name_prefix=common_prefix)\n\n# Align the query to have data every 5 minutes.\nquery_cpu_aligned = query_cpu_aligned.align(gcm.Aligner.ALIGN_MEAN, minutes=5)\ndata_multiple_instances = query_cpu_aligned.as_dataframe(label='instance_name')\n\n# Display the data as a linechart, and move the legend to the right of it.\n_ = data_multiple_instances.plot().legend(loc=\"upper left\", bbox_to_anchor=(1,1))", "Reducing the query\nIn order to combine the data across multiple time series, the reduce() method can be used. 
The fields to be retained after aggregation must be specified in the method.\nFor example, to aggregate the results by the zone, 'resource.zone' can be specified.", "query_cpu_reduced = query_cpu_aligned.reduce(gcm.Reducer.REDUCE_MEAN, 'resource.zone')\ndata_per_zone = query_cpu_reduced.as_dataframe('zone')\ndata_per_zone.tail(5)", "Displaying the time series as a heatmap\nLet us look at the time series at the instance level as a heatmap. A heatmap is a compact representation of the data, and can often highlight patterns.\nThe diagram below shows the instances along rows, and the timestamps along columns.", "import matplotlib\nimport seaborn\n\n# Set the size of the heatmap to have a better aspect ratio.\ndiv_ratio = 1 if len(data_multiple_instances.columns) == 1 else 2.0\nwidth, height = (size/div_ratio for size in data_multiple_instances.shape)\nmatplotlib.pyplot.figure(figsize=(width, height))\n\n# Display the data as a heatmap. The timestamps are converted to strings\n# for better readbility.\n_ = seaborn.heatmap(data_multiple_instances.T,\n xticklabels=data_multiple_instances.index.map(str),\n cmap='YlGnBu')", "Multi-level headers\nIf you don't provide any labels to as_dataframe, it returns all the resource and metric labels present in the time series as a multi-level header.\nThis allows you to filter, and aggregate the data more easily.", "data_multi_level = query_cpu_aligned.as_dataframe()\ndata_multi_level.tail(5)", "Filter the dataframe\nLet us filter the multi-level dataframe based on the common prefix. Applying the filter will look across all column headers.", "print('Finding pattern \"%s\" in the dataframe headers' % (common_prefix,))\n\ndata_multi_level.filter(regex=common_prefix).tail(5)", "Aggregate columns in the dataframe\nHere, we aggregate the multi-level dataframe at the zone level. This is similar to applying reduction using 'REDUCE_MEAN' on the field 'resource.zone'.", "data_multi_level.groupby(level='zone', axis=1).mean().tail(5)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
charmasaur/digbeta
tour/traj_visualisation.ipynb
gpl-3.0
[ "Trajectory Visualisation\nNOTE: Before running this notebook, please run script src/ijcai15_setup.py to setup data properly.\nVisualise trajectories on maps by generating a KML file for each trajectory.\n\nPrepare Data\nLoad Trajectory Data\nCompute POI Info\nConstruct Travelling Sequences\nGenerate KML File for Trajectory\nTrajectory with same (start, end)\nTrajectory with more than one occurrence\nVisualise Trajectory\nVisualise Trajectories with more than one occurrence\nVisualise Trajectories with same (start, end) but different paths\nVisualise the Most Common Edges\nCount the occurrence of edges\n\n<a id='sec1'></a>\n1. Prepare Data\n<a id='sec1.1'></a>\n1.1 Load Trajectory Data", "%matplotlib inline\n\nimport os\nimport re\nimport math\nimport random\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom datetime import datetime\nfrom fastkml import kml, styles\nfrom shapely.geometry import Point, LineString\n\nrandom.seed(123456789)\n\ndata_dir = 'data/data-ijcai15'\n#fvisit = os.path.join(data_dir, 'userVisits-Osak.csv')\n#fcoord = os.path.join(data_dir, 'photoCoords-Osak.csv')\n#fvisit = os.path.join(data_dir, 'userVisits-Glas.csv')\n#fcoord = os.path.join(data_dir, 'photoCoords-Glas.csv')\n#fvisit = os.path.join(data_dir, 'userVisits-Edin.csv')\n#fcoord = os.path.join(data_dir, 'photoCoords-Edin.csv')\nfvisit = os.path.join(data_dir, 'userVisits-Toro.csv')\nfcoord = os.path.join(data_dir, 'photoCoords-Toro.csv')\n\nsuffix = fvisit.split('-')[-1].split('.')[0]\n\nvisits = pd.read_csv(fvisit, sep=';')\nvisits.head()\n\ncoords = pd.read_csv(fcoord, sep=';')\ncoords.head()\n\n# merge data frames according to column 'photoID'\nassert(visits.shape[0] == coords.shape[0])\ntraj = pd.merge(visits, coords, on='photoID')\ntraj.head()\n\nnum_photo = traj['photoID'].unique().shape[0]\nnum_user = traj['userID'].unique().shape[0]\nnum_seq = traj['seqID'].unique().shape[0]\nnum_poi = traj['poiID'].unique().shape[0]\npd.DataFrame([num_photo, num_user, num_seq, num_poi, num_photo/num_user, num_seq/num_user], \\\n index = ['#photo', '#user', '#seq', '#poi', '#photo/user', '#seq/user'], columns=[str(suffix)])", "<a id='sec3.2'></a>\n<a id='sec1.2'></a>\n1.2 Compute POI Info\nCompute POI (Longitude, Latitude) as the average coordinates of the assigned photos.", "poi_coords = traj[['poiID', 'photoLon', 'photoLat']].groupby('poiID').agg(np.mean)\npoi_coords.reset_index(inplace=True)\npoi_coords.rename(columns={'photoLon':'poiLon', 'photoLat':'poiLat'}, inplace=True)\npoi_coords.head()", "Extract POI category and visiting frequency.", "poi_catfreq = traj[['poiID', 'poiTheme', 'poiFreq']].groupby('poiID').first()\npoi_catfreq.reset_index(inplace=True)\npoi_catfreq.head()\n\npoi_all = pd.merge(poi_catfreq, poi_coords, on='poiID')\npoi_all.set_index('poiID', inplace=True)\npoi_all.head()", "<a id='sec1.3'></a>\n1.3 Construct Travelling Sequences", "seq_all = traj[['userID', 'seqID', 'poiID', 'dateTaken']].copy()\\\n .groupby(['userID', 'seqID', 'poiID']).agg([np.min, np.max])\nseq_all.columns = seq_all.columns.droplevel()\nseq_all.reset_index(inplace=True)\nseq_all.rename(columns={'amin':'arrivalTime', 'amax':'departureTime'}, inplace=True)\nseq_all['poiDuration(sec)'] = seq_all['departureTime'] - seq_all['arrivalTime']\nseq_all.head()", "<a id='sec1.4'></a>\n1.4 Generate KML File for Trajectory\nVisualise Trajectory on map by generating a KML file for a trajectory and its associated POIs.", "def generate_kml(fname, seqid_set, seq_all, poi_all):\n k = kml.KML()\n ns = 
'{http://www.opengis.net/kml/2.2}'\n styid = 'style1'\n # colors in KML: aabbggrr, aa=00 is fully transparent\n sty = styles.Style(id=styid, styles=[styles.LineStyle(color='9f0000ff', width=2)]) # transparent red\n doc = kml.Document(ns, '1', 'Trajectory', 'Trajectory visualization', styles=[sty])\n k.append(doc)\n \n poi_set = set()\n seq_dict = dict()\n for seqid in seqid_set:\n # ordered POIs in sequence\n seqi = seq_all[seq_all['seqID'] == seqid].copy()\n seqi.sort(columns=['arrivalTime'], ascending=True, inplace=True)\n seq = seqi['poiID'].tolist()\n seq_dict[seqid] = seq\n for poi in seq: poi_set.add(poi)\n \n # Placemark for trajectory\n for seqid in sorted(seq_dict.keys()):\n seq = seq_dict[seqid]\n desc = 'Trajectory: ' + str(seq[0]) + '->' + str(seq[-1])\n pm = kml.Placemark(ns, str(seqid), 'Trajectory ' + str(seqid), desc, styleUrl='#' + styid)\n pm.geometry = LineString([(poi_all.loc[x, 'poiLon'], poi_all.loc[x, 'poiLat']) for x in seq])\n doc.append(pm)\n \n # Placemark for POI\n for poi in sorted(poi_set):\n desc = 'POI of category ' + poi_all.loc[poi, 'poiTheme']\n pm = kml.Placemark(ns, str(poi), 'POI ' + str(poi), desc, styleUrl='#' + styid)\n pm.geometry = Point(poi_all.loc[poi, 'poiLon'], poi_all.loc[poi, 'poiLat'])\n doc.append(pm)\n \n # save to file\n kmlstr = k.to_string(prettyprint=True)\n with open(fname, 'w') as f:\n f.write('<?xml version=\"1.0\" encoding=\"UTF-8\"?>\\n')\n f.write(kmlstr)", "<a id='sec2'></a>\n2. Trajectory with same (start, end)", "seq_user = seq_all[['userID', 'seqID', 'poiID']].copy().groupby(['userID', 'seqID']).agg(np.size)\nseq_user.reset_index(inplace=True)\nseq_user.rename(columns={'size':'seqLen'}, inplace=True)\nseq_user.set_index('seqID', inplace=True)\nseq_user.head()\n\ndef extract_seq(seqid, seq_all):\n seqi = seq_all[seq_all['seqID'] == seqid].copy()\n seqi.sort(columns=['arrivalTime'], ascending=True, inplace=True)\n return seqi['poiID'].tolist()\n\nstartend_dict = dict()\nfor seqid in seq_all['seqID'].unique():\n seq = extract_seq(seqid, seq_all)\n if (seq[0], seq[-1]) not in startend_dict:\n startend_dict[(seq[0], seq[-1])] = [seqid]\n else:\n startend_dict[(seq[0], seq[-1])].append(seqid)\n\nindices = sorted(startend_dict.keys())\ncolumns = ['#traj', '#user']\nstartend_seq = pd.DataFrame(data=np.zeros((len(indices), len(columns))), index=indices, columns=columns)\n\nfor pair, seqid_set in startend_dict.items():\n users = set([seq_user.loc[x, 'userID'] for x in seqid_set])\n startend_seq.loc[pair, '#traj'] = len(seqid_set)\n startend_seq.loc[pair, '#user'] = len(users)\n\nstartend_seq.sort(columns=['#traj'], ascending=True, inplace=True)\nstartend_seq.index.name = '(start, end)'\nstartend_seq.sort_index(inplace=True)\nprint(startend_seq.shape)\nstartend_seq", "<a id='sec3'></a>\n3. 
Trajectory with more than one occurrence\nContruct trajectories with more than one occurrence (can be same or different user).", "distinct_seq = dict()\n\nfor seqid in seq_all['seqID'].unique():\n seq = extract_seq(seqid, seq_all)\n #if len(seq) < 2: continue # drop trajectory with single point\n if str(seq) not in distinct_seq:\n distinct_seq[str(seq)] = [(seqid, seq_user.loc[seqid].iloc[0])] # (seqid, user)\n else:\n distinct_seq[str(seq)].append((seqid, seq_user.loc[seqid].iloc[0]))\n\nprint(len(distinct_seq))\n#distinct_seq\n\ndistinct_seq_df = pd.DataFrame.from_dict({k:len(distinct_seq[k]) for k in sorted(distinct_seq.keys())}, orient='index')\ndistinct_seq_df.columns = ['#occurrence']\ndistinct_seq_df.index.name = 'trajectory'\ndistinct_seq_df['seqLen'] = [len(x.split(',')) for x in distinct_seq_df.index]\ndistinct_seq_df.sort_index(inplace=True)\nprint(distinct_seq_df.shape)\ndistinct_seq_df.head()\n\nplt.figure(figsize=[9, 9])\nplt.xlabel('sequence length')\nplt.ylabel('#occurrence')\nplt.scatter(distinct_seq_df['seqLen'], distinct_seq_df['#occurrence'], marker='+')", "Filtering out sequences with single point as well as sequences occurs only once.", "distinct_seq_df2 = distinct_seq_df[distinct_seq_df['seqLen'] > 1]\ndistinct_seq_df2 = distinct_seq_df2[distinct_seq_df2['#occurrence'] > 1]\ndistinct_seq_df2.head()\n\nplt.figure(figsize=[9, 9])\nplt.xlabel('sequence length')\nplt.ylabel('#occurrence')\nplt.scatter(distinct_seq_df2['seqLen'], distinct_seq_df2['#occurrence'], marker='+')", "<a id='sec4'></a>\n4. Visualise Trajectory\n<a id='sec4.1'></a>\n4.1 Visualise Trajectories with more than one occurrence", "for seqstr in distinct_seq_df2.index:\n assert(seqstr in distinct_seq)\n seqid = distinct_seq[seqstr][0][0]\n fname = re.sub(',', '_', re.sub('[ \\[\\]]', '', seqstr))\n fname = os.path.join(data_dir, suffix + '-seq-occur-' + str(len(distinct_seq[seqstr])) + '_' + fname + '.kml')\n generate_kml(fname, [seqid], seq_all, poi_all)", "<a id='sec4.2'></a>\n4.2 Visualise Trajectories with same (start, end) but different paths", "startend_distinct_seq = dict()\n\ndistinct_seqid_set = [distinct_seq[x][0][0] for x in distinct_seq_df2.index]\n\nfor seqid in distinct_seqid_set:\n seq = extract_seq(seqid, seq_all)\n if (seq[0], seq[-1]) not in startend_distinct_seq:\n startend_distinct_seq[(seq[0], seq[-1])] = [seqid]\n else:\n startend_distinct_seq[(seq[0], seq[-1])].append(seqid)\n\nfor pair in sorted(startend_distinct_seq.keys()):\n if len(startend_distinct_seq[pair]) < 2: continue\n fname = suffix + '-seq-start_' + str(pair[0]) + '_end_' + str(pair[1]) + '.kml'\n fname = os.path.join(data_dir, fname)\n print(pair, len(startend_distinct_seq[pair]))\n generate_kml(fname, startend_distinct_seq[pair], seq_all, poi_all)", "<a id='sec5'></a>\n5. 
Visualise the Most Common Edges\n<a id='sec5.1'></a>\n5.1 Count the occurrence of edges", "edge_count = pd.DataFrame(data=np.zeros((poi_all.index.shape[0], poi_all.index.shape[0]), dtype=np.int), \\\n index=poi_all.index, columns=poi_all.index)\n\nfor seqid in seq_all['seqID'].unique():\n seq = extract_seq(seqid, seq_all)\n for j in range(len(seq)-1):\n edge_count.loc[seq[j], seq[j+1]] += 1\n\nedge_count\n\nk = kml.KML()\nns = '{http://www.opengis.net/kml/2.2}'\nwidth_set = set()\n\n# Placemark for edges\npm_list = []\nfor poi1 in poi_all.index:\n for poi2 in poi_all.index:\n width = edge_count.loc[poi1, poi2]\n if width < 1: continue\n width_set.add(width)\n sid = str(poi1) + '_' + str(poi2)\n desc = 'Edge: ' + str(poi1) + '->' + str(poi2) + ', #occurrence: ' + str(width)\n pm = kml.Placemark(ns, sid, 'Edge_' + sid, desc, styleUrl='#sty' + str(width))\n pm.geometry = LineString([(poi_all.loc[x, 'poiLon'], poi_all.loc[x, 'poiLat']) for x in [poi1, poi2]])\n pm_list.append(pm)\n\n# Placemark for POIs\nfor poi in poi_all.index:\n sid = str(poi)\n desc = 'POI of category ' + poi_all.loc[poi, 'poiTheme']\n pm = kml.Placemark(ns, sid, 'POI_' + sid, desc, styleUrl='#sty1')\n pm.geometry = Point(poi_all.loc[poi, 'poiLon'], poi_all.loc[poi, 'poiLat'])\n pm_list.append(pm)\n\n# Styles\nstys = []\nfor width in width_set:\n sid = 'sty' + str(width)\n # colors in KML: aabbggrr, aa=00 is fully transparent\n stys.append(styles.Style(id=sid, styles=[styles.LineStyle(color='3f0000ff', width=width)])) # transparent red\n\ndoc = kml.Document(ns, '1', 'Edges', 'Edge visualization', styles=stys)\nfor pm in pm_list: doc.append(pm)\nk.append(doc)\n\n# save to file\nfname = suffix + '-common_edges.kml'\nfname = os.path.join(data_dir, fname)\nkmlstr = k.to_string(prettyprint=True)\nwith open(fname, 'w') as f:\n f.write('<?xml version=\"1.0\" encoding=\"UTF-8\"?>\\n')\n f.write(kmlstr)", "An example map for Toronto is available here." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
wuafeing/Python3-Tutorial
02 strings and text/02.01 split string on multiple delimiters.ipynb
gpl-3.0
[ "Previous\n2.1 使用多个界定符分割字符串\n问题\n你需要将一个字符串分割为多个字段,但是分隔符(还有周围的空格)并不是固定的。\n解决方案\nstring 对象的 split() 方法只适应于非常简单的字符串分割情形, 它并不允许有多个分隔符或者是分隔符周围不确定的空格。 当你需要更加灵活的切割字符串的时候,最好使用 re.split() 方法:", "line = \"asdf fjdk; afed, fjek,asdf, foo\"\n\nimport re\nre.split(r\"[;,\\s]\\s*\", line)", "讨论\n函数 re.split() 是非常实用的,因为它允许你为分隔符指定多个正则模式。 比如,在上面的例子中,分隔符可以是逗号,分号或者是空格,并且后面紧跟着任意个的空格。 只要这个模式被找到,那么匹配的分隔符两边的实体都会被当成是结果中的元素返回。 返回结果为一个字段列表,这个跟 str.split() 返回值类型是一样的。\n当你使用 re.split() 函数时候,需要特别注意的是正则表达式中是否包含一个括号捕获分组。 如果使用了捕获分组,那么被匹配的文本也将出现在结果列表中。比如,观察一下这段代码运行后的结果:", "fields = re.split(r\"(;|,|\\s)\\s*\", line)\nfields", "获取分割字符在某些情况下也是有用的。 比如,你可能想保留分割字符串,用来在后面重新构造一个新的输出字符串:", "values = fields[::2]\nvalues\n\ndelimiters = fields[1::2] + [\"\"]\ndelimiters\n\n# Reform the line using the same delimiters\n\"\".join(v + d for v, d in zip(values, delimiters))", "如果你不想保留分割字符串到结果列表中去,但仍然需要使用到括号来分组正则表达式的话, 确保你的分组是非捕获分组,形如 (?:...) 。比如:", "re.split(r\"(?:,|;|\\s)\\s*\", line)", "Next" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
muxiaobai/CourseExercises
python/kaggle/competition/house-price/house_price.ipynb
gpl-2.0
[ "房价预测案例\nStep 1: 检视源数据集", "import numpy as np\nimport pandas as pd", "读入数据\n\n\n一般来说源数据的index那一栏没什么用,我们可以用来作为我们pandas dataframe的index。这样之后要是检索起来也省事儿。\n\n\n有人的地方就有鄙视链。跟知乎一样。Kaggle的也是个处处呵呵的危险地带。Kaggle上默认把数据放在input文件夹下。所以我们没事儿写个教程什么的,也可以依据这个convention来,显得自己很有逼格。。", "train_df = pd.read_csv('../input/train.csv', index_col=0)\ntest_df = pd.read_csv('../input/test.csv', index_col=0)", "检视源数据", "train_df.head()", "这时候大概心里可以有数,哪些地方需要人为的处理一下,以做到源数据更加好被process。\nStep 2: 合并数据\n这么做主要是为了用DF进行数据预处理的时候更加方便。等所有的需要的预处理进行完之后,我们再把他们分隔开。\n首先,SalePrice作为我们的训练目标,只会出现在训练集中,不会在测试集中(要不然你测试什么?)。所以,我们先把SalePrice这一列给拿出来,不让它碍事儿。\n我们先看一下SalePrice长什么样纸:", "%matplotlib inline\nprices = pd.DataFrame({\"price\":train_df[\"SalePrice\"], \"log(price + 1)\":np.log1p(train_df[\"SalePrice\"])})\nprices.hist()", "可见,label本身并不平滑。为了我们分类器的学习更加准确,我们会首先把label给“平滑化”(正态化)\n这一步大部分同学会miss掉,导致自己的结果总是达不到一定标准。\n这里我们使用最有逼格的log1p, 也就是 log(x+1),避免了复值的问题。\n记住哟,如果我们这里把数据都给平滑化了,那么最后算结果的时候,要记得把预测到的平滑数据给变回去。\n按照“怎么来的怎么去”原则,log1p()就需要expm1(); 同理,log()就需要exp(), ... etc.", "y_train = np.log1p(train_df.pop('SalePrice'))", "然后我们把剩下的部分合并起来", "all_df = pd.concat((train_df, test_df), axis=0)", "此刻,我们可以看到all_df就是我们合在一起的DF", "all_df.shape", "而y_train则是SalePrice那一列", "y_train.head()", "Step 3: 变量转化\n类似『特征工程』。就是把不方便处理或者不unify的数据给统一了。\n正确化变量属性\n首先,我们注意到,MSSubClass 的值其实应该是一个category,\n但是Pandas是不会懂这些事儿的。使用DF的时候,这类数字符号会被默认记成数字。\n这种东西就很有误导性,我们需要把它变回成string", "all_df['MSSubClass'].dtypes\n\nall_df['MSSubClass'] = all_df['MSSubClass'].astype(str)", "变成str以后,做个统计,就很清楚了", "all_df['MSSubClass'].value_counts()", "把category的变量转变成numerical表达形式\n当我们用numerical来表达categorical的时候,要注意,数字本身有大小的含义,所以乱用数字会给之后的模型学习带来麻烦。于是我们可以用One-Hot的方法来表达category。\npandas自带的get_dummies方法,可以帮你一键做到One-Hot。", "pd.get_dummies(all_df['MSSubClass'], prefix='MSSubClass').head()", "此刻MSSubClass被我们分成了12个column,每一个代表一个category。是就是1,不是就是0。\n同理,我们把所有的category数据,都给One-Hot了", "all_dummy_df = pd.get_dummies(all_df)\nall_dummy_df.head()", "处理好numerical变量\n就算是numerical的变量,也还会有一些小问题。\n比如,有一些数据是缺失的:", "all_dummy_df.isnull().sum().sort_values(ascending=False).head(10)", "可以看到,缺失最多的column是LotFrontage\n处理这些缺失的信息,得靠好好审题。一般来说,数据集的描述里会写的很清楚,这些缺失都代表着什么。当然,如果实在没有的话,也只能靠自己的『想当然』。。\n在这里,我们用平均值来填满这些空缺。", "mean_cols = all_dummy_df.mean()\nmean_cols.head(10)\n\nall_dummy_df = all_dummy_df.fillna(mean_cols)", "看看是不是没有空缺了?", "all_dummy_df.isnull().sum().sum()", "标准化numerical数据\n这一步并不是必要,但是得看你想要用的分类器是什么。一般来说,regression的分类器都比较傲娇,最好是把源数据给放在一个标准分布内。不要让数据间的差距太大。\n这里,我们当然不需要把One-Hot的那些0/1数据给标准化。我们的目标应该是那些本来就是numerical的数据:\n先来看看 哪些是numerical的:", "numeric_cols = all_df.columns[all_df.dtypes != 'object']\nnumeric_cols", "计算标准分布:(X-X')/s\n让我们的数据点更平滑,更便于计算。\n注意:我们这里也是可以继续使用Log的,我只是给大家展示一下多种“使数据平滑”的办法。", "numeric_col_means = all_dummy_df.loc[:, numeric_cols].mean()\nnumeric_col_std = all_dummy_df.loc[:, numeric_cols].std()\nall_dummy_df.loc[:, numeric_cols] = (all_dummy_df.loc[:, numeric_cols] - numeric_col_means) / numeric_col_std", "Step 4: 建立模型\n把数据集分回 训练/测试集", "dummy_train_df = all_dummy_df.loc[train_df.index]\ndummy_test_df = all_dummy_df.loc[test_df.index]\n\ndummy_train_df.shape, dummy_test_df.shape", "Ridge Regression\n用Ridge Regression模型来跑一遍看看。(对于多因子的数据集,这种模型可以方便的把所有的var都无脑的放进去)", "from sklearn.linear_model import Ridge\nfrom sklearn.model_selection import cross_val_score", "这一步不是很必要,只是把DF转化成Numpy Array,这跟Sklearn更加配", "X_train = dummy_train_df.values\nX_test = dummy_test_df.values", "用Sklearn自带的cross validation方法来测试模型", "alphas = np.logspace(-3, 2, 50)\ntest_scores = []\nfor alpha in alphas:\n clf = Ridge(alpha)\n test_score = 
np.sqrt(-cross_val_score(clf, X_train, y_train, cv=10, scoring='neg_mean_squared_error'))\n test_scores.append(np.mean(test_score))", "存下所有的CV值,看看哪个alpha值更好(也就是『调参数』)", "import matplotlib.pyplot as plt\n%matplotlib inline\nplt.plot(alphas, test_scores)\nplt.title(\"Alpha vs CV Error\");", "可见,大概alpha=10~20的时候,可以把score达到0.135左右。\nRandom Forest", "from sklearn.ensemble import RandomForestRegressor\n\nmax_features = [.1, .3, .5, .7, .9, .99]\ntest_scores = []\nfor max_feat in max_features:\n clf = RandomForestRegressor(n_estimators=200, max_features=max_feat)\n test_score = np.sqrt(-cross_val_score(clf, X_train, y_train, cv=5, scoring='neg_mean_squared_error'))\n test_scores.append(np.mean(test_score))\n\nplt.plot(max_features, test_scores)\nplt.title(\"Max Features vs CV Error\");", "用RF的最优值达到了0.137\nStep 5: Ensemble\n这里我们用一个Stacking的思维来汲取两种或者多种模型的优点\n首先,我们把最好的parameter拿出来,做成我们最终的model", "ridge = Ridge(alpha=15)\nrf = RandomForestRegressor(n_estimators=500, max_features=.3)\n\nridge.fit(X_train, y_train)\nrf.fit(X_train, y_train)", "上面提到了,因为最前面我们给label做了个log(1+x), 于是这里我们需要把predit的值给exp回去,并且减掉那个\"1\"\n所以就是我们的expm1()函数。", "y_ridge = np.expm1(ridge.predict(X_test))\ny_rf = np.expm1(rf.predict(X_test))", "一个正经的Ensemble是把这群model的预测结果作为新的input,再做一次预测。这里我们简单的方法,就是直接『平均化』。", "y_final = (y_ridge + y_rf) / 2", "Step 6: 提交结果", "submission_df = pd.DataFrame(data= {'Id' : test_df.index, 'SalePrice': y_final})", "我们的submission大概长这样:", "submission_df.head(10)", "走你~" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
samoturk/HUB-ipython
notebooks/Intro to Python and Jupyter.ipynb
mit
[ "Python\nPython is widely used general-purpose high-level programming language. Its design philosophy emphasizes code readability. It is very popular in science.\nJupyter\nThe Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text.\n* Evolved from IPython notebook\n* In addition to Python it supports many other programming languages (Julija, R, Haskell, etc..)\n* http://jupyter.org/\nGetting started\nAnaconda/Conda (need to install)\n\nhttps://www.continuum.io/downloads\nI recommend PYTHON 2.7\n\nWeb hosted (only need a web browser)\n\nhttp://tmpnb.org\n\nThe notebook\nCell types - markdown and code\nThis is Markdown cell", "print('This is cell with code')", "Variables, lists and dictionaries", "var1 = 1\nmy_string = \"This is a string\"\n\nvar1\n\nprint(my_string)\n\nmy_list = [1, 2, 3, 'x', 'y']\nmy_list\n\nmy_list[0]\n\nmy_list[1:3]\n\nsalaries = {'Mike':2000, 'Ann':3000}\n\nsalaries['Mike']\n\nsalaries['Jake'] = 2500\n\nsalaries", "Strings", "long_string = 'This is a string \\n Second line of the string'\n\nprint(long_string)\n\nlong_string.split(\" \")\n\nlong_string.split(\"\\n\")\n\nlong_string.count('s') # case sensitive!\n\nlong_string.upper()", "Conditionals", "if long_string.startswith('X'):\n print('Yes')\nelif long_string.startswith('T'):\n print('It has T')\nelse:\n print('No')", "Loops", "for line in long_string.split('\\n'):\n print line\n\nc = 0\nwhile c < 10:\n c += 2\n print c", "List comprehensions", "some_numbers = [1,2,3,4]\n\n[x**2 for x in some_numbers]", "File operations", "with open('../README.md', 'r') as f:\n content = f.read()\n\nprint(content)", "Functions", "def average(numbers):\n return float(sum(numbers)/len(numbers))\n\naverage([1,2,2,2.5,3,])\n\nmap(average, [[1,2,2,2.5,3,],[3,2.3,4.2,2.5,5,]])\n\n# %load cool_events.py\n#!/usr/bin/env python\nfrom IPython.display import HTML\n\nclass HUB:\n \"\"\"\n HUB event class\n \"\"\"\n def __init__(self, version):\n self.full_name = \"Heidelberg Unseminars in Bioinformatics\"\n self.info = HTML(\"<p>Heidelberg Unseminars in Bioinformatics are participant-\"\n \"driven meetings where people with an interest in bioinformatics \" \n \"come together to discuss hot topics and exchange ideas and then go \"\n \"for a drink and a snack afterwards.</p>\")\n self.version = version\n def __repr__(self):\n return self.full_name\n\nthis_event = HUB(21)\n\nthis_event\n\nthis_event.full_name\n\nthis_event.version", "Python libraries\nLibrary is a collection of resources. These include pre-written code, subroutines, classes, etc.", "from math import exp\n\nexp(2) #shift tab to access documentation\n\nimport math\n\nmath.exp(10)\n\nimport numpy as np # Numpy - package for scientifc computing\n\n#import pandas as pd # Pandas - package for working with data frames (tables)\n\n#import Bio # BioPython - package for bioinformatics\n\n#import sklearn # scikit-learn - package for machine larning\n\n#from rdkit import Chem # RDKit - Chemoinformatics library", "Plotting", "%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nx_values = np.arange(0, 20, 0.1)\ny_values = [math.sin(x) for x in x_values]\n\nplt.plot(x_values, y_values)\n\nplt.scatter(x_values, y_values)\n\nplt.boxplot(y_values)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mclaughlin6464/pearce
notebooks/wt Integral calculation.ipynb
mit
[ "The problem with my w(\\theta) calculation appears to be fairly fundamental: calculating in a snapshot just gives very strange answers. I'm gonna try directly integratng the 3-d correlation function. I'm gonna try doing that directly, but it is possible that I'll have to emulate to get that quite right. \nThere is a prefactor which I'm going to compute directly from the redMagic data that I have.", "from pearce.mocks import cat_dict\nimport numpy as np\nfrom os import path\nfrom astropy.io import fits\nfrom astropy import constants as const, units as unit\n\nimport george\nfrom george.kernels import ExpSquaredKernel\n\nimport matplotlib\n#matplotlib.use('Agg')\nfrom matplotlib import pyplot as plt\n%matplotlib inline\nimport seaborn as sns\nsns.set()", "Load up the tptY3 buzzard mocks.", "fname = '/u/ki/jderose/public_html/bcc/measurement/y3/3x2pt/buzzard/flock/buzzard-2/tpt_Y3_v0.fits'\nhdulist = fits.open(fname)\n\nz_bins = np.array([0.15, 0.3, 0.45, 0.6, 0.75, 0.9])\nzbin=1\n\na = 0.81120\nz = 1.0/a - 1.0", "Load up a snapshot at a redshift near the center of this bin.", "print z", "This code load a particular snapshot and and a particular HOD model. In this case, 'redMagic' is the Zheng07 HOD with the f_c variable added in.", "cosmo_params = {'simname':'chinchilla', 'Lbox':400.0, 'scale_factors':[a]}\ncat = cat_dict[cosmo_params['simname']](**cosmo_params)#construct the specified catalog!\n\ncat.load_catalog(a)\n#cat.h = 1.0\n#halo_masses = cat.halocat.halo_table['halo_mvir']\n\ncat.load_model(a, 'redMagic')\n\nhdulist.info()", "Take the zspec in our selected zbin to calculate the dN/dz distribution. The below cell calculate the redshift distribution prefactor\n$$ W = \\frac{2}{c}\\int_0^{\\infty} dz H(z) \\left(\\frac{dN}{dz} \\right)^2 $$", "nz_zspec = hdulist[8]\n#N = 0#np.zeros((5,))\nN_total = np.sum([row[2+zbin] for row in nz_zspec.data])\ndNdzs = [] \nzs = []\nW = 0 \nfor row in nz_zspec.data:\n\n N = row[2+zbin]\n \n dN = N*1.0/N_total\n \n #volIn, volOut = cat.cosmology.comoving_volume(row[0]), cat.cosmology.comoving_volume(row[2])\n\n #fullsky_volume = volOut-volIn\n #survey_volume = fullsky_volume*area/full_sky\n #nd = dN/survey_volume\n \n dz = row[2] - row[0]\n \n #print row[2], row[0]\n dNdz = dN/dz\n \n H = cat.cosmology.H(row[1])\n \n W+= dz*H*(dNdz)**2\n \n dNdzs.append(dNdz)\n zs.append(row[1])\n \n \n #for idx, n in enumerate(row[3:]):\n # N[idx]+=n\nW = 2*W/const.c\n\nprint W\n\nN_z = [row[2+zbin] for row in nz_zspec.data]\nN_total = np.sum(N_z)#*0.01\nplt.plot(zs,N_z/N_total)\nplt.xlim(0,1.0)\n\nlen(dNdzs)\n\nplt.plot(zs, dNdzs)\nplt.vlines(z, 0,8)\nplt.xlim(0,1.0)\nplt.xlabel(r'$z$')\nplt.ylabel(r'$dN/dz$')\n\nlen(nz_zspec.data)\n\nnp.sum(dNdzs)\n\nnp.sum(dNdzs)/len(nz_zspec.data)\n\nW.to(1/unit.Mpc)", "If we happened to choose a model with assembly bias, set it to 0. 
Leave all parameters as their defaults, for now.", "4.51077317e-03\n\nparams = cat.model.param_dict.copy()\n#params['mean_occupation_centrals_assembias_param1'] = 0.0\n#params['mean_occupation_satellites_assembias_param1'] = 0.0\nparams['logMmin'] = 13.4\nparams['sigma_logM'] = 0.1\nparams['f_c'] = 0.19\nparams['alpha'] = 1.0\nparams['logM1'] = 14.0\nparams['logM0'] = 12.0\n\nprint params\n\ncat.populate(params)\n\nnd_cat = cat.calc_analytic_nd()\nprint nd_cat\n\narea = 5063 #sq degrees\nfull_sky = 41253 #sq degrees\n\nvolIn, volOut = cat.cosmology.comoving_volume(z_bins[zbin-1]), cat.cosmology.comoving_volume(z_bins[zbin])\n\nfullsky_volume = volOut-volIn\nsurvey_volume = fullsky_volume*area/full_sky\nnd_mock = N_total/survey_volume\nprint nd_mock\n\nnd_mock.value/nd_cat\n\n#compute the mean mass\nmf = cat.calc_mf()\nHOD = cat.calc_hod()\nmass_bin_range = (9,16)\nmass_bin_size = 0.01\nmass_bins = np.logspace(mass_bin_range[0], mass_bin_range[1], int( (mass_bin_range[1]-mass_bin_range[0])/mass_bin_size )+1 )\n\nmean_host_mass = np.sum([mass_bin_size*mf[i]*HOD[i]*(mass_bins[i]+mass_bins[i+1])/2 for i in xrange(len(mass_bins)-1)])/\\\n np.sum([mass_bin_size*mf[i]*HOD[i] for i in xrange(len(mass_bins)-1)])\nprint mean_host_mass\n\n10**0.35\n\nN_total\n\ntheta_bins = np.logspace(np.log10(0.004), 0, 24)#/60\ntpoints = (theta_bins[1:]+theta_bins[:-1])/2\n\nr_bins = np.logspace(-0.5, 1.7, 16)\nrpoints = (r_bins[1:]+r_bins[:-1])/2", "Use my code's wrapper for halotools' xi calculator. Full source code can be found here.", "xi = cat.calc_xi(r_bins, do_jackknife=False)", "Interpolate with a Gaussian process. May want to do something else \"at scale\", but this is quick for now.", "kernel = ExpSquaredKernel(0.05)\ngp = george.GP(kernel)\ngp.compute(np.log10(rpoints))\n\nprint xi\n\nxi[xi<=0] = 1e-2 #ack\n\nfrom scipy.stats import linregress\nm,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))\n\nplt.plot(rpoints, (2.22353827e+03)*(rpoints**(-1.88359)))\n#plt.plot(rpoints, b2*(rpoints**m2))\n\nplt.scatter(rpoints, xi)\nplt.loglog();\n\nplt.plot(np.log10(rpoints), b+(np.log10(rpoints)*m))\n#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))\n#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))\n\nplt.scatter(np.log10(rpoints), np.log10(xi) )\n#plt.loglog();\n\nprint m,b\n\nrpoints_dense = np.logspace(-0.5, 2, 500)\n\nplt.scatter(rpoints, xi)\nplt.plot(rpoints_dense, np.power(10, gp.predict(np.log10(xi), np.log10(rpoints_dense))[0]))\nplt.loglog();", "This plot looks bad on large scales. I will need to implement a linear bias model for larger scales; however I believe this is not the cause of this issue. The overly large correlation function at large scales if anything should increase w(theta). \nThis plot shows the regimes of concern. The black lines show the value of r for u=0 in the below integral for each theta bin. 
The red lines show the maximum value of r for the integral I'm performing.", "theta_bins_rm = np.logspace(np.log10(2.5), np.log10(250), 21)/60 #binning used in buzzard mocks\ntpoints_rm = (theta_bins_rm[1:]+theta_bins_rm[:-1])/2.0\n\nrpoints_dense = np.logspace(-1.5, 2, 500)\nx = cat.cosmology.comoving_distance(z)\n\nplt.scatter(rpoints, xi)\nplt.plot(rpoints_dense, np.power(10, gp.predict(np.log10(xi), np.log10(rpoints_dense))[0]))\nplt.vlines((a*x*np.radians(tpoints_rm)).value, 1e-2, 1e4)\nplt.vlines((a*np.sqrt(x**2*np.radians(tpoints_rm)**2+unit.Mpc*unit.Mpc*10**(1.7*2))).value, 1e-2, 1e4, color = 'r')\n\nplt.loglog();", "Perform the below integral in each theta bin:\n$$ w(\\theta) = W \\int_0^\\infty du \\xi \\left(r = \\sqrt{u^2 + \\bar{x}^2(z)\\theta^2} \\right) $$\nWhere $\\bar{x}$ is the median comoving distance to z.", "x = cat.cosmology.comoving_distance(z)\nprint x\n\n-\n\nnp.radians(tpoints_rm)\n\n#a subset of the data from above. I've verified it's correct, but we can look again. \nwt_redmagic = np.loadtxt('/u/ki/swmclau2/Git/pearce/bin/mcmc/buzzard2_wt_%d%d.npy'%(zbin,zbin))\n\ntpoints_rm\n\nmathematica_calc = np.array([122.444, 94.8279, 73.4406, 56.8769, 44.049, 34.1143, 26.4202, \\\n20.4614, 15.8466, 12.2726, 9.50465, 7.36099, 5.70081, 4.41506, \\\n3.41929, 2.64811, 2.05086, 1.58831, 1.23009, 0.952656])#*W", "The below plot shows the problem. There appears to be a constant multiplicative offset between the redmagic calculation and the one we just performed. The plot below it shows their ratio. It is near-constant, but there is some small radial trend. Whether or not it is significant is tough to say.", "print W.value\nprint W.to(\"1/Mpc\").value\nprint W.value\n\nfrom scipy.special import gamma\ndef wt_analytic(m,b,t,x):\n return W.to(\"1/Mpc\").value*b*np.sqrt(np.pi)*(t*x)**(1 + m)*(gamma(-(1./2) - m/2.)/(2*gamma(-(m/2.))) )\n\nplt.plot(tpoints_rm, wt, label = 'My Calculation')\nplt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')\n#plt.plot(tpoints_rm, W.to(\"1/Mpc\").value*mathematica_calc, label = 'Mathematica Calc')\n#plt.plot(tpoints_rm, wt_analytic(m,10**b, np.radians(tpoints_rm), x),label = 'Mathematica Calc' )\n\nplt.ylabel(r'$w(\\theta)$')\nplt.xlabel(r'$\\theta \\mathrm{[degrees]}$')\nplt.loglog();\nplt.legend(loc='best')\n\nwt_redmagic/(W.to(\"1/Mpc\").value*mathematica_calc)\n\nimport cPickle as pickle\nwith open('/u/ki/jderose/ki23/bigbrother-addgals/bbout/buzzard-flock/buzzard-0/buzzard0_lb1050_xigg_ministry.pkl') as f:\n xi_rm = pickle.load(f)\n\nxi_rm.metrics[0].xi.shape\n\nxi_rm.metrics[0].mbins\n\nxi_rm.metrics[0].cbins\n\n#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))\n#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))\n\nplt.scatter(rpoints, xi)\nfor i in xrange(3):\n for j in xrange(3):\n plt.plot(xi_rm.metrics[0].rbins[:-1], xi_rm.metrics[0].xi[:,i,j,0])\nplt.loglog();\n\nplt.subplot(211)\nplt.plot(tpoints_rm, wt_redmagic/wt)\nplt.xscale('log')\n#plt.ylim([0,10])\nplt.subplot(212)\nplt.plot(tpoints_rm, wt_redmagic/wt)\nplt.xscale('log')\nplt.ylim([2.0,4])\n\nxi_rm.metrics[0].xi.shape\n\nxi_rm.metrics[0].rbins #Mpc/h", "The below cell calculates the integrals jointly instead of separately. It doesn't change the results significantly, but is quite slow. 
I've disabled it for that reason.", "x = cat.cosmology.comoving_distance(z)*a\n#ubins = np.linspace(10**-6, 10**2.0, 1001)\nubins = np.logspace(-6, 2.0, 51)\nubc = (ubins[1:]+ubins[:-1])/2.0\n\n#NLL\ndef liklihood(params, wt_redmagic,x, tpoints):\n #print _params\n #prior = np.array([ PRIORS[pname][0] < v < PRIORS[pname][1] for v,pname in zip(_params, param_names)])\n #print param_names\n #print prior\n #if not np.all(prior):\n # return 1e9\n #params = {p:v for p,v in zip(param_names, _params)}\n #cat.populate(params)\n #nd_cat = cat.calc_analytic_nd(parmas)\n #wt = np.zeros_like(tpoints_rm[:-5])\n \n #xi = cat.calc_xi(r_bins, do_jackknife=False)\n #m,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))\n \n #if np.any(xi < 0):\n # return 1e9\n #kernel = ExpSquaredKernel(0.05)\n #gp = george.GP(kernel)\n #gp.compute(np.log10(rpoints))\n \n #for bin_no, t_med in enumerate(np.radians(tpoints_rm[:-5])):\n # int_xi = 0\n # for ubin_no, _u in enumerate(ubc):\n # _du = ubins[ubin_no+1]-ubins[ubin_no]\n # u = _u*unit.Mpc*a\n # du = _du*unit.Mpc*a\n #print np.sqrt(u**2+(x*t_med)**2)\n # r = np.sqrt((u**2+(x*t_med)**2))#*cat.h#not sure about the h\n #if r > unit.Mpc*10**1.7: #ignore large scales. In the full implementation this will be a transition to a bias model. \n # int_xi+=du*0\n #else:\n # the GP predicts in log, so i predict in log and re-exponate\n # int_xi+=du*(np.power(10, \\\n # gp.predict(np.log10(xi), np.log10(r.value), mean_only=True)[0]))\n # int_xi+=du*(10**b)*(r.to(\"Mpc\").value**m)\n\n #print (((int_xi*W))/wt_redmagic[0]).to(\"m/m\")\n #break\n # wt[bin_no] = int_xi*W.to(\"1/Mpc\")\n \n wt = wt_analytic(params[0],params[1], tpoints, x.to(\"Mpc\").value) \n chi2 = np.sum(((wt - wt_redmagic[:-5])**2)/(1e-3*wt_redmagic[:-5]) )\n \n #chi2=0\n #print nd_cat\n #print wt\n #chi2+= ((nd_cat-nd_mock.value)**2)/(1e-6)\n \n #mf = cat.calc_mf()\n #HOD = cat.calc_hod()\n #mass_bin_range = (9,16)\n #mass_bin_size = 0.01\n #mass_bins = np.logspace(mass_bin_range[0], mass_bin_range[1], int( (mass_bin_range[1]-mass_bin_range[0])/mass_bin_size )+1 )\n\n #mean_host_mass = np.sum([mass_bin_size*mf[i]*HOD[i]*(mass_bins[i]+mass_bins[i+1])/2 for i in xrange(len(mass_bins)-1)])/\\\n # np.sum([mass_bin_size*mf[i]*HOD[i] for i in xrange(len(mass_bins)-1)])\n \n #chi2+=((13.35-np.log10(mean_host_mass))**2)/(0.2)\n print chi2\n return chi2 #nll\n\nprint nd_mock\nprint wt_redmagic[:-5]\n\nimport scipy.optimize as op\n\nresults = op.minimize(liklihood, np.array([-2.2, 10**1.7]),(wt_redmagic,x, tpoints_rm[:-5]))\n\nresults\n\n#plt.plot(tpoints_rm, wt, label = 'My Calculation')\nplt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')\nplt.plot(tpoints_rm, wt_analytic(-1.88359, 2.22353827e+03,tpoints_rm, x.to(\"Mpc\").value), label = 'Mathematica Calc')\n\nplt.ylabel(r'$w(\\theta)$')\nplt.xlabel(r'$\\theta \\mathrm{[degrees]}$')\nplt.loglog();\nplt.legend(loc='best')\n\nplt.plot(np.log10(rpoints), np.log10(2.22353827e+03)+(np.log10(rpoints)*(-1.88)))\nplt.scatter(np.log10(rpoints), np.log10(xi) )\n\n\nnp.array([v for v in params.values()])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
goodwordalchemy/thinkstats_notes_and_exercises
code/.ipynb_checkpoints/chap03ex-checkpoint.ipynb
gpl-3.0
[ "Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>\nAllen Downey\nRead the female respondent file.", "%matplotlib inline\n\nimport chap01soln\nresp = chap01soln.ReadFemResp()", "Make a PMF of <tt>numkdhh</tt>, the number of children under 18 in the respondent's household.\nDisplay the PMF.\nDefine <tt>BiasPmf</tt>.", "def BiasPmf(pmf, label=''):\n \"\"\"Returns the Pmf with oversampling proportional to value.\n\n If pmf is the distribution of true values, the result is the\n distribution that would be seen if values are oversampled in\n proportion to their values; for example, if you ask students\n how big their classes are, large classes are oversampled in\n proportion to their size.\n\n Args:\n pmf: Pmf object.\n label: string label for the new Pmf.\n\n Returns:\n Pmf object\n \"\"\"\n new_pmf = pmf.Copy(label=label)\n\n for x, p in pmf.Items():\n new_pmf.Mult(x, x)\n \n new_pmf.Normalize()\n return new_pmf", "Make a the biased Pmf of children in the household, as observed if you surveyed the children instead of the respondents.\nDisplay the actual Pmf and the biased Pmf on the same axes.\nCompute the means of the two Pmfs." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
thalesians/tsa
src/jupyter/python/particle.ipynb
apache-2.0
[ "Particle filtering\nParticle filtering is not working yet - WORK IN PROGRESS!", "import os, sys\nsys.path.append(os.path.abspath('../../main/python'))\n\nimport datetime as dt\n\nimport numpy as np\nimport numpy.testing as npt\nimport matplotlib.pyplot as plt\n\nfrom thalesians.tsa.distrs import NormalDistr as N\nimport thalesians.tsa.filtering as filtering\nimport thalesians.tsa.filtering.kalman as kalman\nimport thalesians.tsa.filtering.particle as particle\nimport thalesians.tsa.numpyutils as npu\nimport thalesians.tsa.processes as proc\n\nimport importlib\nimportlib.reload(particle)\nimportlib.reload(proc)", "A single-process, univariate example\nFirst we need a process model. In this case it will be a single stochastic process,", "process = proc.WienerProcess.create_from_cov(mean=3., cov=0.0001)", "This we pass to a newly created particle filter, along with the initial time and initial state. The latter takes the form of a normal distribution. We have chosen to use Python datetimes as our data type for time, but we could have chosen ints or something else.", "t0 = dt.datetime(2017, 5, 12, 16, 18, 25, 204000)\npf = particle.ParticleFilter(t0, state_distr=N(mean=100., cov=0.0000000000001), process=process)", "Next we create an observable, which incorporates a particular observation model. In this case, the observation model is particularly simple, since we are observing the entire state of the particle filter. Our observation model is a 1x1 identity:", "observable = pf.create_observable(kalman.LinearGaussianObsModel.create(1.), process)", "Let's roll forward the time by one hour:", "t1 = t0 + dt.timedelta(hours=1)", "What is our predicted observation at this time? Since we haven't observed any actual information, this is our prior observation estimate:", "prior_predicted_obs1 = observable.predict(t1)\nprior_predicted_obs1", "We confirm that this is consistent with how our (linear-Gaussian) process model scales over time:", "np.mean(pf._prior_particles), 100. + 3./24.\n\nprior_predicted_obs1\n\nprior_predicted_obs1 = observable.predict(t1)\nnpt.assert_almost_equal(prior_predicted_obs1.distr.mean, 100. + 3./24.)\nnpt.assert_almost_equal(prior_predicted_obs1.distr.cov, 250. + 25./24.)\nnpt.assert_almost_equal(prior_predicted_obs1.cross_cov, prior_predicted_obs1.distr.cov)", "Let us now actually observe our observation. 
Say, the observation is 100.35 and the observation noise covariance is 100.0:", "observable.observe(time=t1, obs=N(mean=100.35, cov=100.0))", "Having seen an actual observation, let us obtain the posterior observation estimate:", "posterior_predicted_obs1 = observable.predict(t1); posterior_predicted_obs1", "We can now fast-forward the time, by two hours, say, and repeat the process:", "t2 = t1 + dt.timedelta(hours=2)\n \nprior_predicted_obs2 = observable.predict(t2)\nnpt.assert_almost_equal(prior_predicted_obs2.distr.mean, 100.28590504 + 2.*3./24.)\nnpt.assert_almost_equal(prior_predicted_obs2.distr.cov, 71.513353115 + 2.*25./24.)\nnpt.assert_almost_equal(prior_predicted_obs2.cross_cov, prior_predicted_obs2.distr.cov)\n \nobservable.observe(time=t2, obs=N(mean=100.35, cov=100.0))\n\nposterior_predicted_obs2 = observable.predict(t2)\nnpt.assert_almost_equal(posterior_predicted_obs2.distr.mean, 100.45709020)\nnpt.assert_almost_equal(posterior_predicted_obs2.distr.cov, 42.395213845)\nnpt.assert_almost_equal(posterior_predicted_obs2.cross_cov, posterior_predicted_obs2.distr.cov)\n", "A multi-process, multivariate example\nThe real power of our particle filter interface is demonstrated for process models consisting of several (independent) stochastic processes:", "process1 = proc.WienerProcess.create_from_cov(mean=3., cov=25.)\nprocess2 = proc.WienerProcess.create_from_cov(mean=[1., 4.], cov=[[36.0, -9.0], [-9.0, 25.0]])", "Such models are common in finance, where, for example, the dynamics of a yield curve may be represented by a (multivariate) stochastic process, whereas the idiosyncratic spread for each bond may be an independent stochastic process.\nLet us pass process1 and process2 as a (compound) process model to our particle filter, along with the initial time and state:", "t0 = dt.datetime(2017, 5, 12, 16, 18, 25, 204000)\nkf = kalman.KalmanFilter(\n t0,\n state_distr=N(\n mean=[100.0, 120.0, 130.0],\n cov=[[250.0, 0.0, 0.0],\n [0.0, 360.0, 0.0],\n [0.0, 0.0, 250.0]]),\n process=(process1, process2))", "We shall now create several observables, each corresponding to a distinct observation model. 
The first one will observe the entire state:", "state_observable = kf.create_observable(\n kalman.KalmanFilterObsModel.create(1.0, np.eye(2)),\n process1, process2)", "The second observable will observe the first coordinate of the first process:", "coord0_observable = kf.create_observable(\n kalman.KalmanFilterObsModel.create(1.),\n process1)", "The third, the first coordinate of the second process:", "coord1_observable = kf.create_observable(\n kalman.KalmanFilterObsModel.create(npu.row(1., 0.)),\n process2)", "The fourth, the second coordinate of the second process:", "coord2_observable = kf.create_observable(\n kalman.KalmanFilterObsModel.create(npu.row(0., 1.)),\n process2)", "The fifth will observe the sum of the entire state (across the two processes):", "sum_observable = kf.create_observable(\n kalman.KalmanFilterObsModel.create(npu.row(1., 1., 1.)),\n process1, process2)", "And the sixth a certain linear combination thereof:", "lin_comb_observable = kf.create_observable(\n kalman.KalmanFilterObsModel.create(npu.row(2., 0., -3.)),\n process1, process2)", "Fast-forward the time by one hour:", "t1 = t0 + dt.timedelta(hours=1)", "Let's predict the state at this time...", "predicted_obs1_prior = state_observable.predict(t1)\npredicted_obs1_prior", "And check that it is consistent with the scaling of the (multivariate) Wiener process with time:", "npt.assert_almost_equal(predicted_obs1_prior.distr.mean,\n npu.col(100.0 + 3.0/24.0, 120.0 + 1.0/24.0, 130.0 + 4.0/24.0))\nnpt.assert_almost_equal(predicted_obs1_prior.distr.cov,\n [[250.0 + 25.0/24.0, 0.0, 0.0],\n [0.0, 360.0 + 36.0/24.0, -9.0/24.0],\n [0.0, -9.0/24.0, 250 + 25.0/24.0]])\nnpt.assert_almost_equal(predicted_obs1_prior.cross_cov, predicted_obs1_prior.distr.cov)", "Suppose that a new observation arrives, and we observe each of the three coordinates individually:", "state_observable.observe(time=t1, obs=N(mean=[100.35, 121.0, 135.0],\n cov=[[100.0, 0.0, 0.0],\n [0.0, 400.0, 0.0],\n [0.0, 0.0, 100.0]]));", "Let's look at our (posterior) predicted state:", "state_observable.predict(t1)", "Let's also look at the predictions for the individual coordinates:", "coord0_observable.predict(t1)\n\ncoord1_observable.predict(t1)\n\ncoord2_observable.predict(t1)", "The predicted sum:", "sum_observable.predict(t1)", "And the predicted linear combination:", "lin_comb_observable.predict(t1)", "Let's now go 30 minutes into the future:", "t2 = t1 + dt.timedelta(minutes=30)", "And observe only the first coordinate of the second process, with a pretty high confidence:", "coord1_observable.observe(time=t2, obs=N(mean=125.25, cov=4.))", "How does our predicted state change?", "state_observable.predict(t2)", "Thirty minutes later...", "t3 = t2 + dt.timedelta(minutes=30)", "We observe the sum of the three coordinates, rather than the individual coordinates:", "sum_observable.observe(time=t3, obs=N(mean=365.00, cov=9.))", "How has our prediction of the state changed?", "state_observable.predict(t3)", "And what is its predicted sum?", "sum_observable.predict(t3)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
emmaqian/DataScientistBootcamp
DS_HW1_Huimin Qian_052617.ipynb
mit
[ "数据应用学院 Data Scientist Program\nHw1", "# import the necessary package at the very beginning\nimport numpy as np\nimport pandas as pd\n\nprint(str(float(100*177/891)) + '%')", "1. Please rewrite following functions to lambda expressions\nExample:\n```\ndef AddOne(x):\n y=x+1\n return y\naddOneLambda = lambda x: x+1\n```", "def foolOne(x): # note: assume x is a number\n y = x * 2\n y -= 25\n return y\n\n## Type Your Answer Below ##\nfoolOne_lambda = lambda x: x*2-25\n\n# Generate a random 3*4 matrix for test\ntlist = np.random.randn(3,4) \ntlist\n\n# Check if the lambda function yields same results as previous function\ndef test_foolOne(tlist, func1, func2):\n if func1(tlist).all() == func2(tlist).all():\n print(\"Same results!\")\n \ntest_foolOne(tlist, foolOne, foolOne_lambda)\n\ndef foolTwo(x): # note: assume x here is a string\n if x.startswith('g'):\n return True\n else:\n return False\n\n## Type Your Answer Below ##\nfoolTwo_lambda = lambda x: x.startswith('g')\n\n# Generate a random 3*4 matrix of strings for test\n# reference: https://pythontips.com/2013/07/28/generating-a-random-string/\n# reference: http://www.programcreek.com/python/example/1246/string.ascii_lowercase\n\nimport random\nimport string \n\ndef random_string(size):\n new_string = ''.join([random.choice(string.ascii_letters + string.digits) for n in range(size)])\n return new_string\n\ndef test_foolTwo():\n test_string = random_string(6)\n if foolTwo_lambda(test_string) == foolTwo(test_string):\n return True\n \nfor i in range(10):\n if test_foolTwo() is False:\n print('Different results!')", "2. What's the difference between tuple and list?", "## Type Your Answer Below ##\n# reference: https://docs.python.org/3/tutorial/datastructures.html\n# tuple is immutable. They cannot be changed once they are made.\n# tuples are easier for the python interpreter to deal with and therefore might end up being easier\n# tuples might indicate that each entry has a distinct meaning and their order has some meaning (e.g., year)\n# Another pragmatic reason to use tuple is when you have data which you know should not be changed (e.g., constant)\n# tuples can be used as keys in dictionaries\n# tuples usually contain a heterogeneous sequence of elements that are accessed via unpacking or indexing (or even by attribute in the case of namedtuples).\ntuple1 = (1, 2, 3, 'a', True)\nprint('tuple: ', tuple1)\nprint('1st item of tuple: ', tuple1[0])\ntuple1[0] = 4 # item assignment won't work for tuple\n\n# tuple with just one element\ntuple2 = (1) # just a number, so has no elements\nprint(type(tuple2))\ntuple2[0]\n\n# tuple with just one element\ntuple3 = (1, ) \nprint(type(tuple3))\ntuple3[0]\n\n# Question for TA: is tuple comprehension supported?\ntuple4 = (char for char in 'abcdabcdabcd' if char not in 'ac')\nprint(tuple4)\n\n# Question for TA: is the following two tuples the same?\ntuple4= (1,2,'a'),(True, False)\ntuple5 = ((1,2,'a'),(True, False))\nprint(tuple4)\nprint(tuple5)\n\n# lists' elements are usually homogeneous and are accessed by iterating over the list.\nlist1 = [1, 2, 3, 'a', True] \nprint('list1: ', list1)\nprint('1st item of list: ', list1[0])\nlist1[0] = 4 # item assignment works for list\n\n# list comprehensions\nlist_int = [element for element in list1 if type(element)==int]\nprint(\"list_int\", list2)\n\n## Type Your Answer Below ##\n# A set is an unordered collection with no duplicate elements. 
\n\n# set() can be used to eliminate duplicate entries\nlist1 = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']\nset1 = set(list1)\nprint(set1)\n\n# set can be used for membership testing\nset2 = {1, 2, 'abc', True}\nprint('abc' in set2) # membership testing\nset1[0] # set does not support indexing\n\n# set comprehensions\nset4 = {char for char in 'abcdabcdabcd' if char not in 'ac'}\nprint(set4)", "3. Why set is faster than list in python?\nAnswers:\nSet and list are implemented using two different data structures - Hash tables and Dynamic arrays.\n. Python lists are implemented as dynamic arrays (which can preserve ), which must be searched one by one to compare every single member for equality, with lookup speed O(n) depending on the size of the list.\n. Python sets are implemented as hash tables, which can directly jump and locate the bucket (the position determined by the object's hash) using hash in a constant speed O(1), regardless of the size of the set.", "# Calculate the time cost differences between set and list\nimport time\nimport random \n\ndef compute_search_speed_difference(scope): \n list1 = []\n dic1 = {}\n set1 = set(dic1)\n for i in range(0,scope):\n list1.append(i)\n set1.add(i)\n \n random_n = random.randint(0,100000) # look for this random integer in both list and set\n\n list_search_starttime = time.time()\n list_search = random_n in list1\n list_search_endtime = time.time()\n list_search_time = list_search_endtime - list_search_starttime # Calculate the look-up time in list\n #print(\"The look up time for the list is:\")\n #print(list_search_time)\n\n set_search_starttime = time.time()\n set_search = random_n in set1\n set_search_endtime = time.time()\n set_search_time = set_search_endtime - set_search_starttime # Calculate the look-up time in set\n #print(\"The look up time for the set is:\")\n #print(set_search_time)\n \n speed_difference = list_search_time - set_search_time\n return(speed_difference)\n\ndef test(testing_times, scope):\n test_speed_difference = []\n for i in range(0,testing_times):\n test_speed_difference.append(compute_search_speed_difference(scope))\n return(test_speed_difference)\n\n#print(test(1000, 100000)) # test 10 times can print out the time cost differences\nprint(\"On average, the look up time for a list is more than a set in:\")\nprint(np.mean(test(100, 1000))) ", "4. What's the major difference between array in numpy and series in pandas?\nPandas series (which can contain values of different data types) is much more general and flexible than the one-dimensional Numpy array(which can only contain one data type).\nWhile Numpy array has an implicitly defined integer used to access the values, the Pandas series has an explicitly defined index (which can be any data type) associated with the values (which gives the series object additonal capabilities).\nWhat's the relationships among Numpy, Pandas and SciPy:\n. Numpy is a libary for efficient array computations, modeled after Matlab. Arrays differ from plain Python lists in the way they are stored and handled. Array elements stay together in memory, so they can be quickly accessed. Numpy also supports quick subindexing (a[0,:,2]). Furthermore, Numpy provides vectorized mathematical functions (when you call numpy.sin(a), the sine function is applied on every element of array a), which are faster than a Python for loop.\n. Pandas library is good for analyzing tabular data for exploratory data analysis, statistics and visualization. It's used to understand the data you have.\n. 
Scipy provides a large menu of libraries for scientific computation, such as integration, interpolation, signal processing, linear algebra, statistics. It's built upon the infrastructure of Numpy. It's good for performing scientific and engineering calculations.\n. Scikit-learn is a collection of advanced machine-learning algorithms for Python. It is built upon Numpy and SciPy. It's good to use the data you have to train a machine-learning algorithm.", "## Type Your Answer Below ##\nstudent = np.array([0, 'Alex', 3, 'M'])\nprint(student) # all the values' datatype is converted to str", "Questions 5-11 are related to the titanic data (train.csv) on the kaggle website\nYou can download the data from the following link:<br />https://www.kaggle.com/c/titanic/data\n5. Read the titanic data (train.csv) into a pandas dataframe, and display a sample of the data.", "## Type Your Answer Below ##\nimport pandas as pd\ndf = pd.read_csv('https://raw.githubusercontent.com/pcsanwald/kaggle-titanic/master/train.csv')\ndf.sample(3)\n\ndf.tail(3)\n\ndf.describe()\n\ndf.info()", "6. What's the percentage of null values in 'Age'?", "## Type Your Answer Below ##\nlen(df[df.age.isnull()])/len(df)*100\n", "7. How many unique classes are in 'Embarked'?", "## Type Your Answer Below ##\ndf.embarked.value_counts()\n\nprint('number of classes: ', len(df.embarked.value_counts().index))\nprint('names of classes: ', df.embarked.value_counts().index)\n\n# Another method\nembarked_set = set(df.embarked)\nprint(df.embarked.unique())", "8. Compare survival chances between male and female passengers.\nPlease use pandas to plot a chart you think can address this question.", "## Type Your Answer Below ##\nmale_survived = df[df.survived==1][df.sex=='male']\nmale_survived_n = len(df.query('''sex=='male' and survived ==1'''))\n\nfemale_survived = df[df.survived==1][df.sex=='female']\nfemale_survived_n = len(df.query('''sex=='female' and survived ==1'''))\n\ndf_survived = pd.DataFrame({'male':male_survived_n, 'female': female_survived_n}, index=['Survived_number'])\ndf_survived\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\ndf_survived.plot(kind='bar', title='survived female and male', legend=True)\n\nsns.pointplot(x='embarked', y='survived', hue='sex', data=df, palette={'male':'blue', 'female':'pink'}, markers=[\"*\", \"o\"], linestyles=['-', '--'])\n\ngrid = sns.FacetGrid(df, col='embarked')\ngrid.map(sns.pointplot, 'pclass', 'survived', 'sex', palette={'male':'blue', 'female':'pink'}, markers=[\"*\", \"o\"], linestyles=['-', '--'])\ngrid.add_legend()\n\n# use df here: data_train is not defined in this notebook\ngrid = sns.FacetGrid(df, col='pclass')\ngrid.map(sns.barplot, 'embarked', 'age', 'sex')\ngrid.add_legend()", "Observations from barplot above:\n\nIn Pclass = 1 and 2, females have a higher mean age than males. But in Pclass = 3, females have a lower mean age than males.\nPassengers in Pclass = 1 have the highest average age, followed by Pclass = 2 and Pclass = 3.\nThe age trend across Embarked is not obvious.\n\nDecisions:\nUse 'Pclass' and 'Sex' in estimating missing values in 'Age'.\n9. Show the table of passengers who are 23 years old.", "## Type Your Answer Below ##\ndf_23 = df.query('''age==23''') # passengers who are exactly 23 years old\ndf_23", "10. 
Is there a Jack or Rose in our dataset?", "# first split each name into a list of strings on ' '\ndef format_name(df):\n    df['split_name'] = df.name.apply(lambda x: x.split(' '))\n    return df\n\nprint(df.sample(3).split_name, '\\n')\n\n# for each substring of the name, check whether \"jack\" or \"rose\" appears in it\nfor i in format_name(df).split_name:\n    for l in i:\n        if (\"jack\" in l.lower()) | (\"rose\" in l.lower()):\n            print(\"found names that contain jack or rose: \", l)", "11. What's the percentage of passengers surviving when pclass is 1?", "## Type Your Answer Below ##\ndf4 = df.query('''pclass==1''')\n\ndef percent(x):\n    m = int(x.count())\n    n = m/len(df4)\n    return(n)\n\ndf[['survived','pclass']].query('''pclass==1''').groupby(['survived']).agg({'pclass':percent})", "References\nhttps://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/\nhttps://docs.python.org/3/tutorial/datastructures.html\nhttps://stackoverflow.com/questions/2030053/random-strings-in-python" ]
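The timing comparison in the notebook above measures a single membership test with time.time(), which is dominated by timer resolution and noise. A minimal sketch using timeit, which repeats the lookup many times, usually gives a more stable picture; the container size and repetition count below are arbitrary choices, not values from the notebook.

```python
# Hedged sketch: list vs. set membership timed with timeit instead of a
# single time.time() difference. Container size and repetition count are
# arbitrary illustrative choices.
import random
import timeit

n = 100000
data_list = list(range(n))
data_set = set(data_list)
target = random.randint(0, n - 1)

list_time = timeit.timeit(lambda: target in data_list, number=1000)
set_time = timeit.timeit(lambda: target in data_set, number=1000)

print("list membership, 1000 lookups: {:.6f} s".format(list_time))
print("set membership,  1000 lookups: {:.6f} s".format(set_time))
print("set is roughly {:.0f}x faster here".format(list_time / set_time))
```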
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jayme-anchante/cv-bio
courses/python_for_data_analysis/week4_pandas.ipynb
mit
[ "Week 4: Pandas\n(2017-07-25 15:26)\n4.1. Working with Pandas Part 1\n4.1.1. Why Pandas\nBenefits: built on top of NumPy, data variety support, integration and transformation; visualization is made easier; descriptive statistics; support for time-series data. Pandas data structures: i) Series is a 1-d labeled array that support many data types; and ii) DataFrame is a 2-d labeled data, a dictionary of Series objects of differente types.\n4.1.2. Notebooks for Week 4\n4.1.3. Live Code: Why Pandas", "import pandas as pd", "Series", "ser = pd.Series(data = [ 100, 200, 300, 400, 500],\n index = ['tom', 'bob', 'nancy', 'dan', 'eric'])\nser\n\nser.index # list of indices\n\n# we can use rectangular brackets to access data at that location\nprint(ser['nancy'])\nprint(ser.loc['nancy']) # we can explicitly use the loc (location) function\n\n# accessing multiple locations\nprint(ser[['nancy', 'bob']])\nprint()\nprint(ser[[4, 3, 1]])\nprint()\nprint(ser.iloc[[2]]) # we can explicitly use the iloc (ilocation) function\n\n# check if an index exists in the Series\n'bob' in ser\n\n# multiply whole Series by two\nser * 2", "DataFrame", "# create a DataFrame from a dictionary\n\nd = {'one': pd.Series([100., 200., 300.], index = ['apple', 'ball', 'clock']),\n 'two': pd.Series([111., 222., 333., 444.], index = ['apple', 'ball', 'cerill', 'dancy'])}\n\ndf = pd.DataFrame(d)\ndf\n\ndf.index # indices\n\ndf.columns # columns\n\n# subsetting by indices\npd.DataFrame(d, index = ['dancy', 'ball', 'apple'])\n\n# subsetting, but adding a new column\npd.DataFrame(d, index = ['dancy', 'ball', 'apple'], columns = ['one', 'five'])\n\n# create a DataFrame from a Python list of dictionaries\ndata = [{'alex': 1, 'joe': 2}, {'ema': 5, 'dora': 10, 'alice': 20}]\npd.DataFrame(data) # indices are inferred\n\npd.DataFrame(data, index = ['orange', 'red']) #inserting indices\n\n# column subsetting\npd.DataFrame(data, columns = ['joe', 'dora', 'alice'])", "Basic DataFrame operations", "df\n\n# slice one column\ndf['one']\n\ndf['three'] = df['one'] * df['two']\ndf\n\n# logical operation\n\ndf['flag'] = df['one'] > 250\ndf\n\n# remove data from DataFrame using the pop function\nthree = df.pop('three')\nthree\n\ndf\n\n# we could also use the del function\n\ndel df['two']\n\ndf\n\n# creates a new column from another existing column\ndf.insert(2, 'copy_of_one', df['one'])\ndf\n\n# get the first two values and assign it to a new column\n\ndf['one_upper_half'] = df['one'][:2]\ndf", "4.1.4. Pandas: Data Ingestion\ncsv (comma-separated format) using the pandas.read_csv. json using pandas.read_json. html (hyper-text markup language) using read_html, the output is a list of Pandas DataFrames. sql (structured query language) using read_sql_query. The pandas.read_sql_table imports all sql file.\n4.1.5. 
Live Code: Data Ingestion", "!ls ./ml-latest-small # contents of the movie-lens\n\n!cat ./ml-latest-small/movies.csv\n\n!cat ./ml-latest-small/movies.csv | wc -l # number of movies\n\n!head -5 ./ml-latest-small/tags.csv\n\n!head -5 ./ml-latest-small/ratings.csv", "Let's load the movies.csv, tags.csv and ratings.csv using the pandas.read_csv function", "import pandas as pd\n\nmovies = pd.read_csv('./ml-latest-small/movies.csv')\nprint(type(movies))\nmovies.head()\n\ntags = pd.read_csv('./ml-latest-small/tags.csv')\ntags.head()\n\nratings = pd.read_csv('./ml-latest-small/ratings.csv')\nratings.head()\n\n# later we'll work on timestamps, for now we'll deleted them\ndel ratings['timestamp']\ndel tags['timestamp']", "Playing with data structures", "# extract the 0th row, notice it's indeed a Series\nrow_0 = tags.iloc[0]\ntype(row_0)\n\nprint(row_0)\n\nrow_0.index\n\nrow_0['userId']\n\n'rating' in row_0\n\nrow_0.name\n\nrow_0 = row_0.rename('first_row')\nrow_0.name\n\ntags.head()\n\ntags.index\n\ntags.columns\n\n# extract row 0, 11, 1000 from DataFrame\ntags.iloc[[0, 11, 1000]]", "4.1.6. Pandas: Descriptive Statistics\ndescribe() shows summary statistics, corr() shows pairwise Pearson coefficient of columns, min(), max(), mode(), median(). Generally the syntax is dataframe.function(), frequently used optional parameter is axis = 0 (rows) or 1 (columns). \nAlso the logical any() returns whether any element is True and all() returns whether all element is True. \nOther functions: count(), clip(), rank(), round()\n4.1.7. Live Code: Descriptive Statistics", "ratings['rating'].describe()\n\nratings['rating'].mean()\n\nratings['rating'].min()\n\nratings['rating'].max()\n\nratings['rating'].std()\n\nratings['rating'].mode()\n\nratings.corr()\n\nfilter1 = ratings['rating'] > 5\nfilter1.any()\n\nfilter2 = ratings['rating'] > 0\nfilter2.all()", "4.2. Working with Pandas Part 2\n4.2.1. Pandas: Data Cleaning\nReal world is messy: missings, outliers, invalid, NaN, None etc.\nHandling the problem: replace the value, fill the gaps, drop fields, interpolation.\nSome functions: df.replace(), df.fillna(method = 'ffill' | 'backfill') - forward fill or backward fill), df.dropna(axis = 0|1), df.interpolate().\n4.2.2. Live Code: Data Cleaning", "!ls\n\n!ls ml-latest-small/\n\nimport pandas as pd\n\nmovies = pd.read_csv('./ml-latest-small/movies.csv')\nratings = pd.read_csv('./ml-latest-small/ratings.csv')\ntags = pd.read_csv('./ml-latest-small/tags.csv')\n\nmovies.shape\n\n# is any row NULL?\nmovies.isnull().any()\n\nratings.shape\n\n# is any row NULL?\nratings.isnull().any()\n\ntags.shape\n\n# is any row NULL?\nimport numpy as np\ntags['tag'][:5] = np.nan\n\ntags.isnull().any()\n\ntags = tags.dropna()\n\n# check again: is any row NULL?\ntags.isnull().any()", "4.2.3. Pandas: Data Visualization\ndf.plot.bar() - bar charts, df.plot.box() - box plots, df.plot.hist() - histograms, df.plot() - line graphs etc.\n4.2.4. Live Code: Data Visualization", "%matplotlib inline\n\nratings.hist(column = 'rating', figsize = (10, 5));\n\nratings.boxplot(column = 'rating');", "4.2.5. Pandas: Frequent Data Operations\ndf['sensor1'] - slice a column, df[df['sensor2'] > 0] - filter out rows from a column, df['sensor4'] = df['sensor1'] ** 2 - create a new column, df.loc[10] = [10, 20, 30, 40] - insert a new row, df.drop(df.index[[5]]) - delete the 5th row from DataFrame, del df['sensor4'] - delete a column, df.groupby('student_id').mean() - mean of grades by student etc.\n4.2.6. 
Live Code: Frequent Data Operations\nSlicing", "tags['tag'].head() # head of the tag column\n\nmovies[['title', 'genres']].head() # head of the title and genres columns\n\nratings[1000:1010] # rows 1000 to 1010 from ratings df\n\nratings[-10:] # last ten rows of ratings\n\ntag_counts = tags['tag'].value_counts() # count the number of unique values in the columns tag from tags\ntag_counts[:10] # top 10 tag counts\n\ntag_counts[:10].plot(kind = 'bar', figsize = (10, 5));", "Filter", "is_highly_rated = ratings['rating'] >= 4.0 # filter movies with a rating more or equal to 4.0\nratings[is_highly_rated][-5:] # bottom 5 movies\n\nis_animation = movies['genres'].str.contains('Animation') # search for the Animation string in the genres column\nmovies[is_animation][5:15]\n\nmovies[movies['title'].str.contains('Christmas')].head() # search for movies titles that contain the string Christmas", "Groupby and Aggregate", "ratings_count = ratings[['movieId', 'rating']].groupby('rating').count() # number of movies by rating grade\nratings_count\n\naverage_rating = ratings[['movieId', 'rating']].groupby('movieId').mean() # average rating grade by movieId\naverage_rating.tail()\n\nmovie_count = ratings[['movieId', 'rating']].groupby('movieId').count() # how many ratings per movie?\nmovie_count.head()", "4.3. Working with Pandas Part 3\n4.3.1. Pandas: Merging DataFrames\npd.concat([left, right]): stack DataFrames vertically (one on top of the other)\npd.concat([left, right], axis = 1, join = 'inner'): stack DataFrames horizontally, preserve both key columns\nleft.append(right): the same of concat, but it is a DataFrame function\npd.merge(left, right, how = 'inner'): the same as concat horizontally, but dumps duplicate key columns\nLive Code", "tags.head()\n\nmovies.head()\n\nt = movies.merge(tags, on = 'movieId', how = 'inner')\nt.head()", "Combine aggregation, merging, and filters to get useful analytics", "avg_ratings = ratings.groupby('movieId', as_index = False).mean() # average movie rating\ndel avg_ratings['userId'] # delete unused columns\ndel avg_ratings['timestamp'] # delete unused column\navg_ratings.head()\n\nbox_office = movies.merge(avg_ratings, on = 'movieId', how = 'inner') # merge DataFrames\nbox_office.tail()\n\nis_highly_rated = box_office['rating'] >= 4.0\nbox_office[is_highly_rated][-5:]\n\nis_comedy = box_office['genres'].str.contains('Comedy')\nbox_office[is_comedy][:5]\n\nbox_office[is_comedy & is_highly_rated][-5:]", "4.3.2. 
Pandas: Frequent String Operations\nstr.split(): separates two strings around a delimiter character\nstr.contains(): check if a given string contains a given character\nstr.replace(): replace some characeters for another set of characeters\nstr.extract():", "import pandas as pd\nimport re\n\ncity = pd.DataFrame(('city_' + str(i) for i in range(4)), columns = ['city'])\ncity\n\n# extract words in the strings\ncity['city'].str.extract('([a-z]\\w{0,})')\n\n# extract single digit in the strings\ncity['city'].str.extract('(\\d)')\n\nimport pandas as pd\n\nmovies = pd.read_csv('./ml-latest-small/movies.csv')\nratings = pd.read_csv('./ml-latest-small/ratings.csv')\ntags = pd.read_csv('./ml-latest-small/tags.csv')\n\nmovies.head()\n\n# split 'genres' into multiple columns\nmovies_genres = movies['genres'].str.split('|', expand = True) \nmovies_genres[:10]\n\n# by default, split() will return a series of lists, by providing expand = True, we make it returns a DataFrame\n\n# add a new column for comedy genre flag\nmovies_genres['isComedy'] = movies['genres'].str.contains('Comedy')\nmovies_genres[:10]\n\n# extract the year from the title\nmovies['year'] = movies['title'].str.extract('.*\\((.*)\\).*', expand = True)\nmovies.tail()", "4.3.3. Pandas: Parsing Timestamps\nUnix time tracks the progress of time by counting the number of seconds since an arbitrary date, 1970-01-01 00:00, as per the UTC time zone. Generic data type is datatime64[ns]. \npandas.to_datetime() function parses timestamps. Now it we can filter information on dates, sort dates,", "tags['parsed_time'] = pd.to_datetime(tags['timestamp'], unit = 's')\ntags.head()\n\ntags[tags['parsed_time'] > '2015-02-01'].head()\n\ntags.sort_values(by = 'parsed_time', ascending = True)[:10]\n\ntags.dtypes\n\ntags['parsed_time'].dtype", "Are movie ratings related to the year of launch?", "average_rating = ratings[['movieId', 'rating']].groupby('movieId', as_index = False).count()\naverage_rating.head(5)\n\njoined = movies.merge(average_rating, on = 'movieId', how = 'inner')\njoined.corr()\n\nyearly_average = joined[['year', 'rating']].groupby('year', as_index = False ).count()\n\nyearly_average = yearly_average[yearly_average['year'] != '2007-'] # remove a '2007-' row from DataFrame\n\n%matplotlib inline\n\n# yearly_average.sort_values(by = 'year', ascending = True)[-20:].plot(x = 'year', y = 'rating',\n# figsize = (10, 5), grid = True)\n\nyearly_average[-20:].plot(x = 'year', y = 'rating', figsize = (10, 5), grid = True);", "4.3.4. Pandas: Summary of Movie Rating Notebook\nData Ingestion (Importing), Statistical Analysis, Data Cleaning, Data Visualization, Data Transformation, Merging DataFrames, String Operations, Timestamps.\n4.3.5. Coding Practice\n4.3.6. Pandas Discussion\n4.3.7. Pandas Efficiency - Extra Video Resource\n4.4. Assessment", "movies.isnull().any()", "(2017-07-29 21:23)" ]
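The string and timestamp cells in the notebook above depend on the local MovieLens files. The sketch below repeats the same kind of str.extract, str.contains, and pd.to_datetime(..., unit='s') calls on a tiny hand-made DataFrame, so the operations can be tried without those files; the example rows are made up.

```python
# Self-contained sketch of the string/timestamp operations used above,
# on a tiny made-up DataFrame instead of the MovieLens files.
import pandas as pd

df = pd.DataFrame({
    'title': ['Toy Story (1995)', 'Jumanji (1995)', 'Heat (1995)'],
    'genres': ['Animation|Comedy', 'Adventure|Children', 'Action|Crime'],
    'timestamp': [1260759144, 1260759179, 1260759182],  # seconds since 1970-01-01
})

df['year'] = df['title'].str.extract(r'.*\((.*)\).*', expand=False)  # pull out the year
df['isComedy'] = df['genres'].str.contains('Comedy')                 # flag a genre
df['parsed_time'] = pd.to_datetime(df['timestamp'], unit='s')        # Unix time -> datetime

print(df[['title', 'year', 'isComedy', 'parsed_time']])
```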
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
charlesll/RamPy
examples/Normalisation.ipynb
gpl-2.0
[ "Use of normalisation function\nIn this example we normalise a spectrum to different values: area, maximum, or min-max.", "%matplotlib inline\nimport sys\nsys.path.append(\"../\")\nimport numpy as np\nimport scipy\nfrom matplotlib import pyplot as plt\n\nimport rampy as rp\nfrom sklearn import preprocessing", "Signal creation\nBelow we create a fake Gaussian signal for the example.", "nb_points =100\nx = np.sort(np.random.uniform(0,100,nb_points)) # increasing point\ny = 120.0*scipy.stats.norm.pdf(x,loc=50,scale=5)\n\nplt.plot(x,y)\n\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")\nplt.show()", "We can consider that the area of the Gaussian peak should be equal to 1, as it is the value of the intergral of a Gaussian distribution.\nTo normalise the spectra, we can do:", "y_norm_area = rp.normalise(y,x=x,method=\"area\")\n\nplt.plot(x,y_norm_area)\n\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")\nplt.show()", "We could also just want the signal to be comprised between 0 and 1, so we normalise to the maximum:", "y_norm_area = rp.normalise(y,method=\"intensity\")\n\nplt.plot(x,y_norm_area)\n\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")\nplt.show()", "Now, if our signal intensity was shifted from 0 by a constant, the \"intensity\" method will not work well. For instance, I can add 0.1 to y and plot it.", "y2 = y + 1\n\nplt.plot(x,y2)\n\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")\nplt.ylim(0,12)\nplt.show()", "In this case, the \"intensity\" method will not work well:", "y_norm_area = rp.normalise(y2,method=\"intensity\")\n\nplt.plot(x,y_norm_area)\n\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")\nplt.ylim(0,1)\nplt.show()", "The signal remains shifted from 0. For safety, we can do a min-max normalisation, which will put the minimum to 0 and maximum to 1:", "y_norm_area = rp.normalise(y2,method=\"minmax\")\n\nplt.plot(x,y_norm_area)\n\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rayjustinhuang/DataAnalysisandMachineLearning
Predicting Survival on the Titanic.ipynb
mit
[ "Predicting Survival on the Titanic\nNote! Work in Progress - This notebook is not yet finished\nAn implementation in Python of the exploration of the Titanic dataset that closely follows the excellent Exploring Survival on the Titanic notebook by Megan L. Risdal found at https://www.kaggle.com/mrisdal/titanic/exploring-survival-on-the-titanic/notebook. Data preprocessing largely follows what she did though predictive modeling attempts to explore more models than just the random forest she used.\nAs an aside, this also serves as an interesting look at how some of the tasks performed in her notebook might be done in Python and, in a way, shows both languages' relative strengths and weaknesses.", "# Import necessary libraries\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.svm import SVC\nfrom sklearn.cross_validation import train_test_split, cross_val_score\nfrom sklearn import metrics", "The Dataset", "train = pd.read_csv(\"train.csv\", index_col='PassengerId')\ntest = pd.read_csv(\"test.csv\", index_col='PassengerId')\ntrain.head(3)\ntest.head(3)\n\n# print(train.shape)\n# print(test.shape)\nprint('Number of features: {}'.format(test.shape[1]))\nprint('Training samples: {}'.format(train.shape[0]))\nprint('Test samples: {}'.format(test.shape[0]))\nprint('Total number of samples: {}'.format(train.shape[0]+test.shape[0]))", "The data contains the following features:\n\nPassengerId - a number describing a unique passenger\nSurvived - the binary dependent variable indicating whether a passenger survived (1) or died (0)\nPclass - the passenger's class, from first class (1) to third class (3)\nName\nSex\nAge\nSibSp - the number of siblings or spouses aboard\nParch - the number of parents or children aboard\nTicket - the ticket number\nFare - the fare that the passenger paid\nCabin - the cabin number the passenger stayed in\nEmbarked - the port where the passenger embarked, whether at Cherbourg (C), Queenstown (Q), or Southampton (S)\n\nIt's time to explore the dataset to get a general idea of what it's like.\nExploratory Data Analysis\nWe first do some general overviews of the data via summary statistics and histograms before moving on to preprocessing.", "# First, combine datasets\ntotal = pd.concat([train, test])\n\n# View summary statistics\ntotal.describe()", "Most numerical data appear to be fairly complete, with the exception of fare (which only has one missing value) and age (which has 263 missing values). We can deal with the missing values later.\nLet's also visualize the data with histograms to see the general distribution of the data.", "# Generate histograms\nsns.set_color_codes('muted')\ntotal.hist(color='g')\nplt.tight_layout()\nplt.show()", "A fairly obvious observation here is that the PassengerId variable is not very useful -- we should drop this column. The rest of the data is quite interesting, with most passengers being somewhat young (around 20 to 30 years of age) and most people traveling without too much family.\nPclass serves as a proxy for the passengers' socioeconomic stata. 
Interestingly, the middle class appears to be the lowest in size, though not by much compared to upperclass passengers.\nLooking at the data, given that we don't have the ticket number does not appear to be too informative.", "totalwithoutnas = total.dropna()\nscattermatrix = sns.pairplot(totalwithoutnas)\nplt.show()", "Data Preprocessing\nThe first thing we should do is drop columns that will not be particularly helpful in our analysis. This includes the Ticket variable identified previously.", "total.drop('Ticket', axis=1, inplace=True)", "Feature Engineering\nA number of the variables in the data present opportunities to be further generate meaningful features. One particular feature that appears to contain a lot of meaning is the names of the passengers. As in the notebook of Megan, we will be able to extract titles (which are indicative of both gender and marriage status) and families (given by shared surnames, under the assumption that incidences of unrelated people having the same surname are trivial).\nSurnames and Titles", "Surnames = pd.DataFrame(total['Name'].str.split(\",\").tolist(), columns=['Surname', 'Rest'])\nTitles = pd.DataFrame(Surnames['Rest'].str.split(\".\").tolist(), columns=['Title', 'Rest1', 'Rest2'])\n\nSurnames.drop('Rest',axis=1,inplace=True)\nTitles = pd.DataFrame(Titles['Title'])\n\nSurnames['Surname'].str.strip()\nTitles['Title'].str.strip()\n\ntotal['Surname'] = Surnames.set_index(np.arange(1,1310))\ntotal['Title'] = Titles.set_index(np.arange(1,1310))\n\ntotal.head()", "Let's tabulate our titles against sex to see the frequency of the various titles.", "pd.crosstab(total['Sex'], total['Title'])", "We see that with the exception of Master, Mr, Miss, and Mrs, the other titles are relatively rare. We can group rare titles together to simplify our analysis. Also note that Mlle and Ms are synonymous with Miss, and Mme is synonymous with Mrs.", "raretitles = ['Dona', 'Lady', 'the Countess','Capt', 'Col', 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer']\n\ntotal.ix[total['Title'].str.contains('Mlle|Ms|Miss'), 'Title'] = 'Miss'\ntotal.ix[total['Title'].str.contains('Mme|Mrs'), 'Title'] = 'Mrs'\ntotal.ix[total['Title'].str.contains('|'.join(raretitles)), 'Title'] = 'Rare Title'\n\npd.crosstab(total['Sex'], total['Title'])\n\ntotal['Surname'].nunique()", "We have 875 unique surnames.\nFamily Sizes\nFamily size may have an impact on survival. To this end, we create a family size attribute and plot the relationship.", "total['FamilySize'] = total['SibSp'] + total['Parch'] + 1\n\ntotal['Family'] = total['Surname'] + \"_\" + total['FamilySize'].apply(str)\n\ntotal.head(1)\n\n# Plot family size\nfamsizebarplot = sns.countplot(total['FamilySize'].loc[1:len(train.index)], hue=total['Survived'])\nfamsizebarplot.set_xlabel('Family Size')\nplt.show()", "The chart above clearly shows an interesting phenomenon -- single people and families of over 4 people have a significantly lower chance of survival than those in small (2 to 4 person) families.", "# Categorize family size\ntotal['FamSizeCat'] = 'small'\ntotal.loc[(total['FamilySize'] == 1), 'FamSizeCat'] = 'singleton'\ntotal.loc[(total['FamilySize'] > 4), 'FamSizeCat'] = 'large'\n\n# Create mosaic plot\n# To be done in the future using statsmodel", "Dealing with Missing Values\nWe first check columns with missing values.", "total.isnull().sum()", "It appears that age, cabin, embarked, and fare have missing values. 
Let's first work on \"Embarked\" and \"Fare\" given that there are few enough NaN's for us to be able to manually work out what values they should have. For Cabin, given that there are 1309 samples and more than 75% of them are missing, we can probably just drop this column. It might have been useful given that location on the ship might influence their chance of survival, but data is too sparse on this particular attribute.", "total[(total['Embarked'].isnull()) | (total['Fare'].isnull())]", "Miss Icard and Mrs. Stone, both shared the same cabin, both survived, both paid the same fare, and are both of the same class, interestingly enough. Mr. Storey is of the third class and embarked from Southampton.\nVisualizing the fares by embarkation location may shed some light on where the two first class ladies embarked.", "sns.boxplot(x='Embarked',y='Fare',data=train.dropna(),hue='Pclass')\nplt.tight_layout()\nplt.show()\n\ntrainwithoutnas = train.dropna()\nprint(\"Mean fares for passengers traveling in first class:\")\nprint(trainwithoutnas[trainwithoutnas['Pclass']==1].groupby('Embarked')['Fare'].mean())\nprint(\"\\nMedian fares for passengers traveling in first class:\")\nprint(trainwithoutnas[trainwithoutnas['Pclass']==1].groupby('Embarked')['Fare'].median())", "The closest value to the $80 fare paid by both ladies for first class is very close to the mean fare paid by first class passengers embarking from Southampton, but also aligns very nicely with the median fare paid by those embarking from Cherbourg. Perhaps a swarm plot will better show how passengers are distributed.", "sns.swarmplot(x='Embarked',y='Fare',data=train.dropna(),hue='Pclass')\nplt.show()", "This is a tough call. Looking at the spread of the points, however, it seems that those that embarked from Southampton generally paid lower fares. It appears that the mean fare paid by those from Cherbourg is pulled up by the extreme outliers that paid more than \\$500 for their tickets, with a majority of first class passengers indeed paying around $80. As such, we classify the two ladies as having embarked from Cherbourg (C).", "total.loc[(62,830), 'Embarked'] = \"C\"\ntotal.loc[(62,830), 'Embarked']", "The swarm plot also shows that the passengers embarking from Southampton in third class have paid around the same fare. 
It would be reasonable to use the mean value of third class passengers from Southampton as his fare value.", "total.loc[1044,'Fare'] = total[(total['Embarked']==\"S\") & (total['Pclass']==3)]['Fare'].mean()\ntotal.loc[1044, ['Name','Fare']]", "We could do mice imputation similar to Megan's notebook via the fancyimpute package.", "AgeHistogram = total['Age'].hist(bins=20, edgecolor=\"black\")\nAgeHistogram.set_xlabel(\"Age\")\nAgeHistogram.set_ylabel(\"Count\")\nAgeHistogram.set_title(\"Age (Prior to Missing Value Imputation)\")\nplt.show()\n\nimport fancyimpute\ntotal.isnull().sum()\n\ntotalforMICE = total.drop(['Survived','Cabin','FamSizeCat','Family','Name','Surname'], axis=1)\n# totalforMICE.fillna(np.nan)\ntotalforMICE['Sex'] = pd.get_dummies(totalforMICE['Sex'])['male']\ndummycodedTitles = pd.get_dummies(totalforMICE['Title']).drop('Rare Title', axis=1)\ntotalforMICE = pd.merge(totalforMICE, dummycodedTitles, left_index=True, right_index=True, how='outer')\ntotalforMICE = totalforMICE.drop(['Title'],axis=1)\ndummycodedEmbarked = pd.get_dummies(totalforMICE['Embarked'])[['C','Q']]\ntotalforMICE = totalforMICE.join(dummycodedEmbarked).drop(['Embarked'],axis=1)\ndummycodedPclass = pd.get_dummies(totalforMICE['Pclass'], columns=[list(\"123\")]).drop(3,axis=1)\ntotalforMICE = totalforMICE.join(dummycodedPclass).drop('Pclass',axis=1)\nMICEdtotal = fancyimpute.MICE().complete(totalforMICE.values.astype(float))\n\nMICEdtotal = pd.DataFrame(MICEdtotal, columns=totalforMICE.columns)\nMICEdtotal.isnull().sum()", "We see that the MICE'd data has no more missing Age values. Plotting these values in the histogram:", "MICEAgeHistogram = MICEdtotal['Age'].hist(bins=20, edgecolor=\"black\")\nMICEAgeHistogram.set_xlabel(\"Age\")\nMICEAgeHistogram.set_ylabel(\"Count\")\nMICEAgeHistogram.set_title(\"Age (After Missing Value Imputation)\")\nplt.show()\n\nAgeHists, AgeHistAxes = plt.subplots(nrows=1,ncols=2, figsize=(10,5), sharey=True)\n\nAgeHistAxes[0].hist(total['Age'].dropna(), bins=20, edgecolor='black', normed=True)\nAgeHistAxes[0].set_xlabel(\"Age\")\nAgeHistAxes[0].set_ylabel(\"Density\")\nAgeHistAxes[0].set_title(\"Age Density (Original Data)\")\n\nAgeHistAxes[1].hist(MICEdtotal['Age'], bins=20, edgecolor='black', normed=True)\nAgeHistAxes[1].set_xlabel(\"Age\")\nAgeHistAxes[1].set_ylabel(\"Density\")\nAgeHistAxes[1].set_title(\"Age Density (After MICE)\")\n\nAgeHists.tight_layout()\nAgeHists", "Most age values were added around the 20 to 30 year-old age range, which makes sense given the distribution of the ages in the data that we had. Note that the fancyimpute version of MICE uses Bayesian Ridge Regression. 
The density is not perfectly preserved but is useful enough to proceed with the analysis.\nWe use the new Age column with the imputed values for our analysis.", "newtotal = total\nnewtotal['Age'] = MICEdtotal['Age']", "We can create some additional categorical columns based on our complete age feature -- whether the person is a child (18 or under) and whether a person is a mother (female, over 18, with children, and does not have the title \"Miss\").", "AgeandSexHist = sns.FacetGrid(newtotal.iloc[0:891,:], col = 'Sex', hue='Survived', size=5)\n# AgeandSexHist.map(sns.distplot, 'Age', kde=False, hist_kws={'edgecolor':'black','stacked':True})\nAgeandSexHist.map(plt.hist, 'Age', alpha=0.5, bins=20)\nAgeandSexHist.add_legend()\n# plt.close('all')\nplt.show(AgeandSexHist)\n\nAgeandSexHist, AgeandSexHistAxes = plt.subplots(nrows=1,ncols=2, figsize=(10,5), sharey=True)\nAgeandSexHistAxes[0].hist([newtotal.loc[0:891, 'Age'].loc[(newtotal['Sex']=='male') & (newtotal['Survived']==1)],\n newtotal.loc[0:891, 'Age'].loc[(newtotal['Sex']=='male') & (newtotal['Survived']==0)]],stacked=True, edgecolor='black', label=['Survived','Did Not Survive'], bins=24)\nAgeandSexHistAxes[1].hist([newtotal.loc[0:891, 'Age'].loc[(newtotal['Sex']=='female') & (newtotal['Survived']==1)],\n newtotal.loc[0:891, 'Age'].loc[(newtotal['Sex']=='female') & (newtotal['Survived']==0)]],stacked=True, edgecolor='black', bins=24)\nAgeandSexHistAxes[0].set_title('Survival By Age for Males')\nAgeandSexHistAxes[1].set_title('Survival By Age for Females')\nfor i in range(2):\n AgeandSexHistAxes[i].set_xlabel('Age')\nAgeandSexHistAxes[0].set_ylabel('Count')\nAgeandSexHistAxes[0].legend()\nplt.show()\n\n# Create the 'Child' variable\nnewtotal['Child'] = 1\nnewtotal.loc[newtotal['Age']>=18, 'Child'] = 0\n\npd.crosstab(newtotal['Child'],newtotal['Survived'])\n\n# Create the 'Mother' variable\nnewtotal['Mother'] = 0\nnewtotal.loc[(newtotal['Sex']=='female') & (newtotal['Parch'] > 0) & (newtotal['Age']>18) & (newtotal['Title'] != \"Miss\"), 'Mother'] = 1\n\npd.crosstab(newtotal['Mother'], newtotal['Survived'])", "Let's take a look at the dataset once again.", "newtotal.head()\n\nnewtotal.shape", "We ensure that all important categorical variables are dummy coded.", "dummycodedFamSizeCat = pd.get_dummies(newtotal['FamSizeCat']).drop('large',axis=1)\nnewtotal = newtotal.drop(['Title','Embarked','Pclass', 'Cabin', 'Name', 'Family', 'Surname'], axis=1)\nnewtotal['Sex'] = pd.get_dummies(newtotal['Sex'])['male']\nnewtotal = newtotal.join(dummycodedEmbarked)\nnewtotal = newtotal.join(dummycodedPclass)\nnewtotal = newtotal.join(dummycodedTitles)\nnewtotal = newtotal.join(dummycodedFamSizeCat)\nnewtotal.head()", "After we split the data back into training and test sets, our data set will be ready to use for modeling.", "newtrain = newtotal.loc[:891,:]\nnewtest = newtotal.loc[892:,:]", "Modeling and Prediction\nNote! Work in Progress - This notebook is not yet finished" ]
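The modeling section above is still marked as work in progress. The sketch below shows one way it might continue, assuming the newtrain frame built above with Survived as the target; it keeps only numeric/boolean columns in case any raw string columns remain, and it is an illustration, not the author's final model. Newer scikit-learn versions expose train_test_split and cross_val_score under sklearn.model_selection rather than the sklearn.cross_validation module imported at the top of the notebook.

```python
# Hedged sketch of how the unfinished modeling section might continue,
# assuming the `newtrain` frame built above ('Survived' is the target).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

y = newtrain['Survived'].astype(int)
X_feat = newtrain.drop('Survived', axis=1).select_dtypes(include=[np.number, 'bool'])

X_tr, X_val, y_tr, y_val = train_test_split(X_feat, y, test_size=0.2, random_state=42)

logreg = LogisticRegression()
logreg.fit(X_tr, y_tr)
print("Validation accuracy: {:.3f}".format(logreg.score(X_val, y_val)))
print("5-fold CV accuracy: {:.3f}".format(cross_val_score(logreg, X_feat, y, cv=5).mean()))
```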
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ctralie/TUMTopoTimeSeries2016
SlidingWindow4-Video.ipynb
apache-2.0
[ "<h1>Video Sliding Windows</h1>\n\n<p>\nSo far we restricted ourselves to 1D time series, but the idea of recovering periodic dynamics with geometry can just as easily apply to multivariate signals. In this module, we will examine sliding windows of videos as an exmaple. Many natural videos also have periodicity, such as this video of a woman doing jumping jacks\n</p>", "import io\nimport base64\nfrom IPython.display import HTML\n\nvideo = io.open('jumpingjacks.ogg', 'r+b').read()\nencoded = base64.b64encode(video)\nHTML(data='''<video alt=\"test\" controls>\n <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\" />\n </video>'''.format(encoded.decode('ascii')))", "<p>\nVideo can be decomposed into a 3D array, which has dimensions width x height x time. To tease out periodicity in geometric form, we will do the exact same thing as with sliding window 1D signal embeddings, but instead of just one sample per time shift, we need to take every pixel in every frame in the time window. The figure below depicts this\n</p>\n\n<img src = \"VideoStackTime.svg\"><BR><BR>\nTo see this visually in the video next to PCA of the embedding, look at the following video", "video = io.open('jumpingjackssliding.ogg', 'r+b').read()\nencoded = base64.b64encode(video)\nHTML(data='''<video alt=\"test\" controls>\n <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\" />\n </video>'''.format(encoded.decode('ascii')))", "<h2>PCA Preprocessing for Efficiency</h2>\n<BR>\nOne issue we have swept under the rug so far is memory consumption and computational efficiency. Doing a raw sliding window of every pixel of every frame in the video would blow up in memory. However, even though there are <code>WH</code> pixels in each frame, there are only <code>N</code> frames in the video. This means that each frame in the video can be represented in an <code>(N-1)</code> dimensional subspace of the pixel space, and the coordinates of this subspace can be used in lieu of the pixels in the sliding window embedding. This can be done efficiently with a PCA step before the sliding window embedding. Run the cell below to load code that does PCA efficiently", "#Do all of the imports and setup inline plotting\n%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.decomposition import PCA\nfrom mpl_toolkits.mplot3d import Axes3D\nimport scipy.interpolate\n\nfrom ripser import ripser\nfrom persim import plot_diagrams\nfrom VideoTools import *\n\n##Here is the actual PCA code\ndef getPCAVideo(I):\n ICov = I.dot(I.T)\n [lam, V] = linalg.eigh(ICov)\n V = V*np.sqrt(lam[None, :])\n return V", "<h2>Jumping Jacks Example Live Demo</h2>\n<BR>\nLet's now load in code that does sliding window embeddings of videos. The code is very similar to the 1D case, and it has the exact same parameters. The only difference is that each sliding window lives in a Euclidean space of dimension the number of pixels times <code>dim</code>. 
We're also using linear interpolation instead of spline interpolation to keep things fast", "def getSlidingWindowVideo(I, dim, Tau, dT):\n N = I.shape[0] #Number of frames\n P = I.shape[1] #Number of pixels (possibly after PCA)\n pix = np.arange(P)\n NWindows = int(np.floor((N-dim*Tau)/dT))\n X = np.zeros((NWindows, dim*P))\n idx = np.arange(N)\n for i in range(NWindows):\n idxx = dT*i + Tau*np.arange(dim)\n start = int(np.floor(idxx[0]))\n end = int(np.ceil(idxx[-1]))+2\n if end >= I.shape[0]:\n X = X[0:i, :]\n break\n f = scipy.interpolate.interp2d(pix, idx[start:end+1], I[idx[start:end+1], :], kind='linear')\n X[i, :] = f(pix, idxx).flatten()\n return X", "Finally, let's load in the jumping jacks video and perform PCA to reduce the number of effective pixels. <BR>\n<i>Note that loading the video may take a few seconds on the virtual image</i>", "#Load in video and do PCA to compress dimension\n(X, FrameDims) = loadImageIOVideo(\"jumpingjacks.ogg\")\nX = getPCAVideo(X)", "Now let's do a sliding window embedding and examine the sliding window embedding using TDA. As before, you should tweak the parameters of the sliding window embedding and study the effect on the geometry.", "#Given that the period is 30 frames per cycle, choose a dimension and a Tau that capture \n#this motion in the roundest possible way\n#Plot persistence diagram and PCA\ndim = 30\nTau = 1\ndT = 1\n\n#Get sliding window video\nXS = getSlidingWindowVideo(X, dim, Tau, dT)\n\n#Mean-center and normalize sliding window\nXS = XS - np.mean(XS, 1)[:, None]\nXS = XS/np.sqrt(np.sum(XS**2, 1))[:, None]\n\n#Get persistence diagrams\ndgms = ripser(XS)['dgms']\n\n#Do PCA for visualization\npca = PCA(n_components = 3)\nY = pca.fit_transform(XS)\n\n\nfig = plt.figure(figsize=(12, 6))\nplt.subplot(121)\nplot_diagrams(dgms)\nplt.title(\"1D Persistence Diagram\")\n\nc = plt.get_cmap('nipy_spectral')\nC = c(np.array(np.round(np.linspace(0, 255, Y.shape[0])), dtype=np.int32))\nC = C[:, 0:3]\nax2 = fig.add_subplot(122, projection = '3d')\nax2.set_title(\"PCA of Sliding Window Embedding\")\nax2.scatter(Y[:, 0], Y[:, 1], Y[:, 2], c=C)\nax2.set_aspect('equal', 'datalim')\nplt.show()", "<h1>Periodicities in The KTH Dataset</h1>\n<BR>\nWe will now examine videos from the <a href = \"http://www.nada.kth.se/cvap/actions/\">KTH dataset</a>, which is a repository of black and white videos of human activities. It consists of 25 subjects performing 6 different actions in each of 4 scenarios. We will use the algorithms developed in this section to measure and rank the periodicity of the different video clips.\n<h2>Varying Window Length</h2>\n<BR>\nFor our first experiment, we will be showing some precomputed results of varying the sliding window length, while choosing Tau and dT appropriately to keep the dimension and the number of points, respectively, the same in the sliding window embedding. As an example, we will apply it to one of the videos of a subject waving his hands back and forth, as shown below", "video = io.open('KTH/handwaving/person01_handwaving_d1_uncomp.ogg', 'r+b').read()\nencoded = base64.b64encode(video)\nHTML(data='''<video alt=\"test\" controls>\n <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\" />\n </video>'''.format(encoded.decode('ascii')))", "We have done some additional preprocessing, including applying a bandpass filter to each PCA pixel to cut down on drift in the video. 
Below we show a video varying the window size of the embedding and plotting the persistence diagram, \"self-similarity matrix\" (distance matrix), and PCA of the embedding, as well as an evolving plot of the maximum persistence versus window size:", "video = io.open('Handwaving_Deriv10_Block160_PCA10.ogg', 'r+b').read()\nencoded = base64.b64encode(video)\nHTML(data='''<video alt=\"test\" controls>\n <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\" />\n </video>'''.format(encoded.decode('ascii')))", "As you can see, the maximum persistence peaks at around 40 frames, which is the period of each hand wave. This is what the theory we developed for 1D time series would have predicted as the roundest window.<BR>\n<h1>Quasiperiodicity Quantification in Video</h1>\n<BR>\n<p>\nWe now examine how this pipeline can be used to detect quasiperiodicity in videos. As an example, we examine videos from high-speed glottography, or high speed videos (4000 fps) of the left and right vocal folds in the human vocal tract. When a person has a normal voice, the vocal folds oscillate in a periodic fashion. On the other hand, if they have certain types of paralysis or near chaotic dynamics, they can exhibit biphonation just as the horse whinnies did. More info can be found in <a href = \"https://arxiv.org/abs/1704.08382\">this paper</a>.\n</p>\n\n<h2>Healthy Subject</h2>\n<p>\nLet's begin by analyzing a video of a healthy person. In this example and in the following example, we will be computing both persistent H1 and persistent H2, so the code may take a bit longer to run.\n</p>\n\nQuestions\n\nWhat can we say about the vocal folds of a healthy subject based on the persistence diagram?", "video = io.open('NormalPeriodicCrop.ogg', 'r+b').read()\nencoded = base64.b64encode(video)\nHTML(data='''<video alt=\"test\" controls>\n <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\" />\n </video>'''.format(encoded.decode('ascii')))\n\n(X, FrameDims) = loadVideo(\"NormalPeriodicCrop.ogg\")\nX = getPCAVideo(X)\ndim = 70\nTau = 0.5\ndT = 1\nderivWin = 10\n\n#Take a bandpass filter in time at each pixel to smooth out noise\n[X, validIdx] = getTimeDerivative(X, derivWin)\n\n#Do the sliding window\nXS = getSlidingWindowVideo(X, dim, Tau, dT)\n\n#Mean-center and normalize sliding window\nXS = XS - np.mean(XS, 1)[:, None]\nXS = XS/np.sqrt(np.sum(XS**2, 1))[:, None]\n\n#Compute and plot persistence diagrams\nprint(\"Computing persistence diagrams...\")\ndgms = ripser(XS, maxdim=2)['dgms']\nprint(\"Finished computing persistence diagrams\")\n\nplt.figure()\nplot_diagrams(dgms)\nplt.title(\"Persistence Diagrams$\")\nplt.show()", "<h2>Subject with Biphonation</h2>\n<p>\nLet's now examine a video of someone with a vocal pathology. 
This video may still appear periodic, but if you look closely there's a subtle shift going on over time\n</p>", "video = io.open('ClinicalAsymmetry.mp4', 'r+b').read()\nencoded = base64.b64encode(video)\nHTML(data='''<video alt=\"test\" controls>\n <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\" />\n </video>'''.format(encoded.decode('ascii')))\n\n(X, FrameDims) = loadVideo(\"ClinicalAsymmetry.mp4\")\nX = getPCAVideo(X)\nX = X[0:200, :]\n#'dim':32, 'Tau':0.25, 'dT':0.25, 'derivWin':2\ndim = 100\nTau = 0.25\ndT = 0.5\nderivWin = 5\n\n#Take a bandpass filter in time at each pixel to smooth out noise\n[X, validIdx] = getTimeDerivative(X, derivWin)\n\n#Do the sliding window\nXS = getSlidingWindowVideo(X, dim, Tau, dT)\nprint(\"XS.shape = \", XS.shape)\n\n#Mean-center and normalize sliding window\nXS = XS - np.mean(XS, 1)[:, None]\nXS = XS/np.sqrt(np.sum(XS**2, 1))[:, None]\n\n#Compute and plot persistence diagrams\nprint(\"Computing persistence diagrams...\")\ndgms = ripser(XS, maxdim=2)['dgms']\nprint(\"Finished computing persistence diagrams\")\n\nplt.figure()\nplt.title(\"Persistence Diagrams$\")\nplot_diagrams(dgms)\nplt.show()", "Question:\n\nWhat shape is this? What does this say about the underlying frequencies involved?\n\n<h2>Another Subject with Biphonation</h2>\n<p>\nLet's now examine another person with a vocal pathology, this time due to mucus that is pushed out of the vocal folds every other oscillation. This time, we will look at both $\\mathbb{Z} / 2\\mathbb{Z}$ coefficients and $\\mathbb{Z} / 3 \\mathbb{Z}$ coefficients.\n</p>\n\nQuestions\n\nCan you see any changes between $\\mathbb{Z} / 2\\mathbb{Z}$ coefficients and $\\mathbb{Z} / 3 \\mathbb{Z}$ coefficients? the What shape is this? Can you relate this to something we've seen before?", "video = io.open('LTR_ED_MucusBiphonCrop.ogg', 'r+b').read()\nencoded = base64.b64encode(video)\nHTML(data='''<video alt=\"test\" controls>\n <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\" />\n </video>'''.format(encoded.decode('ascii')))\n\n(X, FrameDims) = loadVideo(\"LTR_ED_MucusBiphonCrop.ogg\")\nX = getPCAVideo(X)\nX = X[0:200, :]\n#'dim':32, 'Tau':0.25, 'dT':0.25, 'derivWin':2\ndim = 100\nTau = 1\ndT = 0.25\nderivWin = 5\n\n#Take a bandpass filter in time at each pixel to smooth out noise\n[X, validIdx] = getTimeDerivative(X, derivWin)\n\n#Do the sliding window\nXS = getSlidingWindowVideo(X, dim, Tau, dT)\nprint(\"XS.shape = \", XS.shape)\n\n#Mean-center and normalize sliding window\nXS = XS - np.mean(XS, 1)[:, None]\nXS = XS/np.sqrt(np.sum(XS**2, 1))[:, None]\n\n#Compute and plot persistence diagrams\nprint(\"Computing persistence diagrams...\")\ndgms2 = ripser(XS, maxdim=2, coeff=2)['dgms']\ndgms3 = ripser(XS, maxdim=2, coeff=3)['dgms']\nprint(\"Finished computing persistence diagrams\")\n\nplt.figure(figsize=(8, 4))\nplt.subplot(121)\nplot_diagrams(dgms2)\nplt.title(\"Persistence Diagrams $\\mathbb{Z}2$\")\nplt.subplot(122)\nplot_diagrams(dgms3)\nplt.title(\"Persistence Diagrams $\\mathbb{Z}3$\")\nplt.show()", "<h1>Summary</h1>\n<ul>\n<li>Periodicity can be studied on general time series data, including multivariate time series such as video</li>\n<li>Computational tricks, such as PCA, can be employed to make sliding window videos computationally tractable</li>\n<li>It is even possible to pick up on quasiperiodicity/biphonation in videos without doing any tracking.</li>\n</ul>" ]
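The getPCAVideo trick above diagonalizes the small N x N Gram matrix instead of working in the full pixel dimension. A quick numerical check on a random matrix, sketched below, confirms that the resulting N-dimensional coordinates preserve pairwise Euclidean distances up to numerical error, which is all that the sliding-window and persistence computations need; the matrix sizes are arbitrary.

```python
# Numerical sanity check of the Gram-matrix PCA trick used in getPCAVideo:
# the N-dimensional coordinates preserve pairwise distances of the original
# N x P frame matrix. Matrix sizes here are arbitrary.
import numpy as np
from scipy.spatial.distance import pdist

np.random.seed(0)
I = np.random.rand(50, 2000)             # 50 "frames", 2000 "pixels"

lam, V = np.linalg.eigh(I.dot(I.T))      # eigendecomposition of the Gram matrix
lam = np.maximum(lam, 0.0)               # clip tiny negative eigenvalues
V = V * np.sqrt(lam)[None, :]            # same scaling as in getPCAVideo

print(np.max(np.abs(pdist(I) - pdist(V))))   # ~1e-10: distances preserved
```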
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sinamoeini/mapp4py
examples/fracture-gcmc-tutorial/dislocation.ipynb
mit
[ "Introdcution\nThis trial describes how to create edge and screw dislocations in iron BCC strating with one unitcell containing two atoms \nBackground\nThe elastic solution for displacement field of dislocations is provided in the paper Dislocation Displacement Fields in Anisotropic Media.\nTheoritical\nThe paper mentioned in backgroud subsection deals with only one dislocation. Here we describe how to extend the solution to periodic array of dislocations. Since we are dealing with linear elasticity we can superpose (sum up) the displacement field of all the individual dislocations. Looking at the Eqs. (2-8) of abovementioned reference this boils done to finding a closed form soloution for \n$$\\sum_{m=-\\infty}^{\\infty} \\log\\left(z-ma \\right).$$\nWhere $z= x+yi$ and $a$ is a real number, equivakent to $\\mathbf{H}_{00}$ that defines the periodicity of dislocations on x direction. \nLet us simplify the problem a bit further. Since this is the component displacement field we can add or subtract constant term so for each $\\log\\left(z-ma \\right)$ we subtract a factor of $log\\left(a \\right)$, leading to\n$$\\sum_{m=-\\infty}^{\\infty} \\log\\left(\\frac{z}{a}-m \\right).$$\nLets change $z/a$ to $z$ and when we arrive the solution we will change ot back\n$$\\sum_{m=-\\infty}^{\\infty} \\log\\left(z-m \\right).$$\nObjective is to find a closed form solution for\n$$f\\left(z\\right)=\\sum_{m=-\\infty}^{\\infty} \\log\\left(z-m \\right).$$\nFirst note that\n$$\nf'\\left(z\\right)=\\frac{1}{z}+\\sum_{m=1}^{\\infty}\\frac{1}{z-m}+\\frac{1}{z+m},\n$$\nand also\n$$\n\\frac{1}{z\\mp m}=\\mp \\frac{1}{m}\\sum_{n=0}^{\\infty}\n\\left(\\pm \\frac{z}{m}\\right)^n.\n$$\nThis leads to\n$$\n\\frac{1}{z-m}+\\frac{1}{z+m}=-\\frac{2}{z}\\sum_{n=1}^{\\infty}\\left(\\frac{z}{m}\\right)^{2n},\n$$\nand subsequently\n$$\nf'\\left(z\\right)=\\frac{1}{z}-\\frac{2}{z}\\sum_{n=1}^{\\infty}\\left(z\\right)^{2n}\\sum_{m=1}^{\\infty}m^{-2n},\n$$\n$$\n=\\frac{1}{z}-\\frac{2}{z}\\sum_{n=1}^{\\infty}\\left(z\\right)^{2n}\\zeta\\left(2n\\right).\n$$\nWhere $\\zeta$ is Riemann zeta function. Since $\\zeta\\left(0\\right)=-1/2$, it simplifies to:\n$$\nf'\\left(z\\right)=-\\frac{2}{z}\\sum_{n=0}^{\\infty}\\left(z\\right)^{2n}\\zeta\\left(2n\\right)\n$$\nNote that\n$$\n-\\frac{\\pi z\\cot\\left(\\pi z\\right)}{2}=\\sum_{n=0}^{\\infty}z^{2n} \\zeta\\left(2n\\right)\n$$\nI have no idea how I figured this out but it is true. Therefore,\n$$\nf'\\left(z\\right)=\\pi\\cot\\left(\\pi z\\right).\n$$\nAt this point one can naively assume that the problem is solved (like I did) and the answer is something like:\n$$\nf\\left(z\\right)=\\log\\left[\\sin\\left(\\pi z\\right)\\right]+C,\n$$\nWhere $C$ is a constant. However, after checking this against numerical vlaues you will see that this is completely wrong. \nThe issue here is that startegy was wrong at the very begining. The sum of the displacelment of infinte dislocations will not converge since we have infinite discountinuity in displacement field. In other words they do not cancel each other they feed each other.\nBut there is still a way to salvage this. Luckily, displacement is relative quantity and we are dealing with crystals. We can easily add a discontinuity in form an integer number burger vectors to a displacement field and nothing will be affected. \nSo here is the trick: We will focus only on the displacement field of one unit cell dislocation (number 0). At each iteration we add two dislocation to its left and right. 
\nAt $n$th iterations we add a discontinuity of the form\n$$\n-\\mathrm{Sign}\\left[\\mathrm{Im}\\left(z\\right)\\right] \\pi i\n$$\nand a constant of the form:\n$$\n-2\\log n.\n$$\nIn other words and we need to evaluate: \n$$\n\\lim_{m\\to\\infty}\\sum_{n=-m}^{m}\n\\biggl{\n\\log\\left(z-n\\right)\n-\\mathrm{Sign}\\left[\\mathrm{Im}\\left(z\\right)\\right] \\pi i \n-2\\log\\left(n \\right)\n\\biggr} + \\pi,\n$$\nwhich simplifies to \n$$\n\\lim_{m\\to\\infty}\\sum_{n=-m}^{m}\\log\\left(z-n\\right)\n-\\mathrm{Sign}\\left[\\mathrm{Im}\\left(z\\right)\\right] m \\pi i \n-2\\log\\left(\\frac{m!!}{\\sqrt{\\pi}} \\right)\n$$\nNote that we added an extra $\\pi$ to displacement field for aesthetic reasons. After a lot of manipulations and tricks (meaning I dont't remember how I got here) we arrive at the following relation:\n$$\n\\lim_{m\\to\\infty}\\sum_{n=-m}^{m}\\log\\left(z-n\\right)\n-\\mathrm{Sign}\\left[\\mathrm{Im}\\left(z\\right)\\right] m \\pi i \n-2\\log\\left(\\frac{m!!}{\\sqrt{\\pi}} \\right)=\\log\\left[\\sin\\left(\\pi z\\right)\\right]\n$$\nHowever, this is only valid when \n$$-1/2 \\le\\mathrm{Re}\\left(z\\right)\\lt 1/2.$$ \nIf one exceeds this domain the answer is:\n$$\n\\boxed{\n\\log\\left[\\sin\\left(\\pi z\\right)\\right]-\\mathrm{Sign}\\left[\\mathrm{Im}\\left(z\\right)\\right]\\left \\lceil{\\mathrm{Re}\\left(\\frac{z}{2}\\right)}-\\frac{3}{4}\\right \\rceil 2 \\pi i\n}\n$$\nWhere $\\lceil . \\rceil$ is the cieling function. Of course there is probably a nicer form. Feel free to derive it\nFinal formulation\nTo account for peridicity of dislocations in $x$ direction, the expression $\\log\\left(z\\right)$ in Eqs(2-7) of the paper, it should be replaced by:\n$$\\lim_{m\\to\\infty}\\sum_{n=-m}^{m}\\log\\left(z-na\\right)\n-\\mathrm{Sign}\\left[\\mathrm{Im}\\left(z\\right)\\right] m \\pi i \n-2\\log\\left(\\frac{m\\,\\,!!}{\\sqrt{\\pi}} \\right),$$\nwhich has the closed form:\n$$\n\\boxed{\n\\log\\left[\\sin\\left(\\pi\\frac{z}{a}\\right)\\right]-\\mathrm{Sign}\\left[\\mathrm{Im}\\left(\\frac{z}{a}\\right)\\right]\\left \\lceil{\\mathrm{Re}\\left(\\frac{z}{2a}\\right)}-\\frac{3}{4}\\right \\rceil 2 \\pi i.\n}\n$$\nPreperation\nImport packages", "import numpy as np\nimport matplotlib.pyplot as plt\nimport mapp4py\nfrom mapp4py import md\nfrom lib.elasticity import rot, cubic, resize, displace, HirthEdge, HirthScrew", "Block the output of all cores except for one", "from mapp4py import mpi\nif mpi().rank!=0:\n with open(os.devnull, 'w') as f:\n sys.stdout = f;", "Define an md.export_cfg object\nmd.export_cfg has a call method that we can use to create quick snapshots of our simulation box", "xprt = md.export_cfg(\"\");", "Screw dislocation", "sim=md.atoms.import_cfg('configs/Fe_300K.cfg');\nnlyrs_fxd=2\na=sim.H[0][0];\nb_norm=0.5*a*np.sqrt(3.0);\n\nb=np.array([1.0,1.0,1.0])\ns=np.array([1.0,-1.0,0.0])/np.sqrt(2.0)", "Create a $\\langle110\\rangle\\times\\langle112\\rangle\\times\\frac{1}{2}\\langle111\\rangle$ cell\ncreate a $\\langle110\\rangle\\times\\langle112\\rangle\\times\\langle111\\rangle$ cell\nSince mapp4py.md.atoms.cell_chenge() only accepts integer values start by creating a $\\langle110\\rangle\\times\\langle112\\rangle\\times\\langle111\\rangle$ cell", "sim.cell_change([[1,-1,0],[1,1,-2],[1,1,1]])", "Remove half of the atoms and readjust the position of remaining\nNow one needs to cut the cell in half in $[111]$ direction. 
We can achive this in three steps:\n\nRemove the atoms that are above located above $\\frac{1}{2}[111]$\nDouble the position of the remiaing atoms in the said direction\nShrink the box affinly to half on that direction", "H=np.array(sim.H);\ndef _(x):\n if x[2] > 0.5*H[2, 2] - 1.0e-8:\n return False;\n else:\n x[2]*=2.0;\nsim.do(_);\n\n_ = np.full((3,3), 0.0)\n_[2, 2] = - 0.5\nsim.strain(_)", "Readjust the postions", "displace(sim,np.array([sim.H[0][0]/6.0,sim.H[1][1]/6.0,0.0]))", "Replicating the unit cell", "max_natms=100000\nH=np.array(sim.H);\nn_per_area=sim.natms/(H[0,0] * H[1,1]);\n_ =np.sqrt(max_natms/n_per_area);\nN0 = np.array([\n np.around(_ / sim.H[0][0]),\n np.around(_ / sim.H[1][1]), \n 1], dtype=np.int32)\n\nsim *= N0;\n\nH = np.array(sim.H);\nH_new = np.array(sim.H);\nH_new[1][1] += 50.0\nresize(sim, H_new, np.full((3),0.5) @ H)\n\nC_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);\nQ=np.array([np.cross(s,b)/np.linalg.norm(np.cross(s,b)),s/np.linalg.norm(s),b/np.linalg.norm(b)])\nhirth = HirthScrew(rot(C_Fe,Q), rot(b*0.5*a,Q))\n\nctr = np.full((3),0.5) @ H_new;\ns_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])\n\ndef _(x,x_d,x_dof):\n sy=(x[1]-ctr[1])/H[1, 1];\n x0=(x-ctr)/H[0, 0];\n\n if sy>s_fxd or sy<=-s_fxd:\n x_dof[1]=x_dof[2]=False;\n x+=b_norm*hirth.ave_disp(x0)\n else:\n x+=b_norm*hirth.disp(x0)\n\nsim.do(_) \n\nH = np.array(sim.H);\nH_inv = np.array(sim.B);\nH_new = np.array(sim.H);\n\nH_new[0,0]=np.sqrt(H[0,0]**2+(0.5*b_norm)**2)\nH_new[2,0]=H[2,2]*0.5*b_norm/H_new[0,0]\nH_new[2,2]=np.sqrt(H[2,2]**2-H_new[2,0]**2)\nF = np.transpose(H_inv @ H_new);\nsim.strain(F - np.identity(3))\n\nxprt(sim, \"dumps/screw.cfg\")", "putting it all together", "def make_scrw(nlyrs_fxd,nlyrs_vel,vel):\n #this is for 0K\n #c_Fe=cubic(1.5187249951755375,0.9053185628093443,0.7249256807942608);\n #this is for 300K\n c_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);\n \n #N0=np.array([80,46,5],dtype=np.int32)\n\n sim=md.atoms.import_cfg('configs/Fe_300K.cfg');\n a=sim.H[0][0];\n b_norm=0.5*a*np.sqrt(3.0);\n\n b=np.array([1.0,1.0,1.0])\n s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0)\n Q=np.array([np.cross(s,b)/np.linalg.norm(np.cross(s,b)),s/np.linalg.norm(s),b/np.linalg.norm(b)])\n c0=rot(c_Fe,Q)\n \n hirth = HirthScrew(rot(c_Fe,Q),np.dot(Q,b)*0.5*a)\n\n\n sim.cell_change([[1,-1,0],[1,1,-2],[1,1,1]])\n displace(sim,np.array([sim.H[0][0]/6.0,sim.H[1][1]/6.0,0.0]))\n\n max_natms=1000000\n n_per_vol=sim.natms/sim.vol;\n _=np.power(max_natms/n_per_vol,1.0/3.0);\n N1=np.full((3),0,dtype=np.int32);\n for i in range(0,3):\n N1[i]=int(np.around(_/sim.H[i][i]));\n\n N0=np.array([N1[0],N1[1],1],dtype=np.int32);\n sim*=N0;\n\n sim.kB=8.617330350e-5\n sim.create_temp(300.0,8569643);\n\n H=np.array(sim.H);\n H_new=np.array(sim.H);\n H_new[1][1]+=50.0\n resize(sim, H_new, np.full((3),0.5) @ H)\n ctr=np.dot(np.full((3),0.5),H_new);\n\n\n s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])\n s_vel=0.5-0.5*float(nlyrs_vel)/float(N0[1])\n\n def _(x,x_d,x_dof):\n sy=(x[1]-ctr[1])/H[1][1];\n x0=(x-ctr)/H[0][0];\n \n if sy>s_fxd or sy<=-s_fxd:\n x_d[1]=0.0;\n x_dof[1]=x_dof[2]=False;\n x+=b_norm*hirth.ave_disp(x0)\n else:\n x+=b_norm*hirth.disp(x0)\n \n if sy<=-s_vel or sy>s_vel:\n x_d[2]=2.0*sy*vel;\n\n sim.do(_) \n H = np.array(sim.H);\n H_inv = np.array(sim.B);\n H_new = np.array(sim.H);\n\n\n H_new[0,0]=np.sqrt(H[0,0]**2+(0.5*b_norm)**2)\n H_new[2,0]=H[2,2]*0.5*b_norm/H_new[0,0]\n H_new[2,2]=np.sqrt(H[2,2]**2-H_new[2,0]**2)\n F = np.transpose(H_inv @ H_new);\n sim.strain(F - 
np.identity(3))\n return N1[2],sim;", "Edge dislocation", "sim=md.atoms.import_cfg('configs/Fe_300K.cfg');\nnlyrs_fxd=2\na=sim.H[0][0];\nb_norm=0.5*a*np.sqrt(3.0);\n\nb=np.array([1.0,1.0,1.0])\ns=np.array([1.0,-1.0,0.0])/np.sqrt(2.0)\n\nsim.cell_change([[1,1,1],[1,-1,0],[1,1,-2]])\nH=np.array(sim.H);\n\ndef _(x):\n if x[0] > 0.5*H[0, 0] - 1.0e-8:\n return False;\n else:\n x[0]*=2.0;\nsim.do(_);\n_ = np.full((3,3), 0.0)\n_[0,0] = - 0.5\nsim.strain(_)\ndisplace(sim,np.array([0.0,sim.H[1][1]/4.0,0.0]))\n\nmax_natms=100000\nH=np.array(sim.H);\nn_per_area=sim.natms/(H[0, 0] * H[1, 1]);\n_ =np.sqrt(max_natms/n_per_area);\nN0 = np.array([\n np.around(_ / sim.H[0, 0]),\n np.around(_ / sim.H[1, 1]), \n 1], dtype=np.int32)\n\nsim *= N0;\n\n# remove one layer along ... direction\nH=np.array(sim.H);\nfrac=H[0,0] /N0[0]\ndef _(x):\n if x[0] < H[0, 0] /N0[0] and x[1] >0.5*H[1, 1]:\n return False;\n\nsim.do(_)\n\nH = np.array(sim.H);\nH_new = np.array(sim.H);\nH_new[1][1] += 50.0\nresize(sim, H_new, np.full((3),0.5) @ H)\n\nC_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);\n_ = np.cross(b,s)\nQ = np.array([b/np.linalg.norm(b), s/np.linalg.norm(s), _/np.linalg.norm(_)])\nhirth = HirthEdge(rot(C_Fe,Q), rot(b*0.5*a,Q))\n\n_ = (1.0+0.5*(N0[0]-1.0))/N0[0];\nctr = np.array([_,0.5,0.5]) @ H_new;\nfrac = H[0][0]/N0[0]\n\ns_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])\n\ndef _(x,x_d,x_dof):\n sy=(x[1]-ctr[1])/H[1, 1];\n x0=(x-ctr);\n if(x0[1]>0.0):\n x0/=(H[0, 0]-frac)\n else:\n x0/= H[0, 0]\n\n\n if sy>s_fxd or sy<=-s_fxd:\n x+=b_norm*hirth.ave_disp(x0);\n x_dof[0]=x_dof[1]=False;\n else:\n x+=b_norm*hirth.disp(x0);\n\n x[0]-=0.25*b_norm;\n\nsim.do(_)\n\nH = np.array(sim.H)\nH_new = np.array(sim.H);\nH_new[0, 0] -= 0.5*b_norm;\nresize(sim, H_new, np.full((3),0.5) @ H)\n\nxprt(sim, \"dumps/edge.cfg\")", "putting it all together", "def make_edge(nlyrs_fxd,nlyrs_vel,vel):\n #this is for 0K\n #c_Fe=cubic(1.5187249951755375,0.9053185628093443,0.7249256807942608);\n #this is for 300K\n c_Fe=cubic(1.3967587463636366,0.787341583191591,0.609615090769241);\n \n #N0=np.array([80,46,5],dtype=np.int32)\n\n sim=md.atoms.import_cfg('configs/Fe_300K.cfg');\n a=sim.H[0][0];\n b_norm=0.5*a*np.sqrt(3.0);\n\n b=np.array([1.0,1.0,1.0])\n s=np.array([1.0,-1.0,0.0])/np.sqrt(2.0)\n\n # create rotation matrix\n _ = np.cross(b,s)\n Q=np.array([b/np.linalg.norm(b), s/np.linalg.norm(s), _/np.linalg.norm(_)])\n hirth = HirthEdge(rot(c_Fe,Q),np.dot(Q,b)*0.5*a)\n\n # create a unit cell \n sim.cell_change([[1,1,1],[1,-1,0],[1,1,-2]])\n H=np.array(sim.H);\n def f0(x):\n if x[0]>0.5*H[0][0]-1.0e-8:\n return False;\n else:\n x[0]*=2.0;\n sim.do(f0);\n _ = np.full((3,3), 0.0)\n _[0,0] = - 0.5\n sim.strain(_)\n displace(sim,np.array([0.0,sim.H[1][1]/4.0,0.0]))\n\n max_natms=1000000\n n_per_vol=sim.natms/sim.vol;\n _=np.power(max_natms/n_per_vol,1.0/3.0);\n N1=np.full((3),0,dtype=np.int32);\n for i in range(0,3):\n N1[i]=int(np.around(_/sim.H[i][i]));\n\n N0=np.array([N1[0],N1[1],1],dtype=np.int32);\n N0[0]+=1;\n sim*=N0;\n\n\n # remove one layer along ... 
direction\n H=np.array(sim.H);\n frac=H[0][0]/N0[0]\n def _(x):\n if x[0] < H[0][0]/N0[0] and x[1]>0.5*H[1][1]:\n return False;\n\n sim.do(_)\n \n \n\n sim.kB=8.617330350e-5\n sim.create_temp(300.0,8569643);\n\n\n H = np.array(sim.H);\n H_new = np.array(sim.H);\n H_new[1][1] += 50.0\n ctr=np.dot(np.full((3),0.5),H);\n resize(sim,H_new, np.full((3),0.5) @ H)\n l=(1.0+0.5*(N0[0]-1.0))/N0[0];\n ctr=np.dot(np.array([l,0.5,0.5]),H_new);\n frac=H[0][0]/N0[0]\n\n s_fxd=0.5-0.5*float(nlyrs_fxd)/float(N0[1])\n s_vel=0.5-0.5*float(nlyrs_vel)/float(N0[1])\n\n def f(x,x_d,x_dof):\n sy=(x[1]-ctr[1])/H[1][1];\n x0=(x-ctr);\n if(x0[1]>0.0):\n x0/=(H[0][0]-frac)\n else:\n x0/= H[0][0]\n\n\n if sy>s_fxd or sy<=-s_fxd:\n x_d[1]=0.0;\n x_dof[0]=x_dof[1]=False;\n x+=b_norm*hirth.ave_disp(x0);\n else:\n x+=b_norm*hirth.disp(x0);\n \n if sy<=-s_vel or sy>s_vel:\n x_d[0]=2.0*sy*vel;\n x[0]-=0.25*b_norm;\n\n sim.do(f)\n H = np.array(sim.H)\n H_new = np.array(sim.H);\n H_new[0, 0] -= 0.5*b_norm;\n resize(sim, H_new, np.full((3),0.5) @ H)\n return N1[2], sim;\n\nnlyrs_fxd=2\nnlyrs_vel=7;\nvel=-0.004;\nN,sim=make_edge(nlyrs_fxd,nlyrs_vel,vel)\n\nxprt(sim, \"dumps/edge.cfg\")\n\n_ = np.array([[-1,1,0],[1,1,1],[1,1,-2]], dtype=np.float);\nQ = np.linalg.inv(np.sqrt(_ @ _.T)) @ _;\n\nC = rot(cubic(1.3967587463636366,0.787341583191591,0.609615090769241),Q)\n\nB = np.linalg.inv(\n np.array([\n [C[0, 0, 0, 0], C[0, 0, 1, 1], C[0, 0, 0, 1]],\n [C[0, 0, 1, 1], C[1, 1, 1, 1], C[1, 1, 0, 1]],\n [C[0, 0, 0, 1], C[1, 1, 0, 1], C[0, 1, 0, 1]]\n ]\n))\n\n_ = np.roots([B[0, 0], -2.0*B[0, 2],2.0*B[0, 1]+B[2, 2], -2.0*B[1, 2], B[1, 1]])\n\nmu = np.array([_[0],0.0]);\n\nif np.absolute(np.conjugate(mu[0]) - _[1]) > 1.0e-12:\n mu[1] = _[1];\nelse:\n mu[1] = _[2]\n\nalpha = np.real(mu);\nbeta = np.imag(mu);\n\np = B[0,0] * mu**2 - B[0,2] * mu + B[0, 1]\nq = B[0,1] * mu - B[0, 2] + B[1, 1]/ mu\n\nK = np.stack([p, q]) * np.array(mu[1], mu[0]) /(mu[1] - mu[0])\n\nK_r = np.real(K)\nK_i = np.imag(K)\n\nTr = np.stack([\n np.array(np.array([[1.0, alpha[0]], [0.0, beta[0]]])), \n np.array([[1.0, alpha[1]], [0.0, beta[1]]])\n], axis=1)\n\n\ndef u_f0(x): return np.sqrt(np.sqrt(x[0] * x[0] + x[1] * x[1]) + x[0])\ndef u_f1(x): return np.sqrt(np.sqrt(x[0] * x[0] + x[1] * x[1]) - x[0]) * np.sign(x[1]) \n\n\ndef disp(x): \n _ = Tr @ x\n return K_r @ u_f0(_) + K_i @ u_f1(_)", "Putting it all together", "_ = np.array([[-1,1,0],[1,1,1],[1,1,-2]], dtype=np.float);\nQ = np.linalg.inv(np.sqrt(_ @ _.T)) @ _;\nC = rot(cubic(1.3967587463636366,0.787341583191591,0.609615090769241),Q)\ndisp = crack(C)\n\nn = 300;\nr = 10;\ndisp_scale = 0.3;\n\nn0 = int(np.round(n/ (1 +np.pi), ))\nn1 = n - n0\n\nxs = np.concatenate((\n np.stack([np.linspace(0, -r , n0), np.full((n0,), -1.e-8)]),\n r * np.stack([np.cos(np.linspace(-np.pi, np.pi , n1)),np.sin(np.linspace(-np.pi, np.pi , n1))]), \n np.stack([np.linspace(-r, 0 , n0), np.full((n0,), 1.e-8)]),\n ), axis =1)\n\nxs_def = xs + disp_scale * disp(xs)\n\nfig, ax = plt.subplots(figsize=(10.5,5), ncols = 2)\nax[0].plot(xs[0], xs[1], \"b-\", label=\"non-deformed\");\nax[1].plot(xs_def[0], xs_def[1], \"r-.\", label=\"deformed\");" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cdt15/lingam
examples/RESIT.ipynb
mit
[ "RESIT\nImport and settings\nIn this example, we need to import numpy, pandas, and graphviz in addition to lingam.", "import numpy as np\nimport pandas as pd\nimport graphviz\nimport lingam\nfrom lingam.utils import print_causal_directions, print_dagc, make_dot\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\nprint([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])\n\nnp.set_printoptions(precision=3, suppress=True)", "Test data\nFirst, we generate a causal structure with 7 variables. Then we create a dataset with 6 variables from x0 to x5, with x6 being the latent variable for x2 and x3.", "X = pd.read_csv('nonlinear_data.csv')\n\nm = np.array([\n [0, 0, 0, 0, 0],\n [1, 0, 0, 0, 0],\n [1, 1, 0, 0, 0],\n [0, 1, 1, 0, 0],\n [0, 0, 0, 1, 0]])\n\ndot = make_dot(m)\n\n# Save pdf\ndot.render('dag')\n\n# Save png\ndot.format = 'png'\ndot.render('dag')\n\ndot", "Causal Discovery\nTo run causal discovery, we create a RESIT object and call the fit method.", "from sklearn.ensemble import RandomForestRegressor\nreg = RandomForestRegressor(max_depth=4, random_state=0)\n\nmodel = lingam.RESIT(regressor=reg)\nmodel.fit(X)", "Using the causal_order_ properties, we can see the causal ordering as a result of the causal discovery. x2 and x3, which have latent confounders as parents, are stored in a list without causal ordering.", "model.causal_order_", "Also, using the adjacency_matrix_ properties, we can see the adjacency matrix as a result of the causal discovery. The coefficients between variables with latent confounders are np.nan.", "model.adjacency_matrix_", "We can draw a causal graph by utility funciton.", "make_dot(model.adjacency_matrix_)", "Bootstrapping\nWe call bootstrap() method instead of fit(). Here, the second argument specifies the number of bootstrap sampling.", "import warnings\nwarnings.filterwarnings('ignore', category=UserWarning)\n\nn_sampling = 100\nmodel = lingam.RESIT(regressor=reg)\nresult = model.bootstrap(X, n_sampling=n_sampling)", "Causal Directions\nSince BootstrapResult object is returned, we can get the ranking of the causal directions extracted by get_causal_direction_counts() method. In the following sample code, n_directions option is limited to the causal directions of the top 8 rankings, and min_causal_effect option is limited to causal directions with a coefficient of 0.01 or more.", "cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.01, split_by_causal_effect_sign=True)", "We can check the result by utility function.", "print_causal_directions(cdc, n_sampling)", "Directed Acyclic Graphs\nAlso, using the get_directed_acyclic_graph_counts() method, we can get the ranking of the DAGs extracted. In the following sample code, n_dags option is limited to the dags of the top 3 rankings, and min_causal_effect option is limited to causal directions with a coefficient of 0.01 or more.", "dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01, split_by_causal_effect_sign=True)", "We can check the result by utility function.", "print_dagc(dagc, n_sampling)", "Probability\nUsing the get_probabilities() method, we can get the probability of bootstrapping.", "prob = result.get_probabilities(min_causal_effect=0.01)\nprint(prob)", "Bootstrap Probability of Path\nUsing the get_paths() method, we can explore all paths from any variable to any variable and calculate the bootstrap probability for each path. The path will be output as an array of variable indices. 
For example, the array [0, 1, 3] shows the path from variable X0 through variable X1 to variable X3.", "from_index = 0 # index of x0\nto_index = 3 # index of x3\n\npd.DataFrame(result.get_paths(from_index, to_index))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
deepmind/dm_construction
demos/task_difficulties.ipynb
apache-2.0
[ "Task Difficulty Levels\nThis notebook demonstrates the different levels of difficulty for each of the construction tasks.\nFor further details, see the Documentation.", "# Copyright 2020 DeepMind Technologies Limited\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Installation\n\nFrom the root of this repository, run pip install .[demos] to install both dm_construction and extra dependencies needed to run this notebook.\nInstall ffmpeg:\nCross-platform with Anaconda: conda install ffmpeg\nUbuntu: apt-get install ffmpeg\nMac with Homebrew: brew install ffmpeg", "import matplotlib.pyplot as plt\nimport dm_construction\n\ndef show_difficulties(env_, difficulties=None):\n \"\"\"Generate and plot episodes at each difficulty level.\"\"\"\n if not difficulties:\n difficulties = range(0, env_.core_env.max_difficulty + 1)\n frames = []\n for difficulty in difficulties:\n _ = env_.reset(difficulty=difficulty, curriculum_sample=False)\n frames.append(env_.core_env.last_time_step.observation[\"RGB\"].squeeze())\n\n base_size = 5\n num_frames = len(frames)\n _, axes = plt.subplots(\n 1, num_frames, squeeze=False, figsize=(base_size*num_frames, base_size))\n\n for i, rgb_observation in enumerate(frames):\n ax = axes[0, i]\n ax.imshow(rgb_observation)\n ax.set_axis_off()\n ax.set_aspect(\"equal\")\n if isinstance(difficulties[i], str):\n ax.set_title(difficulties[i])\n else:\n ax.set_title(\"difficulty = {}\".format(difficulties[i]))", "Load Environments\nFirst we will load a copy of each environment. We can reuse the same underlying\nUnity process for all of them, which makes loading a bit faster.", "# Create a new Unity process. 
Use a higher res on the camera for nicer images.\nunity_env = dm_construction.get_unity_environment(width=600, height=600)\n\n# Create one copy of each environment.\nenvs = {}\nenv_names = [\n \"marble_run\", \"covering_hard\", \"covering\", \"connecting\", \"silhouette\"]\nfor task in env_names:\n envs[task] = dm_construction.get_environment(\n task, unity_environment=unity_env, curriculum_sample=None,\n difficulty=None)", "Silhouette\nThe difficulty levels of Silhouette involve increasing the number of targets, the number of obstacles, and the maximum height of the targets.\nGeneralization involves increasing the number of targets beyond what was seen during training.", "# Curriculum difficulties.\nshow_difficulties(envs[\"silhouette\"], difficulties=[0, 1, 2, 3])\nshow_difficulties(envs[\"silhouette\"], difficulties=[4, 5, 6, 7])\n\n# Generalization.\nshow_difficulties(envs[\"silhouette\"], difficulties=[\"double_the_targets\"])", "Connecting\nThe difficulty levels in Connecting involve increasing the number of obstacles, the number of layers of obstacles, and the height of the targets.\nGeneralization in connecting involves having mixed heights of the targets, or adding an additional layer of obstacles (and also increasing the height of the targets).", "# Curriculum difficulties.\nshow_difficulties(envs[\"connecting\"], difficulties=[0, 1, 2, 3, 4])\nshow_difficulties(envs[\"connecting\"], difficulties=[5, 6, 7, 8, 9])\n\n# Generalization.\nshow_difficulties(envs[\"connecting\"], difficulties=[\"mixed_height_targets\", \"additional_layer\"])", "Covering\nThe difficulty levels in the Covering task involves increasing the number of obstacles and the maximum height of the obstacles.", "# Curriculum difficulties.\nshow_difficulties(envs[\"covering\"])", "Covering Hard\nLike in Covering, the difficulty levels involve increasing the number of obstacles and the maximum height of the obstacles.", "# Curriculum difficulties.\nshow_difficulties(envs[\"covering_hard\"])", "Marble Run\nThe difficulty levels in Marble Run involve the distance between the ball and the goal, the number of obstacles, and the height of the target.", "# Curriculum difficulties.\nshow_difficulties(envs[\"marble_run\"], difficulties=[0, 1, 2, 3, 4])\nshow_difficulties(envs[\"marble_run\"], difficulties=[5, 6, 7, 8])", "Close Environments\nThe Unity environment won't get garbage collected since it is actually running as a separate process, so make sure to always shut down all environments after they are finished running.", "for name, env in envs.items():\n print(\"Closing '{}'\".format(name))\n env.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yassineAlouini/first-steps-data-science
first_steps_in_data_science.ipynb
mit
[ "First steps in data science with Python \nInstallation\nFor new comers, I recommend using the Anacaonda distribution. You can download it from here. \nIf you are familiar with Python, create a conda environment and install the needed libraries (using the environment.yml file provided in this repository): conda env create -f environment.yml. \nFinally, activate the environement using: conda activate workshop\nThe Python data science ecosystem\nJupyter notebook\nJupyter notebook is the code environment we will be using today. <br>\nPreviously known as ipython notebook, it is an interactive environment that makes prototyping easier for data scientists.\nPandas\nPandas is the primary toolbox used for collecting and cleaning datasets from various data sources. <br>\nMost of the concepts that we are exploring today can be found in the following great cheatsheet\nMatplotlib\nMatplotlib is the standard and de facto Python library for creating visualizations.\nNumerical and statistical (numpy, scipy, statsmodels)\nAlongside the above tools, Python offers a set of numerical and statistical packages to perform data analysis. \nThe most famous ones are: \n\nnumpy: Base N-dimensional array package\nscipy: Fundamental library for scientific computing\nstatsmodels: Statistical computations and models for Python\n\nKeep in mind that most of the capabilites of the above package are integrated within the Pandas library.\nTidy data\nThis is a very important concept when doing data science. To demonstrate how important it is, let's start by creating a messy one and tidying it.", "import pandas as pd\nmessy_df = pd.DataFrame({'2016': [1000, 2000, 3000], \n '2017': [1200, 1300, 4000], \n 'company': \n ['slack', 'twitter', 'twitch']\n })", "Here, we have created a fictional dataset that contains earnings for years 2016 and 2017", "messy_df", "You might ask, what is the problem with this dataset? <br>\nThere are two main ones:\n\nThe coloumns 2016 and 2017 contain the same type of variable (earnings)\nThe columns 2016 and 2017 contain an information about the year \n\nNow that we have a \"messy\" dataset, let's clean it.", "tidy_df = pd.melt(messy_df, id_vars=['company'],\n value_name='earnings', \n var_name='year')\n\ntidy_df", "That's much better! <br>\nIn summary, a tidy dataset has the following properties: \n\nEach column represents only one variable\nEach row represents an observation\n\nExample\nImport pacakges", "import pandas as pd\nimport missingno as msno", "Loading data\nKaggle offers many free datasets with lots of metadata, descriptions, kernels, discussions and so on. <br>\nToday, we will be working with the San Francisco Salaries dataset. You can download it from here (you need a Kaggle account) or get it from the workshop repository.\nThe dataset we will be working with is a CSV file. Fortunately for us, Pandas has a handy method .read_csv.\nLet's try it out!", "sf_slaries_df = pd.read_csv('data/Salaries.csv')", "Data exploration", "sf_slaries_df.head(3).transpose()\n\nsf_slaries_df.sample(5).transpose()\n\nsf_slaries_df.columns\n\nsf_slaries_df.dtypes\n\nsf_slaries_df.describe()\n\nmsno.matrix(sf_slaries_df)", "Some analysis\nWhat are the different job titles? How many?", "sf_slaries_df.JobTitle.value_counts()\n\nsf_slaries_df.JobTitle.nunique()", "Highest and lowest salaries per year? 
Which jobs?", "sf_slaries_df.groupby('Year').TotalPay.agg(['min', 'max'])\n\nlowest_idx = sf_slaries_df.groupby('Year').apply(lambda df: df.TotalPay.argmin())\n\nsf_slaries_df.loc[lowest_idx, ['Year', 'JobTitle']]\n\nhighest_idx = sf_slaries_df.groupby('Year').apply(lambda df: df.TotalPay.argmax())\n\nsf_slaries_df.loc[highest_idx, ['Year', 'JobTitle']]", "To wrap up\nIn todays's workshop, you have learned: \n\nAbout the Python data science ecosystem (some of its parts at least)\nThe concept of a tidy dataset\nHow to load a dataset using Pandas\nHow to explore a dataset\n\nI hope this was insightful! <br>\nSee you at a next workshop hopefully.\nReferences/ To go beyond\nI hope you have enjoyed this workshop. To continue learning, I recommend the following:\n\nA blog post on how to become a data scientist: https://www.dataquest.io/blog/how-to-become-a-data-scientist/ \nConsider trying dataquest and/or datacamp if you want to learn more about data science using Python. Notice that they both offer some free content but most of it is available for a monthly subscription\nQuora: one of the best places to ask and answer questions about data science (and any other subject more generally)\nKaggle: this is a great place to hone your data science skills through producing and reading different kernels (these are their internal variation of notebooks)\nMore generally follow great data scientists. Some that I really enjoy reading their work (in no particular order): \nWes Mckinney: original creator of Pandas.\nTom Augspurger: core contributor of Pandas. Has written the modern Pandas blog posts series (a most read).\nJake VanderPlas: a data scientist in academia (as he defines himself)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/mri/cmip6/models/sandbox-3/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: MRI\nSource ID: SANDBOX-3\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:19\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mri', 'sandbox-3', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Meteorological Forcings\n5. Key Properties --&gt; Resolution\n6. Key Properties --&gt; Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --&gt; Absorption\n12. Optical Radiative Properties --&gt; Mixtures\n13. Optical Radiative Properties --&gt; Impact Of H2o\n14. Optical Radiative Properties --&gt; Radiative Scheme\n15. Optical Radiative Properties --&gt; Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Resolution\nResolution in the aersosol model grid\n5.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. 
Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of transport in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. Convention\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTransport by convention", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCharacteristics of the &quot;other method&quot; used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --&gt; Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. Dust\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --&gt; Mixtures\n**\n12.1. External\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. 
Internal\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixinrg rule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --&gt; Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --&gt; Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Shortwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --&gt; Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. 
Gas Phase Precursors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
maojrs/riemann_book
Make_html_animations.ipynb
bsd-3-clause
[ "Make animations for webpage\nCreate html versions of some animations to be uploaded to the webpage. Links from the pdf version of the book will go to these versions for readers who are only reading the pdf.\nNote that make_html_on_master.py will copy everything from html_animations into build_html when creating webpage version.", "%matplotlib inline\n\nfrom IPython.display import FileLink", "Acoustics\nAnimation to link from Acoustics.ipynb.", "from exact_solvers import acoustics_demos\n\ndef make_bump_animation_html(numframes, file_name):\n video_html = acoustics_demos.bump_animation(numframes)\n f = open(file_name,'w')\n f.write('<html>\\n')\n file_name = 'acoustics_bump_animation.html'\n descr = \"\"\"<h1>Acoustics Bump Animation</h1>\n This animation is to accompany \n <a href=\"http://www.clawpack.org/riemann_book/html/Acoustics.html\">this\n notebook</a>,\\n from the book <a\n href=\"http://www.clawpack.org/riemann_book/index.html\">Riemann Problems and\n Jupyter Solutions</a>\\n\"\"\"\n f.write(descr)\n f.write(\"<p>\")\n f.write(video_html)\n print(\"Created \", file_name)\n f.close()\n\n\nfile_name = 'html_animations/acoustics_bump_animation.html'\nanim = make_bump_animation_html(numframes=50, file_name=file_name)\nFileLink(file_name)", "Burgers\nAnimations to link from Burgers.ipynb.", "from exact_solvers import burgers_demos\n\nfrom importlib import reload\nreload(burgers_demos)\n\nvideo_html = burgers_demos.bump_animation(numframes = 50)\nfile_name = 'html_animations/burgers_animation0.html'\nf = open(file_name,'w')\nf.write('<html>\\n')\ndescr = \"\"\"<h1>Burgers' Equation Animation</h1>\n This animation is to accompany \n <a href=\"http://www.clawpack.org/riemann_book/html/Burgers.html\">this\n notebook</a>,\\n from the book <a\n href=\"http://www.clawpack.org/riemann_book/index.html\">Riemann Problems and\n Jupyter Solutions</a>\\n\n <p>\n Burgers' equation with hump initial data, evolving into a shock wave\n followed by a rarefaction wave.\"\"\"\nf.write(descr)\nf.write(\"<p>\")\nf.write(video_html)\nprint(\"Created \", file_name)\nf.close()\nFileLink(file_name)\n\ndef make_burgers_animation_html(ql, qm, qr, file_name):\n video_html = burgers_demos.triplestate_animation(ql,qm,qr,numframes=50)\n f = open(file_name,'w')\n f.write('<html>\\n')\n descr = \"\"\"<h1>Burgers' Equation Animation</h1>\n This animation is to accompany \n <a href=\"http://www.clawpack.org/riemann_book/html/Burgers.html\">this\n notebook</a>,\\n from the book <a\n href=\"http://www.clawpack.org/riemann_book/index.html\">Riemann Problems and\n Jupyter Solutions</a>\\n\n <p>\n Burgers' equation with three constant states as initial data,\\n\n ql = %.1f, qm = %.1f, qr = %.1f\"\"\" % (ql,qm,qr)\n f.write(descr)\n f.write(\"<p>\")\n f.write(video_html)\n print(\"Created \", file_name)\n f.close()\n\nfile_name = 'html_animations/burgers_animation1.html'\nmake_burgers_animation_html(4., 2., 0., file_name)\nFileLink(file_name)\n\nfile_name = 'html_animations/burgers_animation2.html'\nmake_burgers_animation_html(4., -1.5, 0.5, file_name)\nFileLink(file_name)\n\nfile_name = 'html_animations/burgers_animation3.html'\nmake_burgers_animation_html(-1., 3., -2., file_name)\nFileLink(file_name)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
astroumd/GradMap
notebooks/Lectures2018/Lecture3/Lecture3_Gaussians-Answer Key.ipynb
gpl-3.0
[ "Gaussians\nYou just learned a little about what a Gaussian distribution looks like. As a reminder, a Gaussian curve is sometimes called a bell curve because the shape looks like a bell.\nTo review, the equation for the Gaussian curve is the following:\n$f(x) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}}e^{\\frac{-(x-\\mu)^2}{2\\sigma^2}}$\nwhere $\\mu$ is the mean and $\\sigma$ is the standard deviation. \nThe standard normal distribution, where $\\mu=0$ and $\\sigma=1$, is selected for by calling np.random.randn().\nYou're probably wondering why Gaussian, a.k.a. normal, distributions are so important. The reason is that the distributions of many things follow a normal distribution -- such as the heights of people, manufactured parts, blood pressure readings, and error measurements -- making it important to understand.\nThere are specific metrics that describe a normal distribution.\n1) The mean, median, and mode of a Gaussian distribution are all the same.\n2) There is symmetry about the mean, as in 50% of the values fall to the right of the mean and the other 50% fall to the left.\n3) A certain amount of data falls within integer multiples of the standard deviation, as shown below.\n\nDoes the lifetimes data we plotted earlier hold up to these three criteria? Let's find out.\nRemember, the lifetimes data was imported as the variable lifetimes before.", "lifemean = np.mean(lifetimes) #get mean\nlifestd = np.std(lifetimes) #get standard deviation", "Let's examine the first criterion: the mean, median, and mode of a Gaussian distribution are all the same.\nTo calculate the mode, we need to import another module called the stats module. The median can still be calculated from the numpy module.", "#import stats module\nfrom scipy import stats", "Now calculate the median and mode of the variable lifetimes and display them.", "#your code here\nlifemode = stats.mode(lifetimes) #calculate mode\nlifemedian = np.median(lifetimes) #calculate median\n\nprint(lifemean)\nprint(lifemode)\nprint(lifemedian)", "Does the lifetimes data fulfill the first criterion of a Gaussian distribution?\nNow let's check the second criterion. Is there symmetry about the mean?\nFirst, let's find out how many samples are in the variable lifetimes and display it.", "#your code here\nnumsamp = len(lifetimes)\nprint(numsamp)", "Now that you have the number of samples, you will need to use the median value to find out how many samples lie above and below it.", "#Put your code here\n\n#why doesn't this work?\n#uppermask = lifetimes>lifemedian\n#upperhalf = lifetimes(uppermask) #this should work, but doesn't?\n#lowermask = lifetimes<=lifemedian\n#lowerhalf = lifetimes(lowermask) #ditto\n\n#but this does?\nupperhalf = [ii for ii in lifetimes if ii>lifemedian] #get upper 50%\nlowerhalf = [jj for jj in lifetimes if jj<=lifemedian] #get lower 50%\n\nupperperc = len(upperhalf)/numsamp\nlowerperc = len(lowerhalf)/numsamp\n\nprint(upperperc)\nprint(lowerperc)", "Does the lifetimes data fulfill the second criterion of a Gaussian distribution?\nNow let's check the last criterion. 
How much data falls within a standard deviation or two (or three)?\nRemember, you already calculated the standard deviation of the lifetimes data as the variable lifestd.", "#Put your code here\n\nplus_std = (lifemedian+1*lifestd, lifemedian+2*lifestd, lifemedian+3*lifestd)\nminus_std = (lifemedian-1*lifestd, lifemedian-2*lifestd, lifemedian-3*lifestd)\naboveperc = [None]*3\nbelowperc = [None]*3\n\nii=0\nwhile ii<len(plus_std):\n data_above = [jj for jj in lifetimes if jj>lifemedian and jj<plus_std[ii]]\n aboveperc[ii] = len(data_above)/numsamp\n \n data_below = [kk for kk in lifetimes if kk<=lifemedian and kk>minus_std[ii]]\n belowperc[ii] = len(data_below)/numsamp\n \n ii+=1\n print('% of data within', ii, 'standard deviations of the median:', aboveperc[ii-1]+belowperc[ii-1])", "Does the lifetimes data fulfill the third criterion of a Gaussian distribution?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
skipamos/code_guild
wk0/notebooks/challenges/reverse_string/reverse_string_challenge.ipynb
mit
[ "<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>\nChallenge Notebook\nProblem: Implement a function to reverse a string (a list of characters), in-place.\n\nConstraints\nTest Cases\nAlgorithm\nCode\nUnit Test\nSolution Notebook\n\nConstraints\n\n\nCan I assume the string is ASCII?\n\nYes\nNote: Unicode strings could require special handling depending on your language\n\n\n\nSince Python string are immutable, can I use a list of characters instead?\n\nYes\n\n\n\nTest Cases\n\nNone -> None\n[''] -> ['']\n['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f']\n\nAlgorithm\nIf you finish this quickly, try implementing three different ways.\nCode", "def list_of_chars(list_chars):\n # TODO: Implement me\n pass", "Unit Test\nThe following unit test is expected to fail until you solve the challenge.", "# %load test_reverse_string.py\nfrom nose.tools import assert_equal\n\n\nclass TestReverse(object):\n\n def test_reverse(self):\n assert_equal(list_of_chars(None), None)\n assert_equal(list_of_chars(['']), [''])\n assert_equal(list_of_chars(\n ['f', 'o', 'o', ' ', 'b', 'a', 'r']),\n ['r', 'a', 'b', ' ', 'o', 'o', 'f'])\n print('Success: test_reverse')\n\n\ndef main():\n test = TestReverse()\n test.test_reverse()\n\n\nif __name__ == '__main__':\n main()", "Solution Notebook\nReview the Solution Notebook for a discussion on algorithms and code solutions." ]
[ "markdown", "code", "markdown", "code", "markdown" ]