markdown | code | output | license | path | repo_name
---|---|---|---|---|---
PyTorch backend - Only the import changes | # Change gluon_prototype to pytorch_prototype
from monk.pytorch_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.resnet_v2_bottleneck_block(output_channels=64, downsample=False));
gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False); | Pytorch Version: 1.2.0
Experiment Details
Project: sample-project-1
Experiment: sample-experiment-1
Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/blocks/workspace/sample-project-1/sample-experiment-1/
Model Details
Loading pretrained model
Model Loaded on device
Model name: Custom Model
Num layers in model: 6
Num trainable layers: 6
| Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
Keras backend - Only the import changes | #Change gluon_prototype to keras_prototype
from monk.keras_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.resnet_v2_bottleneck_block(output_channels=64, downsample=False));
gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False); | Keras Version: 2.2.5
Tensorflow Version: 1.12.0
Experiment Details
Project: sample-project-1
Experiment: sample-experiment-1
Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/blocks/workspace/sample-project-1/sample-experiment-1/
Model Details
Loading pretrained model
Model Loaded on device
Model name: Custom Model
Num layers in model: 11
Num trainable layers: 10
| Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
Appendix

Study links:
- https://towardsdatascience.com/residual-blocks-building-blocks-of-resnet-fd90ca15d6ec
- https://medium.com/@MaheshNKhatri/resnet-block-explanation-with-a-terminology-deep-dive-989e15e3d691
- https://medium.com/analytics-vidhya/understanding-and-implementation-of-residual-networks-resnets-b80f9a507b9c
- https://hackernoon.com/resnet-block-level-design-with-deep-learning-studio-part-1-727c6f4927ac

Creating block using traditional MXNet (Gluon) - Code credits: https://mxnet.incubator.apache.org/ | # Traditional-MXNet-gluon
import mxnet as mx
from mxnet.gluon import nn
from mxnet.gluon.nn import HybridBlock, BatchNorm
from mxnet.gluon.contrib.nn import HybridConcurrent, Identity
from mxnet import gluon, init, nd
import numpy as np  # needed below for np.zeros
import netron       # needed below to visualize the exported model
def _conv3x3(channels, stride, in_channels):
return nn.Conv2D(channels, kernel_size=3, strides=stride, padding=1,
use_bias=False, in_channels=in_channels)
class ResnetBlockV1(HybridBlock):
def __init__(self, channels, stride, in_channels=0, **kwargs):
super(ResnetBlockV1, self).__init__(**kwargs)
#Common Elements
self.bn0 = nn.BatchNorm();
self.relu0 = nn.Activation('relu');
#Branch - 1
#Identity
# Branch - 2
self.body = nn.HybridSequential(prefix='')
self.body.add(nn.Conv2D(channels//4, kernel_size=1, strides=stride,
use_bias=False, in_channels=in_channels))
self.body.add(nn.BatchNorm())
self.body.add(nn.Activation('relu'))
self.body.add(_conv3x3(channels//4, stride, in_channels))
self.body.add(nn.BatchNorm())
self.body.add(nn.Activation('relu'))
self.body.add(nn.Conv2D(channels, kernel_size=1, strides=stride,
use_bias=False, in_channels=in_channels))
def hybrid_forward(self, F, x):
x = self.bn0(x);
x = self.relu0(x);
residual = x
x = self.body(x)
x = residual+x
return x
# Invoke the block
block = ResnetBlockV1(64, 1)
# Initialize network and load block on machine
ctx = [mx.cpu()];
block.initialize(init.Xavier(), ctx = ctx);
block.collect_params().reset_ctx(ctx)
block.hybridize()
# Run data through network
x = np.zeros((1, 64, 224, 224));
x = mx.nd.array(x);
y = block.forward(x);
print(x.shape, y.shape)
# Export Model to Load on Netron
block.export("final", epoch=0);
netron.start("final-symbol.json", port=8082) | (1, 64, 224, 224) (1, 64, 224, 224)
Serving 'final-symbol.json' at http://localhost:8082
| Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
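For reference, the computation performed by the block defined above follows the pre-activation ("V2") ordering; writing sigma for the ReLU activation, W1 and W3 for the 1x1 convolutions and W2 for the 3x3 convolution, it reads:

$$h = \sigma\big(\mathrm{BN}_0(x)\big), \qquad y = h + W_3 \ast \sigma\Big(\mathrm{BN}_2\big(W_2 \ast \sigma(\mathrm{BN}_1(W_1 \ast h))\big)\Big)$$

The shortcut branch carries the pre-activated input h unchanged, which is why the input and output shapes printed above are identical.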
Creating block using traditional PyTorch - Code credits: https://pytorch.org/ | # Traditional-PyTorch
import torch
from torch import nn
from torch.jit.annotations import List
import torch.nn.functional as F
import netron  # needed below to visualize the exported ONNX model
def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=dilation, groups=groups, bias=False, dilation=dilation)
def conv1x1(in_planes, out_planes, stride=1):
"""1x1 convolution"""
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
class ResnetBottleNeckBlock(nn.Module):
expansion = 1
__constants__ = ['downsample']
def __init__(self, inplanes, planes, stride=1, groups=1,
base_width=64, dilation=1, norm_layer=None):
super(ResnetBottleNeckBlock, self).__init__()
norm_layer = nn.BatchNorm2d
#Common elements
self.bn0 = norm_layer(inplanes);
self.relu0 = nn.ReLU(inplace=True);
# Branch - 1
#Identity
# Branch - 2
self.conv1 = conv1x1(inplanes, planes//4, stride)
self.bn1 = norm_layer(planes//4)
self.relu1 = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes//4, planes//4, stride)
self.bn2 = norm_layer(planes//4)
self.relu2 = nn.ReLU(inplace=True)
self.conv3 = conv1x1(planes//4, planes)
self.stride = stride
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
x = self.bn0(x);
x = self.relu0(x);
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu1(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu2(out)
out = self.conv3(out)
out += identity
return out
# Invoke the block
block = ResnetBottleNeckBlock(64, 64, stride=1);
# Initialize network and load block on machine
layers = []
layers.append(block);
net = nn.Sequential(*layers);
# Run data through network
x = torch.randn(1, 64, 224, 224)
y = net(x)
print(x.shape, y.shape);
# Export Model to Load on Netron
torch.onnx.export(net, # model being run
x, # model input (or a tuple for multiple inputs)
"model.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['input'], # the model's input names
output_names = ['output'], # the model's output names
dynamic_axes={'input' : {0 : 'batch_size'}, # variable length axes
'output' : {0 : 'batch_size'}})
netron.start('model.onnx', port=9998);
| torch.Size([1, 64, 224, 224]) torch.Size([1, 64, 224, 224])
Serving 'model.onnx' at http://localhost:9998
| Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
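As an optional sanity check (not part of the original notebook), the exported model.onnx can also be run with onnxruntime, assuming that package is installed; the input name 'input' is the one passed to torch.onnx.export above.

```python
# Hedged sketch: run the exported ONNX graph and confirm the output shape.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
x_np = np.random.randn(1, 64, 224, 224).astype(np.float32)
onnx_out = sess.run(None, {"input": x_np})[0]
print(onnx_out.shape)  # expected: (1, 64, 224, 224)
```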
Creating block using traditional Keras - Code credits: https://keras.io/ | # Traditional-Keras
import keras
import keras.layers as kla
import keras.models as kmo
import tensorflow as tf
from keras.models import Model
backend = 'channels_last'
from keras import layers
import netron  # needed below to visualize the saved model
def resnet_conv_block(input_tensor,
kernel_size,
filters,
stage,
block,
strides=(1, 1)):
filters1, filters2, filters3 = filters
bn_axis = 3
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
#Common Elements
start = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '0a')(input_tensor)
start = layers.Activation('relu')(start)
# Branch - 1
# Identity
shortcut = start
# Branch - 2
x = layers.Conv2D(filters1, (1, 1), strides=strides,
kernel_initializer='he_normal',
name=conv_name_base + '2a')(start)
x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(filters2, (3, 3), strides=strides,
kernel_initializer='he_normal',
name=conv_name_base + '2b', padding="same")(x)
x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(filters3, (1, 1),
kernel_initializer='he_normal',
name=conv_name_base + '2c')(x);
x = layers.add([x, shortcut])
x = layers.Activation('relu')(x)
return x
def create_model(input_shape, kernel_size, filters, stage, block):
img_input = layers.Input(shape=input_shape);
x = resnet_conv_block(img_input, kernel_size, filters, stage, block)
return Model(img_input, x);
# Invoke the block
kernel_size=3;
filters=[16, 16, 64];
input_shape=(224, 224, 64);
model = create_model(input_shape, kernel_size, filters, 0, "0");
# Run data through network
x = tf.placeholder(tf.float32, shape=(1, 224, 224, 64))
y = model(x)
print(x.shape, y.shape)
# Export Model to Load on Netron
model.save("final.h5");
netron.start("final.h5", port=8082) | (1, 224, 224, 64) (1, 224, 224, 64)
Stopping http://localhost:8082
Serving 'final.h5' at http://localhost:8082
| Apache-2.0 | study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks in Deep Learning Networks/8) Resnet V2 Bottleneck Block (Type - 2).ipynb | take2rohit/monk_v1 |
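One portability note: tf.placeholder used above only exists in TensorFlow 1.x graph mode (the run shown uses Keras 2.2.5 / TF 1.12). A minimal sketch, assuming the same block were rebuilt with tf.keras under TensorFlow 2.x, where the shape check is done with an eager tensor instead of a placeholder:

```python
# Sketch under the TF 2.x / tf.keras assumption: no placeholders, feed an eager tensor.
import tensorflow as tf

x = tf.random.normal((1, 224, 224, 64))
y = model(x)             # 'model' built by the same create_model() as above
print(x.shape, y.shape)  # expected: (1, 224, 224, 64) (1, 224, 224, 64)
```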
from google.colab import drive
drive.mount('/content/drive')
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten, Dropout
from tensorflow.keras.applications.resnet import preprocess_input
from tensorflow.keras.applications import xception
import pandas as pd
import PIL
import matplotlib.pyplot as plt
import os
import shutil | _____no_output_____ | MIT | RocksResnetTrainer.ipynb | malcolmrite-dsi/RockVideoClassifier |
|
Training | train_datagen = keras.preprocessing.image.ImageDataGenerator(validation_split=0.2, preprocessing_function=preprocess_input)
train_generator = train_datagen.flow_from_directory(
'/content/drive/My Drive/Module 2 shared folder/samples',
subset="training",
seed=3,
target_size=(64, 64),
batch_size=64,
class_mode='categorical')
val_generator = train_datagen.flow_from_directory( '/content/drive/My Drive/Module 2 shared folder/samples',
subset="validation",
seed=3,
target_size=(64, 64),
batch_size=64,
class_mode='categorical')
train_datagen = keras.preprocessing.image.ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = train_datagen.flow_from_directory(
'/content/drive/My Drive/Module 2 shared folder/samples',
subset="training",
seed=3,
target_size=(64, 64),
batch_size=64,
class_mode='categorical')
resnet = keras.applications.ResNet50(include_top=False, pooling="max", input_shape=(64,64,3))
# mark loaded layers as not trainable
for layer in resnet.layers:
layer.trainable = False
data_augmentation = tf.keras.Sequential([
keras.layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical"),
#keras.layers.experimental.preprocessing.RandomRotation(0.2),
])
# mark loaded layers as not trainable
#for layer in resnet.layers:
#layer.trainable = False
flat = Flatten()(resnet.layers[-1].output)
dense = Dense(1024, activation='relu')(flat)
output = Dense(5, activation='softmax')(dense)
model = Model(inputs=resnet.inputs, outputs=output)
model.summary()
model.compile(loss="categorical_crossentropy", optimizer=keras.optimizers.Adam(), metrics=["categorical_accuracy"])
checkpoint_best = keras.callbacks.ModelCheckpoint("/content/drive/My Drive/model_best.h5",
monitor='loss', verbose=0, save_best_only=True, save_weights_only=False, save_freq='epoch')
checkpoint = keras.callbacks.ModelCheckpoint("/content/drive/My Drive/model_last.h5",
verbose=0, save_best_only=False, save_weights_only=False, save_freq='epoch')
model.fit(
train_generator,
epochs = 5,
validation_data=val_generator,
callbacks=[checkpoint_best]
)
model.evaluate(val_generator)
model.fit(
train_generator,
initial_epoch=10,
epochs = 20,
validation_data=val_generator, callbacks=[checkpoint, checkpoint_best]
)
model.save("/content/drive/My Drive/model_best_64.h5") | _____no_output_____ | MIT | RocksResnetTrainer.ipynb | malcolmrite-dsi/RockVideoClassifier |
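A possible follow-up step (not in the original notebook) is to reload one of the saved checkpoints later for evaluation or prediction; the path below is the one used in model.save above, and val_generator is the validation generator defined earlier.

```python
# Hedged sketch: reload the trained model saved above and reuse it.
from tensorflow import keras

reloaded = keras.models.load_model("/content/drive/My Drive/model_best_64.h5")
reloaded.evaluate(val_generator)         # same metric as during training
probs = reloaded.predict(val_generator)  # per-class probabilities, shape (N, 5)
```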
02 - AC SAF GOME-2 - Produce gridded dataset (L3)

>> Optional: Introduction to Python and Project Jupyter

Project Jupyter

"Project Jupyter exists to develop open-source software, open-standards, and services for interactive computing across dozens of programming languages."

Project Jupyter offers different tools to facilitate interactive computing, either with a web-based application (`Jupyter Notebooks`), an interactive development environment (`JupyterLab`) or via a `JupyterHub` that brings interactive computing to groups of users.

* **Jupyter Notebook** is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text.
* **JupyterLab 1.0: Jupyter’s Next-Generation Notebook Interface** JupyterLab is a web-based interactive development environment for Jupyter notebooks, code, and data.
* **JupyterHub** JupyterHub brings the power of notebooks to groups of users. It gives users access to computational environments and resources without burdening the users with installation and maintenance tasks. Users - including students, researchers, and data scientists - can get their work done in their own workspaces on shared resources which can be managed efficiently by system administrators.

Why Jupyter Notebooks?

* Started with Python support, now **support of over 40 programming languages, including Python, R, Julia, ...**
* Notebooks can **easily be shared via GitHub, NBViewer, etc.**
* **Code, data and visualizations are combined in one place**
* A great tool for **teaching**
* **JupyterHub allows you to access an environment ready to code**

Installation

Installing Jupyter using Anaconda

Anaconda comes with the Jupyter Notebook installed. You just have to download Anaconda and follow the installation instructions. Once installed, the Jupyter Notebook can be started with: | jupyter notebook | _____no_output_____ | MIT | 90_workshops/202012_EUM_short_course_gridded_dataset/01_introduction_to_python_and_jupyter.ipynb | trivedi-c/atm_Practical3 |
Installing Jupyter with pip

Experienced Python users may want to install Jupyter using Python's package manager `pip`. With `Python3` you do: | python3 -m pip install --upgrade pip
python3 -m pip install jupyter | _____no_output_____ | MIT | 90_workshops/202012_EUM_short_course_gridded_dataset/01_introduction_to_python_and_jupyter.ipynb | trivedi-c/atm_Practical3 |
To run the notebook, you run the same command as with Anaconda in the terminal: | jupyter notebook | _____no_output_____ | MIT | 90_workshops/202012_EUM_short_course_gridded_dataset/01_introduction_to_python_and_jupyter.ipynb | trivedi-c/atm_Practical3 |
Jupyter notebooks UI

* Notebook dashboard
  * Create new notebook
* Notebook editor (UI)
  * Menu
  * Toolbar
  * Notebook area and cells
* Cell types
  * Code
  * Markdown
* Edit (green) vs. Command mode (blue)

Notebook editor User Interface (UI)

Shortcuts

Get an overview of the shortcuts by hitting `H` or go to `Help/Keyboard shortcuts`

Most useful shortcuts

* `Esc` - switch to command mode
* `B` - insert below
* `A` - insert above
* `M` - Change current cell to Markdown
* `Y` - Change current cell to code
* `DD` - Delete cell
* `Enter` - go back to edit mode
* `Esc + F` - Find and replace on your code
* `Shift + Down / Upwards` - Select multiple cells
* `Shift + M` - Merge multiple cells

Cell magics

Magic commands can make your life a lot easier, as you only have one command instead of an entire function or multiple lines of code.

> Go to an [extensive overview of magic commands]()

Some of the handy ones

**Overview of available magic commands** | %lsmagic | _____no_output_____ | MIT | 90_workshops/202012_EUM_short_course_gridded_dataset/01_introduction_to_python_and_jupyter.ipynb | trivedi-c/atm_Practical3 |
**See and set environment variables** | %env | _____no_output_____ | MIT | 90_workshops/202012_EUM_short_course_gridded_dataset/01_introduction_to_python_and_jupyter.ipynb | trivedi-c/atm_Practical3 |
**Install and list libraries** | !pip install numpy
!pip list | grep pandas | _____no_output_____ | MIT | 90_workshops/202012_EUM_short_course_gridded_dataset/01_introduction_to_python_and_jupyter.ipynb | trivedi-c/atm_Practical3 |
**Write cell content to a Python file** | %%writefile hello_world.py
print('Hello World') | _____no_output_____ | MIT | 90_workshops/202012_EUM_short_course_gridded_dataset/01_introduction_to_python_and_jupyter.ipynb | trivedi-c/atm_Practical3 |
**Load a Python file** | %pycat hello_world.py | _____no_output_____ | MIT | 90_workshops/202012_EUM_short_course_gridded_dataset/01_introduction_to_python_and_jupyter.ipynb | trivedi-c/atm_Practical3 |
**Get the time of cell execution** | %%time
tmpList = []
for i in range(100):
tmpList.append(i+i)
print(tmpList) | _____no_output_____ | MIT | 90_workshops/202012_EUM_short_course_gridded_dataset/01_introduction_to_python_and_jupyter.ipynb | trivedi-c/atm_Practical3 |
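A closely related magic, not listed above, is `%%timeit`, which runs the cell body several times and reports an averaged timing instead of a single measurement:

```python
%%timeit
tmpList = []
for i in range(100):
    tmpList.append(i + i)
```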
**Show matplotlib plots inline** | %matplotlib inline | _____no_output_____ | MIT | 90_workshops/202012_EUM_short_course_gridded_dataset/01_introduction_to_python_and_jupyter.ipynb | trivedi-c/atm_Practical3 |
Adding functionality so we can create a DataFrame from a string representation of a dict | import ast
def str_to_dict(string):
return ast.literal_eval(string)
import pandas as pd
class MySubClass(pd.DataFrame):
def from_str(self, string):
df_obj = super().from_dict(str_to_dict(string))
df_obj.my_string_attribute = string
return df_obj
data = "{'col_1' : ['a','b'], 'col2': [1, 2]}"
obj = MySubClass().from_str(data)
type(obj)
obj
obj.my_string_attribute
sales = [{'account': 'Jones LLC', 'Jan': 150, 'Feb': 200, 'Mar': 140},
{'account': 'Alpha Co', 'Jan': 200, 'Feb': 210, 'Mar': 215},
{'account': 'Blue Inc', 'Jan': 50, 'Feb': 90, 'Mar': 95 }]
df = MySubClass(sales)
df
type(df) | _____no_output_____ | MIT | 6_a_extending_dataframe_capabilities.ipynb | mj111312/pandas_basics |
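Two design notes on the subclassing approach above, with a hedged alternative sketch. First, because `_constructor` is not overridden, most pandas operations on MySubClass return plain DataFrames, and attributes such as my_string_attribute are not propagated. Second, pandas also offers registered accessors as a lighter-weight way to attach custom constructors or methods without subclassing; the accessor name `strdict` below is just an illustrative example.

```python
# Alternative sketch: a registered accessor instead of a subclass.
import ast
import pandas as pd


@pd.api.extensions.register_dataframe_accessor("strdict")  # hypothetical accessor name
class StrDictAccessor:
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    @staticmethod
    def from_str(string):
        # Build an ordinary DataFrame from a string representation of a dict.
        return pd.DataFrame(ast.literal_eval(string))


df = pd.DataFrame(sales)  # 'sales' as defined in the cell above
new_df = df.strdict.from_str("{'col_1': ['a','b'], 'col2': [1, 2]}")
```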
[Workshops: Big Data Technologies](https://github.com/wikistat/Ateliers-Big-Data) Movie Recommendation by Collaborative Filtering: [NMF](http://wikistat.fr/pdf/st-m-explo-nmf.pdf) from Spark's [SparkML](https://spark.apache.org/docs/latest/ml-guide.html) library

1. Introduction

This notebook addresses a classic collaborative-filtering recommendation problem using the [Spark MLlib](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.recommendation.ALS) library through the pyspark API. The general problem is described in the [introduction](https://github.com/wikistat/Ateliers-Big-Data/tree/master/3-MovieLens) and in a [vignette](http://wikistat.fr/pdf/st-m-datSc3-colFil.pdf) from [Wikistat](http://wikistat.fr/). It is applied to the public data from the [GroupLens](http://grouplens.org/datasets/movielens/) site. The goal is to test the methods and the optimization procedure on the smallest dataset, made of 100k ratings from 943 users on 1682 movies, where every user has rated at least 20 movies. The larger datasets (1M, 10M, 20M ratings) can be used to "scale up" in volume. This notebook draws on the examples in the [documentation](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.recommendation.ALS) and on a [tutorial](https://github.com/jadianes/spark-movie-lens/blob/master/notebooks/building-recommender.ipynb) by [Jose A. Dianes](https://www.codementor.io/jadianes). The topic was also covered at a [Spark Summit](https://databricks-training.s3.amazonaws.com/movie-recommendation-with-mllib.html).

The aim is to use these data alone to produce recommendations. The initial data take the form of a **very sparse** matrix containing ratings or evaluations. **Important**: the "0"s in this matrix are not ratings but *missing values*; the movie has not yet been seen or rated. An algorithm meeting this *large sparse-matrix completion* objective, implemented in freely available software, is provided by the R package [softImpute](https://cran.r-project.org/web/packages/softImpute/index.html). Its use is described in another [notebook](https://github.com/wikistat/Ateliers-Big-Data/blob/master/3-MovieLens/Atelier-MovieLens-softImpute.ipynb). The [NMF](http://wikistat.fr/pdf/st-m-explo-nmf.pdf) version in [Spark MLlib](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.recommendation.ALS) also allows completion. In contrast, the NMF implementation included in [Scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html) also handles [sparse matrices](http://docs.scipy.org/doc/scipy/reference/sparse.html), but the (least-squares) criterion it optimizes treats the "0"s as zero ratings, not as missing values. *It is therefore not suited to the completion problem*, unlike the MLlib version. One would probably need to use the Python library [nonnegfac](https://github.com/kimjingu/nonnegfac-python) by [Kim et al. (2014)](http://link.springer.com/content/pdf/10.1007%2Fs10898-013-0035-4.pdf); **to be tested**!

In the first part, the smallest file is split into three samples: training, validation and test; the rank of the factorization (number of latent factors) is optimized by minimizing the error estimated on the validation sample. The larger file is then used to evaluate the impact of the training-set size.

2. Importing the data into HDFS

The data must be stored in a location accessible from all nodes of the cluster so that the distributed dataset (RDD) can be built. In a standalone use of Spark, they are simply loaded into the current directory. | sc
# Download the files if not already done
# Set here the folder where you want to store the downloaded file.
DATA_PATH=""
import urllib.request
# reduced file
f = urllib.request.urlretrieve("http://www.math.univ-toulouse.fr/~besse/Wikistat/data/ml-ratings100k.csv",DATA_PATH+"ml-ratings100k.csv") | _____no_output_____ | MIT | MovieLens/Atelier-pyspark-MovieLens.ipynb | duongtoan261196/AI_Framework |
The data are read as a single line of text before being restructured into the proper *sparse matrix* format, namely a list of triplets containing the row index, the column index and the rating, for the filled-in values only. | # Import the data as text into an RDD
small_ratings_raw_data = sc.textFile(DATA_PATH+"ml-ratings100k.csv")
# Identify and display the first line
small_ratings_raw_data_header = small_ratings_raw_data.take(1)[0]
print(small_ratings_raw_data_header)
# Create RDD without header
all_lines = small_ratings_raw_data.filter(lambda l : l!=small_ratings_raw_data_header)
# Split the fields (user, item, rating) into a new RDD
from pyspark.sql import Row
split_lines = all_lines.map(lambda l : l.split(","))
ratingsRDD = split_lines.map(lambda p: Row(user=int(p[0]), item=int(p[1]),
rating=float(p[2]), timestamp=int(p[3])))
# .cache(): the RDD is kept in memory once processed
ratingsRDD.cache()
# Display the first two rows
ratingsRDD.take(2)
# Convert RDD to DataFrame
ratingsDF = spark.createDataFrame(ratingsRDD)
ratingsDF.take(2) | _____no_output_____ | MIT | MovieLens/Atelier-pyspark-MovieLens.ipynb | duongtoan261196/AI_Framework |
3. Optimizing the rank on the 100k sample

The file contains 100,000 ratings from 943 users on 1,682 movies.

3.1 Building the samples

Random split into three samples: training, validation and test. The rank parameter is optimized by minimizing the error estimated on the validation sample. This strategy, rather than cross-validation, is better suited to massive data. | tauxTrain=0.6
tauxVal=0.2
tauxTes=0.2
# If the total is less than 1, the data are sub-sampled.
(trainDF, validDF, testDF) = ratingsDF.randomSplit([tauxTrain, tauxVal, tauxTes])
# validation and test sets to predict, without the ratings
validDF_P = validDF.select("user", "item")
testDF_P = testDF.select("user", "item")
trainDF.take(2), validDF_P.take(2), testDF_P.take(2) | _____no_output_____ | MIT | MovieLens/Atelier-pyspark-MovieLens.ipynb | duongtoan261196/AI_Framework |
3.2 Optimizing the rank of the NMF

The data-imputation error, and hence the recommendation error, is estimated on the validation sample for different values (a grid) of the rank of the matrix factorization. In principle the penalization parameter, taken as 0.1 by default, should also be optimized.

*Important point:* the fitting error of the factorization only takes into account the values listed in the sparse matrix, not the "0"s, which are missing values. | from pyspark.ml.recommendation import ALS
import math
import collections
# Initialize the random-number seed
seed = 5
# Maximum number of iterations (ALS)
maxIter = 10
# Regularization parameter; should also be optimized
regularization_parameter = 0.1
# Grid of rank values to optimize over
ranks = [4, 8, 12]
# Variable initialization:
# dictionary storing the error for each tested rank
errors = collections.defaultdict(float)
tolerance = 0.02
min_error = float('inf')
best_rank = -1
best_iteration = -1
from pyspark.ml.evaluation import RegressionEvaluator
for rank in ranks:
als = ALS( rank=rank, seed=seed, maxIter=maxIter,
regParam=regularization_parameter)
model = als.fit(trainDF)
# Predict the validation sample
predDF = model.transform(validDF).select("prediction","rating")
# Remove unpredicted rows (users absent from the training set)
pred_without_naDF = predDF.na.drop()
# Compute the RMSE
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
predictionCol="prediction")
rmse = evaluator.evaluate(pred_without_naDF)
print("Root-mean-square error for rank %d = "%rank + str(rmse))
errors[rank] = rmse
if rmse < min_error:
min_error = rmse
best_rank = rank
# Best solution
print('Optimal rank: %s' % best_rank) | _____no_output_____ | MIT | MovieLens/Atelier-pyspark-MovieLens.ipynb | duongtoan261196/AI_Framework |
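The markdown above notes that the penalization parameter (regParam, 0.1 by default) should in principle be tuned as well. A minimal sketch of how the same fit / transform / RMSE loop could be extended to a two-dimensional grid; the regParam values chosen here are illustrative:

```python
# Sketch: joint grid search over rank and regParam, reusing the ALS/RMSE pattern above.
from pyspark.ml.recommendation import ALS
from pyspark.ml.evaluation import RegressionEvaluator

evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating", predictionCol="prediction")
best = (None, None, float('inf'))
for rank in ranks:
    for reg in [0.01, 0.05, 0.1, 0.5]:  # illustrative grid
        als = ALS(rank=rank, seed=seed, maxIter=maxIter, regParam=reg)
        model = als.fit(trainDF)
        pred = model.transform(validDF).select("prediction", "rating").na.drop()
        rmse = evaluator.evaluate(pred)
        if rmse < best[2]:
            best = (rank, reg, rmse)
print("Best (rank, regParam, RMSE):", best)
```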
3.3 Results and test | # A few predictions
pred_without_naDF.take(3) | _____no_output_____ | MIT | MovieLens/Atelier-pyspark-MovieLens.ipynb | duongtoan261196/AI_Framework |
Final prediction of the test sample. | # Concatenate the train and validation DataFrames
trainValidDF = trainDF.union(validDF)
# Build a model on the enlarged training DataFrame with the rank fixed at its optimal value
als = ALS( rank=best_rank, seed=seed, maxIter=maxIter,
regParam=regularization_parameter)
model = als.fit(trainValidDF)
# Prediction on the test DataFrame
predDF = model.transform(testDF).select("prediction","rating")
# Remove unpredicted rows (users absent from the training set)
pred_without_naDF = predDF.na.drop()
# Compute the RMSE
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
predictionCol="prediction")
rmse = evaluator.evaluate(pred_without_naDF)
print("Root-mean-square error for rank %d = "%best_rank + str(rmse))
| _____no_output_____ | MIT | MovieLens/Atelier-pyspark-MovieLens.ipynb | duongtoan261196/AI_Framework |
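Beyond the RMSE, the fitted ALS model can produce the recommendations themselves; a short sketch using the top-N helper available on ALSModel in recent pyspark versions (Spark 2.2+):

```python
# Sketch: top-10 movie recommendations per user from the model fitted above.
userRecs = model.recommendForAllUsers(10)
userRecs.show(5, truncate=False)
```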
4. Analysis of the full file

MovieLens provides a much larger file with 20M ratings (138,000 users, 27,000 movies). This file is used to extract a test set of two million ratings to reconstruct. The previously optimized parameters (they could no doubt be optimized further) are applied to a succession of estimation / prediction runs with an increasing training-sample size. It would have been more elegant to automate the work in a loop, but with the largest data volumes, poorly controlled Spark behaviour can cause out-of-memory crashes.

4.1 Reading the data

The file is pre-processed in the same way. | # Download the files if not already done
import urllib.request
# full file, but compressed
f = urllib.request.urlretrieve("http://www.math.univ-toulouse.fr/~besse/Wikistat/data/ml-ratings20M.zip",DATA_PATH+"ml-ratings20M.zip")
#Unzip downloaded file
import zipfile
zip_ref = zipfile.ZipFile(DATA_PATH+"ml-ratings20M.zip", 'r')
zip_ref.extractall(DATA_PATH)
zip_ref.close()
# Import the data as text into an RDD
ratings_raw_data = sc.textFile(DATA_PATH+"ratings20M.csv")
# Identify and display the first line
ratings_raw_data_header = ratings_raw_data.take(1)[0]
ratings_raw_data_header
# Create RDD without header
all_lines = ratings_raw_data.filter(lambda l : l!=ratings_raw_data_header)
# Split the fields (user, item, rating) into a new RDD
split_lines = all_lines.map(lambda l : l.split(","))
ratingsRDD = split_lines.map(lambda p: Row(user=int(p[0]), item=int(p[1]),
rating=float(p[2]), timestamp=int(p[3])))
# Display the two first rows
ratingsRDD.take(2)
# Convert RDD to DataFrame
ratingsDF = spark.createDataFrame(ratingsRDD)
ratingsDF.take(2) | _____no_output_____ | MIT | MovieLens/Atelier-pyspark-MovieLens.ipynb | duongtoan261196/AI_Framework |
4.2 Sampling

Extraction of the test sample and, optionally, sub-sampling of the training sample. | tauxTest=0.1
# If the total is less than 1, the data are sub-sampled.
(trainTotDF, testDF) = ratingsDF.randomSplit([1-tauxTest, tauxTest])
# Sub-sample the training set so that
# increasing training-set sizes can be tested
tauxEch=0.2
(trainDF, DropData) = trainTotDF.randomSplit([tauxEch, 1-tauxEch])
testDF.take(2), trainDF.take(2) | _____no_output_____ | MIT | MovieLens/Atelier-pyspark-MovieLens.ipynb | duongtoan261196/AI_Framework |
4.3 Estimating the model

The model is estimated using the parameter values obtained in the previous step. | import time
time_start=time.time()
# Initialize the random-number seed
seed = 5
# Maximum number of iterations (ALS)
maxIter = 10
# Regularization parameter (default value)
regularization_parameter = 0.1
best_rank = 8
# Estimate the model with the optimal rank found above
als = ALS(rank=best_rank, seed=seed, maxIter=maxIter,
regParam=regularization_parameter)
model = als.fit(trainDF)
time_end=time.time()
time_als=(time_end - time_start)
print("ALS took %d s" %(time_als)) | _____no_output_____ | MIT | MovieLens/Atelier-pyspark-MovieLens.ipynb | duongtoan261196/AI_Framework |
4.4 Predicting the test sample and its error | # Predict the test sample
predDF = model.transform(testDF).select("prediction","rating")
# Remove unpredicted rows (users absent from the training set)
pred_without_naDF = predDF.na.drop()
# Compute the RMSE
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
predictionCol="prediction")
rmse = evaluator.evaluate(pred_without_naDF)
print("Root-mean-square error for rank %d = "%best_rank + str(rmse))
trainDF.count() | _____no_output_____ | MIT | MovieLens/Atelier-pyspark-MovieLens.ipynb | duongtoan261196/AI_Framework |
Experiments comparing the performance of traditional pooling operations and entropy pooling within a shallow neural network and LeNet. The experiments use CIFAR-10 and CIFAR-100. | %matplotlib inline
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR100(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=8)
testset = torchvision.datasets.CIFAR100(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=8)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.modules.utils import _pair, _quadruple
import time
from skimage.measure import shannon_entropy
from scipy import stats
import numpy as np
class EntropyPool2d(nn.Module):
def __init__(self, kernel_size=3, stride=1, padding=0, same=False, entr='high'):
super(EntropyPool2d, self).__init__()
self.k = _pair(kernel_size)
self.stride = _pair(stride)
self.padding = _quadruple(padding) # convert to l, r, t, b
self.same = same
self.entr = entr
def _padding(self, x):
if self.same:
ih, iw = x.size()[2:]
if ih % self.stride[0] == 0:
ph = max(self.k[0] - self.stride[0], 0)
else:
ph = max(self.k[0] - (ih % self.stride[0]), 0)
if iw % self.stride[1] == 0:
pw = max(self.k[1] - self.stride[1], 0)
else:
pw = max(self.k[1] - (iw % self.stride[1]), 0)
pl = pw // 2
pr = pw - pl
pt = ph // 2
pb = ph - pt
padding = (pl, pr, pt, pb)
else:
padding = self.padding
return padding
def forward(self, x):
# using existing pytorch functions and tensor ops so that we get autograd,
# would likely be more efficient to implement from scratch at C/Cuda level
start = time.time()
x = F.pad(x, self._padding(x), mode='reflect')
x_detached = x.cpu().detach()
x_unique, x_indices, x_inverse, x_counts = np.unique(x_detached,
return_index=True,
return_inverse=True,
return_counts=True)
freq = torch.FloatTensor([x_counts[i] / len(x_inverse) for i in x_inverse]).cuda()
x_probs = freq.view(x.shape)
x_probs = x_probs.unfold(2, self.k[0], self.stride[0]).unfold(3, self.k[1], self.stride[1])
x_probs = x_probs.contiguous().view(x_probs.size()[:4] + (-1,))
if self.entr == 'high':
x_probs, indices = torch.min(x_probs.cuda(), dim=-1)
elif self.entr == 'low':
x_probs, indices = torch.max(x_probs.cuda(), dim=-1)
else:
raise Exception('Unknown entropy mode: {}'.format(self.entr))
x = x.unfold(2, self.k[0], self.stride[0]).unfold(3, self.k[1], self.stride[1])
x = x.contiguous().view(x.size()[:4] + (-1,))
indices = indices.view(indices.size() + (-1,))
pool = torch.gather(input=x, dim=-1, index=indices)
return pool.squeeze(-1)
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import time
from sklearn.metrics import f1_score
MAX = 'max'
AVG = 'avg'
HIGH_ENTROPY = 'high_entr'
LOW_ENTROPY = 'low_entr'
class Net1Pool(nn.Module):
def __init__(self, num_classes=10, pooling=MAX):
super(Net1Pool, self).__init__()
self.conv1 = nn.Conv2d(3, 30, 5)
if pooling == MAX:
self.pool = nn.MaxPool2d(2, 2)
elif pooling == AVG:
self.pool = nn.AvgPool2d(2, 2)
elif pooling == HIGH_ENTROPY:
self.pool = EntropyPool2d(2, 2, entr='high')
elif pooling == LOW_ENTROPY:
self.pool = EntropyPool2d(2, 2, entr='low')
self.fc0 = nn.Linear(30 * 14 * 14, num_classes)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = x.view(-1, 30 * 14 * 14)
x = F.relu(self.fc0(x))
return x
class Net2Pool(nn.Module):
def __init__(self, num_classes=10, pooling=MAX):
super(Net2Pool, self).__init__()
self.conv1 = nn.Conv2d(3, 50, 5, 1)
self.conv2 = nn.Conv2d(50, 50, 5, 1)
if pooling == MAX:
self.pool = nn.MaxPool2d(2, 2)
elif pooling == AVG:
self.pool = nn.AvgPool2d(2, 2)
elif pooling == HIGH_ENTROPY:
self.pool = EntropyPool2d(2, 2, entr='high')
elif pooling == LOW_ENTROPY:
self.pool = EntropyPool2d(2, 2, entr='low')
self.fc1 = nn.Linear(5*5*50, 500)
self.fc2 = nn.Linear(500, num_classes)
def forward(self, x):
x = F.relu(self.conv1(x))
x = self.pool(x)
x = F.relu(self.conv2(x))
x = self.pool(x)
x = x.view(-1, 5*5*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
def configure_net(net, device):
net.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
return net, optimizer, criterion
def train(net, optimizer, criterion, trainloader, device, epochs=10, logging=2000):
for epoch in range(epochs):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
start = time.time()
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if i % logging == logging - 1:
print('[%d, %5d] loss: %.3f duration: %.5f' %
(epoch + 1, i + 1, running_loss / logging, time.time() - start))
running_loss = 0.0
print('Finished Training')
def test(net, testloader, device):
correct = 0
total = 0
predictions = []
l = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to(device), labels.to(device)
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
predictions.extend(predicted.cpu().numpy())
l.extend(labels.cpu().numpy())
print('Accuracy: {}'.format(100 * correct / total))
epochs = 10
logging = 15000
num_classes = 100
print('- - - - - - - - -- - - - 2 pool - - - - - - - - - - - - - - - -')
print('- - - - - - - - -- - - - MAX - - - - - - - - - - - - - - - -')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net, optimizer, criterion = configure_net(Net2Pool(num_classes=num_classes, pooling=MAX), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - AVG - - - - - - - - - - - - - - - -')
net, optimizer, criterion = configure_net(Net2Pool(num_classes=num_classes, pooling=AVG), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - HIGH - - - - - - - - - - - - - - - -')
net, optimizer, criterion = configure_net(Net2Pool(num_classes=num_classes, pooling=HIGH_ENTROPY), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - LOW - - - - - - - - - - - - - - - -')
net, optimizer, criterion = configure_net(Net2Pool(num_classes=num_classes, pooling=LOW_ENTROPY), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - 1 pool - - - - - - - - - - - - - - - -')
print('- - - - - - - - -- - - - MAX - - - - - - - - - - - - - - - -')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net, optimizer, criterion = configure_net(Net1Pool(num_classes=num_classes, pooling=MAX), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - AVG - - - - - - - - - - - - - - - -')
net, optimizer, criterion = configure_net(Net1Pool(num_classes=num_classes, pooling=AVG), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - HIGH - - - - - - - - - - - - - - - -')
net, optimizer, criterion = configure_net(Net1Pool(num_classes=num_classes, pooling=HIGH_ENTROPY), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - LOW - - - - - - - - - - - - - - - -')
net, optimizer, criterion = configure_net(Net1Pool(num_classes=num_classes, pooling=LOW_ENTROPY), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device) | _____no_output_____ | Apache-2.0 | pytorch/notebooks/shallow_NNs-Cifar.ipynb | ChristoferNal/pooling-operations-and-information-theory |
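A quick standalone smoke test of the EntropyPool2d layer defined above can confirm that it reduces the spatial dimensions exactly like the 2x2 max/avg pooling it replaces. Note that, as written, the layer moves tensors to the GPU internally, so this sketch assumes a CUDA device is available.

```python
# Sketch: shape check for the custom entropy pooling layer defined above.
import torch

pool = EntropyPool2d(2, 2, entr='high')
x = torch.randn(1, 3, 32, 32).cuda()
y = pool(x)
print(y.shape)  # expected: torch.Size([1, 3, 16, 16])
```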
M.A.R.K. Detection model Training and Inference

In this notebook we will use aXeleRate, a Keras-based framework for AI on the edge, to quickly set up model training and then, after the training session is completed, convert the model to .tflite and .kmodel formats.

First, let's take care of some administrative details. 1) Before we do anything, make sure you have chosen GPU as Runtime type (in Runtime -> Change Runtime type). 2) We need to mount Google Drive for saving our model checkpoints and final converted model(s). Press the Mount Google Drive button in the Files tab on your left. In the next cell we clone the aXeleRate GitHub repository and import it. **It is possible to use pip install or python setup.py install, but in that case you will need to restart the environment.** Since I'm trying to make the process as streamlined as possible, I'm using sys.path.append for the import. | %load_ext tensorboard
#we need imgaug 0.4 for image augmentations to work properly, see https://stackoverflow.com/questions/62580797/in-colab-doing-image-data-augmentation-with-imgaug-is-not-working-as-intended
!pip uninstall -y imgaug && pip uninstall -y albumentations && pip install imgaug==0.4
!git clone https://github.com/AIWintermuteAI/aXeleRate.git
import sys
sys.path.append('/content/aXeleRate')
from axelerate import setup_training, setup_inference | _____no_output_____ | MIT | resources/aXeleRate_mark_detector.ipynb | joaopdss/aXelerate |
At this step you typically need to get the dataset. You can use the !wget command to download it from somewhere on the Internet, or !cp to copy it from My Drive, as in this example:
```
!cp -r /content/drive/'My Drive'/pascal_20_segmentation.zip .
!unzip --qq pascal_20_segmentation.zip
```
Dataset preparation and postprocessing are discussed in the article here:
The annotation tool I use is LabelImg: https://github.com/tzutalin/labelImg
Let's visualize our detection model test dataset. There are images in the validation folder with corresponding annotations in PASCAL-VOC format in the validation annotations folder. | %matplotlib inline
!gdown https://drive.google.com/uc?id=1s2h6DI_1tHpLoUWRc_SavvMF9jYG8XSi #dataset
!gdown https://drive.google.com/uc?id=1-bDRZ9Z2T81SfwhHEfZIMFG7FtMQ5ZiZ #pre-trained model
!unzip --qq mark_dataset.zip
from axelerate.networks.common_utils.augment import visualize_detection_dataset
visualize_detection_dataset(img_folder='mark_detection/imgs_validation', ann_folder='mark_detection/ann_validation', num_imgs=10, img_size=224, augment=True) | _____no_output_____ | MIT | resources/aXeleRate_mark_detector.ipynb | joaopdss/aXelerate |
Next step is defining a config dictionary. Most lines are self-explanatory.

Type is the model frontend - Classifier, Detector or Segnet.

Architecture is the model backend (feature extractor):
- Full Yolo
- Tiny Yolo
- MobileNet1_0
- MobileNet7_5
- MobileNet5_0
- MobileNet2_5
- SqueezeNet
- NASNetMobile
- DenseNet121
- ResNet50

For more information on anchors, please read here: https://github.com/pjreddie/darknet/issues/568

Labels are the labels present in your dataset. IMPORTANT: please list all the labels present in the dataset.

object_scale determines how much to penalize wrong prediction of confidence of object predictors.
no_object_scale determines how much to penalize wrong prediction of confidence of non-object predictors.
coord_scale determines how much to penalize wrong position and size predictions (x, y, w, h).
class_scale determines how much to penalize wrong class prediction.

For converter type you can choose the following: 'k210', 'tflite_fullint', 'tflite_dynamic', 'edgetpu', 'openvino', 'onnx'

Parameters for Person Detection

K210, which is where we will run the network, has constrained memory (about 5.5 MB of RAM available), so with the MicroPython firmware the largest model you can run is about 2 MB, which limits our architecture choice to Tiny Yolo, MobileNet (up to 0.75 alpha) and SqueezeNet. Out of these 3 architectures, only one comes with a pre-trained model - MobileNet. So, to save training time, we will use MobileNet with alpha 0.75, which has ... parameters. For objects that do not have that much variety, you can use MobileNet with a lower alpha, down to 0.25. | config = {
"model":{
"type": "Detector",
"architecture": "MobileNet5_0",
"input_size": 224,
"anchors": [0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828],
"labels": ["mark"],
"coord_scale" : 1.0,
"class_scale" : 1.0,
"object_scale" : 5.0,
"no_object_scale" : 1.0
},
"weights" : {
"full": "",
"backend": "imagenet"
},
"train" : {
"actual_epoch": 50,
"train_image_folder": "mark_detection/imgs",
"train_annot_folder": "mark_detection/ann",
"train_times": 1,
"valid_image_folder": "mark_detection/imgs_validation",
"valid_annot_folder": "mark_detection/ann_validation",
"valid_times": 1,
"valid_metric": "mAP",
"batch_size": 32,
"learning_rate": 1e-3,
"saved_folder": F"/content/drive/MyDrive/mark_detector",
"first_trainable_layer": "",
"augumentation": True,
"is_only_detect" : False
},
"converter" : {
"type": ["k210","tflite"]
}
} | _____no_output_____ | MIT | resources/aXeleRate_mark_detector.ipynb | joaopdss/aXelerate |
Let's check what GPU we have been assigned in this Colab session, if any. | from tensorflow.python.client import device_lib
device_lib.list_local_devices() | _____no_output_____ | MIT | resources/aXeleRate_mark_detector.ipynb | joaopdss/aXelerate |
Also, let's open TensorBoard, where we will be able to watch the model training progress in real time. Training and validation logs will also be saved in the project folder. Since there are no logs before we start the training, TensorBoard will be empty; refresh it after the first epoch. | %tensorboard --logdir logs | _____no_output_____ | MIT | resources/aXeleRate_mark_detector.ipynb | joaopdss/aXelerate |
Finally, we start the training by passing the config dictionary we defined earlier to the setup_training function. The function will start the training with Checkpoint, Reduce Learning Rate on Plateau and Early Stopping callbacks. After the training has stopped, it will convert the best model into the format(s) you have specified in the config and save it to the project folder. | from keras import backend as K
K.clear_session()
model_path = setup_training(config_dict=config) | _____no_output_____ | MIT | resources/aXeleRate_mark_detector.ipynb | joaopdss/aXelerate |
After training, it is good to check the actual performance of your model by doing inference on your validation dataset and visualizing the results. This is exactly what the next block does. Obviously, since our model has only been trained on a few images, the results are far from stellar, but if you have a good dataset, you'll get better results. | from keras import backend as K
K.clear_session()
setup_inference(config, model_path) | _____no_output_____ | MIT | resources/aXeleRate_mark_detector.ipynb | joaopdss/aXelerate |
Table of Contents
1. [Import Modules](#import)
2. [Read the History File](#read)
3. [Summary Table](#table)
4. [Histogram Fission Fragment Properties](#ffHistograms)
5. [Correlated Observables](#correlations)
6. [Neutron Properties](#neutrons)
7. [Gamma Properties](#gammas)
8. [Gamma-ray Timing Information](#timing)
9. [Angular Correlations](#angles)

1. Import modules for the notebook | import numpy as np
import os
import matplotlib.pyplot as plt
from CGMFtk import histories as fh
# also define some plotting features
import matplotlib as mpl
mpl.rcParams['font.size'] = 12
mpl.rcParams['font.family'] = 'Helvetica','serif'
mpl.rcParams['font.weight'] = 'normal'
mpl.rcParams['axes.labelsize'] = 18.
mpl.rcParams['xtick.labelsize'] = 18.
mpl.rcParams['ytick.labelsize'] = 18.
mpl.rcParams['lines.linewidth'] = 2.
mpl.rcParams['xtick.major.pad'] = '10'
mpl.rcParams['ytick.major.pad'] = '10'
mpl.rcParams['image.cmap'] = 'BuPu'
# define a working directory where the history files are stored
workdir = './'
histFile = 'histories.out'
timeFile = 'histories.out'
yeildFile = 'yeilds.cgmf.0'
nevents = int(1E6) | _____no_output_____ | MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
2. Read CGMF history file | # run Cf-252 sf with a single OM param file (#42)
directory = "/home/beykyle/db/projects/OM/KDOMPuq/KDUQSamples"
for filename in os.scandir(directory):
if filename.is_file() and "42" in filename.path:
#if filename.is_file():
print(filename.path)
os.system("mpirun -np 8 --use-hwthread-cpus cgmf.mpi.x -t -1 -i 98252 -e 0.0 -n 100 -o" + filename.path)
os.system("cat histories.cgmf.* > histories.out")
os.system("rm histories.cgmf.*")
print("Analyzing histories")
hist = fh.Histories(workdir + histFile, nevents=nevents)
# print the number of events in the file
print ('This file contains ',str(hist.getNumberEvents()),' events and ',str(hist.getNumberFragments()),' fission fragments') | Analyzing histories
WARNING
You asked for 1000000 events and there are only 800 in this history file
This file contains 800 events and 1600 fission fragments
| MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
With the option 'nevents', the number of fission events that are read can be specified: hist = fh.Histories('92235_1MeV.cgmf',nevents=5000)

3. Summary Table | # provide a summary table of the fission events
hist.summaryTable() | _____no_output_____ | MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
4. Fission Fragment Properties With the histogram function from matplotlib, we can easily plot distributions of the fission fragment characteristics | # plot the distributions of the fission fragments
A = hist.getA() # get A of all fragments
AL = hist.getALF() # get A of light fragments
AH = hist.getAHF() # get A of heavy fragments
fig = plt.figure(figsize=(8,6))
bins = np.arange(min(A),max(A))
h,b = np.histogram(A,bins=bins,density=True)
plt.plot(b[:-1],h,'-o')
plt.xlabel('Mass (A)')
plt.ylabel('Frequency')
plt.show()
Z = hist.getZ()
ZL = hist.getZLF()
ZH = hist.getZHF()
fig = plt.figure(figsize=(8,6))
bins = np.arange(min(Z),max(Z))
h,b = np.histogram(Z,bins=bins,density=True)
plt.plot(b[:-1],h,'-o')
plt.xlabel('Charge (Z)')
plt.ylabel('Frequency')
plt.show()
fig = plt.figure(figsize=(8,6))
TKEpre = hist.getTKEpre() # TKE before neutron emission
TKEpost = hist.getTKEpost() # TKE after neutron emission
bins = np.arange(min(TKEpre),max(TKEpre))
h,b = np.histogram(TKEpre,bins=bins,density=True)
plt.plot(0.5*(b[:-1]+b[1:]),h,'-o',label='Before neutron emission')
bins = np.arange(min(TKEpost),max(TKEpost))
h,b = np.histogram(TKEpost,bins=bins,density=True)
plt.plot(0.5*(b[:-1]+b[1:]),h,'-o',label='After neutron emission')
plt.legend()
plt.xlabel('Total Kinetic Energy (MeV)')
plt.ylabel('Frequency')
plt.show() | _____no_output_____ | MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
With the 2D histogram feature, we can see correlations between the calculated features from CGMF | TKEpre = hist.getTKEpre()
TXE = hist.getTXE()
bx = np.arange(min(TKEpre),max(TKEpre))
by = np.arange(min(TXE),max(TXE))
fig = plt.figure(figsize=(8,6))
plt.hist2d(TKEpre,TXE,bins=(bx,by),density=True)
plt.xlabel('Total Kinetic Energy (MeV)')
plt.ylabel('Total Excitation Energy (MeV)')
plt.colorbar()
plt.show() | _____no_output_____ | MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
5. Correlated Observables Many observables within fission are correlated with one another. Sometimes, these are best visualized as two-dimensional histograms as in the TKE-TXE plot directly above. Other times, it is helpful to plot certain observables as a function of mass or TKE. There are routines within CGMFtk that easily construct those, as demonstrated here: | # nubar as a function of mass
## nubarg, excitation energy, kinetic energy (pre), and spin are available as a function of mass
nubarA = hist.nubarA()
TKEA = hist.TKEA()
fig = plt.figure(figsize=(16,6))
plt.subplot(121)
plt.plot(nubarA[0],nubarA[1],'ko')
plt.xlabel('Mass (u)')
plt.ylabel(r'$\overline{\nu}$')
plt.subplot(122)
plt.plot(TKEA[0],TKEA[1],'ko')
plt.xlabel('Mass (u)')
plt.ylabel(r'Total Kinetic Energy (MeV)')
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
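If these mass-dependent curves need to be compared across different history files (for example across the optical-model parameter samples looped over earlier), a simple option is to dump them to text; a sketch with hypothetical output file names:

```python
# Sketch: save <nu>(A) and TKE(A) computed above for later comparison.
import numpy as np

np.savetxt('nubarA.dat', np.column_stack((nubarA[0], nubarA[1])), header='A   nubar(A)')
np.savetxt('TKEA.dat', np.column_stack((TKEA[0], TKEA[1])), header='A   TKE(A) [MeV]')
```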
6. Neutron properties | # construct and plot the neutron multiplicity distribution
nu,pnu = hist.Pnu()
fig = plt.figure(figsize=(8,6))
plt.plot(nu,pnu,'k*--',markersize=10)
plt.xlabel(r'$\nu$')
plt.ylabel(r'P($\nu$)')
plt.show()
# construct and plot the prompt neutron spectrum
fig = plt.figure(figsize=(16,6))
plt.subplot(121)
ebins,pfns = hist.pfns()
plt.step(ebins,pfns,where='mid')
plt.xlim(0,20)
plt.xlabel('Outgoing neutron energy (MeV)')
plt.ylabel('PFNS')
plt.subplot(122)
plt.step(ebins,pfns,where='mid')
plt.xlim(0.01,20)
plt.xscale('log')
plt.yscale('log')
plt.xlabel('Outgoing neutron energy (MeV)')
plt.ylabel('PFNS')
plt.tight_layout()
plt.show()
# average number of prompt neutrons
print ('nubar (per fission event) = ',hist.nubartot())
print ('average number of neutrons per fragment = ',hist.nubar())
# average neutron energies
print ('Neutron energies in the lab:')
print ('Average energy of all neutrons = ',hist.meanNeutronElab())
print ('Average energy of neutrons from fragments = ',hist.meanNeutronElabFragments())
print ('Average energy of neutrons from light fragment = ',hist.meanNeutronElabLF())
print ('Average energy of neutrons from heavy fragment = ',hist.meanNeutronElabHF())
print (' ')
print ('Neutron energies in the center of mass:')
print ('Average energy of neutrons from fragments = ',hist.meanNeutronEcmFragments())
print ('Average energy of neutrons from light fragment = ',hist.meanNeutronEcmLF())
print ('Average energy of neutrons from heavy fragment = ',hist.meanNeutronEcmHF()) | Neutron energies in the lab:
Average energy of all neutrons = 2.0337924277648622
Average energy of neutrons from fragments = 2.0337924277648622
Average energy of neutrons from light fragment = 2.2622832643406268
Average energy of neutrons from heavy fragment = 1.7410818181818182
Neutron energies in the center of mass:
Average energy of neutrons from fragments = 1.2891006310195947
Average energy of neutrons from light fragment = 1.3413187463039622
Average energy of neutrons from heavy fragment = 1.2222060606060605
| MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
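The multiplicity distribution and the mean values printed above can be cross-checked against each other; a minimal sketch, assuming the arrays nu and pnu returned by hist.Pnu() earlier are still in scope (the mean computed this way should agree with hist.nubartot()):

```python
# Sketch: first two moments of P(nu) from the arrays plotted above.
import numpy as np

nu_arr = np.asarray(nu, dtype=float)
pnu_arr = np.asarray(pnu, dtype=float)
mean_nu = np.sum(nu_arr * pnu_arr)
var_nu = np.sum((nu_arr - mean_nu) ** 2 * pnu_arr)
print('mean =', mean_nu, ' variance =', var_nu)
```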
Note that the energies are not recorded for the pre-fission neutrons in the center-of-mass frame.

7. Gamma properties | # construct and plot the gamma multiplicity distribution
nug,pnug = hist.Pnug()
fig = plt.figure(figsize=(8,6))
plt.plot(nug,pnug,'k*--',markersize=10)
plt.xlabel(r'$N_\gamma$')
plt.ylabel(r'P($N_\gamma$)')
plt.show()
# construct and plot the prompt neutron spectrum
fig = plt.figure(figsize=(16,6))
plt.subplot(121)
ebins,pfgs = hist.pfgs()
plt.step(ebins,pfgs,where='mid')
plt.xlim(0,5)
plt.xlabel(r'Outgoing $\gamma$ energy (MeV)')
plt.ylabel('PFGS')
plt.subplot(122)
plt.step(ebins,pfgs,where='mid')
plt.xlim(0.1,5)
plt.xscale('log')
plt.yscale('log')
plt.xlabel(r'Outgoing $\gamma$ energy (MeV)')
plt.ylabel('PFGS')
plt.ylim(1e-2,30)
plt.tight_layout()
plt.show()
# average number of prompt neutrons
print ('nugbar (per fission event) = ',hist.nubargtot())
print ('average number of gammas per fragment = ',hist.nubarg())
# perform gamma-ray spectroscopy
gE = 0.2125 # gamma ray at 212.5 keV
dE = 0.01 # 1% energy resolution
gspec1 = hist.gammaSpec(gE,dE*gE,post=True)
# calculate the percentage of events for each A/Z
As1 = np.unique(gspec1[:,1])
totEvents = len(gspec1)
fracA1 = []
for A in As1:
mask = gspec1[:,1]==A
fracA1.append(len(gspec1[mask])/totEvents)
Zs1 = np.unique(gspec1[:,0])
fracZ1 = []
for Z in Zs1:
mask = gspec1[:,0]==Z
fracZ1.append(len(gspec1[mask])/totEvents)
fig = plt.figure(figsize=(8,6))
plt.plot(As1,fracA1,'--')
plt.xlabel('Fission Fragment Mass (A)')
plt.ylabel('Fraction of Events')
plt.text(135,0.170,r'$\epsilon_\gamma$=212.5 keV',fontsize=18)
plt.show()
fig = plt.figure(figsize=(8,6))
plt.plot(Zs1,fracZ1,'--')
plt.xlabel('Fission Fragment Charge (Z)')
plt.ylabel('Fraction of Events')
plt.show()
# average neutron energies
print ('Gamma energies in the lab:')
print ('Average energy of all gammas = ',hist.meanGammaElab())
print ('Average energy of gammas from light fragment = ',hist.meanGammaElabLF())
print ('Average energy of gammas from heavy fragment = ',hist.meanGammaElabHF()) | Gamma energies in the lab:
Average energy of all gammas = 0.7157650070395495
Average energy of gammas from light fragment = 0.7317019104946348
Average energy of gammas from heavy fragment = 0.7005107715430862
| MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
Note that in the current version of CGMF, only the Doppler-shifted lab energies are recorded in the history file. 8. Gamma-ray Timing Information We can also calculate quantities that are related to the time at which the 'late' prompt gamma rays are emitted. When the option -t -1 is included in the run-time options for CGMF, these gamma-ray times are printed out in the CGMF history file. The Histories class can read these times based on the header of the CGMF history file. | histTime = fh.Histories(workdir + timeFile, nevents=nevents*2)
# gamma-ray times can be retrieved through
gammaAges = histTime.getGammaAges() | _____no_output_____ | MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
The nubargtot function can also be used to construct the average gamma-ray multiplicity per fission event as a function of time. In the call to nubarg() or nubargtot(), timeWindow=True should be included, which uses the default timings provided in the function (otherwise, passing a numpy array or list of times to timeWindow will use those times). Optionally, a minimum gamma-ray energy cut-off, Eth, can also be included. | times,nubargTime = histTime.nubarg(timeWindow=True) # include timeWindow as a boolean or list of times (in seconds) to activate this feature
fig = plt.figure(figsize=(8,6))
plt.plot(times,nubargTime,'o',label='Eth=0. MeV')
times,nubargTime = histTime.nubarg(timeWindow=True,Eth=0.1)
plt.plot(times,nubargTime,'o',label='Eth=0.1 MeV')
plt.xlabel('Time since fission (s)')
plt.ylabel(r'Averge $\gamma$-ray multiplicity')
plt.xscale('log')
plt.legend()
plt.show() | _____no_output_____ | MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
The prompt fission gamma-ray spectrum function, pfgs(), can also be used to calculate this quantity within a certain time window since the fission event. The time window is defined using minTime and maxTime to set the lower and upper boundaries. | fig = plt.figure(figsize=(8,6))
bE,pfgsTest = histTime.pfgs(minTime=5e-8,maxTime=500e-8)
plt.step(bE,pfgsTest,label='Time window')
bE,pfgsTest = histTime.pfgs()
plt.step(bE,pfgsTest,label='All events')
plt.yscale('log')
plt.xlim(0,2)
plt.ylim(0.1,100)
plt.xlabel('Gamma-ray Energy (MeV)')
plt.ylabel('Prompt Fission Gamma Spectrum')
plt.legend()
plt.show()
# calculate the gamma-ray multiplicity as a function of time since fission for a specific fission fragment
times,gMultiplicity = histTime.gammaMultiplicity(minTime=1e-8,maxTime=1e-6,Afragment=134,Zfragment=52)
# also compare to an exponential decay with the half life of the state
f = np.exp(-times*np.log(2)/1.641e-7) # the half life of 134Te is 164.1 ns
norm = gMultiplicity[0]/f[0]
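# scale the CGMF multiplicity curve so that it matches the analytic exponential at the first time point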
fig = plt.figure(figsize=(8,6))
plt.plot(times*1e9,gMultiplicity/norm,'k-',label='CGMF')
plt.plot(times*1e9,f,'r--',label=r'exp($-t\cdot$log(2)/$\tau_{1/2}$)')
plt.legend()
plt.yscale('log')
plt.xlabel('Time since fission (ns)')
plt.ylabel(r'N$_\gamma$(t) (arb. units)')
plt.show()
# calculate the isomeric ratios for specific states in nuclei
# e.g. isomeric ratio for the 1/2- state in 99Nb, ground state is 9/2+, lifetime is 150 s
r = histTime.isomericRatio(thresholdTime=1,A=99,Z=41,Jm=0.5,Jgs=4.5)
print ('99Nb:',round(r,2))
# e.g. isomeric ratio for the 11/2- state in 133Te, ground state is 3/2+, lifetime is 917.4 s
r = histTime.isomericRatio(thresholdTime=1,A=133,Z=52,Jm=5.5,Jgs=1.5)
print ('133Te:',round(r,2)) | 99Nb: 1.0
133Te: 0.14
| MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
9. Angular Correlations For the fission fragment angular distribution with respect to the beam axis/z-axis, there is one option: afterEmission=True/False. afterEmission=True uses these angles after neutron emission and afterEmission=False uses these angles before neutron emission. The default is True. | # calculate cos(theta) between the fragments and z-axis/beam axis
FFangles = hist.FFangles()
bins = np.linspace(-1,1,30)
h,b = np.histogram(FFangles,bins=bins,density=True)
# only light fragments
hLight,b = np.histogram(FFangles[::2],bins=bins,density=True)
# only heavy fragments
hHeavy,b = np.histogram(FFangles[1::2],bins=bins,density=True)
x = 0.5*(b[:-1]+b[1:])
fig = plt.figure(figsize=(8,6))
plt.plot(x,h,'k*',label='All Fragments')
plt.plot(x,hLight,'ro',label='Light Fragments')
plt.plot(x,hHeavy,'b^',label='Heavy Fragments')
plt.xlabel(r'cos($\theta$)')
plt.ylabel('Frequency')
plt.ylim(0.45,0.55)
plt.title('Fission fragment angles with respect to beam axis')
plt.legend()
plt.show() | _____no_output_____ | MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
There are several options when calculating the angles of the neutrons with respect to the beam axis/z-axis. The first is including a neutron threshold energy with the keyword Eth (given in MeV). We can also calculate these angles in the lab frame (lab=True, default) or in the center of mass frame of the compound system (lab=False). Finally, we can include pre-fission neutrons (includePrefission=True, default) or not include them (includePreFission=False). However, the pre-fission neutrons can only be included in the lab frame. | # calculate the angles between the neutrons and the z-axis/beam axis
nAllLab,nLLab,nHLab = hist.nangles(lab=True) # all neutrons, from the light fragment, from the heavy fragment
nAllCM,nLCM,nHCM = hist.nangles(lab=False) # center of mass frame of the compound
bins = np.linspace(-1,1,30)
hAllLab,b = np.histogram(nAllLab,bins=bins,density=True)
hLightLab,b = np.histogram(nLLab,bins=bins,density=True)
hHeavyLab,b = np.histogram(nHLab,bins=bins,density=True)
hAllcm,b = np.histogram(nAllCM,bins=bins,density=True)
hLightcm,b = np.histogram(nLCM,bins=bins,density=True)
hHeavycm,b = np.histogram(nHCM,bins=bins,density=True)
x = 0.5*(b[:-1]+b[1:])
fig = plt.figure(figsize=(8,6))
plt.plot(x,hAllLab,'k*',label='All Fragments')
plt.plot(x,hLightLab,'ro',label='Light Fragments')
plt.plot(x,hHeavyLab,'b^',label='Heavy Fragments')
plt.xlabel(r'cos($\theta$)')
plt.ylabel('Frequency')
plt.ylim(0.45,0.55)
plt.title('Neutron Angles with respect to beam axis in the Lab Frame')
plt.legend()
plt.show()
fig = plt.figure(figsize=(8,6))
plt.plot(x,hAllLab,'k*',label='Lab Frame')
plt.plot(x,hAllcm,'ro',label='CoM Frame')
plt.xlabel(r'cos($\theta$)')
plt.ylabel('Frequency')
plt.ylim(0.45,0.55)
plt.legend()
plt.show() | _____no_output_____ | MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
There are again several options that we can use when calculating the angles between all pairs of neutrons (from all fragments) and the light fragments, all of which have been seen in the last two examples. These include Eth (neutron threshold energy), afterEmission (fission fragment angles are post or pre neutron emission), and includePrefission (to include or not include pre-fission neutrons). | # calculate the angles between the neutrons and the light fragments
nFall,nFLight,nFHeavy = hist.nFangles()
bins = np.linspace(-1,1,30)
hall,b = np.histogram(nFall,bins=bins,density=True)
hlight,b = np.histogram(nFLight,bins=bins,density=True)
hheavy,b = np.histogram(nFHeavy,bins=bins,density=True)
x = 0.5*(b[:-1]+b[1:])
fig = plt.figure(figsize=(8,6))
plt.plot(x,hall,'k*',label='All Fragments')
plt.plot(x,hlight,'ro',label='Light Fragments')
plt.plot(x,hheavy,'b^',label='Heavy Fragments')
plt.xlabel(r'cos($\theta$)')
plt.ylabel('Frequency')
plt.legend()
plt.show() | _____no_output_____ | MIT | analysis/example/CGMFtkHistoriesExample.ipynb | beykyle/omp-uq |
0. required packages for h5py | %run "..\..\Startup_py3.py"
sys.path.append(r"..\..\..\..\Documents")
import ImageAnalysis3 as ia
%matplotlib notebook
from ImageAnalysis3 import *
print(os.getpid())
import h5py
from ImageAnalysis3.classes import _allowed_kwds
import ast | 18112
| MIT | 5kb_DNA_analysis/single_fov/20210320_updated_single_fov_IgH_batch1_proB_dox+_2color.ipynb | shiwei23/Chromatin_Analysis_Scripts |
1. Create field-of-view class | reload(ia)
reload(classes)
reload(classes.batch_functions)
reload(classes.field_of_view)
reload(io_tools.load)
reload(visual_tools)
reload(ia.correction_tools)
reload(ia.correction_tools.alignment)
reload(ia.spot_tools.matching)
reload(ia.segmentation_tools.chromosome)
reload(ia.spot_tools.fitting)
fov_param = {'data_folder':r'\\10.245.74.158\Chromatin_NAS_1\20210320-proB_Dox_IAA_STI_CTP-08_2color',
'save_folder':r'\\10.245.74.212\Chromatin_NAS_2\IgH_analyzed_results\20210320_IgH_proB_iaa_dox+',
#'save_folder':r'D:\Pu_Temp\202009_IgH_proB_DMSO_2color',
'experiment_type': 'DNA',
'num_threads': 24,
'correction_folder':r'\\10.245.74.158\Chromatin_NAS_0\Corrections\20201012-Corrections_2color',
'shared_parameters':{
'single_im_size':[35,2048,2048],
'corr_channels':['750','647'],
'num_empty_frames': 0,
'corr_hot_pixel':True,
'corr_Z_shift':False,
'min_num_seeds':500,
'max_num_seeds': 2500,
'spot_seeding_th':125,
'normalize_intensity_local':False,
'normalize_intensity_background':False,
},
}
fov = classes.field_of_view.Field_of_View(fov_param, _fov_id=30,
_color_info_kwargs={
'_color_filename':'Color_Usage_clean',
},
_prioritize_saved_attrs=False,
) | Get Folder Names: (ia.get_img_info.get_folders)
- Number of folders: 78
- Number of field of views: 64
- Importing csv file: \\10.245.74.158\Chromatin_NAS_1\20210320-proB_Dox_IAA_STI_CTP-08_2color\Analysis\Color_Usage_clean.csv
- header: ['Hyb', '750', '647', '488', '405']
-- Hyb H0R0 exists in this data
-- DAPI exists in hyb: H0R0
- 75 folders are found according to color-usage annotation.
+ loading fov_info from file: \\10.245.74.212\Chromatin_NAS_2\IgH_analyzed_results\20210320_IgH_proB_iaa_dox+\Conv_zscan_30.hdf5
++ base attributes loaded:['be8ce21d23df451e85292b57a8b61273', 'cand_chrom_coords', 'chrom_coords', 'chrom_im', 'ref_im'] in 11.353s.
+ loading correction from file: \\10.245.74.212\Chromatin_NAS_2\IgH_analyzed_results\20210320_IgH_proB_iaa_dox+\Conv_zscan_30.hdf5
++ load bleed correction profile directly from savefile.
++ load chromatic correction profile directly from savefile.
++ load chromatic_constants correction profile directly from savefile.
++ load illumination correction profile directly from savefile.
+ loading segmentation from file: \\10.245.74.212\Chromatin_NAS_2\IgH_analyzed_results\20210320_IgH_proB_iaa_dox+\Conv_zscan_30.hdf5
++ base attributes loaded:[] in 0.005s.
-- saving fov_info to file: \\10.245.74.212\Chromatin_NAS_2\IgH_analyzed_results\20210320_IgH_proB_iaa_dox+\Conv_zscan_30.hdf5
++ base attributes saved:['analysis_folder', 'annotated_folders', 'be8ce21d23df451e85292b57a8b61273', 'bead_channel_index', 'cand_chrom_coords', 'channels', 'chrom_coords', 'chrom_im', 'color_dic', 'color_filename', 'color_format', 'correction_folder', 'dapi_channel_index', 'data_folder', 'drift', 'drift_filename', 'drift_folder', 'experiment_folder', 'folders', 'fov_id', 'fov_name', 'map_folder', 'num_threads', 'ref_filename', 'ref_id', 'ref_im', 'rotation', 'save_filename', 'save_folder', 'segmentation_dim', 'segmentation_folder', 'shared_parameters', 'use_dapi'] in 18.835s.
| MIT | 5kb_DNA_analysis/single_fov/20210320_updated_single_fov_IgH_batch1_proB_dox+_2color.ipynb | shiwei23/Chromatin_Analysis_Scripts |
2. Process image into candidate spots | reload(io_tools.load)
reload(spot_tools.fitting)
reload(correction_tools.chromatic)
reload(classes.batch_functions)
# process image into spots
id_list, spot_list = fov._process_image_to_spots('unique',
#_sel_ids=np.arange(41,47),
_load_common_reference=True,
_load_with_multiple=False,
_save_images=True,
_warp_images=False,
_overwrite_drift=False,
_overwrite_image=False,
_overwrite_spot=False,
_verbose=True) | -- No folder selected, allow processing all 75 folders
+ load reference image from file:\\10.245.74.158\Chromatin_NAS_1\20210320-proB_Dox_IAA_STI_CTP-08_2color\H0R0\Conv_zscan_30.dax
- correct the whole fov for image: \\10.245.74.158\Chromatin_NAS_1\20210320-proB_Dox_IAA_STI_CTP-08_2color\H0R0\Conv_zscan_30.dax
-- loading illumination correction profile from file:
488 illumination_correction_488_2048x2048.npy
-- loading image from file:\\10.245.74.158\Chromatin_NAS_1\20210320-proB_Dox_IAA_STI_CTP-08_2color\H0R0\Conv_zscan_30.dax in 8.178s
-- removing hot pixels for channels:['488'] in 6.360s
-- illumination correction for channels: 488, in 1.390s
-- -- generate translation function with drift:[0. 0. 0.] in 0.000s
-- finish correction in 16.463s
-- saving fov_info to file: \\10.245.74.212\Chromatin_NAS_2\IgH_analyzed_results\20210320_IgH_proB_iaa_dox+\Conv_zscan_30.hdf5
++ base attributes saved:['ref_im'] in 4.984s.
-- checking unique, region:[41 42] in 0.047s.
-- checking unique, region:[44 45] in 0.000s.
-- checking unique, region:[47 48] in 0.016s.
-- checking unique, region:[50 51] in 0.007s.
-- checking unique, region:[53 54] in 0.000s.
-- checking unique, region:[56 57] in 0.009s.
-- checking unique, region:[60 61] in 0.000s.
-- checking unique, region:[63 64] in 0.000s.
-- checking unique, region:[66 67] in 0.016s.
-- checking unique, region:[69 70] in 0.000s.
-- checking unique, region:[72 73] in 0.000s.
-- checking unique, region:[75 76] in 0.016s.
-- checking unique, region:[78 79] in 0.000s.
-- checking unique, region:[81 82] in 0.000s.
-- checking unique, region:[84 85] in 0.016s.
-- checking unique, region:[87 88] in 0.000s.
-- checking unique, region:[90 91] in 0.000s.
-- checking unique, region:[93 94] in 0.016s.
-- checking unique, region:[96 97] in 0.000s.
-- checking unique, region:[ 99 100] in 0.000s.
-- checking unique, region:[102 103] in 0.016s.
-- checking unique, region:[105 106] in 0.000s.
-- checking unique, region:[108 109] in 0.000s.
-- checking unique, region:[111 112] in 0.016s.
-- checking unique, region:[114 115] in 0.000s.
-- checking unique, region:[323 321] in 0.000s.
-- checking unique, region:[326 324] in 0.016s.
-- checking unique, region:[329 327] in 0.000s.
-- checking unique, region:[332 330] in 0.000s.
-- checking unique, region:[335 333] in 0.016s.
-- checking unique, region:[339 337] in 0.000s.
-- checking unique, region:[342 340] in 0.000s.
-- checking unique, region:[345 343] in 0.016s.
-- checking unique, region:[348 346] in 0.000s.
-- checking unique, region:[351 349] in 0.000s.
-- checking unique, region:[354 352] in 0.016s.
-- checking unique, region:[357 355] in 0.000s.
-- checking unique, region:[360 358] in 0.000s.
-- checking unique, region:[363 361] in 0.017s.
-- checking unique, region:[366 364] in 0.001s.
-- checking unique, region:[369 367] in 0.000s.
-- checking unique, region:[375 373] in 0.014s.
-- checking unique, region:[388 383] in 0.000s.
-- checking unique, region:[391 386] in 0.013s.
-- checking unique, region:[394 389] in 0.006s.
-- checking unique, region:[ 43 392] in 0.006s.
-- checking unique, region:[ 49 395] in 0.005s.
-- checking unique, region:[55 46] in 0.006s.
-- checking unique, region:[62 52] in 0.005s.
-- checking unique, region:[68 59] in 0.005s.
-- checking unique, region:[74 65] in 0.004s.
-- checking unique, region:[80 71] in 0.005s.
-- checking unique, region:[86 77] in 0.005s.
-- checking unique, region:[92 83] in 0.005s.
-- checking unique, region:[98 89] in 0.005s.
-- checking unique, region:[104 95] in 0.005s.
-- checking unique, region:[110 101] in 0.004s.
-- checking unique, region:[325 107] in 0.000s.
-- checking unique, region:[331 113] in 0.000s.
-- checking unique, region:[341 328] in 0.014s.
-- checking unique, region:[347 334] in 0.000s.
-- checking unique, region:[353 344] in 0.000s.
-- checking unique, region:[359 350] in 0.016s.
-- checking unique, region:[365 356] in 0.000s.
-- checking unique, region:[371 362] in 0.000s.
-- checking unique, region:[377 368] in 0.016s.
-- checking unique, region:[384 374] in 0.000s.
-- checking unique, region:[390 381] in 0.000s.
-- checking unique, region:[393 387] in 0.017s.
+ Start multi-processing of pre-processing for 69 images with 24 threads
++ processed unique ids: [ 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 59
60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77
78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113
114 115 321 323 324 325 326 327 328 329 330 331 332 333 334 335 337 339
340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357
358 359 360 361 362 363 364 365 366 367 368 369 371 373 374 375 377 381
383 384 386 387 388 389 390 391 392 393 394 395] in 3189.44s.
| MIT | 5kb_DNA_analysis/single_fov/20210320_updated_single_fov_IgH_batch1_proB_dox+_2color.ipynb | shiwei23/Chromatin_Analysis_Scripts |
3. Find chromosomes 3.1 load chromosome image | overwrite_chrom = False
chrom_im = fov._load_chromosome_image(_type='reverse',
_overwrite=overwrite_chrom) | -- choose chrom images from folder: \.
- correct the whole fov for image: \\10.245.74.158\Chromatin_NAS_1\20210320-proB_Dox_IAA_STI_CTP-08_2color\H0R0\Conv_zscan_30.dax
-- loading illumination correction profile from file:
647 illumination_correction_647_2048x2048.npy
-- loading chromatic correction profile from file:
750 chromatic_correction_750_647_35_2048_2048.npy
647 None
-- loading image from file:\\10.245.74.158\Chromatin_NAS_1\20210320-proB_Dox_IAA_STI_CTP-08_2color\H0R0\Conv_zscan_30.dax in 8.030s
-- removing hot pixels for channels:['647'] in 7.033s
-- illumination correction for channels: 647, in 1.573s
-- warp image with chromatic correction for channels: [] and drift:[0. 0. 0.] in 0.000s
-- finish correction in 38.839s
-- chromosome image has drift: [0. 0. 0.]
-- saving fov_info to file: \\10.245.74.212\Chromatin_NAS_2\IgH_analyzed_results\20210320_IgH_proB_iaa_dox+\Conv_zscan_30.hdf5
++ base attributes saved:['chrom_im'] in 5.277s.
| MIT | 5kb_DNA_analysis/single_fov/20210320_updated_single_fov_IgH_batch1_proB_dox+_2color.ipynb | shiwei23/Chromatin_Analysis_Scripts |
3.2 find candidate chromosomes | chrom_coords = fov._find_candidate_chromosomes_by_segmentation(_filt_size=4,
_binary_per_th=99.75,
_morphology_size=2,
_overwrite=overwrite_chrom) | + loading fov_info from file: \\10.245.74.212\Chromatin_NAS_2\IgH_analyzed_results\20210320_IgH_proB_iaa_dox+\Conv_zscan_30.hdf5
++ base attributes loaded:[] in 4.192s.
-- adjust seed image with filter size=4
-- binarize image with threshold: 99.75%
-- erosion and dialation with size=2.
-- find close objects.
-- random walk segmentation, beta=10.
| MIT | 5kb_DNA_analysis/single_fov/20210320_updated_single_fov_IgH_batch1_proB_dox+_2color.ipynb | shiwei23/Chromatin_Analysis_Scripts |
3.3 select among candidate chromosomes | chrom_coords = fov._select_chromosome_by_candidate_spots(_good_chr_loss_th=0.3,
_cand_spot_intensity_th=200,
_save=True,
_overwrite=overwrite_chrom) | + loading fov_info from file: \\10.245.74.212\Chromatin_NAS_2\IgH_analyzed_results\20210320_IgH_proB_iaa_dox+\Conv_zscan_30.hdf5
++ base attributes loaded:[] in 5.189s.
+ loading unique from file: \\10.245.74.212\Chromatin_NAS_2\IgH_analyzed_results\20210320_IgH_proB_iaa_dox+\Conv_zscan_30.hdf5
| MIT | 5kb_DNA_analysis/single_fov/20210320_updated_single_fov_IgH_batch1_proB_dox+_2color.ipynb | shiwei23/Chromatin_Analysis_Scripts |
visualize chromosome selections | %matplotlib notebook
%matplotlib notebook
## visualize
coord_dict = {'coords':[np.flipud(_coord) for _coord in fov.chrom_coords],
'class_ids':list(np.zeros(len(fov.chrom_coords),dtype=np.int)),
}
visual_tools.imshow_mark_3d_v2([fov.chrom_im],
given_dic=coord_dict,
save_file=None,
)
| _____no_output_____ | MIT | 5kb_DNA_analysis/single_fov/20210320_updated_single_fov_IgH_batch1_proB_dox+_2color.ipynb | shiwei23/Chromatin_Analysis_Scripts |
select spots based on chromosomes | fov._load_from_file('unique')
intensity_th = 200
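# keep only fitted spots brighter than this intensity before assigning them to chromosomes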
from ImageAnalysis3.spot_tools.picking import assign_spots_to_chromosomes
kept_spots_list = []
for _spots in fov.unique_spots_list:
kept_spots_list.append(_spots[_spots[:,0] > intensity_th])
# finalize candidate spots
cand_chr_spots_list = [[] for _ct in fov.chrom_coords]
for _spots in kept_spots_list:
_cands_list = assign_spots_to_chromosomes(_spots, fov.chrom_coords)
for _i, _cands in enumerate(_cands_list):
cand_chr_spots_list[_i].append(_cands)
print(f"kept chromosomes: {len(fov.chrom_coords)}")
reload(spot_tools.picking)
from ImageAnalysis3.spot_tools.picking import convert_spots_to_hzxys
dna_cand_hzxys_list = [convert_spots_to_hzxys(_spots, fov.shared_parameters['distance_zxy'])
for _spots in cand_chr_spots_list]
dna_reg_ids = fov.unique_ids
dna_reg_channels = fov.unique_channels
chrom_coords = fov.chrom_coords
# select_hzxys close to the chromosome center
dist_th = 3000 # upper limit is 3000nm
good_chr_th = 0.8 # 80% of regions should have candidate spots
sel_dna_cand_hzxys_list = []
sel_chrom_coords = []
chr_cand_pers = []
sel_chr_cand_pers = []
for _cand_hzxys, _chrom_coord in zip(dna_cand_hzxys_list, chrom_coords):
_chr_cand_per = 0
_sel_cands_list = []
for _cands in _cand_hzxys:
if len(_cands) == 0:
_sel_cands_list.append([])
else:
_dists = np.linalg.norm(_cands[:,1:4] - _chrom_coord*np.array([200,108,108]), axis=1)
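# the factor [200,108,108] appears to be the (z,x,y) pixel size in nm (cf. distance_zxy above),
# converting the chromosome pixel coordinates to nm before measuring the spot-to-chromosome distance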
_sel_cands_list.append(_cands[(_dists < dist_th)])
_chr_cand_per += 1
_chr_cand_per *= 1/len(_cand_hzxys)
# append
if _chr_cand_per >= good_chr_th:
sel_dna_cand_hzxys_list.append(_sel_cands_list)
sel_chrom_coords.append(_chrom_coord)
sel_chr_cand_pers.append(_chr_cand_per)
chr_cand_pers.append(_chr_cand_per)
print(f"kept chromosomes: {len(sel_chrom_coords)}") | kept chromosomes: 522
| MIT | 5kb_DNA_analysis/single_fov/20210320_updated_single_fov_IgH_batch1_proB_dox+_2color.ipynb | shiwei23/Chromatin_Analysis_Scripts |
EM pick spots | %matplotlib inline
reload(spot_tools.picking)
from ImageAnalysis3.spot_tools.picking import _maximize_score_spot_picking_of_chr, pick_spots_by_intensities,pick_spots_by_scores, generate_reference_from_population, evaluate_differences
niter= 10
num_threads = 24
ref_chr_cts = None
# initialize
init_dna_hzxys = pick_spots_by_intensities(sel_dna_cand_hzxys_list)
# set save list
sel_dna_hzxys_list, sel_dna_scores_list, all_dna_scores_list = [init_dna_hzxys], [], []
for _iter in range(niter):
print(f"+ iter:{_iter}")
# E: generate reference
ref_ct_dists, ref_local_dists, ref_ints = generate_reference_from_population(
sel_dna_hzxys_list[-1], dna_reg_ids,
sel_dna_hzxys_list[-1], dna_reg_ids,
ref_channels=dna_reg_channels,
ref_chr_cts=ref_chr_cts,
num_threads=num_threads,
collapse_regions=True,
split_channels=True,
verbose=True,
)
plt.figure(figsize=(4,2), dpi=100)
for _k, _v in ref_ct_dists.items():
plt.hist(np.array(_v), bins=np.arange(0,2500,50), alpha=0.5, label=_k)
plt.legend(fontsize=8)
plt.title('center dist', fontsize=8)
plt.show()
plt.figure(figsize=(4,2), dpi=100)
for _k, _v in ref_local_dists.items():
plt.hist(np.array(_v), bins=np.arange(0,2500,50), alpha=0.5, label=_k)
plt.legend(fontsize=8)
plt.title('local dist', fontsize=8)
plt.show()
plt.figure(figsize=(4,2), dpi=100)
for _k, _v in ref_ints.items():
plt.hist(np.array(_v), bins=np.arange(0,5000,100), alpha=0.5, label=_k)
plt.legend(fontsize=8)
plt.title('intensity', fontsize=8)
plt.show()
# M: pick based on scores
sel_hzxys_list, sel_scores_list, all_scores_list, other_scores_list = \
pick_spots_by_scores(
sel_dna_cand_hzxys_list, dna_reg_ids,
ref_channels=dna_reg_channels,
ref_hzxys_list=sel_dna_hzxys_list[-1], ref_ids=dna_reg_ids,
ref_ct_dists=ref_ct_dists, ref_local_dists=ref_local_dists, ref_ints=ref_ints,
ref_chr_cts=ref_chr_cts,
num_threads=num_threads,
collapse_regions=True,
split_intensity_channels=True,
split_distance_channels=False,
return_other_scores=True,
verbose=True,
)
# check updating rate
update_rate = evaluate_differences(sel_hzxys_list, sel_dna_hzxys_list[-1])
print(f"-- region kept: {update_rate:.4f}")
# append
sel_dna_hzxys_list.append(sel_hzxys_list)
sel_dna_scores_list.append(sel_scores_list)
all_dna_scores_list.append(all_scores_list)
plt.figure(figsize=(4,2), dpi=100)
plt.hist(np.concatenate([np.concatenate(_scores)
for _scores in other_scores_list]),
bins=np.arange(-15, 0, 0.5), alpha=0.5, label='unselected')
plt.hist(np.ravel([np.array(_sel_scores)
for _sel_scores in sel_dna_scores_list[-1]]),
bins=np.arange(-15, 0, 0.5), alpha=0.5, label='selected')
plt.legend(fontsize=8)
plt.show()
if update_rate > 0.998:
break
%%timeit
spot_tools.picking.chromosome_center_dists(sel_dna_hzxys_list[0][0], ref_channels=dna_reg_channels, split_channels=False)
%%timeit
spot_tools.picking.chromosome_center_dists(sel_dna_hzxys_list[0][0], ref_channels=dna_reg_channels, split_channels=True)
from scipy.spatial.distance import pdist, squareform
sel_iter = -1
final_dna_hzxys_list = []
kept_chr_ids = []
distmap_list = []
score_th = -5
int_th = 200
bad_spot_percentage = 1.0 #0.5
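# with bad_spot_percentage = 1.0 a chromosome is dropped only if every region is missing;
# lowering it (e.g. 0.5) keeps only chromosomes with fewer than that fraction of missing regions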
for _hzxys, _scores in zip(sel_dna_hzxys_list[sel_iter], sel_dna_scores_list[sel_iter]):
_kept_hzxys = np.array(_hzxys).copy()
# remove spots by intensity
_bad_inds = _kept_hzxys[:,0] < int_th
# remove spots by scores
_bad_inds += _scores < score_th
#print(np.mean(_bad_inds))
_kept_hzxys[_bad_inds] = np.nan
if np.mean(np.isnan(_kept_hzxys).sum(1)>0)<bad_spot_percentage:
kept_chr_ids.append(True)
final_dna_hzxys_list.append(_kept_hzxys)
distmap_list.append(squareform(pdist(_kept_hzxys[:,1:4])))
else:
kept_chr_ids.append(False)
kept_chr_ids = np.array(kept_chr_ids, dtype=np.bool)
#kept_chrom_coords = np.array(sel_chrom_coords)[kept_chr_ids]
distmap_list = np.array(distmap_list)
median_distmap = np.nanmedian(distmap_list, axis=0)
loss_rates = np.mean(np.sum(np.isnan(final_dna_hzxys_list), axis=2)>0, axis=0)
print(np.mean(loss_rates))
fig, ax = plt.subplots(figsize=(4,2),dpi=200)
ax.plot(loss_rates, '.-')
ax.set_ylim([0,1])
ax.set_xticks(np.arange(0,len(dna_reg_ids),int(len(dna_reg_ids)/5)))
plt.show()
imaging_order = []
for _fd, _infos in fov.color_dic.items():
for _info in _infos:
if len(_info) > 0 and _info[0] == 'u':
if int(_info[1:]) in dna_reg_ids:
imaging_order.append(list(dna_reg_ids).index(int(_info[1:])))
imaging_order = np.array(imaging_order, dtype=np.int)
#kept_inds = imaging_order # plot imaging ordered regions
#kept_inds = np.where(loss_rates<0.5)[0] # plot good regions only
kept_inds = np.arange(len(fov.unique_ids)) # plot all
%matplotlib inline
fig, ax = plt.subplots(figsize=(4,3),dpi=200)
ax = ia.figure_tools.distmap.plot_distance_map(median_distmap[kept_inds][:,kept_inds],
color_limits=[0,600],
ax=ax,
ticks=np.arange(0,150,20),
figure_dpi=500)
ax.set_title(f"v-Abl ProB iaa_dox_STI+, n={len(distmap_list)}", fontsize=7.5)
_ticks = np.arange(0, len(kept_inds), 20)
ax.set_xticks(_ticks)
ax.set_xticklabels(dna_reg_ids[kept_inds][_ticks])
ax.set_xlabel(f"5kb region id", fontsize=7, labelpad=2)
ax.set_yticks(_ticks)
ax.set_yticklabels(dna_reg_ids[kept_inds][_ticks])
ax.set_ylabel(f"5kb region id", fontsize=7, labelpad=2)
ax.axvline(x=np.where(fov.unique_ids[kept_inds]>300)[0][0], color=[1,1,0])
ax.axhline(y=np.where(fov.unique_ids[kept_inds]>300)[0][0], color=[1,1,0])
plt.gcf().subplots_adjust(bottom=0.1)
plt.show() | _____no_output_____ | MIT | 5kb_DNA_analysis/single_fov/20210320_updated_single_fov_IgH_batch1_proB_dox+_2color.ipynb | shiwei23/Chromatin_Analysis_Scripts |
visualize single example | %matplotlib inline
reload(figure_tools.image)
chrom_id = 3
import matplotlib
import copy
sc_cmap = copy.copy(matplotlib.cm.get_cmap('seismic_r'))
sc_cmap.set_bad(color=[0.5,0.5,0.5,1])
#valid_inds = np.where(np.isnan(final_dna_hzxys_list[chrom_id]).sum(1) == 0)[0]
valid_inds = np.ones(len(final_dna_hzxys_list[chrom_id]), dtype=np.bool) # all spots
fig, ax = plt.subplots(figsize=(4,3),dpi=200)
ax = ia.figure_tools.distmap.plot_distance_map(
distmap_list[chrom_id][valid_inds][:,valid_inds],
color_limits=[0,600],
ax=ax,
cmap=sc_cmap,
ticks=np.arange(0,150,20),
figure_dpi=200)
ax.set_title(f"proB DMSO chrom: {chrom_id}", fontsize=7.5)
plt.gcf().subplots_adjust(bottom=0.1)
plt.show()
ax3d = figure_tools.image.chromosome_structure_3d_rendering(
final_dna_hzxys_list[chrom_id][valid_inds, 1:],
marker_edge_line_width=0,
reference_bar_length=200, image_radius=300,
line_width=0.5, figure_dpi=300, depthshade=False)
plt.show()
?figure_tools.image.chromosome_structure_3d_rendering | _____no_output_____ | MIT | 5kb_DNA_analysis/single_fov/20210320_updated_single_fov_IgH_batch1_proB_dox+_2color.ipynb | shiwei23/Chromatin_Analysis_Scripts |
visualize all fitted spots | with h5py.File(fov.save_filename, "r", libver='latest') as _f:
_grp = _f['unique']
raw_spots_list = [_spots[_spots[:,0] > 0] for _spots in _grp['raw_spots'][:]]
spots_list = [_spots[_spots[:,0] > 0] for _spots in _grp['spots'][:]]
from scipy.spatial.distance import cdist
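# for each region, map every picked hzxy back to the index of the matching candidate spot
# (a nearest-neighbor distance below 0.01 nm is treated as an exact match; NaN marks dropped spots)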
picked_spot_inds_list = []
for _i, _id in enumerate(dna_reg_ids):
_cand_hzxys = spots_list[_i][:,1:4] * fov.shared_parameters['distance_zxy']
_dists = cdist(np.array(final_dna_hzxys_list)[:,_i,1:], _cand_hzxys)#, axis=1
_matched_spot_inds = []
for _ds in _dists:
if np.sum(np.isnan(_ds)) < len(_ds) and np.nanmin(_ds) < 0.01:
_matched_spot_inds.append(np.argmin(_ds))
else:
_matched_spot_inds.append(np.nan)
# append
picked_spot_inds_list.append(np.array(_matched_spot_inds))
#vis_inds = [0,1,2,3,4,5]
vis_inds = np.where(loss_rates > 0.8)[0]
vis_ims, vis_ids, vis_spot_list, vis_raw_spot_list = [], [], [], []
with h5py.File(fov.save_filename, "r", libver='latest') as _f:
_grp = _f['unique']
for _ind in vis_inds:
vis_ims.append(_grp['ims'][_ind])
vis_ids.append(_grp['ids'][_ind])
_picked_inds = picked_spot_inds_list[_ind]
_picked_inds = np.array(_picked_inds[np.isnan(_picked_inds)==False], dtype=np.int)
vis_spot_list.append(raw_spots_list[_ind][_picked_inds])
dna_reg_ids[59]
fov.color_dic
# visualize_all_chromosomes
%matplotlib notebook
%matplotlib notebook
## visualize
coord_dict = {'coords':[],
'class_ids':[],
}
for _i, _spots in enumerate(vis_spot_list):
coord_dict['coords'] += list(np.flipud(_spot[1:4]) for _spot in _spots)
coord_dict['class_ids'] += list(_i * np.ones(len(_spots),dtype=np.int))
fig=plt.figure(figsize=(4,6), dpi=150)
visual_tools.imshow_mark_3d_v2(vis_ims,
fig=fig,
given_dic=coord_dict,
save_file=None,
) | _____no_output_____ | MIT | 5kb_DNA_analysis/single_fov/20210320_updated_single_fov_IgH_batch1_proB_dox+_2color.ipynb | shiwei23/Chromatin_Analysis_Scripts |
CIFAR10 is another dataset; like MNIST it has ten classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck) https://www.cs.toronto.edu/~kriz/cifar.html | import keras
from keras.models import Sequential
from PIL import Image
import numpy as np
import tarfile
# load the dataset
# there are only train and test batches, no separate validation set
import pickle
train_X=[]
train_y=[]
tar_gz = "../Week06/cifar-10-python.tar.gz"
with tarfile.open(tar_gz) as tarf:
for i in range(1, 6):
dataset = "cifar-10-batches-py/data_batch_%d"%i
print("load",dataset)
with tarf.extractfile(dataset) as f:
result = pickle.load(f, encoding='latin1')
train_X.extend(result['data']/255)
train_y.extend(result['labels'])
train_X=np.float32(train_X)
train_y=np.int32(train_y)
dataset = "cifar-10-batches-py/test_batch"
print("load",dataset)
with tarf.extractfile(dataset) as f:
result = pickle.load(f, encoding='latin1')
test_X=np.float32(result['data']/255)
test_y=np.int32(result['labels'])
train_Y = np.eye(10)[train_y]
test_Y = np.eye(10)[test_y]
validation_data = (test_X[:1000], test_Y[:1000])
test_data = (test_X[1000:], test_Y[1000:])
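# the first 1000 test images serve as a validation set during training; the remaining 9000 are kept for the final test score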
from IPython.display import display
def showX(X):
int_X = (X*255).clip(0,255).astype('uint8')
# N*3072 -> N*3*32*32 -> 32 * 32N * 3
int_X_reshape = np.moveaxis(int_X.reshape(-1,3,32,32), 1, 3)
int_X_reshape = int_X_reshape.swapaxes(0,1).reshape(32,-1, 3)
display(Image.fromarray(int_X_reshape))
# training data: show the first 20 samples of X
showX(train_X[:20])
print(train_y[:20])
name_array = np.array("飛機、汽車、鳥、貓、鹿、狗、青蛙、馬、船、卡車".split('、'))
print(name_array[train_y[:20]]) | _____no_output_____ | MIT | Week11/01-CIFAR10.ipynb | HowardNTUST/HackNTU_Data_2017 |
Let's apply the CNN model from before and see how it does | # %load ../Week06/q_cifar10_cnn.py
import keras
from keras.layers import Dense, Activation, Conv2D, MaxPool2D, Reshape
model = Sequential()
model.add(Reshape((3, 32, 32), input_shape=(3*32*32,) ))
model.add(Conv2D(filters=32, kernel_size=(3,3), padding='same', activation="relu", data_format='channels_first'))
model.add(MaxPool2D())
model.add(Conv2D(filters=64, kernel_size=(3,3), padding='same', activation="relu", data_format='channels_first'))
model.add(MaxPool2D())
model.add(Reshape((-1,)))
model.add(Dense(units=1024, activation="relu"))
model.add(Dense(units=10, activation="softmax"))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(train_X, train_Y, validation_data=validation_data, batch_size=100, epochs=10)
rtn = model.evaluate(*test_data)
print("\ntest accuracy=", rtn[1])
showX(test_X[:15])
predict_y = model.predict_classes(test_X[:15], verbose=False)
print(name_array[predict_y])
print(name_array[test_y[:15]])
import keras
from keras.layers import Dense, Activation, Conv2D, MaxPool2D, Reshape, Dropout
model = Sequential()
model.add(Reshape((3, 32, 32), input_shape=(3*32*32,) ))
model.add(Conv2D(32, 3, padding='same', activation="relu", data_format='channels_first'))
model.add(MaxPool2D())
model.add(Conv2D(64, 3, padding='same', activation="relu", data_format='channels_first'))
model.add(MaxPool2D())
model.add(Reshape((-1,)))
model.add(Dense(units=1024, activation="relu"))
model.add(Dropout(rate=0.4))
model.add(Dense(units=10, activation="softmax"))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(train_X, train_Y, validation_data=validation_data, batch_size=100, epochs=10)
rtn = model.evaluate(*test_data)
print("\ntest accuracy=", rtn[1])
model.fit(train_X, train_Y, validation_data=validation_data, batch_size=100, epochs=10)
rtn = model.evaluate(*test_data)
print("\ntest accuracy=", rtn[1])
showX(test_X[:15])
predict_y = model.predict_classes(test_X[:15], verbose=False)
print(name_array[predict_y])
print(name_array[test_y[:15]]) | _____no_output_____ | MIT | Week11/01-CIFAR10.ipynb | HowardNTUST/HackNTU_Data_2017 |
Different activation functions: https://keras.io/activations/ | # first define a small helper
def add_layers(model, *layers):
for l in layers:
model.add(l)
import keras
from keras.engine.topology import Layer
from keras.layers import Dense, Activation, Conv2D, MaxPool2D, Reshape, Dropout
def MyConv2D(filters, kernel_size, **kwargs):
return (Conv2D(filters=filters, kernel_size=kernel_size,
padding='same', data_format='channels_first', **kwargs),
Activation("elu"))
model = Sequential()
add_layers( model,
Reshape((3, 32, 32), input_shape=(3*32*32,)),
*MyConv2D(32, 3),
MaxPool2D(),
*MyConv2D(64, 3),
MaxPool2D(),
Reshape((-1,)),
Dense(units=1024, activation="elu"),
Dropout(rate=0.4),
Dense(units=10, activation="softmax")
)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(train_X, train_Y, validation_data=validation_data, batch_size=100, epochs=10)
rtn = model.evaluate(*test_data)
print("\ntest accuracy=", rtn[1])
import keras
from keras.layers import Dense, Activation, Conv2D, MaxPool2D, Reshape, Dropout, BatchNormalization
# preprocess the data first
#GCN
train_X_mean = np.mean(train_X, axis=0, keepdims=True)
train_X_std = np.std(train_X, axis=0, keepdims=True)
preprocessed_train_X = (train_X-train_X_mean)/train_X_std
preprocessed_test_X = (test_X-train_X_mean)/train_X_std
preprocessed_validation_data = (preprocessed_test_X[:1000], test_Y[:1000])
preprocessed_test_data = (preprocessed_test_X[1000:], test_Y[1000:])
def MyConv2D(filters, kernel_size, **kwargs):
return (Conv2D(filters=filters, kernel_size=kernel_size,
padding='same', data_format='channels_first', **kwargs),
Activation("relu"))
def add_layers(model, *layers):
for l in layers:
model.add(l)
model = Sequential()
add_layers( model,
Reshape((3, 32, 32), input_shape=(3*32*32,)),
*MyConv2D(32, 3),
MaxPool2D(),
*MyConv2D(64, 3),
MaxPool2D(),
Reshape((-1,)),
Dense(units=1024, activation="relu"),
Dropout(rate=0.4),
Dense(units=10, activation="softmax")
)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(preprocessed_train_X, train_Y, validation_data=preprocessed_validation_data,
batch_size=100, epochs=10)
rtn = model.evaluate(*preprocessed_test_data)
print("\ntest accuracy=", rtn[1])
import keras
from keras.layers import Dense, Activation, Conv2D, MaxPool2D, Reshape, Dropout, BatchNormalization
# preprocess the data first
#GCN
train_X_mean = np.mean(train_X, axis=0, keepdims=True)
train_X_std = np.std(train_X, axis=0, keepdims=True)
preprocessed_train_X = (train_X-train_X_mean)/train_X_std
preprocessed_test_X = (test_X-train_X_mean)/train_X_std
preprocessed_validation_data = (preprocessed_test_X[:1000], test_Y[:1000])
preprocessed_test_data = (preprocessed_test_X[1000:], test_Y[1000:])
def MyConv2D(filters, kernel_size, **kwargs):
return (Conv2D(filters=filters, kernel_size=kernel_size,
padding='same', data_format='channels_first', **kwargs),
BatchNormalization(axis=1),
Activation("relu"))
def add_layers(model, *layers):
for l in layers:
model.add(l)
model = Sequential()
add_layers( model,
Reshape((3, 32, 32), input_shape=(3*32*32,)),
*MyConv2D(32, 3),
MaxPool2D(),
*MyConv2D(64, 3),
MaxPool2D(),
Reshape((-1,)),
Dense(units=1024, activation="relu"),
Dropout(0.4),
Dense(units=10, activation="softmax")
)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(preprocessed_train_X, train_Y, validation_data=preprocessed_validation_data,
batch_size=100, epochs=10)
rtn = model.evaluate(*preprocessed_test_data)
print("\ntest accuracy=", rtn[1])
model.fit(preprocessed_train_X, train_Y, validation_data=preprocessed_validation_data,
batch_size=100, epochs=10)
rtn = model.evaluate(*preprocessed_test_data)
print("\ntest accuracy=", rtn[1])
def zca_whitening_matrix(X):
"""
Function to compute ZCA whitening matrix (aka Mahalanobis whitening).
INPUT: X: [M x N] matrix.
Rows: Variables
Columns: Observations
OUTPUT: ZCAMatrix: [M x M] matrix
"""
X = X.T
# Covariance matrix [column-wise variables]: Sigma = (X-mu)' * (X-mu) / N
sigma = np.cov(X, rowvar=True) # [M x M]
# Singular Value Decomposition. X = U * np.diag(S) * V
U,S,V = np.linalg.svd(sigma)
# U: [M x M] eigenvectors of sigma.
# S: [M x 1] eigenvalues of sigma.
# V: [M x M] transpose of U
# Whitening constant: prevents division by zero
epsilon = 1e-5
# ZCA Whitening matrix: U * Lambda * U'
ZCAMatrix = np.dot(U, np.dot(np.diag(1.0/np.sqrt(S + epsilon)), U.T)) # [M x M]
return ZCAMatrix.T
# ZCAMatrix = zca_whitening_matrix(X0)
# new_train_X= ((train_X-train_X_mean)/train_X_std) @ ZCAMatrix
# see https://keras.io/preprocessing/image/
# reshape the input into a rank-4 tensor
train_X = train_X.reshape(-1, 3, 32, 32)
test_X = test_X.reshape(-1, 3, 32, 32)
def MyConv2D(filters, kernel_size, **kwargs):
return (Conv2D(filters=filters, kernel_size=kernel_size,
padding='same', data_format='channels_first', **kwargs),
BatchNormalization(axis=1),
Activation("relu"))
model = Sequential()
add_layers( model,
*MyConv2D(32, 3, input_shape=(3,32,32)),
MaxPool2D(),
*MyConv2D(64, 3),
MaxPool2D(),
Reshape((-1,)),
Dense(units=1024, activation="relu"),
Dropout(0.4),
Dense(units=10, activation="softmax")
)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# use Keras's built-in image preprocessing
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
zca_whitening=True,
data_format="channels_first")
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(train_X)
p_train_X, p_train_Y = datagen.flow(train_X, train_Y, batch_size=len(train_X), shuffle=False).next()
# the sample order is unchanged
assert (p_train_Y == train_Y).all()
p_test_X, p_test_Y = datagen.flow(test_X, test_Y, batch_size=len(test_X), shuffle=False).next()
# the sample order is unchanged
assert (p_test_Y == test_Y).all()
# these two are no longer needed
del p_train_Y, p_test_Y
p_validation_data = (p_test_X[:1000], test_Y[:1000])
p_test_data = (p_test_X[1000:], test_Y[1000:])
model.fit(p_train_X, train_Y, validation_data=p_validation_data,
batch_size=100, epochs=10)
rtn = model.evaluate(*p_test_data)
print("\ntest accuracy=", rtn[1])
| Train on 50000 samples, validate on 1000 samples
Epoch 1/10
50000/50000 [==============================] - 59s - loss: 4.1387 - acc: 0.2103 - val_loss: 1.9535 - val_acc: 0.2730
Epoch 2/10
50000/50000 [==============================] - 58s - loss: 1.6590 - acc: 0.3671 - val_loss: 1.3423 - val_acc: 0.5450
Epoch 3/10
50000/50000 [==============================] - 59s - loss: 1.4440 - acc: 0.4645 - val_loss: 1.2077 - val_acc: 0.5660
Epoch 4/10
50000/50000 [==============================] - 59s - loss: 1.3028 - acc: 0.5202 - val_loss: 1.0343 - val_acc: 0.6300
Epoch 5/10
50000/50000 [==============================] - 58s - loss: 1.2161 - acc: 0.5540 - val_loss: 1.1199 - val_acc: 0.6130
Epoch 6/10
50000/50000 [==============================] - 58s - loss: 1.1516 - acc: 0.5752 - val_loss: 1.0667 - val_acc: 0.6310
Epoch 7/10
50000/50000 [==============================] - 57s - loss: 1.1062 - acc: 0.5923 - val_loss: 1.0645 - val_acc: 0.6160
Epoch 8/10
50000/50000 [==============================] - 57s - loss: 1.0729 - acc: 0.6030 - val_loss: 1.0230 - val_acc: 0.6330
Epoch 9/10
50000/50000 [==============================] - 58s - loss: 1.0288 - acc: 0.6178 - val_loss: 0.9881 - val_acc: 0.6540
Epoch 10/10
50000/50000 [==============================] - 57s - loss: 1.0024 - acc: 0.6277 - val_loss: 1.0081 - val_acc: 0.6570
8960/9000 [============================>.] - ETA: 0s
test accuracy= 0.660666666667
| MIT | Week11/01-CIFAR10.ipynb | HowardNTUST/HackNTU_Data_2017 |
使用動態資料處理```python fits the model on batches with real-time data augmentation:train_generator = datagen.flow(train_X, train_Y, batch_size=100, shuffle=False)test_generator = datagen.flow(*test_data, batch_size=100, shuffle=False)model.fit_generator(train_generator, steps_per_epoch=len(train_X), validation_data=datagen.flow(*validation_data, batch_size=100), validation_steps=1000, epochs=10)rtn = model.evaluate_generator(test_generator, steps=9000)``` | import keras
from keras.layers import Dense, Activation, Conv2D, MaxPool2D, Reshape, Dropout, BatchNormalization
# reshape the input into a rank-4 tensor
train_X = train_X.reshape(-1, 3, 32, 32)
test_X = test_X.reshape(-1, 3, 32, 32)
def MyConv2D(filters, kernel_size, **kwargs):
return (Conv2D(filters=filters, kernel_size=kernel_size,
padding='same', data_format='channels_first', **kwargs),
BatchNormalization(axis=1),
Activation("elu"))
model = Sequential()
add_layers( model,
*MyConv2D(64, 3, input_shape=(3,32,32)),
*MyConv2D(64, 3),
MaxPool2D(),
*MyConv2D(128, 3),
*MyConv2D(128, 3),
MaxPool2D(),
*MyConv2D(256, 3),
*MyConv2D(256, 3),
Reshape((-1,)),
Dense(units=1024),
BatchNormalization(),
Activation("elu"),
Dropout(0.4),
Dense(units=10, activation="softmax")
)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# use Keras's built-in image preprocessing
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
zca_whitening=True,
data_format="channels_first")
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(train_X)
p_train_X, p_train_Y = datagen.flow(train_X, train_Y, batch_size=len(train_X), shuffle=False).next()
# the sample order is unchanged
assert (p_train_Y == train_Y).all()
p_test_X, p_test_Y = datagen.flow(test_X, test_Y, batch_size=len(test_X), shuffle=False).next()
# the sample order is unchanged
assert (p_test_Y == test_Y).all()
# these two are no longer needed
del p_train_Y, p_test_Y
p_validation_data = (p_test_X[:1000], test_Y[:1000])
p_test_data = (p_test_X[1000:], test_Y[1000:])
model.fit(p_train_X, train_Y, validation_data=p_validation_data,
batch_size=100, epochs=10)
rtn = model.evaluate(*p_test_data)
print("\ntest accuracy=", rtn[1])
model.fit(p_train_X, train_Y, validation_data=p_validation_data,
batch_size=100, epochs=10)
rtn = model.evaluate(*p_test_data) | _____no_output_____ | MIT | Week11/01-CIFAR10.ipynb | HowardNTUST/HackNTU_Data_2017 |
lasagne 中相同的```python_ = InputLayer(shape=(None, 3*32*32), input_var=input_var)_ = DropoutLayer(_, 0.2)_ = ReshapeLayer(_, ([0], 3, 32, 32))_ = conv(_, 96, 3)_ = conv(_, 96, 3)_ = MaxPool2DDNNLayer(_, 3, 2)_ = DropoutLayer(_, 0.5)_ = conv(_, 192, 3)_ = conv(_, 192, 3)_ = MaxPool2DDNNLayer(_, 3, 2)_ = DropoutLayer(_, 0.5)_ = conv(_, 192, 3)_ = conv(_, 192, 1)_ = conv(_, 10, 1)_ = Pool2DDNNLayer(_, 7, mode='average_exc_pad')_ = FlattenLayer(_)l_out = NonlinearityLayer(_, nonlinearity=lasagne.nonlinearities.softmax)``` | import keras
from keras.layers import Dense, Activation, Conv2D, MaxPool2D, Reshape, Dropout, BatchNormalization, GlobalAveragePooling2D
# reshape the input into a rank-4 tensor
train_X = train_X.reshape(-1, 3, 32, 32)
test_X = test_X.reshape(-1, 3, 32, 32)
def MyConv2D(filters, kernel_size, **kwargs):
return (Conv2D(filters=filters, kernel_size=kernel_size,
padding='same', data_format='channels_first', **kwargs),
BatchNormalization(axis=1, momentum=0.9),
Activation("relu"))
model = Sequential()
add_layers( model,
Dropout(0.2, input_shape=(3,32,32)),
*MyConv2D(96, 3),
*MyConv2D(96, 3),
MaxPool2D(3, 2),
*MyConv2D(192, 3),
*MyConv2D(192, 3),
MaxPool2D(3, 2),
Dropout(0.5),
*MyConv2D(192, 3),
*MyConv2D(192, 1),
*MyConv2D(10, 1),
GlobalAveragePooling2D(data_format='channels_first'),
Activation("softmax")
)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(p_train_X, train_Y, validation_data=p_validation_data,
batch_size=100, epochs=50)
rtn = model.evaluate(*p_test_data)
print("\ntest accuracy=", rtn[1])
import keras
from keras.layers import Dense, Activation, Conv2D, MaxPool2D, Reshape, Dropout, BatchNormalization, GlobalAveragePooling2D
# reshape the input into a rank-4 tensor
train_X = train_X.reshape(-1, 3, 32, 32)
test_X = test_X.reshape(-1, 3, 32, 32)
def MyConv2D(filters, kernel_size, **kwargs):
return (Conv2D(filters=filters, kernel_size=kernel_size,
padding='same', data_format='channels_first', **kwargs),
BatchNormalization(axis=1, momentum=0.9),
Activation("relu"))
model = Sequential()
add_layers( model,
Dropout(0.2, input_shape=(3,32,32)),
*MyConv2D(96, 3),
*MyConv2D(96, 3),
MaxPool2D(3, 2),
*MyConv2D(192, 3),
*MyConv2D(192, 3),
MaxPool2D(3, 2),
Dropout(0.5),
*MyConv2D(192, 3),
*MyConv2D(192, 1),
*MyConv2D(10, 1),
GlobalAveragePooling2D(data_format='channels_first'),
Activation("softmax")
)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# 使用 keras 的功能
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
zca_whitening=True,
data_format="channels_first")
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(train_X)
p_train_X, p_train_Y = datagen.flow(train_X, train_Y, batch_size=len(train_X), shuffle=False).next()
# 順序都沒變
assert (p_train_Y == train_Y).all()
p_test_X, p_test_Y = datagen.flow(test_X, test_Y, batch_size=len(test_X), shuffle=False).next()
# 順序都沒變
assert (p_test_Y == test_Y).all()
# 不需要這兩個
del p_train_Y, p_test_Y
p_validation_data = (p_test_X[:1000], test_Y[:1000])
p_test_data = (p_test_X[1000:], test_Y[1000:])
model.fit(p_train_X, train_Y, validation_data=p_validation_data,
batch_size=100, epochs=10)
rtn = model.evaluate(*p_test_data)
print("\ntest accuracy=", rtn[1])
| _____no_output_____ | MIT | Week11/01-CIFAR10.ipynb | HowardNTUST/HackNTU_Data_2017 |
Preprocess the data and dump to a new file | DATA_PATH = '/fast/ankitesh/data/'
TRAINFILE = 'CI_SP_M4K_train_shuffle.nc'
VALIDFILE = 'CI_SP_M4K_valid.nc'
NORMFILE = 'CI_SP_M4K_NORM_norm.nc'
percentile_path='/export/nfs0home/ankitesg/data/percentile_data.pkl'
data_name='M4K'
bin_size = 1000
scale_dict = load_pickle('/export/nfs0home/ankitesg/CBrain_project/CBRAIN-CAM/nn_config/scale_dicts/009_Wm2_scaling.pkl')
percentile_bins = load_pickle(percentile_path)['Percentile'][data_name]
enc = OneHotEncoder(sparse=False)
classes = np.arange(bin_size+2)
enc.fit(classes.reshape(-1,1))
data_ds = xr.open_dataset(f"{DATA_PATH}{TRAINFILE}")
n = data_ds['vars'].shape[0]
data_ds
coords = list(data_ds['vars'].var_names.values)
coords = coords + ['PHQ_BIN']*30+['TPHYSTND_BIN']*30+['FSNT_BIN','FSNS_BIN','FLNT_BIN','FLNS_BIN']
def _transform_to_one_hot(Y):
'''
return shape = batch_size X 64 X bin_size
'''
Y_trans = []
out_vars = ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS']
var_dict = {}
var_dict['PHQ'] = Y[:,:30]
var_dict['TPHYSTND'] = Y[:,30:60]
var_dict['FSNT'] = Y[:,60]
var_dict['FSNS'] = Y[:,61]
var_dict['FLNT'] = Y[:,62]
var_dict['FLNS'] = Y[:,63]
perc = percentile_bins
for var in out_vars[:2]:
all_levels_one_hot = []
for ilev in range(30):
bin_index = np.digitize(var_dict[var][:,ilev],perc[var][ilev])
one_hot = enc.transform(bin_index.reshape(-1,1))
all_levels_one_hot.append(one_hot)
var_one_hot = np.stack(all_levels_one_hot,axis=1)
Y_trans.append(var_one_hot)
for var in out_vars[2:]:
bin_index = np.digitize(var_dict[var][:], perc[var])
one_hot = enc.transform(bin_index.reshape(-1,1))[:,np.newaxis,:]
Y_trans.append(one_hot)
Y_concatenated = np.concatenate(Y_trans,axis=1)
return Y_concatenated
inp_vars = ['QBP','TBP','PS', 'SOLIN', 'SHFLX', 'LHFLX']
inp_coords = coords[:64]
out_coords = coords[64:128]
bin_coords = list(range(bin_size+2))
all_data_arrays = []
batch_size = 4096
norm_ds = xr.open_dataset(f'{DATA_PATH}{NORMFILE}')
output_transform = DictNormalizer(norm_ds, ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS'], scale_dict)
for i in range(0,n,batch_size):
all_vars = data_ds['vars'][i:i+batch_size]
inp_vals = all_vars[:,:64]
out_vals = all_vars[:,64:128]
out_vals = output_transform.transform(out_vals)
one_hot = _transform_to_one_hot(out_vals)
sample_coords = list(range(i,i+all_vars.shape[0]))
x3 = xr.Dataset(
{
"X": (("sample", "inp_coords"),inp_vals),
"Y_raw":(("sample","out_cords"),out_vals),
"Y": (("sample", "out_coords","bin_index"), one_hot),
},
coords={"sample": sample_coords, "inp_coords": inp_coords,"out_coords":out_coords,"bin_index":bin_coords},
)
all_data_arrays.append(x3)
if(int(i/batch_size+1)%100 == 0):
print("saving this batch")
final_da = xr.combine_by_coords(all_data_arrays)
final_da.to_netcdf(f'/scratch/ankitesh/data/new_data_for_v2_{int(i/batch_size+1)}.nc')
all_data_arrays = []
print(int(i/batch_size), end='\r')
final_da = xr.combine_by_coords(all_data_arrays)
final_da
data_ds = xr.open_dataset(f"{DATA_PATH}{VALIDFILE}")
n = data_ds['vars'].shape[0]
coords = list(data_ds['vars'].var_names.values)
coords = coords + ['PHQ_BIN']*30+['TPHYSTND_BIN']*30+['FSNT_BIN','FSNS_BIN','FLNT_BIN','FLNS_BIN']
all_data_arrays = []
batch_size = 4096
norm_ds = xr.open_dataset(f'{DATA_PATH}{NORMFILE}')
output_transform = DictNormalizer(norm_ds, ['PHQ','TPHYSTND','FSNT', 'FSNS', 'FLNT', 'FLNS'], scale_dict)
for i in range(0,n,batch_size):
all_vars = data_ds['vars'][i:i+batch_size]
inp_vals = all_vars[:,:64]
out_vals = all_vars[:,64:128]
out_vals = output_transform.transform(out_vals)
one_hot = _transform_to_one_hot(out_vals)
sample_coords = list(range(i,i+all_vars.shape[0]))
x3 = xr.Dataset(
{
"X": (("sample", "inp_coords"),inp_vals),
"Y_raw":(("sample","out_cords"),out_vals),
"Y": (("sample", "out_coords","bin_index"), one_hot),
},
coords={"sample": sample_coords, "inp_coords": inp_coords,"out_coords":out_coords,"bin_index":bin_coords},
)
all_data_arrays.append(x3)
if(int(i/batch_size+1)%100 == 0):
print("saving this batch")
final_da = xr.combine_by_coords(all_data_arrays)
final_da.to_netcdf(f'/scratch/ankitesh/data/new_data_valid_for_v2_{int(i/batch_size+1)}.nc')
all_data_arrays = []
print(int(i/batch_size), end='\r')
final_da = xr.combine_by_coords(all_data_arrays)
final_da.to_netcdf(f'/scratch/ankitesh/data/new_data_valid_for_v2_{int(i/batch_size+1)}.nc') | saving this batch
saving this batch
saving this batch
saving this batch
saving this batch
saving this batch
saving this batch
saving this batch
saving this batch
saving this batch
| MIT | notebooks/ankitesh-devlog/08_Preprocess_to_onehot.ipynb | ankitesh97/CBRAIN-CAM |
Slicing | df.recency
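# df.recency and df['recency'] return a Series; the double-bracket / list-of-labels forms below return a one-column DataFrame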
df['recency']
df[['recency']]
df.loc[:,'recency']
df.loc[:,['recency']]
df.iloc[:,0]
df.iloc[:,[0]] | _____no_output_____ | MIT | Module2/Manipulating_Data.ipynb | akielbowicz/DAT210x |
Boolean Indexing | df.recency < 7
df[ df.recency < 7 ]
df[ (df.recency < 7) & (df.newbie == 0) ]
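# assigning to a boolean-indexed selection overwrites every column of the matching rows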
df[df.recency < 7] = -100
ordered_satisfaction = ['Very Unhappy', 'Unhappy', 'Neutral', 'Happy', 'Very Happy']
df = pd.DataFrame({'satisfaction':['Mad', 'Happy', 'Unhappy', 'Neutral']})
df.satisfaction = df.satisfaction.astype("category",
ordered=True,
categories=ordered_satisfaction
).cat.codes
df['satisfaction']
df = pd.DataFrame({'vertebrates':[
... 'Bird',
... 'Bird',
... 'Mammal',
... 'Fish',
... 'Amphibian',
... 'Reptile',
... 'Mammal',
... ]})
df.vertebrates.astype('category').cat.codes, df.vertebrates.astype('category').cat.categories
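# .cat.codes gives a single ordinal integer column; get_dummies below expands the feature into one indicator column per category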
pd.get_dummies(df, columns=['vertebrates']) | _____no_output_____ | MIT | Module2/Manipulating_Data.ipynb | akielbowicz/DAT210x |
Feature Representation Pure Textual Features | from sklearn.feature_extraction.text import CountVectorizer
corpus = [
"Authman ran faster than Harry because he is an athlete.",
"Authman and Harry ran faster and faster.",
]
bow = CountVectorizer()
X = bow.fit_transform(corpus) # Sparse Matrix
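# X has one row per document and one column per vocabulary word; each entry counts that word's occurrences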
bow.get_feature_names()
X.toarray() | _____no_output_____ | MIT | Module2/Manipulating_Data.ipynb | akielbowicz/DAT210x |
Graphical Features | %matplotlib inline
import matplotlib.pyplot as plt
from imageio import imread
# Load the image up
img = imread('imageio:chelsea.png')
# Is the image too big? Resample it down by an order of magnitude
img = img[::2, ::2]
# Scale colors from (0-255) to (0-1), then reshape to 1D array per pixel, e.g. grayscale
# If you had color images and wanted to preserve all color channels, use .reshape(-1,3)
X = (img / 255.0).reshape(-1)
plt.imshow(img)
X.shape | _____no_output_____ | MIT | Module2/Manipulating_Data.ipynb | akielbowicz/DAT210x |
Audio Features | import scipy.io.wavfile as wavfile
sample_rate, audio_data = wavfile.read('sound.wav')
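# wavfile.read returns the sampling rate (samples per second) and a numpy array of amplitudes (one column per channel for multi-channel audio)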
print(audio_data) | _____no_output_____ | MIT | Module2/Manipulating_Data.ipynb | akielbowicz/DAT210x |
Let's try parameters from an actual simulation: REF_RL/RUN0 | """
[tud67309@login2 RUN0]$ cat LIG_res.itp
;LIG_res.itp
[ position_restraints ]
;i funct fcx fcy fcz
6 1 800 800 800
35 1 800 800 800
23 1 800 800 800
"""
# From LIG_h.pdb:
"""
ATOM 6 C28 LIG A 1 11.765 -16.536 1.909 1.00 0.00 C
...
ATOM 23 C23 LIG A 1 12.358 -10.647 7.766 1.00 0.00 C
...
ATOM 35 C17 LIG A 1 20.883 -7.674 2.314 1.00 0.00 C
"""
x0 = np.zeros((3,3))
x0[0,:] = np.array([1.1765, -1.6536, 0.1909]) # converted to nm
x0[1,:] = np.array([1.2358, -1.0647, 0.7766]) # converted to nm
x0[2,:] = np.array([2.0883, -0.7674, 0.2314]) # converted to nm
V0 = 1660.0 # in Å^3 is the standard volume
L0 = ((1660.0)**(1/3)) * 0.1 # converted to nm
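# 1660 Angstrom^3 is the volume per molecule at the 1 M standard concentration, so L0 is the edge
# of the standard-state cube used as the unrestrained translational volume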
print('L0', L0, 'nm')
ee_REF0 = EESampler_RigidThreeParticle(L=L0, x0=x0, a1=x0)
theory_dG_in_kT = ee_REF0.theory_dg_in_kT()
print('theory_dG_in_kT', theory_dG_in_kT)
print('ee_REF0.g', ee_REF0.g)
plt.figure(figsize=(10,4))
# Plot the final free energy estimates as a function of k
plt.subplot(1,2,1)
#plt.plot(ee.k_values, ee.g, 'o-', label='EE result')
plt.plot(ee_REF0.k_values[1:], theory_dG_in_kT, 'o-', label='theory')
plt.xlabel('$k$ (kJ/nm$^2$)')
plt.ylabel('$\Delta G_{rest}$ (kT)')
plt.legend(loc='best') | L0 1.1840481475428983 nm
self.d 0.8326849284092993
self.c 1.0486785581066906
self.e 0.720168877255378
self.d 0.8326849284092993
self.c 1.0486785581066906
self.e 0.720168877255378
self.L 1.1840481475428983
kc_coeff 1.7480098038626308
kp1_coeff 3.334083521100217
theory_dG_in_kT [-2.88375874 -0.80431719 1.27512435 4.02399654 6.10343808 7.31983341
8.18287963 10.93175182 13.01119336 15.09063491 17.17007645 19.24951799]
ee_REF0.g [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
| MIT | scripts/triple-harmonic-restraint.ipynb | yabmtm/position-restraints |
REF_RL/RUN1 | """
$ cat LIG_res.itp
;LIG_res.itp
[ position_restraints ]
;i funct fcx fcy fcz
11 1 800 800 800
25 1 800 800 800
2 1 800 800 800
"""
# From LIG_h.pdb:
"""
ATOM 2 C12 LIG A 1 6.050 0.774 17.871 1.00 0.00 C
ATOM 11 C15 LIG A 1 2.770 2.355 21.054 1.00 0.00 C
ATOM 25 C4 LIG A 1 13.466 -1.210 22.191 1.00 0.00 C
"""
x0 = np.zeros((3,3))
x0[0,:] = np.array([0.6050, 0.0774, 1.7871]) # converted to nm
x0[1,:] = np.array([0.2770, 0.2355, 2.1054]) # converted to nm
x0[2,:] = np.array([1.3466, -0.1210, 2.2191]) # converted to nm
V0 = 1660.0 # in Å^3 is the standard volume
L0 = ((1660.0)**(1/3)) * 0.1 # converted to nm
print('L0', L0, 'nm')
ee_REF1 = EESampler_RigidThreeParticle(L=L0, x0=x0, a1=x0)
# The *theory* for the triple-restraint rigid rotor says that
# dG/kT = -ln(   [2*pi/(beta*3*k)]^{3/2} / L^3
#              * [2*pi/(beta*2*k)]^{1/2} * [2*pi/(beta*(2 + c^2/(d/2)^2)*k)]^{1/2} / (4*pi*(d/2)^2)
#              * [2*pi/(beta*k)]^{1/2} / (2*pi*c) )
print('ee_REF1.d', ee_REF1.d)
print('ee_REF1.c', ee_REF1.c)
print('ee_REF1.L', ee_REF1.L)
k_prime_coeff = 2.0 + (ee.c/(ee.d/2.))**2.0
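# NOTE: the line above reuses ee.c and ee.d (and the block below reuses ee_REF0.k_values)
# from the earlier samplers; for a purely RUN1-based estimate these would presumably be
# ee_REF1.c, ee_REF1.d and ee_REF1.k_values instead.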
print('k_prime_coeff', k_prime_coeff)
theory_dG_in_kT = -1.0*( 3.0/2.0*np.log(2.0*np.pi*ee_REF1.RT/(3.0*ee_REF1.k_values[1:])) \
+ 1.0/2.0*np.log(2.0*np.pi*ee_REF1.RT/(2.0*ee_REF1.k_values[1:])) \
+ 1.0/2.0*np.log(2.0*np.pi*ee_REF1.RT/(k_prime_coeff*ee_REF0.k_values[1:])) \
+ 1.0/2.0*np.log(2.0*np.pi*ee_REF1.RT/ee_REF1.k_values[1:]) \
- np.log( ee_REF1.L**3 * 8.0 * (np.pi**2) * (ee_REF1.d/2.0)**2 * ee_REF1.c ) )
print('theory_dG_in_kT', theory_dG_in_kT)
print('ee_REF1.g', ee_REF1.g)
plt.figure(figsize=(10,4))
# Plot the final free energy estimates as a function of k
plt.subplot(1,2,1)
#plt.plot(ee.k_values, ee.g, 'o-', label='EE result')
plt.plot(ee_REF1.k_values[1:], theory_dG_in_kT, 'o-', label='theory')
plt.xlabel('$k$ (kJ/nm$^2$)')
plt.ylabel('$\Delta G_{rest}$ (kT)')
plt.legend(loc='best') | L0 1.1840481475428983 nm
self.d 0.4836264053998706
self.c 0.834018605392616
ee_REF1.d 0.4836264053998706
ee_REF1.c 0.834018605392616
ee_REF1.L 1.1840481475428983
k_prime_coeff 5.999999999996005
theory_dG_in_kT [-5.57122688 -3.49178533 -1.41234379 1.3365284 3.41596994 4.63236527
5.49541149 8.24428368 10.32372522 12.40316677 14.48260831 16.56204985]
ee_REF1.g [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
| MIT | scripts/triple-harmonic-restraint.ipynb | yabmtm/position-restraints |
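For readability, the free energy quoted in the code comments above can be written out explicitly. This is only a transcription of that comment (with k the restraint force constant, beta = 1/RT, d the distance between the first two restrained atoms, and c the height of the third atom above that axis), not an independent derivation:

$$
\frac{\Delta G_{\mathrm{rest}}}{k_B T} \;=\; -\ln\!\left[\;
\frac{1}{L^{3}}\left(\frac{2\pi}{3\beta k}\right)^{3/2}
\cdot\;\frac{1}{4\pi\,(d/2)^{2}}\left(\frac{2\pi}{2\beta k}\right)^{1/2}
\left(\frac{2\pi}{\beta k\,\bigl(2 + c^{2}/(d/2)^{2}\bigr)}\right)^{1/2}
\cdot\;\frac{1}{2\pi c}\left(\frac{2\pi}{\beta k}\right)^{1/2}
\right]
$$

This is exactly what the `theory_dG_in_kT` expressions in the code compute, using the identity 8*pi^2*(d/2)^2*c = 4*pi*(d/2)^2 * 2*pi*c.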
REF_RL/RUN2 | """;LIG_res.itp
[ position_restraints ]
;i funct fcx fcy fcz
13 1 800 800 800
19 1 800 800 800
9 1 800 800 800
"""
# From LIG_h.pdb:
"""
ATOM 9 C2 LIG A 1 12.189 0.731 23.852 1.00 0.00 C
ATOM 13 CL LIG A 1 14.006 -1.527 21.119 1.00 0.00 Cl
ATOM 19 C13 LIG A 1 3.244 2.176 20.610 1.00 0.00 C
"""
x0 = np.zeros((3,3))
x0[0,:] = 0.1*np.array([12.189, 0.731, 23.852]) # converted to nm
x0[1,:] = 0.1*np.array([14.006, -1.527, 21.119]) # converted to nm
x0[2,:] = 0.1*np.array([ 3.244, 2.176, 20.610]) # converted to nm
V0 = 1660.0              # standard-state volume, in Å^3
L0 = (V0**(1/3)) * 0.1   # edge length of a cube with volume V0, converted from Å to nm
print('L0', L0, 'nm')
ee_REF2 = EESampler_RigidThreeParticle(L=L0, x0=x0, a1=x0)
# The *theory* for the triple-restraint rigid rotor says that
# dG/kT = -ln(   [2*pi/(beta*3*k)]^{3/2} / L^3
#              * [2*pi/(beta*2*k)]^{1/2} * [2*pi/(beta*(2 + c^2/(d/2)^2)*k)]^{1/2} / (4*pi*(d/2)^2)
#              * [2*pi/(beta*k)]^{1/2} / (2*pi*c) )
print('ee_REF2.d', ee_REF2.d)
print('ee_REF2.c', ee_REF2.c)
print('ee_REF2.L', ee_REF2.L)
k_prime_coeff = 2.0 + (ee.c/(ee.d/2.))**2.0
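# NOTE: as in the RUN1 cell, this line reuses ee.c and ee.d (and the block below reuses
# ee_REF0.k_values) from earlier samplers; for a purely RUN2-based estimate these would
# presumably be ee_REF2.c, ee_REF2.d and ee_REF2.k_values instead.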
print('k_prime_coeff', k_prime_coeff)
theory_dG_in_kT = -1.0*( 3.0/2.0*np.log(2.0*np.pi*ee_REF2.RT/(3.0*ee_REF2.k_values[1:])) \
+ 1.0/2.0*np.log(2.0*np.pi*ee_REF2.RT/(2.0*ee_REF2.k_values[1:])) \
+ 1.0/2.0*np.log(2.0*np.pi*ee_REF2.RT/(k_prime_coeff*ee_REF0.k_values[1:])) \
+ 1.0/2.0*np.log(2.0*np.pi*ee_REF2.RT/ee_REF2.k_values[1:]) \
- np.log( ee_REF2.L**3 * 8.0 * (np.pi**2) * (ee_REF2.d/2.0)**2 * ee_REF2.c ) )
print('theory_dG_in_kT', theory_dG_in_kT)
print('ee_REF2.g', ee_REF2.g)
plt.figure(figsize=(10,4))
# Plot the final free energy estimates as a function of k
plt.subplot(1,2,1)
#plt.plot(ee.k_values, ee.g, 'o-', label='EE result')
plt.plot(ee_REF2.k_values[1:], theory_dG_in_kT, 'o-', label='theory')
plt.xlabel('$k$ (kJ/nm$^2$)')
plt.ylabel('$\Delta G_{rest}$ (kT)')
plt.legend(loc='best')
nsteps = 100000
traj = ee.sample(nsteps)
# print(traj['dhdl'])
step = traj.loc[:,'step'].values
p1 = np.zeros((step.shape[0], 3))
p1[:,0] = traj.loc[:,'x1'].values
p1[:,1] = traj.loc[:,'y1'].values
p1[:,2] = traj.loc[:,'z1'].values
p2 = np.zeros((step.shape[0], 3))
p2[:,0] = traj.loc[:,'x2'].values
p2[:,1] = traj.loc[:,'y2'].values
p2[:,2] = traj.loc[:,'z2'].values
d = traj.loc[:,'distance'].values
c = traj.loc[:,'height'].values
import matplotlib
from matplotlib import pyplot as plt
plt.figure(figsize=(10,4))
plt.subplot(1,3,1)
plt.plot(step, p1)
plt.plot(step, p2)
plt.subplot(1,3,2)
plt.plot(step, d)
plt.subplot(1,3,3)
plt.plot(step, c)
# The *theory* for the triple-restraint rigid rotor says that
# dG/kT = -ln(   [2*pi/(beta*3*k)]^{3/2} / L^3
#              * [2*pi/(beta*2*k)]^{1/2} * [2*pi/(beta*(2 + c^2/(d/2)^2)*k)]^{1/2} / (4*pi*(d/2)^2)
#              * [2*pi/(beta*k)]^{1/2} / (2*pi*c) )
print('ee.d', ee.d)
print('ee.c', ee.c)
print('ee.L', ee.L)
k_prime_coeff = 2.0 + (ee.c/(ee.d/2.))**2.0
print('k_prime_coeff', k_prime_coeff)
theory_dG_in_kT = -1.0*( 3.0/2.0*np.log(2.0*np.pi*ee.RT/(3.0*ee.k_values[1:])) \
+ 1.0/2.0*np.log(2.0*np.pi*ee.RT/(2.0*ee.k_values[1:])) \
+ 1.0/2.0*np.log(2.0*np.pi*ee.RT/(k_prime_coeff*ee.k_values[1:])) \
+ 1.0/2.0*np.log(2.0*np.pi*ee.RT/ee.k_values[1:]) \
- np.log( ee.L**3 * 8.0 * (np.pi**2) * (ee.d/2.0)**2 * ee.c ) )
print('theory_dG_in_kT', theory_dG_in_kT)
print('ee.g',ee.g)
plt.figure(figsize=(10,4))
# Plot the final free energy estimates as a function of k
plt.subplot(1,2,1)
#plt.plot(ee.k_values, ee.g, 'o-', label='EE result')
plt.plot(ee.k_values[1:], theory_dG_in_kT, 'o-', label='theory')
plt.xlabel('$k$ (kJ/nm$^2$)')
plt.ylabel('$\Delta G_{rest}$ (kT)')
plt.legend(loc='best')
# Plot the convergence of the free energy
step = traj.loc[:,'step'].values
free_energy = traj.loc[:,'free_energy'].values
plt.subplot(1,2,2)
plt.plot(step, free_energy, 'r-')
plt.xlabel('step')
plt.ylabel('$\Delta G_{rest}$ (kT)')
# plt.legend(loc='best')
| ee.d 0.5844506200868992
ee.c 0.40967777340459033
ee.L 1.1840481475428983
k_prime_coeff 3.965391840602323
theory_dG_in_kT [-6.1104699 -4.03102836 -1.95158682 0.79728538 2.87672692 4.09312225
4.95616846 7.70504066 9.7844822 11.86392374 13.94336528 16.02280683]
ee.g [ 0. 5. 5. 20. 35. 70. 75. 95. 200. 330. 605. 1045.
1725.]
| MIT | scripts/triple-harmonic-restraint.ipynb | yabmtm/position-restraints |
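Since the same theory expression is typed out three times above, a small helper like the following could reduce the copy-paste slips between the ee, ee_REF0, ee_REF1 and ee_REF2 names. This is only a sketch, assuming a sampler object that exposes the RT, L, d, c and k_values attributes used above.

```python
import numpy as np

def rigid_rotor_theory_dG_in_kT(sampler):
    """Restraint free energy (in kT) for the triple-harmonic rigid rotor,
    evaluated at every nonzero force constant in sampler.k_values."""
    k = np.asarray(sampler.k_values[1:], dtype=float)
    k_prime_coeff = 2.0 + (sampler.c / (sampler.d / 2.0))**2.0
    return -1.0 * (3.0/2.0 * np.log(2.0*np.pi*sampler.RT / (3.0*k))
                   + 1.0/2.0 * np.log(2.0*np.pi*sampler.RT / (2.0*k))
                   + 1.0/2.0 * np.log(2.0*np.pi*sampler.RT / (k_prime_coeff*k))
                   + 1.0/2.0 * np.log(2.0*np.pi*sampler.RT / k)
                   - np.log(sampler.L**3 * 8.0 * np.pi**2 * (sampler.d/2.0)**2 * sampler.c))

# e.g. rigid_rotor_theory_dG_in_kT(ee_REF1) would then use RUN1's own geometry throughout
```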
Creating a Sentiment Analysis Web App Using PyTorch and SageMaker_Deep Learning Nanodegree Program | Deployment_---Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. General OutlineRecall the general outline for SageMaker projects using a notebook instance.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.For this project, you will be following the steps in the general outline with some modifications. First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app. Step 1: Downloading the dataAs in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011. | %mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data | mkdir: cannot create directory ‘../data’: File exists
--2020-10-07 18:13:58-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 9.44MB/s in 10s
2020-10-07 18:14:08 (7.99 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
| MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
Step 2: Preparing and Processing the dataAlso, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set. | import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg']))) | IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg
| MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records. | from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return a unified training data, test data, training labels, test labets
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) | IMDb reviews (combined): train = 25000, test = 25000
| MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly. | print(train_X[100])
print(train_y[100]) | "The Straight Story" is a truly beautiful movie about an elderly man named Alvin Straight, who rides his lawnmower across the country to visit his estranged, dying brother. But that's just the basic synapsis...this movie is about so much more than that. This was Richard's Farnworth's last role before he died, and it's definitely one that he will be remembered for. He's a stubborn old man, not unlike a lot of the old men that you and I probably know. <br /><br />"The Straight Story" is a movie that everyone should watch at least once in their lives. It will reach down and touch some part of you, at least if you have a heart, it will.
1
| MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
The first step in processing the reviews is to make sure that any html tags that appear should be removed. In addition we wish to tokenize our input, that way words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis. | import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word, reusing the PorterStemmer created above
return words
import re
REPLACE_NO_SPACE = re.compile("(\.)|(\;)|(\:)|(\!)|(\')|(\?)|(\,)|(\")|(\()|(\))|(\[)|(\])")
REPLACE_WITH_SPACE = re.compile("(<br\s*/><br\s*/>)|(\-)|(\/)")
def review_to_words_2(review):
words = REPLACE_NO_SPACE.sub("", review.lower())
words = REPLACE_WITH_SPACE.sub(" ", words)
return words | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
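As a quick illustration of how the two cleaning functions differ (the sample sentence below is made up), `review_to_words` returns a list of stemmed tokens with stopwords removed, while `review_to_words_2` only lowercases and strips punctuation and `<br />` tags:

```python
sample = "This movie was <br /><br />AMAZING, and the acting was entertaining!"

print(review_to_words(sample))    # e.g. stemmed tokens such as ['movi', 'amaz', 'act', 'entertain']
print(review_to_words_2(sample))  # lowercased string with punctuation and the <br /> breaks removed
```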
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set. | # TODO: Apply review_to_words to a review (train_X[100] or any other review)
review_to_words(train_X[100]) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
**Question:** Above we mentioned that `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input? **Answer:** In addition, it 1) removes any non-alphanumeric characters (such as punctuation), which carry little useful signal; 2) removes the predefined English stopwords (is, a, of, the, to, his, ...), which are sentiment-neutral; 3) converts everything to lower case; and 4) reduces each word to its stem. The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time. | import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) | Read preprocessed data from cache file: preprocessed_data.pkl
| MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
Transform the dataIn the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews. (TODO) Create a word dictionaryTo begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'. | import numpy as np
from collections import Counter
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
    # Count how often each word appears across all reviews. Note that `data` is a list of
    # reviews, where each review is already a list of (stemmed) words.
    # (Several equivalent counting approaches were tried; the explicit loop is kept for clarity.)
    review_words = []
    for review in data:
        for word in review:
            review_words.append(word)
    word_count = Counter(review_words)

    # Sort the words so that sorted_words[0] is the most frequently appearing word and
    # sorted_words[-1] is the least frequently appearing word.
    sorted_words = sorted(word_count, key=word_count.get, reverse=True)
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict
word_dict = build_dict(train_X) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set? **Answer:** In this run the five most frequent tokens are ['one', 'come', 'cartoon', 'long', 'minut'], and it makes sense that words like these show up often in movie reviews. | # TODO: Use this space to determine the five most frequently appearing words in the training set.
list(word_dict.keys())[0:5] | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
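As a cross-check that does not rely on the dictionary's insertion order, `Counter.most_common` recomputed directly from the tokenized training reviews should report the same five words together with their counts (a small sketch):

```python
from collections import Counter

word_count = Counter(word for review in train_X for word in review)
print(word_count.most_common(5))   # should agree with list(word_dict.keys())[0:5]
```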
Save `word_dict`Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use. | data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
Transform the reviewsNow that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`. | def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pad]):
if word in word_dict:
working_sentence[word_index] = word_dict[word]
else:
working_sentence[word_index] = INFREQ
return working_sentence, min(len(sentence), pad)
def convert_and_pad_data(word_dict, data, pad=500):
result = []
lengths = []
for sentence in data:
converted, leng = convert_and_pad(word_dict, sentence, pad)
result.append(converted)
lengths.append(leng)
return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
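To sanity-check the encoding, a small sketch like this inverts `word_dict` and maps one padded training example back to (approximate) words; anything encoded as 1 was an infrequent word and 0 is padding:

```python
# Build the inverse mapping; 0 and 1 are the reserved 'no word' / 'infrequent' labels
inv_word_dict = {idx: word for word, idx in word_dict.items()}
inv_word_dict[0] = ''
inv_word_dict[1] = '<infrequent>'

example, example_len = train_X[100], train_X_len[100]
print(example_len, ' '.join(inv_word_dict[idx] for idx in example[:example_len]))
```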
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processeed. Does this look reasonable? What is the length of a review in the training set? | # Use this cell to examine one of the processed reviews to make sure everything is working as intended.
print(len(train_X))
print(len(train_X_len))
print(len(test_X))
print(len(test_X_len))
print(len(train_X[100]))
print(train_X[100]) | 25000
25000
25000
25000
500
[ 421 1135 141 12 4 169 866 1219 1180 119 71 114 248 262
51 16 227 84 570 729 1067 3953 1 1 62 569 267 945
747 225 1020 629 3519 2255 386 4562 1695 506 4407 702 144 4
402 900 296 32 131 196 92 8 151 154 2694 729 1161 5
50 12 322 729 181 86 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
| MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem? **Answer:** It can be a problem. The word dictionary should only be constructed from the training set, so we must make sure no information from the test set leaks into it; applying that same dictionary to the test set afterwards is then fine. Practically, the pickle cache also takes a long time to build, and `preprocess_data` can run into trouble if the cache file is not ready yet. Finally, `convert_and_pad_data` truncates reviews to 500 words, so we lose information for long reviews; if the decisive comments sit at the end of a review, the truncated version may even convey a completely different sentiment. Step 3: Upload the data to S3As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on. Save the processed training dataset locallyIt is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review. | import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
Uploading the training dataNext, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model. | import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
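To confirm what actually landed in the bucket, a quick sketch using boto3 (which SageMaker notebook instances normally have installed) can list the uploaded keys; the bucket and prefix are the ones defined above.

```python
import boto3

s3 = boto3.client('s3')
response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])
```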
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory. Step 4: Build and Train the PyTorch ModelIn the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects - Model Artifacts, - Training Code, and - Inference Code, each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below. | !pygmentize train/model.py | [34mimport[39;49;00m [04m[36mtorch[39;49;00m[04m[36m.[39;49;00m[04m[36mnn[39;49;00m [34mas[39;49;00m [04m[36mnn[39;49;00m
class LSTMClassifier(nn.Module):
    """
    This is the simple RNN model we will be using to perform Sentiment Analysis.
    """

    def __init__(self, embedding_dim, hidden_dim, vocab_size):
        """
        Initialize the model by setting up the various layers.
        """
        super(LSTMClassifier, self).__init__()

        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.dense = nn.Linear(in_features=hidden_dim, out_features=1)
        self.sig = nn.Sigmoid()

        self.word_dict = None

    def forward(self, x):
        """
        Perform a forward pass of our model on some input.
        """
        x = x.t()
        lengths = x[0,:]
        reviews = x[1:,:]
        embeds = self.embedding(reviews)
        lstm_out, _ = self.lstm(embeds)
        out = self.dense(lstm_out)
        out = out[lengths - 1, range(len(lengths))]
        return self.sig(out.squeeze())
| MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving. | import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
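Before writing the training loop, it can help to confirm the input layout the provided model expects (first column = review length, remaining 500 columns = the encoded words). A small sketch, assuming the sample tensors built above and the provided `train/model.py`:

```python
import torch
from train.model import LSTMClassifier

# Build a small model and push a few sample rows through it
tmp_model = LSTMClassifier(embedding_dim=32, hidden_dim=100, vocab_size=5000)
with torch.no_grad():
    sample_out = tmp_model(train_sample_X[:5])   # each row is [length, word_1, ..., word_500]
print(sample_out)   # five sigmoid outputs in (0, 1); the model is untrained, so values hover near 0.5
```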
(TODO) Writing the training methodNext we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later. | def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# TODO: Complete this train method to train the model provided.
optimizer.zero_grad()
out = model(batch_X)
#model.zero_grad()
#out = model.forward(batch_X)
loss = loss_fn(out, batch_y)
loss.backward()
optimizer.step()
total_loss += loss.data.item()
print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader))) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose. | import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device) | Epoch: 1, BCELoss: 0.6912282586097718
Epoch: 2, BCELoss: 0.6808721899986268
Epoch: 3, BCELoss: 0.6719786524772644
Epoch: 4, BCELoss: 0.6623488545417786
Epoch: 5, BCELoss: 0.65109281539917
| MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run. (TODO) Training the modelWhen a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file. | from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
                    train_instance_type='ml.p2.xlarge', # original: 'ml.p2.xlarge'; the support center hasn't processed my instance-limit ticket yet
hyperparameters={
'epochs': 10,
'hidden_dim': 200,
})
estimator.fit({'training': input_data}) | 'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
| MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
Step 5: Testing the modelAs mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly. Step 6: Deploy the model for testingNow that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard ( ie, `if __name__ == '__main__':` )Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.**NOTE:** When deploying a model you are asking SageMaker to launch an compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for.In other words **If you are no longer using a deployed endpoint, shut it down!****TODO:** Deploy the trained model. | estimator.model_data
training_job_name=estimator.latest_training_job.name
# TODO: Deploy the trained model
predictor = estimator.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
# original: train_instance_type='ml.p2.xlarge'; the support center hasn't processed my instance-limit ticket yet
from sagemaker.pytorch import PyTorchModel  # explicit import needed for the model-based deployment below
model_predict = PyTorchModel(model_data=estimator.model_data,
role = role,
framework_version='0.4.0',
entry_point="train.py",
source_dir="train")
predictor_predict = model_predict.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
predictor_predict.endpoint
predictor = estimator.deploy(initial_instance_count = 1, instance_type = 'ml.p2.xlarge') | Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
| MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
Step 7 - Use the model for testingOnce deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is. | test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array in split_array:
predictions = np.append(predictions, predictor.predict(array))
return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
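Accuracy alone can hide asymmetric errors, so, as a small sketch reusing the same `predictions` and `test_y` arrays, scikit-learn's confusion matrix and classification report give a slightly fuller picture:

```python
from sklearn.metrics import confusion_matrix, classification_report

print(confusion_matrix(test_y, predictions))
print(classification_report(test_y, predictions, target_names=['negative', 'positive']))
```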
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis? **Answer:** XGBoost reached a slightly better accuracy score than this model and trains much faster: the XGBoost training job took only about 3 minutes, while this RNN took roughly 2 hours. The two models also see the data differently, since XGBoost works on bag-of-words counts while the LSTM consumes the word sequence itself, so some difference in performance is expected. The two accuracy scores are actually quite close, so the RNN could likely be improved with more tuning, but for this dataset I think XGBoost is the better choice for sentiment analysis. (TODO) More testingWe now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model. | test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.' | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
The question we now need to answer is, how do we send this review to our model?Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews. - Removed any html tags and stemmed the input - Encoded the review as a sequence of integers using `word_dict` In order process the review we will need to repeat these two steps.**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`. | # TODO: Convert test_review into a form usable by the model and save the results in test_data
test_data = None
test_words = review_to_words(test_review)
test_data, test_data_len = convert_and_pad(word_dict, test_words) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review. | predictor.predict(test_data) | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |
Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive. Delete the endpointOf course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it. | estimator.delete_endpoint() | _____no_output_____ | MIT | Project/SageMaker Project.ipynb | csuquanyanfei/ML_Sagemaker_Studies_Project1 |