# HugeCTR Continuous Training and Inference Demo (Part I)

## Overview

In HugeCTR version 3.3, we completed the whole hierarchical parameter server pipeline, including:
1. The parameter dumping interface from training to Kafka.
2. The CPU cache (Redis Cluster / Hash Map / Parallel Hash Map).
3. RocksDB as persistent storage.
4. The embedding cache update mechanism.

The purpose of this notebook is to show how to do continuous training and inference using the HugeCTR Hierarchical Parameter Server.

## Table of Contents
- Data Preparation
- Data Preprocessing using Pandas
- Wide&Deep Training Demo
- Wide&Deep Model Inference using Python API
- Wide&Deep Model Continuous Training
- Wide&Deep Model Continuous Inference

## 1. Data Preparation

### 1.1 Make a folder to store our data and data processing scripts
!mkdir criteo_data
!mkdir criteo_script
### 1.2 Download Criteo Dataset
!wget http://azuremlsampleexperiments.blob.core.windows.net/criteo/day_1.gz
**NOTE**: Replace `1` with a value from [0, 23] to use a different day.

During preprocessing, the data is further reduced to speed up preprocessing, missing values are filled, and feature values that are considered rare are removed.

### 1.3 Write the preprocessing script
%%writefile preprocess.sh
#!/bin/bash

if [[ $# -lt 3 ]]; then
  echo "Usage: preprocess.sh [DATASET_NO.] [DST_DATA_DIR] [SCRIPT_TYPE] [SCRIPT_TYPE_SPECIFIC_ARGS...]"
  exit 2
fi

DST_DATA_DIR=$2

echo "Warning: existing $DST_DATA_DIR is erased"
rm -rf $DST_DATA_DIR

if [[ $3 == "nvt" ]]; then
  if [[ $# -ne 6 ]]; then
    echo "Usage: preprocess.sh [DATASET_NO.] [DST_DATA_DIR] nvt [IS_PARQUET_FORMAT] [IS_CRITEO_MODE] [IS_FEATURE_CROSSED]"
    exit 2
  fi
  echo "Preprocessing script: NVTabular"
elif [[ $3 == "perl" ]]; then
  if [[ $# -ne 4 ]]; then
    echo "Usage: preprocess.sh [DATASET_NO.] [DST_DATA_DIR] perl [NUM_SLOTS]"
    exit 2
  fi
  echo "Preprocessing script: Perl"
elif [[ $3 == "pandas" ]]; then
  if [[ $# -lt 5 ]]; then
    echo "Usage: preprocess.sh [DATASET_NO.] [DST_DATA_DIR] pandas [IS_DENSE_NORMALIZED] [IS_FEATURE_CROSSED] (FILE_LIST_LENGTH)"
    exit 2
  fi
  echo "Preprocessing script: Pandas"
else
  echo "Error: $3 is an invalid script type. Pick one from {nvt, perl, pandas}."
  exit 2
fi

SCRIPT_TYPE=$3

echo "Getting the first few examples from the uncompressed dataset..."
mkdir -p $DST_DATA_DIR/train && \
mkdir -p $DST_DATA_DIR/val && \
head -n 500000 day_$1 > $DST_DATA_DIR/day_$1_small
if [ $? -ne 0 ]; then
  echo "Warning: fallback to find original compressed data day_$1.gz..."
  echo "Decompressing day_$1.gz..."
  gzip -d -c day_$1.gz > day_$1
  if [ $? -ne 0 ]; then
    echo "Error: failed to decompress the file."
    exit 2
  fi
  head -n 500000 day_$1 > $DST_DATA_DIR/day_$1_small
  if [ $? -ne 0 ]; then
    echo "Error: day_$1 file"
    exit 2
  fi
fi

echo "Counting the number of samples in day_$1 dataset..."
total_count=$(wc -l $DST_DATA_DIR/day_$1_small)
total_count=(${total_count})
echo "The first $total_count examples will be used in day_$1 dataset."

echo "Shuffling dataset..."
shuf $DST_DATA_DIR/day_$1_small > $DST_DATA_DIR/day_$1_shuf

train_count=$(( total_count * 8 / 10 ))
valtest_count=$(( total_count - train_count ))
val_count=$(( valtest_count * 5 / 10 ))
test_count=$(( valtest_count - val_count ))

split_dataset() {
  echo "Splitting into $train_count-sample training, $val_count-sample val, and $test_count-sample test datasets..."
  head -n $train_count $DST_DATA_DIR/$1 > $DST_DATA_DIR/train/train.txt && \
  tail -n $valtest_count $DST_DATA_DIR/$1 > $DST_DATA_DIR/val/valtest.txt && \
  head -n $val_count $DST_DATA_DIR/val/valtest.txt > $DST_DATA_DIR/val/val.txt && \
  tail -n $test_count $DST_DATA_DIR/val/valtest.txt > $DST_DATA_DIR/val/test.txt
  if [ $? -ne 0 ]; then
    exit 2
  fi
}

echo "Preprocessing..."
if [[ $SCRIPT_TYPE == "nvt" ]]; then
  IS_PARQUET_FORMAT=$4
  IS_CRITEO_MODE=$5
  FEATURE_CROSS_LIST_OPTION=""
  if [[ ( $IS_CRITEO_MODE -eq 0 ) && ( $6 -eq 1 ) ]]; then
    FEATURE_CROSS_LIST_OPTION="--feature_cross_list C1_C2,C3_C4"
    echo $FEATURE_CROSS_LIST_OPTION
  fi
  split_dataset day_$1_shuf
  python3 criteo_script/preprocess_nvt.py \
    --data_path $DST_DATA_DIR \
    --out_path $DST_DATA_DIR \
    --freq_limit 6 \
    --device_limit_frac 0.5 \
    --device_pool_frac 0.5 \
    --out_files_per_proc 8 \
    --devices "0" \
    --num_io_threads 2 \
    --parquet_format=$IS_PARQUET_FORMAT \
    --criteo_mode=$IS_CRITEO_MODE \
    $FEATURE_CROSS_LIST_OPTION
elif [[ $SCRIPT_TYPE == "perl" ]]; then
  NUM_SLOT=$4
  split_dataset day_$1_shuf
  perl criteo_script_legacy/preprocess.pl $DST_DATA_DIR/train/train.txt $DST_DATA_DIR/val/val.txt $DST_DATA_DIR/val/test.txt && \
  criteo2hugectr_legacy $NUM_SLOT $DST_DATA_DIR/train/train.txt.out $DST_DATA_DIR/train/sparse_embedding $DST_DATA_DIR/file_list.txt && \
  criteo2hugectr_legacy $NUM_SLOT $DST_DATA_DIR/val/test.txt.out $DST_DATA_DIR/val/sparse_embedding $DST_DATA_DIR/file_list_test.txt
elif [[ $SCRIPT_TYPE == "pandas" ]]; then
  python3 criteo_script/preprocess.py \
    --src_csv_path=$DST_DATA_DIR/day_$1_shuf \
    --dst_csv_path=$DST_DATA_DIR/day_$1_shuf.out \
    --normalize_dense=$4 --feature_cross=$5 && \
  split_dataset day_$1_shuf.out
  NUM_WIDE_KEYS=""
  if [[ $5 -ne 0 ]]; then
    NUM_WIDE_KEYS=2
  fi
  FILE_LIST_LENGTH=""
  if [[ $# -gt 5 ]]; then
    FILE_LIST_LENGTH=$6
  fi
  criteo2hugectr $DST_DATA_DIR/train/train.txt $DST_DATA_DIR/train/sparse_embedding $DST_DATA_DIR/file_list.txt $NUM_WIDE_KEYS $FILE_LIST_LENGTH && \
  criteo2hugectr $DST_DATA_DIR/val/test.txt $DST_DATA_DIR/val/sparse_embedding $DST_DATA_DIR/file_list_test.txt $NUM_WIDE_KEYS $FILE_LIST_LENGTH
fi

if [ $? -ne 0 ]; then
  exit 2
fi

echo "All done!"
Overwriting preprocess.sh
**NOTE**: Here we only read the first 500,000 lines of the data for the demo.
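If you want the demo to use more (or less) of the raw data, you can adjust the sample size that `preprocess.sh` takes with `head`. A minimal sketch, assuming GNU `sed`; the 2,000,000 value is just an example:

```bash
sed -i 's/head -n 500000/head -n 2000000/g' preprocess.sh
```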
%%writefile criteo_script/preprocess.py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import argparse
import sys
import tempfile

from six.moves import urllib
import urllib.request

import sys
import os
import math
import time
import logging

import concurrent.futures as cf
from traceback import print_exc

import numpy as np
import pandas as pd
import sklearn.preprocessing as skp

logging.basicConfig(format='%(asctime)s %(message)s')
logging.root.setLevel(logging.NOTSET)

NUM_INTEGER_COLUMNS = 13
NUM_CATEGORICAL_COLUMNS = 26
NUM_TOTAL_COLUMNS = 1 + NUM_INTEGER_COLUMNS + NUM_CATEGORICAL_COLUMNS

MAX_NUM_WORKERS = NUM_TOTAL_COLUMNS

INT_NAN_VALUE = np.iinfo(np.int32).min
CAT_NAN_VALUE = '80000000'

def idx2key(idx):
    if idx == 0:
        return 'label'
    return 'I' + str(idx) if idx <= NUM_INTEGER_COLUMNS else 'C' + str(idx - NUM_INTEGER_COLUMNS)

def _fill_missing_features_and_split(chunk, series_list_dict):
    for cid, col in enumerate(chunk.columns):
        NAN_VALUE = INT_NAN_VALUE if cid <= NUM_INTEGER_COLUMNS else CAT_NAN_VALUE
        result_series = chunk[col].fillna(NAN_VALUE)
        series_list_dict[col].append(result_series)

def _merge_and_transform_series(src_series_list, col, dense_cols,
                                normalize_dense):
    result_series = pd.concat(src_series_list)

    if col != 'label':
        unique_value_counts = result_series.value_counts()
        unique_value_counts = unique_value_counts.loc[unique_value_counts >= 6]
        unique_value_counts = set(unique_value_counts.index.values)
        NAN_VALUE = INT_NAN_VALUE if col.startswith('I') else CAT_NAN_VALUE
        result_series = result_series.apply(
                lambda x: x if x in unique_value_counts else NAN_VALUE)

    if col == 'label' or col in dense_cols:
        result_series = result_series.astype(np.int64)
        le = skp.LabelEncoder()
        result_series = pd.DataFrame(le.fit_transform(result_series))
        if col != 'label':
            result_series = result_series + 1
    else:
        oe = skp.OrdinalEncoder(dtype=np.int64)
        result_series = pd.DataFrame(oe.fit_transform(pd.DataFrame(result_series)))
        result_series = result_series + 1

    if normalize_dense != 0:
        if col in dense_cols:
            mms = skp.MinMaxScaler(feature_range=(0,1))
            result_series = pd.DataFrame(mms.fit_transform(result_series))

    result_series.columns = [col]

    min_max = (np.int64(result_series[col].min()), np.int64(result_series[col].max()))
    if col != 'label':
        logging.info('column {} [{}, {}]'.format(col, str(min_max[0]), str(min_max[1])))

    return [result_series, min_max]

def _convert_to_string(series):
    return series.astype(str)

def _merge_columns_and_feature_cross(series_list, min_max, feature_pairs,
                                     feature_cross):
    name_to_series = dict()
    for series in series_list:
        name_to_series[series.columns[0]] = series.iloc[:, 0]
    df = pd.DataFrame(name_to_series)
    cols = [idx2key(idx) for idx in range(0, NUM_TOTAL_COLUMNS)]
    df = df.reindex(columns=cols)

    offset = np.int64(0)
    for col in cols:
        if col != 'label' and col.startswith('I') == False:
            df[col] += offset
            logging.info('column {} offset {}'.format(col, str(offset)))
            offset += min_max[col][1]

    if feature_cross != 0:
        for idx, pair in enumerate(feature_pairs):
            col0 = pair[0]
            col1 = pair[1]

            col1_width = int(min_max[col1][1] - min_max[col1][0] + 1)

            crossed_column_series = df[col0] * col1_width + df[col1]
            oe = skp.OrdinalEncoder(dtype=np.int64)
            crossed_column_series = pd.DataFrame(oe.fit_transform(pd.DataFrame(crossed_column_series)))
            crossed_column_series = crossed_column_series + 1

            crossed_column = col0 + '_' + col1
            df.insert(NUM_INTEGER_COLUMNS + 1 + idx, crossed_column, crossed_column_series)

            crossed_column_max_val = np.int64(df[crossed_column].max())
            logging.info('column {} [{}, {}]'.format(
                crossed_column, str(df[crossed_column].min()), str(crossed_column_max_val)))
            df[crossed_column] += offset
            logging.info('column {} offset {}'.format(crossed_column, str(offset)))
            offset += crossed_column_max_val

    return df

def _wait_futures_and_reset(futures):
    for future in futures:
        result = future.result()
        if result:
            print(result)
    futures = list()

def _process_chunks(executor, chunks_to_process, op, *argv):
    futures = list()
    for chunk in chunks_to_process:
        argv_list = list(argv)
        argv_list.insert(0, chunk)
        new_argv = tuple(argv_list)
        future = executor.submit(op, *new_argv)
        futures.append(future)
    _wait_futures_and_reset(futures)

def preprocess(src_txt_name, dst_txt_name, normalize_dense, feature_cross):
    cols = [idx2key(idx) for idx in range(0, NUM_TOTAL_COLUMNS)]

    series_list_dict = dict()

    with cf.ThreadPoolExecutor(max_workers=MAX_NUM_WORKERS) as executor:
        logging.info('read a CSV file')
        reader = pd.read_csv(src_txt_name, sep='\t',
                             names=cols,
                             chunksize=131072)

        logging.info('_fill_missing_features_and_split')
        for col in cols:
            series_list_dict[col] = list()
        _process_chunks(executor, reader, _fill_missing_features_and_split,
                        series_list_dict)

    with cf.ProcessPoolExecutor(max_workers=MAX_NUM_WORKERS) as executor:
        logging.info('_merge_and_transform_series')
        futures = list()
        dense_cols = [idx2key(idx+1) for idx in range(NUM_INTEGER_COLUMNS)]
        dst_series_list = list()
        min_max = dict()
        for col, src_series_list in series_list_dict.items():
            future = executor.submit(_merge_and_transform_series,
                                     src_series_list, col, dense_cols,
                                     normalize_dense)
            futures.append(future)

        for future in futures:
            col = None
            for idx, ret in enumerate(future.result()):
                try:
                    if idx == 0:
                        col = ret.columns[0]
                        dst_series_list.append(ret)
                    else:
                        min_max[col] = ret
                except:
                    print_exc()
        futures = list()

        logging.info('_merge_columns_and_feature_cross')
        feature_pairs = [('C1', 'C2'), ('C3', 'C4')]
        df = _merge_columns_and_feature_cross(dst_series_list, min_max,
                                              feature_pairs,
                                              feature_cross)

        logging.info('_convert_to_string')
        futures = dict()
        for col in cols:
            future = executor.submit(_convert_to_string, df[col])
            futures[col] = future
        if feature_cross != 0:
            for pair in feature_pairs:
                col = pair[0] + '_' + pair[1]
                future = executor.submit(_convert_to_string, df[col])
                futures[col] = future

        logging.info('_store_to_df')
        for col, future in futures.items():
            ret = future.result()
            try:
                df[col] = ret
            except:
                print_exc()
        futures = dict()

        logging.info('write to a CSV file')
        df.to_csv(dst_txt_name, sep=' ', header=False, index=False)

        logging.info('done!')

if __name__ == '__main__':
    arg_parser = argparse.ArgumentParser(description='Preprocessing Criteo Dataset')

    arg_parser.add_argument('--src_csv_path', type=str, required=True)
    arg_parser.add_argument('--dst_csv_path', type=str, required=True)
    arg_parser.add_argument('--normalize_dense', type=int, default=1)
    arg_parser.add_argument('--feature_cross', type=int, default=1)

    args = arg_parser.parse_args()

    src_csv_path = args.src_csv_path
    dst_csv_path = args.dst_csv_path
    normalize_dense = args.normalize_dense
    feature_cross = args.feature_cross

    if os.path.exists(src_csv_path) == False:
        sys.exit('ERROR: the file \'{}\' doesn\'t exist'.format(src_csv_path))

    if os.path.exists(dst_csv_path) == True:
        sys.exit('ERROR: the file \'{}\' exists'.format(dst_csv_path))

    preprocess(src_csv_path, dst_csv_path, normalize_dense, feature_cross)
Overwriting criteo_script/preprocess.py
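To make the encoding in `_merge_columns_and_feature_cross` easier to follow: each categorical slot is first encoded as 1..N, and then shifted by a running offset so that all slots share one global key space. The following is only a toy sketch of that idea, not part of the sample script:

```python
import numpy as np
import pandas as pd
import sklearn.preprocessing as skp

# Toy frame with two categorical slots.
df = pd.DataFrame({"C1": ["a", "b", "a"], "C2": ["x", "x", "y"]})

offset = np.int64(0)
for col in ["C1", "C2"]:
    # Encode the values of this slot as 1..N, as the preprocessing script does.
    oe = skp.OrdinalEncoder(dtype=np.int64)
    df[col] = oe.fit_transform(df[[col]]).ravel() + 1
    col_max = np.int64(df[col].max())
    # Shift by the running offset so keys of different slots never collide.
    df[col] += offset
    offset += col_max

print(df)  # C1 keys occupy 1..2, C2 keys occupy 3..4
```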
### 1.4 Run the preprocessing script
!bash preprocess.sh 0 criteo_data pandas 1 1 1
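Note that the download step above fetched `day_1.gz`; if that is the file you have locally, pass postfix `1` instead (the arguments are explained below). A sketch of that invocation:

```bash
bash preprocess.sh 1 criteo_data pandas 1 1 1
```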
**IMPORTANT NOTES**: The arguments may vary depending on your setup:
- The first argument is the dataset postfix. For instance, if `day_1` is used, the postfix is `1`.
- The second argument, `criteo_data`, is where the preprocessed data is stored.

### 1.5 Generate a data sample for inference
import pandas as pd
import numpy as np

df = pd.read_table("criteo_data/train/train.txt", header = None, sep = ' ',
                   names = ['label'] + ['I'+str(i) for i in range(1, 14)] +
                           ['C1_C2', 'C3_C4'] + ['C'+str(i) for i in range(1, 27)])[:5]
left = df.iloc[:, :14].astype(np.float32)
right = df.iloc[:, 14:].astype(np.int64)
merged = pd.concat([left, right], axis = 1)
merged.to_csv("infer_data.csv", index = False)
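Optionally, a quick sanity check that the file was written with the expected layout (5 rows; 1 label + 13 dense + 28 categorical columns). This is just a convenience snippet, not part of the original sample:

```python
import pandas as pd

check = pd.read_csv("infer_data.csv")
print(check.shape)                 # expected: (5, 42)
print(check.columns[:5].tolist())  # label plus the first dense columns
```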
## 2. Start the Kafka Broker

**Please refer to the README to start the Kafka broker properly.**

## 3. Wide&Deep Model Demo
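The training script below streams incremental embedding updates to the Kafka broker configured via `kafka_brockers`. Before running it, you can optionally confirm that the broker is reachable. A minimal sketch, assuming the Apache Kafka CLI tools are on your PATH and using the broker address from this notebook (adjust it to your environment):

```bash
# Lists the topics on the broker; after continuous training you should see the
# hctr_et.wdl.* topics created by the incremental dump.
kafka-topics.sh --bootstrap-server 10.23.137.25:9093 --list
```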
!rm -r *model

%%writefile wdl_demo.py
import hugectr
from mpi4py import MPI

solver = hugectr.CreateSolver(model_name = "wdl",
                              max_eval_batches = 5000,
                              batchsize_eval = 1024,
                              batchsize = 1024,
                              lr = 0.001,
                              vvgpu = [[0]],
                              i64_input_key = False,
                              use_mixed_precision = False,
                              repeat_dataset = False,
                              use_cuda_graph = True,
                              kafka_brockers = "10.23.137.25:9093")  # Make sure this is consistent with your Kafka broker.
reader = hugectr.DataReaderParams(data_reader_type = hugectr.DataReaderType_t.Norm,
                                  source = ["criteo_data/file_list."+str(i)+".txt" for i in range(2)],
                                  keyset = ["criteo_data/file_list."+str(i)+".keyset" for i in range(2)],
                                  eval_source = "criteo_data/file_list.2.txt",
                                  check_type = hugectr.Check_t.Sum)
optimizer = hugectr.CreateOptimizer(optimizer_type = hugectr.Optimizer_t.Adam)
hc_config = hugectr.CreateHMemCache(2, 0.5, 0)
etc = hugectr.CreateETC(ps_types = [hugectr.TrainPSType_t.Staged, hugectr.TrainPSType_t.Cached],
                        sparse_models = ["./wdl_0_sparse_model", "./wdl_1_sparse_model"],
                        local_paths = ["./"], hmem_cache_configs = [hc_config])
model = hugectr.Model(solver, reader, optimizer, etc)
model.add(hugectr.Input(label_dim = 1, label_name = "label",
                        dense_dim = 13, dense_name = "dense",
                        data_reader_sparse_param_array =
                        [hugectr.DataReaderSparseParam("wide_data", 2, True, 1),
                         hugectr.DataReaderSparseParam("deep_data", 1, True, 26)]))
model.add(hugectr.SparseEmbedding(embedding_type = hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash,
                                  workspace_size_per_gpu_in_mb = 23,
                                  embedding_vec_size = 1,
                                  combiner = "sum",
                                  sparse_embedding_name = "sparse_embedding0",
                                  bottom_name = "wide_data",
                                  optimizer = optimizer))
model.add(hugectr.SparseEmbedding(embedding_type = hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash,
                                  workspace_size_per_gpu_in_mb = 358,
                                  embedding_vec_size = 16,
                                  combiner = "sum",
                                  sparse_embedding_name = "sparse_embedding1",
                                  bottom_name = "deep_data",
                                  optimizer = optimizer))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Reshape,
                             bottom_names = ["sparse_embedding1"],
                             top_names = ["reshape1"],
                             leading_dim=416))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Reshape,
                             bottom_names = ["sparse_embedding0"],
                             top_names = ["reshape2"],
                             leading_dim=1))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Concat,
                             bottom_names = ["reshape1", "dense"],
                             top_names = ["concat1"]))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,
                             bottom_names = ["concat1"],
                             top_names = ["fc1"],
                             num_output=1024))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.ReLU,
                             bottom_names = ["fc1"],
                             top_names = ["relu1"]))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Dropout,
                             bottom_names = ["relu1"],
                             top_names = ["dropout1"],
                             dropout_rate=0.5))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,
                             bottom_names = ["dropout1"],
                             top_names = ["fc2"],
                             num_output=1024))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.ReLU,
                             bottom_names = ["fc2"],
                             top_names = ["relu2"]))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Dropout,
                             bottom_names = ["relu2"],
                             top_names = ["dropout2"],
                             dropout_rate=0.5))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.InnerProduct,
                             bottom_names = ["dropout2"],
                             top_names = ["fc3"],
                             num_output=1))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.Add,
                             bottom_names = ["fc3", "reshape2"],
                             top_names = ["add1"]))
model.add(hugectr.DenseLayer(layer_type = hugectr.Layer_t.BinaryCrossEntropyLoss,
                             bottom_names = ["add1", "label"],
                             top_names = ["loss"]))
model.compile()
model.summary()
model.graph_to_json(graph_config_file = "wdl.json")
#model.save_params_to_files("wdl")
model.fit(num_epochs = 1, display = 500, eval_interval = 1000)
model.set_source(source = ["criteo_data/file_list."+str(i)+".txt" for i in range(3, 5)],
                 keyset = ["criteo_data/file_list."+str(i)+".keyset" for i in range(3, 5)],
                 eval_source = "criteo_data/file_list.9.txt")
model.save_params_to_files("wdl")

!python wdl_demo.py
[HUGECTR][03:34:23][INFO][RANK0]: Empty embedding, trained table will be stored in ./wdl_0_sparse_model [HUGECTR][03:34:23][INFO][RANK0]: Empty embedding, trained table will be stored in ./wdl_1_sparse_model HugeCTR Version: 3.2 ====================================================Model Init===================================================== [HUGECTR][03:34:23][INFO][RANK0]: Initialize model: wdl [HUGECTR][03:34:23][INFO][RANK0]: Global seed is 337017754 [HUGECTR][03:34:23][INFO][RANK0]: Device to NUMA mapping: GPU 0 -> node 0 [HUGECTR][03:34:25][WARNING][RANK0]: Peer-to-peer access cannot be fully enabled. [HUGECTR][03:34:25][INFO][RANK0]: Start all2all warmup [HUGECTR][03:34:25][INFO][RANK0]: End all2all warmup [HUGECTR][03:34:25][INFO][RANK0]: Using All-reduce algorithm: NCCL [HUGECTR][03:34:25][INFO][RANK0]: Device 0: Tesla V100-SXM2-32GB [HUGECTR][03:34:25][DEBUG][RANK0]: Creating Kafka lifetime service. [HUGECTR][03:34:25][INFO][RANK0]: num of DataReader workers: 12 [HUGECTR][03:34:25][INFO][RANK0]: max_vocabulary_size_per_gpu_=6029312 [HUGECTR][03:34:25][INFO][RANK0]: max_vocabulary_size_per_gpu_=5865472 [HUGECTR][03:34:25][INFO][RANK0]: Graph analysis to resolve tensor dependency ===================================================Model Compile=================================================== [HUGECTR][03:34:28][INFO][RANK0]: gpu0 start to init embedding [HUGECTR][03:34:28][INFO][RANK0]: gpu0 init embedding done [HUGECTR][03:34:28][INFO][RANK0]: gpu0 start to init embedding [HUGECTR][03:34:28][INFO][RANK0]: gpu0 init embedding done [HUGECTR][03:34:28][INFO][RANK0]: Enable HMEM-Based Parameter Server [HUGECTR][03:34:28][INFO][RANK0]: ./wdl_0_sparse_model not exist, create and train from scratch [HUGECTR][03:34:28][INFO][RANK0]: Enable HMemCache-Based Parameter Server [HUGECTR][03:34:28][INFO][RANK0]: ./wdl_1_sparse_model/key doesn't exist, created [HUGECTR][03:34:28][INFO][RANK0]: ./wdl_1_sparse_model/emb_vector doesn't exist, created [HUGECTR][03:34:28][INFO][RANK0]: ./wdl_1_sparse_model/Adam.m doesn't exist, created [HUGECTR][03:34:28][INFO][RANK0]: ./wdl_1_sparse_model/Adam.v doesn't exist, created [HUGECTR][03:34:29][INFO][RANK0]: Starting AUC NCCL warm-up [HUGECTR][03:34:29][INFO][RANK0]: Warm-up done ===================================================Model Summary=================================================== label Dense Sparse label dense wide_data,deep_data (None, 1) (None, 13) β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€” Layer Type Input Name Output Name Output Shape β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€” DistributedSlotSparseEmbeddingHash wide_data sparse_embedding0 (None, 1, 1) ------------------------------------------------------------------------------------------------------------------ DistributedSlotSparseEmbeddingHash deep_data sparse_embedding1 (None, 26, 16) 
------------------------------------------------------------------------------------------------------------------ Reshape sparse_embedding1 reshape1 (None, 416) ------------------------------------------------------------------------------------------------------------------ Reshape sparse_embedding0 reshape2 (None, 1) ------------------------------------------------------------------------------------------------------------------ Concat reshape1 concat1 (None, 429) dense ------------------------------------------------------------------------------------------------------------------ InnerProduct concat1 fc1 (None, 1024) ------------------------------------------------------------------------------------------------------------------ ReLU fc1 relu1 (None, 1024) ------------------------------------------------------------------------------------------------------------------ Dropout relu1 dropout1 (None, 1024) ------------------------------------------------------------------------------------------------------------------ InnerProduct dropout1 fc2 (None, 1024) ------------------------------------------------------------------------------------------------------------------ ReLU fc2 relu2 (None, 1024) ------------------------------------------------------------------------------------------------------------------ Dropout relu2 dropout2 (None, 1024) ------------------------------------------------------------------------------------------------------------------ InnerProduct dropout2 fc3 (None, 1) ------------------------------------------------------------------------------------------------------------------ Add fc3 add1 (None, 1) reshape2 ------------------------------------------------------------------------------------------------------------------ BinaryCrossEntropyLoss add1 loss label ------------------------------------------------------------------------------------------------------------------ [HUGECTR][03:34:29][INFO][RANK0]: Save the model graph to wdl.json successfully =====================================================Model Fit===================================================== [HUGECTR][03:34:29][INFO][RANK0]: Use embedding training cache mode with number of training sources: 2, number of epochs: 1 [HUGECTR][03:34:29][INFO][RANK0]: Training batchsize: 1024, evaluation batchsize: 1024 [HUGECTR][03:34:29][INFO][RANK0]: Evaluation interval: 1000, snapshot interval: 10000 [HUGECTR][03:34:29][INFO][RANK0]: Dense network trainable: True [HUGECTR][03:34:29][INFO][RANK0]: Sparse embedding sparse_embedding0 trainable: True [HUGECTR][03:34:29][INFO][RANK0]: Sparse embedding sparse_embedding1 trainable: True [HUGECTR][03:34:29][INFO][RANK0]: Use mixed precision: False, scaler: 1.000000, use cuda graph: True [HUGECTR][03:34:29][INFO][RANK0]: lr: 0.001000, warmup_steps: 1, end_lr: 0.000000 [HUGECTR][03:34:29][INFO][RANK0]: decay_start: 0, decay_steps: 1, decay_power: 2.000000 [HUGECTR][03:34:29][INFO][RANK0]: Evaluation source file: criteo_data/file_list.2.txt [HUGECTR][03:34:29][INFO][RANK0]: --------------------Epoch 0, source file: criteo_data/file_list.0.txt-------------------- [HUGECTR][03:34:29][INFO][RANK0]: Preparing embedding table for next pass [HUGECTR][03:34:29][INFO][RANK0]: HMEM-Cache PS: Hit rate [load]: 0 % [HUGECTR][03:34:30][INFO][RANK0]: --------------------Epoch 0, source file: criteo_data/file_list.1.txt-------------------- [HUGECTR][03:34:30][INFO][RANK0]: Preparing embedding table for next pass [HUGECTR][03:34:30][INFO][RANK0]: HMEM-Cache PS: Hit rate 
[dump]: 0 % [HUGECTR][03:34:31][INFO][RANK0]: HMEM-Cache PS: Hit rate [load]: 0 % [HUGECTR][03:34:31][INFO][RANK0]: HMEM-Cache PS: Hit rate [dump]: 76.51 % [HUGECTR][03:34:31][INFO][RANK0]: Updating sparse model in SSD [DONE] [HUGECTR][03:34:32][INFO][RANK0]: Sync blocks from HMEM-Cache to SSD  β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– 100.0% [ 2/ 2 | 79.0 Hz | 0s<0s] m [HUGECTR][03:34:32][INFO][RANK0]: Dumping dense weights to file, successful [HUGECTR][03:34:32][INFO][RANK0]: Dumping dense optimizer states to file, successful [HUGECTR][03:34:32][INFO][RANK0]: Dumping untrainable weights to file, successful [HUGECTR][03:34:33][DEBUG][RANK0]: Destroying Kafka lifetime service.
## 4. WDL Inference

### 4.1 Inference using the HugeCTR Python API
# Create a folder for RocksDB
!mkdir /wdl_infer
!mkdir /wdl_infer/rocksdb
mkdir: cannot create directory '/wdl_infer': File exists
mkdir: cannot create directory '/wdl_infer/rocksdb': File exists
**Please make sure you have started the Redis cluster by following the README before you run inference.**
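Optionally, you can verify that the Redis cluster is healthy before creating the inference session. A minimal sketch, assuming `redis-cli` is available locally and the cluster listens on the ports used in this notebook (7000-7002):

```bash
# "cluster_state:ok" indicates the cluster is ready to accept inserts and lookups.
redis-cli -c -p 7000 cluster info | grep cluster_state
```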
%%writefile 'wdl_predict.py'
from hugectr.inference import InferenceParams, CreateInferenceSession
import hugectr
import pandas as pd
import numpy as np
import sys
from mpi4py import MPI

def wdl_inference(model_name='wdl', network_file='wdl.json', dense_file='wdl_dense_0.model',
                  embedding_file_list=['wdl_0_sparse_model', 'wdl_1_sparse_model'], data_file='infer_data.csv',
                  enable_cache=False, rocksdb_path=""):
    CATEGORICAL_COLUMNS = ["C1_C2", "C3_C4"] + ["C" + str(x) for x in range(1, 27)]
    CONTINUOUS_COLUMNS = ["I" + str(x) for x in range(1, 14)]
    LABEL_COLUMNS = ['label']
    test_df = pd.read_csv(data_file, sep=',')
    config_file = network_file
    row_ptrs = list(range(0, 11, 2)) + list(range(0, 131))
    dense_features = list(test_df[CONTINUOUS_COLUMNS].values.flatten())
    test_df[CATEGORICAL_COLUMNS].astype(np.int64)
    embedding_columns = list((test_df[CATEGORICAL_COLUMNS]).values.flatten())

    redisdatabase = hugectr.inference.DistributedDatabaseParams(
        hugectr.DatabaseType_t.redis_cluster,
        address="127.0.0.1:7000,127.0.0.1:7001,127.0.0.1:7002",
        initial_cache_rate=0.2)
    rocksdbdatabase = hugectr.inference.PersistentDatabaseParams(
        hugectr.DatabaseType_t.rocks_db,
        path="/wdl_infer/rocksdb/")

    # create parameter server, embedding cache and inference session
    inference_params = InferenceParams(model_name = model_name,
                                       max_batchsize = 64,
                                       hit_rate_threshold = 0.5,
                                       dense_model_file = dense_file,
                                       sparse_model_files = embedding_file_list,
                                       device_id = 0,
                                       use_gpu_embedding_cache = enable_cache,
                                       cache_size_percentage = 0.9,
                                       i64_input_key = True,
                                       use_mixed_precision = False,
                                       volatile_db = redisdatabase,
                                       persistent_db = rocksdbdatabase)
    inference_session = CreateInferenceSession(config_file, inference_params)
    output = inference_session.predict(dense_features, embedding_columns, row_ptrs)
    print("WDL multi-embedding table inference result is {}".format(output))

wdl_inference()

!python wdl_predict.py
[HUGECTR][03:36:30][INFO][RANK0]: default_emb_vec_value is not specified using default: 0.000000 [HUGECTR][03:36:30][INFO][RANK0]: default_emb_vec_value is not specified using default: 0.000000 [HUGECTR][03:36:30][INFO][RANK0]: Creating RedisCluster backend... [HUGECTR][03:36:30][INFO][RANK0]: Connecting to Redis cluster via 127.0.0.1:7000 ... [HUGECTR][03:36:30][INFO][RANK0]: Connected to Redis database! [HUGECTR][03:36:30][INFO][RANK0]: Creating RocksDB backend... [HUGECTR][03:36:30][INFO][RANK0]: Connecting to RocksDB database... [HUGECTR][03:36:30][INFO][RANK0]: RocksDB /wdl_infer/rocksdb/, found column family "default". [HUGECTR][03:36:30][INFO][RANK0]: RocksDB /wdl_infer/rocksdb/, found column family "hctr_et.wdl.sparse_embedding0". [HUGECTR][03:36:30][INFO][RANK0]: RocksDB /wdl_infer/rocksdb/, found column family "hctr_et.wdl.sparse_embedding1". [HUGECTR][03:36:31][INFO][RANK0]: Connected to RocksDB database! [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p0/v, query 0: Inserted 5565 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p1/v, query 0: Inserted 5553 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p2/v, query 0: Inserted 5572 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p3/v, query 0: Inserted 5553 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p4/v, query 0: Inserted 5485 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p5/v, query 0: Inserted 5556 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p6/v, query 0: Inserted 5584 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p7/v, query 0: Inserted 5605 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: RedisCluster backend. Table: hctr_et.wdl.sparse_embedding0. Inserted 44473 / 44473 pairs. [HUGECTR][03:36:31][INFO][RANK0]: Table: hctr_et.wdl.sparse_embedding0; cached 44473 / 44473 embeddings in distributed database! [HUGECTR][03:36:31][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 0: Inserted 10000 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 1: Inserted 10000 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 2: Inserted 10000 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 3: Inserted 10000 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 4: Inserted 4473 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: RocksDB backend. Table: hctr_et.wdl.sparse_embedding0. Inserted 44473 / 44473 pairs. [HUGECTR][03:36:31][INFO][RANK0]: Table: hctr_et.wdl.sparse_embedding0; cached 44473 embeddings in persistent database! [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p0/v, query 0: Inserted 6801 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p1/v, query 0: Inserted 6769 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p2/v, query 0: Inserted 6745 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p3/v, query 0: Inserted 6797 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p4/v, query 0: Inserted 6771 pairs. 
[HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p5/v, query 0: Inserted 6757 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p6/v, query 0: Inserted 6837 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p7/v, query 0: Inserted 6807 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: RedisCluster backend. Table: hctr_et.wdl.sparse_embedding1. Inserted 54284 / 54284 pairs. [HUGECTR][03:36:31][INFO][RANK0]: Table: hctr_et.wdl.sparse_embedding1; cached 54284 / 54284 embeddings in distributed database! [HUGECTR][03:36:31][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding1, query 0: Inserted 10000 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding1, query 1: Inserted 10000 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding1, query 2: Inserted 10000 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding1, query 3: Inserted 10000 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding1, query 4: Inserted 10000 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding1, query 5: Inserted 4284 pairs. [HUGECTR][03:36:31][DEBUG][RANK0]: RocksDB backend. Table: hctr_et.wdl.sparse_embedding1. Inserted 54284 / 54284 pairs. [HUGECTR][03:36:31][INFO][RANK0]: Table: hctr_et.wdl.sparse_embedding1; cached 54284 embeddings in persistent database! [HUGECTR][03:36:31][DEBUG][RANK0]: Real-time subscribers created! [HUGECTR][03:36:31][INFO][RANK0]: Create embedding cache in device 0. [HUGECTR][03:36:31][INFO][RANK0]: Use GPU embedding cache: False, cache size percentage: 0.900000 [HUGECTR][03:36:31][INFO][RANK0]: Configured cache hit rate threshold: 0.500000 [HUGECTR][03:36:31][INFO][RANK0]: Global seed is 2566656433 [HUGECTR][03:36:31][INFO][RANK0]: Device to NUMA mapping: GPU 0 -> node 0 [HUGECTR][03:36:32][WARNING][RANK0]: Peer-to-peer access cannot be fully enabled. [HUGECTR][03:36:32][INFO][RANK0]: Start all2all warmup [HUGECTR][03:36:32][INFO][RANK0]: End all2all warmup [HUGECTR][03:36:32][INFO][RANK0]: Model name: wdl [HUGECTR][03:36:32][INFO][RANK0]: Use mixed precision: False [HUGECTR][03:36:32][INFO][RANK0]: Use cuda graph: True [HUGECTR][03:36:32][INFO][RANK0]: Max batchsize: 64 [HUGECTR][03:36:32][INFO][RANK0]: Use I64 input key: True [HUGECTR][03:36:32][INFO][RANK0]: start create embedding for inference [HUGECTR][03:36:32][INFO][RANK0]: sparse_input name wide_data [HUGECTR][03:36:32][INFO][RANK0]: sparse_input name deep_data [HUGECTR][03:36:32][INFO][RANK0]: create embedding for inference success [HUGECTR][03:36:32][INFO][RANK0]: Inference stage skip BinaryCrossEntropyLoss layer, replaced by Sigmoid layer [HUGECTR][03:36:33][INFO][RANK0]: Looking up 10 embeddings (each with 1 values)... [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p1/v, query 0: Fetched 2 keys. Hits 2. [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p2/v, query 0: Fetched 2 keys. Hits 2. [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p3/v, query 0: Fetched 2 keys. Hits 2. [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p4/v, query 0: Fetched 1 keys. Hits 1. [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p5/v, query 0: Fetched 1 keys. Hits 1. 
[HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p6/v, query 0: Fetched 1 keys. Hits 1. [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p7/v, query 0: Fetched 1 keys. Hits 1. [HUGECTR][03:36:33][DEBUG][RANK0]: RedisCluster backend. Table: hctr_et.wdl.sparse_embedding0. Fetched 10 / 10 values. [HUGECTR][03:36:33][DEBUG][RANK0]: RedisCluster: 10 hits, 0 missing! [HUGECTR][03:36:33][DEBUG][RANK0]: RocksDB backend. Table: hctr_et.wdl.sparse_embedding0. Fetched 0 / 0 values. [HUGECTR][03:36:33][DEBUG][RANK0]: RocksDB: 10 hits, 0 missing! [HUGECTR][03:36:33][INFO][RANK0]: Parameter server lookup of 10 / 10 embeddings took 656 us. [HUGECTR][03:36:33][INFO][RANK0]: Looking up 130 embeddings (each with 16 values)... [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p0/v, query 0: Fetched 10 keys. Hits 10. [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p1/v, query 0: Fetched 16 keys. Hits 16. [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p2/v, query 0: Fetched 17 keys. Hits 17. [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p3/v, query 0: Fetched 16 keys. Hits 16. [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p4/v, query 0: Fetched 18 keys. Hits 18. [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p5/v, query 0: Fetched 14 keys. Hits 14. [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p6/v, query 0: Fetched 21 keys. Hits 21. [HUGECTR][03:36:33][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p7/v, query 0: Fetched 18 keys. Hits 18. [HUGECTR][03:36:33][DEBUG][RANK0]: RedisCluster backend. Table: hctr_et.wdl.sparse_embedding1. Fetched 130 / 130 values. [HUGECTR][03:36:33][DEBUG][RANK0]: RedisCluster: 130 hits, 0 missing! [HUGECTR][03:36:33][DEBUG][RANK0]: RocksDB backend. Table: hctr_et.wdl.sparse_embedding1. Fetched 0 / 0 values. [HUGECTR][03:36:33][DEBUG][RANK0]: RocksDB: 130 hits, 0 missing! [HUGECTR][03:36:33][INFO][RANK0]: Parameter server lookup of 130 / 130 embeddings took 882 us. WDL multi-embedding table inference result is [0.013668588362634182, 0.008148659951984882, 0.06785331666469574, 0.007276115473359823, 0.019930679351091385] [HUGECTR][03:36:33][INFO][RANK0]: Disconnecting from RocksDB database... [HUGECTR][03:36:33][INFO][RANK0]: Disconnected from RocksDB database! [HUGECTR][03:36:33][INFO][RANK0]: Disconnecting from Redis database... [HUGECTR][03:36:33][INFO][RANK0]: Disconnected from Redis database!
### 4.2 Inference using Triton

**Please refer to the [Triton_Inference.ipynb](./Triton_Inference.ipynb) notebook to start Triton and do the inference.**

## 5. Continue Training WDL Model
%%writefile wdl_continue.py
import hugectr
from mpi4py import MPI

solver = hugectr.CreateSolver(model_name = "wdl",
                              max_eval_batches = 5000,
                              batchsize_eval = 1024,
                              batchsize = 1024,
                              lr = 0.001,
                              vvgpu = [[0]],
                              i64_input_key = False,
                              use_mixed_precision = False,
                              repeat_dataset = False,
                              use_cuda_graph = True,
                              kafka_brockers = "10.23.137.25:9093")
reader = hugectr.DataReaderParams(data_reader_type = hugectr.DataReaderType_t.Norm,
                                  source = ["criteo_data/file_list."+str(i)+".txt" for i in range(6, 9)],
                                  keyset = ["criteo_data/file_list."+str(i)+".keyset" for i in range(6, 9)],
                                  eval_source = "criteo_data/file_list.9.txt",
                                  check_type = hugectr.Check_t.Sum)
optimizer = hugectr.CreateOptimizer(optimizer_type = hugectr.Optimizer_t.Adam)
hc_config = hugectr.CreateHMemCache(2, 0.5, 0)
etc = hugectr.CreateETC(ps_types = [hugectr.TrainPSType_t.Staged, hugectr.TrainPSType_t.Cached],
                        sparse_models = ["./wdl_0_sparse_model", "./wdl_1_sparse_model"],
                        local_paths = ["./"], hmem_cache_configs = [hc_config])
model = hugectr.Model(solver, reader, optimizer, etc)
model.construct_from_json(graph_config_file = "wdl.json", include_dense_network = True)
model.compile()
# Load the dense weights and optimizer states dumped by save_params_to_files("wdl") above.
# Adjust the file names if your dump used a different prefix or iteration suffix.
model.load_dense_weights("wdl_dense_0.model")
model.load_dense_optimizer_states("wdl_opt_dense_0.model")
model.summary()
model.graph_to_json(graph_config_file = "wdl.json")
model.fit(num_epochs = 1, display = 500, eval_interval = 1000)
model.dump_incremental_model_2kafka()
model.save_params_to_files("wdl_new")

!python wdl_continue.py
[HUGECTR][03:37:25][INFO][RANK0]: Use existing embedding: ./wdl_0_sparse_model [HUGECTR][03:37:25][INFO][RANK0]: Use existing embedding: ./wdl_1_sparse_model HugeCTR Version: 3.2 ====================================================Model Init===================================================== [HUGECTR][03:37:25][INFO][RANK0]: Initialize model: wdl [HUGECTR][03:37:25][INFO][RANK0]: Global seed is 2083265859 [HUGECTR][03:37:25][INFO][RANK0]: Device to NUMA mapping: GPU 0 -> node 0 [HUGECTR][03:37:27][WARNING][RANK0]: Peer-to-peer access cannot be fully enabled. [HUGECTR][03:37:27][INFO][RANK0]: Start all2all warmup [HUGECTR][03:37:27][INFO][RANK0]: End all2all warmup [HUGECTR][03:37:27][INFO][RANK0]: Using All-reduce algorithm: NCCL [HUGECTR][03:37:27][INFO][RANK0]: Device 0: Tesla V100-SXM2-32GB [HUGECTR][03:37:27][DEBUG][RANK0]: Creating Kafka lifetime service. [HUGECTR][03:37:27][INFO][RANK0]: num of DataReader workers: 12 [HUGECTR][03:37:27][INFO][RANK0]: max_num_frequent_categories is not specified using default: 1 [HUGECTR][03:37:27][INFO][RANK0]: max_num_infrequent_samples is not specified using default: -1 [HUGECTR][03:37:27][INFO][RANK0]: p_dup_max is not specified using default: 0.010000 [HUGECTR][03:37:27][INFO][RANK0]: max_all_reduce_bandwidth is not specified using default: 130000000000.000000 [HUGECTR][03:37:27][INFO][RANK0]: max_all_to_all_bandwidth is not specified using default: 190000000000.000000 [HUGECTR][03:37:27][INFO][RANK0]: efficiency_bandwidth_ratio is not specified using default: 1.000000 [HUGECTR][03:37:27][INFO][RANK0]: use_train_precompute_indices is not specified using default: 0 [HUGECTR][03:37:27][INFO][RANK0]: use_eval_precompute_indices is not specified using default: 0 [HUGECTR][03:37:27][INFO][RANK0]: communication_type is not specified using default: IB_NVLink [HUGECTR][03:37:27][INFO][RANK0]: hybrid_embedding_type is not specified using default: Distributed [HUGECTR][03:37:27][INFO][RANK0]: max_vocabulary_size_per_gpu_=6029312 [HUGECTR][03:37:27][INFO][RANK0]: max_num_frequent_categories is not specified using default: 1 [HUGECTR][03:37:27][INFO][RANK0]: max_num_infrequent_samples is not specified using default: -1 [HUGECTR][03:37:27][INFO][RANK0]: p_dup_max is not specified using default: 0.010000 [HUGECTR][03:37:27][INFO][RANK0]: max_all_reduce_bandwidth is not specified using default: 130000000000.000000 [HUGECTR][03:37:27][INFO][RANK0]: max_all_to_all_bandwidth is not specified using default: 190000000000.000000 [HUGECTR][03:37:27][INFO][RANK0]: efficiency_bandwidth_ratio is not specified using default: 1.000000 [HUGECTR][03:37:27][INFO][RANK0]: use_train_precompute_indices is not specified using default: 0 [HUGECTR][03:37:27][INFO][RANK0]: use_eval_precompute_indices is not specified using default: 0 [HUGECTR][03:37:27][INFO][RANK0]: communication_type is not specified using default: IB_NVLink [HUGECTR][03:37:27][INFO][RANK0]: hybrid_embedding_type is not specified using default: Distributed [HUGECTR][03:37:27][INFO][RANK0]: max_vocabulary_size_per_gpu_=5865472 [HUGECTR][03:37:27][INFO][RANK0]: Load the model graph from wdl.json successfully [HUGECTR][03:37:27][INFO][RANK0]: Graph analysis to resolve tensor dependency ===================================================Model Compile=================================================== [HUGECTR][03:37:30][INFO][RANK0]: gpu0 start to init embedding [HUGECTR][03:37:30][INFO][RANK0]: gpu0 init embedding done [HUGECTR][03:37:30][INFO][RANK0]: gpu0 start to init embedding [HUGECTR][03:37:30][INFO][RANK0]: 
gpu0 init embedding done [HUGECTR][03:37:30][INFO][RANK0]: Enable HMEM-Based Parameter Server [HUGECTR][03:37:30][INFO][RANK0]: Enable HMemCache-Based Parameter Server [HUGECTR][03:37:31][INFO][RANK0]: Starting AUC NCCL warm-up [HUGECTR][03:37:31][INFO][RANK0]: Warm-up done 0. Runtime error: Cannot open dense model file /jershi/HugeCTR_gitlab/hugectr/HugeCTR/pybind/model.cpp:1983 0. Runtime error: Cannot open dense opt states file /jershi/HugeCTR_gitlab/hugectr/HugeCTR/pybind/model.cpp:1934 ===================================================Model Summary=================================================== label Dense Sparse label dense wide_data,deep_data (None, 1) (None, 13) β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€” Layer Type Input Name Output Name Output Shape β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€” DistributedSlotSparseEmbeddingHash wide_data sparse_embedding0 (None, 1, 1) ------------------------------------------------------------------------------------------------------------------ DistributedSlotSparseEmbeddingHash deep_data sparse_embedding1 (None, 26, 16) ------------------------------------------------------------------------------------------------------------------ Reshape sparse_embedding1 reshape1 (None, 416) ------------------------------------------------------------------------------------------------------------------ Reshape sparse_embedding0 reshape2 (None, 1) ------------------------------------------------------------------------------------------------------------------ Concat reshape1 concat1 (None, 429) dense ------------------------------------------------------------------------------------------------------------------ InnerProduct concat1 fc1 (None, 1024) ------------------------------------------------------------------------------------------------------------------ ReLU fc1 relu1 (None, 1024) ------------------------------------------------------------------------------------------------------------------ Dropout relu1 dropout1 (None, 1024) ------------------------------------------------------------------------------------------------------------------ InnerProduct dropout1 fc2 (None, 1024) ------------------------------------------------------------------------------------------------------------------ ReLU fc2 relu2 (None, 1024) ------------------------------------------------------------------------------------------------------------------ Dropout relu2 dropout2 (None, 1024) ------------------------------------------------------------------------------------------------------------------ InnerProduct dropout2 fc3 (None, 1) ------------------------------------------------------------------------------------------------------------------ Add fc3 add1 (None, 1) reshape2 ------------------------------------------------------------------------------------------------------------------ BinaryCrossEntropyLoss add1 loss label 
------------------------------------------------------------------------------------------------------------------ [HUGECTR][03:37:31][INFO][RANK0]: Save the model graph to wdl.json successfully =====================================================Model Fit===================================================== [HUGECTR][03:37:31][INFO][RANK0]: Use embedding training cache mode with number of training sources: 3, number of epochs: 1 [HUGECTR][03:37:31][INFO][RANK0]: Training batchsize: 1024, evaluation batchsize: 1024 [HUGECTR][03:37:31][INFO][RANK0]: Evaluation interval: 1000, snapshot interval: 10000 [HUGECTR][03:37:31][INFO][RANK0]: Dense network trainable: True [HUGECTR][03:37:31][INFO][RANK0]: Sparse embedding sparse_embedding0 trainable: True [HUGECTR][03:37:31][INFO][RANK0]: Sparse embedding sparse_embedding1 trainable: True [HUGECTR][03:37:31][INFO][RANK0]: Use mixed precision: False, scaler: 1.000000, use cuda graph: True [HUGECTR][03:37:31][INFO][RANK0]: lr: 0.001000, warmup_steps: 1, end_lr: 0.000000 [HUGECTR][03:37:31][INFO][RANK0]: decay_start: 0, decay_steps: 1, decay_power: 2.000000 [HUGECTR][03:37:31][INFO][RANK0]: Evaluation source file: criteo_data/file_list.9.txt [HUGECTR][03:37:31][INFO][RANK0]: --------------------Epoch 0, source file: criteo_data/file_list.6.txt-------------------- [HUGECTR][03:37:31][INFO][RANK0]: Preparing embedding table for next pass [HUGECTR][03:37:32][INFO][RANK0]: HMEM-Cache PS: Hit rate [load]: 0 % [HUGECTR][03:37:32][INFO][RANK0]: --------------------Epoch 0, source file: criteo_data/file_list.7.txt-------------------- [HUGECTR][03:37:32][INFO][RANK0]: Preparing embedding table for next pass [HUGECTR][03:37:32][INFO][RANK0]: HMEM-Cache PS: Hit rate [dump]: 90.64 % [HUGECTR][03:37:32][INFO][RANK0]: HMEM-Cache PS: Hit rate [load]: 75.02 % [HUGECTR][03:37:33][INFO][RANK0]: --------------------Epoch 0, source file: criteo_data/file_list.8.txt-------------------- [HUGECTR][03:37:33][INFO][RANK0]: Preparing embedding table for next pass [HUGECTR][03:37:33][INFO][RANK0]: HMEM-Cache PS: Hit rate [dump]: 95.81 % [HUGECTR][03:37:33][INFO][RANK0]: HMEM-Cache PS: Hit rate [load]: 87.33 % [HUGECTR][03:37:34][INFO][RANK0]: HMEM-Cache PS: Hit rate [dump]: 85.8 % [HUGECTR][03:37:34][INFO][RANK0]: HMEM-Cache PS: Hit rate [load]: 86.51 % [HUGECTR][03:37:34][INFO][RANK0]: Get updated portion of embedding table [DONE} [HUGECTR][03:37:34][INFO][RANK0]: Dump incremental parameters of hctr_et.wdl.sparse_embedding0 into kafka. Embedding size is 1, num_pairs is 58853 [HUGECTR][03:37:34][INFO][RANK0]: Creating new Kafka topic "hctr_et.wdl.sparse_embedding0". [HUGECTR][03:37:34][INFO][RANK0]: Dump incremental parameters of hctr_et.wdl.sparse_embedding1 into kafka. Embedding size is 16, num_pairs is 58383 [HUGECTR][03:37:34][INFO][RANK0]: Creating new Kafka topic "hctr_et.wdl.sparse_embedding1". [HUGECTR][03:37:42][INFO][RANK0]: HMEM-Cache PS: Hit rate [dump]: 85.8 % [HUGECTR][03:37:42][INFO][RANK0]: Updating sparse model in SSD [DONE] [HUGECTR][03:37:42][INFO][RANK0]: Sync blocks from HMEM-Cache to SSD  β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– 100.0% [ 2/ 2 | 62.5 Hz | 0s<0s] m [HUGECTR][03:37:42][INFO][RANK0]: Dumping dense weights to file, successful [HUGECTR][03:37:43][INFO][RANK0]: Dumping dense optimizer states to file, successful [HUGECTR][03:37:43][INFO][RANK0]: Dumping untrainable weights to file, successful [HUGECTR][03:37:56][DEBUG][RANK0]: Destroying Kafka lifetime service.
## 6. Inference with the New Model

### 6.1 Continuous inference using the Python API
!python wdl_predict.py
[HUGECTR][03:38:09][INFO][RANK0]: default_emb_vec_value is not specified using default: 0.000000 [HUGECTR][03:38:09][INFO][RANK0]: default_emb_vec_value is not specified using default: 0.000000 [HUGECTR][03:38:09][INFO][RANK0]: Creating RedisCluster backend... [HUGECTR][03:38:09][INFO][RANK0]: Connecting to Redis cluster via 127.0.0.1:7000 ... [HUGECTR][03:38:09][INFO][RANK0]: Connected to Redis database! [HUGECTR][03:38:09][INFO][RANK0]: Creating RocksDB backend... [HUGECTR][03:38:09][INFO][RANK0]: Connecting to RocksDB database... [HUGECTR][03:38:09][INFO][RANK0]: RocksDB /wdl_infer/rocksdb/, found column family "default". [HUGECTR][03:38:09][INFO][RANK0]: RocksDB /wdl_infer/rocksdb/, found column family "hctr_et.wdl.sparse_embedding0". [HUGECTR][03:38:09][INFO][RANK0]: RocksDB /wdl_infer/rocksdb/, found column family "hctr_et.wdl.sparse_embedding1". [HUGECTR][03:38:09][INFO][RANK0]: Connected to RocksDB database! [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p0/v, query 0: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p0/v, query 1: Inserted 243 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p1/v, query 0: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p1/v, query 1: Inserted 375 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p2/v, query 0: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p2/v, query 1: Inserted 329 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p3/v, query 0: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p3/v, query 1: Inserted 303 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p4/v, query 0: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p4/v, query 1: Inserted 318 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p5/v, query 0: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p5/v, query 1: Inserted 271 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p6/v, query 0: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p6/v, query 1: Inserted 310 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p7/v, query 0: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p7/v, query 1: Inserted 345 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: RedisCluster backend. Table: hctr_et.wdl.sparse_embedding0. Inserted 82494 / 82494 pairs. [HUGECTR][03:38:09][INFO][RANK0]: Table: hctr_et.wdl.sparse_embedding0; cached 82494 / 82494 embeddings in distributed database! [HUGECTR][03:38:09][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 0: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 1: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 2: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 3: Inserted 10000 pairs. 
[HUGECTR][03:38:09][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 4: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 5: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 6: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 7: Inserted 10000 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding0, query 8: Inserted 2494 pairs. [HUGECTR][03:38:09][DEBUG][RANK0]: RocksDB backend. Table: hctr_et.wdl.sparse_embedding0. Inserted 82494 / 82494 pairs. [HUGECTR][03:38:09][INFO][RANK0]: Table: hctr_et.wdl.sparse_embedding0; cached 82494 embeddings in persistent database! [HUGECTR][03:38:10][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p0/v, query 0: Inserted 7628 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p1/v, query 0: Inserted 7631 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p2/v, query 0: Inserted 7629 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p3/v, query 0: Inserted 7628 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p4/v, query 0: Inserted 7628 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p5/v, query 0: Inserted 7629 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p6/v, query 0: Inserted 7627 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p7/v, query 0: Inserted 7635 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: RedisCluster backend. Table: hctr_et.wdl.sparse_embedding1. Inserted 61035 / 61035 pairs. [HUGECTR][03:38:10][INFO][RANK0]: Table: hctr_et.wdl.sparse_embedding1; cached 61035 / 61035 embeddings in distributed database! [HUGECTR][03:38:10][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding1, query 0: Inserted 10000 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding1, query 1: Inserted 10000 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding1, query 2: Inserted 10000 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding1, query 3: Inserted 10000 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding1, query 4: Inserted 10000 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding1, query 5: Inserted 10000 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: RocksDB table hctr_et.wdl.sparse_embedding1, query 6: Inserted 1035 pairs. [HUGECTR][03:38:10][DEBUG][RANK0]: RocksDB backend. Table: hctr_et.wdl.sparse_embedding1. Inserted 61035 / 61035 pairs. [HUGECTR][03:38:10][INFO][RANK0]: Table: hctr_et.wdl.sparse_embedding1; cached 61035 embeddings in persistent database! [HUGECTR][03:38:10][DEBUG][RANK0]: Real-time subscribers created! [HUGECTR][03:38:10][INFO][RANK0]: Create embedding cache in device 0. [HUGECTR][03:38:10][INFO][RANK0]: Use GPU embedding cache: False, cache size percentage: 0.900000 [HUGECTR][03:38:10][INFO][RANK0]: Configured cache hit rate threshold: 0.500000 [HUGECTR][03:38:10][INFO][RANK0]: Global seed is 2362747437 [HUGECTR][03:38:10][INFO][RANK0]: Device to NUMA mapping: GPU 0 -> node 0 [HUGECTR][03:38:11][WARNING][RANK0]: Peer-to-peer access cannot be fully enabled. 
[HUGECTR][03:38:11][INFO][RANK0]: Start all2all warmup [HUGECTR][03:38:11][INFO][RANK0]: End all2all warmup [HUGECTR][03:38:11][INFO][RANK0]: Model name: wdl [HUGECTR][03:38:11][INFO][RANK0]: Use mixed precision: False [HUGECTR][03:38:11][INFO][RANK0]: Use cuda graph: True [HUGECTR][03:38:11][INFO][RANK0]: Max batchsize: 64 [HUGECTR][03:38:11][INFO][RANK0]: Use I64 input key: True [HUGECTR][03:38:11][INFO][RANK0]: start create embedding for inference [HUGECTR][03:38:11][INFO][RANK0]: sparse_input name wide_data [HUGECTR][03:38:11][INFO][RANK0]: sparse_input name deep_data [HUGECTR][03:38:11][INFO][RANK0]: create embedding for inference success [HUGECTR][03:38:11][INFO][RANK0]: Inference stage skip BinaryCrossEntropyLoss layer, replaced by Sigmoid layer [HUGECTR][03:38:12][INFO][RANK0]: Looking up 10 embeddings (each with 1 values)... [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p1/v, query 0: Fetched 2 keys. Hits 2. [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p2/v, query 0: Fetched 2 keys. Hits 2. [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p3/v, query 0: Fetched 2 keys. Hits 2. [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p4/v, query 0: Fetched 1 keys. Hits 1. [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p5/v, query 0: Fetched 1 keys. Hits 1. [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p6/v, query 0: Fetched 1 keys. Hits 1. [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding0/p7/v, query 0: Fetched 1 keys. Hits 1. [HUGECTR][03:38:12][DEBUG][RANK0]: RedisCluster backend. Table: hctr_et.wdl.sparse_embedding0. Fetched 10 / 10 values. [HUGECTR][03:38:12][DEBUG][RANK0]: RedisCluster: 10 hits, 0 missing! [HUGECTR][03:38:12][DEBUG][RANK0]: RocksDB backend. Table: hctr_et.wdl.sparse_embedding0. Fetched 0 / 0 values. [HUGECTR][03:38:12][DEBUG][RANK0]: RocksDB: 10 hits, 0 missing! [HUGECTR][03:38:12][INFO][RANK0]: Parameter server lookup of 10 / 10 embeddings took 679 us. [HUGECTR][03:38:12][INFO][RANK0]: Looking up 130 embeddings (each with 16 values)... [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p0/v, query 0: Fetched 10 keys. Hits 10. [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p1/v, query 0: Fetched 16 keys. Hits 16. [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p2/v, query 0: Fetched 17 keys. Hits 17. [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p3/v, query 0: Fetched 16 keys. Hits 16. [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p4/v, query 0: Fetched 18 keys. Hits 18. [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p5/v, query 0: Fetched 14 keys. Hits 14. [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p6/v, query 0: Fetched 21 keys. Hits 21. [HUGECTR][03:38:12][DEBUG][RANK0]: Redis partition hctr_et.wdl.sparse_embedding1/p7/v, query 0: Fetched 18 keys. Hits 18. [HUGECTR][03:38:12][DEBUG][RANK0]: RedisCluster backend. Table: hctr_et.wdl.sparse_embedding1. Fetched 130 / 130 values. [HUGECTR][03:38:12][DEBUG][RANK0]: RedisCluster: 130 hits, 0 missing! [HUGECTR][03:38:12][DEBUG][RANK0]: RocksDB backend. Table: hctr_et.wdl.sparse_embedding1. Fetched 0 / 0 values. 
[HUGECTR][03:38:12][DEBUG][RANK0]: RocksDB: 130 hits, 0 missing! [HUGECTR][03:38:12][INFO][RANK0]: Parameter server lookup of 130 / 130 embeddings took 712 us. WDL multi-embedding table inference result is [0.0036218352615833282, 0.000900191895198077, 0.0546233244240284, 0.0028622469399124384, 0.005312761757522821] [HUGECTR][03:38:12][INFO][RANK0]: Disconnecting from RocksDB database... [HUGECTR][03:38:12][INFO][RANK0]: Disconnected from RocksDB database! [HUGECTR][03:38:12][INFO][RANK0]: Disconnecting from Redis database... [HUGECTR][03:38:12][INFO][RANK0]: Disconnected from Redis database!
BSD-3-Clause
samples/hierarchical_deployment/hps_e2e_demo/Continuous_Training.ipynb
miguelusque/hugectr_backend
Observations and Insights
# Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import scipy.stats as st # Study data files mouse_metadata_path = "data/Mouse_metadata.csv" study_results_path = "data/Study_results.csv" # Read the mouse data and the study results mouse_metadata = pd.read_csv(mouse_metadata_path) study_results = pd.read_csv(study_results_path) # Combine the data into a single dataset merged_df = pd.merge(study_results, mouse_metadata, how="left", on="Mouse ID") # Preview of the merged dataset merged_df.head() # Checking the number of mice in the DataFrame. len(merged_df["Mouse ID"].value_counts()) # Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint. duplicate_df = merged_df[merged_df.duplicated(subset=["Mouse ID", "Timepoint"], keep=False)] duplicate_df[["Mouse ID", "Timepoint"]] # Optional: Get all the data for the duplicate mouse ID. duplicate_df = merged_df.loc[merged_df["Mouse ID"] == "g989"] duplicate_df # Create a clean DataFrame by dropping the duplicate mouse by its ID. clean_df = merged_df.drop(duplicate_df.index) clean_df.head() # Checking the number of mice in the clean DataFrame. len(clean_df["Mouse ID"].value_counts())
_____no_output_____
ADSL
Pymaceuticals/pymaceuticals_starter.ipynb
tomlip/Matplotlib-challenge
Summary Statistics
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen # This method is the most straightforward, creating multiple series and putting them all together at the end. drug_df = clean_df.groupby("Drug Regimen") # calculating the statistics mean_arr = drug_df["Tumor Volume (mm3)"].mean() median_arr = drug_df["Tumor Volume (mm3)"].median() var_arr = drug_df["Tumor Volume (mm3)"].var() std_arr = drug_df["Tumor Volume (mm3)"].std() sem_arr = drug_df["Tumor Volume (mm3)"].sem() # creating statistic summary dataframe stats_df = pd.DataFrame({ "Mean Tumor Volume": mean_arr, "Median Tumor Volume": median_arr, "Tumor Volume Variance": var_arr, "Tumor Volume Std. Dev.": std_arr, "Tumor Volume Std. Err.": sem_arr }) # show statistic summary stats_df # Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen # This method produces everything in a single groupby function. stats2_df = clean_df.groupby("Drug Regimen").agg({"Tumor Volume (mm3)": ["mean", "median", "var", "std", "sem"]}) stats2_df
_____no_output_____
ADSL
Pymaceuticals/pymaceuticals_starter.ipynb
tomlip/Matplotlib-challenge
Bar Plots
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pandas. #creating count dataframe bar_df = clean_df.groupby("Drug Regimen").count() #creating bar chart dataframe bar_df = bar_df.sort_values("Timepoint", ascending=False) bars_df = bar_df["Timepoint"] # creating bar chart graph = bars_df.plot(kind="bar") graph.set_ylabel("Number of Data Points") # Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pyplot. #creating count dataframe bar_df = clean_df.groupby("Drug Regimen").count() #creating bar chart dataframe bar_df = bar_df.sort_values("Timepoint", ascending=False) bars_df = bar_df["Timepoint"] plt.bar(bar_df.index, bar_df["Timepoint"]) plt.ylabel("Number of Data Points") plt.xticks(rotation="vertical")
_____no_output_____
ADSL
Pymaceuticals/pymaceuticals_starter.ipynb
tomlip/Matplotlib-challenge
Pie Plots
# Generate a pie plot showing the distribution of female versus male mice using pandas pie_chart = mouse_metadata["Sex"].value_counts() pie_chart.plot(kind='pie', subplots=True, autopct="%0.1f%%") # Generate a pie plot showing the distribution of female versus male mice using pyplot f_vs_m = mouse_metadata["Sex"].value_counts() plt.pie(f_vs_m, autopct="%1.1f%%")
_____no_output_____
ADSL
Pymaceuticals/pymaceuticals_starter.ipynb
tomlip/Matplotlib-challenge
Quartiles, Outliers and Boxplots
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers. 

# returning the max timepoints for Capomulin
temp_df = clean_df.loc[clean_df["Drug Regimen"] == "Capomulin", :]
max_capo = temp_df.groupby("Mouse ID").max()

# returning the max timepoints for Ramicane
temp_df = clean_df.loc[clean_df["Drug Regimen"] == "Ramicane", :]
max_rami = temp_df.groupby("Mouse ID").max()

# returning the max timepoints for Infubinol
temp_df = clean_df.loc[clean_df["Drug Regimen"] == "Infubinol", :]
max_infu = temp_df.groupby("Mouse ID").max()

# returning the max timepoints for Ceftamin
temp_df = clean_df.loc[clean_df["Drug Regimen"] == "Ceftamin", :]
max_ceft = temp_df.groupby("Mouse ID").max()

# calculating IQR's
quartiles = max_capo["Tumor Volume (mm3)"].quantile([.25,.5,.75])
capo_iqr = quartiles[0.75] - quartiles[0.25]
lower_bound = quartiles[0.25] - (1.5*capo_iqr)
upper_bound = quartiles[0.75] + (1.5*capo_iqr)
capo_outliers = max_capo["Tumor Volume (mm3)"].loc[(max_capo["Tumor Volume (mm3)"] < lower_bound) | (max_capo["Tumor Volume (mm3)"] > upper_bound)]

quartiles = max_rami["Tumor Volume (mm3)"].quantile([.25,.5,.75])
rami_iqr = quartiles[0.75] - quartiles[0.25]
lower_bound = quartiles[0.25] - (1.5*rami_iqr)
upper_bound = quartiles[0.75] + (1.5*rami_iqr)
rami_outliers = max_rami["Tumor Volume (mm3)"].loc[(max_rami["Tumor Volume (mm3)"] < lower_bound) | (max_rami["Tumor Volume (mm3)"] > upper_bound)]

infu_quartiles = max_infu["Tumor Volume (mm3)"].quantile([.25,.5,.75])
infu_iqr = infu_quartiles[0.75] - infu_quartiles[0.25]
lower_bound_infu = infu_quartiles[0.25] - (1.5*infu_iqr)
upper_bound_infu = infu_quartiles[0.75] + (1.5*infu_iqr)
infu_outliers = max_infu["Tumor Volume (mm3)"].loc[(max_infu["Tumor Volume (mm3)"] < lower_bound_infu) | (max_infu["Tumor Volume (mm3)"] > upper_bound_infu)]

quartiles = max_ceft["Tumor Volume (mm3)"].quantile([.25,.5,.75])
ceft_iqr = quartiles[0.75] - quartiles[0.25]
lower_bound = quartiles[0.25] - (1.5*ceft_iqr)
upper_bound = quartiles[0.75] + (1.5*ceft_iqr)
ceft_outliers = max_ceft["Tumor Volume (mm3)"].loc[(max_ceft["Tumor Volume (mm3)"] < lower_bound) | (max_ceft["Tumor Volume (mm3)"] > upper_bound)]

len(infu_outliers)

# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
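The box plot requested by the final comment above can be drawn with Matplotlib's boxplot; a minimal sketch, assuming the max_capo, max_rami, max_infu, and max_ceft DataFrames defined in the cell above:

# gather the final tumor volumes for the four regimens of interest
tumor_volumes = [
    max_capo["Tumor Volume (mm3)"],
    max_rami["Tumor Volume (mm3)"],
    max_infu["Tumor Volume (mm3)"],
    max_ceft["Tumor Volume (mm3)"],
]

# box plot of final tumor volume per regimen; fliers mark potential outliers
fig, ax = plt.subplots()
ax.boxplot(tumor_volumes, labels=["Capomulin", "Ramicane", "Infubinol", "Ceftamin"],
           flierprops={"markerfacecolor": "red", "markersize": 10})
ax.set_ylabel("Final Tumor Volume (mm3)")
plt.show()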
_____no_output_____
ADSL
Pymaceuticals/pymaceuticals_starter.ipynb
tomlip/Matplotlib-challenge
Line and Scatter Plots
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin # Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
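A minimal sketch of the two plots described by the comments above, assuming the clean_df DataFrame from the earlier cells; the weight column name "Weight (g)" is an assumption based on the mouse metadata, and the line plot simply uses the first Capomulin-treated mouse found:

# line plot: tumor volume over time for a single Capomulin-treated mouse
capomulin_df = clean_df.loc[clean_df["Drug Regimen"] == "Capomulin", :]
mouse_id = capomulin_df["Mouse ID"].iloc[0]  # any Capomulin mouse works here
single_mouse = capomulin_df.loc[capomulin_df["Mouse ID"] == mouse_id, :]
plt.plot(single_mouse["Timepoint"], single_mouse["Tumor Volume (mm3)"])
plt.title(f"Capomulin treatment of mouse {mouse_id}")
plt.xlabel("Timepoint (days)")
plt.ylabel("Tumor Volume (mm3)")
plt.show()

# scatter plot: mouse weight versus average tumor volume for the Capomulin regimen
capomulin_avg = capomulin_df.groupby("Mouse ID").mean(numeric_only=True)
plt.scatter(capomulin_avg["Weight (g)"], capomulin_avg["Tumor Volume (mm3)"])
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()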
_____no_output_____
ADSL
Pymaceuticals/pymaceuticals_starter.ipynb
tomlip/Matplotlib-challenge
Correlation and Regression
# Calculate the correlation coefficient and linear regression model # for mouse weight and average tumor volume for the Capomulin regimen
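A minimal sketch of the correlation and regression described by the comment above, reusing scipy.stats (imported as st in the setup cell) and recomputing the per-mouse Capomulin averages; the "Weight (g)" column name is again an assumption based on the mouse metadata:

# per-mouse averages for the Capomulin regimen
capomulin_df = clean_df.loc[clean_df["Drug Regimen"] == "Capomulin", :]
capomulin_avg = capomulin_df.groupby("Mouse ID").mean(numeric_only=True)
weight = capomulin_avg["Weight (g)"]
avg_tumor_vol = capomulin_avg["Tumor Volume (mm3)"]

# correlation coefficient between mouse weight and average tumor volume
corr = st.pearsonr(weight, avg_tumor_vol)[0]
print(f"The correlation between mouse weight and average tumor volume is {corr:.2f}")

# linear regression line overlaid on the scatter plot
slope, intercept, rvalue, pvalue, stderr = st.linregress(weight, avg_tumor_vol)
plt.scatter(weight, avg_tumor_vol)
plt.plot(weight, slope * weight + intercept, color="red")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()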
_____no_output_____
ADSL
Pymaceuticals/pymaceuticals_starter.ipynb
tomlip/Matplotlib-challenge
Create Dataset
def create_dataset(df, name, bonds): print(f"Creating Dataset and Saving to {drive_path}/data/{name}.pkl") data = df.sample(frac=1) data = data.reset_index(drop=True) data['mol'] = data['smiles'].apply(lambda x: create_dgl_features(x, bonds)) data.to_pickle(f"{drive_path}/data/{name}.pkl") return data def featurize_atoms(mol): feats = [] atom_features = utils.ConcatFeaturizer([ utils.atom_type_one_hot, utils.atomic_number_one_hot, utils.atom_degree_one_hot, utils.atom_explicit_valence_one_hot, utils.atom_formal_charge_one_hot, utils.atom_num_radical_electrons_one_hot, utils.atom_hybridization_one_hot, utils.atom_is_aromatic_one_hot ]) for atom in mol.GetAtoms(): feats.append(atom_features(atom)) return {'feats': torch.tensor(feats).float()} def featurize_bonds(mol): feats = [] bond_features = utils.ConcatFeaturizer([ utils.bond_type_one_hot, utils.bond_is_conjugated_one_hot, utils.bond_is_in_ring_one_hot, utils.bond_stereo_one_hot, utils.bond_direction_one_hot, ]) for bond in mol.GetBonds(): feats.append(bond_features(bond)) feats.append(bond_features(bond)) return {'edge_feats': torch.tensor(feats).float()} def create_dgl_features(smiles, bonds): mol = Chem.MolFromSmiles(smiles) mol = standardizer.standardize_mol(mol) if bonds: dgl_graph = utils.mol_to_bigraph(mol=mol, node_featurizer=featurize_atoms, edge_featurizer=featurize_bonds, canonical_atom_order=True) else: dgl_graph = utils.mol_to_bigraph(mol=mol, node_featurizer=featurize_atoms, canonical_atom_order=True) dgl_graph = dgl.add_self_loop(dgl_graph) return dgl_graph def load_dataset(dataset, bonds=False, feat='graph', create_new=False): """ dataset values: muv, tox21, dude-gpcr feat values: graph, ecfp """ dataset_test_tasks = { 'tox21': ['SR-HSE', 'SR-MMP', 'SR-p53'], 'muv': ['MUV-832', 'MUV-846', 'MUV-852', 'MUV-858', 'MUV-859'], 'dude-gpcr': ['adrb2', 'cxcr4'] } dataset_original = dataset if bonds: dataset = dataset + "_with_bonds" if path.exists(f"{drive_path}/data/{dataset}_dgl.pkl") and not create_new: # Load Dataset print("Reading Pickle") if feat == 'graph': data = pd.read_pickle(f"{drive_path}/data/{dataset}_dgl.pkl") else: data = pd.read_pickle(f"{drive_path}/data/{dataset}_ecfp.pkl") else: # Create Dataset df = pd.read_csv(f"{drive_path}/data/raw/{dataset_original}.csv") if feat == 'graph': data = create_dataset(df, f"{dataset}_dgl", bonds) else: data = create_ecfp_dataset(df, f"{dataset}_ecfp") test_tasks = dataset_test_tasks.get(dataset_original) drop_cols = test_tasks.copy() drop_cols.extend(['mol_id', 'smiles', 'mol']) train_tasks = [x for x in list(data.columns) if x not in drop_cols] train_dfs = dict.fromkeys(train_tasks) for task in train_tasks: df = data[[task, 'mol']].dropna() df.columns = ['y', 'mol'] # FOR BOND INFORMATION if with_bonds: for index, r in df.iterrows(): if r.mol.edata['edge_feats'].shape[-1] < 17: df.drop(index, inplace=True) train_dfs[task] = df for key in train_dfs: print(key, len(train_dfs[key])) if feat == 'graph': feat_length = data.iloc[0].mol.ndata['feats'].shape[-1] print("Feature Length", feat_length) if with_bonds: feat_length = data.iloc[0].mol.edata['edge_feats'].shape[-1] print("Feature Length", feat_length) else: print("Edge Features: ", with_bonds) test_dfs = dict.fromkeys(test_tasks) for task in test_tasks: df = data[[task, 'mol']].dropna() df.columns = ['y', 'mol'] # FOR BOND INFORMATION if with_bonds: for index, r in df.iterrows(): if r.mol.edata['edge_feats'].shape[-1] < 17: df.drop(index, inplace=True) test_dfs[task] = df for key in test_dfs: print(key, len(test_dfs[key])) # 
return data, train_tasks, test_tasks return train_tasks, train_dfs, test_tasks, test_dfs
time: 148 ms (started: 2021-11-30 11:59:08 +00:00)
MIT
Prototypical Nets Tox21 ECFP.ipynb
danielvlla/Few-Shot-Learning-for-Low-Data-Drug-Discovery
Create Episode
def create_episode(n_support_pos, n_support_neg, n_query, data, test=False, train_balanced=True): """ n_query = per class data points Xy = dataframe dataset in format [['y', 'mol']] """ support = [] query = [] n_query_pos = n_query n_query_neg = n_query support_neg = data[data['y'] == 0].sample(n_support_neg) support_pos = data[data['y'] == 1].sample(n_support_pos) # organise support by class in array dimensions support.append(support_neg.to_numpy()) support.append(support_pos.to_numpy()) support = np.array(support, dtype=object) support_X = [rec[1] for sup_class in support for rec in sup_class] support_y = np.asarray([rec[0] for sup_class in support for rec in sup_class], dtype=np.float16).flatten() data = data.drop(support_neg.index) data = data.drop(support_pos.index) if len(data[data['y'] == 1]) < n_query: n_query_pos = len(data[data['y'] == 1]) if test: # test uses all data remaining query_neg = data[data['y'] == 0].to_numpy() query_pos = data[data['y'] == 1].to_numpy() elif (not test) and train_balanced: # for balanced queries, same size as support query_neg = data[data['y'] == 0].sample(n_query_neg).to_numpy() query_pos = data[data['y'] == 1].sample(n_query_pos).to_numpy() elif (not test) and (not train_balanced): # print('test') query_neg = data[data['y'] == 0].sample(1).to_numpy() query_pos = data[data['y'] == 1].sample(1).to_numpy() query_rem = data.sample(n_query*2 - 2) query_neg_rem = query_rem[query_rem['y'] == 0].to_numpy() query_pos_rem = query_rem[query_rem['y'] == 1].to_numpy() query_neg = np.concatenate((query_neg, query_neg_rem)) query_pos = np.concatenate((query_pos, query_pos_rem), axis=0) query_X = np.concatenate([query_neg[:, 1], query_pos[:, 1]]) query_y = np.concatenate([query_neg[:, 0], query_pos[:, 0]]) return support_X, support_y, query_X, query_y # task = 'NR-AR' # df = data[[task, 'mol']] # df = df.dropna() # df.columns = ['y', 'mol'] # support_X, support_y, query_X, query_y = create_episode(1, 1, 64, df) # support_y # testing # support = [] # query = [] # support_neg = df[df['y'] == 0].sample(2) # support_pos = df[df['y'] == 1].sample(2) # # organise support by class in array dimensions # support.append(support_neg.to_numpy()) # support.append(support_pos.to_numpy()) # support = np.array(support) # support.shape # support[:, :, 1]
time: 2.25 ms (started: 2021-11-27 16:25:20 +00:00)
MIT
Prototypical Nets Tox21 ECFP.ipynb
danielvlla/Few-Shot-Learning-for-Low-Data-Drug-Discovery
Graph Embedding
class GCN(nn.Module): def __init__(self, in_channels, out_channels=128): super(GCN, self).__init__() self.conv1 = GraphConv(in_channels, 64) self.conv2 = GraphConv(64, 128) self.conv3 = GraphConv(128, 64) self.sum_pool = SumPooling() self.dense = nn.Linear(64, out_channels) def forward(self, graph, in_feat): h = self.conv1(graph, in_feat) h = F.relu(h) graph.ndata['h'] = h graph.update_all(fn.copy_u('h', 'm'), fn.max('m', 'h')) h = self.conv2(graph, graph.ndata['h']) h = F.relu(h) graph.ndata['h'] = h graph.update_all(fn.copy_u('h', 'm'), fn.max('m', 'h')) h = self.conv3(graph, graph.ndata['h']) h = F.relu(h) graph.ndata['h'] = h graph.update_all(fn.copy_u('h', 'm'), fn.max('m', 'h')) output = self.sum_pool(graph, graph.ndata['h']) output = torch.tanh(output) output = self.dense(output) output = torch.tanh(output) return output class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = nn.Linear(2048, 1000) self.fc2 = nn.Linear(1000, 500) self.fc3 = nn.Linear(500, 128) def forward(self, x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = torch.tanh(self.fc3(x)) return x
time: 9.12 ms (started: 2021-11-30 11:46:53 +00:00)
MIT
Prototypical Nets Tox21 ECFP.ipynb
danielvlla/Few-Shot-Learning-for-Low-Data-Drug-Discovery
Distance Function
def euclidean_dist(x, y): # x: N x D # y: M x D n = x.size(0) m = y.size(0) d = x.size(1) assert d == y.size(1) x = x.unsqueeze(1).expand(n, m, d) y = y.unsqueeze(0).expand(n, m, d) return torch.pow(x - y, 2).sum(2)
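As a quick, hypothetical shape check (assuming torch is already imported as in the surrounding cells), euclidean_dist maps N query embeddings and M prototypes to an N x M distance matrix:

demo_queries = torch.randn(4, 128)  # 4 query embeddings of dimension 128
demo_protos = torch.randn(2, 128)   # 2 class prototypes of dimension 128
print(euclidean_dist(demo_queries, demo_protos).shape)  # torch.Size([4, 2])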
time: 4.05 ms (started: 2021-11-30 11:44:08 +00:00)
MIT
Prototypical Nets Tox21 ECFP.ipynb
danielvlla/Few-Shot-Learning-for-Low-Data-Drug-Discovery
LSTM
def cos(x, y): transpose_shape = tuple(list(range(len(y.shape)))[::-1]) x = x.float() denom = ( torch.sqrt(torch.sum(torch.square(x)) * torch.sum(torch.square(y))) + torch.finfo(torch.float32).eps) return torch.matmul(x, torch.permute(y, transpose_shape)) / denom class ResiLSTMEmbedding(nn.Module): def __init__(self, n_support, n_feat=128, max_depth=3): super(ResiLSTMEmbedding, self).__init__() self.max_depth = max_depth self.n_support = n_support self.n_feat = n_feat self.support_lstm = nn.LSTMCell(input_size=2*self.n_feat, hidden_size=self.n_feat) self.q_init = torch.nn.Parameter(torch.zeros((self.n_support, self.n_feat), dtype=torch.float, device="cuda")) self.support_states_init_h = torch.nn.Parameter(torch.zeros(self.n_support, self.n_feat)) self.support_states_init_c = torch.nn.Parameter(torch.zeros(self.n_support, self.n_feat)) self.query_lstm = nn.LSTMCell(input_size=2*self.n_feat, hidden_size=self.n_feat) if torch.cuda.is_available(): self.support_lstm = self.support_lstm.cuda() self.query_lstm = self.query_lstm.cuda() self.q_init = self.q_init.cuda() # self.p_init = self.p_init.cuda() def forward(self, x_support, x_query): self.p_init = torch.zeros((len(x_query), self.n_feat)).to(device) self.query_states_init_h = torch.zeros(len(x_query), self.n_feat).to(device) self.query_states_init_c = torch.zeros(len(x_query), self.n_feat).to(device) x_support = x_support x_query = x_query z_support = x_support q = self.q_init p = self.p_init support_states_h = self.support_states_init_h support_states_c = self.support_states_init_c query_states_h = self.query_states_init_h query_states_c = self.query_states_init_c for i in range(self.max_depth): sup_e = cos(z_support + q, x_support) sup_a = torch.nn.functional.softmax(sup_e, dim=-1) sup_r = torch.matmul(sup_a, x_support).float() query_e = cos(x_query + p, z_support) query_a = torch.nn.functional.softmax(query_e, dim=-1) query_r = torch.matmul(query_a, z_support).float() sup_qr = torch.cat((q, sup_r), 1) support_hidden, support_out = self.support_lstm(sup_qr, (support_states_h, support_states_c)) q = support_hidden query_pr = torch.cat((p, query_r), 1) query_hidden, query_out = self.query_lstm(query_pr, (query_states_h, query_states_c)) p = query_hidden z_support = sup_r return x_support + q, x_query + p
time: 75.5 ms (started: 2021-11-30 11:44:11 +00:00)
MIT
Prototypical Nets Tox21 ECFP.ipynb
danielvlla/Few-Shot-Learning-for-Low-Data-Drug-Discovery
Protonet https://colab.research.google.com/drive/1QDYIwg2-iiUpVU8YyAh0lOgFgFPhVgvx#scrollTo=BnLOgECOKG_y
class ProtoNet(nn.Module): def __init__(self, with_bonds=False): """ Prototypical Network """ super(ProtoNet, self).__init__() def forward(self, X_support, X_query, n_support_pos, n_support_neg): n_support = len(X_support) # prototypes z_dim = X_support.size(-1) # size of the embedding - 128 z_proto_0 = X_support[:n_support_neg].view(n_support_neg, z_dim).mean(0) z_proto_1 = X_support[n_support_neg:n_support].view(n_support_pos, z_dim).mean(0) z_proto = torch.stack((z_proto_0, z_proto_1)) # queries z_query = X_query # compute distance dists = euclidean_dist(z_query, z_proto) # [128, 2] # compute probabilities log_p_y = nn.LogSoftmax(dim=1)(-dists) # [128, 2] return log_p_y
time: 18.5 ms (started: 2021-11-30 11:44:14 +00:00)
MIT
Prototypical Nets Tox21 ECFP.ipynb
danielvlla/Few-Shot-Learning-for-Low-Data-Drug-Discovery
Training Loop
def train(train_tasks, train_dfs, balanced_queries, k_pos, k_neg, n_query, episodes, lr): writer = SummaryWriter() start_time = time.time() node_feat_size = 177 embedding_size = 128 encoder = Net() resi_lstm = ResiLSTMEmbedding(k_pos+k_neg) proto_net = ProtoNet() loss_fn = nn.NLLLoss() if torch.cuda.is_available(): encoder = encoder.cuda() resi_lstm = resi_lstm.cuda() proto_net = proto_net.cuda() loss_fn = loss_fn.cuda() encoder_optimizer = torch.optim.Adam(encoder.parameters(), lr = lr) lstm_optimizer = torch.optim.Adam(resi_lstm.parameters(), lr = lr) # proto_optimizer = torch.optim.Adam(proto_net.parameters(), lr = lr) # encoder_scheduler = torch.optim.lr_scheduler.StepLR(encoder_optimizer, step_size=1, gamma=0.8) encoder_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(encoder_optimizer, patience=300, verbose=False) lstm_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(lstm_optimizer, patience=300, verbose=False) # rn_scheduler = torch.optim.lr_scheduler.StepLR(rn_optimizer, step_size=1, gamma=0.8) # rn_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(rn_optimizer, patience=500, verbose=False) episode_num = 1 early_stop = False losses = [] running_loss = 0.0 running_acc = 0.0 running_roc = 0.0 running_prc = 0.0 # for task in shuffled_train_tasks: pbar = trange(episodes, desc=f"Training") # while episode_num < episodes and not early_stop: for episode in pbar: episode_loss = 0.0 # SET TRAINING MODE encoder.train() resi_lstm.train() proto_net.train() # RANDOMISE ORDER OF TASKS PER EPISODE shuffled_train_tasks = random.sample(train_tasks, len(train_tasks)) # LOOP OVER TASKS for task in shuffled_train_tasks: # CREATE EPISODE FOR TASK X = train_dfs[task] X_support, y_support, X_query, y_query = create_episode(k_pos, k_neg, n_query, X, False, balanced_queries) # TOTAL NUMBER OF QUERIES total_query = int((y_query == 0).sum() + (y_query == 1).sum()) # ONE HOT QUERY TARGETS # query_targets = torch.from_numpy(y_query.astype('int')) # targets = F.one_hot(query_targets, num_classes=2) target_inds = torch.from_numpy(y_query.astype('float32')).float() target_inds = target_inds.unsqueeze(1).type(torch.int64) targets = Variable(target_inds, requires_grad=False).to(device) if torch.cuda.is_available(): targets=targets.cuda() n_support = k_pos + k_neg # flat_support = list(np.concatenate(X_support).flat) # X = flat_support + list(X_query) X = X_support + list(X_query) # CREATE EMBEDDINGS dataloader = torch.utils.data.DataLoader(X, batch_size=(n_support + total_query), shuffle=False, pin_memory=True) for graph in dataloader: graph = graph.to(device) embeddings = encoder.forward(graph) # LSTM EMBEDDINGS emb_support = embeddings[:n_support] emb_query = embeddings[n_support:] emb_support, emb_query = resi_lstm(emb_support, emb_query) # PROTO NETS logits = proto_net(emb_support, emb_query, k_pos, k_neg) # loss = loss_fn(logits, torch.max(query_targets, 1)[1]) loss = loss_fn(logits, targets.squeeze()) encoder.zero_grad() resi_lstm.zero_grad() proto_net.zero_grad() loss.backward() encoder_optimizer.step() lstm_optimizer.step() _, y_hat = logits.max(1) # class_indices = torch.max(query_targets, 1)[1] targets = targets.squeeze().cpu() y_hat = y_hat.squeeze().detach().cpu() roc = roc_auc_score(targets, y_hat) prc = average_precision_score(targets, y_hat) acc = accuracy_score(targets, y_hat) # proto_optimizer.step() # EVALUATE TRAINING LOOP ON TASK episode_loss += loss.item() running_loss += loss.item() running_acc += acc running_roc += roc running_prc += prc pbar.set_description(f"Episode 
{episode_num} - Loss {loss.item():.6f} - Acc {acc:.4f} - LR {encoder_optimizer.param_groups[0]['lr']}") pbar.refresh() losses.append(episode_loss / len(train_tasks)) writer.add_scalar('Loss/train', episode_loss / len(train_tasks), episode_num) if encoder_optimizer.param_groups[0]['lr'] < 0.000001: break # EARLY STOP elif episode_num < episodes: episode_num += 1 encoder_scheduler.step(loss) lstm_scheduler.step(loss) epoch_loss = running_loss / (episode_num*len(train_tasks)) epoch_acc = running_acc / (episode_num*len(train_tasks)) epoch_roc = running_roc / (episode_num*len(train_tasks)) epoch_prc = running_prc / (episode_num*len(train_tasks)) print(f'Loss: {epoch_loss:.5f} Acc: {epoch_acc:.4f} ROC: {epoch_roc:.4f} PRC: {epoch_prc:.4f}') end_time = time.time() train_info = { "losses": losses, "duration": str(timedelta(seconds=(end_time - start_time))), "episodes": episode_num, "train_roc": epoch_roc, "train_prc": epoch_prc } return encoder, resi_lstm, proto_net, train_info
time: 171 ms (started: 2021-11-30 11:57:42 +00:00)
MIT
Prototypical Nets Tox21 ECFP.ipynb
danielvlla/Few-Shot-Learning-for-Low-Data-Drug-Discovery
Testing Loop
def test(encoder, lstm, proto_net, test_tasks, test_dfs, k_pos, k_neg, rounds): encoder.eval() lstm.eval() proto_net.eval() test_info = {} with torch.no_grad(): for task in test_tasks: Xy = test_dfs[task] running_loss = [] running_acc = [] running_roc = [0] running_prc = [0] running_preds = [] running_targets = [] running_actuals = [] for round in trange(rounds): X_support, y_support, X_query, y_query = create_episode(k_pos, k_neg, n_query=0, data=Xy, test=True, train_balanced=False) total_query = int((y_query == 0).sum() + (y_query == 1).sum()) n_support = k_pos + k_neg # flat_support = list(np.concatenate(X_support).flat) # X = flat_support + list(X_query) X = X_support + list(X_query) # CREATE EMBEDDINGS dataloader = torch.utils.data.DataLoader(X, batch_size=(n_support + total_query), shuffle=False, pin_memory=True) for graph in dataloader: graph = graph.to(device) embeddings = encoder.forward(graph) # LSTM EMBEDDINGS emb_support = embeddings[:n_support] emb_query = embeddings[n_support:] emb_support, emb_query = lstm(emb_support, emb_query) # PROTO NETS logits = proto_net(emb_support, emb_query, k_pos, k_neg) # PRED _, y_hat_actual = logits.max(1) y_hat = logits[:, 1] # targets = targets.squeeze().cpu() target_inds = torch.from_numpy(y_query.astype('float32')).float() target_inds = target_inds.unsqueeze(1).type(torch.int64) targets = Variable(target_inds, requires_grad=False) y_hat = y_hat.squeeze().detach().cpu() roc = roc_auc_score(targets, y_hat) prc = average_precision_score(targets, y_hat) # acc = accuracy_score(targets, y_hat) running_preds.append(y_hat) running_actuals.append(y_hat_actual) running_targets.append(targets) # running_acc.append(acc) running_roc.append(roc) running_prc.append(prc) median_index = running_roc.index(statistics.median(running_roc)) if median_index == rounds: median_index = median_index - 1 chart_preds = running_preds[median_index] chart_actuals = running_actuals[median_index].detach().cpu() chart_targets = running_targets[median_index] c_auc = roc_auc_score(chart_targets, chart_preds) c_fpr, c_tpr, _ = roc_curve(chart_targets, chart_preds) plt.plot(c_fpr, c_tpr, marker='.', label = 'AUC = %0.2f' % c_auc) plt.plot([0, 1], [0, 1],'r--', label='No Skill') # plt.plot([0, 0, 1], [0, 1, 1], 'g--', label='Perfect Classifier') plt.title('Receiver Operating Characteristic') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.legend(loc = 'best') plt.savefig(f"{drive_path}/{method_dir}/graphs/roc_{dataset}_{task}_ecfp_pos{n_pos}_neg{n_neg}.png") plt.figure().clear() # prc_graph = PrecisionRecallDisplay.from_predictions(chart_targets, chart_preds) c_precision, c_recall, _ = precision_recall_curve(chart_targets, chart_preds) plt.title('Precision Recall Curve') # plt.plot([0, 1], [0, 0], 'r--', label='No Skill') no_skill = len(chart_targets[chart_targets==1]) / len(chart_targets) plt.plot([0, 1], [no_skill, no_skill], linestyle='--', label='No Skill') # plt.plot([0, 1, 1], [1, 1, 0], 'g--', label='Perfect Classifier') plt.plot(c_recall, c_precision, marker='.', label = 'AUC = %0.2f' % auc(c_recall, c_precision)) plt.xlabel('Recall') plt.ylabel('Precision') plt.legend(loc = 'best') plt.savefig(f"{drive_path}/{method_dir}/graphs/prc_{dataset}_{task}_ecfp_pos{n_pos}_neg{n_neg}.png") plt.figure().clear() cm = ConfusionMatrixDisplay.from_predictions(chart_targets, chart_actuals) plt.title('Confusion Matrix') plt.savefig(f"{drive_path}/{method_dir}/graphs/cm_{dataset}_{task}_ecfp_pos{n_pos}_neg{n_neg}.png") plt.figure().clear() running_roc.pop(0) # 
remove the added 0 running_prc.pop(0) # remove the added 0 # round_acc = f"{statistics.mean(running_acc):.3f} \u00B1 {statistics.stdev(running_acc):.3f}" round_roc = f"{statistics.mean(running_roc):.3f} \u00B1 {statistics.stdev(running_roc):.3f}" round_prc = f"{statistics.mean(running_prc):.3f} \u00B1 {statistics.stdev(running_prc):.3f}" test_info[task] = { # "acc": round_acc, "roc": round_roc, "prc": round_prc, "roc_values": running_roc, "prc_values": running_prc } print(f'Test {task}') # print(f"Acc: {round_acc}") print(f"ROC: {round_roc}") print(f"PRC: {round_prc}") return targets, y_hat, test_info
time: 161 ms (started: 2021-11-30 11:57:33 +00:00)
MIT
Prototypical Nets Tox21 ECFP.ipynb
danielvlla/Few-Shot-Learning-for-Low-Data-Drug-Discovery
Initiate Training and Testing
from google.colab import drive drive.mount('/content/drive') # PATHS drive_path = "/content/drive/MyDrive/Colab Notebooks/MSC_21" method_dir = "ProtoNets" log_path = f"{drive_path}/{method_dir}/logs/" # PARAMETERS dataset = 'tox21' with_bonds = False test_rounds = 20 n_query = 64 # per class episodes = 10000 lr = 0.001 balanced_queries = True #FOR DETERMINISTIC REPRODUCABILITY randomseed = 12 torch.manual_seed(randomseed) np.random.seed(randomseed) random.seed(randomseed) torch.cuda.manual_seed(randomseed) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') torch.backends.cudnn.is_available() torch.backends.cudnn.benchmark = False # selects fastest conv algo torch.backends.cudnn.deterministic = True # LOAD DATASET # data, train_tasks, test_tasks = load_dataset(dataset, bonds=with_bonds, create_new=False) train_tasks, train_dfs, test_tasks, test_dfs = load_dataset(dataset, bonds=with_bonds, feat='ecfp', create_new=False) combinations = [ [10, 10], [5, 10], [1, 10], [1, 5], [1, 1] ] # worksheet = gc.open_by_url("https://docs.google.com/spreadsheets/d/1K15Rx4IZqiLgjUsmMq0blB-WB16MDY-ENR2j8z7S6Ss/edit#gid=0").sheet1 cols = [ 'DATE', 'CPU', 'CPU COUNT', 'GPU', 'GPU RAM', 'RAM', 'CUDA', 'REF', 'DATASET', 'ARCHITECTURE', 'SPLIT', 'TARGET', 'ACCURACY', 'ROC', 'PRC', 'ROC_VALUES', 'PRC_VALUES', 'TRAIN ROC', 'TRAIN PRC', 'EPISODES', 'TRAINING TIME' ] load_from_saved = False for comb in combinations: n_pos = comb[0] n_neg = comb[1] results = pd.DataFrame(columns=cols) print(f"\nRUNNING {n_pos}+/{n_neg}-") if load_from_saved: encoder = GCN(177, 128) lstm = ResiLSTMEmbedding(n_pos+n_neg) proto_net = ProtoNet() encoder.load_state_dict(torch.load(f"{drive_path}/{method_dir}/{dataset}_ecfp_encoder_pos{n_pos}_neg{n_neg}.pt")) lstm.load_state_dict(torch.load(f"{drive_path}/{method_dir}/{dataset}_ecfp__lstm_pos{n_pos}_neg{n_neg}.pt")) proto_net.load_state_dict(torch.load(f"{drive_path}/{method_dir}/{dataset}_ecfp__proto_pos{n_pos}_neg{n_neg}.pt")) encoder.to(device) lstm.to(device) proto_net.to(device) else: encoder, lstm, proto_net, train_info = train(train_tasks, train_dfs, balanced_queries, n_pos, n_neg, n_query, episodes, lr) if with_bonds: torch.save(encoder.state_dict(), f"{drive_path}/{method_dir}/{dataset}_ecfp__encoder_pos{n_pos}_neg{n_neg}_bonds.pt") torch.save(lstm.state_dict(), f"{drive_path}/{method_dir}/{dataset}_ecfp__lstm_pos{n_pos}_neg{n_neg}_bonds.pt") torch.save(proto_net.state_dict(), f"{drive_path}/{method_dir}/{dataset}_ecfp__proto_pos{n_pos}_neg{n_neg}_bonds.pt") else: torch.save(encoder.state_dict(), f"{drive_path}/{method_dir}/{dataset}_ecfp__encoder_pos{n_pos}_neg{n_neg}.pt") torch.save(lstm.state_dict(), f"{drive_path}/{method_dir}/{dataset}_ecfp__lstm_pos{n_pos}_neg{n_neg}.pt") torch.save(proto_net.state_dict(), f"{drive_path}/{method_dir}/{dataset}_ecfp__proto_pos{n_pos}_neg{n_neg}.pt") loss_plot = plt.plot(train_info['losses'])[0] loss_plot.figure.savefig(f"{drive_path}/{method_dir}/loss_plots/{dataset}_ecfp__pos{n_pos}_neg{n_neg}.png") plt.figure().clear() targets, preds, test_info = test(encoder, lstm, proto_net, test_tasks, test_dfs, n_pos, n_neg, test_rounds) dt_string = datetime.now().strftime("%d/%m/%Y %H:%M:%S") cpu = get_cmd_output('cat /proc/cpuinfo | grep -E "model name"') cpu = cpu.split('\n')[0].split('\t: ')[-1] cpu_count = psutil.cpu_count() cuda_version = get_cmd_output('nvcc --version | grep -E "Build"') gpu = get_cmd_output("nvidia-smi -L") general_ram_gb = humanize.naturalsize(psutil.virtual_memory().available) gpu_ram_total_mb = 
GPU.getGPUs()[0].memoryTotal for target in test_info: if load_from_saved: rec = pd.DataFrame([[dt_string, cpu, cpu_count, gpu, gpu_ram_total_mb, general_ram_gb, cuda_version, "MSC", dataset, {method_dir}, f"{n_pos}+/{n_neg}-", target, 0, test_info[target]['roc'], test_info[target]['prc'], test_info[target]['roc_values'], test_info[target]['prc_values'], 99, 99, 99, 102]], columns=cols) results = pd.concat([results, rec]) else: rec = pd.DataFrame([[dt_string, cpu, cpu_count, gpu, gpu_ram_total_mb, general_ram_gb, cuda_version, "MSC", dataset, {method_dir}, f"{n_pos}+/{n_neg}-", target, 0, test_info[target]['roc'], test_info[target]['prc'], test_info[target]['roc_values'], test_info[target]['prc_values'], train_info["train_roc"], train_info["train_prc"], train_info["episodes"], train_info["duration"] ]], columns=cols) results = pd.concat([results, rec]) if load_from_saved: results.to_csv(f"{drive_path}/results/{dataset}_{method_dir}_ecfp_pos{n_pos}_neg{n_neg}_from_saved.csv", index=False) else: results.to_csv(f"{drive_path}/results/{dataset}_{method_dir}_ecfp_pos{n_pos}_neg{n_neg}.csv", index=False)
_____no_output_____
MIT
Prototypical Nets Tox21 ECFP.ipynb
danielvlla/Few-Shot-Learning-for-Low-Data-Drug-Discovery
Normalize text
herod_fp = '/Users/kyle/cltk_data/greek/text/tlg/plaintext/TLG0016.txt' with open(herod_fp) as fo: herod_raw = fo.read() print(herod_raw[2000:2500]) # What do we notice needs help? from cltk.corpus.utils.formatter import tlg_plaintext_cleanup herod_clean = tlg_plaintext_cleanup(herod_raw, rm_punctuation=True, rm_periods=False) print(herod_clean[2000:2500])
έρῃ Ξ΄α½² λέγουσι γΡνΡῇ μΡτὰ ταῦτα Ἀλέξανδρον Ο„α½ΈΞ½ Πριάμου ἀκηκοότα ταῦτα ἐθΡλῆσαί ΞΏαΌ± ἐκ Ο„αΏ†Ο‚ Ἑλλάδος δι ἁρπαγῆς γΡνέσθαι Ξ³Ο…Ξ½Ξ±αΏ–ΞΊΞ± ἐπιστάμΡνον πάντως ὅτι οὐ δώσΡι Ξ΄α½·ΞΊΞ±Ο‚ οὐδὲ γὰρ ἐκΡίνους διδόναι. ΞŸα½•Ο„Ο‰ δὴ ἁρπάσαντος αὐτοῦ αΌ™Ξ»α½³Ξ½Ξ·Ξ½ τοῖσι Ἕλλησι Ξ΄α½ΉΞΎΞ±ΞΉ πρῢτον Ο€α½³ΞΌΟˆΞ±Ξ½Ο„Ξ±Ο‚ ἀγγέλους ἀπαιτέΡιν τΡ αΌ™Ξ»α½³Ξ½Ξ·Ξ½ ΞΊΞ±α½Ά Ξ΄α½·ΞΊΞ±Ο‚ Ο„αΏ†Ο‚ ἁρπαγῆς αἰτέΡιν. ΀οὺς Ξ΄α½² Ο€ΟΞΏΟŠΟƒΟ‡ΞΏΞΌα½³Ξ½Ο‰Ξ½ ταῦτα προφέρΡιν σφι ΞœΞ·Ξ΄Ξ΅α½·Ξ·Ο‚ τὴν ἁρπαγὡν ὑς οὐ δόντΡς αὐτοὢ Ξ΄α½·ΞΊΞ±Ο‚ οὐδὲ ἐκδόντΡς ἀπαιτΡόντων βουλοίατό σφι παρ ἄλλων Ξ΄α½·ΞΊΞ±Ο‚ γίνΡσθαι. Ξœα½³Ο‡ΟΞΉ ΞΌα½²Ξ½ ὦν τούτου ἁρπαγὰς ΞΌΞΏ
MIT
public_talks/2015_11_17_nyu/3 Normalization, tokenization, tagging.ipynb
kylepjohnson/ipython_notebooks
Tokenize sentences
from cltk.tokenize.sentence import TokenizeSentence tokenizer = TokenizeSentence('greek') herod_sents = tokenizer.tokenize_sentences(herod_clean) print(herod_sents[:5]) for sent in herod_sents: print(sent) print() input()
Ἡροδότου Ξ˜ΞΏΟ…Οα½·ΞΏΟ… ἱστορίης ἀπόδΡξις αΌ₯δΡ ὑς μὡτΡ Ο„α½° γΡνόμΡνα ἐξ ἀνθρώπων Ο„αΏ· χρόνῳ ἐξίτηλα γένηται μὡτΡ ἔργα μΡγάλα τΡ ΞΊΞ±α½Ά θωμαστά Ο„α½° ΞΌα½²Ξ½ Ἕλλησι Ο„α½° Ξ΄α½² βαρβάροισι ἀποδΡχθέντα αΌ€ΞΊΞ»α½³Ξ± γένηται Ο„α½± τΡ ἄλλα ΞΊΞ±α½Ά δι αΌ£Ξ½ Ξ±αΌ°Ο„α½·Ξ·Ξ½ ἐπολέμησαν ἀλλὡλοισι. ΠΡρσέων ΞΌα½³Ξ½ Ξ½Ο…Ξ½ ΞΏαΌ± Ξ»α½ΉΞ³ΞΉΞΏΞΉ Φοίνικας Ξ±αΌ°Ο„α½·ΞΏΟ…Ο‚ φασὢ γΡνέσθαι Ο„αΏ†Ο‚ διαφορῆς τούτους γάρ αΌ€Ο€α½Έ Ο„αΏ†Ο‚ αΌ˜ΟΟ…ΞΈΟαΏ†Ο‚ καλΡομένης θαλάσσης ἀπικομένους ἐπὢ τὡνδΡ τὴν θάλασσαν ΞΊΞ±α½Ά οἰκὡσαντας τοῦτον Ο„α½ΈΞ½ χῢρον Ο„α½ΈΞ½ ΞΊΞ±α½Ά Ξ½αΏ¦Ξ½ οἰκέουσι αὐτίκα ναυτιλίῃσι μακρῇσι ἐπιθέσθαι ἀπαγινέοντας Ξ΄α½² φορτία Αἰγύπτιά τΡ ΞΊΞ±α½Ά αΌˆΟƒΟƒα½»ΟΞΉΞ± Ο„αΏ‡ τΡ ἄλλῃ χώρῃ ἐσαπικνέΡσθαι ΞΊΞ±α½Ά δὴ ΞΊΞ±α½Ά ἐς αΌŒΟΞ³ΞΏΟ‚ Ο„α½Έ Ξ΄α½² αΌŒΟΞ³ΞΏΟ‚ τοῦτον Ο„α½ΈΞ½ χρόνον προΡῖχΡ ἅπασι Ο„αΏΆΞ½ ἐν Ο„αΏ‡ Ξ½αΏ¦Ξ½ Ἑλλάδι καλΡομένῃ χώρῃ.
MIT
public_talks/2015_11_17_nyu/3 Normalization, tokenization, tagging.ipynb
kylepjohnson/ipython_notebooks
Make word tokens
from cltk.tokenize.word import nltk_tokenize_words for sent in herod_sents: words = nltk_tokenize_words(sent) print(words) input()
['Ἡροδότου', 'Ξ˜ΞΏΟ…Οα½·ΞΏΟ…', 'ἱστορίης', 'ἀπόδΡξις', 'αΌ₯δΡ', 'ὑς', 'μὡτΡ', 'Ο„α½°', 'γΡνόμΡνα', 'ἐξ', 'ἀνθρώπων', 'Ο„αΏ·', 'χρόνῳ', 'ἐξίτηλα', 'γένηται', 'μὡτΡ', 'ἔργα', 'μΡγάλα', 'τΡ', 'ΞΊΞ±α½Ά', 'θωμαστά', 'Ο„α½°', 'ΞΌα½²Ξ½', 'Ἕλλησι', 'Ο„α½°', 'Ξ΄α½²', 'βαρβάροισι', 'ἀποδΡχθέντα', 'αΌ€ΞΊΞ»α½³Ξ±', 'γένηται', 'Ο„α½±', 'τΡ', 'ἄλλα', 'ΞΊΞ±α½Ά', 'δι', 'αΌ£Ξ½', 'Ξ±αΌ°Ο„α½·Ξ·Ξ½', 'ἐπολέμησαν', 'ἀλλὡλοισι', '.'] ['ΠΡρσέων', 'ΞΌα½³Ξ½', 'Ξ½Ο…Ξ½', 'ΞΏαΌ±', 'Ξ»α½ΉΞ³ΞΉΞΏΞΉ', 'Φοίνικας', 'Ξ±αΌ°Ο„α½·ΞΏΟ…Ο‚', 'φασὢ', 'γΡνέσθαι', 'Ο„αΏ†Ο‚', 'διαφορῆς', 'τούτους', 'γάρ', 'αΌ€Ο€α½Έ', 'Ο„αΏ†Ο‚', 'αΌ˜ΟΟ…ΞΈΟαΏ†Ο‚', 'καλΡομένης', 'θαλάσσης', 'ἀπικομένους', 'ἐπὢ', 'τὡνδΡ', 'τὴν', 'θάλασσαν', 'ΞΊΞ±α½Ά', 'οἰκὡσαντας', 'τοῦτον', 'Ο„α½ΈΞ½', 'χῢρον', 'Ο„α½ΈΞ½', 'ΞΊΞ±α½Ά', 'Ξ½αΏ¦Ξ½', 'οἰκέουσι', 'αὐτίκα', 'ναυτιλίῃσι', 'μακρῇσι', 'ἐπιθέσθαι', 'ἀπαγινέοντας', 'Ξ΄α½²', 'φορτία', 'Αἰγύπτιά', 'τΡ', 'ΞΊΞ±α½Ά', 'αΌˆΟƒΟƒα½»ΟΞΉΞ±', 'Ο„αΏ‡', 'τΡ', 'ἄλλῃ', 'χώρῃ', 'ἐσαπικνέΡσθαι', 'ΞΊΞ±α½Ά', 'δὴ', 'ΞΊΞ±α½Ά', 'ἐς', 'αΌŒΟΞ³ΞΏΟ‚', 'Ο„α½Έ', 'Ξ΄α½²', 'αΌŒΟΞ³ΞΏΟ‚', 'τοῦτον', 'Ο„α½ΈΞ½', 'χρόνον', 'προΡῖχΡ', 'ἅπασι', 'Ο„αΏΆΞ½', 'ἐν', 'Ο„αΏ‡', 'Ξ½αΏ¦Ξ½', 'Ἑλλάδι', 'καλΡομένῃ', 'χώρῃ', '.'] ['αΌˆΟ€ΞΉΞΊΞΏΞΌα½³Ξ½ΞΏΟ…Ο‚', 'Ξ΄α½²', 'τοὺς', 'Φοίνικας', 'ἐς', 'δὴ', 'Ο„α½Έ', 'αΌŒΟΞ³ΞΏΟ‚', 'τοῦτο', 'διατίθΡσθαι', 'Ο„α½ΈΞ½', 'φόρτον', '.']
MIT
public_talks/2015_11_17_nyu/3 Normalization, tokenization, tagging.ipynb
kylepjohnson/ipython_notebooks
Tokenize Latin enclitics
from cltk.corpus.utils.formatter import phi5_plaintext_cleanup
from cltk.tokenize.word import WordTokenizer

# 'LAT0474': 'Marcus Tullius Cicero, Cicero, Tully',
cicero_fp = '/Users/kyle/cltk_data/latin/text/phi5/plaintext/LAT0474.TXT'
with open(cicero_fp) as fo:
    cicero_raw = fo.read()

cicero_clean = phi5_plaintext_cleanup(cicero_raw, rm_punctuation=True, rm_periods=False)  # ~5 sec
print(cicero_clean[400:600])

# use the Latin sentence tokenizer created here, not the Greek one from above
sent_tokenizer = TokenizeSentence('latin')
cicero_sents = sent_tokenizer.tokenize_sentences(cicero_clean)
print(cicero_sents[:3])

word_tokenizer = WordTokenizer('latin')  # Patrick's tokenizer

for sent in cicero_sents:
    #words = nltk_tokenize_words(sent)
    sub_words = word_tokenizer.tokenize(sent)
    print(sub_words)
    input()
['Quae', 'res', 'in', 'civitate', 'duae', 'plurimum', 'possunt', 'eae', 'contra', 'nos', 'ambae', 'faciunt', 'in', 'hoc', 'tempore', 'summa', 'gratia', 'et', 'eloquentia', 'quarum', 'alteram', 'C.', 'Aquili', 'vereor', 'alteram', 'metuo.'] ['Eloquentia', 'Q.', 'Hortensi', 'ne', 'me', 'in', 'dicendo', 'impediat', 'non', 'nihil', 'commoveor', 'gratia', 'Sex.'] ['Naevi', 'ne', 'P.', 'Quinctio', 'noceat', 'id', 'vero', 'non', 'mediocriter', 'pertimesco.'] ['Ne', '-que', 'hoc', 'tanto', 'opere', 'querendum', 'videretur', 'haec', 'summa', 'in', 'illis', 'esse', 'si', 'in', 'nobis', 'essent', 'saltem', 'mediocria', 'verum', 'ita', 'se', 'res', 'habet', 'ut', 'ego', 'qui', 'neque', 'usu', 'satis', 'et', 'ingenio', 'parum', 'possum', 'cum', 'patrono', 'disertissimo', 'comparer', 'P.', 'Quinctius', 'cui', 'tenues', 'opes', 'nullae', 'facultates', 'exiguae', 'amicorum', 'copiae', 'sunt', 'cum', 'adversario', 'gratiosissimo', 'contendat.'] ['Illud', 'quoque', 'nobis', 'accedit', 'incommodum', 'quod', 'M.', 'Iunius', 'qui', 'hanc', 'causam', 'aliquotiens', 'apud', 'te', 'egit', 'homo', 'et', 'in', 'aliis', 'causis', 'exercitatus', 'et', 'in', 'hac', 'multum', 'ac', 'saepe', 'versatus', 'hoc', 'tempore', 'abest', 'nova', 'legatio', '-ne', 'impeditus', 'et', 'ad', 'me', 'ventum', 'est', 'qui', 'ut', 'summa', 'haberem', 'cetera', 'temporis', 'quidem', 'certe', 'vix', 'satis', 'habui', 'ut', 'rem', 'tantam', 'tot', 'controversiis', 'implicatam', 'possem', 'cognoscere.']
MIT
public_talks/2015_11_17_nyu/3 Normalization, tokenization, tagging.ipynb
kylepjohnson/ipython_notebooks
POS Tagging
from cltk.tag.pos import POSTag
tagger = POSTag('greek')

# Herodotus again
for sent in herod_sents:
    tagged_text = tagger.tag_unigram(sent)
    print(tagged_text)
    input()
[('Ἡροδότου', None), ('Ξ˜ΞΏΟ…Οα½·ΞΏΟ…', None), ('ἱστορίης', None), ('ἀπόδΡξις', None), ('αΌ₯δΡ', 'P-S---FN-'), ('ὑς', 'D--------'), ('μὡτΡ', None), ('Ο„α½°', 'L-P---NA-'), ('γΡνόμΡνα', None), ('ἐξ', 'R--------'), ('ἀνθρώπων', None), ('Ο„αΏ·', 'P-S---MD-'), ('χρόνῳ', None), ('ἐξίτηλα', None), ('γένηται', None), ('μὡτΡ', None), ('ἔργα', 'N-P---NA-'), ('μΡγάλα', None), ('τΡ', 'G--------'), ('ΞΊΞ±α½Ά', 'C--------'), ('θωμαστά', None), ('Ο„α½°', 'L-P---NA-'), ('ΞΌα½²Ξ½', 'G--------'), ('Ἕλλησι', None), ('Ο„α½°', 'L-P---NA-'), ('Ξ΄α½²', 'G--------'), ('βαρβάροισι', None), ('ἀποδΡχθέντα', None), ('αΌ€ΞΊΞ»α½³Ξ±', None), ('γένηται', None), ('Ο„α½±', None), ('τΡ', 'G--------'), ('ἄλλα', 'A-P---NA-'), ('ΞΊΞ±α½Ά', 'C--------'), ('δι', None), ('αΌ£Ξ½', 'P-S---FA-'), ('Ξ±αΌ°Ο„α½·Ξ·Ξ½', None), ('ἐπολέμησαν', None), ('ἀλλὡλοισι', None), ('.', 'U--------')] [('ΠΡρσέων', None), ('ΞΌα½³Ξ½', None), ('Ξ½Ο…Ξ½', 'D--------'), ('ΞΏαΌ±', 'P-S---MD-'), ('Ξ»α½ΉΞ³ΞΉΞΏΞΉ', None), ('Φοίνικας', None), ('Ξ±αΌ°Ο„α½·ΞΏΟ…Ο‚', None), ('φασὢ', 'V3PPIA---'), ('γΡνέσθαι', None), ('Ο„αΏ†Ο‚', 'L-S---FG-'), ('διαφορῆς', None), ('τούτους', None), ('γάρ', None), ('αΌ€Ο€α½Έ', 'R--------'), ('Ο„αΏ†Ο‚', 'L-S---FG-'), ('αΌ˜ΟΟ…ΞΈΟαΏ†Ο‚', None), ('καλΡομένης', None), ('θαλάσσης', None), ('ἀπικομένους', None), ('ἐπὢ', 'R--------'), ('τὡνδΡ', None), ('τὴν', 'P-S---FA-'), ('θάλασσαν', None), ('ΞΊΞ±α½Ά', 'C--------'), ('οἰκὡσαντας', None), ('τοῦτον', 'A-S---MA-'), ('Ο„α½ΈΞ½', 'P-S---MA-'), ('χῢρον', 'N-S---MA-'), ('Ο„α½ΈΞ½', 'P-S---MA-'), ('ΞΊΞ±α½Ά', 'C--------'), ('Ξ½αΏ¦Ξ½', 'D--------'), ('οἰκέουσι', None), ('αὐτίκα', None), ('ναυτιλίῃσι', None), ('μακρῇσι', 'A-P---FD-'), ('ἐπιθέσθαι', None), ('ἀπαγινέοντας', None), ('Ξ΄α½²', 'G--------'), ('φορτία', None), ('Αἰγύπτιά', None), ('τΡ', 'G--------'), ('ΞΊΞ±α½Ά', 'C--------'), ('αΌˆΟƒΟƒα½»ΟΞΉΞ±', None), ('Ο„αΏ‡', 'P-S---FD-'), ('τΡ', 'G--------'), ('ἄλλῃ', 'D--------'), ('χώρῃ', None), ('ἐσαπικνέΡσθαι', None), ('ΞΊΞ±α½Ά', 'C--------'), ('δὴ', 'G--------'), ('ΞΊΞ±α½Ά', 'C--------'), ('ἐς', 'R--------'), ('αΌŒΟΞ³ΞΏΟ‚', None), ('Ο„α½Έ', 'L-S---NA-'), ('Ξ΄α½²', 'G--------'), ('αΌŒΟΞ³ΞΏΟ‚', None), ('τοῦτον', 'A-S---MA-'), ('Ο„α½ΈΞ½', 'P-S---MA-'), ('χρόνον', None), ('προΡῖχΡ', None), ('ἅπασι', 'A-P---MD-'), ('Ο„αΏΆΞ½', 'L-P---MG-'), ('ἐν', 'R--------'), ('Ο„αΏ‡', 'P-S---FD-'), ('Ξ½αΏ¦Ξ½', 'D--------'), ('Ἑλλάδι', None), ('καλΡομένῃ', None), ('χώρῃ', None), ('.', 'U--------')] [('αΌˆΟ€ΞΉΞΊΞΏΞΌα½³Ξ½ΞΏΟ…Ο‚', None), ('Ξ΄α½²', 'G--------'), ('τοὺς', 'P-P---MA-'), ('Φοίνικας', None), ('ἐς', 'R--------'), ('δὴ', 'G--------'), ('Ο„α½Έ', 'L-S---NA-'), ('αΌŒΟΞ³ΞΏΟ‚', None), ('τοῦτο', 'A-S---NA-'), ('διατίθΡσθαι', None), ('Ο„α½ΈΞ½', 'P-S---MA-'), ('φόρτον', None), ('.', 'U--------')]
MIT
public_talks/2015_11_17_nyu/3 Normalization, tokenization, tagging.ipynb
kylepjohnson/ipython_notebooks
NER
## Latin -- decent, but see M, P, etc
from cltk.tag import ner

# Cicero this time
for sent in cicero_sents:
    ner_tags = ner.tag_ner('latin', input_text=sent, output_type=list)
    print(ner_tags)
    input()

# Greek -- not as good!
from cltk.tag import ner

# Herodotus again
for sent in herod_sents:
    ner_tags = ner.tag_ner('greek', input_text=sent, output_type=list)
    print(ner_tags)
    input()
[('Ἡροδότου',), ('Ξ˜ΞΏΟ…Οα½·ΞΏΟ…',), ('ἱστορίης',), ('ἀπόδΡξις',), ('αΌ₯δΡ',), ('ὑς',), ('μὡτΡ',), ('Ο„α½°',), ('γΡνόμΡνα',), ('ἐξ',), ('ἀνθρώπων',), ('Ο„αΏ·',), ('χρόνῳ',), ('ἐξίτηλα',), ('γένηται',), ('μὡτΡ',), ('ἔργα',), ('μΡγάλα',), ('τΡ',), ('ΞΊΞ±α½Ά',), ('θωμαστά',), ('Ο„α½°',), ('ΞΌα½²Ξ½',), ('Ἕλλησι', 'Entity'), ('Ο„α½°',), ('Ξ΄α½²',), ('βαρβάροισι',), ('ἀποδΡχθέντα',), ('αΌ€ΞΊΞ»α½³Ξ±',), ('γένηται',), ('Ο„α½±',), ('τΡ',), ('ἄλλα',), ('ΞΊΞ±α½Ά',), ('δι',), ('αΌ£Ξ½',), ('Ξ±αΌ°Ο„α½·Ξ·Ξ½',), ('ἐπολέμησαν',), ('ἀλλὡλοισι',), ('.',)] [('ΠΡρσέων',), ('ΞΌα½³Ξ½',), ('Ξ½Ο…Ξ½',), ('ΞΏαΌ±',), ('Ξ»α½ΉΞ³ΞΉΞΏΞΉ',), ('Φοίνικας',), ('Ξ±αΌ°Ο„α½·ΞΏΟ…Ο‚',), ('φασὢ',), ('γΡνέσθαι',), ('Ο„αΏ†Ο‚',), ('διαφορῆς',), ('τούτους',), ('γάρ',), ('αΌ€Ο€α½Έ',), ('Ο„αΏ†Ο‚',), ('αΌ˜ΟΟ…ΞΈΟαΏ†Ο‚', 'Entity'), ('καλΡομένης',), ('θαλάσσης',), ('ἀπικομένους',), ('ἐπὢ',), ('τὡνδΡ',), ('τὴν',), ('θάλασσαν',), ('ΞΊΞ±α½Ά',), ('οἰκὡσαντας',), ('τοῦτον',), ('Ο„α½ΈΞ½',), ('χῢρον',), ('Ο„α½ΈΞ½',), ('ΞΊΞ±α½Ά',), ('Ξ½αΏ¦Ξ½',), ('οἰκέουσι',), ('αὐτίκα',), ('ναυτιλίῃσι',), ('μακρῇσι',), ('ἐπιθέσθαι',), ('ἀπαγινέοντας',), ('Ξ΄α½²',), ('φορτία',), ('Αἰγύπτιά',), ('τΡ',), ('ΞΊΞ±α½Ά',), ('αΌˆΟƒΟƒα½»ΟΞΉΞ±',), ('Ο„αΏ‡',), ('τΡ',), ('ἄλλῃ',), ('χώρῃ',), ('ἐσαπικνέΡσθαι',), ('ΞΊΞ±α½Ά',), ('δὴ',), ('ΞΊΞ±α½Ά',), ('ἐς',), ('αΌŒΟΞ³ΞΏΟ‚', 'Entity'), ('Ο„α½Έ',), ('Ξ΄α½²',), ('αΌŒΟΞ³ΞΏΟ‚', 'Entity'), ('τοῦτον',), ('Ο„α½ΈΞ½',), ('χρόνον',), ('προΡῖχΡ',), ('ἅπασι',), ('Ο„αΏΆΞ½',), ('ἐν',), ('Ο„αΏ‡',), ('Ξ½αΏ¦Ξ½',), ('Ἑλλάδι',), ('καλΡομένῃ',), ('χώρῃ',), ('.',)] [('αΌˆΟ€ΞΉΞΊΞΏΞΌα½³Ξ½ΞΏΟ…Ο‚',), ('Ξ΄α½²',), ('τοὺς',), ('Φοίνικας',), ('ἐς',), ('δὴ',), ('Ο„α½Έ',), ('αΌŒΟΞ³ΞΏΟ‚', 'Entity'), ('τοῦτο',), ('διατίθΡσθαι',), ('Ο„α½ΈΞ½',), ('φόρτον',), ('.',)] [('Πέμπτῃ',), ('Ξ΄α½²',), ('αΌ’',), ('αΌ•ΞΊΟ„αΏƒ',), ('ἑμέρῃ',), ('αΌ€Ο€',), ('αΌ§Ο‚',), ('ἀπίκοντο',), ('ἐξΡμπολημένων',), ('σφι',), ('σχΡδὸν',), ('πάντων',), ('ἐλθΡῖν',), ('ἐπὢ',), ('τὴν',), ('θάλασσαν',), ('Ξ³Ο…Ξ½Ξ±αΏ–ΞΊΞ±Ο‚',), ('ἄλλας',), ('τΡ',), ('πολλὰς',), ('ΞΊΞ±α½Ά',), ('δὴ',), ('ΞΊΞ±α½Ά',), ('τοῦ',), ('βασιλέος',), ('θυγατέρα',), ('Ο„α½Έ',), ('Ξ΄α½³',), ('ΞΏαΌ±',), ('ΞΏα½”Ξ½ΞΏΞΌΞ±',), ('Ξ΅αΌΆΞ½Ξ±ΞΉ',), ('ΞΊΞ±Ο„α½°',), ('Ο„α½ Ο…Ο„α½Έ',), ('Ο„α½Έ',), ('ΞΊΞ±α½Ά',), ('ἝλληνΡς', 'Entity'), ('λέγουσι',), ('αΌΈΞΏαΏ¦Ξ½', 'Entity'), ('τὴν',), ('Ἰνάχου',), ('.',)] [('΀αύτας',), ('στάσας',), ('ΞΊΞ±Ο„α½°',), ('πρύμνην',), ('Ο„αΏ†Ο‚',), ('Ξ½Ξ΅α½ΈΟ‚',), ('ὠνέΡσθαι',), ('Ο„αΏΆΞ½',), ('φορτίων',), ('Ο„αΏΆΞ½',), ('σφι',), ('αΌ¦Ξ½',), ('ΞΈΟ…ΞΌα½ΈΟ‚',), ('μάλιστα',), ('ΞΊΞ±α½Ά',), ('τοὺς',), ('Φοίνικας',), ('διακΡλΡυσαμένους',), ('ὁρμῆσαι',), ('ἐπ',), ('αὐτάς',), ('.',)] [('Ξ€α½°Ο‚', 'Entity'), ('ΞΌα½²Ξ½',), ('δὴ',), ('πλέονας',), ('Ο„αΏΆΞ½',), ('Ξ³Ο…Ξ½Ξ±ΞΉΞΊαΏΆΞ½',), ('ἀποφυγΡῖν',), ('τὴν',), ('Ξ΄α½²',), ('αΌΈΞΏαΏ¦Ξ½', 'Entity'), ('Οƒα½ΊΞ½',), ('ἄλλῃσι',), ('ἁρπασθῆναι',), ('ἐσβαλομένους',), ('Ξ΄α½²',), ('ἐς',), ('τὴν',), ('Ξ½α½³Ξ±',), ('οἴχΡσθαι',), ('ἀποπλέοντας',), ('ἐπ',), ('Αἰγύπτου',), ('.',)]
MIT
public_talks/2015_11_17_nyu/3 Normalization, tokenization, tagging.ipynb
kylepjohnson/ipython_notebooks
Stopword filtering
from cltk.stop.greek.stops import STOPS_LIST #p = PunktLanguageVars() for sent in herod_sents: words = nltk_tokenize_words(sent) print('W/ STOPS', words) words = [w for w in words if not w in STOPS_LIST] print('W/O STOPS', words) input()
W/ STOPS ['Ἡροδότου', 'Ξ˜ΞΏΟ…Οα½·ΞΏΟ…', 'ἱστορίης', 'ἀπόδΡξις', 'αΌ₯δΡ', 'ὑς', 'μὡτΡ', 'Ο„α½°', 'γΡνόμΡνα', 'ἐξ', 'ἀνθρώπων', 'Ο„αΏ·', 'χρόνῳ', 'ἐξίτηλα', 'γένηται', 'μὡτΡ', 'ἔργα', 'μΡγάλα', 'τΡ', 'ΞΊΞ±α½Ά', 'θωμαστά', 'Ο„α½°', 'ΞΌα½²Ξ½', 'Ἕλλησι', 'Ο„α½°', 'Ξ΄α½²', 'βαρβάροισι', 'ἀποδΡχθέντα', 'αΌ€ΞΊΞ»α½³Ξ±', 'γένηται', 'Ο„α½±', 'τΡ', 'ἄλλα', 'ΞΊΞ±α½Ά', 'δι', 'αΌ£Ξ½', 'Ξ±αΌ°Ο„α½·Ξ·Ξ½', 'ἐπολέμησαν', 'ἀλλὡλοισι', '.'] W/O STOPS ['Ἡροδότου', 'Ξ˜ΞΏΟ…Οα½·ΞΏΟ…', 'ἱστορίης', 'ἀπόδΡξις', 'αΌ₯δΡ', 'μὡτΡ', 'γΡνόμΡνα', 'ἀνθρώπων', 'χρόνῳ', 'ἐξίτηλα', 'γένηται', 'μὡτΡ', 'ἔργα', 'μΡγάλα', 'θωμαστά', 'Ἕλλησι', 'βαρβάροισι', 'ἀποδΡχθέντα', 'αΌ€ΞΊΞ»α½³Ξ±', 'γένηται', 'ἄλλα', 'δι', 'αΌ£Ξ½', 'Ξ±αΌ°Ο„α½·Ξ·Ξ½', 'ἐπολέμησαν', 'ἀλλὡλοισι', '.'] W/ STOPS ['ΠΡρσέων', 'ΞΌα½³Ξ½', 'Ξ½Ο…Ξ½', 'ΞΏαΌ±', 'Ξ»α½ΉΞ³ΞΉΞΏΞΉ', 'Φοίνικας', 'Ξ±αΌ°Ο„α½·ΞΏΟ…Ο‚', 'φασὢ', 'γΡνέσθαι', 'Ο„αΏ†Ο‚', 'διαφορῆς', 'τούτους', 'γάρ', 'αΌ€Ο€α½Έ', 'Ο„αΏ†Ο‚', 'αΌ˜ΟΟ…ΞΈΟαΏ†Ο‚', 'καλΡομένης', 'θαλάσσης', 'ἀπικομένους', 'ἐπὢ', 'τὡνδΡ', 'τὴν', 'θάλασσαν', 'ΞΊΞ±α½Ά', 'οἰκὡσαντας', 'τοῦτον', 'Ο„α½ΈΞ½', 'χῢρον', 'Ο„α½ΈΞ½', 'ΞΊΞ±α½Ά', 'Ξ½αΏ¦Ξ½', 'οἰκέουσι', 'αὐτίκα', 'ναυτιλίῃσι', 'μακρῇσι', 'ἐπιθέσθαι', 'ἀπαγινέοντας', 'Ξ΄α½²', 'φορτία', 'Αἰγύπτιά', 'τΡ', 'ΞΊΞ±α½Ά', 'αΌˆΟƒΟƒα½»ΟΞΉΞ±', 'Ο„αΏ‡', 'τΡ', 'ἄλλῃ', 'χώρῃ', 'ἐσαπικνέΡσθαι', 'ΞΊΞ±α½Ά', 'δὴ', 'ΞΊΞ±α½Ά', 'ἐς', 'αΌŒΟΞ³ΞΏΟ‚', 'Ο„α½Έ', 'Ξ΄α½²', 'αΌŒΟΞ³ΞΏΟ‚', 'τοῦτον', 'Ο„α½ΈΞ½', 'χρόνον', 'προΡῖχΡ', 'ἅπασι', 'Ο„αΏΆΞ½', 'ἐν', 'Ο„αΏ‡', 'Ξ½αΏ¦Ξ½', 'Ἑλλάδι', 'καλΡομένῃ', 'χώρῃ', '.'] W/O STOPS ['ΠΡρσέων', 'Ξ½Ο…Ξ½', 'Ξ»α½ΉΞ³ΞΉΞΏΞΉ', 'Φοίνικας', 'Ξ±αΌ°Ο„α½·ΞΏΟ…Ο‚', 'φασὢ', 'γΡνέσθαι', 'διαφορῆς', 'τούτους', 'αΌ˜ΟΟ…ΞΈΟαΏ†Ο‚', 'καλΡομένης', 'θαλάσσης', 'ἀπικομένους', 'τὡνδΡ', 'θάλασσαν', 'οἰκὡσαντας', 'τοῦτον', 'χῢρον', 'Ξ½αΏ¦Ξ½', 'οἰκέουσι', 'αὐτίκα', 'ναυτιλίῃσι', 'μακρῇσι', 'ἐπιθέσθαι', 'ἀπαγινέοντας', 'φορτία', 'Αἰγύπτιά', 'αΌˆΟƒΟƒα½»ΟΞΉΞ±', 'ἄλλῃ', 'χώρῃ', 'ἐσαπικνέΡσθαι', 'ἐς', 'αΌŒΟΞ³ΞΏΟ‚', 'αΌŒΟΞ³ΞΏΟ‚', 'τοῦτον', 'χρόνον', 'προΡῖχΡ', 'ἅπασι', 'Ξ½αΏ¦Ξ½', 'Ἑλλάδι', 'καλΡομένῃ', 'χώρῃ', '.']
MIT
public_talks/2015_11_17_nyu/3 Normalization, tokenization, tagging.ipynb
kylepjohnson/ipython_notebooks
Concordance
from cltk.utils.philology import Philology p = Philology() herod_fp = '/Users/kyle/cltk_data/greek/text/tlg/plaintext/TLG0016.txt' p.write_concordance_from_file(herod_fp, 'kyle_herod')
INFO:CLTK:Wrote concordance to '/Users/kyle/cltk_data/user_data/concordance_kyle_herod.txt'.
MIT
public_talks/2015_11_17_nyu/3 Normalization, tokenization, tagging.ipynb
kylepjohnson/ipython_notebooks
Word count
from nltk.text import Text words = nltk_tokenize_words(herod_clean) print(words[:15]) t = Text(words) vocabulary_count = t.vocab() vocabulary_count['ἱστορίης'] vocabulary_count['μὡτΡ'] vocabulary_count['ἀνθρώπων']
_____no_output_____
MIT
public_talks/2015_11_17_nyu/3 Normalization, tokenization, tagging.ipynb
kylepjohnson/ipython_notebooks
Word frequency
from cltk.utils.frequency import Frequency freq = Frequency() herod_frequencies = freq.counter_from_str(herod_clean) herod_frequencies.most_common()
_____no_output_____
MIT
public_talks/2015_11_17_nyu/3 Normalization, tokenization, tagging.ipynb
kylepjohnson/ipython_notebooks
Fuel type
# remove the 'Type de carburant:' string from the fuel_type feature
# (use replace() here: lstrip() strips a set of characters, not a prefix string)
df.fuel_type = df.fuel_type.map(lambda x: x.replace('Type de carburant:', ''))
_____no_output_____
MIT
cars-price-dataset.ipynb
jaselnik/Car-Price-Predictor-Django
Mark & Model
# remove the 'Marque:' string from the mark feature df['mark'] = df['mark'].map(lambda x: x.replace('Marque:', '')) df = df[df.mark != '-'] # remove the 'Modèle:' string from model feature df['model'] = df['model'].map(lambda x: x.replace('Modèle:', ''))
_____no_output_____
MIT
cars-price-dataset.ipynb
jaselnik/Car-Price-Predictor-Django
fiscal power
For the fiscal power, we can see that there are exactly 5728 rows where it is not announced, so we will fill them with the mean of the remaining values, since it is an important feature for car price prediction and we cannot drop it.
# remove the 'Puissance fiscale:' prefix and the ' CV' suffix from the fiscal_power feature
# (use replace() here: lstrip()/rstrip() strip sets of characters, not substrings)
df.fiscal_power = df.fiscal_power.map(lambda x: x.replace('Puissance fiscale:', '').replace('Plus de ', '').replace(' CV', ''))
# replace the '-' placeholder with 0 so the column can be converted
df.fiscal_power = df.fiscal_power.str.replace("-", "0")
# convert all fiscal_power values to numerical ones
df.fiscal_power = pd.to_numeric(df.fiscal_power, errors='coerce', downcast='integer')
# now we need to fill those 0 placeholders with the mean of the announced (non-zero) fiscal_power values
known_mean = df.fiscal_power[df.fiscal_power != 0].mean()
df.fiscal_power = df.fiscal_power.map(lambda x: known_mean if x == 0 else x)
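A small sanity check, assuming the df DataFrame from the cells above, to confirm that the placeholder zeros were replaced and the column is numeric:

# no zero placeholders should remain, and the dtype should be numeric
print((df.fiscal_power == 0).sum())
print(df.fiscal_power.dtype)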
_____no_output_____
MIT
cars-price-dataset.ipynb
jaselnik/Car-Price-Predictor-Django
fuel type
# remove those lines having the fuel_type not set df = df[df.fuel_type != '-']
_____no_output_____
MIT
cars-price-dataset.ipynb
jaselnik/Car-Price-Predictor-Django
drop unwanted columns
df = df.drop(columns=['sector', 'type']) df = df[['price', 'year_model', 'mileage', 'fiscal_power', 'fuel_type', 'mark']] df.to_csv('data/car_dataset.csv') df.head() from car_price.wsgi import application from api.models import Car for x in df.values[5598:]: car = Car( price=x[0], year_model=x[1], mileage=x[2], fiscal_power=x[3], fuel_type=x[4], mark=x[5] ) car.save() Car.objects.all().count() df.shape
_____no_output_____
MIT
cars-price-dataset.ipynb
jaselnik/Car-Price-Predictor-Django
Credit Risk Classification Credit risk poses a classification problem that's inherently imbalanced. This is because healthy loans easily outnumber risky loans. In this Challenge, you'll use various techniques to train and evaluate models with imbalanced classes. You'll use a dataset of historical lending activity from a peer-to-peer lending services company to build a model that can identify the creditworthiness of borrowers. Instructions: This challenge consists of the following subsections: * Split the Data into Training and Testing Sets * Create a Logistic Regression Model with the Original Data * Predict a Logistic Regression Model with Resampled Training Data Split the Data into Training and Testing Sets Open the starter code notebook and then use it to complete the following steps. 1. Read the `lending_data.csv` data from the `Resources` folder into a Pandas DataFrame. 2. Create the labels set (`y`) from the "loan_status" column, and then create the features (`X`) DataFrame from the remaining columns. > **Note** A value of `0` in the "loan_status" column means that the loan is healthy. A value of `1` means that the loan has a high risk of defaulting. 3. Check the balance of the labels variable (`y`) by using the `value_counts` function. 4. Split the data into training and testing datasets by using `train_test_split`. Create a Logistic Regression Model with the Original Data Employ your knowledge of logistic regression to complete the following steps: 1. Fit a logistic regression model by using the training data (`X_train` and `y_train`). 2. Save the predictions on the testing data labels by using the testing feature data (`X_test`) and the fitted model. 3. Evaluate the model's performance by doing the following: * Calculate the accuracy score of the model. * Generate a confusion matrix. * Print the classification report. 4. Answer the following question: How well does the logistic regression model predict both the `0` (healthy loan) and `1` (high-risk loan) labels? Predict a Logistic Regression Model with Resampled Training Data Did you notice the small number of high-risk loan labels? Perhaps a model that uses resampled data will perform better. You'll thus resample the training data and then reevaluate the model. Specifically, you'll use `RandomOverSampler`. To do so, complete the following steps: 1. Use the `RandomOverSampler` module from the imbalanced-learn library to resample the data. Be sure to confirm that the labels have an equal number of data points. 2. Use the `LogisticRegression` classifier and the resampled data to fit the model and make predictions. 3. Evaluate the model's performance by doing the following: * Calculate the accuracy score of the model. * Generate a confusion matrix. * Print the classification report. 4. Answer the following question: How well does the logistic regression model, fit with oversampled data, predict both the `0` (healthy loan) and `1` (high-risk loan) labels? Write a Credit Risk Analysis Report For this section, you'll write a brief report that includes a summary and an analysis of the performance of both machine learning models that you used in this challenge. You should write this report as the `README.md` file included in your GitHub repository. Structure your report by using the report template that `Starter_Code.zip` includes, and make sure that it contains the following: 1. An overview of the analysis: Explain the purpose of this analysis. 2. The results: Using bulleted lists, describe the balanced accuracy scores and the precision and recall scores of both machine learning models. 3. A summary: Summarize the results from the machine learning models. Compare the two versions of the dataset predictions. Include your recommendation for the model to use, if any, on the original vs. the resampled data. If you don't recommend either model, justify your reasoning.
# Import the modules import numpy as np import pandas as pd from pathlib import Path from sklearn.metrics import balanced_accuracy_score from sklearn.metrics import confusion_matrix from imblearn.metrics import classification_report_imbalanced import warnings warnings.filterwarnings('ignore')
_____no_output_____
MIT
Starter_Code/credit_risk_resampling.ipynb
LeoHarada/Challenge_12
--- Split the Data into Training and Testing Sets Step 1: Read the `lending_data.csv` data from the `Resources` folder into a Pandas DataFrame.
# Read the CSV file from the Resources folder into a Pandas DataFrame lending_data_df = pd.read_csv(Path("../Starter_Code/Resources/lending_data.csv")) # Review the DataFrame display(lending_data_df.head())
_____no_output_____
MIT
Starter_Code/credit_risk_resampling.ipynb
LeoHarada/Challenge_12
Step 2: Create the labels set (`y`) from the "loan_status" column, and then create the features (`X`) DataFrame from the remaining columns.
# Separate the data into labels and features # Separate the y variable, the labels y = lending_data_df["loan_status"] # Separate the X variable, the features X = lending_data_df.drop(columns=["loan_status"]) # Review the y variable Series y.head() # Review the X variable DataFrame X.head()
_____no_output_____
MIT
Starter_Code/credit_risk_resampling.ipynb
LeoHarada/Challenge_12
Step 3: Check the balance of the labels variable (`y`) by using the `value_counts` function.
# Check the balance of our target values y.value_counts()
_____no_output_____
MIT
Starter_Code/credit_risk_resampling.ipynb
LeoHarada/Challenge_12
Step 4: Split the data into training and testing datasets by using `train_test_split`.
# Import the train_test_learn module from sklearn.model_selection import train_test_split # Split the data using train_test_split # Assign a random_state of 1 to the function X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
_____no_output_____
MIT
Starter_Code/credit_risk_resampling.ipynb
LeoHarada/Challenge_12
--- Create a Logistic Regression Model with the Original Data Step 1: Fit a logistic regression model by using the training data (`X_train` and `y_train`).
# Import the LogisticRegression module from SKLearn from sklearn.linear_model import LogisticRegression # Instantiate the Logistic Regression model # Assign a random_state parameter of 1 to the model model = LogisticRegression(random_state=1) # Fit the model using training data model.fit(X_train, y_train)
_____no_output_____
MIT
Starter_Code/credit_risk_resampling.ipynb
LeoHarada/Challenge_12
Step 2: Save the predictions on the testing data labels by using the testing feature data (`X_test`) and the fitted model.
# Make a prediction using the testing data y_pred = model.predict(X_test) y_pred
_____no_output_____
MIT
Starter_Code/credit_risk_resampling.ipynb
LeoHarada/Challenge_12
Step 3: Evaluate the model's performance by doing the following: * Calculate the accuracy score of the model. * Generate a confusion matrix. * Print the classification report.
# Print the balanced_accuracy score of the model BAS = balanced_accuracy_score(y_test, y_pred) print(BAS) # Generate a confusion matrix for the model print(confusion_matrix(y_test, y_pred)) # Print the classification report for the model print(classification_report_imbalanced(y_test, y_pred))
pre rec spe f1 geo iba sup 0 1.00 0.99 0.91 1.00 0.95 0.91 18765 1 0.85 0.91 0.99 0.88 0.95 0.90 619 avg / total 0.99 0.99 0.91 0.99 0.95 0.91 19384
MIT
Starter_Code/credit_risk_resampling.ipynb
LeoHarada/Challenge_12
Step 4: Answer the following question. **Question:** How well does the logistic regression model predict both the `0` (healthy loan) and `1` (high-risk loan) labels? **Answer:** The regression model predicts both healthy loans and high-risk loans accurately for the most part. We have an average F1 score of 99%, the summary statistic that combines the precision and recall of the data. However, there is some room for improvement for high-risk loans in terms of PPV (positive predictive value) and recall. --- Predict a Logistic Regression Model with Resampled Training Data Step 1: Use the `RandomOverSampler` module from the imbalanced-learn library to resample the data. Be sure to confirm that the labels have an equal number of data points.
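For reference, the F1 score cited in the Step 4 answer above is the harmonic mean of precision and recall:

$F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$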
# Import the RandomOverSampler module from imbalanced-learn from imblearn.over_sampling import RandomOverSampler # Instantiate the random oversampler model # Assign a random_state parameter of 1 to the model random_oversampler = RandomOverSampler(random_state=1) # Fit the original training data to the random_oversampler model X_resampled, y_resampled = random_oversampler.fit_resample(X_train, y_train) # Count the distinct values of the resampled labels data y_resampled.value_counts()
_____no_output_____
MIT
Starter_Code/credit_risk_resampling.ipynb
LeoHarada/Challenge_12
Step 2: Use the `LogisticRegression` classifier and the resampled data to fit the model and make predictions.
# Instantiate the Logistic Regression model # Assign a random_state parameter of 1 to the model resampled_model = LogisticRegression(random_state=1) # Fit the model using the resampled training data resampled_model.fit(X_resampled, y_resampled) # Make a prediction using the testing data y_pred = resampled_model.predict(X_test)
_____no_output_____
MIT
Starter_Code/credit_risk_resampling.ipynb
LeoHarada/Challenge_12
Step 3: Evaluate the model's performance by doing the following: * Calculate the accuracy score of the model. * Generate a confusion matrix. * Print the classification report.
# Print the balanced_accuracy score of the model print(balanced_accuracy_score(y_test, y_pred)) # Generate a confusion matrix for the model confusion_matrix(y_test, y_pred) # Print the classification report for the model print(classification_report_imbalanced(y_test, y_pred))
pre rec spe f1 geo iba sup 0 1.00 0.99 0.99 1.00 0.99 0.99 18765 1 0.84 0.99 0.99 0.91 0.99 0.99 619 avg / total 0.99 0.99 0.99 0.99 0.99 0.99 19384
MIT
Starter_Code/credit_risk_resampling.ipynb
LeoHarada/Challenge_12
Average
from sklearn.metrics import accuracy_score, roc_auc_score from sklearn.preprocessing import StandardScaler oof_preds = np.mean(df_norm[all_feat_names].to_numpy(),1) oof_gts = df_norm['MGMT_value'] cv_preds = [np.mean(df_norm[df_norm.fold==fold][all_feat_names].to_numpy(),1) for fold in range(5)] cv_gts = [df_norm[df_norm.fold==fold]['MGMT_value'] for fold in range(5)] oof_acc = accuracy_score((np.array(oof_gts) > 0.5).flatten(), (np.array(oof_preds) > 0.5).flatten()) oof_auc = roc_auc_score(np.array(oof_gts).flatten().astype(np.float32), np.array(oof_preds).flatten()) cv_accs = np.array([accuracy_score((np.array(cv_gt) > 0.5).flatten(), (np.array(cv_pred) > 0.5).flatten()) for cv_gt,cv_pred in zip(cv_gts, cv_preds)]) cv_aucs = np.array([roc_auc_score(np.array(cv_gt).flatten().astype(np.float32), np.array(cv_pred).flatten()) for cv_gt,cv_pred in zip(cv_gts, cv_preds)]) print(f'OOF acc {oof_acc}, OOF auc {oof_auc}, CV AUC {np.mean(cv_aucs)} (std {np.std(cv_aucs)})') plt.close('all') df_plot = pd.DataFrame({'Pred-MGMT': oof_preds, 'GT-MGMT': oof_gts}) sns.histplot(x='Pred-MGMT', hue='GT-MGMT', data=df_plot) plt.title(f'Average of all models # CV AUC = {np.mean(cv_aucs):.3f} (std: {np.std(cv_aucs):.3f}), Acc. = {np.mean(cv_accs):.3f}') plt.show() selected_feats = [ 'feat_lasso', 'feat_ridge', 'feat_linreg', 'efficientnetv2_l_rocstar', 'resnet101_rocstar', 'densenet169_rocstar', ] oof_acc = accuracy_score((np.array(oof_gts) > 0.5).flatten(), (np.mean(df_norm[selected_feats].to_numpy(),1) > 0.5).flatten()) oof_auc = roc_auc_score(np.array(oof_gts).flatten().astype(np.float32), np.mean(df_norm[selected_feats].to_numpy(),1).flatten()) cv_preds = [np.mean(df_norm[df_norm.fold==fold][selected_feats].to_numpy(),1) for fold in range(5)] cv_gts = [df_norm[df_norm.fold==fold]['MGMT_value'] for fold in range(5)] cv_accs = np.array([accuracy_score((np.array(cv_gt) > 0.5).flatten(), (np.array(cv_pred) > 0.5).flatten()) for cv_gt,cv_pred in zip(cv_gts, cv_preds)]) cv_aucs = np.array([roc_auc_score(np.array(cv_gt).flatten().astype(np.float32), np.array(cv_pred).flatten()) for cv_gt,cv_pred in zip(cv_gts, cv_preds)]) print(f'OOF acc {oof_acc}, OOF auc {oof_auc}, CV AUC {np.mean(cv_aucs)} (std {np.std(cv_aucs)})') plt.close('all') df_plot = pd.DataFrame({'Pred-MGMT': oof_preds, 'GT-MGMT': oof_gts}) sns.histplot(x='Pred-MGMT', hue='GT-MGMT', data=df_plot) plt.title(f'Average of all models # CV AUC = {np.mean(cv_aucs):.3f} (std: {np.std(cv_aucs):.3f}), Acc. = {np.mean(cv_accs):.3f}') plt.show()
OOF acc 0.5944540727902946, OOF auc 0.6514516827964754, CV AUC 0.6504285580435163 (std 0.02232524533384981)
MIT
notebooks/7-Ensemble.ipynb
jpjuvo/RSNA-MICCAI-Brain-Tumor-Classification
2nd level models
import xgboost as xgb def get_data(fold, features): df = df_norm.dropna(inplace=False) scaler = StandardScaler() df_train = df[df.fold != fold] df_val = df[df.fold == fold] if len(df_val) == 0: df_val = df[df.fold == 0] # shuffle train df_train = df_train.sample(frac=1) y_train = df_train.MGMT_value.to_numpy().reshape((-1,1)).astype(np.float32) y_val = df_val.MGMT_value.to_numpy().reshape((-1,1)).astype(np.float32) X_train = df_train[features].to_numpy().astype(np.float32) X_val = df_val[features].to_numpy().astype(np.float32) scaler.fit(X_train) X_train = scaler.transform(X_train) X_val = scaler.transform(X_val) return X_train, y_train, X_val, y_val, scaler, (df_train.index.values).flatten(), (df_val.index.values).flatten() def measure_cv_score(parameters, verbose=False, train_one_model=False, plot=False, return_oof_preds=False): val_preds = [] val_gts = [] val_aucs = [] val_accs = [] val_index_values = [] for fold in range(5): if train_one_model: fold = -1 X_train, y_train, X_val, y_val, scaler, train_index, val_index = get_data(fold, features=parameters['features']) val_index_values = val_index_values + list(val_index) if parameters['model_type'] == 'xgb': model = xgb.XGBRegressor( n_estimators=parameters['n_estimators'], max_depth=parameters['max_depth'], eta=parameters['eta'], subsample=parameters['subsample'], colsample_bytree=parameters['colsample_bytree'], gamma=parameters['gamma'] ) elif parameters['model_type'] == 'linreg': model = linear_model.LinearRegression() elif parameters['model_type'] == 'ridge': model = linear_model.Ridge(parameters['alpha']) elif parameters['model_type'] == 'bayesian': model = linear_model.BayesianRidge( n_iter = parameters['n_iter'], lambda_1 = parameters['lambda_1'], lambda_2 = parameters['lambda_2'], alpha_1 = parameters['alpha_1'], alpha_2 = parameters['alpha_2'], ) elif parameters['model_type'] == 'logreg': model = linear_model.LogisticRegression() elif parameters['model_type'] == 'lassolarsic': model = linear_model.LassoLarsIC( max_iter = parameters['max_iter'], eps = parameters['eps'] ) elif parameters['model_type'] == 'perceptron': model = linear_model.Perceptron( ) else: raise NotImplementedError model.fit(X_train, y_train.ravel()) if train_one_model: return model, scaler val_pred = model.predict(X_val) val_preds += list(val_pred) val_gts += list(y_val) val_aucs.append(roc_auc_score(np.array(y_val).flatten().astype(np.float32), np.array(val_pred).flatten())) val_accs.append(accuracy_score((np.array(y_val) > 0.5).flatten(), (np.array(val_pred) > 0.5).flatten())) if return_oof_preds: return np.array(val_preds).flatten(), np.array(val_gts).flatten(), val_index_values oof_acc = accuracy_score((np.array(val_gts) > 0.5).flatten(), (np.array(val_preds) > 0.5).flatten()) oof_auc = roc_auc_score(np.array(val_gts).flatten().astype(np.float32), np.array(val_preds).flatten()) auc_std = np.std(np.array(val_aucs)) if plot: df_plot = pd.DataFrame({'Pred-MGMT': np.array(val_preds).flatten(), 'GT-MGMT': np.array(val_gts).flatten()}) sns.histplot(x='Pred-MGMT', hue='GT-MGMT', data=df_plot) plt.title(f'{parameters["model_type"]} # CV AUC = {oof_auc:.3f} (std {auc_std:.3f}), Acc. = {oof_acc:.3f}') plt.show() if verbose: print(f'CV AUC = {oof_auc} (std {auc_std}), Acc. 
= {oof_acc}, aucs: {val_aucs}, accs: {val_accs}') # optimize lower limit of the (2x std range around mean) # This way, we choose the model which ranks well and performs ~equally well on all folds return float(oof_auc) - auc_std default_parameters = { 'model_type': 'linreg', 'n_estimators': 100, 'max_depth' : 3, 'eta': 0.1, 'subsample': 0.7, 'colsample_bytree' : 0.8, 'gamma' : 1.0, 'alpha' : 1.0, 'n_iter':300, 'lambda_1': 1e-6, # bayesian 'lambda_2':1e-6, # bayesian 'alpha_1': 1e-6, # bayesian 'alpha_2': 1e-6, # bayesian 'max_iter': 3, #lasso 'eps': 1e-6, #lasso 'features' : all_feat_names } measure_cv_score(default_parameters, verbose=True) def feat_selection_linreg_objective(trial): kept_feats = [] for i in range(len(all_feat_names)): var = trial.suggest_int(all_feat_names[i], 0,1) if var == 1: kept_feats.append(all_feat_names[i]) parameters = default_parameters.copy() parameters['features'] = kept_feats return 1 - measure_cv_score(parameters, verbose=False) if 1: study = optuna.create_study() study.optimize(feat_selection_linreg_objective, n_trials=20, show_progress_bar=True) print(study.best_value, study.best_params) study.best_params pruned_features = default_parameters.copy() pruned_features['features'] = ['feat_lasso', 'feat_linreg', 'feat_ridge', 'efficientnetv2_l_rocstar'] measure_cv_score(pruned_features, verbose=True) random.randint(0,1)
_____no_output_____
MIT
notebooks/7-Ensemble.ipynb
jpjuvo/RSNA-MICCAI-Brain-Tumor-Classification
Introduction to \LaTeX Math Mode Jupyter notebooks integrate the MathJax Javascript library in order to render mathematical formulas and symbols in the same way as one would in \LaTeX (often used to typeset textbooks, research papers, or other technical documents). First, we will take a look at a couple of rendered expressions and the corresponding way to render these in your notebooks, then follow up with a few exercises which will help you become more familiar with these tools and their corresponding documentation. For example, a common expression used in neural networks is the _weighted sum_ rendered as so: $y=\sum_{i=1}^{N}{w_i x_i + b}$ where the variable $y$ is calculating the sum of the elements for a vector, $x_i$, each multiplied by a corresponding weight, $w_i$. An additional scalar term, $b$, known as the _bias_ is added to the overall result as well. This expression is more commonly written as: $y=\boldsymbol{w}\boldsymbol{x}+b$ where $\boldsymbol{w}$ and $\boldsymbol{x}$ are both vectors of length $N$. Note the subtle difference in the notation where __ _vectors_ __ are in bold italic, while _scalars_ are only in italic. These kinds of expressions can be rendered in your notebook by creating _markdown_ cells and populating them with the proper expressions. Normally, a cell in a Jupyter notebook is for code that you would like to hand off to the interpreter, but there is a drop-down menu at the top of the current notebook which can change the mode of the current cell to either _code_, _markdown_, or _raw_. We will rarely use _raw_ cells, but the _code_ and _markdown_ types are both quite useful. To render both of the two expressions above, you will need to create a markdown cell, and then enter the following code into the cell:

```
$y = \sum_{i=1}^{N}{w_i x_i + b}$
$y = \boldsymbol{w}\boldsymbol{x}+b$
```

You should notice first that each expression is surrounded by a set of \$ symbols. Any text that you type between two \$ symbols is rendered using the \LaTeX mathematics mode. \LaTeX is a complete document preparation system that we will learn more about later in the semester. For now, the important thing to understand is that it has a special mode and markup language used to render mathematical expressions, and this markup language is supported in _markdown_ cells in Jupyter notebooks. Second, you can see that special mathematical symbols such as a summation ($\sum$) can be rendered using the "sum" escape sequence (\\sum) where \\ is the math mode escape character. There are numerous different escape sequences that can be used in math mode, each representing a common mathematical symbol or operation. Third, you can see that symbols can be attached to other symbols for rendering as sub- or super-scripts by using the _ and ^ operators, respectively. You can also use curly-braces (liberally) to group symbols together into these sub- or super-scripts and the curly-braces, themselves, will not be rendered in the equation. These delimiters only help the math mode interpreter understand which symbols you would like grouped together, and won't be displayed unless escaped. Finally, it is clear that many symbols are rendered in a way that makes intuitive sense. For example, the bias term, $b$, is simply provided with no markup. Any text __not__ escaped or otherwise marked up will be rendered as a standard scalar is rendered (italic). However, the `\text{}` sequence can be used to render standard text when required.
For example:

`$a\ \text{plus}\ b$`

$a\ \text{plus}\ b$

Notice also how a backslash followed by a space will add a space between the words. Normally, when two scalars are presented, it is assumed they are being multiplied together, and are placed closely to represent this fact. However, since explicit text is being rendered here, the escaped spaces are needed to keep the words separated.

Here are a few other examples:

`$\boldsymbol{A}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^\top$`

$\boldsymbol{A}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^\top$

`$\alpha \beta \Theta \Omega$`

$\alpha \beta \Theta \Omega$

`$\int_{-\pi}^{\pi} \sin{x}\ dx$`

$\int_{-\pi}^{\pi} \sin{x}\ dx$

`$\prod_{i=1}^{N}{(x_i+y_i)^2}$`

$\prod_{i=1}^{N}{(x_i+y_i)^2}$

`$f(x)=\frac{1}{x^2}$`

$f(x)=\frac{1}{x^2}$

`$\frac{d}{dx}f(x) = -\frac{2}{x^3}$`

$\frac{d}{dx}f(x) = -\frac{2}{x^3}$

Let's make a simple table, and then also show the markdown source for the table...

| One | Two | Three | Four |
| --- | --- | --- | --- |
| 10% | Something | Else | 40% |
| 90% | To | Do | 50% |
| One | Two | Three | Four |
| --- | --- | --- | --- |
| 10% | Something | Else | 40% |
| 90% | To | Do | 50% |
_____no_output_____
MIT
Introductions/LaTeX and Markdown Intro.ipynb
mtr3t/notebook-examples
Interacting with models November 2014, by Max Zwiessele with edits by James Hensman. The GPy model class has a set of features which are designed to make it simple to explore the parameter space of the model. By default, the scipy optimisers are used to fit GPy models (via model.optimize()), for which we provide mechanisms for 'free' optimisation: GPy can ensure that naturally positive parameters (such as variances) remain positive. But these mechanisms are much more powerful than simple reparameterisation, as we shall see. Throughout this tutorial we'll use a sparse GP regression model as an example. This example can be found in GPy.examples.regression. All of the examples included in GPy return an instance of a model class, and therefore they can be called in the following way:
import GPy
import numpy as np

m = GPy.examples.regression.sparse_GP_regression_1D(plot=False, optimize=False)
_____no_output_____
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
Examining the model using print To see the current state of the model parameters and the model's (marginal) likelihood, just print the model: `print m`. The first thing displayed on the screen is the log-likelihood value of the model with its current parameters. Below the log-likelihood, a table with all the model's parameters is shown. For each parameter, the table contains the name of the parameter, the current value, and, where defined, the constraints, ties, and prior distributions associated with it.
m
_____no_output_____
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
In this case the kernel parameters (`rbf.variance`, `rbf.lengthscale`) as well as the likelihood noise parameter (`Gaussian_noise.variance`) are constrained to be positive, while the inducing inputs have no constraints associated. Also, there are no ties or priors defined. You can also print all subparts of the model by printing the subcomponents individually; this will print the details of this particular parameter handle:
m.rbf
_____no_output_____
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
When you want to get a closer look into multivalue parameters, print them directly:
m.inducing_inputs m.inducing_inputs[0] = 1
_____no_output_____
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
Interacting with Parameters: The preferred way of interacting with parameters is to act on the parameter handle itself. Interacting with parameter handles is simple. The names printed by `print m` are accessible interactively and programmatically. For example, try to set the kernel's `lengthscale` to 0.2 and print the result:
m.rbf.lengthscale = 0.2 print m
Name : sparse_gp Objective : 563.178096129 Number of Parameters : 8 Number of Optimization Parameters : 8 Updates : True Parameters: sparse_gp.  | value | constraints | priors inducing_inputs  | (5, 1) | | rbf.variance  | 1.0 | +ve | rbf.lengthscale  | 0.2 | +ve | Gaussian_noise.variance | 1.0 | +ve |
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
This will already have updated the model's inner state: note how the log-likelihood has changed. You can immediately plot the model or see the changes in the posterior (`m.posterior`) of the model. Regular expressions The model's parameters can also be accessed through regular expressions, by 'indexing' the model with a regular expression matching the parameter name. Through indexing by regular expression, you can only retrieve leaves of the hierarchy, and you can retrieve the values matched by calling `values()` on the returned object.
print m['.*var'] #print "variances as a np.array:", m['.*var'].values() #print "np.array of rbf matches: ", m['.*rbf'].values()
index | sparse_gp.rbf.variance | constraints | priors [0]  | 1.00000000 | +ve | ----- | sparse_gp.Gaussian_noise.variance | ----------- | ------ [0]  | 1.00000000 | +ve |
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
Parameters can be set by regular expression as well. Here are a few examples of how to set parameters by regular expression. Note that each time the values are set, computations are done internally to compute the log likelihood of the model.
m['.*var'] = 2. print m m['.*var'] = [2., 3.] print m
Name : sparse_gp Objective : 680.058219518 Number of Parameters : 8 Number of Optimization Parameters : 8 Updates : True Parameters: sparse_gp.  | value | constraints | priors inducing_inputs  | (5, 1) | | rbf.variance  | 2.0 | +ve | rbf.lengthscale  | 0.2 | +ve | Gaussian_noise.variance | 2.0 | +ve | Name : sparse_gp Objective : 705.17934799 Number of Parameters : 8 Number of Optimization Parameters : 8 Updates : True Parameters: sparse_gp.  | value | constraints | priors inducing_inputs  | (5, 1) | | rbf.variance  | 2.0 | +ve | rbf.lengthscale  | 0.2 | +ve | Gaussian_noise.variance | 3.0 | +ve |
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
A handy trick for seeing all of the parameters of the model at once is to regular-expression match every variable:
print m['']
index | sparse_gp.inducing_inputs | constraints | priors [0 0] | 1.00000000 | | [1 0] | -1.51676820 | | [2 0] | -2.23387110 | | [3 0] | 0.91816225 | | [4 0] | 1.33087762 | | ----- | sparse_gp.rbf.variance | ----------- | ------ [0]  | 2.00000000 | +ve | ----- | sparse_gp.rbf.lengthscale | ----------- | ------ [0]  | 0.20000000 | +ve | ----- | sparse_gp.Gaussian_noise.variance | ----------- | ------ [0]  | 3.00000000 | +ve |
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
Setting and fetching parameters: parameter_array Another way to interact with the model's parameters is through the parameter_array. The parameter array holds all the parameters of the model in one place and is editable. It can be accessed by indexing the model; for example, you can set all the parameters through this mechanism:
new_params = np.r_[[-4,-2,0,2,4], [.1,2], [.7]] print new_params m[:] = new_params print m
[-4. -2. 0. 2. 4. 0.1 2. 0.7] Name : sparse_gp Objective : 322.428807303 Number of Parameters : 8 Number of Optimization Parameters : 8 Updates : True Parameters: sparse_gp.  | value | constraints | priors inducing_inputs  | (5, 1) | | rbf.variance  | 0.1 | +ve | rbf.lengthscale  | 2.0 | +ve | Gaussian_noise.variance | 0.7 | +ve |
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
Parameters themselves (leaves of the hierarchy) can be indexed and used the same way as numpy arrays. First, let us set a slice of the inducing_inputs:
m.inducing_inputs[2:, 0] = [1,3,5] print m.inducing_inputs
index | sparse_gp.inducing_inputs | constraints | priors [0 0] | -4.00000000 | | [1 0] | -2.00000000 | | [2 0] | 1.00000000 | | [3 0] | 3.00000000 | | [4 0] | 5.00000000 | |
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
Or you can use the parameters as normal numpy arrays for calculations:
precision = 1./m.Gaussian_noise.variance print precision
[ 1.42857143]
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
Getting the model parameters' gradients The gradients of a model can shed light on the (possibly hard) optimization process. The gradients of each parameter handle can be accessed through its gradient field:
print "all gradients of the model:\n", m.gradient print "\n gradients of the rbf kernel:\n", m.rbf.gradient
all gradients of the model: [ 2.1054468 3.67055686 1.28382016 -0.36934978 -0.34404866 99.49876932 -12.83697274 -268.02492615] gradients of the rbf kernel: [ 99.49876932 -12.83697274]
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
If we optimize the model, the gradients should be (close to) zero:
m.optimize() print m.gradient
[ -4.62140715e-04 -2.13365576e-04 9.60255226e-05 4.82744982e-04 8.56445996e-05 -5.25465293e-06 -6.89058756e-06 -9.34850797e-02]
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
Adjusting the model's constraints When we initially called the example, it was optimized and hence the log-likelihood gradients were close to zero. However, since we have been changing the parameters, the gradients are far from zero now. Next we are going to show how to optimize the model while setting different restrictions on the parameters. Once a constraint has been set on a parameter, it is possible to remove it with the command unconstrain(), which can be called on any parameter handle of the model. The methods constrain() and unconstrain() return the indices which were actually unconstrained, relative to the parameter handle the method was called on. This is particularly handy for reporting which parameters were re-constrained when constraining a parameter that was already constrained:
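A minimal sketch of capturing that return value (the variable name `changed` is just illustrative):

```
# Per the description above, the constraint methods return the indices that were
# actually re-constrained, relative to the handle they were called on.
changed = m.rbf.variance.constrain_positive()
print(changed)
```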
m.rbf.variance.unconstrain() print m m.unconstrain() print m
Name : sparse_gp Objective : -613.999681976 Number of Parameters : 8 Number of Optimization Parameters : 8 Updates : True Parameters: sparse_gp.  | value | constraints | priors inducing_inputs  | (5, 1) | | rbf.variance  | 1.6069638252 | | rbf.lengthscale  | 2.56942983558 | | Gaussian_noise.variance | 0.00237759494452 | |
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
If you want to unconstrain only a specific constraint, you can call the respective method, such as `unconstrain_fixed()` (or `unfix()`) to only unfix fixed parameters:
m.inducing_inputs[0].fix() m.rbf.constrain_positive() print m m.unfix() print m
Name : sparse_gp Objective : -613.999681976 Number of Parameters : 8 Number of Optimization Parameters : 7 Updates : True Parameters: sparse_gp.  | value | constraints | priors inducing_inputs  | (5, 1) | {fixed} | rbf.variance  | 1.6069638252 | +ve | rbf.lengthscale  | 2.56942983558 | +ve | Gaussian_noise.variance | 0.00237759494452 | | Name : sparse_gp Objective : -613.999681976 Number of Parameters : 8 Number of Optimization Parameters : 8 Updates : True Parameters: sparse_gp.  | value | constraints | priors inducing_inputs  | (5, 1) | | rbf.variance  | 1.6069638252 | +ve | rbf.lengthscale  | 2.56942983558 | +ve | Gaussian_noise.variance | 0.00237759494452 | |
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
Tying Parameters Not yet implemented for GPy version 0.8.0. Optimizing the model Once we have finished defining the constraints, we can now optimize the model with the function optimize():
m.Gaussian_noise.constrain_positive() m.rbf.constrain_positive() m.optimize()
No handlers could be found for logger "rbf"
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
By default, GPy uses the lbfgsb optimizer. Some optional parameters are described here: * `optimizer`: which optimizer to use; currently there are lbfgsb, fmin_tnc, scg, simplex, or any unique identifier uniquely identifying an optimizer. Thus, you can say m.optimize('bfgs') for using the `lbfgsb` optimizer. * `messages`: whether the optimizer is verbose. Each optimizer has its own way of printing, so do not be confused by differing messages of different optimizers. * `max_iters`: maximum number of iterations to take. Some optimizers see iterations as function calls, others as iterations of the algorithm. Please look into scipy.optimize for more instructions if the number of iterations matters, so you can give the right parameters to optimize(). * `gtol`: only for some optimizers; determines the convergence criterion, as the tolerance of the gradient to finish the optimization. Plotting Many of GPy's models have built-in plot functionality. We distinguish between plotting the posterior of the function (`m.plot_f`) and plotting the posterior over predicted data values (`m.plot`). This becomes especially important for non-Gaussian likelihoods. Here we'll plot the sparse GP model we've been working with. For more information on the meaning of the plot, please refer to the accompanying `basic_gp_regression` and `sparse_gp` notebooks.
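A hedged sketch of combining the optimizer options described above (the specific values are illustrative only):

```
# Illustrative call only: choose an optimizer, print progress messages,
# and cap the number of iterations, as described in the option list above.
m.optimize(optimizer='lbfgsb', messages=True, max_iters=1000)
```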
fig = m.plot()
/home/nbuser/anaconda2_501/lib/python2.7/site-packages/matplotlib/figure.py:1999: UserWarning:This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
We can even change the backend for plotting and plot the model using a different backend.
GPy.plotting.change_plotting_library('plotly') fig = m.plot(plot_density=True) GPy.plotting.show(fig, filename='gpy_sparse_gp_example')
This is the format of your plot grid: [ (1,1) x1,y1 ] Aw, snap! We don't have an account for ''. Want to try again? You can authenticate with your email address or username. Sign in is not case sensitive. Don't have an account? plot.ly Questions? support@plot.ly
MIT
tests/GPy/models_basic.ipynb
gopala-kr/ds-notebooks
Partial Dependence Plots Sigurd Carlsen, Feb 2019. Holger Nahrstaedt, 2020. .. currentmodule:: skopt Plot objective now supports optional use of partial dependence as well as different methods of defining parameter values for dependency plots.
print(__doc__) import sys from skopt.plots import plot_objective from skopt import forest_minimize import numpy as np np.random.seed(123) import matplotlib.pyplot as plt
_____no_output_____
BSD-3-Clause
dev/notebooks/auto_examples/plots/partial-dependence-plot.ipynb
scikit-optimize/scikit-optimize.github.io
Objective function Plot objective now supports optional use of partial dependence as well as different methods of defining parameter values for dependency plots.
# Here we define a function that we evaluate. def funny_func(x): s = 0 for i in range(len(x)): s += (x[i] * i) ** 2 return s
_____no_output_____
BSD-3-Clause
dev/notebooks/auto_examples/plots/partial-dependence-plot.ipynb
scikit-optimize/scikit-optimize.github.io
Optimisation using decision trees We run forest_minimize on the function.
bounds = [(-1, 1.), ] * 3 n_calls = 150 result = forest_minimize(funny_func, bounds, n_calls=n_calls, base_estimator="ET", random_state=4)
_____no_output_____
BSD-3-Clause
dev/notebooks/auto_examples/plots/partial-dependence-plot.ipynb
scikit-optimize/scikit-optimize.github.io
Partial dependence plot Here we see an example of using partial dependence. Even when setting n_points all the way down to 10 from the default of 40, this method is still very slow. This is because partial dependence calculates 250 extra predictions for each point on the plots.
_ = plot_objective(result, n_points=10)
_____no_output_____
BSD-3-Clause
dev/notebooks/auto_examples/plots/partial-dependence-plot.ipynb
scikit-optimize/scikit-optimize.github.io
It is possible to change the location of the red dot, which normally shows the position of the found minimum. We can set it to 'expected_minimum', which is the minimum value of the surrogate function, obtained by a minimum search method.
_ = plot_objective(result, n_points=10, minimum='expected_minimum')
_____no_output_____
BSD-3-Clause
dev/notebooks/auto_examples/plots/partial-dependence-plot.ipynb
scikit-optimize/scikit-optimize.github.io
Plot without partial dependence Here we plot without partial dependence. We see that it is a lot faster. Also, the values for the other parameters are set to the default "result", which is the parameter set of the best observed value so far. In the case of funny_func this is close to 0 for all parameters.
_ = plot_objective(result, sample_source='result', n_points=10)
_____no_output_____
BSD-3-Clause
dev/notebooks/auto_examples/plots/partial-dependence-plot.ipynb
scikit-optimize/scikit-optimize.github.io
Modify the shown minimum Here we try setting the `minimum` parameter to something other than "result". First we try "expected_minimum", which is the set of parameters that gives the minimum value of the surrogate function, using scipy's minimum search method.
_ = plot_objective(result, n_points=10, sample_source='expected_minimum', minimum='expected_minimum')
_____no_output_____
BSD-3-Clause
dev/notebooks/auto_examples/plots/partial-dependence-plot.ipynb
scikit-optimize/scikit-optimize.github.io
"expected_minimum_random" is a naive way of finding the minimum of thesurrogate by only using random sampling:
_ = plot_objective(result, n_points=10, sample_source='expected_minimum_random', minimum='expected_minimum_random')
_____no_output_____
BSD-3-Clause
dev/notebooks/auto_examples/plots/partial-dependence-plot.ipynb
scikit-optimize/scikit-optimize.github.io
We can also specify how many initial samples are used for the two different "expected_minimum" methods. We set it to a low value in the next examples to showcase how it affects the minimum for the two methods.
_ = plot_objective(result, n_points=10, sample_source='expected_minimum_random', minimum='expected_minimum_random', n_minimum_search=10) _ = plot_objective(result, n_points=10, sample_source="expected_minimum", minimum='expected_minimum', n_minimum_search=2)
_____no_output_____
BSD-3-Clause
dev/notebooks/auto_examples/plots/partial-dependence-plot.ipynb
scikit-optimize/scikit-optimize.github.io
Set a minimum location Lastly, we can also define these parameters ourselves by passing a list as the minimum argument:
_ = plot_objective(result, n_points=10, sample_source=[1, -0.5, 0.5], minimum=[1, -0.5, 0.5])
_____no_output_____
BSD-3-Clause
dev/notebooks/auto_examples/plots/partial-dependence-plot.ipynb
scikit-optimize/scikit-optimize.github.io
Training Models The central goal of machine learning is to train predictive models that can be used by applications. In Azure Machine Learning, you can use scripts to train models leveraging common machine learning frameworks like Scikit-Learn, Tensorflow, PyTorch, SparkML, and others. You can run these training scripts as experiments in order to track metrics and outputs - in particular, the trained models. Before You Start Before you start this lab, ensure that you have completed the *Create an Azure Machine Learning Workspace* and *Create a Compute Instance* tasks in [Lab 1: Getting Started with Azure Machine Learning](./labdocs/Lab01.md). Then open this notebook in Jupyter on your Compute Instance. Connect to Your Workspace The first thing you need to do is to connect to your workspace using the Azure ML SDK. > **Note**: If you do not have a current authenticated session with your Azure subscription, you'll be prompted to authenticate. Follow the instructions to authenticate using the code provided.
import azureml.core from azureml.core import Workspace # Load the workspace from the saved config file ws = Workspace.from_config() print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
Ready to use Azure ML 1.17.0 to work with ml-sdk
MIT
02-Training_Models.ipynb
djanie1/mslearn-aml-labs
Create a Training Script You're going to use a Python script to train a machine learning model based on the diabetes data, so let's start by creating a folder for the script and data files.
import os, shutil # Create a folder for the experiment files training_folder = 'diabetes-training' os.makedirs(training_folder, exist_ok=True) # Copy the data file into the experiment folder shutil.copy('data/diabetes.csv', os.path.join(training_folder, "diabetes.csv"))
_____no_output_____
MIT
02-Training_Models.ipynb
djanie1/mslearn-aml-labs
Now you're ready to create the training script and save it in the folder.
%%writefile $training_folder/diabetes_training.py # Import libraries from azureml.core import Run import pandas as pd import numpy as np import joblib import os from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import roc_auc_score from sklearn.metrics import roc_curve # Get the experiment run context run = Run.get_context() # load the diabetes dataset print("Loading Data...") diabetes = pd.read_csv('diabetes.csv') # Separate features and labels X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values # Split data into training set and test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0) # Set regularization hyperparameter reg = 0.01 # Train a logistic regression model print('Training a logistic regression model with regularization rate of', reg) run.log('Regularization Rate', np.float(reg)) model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train) # calculate accuracy y_hat = model.predict(X_test) acc = np.average(y_hat == y_test) print('Accuracy:', acc) run.log('Accuracy', np.float(acc)) # calculate AUC y_scores = model.predict_proba(X_test) auc = roc_auc_score(y_test,y_scores[:,1]) print('AUC: ' + str(auc)) run.log('AUC', np.float(auc)) # Save the trained model in the outputs folder os.makedirs('outputs', exist_ok=True) joblib.dump(value=model, filename='outputs/diabetes_model.pkl') run.complete()
Overwriting diabetes-training/diabetes_training.py
MIT
02-Training_Models.ipynb
djanie1/mslearn-aml-labs
Use an Estimator to Run the Script as an Experiment You can run experiment scripts using a **RunConfiguration** and a **ScriptRunConfig**, or you can use an **Estimator**, which abstracts both of these configurations in a single object. In this case, we'll use a generic **Estimator** object to run the training experiment. Note that the default environment for this estimator does not include the **scikit-learn** package, so you need to explicitly add that to the configuration. The conda environment is built on-demand the first time the estimator is used, and cached for future runs that use the same configuration; so the first run will take a little longer. On subsequent runs, the cached environment can be re-used so they'll complete more quickly.
from azureml.train.estimator import Estimator from azureml.core import Experiment # Create an estimator estimator = Estimator(source_directory=training_folder, entry_script='diabetes_training.py', compute_target='local', conda_packages=['scikit-learn'] ) # Create an experiment experiment_name = 'diabetes-training' experiment = Experiment(workspace = ws, name = experiment_name) # Run the experiment based on the estimator run = experiment.submit(config=estimator) run.wait_for_completion(show_output=True)
WARNING - If 'script' has been provided here and a script file name has been specified in 'run_config', 'script' provided in ScriptRunConfig initialization will take precedence.
MIT
02-Training_Models.ipynb
djanie1/mslearn-aml-labs
As with any experiment run, you can use the **RunDetails** widget to view information about the run and get a link to it in Azure Machine Learning studio.
from azureml.widgets import RunDetails RunDetails(run).show()
_____no_output_____
MIT
02-Training_Models.ipynb
djanie1/mslearn-aml-labs
You can also retrieve the metrics and outputs from the **Run** object.
# Get logged metrics metrics = run.get_metrics() for key in metrics.keys(): print(key, metrics.get(key)) print('\n') for file in run.get_file_names(): print(file)
Regularization Rate 0.01 Accuracy 0.774 AUC 0.8484929598487486 azureml-logs/60_control_log.txt azureml-logs/70_driver_log.txt logs/azureml/8_azureml.log outputs/diabetes_model.pkl
MIT
02-Training_Models.ipynb
djanie1/mslearn-aml-labs
Register the Trained Model Note that the outputs of the experiment include the trained model file (**diabetes_model.pkl**). You can register this model in your Azure Machine Learning workspace, making it possible to track model versions and retrieve them later.
from azureml.core import Model # Register the model run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model', tags={'Training context':'Estimator'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']}) # List registered models for model in Model.list(ws): print(model.name, 'version:', model.version) for tag_name in model.tags: tag = model.tags[tag_name] print ('\t',tag_name, ':', tag) for prop_name in model.properties: prop = model.properties[prop_name] print ('\t',prop_name, ':', prop) print('\n')
diabetes_model version: 4 Training context : Estimator AUC : 0.8484929598487486 Accuracy : 0.774 diabetes_mitigated_20 version: 1 diabetes_mitigated_19 version: 1 diabetes_mitigated_18 version: 1 diabetes_mitigated_17 version: 1 diabetes_mitigated_16 version: 1 diabetes_mitigated_15 version: 1 diabetes_mitigated_14 version: 1 diabetes_mitigated_13 version: 1 diabetes_mitigated_12 version: 1 diabetes_mitigated_11 version: 1 diabetes_mitigated_10 version: 1 diabetes_mitigated_9 version: 1 diabetes_mitigated_8 version: 1 diabetes_mitigated_7 version: 1 diabetes_mitigated_6 version: 1 diabetes_mitigated_5 version: 1 diabetes_mitigated_4 version: 1 diabetes_mitigated_3 version: 1 diabetes_mitigated_2 version: 1 diabetes_mitigated_1 version: 1 diabetes_unmitigated version: 1 diabetes_classifier version: 1 diabetes_model version: 3 Training context : Inline Training AUC : 0.8790686103786257 Accuracy : 0.8906666666666667 diabetes_model version: 2 Training context : Inline Training AUC : 0.888068803690671 Accuracy : 0.9024444444444445 diabetes_model version: 1 Training context : Inline Training AUC : 0.879837305338574 Accuracy : 0.8923333333333333
MIT
02-Training_Models.ipynb
djanie1/mslearn-aml-labs
Create a Parameterized Training Script You can increase the flexibility of your training experiment by adding parameters to your script, enabling you to repeat the same training experiment with different settings. In this case, you'll add a parameter for the regularization rate used by the Logistic Regression algorithm when training the model. Again, let's start by creating a folder for the parameterized script and the training data.
import os, shutil # Create a folder for the experiment files training_folder = 'diabetes-training-params' os.makedirs(training_folder, exist_ok=True) # Copy the data file into the experiment folder shutil.copy('data/diabetes.csv', os.path.join(training_folder, "diabetes.csv"))
_____no_output_____
MIT
02-Training_Models.ipynb
djanie1/mslearn-aml-labs
Now let's create a script containing a parameter for the regularization rate hyperparameter.
%%writefile $training_folder/diabetes_training.py # Import libraries from azureml.core import Run import pandas as pd import numpy as np import joblib import os import argparse from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import roc_auc_score from sklearn.metrics import roc_curve # Get the experiment run context run = Run.get_context() # Set regularization hyperparameter parser = argparse.ArgumentParser() parser.add_argument('--reg_rate', type=float, dest='reg', default=0.01) args = parser.parse_args() reg = args.reg # load the diabetes dataset print("Loading Data...") # load the diabetes dataset diabetes = pd.read_csv('diabetes.csv') # Separate features and labels X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values # Split data into training set and test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0) # Train a logistic regression model print('Training a logistic regression model with regularization rate of', reg) run.log('Regularization Rate', np.float(reg)) model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train) # calculate accuracy y_hat = model.predict(X_test) acc = np.average(y_hat == y_test) print('Accuracy:', acc) run.log('Accuracy', np.float(acc)) # calculate AUC y_scores = model.predict_proba(X_test) auc = roc_auc_score(y_test,y_scores[:,1]) print('AUC: ' + str(auc)) run.log('AUC', np.float(auc)) os.makedirs('outputs', exist_ok=True) joblib.dump(value=model, filename='outputs/diabetes_model.pkl') run.complete()
Writing diabetes-training-params/diabetes_training.py
MIT
02-Training_Models.ipynb
djanie1/mslearn-aml-labs
Use a Framework-Specific Estimator You used a generic **Estimator** class to run the training script, but you can also take advantage of framework-specific estimators that include environment definitions for common machine learning frameworks. In this case, you're using Scikit-Learn, so you can use the **SKLearn** estimator. This means that you don't need to specify the **scikit-learn** package in the configuration. > **Note**: Once again, the training experiment uses a new environment, which must be created the first time it is run.
from azureml.train.sklearn import SKLearn from azureml.widgets import RunDetails # Create an estimator estimator = SKLearn(source_directory=training_folder, entry_script='diabetes_training.py', script_params = {'--reg_rate': 0.1}, compute_target='local' ) # Create an experiment experiment_name = 'diabetes-training' experiment = Experiment(workspace = ws, name = experiment_name) # Run the experiment run = experiment.submit(config=estimator) # Show the run details while running RunDetails(run).show() run.wait_for_completion()
WARNING - If 'script' has been provided here and a script file name has been specified in 'run_config', 'script' provided in ScriptRunConfig initialization will take precedence. WARNING - If 'arguments' has been provided here and arguments have been specified in 'run_config', 'arguments' provided in ScriptRunConfig initialization will take precedence.
MIT
02-Training_Models.ipynb
djanie1/mslearn-aml-labs
Once again, you can get the metrics and outputs from the run.
# Get logged metrics metrics = run.get_metrics() for key in metrics.keys(): print(key, metrics.get(key)) print('\n') for file in run.get_file_names(): print(file)
Regularization Rate 0.1 Accuracy 0.7736666666666666 AUC 0.8483904671874223 azureml-logs/60_control_log.txt azureml-logs/70_driver_log.txt logs/azureml/8_azureml.log outputs/diabetes_model.pkl
MIT
02-Training_Models.ipynb
djanie1/mslearn-aml-labs
Register A New Version of the Model Now that you've trained a new model, you can register it as a new version in the workspace.
from azureml.core import Model # Register the model run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model', tags={'Training context':'Parameterized SKLearn Estimator'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']}) # List registered models for model in Model.list(ws): print(model.name, 'version:', model.version) for tag_name in model.tags: tag = model.tags[tag_name] print ('\t',tag_name, ':', tag) for prop_name in model.properties: prop = model.properties[prop_name] print ('\t',prop_name, ':', prop) print('\n')
diabetes_model version: 5 Training context : Parameterized SKLearn Estimator AUC : 0.8483904671874223 Accuracy : 0.7736666666666666 diabetes_model version: 4 Training context : Estimator AUC : 0.8484929598487486 Accuracy : 0.774 diabetes_mitigated_20 version: 1 diabetes_mitigated_19 version: 1 diabetes_mitigated_18 version: 1 diabetes_mitigated_17 version: 1 diabetes_mitigated_16 version: 1 diabetes_mitigated_15 version: 1 diabetes_mitigated_14 version: 1 diabetes_mitigated_13 version: 1 diabetes_mitigated_12 version: 1 diabetes_mitigated_11 version: 1 diabetes_mitigated_10 version: 1 diabetes_mitigated_9 version: 1 diabetes_mitigated_8 version: 1 diabetes_mitigated_7 version: 1 diabetes_mitigated_6 version: 1 diabetes_mitigated_5 version: 1 diabetes_mitigated_4 version: 1 diabetes_mitigated_3 version: 1 diabetes_mitigated_2 version: 1 diabetes_mitigated_1 version: 1 diabetes_unmitigated version: 1 diabetes_classifier version: 1 diabetes_model version: 3 Training context : Inline Training AUC : 0.8790686103786257 Accuracy : 0.8906666666666667 diabetes_model version: 2 Training context : Inline Training AUC : 0.888068803690671 Accuracy : 0.9024444444444445 diabetes_model version: 1 Training context : Inline Training AUC : 0.879837305338574 Accuracy : 0.8923333333333333
MIT
02-Training_Models.ipynb
djanie1/mslearn-aml-labs
CHALLENGE TASK (Stats Challenge notebook): Fit a multiple linear regression for the following data and check the assumptions using Python.

| Variable | Values |
| --- | --- |
| X1 | 22, 22, 25, 26, 24, 28, 29, 27, 24, 33, 39, 42 |
| X2 | 15, 14, 18, 13, 12, 11, 11, 10, 5, 9, 7, 3 |
| Y | 55, 56, 55, 59, 66, 65, 69, 70, 75, 75, 78, 79 |
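For reference, the fitted regression equation computed in the cell below (intercept and coefficients taken from its output) is approximately:

$\hat{Y} = 74.596 + 0.331\,X_1 - 1.611\,X_2$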
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns

# Convert the data values into a DataFrame
stats_chal = {"X1": [22, 22, 25, 26, 24, 28, 29, 27, 24, 33, 39, 42],
              "X2": [15, 14, 18, 13, 12, 11, 11, 10, 5, 9, 7, 3],
              "Y":  [55, 56, 55, 59, 66, 65, 69, 70, 75, 75, 78, 79]}
df = pd.DataFrame(stats_chal, columns=['X1', 'X2', 'Y'])
print(df)

# Check for linearity between X1 and Y
plt.scatter(df['X1'], df['Y'], color='green')
plt.xlabel('X1 values', fontsize=14)
plt.ylabel('Y values', fontsize=14)
plt.grid(True)
plt.show()
# It's clear that a linear relationship exists between the X1 values and the Y values:
# when the X1 values go up, the Y values also go up.

# Check for linearity between X2 and Y
plt.scatter(df['X2'], df['Y'], color='blue')
plt.xlabel('X2 values', fontsize=14)
plt.ylabel('Y values', fontsize=14)
plt.grid(True)
plt.show()
# It's clear that a linear relationship exists between the X2 values and the Y values:
# when the X2 values go up, the Y values go down (a negative slope).

# Performing the multiple linear regression with statsmodels
X = df[['X1', 'X2']]          # here we have 2 variables for multiple regression
Y = df['Y']
X_const = sm.add_constant(X)  # adding a constant (intercept) column
mlr_model = sm.OLS(Y, X_const).fit()
predictions = mlr_model.predict(X_const)
print(mlr_model.summary())

# If you plug the X1=22, X2=15 data into the regression equation,
# you'll get the same predicted result for Y
y_single = 74.5958 + 0.3314 * 22 - 1.6106 * 15
print(y_single)

# Predicted values for the whole dataset from the fitted coefficients
predicted_values = 74.5958 + 0.3314 * df['X1'] - 1.6106 * df['X2']
print(predicted_values)

sns.regplot(data=df, x="X1", y="Y", color="green")  # OLS fit of Y on X1

# Train/test split and a scikit-learn linear regression
y = df["Y"]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
print(X_train.head())
print(len(X_train), len(X_test))

from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train, y_train)
test_model = model.predict(X_test)
print(test_model)

from sklearn.metrics import mean_squared_error, mean_absolute_error
sns.histplot(data=df, x="X1", bins=20)
sns.histplot(data=df, x="X2", bins=20)
print(mean_absolute_error(y_test, test_model))
print(mean_squared_error(y_test, test_model))
print(np.sqrt(mean_squared_error(y_test, test_model)))

sns.scatterplot(x="X1", y="Y", data=df)  # scatter plot of the raw data

# with sklearn (fit on the full design matrix, including the constant column)
from sklearn import linear_model
ml_regr = linear_model.LinearRegression()
ml_regr.fit(X_const, Y)
print('Intercept: \n', ml_regr.intercept_)
print('Coefficients: \n', ml_regr.coef_)
Intercept: 74.59582972285749 Coefficients: [ 0. 0.33138486 -1.61056402]
MIT
Stats_Live_MLR_challenge.ipynb
krishnavizster/Statistics
CHECKING FOR LINEAR REGRESSION ASSUMPTIONS 1. Linear relationship: aims at finding a linear relationship between the independent and dependent variables. TEST: A simple visual way of determining this is through the use of a scatter plot. 2. Variables follow a normal distribution: this assumption ensures that for each value of the independent variable, the dependent variable is a random variable following a normal distribution and its mean lies on the regression line. TEST: One of the ways to visually test for this assumption is through the use of the Quantile-Quantile plot (Q-Q plot).
# Multicollinearity test
corr = df.corr()
print(corr)

# Linearity and normality test
import seaborn as sns
sns.set(style="ticks", color_codes=True, font_scale=2)
g = sns.pairplot(df, height=3, diag_kind="hist", kind="reg")
g.fig.suptitle("Scatter Plot", y=1.08)

X_test = sm.add_constant(X_test)
y_pred = mlr_model.predict(X_test)
residual = y_test - y_pred

# No multicollinearity
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = [variance_inflation_factor(X_train.values, i) for i in range(X_train.shape[1])]
print(pd.DataFrame({'vif': vif[0:]}, index=X_train.columns).T)

# Little or no multicollinearity: this assumption tests the correlation between independent
# variables. If multicollinearity exists between them (i.e. the independent variables are highly
# correlated), they are no longer independent. TEST: correlation analysis (others are the
# variance inflation factor (VIF) and the condition index). If you find any values in which the
# absolute value of their correlation is >= 0.8, the multicollinearity assumption is being broken.

# Normality of residuals
sns.distplot(residual)

import scipy as sp
import scipy.stats  # make sure the stats submodule is loaded
fig, ax = plt.subplots(figsize=(6, 2.5))
_, (__, ___, r) = sp.stats.probplot(residual, plot=ax, fit=True)
print(np.mean(residual))

# Normality of error / residuals
import scipy.stats as stats
fig, ax = plt.subplots(figsize=(10, 6))
stats.probplot(residual, dist="norm", plot=plt)
plt.show()

# Homoscedasticity
fig, ax = plt.subplots(figsize=(6, 2.5))
_ = ax.scatter(y_pred, residual)
plt.title("Homoscedasticity")

# Data is homoscedastic: linear regression analysis requires homoscedasticity, i.e. the error
# terms along the regression line are equal. This analysis is also applied to the residuals of
# your linear regression model. TEST: homoscedasticity can be easily checked with a scatter plot
# of the residuals.

# No autocorrelation of residuals
import statsmodels.tsa.api as smt
acf = smt.graphics.plot_acf(residual, lags=3, alpha=0.05)
acf.show()

# Little or no autocorrelation: this assumption is much like the previous one, except it applies
# to the residuals of your linear regression model. Linear regression analysis requires that
# there is little or no autocorrelation in the data. TEST: you can test the linear regression
# model for autocorrelation with the Durbin-Watson test (d); d can assume values between 0 and 4,
# and values around 2 indicate no autocorrelation. As a rule of thumb, values of 1.5 < d < 2.5
# show that there is no autocorrelation in the data.
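The comments above mention the Durbin-Watson statistic for the autocorrelation check but do not compute it; a minimal sketch, assuming the `residual` series from the cell above:

```
from statsmodels.stats.stattools import durbin_watson

# Values near 2 indicate little or no autocorrelation in the residuals
print(durbin_watson(residual))
```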
_____no_output_____
MIT
Stats_Live_MLR_challenge.ipynb
krishnavizster/Statistics
Prepare stimuli in stereo with a sync tone in the L channel To synchronize the recording systems, each stimulus file goes in stereo: the L channel has the stimulus, and the R channel has a pure tone (500 Hz to 5 kHz). This is done here with the help of the rigmq.util.stimprep module. It uses (or creates) a dictionary of {stim_file: tone_freq} which is stored as a .json file for offline processing.
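A conceptual sketch of what each tagged stereo file contains (this is not the rigmq implementation; the array names and the 1 kHz tag below are illustrative):

```
import numpy as np

fs = 48000                    # stimulus-system sampling rate
tag_freq = 1000               # illustrative sync-tone frequency for one stimulus
stim = np.random.randn(fs)    # stand-in for a 1-second stimulus waveform (L channel)
t = np.arange(stim.size) / fs
tone = 0.9 * np.sin(2 * np.pi * tag_freq * t)  # pure tone for the R channel
stereo = np.column_stack([stim, tone])         # shape (n_samples, 2): [L, R]
```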
import socket import os import sys import logging import warnings import numpy as np import glob from rigmq.util import stimprep as sp # setup the logger logger = logging.getLogger() handler = logging.StreamHandler() formatter = logging.Formatter( '%(asctime)s %(name)-12s %(levelname)-8s %(message)s') handler.setFormatter(formatter) logger.addHandler(handler) logger.setLevel(logging.INFO) # Check wich computer to decide where the things are mounted comp_name=socket.gethostname() logger.info('Computer: ' + comp_name) exp_folder = os.path.abspath('/Users/zeke/experiment/birds') bird = 'g3v3' sess = 'acute_0' stim_sf = 48000 # sampling frequency of the stimulus system stim_folder = os.path.join(exp_folder, bird, 'SongData', sess) glob.glob(os.path.join(stim_folder, '*.wav')) from scipy.io import wavfile from scipy.signal import resample a_file = glob.glob(os.path.join(stim_folder, '*.wav'))[0] in_sf, data = wavfile.read(a_file) %matplotlib inline from matplotlib import pyplot as plt plt.plot(data) data.dtype np.iinfo(data.dtype).min def normalize(x: np.array, max_amp: np.float=0.9)-> np.array: y = x.astype(np.float) y = y - np.mean(y) y = y / np.max(np.abs(y)) # if it is still of-centered, scale to avoid clipping in the widest varyng sign return y * max_amp data_float = normalize(data) plt.plot(data_float) def int_range(x: np.array, dtype: np.dtype): min_int = np.iinfo(dtype).min max_int = np.iinfo(dtype).max if min_int==0: # for unsigned types shift everything x = x + np.min(x) y = x * max_int return y.astype(dtype) data_int = int_range(data_float, data.dtype) plt.plot(data_int) data_tagged = sp.make_stereo_stim(a_file, 48000, tag_freq=1000) plt.plot(data_tagged[:480,1]) ### Define stim_tags There is a dictionary of {wav_file: tag_frequency} can be done by hand when there are few stimuli stim_tags_dict = {'bos': 1000, 'bos-lo': 2000, 'bos-rev': 3000} stims_list = list(stim_tags_dict.keys()) sp.create_sbc_stim(stims_list, stim_folder, stim_sf, stim_tag_dict=stim_tags_dict)
2019-06-27 16:36:59,810 rigmq.util.stimprep INFO Processing /Users/zeke/experiment/birds/g3v3/SongData/acute_0/bos.wav 2019-06-27 16:36:59,813 rigmq.util.stimprep INFO tag_freq = 1000 2019-06-27 16:36:59,815 rigmq.util.stimprep INFO Will resample from 40414 to 60621 sampes 2019-06-27 16:36:59,831 rigmq.util.stimprep INFO Saved to /Users/zeke/experiment/birds/g3v3/SongData/acute_0/sbc_stim/bos_tag.wav 2019-06-27 16:36:59,832 rigmq.util.stimprep INFO Processing /Users/zeke/experiment/birds/g3v3/SongData/acute_0/bos-lo.wav 2019-06-27 16:36:59,833 rigmq.util.stimprep INFO tag_freq = 2000 2019-06-27 16:36:59,835 rigmq.util.stimprep INFO Will resample from 43906 to 65859 sampes 2019-06-27 16:36:59,876 rigmq.util.stimprep INFO Saved to /Users/zeke/experiment/birds/g3v3/SongData/acute_0/sbc_stim/bos-lo_tag.wav 2019-06-27 16:36:59,876 rigmq.util.stimprep INFO Processing /Users/zeke/experiment/birds/g3v3/SongData/acute_0/bos-rev.wav 2019-06-27 16:36:59,877 rigmq.util.stimprep INFO tag_freq = 3000 2019-06-27 16:36:59,879 rigmq.util.stimprep INFO Will resample from 40414 to 60621 sampes 2019-06-27 16:36:59,893 rigmq.util.stimprep INFO Saved to /Users/zeke/experiment/birds/g3v3/SongData/acute_0/sbc_stim/bos-rev_tag.wav 2019-06-27 16:36:59,895 rigmq.util.stimprep INFO Saved tags .json file to /Users/zeke/experiment/birds/g3v3/SongData/acute_0/sbc_stim/stim_tags.json
MIT
rigmq/util/prepare_audio_stims_debug.ipynb
zekearneodo/rigmq
Scaling and Normalization
import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler from scipy.cluster.vq import whiten
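A brief, hedged illustration of what the imported transformers do; the toy array below is made up for demonstration:

```
data = np.array([[1.0], [2.0], [3.0], [100.0]])

print(StandardScaler().fit_transform(data))  # zero mean, unit variance per column
print(MinMaxScaler().fit_transform(data))    # rescales each column to the [0, 1] range
print(RobustScaler().fit_transform(data))    # centers on the median, scales by the IQR
print(whiten(data))                          # scipy: divides each column by its standard deviation
```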
_____no_output_____
MIT
Scaling and Normalization.ipynb
dbl007/python-cheat-sheet