PyTorch/LanguageModeling/BERT/distillation
distillation
README
Distillation
========

To get set up to run Knowledge Distillation on BERT once in the container, run the following:
```
cd /workspace/bert/distillation
bash utils/perform_distillation_prereqs.sh
```

`perform_distillation_prereqs.sh` performs the following:
- Downloads and processes prerequisite BERT-base checkpoints to `/workspace/bert/distillation/checkpoints`
- Downloads prerequisite GloVe embeddings to `/workspace/bert/data/downloads/glove`

After performing the prerequisite tasks, run the following in the container to produce fully distilled BERT models for SQuAD v1.1 and SST-2.
```
bash run_e2e_distillation.sh
```

`run_e2e_distillation.sh` contains 8 command lines to obtain fully distilled BERT models for SQuAD v1.1 and SST-2. The distilled BERT model has the config (N=4, D=312, Di=1200, H=12). To distill knowledge into models of different sizes, a new `BERT_4L_312D/config.json` can be created and passed as a starting point in `run_e2e_distillation.sh`, as sketched in the example config at the end of this README.

`run_e2e_distillation.sh` contains the following:
- Phase 1 distillation: generic distillation on the Wikipedia dataset with maximum sequence length 128. `--input_dir` needs to be updated accordingly.
- Phase 2 distillation: generic distillation on the Wikipedia dataset with maximum sequence length 512. `--input_dir` needs to be updated accordingly.

*Task-specific distillation: SQuAD v1.1* (maximum sequence length 384)
- Data augmentation
- Distillation on the task-specific SQuAD v1.1 dataset using losses based on the transformer backbone only
- Distillation on the task-specific SQuAD v1.1 dataset using a loss based on the task-specific prediction head only

*Task-specific distillation: SST-2* (maximum sequence length 128)
- Data augmentation
- Distillation on the task-specific SST-2 dataset using losses based on the transformer backbone only
- Distillation on the task-specific SST-2 dataset using a loss based on the task-specific prediction head only

![BERT Distillation Flow](https://developer.nvidia.com/sites/default/files/akamai/joc_model.png)

Note: Task-specific distillation for SST-2 uses the output checkpoint of phase 1 distillation as its starting point, whereas task-specific distillation for SQuAD v1.1 uses the output checkpoint of phase 2 distillation as its starting point.

One can download different general and task-specific distilled checkpoints from NGC:

| Model | Description |
|------------------------|---------------------------------------------------------------------------|
| [bert-dist-4L-288D-uncased-qa](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/dle/models/bert_pyt_ckpt_distilled_4l_288d_qa_squad11_amp/files) | 4-layer distilled model fine-tuned on SQuAD v1.1 |
| [bert-dist-4L-288D-uncased-sst2](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/dle/models/bert_pyt_ckpt_distilled_4l_288d_ft_sst2_amp/files) | 4-layer distilled model fine-tuned on GLUE SST-2 |
| [bert-dist-4L-288D-uncased-pretrained](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/dle/models/bert_pyt_ckpt_distilled_4l_288d_pretraining_amp/files) | 4-layer distilled model pretrained on generic corpora such as Wikipedia |
| [bert-dist-6L-768D-uncased-qa](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/dle/models/bert_pyt_ckpt_distilled_6l_768d_qa_squad11_amp/files) | 6-layer distilled model fine-tuned on SQuAD v1.1 |
| [bert-dist-6L-768D-uncased-sst2](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/dle/models/bert_pyt_ckpt_distilled_6l_768d_ft_sst2_amp/files) | 6-layer distilled model fine-tuned on GLUE SST-2 |
| [bert-dist-6L-768D-uncased-pretrained](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/dle/models/bert_pyt_ckpt_distilled_6l_768d_pretraining_amp/files) | 6-layer distilled model pretrained on generic corpora such as Wikipedia |

The following results were obtained on an NVIDIA DGX-1 with 32GB GPUs using the pytorch:20.12-py3 NGC container.

*Accuracy achieved and E2E time to train on NVIDIA DGX-1 with 32GB:*

| Student | Task | SubTask | Time (hrs) | Total Time (hrs) | Accuracy | BERT Base Accuracy |
| --------------- |:----------------:| :---------------:| :--------: | :-------------: | :------: | ------------------: |
| 4 Layers; H=288 | Distil Phase 1 | backbone loss | 1.399 | | | |
| 4 Layers; H=288 | Distil Phase 2 | backbone loss | 0.649 | | | |
| 4 Layers; H=288 | Distil SST-2 | backbone loss | 1.615 | | | |
| 4 Layers; H=288 | Distil SST-2 | final layer loss | 0.469 | 3.483 | 90.82 | 91.51 |
| 4 Layers; H=288 | Distil SQuADv1.1 | backbone loss | 3.471 | | | |
| 4 Layers; H=288 | Distil SQuADv1.1 | final layer loss | 3.723 | 9.242 | 83.09 | 88.58 |
| 6 Layers; H=768 | SST-2 | | | | 91.97 | 91.51 |
| 6 Layers; H=768 | SQuADv1.1 | | | | 88.43 | 88.58 |

To perform inference, refer to the [Inference Performance Benchmark](../#inference-process).

*FP16 Inference Performance:*

| Model | BS | Infer Perf (seqlen 128) (seq/sec) | Infer Perf (seqlen 384) (seq/sec) | Speedup vs BERT Large (seqlen 128) | Speedup vs BERT Large (seqlen 384) | Speedup vs BERT Base (seqlen 128) | Speedup vs BERT Base (seqlen 384) |
| --------------------- |:------:| :----------------------------: | :----------------------------: | :------------------------------: | :------------------------------: | :------------------------------: | -------------------------------- |
| BERT Large PyT | 8 | 502 | 143 | 1 | 1 | 0.3625 | 0.333 |
| BERT Base PyT | 128 | 1385 | 429 | 2.7590 | 3 | 1 | 1 |
| NV_DistillBERT_4l_312D | 128 | 13600 | 2300 | 27.0916 | 16.0839 | 9.8195 | 5.36130 |
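As a reference for creating student models of other sizes, the configuration described above (N=4 layers, D=312 hidden, Di=1200 intermediate, H=12 heads) could be expressed roughly as follows. This is a hedged sketch using standard BERT `config.json` field names; the exact keys expected by `run_e2e_distillation.sh` should be checked against the `BERT_4L_312D/config.json` shipped with the repository.
```
# Illustrative sketch only: writes a student config with standard BERT config.json keys.
import json

student_config = {
    "num_hidden_layers": 4,         # N  = 4 transformer layers
    "hidden_size": 312,             # D  = 312 hidden dimension
    "intermediate_size": 1200,      # Di = 1200 feed-forward dimension
    "num_attention_heads": 12,      # H  = 12 attention heads
    "hidden_act": "gelu",
    "hidden_dropout_prob": 0.1,
    "attention_probs_dropout_prob": 0.1,
    "max_position_embeddings": 512,
    "type_vocab_size": 2,
    "vocab_size": 30522,
    "initializer_range": 0.02,
}

with open("BERT_4L_312D/config.json", "w") as f:
    json.dump(student_config, f, indent=2)
```
Changing `num_hidden_layers`, `hidden_size`, and `intermediate_size` yields student models of different capacities that can then be passed as the starting point in `run_e2e_distillation.sh`.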
CUDA-Optimized/FastSpeech/fastspeech/utils
utils
nvtx
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # * Neither the name of the NVIDIA CORPORATION nor the # names of its contributors may be used to endorse or promote products # derived from this software without specific prior written permission. # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE # DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. from torch.cuda import nvtx class Nvtx(object): def __init__(self, name, enabled=True): self.name = name self.enabled = enabled def __call__(self, f): def wrapped_f(*args, **kwargs): with Nvtx(self.name, self.enabled): return f(*args, **kwargs) return wrapped_f def __enter__(self): if self.enabled: nvtx.range_push(self.name) def __exit__(self, *exc_info): if self.enabled: nvtx.range_pop()
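A brief usage sketch for the `Nvtx` helper above, both as a decorator and as a context manager; the import path follows this file's location in the FastSpeech tree and is an assumption. The emitted ranges appear in Nsight Systems / nvprof timelines when the program is profiled.
```
# Usage sketch (assumed import path); ranges are only pushed when enabled=True.
import torch
from fastspeech.utils.nvtx import Nvtx

@Nvtx("encoder_forward", enabled=torch.cuda.is_available())
def encoder_forward(x):
    return x * 2  # placeholder for real work

def training_step(x):
    with Nvtx("preprocess", enabled=torch.cuda.is_available()):  # range around this block
        x = x.float()
    return encoder_forward(x)  # the wrapped call emits its own "encoder_forward" range

training_step(torch.ones(8))
```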
Tools/PyTorch/TimeSeriesPredictionPlatform/triton
triton
run_inference_on_fw
#!/usr/bin/env python3 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. r""" To infer the model on framework runtime, you can use `run_inference_on_fw.py` script. It infers data obtained from pointed data loader locally and saves received data into dump files. Those files are stored in directory pointed by `--output-dir` argument. Example call: ```shell script python ./triton/run_inference_on_fw.py \ --input-path /models/exported/model.onnx \ --input-type onnx \ --dataloader triton/dataloader.py \ --data-dir /data/imagenet \ --batch-size 32 \ --output-dir /results/dump_local \ --dump-labels ``` """ import argparse import logging import os from pathlib import Path from tqdm import tqdm # method from PEP-366 to support relative import in executed modules if __package__ is None: __package__ = Path(__file__).parent.name os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2" os.environ["TF_ENABLE_DEPRECATION_WARNINGS"] = "0" from .deployment_toolkit.args import ArgParserGenerator # noqa: E402 module level import not at top of file from .deployment_toolkit.core import ( # noqa: E402 module level import not at top of file DATALOADER_FN_NAME, BaseLoader, BaseRunner, load_from_file, ) from .deployment_toolkit.dump import JsonDumpWriter # noqa: E402 module level import not at top of file from .deployment_toolkit.extensions import loaders, runners # noqa: E402 module level import not at top of file LOGGER = logging.getLogger("run_inference_on_fw") def _verify_and_format_dump(args, ids, x, y_pred, y_real): data = {"outputs": y_pred, "ids": {"ids": ids}} if args.dump_inputs: data["inputs"] = x if args.dump_labels: if not y_real: raise ValueError( "Found empty label values. 
Please provide labels in dataloader_fn or do not use --dump-labels argument" ) data["labels"] = y_real return data def _parse_and_validate_args(): supported_inputs = set(runners.supported_extensions) & set(loaders.supported_extensions) parser = argparse.ArgumentParser(description="Dump local inference output of given model", allow_abbrev=False) parser.add_argument("--input-path", help="Path to input model", required=True) parser.add_argument("--input-type", help="Input model type", choices=supported_inputs, required=True) parser.add_argument("--dataloader", help="Path to python file containing dataloader.", required=True) parser.add_argument("--output-dir", help="Path to dir where output files will be stored", required=True) parser.add_argument("--dump-labels", help="Dump labels to output dir", action="store_true", default=False) parser.add_argument("--dump-inputs", help="Dump inputs to output dir", action="store_true", default=False) parser.add_argument("-v", "--verbose", help="Verbose logs", action="store_true", default=False) args, *_ = parser.parse_known_args() get_dataloader_fn = load_from_file(args.dataloader, label="dataloader", target=DATALOADER_FN_NAME) ArgParserGenerator(get_dataloader_fn).update_argparser(parser) Loader: BaseLoader = loaders.get(args.input_type) ArgParserGenerator(Loader, module_path=args.input_path).update_argparser(parser) Runner: BaseRunner = runners.get(args.input_type) ArgParserGenerator(Runner).update_argparser(parser) args = parser.parse_args() types_requiring_io_params = [] if args.input_type in types_requiring_io_params and not all(p for p in [args.inputs, args.outputs]): parser.error(f"For {args.input_type} input provide --inputs and --outputs parameters") return args def main(): args = _parse_and_validate_args() log_level = logging.INFO if not args.verbose else logging.DEBUG log_format = "%(asctime)s %(levelname)s %(name)s %(message)s" logging.basicConfig(level=log_level, format=log_format) LOGGER.info("args:") for key, value in vars(args).items(): LOGGER.info(f" {key} = {value}") Loader: BaseLoader = loaders.get(args.input_type) Runner: BaseRunner = runners.get(args.input_type) loader = ArgParserGenerator(Loader, module_path=args.input_path).from_args(args) runner = ArgParserGenerator(Runner).from_args(args) LOGGER.info(f"Loading {args.input_path}") model = loader.load(args.input_path) with runner.init_inference(model=model) as runner_session, JsonDumpWriter(args.output_dir) as writer: get_dataloader_fn = load_from_file(args.dataloader, label="dataloader", target=DATALOADER_FN_NAME) dataloader_fn = ArgParserGenerator(get_dataloader_fn).from_args(args) LOGGER.info("Data loader initialized; Running inference") for ids, x, y_real in tqdm(dataloader_fn(), unit="batch", mininterval=10): y_pred = runner_session(x) data = _verify_and_format_dump(args, ids=ids, x=x, y_pred=y_pred, y_real=y_real) writer.write(**data) LOGGER.info("Inference finished") if __name__ == "__main__": main()
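The `--dataloader` module is expected to expose a factory resolved by `DATALOADER_FN_NAME`. A hypothetical minimal dataloader is sketched below; the batch structure `(ids, x, y_real)` with name-to-array dicts is inferred from how batches are consumed in the loop above and is an assumption, not the toolkit's documented contract.
```
# Hypothetical minimal dataloader module for --dataloader (illustration only).
import numpy as np

def get_dataloader_fn(data_dir: str = "/data", batch_size: int = 32):
    def _dataloader():
        for step in range(4):  # a few synthetic batches for illustration
            ids = np.arange(step * batch_size, (step + 1) * batch_size)
            x = {"input__0": np.random.rand(batch_size, 16).astype(np.float32)}
            y_real = {"output__0": np.random.rand(batch_size, 1).astype(np.float32)}
            yield ids, x, y_real
    return _dataloader
```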
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/trtis_client/src/bin
bin
CMakeLists
## # Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # * Neither the name of the NVIDIA CORPORATION nor the # names of its contributors may be used to endorse or promote products # derived from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE # ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # function(add_binary bin_file) get_filename_component(bin_name "${bin_file}" NAME_WE) add_executable(${bin_name} ${bin_file}) target_compile_options(${bin_name} PRIVATE ${CPP_DEVEL_FLAGS}) target_link_libraries(${bin_name} tt2i_client request) set_property(TARGET ${bin_name} PROPERTY RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin) endfunction() # build benchmark executable file(GLOB binaries *.cpp) foreach (file ${binaries}) add_binary(${file}) endforeach()
TensorFlow2/Recommendation/DLRM_and_DCNv2/nn
nn
model
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # author: Tomasz Grel (tgrel@nvidia.com) import tensorflow as tf import horovod.tensorflow as hvd import time import os from utils.distributed import dist_print from .dense_model import DenseModel, dense_model_parameters from .sparse_model import SparseModel, sparse_model_parameters from .nn_utils import create_inputs_dict class Model(tf.keras.Model): def __init__(self, **kwargs): super(Model, self).__init__() if kwargs: dense_model_kwargs = {k:kwargs[k] for k in dense_model_parameters} self.dense_model = DenseModel(**dense_model_kwargs) sparse_model_kwargs = {k:kwargs[k] for k in sparse_model_parameters} self.sparse_model = SparseModel(**sparse_model_kwargs) @staticmethod def create_from_checkpoint(checkpoint_path): if checkpoint_path is None: return None model = Model() model.dense_model = DenseModel.from_config(os.path.join(checkpoint_path, 'dense', 'config.json')) model.sparse_model = SparseModel.from_config(os.path.join(checkpoint_path, 'sparse', 'config.json')) model.restore_checkpoint(checkpoint_path) return model def force_initialization(self, global_batch_size): numerical_features = tf.zeros(shape=[global_batch_size // hvd.size(), self.dense_model.num_numerical_features]) categorical_features = [tf.zeros(shape=[global_batch_size, 1], dtype=tf.int32) for _ in range(len(self.sparse_model.get_local_table_ids(hvd.rank())))] inputs = create_inputs_dict(numerical_features, categorical_features) self(inputs=inputs) @tf.function def call(self, inputs, sigmoid=False, training=False): numerical_features, cat_features = list(inputs.values()) embedding_outputs = self.sparse_model(cat_features) embedding_outputs = tf.reshape(embedding_outputs, shape=[-1]) x = self.dense_model(numerical_features, embedding_outputs, sigmoid=sigmoid, training=training) return x def save_checkpoint(self, checkpoint_path): dist_print('Saving a checkpoint...') begin_save = time.time() os.makedirs(checkpoint_path, exist_ok=True) if hvd.rank() == 0: dense_checkpoint_dir = os.path.join(checkpoint_path, 'dense') os.makedirs(dense_checkpoint_dir, exist_ok=True) self.dense_model.save_config(os.path.join(dense_checkpoint_dir, 'config.json')) self.dense_model.save_weights(os.path.join(dense_checkpoint_dir, 'dense')) sparse_checkpoint_dir = os.path.join(checkpoint_path, 'sparse') os.makedirs(sparse_checkpoint_dir, exist_ok=True) self.sparse_model.save_config(os.path.join(sparse_checkpoint_dir, 'config.json')) self.sparse_model.save_checkpoint(sparse_checkpoint_dir) end_save = time.time() dist_print('Saved a checkpoint to ', checkpoint_path) dist_print(f'Saving a checkpoint took {end_save - begin_save:.3f}') def restore_checkpoint(self, checkpoint_path): begin = time.time() dist_print('Restoring a checkpoint...') local_batch = 64 self.force_initialization(global_batch_size=hvd.size()*local_batch) dense_checkpoint_path = os.path.join(checkpoint_path, 'dense', 'dense') self.dense_model.load_weights(dense_checkpoint_path) 
sparse_checkpoint_dir = os.path.join(checkpoint_path, 'sparse') self.sparse_model.load_checkpoint(sparse_checkpoint_dir) end = time.time() dist_print(f'Restoring a checkpoint took: {end-begin:.3f} seconds') return self
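A hedged usage sketch for the `Model` wrapper above; the import path mirrors this file's location and is an assumption, and the checkpoint path is illustrative. Constructing a model from scratch requires passing the union of `dense_model_parameters` and `sparse_model_parameters` as keyword arguments, which is omitted here.
```
# Usage sketch (assumed import path and illustrative checkpoint directory).
import horovod.tensorflow as hvd
from nn.model import Model

hvd.init()

# Rebuild a previously saved model: dense and sparse sub-models are recreated from
# their config.json files and weights stored under <path>/dense and <path>/sparse.
model = Model.create_from_checkpoint("/checkpoints/dlrm")

# Saving writes the dense config/weights on rank 0 and the model-parallel sparse
# embedding checkpoint from every rank.
model.save_checkpoint("/checkpoints/dlrm_copy")
```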
Tools/DGLPyTorch/SyntheticGraphGeneration/syngen/generator/tabular
tabular
base_tabular_generator
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import abc import torch class BaseTabularGenerator(abc.ABC): """Base class for all tabular generators""" def __init__(self, **kwargs): pass @classmethod def get_generators(cls, include_parents=True): """Recursively find subclasses of `BaseTabularGenerator` Args: include_parents (bool): whether to include parents to other classes. (default: `True`) """ generators = dict() for child in cls.__subclasses__(): children = child.get_generators(include_parents) generators.update(children) if include_parents or not children: if abc.ABC not in child.__bases__: generators[child.__name__] = child return generators def fit(self, *args, **kwargs): """fit function for the generator Args: *args: optional positional args **kwargs: optional key-word arguments """ raise NotImplementedError() def sample(self, num_samples, *args, **kwargs): """generate `num_samples` from generator Args: num_samples (int): number of samples to generate *args: optional positional args **kwargs: optional key-word arguments """ raise NotImplementedError() def save(self, path): raise NotImplementedError() @property def supports_memmap(self) -> bool: return False @classmethod def load(cls, path): raise NotImplementedError() @staticmethod def add_args(parser): return parser
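To illustrate how the registry in `get_generators` is meant to be used, the sketch below defines a made-up subclass (`UniformTabularGenerator` is not part of SynGen) and shows that it becomes discoverable simply by inheriting from `BaseTabularGenerator`.
```
# Illustrative subclass only; column handling is deliberately simplistic.
import numpy as np
import pandas as pd

class UniformTabularGenerator(BaseTabularGenerator):
    """Samples each numeric column uniformly between the min/max seen during fit."""

    def fit(self, data: pd.DataFrame, **kwargs):
        self.ranges = {c: (data[c].min(), data[c].max()) for c in data.columns}

    def sample(self, num_samples, *args, **kwargs):
        return pd.DataFrame({
            c: np.random.uniform(lo, hi, size=num_samples)
            for c, (lo, hi) in self.ranges.items()
        })

# Any non-abstract subclass shows up in the registry:
print(BaseTabularGenerator.get_generators())  # includes {'UniformTabularGenerator': ...}
```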
TensorFlow2/LanguageModeling/ELECTRA/scripts
scripts
run_squad
#!/usr/bin/env bash # Copyright (c) 2020 NVIDIA CORPORATION. All rights reserved. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. echo "Container nvidia build = " $NVIDIA_BUILD_ID electra_model=${1:-"google/electra-base-discriminator"} epochs=${2:-"2"} batch_size=${3:-"16"} infer_batch_size=${4:-"128"} learning_rate=${5:-"4e-4"} precision=${6:-"amp"} num_gpu=${7:-"8"} seed=${8:-"$RANDOM"} SQUAD_VERSION=${9:-"1.1"} squad_dir=${10:-"/workspace/electra/data/download/squad/v$SQUAD_VERSION"} OUT_DIR=${11:-"results/"} init_checkpoint=${12:-"None"} mode=${13:-"train_eval"} env=${14:-"interactive"} cache_dir=${15:-"$squad_dir"} max_steps=${16:-"-1"} echo "out dir is $OUT_DIR" mkdir -p $OUT_DIR if [ ! -d "$OUT_DIR" ]; then echo "ERROR: non existing $OUT_DIR" exit 1 fi use_fp16="" if [ "$precision" = "amp" ] ; then echo "mixed-precision training and xla activated!" use_fp16=" --amp --xla " fi if [ "$num_gpu" = "1" ] ; then export CUDA_VISIBLE_DEVICES=0 mpi_command=" " else unset CUDA_VISIBLE_DEVICES mpi_command=" horovodrun -np $num_gpu " fi if [ "$env" = "cluster" ] ; then unset CUDA_VISIBLE_DEVICES mpi_command=" " fi v2="" echo "Running SQuAD-v$SQUAD_VERSION" if [ "$SQUAD_VERSION" = "2.0" ] ; then v2=" --version_2_with_negative " fi CMD=" $mpi_command python run_tf_squad.py " CMD+="--init_checkpoint=$init_checkpoint " if [ "$mode" = "train" ] ; then CMD+="--do_train " CMD+="--train_batch_size=$batch_size " elif [ "$mode" = "eval" ] ; then CMD+="--do_predict " CMD+="--predict_batch_size=$infer_batch_size " CMD+="--eval_script=$squad_dir/evaluate-v$SQUAD_VERSION.py " CMD+="--do_eval " elif [ "$mode" = "prediction" ] ; then CMD+="--do_predict " CMD+="--predict_batch_size=$infer_batch_size " else CMD+=" --do_train " CMD+=" --train_batch_size=$batch_size " CMD+="--do_predict " CMD+="--predict_batch_size=$infer_batch_size " CMD+="--eval_script=$squad_dir/evaluate-v$SQUAD_VERSION.py " CMD+="--do_eval " fi CMD+=" $v2 " CMD+=" --data_dir $squad_dir " CMD+=" --do_lower_case " CMD+=" --electra_model=$electra_model " CMD+=" --learning_rate=$learning_rate " CMD+=" --warmup_proportion 0.05 " CMD+=" --weight_decay_rate 0.01 " CMD+=" --layerwise_lr_decay 0.8 " CMD+=" --seed=$seed " CMD+=" --num_train_epochs=$epochs " CMD+=" --max_seq_length=384 " CMD+=" --doc_stride=128 " CMD+=" --beam_size 5 " CMD+=" --joint_head True " CMD+=" --null_score_diff_threshold -5.6 " CMD+=" --output_dir=$OUT_DIR " CMD+=" $use_fp16" CMD+=" --cache_dir=$cache_dir " CMD+=" --max_steps=$max_steps " CMD+=" --vocab_file=/workspace/electra/vocab/vocab.txt " LOGFILE=$OUT_DIR/logfile.txt echo "$CMD |& tee $LOGFILE" time $CMD |& tee $LOGFILE
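Since `run_squad.sh` is driven entirely by positional arguments, the order matters. The sketch below invokes it programmatically with the first nine positions spelled out; values and the script path are illustrative, and any omitted trailing positions fall back to the defaults defined in the script above.
```
# Hedged invocation sketch; arguments map to the positional parameters of run_squad.sh.
import subprocess

subprocess.run([
    "bash", "scripts/run_squad.sh",
    "google/electra-base-discriminator",  # 1: electra_model
    "2",                                  # 2: epochs
    "16",                                 # 3: per-GPU train batch_size
    "128",                                # 4: infer_batch_size
    "4e-4",                               # 5: learning_rate
    "amp",                                # 6: precision ("amp" enables --amp --xla)
    "8",                                  # 7: num_gpu
    "42",                                 # 8: seed
    "1.1",                                # 9: SQUAD_VERSION
], check=True)
```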
TensorFlow2/Recommendation/WideAndDeep/triton/runner
runner
task
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import pathlib import platform import subprocess from datetime import datetime from typing import Dict, List, Optional, Union import cpuinfo import psutil import yaml # method from PEP-366 to support relative import in executed modules if __name__ == "__main__" and __package__ is None: __package__ = pathlib.Path(__file__).parent.name from ..deployment_toolkit.core import PerformanceTool from .core import CustomDumper, DataObject from .experiment import Experiment from .triton import Triton class GPU(DataObject): """ GPU information data object """ name: str driver_version: str cuda_version: str memory: str tdp: str def __init__(self, name: str, driver_version: str, cuda_version: str, memory: str, tdp: str): """ Args: name: name of GPU driver_version: version of driver cuda_version: version of CUDA memory: size of memory available on GPU [MB] tdp: Max TDP of GPU unit """ self.name = name self.driver_version = driver_version self.cuda_version = cuda_version self.memory = memory self.tdp = tdp @staticmethod def from_dict(data: Dict): """ Create GPU object from dictionary Args: data: dictionary with GPU data Returns: GPU object """ return GPU( name=data["name"], driver_version=data["driver_version"], cuda_version=data["cuda_version"], memory=data["memory"], tdp=data["tdp"], ) @staticmethod def from_host(): """ Create GPU object from host data Returns: GPU object """ data = subprocess.check_output( ["nvidia-smi", "--query-gpu=name,driver_version,memory.total,power.max_limit", "--format=csv"] ).decode() lines = data.split(sep="\n") device_details = lines[1].split(",") name = device_details[0].strip() driver_version = device_details[1].strip() memory = device_details[2].strip() tdp = device_details[3].strip() cuda_version = None data = subprocess.check_output(["nvidia-smi", "--query"]).decode() lines = data.split(sep="\n") for line in lines: if line.startswith("CUDA Version"): cuda_version = line.split(":")[1].strip() break return GPU( name=name, driver_version=driver_version, cuda_version=cuda_version, memory=memory, tdp=tdp, ) class CPU(DataObject): """ CPU details """ name: str physical_cores: int logical_cores: int min_frequency: float max_frequency: float def __init__(self, name: str, physical_cores: int, logical_cores: int, min_frequency: float, max_frequency: float): """ Args: name: name of CPU unit physical_cores: number of physical cores available on CPU logical_cores: number of logical cores available on CPU min_frequency: minimal clock frequency max_frequency: maximal clock frequency """ self.name = name self.physical_cores = physical_cores self.logical_cores = logical_cores self.min_frequency = min_frequency self.max_frequency = max_frequency @staticmethod def from_host(): """ Create CPU object from host data Returns: CPU object """ return CPU( name=cpuinfo.get_cpu_info()["brand_raw"], physical_cores=psutil.cpu_count(logical=False), logical_cores=psutil.cpu_count(logical=True), 
min_frequency=psutil.cpu_freq().min, max_frequency=psutil.cpu_freq().max, ) class Memory(DataObject): """ Memory data object """ size: float def __init__(self, size: float): """ Args: size: RAM memory size in MB """ self.size = size @staticmethod def from_host(): """ Create Memory object from host data Returns: Memory object """ svm = psutil.virtual_memory() return Memory(size=svm.total) class SystemInfo(DataObject): """ System Information data object """ system: str cpu: CPU memory: Memory gpu: GPU def __init__(self, system: str, cpu: CPU, memory: Memory, gpu: GPU): """ Args: system: name of operating system cpu: CPU info memory: Memory info gpu: GPU info """ self.system = system self.cpu = cpu self.memory = memory self.gpu = gpu @staticmethod def from_host(): """ Create SystemInfo object from host data Returns: SystemInfo object """ system = platform.platform() gpu = GPU.from_host() memory = Memory.from_host() cpu = CPU.from_host() return SystemInfo(system=system, cpu=cpu, gpu=gpu, memory=memory) class Checkpoint(DataObject): """ Checkpoint data object """ def __init__(self, name: str, url: str, path: Union[str, pathlib.Path]): """ Args: name: Name of checkpoint path: Location of checkpoint on local hardware """ self.name = name self.url = url self.path = pathlib.Path(path) class Dataset(DataObject): """ Dataset data object """ def __init__(self, name: str): """ Args: name: Name of dataset """ self.name = name class Task(DataObject): """ Task data object to store build information """ model_name: str framework: str batching: str started_at: int ended_at: Optional[int] container_version: str checkpoints: Dict[str, Checkpoint] datasets: Dict[str, Dataset] datasets_dir: Optional[Union[str, pathlib.Path]] experiments: List[Experiment] system_info: SystemInfo triton_container_image: Optional[str] triton_custom_operations: Optional[str] performance_tool: PerformanceTool filename: str = "task.yaml" results_dir: str = "results" checkpoints_dir: str = "checkpoints" def __init__( self, model_name: str, ensemble_model_name: Optional[str], framework: str, batching: str, container_version: str, checkpoints: Dict, datasets: Dict, experiments: List, system_info: SystemInfo, started_at: int, logs_dir: pathlib.Path, datasets_dir: Optional[Union[str, pathlib.Path]] = None, ended_at: Optional[int] = None, triton_container_image: Optional[str] = None, triton_custom_operations: Optional[str] = None, triton_load_model_method: str = Triton.LOAD_MODE.EXPLICIT, measurement_steps_offline: int = 8, measurement_steps_online: int = 32, performance_tool: PerformanceTool = PerformanceTool.MODEL_ANALYZER, ): """ Args: model_name: Name of model framework: Model framework container_version: Container version used in task checkpoints: List of checkpoints datasets: List of datasets datasets_dir: Directory where datasests are stored experiments: List of experiments run as part of task system_info: information about node on which experiment was executed started_at: Time when task has started ended_at: Time when task has ended triton_container_image: Custom Triton Container Image used for task triton_custom_operations: Custom operation library path triton_load_model_method: Method how models are loaded on Triton measurement_steps_offline: Number of measurement steps in offline performance stage measurement_steps_online: Number of measurement steps in online performance stage performance_tool: Performance Tool used for generating results logs_dir: place where logs for task are stored """ self.started_at = started_at 
self.ended_at = ended_at self.model_name = model_name self.ensemble_model_name = ensemble_model_name self.framework = framework self.container_version = container_version self.checkpoints = checkpoints self.datasets = datasets self.datasets_dir = pathlib.Path(datasets_dir) self.experiments = experiments self.system_info = system_info self.triton_container_image = triton_container_image self.triton_custom_operations = triton_custom_operations self.triton_load_model_method = triton_load_model_method self.measurement_steps_offline = measurement_steps_offline self.measurement_steps_online = measurement_steps_online self.logs_dir = logs_dir self.batching = batching self.performance_tool = performance_tool def start(self) -> None: """ Update stage execution info at start Returns: None """ self.started_at = int(datetime.utcnow().timestamp()) def end(self) -> None: """ Update stage execution info at end Returns: None """ self.ended_at = int(datetime.utcnow().timestamp()) def to_file(self, file_path: Union[pathlib.Path, str]): """ Store task data to YAML file Args: file_path: path to file where task data has to be saved Returns: None """ task_data = self.to_dict() with open(file_path, "w") as f: yaml.dump(task_data, f, Dumper=CustomDumper, width=240, sort_keys=False)
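A short usage sketch for the host-introspection objects above; the import path mirrors this file's location and is an assumption. It requires `nvidia-smi` on the PATH plus the `psutil` and `py-cpuinfo` packages.
```
# Usage sketch (assumed import path).
from triton.runner.task import SystemInfo

info = SystemInfo.from_host()
print(info.system)                             # platform.platform() string
print(info.cpu.name, info.cpu.physical_cores)  # CPU brand string and core count
print(info.gpu.name, info.gpu.cuda_version)    # parsed from nvidia-smi output
print(info.memory.size)                        # psutil.virtual_memory().total
```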
PyTorch/SpeechSynthesis/Tacotron2/platform
platform
DGXA100_tacotron2_AMP_8NGPU_train
mkdir -p output python -m multiproc train.py -m Tacotron2 -o output/ --amp -lr 1e-3 --epochs 1501 -bs 128 --weight-decay 1e-6 --grad-clip-thresh 1.0 --cudnn-enabled --load-mel-from-disk --training-files=filelists/ljs_mel_text_train_filelist.txt --validation-files=filelists/ljs_mel_text_val_filelist.txt --log-file nvlog.json --anneal-steps 500 1000 1500 --anneal-factor 0.3
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/src/test
test
Taco2LSTMCellLayerPlugin_test
/* * Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * Neither the name of the NVIDIA CORPORATION nor the * names of its contributors may be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include "UnitTest.hpp" #include "binding.h" #include "cudaMemory.h" #include "cudaUtils.h" #include "logging.h" #include "taco2LSTMCellLayerPlugin.h" #include "trtUtils.h" #include "utils.h" #include "NvInfer.h" #include <random> #include <vector> using namespace nvinfer1; using namespace nvinfer1::plugin; using namespace tts; /****************************************************************************** * HELPER FUNCTIONS *********************************************************** *****************************************************************************/ namespace { template <typename RNG> std::vector<float> genVec(const size_t size, RNG& rng) { std::uniform_real_distribution<float> dist(-10.0, 10.0); std::vector<float> vec(size); for (size_t i = 0; i < size; ++i) { vec[i] = dist(rng); } return vec; } } // namespace /****************************************************************************** * UNIT TESTS ***************************************************************** *****************************************************************************/ TEST(CPUCompareFP32I256Test) { std::mt19937 rng(0); const int inputLengthFirst = 256; const int inputLengthSecond = 512; const int inputLength = inputLengthFirst + inputLengthSecond; const int numDimensions = 1024; // weights std::vector<float> inputWeight = genVec(inputLength * numDimensions * 4, rng); const std::vector<float> inputBias = genVec(numDimensions * 4, rng); std::vector<float> hiddenWeight = genVec(numDimensions * numDimensions * 4, rng); const std::vector<float> hiddenBias = genVec(numDimensions * 4, rng); Taco2LSTMCellLayerPlugin layer( TRTUtils::toWeights(inputWeight), TRTUtils::toWeights(hiddenWeight), TRTUtils::toWeights(inputBias), TRTUtils::toWeights(hiddenBias), inputLength, numDimensions, false); const std::vector<float> inputFirst = genVec(inputLengthFirst, rng); const std::vector<float> inputSecond = genVec(inputLengthSecond, rng); const std::vector<float> hiddenState = genVec(numDimensions, rng); const 
std::vector<float> cellState = genVec(numDimensions, rng); CudaMemory<float> inputFirstDevice(inputFirst); CudaMemory<float> inputSecondDevice(inputSecond); CudaMemory<float> hiddenStateDevice(hiddenState); CudaMemory<float> cellStateDevice(cellState); const std::vector<Dims> inputDims{Dims2(1, inputLengthFirst), Dims4(1, inputLengthSecond, 1, 1), Dims2(1, numDimensions), Dims2(1, numDimensions)}; const std::vector<Dims> outputDims{Dims2(1, numDimensions), Dims2(1, numDimensions)}; const std::vector<DataType> dataTypes(4, DataType::kFLOAT); const std::vector<DynamicPluginTensorDesc> inDesc{ {// INPUT_FIRST_INDEX {Dims2(-1, inputLengthFirst), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, inputLengthFirst), Dims2(1, inputLengthFirst)}, {// INPUT_SECOND_INDEX {Dims4(-1, inputLengthSecond, 1, 1), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, inputLengthSecond), Dims2(1, inputLengthSecond)}, {// HIDDEN_INDEX {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}, {// CELL_INDEX {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}}; const std::vector<DynamicPluginTensorDesc> outDesc{{// HIDDEN {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}, {// CELL {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}}; layer.configurePlugin( inDesc.data(), inDesc.size(), outDesc.data(), outDesc.size()); layer.initialize(); const std::vector<const float*> inputs{inputFirstDevice.data(), inputSecondDevice.data(), hiddenStateDevice.data(), cellStateDevice.data()}; CudaMemory<float> hiddenStateOutDevice(hiddenState.size()); CudaMemory<float> cellStateOutDevice(hiddenState.size()); std::vector<float*> outputs{hiddenStateOutDevice.data(), cellStateOutDevice.data()}; const std::vector<PluginTensorDesc> inConf{{// INPUT_FIRST_INDEX Dims2(1, inputLengthFirst), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// INPUT_SECOND_INDEX Dims4(1, inputLengthSecond, 1, 1), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// HIDDEN_INDEX Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// CELL_INDEX Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}}; const std::vector<PluginTensorDesc> outConf{{// HIDDEN Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// CELL Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}}; CudaMemory<uint8_t> workspace(layer.getWorkspaceSize( inConf.data(), static_cast<int>(inConf.size()), outConf.data(), static_cast<int>(outConf.size()))); layer.enqueue( inConf.data(), outConf.data(), reinterpret_cast<const void* const*>(inputs.data()), reinterpret_cast<void**>(outputs.data()), workspace.data(), 0); CudaUtils::sync(0); // perform operations on cpu std::vector<float> prod1(4 * numDimensions, 0); std::vector<float> prod2(4 * numDimensions, 0); std::vector<float> prod3(4 * numDimensions, 0); std::vector<float> prod(4 * numDimensions, 0); // perform input MV for (size_t i = 0; i < inputBias.size(); ++i) { double val = 0; for (size_t j = 0; j < static_cast<size_t>(inputLengthFirst); ++j) { val += inputWeight[i * inputLength + j] * inputFirst[j]; } prod[i] += val; } for (size_t i = 0; i < inputBias.size(); ++i) { double val = 0; for (size_t j = 0; j < static_cast<size_t>(inputLengthSecond); ++j) { val += 
inputWeight[i * inputLength + j + inputLengthFirst] * inputSecond[j]; } prod[i] += val; } for (size_t i = 0; i < hiddenBias.size(); ++i) { double val = 0; for (size_t j = 0; j < hiddenState.size(); ++j) { val += hiddenWeight[i * hiddenState.size() + j] * hiddenState[j]; } prod[i] += val; } // add biases for (size_t i = 0; i < inputBias.size(); ++i) { prod[i] += inputBias[i] + hiddenBias[i]; } std::vector<float> expHiddenOut(hiddenState); std::vector<float> expCellOut(cellState); // perform reduction for (int row = 0; row < numDimensions; ++row) { const float c = cellState[row]; const float i = Utils::sigmoid(prod[row]); const float f = Utils::sigmoid(prod[row + numDimensions]); const float g = tanh(prod[row + numDimensions * 2]); const float o = Utils::sigmoid(prod[row + numDimensions * 3]); const float cPrime = f * c + i * g; const float hPrime = o * tanh(cPrime); expHiddenOut[row] = hPrime; expCellOut[row] = cPrime; } // copy back to host const std::vector<float> actHiddenOut = hiddenStateOutDevice.toHost(); const std::vector<float> actCellOut = cellStateOutDevice.toHost(); ASSERT_EQ(expHiddenOut.size(), actHiddenOut.size()); for (size_t i = 0; i < expHiddenOut.size(); ++i) { EXPECT_NEAR(expHiddenOut[i], actHiddenOut[i], 7.5e-4) << "i = " << i; } ASSERT_EQ(expCellOut.size(), actCellOut.size()); for (size_t i = 0; i < expCellOut.size(); ++i) { EXPECT_NEAR(expCellOut[i], actCellOut[i], 5e-3) << "i = " << i; } } TEST(CPUCompareFP32I1024Test) { std::mt19937 rng(0); const int inputLengthFirst = 1024; const int inputLengthSecond = 512; const int inputLength = inputLengthFirst + inputLengthSecond; const int numDimensions = 1024; // weights std::vector<float> inputWeight = genVec(inputLength * numDimensions * 4, rng); const std::vector<float> inputBias = genVec(numDimensions * 4, rng); std::vector<float> hiddenWeight = genVec(numDimensions * numDimensions * 4, rng); const std::vector<float> hiddenBias = genVec(numDimensions * 4, rng); Taco2LSTMCellLayerPlugin layer( TRTUtils::toWeights(inputWeight), TRTUtils::toWeights(hiddenWeight), TRTUtils::toWeights(inputBias), TRTUtils::toWeights(hiddenBias), inputLength, numDimensions, false); const std::vector<float> inputFirst = genVec(inputLengthFirst, rng); const std::vector<float> inputSecond = genVec(inputLengthSecond, rng); const std::vector<float> hiddenState = genVec(numDimensions, rng); const std::vector<float> cellState = genVec(numDimensions, rng); CudaMemory<float> inputFirstDevice(inputFirst); CudaMemory<float> inputSecondDevice(inputSecond); CudaMemory<float> hiddenStateDevice(hiddenState); CudaMemory<float> cellStateDevice(cellState); const std::vector<Dims> inputDims{Dims2(1, inputLengthFirst), Dims4(1, inputLengthSecond, 1, 1), Dims2(1, numDimensions), Dims2(1, numDimensions)}; const std::vector<Dims> outputDims{Dims2(1, numDimensions), Dims2(1, numDimensions)}; const std::vector<DataType> dataTypes(4, DataType::kFLOAT); const std::vector<DynamicPluginTensorDesc> inDesc{ {// INPUT_FIRST_INDEX {Dims2(-1, inputLengthFirst), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, inputLengthFirst), Dims2(1, inputLengthFirst)}, {// INPUT_SECOND_INDEX {Dims4(-1, inputLengthSecond, 1, 1), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, inputLengthSecond), Dims2(1, inputLengthSecond)}, {// HIDDEN_INDEX {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}, {// CELL_INDEX {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, 
numDimensions), Dims2(1, numDimensions)}}; const std::vector<DynamicPluginTensorDesc> outDesc{{// HIDDEN {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}, {// CELL {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}}; layer.configurePlugin( inDesc.data(), inDesc.size(), outDesc.data(), outDesc.size()); layer.initialize(); const std::vector<const float*> inputs{inputFirstDevice.data(), inputSecondDevice.data(), hiddenStateDevice.data(), cellStateDevice.data()}; CudaMemory<float> hiddenStateOutDevice(hiddenState.size()); CudaMemory<float> cellStateOutDevice(hiddenState.size()); std::vector<float*> outputs{hiddenStateOutDevice.data(), cellStateOutDevice.data()}; const std::vector<PluginTensorDesc> inConf{{// INPUT_FIRST_INDEX Dims2(1, inputLengthFirst), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// INPUT_SECOND_INDEX Dims4(1, inputLengthSecond, 1, 1), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// HIDDEN_INDEX Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// CELL_INDEX Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}}; const std::vector<PluginTensorDesc> outConf{{// HIDDEN Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// CELL Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}}; CudaMemory<uint8_t> workspace(layer.getWorkspaceSize( inConf.data(), static_cast<int>(inConf.size()), outConf.data(), static_cast<int>(outConf.size()))); layer.enqueue( inConf.data(), outConf.data(), reinterpret_cast<const void* const*>(inputs.data()), reinterpret_cast<void**>(outputs.data()), workspace.data(), 0); CudaUtils::sync(0); // perform operations on cpu std::vector<float> prod1(4 * numDimensions, 0); std::vector<float> prod2(4 * numDimensions, 0); std::vector<float> prod3(4 * numDimensions, 0); std::vector<float> prod(4 * numDimensions, 0); // perform input MV for (size_t i = 0; i < inputBias.size(); ++i) { double val = 0; for (size_t j = 0; j < static_cast<size_t>(inputLengthFirst); ++j) { val += inputWeight[i * inputLength + j] * inputFirst[j]; } prod[i] += val; } for (size_t i = 0; i < inputBias.size(); ++i) { double val = 0; for (size_t j = 0; j < static_cast<size_t>(inputLengthSecond); ++j) { val += inputWeight[i * inputLength + j + inputLengthFirst] * inputSecond[j]; } prod[i] += val; } for (size_t i = 0; i < hiddenBias.size(); ++i) { double val = 0; for (size_t j = 0; j < hiddenState.size(); ++j) { val += hiddenWeight[i * hiddenState.size() + j] * hiddenState[j]; } prod[i] += val; } // add biases for (size_t i = 0; i < inputBias.size(); ++i) { prod[i] += inputBias[i] + hiddenBias[i]; } std::vector<float> expHiddenOut(hiddenState); std::vector<float> expCellOut(cellState); // perform reduction for (int row = 0; row < numDimensions; ++row) { const float c = cellState[row]; const float i = Utils::sigmoid(prod[row]); const float f = Utils::sigmoid(prod[row + numDimensions]); const float g = tanh(prod[row + numDimensions * 2]); const float o = Utils::sigmoid(prod[row + numDimensions * 3]); const float cPrime = f * c + i * g; const float hPrime = o * tanh(cPrime); expHiddenOut[row] = hPrime; expCellOut[row] = cPrime; } // copy back to host const std::vector<float> actHiddenOut = hiddenStateOutDevice.toHost(); const std::vector<float> actCellOut = cellStateOutDevice.toHost(); ASSERT_EQ(expHiddenOut.size(), actHiddenOut.size()); for (size_t i = 0; i < 
expHiddenOut.size(); ++i) { EXPECT_NEAR(expHiddenOut[i], actHiddenOut[i], 7.5e-4) << "i = " << i; } ASSERT_EQ(expCellOut.size(), actCellOut.size()); for (size_t i = 0; i < expCellOut.size(); ++i) { EXPECT_NEAR(expCellOut[i], actCellOut[i], 5e-3) << "i = " << i; } } TEST(CPUCompareFP16I256Test) { std::mt19937 rng(0); const int inputLengthFirst = 256; const int inputLengthSecond = 512; const int inputLength = inputLengthFirst + inputLengthSecond; const int numDimensions = 1024; // weights std::vector<float> inputWeight = genVec(inputLength * numDimensions * 4, rng); const std::vector<float> inputBias = genVec(numDimensions * 4, rng); std::vector<float> hiddenWeight = genVec(numDimensions * numDimensions * 4, rng); const std::vector<float> hiddenBias = genVec(numDimensions * 4, rng); Taco2LSTMCellLayerPlugin layer( TRTUtils::toWeights(inputWeight), TRTUtils::toWeights(hiddenWeight), TRTUtils::toWeights(inputBias), TRTUtils::toWeights(hiddenBias), inputLength, numDimensions, true); const std::vector<float> inputFirst = genVec(inputLengthFirst, rng); const std::vector<float> inputSecond = genVec(inputLengthSecond, rng); const std::vector<float> hiddenState = genVec(numDimensions, rng); const std::vector<float> cellState = genVec(numDimensions, rng); CudaMemory<float> inputFirstDevice(inputFirst); CudaMemory<float> inputSecondDevice(inputSecond); CudaMemory<float> hiddenStateDevice(hiddenState); CudaMemory<float> cellStateDevice(cellState); const std::vector<Dims> inputDims{Dims2(1, inputLengthFirst), Dims4(1, inputLengthSecond, 1, 1), Dims2(1, numDimensions), Dims2(1, numDimensions)}; const std::vector<Dims> outputDims{Dims2(1, numDimensions), Dims2(1, numDimensions)}; const std::vector<DataType> dataTypes(4, DataType::kFLOAT); const std::vector<DynamicPluginTensorDesc> inDesc{ {// INPUT_FIRST_INDEX {Dims2(-1, inputLengthFirst), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, inputLengthFirst), Dims2(1, inputLengthFirst)}, {// INPUT_SECOND_INDEX {Dims4(-1, inputLengthSecond, 1, 1), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, inputLengthSecond), Dims2(1, inputLengthSecond)}, {// HIDDEN_INDEX {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}, {// CELL_INDEX {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}}; const std::vector<DynamicPluginTensorDesc> outDesc{{// HIDDEN {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}, {// CELL {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}}; layer.configurePlugin( inDesc.data(), inDesc.size(), outDesc.data(), outDesc.size()); layer.initialize(); const std::vector<const float*> inputs{inputFirstDevice.data(), inputSecondDevice.data(), hiddenStateDevice.data(), cellStateDevice.data()}; CudaMemory<float> hiddenStateOutDevice(hiddenState.size()); CudaMemory<float> cellStateOutDevice(hiddenState.size()); std::vector<float*> outputs{hiddenStateOutDevice.data(), cellStateOutDevice.data()}; const std::vector<PluginTensorDesc> inConf{{// INPUT_FIRST_INDEX Dims2(1, inputLengthFirst), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// INPUT_SECOND_INDEX Dims4(1, inputLengthSecond, 1, 1), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// HIDDEN_INDEX Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// CELL_INDEX Dims2(1, 
numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}}; const std::vector<PluginTensorDesc> outConf{{// HIDDEN Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// CELL Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}}; CudaMemory<uint8_t> workspace(layer.getWorkspaceSize( inConf.data(), static_cast<int>(inConf.size()), outConf.data(), static_cast<int>(outConf.size()))); layer.enqueue( inConf.data(), outConf.data(), reinterpret_cast<const void* const*>(inputs.data()), reinterpret_cast<void**>(outputs.data()), workspace.data(), 0); CudaUtils::sync(0); // perform operations on cpu std::vector<float> prod1(4 * numDimensions, 0); std::vector<float> prod2(4 * numDimensions, 0); std::vector<float> prod3(4 * numDimensions, 0); std::vector<float> prod(4 * numDimensions, 0); // perform input MV for (size_t i = 0; i < inputBias.size(); ++i) { double val = 0; for (size_t j = 0; j < static_cast<size_t>(inputLengthFirst); ++j) { val += inputWeight[i * inputLength + j] * inputFirst[j]; } prod[i] += val; } for (size_t i = 0; i < inputBias.size(); ++i) { double val = 0; for (size_t j = 0; j < static_cast<size_t>(inputLengthSecond); ++j) { val += inputWeight[i * inputLength + j + inputLengthFirst] * inputSecond[j]; } prod[i] += val; } for (size_t i = 0; i < hiddenBias.size(); ++i) { double val = 0; for (size_t j = 0; j < hiddenState.size(); ++j) { val += hiddenWeight[i * hiddenState.size() + j] * hiddenState[j]; } prod[i] += val; } // add biases for (size_t i = 0; i < inputBias.size(); ++i) { prod[i] += inputBias[i] + hiddenBias[i]; } std::vector<float> expHiddenOut(hiddenState); std::vector<float> expCellOut(cellState); // perform reduction for (int row = 0; row < numDimensions; ++row) { const float c = cellState[row]; const float i = Utils::sigmoid(prod[row]); const float f = Utils::sigmoid(prod[row + numDimensions]); const float g = tanh(prod[row + numDimensions * 2]); const float o = Utils::sigmoid(prod[row + numDimensions * 3]); const float cPrime = f * c + i * g; const float hPrime = o * tanh(cPrime); expHiddenOut[row] = hPrime; expCellOut[row] = cPrime; } // copy back to host const std::vector<float> actHiddenOut = hiddenStateOutDevice.toHost(); const std::vector<float> actCellOut = cellStateOutDevice.toHost(); ASSERT_EQ(expHiddenOut.size(), actHiddenOut.size()); for (size_t i = 0; i < expHiddenOut.size(); ++i) { EXPECT_NEAR(expHiddenOut[i], actHiddenOut[i], 4.5e-1) << "i = " << i; } ASSERT_EQ(expCellOut.size(), actCellOut.size()); for (size_t i = 0; i < expCellOut.size(); ++i) { EXPECT_NEAR(expCellOut[i], actCellOut[i], 4.5e-1) << "i = " << i; } } TEST(CPUCompareFP16I1024Test) { std::mt19937 rng(0); const int inputLengthFirst = 1024; const int inputLengthSecond = 512; const int inputLength = inputLengthFirst + inputLengthSecond; const int numDimensions = 1024; // weights std::vector<float> inputWeight = genVec(inputLength * numDimensions * 4, rng); const std::vector<float> inputBias = genVec(numDimensions * 4, rng); std::vector<float> hiddenWeight = genVec(numDimensions * numDimensions * 4, rng); const std::vector<float> hiddenBias = genVec(numDimensions * 4, rng); Taco2LSTMCellLayerPlugin layer( TRTUtils::toWeights(inputWeight), TRTUtils::toWeights(hiddenWeight), TRTUtils::toWeights(inputBias), TRTUtils::toWeights(hiddenBias), inputLength, numDimensions, true); const std::vector<float> inputFirst = genVec(inputLengthFirst, rng); const std::vector<float> inputSecond = genVec(inputLengthSecond, rng); const std::vector<float> hiddenState = 
genVec(numDimensions, rng); const std::vector<float> cellState = genVec(numDimensions, rng); CudaMemory<float> inputFirstDevice(inputFirst); CudaMemory<float> inputSecondDevice(inputSecond); CudaMemory<float> hiddenStateDevice(hiddenState); CudaMemory<float> cellStateDevice(cellState); const std::vector<Dims> inputDims{Dims2(1, inputLengthFirst), Dims4(1, inputLengthSecond, 1, 1), Dims2(1, numDimensions), Dims2(1, numDimensions)}; const std::vector<Dims> outputDims{Dims2(1, numDimensions), Dims2(1, numDimensions)}; const std::vector<DataType> dataTypes(4, DataType::kFLOAT); const std::vector<DynamicPluginTensorDesc> inDesc{ {// INPUT_FIRST_INDEX {Dims2(-1, inputLengthFirst), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, inputLengthFirst), Dims2(1, inputLengthFirst)}, {// INPUT_SECOND_INDEX {Dims4(-1, inputLengthSecond, 1, 1), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, inputLengthSecond), Dims2(1, inputLengthSecond)}, {// HIDDEN_INDEX {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}, {// CELL_INDEX {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}}; const std::vector<DynamicPluginTensorDesc> outDesc{{// HIDDEN {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}, {// CELL {Dims2(-1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, Dims2(1, numDimensions), Dims2(1, numDimensions)}}; layer.configurePlugin( inDesc.data(), inDesc.size(), outDesc.data(), outDesc.size()); layer.initialize(); const std::vector<const float*> inputs{inputFirstDevice.data(), inputSecondDevice.data(), hiddenStateDevice.data(), cellStateDevice.data()}; CudaMemory<float> hiddenStateOutDevice(hiddenState.size()); CudaMemory<float> cellStateOutDevice(hiddenState.size()); std::vector<float*> outputs{hiddenStateOutDevice.data(), cellStateOutDevice.data()}; const std::vector<PluginTensorDesc> inConf{{// INPUT_FIRST_INDEX Dims2(1, inputLengthFirst), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// INPUT_SECOND_INDEX Dims4(1, inputLengthSecond, 1, 1), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// HIDDEN_INDEX Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// CELL_INDEX Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}}; const std::vector<PluginTensorDesc> outConf{{// HIDDEN Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}, {// CELL Dims2(1, numDimensions), DataType::kFLOAT, TensorFormat::kLINEAR, 1.0f}}; CudaMemory<uint8_t> workspace(layer.getWorkspaceSize( inConf.data(), static_cast<int>(inConf.size()), outConf.data(), static_cast<int>(outConf.size()))); layer.enqueue( inConf.data(), outConf.data(), reinterpret_cast<const void* const*>(inputs.data()), reinterpret_cast<void**>(outputs.data()), workspace.data(), 0); CudaUtils::sync(0); // perform operations on cpu std::vector<float> prod1(4 * numDimensions, 0); std::vector<float> prod2(4 * numDimensions, 0); std::vector<float> prod3(4 * numDimensions, 0); std::vector<float> prod(4 * numDimensions, 0); // perform input MV for (size_t i = 0; i < inputBias.size(); ++i) { double val = 0; for (size_t j = 0; j < static_cast<size_t>(inputLengthFirst); ++j) { val += inputWeight[i * inputLength + j] * inputFirst[j]; } prod[i] += val; } for (size_t i = 0; i < inputBias.size(); ++i) { double val = 0; for (size_t j = 0; j < 
static_cast<size_t>(inputLengthSecond); ++j) { val += inputWeight[i * inputLength + j + inputLengthFirst] * inputSecond[j]; } prod[i] += val; } for (size_t i = 0; i < hiddenBias.size(); ++i) { double val = 0; for (size_t j = 0; j < hiddenState.size(); ++j) { val += hiddenWeight[i * hiddenState.size() + j] * hiddenState[j]; } prod[i] += val; } // add biases for (size_t i = 0; i < inputBias.size(); ++i) { prod[i] += inputBias[i] + hiddenBias[i]; } std::vector<float> expHiddenOut(hiddenState); std::vector<float> expCellOut(cellState); // perform reduction for (int row = 0; row < numDimensions; ++row) { const float c = cellState[row]; const float i = Utils::sigmoid(prod[row]); const float f = Utils::sigmoid(prod[row + numDimensions]); const float g = tanh(prod[row + numDimensions * 2]); const float o = Utils::sigmoid(prod[row + numDimensions * 3]); const float cPrime = f * c + i * g; const float hPrime = o * tanh(cPrime); expHiddenOut[row] = hPrime; expCellOut[row] = cPrime; } // copy back to host const std::vector<float> actHiddenOut = hiddenStateOutDevice.toHost(); const std::vector<float> actCellOut = cellStateOutDevice.toHost(); ASSERT_EQ(expHiddenOut.size(), actHiddenOut.size()); for (size_t i = 0; i < expHiddenOut.size(); ++i) { EXPECT_NEAR(expHiddenOut[i], actHiddenOut[i], 4.5e-1) << "i = " << i; } ASSERT_EQ(expCellOut.size(), actCellOut.size()); for (size_t i = 0; i < expCellOut.size(); ++i) { EXPECT_NEAR(expCellOut[i], actCellOut[i], 4.5e-1) << "i = " << i; } }
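For reference, the CPU check in both of these tests implements the standard LSTM cell update that the plugin is expected to reproduce. Writing the per-row arithmetic of the reduction loop above as equations, with the four gate pre-activations stored in `prod` in blocks of `numDimensions` (input, forget, cell-candidate, output, in that order):

```latex
\begin{aligned}
i &= \sigma(\mathrm{prod}[0{:}N]), \quad
f = \sigma(\mathrm{prod}[N{:}2N]), \quad
g = \tanh(\mathrm{prod}[2N{:}3N]), \quad
o = \sigma(\mathrm{prod}[3N{:}4N]), \\
c' &= f \odot c + i \odot g, \qquad
h' = o \odot \tanh(c'),
\end{aligned}
```

where $N$ is `numDimensions`, $c$ is the previous cell state, and $\odot$ denotes element-wise multiplication; `expCellOut` and `expHiddenOut` hold $c'$ and $h'$ respectively.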
PyTorch/SpeechSynthesis/FastPitch/triton/deployment_toolkit
deployment_toolkit
warmup
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys
from typing import List, Optional


def warmup(
    model_name: str,
    batch_sizes: List[int],
    triton_gpu_engine_count: int = 1,
    triton_instances: int = 1,
    profiling_data: str = "random",
    input_shapes: Optional[List[str]] = None,
    server_url: str = "localhost",
    measurement_window: int = 10000,
    shared_memory: bool = False
):
    print("\n")
    print(f"==== Warmup start ====")
    print("\n")

    input_shapes = " ".join(map(lambda shape: f" --shape {shape}", input_shapes)) if input_shapes else ""

    measurement_window = 6 * measurement_window

    max_batch_size = max(batch_sizes)
    max_total_requests = 2 * max_batch_size * triton_instances * triton_gpu_engine_count
    max_concurrency = min(256, max_total_requests)
    batch_size = max(1, max_total_requests // 256)
    step = max(1, max_concurrency // 2)
    min_concurrency = step

    exec_args = f"""-m {model_name} \
        -x 1 \
        -p {measurement_window} \
        -v \
        -i http \
        -u {server_url}:8000 \
        -b {batch_size} \
        --concurrency-range {min_concurrency}:{max_concurrency}:{step} \
        --input-data {profiling_data} {input_shapes}"""

    if shared_memory:
        exec_args += " --shared-memory=cuda"

    result = os.system(f"perf_client {exec_args}")
    if result != 0:
        print(f"Failed running performance tests. Perf client failed with exit code {result}")
        sys.exit(1)

    print("\n")
    print(f"==== Warmup done ====")
    print("\n")
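The helper above builds a `perf_client` concurrency sweep from the deployment parameters. As a minimal sketch of how it might be invoked (assuming the module is importable as `warmup`; the model name and input shape below are placeholders, not values used by this repository):

```python
# Hypothetical invocation of the warmup() helper defined above.
from warmup import warmup  # assumed import path

warmup(
    model_name="fastpitch",            # placeholder model name registered with Triton
    batch_sizes=[1, 4, 8],             # max(batch_sizes) = 8
    triton_gpu_engine_count=1,
    triton_instances=1,
    profiling_data="random",
    input_shapes=["INPUT__0:200"],     # forwarded to perf_client as "--shape INPUT__0:200"
    server_url="localhost",
    measurement_window=10000,          # multiplied by 6 inside warmup()
    shared_memory=False,
)

# With these values: max_total_requests = 2 * 8 * 1 * 1 = 16,
# max_concurrency = min(256, 16) = 16, batch_size = max(1, 16 // 256) = 1,
# step = max(1, 16 // 2) = 8, so perf_client sweeps concurrency 8:16:8.
```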
TensorFlow/Recommendation/VAE-CF
VAE-CF
README
# Variational Autoencoder for Collaborative Filtering for TensorFlow This repository provides a script and recipe to train the Variational Autoencoder model for TensorFlow to achieve state-of-the-art accuracy on a Collaborative Filtering task and is tested and maintained by NVIDIA. VAE-CF model for TensorFlow1 is no longer maintained and will soon become unavailable, please consider other PyTorch or TensorFlow2 models as a substitute for your requirements. ## Table Of Contents * [Model overview](#model-overview) * [Model architecture](#model-architecture) * [Default configuration](#default-configuration) * [Feature support matrix](#feature-support-matrix) * [Features](#features) * [Mixed precision training](#mixed-precision-training) * [Enabling mixed precision](#enabling-mixed-precision) * [Enabling TF32](#enabling-tf32) * [Setup](#setup) * [Requirements](#requirements) * [Quick Start Guide](#quick-start-guide) * [Advanced](#advanced) * [Scripts and sample code](#scripts-and-sample-code) * [Parameters](#parameters) * [Command-line options](#command-line-options) * [Getting the data](#getting-the-data) * [Dataset guidelines](#dataset-guidelines) * [Training process](#training-process) * [Inference process](#inference-process) * [Performance](#performance) * [Benchmarking](#benchmarking) * [Training performance benchmark](#training-performance-benchmark) * [Inference performance benchmark](#inference-performance-benchmark) * [Results](#results) * [Training accuracy results](#training-accuracy-results) * [Training accuracy: NVIDIA DGX A100 (8x A100 40GB)](#training-accuracy-nvidia-dgx-a100-8x-a100-40gb) * [Training accuracy: NVIDIA DGX-1 (8x V100 32GB)](#training-accuracy-nvidia-dgx-1-8x-v100-32gb) * [Training performance results](#training-performance-results) * [Training performance: NVIDIA DGX A100 (8x A100 40GB)](#training-performance-nvidia-dgx-a100-8x-a100-40gb) * [Training performance: NVIDIA DGX-1 (8x V100 32GB)](#training-performance-nvidia-dgx-1-8x-v100-32gb) * [Inference performance results](#inference-performance-results) * [Inference performance: NVIDIA DGX A100 (1x A100 40GB)](#inference-performance-nvidia-dgx-a100-1x-a100-40gb) * [Inference performance: NVIDIA DGX-1 (1x V100 16GB)](#inference-performance-nvidia-dgx-1-1x-v100-16gb) * [Release notes](#release-notes) * [Changelog](#changelog) * [Known issues](#known-issues) * [AMP speedup for Ampere](#amp-speedup-for-ampere) * [Multi-GPU scaling](#multi-gpu-scaling) ## Model overview The Variational Autoencoder (VAE) shown here is an optimized implementation of the architecture first described in [Variational Autoencoders for Collaborative Filtering](https://arxiv.org/abs/1802.05814) and can be used for recommendation tasks. The main differences between this model and the original one are the performance optimizations, such as using sparse matrices, mixed precision, larger mini-batches and multiple GPUs. These changes enabled us to achieve a significantly higher speed while maintaining the same accuracy. Because of our fast implementation, we've also been able to carry out an extensive hyperparameter search to slightly improve the accuracy metrics. When using Variational Autoencoder for Collaborative Filtering (VAE-CF), you can quickly train a recommendation model for the collaborative filtering task. The required input data consists of pairs of user-item IDs for each interaction between a user and an item. With a trained model, you can run inference to predict what items is a new user most likely to interact with. 
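To make the input format and the prediction flow concrete, the following is a minimal, self-contained NumPy sketch of turning one user's (user ID, item ID) pairs into a binary interaction vector and pushing it through an encode-sample-decode pass. It only illustrates the idea described in this overview; the layer sizes and weights are made up and it is not the code used in this repository.

```python
import numpy as np

n_items, n_latent = 1000, 16             # toy sizes, not the repository defaults
rng = np.random.default_rng(0)

# Interactions for one user as (user ID, item ID) pairs -> binary input vector.
interactions = [(0, 3), (0, 42), (0, 977)]
x = np.zeros(n_items, dtype=np.float32)
x[[item for _, item in interactions]] = 1.0

# Toy encoder: one linear layer producing the mean and log-variance of the
# variational distribution over the latent user representation.
w_enc = rng.normal(scale=0.01, size=(n_items, 2 * n_latent))
mu, logvar = np.split(x @ w_enc, 2)

# Reparameterization: sample a latent vector from N(mu, sigma^2).
z = mu + np.exp(0.5 * logvar) * rng.normal(size=n_latent)

# Toy decoder: a score for every item; softmax turns scores into probabilities.
w_dec = rng.normal(scale=0.01, size=(n_latent, n_items))
scores = z @ w_dec
probs = np.exp(scores - scores.max())
probs /= probs.sum()

# Recommend the highest-probability items the user has not interacted with yet.
recommended = [i for i in np.argsort(-probs) if x[i] == 0][:10]
print(recommended)
```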
This model is trained with mixed precision using Tensor Cores on NVIDIA Volta, Turing and Ampere GPUs. Therefore, researchers can get results 1.9x faster than training without Tensor Cores, while experiencing the benefits of mixed precision training. This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time. This implementation has been initially developed as an educational project at the University of Warsaw by Albert Cieślak, Michał Filipiuk, Frederic Grabowski and Radosław Rowicki. ### Model architecture <p align="center"> <img width="70%" src="images/autoencoder.png" /> <br> Figure 1. The architecture of the VAE-CF model </p> The Variational Autoencoder is a neural network that provides collaborative filtering based on implicit feedback. Specifically, it provides product recommendations based on user and item interactions. The training data for this model should contain a sequence of (user ID, item ID) pairs indicating that the specified user has interacted with the specified item. The model consists of two parts: the encoder and the decoder. The encoder transforms the vector, which contains the interactions for a specific user, into a *n*-dimensional variational distribution. We can then use this variational distribution to obtain a latent representation of a user. This latent representation is then fed into the decoder. The result is a vector of item interaction probabilities for a particular user. ### Default configuration The following features were implemented in this model: - Sparse matrix support - Data-parallel multi-GPU training - Dynamic loss scaling with backoff for tensor cores (mixed precision) training ### Feature support matrix The following features are supported by this model: | Feature | VAE-CF |-----------------------|-------------------------- |Horovod Multi-GPU (NCCL) | Yes |Automatic mixed precision (AMP) | Yes #### Features Horovod: Horovod is a distributed training framework for TensorFlow, Keras, PyTorch and MXNet. The goal of Horovod is to make distributed deep learning fast and easy to use. For more information about how to get started with Horovod, see the [Horovod: Official repository](https://github.com/horovod/horovod). Multi-GPU training with Horovod: Our model uses Horovod to implement efficient multi-GPU training with NCCL. For details, see example sources in this repository or see the [TensorFlow tutorial](https://github.com/horovod/horovod/#usage). ### Mixed precision training Mixed precision is the combined use of different numerical precisions in a computational method. [Mixed precision](https://arxiv.org/abs/1710.03740) training offers significant computational speedup by performing operations in half-precision format, while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of [Tensor Cores](https://developer.nvidia.com/tensor-cores) in Volta, and following with both the Turing and Ampere architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x overall speedup on the most arithmetically intense model architectures. Using mixed precision training requires two steps: 1. Porting the model to use the FP16 data type where appropriate. 2. Adding loss scaling to preserve small gradient values. 
This can now be achieved using Automatic Mixed Precision (AMP) for TensorFlow to enable the full [mixed precision methodology](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#tensorflow) in your existing TensorFlow model code. AMP enables mixed precision training on Volta, Turing, and NVIDIA Ampere GPU architectures automatically. The TensorFlow framework code makes all necessary model changes internally. In TF-AMP, the computational graph is optimized to use as few casts as necessary and maximize the use of FP16, and the loss scaling is automatically applied inside of supported optimizers. AMP can be configured to work with the existing tf.contrib loss scaling manager by disabling the AMP scaling with a single environment variable to perform only the automatic mixed-precision optimization. It accomplishes this by automatically rewriting all computation graphs with the necessary operations to enable mixed precision training and automatic loss scaling. For information about: - How to train using mixed precision, see the [Mixed Precision Training](https://arxiv.org/abs/1710.03740) paper and [Training With Mixed Precision](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html) documentation. - Techniques used for mixed precision training, see the [Mixed-Precision Training of Deep Neural Networks](https://devblogs.nvidia.com/mixed-precision-training-deep-neural-networks/) blog. - How to access and enable AMP for TensorFlow, see [Using TF-AMP](https://docs.nvidia.com/deeplearning/dgx/tensorflow-user-guide/index.html#tfamp) from the TensorFlow User Guide. #### Enabling mixed precision Mixed precision is enabled in TensorFlow by using the Automatic Mixed Precision (TF-AMP) extension which casts variables to half-precision upon retrieval, while storing variables in single-precision format. Furthermore, to preserve small gradient magnitudes in backpropagation, a [loss scaling](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#lossscaling) step must be included when applying gradients. In TensorFlow, loss scaling can be applied statically by using simple multiplication of loss by a constant value or automatically, by TF-AMP. Automatic mixed precision makes all the adjustments internally in TensorFlow, providing two benefits over manual operations. First, programmers need not modify network model code, reducing development and maintenance effort. Second, using AMP maintains forward and backward compatibility with all the APIs for defining and running TensorFlow models. To enable mixed precision, you can simply add the values to the environmental variables inside your training script: - Enable TF-AMP graph rewrite: ``` os.environ["TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE"] = '1' ``` - Enable Automated Mixed Precision: ``` os.environ['TF_ENABLE_AUTO_MIXED_PRECISION'] = '1' ``` To enable mixed precision in VAE-CF, run the `main.py` script with the `--amp` flag. #### Enabling TF32 TensorFloat-32 (TF32) is the new math mode in [NVIDIA A100](https://www.nvidia.com/en-us/data-center/a100/) GPUs for handling the matrix math also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs. TF32 Tensor Cores can speed up networks using FP32, typically with no loss of accuracy. It is more robust than FP16 for models which require high dynamic range for weights or activations. 
For more information, refer to the [TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) blog post. TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default. ## Setup The following section lists the requirements that you need to meet in order to start training the VAE-CF model. ### Requirements This repository contains Dockerfile which extends the Tensorflow NGC container and encapsulates some dependencies. Aside from these dependencies, ensure you have the following components: - [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker) - TensorFlow-1 20.06+ NGC container - Supported GPUs: - [NVIDIA Volta architecture](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/) - [NVIDIA Turing architecture](https://www.nvidia.com/en-us/geforce/turing/) - [NVIDIA Ampere architecture](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/) For more information about how to get started with NGC containers, see the following sections from the NVIDIA GPU Cloud Documentation and the Deep Learning Documentation: - [Getting Started Using NVIDIA GPU Cloud](https://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html) - [Accessing And Pulling From The NGC Container Registry](https://docs.nvidia.com/deeplearning/frameworks/user-guide/index.html#accessing_registry) - [Running TensorFlow](https://docs.nvidia.com/deeplearning/frameworks/tensorflow-release-notes/running.html#running) For those unable to use the TensorFlow NGC container, to set up the required environment or create your own container, see the versioned [NVIDIA Container Support Matrix](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html). ## Quick Start Guide To train your model using mixed or TF32 precision with Tensor Cores or using FP32, perform the following steps using the default parameters of the VAE-CF model on the [MovieLens 20m dataset](https://grouplens.org/datasets/movielens/20m/). For the specifics concerning training and inference, see the [Advanced](#advanced) section. 1. Clone the repository. git clone https://github.com/NVIDIA/DeepLearningExamples cd DeepLearningExamples/Tensorflow/Recommendation/VAE_CF ``` 2. Build the VAE TensorFlow NGC container. ```bash docker build . -t vae ``` 3. Launch the VAE-CF TensorFlow Docker container. ```bash docker run -it --rm --runtime=nvidia -v /data/vae-cf:/data vae /bin/bash ``` 4. Downloading the dataset: Here we use the [MovieLens 20m dataset](https://grouplens.org/datasets/movielens/20m/). * If you do not have the dataset downloaded: Run the commands below to download and extract the MovieLens dataset to the ```/data/ml-20m/extracted/``` folder. ``` cd /data mkdir ml-20m cd ml-20m mkdir extracted cd extracted wget http://files.grouplens.org/datasets/movielens/ml-20m.zip unzip ml-20m.zip ``` * If you already have the dataset downloaded and unzipped elsewhere: Run the below commands to first exit the current VAE-CF Docker container and then Restart the VAE-CF Docker Container (like in Step 3 above) by mounting the MovieLens dataset location ``` exit docker run -it --rm --runtime=nvidia -v /data/vae-cf:/data -v <ml-20m folder path>:/data/ml-20m/extracted/ml-20m vae /bin/bash ``` where, the unzipped MovieLens dataset is at ```<ml-20m folder path>``` 5. Prepare the dataset. ```bash python prepare_dataset.py ``` 6. Start training on 8 GPUs. 
```bash mpirun --bind-to numa --allow-run-as-root -np 8 -H localhost:8 python main.py --train --amp --checkpoint_dir ./checkpoints ``` 7. Start validation/evaluation. The model is exported to the default `model_dir` and can be loaded and tested using: ```bash python main.py --test --amp --checkpoint_dir ./checkpoints ``` ## Advanced The following sections provide greater details of the dataset, running training and inference, and the training results. ### Scripts and sample code The `main.py` script provides an entry point to all the provided functionalities. This includes running training, testing and inference. The behavior of the script is controlled by command-line arguments listed below in the [Parameters](#parameters) section. The `prepare_dataset.py` script can be used to preprocess the MovieLens 20m dataset. Most of the deep learning logic is implemented in the `vae/models` subdirectory. The `vae/load` subdirectory contains the code for preprocessing the dataset. The `vae/metrics` subdirectory provides functions for computing the validation metrics such as recall and [NDCG](https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG). ### Parameters The most important command-line parameters include: * `--data_dir` which specifies the directory inside the docker container where the data will be stored, overriding the default location ```/data``` * `--checkpoint_dir` which controls if and where the checkpoints will be stored * `--amp` for enabling mixed precision training There are also multiple parameters controlling the various hyperparameters of the training process, such as the learning rate, batch size etc. ### Command-line options To see the full list of available options and their descriptions, use the `-h` or `--help` command-line option, for example: ```bash python main.py --help usage: main.py [-h] [--train] [--test] [--inference_benchmark] [--amp] [--epochs EPOCHS] [--batch_size_train BATCH_SIZE_TRAIN] [--batch_size_validation BATCH_SIZE_VALIDATION] [--validation_step VALIDATION_STEP] [--warm_up_epochs WARM_UP_EPOCHS] [--total_anneal_steps TOTAL_ANNEAL_STEPS] [--anneal_cap ANNEAL_CAP] [--lam LAM] [--lr LR] [--beta1 BETA1] [--beta2 BETA2] [--top_results TOP_RESULTS] [--xla] [--trace] [--activation ACTIVATION] [--log_path LOG_PATH] [--seed SEED] [--data_dir DATA_DIR] [--checkpoint_dir CHECKPOINT_DIR] Train a Variational Autoencoder for Collaborative Filtering in TensorFlow optional arguments: -h, --help show this help message and exit --train Run training of VAE --test Run validation of VAE --inference_benchmark Benchmark the inference throughput and latency --amp Enable Automatic Mixed Precision --epochs EPOCHS Number of epochs to train --batch_size_train BATCH_SIZE_TRAIN Global batch size for training --batch_size_validation BATCH_SIZE_VALIDATION Used both for validation and testing --validation_step VALIDATION_STEP Train epochs for one validation --warm_up_epochs WARM_UP_EPOCHS Number of epochs to omit during benchmark --total_anneal_steps TOTAL_ANNEAL_STEPS Number of annealing steps --anneal_cap ANNEAL_CAP Annealing cap --lam LAM Regularization parameter --lr LR Learning rate --beta1 BETA1 Adam beta1 --beta2 BETA2 Adam beta2 --top_results TOP_RESULTS Number of results to be recommended --xla Enable XLA --trace Save profiling traces --activation ACTIVATION Activation function --log_path LOG_PATH Path to the detailed JSON log from to be created --seed SEED Random seed for TensorFlow and numpy --data_dir DATA_DIR Directory for storing the training data 
--checkpoint_dir CHECKPOINT_DIR Path for saving a checkpoint after the training ``` ### Getting the data The VAE-CF model was trained on the [MovieLens 20M dataset](https://grouplens.org/datasets/movielens/20m/). The dataset can be preprocessed simply by running: `python prepare_dataset.py` in the Docker container. By default, the dataset will be stored in the `/data` directory. If you want to store the data in a different location, you can pass the desired location to the `--data_dir` argument. #### Dataset guidelines As a Collaborative Filtering model, VAE-CF only uses information about which user interacted with which item. For the MovieLens dataset, this means that a particular user has positively reviewed a particular movie. VAE-CF can be adapted to any other collaborative filtering task. The input to the model is generally a list of all interactions between users and items. One column of the CSV should contain user IDs, while the other should contain item IDs. Preprocessing for the MovieLens 20M dataset is provided in the `vae/load/preprocessing.py` file. ### Training process The training can be started by running the `main.py` script with the `--train` argument. The resulting checkpoints containing the trained model weights are then stored in the directory specified by the `--checkpoint_dir` argument (by default no checkpoints are saved). Additionally, a command-line argument called `--results_dir` (by default `None`) specifies where to save the following statistics in a JSON format: 1) a complete list of command-line arguments saved as `<results_dir>/args.json`, and 2) a dictionary of validation metrics and performance metrics recorded during training. The main validation metric used is [NDCG@100](https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG). Following the original VAE-CF paper, we also report numbers for Recall@20 and Recall@50. Multi-GPU training uses Horovod. Mixed precision support is controlled by the `--amp` command-line flag. It enables TensorFlow’s Automatic Mixed Precision mode. ### Inference process Inference on a trained model can be run by passing the `--inference_benchmark` argument to the `main.py` script: ``` python main.py --inference_benchmark [--amp] --checkpoint_dir ./checkpoints ``` This will generate a user with a collection of random items that they interacted with, and run inference for that user multiple times to measure latency and throughput. ## Performance The performance measurements in this document were conducted at the time of publication and may not reflect the performance achieved from NVIDIA’s latest software release. For the most up-to-date performance measurements, go to [NVIDIA Data Center Deep Learning Product Performance](https://developer.nvidia.com/deep-learning-performance-training-inference). ### Benchmarking The following section shows how to run benchmarks measuring the model performance in training and inference modes. #### Training performance benchmark To benchmark the training performance, run: ``` mpirun --bind-to numa --allow-run-as-root -np 8 -H localhost:8 python main.py --train [--amp] ``` #### Inference performance benchmark To benchmark the inference performance, run: ``` python main.py --inference_benchmark [--amp] ``` ### Results The following sections provide details on how we achieved our performance and accuracy in training and inference.
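The accuracy numbers in the tables that follow are reported as NDCG@100 (with Recall@20 and Recall@50 as auxiliary metrics, as noted above). As a quick reference, here is a single-user sketch of how these ranking metrics are conventionally computed; it follows the standard definitions used in the VAE-CF paper and is not necessarily the exact code in `vae/metrics`:

```python
import numpy as np

def recall_at_k(ranked_items, relevant_items, k):
    """Fraction of held-out relevant items appearing in the top-k ranking."""
    hits = len(set(ranked_items[:k]) & set(relevant_items))
    return hits / min(k, len(relevant_items))

def ndcg_at_k(ranked_items, relevant_items, k):
    """Normalized discounted cumulative gain with binary relevance."""
    relevant = set(relevant_items)
    gains = [1.0 if item in relevant else 0.0 for item in ranked_items[:k]]
    dcg = sum(g / np.log2(i + 2) for i, g in enumerate(gains))
    ideal = sum(1.0 / np.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / ideal if ideal > 0 else 0.0

# Example: the model ranks items [5, 1, 9, 0, 7, ...] and the user's held-out
# interactions are items {1, 7, 8}.
ranking = [5, 1, 9, 0, 7, 3, 8, 2, 6, 4]
held_out = [1, 7, 8]
print(recall_at_k(ranking, held_out, k=5))   # 2 of 3 held-out items are in the top 5
print(ndcg_at_k(ranking, held_out, k=5))
```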
#### Training accuracy results All training performance results were obtained by running: ``` mpirun --bind-to numa --allow-run-as-root -np <gpus> -H localhost:8 python main.py --train [--amp] ``` in the TensorFlow 20.06 NGC container. ##### Training accuracy: NVIDIA DGX A100 (8x A100 40GB) | GPUs | Batch size / GPU | Accuracy - TF32 | Accuracy - mixed precision | Time to train - TF32 [s] | Time to train - mixed precision [s] | Time to train speedup (TF32 to mixed precision) |-------:|-----------------:|-------------:|-----------:|----------------:|--------------:|---------------:| | 1 | 24,576 | 0.430298 | 0.430398 | 112.8 | 109.4 | 1.03 | | 8 | 3,072 | 0.430897 | 0.430353 | 25.9 | 30.4 | 0.85 | ##### Training accuracy: NVIDIA DGX-1 (8x V100 32GB) | GPUs | Batch size / GPU | Accuracy - FP32 | Accuracy - mixed precision | Time to train - FP32 [s] | Time to train - mixed precision [s] | Time to train speedup (FP32 to mixed precision) | |-------:|-----------------:|-------------:|-----------:|----------------:|--------------:|---------------:| | 1 | 24,576 | 0.430592 | 0.430525 | 346.5 | 186.5 | 1.86 | | 8 | 3,072 | 0.430753 | 0.431202 | 59.1 | 42.2 | 1.40 | #### Training performance results Performance numbers below show throughput in users processed per second. They were averaged over an entire training run. ##### Training performance: NVIDIA DGX A100 (8x A100 40GB) | GPUs | Batch size / GPU | Throughput - TF32 | Throughput - mixed precision | Throughput speedup (TF32 - mixed precision) | Strong scaling - TF32 | Strong scaling - mixed precision |-------:|------------:|-------------------:|-----------------:|---------------------:|---:|---:| | 1 | 24,576 | 354,032 | 365,474 | 1.03 | 1 | 1 | | 8 | 3,072 | 1,660,700 | 1,409,770 | 0.85 | 4.69 | 3.86 | ##### Training performance: NVIDIA DGX-1 (8x V100 32GB) | GPUs | Batch size / GPU | Throughput - FP32 | Throughput - mixed precision | Throughput speedup (FP32 - mixed precision) | Strong scaling - FP32 | Strong scaling - mixed precision | |-------:|------------:|-------------------:|-----------------:|---------------------:|---:|---:| | 1 | 24,576 | 114,125 | 213,283 | 1.87 | 1 | 1 | | 8 | 3,072 | 697,628 | 1,001,210 | 1.44 | 6.11 | 4.69 | #### Inference performance results Our results were obtained by running: ``` python main.py --inference_benchmark [--amp] ``` in the TensorFlow 20.06 NGC container. We use users processed per second as a throughput metric for measuring inference performance. All latency numbers are in seconds. 
##### Inference performance: NVIDIA DGX A100 (1x A100 40GB) TF32 | Batch size | Throughput Avg | Latency Avg | Latency 90% | Latency 95% | Latency 99% | |-------------:|-----------------:|--------------:|--------------:|--------------:|---------------:| | 1 | 1181 | 0.000847 | 0.000863 | 0.000871 | 0.000901 | FP16 | Batch size | Throughput Avg | Latency Avg | Latency 90% | Latency 95% | Latency 99% | |-------------:|-----------------:|--------------:|--------------:|--------------:|---------------:| | 1 | 1215 | 0.000823 | 0.000858 | 0.000864 | 0.000877 | ##### Inference performance: NVIDIA DGX-1 (1x V100 16GB) FP32 | Batch size | Throughput Avg | Latency Avg | Latency 90% | Latency 95% | Latency 99% | |-------------:|-----------------:|--------------:|--------------:|--------------:|---------------:| | 1 | 718 | 0.001392 | 0.001443 | 0.001458 | 0.001499 | FP16 | Batch size | Throughput Avg | Latency Avg | Latency 90% | Latency 95% | Latency 99% | |-------------:|-----------------:|--------------:|--------------:|--------------:|---------------:| | 1 | 707 | 0.001413 | 0.001511 | 0.001543 | 0.001622 | ## Release notes ### Changelog April 2023 - Ceased maintenance of this model in TensorFlow1 July 2020 - Updated with Ampere convergence and performance results November 2019 - Initial release ### Known issues #### AMP speedup for Ampere In this model the TF32 precision can in some cases be as fast as the FP16 precision on Ampere GPUs. This is because TF32 also uses Tensor Cores and doesn't need any additional logic such as maintaining FP32 master weights and casts. However, please note that VAE-CF is, by modern recommender standards, a very small model. Larger models should still see significant benefits of using FP16 math. #### Multi-GPU scaling We benchmark this implementation on the ML-20m dataset so that our results are comparable to the original VAE-CF paper. We also use the same neural network architecture. As a consequence, the ratio of communication to computation is relatively large. This means that although using multiple GPUs speeds up the training substantially, the scaling efficiency is worse from what one would expect if using a larger model and a more realistic dataset.
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/src/trt/tacotron2
tacotron2
decoderInstancePlain
/* * Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * Neither the name of the NVIDIA CORPORATION nor the * names of its contributors may be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef TT2I_DECODERINSTANCEPLAIN_H #define TT2I_DECODERINSTANCEPLAIN_H #include "binding.h" #include "cudaMemory.h" #include "decoderInstance.h" #include "NvInfer.h" #include <cuda_runtime.h> #include <memory> #include <string> namespace tts { class DecoderInstancePlain : public DecoderInstance { public: static constexpr const char* const ENGINE_NAME = "tacotron2_decoder_plain"; /** * @brief Create a new DecoderInstancePlain. * * @param engine The ICudaEngine containing the decoder network. * @param maxChunkSize The maximum sized chunk the decoder will process. */ DecoderInstancePlain( TRTPtr<nvinfer1::ICudaEngine> engine, int maxChunkSize); /** * @brief Reset the decoder for new input. * * @param stream The stream to run on. */ void reset(cudaStream_t stream) override; protected: /** * @brief Decode a single frame of output. * * @param stream The stream to operate on. * @param context The execution context. * @param batchSize The size of the batch to process. * @param inputLastFrameDevice The last frame of output produced (all 0s * for first frame). * @param inputMemoryDevice The "Memory" tensor on the device. * @param inputProcessedMemoryDevice The "Processed Memory" tensor on the * device. * @param inputMaskDevice The input mask on the device (1 for i < input * length, 0 for i >= input length). * @param inputLengthHost The length of each input item on the host. * @param inputLengthDevice The length of each input on the device. * @param inputDropoutsDevice The dropout vector to use on the device. * @param outputFrameDevice The output frame on the device. 
*/ void decode(cudaStream_t stream, nvinfer1::IExecutionContext& context, int batchSize, const float* inputLastFrameDevice, const float* inputMemoryDevice, const float* inputProcessedMemoryDevice, const float* inputMaskDevice, const int32_t* inputLengthHost, const int32_t* inputLengthDevice, const float* inputDropoutsDevice, float* outputFrameDevice) override; private: Binding mBinding; CudaMemory<float> mInputWeightsDevice; CudaMemory<float> mOutputWeightsDevice; CudaMemory<float> mInAttentionHiddenStatesDevice; CudaMemory<float> mInAttentionCellStatesDevice; CudaMemory<float> mOutAttentionHiddenStatesDevice; CudaMemory<float> mOutAttentionCellStatesDevice; CudaMemory<float> mInputAttentionContextDevice; CudaMemory<float> mOutputAttentionContextDevice; CudaMemory<float> mInDecoderHiddenStatesDevice; CudaMemory<float> mInDecoderCellStatesDevice; CudaMemory<float> mOutDecoderHiddenStatesDevice; CudaMemory<float> mOutDecoderCellStatesDevice; }; } // namespace tts #endif
PaddlePaddle/Classification/RN50v1.5/scripts/inference
inference
export_resnet50_AMP
# Copyright (c) 2022 NVIDIA Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

CKPT=${1:-"./output/ResNet50/89"}
MODEL_PREFIX=${2:-"resnet_50_paddle"}

python -m paddle.distributed.launch --gpus=0 export_model.py \
    --amp \
    --data-layout NHWC \
    --trt-inference-dir ./inference_amp \
    --from-checkpoint ${CKPT} \
    --model-prefix ${MODEL_PREFIX}
TensorFlow2/LanguageModeling/ELECTRA
ELECTRA
README
# ELECTRA For TensorFlow2 This repository provides a script and recipe to train the ELECTRA model for TensorFlow2 to achieve state-of-the-art accuracy, and is tested and maintained by NVIDIA. ## Table Of Contents - [Model overview](#model-overview) * [Model architecture](#model-architecture) * [Default configuration](#default-configuration) * [Feature support matrix](#feature-support-matrix) * [Features](#features) * [Mixed precision training](#mixed-precision-training) * [Enabling mixed precision](#enabling-mixed-precision) * [Enabling TF32](#enabling-tf32) * [Glossary](#glossary) - [Setup](#setup) * [Requirements](#requirements) - [Quick Start Guide](#quick-start-guide) - [Advanced](#advanced) * [Scripts and sample code](#scripts-and-sample-code) * [Parameters](#parameters) + [Pre-training parameters](#pre-training-parameters) + [Fine-tuning parameters](#fine-tuning-parameters) * [Command-line options](#command-line-options) * [Getting the data](#getting-the-data) + [Multi-dataset](#multi-dataset) * [Training process](#training-process) + [Pre-training](#pre-training) + [Multi-node](#multi-node) + [Fine-tuning](#fine-tuning) * [Inference process](#inference-process) + [Fine-tuning inference](#fine-tuning-inference) - [Performance](#performance) * [Benchmarking](#benchmarking) + [Training performance benchmark](#training-performance-benchmark) + [Inference performance benchmark](#inference-performance-benchmark) * [Results](#results) + [Training accuracy results](#training-accuracy-results) - [Pre-training loss curves](#pre-training-loss-curves) - [Pre-training loss results](#pre-training-loss-results) - [Fine-tuning accuracy: NVIDIA DGX A100 (8x A100 40GB)](#fine-tuning-accuracy-nvidia-dgx-a100-8x-a100-40gb) - [Fine-tuning accuracy: NVIDIA DGX-1 (8x V100 16GB)](#fine-tuning-accuracy-nvidia-dgx-1-8x-v100-16gb) - [Fine-tuning accuracy: NVIDIA DGX-2 (16x V100 32GB)](#fine-tuning-accuracy-nvidia-dgx-2-16x-v100-32gb) - [Training stability test](#training-stability-test) * [Pre-training stability test: NVIDIA DGX A100 (8x A100 40GB)](#pre-training-stability-test-nvidia-dgx-a100-8x-a100-40gb) * [Fine-tuning stability test: NVIDIA DGX-1 (8x V100 16GB)](#fine-tuning-stability-test-nvidia-dgx-1-8x-v100-16gb) + [Training performance results](#training-performance-results) - [Training performance: NVIDIA DGX A100 (8x A100 40GB)](#training-performance-nvidia-dgx-a100-8x-a100-40gb) * [Pre-training NVIDIA DGX A100 (8x A100 40GB)](#pre-training-nvidia-dgx-a100-8x-a100-40gb) * [Fine-tuning NVIDIA DGX A100 (8x A100 40GB)](#fine-tuning-nvidia-dgx-a100-8x-a100-40gb) - [Training performance: NVIDIA DGX-1 (8x V100 16GB)](#training-performance-nvidia-dgx-1-8x-v100-16gb) * [Pre-training NVIDIA DGX-1 (8x V100 16GB)](#pre-training-nvidia-dgx-1-8x-v100-16gb) * [Fine-tuning NVIDIA DGX-1 (8x V100 16GB)](#fine-tuning-nvidia-dgx-1-8x-v100-16gb) - [Training performance: NVIDIA DGX-2 (16x V100 32GB)](#training-performance-nvidia-dgx-2-16x-v100-32gb) * [Pre-training NVIDIA DGX-2 (16x V100 32GB)](#pre-training-nvidia-dgx-2-16x-v100-32gb) * [Fine-tuning NVIDIA DGX-2 (16x V100 32GB)](#fine-tuning-nvidia-dgx-2-16x-v100-32gb) + [Inference performance results](#inference-performance-results) - [Inference performance: NVIDIA DGX A100 (1x A100 40GB)](#inference-performance-nvidia-dgx-a100-1x-a100-40gb) * [Fine-tuning inference on NVIDIA DGX A100 (1x A100 40GB)](#fine-tuning-inference-on-nvidia-dgx-a100-1x-a100-40gb) - [Inference performance: NVIDIA T4](#inference-performance-nvidia-t4) * [Fine-tuning inference on NVIDIA 
T4](#fine-tuning-inference-on-nvidia-t4) - [Release notes](#release-notes) * [Changelog](#changelog) * [Known issues](#known-issues) ## Model overview Electra (Efficiently Learning an Encoder that Classifies Token Replacements Accurately), is a novel pre-training method for language representations which outperforms existing techniques, given the same compute budget on a wide array of Natural Language Processing (NLP) tasks. This model is based on the [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/forum?id=r1xMH1BtvB) paper. NVIDIA's implementation of ELECTRA is an optimized version of the [Hugging Face implementation](https://huggingface.co/transformers/model_doc/electra.html), leveraging mixed precision arithmetic and Tensor Cores on Volta, Turing, and the NVIDIA Ampere GPU architectures for faster training times with state-of-the-art accuracy. This repository contains the scripts to interactively launch data download, training, benchmarking and inference routines in a Docker container for pre-training on your own dataset (Wikipedia and BookCorpus shown as an example), and fine-tuning for tasks such as question answering. The major differences between the original implementation as described in the paper and this version of ELECTRA are as follows: - Scripts to download Wikipedia and BookCorpus datasets - Scripts to preprocess downloaded data or a custom corpus into inputs and targets for pre-training in a modular fashion - Automatic mixed precision (AMP) support and optimized for performance - Multi-GPU and Multi-node training support with push-button scripts to reach state-of-the-art accuracy and performance. Other publicly available implementations of Electra include: 1. [Hugging Face](https://huggingface.co/transformers/model_doc/electra.html) 2. [Google's implementation](https://github.com/google-research/electra) This model is trained with mixed precision using Tensor Cores on Volta, Turing, and the NVIDIA Ampere GPU architectures. Additionally, this model provides push-button solutions to pre-training, fine-tuning and inference and on a corpus of choice. As a result, researchers can get results up to 4x faster than training without Tensor Cores. This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time. ### Model architecture ELECTRA is a combination of two Transformer models: a generator and a discriminator. The generator’s role is to replace tokens in a sequence, and is therefore trained as a masked language model. The discriminator, which is the model we are interested in, tries to identify which tokens were replaced by the generator in the sequence. Both generator and discriminator use the same architecture as the encoder of the Transformer. The encoder is simply a stack of Transformer blocks, which consist of a multi-head attention layer followed by successive stages of feed-forward networks and layer normalization. The multi-head attention layer performs self-attention on multiple input representations. ![Figure 1-1](https://1.bp.blogspot.com/-sHybc03nJRo/XmfLongdVYI/AAAAAAAAFbI/a0t5w_zOZ-UtxYaoQlVkmTRsyFJyFddtQCLcBGAsYHQ/s1600/image1.png "ELECTRA architecture") ### Default configuration ELECTRA uses a new pre-training task called replaced token detection (RTD), that trains a bidirectional model (like a MLM) while learning from all input positions (like a LM). 
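As a toy illustration of what the replaced token detection signal looks like at the data level (the sentence, the generator's samples, and the whitespace tokenization below are made up for the example and are not the repository's masking code):

```python
# Toy, self-contained illustration of replaced token detection (RTD).
original = ["the", "chef", "cooked", "the", "meal"]

# The generator fills in the masked positions; sometimes it reproduces the
# original token, sometimes a plausible but different one.
masked_positions = {1, 4}
generator_samples = {1: "chef", 4: "soup"}   # position 4 ends up replaced

corrupted = [generator_samples.get(i, tok) for i, tok in enumerate(original)]

# The discriminator is trained to predict, for every position, whether the
# token is the original one (label 0) or a replacement (label 1).
labels = [int(tok != orig) for tok, orig in zip(corrupted, original)]

print(corrupted)  # ['the', 'chef', 'cooked', 'the', 'soup']
print(labels)     # [0, 0, 0, 0, 1]
```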
Inspired by generative adversarial networks (GANs), instead of corrupting the input by replacing tokens with “[MASK]” as in BERT, the generator is trained to corrupt the input by replacing some input tokens with incorrect, but somewhat plausible, fakes. On the other hand, the discriminator is trained to distinguish between “real” and “fake” input data. The [Google ELECTRA repository](https://github.com/google-research/electra) reports the results for three configurations of ELECTRA, each corresponding to a unique model size. This implementation provides the same configurations by default, which are described in the table below. | **Model** | **Hidden layers** | **Hidden unit size** | **Parameters** | |:---------:|:----------:|:---:|:----:| |ELECTRA_SMALL|12 encoder| 256 | 14M| |ELECTRA_BASE |12 encoder| 768 |110M| |ELECTRA_LARGE|24 encoder|1024 |335M| The following features were implemented in this model: - General: - Mixed precision support with TensorFlow Automatic Mixed Precision (TF-AMP) - Multi-GPU support using Horovod - XLA support - Multi-Node support - Training - Pre-training support - Fine-tuning example - Inference: - Joint predictions with beam search. ### Feature support matrix The following features are supported by this model. | **Feature** | **ELECTRA** | |:---------:|:----------:| |LAMB|Yes| |Automatic mixed precision (AMP)|Yes| |XLA|Yes| |Horovod Multi-GPU|Yes| |Multi-node|Yes| #### Features **Automatic Mixed Precision (AMP)** This implementation of ELECTRA uses AMP to implement mixed precision training. It allows us to use FP16 training with FP32 master weights by modifying just a few lines of code. **Horovod** Horovod is a distributed training framework for TensorFlow, Keras, PyTorch, and MXNet. The goal of Horovod is to make distributed deep learning fast and easy to use. For more information about how to get started with Horovod, see the [Horovod: Official repository](https://github.com/horovod/horovod). Multi-GPU training with Horovod Our model uses Horovod to implement efficient multi-GPU training with NCCL. For details, see example sources in this repository or see the [TensorFlow tutorial](https://github.com/horovod/horovod/#usage). **XLA support (experimental)** XLA is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with potentially no source code changes. The results are improvements in speed and memory usage: most internal benchmarks run ~1.1-1.5x faster after XLA is enabled. [AMP](https://nvidia.github.io/apex/amp.html) is an abbreviation used for automatic mixed precision training. **Multi-node Training** Supported on a Pyxis/Enroot Slurm cluster. ### Mixed precision training Mixed precision is the combined use of different numerical precisions in a computational method. [Mixed precision](https://arxiv.org/abs/1710.03740) training offers significant computational speedup by performing operations in half-precision format, while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of [Tensor Cores](https://developer.nvidia.com/tensor-cores) in Volta, and following with both the Turing and Ampere architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x overall speedup on the most arithmetically intense model architectures. Using mixed precision training requires two steps: 1. Porting the model to use the FP16 data type where appropriate. 2. 
Adding loss scaling to preserve small gradient values. This can now be achieved using Automatic Mixed Precision (AMP) for TensorFlow to enable the full [mixed precision methodology](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#tensorflow) in your existing TensorFlow model code. AMP enables mixed precision training on Volta, Turing, and NVIDIA Ampere GPU architectures automatically. The TensorFlow framework code makes all necessary model changes internally. In TF-AMP, the computational graph is optimized to use as few casts as necessary and maximize the use of FP16, and the loss scaling is automatically applied inside of supported optimizers. AMP can be configured to work with the existing tf.contrib loss scaling manager by disabling the AMP scaling with a single environment variable to perform only the automatic mixed-precision optimization. It accomplishes this by automatically rewriting all computation graphs with the necessary operations to enable mixed precision training and automatic loss scaling. For information about: - How to train using mixed precision, see the [Mixed Precision Training](https://arxiv.org/abs/1710.03740) paper and [Training With Mixed Precision](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html) documentation. - Techniques used for mixed precision training, see the [Mixed-Precision Training of Deep Neural Networks](https://devblogs.nvidia.com/mixed-precision-training-deep-neural-networks/) blog. - How to access and enable AMP for TensorFlow, see [Using TF-AMP](https://docs.nvidia.com/deeplearning/dgx/tensorflow-user-guide/index.html#tfamp) from the TensorFlow User Guide. #### Enabling mixed precision This implementation exploits the TensorFlow Automatic Mixed Precision feature. To enable AMP, you simply need to supply the `--amp` flag to the `run_pretraining.py` or `run_tf_squad.py` script. For reference, enabling AMP required us to apply the following changes to the code: 1. Set the Keras mixed precision policy: ```python if config.amp: policy = tf.keras.mixed_precision.experimental.Policy("mixed_float16", loss_scale="dynamic") tf.keras.mixed_precision.experimental.set_policy(policy) ``` 2. Use the loss scaling wrapper on the optimizer: ```python if config.amp: optimizer = tf.keras.mixed_precision.experimental.LossScaleOptimizer(optimizer, "dynamic") ``` 3. Use scaled loss to calculate the gradients: ```python #Scale loss if config.amp: total_loss = optimizer.get_scaled_loss(total_loss) gradients = tape.gradient(total_loss, model.trainable_variables) #Get unscaled gradients if AMP if config.amp: gradients = optimizer.get_unscaled_gradients(gradients) ``` #### Enabling TF32 TensorFloat-32 (TF32) is the new math mode in [NVIDIA A100](https://www.nvidia.com/en-us/data-center/a100/) GPUs for handling the matrix math also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs. TF32 Tensor Cores can speed up networks using FP32, typically with no loss of accuracy. It is more robust than FP16 for models which require high dynamic range for weights or activations. For more information, refer to the [TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) blog post. TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default. 
### Glossary **Fine-tuning** Training an already pretrained model further using a task specific dataset for subject-specific refinements, by adding task-specific layers on top if required. **Language Model** Assigns a probability distribution over a sequence of words. Given a sequence of words, it assigns a probability to the whole sequence. **Pre-training** Training a model on vast amounts of data on the same (or different) task to build general understandings. **Transformer** The paper [Attention Is All You Need](https://arxiv.org/abs/1706.03762) introduces a novel architecture called Transformer that uses an attention mechanism and transforms one sequence into another. **Phase 1** Pretraining on samples of sequence length 128 and at most 15% masked predictions per sequence. **Phase 2** Pretraining on samples of sequence length 512 and at most 15% masked predictions per sequence. ## Setup The following section lists the requirements that you need to meet in order to start training the ELECTRA model. ### Requirements This repository contains Dockerfile which extends the TensorFlow2 NGC container and encapsulates some dependencies. Aside from these dependencies, ensure you have the following components: - [NVIDIA Docker](https://github.com/NVIDIA/nvidia-docker) - [TensorFlow2 20.07-py3 NGC container or later](https://ngc.nvidia.com/registry/nvidia-tensorflow) - Supported GPUs: - [NVIDIA Volta architecture](https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/) - [NVIDIA Turing architecture](https://www.nvidia.com/en-us/geforce/turing/) - [NVIDIA Ampere architecture](https://www.nvidia.com/en-us/data-center/nvidia-ampere-gpu-architecture/) For more information about how to get started with NGC containers, see the following sections from the NVIDIA GPU Cloud Documentation and the Deep Learning Documentation: - [Getting Started Using NVIDIA GPU Cloud](https://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html) - [Accessing And Pulling From The NGC Container Registry](https://docs.nvidia.com/deeplearning/dgx/user-guide/index.html#accessing_registry) - [Running TensorFlow2](https://docs.nvidia.com/deeplearning/frameworks/tensorflow-release-notes/running.html#running) For those unable to use the TensorFlow 2 NGC container, to set up the required environment or create your own container, see the versioned [NVIDIA Container Support Matrix](https://docs.nvidia.com/deeplearning/dgx/support-matrix/index.html). For multi-node, the sample provided in this repository requires [Enroot](https://github.com/NVIDIA/enroot) and [Pyxis](https://github.com/NVIDIA/pyxis) set up on a [SLURM](https://slurm.schedmd.com) cluster. More information on how to set up and launch can be found in the [Multi-node Documentation](https://docs.nvidia.com/ngc/multi-node-bert-user-guide). ## Quick Start Guide To train your model using mixed precision or TF32 precision with Tensor Cores or using FP32, perform the following steps using the default parameters of the ELECTRA model. The default parameters for pre-training have been set to run on both 8x A100 40G and 8 x V100 32G GPUs. For the specifics concerning training and inference, see the [Advanced](#advanced) section. 1. Clone the repository. ``` git clone https://github.com/NVIDIA/DeepLearningExamples.git cd DeepLearningExamples/TensorFlow2/LanguageModeling/ELECTRA ``` 2. Build ELECTRA on top of the NGC container. ``` bash scripts/docker/build.sh ``` 3. Start an interactive session in the NGC container to run data download, training and inference. 
``` bash scripts/docker/launch.sh ``` Resultant logs of pre-training and fine-tuning routines are stored in the `results/` folder. Checkpoints are stored in the `results/<model-name>/` folder. Required data is downloaded into the `data/` directory by default. 4. Download and preprocess the dataset. This repository provides scripts to download, verify, and extract the following datasets: - [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) (fine-tuning for question answering) - Wikipedia (pre-training) - BookCorpus (pre-training) To download, verify, extract the datasets, and create the shards in `tfrecord` format, run: ``` /workspace/electra/data/create_datasets_from_start.sh ``` Note: For fine-tuning only, Wikipedia and Bookscorpus dataset download and preprocessing can be skipped by commenting it out. - Download Wikipedia only for pretraining The pre-training dataset is 170GB+ and takes 15+ hours to download. The BookCorpus server most of the time gets overloaded and also contains broken links resulting in HTTP 403 and 503 errors. Hence, it is recommended to skip downloading BookCorpus data by running: ``` /workspace/electra/data/create_datasets_from_start.sh wiki_only ``` - Download Wikipedia and BookCorpus Users are welcome to download the BookCorpus from other sources to match our accuracy, or repeatedly try our script until the required number of files are downloaded by running the following: ``` /workspace/electra/data/create_datasets_from_start.sh wiki_books ``` Note: Not using the BookCorpus can potentially change the final accuracy on a few downstream tasks. 5. Start pretraining. To run on a single node 8 x V100 32G, from within the container, you can use the following script to run pre-training. ``` bash scripts/run_pretraining.sh ``` The default hyperparameters are set to run on both 8 x A100 40G and 8 x V100 32G. For the other platforms, the configs present in `scripts/configs/pretrain_config.sh` can be used as shown below: ``` bash scripts/run_pretraining.sh $(source scripts/configs/pretrain_config.sh && dgxa100_8gpu_amp) ``` To run pre-training on multiple nodes, see the [Multi-node](#multi-node) section. 6. Postprocess pretrained checkpoint and fine-tune on SQuAD dataset The above pretrained ELECTRA model representations can be fine-tuned with just one additional output layer for a state-of-the-art question answering system. Running the following script extracts and saves the discriminator and generator from the pretrained checkpoint and fine-tunes the discriminator on SQuAD: ``` checkpoints=results/base/checkpoints bash scripts/finetune_ckpts_on_squad.sh ``` It internally runs `postprocess_pretrained_ckpt.py` which extracts and saves the discriminator and the generator from the pretrained checkpoint. The default hyperparameters are set to run on 8 x V100 16G. To run fine-tuning with the SQuAD dataset on Google's pretrained checkpoints, do the following. ``` bash scripts/run_squad.sh ``` For other platforms, configs present in `scripts/configs/squad_config.sh` can be used as shown below: ``` bash scripts/run_squad.sh $(source scripts/configs/squad_config.sh && dgxa100_8gpu_amp) train_eval ``` 7. Start validation/evaluation. Validation can be performed by running: ``` bash scripts/run_squad.sh $(source scripts/configs/squad_config.sh && dgxa100_8gpu_amp) eval ``` Running training first is required to generate needed checkpoints. 8. Start inference/predictions. 
Inference can be performed by running: ``` bash scripts/run_squad.sh $(source scripts/configs/squad_config.sh && dgxa100_8gpu_amp) prediction ``` Inference predictions are saved to `<OUTPUT_DIRECTORY>/predictions.json`. ## Advanced The following sections provide greater details of the datasets, running training and inference, and the training results. ### Scripts and sample code Descriptions of the key scripts and folders are provided below. - `data/` - Contains scripts for downloading and preparing individual datasets, and will contain downloaded and processed datasets. - `scripts/` - Contains shell scripts to launch the Docker container, data download, pre-training, fine-tuning and inference. - `results/` - Folder where all training and inference results get stored by default. - `run_squad.sh` - Interface for launching question answering fine-tuning with `run_tf_squad.py`. - `run_pretraining.sh` - Interface for launching ELECTRA pre-training with `run_pretraining.py`. - `finetune_ckpts_on_squad.sh` - Interface for extracting and saving discriminator and generator from the pretrained checkpoint and run SQuAD fine-tuning on discriminator. - `build_pretraining_dataset.py` - Creates `tfrecord` files from shared text files in the final step of dataset creation. - `postprocess_pretrained_ckpt.py` - Converts pretrained checkpoint to discriminator checkpoint and generator checkpoint which can be fed into `run_tf_squad.py`. - `modeling.py` - Implements the ELECTRA pre-training and fine-tuning model architectures with TensorFlow2. - `optimization.py` - Implements the Adam optimizer, LAMB and the learning rate schedule with TensorFlow2. - `configuration.py` - Implements parent class for model config. - `tokenization.py` - Implements the ELECTRA tokenizer. - `run_pretraining.py` - Implements ELECTRA pre-training. - `pretrain_utils.py` - Utilities required for pre-training such as dynamic masking etc., - `run_tf_squad.py` - Implements fine-tuning training and evaluation for question answering on the [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) dataset. - `inference.py` - Implements interactive question answering. - `postprocess_pretrained_ckpt.py` - Implements extracting and saving the discriminator and the generator from the pretrained checkpoint. ### Parameters #### Pre-training parameters ELECTRA is designed to pre-train deep bidirectional networks for language representations. The following scripts replicate pre-training on Wikipedia + BookCorpus from this [paper](https://openreview.net/forum?id=r1xMH1BtvB). These scripts are general and can be used for pre-training language representations on any corpus of choice. In the parameters expected by `scripts/run_pretraining.sh`, `p1` stands for phase 1 whereas `p2` stands for phase 2 training. They are as follows: - `<training_batch_size_p1>` is per-GPU batch size used for training. Larger batch sizes run more efficiently, but require more GPU memory. Default is 176. - `<learning_rate_p1>` is the base learning rate for training. Default is 6e-3. - `<precision>` is the type of math in your model, can be either `fp32` or `amp`. Default is `amp`. The options mean: - FP32: 32-bit IEEE single precision float format. - AMP: Automatic mixed precision 16 and 32-bit float format. - `<num_gpus>` is the number of GPUs to use for training. Must be equal to or smaller than the number of GPUs attached to your node. Default is 8. - `<warmup_steps_p1>` is the percentage of training steps used for warm-up at the start of training. Default is 2000. 
- `<train_steps_p1>` is the total number of training steps. Default is 10000. - `<save_checkpoint_steps>` controls how often checkpoints are saved. Default is 500. - `<resume_training>` if set to `true`, training resumes from the latest model in `/results/checkpoints`. Default is `false`. - `<accumulate_gradient>` a flag indicating whether a larger batch should be simulated with gradient accumulation. Default is `true`. - `<gradient_accumulation_steps_p1>` an integer indicating the number of steps to accumulate gradients over. Effective batch size / GPU = `training_batch_size` x `gradient_accumulation_steps`. Default is 48. - `<seed>` is the random seed for the run. - `<training_batch_size_p2>` is the per-GPU batch size used for training in phase 2. Larger batch sizes run more efficiently, but require more memory. Default is 24. - `<learning_rate_p2>` is the base learning rate for training phase 2. Default is 4e-3. - `<warmup_steps_p2>` is the number of training steps used for warm-up at the start of phase 2 training. Default is 200. - `<training_steps_p2>` is the total number of training steps for phase 2, to be continued in addition to phase 1. Default is 930. - `<gradient_accumulation_steps_p2>` an integer indicating the number of steps to accumulate gradients over in phase 2. Effective batch size / GPU = `training_batch_size_p2` * `gradient_accumulation_steps_p2`. Default is 144. - `<init_checkpoint>` a checkpoint to start the pre-training routine from (usually an ELECTRA pretrained checkpoint). Default is `None`. The complete list of available parameters for the `run_pretraining.py` script is: ``` --model_name MODEL_NAME - Model name, used to define the name of the results folder. --pretrain_tfrecords PRETRAIN_TFRECORDS - Specifies tfrecord files used for pretraining. --max_seq_length MAX_SEQ_LENGTH - The maximum total input sequence length after WordPiece tokenization. Sequences longer than this will be truncated, and sequences shorter than this will be padded. --mask_prob MASK_PROB - Percentage of input tokens to mask out / replace. --disc_weight DISC_WEIGHT - Ratio of discriminator loss over generator loss. --generator_hidden_size GENERATOR_HIDDEN_SIZE - Fraction of discriminator hidden size for generator. --train_batch_size TRAIN_BATCH_SIZE - Batch size per GPU for training. --learning_rate LEARNING_RATE - The initial learning rate for the optimizer. --num_train_steps NUM_TRAIN_STEPS - Total number of training steps to perform. --num_warmup_steps NUM_WARMUP_STEPS - Number of training steps to perform linear learning rate warm-up for. --seed SEED - Sets the seed to use for random number generation. --gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS - Number of update steps to accumulate before performing a backward/update pass. --fp16_compression - Whether to use 16-bit all-reduce. --amp - If set, will perform computations using automatic mixed precision. --log_freq LOG_FREQ - If set, the script will output the training loss every LOG_FREQ steps. --save_checkpoints_steps SAVE_CHECKPOINTS_STEPS - Checkpoints saving frequency. --keep_checkpoint_max KEEP_CHECKPOINT_MAX - Maximum number of checkpoints to keep. --restore_checkpoint RESTORE_CHECKPOINT - Whether to restore from a checkpoint; if specified, set to `path-to-checkpoint` or `latest`. --phase2 - Specified if training on phase 2 only. If not specified, default pre-training is on phase 1. --optimizer OPTIMIZER - Specifies optimizer, `adam` or `lamb`.
--skip_adaptive - Whether to skip applying the adaptive learning rate to LayerNorm parameters and biases. --lr_decay_power LR_DECAY_POWER - Learning rate polynomial decay power. --opt_beta_1 OPT_BETA_1 - beta1 of the optimizer. --opt_beta_2 OPT_BETA_2 - beta2 of the optimizer. --end_lr END_LR - Ending learning rate. ``` #### Fine-tuning parameters Default arguments are listed below in the order `scripts/run_squad.sh` expects: - ELECTRA MODEL - The default is `"google/electra-base-discriminator"`. - Number of training epochs - The default is `2`. - Batch size - The default is `16`. - Learning rate - The default is `4e-4`. - Precision (either `amp`, `tf32` or `fp32`) - The default is `amp`. - Number of GPUs - The default is `8`. - Seed - The default is `1`. - SQuAD version - The default is `1.1`. - SQuAD directory - The default is `/workspace/electra/data/download/squad/v$SQUAD_VERSION`. - Output directory for results - The default is `results/`. - Initialize checkpoint - The default is `"None"`. - Mode (`train`, `eval`, `train_eval`, `prediction`) - The default is `train_eval`. The script saves the checkpoint at the end of each epoch to the `checkpoints/` folder. Parameters specific to the main script `run_tf_squad.py` are: ``` --electra_model ELECTRA_MODEL - Specifies the type of ELECTRA model to use; should be the discriminator of a pretrained checkpoint (output of postprocess_pretrained_ckpt.py) or one of the following: google/electra-small-generator google/electra-base-generator google/electra-large-generator google/electra-small-discriminator google/electra-base-discriminator google/electra-large-discriminator --amp - If set, will perform computations using automatic mixed precision. --data_dir DATA_DIR - Path to the SQuAD json for training and evaluation. --max_seq_length MAX_SEQ_LENGTH - The maximum total input sequence length after WordPiece tokenization. Sequences longer than this will be truncated, and sequences shorter than this will be padded. --doc_stride DOC_STRIDE - When splitting up a long document into chunks, this parameter sets how much stride to take between chunks of tokens. --max_query_length MAX_QUERY_LENGTH - The maximum number of tokens for the question. Questions longer than <max_query_length> will be truncated to the value specified. --n_best_size N_BEST_SIZE - The total number of n-best predictions to generate in the nbest_predictions.json output file. --max_answer_length MAX_ANSWER_LENGTH - The maximum length of an answer that can be generated. This is needed because the start and end predictions are not conditioned on one another. --joint_head <True|False> - If true, beam search will be used to jointly predict the start and end positions. Default is True. --beam_size BEAM_SIZE - The beam size used to do joint predictions. The default value is 5. --verbose_logging - If true, all the warnings related to data processing will be printed. A number of warnings are expected for a normal SQuAD evaluation. --do_lower_case - Whether to lower case the input text. Set to true for uncased models and false for cased models. --version_2_with_negative - If true, the SQuAD examples contain questions that do not have an answer. --null_score_diff_threshold NULL_SCORE_DIFF_THRESHOLD - A null answer will be predicted if null_score is greater than NULL_SCORE_DIFF_THRESHOLD.
``` ### Command-line options To see the full list of available options and their descriptions, use the `-h` or `--help` command line option, for example: `python run_pretraining.py --help` `python run_tf_squad.py --help` Detailed descriptions of command-line options can be found in the [Parameters](#parameters) section. ### Getting the data For pre-training ELECTRA, we use the concatenation of Wikipedia (2500M words) and BookCorpus (800M words). For Wikipedia, we extract only the text passages and ignore headers, lists, and tables. ELECTRA requires that datasets are structured as a document-level corpus rather than a shuffled sentence-level corpus because it is critical to extract long contiguous sentences. The preparation of the pre-training dataset is described in the `dataPrep.py` script found in the `data/` folder. The component steps in the automated scripts to prepare the datasets are as follows: 1. Data download and extract - the dataset is downloaded and extracted. 2. Clean and format - document tags, etc. are removed from the dataset. 3. Sentence segmentation - the corpus text file is processed into separate sentences. 4. Sharding - the sentence segmented corpus file is split into a number of uniformly distributed smaller text documents. 5. `tfrecord` file creation - each text file shard is processed by the `build_pretraining_dataset.py` script to produce a corresponding `tfrecord` file. The script generates the model input data for each text shard. The tools used for preparing the BookCorpus and Wikipedia datasets can be applied to prepare an arbitrary corpus. The `create_datasets_from_start.sh` script in the `data/` directory applies sentence segmentation, sharding, and `tfrecord` file creation given an arbitrary text file containing a document-separated text corpus. For fine-tuning a pre-trained ELECTRA model for specific tasks, by default this repository prepares the following dataset: - [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/): for question answering Depending on the speed of your internet connection, this process takes about a day to complete. The BookCorpus server could sometimes get overloaded and also contain broken links resulting in HTTP 403 and 503 errors. You can either skip the missing files or retry downloading at a later time. #### Multi-dataset This repository provides functionality to combine multiple datasets into a single dataset for pre-training on a diverse text corpus at the shard level. Currently, Wikipedia and BookCorpus are merged in `data/create_datasets_from_start.sh`. Snippets to download and format more text corpora can be added to `data/dataPrep.py`. The sharding scheme combines multiple corpora together and splits them into the required number of training (90%) and testing (10%) shards. Once the data is sharded, the `build_pretraining_dataset.py` script converts raw text shards to tokenized segments and saves the dataset to the `data` directory in TFRecord format. This dataset can now be used to pre-train ELECTRA. ### Training process The training process consists of two steps: pre-training and fine-tuning. #### Pre-training Pre-training is performed using `run_pretraining.py` along with parameters defined in `scripts/run_pretraining.sh` and `scripts/configs/pretrain_config.sh`. The `run_pretraining.sh` script runs a job on a single node that trains the ELECTRA-base model from scratch on the Wikipedia and BookCorpus datasets using the LAMB optimizer.
Phase 1: (Maximum sequence length of 128) - Runs on 8 GPUs with training batch size of 176 per GPU - Uses a learning rate of 6e-3 - Has FP16 precision enabled - Runs for 10000 steps, where the first 2000 are warm-up steps - Saves a checkpoint every 500 iterations (keeps only the latest 5 checkpoints) and at the end of training. All checkpoints, and training logs are saved to the `results/<model_name>` directory. - Creates a log file containing all the output Phase 2: (Maximum sequence length of 512) - Runs on 8 GPUs with training batch size of 24 per GPU - Uses a learning rate of 4e-3 - Has FP16 precision enabled - Runs for 930 steps, where the first 200 are warm-up steps - Saves a checkpoint every 500 iterations (keeps only the latest 5 checkpoints) and at the end of training. All checkpoints, and training logs are saved to the `results/<model_name>` directory. - Creates a log file containing all the output Specific configs available at `scripts/configs/pretrain_config.sh` can be run as follows: ``` bash scripts/run_pretraining.sh $(source scripts/configs/pretrain_config.sh && dgxa100_8gpu_amp) bash scripts/run_pretraining.sh $(source scripts/configs/pretrain_config.sh && dgx2_16gpu_amp) bash scripts/run_pretraining.sh $(source scripts/configs/pretrain_config.sh && dgx1_8gpu_amp) ``` The above commands will train ELECTRA based on Wikipedia and BookCorpus to state-of-the-art accuracy on any DGX platform using FP16 arithmetic. Around 96% of the training sequences are of length 128 (phase 1 of training) and less than 4% of the training sequences are of length 512 (phase 2 of training). In order to run pre-training routine on an initial checkpoint, perform the following in `scripts/run_pretraining.sh`: - set `restore_checkpoint=<path_to_checkpoint>` - Note: The parameter value assigned to `--model_size` during training should remain unchanged. Also, to resume pre-training on your corpus of choice, the training dataset should be created using the same vocabulary file used in `data/create_datasets_from_start.sh`. #### Multi-node Multi-node runs can be launched on a Pyxis/enroot Slurm cluster (see [Requirements](#requirements)) with the `run.sub` script with the following command for a 48-node NVIDIA DGX A100 example for both phase 1 and phase 2: ``` BATCHSIZE=176 LR=6e-3 GRAD_ACCUM_STEPS=1 PHASE=1 STEPS=10000 WARMUP=2000 b1=0.878 b2=0.974 decay=0.5 skip_adaptive=yes end_lr=0.0 sbatch N48 --ntasks-per-node=8 run.sub BATCHSIZE=24 LR=4e-3 GRAD_ACCUM_STEPS=3 PHASE=2 STEPS=930 WARMUP=200 b1=0.878 b2=0.974 decay=0.5 skip_adaptive=yes end_lr=0.0 sbatch N48 --ntasks-per-node=8 run.sub ``` Checkpoint after phase 1 will be saved in `<results_dir>/models/<model_name>`. The checkpoint will be automatically picked up to resume training on phase 2. Note that phase 2 should be run after phase 1. The batch variables `BATCHSIZE`, `LR`, `GRAD_ACCUM_STEPS`, `PHASE`, `STEPS`, `WARMUP`, `b1`, `b2`, `decay`, `skip_adaptive` and `end_lr` refer to the Python arguments `train_batch_size`, `learning_rate`, `gradient_accumulation_steps`, `phase2`, `num_train_steps`, `num_warmup_steps`, `opt_beta_1`, `opt_beta_2`, `lr_decay_power`, `skip_adaptive` and `end_lr` in `run_pretraining.py` respectively. Note that the `run.sub` script is a starting point that has to be adapted depending on the environment. In particular, variables such as `docker_image` and `datadir` handle the location of the files for each phase. Refer to the files contents to see the full list of variables to adjust for your system. 
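For orientation, the launcher variables above map onto the `run_pretraining.py` flags documented in the [Parameters](#parameters) section. The following single-process sketch shows a hypothetical phase 2 invocation built only from those documented flags; the tfrecord path and model name are placeholders, the exact spelling of boolean options may differ, and the provided launchers (`run_pretraining.sh`, `run.sub`) remain the supported entry points.
```
# Illustrative only: paths and values are placeholders taken from the documented phase 2 defaults.
python run_pretraining.py \
    --model_name=base \
    --pretrain_tfrecords=data/tfrecords/pretrain_data.tfrecord* \
    --phase2 \
    --restore_checkpoint=latest \
    --train_batch_size=24 \
    --gradient_accumulation_steps=144 \
    --learning_rate=4e-3 \
    --num_train_steps=930 \
    --num_warmup_steps=200 \
    --optimizer=lamb \
    --opt_beta_1=0.878 \
    --opt_beta_2=0.974 \
    --lr_decay_power=0.5 \
    --end_lr=0.0 \
    --amp
```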
#### Fine-tuning Fine-tuning is provided for a variety of tasks. The following tasks are included with this repository through the following scripts: - Question Answering (`scripts/run_squad.sh`) By default, each Python script implements fine-tuning a pre-trained ELECTRA model for a specified number of training epochs as well as evaluation of the fine-tuned model. Each shell script invokes the associated Python script with the following default parameters: - Uses 8 GPUs - Has FP16 precision enabled - Has XLA enabled - Saves a checkpoint at the end of training to the `checkpoints/` folder Specific configs available at `scripts/configs/squad_configs.sh` can be run as follows: ``` bash scripts/run_squad.sh $(source scripts/configs/squad_config.sh && dgxa100_8gpu_amp) train_eval bash scripts/run_squad.sh $(source scripts/configs/squad_config.sh && dgx2_16gpu_amp) train_eval bash scripts/run_squad.sh $(source scripts/configs/squad_config.sh && dgx1_8gpu_amp) train_eval ``` Fine-tuning Python scripts implement support for mixed precision and multi-GPU training through [Horovod](https://github.com/horovod/horovod). For a full list of parameters and associated explanations, see the [Parameters](#parameters) section. All fine-tuning shell scripts have the same positional arguments, outlined below: ```bash scripts/run_squad.sh <pretrained electra model> <epochs> <batch size> <learning rate> <amp|fp32> <num_gpus> <seed> <SQuAD version> <path to SQuAD dataset> <results directory> <checkpoint_to_load> <mode (either `train`, `eval` or `train_eval`)>``` By default, the mode positional argument is set to `train_eval`. See the [Fine-tuning parameters](#fine-tuning-parameters) for explanations of each positional argument. Note: The first positional argument (the path to the checkpoint to load) is required. Each fine-tuning script assumes that the corresponding dataset files exist in the `data/` directory or separate path can be a command-line input to `run_squad.sh`. ### Inference process #### Fine-tuning inference Evaluation fine-tuning is enabled by the same scripts as training: - Question Answering (`scripts/run_squad.sh`) The mode positional argument of the shell script is used to run in evaluation mode. The fine-tuned ELECTRA model will be run on the evaluation dataset, and the evaluation loss and accuracy will be displayed. Each inference shell script expects dataset files to exist in the same locations as the corresponding training scripts. The inference scripts can be run with default settings. By setting the `mode` variable in the script to either `eval` or `prediction` flag, you can choose between running predictions and evaluating them on a given dataset or just the former. `bash scripts/run_squad.sh <pretrained electra model> <epochs> <batch size> <learning rate> <amp|fp32> <num_gpus> <seed> <SQuAD version> <path to SQuAD dataset> <results directory> <path to fine-tuned model checkpoint> <eval or prediction>` To run inference interactively on question-context pairs, use the script `run_inference.py` as follows: `python run_inference.py --electra_model <electra_model_type> --init_checkpoint <fine_tuned_checkpoint> --question="What food does Harry like?" --context="My name is Harry and I grew up in Canada. I love apples."` ## Performance The performance measurements in this document were conducted at the time of publication and may not reflect the performance achieved from NVIDIA’s latest software release. 
For the most up-to-date performance measurements, go to [NVIDIA Data Center Deep Learning Product Performance](https://developer.nvidia.com/deep-learning-performance-training-inference). ### Benchmarking The following section shows how to run benchmarks measuring the model performance in training and inference modes. #### Training performance benchmark Training performance benchmarks for both pre-training phases can be obtained by running `scripts/benchmark_pretraining.sh`. Default parameters are set to run a few training steps of a convergence run on an NVIDIA DGX A100 system. To benchmark training performance with other parameters, run: ``` bash scripts/benchmark_pretraining.sh <train_batch_size_p1> <amp|tf32|fp32> <xla|no_xla> <num_gpus> <accumulate_gradients=true|false> <gradient_accumulation_steps_p1> <train_batch_size_p2> <gradient_accumulation_steps_p2> <base> ``` An example call used to generate throughput numbers: ``` bash scripts/benchmark_pretraining.sh 88 amp xla 8 true 2 12 4 base ``` Training performance benchmarks for fine-tuning can be obtained by running `scripts/benchmark_squad.sh`. The required parameters can be passed through the command-line as described in [Training process](#training-process). The performance information is printed after 200 training iterations. To benchmark the training performance on a specific batch size, run: ``` bash scripts/benchmark_squad.sh train <num_gpus> <batch size> <infer_batch_size> <amp|tf32|fp32> <SQuAD version> <path to SQuAD dataset> <results directory> <checkpoint_to_load> <cache_Dir> ``` An example call used to generate throughput numbers: ``` bash scripts/benchmark_squad.sh train 8 16 ``` #### Inference performance benchmark Inference performance benchmarks for fine-tuning can be obtained by running `scripts/benchmark_squad.sh`. The required parameters can be passed through the command-line as described in [Inference process](#inference-process). This script runs one epoch by default on the SQuAD v1.1 dataset and extracts the average performance for the given configuration. To benchmark the inference performance on a specific batch size, run: `bash scripts/benchmark_squad.sh eval <num_gpus> <batch size> <infer_batch_size> <amp|fp32> <SQuAD version> <path to SQuAD dataset> <results directory> <checkpoint_to_load> <cache_Dir>` An example call used to generate throughput numbers: `bash scripts/benchmark_squad.sh eval 8 256` ### Results The following sections provide details on how we achieved our performance and accuracy in training and inference. All results are for the ELECTRA-base model on the SQuAD v1.1 dataset with a sequence length of 384 unless otherwise mentioned. #### Training accuracy results ##### Pre-training loss curves ![Pretraining Loss Curves](images/total_loss.svg) Phase 1 is shown by the blue curve and Phase 2 by the grey curve. The y-axis shows the total loss and the x-axis the total number of training steps.
##### Pre-training loss results | DGX System | GPUs | Batch size / GPU (Phase 1 and Phase 2) | Accumulation steps (Phase 1 and Phase 2) | Final Loss - TF32/FP32 | Final Loss - mixed precision | Time to train(hours) - TF32/FP32 | Time to train(hours) - mixed precision | Time to train speedup (TF32/FP32 to mixed precision) |---|---|---|---|---|---|---|---|--- |48 x DGX A100 |8 |176 and 24 |1 and 3 |8.686|8.68|1.61 |1.126|1.43 |24 x DGX-2H |16|176 and 24 |1 and 3 |8.72 |8.67|5.58 |1.74 |3.20 |1 x DGX A100 |8 |176 and 24 |48 and 144|- |- |54.84 |30.47|1.8 |1 x DGX-1 16G |8 |88 and 12 |96 and 288|- |- |241.8 |65.1 |3.71 |1 x DGX-2 32G |16|176 and 24 |24 and 72 |- |- |109.97|29.08|3.78 In the above table, FP32 and TF32 runs were made at half the batch per GPU and twice the gradient accumulation steps of a run with mixed precision in order to not run out of memory. The SQuAD fine-tuning scripts by default train on [Google's ELECTRA++ base pretrained checkpoint](https://github.com/google-research/electra#released-models) which uses around 10x training dataset (dataset used by XLNet authors) and greater than 5x training steps compared to the training recipe in `scripts/run_pretraining.sh`. The latter trains and achieves state-of-the-art accuracy on Wikipedia and BookCorpus datasets only. ##### Fine-tuning accuracy: NVIDIA DGX A100 (8x A100 40GB) Our results were obtained by running the `scripts/run_squad.sh` training script in the tensorflow:20.07-tf2-py3 NGC container on NVIDIA DGX A100 (8x A100 40GB) GPUs. *ELECTRA BASE++* | GPUs | Batch size / GPU | Accuracy / F1 - FP32 | Accuracy / F1 - mixed precision | Time to train - TF32 (sec) | Time to train - mixed precision (sec) | Time to train speedup (FP32 to mixed precision) | |---------|---------------------|------------------|-----------------------------|--------------------------|---------------------------------|-------------------------------------------------| | 1 | 32 | 87.19 / 92.85 | 87.19 / 92.84 | 1699 | 749 | 2.27 | | 8 | 32 | 86.84 / 92.57 | 86.83 / 92.56 | 263 | 201 | 1.30 | ##### Fine-tuning accuracy: NVIDIA DGX-1 (8x V100 16GB) Our results were obtained by running the `scripts/run_squad.sh` training script in the tensorflow:20.07-tf2-py3 NGC container on NVIDIA DGX-1 with (8x V100 16GB) GPUs. 
*ELECTRA BASE++* | GPUs | Batch size / GPU (FP32 : mixed precision) | Accuracy / F1 - FP32 | Accuracy / F1 - mixed precision | Time to train - FP32 (sec) | Time to train - mixed precision (sec) | Time to train speedup (FP32 to mixed precision) | |---------|---------------------|------------------|-----------------------------|--------------------------|---------------------------------|-------------------------------------------------| | 1 | 8 : 16 | 87.36 / 92.82 | 87.32 / 92.74 | 5136 | 1378 | 3.73 | | 8 | 8 : 16 | 87.02 / 92.73 | 87.02 / 92.72 | 730 | 334 | 2.18 | *ELECTRA BASE checkpoint Wikipedia and BookCorpus* GPUs | SQuAD version| Batch size / GPU (FP32 : mixed precision) | Accuracy / F1 - FP32 | Accuracy / F1 - mixed precision | Time to train - FP32 (sec) | Time to train - mixed precision (sec) | Time to train speedup (FP32 to mixed precision) | |---------|-----|----------------|------------------|-----------------------------|--------------------------|---------------------------------|-------------------------------------------------| | 8 | v1.1 | 8 : 16 | 85.00 / 90.94 | 85.04 / 90.96 | 5136 | 1378 | 3.73 | | 8 | v2.0 | 8 : 16 | 80.517 / 83.36 | 80.523 / 83.43 | 730 | 334 | 2.18 ##### Fine-tuning accuracy: NVIDIA DGX-2 (16x V100 32GB) Our results were obtained by running the `scripts/run_squad.sh` training script in the tensorflow:20.07-tf2-py3 NGC container on NVIDIA DGX-2 (16x V100 32G) GPUs. *ELECTRA BASE++* | GPUs | Batch size / GPU | Accuracy / F1 - FP32 | Accuracy / F1 - mixed precision | Time to train - FP32 (sec) | Time to train - mixed precision (sec) | Time to train speedup (FP32 to mixed precision) | |---------|---------------------|------------------|-----------------------------|--------------------------|---------------------------------|-------------------------------------------------| | 1 | 32 | 87.14 / 92.69 | 86.95 / 92.69 | 4478 | 1162 | 3.85 | | 16 | 32 | 86.95 / 90.58 | 86.93 / 92.48 | 333 | 229 | 1.45 | ##### Training stability test ###### Pre-training stability test: NVIDIA DGX A100 (8x A100 40GB) *ELECTRA BASE Wikipedia and BookCorpus* Training stability with 48 x DGX A100, TF32 computations and loss reported after Phase 2: | Accuracy Metric | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Mean | Standard Deviation |---|---|---|---|---|---|---|--- |Final Loss| 8.72 | 8.69 | 8.71 | 8.7 | 8.68 | 8.7 | 0.015 ###### Fine-tuning stability test: NVIDIA DGX-1 (8x V100 16GB) *ELECTRA BASE++* Training stability with 8 GPUs, FP16 computations, batch size of 16 on SQuAD v1.1: | Accuracy Metric | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Mean | Standard Deviation |---|---|---|---|---|---|---|--- |Exact Match %| 86.99 | 86.81 | 86.95 | 87.10 | 87.26 | 87.02 | 0.17 | f1 % | 92.7 | 92.66 | 92.65 | 92.61 | 92.97 | 92.72 | 0.14 Training stability with 8 GPUs, FP16 computations, batch size of 16 on SQuAD v2.0: | Accuracy Metric | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Mean | Standard Deviation |---|---|---|---|---|---|---|--- |Exact Match %| 83.00 | 82.84 | 83.11 | 82.70 | 82.94 | 82.91 | 0.15 | f1 % | 85.63 | 85.48 | 85.69 | 85.31 | 85.57 | 85.54 | 0.15 #### Training performance results ##### Training performance: NVIDIA DGX A100 (8x A100 40GB) Our results were obtained by running the `scripts/benchmark_squad.sh` training script in the tensorflow:20.07-tf2-py3 NGC container on NVIDIA DGX A100 (8x A100 40GB) GPUs. Performance numbers (in items/images per second) were averaged over an entire training epoch. 
###### Pre-training NVIDIA DGX A100 (8x A100 40GB) | GPUs | Batch size / GPU (TF32 and FP16) | Accumulation steps (TF32 and FP16) | Sequence length | Throughput - TF32(sequences/sec) | Throughput - mixed precision(sequences/sec) | Throughput speedup (TF32 - mixed precision) | Weak scaling - TF32 | Weak scaling - mixed precision |------------------|----------------------|----------------------|-------------------|-----------------------------------------------|------------------------------------|---------------------------------|----------------------|---------------------------------------------- |1 | 88 and 176| 768 and 384 | 128| 533 |955 |1.79|1.00| 1.00 |8 | 88 and 176| 96 and 48 | 128| 4202|7512|1.79|7.88| 7.87 |1 | 12 and 24 | 2304 and 1152| 512| 90 |171 |1.90|1.00| 1.00 |8 | 12 and 24 | 288 and 144 | 512| 716 |1347|1.88|7.96| 7.88 ###### Fine-tuning NVIDIA DGX A100 (8x A100 40GB) | GPUs | Batch size / GPU | Sequence length | Throughput - TF32 (sequences/sec) | Throughput - mixed precision (sequences/sec) | Throughput speedup (TF32 - mixed precision) | Weak scaling - TF32 | Weak scaling - mixed precision | |------------------|-----------|-----------|-----------------------------------------------|------------------------------------|---------------------------------|----------------------|---------------------------------------------- | 1 | 32 | 384 | 107 | 317 | 2.96 | 1.00 | 1.00 | 8 | 32 | 384 | 828 | 2221| 2.68 | 7.74 | 7.00 ##### Training performance: NVIDIA DGX-1 (8x V100 16GB) Our results were obtained by running the `scripts/benchmark_squad.sh` training scripts in the tensorflow:20.07-tf2-py3 NGC container on NVIDIA DGX-1 with (8x V100 16GB) GPUs. Performance numbers (in sequences per second) were averaged over an entire training epoch. ###### Pre-training NVIDIA DGX-1 (8x V100 16GB) | GPUs | Batch size / GPU (FP32 and FP16) | Accumulation steps (FP32 and FP16) | Sequence length | Throughput - FP32(sequences/sec) | Throughput - mixed precision(sequences/sec) | Throughput speedup (FP32 - mixed precision) | Weak scaling - FP32 | Weak scaling - mixed precision |------------------|----------------------|----------------------|-------------------|-----------------------------------------------|------------------------------------|---------------------------------|----------------------|---------------------------------------------- |1 | 40 and 88| 1689 and 768 | 128| 116 |444 |3.83 |1.00 | 1.00 |8 | 40 and 88| 211 and 96 | 128| 920 |3475|3.77 |7.93 | 7.83 |1 | 6 and 12 | 4608 and 2304| 512| 24 |84 |3.50 |1.00 | 1.00 |8 | 6 and 12 | 576 and 288 | 512| 190 |656 |3.45 |7.92 | 7.81 ###### Fine-tuning NVIDIA DGX-1 (8x V100 16GB) | GPUs | Batch size / GPU (FP32 : mixed precision) | Sequence length | Throughput - FP32 (sequences/sec) | Throughput - mixed precision (sequences/sec) | Throughput speedup (FP32 - mixed precision) | Weak scaling - FP32 | Weak scaling - mixed precision | |------------------|-----------|-----------|-----------------------------------------------|------------------------------------|---------------------------------|----------------------|---------------------------------------------- |1 | 8 : 16| 384| 35| 154| 4.4 | 1.00| 1.00 |8 | 8 : 16| 384|268|1051| 3.92| 7.66| 6.82 To achieve these same results, follow the steps in the [Quick Start Guide](#quick-start-guide). 
##### Training performance: NVIDIA DGX-2 (16x V100 32GB) Our results were obtained by running the `scripts/benchmark_squad.sh` training scripts in the tensorflow:20.07-tf2-py3 NGC container on NVIDIA DGX-2 with (16x V100 32G) GPUs. Performance numbers (in sequences per second) were averaged over an entire training epoch. ###### Pre-training NVIDIA DGX-2 (16x V100 32GB) | GPUs | Batch size / GPU (FP32 and FP16) | Accumulation steps (FP32 and FP16) | Sequence length | Throughput - FP32(sequences/sec) | Throughput - mixed precision(sequences/sec) | Throughput speedup (FP32 - mixed precision) | Weak scaling - FP32 | Weak scaling - mixed precision |------------------|----------------------|----------------------|-------------------|-----------------------------------------------|------------------------------------|---------------------------------|----------------------|---------------------------------------------- |1 | 88 and 176| 768 and 384 | 128| 128 |500 |3.91| 1.00 | 1.00 |8 | 88 and 176| 96 and 48 | 128| 1011|3916|3.87| 7.90 | 7.83 |16| 88 and 176| 48 and 24 | 128| 2018|7773|3.85|15.77 |15.55 |1 | 12 and 24 | 2304 and 1152| 512| 27 |96 |3.55| 1.00 | 1.00 |8 | 12 and 24 | 288 and 144 | 512| 213 |754 |3.54| 7.89 | 7.85 |16| 12 and 24 | 144 and 72 | 512| 426 |1506|3.54| 15.78|15.69 ###### Fine-tuning NVIDIA DGX-2 (16x V100 32GB) | GPUs | Batch size / GPU | Sequence length | Throughput - FP32 (sequences/sec) | Throughput - mixed precision (sequences/sec) | Throughput speedup (FP32 - mixed precision) | Weak scaling - FP32 | Weak scaling - mixed precision | |------|-----------|-------|----------------------------------|---------------------------------------------|---------------------------------------------|---------------------|--------------------------------| | 1 | 16 | 384 | 40 | 184 | 4.6 | 1.00 | 1.00 | | 8 | 16 | 384 | 311 | 1289 | 4.14 | 7.77 | 7.00 | | 16 | 16 | 384 | 626 | 2594 | 4.14 | 15.65 | 14.09 | To achieve these same results, follow the steps in the [Quick Start Guide](#quick-start-guide). #### Inference performance results ##### Inference performance: NVIDIA DGX A100 (1x A100 40GB) Our results were obtained by running the `scripts/benchmark_squad.sh` inferencing benchmarking script in the tensorflow:20.07-tf2-py3 NGC container on NVIDIA DGX A100 (1x A100 40GB) GPU. ###### Fine-tuning inference on NVIDIA DGX A100 (1x A100 40GB) FP16 | Batch size | Sequence length | Throughput Avg (sequences/sec) | Latency Avg (ms) | Latency 90% (ms) | Latency 95% (ms) | Latency 99% (ms) | |------------|-----------------|--------------------------------|------------------|------------------|------------------|------------------| | 1 | 384 | 166 | 6.035 | 5.995 | 6.013 | 6.029 | | 256 | 384 | 886 | 276.26 | 274.53 | 275.276 | 275.946 | | 512 | 384 | 886 | 526.5 | 525.014 | 525.788 | 525.788 | TF32 | Batch size | Sequence length | Throughput Avg (sequences/sec) | Latency Avg (ms) | Latency 90% (ms) | Latency 95% (ms) | Latency 99% (ms) | |------------|-----------------|--------------------------------|------------------|------------------|------------------|------------------| | 1 | 384 | 122 | 8.228 | 8.171 | 8.198 | 8.221 | | 256 | 384 | 342 | 729.293 | 727.990 | 728.505 | 729.027 | | 512 | 384 | 350 | 1429.314 | 1427.719 | 1428.550 | 1428.550 | ##### Inference performance: NVIDIA T4 Our results were obtained by running the `scripts/benchmark_squad.sh` script in the tensorflow:20.07-tf2-py3 NGC container on NVIDIA Tesla T4 (1x T4 16GB) GPU. 
###### Fine-tuning inference on NVIDIA T4 FP16 | Batch size | Sequence length | Throughput Avg (sequences/sec) | Latency Avg (ms) | Latency 90% (ms) | Latency 95% (ms) | Latency 99% (ms) | |------------|-----------------|--------------------------------|------------------|------------------|------------------|------------------| | 1 | 384 | 58 | 17.413 | 17.295 | 17.349 | 17.395 | | 128 | 384 | 185 | 677.298 | 675.211 | 675.674 | 676.269 | | 256 | 384 | 169 | 1451.396 | 1445.070 | 1447.654 | 1450.141 | To achieve these same results, follow the steps in the [Quick Start Guide](#quick-start-guide). ## Release notes ### Changelog July 2020 - Initial release. October 2020 - Data preparation scripts for pre-training. - Pre-training support. - Mixed precision support with Keras AMP policy. - Update beam size in SQuAD fine-tuning from 4 to 5 for higher accuracy. - T4 inference performance. ### Known issues There are no known issues with this model.
TensorFlow/Detection/SSD/models/research/object_detection/builders
builders
graph_rewriter_builder_test
# Copyright 2018 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """Tests for graph_rewriter_builder.""" import mock import tensorflow as tf from object_detection.builders import graph_rewriter_builder from object_detection.protos import graph_rewriter_pb2 class QuantizationBuilderTest(tf.test.TestCase): def testQuantizationBuilderSetsUpCorrectTrainArguments(self): with mock.patch.object( tf.contrib.quantize, 'create_training_graph') as mock_quant_fn: with mock.patch.object(tf.contrib.layers, 'summarize_collection') as mock_summarize_col: graph_rewriter_proto = graph_rewriter_pb2.GraphRewriter() graph_rewriter_proto.quantization.delay = 10 graph_rewriter_proto.quantization.weight_bits = 8 graph_rewriter_proto.quantization.activation_bits = 8 graph_rewrite_fn = graph_rewriter_builder.build( graph_rewriter_proto, is_training=True) graph_rewrite_fn() _, kwargs = mock_quant_fn.call_args self.assertEqual(kwargs['input_graph'], tf.get_default_graph()) self.assertEqual(kwargs['quant_delay'], 10) mock_summarize_col.assert_called_with('quant_vars') def testQuantizationBuilderSetsUpCorrectEvalArguments(self): with mock.patch.object(tf.contrib.quantize, 'create_eval_graph') as mock_quant_fn: with mock.patch.object(tf.contrib.layers, 'summarize_collection') as mock_summarize_col: graph_rewriter_proto = graph_rewriter_pb2.GraphRewriter() graph_rewriter_proto.quantization.delay = 10 graph_rewrite_fn = graph_rewriter_builder.build( graph_rewriter_proto, is_training=False) graph_rewrite_fn() _, kwargs = mock_quant_fn.call_args self.assertEqual(kwargs['input_graph'], tf.get_default_graph()) mock_summarize_col.assert_called_with('quant_vars') if __name__ == '__main__': tf.test.main()
TensorFlow/Detection/SSD/models/research/object_detection/models
models
ssd_inception_v3_feature_extractor_test
# Copyright 2017 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """Tests for object_detection.models.ssd_inception_v3_feature_extractor.""" import numpy as np import tensorflow as tf from object_detection.models import ssd_feature_extractor_test from object_detection.models import ssd_inception_v3_feature_extractor class SsdInceptionV3FeatureExtractorTest( ssd_feature_extractor_test.SsdFeatureExtractorTestBase): def _create_feature_extractor(self, depth_multiplier, pad_to_multiple, is_training=True): """Constructs a SsdInceptionV3FeatureExtractor. Args: depth_multiplier: float depth multiplier for feature extractor pad_to_multiple: the nearest multiple to zero pad the input height and width dimensions to. is_training: whether the network is in training mode. Returns: an ssd_inception_v3_feature_extractor.SsdInceptionV3FeatureExtractor. """ min_depth = 32 return ssd_inception_v3_feature_extractor.SSDInceptionV3FeatureExtractor( is_training, depth_multiplier, min_depth, pad_to_multiple, self.conv_hyperparams_fn, override_base_feature_extractor_hyperparams=True) def test_extract_features_returns_correct_shapes_128(self): image_height = 128 image_width = 128 depth_multiplier = 1.0 pad_to_multiple = 1 expected_feature_map_shape = [(2, 13, 13, 288), (2, 6, 6, 768), (2, 2, 2, 2048), (2, 1, 1, 512), (2, 1, 1, 256), (2, 1, 1, 128)] self.check_extract_features_returns_correct_shape( 2, image_height, image_width, depth_multiplier, pad_to_multiple, expected_feature_map_shape) def test_extract_features_returns_correct_shapes_with_dynamic_inputs(self): image_height = 128 image_width = 128 depth_multiplier = 1.0 pad_to_multiple = 1 expected_feature_map_shape = [(2, 13, 13, 288), (2, 6, 6, 768), (2, 2, 2, 2048), (2, 1, 1, 512), (2, 1, 1, 256), (2, 1, 1, 128)] self.check_extract_features_returns_correct_shapes_with_dynamic_inputs( 2, image_height, image_width, depth_multiplier, pad_to_multiple, expected_feature_map_shape) def test_extract_features_returns_correct_shapes_299(self): image_height = 299 image_width = 299 depth_multiplier = 1.0 pad_to_multiple = 1 expected_feature_map_shape = [(2, 35, 35, 288), (2, 17, 17, 768), (2, 8, 8, 2048), (2, 4, 4, 512), (2, 2, 2, 256), (2, 1, 1, 128)] self.check_extract_features_returns_correct_shape( 2, image_height, image_width, depth_multiplier, pad_to_multiple, expected_feature_map_shape) def test_extract_features_returns_correct_shapes_enforcing_min_depth(self): image_height = 299 image_width = 299 depth_multiplier = 0.5**12 pad_to_multiple = 1 expected_feature_map_shape = [(2, 35, 35, 128), (2, 17, 17, 128), (2, 8, 8, 192), (2, 4, 4, 32), (2, 2, 2, 32), (2, 1, 1, 32)] self.check_extract_features_returns_correct_shape( 2, image_height, image_width, depth_multiplier, pad_to_multiple, expected_feature_map_shape) def test_extract_features_returns_correct_shapes_with_pad_to_multiple(self): image_height = 299 image_width = 299 depth_multiplier = 1.0 
pad_to_multiple = 32 expected_feature_map_shape = [(2, 37, 37, 288), (2, 18, 18, 768), (2, 8, 8, 2048), (2, 4, 4, 512), (2, 2, 2, 256), (2, 1, 1, 128)] self.check_extract_features_returns_correct_shape( 2, image_height, image_width, depth_multiplier, pad_to_multiple, expected_feature_map_shape) def test_extract_features_raises_error_with_invalid_image_size(self): image_height = 32 image_width = 32 depth_multiplier = 1.0 pad_to_multiple = 1 self.check_extract_features_raises_error_with_invalid_image_size( image_height, image_width, depth_multiplier, pad_to_multiple) def test_preprocess_returns_correct_value_range(self): image_height = 128 image_width = 128 depth_multiplier = 1 pad_to_multiple = 1 test_image = np.random.rand(4, image_height, image_width, 3) feature_extractor = self._create_feature_extractor(depth_multiplier, pad_to_multiple) preprocessed_image = feature_extractor.preprocess(test_image) self.assertTrue(np.all(np.less_equal(np.abs(preprocessed_image), 1.0))) def test_variables_only_created_in_scope(self): depth_multiplier = 1 pad_to_multiple = 1 scope_name = 'InceptionV3' self.check_feature_extractor_variables_under_scope( depth_multiplier, pad_to_multiple, scope_name) if __name__ == '__main__': tf.test.main()
PyTorch/SpeechRecognition/Jasper/configs
configs
jasper10x5dr_speedp-online-discrete
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. name: "Jasper" labels: [" ", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "'"] input_val: audio_dataset: &val_dataset sample_rate: &sample_rate 16000 trim_silence: true normalize_transcripts: true filterbank_features: &val_features normalize: per_feature sample_rate: *sample_rate window_size: 0.02 window_stride: 0.01 window: hann n_filt: &n_filt 64 n_fft: 512 frame_splicing: &frame_splicing 1 dither: 0.00001 pad_align: 16 # For training we keep samples < 16.7s and apply augmentation input_train: audio_dataset: <<: *val_dataset max_duration: 16.7 ignore_offline_speed_perturbation: true speed_perturbation: discrete: true min_rate: 0.9 max_rate: 1.1 filterbank_features: <<: *val_features max_duration: 16.7 spec_augment: freq_masks: 0 max_freq: 20 time_masks: 0 max_time: 75 jasper: encoder: init: xavier_uniform in_feats: *n_filt frame_splicing: *frame_splicing activation: relu use_conv_masks: true blocks: - &Conv1 filters: 256 repeat: 1 kernel_size: [11] stride: [2] dilation: [1] dropout: 0.2 residual: false - &B1 filters: 256 repeat: 5 kernel_size: [11] stride: [1] dilation: [1] dropout: 0.2 residual: true residual_dense: true - *B1 - &B2 filters: 384 repeat: 5 kernel_size: [13] stride: [1] dilation: [1] dropout: 0.2 residual: true residual_dense: true - *B2 - &B3 filters: 512 repeat: 5 kernel_size: [17] stride: [1] dilation: [1] dropout: 0.2 residual: true residual_dense: true - *B3 - &B4 filters: 640 repeat: 5 kernel_size: [21] stride: [1] dilation: [1] dropout: 0.3 residual: true residual_dense: true - *B4 - &B5 filters: 768 repeat: 5 kernel_size: [25] stride: [1] dilation: [1] dropout: 0.3 residual: true residual_dense: true - *B5 - &Conv2 filters: 896 repeat: 1 kernel_size: [29] stride: [1] dilation: [2] dropout: 0.4 residual: false - &Conv3 filters: &enc_feats 1024 repeat: 1 kernel_size: [1] stride: [1] dilation: [1] dropout: 0.4 residual: false decoder: in_feats: *enc_feats init: xavier_uniform
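For reference, the YAML anchors used above (for example `&val_dataset` with `<<: *val_dataset`, and the `&B1`/`*B1` block aliases) are resolved by any standard YAML loader, so repeated blocks only need to be defined once. Below is a minimal sketch of inspecting the resolved config with PyYAML; the file path is an assumption, and the actual Jasper training scripts use their own config loading code.
```
# Minimal sketch: load the config and inspect the resolved structure.
# Assumes the file is saved as configs/jasper10x5dr_speedp-online-discrete.yaml.
import yaml

with open("configs/jasper10x5dr_speedp-online-discrete.yaml") as f:
    cfg = yaml.safe_load(f)  # anchors, aliases and '<<' merge keys are resolved here

print(len(cfg["labels"]))  # 28 output symbols: space, a-z and the apostrophe
print(cfg["input_train"]["audio_dataset"]["max_duration"])  # 16.7, merged on top of the shared val dataset
blocks = cfg["jasper"]["encoder"]["blocks"]
print(len(blocks), [b["filters"] for b in blocks])  # 13 encoder blocks after alias expansion
```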
PyTorch/Recommendation/NCF
NCF
inference
# # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import torch.jit import time from argparse import ArgumentParser import numpy as np import torch from neumf import NeuMF import dllogger def parse_args(): parser = ArgumentParser(description="Benchmark inference performance of the NCF model") parser.add_argument('--load_checkpoint_path', default=None, type=str, help='Path to the checkpoint file to be loaded before training/evaluation') parser.add_argument('--n_users', default=138493, type=int, help='Number of users. Defaults to the number of users in the ml-20m dataset after preprocessing') parser.add_argument('--n_items', default=26744, type=int, help='Number of items. Defaults to the number of users in the ml-20m dataset after preprocessing') parser.add_argument('-f', '--factors', type=int, default=64, help='Number of predictive factors') parser.add_argument('--dropout', type=float, default=0.5, help='Dropout probability, if equal to 0 will not use dropout at all') parser.add_argument('--layers', nargs='+', type=int, default=[256, 256, 128, 64], help='Sizes of hidden layers for MLP') parser.add_argument('--batch_sizes', default='1,4,16,64,256,1024,4096,16384,65536,262144,1048576', type=str, help='A list of comma-separated batch size values to benchmark') parser.add_argument('--num_batches', default=200, type=int, help='Number of batches for which to measure latency and throughput') parser.add_argument('--fp16', action='store_true', help='Cast the model to FP16 precision', default=False) parser.add_argument('--log_path', default='log.json', type=str, help='Path for the JSON training log') return parser.parse_args() def main(): args = parse_args() dllogger.init(backends=[dllogger.JSONStreamBackend(verbosity=dllogger.Verbosity.VERBOSE, filename=args.log_path), dllogger.StdOutBackend(verbosity=dllogger.Verbosity.VERBOSE)]) dllogger.log(data=vars(args), step='PARAMETER') model = NeuMF(nb_users=args.n_users, nb_items=args.n_items, mf_dim=args.factors, mlp_layer_sizes=args.layers, dropout=args.dropout) model = model.cuda() if args.load_checkpoint_path: state_dict = torch.load(args.load_checkpoint_path) model.load_state_dict(state_dict) if args.fp16: model.half() model.eval() batch_sizes = args.batch_sizes.split(',') batch_sizes = [int(s) for s in batch_sizes] result_data = {} for batch_size in batch_sizes: print('benchmarking batch size: ', batch_size) users = torch.cuda.LongTensor(batch_size).random_(0, args.n_users) items = torch.cuda.LongTensor(batch_size).random_(0, args.n_items) latencies = [] for i in range(args.num_batches): torch.cuda.synchronize() start = time.time() _ = model(users, items, sigmoid=True) torch.cuda.synchronize() end_time = time.time() if i < 10: # warmup iterations continue latencies.append(end_time - start) result_data[f'batch_{batch_size}_mean_throughput'] = batch_size / np.mean(latencies) result_data[f'batch_{batch_size}_mean_latency'] = np.mean(latencies) result_data[f'batch_{batch_size}_p90_latency'] = 
np.percentile(latencies, 90) result_data[f'batch_{batch_size}_p95_latency'] = np.percentile(latencies, 95) result_data[f'batch_{batch_size}_p99_latency'] = np.percentile(latencies, 99) for batch_size in batch_sizes: dllogger.metadata(f'batch_{batch_size}_mean_throughput', {'unit': 'samples/s'}) for p in ['mean', 'p90', 'p95', 'p99']: dllogger.metadata(f'batch_{batch_size}_{p}_latency', {'unit': 's'}) dllogger.log(data=result_data, step=tuple()) dllogger.flush() return if __name__ == '__main__': main()
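For reference, a typical invocation of this benchmark using only the flags defined in the argument parser above might look as follows; the checkpoint path is a placeholder, and `--load_checkpoint_path` can be omitted to benchmark a randomly initialized model.
```
# Illustrative call; the checkpoint path is a placeholder.
python inference.py \
    --load_checkpoint_path /checkpoints/ncf_model.pth \
    --batch_sizes 1,1024,1048576 \
    --num_batches 200 \
    --fp16 \
    --log_path inference_log.json
```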
PyTorch/Classification/ConvNets/triton/deployment_toolkit
deployment_toolkit
__init__
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
Kaldi/SpeechRecognition/scripts/docker
docker
dataset_setup
#!/bin/bash # Copyright (c) 2019 NVIDIA CORPORATION. All rights reserved. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. /workspace/scripts/docker/prepare_data.sh chown -R $1:$2 /data/ mv /data/* /mnt/data/
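The two positional arguments are not named in the script; from the `chown -R $1:$2 /data/` call they appear to be the user and group that should own the prepared data. Under that assumption, and assuming the scripts are mounted at `/workspace/scripts` inside the container as the `prepare_data.sh` path suggests, a typical invocation would be:
```
# Assumes $1 and $2 are the target UID and GID for the prepared dataset.
bash /workspace/scripts/docker/dataset_setup.sh $(id -u) $(id -g)
```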
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/scripts
scripts
build_engines
#!/bin/bash ## # Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # # Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # # Neither the name of the NVIDIA CORPORATION nor the # names of its contributors may be used to endorse or promote products # derived from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE # DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # MODEL_DIR="/models/" ENGINE_DIR="/engines/" TACOTRON2_JSON="${MODEL_DIR}/tacotron2.json" WAVEGLOW_ONNX="${MODEL_DIR}/waveglow.onnx" DENOISER_JSON="${MODEL_DIR}/denoiser.json" TACOTRON2_ENG="${ENGINE_DIR}/tacotron2.eng" WAVEGLOW_ENG="${ENGINE_DIR}/waveglow_chunk160_fp16.eng" DENOISER_ENG="${ENGINE_DIR}/denoiser.eng" BIN_DIR="./build/bin" BENCHMARK_BIN="${BIN_DIR}/benchmark" BUILD_TACOTRON2_BIN="${BIN_DIR}/build_tacotron2" BUILD_WAVEGLOW_BIN="${BIN_DIR}/build_waveglow" MAX_BATCH_SIZE=32 die() { echo "ERROR: ${@}" 1>&2 exit 1 } AMP="amp" if [[ "${#}" == "1" ]]; then if [[ "${1}" == "0" || "${1}" == "no" ]]; then AMP="fp32" elif [[ "${1}" == "1" || "${1}" == "yes" ]]; then AMP="amp" else echo "Invalid arguments." exit 1 fi fi echo echo "Building with -F${AMP}" echo ## build tacotron2 engine ./build/bin/build_tacotron2 "${TACOTRON2_JSON}" "${TACOTRON2_ENG}" -B ${MAX_BATCH_SIZE} -I 400 -F${AMP} || die "Failed to build tacotron2 engine." rm -v "${TACOTRON2_JSON}" ## build wave glow engine ./build/bin/build_waveglow "${WAVEGLOW_ONNX}" "${WAVEGLOW_ENG}" -B ${MAX_BATCH_SIZE} -F${AMP} || die "Failed to build waveglow engine." rm -v "${WAVEGLOW_ONNX}" ## build denoiser engine ./build/bin/build_denoiser "${DENOISER_JSON}" "${DENOISER_ENG}" -B ${MAX_BATCH_SIZE} -F${AMP} || die "Failed to build waveglow engine." rm -v "${DENOISER_JSON}" ls "${TACOTRON2_ENG}" "${WAVEGLOW_ENG}" "${DENOISER_ENG}" || die "Unable to access built engines." echo "Successfully built '${TACOTRON2_ENG}', '${WAVEGLOW_ENG}', and '${DENOISER_ENG}'"
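The script takes a single optional argument selecting the precision: `0`/`no` builds FP32 engines, while `1`/`yes` (the default) builds AMP engines. It expects the exported models under `/models/`, the builder binaries under `./build/bin`, and writes the engines to `/engines/`, so it is presumably run from the directory that contains the `build/` tree, typically inside the container. A sketch of both invocations:
```
# Build AMP engines (default behaviour)
bash scripts/build_engines.sh yes

# Build FP32 engines instead
bash scripts/build_engines.sh no
```
Note that the script deletes each source model file (`tacotron2.json`, `waveglow.onnx`, `denoiser.json`) after its engine is built successfully.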
PyTorch/Classification/GPUNet/triton/runner/maintainer/docker/containers
containers
triton_server_container
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import pathlib from threading import Thread from typing import Dict, Generator, Union from docker.models.containers import ExecResult from docker.types import DeviceRequest, Ulimit if __name__ == "__main__" and __package__ is None: __package__ = pathlib.Path(__file__).parent.name from ....logger import LOGGER from ...exceptions import ContainerNotStarted from ..container import DockerContainer class TritonServerContainer(DockerContainer): def __init__( self, name: str, command: str, image: str, volumes: Dict, devices: Union[list, int], environment: Dict, log_file: Union[pathlib.Path, str], network: str = "host", shm_size: str = "1G", ): """ Initialize Triton Server Container Args: name: Container name command: Triton Server command to exec on container start image: Docker Image volumes: Volumes to mount inside container devices: Devices which has to be visible in container environment: Environment variables log_file: Path where logs should be saved network: Network mode shm_size: Shared memory size """ super().__init__(name) self._image = image self._command = command self._volumes = volumes self._devices = devices self._environment = environment self._network = network self._shm_size = shm_size self._triton_exec = None self._logging_thread = None self._log_file_path = pathlib.Path(log_file) def start(self) -> None: """ Start Triton Server Container """ devices = [ DeviceRequest(capabilities=[["gpu"]], device_ids=self._devices), ] LOGGER.info(f"Triton environment: {json.dumps(self._environment, indent=4)}") LOGGER.info(f"Starting Triton container {self.name}.") self._container = self._docker_client.containers.run( image=self._image, name=self.name, device_requests=devices, detach=True, tty=True, shm_size=self._shm_size, ulimits=[ Ulimit(name="memlock", soft=-1, hard=-1), Ulimit(name="stack", soft=67108864, hard=67108864), ], volumes=self._volumes, environment=self._environment, network_mode=self._network, auto_remove=True, ipc_mode="host", ) LOGGER.info("Triton command:") LOGGER.info(f" {self._command}") LOGGER.info(f"Starting Triton Server {self.name}.") self._triton_exec = self._docker_api_client.exec_create( container=self._container.id, cmd=self._command, ) stream_generator = self._docker_api_client.exec_start(exec_id=self._triton_exec["Id"], stream=True) self._logging_thread = Thread(target=TritonServerContainer._logging, args=(self, stream_generator), daemon=True) self._logging_thread.start() def stop(self) -> None: """ Stop Triton Server Container and save logs to file """ if self._container is not None: triton_result = self._docker_api_client.exec_inspect(self._triton_exec["Id"]) if triton_result.get("ExitCode") not in (0, None): LOGGER.info( f"Triton Inference Server instance {self.name} failed. 
Exit code: {triton_result.get('ExitCode')}" ) LOGGER.info(f"Stopping triton server {self.name}.") self._container.stop() self._container = None self._docker_client.close() self._docker_api_client.close() def run(self, command: str) -> ExecResult: """ Run command in container Args: command: Command to execute Returns: ExecResult """ if not self._container: raise ContainerNotStarted("Triton Server Container is not running. Use .start() first.") return self._container.exec_run(command) def _logging(self, generator: Generator) -> None: """Triton logging thread for Triton Inference Server Args: generator (string generator): Triton log stream. """ with open(self._log_file_path, mode="w") as file: try: while True: log = next(generator) txt = log.decode("utf-8") file.write(txt) except StopIteration: LOGGER.info(f"Saving Triton Inference Server {self.name} logs in {self._log_file_path}.")
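A minimal usage sketch based only on the constructor signature above is shown below; the image tag, Triton launch command, volume mapping and device id are illustrative assumptions rather than values used by the runner itself.
```
# Illustrative only: image, command, paths and device ids are assumptions.
import pathlib

server = TritonServerContainer(
    name="gpunet-triton-server",
    command="tritonserver --model-repository=/models",  # assumed launch command
    image="nvcr.io/nvidia/tritonserver:22.02-py3",       # assumed image tag
    volumes={"/home/user/model_repository": {"bind": "/models", "mode": "rw"}},
    devices=["0"],                                       # GPU ids passed to DeviceRequest
    environment={"NVIDIA_VISIBLE_DEVICES": "0"},
    log_file=pathlib.Path("logs/triton_server.log"),
)

server.start()                     # starts the container and the log-streaming thread
result = server.run("nvidia-smi")  # exec an arbitrary command inside the container
server.stop()                      # saves logs and stops the container
```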
Tools/DGLPyTorch/SyntheticGraphGeneration/syngen/analyzer/graph
graph
plotting
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import os from functools import partial from typing import Dict import matplotlib import matplotlib.pyplot as plt import numpy as np from scipy.sparse.linalg import eigsh from syngen.analyzer.graph.graph import safeSNAP from syngen.utils.types import ColumnType TMP_NAME = "tmp" def common_plot(f, ax, *graphs, **kwargs): for i, G in enumerate(graphs, 1): f(G, i, ax, **kwargs) if len(graphs) > 1: ax.legend() def parse_file(plot, filename): parsed_filename = f"{plot}.{filename}.tab" with open(parsed_filename, "r") as f: lines = f.read().splitlines() x_values = [] y_values = [] for line in lines: if len(line) and "#" not in line: x, y = line.split() x_values.append(float(x)) y_values.append(float(y)) return x_values, y_values def clear_files(plot, filename): files_to_clean = [ f"./{plot}.{filename}.plt", f"./{plot}.{filename}.png", f"./{plot}.{filename}.tab", ] for file in files_to_clean: try: os.remove(file) except FileNotFoundError: print(f"File {file} attempted to be removed, but not found") def parse_snap_object(snap_object): zipped = [(pair.GetVal1(), pair.GetVal2()) for pair in snap_object] x, y = zip(*zipped) return x, y def get_degree_dist(snapGraph): return parse_snap_object(snapGraph.GetDegCnt()) def get_in_degree_dist(snapGraph): return parse_snap_object(snapGraph.GetInDegCnt()) def get_out_degree_dist(snapGraph): return parse_snap_object(snapGraph.GetOutDegCnt()) def get_clustering_coef_dist(snapGraph): return parse_snap_object(snapGraph.GetClustCf(True, -1)[1]) def get_strongly_connected_component(snapGraph): return parse_snap_object(snapGraph.GetSccSzCnt()) def get_weakly_connected_component(snapGraph): return parse_snap_object(snapGraph.GetWccSzCnt()) @safeSNAP def _add_to_axis_idd(G, i, ax): graph_name = G.name or f"Graph {i}" title = "Log-log in degree distribution" G = G.snapGraph x, y = get_in_degree_dist(G) ax.set_xscale("log") ax.set_xlabel("In degree") ax.set_yscale("log") ax.set_ylabel("Number of nodes") ax.set_title(title) ax.scatter(x, y, label=graph_name, s=5) @safeSNAP def _add_to_axis_odd(G, i, ax): graph_name = G.name or f"Graph {i}" title = "Log-log out degree distribution" G = G.snapGraph x, y = get_out_degree_dist(G) ax.set_xscale("log") ax.set_xlabel("Out degree") ax.set_yscale("log") ax.set_ylabel("Number of nodes") ax.set_title(title) ax.scatter(x, y, label=graph_name, s=5) @safeSNAP def _add_to_axis_dd(G, i, ax): graph_name = G.name or f"Graph {i}" title = "Log-log degree distribution" G = G.snapGraph x, y = get_degree_dist(G) ax.set_xscale("log") ax.set_xlabel("Degree") ax.set_yscale("log") ax.set_ylabel("Number of nodes") ax.set_title(title) ax.scatter(x, y, label=graph_name, s=5) @safeSNAP def _add_to_axis_ccd(G, i, ax): graph_name = G.name or f"Graph {i}" title = "Log-log distribution of clustering coefficient" G = G.snapGraph x, y = get_clustering_coef_dist(G) ax.set_xscale("log") ax.set_xlabel("Degree") ax.set_yscale("symlog") 
ax.set_ylabel("Clustering coefficient") ax.set_title(title) ax.scatter(x, y, label=graph_name, s=5) @safeSNAP def _add_to_axis_scc(G, i, ax): graph_name = G.name or f"Graph {i}" title = "Log-log distribution of sizes of strongly connected components" G = G.snapGraph x, y = get_strongly_connected_component(G) ax.set_xscale("log") ax.set_xlabel("Size of strongly connected component") ax.set_yscale("symlog") ax.set_ylabel("Number of components") ax.set_title(title) ax.scatter(x, y, label=graph_name, s=5) @safeSNAP def _add_to_axis_wcc(G, i, ax): is_directed = G.is_directed weakly_string = " weakly " if is_directed else " " title = ( f"Log-log distribution of sizes of{weakly_string}connected components" ) graph_name = G.name or f"Graph {i}" G = G.snapGraph x, y = get_weakly_connected_component(G) ax.set_xscale("log") ax.set_xlabel(f"Size of{weakly_string}connected component") ax.set_yscale("symlog") ax.set_ylabel("Number of components") ax.set_title(title) ax.scatter(x, y, label=graph_name, s=5) @safeSNAP def _add_to_axis_hp(G, i, ax, hop_plot_iters=128): is_directed = G.is_directed graph_name = G.name or f"Graph {i}" title = "Hop plot" plot = "hop" G = G.snapGraph G.PlotHops(TMP_NAME, "Hop plot", is_directed, hop_plot_iters) num_hops, num_nodes = parse_file(plot=plot, filename=TMP_NAME) num_hops = [int(num_hop) for num_hop in num_hops] parse_file(plot=plot, filename=TMP_NAME) clear_files(plot=plot, filename=TMP_NAME) ax.set_xlabel("Number of hops") ax.set_ylabel("Number of pairs of nodes") ax.set_yscale("log") ax.set_title(title) ax.plot(num_hops, num_nodes, "--", marker="o", label=graph_name) @safeSNAP def _add_to_axis_svr(G, i, ax, num_spectral_values=100): graph_name = G.name or f"Graph {i}" title = "Singular value rank distribution" plot = "sngVal" G = G.snapGraph G.PlotSngValRank(num_spectral_values, TMP_NAME, title) ranks, sin_values = parse_file(plot, filename=TMP_NAME) ranks = [int(rank) for rank in ranks] parse_file(plot=plot, filename=TMP_NAME) clear_files(plot=plot, filename=TMP_NAME) ax.set_xlabel("Rank") ax.set_ylabel("Singular value") ax.set_yscale("log") ax.set_title(title) ax.plot( ranks, sin_values, "--", marker="o", label=graph_name, markersize=5 ) @safeSNAP def _add_to_axis_evr(G, i, ax, num_spectral_values=100): graph_name = G.name or f"Graph {i}" title = "Eigenvalue rank distribution" plot = "eigVal" G = G.snapGraph G.PlotEigValRank(num_spectral_values, TMP_NAME, title) ranks, eig_values = parse_file(plot, filename=TMP_NAME) ranks = [int(rank) for rank in ranks] parse_file(plot=plot, filename=TMP_NAME) clear_files(plot=plot, filename=TMP_NAME) ax.set_xlabel("Rank") ax.set_ylabel("Eigenvalue") ax.set_yscale("log") ax.set_title(title) ax.plot( ranks, eig_values, "--", marker="o", label=graph_name, markersize=5 ) @safeSNAP def _add_to_axis_svd(G, i, ax, num_spectral_values=100): graph_name = G.name or f"Graph {i}" title = "Singular value distribution" plot = "sngDistr" G = G.snapGraph G.PlotSngValDistr(num_spectral_values, TMP_NAME, title) sin_values, counts = parse_file(plot=plot, filename=TMP_NAME) parse_file(plot=plot, filename=TMP_NAME) clear_files(plot=plot, filename=TMP_NAME) ax.set_xlabel("Singular value") ax.set_ylabel("Count") ax.set_yscale("symlog") ax.set_title(title) ax.plot( sin_values, counts, "--", marker="o", label=graph_name, markersize=5 ) @safeSNAP def _add_to_axis_evd(G, i, ax, num_spectral_values=100): graph_name = G.name or f"Graph {i}" title = "Eigenvalue distribution" plot = "eigDistr" G = G.snapGraph G.PlotEigValDistr(num_spectral_values, TMP_NAME, 
title) eig_values, counts = parse_file(plot, filename=TMP_NAME) parse_file(plot=plot, filename=TMP_NAME) clear_files(plot=plot, filename=TMP_NAME) ax.set_xlabel("Eigenvalue") ax.set_ylabel("Count") ax.set_yscale("symlog") ax.set_title(title) ax.plot( eig_values, counts, "--", marker="o", label=graph_name, markersize=5 ) @safeSNAP def _add_to_axis_lsv(G, i, ax): graph_name = G.name or f"Graph {i}" title = "Leading singular vector rank distribution" plot = "sngVecL" G = G.snapGraph G.PlotSngVec(TMP_NAME, title) ranks, components = parse_file(plot, filename=TMP_NAME) ranks = [int(rank) for rank in ranks] parse_file(plot=plot, filename=TMP_NAME) clear_files(plot=plot, filename=TMP_NAME) ax.set_xlabel("Rank") ax.set_ylabel("Component of leading singular vector") ax.set_yscale("log") ax.set_title(title) ax.plot( ranks, components, "--", marker="o", label=graph_name, markersize=5 ) def plot_node_degree_centrality_feat_dist( data, feat_name_col_info: Dict[str, ColumnType], src_col: str = "src", dst_col: str = "dst", ): # - suppress matplotlib debug logger matplotlib_logger = logging.getLogger("matplotlib") matplotlib_logger.setLevel(logging.WARNING) src_degree = ( data.groupby(src_col, as_index=False) .count()[[src_col, dst_col]] .rename(columns={dst_col: "src_degree"}) ) # - normalized src_degree src_degree_vals = src_degree["src_degree"].values normalized_src_degree = (src_degree_vals - np.min(src_degree_vals)) / ( np.max(src_degree_vals) - np.min(src_degree_vals) ) src_degree.loc[:, "src_degree"] = normalized_src_degree # - normalized dst_degree dst_degree = ( data.groupby(dst_col, as_index=False) .count()[[src_col, dst_col]] .rename(columns={src_col: "dst_degree"}) ) dst_degree_vals = dst_degree["dst_degree"].values normalized_dst_degree = (dst_degree_vals - np.min(dst_degree_vals)) / ( np.max(dst_degree_vals) - np.min(dst_degree_vals) ) dst_degree.loc[:, "dst_degree"] = normalized_dst_degree # - merge data = data.merge(src_degree, how="outer", on=src_col) data = data.merge(dst_degree, how="outer", on=dst_col) # - normalize continuous columns for feat, col_info in feat_name_col_info.items(): col_type = col_info["type"] if col_type == ColumnType.CONTINUOUS: vals = data[feat].values min_, max_ = np.min(vals), np.max(vals) data.loc[:, feat] = (vals - min_) / (max_ - min_) # - plot heat maps def heat_map(x, y): heatmap, xedges, yedges = np.histogram2d(x, y, bins=30) extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]] return heatmap.T, extent nr = 1 # - num plots per row fig, axs = plt.subplots(len(feat_name_col_info), nr, figsize=(12, 8)) c = 0 for feat in feat_name_col_info: if nr * len(feat_name_col_info) == 1: heatmap, extent = heat_map( data["src_degree"].values, data[feat].values ) axs.imshow(heatmap, extent=extent, origin="lower") axs.set_xlabel("src_degree") axs.set_ylabel("feat") else: # - src degree dist heatmap, extent = heat_map( data["src_degree"].values, data[feat].values ) axs[c].imshow(heatmap, extent=extent, origin="lower") axs[c].set_xlabel("src_degree") axs[c].set_ylabel("feat") c += nr return fig # Degree distribution plot_degree_distribution = partial(common_plot, _add_to_axis_dd) # In degree distribution plot_in_degree_distribution = partial(common_plot, _add_to_axis_idd) # Out degree distribution plot_out_degree_distribution = partial(common_plot, _add_to_axis_odd) # Hop plot plot_hopplot = partial(common_plot, _add_to_axis_hp) # Clustering coefficient distribution plot_clustering_coef_distribution = partial(common_plot, _add_to_axis_ccd) # Strongly connected component 
distribution plot_strongly_connected_component_distribution = partial( common_plot, _add_to_axis_scc ) # Weakly connected component distribution plot_weakly_connected_component_distribution = partial( common_plot, _add_to_axis_wcc ) # Eigenvalue rank distribution plot_eigenvalue_rank_distribution = partial(common_plot, _add_to_axis_evr) # Singular value rank distribution plot_singular_value_rank_distribution = partial(common_plot, _add_to_axis_svr) # Eigenvalue rank distribution plot_eigenvalue_histogram_distribution = partial(common_plot, _add_to_axis_evd) # Singular value rank distribution plot_singular_value_histogram_distribution = partial( common_plot, _add_to_axis_svd ) # Leading singular vector rank distribution plot_leading_singular_vector_rank = partial(common_plot, _add_to_axis_lsv)
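A brief usage sketch of the `partial`-based helpers defined above; `original_graph` and `generated_graph` are placeholders for syngen graph objects exposing the `.snapGraph`, `.name`, and `.is_directed` attributes the axis functions rely on:

```python
# Hedged usage sketch; original_graph / generated_graph are placeholder syngen graphs.
import matplotlib.pyplot as plt

fig, (ax_degree, ax_clustering) = plt.subplots(1, 2, figsize=(12, 5))

# Each helper takes the target axis first, then one or more graphs;
# passing several graphs overlays their curves and adds a legend.
plot_degree_distribution(ax_degree, original_graph, generated_graph)
plot_clustering_coef_distribution(ax_clustering, original_graph, generated_graph)

fig.tight_layout()
fig.savefig("graph_statistics.png")
```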
PyTorch/LanguageModeling/BERT/scripts/docker
docker
launch
#!/bin/bash

CMD=${1:-/bin/bash}
NV_VISIBLE_DEVICES=${2:-"all"}
EXTRA_MOUNTS=${3:-""}
IMAGE=${4:-"bert"}
DOCKER_BRIDGE=${5:-"host"}

docker run -it --rm \
  --gpus \"device=$NV_VISIBLE_DEVICES\" \
  --net=$DOCKER_BRIDGE \
  --shm-size=1g \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  -v $PWD:/workspace/bert \
  -v $PWD/results:/results \
  ${EXTRA_MOUNTS} \
  ${IMAGE} $CMD
TensorFlow2/Detection/Efficientdet/efficientnet/blocks
blocks
mb_conv_block
import tensorflow as tf from typing import Any, Dict, Optional, Text, Tuple from efficientnet.layers import get_activation from efficientnet.blocks import conv2d_block __all__ = ['mb_conv_block'] def mb_conv_block(inputs: tf.Tensor, block: dict, config: dict, prefix: Text = None): """Mobile Inverted Residual Bottleneck. Args: inputs: the Keras input to the block block: BlockConfig, arguments to create a Block config: ModelConfig, a set of model parameters prefix: prefix for naming all layers Returns: the output of the block """ use_se = config['use_se'] activation = get_activation(config['activation']) drop_connect_rate = config['drop_connect_rate'] data_format = tf.keras.backend.image_data_format() use_depthwise = block['conv_type'] != 'no_depthwise' prefix = prefix or '' filters = block['input_filters'] * block['expand_ratio'] x = inputs if block['fused_conv']: # If we use fused mbconv, skip expansion and use regular conv. x = conv2d_block(x, filters, config, kernel_size=block['kernel_size'], strides=block['strides'], activation=activation, name=prefix + 'fused') else: if block['expand_ratio'] != 1: # Expansion phase kernel_size = (1, 1) if use_depthwise else (3, 3) x = conv2d_block(x, filters, config, kernel_size=kernel_size, activation=activation, name=prefix + 'expand') # Depthwise Convolution if use_depthwise: x = conv2d_block(x, conv_filters=None, config=config, kernel_size=block['kernel_size'], strides=block['strides'], activation=activation, depthwise=True, name=prefix + 'depthwise') # Squeeze and Excitation phase if use_se: assert block['se_ratio'] is not None assert 0 < block['se_ratio'] <= 1 num_reduced_filters = max(1, int( block['input_filters'] * block['se_ratio'] )) if data_format == 'channels_first': se_shape = (filters, 1, 1) else: se_shape = (1, 1, filters) se = tf.keras.layers.GlobalAveragePooling2D(name=prefix + 'se_squeeze')(x) se = tf.keras.layers.Reshape(se_shape, name=prefix + 'se_reshape')(se) se = conv2d_block(se, num_reduced_filters, config, use_bias=True, use_batch_norm=False, activation=activation, name=prefix + 'se_reduce') se = conv2d_block(se, filters, config, use_bias=True, use_batch_norm=False, activation='sigmoid', name=prefix + 'se_expand') x = tf.keras.layers.multiply([x, se], name=prefix + 'se_excite') # Output phase x = conv2d_block(x, block['output_filters'], config, activation=None, name=prefix + 'project') # Add identity so that quantization-aware training can insert quantization # ops correctly. x = tf.keras.layers.Activation(get_activation('identity'), name=prefix + 'id')(x) if (block['id_skip'] and all(s == 1 for s in block['strides']) and block['input_filters'] == block['output_filters']): if drop_connect_rate and drop_connect_rate > 0: # Apply dropconnect # The only difference between dropout and dropconnect in TF is scaling by # drop_connect_rate during training. See: # https://github.com/keras-team/keras/pull/9898#issuecomment-380577612 x = tf.keras.layers.Dropout(drop_connect_rate, noise_shape=(None, 1, 1, 1), name=prefix + 'drop')(x) x = tf.keras.layers.add([x, inputs], name=prefix + 'add') return x
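To make the width bookkeeping in this block concrete, here is a small, self-contained sketch using illustrative EfficientNet-style hyperparameters (the numbers are examples, not values from a specific model config):

```python
# Hedged sketch of the channel arithmetic used by mb_conv_block above.
input_filters, expand_ratio, se_ratio, output_filters = 32, 6, 0.25, 16

expanded_filters = input_filters * expand_ratio              # 192 channels after the expansion conv
se_reduced_filters = max(1, int(input_filters * se_ratio))   # 8 channels inside the SE bottleneck

# The residual shortcut is only added when the block keeps resolution and width.
strides = (1, 1)
use_residual = all(s == 1 for s in strides) and input_filters == output_filters
print(expanded_filters, se_reduced_filters, use_residual)    # 192 8 False
```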
PyTorch/Classification/ConvNets/resnext101-32x4d/training/FP32
FP32
DGX1V_resnext101-32x4d_FP32_250E
python ./multiproc.py --nproc_per_node 8 ./launch.py --model resnext101-32x4d --precision FP32 --mode convergence --platform DGX1V /imagenet --workspace ${1:-./} --raport-file raport.json
PyTorch/SpeechSynthesis/Tacotron2/tensorrt
tensorrt
convert_onnx2trt
# ***************************************************************************** # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # * Neither the name of the NVIDIA CORPORATION nor the # names of its contributors may be used to endorse or promote products # derived from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE # DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # ***************************************************************************** import pycuda.driver as cuda import pycuda.autoinit import onnx import argparse import tensorrt as trt import os import sys sys.path.append('./') from trt_utils import build_engine def parse_args(parser): """ Parse commandline arguments. 
""" parser.add_argument('-o', '--output', required=True, help='output folder to save audio (file per phrase)') parser.add_argument('--encoder', type=str, default="", help='full path to the Encoder ONNX') parser.add_argument('--decoder', type=str, default="", help='full path to the DecoderIter ONNX') parser.add_argument('--postnet', type=str, default="", help='full path to the Postnet ONNX') parser.add_argument('--waveglow', type=str, default="", help='full path to the WaveGlow ONNX') parser.add_argument('--fp16', action='store_true', help='inference with FP16') return parser def main(): parser = argparse.ArgumentParser( description='Export from ONNX to TensorRT for Tacotron 2 and WaveGlow') parser = parse_args(parser) args = parser.parse_args() engine_prec = "_fp16" if args.fp16 else "_fp32" # Encoder shapes=[{"name": "sequences", "min": (1,4), "opt": (1,128), "max": (1,256)}, {"name": "sequence_lengths", "min": (1,), "opt": (1,), "max": (1,)}] if args.encoder != "": print("Building Encoder ...") encoder_engine = build_engine(args.encoder, shapes=shapes, fp16=args.fp16) if encoder_engine is not None: with open(args.output+"/"+"encoder"+engine_prec+".engine", 'wb') as f: f.write(encoder_engine.serialize()) else: print("Failed to build engine from", args.encoder) sys.exit() # DecoderIter shapes=[{"name": "decoder_input", "min": (1,80), "opt": (1,80), "max": (1,80)}, {"name": "attention_hidden", "min": (1,1024), "opt": (1,1024), "max": (1,1024)}, {"name": "attention_cell", "min": (1,1024), "opt": (1,1024), "max": (1,1024)}, {"name": "decoder_hidden", "min": (1,1024), "opt": (1,1024), "max": (1,1024)}, {"name": "decoder_cell", "min": (1,1024), "opt": (1,1024), "max": (1,1024)}, {"name": "attention_weights", "min": (1,4), "opt": (1,128), "max": (1,256)}, {"name": "attention_weights_cum", "min": (1,4), "opt": (1,128), "max": (1,256)}, {"name": "attention_context", "min": (1,512), "opt": (1,512), "max": (1,512)}, {"name": "memory", "min": (1,4,512), "opt": (1,128,512), "max": (1,256,512)}, {"name": "processed_memory", "min": (1,4,128), "opt": (1,128,128), "max": (1,256,128)}, {"name": "mask", "min": (1,4), "opt": (1,128), "max": (1,256)}] if args.decoder != "": print("Building Decoder ...") decoder_iter_engine = build_engine(args.decoder, shapes=shapes, fp16=args.fp16) if decoder_iter_engine is not None: with open(args.output+"/"+"decoder_iter"+engine_prec+".engine", 'wb') as f: f.write(decoder_iter_engine.serialize()) else: print("Failed to build engine from", args.decoder) sys.exit() # Postnet shapes=[{"name": "mel_outputs", "min": (1,80,32), "opt": (1,80,768), "max": (1,80,1664)}] if args.postnet != "": print("Building Postnet ...") postnet_engine = build_engine(args.postnet, shapes=shapes, fp16=args.fp16) if postnet_engine is not None: with open(args.output+"/"+"postnet"+engine_prec+".engine", 'wb') as f: f.write(postnet_engine.serialize()) else: print("Failed to build engine from", args.postnet) sys.exit() # WaveGlow shapes=[{"name": "mel", "min": (1,80,32), "opt": (1,80,768), "max": (1,80,1664)}, {"name": "z", "min": (1,8,1024), "opt": (1,8,24576), "max": (1,8,53248)}] if args.waveglow != "": print("Building WaveGlow ...") waveglow_engine = build_engine(args.waveglow, shapes=shapes, fp16=args.fp16) if waveglow_engine is not None: engine_path = os.path.join(args.output, "waveglow"+engine_prec+".engine") with open(engine_path, 'wb') as f: f.write(waveglow_engine.serialize()) else: print("Failed to build engine from", args.waveglow) sys.exit() if __name__ == '__main__': main()
PyTorch/SpeechSynthesis/Tacotron2/platform
platform
DGX1_waveglow_AMP_8NGPU_train
mkdir -p output
python -m multiproc train.py -m WaveGlow -o output/ --amp -lr 1e-4 --epochs 1001 -bs 10 --segment-length 8000 --weight-decay 0 --grad-clip-thresh 65504.0 --cudnn-benchmark --cudnn-enabled --log-file nvlog.json
TensorFlow2/Segmentation/MaskRCNN/dataset
dataset
create_coco_tf_record
# Copyright 2017 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. r"""Convert raw COCO dataset to TFRecord for object_detection. Example usage: python create_coco_tf_record.py --logtostderr \ --train_image_dir="${TRAIN_IMAGE_DIR}" \ --val_image_dir="${VAL_IMAGE_DIR}" \ --test_image_dir="${TEST_IMAGE_DIR}" \ --train_annotations_file="${TRAIN_ANNOTATIONS_FILE}" \ --val_annotations_file="${VAL_ANNOTATIONS_FILE}" \ --testdev_annotations_file="${TESTDEV_ANNOTATIONS_FILE}" \ --output_dir="${OUTPUT_DIR}" """ from __future__ import absolute_import, division, print_function import collections import hashlib import io import json import logging import multiprocessing import os import PIL.Image import numpy as np import tensorflow as tf from absl import app, flags from pycocotools import mask from research.object_detection.utils import dataset_util, label_map_util flags.DEFINE_boolean('include_masks', False, 'Whether to include instance segmentations masks ' '(PNG encoded) in the result. default: False.') flags.DEFINE_string('train_image_dir', '', 'Training image directory.') flags.DEFINE_string('val_image_dir', '', 'Validation image directory.') flags.DEFINE_string('test_image_dir', '', 'Test image directory.') flags.DEFINE_string('train_object_annotations_file', '', '') flags.DEFINE_string('val_object_annotations_file', '', '') flags.DEFINE_string('train_caption_annotations_file', '', '') flags.DEFINE_string('val_caption_annotations_file', '', '') flags.DEFINE_string('testdev_annotations_file', '', 'Test-dev annotations JSON file.') flags.DEFINE_string('output_dir', '/tmp/', 'Output data directory.') FLAGS = flags.FLAGS def create_tf_example(image, bbox_annotations, caption_annotations, image_dir, category_index, include_masks=False): """Converts image and annotations to a tf.Example proto. Args: image: dict with keys: [u'license', u'file_name', u'coco_url', u'height', u'width', u'date_captured', u'flickr_url', u'id'] bbox_annotations: list of dicts with keys: [u'segmentation', u'area', u'iscrowd', u'image_id', u'bbox', u'category_id', u'id'] Notice that bounding box coordinates in the official COCO dataset are given as [x, y, width, height] tuples using absolute coordinates where x, y represent the top-left (0-indexed) corner. This function converts to the format expected by the Tensorflow Object Detection API (which is which is [ymin, xmin, ymax, xmax] with coordinates normalized relative to image size). image_dir: directory containing the image files. category_index: a dict containing COCO category information keyed by the 'id' field of each category. See the label_map_util.create_category_index function. include_masks: Whether to include instance segmentations masks (PNG encoded) in the result. default: False. Returns: example: The converted tf.Example num_annotations_skipped: Number of (invalid) annotations that were ignored. 
Raises: ValueError: if the image pointed to by data['filename'] is not a valid JPEG """ image_height = image['height'] image_width = image['width'] filename = image['file_name'] image_id = image['id'] full_path = os.path.join(image_dir, filename) with tf.io.gfile.GFile(full_path, 'rb') as fid: encoded_jpg = fid.read() encoded_jpg_io = io.BytesIO(encoded_jpg) image = PIL.Image.open(encoded_jpg_io) key = hashlib.sha256(encoded_jpg).hexdigest() xmin = [] xmax = [] ymin = [] ymax = [] is_crowd = [] category_names = [] category_ids = [] area = [] encoded_mask_png = [] num_annotations_skipped = 0 for object_annotations in bbox_annotations: (x, y, width, height) = tuple(object_annotations['bbox']) if width <= 0 or height <= 0: num_annotations_skipped += 1 continue if x + width > image_width or y + height > image_height: num_annotations_skipped += 1 continue xmin.append(float(x) / image_width) xmax.append(float(x + width) / image_width) ymin.append(float(y) / image_height) ymax.append(float(y + height) / image_height) is_crowd.append(object_annotations['iscrowd']) category_id = int(object_annotations['category_id']) category_ids.append(category_id) category_names.append(category_index[category_id]['name'].encode('utf8')) area.append(object_annotations['area']) if include_masks: run_len_encoding = mask.frPyObjects(object_annotations['segmentation'], image_height, image_width) binary_mask = mask.decode(run_len_encoding) if not object_annotations['iscrowd']: binary_mask = np.amax(binary_mask, axis=2) pil_image = PIL.Image.fromarray(binary_mask) output_io = io.BytesIO() pil_image.save(output_io, format='PNG') encoded_mask_png.append(output_io.getvalue()) captions = [] for caption_annotation in caption_annotations: captions.append(caption_annotation['caption'].encode('utf8')) feature_dict = { 'image/height': dataset_util.int64_feature(image_height), 'image/width': dataset_util.int64_feature(image_width), 'image/filename': dataset_util.bytes_feature(filename.encode('utf8')), 'image/source_id': dataset_util.bytes_feature(str(image_id).encode('utf8')), 'image/key/sha256': dataset_util.bytes_feature(key.encode('utf8')), 'image/encoded': dataset_util.bytes_feature(encoded_jpg), 'image/caption': dataset_util.bytes_list_feature(captions), 'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')), 'image/object/bbox/xmin': dataset_util.float_list_feature(xmin), 'image/object/bbox/xmax': dataset_util.float_list_feature(xmax), 'image/object/bbox/ymin': dataset_util.float_list_feature(ymin), 'image/object/bbox/ymax': dataset_util.float_list_feature(ymax), 'image/object/class/text': dataset_util.bytes_list_feature(category_names), 'image/object/class/label': dataset_util.int64_list_feature(category_ids), 'image/object/is_crowd': dataset_util.int64_list_feature(is_crowd), 'image/object/area': dataset_util.float_list_feature(area), } if include_masks: feature_dict['image/object/mask'] = ( dataset_util.bytes_list_feature(encoded_mask_png)) example = tf.train.Example(features=tf.train.Features(feature=feature_dict)) return key, example, num_annotations_skipped def _pool_create_tf_example(args): return create_tf_example(*args) def _load_object_annotations(object_annotations_file): with tf.io.gfile.GFile(object_annotations_file, 'r') as fid: obj_annotations = json.load(fid) images = obj_annotations['images'] category_index = label_map_util.create_category_index( obj_annotations['categories']) img_to_obj_annotation = collections.defaultdict(list) logging.info('Building bounding box index.') for annotation in 
obj_annotations['annotations']: image_id = annotation['image_id'] img_to_obj_annotation[image_id].append(annotation) missing_annotation_count = 0 for image in images: image_id = image['id'] if image_id not in img_to_obj_annotation: missing_annotation_count += 1 logging.info('%d images are missing bboxes.', missing_annotation_count) return images, img_to_obj_annotation, category_index def _load_caption_annotations(caption_annotations_file): with tf.io.gfile.GFile(caption_annotations_file, 'r') as fid: caption_annotations = json.load(fid) img_to_caption_annotation = collections.defaultdict(list) logging.info('Building caption index.') for annotation in caption_annotations['annotations']: image_id = annotation['image_id'] img_to_caption_annotation[image_id].append(annotation) missing_annotation_count = 0 images = caption_annotations['images'] for image in images: image_id = image['id'] if image_id not in img_to_caption_annotation: missing_annotation_count += 1 logging.info('%d images are missing captions.', missing_annotation_count) return img_to_caption_annotation def _create_tf_record_from_coco_annotations( object_annotations_file, caption_annotations_file, image_dir, output_path, include_masks, num_shards): """Loads COCO annotation json files and converts to tf.Record format. Args: object_annotations_file: JSON file containing bounding box annotations. caption_annotations_file: JSON file containing caption annotations. image_dir: Directory containing the image files. output_path: Path to output tf.Record file. include_masks: Whether to include instance segmentations masks (PNG encoded) in the result. default: False. num_shards: Number of output files to create. """ logging.info('writing to output path: %s', output_path) writers = [ tf.io.TFRecordWriter(output_path + '-%05d-of-%05d.tfrecord' % (i, num_shards)) for i in range(num_shards) ] images, img_to_obj_annotation, category_index = ( _load_object_annotations(object_annotations_file)) img_to_caption_annotation = ( _load_caption_annotations(caption_annotations_file)) pool = multiprocessing.Pool() total_num_annotations_skipped = 0 for idx, (_, tf_example, num_annotations_skipped) in enumerate( pool.imap(_pool_create_tf_example, [(image, img_to_obj_annotation[image['id']], img_to_caption_annotation[image['id']], image_dir, category_index, include_masks) for image in images])): if idx % 100 == 0: logging.info('On image %d of %d', idx, len(images)) total_num_annotations_skipped += num_annotations_skipped writers[idx % num_shards].write(tf_example.SerializeToString()) pool.close() pool.join() for writer in writers: writer.close() logging.info('Finished writing, skipped %d annotations.', total_num_annotations_skipped) def main(_): assert FLAGS.train_image_dir, '`train_image_dir` missing.' assert FLAGS.val_image_dir, '`val_image_dir` missing.' assert FLAGS.test_image_dir, '`test_image_dir` missing.' 
if not tf.io.gfile.isdir(FLAGS.output_dir): tf.io.gfile.makedirs(FLAGS.output_dir) train_output_path = os.path.join(FLAGS.output_dir, 'train') val_output_path = os.path.join(FLAGS.output_dir, 'val') testdev_output_path = os.path.join(FLAGS.output_dir, 'test-dev') _create_tf_record_from_coco_annotations( FLAGS.train_object_annotations_file, FLAGS.train_caption_annotations_file, FLAGS.train_image_dir, train_output_path, FLAGS.include_masks, num_shards=256) _create_tf_record_from_coco_annotations( FLAGS.val_object_annotations_file, FLAGS.val_caption_annotations_file, FLAGS.val_image_dir, val_output_path, FLAGS.include_masks, num_shards=32) if __name__ == '__main__': app.run(main)
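The docstring above describes how COCO's absolute `[x, y, width, height]` boxes become normalized `[ymin, xmin, ymax, xmax]` coordinates; a tiny worked example with made-up numbers:

```python
# Hedged example of the bbox normalization performed in create_tf_example;
# image size and box values are made up for illustration.
image_width, image_height = 640, 480
x, y, width, height = 100.0, 50.0, 200.0, 120.0   # COCO-style absolute box

xmin = x / image_width                 # 0.15625
xmax = (x + width) / image_width       # 0.46875
ymin = y / image_height                # ~0.10417
ymax = (y + height) / image_height     # ~0.35417
```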
PyTorch/Forecasting/TFT/triton/runner
runner
logger
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import pathlib

import coloredlogs


class Logger(logging.Logger):
    def __init__(self, name, level=logging.NOTSET):
        super().__init__(name, level=level)
        self._file_path = None

    def initialize(self, file_path: pathlib.Path):
        self._file_path = file_path

    def write(self, log: str):
        if not self._file_path:
            return

        with open(self._file_path, "+a") as file:
            file.write(log)


LOGGER = Logger("runner")

log_format = "%(asctime)s %(levelname)s %(name)s %(message)s"
logging.basicConfig(format=log_format)
coloredlogs.install(
    level=logging.INFO,
    fmt=log_format,
    logger=LOGGER,
    field_styles={
        "asctime": {"color": "green"},
        "hostname": {"color": "magenta"},
        "levelname": {"bold": True, "color": "blue"},
        "name": {"color": "blue"},
        "programname": {"color": "cyan"},
        "username": {"color": "yellow"},
    },
    reconfigure=True,
)
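A short usage sketch of this logger; the log file name is illustrative, and `initialize` must be called before `write` has any effect:

```python
# Hedged usage sketch for the runner Logger above (file name is illustrative).
import pathlib

LOGGER.initialize(file_path=pathlib.Path("triton_runner.log"))
LOGGER.info("Starting runner pipeline")                 # colored console output via coloredlogs
LOGGER.write("raw line captured from a subprocess\n")   # appended to triton_runner.log
```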
TensorFlow2/LanguageModeling/ELECTRA
ELECTRA
tokenization
# Copyright 2020 The Google AI Team, Stanford University and The HuggingFace Inc. team. # Copyright (c) 2020 NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from tokenization_utils import BertTokenizer from tokenization_utils import BertTokenizer VOCAB_FILES_NAMES = {"vocab_file": "vocab.txt"} PRETRAINED_VOCAB_FILES_MAP = { "vocab_file": { "google/electra-small-generator": "https://s3.amazonaws.com/models.huggingface.co/bert/google/electra-small-generator/vocab.txt", "google/electra-base-generator": "https://s3.amazonaws.com/models.huggingface.co/bert/google/electra-base-generator/vocab.txt", "google/electra-large-generator": "https://s3.amazonaws.com/models.huggingface.co/bert/google/electra-large-generator/vocab.txt", "google/electra-small-discriminator": "https://s3.amazonaws.com/models.huggingface.co/bert/google/electra-small-discriminator/vocab.txt", "google/electra-base-discriminator": "https://s3.amazonaws.com/models.huggingface.co/bert/google/electra-base-discriminator/vocab.txt", "google/electra-large-discriminator": "https://s3.amazonaws.com/models.huggingface.co/bert/google/electra-large-discriminator/vocab.txt", } } PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { "google/electra-small-generator": 512, "google/electra-base-generator": 512, "google/electra-large-generator": 512, "google/electra-small-discriminator": 512, "google/electra-base-discriminator": 512, "google/electra-large-discriminator": 512, } PRETRAINED_INIT_CONFIGURATION = { "google/electra-small-generator": {"do_lower_case": True}, "google/electra-base-generator": {"do_lower_case": True}, "google/electra-large-generator": {"do_lower_case": True}, "google/electra-small-discriminator": {"do_lower_case": True}, "google/electra-base-discriminator": {"do_lower_case": True}, "google/electra-large-discriminator": {"do_lower_case": True}, } class ElectraTokenizer(BertTokenizer): r""" Constructs an Electra tokenizer. :class:`~transformers.ElectraTokenizer` is identical to :class:`~transformers.BertTokenizer` and runs end-to-end tokenization: punctuation splitting + wordpiece. Refer to superclass :class:`~transformers.BertTokenizer` for usage examples and documentation concerning parameters. """ vocab_files_names = VOCAB_FILES_NAMES pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES pretrained_init_configuration = PRETRAINED_INIT_CONFIGURATION
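Since `ElectraTokenizer` only re-points the pretrained vocabulary maps, it is used exactly like the `BertTokenizer` it inherits from; a brief, hedged usage sketch (the checkpoint name is one of the map keys above):

```python
# Hedged usage sketch; from_pretrained / tokenize / convert_tokens_to_ids are
# inherited from the BertTokenizer base class in tokenization_utils.
tokenizer = ElectraTokenizer.from_pretrained("google/electra-base-discriminator")

tokens = tokenizer.tokenize("electra replaces masked language modeling with token detection.")
input_ids = tokenizer.convert_tokens_to_ids(tokens)
```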
Tools/PyTorch/TimeSeriesPredictionPlatform/models/tft_pyt/triton
triton
calculate_metrics
#!/usr/bin/env python3 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. r""" Using `calculate_metrics.py` script, you can obtain model accuracy/error metrics using defined `MetricsCalculator` class. Data provided to `MetricsCalculator` are obtained from dump files stored in directory pointed by `--dump-dir` argument. Above files are prepared by `run_inference_on_fw.py` and `run_inference_on_triton.py` scripts. Output data is stored in csv file pointed by `--csv` argument. Example call: ```shell script python ./triton/calculate_metrics.py \ --dump-dir /results/dump_triton \ --csv /results/accuracy_results.csv \ --metrics metrics.py \ --metric-class-param1 value ``` """ import argparse import csv import logging import string from pathlib import Path # method from PEP-366 to support relative import in executed modules if __package__ is None: __package__ = Path(__file__).parent.name from .deployment_toolkit.args import ArgParserGenerator from .deployment_toolkit.core import BaseMetricsCalculator, load_from_file from .deployment_toolkit.dump import JsonDumpReader LOGGER = logging.getLogger("calculate_metrics") TOTAL_COLUMN_NAME = "_total_" def main(): logging.basicConfig(level=logging.INFO) parser = argparse.ArgumentParser(description="Run models with given dataloader", allow_abbrev=False) parser.add_argument("--metrics", help="Path to python module containing metrics calculator", required=True) parser.add_argument("--csv", help="Path to csv file", required=True) parser.add_argument("--dump-dir", help="Path to directory with dumped outputs (and labels)", required=True) args, *_ = parser.parse_known_args() MetricsCalculator = load_from_file(args.metrics, "metrics", "MetricsCalculator") ArgParserGenerator(MetricsCalculator).update_argparser(parser) args = parser.parse_args() LOGGER.info("args:") for key, value in vars(args).items(): LOGGER.info(f" {key} = {value}") MetricsCalculator = load_from_file(args.metrics, "metrics", "MetricsCalculator") metrics_calculator: BaseMetricsCalculator = ArgParserGenerator(MetricsCalculator).from_args(args) reader = JsonDumpReader(args.dump_dir) for ids, x, y_true, y_pred in reader.iterate_over(["ids", "inputs", "labels", "outputs"]): ids = list(ids["ids"]) if ids is not None else None metrics_calculator.update(ids=ids, x=x, y_pred=y_pred, y_real=y_true) metrics = metrics_calculator.metrics metric_names_with_space = [name for name in metrics if any([c in string.whitespace for c in name])] if metric_names_with_space: raise ValueError(f"Metric names shall have no spaces; Incorrect names: {', '.join(metric_names_with_space)}") csv_path = Path(args.csv) csv_path.parent.mkdir(parents=True, exist_ok=True) with csv_path.open("w") as csv_file: writer = csv.DictWriter(csv_file, fieldnames=list(metrics.keys())) writer.writeheader() writer.writerow(metrics) if __name__ == "__main__": main()
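The script loads a `MetricsCalculator` class from the module passed via `--metrics`. Below is a minimal sketch of what such a module might look like, assuming only the interface exercised above (`update(ids=..., x=..., y_pred=..., y_real=...)` plus a `metrics` dict property); the base-class import path and output naming are assumptions:

```python
# metrics.py -- hedged sketch of a module consumable by calculate_metrics.py.
import numpy as np

from deployment_toolkit.core import BaseMetricsCalculator  # hypothetical import path


class MetricsCalculator(BaseMetricsCalculator):
    def __init__(self):
        self._squared_errors = []

    def update(self, *, ids, x, y_pred, y_real):
        # y_pred / y_real are dicts of numpy arrays keyed by output name.
        for name, pred in y_pred.items():
            self._squared_errors.append(np.mean((pred - y_real[name]) ** 2))

    @property
    def metrics(self):
        # Keys must contain no whitespace (enforced by calculate_metrics.py).
        return {"MSE": float(np.mean(self._squared_errors))}
```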
PyTorch/SpeechSynthesis/Tacotron2/tensorrt
tensorrt
README
# Tacotron 2 and WaveGlow Inference with TensorRT

This is a subfolder of the Tacotron 2 for PyTorch repository, tested and maintained by NVIDIA, that provides scripts to perform high-performance inference using NVIDIA TensorRT. The Tacotron 2 and WaveGlow models form a text-to-speech (TTS) system that enables users to synthesize natural sounding speech from raw transcripts without any additional information such as patterns and/or rhythms of speech. More information about the TTS system and its training can be found in the [Tacotron 2 PyTorch README](../README.md).

NVIDIA TensorRT is a platform for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. After optimizing the compute-intensive acoustic model with NVIDIA TensorRT, inference throughput increased by up to 1.4x over native PyTorch in mixed precision.

## Quick Start Guide

1. Clone the repository.

```bash
git clone https://github.com/NVIDIA/DeepLearningExamples
cd DeepLearningExamples/PyTorch/SpeechSynthesis/Tacotron2
```

2. Download the pretrained checkpoints from [NGC](https://ngc.nvidia.com/catalog/models) and copy them to the `./checkpoints` directory:

- [Tacotron2 checkpoint](https://ngc.nvidia.com/models/nvidia:tacotron2pyt_fp16)
- [WaveGlow checkpoint](https://ngc.nvidia.com/models/nvidia:waveglow256pyt_fp16)

```bash
mkdir -p checkpoints
cp <Tacotron2_and_WaveGlow_checkpoints> ./checkpoints/
```

3. Build the Tacotron 2 and WaveGlow PyTorch NGC container.

```bash
bash scripts/docker/build.sh
```

4. Start an interactive session in the NGC container to run training/inference. After you build the container image, you can start an interactive CLI session with:

```bash
bash scripts/docker/interactive.sh
```

5. Verify that the installed TensorRT version is 7.0 or greater. If necessary, download and install the latest release from https://developer.nvidia.com/nvidia-tensorrt-download

```bash
pip list | grep tensorrt
dpkg -l | grep TensorRT
```

6. Convert the models to the ONNX intermediate representation (ONNX IR).

Convert Tacotron 2 to three ONNX parts: Encoder, Decoder, and Postnet:

```bash
mkdir -p output
python tensorrt/convert_tacotron22onnx.py --tacotron2 ./checkpoints/nvidia_tacotron2pyt_fp16_20190427 -o output/ --fp16
```

Convert WaveGlow to ONNX IR:

```bash
python tensorrt/convert_waveglow2onnx.py --waveglow ./checkpoints/nvidia_waveglow256pyt_fp16 --config-file config.json --wn-channels 256 -o output/ --fp16
```

After running the above commands, there should be four new ONNX files in the `./output/` directory: `encoder.onnx`, `decoder_iter.onnx`, `postnet.onnx`, and `waveglow.onnx`.

7. Convert the ONNX IRs to TensorRT engines with fp16 mode enabled:

```bash
python tensorrt/convert_onnx2trt.py --encoder output/encoder.onnx --decoder output/decoder_iter.onnx --postnet output/postnet.onnx --waveglow output/waveglow.onnx -o output/ --fp16
```

After running the command, there should be four new engine files in the `./output/` directory: `encoder_fp16.engine`, `decoder_iter_fp16.engine`, `postnet_fp16.engine`, and `waveglow_fp16.engine`.

8. Run the TTS inference pipeline with fp16:

```bash
python tensorrt/inference_trt.py -i phrases/phrase.txt --encoder output/encoder_fp16.engine --decoder output/decoder_iter_fp16.engine --postnet output/postnet_fp16.engine --waveglow output/waveglow_fp16.engine -o output/ --fp16
```

## Inference performance: NVIDIA T4

Our results were obtained by running the `./tensorrt/run_latency_tests_trt.sh` script in the PyTorch-19.11-py3 NGC container. Please note that to reproduce the results, you need to provide pretrained checkpoints for Tacotron 2 and WaveGlow. Please edit the script to provide your checkpoint filenames. For all tests in this table, we used WaveGlow with 256 residual channels.

|Framework|Batch size|Input length|Precision|Avg latency (s)|Latency std (s)|Latency confidence interval 90% (s)|Latency confidence interval 95% (s)|Latency confidence interval 99% (s)|Throughput (samples/sec)|Speed-up vs PyTorch|Avg mels generated (81 mels=1 sec of speech)|Avg audio length (s)|Avg RTF|
|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
|PyTorch+TensorRT|1| 128| FP16| 1.02| 0.05| 1.09| 1.10| 1.14| 150,439| 1.59| 602| 6.99| 6.86|
|PyTorch |1| 128| FP16| 1.63| 0.07| 1.71| 1.73| 1.81| 94,758| 1.00| 601| 6.98| 4.30|
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/src/trt/plugins/taco2PrenetPlugin
taco2PrenetPlugin
taco2PrenetLayerPluginCreator
/* * Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * Neither the name of the NVIDIA CORPORATION nor the * names of its contributors may be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include "taco2PrenetLayerPluginCreator.h" #include "taco2PrenetLayerPlugin.h" #include <stdexcept> #include <vector> using namespace nvinfer1; namespace nvinfer1 { namespace plugin { /****************************************************************************** * CONSTANTS ****************************************************************** *****************************************************************************/ namespace { constexpr const char* const DIMENSION_STR = "Dimension"; constexpr const char* const INPUTLENGTH_STR = "InputLength"; constexpr const char* const WEIGHTS1_STR = "weight1"; constexpr const char* const WEIGHTS2_STR = "weight2"; } // namespace /****************************************************************************** * PUBLIC STATIC METHODS ****************************************************** *****************************************************************************/ PluginFieldCollection* Taco2PrenetLayerPluginCreator::getFields() { static PluginFieldCollection* pluginPtr = nullptr; static const std::vector<PluginField> fields{{INPUTLENGTH_STR, nullptr, PluginFieldType::kINT32, 0}, {DIMENSION_STR, nullptr, PluginFieldType::kINT32, 0}, {WEIGHTS1_STR, nullptr, PluginFieldType::kFLOAT32, 0}, {WEIGHTS2_STR, nullptr, PluginFieldType::kFLOAT32, 0}}; if (!pluginPtr) { pluginPtr = static_cast<PluginFieldCollection*>(malloc(sizeof(*pluginPtr) + fields.size() * sizeof(PluginField))); pluginPtr->nbFields = static_cast<int>(fields.size()); pluginPtr->fields = fields.data(); } return pluginPtr; } /****************************************************************************** * CONSTRUCTORS / DESTRUCTOR ************************************************** *****************************************************************************/ Taco2PrenetLayerPluginCreator::Taco2PrenetLayerPluginCreator() : mNamespace() { // do nothing } /****************************************************************************** * PUBLIC METHODS 
************************************************************* *****************************************************************************/ const char* Taco2PrenetLayerPluginCreator::getPluginName() const { return Taco2PrenetLayerPlugin::getName(); } const char* Taco2PrenetLayerPluginCreator::getPluginVersion() const { return Taco2PrenetLayerPlugin::getVersion(); } const PluginFieldCollection* Taco2PrenetLayerPluginCreator::getFieldNames() { return getFields(); } IPluginV2* Taco2PrenetLayerPluginCreator::createPlugin(const char* const /*name*/, const PluginFieldCollection* fc) { int inputLength = 0; int dimension = 0; Weights weights1{DataType::kFLOAT, nullptr, 0}; Weights weights2{DataType::kFLOAT, nullptr, 0}; for (int i = 0; i < fc->nbFields; ++i) { const std::string name(fc->fields[i].name); if (name == INPUTLENGTH_STR) { inputLength = static_cast<const int32_t*>(fc->fields[i].data)[0]; } else if (name == DIMENSION_STR) { dimension = static_cast<const int32_t*>(fc->fields[i].data)[0]; } else if (name == WEIGHTS1_STR) { weights1.values = fc->fields[i].data; weights1.count = fc->fields[i].length; } else if (name == WEIGHTS2_STR) { weights2.values = fc->fields[i].data; weights2.count = fc->fields[i].length; } else { throw std::runtime_error("Unknown plugin field: '" + name + "'"); } } return new Taco2PrenetLayerPlugin(weights1, weights2, inputLength, dimension); } IPluginV2* Taco2PrenetLayerPluginCreator::deserializePlugin( const char* const /* layerName */, const void* const serialData, size_t const serialLength) { return new Taco2PrenetLayerPlugin(Taco2PrenetLayerPlugin::deserialize(serialData, serialLength)); } void Taco2PrenetLayerPluginCreator::setPluginNamespace(const char* pluginNamespace) { mNamespace = pluginNamespace; } const char* Taco2PrenetLayerPluginCreator::getPluginNamespace() const { return mNamespace.c_str(); } } // namespace plugin } // namespace nvinfer1
PyTorch/LanguageModeling/BERT/triton
triton
export_model
#!/usr/bin/env python3 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import logging import os from pathlib import Path os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2" os.environ["TF_ENABLE_DEPRECATION_WARNINGS"] = "1" # method from PEP-366 to support relative import in executed modules if __name__ == "__main__" and __package__ is None: __package__ = Path(__file__).parent.name from .deployment_toolkit.args import ArgParserGenerator # noqa: E402 module level import not at top of file from .deployment_toolkit.core import ( # noqa: E402 module level import not at top of file DATALOADER_FN_NAME, BaseLoader, BaseSaver, Format, load_from_file, ) from .deployment_toolkit.extensions import loaders, savers # noqa: E402 module level import not at top of file LOGGER = logging.getLogger("export_model") INPUT_MODEL_TYPES = [Format.TF_ESTIMATOR, Format.TF_KERAS, Format.PYT] OUTPUT_MODEL_TYPES = [Format.TF_SAVEDMODEL, Format.TS_TRACE, Format.TS_SCRIPT, Format.ONNX] def _get_args(): parser = argparse.ArgumentParser( description="Script for exporting models from supported frameworks.", allow_abbrev=False ) parser.add_argument("--input-path", help="Path to input python module", required=True) parser.add_argument( "--input-type", help="Input model type", choices=[f.value for f in INPUT_MODEL_TYPES], required=True ) parser.add_argument("--output-path", help="Path to output model file", required=True) parser.add_argument( "--output-type", help="Output model type", choices=[f.value for f in OUTPUT_MODEL_TYPES], required=True ) parser.add_argument("--dataloader", help="Path to python module containing data loader") parser.add_argument("-v", "--verbose", help="Verbose logs", action="store_true", default=False) parser.add_argument( "--ignore-unknown-parameters", help="Ignore unknown parameters (argument often used in CI where set of arguments is constant)", action="store_true", default=False, ) args, unparsed_args = parser.parse_known_args() Loader: BaseLoader = loaders.get(args.input_type) ArgParserGenerator(Loader, module_path=args.input_path).update_argparser(parser) if args.input_type == Format.PYT.value and args.output_type == Format.ONNX.value: saver_type = f"{Format.PYT.value}--{Format.ONNX.value}" else: saver_type = args.output_type Saver: BaseSaver = savers.get(saver_type) ArgParserGenerator(Saver).update_argparser(parser) if args.dataloader is not None: get_dataloader_fn = load_from_file(args.dataloader, label="dataloader", target=DATALOADER_FN_NAME) ArgParserGenerator(get_dataloader_fn).update_argparser(parser) if args.ignore_unknown_parameters: args, unknown_args = parser.parse_known_args() LOGGER.warning(f"Got additional args {unknown_args}") else: args = parser.parse_args() return args def main(): args = _get_args() log_level = logging.INFO if not args.verbose else logging.DEBUG log_format = "%(asctime)s %(levelname)s %(name)s %(message)s" logging.basicConfig(level=log_level, format=log_format) LOGGER.info("args:") for key, value in 
vars(args).items(): LOGGER.info(f" {key} = {value}") dataloader_fn = None if args.dataloader is not None: get_dataloader_fn = load_from_file(args.dataloader, label="dataloader", target=DATALOADER_FN_NAME) dataloader_fn = ArgParserGenerator(get_dataloader_fn).from_args(args) Loader: BaseLoader = loaders.get(args.input_type) loader = ArgParserGenerator(Loader, module_path=args.input_path).from_args(args) model = loader.load(args.input_path, dataloader_fn=dataloader_fn, output_type=args.output_type) LOGGER.info("inputs: %s", model.inputs) LOGGER.info("outputs: %s", model.outputs) if args.input_type == Format.PYT.value and args.output_type == Format.ONNX.value: saver_type = f"{Format.PYT.value}--{Format.ONNX.value}" else: saver_type = args.output_type Saver: BaseSaver = savers.get(saver_type) saver = ArgParserGenerator(Saver).from_args(args) saver.save(model, args.output_path, dataloader_fn) if __name__ == "__main__": main()
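The exporter optionally loads a `get_dataloader_fn` (named by `DATALOADER_FN_NAME`) from the module given via `--dataloader`. The exact contract is defined in `deployment_toolkit`; the sketch below assumes it is a factory returning a generator of `(ids, inputs, labels)` batches of numpy arrays, which is an assumption rather than a documented guarantee, and all input/output names and shapes are illustrative:

```python
# dataloader.py -- hedged sketch; input/output names and shapes are illustrative.
import numpy as np


def get_dataloader_fn(batch_size: int = 8, max_seq_length: int = 384):
    def _dataloader():
        ids = np.arange(batch_size)
        x = {
            "input__0": np.zeros((batch_size, max_seq_length), dtype=np.int64),  # token ids
            "input__1": np.ones((batch_size, max_seq_length), dtype=np.int64),   # attention mask
        }
        y_real = {"output__0": np.zeros((batch_size, max_seq_length), dtype=np.float32)}
        yield ids, x, y_real

    return _dataloader
```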
MxNet/Classification/RN50v1.5
RN50v1.5
train
# Copyright 2017-2018 The Apache Software Foundation # # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. # # ----------------------------------------------------------------------- # # Copyright (c) 2019-2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import dllogger import horovod.mxnet as hvd import dali import data import fit import models from log_utils import setup_logging def parse_args(): parser = argparse.ArgumentParser(description="Train classification models on ImageNet", formatter_class=argparse.ArgumentDefaultsHelpFormatter) models.add_model_args(parser) fit.add_fit_args(parser) data.add_data_args(parser) dali.add_dali_args(parser) data.add_data_aug_args(parser) return parser.parse_args() if __name__ == '__main__': args = parse_args() if 'horovod' in args.kv_store: hvd.init() setup_logging(args) dllogger.log(step='PARAMETER', data=vars(args)) model = models.get_model(**vars(args)) data_loader = data.get_data_loader(args) fit.fit(args, model, data_loader)
TensorFlow/LanguageModeling/BERT/scripts
scripts
run_pretraining_lamb_phase1
#! /bin/bash # Copyright (c) 2019 NVIDIA CORPORATION. All rights reserved. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. echo "Container nvidia build = " $NVIDIA_BUILD_ID train_batch_size_phase1=${1:-64} train_batch_size_phase2=${2:-8} eval_batch_size=${3:-8} learning_rate_phase1=${4:-"7.5e-4"} learning_rate_phase2=${5:-"5e-4"} precision=${6:-"fp16"} use_xla=${7:-"true"} num_gpus=${8:-8} warmup_steps_phase1=${9:-"2000"} warmup_steps_phase2=${10:-"200"} train_steps=${11:-7820} save_checkpoints_steps=${12:-100} num_accumulation_steps_phase1=${13:-128} num_accumulation_steps_phase2=${14:-512} bert_model=${15:-"large"} DATA_DIR=${DATA_DIR:-data} #Edit to save logs & checkpoints in a different directory RESULTS_DIR=${RESULTS_DIR:-/results} if [ "$bert_model" = "large" ] ; then export BERT_CONFIG=data/download/nvidia_pretrained/bert_tf_pretraining_large_lamb/bert_config.json else export BERT_CONFIG=data/download/nvidia_pretrained/bert_tf_squad11_base_128/bert_config.json fi PREC="" if [ "$precision" = "fp16" ] ; then PREC="--amp" elif [ "$precision" = "fp32" ] ; then PREC="--noamp" elif [ "$precision" = "tf32" ] ; then PREC="--noamp" elif [ "$precision" = "manual_fp16" ] ; then PREC="--noamp --manual_fp16" else echo "Unknown <precision> argument" exit -2 fi if [ "$use_xla" = "true" ] ; then PREC="$PREC --use_xla" echo "XLA activated" else PREC="$PREC --nouse_xla" fi mpi="" horovod_str="" if [ $num_gpus -gt 1 ] ; then mpi="mpiexec --allow-run-as-root -np $num_gpus --bind-to socket" horovod_str="--horovod" fi #PHASE 1 train_steps_phase1=$(expr $train_steps \* 9 \/ 10) #Phase 1 is 10% of training gbs_phase1=$(expr $train_batch_size_phase1 \* $num_accumulation_steps_phase1) seq_len=128 max_pred_per_seq=20 RESULTS_DIR_PHASE1=${RESULTS_DIR}/phase_1 mkdir -m 777 -p $RESULTS_DIR_PHASE1 INPUT_FILES="$DATA_DIR/tfrecord/lower_case_1_seq_len_${seq_len}_max_pred_${max_pred_per_seq}_masked_lm_prob_0.15_random_seed_12345_dupe_factor_5_shard_1472_test_split_10/books_wiki_en_corpus/training" EVAL_FILES="$DATA_DIR/tfrecord/lower_case_1_seq_len_${seq_len}_max_pred_${max_pred_per_seq}_masked_lm_prob_0.15_random_seed_12345_dupe_factor_5_shard_1472_test_split_10/books_wiki_en_corpus/test" #Check if all necessary files are available before training for DIR_or_file in $DATA_DIR $RESULTS_DIR_PHASE1 $BERT_CONFIG; do if [ ! -d "$DIR_or_file" ] && [ ! -f "$DIR_or_file" ]; then echo "Error! $DIR_or_file directory missing. 
Please mount correctly" exit -1 fi done $mpi python /workspace/bert/run_pretraining.py \ --input_files_dir=$INPUT_FILES \ --eval_files_dir=$EVAL_FILES \ --output_dir=$RESULTS_DIR_PHASE1 \ --bert_config_file=$BERT_CONFIG \ --do_train=True \ --do_eval=True \ --train_batch_size=$train_batch_size_phase1 \ --eval_batch_size=$eval_batch_size \ --max_seq_length=$seq_len \ --max_predictions_per_seq=$max_pred_per_seq \ --num_train_steps=$train_steps_phase1 \ --num_accumulation_steps=$num_accumulation_steps_phase1 \ --num_warmup_steps=$warmup_steps_phase1 \ --save_checkpoints_steps=$save_checkpoints_steps \ --learning_rate=$learning_rate_phase1 \ $horovod_str $PREC \ --allreduce_post_accumulation=True
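For clarity, here is a small sketch (not part of the repository) reproducing the phase-1 arithmetic the script performs with its default arguments; the values are the script's defaults, not new recommendations:

```python
# Defaults taken from the positional arguments of run_pretraining_lamb_phase1.
train_steps = 7820                    # total steps across both phases
train_batch_size_phase1 = 64          # per-GPU micro-batch size
num_accumulation_steps_phase1 = 128   # gradient accumulation steps

train_steps_phase1 = train_steps * 9 // 10   # phase 1 runs 9/10 of the total steps
gbs_phase1 = train_batch_size_phase1 * num_accumulation_steps_phase1  # effective per-GPU batch

print(train_steps_phase1, gbs_phase1)  # 7038 8192
```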
PyTorch/Classification/GPUNet/triton/065ms/runner
runner
config_NVIDIA-DGX-1-(1x-V100-32GB)
batching: dynamic
checkpoints:
- name: 0.65ms
  url: https://api.ngc.nvidia.com/v2/models/nvidia/dle/gpunet_0_pyt_ckpt/versions/21.12.0_amp/zip
configurations:
- checkpoint: 0.65ms
  parameters:
    backend_accelerator: trt
    checkpoint: 0.65ms
    device_kind: gpu
    export_format: onnx
    export_precision: fp16
    format: onnx
    max_batch_size: 64
    number_of_model_instances: 2
    precision: fp16
    tensorrt_capture_cuda_graph: 0
    torch_jit: none
container_version: '21.12'
datasets:
- name: imagenet
datasets_dir: datasets
ensemble_model_name: null
framework: PyTorch
measurement_steps_offline: 8
measurement_steps_online: 32
model_name: GPUnet
performance_tool: model_analyzer
triton_container_image: nvcr.io/nvidia/tritonserver:21.12-py3
triton_custom_operations: null
triton_dockerfile: null
triton_load_model_method: explicit
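As a hedged illustration of how such a runner config can be consumed, the sketch below loads a trimmed copy of the YAML above with PyYAML (assumed to be available) and reads out the deployment parameters; it is not the runner's actual loading code:

```python
import yaml  # PyYAML, assumed available in the runner environment

# A trimmed copy of the config above, embedded so the sketch is self-contained.
RUNNER_CONFIG = """
checkpoints:
  - name: 0.65ms
    url: https://api.ngc.nvidia.com/v2/models/nvidia/dle/gpunet_0_pyt_ckpt/versions/21.12.0_amp/zip
configurations:
  - checkpoint: 0.65ms
    parameters:
      export_format: onnx
      precision: fp16
      max_batch_size: 64
triton_container_image: nvcr.io/nvidia/tritonserver:21.12-py3
"""

config = yaml.safe_load(RUNNER_CONFIG)
for variant in config["configurations"]:
    params = variant["parameters"]
    print(variant["checkpoint"], params["export_format"],
          params["precision"], params["max_batch_size"])
```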
TensorFlow/Detection/SSD/models/research/object_detection/builders
builders
model_builder
# Copyright 2017 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """A function to build a DetectionModel from configuration.""" import functools from object_detection.builders import anchor_generator_builder from object_detection.builders import box_coder_builder from object_detection.builders import box_predictor_builder from object_detection.builders import hyperparams_builder from object_detection.builders import image_resizer_builder from object_detection.builders import losses_builder from object_detection.builders import matcher_builder from object_detection.builders import post_processing_builder from object_detection.builders import region_similarity_calculator_builder as sim_calc from object_detection.core import balanced_positive_negative_sampler as sampler from object_detection.core import post_processing from object_detection.core import target_assigner from object_detection.meta_architectures import faster_rcnn_meta_arch from object_detection.meta_architectures import rfcn_meta_arch from object_detection.meta_architectures import ssd_meta_arch from object_detection.models import faster_rcnn_inception_resnet_v2_feature_extractor as frcnn_inc_res from object_detection.models import faster_rcnn_inception_v2_feature_extractor as frcnn_inc_v2 from object_detection.models import faster_rcnn_nas_feature_extractor as frcnn_nas from object_detection.models import faster_rcnn_pnas_feature_extractor as frcnn_pnas from object_detection.models import faster_rcnn_resnet_v1_feature_extractor as frcnn_resnet_v1 from object_detection.models import ssd_resnet_v1_fpn_feature_extractor as ssd_resnet_v1_fpn from object_detection.models import ssd_resnet_v1_ppn_feature_extractor as ssd_resnet_v1_ppn from object_detection.models.embedded_ssd_mobilenet_v1_feature_extractor import EmbeddedSSDMobileNetV1FeatureExtractor from object_detection.models.ssd_inception_v2_feature_extractor import SSDInceptionV2FeatureExtractor from object_detection.models.ssd_inception_v3_feature_extractor import SSDInceptionV3FeatureExtractor from object_detection.models.ssd_mobilenet_v1_feature_extractor import SSDMobileNetV1FeatureExtractor from object_detection.models.ssd_mobilenet_v1_fpn_feature_extractor import SSDMobileNetV1FpnFeatureExtractor from object_detection.models.ssd_mobilenet_v1_ppn_feature_extractor import SSDMobileNetV1PpnFeatureExtractor from object_detection.models.ssd_mobilenet_v2_feature_extractor import SSDMobileNetV2FeatureExtractor from object_detection.models.ssd_mobilenet_v2_fpn_feature_extractor import SSDMobileNetV2FpnFeatureExtractor from object_detection.models.ssd_mobilenet_v2_keras_feature_extractor import SSDMobileNetV2KerasFeatureExtractor from object_detection.models.ssd_pnasnet_feature_extractor import SSDPNASNetFeatureExtractor from object_detection.predictors import rfcn_box_predictor from object_detection.predictors.heads import mask_head from object_detection.protos import 
model_pb2 from object_detection.utils import ops # A map of names to SSD feature extractors. SSD_FEATURE_EXTRACTOR_CLASS_MAP = { 'ssd_inception_v2': SSDInceptionV2FeatureExtractor, 'ssd_inception_v3': SSDInceptionV3FeatureExtractor, 'ssd_mobilenet_v1': SSDMobileNetV1FeatureExtractor, 'ssd_mobilenet_v1_fpn': SSDMobileNetV1FpnFeatureExtractor, 'ssd_mobilenet_v1_ppn': SSDMobileNetV1PpnFeatureExtractor, 'ssd_mobilenet_v2': SSDMobileNetV2FeatureExtractor, 'ssd_mobilenet_v2_fpn': SSDMobileNetV2FpnFeatureExtractor, 'ssd_resnet50_v1_fpn': ssd_resnet_v1_fpn.SSDResnet50V1FpnFeatureExtractor, 'ssd_resnet101_v1_fpn': ssd_resnet_v1_fpn.SSDResnet101V1FpnFeatureExtractor, 'ssd_resnet152_v1_fpn': ssd_resnet_v1_fpn.SSDResnet152V1FpnFeatureExtractor, 'ssd_resnet50_v1_ppn': ssd_resnet_v1_ppn.SSDResnet50V1PpnFeatureExtractor, 'ssd_resnet101_v1_ppn': ssd_resnet_v1_ppn.SSDResnet101V1PpnFeatureExtractor, 'ssd_resnet152_v1_ppn': ssd_resnet_v1_ppn.SSDResnet152V1PpnFeatureExtractor, 'embedded_ssd_mobilenet_v1': EmbeddedSSDMobileNetV1FeatureExtractor, 'ssd_pnasnet': SSDPNASNetFeatureExtractor, } SSD_KERAS_FEATURE_EXTRACTOR_CLASS_MAP = { 'ssd_mobilenet_v2_keras': SSDMobileNetV2KerasFeatureExtractor } # A map of names to Faster R-CNN feature extractors. FASTER_RCNN_FEATURE_EXTRACTOR_CLASS_MAP = { 'faster_rcnn_nas': frcnn_nas.FasterRCNNNASFeatureExtractor, 'faster_rcnn_pnas': frcnn_pnas.FasterRCNNPNASFeatureExtractor, 'faster_rcnn_inception_resnet_v2': frcnn_inc_res.FasterRCNNInceptionResnetV2FeatureExtractor, 'faster_rcnn_inception_v2': frcnn_inc_v2.FasterRCNNInceptionV2FeatureExtractor, 'faster_rcnn_resnet50': frcnn_resnet_v1.FasterRCNNResnet50FeatureExtractor, 'faster_rcnn_resnet101': frcnn_resnet_v1.FasterRCNNResnet101FeatureExtractor, 'faster_rcnn_resnet152': frcnn_resnet_v1.FasterRCNNResnet152FeatureExtractor, } def build(model_config, is_training, add_summaries=True): """Builds a DetectionModel based on the model config. Args: model_config: A model.proto object containing the config for the desired DetectionModel. is_training: True if this model is being built for training purposes. add_summaries: Whether to add tensorflow summaries in the model graph. Returns: DetectionModel based on the config. Raises: ValueError: On invalid meta architecture or model. """ if not isinstance(model_config, model_pb2.DetectionModel): raise ValueError('model_config not of type model_pb2.DetectionModel.') meta_architecture = model_config.WhichOneof('model') if meta_architecture == 'ssd': return _build_ssd_model(model_config.ssd, is_training, add_summaries) if meta_architecture == 'faster_rcnn': return _build_faster_rcnn_model(model_config.faster_rcnn, is_training, add_summaries) raise ValueError('Unknown meta architecture: {}'.format(meta_architecture)) def _build_ssd_feature_extractor(feature_extractor_config, is_training, freeze_batchnorm, reuse_weights=None): """Builds a ssd_meta_arch.SSDFeatureExtractor based on config. Args: feature_extractor_config: A SSDFeatureExtractor proto config from ssd.proto. is_training: True if this feature extractor is being built for training. freeze_batchnorm: Whether to freeze batch norm parameters during training or not. When training with a small batch size (e.g. 1), it is desirable to freeze batch norm update and use pretrained batch norm params. reuse_weights: if the feature extractor should reuse weights. Returns: ssd_meta_arch.SSDFeatureExtractor based on config. Raises: ValueError: On invalid feature extractor type. 
""" feature_type = feature_extractor_config.type is_keras_extractor = feature_type in SSD_KERAS_FEATURE_EXTRACTOR_CLASS_MAP depth_multiplier = feature_extractor_config.depth_multiplier min_depth = feature_extractor_config.min_depth pad_to_multiple = feature_extractor_config.pad_to_multiple use_explicit_padding = feature_extractor_config.use_explicit_padding use_depthwise = feature_extractor_config.use_depthwise if is_keras_extractor: conv_hyperparams = hyperparams_builder.KerasLayerHyperparams( feature_extractor_config.conv_hyperparams) else: conv_hyperparams = hyperparams_builder.build( feature_extractor_config.conv_hyperparams, is_training) override_base_feature_extractor_hyperparams = ( feature_extractor_config.override_base_feature_extractor_hyperparams) if (feature_type not in SSD_FEATURE_EXTRACTOR_CLASS_MAP) and ( not is_keras_extractor): raise ValueError('Unknown ssd feature_extractor: {}'.format(feature_type)) if is_keras_extractor: feature_extractor_class = SSD_KERAS_FEATURE_EXTRACTOR_CLASS_MAP[ feature_type] else: feature_extractor_class = SSD_FEATURE_EXTRACTOR_CLASS_MAP[feature_type] kwargs = { 'is_training': is_training, 'depth_multiplier': depth_multiplier, 'min_depth': min_depth, 'pad_to_multiple': pad_to_multiple, 'use_explicit_padding': use_explicit_padding, 'use_depthwise': use_depthwise, 'override_base_feature_extractor_hyperparams': override_base_feature_extractor_hyperparams } if is_keras_extractor: kwargs.update({ 'conv_hyperparams': conv_hyperparams, 'inplace_batchnorm_update': False, 'freeze_batchnorm': freeze_batchnorm }) else: kwargs.update({ 'conv_hyperparams_fn': conv_hyperparams, 'reuse_weights': reuse_weights, }) if feature_extractor_config.HasField('fpn'): kwargs.update({ 'fpn_min_level': feature_extractor_config.fpn.min_level, 'fpn_max_level': feature_extractor_config.fpn.max_level, 'additional_layer_depth': feature_extractor_config.fpn.additional_layer_depth, }) return feature_extractor_class(**kwargs) def _build_ssd_model(ssd_config, is_training, add_summaries): """Builds an SSD detection model based on the model config. Args: ssd_config: A ssd.proto object containing the config for the desired SSDMetaArch. is_training: True if this model is being built for training purposes. add_summaries: Whether to add tf summaries in the model. Returns: SSDMetaArch based on the config. Raises: ValueError: If ssd_config.type is not recognized (i.e. not registered in model_class_map). 
""" num_classes = ssd_config.num_classes # Feature extractor feature_extractor = _build_ssd_feature_extractor( feature_extractor_config=ssd_config.feature_extractor, freeze_batchnorm=ssd_config.freeze_batchnorm, is_training=is_training) box_coder = box_coder_builder.build(ssd_config.box_coder) matcher = matcher_builder.build(ssd_config.matcher) region_similarity_calculator = sim_calc.build( ssd_config.similarity_calculator) encode_background_as_zeros = ssd_config.encode_background_as_zeros negative_class_weight = ssd_config.negative_class_weight anchor_generator = anchor_generator_builder.build( ssd_config.anchor_generator) if feature_extractor.is_keras_model: ssd_box_predictor = box_predictor_builder.build_keras( conv_hyperparams_fn=hyperparams_builder.KerasLayerHyperparams, freeze_batchnorm=ssd_config.freeze_batchnorm, inplace_batchnorm_update=False, num_predictions_per_location_list=anchor_generator .num_anchors_per_location(), box_predictor_config=ssd_config.box_predictor, is_training=is_training, num_classes=num_classes, add_background_class=ssd_config.add_background_class) else: ssd_box_predictor = box_predictor_builder.build( hyperparams_builder.build, ssd_config.box_predictor, is_training, num_classes, ssd_config.add_background_class) image_resizer_fn = image_resizer_builder.build(ssd_config.image_resizer) non_max_suppression_fn, score_conversion_fn = post_processing_builder.build( ssd_config.post_processing) (classification_loss, localization_loss, classification_weight, localization_weight, hard_example_miner, random_example_sampler, expected_loss_weights_fn) = losses_builder.build(ssd_config.loss) normalize_loss_by_num_matches = ssd_config.normalize_loss_by_num_matches normalize_loc_loss_by_codesize = ssd_config.normalize_loc_loss_by_codesize equalization_loss_config = ops.EqualizationLossConfig( weight=ssd_config.loss.equalization_loss.weight, exclude_prefixes=ssd_config.loss.equalization_loss.exclude_prefixes) target_assigner_instance = target_assigner.TargetAssigner( region_similarity_calculator, matcher, box_coder, negative_class_weight=negative_class_weight) ssd_meta_arch_fn = ssd_meta_arch.SSDMetaArch kwargs = {} return ssd_meta_arch_fn( is_training=is_training, anchor_generator=anchor_generator, box_predictor=ssd_box_predictor, box_coder=box_coder, feature_extractor=feature_extractor, encode_background_as_zeros=encode_background_as_zeros, image_resizer_fn=image_resizer_fn, non_max_suppression_fn=non_max_suppression_fn, score_conversion_fn=score_conversion_fn, classification_loss=classification_loss, localization_loss=localization_loss, classification_loss_weight=classification_weight, localization_loss_weight=localization_weight, normalize_loss_by_num_matches=normalize_loss_by_num_matches, hard_example_miner=hard_example_miner, target_assigner_instance=target_assigner_instance, add_summaries=add_summaries, normalize_loc_loss_by_codesize=normalize_loc_loss_by_codesize, freeze_batchnorm=ssd_config.freeze_batchnorm, inplace_batchnorm_update=ssd_config.inplace_batchnorm_update, add_background_class=ssd_config.add_background_class, explicit_background_class=ssd_config.explicit_background_class, random_example_sampler=random_example_sampler, expected_loss_weights_fn=expected_loss_weights_fn, use_confidences_as_targets=ssd_config.use_confidences_as_targets, implicit_example_weight=ssd_config.implicit_example_weight, equalization_loss_config=equalization_loss_config, **kwargs) def _build_faster_rcnn_feature_extractor( feature_extractor_config, is_training, reuse_weights=None, 
inplace_batchnorm_update=False): """Builds a faster_rcnn_meta_arch.FasterRCNNFeatureExtractor based on config. Args: feature_extractor_config: A FasterRcnnFeatureExtractor proto config from faster_rcnn.proto. is_training: True if this feature extractor is being built for training. reuse_weights: if the feature extractor should reuse weights. inplace_batchnorm_update: Whether to update batch_norm inplace during training. This is required for batch norm to work correctly on TPUs. When this is false, user must add a control dependency on tf.GraphKeys.UPDATE_OPS for train/loss op in order to update the batch norm moving average parameters. Returns: faster_rcnn_meta_arch.FasterRCNNFeatureExtractor based on config. Raises: ValueError: On invalid feature extractor type. """ if inplace_batchnorm_update: raise ValueError('inplace batchnorm updates not supported.') feature_type = feature_extractor_config.type first_stage_features_stride = ( feature_extractor_config.first_stage_features_stride) batch_norm_trainable = feature_extractor_config.batch_norm_trainable if feature_type not in FASTER_RCNN_FEATURE_EXTRACTOR_CLASS_MAP: raise ValueError('Unknown Faster R-CNN feature_extractor: {}'.format( feature_type)) feature_extractor_class = FASTER_RCNN_FEATURE_EXTRACTOR_CLASS_MAP[ feature_type] return feature_extractor_class( is_training, first_stage_features_stride, batch_norm_trainable, reuse_weights) def _build_faster_rcnn_model(frcnn_config, is_training, add_summaries): """Builds a Faster R-CNN or R-FCN detection model based on the model config. Builds R-FCN model if the second_stage_box_predictor in the config is of type `rfcn_box_predictor` else builds a Faster R-CNN model. Args: frcnn_config: A faster_rcnn.proto object containing the config for the desired FasterRCNNMetaArch or RFCNMetaArch. is_training: True if this model is being built for training purposes. add_summaries: Whether to add tf summaries in the model. Returns: FasterRCNNMetaArch based on the config. Raises: ValueError: If frcnn_config.type is not recognized (i.e. not registered in model_class_map). 
""" num_classes = frcnn_config.num_classes image_resizer_fn = image_resizer_builder.build(frcnn_config.image_resizer) feature_extractor = _build_faster_rcnn_feature_extractor( frcnn_config.feature_extractor, is_training, inplace_batchnorm_update=frcnn_config.inplace_batchnorm_update) number_of_stages = frcnn_config.number_of_stages first_stage_anchor_generator = anchor_generator_builder.build( frcnn_config.first_stage_anchor_generator) first_stage_target_assigner = target_assigner.create_target_assigner( 'FasterRCNN', 'proposal', use_matmul_gather=frcnn_config.use_matmul_gather_in_matcher) first_stage_atrous_rate = frcnn_config.first_stage_atrous_rate first_stage_box_predictor_arg_scope_fn = hyperparams_builder.build( frcnn_config.first_stage_box_predictor_conv_hyperparams, is_training) first_stage_box_predictor_kernel_size = ( frcnn_config.first_stage_box_predictor_kernel_size) first_stage_box_predictor_depth = frcnn_config.first_stage_box_predictor_depth first_stage_minibatch_size = frcnn_config.first_stage_minibatch_size use_static_shapes = frcnn_config.use_static_shapes and ( frcnn_config.use_static_shapes_for_eval or is_training) first_stage_sampler = sampler.BalancedPositiveNegativeSampler( positive_fraction=frcnn_config.first_stage_positive_balance_fraction, is_static=(frcnn_config.use_static_balanced_label_sampler and use_static_shapes)) first_stage_max_proposals = frcnn_config.first_stage_max_proposals if (frcnn_config.first_stage_nms_iou_threshold < 0 or frcnn_config.first_stage_nms_iou_threshold > 1.0): raise ValueError('iou_threshold not in [0, 1.0].') if (is_training and frcnn_config.second_stage_batch_size > first_stage_max_proposals): raise ValueError('second_stage_batch_size should be no greater than ' 'first_stage_max_proposals.') first_stage_non_max_suppression_fn = functools.partial( post_processing.batch_multiclass_non_max_suppression, score_thresh=frcnn_config.first_stage_nms_score_threshold, iou_thresh=frcnn_config.first_stage_nms_iou_threshold, max_size_per_class=frcnn_config.first_stage_max_proposals, max_total_size=frcnn_config.first_stage_max_proposals, use_static_shapes=use_static_shapes) first_stage_loc_loss_weight = ( frcnn_config.first_stage_localization_loss_weight) first_stage_obj_loss_weight = frcnn_config.first_stage_objectness_loss_weight initial_crop_size = frcnn_config.initial_crop_size maxpool_kernel_size = frcnn_config.maxpool_kernel_size maxpool_stride = frcnn_config.maxpool_stride second_stage_target_assigner = target_assigner.create_target_assigner( 'FasterRCNN', 'detection', use_matmul_gather=frcnn_config.use_matmul_gather_in_matcher) second_stage_box_predictor = box_predictor_builder.build( hyperparams_builder.build, frcnn_config.second_stage_box_predictor, is_training=is_training, num_classes=num_classes) second_stage_batch_size = frcnn_config.second_stage_batch_size second_stage_sampler = sampler.BalancedPositiveNegativeSampler( positive_fraction=frcnn_config.second_stage_balance_fraction, is_static=(frcnn_config.use_static_balanced_label_sampler and use_static_shapes)) (second_stage_non_max_suppression_fn, second_stage_score_conversion_fn ) = post_processing_builder.build(frcnn_config.second_stage_post_processing) second_stage_localization_loss_weight = ( frcnn_config.second_stage_localization_loss_weight) second_stage_classification_loss = ( losses_builder.build_faster_rcnn_classification_loss( frcnn_config.second_stage_classification_loss)) second_stage_classification_loss_weight = ( frcnn_config.second_stage_classification_loss_weight) 
second_stage_mask_prediction_loss_weight = ( frcnn_config.second_stage_mask_prediction_loss_weight) hard_example_miner = None if frcnn_config.HasField('hard_example_miner'): hard_example_miner = losses_builder.build_hard_example_miner( frcnn_config.hard_example_miner, second_stage_classification_loss_weight, second_stage_localization_loss_weight) crop_and_resize_fn = ( ops.matmul_crop_and_resize if frcnn_config.use_matmul_crop_and_resize else ops.native_crop_and_resize) clip_anchors_to_image = ( frcnn_config.clip_anchors_to_image) common_kwargs = { 'is_training': is_training, 'num_classes': num_classes, 'image_resizer_fn': image_resizer_fn, 'feature_extractor': feature_extractor, 'number_of_stages': number_of_stages, 'first_stage_anchor_generator': first_stage_anchor_generator, 'first_stage_target_assigner': first_stage_target_assigner, 'first_stage_atrous_rate': first_stage_atrous_rate, 'first_stage_box_predictor_arg_scope_fn': first_stage_box_predictor_arg_scope_fn, 'first_stage_box_predictor_kernel_size': first_stage_box_predictor_kernel_size, 'first_stage_box_predictor_depth': first_stage_box_predictor_depth, 'first_stage_minibatch_size': first_stage_minibatch_size, 'first_stage_sampler': first_stage_sampler, 'first_stage_non_max_suppression_fn': first_stage_non_max_suppression_fn, 'first_stage_max_proposals': first_stage_max_proposals, 'first_stage_localization_loss_weight': first_stage_loc_loss_weight, 'first_stage_objectness_loss_weight': first_stage_obj_loss_weight, 'second_stage_target_assigner': second_stage_target_assigner, 'second_stage_batch_size': second_stage_batch_size, 'second_stage_sampler': second_stage_sampler, 'second_stage_non_max_suppression_fn': second_stage_non_max_suppression_fn, 'second_stage_score_conversion_fn': second_stage_score_conversion_fn, 'second_stage_localization_loss_weight': second_stage_localization_loss_weight, 'second_stage_classification_loss': second_stage_classification_loss, 'second_stage_classification_loss_weight': second_stage_classification_loss_weight, 'hard_example_miner': hard_example_miner, 'add_summaries': add_summaries, 'crop_and_resize_fn': crop_and_resize_fn, 'clip_anchors_to_image': clip_anchors_to_image, 'use_static_shapes': use_static_shapes, 'resize_masks': frcnn_config.resize_masks } if isinstance(second_stage_box_predictor, rfcn_box_predictor.RfcnBoxPredictor): return rfcn_meta_arch.RFCNMetaArch( second_stage_rfcn_box_predictor=second_stage_box_predictor, **common_kwargs) else: return faster_rcnn_meta_arch.FasterRCNNMetaArch( initial_crop_size=initial_crop_size, maxpool_kernel_size=maxpool_kernel_size, maxpool_stride=maxpool_stride, second_stage_mask_rcnn_box_predictor=second_stage_box_predictor, second_stage_mask_prediction_loss_weight=( second_stage_mask_prediction_loss_weight), **common_kwargs)
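The builder above dispatches on the config's `type` string through a name-to-class map and fails loudly on unknown names instead of guessing. A minimal, self-contained sketch of that registry pattern, with placeholder classes standing in for the real feature extractors:

```python
# Placeholder classes; the real ones live in object_detection.models.
class SSDMobileNetV2FeatureExtractor:
    pass

class SSDResnet50V1FpnFeatureExtractor:
    pass

# Map of config type strings to extractor classes, mirroring SSD_FEATURE_EXTRACTOR_CLASS_MAP.
SSD_FEATURE_EXTRACTOR_CLASS_MAP = {
    "ssd_mobilenet_v2": SSDMobileNetV2FeatureExtractor,
    "ssd_resnet50_v1_fpn": SSDResnet50V1FpnFeatureExtractor,
}

def build_feature_extractor(feature_type):
    if feature_type not in SSD_FEATURE_EXTRACTOR_CLASS_MAP:
        raise ValueError("Unknown ssd feature_extractor: {}".format(feature_type))
    return SSD_FEATURE_EXTRACTOR_CLASS_MAP[feature_type]()

extractor = build_feature_extractor("ssd_resnet50_v1_fpn")
print(type(extractor).__name__)  # SSDResnet50V1FpnFeatureExtractor
```

Registering a new backbone then only requires adding one entry to the map; the build logic itself stays unchanged.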
TensorFlow/Classification/ConvNets/se-resnext101-32x4d/training
training
DGXA100_SE-RNxt101-32x4d_AMP_90E
#!/bin/bash
# Copyright (c) 2019 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

WORKSPACE=${1:-"/workspace/rn50v15_tf"}
DATA_DIR=${2:-"/data"}
OTHER=${@:3}

if [[ ! -z "${BIND_TO_SOCKET}" ]]; then
    BIND_TO_SOCKET="--bind-to socket"
fi

mpiexec --allow-run-as-root ${BIND_TO_SOCKET} -np 8 python3 main.py --arch=se-resnext101-32x4d \
    --mode=train_and_evaluate --iter_unit=epoch --num_iter=90 \
    --batch_size=256 --warmup_steps=100 --cosine_lr --label_smoothing 0.1 \
    --lr_init=0.256 --lr_warmup_epochs=8 --momentum=0.875 --weight_decay=6.103515625e-05 \
    --amp \
    --data_dir=${DATA_DIR}/tfrecords --data_idx_dir=${DATA_DIR}/dali_idx \
    --results_dir=${WORKSPACE}/results --weight_init=fan_in ${OTHER}
TensorFlow2/Recommendation/WideAndDeep/tests/feature_specs
feature_specs
no_categorical
channel_spec:
  label:
  - clicked
  map: []
  multihot_categorical: []
  numerical:
  - document_id_document_id_promo_sim_categories
  - document_id_document_id_promo_sim_topics
  - document_id_document_id_promo_sim_entities
  - document_id_promo_ctr
  - publisher_id_promo_ctr
  - source_id_promo_ctr
  - document_id_promo_count
  - publish_time_days_since_published
  - ad_id_ctr
  - advertiser_id_ctr
  - campaign_id_ctr
  - ad_id_count
  - publish_time_promo_days_since_published
  onehot_categorical: []
feature_spec:
  ad_id_count: {}
  ad_id_ctr: {}
  advertiser_id_ctr: {}
  campaign_id_ctr: {}
  clicked: {}
  document_id_document_id_promo_sim_categories: {}
  document_id_document_id_promo_sim_entities: {}
  document_id_document_id_promo_sim_topics: {}
  document_id_promo_count: {}
  document_id_promo_ctr: {}
  publish_time_days_since_published: {}
  publish_time_promo_days_since_published: {}
  publisher_id_promo_ctr: {}
  source_id_promo_ctr: {}
metadata: {}
source_spec:
  test:
  - features:
    - clicked
    - document_id_document_id_promo_sim_categories
    - document_id_document_id_promo_sim_topics
    - document_id_document_id_promo_sim_entities
    - document_id_promo_ctr
    - publisher_id_promo_ctr
    - source_id_promo_ctr
    - document_id_promo_count
    - publish_time_days_since_published
    - ad_id_ctr
    - advertiser_id_ctr
    - campaign_id_ctr
    - ad_id_count
    - publish_time_promo_days_since_published
    files:
    - valid.csv
    type: csv
  train:
  - features:
    - clicked
    - document_id_document_id_promo_sim_categories
    - document_id_document_id_promo_sim_topics
    - document_id_document_id_promo_sim_entities
    - document_id_promo_ctr
    - publisher_id_promo_ctr
    - source_id_promo_ctr
    - document_id_promo_count
    - publish_time_days_since_published
    - ad_id_ctr
    - advertiser_id_ctr
    - campaign_id_ctr
    - ad_id_count
    - publish_time_promo_days_since_published
    files:
    - train.csv
    type: csv
PyTorch/SpeechSynthesis/Tacotron2/tacotron2
tacotron2
arg_parser
# ***************************************************************************** # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # * Neither the name of the NVIDIA CORPORATION nor the # names of its contributors may be used to endorse or promote products # derived from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE # DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # ***************************************************************************** import argparse from tacotron2.text import symbols def tacotron2_parser(parent, add_help=False): """ Parse commandline arguments. 
""" parser = argparse.ArgumentParser(parents=[parent], add_help=add_help) # misc parameters parser.add_argument('--mask-padding', default=False, type=bool, help='Use mask padding') parser.add_argument('--n-mel-channels', default=80, type=int, help='Number of bins in mel-spectrograms') # symbols parameters global symbols len_symbols = len(symbols) symbols = parser.add_argument_group('symbols parameters') symbols.add_argument('--n-symbols', default=len_symbols, type=int, help='Number of symbols in dictionary') symbols.add_argument('--symbols-embedding-dim', default=512, type=int, help='Input embedding dimension') # encoder parameters encoder = parser.add_argument_group('encoder parameters') encoder.add_argument('--encoder-kernel-size', default=5, type=int, help='Encoder kernel size') encoder.add_argument('--encoder-n-convolutions', default=3, type=int, help='Number of encoder convolutions') encoder.add_argument('--encoder-embedding-dim', default=512, type=int, help='Encoder embedding dimension') # decoder parameters decoder = parser.add_argument_group('decoder parameters') decoder.add_argument('--n-frames-per-step', default=1, type=int, help='Number of frames processed per step') # currently only 1 is supported decoder.add_argument('--decoder-rnn-dim', default=1024, type=int, help='Number of units in decoder LSTM') decoder.add_argument('--prenet-dim', default=256, type=int, help='Number of ReLU units in prenet layers') decoder.add_argument('--max-decoder-steps', default=2000, type=int, help='Maximum number of output mel spectrograms') decoder.add_argument('--gate-threshold', default=0.5, type=float, help='Probability threshold for stop token') decoder.add_argument('--p-attention-dropout', default=0.1, type=float, help='Dropout probability for attention LSTM') decoder.add_argument('--p-decoder-dropout', default=0.1, type=float, help='Dropout probability for decoder LSTM') decoder.add_argument('--decoder-no-early-stopping', action='store_true', help='Stop decoding once all samples are finished') # attention parameters attention = parser.add_argument_group('attention parameters') attention.add_argument('--attention-rnn-dim', default=1024, type=int, help='Number of units in attention LSTM') attention.add_argument('--attention-dim', default=128, type=int, help='Dimension of attention hidden representation') # location layer parameters location = parser.add_argument_group('location parameters') location.add_argument( '--attention-location-n-filters', default=32, type=int, help='Number of filters for location-sensitive attention') location.add_argument( '--attention-location-kernel-size', default=31, type=int, help='Kernel size for location-sensitive attention') # Mel-post processing network parameters postnet = parser.add_argument_group('postnet parameters') postnet.add_argument('--postnet-embedding-dim', default=512, type=int, help='Postnet embedding dimension') postnet.add_argument('--postnet-kernel-size', default=5, type=int, help='Postnet kernel size') postnet.add_argument('--postnet-n-convolutions', default=5, type=int, help='Number of postnet convolutions') return parser
TensorFlow/Detection/SSD/models/research/object_detection/samples/configs
configs
faster_rcnn_resnet50_fgvc
# Faster R-CNN with Resnet-50 (v1) # Users should configure the fine_tune_checkpoint field in the train config as # well as the label_map_path and input_path fields in the train_input_reader and # eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that # should be configured. model { faster_rcnn { num_classes: 2854 image_resizer { keep_aspect_ratio_resizer { min_dimension: 600 max_dimension: 1024 } } feature_extractor { type: 'faster_rcnn_resnet50' first_stage_features_stride: 16 } first_stage_anchor_generator { grid_anchor_generator { scales: [0.25, 0.5, 1.0, 2.0] aspect_ratios: [0.5, 1.0, 2.0] height_stride: 16 width_stride: 16 } } first_stage_box_predictor_conv_hyperparams { op: CONV regularizer { l2_regularizer { weight: 0.0 } } initializer { truncated_normal_initializer { stddev: 0.01 } } } first_stage_nms_score_threshold: 0.0 first_stage_nms_iou_threshold: 0.7 first_stage_max_proposals: 32 first_stage_localization_loss_weight: 2.0 first_stage_objectness_loss_weight: 1.0 initial_crop_size: 14 maxpool_kernel_size: 2 maxpool_stride: 2 second_stage_batch_size: 32 second_stage_box_predictor { mask_rcnn_box_predictor { use_dropout: false dropout_keep_probability: 1.0 fc_hyperparams { op: FC regularizer { l2_regularizer { weight: 0.0 } } initializer { variance_scaling_initializer { factor: 1.0 uniform: true mode: FAN_AVG } } } } } second_stage_post_processing { batch_non_max_suppression { score_threshold: 0.0 iou_threshold: 0.6 max_detections_per_class: 5 max_total_detections: 5 } score_converter: SOFTMAX } second_stage_localization_loss_weight: 2.0 second_stage_classification_loss_weight: 1.0 } } train_config: { batch_size: 1 num_steps: 4000000 optimizer { momentum_optimizer: { learning_rate: { manual_step_learning_rate { initial_learning_rate: 0.0003 schedule { step: 3000000 learning_rate: .00003 } schedule { step: 3500000 learning_rate: .000003 } } } momentum_optimizer_value: 0.9 } use_moving_average: false } gradient_clipping_by_norm: 10.0 fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt" data_augmentation_options { random_horizontal_flip { } } } train_input_reader: { label_map_path: "PATH_TO_BE_CONFIGURED/fgvc_2854_classes_label_map.pbtxt" tf_record_input_reader { input_path: "PATH_TO_BE_CONFIGURED/animal_2854_train.record" } } eval_config: { metrics_set: "pascal_voc_detection_metrics" use_moving_averages: false num_examples: 10 } eval_input_reader: { label_map_path: "PATH_TO_BE_CONFIGURED/fgvc_2854_classes_label_map.pbtxt" shuffle: false num_readers: 1 tf_record_input_reader { input_path: "PATH_TO_BE_CONFIGURED/animal_2854_val.record" } }
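The `manual_step_learning_rate` block above reads as a piecewise-constant schedule: 3e-4 until step 3,000,000, then 3e-5, and 3e-6 after step 3,500,000. A hypothetical helper (not part of the Object Detection API) mirroring that behavior:

```python
# (boundary step, learning rate) pairs taken from the config above.
SCHEDULE = [(0, 3e-4), (3_000_000, 3e-5), (3_500_000, 3e-6)]

def learning_rate_at(step, schedule=SCHEDULE):
    lr = schedule[0][1]
    for boundary, value in schedule:
        if step >= boundary:
            lr = value
    return lr

for step in (0, 2_999_999, 3_000_000, 3_600_000):
    print(step, learning_rate_at(step))  # 0.0003, 0.0003, 3e-05, 3e-06
```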
PyTorch/Forecasting/TFT
TFT
data_utils
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ################################ # Copyright 2021 The Google Research Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import math import pickle import enum import datetime from collections import namedtuple, OrderedDict import sklearn.preprocessing from sklearn.impute import SimpleImputer import pandas as pd import numpy as np from bisect import bisect import torch from torch.utils.data import Dataset, IterableDataset, DataLoader, DistributedSampler, RandomSampler from torch.utils.data.dataloader import default_collate class DataTypes(enum.IntEnum): """Defines numerical types of each column.""" CONTINUOUS = 0 CATEGORICAL = 1 DATE = 2 STR = 3 class InputTypes(enum.IntEnum): """Defines input types of each column.""" TARGET = 0 OBSERVED = 1 KNOWN = 2 STATIC = 3 ID = 4 # Single column used as an entity identifier TIME = 5 # Single column exclusively used as a time index FeatureSpec = namedtuple('FeatureSpec', ['name', 'feature_type', 'feature_embed_type']) DTYPE_MAP = { DataTypes.CONTINUOUS : np.float32, DataTypes.CATEGORICAL : np.int64, DataTypes.DATE:'datetime64[ns]', DataTypes.STR: str } FEAT_ORDER = [ (InputTypes.STATIC, DataTypes.CATEGORICAL), (InputTypes.STATIC, DataTypes.CONTINUOUS), (InputTypes.KNOWN, DataTypes.CATEGORICAL), (InputTypes.KNOWN, DataTypes.CONTINUOUS), (InputTypes.OBSERVED, DataTypes.CATEGORICAL), (InputTypes.OBSERVED, DataTypes.CONTINUOUS), (InputTypes.TARGET, DataTypes.CONTINUOUS), (InputTypes.ID, DataTypes.CATEGORICAL) ] FEAT_NAMES = ['s_cat' , 's_cont' , 'k_cat' , 'k_cont' , 'o_cat' , 'o_cont' , 'target', 'id'] DEFAULT_ID_COL = 'id' class TFTBinaryDataset(Dataset): def __init__(self, path, config): super(TFTBinaryDataset).__init__() self.features = [x for x in config.features if x.feature_embed_type != DataTypes.DATE] self.example_length = config.example_length self.stride = config.dataset_stride self.grouped = pickle.load(open(path, 'rb')) self.grouped = [x for x in self.grouped if x.shape[0] >= self.example_length] self._cum_examples_in_group = np.cumsum([(g.shape[0] - self.example_length + 1)//self.stride for g in self.grouped]) self.feature_type_col_map = [[i for i,f in enumerate(self.features) if (f.feature_type, f.feature_embed_type) == x] for x in FEAT_ORDER] # The list comprehension below is an elaborate way of rearranging data into correct order, # simultaneously doing casting to proper types. 
Probably can be written neater self.grouped = [ [ arr[:, idxs].view(dtype=np.float32).astype(DTYPE_MAP[t[1]]) for t, idxs in zip(FEAT_ORDER, self.feature_type_col_map) ] for arr in self.grouped ] def __len__(self): return self._cum_examples_in_group[-1] if len(self._cum_examples_in_group) else 0 def __getitem__(self, idx): g_idx = bisect(self._cum_examples_in_group, idx) e_idx = idx - self._cum_examples_in_group[g_idx-1] if g_idx else idx group = self.grouped[g_idx] tensors = [ torch.from_numpy(feat[e_idx * self.stride:e_idx*self.stride + self.example_length]) if feat.size else torch.empty(0) for feat in group ] return OrderedDict(zip(FEAT_NAMES, tensors)) class TFTDataset(Dataset): def __init__(self, path, config): super(TFTDataset).__init__() self.features = config.features self.data = pd.read_csv(path, index_col=0) self.example_length = config.example_length self.stride = config.dataset_stride # name field is a column name. # there can be multiple entries with the same name because one column can be interpreted in many ways time_col_name = next(x.name for x in self.features if x.feature_type==InputTypes.TIME) id_col_name = next(x.name for x in self.features if x.feature_type==InputTypes.ID) if not id_col_name in self.data.columns: id_col_name = DEFAULT_ID_COL self.features = [x for x in self.features if x.feature_type!=InputTypes.ID] self.features.append(FeatureSpec(DEFAULT_ID_COL, InputTypes.ID, DataTypes.CATEGORICAL)) col_dtypes = {v.name:DTYPE_MAP[v.feature_embed_type] for v in self.features} self.data.sort_values(time_col_name,inplace=True) self.data = self.data[set(x.name for x in self.features)] #leave only relevant columns self.data = self.data.astype(col_dtypes) self.data = self.data.groupby(id_col_name).filter(lambda group: len(group) >= self.example_length) self.grouped = list(self.data.groupby(id_col_name)) self._cum_examples_in_group = np.cumsum([(len(g[1]) - self.example_length + 1)//self.stride for g in self.grouped]) def __len__(self): return self._cum_examples_in_group[-1] def __getitem__(self, idx): g_idx = len([x for x in self._cum_examples_in_group if x <= idx]) e_idx = idx - self._cum_examples_in_group[g_idx-1] if g_idx else idx group = self.grouped[g_idx][1] sliced = group.iloc[e_idx * self.stride:e_idx*self.stride + self.example_length] # We need to be sure that tensors are returned in the correct order tensors = tuple([] for _ in range(8)) for v in self.features: if v.feature_type == InputTypes.STATIC and v.feature_embed_type == DataTypes.CATEGORICAL: tensors[0].append(torch.from_numpy(sliced[v.name].to_numpy())) elif v.feature_type == InputTypes.STATIC and v.feature_embed_type == DataTypes.CONTINUOUS: tensors[1].append(torch.from_numpy(sliced[v.name].to_numpy())) elif v.feature_type == InputTypes.KNOWN and v.feature_embed_type == DataTypes.CATEGORICAL: tensors[2].append(torch.from_numpy(sliced[v.name].to_numpy())) elif v.feature_type == InputTypes.KNOWN and v.feature_embed_type == DataTypes.CONTINUOUS: tensors[3].append(torch.from_numpy(sliced[v.name].to_numpy())) elif v.feature_type == InputTypes.OBSERVED and v.feature_embed_type == DataTypes.CATEGORICAL: tensors[4].append(torch.from_numpy(sliced[v.name].to_numpy())) elif v.feature_type == InputTypes.OBSERVED and v.feature_embed_type == DataTypes.CONTINUOUS: tensors[5].append(torch.from_numpy(sliced[v.name].to_numpy())) elif v.feature_type == InputTypes.TARGET: tensors[6].append(torch.from_numpy(sliced[v.name].to_numpy())) elif v.feature_type == InputTypes.ID: 
tensors[7].append(torch.from_numpy(sliced[v.name].to_numpy())) tensors = [torch.stack(x, dim=-1) if x else torch.empty(0) for x in tensors] return OrderedDict(zip(FEAT_NAMES, tensors)) def get_dataset_splits(df, config): if hasattr(config, 'relative_split') and config.relative_split: forecast_len = config.example_length - config.encoder_length # The valid split is shifted from the train split by number of the forecast steps to the future. # The test split is shifted by the number of the forecast steps from the valid split train = [] valid = [] test = [] for _, group in df.groupby(DEFAULT_ID_COL): index = group[config.time_ids] _train = group.loc[index < config.valid_boundary] _valid = group.iloc[(len(_train) - config.encoder_length):(len(_train) + forecast_len)] _test = group.iloc[(len(_train) - config.encoder_length + forecast_len):(len(_train) + 2*forecast_len)] train.append(_train) valid.append(_valid) test.append(_test) train = pd.concat(train, axis=0) valid = pd.concat(valid, axis=0) test = pd.concat(test, axis=0) else: index = df[config.time_ids] train = df.loc[(index >= config.train_range[0]) & (index < config.train_range[1])] valid = df.loc[(index >= config.valid_range[0]) & (index < config.valid_range[1])] test = df.loc[(index >= config.test_range[0]) & (index < config.test_range[1])] return train, valid, test def flatten_ids(df, config): if config.missing_id_strategy == 'drop': if hasattr(config, 'combine_ids') and config.combine_ids: index = np.logical_or.reduce([df[c].isna() for c in config.combine_ids]) else: id_col = next(x.name for x in config.features if x.feature_type == InputTypes.ID) index = df[id_col].isna() index = index[index == True].index # Extract indices of nans df.drop(index, inplace=True) if not (hasattr(config, 'combine_ids') and config.combine_ids): id_col = next(x.name for x in config.features if x.feature_type == InputTypes.ID) ids = df[id_col].apply(str) df.drop(id_col, axis=1, inplace=True) encoder = sklearn.preprocessing.LabelEncoder().fit(ids.values) df[DEFAULT_ID_COL] = encoder.transform(ids) encoders = OrderedDict({id_col: encoder}) else: encoders = {c:sklearn.preprocessing.LabelEncoder().fit(df[c].values) for c in config.combine_ids} encoders = OrderedDict(encoders) lens = [len(v.classes_) for v in encoders.values()] clens = np.roll(np.cumprod(lens), 1) clens[0] = 1 # this takes a looooooot of time. Probably it would be better to create 2 dummy columns df[DEFAULT_ID_COL] = df.apply(lambda row: sum([encoders[c].transform([row[c]])[0]*clens[i] for i,c in enumerate(encoders.keys())]), axis=1) df.drop(config.combine_ids, axis=1, inplace=True) return DEFAULT_ID_COL, encoders def impute(df, config): #XXX This ensures that out scaling will have the same mean. 
We still need to check the variance if not hasattr(config, 'missing_data_label'): return df, None else: imp = SimpleImputer(missing_values=config.missing_data_label, strategy='mean') mask = df.applymap(lambda x: True if x == config.missing_data_label else False) data = df.values col_mask = (data == config.missing_data_label).all(axis=0) data[:,~col_mask] = imp.fit_transform(data) return data, mask def normalize_reals(train, valid, test, config, id_col=DEFAULT_ID_COL): tgt_cols = [x.name for x in config.features if x.feature_type == InputTypes.TARGET] real_cols = list(set(v.name for v in config.features if v.feature_embed_type == DataTypes.CONTINUOUS).difference(set(tgt_cols))) real_scalers = {} tgt_scalers = {} def apply_scalers(df, name=None): if name is None: name = df.name mask = df.applymap(lambda x: True if x == config.missing_data_label else False) if hasattr(config, 'missing_data_label') else None df[real_cols] = real_scalers[name].transform(df[real_cols]) if mask is not None and any(mask): df[real_cols].mask(mask, 10**9) df[tgt_cols] = tgt_scalers[name].transform(df[tgt_cols]) return df if config.scale_per_id: for identifier, sliced in train.groupby(id_col): data = sliced[real_cols] data, _ = impute(data, config) real_scalers[identifier] = sklearn.preprocessing.StandardScaler().fit(data) # XXX We should probably remove examples that contain NaN as a target target = sliced[tgt_cols] tgt_scalers[identifier] = sklearn.preprocessing.StandardScaler().fit(target) train = train.groupby(id_col).apply(apply_scalers) # For valid and testing leave only timeseries previously present in train subset # XXX for proper data science we should consider encoding unseen timeseries as a special case, not throwing them away valid = valid.loc[valid[id_col].isin(real_scalers.keys())] valid = valid.groupby(id_col).apply(apply_scalers) test = test.loc[test[id_col].isin(real_scalers.keys())] test = test.groupby(id_col).apply(apply_scalers) else: data, _ = impute(train[real_cols], config) real_scalers[''] = sklearn.preprocessing.StandardScaler().fit(data) tgt_scalers[''] = sklearn.preprocessing.StandardScaler().fit(train[tgt_cols]) train = apply_scalers(train, name='') valid = apply_scalers(valid, name='') test = apply_scalers(test, name='') return train, valid, test, real_scalers, tgt_scalers def encode_categoricals(train, valid, test, config): cat_encodings = {} cat_cols = list(set(v.name for v in config.features if v.feature_embed_type == DataTypes.CATEGORICAL and v.feature_type != InputTypes.ID)) num_classes = [] #XXX Maybe we should modify config based on this value? Or send a warninig? # For TC performance reasons we might want for num_classes[i] be divisible by 8 # Train categorical encoders for c in cat_cols: if config.missing_cat_data_strategy == 'special_token': #XXX this will probably require some data augmentation unique = train[c].unique() valid[c].loc[valid[c].isin(unique)] = '<UNK>' test[c].loc[test[c].isin(unique)] = '<UNK>' if config.missing_cat_data_strategy == 'encode_all' or \ config.missing_cat_data_strategy == 'special_token': srs = pd.concat([train[c], valid[c], test[c]]).apply(str) cat_encodings[c] = sklearn.preprocessing.LabelEncoder().fit(srs.values) elif config.missing_cat_data_strategy == 'drop': # TODO: implement this. 
In addition to dropping rows this has to split specific time series in chunks # to prevent data from having temporal gaps pass num_classes.append(srs.nunique()) print('Categorical variables encodings lens: ', num_classes) for split in [train, valid, test]: for c in cat_cols: srs = split[c].apply(str) split[c] = srs split.loc[:,c] = cat_encodings[c].transform(srs) return cat_encodings def preprocess(src_path, dst_path, config): df = pd.read_csv(src_path, index_col=0) for c in config.features: if c.feature_embed_type == DataTypes.DATE: df[c.name] = pd.to_datetime(df[c.name]) # Leave only columns relevant to preprocessing relevant_columns = list(set([f.name for f in config.features] + [config.time_ids])) df = df[relevant_columns] id_col, id_encoders = flatten_ids(df, config) df = df.reindex(sorted(df.columns), axis=1) train, valid, test = get_dataset_splits(df, config) # Length filter the data (all timeseries shorter than example len will be dropped) #for df in [train, valid, test]: # df.groupby(id_col).filter(lambda x: len(x) >= config.example_length) train = pd.concat([x[1] for x in train.groupby(id_col) if len(x[1]) >= config.example_length]) valid = pd.concat([x[1] for x in valid.groupby(id_col) if len(x[1]) >= config.example_length]) test = pd.concat([x[1] for x in test.groupby(id_col) if len(x[1]) >= config.example_length]) train, valid, test, real_scalers, tgt_scalers = normalize_reals(train, valid, test, config, id_col) cat_encodings = encode_categoricals(train, valid, test, config) os.makedirs(dst_path, exist_ok=True) train.to_csv(os.path.join(dst_path, 'train.csv')) valid.to_csv(os.path.join(dst_path, 'valid.csv')) test.to_csv(os.path.join(dst_path, 'test.csv')) # Save relevant columns in binary form for faster dataloading # IMORTANT: We always expect id to be a single column indicating the complete timeseries # We also expect a copy of id in form of static categorical input!!! 
col_names = [id_col] + [x.name for x in config.features if x.feature_embed_type != DataTypes.DATE and x.feature_type != InputTypes.ID] grouped_train = [x[1][col_names].values.astype(np.float32).view(dtype=np.int32) for x in train.groupby(id_col)] grouped_valid = [x[1][col_names].values.astype(np.float32).view(dtype=np.int32) for x in valid.groupby(id_col)] grouped_test = [x[1][col_names].values.astype(np.float32).view(dtype=np.int32) for x in test.groupby(id_col)] pickle.dump(grouped_train, open(os.path.join(dst_path, 'train.bin'), 'wb')) pickle.dump(grouped_valid, open(os.path.join(dst_path, 'valid.bin'), 'wb')) pickle.dump(grouped_test, open(os.path.join(dst_path, 'test.bin'), 'wb')) with open(os.path.join(dst_path, 'real_scalers.bin'), 'wb') as f: pickle.dump(real_scalers, f) with open(os.path.join(dst_path, 'tgt_scalers.bin'), 'wb') as f: pickle.dump(tgt_scalers, f) with open(os.path.join(dst_path, 'cat_encodings.bin'), 'wb') as f: pickle.dump(cat_encodings, f) with open(os.path.join(dst_path, 'id_encoders.bin'), 'wb') as f: pickle.dump(id_encoders, f) def sample_data(dataset, num_samples): if num_samples < 0: return dataset else: return torch.utils.data.Subset(dataset, np.random.choice(np.arange(len(dataset)), size=num_samples, replace=False)) def load_dataset(args, config, collate_fn=default_collate): from utils import print_once train_split = TFTBinaryDataset(os.path.join(args.data_path, 'train.bin'), config) train_split = sample_data(train_split, args.sample_data[0]) if args.distributed_world_size > 1: data_sampler = DistributedSampler(train_split, args.distributed_world_size, args.distributed_rank, seed=args.seed + args.distributed_rank, drop_last=True) else: data_sampler = RandomSampler(train_split) train_loader = DataLoader(train_split, batch_size=args.batch_size, num_workers=4, sampler=data_sampler, collate_fn=collate_fn, pin_memory=True) valid_split = TFTBinaryDataset(os.path.join(args.data_path, 'valid.bin'), config) valid_split = sample_data(valid_split, args.sample_data[1]) if args.distributed_world_size > 1: data_sampler = DistributedSampler(valid_split, args.distributed_world_size, args.distributed_rank, shuffle=False, drop_last=False) else: data_sampler = None valid_loader = DataLoader(valid_split, batch_size=args.batch_size, sampler=data_sampler, num_workers=4, collate_fn=collate_fn, pin_memory=True) test_split = TFTBinaryDataset(os.path.join(args.data_path, 'test.bin'), config) if args.distributed_world_size > 1: data_sampler = DistributedSampler(test_split, args.distributed_world_size, args.distributed_rank, shuffle=False, drop_last=False) else: data_sampler = None test_loader = DataLoader(test_split, batch_size=args.batch_size, sampler=data_sampler, num_workers=4, collate_fn=collate_fn, pin_memory=True) print_once(f'Train split length: {len(train_split)}') print_once(f'Valid split length: {len(valid_split)}') print_once(f'Test split length: {len(test_split)}') return train_loader, valid_loader, test_loader def standarize_electricity(path): """Code taken from https://github.com/google-research/google-research/blob/master/tft/script_download_data.py""" df = pd.read_csv(os.path.join(path, 'LD2011_2014.txt'), index_col=0, sep=';', decimal=',') df.index = pd.to_datetime(df.index) df.sort_index(inplace=True) # Used to determine the start and end dates of a series output = df.resample('1h').mean().replace(0., np.nan) earliest_time = output.index.min() df_list = [] for label in output: print('Processing {}'.format(label)) srs = output[label] start_date = 
min(srs.fillna(method='ffill').dropna().index) end_date = max(srs.fillna(method='bfill').dropna().index) active_range = (srs.index >= start_date) & (srs.index <= end_date) srs = srs[active_range].fillna(0.) tmp = pd.DataFrame({'power_usage': srs}) date = tmp.index tmp['t'] = (date - earliest_time).seconds / 60 / 60 + ( date - earliest_time).days * 24 tmp['days_from_start'] = (date - earliest_time).days tmp['categorical_id'] = label tmp['date'] = date tmp['id'] = label tmp['hour'] = date.hour tmp['day'] = date.day tmp['day_of_week'] = date.dayofweek tmp['month'] = date.month df_list.append(tmp) output = pd.concat(df_list, axis=0, join='outer').reset_index(drop=True) output['categorical_id'] = output['id'].copy() output['hours_from_start'] = output['t'] output['categorical_day_of_week'] = output['day_of_week'].copy() output['categorical_hour'] = output['hour'].copy() output.to_csv(os.path.join(path, 'standarized.csv')) def standarize_traffic(path): def process_list(s, variable_type=int, delimiter=None): """Parses a line in the PEMS format to a list.""" if delimiter is None: l = [ variable_type(i) for i in s.replace('[', '').replace(']', '').split() ] else: l = [ variable_type(i) for i in s.replace('[', '').replace(']', '').split(delimiter) ] return l def read_single_list(filename): """Returns single list from a file in the PEMS-custom format.""" with open(os.path.join(path, filename), 'r') as dat: l = process_list(dat.readlines()[0]) return l def read_matrix(filename): """Returns a matrix from a file in the PEMS-custom format.""" array_list = [] with open(os.path.join(path, filename), 'r') as dat: lines = dat.readlines() for i, line in enumerate(lines): if (i + 1) % 50 == 0: print('Completed {} of {} rows for {}'.format(i + 1, len(lines), filename)) array = [ process_list(row_split, variable_type=float, delimiter=None) for row_split in process_list( line, variable_type=str, delimiter=';') ] array_list.append(array) return array_list shuffle_order = np.array(read_single_list('randperm')) - 1 # index from 0 train_dayofweek = read_single_list('PEMS_trainlabels') train_tensor = read_matrix('PEMS_train') test_dayofweek = read_single_list('PEMS_testlabels') test_tensor = read_matrix('PEMS_test') # Inverse permutate shuffle order print('Shuffling') inverse_mapping = { new_location: previous_location for previous_location, new_location in enumerate(shuffle_order) } reverse_shuffle_order = np.array([ inverse_mapping[new_location] for new_location, _ in enumerate(shuffle_order) ]) # Group and reoder based on permuation matrix print('Reodering') day_of_week = np.array(train_dayofweek + test_dayofweek) combined_tensor = np.array(train_tensor + test_tensor) day_of_week = day_of_week[reverse_shuffle_order] combined_tensor = combined_tensor[reverse_shuffle_order] # Put everything back into a dataframe print('Parsing as dataframe') labels = ['traj_{}'.format(i) for i in read_single_list('stations_list')] hourly_list = [] for day, day_matrix in enumerate(combined_tensor): # Hourly data hourly = pd.DataFrame(day_matrix.T, columns=labels) hourly['hour_on_day'] = [int(i / 6) for i in hourly.index ] # sampled at 10 min intervals if hourly['hour_on_day'].max() > 23 or hourly['hour_on_day'].min() < 0: raise ValueError('Invalid hour! 
{}-{}'.format( hourly['hour_on_day'].min(), hourly['hour_on_day'].max())) hourly = hourly.groupby('hour_on_day', as_index=True).mean()[labels] hourly['sensor_day'] = day hourly['time_on_day'] = hourly.index hourly['day_of_week'] = day_of_week[day] hourly_list.append(hourly) hourly_frame = pd.concat(hourly_list, axis=0, ignore_index=True, sort=False) # Flatten such that each entitiy uses one row in dataframe store_columns = [c for c in hourly_frame.columns if 'traj' in c] other_columns = [c for c in hourly_frame.columns if 'traj' not in c] flat_df = pd.DataFrame(columns=['values', 'prev_values', 'next_values'] + other_columns + ['id']) for store in store_columns: print('Processing {}'.format(store)) sliced = hourly_frame[[store] + other_columns].copy() sliced.columns = ['values'] + other_columns sliced['id'] = int(store.replace('traj_', '')) # Sort by Sensor-date-time key = sliced['id'].apply(str) \ + sliced['sensor_day'].apply(lambda x: '_{:03d}'.format(x)) \ + sliced['time_on_day'].apply(lambda x: '_{:03d}'.format(x)) sliced = sliced.set_index(key).sort_index() sliced['values'] = sliced['values'].fillna(method='ffill') sliced['prev_values'] = sliced['values'].shift(1) sliced['next_values'] = sliced['values'].shift(-1) flat_df = flat_df.append(sliced.dropna(), ignore_index=True, sort=False) # Filter to match range used by other academic papers index = flat_df['sensor_day'] flat_df = flat_df[index < 173].copy() # Creating columns fo categorical inputs flat_df['categorical_id'] = flat_df['id'].copy() flat_df['hours_from_start'] = flat_df['time_on_day'] \ + flat_df['sensor_day']*24. flat_df['categorical_day_of_week'] = flat_df['day_of_week'].copy() flat_df['categorical_time_on_day'] = flat_df['time_on_day'].copy() flat_df.to_csv(os.path.join(path, 'standarized.csv'))
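One detail worth calling out from `TFTBinaryDataset` above is how a flat sample index is mapped to a (timeseries, window) pair: the number of windows per group is accumulated once at construction time, and `bisect` locates the owning group at lookup time. A self-contained sketch of that indexing, with made-up group lengths:

```python
import numpy as np
from bisect import bisect

example_length, stride = 8, 1
group_lengths = [10, 8, 20]  # hypothetical lengths of three timeseries

# Number of sliding windows each timeseries contributes, then a running total.
windows_per_group = [(n - example_length + 1) // stride for n in group_lengths]
cum_windows = np.cumsum(windows_per_group)  # [3, 4, 17]

def locate(idx):
    g_idx = bisect(cum_windows, idx)                       # which timeseries owns this index
    e_idx = idx - cum_windows[g_idx - 1] if g_idx else idx  # window offset inside that series
    return g_idx, e_idx

print(locate(0))   # (0, 0): first window of group 0
print(locate(3))   # (1, 0): first window of group 1
print(locate(16))  # (2, 12): last window of group 2
```

The dataset length is simply the last cumulative value, so no per-sample bookkeeping is needed beyond this one array.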
TensorFlow/Classification/ConvNets/resnext101-32x4d/training
training
DGX1_RNxt101-32x4d_FP32_250E
#!/bin/bash # Copyright (c) 2019 NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. WORKSPACE=${1:-"/workspace/rn50v15_tf"} DATA_DIR=${2:-"/data"} OTHER=${@:3} if [[ ! -z "${BIND_TO_SOCKET}" ]]; then BIND_TO_SOCKET="--bind-to socket" fi mpiexec --allow-run-as-root ${BIND_TO_SOCKET} -np 8 python3 main.py --arch=resnext101-32x4d \ --mode=train_and_evaluate --iter_unit=epoch --num_iter=250 --mixup=0.2 \ --batch_size=64 --warmup_steps=100 --cosine_lr --label_smoothing 0.1 \ --lr_init=0.256 --lr_warmup_epochs=8 --momentum=0.875 --weight_decay=6.103515625e-05 \ --data_dir=${DATA_DIR}/tfrecords --data_idx_dir=${DATA_DIR}/dali_idx \ --results_dir=${WORKSPACE}/results --weight_init=fan_in ${OTHER}
PyTorch/Recommendation/DLRM/dlrm/scripts
scripts
gen_csv
from dlrm.data.defaults import NUMERICAL_CHANNEL, LABEL_CHANNEL from dlrm.data.feature_spec import FeatureSpec from argparse import ArgumentParser import pandas as pd import os import numpy as np def parse_args(): parser = ArgumentParser() parser.add_argument('--feature_spec_in', type=str, default='feature_spec.yaml', help='Name of the input feature specification file') parser.add_argument('--output', type=str, default='/data') parser.add_argument('--size', type=int, default=1000) return parser.parse_args() def main(): args = parse_args() dataset_size = args.size fspec_in = FeatureSpec.from_yaml(args.feature_spec_in) fspec_in.base_directory = args.output cat_cardinalities = fspec_in.get_categorical_sizes() cat_names = fspec_in.get_categorical_feature_names() cardinalities = {name: cardinality for name, cardinality in zip(cat_names, cat_cardinalities)} input_label_feature_name = fspec_in.channel_spec[LABEL_CHANNEL][0] numerical_names_set = set(fspec_in.channel_spec[NUMERICAL_CHANNEL]) for mapping_name, mapping in fspec_in.source_spec.items(): for chunk in mapping: assert chunk['type'] == 'csv', "Only csv files supported in this generator" assert len(chunk['files']) == 1, "Only one file per chunk supported in this transcoder" path_to_save = os.path.join(fspec_in.base_directory, chunk['files'][0]) data = [] for name in chunk['features']: if name == input_label_feature_name: data.append(np.random.randint(0, 1, size=dataset_size)) elif name in numerical_names_set: data.append(np.random.rand(dataset_size)) else: local_cardinality = cardinalities[name] data.append(np.random.randint(0, local_cardinality, size=dataset_size)) values = np.stack(data).T to_save = pd.DataFrame(values, columns=chunk['features']) os.makedirs(os.path.dirname(path_to_save), exist_ok=True) to_save.to_csv(path_to_save, index=False, header=False) if __name__ == "__main__": main()
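For reference, a stripped-down sketch of the column generation for a single toy chunk, with hypothetical feature names and cardinalities rather than anything read from a real spec; it shows the header-less, index-less CSV layout the generator emits (note that `np.random.randint(0, 1, ...)` can only yield 0, so the label column comes out all zeros):

```
import numpy as np
import pandas as pd

dataset_size = 5
chunk_features = ['label', 'num_0', 'cat_0']          # hypothetical chunk feature order
cardinalities = {'cat_0': 10}

data = []
for name in chunk_features:
    if name == 'label':
        # same call as in the script; randint(0, 1) can only produce 0
        data.append(np.random.randint(0, 1, size=dataset_size))
    elif name.startswith('num'):
        data.append(np.random.rand(dataset_size))
    else:
        data.append(np.random.randint(0, cardinalities[name], size=dataset_size))

frame = pd.DataFrame(np.stack(data).T, columns=chunk_features)
frame.to_csv('toy_chunk.csv', index=False, header=False)   # header-less rows, like the script
```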
PyTorch/Translation/Transformer
Transformer
requirements
cffi numpy torch tqdm tensorboardX
PyTorch/SpeechSynthesis/HiFiGAN/common/text
text
__init__
from .cmudict import CMUDict cmudict = CMUDict()
TensorFlow2/Recommendation/WideAndDeep/data/outbrain/nvtabular/utils
utils
setup
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os from data.outbrain.features import HASH_BUCKET_SIZES def create_config(args): data_bucket_folder = args.data_path output_bucket_folder = args.metadata_path temporary_folder = os.path.join("/tmp", "preprocessed") train_path = os.path.join(temporary_folder, "train_gdf.parquet") valid_path = os.path.join(temporary_folder, "valid_gdf.parquet") stats_file = os.path.join(temporary_folder, "stats_wnd_workflow") output_train_folder = os.path.join(output_bucket_folder, "train/") output_valid_folder = os.path.join(output_bucket_folder, "valid/") hash_spec = HASH_BUCKET_SIZES config = { "stats_file": stats_file, "data_bucket_folder": data_bucket_folder, "output_bucket_folder": output_bucket_folder, "output_train_folder": output_train_folder, "temporary_folder": temporary_folder, "train_path": train_path, "valid_path": valid_path, "output_valid_folder": output_valid_folder, "hash_spec": hash_spec, "dask": args.use_dask, } return config
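`create_config` only assembles paths and the hash-bucket spec into a plain dictionary, so it can be exercised without any data present. A small illustrative sketch, assuming the snippet is run from the WideAndDeep repo root so the module imports resolve; the `Namespace` fields mirror the argparse options the preprocessing entry point passes in:

```
from argparse import Namespace

from data.outbrain.nvtabular.utils.setup import create_config  # run from the WideAndDeep root

args = Namespace(data_path='/outbrain/orig',
                 metadata_path='/outbrain/preprocessed',
                 use_dask=False)
config = create_config(args)

print(config['train_path'])           # /tmp/preprocessed/train_gdf.parquet
print(config['output_train_folder'])  # /outbrain/preprocessed/train/
```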
PyTorch/SpeechRecognition/Jasper/common
common
dataset
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json from pathlib import Path import numpy as np import torch from torch.utils.data import Dataset, DataLoader from torch.utils.data.distributed import DistributedSampler from .audio import (audio_from_file, AudioSegment, GainPerturbation, ShiftPerturbation, SpeedPerturbation) from .text import _clean_text, punctuation_map def normalize_string(s, labels, punct_map): """Normalizes string. Example: 'call me at 8:00 pm!' -> 'call me at eight zero pm' """ labels = set(labels) try: text = _clean_text(s, ["english_cleaners"], punct_map).strip() return ''.join([tok for tok in text if all(t in labels for t in tok)]) except: print(f"WARNING: Normalizing failed: {s}") return None class FilelistDataset(Dataset): def __init__(self, filelist_fpath): self.samples = [line.strip() for line in open(filelist_fpath, 'r')] def __len__(self): return len(self.samples) def __getitem__(self, index): audio, audio_len = audio_from_file(self.samples[index]) return (audio.squeeze(0), audio_len, torch.LongTensor([0]), torch.LongTensor([0])) class SingleAudioDataset(FilelistDataset): def __init__(self, audio_fpath): self.samples = [audio_fpath] class AudioDataset(Dataset): def __init__(self, data_dir, manifest_fpaths, labels, sample_rate=16000, min_duration=0.1, max_duration=float("inf"), pad_to_max_duration=False, max_utts=0, normalize_transcripts=True, sort_by_duration=False, trim_silence=False, speed_perturbation=None, gain_perturbation=None, shift_perturbation=None, ignore_offline_speed_perturbation=False): """Loads audio, transcript and durations listed in a .json file. Args: data_dir: absolute path to dataset folder manifest_filepath: relative path from dataset folder to manifest json as described above. Can be coma-separated paths. 
labels (str): all possible output symbols min_duration (int): skip audio shorter than threshold max_duration (int): skip audio longer than threshold pad_to_max_duration (bool): pad all sequences to max_duration max_utts (int): limit number of utterances normalize_transcripts (bool): normalize transcript text sort_by_duration (bool): sort sequences by increasing duration trim_silence (bool): trim leading and trailing silence from audio ignore_offline_speed_perturbation (bool): use precomputed speed perturbation Returns: tuple of Tensors """ self.data_dir = data_dir self.labels = labels self.labels_map = dict([(labels[i], i) for i in range(len(labels))]) self.punctuation_map = punctuation_map(labels) self.blank_index = len(labels) self.pad_to_max_duration = pad_to_max_duration self.sort_by_duration = sort_by_duration self.max_utts = max_utts self.normalize_transcripts = normalize_transcripts self.ignore_offline_speed_perturbation = ignore_offline_speed_perturbation self.min_duration = min_duration self.max_duration = max_duration self.trim_silence = trim_silence self.sample_rate = sample_rate perturbations = [] if speed_perturbation is not None: perturbations.append(SpeedPerturbation(**speed_perturbation)) if gain_perturbation is not None: perturbations.append(GainPerturbation(**gain_perturbation)) if shift_perturbation is not None: perturbations.append(ShiftPerturbation(**shift_perturbation)) self.perturbations = perturbations self.max_duration = max_duration self.samples = [] self.duration = 0.0 self.duration_filtered = 0.0 for fpath in manifest_fpaths: self._load_json_manifest(fpath) if sort_by_duration: self.samples = sorted(self.samples, key=lambda s: s['duration']) def __getitem__(self, index): s = self.samples[index] rn_indx = np.random.randint(len(s['audio_filepath'])) duration = s['audio_duration'][rn_indx] if 'audio_duration' in s else 0 offset = s.get('offset', 0) segment = AudioSegment( s['audio_filepath'][rn_indx], target_sr=self.sample_rate, offset=offset, duration=duration, trim=self.trim_silence) for p in self.perturbations: p.maybe_apply(segment, self.sample_rate) segment = torch.FloatTensor(segment.samples) return (segment, torch.tensor(segment.shape[0]).int(), torch.tensor(s["transcript"]), torch.tensor(len(s["transcript"])).int()) def __len__(self): return len(self.samples) def _load_json_manifest(self, fpath): for s in json.load(open(fpath, "r", encoding="utf-8")): if self.pad_to_max_duration and not self.ignore_offline_speed_perturbation: # require all perturbed samples to be < self.max_duration s_max_duration = max(f['duration'] for f in s['files']) else: # otherwise we allow perturbances to be > self.max_duration s_max_duration = s['original_duration'] s['duration'] = s.pop('original_duration') if not (self.min_duration <= s_max_duration <= self.max_duration): self.duration_filtered += s['duration'] continue # Prune and normalize according to transcript tr = (s.get('transcript', None) or self.load_transcript(s['text_filepath'])) if not isinstance(tr, str): print(f'WARNING: Skipped sample (transcript not a str): {tr}.') self.duration_filtered += s['duration'] continue if self.normalize_transcripts: tr = normalize_string(tr, self.labels, self.punctuation_map) s["transcript"] = self.to_vocab_inds(tr) files = s.pop('files') if self.ignore_offline_speed_perturbation: files = [f for f in files if f['speed'] == 1.0] s['audio_duration'] = [f['duration'] for f in files] s['audio_filepath'] = [str(Path(self.data_dir, f['fname'])) for f in files] self.samples.append(s) 
self.duration += s['duration'] if self.max_utts > 0 and len(self.samples) >= self.max_utts: print(f'Reached max_utts={self.max_utts}. Finished parsing {fpath}.') break def load_transcript(self, transcript_path): with open(transcript_path, 'r', encoding="utf-8") as transcript_file: transcript = transcript_file.read().replace('\n', '') return transcript def to_vocab_inds(self, transcript): chars = [self.labels_map.get(x, self.blank_index) for x in list(transcript)] transcript = list(filter(lambda x: x != self.blank_index, chars)) return transcript def collate_fn(batch): bs = len(batch) max_len = lambda l, idx: max(el[idx].size(0) for el in l) audio = torch.zeros(bs, max_len(batch, 0)) audio_lens = torch.zeros(bs, dtype=torch.int32) transcript = torch.zeros(bs, max_len(batch, 2)) transcript_lens = torch.zeros(bs, dtype=torch.int32) for i, sample in enumerate(batch): audio[i].narrow(0, 0, sample[0].size(0)).copy_(sample[0]) audio_lens[i] = sample[1] transcript[i].narrow(0, 0, sample[2].size(0)).copy_(sample[2]) transcript_lens[i] = sample[3] return audio, audio_lens, transcript, transcript_lens def get_data_loader(dataset, batch_size, multi_gpu=True, shuffle=True, drop_last=True, num_workers=4): kw = {'dataset': dataset, 'collate_fn': collate_fn, 'num_workers': num_workers, 'pin_memory': True} if multi_gpu: loader_shuffle = False sampler = DistributedSampler(dataset, shuffle=shuffle) else: loader_shuffle = shuffle sampler = None return DataLoader(batch_size=batch_size, drop_last=drop_last, sampler=sampler, shuffle=loader_shuffle, **kw)
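A short illustrative use of `collate_fn` on two toy samples, assuming the Jasper repo root is on `PYTHONPATH`; it shows how audio and transcripts are zero-padded to the longest element in the batch while the original lengths are kept:

```
import torch

from common.dataset import collate_fn   # assumes the Jasper repo root is on PYTHONPATH

# Two toy samples: (audio, audio_len, transcript, transcript_len)
batch = [
    (torch.randn(16000), torch.tensor(16000).int(), torch.tensor([5, 2, 9]), torch.tensor(3).int()),
    (torch.randn(8000),  torch.tensor(8000).int(),  torch.tensor([7, 1]),    torch.tensor(2).int()),
]
audio, audio_lens, transcript, transcript_lens = collate_fn(batch)

print(audio.shape)       # torch.Size([2, 16000]) -- zero-padded to the longest clip
print(transcript.shape)  # torch.Size([2, 3])     -- zero-padded to the longest transcript
print(audio_lens)        # tensor([16000,  8000], dtype=torch.int32)
```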
PyTorch/Forecasting/TFT/triton/deployment_toolkit/model_analyzer
model_analyzer
model_analyzer_config
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from .exceptions import ModelAnalyzerException class ModelAnalyzerConfig: """ A config class to set arguments to the Model Analyzer. An argument set to None will use the default. """ model_analyzer_args = [ "config-file", ] input_to_options = [ "config-file", ] def __init__(self): # Args will be a dict with the string representation as key self._args = {k: None for k in self.model_analyzer_args} self._options = { "-f": "config.yaml", } self._input_to_options = { "config-file": "-f", } def to_cli_string(self): """ Utility function to convert a config into a string of arguments to the server with CLI. Returns ------- str the command consisting of all set arguments to the model analyzer. e.g. '--model-repository=/models --verbose=True' """ # single dashed options, then verbose flags, then main args args = [f"{k} {v}" for k, v in self._options.items() if v] args += [f"--{k}={v}" for k, v in self._args.items() if v] return " ".join(args) @classmethod def allowed_keys(cls): """ Returns ------- list of str The keys that are allowed to be passed into model_analyzer """ return list(cls.model_analyzer_args) + list(cls.input_to_options) def __getitem__(self, key): """ Gets an arguments value in config Parameters ---------- key : str The name of the argument to the model analyzer Returns ------- The value that the argument is set to in this config """ if key in self._args: return self._args[key] elif key in self._input_to_options: return self._options[self._input_to_options[key]] else: raise ModelAnalyzerException(f"'{key}' Key not found in config") def __setitem__(self, key, value): """ Sets an arguments value in config after checking if defined/supported. Parameters ---------- key : str The name of the argument to the model analyzer value : (any) The value to which the argument is being set Raises ------ TritonModelAnalyzerException If key is unsupported or undefined in the config class """ if key in self._args: self._args[key] = value elif key in self._input_to_options: self._options[self._input_to_options[key]] = value else: raise ModelAnalyzerException(f"The argument '{key}' to the Model Analyzer is not supported.")
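A brief illustrative usage of the wrapper above (the import path is assumed relative to the deployment toolkit package):

```
from model_analyzer.model_analyzer_config import ModelAnalyzerConfig

config = ModelAnalyzerConfig()
config['config-file'] = 'override.yaml'

print(config['config-file'])    # override.yaml
print(config.to_cli_string())   # e.g. '-f config.yaml --config-file=override.yaml'
```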
TensorFlow2/LanguageModeling/BERT/official/nlp
nlp
bert_models
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """BERT models that are compatible with TF 2.0.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf import tensorflow_hub as hub from official.modeling import tf_utils from official.nlp import bert_modeling from official.nlp.modeling import losses from official.nlp.modeling import networks from official.nlp.modeling.networks import bert_classifier from official.nlp.modeling.networks import bert_pretrainer from official.nlp.modeling.networks import bert_span_labeler def gather_indexes(sequence_tensor, positions): """Gathers the vectors at the specific positions. Args: sequence_tensor: Sequence output of `BertModel` layer of shape (`batch_size`, `seq_length`, num_hidden) where num_hidden is number of hidden units of `BertModel` layer. positions: Positions ids of tokens in sequence to mask for pretraining of with dimension (batch_size, max_predictions_per_seq) where `max_predictions_per_seq` is maximum number of tokens to mask out and predict per each sequence. Returns: Masked out sequence tensor of shape (batch_size * max_predictions_per_seq, num_hidden). 
""" sequence_shape = tf_utils.get_shape_list( sequence_tensor, name='sequence_output_tensor') batch_size = sequence_shape[0] seq_length = sequence_shape[1] width = sequence_shape[2] flat_offsets = tf.keras.backend.reshape( tf.range(0, batch_size, dtype=tf.int32) * seq_length, [-1, 1]) flat_positions = tf.keras.backend.reshape(positions + flat_offsets, [-1]) flat_sequence_tensor = tf.keras.backend.reshape( sequence_tensor, [batch_size * seq_length, width]) output_tensor = tf.gather(flat_sequence_tensor, flat_positions) return output_tensor class BertPretrainLossAndMetricLayer(tf.keras.layers.Layer): """Returns layer that computes custom loss and metrics for pretraining.""" def __init__(self, vocab_size, **kwargs): super(BertPretrainLossAndMetricLayer, self).__init__(**kwargs) self._vocab_size = vocab_size self.config = { 'vocab_size': vocab_size, } def __call__(self, lm_output, sentence_output=None, lm_label_ids=None, lm_label_weights=None, sentence_labels=None, **kwargs): inputs = tf_utils.pack_inputs([ lm_output, sentence_output, lm_label_ids, lm_label_weights, sentence_labels ]) return super(BertPretrainLossAndMetricLayer, self).__call__(inputs, **kwargs) def _add_metrics(self, lm_output, lm_labels, lm_label_weights, lm_example_loss, sentence_output, sentence_labels, next_sentence_loss): """Adds metrics.""" masked_lm_accuracy = tf.keras.metrics.sparse_categorical_accuracy( lm_labels, lm_output) numerator = tf.reduce_sum(masked_lm_accuracy * lm_label_weights) denominator = tf.reduce_sum(lm_label_weights) + 1e-5 masked_lm_accuracy = numerator / denominator self.add_metric( masked_lm_accuracy, name='masked_lm_accuracy', aggregation='mean') self.add_metric(lm_example_loss, name='lm_example_loss', aggregation='mean') next_sentence_accuracy = tf.keras.metrics.sparse_categorical_accuracy( sentence_labels, sentence_output) self.add_metric( next_sentence_accuracy, name='next_sentence_accuracy', aggregation='mean') self.add_metric( next_sentence_loss, name='next_sentence_loss', aggregation='mean') def call(self, inputs): """Implements call() for the layer.""" unpacked_inputs = tf_utils.unpack_inputs(inputs) lm_output = unpacked_inputs[0] sentence_output = unpacked_inputs[1] lm_label_ids = unpacked_inputs[2] lm_label_weights = tf.keras.backend.cast(unpacked_inputs[3], tf.float32) sentence_labels = unpacked_inputs[4] mask_label_loss = losses.weighted_sparse_categorical_crossentropy_loss( labels=lm_label_ids, predictions=lm_output, weights=lm_label_weights) sentence_loss = losses.weighted_sparse_categorical_crossentropy_loss( labels=sentence_labels, predictions=sentence_output) loss = mask_label_loss + sentence_loss batch_shape = tf.slice(tf.keras.backend.shape(sentence_labels), [0], [1]) # TODO(hongkuny): Avoids the hack and switches add_loss. final_loss = tf.fill(batch_shape, loss) self._add_metrics(lm_output, lm_label_ids, lm_label_weights, mask_label_loss, sentence_output, sentence_labels, sentence_loss) return final_loss def get_transformer_encoder(bert_config, sequence_length, float_dtype=tf.float32): """Gets a 'TransformerEncoder' object. Args: bert_config: A 'modeling.BertConfig' or 'modeling.AlbertConfig' object. sequence_length: Maximum sequence length of the training data. float_dtype: tf.dtype, tf.float32 or tf.float16. Returns: A networks.TransformerEncoder object. 
""" kwargs = dict( vocab_size=bert_config.vocab_size, hidden_size=bert_config.hidden_size, num_layers=bert_config.num_hidden_layers, num_attention_heads=bert_config.num_attention_heads, intermediate_size=bert_config.intermediate_size, activation=tf_utils.get_activation(bert_config.hidden_act), dropout_rate=bert_config.hidden_dropout_prob, attention_dropout_rate=bert_config.attention_probs_dropout_prob, sequence_length=sequence_length, max_sequence_length=bert_config.max_position_embeddings, type_vocab_size=bert_config.type_vocab_size, initializer=tf.keras.initializers.TruncatedNormal( stddev=bert_config.initializer_range), float_dtype=float_dtype.name) if isinstance(bert_config, bert_modeling.AlbertConfig): kwargs['embedding_width'] = bert_config.embedding_size return networks.AlbertTransformerEncoder(**kwargs) else: assert isinstance(bert_config, bert_modeling.BertConfig) return networks.TransformerEncoder(**kwargs) def pretrain_model(bert_config, seq_length, max_predictions_per_seq, float_type, initializer=None): """Returns model to be used for pre-training. Args: bert_config: Configuration that defines the core BERT model. seq_length: Maximum sequence length of the training data. max_predictions_per_seq: Maximum number of tokens in sequence to mask out and use for pretraining. initializer: Initializer for weights in BertPretrainer. Returns: Pretraining model as well as core BERT submodel from which to save weights after pretraining. """ input_word_ids = tf.keras.layers.Input( shape=(seq_length,), name='input_word_ids', dtype=tf.int32) input_mask = tf.keras.layers.Input( shape=(seq_length,), name='input_mask', dtype=tf.int32) input_type_ids = tf.keras.layers.Input( shape=(seq_length,), name='input_type_ids', dtype=tf.int32) masked_lm_positions = tf.keras.layers.Input( shape=(max_predictions_per_seq,), name='masked_lm_positions', dtype=tf.int32) masked_lm_ids = tf.keras.layers.Input( shape=(max_predictions_per_seq,), name='masked_lm_ids', dtype=tf.int32) masked_lm_weights = tf.keras.layers.Input( shape=(max_predictions_per_seq,), name='masked_lm_weights', dtype=tf.int32) next_sentence_labels = tf.keras.layers.Input( shape=(1,), name='next_sentence_labels', dtype=tf.int32) transformer_encoder = get_transformer_encoder(bert_config, seq_length, float_type) if initializer is None: initializer = tf.keras.initializers.TruncatedNormal( stddev=bert_config.initializer_range) pretrainer_model = bert_pretrainer.BertPretrainer( network=transformer_encoder, num_classes=2, # The next sentence prediction label has two classes. 
num_token_predictions=max_predictions_per_seq, initializer=initializer, float_type=float_type, output='predictions') lm_output, sentence_output = pretrainer_model( [input_word_ids, input_mask, input_type_ids, masked_lm_positions]) pretrain_loss_layer = BertPretrainLossAndMetricLayer( vocab_size=bert_config.vocab_size) output_loss = pretrain_loss_layer(lm_output, sentence_output, masked_lm_ids, masked_lm_weights, next_sentence_labels) keras_model = tf.keras.Model( inputs={ 'input_word_ids': input_word_ids, 'input_mask': input_mask, 'input_type_ids': input_type_ids, 'masked_lm_positions': masked_lm_positions, 'masked_lm_ids': masked_lm_ids, 'masked_lm_weights': masked_lm_weights, 'next_sentence_labels': next_sentence_labels, }, outputs=output_loss) return keras_model, transformer_encoder class BertSquadLogitsLayer(tf.keras.layers.Layer): """Returns a layer that computes custom logits for BERT squad model.""" def __init__(self, initializer=None, float_type=tf.float32, **kwargs): super(BertSquadLogitsLayer, self).__init__(**kwargs) self.initializer = initializer self.float_type = float_type def build(self, unused_input_shapes): """Implements build() for the layer.""" self.final_dense = tf.keras.layers.Dense( units=2, kernel_initializer=self.initializer, name='final_dense') super(BertSquadLogitsLayer, self).build(unused_input_shapes) def call(self, inputs): """Implements call() for the layer.""" sequence_output = inputs input_shape = tf_utils.get_shape_list( sequence_output, name='sequence_output_tensor') sequence_length = input_shape[1] num_hidden_units = input_shape[2] final_hidden_input = tf.keras.backend.reshape(sequence_output, [-1, num_hidden_units]) logits = self.final_dense(final_hidden_input) logits = tf.keras.backend.reshape(logits, [-1, sequence_length, 2]) logits = tf.transpose(logits, [2, 0, 1]) unstacked_logits = tf.unstack(logits, axis=0) if self.float_type == tf.float16: unstacked_logits = tf.cast(unstacked_logits, tf.float32) return unstacked_logits[0], unstacked_logits[1] def squad_model(bert_config, max_seq_length, float_type, initializer=None, hub_module_url=None): """Returns BERT Squad model along with core BERT model to import weights. Args: bert_config: BertConfig, the config defines the core Bert model. max_seq_length: integer, the maximum input sequence length. float_type: tf.dtype, tf.float32 or tf.bfloat16. initializer: Initializer for the final dense layer in the span labeler. Defaulted to TruncatedNormal initializer. hub_module_url: TF-Hub path/url to Bert module. Returns: A tuple of (1) keras model that outputs start logits and end logits and (2) the core BERT transformer encoder. """ if initializer is None: initializer = tf.keras.initializers.TruncatedNormal( stddev=bert_config.initializer_range) if not hub_module_url: bert_encoder = get_transformer_encoder(bert_config, max_seq_length, float_type) return bert_span_labeler.BertSpanLabeler( network=bert_encoder, initializer=initializer), bert_encoder input_word_ids = tf.keras.layers.Input( shape=(max_seq_length,), dtype=tf.int32, name='input_word_ids') input_mask = tf.keras.layers.Input( shape=(max_seq_length,), dtype=tf.int32, name='input_mask') input_type_ids = tf.keras.layers.Input( shape=(max_seq_length,), dtype=tf.int32, name='input_type_ids') core_model = hub.KerasLayer(hub_module_url, trainable=True) _, sequence_output = core_model( [input_word_ids, input_mask, input_type_ids]) # Sets the shape manually due to a bug in TF shape inference. # TODO(hongkuny): remove this once shape inference is correct. 
sequence_output.set_shape((None, max_seq_length, bert_config.hidden_size)) squad_logits_layer = BertSquadLogitsLayer( initializer=initializer, float_type=float_type, name='squad_logits') start_logits, end_logits = squad_logits_layer(sequence_output) squad = tf.keras.Model( inputs={ 'input_word_ids': input_word_ids, 'input_mask': input_mask, 'input_type_ids': input_type_ids, }, outputs=[start_logits, end_logits], name='squad_model') return squad, core_model def classifier_model(bert_config, float_type, num_labels, max_seq_length, final_layer_initializer=None, hub_module_url=None): """BERT classifier model in functional API style. Construct a Keras model for predicting `num_labels` outputs from an input with maximum sequence length `max_seq_length`. Args: bert_config: BertConfig or AlbertConfig, the config defines the core BERT or ALBERT model. float_type: dtype, tf.float32 or tf.bfloat16. num_labels: integer, the number of classes. max_seq_length: integer, the maximum input sequence length. final_layer_initializer: Initializer for final dense layer. Defaulted TruncatedNormal initializer. hub_module_url: TF-Hub path/url to Bert module. Returns: Combined prediction model (words, mask, type) -> (one-hot labels) BERT sub-model (words, mask, type) -> (bert_outputs) """ if final_layer_initializer is not None: initializer = final_layer_initializer else: initializer = tf.keras.initializers.TruncatedNormal( stddev=bert_config.initializer_range) if not hub_module_url: bert_encoder = get_transformer_encoder(bert_config, max_seq_length) return bert_classifier.BertClassifier( bert_encoder, num_classes=num_labels, dropout_rate=bert_config.hidden_dropout_prob, initializer=initializer), bert_encoder input_word_ids = tf.keras.layers.Input( shape=(max_seq_length,), dtype=tf.int32, name='input_word_ids') input_mask = tf.keras.layers.Input( shape=(max_seq_length,), dtype=tf.int32, name='input_mask') input_type_ids = tf.keras.layers.Input( shape=(max_seq_length,), dtype=tf.int32, name='input_type_ids') bert_model = hub.KerasLayer(hub_module_url, trainable=True) pooled_output, _ = bert_model([input_word_ids, input_mask, input_type_ids]) output = tf.keras.layers.Dropout(rate=bert_config.hidden_dropout_prob)( pooled_output) output = tf.keras.layers.Dense( num_labels, kernel_initializer=initializer, name='output', dtype=float_type)( output) return tf.keras.Model( inputs={ 'input_word_ids': input_word_ids, 'input_mask': input_mask, 'input_type_ids': input_type_ids }, outputs=output), bert_model
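The index arithmetic in `gather_indexes` is the subtlest part of this file; the following NumPy rendering of the same offset-and-gather computation on toy shapes is purely illustrative:

```
import numpy as np

# Toy shapes: batch_size=2, seq_length=4, hidden=3
sequence = np.arange(2 * 4 * 3).reshape(2, 4, 3)
positions = np.array([[1, 3],          # masked positions in example 0
                      [0, 2]])         # masked positions in example 1

# Same arithmetic as gather_indexes: add a per-example row offset, then gather
# from the flattened (batch_size * seq_length, hidden) tensor.
flat_offsets = (np.arange(2) * 4).reshape(-1, 1)
flat_positions = (positions + flat_offsets).reshape(-1)        # [1, 3, 4, 6]
flat_sequence = sequence.reshape(2 * 4, 3)

gathered = flat_sequence[flat_positions]
print(gathered.shape)    # (4, 3): one hidden vector per masked position
```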
TensorFlow2/Recommendation/WideAndDeep/tests/feature_specs
feature_specs
fspec_csv
channel_spec: label: - clicked map: [] multihot_categorical: - topic_id_list - entity_id_list - category_id_list numerical: - document_id_document_id_promo_sim_categories - document_id_document_id_promo_sim_topics - document_id_document_id_promo_sim_entities - document_id_promo_ctr - publisher_id_promo_ctr - source_id_promo_ctr - document_id_promo_count - publish_time_days_since_published - ad_id_ctr - advertiser_id_ctr - campaign_id_ctr - ad_id_count - publish_time_promo_days_since_published onehot_categorical: - ad_id - document_id - platform - document_id_promo - campaign_id - advertiser_id - source_id - geo_location - geo_location_country - geo_location_state - publisher_id - source_id_promo - publisher_id_promo feature_spec: ad_id: cardinality: 250000 ad_id_count: {} ad_id_ctr: {} advertiser_id: cardinality: 2500 advertiser_id_ctr: {} campaign_id: cardinality: 5000 campaign_id_ctr: {} category_id_list: cardinality: 100 max_hotness: 3 clicked: {} document_id: cardinality: 300000 document_id_document_id_promo_sim_categories: {} document_id_document_id_promo_sim_entities: {} document_id_document_id_promo_sim_topics: {} document_id_promo: cardinality: 100000 document_id_promo_count: {} document_id_promo_ctr: {} entity_id_list: cardinality: 10000 max_hotness: 3 geo_location: cardinality: 2500 geo_location_country: cardinality: 300 geo_location_state: cardinality: 2000 platform: cardinality: 4 publish_time_days_since_published: {} publish_time_promo_days_since_published: {} publisher_id: cardinality: 1000 publisher_id_promo: cardinality: 1000 publisher_id_promo_ctr: {} source_id: cardinality: 4000 source_id_promo: cardinality: 4000 source_id_promo_ctr: {} topic_id_list: cardinality: 350 max_hotness: 3 metadata: {} source_spec: test: - features: - clicked - ad_id - document_id - platform - document_id_promo - campaign_id - advertiser_id - source_id - geo_location - geo_location_country - geo_location_state - publisher_id - source_id_promo - publisher_id_promo - topic_id_list - entity_id_list - category_id_list - document_id_document_id_promo_sim_categories - document_id_document_id_promo_sim_topics - document_id_document_id_promo_sim_entities - document_id_promo_ctr - publisher_id_promo_ctr - source_id_promo_ctr - document_id_promo_count - publish_time_days_since_published - ad_id_ctr - advertiser_id_ctr - campaign_id_ctr - ad_id_count - publish_time_promo_days_since_published files: - valid.csv type: csv train: - features: - clicked - ad_id - document_id - platform - document_id_promo - campaign_id - advertiser_id - source_id - geo_location - geo_location_country - geo_location_state - publisher_id - source_id_promo - publisher_id_promo - topic_id_list - entity_id_list - category_id_list - document_id_document_id_promo_sim_categories - document_id_document_id_promo_sim_topics - document_id_document_id_promo_sim_entities - document_id_promo_ctr - publisher_id_promo_ctr - source_id_promo_ctr - document_id_promo_count - publish_time_days_since_published - ad_id_ctr - advertiser_id_ctr - campaign_id_ctr - ad_id_count - publish_time_promo_days_since_published files: - train.csv type: csv
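This fixture is plain YAML, so it can be inspected directly; a minimal sketch (the file name is an assumption, and only keys visible above are accessed):

```
import yaml

with open('fspec_csv.yaml') as f:    # assumed file name for the spec above
    spec = yaml.safe_load(f)

onehot = spec['channel_spec']['onehot_categorical']
cardinalities = {name: spec['feature_spec'][name]['cardinality'] for name in onehot}

print(len(onehot))              # 13
print(cardinalities['ad_id'])   # 250000
```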
TensorFlow2/Segmentation/Contrib/UNet3P/configs
configs
config
# project root working directory, automatically read by hydra (.../UNet3P) WORK_DIR: ${hydra:runtime.cwd} DATA_PREPARATION: # unprocessed LiTS scan data paths, for custom data training skip this section details SCANS_TRAIN_DATA_PATH: "/data/Training Batch 2/" SCANS_VAL_DATA_PATH: "/data/Training Batch 1/" # Resize scans to model input size RESIZED_HEIGHT: ${INPUT.HEIGHT} RESIZED_WIDTH: ${INPUT.WIDTH} # Clip scans value in given range SCAN_MIN_VALUE: -200 SCAN_MAX_VALUE: 250 DATASET: # paths should be relative from project root path TRAIN: IMAGES_PATH: "/data/train/images" MASK_PATH: "/data/train/mask" VAL: IMAGES_PATH: "/data/val/images" MASK_PATH: "/data/val/mask" MODEL: # available variants are unet3plus, unet3plus_deepsup, unet3plus_deepsup_cgm TYPE: "unet3plus" WEIGHTS_FILE_NAME: model_${MODEL.TYPE} BACKBONE: # available variants are unet3plus, vgg16, vgg19 TYPE: "vgg19" DATA_GENERATOR_TYPE: "DALI_GENERATOR" # options are TF_GENERATOR or DALI_GENERATOR SEED: 5 # for result's reproducibility VERBOSE: 1 # For logs printing details, available options are 0, 1, 2 DATALOADER_WORKERS: 3 # number of workers used for data loading SHOW_CENTER_CHANNEL_IMAGE: True # only true for UNet3+ for custom dataset it should be False # Model input shape INPUT: HEIGHT: 320 WIDTH: 320 CHANNELS: 3 # Model output classes OUTPUT: CLASSES: 2 HYPER_PARAMETERS: EPOCHS: 100 BATCH_SIZE: 16 # specify per gpu batch size LEARNING_RATE: 5e-5 # 0.1, 1e-3, 3e-4, 5e-5 CALLBACKS: # paths should be relative from project root path TENSORBOARD: PATH: "/checkpoint/tb_logs" EARLY_STOPPING: PATIENCE: 100 MODEL_CHECKPOINT: PATH: "/checkpoint" SAVE_WEIGHTS_ONLY: True SAVE_BEST_ONLY: True CSV_LOGGER: PATH: "/checkpoint" APPEND_LOGS: False PREPROCESS_DATA: RESIZE: VALUE: False # if True, resize to input height and width HEIGHT: ${INPUT.HEIGHT} WIDTH: ${INPUT.WIDTH} IMAGE_PREPROCESSING_TYPE: "normalize" NORMALIZE_MASK: VALUE: False # if True, divide mask by given value NORMALIZE_VALUE: 255 SHUFFLE: TRAIN: VALUE: True VAL: VALUE: False USE_MULTI_GPUS: VALUE: True # If True use multiple gpus for training # GPU_IDS: Could be integer or list of integers. # In case Integer: if integer value is -1 then it uses all available gpus. # otherwise if positive number, then use given number of gpus. # In case list of Integers: each integer will be considered as gpu id # e.g. [4, 5, 7] means use gpu 5,6 and 8 for training/evaluation GPU_IDS: -1 OPTIMIZATION: AMP: True # Automatic Mixed Precision(AMP) XLA: True # Accelerated Linear Algebra(XLA) # to stop hydra from storing logs files # logs will be stored in outputs directory defaults: - _self_ - override hydra/hydra_logging: disabled - override hydra/job_logging: disabled hydra: output_subdir: null
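The config is consumed through Hydra, but individual values and the `${...}` interpolations can be checked with OmegaConf alone. A small sketch, with the file path assumed; the `${hydra:runtime.cwd}` entry resolves only inside a Hydra run, so it is not touched here:

```
from omegaconf import OmegaConf

cfg = OmegaConf.load('configs/config.yaml')     # path assumed
print(cfg.INPUT.HEIGHT)                         # 320
print(cfg.DATA_PREPARATION.RESIZED_HEIGHT)      # 320, via the ${INPUT.HEIGHT} interpolation
print(cfg.MODEL.WEIGHTS_FILE_NAME)              # model_unet3plus, via ${MODEL.TYPE}
```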
PyTorch/Forecasting/TFT/scripts
scripts
run_traffic
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. : ${SEED:=1} : ${LR:=1e-3} : ${NGPU:=8} : ${BATCH_SIZE:=1024} : ${EPOCHS:=20} python -m torch.distributed.run --nproc_per_node=${NGPU} train.py \ --dataset traffic \ --data_path /data/processed/traffic_bin \ --batch_size=${BATCH_SIZE} \ --sample 450000 50000 \ --lr ${LR} \ --epochs ${EPOCHS} \ --seed ${SEED} \ --use_amp \ --results /results/TFT_traffic_bs${NGPU}x${BATCH_SIZE}_lr${LR}/seed_${SEED}
PyTorch/SpeechRecognition/QuartzNet/common/text/unidecoder
unidecoder
__init__
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import io import warnings from .homoglyphs import homoglyphs from .replacements import replacements _replacements = {uni: asc for uni, asc in replacements} _homoglyphs = {g: asc for asc, glyphs in homoglyphs.items() for g in glyphs} def unidecoder(s, homoglyphs=False): """Transliterate unicode Args: s (str): unicode string homoglyphs (bool): prioritize translating to homoglyphs """ warned = False # Once per utterance ret = '' for u in s: if ord(u) < 127: a = u elif homoglyphs: a = _homoglyphs.get(u, _replacements.get(u, None)) else: a = _replacements.get(u, _homoglyphs.get(u, None)) if a is None: if not warned: warnings.warn(f'Unexpected character {u}: ' 'please revise your text cleaning rules.', stacklevel=10**6) warned = True else: ret += a return ret
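A brief illustrative call, assuming the QuartzNet repo root is on `PYTHONPATH`; the exact output depends on the bundled replacement and homoglyph tables:

```
from common.text.unidecoder import unidecoder   # run from the QuartzNet repo root

print(unidecoder('naïve café – déjà vu'))
# expected to print roughly 'naive cafe - deja vu'; characters absent from both
# tables are dropped and trigger a single warning for the utterance
```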
PyTorch/LanguageModeling/Transformer-XL/pytorch
pytorch
run_lm1b_base
#!/bin/bash export OMP_NUM_THREADS=1 if [[ $1 == 'train' ]]; then echo 'Run training...' python train.py \ --cuda \ --data ../data/one-billion-words/ \ --dataset lm1b \ --adaptive \ --n_layer 18 \ --d_model 1024 \ --div_val 4 \ --n_head 8 \ --d_head 128 \ --d_inner 4096 \ --dropout 0.0 \ --dropatt 0.0 \ --optim adam \ --warmup_step 20000 \ --max_step 500000 \ --lr 0.00025 \ --tgt_len 32 \ --mem_len 32 \ --eval_tgt_len 32 \ --batch_size 224 \ --multi_gpu \ --gpu0_bsz 32 \ ${@:2} elif [[ $1 == 'eval' ]]; then echo 'Run evaluation...' python eval.py \ --cuda \ --data ../data/one-billion-words/ \ --dataset lm1b \ --batch_size 64 \ --tgt_len 32 \ --mem_len 128 \ --split test \ --same_length \ ${@:2} else echo 'unknown argument 1' fi
TensorFlow/Detection/SSD/models/research/slim/nets
nets
inception_resnet_v2
# Copyright 2016 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """Contains the definition of the Inception Resnet V2 architecture. As described in http://arxiv.org/abs/1602.07261. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf slim = tf.contrib.slim def block35(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None): """Builds the 35x35 resnet block.""" with tf.variable_scope(scope, 'Block35', [net], reuse=reuse): with tf.variable_scope('Branch_0'): tower_conv = slim.conv2d(net, 32, 1, scope='Conv2d_1x1') with tf.variable_scope('Branch_1'): tower_conv1_0 = slim.conv2d(net, 32, 1, scope='Conv2d_0a_1x1') tower_conv1_1 = slim.conv2d(tower_conv1_0, 32, 3, scope='Conv2d_0b_3x3') with tf.variable_scope('Branch_2'): tower_conv2_0 = slim.conv2d(net, 32, 1, scope='Conv2d_0a_1x1') tower_conv2_1 = slim.conv2d(tower_conv2_0, 48, 3, scope='Conv2d_0b_3x3') tower_conv2_2 = slim.conv2d(tower_conv2_1, 64, 3, scope='Conv2d_0c_3x3') mixed = tf.concat(axis=3, values=[tower_conv, tower_conv1_1, tower_conv2_2]) up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None, activation_fn=None, scope='Conv2d_1x1') scaled_up = up * scale if activation_fn == tf.nn.relu6: # Use clip_by_value to simulate bandpass activation. scaled_up = tf.clip_by_value(scaled_up, -6.0, 6.0) net += scaled_up if activation_fn: net = activation_fn(net) return net def block17(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None): """Builds the 17x17 resnet block.""" with tf.variable_scope(scope, 'Block17', [net], reuse=reuse): with tf.variable_scope('Branch_0'): tower_conv = slim.conv2d(net, 192, 1, scope='Conv2d_1x1') with tf.variable_scope('Branch_1'): tower_conv1_0 = slim.conv2d(net, 128, 1, scope='Conv2d_0a_1x1') tower_conv1_1 = slim.conv2d(tower_conv1_0, 160, [1, 7], scope='Conv2d_0b_1x7') tower_conv1_2 = slim.conv2d(tower_conv1_1, 192, [7, 1], scope='Conv2d_0c_7x1') mixed = tf.concat(axis=3, values=[tower_conv, tower_conv1_2]) up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None, activation_fn=None, scope='Conv2d_1x1') scaled_up = up * scale if activation_fn == tf.nn.relu6: # Use clip_by_value to simulate bandpass activation. 
scaled_up = tf.clip_by_value(scaled_up, -6.0, 6.0) net += scaled_up if activation_fn: net = activation_fn(net) return net def block8(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None): """Builds the 8x8 resnet block.""" with tf.variable_scope(scope, 'Block8', [net], reuse=reuse): with tf.variable_scope('Branch_0'): tower_conv = slim.conv2d(net, 192, 1, scope='Conv2d_1x1') with tf.variable_scope('Branch_1'): tower_conv1_0 = slim.conv2d(net, 192, 1, scope='Conv2d_0a_1x1') tower_conv1_1 = slim.conv2d(tower_conv1_0, 224, [1, 3], scope='Conv2d_0b_1x3') tower_conv1_2 = slim.conv2d(tower_conv1_1, 256, [3, 1], scope='Conv2d_0c_3x1') mixed = tf.concat(axis=3, values=[tower_conv, tower_conv1_2]) up = slim.conv2d(mixed, net.get_shape()[3], 1, normalizer_fn=None, activation_fn=None, scope='Conv2d_1x1') scaled_up = up * scale if activation_fn == tf.nn.relu6: # Use clip_by_value to simulate bandpass activation. scaled_up = tf.clip_by_value(scaled_up, -6.0, 6.0) net += scaled_up if activation_fn: net = activation_fn(net) return net def inception_resnet_v2_base(inputs, final_endpoint='Conv2d_7b_1x1', output_stride=16, align_feature_maps=False, scope=None, activation_fn=tf.nn.relu): """Inception model from http://arxiv.org/abs/1602.07261. Constructs an Inception Resnet v2 network from inputs to the given final endpoint. This method can construct the network up to the final inception block Conv2d_7b_1x1. Args: inputs: a tensor of size [batch_size, height, width, channels]. final_endpoint: specifies the endpoint to construct the network up to. It can be one of ['Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3', 'MaxPool_3a_3x3', 'Conv2d_3b_1x1', 'Conv2d_4a_3x3', 'MaxPool_5a_3x3', 'Mixed_5b', 'Mixed_6a', 'PreAuxLogits', 'Mixed_7a', 'Conv2d_7b_1x1'] output_stride: A scalar that specifies the requested ratio of input to output spatial resolution. Only supports 8 and 16. align_feature_maps: When true, changes all the VALID paddings in the network to SAME padding so that the feature maps are aligned. scope: Optional variable_scope. activation_fn: Activation function for block scopes. Returns: tensor_out: output tensor corresponding to the final_endpoint. end_points: a set of activations for external use, for example summaries or losses. Raises: ValueError: if final_endpoint is not set to one of the predefined values, or if the output_stride is not 8 or 16, or if the output_stride is 8 and we request an end point after 'PreAuxLogits'. 
""" if output_stride != 8 and output_stride != 16: raise ValueError('output_stride must be 8 or 16.') padding = 'SAME' if align_feature_maps else 'VALID' end_points = {} def add_and_check_final(name, net): end_points[name] = net return name == final_endpoint with tf.variable_scope(scope, 'InceptionResnetV2', [inputs]): with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d], stride=1, padding='SAME'): # 149 x 149 x 32 net = slim.conv2d(inputs, 32, 3, stride=2, padding=padding, scope='Conv2d_1a_3x3') if add_and_check_final('Conv2d_1a_3x3', net): return net, end_points # 147 x 147 x 32 net = slim.conv2d(net, 32, 3, padding=padding, scope='Conv2d_2a_3x3') if add_and_check_final('Conv2d_2a_3x3', net): return net, end_points # 147 x 147 x 64 net = slim.conv2d(net, 64, 3, scope='Conv2d_2b_3x3') if add_and_check_final('Conv2d_2b_3x3', net): return net, end_points # 73 x 73 x 64 net = slim.max_pool2d(net, 3, stride=2, padding=padding, scope='MaxPool_3a_3x3') if add_and_check_final('MaxPool_3a_3x3', net): return net, end_points # 73 x 73 x 80 net = slim.conv2d(net, 80, 1, padding=padding, scope='Conv2d_3b_1x1') if add_and_check_final('Conv2d_3b_1x1', net): return net, end_points # 71 x 71 x 192 net = slim.conv2d(net, 192, 3, padding=padding, scope='Conv2d_4a_3x3') if add_and_check_final('Conv2d_4a_3x3', net): return net, end_points # 35 x 35 x 192 net = slim.max_pool2d(net, 3, stride=2, padding=padding, scope='MaxPool_5a_3x3') if add_and_check_final('MaxPool_5a_3x3', net): return net, end_points # 35 x 35 x 320 with tf.variable_scope('Mixed_5b'): with tf.variable_scope('Branch_0'): tower_conv = slim.conv2d(net, 96, 1, scope='Conv2d_1x1') with tf.variable_scope('Branch_1'): tower_conv1_0 = slim.conv2d(net, 48, 1, scope='Conv2d_0a_1x1') tower_conv1_1 = slim.conv2d(tower_conv1_0, 64, 5, scope='Conv2d_0b_5x5') with tf.variable_scope('Branch_2'): tower_conv2_0 = slim.conv2d(net, 64, 1, scope='Conv2d_0a_1x1') tower_conv2_1 = slim.conv2d(tower_conv2_0, 96, 3, scope='Conv2d_0b_3x3') tower_conv2_2 = slim.conv2d(tower_conv2_1, 96, 3, scope='Conv2d_0c_3x3') with tf.variable_scope('Branch_3'): tower_pool = slim.avg_pool2d(net, 3, stride=1, padding='SAME', scope='AvgPool_0a_3x3') tower_pool_1 = slim.conv2d(tower_pool, 64, 1, scope='Conv2d_0b_1x1') net = tf.concat( [tower_conv, tower_conv1_1, tower_conv2_2, tower_pool_1], 3) if add_and_check_final('Mixed_5b', net): return net, end_points # TODO(alemi): Register intermediate endpoints net = slim.repeat(net, 10, block35, scale=0.17, activation_fn=activation_fn) # 17 x 17 x 1088 if output_stride == 8, # 33 x 33 x 1088 if output_stride == 16 use_atrous = output_stride == 8 with tf.variable_scope('Mixed_6a'): with tf.variable_scope('Branch_0'): tower_conv = slim.conv2d(net, 384, 3, stride=1 if use_atrous else 2, padding=padding, scope='Conv2d_1a_3x3') with tf.variable_scope('Branch_1'): tower_conv1_0 = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1') tower_conv1_1 = slim.conv2d(tower_conv1_0, 256, 3, scope='Conv2d_0b_3x3') tower_conv1_2 = slim.conv2d(tower_conv1_1, 384, 3, stride=1 if use_atrous else 2, padding=padding, scope='Conv2d_1a_3x3') with tf.variable_scope('Branch_2'): tower_pool = slim.max_pool2d(net, 3, stride=1 if use_atrous else 2, padding=padding, scope='MaxPool_1a_3x3') net = tf.concat([tower_conv, tower_conv1_2, tower_pool], 3) if add_and_check_final('Mixed_6a', net): return net, end_points # TODO(alemi): register intermediate endpoints with slim.arg_scope([slim.conv2d], rate=2 if use_atrous else 1): net = slim.repeat(net, 20, block17, 
scale=0.10, activation_fn=activation_fn) if add_and_check_final('PreAuxLogits', net): return net, end_points if output_stride == 8: # TODO(gpapan): Properly support output_stride for the rest of the net. raise ValueError('output_stride==8 is only supported up to the ' 'PreAuxlogits end_point for now.') # 8 x 8 x 2080 with tf.variable_scope('Mixed_7a'): with tf.variable_scope('Branch_0'): tower_conv = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1') tower_conv_1 = slim.conv2d(tower_conv, 384, 3, stride=2, padding=padding, scope='Conv2d_1a_3x3') with tf.variable_scope('Branch_1'): tower_conv1 = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1') tower_conv1_1 = slim.conv2d(tower_conv1, 288, 3, stride=2, padding=padding, scope='Conv2d_1a_3x3') with tf.variable_scope('Branch_2'): tower_conv2 = slim.conv2d(net, 256, 1, scope='Conv2d_0a_1x1') tower_conv2_1 = slim.conv2d(tower_conv2, 288, 3, scope='Conv2d_0b_3x3') tower_conv2_2 = slim.conv2d(tower_conv2_1, 320, 3, stride=2, padding=padding, scope='Conv2d_1a_3x3') with tf.variable_scope('Branch_3'): tower_pool = slim.max_pool2d(net, 3, stride=2, padding=padding, scope='MaxPool_1a_3x3') net = tf.concat( [tower_conv_1, tower_conv1_1, tower_conv2_2, tower_pool], 3) if add_and_check_final('Mixed_7a', net): return net, end_points # TODO(alemi): register intermediate endpoints net = slim.repeat(net, 9, block8, scale=0.20, activation_fn=activation_fn) net = block8(net, activation_fn=None) # 8 x 8 x 1536 net = slim.conv2d(net, 1536, 1, scope='Conv2d_7b_1x1') if add_and_check_final('Conv2d_7b_1x1', net): return net, end_points raise ValueError('final_endpoint (%s) not recognized', final_endpoint) def inception_resnet_v2(inputs, num_classes=1001, is_training=True, dropout_keep_prob=0.8, reuse=None, scope='InceptionResnetV2', create_aux_logits=True, activation_fn=tf.nn.relu): """Creates the Inception Resnet V2 model. Args: inputs: a 4-D tensor of size [batch_size, height, width, 3]. Dimension batch_size may be undefined. If create_aux_logits is false, also height and width may be undefined. num_classes: number of predicted classes. If 0 or None, the logits layer is omitted and the input features to the logits layer (before dropout) are returned instead. is_training: whether is training or not. dropout_keep_prob: float, the fraction to keep before final layer. reuse: whether or not the network and its variables should be reused. To be able to reuse 'scope' must be given. scope: Optional variable_scope. create_aux_logits: Whether to include the auxilliary logits. activation_fn: Activation function for conv2d. Returns: net: the output of the logits layer (if num_classes is a non-zero integer), or the non-dropped-out input to the logits layer (if num_classes is 0 or None). end_points: the set of end_points from the inception model. 
""" end_points = {} with tf.variable_scope(scope, 'InceptionResnetV2', [inputs], reuse=reuse) as scope: with slim.arg_scope([slim.batch_norm, slim.dropout], is_training=is_training): net, end_points = inception_resnet_v2_base(inputs, scope=scope, activation_fn=activation_fn) if create_aux_logits and num_classes: with tf.variable_scope('AuxLogits'): aux = end_points['PreAuxLogits'] aux = slim.avg_pool2d(aux, 5, stride=3, padding='VALID', scope='Conv2d_1a_3x3') aux = slim.conv2d(aux, 128, 1, scope='Conv2d_1b_1x1') aux = slim.conv2d(aux, 768, aux.get_shape()[1:3], padding='VALID', scope='Conv2d_2a_5x5') aux = slim.flatten(aux) aux = slim.fully_connected(aux, num_classes, activation_fn=None, scope='Logits') end_points['AuxLogits'] = aux with tf.variable_scope('Logits'): # TODO(sguada,arnoegw): Consider adding a parameter global_pool which # can be set to False to disable pooling here (as in resnet_*()). kernel_size = net.get_shape()[1:3] if kernel_size.is_fully_defined(): net = slim.avg_pool2d(net, kernel_size, padding='VALID', scope='AvgPool_1a_8x8') else: net = tf.reduce_mean(net, [1, 2], keep_dims=True, name='global_pool') end_points['global_pool'] = net if not num_classes: return net, end_points net = slim.flatten(net) net = slim.dropout(net, dropout_keep_prob, is_training=is_training, scope='Dropout') end_points['PreLogitsFlatten'] = net logits = slim.fully_connected(net, num_classes, activation_fn=None, scope='Logits') end_points['Logits'] = logits end_points['Predictions'] = tf.nn.softmax(logits, name='Predictions') return logits, end_points inception_resnet_v2.default_image_size = 299 def inception_resnet_v2_arg_scope( weight_decay=0.00004, batch_norm_decay=0.9997, batch_norm_epsilon=0.001, activation_fn=tf.nn.relu, batch_norm_updates_collections=tf.GraphKeys.UPDATE_OPS, batch_norm_scale=False): """Returns the scope with the default parameters for inception_resnet_v2. Args: weight_decay: the weight decay for weights variables. batch_norm_decay: decay for the moving average of batch_norm momentums. batch_norm_epsilon: small float added to variance to avoid dividing by zero. activation_fn: Activation function for conv2d. batch_norm_updates_collections: Collection for the update ops for batch norm. batch_norm_scale: If True, uses an explicit `gamma` multiplier to scale the activations in the batch normalization layer. Returns: a arg_scope with the parameters needed for inception_resnet_v2. """ # Set weight_decay for weights in conv2d and fully_connected layers. with slim.arg_scope([slim.conv2d, slim.fully_connected], weights_regularizer=slim.l2_regularizer(weight_decay), biases_regularizer=slim.l2_regularizer(weight_decay)): batch_norm_params = { 'decay': batch_norm_decay, 'epsilon': batch_norm_epsilon, 'updates_collections': batch_norm_updates_collections, 'fused': None, # Use fused batch norm if possible. 'scale': batch_norm_scale, } # Set activation_fn and parameters for batch_norm. with slim.arg_scope([slim.conv2d], activation_fn=activation_fn, normalizer_fn=slim.batch_norm, normalizer_params=batch_norm_params) as scope: return scope
PyTorch/Detection/SSD/ssd
ssd
entrypoints
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import torch import sys import urllib.request # from https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/SpeechSynthesis/Tacotron2/inference.py def checkpoint_from_distributed(state_dict): """ Checks whether checkpoint was generated by DistributedDataParallel. DDP wraps model in additional "module.", it needs to be unwrapped for single GPU inference. :param state_dict: model's state dict """ ret = False for key, _ in state_dict.items(): if key.find('module.') != -1: ret = True break return ret # from https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/SpeechSynthesis/Tacotron2/inference.py def unwrap_distributed(state_dict): """ Unwraps model from DistributedDataParallel. DDP wraps model in additional "module.", it needs to be removed for single GPU inference. :param state_dict: model's state dict """ new_state_dict = {} for key, value in state_dict.items(): new_key = key.replace('module.1.', '') new_key = new_key.replace('module.', '') new_state_dict[new_key] = value return new_state_dict def _download_checkpoint(checkpoint, force_reload): model_dir = os.path.join(torch.hub._get_torch_home(), 'checkpoints') if not os.path.exists(model_dir): os.makedirs(model_dir) ckpt_file = os.path.join(model_dir, os.path.basename(checkpoint)) if not os.path.exists(ckpt_file) or force_reload: sys.stderr.write('Downloading checkpoint from {}\n'.format(checkpoint)) urllib.request.urlretrieve(checkpoint, ckpt_file) return ckpt_file def nvidia_ssd_processing_utils(): import numpy as np import skimage from skimage import io, transform from .utils import dboxes300_coco, Encoder class Processing: @staticmethod def load_image(image_path): """Code from Loading_Pretrained_Models.ipynb - a Caffe2 tutorial""" img = skimage.img_as_float(io.imread(image_path)) if len(img.shape) == 2: img = np.array([img, img, img]).swapaxes(0, 2) return img @staticmethod def rescale(img, input_height, input_width): """Code from Loading_Pretrained_Models.ipynb - a Caffe2 tutorial""" aspect = img.shape[1] / float(img.shape[0]) if (aspect > 1): # landscape orientation - wide image res = int(aspect * input_height) imgScaled = transform.resize(img, (input_width, res)) if (aspect < 1): # portrait orientation - tall image res = int(input_width / aspect) imgScaled = transform.resize(img, (res, input_height)) if (aspect == 1): imgScaled = transform.resize(img, (input_width, input_height)) return imgScaled @staticmethod def crop_center(img, cropx, cropy): """Code from Loading_Pretrained_Models.ipynb - a Caffe2 tutorial""" y, x, c = img.shape startx = x // 2 - (cropx // 2) starty = y // 2 - (cropy // 2) return img[starty:starty + cropy, startx:startx + cropx] @staticmethod def normalize(img, mean=128, std=128): img = (img * 256 - mean) / std return img @staticmethod def prepare_tensor(inputs, fp16=False): NHWC = np.array(inputs) NCHW = np.swapaxes(np.swapaxes(NHWC, 1, 3), 2, 3) tensor = torch.from_numpy(NCHW) 
tensor = tensor.contiguous() tensor = tensor.cuda() tensor = tensor.float() if fp16: tensor = tensor.half() return tensor @staticmethod def prepare_input(img_uri): img = Processing.load_image(img_uri) img = Processing.rescale(img, 300, 300) img = Processing.crop_center(img, 300, 300) img = Processing.normalize(img) return img @staticmethod def decode_results(predictions): dboxes = dboxes300_coco() encoder = Encoder(dboxes) ploc, plabel = [val.float() for val in predictions] results = encoder.decode_batch(ploc, plabel, criteria=0.5, max_output=20) return [[pred.detach().cpu().numpy() for pred in detections] for detections in results] @staticmethod def pick_best(detections, threshold=0.3): bboxes, classes, confidences = detections best = np.argwhere(confidences > threshold)[:, 0] return [pred[best] for pred in detections] @staticmethod def get_coco_object_dictionary(): import os file_with_coco_names = "category_names.txt" if not os.path.exists(file_with_coco_names): print("Downloading COCO annotations.") import urllib import zipfile import json import shutil urllib.request.urlretrieve("http://images.cocodataset.org/annotations/annotations_trainval2017.zip", "cocoanno.zip") with zipfile.ZipFile("cocoanno.zip", "r") as f: f.extractall() print("Downloading finished.") with open("annotations/instances_val2017.json", 'r') as COCO: js = json.loads(COCO.read()) class_names = [category['name'] for category in js['categories']] open("category_names.txt", 'w').writelines([c+"\n" for c in class_names]) os.remove("cocoanno.zip") shutil.rmtree("annotations") else: class_names = open("category_names.txt").readlines() class_names = [c.strip() for c in class_names] return class_names return Processing() def nvidia_ssd(pretrained=True, **kwargs): """Constructs an SSD300 model. For detailed information on model input and output, training recipies, inference and performance visit: github.com/NVIDIA/DeepLearningExamples and/or ngc.nvidia.com Args: pretrained (bool, True): If True, returns a model pretrained on COCO dataset. model_math (str, 'fp32'): returns a model in given precision ('fp32' or 'fp16') """ from . import model as ssd fp16 = "model_math" in kwargs and kwargs["model_math"] == "fp16" force_reload = "force_reload" in kwargs and kwargs["force_reload"] m = ssd.SSD300() if fp16: m = m.half() def batchnorm_to_float(module): """Converts batch norm to FP32""" if isinstance(module, torch.nn.modules.batchnorm._BatchNorm): module.float() for child in module.children(): batchnorm_to_float(child) return module m = batchnorm_to_float(m) if pretrained: checkpoint = 'https://api.ngc.nvidia.com/v2/models/nvidia/ssd_pyt_ckpt_amp/versions/20.06.0/files/nvidia_ssdpyt_amp_200703.pt' ckpt_file = _download_checkpoint(checkpoint, force_reload) ckpt = torch.load(ckpt_file) ckpt = ckpt['model'] if checkpoint_from_distributed(ckpt): ckpt = unwrap_distributed(ckpt) m.load_state_dict(ckpt) return m
PyTorch/SpeechSynthesis/Tacotron2/trtis_cpp/src/trt/util
util
cudaUtils
/* * Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * Neither the name of the NVIDIA CORPORATION nor the * names of its contributors may be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef TT2I_CUDAUTILS_H #define TT2I_CUDAUTILS_H #include "cuda_runtime.h" #include <cassert> #include <stdexcept> #include <string> #include <vector> namespace tts { class CudaUtils { public: /** * @brief Synchronize on the given stream, and throw an exception if an error * occurs. * * @param stream The stream to synchronize on. */ static void sync(cudaStream_t stream); /** * @brief Print information about the avialable devices and which one is in * use to stdout. */ static void printDeviceInformation(); /** * @brief Get the number of SMs on the current device. * * @return The number of SMs. */ static int getNumSM(); /** * @brief Free the given device pointer. * * @tparam T The type of pointer. * @param ptr The pointer. */ template <typename T> static void free(T** const ptr) { check(cudaFree(*ptr), "CudaUtils::free(ptr)"); *ptr = nullptr; } /** * @brief Allocte data on the GPU. * * @tparam T The data type. * @param ptr The pointer to set pointing at the allocated memory. * @param count The number of elements to allocate. */ template <typename T> static void alloc(T** const ptr, const size_t count) { check(cudaMalloc((void**) ptr, sizeof(T) * count), "CudaUtils::alloc(ptr, count" + std::to_string(count) + ")"); assert(count == 0 || *ptr); } /** * @brief Allocate pinned memory on the host. * * @tparam T The data type. * @param ptr The pointer to set pointing at the allocated memory. * @param count The number of elements to allocate. */ template <typename T> static void allocHost(T** const ptr, const size_t count) { check( cudaMallocHost((void**)ptr, sizeof(T) * count), "CudaUtils::allocHost(ptr, count)"); assert(count == 0 || *ptr); } /** * @brief Zero out region of device memory. * * @tparam T The data type. * @param ptr The pointer to the memory. * @param count The number of elements to zero. 
*/ template <typename T> static void zero(T* const ptr, const size_t count) { check(cudaMemset(ptr, 0, sizeof(T) * count), "CudaUtils::zero(ptr, count)"); } /** * @brief Zero out region of device memory asynchronously. * * @tparam T The data type. * @param ptr The pointer to the memory. * @param count The number of elements to zero. * @param stream The stream to operate on. */ template <typename T> static void zeroAsync(T* const ptr, const size_t count, cudaStream_t stream) { check(cudaMemsetAsync(ptr, 0, sizeof(T) * count, stream), "CudaUtils::zeroAsync(ptr, count, stream)"); } /** * @brief Allocate memory zeroed memory. * * @tparam T The dat atype. * @param ptr The pointer to set point at the allocated memory. * @param count The number of elements to allocate and zero. */ template <typename T> static void allocZeroed(T** const ptr, const size_t count) { alloc(ptr, count); zero(*ptr, count); } private: /** * @brief Convert cuda errors into exceptions. Will throw an exception * unless `err == cudaSuccess`. * * @param err The error. * @param msg The message to attach to the exception. */ static void check(const cudaError_t err, const std::string& msg = "") { if (err != cudaSuccess) { throw std::runtime_error("Encountered error: " + std::to_string(static_cast<int>(err)) + ": " + msg); } } }; } // namespace tts #endif
DGLPyTorch/DrugDiscovery/SE3Transformer/scripts
scripts
train_multi_gpu
#!/usr/bin/env bash

# CLI args with defaults
BATCH_SIZE=${1:-240}
AMP=${2:-true}
NUM_EPOCHS=${3:-130}
LEARNING_RATE=${4:-0.01}
WEIGHT_DECAY=${5:-0.1}

# choices: 'mu', 'alpha', 'homo', 'lumo', 'gap', 'r2', 'zpve', 'U0', 'U', 'H', 'G', 'Cv',
#          'U0_atom', 'U_atom', 'H_atom', 'G_atom', 'A', 'B', 'C'
TASK=homo

python -m torch.distributed.run --nnodes=1 --nproc_per_node=gpu --max_restarts 0 --module \
  se3_transformer.runtime.training \
  --amp "$AMP" \
  --batch_size "$BATCH_SIZE" \
  --epochs "$NUM_EPOCHS" \
  --lr "$LEARNING_RATE" \
  --min_lr 0.00001 \
  --weight_decay "$WEIGHT_DECAY" \
  --use_layer_norm \
  --norm \
  --save_ckpt_path model_qm9.pth \
  --precompute_bases \
  --seed 42 \
  --task "$TASK"
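For reference, the defaults above are positional CLI arguments, so an invocation can override them in order (batch size, AMP, epochs, learning rate, weight decay); the values below simply spell out the defaults:

```
bash scripts/train_multi_gpu.sh 240 true 130 0.01 0.1
```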
TensorFlow2/Detection/Efficientdet/scripts/D0
D0
evaluate-FP32-8xV100-32G
#!/bin/bash
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

bs=40
ema=0.9999
mkdir -p /tmp/evaluate-FP32-8xV100-32G

mpirun -np 8 --allow-run-as-root --bind-to none \
    -map-by slot -x LD_LIBRARY_PATH -x PATH \
    -mca pml ob1 -mca btl ^openib \
    -x CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
    python3 eval.py \
    --val_file_pattern=/workspace/coco/val-* \
    --val_json_file=/workspace/coco/annotations/instances_val2017.json \
    --ckpt_path=${CKPT:-/checkpoints/emackpt-300} \
    --batch_size=$bs \
    --amp=False \
    --hparams="moving_average_decay=$ema" \
    2>&1 | tee /tmp/evaluate-FP32-8xV100-32G/eval.log
TensorFlow/Detection/SSD/models/research/object_detection/metrics
metrics
oid_vrd_challenge_evaluation_utils_test
# Copyright 2018 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """Tests for oid_vrd_challenge_evaluation_utils.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import pandas as pd import tensorflow as tf from object_detection.core import standard_fields from object_detection.metrics import oid_vrd_challenge_evaluation_utils as utils from object_detection.utils import vrd_evaluation class OidVrdChallengeEvaluationUtilsTest(tf.test.TestCase): def testBuildGroundtruthDictionary(self): np_data = pd.DataFrame( [[ 'fe58ec1b06db2bb7', '/m/04bcr3', '/m/083vt', 0.0, 0.3, 0.5, 0.6, 0.0, 0.3, 0.5, 0.6, 'is', None, None ], [ 'fe58ec1b06db2bb7', '/m/04bcr3', '/m/02gy9n', 0.0, 0.3, 0.5, 0.6, 0.1, 0.2, 0.3, 0.4, 'under', None, None ], [ 'fe58ec1b06db2bb7', '/m/04bcr3', '/m/083vt', 0.0, 0.1, 0.2, 0.3, 0.0, 0.1, 0.2, 0.3, 'is', None, None ], [ 'fe58ec1b06db2bb7', '/m/083vt', '/m/04bcr3', 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 'at', None, None ], [ 'fe58ec1b06db2bb7', None, None, None, None, None, None, None, None, None, None, None, '/m/04bcr3', 1.0 ], [ 'fe58ec1b06db2bb7', None, None, None, None, None, None, None, None, None, None, None, '/m/083vt', 0.0 ], [ 'fe58ec1b06db2bb7', None, None, None, None, None, None, None, None, None, None, None, '/m/02gy9n', 0.0 ]], columns=[ 'ImageID', 'LabelName1', 'LabelName2', 'XMin1', 'XMax1', 'YMin1', 'YMax1', 'XMin2', 'XMax2', 'YMin2', 'YMax2', 'RelationshipLabel', 'LabelName', 'Confidence' ]) class_label_map = {'/m/04bcr3': 1, '/m/083vt': 2, '/m/02gy9n': 3} relationship_label_map = {'is': 1, 'under': 2, 'at': 3} groundtruth_dictionary = utils.build_groundtruth_vrd_dictionary( np_data, class_label_map, relationship_label_map) self.assertTrue(standard_fields.InputDataFields.groundtruth_boxes in groundtruth_dictionary) self.assertTrue(standard_fields.InputDataFields.groundtruth_classes in groundtruth_dictionary) self.assertTrue(standard_fields.InputDataFields.groundtruth_image_classes in groundtruth_dictionary) self.assertAllEqual( np.array( [(1, 2, 1), (1, 3, 2), (1, 2, 1), (2, 1, 3)], dtype=vrd_evaluation.label_data_type), groundtruth_dictionary[ standard_fields.InputDataFields.groundtruth_classes]) expected_vrd_data = np.array( [ ([0.5, 0.0, 0.6, 0.3], [0.5, 0.0, 0.6, 0.3]), ([0.5, 0.0, 0.6, 0.3], [0.3, 0.1, 0.4, 0.2]), ([0.2, 0.0, 0.3, 0.1], [0.2, 0.0, 0.3, 0.1]), ([0.3, 0.1, 0.4, 0.2], [0.7, 0.5, 0.8, 0.6]), ], dtype=vrd_evaluation.vrd_box_data_type) for field in expected_vrd_data.dtype.fields: self.assertNDArrayNear( expected_vrd_data[field], groundtruth_dictionary[ standard_fields.InputDataFields.groundtruth_boxes][field], 1e-5) self.assertAllEqual( np.array([1, 2, 3]), groundtruth_dictionary[ standard_fields.InputDataFields.groundtruth_image_classes]) def testBuildPredictionDictionary(self): np_data = pd.DataFrame( [[ 'fe58ec1b06db2bb7', '/m/04bcr3', '/m/083vt', 0.0, 
0.3, 0.5, 0.6, 0.0, 0.3, 0.5, 0.6, 'is', 0.1 ], [ 'fe58ec1b06db2bb7', '/m/04bcr3', '/m/02gy9n', 0.0, 0.3, 0.5, 0.6, 0.1, 0.2, 0.3, 0.4, 'under', 0.2 ], [ 'fe58ec1b06db2bb7', '/m/04bcr3', '/m/083vt', 0.0, 0.1, 0.2, 0.3, 0.0, 0.1, 0.2, 0.3, 'is', 0.3 ], [ 'fe58ec1b06db2bb7', '/m/083vt', '/m/04bcr3', 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 'at', 0.4 ]], columns=[ 'ImageID', 'LabelName1', 'LabelName2', 'XMin1', 'XMax1', 'YMin1', 'YMax1', 'XMin2', 'XMax2', 'YMin2', 'YMax2', 'RelationshipLabel', 'Score' ]) class_label_map = {'/m/04bcr3': 1, '/m/083vt': 2, '/m/02gy9n': 3} relationship_label_map = {'is': 1, 'under': 2, 'at': 3} prediction_dictionary = utils.build_predictions_vrd_dictionary( np_data, class_label_map, relationship_label_map) self.assertTrue(standard_fields.DetectionResultFields.detection_boxes in prediction_dictionary) self.assertTrue(standard_fields.DetectionResultFields.detection_classes in prediction_dictionary) self.assertTrue(standard_fields.DetectionResultFields.detection_scores in prediction_dictionary) self.assertAllEqual( np.array( [(1, 2, 1), (1, 3, 2), (1, 2, 1), (2, 1, 3)], dtype=vrd_evaluation.label_data_type), prediction_dictionary[ standard_fields.DetectionResultFields.detection_classes]) expected_vrd_data = np.array( [ ([0.5, 0.0, 0.6, 0.3], [0.5, 0.0, 0.6, 0.3]), ([0.5, 0.0, 0.6, 0.3], [0.3, 0.1, 0.4, 0.2]), ([0.2, 0.0, 0.3, 0.1], [0.2, 0.0, 0.3, 0.1]), ([0.3, 0.1, 0.4, 0.2], [0.7, 0.5, 0.8, 0.6]), ], dtype=vrd_evaluation.vrd_box_data_type) for field in expected_vrd_data.dtype.fields: self.assertNDArrayNear( expected_vrd_data[field], prediction_dictionary[ standard_fields.DetectionResultFields.detection_boxes][field], 1e-5) self.assertNDArrayNear( np.array([0.1, 0.2, 0.3, 0.4]), prediction_dictionary[ standard_fields.DetectionResultFields.detection_scores], 1e-5) if __name__ == '__main__': tf.test.main()
Tools/DGLPyTorch/SyntheticGraphGeneration/scripts
scripts
time_filter_credit
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys

import pandas as pd
from pathlib import Path

if __name__ == '__main__':
    data_path = sys.argv[1]
    save_path = Path(data_path).parent
    save_path = save_path / 'data.csv'

    df = pd.read_csv(data_path)
    df['user'] = df['first'] + df['last']
    df = df.groupby(['user', 'merchant'], axis=0).tail(1).reset_index(drop=True)
    df = df.drop(columns=['user'])

    # - save data
    df.to_csv(save_path, index=False)
TensorFlow/Segmentation/VNet/examples
examples
vnet_train_and_evaluate
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import os
import subprocess
from os.path import dirname

PARSER = argparse.ArgumentParser(description="vnet_train_and_evaluate")

PARSER.add_argument('--data_dir', required=True, type=str,
                    help='Directory where the dataset is stored')

PARSER.add_argument('--model_dir', required=True, type=str,
                    help='Directory where model information (including checkpoints) is stored')

PARSER.add_argument('--gpus', choices=[1, 8], required=True, type=int,
                    help='Number of GPUs')

PARSER.add_argument('--batch_size', default=1, type=int,
                    help='Batch size for training')

PARSER.add_argument('--epochs', default=40, type=int,
                    help='Number of epochs for training')

PARSER.add_argument('--amp', dest='use_amp', action='store_true', default=False)

PARSER.add_argument('--base_lr', default=0.0001, type=float,
                    help='Initial learning rate for RMSProp')


def build_horovod_prefix(gpus):
    return 'mpirun -np {} -H localhost:{} -bind-to none -map-by slot -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH -mca ' \
           'pml ob1 -mca btl ^openib --allow-run-as-root '.format(gpus, gpus)


def build_command(FLAGS, path_to_main, use_amp):
    return 'python {} --data_dir {} --model_dir {} --exec_mode train_and_evaluate --batch_size {} {} --augment --train_epochs {} --train_split 0.9 --split_seed 42 --base_lr {}'.format(
        path_to_main, FLAGS.data_dir, FLAGS.model_dir, FLAGS.batch_size, use_amp, FLAGS.epochs, FLAGS.base_lr)


def main():
    FLAGS = PARSER.parse_args()

    use_amp = '--amp' if FLAGS.use_amp else ''

    path_to_main = os.path.join(dirname(dirname(os.path.realpath(__file__))), 'main.py')

    cmd = build_command(FLAGS, path_to_main, use_amp)

    if FLAGS.gpus > 1:
        cmd = build_horovod_prefix(FLAGS.gpus) + cmd

    print('Command to be executed:')
    print(cmd)

    subprocess.call(cmd, shell=True)


if __name__ == '__main__':
    main()
Tools/PyTorch/TimeSeriesPredictionPlatform
TimeSeriesPredictionPlatform
launch_inference_server
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import warnings

import hydra

warnings.filterwarnings("ignore")


@hydra.main(config_path="conf/", config_name="deployment_config")
def main(cfg):
    print(cfg)
    cfg.deployment.config.checkpoint = cfg.checkpoint
    hydra.utils.call(cfg, _recursive_=False)


if __name__ == "__main__":
    main()
Tools/PyTorch/TimeSeriesPredictionPlatform/models/tft_pyt/scripts
scripts
run_favorita
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

: ${SEED:=1}
: ${LR:=1e-3}
: ${NGPU:=8}
: ${BATCH_SIZE:=1024}
: ${EPOCHS:=10}

python -m torch.distributed.run --nproc_per_node=${NGPU} train.py \
        --dataset favorita \
        --data_path /data/processed/favorita_bin \
        --batch_size=${BATCH_SIZE} \
        --sample 450000 50000 \
        --lr ${LR} \
        --epochs ${EPOCHS} \
        --seed ${SEED} \
        --use_amp \
        --results /results/TFT_favorita_bs${NGPU}x${BATCH_SIZE}_lr${LR}/seed${SEED}
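The hyperparameters above are plain environment variables with defaults, so a run can override them inline; an illustrative invocation (assuming the script is launched from the model directory):

```
NGPU=8 BATCH_SIZE=1024 LR=1e-3 EPOCHS=10 SEED=1 bash scripts/run_favorita.sh
```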
TensorFlow/LanguageModeling/BERT/data
data
BookscorpusTextFormatting
# Copyright (c) 2019 NVIDIA CORPORATION. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import glob
import os


class BookscorpusTextFormatting:
    def __init__(self, books_path, output_filename, recursive=False):
        self.books_path = books_path
        self.recursive = recursive
        self.output_filename = output_filename

    # This puts one book per line
    def merge(self):
        with open(self.output_filename, mode='w', newline='\n') as ofile:
            for filename in glob.glob(self.books_path + '/' + '*.txt', recursive=True):
                with open(filename, mode='r', encoding='utf-8-sig', newline='\n') as file:
                    for line in file:
                        if line.strip() != '':
                            ofile.write(line.strip() + ' ')
                ofile.write("\n\n")
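A hedged usage sketch for the formatter above; the input and output paths are placeholders, and the import assumes the script is run from the `data` directory so the module is importable by filename:

```
from BookscorpusTextFormatting import BookscorpusTextFormatting

# Merge every *.txt book under the corpus directory into one file, one book per line
formatter = BookscorpusTextFormatting('/workspace/bert/data/download/bookscorpus',
                                      '/workspace/bert/data/formatted/bookscorpus_one_book_per_line.txt')
formatter.merge()
```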
TensorFlow/Segmentation/UNet_Industrial/utils
utils
hvd_utils
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# ==============================================================================
#
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ==============================================================================

import os

__all__ = ["is_using_hvd"]


def is_using_hvd():
    return True
PyTorch/SpeechRecognition/Jasper/triton
triton
speech_utils
#!/usr/bin/python # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import soundfile as sf import math from os import system import numpy as np import tritonclient.grpc, tritonclient.http import tritonclient.grpc.model_config_pb2 as model_config from tritonclient.utils import triton_to_np_dtype, np_to_triton_dtype import grpc import sys import os if "./triton" not in sys.path: sys.path.append(os.path.join(sys.path[0], "../")) from common.text import _clean_text WINDOWS_FNS = {"hanning": np.hanning, "hamming": np.hamming, "none": None} triton_type_to_np_dtype = { 'TYPE_BOOL': np.bool, 'TYPE_INT8': np.int8, 'TYPE_INT16': np.int16, 'TYPE_INT32': np.int32, 'TYPE_INT64': np.int64, 'TYPE_UINT8': np.uint8, 'TYPE_FP16': np.float16, 'TYPE_FP32': np.float32, 'TYPE_FP64': np.float64 } model_dtype_to_np_dtype = { "BOOL": np.bool, "INT8": np.int8, "INT16": np.int16, "INT32": np.int32, "INT64": np.int64, "UINT8": np.uint8, "UINT16": np.uint16, "FP16": np.float16, "FP32": np.float32, "FP64": np.float64, "BYTES": np.dtype(object) } def load_transcript(transcript_path): with open(transcript_path, 'r', encoding="utf-8") as transcript_file: transcript = transcript_file.read().replace('\n', '') return transcript def parse_transcript(transcript, labels_map, blank_index): chars = [labels_map.get(x, blank_index) for x in list(transcript)] transcript = list(filter(lambda x: x != blank_index, chars)) return transcript def normalize_string(s, labels, table, **unused_kwargs): """ Normalizes string. For example: 'call me at 8:00 pm!' -> 'call me at eight zero pm' Args: s: string to normalize labels: labels used during model training. Returns: Normalized string """ def good_token(token, labels): s = set(labels) for t in token: if not t in s: return False return True try: text = _clean_text(s, ["english_cleaners"], table).strip() return ''.join([t for t in text if good_token(t, labels=labels)]) except: print("WARNING: Normalizing {} failed".format(s)) return None def ctc_decoder_predictions_tensor(prediction_cpu_tensor, batch_size, labels): """ Takes output of greedy ctc decoder and performs ctc decoding algorithm to remove duplicates and special symbol. 
Returns prediction Args: tensor: model output tensor label: A list of labels Returns: prediction """ blank_id = len(labels) - 1 hypotheses = [] labels_map = dict([(i, labels[i]) for i in range(len(labels))]) # iterate over batch prediction_cpu_tensor = prediction_cpu_tensor.reshape((batch_size, int(prediction_cpu_tensor.size/batch_size))) for ind in range(batch_size): prediction = prediction_cpu_tensor[ind].tolist() # CTC decoding procedure decoded_prediction = [] previous = len(labels) - 1 # id of a blank symbol for p in prediction: if (p != previous or previous == blank_id) and p != blank_id: decoded_prediction.append(p) previous = p hypothesis = ''.join([labels_map[c] for c in decoded_prediction]) hypotheses.append(hypothesis) return hypotheses class SpeechClient(object): def __init__(self, url, protocol, model_name, model_version, batch_size, model_platform=None, verbose=False, mode="batch", from_features=True): self.model_name = model_name self.model_version = model_version self.verbose = verbose self.batch_size = batch_size self.transpose_audio_features = False self.grpc_stub = None self.ctx = None self.correlation_id = 0 self.first_run = True if mode == "streaming" or mode == "asynchronous": self.correlation_id = 1 self.buffer = [] if protocol == "grpc": # Create gRPC client for communicating with the server self.prtcl_client = tritonclient.grpc else: # Create HTTP client for communicating with the server self.prtcl_client = tritonclient.http self.triton_client = self.prtcl_client.InferenceServerClient( url=url, verbose=self.verbose) self.audio_signals_name, self.num_samples_name, self.transcripts_name, \ self.audio_signals_type, self.num_samples_type, self.transcripts_type = self.parse_model(# server_status, model_name, batch_size, model_platform, verbose) self.labels = [" ", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "'", "<BLANK>"] def postprocess(self, transcript_values, labels): res = [] for transcript, filename in zip(transcript_values, labels): print('---') print('File: ', filename) t=ctc_decoder_predictions_tensor(transcript, self.batch_size, self.labels) print("Final transcript: ", t) print('---') res.append(t) return res def check_num_samples(self, num_samples, model_name): if num_samples['data_type'] != 'TYPE_UINT32' and num_samples['data_type'] != 'TYPE_INT32': raise Exception( "expecting num_samples datatype to be TYPE_UINT32/TYPE_INT32, " "model '" + model_name + "' output type is " + model_config.DataType.Name(num_samples['data_type'])) if len(num_samples['dims']) != 1: raise Exception("Expecting num_samples to have 1 dimension, " "model '{}' num_samples has {}".format( model_name,len(num_samples['dims']))) def parse_model(self, # server_status, model_name, batch_size, model_platform=None, verbose=False): """ Check the configuration of the ensemble model """ if self.prtcl_client is tritonclient.grpc: config = self.triton_client.get_model_config(model_name, as_json=True) else: config = self.triton_client.get_model_config(model_name) self.model_platform = model_platform # Inputs are: # 1) audio_signal: raw audio samples [num_samples] # 2) sample_rate: sample rate of audio # 3) num_samples: length of audio if len(config['input']) < 2: raise Exception( "expecting 2-3 inputs, got {}".format(len(config['input']))) # Outputs are: # 1) transcripts: candidate transcripts if len(config['output']) != 1: raise Exception( "expecting 1 output, got {}".format(len(config['output']))) audio_signal = 
config['input'][0] if len(config['input']) > 1: num_samples = config['input'][1] self.check_num_samples(num_samples, model_name); transcripts = config['output'][0] expected_audio_signal_dim = 1 # Model specifying maximum batch size of 0 indicates that batching # is not supported and so the input tensors do not expect an "N" # dimension (and 'batch_size' should be 1 so that only a single # image instance is inferred at a time). max_batch_size = config['max_batch_size'] if max_batch_size == 0: if batch_size != 1: raise Exception( "batching not supported for model '" + model_name + "'") else: # max_batch_size > 0 if batch_size > max_batch_size: raise Exception( "expecting batch size <= {} for model {}".format( max_batch_size, model_name)) if len(audio_signal['dims']) != expected_audio_signal_dim: raise Exception("Expecting audio signal to have {} dimensions, " "model '{}' audio_signal has {}".format( expected_audio_signal_dim, model_name, len(audio_signal.dims))) return (audio_signal['name'], num_samples['name'], transcripts['name'], triton_type_to_np_dtype[audio_signal['data_type']], triton_type_to_np_dtype[num_samples['data_type']], triton_type_to_np_dtype[transcripts['data_type']]) def recognize(self, audio_signal, filenames): # Send requests of FLAGS.batch_size audio signals. If the number of # audios isn't an exact multiple of FLAGS.batch_size then just # start over with the first audio until the batch is filled. input_batch = [] input_filenames = [] max_num_samples_batch = 0 for idx in range(self.batch_size): input_batch.append(audio_signal[idx].astype( self.audio_signals_type)) input_filenames.append(filenames[idx]) num_samples = audio_signal[idx].shape[0] if (num_samples > max_num_samples_batch): max_num_samples_batch = num_samples for idx in range(self.batch_size): num_samples = input_batch[idx].shape[0] mean = np.mean(input_batch[idx]) std_var = np.std(input_batch[idx]) gauss_noise = np.random.normal( mean,std_var, max_num_samples_batch-num_samples) input_batch[idx]= np.concatenate( (input_batch[idx], gauss_noise.astype( self.audio_signals_type))) max_num_samples_batch = np.asarray([max_num_samples_batch], dtype=self.num_samples_type) num_samples_batch = [max_num_samples_batch]*self.batch_size # Send request print("Sending request to transcribe file(s):", ",".join( input_filenames)) inputs = [] input_batch = np.asarray(input_batch) num_samples_batch = np.asarray(num_samples_batch) inputs.append(self.prtcl_client.InferInput(self.audio_signals_name, input_batch.shape, np_to_triton_dtype(input_batch.dtype))) inputs.append(self.prtcl_client.InferInput(self.num_samples_name, num_samples_batch.shape, np_to_triton_dtype(num_samples_batch.dtype))) if self.prtcl_client is tritonclient.grpc: inputs[0].set_data_from_numpy(input_batch) inputs[1].set_data_from_numpy(num_samples_batch) else: # http inputs[0].set_data_from_numpy(input_batch, binary_data=True) inputs[1].set_data_from_numpy(num_samples_batch, binary_data=True) outputs = [] if self.prtcl_client is tritonclient.grpc: outputs.append(self.prtcl_client.InferRequestedOutput(self.transcripts_name)) else: outputs.append(self.prtcl_client.InferRequestedOutput(self.transcripts_name, binary_data=True)) triton_result = self.triton_client.infer(self.model_name, inputs=inputs, outputs=outputs) transcripts = triton_result.as_numpy(self.transcripts_name) result = self.postprocess(transcripts, input_filenames) return result def preemphasis(signal, coeff=0.97): return np.append(signal[0], signal[1:] - coeff*signal[:-1]) def normalize_signal(signal, 
gain=None): """ Normalize float32 signal to [-1, 1] range """ if gain is None: gain = 1.0/(np.max(np.abs(signal)) + 1e-5) return signal*gain class AudioSegment(object): """Monaural audio segment abstraction. :param samples: Audio samples [num_samples x num_channels]. :type samples: ndarray.float32 :param sample_rate: Audio sample rate. :type sample_rate: int :raises TypeError: If the sample data type is not float or int. """ def __init__(self, samples, sample_rate, target_sr=16000, trim=False, trim_db=60): """Create audio segment from samples. Samples are convert float32 internally, with int scaled to [-1, 1]. """ samples = self._convert_samples_to_float32(samples) self._samples = samples self._sample_rate = sample_rate if self._samples.ndim >= 2: self._samples = np.mean(self._samples, 1) @staticmethod def _convert_samples_to_float32(samples): """Convert sample type to float32. Audio sample type is usually integer or float-point. Integers will be scaled to [-1, 1] in float32. """ float32_samples = samples.astype('float32') if samples.dtype in np.sctypes['int']: bits = np.iinfo(samples.dtype).bits float32_samples *= (1. / 2 ** (bits - 1)) elif samples.dtype in np.sctypes['float']: pass else: raise TypeError("Unsupported sample type: %s." % samples.dtype) return float32_samples @classmethod def from_file(cls, filename, target_sr=16000, int_values=False, offset=0, duration=0, trim=False): """ Load a file supported by librosa and return as an AudioSegment. :param filename: path of file to load :param target_sr: the desired sample rate :param int_values: if true, load samples as 32-bit integers :param offset: offset in seconds when loading audio :param duration: duration in seconds when loading audio :return: numpy array of samples """ with sf.SoundFile(filename, 'r') as f: dtype = 'int32' if int_values else 'float32' sample_rate = f.samplerate if offset > 0: f.seek(int(offset * sample_rate)) if duration > 0: samples = f.read(int(duration * sample_rate), dtype=dtype) else: samples = f.read(dtype=dtype) samples = samples.transpose() return cls(samples, sample_rate, target_sr=target_sr, trim=trim) @property def samples(self): return self._samples.copy() @property def sample_rate(self): return self._sample_rate # define our clear function def clear_screen(): _ = system('clear')
PyTorch/SpeechRecognition/QuartzNet/common
common
metrics
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


def __levenshtein(a, b):
    """Calculates the Levenshtein distance between two sequences."""
    n, m = len(a), len(b)
    if n > m:
        # Make sure n <= m, to use O(min(n,m)) space
        a, b = b, a
        n, m = m, n

    current = list(range(n + 1))
    for i in range(1, m + 1):
        previous, current = current, [i] + [0] * n
        for j in range(1, n + 1):
            add, delete = previous[j] + 1, current[j - 1] + 1
            change = previous[j - 1]
            if a[j - 1] != b[i - 1]:
                change = change + 1
            current[j] = min(add, delete, change)

    return current[n]


def word_error_rate(hypotheses, references):
    """Computes average Word Error Rate (WER) between two text lists."""
    scores = 0
    words = 0
    len_diff = len(references) - len(hypotheses)
    if len_diff > 0:
        raise ValueError("Unequal number of hypotheses and references: "
                         "{0} and {1}".format(len(hypotheses), len(references)))
    elif len_diff < 0:
        hypotheses = hypotheses[:len_diff]

    for h, r in zip(hypotheses, references):
        h_list = h.split()
        r_list = r.split()
        words += len(r_list)
        scores += __levenshtein(h_list, r_list)
    if words != 0:
        wer = 1.0 * scores / words
    else:
        wer = float('inf')
    return wer, scores, words
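A minimal usage sketch for `word_error_rate` above; the import path assumes the QuartzNet repository root is on `PYTHONPATH`, and the hypothesis/reference strings are made up:

```
from common.metrics import word_error_rate

hypotheses = ["the cat sat on the mat", "hello world"]
references = ["the cat sat on a mat", "hello word"]

# WER = word-level edit distance / number of reference words
wer, edits, n_words = word_error_rate(hypotheses, references)
print(f"WER: {wer:.3f} ({edits} edits over {n_words} reference words)")
```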
PyTorch/Segmentation/nnUNet/nnunet
nnunet
nn_unet
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import numpy as np import pytorch_lightning as pl import torch import torch.nn as nn from apex.optimizers import FusedAdam, FusedSGD from data_loading.data_module import get_data_path, get_test_fnames from monai.inferers import sliding_window_inference from monai.networks.nets import DynUNet from nnunet.brats22_model import UNet3D from nnunet.loss import Loss, LossBraTS from nnunet.metrics import Dice from pytorch_lightning.utilities import rank_zero_only from scipy.special import expit, softmax from skimage.transform import resize from utils.logger import DLLogger from utils.utils import get_config_file, print0 class NNUnet(pl.LightningModule): def __init__(self, args, triton=False, data_dir=None): super(NNUnet, self).__init__() self.save_hyperparameters() self.args = args self.triton = triton if data_dir is not None: self.args.data = data_dir self.build_nnunet() self.best_mean, self.best_epoch, self.test_idx = (0,) * 3 self.start_benchmark = 0 self.train_loss = [] self.test_imgs = [] if not self.triton: self.learning_rate = args.learning_rate loss = LossBraTS if self.args.brats else Loss self.loss = loss(self.args.focal) if self.args.dim == 2: self.tta_flips = [[2], [3], [2, 3]] else: self.tta_flips = [[2], [3], [4], [2, 3], [2, 4], [3, 4], [2, 3, 4]] self.dice = Dice(self.n_class, self.args.brats) if self.args.exec_mode in ["train", "evaluate"] and not self.args.benchmark: self.dllogger = DLLogger(args.results, args.logname) def forward(self, img): return torch.argmax(self.model(img), 1) def _forward(self, img): if self.args.benchmark: if self.args.dim == 2 and self.args.data2d_dim == 3: img = layout_2d(img, None) return self.model(img) return self.tta_inference(img) if self.args.tta else self.do_inference(img) def compute_loss(self, preds, label): if self.args.brats22_model: loss = self.loss(preds[0], label) for i, pred in enumerate(preds[1:]): downsampled_label = nn.functional.interpolate(label, pred.shape[2:]) loss += 0.5 ** (i + 1) * self.loss(pred, downsampled_label) c_norm = 1 / (2 - 2 ** (-len(preds))) return c_norm * loss if self.args.deep_supervision: loss, weights = 0.0, 0.0 for i in range(preds.shape[1]): loss += self.loss(preds[:, i], label) * 0.5**i weights += 0.5**i return loss / weights return self.loss(preds, label) def training_step(self, batch, batch_idx): img, lbl = self.get_train_data(batch) img, lbl = self.convert_data(img, lbl) pred = self.model(img) loss = self.compute_loss(pred, lbl) self.train_loss.append(loss.item()) return loss def validation_step(self, batch, batch_idx): if self.current_epoch < self.args.skip_first_n_eval: return None img, lbl = batch["image"], batch["label"] img, lbl = self.convert_data(img, lbl) pred = self._forward(img) loss = self.loss(pred, lbl) if self.args.invert_resampled_y: meta, lbl = batch["meta"][0].cpu().detach().numpy(), batch["orig_lbl"] pred = nn.functional.interpolate(pred, size=tuple(meta[3]), mode="trilinear", 
align_corners=True) self.dice.update(pred, lbl[:, 0], loss) def test_step(self, batch, batch_idx): if self.args.exec_mode == "evaluate": return self.validation_step(batch, batch_idx) img = batch["image"] img = self.convert_ncdhw_to_ndhwc(img) if self.args.benchmark: pred = self._forward(img) return pred = self._forward(img).squeeze(0).cpu().detach().numpy() if self.args.save_preds: meta = batch["meta"][0].cpu().detach().numpy() min_d, max_d = meta[0, 0], meta[1, 0] min_h, max_h = meta[0, 1], meta[1, 1] min_w, max_w = meta[0, 2], meta[1, 2] n_class, original_shape, cropped_shape = pred.shape[0], meta[2], meta[3] if not all(cropped_shape == pred.shape[1:]): resized_pred = np.zeros((n_class, *cropped_shape)) for i in range(n_class): resized_pred[i] = resize( pred[i], cropped_shape, order=3, mode="edge", cval=0, clip=True, anti_aliasing=False ) pred = resized_pred final_pred = np.zeros((n_class, *original_shape)) final_pred[:, min_d:max_d, min_h:max_h, min_w:max_w] = pred if self.args.brats: final_pred = expit(final_pred) else: final_pred = softmax(final_pred, axis=0) self.save_mask(final_pred) def get_unet_params(self): config = get_config_file(self.args) patch_size, spacings = config["patch_size"], config["spacings"] strides, kernels, sizes = [], [], patch_size[:] while True: spacing_ratio = [spacing / min(spacings) for spacing in spacings] stride = [ 2 if ratio <= 2 and size >= 2 * self.args.min_fmap else 1 for (ratio, size) in zip(spacing_ratio, sizes) ] kernel = [3 if ratio <= 2 else 1 for ratio in spacing_ratio] if all(s == 1 for s in stride): break sizes = [i / j for i, j in zip(sizes, stride)] spacings = [i * j for i, j in zip(spacings, stride)] kernels.append(kernel) strides.append(stride) if len(strides) == self.args.depth: break strides.insert(0, len(spacings) * [1]) kernels.append(len(spacings) * [3]) return config["in_channels"], config["n_class"], kernels, strides, patch_size def convert_ncdhw_to_ndhwc(self, tensor): if self.args.layout == "NCDHW": return tensor strides = tensor.stride() shape = tensor.shape tensor = torch.as_strided( tensor, (shape[0], shape[-1], *shape[1:-1]), (strides[0], strides[-1], *strides[1:-1]) ) return tensor def convert_data(self, img, lbl): img, lbl = self.convert_ncdhw_to_ndhwc(img), self.convert_ncdhw_to_ndhwc(lbl) return img, lbl def build_nnunet(self): self.in_channels, out_channels, kernels, strides, self.patch_size = self.get_unet_params() self.n_class = out_channels - 1 if self.args.brats: out_channels = 3 if self.args.brats22_model: self.model = UNet3D(kernels, strides) else: self.model = DynUNet( self.args.dim, self.in_channels, out_channels, kernels, strides, strides[1:], filters=self.args.filters, norm_name=(self.args.norm.upper(), {"affine": True}), act_name=("leakyrelu", {"inplace": False, "negative_slope": 0.01}), deep_supervision=self.args.deep_supervision, deep_supr_num=self.args.deep_supr_num, res_block=self.args.res_block, trans_bias=True, ) if self.args.layout == "NDHWC" and self.args.dim == 3: self.model.to(memory_format=torch.channels_last_3d) print0(f"Filters: {self.model.filters},\nKernels: {kernels}\nStrides: {strides}") def do_inference(self, image): if self.args.dim == 3: return self.sliding_window_inference(image) if self.args.data2d_dim == 2: return self.model(image) if self.args.exec_mode == "predict": return self.inference2d_test(image) return self.inference2d(image) def tta_inference(self, img): pred = self.do_inference(img) for flip_idx in self.tta_flips: pred += flip(self.do_inference(flip(img, flip_idx)), flip_idx) 
pred /= len(self.tta_flips) + 1 return pred def inference2d(self, image): image = torch.transpose(image.squeeze(0), 0, 1) preds = self.model(image) preds = torch.transpose(preds, 0, 1).unsqueeze(0) return preds def inference2d_test(self, image): preds_shape = (image.shape[0], self.n_class + 1, *image.shape[2:]) preds = torch.zeros(preds_shape, dtype=image.dtype, device=image.device) for depth in range(image.shape[2]): preds[:, :, depth] = self.sliding_window_inference(image[:, :, depth]) return preds def sliding_window_inference(self, image): return sliding_window_inference( inputs=image, roi_size=self.patch_size, sw_batch_size=self.args.val_batch_size, predictor=self.model, overlap=self.args.overlap, mode=self.args.blend, ) def round(self, tensor): return round(torch.mean(tensor).item(), 2) def validation_epoch_end(self, outputs): if self.current_epoch < self.args.skip_first_n_eval: self.log("dice", 0.0, sync_dist=False) self.dice.reset() return None dice, loss = self.dice.compute() self.dice.reset() # Update metrics dice_mean = torch.mean(dice) if dice_mean >= self.best_mean: self.best_mean = dice_mean self.best_mean_dice = dice[:] self.best_epoch = self.current_epoch metrics = {} metrics["Dice"] = self.round(dice) metrics["Val Loss"] = self.round(loss) metrics["Max Dice"] = self.round(self.best_mean_dice) metrics["Best epoch"] = self.best_epoch metrics["Train Loss"] = ( 0 if len(self.train_loss) == 0 else round(sum(self.train_loss) / len(self.train_loss), 4) ) if self.n_class > 1: metrics.update({f"D{i+1}": self.round(m) for i, m in enumerate(dice)}) self.dllogger.log_metrics(step=self.current_epoch, metrics=metrics) self.dllogger.flush() self.log("dice", metrics["Dice"], sync_dist=False) def test_epoch_end(self, outputs): if self.args.exec_mode == "evaluate": self.eval_dice, _ = self.dice.compute() @rank_zero_only def on_fit_end(self): if not self.args.benchmark: metrics = {} metrics["dice_score"] = round(self.best_mean.item(), 2) metrics["train_loss"] = round(sum(self.train_loss) / len(self.train_loss), 4) metrics["val_loss"] = round(1 - self.best_mean.item() / 100, 4) metrics["Epoch"] = self.best_epoch self.dllogger.log_metrics(step=(), metrics=metrics) self.dllogger.flush() def configure_optimizers(self): optimizer = { "sgd": FusedSGD(self.parameters(), lr=self.learning_rate, momentum=self.args.momentum), "adam": FusedAdam(self.parameters(), lr=self.learning_rate, weight_decay=self.args.weight_decay), }[self.args.optimizer.lower()] if self.args.scheduler: scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, 4096, eta_min=8e-5) return {"optimizer": optimizer, "monitor": "val_loss", "lr_scheduler": scheduler} return {"optimizer": optimizer, "monitor": "val_loss"} def save_mask(self, pred): if self.test_idx == 0: data_path = get_data_path(self.args) self.test_imgs, _ = get_test_fnames(self.args, data_path) fname = os.path.basename(self.test_imgs[self.test_idx]).replace("_x", "") np.save(os.path.join(self.save_dir, fname), pred, allow_pickle=False) self.test_idx += 1 def get_train_data(self, batch): img, lbl = batch["image"], batch["label"] if self.args.dim == 2 and self.args.data2d_dim == 3: img, lbl = layout_2d(img, lbl) return img, lbl def layout_2d(img, lbl): batch_size, depth, channels, height, width = img.shape img = torch.reshape(img, (batch_size * depth, channels, height, width)) if lbl is not None: lbl = torch.reshape(lbl, (batch_size * depth, 1, height, width)) return img, lbl return img def flip(data, axis): return torch.flip(data, dims=axis)
TensorFlow/Classification/ConvNets/resnet50v1.5/training
training
DGX2_RN50_AMP_250E
#!/bin/bash
# Copyright (c) 2019 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

WORKSPACE=${1:-"/workspace/rn50v15_tf"}
DATA_DIR=${2:-"/data"}
OTHER=${@:3}

if [[ ! -z "${BIND_TO_SOCKET}" ]]; then
    BIND_TO_SOCKET="--bind-to socket"
fi

mpiexec --allow-run-as-root ${BIND_TO_SOCKET} -np 8 python3 main.py --arch=resnet50 \
    --mode=train_and_evaluate --iter_unit=epoch --num_iter=250 --mixup=0.2 \
    --batch_size=256 --warmup_steps=100 --cosine_lr --label_smoothing 0.1 \
    --lr_init=0.256 --lr_warmup_epochs=8 --momentum=0.875 --weight_decay=3.0517578125e-05 \
    --amp --static_loss_scale 128 \
    --data_dir=${DATA_DIR}/tfrecords --data_idx_dir=${DATA_DIR}/dali_idx \
    --results_dir=${WORKSPACE}/results --weight_init=fan_in ${OTHER}
TensorFlow/Detection/SSD/models/research/object_detection/dataset_tools
dataset_tools
oid_tfrecord_creation
# Copyright 2017 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== r"""Utilities for creating TFRecords of TF examples for the Open Images dataset. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from object_detection.core import standard_fields from object_detection.utils import dataset_util def tf_example_from_annotations_data_frame(annotations_data_frame, label_map, encoded_image): """Populates a TF Example message with image annotations from a data frame. Args: annotations_data_frame: Data frame containing the annotations for a single image. label_map: String to integer label map. encoded_image: The encoded image string Returns: The populated TF Example, if the label of at least one object is present in label_map. Otherwise, returns None. """ filtered_data_frame = annotations_data_frame[ annotations_data_frame.LabelName.isin(label_map)] filtered_data_frame_boxes = filtered_data_frame[ ~filtered_data_frame.YMin.isnull()] filtered_data_frame_labels = filtered_data_frame[ filtered_data_frame.YMin.isnull()] image_id = annotations_data_frame.ImageID.iloc[0] feature_map = { standard_fields.TfExampleFields.object_bbox_ymin: dataset_util.float_list_feature( filtered_data_frame_boxes.YMin.as_matrix()), standard_fields.TfExampleFields.object_bbox_xmin: dataset_util.float_list_feature( filtered_data_frame_boxes.XMin.as_matrix()), standard_fields.TfExampleFields.object_bbox_ymax: dataset_util.float_list_feature( filtered_data_frame_boxes.YMax.as_matrix()), standard_fields.TfExampleFields.object_bbox_xmax: dataset_util.float_list_feature( filtered_data_frame_boxes.XMax.as_matrix()), standard_fields.TfExampleFields.object_class_text: dataset_util.bytes_list_feature( filtered_data_frame_boxes.LabelName.as_matrix()), standard_fields.TfExampleFields.object_class_label: dataset_util.int64_list_feature( filtered_data_frame_boxes.LabelName.map(lambda x: label_map[x]) .as_matrix()), standard_fields.TfExampleFields.filename: dataset_util.bytes_feature('{}.jpg'.format(image_id)), standard_fields.TfExampleFields.source_id: dataset_util.bytes_feature(image_id), standard_fields.TfExampleFields.image_encoded: dataset_util.bytes_feature(encoded_image), } if 'IsGroupOf' in filtered_data_frame.columns: feature_map[standard_fields.TfExampleFields. object_group_of] = dataset_util.int64_list_feature( filtered_data_frame_boxes.IsGroupOf.as_matrix().astype(int)) if 'IsOccluded' in filtered_data_frame.columns: feature_map[standard_fields.TfExampleFields. object_occluded] = dataset_util.int64_list_feature( filtered_data_frame_boxes.IsOccluded.as_matrix().astype( int)) if 'IsTruncated' in filtered_data_frame.columns: feature_map[standard_fields.TfExampleFields. 
object_truncated] = dataset_util.int64_list_feature( filtered_data_frame_boxes.IsTruncated.as_matrix().astype( int)) if 'IsDepiction' in filtered_data_frame.columns: feature_map[standard_fields.TfExampleFields. object_depiction] = dataset_util.int64_list_feature( filtered_data_frame_boxes.IsDepiction.as_matrix().astype( int)) if 'ConfidenceImageLabel' in filtered_data_frame_labels.columns: feature_map[standard_fields.TfExampleFields. image_class_label] = dataset_util.int64_list_feature( filtered_data_frame_labels.LabelName.map( lambda x: label_map[x]).as_matrix()) feature_map[standard_fields.TfExampleFields. image_class_text] = dataset_util.bytes_list_feature( filtered_data_frame_labels.LabelName.as_matrix()), return tf.train.Example(features=tf.train.Features(feature=feature_map))
PyTorch/Detection/Efficientdet/effdet/layers
layers
activations
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Copyright 2019-2022 Ross Wightman
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import torch
from torch import nn as nn
from torch.nn import functional as F


def swish(x, inplace: bool = False):
    """Swish - Described in: https://arxiv.org/abs/1710.05941
    """
    return x.mul_(x.sigmoid()) if inplace else x.mul(x.sigmoid())


class Swish(nn.Module):
    def __init__(self, inplace: bool = False):
        super(Swish, self).__init__()
        self.inplace = inplace

    def forward(self, x):
        return swish(x, self.inplace)


def mish(x, inplace: bool = False):
    """Mish: A Self Regularized Non-Monotonic Neural Activation Function - https://arxiv.org/abs/1908.08681
    NOTE: I don't have a working inplace variant
    """
    return x.mul(F.softplus(x).tanh())


class Mish(nn.Module):
    """Mish: A Self Regularized Non-Monotonic Neural Activation Function - https://arxiv.org/abs/1908.08681
    """
    def __init__(self, inplace: bool = False):
        super(Mish, self).__init__()

    def forward(self, x):
        return mish(x)


def sigmoid(x, inplace: bool = False):
    return x.sigmoid_() if inplace else x.sigmoid()


# PyTorch has this, but not with a consistent inplace argument interface
class Sigmoid(nn.Module):
    def __init__(self, inplace: bool = False):
        super(Sigmoid, self).__init__()
        self.inplace = inplace

    def forward(self, x):
        return x.sigmoid_() if self.inplace else x.sigmoid()


def tanh(x, inplace: bool = False):
    return x.tanh_() if inplace else x.tanh()


# PyTorch has this, but not with a consistent inplace argument interface
class Tanh(nn.Module):
    def __init__(self, inplace: bool = False):
        super(Tanh, self).__init__()
        self.inplace = inplace

    def forward(self, x):
        return x.tanh_() if self.inplace else x.tanh()


def hard_swish(x, inplace: bool = False):
    inner = F.relu6(x + 3.).div_(6.)
    return x.mul_(inner) if inplace else x.mul(inner)


class HardSwish(nn.Module):
    def __init__(self, inplace: bool = False):
        super(HardSwish, self).__init__()
        self.inplace = inplace

    def forward(self, x):
        return hard_swish(x, self.inplace)


def hard_sigmoid(x, inplace: bool = False):
    if inplace:
        return x.add_(3.).clamp_(0., 6.).div_(6.)
    else:
        return F.relu6(x + 3.) / 6.


class HardSigmoid(nn.Module):
    def __init__(self, inplace: bool = False):
        super(HardSigmoid, self).__init__()
        self.inplace = inplace

    def forward(self, x):
        return hard_sigmoid(x, self.inplace)


def hard_mish(x, inplace: bool = False):
    """ Hard Mish
    Experimental, based on notes by Mish author Diganta Misra at
      https://github.com/digantamisra98/H-Mish/blob/0da20d4bc58e696b6803f2523c58d3c8a82782d0/README.md
    """
    if inplace:
        return x.mul_(0.5 * (x + 2).clamp(min=0, max=2))
    else:
        return 0.5 * x * (x + 2).clamp(min=0, max=2)


class HardMish(nn.Module):
    def __init__(self, inplace: bool = False):
        super(HardMish, self).__init__()
        self.inplace = inplace

    def forward(self, x):
        return hard_mish(x, self.inplace)
TensorFlow/Detection/SSD/models/research/slim/scripts
scripts
train_lenet_on_mnist
#!/bin/bash # Copyright 2017 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== # # This script performs the following operations: # 1. Downloads the MNIST dataset # 2. Trains a LeNet model on the MNIST training set. # 3. Evaluates the model on the MNIST testing set. # # Usage: # cd slim # ./slim/scripts/train_lenet_on_mnist.sh set -e # Where the checkpoint and logs will be saved to. TRAIN_DIR=/tmp/lenet-model # Where the dataset is saved to. DATASET_DIR=/tmp/mnist # Download the dataset python download_and_convert_data.py \ --dataset_name=mnist \ --dataset_dir=${DATASET_DIR} # Run training. python train_image_classifier.py \ --train_dir=${TRAIN_DIR} \ --dataset_name=mnist \ --dataset_split_name=train \ --dataset_dir=${DATASET_DIR} \ --model_name=lenet \ --preprocessing_name=lenet \ --max_number_of_steps=20000 \ --batch_size=50 \ --learning_rate=0.01 \ --save_interval_secs=60 \ --save_summaries_secs=60 \ --log_every_n_steps=100 \ --optimizer=sgd \ --learning_rate_decay_type=fixed \ --weight_decay=0 # Run evaluation. python eval_image_classifier.py \ --checkpoint_path=${TRAIN_DIR} \ --eval_dir=${TRAIN_DIR} \ --dataset_name=mnist \ --dataset_split_name=test \ --dataset_dir=${DATASET_DIR} \ --model_name=lenet
PyTorch/Segmentation/MaskRCNN/pytorch/maskrcnn_benchmark/utils
utils
cv2_util
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
"""
Module for cv2 utility functions and maintaining version compatibility
between 3.x and 4.x
"""
import cv2


def findContours(*args, **kwargs):
    """
    Wraps cv2.findContours to maintain compatibility between versions 3 and 4

    Returns:
        contours, hierarchy
    """
    if cv2.__version__.startswith('4'):
        contours, hierarchy = cv2.findContours(*args, **kwargs)
    elif cv2.__version__.startswith('3'):
        _, contours, hierarchy = cv2.findContours(*args, **kwargs)
    else:
        raise AssertionError(
            'cv2 must be either version 3 or 4 to call this method')
    return contours, hierarchy
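A short usage sketch under stated assumptions (NumPy and OpenCV installed, `findContours` above in scope; the mask is synthetic), showing that callers get the same return signature on cv2 3.x and 4.x:

```
import cv2
import numpy as np

# Toy binary mask with one filled square; in MaskRCNN this would be a
# predicted instance mask.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 16:48] = 255

# The wrapper hides the cv2 3.x / 4.x return-signature difference.
contours, hierarchy = findContours(
    mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
)
print(len(contours), contours[0].shape)
```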
PyTorch/Recommendation/DLRM/dlrm/cuda_ext
cuda_ext
sparse_embedding
# Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import copy

import torch
from torch.cuda import amp
from dlrm.cuda_ext import sparse_gather
from torch import nn
from torch.autograd import Function


class EmbeddingGatherFunction(Function):
    """Customized embedding gather with fused plain SGD"""
    @staticmethod
    @torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
    def forward(ctx, embedding, indices):
        output = sparse_gather.gather_gpu_fwd(embedding, indices)
        ctx.save_for_backward(indices)
        ctx.num_features = embedding.size(0)
        return output

    @staticmethod
    @torch.cuda.amp.custom_bwd
    def backward(ctx, grad_output):
        indices = ctx.saved_tensors[0]
        grad_embedding = sparse_gather.gather_gpu_bwd(grad_output, indices, ctx.num_features)
        return grad_embedding, None


class JointSparseEmbedding(nn.Module):
    """Joint multiple one hot embedding together

    Multiple one hot embedding can be done as one embedding (indexing).

    Args:
        categorical_feature_sizes (list): A list of integer indicating number of features of each embedding table
        embedding_dim (int): the size of each embedding vector
        device (torch.device): where to create the embedding. Default "cuda"
    """
    def __init__(self, categorical_feature_sizes, embedding_dim, device="cuda"):
        super(JointSparseEmbedding, self).__init__()
        self.embedding_dim = embedding_dim
        self.categorical_feature_sizes = copy.copy(categorical_feature_sizes)
        self.register_buffer("offsets", torch.tensor([0] + categorical_feature_sizes).cumsum(0).to(device))

        self.weights = torch.nn.Parameter(torch.rand((self.offsets[-1].item(), embedding_dim), device=device))

    def forward(self, categorical_inputs):
        # Check input has the right shape
        assert categorical_inputs.shape[1] == len(self.categorical_feature_sizes)

        embedding_out = embedding_gather(self.weights, categorical_inputs + self.offsets[:-1])

        return embedding_out


embedding_gather = EmbeddingGatherFunction.apply
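A toy illustration of the offset trick used by `JointSparseEmbedding` above: all per-feature tables are stacked into one weight matrix, and each feature's indices are shifted by the cumulative table sizes. Plain tensor indexing stands in for the fused `sparse_gather` CUDA kernel, and all sizes are made up.

```
import torch

# Three categorical features with 4, 3 and 5 distinct values share one
# weight matrix of 4 + 3 + 5 = 12 rows.
categorical_feature_sizes = [4, 3, 5]
embedding_dim = 2
offsets = torch.tensor([0] + categorical_feature_sizes).cumsum(0)   # [0, 4, 7, 12]
weights = torch.rand(offsets[-1].item(), embedding_dim)

# One sample: value 1 of feature 0, value 2 of feature 1, value 0 of feature 2.
categorical_inputs = torch.tensor([[1, 2, 0]])
flat_indices = categorical_inputs + offsets[:-1]                    # [[1, 6, 7]]

# Plain indexing stands in for the sparse_gather CUDA extension.
embedding_out = weights[flat_indices]                               # shape (1, 3, 2)
print(flat_indices, embedding_out.shape)
```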
PyTorch/LanguageModeling/BERT/triton/runner
runner
exceptions
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. class RunnerException(Exception): """ Runner Exception """ def __init__(self, message: str): self._message = message def __str__(self): return self._message @property def message(self): """Get the exception message. Returns ------- str The message associated with this exception, or None if no message. """ return self._message
PyTorch/SpeechSynthesis/FastPitch/phrases
phrases
phrase_bilingual
nokia有跟facebook簽約。 讓net backup同時強化重覆刪除和資料搜尋功能。 classic仍有一定的價值。 資料代管商ball的虛擬化工具。 針對vmware虛擬化環境的基本功能。 這跟微軟bing有何關連? 由ben toyota所寫的the accidental billionaires。 v d s技術提供一個如同伺服器般的獨立操作系統環境。 專利設計通過美國f d a認證與臨床測試。 你可直接把圖片丟進wave訊息中。 由前英國陸軍軍官neil laughton領軍。 這次android版也沿用了同樣的輸入法。 facebook新註冊用戶。 現在android跟iphone都支援這項功能。 o r g的經理maxim weinstein。 但本來就甚少舉辦活動的kingston金士頓。 touchstone充電系統是還蠻酷的技術。 雖然caspian市佔率不斷下滑。 第一隻中文化的google android手機。 因為google自家已經有android的同級競爭產品。
TensorFlow/Detection/SSD/models/research/slim/nets
nets
cifarnet
# Copyright 2016 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """Contains a variant of the CIFAR-10 model definition.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf slim = tf.contrib.slim trunc_normal = lambda stddev: tf.truncated_normal_initializer(stddev=stddev) def cifarnet(images, num_classes=10, is_training=False, dropout_keep_prob=0.5, prediction_fn=slim.softmax, scope='CifarNet'): """Creates a variant of the CifarNet model. Note that since the output is a set of 'logits', the values fall in the interval of (-infinity, infinity). Consequently, to convert the outputs to a probability distribution over the characters, one will need to convert them using the softmax function: logits = cifarnet.cifarnet(images, is_training=False) probabilities = tf.nn.softmax(logits) predictions = tf.argmax(logits, 1) Args: images: A batch of `Tensors` of size [batch_size, height, width, channels]. num_classes: the number of classes in the dataset. If 0 or None, the logits layer is omitted and the input features to the logits layer are returned instead. is_training: specifies whether or not we're currently training the model. This variable will determine the behaviour of the dropout layer. dropout_keep_prob: the percentage of activation values that are retained. prediction_fn: a function to get predictions out of logits. scope: Optional variable_scope. Returns: net: a 2D Tensor with the logits (pre-softmax activations) if num_classes is a non-zero integer, or the input to the logits layer if num_classes is 0 or None. end_points: a dictionary from components of the network to the corresponding activation. 
""" end_points = {} with tf.variable_scope(scope, 'CifarNet', [images]): net = slim.conv2d(images, 64, [5, 5], scope='conv1') end_points['conv1'] = net net = slim.max_pool2d(net, [2, 2], 2, scope='pool1') end_points['pool1'] = net net = tf.nn.lrn(net, 4, bias=1.0, alpha=0.001/9.0, beta=0.75, name='norm1') net = slim.conv2d(net, 64, [5, 5], scope='conv2') end_points['conv2'] = net net = tf.nn.lrn(net, 4, bias=1.0, alpha=0.001/9.0, beta=0.75, name='norm2') net = slim.max_pool2d(net, [2, 2], 2, scope='pool2') end_points['pool2'] = net net = slim.flatten(net) end_points['Flatten'] = net net = slim.fully_connected(net, 384, scope='fc3') end_points['fc3'] = net net = slim.dropout(net, dropout_keep_prob, is_training=is_training, scope='dropout3') net = slim.fully_connected(net, 192, scope='fc4') end_points['fc4'] = net if not num_classes: return net, end_points logits = slim.fully_connected(net, num_classes, biases_initializer=tf.zeros_initializer(), weights_initializer=trunc_normal(1/192.0), weights_regularizer=None, activation_fn=None, scope='logits') end_points['Logits'] = logits end_points['Predictions'] = prediction_fn(logits, scope='Predictions') return logits, end_points cifarnet.default_image_size = 32 def cifarnet_arg_scope(weight_decay=0.004): """Defines the default cifarnet argument scope. Args: weight_decay: The weight decay to use for regularizing the model. Returns: An `arg_scope` to use for the inception v3 model. """ with slim.arg_scope( [slim.conv2d], weights_initializer=tf.truncated_normal_initializer(stddev=5e-2), activation_fn=tf.nn.relu): with slim.arg_scope( [slim.fully_connected], biases_initializer=tf.constant_initializer(0.1), weights_initializer=trunc_normal(0.04), weights_regularizer=slim.l2_regularizer(weight_decay), activation_fn=tf.nn.relu) as sc: return sc
PyTorch/SpeechSynthesis/FastPitch/fastpitch
fastpitch
attn_loss_function
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import torch import torch.nn as nn import torch.nn.functional as F class AttentionCTCLoss(torch.nn.Module): def __init__(self, blank_logprob=-1): super(AttentionCTCLoss, self).__init__() self.log_softmax = torch.nn.LogSoftmax(dim=-1) self.blank_logprob = blank_logprob self.CTCLoss = nn.CTCLoss(zero_infinity=True) def forward(self, attn_logprob, in_lens, out_lens): key_lens = in_lens query_lens = out_lens max_key_len = attn_logprob.size(-1) # Reorder input to [query_len, batch_size, key_len] attn_logprob = attn_logprob.squeeze(1) attn_logprob = attn_logprob.permute(1, 0, 2) # Add blank label attn_logprob = F.pad( input=attn_logprob, pad=(1, 0, 0, 0, 0, 0), value=self.blank_logprob) # Convert to log probabilities # Note: Mask out probs beyond key_len key_inds = torch.arange( max_key_len+1, device=attn_logprob.device, dtype=torch.long) attn_logprob.masked_fill_( key_inds.view(1,1,-1) > key_lens.view(1,-1,1), # key_inds >= key_lens+1 -float("inf")) attn_logprob = self.log_softmax(attn_logprob) # Target sequences target_seqs = key_inds[1:].unsqueeze(0) target_seqs = target_seqs.repeat(key_lens.numel(), 1) # Evaluate CTC loss cost = self.CTCLoss( attn_logprob, target_seqs, input_lengths=query_lens, target_lengths=key_lens) return cost class AttentionBinarizationLoss(torch.nn.Module): def __init__(self): super(AttentionBinarizationLoss, self).__init__() def forward(self, hard_attention, soft_attention, eps=1e-12): log_sum = torch.log(torch.clamp(soft_attention[hard_attention == 1], min=eps)).sum() return -log_sum / hard_attention.sum()
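A shape-checked toy invocation of the two losses above, with made-up lengths, to make the expected layout explicit: `attn_logprob` is (batch, 1, max_mel_len, max_text_len), `in_lens` are text (key) lengths and `out_lens` are mel (query) lengths. The loss classes are assumed to be in scope.

```
import torch

B, max_text_len, max_mel_len = 2, 5, 12
attn_logprob = torch.randn(B, 1, max_mel_len, max_text_len)
in_lens = torch.tensor([5, 3])     # text (key) lengths
out_lens = torch.tensor([12, 9])   # mel (query) lengths, >= text lengths for CTC

ctc_loss = AttentionCTCLoss()(attn_logprob, in_lens, out_lens)

# Binarization loss compares a hard (0/1) alignment against soft attention probs.
soft_attn = torch.softmax(torch.randn(B, 1, max_mel_len, max_text_len), dim=3)
hard_attn = torch.zeros_like(soft_attn)
hard_attn[..., 0] = 1.0            # degenerate alignment: every frame to token 0
bin_loss = AttentionBinarizationLoss()(hard_attn, soft_attn)
print(ctc_loss.item(), bin_loss.item())
```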
PyTorch/LanguageModeling/BERT/triton/large/runner
runner
start_NVIDIA-DGX-A100-(1x-A100-80GB)
#!/bin/bash
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Install Docker
. /etc/os-release && \
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - && \
echo "deb [arch=amd64] https://download.docker.com/linux/debian buster stable" > /etc/apt/sources.list.d/docker.list && \
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey| apt-key add - && \
curl -s -L https://nvidia.github.io/nvidia-docker/$ID$VERSION_ID/nvidia-docker.list > /etc/apt/sources.list.d/nvidia-docker.list && \
apt-get update && \
apt-get install -y docker-ce docker-ce-cli containerd.io nvidia-docker2

# Install packages
pip install -r triton/runner/requirements.txt

# Evaluate Runner
python3 -m "triton.large.runner.__main__" \
    --config-path "triton/large/runner/config_NVIDIA-DGX-A100-(1x-A100-80GB).yaml" \
    --device 0
TensorFlow/Segmentation/UNet_3D_Medical/dataset
dataset
data_loader
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Data loader """ import os import numpy as np import horovod.tensorflow as hvd import tensorflow as tf from dataset.transforms import NormalizeImages, OneHotLabels, apply_transforms, PadXYZ, RandomCrop3D, \ RandomHorizontalFlip, RandomBrightnessCorrection, CenterCrop, \ apply_test_transforms, Cast CLASSES = {0: "tumor_core", 1: "peritumoral_edema", 2: "enhancing_tumor"} def cross_validation(arr: np.ndarray, fold_idx: int, n_folds: int): """ Split data into folds for training and evaluation :param arr: Collection items to split :param fold_idx: Index of crossvalidation fold :param n_folds: Total number of folds :return: Train and Evaluation folds """ if fold_idx < 0 or fold_idx >= n_folds: raise ValueError('Fold index has to be [0, n_folds). Received index {} for {} folds'.format(fold_idx, n_folds)) _folders = np.array_split(arr, n_folds) return np.concatenate(_folders[:fold_idx] + _folders[fold_idx + 1:]), _folders[fold_idx] class Dataset: # pylint: disable=R0902 """ Class responsible for the data loading during training, inference and evaluation """ def __init__(self, data_dir, batch_size=2, input_shape=(128, 128, 128), # pylint: disable=R0913 fold_idx=0, n_folds=5, seed=0, params=None): """ Creates and configures the dataset :param data_dir: Directory where the data is stored :param batch_size: Number of pairs to be provided by batch :param input_shape: Dimension of the input to the model :param fold_idx: Fold index for crossvalidation :param n_folds: Total number of folds in crossvalidation :param seed: Random seed :param params: Dictionary with additional configuration parameters """ self._folders = np.array([os.path.join(data_dir, path) for path in os.listdir(data_dir) if path.endswith(".tfrecords")]) assert len(self._folders) > 0, "No matching data found at {}".format(data_dir) self._train, self._eval = cross_validation(self._folders, fold_idx=fold_idx, n_folds=n_folds) self._input_shape = input_shape self._data_dir = data_dir self.params = params self._batch_size = batch_size self._seed = seed self._xshape = (240, 240, 155, 4) self._yshape = (240, 240, 155) def parse(self, serialized): """ Parse TFRecord :param serialized: Serialized record for a particular example :return: sample, label, mean and std of intensities """ features = { 'X': tf.io.FixedLenFeature([], tf.string), 'Y': tf.io.FixedLenFeature([], tf.string), 'mean': tf.io.FixedLenFeature([4], tf.float32), 'stdev': tf.io.FixedLenFeature([4], tf.float32) } parsed_example = tf.io.parse_single_example(serialized=serialized, features=features) sample = tf.io.decode_raw(parsed_example['X'], tf.uint8) sample = tf.cast(tf.reshape(sample, self._xshape), tf.uint8) label = tf.io.decode_raw(parsed_example['Y'], tf.uint8) label = tf.cast(tf.reshape(label, self._yshape), tf.uint8) mean = parsed_example['mean'] stdev = parsed_example['stdev'] return sample, label, mean, stdev def parse_x(self, serialized): """ Parse only the sample in a 
TFRecord with sample and label :param serialized: :return: sample, mean and std of intensities """ features = {'X': tf.io.FixedLenFeature([], tf.string), 'Y': tf.io.FixedLenFeature([], tf.string), 'mean': tf.io.FixedLenFeature([4], tf.float32), 'stdev': tf.io.FixedLenFeature([4], tf.float32)} parsed_example = tf.io.parse_single_example(serialized=serialized, features=features) sample = tf.io.decode_raw(parsed_example['X'], tf.uint8) sample = tf.cast(tf.reshape(sample, self._xshape), tf.uint8) mean = parsed_example['mean'] stdev = parsed_example['stdev'] return sample, mean, stdev def train_fn(self): """ Create dataset for training """ if 'debug' in self.params.exec_mode: return self.synth_train_fn() assert len(self._train) > 0, "Training data not found." dataset = tf.data.TFRecordDataset(filenames=self._train) dataset = dataset.shard(hvd.size(), hvd.rank()) dataset = dataset.cache() dataset = dataset.shuffle(buffer_size=self._batch_size * 8, seed=self._seed) dataset = dataset.repeat() dataset = dataset.map(self.parse, num_parallel_calls=tf.data.experimental.AUTOTUNE) transforms = [ RandomCrop3D(self._input_shape), RandomHorizontalFlip() if self.params.augment else None, Cast(dtype=tf.float32), NormalizeImages(), RandomBrightnessCorrection() if self.params.augment else None, OneHotLabels(n_classes=4), ] dataset = dataset.map( map_func=lambda x, y, mean, stdev: apply_transforms(x, y, mean, stdev, transforms=transforms), num_parallel_calls=tf.data.experimental.AUTOTUNE) dataset = dataset.batch(batch_size=self._batch_size, drop_remainder=True) dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE) if self._batch_size == 1: options = dataset.options() options.experimental_optimization.map_and_batch_fusion = False dataset = dataset.with_options(options) return dataset def eval_fn(self): """ Create dataset for evaluation """ dataset = tf.data.TFRecordDataset(filenames=self._eval) assert len(self._eval) > 0, "Evaluation data not found. Did you specify --fold flag?" dataset = dataset.cache() dataset = dataset.map(self.parse, num_parallel_calls=tf.data.experimental.AUTOTUNE) transforms = [ CenterCrop((224, 224, 155)), Cast(dtype=tf.float32), NormalizeImages(), OneHotLabels(n_classes=4), PadXYZ() ] dataset = dataset.map( map_func=lambda x, y, mean, stdev: apply_transforms(x, y, mean, stdev, transforms=transforms), num_parallel_calls=tf.data.experimental.AUTOTUNE) dataset = dataset.batch(batch_size=self._batch_size, drop_remainder=False) dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE) return dataset def test_fn(self): """ Create dataset for inference """ if 'debug' in self.params.exec_mode: return self.synth_predict_fn() count = 1 if not self.params.benchmark \ else 2 * self.params.warmup_steps * self.params.batch_size // self.test_size dataset = tf.data.TFRecordDataset(filenames=self._eval) assert len(self._eval) > 0, "Evaluation data not found. Did you specify --fold flag?" 
dataset = dataset.repeat(count) dataset = dataset.map(self.parse_x, num_parallel_calls=tf.data.experimental.AUTOTUNE) transforms = [ CenterCrop((224, 224, 155)), Cast(dtype=tf.float32), NormalizeImages(), PadXYZ((224, 224, 160)) ] dataset = dataset.map( map_func=lambda x, mean, stdev: apply_test_transforms(x, mean, stdev, transforms=transforms), num_parallel_calls=tf.data.experimental.AUTOTUNE) dataset = dataset.batch(batch_size=self._batch_size, drop_remainder=self.params.benchmark) dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE) return dataset def export_fn(self): """ Create dataset for calibrating and exporting """ dataset = tf.data.TFRecordDataset(filenames=self._eval) assert len(self._eval) > 0, "Evaluation data not found. Did you specify --fold flag?" dataset = dataset.repeat(1) dataset = dataset.map(self.parse_x, num_parallel_calls=tf.data.experimental.AUTOTUNE) transforms = [ CenterCrop((224, 224, 155)), Cast(dtype=tf.float32), NormalizeImages(), PadXYZ((224, 224, 160)) ] dataset = dataset.map( map_func=lambda x, mean, stdev: apply_test_transforms(x, mean, stdev, transforms=transforms), num_parallel_calls=tf.data.experimental.AUTOTUNE) dataset = dataset.batch(batch_size=self._batch_size, drop_remainder=True) dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE) return dataset def synth_train_fn(self): """ Synthetic data function for training """ inputs = tf.random.uniform(self._xshape, dtype=tf.int32, minval=0, maxval=255, seed=self._seed, name='synth_inputs') masks = tf.random.uniform(self._yshape, dtype=tf.int32, minval=0, maxval=4, seed=self._seed, name='synth_masks') mean = tf.random.uniform((4,), dtype=tf.float32, minval=0, maxval=255, seed=self._seed) stddev = tf.random.uniform((4,), dtype=tf.float32, minval=0, maxval=1, seed=self._seed) dataset = tf.data.Dataset.from_tensors((inputs, masks)) dataset = dataset.repeat() transforms = [ Cast(dtype=tf.uint8), RandomCrop3D((128, 128, 128)), RandomHorizontalFlip() if self.params.augment else None, Cast(dtype=tf.float32), NormalizeImages(), RandomBrightnessCorrection() if self.params.augment else None, OneHotLabels(n_classes=4), ] dataset = dataset.map(map_func=lambda x, y: apply_transforms(x, y, mean, stddev, transforms), num_parallel_calls=tf.data.experimental.AUTOTUNE) dataset = dataset.batch(self._batch_size) dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE) return dataset def synth_predict_fn(self): """Synthetic data function for testing""" inputs = tf.random.truncated_normal((224, 224, 160, 4), dtype=tf.float32, mean=0.0, stddev=1.0, seed=self._seed, name='synth_inputs') count = 2 * self.params.warmup_steps dataset = tf.data.Dataset.from_tensors(inputs) dataset = dataset.repeat(count) dataset = dataset.batch(self._batch_size) dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE) return dataset @property def train_size(self): """ Number of pairs in the training set """ return len(self._train) @property def eval_size(self): """ Number of pairs in the validation set """ return len(self._eval) @property def test_size(self): """ Number of pairs in the test set """ return len(self._eval)
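A small worked example of the `cross_validation` helper above (pure NumPy; the file names are fabricated), showing which shards land in the training versus evaluation fold:

```
import numpy as np

# Stand-in for the .tfrecords paths the Dataset class collects.
files = np.array([f"volume-{i}.tfrecords" for i in range(10)])

train, val = cross_validation(files, fold_idx=1, n_folds=5)
# 10 files / 5 folds -> folds of 2; fold_idx=1 holds out files 2 and 3.
print(sorted(val.tolist()))    # ['volume-2.tfrecords', 'volume-3.tfrecords']
print(len(train), len(val))    # 8 2
```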
TensorFlow2/LanguageModeling/ELECTRA
ELECTRA
gpu_affinity
# Copyright (c) 2020 NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math import os import pynvml pynvml.nvmlInit() def systemGetDriverVersion(): return pynvml.nvmlSystemGetDriverVersion() def deviceGetCount(): return pynvml.nvmlDeviceGetCount() class device: # assume nvml returns list of 64 bit ints _nvml_affinity_elements = math.ceil(os.cpu_count() / 64) def __init__(self, device_idx): super().__init__() self.handle = pynvml.nvmlDeviceGetHandleByIndex(device_idx) def getName(self): return pynvml.nvmlDeviceGetName(self.handle) def getCpuAffinity(self): affinity_string = '' for j in pynvml.nvmlDeviceGetCpuAffinity( self.handle, device._nvml_affinity_elements ): # assume nvml returns list of 64 bit ints affinity_string = '{:064b}'.format(j) + affinity_string affinity_list = [int(x) for x in affinity_string] affinity_list.reverse() # so core 0 is in 0th element of list return [i for i, e in enumerate(affinity_list) if e != 0] def set_affinity(gpu_id=None): if gpu_id is None: gpu_id = int(os.getenv('LOCAL_RANK', 0)) dev = device(gpu_id) os.sched_setaffinity(0, dev.getCpuAffinity()) # list of ints representing the logical cores this process is now affinitied with return os.sched_getaffinity(0)
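The heart of `getCpuAffinity` above is converting NVML's 64-bit affinity words into a list of CPU core ids. A pure-Python sketch of that bit manipulation with a made-up mask, checkable without a GPU or pynvml:

```
# Toy affinity mask: one 64-bit word with cores 0-3 and 20-23 set, mimicking
# what pynvml.nvmlDeviceGetCpuAffinity might return for a GPU.
words = [0x0000_0000_00F0_000F]

affinity_string = ''
for j in words:
    # Each word is rendered as 64 bits; later words describe higher core ids,
    # so they are prepended, exactly as in device.getCpuAffinity above.
    affinity_string = '{:064b}'.format(j) + affinity_string

affinity_list = [int(x) for x in affinity_string]
affinity_list.reverse()                      # core 0 ends up at index 0
cores = [i for i, e in enumerate(affinity_list) if e != 0]
print(cores)                                 # [0, 1, 2, 3, 20, 21, 22, 23]
```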
Tools/PyTorch/TimeSeriesPredictionPlatform/conf/trainer/optimizer
optimizer
Adagrad
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. _target_: torch.optim.Adagrad lr: 0.01 lr_decay: 0.0 weight_decay: 0.0 eps: 1e-10
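This is a Hydra-style node: `_target_` names the class to construct and the remaining keys become constructor arguments. A hedged sketch of how such a node is typically instantiated (the linear model is a stand-in; assumes `hydra-core` and `omegaconf` are installed):

```
import torch
from hydra.utils import instantiate
from omegaconf import OmegaConf

optimizer_cfg = OmegaConf.create({
    "_target_": "torch.optim.Adagrad",
    "lr": 0.01,
    "lr_decay": 0.0,
    "weight_decay": 0.0,
    "eps": 1e-10,
})

model = torch.nn.Linear(4, 1)    # stand-in for the forecasting model
optimizer = instantiate(optimizer_cfg, params=model.parameters())
print(type(optimizer).__name__, optimizer.defaults["lr"])
```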
TensorFlow2/Recommendation/WideAndDeep/tests/feature_specs
feature_specs
more_onehot
channel_spec: label: - clicked map: [] multihot_categorical: - topic_id_list - entity_id_list - category_id_list numerical: - document_id_document_id_promo_sim_categories - document_id_document_id_promo_sim_topics - document_id_document_id_promo_sim_entities - document_id_promo_ctr - publisher_id_promo_ctr - source_id_promo_ctr - document_id_promo_count - publish_time_days_since_published - ad_id_ctr - advertiser_id_ctr - campaign_id_ctr - ad_id_count - publish_time_promo_days_since_published onehot_categorical: - ad_id - document_id - platform - document_id_promo - campaign_id - advertiser_id - source_id - geo_location - geo_location_country - geo_location_state - publisher_id - source_id_promo - publisher_id_promo - additional_1 - additional_2 - additional_3 feature_spec: ad_id: cardinality: 250000 ad_id_count: {} ad_id_ctr: {} advertiser_id: cardinality: 2500 advertiser_id_ctr: {} campaign_id: cardinality: 5000 campaign_id_ctr: {} category_id_list: cardinality: 100 max_hotness: 3 clicked: {} document_id: cardinality: 300000 document_id_document_id_promo_sim_categories: {} document_id_document_id_promo_sim_entities: {} document_id_document_id_promo_sim_topics: {} document_id_promo: cardinality: 100000 document_id_promo_count: {} document_id_promo_ctr: {} entity_id_list: cardinality: 10000 max_hotness: 3 geo_location: cardinality: 2500 geo_location_country: cardinality: 300 geo_location_state: cardinality: 2000 platform: cardinality: 4 publish_time_days_since_published: {} publish_time_promo_days_since_published: {} publisher_id: cardinality: 1000 publisher_id_promo: cardinality: 1000 publisher_id_promo_ctr: {} source_id: cardinality: 4000 source_id_promo: cardinality: 4000 source_id_promo_ctr: {} topic_id_list: cardinality: 350 max_hotness: 3 additional_1: cardinality: 4000 additional_2: cardinality: 1337 additional_3: cardinality: 9999 metadata: {} source_spec: test: - features: - clicked - ad_id - document_id - platform - document_id_promo - campaign_id - advertiser_id - source_id - geo_location - geo_location_country - geo_location_state - publisher_id - source_id_promo - publisher_id_promo - topic_id_list - entity_id_list - category_id_list - document_id_document_id_promo_sim_categories - document_id_document_id_promo_sim_topics - document_id_document_id_promo_sim_entities - document_id_promo_ctr - publisher_id_promo_ctr - source_id_promo_ctr - document_id_promo_count - publish_time_days_since_published - ad_id_ctr - advertiser_id_ctr - campaign_id_ctr - ad_id_count - publish_time_promo_days_since_published - additional_1 - additional_2 - additional_3 files: - valid.csv type: csv train: - features: - clicked - ad_id - document_id - platform - document_id_promo - campaign_id - advertiser_id - source_id - geo_location - geo_location_country - geo_location_state - publisher_id - source_id_promo - publisher_id_promo - topic_id_list - entity_id_list - category_id_list - document_id_document_id_promo_sim_categories - document_id_document_id_promo_sim_topics - document_id_document_id_promo_sim_entities - document_id_promo_ctr - publisher_id_promo_ctr - source_id_promo_ctr - document_id_promo_count - publish_time_days_since_published - ad_id_ctr - advertiser_id_ctr - campaign_id_ctr - ad_id_count - publish_time_promo_days_since_published - additional_1 - additional_2 - additional_3 files: - train.csv type: csv
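A small sanity-check sketch for a spec like the one above (assuming it is saved as a hypothetical `feature_spec.yaml`), verifying that every feature referenced in `channel_spec` has an entry in `feature_spec`:

```
import yaml

with open("feature_spec.yaml") as f:          # hypothetical file name
    spec = yaml.safe_load(f)

declared = set(spec["feature_spec"].keys())
for channel, features in spec["channel_spec"].items():
    missing = [feat for feat in features if feat not in declared]
    print(f"{channel}: {len(features)} features, missing: {missing}")

onehot = spec["channel_spec"]["onehot_categorical"]
print("total one-hot cardinality:",
      sum(spec["feature_spec"][f]["cardinality"] for f in onehot))
```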
PyTorch/Classification/GPUNet/triton/175ms/runner
runner
__main__
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import pathlib
from typing import List

if __name__ == "__main__" and __package__ is None:
    __package__ = pathlib.Path(__file__).parent.name

from ...runner.config import Config
from ...runner.executor import Executor
from ...runner.finalizer import ExperimentFinalizer
from ...runner.maintainer import DockerMaintainer
from ...runner.preparer import ExperimentPreparer
from ...runner.runner_proxy import RunnerProxy

from .pipeline_impl import pipeline


class ExperimentRunner(RunnerProxy):
    """
    Experiment Runner proxy for runner wrapper
    """

    maintainer_cls = DockerMaintainer
    executor_cls = Executor
    preparer_cls = ExperimentPreparer
    finalizer_cls = ExperimentFinalizer


def execute(config_path: str, devices: List[str]):
    if len(devices) == 0:
        devices = ["0"]

    config = Config.from_file(config_path)
    runner = ExperimentRunner(config=config, pipeline=pipeline, devices=devices)
    runner.start()


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--config-path", type=str, required=True, help="Path to configuration file with details.")
    parser.add_argument(
        "--devices", type=str, nargs="*", required=False, help="List of devices on which the experiments will be executed."
    )

    args = parser.parse_args()

    config_path = args.config_path
    devices = args.devices

    execute(config_path, devices)
TensorFlow2/Detection/Efficientdet/scripts/D0
D0
training-benchmark-AMP-A100-80G
#!/bin/bash # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. bs=200 ep=1 lr=2.2 wu=25 ema=0.999 momentum=0.93 visible_devices=$(seq -s, 0 $((${NGPU:-8}-1))) mkdir -p /tmp/training-benchmark-AMP-A100-80G rm -rf /tmp/training-benchmark-AMP-A100-80G/* mpirun -np ${NGPU:-8} --allow-run-as-root --bind-to none \ -map-by slot -x LD_LIBRARY_PATH -x PATH \ -mca pml ob1 -mca btl ^openib \ -x CUDA_VISIBLE_DEVICES=$visible_devices \ python3 train.py \ --training_file_pattern=/workspace/coco/train-* \ --val_file_pattern=/workspace/coco/val-* \ --val_json_file=/workspace/coco/annotations/instances_val2017.json \ --model_name=efficientdet-d0 \ --model_dir=/tmp/training-benchmark-AMP-A100-80G \ --backbone_init=/workspace/checkpoints/efficientnet-b0-joc \ --batch_size=$bs \ --num_epochs=$ep \ --use_xla=True \ --amp=True \ --lr=$lr \ --warmup_epochs=$wu \ --benchmark=True \ --benchmark_steps=500 \ --hparams="moving_average_decay=$ema,momentum=$momentum" \ 2>&1 | tee /tmp/training-benchmark-AMP-A100-80G/train-benchmark.log
PyTorch/LanguageModeling/BERT/triton/dist4l/scripts/docker
docker
interactive
#!/usr/bin/env bash # Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. NVIDIA_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES:=0} docker run -it --rm \ --runtime=nvidia \ -e NVIDIA_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES} \ --net=host \ --shm-size=1g \ --ulimit memlock=-1 \ --ulimit stack=67108864 \ --ipc=host \ -e WORKDIR="$(pwd)" \ -e PYTHONPATH="$(pwd)" \ -v "$(pwd)":"$(pwd)" \ -v /var/run/docker.sock:/var/run/docker.sock \ -w "$(pwd)" \ bert:latest bash
TensorFlow/Classification/ConvNets/se-resnext101-32x4d/training
training
DGXA100_SE-RNxt101-32x4d_TF32_90E
#!/bin/bash # Copyright (c) 2019 NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. WORKSPACE=${1:-"/workspace/rn50v15_tf"} DATA_DIR=${2:-"/data"} OTHER=${@:3} if [[ ! -z "${BIND_TO_SOCKET}" ]]; then BIND_TO_SOCKET="--bind-to socket" fi mpiexec --allow-run-as-root ${BIND_TO_SOCKET} -np 8 python3 main.py --arch=se-resnext101-32x4d \ --mode=train_and_evaluate --iter_unit=epoch --num_iter=90 \ --batch_size=128 --warmup_steps=100 --cosine_lr --label_smoothing 0.1 \ --lr_init=0.256 --lr_warmup_epochs=8 --momentum=0.875 --weight_decay=6.103515625e-05 \ --data_dir=${DATA_DIR}/tfrecords --data_idx_dir=${DATA_DIR}/dali_idx \ --results_dir=${WORKSPACE}/results --weight_init=fan_in ${OTHER}
PyTorch/Translation/GNMT/seq2seq/data
data
tokenizer
# Copyright (c) 2017 Elad Hoffer # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved. # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import logging from collections import defaultdict from functools import partial import torch import subword_nmt.apply_bpe import sacremoses import seq2seq.data.config as config class Tokenizer: """ Tokenizer class. """ def __init__(self, vocab_fname=None, bpe_fname=None, lang=None, pad=1, separator='@@'): """ Constructor for the Tokenizer class. :param vocab_fname: path to the file with vocabulary :param bpe_fname: path to the file with bpe codes :param pad: pads vocabulary to a multiple of 'pad' tokens :param separator: tokenization separator """ self.separator = separator self.lang = lang if bpe_fname: with open(bpe_fname, 'r') as bpe_codes: self.bpe = subword_nmt.apply_bpe.BPE(bpe_codes) if vocab_fname: self.build_vocabulary(vocab_fname, pad) if lang: self.init_moses(lang) def init_moses(self, lang): self.moses_tokenizer = sacremoses.MosesTokenizer(lang['src']) self.moses_detokenizer = sacremoses.MosesDetokenizer(lang['tgt']) def build_vocabulary(self, vocab_fname, pad): logging.info(f'Building vocabulary from {vocab_fname}') vocab = [config.PAD_TOKEN, config.UNK_TOKEN, config.BOS_TOKEN, config.EOS_TOKEN] with open(vocab_fname) as vfile: for line in vfile: vocab.append(line.strip()) self.pad_vocabulary(vocab, pad) self.vocab_size = len(vocab) logging.info(f'Size of vocabulary: {self.vocab_size}') self.tok2idx = defaultdict(partial(int, config.UNK)) for idx, token in enumerate(vocab): self.tok2idx[token] = idx self.idx2tok = {} for key, value in self.tok2idx.items(): self.idx2tok[value] = key def pad_vocabulary(self, vocab, pad): """ Pads vocabulary to a multiple of 'pad' tokens. 
:param vocab: list with vocabulary :param pad: integer """ vocab_size = len(vocab) padded_vocab_size = (vocab_size + pad - 1) // pad * pad for i in range(0, padded_vocab_size - vocab_size): token = f'madeupword{i:04d}' vocab.append(token) assert len(vocab) % pad == 0 def get_state(self): logging.info(f'Saving state of the tokenizer') state = { 'lang': self.lang, 'separator': self.separator, 'vocab_size': self.vocab_size, 'bpe': self.bpe, 'tok2idx': self.tok2idx, 'idx2tok': self.idx2tok, } return state def set_state(self, state): logging.info(f'Restoring state of the tokenizer') self.lang = state['lang'] self.separator = state['separator'] self.vocab_size = state['vocab_size'] self.bpe = state['bpe'] self.tok2idx = state['tok2idx'] self.idx2tok = state['idx2tok'] self.init_moses(self.lang) def segment(self, line): """ Tokenizes single sentence and adds special BOS and EOS tokens. :param line: sentence returns: list representing tokenized sentence """ line = line.strip().split() entry = [self.tok2idx[i] for i in line] entry = [config.BOS] + entry + [config.EOS] return entry def tokenize(self, line): tokenized = self.moses_tokenizer.tokenize(line, return_str=True) bpe = self.bpe.process_line(tokenized) segmented = self.segment(bpe) tensor = torch.tensor(segmented) return tensor def detokenize_bpe(self, inp, delim=' '): """ Detokenizes single sentence and removes token separator characters. :param inputs: sequence of tokens :param delim: tokenization delimiter returns: string representing detokenized sentence """ detok = delim.join([self.idx2tok[idx] for idx in inp]) detok = detok.replace(self.separator + ' ', '') detok = detok.replace(self.separator, '') detok = detok.replace(config.BOS_TOKEN, '') detok = detok.replace(config.EOS_TOKEN, '') detok = detok.replace(config.PAD_TOKEN, '') detok = detok.strip() return detok def detokenize_moses(self, inp): output = self.moses_detokenizer.detokenize(inp.split()) return output def detokenize(self, inp): detok_bpe = self.detokenize_bpe(inp) output = self.detokenize_moses(detok_bpe) return output
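A worked example of the `pad_vocabulary` arithmetic above, with illustrative numbers and placeholder strings standing in for the `config.*` special tokens; padding to a multiple of 8 keeps embedding and projection shapes friendly to Tensor Cores.

```
# Mirrors Tokenizer.pad_vocabulary above, outside the class for illustration.
vocab = ['<pad>', '<unk>', '<s>', '</s>'] + [f'tok{i}' for i in range(30)]  # 34 entries
pad = 8

vocab_size = len(vocab)
padded_vocab_size = (vocab_size + pad - 1) // pad * pad    # ceil(34 / 8) * 8 = 40
for i in range(padded_vocab_size - vocab_size):
    vocab.append(f'madeupword{i:04d}')                     # 6 filler tokens

assert len(vocab) == 40 and len(vocab) % pad == 0
print(vocab[-6:])
```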
TensorFlow2/Classification/ConvNets/scripts/docker
docker
launch
#!/bin/bash # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. set -euxo pipefail nvidia-docker run -it --rm --net=host --runtime=nvidia --ipc=host --cap-add=SYS_PTRACE --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH --security-opt seccomp=unconfined \ -v $(pwd)/:/workspace/ \ -v "/imagenet_tfrecords":/data/ \ -v "/imagenet_infer/":/infer_data/images/ \ nvcr.io/nvidia/efficientnet-tf2:21.09-tf2-py3
TensorFlow/Segmentation/UNet_Industrial/scripts
scripts
UNet_AMP_1GPU_XLA
#!/usr/bin/env bash # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # This script launches UNet training in FP32-AMP on 1 GPU using 16 batch size (16 per GPU) # Usage ./UNet_AMP_1GPU_XLA.sh <path to result repository> <path to dataset> <dagm classID (1-10)> BASEDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" export TF_CPP_MIN_LOG_LEVEL=3 python "${BASEDIR}/../main.py" \ --unet_variant='tinyUNet' \ --activation_fn='relu' \ --exec_mode='train_and_evaluate' \ --iter_unit='batch' \ --num_iter=2500 \ --batch_size=16 \ --warmup_step=10 \ --results_dir="${1}" \ --data_dir="${2}" \ --dataset_name='DAGM2007' \ --dataset_classID="${3}" \ --data_format='NCHW' \ --use_auto_loss_scaling \ --amp \ --xla \ --learning_rate=1e-4 \ --learning_rate_decay_factor=0.8 \ --learning_rate_decay_steps=500 \ --rmsprop_decay=0.9 \ --rmsprop_momentum=0.8 \ --loss_fn_name='adaptive_loss' \ --weight_decay=1e-5 \ --weight_init_method='he_uniform' \ --augment_data \ --display_every=250 \ --debug_verbosity=0
PyTorch/Translation/Transformer/fairseq/modules
modules
multihead_attention
# Copyright (c) 2017-present, Facebook, Inc. # All rights reserved. # # This source code is licensed under the license found in the LICENSE file in # the root directory of this source tree. An additional grant of patent rights # can be found in the PATENTS file in the same directory. # #------------------------------------------------------------------------- # # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Dict, Optional import torch from torch import nn, Tensor from torch.nn import Parameter import torch.nn.functional as F from torch.cuda import amp from torch.autograd.variable import Variable import strided_batched_gemm from fairseq import utils class QueryLinear(torch.autograd.Function): @staticmethod @amp.custom_fwd(cast_inputs=torch.half) def forward(ctx, input, weights_q, scale): s = Variable(torch.tensor([scale])) ctx.save_for_backward(input, weights_q, s) q = torch.addmm(input.view(input.size(0) * input.size(1), input.size(2)), input.view(input.size(0) * input.size(1), input.size(2)), weights_q, beta=0.0, alpha=s[0]) q = q.view(input.size(0), input.size(1), input.size(2)) return q.detach() @staticmethod @amp.custom_bwd def backward(ctx, q_grad): input, weights_q, s = ctx.saved_tensors input = input.view(input.size(0) * input.size(1), input.size(2)).transpose(0, 1) q = torch.addmm(q_grad.view(q_grad.size(0) * q_grad.size(1), q_grad.size(2)), q_grad.view(q_grad.size(0) * q_grad.size(1), q_grad.size(2)), weights_q.transpose(0, 1), beta=0.0, alpha=s[0]) q = q.view(q_grad.size(0), q_grad.size(1), q_grad.size(2)) q_grad = q_grad.view(q_grad.size(0) * q_grad.size(1), q_grad.size(2)) weights_q_grad = torch.addmm(weights_q, input, q_grad, beta=0.0, alpha=s[0]) return q, weights_q_grad, None class KeyValueLinears(torch.autograd.Function): @staticmethod @amp.custom_fwd(cast_inputs=torch.half) def forward(ctx, input, weights_k, weights_v): ctx.save_for_backward(input, weights_k, weights_v) k = torch.addmm(input.view(input.size(0) * input.size(1), input.size(2)), input.view(input.size(0) * input.size(1), input.size(2)), weights_k, beta=0.0, alpha=1.0) k = k.view(input.size(0), input.size(1), input.size(2)) v = torch.addmm(input.view(input.size(0) * input.size(1), input.size(2)), input.view(input.size(0) * input.size(1), input.size(2)), weights_v, beta=0.0, alpha=1.0) v = v.view(input.size(0), input.size(1), input.size(2)) return k.detach(), v.detach() @staticmethod @amp.custom_bwd def backward(ctx, k_grad, v_grad): input, weights_k, weights_v = ctx.saved_tensors input = input.view(input.size(0) * input.size(1), input.size(2)).transpose(0, 1) k = torch.addmm(k_grad.view(k_grad.size(0) * k_grad.size(1), k_grad.size(2)), k_grad.view(k_grad.size(0) * k_grad.size(1), k_grad.size(2)), weights_k.transpose(0, 1), beta=0.0) k_grad = k_grad.view(k_grad.size(0) * k_grad.size(1), k_grad.size(2)) weights_k_grad = torch.mm(input, k_grad) v = k.addmm_(v_grad.view(v_grad.size(0) * v_grad.size(1), v_grad.size(2)), weights_v.transpose(0, 1), 
beta=1.0) v = v.view(v_grad.size(0), v_grad.size(1), v_grad.size(2)) v_grad = v_grad.view(v_grad.size(0) * v_grad.size(1), v_grad.size(2)) weights_v_grad = torch.mm(input, v_grad) return v, weights_k_grad, weights_v_grad class SelfAttentionLinears(torch.autograd.Function): @staticmethod @amp.custom_fwd(cast_inputs=torch.half) def forward(ctx, input, weights_q, weights_k, weights_v, scale): s = Variable(torch.tensor([scale])) ctx.save_for_backward(input, weights_q, weights_k, weights_v, s) q = torch.addmm(input.view(input.size(0) * input.size(1), input.size(2)), input.view(input.size(0) * input.size(1), input.size(2)), weights_q, beta=0.0, alpha=s[0]) q = q.view(input.size(0), input.size(1), input.size(2)) k = torch.addmm(input.view(input.size(0) * input.size(1), input.size(2)), input.view(input.size(0) * input.size(1), input.size(2)), weights_k, beta=0.0, alpha=1.0) k = k.view(input.size(0), input.size(1), input.size(2)) v = torch.addmm(input.view(input.size(0) * input.size(1), input.size(2)), input.view(input.size(0) * input.size(1), input.size(2)), weights_v, beta=0.0, alpha=1.0) v = v.view(input.size(0), input.size(1), input.size(2)) return q.detach(), k.detach(), v.detach() @staticmethod @amp.custom_bwd def backward(ctx, q_grad, k_grad, v_grad): input, weights_q, weights_k, weights_v, s = ctx.saved_tensors input = input.view(input.size(0) * input.size(1), input.size(2)).transpose(0, 1) q = torch.addmm(q_grad.view(q_grad.size(0) * q_grad.size(1), q_grad.size(2)), q_grad.view(q_grad.size(0) * q_grad.size(1), q_grad.size(2)), weights_q.transpose(0, 1), beta=0.0, alpha=s[0]) q_grad = q_grad.view(q_grad.size(0) * q_grad.size(1), q_grad.size(2)) weights_q_grad = torch.addmm(weights_q, input, q_grad, beta=0.0, alpha=s[0]) k = q.addmm_(k_grad.view(k_grad.size(0) * k_grad.size(1), k_grad.size(2)), weights_k.transpose(0, 1), beta=1.0) k_grad = k_grad.view(k_grad.size(0) * k_grad.size(1), k_grad.size(2)) weights_k_grad = torch.mm(input, k_grad) v = k.addmm_(v_grad.view(v_grad.size(0) * v_grad.size(1), v_grad.size(2)), weights_v.transpose(0, 1), beta=1.0) v = v.view(v_grad.size(0), v_grad.size(1), v_grad.size(2)) v_grad = v_grad.view(v_grad.size(0) * v_grad.size(1), v_grad.size(2)) weights_v_grad = torch.mm(input, v_grad) return v, weights_q_grad, weights_k_grad, weights_v_grad, None class StridedBmm1Func(torch.autograd.Function): @staticmethod @amp.custom_fwd(cast_inputs=torch.half) def forward(ctx, input1, input2): ctx.save_for_backward(input1, input2) output = torch.empty((input1.size(0), input1.size(1), input2.size(2)), dtype=input1.dtype, device=torch.device('cuda')) if (input1.dtype == torch.float16) and (input2.dtype == torch.float16): output = strided_batched_gemm.strided_batched_gemm(0.0, output, 1.0, input1, input2) else: output = torch.bmm(input1, input2, out=output) return output.detach() @staticmethod @amp.custom_bwd def backward(ctx, grad_output): input1, input2 = ctx.saved_tensors grad_input1 = torch.empty((input1.size(1), input2.size(0), input1.size(2)), dtype=input1.dtype, device=torch.device('cuda')).transpose(1, 0) grad_input2 = torch.empty((input2.size(2), input2.size(0), input2.size(1)), dtype=input2.dtype, device=torch.device('cuda')).transpose(1, 0) if (grad_output.dtype == torch.float16) and (input1.dtype == torch.float16) and (input2.dtype == torch.float16): grad_input1 = strided_batched_gemm.strided_batched_gemm(0.0, grad_input1, 1.0, grad_output, input2.transpose(1, 2)) grad_input2 = strided_batched_gemm.strided_batched_gemm(0.0, grad_input2, 1.0, 
grad_output.transpose(1, 2), input1) grad_input2 = grad_input2.transpose(1, 2) else: grad_input1 = torch.bmm(grad_output, input2.transpose(1, 2), out=grad_input1) grad_input2 = torch.bmm(grad_output.transpose(1, 2), input1, out=grad_input2).transpose(1, 2) return grad_input1, grad_input2 class StridedBmm2Func(torch.autograd.Function): @staticmethod @amp.custom_fwd(cast_inputs=torch.half) def forward(ctx, input1, input2): ctx.save_for_backward(input1, input2) output = torch.empty((input1.size(1), input1.size(0), input2.size(2)), dtype=input1.dtype, device=torch.device('cuda')).transpose(1, 0) if (input1.dtype == torch.float16) and (input2.dtype == torch.float16): output = strided_batched_gemm.strided_batched_gemm(0.0, output, 1.0, input1, input2) else: output = torch.bmm(input1, input2, out=output) return output.detach() @staticmethod @amp.custom_bwd def backward(ctx, grad_output): input1, input2 = ctx.saved_tensors grad_input2 = torch.empty((input2.size(1), input2.size(0), input2.size(2)), dtype=input2.dtype, device=torch.device('cuda')).transpose(1, 0) grad_input1 = torch.empty((input1.size(0), input1.size(1), input1.size(2)), dtype=input2.dtype, device=torch.device('cuda')) if (grad_output.dtype == torch.float16) and (input1.dtype == torch.float16) and (input2.dtype == torch.float16): grad_input1 = strided_batched_gemm.strided_batched_gemm(0.0, grad_input1, 1.0, grad_output, input2.transpose(1, 2)) grad_input2 = strided_batched_gemm.strided_batched_gemm(0.0, grad_input2, 1.0, input1.transpose(1, 2), grad_output) else: grad_input1 = torch.bmm(grad_output, input2.transpose(1, 2)) grad_input2 = torch.bmm(input1.transpose(1, 2), grad_output, out=grad_input2) return grad_input1, grad_input2 def query_linear(input: Tensor, weights_q: Tensor, scale: float): if not torch.jit.is_scripting(): return QueryLinear.apply(input, weights_q, scale) else: q = scale * torch.einsum('ij,jk->ik', input.view(input.size(0) * input.size(1), -1), weights_q) q = q.view(input.shape) return q def key_value_linears(input: Tensor, weights_k: Tensor, weights_v: Tensor): if not torch.jit.is_scripting(): return KeyValueLinears.apply(input, weights_k, weights_v) else: k = torch.einsum('ij,jk->ik', input.view(input.size(0) * input.size(1), -1), weights_k) k = k.view(input.shape) v = torch.einsum('ij,jk->ik', input.view(input.size(0) * input.size(1), -1), weights_v) v = v.view(input.shape) return k, v def self_attn_linears(input: Tensor, weights_q: Tensor, weights_k: Tensor, weights_v: Tensor, scale: float): if not torch.jit.is_scripting(): return SelfAttentionLinears.apply(input, weights_q, weights_k, weights_v, scale) else: q = scale * torch.einsum('ij,jk->ik', input.view(input.size(0) * input.size(1), -1), weights_q) q = q.view(input.shape) k = torch.einsum('ij,jk->ik', input.view(input.size(0) * input.size(1), -1), weights_k) k = k.view(input.shape) v = torch.einsum('ij,jk->ik', input.view(input.size(0) * input.size(1), -1), weights_v) v = v.view(input.shape) return q, k, v def strided_bmm1(input1: Tensor, input2: Tensor): if not torch.jit.is_scripting(): return StridedBmm1Func.apply(input1, input2) else: return torch.einsum('ijk,ikn->ijn', input1, input2) def strided_bmm2(input1: Tensor, input2: Tensor): if not torch.jit.is_scripting(): return StridedBmm2Func.apply(input1, input2) else: return torch.einsum('ijk,ikn->ijn', input1, input2) class MultiheadAttention(nn.Module): """Multi-headed attention. See "Attention Is All You Need" for more details. 
""" def __init__(self, embed_dim, num_heads, dropout=0., bias=False): super().__init__() self.embed_dim = embed_dim self.num_heads = num_heads self.dropout = dropout self.head_dim = embed_dim // num_heads assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads" self.scaling = self.head_dim**-0.5 self._mask = torch.empty(0) #self.in_proj_weight = Parameter(torch.Tensor(3*embed_dim, embed_dim)) self.in_proj_weight_q = Parameter(torch.Tensor(embed_dim, embed_dim)) self.in_proj_weight_k = Parameter(torch.Tensor(embed_dim, embed_dim)) self.in_proj_weight_v = Parameter(torch.Tensor(embed_dim, embed_dim)) if bias: #self.in_proj_bias = Parameter(torch.Tensor(3*embed_dim)) self.in_proj_bias_q = Parameter(torch.Tensor(embed_dim)) self.in_proj_bias_k = Parameter(torch.Tensor(embed_dim)) self.in_proj_bias_v = Parameter(torch.Tensor(embed_dim)) else: #self.register_parameter('in_proj_bias', None) self.register_parameter('in_proj_bias_k', None) self.register_parameter('in_proj_bias_q', None) self.register_parameter('in_proj_bias_v', None) self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias) self.cache_id = str(id(self)) self.reset_parameters() def reset_parameters(self): #nn.init.xavier_uniform_(self.in_proj_weight) nn.init.xavier_uniform_(self.in_proj_weight_q) nn.init.xavier_uniform_(self.in_proj_weight_k) nn.init.xavier_uniform_(self.in_proj_weight_v) nn.init.xavier_uniform_(self.out_proj.weight) if self.in_proj_bias_k is not None: #nn.init.constant_(self.in_proj_bias, 0.) nn.init.constant_(self.in_proj_bias_q, 0.) nn.init.constant_(self.in_proj_bias_k, 0.) nn.init.constant_(self.in_proj_bias_v, 0.) nn.init.constant_(self.out_proj.bias, 0.) def forward(self, query: Tensor, key: Tensor, value: Tensor, mask_future_timesteps: bool, key_padding_mask: Optional[Tensor], incremental_state: Optional[Dict[str, Dict[str, Tensor]]], need_weights: bool, static_kv: bool): """Input shape: Time x Batch x Channel Self-attention can be implemented by passing in the same arguments for query, key and value. Future timesteps can be masked with the `mask_future_timesteps` argument. Padding elements can be excluded from the key by passing a binary ByteTensor (`key_padding_mask`) with shape: batch x src_len, where padding elements are indicated by 1s. 
""" if torch.jit.is_scripting(): kv_same = torch.equal(key, value) qkv_same = torch.equal(query, value) and kv_same else: qkv_same, kv_same = self._fast_same_check(query, key, value) tgt_len, bsz, embed_dim = query.size() assert embed_dim == self.embed_dim assert list(query.size()) == [tgt_len, bsz, embed_dim] assert key.size() == value.size() k = v = query.new_empty(0) if incremental_state is not None: saved_state = self._get_input_buffer(incremental_state) else: saved_state = None if qkv_same: # self-attention q, k, v = self_attn_linears(query, self.in_proj_weight_q, self.in_proj_weight_k, self.in_proj_weight_v, self.scaling) elif kv_same: # encoder-decoder attention q = query_linear(query, self.in_proj_weight_q, self.scaling) if not(saved_state is not None and 'prev_key' in saved_state and static_kv): k, v = key_value_linears(key, self.in_proj_weight_k, self.in_proj_weight_v) else: q = torch.addmm(query.view(query.size(0) * query.size(1), query.size(2)), query.view(query.size(0) * query.size(1), query.size(2)), self.in_proj_weight_q, beta=0.0, alpha=self.scaling) if not(saved_state is not None and 'prev_key' in saved_state and static_kv): k = F.linear(key, self.in_proj_weight_k, self.in_proj_bias_k) v = F.linear(value, self.in_proj_weight_v, self.in_proj_bias_v) if saved_state is not None: if 'prev_key' in saved_state: k = torch.cat((saved_state['prev_key'], k), dim=0) if 'prev_value' in saved_state: v = torch.cat((saved_state['prev_value'], v), dim=0) saved_state['prev_key'] = k saved_state['prev_value'] = v self._set_input_buffer(incremental_state, saved_state) src_len = k.size(0) if key_padding_mask is not None: assert key_padding_mask.size(0) == bsz assert key_padding_mask.size(1) == src_len q = q.contiguous().view(tgt_len, bsz * self.num_heads, self.head_dim).transpose(0, 1) k = k.contiguous().view(src_len, bsz * self.num_heads, self.head_dim).transpose(0, 1) v = v.contiguous().view(src_len, bsz * self.num_heads, self.head_dim).transpose(0, 1) attn_weights = strided_bmm1(q, k.transpose(1, 2)) assert list(attn_weights.size()) == [bsz * self.num_heads, tgt_len, src_len] # only apply masking at training time (when incremental state is None) if mask_future_timesteps and incremental_state is None: assert query.size() == key.size(), \ 'mask_future_timesteps only applies to self-attention' attn_weights += self.buffered_mask(attn_weights).unsqueeze(0) if key_padding_mask is not None: # don't attend to padding symbols attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) attn_weights = attn_weights.float().masked_fill( key_padding_mask.unsqueeze(1).unsqueeze(2), float('-inf'), ).type_as(attn_weights) # FP16 support: cast to float and back attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) attn_weights = F.softmax(attn_weights, dim=-1) attn_weights = F.dropout(attn_weights, p=self.dropout, training=self.training) attn = strided_bmm2(attn_weights, v) assert list(attn.size()) == [bsz * self.num_heads, tgt_len, self.head_dim] attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) attn = self.out_proj(attn) if need_weights: # average attention weights over heads attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) attn_weights = attn_weights.sum(dim=1) / self.num_heads else: attn_weights = attn_weights.new_empty(0) # Can't set to None because jit script reasons return attn, attn_weights def in_proj_qkv(self, query): return self._in_proj(query).chunk(3, dim=-1) def in_proj_kv(self, key): return self._in_proj(key, 
start=self.embed_dim).chunk(2, dim=-1) def in_proj_q(self, query): return self._in_proj(query, end=self.embed_dim) def in_proj_k(self, key): return self._in_proj(key, start=self.embed_dim, end=2 * self.embed_dim) def in_proj_v(self, value): return self._in_proj(value, start=2 * self.embed_dim) def _in_proj(self, input, start=None, end=None): weight = self.in_proj_weight bias = self.in_proj_bias if end is not None: weight = weight[:end, :] if bias is not None: bias = bias[:end] if start is not None: weight = weight[start:, :] if bias is not None: bias = bias[start:] return F.linear(input, weight, bias) def buffered_mask(self, tensor): dim = tensor.size(-1) if self._mask.size(0) == 0: #TODO: try torch.new_full instead self._mask = torch.triu(utils.fill_with_neg_inf(tensor.new_empty(dim, dim)), 1) if self._mask.size(0) < dim: self._mask = torch.triu(utils.fill_with_neg_inf(self._mask.resize_(dim, dim)), 1) return self._mask[:dim, :dim] def reorder_incremental_state(self, incremental_state, new_order): """Reorder buffered internal state (for incremental generation).""" input_buffer = self._get_input_buffer(incremental_state) if input_buffer is not None: for k in input_buffer.keys(): input_buffer[k] = input_buffer[k].index_select(1, new_order) self._set_input_buffer(incremental_state, input_buffer) def _get_input_buffer(self, incremental_state: Optional[Dict[str, Dict[str, Tensor]]]): if incremental_state is None or self.cache_id not in incremental_state: return {} return incremental_state[self.cache_id] def _set_input_buffer(self, incremental_state: Optional[Dict[str, Dict[str, Tensor]]], buffer: Dict[str, Tensor]): if incremental_state is not None: incremental_state[self.cache_id] = buffer @torch.jit.unused def _fast_same_check(self, q, k, v): qkv_same = q.data_ptr() == k.data_ptr() == v.data_ptr() kv_same = k.data_ptr() == v.data_ptr() return qkv_same, kv_same
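

# Example usage: a minimal self-attention smoke test for MultiheadAttention.
# This is a sketch only. It assumes a CUDA device is available and that the
# strided_batched_gemm extension imported at the top of this file has been
# built; with fp32 inputs the torch.bmm/addmm fallback paths are taken instead
# of the fused half-precision GEMMs.
if __name__ == "__main__" and torch.cuda.is_available():
    torch.manual_seed(0)
    attn_layer = MultiheadAttention(embed_dim=512, num_heads=8, dropout=0.1).cuda()
    x = torch.randn(128, 8, 512, device="cuda")  # Time x Batch x Channel
    attn_out, attn_weights = attn_layer(
        query=x, key=x, value=x,                 # identical tensors -> self-attention fast path
        mask_future_timesteps=False,
        key_padding_mask=None,
        incremental_state=None,
        need_weights=True,
        static_kv=False,
    )
    print(attn_out.shape)      # expected: torch.Size([128, 8, 512])
    print(attn_weights.shape)  # expected: torch.Size([8, 128, 128]) after averaging over heads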
PyTorch/Translation/Transformer/examples/translation
translation
README
# Example usage for Neural Machine Translation

These scripts provide an example of pre-processing data for the NMT task and instructions on how to replicate the results from the paper [Scaling Neural Machine Translation (Ott et al., 2018)](https://arxiv.org/abs/1806.00187).

## Preprocessing

### prepare-iwslt14.sh

Provides an example of pre-processing for the IWSLT'14 German to English translation task: ["Report on the 11th IWSLT evaluation campaign" by Cettolo et al.](http://workshop2014.iwslt.org/downloads/proceeding.pdf)

Example usage:
```
$ cd examples/translation/
$ bash prepare-iwslt14.sh
$ cd ../..

# Binarize the dataset:
$ TEXT=examples/translation/iwslt14.tokenized.de-en
$ python preprocess.py --source-lang de --target-lang en \
  --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
  --destdir data-bin/iwslt14.tokenized.de-en

# Train the model (better for a single GPU setup):
$ mkdir -p checkpoints/fconv
$ CUDA_VISIBLE_DEVICES=0 python train.py data-bin/iwslt14.tokenized.de-en \
  --lr 0.25 --clip-norm 0.1 --dropout 0.2 --max-tokens 4000 \
  --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
  --lr-scheduler fixed --force-anneal 200 \
  --arch fconv_iwslt_de_en --save-dir checkpoints/fconv

# Generate:
$ python generate.py data-bin/iwslt14.tokenized.de-en \
  --path checkpoints/fconv/checkpoint_best.pt \
  --batch-size 128 --beam 5 --remove-bpe
```

To train a transformer model on IWSLT'14 German to English:
```
# Preparation steps are the same as for the fconv model.

# Train the model (better for a single GPU setup):
$ mkdir -p checkpoints/transformer
$ CUDA_VISIBLE_DEVICES=0 python train.py data-bin/iwslt14.tokenized.de-en \
  -a transformer_iwslt_de_en --optimizer adam --lr 0.0005 -s de -t en \
  --label-smoothing 0.1 --dropout 0.3 --max-tokens 4000 \
  --min-lr '1e-09' --lr-scheduler inverse_sqrt --weight-decay 0.0001 \
  --criterion label_smoothed_cross_entropy --max-update 50000 \
  --warmup-updates 4000 --warmup-init-lr '1e-07' \
  --adam-betas '(0.9, 0.98)' --save-dir checkpoints/transformer

# Average 10 latest checkpoints:
$ python scripts/average_checkpoints.py --inputs checkpoints/transformer \
  --num-epoch-checkpoints 10 --output checkpoints/transformer/model.pt

# Generate:
$ python generate.py data-bin/iwslt14.tokenized.de-en \
  --path checkpoints/transformer/model.pt \
  --batch-size 128 --beam 5 --remove-bpe
```

### prepare-wmt14en2de.sh

Provides an example of pre-processing for the WMT'14 English to German translation task. By default it produces a dataset modeled after ["Attention Is All You Need" by Vaswani et al.](https://arxiv.org/abs/1706.03762), which includes news-commentary-v12 data.

To use only data available in WMT'14, or to replicate results obtained in the original paper ["Convolutional Sequence to Sequence Learning" by Gehring et al.](https://arxiv.org/abs/1705.03122), run it with the `--icml17` flag instead:
```
$ bash prepare-wmt14en2de.sh --icml17
```

Example usage:
```
$ cd examples/translation/
$ bash prepare-wmt14en2de.sh
$ cd ../..
# Binarize the dataset:
$ TEXT=examples/translation/wmt14_en_de
$ python preprocess.py --source-lang en --target-lang de \
  --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
  --destdir data-bin/wmt14_en_de --thresholdtgt 0 --thresholdsrc 0

# Train the model:
# If it runs out of memory, try to set --max-tokens 1500 instead
$ mkdir -p checkpoints/fconv_wmt_en_de
$ python train.py data-bin/wmt14_en_de \
  --lr 0.5 --clip-norm 0.1 --dropout 0.2 --max-tokens 4000 \
  --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
  --lr-scheduler fixed --force-anneal 50 \
  --arch fconv_wmt_en_de --save-dir checkpoints/fconv_wmt_en_de

# Generate:
$ python generate.py data-bin/wmt14_en_de \
  --path checkpoints/fconv_wmt_en_de/checkpoint_best.pt --beam 5 --remove-bpe
```

### prepare-wmt14en2fr.sh

Provides an example of pre-processing for the WMT'14 English to French translation task.

Example usage:
```
$ cd examples/translation/
$ bash prepare-wmt14en2fr.sh
$ cd ../..

# Binarize the dataset:
$ TEXT=examples/translation/wmt14_en_fr
$ python preprocess.py --source-lang en --target-lang fr \
  --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
  --destdir data-bin/wmt14_en_fr --thresholdtgt 0 --thresholdsrc 0

# Train the model:
# If it runs out of memory, try to set --max-tokens 1000 instead
$ mkdir -p checkpoints/fconv_wmt_en_fr
$ python train.py data-bin/wmt14_en_fr \
  --lr 0.5 --clip-norm 0.1 --dropout 0.1 --max-tokens 3000 \
  --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
  --lr-scheduler fixed --force-anneal 50 \
  --arch fconv_wmt_en_fr --save-dir checkpoints/fconv_wmt_en_fr

# Generate:
$ python generate.py data-bin/wmt14_en_fr \
  --path checkpoints/fconv_wmt_en_fr/checkpoint_best.pt --beam 5 --remove-bpe
```

## Replicating results from "Scaling Neural Machine Translation"

To replicate results from the paper [Scaling Neural Machine Translation (Ott et al., 2018)](https://arxiv.org/abs/1806.00187):

1. Prepare the WMT'14 En-De data with a BPE vocab of 32k:
```
$ bash prepare-wmt14en2de.sh --scaling18
$ cd ../..
```
2. Preprocess the dataset with a joined dictionary:
```
$ TEXT=examples/translation/wmt14_en_de
$ python preprocess.py --source-lang en --target-lang de \
  --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
  --destdir data-bin/wmt14_en_de_joined_dict \
  --nwordssrc 32768 --nwordstgt 32768 \
  --joined-dictionary
```
3. Train a model:
```
$ python train.py data-bin/wmt14_en_de_joined_dict \
  --arch transformer_vaswani_wmt_en_de_big --share-all-embeddings \
  --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
  --lr-scheduler inverse_sqrt --warmup-init-lr 1e-07 --warmup-updates 4000 \
  --lr 0.0005 --min-lr 1e-09 \
  --dropout 0.3 --weight-decay 0.0 --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
  --max-tokens 3584 \
  --fp16
```
Note that the `--fp16` flag requires that you have CUDA 9.1 or greater and a Volta GPU.

If you want to train the above model with big batches (assuming your machine has 8 GPUs):
- add `--update-freq 16` to simulate training on 8*16=128 GPUs
- increase the learning rate; 0.001 works well for big batches (see the example command after this list)
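
For reference, a single command that folds in both big-batch changes might look like the following (a sketch only; it assumes the same 8-GPU machine and the `data-bin/wmt14_en_de_joined_dict` data prepared in step 2):
```
$ python train.py data-bin/wmt14_en_de_joined_dict \
  --arch transformer_vaswani_wmt_en_de_big --share-all-embeddings \
  --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
  --lr-scheduler inverse_sqrt --warmup-init-lr 1e-07 --warmup-updates 4000 \
  --lr 0.001 --min-lr 1e-09 \
  --dropout 0.3 --weight-decay 0.0 --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
  --max-tokens 3584 \
  --update-freq 16 \
  --fp16
```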
PaddlePaddle/LanguageModeling/BERT/scripts/docker
docker
build
#!/bin/bash # Copyright (c) 2023 NVIDIA Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. URL=${1:-"bert"} PUSH=${2:-"none"} # 'push' or 'none' set -e docker build \ --network=host \ --rm \ --pull \ --no-cache \ -t ${URL} \ . if [ "${PUSH}" == "push" ]; then docker push ${URL} elif [ "${PUSH}" == "none" ]; then echo "Keep the built image locally." else echo "Invalid \${PUSH} option: ${PUSH} !" exit 1 fi
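
# Example usage (a sketch; image names below are illustrative, and the script
# is assumed to live at scripts/docker/build.sh and to be run from the
# repository root, where the Dockerfile is located):
#   bash scripts/docker/build.sh                               # build a local image tagged "bert"
#   bash scripts/docker/build.sh my-registry/bert:latest push  # build and push the image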
PyTorch/SpeechRecognition/QuartzNet/scripts
scripts
download_librispeech
#!/usr/bin/env bash # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. DATA_SET="LibriSpeech" DATA_ROOT_DIR="/datasets" DATA_DIR="${DATA_ROOT_DIR}/${DATA_SET}" if [ ! -d "$DATA_DIR" ] then mkdir --mode 755 $DATA_DIR python utils/download_librispeech.py \ utils/librispeech.csv \ $DATA_DIR \ -e ${DATA_ROOT_DIR}/ else echo "Directory $DATA_DIR already exists." fi
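
# Example usage (a sketch; the script is assumed to live at
# scripts/download_librispeech.sh and must be run from the repository root,
# where utils/download_librispeech.py and utils/librispeech.csv live):
#   bash scripts/download_librispeech.sh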
PyTorch/SpeechSynthesis/Tacotron2/filelists
filelists
ljs_audio_text_train_subset_625_filelist
LJSpeech-1.1/wavs/LJ040-0100.wav|she would sometimes take Lee with her, apparently leaving him alone in the car while she transacted her business. LJSpeech-1.1/wavs/LJ011-0248.wav|Howard, strange to say, making no attempt to detain him; probably because Mullay promised to return a few days later, and to bring more money. LJSpeech-1.1/wavs/LJ016-0442.wav|made a determined effort to burn himself to death by throwing himself bodily on to the fire in the condemned ward. LJSpeech-1.1/wavs/LJ026-0036.wav|and then a balance must be struck and the doubtful form placed in the kingdom with which it has, on the whole, most points in common. LJSpeech-1.1/wavs/LJ042-0176.wav|One offers oppression, the other poverty. Both offer imperialistic injustice, tinted with two brands of slavery, end quote. LJSpeech-1.1/wavs/LJ003-0323.wav|Drunkenness, if it ever occurred, should be visited with severe punishment; LJSpeech-1.1/wavs/LJ045-0161.wav|He was upset over the fact that I would not answer him. LJSpeech-1.1/wavs/LJ028-0187.wav|Cyrus decided that Babylon must be taken. LJSpeech-1.1/wavs/LJ037-0178.wav|or one used Remington-Peters cartridge case, which may have been in the revolver before the shooting, LJSpeech-1.1/wavs/LJ010-0164.wav|Oxford, who was only nineteen at the time his offense was committed, had been born at Birmingham, LJSpeech-1.1/wavs/LJ019-0178.wav|and abandoned because of the expense. As to the entire reconstruction of Newgate, nothing had been done as yet. LJSpeech-1.1/wavs/LJ050-0117.wav|particularly those arising from organized groups, within their special jurisdiction. LJSpeech-1.1/wavs/LJ033-0128.wav|that the bag Oswald carried contained the assassination weapon and has concluded that Frazier and Randle are mistaken as to the length of the bag. LJSpeech-1.1/wavs/LJ007-0179.wav|defeats the ends of justice, and disgraces the profession of a Christian country. LJSpeech-1.1/wavs/LJ033-0067.wav|She pointed to the blanket which was on the floor very close to where Ruth Paine was standing. LJSpeech-1.1/wavs/LJ004-0139.wav|"In the morning the stench and heat were so oppressive that he and every one else on waking rushed unclothed into the yard;" LJSpeech-1.1/wavs/LJ009-0208.wav|erected on the cart, about four feet high at the head, and gradually sloping towards the horse, giving a full view of the body, LJSpeech-1.1/wavs/LJ012-0144.wav|and passed it on to Solomons by his daughter, a widow named Abrahams. LJSpeech-1.1/wavs/LJ001-0020.wav|the "lower-case" being in fact invented in the early Middle Ages. LJSpeech-1.1/wavs/LJ014-0227.wav|One of these was Mobbs, who lived in the Minories, LJSpeech-1.1/wavs/LJ040-0146.wav|He noted that Lee liked to give the impression that he did not care for other people but preferred to keep to himself, LJSpeech-1.1/wavs/LJ001-0149.wav|From the time when books first took their present shape till the end of the sixteenth century, or indeed later, LJSpeech-1.1/wavs/LJ002-0143.wav|The commissioners who presided were, quote, little otherwise than self-elected LJSpeech-1.1/wavs/LJ014-0217.wav|Dwyer managed to overpower his assailant, and got to his feet; but Cannon butted at him with his head, and again threw him to the ground, LJSpeech-1.1/wavs/LJ005-0250.wav|The prisoners were crowded together in the jail, contrary to the requirements of the four George the fourth LJSpeech-1.1/wavs/LJ042-0049.wav|I never believed I would find more material advantages at this stage of development in the Soviet Union than I might of had in the U.S. 
LJSpeech-1.1/wavs/LJ014-0198.wav|Marley at his trial was undefended, and the sheriffs offered him counsel; but he declined. The witnesses against him all spoke the truth, he said; LJSpeech-1.1/wavs/LJ034-0093.wav|Brennan also testified that Lee Harvey Oswald, LJSpeech-1.1/wavs/LJ016-0237.wav|With Calcraft's method there were undoubtedly many failures, and it was a common custom for him to go below the gallows LJSpeech-1.1/wavs/LJ015-0156.wav|Down at Weybridge, where he had a country place, his name was long remembered with gratitude by the poor. LJSpeech-1.1/wavs/LJ018-0047.wav|He adhered to this almost to the very last. His case had been warmly espoused by the Society for the Protection of Germans in this country, LJSpeech-1.1/wavs/LJ013-0020.wav|he acted in a manner which excited the suspicions of the crew. LJSpeech-1.1/wavs/LJ002-0041.wav|Two other wards were appropriated to the master's side debtors; they were each twenty-three feet by fourteen and a half, LJSpeech-1.1/wavs/LJ008-0227.wav|slipshod and slovenly, in crushed bonnet and dirty shawl, the gown fastened by a single hook, LJSpeech-1.1/wavs/LJ007-0029.wav|The condition of the capitally-convicted prisoners after sentence was still very disgraceful. The side they occupied, still known as the press-yard, LJSpeech-1.1/wavs/LJ018-0358.wav|Christina Edmunds had resort to strychnia, the same lethal drug that Palmer used; LJSpeech-1.1/wavs/LJ007-0198.wav|The windows were to be glazed and painted to prevent prisoners from looking out; LJSpeech-1.1/wavs/LJ043-0032.wav|After about a two-week separation, Marina Oswald returned to her husband. LJSpeech-1.1/wavs/LJ035-0071.wav|At a given signal, they reenacted the event. Baker's movements were timed with a stopwatch. LJSpeech-1.1/wavs/LJ009-0092.wav|his legs give way, he utters a faint groan, and sinks on the floor. LJSpeech-1.1/wavs/LJ019-0310.wav|which had long been admitted as indispensable, and had never as yet been properly obtained. LJSpeech-1.1/wavs/LJ038-0071.wav|When he entered the homicide and robbery bureau office, he saw two detectives standing there with Sgt. Gerald L. Hill, LJSpeech-1.1/wavs/LJ014-0291.wav|he showed symptoms of delirium tremens, and admitted that he had been addicted to the excessive use of stimulants. LJSpeech-1.1/wavs/LJ014-0283.wav|The jury found him guilty of the latter only, with a point of law reserved. This was fully argued before three judges, LJSpeech-1.1/wavs/LJ021-0096.wav|under the able and energetic leadership of General Johnson. LJSpeech-1.1/wavs/LJ045-0075.wav|She was, quote, sorry that I had not married him (the Russian boyfriend) instead, that it would have been much easier for me, end quote. LJSpeech-1.1/wavs/LJ022-0203.wav|For that we can be thankful to the God who watches over America. LJSpeech-1.1/wavs/LJ029-0073.wav|that the President would arrive and depart from Dallas' Love Field; that a motorcade through the downtown area of Dallas to the luncheon site should be arranged; LJSpeech-1.1/wavs/LJ040-0187.wav|According to Sokolow, this indicated a, quote, present intellectual functioning in the upper range of bright normal intelligence, end quote. LJSpeech-1.1/wavs/LJ016-0101.wav|One of the three, shamming ill, remained all day in his ward, where he employed himself unraveling the rope from the sleeping-mats. 
LJSpeech-1.1/wavs/LJ015-0086.wav|He kept open house at Kilburn Priory; LJSpeech-1.1/wavs/LJ028-0427.wav|The enormous amount of debris which buried the palaces and temples and walls of Nebuchadnezzar's city, in places to the depth of a hundred feet, LJSpeech-1.1/wavs/LJ048-0248.wav|President Kennedy was scheduled to speak across the street from his hotel in Fort Worth at eight:thirty a.m. LJSpeech-1.1/wavs/LJ021-0095.wav|We are now prepared to move into this second phase, on the basis of our experience in the first phase LJSpeech-1.1/wavs/LJ030-0081.wav|They were instructed to watch particularly for thrown objects, sudden actions in the crowd, and any movements toward the Presidential car. LJSpeech-1.1/wavs/LJ032-0176.wav|Moreover, the bus transfer which he obtained as he left the bus was still in the pocket when he was arrested. LJSpeech-1.1/wavs/LJ044-0129.wav|and often it is advisable for some people to remain in the background, not underground, end quote. LJSpeech-1.1/wavs/LJ018-0177.wav|But as there was no independent corroboration of the informer's evidence, according to the custom of the British law, LJSpeech-1.1/wavs/LJ049-0113.wav|This point was ably made in the nineteen oh two debate by Senator George F. Hoar, the sponsor of the Senate bill, quote, LJSpeech-1.1/wavs/LJ050-0141.wav|As a beginning step to improve liaison with local law enforcement officials, the Secret Service on August twenty-six, nineteen sixty-four, LJSpeech-1.1/wavs/LJ013-0156.wav|a scion of the ducal house of Bedford, by his confidential valet and personal attendant. LJSpeech-1.1/wavs/LJ032-0222.wav|Moreover, Shaneyfelt testified that in his opinion the photographs were not composites of two different photographs LJSpeech-1.1/wavs/LJ004-0052.wav|which Howard had eulogized some forty years before. LJSpeech-1.1/wavs/LJ006-0017.wav|with those who made the selection of the first inspectors, and the two gentlemen appointed were probably the most fitted in England to be so employed. LJSpeech-1.1/wavs/LJ049-0046.wav|Even so, analysis of the motion picture films taken by amateur photographer Zapruder LJSpeech-1.1/wavs/LJ017-0124.wav|He frequently declared before and during the trial that it would be impossible to find him guilty. LJSpeech-1.1/wavs/LJ048-0150.wav|while the Secret Service representatives in Dallas LJSpeech-1.1/wavs/LJ017-0082.wav|He fixed upon a sporting friend, Mr. John Parsons Cook, who had been in luck at Shrewsbury races, both as a winner and a backer, LJSpeech-1.1/wavs/LJ041-0095.wav|Oswald read a good deal, said Powers, but, quote, he would never be reading any of the shoot-em-up westerns or anything like that. LJSpeech-1.1/wavs/LJ002-0089.wav|eight. The female felons were deprived of part of the space which the architect had intended for them. LJSpeech-1.1/wavs/LJ050-0264.wav|The Commission recommends that the present arrangements LJSpeech-1.1/wavs/LJ039-0177.wav|was greater than from the second to the third shot and required a movement in the basic firing position of the marksmen. LJSpeech-1.1/wavs/LJ047-0016.wav|The FBI opened a file on Oswald in October nineteen fifty-nine, when news reports appeared of his defection to the Soviet Union. LJSpeech-1.1/wavs/LJ028-0036.wav|But in those very early days Babylon was little more than a shrine, surrounded with mud huts and date palms. LJSpeech-1.1/wavs/LJ013-0173.wav|The researches of the police soon laid bare other suspicious facts. LJSpeech-1.1/wavs/LJ014-0138.wav|Mrs. Manning became still more violent, shouting, "No, no, I will not stand it! 
You ought to be ashamed of yourselves!" LJSpeech-1.1/wavs/LJ028-0165.wav|There is, however, a second inner wall, of less thickness than the first, but very little inferior to it in strength. LJSpeech-1.1/wavs/LJ006-0048.wav|To these were still added an average of about fifty expecting the last penalty of the law; a certain number of transports awaiting removal to the colonies; LJSpeech-1.1/wavs/LJ032-0133.wav|Lieutenant Day of the Dallas Police Department had "lifted" a palmprint from the underside of the gun barrel LJSpeech-1.1/wavs/LJ038-0093.wav|Frequently, however, he was confronted with evidence which he could not explain, and he resorted to statements which are known to be lies. LJSpeech-1.1/wavs/LJ018-0228.wav|Five or six years later, William Roupell minutely described how he had effected the fraud. LJSpeech-1.1/wavs/LJ046-0084.wav|for the President soon after the assassination, quote, LJSpeech-1.1/wavs/LJ033-0109.wav|the Commission has carefully considered the testimony of these two witnesses with regard to the length of the bag. LJSpeech-1.1/wavs/LJ013-0158.wav|One morning in May his lordship was found dead in his bed with his throat cut. LJSpeech-1.1/wavs/LJ036-0111.wav|Whaley's memory of the lineup is inaccurate. There were four men altogether, not six men, in the lineup with Oswald. LJSpeech-1.1/wavs/LJ044-0082.wav|His attempt to express himself through his Fair Play for Cuba activities, however, LJSpeech-1.1/wavs/LJ036-0208.wav|white male, approximately thirty, slender build, height five foot ten inches, weight one hundred sixty-five pounds, end quote. LJSpeech-1.1/wavs/LJ038-0255.wav|Firearms identification. LJSpeech-1.1/wavs/LJ031-0111.wav|The elliptical wound in the Governor's back, located slightly to the left of the Governor's right armpit approximately five-eighths inch (a centimeter and a half) LJSpeech-1.1/wavs/LJ006-0246.wav|On another occasion a young man, who was being violently teased, seized a knife and stabbed his tormentor in the back. LJSpeech-1.1/wavs/LJ027-0167.wav|Then the gills gradually dry up, as the lungs develop, and they now breathe wholly by lungs, but still retain the tail. LJSpeech-1.1/wavs/LJ033-0187.wav|However, the complete identity of characteristics between the paper and tape in the bag found on the sixth floor LJSpeech-1.1/wavs/LJ009-0284.wav|It was stated in evidence before the Commission on Capital Punishment in eighteen sixty-four, LJSpeech-1.1/wavs/LJ009-0249.wav|When Charles White was executed in eighteen twenty-three for arson, he arranged a handkerchief LJSpeech-1.1/wavs/LJ015-0149.wav|peas at ten shillings a quart, five-guinea pines, and early asparagus were to be found on his table. LJSpeech-1.1/wavs/LJ019-0330.wav|Dietaries were drawn up for adoption on the recommendation of a committee of experts. LJSpeech-1.1/wavs/LJ012-0118.wav|It was a large gold brooch set in pearls, but a portion of the mounting had melted with the heat. LJSpeech-1.1/wavs/LJ008-0071.wav|In the few years which elapsed between the establishment of the gallows at Newgate LJSpeech-1.1/wavs/LJ015-0253.wav|he handed over to Pierce a sum of three thousand pounds, his own, whether rightly or wrongly acquired never came out, LJSpeech-1.1/wavs/LJ045-0102.wav|things apparently went quite smoothly from the time Oswald returned from Mexico until the weekend of November sixteen to seventeen, nineteen sixty-three. LJSpeech-1.1/wavs/LJ009-0256.wav|Still he resisted. 
LJSpeech-1.1/wavs/LJ050-0055.wav|that the PRS files can no longer be limited largely to persons communicating actual threats to the President. LJSpeech-1.1/wavs/LJ034-0037.wav|Someone sitting on the box facing the window would have his palm in this position if he placed his hand alongside his right hip. LJSpeech-1.1/wavs/LJ020-0081.wav|and knead for ten minutes, carefully at first, lest the liquids should be wasted, and more boldly when they are absorbed by the paste. LJSpeech-1.1/wavs/LJ009-0077.wav|The ordinary of Newgate is an orthodox, unaffected, Church of England divine, LJSpeech-1.1/wavs/LJ008-0107.wav|in his canonicals, and with his head as stiffly erect as a sheriff's coachman. LJSpeech-1.1/wavs/LJ043-0013.wav|Part of the problem resulted from the fact that, as Jeanne De Mohrenschildt testified, LJSpeech-1.1/wavs/LJ037-0225.wav|five foot eight inches, black hair, slender, wearing a white jacket, white shirt and dark slacks, end quote, LJSpeech-1.1/wavs/LJ012-0294.wav|without hesitation brought in a verdict of willful murder. LJSpeech-1.1/wavs/LJ042-0192.wav|are preferred rather than loud and useless manifestations of protest, end quote, Oswald went on to note, quote, LJSpeech-1.1/wavs/LJ016-0078.wav|but had to come down again covered with soot and filth just as the officers entered the ward. LJSpeech-1.1/wavs/LJ028-0174.wav|Other ancient descriptions of the walls have been left us by Ctesias of the fifth century B.C., and by Strabo of the beginning of the Christian era, LJSpeech-1.1/wavs/LJ019-0002.wav|The time at length approached when a radical and complete change was to come over the old city jail. LJSpeech-1.1/wavs/LJ032-0271.wav|(two) Oswald's palmprint was on the rifle in a position which shows that he had handled it while it was disassembled, LJSpeech-1.1/wavs/LJ018-0325.wav|But extra precautions and close supervision have so far proved effectual, and the prisoners are still in custody after a lapse of ten years. LJSpeech-1.1/wavs/LJ048-0259.wav|However, Chief Rowley did not condone the action of the off-duty agents, particularly since it violated a regulation of the Secret Service, LJSpeech-1.1/wavs/LJ009-0099.wav|Meanwhile the clergyman, still bent into the form of a sleeping dog, LJSpeech-1.1/wavs/LJ034-0180.wav|The man was dressed in a light-colored, open-neck shirt which could have been either a sports shirt or a T-shirt, LJSpeech-1.1/wavs/LJ024-0057.wav|Why then should we leave the fulfillment of this public policy to chance LJSpeech-1.1/wavs/LJ018-0260.wav|Mr. Justice Byles, in passing sentence, commented severely upon the commission of such crimes by a man in Roupell's position in life, LJSpeech-1.1/wavs/LJ007-0095.wav|Prisoners indeed were known to boast that they had saved their necks by feigning insanity. LJSpeech-1.1/wavs/LJ005-0117.wav|Numbers of the jails were still unprovided with chaplains, and the prisoners never heard Divine service. LJSpeech-1.1/wavs/LJ006-0168.wav|to taking the descriptions of newly-arrived prisoners. LJSpeech-1.1/wavs/LJ011-0117.wav|devoted its efforts first to a mitigation of the forgery statute, but could not immediately accomplish much. LJSpeech-1.1/wavs/LJ007-0223.wav|The prison officials appear to be on the side of the inspectors, to the great dissatisfaction of the corporation, who claimed the full allegiance and support of its servants. LJSpeech-1.1/wavs/LJ009-0176.wav|Seven other crimes, however, were still capital by law, and so continued till the passing of the Criminal Consolidation Acts of eighteen sixty-one. 
LJSpeech-1.1/wavs/LJ034-0119.wav|Approximately seven or eight minutes later LJSpeech-1.1/wavs/LJ014-0226.wav|Only a few have vied with Cannon in fiendish cruelty and brutality. LJSpeech-1.1/wavs/LJ045-0074.wav|In the letter Marina Oswald stated that her husband had changed a great deal and that she was very lonely in the United States. LJSpeech-1.1/wavs/LJ012-0044.wav|When his trade was busiest he set up a second establishment, at the head of which, although he was married, LJSpeech-1.1/wavs/LJ027-0012.wav|All have the same ultimate substance LJSpeech-1.1/wavs/LJ028-0254.wav|The people, enjoying the greater freedom which Cyrus permitted them, were contented, and life in Babylon went on about as before. LJSpeech-1.1/wavs/LJ002-0326.wav|The poor debtors were not supplied with beds. Those who could pay the price might hire them from each other, LJSpeech-1.1/wavs/LJ014-0259.wav|Watts led two lives. LJSpeech-1.1/wavs/LJ035-0067.wav|from the sixth floor by the time Baker and Truly arrived, Commission counsel asked Baker and Truly to repeat their movements from the time of the shot LJSpeech-1.1/wavs/LJ010-0146.wav|Attacks upon the sovereign, as I have said, became more common after the accession of the young Queen Victoria in eighteen thirty-eight. LJSpeech-1.1/wavs/LJ007-0084.wav|The inspectors in the following year, on examining the facts, found that some of these poor creatures had been in confinement for long periods: LJSpeech-1.1/wavs/LJ049-0204.wav|While in accordance with its mandate LJSpeech-1.1/wavs/LJ011-0035.wav|Every endeavor was used, however, to obtain a commutation of sentence. His case was twice argued before the judges on points of law, LJSpeech-1.1/wavs/LJ021-0001.wav|The Fireside Chats of Franklin Delano Roosevelt, by Franklin D Roosevelt, Section six. LJSpeech-1.1/wavs/LJ008-0148.wav|One night he was missing LJSpeech-1.1/wavs/LJ011-0237.wav|The jewelers were always a favorite prey of the London thieves. LJSpeech-1.1/wavs/LJ017-0272.wav|"Ah!" he remarked, "they will have to wait for us then till eight." LJSpeech-1.1/wavs/LJ049-0067.wav|the radio net in use in motorcades is elaborate and permits a number of different means of communication with various local points. LJSpeech-1.1/wavs/LJ032-0171.wav|and that this was the same shirt which Oswald wore on the morning of the assassination. LJSpeech-1.1/wavs/LJ048-0132.wav|which would bring to bear the judgment and experience of members of the White House detail other than the advance agent. LJSpeech-1.1/wavs/LJ006-0025.wav|France had sent Misseurs Beaumont and De Tocqueville, who subsequently published several interesting works on the subject. LJSpeech-1.1/wavs/LJ043-0176.wav|If the attack had succeeded and Oswald had been caught, the pictures showing him with his rifle LJSpeech-1.1/wavs/LJ044-0191.wav|Now there appeared to be no chance to get to Cuba, where he had thought he might find his communist ideal. The U.S. Government would not permit travel there LJSpeech-1.1/wavs/LJ038-0011.wav|A police car made a U-turn, and as the sirens grew fainter, LJSpeech-1.1/wavs/LJ002-0244.wav|but its business was much reduced by the extension of the Courts of Conscience. LJSpeech-1.1/wavs/LJ031-0209.wav|X-rays and photographs were taken preliminarily and the pathological examination began at about eight p.m. 
LJSpeech-1.1/wavs/LJ042-0032.wav|and of his initial commitment to that country can best be understood, however, in the context LJSpeech-1.1/wavs/LJ009-0132.wav|Although this misapplication of religious services still went on, LJSpeech-1.1/wavs/LJ034-0048.wav|The freshness of prints developed in this manner cannot be estimated, LJSpeech-1.1/wavs/LJ043-0023.wav|and helped to move the personal effects of Marina Oswald and the baby. LJSpeech-1.1/wavs/LJ015-0216.wav|This was an important step, and they might easily be robbed some day when Burgess was the guard, provided only that they could be opened. LJSpeech-1.1/wavs/LJ006-0180.wav|the interior of the jail was more like a bear-garden or the noisy purlieus of a public-house than a prison. LJSpeech-1.1/wavs/LJ016-0342.wav|The first private execution under the new law took place within the precincts of Maidstone Jail. LJSpeech-1.1/wavs/LJ025-0170.wav|for it is only the green parts of the plant which, under the influence of sunlight, have the marvelous power of decomposing carbonic acid, LJSpeech-1.1/wavs/LJ047-0076.wav|In New Orleans. In the middle of May of nineteen sixty-three, Agent Hosty checked Oswald's last known residence and found that he had moved. LJSpeech-1.1/wavs/LJ005-0011.wav|were first made use of about eighteen twenty-seven. That the need for prison reform was imperative may be gathered from the few out of many instances I have adduced, LJSpeech-1.1/wavs/LJ033-0142.wav|because the cartons stacked around the southeast corner would shield him. LJSpeech-1.1/wavs/LJ018-0005.wav|the public mind was greatly agitated by the affair for several months. The story of the murder must be pretty familiar to most of my readers. LJSpeech-1.1/wavs/LJ049-0183.wav|regarding such threats and that its Protective Research Section is not adequately staffed or equipped LJSpeech-1.1/wavs/LJ036-0031.wav|and requested a transfer which she might use if she got through the traffic. LJSpeech-1.1/wavs/LJ011-0285.wav|The door of his place of durance stood open, and Mr. Gee began to consider whether he might not escape. LJSpeech-1.1/wavs/LJ041-0114.wav|three months prior to his regularly scheduled separation date, ostensibly to care for his mother who had been injured in an accident at her work. LJSpeech-1.1/wavs/LJ012-0134.wav|Presently the proper person arrived from the consignees, but found the gold-dust gone. LJSpeech-1.1/wavs/LJ011-0005.wav|A lady in the country, who had thirteen thousand pounds in the stocks, desired her London agent to sell them out. LJSpeech-1.1/wavs/LJ028-0087.wav|Such was the appearance of the builder of the walls of Babylon. LJSpeech-1.1/wavs/LJ016-0329.wav|a bill was introduced by Mr. Hibbert, M.P., and accepted by the Government, providing for the future carrying out of executions within prisons. LJSpeech-1.1/wavs/LJ034-0017.wav|could look southwesterly down Elm Street over the top of the "Rolling Readers" cartons. LJSpeech-1.1/wavs/LJ044-0086.wav|executive director of the Information Council of the Americas, who also appeared on the program. LJSpeech-1.1/wavs/LJ038-0100.wav|On November twenty-three, Fritz confronted Oswald with the evidence that he had purchased a rifle under the fictitious name of "Hidell." LJSpeech-1.1/wavs/LJ049-0019.wav|The last Presidential vehicle with any protection against small-arms fire left the White House in nineteen fifty-three. 
LJSpeech-1.1/wavs/LJ021-0125.wav|it was natural that the workers should seek and obtain a statutory declaration of their constitutional right LJSpeech-1.1/wavs/LJ019-0294.wav|The prison buildings were in many places out of repair; other houses often overlooked them. LJSpeech-1.1/wavs/LJ009-0211.wav|and on the right the ripping chisel, with which the murders had been committed, were exposed to view. LJSpeech-1.1/wavs/LJ044-0172.wav|and left for Irving with Marina Oswald and June and most of the Oswalds' effects three days later. LJSpeech-1.1/wavs/LJ047-0129.wav|FBI informants in the New Orleans area, familiar with pro-Castro or Communist Party activity there, LJSpeech-1.1/wavs/LJ024-0139.wav|has been tipped out of balance by the courts in direct contradiction of the high purposes of the framers of the Constitution. LJSpeech-1.1/wavs/LJ005-0106.wav|Jails, of which the old prison at Reading was a specimen, were still left intact. LJSpeech-1.1/wavs/LJ042-0247.wav|In August of nineteen sixty-three, he gave the New Orleans police as a reason for refusing to permit his family to learn English, LJSpeech-1.1/wavs/LJ047-0092.wav|On August nine, nineteen sixty-three, LJSpeech-1.1/wavs/LJ026-0166.wav|back to starch usable as food and the comparison of the green plant and the animal would be complete. LJSpeech-1.1/wavs/LJ033-0019.wav|According to the testimony of Frazier, Marina Oswald, and Ruth Paine, it appears that Oswald never returned to Irving in midweek LJSpeech-1.1/wavs/LJ042-0172.wav|must have as its nucleus the traditional ideological best of both systems, and yet be utterly opposed to both systems. LJSpeech-1.1/wavs/LJ027-0018.wav|All are forced to make concession after concession to their surroundings, and in these concessions all progress in life consists. LJSpeech-1.1/wavs/LJ041-0187.wav|and he wanted to be on the winning side so that ten thousand years from-now people would look in the history books and say, "Well, this man was ahead of his time." LJSpeech-1.1/wavs/LJ048-0286.wav|Nor is this goal served when agents remain out until early morning hours, and lose the opportunity to get a reasonable amount of sleep. LJSpeech-1.1/wavs/LJ018-0037.wav|In searching the prisoner's box, Mr. Briggs' watch was found wrapped up in a piece of leather, LJSpeech-1.1/wavs/LJ009-0044.wav|His features have no felonious cast; LJSpeech-1.1/wavs/LJ045-0100.wav|She thought that he might not have become involved in the assassination if people had been kinder to him. LJSpeech-1.1/wavs/LJ035-0149.wav|She ran inside and up the front stairs into the large open office reserved for clerical employees. LJSpeech-1.1/wavs/LJ028-0188.wav|In five thirty-eight the city fell, and for a time it became the home of the Persian King. LJSpeech-1.1/wavs/LJ003-0320.wav|which recommended restrictions upon the number of visitors admitted. LJSpeech-1.1/wavs/LJ013-0241.wav|The policeman insisted on searching the premises, at which Good displayed some uneasiness. LJSpeech-1.1/wavs/LJ018-0194.wav|Cummings was repeatedly "run in" for the offense of coining and uttering bad money, whether coin or notes. LJSpeech-1.1/wavs/LJ046-0135.wav|PRS received items in eight thousand, seven hundred nine cases. LJSpeech-1.1/wavs/LJ046-0143.wav|These instructions to PRS personnel appear to be the only instance where an effort was made to reduce the criteria to writing. LJSpeech-1.1/wavs/LJ048-0103.wav|and with the concurrence of the Dallas police, was entirely appropriate, in view of the known desires of the President. 
LJSpeech-1.1/wavs/LJ038-0279.wav|I think is going overboard in the other direction. LJSpeech-1.1/wavs/LJ044-0117.wav|that there were people who understood his activity, end quote. LJSpeech-1.1/wavs/LJ028-0485.wav|The outer and inner defenses of Babylon were so strong and so high that no enemy could hope to take them, LJSpeech-1.1/wavs/LJ031-0174.wav|After the President was pronounced dead, LJSpeech-1.1/wavs/LJ026-0020.wav|If chlorophyll is present, the carbon dioxide of the air serves as a source of carbon, LJSpeech-1.1/wavs/LJ027-0136.wav|Illustrations quoted from the works of Romanes and Le Conte will make this principle clear. LJSpeech-1.1/wavs/LJ002-0113.wav|in an age when insolvent acts and bankruptcy courts do so much to relieve the impecunious, LJSpeech-1.1/wavs/LJ004-0113.wav|It was further ordered that male prisoners should be kept perfectly distinct from the females. LJSpeech-1.1/wavs/LJ044-0115.wav|he felt that this was a great man that he had received the letter from, end quote. LJSpeech-1.1/wavs/LJ039-0012.wav|The Commission first learned of this incident when Robert Oswald related it to FBI agents on February nineteen, nineteen sixty-four, LJSpeech-1.1/wavs/LJ014-0164.wav|as the wickedness and levity of the immense crowd collected at the execution this morning could be imagined by no man, LJSpeech-1.1/wavs/LJ050-0018.wav|and to keep the Secretary fully informed regarding all significant developments relating to Presidential protection. LJSpeech-1.1/wavs/LJ012-0131.wav|The letter informed him of the marks and sizes of the cases containing the precious metal, LJSpeech-1.1/wavs/LJ016-0308.wav|yet the witnesses were not unanimous. LJSpeech-1.1/wavs/LJ028-0332.wav|Once more, however, he waited till the interval appointed had gone by, and then leading the troops to the place where the four thousand were, LJSpeech-1.1/wavs/LJ006-0251.wav|but the presence and authority of the governor himself became indispensable. LJSpeech-1.1/wavs/LJ006-0016.wav|These considerations no doubt had weight LJSpeech-1.1/wavs/LJ031-0093.wav|Answer: No, sir. Before -- well, in trying to treat an acutely injured patient, you have to establish an airway, adequate ventilation LJSpeech-1.1/wavs/LJ042-0163.wav|After, however, two years and a lot of growing up, I decided to return to the USA. LJSpeech-1.1/wavs/LJ031-0220.wav|During the autopsy examination, Federal agents brought the surgeons three pieces of bone recovered from Elm Street and the Presidential automobile. LJSpeech-1.1/wavs/LJ030-0050.wav|The Presidential limousine. LJSpeech-1.1/wavs/LJ012-0010.wav|both having been recognized by the clergyman who had performed the ceremony, and the assault had been committed to secure the money LJSpeech-1.1/wavs/LJ004-0213.wav|Compared with those highly meritorious institutions Newgate still showed but badly. LJSpeech-1.1/wavs/LJ010-0061.wav|That some thirty or more needy men should hope to revolutionize England is a sufficient proof of the absurdity of their attempt. LJSpeech-1.1/wavs/LJ022-0195.wav|But it is more than the recovery of the material basis of our individual lives. LJSpeech-1.1/wavs/LJ039-0102.wav|After familiarization with live ammunition in the twenty-two rifle and the twenty-two pistol, LJSpeech-1.1/wavs/LJ020-0073.wav|Sift the flour, salt and sugar into a bowl, LJSpeech-1.1/wavs/LJ040-0038.wav|Such ideas of grandeur were apparently accompanied by notions of oppression. 
LJSpeech-1.1/wavs/LJ019-0049.wav|the principles of which were debated by disputants of widely opposite opinions with an earnestness that sometimes bordered upon acrimony. LJSpeech-1.1/wavs/LJ050-0012.wav|through an Assistant Secretary whose duties also include the direct supervision of the Bureau of the Mint LJSpeech-1.1/wavs/LJ007-0117.wav|where the upper ward was exclusively appropriated to their use. They also had their meals sent in, and, with the food, wine almost ad libitum. LJSpeech-1.1/wavs/LJ004-0169.wav|On the dirty bedstead lay a wretched being in the throes of severe illness. LJSpeech-1.1/wavs/LJ019-0127.wav|or the still more costly process of walling in the whole farm, would have greatly added to the charges of these establishments. LJSpeech-1.1/wavs/LJ014-0141.wav|and stretching out her hand, she gathered up a quantity of the rue which, following ancient custom dating from the days of the jail fever, LJSpeech-1.1/wavs/LJ037-0041.wav|The man appeared to step back as the policeman, quote, calmly opened the car door, end quote, and very slowly got out and walked toward the front of the car. LJSpeech-1.1/wavs/LJ012-0023.wav|He was taken up when still in his teens for stealing a pocketbook, and was sentenced to transportation, but did not get beyond the hulks at Chatham. LJSpeech-1.1/wavs/LJ032-0115.wav|A few minutes after the rifle was discovered on the sixth floor of the Depository Building LJSpeech-1.1/wavs/LJ047-0007.wav|It had interviewed him twice shortly after his return to the United States, again a year later at his request LJSpeech-1.1/wavs/LJ006-0049.wav|an occasional prisoner or two committed by the Houses of Parliament, the Courts of King's Bench, Common Pleas, LJSpeech-1.1/wavs/LJ028-0065.wav|Eleven years later, in five eighty-six, he destroyed the sacred Hebrew city, LJSpeech-1.1/wavs/LJ049-0076.wav|The Commission's review of the provisions for Presidential protection at the time of President Kennedy's trip to Dallas demonstrates the need for substantial improvements. LJSpeech-1.1/wavs/LJ003-0091.wav|Constantly associated with these convicted felons were numbers of juveniles, infants of tender years. LJSpeech-1.1/wavs/LJ050-0030.wav|The Commission also recommends LJSpeech-1.1/wavs/LJ013-0122.wav|Stealing plate was about this period the crime of a more aristocratic thief. LJSpeech-1.1/wavs/LJ046-0013.wav|Prompted by these dismaying statistics, the Commission has inquired into the problems and methods of Presidential protection in effect LJSpeech-1.1/wavs/LJ035-0134.wav|that they were watching the parade from the top step of the building entrance when Gloria Calverly, who works in the Depository Building, LJSpeech-1.1/wavs/LJ016-0232.wav|and he owned a pet pony which would follow him about like a dog. LJSpeech-1.1/wavs/LJ020-0023.wav|If too stiff, warm water, a spoonful at a time until you can handle the paste easily. The danger is in getting it too stiff. Now. LJSpeech-1.1/wavs/LJ005-0046.wav|The good it tried to do took active shape in the establishment of temporary refuges -- at Hoxton for males, and in the Hackney Road for females LJSpeech-1.1/wavs/LJ010-0019.wav|As time passed, LJSpeech-1.1/wavs/LJ049-0130.wav|The Secret Service must rely in large part LJSpeech-1.1/wavs/LJ024-0023.wav|ever since a similar proposal passed the House of Representatives in eighteen sixty-nine. LJSpeech-1.1/wavs/LJ018-0315.wav|to whom it was said one hundred pounds apiece had been given down as the price of their infidelity. 
LJSpeech-1.1/wavs/LJ029-0037.wav|Advance preparations for President Kennedy's visit to Dallas were primarily the responsibility of two Secret Service agents: LJSpeech-1.1/wavs/LJ049-0218.wav|between the Secret Service and the President and his family is contemplated. LJSpeech-1.1/wavs/LJ003-0155.wav|Tailoring and shoemaking was permitted, but it was deemed unsafe to allow a carpenter or blacksmith to have his tools. LJSpeech-1.1/wavs/LJ013-0113.wav|Robberies as daring in conception as they were boldly executed were common enough. LJSpeech-1.1/wavs/LJ045-0047.wav|and I told him that LJSpeech-1.1/wavs/LJ006-0065.wav|were associated together, "of every variety of age, habit, and delinquency, without employment, oversight, or control." LJSpeech-1.1/wavs/LJ003-0316.wav|It should be peremptorily forbidden to the keeper or any officer to make a pecuniary profit out of the supplies of food, fuel, or other necessaries. LJSpeech-1.1/wavs/LJ021-0004.wav|Tonight I continue that report, though, because of the shortness of time, I must defer a number of subjects to a later date. LJSpeech-1.1/wavs/LJ031-0022.wav|Charles R. Baxter, Robert N. McClelland, Ronald C. Jones; the chief neurologist, Dr. William Kemp Clark; LJSpeech-1.1/wavs/LJ007-0030.wav|consisted of two dozen rooms and fifteen cells. In these various chambers, until just before the inspectors made their report, LJSpeech-1.1/wavs/LJ021-0137.wav|Step by step we have created all the government agencies necessary to insure, as a general rule, industrial peace, LJSpeech-1.1/wavs/LJ033-0081.wav|she looked out the breakfast-room window and saw Oswald cross the street and walk toward the driveway where her brother parked his car near the carport. LJSpeech-1.1/wavs/LJ003-0218.wav|The chapel was filled with a curious but callous congregation, who came to stare at the miserable people thus publicly exposed. LJSpeech-1.1/wavs/LJ028-0317.wav|Introduced into their assembly, he began to bewail his misfortunes, telling them that LJSpeech-1.1/wavs/LJ047-0014.wav|the Office of Naval Intelligence, the FBI and the CIA. The information known to the FBI is summarized below. LJSpeech-1.1/wavs/LJ002-0067.wav|but really kept for the few who had funds sufficient to gain them admission to these more comfortable quarters. LJSpeech-1.1/wavs/LJ003-0101.wav|must have had a tendency to turn them into the world hardened and accomplished in the ways of vice and crime. End quote. LJSpeech-1.1/wavs/LJ036-0048.wav|She boarded the Marsalis bus at St. Paul and Elm Streets to return home. She testified further, quote, LJSpeech-1.1/wavs/LJ022-0129.wav|in making this the most efficient and the cleanest example of public enterprise the world has ever seen. LJSpeech-1.1/wavs/LJ038-0121.wav|or to answer any questions concerning the card. LJSpeech-1.1/wavs/LJ031-0095.wav|Before this was accomplished the President's cardiac activity had ceased and closed cardiac massage was instituted, which made it impossible to inspect his back. LJSpeech-1.1/wavs/LJ007-0131.wav|Enough has probably been extracted from this most damnatory report to give a complete picture of the disgraceful state in which Newgate still remained in eighteen thirty-five. LJSpeech-1.1/wavs/LJ001-0067.wav|In the Low Countries and Cologne, which were very fertile of printed books, Gothic was the favorite. LJSpeech-1.1/wavs/LJ011-0061.wav|Let this monster give his name; I am ready to fight him. I am still determined to put myself in the place of Mr. Fauntleroy. 
LJSpeech-1.1/wavs/LJ019-0381.wav|in another there was half-heartedness, even apathy and an almost complete contempt for the provisions of the act. LJSpeech-1.1/wavs/LJ012-0170.wav|According to his statement, when sentenced to death, he had been driven to horse-stealing by the execration which had pursued him after the murder. LJSpeech-1.1/wavs/LJ005-0090.wav|the first by daily services, the latter by the appointment of schoolmasters and instruction in reading and writing. LJSpeech-1.1/wavs/LJ049-0127.wav|agencies other than the Secret Service have become involved in phases of the overall problem of protecting our national leaders. LJSpeech-1.1/wavs/LJ004-0100.wav|An infirmary, consisting of two distinct rooms, one for males and one for females, should be provided for the separate accommodation of the sick. LJSpeech-1.1/wavs/LJ003-0148.wav|and spent in providing coals, candles, plates, knives, and forks; while all the occupants of this part of the prison LJSpeech-1.1/wavs/LJ005-0073.wav|To its efforts, and their effect upon Parliament and the public mind, we must attribute the new Jail Acts of four George the fourth LJSpeech-1.1/wavs/LJ003-0166.wav|association at one time forbidden by custom, but which greed and rapacity long made the rule. LJSpeech-1.1/wavs/LJ028-0076.wav|However, several decades ago, an Oriental appeared at the Berlin Museum, LJSpeech-1.1/wavs/LJ012-0253.wav|A further discovery was made in an osier bed near Cold Harbor Lane, Camberwell, LJSpeech-1.1/wavs/LJ024-0053.wav|Fundamentally, if in the future, America cannot trust the Congress it elects to refrain from abuse of our Constitutional usages LJSpeech-1.1/wavs/LJ032-0069.wav|The person having access to the box then takes the notice to the window and is given the package. LJSpeech-1.1/wavs/LJ037-0082.wav|On the evening of November twenty-two, LJSpeech-1.1/wavs/LJ040-0085.wav|John Pic, however, did not think her position was worse than that of many other people. LJSpeech-1.1/wavs/LJ028-0099.wav|the first-born son of Nabopolassar, King of Babylon, am I. LJSpeech-1.1/wavs/LJ004-0170.wav|The only ventilation of this pit, this "dark, cheerless, damp, unwholesome cavern -- a dungeon in its worst sense" LJSpeech-1.1/wavs/LJ022-0110.wav|The key men for the major responsibilities of this great task already have been selected. LJSpeech-1.1/wavs/LJ024-0116.wav|When the time comes for action, LJSpeech-1.1/wavs/LJ040-0161.wav|Dr. Hartogs recommended that Oswald be placed on probation on condition that he seek help and guidance through a child guidance clinic. LJSpeech-1.1/wavs/LJ032-0266.wav|Paul M. Stombaugh, of the FBI Laboratory, LJSpeech-1.1/wavs/LJ006-0086.wav|his place is assigned among the most depraved, the most experienced, and the most incorrigible offenders in the middle yard. LJSpeech-1.1/wavs/LJ038-0228.wav|and into downtown Dallas through the Triple Underpass. LJSpeech-1.1/wavs/LJ028-0319.wav|"And now," he went on to say, "my coming to you, Babylonians, LJSpeech-1.1/wavs/LJ023-0054.wav|I hope that you have re-read the Constitution of the United States in these past few weeks. LJSpeech-1.1/wavs/LJ028-0108.wav|Fortunately in several of his long inscriptions, recently discovered in the Babylonian mounds, Nebuchadnezzar speaks of the building of the walls. LJSpeech-1.1/wavs/LJ042-0134.wav|The psychological effects of that change must have been highly unsettling. 
It should be remembered LJSpeech-1.1/wavs/LJ032-0083.wav|Experts on questioned documents from the Treasury Department and the FBI testified that the Hidell cards were counterfeit photographic reproductions LJSpeech-1.1/wavs/LJ036-0216.wav|Tippit got out and started to walk around the front of the car LJSpeech-1.1/wavs/LJ002-0281.wav|The demands for fees were excessive in Giltspur Street. LJSpeech-1.1/wavs/LJ034-0169.wav|the same corner where Brennan was sitting on a concrete wall. LJSpeech-1.1/wavs/LJ009-0004.wav|would be astonished to observe the peculiar tenderness, I was going to add respect, LJSpeech-1.1/wavs/LJ004-0094.wav|This act set forth that "whereas the malignant fever commonly called the jail distemper LJSpeech-1.1/wavs/LJ034-0122.wav|As will be discussed fully below, the Commission has concluded that this suspect was Lee Harvey Oswald. LJSpeech-1.1/wavs/LJ033-0179.wav|since the original bag had been discolored during various laboratory examinations and could not be used for valid identification by witnesses. LJSpeech-1.1/wavs/LJ022-0094.wav|in whose field the project falls, and also to notify another agency which I am creating -- a Progress Division. LJSpeech-1.1/wavs/LJ003-0334.wav|It made this too the excuse for begging the most important issue of the whole question. LJSpeech-1.1/wavs/LJ004-0034.wav|Moreover, the laws applied more particularly to county jurisdictions. LJSpeech-1.1/wavs/LJ048-0254.wav|advised, in the course of the Secret Service investigation of these events, that each agent reported for duty on time, LJSpeech-1.1/wavs/LJ025-0038.wav|carbon, hydrogen and oxygen. LJSpeech-1.1/wavs/LJ036-0217.wav|As Tippit reached the left front wheel the man pulled out a revolver and fired several shots. LJSpeech-1.1/wavs/LJ043-0100.wav|It is not possible to tell whether Oswald did this to provide an excuse for his eventual discharge, LJSpeech-1.1/wavs/LJ005-0222.wav|and as the window-frames would not shut tight, the prisoners complained much of the cold, especially at night. LJSpeech-1.1/wavs/LJ032-0040.wav|were written the words "A. Hidell, P.O. Box two nine one five Dallas, Texas." LJSpeech-1.1/wavs/LJ015-0011.wav|Maltby, who had bolted, was pursued and arrested, to end his life miserably by committing suicide in a Newgate cell. LJSpeech-1.1/wavs/LJ032-0153.wav|A palmprint could not be placed on this portion of the rifle, when assembled, because the wooden foregrip covers the barrel at this point. LJSpeech-1.1/wavs/LJ029-0092.wav|On November eight, when Lawson was briefed on the itinerary for the trip to Dallas, LJSpeech-1.1/wavs/LJ004-0132.wav|the old evils of indiscriminate association still continued unchecked. LJSpeech-1.1/wavs/LJ039-0067.wav|which is ordinarily required when a marksman must raise his rifle as a target moves farther away. LJSpeech-1.1/wavs/LJ044-0235.wav|If there was no conspiracy which would help him escape, the possibility of which has been considered in chapter six, LJSpeech-1.1/wavs/LJ028-0144.wav|fifty royal cubits in width, and two hundred in height. LJSpeech-1.1/wavs/LJ029-0102.wav|After the selection of the Trade Mart as the luncheon site, LJSpeech-1.1/wavs/LJ009-0116.wav|On the following day the capital convicts, whose companions have been hanged, are required to return thanks for their narrow escape. 
LJSpeech-1.1/wavs/LJ040-0228.wav|Despite his withdrawal, he gives the impression that he is not so difficult to reach as he appears and patient, prolonged effort LJSpeech-1.1/wavs/LJ022-0020.wav|cause of clearer thinking and a better understanding, are considering the whole rather than a mere part relating to one section or to one crop, LJSpeech-1.1/wavs/LJ047-0104.wav|reluctant and actually as far as I was concerned, was completely evasive on them. End quote. LJSpeech-1.1/wavs/LJ031-0127.wav|a protective circle of Secret Service agents surrounded Vice President and Mrs. Johnson LJSpeech-1.1/wavs/LJ043-0021.wav|While the exact sequence of events is not clear because of conflicting testimony, LJSpeech-1.1/wavs/LJ007-0005.wav|The inspectors paid tribute to the excellence of the motives of these philanthropic ladies, and recognized the good they did. LJSpeech-1.1/wavs/LJ018-0307.wav|Through Noyes the rest of the conspirators were eventually apprehended. Very little if any of the ill-gotten proceeds, however, was ever recovered. LJSpeech-1.1/wavs/LJ029-0191.wav|He asserted that Dallas had shed its reputation of the twenties as the, quote, Southwest hate capital of Dixie, end quote LJSpeech-1.1/wavs/LJ009-0088.wav|ignominy, sorrow, sufferings, wretchedness, pangs, LJSpeech-1.1/wavs/LJ021-0192.wav|We are not frightened by reactionary lawyers or political editors. LJSpeech-1.1/wavs/LJ038-0179.wav|(three) firearm identification of the bullet found in Walker's home, and (four) LJSpeech-1.1/wavs/LJ028-0518.wav|It is not strange, then, that they were included among the Seven Wonders of the World, LJSpeech-1.1/wavs/LJ026-0026.wav|As in the liquefaction of gases, there is a "critical point" at which the substance under experiment is neither gaseous nor liquid. LJSpeech-1.1/wavs/LJ031-0090.wav|A thorough inspection would have involved washing and cleansing the back, and this is not practical in treating an acutely injured patient. LJSpeech-1.1/wavs/LJ016-0110.wav|The third, Bell, remained longest at large. He too was run into at a lodging in the Kingsland Road. LJSpeech-1.1/wavs/LJ032-0019.wav|Shortly after the Mannlicher-Carcano rifle was found on the sixth floor of the Texas School Book Depository Building, agents of the FBI LJSpeech-1.1/wavs/LJ044-0146.wav|On June twenty-four, nineteen sixty-three, he applied for a new passport LJSpeech-1.1/wavs/LJ048-0003.wav|Hosty's interpretation of the prevailing FBI instructions on referrals to the Secret Service was defended before the Commission by his superiors. LJSpeech-1.1/wavs/LJ013-0194.wav|but on the second day the discovery of fresh evidence, more particularly the recovery of some of Lord William's stolen plate, LJSpeech-1.1/wavs/LJ038-0224.wav|Another statement which limits the time when it could have been written is the reference, quote, you and the baby, end quote, LJSpeech-1.1/wavs/LJ014-0147.wav|shaking her clenched and manacled hands in the officers' faces. LJSpeech-1.1/wavs/LJ019-0168.wav|Renewed recommendations to provide employment resulted in the provision of a certain amount of oakum for picking, LJSpeech-1.1/wavs/LJ029-0175.wav|Both Dallas papers cited White House sources on September twenty-six as confirming the President's intention to visit Texas on November twenty-one and twenty-two, LJSpeech-1.1/wavs/LJ033-0078.wav|Neither she nor Mrs. Paine saw him leave the house. About half-a-block away from the Paine house was the residence of Mrs. 
Linnie Mae Randle, LJSpeech-1.1/wavs/LJ040-0235.wav|When Lee became a disciplinary problem upon his return to school in the fall of nineteen fifty-three, LJSpeech-1.1/wavs/LJ003-0322.wav|except for the use of the debtors, or as medical comforts for the infirmary. LJSpeech-1.1/wavs/LJ018-0359.wav|her object being first to dispose of the wife of a man for whom she had conceived a guilty passion, LJSpeech-1.1/wavs/LJ030-0128.wav|From Main Street the motorcade turned right and went north on Houston Street, passing tall buildings on the right, LJSpeech-1.1/wavs/LJ033-0204.wav|So if I found all of these then I would have been able to say these fibers probably had come from this blanket. But since I found so few, LJSpeech-1.1/wavs/LJ013-0042.wav|the foundations of which had been laid by buying old ships on purpose to cast them away. LJSpeech-1.1/wavs/LJ041-0174.wav|and had not intended any criticism of Oswald's political views which is the way in which, Thornley thought, Oswald took his remarks. LJSpeech-1.1/wavs/LJ030-0245.wav|I was pushed down by Agent Youngblood. LJSpeech-1.1/wavs/LJ031-0103.wav|While Dr. Carrico went on to attend the President, Dr. Dulany stayed with the Governor and was soon joined by several other doctors. LJSpeech-1.1/wavs/LJ048-0152.wav|At some overpasses all persons were excluded LJSpeech-1.1/wavs/LJ018-0232.wav|He himself prepared it on a blank form which he had brought with him on purpose. LJSpeech-1.1/wavs/LJ050-0200.wav|The Secret Service should utilize the personnel of other Federal law enforcement offices LJSpeech-1.1/wavs/LJ012-0167.wav|But Probert, who turned king's evidence, and materially assisted conviction, LJSpeech-1.1/wavs/LJ006-0225.wav|If any man presumed to turn in too early LJSpeech-1.1/wavs/LJ014-0127.wav|She was smartly dressed in a plaid shawl, a white lace cap; LJSpeech-1.1/wavs/LJ033-0021.wav|after the birth of their second child. LJSpeech-1.1/wavs/LJ036-0080.wav|toward a light-colored Rambler station wagon, which was moving slowly along Elm toward the underpass: LJSpeech-1.1/wavs/LJ008-0083.wav|Two cart-loads of faggots were piled about her, and after she had hung for half-an-hour the fire was kindled. LJSpeech-1.1/wavs/LJ010-0282.wav|Pate was said to be an eccentric person, given to strange acts and antics, such as mixing whiskey and camphor with his morning bath-water, LJSpeech-1.1/wavs/LJ013-0088.wav|the cashier gave them eight Bank of England notes for one thousand pounds each, saying that they could get so much specie nowhere else. LJSpeech-1.1/wavs/LJ028-0279.wav|after which he commanded his servants to tell no one what had come to pass, while he himself pondered the matter. LJSpeech-1.1/wavs/LJ002-0057.wav|These wards were all fitted with barrack-beds, but no bedding was supplied. LJSpeech-1.1/wavs/LJ032-0253.wav|From September twenty-four, nineteen sixty-three, when Marina Oswald arrived in Irving from New Orleans, until the morning of the assassination, LJSpeech-1.1/wavs/LJ043-0135.wav|Oswald shot at Maj. Gen. Edwin A. Walker (Resigned, U.S. Army), LJSpeech-1.1/wavs/LJ025-0115.wav|and from that day to this the rapid improvement of methods of investigation and the energy of a host of accurate observers LJSpeech-1.1/wavs/LJ050-0166.wav|The Commission was struck by the apparent lack of effort, on an interagency basis, LJSpeech-1.1/wavs/LJ038-0026.wav|Other policemen entered the front door and searched the balcony. 
LJSpeech-1.1/wavs/LJ028-0470.wav|Time has dealt even less kindly with it, for it may be traced only for the distance of about a mile along its eastern side. LJSpeech-1.1/wavs/LJ018-0253.wav|For these crimes William Roupell was tried at the Central Criminal Court on the twenty-fourth September, eighteen sixty-two. LJSpeech-1.1/wavs/LJ019-0147.wav|this occurred in summer at eight, but in the winter months it took place at dusk, and was often as early as four or five. LJSpeech-1.1/wavs/LJ045-0148.wav|After all, when will all your foolishness come to an end? All of these comedies. First one thing and then another. And now this fictitious name, end quote. LJSpeech-1.1/wavs/LJ043-0031.wav|I am surprised that he didn't do something worse, end quote. LJSpeech-1.1/wavs/LJ033-0039.wav|and one which provided an excuse for the carrying of a bulky package the following morning. LJSpeech-1.1/wavs/LJ010-0006.wav|Certain crimes, those against the person especially, diminished gradually. They became less easy or remunerative. LJSpeech-1.1/wavs/LJ049-0005.wav|Rigorous security precautions had been arranged at Love Field with the local law enforcement authorities by Agents Sorrels and Lawson. LJSpeech-1.1/wavs/LJ004-0142.wav|where a lad lay ill with fever, three other prisoners, at first perfectly healthy, were lodged. Of course they were seized with the fever; LJSpeech-1.1/wavs/LJ042-0038.wav|and religion and education are used as a tool to suppress what would otherwise be a population questioning their government's unfair LJSpeech-1.1/wavs/LJ046-0079.wav|The rights of private individuals must not be infringed. LJSpeech-1.1/wavs/LJ026-0123.wav|which could be derived by the ordinary chemical evolution of protoplasm, proteid, sugar, starch or fats. LJSpeech-1.1/wavs/LJ037-0255.wav|testified that Commission Exhibit Number one sixty-two was the jacket worn by the man they saw on November twenty-two. LJSpeech-1.1/wavs/LJ028-0345.wav|He then chose out near three thousand of the leading citizens and caused them to be crucified, while he allowed the remainder still to inhabit the city. LJSpeech-1.1/wavs/LJ045-0076.wav|The letter fell into Oswald's hands when it was returned to his post office box LJSpeech-1.1/wavs/LJ027-0103.wav|Thus, for instance, the unborn whale has rudimentary teeth, LJSpeech-1.1/wavs/LJ011-0076.wav|His offense was uttering forged notes, and there was strong suspicion that he had long subsisted entirely by this fraud. LJSpeech-1.1/wavs/LJ047-0223.wav|I don't recall the exact date. It was about a week prior. End quote. LJSpeech-1.1/wavs/LJ016-0369.wav|upon them devolved the painful duty of cutting down the body and preparing for the inquest. LJSpeech-1.1/wavs/LJ050-0189.wav|that written instructions might come into the hands of local newspapers, to the prejudice of the precautions described. LJSpeech-1.1/wavs/LJ019-0095.wav|which was yet under full control, and might be made to work corn-mills or prove otherwise productive; LJSpeech-1.1/wavs/LJ029-0205.wav|for President Kennedy, stating that "in many respects Dallas County has isolated itself from the main stream of life in the world in this decade. LJSpeech-1.1/wavs/LJ047-0045.wav|and promised to advise the FBI if he heard from them. LJSpeech-1.1/wavs/LJ036-0069.wav|Instead of waiting there, Oswald apparently went as far away as he could and boarded the first Oak Cliff bus which came along LJSpeech-1.1/wavs/LJ014-0180.wav|secure the stock of watches and jewelry, then lock up the place and take on the keys to Mr. 
Berry's private house in Pimlico. LJSpeech-1.1/wavs/LJ021-0060.wav|Minimum wages have been established and other wages adjusted toward a rising standard of living. LJSpeech-1.1/wavs/LJ002-0128.wav|He also makes the curious calculation that the costs of these actions if undefended LJSpeech-1.1/wavs/LJ028-0437.wav|Here, it has been suggested, were the famous hanging gardens which some ancient authors included among the Seven Wonders of the World. LJSpeech-1.1/wavs/LJ028-0234.wav|Cyrus was now reduced to great perplexity, as time went on and he made no progress against the place. LJSpeech-1.1/wavs/LJ001-0050.wav|and though the famous family of Aldus restored its technical excellence, rejecting battered letters, LJSpeech-1.1/wavs/LJ006-0154.wav|Nothing was more prominently brought out by the inspectors than the inefficiency of the governor at that time, Mr. Cope. LJSpeech-1.1/wavs/LJ022-0148.wav|to enforce minimum wages, to prevent excessive hours, LJSpeech-1.1/wavs/LJ035-0070.wav|Truly stood in front of the building. LJSpeech-1.1/wavs/LJ028-0250.wav|Such, then, were the circumstances of the first taking of Babylon. LJSpeech-1.1/wavs/LJ043-0001.wav|Report of the President's Commission on the Assassination of President Kennedy. LJSpeech-1.1/wavs/LJ004-0171.wav|was by a kind of chimney, which the prisoners kept hermetically sealed, and which had never been opened in the memory of the turnkey. LJSpeech-1.1/wavs/LJ025-0009.wav|for in the past fifty years it has been made evident that in general principles all living things are fundamentally similar. LJSpeech-1.1/wavs/LJ010-0066.wav|which under Thistlewood as dictator was to rule the nation, by first handing over its capital to fire and pillage. LJSpeech-1.1/wavs/LJ022-0139.wav|with which we have been concerned for two years. LJSpeech-1.1/wavs/LJ014-0056.wav|while in Ireland a wife dashed out her husband's brains with a hammer. LJSpeech-1.1/wavs/LJ037-0079.wav|They ran to the door in time to see a man with a revolver cut across their lawn and disappear around a corner of the house onto Patton. LJSpeech-1.1/wavs/LJ032-0044.wav|shows an imprint made by the cash register which recorded the receipt of twenty-one dollars, forty-five cents on March thirteen, nineteen sixty-three. LJSpeech-1.1/wavs/LJ036-0116.wav|Lee Oswald was Number three; LJSpeech-1.1/wavs/LJ028-0476.wav|The entire width of this inner defense was about fifty-five feet; its height is uncertain. LJSpeech-1.1/wavs/LJ004-0137.wav|and that it was accomplished by "sleeping edgewise." LJSpeech-1.1/wavs/LJ003-0113.wav|A prisoner, generally the oldest and most dexterous thief, LJSpeech-1.1/wavs/LJ037-0128.wav|were on the lot at the time, and they saw a white male with a revolver in his hands running south on Patton. LJSpeech-1.1/wavs/LJ031-0137.wav|At approximately one:twenty p.m., Vice President Johnson was notified by O'Donnell that President Kennedy was dead. LJSpeech-1.1/wavs/LJ005-0008.wav|they were followed by a crowd of reckless boys, who jeered at and insulted them. LJSpeech-1.1/wavs/LJ001-0083.wav|The seventeenth century founts were bad rather negatively than positively. LJSpeech-1.1/wavs/LJ006-0224.wav|New arrivals, especially the innocent and still guileless debutant, were tormented with rude horse-play, and assailed by the most insulting "chaff." 
LJSpeech-1.1/wavs/LJ015-0298.wav|But while Hardwicke was in communication with Saward, the bank was in communication with London LJSpeech-1.1/wavs/LJ017-0212.wav|Her captain was John Smith; LJSpeech-1.1/wavs/LJ049-0096.wav|There have been a number of efforts to make assassination a Federal crime, particularly after the assassination of President McKinley LJSpeech-1.1/wavs/LJ015-0069.wav|but the firm he served got him a situation as clerk in the office of the Great Northern Railway, LJSpeech-1.1/wavs/LJ013-0243.wav|Good now offered to go to Wandsworth and satisfy the pawnbroker. LJSpeech-1.1/wavs/LJ015-0235.wav|and last, but not least, Agar frequently traveled up and down the line to test the false keys he had manufactured with Pierce's assistance. LJSpeech-1.1/wavs/LJ016-0096.wav|They were penal servitude men, their names Bell, Brown, and Barry, and they were awaiting transfer to Leicester, LJSpeech-1.1/wavs/LJ029-0110.wav|The route impressed the agents as a natural and desirable one. LJSpeech-1.1/wavs/LJ011-0098.wav|He soon, however, became deeply involved in Stock Exchange speculations, LJSpeech-1.1/wavs/LJ001-0016.wav|The Middle Ages brought calligraphy to perfection, and it was natural therefore LJSpeech-1.1/wavs/LJ005-0130.wav|There were tread-wheels at most of the prisons, and regular employment thereon or at some other kind of hard labor. LJSpeech-1.1/wavs/LJ018-0091.wav|Wagner and Bateman, who had already been convicted of systematic forgery, and sentenced to transportation, but they had been released on ticket-of-leave LJSpeech-1.1/wavs/LJ019-0053.wav|and our modern practice has prudently tried to steer between the two extremes, accepting as the best system a judicious combination of both. LJSpeech-1.1/wavs/LJ023-0071.wav|For nearly twenty years there was no conflict between the Congress and the Court. LJSpeech-1.1/wavs/LJ019-0390.wav|Since then a strong central authority has labored steadfastly to compass concentration, LJSpeech-1.1/wavs/LJ047-0163.wav|According to Hosty, Mrs. Paine indicated that she thought she could find out where Oswald was living and would let him know. LJSpeech-1.1/wavs/LJ016-0035.wav|the wall beneath and above it was "rusticated," in other words, the granite surface had become roughened, and offered a sort of foothold. LJSpeech-1.1/wavs/LJ015-0211.wav|Each safe had three sets of double keys, all held by confidential servants of the company. LJSpeech-1.1/wavs/LJ043-0148.wav|She testified that she was agitated because she had found the note in Oswald's room, LJSpeech-1.1/wavs/LJ028-0207.wav|On the fourteenth day Sippar was taken without a battle. LJSpeech-1.1/wavs/LJ007-0062.wav|Latterly his ministrations to the condemned had been restricted to a visit on Sunday afternoons, and occasionally about once a fortnight on a week-day. LJSpeech-1.1/wavs/LJ049-0186.wav|the Commission received a number of proposals designed to improve current arrangements for protecting the President. LJSpeech-1.1/wavs/LJ011-0196.wav|Mr. Turner at once set off for London, where he sought the assistance of the police, LJSpeech-1.1/wavs/LJ003-0227.wav|So unjust and unequal was the system, that the allowance to convicted criminals was better than that of the innocent debtor, LJSpeech-1.1/wavs/LJ047-0243.wav|According to Revill, Hosty indicated that he was going to tell this to Lieutenant Wells of the homicide and robbery bureau. 
LJSpeech-1.1/wavs/LJ007-0116.wav|A few others, who could not afford a payment of more than half a guinea, were permitted to monopolize a part of the prison infirmary, LJSpeech-1.1/wavs/LJ018-0243.wav|the hardship to the holders of these lands being plain, should the allegations of invalidity be made good. LJSpeech-1.1/wavs/LJ007-0080.wav|These powers were not invariably put in force, and there were in consequence many unhappy lunatics in Newgate and other jails, LJSpeech-1.1/wavs/LJ038-0037.wav|Oswald then struck McDonald between the eyes with his left fist; with his right hand he drew a gun from his waist. LJSpeech-1.1/wavs/LJ043-0175.wav|The items which Oswald left at home when he made his attack on Walker suggest a strong concern for his place in history. LJSpeech-1.1/wavs/LJ040-0114.wav|Relations soon became strained, however, so in late September Lee and his mother moved to their own apartment in the Bronx. LJSpeech-1.1/wavs/LJ010-0241.wav|but she declared she would not remain a prisoner in her own palace, and next day drove out as usual in an open barouche. LJSpeech-1.1/wavs/LJ037-0202.wav|identified records of Seaport Traders, Incorporated, which showed that a, quote, point three eight LJSpeech-1.1/wavs/LJ019-0230.wav|In eighteen sixty-one a similar work was undertaken to provide separate cellular accommodation for the female inmates of Newgate, LJSpeech-1.1/wavs/LJ010-0134.wav|He roared out snatches of a song about Death or Liberty, and just before he was turned off, LJSpeech-1.1/wavs/LJ014-0005.wav|but too late to give substantial aid. LJSpeech-1.1/wavs/LJ005-0186.wav|They neither built new jails nor contracted with the counties, as had been expected, for the transfer of their prisoners. LJSpeech-1.1/wavs/LJ017-0003.wav|Nevertheless, in order to give completeness to the picture LJSpeech-1.1/wavs/LJ020-0014.wav|beating the batter smooth as you go on until all of the liquid and flour has gone in. LJSpeech-1.1/wavs/LJ014-0245.wav|It was the custom in this office to make the banker's passbook the basis of the entries in the company's ledgers. LJSpeech-1.1/wavs/LJ008-0180.wav|Among the dead was a sailor lad whom no one knew; LJSpeech-1.1/wavs/LJ019-0022.wav|On the other hand, it must be admitted LJSpeech-1.1/wavs/LJ027-0034.wav|Hence, as Jordan has said, "the inside of an animal tells the real history of its ancestry; the outside tells us only where its ancestors have been." LJSpeech-1.1/wavs/LJ040-0124.wav|This continued despite the efforts of the school authorities and, to a lesser extent, of his mother to have him return to school. LJSpeech-1.1/wavs/LJ006-0192.wav|There was no school for adults; only the boys were taught anything, and their instructor, with his assistant, were convicted prisoners. LJSpeech-1.1/wavs/LJ014-0229.wav|Mobbs systematically ill-used his wife for a long space of time, and at last cut her throat. LJSpeech-1.1/wavs/LJ031-0162.wav|other terminal buildings and the neighboring parking lots, of all people. LJSpeech-1.1/wavs/LJ032-0094.wav|listing Marina Oswald and A. J. Hidell LJSpeech-1.1/wavs/LJ022-0155.wav|Power production in this country is virtually back to the nineteen twenty-nine peak. LJSpeech-1.1/wavs/LJ009-0291.wav|He was always known as a mild-mannered man of simple tastes, much given to angling in the New River, and a devoted rabbit fancier. 
LJSpeech-1.1/wavs/LJ006-0130.wav|had a key of both the master's side and middle side yards, was the only person present at the distribution of beer, and was trusted to examine, LJSpeech-1.1/wavs/LJ040-0131.wav|Marguerite Oswald visited her son at Youth House, where she recalled that she waited in line, quote, LJSpeech-1.1/wavs/LJ009-0113.wav|whistles merrily, and points upwards with madness in his look. LJSpeech-1.1/wavs/LJ037-0078.wav|when they heard the sound of gunfire and the screams of Helen Markham. LJSpeech-1.1/wavs/LJ006-0093.wav|So closely did they lie together, that the inspectors at their night visits found it difficult in stepping across the room to avoid treading on them. LJSpeech-1.1/wavs/LJ008-0061.wav|The entrance upon this floor or leaf is from the middle window over the gate of the prison; LJSpeech-1.1/wavs/LJ001-0156.wav|The paper on which the printing is to be done is a necessary part of our subject: LJSpeech-1.1/wavs/LJ029-0195.wav|when Governor Connally confirmed on November eight that the President would come to Texas on November twenty-one and twenty-two, LJSpeech-1.1/wavs/LJ040-0080.wav|That situation, however, was short-lived, LJSpeech-1.1/wavs/LJ010-0165.wav|but he came as a lad to London, and took service as a pot-boy to a publican. LJSpeech-1.1/wavs/LJ018-0334.wav|Webster, it may be mentioned here, was one of the worst prisoners ever remembered in Newgate LJSpeech-1.1/wavs/LJ046-0227.wav|According to Special Agent in Charge Bouck, LJSpeech-1.1/wavs/LJ019-0089.wav|sometimes it embraced the tread-wheel or the newly-invented instruments known as cranks, which ground air. LJSpeech-1.1/wavs/LJ034-0005.wav|He worked principally on the first and sixth floors of the building, gathering books listed on orders and delivering them to the shipping room on the first floor. LJSpeech-1.1/wavs/LJ043-0089.wav|to a commercial advertising photography firm in Dallas, where he was employed as a trainee starting October twelve, nineteen sixty-two. LJSpeech-1.1/wavs/LJ016-0247.wav|while round about were shoe-strings, boot-laces, and lasts. Marwood, strange to say, followed the same trade as Calcraft. LJSpeech-1.1/wavs/LJ045-0105.wav|She testified that she told him, quote, LJSpeech-1.1/wavs/LJ020-0027.wav|Half the quantity of sponge given in preceding receipt. LJSpeech-1.1/wavs/LJ028-0211.wav|He appointed Gobrias governor of Babylon. LJSpeech-1.1/wavs/LJ019-0314.wav|The separation of prisoners in cells duly certified by the inspectors was insisted upon, LJSpeech-1.1/wavs/LJ005-0001.wav|The Chronicles of Newgate, Volume two. By Arthur Griffiths. Section eight: The beginnings of prison reform. LJSpeech-1.1/wavs/LJ009-0178.wav|In eighteen thirty-two the dissection of bodies cut down from the gallows, which had been decreed centuries previously, was abolished; LJSpeech-1.1/wavs/LJ034-0062.wav|Although a person could handle a carton and not leave identifiable prints, LJSpeech-1.1/wavs/LJ027-0088.wav|Extensive comparison, on the contrary, shows them to be the same, although the essential identity is obscured by adaptive modifications. LJSpeech-1.1/wavs/LJ032-0240.wav|By Sunday, March thirty-one, nineteen sixty-three, LJSpeech-1.1/wavs/LJ036-0024.wav|on a trip which passed a check point at St. Paul and Elm Streets at twelve:thirty-six p.m., November twenty-two, nineteen sixty-three. LJSpeech-1.1/wavs/LJ039-0201.wav|fired two series of three shots at twenty-five yards in four point six and four point eight seconds. 
LJSpeech-1.1/wavs/LJ006-0113.wav|The authority of these wardsmen so improperly exalted, and so entirely unchecked, degenerated into a baneful despotism. LJSpeech-1.1/wavs/LJ048-0137.wav|there have been references to the numerous discussions between Secret Service representatives and the Dallas Police Department. LJSpeech-1.1/wavs/LJ007-0014.wav|The admission of a crowd of visitors to assist in these lay services has already been remarked upon; as the inspectors pointed out, LJSpeech-1.1/wavs/LJ007-0057.wav|Turnkeys occasionally visited the press-yard, but its occupants were under little or no control. LJSpeech-1.1/wavs/LJ010-0121.wav|that he was, to use Thistlewood's words, "a contriver, instigator, and entrapper." LJSpeech-1.1/wavs/LJ011-0176.wav|He now pretended that Mr. Turner was also on his way to the border, pursued by sheriffs' officers. LJSpeech-1.1/wavs/LJ036-0189.wav|at the southeast corner of tenth Street and Patton Avenue, moments before the Tippit shooting. LJSpeech-1.1/wavs/LJ006-0068.wav|We have reason to fear that poverty, ragged clothes, and an inability to pay the ward dues, elsewhere exacted for better accommodation, LJSpeech-1.1/wavs/LJ006-0097.wav|Water might not be taken into the ward for washing purposes. LJSpeech-1.1/wavs/LJ048-0085.wav|the Commission believes that the liaison between all Federal agencies responsible for Presidential protection should be improved. LJSpeech-1.1/wavs/LJ039-0160.wav|In tests with the Mannlicher-Carano C twenty-seven sixty-six rifle, over one hundred rounds of this ammunition were fired by the FBI LJSpeech-1.1/wavs/LJ038-0052.wav|testified regarding the arrest of Oswald, as did the various police officers who participated in the fight. LJSpeech-1.1/wavs/LJ010-0063.wav|The massacre of the whole of the Cabinet Ministers at one stroke was to be followed by an attack LJSpeech-1.1/wavs/LJ009-0295.wav|who had been a convicted prisoner at York, but who consented to act as hangman when Calcraft was engaged, and no other functionary could be obtained. LJSpeech-1.1/wavs/LJ011-0250.wav|While thus engaged, Howard thrust the poker into the fire. LJSpeech-1.1/wavs/LJ018-0273.wav|Tarpey was caught through his wife, LJSpeech-1.1/wavs/LJ047-0131.wav|In early September nineteen sixty-three LJSpeech-1.1/wavs/LJ040-0232.wav|Few social agencies even in New York were equipped to provide the kind of intensive treatment that he needed, LJSpeech-1.1/wavs/LJ010-0051.wav|The well-known Cato Street conspiracy, LJSpeech-1.1/wavs/LJ008-0077.wav|where the apparatus for the punishment she was about to experience LJSpeech-1.1/wavs/LJ006-0115.wav|Their original capital had been a few shillings, and for this they purchased the right to tax their fellows to the extent of pounds per week. LJSpeech-1.1/wavs/LJ048-0262.wav|during the hours they are officially employed at their post of duty, or when they may reasonably expect that they may be called upon to perform an official duty. LJSpeech-1.1/wavs/LJ020-0101.wav|From the beginning of your apprenticeship in housewifery, learn how to "dovetail" your duties neatly into one another. LJSpeech-1.1/wavs/LJ045-0207.wav|He could not keep them with him in Dallas, where at least he could see his children whom, several witnesses testified, he seemed to love. LJSpeech-1.1/wavs/LJ021-0009.wav|with a greater certainty of the employment of labor at a reasonable wage and of more business at a fair profit. 
LJSpeech-1.1/wavs/LJ038-0137.wav|the Commission found that Oswald lied when he told Frazier that he was returning to Irving to obtain curtain rods. LJSpeech-1.1/wavs/LJ041-0164.wav|which Thornley read at Oswald's suggestion. LJSpeech-1.1/wavs/LJ001-0006.wav|And it is worth mention in passing that, as an example of fine typography, LJSpeech-1.1/wavs/LJ003-0131.wav|He was an inmate of the same ward with others of the most dreadful sort, quote, LJSpeech-1.1/wavs/LJ003-0208.wav|a number of amateurs were ever ready to give their gratuitous ministrations to the condemned. LJSpeech-1.1/wavs/LJ010-0172.wav|He saw Prince Albert return there from a visit to Woolwich, and then passed on to Constitution Hill, LJSpeech-1.1/wavs/LJ028-0203.wav|Less picturesque than this Hebrew legend is the royal record of Babylon, which fortunately was inscribed upon a clay cylinder from the ruins of the city. LJSpeech-1.1/wavs/LJ007-0146.wav|vaunting his own adventures, or listening to those of others; LJSpeech-1.1/wavs/LJ021-0087.wav|We have the right to expect that this driving power will be given patriotically and whole-heartedly to our nation. LJSpeech-1.1/wavs/LJ025-0077.wav|Their food is provided for them, LJSpeech-1.1/wavs/LJ028-0185.wav|Perhaps Babylon was so strongly fortified that at first he made no attempt to add it to his empire, LJSpeech-1.1/wavs/LJ030-0207.wav|with the follow-up car trailing the President's automobile by approximately five feet. LJSpeech-1.1/wavs/LJ012-0109.wav|But they at once made tracks, and took up their residence under assumed names in a tavern in Bloomsbury. LJSpeech-1.1/wavs/LJ032-0230.wav|that the published pictures were the same as the original except for retouching done by these publications, apparently for the purpose of clarifying the lines of the rifle LJSpeech-1.1/wavs/LJ049-0095.wav|for all offenses within its jurisdiction, as are FBI agents and Federal marshals. LJSpeech-1.1/wavs/LJ024-0142.wav|I seek to make American democracy succeed. LJSpeech-1.1/wavs/LJ050-0177.wav|This PRS agent will also be responsible for establishing an informal local liaison committee LJSpeech-1.1/wavs/LJ011-0006.wav|He went to the bank, and found that no stocks stood in her name. He called at once upon Fauntleroy, his client's bankers, for an explanation, LJSpeech-1.1/wavs/LJ029-0012.wav|He had made only a few brief visits to the State since the nineteen sixty Presidential campaign and in nineteen sixty-two he began to consider a formal visit. LJSpeech-1.1/wavs/LJ026-0022.wav|if chlorophyll is absent, carbon is obtained from sugar or some similar compound, LJSpeech-1.1/wavs/LJ019-0233.wav|and when it was completed, both sides of the prison were brought into harmony with modern ideas. LJSpeech-1.1/wavs/LJ010-0096.wav|Edgeware Road, completing their dispositions for assuming supreme power after the blow had been struck. LJSpeech-1.1/wavs/LJ045-0111.wav|They asked for Lee Oswald who was not called to the telephone because he was known by the other name. LJSpeech-1.1/wavs/LJ005-0298.wav|to the county jails from such prisons as were past improvement, and that the borough funds should be charged for the accommodation. LJSpeech-1.1/wavs/LJ009-0224.wav|At the first-named the exhibition nearly created a tumult, and the body was taken down and buried, LJSpeech-1.1/wavs/LJ014-0179.wav|a working jeweler, shopman to a Mr. Berry of Parliament Street. 
It was Cope's duty to stay in the shop till the last, close the shutters, LJSpeech-1.1/wavs/LJ035-0044.wav|If the man had passed from the vestibule into the lunchroom, Baker could not have seen him. LJSpeech-1.1/wavs/LJ008-0113.wav|and his soul shot out so piercingly through the port-holes of his head, that the first glance of him nearly petrified me LJSpeech-1.1/wavs/LJ050-0087.wav|propensity toward violent action, or some similar characteristic, coupled with some evaluation of the capability of the individual or group LJSpeech-1.1/wavs/LJ047-0135.wav|According to the information received by the Bureau LJSpeech-1.1/wavs/LJ049-0066.wav|For instance, the lead car always is manned by Secret Service agents familiar with the area and with local law enforcement officials; LJSpeech-1.1/wavs/LJ030-0005.wav|by helicopter at ten:forty-five A.M., Eastern Standard Time, on November twenty-one, nineteen sixty-three, for Andrews Air Force Base. LJSpeech-1.1/wavs/LJ027-0158.wav|But according to the opposite view no reason can be assigned why such should be the case. LJSpeech-1.1/wavs/LJ048-0225.wav|they had little opportunity to eat during the day. No food was available at the Press Club. LJSpeech-1.1/wavs/LJ033-0149.wav|the FBI Laboratory developed a latent palmprint and latent fingerprint on the bag. LJSpeech-1.1/wavs/LJ018-0255.wav|The case was easily and rapidly disposed of. LJSpeech-1.1/wavs/LJ014-0276.wav|Watts's crime was discovered by the secretary of the Globe Company, who came suddenly upon the extensive falsification of the passbook. LJSpeech-1.1/wavs/LJ039-0219.wav|Frazier testified that the rifle was accurate, that it had less recoil than the average military rifle LJSpeech-1.1/wavs/LJ036-0213.wav|The man's general description was similar to the one broadcast over the police radio. LJSpeech-1.1/wavs/LJ037-0179.wav|was discarded along with the others as Oswald left the scene. LJSpeech-1.1/wavs/LJ037-0009.wav|One witness felt he was too distant from the gunman to make a positive identification. LJSpeech-1.1/wavs/LJ038-0163.wav|Prior attempt to kill. LJSpeech-1.1/wavs/LJ006-0139.wav|Nobody interfered with them or regulated their conduct. They might get drunk when so disposed, and did so frequently, alone or in company. LJSpeech-1.1/wavs/LJ039-0091.wav|Sergeant Zahm expressed the opinion that the shot which struck President Kennedy in the neck at one hundred seventy-six point nine LJSpeech-1.1/wavs/LJ036-0016.wav|Lee Harvey Oswald left the building approximately three minutes after the assassination. LJSpeech-1.1/wavs/LJ030-0109.wav|The Vice-Presidential car LJSpeech-1.1/wavs/LJ019-0030.wav|Major, afterwards Sir Joshua Jebb, LJSpeech-1.1/wavs/LJ015-0154.wav|When the crash came there were pensioners and other recipients of his bounty who could not believe LJSpeech-1.1/wavs/LJ038-0039.wav|Three other officers, moving toward the scuffle, grabbed Oswald from the front, rear and side. LJSpeech-1.1/wavs/LJ017-0146.wav|He had all the characteristics of the poisoner -- the calm deliberation, LJSpeech-1.1/wavs/LJ036-0171.wav|he would have arrived there about twelve:fifty-nine to one p.m. LJSpeech-1.1/wavs/LJ039-0099.wav|In accordance with standard Marine procedures, Oswald received extensive training in marksmanship. LJSpeech-1.1/wavs/LJ004-0216.wav|The most noticeable of the improvements introduced was a better regulation of dietaries within the prison. LJSpeech-1.1/wavs/LJ045-0136.wav|as Oswald went on to say. 
In Oswald's imagination, quote, LJSpeech-1.1/wavs/LJ004-0135.wav|twenty men slept on eight straw beds, with sixteen rugs amongst them, and a piece of timber for a bolster. LJSpeech-1.1/wavs/LJ045-0173.wav|Question: What did you say to that? Answer: LJSpeech-1.1/wavs/LJ040-0030.wav|When he was in the Soviet Union, he apparently resented the Communist Party members, LJSpeech-1.1/wavs/LJ024-0096.wav|No amendment which any powerful economic interests or the leaders of any powerful political party have had reason to oppose LJSpeech-1.1/wavs/LJ018-0208.wav|were a low lot, the lowest among criminals except, perhaps, the 'smashers,' or those who passed the counterfeit money. LJSpeech-1.1/wavs/LJ030-0215.wav|the car lurched forward, causing him to lose his footing. He ran three or four steps, regained his position and mounted the car. LJSpeech-1.1/wavs/LJ012-0156.wav|His arrest and conviction cast dismay over the whole gang of receivers, and for a time seriously checked the nefarious traffic. LJSpeech-1.1/wavs/LJ019-0028.wav|Mr. Shaw-Lefevre, the Speaker of the House of Commons, Sir Benjamin Brodie, LJSpeech-1.1/wavs/LJ019-0079.wav|The cells inhabited by prisoners were of very varying dimensions; LJSpeech-1.1/wavs/LJ046-0046.wav|In all of these roles the President must go to the people. LJSpeech-1.1/wavs/LJ018-0054.wav|While in the condemned cell he conversed freely with the warders in broken English or through an interpreter. LJSpeech-1.1/wavs/LJ014-0338.wav|These bankers, wishing for more specific information, LJSpeech-1.1/wavs/LJ026-0113.wav|Only proteid foods form new protoplasm LJSpeech-1.1/wavs/LJ015-0310.wav|which had received so perverted and mistaken direction, LJSpeech-1.1/wavs/LJ049-0040.wav|The assassination suggests that it would have been of prime importance LJSpeech-1.1/wavs/LJ022-0052.wav|here as in every other nation, we have come to recognize the possibility and the necessity of certain helpful remedial measures. LJSpeech-1.1/wavs/LJ032-0054.wav|"A. Hidell, P.O. Box two nine one five, Dallas, Texas," on March twenty, nineteen sixty-three. LJSpeech-1.1/wavs/LJ005-0290.wav|Instances rarely occur in which the borough jails admit of any proper classification of the prisoners. LJSpeech-1.1/wavs/LJ028-0314.wav|observing him, hastened down, and setting one of the gates slightly ajar, questioned him who he was, and on what errand he had come. LJSpeech-1.1/wavs/LJ028-0324.wav|his body red with marks of scourging and with blood, had no suspicion but that he spoke the truth, and was really come to be their friend and helper. LJSpeech-1.1/wavs/LJ033-0035.wav|and Marina Oswald testified that Oswald did not say anything about curtain rods on the day before the assassination. LJSpeech-1.1/wavs/LJ050-0082.wav|the interest of the Secret Service goes beyond information on individuals or groups threatening to cause harm or embarrassment to the President. LJSpeech-1.1/wavs/LJ046-0190.wav|it had arrangements to be notified about release from confinement in roughly one thousand cases; LJSpeech-1.1/wavs/LJ015-0096.wav|whether representing real or fictitious shares does not appear; but they were certificates connected in some way with Robson's long practiced frauds LJSpeech-1.1/wavs/LJ019-0146.wav|There was as yet no control over the prisoners after locking-up time; LJSpeech-1.1/wavs/LJ007-0091.wav|The lunatic became the sport of the idle and the depraved. 
His cure was out of the question; LJSpeech-1.1/wavs/LJ033-0087.wav|She thought that its color was similar to that of the bag found on the sixth floor of the School Book Depository after the assassination. LJSpeech-1.1/wavs/LJ050-0086.wav|Under these criteria, whether the case should be referred to the Secret Service depends on the existence of a previous history of mental instability, LJSpeech-1.1/wavs/LJ025-0011.wav|is Huxley's famous essay, "The Border Territory Between the Animal and Vegetable Kingdoms," written in eighteen seventy-six, LJSpeech-1.1/wavs/LJ003-0338.wav|End quote. it would cover some thirty acres, and cost a great deal more than the city, with the example of Whitecross Street prison before it, LJSpeech-1.1/wavs/LJ038-0282.wav|there is enough on it to say that it could have come, and even perhaps a little stronger, to say that it probably came from this, LJSpeech-1.1/wavs/LJ037-0075.wav|However, even in the absence of Mrs. Markham's testimony, there is ample evidence to identify Oswald as the killer of Tippit. LJSpeech-1.1/wavs/LJ003-0033.wav|Enough has been said, probably, to prove that there was room for improvement in the condition and treatment of debtors in the prisons of the city of London. LJSpeech-1.1/wavs/LJ041-0011.wav|Several witnesses testified that Lee Oswald was not aggressive. He was, however, involved in some fights. LJSpeech-1.1/wavs/LJ026-0102.wav|but root pressure due to osmosis, capillary action and evaporation from the leaves are factors. LJSpeech-1.1/wavs/LJ048-0078.wav|In each instance, liaison contacts should be developed to include a close friendly relationship,